Is History Repeating Itself with AI?



Remember the late 90s? Back then, the giants of the web were sites like AltaVista and Yahoo. They were making tons of money, but there was a catch: they spent almost all of it on hardware. Rumor has it that for every dollar Yahoo earned, about 95 cents went to NetApp just to keep things running.

Then Google came along. They won because they figured out a smart way (GFS, the Google File System) to use cheap, general-purpose servers instead of expensive storage appliances. That meant they spent only about 45 cents on hardware for every dollar earned. They poured the difference into research and basically took over the whole market.
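Just to make the difference concrete, here is a tiny sketch using the post's rough figures (95 cents vs. 45 cents of hardware cost per dollar of revenue). The revenue number is purely hypothetical; none of these are actual financials.

```python
def margin_after_hardware(revenue: float, hw_cost_per_dollar: float) -> float:
    """Cash left over after paying the hardware bill."""
    return revenue * (1 - hw_cost_per_dollar)

revenue = 1_000_000  # hypothetical yearly revenue, in dollars

# Expensive storage appliances: ~95 cents of every dollar goes to hardware.
yahoo_style = margin_after_hardware(revenue, 0.95)

# Commodity servers + GFS: ~45 cents of every dollar goes to hardware.
google_style = margin_after_hardware(revenue, 0.45)

print(yahoo_style, google_style)
```

With these numbers, the GFS-style setup leaves roughly eleven times more cash to reinvest in R&D, which is the whole point of the story.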

Now, look at the last few years. We’ve had this huge AI explosion led by OpenAI (ChatGPT). Right now, all of this runs on NVIDIA graphics cards (which explains why NVIDIA’s value has gone up “just a little bit” recently 😉).

But Google might be pulling the same old trick again. They run their AI workloads on their own TPUs (Tensor Processing Units) instead of buying NVIDIA hardware. I actually saw an early version of this on my Pixel 6, whose Tensor chip handled on-device image processing way before it was cool.

Here is the point: If Google manages to do the same thing again — running AI while spending way less on hardware than everyone else — they could end up winning everything.

The risk of a “year 2000 style” bubble is pretty real right now. Everyone is pouring in crazy amounts of money and burning mountains of electricity, but it’s not really clear where the profit comes from, since most services are free. It feels exactly like the early 2000s: I was there, I remember!

If you want to dig deeper, I think this article is detailed but still easy to understand: The Chip Made for the AI Inference — Uncover Alpha

And never forget that… software is everything!

The text was written in collaboration with Gemini, and the really nice picture was painted by ChatGPT :-D