Why Google will win the AI race


Happy Sunday and welcome to Investing in AI! We have a new podcast focused on AI in NYC. It highlights the companies, investors, and executives there, and the idea is to look at AI from many angles. Who is building what? How does local government use it? How do sports teams use it? What are some unusual and surprising use cases? We have two episodes up, and it’s been an incredibly fun podcast to record, with lots of things said that you won’t hear in other places. So please check it out.

We are booked up through early November, but if you want to come on, or know someone who should be a guest, let me know.

The AI race feels like a sprint right now, with OpenAI grabbing headlines and Meta open-sourcing everything they can. But if you’re placing bets on who wins the marathon, don’t overlook the company that’s been running this race since before most people knew there was a race to run. Google has structural advantages that become more decisive as AI shifts from clever demos to planetary-scale deployment.

Let’s start with the obvious one that everyone somehow forgets: YouTube. Every minute, 500 hours of video get uploaded to YouTube. That’s not just cat videos and gaming streams – it’s documentation of how the physical world works. Cooking techniques, car repairs, surgical procedures, dance movements, manufacturing processes. When everyone else is scrambling to license robot training data or sending cars around with cameras, Google is sitting on the world’s largest collection of human demonstrations. For training world models that understand physics, causality, and human behavior, this dataset is irreplaceable. And critically, Google already has the rights to use it for training.
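To put that upload rate in perspective, here’s a quick back-of-the-envelope calculation (assuming the commonly cited 500-hours-per-minute figure holds steady) converting it into daily and yearly totals:

```python
# Back-of-the-envelope scale of YouTube uploads,
# assuming a constant rate of 500 hours of video per minute.
HOURS_PER_MINUTE = 500

hours_per_day = HOURS_PER_MINUTE * 60 * 24   # minutes/hour * hours/day
hours_per_year = hours_per_day * 365

# Expressed as continuous viewing time: years of footage added per day.
years_of_footage_per_day = hours_per_day / (24 * 365)

print(f"{hours_per_day:,} hours uploaded per day")
print(f"{hours_per_year:,} hours uploaded per year")
print(f"~{years_of_footage_per_day:.0f} years of footage added every day")
```

That works out to roughly 82 years of continuous footage added every single day — a training corpus that grows faster than any licensing deal could match.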

Then there’s the compute story. While everyone else is begging NVIDIA for H100 allocations and watching Jensen’s leather jacket keynotes, Google has been quietly building TPUs since 2015. They’re now on their sixth generation. This isn’t just about having chips – it’s about having chips designed specifically for the kinds of models Google wants to run, optimized for their specific workloads, deployed in their own datacenters. When you control the full stack from silicon to serving, you can make optimizations that companies dependent on third-party hardware simply cannot match. The capital efficiency matters when you’re talking about trillion-parameter models that cost hundreds of millions to train.

The search integration point deserves more attention than it gets. Everyone assumes Google search is under threat from ChatGPT, but they’re missing the jujitsu move Google is pulling. Gemini isn’t trying to replace search – it’s augmenting it. When you have a trillion searches telling you what people actually want to know, in real-time, across every domain of human knowledge, you have a feedback loop that no amount of venture funding can replicate. Every search query is a training signal. Every user interaction teaches the model what good answers look like. OpenAI has to guess what users want. Google knows.

On the infrastructure side, Google already serves billions of users across Search, Gmail, Maps, Android, and YouTube. They’ve solved the hard problems of global-scale deployment, abuse prevention, and reliability that startups are just beginning to encounter. When Gemini needs to scale to a billion users, the pipes are already there. The trust and safety systems are already there. The monetization engine is already there.

The other underappreciated advantage is Android. As AI moves from cloud to edge, having the operating system on billions of phones becomes decisive. Google can push AI models directly to devices, run experiments at population scale, and create experiences that require tight OS integration. Apple might match this on iOS, but no other AI company has this distribution.

Google has AI ingrained in its culture. Google researchers invented the transformer architecture that powers every large language model. They pioneered BERT, T5, and PaLM. They’ve been doing AI research at scale longer than almost anyone. Yes, they’ve had execution issues, but the technical depth is undeniable. When the game shifts from who can build a chatbot to who can solve robotics or protein folding or weather prediction, that research culture matters.

The bear case on Google usually centers on innovator’s dilemma – they’re too wedded to search revenue to fully embrace AI. But that misreads what’s happening. Google isn’t trying to protect search; they’re using search to bootstrap into AI dominance. The search business generates $200+ billion annually to fund AI development. That’s a war chest that dwarfs any competitor.

In the end, AI competition isn’t about who has the best demo or the most hype. It’s about who can train the biggest models, deploy them most efficiently, and improve them fastest based on real-world feedback. On all three dimensions, Google’s structural advantages compound over time. They might not win every news cycle, but they’re positioned to win the war.

Thanks for reading.
