In the winter of 1636, a single tulip bulb sold for more than a canal house in Amsterdam. Bidding wars broke out in taverns. Promissory notes for rare bulbs were traded like startup term sheets, with about the same level of scrutiny. One bulb, the Semper Augustus, reportedly fetched 5,500 guilders, close to two hundred thousand US dollars in today's money.
Then, in February 1637, the market stopped. Buyers disappeared. Prices collapsed. Fortunes vanished. What was left were crates of unsold flowers and a simple question that echoed across the canals: what were we thinking?
Even centuries later, people still argue about how much was real and how much was story (see Smithsonian, "There Never Was a Real Tulip Fever"). That, too, sounds familiar.
Tulip Mania was never really about tulips. It was about belief. A story that said the rules had changed, and the future belonged to those who acted quickly.
Sound familiar?
Today, we are not trading bulbs. We are trading agents, copilots, and synthetic employees. And just like in 17th-century Holland, some people are mortgaging their future for something they do not quite understand, but are sure everyone else is already profiting from.
Just as tulip speculation once spread through taverns and markets, today’s AI enthusiasm is spreading through boardrooms, classrooms, and newsfeeds. The dynamics are strikingly similar.
What follows is something I’ve been thinking about for a while. It’s a look at the conditions that make hype so contagious, the signs that suggest a bubble, the risks hidden just beneath the bloom, and a few ways to stay grounded while the fever runs its course.
So what is driving this latest wave of conviction? Let’s take a closer look at what is blooming in the AI space, and what might be quietly rotting beneath the surface.
In early 2022, most people hadn’t heard of large language models. By early 2023, ChatGPT had more than 100 million users. By mid-2024, "prompt engineer" was a six-figure job title (and no one could quite agree on what it meant).
The speed of public adoption is hard to overstate. ChatGPT reached one million users in five days. Facebook took ten months. Netflix took more than three years. Even the smartphone, arguably the most transformative tech of the last two decades, took years to gain serious traction.
It is not just consumer hype. The underlying technology is moving fast. OpenAI went from GPT-3 to GPT-4 in under three years and is now touting GPT-5 with GPT-6 in the wings. Claude, Gemini, Llama, and others release frequent updates, each version promising more power, more fluency, and fewer guardrails. The ground keeps shifting, even for the people building on top of it.
For professionals, that pace creates a widening gap. It is not just between research and practice, but between what the tech actually does, what it is claimed to do, and what people think they are supposed to understand about it. That gap is where FOMO (Fear of Missing Out) grows.
"I need to understand this so I don’t look obsolete."
"What if I’m the only one not leveraging AI at work?"
"Am I about to get left behind by software that writes itself?"
This is a time when people are most likely to mistake speculation for understanding. Not because they are gullible, but because the pressure to appear knowledgeable is stronger than the incentive to slow down and ask hard questions.
Suddenly everyone is a systems thinker. Everyone is building "agentic workflows." Everyone is pitching AI strategy decks filled with terms they only half understand. Nobody wants to be the last one to get it, even if nobody really does.
The tools are impressive, and the future may be bright. But the environment we’ve built around them? It is perfect for a bubble.
And when a market tilts toward bubble territory, the symptoms show up quickly — in how we buy, how we talk, and how we convince ourselves that speed matters more than clarity.
You don’t need to squint to find it — the signs are everywhere. Inflated promises, semantic sleight-of-hand, guru-led gold rushes, frothy valuations. The patterns are familiar to anyone who’s seen a speculative wave before.
In 2024, PwC announced it would become the largest enterprise customer for ChatGPT, rolling out OpenAI's tools to more than 100,000 employees. It was not just adoption; it was a full embrace: customer and reseller in one stroke. The messaging emphasized "responsible AI," but the speed and scale felt more like a hedge against obsolescence than a measured deployment.
Moves like this echo a deeper trend: the pressure to “do something with AI” — even before knowing what that something is.
Where there’s a gold rush, there’s someone selling picks, shovels — or €1500 ChatGPT courses.
A sudden wave of self-anointed “AI experts” has appeared, offering training programs with little grounding. These offerings lean more on fear of missing out than meaningful instruction — tapping into the anxiety of not keeping up or not being able to contribute in an AI-driven world.
Meanwhile, vendors have discovered the branding power of terms like “agentic AI.” The problem? No one agrees on what that means.
“No one knows what the hell an AI agent is. Or, at least, not everyone can agree on a definition for what, exactly, an AI agent is or does.” - Stack Overflow: “Wait, what is agentic AI?”
When definitions are loose, everything can be labeled “agentic,” from task automation wrappers to glorified chatbots. Ambiguity becomes a sales tool.
In 2024, generative AI startups raised $56 billion, nearly double the previous year's total. Startups with little more than a demo and a deck landed nine-figure rounds. Some investors admitted they were "willing to overpay" just to get AI into their portfolios.
Carta reported that valuations for Series D AI startups in 2024 were more than 200% higher than non-AI peers. Databricks CEO Ali Ghodsi called it “peak bubble,” pointing to billion-dollar valuations awarded to companies with “no product, no IP.”
But the problem isn’t only inflated valuations. Beneath the surface, the behaviors shaped by this mania introduce very real risks.
The deeper problem is not the hype itself, but what the hype displaces. Context, caution, and clear thinking get pushed aside to make room for narrative, momentum, and the desire to look like you understand what is going on.
The biggest risk here is misplaced trust.
We already have examples. ChatGPT generated fake legal citations that were submitted in court. Samsung engineers accidentally leaked sensitive code while trying to improve productivity with AI. These are not fringe cases. They are what happens when misunderstood systems are deployed in high-stakes environments.
A system that sounds confident is often treated as credible. But confidence is not competence. One bad prompt can bring legal or reputational trouble.
In speculative markets, incentives bend. The person who sounds like they understand AI gets promoted. The team that rolls out a pilot gets budget. The vendor that uses the right vocabulary gets the contract. Understanding is optional. Echoing market perception is rewarded.
This makes even seasoned professionals vulnerable. You end up buying tools based on claims, not outcomes. You staff projects with partners who know how to demo, but not how to deliver.
The less you understand the limits of a system, the more likely you are to believe it can solve your hardest problems.
If your digital transformation project has stalled, generative AI starts to sound like a shortcut. If you are short on staff, an AI agent seems like the answer. If your stack is weighed down by years of technical debt, you start hoping the model can rewrite the mess.
It rarely works that way. What you get instead is added complexity, deeper fragility, and an expensive lesson in misuse. Tools pitched as accelerators are often dropped into brittle environments. More abstraction. Spaghetti code. Less clarity.
When something breaks, nobody knows where to look. Trying to fix bad systems with misunderstood technology is not a strategy. It’s a recipe for deeper dysfunction.
So how do you avoid these traps? Stop chasing the promise. Start solving the problems that are right in front of you. Do less guessing and more investigating. Focus less on what the tools say they can do, and more on how they actually behave.
You don’t need to outrun the hype. You just need to stay grounded.
Which raises the obvious question: what does “grounded” look like in practice? How do you move forward without getting swept up in the fever?
The key to surviving change, especially at the pace we are seeing now, is not to be first. It is to be clear: clear about your business, your problems, and your process. The tools will keep evolving. The hype will keep spinning. The companies that thrive will be the ones that stay focused on what actually matters.
Treat generative AI as tooling, not a product feature.
You do not win by betting on every new platform. You win by solving real problems. You get there by understanding how things work, not just what they promise.
These systems sound confident. That is part of the problem. As I wrote in The Model Is Not the Territory, output is not the same as truth. A smooth response is just that: a surface. If you mistake polish for knowledge, you are likely to build confidence in places where caution is needed.
As we learned from Fred Brooks, the product that claims to be a solution to every problem is appealing. But almost always, it is a trap. All-in-one tools mask the real work. They give you convenience, not understanding.
A better approach: take a small problem. Try a new tool. If it helps, use it. If not, keep moving.
Being early makes for good press, but it rarely makes for lasting results. Most organizations would be better off letting others go first by employing the Second-Mover Advantage. Let someone else run into the bugs, the lawsuits, and the rushed integrations. Then, once the dust settles, take what has been learned and apply it.
Waiting is not hesitation. Waiting is a choice. And staying grounded is not about being conservative — it is about being deliberate. The people who survived the tulip crash were not the ones with the biggest bets. They were the ones who knew the difference between something valuable and something merely fashionable.
That distinction matters now more than ever, because the bloom of AI won’t vanish overnight. It will settle, as all bubbles do.
Tulips did not disappear after the bubble burst. They are still around. Still beautiful. Still cultivated. But they are no longer mistaken for currency, or salvation.
That is how most bubbles end. Not with total collapse, but with correction. The fever breaks, prices fall, and attention shifts. The object remains, but the myth fades.
This latest round of AI seems likely to follow the same arc. The tools will continue to improve. The use cases will get clearer. Some of the promise is real. But so is the noise, the marketing theater, the consultant gold rush, and the pressure to act fast before you understand the ground you’re standing on.
The risk is not that AI is fake. The risk is that in our rush to believe, we stop asking what is real.
Whether you’re building, buying, or just trying to keep up, keep asking questions. Look past the interface. Your job is to keep your head, solve your real problems, and stop buying tulips sight unseen.
That is how you build something that lasts.