You’ve seen the MIT study. 95% of corporate AI initiatives FAIL.
You’ve probably shared it in meetings, posted about it on LinkedIn, used it to justify your AI concerns. But do you know why that number is so high? I do. Because I lived it.
I spent three months becoming part of that 95% on purpose.
As a fractional CTO and advisor, I kept getting the same question: “How should we use AI in our engineering teams?” I could have given the standard consultant answer about augmentation and efficiency. Instead, I decided to find out what actually happens when you go all-in.
I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.
I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
Twenty-five years of software engineering experience, and I’d managed to degrade my skills to the point where I felt helpless looking at code I’d directed an AI to write. I’d become a passenger in my own product development.
Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.
The company gets excited about AI. Leadership mandates AI adoption. Everyone starts using AI tools. Productivity metrics look great initially. Then something breaks, or needs modification, or requires actual judgment, and nobody knows what to do anymore.
The developers can’t debug code they didn’t write. Product managers can’t explain decisions they didn’t make. Leaders can’t defend strategies they didn’t develop.
Everyone’s pointing at their AI tools saying, “It told me this was the right approach.”
During my experiment, I found myself in constant firefighting mode. Claude Code would generate something slightly off; I'd correct it; it would make the same mistake again; I'd correct it again. I was working harder than if I'd just written the code myself, but with none of the learning or skill development.
Bob Galen watched me go through this and called it perfectly in our latest podcast: “Who owns that product, Josh? You or Claude Code?” The answer was Claude Code. I’d abdicated ownership while telling myself I was being innovative.
The formula should be AI + HI, where HI (Human Intelligence) is the larger term. What's actually happening in that 95% of failures? It's AI with a tiny bit of human oversight, if any.
When AI helps you write better code faster while you maintain architectural understanding—that’s augmentation. When AI writes code you don’t understand—that’s abdication.
When AI helps you analyze customer feedback while you make product decisions—that’s augmentation. When AI tells you what to build next—that’s abdication.
When AI helps you write better and faster while maintaining your voice—that's augmentation. When AI writes for you in a voice that isn't yours—that's abdication.
I know the difference because I’ve been on both sides. The abdication side feels easier initially. You’re shipping more! You’re moving faster! Then you realize you’re not actually in control anymore, and when something goes wrong—and something always goes wrong—you’re helpless.
We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.
Bob and I represent something that might disappear: masters of our craft who learned by doing, failing, debugging, and doing again. We have 25+ years of accumulated scar tissue that tells us when something’s about to go wrong, why that architectural decision will haunt you, and what that customer feedback really means.
You can’t prompt your way to that knowledge. You can’t download that experience. You have to earn it. And if you’re letting AI do the work, you’re not earning anything except a dangerous dependency.
Time for a little uneasiness. Look at your recent work:
Can you explain every decision in detail without referencing what AI suggested?
Could you do your job tomorrow if all AI tools disappeared?
Are you getting better at your craft, or just better at prompting?
When something breaks, is your first instinct to fix it or to ask AI to fix it?
If you’re squirming, you’re part of the 95%.
For the next week, pick one core skill of your job. Just one. Do it without any AI assistance. Write code without Copilot. Make product decisions without ChatGPT. Write strategy without Claude.
Feel that discomfort? That’s not incompetence. That’s your actual skill level revealing itself. That’s the gap between who you are and who you’ve been pretending AI makes you.
Now you have a choice. You can close that gap by developing your actual skills, using AI as a training partner rather than a replacement. Or you can keep abdicating, keep telling yourself you’re being innovative, and become part of that 95% failure rate.
The companies that will thrive aren’t the ones with the best AI tools. They’re the ones whose people use AI to become better, not to become lazier. They’re the ones where humans own the decisions, own the code, own the strategy, and use AI as an amplifier, not an autopilot.
I spent three months learning this the hard way. I let AI own my product development and almost lost myself as a developer. Don’t make my mistake. Don’t become another statistic in that 95%.
Own your craft. Use the tools. Don’t let the tools use you.
Stay courageous,
Josh Anderson
The Leadership Lighthouse
P.S. MIT’s study isn’t an outlier. Gartner, McKinsey, and others are finding similar failure rates. The pattern is consistent: abdication fails, augmentation succeeds. The question is: which side of that divide are you on?