Dear CEO: A Love Letter to Your AI Revolution


Adrian Booth

Dear CEO,

Firstly, congratulations on your recent announcement regarding the AI revolution transforming your business.

I get it. You’re under immense pressure right now. The board is asking why your competitors are announcing AI initiatives while you’re apparently stuck in the stone age of paying people salaries. Your shareholders want to see those margins improve. The tech press is full of stories about companies slashing costs with AI, and frankly, it sounds too good to ignore.

Maybe you were one of those CEOs who anonymously confessed that your AI initiatives are “just for show”, and, like the other CEOs in that survey, you’re worried about being ousted if you don’t integrate AI into your products in some form.

In the end, you’re doing what any responsible leader would do: exploring every opportunity to keep your company competitive. The AI revolution is real, and the potential is genuinely exciting.

But as someone who’s been in the trenches of this technology, I wanted to share a few things the headlines aren’t telling you. Not to discourage your AI journey, but to help you navigate it more successfully.

If I could be blunt for a second, I think you’ve confused using AI as a tool with replacing humans entirely. You’ve mistaken efficiency for intelligence, automation for innovation, and cost-cutting for strategy.

You think you’re leading a revolution, but what you’re actually doing is conducting a very expensive experiment with your company as the lab rat.

This letter is my attempt to go beyond the headlines, beyond the hype, and explore the real consequences of the decisions you’re making.

The era of AI employees is apparently here. Maybe you saw this ad dotted around bus stops and tube stations in London and thought to yourself just how annoying it is to have employees asking about work-life balance whilst you’re out there trying to change the world.

[Image: the AI-employee ad described above]

So I want to talk about something you may be unfamiliar with.

You see, whilst you’ve been busy studying org charts and reporting structures and thinking you understand how work actually gets done, there’s been an entire shadow economy operating right under your nose. Remember when Sarah from accounting somehow always knew which vendor payments could be delayed without causing a fuss? Or how Ian pushed back on a product requirement because he recognised it would create a security vulnerability?

That’s what organisational theorist Niklas Luhmann would call the “informal” side of your organisation: the vast network of institutional knowledge, creative problem-solving, and preemptive disaster prevention that happens precisely because it wasn’t assigned by management.

You’re wagering that when Sophie instinctively refactored that messy part of the code base because she knew it would cause problems down the road, she was just “following the ticket.” That when your tech lead pushed back on a product requirement because he recognised it would create a regulatory nightmare, he was merely “executing assigned tasks.” That when your developers spent an extra hour upgrading an external library dependency to keep the system maintainable, they were just “completing their deliverables.”

It’s a bold assumption that your software development process operates like a perfectly designed assembly line rather than the beautifully chaotic ecosystem of human expertise, intuition, and collective intelligence it actually is.

Your AI agents will write the code you ask for.

They won’t write the code you should have asked for but didn’t know you needed.

They won’t refactor the adjacent system that’s about to become a bottleneck. They won’t suggest a completely different approach that sidesteps the problem entirely. They won’t catch the subtle architectural decision that will make your system unmaintainable in six months.

Most importantly, they won’t have that informal conversation where someone mentions, “Hey, didn’t we try something like this two years ago? It didn’t work because…”

That institutional memory, that collective wisdom, that informal knowledge network that prevents your team from repeatedly stepping on the same rake? Yeah, gone.

My biggest issue with LLMs is that they make it incredibly easy to write code that never should have existed in the first place. AI bros and LinkedIn crackpots like our friend Lauri here will boast about 250,000 lines of generated code like it’s some sort of Olympian achievement.

[Image: LinkedIn post boasting about 250,000 lines of AI-generated code]

But the best developers I’ve ever worked with, the ones I believe added the most value to an organisation, did so by solving a problem without writing any code at all.

This is the informal side of your organisation doing the heavy lifting. Applying hard-won experience to recognise when a simple conversation can eliminate weeks of development, when an existing solution can be repurposed instead of rebuilt, and when the real problem is completely different from what was originally described.


I know you’ve always prided yourself on avoiding unionised labour. You see it as part of your fiduciary duty to shareholders to keep labour costs competitive and margins expanding.

I get it. If you led by moral example instead of by ruthless accounting and paid your employees more out of pure generosity, then you’d eventually be out of a job. Given the way our system works, if I were in your position I’d likely follow the same cutthroat approach to workforce management.

But I’m not sure you’ve thought deeply enough about the implications of outsourcing your entire labour force to a single supplier.

You’re about to create the most powerful union in your company’s history, and it’s run by Sam Altman, Dario Amodei, or whichever AI vendor you’ve chosen as your new workforce provider.

These companies are deliberately distorting the public-facing unit economics of the tools you’re relying on. There are many documented instances of people paying a fixed price for a Claude Code subscription (usually around $200 a month) while racking up over $2,000 in underlying LLM costs. That’s a roughly 90% subsidy, provided by AI vendors that are themselves subsidised by venture capital. These LLMs are the £5 Uber rides of 2013, and you’re organising your entire business around them.
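To make the arithmetic concrete, here’s a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption (token volumes and per-token rates vary by vendor and by month), not real pricing:

```python
# A back-of-the-envelope sketch of the subsidy maths.
# Every figure here is an illustrative assumption, not real vendor pricing.

subscription_price = 200.0      # flat monthly fee (USD)

# Hypothetical heavy-usage month, metered at API-style rates:
input_tokens = 500_000_000      # tokens sent to the model (assumed)
output_tokens = 40_000_000      # tokens generated (assumed)
usd_per_m_input = 3.0           # USD per million input tokens (assumed)
usd_per_m_output = 15.0         # USD per million output tokens (assumed)

metered_cost = (input_tokens / 1e6) * usd_per_m_input \
             + (output_tokens / 1e6) * usd_per_m_output

subsidy = 1 - subscription_price / metered_cost
print(f"Metered cost:      ${metered_cost:,.0f}")   # -> $2,100
print(f"Effective subsidy: {subsidy:.0%}")          # -> 90%
```

The exact figures don’t matter. What matters is that when a flat fee covers a tenth of the metered cost, the discount is a loss-leader, and loss-leaders end.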

When contract renewal time comes around, you won’t be negotiating with individual employees who need their jobs to pay rent.

You’ll be sitting across from a tech company that knows exactly how dependent you’ve become on their services.

They’ll have intimate knowledge of your operations, your workflows, and, most importantly, your complete inability to function without them. It’s like firing your entire IT department and then discovering that your new “cost-effective” cloud provider wants to triple their fees. Except this time, it’s not just your servers they’re holding hostage, but your entire business operation.

In your quest to eliminate worker leverage, you’ve handed the ultimate leverage to a single corporate entity that makes traditional unions look like a local prayer group.


Would you ever hire somebody who never took initiative and never learned from their mistakes? Regardless of how intelligent they were, I think we can both agree those aren’t characteristics anyone would hire for.

Humans have this annoying habit of making mistakes, but we also have this wonderful ability to say “Well, that was stupid” and instantly learn from it. Your new AI workforce, however, operates on a slightly different principle.

When an AI agent makes a mistake (say, consistently misinterpreting customer complaints or hallucinating invoices), it won’t have a lightbulb moment and course-correct. It will continue to make that exact same mistake with unwavering consistency until its next training cycle (and even then there’s no guarantee it will have learned to avoid it). And when you point out its mistake, say, offering a £50,000 contract to a supplier for £30, you’ll get a “You’re absolutely right” and a never-ending cycle of apologies, with no mechanism for preventing a repeat.

These next-token predictors are “frozen in time”: they do not learn from mistakes in real time. Their weights are fixed when training ends, so any correction you give lives only in the current context window and evaporates when the session does. Due to the way they’re architected, they cannot learn the way humans do, and this is a fundamental limitation of the technology that won’t be addressed for quite some time.
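Here’s a minimal sketch of what “frozen in time” means in practice. `call_model` is a hypothetical stand-in for any LLM API, not a real client library; the only property that matters is that the weights behind it never change between calls:

```python
def call_model(messages):
    """Hypothetical stand-in for an LLM API call. Its one relevant
    property: the weights are identical on every call, so nothing
    said in one session changes behaviour in the next."""
    return "<model output>"  # placeholder response

# Session one: the agent blunders, a human corrects it.
history = [{"role": "user", "content": "Draft the supplier contract."}]
draft = call_model(history)
history += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "You priced a £50,000 contract at £30. Fix it."},
]
fixed = call_model(history)  # the correction works, but only inside this session

# Session two, tomorrow: the history is gone, the weights are unchanged,
# and the model is exactly as capable of the same blunder as yesterday.
fresh = call_model([{"role": "user", "content": "Draft the supplier contract."}])
```

Compare that with an employee, for whom “well, that was stupid” permanently updates the weights.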

Stop.

I don’t mean to patronise you here, but you’re not superhuman and can’t understand the nuances of every single job within your company. You are the face of the org. The salesman. The most senior strategist. You leave the nitty-gritty details to us so you can focus on the higher-level ambitions that can move us all forward.

But this division of labour creates a perfect storm when it comes to AI hype, because what you’re reading in the headlines and what’s actually happening in the trenches are two very different stories.

[Image: headline coverage of Satya Nadella’s comments on AI-written code]

To be clear, the exact quote from Microsoft CEO Satya Nadella was “I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software”.

There’s a huge gap between what the media reports and what the reality is.

Here’s what actually happened: a tech CEO accurately described something that’s been happening for years. A massive chunk of the code in every tech company is already “written by software”: code generators that create boilerplate templates, package managers that auto-generate dependency files, build systems that produce configuration code, and scaffolding tools that save developers from writing the same basic structures over and over again.
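If you’d like to sanity-check that claim against your own repos, a few lines of Python are enough for a rough estimate. The file patterns below are assumptions, stand-ins for whatever codegen your stack actually produces:

```python
# A rough way to see how much of a repo is already "written by software"
# before any AI is involved. The patterns below are assumptions; swap in
# whatever codegen your own stack produces.
from pathlib import Path

GENERATED_PATTERNS = [
    "package-lock.json",   # package-manager output
    "yarn.lock",
    "*.pb.go",             # protobuf codegen
    "*_generated.*",       # a common codegen naming convention
]

def count_lines(path: Path) -> int:
    try:
        return sum(1 for _ in path.open(errors="ignore"))
    except OSError:
        return 0

repo = Path(".")
files = [p for p in repo.rglob("*") if p.is_file() and ".git" not in p.parts]
generated = {p for pat in GENERATED_PATTERNS
             for p in repo.rglob(pat) if p.is_file()}

gen = sum(count_lines(p) for p in generated)
total = sum(count_lines(p) for p in files)
if total:
    print(f"{gen / total:.0%} of lines are machine-generated "
          f"(by these patterns alone)")
```

In many mature codebases that share is already substantial, and none of it required an LLM.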

But I understand the confusion. When you’re seeing headlines like “Microsoft: AI Writes 30% of Our Code!” it’s easy to think you’re falling behind in some kind of AI arms race. It doesn’t help that there’s now a very public d**k measuring contest between tech CEOs competing to see who can claim that AI generates the most code at their company.

[Images: competing headlines from tech CEOs claiming ever-higher percentages of AI-generated code]

But here’s where the hype becomes dangerous: when you read that Microsoft or Google has “30% AI-generated code,” you might think, “Great! I can reduce my development team by 30%!” That’s like reading that lawyers now use AI research tools for 30% of their case preparation and deciding to fire 30% of your legal department and replace them with legal databases.

Yes, AI can rapidly search through thousands of case precedents, generate standard contract clauses, and even draft routine motions. But it can’t read between the lines of opposing counsel’s strategy, recognise when a client isn’t telling the whole truth, or craft the nuanced argument that turns a losing case into a winning one. The AI handles the grunt work; the lawyer still needs to know what questions to ask and how to use the answers.

I know you’ll get questions on this from some of your technologically illiterate board members. So when one of them asks, “Why are we hiring developers when Google’s AI writes all their code?” you can confidently explain that Google isn’t actually replacing developers with AI; they’re just making their developers more productive, like every other tool advancement in the history of software development. (Maybe also point out that big tech companies like Google and Microsoft have a huge financial incentive to get more companies using AI, so these headlines should always be read with that in mind.)

The real question isn’t whether you should fire your developers because AI can write code. It’s whether you want your developers spending their time on creative problem-solving and strategic thinking, or on writing the same boilerplate functions for the thousandth time.

Because I guarantee you: your competitors who figure out how to amplify their developers’ capabilities will run circles around those who simply try to replace them.

This letter isn’t written from a place of fear. Not fear of unemployment, obsolescence, or economic displacement. I’ve already organised my life in such a way that an extended period of unemployment wouldn’t lead to personal financial destitution. Nor do I feel personally threatened by next-token predictors; I think our friend Jason McDonald put it better than I ever could.

[Image: Jason McDonald’s quote]

It’s also not written from the perspective of someone who despises AI. I personally love these tools and believe they can make a remarkable difference in organisations when used sensibly. I’ve written previously about how LLMs can be leveraged within companies like yours to increase the quality of everyone’s work.

What worries me is something far more profound: the future of… well, everything!

You see, when we treat humans as “resources” to be optimised away, we’re making a statement about what we believe human beings are worth. We’re declaring that the messy, unpredictable, gloriously complex nature of human intelligence is a bug to be fixed rather than a feature to be celebrated.

When entire industries decide that human workers are redundant, we create a crisis of meaning on top of the devastation of unemployment.

We tell millions of people that their skills, their insights, their very humanity have no place in the economy that shapes their lives. And when people feel worthless, when they feel discarded by the systems they once contributed to, they don’t just disappear quietly.

No. They vote for demagogues who promise to restore their dignity. They embrace ideologies that offer simple explanations for complex displacement. They retreat into tribal thinking because the broader society has told them they don’t belong.

The political toxicity we see around us isn’t separate from these economic choices, but a direct consequence of them. When we build an economy that treats human beings as inefficiencies to be eliminated, we shouldn’t be surprised when our politics becomes a battle between those who feel valued and those who feel discarded.

You don’t have to look far into the past to see where our current trajectory leads. You only have to look at the reaction to the murder of Brian Thompson, the CEO of health insurer UnitedHealthcare, last year to get a preview of this simmering class tension. It really hit me when I stumbled upon a subreddit for nurses around the time of this incident.

Look at some of the comments on this r/nursing Reddit post. This is a Reddit forum filled with people who care for a living, and they couldn’t give a shit that another human being was shot in cold blood.

“I’ve never wished a man dead. But I’ve read some obituaries with great pleasure”

“I feel bad that his family is grieving around the holidays but every cent they spend on his funeral was made on the funerals of others.”

“Huh. So he was on his way to a meeting to discuss new and innovative ways to prevent people from accessing lifesaving care to make more money for investors and himself? Sounds like this is related to the pre-existing condition of being an amoral asshole.”

I’m not sure about you, but I don’t want to live in a world where nurses are celebrating the murder of corporate executives. Similarly, I don’t want to live in a world where CEOs fundamentally break the social contract and publicly celebrate the decimation of their workforce with open glee.

I don’t want to live in a world where human life has no value. I don’t want to live in a world where large numbers of economically displaced people vote for race-baiting lunatics because they somehow think it’ll make their lives better.

The world is watching leaders like you make these choices. Your employees are watching. Your children are watching. And history will judge whether we used this moment of technological possibility to build a more human future, or whether we used it to build a more efficient path to our own irrelevance.

I really think that the most successful companies of the next few decades won’t be those that eliminated humans most efficiently. They’ll be those that figured out how to combine human creativity with artificial intelligence in ways that neither could achieve alone. The question is: do you want to lead that future, or be disrupted by it?
