Look, I get it, you’re sick to death of hearing about AI. It’s in the news, it’s all over social media, it’s in your favourite apps, and the slop is infecting every corner of your world. You just want to go outside, touch grass, and hope the whole thing goes away.
I have devastating news for you: it’s not going away.
I’m not an AI evangelist, and I’m not an AI luddite. I’m something much more nuanced, because life, and humans, are messy and not easily categorised into a divisive binary system (hello, US politics!). So I’d like to explain my position in what will probably be an excruciating amount of detail…
What is AI?
AI is a label you apply to a product that’s worse than the alternative product without AI, but sells for more money.
j/k, kinda.
I mean, that’s basically a true reflection of our frothy, bubble-like times, but it’s not a particularly useful definition.
AI is ‘Artificial Intelligence’, a field of academic research that’s almost 70 years old (the term was coined in 1956) and covers speech recognition, robotics, computer vision, agent-based simulations, neural networks, machine learning, genetic algorithms, and much more besides. It’s incredibly broad and has been widely taught in colleges for the past 30 years. It is not new.
You will be familiar with many spin-offs of AI, and they probably have a place in your home or pocket. Face recognition in your camera. Siri and Alexa. The entire personalised ad industry of the Internet. Book recommendations. Robot vacuums and lawnmowers. Grocery store vouchers. Fitness watches with their activity detection, analysis, and prediction. Gmail spam detection. Voicemail transcription. Google Translate. Non-playable characters in video games.
AI is a broad field, spanning all the way from cognitive science to computer science. It’s not all uncanny videos of six-fingered, three-armed women and mediocre writing. That’s just how the current crop of AI companies, led by OpenAI, has claimed the word for itself, using it to refer specifically to systems based on Large Language Models (LLMs).
I am more than a little bit peeved that the ‘AI’ name has been captured by such a narrow part of the field, but that battle is lost now. The broader AI field will still be around, providing hugely diverse and interesting research, for decades to come.
It’s this AI-as-LLMs that we need to discuss today.
The last few years can basically be summarised as: the latent potential of decades of research into neural networks, the invention of the transformer architecture, the availability of a massive training corpus in the form of the Internet, rapidly increased computational power (particularly in GPUs, which excel at vector arithmetic), and the recognition by commercial companies that investing large sums of money into this machine could now produce meaningfully useful results.
Those are the basic ingredients for the AI boom we’re experiencing.
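If you’re curious what that transformer ingredient actually computes, here’s a minimal, illustrative sketch of the attention operation at its core, written in Python with toy dimensions (real models stack many of these with learned weights). It also hints at why GPUs matter: it’s all matrix arithmetic.

```python
import numpy as np

def attention(Q, K, V):
    """Each output row is a weighted mix of the rows of V; the weights
    come from how strongly each query matches each key (softmax)."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                        # blend the value vectors

# Four toy "tokens", each an 8-dimensional vector. Nothing but matrix
# multiplications: exactly the workload GPUs are built to chew through.
tokens = np.random.default_rng(0).standard_normal((4, 8))
print(attention(tokens, tokens, tokens).shape)  # (4, 8)
```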
Is it all just a stinking pile?
All this enthusiasm has created a lot of noise. There’s a sudden gold rush to figure out what this technology can do and who will pay for it. The enthusiasm is exploding outwards with lots of energy, no direction, and no consideration for what’s in its way.
Most of it, I think, is dross.
It ranges from the pointless, to the awful, to the dangerous.
Actually, there’s quite a lot of danger, because the scariest monsters in the woods (powerful people) just get scarier when you equip them with AI. For example, it’s never been easier or cheaper to create fake videos of presidential candidates dropping out of the election in an effort to subvert democracy.
And there’s a lot more bad news to go around…
Mass copyright-infringement. Forced adoption by companies. Life-altering decisions being made by AI. Incomprehensible amounts of energy usage during a time of a climate crisis. Hallucinations. Six-fingered women. Data centers. A pyramid scheme of investments holding up the entire AI industry (and US economy). Inhuman standards of beauty. Synthetic relationships. AGI. Goodbye searching Google; Hello asking ChatGPT. Life-threatening “medical” advice. AI-suicides. Misinformation. Disinformation. Lack of critical thinking. The mass-transformation of jobs for people into AI-revenue concentrated in just one or two companies. Did I mention the six-fingered women? Sometimes they have three arms too. Vibe-coding. AI-drones. AI-intelligence gathering. AI-enabled browsers snooping on your web activity. Adverts generated on-demand tailored just to you. Sycophantic responses. I’m sorry, you’re right. Slop.
That’s a lot to be worried about. A lot to be pissed off about.
But maybe not everything is bad; maybe there is some value to be found in this pile of shit.
What you do need to care about
Right now there are billions of dollars being poured into AI technology. It’s flowing into the few major players (OpenAI, Anthropic), it’s being reinvested inside the likes of Facebook and Google, and it’s trickling down into any company that can slap “AI” on the side of their product. The scale of the money involved is so large that there are convincing arguments that AI is the only thing propping up the US economy right now.
With all this money on the line, there’s a lot of marketing going on, and it’s broadly targeted at the decision makers in government and commerce. Every country seems to need an AI strategy now. We’ve seen Shopify, Coinbase, Intercom, and a host of other companies go “AI-first”.
Every department in our businesses is being hit with the AI wand, some more successfully than others. Support is moving increasingly to AI-chatbots. Emails are being summarised. Documents are being spruced up. Spreadsheet formulas are being calculated. Automation is being handed off to AI.
And software development? Every single CEO is now asking, “If OpenAI can have AI write 80% of their code, why can’t we?”
So, what’s our answer?
We’re engineers: inventing, building, using, understanding, and evaluating technologies is what we do. We’re supposed to be curious about this stuff! When someone hands us a puzzle, we figure out how it works.
It takes only cursory use of an AI coding tool to realise that there’s definitely some value here for a developer. Perhaps you ask Claude to brainstorm a new approach for processing webhooks in your application. Or you give it a SQL query and ask it to suggest optimisations. Or you just experience an inline suggestion in your editor that autocompletes exactly the next line you were about to type.
There is no denying that there’s the hint of value here. How much? What’s the extent of its usefulness? How can we make the most of it? How does it fit into the wider human and technical systems of software development?
Why sitting this out is not an option
I know there are many sceptical developers out there who see AI as either a threat or a fad.
As I mentioned previously, there’s much not to like about the AI industry right now, and we are right to be concerned about many of the ways it’s being used and the consequent effects on society.
I also think we can separate out those troublesome uses of AI from the AI tools in our own domain. We can simultaneously hold the view that AI videos threaten democracy whilst also believing that AI coding assistants can improve our work; they are not mutually exclusive views.
Know your enemy
You can’t figure out the value or limitations of AI by ignoring it.
You need to use it and explore it to really understand it, and after giving it a genuine attempt, you are left with one of two outcomes. In the best case, the AI helps you do your job. Win! In the worst case, you can start to push back against the self-interested AI-crusaders that are red-pilling our entire industry: not with vague thoughts about your “craft”, but with actual data, knowledge, and experience you’ve gained using AI.
If you’re an AI-sceptic, then I think it’s imperative you start to learn this new tool because you cannot adequately question something you haven’t spent the time trying to understand. Your arguments will only get stronger as you become an expert in the use of AI.
We’ve seen over and over again that the output of AI systems can look very authentic and convincing to the untrained eye. Ask ChatGPT for a swim set and it’ll give you back one but a real swim coach will poke holes in it. Likewise, a recipe from Claude is perfectly plausible until you try to bake it.
As software engineers, we are both the best placed to evaluate AI’s ability to produce software and the only ones who should do so. If we leave it to the non-technical person who just vibe-coded their first web app, or the CEOs listening to the latest AI sales pitch, then of course all we’ll hear is how amazing AI is for coding.
If we really want to know how good or bad it is, we need engineers to explore it.
Game theory-ing your career
You might like to wish for a world without AI.
You might like to believe that if you can just ignore it for a few more months then it will all blow over and next year Claude Code and Cursor will be the Bored Apes of 2025: embarrassing, laughable, forgotten.
I think you’re playing a game akin to the Prisoner’s Dilemma with your career.
You’re assuming that if you don’t embrace AI, and no other developer embraces it, then it will die on the vine. Except, if other developers do embrace it and learn how to use it well, they will be more productive and more valuable than you are.
A developer using AI well will be more productive than a developer without it. A developer using it badly is worse than both. Who are you going to be? Who is a business going to hire?
In the Prisoner’s Dilemma, if one prisoner defects (i.e., embraces AI), then the other prisoner is subjected to an even longer sentence. That’s going to translate into a massive disadvantage in our careers: missed promotions, poor performance reviews, being first on the firing list, and lost job opportunities.
There is no universe where your colleagues and competitors in the job market aren’t going to utilise a tool if it gives them even a 10-20% productivity boost. Wishing for broad non-compliance is a losing strategy.
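To make the game-theory framing concrete, here’s a toy sketch. The payoff numbers are entirely my own illustrative assumptions, not data; only their ordering matters. The point is the shape of the game: whatever the other player does, adopting pays more than abstaining.

```python
# Career payoffs for (you, colleague) under each pair of choices.
# The numbers are invented for illustration; only their ordering matters.
payoffs = {
    ("ignore", "ignore"): (0, 0),    # the wished-for world: nobody adopts
    ("ignore", "adopt"):  (-2, 3),   # they out-produce you
    ("adopt",  "ignore"): (3, -2),
    ("adopt",  "adopt"):  (2, 2),    # everyone levels up together
}

# "adopt" beats "ignore" regardless of the colleague's move: a dominant
# strategy, which is why "everybody just ignores it" is unstable.
for their_move in ("ignore", "adopt"):
    best = max(("ignore", "adopt"),
               key=lambda mine: payoffs[(mine, their_move)][0])
    print(f"colleague plays {their_move!r} -> your best response: {best!r}")
```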
Bubbles won’t save you
Perhaps you’ve seen the absurd amounts of money being invested in AI, noted that Nvidia appears to basically be paying companies to buy its own product, and you’ve seen the huge losses these companies are racking up with each AI query their users execute.
You think this is a bubble. You’re probably right.
You think that when the bubble bursts, all this AI stuff will go away. You’re wrong.
After the dotcom bubble burst, e-commerce did not die. At the same time in Europe, the telecoms crash did not scupper 3G networks (or 4G, or 5G, or the iPhone, or Android). The housing crash of 2008 didn’t kill off houses, mortgages, or banks.
Those events culled the dead and defective companies, but they didn’t kill off the industry entirely. The bubble will (probably) burst, but the useful AI tools will survive.
It might actually improve your (work) life
I was a Java developer for more than 10 years before I discovered Ruby on Rails. That jump from verbose, type-ridden code to method-missing magic and duck-typing was weird. It was very uncomfortable to leave behind the safety of types, compilation, and large comment blocks.
But through that discomfort I discovered a language which I adore and one that fulfills my goals much better than Java did. Equally, adopting Java in the first place back in ’97 wasn’t an easy path: no one was hiring for this very new, and very slow language. The easy choice at the time was C++.
And over the years, we’ve all had to learn new editors, new libraries, new languages, new processes, and new version control systems. Or perhaps you’re still young enough that you haven’t, but I’m here to tell you that you will need to learn new things.
That new thing today is AI.
It might well be weird and uncomfortable initially, but it might also improve your work and ease many of the stressors that come with it.
There is the concern that all this “progress” will put us out of a job once the AI gets good enough. I’m sorry to tell you this (if you have investments in AI companies), but the AI is not going to get good enough. Our teams of developers are not going to be replaced with agents running 24/7, because the fundamental architecture of these systems is statistical, not logical. An LLM has no conceptual model of a system, its dependencies, its flows, or its constraints. It is “just” a very sophisticated language-prediction engine. It’s incredibly interesting to solve problems by providing the context in natural language, appending the question, then getting the AI to predict the appropriate answer. That’s a stunning development, but I think it has its limits, and we may already be seeing a slow-down in AI progress. Without any conceptual understanding, I think there will always be a role for human minds.
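For the avoidance of doubt about what that “context plus question” loop looks like in practice, here’s a minimal sketch using the Anthropic Python SDK. The model id and the schema snippet are my own assumptions for illustration, not anything from a real system.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical context: a made-up schema, purely for illustration.
context = (
    "Schema: orders(id, user_id, status, created_at)\n"
    "Existing index: orders(user_id)\n"
)
question = "Why might `SELECT * FROM orders WHERE status = 'open'` be slow?"

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id; use a current one
    max_tokens=500,
    # The whole exchange is natural language: context first, then question.
    messages=[{"role": "user", "content": context + "\n" + question}],
)
print(message.content[0].text)  # the engine *predicts* a plausible answer
```

There is no model of the database anywhere in that loop; the answer is a prediction conditioned on the words we supplied, which is both the magic and the limitation.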
I may be biased, because I have always run small teams that punch above their weight, working on systems that our competitors need teams multiple times our size to manage. We achieved this, frankly, by having to be more ruthless about the work we take on. There are so many good ideas, must-dos, and should-dos that we don’t take on, because we’re small and have to focus only on the highest priorities.
I don’t see AI being a threat to our team size in any way. I don’t think we’ll ever need fewer developers but I am enthusiastic about being able to tackle more of those good ideas. I’m hopeful that AI can help us achieve things that were previously too high-effort and lower down our priority list. That we can file off the rough edges which we’ve worked around for years.
I’m actually really hopeful that AI can lower the cognitive load for the team. I wrote recently about how I consider that to be one of our biggest scaling challenges, and it’s something that concerns me a lot. A pair of developers might work on a new feature for a few weeks or months and become deeply familiar with it, but it’s a massive mental load to then jump to fixing a bug in another area of the app. Can AI, like Neo in The Matrix, load that context into our brains faster? Can it help us summarise and map out the codebase? Can it remind us about a feature which is dependent on the current behaviour?
I think there is a massive opportunity beyond “AI writes code”. I think constraining your thinking to AI-generated code is, frankly, Level 1 thinking. It’s “mid”, as the kids might say. We never hired the programmers with the fastest typing skills (at least, not since the 1950s). We have never felt that what held our businesses back was the lack of a faster typist, because typing is not the bottleneck. And frankly, code is not the end goal either.
We participate in a system of software development. Part of that system is the physical typing of code into the machine, but that’s not even the hard part, or the interesting part. The hardest bit is knowing what to type: whether that’s product decisions, architecture decisions, or decisions around maintainability and performance.
This is where the magic happens. In our heads, not our fingers.