11 June 2025 – Baldur Bjarnason
I don’t recommend publishing your first draft of a long blog post.
It’s not a question of typos or grammatical errors or the like. Those always slip through somehow and, for the most part, don’t impact the meaning or argument of the post.
No, the problem is that, with even a day or two of distance, you tend to spot places where the argument can be simplified or strengthened, where the bridges between ideas can be reinforced while being made less obvious, and where the order can be improved. You also spot which of your darlings can be killed without affecting the argument and which are essential.
Usually, you make up for missing out on the insight of distance with the insight of others once you publish, which you then channel into the next blog post. (That’s how you develop the bad habit of publishing first drafts as blog posts.) But in the case of my last blog post, Trusting your own judgement on ‘AI’ is a huge risk, the sheer number of replies I got was too much for me to handle, so I had to opt out.
So, instead I let creative distance happen – a prerequisite to any attempt at self-editing – by working on other things and taking walks.
During one of those walks yesterday, I realised it should be possible to condense the argument quite a bit for those who find 3600 words of exposition and references hard to parse.
It comes down to four interlocking issues:
- It’s next to impossible for individuals to assess the benefit or harm of chatbots and agents through self-experimentation. These tools trigger a number of biases and effects that cloud our judgement. Generative models also produce volatile results and distribute their harms unevenly, much like pharmaceuticals, which means it’s impossible to discover for yourself what their societal or even organisational impact will be.
- Tech, software, and productivity research is extremely poor and is mostly just marketing – often replicating the tactics of the homeopathy and naturopathy industries. Most people in tech do not have the training to assess the rigour or validity of studies in their own field. (You may disagree with this, but you’d be wrong.)
- The sheer magnitude of the “AI” bubble and the near totality of the institutional buy-in – universities, governments, other institutions – means that everybody is biased. Even if you aren’t biased yourself, your manager, organisation, or funding will be. Even those who try to be impartial are locked inside bubble-inflating institutions and will feel the need to protect their careers, even if only unconsciously. More importantly, there is no way for the rest of us to know the extent of the bubble’s effect on the results of each individual study or paper, so we have to assume it affects all of them. Even the ones made by our friends. Friends can be biased too. The bubble also means that the executive and management class can’t be trusted on anything. Judging from prior bubbles in both tech and finance, the honest ones who understand what’s happening are almost certainly already out.
- When we only half-understand something, we close the loop from observation to belief by relying on the judgement of our peers and authority figures, but in tech those groups are currently almost certain to be wrong or substantially biased about generative models. This is a technology that’s practically tailor-made to be only half-understood by tech at large: people grasp the basics, maybe some of the details, but not the whole. The “halfness” of that understanding leaves cognitive space that lets the poorly founded belief adapt, without conflict, to whatever other beliefs a person holds and whatever context they’re in.
Combine these four issues and we have a recipe for what’s effectively a homeopathic superstition spreading like wildfire through a community where everybody is getting convinced it’s making them healthier, smarter, faster, and more productive.
This would be bad under any circumstances, but the harms from generative models to education, healthcare, various social services, creative industries, and even tech itself (wiping out entry-level programming positions means no senior programmers in the future, for instance) are shaping up to be massive. The costs of running these specific kinds of models remain much higher than the revenue they generate, and the infrastructure needed to build them is crowding out attempts at an energy transition in countries like Ireland and Iceland.
If there ever was a technology where the rational and responsible act was to hold off and wait until the bubble pops, “AI” is it.