LLMs are steroids for your Dunning-Kruger

In his 1933 essay “The Triumph of Stupidity,” Bertrand Russell remarked that “in the modern world the stupid are cocksure while the intelligent are full of doubt.” This is something I often think about when ChatGPT hits me up with yet another “that’s a fantastic idea” for an idea that is clearly anything but.

How often do you think a ChatGPT user walks away not just misinformed, but misinformed with conviction? I would bet this happens all the time. And I can’t help but wonder what the effects are in the big picture.

I can relate to this on a personal level: As a ChatGPT user I notice that I’m often left with a sense of certainty. After discussing an issue with an LLM I feel like I know something, perhaps quite a lot, but more often than not this information is either slightly incorrect or completely wrong. And you know what? I can’t help it. Even when I acknowledge the illusion, I keep chasing the wonderful feeling of conviction these models give. It’s great to feel like you know almost everything. Of course I come back for more. And it’s not just the feeling; it would be dishonest to claim these models don’t have huge utility. Yet I’m a little worried about the psychological dimension of this whole ordeal.

They say AI is a mirror. That summarizes my experience. I feel LLMs “amplify” thinking: these models make your thoughts reverberate by taking them in multiple new directions, and sometimes those directions are really interesting. The thing is, though, that this cuts both ways. A short ChatGPT session may help turn a good idea into a great one. On the other hand, LLMs are amazing at supercharging self-delusion. These models will happily equip misguided thinking with a fluent, authoritative voice, which sets up a psychological trap: nonsense delivered in a nice package.

And it’s so insanely habit-forming! I almost instinctively do a little back-and-forth with an LLM whenever I want to work on an idea. It hasn’t even been that long (these models have been around for, what, three years?) and I’m already so used to them that I feel naked without them. It even gets comical sometimes. When I lost my bag the other day and was going through the apartment looking for it, my first response to my growing frustration was “I should ask ChatGPT where it is.”

I feel like LLMs are a fairly boring technology. They are stochastic black boxes, and the training is essentially run-of-the-mill statistical inference: predict the next token and maximize its likelihood. There are some more recent innovations at the software and hardware level, but those aren’t really LLM-specific. Is it too sardonic to say that the real “innovation” was throwing enough money at the problem to train the models at a huge scale? Maybe RLHF was a real innovation; I’m not sure. Either way, I don’t feel like there is much to be interested in there. And yet, the current AI boom is extraordinarily interesting.
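
To make that “run-of-the-mill statistical inference” claim concrete, here is a toy sketch of my own (a deliberately trivial illustration, not anything resembling a production system): a bigram language model fit by counting, which is maximum-likelihood estimation at its plainest. Real LLMs swap the count table for a neural network trained by gradient descent, but the shape of the objective is the same: predict the next token, then sample stochastically.

    # A deliberately tiny bigram language model, fit by maximum likelihood
    # (i.e. by counting). A sketch for illustration only.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ran over the mat".split()

    # MLE of the transition distribution:
    # P(next | current) is proportional to count(current -> next).
    counts = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        counts[cur][nxt] += 1

    def sample_next(token):
        """Stochastically sample a successor from the fitted distribution."""
        options = counts[token]
        if not options:
            return None  # dead end: token never observed with a successor
        return random.choices(list(options), weights=list(options.values()))[0]

    # Generate a short continuation: a stochastic black box in miniature.
    token, output = "the", ["the"]
    for _ in range(8):
        token = sample_next(token)
        if token is None:
            break
        output.append(token)
    print(" ".join(output))

Scale that idea up by a dozen orders of magnitude of data and compute and you get something like ChatGPT; the statistics underneath stay mundane, which is rather my point.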

It’s the impact. The very real effect of all this on our lives. In hindsight, this will probably be seen as one of the major shifts, one to be reflected upon in terms of education, work, and even society at large. Language cuts to the core of what and who we are. Speech is so natural to us that we even think in speech. So when a machine credibly stepped into that territory, something changed. I’m not sure what it is, and I don’t think anyone really knows at this point, but there is a sense of shifting tides. It’s something most of us are still trying to make sense of.

I think LLMs should be seen not as knowledge engines but as confidence engines. That framing, I feel, better captures the near- and medium-term futures we are dealing with.

Article by Matias Heikkilä

