September 28, 2025
Now that LLMs have been around for a little while, we can discern what they’re good at and what they’re not. Clearly they are good at using a lot of energy and resources and tend to make things up. But they have also proven to be good summarizers, decent coders, and helpful teachers.
But that’s not what I find truly novel about LLMs. Yes, they can generate a lot of content, much of which you’ll see people refer to as AI slop. What’s interesting to me, though, is the emergent property of understanding that LLMs seem to have.
Example: support documentation
AI integration is hit and miss in a lot of products, and often it’s super annoying and unwanted. That said, when I was trying to figure out how to do something in Cloudflare a while back, I noticed that the AI assistant was very good at directing me to relevant support documentation.
When was the last time you went to a support site, ran a search, and got back anything close to what you were looking for? In my experience, it’s usually futile. To be fair, I find Cloudflare’s docs to be better than most services’, so that probably has something to do with it.
Even so, the AI assistant seemed to understand the question I was asking, and in addition to giving me an answer in its own words, it was able to point me to relevant articles to read. (Look, I realize this is anecdotal, and I’ve also run into problematic uses of AI in customer support. But the point here is that it improved a previously horrible interaction.)
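I have no idea how Cloudflare built their assistant, but the usual pattern behind this kind of doc search is embedding-based retrieval: turn the question and the articles into vectors and surface the closest matches. Here’s a minimal sketch of that idea using the sentence-transformers library (the articles and question below are made up, not Cloudflare’s):

```python
# A minimal sketch of embedding-based doc retrieval. This is an assumption
# about how assistants like this typically work, not Cloudflare's actual code.
from sentence_transformers import SentenceTransformer, util

articles = [
    "How to configure a CNAME record for your domain",
    "Troubleshooting 522 connection timed out errors",
    "Setting up a redirect from www to the apex domain",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
article_embeddings = model.encode(articles, convert_to_tensor=True)

question = "why does my site keep timing out?"
question_embedding = model.encode(question, convert_to_tensor=True)

# Rank articles by semantic similarity to the question rather than keyword overlap.
scores = util.cos_sim(question_embedding, article_embeddings)[0]
print(articles[scores.argmax().item()])  # -> the 522 timeout article
```

It’s “just” math over vectors, but the effect is that the search responds to what you meant rather than the exact words you typed.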
What is understanding?
I use the word “seem” because the LLM doesn’t actually understand what I’m saying. At least not in the way that we humans think of understanding.
But this advanced processing of language feels like the closest we’ve come to a computer understanding natural language and intent. It’s Clippy on steroids.
There are AI integrations everywhere, and I’m skeptical about letting AI do things on my behalf since it still makes plenty of mistakes. That said, as integrations improve, and especially as the safety around AI acting on my behalf improves, I can imagine a world where humans can act more like humans and have computers respond appropriately.
Accessibility implications
What I mean by humans acting like humans is that from the dawn of computers we’ve had to adapt ourselves to them. If you want the computer to do something, you have to speak its language, a programming language. So programmers build software that lets us use mice and keyboards to interact with the computer, giving us a more human-friendly way to instruct it.
Before ChatGPT, we could already turn our own speech into text, and digital assistants like Siri and Google Assistant could process that speech and formulate a decent response in some cases (and fail miserably in many others; looking at you, Siri). But the experience was, and is, often subpar. It still feels like you have to speak in code to the computer so that it will understand you.
But now we are getting closer and closer to being able to use natural language to communicate with computers for real. Rather than needing to know specific sequences of words and specially formulated sentences, we can just be ourselves. The computer is starting to be able to understand our intent.
The accessibility implications of this are staggering. I know people who struggle to use the computer to write, to game, to access the internet because of the physical actions needed to interact with computers through keyboards, mice, and other peripherals.
I know people who struggle to use voice recognition software because of speech impediments caused by physical disabilities.
I just experienced these barriers firsthand while I was on a road trip. Since I didn’t have my Mac with me and I’m unable to hold and handle a phone, I was forced to use Voice Control and Siri to do things like check and respond to messages. It was a huge pain because Voice Control often misunderstands commands, and you have to say them in a very particular way for them to work. I would love to be able to chat with my phone: to ask whether certain people have messaged me, to have it help me respond, and to have it do other things reliably without needing to physically handle the device.
But the computer’s ability to process speech keeps getting better, and so does its ability to act on that transcribed speech.
This could open up whole worlds to people who were previously isolated.
Criticisms
Okay, let me come back down from the clouds for a moment. I do think the concerns about AI are valid, and I’m interested to see how some of this stuff is going to play out. In particular, I hate that AI summarizers and training bots exploit information freely given on the web, to the point that websites are effectively being DDoSed. Website owners aren’t even rewarded with human eyeballs because the AI just summarizes the content for the user, who no longer needs to click through.
That’s not to mention things like copyright, environmental impact, etc.
Cautiously optimistic
Despite everything, I am still cautiously optimistic that we’ll be able to enjoy these accessibility benefits while mitigating the negative effects of LLMs.
I currently maintain enough strength to use a mouse to type with an onscreen keyboard, combine Talon and Cursorless to code by voice, and employ Whisper to transcribe prose. But my condition, spinal muscular atrophy, is progressive and will continue to get worse (albeit slowly thanks to modern treatments). As I begin to lose the ability to operate peripherals, I am hopeful that I will be able to replace that interaction with natural language.
That will keep me productive for longer and allow me to continue to do my job as a software engineer (assuming AI doesn’t take my job, *nervous laughter*).
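Since I mentioned Whisper: for anyone curious, transcribing a recording with the open-source whisper package is only a few lines. This is a minimal sketch rather than my exact setup, and the model size and file name are just placeholders:

```python
# Minimal sketch of speech-to-text with the open-source whisper package
# (pip install openai-whisper; it also needs ffmpeg installed).
# The model size and file name are placeholders, not my actual setup.
import whisper

model = whisper.load_model("base")          # larger models are more accurate but slower
result = model.transcribe("dictation.m4a")  # path to a recorded dictation
print(result["text"])                       # the transcribed prose
```

Dictation tools layer real-time streaming, voice commands, and corrections on top of this basic idea, which is what makes it usable for actual work.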
In summary, the slop is not the interesting part of LLMs. The interesting part is their potential (not saying we’re there yet) to simplify human-computer interaction and meet people where they are.