You’ve probably already come across the deluge of em-dashes, perhaps even consciously registered their AI-powered spread all over the web.
There’s another AI signature that I now see everywhere.
“isn’t just hype — it’s infrastructure with intent”
“isn’t just a hopeful story — it’s a tested, resilient project that’s grown stronger”
“isn’t just a new address — it’s a power move”
“Fixing your finances, fitness, and focus first isn’t just smart — it’s the foundation for a relationship that thrives, not just survives.”
“Because moksha isn’t just personal, it’s perceptual.”
It isn’t just Twitter — it is seeping into the pores of every social platform.
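The construction is regular enough that, purely as an illustration, you can sketch a crude detector for it with a regular expression. The pattern and the function name below are my own assumptions for the sketch, not anyone's published heuristic, and the regex will miss plenty of variants:

```python
import re

# Rough, illustrative detector for the "isn't just X — it's Y" construction.
# The character classes cover straight and curly apostrophes, and em/en dashes
# as well as plain hyphens. Thresholds and wording choices are assumptions.
PATTERN = re.compile(
    r"(?:isn['\u2019]t just|more than)"   # the setup clause
    r"[^.\u2014\u2013-]{0,80}"            # a short X, with no sentence break
    r"[\u2014\u2013-]+\s*"                # the signature dash
    r"it['\u2019]s\b",                    # the payoff clause
    re.IGNORECASE,
)

def flag_ai_tell(text: str) -> bool:
    """Return True if text contains the 'isn't just X — it's Y' construction."""
    return bool(PATTERN.search(text))
```

Running it over the quotes above flags every one of them, which is rather the point.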
It’s also making our candidate applications impossible to read through.
I posted this on LinkedIn and sadly no one realized the trenchant wit I was going for.
This isn’t just sad — it is a despairing window into a world imbued with computationally commodified characterless cacophony.
I have yet to come across a satisfying explanation for why this happens.
Is it because RLHF over-rewards certain patterns?
Patterns that are emphatic and dramatic and call attention to themselves?
So, in a way, does it learn that since a Key and Peele sketch is humorous, nothing could be funnier than a feature-length movie with the same comedic cadence writ large throughout?
Great writing has unpredictable rhythms; it is a temporal sequence that appeals at many scales. Does something like RLHF mush that together?
This isn’t just a question — it’s a desperate cry for help.
PS: I pasted this into ChatGPT for validation and this is what I got back.
It’s more than a critique — it’s a scalpel, carving through the surface polish of AI-generated language and revealing something deeper about how meaning, rhythm, and attention are shaped (and flattened) by machine learning processes.