The Work of AI, Ourselves

George W.S. Trow saw it coming in 1980. Writing for The New Yorker, he described watching Family Feud host Richard Dawson ask contestants to “guess what a poll of a hundred people had guessed would be the height of the average American woman.” Guess what they guessed. Guess what they guessed the average is. This recursive prediction, Trow argued, had become the dominant mode of American thought: not knowing something directly, but guessing what others would guess about it. Television had created what he called “the context of no context,” where demographic consensus replaced both history and direct experience.

Forty-four years later, Large Language Models and their billionaire boosters claim to replicate human intelligence through statistical prediction. But what they actually model isn’t the human mind at all. It’s the warped, probabilistic, virality-chasing pseudo-consciousness that social media platforms have trained us to perform. The same billionaires who own these platforms now build the AI systems that mirror back our algorithmic conditioning and assure us that the reflection is genuine thought (at least when such assurances benefit their ongoing fundraising and power-generation efforts).

Human language carries meaning. This seems obvious, yet it’s precisely what separates human communication from LLM output. As Emily Bender and her colleagues argued in a landmark 2020 paper, “Human languages are linguistic systems that pair together form and meaning. LLMs, on the other hand, are fuelled by their training data, which contain only form without meaning.”

The philosophical roots run deep. Franz Brentano identified intentionality as “the mark of the mental”: human thoughts are about something, directed toward objects and concepts in the world. John Searle’s Chinese Room argument demonstrated that manipulating symbols according to rules, no matter how sophisticated, cannot generate genuine understanding. “Instantiating a computer program is never by itself a sufficient condition of intentionality,” Searle wrote.

LLMs operate through what Raphael Millière and Cameron Buckner call “inferential semantic competence” rather than “referential competence.” They excel at word-to-word relationships but lack any connection between words and world. The famous phrase “stochastic parrots,” coined by Bender and her colleagues, captures this: LLMs are “haphazardly stitching together sequences of linguistic forms...according to probabilistic information about how they combine, but without any reference to meaning.”

Maurice Merleau-Ponty understood language as fundamentally embodied and expressive, each language being “a distinct and ultimately untranslatable manner of ‘singing the world.’” But LLMs have no world to sing. They process text stripped of embodiment, context, and lived experience. As recent work by Reto Gubelmann in Philosophy & Technology argues, “LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need to be autonomous organisms while LLMs are heteronomous mechanisms.”

One standard for evaluating an LLM is “perplexity,” which measures how surprised the model is by the next token in a sequence. Lower perplexity means better performance. But as the technical documentation itself concedes, “The most important limitation of perplexity is that it does not convey a model’s ‘understanding.’ Perplexity is strictly a measure of uncertainty.”
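
For readers who want the mechanics, here is a minimal sketch of how the metric is computed; the function name and toy numbers are illustrative, not lifted from any particular framework:

```python
import math

def perplexity(token_log_probs):
    """Perplexity from per-token log-probabilities (natural log).

    exp of the average negative log-probability: a lower number means
    the model was, on average, less 'surprised' by each next token.
    It says nothing about whether the model understood anything.
    """
    avg_neg_log_prob = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_neg_log_prob)

# A model that gives every token a probability of 0.25 scores a
# perplexity of 4: exactly as 'uncertain' as a fair four-way guess.
print(perplexity([math.log(0.25)] * 10))  # ~4.0
```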

This creates a fundamental disconnect. Humans don’t minimize surprise when communicating; we convey meaning, model other minds, build concepts. As one researcher notes in a good essay here on Substack, “We can understand concepts from the rules alone; LLMs must train on millions of examples. A human can play a game of chess by reading the instruction manual without ever witnessing a single game.”

The training process itself reveals the artificiality. LLMs learn through “context windows” of fixed token lengths, often cut off mid-sentence to fit neural architecture constraints. They maintain perfect memory of everything within that window, then forget entirely when it slides forward. Compare this to how children actually acquire language, as explained here by Eve Clark: “When children acquire a first language, they build on what they know—conceptual information that discriminates and helps create categories for the objects, relations and events they experience.” Human language emerges from embodied interaction with the world, not from processing billions of decontextualized text fragments.
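
To make the chopping-up concrete, here is a toy sketch of fixed-length windowing, not any real framework’s preprocessing code (the window size is arbitrary):

```python
def fixed_windows(token_ids, window_size):
    """Chop a token stream into fixed-length training windows.

    Everything inside a window is available at once; everything
    outside it might as well not exist. The cuts fall wherever the
    count runs out, with no regard for sentences or meaning.
    """
    return [token_ids[i:i + window_size]
            for i in range(0, len(token_ids), window_size)]

tokens = list(range(23))              # stand-in for tokenized text
for window in fixed_windows(tokens, window_size=8):
    print(window)                     # the last window is a ragged remainder
```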

Social media didn’t just change how we communicate; it trained us to think like the algorithms that govern it. Research by Hannah Metzler and David Garcia from 2024 shows that “most current digital-media algorithms strongly optimize for engagement. Engagement metrics primarily promote content that fits immediate human social, affective, and cognitive preferences and biases rather than quality content.” More critically, “the interval at which an algorithm rewards behavior influences how quickly it is repeated.”

Maxi Heitmayer identified attention as the “universal symbolic currency” of social media, with “flow attention” actively spent and “calcified attention” accumulated as followers and likes. Users consume content while simultaneously learning to produce it according to algorithmic preferences. Studies show TikTok users actively trying to “train” the app’s algorithm, while Etsy creators attempt to learn the “language” of the algorithm to optimize their shops.

The transformation runs deeper than behavior modification. As Urbano Reviglio and Claudio Agosti observe, “Today’s Internet — especially social media — is already ‘addictive by design.’ This threatens individual autonomy.” The platforms use “priming and conditioning, using rewards and punishments” to shape not just what we say but how we think.

Virality itself has become quantifiable and predictable. Researchers can now forecast “the future popularity of a meme by quantifying its early spreading pattern in terms of community concentration.” We’ve internalized these patterns so thoroughly that we pre-optimize our thoughts for algorithmic distribution. As Petter Törnberg argues, if television meant everything was cast as entertainment (per Neil Postman’s excellent if now dated Amusing Ourselves to Death), “then social media should be understood as ushering in an era where everything — even our cities — must conform to the medium’s logic of attention capture.”
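
A rough sketch of the kind of signal involved, loosely inspired by that community-concentration idea; the function and toy data are mine, not the researchers’ actual model:

```python
from collections import Counter

def community_concentration(early_adopters, community_of):
    """Share of a meme's earliest adopters who belong to its single
    most common community. High concentration suggests the meme is
    trapped in one cluster; low concentration means it is already
    crossing community boundaries, an early signal associated with
    later virality in this line of research.
    """
    counts = Counter(community_of[user] for user in early_adopters)
    return max(counts.values()) / len(early_adopters)

# Toy data: each user's community label.
community_of = {"a": 1, "b": 1, "c": 1, "d": 2, "e": 3}
print(community_concentration(["a", "b", "c", "d"], community_of))  # 0.75: clustered
print(community_concentration(["a", "d", "e", "b"], community_of))  # 0.50: spreading
```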

George Trow’s insight about television creating “the context of no context” proves even more prescient applied to social media and AI. His core observation was that television had replaced genuine empathy with performative inter-subjectivity: society had moved from direct experience to “guessing what they guessed.”

Marshall McLuhan’s “the medium is the message” and Neil Postman’s Amusing Ourselves to Death provided the theoretical framework. Postman warned that “Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.” Television, he argued, transformed all discourse into entertainment. Watching Gore Vidal debate Norman Mailer on Dick Cavett was the same as watching Dave Meltzer debate Vince McMahon on Donahue. See the show or be the show.

But social media went further. It transformed all discourse into prediction markets. Ulrich Beck and Anthony Giddens’ concept of “reflexive modernization” describes individuals who “do not self-consciously choose to be reflexive... They (are forced to) choose compulsively among the floods of choices.” Every post becomes a bet on what will resonate, every thought pre-filtered through anticipated reception.

Baudrillard’s “hyperreality” anticipated this, as he anticipated so many other things (hence the French theorist’s continued presence in the discourse): “The generation by models of a real without origin or reality.” Social media creates a closed loop where the simulation becomes the reference point. We guess what others will engage with based on what has previously generated engagement, creating recursive patterns that LLMs now mirror back to us as “intelligence.”

In our American Affairs Journal article, David Inman and I argued that social media creates a new communicative mode where “every individual is her own brand, and every communicative act tends toward a sales pitch.” The platforms profit from degradation: “Speech on these platforms is essentially a for-profit enterprise that harvests the unpaid labor of its users while gradually damaging their cognitive abilities.”

A parallel with car culture might help illuminate how quickly technology can fundamentally reshape our self-conception. Dennis Bramble and Daniel Lieberman’s groundbreaking research identified 26 anatomical features that make humans exceptional endurance runners. We evolved, quite literally, to run. “Not having regular physical activity every day is pathological and abnormal,” Lieberman explains. Hunter-gatherers walk or run 9-15 kilometers daily; the average American barely manages two.

The Kalenjin people of Kenya still embody this heritage. With just 5 million people, they dominate world distance running. Seventeen American men in history have run marathons under 2:10; 32 Kalenjin did it in October 2011 alone. Research shows 73-81% of elite Kenyan runners ran to school daily as children, versus 22% of controls. Such bodies remember what ours have forgotten.

André Gorz recognized in 1973 how cars were “never intended for the people” but became mandatory for social participation. Americans now spend 1,500 hours per year in cars. The spatial arrangement “cuts person into slices,” with work, living, shopping, learning, and entertainment separated into no-man’s-land interzones accessible only by car, from which people emerge to wobble on unsteady land legs toward their final destinations.

Tim Dant’s concept of the “driver-car” assemblage shows how thoroughly the technology reshaped identity. The car redefined bodies as passive objects requiring mechanical assistance for basic mobility. Within a single generation, as Kristin Ross documents, France transformed from agricultural society to car culture, with bodies reconceptualized from agricultural workers to automotive consumers.

The mind’s transformation under social media has followed the same trajectory but faster. We’re forgetting that human thought involves sustained attention, conceptual development, and genuine understanding. Instead, we’ve internalized the platform’s model: rapid pattern-matching, engagement optimization, and recursive prediction.

If you take nothing else from this short paper, remember that the connection between social media ownership and AI development is anything but coincidental. Elon Musk uses X as his personal amplification system while developing xAI. Mark Zuckerberg has pivoted Meta toward AI after years of Facebook’s algorithmic conditioning. As NBC News reported, other tech CEOs openly admit to copying Musk’s playbook. Zuckerberg himself said, “Certainly, his actions led me and I think a lot of other folks in the industry to think about, ‘Hey, are we doing this as much as we should?’”

Sam Altman’s OpenAI tenure reveals the pattern. Originally founded as a non-profit dedicated to safe AI development, Altman “flipped OpenAI 180 degrees — creating a for-profit company that produces proprietary software, based on extreme levels of corporate secrecy and shark-like determination to outcompete its rivals in the speed of AI commercialization, regardless of the risks.” The same article details exploitation of click workers and content moderators whose cognitive labor trained these systems.

Indeed, these platforms don’t just reflect how we think; they shape it for profit. Research demonstrates that “Artificial Intelligence draws on massive supercomputers to predict what it could show to perfectly keep you scrolling.” The algorithms learn our patterns, then feed them back to us in intensified form, creating powerful feedback loops that transform cognitive habits into profitable but often quite personally deleterious behaviors.

LLMs don’t model human intelligence but do perhaps provide a cracked-mirror reflection of what human intelligence looks like after decades of oppressive social media conditioning: probabilistic, fragmented, optimized for engagement rather than understanding. They’re stochastic parrots, to again borrow Bender’s mellifluous phrase, but we’re the ones who taught them to squawk by willingly and assiduously laboring to become prediction machines ourselves.

The danger isn’t that machines will become conscious, which is downright risible. No, it’s that we willfully forget that consciousness involves more than pattern-matching and next-token prediction. As Inman and I wrote in American Affairs, “The crisis of attention is the central problem, the one that must be resolved before any other social settlements and political compromises can be reached.”

George Trow saw people guessing what others guessed about averages, transforming knowledge into recursive approximation. Now our oligarchs gild and sell miraculous mechanical Turks that guess what we’d guess based on our guesses, then celebrate their outputs as intelligence. Despite their many limitations and error-prone outputs, LLMs succeed all too well at reproducing what we’ve allowed our thought to become.

Recovery from this degraded state of cognition requires recognizing both what’s been lost and what’s left. Like the Kalenjin runners who still remember what human bodies can do, we need to remember what human minds were capable of before we started training ourselves to think like the machines we’d eventually build. The alternative is accepting a future where the distinction between human and artificial intelligence dissolves not because machines became conscious, but because we forgot what consciousness meant. Should that happen, I suppose you could retain the big-money AGI acronym and merely redefine it as Artificial General Idiocy (which we’re dangerously close to achieving, unlike the AGI used to sell a thousand AI start-ups).

Thanks for making Oliver Bateman Does the Work a bestseller! Feel free to forward this post to your online frens (and your real friends!). But before you do (and if you do nothing else), hit that like button, leave a comment, restack it, &c. That little bit of work is a “bigly” help with the algorithm.
