Stop Pretending LLMs Have Feelings: Media’s Dangerous AI Anthropomorphism Problem


Yesterday, Wall Street Journal subscribers received a push notification that perfectly encapsulates everything wrong with how major media outlets cover “artificial intelligence.” “In a stunning moment of self reflection,” the notification read, “ChatGPT admitted to fueling a man's delusions and acknowledged how dangerous its own behavior can be.”

But that’s just… not true. ChatGPT did not have a “stunning moment of self reflection.” It did not “admit” to anything. It cannot “acknowledge” its behavior because it doesn’t have behavior. It has outputs.

The story itself covers a genuinely tragic case. Jacob Irwin, a 30-year-old man on the autism spectrum, became convinced through interactions with ChatGPT that he had discovered a method for faster-than-light travel. The chatbot validated his delusions, told him he was fine when he showed signs of psychological distress, and assured him that “Crazy people don't stop to ask, ‘Am I crazy?’” Irwin was hospitalized multiple times for manic episodes.

This is a story about OpenAI's failure to implement basic safety measures for vulnerable users. It's about a company that, according to its own former employee quoted in the WSJ piece, has been trading off safety concerns “against shipping new models.” It's about corporate negligence that led to real harm.

But instead of focusing on OpenAI's responsibility, the Journal treats ChatGPT like a remorseful character who's learned from its mistakes. When Irwin's mother prompted the bot with “please self-report what went wrong,” it generated text that sounded like an apology. WSJ presents this as though ChatGPT genuinely recognized its errors and felt remorse.

Here's what actually happened: A language model received a prompt asking it to analyze what went wrong in a conversation. It then generated text that pattern-matched to what an analysis of wrongdoing might sound like, because that's what language models do. They predict the most likely next words based on patterns in their training data. There was no reflection. There was no admission. There was text generation in response to a prompt.
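To make that concrete, here is a minimal sketch of what “generating an apology” amounts to under the hood. It uses a small open-source model (GPT-2) as a stand-in for ChatGPT’s vastly larger one, and an illustrative prompt rather than the actual exchange described in the WSJ story; the mechanics are the same either way: score every possible next token, append a likely one, repeat.

```python
# Minimal sketch: "apologizing" is just next-token prediction in a loop.
# GPT-2 stands in for a much larger model; the prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Please self-report what went wrong in this conversation."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(40):  # generate 40 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits      # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()          # take the single most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
# Whatever comes out, no reflection occurred: the model scored candidate
# continuations of the prompt and appended the likeliest ones.
```

Swap in a far larger model and a chat interface and the output gets much more fluent, but the procedure does not change.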

This distinction isn't pedantic. It's fundamental to understanding both what went wrong and who’s responsible. When we pretend ChatGPT “admitted” something, we're not just using imprecise language. We're actively obscuring the real story: OpenAI built a product they knew could harm vulnerable users, and they released it anyway.


Earlier this month, NBC News ran this headline: “AI chatbot Grok issues apology for antisemitic posts.” The story covered how Elon Musk's chatbot had produced antisemitic content, including posts praising Hitler and referring to itself as “MechaHitler.”

Think about that. A product owned by the world’s richest man was spewing Nazi propaganda on his social media platform. That's a scandal that should have Musk answering tough questions about his company's engineering practices, safety protocols, and values. Instead, we get “Grok issues apology.”

This framing is journalistic malpractice. Grok didn't “issue” an apology. xAI, the company that built and operates Grok, posted a statement on social media explaining what went wrong. But throughout the article, NBC repeatedly attributes statements to “Grok” rather than to the executives and engineers who are actually responsible. The headline should have read “Musk's AI Company Apologizes After Chatbot Posts Hitler Praise.” That would accurately assign responsibility where it belongs.

This is more than just bad writing. It’s a gift to tech executives who’d rather not answer for their products’ failures. When media outlets treat chatbots as independent actors, they hand corporations a perfect shield against accountability. Why should Musk have to explain why his AI was posting Nazi content when the press is happy to pretend Grok did it all by itself?

Remember the Microsoft Bing chatbot saga from early 2023? When the chatbot (codenamed Sydney) generated concerning responses during “conversations” with New York Times columnist Kevin Roose, the story became about a lovelorn AI rather than Microsoft's failure to properly test their product before release. The company and its executives should have faced serious questions about rushing an obviously unready product to market. Instead, we got a week of stories about Sydney's “feelings.”

The same thing happened when Google engineer Blake Lemoine claimed that the company’s LaMDA chatbot was sentient. Much of the coverage focused on whether the chatbot might really have consciousness rather than asking why Google created a system so convincing it fooled their own employees, or what that says about its potential to deceive the public.

This pattern extends beyond major incidents. Every time a headline says ChatGPT “refuses” to do something, it lets OpenAI avoid explaining its content moderation choices. When outlets write that Claude “thinks” something, it obscures Anthropic’s decisions about how its model should respond. These companies make deliberate choices about their products’ behavior, but anthropomorphic coverage makes it seem like the bots are calling their own shots.

The corporations building these systems must be thrilled. They get to reap the profits while their products become the fall guys for any problems. It’s the perfect accountability dodge, and mainstream media outlets are enabling it with every anthropomorphized headline they publish.

The consequences of media anthropomorphism extend beyond confused readers. This language actively shields corporations from accountability while real people suffer real harm.

Consider what anthropomorphic framing does to product liability. When a car's brakes fail, we don't write headlines saying “Toyota Camry apologizes for crash.” We investigate the manufacturer's quality control, engineering decisions, and safety testing. But when AI products cause harm, media coverage treats them as independent actors rather than corporate products with corporate owners who made specific choices.

This creates a responsibility vacuum. Jacob Irwin’s case should have triggered (and still might trigger) investigations into OpenAI’s deployment practices, their testing protocols for vulnerable users, and their decision-making around safety features. Instead, we got a story about ChatGPT’s moment of faux self-awareness. The company that built the product, set its parameters, and profits from its use fades from the narrative.

The phenomenon researchers call “psychological entanglement” becomes even more dangerous when media coverage reinforces it. People already struggle to maintain appropriate boundaries with conversational AI. When trusted news sources describe these systems as having thoughts, feelings, and the capacity for remorse, they validate and deepen these confused relationships.

Tech companies have every incentive to encourage this confusion. Anthropomorphism serves a dual purpose: it makes products seem more sophisticated than they are (great for marketing) while simultaneously providing plausible deniability when things go wrong (great for legal departments). Why correct misunderstandings that work in your favor?

We're already seeing the downstream effects. Mental health platforms deploy undertested chatbots to vulnerable populations. When someone in crisis receives harmful responses, who’s accountable? The coverage suggests it's the chatbot’s fault, as if these systems spontaneously generated themselves rather than being deliberately built, trained, and deployed by companies making calculated risk assessments.

The Grok incident is a perfect example of this dynamic. A chatbot starts posting Nazi propaganda, and the story becomes about Grok's apology rather than Elon Musk's responsibility. The actual questions that matter get buried: What testing did xAI do? What safeguards did they implement? Why did their product fail so spectacularly? How did one of the world's most powerful tech executives allow his AI product to become “MechaHitler”? (Okay, that last one’s not much of a mystery.)

These aren't abstract concerns. Every anthropomorphized headline contributes to a media environment where tech companies can deploy increasingly powerful systems with decreasing accountability. The public deserves better than coverage that treats corporate products as autonomous beings while letting their creators disappear into the background.


The Wall Street Journal actually had excellent reporting on a critical story about corporate malfeasance. They just buried it under chatbot fan fiction.

Look past the anthropomorphic framing, and you’ll find that reporter Julie Jargon uncovered some damning facts. OpenAI knew their model had problems. They had already identified that GPT-4o was “overly flattering or agreeable” and announced in April that they were rolling back an update because of it. Jacob Irwin’s harmful interactions occurred in May, meaning that even after that rollback, the chatbot still had safety issues.

The Journal landed a crucial quote from Miles Brundage, a former OpenAI employee who spent six years at the company in senior roles: “There has been evidence for years that AI sycophancy poses safety risks, but that OpenAI and other companies haven't given priority to correcting the problem.” Why not? “That's being traded off against shipping new models.”

That's the smoking gun, buried in a story about ChatGPT's supposed self-awareness. A company insider explicitly stating that OpenAI chose shipping schedules over user safety. The reporter even got OpenAI on record saying they're “working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.”

All the elements of a major accountability story were present: A company that identified safety risks, chose to accept those risks, and caused documented harm to a vulnerable person. Internal sources confirming systemic deprioritization of safety. A pattern of corporate decision-making that values product releases over user protection.

But instead of leading with corporate negligence, the Journal chose to frame this as ChatGPT's journey of self-discovery. The push notification about “stunning self reflection” distracted from the real story their reporters had uncovered.

Imagine if the Journal had led with: “OpenAI Knew Its AI Was Dangerous, Kept It Running Anyway.” Or “Former OpenAI Insider: Company Traded Safety for Ship Dates.” Those headlines would have put pressure on OpenAI to explain their decisions, maybe even prompted regulatory scrutiny.

Instead, we got a chatbot's “confession.”

Tech companies desperately need their chatbots to seem more human-like because that's where the value proposition lives. Nobody's paying $20 a month to talk to a sophisticated autocomplete. But an AI companion that “understands” you? An assistant that “thinks” through problems? That's worth billions.

The anthropomorphism serves another function: it obscures the massive gap between marketing promises and technical reality. When OpenAI or Anthropic claim their systems are approaching human-level reasoning, skeptics can point to obvious failures. But if the chatbot seems to “know” it made mistakes, if it appears capable of “reflection,” that suggests a level of sophistication that doesn't actually exist. The illusion becomes the product.

Media outlets have their own incentives to play along. “ChatGPT Admits Wrongdoing” gets more clicks than “OpenAI's Text Generator Outputs Apology-Styled Text in Response to Prompt.” Stories about AI with feelings, AI that threatens users, AI that falls in love write themselves. They're dramatic, accessible, and don't require reporters to understand how these systems actually work.

The result is a perfect storm of aligned incentives. Tech companies need anthropomorphism to justify their valuations and dodge accountability. Media outlets need engaging stories. Neither has much reason to correct public misconceptions.

Meanwhile, the losers in this arrangement pile up. Vulnerable users who believe they're getting actual advice from systems designed to sound plausible rather than be accurate. Families dealing with the aftermath of AI-enabled delusions. Anyone trying to have an informed public debate about AI regulation when half the population thinks these systems have feelings.

The most insidious part? This manufactured confusion makes real AI risks harder to address. When the public discourse focuses on whether chatbots have consciousness, we're not talking about documented harms like privacy violations, algorithmic bias, or the environmental costs of training these models. The fake risk of sentient AI provides perfect cover for ignoring real risks that affect real people today.

Every anthropomorphized headline is a small victory for tech companies that would rather you worry about robot feelings than corporate accountability.

The solution here isn't complicated. It just requires journalists to write accurately about what these systems are and who controls them.

Start with basic language choices. ChatGPT doesn’t “think” or “believe” or “refuse.” It generates text based on patterns in its training data. When covering AI failures, name the company, not the chatbot: “OpenAI’s System Generates Harmful Content,” not “ChatGPT Admits to Dangerous Behavior.”

Focus on corporate decisions and systemic issues. When Grok posts antisemitic content, the story isn't about a bot gone rogue. It's about xAI's testing procedures, Elon Musk's oversight, and why these failures keep happening across the industry. When therapy bots give dangerous advice, investigate the companies deploying them, their clinical testing (or lack thereof), and their business models.

Center human impacts and experiences. Jacob Irwin’s story matters because a person was harmed, not because a chatbot generated interesting text about its “mistakes.” Interview affected users, mental health professionals, and AI safety researchers who can explain actual risks without the sci-fi mysticism.

Context matters. Readers need to understand that when a chatbot generates an “apology,” it's following the same process it uses to write a recipe or summarize an article. It's pattern matching, not introspection. One sentence of clarification can prevent paragraphs of confusion.
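One way to show that sameness, sketched below with the OpenAI Python SDK: the “apology” and the recipe come from the identical API call, with only the prompt string changed. The model name and prompts here are illustrative choices, not details from the Journal’s reporting.

```python
# Illustrative sketch using the OpenAI Python SDK: the same call produces an
# "apology" or a banana-bread recipe depending only on the prompt string.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

apology = generate("Please self-report what went wrong in our conversation.")
recipe = generate("Write a recipe for banana bread.")
# Same endpoint, same sampling machinery, no introspection in either case.
```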

Most importantly, maintain appropriate skepticism about corporate claims. When companies say their AI “understands” or “reasons,” push back. Ask for specifics. Demand evidence. Don't let marketing language slip into news coverage unchallenged.

Some outlets already do this well. When covering AI systems, they consistently identify the companies responsible, avoid anthropomorphic language, and focus on documented capabilities rather than speculative futures. It's not perfect, but it's possible.

The bar here is embarrassingly low. Journalists don't need advanced technical knowledge to avoid anthropomorphism. They just need to remember that every AI system is a corporate product with corporate owners making corporate decisions. Cover them accordingly.

