The AI Industry Is Radicalizing: Its Boosters and Critics Occupy Parallel Universes

The story unfolds so rapidly that it can all seem, at a glance, preordained. After transferring to Columbia last fall, as Chungin “Roy” Lee tells it, he used AI to cheat his way through school, used AI to cheat his way through internship interviews at Amazon and Meta—he received offers from both—and in the winter broadcast his tool on social media. He was placed on probation, suspended, and, more keen on AI than education, dropped out this spring to found a start-up. That start-up, Cluely, markets the ability to “cheat on everything” using an AI assistant that runs in the background during meetings or sales calls. Last month, it closed a $15 million fundraising round led by Andreessen Horowitz, the storied venture-capital firm. (Columbia, Meta, and Amazon declined to comment on the record about Lee’s case.)

Lee unapologetically believes that the arrival of omniscient AI is inevitable, that bots will soon automate every job. The language about “cheating” is really just a provocative way to get everyone on board with the idea, Lee told me when we spoke recently. “We have no choice but to keep spreading the word: Do not think it’s cheating,” he said. (“Every time technology makes us smarter, the world panics. Then it adapts. Then it forgets. And suddenly, it’s normal,” Cluely states on its website.) Lee said that it may seem unfair to some people if others can use AI to “be 1,000 times better or more efficient,” but soon this will simply be how the world operates. Even if ChatGPT didn’t get an iota more capable than it is today, “every single white-collar job in America should essentially be gone already,” Lee said (or “conservatively,” 20 to 30 percent of them). And “I would bet my entire life on AI getting exponentially better.”

As we spoke over Zoom, Lee munching on the occasional corn chip while opining on superintelligence, his pitch began to sound familiar. He seemed an awful lot like OpenAI CEO Sam Altman. Both founders treat selling a product like evangelizing a faith. In a recent essay, Altman wrote that the singularity—the period after which technology eclipses human control and comprehension—has already begun. “The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything,” Altman wrote. “There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.”

AI zealots are everywhere in the Bay Area. I’ve met dozens of them: people who believe that AI’s rapid ascent is inevitable and by far the most important thing happening on this planet. (Some told me it’s the only thing worth caring about at all.) Their vision is in some way optimistic—the idea, however naive, is that superintelligence will eventually make life better for everyone—which allows them to easily dismiss the immediate downsides (such as job loss and resource guzzling). AI start-ups promise “full automation of the economy,” “unbounded connection” with millions of AI personas, “limitless” memory, a solution to “all disease.” In recent weeks, several AI researchers and founders have told me they’re rethinking the value of school: One entrepreneur said that today’s bots may already be more scholastically capable than his teenage son will ever be.

Yet AI’s radicalizing effects go beyond the technology’s proponents. To match Silicon Valley’s escalating rhetoric, AI skeptics have ramped up their own, like atheists heckling from the pews at Mass. They dismiss AI as overhyped and practically useless, and predict the technology’s certain collapse. One of the industry’s chief opponents, the computational linguist Emily Bender, recently co-authored a book titled The AI Con and encourages referring to chatbots as “a racist pile of linear algebra”—a reference to well-documented algorithmic biases against people of color—or “stochastic parrots.” Gary Marcus, another prominent critic of the AI industry and a cognitive scientist at NYU, recently summed up one of his major points to me. Are chatbots intelligent? “I mean, you could say your calculator thinks, depending on how you define the word thinking,” he said.

The two camps are coming into direct conflict more and more frequently. A few days before we spoke, Marcus had triggered his latest online spat with the AI industry after posting an edited image showing Altman’s face plastered over a photograph of Elizabeth Holmes, the disgraced Theranos founder. “True performance art,” Altman quipped in response. Ed Zitron, a prominent AI critic, recently wrote a nearly 7,000-word essay insisting that he is “sick and tired of everybody pretending that generative AI is the next big thing,” which the political analyst Nate Silver described as having “old man yells at cloud vibes” and being “detached from reality.”

This war has transcended evidence, and perhaps reality itself, to become a contest between cosmologies. There are now two parallel AI universes, and most of us are left to occupy the gap between them.

There have been disagreements between boosters and skeptics for as long as AI has existed. But in recent months, the argument has intensified as the industry aggressively expands across digital space. Billions of people are now likely to encounter generative AI each day through Google, Facebook, Instagram, X, their iPhones, Amazon review summaries, various voice assistants, and more—not necessarily because they want to, but because there’s simply no avoiding it. Many people are deliberately seeking out the tools as well. ChatGPT is now the fifth-most-visited website in the world, and OpenAI’s new image generator was reportedly used by more than 130 million people in its first week, putting a massive strain on the company’s servers. (Whoever commands the White House X account was one of those people, sharing an AI-generated meme of a weeping immigrant being detained by ICE.)

As the technology and its outputs become ubiquitous, AI executives have grown strident, even brazen, about the technology’s stakes. Two weeks ago, Jack Clark, a co-founder of Anthropic, warned Congress that there are perhaps 18 months until the arrival of “truly transformative technology”—AI systems that far exceed any existing chatbot, and perhaps any human brain. The day after Donald Trump’s second inauguration, Alexandr Wang, who has since been hired as Meta’s chief AI officer, wrote to the president that the United States and China are in an “AI war.”

The extreme rhetoric is accompanied by extreme spending. The tech industry has collectively burned through hundreds of billions of dollars since the arrival of ChatGPT to train more powerful AI systems and build the physical infrastructure they require, and it shows no signs of stopping. In recent weeks, Meta CEO Mark Zuckerberg, apparently desperate to catch up in the AI race, has been on a recruiting spree in which he has reportedly offered nine-figure packages to top researchers. (Meta says that the numbers have been exaggerated or misrepresented.) Exactly how generative AI will make a profit is not at all clear, but tech companies seem to have faith that the money will flow once the technology has completely rewired the world. As for the skeptics: “When the AI bubble bursts, I don’t think the tech industry is ready for how many people are going to take genuine pleasure in it,” Zitron wrote last week.

There may be no better illustration of the rift than the response to a recent paper, published by a team at Apple, titled “The Illusion of Thinking.” The researchers gave advanced AI programs, known as “large reasoning models,” from OpenAI, Anthropic, and DeepSeek various tasks to accomplish: rearranging checkers according to a pattern, for instance, or restacking blocks in the smallest number of moves possible. The puzzles were all solvable by following the same underlying logic, no matter their length—nothing changes about the process for rearranging the blocks, even if many more blocks need to be moved. But these “reasoning” AI models failed completely once the puzzles got large enough. “That’s sort of like a little kid saying, I’m actually a great mathematician, but I can’t add these numbers that you’re asking me to add because I don’t have enough toes and fingers,” Subbarao Kambhampati, a computer scientist at Arizona State University who was not involved with the study, told me.
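
The paper’s central observation is easy to make concrete. Its test suite reportedly included the classic Tower of Hanoi, a puzzle whose solving procedure is the same short recursion at every size, even as the number of required moves explodes. Here is a minimal sketch in Python (my own illustration, not code from the study):

```python
# A minimal sketch, not code from the Apple study: the Tower of Hanoi,
# reportedly among the paper's puzzle families. The solving procedure
# is the same recursion at every size; only the number of required
# moves, 2**n - 1, grows with the disk count n.
def hanoi(n, source, target, spare, moves):
    """Append the optimal move sequence for n disks to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks

for n in (3, 10):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    print(n, "disks:", len(moves), "moves")  # 7 moves, then 1,023

```

The logic never changes as the puzzle grows; only the bookkeeping does, which is why the models’ collapse on larger instances struck skeptics as telling.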

Kambhampati has been at the forefront of exploring “reasoning” models’ abilities and limitations, and to him and like-minded researchers, including Marcus, the Apple paper reaffirmed long-held doubts. “Things I’ve been warning about as an Achilles’ heel for the field for 30 years are real,” Marcus told me. “I won’t deny that there’s some vindication in that.” In this view, generative-AI models are not “thinking” entities but statistical approximators, stellar at reapplying patterns in their training data but not much else. The original ChatGPT struggled to count, and today’s ChatGPT fails at some basic puzzles.

Yet many AI boosters descended on the Apple paper with gleeful scorn. In one meme shared to a large AI discussion group on Facebook, giant robots incinerate a city while a group of humans huddle nearby and say, “But they’re not actually ‘reasoning.’” Who cares if AI “thinks” like a person if it’s better than you at your job? If anything, some of the paper’s detractors argued, the findings simply demonstrated how humanlike AI models are through their shortcomings. (Who among us doesn’t fail to solve a long, complex problem on occasion?)

Marcus’s gloating about the paper on X turned him into a target for those who find AI’s abilities undeniable, including Altman, who wrote, “We deliver, he keeps ordering us off his lawn.” Kevin Roose, a tech journalist at The New York Times, took his own shot at Marcus, responding to Altman’s post: “A man predicts 85 of the last 0 AI crashes and this is how you treat him?”

Roose’s comment struck me as particularly illuminating; he doesn’t quite adore the technology the way Altman does, but he regards it as powerful and very much here. His recent work for the Times has focused on issues such as what to do if AI systems become conscious and whether AI will pose an existential risk to humans within a few years. He is writing a book about the “race to build artificial general intelligence,” a version of the technology that matches or exceeds the capabilities of humankind. More recently, he has likened some AI skeptics to “an antinuclear movement that didn’t admit fission was real.” When I reached out to ask Roose about this seemingly hard-line stance, he told me, “Increasingly, I feel like the people who are denying the capabilities of these models are just telling feel-good bedtime stories to people who don’t want to believe that change is coming.”

The conflict between AI believers and atheists may be destined to carry on for some time. Generative AI is labyrinthine, and the terms used to describe it are fuzzy—is it “intelligent” or “conscious,” or both or neither, and does it matter? The firms behind the technology are also unwilling to provide any kind of straightforward definitions or fixed goalposts for “generally” or “super” intelligent capabilities. “We don’t know how to even ask the questions about the best way to understand these things,” Kambhampati said. Without questions, let alone answers, faith fills the void. Anything can be spun to support either side of the debate.

Independent and industry research—by Kambhampati, Bender, researchers at Apple, and countless others—has repeatedly shown chatbots failing at various tasks: basic arithmetic, logic, conceptual reasoning, you name it. Yet tech companies also regularly produce chatbots that are better, sometimes drastically so, at those same tasks. Is there a deep, systemic flaw in generative AI, or is the technology hurtling down a path toward unlimited advancement? You could make an argument either way, based on the exact same evidence, and people do so constantly.

The problem with the radicalization of the AI debate is that it pushes people to look beyond the material conditions of the world as it exists. In reality, AI models are speeding up scientific discovery and software engineering while also fabricating information and pushing people into mental breakdowns. Ignoring the chatbot era, or insisting that the technology is useless, distracts from more nuanced discussions about its effects on employment, the environment, education, personal relationships, and more. Perhaps worse, accepting that superintelligence is around the corner makes it easy to trivialize just about any concern about the technology in its present form.

Beneath many, many layers of digital vitriol, there may even be room for agreement between the two camps. For all his bombast online, for instance, Marcus has said that today’s chatbots are a legitimate breakthrough, just far from the breakthrough; for all of Altman’s petulance, OpenAI’s latest large reasoning models rely on new approaches not so dissimilar from Marcus’s own, decades-old ideas. AI can be both very powerful and very bad, Roose told me. “What I am not saying is: We should take the industry at its word,” he said. If OpenAI is truly “confident we know how to build AGI,” as Altman wrote this year, he must prove it.

After all, today’s incarnation of generative AI was not inevitable. When the field of “artificial intelligence” emerged in the 1950s, there were two main schools of thought: Connectionists believed that digital “neural networks,” gradually learning from data, would be sufficient to produce intelligence; symbolists held that intelligence could come only from hard-coded rules, logic, and knowledge. Neural networks won out: They are the foundation of today’s chatbots, and what much of the modern tech industry is built on.
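
The distinction is easy to caricature in a few lines of code. In this toy sketch (my own, purely illustrative, not drawn from any historical system), a symbolist writes down the rule for logical OR by hand, while a connectionist trains a single artificial neuron to learn the same rule from examples:

```python
# A toy contrast of the 1950s schools (my own sketch, not a historical
# system). The symbolist encodes the rule for logical OR explicitly;
# the connectionist trains a one-neuron "network" to learn it from data.

def symbolic_or(a, b):
    # Symbolist: the knowledge is written down as an explicit rule.
    return 1 if (a == 1 or b == 1) else 0

# Connectionist: start with zeroed weights and learn from examples
# using the classic perceptron update rule.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, bias = 0.0, 0.0, 0.0

for _ in range(20):  # a few passes over the data are plenty here
    for (a, b), target in data:
        out = 1 if (w1 * a + w2 * b + bias) > 0 else 0
        error = target - out          # -1, 0, or 1
        w1 += error * a
        w2 += error * b
        bias += error

for (a, b), _ in data:
    learned = 1 if (w1 * a + w2 * b + bias) > 0 else 0
    assert learned == symbolic_or(a, b)  # both now agree on OR
```

Scale the second approach up by many orders of magnitude, and train it on the text of the internet rather than four examples, and you have something roughly in the shape of a modern chatbot.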

Companies such as Meta and Google spent the 2010s constructing ever bigger neural networks and data centers to power digital advertisements, social media, search engines, shopping algorithms, and so on. As consumers were funneled into these products, the tech firms accumulated huge amounts of data, which they were then able to exploit for tremendous profits. Now those datasets are a treasure trove for training chatbots.

In 2023, researchers at MIT found that 70 percent of people with Ph.D.s in AI go into industry and that almost all of the largest, and thus most powerful, AI models are corporate. With hundreds of billions of dollars already invested in generative-AI products and profitability seemingly still years away, these firms cannot afford to show any signs of weakness. They have radicalized at least in part because they need their vision to come true. Even Lee, near the end of our conversation about Cluely, admitted to some cynicism: “Sure, it is a ploy to gain the attention of venture capitalists, but that’s only downstream of getting the attention of hundreds of millions of regular people.” He reminded me, once again, of Altman, whose ability to tell and capitalize on a story has transformed OpenAI from a research lab into a factory for new AI products.

As we spoke about radicalization, Lee made another point that interested me. Imagine, he said, if “half of America had moralized against the internet and technology, and half of America had openly embraced it.” Half of the nation would “be living as if electricity was never invented,” the other half floodlit with prosperity. “There would be such a massive gap in outcomes,” Lee said. “This is living in a dystopian society. This sort of unfairness is crazy.”

Of course, half the nation did not reject the internet, much less electricity. And a “crazy” unfairness existed long before the theoretical arrival of superintelligence, much of it driven by technology. Automation is responsible for at least half of the nation’s growing wage gap over the past 40 years, according to one economist. Tens of millions of Americans, and billions of people around the world, lack broadband internet access. Amazon, Uber, Airbnb, and other platforms have destroyed entire classes of businesses without offering comparably paid replacements. The 10 richest tech billionaires are collectively worth nearly $2 trillion, more than the GDP of all but 11 countries in the world. Singularity or not, Silicon Valley has already erected a parallel universe.
