In 2017, soon after Google researchers invented a new kind of neural network called a transformer, a young OpenAI engineer named Alec Radford began experimenting with it. What made the transformer architecture different from that of existing A.I. systems was that it could ingest and make connections among larger volumes of text, and Radford decided to train his model on a database of seven thousand unpublished English-language books—romance, adventure, speculative tales, the full range of human fantasy and invention. Then, instead of asking the network to translate text, as Google’s researchers had done, he prompted it to predict the most probable next word in a sentence.
The machine responded: one word, then another, and another—each new term inferred from the patterns buried in those seven thousand books. Radford hadn’t given it rules of grammar or a copy of Strunk and White. He had simply fed it stories. And, from them, the machine appeared to learn how to write on its own. It felt like a magic trick: Radford flipped the switch, and something came from nothing.
His experiments laid the groundwork for ChatGPT, released in 2022. Even now, long after that first jolt, text generation can still provoke a sense of uncanniness. Ask ChatGPT to tell a joke or write a screenplay, and what it returns—rarely good, but reliably recognizable—is a sort of statistical curve fit to the vast corpus it was trained on, every sentence containing traces of the human experience encoded in that data.
When I’m drafting an e-mail and type, “Hey, thanks so much for,” then pause, and the program suggests “taking,” then “the,” then “time,” I’ve become newly aware of which of my thoughts diverge from the pattern and which conform to it. My messages are now shadowed by the general imagination of others. Many of whom, it seems, want to thank someone for taking . . . the . . . time.
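What the program is doing in that pause can be sketched in a few lines. The toy below is a bigram counter, not a transformer; its three training sentences and its helper names ("follows," "predict_next") are invented for illustration, and only the objective is the one Radford used: given the words so far, predict the most probable next word.

```python
# A toy next-word predictor: count which word follows which in a tiny
# corpus, then continue a phrase greedily. (Radford's model was a
# transformer trained on thousands of books; only the training
# objective, next-word prediction, is the same here.)
from collections import Counter, defaultdict

corpus = (
    "thanks so much for taking the time . "
    "thanks so much for taking the trouble . "
    "thanks so much for the kind words ."
)

follows = defaultdict(Counter)  # word -> counts of the words that follow it
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

# Extend the phrase one word at a time, each choice inferred purely
# from patterns in the data.
word, continuation = "for", []
for _ in range(3):
    word = predict_next(word)
    continuation.append(word)

print(" ".join(continuation))  # prints: taking the time
```

Scale the three sentences up to seven thousand books, and swap the lookup table for a transformer that can weigh every preceding word at once, and you have the rough shape of Radford's experiment.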
That Radford’s breakthrough happened at OpenAI was no accident. The organization had been founded, in 2015, as a nonprofit “Manhattan Project for A.I.,” with early funding from Elon Musk and leadership from Sam Altman, who soon became its public face. Through a partnership with Microsoft, Altman later secured access to powerful computing infrastructure. But, in 2017, the lab was still searching for a signature achievement. On another track, OpenAI researchers were teaching a T-shaped virtual robot to backflip: the bot would attempt random movements, and human observers would vote on which most resembled a flip. With each round of feedback, it improved—minimally, but measurably. The company also had a distinctive ethos. Its leaders spoke about the existential threat of artificial general intelligence—the moment, vaguely defined, when machines would surpass human intelligence—while pursuing it relentlessly. The idea seemed to be that A.I. was potentially so threatening that it was essential to build a good A.I. faster than anyone else could build a bad one.
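The backflip experiment ran on a different loop, one that would later matter enormously: improvement by human preference. The sketch below is a deliberately crude stand-in, not OpenAI's method; the judge function plays the role of the human voters, a movement is a single number, and TARGET is invented for illustration. Each round proposes a random variation and keeps whichever of the two the judge prefers.

```python
# A skeletal preference loop: propose a random variation, let a judge
# vote between the old movement and the new one, keep the winner.
# The judge stands in for the human observers; a "movement" is one
# number, and TARGET is invented for illustration.
import random

TARGET = 0.8  # what a backflip "looks like," unknown to the agent

def judge(a: float, b: float) -> float:
    """Stand-in for a human voter: pick the movement closer to a flip."""
    return a if abs(a - TARGET) < abs(b - TARGET) else b

random.seed(0)
movement = random.random()  # begin with a random movement
for _ in range(200):  # each round of feedback improves it, minimally but measurably
    variant = movement + random.gauss(0, 0.1)
    movement = judge(movement, variant)

print(f"learned movement: {movement:.3f} (target: {TARGET})")
```

In the real experiment, the votes were used to train a model of human preference, and that learned preference, rather than any hard-coded target, served as the robot's reward signal.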
Even Microsoft’s resources weren’t limitless; chips and processing power devoted to one project couldn’t be used for another. In the aftermath of Radford’s breakthrough, OpenAI’s leadership—especially the genial Altman and his co-founder and chief scientist, the faintly shamanistic Ilya Sutskever—made a series of pivotal decisions. They would concentrate on language models rather than, say, back-flipping robots. Since existing neural networks already seemed capable of extracting patterns from data, the team chose not to focus on network design but instead to amass as much training data as possible. They moved beyond Radford’s cache of unpublished books and into a morass of YouTube transcripts and message-board chatter—language scraped from the internet in a generalized trawl.
That approach to deep learning required more computing power, which meant more money, putting strain on the original nonprofit model. But it worked. The release of GPT-2, in 2019, was an epochal event in the A.I. world; the more consumer-oriented ChatGPT, in 2022, made a similar impression on the general public. User numbers surged, as did a sense of mystical momentum. At an off-site retreat near Yosemite, Sutskever reportedly set fire to an effigy representing unaligned artificial intelligence; at another retreat, he led colleagues in a chant: “Feel the AGI. Feel the AGI.”
In the prickly “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI” (Penguin Press), Karen Hao tracks the fallout from the GPT breakthroughs across OpenAI’s rivals—Google, Meta, Anthropic, Baidu—and argues that each company, in its own way, mirrored Altman’s choices. The OpenAI model of scale at all costs became the industry’s default. Hao’s book is at once admirably detailed and one long pointed finger. “It was specifically OpenAI, with its billionaire origins, unique ideological bent, and Altman’s singular drive, network, and fundraising talent, that created a ripe combination for its particular vision to emerge and take over,” she writes. “Everything OpenAI did was the opposite of inevitable; the explosive global costs of its massive deep learning models, and the perilous race it sparked across the industry to scale such models to planetary limits, could only have ever arisen from the one place it actually did.” We have been, in other words, seduced—lulled by the spooky, high-minded rhetoric of existential risk. The story of A.I.’s evolution over the past decade, in Hao’s telling, is not really about the date of machine takeover or the degree of human control over the technology—the terms of the A.G.I. debate. Instead, it’s a corporate story about how we ended up with the version of A.I. we’ve got.
The “original sin” of this arm of technology, Hao writes, lay in a decision by a Dartmouth mathematician named John McCarthy, in 1955, to coin the phrase “artificial intelligence” in the first place. “The term lends itself to casual anthropomorphizing and breathless exaggerations about the technology’s capabilities,” she observes. As evidence, she points to Frank Rosenblatt, a Cornell professor who, in the late fifties, devised a system that could distinguish between cards with a small square on the right versus the left. Rosenblatt promoted it as brain-like—on its way to sentience and self-replication—and these claims were picked up and broadcast by the New York Times. But a broader cultural hesitancy about the technology’s implications meant that, once OpenAI made its breakthrough, Altman—its C.E.O.—came to be seen not only as a fiduciary steward but also as an ethical one. The background question that began to bubble up around the Valley, Keach Hagey writes in “The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future” (Norton), “first whispered, then murmured, then popping up in elaborate online essays from the company’s defectors: Can we trust this person to lead us to AGI?”
Within the world of tech founders, Altman might have seemed a pretty trustworthy candidate. He emerged from his twenties not just very influential and very rich (which isn’t unusual in Silicon Valley) but with his moral reputation basically intact (which is). Reared in a St. Louis suburb in a Reform Jewish household, the eldest of four children of a real-estate developer and a dermatologist, he had been identified early on as a kind of polymathic whiz kid at John Burroughs, a local prep school. “His personality kind of reminded me of Malcolm Gladwell,” the school’s head, Andy Abbott, tells Hagey. “He can talk about anything and it’s really interesting”—computers, politics, Faulkner, human rights.
Altman came out as gay at sixteen. At Stanford, according to Hagey, whose biography is more conventional than Hao’s but quite compelling, he launched a student campaign in support of gay marriage and briefly entertained the possibility of taking it national. At an entrepreneur fair during his sophomore year, in 2005, the physically slight Altman stood on a table, flipped open his phone, declared that geolocation was the future, and invited anyone interested to join him. Soon, he dropped out and was running a company called Loopt. Abbott remembered the moment he heard that his former student was going into tech. “Oh, don’t go in that direction, Sam,” he said. “You’re so personable!”