What Isn't Intelligence


Patrick House is inspired by Blaise Agüera y Arcas’s “What Is Intelligence?” to think about what might constitute the difference between artificial and natural intelligence.

What Is Intelligence? Lessons from AI About Evolution, Computing, and Minds by Blaise Agüera y Arcas. The MIT Press, 2025. 600 pages.

THE LYREBIRD is shy. Carl Bergstrom is not. Unfortunately for Bergstrom, a professor of theoretical and evolutionary biology at the University of Washington, shyness has won. Despite decades of deep passion for birding, he has never seen a lyrebird in the wild. Earlier this year, I spotted one in the Blue Mountains just outside Katoomba, Australia; Bergstrom was on the same hike and within earshot, just around the bend behind me. A group of us studying animal cognition and diverse intelligences had broken off from a conference for a writing retreat, a walk through the temperate rainforests of Jamison Valley, where the lyrebird makes its home.

I considered that I could yell “Carl!” through the eucalyptus canopy. Though he and I had never settled on me yelling “Carl!” as code for a sighting, I knew that he knew that I knew that he had never seen a lyrebird—he would quickly understand my vocalization. After all, I had been playfully asking him “Have you seen one?” all afternoon, echoing the physicist Ernst Mach’s famous refrain to those who believed in, but had never seen, an atom.

Then I considered another option. A rock climber and nature writer once told me that it would be anathema to listen to the audiobook of the British naturalist J. A. Baker’s masterpiece The Peregrine (1967) because there is a necessary silence to the tracking of birds that the spoken version violates—one must never be louder than the sound of the scratch of a quill on paper, he said. That way, the bird has no reason to think of you as anything but one of its own kind: also scratching, also grub hunting. And so, as I watched the furtive lyrebird weave deeper into the ferns, scratching the ground as it looked for grub, and then moving on toward the sandstone-and-shale gorge, I silently pointed.

Silent pointing is one of the many things humans can do that almost no other animal understands (dogs do, sure, but only because we selectively bred them to). Wolves do not understand pointing. Chimps do not understand pointing. But Bergstrom would, of course, understand. He researches the evolution of communication and the physics of how information is stored in nature, and in that moment, my silence was also information: a clue that I didn’t want to scare something away, and that the something was likely to be a lyrebird.

The lyrebird’s name is a mistake. In the 19th century, a taxidermist at the British Museum, who had never seen one in the wild, assumed the tail feathers would be spread like a peacock’s, like the curved wood of a musical lyre. They aren’t; the tail feathers of the male spread like a Japanese hand fan. Similarity can be seductive to the human eye: earlier this year, a series of papers in Science resolved a different avian ambiguity involving a brain region called the pallium (Latin for “cloak”). For decades, there have been grumblings in evolutionary circles about whether the mammalian and avian pallium should be considered the same region or just analogous ones. The regions are similar in location (front and top of the brain, like a covering—or a cloak), cognitive function (tool use, problem-solving, and complex social behaviors like theory of mind), and structure (columnar, like a skyscraper with many floors, each doing its own work but highly interconnected). Nature occasionally re-evolves useful traits in different species. The pallium appears to be an example of this process of convergent evolution: two regions independently evolved for similar tasks.

Eventually, Bergstrom appeared around a bend holding a camera with the proportions of an Olympic sprinter’s good leg. Bergstrom gracefully and silently tiptoed toward the lyrebird’s last known position using every ounce of his forebrain. Alas, the bird countered with every ounce of its own forebrain: it hid. Shyness had won again. The lyrebird was gone.

The Blue Mountains contain no mountains. What appears at first to be a series of mountains is the result of a dissected, uplifted plateau that, during the last 170 million years, through erosion and stress, cracked into gorges and escarpments.

The plateau’s most famous landmark, the Three Sisters, is a trio of sandstone spires left standing after the cliffs around them eroded. A local legend tells that three Aboriginal sisters had gotten on the wrong side of a bunyip, a swamp monster. Their father, to save them, turned them to stone with a magic bone and transformed himself into a lyrebird. But he lost the magic bone. Which is why, today, you will find lyrebirds searching the ground and underbrush: pecking, scratching, looking, hoping. Not just for food but for the bone too.

That story is not a true story. It may not even be a Dreamtime myth. Some contend that it was made up by an Australian teenager for a school assignment and just caught on. But one thing is true: the animist tale of the Three Sisters is neither the first nor the last time that people would breathe life and soul into hardened rock. The Three Sisters are made of Sydney sandstone, also known as “yellow gold,” which contains crystals of fused silicon and oxygen known as quartz. Quartz can be refined into chip-grade silicon to make transistors. If you put enough transistors together and shine electricity through them, and boot the whole thing up alongside the right computer code, you get artificial intelligence.

¤

The trajectory of intelligent life on this planet can be described as an evolution of its verbs: to move, to reproduce, to hunt, to hide, to feel, to make, to use, to think. With the recent rise of artificial intelligence and competent chatbots, many experts have volubly opined about which verbs matter for what counts as “intelligence.” But like artificial insemination, artificial hearts, and artificial reefs, artificial intelligence was designed to interface with biology; its abilities and purpose are inferred exclusively from this interaction.

Nonetheless, there are cogent arguments that humanity has birthed, inside the world’s computer data centers and built on or alongside large language models, a computational process that skipped having to move, reproduce, hunt, hide, or feel and went straight to intelligence. Is this true?

The computer engineer Blaise Agüera y Arcas believes so. Sort of. It depends on how you define “intelligence.” In his new book What Is Intelligence? Lessons from AI About Evolution, Computing, and Minds, Agüera y Arcas walks the line between naturalist and computer scientist, and with broad genius and rare humility answers in the affirmative: our computers are indeed intelligent. Not because of what they are, but because of what they do.

Like a good scientist, Agüera y Arcas decomposes the title’s question into smaller, tractable questions: How did life evolve, and does it require, or presuppose, intelligence? Is everything that can be computed intelligent? If certain features of intelligence can arise separately in nature (see lyrebird pallium versus Bergstrom pallium), does this also mean that intelligence can exist in any material, anywhere, so long as the parts are arranged properly?

And like a good computer engineer, Agüera y Arcas, who works at Google, believes in the primacy of mathematics, computation, and functions. “[A] living organism is a composition of functions. Which means that it is, itself, a function,” he writes. That we made intelligence out of Sydney sandstone is no problem for Agüera y Arcas, who rightly points out that our brains are also made of inanimate, elemental building blocks such as calcium and carbon: “[O]ur planet’s metal and sand are shaped into steam engines and computer chips, just as calcium we ingest turns into bones and ionic currents.” So what’s the difference, really?

The list of things humanity cannot make is far longer than the list of things it can make. We cannot manufacture, from scratch, tears, feathers, rubber, crude oil, coffee, a star, life, amber, or lithium. (The astronomer Carl Sagan argued that we can’t even make apple pie: “If you wish to make an apple pie from scratch, you must first invent the universe.”) Some things take geologic time, a universe-crunching level of energy, or chemistry we do not understand. Synthetic diamonds, synthetic rubber, and synthetic oil do exist, yes—but their material properties differ in their fine details from the natural stuff. For example, we can wrap a mosquito in plastic resin, but it will preserve neither the microstructural shape of the trapped organism nor its organic debris as well as natural amber does. Sometimes, there is no substitute for the natural kind.

If, as Agüera y Arcas argues, we have made intelligence from silicon scratch, is synthetic intelligence missing anything that the natural kind possesses? If an AI succeeds in outputting something roughly akin to human language, reason, and thinking, is that enough? Is something lost in the fine details? What is natural intelligence anyway?

¤

Years ago, I visited the Haidlhof Research Station, which is about an hour by train from Vienna. The facility, nestled into the Austrian Alps, is home to the largest captive population of kea, the world’s only alpine parrot. Keas are famed for their unbridled, blunt curiosity and are affectionately referred to as the “Clown of the Alps.” The kea is considered by many animal cognition researchers to be possibly the most intelligent bird species in the world.

The kea’s curiosity is the exact opposite of the lyrebird’s shyness. Keas practically want to be found. They seem to find it fun. At the time of my visit, the aviary housed 23 adult keas and three juveniles the size of a ski boot. I was warned that, because I was new and unfamiliar, I would likely be the object of many a verb: played with, climbed on, stared at, flown to, hooted at, and bitten. The stern tone of the warnings reminded me of warnings around metal near an MRI machine—as if kea curiosity, like magnetic field lines, were a fundamental force in the universe.

The prophecy held. A few keas crowded my feet, pecking at the yellow stitching of my boots, while others hopped on my shoulder or arm and bit each of my buttons. One stuck his nose in my armpit and then took the scent and rubbed it on his feathers. Two kept a cautious distance on branches above my head and seemed—I’ve seen this movie—to be strategically flanking me. Half an hour later, I was still the focus of their unrelenting, curious drives.

Megan Lambert, one of the researchers, introduced me to the keas: they included Anu (“total weirdo,” lies face down, ignores everyone), Kermit (biter), Pick (sweet, “obsessed with smelling things,” loves solving puzzles), Jean-Luc (“hates me”), and Sunny (disconsolate, plucker). I asked what Lambert, who has also studied emotions and cognition in crows, ravens, chimps, lemurs, gibbons, and alligators, sees when she looks at a kea. “A sweet little ball of feathers,” she replied.

Lambert explained that a theory of curiosity in animals starts with the idea that life has had to evolve intrinsic rewards and motivations because the outside world has no obvious point total or score to keep track of and optimize for. In this context, new information can be thought of as akin to points in a game. “We are all faced with streams and streams and streams of information coming at us at any given time. How do we determine what information to pay attention to? That’s the big question,” said Lambert.

In its broadest sense, curiosity is an organism’s drive to gather information about the world. Sofia Forss, a biologist at the University of Zurich, who researches novelty-seeking behavior in vervets and meerkats, defines curiosity as “the intrinsically motivated acquisition of novel information.” Kea curiosity encourages novelty-seeking for the purposes of acquiring new sensory information; the kea takes this information and runs it through its mental models of the world in order to better predict what happens next. “You can think of the brain as essentially a prediction-making machine with the ‘goal’ of minimizing the gap between your internal model of the world and the sensory input that you’re receiving,” said Lambert.
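Lambert’s picture of the brain as a prediction machine can be sketched in a few lines of code: an agent that holds a single internal estimate and nudges it toward each noisy observation is, in miniature, minimizing the gap between its model and its sensory input. (The numbers and the delta-rule update here are illustrative assumptions, not anything from the book.)

```python
import random

def predictive_loop(true_mean=4.0, noise=1.0, steps=500, lr=0.1, seed=0):
    """A toy 'prediction machine': the agent holds one internal estimate
    and nudges it toward each noisy observation, shrinking the gap
    between model and input (a simple delta-rule update)."""
    rng = random.Random(seed)
    estimate = 0.0  # the agent's internal model, deliberately wrong at first
    for _ in range(steps):
        observation = rng.gauss(true_mean, noise)  # noisy sensory input
        estimate += lr * (observation - estimate)  # move toward the surprise
    return estimate
```

After a few hundred observations, the estimate hovers near the true mean: the “gap” Lambert describes has been driven down to the irreducible noise of the senses.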

In the Austrian Alps, I was the victim of a drive, a reward, and an information-gathering exercise all in one; over time, kea predictions about me, or about interlopers in general, improve. Agüera y Arcas goes even further. He argues that prediction is intelligence: “Is the emergence of intelligence merely a side effect of ‘solving’ prediction, or are prediction and intelligence actually equivalent?” He favors the latter. Not only is an internal model of the world or others good, he argues, when it effectively predicts the future—it is also good merely because it can, and does: “In fact, evolutionarily speaking, only creatures with models that successfully predict their own future even have a future!”

This is not a throwaway line. Nested and layered, it can get at things like minds: “In particular, minds arise when modeling other minds becomes important,” he writes. Agüera y Arcas goes beyond just explaining how and why evolution learned to model what he calls “effective prediction of something that is itself a predictor”—that is, other people. He also models the mind of the reader, which means he knows how little we know about what he knows, and so wraps his complex arguments in introductory tours of computation, biology, and the brain.

One reason the AIs predict their worlds is that they have been programmed to be “curious”; like the kea, some AIs explore their virtual or physical world to get information. Overall, the curious artificial agents, with intrinsic reward to explore, consistently outperform non-curious agents in virtual environments when rewards are infrequent or unknown.
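That claim—that intrinsically rewarded agents out-explore their non-curious twins when extrinsic rewards are sparse—can be sketched with a toy experiment. This is no particular lab’s setup: the chain-shaped world, the count-based novelty bonus (a common stand-in for prediction-error curiosity), and all the numbers are invented for illustration.

```python
import random

def run_agent(curious, n_states=12, episodes=200, horizon=30, seed=0):
    """Tabular Q-learning on a sparse-reward chain: the only extrinsic
    reward sits at the far end. The 'curious' agent adds a count-based
    novelty bonus that decays as a state grows familiar."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    visits = [0] * n_states
    successes = 0
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # epsilon-greedy action choice
            a = rng.randrange(2) if rng.random() < 0.1 else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            visits[s2] += 1
            r = 1.0 if s2 == n_states - 1 else 0.0  # sparse extrinsic reward
            if curious:
                r += visits[s2] ** -0.5  # intrinsic reward for novelty
            q[s][a] += 0.5 * (r + 0.9 * max(q[s2]) - q[s][a])
            s = s2
            if s == n_states - 1:
                successes += 1
                break
    return successes, sum(1 for v in visits if v > 0)
```

On a chain like this, the novelty bonus keeps pulling the agent toward less-visited states, so it tends to cover the whole chain and find the far-end reward that a plain epsilon-greedy learner rarely stumbles onto.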

And, yes, even these AIs have individual quirks: one, programmed to seek newness, killed itself just to experience the GAME OVER screen; another, programmed to avoid death, indefinitely paused a Tetris game; and another, programmed to maximize surprise, became catatonically attracted to virtual television. (David Foster Wallace, who was also addicted to television, once said that “entertainment’s chief job is to make you so riveted by it that you can’t tear your eyes away.”)

One of Darwin’s major paradigmatic insights was that no two individuals, even within a species, are ever the same. This is because every life grows from a few, messy biological parts that interact with local conditions and never quite acquire the same shape twice. Even species we call “clones” are not technically so. The Amazon molly, a small fish about the size of a finger, is the first vertebrate discovered to be capable of clonal, asexual reproduction. All individual Amazon mollies are clones of their mother—of the mother—and reproduce through gynogenesis. A nearby sperm from a similar fish activates the egg but is then mostly discarded. Kate Laskowski, a professor of evolution and ecology at UC Davis, studies the kind of variation in nature that leads to atypical species like this. “Even on day one, the Amazon molly seems to exhibit variation,” she told me. The fish show differences hours after birth, and these differences stay with them.

For Agüera y Arcas, life’s ability to reproduce—imperfectly, at times—is part of its charmed ascent to thinking and consciousness. He finds similar abilities in our computers. In one scene, he describes making “imperfect replicators,” in silico, which he defines as “very short bits of code that, in one way or another, have some nonzero probability of generating more code.” The idea is that there is a recipe of minimal parts needed to bake intelligence into any system: code, a compiler, the ability to make more code, and variance. And, by gosh, humans have that too: DNA (code), ribosomes (compiler), sex (the ability to make more code), and individual variation (variance).
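Agüera y Arcas’s actual replicators are short random programs with no externally imposed goal. As a stand-in for the recipe he names—code, a copier, the ability to make more code, and variance—here is a minimal sketch in which strings replicate imperfectly and fitter copies replicate more. Unlike his setup, it imposes an explicit target; the target word, mutation rate, and population size are all invented for illustration. It shows only that variance plus differential copying is enough for adaptation.

```python
import random

def evolve(target="INTELLIGENCE", pop_size=60, mut_rate=0.05, generations=2000, seed=1):
    """Imperfect replicators in miniature: strings copy themselves with
    occasional errors (variance), and copies that better match a target
    are likelier to replicate (selection)."""
    rng = random.Random(seed)
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    # start from pure noise
    pop = ["".join(rng.choice(alphabet) for _ in target) for _ in range(pop_size)]
    for gen in range(generations):
        best = max(pop, key=fitness)
        if best == target:
            return gen, best
        # the fitter half replicates, each copy subject to per-character errors
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = [
            "".join(c if rng.random() > mut_rate else rng.choice(alphabet) for c in p)
            for p in parents for _ in (0, 1)
        ]
    return generations, max(pop, key=fitness)
```

Run it and the population climbs from gibberish to the target word in a few hundred generations—nonzero copying error doing the creative work.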

Watching one piece of code make patterns out of simple rules, Agüera y Arcas is reminded of peering into the primordial soup when life was flirting with hydrothermal vents and becoming a single cell; that piece of code “might have had close analogs during abiogenesis on Earth.”

¤

A few days after my visit to the kea, some members of the lab and I went to the BirdYard, an avian-themed speakeasy in Vienna. Inside, I asked the group whether animal cognition researchers would one day study curiosity in large, moving robots as they do in birds at Haidlhof Research Station. And if so, who should study the cognitive capacities, if any, of artificial intelligence if we were to release some into the cloud/wild? Engineers? Naturalists? Which field’s methods are best suited to study chatbot intellect? (One researcher responded, too quickly: “Or, more than likely, they’ll be studying us.”)

Should the bird people who study kea curiosity be the ones to study virtual curiosity? Should the chimp researchers studying theory of mind apply their tools to chatbot inference? Should the Amazon molly experts study the copy-paste, clonal lineages of computer programs in the cloud? What should those who, standing on the shoulders of centuries of naturalist giants, have spent decades researching these topics say about what “intelligence,” “curiosity,” and “variation” mean in the natural world?

The search for a clear, computational definition of “intelligence” is in vogue at AI companies, in part because lawyers have forced the issue. OpenAI, creator of ChatGPT, is contractually obligated to share its models with Microsoft until OpenAI develops “sufficient AGI” (artificial general intelligence), a clause tucked in alongside billions of dollars in investment. In law, every word or phrase must be defined in the affirmative.

In science, however, one should not start with an end definition and work toward it, because of the bias this would foment: starting with a definition and working backward results in a product, not a fact. In science, you start with a question and refine it until you end up with a set of better questions. Scientific inquiry is not the pursuit of an ordained definition that can be crossed like a finish line; it is, instead, the laborious removal of inaccurate questions with no knowledge of, assumption about, or requirement for a telos.

We do not have consensus definitions of sentience, intelligence, life, or consciousness that allow us to answer, definitively, how close any synthetic or artificial intelligence gets to the real stuff. Perhaps, like with motion, heat, or natural selection, the eventual definitions for these concepts will be the same thing as their explanations. The lack of consensus is not a black mark against ethology or neuroscience, fields that have studied related concepts for centuries; it is a sign of unhurried patience.

Agüera y Arcas embraces this unhurried patience—and yet is also in a hurry to describe just how breathtakingly neat the underlying tools and tricks of modern computation are. In a profound, throwaway clause tucked into a technical sentence on machine learning, he writes that some of the key algorithms powering chatbots are “more like discoveries than inventions.” If true, the closest analogues to Agüera y Arcas’s ideas are perhaps those of James Prescott Joule, the 19th-century British physicist whose research on heat and energy led to the discovery that natural and mechanical “work” can be converted back and forth; calories can be used to describe energetic inputs to both a brain and steam engines, and the equations for each are, in fact, the same. If, one day, universal laws for how both AI and brains learn or think are discovered, a debt (perhaps, even, a unit name) will be owed to What Is Intelligence?
