Extract from Grace Huckins' winning entry


GRACE HUCKINS – THE END OF UNDERSTANDING

Ask any neuroscientist about “the microprocessor paper,” and they’ll immediately know what you mean. It’s a quirky bit of research, led by a quirky scientist: University of Pennsylvania professor Konrad Kording, whose shock of electric-blue hair stands out at any academic conference. With his then-student Eric Jonas, who is now a professor at the University of Chicago, Kording aimed the toolset of neuroscience away from chunks of brain tissue and toward a single computer chip, to see if they could figure out how it worked.

The feeling at the time, Kording and Jonas wrote, was that neuroscientists were desperately short of data. If only they could record from more monkey neurons, or turn off more fruit fly genes, or map out more mouse synapses, they could work out how the brain functioned once and for all. To test that idea, they chose a system from which they could glean as much data as they wished: an exact computer simulation of the chip that ran the Commodore 64 home computer and the Atari 2600 gaming console. If neuroscientists were right to put their faith in data, then Kording and Jonas should be able to understand the chip just as well as its designers did, as long as they examined it thoroughly enough.

Treating the chip like a brain, they recorded the electrical activity of each “neuron” (i.e., transistor) as the microprocessor “behaved” (ran Donkey Kong), and they observed how that “behaviour” changed when individual “neurons” were destroyed. But all that data—which they analysed using common neuroscientific approaches—taught them very little about how the chip worked. “Our results stayed well short of what we would call a satisfying understanding,” they wrote. Without new theories and analysis approaches, they concluded, more data wasn’t going to help anyone understand how the brain worked.
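The shape of that experiment is simple enough to sketch in a few lines of code. The snippet below is purely illustrative, not the authors' actual pipeline: the chip simulator here (fake_chip_runs_game) is a made-up stand-in, included only to show the logic of silencing one "neuron" at a time and asking whether the behaviour survives.

```python
# Toy sketch of a "lesion study" in the spirit of the experiment described above.
# fake_chip_runs_game is a hypothetical stand-in, not the real chip simulation.

def fake_chip_runs_game(disabled_transistors):
    """Pretend simulator: the game only boots if transistors 0-9 are all intact."""
    return all(t not in disabled_transistors for t in range(10))

def lesion_study(num_transistors, runs_game):
    """Disable each transistor in turn and record which ones are 'essential',
    i.e. the behaviour (running the game) breaks without them."""
    essential = []
    for t in range(num_transistors):
        if not runs_game({t}):       # behaviour with transistor t silenced
            essential.append(t)
    return essential

print(lesion_study(100, fake_chip_runs_game))   # -> [0, 1, ..., 9]
```

As the paragraph above notes, even a complete list of "essential" transistors produced this way turned out to say very little about how the chip actually works.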

Since Kording and Jonas published their paper almost a decade ago, neuroscience has grown even more enamoured of data. The Allen Institute in Seattle is exhaustively cataloguing neurons in human and mouse brains, and, late last year, an international consortium of scientists released a complete map of the fruit fly brain—containing over ten million individual neuron-to-neuron connections. And the tools available for analysing all of that data have grown far more powerful with the rise of deep learning and generative AI. In the past couple of years, neuroscientists have used AI tools to write sentences that will activate specific regions in the brain of someone who hears them, mimic the process by which baby brains learn to respond to the visual world, and even reconstruct the podcast episode that someone is listening to based solely on their brain activity. But though this research has radically advanced our abilities to manipulate the human brain, decode its activity, and accurately model it in a computer, it has offered limited insight into how the brain does all of the remarkable things that it does.

In my book, I will argue that the explosion of public, large-scale datasets and the rapid advancement of AI have only made Kording and Jonas's warning all the more pressing. From neuroscience and psychology to biochemistry and climate science, this glut of data has enabled extraordinary practical advances: new snake antivenoms and cheaper, more accurate weather prediction will meaningfully improve human lives. But, thus far, large-scale data has failed to deliver on its greatest promise: to slake the primal human thirst to understand our universe and ourselves.

I believe we are in the midst of a radical change in the way that science operates. Never before has it made sense to ask whether science is about developing new technologies and interventions or about understanding the universe—for centuries, those two goals have been one and the same. Now that big data and AI have dissociated those two objectives, we have the responsibility to decide which matters most. Data has given us permission not to understand the world around us. Whether that permission fails us depends on what we do with it.

A New Scientific Revolution

When you understand a system well, you can also make a wide variety of predictions about it. That’s part of the reason that understanding is a core goal of scientific inquiry. You can’t predict the effects of mutations without understanding how genes are transformed into proteins. You can’t predict which car designs might reduce air resistance without understanding fluid dynamics. And you can’t predict whether new drugs might be effective against diabetes without understanding the biochemistry of the disease. You might achieve some one-off successes—if you know that GLP-1 agonists cause weight loss, and type 2 diabetes is associated with obesity, you might correctly anticipate that semaglutide can treat the disease—but if you want to make successful predictions in varied circumstances, you need understanding.

Or, you used to. Then AI got good. Most AI systems are prediction machines: they churn through prodigious volumes of data to find statistical patterns that allow them to predict an output, such as the next word in a document, from some input, like the 100 words that came before it. Before getting their data infusion, these systems aren’t very intelligent at all. They don’t have access to any of the theories or models that we humans use to understand the world, so they need lots and lots of data to uncover the patterns that they need—far more data than any human could hope to comprehend. And because they’re so good at working through huge volumes of data, they can make predictions far better than any human could—no understanding required.
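A toy example makes the idea concrete. The sketch below is a deliberately simplified illustration, nothing like the neural networks used in practice: it predicts the next word purely from co-occurrence counts in a small corpus, with no theory of language at all, only statistics of which word tends to follow which.

```python
from collections import Counter, defaultdict

def train_bigram_model(tokens):
    """Count, for every word, how often each possible next word follows it."""
    follows = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Predict the continuation seen most often in the data; no understanding
    of meaning is involved, only pattern frequency."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the brain is a prediction machine and the brain learns from data".split()
model = train_bigram_model(corpus)
print(predict_next(model, "the"))   # prints "brain", the most frequent follower
```

Scaled up to billions of documents and far longer contexts, the same basic move, predicting an output from the statistics of its inputs, is what lets modern systems make accurate predictions without any model of why the patterns hold.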

In the past few years, AI systems have been used to predict all sorts of outcomes in all sorts of scientific disciplines. They can identify which molecules might make good antibiotics, choose which antidepressant is likely to work best for a particular patient, and design new types of batteries. We don’t yet have theories of depression that can help us explain why particular patients respond better to Prozac than Zoloft. But with AI and big data, that’s no longer a necessity.

If an AI system does ever win itself a Nobel prize, it will not be the first agent to achieve that goal in the absence of understanding: Alexander Fleming discovered penicillin, and thereby saved innumerable lives, by pure chance. But the systematization of innovation without understanding is nothing short of revolutionary. For centuries, understanding has been a key step on the way to achieving the practical, prediction-dependent goals—technological innovation, disaster forecasting, drug discovery—for which science is most lauded. As understanding is increasingly sidelined from that process, we are entering a new scientific era.
