In 2022, Google engineer Blake Lemoine was fired after claiming that the chatbot he was working on was sentient. He believed that it could feel things, potentially suffer, and that the moral status of large language models (LLMs) needed to be taken seriously. It turned out that it was Lemoine who was not taken seriously.
But a few weeks ago, Kyle Fish — who works as an “AI welfare researcher” for Anthropic — told the New York Times that there is a 15% chance that chatbots are already conscious. What’s changed?
One thing is that discussions about “AI consciousness” have moved from philosophical seminars (and pubs) to center-stage in academia, and out of the shadows in the tech industry, too. This transition, driven in large part by the astonishing progress in LLMs, is in one sense a good thing. If we did end up creating conscious machines, deliberately or not, we’d unleash an unprecedented moral crisis. We would introduce new potential for suffering in the world, at the click of a mouse, and of a kind we might not even recognize.
I think the odds of real artificial consciousness — at least along current trajectories for AI — are much lower than most people think (and certainly much lower than 15%). In my work at the interface of neuroscience and AI, and in my book Being You, I lay out three reasons why we tend to overestimate the likelihood of sentient machines.
The first lies in our own psychological makeup. We tend to assume that “intelligence” and “consciousness” go together, so that something sufficiently smart would also be conscious. But just because intelligence and consciousness go together in us doesn’t mean they go together in general. The assumption that they do is a reflection of our psychological biases, not an insight into reality. Language exerts a particularly strong pull on these biases, which is why people wonder whether Anthropic’s Claude is conscious, but not DeepMind’s protein-folding AlphaFold.
The second reason is also an assumption: in this case, that the biological brain is a computer of some kind. If the brain really is a meat-based computer, then everything that depends on its activity — whether intelligence or consciousness — should in principle be possible in a silicon alternative. But the closer you look at the brain, the less like a computer it seems. There is no clean division between “mindware” and “wetware” as there is between hardware and software in our silicon devices, and even a single neuron is a vastly complex biological factory. The brain-as-computer metaphor was only ever a metaphor, and we always get into trouble when we confuse a metaphor with the thing itself. If the brain isn’t actually a computer, there’s much less reason to think that consciousness could happen in silicon form.
To put the point another way: Nobody expects a computer simulation of a hurricane to generate real wind and real rain. In the same way, a computer model of the brain may only ever simulate consciousness, but never give rise to it.
The third reason is that we underestimate other possible explanations. Nobody knows how and why consciousness happens, but there are many other possibilities beyond it being an algorithm on the one hand, or immaterial magic on the other. One possibility I explore in my research is that consciousness arises from our nature as living creatures: that it is life, rather than “information processing,” that breathes fire into the equations of consciousness.
The ethics of all this matter. Even if real artificial consciousness is off the table with current forms of AI, emerging “neuromorphic” technologies, which are becoming ever-more brain-like, may yet move the needle. Even AI that merely seems to be conscious is ethically problematic, even if, under the hood, there is only subjective oblivion. Conscious-seeming AI can exploit our psychological vulnerabilities, distort our moral priorities, and — if we treat things that seem to have feelings as if they don’t (perhaps ignoring their pleas for help) — we risk brutalizing our minds.
In the face of this uncertainty, what should we do? First, we should not deliberately try to create artificial consciousness. Even if we don’t know what it takes to create conscious AI, we also don’t know how to rule the possibility out altogether. Second, we should carefully distinguish the ethical implications of AI that actually is conscious from AI that irresistibly seems to be conscious. The existential uncertainties of the former should not distract us from the clear and present dangers of the latter.
Finally, we face similar uncertainties in many other contexts too: people with severe brain injuries, non-human animals (from bacteria to bats), human foetuses, and strange new creations in synthetic biology, such as “cerebral organoids” (brain cells in a dish that wire up together). In each case, there are ambiguities about whether consciousness is present, and in each case, our decisions carry moral weight. As science, technology, and medicine continue to advance, more of these scenarios will move from the fringes to the spotlight. What we need is nothing less than a satisfactory scientific understanding of consciousness itself.
Disclaimer: Anil Seth is an advisor to Conscium Ltd and AllJoined Inc.