Neither Robot nor Baby

Tyler Cowen links to Rebecca Lowe, who writes,

What I really mean, of course, is that AI has brought a foundational subset of philosophy to the forefront of our lives — specifically, philosophy of mind and metaphysics. These are the philosophical domains that focus, respectively, on the relation between the mind and the body, and on what kinds of things exist in the world. At heart, philosophy of mind is focused on questions like how it is we humans can be both ‘thinking things’ and ‘physical things’. And metaphysics is focused on “chart[ing] the possibilities of real existence”. In many ways, these are the deepest and most difficult domains of philosophy.

Oy.

Instead of starting with professional philosophy, let us start with Daniel Wegner and Kurt Gray, whose book The Mind Club takes an empirical approach. Using surveys, they mapped out what I might call the folk theory of mind. As I wrote in my review,

Wegner and Gray conclude that when we perceive that an entity falls short of having a mind of the same nature as our own, we tend to categorize it in one of two ways. An entity could lack the ability to experience feelings; or it could lack the ability to make intentional decisions.

In my experience with Claude, it does not have the ability to experience feelings, and it does not have the ability to make intentional decisions. Therefore, folk theory of mind would say that it does not have one. End of story.

To elaborate: the folk theory of mind says that humans have both agency and feelings. This can lead to a moral dyad, in which we see one actor as having agency but no feelings (like a robot) and the other actor as having feelings but no agency (like a baby).

Where I disagree with Tyler and with the Zvi is that I do not think of AI as having a mind. It does not have agency and it does not have feelings.

Consider what happened the other day. I asked Claude to rewrite the first chapter of my seminar, using updated instructions. What it came back with was a lot of dialogue, mostly in the manner of the old seminar. Was this agency? No. It was a mistake. I realized that Claude had pulled an old file from our project folder into the process, so it was using the wrong descriptions of the characters.

When I called Claude out on its mistake, two things happened. First, it rewrote the chapter in a way that was much, much closer to what I wanted. Second, it did not get depressed or defensive, the way a human would have in this sort of interaction.

The Zvi could be correct that we should be scared of AI. But that is not because AI will develop a mind of its own. It is because AI could do something bad that a human intends, or it could do something bad that is unintended because of a miscommunication like the one I had with Claude when I first asked it to rewrite the chapter.

It is my firm belief that AI is neither a robot nor a baby. It is powerful computer code. It can have consequences that are good and bad. But the deep philosophical questions are appropriate for fiction, not fact.
