Google DeepMind's CEO Thinks AI Will Make Humans Less Selfish


If you reach a point where progress has outstripped the ability to make the systems safe, would you take a pause?

I don't think today's systems are posing any sort of existential risk, so it's still theoretical. The geopolitical questions could actually end up being trickier. But given enough time and enough care and thoughtfulness, and using the scientific method …

If the time frame is as tight as you say, we don't have much time for care and thoughtfulness.

We don't have much time. We're increasingly putting resources into security and things like cyber and also research into, you know, controllability and understanding these systems, sometimes called mechanistic interpretability. And then at the same time, we need to also have societal debates about institutional building. How do we want governance to work? How are we going to get international agreement, at least on some basic principles around how these systems are used and deployed and also built?

How much do you think AI is going to change or eliminate people's jobs?

What generally tends to happen is that new jobs are created that utilize the new tools or technologies, and those jobs are actually better. We'll see if it's different this time, but for the next few years, we'll have these incredible tools that supercharge our productivity and almost make us a little bit superhuman.

If AGI can do everything humans can do, then it would seem that it could do the new jobs too.

There are a lot of things that we won't want to do with a machine. A doctor could be helped by an AI tool, or you could even have an AI kind of doctor. But you wouldn't want a robot nurse — there's something about the human empathy aspect of that care that's particularly humanistic.

Tell me what you envision when you look at our future in 20 years, when, according to your prediction, AGI is everywhere.

If everything goes well, then we should be in an era of radical abundance, a kind of golden era. AGI can solve what I call root-node problems in the world—curing terrible diseases, much healthier and longer lifespans, finding new energy sources. If that all happens, then it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy. I think that will begin to happen in 2030.

I’m skeptical. We have unbelievable abundance in the Western world, but we don't distribute it fairly. As for solving big problems, we don’t need answers so much as resolve. We don't need an AGI to tell us how to fix climate change—we know how. But we don’t do it.

I agree with that. We've been, as a species, a society, not good at collaborating. Our natural habitats are being destroyed, and it's partly because it would require people to make sacrifices, and people don't want to. But this radical abundance of AI will make things feel like a non-zero-sum game—

AGI would change human behavior?

Yeah. Let me give you a very simple example. Water access is going to be a huge issue, but we have a solution — desalination. It costs a lot of energy, but if there were renewable, free, clean energy [because AI came up with it] from fusion, then suddenly you solve the water access problem. Suddenly it's not a zero-sum game anymore.
