Why aren't more people here worried about AI?


I'm one of those people who keep saying that no one knows how to control an AI that is much more all-around capable than (organized groups of) people, and that we should stop AI research until this is figured out (or until someone figures out how to make sure a big training run will definitely not yield an AI more capable than people).

But even if you don't believe me that no one knows how to control a super-capable AI, why is no one worried about some nation or disaffected group intentionally creating an AI to kill us all as a kind of doomsday weapon? Every year the craft of creating powerful AIs becomes better understood, and researchers (recklessly, IMHO) publish this better understanding for anyone to see. We don't know whether all the knowledge needed to create an AI more capable than people will be published this year or 25 years from now, but as soon as it happens, any actor on earth capable of reading and understanding machine-learning papers, and in possession of the necessary GPUs and electricity-generating capacity, can destroy the world, or at least the human species. Why are so many of you so complacent about that risk?

In the news recently was a young man who killed some people at a fertility clinic. He was a "promortalist": someone who believes that there is so much suffering in the world that the only moral response is to help all the people die (so they cannot suffer any more). Eventually, the craft of machine learning will become so well understood, and access to compute resources so widespread and affordable, that anyone living in a basement somewhere who happens to inherit $100 million from some eccentric uncle will have the means to end the human experiment. He will not have to figure out how to stay in control of the AI he unleashes. Any AI (just like any human being) will have some system of preferences: there will be some ways the future might unfold that the AI prefers to other ways. And if you put enough optimization pressure behind almost any system of preferences, what happens strongly tends to be incompatible with continued human survival, unless the AI has been correctly programmed to care whether the humans survive.

Again, even if you don't believe me that no one knows (and no one is likely to find out in time) how to create an AI that cares what happens to the people, aren't you worried about a human actor who does not bother to make sure the AI will care what happens to the people, because that actor is troubled and wants all the people to die?

I mean, yes, some of you genuinely disbelieve that AI can or will get good enough to wrest control of the future from humankind. But many of you consider it likely that AI technology will continue to improve (otherwise people wouldn't have invested so much in AI, and wouldn't have driven the market cap of Nvidia to $3 trillion). Why so little worry?
