Public figures urge AI 'superintelligence' ban

8 hours ago

Steve Wozniak, co-founder of the technology company Apple.

Thomas Banneyer | Picture Alliance | Getty Images

A group of prominent figures, including artificial intelligence and technology experts, has called for an end to efforts to create 'superintelligence' — a form of AI that would surpass humans on essentially all cognitive tasks. 

More than 850 people, including tech leaders like Virgin Group founder Richard Branson and Apple co-founder Steve Wozniak, signed a statement published Wednesday calling for a ban on the development of superintelligence. 

The list of signatories was notably topped by prominent AI pioneers, including the computer scientists Yoshua Bengio and Geoffrey Hinton, who are widely considered "godfathers" of modern AI. Leading AI researchers such as UC Berkeley's Stuart Russell also signed on. 

Superintelligence has become a buzzword in the AI world as companies from Elon Musk's xAI to Sam Altman's OpenAI compete to release ever more advanced large language models. Meta has gone so far as to name its LLM division 'Meta Superintelligence Labs.' 

But signatories of the recent statement warn that the prospect of superintelligence has "raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction."

The statement called for a ban on developing superintelligence until there is both strong public support for the technology and a scientific consensus that it can be built and controlled safely.

In addition to AI and tech figures, the signatories formed a broad coalition that included academics, media personalities, religious leaders and a bipartisan group of former U.S. politicians and officials. 

Those retired officials included former chairman of the Joint Chiefs of Staff Mike Mullen and former National Security Advisor Susan Rice.

Meanwhile, Steve Bannon and Glenn Beck — influential media allies to U.S. President Donald Trump — were also prominently featured on the list.

Former White House chief strategist Steve Bannon speaks during the Semafor World Economy Summit 2025 at Conrad Washington on April 23, 2025 in Washington, DC.

Kayla Bartkowski | Getty Images

Other high-profile signatories included Britain's Prince Harry and his wife, Meghan, Duchess of Sussex, as well as former Irish President Mary Robinson. As of Wednesday, the list was still growing.

AI doomers versus AI boomers

There has been a growing divide in the tech space between those who see AI as a powerful force for good, warranting unfettered development, and those who believe it's dangerous and in need of more regulation. 

However, as the 'Statement on Superintelligence' website notes, even the heads of the world's leading artificial intelligence companies, including Musk and Altman, have previously warned about the dangers of superintelligence. 

Before becoming CEO of OpenAI, Altman wrote in a 2015 blog post that "development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."

Meanwhile, discussing the risks of advanced AI surpassing human intelligence on a podcast earlier this year, Elon Musk said there was "only a 20% chance of annihilation." 

The 'Statement on Superintelligence' cited a recent survey from the Future of Life Institute showing that only 5% of U.S. adults support "the status quo of fast, unregulated" superintelligence development. 

The survey of 2,000 American adults also found that a majority believe "superhuman AI" shouldn't be created until proven safe or controllable and want robust regulation on advanced AI. 

In a statement provided on the site, computer scientist Bengio said AI systems could surpass most individuals in most cognitive tasks within a few years. He added that while such advances could help solve global challenges, they also carry significant risks.

"To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use," he said. 

"We also need to make sure the public has a much stronger say in decisions that will shape our collective future."