
Global Coalition Calls for Ban on Superintelligent AI

  • Writer: The Financial District
  • Oct 28, 2025
  • 2 min read

Updated: Nov 3, 2025

Hundreds of public figures, including Nobel Prize-winning scientists, former military leaders, artists, and British royalty, have signed a statement calling for a ban on work that could lead to computer superintelligence — a yet-to-be-reached stage of artificial intelligence (AI) that they said could one day pose a threat to humanity, David Ingram reported for NBC News.


The statement adds to a growing list of calls for an AI slowdown at a time when the technology is threatening to remake large swaths of the economy and culture.

The statement proposes “a prohibition on the development of superintelligence” until there is both “broad scientific consensus that it will be done safely and controllably” and “strong public buy-in.”


Organized by AI researchers concerned about the fast pace of technological advances, the statement had more than 800 signatures from a diverse group of people.



The signers include Nobel laureate and AI researcher Geoffrey Hinton, former Joint Chiefs of Staff Chairman Mike Mullen, rapper Will.i.am, former Trump White House aide Steve Bannon, and U.K. Prince Harry and his wife, Meghan Markle.




OpenAI, Google, Meta, and other tech companies are pouring billions of dollars into new AI models and the data centers that power them, while businesses of all kinds are looking for ways to add AI features to a broad range of products and services.



Some AI researchers believe AI systems are advancing fast enough that they will soon demonstrate what is known as artificial general intelligence (AGI), or the ability to perform intellectual tasks as well as a human can.


From there, some researchers and tech executives believe superintelligence could follow, in which AI models perform better than even the most expert humans.


The statement is a product of the Future of Life Institute, a nonprofit organization that works on large-scale risks such as nuclear weapons, biotechnology, and AI.








