AI Poses Global Extinction Threat Comparable to Nuclear War, Say Sam Altman, Geoffrey Hinton, Lex Fridman…

By Sarnith Varun

May 31, 2023

The Center for AI Safety (CAIS) released a new Statement on AI Risk on 29 May 2023.

AI scientists and other notable figures have urged that the risks of AI be treated as a societal-scale threat and given the same priority as global risks such as pandemics and nuclear war.

The statement's stated motive is to overcome the obstacles experts and policymakers face in voicing concerns about the broad spectrum of severe and urgent risks from AI, and to open up further discussion.

It also aims to create common knowledge of the growing number of experts and public figures who consider the risks of advanced AI to be serious.

The Statement on AI Risk reads as follows:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

As of 29 May 2023, the statement has more than 350 signatories (roughly 375, a figure subject to change), including notable public figures, academics, and AI scientists and experts. At the time, it could only be signed by people in executive roles or academic positions.

Signatories include several AI experts from Google's AI company DeepMind and from OpenAI, the maker of ChatGPT, as well as Geoffrey Hinton, often called the "Godfather of AI."

On Twitter, Dan Hendrycks, Director of the Center for AI Safety, compared the situation to Robert Oppenheimer and other atomic scientists issuing warnings about nuclear technology.

Hendrycks further explained that the concern is not limited to the risk of extinction: systemic bias, misinformation, malicious use, cyberattacks, and weaponization were among the examples he mentioned in his tweet, and he stressed that these are all critical risks that must be addressed.

The Senate Judiciary Committee held a hearing examining potential rules for artificial intelligence, at which OpenAI CEO Sam Altman testified and urged AI regulation.

Sam Altman, the CEO of ChatGPT creator OpenAI, testified Tuesday before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law as the panel examines potential rules for the use of artificial intelligence. New York University professor emeritus Gary Marcus and Christina Montgomery, IBM’s chief privacy and trust officer, also testified at the hearing. Source: CBS News (YouTube)

“Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” U.S. Senator Richard Blumenthal said.

Earlier, in March, the Future of Life Institute released a similar open letter calling for a six-month halt to the training of powerful AI systems, with signatories including Elon Musk, Steve Wozniak, and Max Tegmark.

In May 2023, leaders of OpenAI called for regulation to prevent "superintelligence" from destroying humanity. In a blog post, they argued that an international regulator should "inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security" in order to reduce the "existential risk" such systems could pose.

Back in March 2023, at a virtual summit, India proposed to the G20 the formation of an "International Panel for Technological Changes" (IPTC). The proposed panel reflected experts' concern over the accelerating development of AI and its impact on the next generation's livelihoods and security. It included Martin Rees, the United Kingdom's Astronomer Royal; Shivaji Sondhi, Wykeham Professor of Physics at the University of Oxford; and K. VijayRaghavan, former principal scientific adviser to the Government of India.

Although several movements in recent years have pushed for regulation of advancing AI, this statement seems promising as a driver of change, given the diversity and prominence of its signatories.
