The Future of Life Institute, an organization focused on steering technology toward benefiting life and away from extreme large-scale risks, is chaired by Max Tegmark, professor of physics at MIT and author of the popular book “Life 3.0”. The institute has published an open letter calling for a pause on the training of AI systems more powerful than OpenAI’s new GPT-4.
The open letter has been signed by 1,415 people (a number subject to change), including Elon Musk, the CEO of Tesla and SpaceX; Steve Wozniak, the co-founder of Apple; Jaan Tallinn, the co-founder of Skype; and more[1].
What will be left for us humans to do? We better get a move on with Neuralink!
— Elon Musk (@elonmusk) March 14, 2023
What’s in the Open Letter Calling for a Pause on AI Training?
The letter opens by noting the lack of planning and management in AI development and stresses that AI with human-competitive intelligence can pose profound risks to society and humanity. This claim is supported by extensive research references and has been acknowledged by top AI labs.
The Asilomar AI Principles, a set of principles for AI development published as another open letter by the same organization back in 2017, are cited as a reference.
It acknowledges the fierce competition between AI labs, each racing to produce and deploy AI tools stronger than their rivals’ and to create powerful machines that not even their creators can understand, predict, or reliably control.
For example, Google released Bard as a rival to OpenAI’s ChatGPT.
The letter then presents a series of questions for us to ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?
Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us?
Should we risk the loss of control of our civilization?
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter states.
OpenAI itself has stated that at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.
Responding to this statement by OpenAI, the letter says, “We agree. That point is now.”
Hence, the letter calls on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months, urging that the pause be public and verifiable.
If AI labs cannot enact such a pause quickly, the letter insists that governments should step in and impose a moratorium.
Goals of AI Labs During the 6-Month Pause
Although the letter calls for a pause on training any AI stronger than GPT-4, it also states that AI research and development should instead be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work on accelerating the development of robust AI governance systems.
These governance systems should at a minimum include: oversight and tracking of highly capable AI systems and large pools of computational capability; systems that help distinguish real content from synthetic and track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; public funding for technical AI safety research; and well-resourced institutions to cope with the economic and political disruptions that AI will cause, especially to democracy.
The letter concludes by stating that the development of AI can lead to a positive future if we use it for everyone’s benefit and give society time to adjust. It envisions a successful “AI summer” in which humanity reaps the rewards of these systems by giving society a chance to adapt and by remaining patient, rather than rushing unprepared into ever stronger technology.
References
- Future of Life Institute, ‘Pause Giant AI Experiments: An Open Letter’, 29 March 2023, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”, https://futureoflife.org/open-letter/pause-giant-ai-experiments/