Experts Call for a Pause on Giant AI Experiments: The Need for Safety and Governance
The rapid development of artificial intelligence (AI) has raised critical questions about its potential impact on society and humanity. Extensive research has shown, and leading AI laboratories have acknowledged, that AI systems with human-competitive intelligence can pose profound risks. The widely endorsed Asilomar AI Principles state that advanced AI could represent a profound change in the history of life on Earth and should therefore be planned for and managed with commensurate care and resources.
The Race for AI Development and Emerging Questions
Despite this recognition, the current level of planning and management is falling short. Recent months have seen AI laboratories locked in a race to develop and deploy ever more powerful digital minds that not even their creators can reliably understand, predict, or control. Contemporary AI systems are becoming human-competitive at general tasks, prompting important questions: Should machines be allowed to flood information channels with propaganda and falsehoods? Should we automate all jobs, including fulfilling ones? Should we create nonhuman minds that might eventually outsmart and replace us? Should we risk losing control of our civilization?
The Role of Tech Leaders and Independent Review
These decisions should not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable, and that confidence must be well justified and grow with the magnitude of a system's potential effects. OpenAI's recent statement on artificial general intelligence acknowledges that, at some point, it may be important to obtain independent review before training future systems and to limit the rate of growth of compute used to create new models. The experts behind this call agree, and argue that that point is now.
A Call for Pause and Development of Safety Protocols
Therefore, experts are calling on all AI laboratories to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. This pause should be public and verifiable, and should include all key actors. Should a voluntary pause prove difficult to enact quickly, experts advocate for governments to step in and institute a moratorium. During this pause, AI laboratories and independent experts should jointly develop and implement shared safety protocols for advanced AI design and development. These protocols, audited and overseen by independent outside experts, should ensure that systems adhering to them are safe beyond a reasonable doubt. The aim is not to halt AI development altogether but to step back from the race toward ever-larger, unpredictable black-box models with emergent capabilities.
Refocusing AI Research and Accelerating Governance
AI research should prioritize enhancing the accuracy, safety, interpretability, transparency, robustness, alignment, trustworthiness, and loyalty of existing state-of-the-art systems. Concurrently, AI developers must work with policymakers to accelerate the development of robust AI governance systems. These should include, at a minimum: dedicated regulatory authorities for AI; oversight of highly capable AI systems; provenance and watermarking systems; robust auditing and certification; liability for AI-caused harm; public funding for AI safety research; and well-resourced institutions to address the economic and political disruptions caused by AI.
The Vision for a Flourishing Future with AI
A flourishing future with AI is possible. Having succeeded in creating powerful AI systems, humanity can enjoy an "AI summer" in which we reap the rewards, engineer these systems for the benefit of all, and give society time to adapt. Society has paused other technologies with potentially catastrophic effects before, and it can do so here. Let's enjoy a long AI summer rather than rush unprepared into the fall.