Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation from artificial intelligence, arguing that the threat of an AI extinction event should be a top global priority.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the brief statement released by the Center for AI Safety.
The statement was signed by, among others, Sam Altman, CEO of OpenAI; Geoffrey Hinton, the so-called “godfather” of AI; senior executives and researchers from Google DeepMind and Anthropic; Kevin Scott, Microsoft’s CTO; Bruce Schneier, the internet security and cryptography pioneer; climate advocate Bill McKibben; and the artist Grimes.
The statement highlights widespread concern about the ultimate danger of runaway artificial intelligence. AI experts have argued that society is still a long way from developing the kind of artificial general intelligence seen in science fiction; today’s cutting-edge chatbots largely reproduce patterns from the training data fed to them and do not think for themselves.
Still, the flood of hype and investment in the sector has led to calls for regulation early in the AI era, before any major mishaps occur.
The statement follows the viral success of OpenAI’s ChatGPT, which helped fuel the tech industry’s arms race around artificial intelligence. In response, a growing number of lawmakers, advocacy groups and technology insiders have sounded alarms about the possibility that a new generation of AI-powered chatbots could spread misinformation and displace workers.
Hinton, whose pioneering work helped shape today’s AI systems, previously said he decided to leave his position at Google and “blow the whistle” on the technology after “suddenly” realizing “that these things are getting smarter than us.”
Dan Hendrycks, director of the Center for AI Safety, said in a tweet on Tuesday that the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not prevent society from addressing other types of AI risk, such as algorithmic bias or misinformation.
Hendrycks compared Tuesday’s statement to atomic scientists “issuing warnings about the very technologies they’ve created.”
“Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and,’” Hendrycks tweeted. “From a risk management perspective, just as it would be unwise to exclusively prioritize present harms, it would also be unwise to ignore them.”