Human extinction.
Think about it for a second. Think about it for real: the disappearance of the human race from planet Earth.
That’s why the top leaders in the artificial intelligence industry are frantically sounding the alarm. These technologists and academics continue to push the red panic button, doing their best to warn of the potential dangers that artificial intelligence (AI) poses to the very existence of civilization.
On Tuesday, hundreds of AI scientists, researchers and other public figures—including Sam Altman, CEO of OpenAI, and Demis Hassabis, CEO of Google DeepMind—renewed their deep concern for the future of humanity, signing a one-sentence open letter intended to spell out, in no uncertain terms, the risks posed by the rapidly advancing technology.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the letter, signed by many of the industry’s most respected figures.
The message could not be more direct or urgent. These industry leaders are literally warning that the impending AI revolution must be taken as seriously as the threat of nuclear war. They are pleading with policymakers to put up guardrails and lay down ground rules to rein in the nascent technology before it’s too late.
Dan Hendrycks, executive director of the Center for AI Safety, called the situation “reminiscent of atomic scientists issuing warnings about the very technologies they have created.” As J. Robert Oppenheimer noted, “We knew the world would not be the same.”
“There are many ‘important and urgent AI risks’, not just the risk of extinction, for example systemic bias, misinformation, malicious use, cyberattacks, and weaponization,” Hendrycks continued. “They are all major risks that need to be addressed.”
And yet the dire message these experts are desperately trying to deliver does not seem to be cutting through the noise of everyday life. AI experts may be sounding the alarm, but the level of concern — and in some cases terror — they feel about the technology is not being echoed with the same urgency by the media.
Instead, generally speaking, news organizations treated Tuesday’s letter — like every other warning we’ve seen in recent months — as just another headline, mixed with a wide variety of stories. Some of the major news organizations did not even include an article about the chilling warning on the home pages of their websites.
To some extent, the situation is eerily reminiscent of the early days of the pandemic, before the widespread panic, the lockdowns, and the overwhelmed emergency rooms. Newsrooms kept an eye on the growing threat posed by the virus, publishing stories about its slow spread around the world. But by the time the severity of the virus was fully recognized and woven into the fabric of the coverage, it had already turned the world upside down.
History risks repeating itself with artificial intelligence, this time with even higher stakes. Yes, news organizations are covering the development of the technology. But given the stated possibility of planetary danger, there has been a considerable lack of urgency around the issue.
Maybe that’s because it can be hard to accept the idea that a Hollywood-style sci-fi apocalypse could become reality, that advancing computer technology could reach escape velocity and decimate human existence. Yet that is precisely what the world’s leading experts warn could happen.
It’s much easier to avoid uncomfortable realities, pushing them from the foreground to the background and hoping that problems will resolve themselves over time. But that’s often not the case, and the growing AI-related concerns seem unlikely to resolve themselves. In fact, it is much more likely that with the breakneck pace at which technology is developing, the problems will become more apparent over time.
As Cynthia Rudin, professor of computer science and AI researcher at Duke University, said on Tuesday: “Do we really need more evidence that the negative impact of AI could be as big as nuclear war?”