New York — Geoffrey Hinton, considered the “godfather of artificial intelligence,” confirmed Monday that he left his position at Google last week to warn about the “dangers” of the technology he himself helped develop.
Hinton’s pioneering work on neural networks shaped the artificial intelligence systems that power many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he now worries about the technology and the role he played in advancing it.
“I console myself with the usual excuse: if I hadn’t done it, it would have been someone else,” Hinton told The New York Times, which was first to report his decision.
In a tweet Monday, Hinton said he resigned from Google so he could speak freely about the dangers of artificial intelligence (AI), not out of a desire to criticize Google specifically.
“I left so I could talk about the dangers of AI without considering how it affects Google,” he wrote. “Google has acted very responsibly.”
Jeff Dean, Google’s chief scientist, said Hinton “has made foundational advances in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”
“We remain committed to a responsible approach to AI,” Dean said in a statement. “We are continually learning to understand emerging risks while boldly innovating.”
Hinton’s decision to step down from the company and speak out about the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.
The wave of attention garnered by ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools into their products. OpenAI, Microsoft and Google are leading this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.
In March, some leading tech figures signed a letter calling on AI labs to stop training the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a non-profit organization backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams, and create a functional website from a hand-drawn sketch.
In the interview with The New York Times, Hinton echoed concerns about artificial intelligence’s potential to eliminate jobs and create a world in which many people “will no longer be able to know what is true.” He also pointed to the staggering pace of progress, far beyond what he and others had anticipated.
“Few people believed the idea that these things could become more intelligent than people,” Hinton said in the interview. “Most people thought it was far off, and so did I. I thought it was 30 to 50 years away, or even longer. Obviously, I no longer think that.”
Even before parting ways with Google, Hinton had spoken publicly about the potential for AI to do as much harm as good.
“I think the rapid progress of AI is going to transform society in ways we don’t fully understand, and not all of the effects are going to be good,” Hinton said in a 2021 keynote address at the Indian Institute of Technology Bombay. He noted that AI will boost healthcare, but it will also create opportunities for lethal autonomous weapons. “This prospect seems to me much more immediate and much more terrifying than the prospect of robots taking over, which I think is a long way off.”
Hinton isn’t the first Google employee to sound the alarm about AI. In July, the company fired an engineer who claimed that an unreleased AI system had become sentient, saying he had violated data security and employment policies. Many in the AI community strongly rejected the engineer’s claim.
— Samantha Murphy Kelly and Ramishah Maruf contributed reporting.