Science and Tech

Why this artificial intelligence pioneer decided to speak out about the dangers of the technology

New York — Geoffrey Hinton, also known as the “godfather of artificial intelligence,” decided he had to speak out about the dangers of the technology he helped develop after growing concerned about how smart it was getting, he said Tuesday.

“I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton told Jake Tapper in an interview Tuesday. “I want to, in a way, sound the alarm and say that we should be seriously concerned about how we stop these things from controlling us,” he added.

Hinton’s pioneering work on neural networks shaped the artificial intelligence systems that power many of today’s products. On Monday, he made headlines by resigning from his job at Google, where he had worked for a decade, in order to speak out about his growing concerns about the technology.

In an interview published Monday with The New York Times, which first reported his decision, Hinton said he was worried about the potential of artificial intelligence (AI) to eliminate jobs and to create a world in which many people “will no longer be able to know what is true.” He also pointed to the staggering pace of progress, far beyond what he and others had anticipated.

“If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us, and there are very few examples of a more intelligent thing being controlled by a less intelligent thing,” Hinton told Tapper on Tuesday.

“It knows how to program, so it will find ways around the restrictions we put on it. It will find ways to manipulate people into doing what it wants.”

Hinton isn’t the only tech leader to raise concerns about artificial intelligence. Several members of the tech community signed a letter in March calling on AI labs to pause training of the most powerful systems for at least six months, citing “profound risks to society and humanity.”

The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology behind the viral chatbot ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams, and build a working website from a hand-drawn sketch.

Steve Wozniak, co-founder of Apple and one of the letter’s signatories, appeared on “This Morning” on Tuesday and echoed concerns about AI’s potential to spread disinformation.


“Cheating is going to be much easier for those who want to cheat you,” Wozniak said. “We’re not really making any changes in that regard: we’re just assuming that the laws we have will take care of it.”

Wozniak also said that “some type” of regulation is probably needed.

Hinton, for his part, said he did not sign the petition. “I don’t think we can stop the progress,” he said. “I didn’t sign the petition saying we should stop working on AI, because if people in the US stop, people in China won’t.”

But he acknowledged that he does not have a clear answer about what to do instead.

“It’s not clear to me that we can solve this problem,” Hinton told Tapper. “I believe we should put a lot of effort into thinking about ways to solve it. I don’t have a solution at the moment.”
