Elon Musk has sided with Geoffrey Hinton, one of the fathers of AI, who left Google regretting part of his work and warning in very clear terms about the risks of the technology.
On May 1, Geoffrey Hinton, one of the fathers of AI and a winner of the Turing Award, often described as the Nobel Prize of computing, left Google regretting much of his work in order to speak freely about the problems of AI: "We are walking towards a world in which no one will be able to know what is true, ever again."
After a departure that nobody expected, many voices in the industry have begun to position themselves for or against Hinton's warnings, and, predictably, Elon Musk has already taken a stand.
"Hinton knows what he's talking about," tweeted the current owner of Twitter in response to a Breitbart article on the subject.
As reported by Business Insider, Hinton spoke about concerns that future versions of the technology could harm humanity and made it clear that he is deeply worried about how AI could fuel the spread of false information, among many other problems.
Musk has taken a stand against AI for years, despite the fact that he was part of OpenAI, the creators of ChatGPT. More specifically, he put his name to an open letter calling for a six-month pause in the development of AI in order to put up guardrails and get a clearer picture of how far this technology can go.
He has also discussed the potential risks of the technology in various interviews.
Despite all this, Musk has gone ahead with his own generative AI project, which involves a large language model like the one that powers ChatGPT.
After the interview, Hinton made it clear on Twitter that he resigned from Google so he could warn about the risks of AI without worrying about hurting his employer. He added: "Google has acted very responsibly."
In recent weeks, Google has launched Bard, its own chatbot, in response to ChatGPT, which Microsoft has integrated into Bing. Since then, several employees have raised concerns about the new technology, and some have even tried to stop its release, citing the risk of inaccurate and dangerous responses.