Dec. 22 (Portaltic/EP) –
The artificial intelligence (AI) behind OpenAI’s ChatGPT chatbot has put Google on alert, as it poses a risk to the company’s core business given its potential application in search engines.
A Google engineer warned this summer that the Language Model for Dialogue Applications (LaMDA), the AI model the company created to develop ‘chatbots’ with advanced language capabilities, was capable of thinking and reasoning like a human being.
The company denied this claim and explained that “these systems mimic the types of exchanges found in millions of sentences and can touch on any fantastic subject”, but the experience shared by the engineer raised concerns about the possibility of conscious AI in the long term.
Recently, ChatGPT has revived this concern. Developed by OpenAI, it is an AI chatbot trained to hold text conversations, and it has surprised people with the naturalness of its responses and its ability to generate and connect ideas, admit its mistakes and remember previous exchanges, which it uses as context.
ChatGPT has been released on an experimental basis and still has a lot to improve, but it has put Google on alert because of its potential application in search engines, a field that Google has dominated for more than 20 years with a market share of 92 percent, according to data from Statista.
As reported by The New York Times, which cites internal reports and recordings to which it has had access, Google is reorienting its artificial intelligence strategy, involving numerous working groups in this area and even encouraging employees to develop AI solutions for creating artwork and other images, similar to DALL-E, also from OpenAI.
For the moment, Google is not considering using LaMDA in its search engine, since the technology does not fit well with advertising, a business that accounted for 80 percent of the company’s revenue last year alone. Specifically, offering users precisely the results they are looking for conflicts with a business model based on clicks on advertising content.
The tech giant has also declined to make LaMDA testing widely accessible, as the model can generate false, toxic and prejudiced content, as has already happened with Microsoft’s Tay chatbot or, more recently, with Meta’s Galactica.