Do you remember what you did on November 30, 2022? It was probably a day like any other, with nothing extraordinary worth remembering months later. Google can’t say the same. That day, by surprise and without warning, OpenAI unveiled something called ChatGPT.
Some business moves are capable of generating earthquakes in the technology industry whose effects set off alarm bells among competitors. And we say "can" because it doesn’t always happen (just remember the reactive response of mobile phone leaders Nokia and BlackBerry to the arrival of the iPhone in 2007).
More than a “code red” at Google
The launch of the conversational chatbot from the company led by Sam Altman has sparked more than just a “code red” at Google. The Mountain View company, which currently leads the search market, finds itself for the first time in years facing a substantial threat that has forced it to take drastic measures.
One of these measures has been the initial launch (for now available only in some countries, by invitation) of Bard, a direct competitor to ChatGPT and to its improved, internet-connected incarnation in Bing Chat. The problem? Many Google employees believe the launch was far too hasty.
According to internal information seen by Bloomberg, the teams led by Sundar Pichai were tasked with testing the artificial intelligence chatbot, powered by LaMDA (Language Model for Dialogue Applications), before its deployment. The truth is that much of the feedback was scathing, but the company went ahead with its plan anyway.
Negative messages between Google teams have not gone unnoticed. “Bard is worse than useless: please do not launch,” one employee wrote in February of this year after evaluating the tool. “It is a pathological liar,” noted another, referring to its tendency to make up information and give inaccurate answers.
The atmosphere of tension and anger among certain company employees grew after a series of decisions made by management following the launch of ChatGPT, as explained by the aforementioned American outlet. For years, Google had been working on strengthening the ethics team for its artificial intelligence initiatives.
The mission of this group of experts was to help develop products and services aligned with the company’s principles and high safety standards. Objectives of this kind required not only substantial resources, which Google agreed to provide in 2021, but also time. The latter turned out to be a problem after OpenAI’s move.
Although the company founded by Larry Page and Sergey Brin is no newcomer to the field of artificial intelligence, many of its most ambitious projects were confined to the laboratory. In 2022, the pace of development sped up considerably, pushing certain ethical considerations that were previously fundamental into the background.
Google’s internal processes establish that before a product reaches the market, it must achieve a high score in certain categories. In Bard’s case this changed. “Child safety”, for example, must still reach a score of 100, but “fairness” can now be cleared for release with 80 or 85 points.
The above, as we say, comes from internal information seen by Bloomberg. Publicly, Google’s chief executive has been optimistic about the possibilities offered by its search chatbot, although he has acknowledged that all models have hallucination problems and has insisted that safety is a priority.
In Xataka: Sundar Pichai gives a message to investors: Bard in Google search will not be a threat to the advertising business
In Xataka: The dark secret of ChatGPT and Bard is not what they make up. It’s what they contaminate