
Chatbots often make mistakes, so some researchers have had an idea: have other chatbots supervise them


Unreliable responses are one of the big problems with artificial intelligence chatbots. These tools are evolving in leaps and bounds, but they still reproduce biases and generate hallucinations. Researchers at the University of Oxford have proposed a particular idea to address these limitations: having chatbots supervised by other chatbots.

The group, made up of Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn and Yarin Gal, points out that false results have held back the adoption of chatbots in a variety of fields. The method they have designed, they say, accounts for the fact that the same idea can be expressed in different ways, and lets users spot the moments in a conversation when they should be especially careful.

Using chatbots to monitor other chatbots

The researchers asked a chatbot a series of trivia questions and math problems. Next, they asked a group of humans and a different chatbot to review the answers. After comparing the evaluations, they found that the reviewing chatbot agreed with the human evaluators 93% of the time. The human evaluators, for their part, agreed with the chatbot 92% of the time.
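Those agreement figures boil down to a simple proportion: the share of answers on which two evaluators reach the same verdict. Here is a minimal sketch of that calculation in Python; the verdict lists are invented for illustration and are not data from the study.

```python
def agreement_rate(verdicts_a, verdicts_b):
    """Fraction of items on which two evaluators reach the same verdict."""
    assert len(verdicts_a) == len(verdicts_b)
    return sum(a == b for a, b in zip(verdicts_a, verdicts_b)) / len(verdicts_a)

# Invented example labels: 1 = "answer judged correct", 0 = "answer judged wrong".
human_verdicts   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
chatbot_verdicts = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(f"{agreement_rate(human_verdicts, chatbot_verdicts):.0%}")  # -> 90%
```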

These findings come from a study published in the journal Nature under the title "Detecting hallucinations in large language models using semantic entropy". As we can see, the methodology is fairly manual, but it can serve as a model to inspire automated solutions that address the unreliability of the AI chatbots we use daily.
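The quantity that gives the paper its name, semantic entropy, can be sketched in a few lines: sample several answers to the same question, group together answers that mean the same thing, and measure the entropy over those meaning clusters. In the sketch below, the same_meaning helper and the sample answers are hypothetical stand-ins; the study itself checks semantic equivalence with bidirectional entailment, a role a second language model can play, which is where the "chatbots supervising chatbots" framing comes from.

```python
from math import log

def semantic_entropy(answers, same_meaning):
    """Entropy over clusters of semantically equivalent answers.

    High entropy means the model produces answers with different meanings
    when asked the same question repeatedly: a signal of likely hallucination.
    """
    clusters = []  # each cluster holds answers judged to mean the same thing
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:  # no existing cluster matched, so start a new one
            clusters.append([ans])

    total = len(answers)
    return -sum((len(c) / total) * log(len(c) / total) for c in clusters)

# Toy usage: a crude string comparison stands in for an entailment check.
samples = ["Paris", "Paris.", "Lyon", "Paris"]
same = lambda a, b: a.strip(". ").lower() == b.strip(". ").lower()
print(semantic_entropy(samples, same))  # ~0.56: some disagreement in meaning
```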


The tools themselves often include warning messages about the accuracy of their responses at the bottom of the chat window. "ChatGPT can make mistakes. Consider checking important information," says the OpenAI chatbot. "Gemini may display inaccurate info, including about people, so double-check its responses," warns Google's chatbot.


Both OpenAI and Google, as well as other companies, have said they are working to improve the reliability and safety of their AI products. For now, however, the results are far from perfect. In many cases the answers present text that at first glance seems very coherent, but may contain anything from small imperfections to major errors.

Images | Xataka with Bing Image Creator

In Xataka | An AI has created the script for a film that precisely talks about creativity in cinema. A theater refuses to release it
