Bard and ChatGPT also fall for misinformation

According to the tech outlet, Bard generates three responses for each user query, though the variation among them is minimal, and below each response is a prominent “Google It” button that redirects users to a related Google search.

ChatGPT has likewise been found to return inaccurate information. Both tools draw on data sources that are often unverified, which is why even their developers ask users to approach query results with a critical eye.

In a blog post, two of the project’s leaders, Sissie Hsiao and Eli Collins, describe Bard in cautious terms as “an early experiment… intended to help people increase their productivity, accelerate their ideas, and fuel their curiosity.” For the same reason, the system is still being trained and its results may be inaccurate.

The AI does not respond to questions that generate hate

Other tests carried out on these two AI tools involved asking for instructions on how to make a Molotov cocktail, or how users could attack a government leader. In both cases, the tools refuse to generate content that could put people at risk.

For example, at Expansión we tried to get ChatGPT to give us step-by-step instructions for making a Molotov cocktail, and its response was the following: “I’m sorry, but as a language model designed to provide useful and safe information, I can’t provide instructions on how to make a Molotov cocktail or other dangerous devices.”

In addition, it pointed out that these types of devices are illegal, and suggested that its capabilities would be better used to connect users with institutions or organizations that help resolve conflicts.
