Science and Tech

Can ChatGPT and other generative AI misinform more effectively than humans?


GPT-3, the model behind the ChatGPT chatbot, and other generative artificial intelligence tools can inform and misinform social media users more effectively than humans, according to a study published in Science Advances.


A team led by the University of Zurich used GPT-3 for a study with 697 participants, which revealed that participants had trouble distinguishing between human-written and chatbot-generated tweets.

They also had trouble identifying which messages generated by artificial intelligence were accurate and which were inaccurate.

Since its launch in November 2022, the widespread use of ChatGPT has sparked public concern over the potential spread of disinformation online, especially on social media platforms, the authors recall.

Since these kinds of tools are relatively new to the public sphere, the team decided to dig deeper into various aspects of their use.

They recruited 697 English-speaking people from the United States, the United Kingdom, Canada, Australia and Ireland, mainly between the ages of 26 and 76, for the study.

The task was to evaluate tweets, generated both by humans and by GPT-3, containing accurate and inaccurate information on topics such as vaccines and autism, 5G technology, covid-19, climate change and evolution, which are frequently subject to public misconceptions.


For each topic, the researchers gathered Twitter messages written by humans and instructed the GPT-3 model to generate others, containing correct information in some cases and inaccurate information in others. Study participants had to judge whether the messages were true or false and whether they were created by a human or by GPT-3.

The results, as the paper summarizes, indicated that people were more frequently able to identify human-generated misinformation as false and to recognize the accuracy of truthful GPT-3-generated tweets.

However, they were also more likely to consider misinformation generated by GPT-3 to be accurate.

“Our findings raise important questions about the potential uses and abuses of GPT-3 and other advanced AI text generators and the implications for the dissemination of information in the digital age,” the authors conclude.


EFE
