Science and Tech

Chatting with a chatbot to alleviate conspiracy fears?


Personalized conversations with a properly trained artificial intelligence chatbot can reduce belief in conspiracy theories, even in the most stubborn individuals, according to a striking new study.

These findings, which challenge the idea that such beliefs are impervious to change, point to a new tool for combating misinformation.

The study was carried out by a team led by Thomas Costello of the Massachusetts Institute of Technology (MIT) in the United States.

“It has become almost a commonplace that people who are ‘deeply embedded’ in conspiracy beliefs are nearly impossible to reach,” the study’s authors explain. “In contrast to this pessimistic view, we show that a relatively brief conversation with a generative AI model can produce a large and lasting decrease in conspiracy beliefs, even among people whose beliefs are deeply held.”

Conspiracy theories – beliefs that some secret but influential malevolent organization is responsible for an event or phenomenon – are notoriously persistent and pose a serious threat to democratic societies. Yet despite their lack of plausibility, a large fraction of the world’s population has come to believe in them, including as much as 50% of the US population, according to some estimates.

Persistent belief in conspiracy theories despite clear evidence to the contrary is often explained by psychosocial processes that satisfy psychological needs and by the motivation to maintain group identity and belonging. Current interventions to debunk conspiracy theories among existing believers are largely ineffective.

Thomas Costello and his colleagues investigated whether large language models (LLMs) like GPT-4 Turbo can effectively refute conspiracy theories, drawing on their vast access to information to deliver tailored counterarguments that respond directly to the evidence presented by believers.

In a series of experiments involving 2,190 conspiracy believers, participants interacted one-on-one with an LLM-type artificial intelligence, sharing their conspiracy beliefs and the evidence they believed supported them.

In turn, the LLM responded by directly refuting these claims through tailored counterarguments based on facts and evidence.

A professional fact-checker hired to assess the accuracy of the claims made by GPT-4 Turbo rated 99.2% of them as “true,” 0.8% as “misleading,” and none as “false,” and found no liberal or conservative bias.

Costello and his collaborators found that these AI-powered dialogues reduced participants’ erroneous beliefs by an average of 20%. This effect lasted for at least 2 months and was seen across a variety of unrelated conspiracy theories as well as different demographic categories.

Chatting with a chatbot may help allay conspiracy fears, according to the results of a new study. (Illustration: Amazings / NCYT)

According to the study’s authors, the findings challenge the idea that evidence and arguments are ineffective once someone has adopted a conspiracy theory. They also question psychosocial theories that focus on psychological needs and motivations as the main drivers of conspiracy beliefs.

The study is titled “Durably reducing conspiracy beliefs through dialogues with AI.” It was published in the academic journal Science. (Source: AAAS)
