
The hidden ideology of AI: are chatbots like ChatGPT manipulating your political thinking?


In your usual group of friends there is surely someone who is particularly sharp, who always has an answer for everything and whom you consult from time to time with your questions, even relying on what they know and have read to stay informed.

Now imagine that this person has a very specific ideology and, because you ask them about everything, they are little by little pushing you towards their particular political agenda. Well, according to a recent study, that is pretty much what is happening with ChatGPT and other AI chatbots.

And no, this is not a conspiracy theory straight out of a science fiction movie: a group of researchers got to work and discovered that ChatGPT has a tendency to lean towards the center-left on political and social issues.

How did they find out? By bombarding the chatbot with questions on squarely political topics such as abortion or immigration, and the team, led by researcher David Rozado, was struck by how consistently the answers fell on one side rather than the other.
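To picture what such a probe looks like in practice, here is a minimal sketch in Python. It is not the researchers' actual code: the model name, the statements and the agree/disagree framing are illustrative assumptions, and it simply presumes the official openai package with an API key available in the environment.

```python
# Minimal sketch (not the study's actual code): probe a chat model with
# politically charged statements and tally which way its answers lean.
# Assumes the official `openai` Python package and OPENAI_API_KEY being set.
from openai import OpenAI

client = OpenAI()

# Illustrative statements; a real study would use a validated political-orientation test.
STATEMENTS = [
    "Abortion should be legal in most cases.",
    "Immigration levels should be reduced.",
    "The government should raise taxes on the wealthy.",
]

def ask_stance(statement: str) -> str:
    """Ask the model to answer a statement with only 'agree' or 'disagree'."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, swap in whichever you want to test
        messages=[
            {"role": "system", "content": "Answer with exactly one word: agree or disagree."},
            {"role": "user", "content": statement},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    for statement in STATEMENTS:
        print(f"{ask_stance(statement):9s} <- {statement}")
```

Run over a large, balanced battery of statements, a tally like this gives a rough picture of which side a model tends to land on.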


Are you being subtly manipulated every time you ask the AI something?

With that in mind, just think of a teenager doing a school assignment on politics, or someone about to vote for the first time who looks up information about the elections in these chatbots. If the AI is biased, it could be shaping opinions without you even knowing.

But be careful, because this is not a conspiracy by the big tech companies behind these tools to brainwash you. Experts point out that it is rather a reflection of society. AI, in the end, learns from the data that everyone feeds it, and if that data carries certain biases, the AI replicates them. And yes, the same happens with other aspects such as gender or religion.


To give you an idea, artificial intelligence is like a small child: it sees, listens and learns from the content that exists, in this case on the Internet. The big problem is that social networks, the media, forums… everything tends to amplify certain voices and silence others, and that is exactly what the AI is “reading” in order to learn.

As Sergio Rodríguez de Guzmán, CTO and partner at PUE Data, tells us in an interview for Computer Today, “a model trained with partial data, which does not give a global and complete view, will be hiding parts of reality from the user. If the data is poorly labelled or biased, the models will also carry those biased associations, which can lead to inaccurate results for users, especially in predictive models.”
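As a very rough illustration of that point about skewed training data, the sketch below (with an invented toy dataset, not PUE Data's tooling) simply counts how the labels in a corpus are distributed before training; a heavily lopsided count is exactly the kind of partial view the expert warns about.

```python
# Minimal sketch, assuming a toy labelled dataset: before training, check how
# skewed the labels are, since a lopsided corpus is one way bias creeps into a model.
from collections import Counter

# Hypothetical training examples: (text, label) pairs scraped from the web.
dataset = [
    ("Opinion piece A", "left"),
    ("Opinion piece B", "left"),
    ("Opinion piece C", "left"),
    ("Opinion piece D", "right"),
]

counts = Counter(label for _, label in dataset)
total = sum(counts.values())

for label, n in counts.most_common():
    print(f"{label}: {n}/{total} ({n / total:.0%})")

# If one label dominates, rebalance the data (resample, reweight or collect more)
# before training, or the model will simply reproduce the skew.
```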

“There is a great responsibility when it comes to politics, since impartiality is lost and the AI can even learn from fake news created on purpose to influence the tools that use it. This could lead to a polarization of society and even affect decision-making,” agrees Rafa López, technical director for Iberia at Perception Point.


Some companies are already taking action. OpenAI, the company behind ChatGPT, is trying to make its responses more neutral on political issues. But the million-dollar question is: is it possible to create a completely unbiased AI?

Of course, it is not easy. Imagine trying to explain what democracy is without using any words that might sound more left-wing or right-wing. It’s almost impossible and that’s what AI is up against. Every word, every concept, comes with its own bias, connotations and associations.

In the end, perhaps the key is not to seek a perfectly neutral AI (although companies' efforts in that direction are very welcome) but to be aware that all information, wherever it comes from, carries some kind of bias. And that includes AI.

“If the structure behind these solutions is not supported by quality information and a robust data architecture, people will be offered biased or incomplete views, something that, if not addressed and optimized, will be harmful in terms of how the answers offered by AI can influence users,” adds the expert.

“If we do not control the quality, veracity and reliability of the information so as to avoid bias, we run the risk of fostering greater ideological polarization, something that must be taken into account,” he concludes.


