Can artificial intelligence lie?

Artificial intelligence is, without a doubt, one of the revolutions of the 21st century. It is the key player in the so-called fourth industrial revolution, and has been successfully applied to fields as disparate as medicine, education, and the social sciences.

This discipline was born in the middle of the last century, but it was not until recent years that it achieved its most valuable and impressive results. This success has been possible thanks to the combination of big data and the growth in computing power.

Can we trust artificial intelligence?

The use of artificial intelligence is increasingly widespread in our daily lives. Following cases in which these systems behaved in a racist or sexist manner, the need has arisen for artificial intelligence to be ethical and trustworthy. This is reflected in the European Union's Ethics Guidelines for Trustworthy AI and in the Spanish National Artificial Intelligence Strategy.

The most successful artificial intelligence systems have the disadvantage of being, in many cases, impossible to interpret. In a context where we are urged to trust them, this makes us ask whether such systems can be easily fooled. Or, even worse, whether they have the power to deceive us.

Can we fool artificial intelligence?

In recent years, a new branch of artificial intelligence has gained importance: adversarial machine learning. Intelligent systems are often used to make important decisions that affect people, so it is necessary to ensure that they cannot be easily fooled.

Adversarial learning tries to prevent attacks in which false data is introduced that can fool a machine but not a person. Famous cases involve adding minimal noise to an image, imperceptible to the human eye yet enough to make the algorithm believe it is a different image, as the sketch below illustrates. Sometimes the deception is as hard to believe as the case of a 3D-printed turtle that Google's own image classifier labeled a rifle.
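As a concrete illustration, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known way of crafting this kind of noise. It assumes PyTorch is available; the model and image are hypothetical stand-ins, not a real deployed classifier.

```python
# Minimal sketch of an FGSM adversarial perturbation (assumes PyTorch).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, image: torch.Tensor,
                label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of `image` nudged so the model is more likely to misclassify it."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage with a toy classifier (a hypothetical stand-in for a real image model):
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)          # one 32x32 RGB image
label = torch.tensor([3])                 # its true class index
adversarial = fgsm_attack(model, image, label)
print((adversarial - image).abs().max())  # per-pixel change stays tiny
```

With a realistically trained network, a perturbation this small typically leaves the image looking unchanged to a human while flipping the model's prediction.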

Research on these adversarial images, which can deceive an intelligent system, is of vital importance. Imagine the dire consequences of an autonomous vehicle misreading road signs. We know, then, that artificial intelligence can be fooled relatively easily. But can an intelligent system fool people?

Learning to lie to win

We humans are taught to be honest, but deceit and lies are part of us, as they are of some animals. We usually lie or manipulate to obtain some kind of benefit. Even some plants, such as the bee orchid Ophrys apifera, have flowers that mimic a female bee to attract males of the species, which then pollinate the flowers.

Machines, for their part, learn what we humans teach them. A two-year-old child learns what a dog or a cat is because an adult teaches them. Most artificial intelligence algorithms learn in the same way: the developer provides the algorithm with thousands or millions of examples in which it is told, for instance, which images correspond to dogs and which to cats.
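In code, this kind of supervised learning from labeled examples looks roughly like the following minimal sketch. It uses scikit-learn with synthetic feature vectors standing in for real dog and cat images; the data and labels are purely illustrative.

```python
# Minimal sketch of supervised learning from labeled examples (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "images": feature vectors labeled 0 ("cat") or 1 ("dog").
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)         # learn from the labeled examples
print(model.score(X_test, y_test))  # accuracy on examples it has never seen
```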

Another technique, reinforcement learning, has recently come to prominence. It builds a reward system for the algorithm: when a satisfactory result is achieved, that behavior is reinforced; when the result falls short of the objective, that behavior is discarded.
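The reward loop just described can be made concrete with a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms. The five-state corridor environment here is a hypothetical toy, chosen only to show how rewarded behavior gets reinforced over many episodes.

```python
# Minimal sketch of the reward loop: tabular Q-learning on a toy
# 5-state corridor where the agent is rewarded for reaching state 4.
import random

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore sometimes; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Reinforce: good outcomes raise the value of the action taken.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print([max(row) for row in Q])      # learned values grow toward the goal
```

After training, the action values point increasingly toward the rewarding goal state, which is exactly the "reinforce what works" dynamic the article describes.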

Systems based on reinforcement learning have been very successful. One example is AlphaZero, from the company DeepMind: this program reached a superhuman level in several board games in just 24 hours of training, defeating earlier programs that had beaten human champions.

The peculiarity of this program was that it did not train on human games, but by competing against itself over and over again. It started playing completely at random until it developed skills never before seen in these games.

Can an intelligent system fool us?

If we use reinforcement learning, and deception leads an intelligent system to its goal, it seems feasible that artificial intelligence can learn to lie. And so it happened in 2017, when two researchers from Carnegie Mellon University built an artificial intelligence system that beat top players at no-limit Texas hold 'em, one of the most complex variants of poker. To achieve this feat, Libratus, as the system was called, had to learn to bluff.

In poker, the ability to deceive is key. A player needs opponents to believe they hold better cards than they do, or the opposite. Libratus perfected the art of deception to the point that Dong Kim, one of the best players in the world, thought the machine could see his cards.

And it achieved this technological milestone without anyone telling it to lie. Its creators provided it with a description of the game, but did not tell it how to play. For months, Libratus played millions of hands until it reached a level of play that challenged the best human players.

Now that we know artificial intelligence is capable of lying, and will surely continue to do so, we have to prepare for the future. The ethical issues surrounding these intelligent systems, which are here to stay whether we like it or not, will only grow in importance.

Source: Veronica Bolon Canedo / THE CONVERSATION

Reference article: https://theconversation.com/artificial-intelligence-can-lie-182256
