What once seemed like science fiction is now very close to reality: researchers have managed to replicate people's personalities with artificial intelligence.
A study carried out by Stanford University and Google DeepMind achieved just that. The researchers created AI copies of more than 1,000 participants, and all they needed was a two-hour interview with each one.
With this research, they showed that AI agents can imitate real people's attitudes and behavior.
To do this, they recruited participants through the market research company Bovitz and paid each of them $60 to talk with an artificial intelligence interviewer for two hours.
During those two hours, participants discussed a range of topics, from politics and family to social networks.
These conversations produced an average of 6,491 words per interviewee, transcripts that later became the basis for training the artificial intelligence replicas.
When these AI replicas were put to the test on various surveys, they matched 85% of the participants' own responses.
Where the replicas fell short was in the "prisoner's dilemma", where agreement with human behavior dropped to around 60%.
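To make the 85% figure concrete, here is a minimal sketch of how a response-agreement rate between a person and their AI replica could be computed. This is purely illustrative: the function name, the data, and the simple exact-match metric are assumptions for this example, not the study's actual methodology.

```python
# Hypothetical sketch: computing the fraction of survey items where an
# AI replica's answer matches the real participant's answer.
# All names and data below are illustrative, not taken from the study.

def match_rate(human_answers, replica_answers):
    """Return the fraction of items where the replica agrees with the human."""
    if len(human_answers) != len(replica_answers):
        raise ValueError("Response lists must be the same length")
    matches = sum(h == r for h, r in zip(human_answers, replica_answers))
    return matches / len(human_answers)

# Toy example: a 20-item survey where the replica agrees on 17 answers.
human = ["agree"] * 20
replica = ["agree"] * 17 + ["disagree"] * 3
print(f"{match_rate(human, replica):.0%}")  # prints "85%"
```

A simple exact-match rate like this works for fixed-choice survey questions; behavior in interactive games such as the prisoner's dilemma is harder to score, which is one reason agreement there can be lower.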
This could have many applications: it might no longer be necessary to interview hundreds of people to predict public reactions to, for example, new policies or products about to launch on the market.
The downside is that this could one day be used to manipulate public opinion, whether to help certain rulers stay in power or to target a specific population at will.