A few weeks ago we saw on Xataka how artificial intelligence is being used for fraud in various ways. In this article we told the story of Ruth Card, a 73-year-old woman who received a call from what appeared to be her grandson Brandon but was actually a group of con artists: “Grandma, I’m in jail, no wallet, no phone. I need money for bail.” The scam was carried out with an audio deepfake that imitated her grandson’s voice during the call.
It has not stopped there. This kind of scam has evolved rapidly in recent months with the emergence of new AI tools, and voice and even video clones of a person are already being used to scam their relatives over video calls.
AI with the aim of defrauding. As El Mundo points out in this article, a man from northern China received a WeChat video call from his wife. She asked him for 3,600 euros because she had been in a car accident and had to settle the situation with the other driver. Of course, it was not his wife. And although the call came from another account, the man fell into the trap, since the face that appeared in the video call, gesturing and speaking with the same tone of voice, was that of his wife. It was yet another AI imitation.
Apparently, the scammers had been watching the couple and knew their habits. The woman also had a fairly popular cooking channel on a social network, and it was from there that they captured her face and voice to build the deepfake.
A global trend. A new wave of scams using AI-generated voices is growing around the world. The Washington Post has gathered several recent cases and warns that, according to FTC data, impersonation fraud of this kind was the second most frequent type in 2022, with more than 36,000 reports of people who were deceived (or nearly deceived) by others posing as friends or relatives. In 2021, one person managed to steal 35 million dollars from a bank using this technology.
How does it work? Advances in artificial intelligence already make it possible to replicate a voice from an audio sample of just a few sentences (something easily harvested from a person’s social media). Speech generation software analyzes what makes a person’s voice unique (age, gender, accent) and searches a vast database of voices to find similar ones and predict patterns. It can then recreate the pitch, timbre, and individual sounds of a person’s voice to produce a convincing likeness. From there, the scammer can make that voice say whatever they want.
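To give a sense of how low the barrier has become, here is a minimal sketch of what few-shot voice cloning looks like with an open-source toolkit. It assumes the Coqui TTS library and its XTTS v2 multilingual model; the file paths and sample text are placeholders, and cloning a real person’s voice requires their consent.

# Minimal voice-cloning sketch, assuming the open-source Coqui TTS
# library (pip install TTS) and its pretrained XTTS v2 model.
# Paths and text are placeholders; use only voices you have consent for.
from TTS.api import TTS

# Load the pretrained multilingual voice-cloning model (large download).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short reference clip is enough to condition the model on a voice.
tts.tts_to_file(
    text="This is a demonstration of synthetic speech.",
    speaker_wav="reference_sample.wav",  # a few seconds of the target voice
    language="en",
    file_path="cloned_voice.wav",
)

The specific library is beside the point; what matters is how little input is needed, since a short public clip, like one lifted from social media, is enough to condition the model.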
In most cases it is almost impossible to tell the difference, especially when the caller speaks with a certain urgency. And it is even harder for an older person unfamiliar with these technologies to realize the danger. Companies like ElevenLabs, an AI speech synthesis startup, will turn a short vocal sample into a synthetically generated voice for a modest price, ranging from 5 to 300 euros per month depending on the audio limit.
Concerns in China. In the Asian country this phenomenon is already a cause for concern among the authorities, who have begun advising the public through posts on Weibo, the Chinese Twitter, to “be cautious when giving out biometric information and to refrain from sharing videos and other images of themselves on the Internet.” Very different cases are cropping up. One that has created some controversy in the e-commerce industry involves users employing this technology to clone the faces of famous streamers and sell their products.
Another high-profile case was the arrest of a man who had used ChatGPT to fabricate a fake article about a train accident with nine dead. Not only that: he had managed to push it to the top of the Baidu search engine.
Legislation. It is the biggest obstacle to stopping this scourge. Experts say regulators, law enforcement, and the courts lack the resources to curb this growing phenomenon. First, because it is very difficult to identify the scammers or trace their calls, which can originate anywhere in the world, and a country’s jurisdiction does not always reach that far. And second, because the technology is new and there is not yet enough case law for courts to hold companies responsible.
In China they are leading the battle against this type of fraud. The Asian country has approved a new law regulating generative AI technologies for text, images, and video. The law, drawn up by the Cyberspace Administration, the body that oversees the Internet in China, was passed shortly after the launch of ChatGPT, the OpenAI chatbot, which is censored in the country, although many have accessed it illegally.
In Xataka | Differentiating the real Chicote from the AI-generated deepfake Chicote is already almost impossible (and it is a problem)