March 25 (Portaltic/EP) –
More and more users are receiving phone calls supposedly from a loved one or friend asking for their help and money to get out of an urgent situation. Faced with that uncertainty, their first instinct is to help, without realizing that they are the victims of a scam that uses artificial intelligence (AI) to imitate the voices of relatives and extract money.
Phone scams are a recurrent method used by malicious actors to defraud users, whether to obtain money or to steal sensitive data such as passwords or banking credentials.
The techniques behind these phone scams are constantly evolving to get around the protective barriers that stand between criminals and their targets, above all by adding the newest capabilities technology offers, such as generative AI, to their modus operandi.
Scammers are now using AI-powered voice generation programs to pose as people close to their victims over the phone and try to get money from them.
During the call, the malicious actors, posing as a loved one, claim to be in a dangerous situation and urgently ask the victim for help and money. For example, a scammer may pose as a grandson who has run out of money while traveling and urgently needs his grandparents to lend him a certain amount so he can return home.
Scammers usually imitate relatives or people very close to the victim, such as children or siblings, so that there is less resistance to handing over money and less doubt about the legitimacy of the call.
Thus, even if the victim is caught off guard during the call and parts of the story do not quite add up, these scammers rely on the decisive element to convince them that a close relative is calling and needs help: the voice. To achieve this, they use AI-based speech generation programs.
According to Federal Trade Commission (FTC) data compiled by The Washington Post, these phone scams caused losses of up to 11 million dollars (about 10.2 million euros) in 2022, making impostor scams the second most common type of fraud in the United States.
These AI programs analyze the voice of the person to be imitated and look for the patterns that make up the nuances and unique sound of that person's speech. In other words, the programs train to imitate the tone, the accent and even the age of the speaker, and then recreate them.
What is more, these programs can learn to imitate a voice in a matter of seconds, needing only a small audio sample as a base. Some programs require as little as 30 seconds of the person speaking in order to imitate them.
These audio samples can be obtained from any video in which the person being imitated is speaking, for example from personal posts on social networks such as Instagram or TikTok, or even from YouTube videos or podcasts.
The rise of this scam method may be explained by the fact that, in addition to being effective, voice imitation is relatively simple and cheap. One of the companies that develops this software is ElevenLabs and, depending on the features required, its service ranges from free to between 5 dollars (4.65 euros) and 330 dollars (279 euros) per month.
However, other companies also develop this type of technology, such as Murf.ai, Play.ht and Respeecher, with prices ranging between 20 and 30 dollars a month (about 18 to 27 euros). Respeecher, for example, includes a function to convert the voice in real time: as the user speaks, the AI changes their voice into the imitated one instantly.
HOW TO AVOID THESE SCAMS
Before these advanced voice imitation programs existed, users could look for certain signals to identify whether a call was being made by an artificial intelligence voice generation program.
For example, as Kaspersky cybersecurity analyst Marc Rivero points out, one of these signs was the forced or “robotic” language of the voice used by the program. Another signal could be a short pause after the user spoke, since the system had to process the information.
According to Rivero, it was also possible to spot the “lack of typical human interaction”, such as the inability to answer unexpected questions.
Faced with these new scams, which use AI programs to imitate the desired voice, Eusebio Nieva, technical director of Check Point Software for Spain and Portugal, suggests taking some precautions.
Although he acknowledges that it is increasingly difficult to carry out checks “due to the degree of specialization and competence of these AI programs”, he says users must adopt “a philosophy of mistrust” toward certain calls, especially if they come from an unknown source and involve monetary transactions, “even if the voice is identifiable.”
In this regard, Nieva explains that when receiving a call of this kind, as soon as the user has the slightest suspicion, they should “establish some kind of identification to determine without any doubt the identity of the speaker at the other end”.
Some of the options Nieva proposes are to bring up something that only the real person would know about, or to set “little traps” during the conversation that reveal whether there is some kind of deception attempt.
Along the same lines, another way to secure these calls is to “establish a protocol for double authentication of the caller” to avoid the scam, for example by calling the person back to confirm that it really is someone known.
These suggestions also apply to work environments, above all “knowing that the target of the scam will be an administrative employee or a manager in the finance department,” says Nieva.