
Europol warns about criminal uses of ChatGPT

“ChatGPT is already capable of facilitating a significant number of criminal activities, ranging from helping criminals remain anonymous to specific crimes, including terrorism and child sexual exploitation. While some of its results are still fairly basic, the next iterations of this and other models will only improve on what is possible.”

This is the dystopian scenario with which the European Police Office (Europol) has raised the alarm about the risks posed by criminal uses of artificial intelligence language models. Of particular concern are fraud and phishing, as well as the spread of disinformation on a large scale.

In a report published on Monday, Europol asks the security forces of the Member States to prepare to identify and combat these new forms of cybercrime. “As technology advances and new models become available, it will become increasingly important for law enforcement to stay ahead of this evolution to anticipate and prevent abuse,” the text states.


ChatGPT’s ability to write highly realistic text based on user input “makes it an extremely useful tool for phishing (or identity theft)” on a large scale, says Europol.

While many basic phishing scams were previously easier to spot thanks to obvious spelling and grammatical errors, “it is now possible to impersonate an organization or individual in a highly realistic manner, even with only a basic knowledge of English.”

In fact, the context of phishing emails can easily be adapted to the needs of the threat actor, ranging from fraudulent investment opportunities to impersonating a company’s email or its CEO, all in order to obtain critical information or extract money.

“Therefore, ChatGPT may offer criminals new opportunities, especially for crimes related to social manipulation, thanks to its ability to respond to messages in context and adopt a specific writing style,” the report continues. Various types of internet fraud can be given more credibility by using ChatGPT to generate fake social media interactions, for example, to advertise a fraudulent investment offer.

Terrorism and disinformation

The ability of artificial intelligence language models to detect and reproduce language patterns “not only facilitates phishing and online fraud, but can also be used to imitate the way specific people or groups speak.” “This ability can be abused on a large scale to deceive potential victims into placing their trust in the hands of criminals,” warns Europol.

ChatGPT also lends itself “to possible cases of abuse in the area of terrorism, propaganda and disinformation.” Specifically, it can be used to collect more information that facilitates terrorist activities, such as terrorist financing or anonymous file sharing.

This artificial intelligence language model is an ideal tool for propaganda and disinformation purposes, as it allows users to generate and disseminate messages that reflect a specific narrative with relatively little effort.

“For example, ChatGPT can be used to generate online propaganda on behalf of other actors in order to promote or defend certain points of view that have been discredited as disinformation or fake news,” the study highlights.

Finally, in addition to generating human-like language, ChatGPT is capable of producing code in several different programming languages. “For a potential criminal with little technical knowledge, this is an invaluable resource for producing malicious code,” warns Europol.
