Science and Tech

Prosegur warns against the use of ChatGPT for 'phishing' or the spread of hate messages


March 16 –

The spread of ChatGPT, one of the latest generative artificial intelligence (AI) developments to be made available to the public, brings security advantages such as the automation of routine tasks and the creation of friendly 'chatbots', but it can also entail risks, such as the use of the technology to disseminate hate messages or to carry out Internet fraud such as 'phishing'.

This is the conclusion of an analysis carried out by Prosegur Research, the company's forum for reflection and analysis, which examines the implications of ChatGPT from a security perspective and identifies the main risks and opportunities that its application to different fields opens up.

Social polarization is one of the ten risks identified by the Prosegur Research study, which explains that, given generative artificial intelligence's ability to produce multimedia content, it can be used to spread messages of hate or discrimination, as well as messages of a radical or extremist nature.

Phishing, the automated generation of authentic-looking emails designed to deceive users into granting access to confidential information or computer systems, is another of the risks this technology entails, since its high-quality writing does not arouse suspicion.

The generation of fake news, "an issue that affects national security, damaging social cohesion and democratic principles", is another of the points highlighted by Prosegur, which also sees in 'doxing', which it describes as the dissemination of hoaxes to damage the credibility of organizations, a further negative aspect of this AI.

Possible information leaks or data theft, "quality" scams and frauds, and the generation of malicious chatbots aimed at obtaining sensitive information or achieving illicit economic gain are also on the dark side of this technology.

Prosegur also warns of 'phishing' through 'deepfakes', given this AI's ability to generate text, images and video and to simulate voices; of the generation of malicious code; and of the use of this new technology in the geopolitical and geoeconomic power struggle, since "data and technologies are at the center of the configuration of power".

AUTOMATION OF TEDIOUS TASKS AND ACCESS TO INFORMATION

However, this technology was not created for malicious use, and it can generate opportunities in the security field, such as the automation of routine tasks in security functions, which improves employee well-being by eliminating repetitive and tedious work, according to the report.

Just as there is a risk of malicious 'chatbots', the report adds, there are also "friendly" ones, with a more approachable and human profile, which improve interaction with customers and other people.

This AI allows access to huge amounts of security-relevant information in a structured way through the use of natural language, enhancing open source intelligence (OSINT) capabilities, and, according to the company, it can play a positive role in risk analysis and in the recognition of patterns and anomalies.

In terms of intelligence, Prosegur says ChatGPT can contribute to the generation of hypotheses, the identification of trends and the construction of scenarios, and, in the field of recommendations, "it is in no way a substitute for the work of an international security analyst, but it does support some tasks".

It also helps in predictive analytics, providing certain predictions with their associated probabilities based on the huge amount of data on which it is trained, while helping to detect 'phishing', identify vulnerabilities and generate secure passwords.

Generative artificial intelligences also have an educational side, according to the study, since they can serve as a first point of contact for learning about issues related to security, technologies or risks.
