
OpenAI acknowledges that cybercriminals use ChatGPT to support the creation of malware

Oct. 14 (Portaltic/EP) –

Cybercriminals are using OpenAI’s GPT large language models to help create malicious software with the support of artificial intelligence (AI), despite the safeguards built in by the technology company, as well as to create and disseminate content on social networks.

OpenAI has disrupted more than 20 malicious operations and deceptive networks that attempted to use its models. These include activities such as writing social media posts and debugging malware, which the company documents in the update to its report ‘Influence and Cyber Operations’.

With this analysis, OpenAI aims to understand the different ways in which malicious actors use advanced AI models for dangerous purposes, so that it can adequately anticipate and plan enforcement measures.

In this context, as of October 2024, the company has verified that cybercriminals use AI models “to perform tasks in a specific, intermediate phase of activity”: that is, after having acquired basic tools such as social media accounts and email addresses, but before “deploying finished products” such as social media posts and malware.

In any case, OpenAI indicates that although malicious actors use its models, it has not detected that this use has translated into meaningful new capabilities to exploit real-world vulnerabilities; that is, it has not led to “the substantial creation of new malware” or to the building of viral audiences.

The company has detected deceptive activity on social networks that reached a large audience, but in those cases, it clarifies, “the interest was the deception about the use of AI, not the use of AI itself.”

One of the cases analyzed is that of SweetSpecter, a China-based threat group that used OpenAI’s models “to support their offensive cyber operations while also carrying out phishing attacks”, which it directed against the technology company’s employees, posing as ChatGPT users seeking help from technical support in order to infect their computers and take control of them.

For its part, the Iranian actor STORM-0817 used AI models to debug code and obtain programming assistance for a new Android ‘malware’ that is still in development and “relatively rudimentary”, with standard surveillance capabilities. It also used ChatGPT to help implement its command-and-control infrastructure.

OpenAI has also stated that, in a year in which more than 2 billion people are called to vote in electoral processes across 50 countries around the world, it has not observed “any cases of election-related influence operations attracting viral engagement or building sustained audiences” through the use of its AI models.
