Science and Tech

Cybercriminals bypass ChatGPT limitations to spread ‘malware’ via chatbot apps

13 Feb. (Portaltic/EP) –

Cybercriminals have begun distributing access to the application programming interface (API) of one of OpenAI's models, which allows the creation of 'malware' and 'phishing' emails, thereby bypassing the restrictions the company implemented to prevent abuse of its ChatGPT artificial intelligence (AI) tool.

ChatGPT is an artificial intelligence chatbot developed by OpenAI that is trained to hold text conversations. To do this, it is based on the GPT-3.5 language model and, in its most recent iterations, it has shown the ability to generate and link ideas, as well as to remember previous exchanges in a conversation.

To use this chatbot, you only need an account on its creator's platform, where it is available for free, so that "anyone with minimal resources and zero coding knowledge can exploit it," according to Eusebio Nieva, technical director at the software company Check Point.

A group of researchers from that cybersecurity company found that malicious actors had begun to use ChatGPT to run 'malware' campaigns alongside traditional methods, as Check Point reported in the middle of last month.

Despite this ease of use, OpenAI has established a series of restrictions to stop the creation of malicious content on its platform, thus preventing malicious actors from abusing its models.

Thus, if ChatGPT is asked to write a ‘phishing’ email impersonating an organization (such as a bank) or create ‘malware’, the model will not respond to that request, as the company has said in a new post.

However, cybercriminals have found a way around these restrictions and have shared the steps to do so on underground forums, where they explain how to use the OpenAI API instead.

According to Check Point researchers, the scammers propose using one of OpenAI's GPT-3 models, known as text-davinci-003, rather than ChatGPT, which is a variant of these models designed specifically for chatbot applications.

As Ars Technica notes, OpenAI offers developers the text-davinci-003 API, along with APIs for its other models, so they can integrate the bot into their own applications. The difference between this API and ChatGPT's interface is that the former does not enforce the restrictions on malicious content.
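For context, this is what a developer integration with the completions API looks like in practice: a plain HTTP request to OpenAI's completions endpoint that names the model directly. The sketch below only builds the request headers and body, without sending anything; the endpoint URL and fields follow OpenAI's public API documentation, while the prompt and parameter values are illustrative assumptions.

```python
import json

# Endpoint for OpenAI's legacy Completions API, as documented by OpenAI.
COMPLETIONS_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt: str, api_key: str) -> tuple[dict, str]:
    """Return (headers, JSON body) for a text-davinci-003 completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # per-account API key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "text-davinci-003",  # the GPT-3 variant named by Check Point
        "prompt": prompt,
        "max_tokens": 256,     # illustrative value
        "temperature": 0.7,    # illustrative value
    })
    return headers, body

headers, body = build_completion_request("Write a short greeting.", "sk-...")
```

Because the model is addressed directly like this, whatever text the caller puts in `prompt` is passed through without the chat interface's content filtering, which is the gap the forum posts describe.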

Check Point notes that this GPT-3 model API can be called freely from external applications, such as Telegram bots, a channel that cybercriminals have begun to use to create and distribute 'malware'.
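The Telegram integration described above amounts to a simple relay: a bot handler forwards each incoming message to the model and returns the output. The hypothetical sketch below shows that wiring as a plain function, with a stub standing in for the actual API call; in a real bot it would run inside a message callback of a library such as python-telegram-bot.

```python
from typing import Callable

def make_bot_handler(complete: Callable[[str], str]) -> Callable[[str], str]:
    """Build a message handler that relays user text to a completion model.

    `complete` is whatever function performs the model call; it is injected
    here so the relay logic itself stays independent of any API.
    """
    def handle(message_text: str) -> str:
        # The bot adds nothing of its own: the user's message becomes the
        # prompt, and the model's completion becomes the bot's reply.
        return complete(message_text)
    return handle

# Usage with a stub in place of the real API call:
handler = make_bot_handler(lambda prompt: f"[model output for: {prompt}]")
print(handler("hello"))
```

The point of the pattern is its thinness: the chat platform merely proxies prompts to the unrestricted API, so the platform's own rules never come into play.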

The cybersecurity company also notes on its blog that some users are publishing for free, or selling, code that uses text-davinci-003 to generate such malicious content.
