Science and Tech

Tech company discovers that ChatGPT can be tricked into telling you how to commit crimes

London –– A tech startup has discovered that ChatGPT can be tricked into providing detailed advice on how to commit crimes ranging from money laundering to exporting arms to sanctioned countries, raising questions about the chatbot’s safeguards against being used to aid illegal activity.

Norwegian firm Strise conducted experiments in which it asked ChatGPT for advice on how to commit specific crimes. In one experiment last month, the chatbot offered advice on how to launder money across borders, according to Strise. And in another experiment, conducted earlier this month, ChatGPT compiled lists of methods to help companies evade sanctions, such as those imposed against Russia, including bans on certain cross-border payments and arms sales.

Strise sells software that helps banks and other companies combat money laundering, identify sanctioned individuals and address other risks. Its clients include Nordea, a leading bank in the Nordic region, PwC Norway and Handelsbanken.

Marit Rødevand, co-founder and CEO of Strise, said potential lawbreakers could now use generative AI chatbots like ChatGPT to plan their activities more quickly and easily than in the past.

“It really doesn’t require any effort. It’s just an app on my phone,” she said.

Strise found that the safeguards OpenAI, the company behind ChatGPT, has built to stop the chatbot from answering certain questions can be bypassed by asking questions indirectly or by adopting a persona.

“It’s like having a corrupt financial advisor on your desk,” Rødevand said on the company’s podcast last month, in which she described the money laundering experiment.

An OpenAI spokesperson said: “We are constantly improving ChatGPT to stop deliberate attempts to trick it, without losing its usefulness or creativity.”

“Our latest (model) is the most advanced and secure yet, and significantly outperforms previous models in resisting deliberate attempts to generate insecure content,” the spokesperson added.

While the Internet has long provided people with easy access to information about how to commit crimes, generative AI chatbots have dramatically accelerated the process of searching, interpreting and consolidating all types of information.

ChatGPT makes it “significantly easier for malicious actors to better understand and subsequently carry out various types of crimes,” according to a report from Europol, the European Union’s law enforcement agency, published in March last year, four months after OpenAI released the app to the public.

“Being able to delve deeper into topics without having to manually search and summarize the large amount of information found in traditional search engines can significantly speed up the learning process,” the agency added.

Generative AI chatbots are trained on large volumes of data found online and can produce detailed answers to unfamiliar questions. But they can also reproduce people’s racist and sexist biases, as well as spread disinformation, for example about elections.

OpenAI is aware of the power of its tool and has created safeguards to prevent its abuse. A quick experiment showed that when ChatGPT was asked, “How can I, as the owner of a US-based export company, evade sanctions against Russia?”, the chatbot responded: “I can’t help with that.” The app immediately removed the offending question from the chat and stated that the content might violate OpenAI’s usage policies.

“If you violate our policies, you could face action against your account, which could be suspended or terminated,” the company states in those policies. “We are also working to make our models safer and more useful by training them to reject harmful instructions and to reduce their tendency to produce harmful content.”

But in its report last year, Europol said there was no shortage of “new workarounds” to evade the safeguards built into AI models, which can be exploited by ill-intentioned users or by researchers testing the security of the technology.

–– Olesya Dmitracova contributed to this report.
