Companies using ChatGPT are making a troubling discovery: it doesn’t know how to keep a secret

The world is catching ChatGPT fever. We talk to this chatbot to answer our questions and to help us work better, but while we do, we are telling this conversational artificial intelligence engine a great many things. That is no problem when the topics are more or less banal, but things change, and a lot, inside companies.

Top secret. As reported in Dark Reading, a recent study by the data security firm Cyberhaven revealed that it detected and blocked attempts to enter sensitive data into ChatGPT from 4.2% of the 1.6 million workers at the companies it works with. That percentage may seem low, but it is genuinely worrying, because in every one of those cases the danger was that these workers would end up leaking sensitive data.
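How does a tool detect and block a sensitive prompt before it reaches a chatbot? Here is a minimal sketch of the general idea: scan the text against patterns for known sensitive content. The patterns and the blocking policy below are purely illustrative assumptions, not Cyberhaven's actual method.

```python
import re

# Hypothetical patterns a data-loss-prevention (DLP) filter might flag.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),   # secret-key-like tokens
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US Social Security numbers
    "label": re.compile(r"(?i)\bconfidential\b"),                # documents marked confidential
}

def should_block(prompt: str) -> bool:
    """Return True if the prompt matches any sensitive-data pattern."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS.values())

print(should_block("Summarize this CONFIDENTIAL 2023 strategy doc"))  # True
print(should_block("What is the capital of France?"))                 # False
```

Real products combine many more signals (document fingerprints, classification labels, context), but the core decision is the same: inspect the prompt before it leaves the company's perimeter.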

What sensitive data can be leaked? All kinds: from confidential company information to customer data or the source code of its applications. In one case, an executive copied and pasted the company's 2023 strategy document into ChatGPT and asked it to create a PowerPoint presentation. In another, a doctor entered a patient's name and medical condition and asked ChatGPT to draft a letter to the patient's insurance company.

Telling everything to ChatGPT. For these experts, the danger is real and will only grow. "There was this big migration of data from on-premises to the cloud, and the next big shift is going to be the migration of data into these generative apps," said Howard Ting, CEO of Cyberhaven.

Companies are starting to react. Some companies have already recognized the problem and are beginning to take action. JPMorgan has restricted the use of ChatGPT, and others such as Amazon, Microsoft (oh, the irony) and Walmart have issued notices to their employees: be careful how you use generative AI services.

ChatGPT doesn't know how to keep secrets. The problem is that when someone gives ChatGPT sensitive data, the engine has no idea it is sensitive. Cybersecurity experts warn of a new class of attacks, called "training data extraction" or "exfiltration via machine learning inference," which essentially manage to pull confidential information back out of ChatGPT.

And this will only grow. Yesterday's presentation of Microsoft 365 Copilot will only boost the use of these systems with business data, and although Redmond assured that privacy and the protection of confidential data are guaranteed in these environments, more and more workers will end up relying on these and other systems to help with their work. Whether companies like it or not, that can represent a potential threat, and one they will have to try to avoid at all costs.

Image: Ryoji Iwata

In Xataka: GPT-4: what it is, how it works, how to use it, what you can do with this artificial intelligence language model
