Science and Tech

OpenAI says its technology is safe, but misuse of ChatGPT could cause "significant harm to the world"

May 17. (Portaltic/EP) –

OpenAI CEO Sam Altman has expressed concern that its ChatGPT system could cause “significant harm to the world,” and believes that misuse of this technology, in the absence of solid regulation, could have catastrophic consequences.

This was stated by the chief executive of the company behind tools such as Whisper and DALL-E 2, who appeared before the Subcommittee on Privacy, Technology, and the Law of the United States Senate Judiciary Committee to ask that body to regulate the use of systems such as its chatbot.

Generative artificial intelligence (AI) systems have drawn criticism from authorities and users because malicious actors use them to run phishing campaigns or spread hate speech, among other harmful and criminal activities.

Given the concern that misuse of this technology has generated, Altman appeared before the upper house of the US Congress to express his willingness to work with legislators and institutions to address the risks the chatbot presents.

“We believe that it is essential to develop regulations that encourage the security of AI and, at the same time, guarantee that people can access the many benefits of the technology,” he said, according to a document published on the institution's website.

First of all, Altman maintained that OpenAI remains a non-profit organization dedicated to AI research and development, with a largely independent board of directors.

He also recalled that the AI models it develops “are freely available anywhere in the world” and that its mission “is to ensure that AI systems are built, implemented and used in a safe and beneficial way.”

To that end, the organization conducts “extensive testing,” hires outside experts, and improves the model's behavior with techniques such as reinforcement learning from human feedback (RLHF).

These safeguards reduce “inaccurate information, known as ‘hallucinations,’ hateful content, misinformation, and information related to weapons proliferation,” as well as sexual content.

He also noted that the company has been improving ChatGPT's capabilities over time, up to its latest iteration, GPT-4, “which is 40 percent more likely to offer factual content,” and said it has launched initiatives to make this AI more secure, such as a program that rewards users who detect errors in its systems and a grant program to fund security research.

While OpenAI intends to keep improving its service, Altman insisted that the improvements the organization makes internally must be backed by regulation of the technology's use, because generative AI “could cause significant harm to the world.”

“If this situation worsens, everything could go quite wrong, and we want to talk about it and work with the Government so that this does not happen,” the CEO explained at a hearing chaired by Senator Richard Blumenthal.

It is worth remembering that this is not the first time Altman has expressed his intention to protect users of this technology: a few days earlier he took part in an AI conference at the White House (United States), where he spoke in defense of copyright.

His testimony took place shortly after Universal Music Group, one of the largest labels in the music industry, urged streaming platforms such as Apple Music and Spotify to block the training of AI models on their catalogues, arguing that it would infringe the copyright of their artists.

Spotify, in fact, indicated a few days ago that it had removed “tens of thousands of songs” generated by this technology, and said it had strengthened its surveillance systems to detect this type of file and fraudulent activity.

Copyright disputes have also reached other cultural fields, such as art: artists have sued companies dedicated to digital art, including Stability AI, DeviantArt, and Midjourney, for infringing copyright in the development of works created with the AI tool Stable Diffusion.
