
OpenAI and Anthropic will share their models with the US government before releasing them. The reason: to check that they are safe


A year ago, the Biden administration launched the US AI Safety Institute, an initiative whose objective, as explained at the time, is to evaluate the known and emerging risks of foundation models of artificial intelligence. That is, to evaluate the possible impact of models trained on huge amounts of data, such as GPT, Claude, Gemini or LLaMA.

Industry collaboration is vital to this. After all, without access to the models, little or nothing can be studied. This week, the first agreement between the US government and two leading companies in the sector was announced: OpenAI (GPT) and Anthropic (Claude) will share their models with the government for study before releasing them.

Formal collaboration. According to the AI Safety Institute, which is part of the National Institute of Standards and Technology (NIST), which in turn is part of the US Department of Commerce, the government, OpenAI and Anthropic have established a “formal collaboration on the research, testing and evaluation of the safety” of their artificial intelligence models.

What will they do? Grant access to their models, of course. As the institution details in a statement, it will have “access to each company’s major new models before and after their public release.” This will allow “collaborative research on how to assess capabilities and safety risks, as well as methods to mitigate those risks.”

In return, the AI Safety Institute “plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models,” something it will do in “close collaboration” with the UK AI Safety Institute. In short, the goal is to audit the models, detect potential risks and find possible solutions, all with the ultimate aim of achieving responsible AI development.

How to regulate AI. This is a small step, but a significant one. Regulating artificial intelligence is not easy, as there are countless nuances to take into account. There is also a significant associated risk: overly strict regulation could stifle innovation and harm smaller developers above all.

Not to mention that, although the United States is home to some of the most cutting-edge companies in the sector, China is also making moves, and in a big way. The United States has OpenAI, Anthropic and Meta, among other firms, but China holds far more patents related to artificial intelligence. The numbers speak for themselves: between 2013 and 2023, China registered 38,000 patents; the United States, “only” 6,300.


“These agreements are just the beginning, but they are an important milestone as we work to help responsibly manage the future of AI,” said Elizabeth Kelly, director of the US AI Safety Institute. Jason Kwon, OpenAI’s chief strategy officer, added: “We believe the Institute has a critical role to play in defining American leadership in the responsible development of artificial intelligence, and we hope that our work together will provide a framework on which the rest of the world can build.”


Image | Xataka with Mockuuups Studio

Self-regulation. It is worth noting that on July 21, 2023, seven companies (Amazon, Google, OpenAI, Microsoft, Meta, Anthropic and Inflection) committed to a set of eight voluntary commitments related to, among other things, transparency, testing, risk management and investment in cybersecurity. They can all be read here (PDF).

While that agreement is non-binding, OpenAI and Anthropic’s collaboration falls under the first of those commitments, which states that “companies agree to conduct internal and external security testing of their AI systems prior to launch. These tests, which will be carried out in part by independent experts, protect against some of the most significant sources of AI risk, such as biosecurity and cybersecurity, as well as its broader societal effects.”

Image | Unsplash 1, 2, edited by Xataka

At Xataka | ChatGPT and its rivals are boring. And turning them into funny chatbots is not going to be easy
