
Draft European Code of Practice for AI addresses risks of use and need for transparency


Nov. 15 (Portaltic/EP) –

The European Commission (EC) has published the first draft of the Code of Good Practice for general-purpose Artificial Intelligence (AI), which addresses the systemic risks of its use and the need for transparency in the application of copyright rules.

The document was drawn up with the participation of independent experts from four specialized working groups and aims to facilitate the correct application of a set of standards for the development and deployment of future safe and reliable AI models.

The four teams are made up of experts focused on transparency and copyright-related rules, the identification and assessment of systemic risks, the mitigation of technical risks, and the management of governance risks. According to the document, all of them work in close collaboration.

Key aspects of the Code include details on transparency and compliance with copyright rules for providers of general-purpose AI models, along with a set of systemic risks, risk assessment methodologies and mitigation measures for providers of the most advanced models based on this technology.

The draft also sets out a series of objectives that will ultimately form part of the Code. For example, it notes that providers of general-purpose AI models must ensure a good understanding of these models throughout the AI value chain, offering documentation on their training and testing processes and the results of their evaluations.

The draft Code, which will offer tools and recommended practices accessible to relevant actors in the AI ecosystem, also determines that developers of these models must establish a specific policy to identify and comply with European Union legislation on copyright and related rights.

With this document, the European Commission also proposes the creation of channels for reporting irregularities in the use of the technology, through the Artificial Intelligence Office, as well as the need to provide adequate protections for users.

The Code will continue to be debated next week by the four working groups that drafted it, which will meet from November 18 to 21, as well as at the Plenary Session of the Code of Good Practice, which will take place on November 22.

The final document is scheduled to be published and presented at the Closing Plenary in May 2025, so that the rules governing general-purpose AI models under the AI Act can come into force on August 1 of next year.
