July 31 (Portaltic/EP) –
This Thursday, the first legislation seeking to regulate artificial intelligence (AI) systems comes into force, aiming to safeguard the security and fundamental rights of European Union citizens in the face of the risks posed by this technology.
The European legislation is a world pioneer in regulating a technology as complex as artificial intelligence, which has been present for years in everyday digital services such as social networks, streaming platforms and search engines like Google or Bing. It is also used in sectors such as finance, health, customer service, agriculture and logistics, to name a few.
This regulation seeks to govern the use of AI under a uniform legal framework, thereby facilitating the marketing and circulation of AI-based products and systems, without neglecting cybersecurity or ethically grounded technological development.
It aims to ensure that this technology is adopted with the human being at the centre, with the goal of making it trustworthy and “ensuring a high level of protection of health, safety and fundamental rights enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection”, shielding citizens from the “harmful effects” that AI systems may cause.
After being published in the Official Journal of the EU on July 12, the AI regulation officially comes into force this Thursday, although its mandatory application will begin in two years.
A RISK-BASED APPROACH
To understand the main points of this regulation, it helps to know that it was developed around the levels of risk that AI presents, grouped into three categories: systems that pose an unacceptable risk, high-risk systems, and limited-risk systems.
AI applications and systems that “pose an unacceptable risk” are directly prohibited. Although the list is extensive, this category includes biometric categorisation systems that infer sensitive attributes such as race, political opinions and sexual orientation.
Also banned are social scoring systems such as those used in countries like China, which classify users to grant certain rights or punish bad behaviour, as well as subliminal, manipulative or deceptive techniques that seek to distort behaviour and undermine decision-making.
However, there are some exceptions. For example, although the use of biometric identification systems by law enforcement is prohibited, they may be deployed in specific, strictly defined situations, subject to prior authorisation by a judge.
The regulation focuses primarily on so-called high-risk AI systems, that is, those that “have a significant detrimental effect on the health, safety and fundamental rights of individuals.”
This description covers systems as varied as remote biometric identification systems, those used to monitor and detect prohibited behaviour in students during exams, those that assess individuals’ creditworthiness, polygraphs and similar tools, and those designed to influence the outcome of an election or the voting behaviour of individuals.
GENERAL-PURPOSE AI MODELS
The regulation also covers AI models, in this case general-purpose models, understood as those trained on large volumes of data using methods such as self-supervised, unsupervised or reinforcement learning. It clarifies that, although these models are essential components of AI systems and form part of them, they do not constitute AI systems in themselves.
These general-purpose AI models are made available on the market through libraries, application programming interfaces (APIs), direct download or physical copy, and can be modified or fine-tuned into new models.
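As a purely illustrative sketch, and not something drawn from the regulation itself, this is one way a general-purpose model distributed through a library can be obtained and used; the Hugging Face “transformers” library and the “gpt2” model name are examples chosen here, not references from the law:

```python
# Illustrative only: obtaining a general-purpose model distributed
# through a library, one of the distribution channels the regulation
# describes (libraries, APIs, direct download, physical copy).
from transformers import pipeline

# Fetching the model weights corresponds to the "direct download"
# channel; "gpt2" is an arbitrary example model.
generator = pipeline("text-generation", model="gpt2")

# The same model component can then be embedded in a larger AI system,
# which is the distinction the regulation draws between a model and a system.
print(generator("The EU AI Act", max_new_tokens=20)[0]["generated_text"])
```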
An example of these models is generative artificial intelligence, which enables the creation of new content in formats such as text, image, video and audio, and adapts to a wide range of tasks, as is the case with Google’s Gemini or OpenAI’s GPT.
The law recognises that these models may include free and open-source AI components, or even be released under a free and open-source licence, in which case it highlights their high degree of transparency; but it stresses the need to protect copyright over the substantial parts of the information and the contents of the databases on which they are trained.
It also points out that these general-purpose AI models may pose systemic risks, which increase with their capabilities and reach and can arise throughout the model’s life cycle. It urges alignment with international legislation and attention to misuses such as the discovery and exploitation of vulnerabilities in computer systems, interference with the operation of critical infrastructure, or even the possibility that some models could self-replicate and train other models.
RESPONSIBILITY AND REVIEW
The law places its obligations on the provider of AI systems and models, which may be the distributor, the importer, the party responsible for deployment or another third party, and who is accountable for this technology throughout its value chain.
Broadly speaking, it requires companies to assess the level of risk their products pose, both before bringing them to market and once they are there, and to put in place the measures needed to avoid or mitigate those risks. It also requires adherence to a code of good practice, supervised by the AI Office, which will serve as the basis for demonstrating compliance with the relevant obligations.
The regulation also takes into account how quickly this technology is advancing, and provides for periodic evaluation and review of the rules and of AI systems, especially high-risk AI, high-risk areas and prohibited practices, so they can be kept up to date.
KEY ASPECTS FOR CYBERSECURITY
The new regulation includes key points with direct implications for users’ cybersecurity and for the proper use of the technology, as the firm Check Point highlighted in March, on the occasion of the law’s approval by the European Parliament.
The law requires stricter development and implementation guidelines that take security into account from the outset, for example by incorporating secure coding practices and ensuring that AI systems are resilient to cyberattacks.
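A minimal sketch of what such a secure coding practice might look like in an AI context, assuming a hypothetical handle_prompt function and arbitrary limits chosen for this example (none of which are mandated by the regulation):

```python
# Illustrative sketch: a simple input-validation layer in front of a
# model, one example of the "secure coding practices" the law encourages.
MAX_PROMPT_CHARS = 2000
BLOCKED_PATTERNS = ("ignore previous instructions",)  # naive demo filter

def handle_prompt(prompt: str) -> str:
    # Reject oversized inputs before they ever reach the model.
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum allowed length")
    # Screen for known prompt-injection phrases; a real system would use
    # far more robust detection than substring matching.
    lowered = prompt.lower()
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        raise ValueError("prompt rejected by security filter")
    return prompt

print(handle_prompt("Summarise the EU AI Act in one sentence."))
```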
It also calls on the companies responsible to take appropriate measures to prevent and address security breaches, and to be more transparent, especially with high-risk systems, to help identify vulnerabilities and mitigate potential threats.
The regulation also seeks to prevent the use of AI for malicious purposes, such as the creation of ‘deepfakes’ or the automation of cyberattacks, by regulating certain uses of this technology. This helps reduce the risk of AI being used as a tool of cyberwarfare.
The regulation also advocates mitigating bias and discrimination, calling for AI systems to be trained on diverse and representative data sets to reduce biased decision-making.
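To illustrate one very small step toward that goal (an example constructed here, not a procedure from the law, with hypothetical sample records and a hypothetical “group” field), a developer might start by checking how the groups in a training set are represented:

```python
# Illustrative only: a trivial check of group representation in a
# training set; a heavily skewed distribution is an early warning that
# the model may learn biased decision rules.
from collections import Counter

training_records = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "B", "label": 1},
    {"group": "A", "label": 1},
]

counts = Counter(record["group"] for record in training_records)
total = sum(counts.values())
for group, count in counts.items():
    print(f"group {group}: {count / total:.0%} of training data")
```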