
Europe is leading the race to regulate artificial intelligence. This is what you need to know

London — The European Union took a big step on Wednesday toward setting the world’s first standards on how companies can use artificial intelligence (AI).

It’s a bold step that Brussels hopes will pave the way for global standards for technology used in everything from chatbots like OpenAI’s ChatGPT to surgical procedures and fraud detection in banks.

“Today we made history,” Brando Benifei, a member of the European Parliament working on the EU’s Artificial Intelligence Act, told reporters.

Lawmakers agreed on a draft version of the law, which will now be negotiated with the Council of the European Union and EU member states before becoming law.

“While big tech companies are sounding the alarm about their own creations, Europe has gone ahead and proposed a concrete response to the risks that AI is beginning to pose,” added Benifei.

Hundreds of leading AI scientists and researchers warned last month that the technology posed a risk of extinction for humanity, and several prominent figures, including Microsoft President Brad Smith and OpenAI CEO Sam Altman, have called for more regulation.

At the Yale CEO Summit this week, more than 40% of business leaders, including Walmart boss Doug McMillon and Coca-Cola CEO James Quincey, said AI had the potential to destroy humanity five to ten years from now.

In this context, the EU’s Artificial Intelligence Act aims to “promote the adoption of reliable and human-centred artificial intelligence and guarantee a high level of protection of health, safety, fundamental rights, democracy and the rule of law, as well as the environment, against its harmful effects”.

These are the key points:

High risk, low risk, prohibited

Once approved, the law will apply to anyone developing and deploying AI systems in the EU, including companies located outside the bloc.

The scope of regulation will depend on the risks created by a particular application, from minimal to “unacceptable.”

Systems that fall into this last category are prohibited outright. These include real-time facial recognition systems in public spaces, predictive policing tools, and social scoring systems, such as those used in China, that assign people a score based on their behavior.

The legislation also places strict restrictions on “high risk” AI applications, which are those that threaten “significant harm to people’s health, safety, fundamental rights or the environment.”

These include systems used to influence voters in elections, as well as the recommendation systems of social media platforms with more than 45 million users, a list that would include Facebook, Twitter and Instagram.

The law also sets transparency requirements for AI systems.

For example, systems like ChatGPT would need to disclose that their content is AI-generated, distinguish fake images from real ones, and provide safeguards against generating illegal content.

Detailed summaries of the copyrighted data used to train these AI systems would also have to be published.

AI systems with minimal or no risk, such as spam filters, fall largely outside the rules.
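To make the tiered structure above easier to see at a glance, here is a minimal, purely illustrative Python sketch that maps the article’s own examples onto the levels of obligation it describes (the tier names and the mapping are paraphrases for illustration, not text from the legislation):

```python
from enum import Enum

# Illustrative tiers paraphrased from the article; not the Act's own wording.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict restrictions apply"
    LIMITED = "transparency obligations, e.g. disclosing AI-generated content"
    MINIMAL = "largely outside the rules"

# Hypothetical mapping, drawn only from the examples the article itself gives.
EXAMPLES = {
    "real-time facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "predictive policing tools": RiskTier.UNACCEPTABLE,
    "social scoring systems": RiskTier.UNACCEPTABLE,
    "systems used to influence voters in elections": RiskTier.HIGH,
    "recommender systems on large social platforms": RiskTier.HIGH,
    "generative chatbots such as ChatGPT": RiskTier.LIMITED,
    "spam filters": RiskTier.MINIMAL,
}

# Print each example alongside the obligations its tier carries.
for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```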

Heavy fines

According to Racheal Muldoon, a trial lawyer at London-based Maitland Chambers, most AI systems are likely to fall into high-risk or prohibited categories, exposing their owners to potentially huge fines if they break regulations.

Engaging in prohibited AI practices could result in a fine of up to 40 million euros (US$43 million) or up to 7% of a company’s global annual turnover, whichever is higher.

This goes far beyond Europe’s data protection law, the General Data Protection Regulation (GDPR), under which Meta was fined 1.2 billion euros (US$1.3 billion) last month. For its most serious violations, the GDPR provides for fines of up to 20 million euros (US$21.6 million), or up to 4% of a company’s global annual turnover.
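To see how the “whichever is higher” rule scales with company size, here is a minimal Python sketch comparing the two caps quoted above for a hypothetical company with 100 billion euros in annual turnover (an illustration of the arithmetic, not a legal calculator; the function name and the example turnover are assumptions for this sketch):

```python
def max_fine(global_turnover_eur: float,
             flat_cap_eur: float,
             turnover_pct: float) -> float:
    """Return the maximum possible fine: the flat cap or the
    percentage-of-turnover cap, whichever is higher."""
    return max(flat_cap_eur, turnover_pct * global_turnover_eur)

# Hypothetical company with €100 billion in global annual turnover.
turnover = 100e9

# Caps as quoted in the article: AI Act €40M / 7%, GDPR €20M / 4%.
ai_act_cap = max_fine(turnover, flat_cap_eur=40e6, turnover_pct=0.07)
gdpr_cap = max_fine(turnover, flat_cap_eur=20e6, turnover_pct=0.04)

print(f"AI Act maximum fine: €{ai_act_cap:,.0f}")  # €7,000,000,000
print(f"GDPR maximum fine:   €{gdpr_cap:,.0f}")    # €4,000,000,000
```

At that scale the percentage-based cap dominates under both regimes, which is why maximum fines for large companies run to billions rather than tens of millions of euros.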

The fines under the AI law serve as a “rallying cry for legislators to say ‘take this seriously,’” Muldoon said.

Protecting innovation

At the same time, the penalties will be “proportionate” and take into account the market position of small providers, suggesting there may be some leniency for start-ups.

The law also requires EU states to create at least one regulatory “sandbox,” a controlled environment for testing AI systems before they are deployed.

“What we wanted to achieve with this proposal is balance,” MEP Dragoș Tudorache told reporters. The law protects citizens while “promoting innovation, not hindering creativity, and the deployment and development of AI in Europe,” he added.

The law gives citizens the right to file complaints against AI system providers and provides for the creation of a European AI Office to monitor compliance with the law. It also requires Member States to designate national AI supervisory authorities.

Company response

Microsoft, which, along with Google, is at the forefront of AI development globally, welcomed the advances in the law but said it expected “more adjustments.”

“We believe that AI requires legislative protections, international alignment efforts, and significant voluntary action by the companies that develop and deploy AI,” a Microsoft spokesperson said in a statement.


IBM, for its part, called on European Union lawmakers to take a “risk-based approach” and suggested four “key improvements” to the bill, including greater clarity around high-risk AI “so that only truly high-risk use cases are captured.”

According to Muldoon, the law may not take effect until 2026, and it is likely to be revised given how fast AI is advancing. The legislation has already gone through several updates since drafting began in 2021.

“The law will expand in scope as technology develops,” Muldoon said.
