
EU AIA: agreement on Europe’s new AI regulatory opus

The EU Council and Parliament agreed on December 8, 2023 to a groundbreaking, but controversial, law (the AIA) to regulate certain AI systems. The Commission proposal, originally presented in April 2021, is a key element of the EU’s policy intended to foster the development, investment and uptake across the EU of safe and lawful AI that respects fundamental rights and mitigates risks associated with AI systems. It does so by classifying AI systems into categories: those deemed unacceptable for use, those that are “high risk” or pose “systemic risks”, and those with lesser risks that still require at least some level of regulatory control, and regulating them according to that classification.

Here are some of the highlights of the provisional AIA.

Unacceptable AI systems

Some AI systems will be banned or banned with exceptions. These unacceptable systems include biometric systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race); untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; emotion recognition in the workplace and educational institutions; social scoring based on social behaviour or personal characteristics; AI systems that manipulate human behaviour to circumvent their free will; and AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

The use of real-time remote biometric identification systems in publicly accessible spaces will be banned except where the use is strictly necessary for law enforcement purposes subject to prior judicial authorization in cases of victims of certain crimes, prevention of genuine, present, or foreseeable threats, such as terrorist attacks, and searches for people suspected of the most serious crimes.

High Risk AI systems

Some AI systems will be treated as “high risk” (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). The provisional agreement provides for a fundamental rights impact assessment before a high-risk AI system is put on the market by its deployers. This will now be applicable to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour will also now be classified as high-risk.

The provisional agreement also provides for increased transparency regarding the use of high-risk AI systems. Newly added provisions put emphasis on an obligation for users of an emotion recognition system to inform natural persons when they are being exposed to such a system.

AI systems presenting only limited risk will be subject to lighter transparency obligations. AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured.

General AI systems

Because major developments in generative AI (GenAI) emerged only after the AIA was first proposed, a compromise agreement was reached on a two-tier regulatory approach.

After considerable debate and controversy, foundation models that meet certain computational thresholds (high-impact general-purpose AI models) and could create “systemic risks” are subject to more stringent regulation. If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. Until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.

Other general purpose AI systems (GPAI) and the GPAI models they are based on, will have to adhere to transparency requirements. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training. The provisional agreement calls for disclosing that content was AI-generated so users can make informed decisions on further use.

The compromise EU position was reached after concerns raised by Germany, France, and Italy that it was not appropriate to regulate a technology (rather than an application) and about the impacts of regulating foundation models on innovation.

AI Office

An AI Office, backed by a scientific panel, is to be set up within the EU Commission to oversee these high-impact GPAI models, contribute to fostering standards and testing practices, and enforce the common rules in all member states.

Fines

The fines for violations of the AIA are set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. This would be €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AIA’s obligations, and €7.5 million or 1.5% for the supply of incorrect information.
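The “whichever is higher” rule can be sketched as a simple calculation. The following is an illustrative sketch only (not legal advice); the tier names and the example turnover figure are hypothetical, while the fixed amounts and percentages come from the provisional agreement described above.

```python
# Illustrative sketch of the AIA fine ceilings: each tier is the
# higher of a fixed amount and a percentage of the company's global
# annual turnover in the previous financial year.

# Tier name (hypothetical labels) -> (fixed amount in EUR, turnover %)
FINE_TIERS = {
    "banned_ai_practices": (35_000_000, 0.07),    # €35M or 7%
    "other_obligations": (15_000_000, 0.03),      # €15M or 3%
    "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5%
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the fine ceiling for a violation tier: the higher of the
    fixed amount and the turnover-based percentage."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * global_annual_turnover_eur)

# Example: a company with EUR 2 billion in turnover violating the ban
# on prohibited AI practices faces up to max(€35M, 7% of €2B) = €140M.
print(max_fine("banned_ai_practices", 2_000_000_000))  # 140000000.0
```

For smaller companies the fixed amount dominates: at €100 million in turnover, the “incorrect information” ceiling is €7.5 million, since 1.5% of turnover (€1.5 million) is lower.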

The provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AIA.

What’s next for the AIA

Work will continue at technical level in the coming weeks to finalise the details of the new regulation. The presidency will submit the compromise text to the member states’ representatives for endorsement once this work has been concluded. The entire text will need to be confirmed by both institutions before formal adoption by the co-legislators.

The provisional agreement provides that the AIA should apply two years after its entry into force, with some exceptions for specific provisions.

AIA v international approaches

There are many differences between the AIA and the US approach to regulating AI systems. The US is largely relying on existing regulatory agencies and existing laws to regulate AI systems using a hub and spoke structure backed up by the strong US White House Executive Order on AI. The UK has also decided not to follow the EU approach, also relying on a hub and spoke regulatory model.

Canada’s beleaguered and widely castigated proposed AI law, AIDA, has some similarities with the AIA but also some marked differences (assessed based on the considerable amendments being proposed by the ISED Minister and assuming they will be adopted by Parliament). I intend to consider some of them in my forthcoming blog that examines the amendments to AIDA being proposed by the Minister.


This article was first posted on www.barrysookman.com
