EU’s Proposed Artificial Intelligence Regulation: The GDPR of AI
On April 21, 2021, the European Commission revealed its proposed Regulation laying down harmonized rules on artificial intelligence (the “Proposed Regulation”). If adopted, the Proposed Regulation will have significant implications for businesses both inside and outside the EU that help make AI available in the EU. As the first of its kind, the Proposed Regulation may also influence how other countries, including Canada, regulate AI, similar to how the EU General Data Protection Regulation (“GDPR”) has influenced how other countries regulate privacy. Like the GDPR, the Proposed Regulation provides for severe penalties for non-compliance. It contemplates administrative fines for certain offences of up to €30 million or, if the offender is a company, 6% of its total worldwide annual turnover (whichever is higher), with lesser fines for lesser offences.[1] Thus, the Proposed Regulation is a major regulatory development that has the potential to shape global standards in this rapidly evolving field.
In this article, we highlight several key aspects of the Proposed Regulation.
Objectives
The Proposed Regulation’s stated objective is to improve the functioning of the EU market by laying down harmonized rules governing the development, marketing, and use of AI in the EU.[2] This objective encompasses four sub-objectives: (i) ensuring AI systems in the EU are safe and respect fundamental rights and values; (ii) fostering investment and innovation in AI; (iii) enhancing governance and enforcement; and (iv) encouraging a single European market for AI.
To achieve these objectives, the Proposed Regulation establishes minimum standards aimed at addressing the risks associated with AI without unduly constraining or hindering innovation. The Proposed Regulation is designed to be both risk-based (i.e., tailored to the degree of risk associated with a particular use of AI) and future-proof (i.e., flexible enough to keep pace with technological innovation).
Scope of AI covered
The Proposed Regulation applies to “artificial intelligence systems”, defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.[3]
This broad definition is intended to be flexible enough to capture both existing forms of AI (e.g., self-driving automobiles, automated financial investing, smart assistants, manufacturing robots, chatbots) and future forms of AI.
Scope of persons covered
The Proposed Regulation’s most onerous compliance obligations fall on “providers”, defined as “a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark”.[4]
Less onerous obligations fall on:
- “users”, defined as “any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity”;[5]
- “importers”, defined as “any natural or legal person established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union”;[6] and
- “distributors”, defined as “any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties”.[7]
Extraterritorial application
Like the GDPR, the Proposed Regulation has extraterritorial application. For example, the Proposed Regulation applies to “providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union”.[8] Any Canadian business participating in an AI value chain that makes AI systems available in the EU is therefore potentially subject to the Proposed Regulation.[9]
Risk-based approach
Recognizing that AI systems can deliver a wide array of economic and societal benefits, the Proposed Regulation seeks to strike a balance between fostering investment and innovation in AI systems, and addressing the risks associated with AI systems. To achieve this balance, the Proposed Regulation adopts a risk-based approach, with three levels of regulation tailored to the degree of risk associated with a given AI system: (i) unacceptable risk; (ii) high risk; and (iii) low/minimal risk.
Unacceptable risk
Title II of the Proposed Regulation prohibits AI systems that pose an unacceptable risk to society. These include AI systems intended to manipulate persons subliminally, to exploit the vulnerabilities of particularly vulnerable groups, or to enable general-purpose social scoring by governments.[10]
High risk
Title III imposes strict regulations on AI systems that pose a high risk to health, safety, and/or fundamental rights. Much of the Proposed Regulation is dedicated to this category of AI systems.
Chapter 1 of Title III identifies two categories of high-risk AI systems. The first category comprises AI systems intended to be used as safety components of products, or as products themselves, that are required to undergo an ex-ante third-party conformity assessment.[11] This category would include AI systems used in critical infrastructure, medical devices, mobile devices, Internet of Things products, toys, and machinery. The second category comprises listed AI systems used for a purpose that poses a risk to health, safety, and/or fundamental rights.[12] This category would include AI systems used in the administration of justice, biometric identification, credit scoring, hiring decisions, and educational or vocational training. The Proposed Regulation includes a mechanism by which the Commission may add AI systems to the list according to prescribed criteria.[13]
Chapter 2 of Title III imposes data and data governance standards requiring providers of high-risk AI systems to train their models with training, validation, and testing data sets that meet a specified standard of quality (i.e., relevant, representative, free of errors, and complete). Chapter 2 also imposes requirements on providers of high-risk AI systems relating to documentation and record-keeping, transparency and understandability, human oversight, accuracy, robustness, and cybersecurity. The Proposed Regulation leaves it to providers of high-risk AI systems to decide which technical solutions to adopt in order to achieve compliance.
Chapter 3 of Title III imposes a range of requirements on providers, users, and other participants in the value chain (e.g., importers, distributors, and authorized representatives) for high-risk AI systems. These requirements vary depending on the category of person, authority, or other body subject to the regulation, with providers being subject to the most onerous requirements.
Chapter 4 of Title III sets the framework for conformity assessment bodies (“notified bodies”) to participate as independent third parties in the ex-ante conformity assessment process for high-risk AI systems.
Chapter 5 of Title III provides for detailed ex-ante conformity assessment procedures.
Low/minimal risk
The final category of AI systems is low/minimal-risk AI systems. In effect, if an AI system is neither prohibited nor high-risk, it is excluded from the Proposed Regulation, subject to the transparency requirements outlined below. For example, a simple spam filter would fall within this residual category. However, Title IX creates a framework for the creation of codes of conduct intended to encourage providers of low/minimal-risk AI systems to comply voluntarily with the requirements for high-risk AI systems.
Transparency Requirements
Title IV creates transparency requirements (which draw inspiration from the GDPR) for certain AI systems. These include AI systems that interact with humans (e.g., chatbots), are used to detect emotions or determine association with social categories based on biometric data, or generate or manipulate content (e.g., “deep fakes”). Providers and users of these AI systems must inform individuals of the operation of such systems so that they can make an informed decision on whether to interact with them.
Governance, Monitoring, Enforcement, and Penalties
Title VI establishes governance systems at the EU level and at the national (Member State) level. At the EU level, the Proposed Regulation establishes a European Artificial Intelligence Board composed of representatives from the Commission and the Member States. At the national level, Member States will have to designate one or more national competent authorities, including a national supervisory authority, for the purpose of supervising the application and implementation of the regulation. This structure resembles that established under the GDPR.
Title VII aims to facilitate the work of the Board and national authorities through the establishment of an EU-wide database for stand-alone high-risk AI systems. The database will be operated by the Commission and will house data supplied by providers of high-risk AI systems, who will be required to register their AI systems before placing them on the market or otherwise putting them into service.
Title VIII sets out the obligations of providers of high-risk AI systems with regard to post-market monitoring and the reporting and investigation of AI-related incidents and malfunctioning. Providers of high-risk AI systems will be required to inform national competent authorities about serious incidents or malfunctioning that constitute a breach of fundamental rights as soon as they become aware of them. National competent authorities will then investigate the incident or malfunctioning, collect the necessary information, and transmit it to the Commission.
As mentioned above, like the GDPR, the Proposed Regulation provides for severe penalties for non-compliance. It contemplates administrative fines for certain offences of up to €30 million or, if the offender is a company, 6% of its total worldwide annual turnover (whichever is higher), with lesser fines for lesser offences.[14]
Conclusion
Before it becomes law, the Proposed Regulation must pass through a potentially lengthy process involving both the European Parliament and the Member States, which will likely involve amendments. If adopted, it will be one of the first — if not the first — major AI regulation to arrive on the global stage. Legislators and regulators around the world are now beginning to move beyond high-level principles and policy frameworks to hard law, with severe penalties for non-compliance. As this progression continues, Canadian businesses would be well advised to stay up to date on AI-related legal developments and start to position themselves for future compliance.
To stay up to date on AI-related and other technology law developments, subscribe to our TechLex blog. To learn about how we can help your business navigate the rapidly evolving AI legal landscape, please contact Charles Morgan or Dan Glover.
____________________
[1] Articles 71(3)-(5).
[2] Recital 1.
[3] Article 3(1).
[4] Article 3(2).
[5] Article 3(4).
[6] Article 3(6).
[7] Article 3(7).
[8] Article 2(1)(c).
[9] Subject to an exclusion for AI systems developed or used exclusively for military purposes: Article 2(3).
[10] Article 5(1).
[11] Article 6(1).
[12] Article 6(2).
[13] Article 7.
[14] Articles 71(3)-(5).