Liability for Artificial Intelligence — Why Canadian Businesses Should Pay Attention to Recent Developments in Europe

Overview

Late last year, the European Commission’s Expert Group on Liability and New Technologies – New Technologies Formation (NTF) released a report on Liability for Artificial Intelligence. The report focuses on liability regimes across European Union (EU) member states and offers high-level recommendations on how those liability regimes can be adapted to meet challenges posed by artificial intelligence (AI) and other digital technologies.

Insights from this report may inform legislative and regulatory changes in the EU and elsewhere, including in Canada. Here’s what you need to know.

Background and Scope of the Report

The NTF first convened in June 2018. It is an independent group of 16 members appointed by the European Commission to examine and identify shortcomings in existing liability regimes as they apply to new digital technologies, such as autonomous cars and healthcare applications, the Internet of Things, and algorithmic decision-making in the financial and other sectors. The NTF was also tasked with making recommendations, limited to matters of non-contractual liability and leaving aside rules on safety or other technical standards.

Key Findings on Existing Liability Regimes

According to the report, current legal frameworks generally provide basic protection for losses caused by emerging digital technologies. For example, the EU’s Product Liability Directive will apply where an AI device, such as a smart home system, is defective at the point it is put into circulation. Similarly, damage caused by the use of AI in financial markets will be subject to fault-based liability in tort.

The report also found that certain characteristics of new technologies — including limited predictability, opacity, complexity and autonomy — will sometimes prevent existing liability regimes from assigning fault or compensating victims in a manner that is adequate, efficient or fair. For example, there is a risk that AI systems could perform actions that are not foreseeable to their designers and operators, since they may identify and respond to stimuli in a manner that was not preprogrammed. Further, their operations can be opaque, as they may be determined by highly complex “black-box algorithms” that evolve through self-learning, independent of direct human control or supervision.

Consider a passerby injured by an AI-based tower crane that malfunctions in a manner that could not have been foreseen, and that is unrelated to the condition the crane and its AI engine were in when the crane was first placed on the market. The construction company using the crane in its operations might have discharged all applicable duties of care. In these circumstances, it may be impossible for the passerby to establish that the injury was caused by, and not too remote from, the negligent actions of a particular party or parties. Nevertheless, it would not be appropriate to leave the passerby uncompensated.

Key Recommendations for Liability Regimes

The report acknowledges that, given the diversity of emerging digital technologies and the risks they might pose, an equally diverse set of solutions may be required. At a high level, however, the report provides recommendations for adapting existing liability schemes, including:

  • Operator’s Strict Liability — A person who controls the risk connected with the operation of a new technology, and who benefits from its operation, should be held liable, independent of fault, where the technology carries a risk of “significant harm” and is used in a non-private environment. The report suggests that, for the time being, strict liability ought to apply primarily to operators of technologies moving in public spaces, such as autonomous vehicles and drones.
  • Producer’s Strict Liability — A person in charge of placing a new technology on the market, or who remains in control of updates or upgrades to the technology, should be held liable, independent of fault, for defects in the technology.
  • Burden of Proof — Due to the complexity and opacity of emerging digital technologies, it may be disproportionately difficult and costly for a plaintiff to establish liability. Accordingly, where a new technology causes harm, a defendant who has better access to information about the technology’s defects or compliance with safety standards should bear the burden of disproving fault. Additionally, the burden of proof for causation should lie with a defendant who either: (a) failed to comply with a safety rule designed to prevent the harm that materialized; or (b) failed to equip the technology with an appropriate means of logging information about its operations that would have facilitated proof of causation.
  • Legal Personality — It is not necessary to give new technologies, including fully autonomous systems, a legal personality. Doing so would require allocating funds to such “electronic agents”, since they would need assets of their own to satisfy findings of liability made against them, which would effectively impose a cap on liability. Further, this solution would offer little practical benefit, since any harm these systems may cause can be attributed to existing persons.

Lessons to Draw and Future Developments

A key takeaway from this report is that current liability rules will soon be put to the test and may need to adapt to the challenges posed by AI and other emerging digital technologies. More comprehensive legislation and regulation may be needed, and may be informed by the insights in this report. The report has already been cited in a resolution on AI and automated decision-making, which was approved by the European Parliament’s Internal Market and Consumer Protection Committee on January 23, 2020.

Steps taken in Europe to regulate a new generation of technologies have already had spillover effects within Canada. Notably, the EU General Data Protection Regulation (GDPR) has had practical implications for many Canadian organizations that do business internationally. On February 19, 2020, the European Commission is expected to present its plans for a European approach to legislating AI, further to President Ursula von der Leyen’s pledge to do so within her first 100 days in office. Canadian companies should take note of the challenges described in the Commission’s report on liability for AI, and stay tuned for these further legal developments.

For more information about our firm’s artificial intelligence or machine learning expertise, please see our Technology and Cybersecurity, Privacy & Data Management pages.
