
Could AI get you sued? Artificial intelligence and litigation risk

This article is part of our Artificial Intelligence Insights Series, written by McCarthy Tétrault’s multidisciplinary Cyber/Data team. This series brings you practical and integrative perspectives on the ways in which AI is transforming industries, and how you can stay ahead of the curve.

View other blog posts in the series here.

 

To maximize hiring efficiency, your business has decided to use artificial intelligence technology to screen job applicants and recommend top candidates. Unbeknownst to your business, the technology’s algorithm is biased in favour of white male candidates. Your business is sued for hiring discrimination. Could the business be held liable? If not, who could be?

AI technology raises tricky questions about the scope of potential civil liability, including for AI-driven privacy violations, discrimination, failure to spot compliance issues, accidents, and security breaches. It may also be difficult to foresee which legal or natural person(s) could, or should, be held responsible for harm caused by AI.

Challenges in determining liability

When human beings make decisions, the nexus between the decision and the decision maker is typically obvious. This makes establishing liability conceptually straightforward.

Establishing liability for AI-caused harm is potentially more complex. Three potential problems arise:

  1. The autonomy problem: If an alleged harm has been caused by an AI system that uses machine learning to make decisions without a “human in the loop”, general liability principles – which are founded on agency, control, and foreseeability by natural or legal persons – may be difficult to apply. The more autonomous AI technology becomes, the harder it may be to identify the party that caused the damage.
  2. The multi-stakeholder problem: More complex AI systems involve more stakeholders in their development and deployment.[1] If such a system has caused harm, it may be difficult to determine which stakeholder or stakeholders to hold responsible.
  3. The “black box” problem: Programmers may lack an exact understanding of how an autonomous AI system made an impugned decision. How would causation or fault be established – or disproved – if the harmful output cannot be explained or proven?

Faced with liability questions arising out of AI technology, courts will start with legal frameworks that long predate AI, including tort law and product liability law. These frameworks could, however, prove unsatisfactory for addressing potential liability in the context of advanced, autonomous AI systems. While existing liability principles may address cases in which an AI system’s act or omission can be traced to a specific actor’s design defect, they may be inadequate to contend with the autonomy, multi-stakeholder, and “black box” problems described above.

Under Quebec’s civil law regime, the issues of foreseeable or direct damages (depending on the applicable liability regime) and of the “autonomous act of the thing”[2] will raise interesting debates in the context of AI applications. Law, and its corollary liability, operate within the sphere of human action, from which AI is by nature excluded. It will therefore be necessary to consider whether the applicable rules of civil liability can, or must, be adapted to the specific features of situations involving AI. The challenge will be to preserve the universal character of the law while allowing AI to emerge in a legally secure space.

Bridging gaps between old laws and new technologies

AI is constantly evolving. The laws that govern AI systems and their users, both statutory and common law, will have to be versatile and continuously updated to remain adequately responsive.

One novel and controversial proposal is to confer legal rights and a corresponding set of legal duties on AI systems (similar to laws ascribing legal personhood to corporations). This would allow the AI itself to be held directly accountable for any harm it causes.

Legal personality has been granted to ships in the United Kingdom and, more recently, Saudi Arabia granted citizenship to a robot named Sophia.[3] In 2015, the Civil Code of Quebec was amended to affirm that animals are not “things” but sentient beings with biological imperatives.[4]

As is the case for animals, one could consider a sui generis legal status for AI systems that mimic human cognitive processes. Though this approach is likely to remain more theoretical than real – in the short and medium term, at least – it may supply a useful analytical lens through which policymakers and courts can reckon with the challenge of private law regulation of AI systems.

Another possibility is developing a strict liability regime to ensure compensation from AI operators that expose third parties to an increased risk of harm (should that harm ever materialize), combined with liability insurance schemes to protect the AI operators themselves.[5] Most recently, the European Parliament adopted its Resolution on a Civil Liability Regime for Artificial Intelligence, which would apply strict liability to any operator of a high-risk AI system, even where the operator acted with due diligence.[6] This regime closely parallels strict liability principles in the United States, which hold those who engage in “ultra-hazardous” or “abnormally dangerous” activities liable for the harm those activities cause.[7]

Common enterprise liability could also be used to hold several AI stakeholders – each of which worked toward the common objective of building the AI technology (e.g., manufacturers, software engineers, and software installers) – jointly responsible for indemnifying a plaintiff for AI-caused harm.[8] Because common enterprise liability has only ever been applied where the entities are organizationally related (e.g., a parent company and its subsidiary), however, the doctrine would have to be extended to AI development, where this will not always be the case.[9]

Bottom line

AI technology could expose businesses to liability risks for which the law, as it stands, does not provide complete or satisfactory answers. Managing this risk without unduly delaying the deployment of useful and novel AI systems is, and will remain, an important challenge for Canadian businesses. To meet this challenge, leaders and their advisers will need to match foresight concerning technological developments with foresight concerning the potential evolution of applicable legal frameworks.

 

To learn more about how our Cyber/Data Group can help you navigate the privacy and data landscape, please contact national co-leaders Charles Morgan and Daniel Glover.

 

[1] Yaniv Benhamou & Justine Ferland, “Artificial Intelligence & Damages: Assessing Liability and Calculating the Damages” in Giuseppina (Pina) D’Agostino, Aviv Gaon & Carole Piovesan, eds, Leading Legal Disruption: Artificial Intelligence & a Toolkit for Lawyers and the Law (Toronto: Thomson Reuters, 2021) 1 at 5-6.

[2] Article 1465 of the Civil Code of Quebec provides: “The custodian of a thing is bound to make reparation for injury resulting from the autonomous act of the thing, unless he proves that he is not at fault.”

[3] The Independent, “Saudi Arabia grants citizenship to a robot for the first time ever” (October 26, 2017), online: https://www.independent.co.uk/tech/saudi-arabia-robot-sophia-citizenship-android-riyadh-citizen-passport-future-a8021601.html.

[4] Article 898.1 of the Civil Code of Quebec provides: “Animals are not things. They are sentient beings and have biological imperatives.”

[5] Benhamou & Ferland, supra note 1 at 11.

[6] European Parliament, Resolution on a Civil Liability Regime for Artificial Intelligence, (2020/2014(INL)), online: https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html#title1.

[7] Great Lakes Dredging & Dock Co. v. Sea Gull Operating Corp., 460 So. 2d 510, 512 (Fla. 3d DCA 1984); Old Island Fumigation, Inc. v. Barbee, 604 So. 2d 1246, 1247; Benhamou & Ferland, supra note 1 at 11.

[8] Benhamou & Ferland, supra note 1 at 12.

[9] Fed. Trade Comm'n v. Wash. Data Res., 856 F. Supp. 2d 1247 (M.D. Fla. 2012); Mortimer v. McCool, 255 A.3d 261 (Pa. 2021).
