
Explaining AI to a Court: Practical Considerations for AI Disputes

This article is part of our Artificial Intelligence Insights Series, written by McCarthy Tétrault’s AI Law Group - your ideal partners for navigating this dynamic and complex space. This series brings you practical and integrative perspectives on the ways in which AI is transforming industries, and how you can stay ahead of the curve. View other blog posts in the series here.

 

As Artificial Intelligence revolutionizes aspects of business and everyday life, it will also change the subject matter and practice of dispute resolution. The intellectual property generated by AI, decisions taken through AI, and decision-making processes of AI will increasingly be the subject of litigation and arbitration. Such “AI disputes” will be widespread and the stakes will be high. But how will they be litigated?

Arguing persuasively about AI will require lawyers and their clients to describe to a decision-maker, simply yet completely, the unique features of a given AI product or technology. Expert evidence will often be required to educate judges and arbitrators who — like the senior lawyers handling these cases — are unlikely to be “digital natives”. It will be the role of litigators to help judges and arbitrators make decisions about this entirely new world with which they are not familiar, and these decisions will impact the way in which AI is developed and deployed.

None of this is new. Litigators who handle technology cases and intellectual property disputes have been using advocacy techniques to demystify highly technical subject matter for many decades. Judges can be, and frequently are, taught the inner workings of highly complex medicines or technical products before making important rulings that have significant implications for entire sectors of the economy. Still, some of AI’s current and potential features will make applying these same advocacy techniques in AI disputes particularly challenging. Prudent risk management teams should therefore proactively seek out multidisciplinary advice to ensure that their organizations are ready in the event that an AI dispute arises.[1]

Be ready to describe the product

In every dispute involving technical complexity, courts and arbitrators expect litigants to be able to explain the technology at issue and how it relates to the dispute between the parties. AI stands to complicate this task, particularly as “black box” models become broadly commercially applicable. If a claimant seeks to hold an AI developer, or an AI user, responsible for AI decision-making, a lack of interpretability of the impugned decisions will pose problems in litigation.

In order to mitigate this risk, organizations that adopt AI systems in their business processes should be capable of answering at least the following questions:

  • What data were used to train the AI, where did those data originate, and who owns or owned them? Some of the highest-profile AI disputes to date have been lawsuits concerning alleged improper data scraping used to train “large language models” and other generative AI. Considering the provenance of training data is an important step in risk mitigation, including for organizational users of AI.
  • How does the AI interact with customers or end-users, and are customers aware that the product they are using integrates AI? With “B2C” (business to consumer) applications of AI, the consumer perspective is a crucial consideration in risk mitigation, and ultimately in constructing a persuasive narrative for contesting liability in a dispute. Organizations that adopt AI for consumer-facing applications should take care to design and document the user experience. This will assist in identifying potential consumer protection issues, and will have the benefit of marshalling evidence of safe, appropriate, and lawful use. Careful consideration of terms of service and clear disclaimers can help mitigate litigation exposure down the line.
  • What steps are taken to ensure that the AI protects the privacy of users, that confidential or sensitive information contained in the data set remains private or anonymized, and that the product complies with relevant privacy laws? What protections are in place to recognize and combat bias present in the AI? Each of these is a potential locus of disputes risk. Depending on the applicable legal framework, organizations that use AI may be required to prove proper care and diligence to a court or tribunal. To do so, an organization would be well served by contemporaneous records documenting steps taken to protect privacy and detect bias. Though such records could be subject to production as part of a discovery process in litigation, the very fact that they document the steps actually taken also makes them effective tools for internal accountability.

Litigants in AI disputes will need to be able to answer these questions methodically. Organizations can prepare for this by establishing AI governance processes and policies that comply with applicable law and align with industry best practices.

Identify potential independent experts

AI is a rapidly developing and complex area. It draws on disciplines including computer science, mathematics, and engineering. While courts have demonstrated a desire to better understand and incorporate AI into their judicial functions,[2] it remains a novel and highly technical subject area. Ultimately, courts will require expert assistance to understand AI generally, and the unique features of the AI product in a given dispute.

Working with experts in litigation requires careful compliance with the law concerning expert evidence. Most importantly, to be admissible in a Canadian court, an expert’s evidence must be independent, impartial, and reflective of the expert’s particular expertise. Early involvement of litigation counsel, including in the risk management process, can help to ensure that potential litigation experts are identified proactively and that their independence is maintained.

As in intellectual property litigation, professional liability disputes, and consumer protection cases, expert evidence will be essential in AI disputes. Courts and tribunals will, by necessity, rely on the expertise of independent litigation experts to understand the technologies at issue and how they ought to be used. Organizations that develop and adopt AI should bear this in mind as part of their risk management and mitigation.

Keep risk management in the foreground

When an AI dispute arises, an organization may need to compile an evidentiary record quickly. It may also face document preservation and production obligations, as part of a discovery process. Records that are relevant to the AI in issue will have to be identified, held, and considered by counsel in fairly short order.

Particularly in this initial era of widespread AI adoption, organizations should anticipate the possibility of AI disputes and design internal document management systems that manage and mitigate, rather than add to, risk. Here, again, well documented AI governance processes and policies will be an essential tool.

McCarthy Tétrault can help. If you have questions about the content of this article or about how your organization can manage and mitigate AI disputes risk, please reach out to one of the authors, to your regular McCarthy Tétrault contact, or to a member of our AI disputes team.

___

[1] See McCarthy Tétrault’s Artificial Intelligence Insights Series written by our AI Law Group for further insights.

[2] See, for example, the Guidelines for the Use of Artificial Intelligence in Canadian Courts, September 2024, by the Canadian Judicial Council. Available at: https://cjc-ccm.ca/en/news/canadian-judicial-council-issues-guidelines-use-artificial-intelligence-canadian-courts. The Guidelines detail a principled framework for understanding the extent to which AI tools can be used appropriately to support or enhance the judicial role.
