
AI and Dispute Resolution: friends or foes?

On May 13, 2021, London International Disputes Week 2021 (“LIDW21”) hosted a panel of thought leaders to discuss the intersection of artificial intelligence (“AI”) and dispute resolution (“DR”). The panel was moderated by Dan Wyatt of RPC and featured Charles Morgan, national co-leader of McCarthy Tétrault’s Cyber/Data Group, Trish Shaw of Beyond Reach Consulting, Sophia Adams Bhatti of Simmons Wavelength Limited and Steve Shinn of Disputed.iou. The panel was part of the session entitled “The use of technology and AI in the future of dispute resolution in London.” This article summarizes some of the key points raised during the session.

Background

LIDW21 is an international conference focused on promoting London, England as the global centre for dispute resolution. The panel explored common concerns that technology may replace lawyers and perhaps even arbitrators or judges. It considered how technology is affecting the business of law, as well as how the law needs to develop and adapt in order to provide a legal framework capable of maintaining the rule of law.

How are AI and automation being used in DR in London, and where are the major opportunities?

Mr. Shinn opened the discussion on the topic of how AI is being used in DR. He noted that automation and AI are good at finding patterns in complex data sets and can then predict a future outcome based on that understanding. They work better in areas where there are many similar instances with structured data, including class actions, lower-value disputes and less complex cases; as such, these areas are ripe for innovation.
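
To make this concrete, the sketch below illustrates the general pattern the panel alluded to: a model trained on structured features of past disputes that predicts the likely outcome of a new one. It is a minimal illustration only; the features, data and model are hypothetical and are not drawn from any tool discussed on the panel.

```python
# Illustrative sketch only: a toy outcome-prediction model for structured,
# high-volume disputes. Features and data are fabricated for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical structured features: claim amount, days overdue, number of
# prior disputes with the same party, and whether documents were supplied.
X = np.column_stack([
    rng.lognormal(mean=8, sigma=1, size=n),   # claim amount
    rng.integers(0, 365, size=n),             # days overdue
    rng.integers(0, 5, size=n),               # prior disputes
    rng.integers(0, 2, size=n),               # documentation supplied (0/1)
])
# Synthetic "claim upheld" label, loosely tied to the features above.
y = ((X[:, 3] == 1) & (X[:, 1] > 60)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```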

Mr. Shinn explained that, in terms of deployment within DR, lower court decisions could eventually be driven by AI, with more hands-on human intervention becoming limited to appeals. In the future, commoditising access to deep tech in the legal domain will enable further acceleration of adoption. There are already resources available, such as the Canadian firm Conflict Analytics, which is using deep learning models and Google’s latest natural language processing tools to digest, and attempt to make use of, all US and European cases, academic papers, textbooks and legislation. These and similar tools will undoubtedly be the basis for many innovations.

1. What are the most commonly understood legal and ethical risks of using AI in the context of dispute resolution?

Mr. Morgan pointed to a first common concern with AI, the “black box” problem. The “black box” problem arises when individuals do not understand how outcomes are reached by AI, leading to a lack of trust in AI-driven outcomes. Machine learning (“ML”) developers make basic architectural decisions when designing the AI; Mr. Morgan noted, however, that ML developers do not typically decide on the particular parameter values. A trust issue therefore develops among participants in light of the lack of explainability of the outcome. This is why organizations should respect the Transparency and Explainability principle of ITechLaw’s Responsible AI Policy Framework (the “Framework”). In effect, the Transparency and Explainability principle aims to ensure a fair and impartial decision-making environment. Janet Martinez, Ethan Katsh and Colin Rule suggest that in the context of DR, “the lack of transparency in AI can, at a minimum, raise concerns and erode trust or further entrench a lack of trust in courts and other forms of dispute resolution.”
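
One practical response to the lack of explainability is to probe a trained model and surface which inputs drove its outputs. The sketch below, assuming entirely synthetic data and hypothetical feature names, uses permutation feature importance as a coarse, first-order explanation; it is one common technique, not the Framework’s prescribed method.

```python
# Illustrative sketch only: surfacing a rough explanation for an otherwise
# opaque model via permutation feature importance. Data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 500
feature_names = ["claim_amount", "days_overdue", "prior_disputes", "docs_supplied"]
X = rng.normal(size=(n, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature degrade performance? Larger drops
# suggest the feature mattered more, giving a coarse explanation of outputs.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```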

Another common concern with AI is informational asymmetry. Mr. Morgan considered the situation where a big tech company offers its users an AI-enhanced DR tool that it has developed to resolve disputes between the company and its consumers. The informational imbalance in such a situation is enormous, exacerbating the trust issues described in the subsection above. One way of dealing with this concern is to have an independent third party act as the administrator of the AI-enhanced DR tool.

A third common concern is discriminatory outcomes. Discriminatory outcomes refer to the problem of discrimination resulting from poor-quality or non-representative data sets used to train the AI system. Mr. Morgan pointed to a concrete example of this, raised by Jake Silberg and James Manyika, regarding COMPAS, a tool used to predict criminal recidivism in Broward County, Florida. It was a prime example of AI scaling up systemic racism, as it incorrectly labeled African-American defendants as “high-risk” at nearly twice the rate it mislabeled white defendants. This is precisely what the Framework’s Fairness and Non-Discrimination principle aims to guard against.
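
The disparity described above can be expressed as a simple measurement: compare the false positive rate of a risk tool across demographic groups. The sketch below uses fabricated data and a deliberately skewed toy "model" purely to show the arithmetic of such a check; it does not reproduce the COMPAS analysis.

```python
# Illustrative sketch only: comparing false positive rates across groups,
# the kind of disparity check at the heart of the COMPAS critique.
import numpy as np

def false_positive_rate(y_true, y_pred):
    # FPR: share of people wrongly labelled "high-risk" among those who did
    # not actually reoffend.
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean() if negatives.any() else float("nan")

rng = np.random.default_rng(2)
n = 2_000
group = rng.integers(0, 2, size=n)      # synthetic protected attribute (0/1)
y_true = rng.integers(0, 2, size=n)     # synthetic actual outcome
# A deliberately skewed "model" that over-predicts risk for group 1.
y_pred = (rng.random(n) < np.where(group == 1, 0.45, 0.25)).astype(int)

fpr_0 = false_positive_rate(y_true[group == 0], y_pred[group == 0])
fpr_1 = false_positive_rate(y_true[group == 1], y_pred[group == 1])
print(f"FPR group 0: {fpr_0:.2f}  FPR group 1: {fpr_1:.2f}  ratio: {fpr_1 / fpr_0:.2f}")
```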

This segment of the session revealed some of the ethical concerns inherent in AI systems, reinforcing the importance of approaching the development and deployment of AI systems with ethics as a top priority. The goal is to leverage the many benefits of AI while mitigating these known risk factors. To this end, Ms. Bhatti made clear that AI is not inherently good or bad; rather, the purpose of deployment ought to be the focus. Approaching AI while prioritizing the mitigation of the aforementioned risks serves to promote purposes that advance societal good.

2. How do we avoid these pitfalls?

Ms. Bhatti opened this segment by explaining that we need to ‘get under the bonnet’ of AI and automation. She noted that we cannot blindly turn to AI without attempting to know what is going on in the background. Mr. Shinn offered several examples of how he tackled this when building capabilities into his CaseFunnel platform for different case types, including Data Subject Access Requests (DSARs). More than anything, he emphasized the importance of ensuring the robustness, reliability and fairness of results.

Mr. Morgan explained that, at a high level, commonly proposed solutions to the common concerns regarding AI include the imposition of obligations relating to governance, data quality, impact assessment, non-discrimination, transparency, and explainability. Mr. Morgan also noted that diversity in an AI solution’s development team, traceability, testing for biased outcomes, feedback loops, and audit processes are ways to avoid the aforementioned pitfalls.
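
Traceability and audit processes, in particular, lend themselves to lightweight tooling. The sketch below, with hypothetical field names rather than any established standard, records each automated decision alongside the model version and a hash of its inputs so that an auditor could later reconstruct and verify what the system did.

```python
# Illustrative sketch only: a minimal decision log supporting traceability
# and after-the-fact audits. Field names and structure are hypothetical.
import json
import hashlib
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: str,
                 path: str = "decision_log.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        # Hashing the inputs lets an auditor detect later tampering.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: record an automated triage decision for later review.
log_decision("dispute-model-0.1",
             {"claim_amount": 1200, "docs_supplied": True},
             "refer to human reviewer")
```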

Mr. Morgan noted that if not deliberately approached with ethics in mind, AI can create (even if unintentionally) a range of unjust situations. On the other hand, AI can increase access to justice and add ethical value to DR. Notable examples include British Columbia’s Civil Resolution Tribunal (“CRT”), which promotes access to justice by offering a streamlined online DR (“ODR”) process for certain low-value and strata disputes that would otherwise likely never be pursued in the mainstream court system. The CRT has an optional first-step tool, the Solution Explorer, in which an AI does not make binding decisions but “helps people resolve their dispute without having to file a CRT claim.” Another tool emerging from British Columbia, Smartsettle ONE, facilitates the negotiation of monetary settlement amounts. Both of these solutions are rooted in the ethical promotion of access to justice.

3. What will be in place to mitigate AI risks?

Ms. Bhatti opened this segment by drawing attention to the General Data Protection Regulation (the “GDPR”). Naturally, the GDPR does not purport to regulate AI, but it does so indirectly because it regulates data rights. Ms. Bhatti pointed specifically to articles 15 and 22 of the GDPR, which concern the right of access to data and automated individual decision-making, respectively.

Mr. Morgan discussed the new EU draft AI regulation (the “Proposed Regulation”), on which McCarthy Tétrault recently published an overview article: EU’s Proposed Artificial Intelligence Regulation: The GDPR of AI | McCarthy Tétrault. The Proposed Regulation, the first of its kind, may also influence the way in which AI is regulated outside of the EU. It envisions a risk-based approach to regulating AI, providing for three categories of regulation tailored to three degrees of risk. When viewed through the lens of the Proposed Regulation, certain uses of AI in DR, such as document review tools, would likely fall under the low/minimal risk category. Other uses, such as when AI is used in judicial decision-making, would likely fall into the high-risk category of the Proposed Regulation.

4. What more could/should be done?

Mr. Shinn explained that much of the AI innovation we have seen so far has focused on due diligence and discovery-related tasks, with some inroads having been made into legal analytics. He noted that significant gains are being made in this space, but the really interesting work is in prediction. It is already happening, and, just as in so many other commercial and societal fields, law firms risk significant disruption if they do not start to embrace it more proactively. Mr. Shinn suggested that while not every actor wants, or is able, to lead this innovation, it is vital to be able to follow it quickly.

5. Who is likely to be liable if AI is used and things go wrong?

Mr. Morgan, Ms. Bhatti and Ms. Shaw closed the session with a brief discussion on who could be liable if AI is used and things go wrong. The three panellists suggested that there is no clear answer to this question, and that it really depends on the specific factual matrix, as well as the regulatory landscape within the relevant jurisdictions. Various actors could, in theory, be liable, including lawyers or other legal professionals using AI and tech providers (subject to contractual exclusions, where permitted).

6. Conclusion

Despite common concerns, AI has a place in DR. The panel drew attention to key issues to consider in ensuring that AI can add value in an ethically and legally sound way. Actors in the AI ecosystem will have to consider the ethical issues raised above from the earliest stages of system development. This is especially important in light of the Proposed Regulation and what could well be a global push toward tighter regulation of AI.

For more information about our firm’s expertise on the above, please contact the authors and see our Technology and Cyber/Data group pages. To receive timely updates, please subscribe to our TechLex blog. To learn about upcoming seminars and events, please visit the Seminars page on our website.
