
Deep Learning: AI Regulation Comes Into Focus Part I

On May 12, 2021, McCarthy Tétrault was privileged to host clients and industry members for an exclusive online event with Susie Lindsay and Nye Thomas, counsel and executive director, respectively, of the Law Commission of Ontario (“LCO”), a leading Ontarian law reform agency producing independent research on legal policy issues. Their presentation focused on the pressing topic of the regulation of artificial intelligence (“AI”) and automated decision-making (“ADM”) and was followed by a fireside chat moderated by Christine Ing, our National Technology Law Group Leader and FinTech Group Co-Leader.

Mr. Thomas and Ms. Lindsay are two of the authors of a recent LCO report: Regulating AI: Critical Issues and Choices. Published in April 2021, the report is part of the LCO’s project on AI, ADM and the justice system, and follows the publication of a first issue paper in October 2020 on the lessons Canada can learn from the United States’ experience with the use of algorithms in its criminal courts.

Canadian governing bodies and public institutions, at all levels and in all jurisdictions, increasingly use AI and ADM systems to assist them in their functions. They are inspired by similar jurisdictions, notably the United States, that use such systems to make benefits allocation decisions, monitor public health risks, combat fraud, recommend parole eligibility or predict crime. In terms of potential uses of AI and ADM systems in the public sector, the “sky is the limit”, Mr. Thomas told the virtual audience.

Those systems have the potential to be more efficient and reliable than human decision-making, but they also come with specific risks. Those risks, also present in the private sector, are more acute in the public sector, where decision makers are notably subject to the Canadian Charter of Rights and Freedoms and administrative law. In that regard, Mr. Thomas and Ms. Lindsay provided several examples of systems that have been at the center of public controversies, such as the use of the COMPAS system by US courts to predict recidivism risks in the criminal law context. The main sources of complaints in such instances are the perceived opacity of these systems, the risk of entrenching bias, and the lack of transparent disclosure and public participation at all stages of development and implementation. The LCO’s Regulating AI report seeks to help policymakers create a regulatory framework that will nurture the benefits of AI and ADM innovations while minimizing potential harms, notably algorithmic discrimination.

With increased adoption and growing public concern, how AI and ADM systems should be regulated has become an urgent question for governments. The LCO recommends an approach that combines soft law instruments, such as ethical guidelines and best practices, with comprehensive legislation that would impose obligations on public bodies and provide remedies. Governments, NGOs and industry organizations have published a great number of ethical guidelines and policy recommendations in the last five years. McCarthy Tétrault lawyers, including the authors, have participated in those efforts with the publication of two editions of Responsible AI: A Global Policy Framework by the International Technology Law Association.

Mr. Thomas and Ms. Lindsay nevertheless confirmed the existence of a trend towards a stronger legislative approach. Most notably, the European Commission (“EC”) unveiled in April 2021 its Proposal for a Regulation laying down harmonised rules on artificial intelligence (“Proposal”) (you can find our analysis of this Proposal here). In 2019, the Canadian government published a Directive on Automated Decision-Making (“Directive”). The Directive is a first in Canada, and the two presenters noted that multiple jurisdictional and accountability gaps remain in the regulation of AI and ADM systems in the country. First, contrary to the EC’s Proposal, the Directive does not apply to private organizations. Second, it applies only to a limited range of federal government decisions, leaving the use of AI and ADM systems by provinces and municipalities without a bespoke regulatory framework. Third, criminal decisions are not covered by the Directive, as it mainly applies to administrative decisions. In short, the scope of the Directive remains limited, which contrasts with the European Commission’s Proposal, and it risks leaving many potential uses of AI and ADM systems unregulated.

Consequently, AI regulation in Canada remains in its infancy. To help us move forward, Mr. Thomas and Ms. Lindsay identified key emerging themes and issues at the heart of current discussions, as well as points of consensus and divergence. They notably highlighted the consensus around the importance of building trustworthy AI solutions to reduce the public’s anxiety surrounding AI and ADM systems and to lay the foundation for more innovation. On the other hand, Ms. Lindsay noted that no consensus exists on how to define AI and that diverging approaches exist on this question. Some may define AI as autonomous self-learning systems based on the latest deep learning technologies, while others prefer technologically neutral definitions that focus on the impact of the systems. In its report, the LCO favours an explicit and expansive definition, which is also the approach of the Directive (it can apply to systems ranging from an Excel spreadsheet to a neural network).

The co-authors also stressed the importance of a risk-based approach to the regulation of AI and ADM systems. It is generally accepted that rules governing AI and ADM should be based on the level of risk posed by those systems, but the concrete application of this principle remains debated. For instance, the Canadian Directive implements a sliding-scale approach accompanied by obligations to conduct algorithmic impact assessments to determine the level of risk of a given system. For its part, the EC has preferred a pre-emptive approach that identifies certain AI technologies that must be banned or regulated due to their level of risk.

Although the LCO’s report and the Directive focus on uses of AI and ADM systems in the public sector, during the presentation Mr. Thomas underscored the importance for private sector organizations of paying attention to efforts to regulate the use of such systems in the public sector, as those regulations may influence the framework for future private sector rules. Nevertheless, he noted the possibility of the creation of distinct regulatory regimes for the private and public sectors, with public sector regulations mandating a greater degree of fairness, transparency and disclosure.

In that regard, Ms. Ing raised the complex issues suppliers could face when providing services and systems to public bodies. For instance, it may be difficult for suppliers to meet potentially diverging sets of standards between the public and private sectors, which could limit governments’ ability to procure AI and ADM systems. Moreover, the appropriate requirements of transparency and disclosure still need to be properly weighed against the importance of protecting trade secrets and proprietary information.

Businesses can also learn from the controversies that have accompanied some high-profile uses of AI and ADM systems by governments. Ms. Lindsay’s experience has taught her that legal issues can quickly transform into public relations challenges, especially as AI tends to attract media attention. Moreover, AI systems, like all computer systems, are prone to glitches and errors that are made worse when proper oversight and testing mechanisms are not in place.

In conclusion, Canada is moving closer toward developing legislation to regulate artificial intelligence, but the precise shape of these rules is not yet known. Businesses that use or plan to use AI systems for their activities should keep up to date with the latest developments in regulations governing the public sector’s use of AI, as regulators may transpose those developments to the private sector.

For more information about our firm’s expertise on the above, please contact the authors and see our Technology and Cyber/Data group pages. To receive timely updates, please subscribe to our TechLex blog. To learn about upcoming seminars and events, please visit the Seminars page on our website.
