Using privacy laws to regulate automated decision making

Making decisions about individuals using computers and computer algorithms is now commonplace, and there are increasing proposals to use privacy laws to regulate automated decision making. One of the first explicit attempts to regulate automated decision-making using privacy laws is the European Union General Data Protection Regulation (GDPR). More recently (and locally), both the Consumer Privacy Protection Act (CPPA), Canada’s controversial proposed new privacy law, and Bill 64, Quebec’s proposed privacy amendments, would enact new transparency and explainability obligations for automated decision making.

These proposed laws may help to ensure that AI uses personal information responsibly. But they also raise questions about what the new obligations will require in practice, the challenges organizations will have in trying to comply, whether the proposed changes go far enough or too far, and whether using privacy laws to regulate automated decision making is the appropriate mechanism for any such regulation. We address these questions in this blog post.

Background

The concepts of automated decision making and automated decision systems vary in scope. They are generally premised on the use of artificial intelligence systems (AI systems), a fast-evolving family of technologies that can bring a wide array of economic and societal benefits across the entire spectrum of industries and social activities.

AI systems can generally be understood as software developed using one or more of a defined set of techniques and approaches that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. The techniques recently illustrated in a draft European Regulation for Harmonizing Rules on Artificial Intelligence include machine learning, using methods such as deep learning; logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and statistical approaches, Bayesian estimation, and search and optimization methods.

The scope of regulating automated decision making using privacy laws varies internationally. The UK Information Commissioner’s Office (ICO) defines “automated decision-making” as “the process of making a decision by automated means without any human involvement. These decisions can be based on factual data, as well as on digitally created profiles or inferred data.” A broader meaning is found in the Government of Canada’s Directive on Automated Decision-Making, from which the CPPA definition is derived, which defines “Automated Decision System” as including “any technology that either assists or replaces the judgement of human decision-makers. These systems draw from fields like statistics, linguistics, and computer science, and use techniques such as rules-based systems, regression, predictive analytics, machine learning, deep learning, and neural nets.”

This latter definition is exceptionally broad. It is not limited by the classes of technology that will make automated decisions, the products, services, or sectors of the economy that will use them, the nature of the automated decisions that the systems could implicate, the classes of individuals that could be affected by decisions, or the nature or significance of the impacts of the decisions on individuals. In short, under such a broad definition, “automated decisions” will be ubiquitous.

Nascent as AI technology still is, automated decision making is already pervasive. It is used by private sector entities to screen job applicants and to flag violent content or “fake news” on social media.[1] It powers autonomous and semi-autonomous vehicles, including “self-parking” and collision avoidance systems. In the Australian state of New South Wales, authorities use a human-in-the-loop AI system to detect drivers who text behind the wheel. According to a 2018 report published by the International Human Rights Program and Citizen Lab, both at the University of Toronto:

A call with a senior … data analyst [at Immigration, Refugees and Citizenship Canada (“IRCC”)] in June 2018 confirmed that IRCC [was] already using some form of automated system to “triage” certain [immigration] applications into two streams, with “simple” cases being processed and “complex” cases being flagged for review. The data analyst also confirmed that IRCC had experimented with a pilot program to use automated systems in the Express Entry application stream, which has since been discontinued.[2]

Canada’s federal government put out a Request for Information in 2018 seeking “[i]nformation … in relation to whether use of any AI/[machine learning] powered solutions could be expanded, in future, to users such as front-end decision makers” at both IRCC (which processes immigration claims) and Employment and Social Development Canada (which processes benefits claims). In British Columbia, the Workers’ Compensation Board uses an automated decision system for claim intake and (in a small number of cases) adjudication.[3] In Estonia, meanwhile, the government is working towards deploying an automated decision system to adjudicate small claims civil disputes.

Automated decision systems also make judgment calls that do not take the form of formal decisions. Autonomous vehicle systems that decide when to accelerate, decelerate, stop, and turn are one example. So are algorithms that decide which advertisements (or movies or news stories or social media posts) to display to an individual user, based on the user’s demographic data and past behaviour.

More concerning to many is the use of automated decisions for AI-powered surveillance, profiling, and behaviour control and manipulation, which also have pervasive potential, including in the fields of employment and work, health, social media, and location and movements, among others. It is quite likely that, by the time you read this blog post, you will have interacted with at least one automated decision system as part of your day.

Of course, many AI-powered automated decisions offer tremendous advantages to the organizations that make them available to users and do not raise the Orwellian concerns often associated with AI. Services that offer consumers products they would want or that are suited to their interests (as opposed to a wash of irrelevant ads or product offerings), such as recommendations for books or movies by Amazon or Netflix, or new financial services products, are examples. Speech recognition software that facilitates dictating emails or text messages by learning individuals’ speaking intonations and patterns is a godsend to busy parents and professionals – and to older smartphone users whose thumb dexterity will never match that of their kids. So are autocorrect and suggestion features in text and word processing software (despite the sometimes hysterically funny glitches that are already memes online). Many people also increasingly rely on virtual assistants like Siri, Alexa, Cortana and Google Assistant, the features and functions of which are constantly expanding. We like it when search engines return relevant and contextual results, although it is somewhat eerie when, after doing a Google search on a topic, one starts getting YouTube recommendations for videos on related topics. AI has also helped us through the COVID-19 pandemic.

All of which is to say that, while there are concerns about the uses of AI and automated decision making, not all uses of automated tools for making decisions call out for regulation. The diversity of applications raises questions about the appropriateness of a one-size-fits-all approach to regulation.

Regulating automated decision making under privacy laws

Automated decision making is an easy target for regulators. Decisions made by automated systems affect individuals’ lives and livelihoods and reputations. This can sit uncomfortably with notions of fairness and justice. This unease is compounded by stories that depict automated decision systems misfiring, discriminating against individuals, and making biased decisions based on inadequate data sets or poorly trained or monitored algorithms, as well as by perceived threats of mass surveillance, profiling and behaviour manipulation.[4]

GDPR’s regulation of automated decision making under privacy laws

The European Union, through the GDPR, is a leader in its approach to regulating automated decision making under privacy laws.[5]

Two articles of the GDPR apply expressly to automated decision-making. Article 22 of the GDPR contains a specific prohibition designed to protect individuals against solely automated decision making that has legal or similarly significant effects on them.[6] The Article reads as follows:

1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

2. Paragraph 1 shall not apply if the decision:

(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;

(b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or

(c) is based on the data subject’s explicit consent.

3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.

Articles 14(2)(g) and 15(1)(h) of the GDPR provide data subjects with express rights to transparency and explainability of automated decisions:

“The data subject shall have the right to obtain from the controller confirmation [of] … the following information: (h) the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”

It is notable that the transparency and explainability obligations do not apply to all decisions, predictions, or recommendations made using, or partially relying on, AI systems. Rather, the obligations are limited to the decisions referred to in Article 22, namely those based solely on automated processing, including profiling, which produce legal effects concerning an individual or similarly significantly affect them.

Such express regulation of automated decision-making under the GDPR must also be read with its generally applicable provisions in relation to data minimization, data accuracy, notification obligations regarding rectification or erasure of personal data, and restrictions on processing.[7] The GDPR also requires, in certain circumstances, data protection impact assessments of large-scale profiling.[8]

What the rest of the world is doing for regulation of automated decision making under privacy laws

Despite all of the concerns about automated decision making and privacy, most countries have not yet enacted specific obligations under their privacy laws to deal with such technologies or processes, although recent trends suggest that approaches to the regulation of automated decision-making are evolving rapidly.

The U.K. House of Lords Select Committee on Artificial Intelligence concluded in 2018 that AI technology should not be deployed in automated decision systems unless and until it is capable of explaining its own decision making, even if this meant delaying the deployment of cutting-edge AI systems. It acknowledged that both E.U. and U.K. legislation endorsed explainability as the standard, and called on various expert bodies to “produce guidance on the requirement for AI systems to be intelligible”.[9] The U.K. subsequently adopted the GDPR, whose provisions reflect this perspective. The EU also recently released a draft regulation for Harmonizing Rules on Artificial Intelligence. (For a summary of this regulation, see EU’s Proposed Artificial Intelligence Regulation: The GDPR of AI.)

Apart from the GDPR, there are relatively few international benchmarks for the specific legal regulation of automated decision-making under privacy laws.[10] The jurisdictions that have specifically targeted such automated decision making, other than the jurisdictions in which the GDPR applies, are Brazil and California.

In Brazil, the General Data Protection Law (Lei Geral de Proteção de Dados, or the “LGPD”), which is modelled on the GDPR, allows a data subject to request a review of any decision made solely by an automated decision system. Like the GDPR and the proposed CPPA, the LGPD entitles a data subject to an explanation of the criteria and procedures used in the automated decision (Article 20).

The California Consumer Privacy Act of 2018 does not address automated decision making. However, on November 3, 2020, Californians approved Proposition 24, a ballot measure that enacts the California Privacy Rights Act of 2020. Once it comes into force in 2023, the California law will require the state’s Attorney General to promulgate regulations that govern individuals’ opting out of automated decision making, and that require businesses to provide “meaningful information about the logic involved in [automated] decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer”.

By contrast, recent privacy law reforms in Australia[11] and New Zealand[12] declined to address automated decision making specifically. Legislation that would have regulated automated decision making by large corporations – the Algorithmic Accountability Act – was introduced in both the U.S. House of Representatives and the U.S. Senate in 2019, but did not become law.[13]

Hong Kong’s Privacy Commissioner for Personal Data, while endorsing the territory’s “principle-based and technology neutral” data protection law, suggested that added protection should come from holding businesses and other organizations that use data to a higher ethical, but non-regulatory, standard “alongside the laws and regulations”.[14]

Similarly, Singapore’s Personal Data Protection Commission has endorsed a norms-based approach that encourages the adoption of ethical principles by private sector entities. One of these principles is explainability. Though Singapore law does not impose these requirements, the Commissioner writes that, “[w]here the effect of a fully-autonomous decision on a consumer may be material, it would be reasonable to provide an opportunity for the decision to be reviewed by a human”.[15]

Responsible AI practices can also be advanced in other ways beyond regulation. There have been a plethora of studies, working papers, and guidelines on the responsible uses of AI. For example, ITechLaw’s Responsible AI: A Global Policy Framework (and its 2021 update)[16] presents a policy framework for the responsible deployment of artificial intelligence based on “best practice” principles. In addition, many other organizations, including the IEEE and NIST, have developed, or are in the process of developing, standards for the uses of AI in automated decisions. The Law Commission of Ontario also recently published a report, Legal Issues and Government AI Development: Workshop Report, which summarizes eight major themes and insights into the use of AI systems for decision making.

The level of engagement on these issues internationally highlights the fluidity of the analysis and suggests caution in regulatory approaches the impacts of which are hard to predict.

The proposed regulation of automated decision-making in Canada: an “appropriate” response?

PIPEDA

PIPEDA does not have any express provisions that deal with automated decision making. However, PIPEDA is principle based and intended to be technologically neutral. Its general provisions have been applied to the collection, use and disclosure of personal information by automated means such as most recently in the Cadillac Fairview and Clearview AI decisions of the OPC, and there is no reason to think it would not also apply to the uses of personal information for automated decision making.

As such, as under the GDPR, PIPEDA’s fair information practice principles would likely apply to the collection, use, and disclosure of personal information by automated means, including the principles pertaining to consent, identifying purposes, data accuracy, limiting collection, use, disclosure and retention, openness (transparency), and individual access. Further, s. 5(3) of PIPEDA contains an overriding “appropriate purposes” limitation whereby “[a]n organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances”. Accordingly, as the Clearview AI case demonstrates, PIPEDA is already capable of addressing some of the potentially problematic challenges associated with the use of automated technologies that use personal information.

In 1995, the European Union adopted the Data Protection Directive (Directive 95/46/EC), the GDPR’s predecessor. Article 25 of that Directive prohibits member states (and companies within their borders) from transferring personal data to a third state whose laws do not adequately protect the data. Transfers to non-member states may occur if the European Union determines that the privacy protection regime of such jurisdictions is “adequate” (or if other specified protective measures are put in place by the transferring entity). For the purposes of Article 25, Canada’s PIPEDA received a favourable “adequacy” determination in 2001.

Under the GDPR, the European Union must reassess the adequacy of PIPEDA’s protections, an exercise that will recur at least every four years. As part of this assessment, the Europeans will evaluate Canadian privacy laws against the new, higher standards of protection set out in the GDPR.

It is not surprising, therefore, that both the Quebec and federal governments have looked to GDPR standards, including in relation to automated decision-making, when proposing their respective overhauls of the existing privacy regulatory framework.

Quebec and Bill 64’s proposal to regulate automated decision making under privacy laws

Québec’s proposed Bill 64 would require public bodies and enterprises to provide certain information to the person concerned when they collect personal information using technology that includes functions allowing the person to be identified, located or profiled, or when they use personal information to render a decision based exclusively on an automated processing of such information. It establishes a person’s right to access computerized personal information concerning him or her in a structured, commonly used technological format or to require such information to be released to a third person.[17]

New Section 65.2 addresses the obligations of public bodies:

65.2. A public body that uses personal information to render a decision based exclusively on an automated processing of such information must, at the time of or before the decision, inform the person concerned accordingly.

It must also inform the person concerned, at the latter’s request,

(1) of the personal information used to render the decision;

(2) of the reasons and the principal factors and parameters that led to the decision; and

(3) of the right of the person concerned to have the personal information used to render the decision corrected.

New Section 12.1 addresses the obligations of private sector enterprises:

12.1. Any person carrying on an enterprise who uses personal information to render a decision based exclusively on an automated processing of such information must, at the time of or before the decision, inform the person concerned accordingly.

He must also inform the person concerned, at the latter’s request,

(1) of the personal information used to render the decision;

(2) of the reasons and the principal factors and parameters that led to the decision; and

(3) of the right of the person concerned to have the personal information used to render the decision corrected.

The person concerned must also be given the opportunity to submit observations to a member of the personnel of the enterprise who is in a position to review the decision.

Under Bill 64, a monetary administrative penalty may be imposed on anyone who does not inform the person concerned by a decision based exclusively on an automated process or does not give the person an opportunity to submit observations, in contravention of section 12.1.

The most striking difference between the approaches to regulating automated decision-making under the GDPR and under Quebec’s Bill 64 is that, whereas Quebec appears to have imported the concepts of transparency and explainability from the GDPR (and other well-known policy frameworks), the legislator has chosen not to import the GDPR’s prohibition against the use of automated decision-making (although the Bill would impose fines on enterprises that do not inform individuals of the fact that they have been subject to a decision based exclusively on an automated process or of their right to submit observations on the decision to a member of the personnel of the enterprise who is in a position to review the decision). In this regard, the lighter regulatory burden is a welcome departure from the GDPR, since the policy objective of insisting upon maintaining a “human in the loop” in relation to all automated decision-making is far from clear, especially where the subjects of such decisions are provided with general details as to how such decisions are rendered and maintain their contractual or statutory rights to contest outcomes.

CPPA

The CPPA’s provisions are also intended to be technologically neutral. Accordingly, one would expect that these provisions would, like PIPEDA, generally apply to automated decision making (as well as profiling).

The CPPA would add several new provisions to promote transparency with respect to automated decision making and to introduce explainability obligations with respect to such decisions.

Speaking to the House of Commons on November 24, 2020 – in moving that the proposed CPPA be read a second time and referred to committee – Minister Bains described the purpose of the bill’s automated decision making provisions as follows:

In the area of consumer control, Bill C-11 would improve transparency around the use of automated decision-making systems, such as algorithms and AI technologies, which are becoming more pervasive in the digital economy.

Under Bill C-11, organizations must be transparent that they are using automated systems to make significant decisions or predictions about someone. It would also give individuals the right to an explanation of a prediction or decision made by these systems: How is the data collected and how is the data used?

As indicated above, the CPPA would define an “automated decision system” as

“any technology that assists or replaces the judgement of human decision-makers using techniques such as rules-based systems, regression analysis, predictive analytics, machine learning, deep learning and neural nets. (système décisionnel automatisé)”

Section 62(1) (Openness and transparency) would require organizations to provide a general account of their practices with respect to making automated decisions:

62 (1) An organization must make readily available, in plain language, information that explains the organization’s policies and practices put in place to fulfil its obligations under this Act.

Additional information
(2) In fulfilling its obligation under subsection (1), an organization must make the following information available…

(c) a general account of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have significant impacts on them;

Section 63 (Information and access) would also create a new explainability obligation for automated decisions.

63 (1) On request by an individual, an organization must inform them of whether it has any personal information about them, how it uses the information and whether it has disclosed the information. It must also give the individual access to the information…

Automated decision system

(3) If the organization has used an automated decision system to make a prediction, recommendation or decision about the individual, the organization must, on request by the individual, provide them with an explanation of the prediction, recommendation or decision and of how the personal information that was used to make the prediction, recommendation or decision was obtained.

Comments on Bill 64 and the CPPA’s proposed approach to regulating automated decision making using privacy laws

For starters, the CPPA definition of “automated decision system” is exceptionally broad. It is not limited by classes of technology, the products, services, or sectors of the economy that will use it, the nature of the automated decisions that the systems could implicate, the classes of individuals that could be affected by decisions, or the nature or significance of the impacts of the decisions on individuals.

The legislation also applies to technology that “assists or replaces” human judgment. This accounts for two broad categories of decision making processes.

First, there are processes in which decisions are ultimately made by a “human in the loop”, rather than by an AI system without human intervention. Think of a system that screens immigration applications by applying prescribed criteria to make a recommendation to a human official, who must then decide whether to accept or reject the application. The “automated decision system”, as defined in the CPPA, will have assisted human judgment, not replaced it.

Second, there are processes without a “human in the loop”, in which the AI system makes decisions without human intervention. Here, technology will have replaced human judgment as opposed to merely having assisted it. Think of a computer program that marks multiple-choice exams, determines the distribution of scores, and applies a curve to assign grades. Or, consider software that a bank might use to decide loan applications by assessing an applicant’s creditworthiness based on their personal information.[18]
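To make the distinction concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of rules-based creditworthiness tool described above. The applicant fields, weights and threshold are invented for illustration and are not drawn from the CPPA, the GDPR or any actual lender’s system; the point is simply that the same scoring logic can either assist a human officer (by producing a recommendation) or replace human judgment entirely (by issuing the decision itself), and both configurations would appear to fall within the CPPA’s definition.

```python
# Hypothetical illustration only: a toy rules-based credit-screening tool.
# The fields, weights and threshold are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Applicant:
    annual_income: float
    existing_debt: float
    missed_payments: int

def credit_score(a: Applicant) -> float:
    """Toy score combining a few pieces of personal information."""
    debt_ratio = a.existing_debt / max(a.annual_income, 1.0)
    return 100.0 - 60.0 * debt_ratio - 10.0 * a.missed_payments

def decide(a: Applicant, human_in_the_loop: bool) -> str:
    """Return a recommendation for a human officer, or a final decision."""
    score = credit_score(a)
    outcome = "approve" if score >= 50 else "decline"
    if human_in_the_loop:
        # The system merely *assists* human judgment: an officer must
        # confirm or override the recommendation before anything is decided.
        return f"RECOMMENDATION: {outcome} (score {score:.0f}) - route to loan officer"
    # The system *replaces* human judgment: its output is the decision itself.
    return f"DECISION: {outcome} (score {score:.0f})"

print(decide(Applicant(80_000, 20_000, 1), human_in_the_loop=True))
print(decide(Applicant(80_000, 20_000, 1), human_in_the_loop=False))
```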

In addition to applying to a wider range of decisions than the GDPR because it captures AI systems that merely “assist” in making decisions, the CPPA is also much broader because it extends to “predictions, recommendations or decisions about individuals”. The transparency and explainability obligations are likewise much broader than those in the EU because they are not limited to decisions that “produce legal effects” concerning individuals or “similarly significantly affect” them. As noted above, the transparency obligation would apply where “predictions, recommendations or decisions about individuals could have significant impacts on them”, while the explainability obligation has no such limitation.

There are pros and cons of the proposed approach to regulating automated decision making under privacy laws.

Transparency and “explainability” invite regulation because of how automated decision systems can use personal information to make or inform determinations that can have legal or other significant consequences for individuals. The logic of the proposed CPPA is that, just as Canadians should have the right to know which personal information an organization has on file about them and how the organization uses that information, they should also be entitled to know whether the organization uses an automated system to make decisions, why any such decision was made, and how their personal information was used in making it. The proposed new requirements track those that the federal government has imposed on itself.

Requiring organizations to be transparent about their use of automated decision systems and to provide explanations of how those systems have made particular decisions can further the objective of building trust in automated decision systems.[19] Transparency and explainability requirements may prevent or expose erroneous or abusive automated decisions, such as decisions that unlawfully discriminate. Transparency and explainability rules may also ensure that Canadians who are affected by automated decisions have the information they may need to challenge them.

The proposed CPPA’s requirement, in section 66(1), that an explanation “must be provided … in plain language” is ostensibly similarly motivated, though it belies both the potential difficulty of explaining predictions (and thus decisions) made by “black box” AI systems,[20] and the potential limitations of a non-expert’s ability to understand an explanation in a way that builds trust rather than promotes confusion or even misplaced confidence.[21] Moreover, requiring transparency could increase the risk that the security of automated decision systems – or, more accurately, of the data they use to make or inform decisions – will be compromised, or a company’s trade secrets or intellectual property will be publicly exposed.[22]
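By way of illustration only, the following sketch shows the kind of factor-by-factor, plain-language explanation that a simple, interpretable scoring model could generate on request. The factor names, weights and data sources are hypothetical, and nothing in the CPPA prescribes this form. The sketch also underscores the point made above: this direct read-out is possible only because the model is interpretable by design; a genuine “black box” model would require post-hoc interpretability techniques of the kind described in the sources cited at note 20.

```python
# Hypothetical illustration only: a plain-language explanation generated from a
# simple, interpretable scoring model. Factor names, weights and data sources
# are invented; a "black box" model would not permit this direct read-out.
def explain_decision(annual_income: float, existing_debt: float, missed_payments: int) -> str:
    debt_ratio = existing_debt / max(annual_income, 1.0)
    contributions = {
        "debt relative to income (taken from your loan application)": -60.0 * debt_ratio,
        "missed payments in the past year (obtained from a credit bureau report)": -10.0 * missed_payments,
    }
    score = 100.0 + sum(contributions.values())
    lines = [f"Your application score is {score:.0f}. It started at 100 and was adjusted as follows:"]
    for factor, delta in sorted(contributions.items(), key=lambda kv: kv[1]):
        lines.append(f"  {delta:+.0f} points: {factor}")
    lines.append("You may ask us to correct this information and to have the decision reviewed.")
    return "\n".join(lines)

print(explain_decision(annual_income=80_000, existing_debt=20_000, missed_payments=1))
```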

The scope of decisions to which the provisions apply is also extremely broad, especially given how pervasive and ubiquitous AI applications will become. The transparency obligation is somewhat narrowed by the qualification that it applies only to “predictions, recommendations or decisions about individuals that could have significant impacts on them”. However, given that AI systems can make decisions across ecosystems of products and services, and that the types of “impacts” are not limited to specific categories such as those that have legal significance, many more organizations will likely be affected by the obligations than is probably intended. As noted above, the explainability obligation has no such limitations: it applies whether or not the decision is made solely by an automated decision-making process and whether or not the decision has any significant impact on an individual. In fact, subsection 63(3) is not even expressly limited to decisions that involve the use of personal information.

As many organizations have for decades used relatively less sophisticated prediction and decision making tools that “assist” in making decisions, both Bill 64 and the CPPA could now sweep in decision making processes that heretofore never had to be disclosed or explained. As well, the transparency and explainability obligations are not technologically neutral, as similar obligations are not imposed under Bill 64 and the CPPA for decisions that are made using traditional manual processes.

But the choice to use privacy law, and to legislate specifically in relation to automated decision systems, raises other important policy considerations, even though Bill 64 and the CPPA regulate automated decision making less extensively than the GDPR.

Automated decision systems use personal information to draw inferences, make predictions, and either offer recommendations or take independent decisions. Individuals have privacy interests in whether and how their personal information is used to make decisions about them.[23] This is what justifies using privacy law to regulate automated decision making; it is the field of legislation that is concerned with protecting individuals from the misuse of their personal information.

The automated decision system provisions of Bill 64 and the CPPA, like the comparable provisions in the GDPR, are not, however, truly directed at privacy-related mischief. They regulate the use of particular technologies more than the use of information – though, due to the nature of the technologies in question, the line is admittedly difficult to draw. Further, the goals are not the protection of reasonable expectations of privacy, which is what privacy laws advance, but the avoidance of other harms, such as biased or inaccurate decisions.

If the purpose of regulating automated decision making is to avoid potential harms, then regulation should ostensibly be concerned not only with means, but also with ends. These ends – i.e., the actual decisions that automated systems make – are already regulated under numerous existing frameworks.[24] To the extent that Bill 64 and the CPPA add value to the regulation of automated decision making by focusing on means of automated decisions, the extent of that contribution can only be measured in light of how the law already governs the ends of automated decisions. For example:

  • Competition law already governs interactions between firms with respect to consumer-facing decisions, particularly about pricing. The involvement of automated decision systems does not change these rules or their application, as the Competition Bureau has confirmed.[25]
  • Consumer protection law already governs firms’ behaviour vis-à-vis customers. For example, if a business employs an automated decision system in its e-commerce activities, and the automated decision system makes a false, misleading, or deceptive representation to a consumer, then the consumer may seek to avail themselves of the protections of federal (under the Competition Act) or provincial consumer protection law.[26] Similarly, to the extent that an organization uses an automated decision system for credit rating or other consumer reporting, its activities will presumably be governed by the same statutory frameworks that apply in the absence of an automated decision system.[27] There are also a myriad of other laws that regulate decisions made using personal information. For example, there are  comprehensive regimes governing transparency and explainability in credit reporting such as under the Ontario Consumer Reporting Act. 
  • Human rights law already prohibits unlawful discrimination. This includes human rights legislation that governs transactions between private parties,[28] as well as the Canadian Charter of Rights and Freedoms, which governs transactions between the state and private parties. If automated decision systems cause organizations (including government agencies) to run afoul of these anti-discrimination measures – by manifesting “algorithmic bias”, for example[29] – then existing law will be available to respond and impose available sanctions.[30]

The regulation of automated decision making under Bill 64 and the CPPA would overlap with these and numerous other existing legal and regulatory frameworks. The result would be the regulation of the means of automated decisions under privacy law, and the regulation of their ends (and perhaps also their means) under other regulatory frameworks. This creates the possibility not only of a duplicative compliance burden, but also of duplicative enforcement by different regulators at different levels of government, both federal and provincial. Moreover, the regulation of automated decision making under privacy law will likely result in privacy commissioners such as the OPC, with limited or no expertise in the other regulatory areas, becoming drawn into areas that are better handled by the regulatory regimes already in place. Expanding federal privacy laws into a myriad of areas under provincial jurisdiction also raises new constitutional division of powers issues.

If the true objective of the new transparency and explainability rules is, more broadly, to promote the responsible deployment of automated decision systems, it may be more prudent to consider either stand-alone legislation or “best practice” standards, rather than artificially extending privacy law beyond its natural confines. The draft European Regulation for Harmonizing Rules on Artificial Intelligence, referenced above, provides an example of a “fit for purpose” stand-alone legislative framework (one that will no doubt generate a lot of interest and debate over the coming months). The ITechLaw Responsible AI: A Global Policy Framework (and its 2021 update) provides an example of the latter “best practice” industry standard approach. Another approach is to enforce existing laws and to amend those laws where needed. For example, in the U.S., which has no comprehensive federal privacy law, the Federal Trade Commission (FTC) has signalled in publications such as Using Artificial Intelligence and Algorithms and Aiming for truth, fairness, and equity in your company’s use of AI that it will enforce existing laws over which it has jurisdiction where data and algorithms are used to make decisions about consumers.

As governments, organizations, and other members of the public grapple with privacy issues associated with the responsible deployment of automated decision systems, the various regulatory options and choices are starting to come into focus. However, moving forward with Bill 64 and the CPPA’s obligations related to automated decision making requires a robust and informed discussion about the implications of using privacy law to regulate ethical uses of AI.

Published simultaneously on barrysookman.com. 

_______________________________

[1]           See, B. Marr, Artificial Intelligence in Practice: How 50 Successful Companies Used AI and Machine Learning To Solve Problems (John Wiley & Sons Ltd., 2019).

[2]           Bots at the Gate, at p. 14.

[3]           See, e.g., A2001018 (Re), 2020 CanLII 48128 (B.C. W.C.A.T.), at para. 11; A1902041 (Re), 2019 CanLII 141985 (B.C. W.C.A.T.), at para. 12. A 2019 review of the B.C. Workers’ Compensation Board stated that: “A small number of claims can be accepted by [the Board’s case management system (“CMS”)] after meeting certain eligibility rules. Other decisions including wage-loss, average earnings and anticipated RTW calendar can be automated if rules are met. The majority of claims do not satisfy these rules. In the last 19 years, about 3% of claims were accepted and paid initial wage loss by CMS” (J. Patterson, New Directions: Report of the WCB Review 2019 (October 30, 2019), p. 347). A similar system may be used in the determination of claims for employment insurance at the federal level (see Béland-Falardeau v. Chairperson of the Immigration and Refugee Board, 2014 PSST 18, at paras. 15-18).

[4]           See, e.g., J. Wakefield, “The man who was fired by a machine“ (June 21, 2018), BBC News: “It took Mr Diallo’s bosses three weeks to find out why he had been sacked. His firm was going through changes, both in terms of the systems it used and the people it employed…. His original manager had been recently laid off and sent to work from home for the rest of his time at the firm and in that period he had not renewed Mr Diallo’s contract in the new system…. After that, machines took over – flagging him as an ex-employee.”

[5]           See GDPR Recital 15: “In order to prevent creating a serious risk of circumvention, the protection of natural persons should be technologically neutral and should not depend on the techniques used. The protection of natural persons should apply to the processing of personal data by automated means, as well as to manual processing, if the personal data are contained or are intended to be contained in a filing system. Files or sets of files, as well as their cover pages, which are not structured according to specific criteria should not fall within the scope of this Regulation.” Article 2(1): “This Regulation applies to the processing of personal data wholly or partly by automated means and to the processing other than by automated means of personal data which form part of a filing system or are intended to form part of a filing system.” See also the definition of “processing” in Article 4(2) as “any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction”.

[6]           See, Guidelines on Automated Individual Decision-making and Profiling for the purposes of Regulation 2016/679 – European Commission, UK ICO Rights related to automated decision making including profiling, UK ICO What does the UK GDPR say about automated decision-making and profiling?

[7]           See, GDPR Recital 63: “A data subject should have the right of access to personal data which have been collected concerning him or her, and to exercise that right easily and at reasonable intervals, in order to be aware of, and verify, the lawfulness of the processing. This includes the right for data subjects to have access to data concerning their health, for example the data in their medical records containing information such as diagnoses, examination results, assessments by treating physicians and any treatment or interventions provided. Every data subject should therefore have the right to know and obtain communication in particular with regard to the purposes for which the personal data are processed, where possible the period for which the personal data are processed, the recipients of the personal data, the logic involved in any automatic personal data processing and, at least when based on profiling, the consequences of such processing.” (emphasis added) Also, Article 13(2): “In addition to the information referred to in paragraph 1, the controller shall, at the time when personal data are obtained, provide the data subject with the following further information necessary to ensure fair and transparent processing:…(f) the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”

[8]           See, GDPR Recital 91, Necessity of a data protection impact assessment: “This should in particular apply to large-scale processing operations which aim to process a considerable amount of personal data at regional, national or supranational level and which could affect a large number of data subjects and which are likely to result in a high risk, for example, on account of their sensitivity, where in accordance with the achieved state of technological knowledge a new technology is used on a large scale as well as to other processing operations which result in a high risk to the rights and freedoms of data subjects, in particular where those operations render it more difficult for data subjects to exercise their rights. A data protection impact assessment should also be made where personal data are processed for taking decisions regarding specific natural persons following any systematic and extensive evaluation of personal aspects relating to natural persons based on profiling those data or following the processing of special categories of personal data, biometric data, or data on criminal convictions and offences or related security measures. A data protection impact assessment is equally required for monitoring publicly accessible areas on a large scale, especially when using optic-electronic devices or for any other operations where the competent supervisory authority considers that the processing is likely to result in a high risk to the rights and freedoms of data subjects, in particular because they prevent data subjects from exercising a right or using a service or a contract, or because they are carried out systematically on a large scale. The processing of personal data should not be considered to be on a large scale if the processing concerns personal data from patients or clients by an individual physician, other health care professional or lawyer. In such cases, a data protection impact assessment should not be mandatory.”

See Article 35, Data protection impact assessment:

1. Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data. A single assessment may address a set of similar processing operations that present similar high risks.

2. The controller shall seek the advice of the data protection officer, where designated, when carrying out a data protection impact assessment.

3. A data protection impact assessment referred to in paragraph 1 shall in particular be required in the case of:

a. a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person;

b. processing on a large scale of special categories of data referred to in Article 9(1), or of personal data relating to criminal convictions and offences referred to in Article 10; or

c. a systematic monitoring of a publicly accessible area on a large scale.

[9]           House of Lords (U.K.), Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (April 16, 2018), at paras. 105-106. In its response to the Select Committee’s report, the U.K. government affirmed its position, which is reflected in the Data Protection Act 2018 (which implements the GPDR in U.K. law), that “[i]ndividuals should not be subject to a decision based solely on automated processing if that decision significantly and adversely impacts them, either legally or otherwise, unless required by law” (Secretary of State for Business, Energy and Industrial Strategy (U.K.), Government response to House of Lords Artificial Intelligence Select Committee’s Report on AI in the UK: Ready, Willing and Able? (June 2018), at para. 11).

[10]          The word “specific” is important here. As described above, automated decision making is generally regulated by the same legal frameworks that apply to non-automated decision making (human rights law, consumer protection law, competition law, and so on).

[11]          Australia’s Privacy Act 1988, despite significant amendments in 2014 and 2018, does not specifically address automated decision making. See, Office of the Australian Information Commissioner, “History of the Privacy Act”. In a submission to the Australian Government’s Department of Industry, Innovation and Science in 2019, the Australian Information Commissioner suggested that the government consider incorporating protections that specifically deal with automated decision making into Australian law, along the lines of the GDPR:

The [Office of the Australian Information Commissioner, or “OAIC”] recognises that international data protection regulations include additional principles beyond those rights and obligations contained in the Privacy Act. In particular, we note that the [government’s] discussion paper refers to the several rights contained in the EU’s General Data Protection Regulation (EU GDPR) including the right to erasure, and rights related to profiling and automated decision making.

The OAIC appreciates the importance of ensuring that Australia’s privacy protection framework is fit for purpose in the digital age and we suggest that further consideration should be given to the suitability of adopting some EU GDPR rights in the Australian context where gaps are identified in relation to emerging and existing technologies, including AI. Other rights in the EU GDPR that may merit further consideration include rights relating to compulsory data protection impact assessments for data processing involving certain high risks projects, the right of an individual to be informed about the use of automated decisions which affect them and express requirements to implement data protection by design and by default. [Emphasis added; footnotes omitted.]

[12]          New Zealand’s Privacy Act 2020 does not entitle individuals to an explanation when their personal information is used in automated decision making. Section 23 of the New Zealand Official Information Act 1982 does, however, entitle individuals to written reasons for decisions made by government authority, upon request.

[13]          See, A. Robertson, “A new bill would force companies to check their algorithms for bias” (April 10, 2019), The Verge.

[14]          Office of the Privacy Commissioner for Personal Data (H.K.), Ethical Accountability Framework for Hong Kong China (October 2018), at p. 2. See also, ibid., at p. 19:

[G]iven the significant impact nonpersonal data can have on individuals, a broader view of governance is needed. Rather than trying to change the definition of personal data to keep up with the technological developments, for example, it is more workable to develop governance mechanisms that help organizations determine that data is not being used in an inappropriate fashion

[15]          Model Artificial Intelligence Governance Framework, 2d ed. (2020), at p. 57

[16]          See: Responsible AI: 2021 Update | ITechLaw

[17]          See Explanatory Notes to Bill 64

[18]          See, Information Commissioner’s Office (U.K.), “What is automated individual decision-making and profiling?”

[19]          See, B. Khaleghi, “The Why of Explainable AI“ (August 19, 2019), Element AI:

Trust is the first, and perhaps most important, driver of interest in [explainable AI (“XAI”)]. It is especially significant for applications involving high-stakes decision-making and those where rigorous testing isn’t feasible due to a lack of sufficient data or the complexity of comprehensive testing….

To trust model predictions in such applications, users need to ensure the predictions are produced for valid and appropriate reasons. This does not imply the goal of XAI is to fool human users into misplacing trust in a model. On the contrary, this means XAI must reveal the true limitations of an AI model so users know the bounds within which they can trust the model. This is particularly important as humans have already been shown to be prone to blindly trusting AI explanations. Some attribute this tendency to the so-called illusion of explanatory depth phenomenon, in which a person falsely assumes their high-level understanding of a complex system means that they understand its nuances and intricacies. To avoid misplaced trust from users, the explanations provided by AI models must be truly reflective of how the model works, in a human-understandable form, and presented using interfaces capable of communicating their limitations.

[20]          See, A. Rai, “Explainable AI: from black box to glass box” (December 17, 2019), 48 J. Acad. Marketing Sci. 137, at p. 138:

[D]eep learning algorithms are a class of ML algorithms which sacrifice transparency and interpretability for prediction accuracy. These algorithms are now being employed to develop applications such as prediction of consumer behaviors based on high-dimensional inputs, speech recognition, image recognition, and natural language processing. As an example, convolutional neural networks, which underlie facial recognition applications, extract high-level complex abstractions of a face through a hierarchical learning process which transforms pixel-level inputs of an image to relevant facial features to connected features that abstract to the face. The model learns the features that are important by itself instead of requiring the developer to select the relevant feature. As the model involves pixel-level inputs and complex connections across layers of the network which yield highly nonlinear associations between inputs and outputs, the model is inherently uninterpretable to human users.

Addressing the trade-off between prediction and explanation associated with deep learning models, there have been significant recent advances in post-hoc interpretability techniques—these techniques approximate deep-learning black-box models with simpler interpretable models that can be inspected to explain the black-box models. These techniques are referred to as XAI as they turn black-box models into glass-box models and are receiving tremendous attention as they offer a way to pursue both prediction accuracy and interpretability objectives with AI applications.

Cf., C. Rudin and J. Radin, “Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition” (November 22, 2019), 1:2 Harv. Data Sci. Rev. (“Being asked to choose an accurate machine or an understandable human is a false dichotomy. Understanding it as such helps us to diagnose the problems that have resulted from the use of black box models for high-stakes decisions throughout society. These problems exist in finance, but also in healthcare, criminal justice, and beyond.”).

[21]          See, W. D. Heaven, “Why asking an AI to explain itself can make things worse” (January 29, 2020), MIT Technology Review.

[22]          See, B. Hartwig, “Artificial Intelligence Transparency and Privacy Law: Myths & Reality” (December 8, 2020), IOT For All.

[23]          See, C.F. Kerry, “Protecting privacy in an AI-driven world” (February 10, 2020), Brookings.

[24]          See, A. Goldenberg and M. Scherman, Automation Not Domination: Legal and Regulatory Frameworks for AI (Canadian Chamber of Commerce, June 2019).

[25]          See, Competition Bureau Canada, Big data and innovation: key themes for competition policy in Canada (February 19, 2018), p. 4:

[T]he Bureau believes that the emergence of firms that control and exploit data can raise new challenges for competition law enforcement but does not, in and of itself, necessitate an immediate cause for concern. There is little evidence that a new approach to competition policy is needed although big data may require the use of tools and methods that are somewhat specialized and, thus, may be less familiar to competition law enforcement. The fundamental aspects of the analytical framework (e.g., market definition, market power, competitive effects) should continue to guide enforcement.

The OECD has warned of the possibility of algorithmic collusion. See, OECD, Algorithms and Collusion: Competition policy in the digital age (2017), at p. 25:

One of the main risks of algorithms is that they expand the grey area between unlawful explicit collusion and lawful tacit collusion, allowing firms to sustain profits above the competitive level more easily without necessarily having to enter into an agreement. For instance, in situations where collusion could only be implemented using explicit communication, algorithms may create new automatic mechanisms that facilitate the implementing of a common policy and the monitoring of the behaviour of other firms without the need for any human interaction. In other words, algorithms may enable firms to replace explicit collusion with tacit co-ordination.

It is not clear whether, given the current state of AI technology, these concerns are more theoretical than real. See, U. Schwalbe, “Algorithms, Machine Learning, and Collusion” (2018), 14:4 J. Competition L. & Econ. 568.

[26]          See, e.g., Competition Act, R.S.C. 1985, c. C-34, s. 52; Consumer Protection Act, 2002, S.O. 2002, c. 30, Sch. A, Part III. As the Director of the U.S. Federal Trade Commission’s Bureau of Consumer Protection noted in a blog post in April 2020, the FTC has “used our FTC Act authority to prohibit unfair and deceptive practices to address consumer injury arising from the use of AI and automated decision-making”.

[27]          See, e.g., Consumer Reporting Act, R.S.O. 1990, c. C.33.

[28]          For example, section 1 of Ontario’s Human Rights Code, R.S.O. 1990, c. H.19, provides that “[e]very person has a right to equal treatment with respect to services, goods and facilities, without discrimination because of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, marital status, family status or disability”. Similar protections exist in the Canadian Human Rights Act, R.S.C. 1985, c. H-6, and in other provincial and territorial human rights laws.

[29]          One oft-cited example is the use of automated decision systems in consumer lending. As Sian Townson recently noted in the Harvard Business Review (“AI Can Make Bank Loans More Fair” (November 6, 2020)):

Lenders often find that artificial-intelligence-based engines exhibit many of the same biases as humans. They’ve often been fed on a diet of biased credit decision data, drawn from decades of inequities in housing and lending markets. Left unchecked, they threaten to perpetuate prejudice in financial decisions and extend the world’s wealth gaps.

The problem of bias is an endemic one, affecting financial services start-ups and incumbents alike. A landmark 2018 study conducted at UC Berkeley found that even though fintech algorithms charge minority borrowers 40% less on average than face-to-face lenders, they still assign extra mortgage interest to borrowers who are members of protected classes.

[30]          In the criminal justice system, for example, Canadian courts have already encountered legal challenges to risk assessment tools, albeit not AI-enabled ones, that allegedly fail to accurately assess particular offenders. In Ewert v. Canada, 2018 SCC 30, the Supreme Court of Canada ruled that Correctional Service of Canada (“CSC”) breached a statutory obligation to “take all reasonable steps to ensure that any information about an offender that it uses is as accurate, up to date and complete as possible” (under section 24(1) of the Corrections and Conditional Release Act, S.C. 1992, c. 20) when it “disregard[ed] the possibility that [its assessment] tools are systematically disadvantaging Indigenous offenders and by failing to take any action to ensure that they generate accurate information” (at para. 66). Though the Court did not conclude that the CSC had violated the Canadian Charter of Rights and Freedoms, its decision – the reasoning of which would doubtless equally apply if the impugned assessment tools had been automated decision systems – suggests how statutory frameworks that do not specifically pertain to automated decision making (or to its non-automated analogues) may nonetheless meaningfully regulate decision making by public and private entities.
