
An AI a Day Keeps the Doctor Away?: Regulating Artificial Intelligence in Healthcare

This article is part of our Artificial Intelligence Insights Series, written by McCarthy Tétrault’s multidisciplinary Cyber/Data team. This series brings you practical and integrative perspectives on the ways in which AI is transforming industries, and how you can stay ahead of the curve.

View other blog posts in the series here.

 

Artificial Intelligence (“AI”) promises to transform many aspects of everyday life for Canadians. AI tools are predicted to dramatically improve the provision of health care by enhancing the quality, safety, and efficiency of diagnostic tools, treatment decisions, and care. Although AI innovations are, in many cases, still years away from general deployment into the Canadian health care ecosystem, AI is already used in some circumstances to read medical images, allowing machine learning to support diagnosticians in their decision-making.

Like many other jurisdictions, Canada’s health governance systems currently lack the legal and regulatory mechanisms needed to deal effectively with the challenges that AI poses. Key issues remain uncertain, including the legal requirements for health privacy, medical device regulation, and liability for AI-related harms. In Canada, regulation of AI in health care involves the additional challenge of navigating constitutionally fragmented jurisdiction over health care, which results in layers of governance and the need to coordinate multiple different actors.

This blog post highlights some of the legal challenges and issues that need to be addressed in order for Canada to have a robust and well-regulated governance structure for the use of AI in health care, including:

  • Coordination of federal and provincial authority;
  • Privacy and oversight with respect to the use of AI in treatment;
  • Promotion of equity through AI; and
  • Liability for AI-related harms.

Coordination of Federal and Provincial Authority

Canada’s federal system and constitutional division of powers pose unique challenges for the regulation of AI in health care.[1] Under the Constitution, health care falls within provincial jurisdiction. Although broadly similar, each province has its own set of regulatory frameworks addressing the safety and quality of health care, health information privacy, informed consent, human rights and non-discrimination, and the licensing of health care professionals. With respect to the adoption of AI, provincial legislation and regulation will be the primary legal structure governing the end users of AI technology and its application to patients.

However, despite health care being primarily a provincial concern, the federal government plays a significant role, particularly through its spending power under the Canada Health Act,[2] and its responsibilities for Indigenous Peoples, federal prisoners, and the military. The federal government is also a significant player in the regulation of drugs and medical devices.

Health Canada is the key regulatory authority at the federal level: it controls which medical devices are available for sale and may be included in the public insurance plans of the provinces and territories. Health Canada’s primary mode of regulation is the licensing process applicable to all medical devices. This process requires manufacturers to classify their devices according to risk (e.g., invasiveness, risk of erroneous diagnosis, and intended medical purpose) under the Medical Devices Regulations,[3] and to obtain approval from Health Canada. If a device is licensed, the Medical Devices Directorate continues to monitor its safety and efficacy.[4]

A significant challenge for the licensing and regulation of AI technology is machine learning, often referred to as “black-box” decision-making: the relevant algorithms are frequently proprietary and commercially sensitive, and the decisions and impacts of those algorithms cannot be fully explained. A question being asked by regulators around the world is “how can a regulator verify and validate machine learning algorithms to ensure that they do what they say well and safely?”[5] Another question is: what role should machine learning and automated decision-making have in health care?

Other key actors in the regulatory framework of Canadian health care are the professional bodies that provide oversight and self-regulation. Ensuring coordination between these regulatory bodies, the provincial and federal legislatures, and Health Canada, so as to minimize or eliminate regulatory blind spots, will be a challenge that must be overcome to ensure good governance of AI in health care.

In April 2021, the European Commission released a 108-page proposal to regulate AI. Although the European Union has yet to reach consensus on the final text of the legislation, the proposal has attracted significant interest, and thought leaders in Canada are considering how the European model could inform the development of AI governance here.[6]

Privacy and Oversight

Even though the European Union has not yet implemented an overarching framework for AI regulation, the rules in its General Data Protection Regulation (“GDPR”) provide significant guidance for the European medical community with respect to the regulation of medical AI.[7] For example, under the GDPR, a controller making decisions based solely on automated processing must provide the data subject with information about the existence of the automated decision-making, meaningful information about the logic involved, and the significance and consequences of such processing.[8] The GDPR also provides a robust regulatory framework governing the data privacy of citizens whose data may be used in machine learning algorithms. It further requires that medical AI be subject to human oversight.[9] Finally, the training data for the AI must be checked for bias, and the ongoing operation of the AI must be continuously monitored for bias, to ensure that its use does not unintentionally result in discrimination.[10]

Recent changes to Canada’s provincial privacy landscape suggest that Canada will not only follow the European example, but will also seek to enforce robust privacy rights in its own way. For instance, on September 22, 2021, Quebec’s landmark legislation, the Act to Modernize Legislative Provisions respecting the Protection of Personal Information ("Bill 64"), received Royal Assent. Bill 64 will impose a duty to inform individuals when personal information is collected from them using technological tools that enable them to be identified, located or profiled. From September 22, 2023, organizations will also be required to inform an individual when a decision is based exclusively on automated processing of his or her personal information, no later than the time the organization informs the individual of that decision. Organizations must also give the individual the opportunity to make representations to a member of their staff who is in a position to review the decision. For more information about Bill 64, consult our blog series here.

Similarly, the federal government’s Directive on Automated Decision-Making (“Canada ADM Directive”)[11] indicates that Canadian regulatory frameworks will likely also require that any health-focused AI technology provide for human intervention in the decision-making process, and that all data be tested for bias and non-discrimination.[12] The Canada ADM Directive is a risk-based governance model that establishes four levels of risk, judged by the impact of the automated decision. Risk-mitigating requirements are then prescribed for each impact level, including: notice before, and explanations after, automated decisions; peer review; employee training; and human intervention.
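
To make the tiered structure concrete, the sketch below models how escalating impact levels could map to escalating mitigation requirements. It is a minimal illustration in Python, not the text of the Directive: the tier composition and requirement labels are simplified assumptions for illustration only.

    # Illustrative sketch of a risk-tiered governance model, loosely inspired
    # by the Canada ADM Directive. The tier contents are simplified
    # assumptions, not the Directive's actual requirements.
    REQUIREMENTS_BY_IMPACT_LEVEL = {
        1: ["notice", "explanation"],
        2: ["notice", "explanation", "peer_review"],
        3: ["notice", "explanation", "peer_review", "employee_training"],
        4: ["notice", "explanation", "peer_review", "employee_training",
            "human_intervention"],
    }

    def required_mitigations(impact_level):
        """Return the mitigation measures prescribed for an impact level (1-4)."""
        if impact_level not in REQUIREMENTS_BY_IMPACT_LEVEL:
            raise ValueError("Impact level must be between 1 and 4")
        return REQUIREMENTS_BY_IMPACT_LEVEL[impact_level]

    # Example: a hypothetical diagnostic-support tool assessed at the highest
    # impact level would require human intervention on top of all
    # lower-tier measures.
    print(required_mitigations(4))

The design point the Directive captures, and the sketch mirrors, is that obligations scale with the potential impact of the automated decision rather than applying uniformly to every system.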

Promotion of Equity

Arguably the most significant concern associated with the use of AI and automated decision making in health care is their potential to amplify bias and discrimination. Canada’s health care system already grapples with the problems associated with inequities in health care – from differential resource allocation between communities, to differential treatment of individuals based upon their gender or race.

The current legal framework for funding health care in Canada (i.e., the Canada Health Act) only protects universal coverage for “medically necessary” hospital and physician services.[13] Given the novelty of AI-assisted medical services, few are likely to be considered medically necessary at present. Only patients with the means to afford add-on fees, or private or boutique health care, would therefore gain access to this sophisticated technology. Further, if only larger medical centres have the infrastructure and computing expertise necessary to develop AI-assisted medical programs, access to AI technology may be limited even where cost is not a barrier. Regulating AI through appropriate legal frameworks to ensure that it is developed and deployed in an accessible manner will therefore be an important matter for legislatures to consider and address.

In addition to potential inequities of access, there are two main sources of concern relating to discrimination in AI systems: (1) bias in the data used to train the system; and (2) bias in the algorithm. 

If the data used to train an AI system is flawed or incomplete, for example by failing to include sufficient data from a certain population, the AI system may be ineffective or dangerous for patients of the underrepresented population. For example, AI-assisted cancer screening tools trained primarily on images of light-skinned patients are more likely to misdiagnose cancer lesions in patients with skin of colour.[14] Bias in the AI training data, while easier to identify, poses serious questions relating to access to data, data transfer, and consent. The importance of training AI systems with data from diverse populations will have to be balanced with laws relating to data collection, use, transfer and storage across multiple jurisdictions, so that diverse patients residing in areas with less diverse populations are not at risk of being harmed by treatments based on unrepresentative data.
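
One practical first step for developers and regulators alike is a representation audit of the training data before any model is trained. The sketch below is a minimal Python example under stated assumptions: the dataset, the "skin_tone" column, and the 10% threshold are all hypothetical.

    # A minimal representation audit, assuming a pandas DataFrame of training
    # records with a hypothetical "skin_tone" column. Flags subgroups whose
    # share of the data falls below a chosen threshold.
    import pandas as pd

    def underrepresented_groups(df, column, min_share=0.10):
        """Return subgroup labels whose share of the data is below min_share."""
        shares = df[column].value_counts(normalize=True)
        return shares[shares < min_share].index.tolist()

    # Toy example: light-skinned images dominate the training set.
    training_data = pd.DataFrame({"skin_tone": ["light"] * 95 + ["dark"] * 5})
    print(underrepresented_groups(training_data, "skin_tone"))  # ['dark']

A check like this cannot establish that a dataset is unbiased, but it can surface obvious representation gaps before a system reaches patients.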

Bias in the algorithm of an AI system may be impossible to detect, particularly where machine learning techniques are employed. When the decision-making of the AI is a “black box”, due to the opacity of how the AI identifies patterns (and potentially to changes in the algorithm over time as the machine learns), it can be a challenge to ensure that discrimination is not occurring. To combat this, several jurisdictions are considering explicit legislative commitments to ensure that AI systems comply with anti-discrimination and human rights legislation.[15] Paired with robust monitoring requirements, these types of provisions would provide greater legal certainty, accountability, and public confidence in AI-assisted health care.
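
Even where the algorithm itself cannot be inspected, its outputs can be. The sketch below illustrates, in Python, the kind of ongoing disparity check a monitoring requirement might mandate: comparing false-negative rates across patient groups in logged predictions. The record format, group labels, and tolerance are assumptions for illustration, not a legal or clinical standard.

    # A minimal outcome-monitoring sketch: compare false-negative rates across
    # groups in logged (group, predicted_positive, actually_positive) records
    # and flag any gap above a tolerance. All names and thresholds are assumed.
    from collections import defaultdict

    def false_negative_rates(records):
        misses, positives = defaultdict(int), defaultdict(int)
        for group, predicted, actual in records:
            if actual:  # only actual positive cases can be "missed"
                positives[group] += 1
                if not predicted:
                    misses[group] += 1
        return {g: misses[g] / positives[g] for g in positives}

    def disparity_flagged(records, tolerance=0.05):
        rates = false_negative_rates(records)
        return max(rates.values()) - min(rates.values()) > tolerance, rates

    # Toy example: the model misses far more true cases in group B.
    log = ([("A", True, True)] * 90 + [("A", False, True)] * 10
           + [("B", True, True)] * 60 + [("B", False, True)] * 40)
    print(disparity_flagged(log))  # (True, {'A': 0.1, 'B': 0.4})

Because a learning system can drift over time, a disparity check of this kind would need to run continuously against live outputs, not once at the point of approval.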

Liability for AI-Related Harms

Another considerable hurdle in the adoption of AI technologies in health care, particularly in the medical community, is the continued uncertainty regarding the potential liability attached to the use of AI. Who do you sue when AI goes wrong?

Although AI has been used for various applications over the past few years, it remains unclear where liability should fall when an AI system fails. In 2020, the question of who (if anyone) is liable when an AI-powered investment trading system causes substantial losses for an investor was before the English courts for the first time.[16] Unfortunately for the development of tort law in this important area, the parties reached an out-of-court settlement, leaving the question to be answered another day.[17]

In Canada, medical harms may be dealt with under the law of negligence.[18] How AI technology changes the standard of care expected of a medical practitioner and, in particular, the acceptable level of decision-making delegation to an AI system, are questions that must be considered.[19] The courts must also consider whether an AI company has any liability to a patient who is misdiagnosed. Could an AI company contract out of its liability to a hospital if harm results from the use of its technology? Is an AI company only liable if bias is found in the data or the algorithm? What type of consent is needed from patients before AI technologies are employed?

Another area where AI may cause significant harm to Canadians is through breaches of private health care data, which could undermine the public’s overall confidence in the health care system. In the recent Supreme Court of Canada decision Reference re Genetic Non-Discrimination Act, 2020 SCC 17, the majority held that the federal government has the power to make rules combating genetic discrimination and protecting health through its jurisdiction over criminal law:

Many of this Court’s decisions illustrate how the criminal law purpose test operates. A law directed at protecting a public interest like public safety, health or morality will usually be a response to something that Parliament sees as posing a threat to that public interest. For example, prohibitions aimed at combatting tobacco consumption and protecting the public from adulterated foods and drugs were upheld because they protect public health from threats to it...

Parliament took action in response to its concern that individuals’ vulnerability to genetic discrimination posed a threat of harm to several public interests traditionally protected by the criminal law. Parliament enacted legislation that, in pith and substance, protects individuals’ control over their detailed personal information disclosed by genetic tests in the areas of contracting and the provision of goods and services in order to address Canadians’ fears that their genetic test results will be used against them and to prevent discrimination based on that information. It did so to safeguard autonomy, privacy and equality, along with public health. The challenged provisions fall within Parliament’s criminal law power because they consist of prohibitions accompanied by penalties, backed by a criminal law purpose.[20]

This case demonstrates that the federal government’s regulatory power is not limited to health care spending. However, it is unclear how effectively criminal law can be used to govern AI and, as AI technology becomes more common in health care, the legislatures and the courts will have to carefully consider how the current private and criminal law frameworks can be adapted to deal with attributing and apportioning liability arising from AI decision-making.

Conclusion

AI poses both risks and opportunities in the health care space. AI systems promise to democratize health care and provide superior patient care. However, regulators must contend with the challenge of ensuring that AI technology does what it is intended to do, does it well, and that there remains legal accountability for any harms caused.

 

To learn more about how our Cyber/Data Group can help you navigate the privacy and data landscape, please contact national co-leaders Charles Morgan and Daniel Glover.

 

[1] Colleen M. Flood and Catherine Régis, “AI and Health Law” in Florian Martin-Bariteau & Teresa Scassa, eds., Artificial Intelligence and the Law in Canada (Toronto: LexisNexis Canada, 2021), available at SSRN: https://ssrn.com/abstract=3733964.

[2] Canada Health Act, R.S.C., 1985, c. C-6.

[3] Medical Devices Regulations, SOR/98-282.

[4] Health Canada, Medical Devices Directorate: https://www.canada.ca/en/health-canada/corporate/about-health-canada/branches-agencies/health-products-food-branch/medical-devices-directorate.html

[5] W. Nicholson Price II, “Artificial Intelligence in Health Care: Applications and Legal Issues” (2017) 14:1 The SciTech Lawyer; David Schneeberger et al., “The European Legal Framework for Medical AI” in A. Holzinger, P. Kieseberg, A. Tjoa & E. Weippl, eds., Machine Learning and Knowledge Extraction (CD-MAKE 2020), Lecture Notes in Computer Science, vol. 12279 (Cham: Springer, 2020). https://doi.org/10.1007/978-3-030-57321-8_12

[6] Law Commission of Ontario, “Comparing European and Canadian AI Regulation” (November 2021). https://www.lco-cdo.org/wp-content/uploads/2021/12/Comparing-European-and-Canadian-AI-Regulation-Final-November-2021.pdf

[7] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [GDPR]

[8] GDPR, ibid, Arts 13, 14.

[9] GDPR, ibid, Art 22; Schneeberger, supra note 5, at 211.

[10] Schneeberger, supra note 5, at 211.

[11] Treasury Board of Canada Secretariat, Directive on Automated Decision-Making.

[12] Federal Government’s Directive on Automated Decision-Making: Considerations and Recommendations

[13] Canada Health Act, R.S.C., 1985, c. C-6, s. 2, definitions of “hospital services” and “physician services”.

[14] Adamson AS, Smith A. Machine Learning and Health Care Disparities in Dermatology. JAMA Dermatol. 2018 Nov 1;154(11):1247-1248. doi: 10.1001/jamadermatol.2018.2348. PMID: 30073260.

[15] Law Commission of Ontario, “Regulating AI: Critical Issues and Choices” (April 2021) at 38-39. https://www.lco-cdo.org/wp-content/uploads/2021/04/LCO-Regulating-AI-Critical-Issues-and-Choices-Toronto-April-2021-1.pdf

[16] Minesh Tanna, “AI-powered investments: Who (if anyone) is liable when it goes wrong? Tyndaris v VWM” (November 2019). https://www.simmons-simmons.com/en/publications/ck2xifd2ddmrq0b48u46j2nns/ai-powered-investments-who-if-anyone-is-liable-when-it-goes-wrong-tyndaris-v-vwm

[17] Jeremy Kahn, “Why do so few businesses see financial gains from using A.I.?” (Fortune, October 20, 2020). https://fortune.com/2020/10/20/why-do-so-few-businesses-see-financial-gains-from-using-a-i/

[18] In Quebec, civil law liability principles would govern.

[19] Mélanie B. Forcier, et al., “Liability issues for the use of artificial intelligence in health care in Canada: AI and medical decision-making” (July 2020) Dalhousie Medical Journal 46(2). DOI:10.15273/dmj.Vol46No2.10140

[20] Reference re Genetic Non-Discrimination Act, 2020 SCC 17 at paras. 73, 103.
