CSA’s Recent Guidance on AI in Capital Markets

Introduction

In response to the increasing opportunities and risks involving the use of artificial intelligence (“AI”) in Canadian capital markets, the Canadian Securities Administrators (“CSA”) have recently provided clarification on the application of existing securities laws to AI systems in Staff Notice and Consultation 11-348 Applicability of Canadian Securities Laws and the Use of Artificial Intelligence Systems in Capital Markets (“Staff Notice and Consultation 11-348”). This notice does not create any new legal requirements or modify existing requirements. It also seeks comments from stakeholders by March 31, 2025, on what, if any, changes are needed to securities laws considering the emergence of AI.

1. Overarching Themes Relating to the Use of AI Systems

The CSA identified the following overarching themes relating to the use of AI systems in capital markets:

(a) Technology and Securities Regulation: Securities laws are technology-neutral, but different technologies are treated in varying ways. It is “the activity being conducted, not the technology itself, that is regulated.” For example, an AI system that simply gathers information will require different treatment under securities laws than an AI system that automates trade execution.

(b) AI Governance & Oversight: Policies and procedures should be developed in a way that accounts for the unique risks of AI systems. For example, this includes having a “human-in-the-loop” and ensuring adequate AI literacy for those using these systems.

(c) Explainability: AI systems can often involve “black boxes” with low degrees of explainability, making it difficult for regulated market participants to meet securities law requirements involving transparency, accountability, record keeping, and auditability. Accordingly, when “selecting or developing AI systems, the need for advanced capabilities of an AI system should be balanced with the need for explainability.”

(d) Disclosure: The use of AI systems must be adequately disclosed considering existing disclosure obligations. Exaggerating AI use (so-called “AI washing”) must be avoided.

(e) Conflicts of Interest: Unique aspects of AI systems, such as a lack of explainability or biased data sets, make it especially important to carefully guard against “an AI system making conflicted decisions that favour the interests of the market participant over those of their client/investor.”

2. Specific Guidance for Market Participants

The CSA then examined existing requirements and guidance under Canadian securities laws in the context of various regulated industry participants:

(a) Registrants

Registrants must:

  • Disclose AI use in sufficient detail if AI will directly change the way they offer services to their clients;
  • Maintain records to demonstrate compliance with securities laws;
  • Not outsource registerable activities;
  • Verify outsourced functions that rely on AI;
  • Identify and address material conflicts of interest when using AI; and
  • Address the risks of using AI with appropriate governance structures.

(i) Advisers and Dealers

Advisers and dealers must ensure human oversight over AI-related activities, with an appropriate degree of explainability and regular testing. The CSA cautioned that “it would be challenging for a registrant” using AI systems to autonomously manage portfolios to “demonstrate proper compliance with securities laws.” At the current stage of AI technology, the CSA does not “believe it is possible to use an AI system as a substitute for an advising representative acting as decision-maker for clients’ investments and consistently satisfy regulatory requirements such as for making suitability determinations or reliably deliver the desired outcomes for clients.”

(ii) Investment Fund Managers (“IFMs”)

IFMs must:

  • Ensure “careful oversight and understanding” of AI systems and that such systems are “explainable, transparent, and free from biases and conflicts of interest”;
  • Adequately disclose a fund’s use of AI as part of its material investment strategy, as well as the risks arising from such use. Where AI use amounts to a material change to the fund’s investment strategy, that change must be approved by investors;
  • Ensure both an absence of discretion and sufficient transparency for funds using AI to track one or more indices if the IFM wants them to be considered an “index fund” rather than an “active investment strategy”;
  • Have policies and procedures to carefully review disclosure concerning AI usage; and
  • Identify and address material conflicts of interest when using AI, including through an independent review committee, if appropriate.

(b) Non-Investment Fund Reporting Issuers (“Non-IF Issuers”)

There is no “one size fits all” for Non-IF Issuers and their disclosure “is expected to be tailored … not boilerplate, and commensurate with the materiality of their use of AI systems and the associated risks.” Disclosures about the development or use of AI systems must be balanced, fair, not misleading, and not exaggerated.

Since Non-IF Issuers cannot disclose forward-looking information (“FLI”) unless they have a reasonable basis for it, they should carefully “consider whether making statements about the prospective or future use of AI systems in their continuous disclosure record may constitute FLI.”

(c) Marketplaces and Marketplace Participants

Marketplaces must:

  • Develop “robust internal controls and technology controls” when deploying AI systems;
  • Keep records of system failures and malfunctions;
  • Conduct annual system reviews, vulnerability assessments, and capacity stress tests;
  • Develop policies that include regular testing of AI systems, validation of AI system outputs, and procedures for mitigating any identified risks;
  • Ensure there is ongoing training and education for staff using AI systems; and
  • Ensure that their use of AI in systems that automatically generate or electronically transmit orders on a pre-determined basis does not compromise market integrity or investor protection, and complies with market conduct rules, including those related to market manipulation, insider trading, and other forms of market abuse.

(d) Clearing Agencies and Matching Service Utilities

There are “comprehensive requirements for risk management, systems design, operational performance, and regulatory compliance that are applicable to the use of AI systems” in the context of clearing agencies and matching service utilities. They must, when using AI systems:

  • “[D]evelop and maintain adequate internal controls and adequate cyber resilience and information technology controls, including controls relating to information systems, information security, change management, problem management, network support and system software support in relation to their use of systems that support clearing agencies’ clearing, settlement, and depository functions”; and
  • “[C]onduct capacity stress tests, review the adequacy of cyber resilience, review the vulnerability of systems and data centres, maintain adequate contingency and business continuity plans, conduct an annual independent review, and promptly notify the securities regulatory authority of a material failure.”

(e) Trade Repositories and Derivatives Data Reporting

Trade Repositories must:

  • “[I]mplement, maintain, and enforce appropriate controls and procedures to identify and minimize the impact of all plausible sources of operational risk relating to the use of AI systems”;
  • “[D]evelop and maintain adequate internal controls and adequate technology controls, including information security, cyber resilience, processing capability and change management in relation to their use of AI systems”; and
  • Ensure that data is secure, that business, legal, and operational risks are managed, and that regulators have access to any relevant data.

(f) Designated Rating Organizations (“DROs”)

DROs “must ensure that any use of AI systems in the credit rating process provides an appropriate degree of transparency and explainability” to comply with existing securities laws in this context. Moreover, DROs should exercise diligence and caution when implementing AI into their rating process, and publicly disclose any use of AI in their processes.

(g) Designated Benchmark Administrators (“DBAs”)

The CSA’s view is that “a DBA must ensure that any use of AI systems in the benchmark determination process provides an appropriate degree of transparency and explainability” to comply with existing securities laws. Similarly, DBAs “should exercise caution and diligence when considering using AI systems to automate any aspect of the benchmark determination process and assess appropriate safeguards for any such use, and publicly disclose any use of AI systems in the benchmark determination process.”

3. Outsourcing

The CSA understand that many firms will source AI-related systems from third parties, which may constitute an outsourcing arrangement. The CSA remind registrants that any service based on, or enhanced by, AI systems will likely require the employees or professional advisors involved in such AI enhancements to have specialized skills and an understanding of registrant conduct requirements. Tailored policies and procedures will also be needed to address the unique risks posed by AI systems developed and operated by third parties. Registrants should bear in mind the privacy law implications of any outsourcing arrangement in which client information might be inputted into an AI system, and take appropriate steps to keep client information confidential.

4. Consultation

The CSA have sought comments from stakeholders by March 31, 2025, on ten questions that will help the CSA determine whether any changes need to be made to securities laws in the context of AI.

How McCarthy Tétrault Can Help

If you are wondering how Staff Notice and Consultation 11-348 may affect your business, or for assistance in providing feedback in the Request for Comments, we are here to help. By leveraging our deep industry expertise and experience, we help our clients navigate Canada’s complex securities law regime to achieve their business goals. Please reach out to Sean Sadler, Sonia Struthers, Lori Stein, or Shane D’Souza if you would like to discuss the opportunities and risks of AI to registrant businesses.