
OSFI-FCAC Releases Joint Report Highlighting AI Risks For Financial Institutions

This article is part of our Artificial Intelligence Insights Series, written by McCarthy Tétrault’s AI Law Group - your ideal partners for navigating this dynamic and complex space. This series brings you practical and integrative perspectives on the ways in which AI is transforming industries, and how you can stay ahead of the curve. View other blog posts in the series here.

 

On September 24, 2024, the Office of the Superintendent of Financial Institutions (“OSFI”) and the Financial Consumer Agency of Canada (“FCAC”) released a joint report (the “Report”) highlighting key risks to financial institutions associated with the use of artificial intelligence (“AI”). The Report, based on responses to a voluntary AI questionnaire completed by federally regulated financial institutions, offers insights into current AI adoption trends, key risks, and risk mitigation best practices for Canadian regulated financial institutions.

The Report forms part of OSFI’s and the FCAC’s ongoing consideration of the evolving risks that the use of AI may pose for financial institutions, and of their advocacy for responsible AI adoption.

Growth in AI Adoption for the Foreseeable Future

The Report highlights a significant increase in AI usage among Canadian financial institutions. Findings show an approximate 67% rise in AI adoption from 2019 to 2023, with 75% of the financial institutions that responded to the questionnaire expecting to invest in AI over the next three years and 70% expecting to use AI models by 2026.

OSFI and the FCAC have observed growing use of AI by financial institutions in core functions such as underwriting, claims management, algorithmic trading, and compliance monitoring. Insurers and deposit-taking institutions prioritize AI for fraud detection and cybersecurity, often involving third-party providers. Most institutions, especially smaller ones, are at the prototype stage of generative AI adoption, exploring large language models (commonly known as “LLMs”) for internal use and customer engagement.

Internal Risks from AI Adoption

Given the adoption of AI across different areas, OSFI and the FCAC view AI risk as a “transverse risk” that will amplify existing risks or introduce new ones as AI usage increases or scales into new areas within a financial institution. The Report identifies key internal risks from AI adoption, of which financial institutions should be acutely aware when considering the adoption of AI, including:

  • Data Governance Risks: Data-related risks, including privacy, governance, and quality, are top concerns in this area. Challenges arise from regulatory differences, fragmented data ownership, and third-party arrangements.
  • Model Risk and Explainability: AI model risks heighten as model complexity and opacity increase. Challenges include indeterminable causal relationships between inputs and outputs, which affect the stability of the AI model.
  • Legal, Ethical and Reputational Risks: Financial institutions that adhere only to regulatory requirements from a limited number of jurisdictions may expose themselves to reputational risks. Prioritizing consumer privacy and consent is crucial, and will require clear communication about AI impacts to consumers and proper data usage consent.
  • Third-Party Risks: Dependency on third-party providers for AI models could present concentration risk.
  • Operational and Cybersecurity Risks: The interconnectedness of systems can lead to unforeseen issues, threatening resiliency. Without proper security, AI use may heighten the risk of cyber attacks and financial vulnerabilities. Internal AI tools can expose institutions to data poisoning or unintentional disclosure of consumer data or trade secrets, especially when using LLMs with internal data.

In addition to the above internal AI risks, financial institutions continue to face external risks relating to: (i) the use of generative AI by threat actors; (ii) challenges adopting AI due to limited in-house expertise and knowledge; and (iii) potential credit losses arising from automation-driven job losses, as well as potential liquidity risks resulting from the use of AI to move liquidity automatically.

AI Risk Management Best Practices

The Report suggests a number of general best practices for managing AI risks, including:

  • Agility and Vigilance: The novelty of AI technology necessitates agile and vigilant risk management, especially in business areas not traditionally subject to model risk management (see OSFI Guideline E-23 – Model Risk Management).
  • Comprehensive Risk Frameworks: Gaps in AI risk management oversight can emerge if financial institutions address AI risks only within individual frameworks (such as model or cyber risk frameworks). Comprehensive organizational, governance, and operational frameworks, particularly those adopting a multidisciplinary approach, can help mitigate these risks.
  • Periodic Reassessments: Initial risk assessments often overlook the full spectrum of risks across the AI model lifecycle, highlighting the need for periodic reassessments.
  • Safeguards When Using AI Models: Controls such as human-in-the-loop review, performance monitoring, back-up systems, alerts, machine learning operations (MLOps), and limitations on the use of personally identifiable information should be in place when using AI models.
  • Specific Generative AI Controls: The use of generative AI models amplifies risks like bias, third-party dependencies, and potential disclosure of confidential data. To mitigate these risks, financial institutions should implement specific generative AI controls, including enhanced monitoring and employee education on the appropriate use of confidential inputs within generative AI models.
  • Employee AI Training: Insufficient AI training for employees can lead to poor decision-making and increased risks, underscoring the importance of comprehensive training programs.
  • Strategic Approach to AI Adoption and Risks: A strategic approach to AI adoption is crucial, as falling behind on technological advances can present business risks, especially without in-house AI expertise and knowledge. Financial institutions that are not using AI should still consider updating their risk frameworks, training, or governance processes to address AI-related risks.

The Regulatory Road Map

OSFI and FCAC have signaled their intention to engage in additional dialogue with financial institutions on operationalizing best practices, including hosting a second Financial Industry Forum on AI. Accordingly, financial institutions should ensure they remain engaged with the development and use of AI in the financial services sector and seek to implement practical strategies to address the associated risks.

 

Please contact Nancy Carroll, Hartley Lefton, Christine Ing, Vanessa Johnson, Charles Morgan or Christopher Yam if you have any questions or for assistance.
