
A new way to manage AI risks: The National Institute of Standards and Technology’s AI Risk Management Framework

On January 26, 2023, the United States’ National Institute of Standards and Technology (“NIST”) published its anticipated Artificial Intelligence Risk Management Framework (the “AI RMF”).

This framework builds on the many principles-based “responsible AI” frameworks adopted in recent years[1], laying the foundations of a concrete operational framework to help organizations identify and manage the unique risks created by artificial intelligence systems (“AI Systems”). Nevertheless, the AI RMF’s scope remains broad, and it was not designed with any specific context or technology in mind. As such, although it will undoubtedly prove useful to many different types of organizations, whether operating in the US or not, involved at any point in the AI lifecycle (i.e. “AI Actors”), its implementation will have to be tailored to the specifics of each organization’s compliance landscape.

Currently, businesses that wish to develop or use AI responsibly are left with little bespoke guidance adapted to this technology, save perhaps for certain privacy laws which include provisions on “automated decision making”, such as Quebec’s new Law 25 or the EU GDPR. Despite renewed calls for AI regulation in the wake of the sudden popularity of OpenAI’s ChatGPT[2], currently proposed AI legislation is likely still a few years away from coming into force, including in Canada and the EU, which have been pioneers in that regard (see our blogs on Canada’s Artificial Intelligence and Data Act (“AIDA”) here, and on the EU’s Artificial Intelligence Act here). As we will see in this blog, although the AI RMF is voluntary, it represents an important step toward the creation of rules to govern the development and deployment of artificial intelligence systems.

Background

NIST is a leading U.S. standards-setting institution affiliated with the U.S. Department of Commerce. An important part of its work is the development and maintenance of standards, guidelines and best practices for multiple technological and scientific sectors. For example, in 2014 NIST published the first version of its Cybersecurity Framework, which is similar in many respects to ISO’s 27001 Information Security Management standard.

In 2021, the U.S. Congress gave NIST the mission to develop a “voluntary risk management framework for trustworthy artificial intelligence systems”. In particular, the U.S. Congress requested that this framework include best practices and voluntary standards on how to develop and assess trustworthy AI Systems, mitigate potential risks, establish common definitions of explainability, transparency, security, fairness, etc. and remain technology-neutral.

The AI RMF’s development began shortly thereafter, with an initial request for information in July 2021. NIST published a first draft of the AI RMF on March 17, 2022, then a second in August of the same year. Throughout this process, NIST received comments and feedback from a broad set of stakeholders from academia, civil society and the private and public sectors, including Microsoft, Google, NASA, IBM and the National Artificial Intelligence Institute. This degree of participation from multiple leading industry participants may favour its adoption by AI Actors. The fact that the AI RMF is intended to be a living document that will evolve over time with the benefit of further input based on lessons learned may also contribute to its success.

In parallel to the publication of the AI RMF, NIST presented a Roadmap that sets out further actions it intends to take to improve and expand the framework, including the alignment of the framework with other standards (mainly ISO’s IT and AI standards, some of which are still under development)[3].

Structure of the Framework

The AI RMF is divided into two main parts. Part 1 (Foundational Information) focuses on providing guidance on how to assess AI risks and measure trustworthiness, while Part 2 (Core and Profiles) details the core of the framework and its four main functions: Govern, Map, Measure and Manage.[4] Its main goal is to assist organizations in managing risks, both to the enterprise and to society at large, that can emerge from the use of AI systems and, ultimately, to cultivate trustworthiness.[5]

As a companion to the AI RMF, NIST also published an online Playbook, which was built to help organizations use and implement the main framework. The Playbook is a platform which expands on the four core functions with detailed explanations, suggested actions and recommended resources for every step of the AI RMF Core.

The AI Risk Management Framework

Part 1 – Foundational Information

As a practical guide, the AI RMF first focuses on the unique risks brought by AI Systems. Organizations have been managing cyber and computer risks for several decades, but AI Systems (such as systems that can operate autonomously on our roads, that create novel artworks based on databases of human creations, or that make hiring, financing or policing recommendations with limited human input) introduce a new range of risks.

Rather than proposing a list of specific risks that may rapidly become outdated or not apply to all AI Systems, NIST identifies AI-specific challenges that organizations should keep in mind when developing their own risk management approach:

  • Risk Measurement[6]: AI risks can be difficult to measure precisely, whether quantitatively or qualitatively. This is due in part to the fact that many organizations depend on external service providers to meet their AI needs, which can create alignment and transparency issues. This is compounded by the current absence of generally accepted methods to measure AI risks. Other risk measurement challenges include tracking emergent risks, measuring risk in real-world settings (rather than in controlled settings), and the inscrutability of algorithms (lack of explainability).
  • Risk Tolerance[7]: Tolerance to risk will vary from one organization to another, and from one use-case to another. Establishing risk tolerance will require taking into account multiple evolving factors, including the unique characteristics of each organization, their legal/regulatory environments and the broader social context in which they operate.
  • Risk Prioritization[8]: Organizations using AI Systems will be faced with the challenge of efficiently triaging risks. NIST recommends an approach where the highest risks are prioritized and where organizations stop using AI Systems that present “unacceptable negative risk levels”. Once again, this evaluation will be contextual. For example, initial risk prioritization may be higher for systems interacting directly with humans (a minimal risk-register sketch illustrating this kind of triage follows this list).
  • Organizational Integration and Management of Risk[9]: AI risk management should not be considered in isolation from the broader enterprise risk strategy. The AI RMF should be integrated within the organization’s existing risk governance processes so that AI risks are treated alongside other critical risks, producing a more integrated outcome.
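To make these challenges more concrete, the following is a minimal sketch, in Python, of how an organization might encode a simple AI risk register supporting contextual risk tolerance and prioritization, including flagging systems whose risks exceed an “unacceptable” level. The data model, scoring scale and thresholds are our own illustrative assumptions; the AI RMF does not prescribe any particular structure or scoring method.

```python
# Illustrative sketch only: the AI RMF does not prescribe a data model or scoring method.
# Field names, the 1-5 scales and the human-interaction weighting are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIRisk:
    description: str
    likelihood: int               # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int                   # assumed scale: 1 (negligible) to 5 (severe)
    interacts_with_humans: bool = False

    @property
    def score(self) -> int:
        # Simple likelihood x impact score, weighted up for human-facing systems,
        # echoing NIST's note that such systems may warrant higher initial priority.
        base = self.likelihood * self.impact
        return base + 5 if self.interacts_with_humans else base


@dataclass
class AISystemRiskRegister:
    system_name: str
    risk_tolerance: int           # organization- and use-case-specific threshold
    unacceptable_level: int       # above this, consider ceasing use of the AI System
    risks: List[AIRisk] = field(default_factory=list)

    def prioritized(self) -> List[AIRisk]:
        """Triage: highest-scoring risks first."""
        return sorted(self.risks, key=lambda r: r.score, reverse=True)

    def requires_escalation(self) -> bool:
        """True if any risk score exceeds the 'unacceptable' level."""
        return any(r.score > self.unacceptable_level for r in self.risks)
```

In practice, such a register would be populated for each AI System and revisited as the system, its context of use and the applicable legal and regulatory environment evolve.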

NIST’s focus on risk is in line with current legislative proposals in Canada and Europe: the draft legislation being considered in each of those jurisdictions takes a risk-based approach under which the development or use of so-called “high-risk” AI Systems would be subject to specific governance and transparency requirements. For example, Canada’s draft AIDA proposes that any person responsible for an AI System be required to assess whether the system is a “high-impact system” based on criteria that will be set out in regulations[10].

As such, assessing risk, and especially putting in place workflows to identify and manage high-risk AI Systems, may soon become legally mandated. For organizations wanting to stay ahead of the curve, the AI RMF represents a good starting point. In section 2 of Part 1 of the framework, NIST further details its risk management approach by proposing a general TEVV framework (for test, evaluation, verification and validation), inspired by work done by the OECD, which identifies recommended risk management activities for the different stages of the AI lifecycle. The following figures are taken directly from the AI RMF:

[Figures from the AI RMF illustrating risk management and TEVV activities across the stages of the AI lifecycle]
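To give a sense of how TEVV activities attach to the lifecycle, the following is an illustrative sketch that pairs paraphrased lifecycle stage names with examples of TEVV activities an organization might plan at each stage. Both the stage names and the pairing of activities to stages are our own simplified approximation, not a reproduction of the framework’s figures.

```python
# Illustrative, simplified mapping of AI lifecycle stages to example TEVV activities.
# Stage names are paraphrased and the activity assignments are our own examples;
# consult the AI RMF itself for the authoritative lifecycle figures.
TEVV_BY_LIFECYCLE_STAGE = {
    "Plan and Design": "review of design assumptions and intended context of use",
    "Collect and Process Data": "validation of datasets for quality and representativeness",
    "Build and Use Model": "model testing and performance assessment",
    "Verify and Validate": "validation of the model against requirements",
    "Deploy and Use": "integration testing and pre-release evaluation",
    "Operate and Monitor": "ongoing monitoring, periodic audits and impact assessments",
}


def planned_tevv(stage: str) -> str:
    """Return the example TEVV activity associated with a lifecycle stage."""
    return TEVV_BY_LIFECYCLE_STAGE.get(stage, "no TEVV activity defined for this stage")
```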

NIST’s approach bears certain similarities to privacy-by-design approaches, in particular as regards the use of impact assessments. Privacy impact assessments (“PIAs”) have become a staple of privacy compliance, including in Quebec, where Law 25 will, starting in September 2023, require companies to conduct PIAs for new projects involving information systems that process personal information.[11] And if adopted in its current form, AIDA would add an obligation for every organization responsible for an AI System to conduct an assessment to determine whether the system is “high-impact”.

Ultimately, the goal of an effective AI risk management process should be not only to reduce risk for the organization, but also to encourage “trust” by those who adopt the technology. “Trust” and “trustworthy AI” are recurrent motifs of existing principles-based approaches, including the Montréal Declaration for Responsible Development of Artificial Intelligence and iTechLaw’s Responsible AI: A Global Policy Framework (which some authors of this blog contributed to developing). Consistent with such frameworks, the AI RMF defines trust as the cornerstone of AI risk management.[12] From the outset, an AI impact assessment should evaluate whether AI technology is an appropriate or necessary tool for the task at hand, considering the trustworthiness characteristics in conjunction with the relevant risks, impacts, costs and benefits, with contributions from a variety of stakeholders.[13] The AI RMF describes seven characteristics of trustworthy AI, all of which should be considered and treated holistically when deploying an AI System (a simple checklist sketch follows the list below):

  1. Valid and Reliable
  2. Safe
  3. Secure and Resilient
  4. Accountable and Transparent
  5. Explainable and Interpretable
  6. Privacy-Enhanced
  7. Fair - with Harmful Bias Managed
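In practice, organizations often turn such a list into an assessment checklist. The following is a minimal sketch of one way to record whether and how each trustworthiness characteristic has been considered for a given AI System; the characteristics are taken from the AI RMF, but the checklist structure and fields are our own illustrative assumptions.

```python
# Hypothetical checklist structure: the seven characteristics come from the AI RMF;
# the fields and the idea of a per-system checklist are illustrative assumptions.
from dataclasses import dataclass
from typing import List

TRUSTWORTHINESS_CHARACTERISTICS = (
    "Valid and Reliable",
    "Safe",
    "Secure and Resilient",
    "Accountable and Transparent",
    "Explainable and Interpretable",
    "Privacy-Enhanced",
    "Fair - with Harmful Bias Managed",
)


@dataclass
class CharacteristicAssessment:
    characteristic: str
    considered: bool = False       # has this characteristic been assessed for the system?
    notes: str = ""                # rationale, trade-offs and residual concerns


def new_checklist() -> List[CharacteristicAssessment]:
    """One blank assessment entry per trustworthiness characteristic."""
    return [CharacteristicAssessment(c) for c in TRUSTWORTHINESS_CHARACTERISTICS]
```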

NIST places particular emphasis on the “Accountable and Transparent” characteristic, as it underpins all the others. Transparency notably leads to greater accountability by allowing any person who interacts with an AI System to better understand its behaviour and limitations. NIST provides no detailed guidance on how to foster such transparency, but we note that the Institute of Electrical and Electronics Engineers recently made freely available some of its standards on AI Ethics and Governance, including its Standard for Transparency of Autonomous Systems, which provides specific guidance on the topic.[14] Transparency of AI Systems is a key challenge, as their underlying algorithms have the reputation of being “black boxes” that produce outputs which are not easily explainable.

 

Part 2 – Core and Profiles

Part 2 of the AI RMF is what NIST calls the AI RMF Core, a framework of actions and desired outcomes in developing responsible and trustworthy AI Systems.[15] It is composed of four functions to maximize benefit and minimize risk in AI outcomes and activities:

  • The Govern function focuses on the implementation of policies and procedures related to the mapping, measuring, and managing of AI. It has a focus on “people”, emphasizing workplace diversity, inclusion and a strong risk mitigation culture.[16]
  • The Map function highlights the systematic understanding and categorization of the AI’s performance, capabilities, goals and impacts.[17]
  • The Measure function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.[18]
  • The Manage function entails allocating risk resources to map and measure risks on a regular basis and as defined by the Govern function.[19]

For each function, NIST provides several categories and subcategories of operational tools for implementation. For example, the first category of the Govern function relates to ensuring that “Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risk are in place, transparent, and implemented effectively”. This category is further divided into seven subcategories that suggest different actions to fulfill the purpose of the function. We will not go over each here, but we recommend that anyone working to develop an AI governance structure within an organization look at the AI RMF, and in particular the interactive Playbook (which provides additional detail), for guidance that can then be adapted to their context.
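To illustrate how the Core’s hierarchy of functions, categories and subcategories might be tracked internally, the following is a minimal sketch using the GOVERN 1 category quoted above as an example. Only the category description is quoted from the AI RMF; the tracking fields, the status vocabulary and the placeholder subcategory texts are our own illustrative assumptions, and the actual subcategory wording should be taken from the AI RMF Playbook.

```python
# Minimal sketch of how an organization might track AI RMF Core outcomes internally.
# Only the GOVERN 1 category description is quoted from the AI RMF; the tracking
# fields, status values and placeholder subcategory texts are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Subcategory:
    identifier: str                   # e.g. "GOVERN 1.1"
    description: str                  # take the actual text from the AI RMF Playbook
    status: str = "not started"       # assumed vocabulary: not started / in progress / implemented
    evidence: str = ""                # link to the policy, procedure or record showing the outcome


@dataclass
class Category:
    identifier: str
    description: str
    subcategories: List[Subcategory] = field(default_factory=list)


govern_1 = Category(
    identifier="GOVERN 1",
    description=(
        "Policies, processes, procedures, and practices across the organization related "
        "to the mapping, measuring, and managing of AI risk are in place, transparent, "
        "and implemented effectively"
    ),
    subcategories=[
        Subcategory(f"GOVERN 1.{i}", "consult the AI RMF Playbook for the subcategory text")
        for i in range(1, 8)
    ],
)
```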

 

Takeaways

Although we remain at an early stage in the development of AI standards and laws, with the advent and popularity of generative AI Systems like ChatGPT and Stable Diffusion (which have brought both the risks and rewards of AI into focus[20]), the need for an efficient and adaptable risk management framework becomes more evident by the day.

The AI RMF is an important step in the right direction, helping to fill the current guidance gap on how best to manage AI risk. While compliance with the AI RMF is, of course, not mandatory, implementing its recommendations is a proactive way for organizations to get ahead of the curve as regards regulatory compliance requirements that are already coming into focus. In this sense, implementation of the AI RMF will serve as a useful step toward the responsible deployment of AI Systems “by design”.

 

 

 

[1] In 2020, Harvard University’s Berkman Klein Center published a widely circulated “Map of Ethical and Rights-Based Approaches to Principles for AI” which listed almost 40 documents published since 2016 by private and public actors, from Microsoft, Amnesty International, the OECD and the University of Montreal to the UK and US governments. See also: Responsible AI: A Global Policy Framework 2021 Edition | McCarthy Tétrault.

[2] https://www.nytimes.com/2023/01/23/opinion/ted-lieu-ai-chatgpt-congress.html

https://www.euronews.com/next/2023/02/06/chatgpt-in-the-spotlight-as-the-eu-steps-up-calls-for-tougher-regulation-is-its-new-ai-act

[3] See ISO/IEC AWI 42005 (AI system Impact assessment) and ISO/IEC DIS 42001 (AI Management System).

Note that ISO is also developing its own AI risk management standard with ISO/IEC 23894.

[4] AI RMF 1.0, p. 2.

[5] The AI RMF is designed to address risks of “harm to people”, “harm to an organization” and “harm to an ecosystem”.

[6] AI RMF 1.0, p. 5.

[7] AI RMF 1.0, p. 7.

[8] AI RMF 1.0, p. 7.

[9] AI RMF 1.0, p. 8.

[10] Article 7 of the Artificial Intelligence and Data Act.

[11] Act respecting the protection of personal information in the private sector, s. 3.3.

[12] AI RMF 1.0, p. 12.

[13] AI RMF 1.0, p. 13.

[14] https://ieeexplore.ieee.org/browse/standards/get-program/page/series?id=93

[15] AI RMF 1.0, p. 20.

[16] AI RMF 1.0, p. 22.

[17] AI RMF 1.0, p. 27.

[18] AI RMF 1.0, p. 28.

[19] AI RMF 1.0, p. 31.

[20] See J. Doe 1 and J. Doe 2 (as representatives of a class) v. GitHub, Inc., Microsoft Corporation, OpenAI, Inc. et al. (GitHub Copilot class action lawsuit)

See Sarah Andersen, Kelly McKernan & Karla Ortiz (as representatives of a class) v. Stability AI Ltd., Stability AI Inc., Midjourney, Inc. & DeviantArt, Inc. (Stable Diffusion class action lawsuit)

And see Getty Images (US), Inc. v. Stability AI, Inc.
