
Singapore Model AI Governance Framework – From Principles to Governance

The development of regulations specific to artificial intelligence (“AI”) is still at an early stage. In Canada, there is currently no law that specifically regulates AI, but this does not mean that developers and users of AI systems are free of legal obligations: they remain subject to existing legal regimes (tort, criminal law, privacy, consumer protection, etc.). Organizations that take a proactive stance in developing and implementing responsible and ethical AI principles will be better prepared for upcoming regulatory changes, will reduce the risks associated with AI, and will position themselves as leaders and responsible corporate actors.

Setting out and detailing ethical principles is an important first step. However, organizations should go further and develop concrete internal governance processes that aim to ensure compliance with those principles and that establish clear roles and responsibilities within the organization. Some frameworks have already been published, as discussed in our previous publications, many of which focus on ethical principles (see also our discussions of the AI Policy Framework released by the International Technology Law Association and of the OECD principles).

The Singapore Model Artificial Intelligence Governance Framework (Second Edition) (the “Singapore Framework”) is particularly interesting because it does not simply state ethical principles: it also links them to precise, concrete measures that organizations can implement. The two main principles underlying the Singapore Framework are that (1) the decision-making process should be explainable, transparent and fair, and (2) AI solutions should be human-centric (i.e. used to amplify human capabilities and to protect human well-being and safety). These principles constitute the underlying pillars of the Singapore Framework and are developed further in the following four key areas:

(A) Internal Governance Structures and Measures

The governance structure (which can take the form of many instruments, such as a policy, guideline, framework or process) should allocate clear roles and responsibilities within the organization and create a coordinating body. Responsibilities should include (i) applying risk control measures, (ii) deciding on an appropriate AI decision-making model, (iii) maintaining, monitoring and reviewing AI models, (iv) implementing communication channels with consumers and customers, and (v) training staff who deal with AI systems.

(B) Determining AI Decision-Making Model

Prior to deploying an AI solution, organizations should consider the commercial objectives they wish to accomplish and weigh them against the risks of using AI, taking into account differences in societal norms and values across countries and jurisdictions. The commercial objectives should be consistent with the organization’s own core values. In addition to identifying objectives, organizations must also identify the risks relevant to the AI solution. Based on this risk assessment, organizations can determine the degree of human involvement, or oversight, in the decision-making model. For safety-critical systems, organizations should ensure that a person is always in control of decision-making, with the AI solution only providing recommendations or input (human-in-the-loop). By contrast, in situations where both the probability and the severity of harm are low, fully automated decision-making (human-out-of-the-loop) may be appropriate. Between these two extremes, the Singapore Framework also contemplates a supervisory model in which a human monitors the system and can take over control when needed (human-over-the-loop).
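As a purely illustrative sketch, a risk assessment of this kind could map the probability and severity of harm to one of these oversight models. The Python example below is hypothetical: the ratings and thresholds are placeholders, not criteria prescribed by the Singapore Framework.

    from enum import Enum

    class Oversight(Enum):
        HUMAN_IN_THE_LOOP = "human retains control; AI only recommends"
        HUMAN_OVER_THE_LOOP = "human supervises and can take over"
        HUMAN_OUT_OF_THE_LOOP = "fully automated decision-making"

    def decision_making_model(probability: str, severity: str) -> Oversight:
        # Hypothetical mapping: the "low"/"high" ratings would come from the
        # organization's own risk assessment.
        if probability == "low" and severity == "low":
            return Oversight.HUMAN_OUT_OF_THE_LOOP
        if severity == "high":  # safety-critical: keep a person in control
            return Oversight.HUMAN_IN_THE_LOOP
        return Oversight.HUMAN_OVER_THE_LOOP

    print(decision_making_model("low", "low").name)    # HUMAN_OUT_OF_THE_LOOP
    print(decision_making_model("high", "high").name)  # HUMAN_IN_THE_LOOP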

(C) Operations Management

The development of an AI solution consists of various phases, the first being data preparation. Organizations must ensure that the datasets used for building models are unbiased, accurate and representative, in order to avoid discriminatory or unfair decisions. Understanding the lineage of data, keeping a data provenance record and taking active steps to ensure data quality are examples of practices that reduce risks and strengthen accountability. In any event, datasets will need to be reviewed and updated on an ongoing basis.
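By way of illustration, a data provenance record can be as simple as a timestamped log of a dataset's source and the transformations applied to it. The minimal Python sketch below uses hypothetical field names; the Singapore Framework does not prescribe any particular format.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ProvenanceRecord:
        dataset: str
        source: str  # where the data came from (lineage)
        steps: list = field(default_factory=list)  # transformations applied

        def log_step(self, description: str) -> None:
            # Record each processing step with a timestamp for later review.
            self.steps.append((datetime.now(timezone.utc).isoformat(), description))

    record = ProvenanceRecord(dataset="loan_applications_v2",
                              source="CRM export, 2020-01-15")
    record.log_step("removed records with missing income fields")
    record.log_step("re-sampled to balance age groups (bias mitigation)")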

Once data is formatted and cleansed, algorithms can be applied for analysis. Organizations should consider measures to enhance the explainability, repeatability and traceability of their AI algorithms (a brief sketch of the latter two follows this list):

  • Explainability is the ability of the developer of the AI solution to explain to a third party, in human-understandable terms, how the AI solution’s algorithms function.
  • Repeatability is the ability to consistently perform an action or make a decision, given the same scenario.
  • Traceability is the documentation of the decision-making processes.
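Even simple engineering practices support the latter two measures: fixing random seeds so that the same scenario yields the same decision, and logging every decision together with its inputs. The following minimal Python sketch illustrates both; the stand-in model and the audit-log format are hypothetical.

    import json
    import random

    random.seed(42)  # repeatability: a fixed seed makes the stochastic
                     # component below reproduce the same results across runs

    def predict(features):
        # Stand-in for a trained model's scoring function.
        return sum(features.values()) + random.random() > 1.5

    def traced_decision(features, log_path="decision_log.jsonl"):
        # Traceability: append the inputs and the resulting decision to an
        # audit log, documenting the decision-making process.
        decision = predict(features)
        with open(log_path, "a") as log:
            log.write(json.dumps({"inputs": features, "decision": decision}) + "\n")
        return decision

    print(traced_decision({"income": 0.8, "credit_score": 0.6}))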

(D) Stakeholder Communications

Appropriate communication inspires trust. As such, organizations should provide general information and be transparent about whether AI is used in their products and/or services. Here again, easy-to-understand language should be used consistently. Opt-out options should also be considered where feasible. Finally, organizations should put in place communication channels for feedback and decision reviews.

The Office of the Privacy Commissioner of Canada (“OPC”) recently held a consultation process on its proposals for ensuring appropriate regulation of AI. It is interesting to note that many of the OPC’s proposals embed principles covered by existing frameworks such as the Singapore Framework and the ITechLaw framework, including non-discrimination, fairness, the right to object to AI decision-making, explanation and transparency, traceability and accountability. The proposals are meant to be adopted as a suite within the law; organizations would therefore benefit from incorporating these principles into their internal governance processes now.

For more information on McCarthy Tétrault’s expertise in AI and related fields, please see our Cybersecurity, Privacy & Data Management group page.
