Federal Government’s Directive on Automated Decision-Making: Considerations and Recommendations
The Government of Canada’s Directive on Automated Decision-Making (the “Directive”) recently took effect. It imposes a number of requirements on the federal government’s use of automated decision-making systems (i.e. technology that assists or replaces the judgment of a human decision-maker, including through the use of machine learning and predictive analytics). While this Directive will clearly have an impact on federal government departments, it will also impact companies that license or sell automated decision-making technologies to the federal government. Further, the Directive may be a helpful reference for companies that are developing policies and frameworks for the adoption and implementation of artificial intelligence (AI) solutions in their businesses.
The Directive took effect on April 1, 2019 and requires compliance within a year. It applies to systems used by federal government departments to provide services to a client external to the Government of Canada, and to systems, tools, or statistical models used to recommend or make an administrative decision about a client of a federal government department. The goal of the Directive is to ensure that automated decision-making systems (“ADM Systems”) are implemented with the least risk possible, incorporating concepts of procedural fairness and due process, while simultaneously supporting more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law.
As a first step, the Directive requires an algorithmic impact assessment for each ADM System based on assessment criteria that are specified in the Directive (examples of the criteria include the level of impact on rights of individuals or communities, and whether or not the impact is reversible). This assessment will result in a classification from Level I (the lowest impact) to Level IV (the highest impact). The results of the algorithmic impact assessment must be publicly released, and must be updated when the functionality or scope of the system changes.
Once the impact level has been assessed, the Directive imposes different requirements based on the assessed impact level. These include requirements in relation to:
- peer review of the ADM System by an appropriately qualified expert;
- transparency, including providing notice, before a decision is made, that an ADM System will be used, and providing explanations to affected individuals, after a decision is made, of how and why it was reached;
- ensuring that the ADM System allows for human intervention where appropriate;
- employee training in the design, function and implementation of the ADM System to be able to review, explain and oversee its operation;
- contingency systems and processes; and
- approval requirements for the system to operate.
Further, the Directive imposes additional requirements that apply to all ADM Systems, regardless of their assessed impact level. These include requirements in relation to:
- access, diligence, testing and auditability requirements for software that is licensed;
- release of any custom source code that is owned by the Government of Canada;
- testing and monitoring of outcomes: before production launch, testing the data and information used by the ADM System for unintended biases and other factors that may unfairly impact outcomes; and, on a scheduled basis, monitoring outcomes of the ADM System to safeguard against unintentional outcomes and to verify compliance with applicable legislation and the Directive itself;
- validating the quality of data collected for and used by the ADM System;
- security safeguards;
- legal consultations to ensure that the use of the ADM System complies with applicable laws;
- providing clients with recourse so that they are able to challenge decisions of the ADM System; and
- reporting information on effectiveness and efficiency of the ADM System.
The Directive is clearly relevant to the federal government. However, all companies that intend to provide technologies that include elements of automated decision-making to the federal government (as well as those downstream in the supply chain) would be well advised to pay particular attention too. Many of the requirements imposed by the Directive will demand specific functionality, disclosures or other assistance that can only be provided (or, at a minimum, is most efficiently provided) by the developer of the technology.
For more updates and other information in relation to AI and the law, stay tuned to our CyberLex blog.