
A Framework for Trustworthy AI: The EU’s New Ethics Guidelines

On April 8, 2019, the European Commission’s High-Level Expert Group on Artificial Intelligence released the Ethics Guidelines for Trustworthy AI (the “Guidelines”). The High-Level Expert Group is an independent group of 52 members, established by the Commission and tasked with drafting the Guidelines in support of the Commission’s vision of ethical, secure and cutting-edge AI made in Europe. A first draft of the Guidelines was released in December 2018 and was subject to an open consultation that generated feedback from more than 500 contributors.

The Guidelines clearly state that they do not reflect an official position of the European Commission, nor do they aim to substitute for any form of current or future policymaking or regulation. Rather, they aim to offer guidance for AI applications by building a foundation for trustworthy AI. More specifically, the Guidelines are intended to be a voluntary living document that will evolve over time and serve as a starting point for the conversation surrounding trust and AI. AI is a powerful tool, but it can introduce unfair biases into systems or produce unreliable results. The Guidelines suggest that proper limits and governance surrounding AI will help harness these systems in an ethical and responsible way, and they provide some practical guidance to that end.

Recently on CyberLex, we outlined the Government of Canada’s new Directive on Automated Decision-Making, which imposes several requirements on the Canadian government’s use of automated decision-making systems. That directive and the European Commission’s Guidelines (in particular the assessment list in Chapter III of the Guidelines) will be helpful references for companies developing policies for the adoption and implementation of AI solutions in their businesses.

To achieve trust in AI, the Guidelines state that systems must be lawful, ethical, and robust. Fundamental principles from EU treaties, the EU Charter, and international human rights law are then used to derive seven practical requirements that AI systems should incorporate to be trustworthy, namely:

  • Human agency and oversight - including fundamental rights, human agency and human oversight
  • Technical robustness and safety - including resilience to attack and security, a fallback plan and general safety, accuracy, reliability and reproducibility
  • Privacy and data governance - including respect for privacy, quality and integrity of data, and access to data
  • Transparency - including traceability, explainability and communication
  • Diversity, non-discrimination and fairness - including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation
  • Societal and environmental wellbeing - including sustainability and environmental friendliness, social impact, society and democracy
  • Accountability - including auditability, minimization and reporting of negative impact, trade-offs and redress

The Guidelines explain each of the seven requirements in more detail.

The Guidelines then canvass a number of methods that can be employed to implement the seven requirements and realize trustworthy AI throughout all stages of an AI system’s lifecycle:

  • Technical methods might be employed, such as designing system architectures that avoid or promote certain behaviors. Rule of law and ethics “by design” (i.e., proactively embedding these elements into the design and operation of the system itself) should be used. Testing and validation of the system should be performed by a diverse group of people using multiple metrics (a minimal sketch of multi-metric validation follows this list).
  • Non-technical methods for implementing the seven requirements should also be used and evaluated on an ongoing basis. Adopting codes of conduct with key performance indicators, standardizing business practices, and educating system architects about ethical design are a few of the suggestions provided. Teams that are diverse in thought, skill, gender, and age will help create AI systems that integrate effectively into our diverse world.
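
To make validation against multiple metrics concrete, here is a minimal sketch in Python, assuming a hypothetical binary classifier. The metric choices, the sample data, and any thresholds a reviewer might apply are illustrative assumptions, not anything prescribed by the Guidelines:

```python
# A minimal sketch of validating a hypothetical binary classifier against
# more than one metric. The metrics and data here are illustrative
# assumptions, not requirements set by the Guidelines.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Absolute spread in positive-prediction rates across groups.

    A large gap can signal the kind of unfair bias the Guidelines warn about.
    """
    rates = {}
    for group in set(groups):
        preds = [p for p, g in zip(y_pred, groups) if g == group]
        rates[group] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Illustrative labels, model predictions, and a protected attribute.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

report = {
    "accuracy": accuracy(y_true, y_pred),
    "demographic_parity_gap": demographic_parity_gap(y_pred, groups),
}
print(report)  # a validation gate would flag metrics outside agreed thresholds
```

Reporting accuracy alongside a simple fairness measure, rather than a single headline number, reflects the Guidelines’ point that a system can score well on one metric while failing another.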

Finally, a draft assessment list is provided to assist with the development and deployment of trustworthy AI. The Guidelines suggest governance mechanisms that can be tailored at each level of an organization to implement these new processes. The assessment list is not meant to be exhaustive; it is a starting point for stakeholders, and it will likely be the most useful portion of the Guidelines for companies developing AI policies and frameworks.
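
How an organization operationalizes such a list is left open by the Guidelines. Below is a minimal sketch, assuming a simple internal tracking structure in Python; the requirement names are taken from the Guidelines, while the question texts and the system name are hypothetical placeholders:

```python
# A minimal sketch of tracking the draft assessment list inside an
# organization. The requirement names come from the Guidelines; the
# questions, system name, and workflow are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    requirement: str       # one of the seven requirements
    question: str          # a question adapted from the assessment list
    answered: bool = False
    notes: str = ""

@dataclass
class Assessment:
    system_name: str
    items: list = field(default_factory=list)

    def open_items(self):
        """Items that still need review before deployment."""
        return [item for item in self.items if not item.answered]

assessment = Assessment(
    system_name="credit-scoring-model",  # hypothetical system
    items=[
        ChecklistItem("Human agency and oversight",
                      "Is there a human in the loop for contested decisions?"),
        ChecklistItem("Transparency",
                      "Can individual decisions be explained to affected users?"),
    ],
)
print([item.requirement for item in assessment.open_items()])
```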

Stakeholders in the public and private sectors are invited to conduct a pilot project to use and critique the Guidelines’ draft assessment list. The draft list will be revised based on their findings, and a revised version will be presented to the European Commission in early 2020. Although the Guidelines are a common effort to harmonize the EU’s approach to trustworthy AI, they are not binding law. As the Guidelines evolve after the pilot project, adopting them could become standard industry practice in the EU, whether through consumer expectations, a regulation like the EU’s General Data Protection Regulation, the law of tort, or a combination thereof.

Visit our Cybersecurity, Privacy & Data Management page and contact us with any questions or for assistance.
