Harmonizing Global AI Governance: The EU AI Treaty's Role and Impact

This article is part of our Artificial Intelligence Insights Series, written by McCarthy Tétrault’s AI Law Group - your ideal partners for navigating this dynamic and complex space. This series brings you practical and integrative perspectives on the ways in which AI is transforming industries, and how you can stay ahead of the curve. View other blog posts in the series here.

 

Introduction

At a time when artificial intelligence (hereafter “AI”) is increasingly integrated into every aspect of our lives, the need for a comprehensive framework to ensure its ethical and responsible use has never been more pressing. This sense of urgency is at the heart of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law[1] (hereafter “EU AI Treaty”), the world’s first legally binding international instrument on AI.

The EU AI Treaty was negotiated by the 46 member states of the Council of Europe alongside 11 non-members (including the US and Canada)[2] and was opened for signature on September 5, 2024, during a conference of Ministers of Justice in Lithuania[3]. Several non-state actors participated in the negotiations, among them UNESCO representatives.

The EU AI Treaty is yet another example of the growing body of AI law emerging as part of the global effort to reconcile the rapid advancement of AI technologies with the imperatives of human rights protection and the preservation of fundamental precepts of society.

Purpose and Scope

Not to be confused with the EU AI Act, which creates targeted obligations for specific actors in the AI value chain operating on the EU market[4], the EU AI Treaty is an international instrument designed to address the lifecycle of AI systems, particularly those that might interfere with human rights, democracy, and the rule of law. Its application is broad, extending to both the public and private sectors[5].

Notably, the treaty carves out certain exemptions. AI systems related to national security and national defense are excluded from the application of the EU AI Treaty[6] (an approach also adopted in the EU AI Act), while research and development activities are exempt on the condition that they are not used in a manner that may interfere with human rights[7].

Requirements and Obligations

The EU AI Treaty establishes general obligations for adherent countries to address in their national AI laws. It emphasizes broad principles rather than detailed execution strategies and lacks specificity on how to achieve the goals it outlines.

Under the EU AI Treaty, parties must put in place measures to ensure that the use of AI systems is consistent with upholding human rights[8], preserving the integrity and independence of institutional structures[9] and maintaining democratic procedures[10], including the separation of powers, judicial independence, access to justice, and public discourse.

The EU AI Treaty alludes to the notion of “risks and adverse impacts” without providing a clear definition in its text. While there is some indication that this notion “[refers] principally to the human rights obligations and commitments applicable to each Party’s existing frameworks on human rights, democracy and the rule of law”[11], the specific boundaries of the concept remain ambiguous. A risk- or impact-based approach is a staple of emerging AI legislation, notably in the EU AI Act and the proposed Canadian Artificial Intelligence and Data Act (hereafter “AIDA”).

Other obligations imposed by the EU AI Treaty include establishing measures to:

  • Safeguard human dignity and autonomy (Article 7);
  • Ensure transparency and oversight appropriate to the context and risks, notably by facilitating the identification of AI-generated output, which in turn aids the exercise and enforcement of intellectual property rights[12] (Article 8);
  • Establish accountability and responsibility for AI systems that may adversely impact human rights, democracy, and the rule of law (Article 9);
  • Help ensure that activities of AI systems respect equality and non-discrimination rights (Article 10);
  • Protect individual privacy rights and personal data (Article 11);
  • Enhance the reliability and trustworthiness of AI output (Article 12);
  • Foster safe innovation within controlled environments and under the supervision of competent and qualified authorities (Article 13).

Remedies and Oversight

Parties must put in place effective and accessible remedies for violations of human rights in the context of AI systems[13]. For a remedy to be effective, it must be capable of directly addressing and rectifying the challenged situation[14]. To be accessible, a remedy must be readily obtainable and accompanied by adequate procedural protections[15].

The EU AI Treaty calls for ongoing assessment and mitigation of risks related to AI systems, demanding thorough consideration of potential impacts, stakeholder perspectives, and continuous monitoring, with special attention given to the needs of children and persons with disabilities[16].

Parties must also promote digital literacy, including the development of specialist skills for actors in the AI value chain[17], and ensure public discussion and multi-stakeholder consultation on “important questions”[18] relating to AI systems, though this term is not defined.

Lastly, parties must ensure effective independent oversight mechanisms of AI measures and regulation and facilitate cooperation with human rights protection actors[19].

Entry into Force

The current signatories include the EU, US, UK, Andorra, Georgia, Iceland, Norway, Moldova, San Marino, and Israel[20]. However, the EU AI Treaty will only enter into force three months after at least five signatories have ratified it. As none have done so to date, its date of entry into force remains unknown (under international law, a treaty is not binding on its signatories without ratification, the legal act that expresses a State’s consent to be bound by the treaty). For later signatories, potentially including Canada (which participated in the negotiation of the EU AI Treaty), the EU AI Treaty would become binding three months after their ratification[21].

Upon signature, or upon submission of an instrument of ratification, acceptance, approval or accession, parties must specify how they intend to fulfill the obligations prescribed by the EU AI Treaty in a declaration addressed to the Secretary General of the Council of Europe[22].

Bottom Line: What this Means For Future AI Regulation

If Canada decides to sign the EU AI Treaty, AIDA, currently in the committee stage of the House of Commons review process[23], will require important revisions.

AIDA aligns with the EU AI Treaty in its obligations related to record-keeping, risk assessment, mitigation measures, notification of harm, public disclosure of AI system descriptions, and the establishment of an oversight body (although these obligations are mostly left to be detailed in future regulations that have yet to be drafted). However, it falls short in several key ways. AIDA’s regulatory focus is confined to “high-impact” AI systems within the private sector. Though it has been suggested that the concept of “high-impact systems” would cover systems having an impact on human rights[24], there is currently no explicit consideration of safeguarding democracy and the rule of law. Proposed amendments to AIDA do, however, suggest implementing tiers of high-impact systems, notably including AI systems used by courts or by administrative bodies in proceedings adjudicating an individual’s rights[25]. Additional amendments targeting general-purpose AI systems that may pose societal risks due to their scale would introduce risk-assessment obligations regarding “the spread of disinformation and the functioning of societal and democratic institutions”[26].

With the consultation period for AIDA coming to a close this past September 6th[27], its future remains unclear. The federal election in fall 2025 and the substantial rewrites that may be required to bring AIDA into step with the EU AI Treaty loom large on the horizon.

Conclusion

The EU AI Treaty is a further demonstration of the emergence of AI law on the global stage and seeks to guide the international community towards harmony in the regulation of AI systems. This goal is furthered by its key signatories, such as the EU, the US and the UK. It is important to note, however, that certain powerful nations, including China, were not party to the negotiations.

Considering the EU AI Treaty principally establishes guiding principles, it remains to be seen how the signatories translate the provisions into enforceable local legislation.

Further Reading

This blog post is part of our technology law insights series. For earlier posts on EU AI legislation, please refer to the posts below.

[1] Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, CETS 225 (2024), available online: https://rm.coe.int/1680afae3c.

[2] Council of Europe, “Council of Europe opens first ever global treaty on AI for signature”, September 5, 2024, available online: https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature.

[3] Reuters, “US, Britain, EU to sign first international AI treaty”, September 5, 2024, available online: https://www.reuters.com/technology/artificial-intelligence/us-britain-eu-sign-agreement-ai-standards-ft-reports-2024-09-05/.

[4] EU Artificial Intelligence Act, “High-level summary of the AI Act”, February 27, 2024, available online: https://artificialintelligenceact.eu/high-level-summary/.

[5] Council of Europe, supra, note 1, art. 3(1)(a) and (b).

[6] Ibid, art. 3(2) and (4).

[7] Ibid, art. 3(3).

[8] Ibid, art. 4.

[9] Ibid, art. 5(1).

[10] Ibid, art. 5(2).

[11] Council of Europe, Explanatory Report to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, 2024, available online: https://rm.coe.int/1680afae67, par. 67.

[12] Ibid, par. 58.

[13] Council of Europe, supra, note 1, art. 14.

[14] Council of Europe, supra, note 11, par. 95 and following.

[15] Ibid.

[16] Council of Europe, supra, note 1, art. 16-18.

[17] Ibid, art. 20.

[18] Ibid, art. 19.

[19] Ibid, art. 26.

[20] Council of Europe, supra, note 2.

[21] Council of Europe, supra, note 1, art. 30 and 31.

[22] Ibid, art. 3(1).

[23] An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, Bill C-27, (Committee stage – November 22, 2021), 1st session, 44th legis. Can, available online: https://www.parl.ca/legisinfo/en/bill/44-1/c-27.

[24] Government of Canada, “The Artificial Intelligence and Data Act Companion Document”, March 13, 2023, available online: https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document#s6.

[25] Minister of Innovation, Science and Industry, “Letter to the Standing Committee on Industry and Technology”, available online: https://www.ourcommons.ca/content/Committee/441/INDU/WebDoc/WD12751351/12751351/MinisterOfInnovationScienceAndIndustry-2023-11-28-Combined-e.pdf, p. 9.

[26] Ibid, p. 12.

[27] Government of Canada, “Government of Canada launches public consultation on artificial intelligence computing infrastructure”, June 26, 2024, available online: https://www.canada.ca/en/innovation-science-economic-development/news/2024/06/government-of-canada-launches-public-consultation-on-artificial-intelligence-computing-infrastructure.html.
