
Key Takeaways from McCarthy Tétrault’s Artificial Intelligence Law Summit

On May 11, 2023, McCarthy Tétrault hosted an Artificial Intelligence Law Summit – a client-focused event that featured the latest industry trends and legal developments in AI law and regulation.

Speakers provided practical and insightful guidance on responsible AI governance, contracting for AI in light of AI regulation, financing, acquiring and selling businesses that use AI, navigating IP and data-related challenges, and more.

Key takeaways and topics of discussion from the Summit are summarized below.

Further, if you would like to receive a link to watch a recording of the full Summit, please email our team at [email protected].

Responsible AI Governance

Guest panelists Phil Dawson (Armila AI) and Stephanie Kelley (Ivey Business School) joined McCarthy’s own Charles Morgan to discuss responsible AI governance and the practical ways that businesses can maximize opportunity while minimizing risk in a complex and uncertain regulatory environment.

Phil Dawson advocated for responsible AI policies and encouraged businesses to assess their risk tolerance by weighing the opportunities offered by AI against any reputational risks. Stephanie Kelley encouraged businesses to avoid “recreating the wheel” when operationalizing AI ethics and governance, pointing out that many of the principles relied on by businesses in their privacy policies can be leveraged with AI. Businesses that are developing or deploying AI systems must start to consider implementation of "Responsible AI by design" programs based on recognized industry standards such as the NIST AI Risk Management Framework.

The panel asserted that the time to implement an AI governance strategy is now. Businesses should proactively address the risks and challenges posed by AI. By implementing responsible AI practices and staying informed about regulatory developments, you can ensure that your business is prepared for the future of AI.

Contracting for tech under the AI provisions of the CPPA and AIDA

This panel was presented by Michael Scherman and Barry Sookman. Michael and Barry shed light on recent and significant developments in Canadian AI regulation such as the Artificial Intelligence and Data Act (“AIDA”), Consumer Privacy Protection Act (“CPPA”) and Quebec’s Law 25, and the impacts those laws will have on contracting for technology services and products. Although the legislative landscape of AI regulation remains relatively uncertain (AIDA and CPPA in particular are at a relatively early stage and could still change significantly before coming into force), businesses can be proactive by contracting for uncertainty. A few key takeaways include:

  • agreements need mechanisms to address evolving regulations for all markets in which business is conducted

  • agreements should be alert to the overlapping requirements that are currently known, both across Canada and abroad

  • businesses should manage liability through due diligence and should contract for support and coordination in connection with audits, record keeping, compliance with orders, regular monitoring and assessments, and indemnities (with procedures)

Businesses that are parties to cross-border agreements must also be aware of the quickly evolving landscape in other markets. As AI class action litigation starts to shape AI risk analysis in the United States and regulatory frameworks are passed by the European Union, Canadian businesses must be flexible and prepared for change when negotiating their agreements.

Financing, Acquiring and Selling AI Businesses

Guest panelist Daniel Lee (CIBC) joined Conrad Lee and Heidi Gordon to discuss financing, acquiring and selling AI businesses.

The panel discussed what new risks businesses need to think about when acquiring an AI company, and noted that assessing and redressing AI-related risk in the context of due diligence and representations and warranties will follow a trajectory similar to the path of privacy and cybersecurity.

The panel also provided a helpful breakdown of the differences between AI technology companies, organizing them into three distinct categories:

  1. Foundational AI Models

  2. AI Tools

  3. AI Applications

The integration of any AI business requires the careful alignment of the AI technology with the acquiring business's objectives, guidelines and guardrails.

AI Conundrum – Navigating IP and data-related challenges in the age of intelligent machines

This panel was presented by David Crane and Amy Fong. Amy and David explored some of the unanswered questions on IP ownership for AI-generated content and important data considerations for businesses:

  1. Complete the necessary upfront due diligence: Understand the AI system, know what data will be processed and generated and understand on whose behalf data is being processed

  2. Ensure the training/source data is fit for purpose: Assess accuracy and reliability of the data, avoid bias and discrimination and consider data anonymization and de-identification

  3. Confirm you have the necessary underlying data rights: Avoid infringement of third party intellectual property rights, have a lawful basis to process any personal data, obtain consent where required, anonymize/de-identify where possible and conduct privacy impact assessments

  4. Protect training datasets, data outputs and learnings using available intellectual property rights

  5. Protect rights and appropriately allocate risks in contracts: Guard against inadvertent loss of rights in data and address IP/ownership rights in AI learnings/insights and outputs

  6. Monitor evolving data protection and AI laws

  7. Consider ethical implications and reputational risks

  8. Implement and maintain data and AI policies: Data and AI governance, data retention and data management technology solutions

The Dawn of AI Law

Lastly, Francis Langlois and Charles Morgan traced the trajectory of AI principles from policy to regulation.

As AI continues to play an increasingly important role in the business world, it is essential for lawyers and business professionals to stay informed about regulatory developments. In the EU, Canada, and the US, AI-specific regulation is on its way and non-compliance with these new regulations is likely to result in material fines.

Francis and Charles recalled the core themes of Responsible AI that businesses should use to build their policies:

  1. Ethical Purpose

  2. Accountability

  3. Fairness and Non-discrimination

  4. Transparency and Explainability

  5. Privacy and Security

  6. Reliability

The future of AI regulation is unknown; however, it will be dynamic and far-reaching, with developments expected to affect all sectors. As has been the case with privacy law in Canada, it will be important to watch developments in the EU to get a clearer sense of what is coming in Canada. Businesses that want to help shape the coming legislation by participating in public consultations should contact McCarthy Tétrault.

For further information about any of these issues, or other ongoing developments regarding Artificial Intelligence, please contact any of the panelists from these discussions, or your McCarthy Tétrault trusted advisor.
