Responsible Artificial Intelligence Governance for Business Starts Now: McCarthy Tétrault’s Charles Morgan speaks to Canadian Lawyer
With artificial intelligence (AI) regulation on the horizon, it’s critical that companies proactively address the associated risks before these new laws take effect, McCarthy Tétrault partner Charles Morgan tells Canadian Lawyer.
Charles, co-leader of McCarthy Tétrault’s Cyber/Data Group, discusses the complexities and challenges businesses face now, and will continue to face, as AI rapidly evolves.
“What we're seeing is that every single industry vertical that AI touches can be potentially transformed,” Charles says.
With AI innovation happening at such a rapid pace, it’s imperative that companies respond quickly to the misuse of these technologies.
“It's going to be necessary to work even harder to make sure that the guard rails are more robust,” Charles explains. “The feedback loop is faster.”
The speed at which the technology is advancing and the push to regulate AI appear to be moving in parallel. As Charles notes, although legal regimes take time to enact, regulators and governments are already moving forward and turning policy into law fairly quickly.
In Canada, as part of Bill C-27, The Digital Charter Implementation Act, 2022, the federal government introduced the Artificial Intelligence and Data Act (AIDA). That act has just passed second reading and is now before committee.
AIDA, as Canadian Lawyer explains, seeks to set out clear requirements for the responsible development, deployment and use of AI systems by the private sector. It aims to implement a regulatory framework to govern the responsible adoption of AI systems to limit harms such as the reproduction and amplification of biases and discrimination in decision-making.
As Charles says, Canada’s approach appears to sit between the EU’s more advanced regulatory regime for AI and the more litigation-focused path the United States is taking.
“[Canada is] always trying to ensure that we find a regulatory approach that means that we can interact with our major trading partners in a reasonably harmonious way,” says Charles.
This incoming regulation remains a relatively slow-moving process, however, as Canada doesn’t expect AIDA to be in effect until 2025. In the meantime, Charles helps clients implement responsible AI governance and navigate vendor management: he advises on setting up AI committees and on developing and implementing policies and responsible AI impact assessments, and he regularly helps companies negotiate contracts with vendors proposing AI-enhanced solutions.
For more information, read author Tim Wilbur’s article, “Massive interest in AI comes with significant responsibilities: McCarthy Tétrault’s Charles Morgan,” in Canadian Lawyer.
Charles has also written extensively on the topic of AI, including in the publication “Responsible AI: A Global Policy Framework (2021 Update)”, and other articles such as:
- “Artificial Intelligence, Law Over Borders Comparative Guide 2022”
- “Policy design principles to maximize people-centered benefits of digital identity (2022)”
- “Technology Governance in a Time of Crisis (2020)”
- “Responsible AI: A Global Policy Framework (2019 First Edition)”
Charles also addressed responsible AI governance and the practical ways that businesses can maximize opportunities while minimizing risk during McCarthy Tétrault’s recent Artificial Intelligence Law Summit. Read our Key Takeaways from the summit and contact Charles for any additional information.