Building Trust Into COVID-19 Recovery: Moving Beyond Privacy

On June 25, 2020, our “Building Trust Into COVID-19 Recovery” series returned with “Moving Beyond Privacy: Multi-factor risk impact assessments for the responsible deployment of advanced technologies”. Charles Morgan (co-leader of both McCarthy Tétrault’s Information and Technology Law Group and Cyber-security, Privacy and Data Protection Group) sat down with Cathy Cobey (EY’s Global Trusted AI Advisory Leader) to discuss the use of Artificial Intelligence (“AI”), particularly in responding to the challenges raised by the pandemic.

The following are a few key takeaways from this discussion:

  1. What is blocking the growth of AI?

Studies have shown that in recent years, the rate of adoption of new technologies has been hindered by a lack of confidence in the quality and trustworthiness of AI technologies and the data sets they use. To remove these roadblocks, organizations should focus on building trust with external stakeholders. One way to build trust is to implement governance frameworks that provide assurances about the use of AI technologies and data, ensuring that their use produces a net positive impact on society. These assurances are intended to inform stakeholders about the explainability, transparency, performance, accountability and good governance of AI technologies and the data they rely on. Overall, an organization’s ability to clearly communicate how it uses AI technologies and user data will be critical to establishing consumer trust and fostering the adoption of AI.

  2. Why are governance frameworks integral to AI adoption?

Unlike traditional technologies, which are built on rules-based models, AI technologies are built on probability-based models that use incomplete information to predict the best outcome. Given the speed and dynamism with which AI technologies operate and evolve (particularly those that use machine learning), they can produce outcomes that are biased or potentially harmful. To avoid these risks, organizations must use governance models that are sufficiently flexible and agile to respond to the dynamic nature of AI. Without a robust governance structure, organizations may be unable to respond promptly to unintended outcomes, and may instead be forced to constantly monitor, redevelop and retrain their AI technologies (which can carry an exorbitant price tag). Agile governance frameworks build in a supervisory process that evolves alongside AI technologies and ensures that the assurances made to stakeholders are maintained.
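
To make this distinction concrete, here is a minimal Python sketch (the lending scenario, thresholds and weights are invented purely for illustration) contrasting a deterministic rules-based check with a probability-based prediction of the kind discussed above:

```python
import math

# Rules-based logic: fixed, auditable conditions that always produce the
# same, explainable result for the same inputs.
def rules_based_approval(credit_score: int, income: float) -> bool:
    return credit_score >= 650 and income >= 40_000

# Probability-based logic: a toy logistic model that estimates a likelihood
# from weights learned on (incomplete) historical data, so its behaviour can
# drift or encode bias as the underlying data changes.
def probability_based_approval(features: list[float], weights: list[float]) -> float:
    score = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-score))  # probability in [0, 1]

print(rules_based_approval(700, 55_000))                    # True, always
print(probability_based_approval([0.7, 0.55], [1.2, 0.8]))  # ~0.78, data-dependent
```

The rule returns a yes/no that anyone can audit; the model returns only an estimated likelihood, which is precisely why its outputs need ongoing governance.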

  3. How can organizations develop an agile AI governance framework?

A good AI governance model should identify problems as they arise, and should be able to distinguish whether those problems stem from design, data, algorithm or performance issues. Further, when remediating identified problems, a good agile AI governance model should uphold the assurances that the organization has made to its stakeholders (i.e., that the AI technology will deliver high-performing, unbiased, resilient, explainable and transparent results). To build such a framework, organizations should borrow from existing standards and controls, such as change management frameworks and privacy impact assessments. Ultimately, an effective agile AI governance framework should create structures that mitigate risk by monitoring, responding to and optimizing AI models so that outcomes remain low-risk.
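
By way of illustration only, the following Python sketch (all metric names and thresholds are hypothetical, and it covers a simplified subset of the issue sources above) shows how such a supervisory process might triage an observed problem to its likely source:

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values would come from an organization's
# governance framework and risk appetite.
ACCURACY_FLOOR = 0.90
DRIFT_CEILING = 0.15
FAIRNESS_GAP_CEILING = 0.05

@dataclass
class ModelHealth:
    accuracy: float       # performance on recent labelled data
    data_drift: float     # divergence of live inputs from training data
    fairness_gap: float   # outcome-rate gap between demographic groups

def triage(health: ModelHealth) -> str:
    """Route an observed problem to its likely source: data, bias or performance."""
    if health.data_drift > DRIFT_CEILING:
        return "data issue: refresh training data and revalidate"
    if health.fairness_gap > FAIRNESS_GAP_CEILING:
        return "algorithmic bias: review features and retrain with constraints"
    if health.accuracy < ACCURACY_FLOOR:
        return "performance issue: retrain or roll back the model"
    return "healthy: continue routine monitoring"

# Usage: a scheduled monitoring job feeds fresh metrics into the triage step.
print(triage(ModelHealth(accuracy=0.93, data_drift=0.22, fairness_gap=0.02)))
```

Routing each problem to a named remediation path is what lets the supervisory process respond promptly rather than relying on wholesale redevelopment.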

  4. What agile AI governance frameworks are available?

One agile AI governance framework that organizations can adopt is the multi-factor impact assessment framework, which, in relation to AI technologies, addresses: (a) ethical purpose and societal benefit; (b) accountability; (c) fairness and non-discriminatory results; (d) transparency and explainability; (e) reliability and security; (f) fair competition through the use of open data and intellectual property; and (g) privacy. Such frameworks are especially useful in light of COVID-19, which has accelerated the public’s demand for solutions to the pandemic, including solutions that rely on AI technologies. Examples include AI-assisted facial recognition and temperature screening of consumers as they walk through stores, used to detect and slow the spread of COVID-19, and AI-based contact tracing tools that help ensure individuals comply with public health requirements for re-opening. For these technologies to succeed and to move us into a post-pandemic world, however, their operators must ensure that they are deployed ethically.
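
As a rough sketch of how the seven factors above might be operationalized as a checklist (the 1–5 scoring scale and passing threshold are assumptions for this example, not part of the framework itself):

```python
# The seven factors come from the multi-factor impact assessment described
# above; the scoring scale and threshold below are invented for illustration.
FACTORS = [
    "ethical purpose and societal benefit",
    "accountability",
    "fairness and non-discriminatory results",
    "transparency and explainability",
    "reliability and security",
    "fair competition (open data and intellectual property)",
    "privacy",
]

def assess(scores: dict[str, int], passing_score: int = 3) -> list[str]:
    """Return the factors scored (1-5) below threshold; any hit flags the
    proposed deployment for further review before it goes ahead."""
    return [f for f in FACTORS if scores.get(f, 0) < passing_score]

# Usage: score each factor for a proposed deployment, e.g. a contact-tracing app.
example = {f: 4 for f in FACTORS}
example["privacy"] = 2  # privacy safeguards judged insufficient
print(assess(example))  # ['privacy']
```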

To see a recording of the webinar presentation, please click here.

For more information about our firm’s expertise, please see our Technology and Cybersecurity, Privacy & Data Management pages.
