Building Trust Into COVID-19 Recovery: Ethical Data

The third installment of our “Building Trust Into COVID-19 Recovery” series took place on June 19, 2020. Christine Ing (McCarthy Tétrault, Partner and Co-Leader of the Technology Law Group and FinTech Group) sat down with Jeff Lui (Deloitte, Director of Artificial Intelligence) to discuss AI technologies and the ethical use of large data sets with those technologies. The rapid proliferation of AI has produced many use cases that create industrial efficiencies and can yield a net benefit for economic progress. However, whether intended or not, the use of AI also presents challenges and unintended consequences that may harm or disenfranchise certain populations. Jeff described the key principles, drivers and frameworks that should guide the development of AI technologies, and of the data sets fed into them, in order to mitigate such harmful consequences. Below are our key takeaways from the presentation:

  1. AlphaGo: To illustrate, Jeff touched upon AlphaGo, a computer AI program designed to play the game of Go against human players. In a widely televised event, AlphaGo played against Lee Sedol, who, at the time, was widely considered the best Go player in the world. Interestingly, AlphaGo made moves that professional Go analysts initially considered poor. Upon reflection, however, the analysts realized that the program had developed an entirely new way to win the game. AlphaGo went on to win 4 out of 5 games against Lee Sedol, prompting Lee to apologize to the public “for being so powerless”. The example highlights that AI may enable new ways of thinking (such as finding new strategies in Go), but could also harm individuals (such as, albeit relatively minor, the humiliation Lee Sedol felt after losing to AlphaGo).
  2. New Way of Thinking: The AlphaGo example also highlighted nuances between how humans and AI systems make decisions. Humans generally use a cognitive decision-making framework, in which decisions are based on environments and inputs, and on criteria built up recursively through personal experience. AI systems, particularly those that rely on machine learning, take a different approach: they formulate selection criteria from large data sets of inputs and outcomes. As a result, AI systems sometimes formulate criteria, and render autonomous judgments, that do not account for the qualitative factors humans weigh in their own decision-making, such as whether a criterion will enable malicious activity or cause harm. Both developers and users of AI systems should therefore scrutinize their use of such systems, and the data fed into them, to ensure that AI is used responsibly and ethically.
  3. AI Governance: To prevent unintended harm, organizations that wish to adopt AI should develop and implement frameworks for responsible and ethical AI use, addressing the justification for, and the ethos, fairness and safety of, that use. Because autonomous AI systems have no inherent sense of judgment, these frameworks should instill ethics and responsibility into the development and use of AI systems by placing constraints and rules around how the systems are developed and what data may be used with them. However, notions of ethics and responsibility are not consistent across cultures or individuals, and careful consideration should be given, when developing a framework, to mitigating unintended biases.
  4. Diversity: Jeff used Google’s image search function to highlight some of the challenges of unintended bias in AI systems: a search for “CEO” previously returned male-dominated results, which, from a gender perspective, were not representative of the global pool of CEOs (a simplified sketch of how skewed data produces this dynamic follows this list). The example emphasizes the need for diversity when developing AI frameworks, so that the underlying ethics and responsibility are founded on morals and principles that are consistent with, and representative of, the relevant cultures and societies. As part of the webinar, Christine and Jeff shared their thoughts on the development of ethical frameworks and some steps organizations and governments can take to mitigate potential biases and risks.
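
To make points 2 and 4 concrete, here is a minimal Python sketch, our own illustration rather than anything presented in the webinar, of how a system that derives its selection criterion purely from a skewed data set will reproduce that skew. The data set and the counting “model” are deliberately simplistic stand-ins for a real machine-learning pipeline:

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: (attribute, label) pairs.
training_data = [
    ("male", "ceo"),
    ("male", "ceo"),
    ("male", "ceo"),
    ("male", "ceo"),
    ("female", "ceo"),
]

# "Training": find which attribute value co-occurs most often with the
# label "ceo". This counting step is a toy stand-in for the statistical
# criteria that machine-learning systems derive from inputs and outcomes.
counts = Counter(attr for attr, label in training_data if label == "ceo")
learned_criterion, _ = counts.most_common(1)[0]

# The "model" now associates the query "ceo" primarily with "male", not
# because anyone wrote that rule, but because the training data was skewed.
print(learned_criterion)  # prints: male
```

The point of the sketch is that the remedy lies in the data and the governance around it, not in the code: a more representative data set, or an explicit constraint of the kind described in point 3, changes what the system learns.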

To see a recording of the webinar presentation, please click here.

For more information about our firm’s expertise, please see our Technology and Cybersecurity, Privacy & Data Management pages.
