US Lawmakers propose Algorithmic Accountability Act intended to regulate AI

Overview

In April 2019, Democratic lawmakers in the U.S. House of Representatives and the U.S. Senate introduced the Algorithmic Accountability Act (the “Act”). The proposed legislation, intended to regulate artificial intelligence (“AI”) and machine learning systems, emphasizes the ethical, non-discriminatory use of such systems, and addresses their impact on data security and privacy. 

The Proposed Legislation

The Act would require companies to consider the “accuracy, fairness, bias, discrimination, privacy and security” of their AI tools and systems. It takes aim at two data-processing issues: the ethical use of machine learning systems, and the privacy and security of sensitive user data more generally.

The Act’s scope is broad: companies with over $50 million in average annual gross receipts, companies that hold personal information on at least 1 million individuals or their devices, and companies that act primarily as data brokers would all fall under the Act’s purview. The Act would clearly apply to large tech companies with access to significant amounts of information, no matter their industry.

Rather than applying an industry- or technology-specific lens, the Act aims to regulate machine learning process design generally, by requiring that companies proactively assess the potential impacts of their AI systems. This broad scope is intended to capture a wide variety of tools, systems, and issues, and it raises numerous questions worthy of consideration and debate.

Specifics of the Legislation

With respect to machine learning systems, the first part of the Act seeks to minimize discrimination by addressing algorithmic bias, whereby human bias and discrimination become ingrained in “black box” machines. The potential for algorithmic bias exists within many AI systems because machine learning models generally draw on large data sets to generate probabilistic outcomes based on historical data and past results. Where previous outcomes have reflected discrimination, a machine learning system may, by design, continue to propagate those biases. Biased AI systems generally cannot self-identify or correct ingrained biases. To address this issue, the Act proposes to grant the Federal Trade Commission new powers to require companies to audit their machine learning systems for bias and discrimination. A violation of the Act would be treated as an “unfair or deceptive act or practice” under the Federal Trade Commission Act.

The second part of the Act addresses processes involving sensitive data. It would require such processes to be audited for privacy and security risks by way of “impact assessments”. Processes involving sensitive data would include any process that creates a “significant risk” to the privacy or security of personal information or that uses the personal information of “a significant number” of consumers. Processes that systematically monitor large, publicly accessible physical places would also be included.

The Act in U.S. Context

This proposed Act is only one plank of a broad, emerging U.S. strategy aimed at regulating AI and tech companies. In February 2019, President Trump signed an executive order specifically prioritizing AI regulation across various technologies and industries. The Executive Order’s stated aim is to “unleash AI resources … while maintaining … safety, security, civil liberties, privacy, and confidentiality protections”. The order also tasked the National Institute of Standards and Technology with leading the development of technical standards for “reliable, robust, trustworthy, secure, portable, and interoperable AI systems”.

U.S. government agencies have also begun implementing their own regulations. For instance, the U.S. Department of Defense, prompted by the Export Control Reform Act of 2018, is exploring whether to classify AI and machine learning systems as “emerging and foundational technologies”. If so classified, these technologies could be subject to export controls and to restrictions on sharing them with foreign nationals. The U.S. Defense Advanced Research Projects Agency has identified replacing “black box” systems with “explainable AI” as one of its goals. Finally, in April 2019, the U.S. Food and Drug Administration (the “FDA”) released a discussion paper seeking feedback for forthcoming FDA draft guidance addressing the regulation of AI-based medical devices.

Lessons to Draw and Questions Raised

The application of the law and related regulatory sanctions currently contemplated by the Act would not be tailored to the specific industries using AI systems. For example, the concerns around public access to algorithms and the explainability of outcomes in autonomous weapons systems deployed by the U.S. Department of Defense are likely to be very different from those raised about clinical decision software implemented in family health clinics. Yet both stand to be captured under the same requirements of the Act.

If enacted, the U.S. legislation will surely offer useful lessons to other jurisdictions as they determine whether top-down, broad regulations are the most effective way to promote and ensure “ethical” AI. It remains to be seen whether Canada will move in the same direction as the U.S., or will instead continue to rely on the self-regulation of tech companies until the field matures.

For more information, please see McCarthy Tétrault’s previous related commentary on current Canadian and international AI trends, the Government of Canada’s Directive on Automated Decision-Making for federal government departments, as well as AI-related ethical considerations in the Canadian context.
