To Lead in AI, House of Lords Urges UK to Stay Nimble, Focus on Ethics, and Look to Canada
Earlier this year, the House of Lords published its evidence-based report, “AI in the UK: Ready, Willing, and Able”. Informed by extensive testimony from witnesses across the UK, the Report offers conclusions on the economic, ethical and social impacts of AI advancements, and sets out recommendations for a focused approach to the development of AI in the UK.
Rather than broadly aim to be a global leader in every area of AI, the Lords advised that the UK “forge a distinctive role for itself as a pioneer in ethical AI”, since a targeted approach would allow the country to be “nimble, clever and move quickly” on the world stage. The Lords concluded that a cross-sector “AI code” of ethics should be created, and the Report received a comprehensive response from the Government two months after publication.
The ethics-forward strategy emerges throughout the Report, and is particularly central to the Lords’ advice on building intelligible AI, ensuring AI algorithms learn from representative datasets, clarifying legal liability in AI contexts, and guiding AI to help, rather than hinder, productivity and the job market.
Ensure AI is understandable
One of the Report’s most sweeping and decisive recommendations focuses on the importance of designing intelligible and transparent AI. The Lords are clear that AI systems should be used to make decisions with a significant impact on people’s lives only to the extent that such decisions can be thoroughly explained.
The newer generation of deep learning systems, known as “black box” systems, is said to be so complex that even developers may not know how or why a system arrived at a particular decision. While some witnesses argued that intelligibility requirements would restrict AI from reaching its full potential in certain critical domains, others raised concerns that unintelligible black box systems would unfairly prevent people from questioning or contesting consequential automated decisions.
The Lords firmly took the position that “it is not acceptable to deploy any AI system which could have a substantial impact on an individual’s life, unless it can generate a full… explanation for the decisions it will take”, and clarified that AI should remain intelligible “even at the potential expense of power and accuracy”. This view sees intelligibility as a precondition to public trust in AI, which is essential to AI being embraced as an integral and useful tool for society.
In its response, the Government took the view that overemphasizing transparency may deter the development of deep learning. In healthcare settings in particular, the Government was clear that the likely benefits of AI technologies must be carefully weighed against broad transparency requirements.
Ensure AI decision-making is unbiased
The Report highlights that many AI systems learn from biased or unrepresentative datasets, which may lead to prejudiced decisions that perpetuate societal inequalities. Consider a job application screening system that bases decisions on the characteristics of past candidates. If human interviewers consciously or unconsciously declined applications in the past based on characteristics like age, sexual orientation, or ethnicity, the AI system learning from those datasets would inevitably make decisions “in a way which would be deeply unacceptable today”. The potential for discrimination is exacerbated by black box systems whose decision-making process cannot be thoroughly explained.
To mitigate prejudicial decision-making by AI, the Lords urge researchers and developers to ensure that data is collected and processed in a way that promotes the fair representation of all members of society. This includes gathering data from individuals who lack the financial resources to incorporate data-producing technology into their daily lives, and building more diverse data production teams to collect and process information.
The Government agrees that automated decisions based on data reflecting structural inequalities carry a risk of bias. It affirms that it will work to ensure structural inequalities are not perpetuated by the proliferation of machine learning, and will seek to retain diverse talent in the AI workforce.
Ensure AI is accountable
The lack of clarity on the interaction between AI and legal liability is also flagged: who is at fault when AI causes harm?
The Lords note that AI systems may malfunction or make decisions that result in injury, and that legal challenges arise “when an algorithm learns or evolves on its own accord”. Concepts such as standards of care and reasonable foreseeability that animate common law approaches to liability are difficult to reconcile with decisions made by autonomous machine-learning systems. The Report calls on the Law Commission to clarify legal liability in AI contexts by reviewing the adequacy of current legislation and making recommendations to government where legal shortcomings exist.
In terms of AI regulation more broadly, the Lords conclude that blanket AI-specific legislation is not appropriate. The nascence of AI technology, coupled with its rapidly evolving nature, makes it difficult to craft an all-encompassing regulatory regime that would safeguard fairness without stifling innovation. Rather, the Lords conclude that existing sector-specific regulators are best situated to determine the impact on their sectors of any potential AI regulation that may be needed.
The Government agrees that the Law Commission should be engaged on legal liability issues as appropriate. It also agrees that a blanket AI regulation is not appropriate, and cites its existing commitments as outlined in its Industrial Strategy to work with businesses on developing an agile approach to regulation.
Ensure AI benefits the job market
To realise the potential productivity gains to be accrued from AI in the workforce, the Lords emphasize that companies of all sizes must be able to benefit from new technologies. The Government should play a central role on this front through its procurement processes. Procurement could ensure greater uptake of AI in the public sector, and encourage solutions to public policy challenges through “limited speculative investments” that help businesses convert ideas to prototypes.
Greater productivity as a result of AI raises the question of whether there will be a net decrease in jobs as automation increases, an upswing in jobs due to new AI-related positions being created, or a net neutral impact on employment. The Lords concede that the future of the labour market remains uncertain, but recommend that both industry and Government work to support a National Retraining Scheme to help people “re-skill and up-skill” as the economy adapts to AI.
The Government agrees that broad access to the benefits of AI is a priority, and points to significant infrastructure investments it has made on this front. It also states that it will take into account the findings of the research on responsible public procurement that was set into motion at Davos 2018, in order to “equip public sector procurers with the tools they need to consider AI solutions when sourcing digital solutions”.
Moving forwards: Look to Canada for effective AI strategy
In assessing the benefits of a targeted approach to excellence in AI, the Lords commented that they “were greatly impressed by the focus and clarity of Canada[’s] national strategy” for attracting talented AI researchers, and flagged the 2017 Pan-Canadian AI Strategy for its goal of establishing “nodes of scientific excellence in Canada’s three major centres for AI in Edmonton, Montreal, and Toronto-Waterloo”.
The Lords urged the UK to follow a similarly focused strategy, and advised the UK’s new national AI research centre (the Alan Turing Institute) to learn from and build relationships with AI centres in Canada.
The Report’s recommendations – particularly regarding the importance of AI being intelligible, unbiased, and widely-accessible – also resonate with the approach set out in the Montreal 2018 G7 Innovation Ministers’ Statement on AI, which focuses on “the interconnected relationship between supporting economic growth from AI innovation; increasing trust in and adoption of AI; and promoting inclusivity in AI development and deployment”.
As AI continues to expand its reach across domestic and international markets, the UK – like Canada – has made clear its intention to capitalize on the benefits of AI, while remaining responsive to the impacts of this complex technology on business and on society more broadly.