Europe boosts public funding for AI and makes ethics a priority
The law and ethics of artificial intelligence are increasingly attracting attention at the highest levels. Governments and industry recognize the immediate and practical potential of AI to fuel significant financial growth. In April 2018, the UK government and the European Commission (EC) each released their respective strategies on AI, with a significant commitment of public funds and a unique approach to managing the forthcoming technology revolution. The UK Policy Paper is entitled the “AI Sector Deal”, while the EC’s communication is entitled “Artificial Intelligence for Europe”. A month earlier, the French government released its 150-page report, “For a Meaningful Artificial Intelligence: Towards a French and European Strategy”, along with an announcement of €1.5 billion in R&D funding. Despite some differences between the strategies, they all share a similar concern for ethics governing AI development and the growth of a globally competitive European AI industry.
Foundation of the strategies
Data fuels AI. The European, French and British strategies recognize this. They aim to mobilize large amounts of data in their respective public agencies, while building on existing infrastructure and excellence in science and innovation. By making their data available to public and private researchers, the EC, France, and the UK are hoping to boost productivity and improve living standards for their citizens.
At the heart of the UK’s approach to AI is the country’s Industrial Strategy, a white paper published in November 2017 which sets out a long-term plan for the country to boost productivity and earning power. It creates Grand Challenges to put the UK at the forefront of the industries of the future, one of which is “growing the Artificial Intelligence and data driven economy”, which will “embed AI across the UK” to create thousands of good quality jobs and drive economic growth.
The EC’s strategy, aimed at boosting the EU’s competitiveness and ensuring trust based on European values, is founded on three pillars: (1) being ahead of technological developments and encouraging uptake by the public and private sectors; (2) preparing for socio-economic changes brought by AI; and (3) ensuring an appropriate ethical and legal framework. The approach builds on the April 10, 2018 signing of a Declaration of Cooperation on Artificial Intelligence by 25 European countries and has as its financial foundation R&D commitments within the existing Horizon 2020 research and innovation programme. AI can expect further EC support under the successor to Horizon 2020, Horizon Europe, which will come into effect in 2021.
The latest French report on AI follows from earlier government support for the technology under the French Ministry of Higher Education, Research and Innovation. In early 2017, that ministry released its report “France Intelligence Artificielle” (FR), which drew on submissions prepared by working groups convened to issue recommendations to the government. The latest report, with its funding commitments and tangible goals around data access, promoting world class research, preparing for the future of jobs, and emphasizing an inclusive and diverse application of AI, can be seen as an extension of the French government’s existing strategy.
Most recently, on July 5, 2018, France’s DATAIA and the UK’s Alan Turing Institute signed a co-operation agreement on AI and data, to facilitate collaborative research projects. This is yet another international partnership formed by France. In June 2018, President Macron and Prime Minister Trudeau made a commitment to engage experts across various areas of research to investigate and explore ethical issues related to AI.
France and the EC are both making significant funding commitments, with each strategy committing at least €1.5 billion to fund AI research and innovation, over the next two years in the EC’s case and four years in France’s. French President Macron announced France’s strategy and funding via a public address and an interview with the high-profile US technology magazine Wired, conveying the President’s own enthusiasm and hope for a homegrown AI industry. Both the French and EC strategies seek to spur investment from technology companies small and large; the EC anticipates that, if member states and the private sector make contributions similar to its own, total EU spending on AI could surpass €20 billion by the end of 2020.
The UK’s approach is to build upon public-private partnerships through its AI Sector Deal. This commitment between government and industry will draw on £950 million in support – £603 million of which is newly allocated funding – to make the UK a global leader in the future economy.
Private sector AI investments in China approached nearly €10 billion in 2016, while the US spent €18 billion in the same period. Silicon Valley still attracts significant AI talent from economies like France and the UK. Both countries are emphasising a concerted effort to retain homegrown talent and attract foreign experts otherwise drawn to California – and increasingly Beijing – by six-figure pay cheques and the promise of being at the forefront of the technology’s development.
A central component of all three strategies is ethics.
The EC has committed to developing AI ethics guidelines by the end of 2018 – which will incorporate values inherent in the Charter of Fundamental Rights of the European Union – and to identifying gaps in existing liability frameworks. A series of related proposals around individual rights and data will also be advanced under the rubric of the Digital Single Market Strategy. Coupled with increased protections for individual data rights under the EU-wide General Data Protection Regulation (GDPR) – which came into force on May 25, 2018 – the EU is counting on its citizens’ trust and acceptance of the technology to drive demand and spur growth.
France, meanwhile, is championing a central role for the state by committing to “opening the black box” – a reference to the obscurity inherent in AI decision-making algorithms – and establishing an AI ethics committee. This committee will lead public discussion on the technology in a transparent way and set benchmarks for resolving ethical matters with the technology’s applications. ‘Ethics by design’, the phrase du jour in French AI policy, also appears in the report and reflects the need for AI developers to be trained in ethics. It reflects the French government’s focus on maintaining accountability by keeping humans at the heart of decision-making with boundaries proposed for the use of algorithms in areas such as policing, banking, insurance, the courts, and defence (most notably, lethal autonomous weapons systems or “LAWS”).
The UK has established a Centre for Data Ethics and Innovation to advise on the ethical use of data; this centre is tasked with ensuring safe and ethical innovation. Alongside the centre, an AI Council is being convened as a central forum where industry, academia, and government leaders will come together to identify opportunities and issues, as well as the actions to address them. Supporting this council is a new Office for AI, a delivery body tasked with implementing the AI Sector Deal and the government’s overarching strategy in the area.
The Government of Canada has also announced significant investments in AI R&D. See our blog post, Artificial Intelligence: The Year in Review, for more information. Most recently, in June 2018, the Honourable Navdeep Bains, Minister of Innovation, Science and Economic Development, announced a national consultation in cities across Canada with business, academia, civil society and others on data and digital transformation. Carole Piovesan was invited to the first consultation with Minister Bains and his team, where issues regarding data ethics, access, disclosure and use were discussed.
The law and ethics of AI will continue to be important issues as AI is more widely used in business and society, and as AI technology becomes more sophisticated. McCarthy Tetrault is closely following these trends through our national Cybersecurity, Privacy and Data Management group.