Despite the dramatised fears of Skynet, GLaDOS, or even Wall-E, Artificial Intelligence (AI) has for some time been perceived as a policy ‘whitespace’ within Brussels and the EU. Indeed, Google, IBM, Intel, and DigitalEurope have already tried to put their broad brushstrokes on the landscape. In addition, with several upcoming events such as Politico’s €2,000 a ticket AI Summit on 19-20 March and the Greens/EFA Group’s Future of AI event on 7 March, and against a backdrop of the Commission Communication which is expected on 25 April, it would seem that this blank canvas is rapidly getting further splashes of colour.
However, I would argue that the EU’s approach to AI is in fact more of a ‘paint by numbers’ exercise. Across Commissioners’ speeches, Parliamentary amendments, and Council conclusions, there are some clearly defined policy topics facing the future development of AI; these policy considerations offer a degree of insight into how policymakers may tackle certain aspects of AI.
Ok Google, was that your fault?
When considering the risks of AI, discussion often finds itself diverted towards a scenario in which a self-driving car is about to crash, and whether the two passengers or five pedestrians should be protected. This morbid conundrum is layered with other complexities, such as age differences between the passengers and pedestrians, or whether the pedestrians could be crossing the road illegally. Ultimately, the question is: where do we assign blame when a self-learning, algorithm-based AI system makes a decision which causes physical damage to the user or a product?
In the Parliament’s non-legislative Resolution on AI and Robotics, led by MEP Mady Delvaux (S&D, LU), liability is singled out as an issue, especially with regard to cars. Delvaux contends that the EU Product Liability Directive can only cover manufacturing defects, and does not account for a ‘robot’s autonomy’. Proposed solutions include strict liability, a risk management approach, or limited liability accompanied by a compensation fund. In the Council, the General Approach on the proposed Free Flow of Data (FFOD) Regulation introduced a new Recital requesting work on liability for ‘decisions and actions taken without human interaction along the entire value chain of data processing’. While not offering such direct solutions as the Parliament, the Council advises the consideration of ‘responsibility transfers among cooperating services, insurance and auditing’ — a politically eloquent phrase which may give scope for a compensation fund, as directly called for by the Parliament.
On the side of the pen-holders, the Commission has less than enthusiastically argued that the aforementioned liability Directive ‘seems appropriate for AI’. In this vein, it has undertaken a review of the Directive and has informally stated that ‘further guidance’ could prove useful. Evidently, hard legal clarity does not seem to be the Commission’s immediate priority. Hopefully, sufficient space remains for industry to lead the way in determining who could be at fault, for example through contractual provisions.
Alexa, what was an accountant?
Unsurprisingly, MEPs from across the political spectrum have submitted questions about the impact of AI on the workforce or, as MEP Ivan Jakovčić (ALDE, HR) lovingly phrases it, the alleged ‘fact’ that AI ‘will put human resources out of jobs’. Meanwhile, Member States in the Council have paid less attention to the impact of AI on the workforce at EU level. This can be attributed to the fact that employment policy is primarily a national competence, and thus Member States are reluctant to give the Commission any leeway on the topic.
Given the lack of EU competence, the Commission’s proposed solution has been directed at re-skilling the workforce and reassuring policymakers. Indeed, in their responses to MEPs, the Commission cites evidence from the 2016 Employment and Social Developments in Europe report which suggests that new technologies in general could have a net positive effect on jobs at EU level, and references several Commission initiatives aimed at improving the digital skills of the workforce such as the new Skills Agenda for Europe, the European Pillar of Social Rights and the launch of a pilot action to support regions in industrial transition.
This positive narrative on the employment potential of AI also comes across in Delvaux’s non-legislative Resolution, which states that the ‘automation of jobs has the potential to liberate people from manual monotone labour allowing them to shift direction towards more creative and meaningful tasks’. A phrase that is unlikely to reassure the accountants who, according to an Oxford University paper, face a 94% probability of their jobs being automated.
Siri, what do you do with my data?
If the topic is tech and data is involved, data protection will always rear its head; for the Parliament, this concern is addressed under their ‘ethical considerations’. On the Commission’s side, the world-famous General Data Protection Regulation’s (GDPR) scope for ‘processing of personal data by automated means’ leads to the consideration that certain AI operations would need to respect the relevant data processing requirements.
Looking further ahead, Commission Vice-President Andrus Ansip has commented that the FFOD Regulation will help improve AI, and moreover that ‘access to data is vital’. In this respect, the Commission will ‘do more to unlock data’ later in 2018. This may cover the accessibility and re-use of public and publicly funded data, and also ‘explore’ the issue of privately held data that is of public interest. Further possible avenues are still being explored, as revealed in the early 2017 non-legislative Communication on Building a European Data Economy, which provides options such as:
- Guidance on incentivising businesses to share data
- Fostering the development of technical solutions for reliable identification and exchange of data
- Default contract rules
- Access for public interest and scientific purposes
- Data producer’s right
- Access against remuneration
It seems that whilst data protection is covered, the Commission now sees the exclusive ownership of these data sets as being of particular concern.
Cortana, what does your future hold?
Despite an apparent blank canvas, it is clear that policymakers have identified liability, job displacement, and data as the first numbers that need to be painted in. Admittedly, other parts of this tableau are becoming visible, including AI’s potential bias, moral and ethical aspects, military usage, and algorithms. In a measured assessment, the Commission has advised that there is ‘no early commitment on regulation at this stage’. Practically, this means that the upcoming ‘initiative’ on AI, scheduled for 25 April 2018, will most likely be non-legislative, for example a Communication. This will be designed to encourage investment in European AI and also to clarify the landscape for both European businesses and regulators. According to reports, the initiative is expected to be made up of three pillars focusing on socioeconomic impact, financing, and ethics.
Whilst concerns and speculation are more attracted to the rise of Skynet and a cataclysmic global AI hierarchy, the regulatory landscape is more likely to determine AI’s future. In this context, industry experts can provide first-hand information to European policymakers about the real-life practical applications of AI, be this in digital health, energy use, or safe self-driving cars. Here’s hoping that the discussions at Brussels’ upcoming AI events focus more on these practical examples and possible regulatory solutions, and not on the imminent rise of Wall-E.
As an aside, this Resolution has the best opening line from the Parliament, namely ‘whereas from Mary Shelley’s Frankenstein’s Monster to the classical myth of Pygmalion, through the story of Prague’s Golem to the robot of Karel Čapek, who coined the word, people have fantasised about the possibility of building intelligent machines, more often than not androids with human features’.