Why are the AI Ethics Guidelines important?
In its work programme for 2019, the European Commission stated that it wants to be the “effective standard-setter and global reference point on issues such as data protection, big data, artificial intelligence and automation”. The values enshrined in the GDPR are already shaping the global economy and the laws of other countries (see, for example, California’s Consumer Privacy Act). It is clear that the European Commission has the same ambitions for its AI-related initiatives. In fact, it has already been reported that Brussels’ “ethics-first approach has already attracted attention from outside Europe, including Australia, Japan, Canada and Singapore”.1
No other technology raises ethical concerns (or even outright fear) and technical challenges quite like artificial intelligence. For example, how should a music recommendation engine react to a user who is depressed or even suicidal (assuming the system in question can detect this) and who chooses to continue listening to melancholic music? Would it be acceptable for the machine to refuse to play the desired music, or should it nudge the user towards something more upbeat? As humans delegate more and more decisions to machines, there is serious concern about what this will mean for human autonomy and well-being.
Take another, more abstract, example: how should an AI solution handle the infamous ‘trolley problem’? A trolley is heading down railway tracks towards five people (tied up and unable to move) but can be diverted to a different track where it will kill just one person. If the machine does nothing, five people die; if it acts, one dies. There are many variations of this problem (for example, replacing the trolley with an autonomous vehicle), but the fundamental questions are the same: (i) how should AI-powered solutions be programmed to uphold ethical values; (ii) who should decide, and how, what those values are; (iii) who should be liable for the AI agent’s decisions; and (iv) should a human be able to step in and exercise a degree of oversight?
The seven requirements
The requirements put forward in the guidelines aim to provide a framework for analysing and discussing the above-mentioned (and many other) issues. The seven requirements are:
- Human agency and oversight, including evaluating AI systems in the context of fundamental rights, ensuring that users are able to make informed decisions regarding such systems and providing appropriate governance mechanisms;
- Technical robustness and safety, including resilience to attack and security, fallback plans and general safety, and the accuracy, reliability and reproducibility of results;
- Privacy and data governance, including respect for privacy, ensuring quality and integrity of data and appropriate controls on access to data;
- Transparency, including traceability and explainability to enable identification of reasons why an AI decision was erroneous;
- Diversity, non-discrimination and fairness, including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation;
- Societal and environmental well-being, including sustainability and environmental friendliness, social impact, society and democracy;
- Accountability, including auditability, minimisation and reporting of negative impacts, trade-offs and redress mechanisms.
The document is supplemented by a non-exhaustive Trustworthy AI assessment list (pilot version) that is intended to assist stakeholders in operationalising Trustworthy AI.
Practical considerations
Once an AI system takes a decision, it is important to understand why and how that decision was reached. Without such information, the decision cannot be duly contested. The guidelines emphasise that explicability is crucial for building and maintaining users’ trust in AI systems. However, the desire for algorithmic transparency will need to be balanced against keeping information about business models and intellectual property related to the AI system confidential. The document acknowledges that the current state of the art is not, for example, capable of fully explaining the decisions of AI systems that rely on neural networks. Some are also sceptical of the value of such explanations: the often highly complex nature of the underlying data used in AI systems means that a meaningful explanation of their actions is likely to be incomprehensible to a layperson. Nevertheless, safety-critical applications and those deployed in heavily regulated industries (such as healthcare or financial services) may in future need to be auditable and explainable.
The issues raised and the ethics-first approach enshrined in the guidelines are likely to influence the development of the European AI sector. In the future, AI-powered chatbots and other user-facing tools might need to carry ‘AI-powered’ disclaimers or notices akin to cookie consent banners. An entire sector focused on assessing and certifying the trustworthiness of AI solutions may emerge, with new certifications (“made in Europe”, etc.) or perhaps even Trustpilot-style websites for comparing AI-based solutions such as image-recognition software, recommendation engines or instant translators. Internal governance structures will also need to adapt to accommodate the complexity and pervasive nature of AI technology. Corporate governance policies already cover items such as privacy notices, data retention policies and supply chain standards; there is every reason to believe that AI-related policies and procedures will become standard business practice.
What comes next?
The expert group clearly engaged with the 500+ responses received during the consultation process, many of which highlighted the need for a tailored approach to different use cases and the difficulty of achieving full explainability of AI systems. When the draft guidelines were first released in December 2018, the final version was intended to include a mechanism allowing stakeholders to voluntarily endorse them. Instead, a pilot phase is being set up to gather practical feedback on how the assessment list, which operationalises the key requirements, could be improved. The pilot will commence in summer 2019, and interested stakeholders are invited to register their interest via the European AI Alliance. Feedback from the pilot will be used to re-evaluate the assessment list in early 2020, and based on that review the Commission will propose any next steps.
In the meantime, the same expert group that produced the guidelines is working on its second deliverable, the AI Policy and Investment Recommendations. That document, set to be released in May 2019, will shed more light on which laws may need to be revised, adapted or introduced to accommodate the rapid progress and widespread use of AI technologies.
1 Mehreen Khan and Madhumita Murgia, “EU publishes guidelines on ethical artificial intelligence”, Financial Times, 8 April 2019, www.ft.com.
Client Alert 2019-107