Parlement européen, Strasbourg, France — photo by @internetztube

EU’s Strategic Bet on Ethical and Trustworthy Artificial Intelligence

A Closer Look at EU’s Work on AI Ethics in 2019

In this article, I will look more closely at one of the documents outlined below: the Ethics Guidelines for Trustworthy AI. Together, these documents form the current framework I have found so far regarding the AI strategy of the European Union. First I will explain the High-Level Expert Group on Artificial Intelligence (AI HLEG), and afterwards I will examine the ethics guidelines for trustworthy AI issued by the European Union.

  1. Artificial Intelligence for Europe (2018, April)
  2. Ethics Guidelines for Trustworthy AI (2019, April)
  3. Policy and investment recommendations for trustworthy Artificial Intelligence (2019, April)
  4. The European Alliance Assembly (2019, June)
  5. Liability for Artificial Intelligence (2019, November)

The Independent High-Level Expert Group on Artificial Intelligence

Both of the documents that I examine, the one related to ethics and the one on policy and investment recommendations, were put together by the Independent High-Level Expert Group on Artificial Intelligence (AI HLEG). Therefore, I thought it would be good to first explain what the AI HLEG is, including its role and its members. There is a page for the AI HLEG on the EU website.

1. Ethics Guidelines for Trustworthy AI

The document is split into three sections: the foundations of trustworthy AI, realising trustworthy AI, and assessing trustworthy AI. In some sense, you could say these cover what trustworthy AI is built on in terms of values, how we build it, and how we know whether what we have built is good or not. The guidelines outline that trustworthy AI should be (1) lawful, complying with applicable laws and regulations; (2) ethical, adhering to ethical principles and values; and (3) robust, from both a technical and a social perspective. If tensions arise between these components: “…society should endeavour to align them.”

The guidelines then list the seven key requirements of trustworthy AI, which rest on these three components:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Societal and environmental well-being
  7. Accountability

The Four Principles

It further outlines the four ethical principles on which these requirements are based: respect for human autonomy, prevention of harm, fairness and explicability.

Requirements of Trustworthy AI

Both systemic and individual aspects of these requirements matter.


“All other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required.”

Technical robustness and safety. According to the report, this requirement is closely linked to the principle of prevention of harm. AI systems must reliably behave as intended while minimising unintentional and unexpected harm, and this should hold even when the operating environment changes or other agents are present (one could relate this to AI safety in an operational sense). The physical and mental integrity of humans should be ensured. The report breaks the requirement down into four aspects.

Resilience to attack and security: AI systems need to be protected from hacking. This includes attacks targeting the data (data poisoning), the model (model leakage) or the underlying infrastructure, both software and hardware. An attacked AI system can be led to different decisions, or caused to shut down. Unintended applications and potential abuse by malicious actors should be taken into account, and steps should be taken to mitigate these.

Fallback plan and general safety: a fallback plan should be devised in case of problems. This could mean switching from a statistical to a rule-based procedure, or asking a human before continuing an action. Processes to clarify and assess the potential risks of AI systems across various application areas should be established, and safety measures must be treated proactively.

Accuracy: this concerns correct judgements, for example classifying information into the proper categories. A high level of accuracy is especially crucial in situations where the AI system directly affects human lives.

Reliability and reproducibility: both are critical to be able to scrutinise an AI system and to prevent unintended harms. Reproducibility describes whether an AI experiment exhibits the same behaviour when repeated under the same conditions; this enables scientists and policy makers to accurately describe what AI systems do. Replication files can facilitate the process of testing and reproducing behaviours.
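To make the fallback idea concrete, here is a minimal sketch, assuming a hypothetical classifier that returns a confidence score. The function names, threshold and rules are all illustrative assumptions, not from the guidelines: when confidence drops below the threshold, the system switches from the statistical model to a rule-based procedure, which can in turn defer to a human.

```python
# Minimal fallback sketch (illustrative only, not from the guidelines):
# a statistical model defers to a rule-based procedure, or to a human,
# whenever its confidence drops below a threshold.
import random

random.seed(42)  # fixed seed: repeated runs exhibit the same behaviour

CONFIDENCE_THRESHOLD = 0.8  # assumed value, tuned per application


def statistical_model(case: str) -> tuple[str, float]:
    """Stand-in for a trained classifier returning (label, confidence)."""
    confidence = random.random()
    return ("approve" if confidence > 0.5 else "reject", confidence)


def rule_based_fallback(case: str) -> str:
    """Deterministic rules used when the model is not confident enough."""
    return "reject" if "high-risk" in case else "ask_human"


def decide(case: str) -> str:
    label, confidence = statistical_model(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    # Low confidence: switch to the rule-based procedure, which may
    # itself escalate to a human before the action continues.
    return rule_based_fallback(case)


print(decide("routine application"))    # model output or fallback
print(decide("high-risk application"))  # rule can force rejection
```

Fixing the random seed at the top is a small nod to the reproducibility aspect as well: run twice under the same conditions, the sketch makes the same decisions.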

To recap, the seven requirements are: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) societal and environmental well-being; and (7) accountability.

In addition, the report stresses the importance of evaluation and justification throughout the life cycle of an AI system: within its use, analysis, development and re-design.


Written by

AI Policy and Ethics at www.nora.ai. Student at University of Copenhagen MSc in Social Data Science. All views are my own. twitter.com/AlexMoltzau
