Parlement européen Strasbourg, France — photo by @internetztube

EU’s Strategic Bet on Ethical and Trustworthy Artificial Intelligence

A Closer Look at EU’s Work on AI Ethics in 2019

Alex Moltzau
13 min read · Dec 24, 2019


In this article, I will look at the documents outlined in bold below. This is the current framework I have found so far with regard to the AI strategy of the European Union. First I will explain the High-Level Expert Group on Artificial Intelligence (AI HLEG), and afterwards I will examine the ethics guidelines for trustworthy AI issued by the European Union.

  1. Declaration of cooperation on Artificial Intelligence (2018, April)
  2. Artificial Intelligence for Europe (2018, April)
  3. Ethics Guidelines for Trustworthy AI (2019, April)
  4. Policy and investment recommendations for trustworthy Artificial Intelligence (2019, April)
  5. The European Alliance Assembly (2019, June)
  6. Liability for Artificial Intelligence (2019, November)

The Independent High-Level Expert Group on Artificial Intelligence

Both of the documents that I examine, the one on ethics and the one on policy and investment recommendations, were put together by the Independent High-Level Expert Group on Artificial Intelligence (AI HLEG). Therefore, I thought it would be good to first explain what the AI HLEG is, its role and its members. There is a page for the AI HLEG on the EU website.

“Following an open selection process, the Commission has appointed 52 experts to a High-Level Expert Group on Artificial Intelligence, comprising representatives from academia, civil society, as well as industry.”

The group's general objective is to support the implementation of the European Strategy on Artificial Intelligence. This covers policy development as well as ethical, legal and societal issues related to AI, including socio-economic challenges. Since its creation, the EU states that the group has delivered the following two documents, which I will go through.

The AI HLEG is also the steering group for the European AI Alliance, a multi-stakeholder forum for engaging in a broad and open discussion of all aspects of AI development and its impact on the economy and society. There was a European AI Alliance Assembly in June 2019, and it is possible to watch the entire conference, or at least what was discussed, online.

The focus of the assembly was on discussing investment and ethics. There is an ongoing piloting process whose learnings may lead to additional documents in the coming year, or at least to information released internally to participating members.

The European AI Alliance is a forum that engages more than 3000 European citizens and stakeholders in a dialogue on the future of AI in Europe.

You can register online to join at Futurium. Once your Futurium account is created, you will be able to fill in the online registration form to join the European AI Alliance.

All the members of AI HLEG are publicly available online.

1. Ethics Guidelines for Trustworthy AI

The document is split into three sections: foundations, realising and assessing trustworthy AI. In some sense you could say these cover what trustworthy AI is built on in terms of values, how we build it, and how we know whether what we have built is good or not. They outline that trustworthy AI should be (1) lawful, complying with applicable laws and regulations; (2) ethical, adhering to ethical principles and values; and (3) robust, from a technical and social perspective. If tensions arise between these components: “…society should endeavour to align them.”

One should develop, deploy and use AI systems in a way that adheres to the ethical principles of respect for human autonomy, prevention of harm, fairness, and explicability.

Tensions between these principles should also be resolved when they appear. Situations involving vulnerable groups should be prioritised; within this consideration we find, for example, children, persons with disabilities, and asymmetries of power (employee/employer and business/consumer). While bringing benefits, AI systems pose certain risks, and some effects might be hard to measure, such as the impact on democracy, the rule of law and the human mind. Measures have to be taken to mitigate these risks.

There are seven requirements that AI systems should meet through both technical and non-technical methods.

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Environmental and societal well-being
  7. Accountability

Technical and non-technical methods have to be considered to ensure the implementation of those requirements: fostering innovation, communicating in a clear manner to stakeholders, and facilitating the traceability and auditability of AI systems. Adopting a trustworthy AI assessment list can be useful, adapting it to specific cases while keeping in mind that such lists are not exhaustive.

In short, according to the report there are three components of trustworthy AI:

  • Lawful
  • Ethical
  • Robust

Each of the three is necessary, but not sufficient on its own.

Ideally, all three work in harmony and overlap in their operation. In practice, however, there may be tensions between these elements (e.g. at times the scope and content of existing law might be out of step with ethical norms). It is our individual and collective responsibility as a society to work towards ensuring that all three components help to secure Trustworthy AI.

They talk of this in the report as ‘responsible competitiveness’ in a global framework. Stakeholders can voluntarily use these guidelines as a method to operationalise their commitment. They argue that different situations raise different challenges (a music recommendation system vs. critical medical treatment), so the guidelines have to be adapted to different situations. As mentioned, people are invited to pilot the Trustworthy AI assessment list that operationalises this framework.

These Guidelines articulate a framework for achieving Trustworthy AI based on fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (EU Charter), and in relevant international human rights law. Below, we briefly touch upon Trustworthy AI’s three components.

(I) Lawful AI: AI does not operate in a lawless world. It is important to consider EU primary law: the Treaties of the European Union and its Charter of Fundamental Rights. Additionally, there is EU secondary law such as the General Data Protection Regulation (GDPR), the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and safety and health at work Directives; the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights); and numerous EU Member State laws. Various domain-specific laws apply as well. The guidelines do not mainly deal with these, and none of the text should be regarded as legal advice.

(II) Ethical AI: laws are not always up to speed with technical developments, and can be out of step with ethical norms or not suited to addressing certain issues.

(III) Robust AI: individuals and society must be confident AI systems will not cause any intentional harm. Systems should perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts. This is needed both from a technical and social perspective.

A model in the guideline document is used to display this overall approach.

The report frames AI ethics as a subfield of applied ethics relating it to the EU Agenda 2030. It speaks as well of building an ethical culture and mind-set through public debate, education and practical learning.

The fundamental rights mentioned are the (1) respect for human dignity; (2) freedom of the individual; (3) respect for democracy, justice and the rule of law; (4) equality, non-discrimination and solidarity; (5) citizens’ rights.

The Four Principles

It further outlines the four principles mentioned earlier.

Human autonomy: following human-centric design principles and leaving options for meaningful human choice and human oversight over work processes in AI systems. It should aim for the creation of meaningful work.

Prevention of harm: AI systems should not cause or exacerbate harm to human beings, so attention must be paid to systems where asymmetries of power or information can arise. Preventing harm also entails consideration of the natural environment and all living beings.

Fairness: development and deployment must be fair. This has both a substantive and a procedural dimension. It should increase societal fairness and equal opportunity while balancing competing interests and objectives. In order to seek redress against decisions, the entity accountable for the decision must be identifiable and the process of making the decision should be explicable.

Explicability: processes need to be transparent, capabilities communicated, and decisions explainable to those directly and indirectly affected. According to the report, an explanation is not always possible (the so-called ‘black box’ cases); in such cases other measures may be required (traceability, auditability and transparent communication on system capabilities). This is dependent on the context and the severity of the consequences.

Different stakeholders have different roles to play:

a. Developers should implement and apply the requirements to design and development processes;

b. Deployers should ensure that the systems they use and the products and services they offer meet the requirements;

c. End-users and the broader society should be informed about these requirements and able to request that they are upheld.

Requirements of Trustworthy AI

Both systemic and individual aspects matter in these requirements.

These different aspects are described in detail within the report. Within each requirement there is a breakdown of sub-requirements, or perhaps keywords to consider.

Human agency and oversight. Systems should support human autonomy and act as enablers of a democratic and equitable society by supporting users’ agency. Fundamental rights can be supported, for example by letting people track their personal data or by increasing the accessibility of education. Given the reach and capacity of AI systems, they can negatively affect fundamental rights; therefore, in cases where such risks exist, a fundamental rights impact assessment should be undertaken. This should be done prior to the system’s development and include an evaluation of whether those risks can be reduced or justified in order to respect the freedoms of others. Moreover, mechanisms should be put into place to receive external feedback regarding AI systems that potentially infringe on fundamental rights. Users should be able to self-assess or challenge the system where reasonable. Human autonomy should be preserved so that humans are not subject to a decision based solely on automated processing when this produces legal effects on them or similarly significantly affects them.

Additionally, oversight may be achieved through governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach:

Human-in-the-loop (HITL): refers to the capability for human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable.

Human-on-the-loop (HOTL): refers to the capability for human intervention during the design cycle of the system and monitoring the system’s operation.

Human-in-command (HIC): refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the system in any particular situation.

This can include the decision not to use an AI system in a particular situation. Oversight mechanisms can be required in varying degrees to support safety and control depending on the application area and potential risk.

“All other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required.”
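To make these oversight modes a little more concrete, below is a minimal, purely illustrative Python sketch of how a single decision might be routed through HITL, HOTL or HIC governance. The class, function and field names are my own assumptions for illustration; nothing here is defined by the guidelines.

```python
# Hypothetical sketch of the three oversight modes; names are illustrative only.
from dataclasses import dataclass
from enum import Enum, auto


class Oversight(Enum):
    HITL = auto()  # human intervenes in every decision cycle
    HOTL = auto()  # human monitors operation and can intervene
    HIC = auto()   # human decides whether and how to use the system at all


@dataclass
class Decision:
    subject: str
    outcome: str
    confidence: float


def apply_decision(decision: Decision, mode: Oversight) -> str:
    """Route one automated decision through the chosen oversight mechanism."""
    if mode is Oversight.HITL:
        # Every single decision requires explicit human sign-off.
        answer = input(f"Approve '{decision.outcome}' for {decision.subject}? [y/n] ")
        return decision.outcome if answer.strip().lower() == "y" else "escalated to a human"
    if mode is Oversight.HOTL:
        # The system acts on its own while a human watches and can step in.
        print(f"[monitor] {decision.subject}: {decision.outcome} ({decision.confidence:.2f})")
        return decision.outcome
    # HIC: a human has already decided that using the system is appropriate here.
    return decision.outcome
```

The stricter the mode, the more friction per decision, which mirrors the quoted trade-off: the less oversight a human can exercise, the more testing and governance is needed elsewhere.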

Technical robustness and safety. According to the report this is closely linked to the principle of prevention of harm. AI systems must reliably behave as intended while minimising unintentional and unexpected harm; this should also apply to changes in the operating environment or the presence of other agents (one could perhaps relate this to AI safety in an operational sense). The physical and mental integrity of humans should be ensured. Resilience to attack and security is one aspect of this, and as such AI systems need to be protected from hacking. This includes targeting of the data (data poisoning), the model (model leakage) or the underlying infrastructure, both software and hardware. If an AI system is attacked, it can be led to different decisions or caused to shut down. Unintended applications and potential abuse by malicious actors should be taken into account, and steps should be taken to mitigate these. A fallback plan and general safety measures in case of problems can be devised. This could mean switching from a statistical to a rule-based procedure, or asking a human before continuing an action. Processes to clarify and assess the potential risks of AI across various application areas should be established, and safety measures must be treated proactively. Accuracy concerns correct judgements, for example classifying information into the proper categories; a high level of accuracy is especially crucial in situations where the AI system directly affects human lives. Reliability and reproducibility are critical to be able to scrutinise an AI system and to prevent unintended harms. Reproducibility describes whether an AI experiment exhibits the same behaviour when repeated under the same conditions. This enables scientists and policy makers to accurately describe what AI systems do, and replication files can facilitate the process of testing and reproducing behaviours.
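As a rough illustration of the fallback idea above (switching from a statistical to a rule-based procedure, or asking a human before continuing), here is a hedged sketch. The thresholds and the `model` and `rule_based` callables, including their return shapes, are assumptions made for illustration, not anything prescribed by the report.

```python
# Illustrative fallback plan: trust the statistical model only when it is
# confident, otherwise fall back to deterministic rules, and defer to a human
# below a second threshold. Threshold values are arbitrary assumptions.
CONFIDENCE_FLOOR = 0.90    # below this, do not act on the model alone
HUMAN_REVIEW_FLOOR = 0.60  # below this, a person must make the call


def classify_with_fallback(features: dict, model, rule_based) -> str:
    label, confidence = model(features)    # statistical procedure: (label, confidence)
    if confidence >= CONFIDENCE_FLOOR:
        return label
    if confidence >= HUMAN_REVIEW_FLOOR:
        return rule_based(features)        # conservative rule-based procedure
    return "deferred to human review"      # ask a human before continuing the action
```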

Privacy and data governance. According to the report, privacy is a fundamental right affected by AI systems. This means we need the right data governance: data integrity, access protocols and the capability to process data in a manner that protects privacy. Data protection is important throughout a system’s lifecycle, which means considering both the information initially provided by users and the information generated about them through their interactions. Digital records of human behaviour may allow AI systems to infer not only individuals’ preferences, but also their sexual orientation, age, gender, and religious or political views. The quality and integrity of data is paramount to the performance of AI systems and has to be addressed prior to training with any given data set. The integrity of the data must be ensured so that malicious data is not fed into an AI system in a way that changes its behaviour, especially with self-learning systems. Data sets must therefore be tested and documented at each step of the way. This should also apply to AI systems that were not developed in-house but acquired elsewhere. In any organisation handling data, data protocols governing access should be put in place: only qualified personnel with the competence and the need to access individuals’ data should be allowed to do so.
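Purely as an illustration of what such an access protocol might look like in code, here is a small sketch. The roles, field names and in-memory log are hypothetical assumptions, not anything the guidelines require.

```python
# Hypothetical data-access protocol: only personnel with an authorised role and
# a documented need may read an individual's records, and every attempt is logged.
from datetime import datetime, timezone

ACCESS_LOG = []
AUTHORISED_ROLES = {"data_protection_officer", "case_handler"}


def read_personal_data(store: dict, subject_id: str, requester: dict):
    allowed = (requester["role"] in AUTHORISED_ROLES
               and bool(requester.get("documented_need")))
    ACCESS_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": requester["name"],
        "subject": subject_id,
        "granted": allowed,
    })
    if not allowed:
        raise PermissionError("access denied: missing role or documented need")
    return store[subject_id]
```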

Transparency covers the elements relevant to an AI system: the data, the system and the business model. The processes that yield decisions in AI systems should be documented to the best possible standard to allow for traceability. This helps us know why an AI decision was erroneous and in turn helps prevent future mistakes, enabling easier auditability and explainability. Explainability covers both technical processes and the related human decisions; the technical side requires that decisions can be traced and understood by human beings. The report mentions a trade-off whereby explainability may reduce accuracy; in any case, the explanation has to be adapted to the stakeholder involved (layperson, regulator, researcher). In communication, AI systems should not represent themselves as humans to users: humans have the right to be informed that they are interacting with an AI system. AI must be identifiable as such, and options to decide against this interaction in favour of human interaction should be provided to ensure compliance with fundamental rights. Limitations should be communicated, including the system’s level of accuracy.
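To show one way traceability could be supported in practice, here is a minimal, hypothetical logging sketch; the record fields (timestamp, model version, hashed inputs, output) are my own choices for illustration rather than anything the guidelines specify.

```python
# Minimal traceability sketch: record enough about each automated decision that
# it can later be audited and explained. Inputs are hashed rather than stored
# raw, since they may contain personal data (assumes JSON-serialisable inputs).
import hashlib
import json
import time


def log_decision(audit_trail: list, model_version: str, inputs: dict, output) -> None:
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode("utf-8")
        ).hexdigest(),
        "output": output,
    }
    audit_trail.append(record)
```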

Diversity, non-discrimination and fairness. This requirement involves the inclusion of all affected stakeholders, giving them equal access throughout the design process as well as equal treatment, and is linked to the principle of fairness. The avoidance of unfair bias must be strived for; bias against groups of people can arise from inadvertent historic bias, incompleteness of data and bad governance models. Harm can also result from the intentional exploitation of (consumer) biases or from unfair competition, and could be counteracted by putting in place oversight processes to analyse and address the system’s purpose, constraints, requirements and decisions in a clear and transparent manner. Moreover, hiring from a diversity of backgrounds, cultures and disciplines can ensure diversity of opinions and should be encouraged. Accessibility and universal design should enable the use of AI products regardless of age, gender, abilities or characteristics; access for people with disabilities is of particular importance. AI systems should therefore not take a one-size-fits-all approach, so that equitable access and active participation remain possible. Stakeholder participation is advisable and beneficial throughout the system life cycle.
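Fairness has many competing definitions, and the guidelines do not prescribe a particular metric. As one narrow, illustrative example of how unfair bias could be monitored, the sketch below compares positive-outcome rates between groups (a demographic-parity-style check); the function names and the toy data are assumptions for illustration.

```python
# Illustrative bias check: compare the rate of positive outcomes across groups.
# A gap near 0 means similar selection rates; this is only one narrow notion
# of fairness and says nothing about causes or justification.
from collections import defaultdict


def selection_rates(decisions):
    """decisions: iterable of (group, got_positive_outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    return {group: positives[group] / totals[group] for group in totals}


def parity_gap(decisions) -> float:
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())


print(parity_gap([("A", True), ("A", False), ("B", True), ("B", True)]))  # 0.5
```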

Societal and environmental well-being. AI systems should be used to benefit all human beings, including future generations. The sustainability and ecological responsibility of AI systems should be encouraged, and research should be fostered into AI solutions addressing areas of global concern, such as the Sustainable Development Goals (SDGs). The system’s development, deployment and use process, as well as its entire supply chain, should be assessed in this regard. The social impact of these systems in all areas of our lives must be monitored and considered as well. For society and democracy, the effect on institutions and society must be given careful consideration, including in political decision-making and electoral contexts.

Accountability. This last requirement complements the previous ones, as it necessitates that mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use. Auditability entails enabling the assessment of algorithms, data and design processes. Evaluation by internal and external auditors, and the availability of such evaluation reports, can contribute to the trustworthiness of the technology. In applications affecting fundamental rights, including safety-critical applications, AI systems should be able to be independently audited. The ability to report on actions and to respond to consequences must be ensured, minimising and reporting negative impacts. The use of impact assessments, for example through red teaming or forms of Algorithmic Impact Assessment, both prior to and during development, can help minimise negative impact proportionate to the risk that AI systems pose. Trade-offs may arise when implementing these requirements; each trade-off should be reasoned about and properly documented. Redress needs to be available when unjust adverse impact occurs, especially for vulnerable persons or groups.

These are the seven requirements: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) societal and environmental well-being; (7) accountability.

In addition, it is important to have evaluation and justification throughout the system's life cycle: during use, analysis, development and re-design.

They describe technical and non-technical methods to ensure trustworthy AI.

I will describe these in another article, or return to them within this one at a later point, in order to finish the first document and go onwards to the second.

This is #500daysofAI and you are reading article 203. I am writing one new article about or related to artificial intelligence every day for 500 days. My current focus for days 200–300 is national and international strategies for artificial intelligence.


Alex Moltzau

AI Policy, Governance, Ethics and International Partnerships at www.nora.ai. All views are my own. twitter.com/AlexMoltzau