Photo by @tiagoaguiar

Impact Transparency and xAI

Explainable Artificial Intelligence (xAI) for a Climate Crisis

Alex Moltzau

--

Forgetting everything besides human life seems to be the norm, apparently. At least considering the situation we find ourselves in, with both a climate crisis (worsening conditions for life on planet Earth) and what has been termed the sixth extinction (a massive loss of biodiversity).

When we talk of explainable artificial intelligence (xAI), we seldom hear anything about the impact that AI has on the environment. Should that be explained? Yes, of course. Yet, as usual, the impact on human beings seems to be the ultimate determining factor of whether an application of artificial intelligence has been explained in an appropriate manner.

In an article by a group of prolific authors (amongst them Max Tegmark), published on 13 January 2020 and called The role of artificial intelligence in achieving the Sustainable Development Goals, it is written:

“Consequently, we believe that it is imperative to develop legislation regarding transparency and accountability of AI, as well as to decide the ethical standards to which AI-based technology should be subjected to.”

Furthermore, referring to an article by Jones, N., they write:

“Advanced AI technology, research, and product design may require massive computational resources only available through large computing centers. These facilities have a very high energy requirement and carbon footprint”

If we talk of impact transparency and xAI together, it becomes both more complicated and more interesting. How can we properly explain the impact AI has on the planet as well as on human beings?
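One concrete piece of impact transparency could be a rough, reproducible estimate of what a training run costs in emissions. Below is a minimal sketch in Python; the GPU power draw, data-centre overhead (PUE) and grid carbon intensity numbers, and the helper function itself, are illustrative assumptions of mine, not measurements or an established standard.

# Back-of-the-envelope estimate of the carbon footprint of a training run.
# All default numbers below are illustrative assumptions, not measurements.

def training_co2_kg(gpu_count: int,
                    hours: float,
                    gpu_power_kw: float = 0.3,        # assumed average draw of ~300 W per GPU
                    pue: float = 1.5,                 # assumed data-centre power usage effectiveness
                    grid_kg_co2_per_kwh: float = 0.4  # assumed grid carbon intensity
                    ) -> float:
    """Estimated kg of CO2: energy used by the run times the carbon intensity of the grid."""
    energy_kwh = gpu_count * hours * gpu_power_kw * pue
    return energy_kwh * grid_kg_co2_per_kwh

if __name__ == "__main__":
    # For example, 8 GPUs running for one week (168 hours)
    print(f"Estimated emissions: {training_co2_kg(gpu_count=8, hours=168):.0f} kg CO2")

Reporting something like this alongside the explanation of a model would at least make the environmental side of the picture visible.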

Definition of xAI and Common Issues

Google Cloud says on their website:

“Explainable AI is a set of tools and frameworks to help you develop interpretable and inclusive machine learning models and deploy them with confidence.”

In this way you can get human explanations for machine learning models (at least that is what is promised). Deploying with confidence, interpretable and inclusive: it sounds nice. Let us move on to another explanation, this time from Wikipedia:

“Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. It contrasts with the concept of the “black box” in machine learning where even their designers cannot explain why the AI arrived at a specific decision. XAI is an implementation of the social right to explanation.”

There are a few issues that are mentioned:

  • Interpretability problem
  • Info-besity (overload of information)
  • AI systems optimize behavior to satisfy a mathematically-specified goal system chosen by the system designers

Of course, a human can audit rules in an XAI to get an idea of how likely the system is to generalize to future real-world data outside the test set.

What Different Approaches Have Been Used To Address xAI?

Ways to address this have included:

  • Right to explanation. The European Union introduced a right to explanation in the General Data Protection Regulation (GDPR) as an attempt to deal with the potential problems stemming from the rising importance of algorithms. The implementation of the regulation began in 2018. However, the right to explanation in the GDPR covers only the local aspect of interpretability.
  • Dedicated conferences on fairness, accountability and transparency, such as ACM FAT 2020.

For ACM FAT 2020, none of the titles of the papers contained the words ‘climate’, ‘ecology’, ‘environment’ or ‘sustainability’. Perhaps that has nothing to do with fairness, accountability or transparency at all, or I might have missed something…

On the DARPA website, Dr. Matt Turek presents a figure illustrating the XAI programme.

If we look through the book from H2O.ai on interpretability in machine learning, there are different techniques listed (a small sketch of two of them follows after the list):

  • Reason code generating techniques, in particular local interpretable model-agnostic explanations (LIME) and Shapley values.
  • Accumulated local effect (ALE) plots, one- and two-dimensional partial dependence plots, individual conditional expectation (ICE) plots, and decision tree surrogate models.
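To make a couple of these names concrete, here is a minimal sketch in Python. It assumes the shap package and scikit-learn are installed; the diabetes toy dataset, the gradient boosting model and the ‘bmi’ feature are illustrative choices of mine, not examples taken from the book.

# Minimal sketch: a local Shapley-value explanation and a one-dimensional
# partial dependence plot for a toy gradient boosting model.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Local explanation: Shapley values for a single prediction (reason codes)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])
print(dict(zip(X.columns, shap_values[0].round(2))))

# Global explanation: one-dimensional partial dependence for one feature
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"])

The Shapley values give per-feature contributions for a single prediction (local), while the partial dependence plot shows the average effect of one feature across the data (global).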

Examples of these newer Bayesian or constrained variants of traditional black-box machine learning models include the following (a small sketch of one is shown after the list):

  • Explainable neural networks (XNNs).
  • Explainable boosting machines (EBMs).
  • Monotonically constrained gradient boosting machines.
  • Scalable Bayesian rule lists.
  • Super-sparse linear integer models (SLIMs).
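As a sketch of one item from this list, the example below fits a monotonically constrained gradient boosting machine with scikit-learn's HistGradientBoostingRegressor; the synthetic data and the choice of constraint are illustrative assumptions on my part.

# Minimal sketch: a monotonically constrained gradient boosting machine.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(1000, 2))
y = 3 * X[:, 0] + rng.normal(scale=0.5, size=1000)  # target rises with feature 0

# Constrain predictions to be non-decreasing in feature 0, unconstrained in feature 1
model = HistGradientBoostingRegressor(monotonic_cst=[1, 0], random_state=0).fit(X, y)

# The constraint makes the learned relationship easier to state and audit:
grid = np.column_stack([np.linspace(-1, 1, 5), np.zeros(5)])
print(model.predict(grid))  # predictions rise as feature 0 rises

Because the model is forced to respect a known directional relationship, its behaviour is simpler to explain than that of an unconstrained black box.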

Again, the words ‘climate’, ‘ecology’, ‘environment’ and ‘sustainability’ are not mentioned in this book of explanations. Fairness is a broad word that could perhaps be stretched to stand in their place.

Could the Climate Crisis Be Part of xAI?

Perhaps we have to learn from the different issues that we are facing. What are we trying to explain? It seems a symptomatic case of amnesia when those working with AI attempt to explain the actions inherent in the technology being built. Complexity does not absolve xAI of responsibility for the environment; if nothing else, the environmental impact is certainly a ‘black box’, as unknown in its consequences as buying a banana in the store. It would be wrong to blame the industry or the field too much, yet it is certainly strange not to see this as part of a good implementation of xAI for people, the planet and the wider ecological footprint.

This is #500daysofAI and you are reading article 262. I am writing one new article about or related to artificial intelligence every day for 500 days. My current focus for days 200–300 is national and international strategies for artificial intelligence.

Written by Alex Moltzau

Policy Officer at the European AI Office in the European Commission. This is a personal Blog and not the views of the European Commission.
