Photo by @timmossholder of a mural painted by George Fox students Annabelle Wombacher, Jared Mar, Sierra Ratcliff and Benjamin Cahoon.

More Inclusive Ethics for AI

Our shared responsibility to include diverse perspectives to a far greater extent

This article was sparked by a recent piece in MIT Technology Review by Abhishek Gupta and Victoria Heath. Gupta is the founder of the Montreal AI Ethics Institute and a machine learning engineer at Microsoft, where he serves on the CSE Responsible AI Board. Heath is a researcher at the Montreal AI Ethics Institute and a senior research fellow at the NATO Association of Canada.

In the article, they argue that AI systems have disproportionately affected marginalised groups while benefiting a privileged few.

Globally, AI ethics efforts have produced several sets of guidelines and principles for developers, funders, and regulators to follow.

Examples they mention include recommending routine internal audits and requiring protections for users’ personally identifiable information.

The two authors argue that these declarations and manifestos fail to account for the cultural and regional contexts in which AI operates.

They believe the AI community should agree on a set of international definitions and concepts for ethical AI.

Still, there is a lack of geographic representation: North America and northwestern Europe are overrepresented.

They say:

The challenges and risks posed by AI differ depending on where it is deployed.

Gupta and Heath mention a few cases:

“In 2018, for example, Facebook was slow to act on misinformation spreading in Myanmar that ultimately led to human rights abuses. An assessment (pdf) paid for by the company found that this oversight was due in part to Facebook’s community guidelines and content moderation policies, which failed to address the country’s political and social realities.”

They argue that there is a need for companies to engage users to help create appropriate standards to govern these systems.

In arguing this, they point to the expert advisory group for Unicef’s AI for Children project, which has no representatives from the regions with the highest concentrations of children and young adults, including the Middle East, Africa, and Asia.

Advances in technology are not necessarily for the benefit of all.

They can exacerbate economic inequality.

They can be used for political oppression.

They can contribute to environmental destruction.

How far do we see with our ethical perspectives and frameworks?

Is it only toward what is most immediate, right in front of us?

Even that is hard to handle, yet we must not forget, and we must ensure, as the authors say, that we “…avoid repeating this pattern.”

Gupta and Heath mention three startling patterns:

  1. “The current concentration of AI research (pdf): 86% of papers published at AI conferences in 2018 were attributed to authors in East Asia, North America, or Europe.
  2. Fewer than 10% of references listed in AI papers published in these regions are to papers from another region. Patents are also highly concentrated: 51% of AI patents published in 2018 were attributed to North America.
  3. The newly formed Global AI Ethics Consortium, for example, has no founding members representing academic institutions or research centers from the Middle East, Africa, or Latin America. This omission is a stark example of colonial patterns (pdf) repeating themselves.”

Taking these three worrying trends into consideration, we must build ethical, safe, and inclusive AI systems instead of engaging in “ethics washing.”

Who has historically been harmed by these systems?

Who needs to be included?

How can we ensure this inclusion occurs to a much greater extent?

We can treat regional and cultural diversity as key to any conversation about AI ethics.

Is AI responsible when we fail to account for diversity in gender, geography, and background?

This is #500daysofAI and you are reading article 470. I am writing one new article about or related to artificial intelligence every day for 500 days.

AI Policy and Ethics. Student at the University of Copenhagen, MSc in Social Data Science. All views are my own.