Photo by @timmossholder of a mural painted by George Fox students Annabelle Wombacher, Jared Mar, Sierra Ratcliff and Benjamin Cahoon.

More Inclusive Ethics for AI

Our shared responsibility to include diverse perspectives more fully

This article was sparked by a recent piece in MIT Technology Review written by Abhishek Gupta and Victoria Heath. Gupta is the founder of the Montreal AI Ethics Institute and a machine learning engineer at Microsoft, where he serves on the CSE Responsible AI Board. Heath is a researcher at the Montreal AI Ethics Institute and a senior research fellow at the NATO Association of Canada.

“Fairness,” “privacy,” and “bias” mean different things (pdf) in different places.

Depending on the place, the challenges and risks posed by AI differ.

“At worst, these flawed standards will lead to more AI systems and tools that perpetuate existing biases and are insensitive to local cultures.”

Gupta and Heath mention a few cases:

  1. “Fewer than 10% of references listed in AI papers published in these regions are to papers from another region. Patents are also highly concentrated: 51% of AI patents published in 2018 were attributed to North America.”
  2. “The newly formed Global AI Ethics Consortium, for example, has no founding members representing academic institutions or research centers from the Middle East, Africa, or Latin America. This omission is a stark example of colonial patterns (pdf) repeating themselves.”

Written by

AI Policy and Ethics at www.nora.ai. Student at University of Copenhagen MSc in Social Data Science. All views are my own. twitter.com/AlexMoltzau
