
AI, Human Rights & AI Now

A brief look at ongoing discussions of artificial intelligence and human rights

The Australian Human Rights Commission's Human Rights & Technology project asked for input from the AI Now Institute at New York University, as part of a broader invitation to actors around the world to comment on its discussion paper.

The feedback, sent on the 13th of March, addressed the Australian Human Rights Commission's comprehensive set of legal reform proposals around AI-informed decision-making.

The AI Now Institute (AI Now) is a university research institute dedicated to studying the social implications of artificial intelligence and algorithmic technologies.

In this article I will be unable to cover the full extent of their input, which can be read in the original submission.

In their response, AI Now identified two operative parts of the proposal that need improvement. By looking at their comments on these two parts, perhaps we may glean some insight.

On the first point, they urge the Commission to define the scope for regulatory intervention more precisely, suggesting it instead adopt the term defined in their AI Now Algorithmic Accountability Policy Toolkit:

“An Automated Decision[-making/-support] System is a system that uses automated reasoning to aid or replace a decision-making process that would otherwise be performed by humans.”

AI Now argues that this framing avoids getting caught up in underlying technical logics or mechanisms.

The second point concerns the phrase: “legal or similarly significant effect on an individual”.

Here they are concerned that the definition focuses on impact at the level of the individual.

Systems can also have impacts on groups and communities (predictive policing is one example).

They point to three areas of possible group-level impact:

  • Public safety
  • Public health
  • Education

These impacts are argued to be easier to measure and identify at the group level, and thus should not be overlooked.

They recommend that the definition add:

“effects on individuals, groups, or communities.”

They argue that the threshold of “legal, or similarly significant effect” could be interpreted in ways that are not inclusive enough.

Moreover, tying the scope of the system to legal effect could fail to include harmful impacts that are well documented.

Therefore, they recommend that the definition cover these broader effects as well.

This covers only one point of many, so I hope it has made you interested in reading their full commentary, as it is filled with a variety of interesting and insightful points.

AI Policy and Ethics at www.nora.ai. Student at University of Copenhagen MSc in Social Data Science. All views are my own. twitter.com/AlexMoltzau