Photo by @ivoprod

AI, Human Rights & AI Now

A brief look at ongoing discussions of artificial intelligence and human rights

Alex Moltzau
3 min read · May 4, 2020


The Australian Human Rights Commission’s Human Rights & Technology project asked for input from the AI Now Institute at New York University, as part of an open invitation to actors around the world to comment on its discussion paper.

The feedback addressed the Australian Human Rights Commission’s comprehensive set of legal reform proposals around AI-informed decision making, and was submitted on March 13th.

The AI Now Institute (AI Now) is a university research institute dedicated to studying the social implications of artificial intelligence and algorithmic technologies (AI).

In this article I will not be able to cover the full extent of their input, which can be read in full elsewhere.

“Question A: The Commission’s proposed definition of ‘AI-informed decision making’ has the following two elements: there must be a decision that has a legal, or similarly significant, effect for an individual; and AI must have materially assisted in the process of making the decision. Is the Commission’s definition of ‘AI-informed decision making’ appropriate for the purposes of regulation to protect human rights and other key goals?”

In its answer, AI Now pointed to two operative parts of the definition that need improvement. By looking at its comments on these two parts, we may glean some insight.

(1) “AI must have materially assisted in the process”

(2) the decision must have had a “legal or similarly significant effect on an individual”

On the first point, they urge the Commission to define the scope for regulatory intervention more precisely. They suggest instead using the term defined in the AI Now Algorithmic Accountability Policy Toolkit:

“An Automated Decision[-making/-support] System is a system that uses automated reasoning to aid or replace a decision-making process that would otherwise be performed by humans.”

AI Now argues that this framing avoids getting caught up in the underlying technical logics or mechanisms of particular systems.

The second element is the phrase “legal or similarly significant effect on an individual”.

Here AI Now is concerned about the definition’s focus on impacts to the individual: systems can also have impacts on groups and communities (predictive policing is one example).

They see three possible areas of such impact:

  • Public safety
  • Public health
  • Education

These impacts, they argue, are easier to measure and identify at the group level, and should therefore not be overlooked.

They therefore recommend that the definition instead refer to:

“effects on individuals, groups, or communities.”

They also argue that the threshold of “legal, or similarly significant effect” could be interpreted in ways that are not inclusive enough: tying the scope to legal effect could fail to capture harmful impacts that are already well documented.

Therefore, they recommend the definition cover: “any decision that has an impact on opportunities, access to resources, preservation of liberties, legal rights, or ongoing safety of individuals, groups, or communities”.

This covers only one of many points in the submission, so I hope it has made you interested in reading AI Now’s full commentary, which is filled with insightful points.

This is #500daysofAI and you are reading article 336. I am writing one new article about or related to artificial intelligence every day for 500 days. My focus for days 300–400 is AI, hardware and the climate crisis.


Alex Moltzau

AI Policy, Governance, Ethics and International Partnerships at www.nora.ai. All views are my own. twitter.com/AlexMoltzau