AI & Consistency of Actions

Circles of judgement and failures in iterations

Alex Moltzau

--

How do I judge decisions made by automated systems; how do others judge automated systems; how do I judge my own actions and how do others judge me? That is an interesting circle of possible mistakes.

To some extent I am judged by a moment.

I can fail in a moment and it could change my life.

It could mean nothing as well.

When a machine learning algorithm decides, its decision can be shaped by my own decisions about how such a system is structured.

It could be based on the data that has been aggregated.

Likely a situated action contains fragments of these various influences.

“Situated Action is the idea that human activity is based on a “swarm of contingencies,” that nothing can be understood without first understanding its context. It is a critique of cognitive science in that it denies that human procedures are replicable.”

I consider quickly a conversation with my wife.

Do I decide to sit down and listen despite the lateness of the hour?

What stories a late-night conversation can unravel is unknown.

It is serendipity, an unplanned fortunate discovery.

Stories from her childhood that I am lucky to hear, another side that adds new perspective to the woman I love.

It could equally be an unlucky discovery if she were to tell me bad news.

In life we have lucky and less fortunate discoveries.

By now you may wonder how this relates to consistency of actions.

Artificial intelligence is intelligence demonstrated by machines, unlike the ‘natural intelligence’ displayed by humans.

What do we deem to be intelligent behaviour?

Consistency is the quality of always behaving or performing in a similar way, or of always happening in a similar way.

Victory in games of chess, as often as possible.

Delivering on time every time, or better than the alternative.

Sorting large amounts of information and communicating insight.

Observing behaviour and determining actions taken by an individual.

Intelligence. One could explore these notions of it, and I have done so.

I heard it said:

“Algorithms understands nothing. Much like the heart.”

The heart alone cannot implicitly generate understanding.

However, it can pump blood around a circulatory system, supplying oxygen and nutrients to the tissues and removing carbon dioxide and other wastes.

Despite its immediate function, I can believe that it is part of a body that can think of an abstract concept: love.

Can you feel love?

Quite possibly.

Can you calculate whether someone will fall in love with you?

Not likely.

Despite this, we can attempt to take a consistent set of actions, entering circles of judgement and failures in iterations.

Iteration, as in the repetition of a process, each repetition obtaining a closer approximation.

Although, that does not sound very romantic at all.
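Romantic or not, iteration in this numerical sense can be made concrete with a small sketch. This is an illustrative example, not anything from the article itself: Newton's method, where repeating the same update rule yields an ever closer approximation of a square root (the function name and tolerance are my own choices).

```python
def newton_sqrt(x, tolerance=1e-10):
    """Approximate sqrt(x) by iteration: each repetition of the
    update rule produces a closer approximation."""
    guess = x
    while abs(guess * guess - x) > tolerance:
        # Newton's update step: average the guess with x / guess
        guess = (guess + x / guess) / 2
    return guess

print(newton_sqrt(2))  # approaches 1.41421356...
```

Each pass through the loop is one iteration; consistency here means the same rule, applied again and again, reliably closes in on the answer.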

Algorithms are often programmed to take different actions.

Sometimes these actions will depend on changing environments.

Whether the inputs are sensors, interruptions, video, sound or text, these can be turned into actionable insight.

Decision engineering, or the engineering of decision-making, becomes part of design.

I become less certain of where I wanted to take this argument.

Uncertainty is important in statistical measurement and in learning.

Machine learning often deals with probabilistic bounds on performance.

There is a chance that an action has different outcomes.
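One way to see this is a minimal simulation, my own illustrative sketch rather than anything from the article: the same action is repeated many times, and only the aggregate reveals a probabilistic bound on performance rather than a guarantee (the 0.8 success probability is an assumed figure).

```python
import random

def take_action(success_probability=0.8, seed=None):
    """A single action whose outcome is not deterministic:
    the same call can succeed or fail."""
    rng = random.Random(seed)
    return rng.random() < success_probability

# One action tells us little; many repetitions expose the
# underlying success rate as an empirical estimate.
trials = [take_action(0.8, seed=i) for i in range(10_000)]
success_rate = sum(trials) / len(trials)
print(round(success_rate, 2))  # close to the assumed 0.8
```

Any single trial can still fail; the consistency lives in the distribution of outcomes, not in any one moment.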

I can fail in a moment and it could change my life.

Do I allow myself to fail?

To what extent can I allow algorithmic decision-making to fail?

This may depend on whether we are talking about vital equipment in a hospital or the interaction of a virtual agent in an online game.

These are a few thoughts on AI & consistency of actions.

Regardless of what is decided, it is likely that I will be held accountable.

This is #500daysofAI and you are reading article 490. I am writing one new article about or related to artificial intelligence every day for 500 days.


Policy Officer at the European AI Office in the European Commission. This is a personal Blog and not the views of the European Commission.
