Photo by @ro_ka

AI & Processes of Processing

A short reflection on processing

Alex Moltzau

--

I thought it would be great to think briefly about AI and processing.

Processing can seldom, if ever, be said to be an isolated activity.

Therefore we can consider the processes of processing.

Explain.

Process.

Procedural.

A process often becomes procedural. That is, relating to an established or official way of doing something.

In computing, it has other connotations:

“In computing, procedural generation is a method of creating data algorithmically as opposed to manually, typically through a combination of human-generated assets and algorithms coupled with computer-generated randomness and processing power.”

Yet, in some ways it is similar. A framework of possible rules is created, regardless of whether these rules change according to an environment.
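
As a minimal sketch of that idea in Python (the tile characters, weights and seed below are illustrative assumptions, not any particular engine's rules), a small human-defined rule set plus seeded randomness is enough to generate content nobody authored by hand:

```python
import random

# A minimal sketch of procedural generation: a human-defined rule set
# (tile types and their weights) combined with computer-generated
# randomness produces a small map with no hand-authored layout.
TILES = ".~^"                # plains, water, mountains (assumed tiles)
WEIGHTS = [0.6, 0.25, 0.15]  # how often each tile should appear

def generate_terrain(width: int, height: int, seed: int) -> list[str]:
    rng = random.Random(seed)  # seeded, so the same inputs give the same map
    return [
        "".join(rng.choices(TILES, weights=WEIGHTS, k=width))
        for _ in range(height)
    ]

for row in generate_terrain(width=16, height=4, seed=42):
    print(row)
```

The framework of possible rules stays fixed; the randomness fills it in.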

We talk of ‘process’ in a human context. It can be a series of actions or steps taken in order to achieve a particular end.

I am in the process of…

It was a great process…

In computing to some extent this is still the case, yet we can additionally talk of processing power.

This made me think of a new issue of MIT Technology Review.

Although processing power is often a technical measurement, it can certainly be political.

There are processes of processing that exceed the word itself.

It goes beyond the practical technical complications, with their geopolitical connotations.

An example could be made of processing text.

Google processes most of the searches on the web.

Still, others are trying to generate text.

Replicating a human process in a machine process.

GPT-3 is, by certain standards, one of the most advanced language models out there.

Can it be explained? It depends on what kind of explanation you want.

It depends on who you are explaining it to.

Can Google be explained at this point?

Can you explain a smartphone?

Can you explain the subway?

The answer to all these questions is yes; however, many people would explain them differently.

We could digress or extend.

Can you explain each component in a smartphone and its function?

How does the control system that directs the subway work in the city of New York?

How does a single Google search for ‘butter’ work for a citizen of the United Kingdom located in Sussex?

A language model can be constructed from web-scraped data, as part of the process of developing processing.
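
A hedged sketch of that construction, shrunk to toy scale: a bigram model counts which word follows which in a corpus, then samples from those counts. The corpus string below is a stand-in assumption for actual web-scraped data, and models like GPT-3 are vastly larger, but the process is recognisably similar:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Count, for each word, which words tend to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 10, seed: int = 0) -> str:
    """Walk the bigram table, sampling one next word at a time."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # dead end: the word never had a successor in the corpus
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

# Stand-in for web-scraped text; a real pipeline would crawl and clean pages.
corpus = "the process of processing is a process of explanation and the process continues"
model = train_bigram_model(corpus)
print(generate(model, start="the"))
```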

Annotations could be made on different topics, or not.

This would form part of an explanation.

An explanandum is a sentence describing a phenomenon that is to be explained, and the explanans are the sentences adduced as explanations of that phenomenon.

“For example, one person may pose an explanandum by asking “Why is there smoke?”, and another may provide an explanans by responding “Because there is a fire”. In this example, “smoke” is the explanandum, and “fire” is the explanans.”

Quite some time ago it was said by Hempel & Oppenheim:

“It may be said… that an explanation is not fully adequate unless its explanans, if taken account of in time, could have served as a basis for predicting the phenomenon under consideration…. It is this potential predictive force which gives scientific explanation its importance: Only to the extent that we are able to explain empirical facts can we attain the major objective of scientific research, namely not merely to record the phenomena of our experience, but to learn from them, by basing upon them theoretical generalizations which enable us to anticipate new occurrences and to control, at least to some extent, the changes in our environment”
— Hempel & Oppenheim, 1948, p. 138

There is a sense of anticipation and prediction in explanations.

Especially in regard to artificial intelligence, whose outputs are often considered statistical predictions or interpretations.

OpenAI will begin charging companies for access to GPT-3, hoping that its system can soon power a wide variety of AI products and services.
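
At toy scale, such paid access could look roughly like the sketch below, based on the Completion endpoint the OpenAI Python client exposed at the time; the engine name, prompt and key are placeholder assumptions, not a confirmed product configuration:

```python
import openai  # the openai Python client, pre-1.0 interface assumed here

openai.api_key = "sk-..."  # placeholder; set from your own account

# Ask the hosted GPT-3 model to complete a prompt; "davinci" was one of
# the engines exposed at the time, used here purely as an assumption.
response = openai.Completion.create(
    engine="davinci",
    prompt="Why is there smoke? Because",
    max_tokens=32,
)

print(response["choices"][0]["text"])
```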

It is interesting to consider what decisions will be made on the basis of information received from a natural language model.

“Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by humans.”

A typical question to ask of this explanation of explainable AI is: for whom?

The answer in my experience seems to be technical staff or developers.

In terms of consumer protection, the consumer must, to a reasonable extent, understand the risks involved.

What is a reasonable explanation?

For the Google Cloud team, XAI is defined as:

“Explainable AI is a set of tools and frameworks to help you develop interpretable and inclusive machine learning models and deploy them with confidence.”

Explanations can indeed be tools and frameworks for AI output and consumer trust.
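
One concrete form such a tool can take (a sketch using scikit-learn's permutation importance and the iris dataset, assumed stand-ins here rather than Google Cloud's own tooling): shuffle each input feature and measure how much the model's accuracy drops.

```python
# A sketch of one common explainability technique: permutation importance.
# scikit-learn and the iris dataset are stand-in assumptions, not Google
# Cloud's own XAI tooling.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Shuffle one feature at a time and measure the drop in accuracy:
# a large drop suggests the model leaned heavily on that feature.
result = permutation_importance(
    model, data.data, data.target, n_repeats=10, random_state=0
)

for name, importance in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

That is an explanation pitched at technical staff; whether it serves a consumer is exactly the ‘for whom’ question.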

It very much depends on the companies involved, and what they can do or deliver within a given context.

A process of processes — delivery of an argument or of an explanans. It is fire.

Based on the images there was…

Based on the text we can conclude…

Conclusions lead to further processing.

A computer program can be set up in a manner that allows for some flexibility; humans are set up for a certain flexibility.

It is interesting to see how these boundaries are made or negotiated. The explanations, processes and the procedural.

This is #500daysofAI and you are reading article 445. I am writing one new article about or related to artificial intelligence every day for 500 days.

--


Written by Alex Moltzau

Policy Officer at the European AI Office in the European Commission. This is a personal Blog and not the views of the European Commission.
