Image created by João Pedro Costa. Submitted for United Nations Global Call Out To Creatives — help stop the spread of COVID-19. Source: Unsplash.

Technology Optimism and Criticism

Reflections on binary oppositions

This will be a short piece reflecting on optimism and criticism in technology. It is not an extensive treatment; rather, it is a reflection built around a quote, a contributor article from Forbes, and a few thoughts about the rhetoric of binary oppositions.

I often hear the question: can technology solve our problems?

Other times I hear it said as a statement: technology will solve our problems.

Evgeny Morozov calls this latter attitude solutionism within technology.

Reading With Tech On Our Side, We Can Beat This Crisis, a Forbes contributor article by a former Disney screenwriting consultant, one can certainly get that particular vibe.

Tech can beat this crisis.

At times I feel this binary divide is imposed on any speaker, reader or writer on many occasions, used to judge any statement they make.

Can I be an optimist and critical at the same time?

I often talk of the material limitations of technology, especially how one must think of the value chain as well as the material conditions of artificial intelligence, yet this is labelled criticism.

In a sense it is, and at the same time I say that I am optimistic about the potential AI can have, contingent on responsible use.

This contingency, and the border between optimism and criticism, is something I find challenging.

Perhaps the question is rhetorical?

Not a question asked without expecting an answer, although I doubt I will find one; instead, one could wonder whether it has to do with rhetoric.

Say, for example, that I were to talk of artificial intelligence. When making examples in speeches, it is not uncommon to create binary oppositions.

It is fascinating that this binary opposition is not new (at all):

“In order to specify the realm of rhetoric, Aristotle replaces Plato’s binary opposition between reality and appearance with his own binary opposition between the necessary and contingent. Once this seemingly unproblematic distinction is accepted, that is, once rhetoric is safely located in the realm of the contingent, Plato’s charge of unspecifiability dissolves. By placing rhetoric (along with the dialectic) in the realm of the contingent, Aristotle gives it a domicile, a space within which it can manifest and contain itself.”

In this manner, rhetoric becomes knowledge in use: practical knowledge.

The philosopher Aristotle held that there were three basic activities of humans:

  • Theoria (thinking),
  • Poiesis (making),
  • Praxis (doing).

With practical (praxis), the end goal is action.

He distinguished between

  1. Eupraxia (εὐπραξία, “good praxis”).
  2. Dyspraxia (δυσπραξία, “bad praxis, misfortune”).

There is a right practice and a malpractice, and action is contingent upon this distinction.

One of my most popular articles is named ‘AI for Good and AI for Bad’.

It plays directly into the rhetorics of praxis.

Current bad practice includes destroying the planet and overheating the climate, economies and so on, while ignoring the release of carbon as a side effect (or part) of all technological applications.

Programming, reduced to its essence, is binary: 1 or 0. All or nothing becomes something.

It is a binary signal that is easy to understand: programmed.

‘Critical optimist’ cannot be computed — you are one or the other.

Of course this is not practice. People are more nuanced.

Central to building artificial intelligence is categorisation. Whether supervised or unsupervised, it creates a set of possible actions resulting in one particular action or many actions.

Many techniques, deep or not, lead to a practice.
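The step from categorisation to action can be sketched in a few lines of Python. This is a hypothetical stand-in, not a real trained model; the feature names and actions are illustrative assumptions:

```python
# A minimal, hypothetical sketch of categorisation leading to practice:
# a classifier reduces an input to one category, and each category is
# wired to one possible action.

def categorise(features: dict) -> str:
    """Toy stand-in for a trained classifier: maps input features to a category."""
    if features.get("whiskers") and features.get("purrs"):
        return "cat"
    return "not_cat"

# The set of possible actions, one per category.
actions = {
    "cat": "show cat-related content",
    "not_cat": "show something else",
}

category = categorise({"whiskers": True, "purrs": True})
print(actions[category])  # the resulting action is contingent on the input
```

Whatever technique produced the categoriser, the shape is the same: input in, one category out, one action taken.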

We accept contingencies in good or bad and profitable/unprofitable.

The action is contingent on the input.

“You need to spend money to make money.”

Yet this process is negotiated, or as Plato’s binary opposition could nuance:

What appears to be the case is not in fact the case, because the ‘case’ is redefined and negotiated. It appears to be the right choice given the preconditions of the choice.

In other words: based on the data a certain action was taken.

Was it the right choice?

That depends on who defines the data.

One central aspect of data quality often forgotten is the purpose for which the given data was collected.

Fit to serve its intended purpose.

Early in computing, a slave was a computer or peripheral device that operated under the control of another computer or peripheral.

Within database replication, the master database is regarded as the authoritative source, and the slave databases are synchronised to it.

The same engineering term appears in hydraulic systems as well.

Hydraulic and pneumatic systems may use a master cylinder to control one or several slave cylinders.

The master cylinder converts force to hydraulic pressure.

In this binary master/slave relationship there is an attempt to predict input and necessary output.

Contemporary scholars argue that if rhetoric is merely about the contingent, it automatically excludes that which is either necessary or impossible. The “necessary” is that which either must be done or will inevitably be done.

As for predicted necessary output: future contingent propositions are statements about states of affairs in the future that are contingent, neither necessarily true nor necessarily false.

In computing this is expressed as Boolean logic, a subset of algebra used for creating true/false statements.

A Boolean data type is a form of data with only two possible values (usually "true" and "false").

Is this a picture of a cat? True or false?
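The cat question can be sketched as Boolean logic in Python. The confidence score and threshold below are hypothetical illustrations, not a real model:

```python
# "Is this a picture of a cat?" reduced to a Boolean.
# `score` stands in for a hypothetical classifier's confidence;
# the threshold collapses a continuous value into True or False.

def is_cat(score: float, threshold: float = 0.5) -> bool:
    return score >= threshold

print(is_cat(0.93))  # True
print(is_cat(0.12))  # False
# Nuance is discarded: a score of 0.51 and a score of 0.99 both become True.
```

The binary answer hides the uncertainty that produced it, which is precisely the point about binary oppositions.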

This is a process of asymmetric communication or control.

The master creates the input that generates the output by the slave.

When put this way, the colloquial term sounds very colonial: slavery.

This only changed recently (2018) in Python, one of the most widely used programming languages in the world.

They changed these terms to 'worker' and 'helper', and 'master process' to 'parent process'.

You can understand why I could, by some standards, be labelled as critical of technology when I relate technology to slavery.

When I say critical of technology, what I mean is critical of people.

Critical of how we use technology, and optimistic that it can be used in different ways.

Optimism (n.): 1759 (in translations of Voltaire), from French optimisme (1737), from Modern Latin optimum, used by Gottfried Leibniz (in "Théodicée", 1710) to mean "the greatest good", from Latin optimus, "the best".

Optimism needs criticism for this greatness.

When searching for 'critical' on Google, certain questions seem to come up frequently.

From the suffix -al and Latin criticus, from Ancient Greek κριτικός (kritikós, "of or for judging, able to discern"), from κρίνω (krínō, "I separate, judge"), which is also the root of crisis.

Critical can also mean: of, relating to, or being a turning point or specially important juncture, as in "a critical phase".

The typical questions on Google for 'optimism' are different.

Interestingly enough, both carry connotations of thinking in the last question.

What is critical thinking?

What is optimistic thinking?

If I ask you: what is critical optimistic thinking?

Well, then it becomes an oxymoron.

It becomes a figure of speech in which apparently contradictory terms appear in conjunction.

As if I said bad good or good bad.

What meaning would you glean from that binary opposition?

Binary meanings are not good at contextualising the meaning itself, but then again the goal is not meaning — it is practice.

The practice of meaning.


Interestingly enough, the psychologist Jean Piaget applied this distinction from Plato's rhetoric to a psychological phenomenon in children.

“Appearance-Reality Distinction. According to Piaget, until age 5 or 6 children are not able to fully distinguish between reality and appearance. For example, the ability to understand that just because something looks like something else, it isn’t what it seems.”

Most machine learning algorithms can see the appearance of a dog or a cat, but they cannot understand the reality: it isn't what it seems.

Then again, in certain instances, often helped or supervised by human cognition, an algorithm can identify these better than humans can.

Plato’s “Allegory of the Cave” is a reflection on the distinction between appearance and reality. Plato argues that there is the world of appearances and there is the real world.

Machines are created by humans that can be fooled by humans.

Humans can be fooled by machines that were built by humans.

Machines can see structures better than human cognition in many cases.

Machine learning as a process was structured by human cognition.

In technology, as a binary, one can often end up with human and machine.

Yet both are now often assigned cultural meaning through design.

Human-centered design is becoming human-centered artificial intelligence.

The human is studied to re-create humanness, or the attributes of humans.

Rhetorical questions aside this seems like another oxymoron.

Can you place the human at the centre of a machine?

It brings to mind the Vitruvian Man by Leonardo da Vinci.

The drawing represents ideal human body proportions.

What is human-centered artificial intelligence?

Artificial intelligence that acts according to ideal human action.

Ideal action is certainly up for definition.

If the ideal action is to extract oil faster…

Again, you see why I might be labelled as critical, even when AI can be used to save someone from disease, or to assist my search for information on the Internet.

Critical optimistic.

Staying inside during the Coronavirus pandemic, I am grateful to be able to communicate through technology.


Jost, W., & Olmsted, W. (Eds.). (2008). A companion to rhetoric and rhetorical criticism. John Wiley & Sons.

This is #500daysofAI and you are reading article 345. I am writing one new article about or related to artificial intelligence every day for 500 days. My focus for day 300–400 is about AI, hardware and the climate crisis.

Student at University of Copenhagen, MSc in Social Data Science, focusing on AI Policy and Ethics. All views are my own.