Photo by @andreasdress

Is Artificial Intelligence the Best That Has Happened to Our Civilisation, or the Worst?

Alex Moltzau
Towards Data Science

--

It would be a spoiler to say: both. Yet can we embrace two extremes at the same time? It seems one must have an opinion. If we think of artificial intelligence as a tool, we could say it is the best when used by the right people and the worst when used by the wrong people. On the other hand, there is seldom any clear line between the two.

If we conceive of artificial intelligence as a technological advance with systemic consequences for society, then we could argue it is harmful, worsening the current climate crisis or the inequality experienced on this planet. Controlling quantitative information about a population is not novel, but the scale and extent of it can indeed be said to be. This climate perspective and societal concern is therefore one path to take in arguing it is the worst.

AI & Capabilities

Consider instead if a meteor were to hit planet Earth; it is not inconceivable. Three asteroids zoomed past Earth in October 2019, one reportedly with only 24 hours' notice. If that were to happen, or any other threat to life on Earth were to arise, we may want the most advanced technology possible to assist with the issues that could follow.

Descriptions of Pandora's box might put it usefully: the box is open, and you cannot take back what has been revealed. The box cannot be unopened; however, we could de-escalate the use of artificial intelligence. De-escalating technologies, or keeping their use to a minimum, was imagined in The Glass Bead Game, a novel by Hermann Hesse written just before the war and published during it, whose author received the Nobel Prize in Literature in 1946.

It was amidst that death and destruction that a book came out considering what a utopia would look like. I am not one to fantasise about utopia or any such thing; more pragmatically, a slightly better world may be conceivable, growing with time. It is strange to say this amidst concerns related to artificial intelligence, with surveillance of the poor recently described in quite lucid wording by the Special Rapporteur on extreme poverty and human rights, Philip Alston:

“ The digital welfare state is either already a reality or is emerging in many countries across the globe. In these states, systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish. This report acknowledges the irresistible attractions for governments to move in this direction, but warns that there is a grave risk of stumbling zombie-like into a digital welfare dystopia.”

As such, describing the use of technology or 'the digital' as 'the best thing' that has happened to society seems dreadfully ignorant. Yes, by some standards we can say the world is improving, and most people point to statistics. Statistics in a dying world, in a house on fire, seem optimistic.

AI & Optimism

It was optimism that Mao Zedong, the Chinese communist revolutionary, touted with the Great Leap Forward: killing birds to better grow crops, yet getting swarms of locusts and famine in return. American 'freedom', in whatever form or shape, seems just another word that could be put to similar use under the seemingly opposite neoliberal narrative. You could put any political ideology ahead of the field of artificial intelligence if you so wanted.

Capital-driven or liberal AI has to some degree been the case with technology companies seeking to better understand populations; some have dubbed it psychographics combined with sociometric indicators. As such, both quantitative and qualitative social scientists work on 'adoption', ensuring a certain technology is being used. If you have a trade, you would like to ensure its continuation, reminiscent of how Hong Kong was brokered in the first place, in the First Opium War.

In saying this, I cannot be sure how factual or correct it is. I am not a historian, yet historians nowadays seem to want to speak much of the future. I have mentioned outlining the future as a practice. Planning ahead, or interpreting the past to take action, has a ring of this historification. What interpretations of artificial intelligence are we making when we ask this question of better or worse?

AI & Pessimism

The cliché that comes to mind is the Terminator, or HAL in old movies: representations of doom for humanity or for some humans. This is of course being used by certain technologists to create urgency in relation to investments; Elon Musk is one proponent. He could be criticised for this, and still the people I meet in the technology sector, developers in particular, seem to worry that it will come to fruition. Not in the anthropomorphic way, human-like vengeance, but more in simple mistakes that we should or should not have made.

I mentioned nuclear war a few days ago. If we are on the doomsday-train, then that would be one of the first destinations. Epidemic or manufactured disease could be another. Next could be trade war, economic crisis or breakdown of the ecosystem. There is plenty to worry about, and still…

Is artificial intelligence the best that has happened to our civilisation? When this is mentioned, I cannot help but picture President Trump declaring on Thursday "a great day for civilization" after Mike Pence apparently brokered a ceasefire with Turkey. A leader who was accused of starting this war in the first place by pulling out troops. International relations are not straightforward, nor is defence.

Autonomous weapons in war are much discussed and increasingly protested. The Campaign to Stop Killer Robots has been on a tour showing these developments. I remember sitting down and seeing examples of weapons being developed by different countries: the US, Russia, China, the UK and more. The capability to destroy or ruin in war, in various ways, is increasing.

AI & Power

Artificial intelligence used in hacking, or in distorting solutions by way of extraction or shutdown, is documented; its use in changing the course of decision-making is less documented. It is thought that artificial intelligence is giving more power to some of the richest people in the world to control this ebb and flow in national trade or power.

Jeff Bezos, the founder of Amazon.com, is said to be the richest person in the world, and Bill Gates the second richest. These two have drawn on far more than machine learning techniques or AI as a field, yet the field can be said to have contributed to their continued rise. Two others towards the top of the richest list are the co-founders of Google, and their company is now said to be 'AI-first'.

Design paradigms have not been focused so much on technology; they were ego-centric first and later people-centric. Now some claim we have to change this to life-centric design. That sounds airy or vague, yet considering the ecosystem in the design process should not be a novel idea. If the field of AI can become life-centric, what would that look like? If that were the case, would there not still be power or prestige to be gained? Design is a plan or specification, while algorithms are processes and rules, yet neither of these areas of consideration is without politics.

I am increasingly influenced by the field of political science and its slant towards realism; from this perspective, particularly in relation to international security policy, AI is equally tantalising and dangerous. If you could start a war or gain information without the 'enemy' knowing, or if you could wage a war without any lives lost, well, it sounds a lot like drone warfare, although those are technologies controlled by humans.

AI & Control

When we say 'controlled by humans', it is a strange statement, as if there were a clear distinction. What is not controlled by humans? Not every decision has immediate effect, especially if it is programmed. Then again, humans make investments too, and companies have the status of persons, at times with more rights. The same has been suggested for 'AI' applications as well: give 'AI' legal status as a person within this framework. However, it would be strange to attribute mistakes to a system instead of a person or a company. We loop the loop; the company is a person is a company. I digress.

When we say 'it is decided by the system', we are both mistaken and correct. We are wrong because a person did decide, and we are correct because the decision was jointly made. A system is a set of things working together as parts of a mechanism or an interconnecting network; a complex whole. Humans are part of this system, and they are the system.

We can talk of human-computer interaction, a strange interdisciplinary mix distributed across disciplines, and get drawn into science and technology studies. With AI we get drawn into UX (user experience) and UI (user interface). We design the interface to recognise a face and place it into a category, conviction or convicted, user or used, and then an action is taken. A human action, perhaps.

AI & Falsification

Falsification of these systems has been important yet neglected from an anthropological standpoint. Anthropologists have studied economic theory against the practical implications of the economy and found the frameworks or their effects lacking. Within technology or AI, we seemingly need a lawyer to state the injustice done; such is the jurisdiction of the profession. Despite these concerns, AI is of course a 'tremendous opportunity' and is described by the UN as such in different circumstances.

With increasing funding to the UN from technology firms alongside development, dubious questions can appear. I have heard talk of a borderline decision over whether to reject a medical company wanting to test its medicine after providing funding; some people say no. Others do not, they say yes, and we can wonder more broadly: which experiments with artificial intelligence abroad are we allowing to happen, whether within policing, health, or other areas?

I think saying AI is the best that has happened to our civilisation is a stretch, yet progress is not to be underestimated. I come back to the thought of my father and his Parkinson's, or of wanting to save my wife if she were ill. I have thought as well about the great potential effects AI can have in mapping the climate crisis or poverty. I would, however, not be overly optimistic, and 'borderless' innovation without care seems all the more irresponsible.

Climate information has been used by oil companies to prepare for climate change, and this is well documented. What effects arise within the technology firms that we are not immediately seeing, or through the applications of solutions within the field of artificial intelligence? A typical finger points to the technology gurus of Silicon Valley, insofar as they restrict the use of devices for their own children, or to the ban on facial recognition in San Francisco.

I think fear of technology combined with excitement is a strange cocktail of feelings. I can admit to being ambivalent in dealing with these issues, or the many statements in my stream. A stream of consciousness is what it is, and it often seems to be what is wanted by certain technologists 'pushing the boundaries' in their machines. If we take this expression back to a previous problem, then pushing the boundaries of our planetary ecosystem seems a simultaneous venture.

AI & Climate Crisis

Refrigeration, transport, minerals, mining, electricity, bandwidth, the Kessler syndrome, satellites, moon travel and so on. Blasting through planetary boundaries with technology is a pursuit that takes this question so far towards its extreme that it becomes hyperbole. The dream of space travel and climate friendliness operates within the thought of progress and tribute to the past: extinct species that are remembered, or new planets to be found. Whether it is the new Silk Road or making America great again, we are intrinsically bound to a place in between, on the way somewhere.

I have said that we need to stop futuring and instead tackle the present situation, yet it seems impossible to do. Inevitably any change is bound to the vision of what could be or what should not be. This question immediately pushes into this space of scenario-making. It is committed, truthfully or wrongfully, not only when looking back; when looking at the screen of a laptop or at a newspaper it becomes so. When was intelligence real? The second meaning of intelligence is the collection of information of military or political value.

We say intelligence; do we by that mean information? If so, then we are at a point in time when we have never been more intelligent: never has more intelligence been available to humans for the possibility of decision-making. As such, if we drop the pretence of rationality, then perhaps we can do the best, or make the best, out of this situation as a civilisation. If we stop acting as though intelligence is more important than understanding, sympathetic awareness or comprehension.

Information has little value if nothing, or little of value, is done with the information provided. As such, whatever intelligence we gather, we must deem ourselves responsible for its use and strive for understanding, for better or worse. We decide whether we really are capable of doing so, both independently and jointly, for better or worse.

This is #500daysofAI and you are reading article 140. I write one new article about or related to artificial intelligence every day for 500 days.

Thank you for reading; I appreciate your time. Any thoughts this prompts would be valuable, if you would be so kind as to share them.
