Digital Violence and AI

Can we experience violence online, and how can AI be used in this context?

We have long known psychological abuse, physical violence, sexualised violence and economic violence. The newest addition is digital violence, claims the Norwegian Broadcasting Corporation (NRK) on the 4th of July 2019.

The article refers to an increase in digital violence seen at crisis centres. The authors describe tracking information being used by abusers: access to someone's Apple ID can, as an example, enable tracking through ‘Find My iPhone’; location sharing in apps such as Snapchat can be exploited; parental-control apps can be abused to locate children; and there are more advanced breaches of phone security. There may be further examples.

If we look beyond the news about direct violation or stalking, we can see examples ranging from the personal to the structural: cyberbullying, trolling, shaming, hacking, abuse and theft. This violence is not immediately apparent and may be hard to discover. Equally, at a structural level, warfare is being redefined, a completely different violence, yet one with physical consequences. Is the current definition of violence enough to describe how we understand violence as more than physical?

Violence, according to Merriam-Webster, is "the use of physical force so as to injure, abuse, damage, or destroy."

Of course, I am not hinting at any metaphysics of the Internet as something beyond the physical; it is still very much an infrastructure that can enable communities to communicate violent behaviour through expressed physical action. Keyboards are pressed, satellites activated, cables connected and so on. Yet we increasingly think of the digital, using digits in this framework, as something that functions differently.

There is a different dynamic to being hit in the face in a bar and to receiving hate mail with death threats, yet both can be said to be violent behaviours. In the latter case there is no clear use of physical force, yet it is arguably still violence. Digital violence?

The use of AI does pose some interesting questions. The first that comes to mind is: can violence be automated? The second: can violent behaviour learn?

In a widely viewed segment, an episode of Last Week Tonight with John Oliver covered the growing problem of robocalls.

A robocall is a phone call that uses a computerized autodialer to deliver a pre-recorded message, as if from a robot. Robocalls are often associated with political and telemarketing phone campaigns, but can also be used for public-service or emergency announcements. Some robocalls use personalised audio messages to simulate an actual personal phone call.
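To make this concrete, below is a minimal autodialer sketch in Python using the Twilio REST API, a commonly used telephony service (not one named in the segment). The credentials, phone numbers and message URL are placeholders of my own, and actually running such a campaign without consent is illegal in many jurisdictions.

```python
# A minimal autodialer sketch using the Twilio REST API (pip install twilio).
# All credentials, numbers and URLs below are placeholders for illustration.
from twilio.rest import Client

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder credential
AUTH_TOKEN = "your_auth_token"                       # placeholder credential

client = Client(ACCOUNT_SID, AUTH_TOKEN)

recipients = ["+4712345678", "+4787654321"]  # hypothetical numbers

for number in recipients:
    # Each call fetches TwiML instructions that play a pre-recorded message.
    call = client.calls.create(
        to=number,
        from_="+15005550006",                       # placeholder caller ID
        url="https://example.com/prerecorded.xml",  # TwiML with a <Play> verb
    )
    print(call.sid)
```

The point is how little code the mechanism takes: the scale comes from looping over a list of numbers, not from any sophistication.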

Due to a lack of regulation, certain banks and insurance companies used robocalls to call borrowers an extreme number of times. Robocalls were also used by others to trick recipients into sending money or handing over personal information.

This is becoming easier due to services such as Lyrebird, which allows almost anyone to use its platform to create ‘ultra-realistic voices’, supported by deep neural network techniques.
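As a hedged sketch of how accessible this has become, the open-source Coqui TTS library (a successor to Mozilla TTS, not Lyrebird's own platform) can synthesise speech from text in a few lines. The model name is one of Coqui's published pre-trained voices, and the API may differ between versions.

```python
# A sketch of neural text-to-speech with the open-source Coqui TTS library
# (pip install TTS). This illustrates the technique, not Lyrebird's API;
# the model name is one of Coqui's published pre-trained English voices.
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(
    text="This voice was generated by a deep neural network.",
    file_path="synthetic_voice.wav",
)
```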

Can violent behaviour learn? This may seem a misplaced and confusing question, because it is. A human takes an intended or unintended action that causes violence. If that action is to create a group of algorithms that learn how to be more or less violent, and thus change their own parameters, it is still the human action that initiated the process.
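As a toy illustration of what it means for algorithms to change themselves after a single human action, here is an epsilon-greedy bandit in pure Python. Everything in it is hypothetical: the ‘actions’ and ‘reward’ stand in for whatever an automated system might be set to optimise, harmful or otherwise.

```python
# A toy epsilon-greedy bandit: a minimal example of a program that changes
# its own behaviour from feedback once a human has started it.
import random

N_ACTIONS = 3        # hypothetical actions an automated system could take
EPSILON = 0.1        # exploration rate
values = [0.0] * N_ACTIONS   # learned estimate of each action's payoff
counts = [0] * N_ACTIONS

def reward(action: int) -> float:
    """Stand-in environment: pretend action 2 'succeeds' most often."""
    return 1.0 if random.random() < (0.2, 0.5, 0.8)[action] else 0.0

for step in range(1000):
    # Mostly exploit the best-known action, sometimes explore a random one.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: values[a])
    r = reward(action)
    counts[action] += 1
    # Incremental mean update: the program rewrites its own estimates.
    values[action] += (r - values[action]) / counts[action]

print("Learned action values:", [round(v, 2) for v in values])
```

After a thousand steps the program reliably favours whichever action its feedback rewarded, without any further human input; the responsibility question above concerns exactly that gap.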

Despite the automation and the subsequent learning or change, when it is a group of algorithms there is still a human responsibility. Digital police violence can take on a different scale and immediacy, a time-space compression (David Harvey).

Time-space compression refers to technologies that have, in recent times, shrunk spatial and temporal distances. These include technologies employed in communication, travel, and economics.

There are a few books tackling digital violence and related topics: Automating Inequality by Virginia Eubanks and Weapons of Math Destruction by Cathy O’Neil.

Additionally, the Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System, published in April 2019 by the Partnership on AI (PAI), formed by tech giants like Amazon, Facebook and Google along with advocacy groups like the American Civil Liberties Union, may be of interest.

We can question what you gain and what you lose by the securitization of the digital you.

The Copenhagen School of security studies places an emphasis on this. Securitization, developed by Ole Wæver, is probably its most prominent concept.

A securitizing speech act needs to follow a specific rhetorical structure, derived from war and its historical connotations of survival, urgency, threat, and defense.

Securitization is a discursive process that has to fulfill three criteria: an actor (1) claims that a referent object is existentially threatened, (2) demands the right to take extraordinary countermeasures to deal with the threat, and (3) convinces an audience that rule-breaking behavior to counter the threat is justified.

In short, by labeling something as “security,” an issue is dramatized as a matter of supreme priority. It is a process by which nonpoliticized issues (not talked about) or politicized issues (publicly debated) are elevated to a level of urgency that can legitimate the bypassing of public debate and democratic procedures.

Many countries across the world are already planning massive increases in investment in cybersecurity. As the individual and the structure change, even the meaning of violence may change.

We have to take great care that we do not heighten our sense of urgency to the point where issues of digital violence and increasing digital defence lead to de-democratisation or democratic deconsolidation, reverting to more authoritarian societies.

This is day 37 of #500daysofAI, follow me for daily updates on AI.

What is #500daysofAI?
I am challenging myself to write and think about the topic of artificial intelligence for the next 500 days with the #500daysofAI. Learning together is the greatest joy, so please give me feedback if you feel an article resonates.

AI Policy and Ethics at www.nora.ai. Student at University of Copenhagen MSc in Social Data Science. All views are my own. twitter.com/AlexMoltzau