Photo by @drinksometea

70 days of Artificial Intelligence

Writing an article every day for 70 days about AI

After days of writing, it seems I am slowly coming to understand more. Attempting to understand AI is like jumping into deep water, which can be a dangerous place to be, so instead I am splashing about in the shallow end, here with you on Medium. I know slightly more than I did before, yet people often assume I know far more than I do (which I do not). These last few days I have been writing about AI safety, and it has been a fascinating topic, ranging from the existential threat of AI to the practical implications of AI in specific situations.

  1. IBM researchers propose a new “pruning method” that can decrease the success rate of backdoor attacks, which are otherwise hard to identify and track. In this research, the scientists identify infected neurons that serve as entry points for backdoor attacks and effectively remove them.
  2. A study from researchers at the MIT-IBM Watson AI Lab examines the adversarial robustness of graph neural networks (GNNs) and proposes a new training framework that improves robustness in GNNs against tested attack methods.
  3. Poisoning attacks: machine learning models are often re-trained on data collected during operation to adapt to changes in the underlying data distribution. For instance, intrusion detection systems (IDSs) are often re-trained on samples collected during network operation. In this scenario, an attacker may poison the training data by injecting carefully designed samples, eventually compromising the whole learning process. Poisoning may thus be regarded as adversarial contamination of the training data.
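The pruning idea in point 1 can be sketched very roughly: neurons that stay (near-)dormant on clean inputs are suspects for hiding a backdoor trigger, so their weights are zeroed out. The layer shapes, activation threshold, and random data below are all illustrative assumptions, not details from the IBM paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activations of one hidden layer on clean validation data,
# shape (n_samples, n_neurons). Neurons 2 and 5 barely fire on clean inputs,
# mimicking backdoor neurons that only respond to a trigger pattern.
clean_acts = rng.random((100, 8))
clean_acts[:, [2, 5]] *= 0.01

mean_act = clean_acts.mean(axis=0)
threshold = 0.05                      # assumed cutoff, chosen for illustration
prune_mask = mean_act < threshold

# "Pruning" here means zeroing the outgoing weights of the flagged neurons.
weights = rng.random((8, 4))
weights[prune_mask, :] = 0.0

print(sorted(np.where(prune_mask)[0].tolist()))  # → [2, 5]
```

In a real defence the activations would come from a trained network on a held-out clean set, and pruning would be followed by fine-tuning to recover any lost accuracy.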
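The poisoning scenario in point 3 can be made concrete with a toy example: a one-dimensional classifier whose decision threshold is the midpoint of the two class means. An attacker who injects mislabelled points during re-training drags that threshold away from its clean position. The data, labels, and classifier are all simplified assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean training data: class 0 centred at 0, class 1 centred at 4.
x_clean = np.concatenate([rng.normal(0, 0.5, 50), rng.normal(4, 0.5, 50)])
y_clean = np.array([0] * 50 + [1] * 50)

def fit_threshold(x, y):
    # Toy classifier: decision threshold = midpoint of the two class means.
    return (x[y == 0].mean() + x[y == 1].mean()) / 2

t_clean = fit_threshold(x_clean, y_clean)

# Poisoning: inject points far to the right but labelled class 0,
# dragging the class-0 mean (and with it the threshold) upward.
x_poison = np.concatenate([x_clean, np.full(20, 8.0)])
y_poison = np.concatenate([y_clean, np.zeros(20, dtype=int)])
t_poison = fit_threshold(x_poison, y_poison)

print(t_poison > t_clean)  # → True: the boundary has moved
```

After poisoning, clean class-1 points near their centre can fall on the wrong side of the shifted boundary, which is exactly the kind of degradation an IDS re-trained on attacker-influenced traffic would suffer.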
  • Artificial intelligence is perceived quite differently from how it actually operates. People react either with fear or excitement, with many shades in between. Yet the way we perceive AI as human, or deign to grant it rights as such, is troubling given the pervasive inequality that exists outside of certain technology bubbles. Human rights, fairness and equality should be the focus before machine behaviour or machine rights are considered. At least that is how it seems to me.
  • There is not enough focus on the climate crisis in artificial intelligence communities; however, a growing movement led by leading voices in the field gives some hope (see climatechange.ai). There is still little talk of AI in relation to emissions or sustainability, and it is time these questions were asked with a degree of seriousness and urgency.
  • Data citizenship is an important topic, and it relates to AI as data is increasingly used as psychographic information to sell products or to wage ‘weapons-grade’ propaganda campaigns for or against different populations. Propaganda is not new, of course, yet the way it is distributed is.
  • Data has become securitised, and in the context of artificial intelligence the fear of this unknown can be exploited politically, for a ‘rally-round-the-flag’ effect or in ‘two-level games’ to manage domestic and international relationships. The desire to be perceived as reaching for modernity is clearly at play as national AI strategies proliferate. Still, there is real insecurity as the cyberwarfare and attack capacity of different nations increases, with interest in blockchain rising in parallel (bringing further climate issues).
  • AI safety is a discussion dominated by a focus on artificial general intelligence, which obscures closer issues that have to be dealt with urgently in relation to AI: the greatest risk is the climate crisis, yet it is seldom mentioned by these actors. The argument is often that we must prevent large existential risks in algorithms, yet it is the local, real risks that pose a present danger to humanity as a whole (slower and less visible).
  • I have not learned much about artificial intelligence, but I want to learn more.

Written by

AI Policy and Ethics at www.nora.ai. Student at University of Copenhagen MSc in Social Data Science. All views are my own. twitter.com/AlexMoltzau
