70 days of Artificial Intelligence
After this time of reading and writing it seems that I am slowly moving towards understanding more. Attempting to understand AI is jumping into deep water, and that can be a dangerous place to be. As such I am splashing away in a swimming pool, here with you on Medium. I know slightly more than I did before, yet people seem to assume that I know far more than I do. These last days I have been writing about AI Safety, and it has been a fascinating topic, ranging from the existential threat of AI to the practical implications of AI in specific situations.
Topics that I will be looking at over the next ten days will focus on state-of-the-art attacks within the field of artificial intelligence. I have been contacted by IBM and have decided to talk to them regarding the following:
- A new concept called “Block Switching”, designed to provide a never-before-seen defence strategy against adversarial attacks by programming parts of an AI model’s layers with randomly assigned run times, so that it “fools” the adversary and prevents them from knowing and exploiting weaknesses in the model’s layers
- A new “pruning method” proposed by IBM researchers that can decrease the success rates of backdoor attacks (which are harder to identify and track). In this research, scientists identify infected neurons that serve as the entry point for backdoor attacks and effectively remove them.
- A study from researchers at the MIT-IBM Watson AI Lab that examines the adversarial robustness of graph neural networks (GNNs) and proposes a new training framework that can improve robustness in GNNs against tested attack methods
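The pruning idea in the second point can be illustrated with a small sketch. To be clear, this is not IBM's actual method, just the general intuition behind pruning defences: neurons that stay largely dormant on clean data are the ones a backdoor is most likely to hijack, so a defender can zero them out. All sizes, weights and the `keep` parameter here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy hidden layer: weights for 8 neurons on 4-dimensional inputs
W = rng.normal(size=(4, 8))

def activations(X, W):
    # ReLU hidden activations for a batch of inputs
    return np.maximum(0.0, X @ W)

def prune_dormant(W, X_clean, keep=6):
    # average activation of each neuron over clean data; neurons that
    # stay dormant on clean inputs are prime real estate for backdoor
    # triggers, so zero out the least-activated ones
    mean_act = activations(X_clean, W).mean(axis=0)
    order = np.argsort(mean_act)  # lowest mean activation first
    W_pruned = W.copy()
    W_pruned[:, order[:W.shape[1] - keep]] = 0.0
    return W_pruned

X_clean = rng.normal(size=(100, 4))
W_pruned = prune_dormant(W, X_clean, keep=6)  # removes 2 of 8 neurons
```

The trade-off, as with any pruning defence, is that removing neurons can also cost some accuracy on clean inputs, which is why real methods combine pruning with fine-tuning.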
Then again, to explore these three topics I will have to dive deeper into practical cyber defence. From what I have understood so far in AI Safety, there is a challenge with adversarial attacks (crafted inputs that fool a trained model) and with poisoning attacks within machine learning techniques (altering the data set that algorithms are trained on).
Earlier this year Jesus Rodriguez described adversarial attacks and a common scenario in an article called Adversarial Attacks that can Make Your Neural Network Look Stupid:
“One of the most common scenarios of using adversarial examples to disrupt deep learning classifiers. Adversarial examples are inputs to deep learning models that another network has designed to induce a mistake. In the context of classification models, you can think of adversarial attacks as optical illusions for deep learning agents”
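To make that quote concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) applied to a toy logistic-regression “model”. The weights, input and step size are invented for illustration, and this is a sketch of the general technique, not the specific attacks from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # binary cross-entropy for a single example
    p = sigmoid(np.dot(w, x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps=0.6):
    # gradient of the loss with respect to the *input* x
    grad_x = (sigmoid(np.dot(w, x)) - y) * w
    # step a small amount in the direction that increases the loss
    return x + eps * np.sign(grad_x)

# toy "model": fixed weights and a correctly classified input
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.8, -0.5, 0.3])  # w·x = 1.95 -> predicted class 1
y = 1.0

x_adv = fgsm(w, x, y)
# each feature moved by at most 0.6, yet the prediction
# flips from class 1 to class 0
```

The point of the “optical illusion” metaphor is visible here: every coordinate of the input changes only slightly, but the changes are all chosen in the worst possible direction for the model.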
There is a page on Adversarial Machine Learning on Wikipedia. There are two attacks listed that I chose to communicate here:
- Evasion attacks: for instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware code. In the evasion setting, malicious samples are modified at test time to evade detection; that is, to be misclassified as legitimate. No attacker influence over the training data is assumed. A clear example of evasion is image-based spam in which the spam content is embedded within an attached image to evade the textual analysis performed by anti-spam filters. Another example of evasion is given by spoofing attacks against biometric verification systems.
- Poisoning attacks: Machine learning algorithms are often re-trained on data collected during operation to adapt to changes in the underlying data distribution. For instance, intrusion detection systems (IDSs) are often re-trained on a set of samples collected during network operation. Within this scenario, an attacker may poison the training data by injecting carefully designed samples to eventually compromise the whole learning process. Poisoning may thus be regarded as an adversarial contamination of the training data.
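The poisoning scenario can also be shown on a deliberately tiny example. This is my own toy sketch, not any real IDS: a nearest-centroid classifier trained on 1-D data, where the attacker injects a single carefully designed outlier with a wrong label and drags one class mean far enough to break test-time predictions.

```python
import numpy as np

def fit_centroids(X, y):
    # nearest-centroid "model": one mean per class
    return {c: X[y == c].mean() for c in (0, 1)}

def predict(centroids, X):
    return np.array(
        [0 if abs(x - centroids[0]) < abs(x - centroids[1]) else 1 for x in X]
    )

# clean training data: two well-separated 1-D clusters
X_train = np.array([0., 1., 2., 3., 4., 10., 11., 12., 13., 14.])
y_train = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

X_test = np.array([2.5, 11.0])
y_test = np.array([0, 1])

clean = fit_centroids(X_train, y_train)

# the attacker injects one crafted outlier labelled class 0,
# pulling the class-0 centroid past the class-1 centroid
X_pois = np.append(X_train, 100.0)
y_pois = np.append(y_train, 0)
poisoned = fit_centroids(X_pois, y_pois)

clean_acc = (predict(clean, X_test) == y_test).mean()    # 1.0
pois_acc = (predict(poisoned, X_test) == y_test).mean()  # 0.5
```

One malicious sample out of eleven is enough here, which is why systems that retrain on operational data need to vet what they learn from.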
Still, as I have mentioned previously, I hope that over the course of the 500 days (430 days remain) I will be able to move increasingly into programming in Python and into talking about maths combined with social science. This may move discussions to an (even more) narrow audience. However, perhaps due to my lack of skill, I may be able to communicate basic learning within the given area, or my lack of knowledge, so that we can engage in discussion. Either that, or you as a reader have learnt more with me.
There is still very much to learn, and after 70 days, as I usually say in a summary, I am still scratching the surface and operating with little understanding. It is clear that it will take far longer than 500 days (which is not that long) to get to know the field of artificial intelligence slightly better. Neuroscience, biology and the natural sciences are fields I have not considered enough, yet they are emerging now in a few discussions I am having online. I have to gain an understanding of these areas, or find new friends with whom I can continue to discuss these topics. Perhaps I will even move into this type of environment; a PhD student recently sent me a medical project that could be relevant.
Aside from this, I am writing a draft for a book project, which I have thought about pre-releasing when I hit day 100 in 30 days. If you are interested in reading through it and giving me feedback I would be incredibly happy. Feel free to contact me directly on social media (anywhere) or write a response to this Medium post if you are interested.
Before I round off day 70 it may be suitable to tell you a few key points that I have reflected on during this time. I will write from memory rather than attempting to give you a full breakdown of all 70 days.
- The field of artificial intelligence is pervasive: you can search for most words together with ‘artificial intelligence’ and find interesting (or worrying) results.
- Artificial intelligence is perceived differently from how it operates in many ways, shapes or forms. People react either with fear or excitement, and there are many in between of course. Yet the way we perceive AI as human, or deign to give it rights as such, is sad considering the pervasive inequality that exists outside of certain technology environments (bubbles). Rights, fairness and equality for humans should be the focus before machine behaviour or machine rights are considered. At least that is how it seems.
- There is not enough focus on the climate crisis in the artificial intelligence communities; however, there is a growing movement led by leading voices in artificial intelligence that gives some hope (see climatechange.ai). There is little talk of AI in relation to emissions or sustainability, and it is time these questions were asked with a degree of seriousness and urgency.
- Data citizenship is an important topic, and it is of course related to AI when data is increasingly used as psychographic information to sell products or to run ‘weapons-grade’ propaganda campaigns against or for different populations. Propaganda is not new, of course, yet the way it is distributed is novel.
- Data has become securitised, and in the context of artificial intelligence the fear of this unknown can be used in political terms for a ‘rally-round-the-flag’ effect, or effectively in ‘two-level games’ to manage domestic or international relationships. Being perceived as reaching for modernity clearly matters, as the proliferation of national AI strategies shows. Still, there is real insecurity as the cyberwarfare and attack capacity of different nations increases, with the demand for blockchain increasing in parallel (more climate issues).
- AI Safety is a discussion dominated mostly by the focus on Artificial General Intelligence, which obscures or conceals closer issues that have to be dealt with urgently in relation to AI: the greatest risk is the climate crisis, yet it is seldom mentioned by these types of actors. The argument is often that we have to prevent large existential risks in algorithms, yet it is the local, real risks that in the present are certainly a danger to humanity as a whole (slower and less visible).
- I have not learned much about artificial intelligence, but I want to learn more.
Thank you so much for following my posts. Day 70 is still a milestone; no matter how far I get, every day is a present, and I appreciate learning tidbits about such a vast topic and field as artificial intelligence.
This is day 70 of #500daysofAI. My current focus for days 50–100 is AI Safety. If you enjoy this, please write a response, as I do want to improve my writing and discover new research, companies and projects.