Liège-Guillemins, Liège, Belgique, Photo by @danielm05

The Failed Facebook Spam Filtering of the Coronavirus

Was it a Fault Within the Field of Artificial Intelligence or Not?

How fine-grained can moderation be? And if moderation goes wrong at global scale in the middle of a global health crisis, how bad can it get? In this article I look at the events of 17–18 March 2020, during the spread of the Coronavirus, when Facebook's moderation was heavy-handed in deleting posts that people felt were informative, in many cases preventing the sharing of information relating to the Coronavirus. I will cover a discussion within one of the largest AI and deep learning groups on Facebook, then share my own experience and summarise the media coverage in three articles from Business Insider, TechCrunch and The Verge.

On the 17th and 18th of March, several users on Facebook noticed that their posts meant to share information about the Coronavirus were designated as spam by Facebook. Some assumed, with annoyance, that an algorithm had gone haywire; others alleged conspiracy or expressed outrage. One user commented on the event within the Artificial Intelligence & Deep Learning group on Facebook:

“…the classical precision/recall problem. They want to have a good recall so that not much misinformation spreads. The false positives is bound to be high.”

In statistics, when performing multiple comparisons, the false positive rate is the probability of falsely rejecting the null hypothesis for a given test. Loosely put, a false positive means an innocent party is found guilty. Many seemed to feel this was the case: a filter tuned for high recall on misinformation will inevitably remove some legitimate posts.
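The trade-off the commenter describes can be made concrete with a small calculation. The counts below are entirely hypothetical, chosen only to illustrate how a filter tuned for high recall on misinformation can still have low precision, so that most of the posts it removes are legitimate:

```python
# Hypothetical confusion-matrix counts for a spam/misinformation filter
# tuned aggressively for recall (catching as much misinformation as possible):
tp = 90     # misinformation posts correctly removed (true positives)
fn = 10     # misinformation posts missed (false negatives)
fp = 400    # legitimate posts wrongly removed (false positives)
tn = 9500   # legitimate posts correctly left up (true negatives)

recall = tp / (tp + fn)               # share of misinformation that was caught
precision = tp / (tp + fp)            # share of removed posts that were truly misinformation
false_positive_rate = fp / (fp + tn)  # share of legitimate posts wrongly removed

print(f"recall={recall:.2f} precision={precision:.2f} FPR={false_positive_rate:.3f}")
```

With these made-up numbers, recall is high (0.90) yet precision is low (about 0.18): more than four out of five removed posts were not misinformation at all, which matches the experience users were describing.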

Another post in the same group expressed frustration, and was actually answered by Facebook's VP and Chief AI Scientist, Yann LeCun.


In his reply, he claimed it was caused by a bug in the filtering system.

One of the articles I shared, specifically detailing the failing digital infrastructure in Norway related to AI and the Coronavirus, was deleted.


“In the face of the mounting COVID-19 pandemic, Facebook has sent many of its content moderators home, saying it will rely more on automated software instead. Alex Stamos, an outspoken former Facebook security executive, speculated that this shift might be to blame.” Business Insider, March 2020

This points to an interesting aspect: a lack of resources, in a sense, to moderate a pandemic. The world may be witnessing an attempt to replace human moderation with an algorithm, something that has been tried before.

Officially it was a bug in the system, yet Alex Stamos, who previously worked at Facebook, argued that the staffing change makes it plausible the shift to automation was to blame. Stamos noted that moderators who cannot work from home, due to the nature of the work, were sent home.

Still Facebook officially argues:

“…the issue was with an automated moderation tool and was not related to any changes to its moderator workforce.” The Verge, March 2020

TechCrunch described Facebook as an essential communications utility, and it is, after all, the largest communication platform in the world. It is not as if Facebook has ignored the coronavirus; it has attempted to take steps to address it:

“Earlier this month, Facebook banned ads for protective face masks in an effort to prevent price gouging during the outbreak. Facebook has also been sharing contagion prevention tips atop Instagram’s home screen, sending misinformation to fact-checkers for review, and providing data to researchers.” TechCrunch, March 2020

The question here is whether it was due to a bug or a staffing issue.

We are unlikely to get a clear overview of this. Facebook's statement attributes the failure to a 'bug', but it is hard to know the truth.

Was it lack of human-moderation?

Was it a fault within the field of artificial intelligence or not?


The question remains.

This is #500daysofAI and you are reading article 288. I am writing one new article about or related to artificial intelligence every day for 500 days. My current focus for days 200–300 is national and international strategies for artificial intelligence. I have decided to spend the last 25 days of my AI strategy writing focusing on the climate crisis.

Written by

AI Policy and Ethics at www.nora.ai. Student at University of Copenhagen MSc in Social Data Science. All views are my own. twitter.com/AlexMoltzau
