Are YouTube’s Algorithms Racist?
YouTube and Alphabet are facing a putative class action lawsuit
YouTube maintains that its automated systems aren’t designed to identify race, but is that really true?
I read an article published on June 17th titled “YouTube Alleged to Racially Profile Via Artificial Intelligence, Algorithms.” The article reports that a group of African American content creators has launched a putative class action against YouTube and its parent company, Alphabet.
There is a link to the full complaint here.
The group alleges that YouTube is violating laws intended to prevent racial discrimination.
This is not the first lawsuit challenging YouTube, and such cases increasingly turn on the controversial immunities afforded to digital services under Section 230 of the Communications Decency Act.
Previous attempts include:
- PragerU claimed that conservative videos weren’t being treated the same as liberal ones. The suit alleged a First Amendment violation but was rejected, and in February the 9th Circuit Court of Appeals agreed that YouTube is a private forum, not a public one subject to First Amendment scrutiny.
- LGBTQ YouTubers alleged (among other things) that videos and channels with “gay,” “bisexual,” or “transgender” in the title are being unfairly targeted. At the time of filing, a YouTube spokesperson denied that its systems restrict content based on these criteria.
As one might expect, the precedent set by the first case makes the second case more challenging.
Section 230 is invoked as a defence by digital giants like Google, Facebook and Twitter.
The now-famous provision of the 1996 law provides immunity to digital sites for whatever third-party content they host. The law also states that these interactive services can’t be held liable for actions taken in good faith to restrict access to, or the availability of, material deemed “objectionable.”
Apparently, in the case of the LGBTQ YouTubers, the Department of Justice intervened just last month to defend the constitutionality of Section 230.
Government lawyers told the court,
“Section 230(c) does not regulate or limit Plaintiffs’ primary conduct, such as their expressive activities. For example, Plaintiffs do not allege that Section 230(c) prevents them from creating videos or posting them on the Internet.”
As mentioned, Section 230 was meant to enable moderation of content found to be ‘objectionable’.
This is becoming increasingly difficult as national and international political decisions become intertwined with expression on the platforms.
Consider Twitter’s decision to fact-check some tweets by President Donald Trump.
One can simultaneously consider that an account reposting copies of Trump’s tweets was suspended within three days.
A Facebook page that copied Trump’s posts was flagged for violence.
The original intention of Section 230 was to encourage moderation.
Yet, arguably, not everyone shows the same moderation, and not everyone will be moderated equally.
Platforms were supposed to be able to decide what to delete. However, the Department of Justice has issued a proposal to change this by striking the provision that allows platforms to delete content deemed objectionable.
If that happens, it will have severe repercussions for the distribution of power in moderation.
Facebook has been building its own oversight board, separate from the company yet connected to it.
As such, the racism on digital platforms at issue in this lawsuit is set against a larger struggle for moderation power.
This lawsuit takes direct aim at Section 230 of the Communications Decency Act, claiming the site applies “Restricted Mode” to videos tagged “Black Lives Matter,” among other names, words and phrases.
In a recent executive order published on the 28th of May, President Trump argues:
“Online platforms are engaging in selective censorship that is harming our national discourse.”
In this sense the President is arguing against what he calls selective censorship, and, arguably, for his freedom to use hate speech. Remember, if moderation were applied fairly, he would have been banned many times over; yet in this executive order he is targeting Section 230.
Twitter has attempted to make a statement, but by doing so they have opened up far larger questions in terms of moderation.
What does a social media company do when a President refuses to show moderation? They moderate the President.
Twitter even had to mark an outright lie by Trump about how the democratic process in the United States was to be conducted. It did so by adding an exclamation mark linking to facts, although several statistics show that many people read only the headline rather than clicking on any links or add-ons.
If a lie damaging to the democratic progress of an entire country is perpetuated by the most powerful person in the given country, then should the tweet have been moderated to the point of deletion?
If we move back to the case of Black Lives Matter, we find moderation applied unevenly, and the apparent injustice remains.
On the one hand it seems the most powerful person in the country cannot be moderated.
On the other hand black people protesting with videos tagged as ‘BLM’ can be marked with ‘Restricted Mode’. The ‘Restricted Mode’, YouTube notes, is an optional feature used by less than 2 percent of its users.
But with two billion users…
2 000 000 000.
2% of 2 000 000 000 = 40 000 000.
If 40 million people are restricted from seeing even mild content about BLM, this should be reconsidered.
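The back-of-envelope arithmetic above can be written out as a quick sketch (the two-billion user count and the “less than 2 percent” figure are taken from the reporting above, not verified data):

```python
# Rough scale of Restricted Mode usage, using the figures quoted above.
total_users = 2_000_000_000   # ~2 billion YouTube users (assumed figure)
restricted_percent = 2        # "less than 2 percent" use Restricted Mode

# Integer arithmetic avoids floating-point rounding on large counts.
restricted_users = total_users * restricted_percent // 100
print(f"{restricted_users:,}")  # 40,000,000
```

Even as an upper bound, the point stands: a small percentage of a very large user base is still tens of millions of people.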
YouTube says it is reviewing the complaint.
Ironically, given how little support Trump has shown for BLM, the President may end up creating a useful precedent.
Trump’s order precludes the government from applying Section 230 protections to claims based on viewpoint discrimination.
The social video site says Restricted Mode helps viewers screen out potentially mature content, but the plaintiffs argue that, as employed, it acts as an improper censor.
Whether they are right or not remains to be decided; however, it is without doubt an interesting case to follow.
Personally, I believe it should be possible to have a video tagged with BLM that has appropriate and informative content without having it enter ‘Restricted Mode’.
Section 230 needs review, but who will step in to moderate if social media companies are deprived of part of their role in this practice?
Is YouTube’s AI Racist? Personally, and unfortunately, I think the answer is yes. Legally, however, it remains to be defined.
Along the way, this poses larger questions about social media moderation and the power tied to freedom of speech.
A President of the United States can spread hate speech and receive mild moderation, while regular citizens can face deletion under the same conditions.
In the end, there are people on the YouTube moderation team, and it is hard to argue against the possibility that they may take actions that infringe upon regulatory frameworks relating to race, being racist in the process, intentionally or unintentionally. Relegating a search term to ‘Restricted Mode’ could make your day easier; however, it has social repercussions that go beyond mere categorisation.
I think in conclusion far more than Section 230 needs review.
This is #500daysofAI and you are reading article 383. I am writing one new article about or related to artificial intelligence every day for 500 days.