The Spread of Facial Recognition Technology in US Law Enforcement
A growing issue for people of colour as adoption increases despite a high rate of false positives
A 2016 investigation by the Center on Privacy & Technology at Georgetown Law found that one in four police forces was using facial recognition, and the number has likely grown since then.
New companies have started to use images scraped from social media. According to Recode, technology from a company called Clearview AI was used by more than 600 law enforcement agencies in 2019. Clearview has popularised this practice, claiming to have more images to compare against than the FBI.
However, these facial recognition technologies are often biased against those with darker skin tones. This bias could deepen both inequality and inequity in society.
Racial Inequality and Inequity in Facial Recognition Technology
An imbalance that reflects underlying patterns in society
These technologies are often more effective at matching white faces than the faces of people of colour.
The data that many of these algorithms or systems are trained on is not representative of the populations they are used on.
This was well summarised by US Representative Alexandria Ocasio-Cortez in a 2019 congressional hearing:
“We have a technology that was created and designed by one demographic that is mostly only effective on that one demographic and [technology companies] are trying to sell it and impose it on the entirety of the country.”
A recent article in Verdict mentions that Clearview AI has published a study of its own technology that purports to show it is 100% accurate.
The company says it assessed this using a methodology borrowed from the ACLU.
Yet the report has attracted criticism from the ACLU itself, which called it “absurd on many levels” and accused the company of attempting to manufacture an endorsement.
These technologies produce many false positives.
A false positive is an error in which a test result improperly indicates the presence of a condition (the result is positive) when in reality it is not present — here, the system reporting a match between two faces that belong to different people.
This happens particularly often for people of colour.
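To make the idea concrete, here is a minimal sketch of how a false positive rate can be computed per demographic group. All numbers are made up purely for illustration — they are not real benchmark figures from any facial recognition system.

```python
# Minimal sketch: computing the false positive rate (FPR) per group
# from hypothetical face-match decisions. All figures are illustrative.

def false_positive_rate(results):
    """results: list of (predicted_match, actually_same_person) pairs."""
    # A false positive: the system claims a match, but the people differ.
    false_positives = sum(1 for pred, actual in results if pred and not actual)
    # A true negative: the system correctly reports no match.
    true_negatives = sum(1 for pred, actual in results if not pred and not actual)
    negatives = false_positives + true_negatives
    return false_positives / negatives if negatives else 0.0

# Hypothetical outcomes for two demographic groups (invented numbers):
group_a = [(True, True)] * 90 + [(True, False)] * 1 + [(False, False)] * 99
group_b = [(True, True)] * 90 + [(True, False)] * 8 + [(False, False)] * 92

print(f"Group A FPR: {false_positive_rate(group_a):.1%}")  # 1.0%
print(f"Group B FPR: {false_positive_rate(group_b):.1%}")  # 8.0%
```

An eight-fold gap like the one sketched here is exactly the kind of disparity audits of commercial systems have flagged: overall accuracy can look high while the error burden falls disproportionately on one group.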
This is #500daysofAI and you are reading article 369. I am writing one new article about or related to artificial intelligence every day for 500 days. Towards day 400 I am writing about artificial intelligence and racial inequality.