Michigan State Police Facial Recognition Software Wrongfully Accuses Man of Theft

Amidst Black Lives Matter protests and calls for racial justice reform in the United States, the story of Robert Williams, a Black man wrongfully arrested because of faulty facial recognition software, underscores how current technologies risk exacerbating preexisting inequalities.

After facial recognition software used by the Michigan State Police matched Robert Williams’ face to a still image from a surveillance video of a man stealing $3,800 worth of watches, officers from the Detroit Police Department arrived at Mr. Williams’ house to arrest him (Hill 2020). As Mr. Williams sat in the interrogation room, knowing he was innocent of the crime, he had no idea that his would become the first known case in the United States of a wrongful arrest based on a facial recognition algorithm (ibid).

Facial recognition algorithms are becoming increasingly common in policing, despite their troublingly high false-positive rates for racial minorities and women. In this case, the Michigan State Police bought the software from DataWorks Plus, a private firm, for roughly $5.5 million USD. The software uses algorithms (developed by third-party firms such as Rank One Computing in Colorado and NEC in Japan) that map a suspect’s face and then compare it against the Michigan State Police’s database of more than 49 million photos. In 2019, a federal study by the National Institute of Standards and Technology (NIST) demonstrated that algorithms like those in DataWorks Plus’ software were 10 to 100 times more likely to misidentify African-American and Asian faces than Caucasian faces (ibid). The same study found that the algorithms also had disproportionately high false-positive rates for women. In effect, when facial recognition algorithms aid law enforcement, they disproportionately and wrongfully flag women and racial minorities as suspects.
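To make the mechanics concrete, the sketch below is a generic, hypothetical illustration in Python of how a one-to-many face search and a per-group false-positive-rate measurement work in principle. The actual DataWorks Plus, Rank One, and NEC pipelines are proprietary, so every name, threshold, and number here is an assumption for illustration only, not the vendors’ method.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(probe, gallery):
    """Cosine similarity between one probe embedding and each gallery row."""
    return (gallery @ probe) / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe))

# Hypothetical 128-dimensional face embeddings standing in for enrolled photos.
gallery = rng.normal(size=(10_000, 128))   # stand-in for a large photo database
probe = rng.normal(size=128)               # stand-in for the surveillance still

THRESHOLD = 0.5                            # hypothetical decision threshold
scores = cosine_similarity(probe, gallery)
leads = np.argsort(scores)[::-1][:10]      # top-10 candidates returned as "leads"

# Synthetic non-match score distributions for two demographic groups.
# If a model produces systematically higher scores for non-matching pairs
# in one group, a single global threshold yields a higher false positive
# rate for that group -- the shape of the disparity the 2019 study measured.
nonmatch_a = rng.normal(0.30, 0.10, size=50_000)
nonmatch_b = rng.normal(0.42, 0.10, size=50_000)
fpr_a = (nonmatch_a >= THRESHOLD).mean()
fpr_b = (nonmatch_b >= THRESHOLD).mean()
print(f"false positive rate, group A: {fpr_a:.4f}")
print(f"false positive rate, group B: {fpr_b:.4f}")
```

With these synthetic numbers, group B’s false-positive rate comes out roughly an order of magnitude higher than group A’s at the same threshold, which illustrates how a single system can be far more likely to “match” an innocent person from one group than from another.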

The Williams family reached out to defense attorneys but found little help. Although the family carefully explained the situation, most attorneys assumed that the software had accurately matched Mr. Williams’ face with the robber’s, believed him to be guilty, and quoted high fees to represent him in court. Only when the ACLU of Michigan heard the case did the Williams family find legal representation willing to believe that the algorithm could be at fault. Phil Mayor, an attorney at the ACLU of Michigan, explained the organization’s stance to the New York Times: “We’ve been active in trying to sound the alarm bells around facial recognition, both as a threat to privacy when it works and a racist threat to everyone when it doesn’t… We know these stories are out there, but they’re hard to hear about because people don’t usually realize they’ve been the victim of a bad facial recognition search” (ibid).

While Mr. Williams’ case was ultimately dismissed at arraignment, his story alerts citizens and law enforcement agencies around the world to the dangers facial recognition technology poses to public safety, justice, and privacy. Within the American context, the story demonstrates to a country weary and wounded from a history of racial injustice that emerging technologies, such as these algorithms, can quickly and invisibly reproduce the same inequalities that have plagued it for centuries.