Gender and racial bias across facial recognition systems

Facial recognition algorithms perform markedly worse at classifying people who are not white men. A study published by the MIT Media Lab in early 2019 found that Rekognition, Amazon's facial recognition system, was substantially less accurate at identifying an individual's gender if that person was female or darker-skinned. Rekognition made zero errors when identifying the gender of lighter-skinned men, but it misclassified women as men 19% of the time and darker-skinned women as men 31% of the time (Raji and Buolamwini 2019). Similarly, an earlier test conducted by the ACLU found that, when scanning pictures of members of Congress, Rekognition falsely matched 28 individuals with police mugshots (Cagle and Ozer 2018).
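To make concrete what "identifying an individual's gender" means here: Rekognition exposes gender as a face attribute through its DetectFaces API, and an audit like the one above tallies how often that predicted label disagrees with a ground-truth label. The following is a minimal sketch, assuming the boto3 SDK, configured AWS credentials, and a placeholder test image and ground-truth label (neither comes from the study itself).

```python
import boto3

# Assumes AWS credentials are configured locally; the image path and the
# ground-truth label below are placeholders for illustration only.
rekognition = boto3.client("rekognition", region_name="us-east-1")

def predicted_gender(image_path):
    """Return Rekognition's gender label for the first face it detects."""
    with open(image_path, "rb") as f:
        response = rekognition.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # required to receive the Gender attribute
        )
    faces = response["FaceDetails"]
    if not faces:
        return None
    return faces[0]["Gender"]["Value"]  # "Male" or "Female"

# An audit compares predictions like this one against ground truth
# across many images, then reports the error rate per demographic group.
label = predicted_gender("test_face.jpg")
ground_truth = "Female"
print("predicted:", label, "| correct:", label == ground_truth)
```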

Shortly after Buolamwini’s report was published, Microsoft, IBM, and the Chinese firm Megvii vowed to improve their facial recognition software; Amazon, by contrast, denied that the research said anything about the performance of its technology (Vincent 2019).

Amazon’s Rekognition system presents a dangerous pairing of inaccuracy and ubiquity; Amazon has been contracted by a host of governments and private firms to supply the technology. The Orlando Police, for example, use the software to identify, track, and analyze people in real time. The system can recognize up to 100 people in a single image and can quickly cross-check the information it collects against databases containing tens of millions of faces (Cagle and Ozer 2018). According to Amazon’s marketing materials, deployment by law enforcement agencies is a “common use case” for Rekognition. Among other features, the company’s materials describe “person tracking” as an “easy and accurate” way to investigate and monitor people (Cagle and Ozer 2018). Civil liberties advocates warn that such technology could easily be turned against vulnerable populations and groups engaged in civil disobedience.
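The cross-checking described above is performed by Rekognition's SearchFacesByImage API, which compares a probe photo against a pre-indexed collection of faces and returns candidates above a similarity threshold (the ACLU's Congress test reportedly ran this kind of search at the service's default 80 percent threshold). The sketch below assumes boto3, configured AWS credentials, and a hypothetical collection name ("mugshot-collection") that has already been populated via IndexFaces; it is an illustration of the API, not a reproduction of any agency's deployment.

```python
import boto3

# Assumes AWS credentials and a Rekognition face collection that has already
# been populated with index_faces(); "mugshot-collection" is a placeholder name.
rekognition = boto3.client("rekognition", region_name="us-east-1")

def match_against_collection(probe_image_path, collection_id="mugshot-collection"):
    """Search a probe photo against an indexed face collection."""
    with open(probe_image_path, "rb") as f:
        response = rekognition.search_faces_by_image(
            CollectionId=collection_id,
            Image={"Bytes": f.read()},
            FaceMatchThreshold=80,  # Rekognition's default similarity cutoff
            MaxFaces=5,
        )
    # Each match carries a similarity score and the ID assigned at indexing time.
    return [
        (match["Face"].get("ExternalImageId"), match["Similarity"])
        for match in response["FaceMatches"]
    ]

print(match_against_collection("probe_photo.jpg"))
```

Raising FaceMatchThreshold reduces false matches at the cost of missing true ones, which is why the threshold chosen by an operating agency matters as much as the underlying model's accuracy.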