The New York Police Department (NYPD) has developed its own algorithmic facial identification system, used both in routine investigations and in response to specific criminal events, such as terrorist attacks (Fussell, 2018). The system compares probe images against biometric data stored in the department's database in order to check for prior criminal records and to identify suspects (Garvie et al., 2016). Critics argue that the system lacks transparency, concealing the internal operations that have produced biased results (ibid.).
Controversy surrounds the system's tendency toward misidentification. Facial recognition has been shown to be reliably accurate only for white male faces (Lohr, 2018); one study found that 'the darker the skin, the more errors arise' (ibid.). The technology's disproportionate inaccuracy with people of color has sparked debate over the impact of such systems on the privacy and civil rights of racial minorities, leading Georgetown lawyers to sue the NYPD over the opacity of its facial recognition software (Fussell, 2017). One of the main barriers to disclosure is that such algorithms are often proprietary; in the NYPD's case, the software was developed by IBM.