Various algorithmic instruments have been introduced into the penal system across local jurisdictions in the United States to predict recidivism and to identify crime “red zones” where police deployment should be prioritized. This trend echoes the US Justice Department’s National Institute of Corrections’ promotion of software intended to create a fairer, more efficient criminal justice system (Angwin, 2016). However, such initiatives have brought with them their own unintended consequences.
The New York City Police Department, the largest police force in the US, has deployed crime-forecasting software that operates on individual and geographical variables drawn from historical crime data, such as gang affiliation, criminal records, and the territorial distribution of crime events (Winston, 2018). Beyond criticism for racial bias, the predictive system has been questioned for its opacity, since the sources of its data sets are not publicly available (Winston, 2018).
Recidivism-prediction systems aim to circumvent human biases and shortcomings. In principle, they should serve as a valuable tool in the hands of government, reducing discrimination against vulnerable populations while enabling judges to make more accurate decisions. These systems have been promoted as a necessary avenue to a fairer justice system and safer communities. Nevertheless, this vision has failed to materialize.