Predictive policing systems have become widespread across the United States and are gaining popularity in the United Kingdom. Kent Police, in south-east England, was the first British force to deploy predictive policing, adopting software from the American company PredPol in 2013. Officers found the technology “really useful”, with street violence falling by 6% after a four-month trial (BBC News 2018). Yet despite the efficiency gains the software offered the force, a closer examination of its internal logic reveals harmful consequences for society.
As Cathy O’Neil (2018) points out in her book “Weapons of Math Destruction”, these systems do not attempt to predict individual behavior directly but instead pinpoint geographical areas where crime is more likely to occur. While this appears to be an objective way to predict and tackle crime, the algorithm proves susceptible to a debilitating feedback loop that inflicts damage on marginalized communities. Because serious and violent crimes occur less often than nuisance or petty crimes, petty crime makes up a larger portion of the algorithm’s data set. As a result, the model tends to direct police attention toward these kinds of crime, which are generally concentrated in working-class and minority neighborhoods (idem). Since police are sent to patrol these specific areas, more criminal activity in these zones is recorded and fed back to the algorithm. This apparent confirmation of the algorithm’s prediction establishes a feedback loop in which working-class and minority neighborhoods continue to be over-policed and over-reported on.
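The dynamic described above can be illustrated with a toy simulation. This sketch is purely hypothetical: the zone names, detection rate, and officer counts are invented for illustration, and the allocation rule (patrols proportional to historically reported crime) is a simplified stand-in for what a geographic prediction system like the one O’Neil describes might do, not PredPol’s actual model.

```python
# Toy model of the feedback loop: two zones with IDENTICAL underlying crime,
# where crime is only recorded when officers are present to observe it.
# All parameters below are invented for illustration.

TRUE_CRIME_RATE = 100         # same underlying crime in both zones, per period
DETECTION_PER_OFFICER = 0.02  # share of crime each patrolling officer records
TOTAL_OFFICERS = 20

def allocate(reported_a, reported_b, total=TOTAL_OFFICERS):
    """Assign officers in proportion to historically *reported* crime."""
    share_a = reported_a / (reported_a + reported_b)
    officers_a = round(total * share_a)
    return officers_a, total - officers_a

# Zone A starts with slightly more reported crime (e.g. heavier past patrols).
reported = {"A": 12, "B": 10}

for _ in range(10):
    officers_a, officers_b = allocate(reported["A"], reported["B"])
    # Reports grow where patrols concentrate, not where crime is higher:
    reported["A"] += TRUE_CRIME_RATE * DETECTION_PER_OFFICER * officers_a
    reported["B"] += TRUE_CRIME_RATE * DETECTION_PER_OFFICER * officers_b

print(reported)  # → {'A': 232.0, 'B': 190.0}
```

Although both zones have exactly the same true crime rate, the small initial gap in reported crime (12 vs. 10) grows to 42 after ten rounds, because each round the zone with more reports receives more officers, who in turn generate more reports. The data appears to validate the model while actually reflecting where police were sent.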
Kent Police suspended the use of PredPol in 2018, five years after its introduction (BBC News 2018). As far as we know, the potential social inequities caused by the system did not motivate this decision.