In Allegheny County, Pennsylvania, software containing a statistical model has been designed to predict potential child abuse and neglect by assigning a risk score to every case (Hurley 2018). The score ranges from 0 (lowest risk) to 20 (highest) and is computed from data drawn from eight databases maintained by various agencies, including jails, public-welfare services, and psychiatric and drug-treatment centers. The score helps social workers determine, during case assessment, whether an investigation should be carried out. The system aims to save lives by alerting caseworkers to the most serious cases and by allocating available resources so that these high-risk cases are prioritized. Before the system was introduced, human decision-making had allocated these resources quite inefficiently, screening in 48% of the lowest-risk families for investigation while screening out 27% of the highest-risk families (ibid.).
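The advisory triage flow described above can be sketched as follows. This is a minimal illustration only: the source gives just the score range (0 to 20) and the score's advisory role; the actual Allegheny Family Screening Tool model, its features, and any threshold it uses are not described in the text, so the function name, the cutoff value, and the case data below are hypothetical assumptions.

```python
# Hypothetical sketch of score-based triage. The 0-20 score range and the
# advisory (non-unilateral) role come from the source; the threshold of 15
# and all names here are illustrative assumptions, not the real system.

def recommend_closer_look(risk_score: int, threshold: int = 15) -> bool:
    """Return True if the score suggests a closer look.

    Advisory only: the recommendation goes to a social worker,
    who makes the actual screening decision.
    """
    if not 0 <= risk_score <= 20:
        raise ValueError("risk score must lie between 0 (lowest) and 20 (highest)")
    return risk_score >= threshold

def prioritize(cases: dict[str, int]) -> list[str]:
    """Order case IDs from highest to lowest risk score,
    so limited resources go to the most serious cases first."""
    return sorted(cases, key=cases.get, reverse=True)

# Illustrative usage with made-up case IDs and scores:
scores = {"case_a": 4, "case_b": 18, "case_c": 11}
queue = prioritize(scores)            # case_b first, then case_c, then case_a
flagged = [c for c in queue if recommend_closer_look(scores[c])]
```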
The algorithm has a mixed social impact. The system relies on proxy indicators rather than direct indicators of maltreatment, and these proxies can be biased against certain groups of people (Edes and Bowman 2018). For instance, the proxy known as ‘call re-referral’ measures the number of times a case has been reported by third parties. As it turns out, “anonymous reporters and mandated reporters report black and biracial families for abuse and neglect three and a half times more often than they report white families” (Hurley 2018). On the other hand, the algorithm’s discriminatory potential is mitigated by a few key features. Not only is it owned by public authorities, but it operates with a high degree of transparency and does not act unilaterally (it only recommends to a social worker whether a case should be given a closer look) (ibid.). These embedded restraints help reduce the system’s negative societal impact.