Biased algorithms are determining whether parents get to keep their kids

Social workers are under enormous pressure when deciding whether to devote resources to investigating reports of suspected child abuse. These investigations can ultimately result in the separation of families in the interest of a child’s wellbeing. To relieve some of this burden, local jurisdictions across the United States have implemented algorithms that assess case criteria to help determine whether intervention is necessary (Brico 2019). For example, the Allegheny Family Screening Tool, an algorithmic instrument used by the Allegheny County Office of Children, Youth and Families (CYF), treats parents’ prior arrests and mental health histories as critical factors (Brico 2019). The Florida Department of Children and Families contracted the analytics firm SAS to build its own modeling system; according to public documents, that system reviews data such as Medicaid status, criminal justice history, and substance-use treatment history to advise whether family separation is appropriate.

While these algorithms lift a heavy burden from social workers, they have also converted a legacy of discriminatory family separation into ostensibly neutral lines of code. The models are trained to make intervention decisions from data sets in which low-income families have historically lost their children at much higher rates than middle- and higher-income families (Brico 2019). As noted above, they reach decisions by examining variables that correlate with socioeconomic status, such as welfare benefits and criminal justice history. In effect, these algorithms have learned to “codify poverty for child maltreatment” (Brico 2019). In the words of scholar Virginia Eubanks (2018), “the model confuses parenting while poor with poor parenting.”

Three-quarters of child protective cases involve neglect rather than abuse, where neglect means a lack of food, clothing, supervision, or suitable living conditions for the child (Brico 2019). These deficiencies are often the daily realities of living in poverty. Yet the algorithmic models largely rely on historical criteria that treat indicators of poverty as grounds for intervention and family separation. The result is the digital legitimization and automation of discrimination against the poor. This is unjust, and it demonstrates the danger of training an algorithm on data sets that harbor historical bias.
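
To make that mechanism concrete, the toy sketch below (in Python) is purely illustrative: it is not the Allegheny Family Screening Tool or the SAS model, and every feature name, rate, and labeling rule in it is invented. It shows how a model trained to predict past screening decisions, when those decisions fell disproportionately on families flagged by poverty proxies such as welfare receipt or Medicaid enrollment, ends up weighting those same proxies as “risk.”

```python
# Hypothetical illustration only: a risk model trained on historically biased
# screening decisions learns to treat poverty proxies as risk factors.
# All features, rates, and the label-generation rule below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Invented proxy features that correlate with socioeconomic status.
receives_welfare    = rng.binomial(1, 0.30, n)
medicaid_enrolled   = rng.binomial(1, 0.35, n)
prior_arrest        = rng.binomial(1, 0.15, n)
actual_maltreatment = rng.binomial(1, 0.05, n)  # unobserved ground truth (invented base rate)

# Assumed historical bias: past screeners intervened more often when poverty
# proxies were present, largely independent of actual maltreatment.
p_screened_in = (0.05
                 + 0.25 * receives_welfare
                 + 0.20 * medicaid_enrolled
                 + 0.15 * prior_arrest
                 + 0.10 * actual_maltreatment)
screened_in = rng.binomial(1, np.clip(p_screened_in, 0, 1))

# Train on the biased historical labels, using only the proxy features.
X = np.column_stack([receives_welfare, medicaid_enrolled, prior_arrest])
model = LogisticRegression().fit(X, screened_in)

# The learned coefficients load on the poverty proxies, so the "risk score"
# reproduces the historical pattern of flagging poor families.
for name, coef in zip(["receives_welfare", "medicaid_enrolled", "prior_arrest"],
                      model.coef_[0]):
    print(f"{name:>18}: {coef:+.2f}")
```

Under these invented assumptions, the printed coefficients are largest for the poverty proxies, so a family’s score rises simply because it is poor; this is the sense in which historical bias in the training data becomes encoded in an ostensibly neutral model.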