In Espoo, Finland's second-largest city, a software system aimed at identifying risk factors associated with children's need for social and medical services has been deployed at dubious ethical cost (Algorithm Watch 2019). The model, developed by the firm Tieto, analyzes anonymized health care and social care data from the city's population, along with client data from early childhood education. In preliminary results, it identified "approximately 280 factors that could anticipate the need for child welfare services" (ibid). The system was unprecedented in Finland; no other program had ever used machine learning to integrate and analyze public service data (ibid). The next iteration of the system will seek to "utilise AI to allocate services in a preventive manner and to identify relevant partners to cooperate with towards that aim" (ibid).
Finnish authorities are taking the ethical and legal issues seriously, but in other cases the use of predictive algorithms by public agencies tasked with allocating social benefits has produced poor and discriminatory results. Moreover, reliance on private companies for the data heavy-lifting (storage, processing, and analytics) adds a further layer of concern: given the proprietary nature of the software these firms produce, problems of algorithmic transparency are likely to surface.