RisCanvi, an algorithm to predict whether a prisoner will commit a violent offence if temporarily released from jail

We humans have always dreamt of being able to predict other people’s behaviour, to know beforehand what others are going to do. While developments in psychology have helped us understand people’s behaviour and roughly estimate how others may act in some circumstances, predicting actual behaviour is still seen almost as a superpower.

Maybe that’s why it’s not surprising that many of the algorithms being developed and used around the world, by private companies and public administrations alike, have the explicit or implicit aim of predicting people’s behaviour.

Security and justice are two fields that have shown particular interest in algorithms tasked with predicting how people will behave. And that’s where RisCanvi, one of the most notorious algorithms in Spain, is being used to predict whether a prisoner will commit a violent offence if let out temporarily.

The government of the Catalan region started using RisCanvi in 2009 to evaluate the risk that a prisoner would commit a violent offence if allowed temporary freedom. In principle, the appeal of the algorithm is great: it currently considers 43 different factors, drawn from historical data as well as from data entered by the officers in charge of the case at hand, to evaluate each prisoner’s situation and produce an automated risk evaluation, which should then merely inform the recommendation on whether or not to let the prisoner out temporarily.
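To make the mechanics concrete, here is a minimal sketch of how a factor-based scoring system of this kind could work. RisCanvi’s actual factors, weights and thresholds are not public, so every name and number in the sketch is a hypothetical stand-in, not the real model:

```python
# Purely illustrative sketch of a factor-based risk score.
# RisCanvi's actual factors, weights and cut-offs are not public:
# every name and number below is a made-up assumption.

# Hypothetical subset of factors with assumed weights
# (RisCanvi reportedly uses 43 factors in total).
RISK_FACTORS = {
    "prior_violent_offence": 3.0,
    "substance_abuse": 2.0,
    "unstable_employment": 1.0,
}

def risk_level(case: dict) -> str:
    """Combine binary factor values for one prisoner into a risk label."""
    score = sum(
        weight for factor, weight in RISK_FACTORS.items() if case.get(factor)
    )
    # Illustrative cut-offs, not RisCanvi's real ones.
    if score >= 4.0:
        return "HIGH"
    if score >= 2.0:
        return "MEDIUM"
    return "LOW"

# Example: one prior violent offence, no other flagged factors.
print(risk_level({"prior_violent_offence": 1}))  # -> MEDIUM
```

Even in this toy version, the design choice is visible: once the factors and cut-offs are fixed, every case is reduced to a single number, and whatever biases are baked into the chosen factors are applied automatically to every prisoner.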

However, in reality the use of RisCanvi is more problematic. The historical data are based on statistics about prisoners who reoffended in the past: their crimes, socioeconomic conditions, mental health, drug addictions…

Reportedly, having been a victim of abuse yourself or having a relative who is also in prison are two of the indicators that make RisCanvi more likely to consider you a potential reoffender. Such historical data may therefore encode biases against particular social groups and minorities that have traditionally been discriminated against by the judicial system.

Then, as happens with other predictive algorithms used in sensitive fields, in practice humans seem in the majority of cases to simply accept the evaluation produced by the algorithmic system by default.

Also reportedly, the Catalan authorities have released only one study about prisoners’ recidivism [PDF] that includes detailed information about the use of RisCanvi. It is based on a very small sample of 410 prisoners who were granted temporary release during 2010 and had received at least one full evaluation by the algorithm (they represent 12% of the total sample of prisoners included in the study).

According to the figures in that study, 42.7% of the prisoners whom RisCanvi rated as having a high or medium risk of committing a violent offence did not, in fact, reoffend when given temporary permission to leave prison.
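The study reports that percentage but not the underlying counts, so the numbers below are hypothetical, chosen only to illustrate what a 42.7% rate of false positives among flagged prisoners means in practice:

```python
# Hypothetical counts: the study gives the 42.7% figure, not the raw
# numbers behind it, so these values are made up for illustration.
flagged = 150  # prisoners rated medium or high risk by the algorithm

# 42.7% of those flagged did not go on to reoffend (false positives).
false_positives = round(flagged * 0.427)

print(f"{false_positives} of {flagged} flagged prisoners did not reoffend")
# -> "64 of 150 flagged prisoners did not reoffend"
```

In other words, if those figures held, roughly four out of every ten prisoners flagged as risky would have posed no problem on temporary release.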

More comprehensive data, covering a much bigger sample in particular, would be needed to get an idea of RisCanvi’s potentially unfair impact on people’s chances of being temporarily released from jail. But that one report already points to the problematic nature of predictive algorithms: reducing the complexity of human behaviour to a set of predetermined factors may lead to the automation of unfairness.

That’s also one of the reasons why Eticas is currently running a project to develop a guide for conducting external audits of algorithms, using RisCanvi as one of the case studies.

You can find information about RisCanvi in its entry in the OASI Register, which as of now lists almost 70 different algorithms. You can also read more about the social impact of algorithmic systems on the OASI pages. And you can tell us about an algorithm we are missing by submitting a simple online form.