Automating (In)Justice: An Audit of RisCanvi

The first adversarial audit of an AI criminal justice system in Europe: the RisCanvi tool. Designed to assess inmates' risk of recidivism, it has been in use in Catalonia, Spain, since 2009, influencing parole and sentencing decisions.

12 applications analyzed · 5.5/10 average mark · 35% sell the data

In collaboration with:
Irídia

With the integration of predictive algorithms and AI systems, the criminal justice system is undergoing a profound transformation, one that demands close scrutiny of these new tools.

In Europe, the RisCanvi tool, which has been used in Catalonia, Spain, since 2009, is at the center of this discussion. Eticas conducted the first adversarial audit of RisCanvi to evaluate its effectiveness and fairness. The reverse engineering audit, entitled "Automating Injustice: An Adversarial Audit of RisCanvi," used a socio-technical approach and uncovered significant deficiencies in the tool's reliability and its ability to provide the necessary assurances to inmates, lawyers, judges, and criminal justice authorities.

The audit methodology consisted of an Ethnographic Audit, which included interviews with inmates and personnel both within and outside the criminal justice system, and a Comparative Output Audit, which used public data on inmate population and recidivism.

These data were then compared against RisCanvi's risk factors and behavioral indicators. The results indicated that RisCanvi does not meet the required standards of reliability and fairness.
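As a rough illustration of what a comparative output audit involves, the sketch below groups hypothetical inmate records by assigned risk band and checks whether observed recidivism rates rise with the band. All column names and data are illustrative assumptions, not RisCanvi's actual inputs or Eticas' exact procedure.

```python
import pandas as pd

# Hypothetical inmate-level records: the risk band assigned by the tool
# and whether recidivism was later observed. Illustrative values only.
records = pd.DataFrame({
    "risk_band":   ["low", "low", "medium", "medium", "high", "high", "high"],
    "recidivated": [0,     1,     0,        1,        0,      0,      1],
})

# A calibrated tool should show recidivism rates that rise with the
# assigned risk band; flat or inverted rates suggest unreliability.
print(records.groupby("risk_band")["recidivated"].agg(["mean", "count"]))
```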

01

Lack of Reliability and Fairness: RisCanvi does not meet the required standards of reliability and fairness.

Inmate Disempowerment: Inmates lack legal support and awareness of the system and their risk classification, preventing them from meaningfully participating in or challenging RisCanvi's findings.

Lack of Professional Understanding: Professionals using RisCanvi, such as lawyers and psychologists, often lack complete understanding of its mechanics and have limited influence over its results.

Missing Human Intervention: RisCanvi operates predominantly in an automated manner with minimal human intervention, and changes to its results are observed in less than 5% of cases.

Arbitrary Correlations: Eticas' reverse engineering revealed arbitrary correlations between risk factors, indicating a lack of consistency in assigning risk to inmates (a sketch of this kind of analysis follows this list).

Regulatory Non-compliance: RisCanvi does not meet the transparency and oversight requirements of the recently enacted EU AI Act.

Lack of Accountability: There is insufficient documentation and transparency regarding RisCanvi's decision-making processes and the data used to train its AI model.

Ethical and Social Implications: Reliance on historical data can perpetuate discrimination against marginalized groups, and the lack of meaningful human oversight can dehumanize the legal process.
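The "Arbitrary Correlations" finding above lends itself to a minimal illustration of the kind of analysis involved: computing pairwise correlations between risk-factor scores across inmates. This is only a hedged sketch; the factor names, score ranges, and randomly generated data below are hypothetical placeholders, not RisCanvi's actual items or Eticas' exact method.

```python
import numpy as np
import pandas as pd

# Hypothetical 0-4 scores for four placeholder risk factors across 200
# inmates. These names and values are illustrative, not RisCanvi's items.
rng = np.random.default_rng(seed=0)
factors = pd.DataFrame(
    rng.integers(0, 5, size=(200, 4)),
    columns=["prior_offenses", "substance_use",
             "employment_instability", "social_support"],
)

# Pairwise Pearson correlations between factor scores. In a coherent
# instrument, related factors correlate in criminologically predictable
# directions; correlations that appear or vanish without rationale
# suggest arbitrary scoring.
print(factors.corr().round(2))
```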

02

To help users decide which application to choose for tracking their menstrual cycle, we have prepared this ranking based on our analysis:

The average privacy mark for these apps is a bare pass: 5.5 out of 10. We were surprised to see that some of them do not even have a privacy policy the user can access, as is the case with Menstrual Calendar (developed by SimpleInnovation) and My Fitness (by Xiaomi).

It is also striking that only one of them, WomanLog (developed by Pro Active App SIA), does not sell or share data under any circumstances. Of the rest, most share data with third parties simply as a condition of using the app, while others rely on consent mechanisms and built-in third-party services that leave open the possibility that data is shared in some way. They share not only personal data but also information about the user's health, such as the symptoms she experiences, as happens with Cycles (developed by Perigee), although at least in that case the data is said to be shared anonymously.

03

We end the study alarmed, our worst suspicions about this type of digital service confirmed. Menstruation has historically been neglected as an object of study and care, yet its potential as a pretext for exploiting our personal data has not gone unnoticed. Rather than giving rise to services that protect our privacy and focus on improving our sexual and reproductive health, digitalisation seems to have fostered in period tracking a business model in which the service is little more than bait to capture our data and monetise it. When that data reveals sensitive information that can expose intimate processes or even be used against us in legal proceedings, protecting ourselves and demanding that our data be protected is more urgent than ever.

This study was brief in scope, but we believe there is much more to investigate in this field. Do you have information about data usage by menstrual tracking apps? Please send it to us at info@eticas.tech.
