Automating (In)Justice: An Adversarial Audit of RisCanvi
With the integration of predictive algorithms and AI systems, the criminal justice system is undergoing a profound transformation that demands close scrutiny. In Europe, the RisCanvi tool, used in Catalonia, Spain, since 2009, is at the center of this discussion. Eticas conducted the first adversarial audit of RisCanvi to evaluate its effectiveness and fairness. The reverse-engineering audit, entitled “Automating Injustice: An Adversarial Audit of RisCanvi,” took a socio-technical approach and uncovered significant deficiencies in the tool’s reliability and in its ability to provide the necessary assurances to inmates, lawyers, judges, and criminal justice authorities.
The audit combined two methods: an Ethnographic Audit, based on interviews with inmates and with personnel both inside and outside the criminal justice system, and a Comparative Output Audit, which compared public data on the inmate population and recidivism against RisCanvi’s risk factors and the behaviors it tracks. The results indicated that RisCanvi does not meet the required standards of reliability and fairness.
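To illustrate the kind of comparison a Comparative Output Audit performs, the sketch below computes simple Pearson correlations between risk-factor scores and an observed recidivism outcome. The factor names, score ranges, and data are invented placeholders for illustration only; they are not RisCanvi’s actual variables, weights, or the audit’s exact procedure.

```python
import random
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic stand-in data: scores of 0-4 per factor and a 0/1 outcome.
# On real data, a factor whose correlation with the outcome is near zero
# (or whose correlations with other factors look arbitrary) would be a
# red flag worth investigating further.
random.seed(0)
n = 500
factors = {
    "factor_a": [random.randint(0, 4) for _ in range(n)],
    "factor_b": [random.randint(0, 4) for _ in range(n)],
}
recidivism = [random.randint(0, 1) for _ in range(n)]

for name, scores in factors.items():
    print(f"{name} vs recidivism: r = {pearson(scores, recidivism):+.3f}")
```

In an actual audit, the synthetic lists would be replaced by published aggregate statistics, and the analysis would also examine factor-to-factor correlations for the inconsistencies the report describes.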
Key Findings: A Flawed System
- Lack of Reliability and Fairness: Overall, the tool falls short of the standards of reliability and fairness that a risk-assessment instrument used in criminal justice should meet.
- Inmate Disempowerment: Inmates lack legal support and awareness of the system and of their risk classification, preventing them from meaningfully participating in or challenging RisCanvi’s findings.
- Lack of Professional Understanding: Professionals who use RisCanvi, such as lawyers and psychologists, often lack a full understanding of how it works and have limited influence over its results.
- Missing Human Intervention: RisCanvi operates predominantly in an automated manner with minimal human intervention; its results are modified in fewer than 5% of cases.
- Arbitrary Correlations: Eticas’ reverse engineering revealed arbitrary correlations between risk factors, indicating a lack of consistency in assigning risk to inmates.
- Regulatory Non-compliance: RisCanvi does not meet the transparency and oversight requirements of the recently enacted EU AI Act.
- Lack of Accountability: There is insufficient documentation and transparency regarding RisCanvi’s decision-making processes and the data used to train its AI model.
- Ethical and Social Implications: Reliance on historical data can perpetuate discrimination against marginalized groups, and the lack of meaningful human oversight can dehumanize the legal process.
Recommendations
Based on these findings, the audit concludes that RisCanvi does not currently provide the necessary guarantees to inmates, attorneys, judges, and criminal justice agencies. While these findings are not conclusive due to limited access to system data, there is sufficient evidence to warrant further investigation.
- Further Investigation: Continue studying the system to verify that it is fair and unbiased.
- Increase Transparency and Accountability: Improve documentation and transparency of decision-making processes and data used to train RisCanvi.
- Human Oversight and Participation: Ensure meaningful human oversight and allow inmates and professionals to meaningfully participate in and challenge the outcomes of the system.
- Regulatory Compliance: Ensure that RisCanvi meets the transparency and oversight requirements of the EU AI Act.
- Address Ethical and Social Concerns: Confront the potential for perpetuating discrimination against marginalized groups and the dehumanizing aspects of automated decision-making.
These steps are critical to building trust in the system and ensuring that it operates with fairness and equity.