
AUTOMATING (IN)JUSTICE: AN ADVERSARIAL AUDIT OF RISCANVI

This deliverable from our WP3 partner Eticas exposes troubling issues in how AI is reshaping justice. The audit scrutinises RisCanvi, an AI tool used in Catalonia’s prisons, uncovering flaws in transparency, accuracy, and fairness. The findings raise serious concerns about biased decision-making and the consequences for inmates’ futures. Learn how automation might be perpetuating injustice and what can be done to improve accountability.

The adversarial audit highlights key risks around bias, fairness, and transparency in AI systems used in criminal justice, aligning with the DIVERSIFAIR project’s mission to address intersectional fairness in AI. Both initiatives seek to raise awareness of how algorithmic decisions can disproportionately affect marginalised groups and advocate for more ethical and inclusive AI practices. Download the full report below, or check out our detailed article for a complete breakdown.

#BiasFreeAI

Follow us on LinkedIn @DIVERSIFAIR Project to join the movement!



Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.


This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation under grant agreement 101107969.