This deliverable from our WP3 partner Eticas exposes troubling issues in how AI is reshaping justice. The audit scrutinises RisCanvi, an AI tool used in Catalonia’s prisons, uncovering flaws in transparency, accuracy, and fairness. The findings raise serious concerns about biased decision-making and the consequences for inmates’ futures. Learn how automation might be perpetuating injustice and what can be done to improve accountability.
The adversarial audit highlights key risks relating to bias, fairness, and transparency in AI systems used in criminal justice, aligning with the DIVERSIFAIR project’s mission to address intersectional fairness in AI. Both initiatives seek to raise awareness of how algorithmic decisions can disproportionately affect marginalised groups and to advocate for more ethical and inclusive AI practices. Download the full report below or check out our detailed article for a complete breakdown.