Announcing the Results of the First DIVERSIFAIR Audit:

Evaluating RisCanvi, a Tool Used in Criminal Justice Systems

We are delighted to announce that Eticas, a DIVERSIFAIR project partner, has successfully completed its inaugural audit for the project: a comprehensive examination of RisCanvi, an AI risk assessment tool employed within the criminal justice system of Catalonia, Spain. Through ethnographic and comparative audits, Eticas uncovered biases and reliability issues in the tool and highlighted discrepancies in its risk assessments, calling for fairer practices in criminal justice AI. This audit marks a significant milestone in our mission to enhance transparency and fairness in AI technologies.

Key Findings from the Audit

Titled “Automating (In)justice: An Adversarial Audit of RisCanvi”, the audit uncovered critical deficiencies in the tool:

  • Bias in Risk Classifications: The audit found that static factors in RisCanvi’s risk assessments introduce bias against specific demographic groups, particularly people from disadvantaged backgrounds.

  • Reliability Issues: The audit identified significant shortcomings in RisCanvi’s reliability, undermining the confidence that inmates, lawyers, judges, and other criminal justice stakeholders can place in its assessments.

  • Regulatory Non-Compliance: Despite Spanish regulations mandating audits for automated systems since 2016, RisCanvi had not undergone scrutiny until Eticas’ examination.

Methodology of the Audit

Eticas employed a comprehensive audit methodology comprising two primary components: 

  • Ethnographic Audit: This involved immersive research, including interviews with inmates, legal professionals, and stakeholders within and outside the criminal justice system, providing a holistic view of RisCanvi’s impact. 

  • Comparative Output Audit: Using public data on inmate populations and recidivism, Eticas compared RisCanvi’s risk factors and behaviour with real-world outcomes. This analysis revealed discrepancies and potential biases within the system; a simplified illustration of this kind of check follows below.
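
The full report details Eticas’ actual statistical approach; purely as an illustration of the kind of check a comparative output audit involves, the minimal sketch below compares hypothetical risk classifications against observed recidivism outcomes and computes per-group error rates. All column names and figures here are invented for the example and are not drawn from the audit or from RisCanvi’s data.

```python
# A minimal, hypothetical sketch of a comparative output audit check.
# Column names and figures are invented for illustration; this is not
# Eticas' methodology or RisCanvi's data.
import pandas as pd

# Toy records: demographic group, whether the tool flagged the person as
# high risk, and whether the person actually reoffended.
records = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":   [1,   1,   0,   0,   1,   1,   1,   0],
    "recidivated": [1,   0,   0,   0,   0,   1,   0,   0],
})

def group_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Per-group false positive rate and overall high-risk rate."""
    rows = []
    for group, g in df.groupby("group"):
        non_recidivists = g[g["recidivated"] == 0]
        rows.append({
            "group": group,
            # Share of people who did NOT reoffend but were flagged high risk.
            "false_positive_rate": non_recidivists["high_risk"].mean(),
            # Share of the whole group classified as high risk.
            "high_risk_rate": g["high_risk"].mean(),
        })
    return pd.DataFrame(rows)

print(group_rates(records))
```

Run over real inmate-population and recidivism statistics rather than toy data, persistent gaps in rates like these between demographic groups are the kind of discrepancy a comparative output audit is designed to surface.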

Adversarial audits play a crucial role in thoroughly evaluating AI systems. They extend beyond technical assessments to consider broader societal implications, emphasising fairness, transparency, and accountability. The RisCanvi audit underscores the impact of multidisciplinary collaboration in shaping responsible AI practices.

This audit of RisCanvi falls within DIVERSIFAIR’s scope by addressing intersectional bias in AI systems used in sensitive areas like criminal justice. Our objective is to develop, apply, and test tools and methods to identify and mitigate biases across various sectors. This includes conducting internal and external audits of AI solutions, such as risk assessment tools, predictive systems, natural language processing, facial recognition, and matching algorithms. Through this approach, we aim to ensure fairness in AI development and use, contributing to the creation of inclusive AI systems and fostering a more equitable digital future. Read the full audit report on the Eticas website.


Results from this audit have also been featured in the Spanish newspaper El País.

 

About Eticas 

Eticas is the world’s first algorithmic auditing company, having conducted adversarial audits of systems used by YouTube, TikTok, Uber, insurance providers, and the Spanish government, examining their impact on radicalisation, migrant representation and discrimination, bias against people with disabilities, workers’ rights, and protection for victims of gender violence. Find out more about Eticas.

 

About the DIVERSIFAIR project 

DIVERSIFAIR is an Erasmus+ project that aims to address intersectional bias in AI and mitigate the discriminatory impact of AI on people’s lives. Committed to advancing AI technology that is fair, unbiased, and inclusive, we work to raise social awareness, influence policy-making, and provide future-proof AI training.

#BiasFreeAI

Follow us on LinkedIn @DIVERSIFAIR Project to join the movement!




Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.


This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation under grant agreement 101107969.