Eticas’ audit of RisCanvi uncovered bias and reliability issues, a crucial step towards transparency in AI. Through ethnographic and comparative audits, it highlighted discrepancies in risk assessments and called for fairer practices in criminal justice AI.

We are delighted to announce that Eticas, a DIVERSIFAIR project partner, has completed its inaugural audit for the project. This comprehensive examination focused on RisCanvi, an AI risk assessment tool used in the criminal justice system of Catalonia, Spain. The audit marks a significant milestone in our mission to enhance transparency and fairness in AI technologies.
Key Findings from the Audit
Titled “Automating (In)justice: An Adversarial Audit of RisCanvi”, the audit uncovered critical deficiencies in the tool:
- Bias in Risk Classifications: Static factors in the risk assessment showed bias against specific demographics, particularly people with challenging backgrounds.
- Reliability Issues: Significant shortcomings were identified in RisCanvi’s reliability, undermining the confidence that inmates, lawyers, judges, and other criminal justice stakeholders can place in its assessments.
- Regulatory Non-Compliance: Although Spanish regulation has mandated audits of automated systems since 2016, RisCanvi had never been audited until Eticas’ examination.
Methodology of the Audit
Eticas employed a comprehensive audit methodology comprising two primary components:
- Ethnographic Audit: Immersive research, including interviews with inmates, legal professionals, and stakeholders within and outside the criminal justice system, provided a holistic view of RisCanvi’s impact.
- Comparative Output Audit: Using public data on inmate populations and recidivism, Eticas compared RisCanvi’s risk factors and behaviours with real-world outcomes. This analysis revealed discrepancies and potential biases within the system (see the sketch after this list).
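To make the comparative output audit concrete, below is a minimal sketch in Python of how such a check might be run on outcome data. Everything in it, including the input file, the column names, and the false-positive-rate disparity measure, is a hypothetical illustration of the general technique, not Eticas’ actual methodology or RisCanvi’s real data schema.

```python
# Minimal sketch of a comparative output audit in the spirit described
# above. The file name and column names are hypothetical placeholders;
# this is NOT Eticas' actual pipeline or RisCanvi's schema.
import pandas as pd

# Hypothetical dataset: one row per inmate, with the tool's risk label
# ("high" or "low"), a demographic group, and the observed outcome
# (1 = recidivated within the follow-up window, 0 = did not).
df = pd.read_csv("inmate_outcomes.csv")  # placeholder file

# False positive rate per group: the share of people who did NOT
# recidivate but were nonetheless labelled high risk by the tool.
fpr_by_group = (
    df[df["recidivated"] == 0]
    .assign(flagged=lambda d: d["risk_label"] == "high")
    .groupby("group")["flagged"]
    .mean()
)
print(fpr_by_group)

# A simple disparity measure: ratio of the worst to the best group FPR.
# Values far above 1 suggest the tool's errors fall disproportionately
# on some groups, the kind of discrepancy an audit like this looks for.
print("FPR disparity ratio:", fpr_by_group.max() / fpr_by_group.min())
```

Error-rate comparisons of this kind are one common way adversarial audits surface group-level disparities that aggregate accuracy figures can hide.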
Adversarial audits play a crucial role in thoroughly evaluating AI systems. They extend beyond technical assessments to consider broader societal implications, emphasising fairness, transparency, and accountability. The RisCanvi audit underscores the impact of multidisciplinary collaboration in shaping responsible AI practices.
This audit of RisCanvi falls within DIVERSIFAIR’s scope by addressing intersectional bias in AI systems used in sensitive areas like criminal justice. Our objective is to develop, apply, and test tools and methods to identify and mitigate biases across sectors. This includes conducting internal and external audits of AI solutions such as risk assessment tools, predictive systems, natural language processing, facial recognition, and matching algorithms. Through this approach, we aim to ensure fairness in AI development and use, contributing to the creation of inclusive AI systems and fostering a more equitable digital future. Read the full audit report on the Eticas website.
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.