About

THE PROJECT


DIVERSIFAIR is a pioneering three-year Erasmus+ project (2023–2026) that addresses intersectional fairness in AI systems.

DIVERSIFAIR’s official information sheet can be found on the Erasmus+ project results platform.

Shaping education for fairer AI use: DIVERSIFAIR, a European initiative

[Image: a mixed group of diverse people]

ADDRESSING INTERSECTIONAL BIAS IN AI

DIVERSIFAIR (Diversify with Intersectionally FAIRer Artificial Intelligence) is a pioneering European project funded under the Erasmus+ programme. It aims to address intersectional fairness in the context of AI systems by considering multiple protected characteristics such as race, gender, and social background.

DIVERSIFAIR challenges conventional methods of addressing bias: whereas traditional approaches focus on a single protected characteristic at a time, DIVERSIFAIR works to ensure that AI systems are designed to be inclusive for everyone, especially marginalised communities.


PROMOTING INTERSECTIONAL FAIRNESS IN AI

This project aims to shine a light on the experiences of people who are often overlooked or marginalised in society, and to promote fairness in AI by considering the diverse and interconnected factors that shape people’s lives, such as race, gender, and social background.

By focusing on use cases, data, and models originating or deployed in Europe, the project goes beyond existing approaches to algorithmic bias and fairness. The project aligns with the European Union’s principles of equality and inclusion, striving to create a society where all individuals are treated fairly and with respect.

[Image: a child on the back of a family member]

HOW DO WE UNDERSTAND INTERSECTIONAL BIAS IN THE CONTEXT OF AI SYSTEMS?

Intersectional bias in AI describes the harms experienced by people due to multiple intersecting, and often marginalised, parts of their identity.


For instance, commercial image-recognition tools perform somewhat worse when tested on dark-skinned people than on light-skinned people, and on women than on men. The real flaw emerges, however, when they are tested on dark-skinned women: for this group, error rates range from 20% to 34%.

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77–91). PMLR.
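To make the point concrete, the following is a minimal, hypothetical Python sketch (not taken from the project or the study) showing why single-axis audits can miss intersectional bias. The subgroup numbers are synthetic, chosen only to mirror the pattern reported by Buolamwini and Gebru:

import pandas as pd

# Synthetic audit of a hypothetical gender classifier: one row per prediction.
# Error rates are invented for illustration, not the study's actual figures.
df = pd.DataFrame({
    "skin_tone": ["light"] * 200 + ["dark"] * 200,
    "gender":    (["man"] * 100 + ["woman"] * 100) * 2,
    "correct":   [True] * 99 + [False] * 1      # light-skinned men: ~1% error
               + [True] * 93 + [False] * 7      # light-skinned women: ~7% error
               + [True] * 88 + [False] * 12     # dark-skinned men: ~12% error
               + [True] * 69 + [False] * 31,    # dark-skinned women: ~31% error
})

# Single-axis audits: each disparity looks moderate on its own.
print(1 - df.groupby("skin_tone")["correct"].mean())  # dark: 21.5%, light: 4%
print(1 - df.groupby("gender")["correct"].mean())     # women: 19%, men: 6.5%

# Intersectional audit: the errors concentrate on dark-skinned women (31%).
print(1 - df.groupby(["skin_tone", "gender"])["correct"].mean())

Averaging over a single characteristic dilutes the failure across the larger group; only the joint breakdown reveals who actually bears the cost.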


This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation, under grant agreement 101107969.

Funded by the European Union. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.