Co-organised with the BIAS project

KEY INSIGHTS FROM THE INTERSECTIONAL FAIRNESS IN AI WEBINAR

On 10 December 2024, DIVERSIFAIR teamed up with the BIAS Project for a webinar titled “Intersectional Fairness in AI: Legal Frameworks, Tools, and Learning Opportunities.” The event explored the challenges of fairness and discrimination in AI, emphasising how these systems affect society and why an intersectional approach to fairness is needed.

Recording of the webinar

Here are some insights from the event.


The EU AI Act: Progress and Gaps

The EU AI Act, a major focus of the webinar, categorises AI systems by risk level. Recruitment AI, for instance, is high-risk due to its significant impact on people’s lives. While the legislation is a step forward, its risk-based approach assumes fundamental rights are already protected—a view that often overlooks intersectional discrimination.

For AI systems to truly reflect fairness, policymakers must go beyond the Act’s basic legal standards. They need to address the nuanced ways biases intersect and ensure these complexities are accounted for in how the Act is applied.


Tackling Intersectional Bias and Expanding AI Awareness

A critical message from the webinar was the need for inclusive AI regulation. Many AI systems are built with the majority in mind, marginalising minority groups. DIVERSIFAIR underscores the importance of recognising how race, gender, and social class intersect to shape unique experiences of discrimination.


Auditing AI for Real-World Impact

Another vital point was the importance of auditing AI systems to evaluate their real-world impact. It’s not enough to examine training data or algorithms alone. For example, in recruitment, we must assess outcomes: who gets hired and why.

AI often reinforces majority norms, disadvantaging those who do not fit them. DIVERSIFAIR, through its partner the Eticas Foundation, advocates for audits that examine not just algorithms but also their societal impact, ensuring fairness in both design and results.


Looking Ahead

The webinar reaffirmed DIVERSIFAIR’s commitment to promoting intersectional fairness in AI. As technology advances, it’s vital to prevent AI from reinforcing existing inequalities. By collaborating with initiatives like the BIAS Project and pushing for inclusive policies, DIVERSIFAIR is paving the way for a fairer future.

Through continued collaboration and practical action, we can ensure AI works for everyone, regardless of background or identity. AI should drive positive change—not perpetuate inequality.

About the BIAS and DIVERSIFAIR projects


The BIAS Project focuses on addressing algorithmic biases in recruitment processes. Hiring decisions have long been influenced by biases related to gender, race, and social background. With AI now playing a role, there’s potential for greater efficiency—but also a risk of amplifying these biases. The challenge lies in creating AI systems that reduce, rather than perpetuate, discrimination. Find out more about the BIAS project on their website.

Meanwhile, DIVERSIFAIR takes an intersectional approach to bias, recognising how overlapping identities—such as race, gender, and socio-economic status—can lead to unique forms of discrimination. A stark example is the Dutch welfare scandal, where an AI system disproportionately targeted certain groups, revealing systemic biases. DIVERSIFAIR works to ensure AI development is inclusive, so no one is left behind.

DIVERSIFAIR addresses two key gaps in the AI landscape: the lack of expertise in tackling AI-related discrimination and the need for intersectional approaches. By creating educational kits for civil society, industry, and policymakers, the project equips stakeholders with practical tools to identify and mitigate bias. It also fosters collaboration to develop inclusive AI governance frameworks that prioritise fairness and human rights.

#FairerAI

Follow us on LinkedIn @DIVERSIFAIR Project to join the movement!



This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation under grant agreement 101107969.

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.