DEVELOPING THE DIVERSIFAIR TOOLKITS: A COLLABORATIVE APPROACH

The development of our Intersectional Fairness in AI toolkits was a truly collaborative effort, shaped by insights from civil society, industry, and policy experts. Through extensive consultation, research, and feedback, we sought to create practical resources that help embed fairness in AI systems.

A Research-Driven and Participatory Process

Our approach to building these toolkits included:

  • Surveys within the AI community to gauge awareness and challenges around intersectional bias
  • Interviews with over 20 experts across multiple sectors
  • Focus groups to refine strategies and ensure real-world applicability
  • Incorporation of research from the broader DIVERSIFAIR project

This process allowed us to bridge the gap between academic insights and practical solutions, making the toolkits accessible and actionable for all stakeholders working on AI fairness.

Key Insights from the Survey

One of the foundational steps in our process was a survey conducted within the AI community, aimed at assessing perceptions, challenges, and knowledge gaps related to intersectional bias in AI. The results were revealing:

  • Limited awareness of intersectional bias – While 59% of respondents had experienced discrimination based on multiple factors, only 15% were very familiar with the concept of intersectional bias.
  • Recognition of AI bias risks – A large majority (86.3%) acknowledged that AI systems can perpetuate societal biases, yet 58% of those who disagreed came from the tech sector, highlighting a disconnect between developers and broader awareness of AI’s societal impacts.
  • The need for more education – 87% of respondents believed that public awareness and training on intersectional bias in AI are insufficient, citing a lack of legal frameworks, diversity in AI teams, and expert training.
  • Mixed outlook on AI’s potential for fairness – 48% of respondents were optimistic about AI’s role in promoting fairness, but 26% remained neutral and another 26% were pessimistic, reflecting the challenges of translating ethical AI principles into practice.

These findings reinforced the urgent need for accessible, practical guidance—which directly informed the content and structure of our toolkits.

Collaboration with Experts and Stakeholders

Beyond the survey, we engaged with a diverse network of experts through interviews and focus groups, ensuring that multiple perspectives were incorporated into the toolkit development. We extend our deepest gratitude to the individuals and organisations who contributed their time and expertise, including:

From civil society, we thank Alexander Laufer (Amnesty International), Carolina Judith Medina Guzmán (CAIDP Research Group Member), Eleftherios Chelioudakis (Homo Digitalis), Eva Simon (Liberties Europe), Gabriela Del Barco (Independent Consultant), George Bandy (Alliance4Europe), Mariana Ungureanu (Think Tank 360), Silvia A Carretta (Women in AI) and Özge Yanbolluoğlu Çağlar (FAKT Consult for Management, Training and Technologies).

From industry and technology, we are grateful to A. Rosa Castillo (Data Scientist | ML Engineer), Anastasia Petrova (Meta Souls), Aurelia Takacs (Cisco), Aurelie Mazet (Iron Mountain), Barbara Ruiz Rodriguez (Cdiscount), Chloé Plédel (Hub France IA and European AI Forum), Diego Gosmar (Xcally), Iva Tasheva (CYEN), Leyla el khamlichi (Employee Insurance Agency – Uitvoeringsinstituut werknemersverzekeringen, UWV), Lilian Ho (AECOM), Luigi Lenguito (BforeAI), Paksy Plackis-Cheng (Impactmania), Priska Burkard (TechFace) and Sabrina Palme (PALQEE).

From policy and governance, we appreciate the insights of Anne-Catherine Lorrain (European Parliament), Anca Goron (Romania National Scientific and Ethics Council in AI), Diana Gutierrez (Optim.ai), Enrico Panai (CEN CENELEC), Immaculate Odwera (BlueDot Impact), Mariagrazia Squicciarini (UNESCO), Monique Steijns (Netherlands Scientific Council for the Government), Nicolas Zahn (Swiss Digital Initiative), Sarah Bitamazire (Lumiera), Sebastian Hallensleben (OECD) and Tjerk Timan (Technopolis Group).

(And many more—thank you all for your invaluable contributions!)

The Toolkits

  • Civil Society Toolkit
  • Industry Toolkit
  • Policy Toolkit

Looking Ahead

The toolkits were created through extensive research and collaboration with AI professionals, policymakers, and civil society groups. Interviews, workshops, and focus groups shaped the resources, ensuring they meet real-world needs.

These toolkits are just the beginning. AI fairness is an ongoing challenge that requires continuous collaboration, feedback, and refinement. We encourage all stakeholders—developers, policymakers, civil society, and industry leaders—to explore the toolkits and share their insights.

Explore the toolkits and let us know your thoughts! Each toolkit includes a feedback form where you can share your experiences and suggestions. Your input will help us refine and improve these resources to better support efforts in ensuring fairness, transparency, and accountability in AI.

By working together, we can ensure that AI systems are designed and deployed ethically, inclusively, and equitably.

#FairerAI

Follow us on LinkedIn @DIVERSIFAIR Project to join the movement!



This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation, under grant agreement 101107969.

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.