SHARE YOUR FEEDBACK ON DIVERSIFAIR'S DELIVERABLES

Looking for Your Feedback on DIVERSIFAIR’s Work

At DIVERSIFAIR, we are committed to ensuring that the resources we develop are useful, practical, and reflective of the community's needs. That’s why we regularly ask for feedback on our deliverables. While some of our work has been built collaboratively from the start, we are keen to improve other areas by incorporating diverse perspectives.

This page will be updated as we seek input on different aspects of our work. Right now, we’re looking for feedback on two key areas:

1 – The Toolkits on Intersectional Fairness in AI

The DIVERSIFAIR toolkits on intersectional fairness were developed step by step with input from the AI community. We started with a survey to understand what people needed, then carried out interviews and focus groups with AI professionals, policymakers, and civil society members. The toolkits also build on research findings from other work packages, ensuring they are grounded in thorough analysis. Now, we would love to hear your thoughts on whether they are clear, relevant, and useful.

The toolkits are designed to meet the specific needs of each sector:

  • Civil Society: Aimed at empowering organisations to raise awareness and advocate for inclusive AI, with a focus on the social impact of AI on marginalised communities.
  • Industry: Provides businesses and AI practitioners with strategies for integrating intersectional fairness into AI, helping to identify and mitigate bias while aligning with legal and societal values.
  • Policymakers: Supports policymakers in developing regulations that address AI bias and ensure ethical governance at national and international levels.

2 – AI Regulation Landscape

We are putting together a mapping of the AI regulation landscape. This overview aims to explore how AI regulations currently address intersectional bias and to highlight gaps and areas for improvement. While we strive to make it as comprehensive as possible, we recognise several limitations:

  • Incomplete data: The tool currently includes 94 documents, but this is not exhaustive. Some documents are unfinished, inaccessible, or not yet processed for analysis.
  • Quality of existing tools: Other policy tools, such as the AI Policy Portal and OECD.AI Policy Observatory, have practical limitations, including incomplete data and broken links. They also lack a detailed focus on discrimination and related issues.
  • Geographical focus: The tool primarily covers Western regulations, with less attention to non-Western policies. Expanding this scope is an ongoing priority.
  • Accessibility of legal text: While we aim to simplify legal language, some terms remain complex for non-experts. Natural language processing helps interpret and summarise these texts, but further improvements are needed.
  • Usability of visualisation: The mapping tool may need further refinement to improve clarity and ease of use for different audiences.

How You Can Help

We welcome feedback from AI practitioners, policymakers, researchers, and anyone interested in fairer AI systems. You can contribute by:

  • Reading through the materials and letting us know what works and what doesn’t.
  • Flagging anything that needs more explanation or could be improved.
  • Suggesting additional examples or regulatory insights.

We will be adding more deliverables for review over time, so keep an eye on this page. Your feedback helps make sure that DIVERSIFAIR’s work is useful and relevant to those who need it most.

Share Your Thoughts

The feedback form is available via the button below, as well as in each DIVERSIFAIR toolkit and on the AI Regulation Mapping webpage.

#FairerAI

Follow us on LinkedIn @DIVERSIFAIR Project to join the movement!




Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.


This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation under grant agreement 101107969.