We are thrilled to share that DIVERSIFAIR was recently featured in ILDA’s AI & Confusion series, where our lead scientist, Ilina Georgieva, LL.M., spoke about the complexities of bias in AI and why an intersectional approach is critical for creating fairer, more inclusive technologies.
This feature builds on our collaboration with ILDA during the AI Action Summit in Paris, where we were showcased alongside 50 other innovative projects addressing the ethical and social challenges of AI.
In the interview, Ilina discusses:
- How single-axis bias tests often miss harms experienced at the intersections of race, gender, class, and other social factors.
- Concrete practices for reducing bias, including meaningful co-creation with affected communities, incorporating social determinants into datasets, flexible governance, and giving communities agency over their data.
- Organisational challenges, along with actionable steps teams can implement today.
We are grateful to ILDA – the Iniciativa Latinoamericana por los Datos Abiertos – for hosting this insightful conversation. ILDA has been championing ethical, inclusive data practices across Latin America since 2018, promoting research, community-building, and evidence-based public policies to ensure data and technology serve the needs of all.
This collaboration reflects our shared commitment to fostering technology that considers social context and the lived experiences of diverse communities.
📖 Read the full interview here
🌐 Learn more about ILDA
