ANNOUNCING: THE DIVERSIFAIR COLUMN

A Research-Based Series on Intersectional Fairness in AI

What does it mean to build fair AI? Can fairness be reduced to numbers? And who gets left out when we try?

We are delighted to announce The DIVERSIFAIR Column, a five-part research-based series that explores intersectional fairness in AI. Drawing on real-world examples, academic work, and our own paper to be presented at FAccT 2025, this series aims to inform, reflect, and spark dialogue on how AI systems intersect with systems of oppression – and how we might do better.

The articles will be published weekly, starting next week. Our goal is not only to inform but also to open up conversation. Each post examines how fairness is often narrowly defined in technical terms, and how we might begin reimagining it through a social justice lens — one that centres power, context, and community.

Here is what to expect:

#1 – Embracing a New Outlook: An Introduction to Intersectionality and AI Fairness

What’s missing in current fairness debates? We introduce intersectionality as a critical lens to understand how different axes of oppression (race, gender, class, ability, etc.) intersect in the realm of AI. This post makes the case for moving beyond “bias” and toward contextual, intersectional approaches to fairness. Coming soon.

#2 – Entangled in the Web: A Starter’s Guide to Understanding Systemic Oppression

Using the case of predictive policing in Los Angeles, we explore how AI systems become entangled in pre-existing webs of systemic oppression. This post shows that algorithms don’t just reflect injustice — they can deepen it, especially when divorced from the structural context in which they operate. Coming soon.

#3 – A New Frame and an Encompassing Ideal: Intersectionality and the Role of Power

This post focuses on power and visibility in AI systems. From biased facial recognition to health tech’s exclusion of trans users, we ask: Who is seen by the system? Who is imagined? Who is erased? And how can re-centring power help us develop stronger fairness frameworks? Coming soon.

#4 – Paving the Path: Moving Towards Strong Intersectional Fairness

We critically examine statistical parity and subgroup fairness, using cases like COMPAS and Rotterdam’s welfare algorithm to show how weak definitions of fairness fall short. The path forward, we argue, lies in strong intersectional fairness — one that embeds AI within its broader socio-political context. Coming soon.

#5 – Taking the First Steps Forward: Actionable Recommendations for Intersectional Fairness

We close with a roadmap for change, based on our team’s peer-reviewed paper accepted to FAccT 2025. This article outlines five key themes: collaboration, reflection, participation, social context, and data limitations. It’s a call to move beyond the technical — toward AI practices grounded in justice and care. Coming soon.

Why this series?

This column is not just about highlighting what’s wrong — it’s about opening space to do better. All five articles are rooted in academic research, and each is meant to provoke conversation, encourage reflection, and invite participation from across fields.

Whether you are an AI practitioner, policymaker, academic, or advocate, we hope these articles help you think more deeply about what AI fairness should look like — and who it must include.

📌 Follow The DIVERSIFAIR Column on our website and LinkedIn starting next week.

#FairerAI

Follow us on LinkedIn @DIVERSIFAIR Project to join the movement!

This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation under grant agreement 101107969.

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.