Fairness Beyond the Algorithmic Frame: Introducing Our FAccT 2025 Paper

How can AI systems account for complex, overlapping forms of discrimination such as racism, ableism, and sexism? In our latest publication, “Fairness beyond the Algorithmic Frame: Actionable Recommendations for an Intersectional Approach”, presented at ACM FAccT 2025, we argue that AI fairness must move beyond narrow technical definitions and engage with intersectionality in a deeper, more meaningful way.

While intersectionality has gained traction in AI ethics, it is often interpreted through a limited algorithmic lens, primarily addressing group-based disparities through fairness metrics. This risks reducing intersectionality to a technical fix, and overlooks its foundations in power, positionality, and social justice.

This paper, authored by Steven Vethman, Quirine T.S. Smit, Nina M. van Liebergen, and Cor J. Veenman, draws from a thematic analysis of AI fairness literature, enriched by feedback from expert workshops.

We outline five key themes for a broader, practice-oriented approach:

  1. Insisting on collaboration across disciplines
  2. Embedding reflection and recognising positionality
  3. Approaching communities and enabling co-ownership
  4. Engaging with power and broader social context
  5. Critically assessing data framing and fairness metrics

Together, these themes form a foundation for actionable recommendations for AI experts working toward more just systems. The paper also highlights barriers, such as tech-optimism and uncertainty around expertise, while showing that practitioners value tools that help open up these conversations within their teams.

The authors call on developers, researchers, and policymakers to move beyond a checklist approach and instead foster ongoing, interdisciplinary dialogue. Fairness, they argue, is not a technical destination but a political, social, and ethical commitment.

Vethman, S., Smit, Q., van Liebergen, N., and Veenman, C. (2025). Fairness beyond the Algorithmic Frame: Actionable Recommendations for an Intersectional Approach. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT).

#FairerAI

Follow us on LinkedIn @DIVERSIFAIR Project to join the movement!



Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.


This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation, under grant agreement 101107969.