Taking the First Steps Forward: Actionable Recommendations for Intersectional Fairness
By Ishani Mohit Udas
As people in technology in an increasingly connected digital society, we stand at the forefront of innovation. With just a few clicks, we have the power to impact millions of lives. Now more than ever, that power demands a guiding beacon to light the path to social justice.
When aiming for social justice, the current working definitions of AI fairness are not enough. Statistical parity and intersectional sub-group fairness do not entirely capture the nuances that separate weak fairness from strong fairness. Aiming for strong fairness means aiming for intersectional fairness. This involves moving beyond a technology-focused algorithmic frame to a socio-technical frame in which vulnerability and openness are cherished, social context is considered, and interdisciplinary and community collaboration is coveted. In what follows, we discuss five themes that highlight how to adopt the intersectional approach and how the notion of AI fairness can evolve. While the recommendations hold generally, some are specifically geared towards data scientists and others in the field of AI; even so, they serve as a reminder that the responsibility for tackling such projects does not lie with them alone. The recommendations are based on a paper (cited below) that details actionable steps to achieve intersectional AI fairness, and they offer a first step towards intersectional fairness.
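To make the gap between these notions concrete, consider how a single-attribute statistical parity check can mask intersectional disparities. The sketch below is a minimal, hypothetical illustration in Python; the attribute names, values, and approval outcomes are invented purely for demonstration.

```python
import pandas as pd

# Hypothetical toy data: one row per applicant, with two protected
# attributes and a binary model decision. All values are invented.
df = pd.DataFrame({
    "gender":    ["W", "W", "W", "W", "M", "M", "M", "M"],
    "ethnicity": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "approved":  [1,   1,   0,   0,   0,   0,   1,   1],
})

# Single-attribute statistical parity: approval rate per gender and
# per ethnicity. Each group is approved at exactly 50%, so both
# single-attribute checks pass.
print(df.groupby("gender")["approved"].mean())
print(df.groupby("ethnicity")["approved"].mean())

# Intersectional subgroup view: approval rate per (gender, ethnicity)
# pair. The same data now shows subgroups approved at 0% and 100%,
# a disparity the single-attribute checks completely masked.
print(df.groupby(["gender", "ethnicity"])["approved"].mean())
```

Both genders and both ethnicities are approved at identical rates, yet the intersectional subgroups range from 0% to 100% approval. And even a subgroup metric that catches this says nothing about social context or power, which is why the themes below matter.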

As AI experts are centred in AI development and practice, they have a decisive role in insisting on the interdisciplinary collaboration that AI fairness requires
Often the challenge of fairness is placed solely on the shoulders of AI experts. A key aspect of adopting the intersectional approach is facilitating dialogue to incorporate multiple perspectives. When we work in interdisciplinary teams, we can harness the knowledge of multiple experts to create a holistic approach to AI development.
Actionable recommendations:
Incorporate viewpoints from various disciplines to understand the problem and set goals from multiple perspectives before diving into technical details. Insist on iterative interdisciplinary collaboration throughout the AI lifecycle. Finally, intentionally spend time building a trusting and open environment in which different ideas and opinions can be shared safely.
Interdisciplinary teams should discuss and document their position in society and reflect on which perspectives are heard and which are still left unheard
Being open about your position transparently shows which factors affect the priorities set by the team. Reflect on the AI, its life cycle and its impact, and document how you aimed (successfully or not) to minimise the gaps in your team. Voice your doubts and concerns, too: they reflect your willingness to be accountable.
Actionable Recommendations:
Reflect on and discuss your own position in terms of power and privilege. Documenting this while working on AI projects reveals how your position influences the perspectives you take and the decisions you make. It is also helpful to document the perspectives considered and the decisions made throughout the lifecycle of your AI product, as this promotes open communication and transparency.
Invite people at risk of AI harm to voice priorities and concerns and propose co-ownership in the participation process
In different collaborations, some voices are often more represented than others. Therefore, it is crucial to invite all communities that have a stake and give them a chance to meaningfully participate throughout the AI life cycle.
Actionable Recommendations:
Acknowledge that the AI product has the potential to cause serious harm, and be open to criticism and concerns from impacted communities. Create a platform through which they can safely voice those concerns, sincerely invite them to be part of the participatory process, and make participation financially viable.
Interdisciplinary teams, together with communities, should analyse the power relations between those creating, researching, using, benefiting from and those (potentially) harmed by the AI, within their social context
It is crucial to remember that AI does not exist in a vacuum, and its effects are not limited to people in power. By centring marginalised voices, thoroughly understanding the social context in which the AI will operate, and analysing power dynamics, we can prevent fairness from being an afterthought.
Actionable Recommendations:
Thoroughly examine the role your AI technology will play and ground it in societal context to ensure a realistic representation and understanding of its impact. It is also useful to redefine concepts you aim to achieve (e.g., fairness, transparency, accountability) with power and social context in mind. This allows you to align your priorities and remind yourself of the greater purpose of your AI product.
Given all these perspectives and insights, discuss if and how the opportunities and limitations of measurement and of technological solutions built on data and metrics align with the goal of social justice
It is imperative to acknowledge the political and incomplete nature of data and metrics. To move beyond an algorithm-centric frame, we must change the intention behind the use of data and metrics. We can start by recognising that the chosen data and metrics have a real-world impact. Acknowledge that systemic oppression colours the data and will thereby shape the AI system. Finally, assess the added value and the limitations that the data and metrics bring to the AI system.
Actionable Recommendations:
Question whether an AI solution is what your problem needs. If it is, be critical about the quality of the data you have and insist on a participatory approach to the research. Provide thorough documentation of the research process, including the data used, the researchers' goals, the possible impact on stakeholders and (vulnerable) communities, and the preventative measures taken to mitigate that impact; a minimal sketch of such a record follows below.
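As one deliberately simplified illustration of such documentation, the sketch below records the research process as a structured object, loosely in the spirit of datasheets for datasets and model cards. Every field name and value is hypothetical, not a standard or a real project.

```python
# A minimal, hypothetical documentation record for an AI research
# process. All fields and values below are illustrative assumptions.
research_documentation = {
    "data_used": [
        "national census extract (2020)",
        "follow-up community survey responses",
    ],
    "researcher_goals": "flag loan applications for manual review",
    "known_data_limitations": [
        "undercounts undocumented residents",
        "gender recorded as binary only",
    ],
    "potentially_impacted_communities": [
        "migrant workers",
        "non-binary applicants",
    ],
    "anticipated_harms": [
        "higher false-rejection rates for underrepresented subgroups",
    ],
    "preventative_measures": [
        "per-subgroup error audits before each release",
        "appeal channel co-designed with impacted communities",
    ],
}
```

Keeping such a record versioned alongside the code makes the decisions behind the AI product auditable long after the original team has moved on.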
When we insist on collaboration, we create opportunities to meaningfully include the experiences and knowledge of marginalised voices and other knowledge domains. Through this, we can critically reflect on what our background and experience bring to the table and notice where our gaps are. It is important to remember that filling this knowledge gap is not the sole responsibility of us as AI experts. It can only be done via interdisciplinary collaboration and community communication, which makes it our responsibility to create a psychologically safe and financially viable environment to invite and centre those who have not traditionally been centred. By actively listening to different ideas, critiques and concerns, we can ground the AI in its social context and ensure a thorough understanding of its impact. Given these insights, it is important to discuss if and how the opportunities and limitations of an AI solution, with its attendant data and metrics, align with the overarching goal of social justice.
This is a call for fellow people in the AI domain to step beyond the comfortable bubble shaped by their education and experience. It is a call to venture into unfamiliar territory and invest in educating ourselves so that we can collaborate effectively with interdisciplinary teams and communities. It is a call to make social change a priority. We sincerely hope that the above actionable recommendations make moving towards intersectional fairness and social justice easier.
Our journey towards social justice has begun, will you be joining us?
Recommendations from our paper:
Fairness beyond the Algorithmic Frame: Actionable Recommendations for an Intersectional Approach. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT).
For sector-specific resources, please visit the following:

This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation under grant agreement 101107969.
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.