Paving The Path: Moving Towards Strong Intersectional Fairness

By Ishani Mohit Udas

AI Fairness

23 May 2016: ProPublica published an article detailing how COMPAS, a risk assessment algorithm that estimated a criminal defendant’s likelihood of reoffending, was biased against Black people. In a judicial system where the odds are already stacked against Black defendants, the supposedly objective algorithm misclassified them as being at higher risk of violent recidivism more often than white defendants. Widespread use of COMPAS meant further propagation of racial bias in US courts and a lower threshold for stripping Black defendants of their dignity.

In recent years, concerns have grown about supposedly ‘neutral’ AI algorithms perpetuating bias. As a result, research efforts to make AI fairer have boomed, and some researchers claim to have achieved “fair” outcomes by debiasing algorithms trained on datasets like COMPAS. But what does fairness in the context of AI actually mean?

In AI fairness research, fairness is often defined as “the equality of a statistical measure between protected (marginalised) and unprotected (privileged) groups”. The focus is thus primarily on equalising negative outcomes across different groups, also known as statistical parity. On paper, reducing fairness to a statistical problem with a mathematical solution seems like the easiest proposition. But is the answer to the complex issue of achieving fairness a simple yes or no, 0 or 1? Can we truly equate statistical equality with having achieved fairness in AI? The answer is no: there are instances where an algorithm meets the fairness standard for each individual group but not for intersectional subgroups, which makes this definition of AI fairness incomplete.
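To make that gap concrete, here is a minimal sketch with entirely made-up numbers (the ethnicity labels, group sizes and flagging decisions below are hypothetical). The classifier flags exactly half of each ethnicity and half of each gender, so statistical parity looks satisfied for both attributes taken separately, yet every intersectional subgroup is treated in a maximally unequal way:

```python
# Toy illustration (hypothetical data): statistical parity can hold for each
# attribute on its own while failing completely at the intersections.
from itertools import product

# Each record: (ethnicity, gender, flagged) -- all values are invented.
predictions = (
    [("A", "man",   1)] * 50 +   # this subgroup is always flagged
    [("A", "woman", 0)] * 50 +   # this subgroup is never flagged
    [("B", "man",   0)] * 50 +
    [("B", "woman", 1)] * 50
)

def flag_rate(records):
    """Share of records with a positive ('flag') prediction."""
    return sum(r[2] for r in records) / len(records)

# Marginal rates: 0.50 for every ethnicity and every gender, so the usual
# statistical-parity check passes on both attributes.
for attr, idx in [("ethnicity", 0), ("gender", 1)]:
    for value in sorted({r[idx] for r in predictions}):
        group = [r for r in predictions if r[idx] == value]
        print(f"{attr}={value}: flag rate {flag_rate(group):.2f}")

# Intersectional subgroup rates: 1.00 or 0.00 -- as unequal as possible.
for eth, gen in product("AB", ["man", "woman"]):
    subgroup = [r for r in predictions if r[0] == eth and r[1] == gen]
    print(f"ethnicity={eth}, gender={gen}: flag rate {flag_rate(subgroup):.2f}")
```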

[Image: two groups of people on a scale. Icon elements from Canva.]
Weak AI Fairness

6 March 2023: WIRED and Lighthouse Reports published an investigation into the welfare algorithm used by the city of Rotterdam in the Netherlands. The article revealed that the algorithm was being used to pinpoint who should be investigated for fraud rather than to calculate the amount of welfare aid citizens should receive. It barely worked better than random selection, yet it was being used to flag people and potentially, severely alter the trajectory of their lives.

Out of 315 variables in the algorithm, 54 were based on subjective inputs from case workers. Converting these subjective inputs into binary variables stripped away nuance, and therefore context, from the data, which made a significant difference to the assigned score. Based on just a few variables, such as gender and proxy data points for ethnicity, a non-Dutch woman was far more likely to be flagged for fraud than a Dutch man in the same situation. The algorithm discriminated on the basis of ethnicity and gender, and it had further flaws that made it both inaccurate and unfair.

 

This case shows how difficult it is to achieve true fairness, owing to the overlapping discriminatory effects that people face at the intersections of their identities. The way groups are defined in AI fairness has therefore been changing: these groups now increasingly include subgroups that face one or more different forms of discrimination, such as racism, sexism and ableism. It is, however, almost impossible to consider all subgroups equally without splitting them so finely that the only data point left is the individual. Furthermore, this is still a highly technical interpretation of fairness that looks to algorithmic solutions for systemic issues. Thus, intersectional subgroup fairness, while more inclusive and stronger than statistical parity alone, is still a weak form of fairness.
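A rough back-of-the-envelope sketch shows why exhaustive subgroup checks break down so quickly. The attribute names, category counts and population size below are all hypothetical, but the multiplicative growth is the point: a handful of attributes already produces hundreds of subgroups, each containing only a few dozen people at best, which is too few for reliable statistics.

```python
# Hypothetical attribute counts, chosen only to illustrate the combinatorics.
from math import prod

attributes = {
    "ethnicity": 5,
    "gender": 3,
    "age_band": 6,
    "disability": 2,
    "income_band": 4,
}

n_people = 30_000  # size of a modest municipal dataset (assumed)

n_subgroups = prod(attributes.values())
print(n_subgroups)              # 720 intersectional subgroups
print(n_people / n_subgroups)   # ~42 people per subgroup on average, assuming
                                # an even spread (real subgroups are far more
                                # uneven, so many are much smaller)
```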

From Weak to Strong AI Fairness

The call for stronger AI fairness is not just based on idealistic intentions. It is truly commendable how far we have come in recognising the need for fairer AI and in making huge strides towards that goal. This critique is not meant to belittle the efforts made thus far, but to serve as a reminder that we still have a long way to go before truly achieving social justice.

The AI systems we create, which many users blindly trust, are inherently non-neutral owing to the faulty and/or missing data they are built on. This means an AI model’s output (considered gospel by many) is at best a partial perspective on the truth. It is here that we run the risk of making this partial perspective the whole story for the people it adversely impacts. The problem of fairness that AI models such as COMPAS or Rotterdam’s welfare algorithm are trying to solve goes beyond algorithmic solutions: these systems are cogs in a much larger, unfair system. Statistical fairness can only do so much; if the social and power context of the issue is not considered, the AI model becomes nothing more than a technological band-aid on wounds that are innately human.

It is important to remember that AI is not a done deal. It is still a work in progress, which means we retain the right, responsibility and capability to make meaningful change. The responsibility for achieving fairness, however, does not lie with AI professionals alone. We can move towards AI fairness when we acknowledge and document the limits of our knowledge and of AI technology, adopt an interdisciplinary approach and share the responsibility with other disciplines. It is also useful to take a step back and question whether the problem at hand requires an AI solution at all and, if so, to critically examine the data you have and will work with. When we start to incorporate these changes, we begin our journey away from a purely algorithmic or data-centred frame. Slowly but surely, we will make our way towards a more socio-technical frame, one which warmly invites and allows us to place social justice at the centre of AI innovation.

[Image: a person walking from Fairness to Social Justice. Icon elements from Canva.]

*All sources used in the article are linked below 

Actionable Recommendations

  • Thoroughly examine the role your AI technology will play and ground it in societal context to ensure a realistic representation and understanding of its impact. It is also useful to redefine concepts you aim to achieve (e.g., fairness, transparency, accountability) with power and social context in mind. This allows you to align your priorities and remind yourself of the greater purpose of your AI product. 
  • Try to incorporate viewpoints from various disciplines to gain multiple perspectives in understanding the problem and setting goals. Insist on iterative interdisciplinary collaboration throughout the AI lifecycle.  
  • Question whether an AI solution is what your problem needs. If so, be critical about the quality of data you have and insist on a participatory approach to the research. 
  • Provide thorough documentation of the research process. This includes the data used, the researcher’s goals, the possible impact on stakeholders and the (vulnerable) communities the AI affects, and preventative measures for the same.

Academic Papers:  

Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness  

Are “Intersectionally Fair” AI Algorithms Really Fair to Women of Color? A Philosophical Analysis  

Algorithmic reparation  

Towards a critical race methodology in algorithmic fairness  

 

COMPAS investigation:  

How We Analyzed the COMPAS Recidivism Algorithm — ProPublica  

 

Rotterdam Welfare Algorithm investigation:  

Inside the Suspicion Machine | WIRED  

 

Understanding AI bias issues, fairness and AI fairness:  

AI Bias: Good intentions can lead to nasty results | by Cassie Kozyrkov | Medium  

AI Research Is in Desperate Need of an Ethical Watchdog | WIRED  

The Movement to Hold AI Accountable Gains More Steam | WIRED  

AI can be sexist and racist — it’s time to make it fair 

Building AI for the Global South | VentureBeat  

It’s Time to Move Past AI Nationalism | WIRED  

The Race to Harness AI in Enterprise | WIRED  

We Need a New Right to Repair for Artificial Intelligence | WIRED  

Worry About Misuse of AI, Not Superintelligence | WIRED  

What is Fair and What is Just? | Julian Burnside | TEDxSydney  

Stop assuming data, algorithms and AI are objective | Mata Haggis-Burridge | TEDxDelft  

AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED  

Fair is not the default: The myth of neutral AI | Josh Lovejoy | TEDxSanJuanIsland  

How I’m fighting bias in algorithms | Joy Buolamwini  

Gracious AI Ethics | Lisanne Buik | TEDxUniversityofGroningen  

Intersectionality will save the future of science | Shawntel Okonkwo | TEDxUCLA 

The power of vulnerability | Brené Brown | TED  

Chimamanda Ngozi Adichie: The danger of a single story | TED  

Data Feminism    

 

Algorithmic Injustice Stories:

Predictive policing algorithms are racist. They need to be dismantled. | MIT Technology Review  

Racist Algorithms: How Code Is Written Can Reinforce Systemic Racism | Teen Vogue 

How We Did It: Amnesty International’s Investigation of Algorithms in Denmark’s Welfare System – Global Investigative Journalism Network 

Algorithmic Injustice: Mend it or End it | Heinrich Böll Stiftung  

Inside a Misfiring Government Data Machine | WIRED  

Racism and AI: “Bias from the past leads to bias in the future” | OHCHR 

Algorithmic Injustice — The New Atlantis 

Algorithms Policed Welfare Systems For Years. Now They’re Under Fire for Bias | WIRED  

A Move for ‘Algorithmic Reparation’ Calls for Racial Justice in AI | WIRED  

Dutch scandal serves as a warning for Europe over risks of using algorithms – POLITICO  

 

#FairerAI

Follow us on LinkedIn to join the movement!


This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation under grant agreement 101107969.

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.