COMMUNITY-LED AI AUDIT GUIDE

A practical toolkit for communities to uncover, understand, and challenge harmful AI impacts

The DIVERSIFAIR project is proud to release the Community-Led AI Audit Guide, a resource designed to help communities, civil-society organisations, and independent auditors investigate the real-world effects of AI systems.

Developed by Eticas, pioneers in algorithmic auditing, the guide provides practical methods for assessing AI systems without needing insider access, making accountable AI oversight accessible to all.

WHAT THE GUIDE OFFERS

The guide presents a clear, actionable framework for understanding how AI systems affect opportunities, rights, and daily life. Key features include:

  • Step-by-step auditing process: From identifying a system of concern to data collection, analysis, and recommendations.
  • Socio-technical approach: Combines technical tools (scraping, bot testing, comparative audits) with qualitative methods (interviews, ethnography, lived experience analysis).
  • Participatory model: Ensures communities are actively involved, not just consulted.
  • Seven adaptable audit methods: Suitable for AI systems such as recommender algorithms, pricing tools, facial recognition, and consumer platforms.
  • Case studies: Demonstrating how community-led audits have exposed bias, discrimination, and inefficiency across Europe.

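One of the technical methods listed above, the comparative audit (sometimes run with "sock puppet" bot accounts), can be sketched in a few lines. This is a minimal illustration, not the guide's own tooling: the `query_system` function is a hypothetical stand-in for the platform under audit, returning canned results so the sketch is self-contained.

```python
# Minimal sketch of a comparative ("sock puppet") audit: two synthetic
# personas query the same system, and we measure how much their results
# diverge. `query_system` is a hypothetical stub; a real audit would
# drive a browser or API against the platform under investigation.

def query_system(persona: dict) -> list[str]:
    # Hypothetical canned responses for illustration only.
    canned = {
        "persona_a": ["job_1", "job_2", "job_3", "job_4"],
        "persona_b": ["job_3", "job_5", "job_6", "job_7"],
    }
    return canned[persona["id"]]

def result_overlap(results_a: list[str], results_b: list[str]) -> float:
    """Jaccard similarity: 1.0 = identical result sets, 0.0 = fully disjoint."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b)

# Two personas that differ only in one attribute of interest.
persona_a = {"id": "persona_a"}  # e.g. profile presented as male
persona_b = {"id": "persona_b"}  # e.g. identical profile presented as female

overlap = result_overlap(query_system(persona_a), query_system(persona_b))
print(f"Result overlap between personas: {overlap:.2f}")
```

A low overlap between otherwise identical personas is not proof of discrimination on its own, but it is exactly the kind of quantitative signal a community audit can document and combine with qualitative evidence.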
Whether you are new to AI auditing or an experienced digital rights advocate, the guide equips you to scrutinise AI responsibly and effectively.

WHO THIS GUIDE IS FOR

The guide is accessible to a wide range of users:

  • Civil society organisations documenting algorithmic harms
  • Community groups affected by unfair automated systems
  • Journalists and researchers investigating AI impacts
  • Data scientists and auditors seeking community-centred methodology
  • Public authorities and regulators exploring external audit models

Its participatory model allows communities to shape audits regardless of technical expertise.

HOW TO USE THE GUIDE

The guide covers two main phases:

1. Planning

  • Identify a system with social impact
  • Understand context and potential harms
  • Map stakeholders and assess feasibility
  • Build alliances with communities and organisations
  • Co-design the audit methodology

2. Execution

  • Collect qualitative and quantitative data responsibly
  • Analyse results using robust methods
  • Formulate actionable mitigation strategies
  • Manage limitations and setbacks
  • Communicate findings clearly and effectively

Steps are adaptable to different contexts, timelines, and research capacities.
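As an illustration of the kind of quantitative analysis the execution phase calls for, a simple disparity metric can be computed from collected audit data. The numbers below are assumed toy counts, not figures from the guide, and the "four-fifths rule" threshold is one common convention (from US employment-discrimination practice) rather than a universal standard.

```python
# Illustrative disparity check on audit data (toy numbers, assumed):
# compare positive-outcome rates between a reference group and an
# affected group observed during the audit.

def selection_rate(positives: int, total: int) -> float:
    """Share of observed cases with a positive outcome."""
    return positives / total

def disparate_impact_ratio(rate_affected: float, rate_reference: float) -> float:
    """Ratio below ~0.8 is a common red flag (the 'four-fifths rule')."""
    return rate_affected / rate_reference

# Hypothetical counts, e.g. loan approvals logged during the audit.
rate_ref = selection_rate(positives=60, total=100)  # reference group
rate_aff = selection_rate(positives=36, total=100)  # affected group

ratio = disparate_impact_ratio(rate_aff, rate_ref)
print(f"Disparate impact ratio: {ratio:.2f}")
```

Here the ratio is 0.60, well below the 0.8 rule of thumb, which would flag the system for closer qualitative investigation rather than settle the question by itself.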

WHY THIS GUIDE MATTERS
AI increasingly shapes decisions about employment, policing, content exposure, mobility, and resource allocation. Yet the communities most affected — women, migrants, racialised populations, people with disabilities, and low-income groups — are often excluded from oversight.

This guide offers an alternative: audits led by and for affected communities. It empowers them to gather evidence, challenge harmful practices, and advocate for fairer, more transparent AI.

By supporting community-led audits, DIVERSIFAIR strengthens intersectional fairness and helps ensure AI serves the public good.

#FairerAI

Follow us on LinkedIn @DIVERSIFAIR Project to join the movement!



Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.


This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation under grant agreement 101107969.