AI increasingly shapes access to services, opportunities, and rights—but the communities most affected are often excluded from decisions about how these systems work. The Community-Led AI Audit Guide, developed by Eticas as part of the DIVERSIFAIR project, offers a methodology that places communities at the center of AI accountability, transforming auditing into a participatory process that uncovers real-world harms.
What the guide does
The guide provides a practical framework for communities, civil-society organisations, and independent auditors to investigate AI systems. Instead of relying solely on developers' self-assessments, community-led audits (CLAs) produce evidence grounded in lived experience. The guide offers:
- A socio-technical approach: Combines technical tools (scraping, experimental testing) with social methods (interviews, ethnography) to reveal biases and discrimination often missed by internal audits.
- Actionable guidance: Supports communities in pressing for policy change, system redesign, or public accountability.
- Participatory focus: Ensures audits reflect the needs and realities of those most affected.
- Real-world examples: Case studies cover policing tools, facial recognition, ride-hailing platforms, educational surveillance, and more.
Who can use it
The guide is designed for both technical and non-technical users, including:
- Community & advocacy groups representing vulnerable or marginalised populations.
- Civil-society organisations (CSOs) for research, campaigns, or policy engagement.
- Independent auditors & researchers seeking structured methodologies for opaque systems.
- Policy actors & regulators seeking to understand the risks that external audits reveal.
- Affected individuals who want to participate in or initiate audits.
How it works
The guide follows a two-phase process:
1. Planning
- Select a system to audit and map stakeholders.
- Conduct contextual analysis of potential biases or harms.
- Assess data feasibility and form alliances with communities and experts.
- Co-design the audit plan and research questions.
2. Execution
- Collect qualitative and quantitative data ethically.
- Analyse patterns and biases using mixed methods.
- Develop actionable recommendations.
- Document limitations and adapt methods as needed.
Auditing techniques include scraping, sock-puppet accounts, crowdsourcing, ethnography, experimental testing, comparative output audits, and open-source code reviews.
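As a minimal illustration of what a comparative output audit can look like in practice, the sketch below computes per-group favourable-outcome rates from hypothetical tester records (such as decisions collected via sock-puppet accounts) and reports the gap between the best- and worst-served groups. The data, group labels, and function names are invented for illustration; they are not part of the guide itself.

```python
from collections import defaultdict

# Hypothetical audit records: each pairs a tester's group label with the
# decision the audited system returned (1 = favourable, 0 = unfavourable).
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def selection_rates(records):
    """Favourable-outcome rate for each group in the collected records."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

rates = selection_rates(records)
# Demographic parity difference: gap between best- and worst-served group.
disparity = max(rates.values()) - min(rates.values())
print(rates)      # {'group_a': 0.75, 'group_b': 0.25}
print(disparity)  # 0.5
```

In a real audit the records would come from systematically varied test accounts or crowdsourced reports, and the disparity figure would be one piece of evidence alongside qualitative findings, not a verdict on its own.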
Why community-led audits matter
- Reveal hidden harms: Combine lived experience with technical investigation to expose what other audits often miss.
- Build community capacity: Train members to oversee AI systems, interpret technical claims, and advocate for reform.
- Advance intersectional fairness: Highlight harms tied to gender, race, disability, migration status, and socio-economic inequalities.
- Work with limited data: Even partial access or proxy datasets can provide meaningful evidence.
- Drive real-world change: Past CLAs have influenced policy, media coverage, and regulatory investigations.
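The "limited data" point can be made concrete with a small sketch: even a handful of observed outcomes can be resampled (bootstrapped) into a rough confidence interval and compared against a published baseline rate. The sample, the baseline figure, and the helper name below are all hypothetical, used only to show the shape of the analysis.

```python
import random

random.seed(0)  # reproducible resampling for the illustration

# Hypothetical partial sample: outcomes observed for one affected group
# (1 = approved), far smaller than the operator's full dataset.
observed = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]
baseline_rate = 0.60  # assumed published approval rate for the general population

def bootstrap_rates(sample, n_resamples=2000):
    """Approval rates from resampling the small audit sample with replacement."""
    rates = []
    for _ in range(n_resamples):
        resample = [random.choice(sample) for _ in sample]
        rates.append(sum(resample) / len(resample))
    return sorted(rates)

rates = bootstrap_rates(observed)
low, high = rates[50], rates[-51]  # approximate 95% interval from 2000 resamples
print(f"observed rate {sum(observed) / len(observed):.2f}, "
      f"95% CI [{low:.2f}, {high:.2f}]")
# If the baseline rate falls outside the interval, the gap is worth documenting.
```

This kind of back-of-the-envelope estimate cannot replace full access to a system, but it shows how partial or proxy data can still yield evidence strong enough to justify further investigation.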
The Community-Led AI Audit Guide empowers communities to reclaim agency over AI systems. By blending technical tools with lived experience, it makes AI accountability practical, participatory, and grounded in the realities of those most affected—advancing intersectional fairness and public oversight.
