This toolkit, created through the DIVERSIFAIR Erasmus+ project, aims to support civil society organisations (CSOs) in their work. It provides clear information about intersectional bias in AI, along with materials CSOs can use in their advocacy campaigns. Additionally, the kit offers accessible resources to help inform the general public, raising awareness about these critical issues. By equipping CSOs and communities with these tools, we aim to promote fairness and inclusion in AI systems.
The primary objectives of this kit are to:
- Raise awareness about the concept of intersectional bias in AI.
- Centre the human experience by emphasising the lived realities of those affected by the consequences of intersectional bias in AI.
- Equip CSOs with tools to advocate for fairer AI systems.
- Provide actionable ideas for addressing intersectional bias in AI practices.
The development of this kit was informed by interviews and focus groups with members of the AI community and civil society, conducted to understand their perspectives and knowledge gaps. While this version is tailored to civil society, additional kits targeting industry and policymakers have also been developed under the DIVERSIFAIR project.