Situating Harms in the AI Governance Landscape: Understanding AI Harms Through the Lens of Interactivity, Temporality, and Intentionality

Authors: Ilina Georgieva (TNO), Tessa Bruijne (TNO), Lieke Dom (TNO), Steven Vethman (Sciences Po)
In: Oxford Intersections: AI in Society
Editor-in-Chief: Philipp Hacker
Year: 2025

This paper explores how risk-based approaches to AI governance currently lack a sufficient understanding of tangible harms and their real-world impacts. Understanding how AI harms emerge in real-life contexts can provide both policymakers and researchers with important insights for their future efforts.

The authors identify three key characteristics of AI harms:

  • Intentionality: Is the AI system built with the intent to do harm?
  • Temporality: Does harm emerge as a singular event, or does it accumulate over time?
  • Interactivity: Does harm emerge through direct interaction with the AI system?

Untangling these characteristics can help researchers and policymakers begin to identify and address the structural, systemic, or procedural elements that enable AI harms to emerge.

Who should use it? 

If you are a researcher, you might use the characteristics of AI harms that we identify as a starting point for analyzing the AI harms you study. You might also extend this set of characteristics with additional insights.

If you are a policymaker, you could use the characteristics of AI harms to assess how current policies help mitigate harm, or where they lack mechanisms to do so. The insights in this paper can assist in drafting new and improved policies that go beyond a risk perspective.

If you are in industry, you might use the insights of this paper to help review the AI systems you build and to understand the potential impact of AI systems more broadly.

If you are in a civil society organisation, you can use the insights from this work to strengthen your position in the domain and on the issues you advocate for.

Access the paper

The paper is accessible at this link. You can also reach out to ilina.georgieva(a)tno.nl or tessa.bruijne(a)tno.nl.

This publication is the first of several studies that the consortium is conducting on AI harms within the context of the DIVERSIFAIR project.

#FairerAI

Follow us on LinkedIn @DIVERSIFAIR Project to join the movement!




Funded by the European Union. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the granting authority can be held responsible for them.


This project has received funding from the European Education and Culture Executive Agency (EACEA) in the framework of Erasmus+, EU Solidarity Corps A.2 – Skills and Innovation, under grant agreement 101107969.