Fake AI 

Edited by Frederike Kaltheuner

Meatspace Press (2021)

Book release: 14/12/2021


Chapter 15

Algorithmic registers and their limitations as a governance practice

By Fieke Jansen and Corinne Cath

Europe has been lured in by the siren call of artificial intelligence. Public debate is characterised by snake oil promises of AI’s benefits to the economy, social welfare, and urban development. Here, “AI” is a catch-all phrase used to describe a wide-ranging set of technologies, most of which apply statistical modelling to find patterns in large data sets and make predictions based on those patterns.

Concerns raised about the unpredictable nature and possible societal harms of AI models have given rise to a policy doctrine of ethical and procedural safeguards, the idea being that AI’s “great” potential can be harnessed and its harms mitigated by implementing safeguarding principles of non-binding fairness, accountability, and transparency.

Building on our work as researchers and practitioners in the field of technology and society, we will discuss one of these safeguards, namely algorithmic registers—websites that show what, where and how AI is used in a city. Extolled by some in the AI ethics community as an example of good AI governance, we argue that voluntary ethical and procedural AI guardrails in fact perpetuate the hype and neutralise important critical debate.

Algorithmic registers in Europe

In line with these ethical activities, a number of European cities1 are experimenting with algorithmic registers run by local municipalities. In September 2020, the cities of Amsterdam and Helsinki launched their registers to increase transparency around the deployment of AI in the public sector. These databases openly collect information, supplied on a voluntary basis, about how AI systems are used, which should provide insight into the local uses of these systems. The Helsinki and Amsterdam registers each contain only five entries. Entries are made on an opt-in basis and mostly cover automated government services, including city libraries, hospitals, crowd management and parking control. A little over two weeks after the launch of these registers in the autumn of 2020, the prominent AI ethics scholar Luciano Floridi published an editorial letter in Philosophy & Technology,2 in which he heralded them as solutions for the many challenges of AI governance, not least those related to public accountability and trust in AI.

There are a number of governance assumptions attributed to the registers that we seek to question, especially regarding the ex-post, or after the fact, framework of “accountability through transparency” contained within the register concept. Some of the most harmful AI applications are evidently missing from the Amsterdam and Helsinki registers. The databases contain no mention of known welfare and law enforcement applications. For example, the Amsterdam object detection entry, in which the city is experimenting with gait recognition for crowd monitoring purposes, does not account for the police facial recognition trials taking place in these same locations. This means that some of the most sensitive applications of AI, often implicated in algorithmic discrimination,3 are not currently covered by the registers, and it is unclear if they will be in the future.

The lack of critical engagement by AI proponents with these information voids reveals the registers’ inability to function as a transparency tool between the city and its residents. By defining accountability as transparency through voluntary registration, proponents of algorithmic registers are essentially taking an ex-post approach to governance too: push AI systems as a public utility first (“Just do it”) and ask for forgiveness later. This reinforces the assumption that AI is neutral and should be used “for the greater good”, and neutralises criticism as simply a matter of imperfect information. This governance-by-database eschews difficult conversations about whether AI systems should be implemented at all, and about how these systems advance punitive politics that primarily target already vulnerable populations.

Perpetuating the AI hype

What is even more telling than the registers themselves, however, is the lack of critical engagement by AI proponents with the power structures and political ideologies that shape these governance-by-database solutions. Isolating governance mechanisms from their social, political, and economic context allows for the perpetuation of a discourse that reaffirms the arbitrary notion of “AI for good”. This ignores the fact that most algorithms in use for urban management pre-date the idea of registers and are deeply rooted in a political ideology and organisational culture bent towards efficiency and cost reduction. Efforts to abstract and generalise AI accountability frameworks allow their proponents to move beyond the messy nature of reality and to further depoliticise AI by replacing the outdated idea that technology is “neutral” with the notion that the “great” potential of AI can be harnessed when harms are mitigated through voluntary procedural and ethical safeguards. Lauding the registers without understanding their context discounts concerns about the negative impact of AI on society, because doing so aligns these safeguards with the political environment and commercial interests that enable the AI hype.

The deployment of AI for public services, from the critical (like urban infrastructure, law enforcement, banking, healthcare, and humanitarian aid) to the more mundane (like parking control and libraries), should be done with great caution, not least because these systems are often implemented on top of, and in line with, existing neoliberal politics that are punitive and extractive. As such, AI systems cannot be seen as isolated from the context in which they are deployed. When looking at these registers, and other opt-in accountability frameworks, we need to consider what is missing and who is steering the conversation. Due to the complex nature of governance, registers are only a partial and, at times, overhyped answer to ensuring public accountability for the application of AI systems. Indeed, the ex-post model bypasses critical conversations about the root motivations for rolling out AI systems, and about who is truly served by them.

Fieke Jansen is a doctoral candidate at Cardiff University. Her research is part of the Data Justice project, funded by an ERC Starting Grant (no. 759903).

Dr Corinne Cath is a recent graduate of the Oxford Internet Institute's doctoral programme.


1. Johnson, K. (2020, September 28). Amsterdam and Helsinki launch algorithm registries to bring transparency to public deployments of AI. VentureBeat. https://venturebeat.com/2020/09/28/amsterdam-and-helsinki-launch-algorithm-registries-to-bring-transparency-to-public-deployments-of-ai/

2. Floridi, L. (2020). Artificial Intelligence as a Public Service: Learning from Amsterdam and Helsinki. Philosophy & Technology, 33(4), 541-546. DOI: 10.1007/s13347-020-00434-3

3. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability, and Transparency (pp. 77-91). http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf; Amnesty International. (2020). Netherlands: We sense trouble: Automated discrimination and mass surveillance in predictive policing in the Netherlands. https://www.amnesty.org/en/documents/eur35/2971/2020/en/

