Algorithmic Assemblages of Power: AI Harm and the Question of Responsibility

Keywords: algorithmic violence, decolonial feminism, human rights, machine learning, artificial intelligence

Abstract

Current debates on AI harm primarily focus on issues directly associated with AI systems, such as algorithmic harm or bias. In this article, I argue that AI harm should be analyzed through a power-aware lens using a systemic and multidimensional approach that accounts for the multiple scales at which harm unfolds—macro, meso, and micro. Reducing AI harm to mere technical failure or a lack of representation in data risks oversimplifying the issue. AI is not just a set of technologies, but a sociotechnical assemblage—a complex interplay of communities, markets, resources, labor, processes, practices, regulations, institutions, and knowledge systems. Its current form not only impacts society at multiple levels but also actively reproduces harm and structural violence at scale, exacerbating power asymmetries both within and across nations. This raises the question of who should be held accountable for harm across multiple scales and what frameworks should be established to address it. A feminist critique of AI frames harm through an ethics of care, emphasizing the need to place human rights at the core of AI governance, ecosystems, and systems.


Publication Facts

Metric                 This article    Other articles
Peer reviewers         3               2.4
Reviewer profiles      N/A

Author statements      This article    Other articles
Data availability      N/A             16%
External funding       N/A             32%
Competing interests    N/A             11%

Metric                 This journal    Other journals
Articles accepted      68%             33%
Days to publication    31              145

Publisher: Grupo de Investigación Cultura Digital y Movimientos Sociales. Cibersomosaguas

Author Biography

Paola Ricaurte Quijano, Tecnologico de Monterrey

Senior researcher at Tecnológico de Monterrey and faculty associate at the Berkman Klein Center for Internet & Society at Harvard University. She is co-founder of the Tierra Común network, which advances the agenda of data decoloniality, and a member of the A+ Alliance for inclusive algorithms, where she coordinates the Latin America and the Caribbean hub of the Feminist AI Research Network. She co-led the project Towards Substantive Equality in Artificial Intelligence: Transformative AI Policy for Gender Equality and Diversity within the Responsible AI working group of the Global Partnership on Artificial Intelligence. She participates in the Expert Group for the implementation of the UNESCO Recommendation on the Ethics of Artificial Intelligence and is a member of the AI Ethics Experts Without Borders network and of UNESCO's Women for Ethical AI (W4EAI) platform.


Published
2025-03-25
How to Cite
Ricaurte Quijano P. (2025). Algorithmic Assemblages of Power: AI Harm and the Question of Responsibility. Teknokultura. Revista de Cultura Digital y Movimientos Sociales, 22(2), 201-208. https://doi.org/10.5209/tekn.101084