Intelligent conversational agents (chatbots) and gender stereotypes in addressing Gender-Based Violence: A taxonomy of potential threats from a threat modelling approach
Abstract
The use of artificial intelligence (AI) in the fight against gender-based violence (GBV) has gained increasing relevance, particularly through conversational agents, or chatbots, designed to support victims. However, the implementation of these tools is not without risks. This article analyzes the threats associated with the use of intelligent conversational agents in addressing GBV and proposes a taxonomy of these threats based on the threat modeling approach. Drawing on an analysis of the specialized literature and existing deployments, it identifies technological vulnerabilities, algorithmic biases, and ethical limitations that may compromise the security and effectiveness of these tools. Furthermore, the discussion explores how Feminist AI can contribute to the creation of more inclusive systems that are sensitive to the needs of victims. The proposed taxonomy provides a reference framework for designing chatbots with an ethical approach, prioritizing user privacy, autonomy, and security.
License
To support the global exchange of knowledge, the journal Investigaciones Feministas provides unrestricted access to its content from the moment of publication in this electronic edition; it is therefore an open-access journal. The originals published in this journal are the property of the Complutense University of Madrid, and any reproduction, in full or in part, must cite the source. All content is distributed under a Creative Commons Attribution 4.0 license (CC BY 4.0), which must be expressly stated where applicable.