Call for papers: Feminist Research 2025. Vol. 16(1). Artificial Intelligence and Gender Stereotypes: Challenges and Opportunities
In recent years, several applications that use Artificial Intelligence (AI) to generate text, images or audiovisual content have become popular. Thousands of users have adopted applications such as ChatGPT, Midjourney, Dreams or DALL-E as tools for work or entertainment. These new technologies have made it possible to speed up creative processes in different arts and to enhance educational practice; however, they have also been used to create fake images linked to disinformation, such as the supposed arrest of Donald Trump or Pope Francis with an LGBTI flag.
AI has revolutionised the creation of digital content, but its development and implementation present significant challenges from a gender perspective. Equal gender representation in the design and implementation of AI is essential to avoid biases that perpetuate stereotypes and discrimination. Currently, the field of AI remains predominantly male-dominated, which can influence the creation of systems that do not adequately reflect the experiences and needs of all people.
This imbalance has a significant impact on the workplace. AI systems can reinforce inequalities in care and domestic employment, since the design of these technologies is often conditioned by a cultural environment that associates women with care-giving roles. In addition, the use of AI in recruitment and selection processes may be marked by gender bias, because the algorithms are designed by people whose values and beliefs can shape the data used to train these systems. The under-representation of women in the STEM sector exacerbates this situation, as their concerns and interests frequently go unrepresented.
Another controversial aspect of AI tools is the presence of gender stereotypes in the representation of women. Several studies have analysed images and content generated by these applications that depict women in traditional and stereotypical roles. This problem reflects and amplifies existing biases in our society, as AI systems learn from the data on which they are trained, which often contain historical and cultural biases. For example, when an AI is asked to generate images of professions such as ‘doctor’ or ‘engineer’, it tends to produce more male representations, while for professions such as ‘nurse’ or ‘secretary’ it tends to generate more female images.
These biases are not limited to professions. AI-generated representations of women often reinforce unrealistic beauty standards, showing idealised bodies that lack diversity. Furthermore, in domestic or family contexts, women are typically depicted in traditional roles as caregivers or homemakers, perpetuating outdated gender stereotypes. Research has also revealed that the language used to describe women in AI-generated texts tends to be more emotional and less oriented towards action or leadership than the language used to describe men. This may have significant implications for how women's roles are perceived and valued in social and professional contexts.
Therefore, it is crucial to develop a digital world from a gender perspective, considering the current social context and existing power relations. The active participation of women in the design and development of technologies is essential to eliminate inequalities in the technological environment and to ensure that AI is fair and effective for all. Only in this way can we maximise the transformative potential of these tools without leaving anyone behind.
Contributions should address the following thematic lines, although other related topics may also be proposed:
- Ethical conflicts of the representation of women through AI.
- Representation of women in images and content created with AI.
- Fake news, deepfakes, Artificial Intelligence and the representation of women.
- Sexualisation of the image of women in images created with AI.
- Technological tools applied to female representation.
- AI applications for social education about gender stereotypes.
- Gender biases in AI algorithms and their impact on female representation.
- Ethical implications of voice assistants with default female gender.
- Perpetuation of traditional gender roles in AI-generated content.
- Impact of lack of diversity in AI development teams on women's representation.
- Ethics in the development of AI applications for women's health.
- Intersectional representation of women in AI systems: race, age, disability and sexual orientation.
- Role of AI in reproducing or mitigating the gender pay gap.
- Comparative analysis of ethical and regulatory frameworks for AI from a gender perspective.
- Ethical challenges in the use of women's biometric data in facial recognition systems.
- Ethical implications of the use of AI in judicial systems and its impact on cases involving women.
- Challenges in accountability and transparency of AI systems affecting women's representation.
Issue coordinated by: Dr. Víctor Cerdán (Universidad Complutense de Madrid, Spain), Dr. Daniel Villa Gracia (Universidad Complutense de Madrid, Spain), Dr. Sonia Carcelén (Universidad Complutense de Madrid, Spain) and Dr. Fernanda Nascimento (Pontifícia Universidade Católica do Rio Grande do Sul, Brazil).
Call for papers
All texts must be original and must be submitted via the platform (email submissions are not accepted), strictly following the rules of the journal: https://revistas.ucm.es/index.php/INFE/about/submissions
Papers may be submitted in English or Spanish. They must not have been previously published, nor be under evaluation by other journals.
It is preferable that they are the result of funded competitive research, carried out by teams with a clear international profile. This journal does not accept essays, chronicles or similar texts.
All articles will be subject to double-blind review.
Please note that, in addition to the Monograph section, the Miscellaneous section remains open all year round and publishes articles in every issue of the journal.
Dates and deadlines
Submission of original articles: Until 31 January 2025.
Review of original articles: February 2025.
Publication: March 2025.