Journalism, artificial intelligence and disinformation: preliminary assessment of the potential use of natural language generation tools, based on the GPT model, for the dissemination of false news

Keywords: journalism, misinformation, artificial intelligence, conversational robots

Abstract

We offer a preliminary, non-conclusive reflection on the use of artificial intelligence systems in the field of journalism, specifically generative systems based on the GPT model. We assume that such systems can support journalistic work by taking on repetitive tasks, but that they can also be used as tools for disseminating misinformation. The theoretical framework draws on authors in the field, both for the understanding of misinformation (Wardle and Derakhshan, 2017; Ireton and Posetti, 2018; Waisbord, 2018; Graves and Lauer, 2020) and for the different aspects of the use of generative systems in journalism (Diakopoulos, 2022; Deuze and Beckett, 2022; Zamith et al., 2022; Pavlik, 2023, among others). We present a dialogic experiment with the ChatGPT conversational bot that exposes its vulnerabilities regarding quality, ethics and journalistic clarity, as well as its production of bias. The indispensable role of humans in journalistic processes becomes evident through active editorial and fact-checking functions, which guarantee the legitimacy of the field in society.

Published
2023-12-12
How to Cite
Saad, E., & Carneiro dos Santos, M. (2023). Journalism, artificial intelligence and disinformation: preliminary assessment of the potential use of natural language generation tools, based on the GPT model, for the dissemination of false news. Estudios sobre el Mensaje Periodístico, 29(4), 783-794. https://doi.org/10.5209/esmp.87965