Journalism, artificial intelligence and disinformation: preliminary assessment of the potential use of natural language generation tools, based on the GPT model, for the dissemination of false news
Abstract
We offer a preliminary, non-conclusive reflection on the use of artificial intelligence systems in the field of journalism, specifically generative systems based on the GPT model. We assume that such systems can support journalistic work by taking on repetitive tasks, but that they can also be used as tools for disseminating disinformation. The theoretical framework draws on authors in the field, both for understanding disinformation (Wardle and Derakhshan, 2017; Ireton and Posetti, 2018; Waisbord, 2018; Graves and Lauer, 2020) and for the different aspects of the use of generative systems in journalism (Diakopoulos, 2022; Deuze and Beckett, 2022; Zamith et al., 2022; Pavlik, 2023, among others). We present a dialogic experiment with the ChatGPT conversational bot that demonstrates its vulnerabilities with regard to quality, ethics, and journalistic clarity, as well as its production of bias. The indispensable role of humans in journalistic processes becomes evident through active editorial and fact-checking functions, which guarantee the field's legitimacy in society.