Article

Are the Media Transparent in Their Use of AI? Self-Regulation and Ethical Challenges in Newsrooms in Spain

by M. Ángeles Fernández-Barrero * and Carlos Serrano-Martín
Journalism II Department, Faculty of Communication, University of Seville (US), 41970 Seville, Spain
* Author to whom correspondence should be addressed.
Journal. Media 2025, 6(3), 152; https://doi.org/10.3390/journalmedia6030152
Submission received: 3 July 2025 / Revised: 29 August 2025 / Accepted: 9 September 2025 / Published: 13 September 2025
(This article belongs to the Special Issue Reimagining Journalism in the Era of Digital Innovation)

Abstract

The integration of artificial intelligence (AI) into journalism is rapidly transforming the way news is produced, raising important questions about ethics, transparency, and professional standards. This study examines how Spanish journalists perceive and manage the use of AI in their work. A mixed methods research design is used, combining quantitative and qualitative approaches. The quantitative component consists of a survey administered to a sample of 50 journalists working in newsrooms in various Spanish provinces, selected by random sampling. The qualitative component involves eight in-depth interviews with journalists representing various perspectives on AI use. Although AI improves efficiency in news production, it also introduces ethical concerns, particularly about transparency, authorship, and content accuracy. In the absence of formal regulation, some media and scientific institutions have begun to develop self-regulation protocols. The findings reveal widespread use of AI tools among journalists, although a minority strongly opposes them. Most media outlets lack internal policies on AI use, leading to reliance on personal self-regulation. Transparency is a major concern, as AI involvement is rarely disclosed, raising issues of trust, intellectual property, and editorial responsibility. The lack of clear internal guidelines creates uncertainty and inconsistent practices. Journalists are calling for defined regulatory frameworks to ensure ethical and transparent integration of AI. Without transparency, audience trust can be eroded and journalistic integrity can be compromised.

1. Introduction

The application of artificial intelligence (AI) to the writing and production of journalistic texts has been the subject of intense scholarly debate in recent years, raising numerous ethical concerns at a time when its frequent use in the journalistic profession is becoming a palpable reality.
From the analysis and processing of data prior to writing to the automation of content, assistance in writing, the personalisation of content, verification, transcription, and translation, among other uses, these applications raise numerous ethical dilemmas that have been widely discussed by the scientific community, such as transparency, originality and authorship, the veracity of the content generated, and the risk of technological dependence.
Examining the impact of AI on journalistic writing, particularly the ethical implications of its use, offers a valuable framework for understanding the transformations that the profession is currently undergoing. This perspective not only anticipates shifts in editorial routines and content production, but also highlights the emerging competencies that journalists must develop to operate effectively in AI-integrated newsrooms.
The academic debate has increasingly emphasised the ethical dilemmas posed by AI in journalism, especially in relation to transparency and authorship. A deeper exploration of these issues will contribute to the formulation of self-regulatory mechanisms that can guide responsible use and reinforce public trust in the media. By addressing these challenges, the profession can move toward a more ethically grounded and transparent integration of AI technologies.
In fact, the absence of such codes has led many media outlets and journalists to develop their own internal protocols for the responsible use of AI. The study of all these tools can help to identify areas of interest to be shared and gaps to be filled so standardised patterns can be established.
The aim of this paper is to analyse the positioning of Spanish journalists regarding the use of AI and how they regulate the disclosure of its use in the generation of written and audiovisual content, including the use of algorithms to generate content. Thus, the following research questions are posed:
  • Q1: Are the media really transparent in the use of AI?
  • Q2: How should they be transparent?
The paper begins with a contextual framework that addresses the advance of AI in the journalistic profession and the ethical dilemmas that arise around issues such as transparency and authorship. Within this context, emerging regulatory frameworks and ethical and normative recommendations are analysed, together with some notable self-regulatory proposals and examples of good practice.

2. Literature Review

2.1. AI, Ethics, and Transparency

AI in the field of journalism refers to the use of computer systems capable of performing tasks that traditionally require human intervention, automating news processes and carrying out both mechanical tasks and creative functions, which poses new ethical and professional challenges. However, as Mondría Terol (2023) warns, although AI can optimise journalistic processes, it faces major obstacles that slow its development: the resistance of workers and a lack of adequate technical training. One of the key findings highlighted by Misri et al. (2025) is the widespread lack of ethical knowledge regarding AI among journalists, who often feel unprepared to critically assess the ethical implications of AI-generated content, data-driven decision making, and algorithmic bias. This gap in understanding not only undermines responsible innovation, but also increases the risk of ethical missteps in news production. Drawing on Bourdieu’s field theory, these authors argue that the integration of AI into journalism is reshaping the professional habitus and challenging the established doxa of newsroom ethics. Journalists are navigating a shifting landscape in which traditional norms and routines are being redefined by algorithms and data-driven practices. Their work aligns with broader concerns about ethical standards and the need for institutional guidelines.
De Lara et al. (2022) perceive a context of uncertainty in which professionals are cautious when talking about the usefulness of AI or acclaiming its virtues in newsrooms. They attribute these reservations to the absence of professionals specialised in the use of this technology, the dizzying evolution of the sector, and the uncertainty associated with a technology that can transform traditional journalistic methods and production routines.
Although accurate data are not available, different studies show that the integration of AI in newsrooms is not homogeneous. Nevertheless, it follows a growing trend, and, despite its youth, it is already beginning to reshape the structure of news companies, both in terms of their content production and distribution processes and in the redefinition of their business models (Túñez-López, 2021). In some cases, it is used as a production assistant, generating drafts that are then reviewed by journalists. In others, it is tested to automate routine content, with the aim of freeing journalists from these repetitive tasks so that they have time for more analytical or creative work. It has also been incorporated into data verification, audience analysis, and content personalisation processes. Cools and Diakopoulos (2024) identified sixteen distinct uses of generative AI in journalistic workflows that span the stages of news gathering, production, and distribution. These include the creation of articles, the summarisation of texts, the translation of content, the generation of headlines and visuals, and the optimisation of content for SEO and social media. Journalists also use AI to extract key information, verify facts, and detect trends. In the distribution phase, AI supports personalisation, content moderation, layout optimisation, and A/B testing. These uses reflect a growing integration of generative AI into editorial routines, although often guided by professional intuition rather than formal regulation or ethical frameworks. They are not applied uniformly across all newsrooms; rather, according to Cools and Diakopoulos, their adoption depends on journalists’ intuition, ethical judgment, and familiarity with the tools. Today, new professional figures are already emerging, such as data journalists and algorithm editors, and the need for specific training in AI is being debated. This study complements Wu’s (2024) findings on the value-driven use of AI tools by journalists, highlighting the diversity of professional approaches.
In short, AI has established itself, as Illescas-Reinoso et al. (2025) warn, as a disruptive technology that automates journalistic tasks, optimises workflows, and redefines content generation through advanced algorithms. But this transformation also affects its ethical dimension, as it introduces new challenges related to the transparency, veracity, and authorship of content. The FleishmanHillard (2024) study provides a comprehensive overview of how generative AI is reshaping newsroom operations and reveals deep concerns about ethical risks, misinformation, and job displacement. Reyes-Hidalgo and Burgos-Zambrano (2024) also note this reconfiguration of traditional journalistic practices and warn that automation can create a technological dependency that could jeopardise the integrity of journalistic processes and the authenticity of stories, which, in turn, raises questions about objectivity and algorithmic bias. Simon (2024) offers a critical perspective by exposing the growing dependence of media organisations on technological platforms such as Google, Amazon, and Microsoft. This dependence can limit editorial autonomy and increase vulnerability to changes in the policies of these companies. Furthermore, this author explores how AI not only transforms news production, but also reshapes the public sphere and the relationship between media and audiences. Automated content, algorithmic personalisation, and technological mediation alter how citizens access information and participate in public debate.
Delving deeper into the motivations behind the use of AI by journalists and media organisations, Wu (2024) introduces the concept of ‘value-motivated use’ to describe how journalists engage with AI tools such as ChatGPT based on personal values rather than institutional mandates. This perspective underscores the importance of individual agency in shaping AI adoption, as journalists employ these technologies to foster creativity, preserve autonomy, and uphold ethical standards. Wu’s study also reveals that journalists frequently use AI tools outside the newsroom, in freelance work or personal experimentation. This decentralised use challenges traditional regulatory frameworks and suggests that ethical guidelines must address not only institutional practices, but also individual engagements with AI.
Lopezosa et al. (2024) argue that it is essential to consider the ethical impact, as well as to establish information verification protocols and ensure adequate human oversight in the process of generating journalistic content by AI.
Ufarte-Ruiz et al. (2021) explore the ethical challenges and conclude that the use of AI in newsrooms poses different challenges, which involve guaranteeing people’s privacy and intimacy, contrasting the information produced by this emerging technology, training information professionals in its use and application, promoting transparency in its use and application, detecting and controlling algorithms’ biases, and not losing sight of journalism’s sense of commitment and social responsibility, among other issues.
Following the twelve ethical principles enunciated by Jobin et al. (2019) (justice and welfare; non-maleficence; responsibility; privacy; beneficence; freedom and autonomy; trust; sustainability; dignity; solidarity; transparency; and explainability and accountability), González-Esteban and Sanahuja-Sanahuja (2023) focus on transparency and explainability, together with accountability and supervision, as the basic principles for the establishment of a regulatory framework for newsrooms. In this sense, they warn that it is essential to make clear who has produced, collected, or distributed information, and particularly whether it was a person or an AI application, because ‘only in this way can we judge as affected parties and valid interlocutors whether or not we accept this use and journalistic practice’.
Ufarte-Ruiz et al. (2021), following Hansen et al. (2017), are categorical in this respect, warning that ‘transparency of automation programmes must be guaranteed, because readers have the right to understand how AI is used and the decisions made in understandable terms, without technicalities’. Accordingly, these authors are in favour of declaring the use of AI in the same way that other sources of information are cited, for the sake of veracity; they urge the establishment of guidelines on what can and cannot be automated; and they open the debate on the journalist’s final role in the piece: producer or supervisor.
Along these lines, González-Esteban and Sanahuja-Sanahuja (2023) insist on the need to move toward self-regulation of journalistic activity in response to the growing use of AI. They consider this step essential and advocate following the path traced by academic and political spheres ‘to collectively unravel how we, as societies, want artificial intelligence to be used’.
Karlsson et al. (2023) elaborate on the establishment of ethical parameters for transparency and accountability and outline some recommendations: do not leave editorial decisions to algorithms without considering their harms; provide as much transparency as possible on the rationale for the use of algorithms and their inner workings; and, where multiple algorithms are involved, identify which algorithm has been used, so that accountability can be applied.
But transparency does not only concern the algorithm and the trust that its training generates; it also concerns the journalist’s responsibility regarding the use of AI. Hence, Kent (2015) suggests alternatives such as identifying the source, providing context to the stories being told, reviewing the information provided by AI and checking whether it meets journalistic parameters (for example, when AI is asked to shorten a story) and ethical parameters (the journalist cannot attend only to quantitative logic), adapting the style to offer unified writing, enriching the story with the journalist’s own sources, ensuring the legal use of the content provided by AI, and declaring the use of AI. Kent recalls that outlets such as The Guardian include the following at the end of automated stories: ‘This story was generated by ReporterMate, an experimental automated news system’, which is an exercise in transparency with its readers. RADAR in the UK includes the signature of the journalist, who is considered the author responsible for the text, regardless of the use of AI. This author also explains that other media outlets do not include any such indication, on the understanding that AI use is an increasingly common practice in newsrooms.
The study by Toff and Simon (2025) complicates the notion of transparency in AI-assisted journalism by demonstrating that disclosure alone does not guarantee audience trust. Their study reveals that labelling content as AI-generated can reduce perceived credibility, especially among more informed audiences. However, procedural transparency, such as including the sources used by AI, can counteract these effects. These insights suggest that ethical frameworks for the use of AI in journalism must go beyond superficial disclosure and take into account audience segmentation, media literacy, and the specific context in which AI is applied. Similarly, Jia et al. (2024) find that even minimal references to AI in news articles, whether describing it as a tool, assistant, or collaborator, can negatively influence readers’ perceptions of both the credibility of the source and the reliability of the message, as readers often interpret AI involvement more broadly than intended; this points to the need for transparent, context-sensitive communication strategies that reinforce perceived human editorial control. Additionally, Gondwe (2025) emphasises the role of media literacy and demographic factors (such as age and education) in shaping audience attitudes toward AI-generated content. His findings reveal that audience trust is highly sensitive to how AI involvement is communicated, suggesting that superficial disclosure may not be enough to maintain credibility. Instead, audiences respond more positively when they perceive human editorial oversight and contextual transparency.
Schell (2024) challenges the simplistic notion of transparency as merely labelling AI-generated content, arguing instead for a more nuanced and procedural approach. Her study introduces the concepts of ‘editorial decision-making agency’ and ‘authorial autonomy’ to assess when and how AI involvement should be disclosed. Rather than relying on generic labels, Schell advocates for multi-layered transparency strategies that reflect the complexity of hybrid human–machine authorship. These insights expand the theoretical understanding of journalistic autonomy in the age of AI and support the development of ethical frameworks that go beyond surface-level disclosure.
Despite the collective demand for self-regulation and agreement on key issues regarding the use of AI in news production, the main professional codes of ethics in Spain have, so far, not incorporated specific rules to help the media take an informed stance. FAPE’s Code of Ethics has not yet incorporated specific rules on AI, although the federation has agreed to create a working group to analyse the regulation of AI use in the media. The Network of Professional Associations of Journalists has not adopted a specific ethical agreement, although different associations offer specific training on AI capabilities and their associated risks.
However, Parratt-Fernández et al. (2025) identify, in a specific study, 40 ethical documents (35 of which are guides on AI) that include one or more sections related to AI, with the majority in Europe, out of a total of 84 media organisations analysed. Among these codes, there are some Spanish ones, such as Agencia EFE, Ctxt, and JotDown. In this way, specific newspapers have implemented particular initiatives that include general recommendations on the need to preserve the journalist’s critical sense or the commitment to improve the quality of journalism. In fact, according to these authors, transparency is only partially addressed, and they mention the specific case of Finland and the Council of Mass Media, which calls for the identification of the use of AI and the specific source of the information published.
Transparency is key, according to Verma (2024), insofar as public trust in the media depends, to a large extent, on how content generation using AI technologies is disclosed. This author also emphasises the idea of accountability for errors or biases contained in algorithms. Responsibility, according to Verma, can lie with AI developers, journalists using AI applications, or the media themselves, so there must be protocols that outline the different assumptions and accountability.

2.2. Regulatory Framework

In terms of regulatory frameworks, initiatives by the EU and UNESCO stand out. In recent years, the European Commission has focused its strategy for regulating the use of AI on measures that seek to harmonise the protection and security of citizens with the promotion of the progress and development associated with the benefits of new technologies. To this end, the Commission has launched three interrelated legal initiatives: a European legal framework, embodied in the AI Act; a civil liability framework; and a sector-specific review of security legislation. The AI Act, which entered into force on 1 August 2024, covers various aspects affecting work in newsrooms.
The act itself defines an AI system as ‘a machine-based system designed to operate with varying levels of autonomy and capable of demonstrating adaptability after deployment and which, for explicit or implicit purposes, infers, based on the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments’ and classifies risks into levels (unacceptable, high, limited, and minimal), among which automatic news generation and fact checking are considered to be of limited or minimal risk.
The law specifically addresses aspects such as transparency and accountability as applied to high-risk systems and warns that certain AI technologies must be transparent and explainable, which means that information on how they work must be provided. The law defines transparency in the following terms: ‘that AI systems are developed and used in a way that allows adequate traceability and explainability, while at the same time making people aware that they are communicating or interacting with an AI system and adequately inform those responsible for its deployment of the capabilities and limitations of that AI system and those affected about their rights’ (European Union, 2024, p. 27). Furthermore, according to the text, information on the use of AI systems can help to preserve public trust. Therefore, it argues that generative AI systems should ensure that AI-generated content is identifiable, especially when informing the public about issues of public interest. Applied to journalism, this could be understood to mean that journalists and media outlets should inform users when AI is used to generate content, whether written or audiovisual.
The AI Act also addresses the protection of personal data and the mitigation of biases in AI algorithms.
UNESCO, for its part, launched the first global standard on AI in November 2021, the Recommendation on the Ethics of Artificial Intelligence (UNESCO, 2022), applicable to the various UNESCO member states. In regard to communication and information, point 115 recommends in the second paragraph that member states should encourage the media to make ethical use of AI systems in their work, although the idea of transparency focuses on the functioning of algorithms and the data with which they have been trained. Furthermore, the recommendations emphasise the need for an appropriate and agreed ethical framework to ensure that AI brings benefits, reducing the possibility of misuse.
Regarding the current use of AI in newsrooms, some studies highlight its widespread use, with the aim of automating tasks, streamlining and simplifying them, and improving efficiency and data accuracy (Amponsah & Atianashie, 2024). However, there are differing views, such as those of Manrique (2023), who warns that the sector reflects ‘slowness’, ‘mistrust’, and ‘ignorance’ in its implementation. The author conducts a survey and identifies the following two opposing trends: media outlets reluctant to generate content with AI, such as El País, and media outlets open to experimentation, such as El Español and RTVE.
In terms of content production, the author notes the systematic use of ChatGPT to suggest topics and search for documentation. In addition, the media are investigating the potential of content creation for SEO and summaries, offering readers condensed content that they can expand upon with more in-depth reading or audio reading. All media outlets insist on the necessary supervision of journalists.
According to Simon (2024), the adoption of AI in the media is driven not only by ethical or technological considerations, but also by business imperatives. The pursuit of efficiency, cost reduction, and competitiveness in an increasingly saturated market is one of the key drivers behind the automation of editorial processes, so decisions about the use of AI are sometimes shaped by economic and organisational pressures. These pressures may help to explain the absence of clear ethical protocols in many media organisations. Furthermore, as Misri et al. (2025) warn, larger media outlets such as CBC and The Globe and Mail possess the institutional capacity and resources to experiment with AI tools while simultaneously crafting internal ethical guidelines. By contrast, smaller and local newsrooms often lack the infrastructure and support to participate in similar initiatives, resulting in uneven ethical standards and increased vulnerability to misuse or misunderstanding of AI technologies. This asymmetry not only reflects broader structural inequalities within the media landscape, but also underscores the urgent need for standardised, inclusive, and accessible ethical frameworks that can guide all journalists, regardless of organisation size and technological sophistication.

3. Materials and Methods

With the aim of investigating the positioning of journalists and media outlets regarding the impact of AI on newsrooms and transparency, this research uses a mixed methodology: quantitative methods, based on surveys, to determine the position of journalists and editors on the use of AI and its ethical challenges; and qualitative methods, based on in-depth interviews, to better understand how journalists approach the idea of transparency in the use of AI, exploring perceptions grounded in experience and professional practice.
The choice of this methodology seeks to harness the potential of both methods: quantitative, to statistically measure positioning and frequency of use, and qualitative, to explore in depth certain perceptions, challenges, and opportunities for journalists and audiences.
The survey seeks to investigate the use of different AI tools, especially regarding content production; the instructions and guidelines received, where applicable, on the use of these tools; self-regulation; desirable regulatory systems; and journalists’ perceptions of transparency. These questions provide the statistical data necessary for the initial analysis in the first phase of the investigation.
In designing the questionnaire, the Technology Acceptance Model (TAM) was applied, as it offers a valuable framework for understanding how journalists perceive and engage with AI-driven tools. According to the TAM, the likelihood of adopting a new technology depends primarily on its perceived usefulness and perceived ease of use: AI systems are more likely to be embraced when journalists believe that these tools improve reporting efficiency, enhance content quality, or facilitate investigative processes, while also being intuitive and accessible. This study incorporated the TAM approach by including items that measure aspects such as perceived usefulness (PU), perceived ease of use (PEOU), attitude towards use, and behavioural intention to use.
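To illustrate how such TAM constructs can be operationalised, the following is a minimal sketch of construct scoring from five-point Likert items. The item names (pu_1, peou_1, etc.) and the choice of item means as construct scores are assumptions for illustration, not the study’s actual instrument or analysis.

```python
# Minimal sketch of scoring TAM constructs from five-point Likert items.
# Item names (pu_1, peou_1, ...) are hypothetical, not the study's instrument.
import pandas as pd

CONSTRUCTS = {
    "perceived_usefulness": ["pu_1", "pu_2", "pu_3"],
    "perceived_ease_of_use": ["peou_1", "peou_2", "peou_3"],
    "attitude_towards_use": ["att_1", "att_2"],
    "behavioural_intention": ["bi_1", "bi_2"],
}

def score_tam(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each respondent's 1-5 Likert items per TAM construct."""
    scores = pd.DataFrame(index=responses.index)
    for construct, items in CONSTRUCTS.items():
        scores[construct] = responses[items].mean(axis=1)
    return scores
```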
The target population for this study consists of journalists working in newsrooms, which is a fairly limited universe. For this investigation, a sample of 50 individuals performing editorial tasks was randomly selected from newsrooms in different Spanish provinces, with a varied profile in terms of media, including print, digital, and audiovisual outlets.
The quantitative instrument used was a questionnaire. The form was distributed through email, social media, and the distribution lists of journalist groups, ensuring anonymous and voluntary access for participants. The data collected through the online questionnaire, designed and distributed using Google Forms, were exported in CSV/Excel format for further analysis. Responses were counted and organised using Microsoft Excel, taking advantage of its filtering functions, pivot tables, and basic statistical formulas (such as counts, averages, and percentages) to systematise the information and facilitate its interpretation. Open-ended responses were analysed using the AI tool Perplexity in the early stages of thematic exploration, as an auxiliary tool. A thematic analysis was conducted following the Braun and Clarke (2006) model, which allowed for the identification of recurring patterns in the responses and the construction of an interpretative narrative. This methodology involved several phases of work, including familiarisation with the data, generation of initial codes, search for themes, and review and cataloguing of these themes. Furthermore, the TAM approach was applied by incorporating questions related to motivations and resistance toward the use of AI, specific experiences with particular tools, and, more specifically, ethical and professional perceptions regarding the impact of AI on journalism.
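As a rough illustration of the counting-and-percentage step described above, the sketch below assumes a hypothetical Google Forms CSV export with a yes/no column named used_ai and a semicolon-separated multi-select column named ai_tasks; it is not the authors’ actual analysis script, which was carried out in Excel.

```python
# Sketch of the counting/percentage workflow described above, assuming a
# hypothetical Google Forms export with columns "used_ai" and "ai_tasks".
import pandas as pd

df = pd.read_csv("survey_export.csv")      # CSV exported from Google Forms
users = df[df["used_ai"] == "Yes"]         # respondents who used AI tools

# Share of all respondents who have used AI tools.
print(f"Used AI tools: {100 * len(users) / len(df):.1f}%")

# Percentage of AI users citing each task (multi-select answers, so the
# percentages can sum to more than 100%).
tasks = users["ai_tasks"].dropna().str.split(";").explode().str.strip()
task_pct = (100 * tasks.value_counts() / len(users)).round(1)
print(task_pct)
```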
To develop the qualitative component of the study, eight in-depth interviews were also conducted, based on a semi-structured script. These interviews enriched the analysis by providing deeper insight into areas where the quantitative data revealed controversy or divergence. The selection of participants was guided by a purposive sampling strategy, aimed at capturing a diversity of perspectives on the use of AI in journalism. Responses to the integration of AI into professional practice were categorised into three attitudinal levels: negative, neutral, and positive. The negative response (divided into level 1, rejection, and level 2, distrustful use) was characterised by scepticism or outright rejection, often rooted in concerns about the erosion of journalistic integrity, the dehumanisation of storytelling, and the ethical risks associated with automation. The neutral response (divided into level 1, instrumental use, and level 2, passive acceptance) reflected a pragmatic acceptance of AI as a functional tool, typically used for efficiency in routine tasks, but without deep engagement or critical reflection on its broader implications. In contrast, the positive response (divided into level 1, basic acceptance and responsible use; level 2, critical integration; and level 3, leadership and innovation) encompassed a spectrum of proactive attitudes, ranging from responsible and ethical use to leadership in innovation. Journalists within this category not only integrate AI thoughtfully into their workflows, but also advocate for its ethical application, contribute to the development of newsroom standards, and actively participate in shaping the future of journalism through strategic and informed use of emerging technologies.
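The attitudinal scheme above can be summarised as a small codebook. The sketch below is one possible convenience representation of the categories and sub-levels just described, not the authors’ analysis software.

```python
# A convenience representation of the attitudinal codebook described above;
# the structure is illustrative, not the authors' analysis software.
ATTITUDE_CODEBOOK = {
    "negative": {1: "rejection", 2: "distrustful use"},
    "neutral": {1: "instrumental use", 2: "passive acceptance"},
    "positive": {
        1: "basic acceptance and responsible use",
        2: "critical integration",
        3: "leadership and innovation",
    },
}

def label(category: str, level: int) -> str:
    """Readable label, e.g. label('positive', 2) -> 'positive L2: critical integration'."""
    return f"{category} L{level}: {ATTITUDE_CODEBOOK[category][level]}"
```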
The selection criteria also considered professional roles, types of media outlets, and levels of engagement with AI tools, ensuring representation from different editorial contexts and technological experiences. By incorporating this range of voices, the study aims to reflect the complexity and plurality of journalistic attitudes toward AI and provide a more contextualised and ethically sensitive understanding of its adoption. This approach also responds to the ethical concerns documented in the literature, where journalists express uncertainty and caution regarding the implications of AI in news production.
The profiles of the eight journalists interviewed are detailed in Table 1.
All interviews were transcribed to capture every detail of the recording and analysed to identify key parameters for this study, such as frequency of AI use in journalistic work, level of training or preparation, motives and reasons behind this use, position regarding the need for regulation and ethical implications, position on transparency and methods of disclosing AI use, and personal perspective on the future of AI in the field of journalism and challenges facing the media sector. Personal data from interviewees, as well as any information that could directly or indirectly identify them, were omitted to foster a climate of trust in the treatment of sensitive issues. In such situations, anonymity allows interviewees to express themselves more freely, without fear of professional repercussions or damage to their reputation.

4. Results

Numerous studies highlight the profound transformation of routines taking place in the field of journalism in the face of the unstoppable advance of AI. One of the issues that most concerns professionals and researchers is ethics, more specifically, the transparency of journalists and the media, the attribution of authorship, and the treatment and responsibility that should be given to AI tools.
In this paper, we analyse the results of a survey administered to fifty journalists working in newsrooms, with the aim of understanding their position on the use of AI and transparency in the media.

4.1. Actual Use of AI in Newsrooms

The results of the survey show that the use of AI tools is an everyday reality for most of the journalists surveyed. Specifically, 72% acknowledge having used some type of AI tool in tasks related to their journalistic work in the newsroom, while 28% have not used AI tools (Figure 1).
Journalists who admit to having used AI tools, specifically 72% of those surveyed, mainly point to the following two tasks: translation (58.3%) and automatic content generation (e.g., GPT-3) (52.8%). This is followed, in order of importance, by assistance with writing and editing tasks (38.9%), automatic summary generation (36.1%), and data analysis (36.1%). Other tasks mentioned, although less frequently, include video and audio editing (19.4%), documentation tasks (5.6%), and audio transcription (2.8%). These types of applications are therefore used to assist journalists in their writing tasks, streamline news production, and enhance and enrich the editing and investigative journalism processes.
However, the interviews conducted reveal a very uneven degree of integration among journalists: some use AI tools for occasional, ad hoc support in their production routines, while for others they are a regular tool. This frequency is influenced by various factors, such as age (the younger the journalist, the greater the use of AI tools) and specific training.
Interviewee 1, for example, uses AI daily and argues that ‘it can be used for complete content automation, from news production to the writing of sports reports and chronicles’. According to this interviewee, the main job of the journalist would be to ‘write very detailed instructions for the AI to generate the content’, for which he acknowledges that the journalist must have a strong background and deep knowledge of the subject matter. Getting AI to reach an adequate level of writing, according to Interviewee 1, requires ‘extensive and specific prompts (up to two pages long) that guide AI not only on the content but also on the syntactic structure (subject-verb-predicate, inverted pyramid) and style of the text, allowing the writer to have editorial control over the outcome’. A crucial aspect of this methodology is that it does not allow AI to select its own sources of information; rather, ‘it is the journalist himself who provides and narrows down the necessary sources and data, which contributes to the veracity of the final product, preventing the system from drawing on unreliable sources’.
In addition to complete content automation, Interviewee 1 uses AI for other tasks, such as transcribing audio (e.g., transcribing press conferences to obtain raw content) and processing data from tables or PDF documents and generating content from them.
Interviewee 2, for his part, understands that the concept of AI is so broad that this umbrella can include, on the one hand, applications that can be of great use to journalists without entailing ethical conflicts, such as ‘the transcription of interviews, the conversion of audio files into text, which would be equivalent to the work carried out by a washing machine at home’, and, on the other hand, text generation applications that should not replace the very complex work carried out by journalists when they write a text. Interviewee 2, like the rest, considers that the journalist should, in any case, check the AI-generated content, even the transcriptions, as a particular expression used by the machine may give a different meaning to the one actually intended by the journalist.
Interviewees 3 and 8, along this line, recognise that AI has very useful applications for journalists, for example, for transforming press releases into news or for mechanical tasks, insofar as this optimises work time that the journalist can use to generate more exclusive information, allowing the medium to have a differentiated and quality news offer. Specifically, Interviewee 8 states that ‘it is useful for repetitive tasks, such as transcribing audio, editing video clips, or scheduling social media posts. Also for quick data searches or for summarising long reports. Tasks that do not require thinking like a journalist.’
Interviewee 5, who works at a local radio station, explains that, in his case, AI has been integrated into his daily routines on the radio, from searching for up-to-date information on the programme’s guests to reviewing the scripts for each show. Additionally, the generation of images and sounds adds extra value to programmes. In this regard, AI is used to design podcast cover art and compose melodies, without compromising the quality of the broadcast. Interviewee 7 also describes a use that is highly tailored to the outlet and the environment in which they work: ‘audio transcription, translation of texts in local languages, basic data analysis such as event statistics, and social media monitoring to detect trends. Also, for summarising long reports. However, final writing, in-depth analysis, and interviews should remain in the hands of the journalist.’
Interviewee 4 states that he does not use AI tools in his work as a journalist and believes that ‘it should only be considered a positive tool for the journalist as an occasional support and in no case for editorial tasks’. In the same vein, Interviewee 6 expresses their rejection of incorporating AI into journalistic tasks and warns that ‘its indiscriminate use can dehumanise journalistic storytelling, erode public trust, foster dependence on tools that do not understand context, and prioritise speed and volume over quality and truthfulness’.

4.2. Regulatory and Ethical Flaws

The survey data show that there is a significant gap in the internal regulation of the use of AI tools in the media. In fact, only a minority of respondents (20.8%) say that the media outlet where they work has specific ethical guidelines on the use of AI.
Most respondents have not received instructions from their media outlet (77.5%) (Figure 2, left panel) or their superiors (79.6%) (Figure 2, right panel) and point to the absence of protocols to manage the potential risks associated with automation, such as the reproduction of biases, loss of originality and style, ethical responsibility, and even the risk of misinformation. Despite this shortcoming, the respondents’ data show an almost unanimous consensus (94%) on the need for media organisations to have internal rules on the use of AI.
Most of the interviewees agree with this need and have not received instructions from their superiors or the media organisation where they work about the use of AI. Interviewee 2 suspects that this is due to the trust that the media places in journalists and their professionalism: self-regulation is trusted because it is part of the ethics that govern the principles of truthfulness with which journalists work, materialised in the SPJ Code of Ethics, promoted by the Members of the Society of Professional Journalists (2024), which, among other ethical considerations for journalists, includes the following: ‘Explain ethical choices and processes to audiences. Encourage a civil dialogue with the public about journalistic practices, coverage and news content.’
The most sought-after aspects of these rules are editorial responsibility (90%) and transparency (88%). Respondents also emphasise the protection of personal data (68%) and the prevention of algorithmic bias (70%).
Furthermore, in terms of the measures they believe should be taken to improve transparency, respondents prefer training and education in ethics and the use of AI (76%) and the publication of AI usage policies (76%).
Regarding the in-depth interviews, Interviewees 2, 4, 6, and 8 have not received any instructions from their superiors or their employer about the use of AI. Interviewee 4 insists that the need has not arisen because AI is not commonly used. Interviewee 3 has received some deontological guidelines and, in his environment, journalists have to declare the use of certain AI tools, while Interviewee 6 states that their outlet ‘does not have a specific ethical code for AI because there is an implicit trust that journalists will use their own judgment, although this is problematic since not everyone has the same level of knowledge or scruples.’
Interviewee 1 explains that ‘specific instructions are not really necessary in some media because the fundamental principles of journalism (truthfulness, contrast, responsibility) are already included in existing codes of ethics (such as FAPE).’ From his perspective, these codes are applicable regardless of the technology used to produce the information, so he sees no need to create specific regulations for AI, but rather to apply these existing principles.
Interviewee 5 does not believe that it is necessary to receive specific instructions within media organisations and considers AI to function as a software tool that reduces the time required for creation and research, enriching the journalistic creative process, which is inherently human.

4.3. Transparency and Attribution of Authorship

Transparency is one of the main challenges in the use of AI in journalism, according to this survey. Most of the journalists surveyed (87.8%) acknowledge that their media outlet does not require them to disclose the use of AI tools in editorial tasks, even if it is only used as an auxiliary tool. Only a small proportion of the respondents, specifically 12.2%, say that their media outlet requires journalists to disclose the use of AI (Figure 3).
In the few cases where respondents mention an obligation to inform, only 10.8% do so with a specific, short, and forceful phrase, similar to the one Kent (2015) cites from The Guardian, such as: ‘This content has been produced with AI or with the help of AI’ or ‘News produced with AI under the supervision of an editor’.
When the use of AI is partial, supportive, and instrumental in nature, the respondents reveal that their organisations do not usually declare the use of AI. Only 11.1% of the respondents say that they recognise it, for example, in the case of translation or data collection. Most (88.9%) also do not cite the use of AI as a source.
Regarding authorship of the final text, there are no clear and consistent criteria among the respondents. If AI has been used partially in the writing of a text, the majority consider that the main author is still the journalist (65.1%); 27.9% consider that the author would be the media outlet and, to a lesser extent, 7% consider that the authorship would fall to the AI application. This plurality of criteria reflects the uncertainty that exists in the professional journalism sector with respect to the role that AI plays and will play in the process of generating journalistic content.
In relation to the in-depth interviews conducted, the majority of interviewees are in favour of declaring the use of AI, and even of attributing authorship to the AI application itself when it comes to automatically generated content, with the exception of Interviewees 1 and 8, who are critical of the trend towards declaring AI use. Interviewee 1 argues the following on this matter: ‘AI should be considered a tool, like a calculator for a mathematician or a word processor for a writer, and not a source, so it would not be necessary to declare its use.’ In this sense, he questions the need to cite AI. As a consequence, he does not believe it is necessary for texts to state that AI has been used or which AI application(s) assisted the writer, since ‘the authorship rests entirely with the writer’. He maintains that ‘the journalist is the one who directs, verifies, and takes responsibility for the final content’. For Interviewee 1, the real ethical responsibility lies in the control that the journalist exercises over the process. As he is the one who selects and provides the sources for AI, he ensures that the system does not invent or use unverified information. In this way, the journalist maintains, at all times, authorship of and ultimate responsibility for what is published. AI is an instrument that executes his orders, and any error or bias in the final product is his responsibility as process supervisor.
Interviewee 5 expresses a similar view: ‘AI use should only be disclosed to the same extent as the use of software related to audiovisual production’. For him, it is a tool that is directed and programmed by a human being. Interviewees 4 and 7, in contrast, consider declaration to be extremely necessary because the credibility of the journalist and of journalism is at stake.

4.4. Demands and Proposals from the Sector

Regarding the changes that the journalists surveyed consider necessary to ensure the ethical and transparent use of AI in journalism, the open responses reveal a common concern: AI should be a support tool, but it should not replace journalists.
The respondents mainly call for more training and education (36.7%); clear internal media standards that allow honest action and media support (32.7%); and government regulations (18.4%). To a lesser extent, there is also a demand for collaboration between journalists and technologists (4.1%) and for professional associations to have the power to regulate and verify (2%), as other professional groups already do.
In response to an open question, 2% of the respondents said AI should not be used in journalism. The vast majority said that it should be used honestly, telling readers, listeners, or viewers how it was used, if at all.
The in-depth interviews reflect precisely the dichotomy of the survey: with clear rules and transparency, AI can be an ally for journalists; without regulation and without informing the audience, there is a risk of losing credibility and meaning.
In relation to the future, Interviewees 2, 4, and 6 believe that journalism training in AI should be promoted to avoid certain risks associated with its misuse, such as failing to provide ethical and rigorous information and, therefore, misinforming and even manipulating. However, as explained by Interviewees 2 and 7, regardless of how AI develops, the immediate future of the journalistic profession must be based on ethics and rigour, as with all technological advances. Interviewee 4 insists that ‘the reader’s trust derives from the sincerity of the journalist and how he accesses the information’. Interviewee 6 considers that ‘without a solid foundation in journalistic ethics and critical thinking, the use of AI can be counterproductive. Journalists must understand how these tools work, their limitations and biases, and how to use them responsibly and not as a crutch’. Interviewee 8 explains that the journalist should review everything AI does: ‘You cannot blindly trust a machine. There must always be a human eye to ensure everything is correct’. Echoing this perspective, Interviewee 7 maintains the following: ‘All work involving AI must be carefully reviewed, data verified and context ensured. It is also important not to let AI write for us; it should only serve as a support tool. The greatest risk is misinformation if we do not thoroughly check what AI produces.’
Interviewee 3 believes that AI and journalism can go hand in hand as long as it is the production itself that feeds into AI and not the other way around. Furthermore, he does not believe that new regulations and ethical codes are necessary, as training can be translated into appropriate self-regulation. In this regard, Interviewee 5 believes that ‘AI should focus on mechanical processes, while humans must focus more than ever on the accuracy of their sources, critical thinking, and creativity’.
On the other hand, Interviewee 1 recommends that journalists ‘train and abandon fear, as AI is present and is advancing by leaps and bounds’. He believes that ‘adaptation is crucial for the journalist and that resistance to change will leave behind those who do not update’. He also argues that the fundamental skill of the journalist in the future will be the creation of prompts: ‘The ability to give precise, detailed and knowledgeable instructions to AI will be what differentiates quality work’. He even advocates the inclusion of subjects on prompt generation in communication faculties. But, in his opinion, AI ‘does not replace the judgment, ethics, and structure of journalism’; on the contrary, in-depth knowledge of these elements allows the journalist to guide the tool in an effective and responsible manner. Although he describes himself as ‘not very optimistic’, his vision is not apocalyptic. He sees the advent of AI as a source of ‘opportunities’ for those who adapt, similar to the transformations brought about by word processing and the Internet. He concludes that in-depth knowledge of the journalistic profession (the ability to assess a news event, to structure a news story, and to apply ethical principles), far from disappearing, will be more crucial than ever. This background will be the indispensable raw material for directing AI and generating reliable, quality content.
Looking further ahead, Interviewee 1 believes that the next logical step ‘will be total disintermediation, where AI not only summarises the content of the media, but generates the news directly for the user, completely bypassing the media as an intermediary’.

5. Discussion

The results of this study show that AI tools are widely used in newsrooms, although the frequency and intensity of their use vary depending on factors such as age, specific job roles, technological interests, and journalist training. These findings are consistent with previous research, such as that of Mondría Terol (2023), who identified the lack of training and professional resistance as key barriers to AI adoption, and Manrique (2023), who highlighted the coexistence of scepticism and experimentation in Spanish media outlets.
This uneven integration aligns with the findings of Cools and Diakopoulos (2024), who identified a broad spectrum of generative AI applications in journalistic workflows, ranging from content creation and summarisation to distribution and personalisation. As in the Spanish context, their study also highlights that such integration is often guided by professional intuition rather than formal ethical frameworks.
The survey reveals the existence of a small sector of journalists reluctant to incorporate AI tools into their routines, who argue that journalistic work must remain original and human-driven and anticipate serious consequences for journalism from the use and application of AI. Doubts about authorship, responsibility, and transparency generate mistrust. This tension between innovation and tradition is echoed by Wu (2024) and Misri et al. (2025), who describe journalists navigating between the promise of technological enhancement and the preservation of core professional values such as verification, authorship, and editorial control. These dilemmas are not unique to Spain, but reflect broader global concerns about the ethical integration of AI in journalism.
A key finding is the lack of clear regulation on the use of AI tools. Journalists do not receive precise instructions from their media outlets or superiors, leading to reliance on personal judgment and self-regulation. This situation mirrors the findings of Sonni et al. (2024), who report that many journalists operate without formal guidelines, relying instead on informal norms and individual ethical reasoning. Although this autonomy may promote innovation, it also increases the risk of inconsistent practices and ethical missteps.
The literature review suggests that this regulatory vacuum is beginning to be addressed. At the European level, the AI Act introduces transparency and accountability requirements that could influence newsroom practices. Parratt-Fernández et al. (2025) identify a growing number of ethical documents and AI guidelines among European media organisations, although their implementation remains uneven. But the main professional codes of ethics in Spain have not yet incorporated specific rules to help the media take the right position, and the major media outlets are tentatively beginning to incorporate some rules into their style guides and codes of ethics. El País (Alcaide, 2025) has already incorporated some guidelines into its style book. This reality contrasts with the slowness of the media to regulate their newsrooms. There are timid private initiatives implemented by specific publications that include general recommendations on the need to preserve the critical sense of journalists and the commitment to improving the quality of journalism in a context that points to change.
The survey also highlights a significant gap between the actual use of AI and the absence of ethical protocols. This disconnect creates a climate of uncertainty, where journalists must rely on intuition and personal ethics, and audiences can question the originality and accuracy of news content. Comparative studies, such as those by Cools and Diakopoulos (2024), show that other European contexts, such as the Netherlands and Denmark, have adopted more structured and experimental approaches, suggesting that national media cultures play a role in shaping AI integration strategies.
This reliance on individual judgment underscores the urgent need for structured policies and training programmes that can support the responsible and transparent integration of AI technologies into journalistic practice. Although most professionals use these tools on an occasional and auxiliary basis, the vast majority of journalists are not currently required to declare their use.
Transparency emerges as a central concern. Most journalists surveyed report that their media outlets do not require disclosure of AI use, even when such tools are employed in editorial tasks. This lack of institutional transparency can undermine public trust and complicate accountability. Kent (2015) proposes practical alternatives to improve transparency, such as identifying sources, reviewing AI-generated content, and adapting style to maintain journalistic standards. However, recent studies by Toff and Simon (2025), Jia et al. (2024), and Schell (2024) argue that transparency must be multidimensional, incorporating procedural clarity and editorial agency rather than relying solely on labels.
The absence of clear regulations is also reflected in the diversity of opinions on authorship. While most respondents consider the journalist to be the main author when AI is used partially, others attribute authorship to the media outlet or the AI application. This plurality of views opens a debate on intellectual property and editorial responsibility, suggesting the possibility of shared authorship models, perhaps even apportioned on a percentage basis, depending on the degree of AI involvement. Wu (2024) reinforces this perspective by highlighting the role of individual agency and value-driven use in shaping AI adoption.
Journalists express a strong commitment to transparency and ethical responsibility. They advocate for declaring AI use, citing it as a source when relevant, and ensuring editorial oversight. These proposals align with broader calls for algorithmic literacy and ethical training, as highlighted by FleishmanHillard (2024) and Cools and Diakopoulos (2024). Respondents also stress that AI should support, not replace, journalists, helping to enhance content quality while preserving human creativity and critical thinking.
The profession is undergoing a redefinition in response to technological change. In this sense, Simon's (2024) report highlights the tension between efficiency and journalistic quality: although AI can optimise workflows, it does not guarantee improved information quality. Automated content production requires human oversight, professional standards, and an institutional context that maintains journalistic values. The future role of journalists may evolve toward prompt engineering and ethical supervision, where the ability to guide AI tools effectively becomes a core competency. Although challenging, this evolution offers opportunities to strengthen journalistic integrity and public trust, provided it is accompanied by robust ethical frameworks and a renewed commitment to transparency.
The impact of AI on journalism is a reality that can be neither ignored nor sidestepped. To a large extent, the credibility of the profession and of the media will depend on the proper use of these tools and on the regulatory and ethical framework that is established, so that journalists have validated criteria allowing them to use AI without losing their central creative role.
The sample of this study could be expanded to other countries to obtain a more accurate picture of the European landscape, although the limitations of the population and the rapid changes that journalists and the media are experiencing in the use of AI tools and the adoption of regulatory measures would have to be taken into account. Comparative studies between geographical areas could also be developed, making it possible to identify cultural and professional differences and to assess the degree of technological implementation. It would likewise be possible to compare different regulatory frameworks and to analyse public perceptions of journalists' use of AI.

6. Conclusions

This research confirms that the use of AI tools as auxiliary instruments in Spanish newsrooms is widespread, although their recurrence and implementation remain uneven. Factors such as age, specific training, and conceptual mistrust of AI influence this variability. However, beyond descriptive data, the study reveals a deeper structural and ethical tension within journalistic practice.
One of the most significant insights is the disconnect between the actual use of AI and the lack of ethical and regulatory standards. Journalists operate in a context of normative ambiguity, often relying on personal judgement and self-regulation. This situation not only affects editorial consistency, but also raises concerns about transparency, authorship, and accountability, core values that are being redefined in the digital age.
Transparency emerges as a central concern. The absence of institutional requirements to disclose AI use reflects a broader reluctance to engage with the ethical implications of automation. This lack of disclosure can undermine public trust and complicate editorial responsibility. The diversity of professional opinions on authorship, whether attributed to the journalist, the media outlet, or the AI system, further illustrates the conceptual uncertainty surrounding hybrid content creation. This divergence of criteria opens a new debate that falls squarely within the realm of intellectual property and editorial responsibility.
In this context of digital transformation, journalism is undergoing a process of redefinition. Journalists advocate for AI to remain a support tool, not a substitute, and express a strong commitment to transparency and ethical responsibility. Their proposals, such as declaring AI use, promoting AI literacy, and establishing internal guidelines, point to a roadmap for responsible integration.
Through the in-depth interviews, critical perspectives have also emerged, framing AI not as a source or co-author but as an advanced tool at the service of the journalist. This view challenges the prevailing trend toward mandatory disclosure, arguing that citing the use of AI is as unnecessary as citing a calculator or a word processor. The focus of ethical responsibility thus shifts from disclosure to process: the journalist's job is to exercise rigorous control over the information supplied to the system and to verify the veracity and consistency of the end result. Under this model, authorship and ultimate responsibility fall unequivocally on the human professional, who acts as supervisor and guarantor of the published content.
In line with this approach, the creation of new AI-specific ethical codes would be redundant, since the fundamental principles of journalism, truthfulness, verification, and accountability, already enshrined in existing codes of ethics, are timeless and technologically agnostic. The challenge would lie not in drafting new regulations but in applying existing principles to the new technological context. It would also imply redefining the role of the journalist, who would evolve from a mere creator of texts to an architect of prompts and supervisor of the final product, with the ability to formulate precise and well-founded instructions becoming the core competency for ensuring that AI operates within established ethical and professional boundaries.
Although the study is geographically limited to Spain and is based on a relatively small sample, it nevertheless provides valuable information on the ethical and professional dynamics surrounding the use of AI in journalism. These findings offer a meaningful foundation for future comparative research and broader investigations across different media systems. Furthermore, the rapid evolution of AI tools and newsroom practices suggests that attitudes and regulations may change in the near future. Comparative studies between countries and longitudinal research would be valuable to assess cultural differences and track changes over time.
Ultimately, the credibility of journalism in the age of AI will depend on the profession’s ability to integrate these tools ethically and transparently. Journalists and media organisations must establish clear and consistent criteria that preserve editorial autonomy and reinforce public trust. Through this study, journalists outline a preliminary roadmap to transparency, including commitments to ethical oversight, training, and institutional accountability.
It would also be interesting to open a debate about the growing technological dependence of media organisations, as Simon (2024) points out, and to reflect on the risks of concentrating informational power and on the need to preserve journalistic independence from external technological interests.

Author Contributions

Conceptualization, M.Á.F.-B.; methodology, M.Á.F.-B.; software, C.S.-M.; validation, C.S.-M.; formal analysis, M.Á.F.-B. and C.S.-M.; investigation, M.Á.F.-B. and C.S.-M.; resources, M.Á.F.-B. and C.S.-M.; data curation, M.Á.F.-B.; writing—original draft preparation, M.Á.F.-B.; writing—review and editing, M.Á.F.-B. and C.S.-M.; visualization, C.S.-M.; supervision, M.Á.F.-B.; project administration, M.Á.F.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not Applicable. All activities carried out in this research were in accordance with current legislation and institutional guidelines on research ethics, ensuring strict adherence to ethical principles at every stage of the study.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. No sensitive or personally identifiable information was collected, guaranteeing the complete anonymity of survey participants. The anonymity of the interviewees in the in-depth interviews was likewise guaranteed, in order to encourage candid testimony and free expression.

Data Availability Statement

The data supporting the findings of this study are not publicly available due to privacy and ethical restrictions. Participants’ responses contain sensitive information that could compromise confidentiality and were collected under conditions that do not permit open sharing. Researchers interested in accessing anonymised or aggregated data may contact the corresponding author to discuss potential options, subject to ethical approval and institutional guidelines.

Acknowledgments

We thank the New Narratives and Emerging Technologies research group of the University of Seville and the Professional Association of Journalists of Andalusia for their help in disseminating the survey among journalists working in newsrooms. We also thank the Spanish Society of Journalism (SEP) and the University of Nebrija, organisers of the XXXI International Congress on Innovation in Journalism in the Digital Context, for the opportunity to present the preliminary findings of this study and to learn about other recent advances in the field. During the preparation of this article, the authors used the following AI applications: Gemini 1.5 Pro, to transcribe the interviews conducted in audio format, and Writefull X, to improve academic writing in English. The authors conducted a thorough review, made the necessary revisions to the content, and assume full responsibility for the accuracy and integrity of the final version of the publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
EU: European Union
UNESCO: United Nations Educational, Scientific and Cultural Organization
FAPE: Federación de Asociaciones de Periodistas de España (Federation of Spanish Journalists' Associations)
RTVE: Radiotelevisión Española
SEO: Search Engine Optimization

References

1. Alcaide, S. (2025, January 19). El uso de la inteligencia artificial en EL PAÍS. El País, Defensora del Lector. Available online: https://elpais.com/defensor-a-del-lector/2025-01-19/el-uso-de-la-inteligencia-artificial-en-el-pais.html (accessed on 23 March 2025).
2. Amponsah, P. N., & Atianashie, A. M. (2024). Navigating the new frontier: A comprehensive review of AI in journalism. Advances in Journalism and Communication, 12(1), 1–17.
3. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
4. Cools, H., & Diakopoulos, N. (2024). Uses of generative AI in the newsroom: Mapping journalists' perceptions of perils and possibilities. Journalism Practice, 1–19.
5. De Lara, A., García-Avilés, J. A., & Arias-Robles, F. (2022). Implantación de la inteligencia artificial en los medios españoles: Análisis de las percepciones de los profesionales. Textual & Visual Media, 1(15), 1–16.
6. European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 10 April 2025).
7. FleishmanHillard. (2024). Generative AI in the newsroom: Friend or foe? Available online: https://fleishmanhillard.com/wp-content/uploads/2024/06/Generative-AI-in-the-Newsroom.pdf (accessed on 10 April 2025).
8. Gondwe, G. (2025). Perceptions of AI-driven news among contemporary audiences: A study of trust, engagement, and impact. AI & Society, 1–12.
9. González-Esteban, J. L., & Sanahuja-Sanahuja, R. (2023). Exigencias éticas para un periodismo responsable en el contexto de la inteligencia artificial. Daimon Revista Internacional de Filosofía, (90), 131–145.
10. Hansen, M., Roca-Sales, M., Keegan, J., & King, G. (2017). Artificial intelligence: Practice and implications for journalism. Tow Center for Digital Journalism.
11. Illescas-Reinoso, D., Palacios, A. G., & Ortiz-Vizuete, F. (2025). La inteligencia artificial en el periodismo: Herramientas y aplicaciones. LATAM, Revista Latinoamericana de Ciencias Sociales y Humanidades, 6(1), 2355–2372.
12. Jia, H., Appelman, A., Wu, M., & Bien-Aimé, S. (2024). News bylines and perceived AI authorship: Effects on source and message credibility. Computers in Human Behavior: Artificial Humans, 2(2), 100093.
13. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399.
14. Karlsson, M., Ferrer Conill, R., & Örnebring, H. (2023). Recoding journalism: Establishing normative dimensions for a twenty-first century news media. Journalism Studies, 24(5), 553–572.
15. Kent, T. (2015, February 24). An ethical checklist for robot journalism. Medium. Updated October 2019. Available online: https://medium.com/@tjrkent/an-ethical-checklist-for-robot-journalism-1f41dcbd7be2 (accessed on 14 April 2025).
16. Lopezosa, C., Pérez-Montoro, M., & Rey-Martín, C. (2024). El uso de la inteligencia artificial en las redacciones: Propuestas y limitaciones. Revista de Comunicación, 23(1), 279–293.
17. Manrique, M. (2023, April 2). ¿Cómo están usando la IA los medios en España? Fleet Street. Available online: https://fleetstreet.substack.com/p/como-usando-inteligencia-artificial-los-medios (accessed on 12 June 2025).
18. Misri, A., Blanchett, N., & Lindgren, A. (2025). "There's a rule book in my head": Journalism ethics meet A.I. in the newsroom. Digital Journalism, 1–19.
19. Mondría Terol, T. (2023). Innovación MedIÁtica: Aplicaciones de la inteligencia artificial en el periodismo en España. Textual & Visual Media, 17(1), 41–60.
20. Parratt-Fernández, S., Chaparro-Domínguez, M. A., & Moreno-Gil, V. (2025). Journalistic AI codes of ethics: Analyzing academia's contributions to their development and improvement. Profesional de la información, 33(6), e330602.
21. Reyes-Hidalgo, C. M., & Burgos-Zambrano, D. J. (2024). Inteligencia artificial y periodismo: Retos y desafíos en la nueva era. In Actas del IX Congreso de Investigación, Desarrollo e Innovación de la Universidad Internacional de Ciencia y Tecnología (pp. 384–389). Universidad Internacional de Ciencia y Tecnología.
22. Schell, K. (2024). AI transparency in journalism: Labels for a hybrid era. Reuters Institute for the Study of Journalism. Available online: https://apa.at/wp-content/uploads/2020/05/RISJ-Fellows-Paper_Katja-Schell_MT24_Final.pdf (accessed on 25 January 2025).
23. Simon, F. M. (2024). Artificial intelligence in the news: How AI retools, rationalizes, and reshapes journalism and the public arena. Tow Center for Digital Journalism, Columbia University.
24. Society of Professional Journalists. (2024). SPJ code of ethics (Revised September 6, 2014 at 4:49 p.m. CT at SPJ's National Convention in Nashville, Tenn.). Available online: https://www.spj.org/spj-code-of-ethics/ (accessed on 16 April 2025).
25. Sonni, A. F., Hafied, H., Irwanto, I., & Latuheru, R. (2024). Digital newsroom transformation: A systematic review of the impact of artificial intelligence on journalistic practices, news narratives, and ethical challenges. Journalism and Media, 5(4), 1554–1570.
26. Toff, B., & Simon, F. M. (2025). "Or they could just not use it?": The dilemma of AI disclosure for audience trust in news. The International Journal of Press/Politics.
27. Túñez-López, M. (2021). Tendencias e impacto de la inteligencia artificial en comunicación: Cobotización, gig economy, co-creación y gobernanza. Fonseca, Journal of Communication, 22, 5–22.
28. Ufarte-Ruiz, M. J., Calvo-Rubio, L. M., & Murcia-Verdú, F. J. (2021). Los desafíos éticos del periodismo en la era de la inteligencia artificial. Estudios Sobre el Mensaje Periodístico, 27(2), 673–684.
29. UNESCO. (2022). Recommendation on the ethics of artificial intelligence (Document code: SHS/BIO/PI/2021/1; adopted on 23 November 2021). Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137_spa (accessed on 18 April 2025).
30. Verma, D. (2024). Impact of artificial intelligence on journalism: A comprehensive review of AI in journalism. Journal of Communication and Management, 3(2), 150–156.
31. Wu, S. (2024). Journalists as individual users of artificial intelligence: Examining journalists' "value-motivated use" of ChatGPT and other AI tools within and without the newsroom. Journalism.
Figure 1. Use of AI tools in journalistic practice.
Figure 2. Presence of ethical or regulatory frameworks for AI use in media organisations. The first sphere represents responses regarding the existence of internal guidelines or codes of ethics related to AI use. The second sphere reflects answers to a follow-up item designed to assess consistency in responses.
Figure 3. Disclosure requirements for AI use in journalistic content.
Table 1. Profile of the journalists interviewed for the study.

| Respondent | Age and Gender | Media Type | AI Training | Frequency of AI Use | Response to AI |
| 1 | 48, male | Printed newspaper and digital media. Regional | Specialisation course, workshops, webinars | Daily | Very positive, leadership and advocacy |
| 2 | 46, female | Digital newspaper. National | Specialisation course | Sometimes | Neutral, instrumental use |
| 3 | 39, female | Printed newspaper and digital media. Regional | No | Sometimes | Positive, responsible integration |
| 4 | 48, male | Printed newspaper and digital media. National | No | Never | Negative, rejection |
| 5 | 48, male | Local radio station | Specialisation course, tool-specific tutorials | Daily | Positive, critical integration |
| 6 | 61, male | Printed newspaper and digital media. National | No | A few times | Negative, distrustful use |
| 7 | 43, male | International correspondent for a national print media outlet | Online courses, newsroom training sessions | Sometimes | Neutral, instrumental use |
| 8 | 52, female | Regional TV station | Short online course | Sometimes | Neutral, passive acceptance |
