Review

How Generative AI Is Transforming Journalism: Development, Application and Ethics

Yi Shi and Lin Sun *
School of Government and Public Affairs, Communication University of China, Beijing 100024, China
* Author to whom correspondence should be addressed.
Journal. Media 2024, 5(2), 582-594; https://doi.org/10.3390/journalmedia5020039
Submission received: 17 April 2024 / Revised: 2 May 2024 / Accepted: 6 May 2024 / Published: 10 May 2024

Abstract

Generative artificial intelligence (GAI) is a technology based on algorithms and models that creates content such as text, audio, images, videos, and code. GAI is deeply integrated into journalism as tools, platforms, and systems. However, GAI's role in journalism dilutes the power of media professionals, changes traditional news production, and poses ethical questions. This study attempts to systematically answer these ethical questions in specific journalistic practices from the perspectives of journalistic professionalism and epistemology. Building on a review of GAI's development and application, this study identifies the responsibilities of news organizations, journalists, and audiences, helping them realize the potential of GAI while adhering to journalistic professionalism and universal human values so as to avoid negative technological effects.

1. Introduction

Generative artificial intelligence (GAI) is a technology system based on algorithms and models designed to create text, audio, images, videos, and code. GAI encompasses various technologies and architectures that learn the underlying patterns and correlations in data and subsequently generate content based on this acquired knowledge (Jovanovic and Campbell 2022). A research report published by Accenture in March 2023 shows that GAI ushers in a bold new future for science, business, and society, with a profoundly positive impact on human creativity and productivity. Among business leaders, 98% of respondents agree that AI foundation models will play an important role in their organizations' strategies over the next three to five years. Furthermore, as much as 40% of all working hours will be supported or augmented by language-based AI, such as GPT-4 (Harper et al. 2023).
GAI is currently used to assist and accelerate particular procedural tasks, such as content creation, fact-checking, data processing, image generation, speech conversion, and translation, reducing the burden on humans and increasing efficiency (Caswell et al. 2021; Nishal and Diakopoulos 2024). With the widespread adoption of GAI in news production, academia has begun to scrutinize GAI news production from three perspectives. The first line of study compares GAI-assisted journalism with traditional journalism (Hong and Tewksbury 2024; Wang et al. 2023). The second examines how GAI is used in the journalism of specific countries or regions. For example, Gondwe (2023) explored how journalists in sub-Saharan Africa use GAI, while Pinto (2024) investigated GAI in the Brazilian news industry. The third reflects on ethical challenges and calls for upholding journalistic professionalism and its core values. Some scholars argue that journalistic professionalism is compromised by GAI, championing the view that "without journalists, there is no journalism" (Fernández et al. 2023). Others express concern that the growing integration of GAI into news production may have a profound impact on public opinion and even influence the future of democracy (Spennemann 2023; Arguedas and Simon 2023). These three lines of study, whether they emphasize operational aspects or ethical concerns, examine the ethical challenges posed by GAI in journalism in isolation and fail to situate them within the specific stages of news production where they arise.
Therefore, this study discusses these ethical challenges in the context of specific stages of news production. In addition to scrutinizing existing technological implementations, it aims to offer theoretical perspectives that can guide researchers, professionals, and policymakers in leveraging the capabilities of GAI responsibly while upholding journalistic integrity and universal human values, thus mitigating adverse technological impacts.

2. Development of GAI in Journalism

AI has been used in the news industry for the past decade. The Associated Press (AP) was one of the earliest news media to integrate AI into news production (Radcliffe 2023). In 2014, the AP began using AI to process reports on corporate earnings. Before adopting AI, AP editors and journalists spent considerable resources creating financial reports, which drew their focus away from news of greater significance. Despite this investment, the AP could only produce 300 financial reports per quarter, leaving thousands of potential corporate earnings reports unwritten. With the help of Wordsmith, created by Automated Insights, the AP can now convert earnings data into publishable news stories within seconds, nearly 15 times more efficiently than the traditional way (Miller 2015). Other media organizations, including Bloomberg, Reuters, Forbes, The New York Times, The Washington Post, and the BBC, also use AI for news production. The primary applications of AI in these large media organizations involve news gathering, production, and dissemination. In China, Tencent was the first to use AI for news writing. In September 2015, Tencent's financial channel used a news-writing robot known as "Dreamwriter" for a news report titled "August CPI rose 2.0% year-on-year, hitting a new high in 12 months". The content mainly consisted of data analysis and expert commentary on the data. Dreamwriter continued to release more reports, reaching 40,000 in the first three quarters of 2016 (Zhou 2015). On 18 November 2015, Xinhua News Agency unveiled a writing robot dubbed "Kuaibi Xiaoxin", or "Fast Pen Xiao Xin", designed to swiftly craft news reports by leveraging published information. The tool operates through a systematic workflow: it collects data in real time, then cleans and standardizes them using big data technologies; it tailors algorithmic models to specific business requirements and conducts data analysis and calculation; based on these analyses, it identifies suitable templates, generates reports adhering to Chinese language standards, and automatically places them in a queue for editor review and release (Zhong and Zhang 2019). A simplified sketch of this template-driven workflow is given below.
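To make the analyse-select-fill-queue sequence concrete, the following is a minimal, illustrative Python sketch of template-based report generation. It is not the actual Wordsmith or Kuaibi Xiaoxin code; the data structure, thresholds, and templates are invented for illustration, but the sequence mirrors the workflow described above.

```python
from dataclasses import dataclass

@dataclass
class EarningsRecord:
    company: str
    quarter: str
    revenue_m: float        # revenue in millions (already cleaned/standardized)
    prior_revenue_m: float  # same quarter last year

# Templates are selected by the direction of the computed change.
TEMPLATES = {
    "up":   "{company} reported {quarter} revenue of {revenue_m:.1f} million, "
            "up {change:.1f}% year-on-year.",
    "down": "{company} reported {quarter} revenue of {revenue_m:.1f} million, "
            "down {change:.1f}% year-on-year.",
    "flat": "{company} reported {quarter} revenue of {revenue_m:.1f} million, "
            "roughly unchanged from a year earlier.",
}

def generate_report(rec: EarningsRecord) -> str:
    """Turn one cleaned earnings record into a publishable draft."""
    change = (rec.revenue_m - rec.prior_revenue_m) / rec.prior_revenue_m * 100
    key = "flat" if abs(change) < 0.5 else ("up" if change > 0 else "down")
    return TEMPLATES[key].format(company=rec.company, quarter=rec.quarter,
                                 revenue_m=rec.revenue_m, change=abs(change))

# Drafts go into a review queue rather than being published directly.
editor_queue = [generate_report(EarningsRecord("ExampleCorp", "Q2", 120.4, 103.9))]
print(editor_queue[0])
```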
Since 2018, breakthroughs in natural language processing (NLP) and large pre-trained models have unraveled the intricacies of language, enabling machines to read, learn from text, infer intent, and autonomously generate content. Moreover, these models can be quickly fine-tuned for various tasks, driving the development of GAI (Lamri 2023). In November 2022, OpenAI unveiled ChatGPT, an AI-driven natural language processing tool powered by the neural network architecture of Generative Pre-trained Transformer 3.5. This innovative tool is engineered to process sequential data, allowing it to comprehend language and generate text. By being trained on extensive datasets of real-world conversations, ChatGPT boasts a wealth of knowledge and can engage in conversations that closely resemble human interaction. Besides its chatbot capabilities, ChatGPT excels in various tasks including email composition, video scripting, copywriting, translation, and coding. The launch of ChatGPT in November 2022, followed by the introduction of GPT-4 in March 2023, has prompted close scrutiny from the news industry, which is closely monitoring both the positive and negative impacts of these advancements. Nicholas Carlson, global editor-in-chief of Business Insider, encouraged journalists to explore ways to use GAI in newsrooms. “A tsunami is coming”, he said in April 2023. “We can either ride it or get wiped out by it. But it’s going to be really fun to ride it, and it’s going to make us faster and better (Korn 2023)”. In 2024, the Reuters Institute at the University of Oxford released a comprehensive report based on a year-long study involving extensive interviews with industry experts aimed at forecasting trends in journalism. According to the report, 56% of the respondents emphasized the significance of employing AI for various back-end news automation tasks, such as tagging, transcription, and copyediting. This was followed by 37% of respondents who highlighted the importance of AI in providing better content recommendations, while 28% focused on commercial uses. Moreover, the report predicts the appointment of more senior editorial figures to coordinate AI activities and strategy this year (Reuters Institute 2023).
There are three main types of application of GAI in news organizations:

2.1. GAI Tools

In this application scenario, GAI tools operate independently of news organizations. Journalists access GAI tools as individual users seeking assistance in news production, which requires them to register and create accounts in order to use the tools. This low-cost and convenient approach is widely prevalent. For instance, journalists may pose queries to ChatGPT to swiftly obtain information. Another example is Reach, one of the UK's biggest newspaper groups and the publisher of the Daily Mirror and Daily Express. Reach is exploring whether ChatGPT could help journalists write short news stories, as media organizations look at ways of using AI. Reach chief executive Jim Mullen told local media that the company had set up a working group to examine how the tool might be used to assist human reporters compiling coverage of topics such as local weather and traffic (Alim 2023).
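As an illustration of this individual-user mode, the sketch below drafts a short local-news brief by calling a hosted model through the OpenAI Python SDK. It assumes a valid API key in the environment; the model name and the prompt are illustrative, and the journalist would still review and edit the output before publication.

```python
from openai import OpenAI  # requires an OPENAI_API_KEY environment variable

client = OpenAI()

prompt = (
    "Write a 60-word local news brief from these raw notes, "
    "neutral tone, no speculation:\n"
    "- Heavy rain expected Thursday, 40 mm forecast\n"
    "- City council closes riverside car park as a precaution\n"
    "- Bus routes 12 and 14 diverted via High Street"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
print(draft)  # the journalist reviews and edits this draft before publishing
```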

2.2. GAI Platforms

GAI platforms offer a range of APIs tailored for content production, encompassing services such as content analysis, sentiment analysis, event extraction, summary generation, personalized recommendation, content editing, and visualization. In this model of human–machine collaboration, journalists primarily contribute textual data, while machines handle content generation and presentation tasks. Within the dynamic interaction involving machines, media entities, and end-users, the media serves as an intermediary between end-users and GAI: GAI manages the technical implementation, and the media furnishes the user interface. The end result is a GAI platform accessible to end-users. A prime illustration of this approach is The New York Times' interactive product, A VALENTINE, FROM A.I. TO YOU, developed with assistance from ChatGPT (The New York Times 2023). GAI simplifies creative work in this model: media professionals need only issue design instructions, and the system produces the interactive product.
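A minimal sketch of this intermediary pattern is shown below, assuming a newsroom exposes its own endpoint while delegating generation to a provider's model. FastAPI, the route name, and the placeholder generate_poem function are illustrative assumptions, not the Times' actual implementation.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ValentineRequest(BaseModel):
    recipient: str
    theme: str

def generate_poem(recipient: str, theme: str) -> str:
    # Placeholder for the call to a generative model's API; the media
    # organization supplies only the interface and the design instructions.
    return f"(model-generated valentine for {recipient} about {theme})"

@app.post("/valentine")
def valentine(req: ValentineRequest) -> dict:
    """End-user-facing endpoint: the newsroom designs the product,
    while the generative backend produces the actual content."""
    return {"poem": generate_poem(req.recipient, req.theme)}
```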

2.3. GAI Systems

News organizations invest in the development of tailored GAI systems, a process that demands significant technical expertise and resources, including the creation of proprietary algorithms. This approach yields several advantages, primarily centered on embedding the core values of the media organization directly into the GAI system’s framework. By doing so, it ensures that the generated content adheres closely to the organization’s editorial style and principles. However, this approach typically entails substantial upfront costs, making it feasible primarily for larger media conglomerates with extensive financial resources. One notable example is Bloomberg’s BloombergGPT platform, which exemplifies this approach in action (Bloomberg 2023). The advancement of GAI systems facilitates the provision of personalized services across different phases of news production, tailored to the specific requirements of media organizations. For instance, within interactive platforms designed for financial media, when users input “Apple”, the system would primarily focus on information related to the American technology company rather than the fruit. Moreover, GAI systems offer the capability to establish direct links to exclusive databases, such as real-time trading data of publicly listed companies.
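The "Apple means the company, not the fruit" behaviour and the link to exclusive databases can be sketched as a simple lookup layer in front of the model, as below. The entity table and market feed here are invented stand-ins for a media organization's proprietary knowledge base and real-time data.

```python
# Illustrative only: the entity table and the data source are invented,
# standing in for a newsroom's proprietary knowledge base and market feed.
ENTITY_TABLE = {
    "apple":    {"type": "company", "ticker": "AAPL"},
    "alphabet": {"type": "company", "ticker": "GOOGL"},
}

FAKE_MARKET_FEED = {"AAPL": {"last_price": 172.5, "change_pct": 1.2}}

def resolve_query(query: str) -> dict:
    """Resolve a user query the way a finance-tuned system might:
    prefer the listed company over other senses of the word."""
    entity = ENTITY_TABLE.get(query.strip().lower())
    if entity and entity["type"] == "company":
        ticker = entity["ticker"]
        return {"entity": ticker, **FAKE_MARKET_FEED.get(ticker, {})}
    return {"entity": query, "note": "no financial entity found"}

print(resolve_query("Apple"))  # interpreted as the company, not the fruit
```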
In general, these three approaches coexist. With the ongoing advancement of GAI, its integration with news organizations will deepen further. The third approach signifies a complete assimilation of GAI within the news organization, where it operates as a subsystem of the institution. Ultimately, as the integration of GAI with news organizations progresses, the thinking and expression of GAI will increasingly resemble those of human journalists, enabling it to better fulfill the demands of journalistic work. From AI to GAI, technological progress has revolutionized traditional news production, rendering it increasingly automated.

3. Applications of GAI in Journalism

Based on a literature review and online information, three main applications of GAI in journalism are summarized:

3.1. Information Gathering

Information gathering serves as the cornerstone of news production, where journalists assess the relevance and significance of various topics based on available information. With the integration of GAI, the landscape of information gathering has broadened, costs have decreased, and traditional methodologies have evolved. More specifically, GAI leverages sophisticated algorithms to sift through extensive datasets, discern pertinent data points, and deliver valuable insights. Through automated data processing, visualization tools, and interpretive capabilities, GAI unveils previously obscured layers of information, enhancing the efficiency and depth of information-gathering processes (Diakopoulos 2019; Gondwe 2023). Moreover, GAI can analyze past data, forecast trends for complex events and prepare journalists for unforeseen events. For example, Google’s Bard, trained on a vast dataset of text and code spanning various topics such as history, science, technology, and current affairs, can use this information to forecast event trends (Patrick 2023).
Overall, in the information-gathering stage, the use of GAI involves three main aspects. First, GAI replaces journalists in completing repetitive and labor-intensive basic tasks, such as collecting background information. Reuters News Tracer is one example. It runs machine-learning algorithms on a percentage of Twitter's 700 million daily tweets to find breaking news. These algorithms look for clusters of tweets discussing the same event, and the tool then generates a newsworthiness rating, assessing whether the event is worth reporting and whether the information is likely to be true. Reuters' journalists then independently verify the information through their own channels and reporting before publishing (Reuters 2017). A simplified sketch of this cluster-and-score approach is given after the three aspects discussed here.
Second, GAI serves to address the limitations of journalists in news reporting. Journalists often face constraints such as time and space, which may hinder their ability to promptly follow up on the latest developments. In such scenarios, GAI steps in to identify real-time information, continuously monitor unfolding events, and offer innovative perspectives for news coverage. This technological support enables journalists to stay updated with evolving stories and maintain relevance in their reporting despite constraints (Liu et al. 2017).
Third, journalists play a crucial role in training GAI systems to refine their capacity to evaluate newsworthiness, aiming for alignment with the editorial standards of the news organization. However, this process comes with a caveat. As GAI’s ability to assess newsworthiness improves, its information gathering may become biased toward the values and priorities of the news organization. Consequently, there is a risk of overlooking marginal events and relying on a narrower set of sources. To mitigate this risk, ongoing collaboration among news organizations, journalists, and programmers is essential. This collaboration should focus on continuously enhancing GAI’s cognitive and learning capabilities to adapt to emerging events and diverse news contexts (Simon 2022; Harcup 2023).
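Returning to the News Tracer example above, the sketch below illustrates the general cluster-and-score idea with TF-IDF vectors and density-based clustering. It is not Reuters' system: the sample posts, parameters, and the size-based newsworthiness proxy are assumptions chosen only to make the mechanism concrete.

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

posts = [
    "Explosion reported near the central station, smoke visible",
    "Huge smoke cloud over the central station area right now",
    "Police confirm incident at central station, avoid the area",
    "My cat just knocked over my coffee again",
    "Traffic jam on the ring road this morning",
]

# Group posts that appear to describe the same event.
vectors = TfidfVectorizer(stop_words="english").fit_transform(posts).toarray()
labels = DBSCAN(eps=0.9, min_samples=2, metric="cosine").fit_predict(vectors)

# Crude newsworthiness proxy: how many independent posts share a cluster.
cluster_sizes = Counter(label for label in labels if label != -1)
for label, size in cluster_sizes.items():
    example = next(p for p, l in zip(posts, labels) if l == label)
    print(f"cluster {label}: {size} posts, e.g. '{example}'")
    # A real system would add source-credibility and verification signals,
    # and journalists would still confirm the story independently.
```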

3.2. Content Production

The primary application of GAI in news production is to aid journalists in content creation. GAI systems can generate text tailored to the specific style and tone of news organizations, facilitating tasks such as transcription, translation, and daily news updates. This assistance not only enhances journalists’ productivity but also improves the overall quality and readability of news reports. Additionally, GAI plays a crucial role in increasing the efficiency of news organizations in producing content in multiple languages, thereby expanding their reach and audience base. For example, the French newspaper Le Monde uses AI to assist in translating articles, enabling around 30 stories a day to appear in its English edition (Newman 2024).
News channels generated and driven solely by GAI, without human journalists and editors, have also emerged; NewsGPT, an American news website launched on 1 March 2023, stands as the world's first channel of its kind. NewsGPT's algorithms are able to analyze and interpret data from a wide range of sources, including social media, news websites, and government agencies, and then use these data to create reports. The website even has its own AI anchors (Ians 2023).
Specifically, GAI assists content production in two main ways. The first is content creation, such as summaries and headlines. Aftonbladet, a Swedish newspaper, found that news reports supplemented with AI-generated bullet points could increase overall engagement, especially among young readers (Newman 2024). Furthermore, GAI can assist in modality conversion between text and images, audio, and video. For instance, ReelFramer can convert traditional text formats into engaging video shorts that match a platform's style, and GAI can provide suggestions on narratives, including characters, plots, backgrounds, and key information (Wang et al. 2023). The second is diversifying writing styles. GAI can adapt news stories to the style of a news organization, generate different writing styles, and even reframe stories semantically for different audiences. According to Gina Chua (2023), the GAI tool Claude can narrate the same news content in the distinct styles of The New York Times, The New York Post, China Daily, and Fox News. For news organizations, this feature may help attract audiences with different perspectives and values, thereby expanding their audience (Chua 2023).
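The style-adaptation idea can be sketched as little more than careful prompt construction, as below. The style guides and the example story are invented; the resulting prompts would be sent to a generative model (such as Claude or GPT-4), a step omitted here.

```python
STYLE_GUIDES = {
    "broadsheet": "formal, measured, third person, no exclamation marks",
    "tabloid":    "short punchy sentences, vivid verbs, direct address",
    "newsletter": "conversational, bullet points, second person",
}

def reframing_prompt(story: str, style: str) -> str:
    """Build a prompt asking a generative model to retell the same facts
    in a different house style without adding new claims."""
    guide = STYLE_GUIDES[style]
    return (
        f"Rewrite the following story in a {style} register ({guide}). "
        "Keep every factual claim unchanged and do not invent details.\n\n"
        f"{story}"
    )

story = "The city council approved a 3% budget increase for public libraries."
for style in STYLE_GUIDES:
    print(reframing_prompt(story, style)[:80], "...")
    # each prompt would then be sent to the chosen generative model
```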

3.3. Customization and Dissemination

Traditional news production typically follows a linear and unidirectional process, in which professional journalists and editors progress through stages such as information gathering, writing, editing, and distribution of news to the audience. With the integration of GAI, however, this process is enhanced through algorithmic optimization and interactive dissemination. This transformation allows for seamless connectivity across different stages of news production, turning the linear process into a networked one. GAI systems can record and analyze the behavioral patterns of journalists and editors within newsrooms, enabling the creation of comprehensive databases. Leveraging these data, GAI can provide personalized recommendations for assigning journalists to specific news topics based on their expertise and past performance. By optimizing the allocation of human resources within a journalist team, GAI further enhances operational efficiency and overall productivity in news production workflows (Moravec et al. 2020; Zagorulko 2023; Cools et al. 2023). Furthermore, GAI involves the audience in news production through interactive dissemination, collecting audience information and personalizing their feeds. GAI can analyze audience data and content preferences and then customize summaries, headlines, and content for them (Marr 2024). For example, Forbes introduced Adelaide in the form of a search engine, where individuals can ask the chatbot any question. Adelaide not only provides a concise answer but also recommends a series of news stories related to the question (David 2023). The news chatbot Charlie, launched by OneSub, assumes the role of a personal journalist for users. Charlie is dedicated to supporting users' emotional well-being by avoiding sensational or negative content, alleviating users' stress and anxiety, and making obtaining information simpler and more interactive (Twipe 2023).
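A minimal sketch of the audience-profiling and personalization step follows. The reading history, candidate headlines, and the simple Jaccard-overlap ranking are assumptions used only to illustrate how behavioral data can drive customized dissemination; a production system would be far richer and would also generate personalized summaries and headlines with a generative model.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets, used as a crude relevance score."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented reading history and candidate pool, standing in for the
# behavioural data a GAI-driven news product would collect.
reading_history = [
    {"climate", "policy", "energy"},
    {"energy", "prices", "households"},
]
candidates = {
    "Heat pump subsidies expanded":     {"energy", "policy", "households"},
    "Transfer window rumours round-up": {"football", "transfers"},
    "New climate report released":      {"climate", "science", "policy"},
}

profile = set().union(*reading_history)  # crude interest profile
ranked = sorted(candidates.items(),
                key=lambda item: jaccard(profile, item[1]),
                reverse=True)

for headline, tags in ranked:
    print(f"{jaccard(profile, tags):.2f}  {headline}")
```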
GAI has fundamentally transformed the conventional linear process of news production, shifting the focus of each stage towards facilitating information sharing and enhancing the accuracy and quality of tailored information dissemination. For instance, platforms like NewsCube enable users to explore complex news stories from diverse perspectives. Each face of the NewsCube contains content sourced from various media organizations or outlets, offering users a multifaceted view of the news landscape. This approach helps users navigate through the vast hypertext world of news content with greater direction and clarity. Furthermore, NewsCube collects data on user behaviors and preferences, which serve as valuable input for guiding future decision-making in news production and facilitating personalized content dissemination (Doherty and Stephen 2020). Audience involvement in news production not only encourages content diversity but also promotes a more vibrant public discourse. By incorporating viewpoints from various racial, religious, and cultural backgrounds, news outlets can enrich the reading experience for their audience. Furthermore, this engagement helps improve the accuracy of content by enabling feedback on the credibility of information. As audiences actively participate in verifying and fact-checking news stories, there is a gradual reduction in the dissemination of false or misleading information.

4. Ethical Reflections: Challenges of Journalistic Ethics in the Era of GAI

Though GAI has brought about some benefits to the news industry, its integration also necessitates reflection. Canadian scholar Geoffrey Winthrop Young once pondered how computers redefine the boundaries between humans and technology and how digital machines affect individual subjectivity and social structures. As GAI assumes a more prominent role in journalism, media professionals must navigate the dynamic landscape where AI shares the power of news production. This shift challenges traditional conceptions of journalistic practice and necessitates the development of new ethical standards. One significant risk is that the fundamental principles of journalistic ethics, including truth, accuracy, objectivity, and accountability, may face unprecedented challenges in this evolving paradigm (Ouchchy and Dubljević 2020). The term “journalistic ethics” refers to the set of values, professional standards, and behavioral guidelines that should be followed in the practice of news production and dissemination. Journalistic ethics not only dictate the values and ethical considerations of news reporting but also define the norms and boundaries of journalistic practices. This section identifies the ethical challenges in the three stages of information gathering, content production, and customization and dissemination.

4.1. Information Gathering: Human Subjectivity of Journalists

The enduring question of the relationship between humans and machines lies at the heart of the philosophy of human subjectivity, which fundamentally explores who holds control of this relationship. The ancient Greek philosopher Protagoras proposed that “Of all things the measure is Man”, emphasizing that humans, due to their agency, are always the masters. In the traditional human–machine relationship, machines typically occupy a subordinate role to humans, with humans exerting control and direction over machine actions and functions (Mara and Hawk 2009). AI and automation represent two pivotal trends in contemporary technological advancement. AI encompasses machines’ capacity to learn, reason, adapt, and make decisions akin to humans. Automation, on the other hand, involves machines or computers autonomously executing tasks with minimal human intervention. These trends signify a shift towards bidirectional interaction between humans and machines, wherein computers not only respond to human commands but also intelligently process and execute tasks based on human input and feedback. In the era of deep mediatization, characterized by equal interaction between humans and machines, the role of machines in this relationship has undergone transformation. Machines are no longer mere intermediaries but active participants in social processes, contributing to the generation and dissemination of information. Furthermore, in human–machine communication, machines possess the capability to participate in meaning creation and context construction, further blurring the boundaries between human and machine agency.
In news production, GAI has become a quasi-subject with a certain degree of agency, reshaping the roles and responsibilities of journalists and other media professionals. For example, traditionally, the information gathered by journalists should possess societal significance and humanistic considerations, reflecting the intricate dynamics between individuals and society, while also adhering to principles of fairness, objectivity, and rationality. If journalists are entirely replaced by machines, and it is left to machines to decide what information to gather as news material, there is a risk that machines may indiscriminately collect personal information or user data without human constraints, thus infringing on individual privacy. As a result, news production may violate basic human rights as journalism forsakes its fundamental purpose of serving society and instead takes on the role of mass surveillance (Dilmaghani et al. 2019; Andrew and Huang 2023; Guembe et al. 2022). With intelligent systems in place, users’ privacy is exposed, and the prerequisite for accessing intelligent services is the relinquishment of personal information.
On 29 June 2023, sixteen individuals in the United States filed a lawsuit against OpenAI, the developer of ChatGPT. They accused the company of collecting and revealing their personal information, such as account details, login credentials, emails, payment information, browsing history, social media interactions, chat logs, and other online activities, without adequately informing the users or obtaining their consent (Mauran 2023). Additionally, the limited accuracy of GAI, particularly in comprehending intricate emotions, results in a growing prevalence of standardized news devoid of a human touch in its delivery.

4.2. Content Production: Credibility

Credibility serves as not only a foundational principle in journalism but also a fundamental aspect of journalistic ethics. However, with the advent of the digital age, both scholars and professionals are reexamining the concept of “news credibility”. Alongside the digital transformation of the news industry, discussions surrounding the credibility of news have become increasingly prevalent. For instance, within the realm of digital news, discussions delve into nuanced concepts such as “experiential truth” within media technology, “perceived truth” within cognitive psychology, and “negotiated truth” concerning power dynamics. Despite the evolving definitions and discussions surrounding news credibility, its significance remains paramount in defining what qualifies as “news”.
In fact, truth is not absolute but falls within a measurable range. Thanks to technological advancements, journalism is now closer to truth than ever, sometimes even overly detailed and precise, appearing clearer than reality itself. GAI also shows promise in fact-checking: ChatGPT, for instance, achieves an accuracy rate of up to 68.79% (Hoes and Bermeo 2023). However, GAI's ability to make fabrications look convincingly authentic conceals the detrimental risk of blurring the boundaries of reality. GAI, characterized by technical rationality, stealthily embeds mechanistic cognition rooted in technical logic, along with preconceived judgments, into news production, and extreme technology supporters become adherents of technological superstition (Shipley and Williams 2023). For example, GAI has given rise to deepfake challenges, in which generative adversarial networks (GANs) are used for face swapping, lip synchronization, facial re-enactment, motion transfer, and more. GAI's ability to generate fake content of individuals expressing opinions with specific emotions, and even dancing, has caused upheaval in the news industry, making it more difficult for media professionals to ensure the authenticity of their content (Marconi and Daldrup 2018).
Moreover, the cost of producing deepfakes with GAI continues to fall as cloud services and cloud terminals adopt the Platform as a Service (PaaS) model. This trend intensifies the challenges posed by deepfake technology, amplifying its potential negative political ramifications. For example, a video featured a two-minute conversation in which the leader of Progresívne Slovensko (Progressive Slovakia), Michal Simecka, appeared to discuss buying votes from the Roma minority with a journalist. AFP fact-checkers consulted several experts, who concluded that the audio had been synthesized by an AI tool trained on real samples of the speakers' voices; several copies circulated on social media without any label marking them as misleading (Solon 2023). As Thomson et al. (2022) observe, "while the creation of synthetic media is not inherently problematic (for example, in the case of art, satire, or parody), issues emerge when those media are presented without transparency and masquerade as reality". Indeed, outlets such as The New York Times have begun labeling AI-generated images as such to highlight their provenance.

4.3. Customization and Dissemination: Value Orientation

News values encompass the qualities that enhance the significance or appeal of a story to the public. Stable factors of news value include credibility and timeliness, while variable factors include prominence, impact, proximity, and interest. The depth and diversity of the values embedded within news facts correlate directly with the overall value of a news report. Traditional media pursue truth and fulfill social responsibilities; their news values include timeliness, relevance, and prominence. Caring about what the audience wants, they also produce interesting news or news that matters to a given audience. With the development of GAI, power replaces news values: whoever possesses the technology and controls the resources decides what is newsworthy. Deploying GAI requires substantial resources, such as digital infrastructure and training corpora, that are controlled by the countries and enterprises with the most resources and power, leading to monopolies. Consequently, AI models prioritize these actors' worldviews and perspectives, their technological regulations and standards establish global norms, and their value systems become universal, ultimately causing a shift in news values towards power (McIntosh 2018; Acemoglu 2021). New Zealand-based data scientist David Rozado created a GAI model called RightWingGPT to express US right-wing political views; research indicates that language models can subtly influence users' values, highlighting the potentially serious consequences of political biases in GAI models (Knight 2023). Other researchers have used DALL-E and Stable Diffusion to convert text into images, revealing that these models perpetuate common stereotypes; for instance, prompts related to cleaners consistently generate images depicting women (Bianchi et al. 2023).
The inherent flaws in large language models directly affect the output of GAI, which relies on machine learning to mimic human behavior and generate content. This situation leads to systemic biases, value conflicts, cultural hegemony, stereotypes, and misinformation (Zhou et al. 2024). In addition, GAI models lack common sense reasoning, making it difficult to comprehend complex issues and distinguish nuances in tone and subtle emotions. Issues like AI hallucinations emerge, where generated content may seem plausible but contradicts real-world knowledge or existing data (Salvagno et al. 2023; Alkaissi and McFarlane 2023). As a result, GAI values shape the customization and dissemination of news, hindering comprehension of events, narrowing perspectives, intensifying polarized discourse, and eroding public discourse.
To facilitate an understanding of the study, a figure is shown here to depict the application of GAI in news production and the associated ethical challenges (Figure 1).
Lastly, it is imperative to discuss the legal ramifications stemming from the integration of GAI in news production, particularly concerning potential violations of individual privacy rights and copyrights. On 15 February 2023, Francesco Marconi, a Wall Street Journal reporter, publicly accused OpenAI of unauthorized use of content from prominent foreign media outlets such as Reuters, The New York Times, The Guardian, and the BBC to train its ChatGPT model, without compensating the original sources. Presently, prevailing legal frameworks predominantly attribute responsibility to individuals, which may prove insufficient in addressing GAI as digital entities. Legal accountability demands the establishment of a direct causal link between the AI operator and any infringement, with penalties commensurate with the actual harm inflicted. However, existing legal standards lack the necessary mechanisms to precisely identify and attribute damages, as well as engage in ongoing assessment and risk mitigation protocols to minimize adverse consequences. Moreover, the incorporation of GAI poses significant challenges to the protection of intellectual property. The question of whether AI-generated entities are entitled to intellectual property rights remains unresolved, although disputes over the rights of AI-generated entities have already arisen. Notably, in China, a defendant faced litigation for employing AI-generated images without the plaintiff’s authorization as illustrations for articles. The Beijing Internet Court held that AI-generated images demonstrating human-like “originality” and intellectual input could be protected under copyright law as manifestations of creative intellectual endeavors.

5. Conclusions and Recommendations

This research extensively explores how GAI could potentially disrupt the conventional methods of news production. It delves deep into the integration of GAI within the news production workflow, thoroughly examining its current state and the ethical quandaries it introduces to the field of journalism. Through a meticulous examination of these ethical challenges, this study aims to offer invaluable insights into how GAI can play a constructive role in shaping the future of news production. Ultimately, its goal is to equip professionals and researchers with a comprehensive understanding to effectively navigate the continuous technological transformations occurring within the realm of journalism.
The advantages of GAI in news production are multifaceted. Firstly, GAI significantly enhances efficiency across various tasks such as collecting information, generating content, tailoring it to a specific audience, and disseminating it, thus optimizing the overall news production process. This relieves human reporters and editors of routine tasks, allowing them to focus on critical endeavors like fact-checking and closely monitoring unfolding news events. Moreover, GAI enriches the depth and breadth of news content, capturing the audience’s attention and catering to their information and emotional needs. Lastly, GAI fosters interaction between the audience and media entities, empowering the audience and prompting news organizations to incorporate a spectrum of viewpoints into their reporting. By embracing diversity and minimizing biases, GAI contributes to nurturing a more robust and inclusive media landscape.
The ethical challenges presented by GAI in journalism are significant and cannot be overlooked. The integration and progression of GAI disrupt the traditional roles of human journalists and editors, offering convenience while also raising valid concerns about news credibility. Of particular concern is the subtle influence that GAI may have on audience values, which may be less overt compared to the cognitive effects of conventional news reporting. This subtle influence threatens to undermine the integrity and professionalism of journalists, potentially resulting in a deterioration in the overall quality of media content. In response to these challenges, numerous media outlets have taken steps to develop guidelines for the responsible use of GAI, providing practical examples and guidance for newsrooms to effectively navigate these ethical dilemmas. For example, the Partnership on AI (PAI), a collaborative effort involving academic, civil society, industry, and media organizations, issued the AI Procurement and Use Guidebook for Newsrooms in August 2023. This guidebook aims to offer comprehensive measures for editorial departments to address the challenges posed by AI in a proactive and responsible manner (PAI Staff 2023).
To address these challenges, it is recommended that three key stakeholders—news organizations, journalists, and the audience—proactively embrace their roles in upholding journalistic ethics. For news organizations, responsible use of GAI is paramount. This involves ensuring that the benefits of GAI are harnessed for both the organization and the audience while mitigating the risks of amplifying biases and spreading misinformation with potential societal harm. Industry associations should establish guidelines or ethical standards for GAI application in journalism to facilitate this.
Journalists need to undergo a fundamental shift in their role, moving beyond simply producing information to serving as diligent fact-checkers. They must meticulously verify the credibility of news generated by GAI across a wide range of topics and domains, ensuring that only accurate information is disseminated to the public. Furthermore, journalists have a responsibility to actively inform the public of the truth and to critique any content that fails to meet the rigorous standards of journalism. In addition to these responsibilities, journalists should focus on enhancing their cross-disciplinary integration skills. This involves not only discovering news but also effectively integrating and organizing content from various sources and perspectives. Moreover, journalists must develop strong planning, organizing, and coordinating abilities through practical experience. By leveraging their innate human qualities such as intuition, adaptability, creativity, critical thinking, and a sense of humanistic care, journalists can maintain their agency and integrity in collaboration with GAI.
Audience members using news platforms driven by GAI should undergo thorough media literacy training to effectively navigate this evolving landscape. They must understand the nuances of GAI application to discern between true and false information generated by these systems. Additionally, it is vital to encourage users to minimize their reliance on GAI whenever possible, fostering critical thinking and independent judgment. Collaborative efforts from various stakeholders are essential to enhance audience media literacy. News organizations and journalists should seamlessly integrate media literacy education into their news practices, actively encouraging the audience to detect false information by facilitating the sharing of pertinent educational materials. Strengthening media literacy skills empowers the audience to adeptly identify misinformation and engage more meaningfully in the news ecosystem.

Author Contributions

Conceptualization, Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, L.S.; supervision, Y.S.; project administration, L.S.; funding acquisition, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Tianjin Education Science Planning Project “Theoretical and Practical Research on Public Art Education as a Path for Building a Learning Society”, grant number EGE210268 and Special funding project for basic scientific research business expenses of Chinese central universities “Research on the Index System of the Influence of Chinese Civilization Communication Power”, grant number CUC230D046.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Acemoglu, Daron. 2021. Harms of AI. Working Paper No: 29247. Cambridge: National Bureau of Economic Research. [Google Scholar]
  2. Alim, Arjun Neil. 2023. Daily Mirror Publisher Explores Using ChatGPT to Help Write Local News. Financial Times. February 20. Available online: https://www.ft.com/content/4fae2380-d7a7-410c-9eed-91fd1411f977 (accessed on 7 April 2024).
  3. Alkaissi, Hussam, and Samy I. McFarlane. 2023. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus 15: e35179. [Google Scholar] [CrossRef] [PubMed]
  4. Andrew, Baker, and Casey Huang. 2023. Data breaches in the age of surveillance capitalism: Do disclosures have a new role to play? Critical Perspectives on Accounting 90: 102396. [Google Scholar] [CrossRef]
  5. Arguedas, Amy Ross, and Felix M. Simon. 2023. Automating Democracy: Generative AI, Journalism, and the Future of Democracy. Oxford: Balliol Interdisciplinary Institute, University of Oxford. [Google Scholar]
  6. Bianchi, Federico, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and Aylin Caliskan. 2023. Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. Paper presented at the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, June 12–15; pp. 1493–504. [Google Scholar]
  7. Bloomberg. 2023. Introducing BloombergGPT, Bloomberg’s 50-Billion Parameter Large Language Model, Purpose-Built from Scratch for Finance. March 23. Available online: https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/ (accessed on 7 April 2024).
  8. Caswell, Gurevych, Fink, and Konstantin Dörr. 2021. Automated Journalism and Professional Identity: A Professional Community Analysis of News Writers and News Consumers. Digital Journalism 9: 850–69. [Google Scholar]
  9. Chua, Gina. 2023. Semafor. How Chatbots Can Change Journalism. Or Not. February 20. Available online: https://www.semafor.com/article/02/17/2023/how-chatbots-can-change-journalism-or-not (accessed on 11 March 2024).
  10. Cools, Hannes, Baldwin Van Gorp, and Michaël Opgenhaffen. 2023. The levels of automation and autonomy in the AI-augmented newsroom: Toward a multi-level typology of computational journalism. In Research Handbook on Artificial Intelligence and Communication. Cheltenham: Edward Elgar Publishing, pp. 284–99. [Google Scholar]
  11. David, Emilia. 2023. Forbes Now Has Its Own AI Search Engine. October 27. Available online: https://www.theverge.com/2023/10/26/23933799/forbes-generative-ai-search-adelaide (accessed on 26 March 2024).
  12. Diakopoulos, Nicholas. 2019. Automating the News: How Algorithms Are Rewriting the Media. Cambridge: Harvard University Press, p. 16. [Google Scholar]
  13. Dilmaghani, Saharnaz, Matthias R. Brust, Grégoire Danoy, Natalia Cassagnes, and Johnatan Pecero. 2019. Privacy and security of big data in AI systems: A research and standards perspective. Paper presented at 2019 IEEE International Conference on Big Data, Los Angeles, CA, USA, December 9–12; pp. 5737–43. [Google Scholar]
  14. Doherty, Skye, and Viller Stephen. 2020. Prototyping interaction: Designing technology for communication. In Reimagining Communication: Experience. Edited by Michael Filimowicz and Veronika Tzankova. New York: Routledge, pp. 80–96. [Google Scholar]
  15. Fernández, Peña, Simón Fernández, Koldobika Meso Ayerdi, Ainara Larrondo Ureta, and Javier Díaz Noci. 2023. Without journalists, there is no journalism: The social dimension of generative artificial intelligence in the media. El Profesional de la Información 32: 2. [Google Scholar] [CrossRef]
  16. Gondwe, Gregory. 2023. CHATGPT and the Global South: How are journalists in sub-Saharan Africa engaging with generative AI? Online Media and Global Communication 2: 228–49. [Google Scholar] [CrossRef]
  17. Guembe, Blessing, Ambrose Azeta, Sanjay Misra, Victor Chukwudi Osamor, Luis Fernandez-Sanz, and Vera Pospelova. 2022. The Emerging Threat of Ai-Driven Cyber Attacks: A Review. Applied Artificial Intelligence 36: 2037254. [Google Scholar] [CrossRef]
  18. Harcup, Tony. 2023. The Struggle for News Value in the Digital Era. Journalism and Media 4: 902–17. [Google Scholar] [CrossRef]
  19. Harper, Christian, Jenn Francis, and Julie Bennink. 2023. Accenture Technology Vision 2023: Generative AI to Usher in a Bold New Future for Business, Merging Physical and Digital Worlds. March 30. Available online: https://newsroom.accenture.com/news/2023/accenture-technology-vision-2023-generative-ai-to-usher-in-a-bold-new-future-for-business-merging-physical-and-digital-worlds (accessed on 7 April 2024).
  20. Hoes, Emma, Sacha Altay, and Juan Bermeo. 2023. Leveraging ChatGPT for Efficient Fact-Checking. PsyArXiv 3. Available online: https://osf.io/qnjkf (accessed on 7 April 2024).
  21. Hong, Chang, and David Tewksbury. 2024. Can AI Become Walter Cronkite? Testing the Machine Heuristic, the Hostile Media Effect, and Political News Written by Artificial Intelligence. Digital Journalism 12: 1–24. [Google Scholar] [CrossRef]
  22. Ians. 2023. Punjab News Express. World’s First AI-Generated News Channel Called NewsGPT Launched. March 16. Available online: https://www.punjabnewsexpress.com/technology/news/worlds-first-ai-generated-news-channel-called-newsgpt-launched-203017 (accessed on 9 April 2024).
  23. Jovanovic, Mladan, and Mark Campbell. 2022. Generative artificial intelligence: Trends and prospects. Computer 55: 107–12. [Google Scholar] [CrossRef]
  24. Knight, Will. 2023. Meet ChatGPT’s Right-Wing Alter Ego. April 27. Available online: https://www.wired.com/story/fast-forward-meet-chatgpts-right-wing-alter-ego/ (accessed on 11 April 2024).
  25. Korn, Jennifer. 2023. CNN Business, How Companies Are Embracing Generative AI for Employees… or Not. September 22. Available online: https://edition.cnn.com/2023/09/22/tech/generative-ai-corporate-policy/index.html (accessed on 15 March 2024).
  26. Lamri, Jeremy. 2023. How Do Generative Artificial Intelligences (GAI) Actually Work? Medium. January 21. Available online: https://jeremy-lamri.medium.com/how-do-generative-artificial-intelligences-gai-actually-work-42670d1ca19 (accessed on 6 April 2024).
  27. Liu, Xiaomo, Armineh Nourbakhsh, Quanzhi Li, Sameena Shah, Robert Martin, and John Duprey. 2017. Reuters tracer: Toward automated news production using large scale social media data. Paper presented at 2017 IEEE International Conference on Big Data (Big Data), Boston, MA, USA, December 11–14; pp. 1483–93. [Google Scholar]
  28. Mara, Andrew, and Byron Hawk. 2009. Posthuman rhetorics and technical communication. Technical Communication Quarterly 19: 1–10. [Google Scholar] [CrossRef]
  29. Marconi, Francesco, and Till Daldrup. 2018. How the Wall Street Journal Is Preparing Its Journalists to Detect Deepfakes. November 15. Available online: https://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes/ (accessed on 11 April 2024).
  30. Marr, Bernard. 2024. How Generative AI Will Change the Jobs of Journalists. Forbes. March 22. Available online: https://www.forbes.com/sites/bernardmarr/2024/03/22/how-generative-ai-will-change-the-jobs-of-journalists/?sh=64319f1c2847 (accessed on 4 April 2024).
  31. Mauran, Cecily. 2023. OpenAI Is Being Sued for Training ChatGPT with ‘Stolen’ Personal Data. Mashable. June 30. Available online: https://sea.mashable.com/tech/24637/openai-is-being-sued-for-training-chatgpt-with-stolen-personal-data (accessed on 7 April 2024).
  32. McIntosh, Daniel. 2018. We need to talk about data: How digital monopolies arise and why they have power and influence. Journal of Technology Law & Policy 23: 185–213. [Google Scholar]
  33. Miller, Ross. 2015. AP’s ‘Robot Journalists’ Are Writing Their Own Stories Now. January 30. Available online: https://www.theverge.com/2015/1/29/7939067/ap-journalism-automation-robots-financial-reporting (accessed on 9 March 2024).
  34. Moravec, Václav, Veronika Macková, Jakub Sido, and Kamil Ekštein. 2020. The robotic reporter in the Czech news agency: Automated journalism and augmentation in the newsroom. Communication Today 11: 36. [Google Scholar]
  35. Newman, Nic. 2024. Journalism, Media, and Technology Trends and Predictions. Available online: https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2024 (accessed on 7 April 2024).
  36. Nishal, Sachita, and Nicholas Diakopoulos. 2024. Envisioning the Applications and Implications of Generative AI for News Media. arXiv arXiv:2402.18835. [Google Scholar]
  37. Ouchchy, Leila, Allen Coin, and Veljko Dubljević. 2020. AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI & Society 35: 927–36. [Google Scholar]
  38. PAI Staff. 2023. PAI Seeks Public Comment on the AI Procurement and Use Guidebook for Newsrooms. August 3. Available online: https://partnershiponai.org/pai-seeks-public-comment-on-the-ai-procurement-guidebook-for-newsrooms/ (accessed on 7 April 2024).
  39. Patrick, Joseph. 2023. How Does the Conversation between a Journalist and Bard, the AI Chatbot at Google Occur? Heart of Hollywood Magazine. June 2. Available online: https://www.heartofhollywoodmagazine.com/post/how-does-the-conversation-between-a-journalist-and-bard-the-ai-chatbot-at-google-occur (accessed on 14 March 2024).
  40. Pinto, Barbosa. 2024. Artificial Intelligence (AI) in Brazilian Digital Journalism: Historical Context and Innovative Processes. Journalism and Media 5: 325–41. [Google Scholar] [CrossRef]
  41. Radcliffe, Damian. 2023. Mediamakersmeet. Unlocking the Power of AI: 6 Lessons from AP for Publishers. March 3. Available online: https://mediamakersmeet.com/unlocking-the-power-of-ai-6-lessons-from-ap-for-publishers/ (accessed on 10 March 2024).
  42. Reuters. 2017. Reuters News Tracer: Filtering through the Noise of Social Media. May 15. Available online: https://www.reutersagency.com/en/reuters-community/reuters-news-tracer-filtering-through-the-noise-of-social-media/ (accessed on 15 March 2024).
  43. Reuters Institute. 2023. Trends and Predictions 2024. Available online: https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions (accessed on 18 March 2024).
  44. Salvagno, Michele, Fabio Silvio Taccone, and Alberto Giovanni Gerli. 2023. Artificial intelligence hallucinations. Critical Care 27: 180. [Google Scholar] [CrossRef] [PubMed]
  45. Shipley, Gerhard P., and Deborah H. Williams. 2023. Critical AI Theory: The Ontological Problem. Open Journal of Social Sciences 11: 618–35. [Google Scholar] [CrossRef]
  46. Simon, Felix M. 2022. Uneasy bedfellows: AI in the news, platform companies and the issue of journalistic autonomy. Digital Journalism 10: 1832–54. [Google Scholar] [CrossRef]
  47. Solon, Olivia. 2023. Bloomberg Law. Trolls in Slovakian Election Tap AI Deepfakes to Spread Disinfo. September 29. Available online: https://news.bloomberglaw.com/artificial-intelligence/trolls-in-slovakian-election-tap-ai-deepfakes-to-spread-disinfo (accessed on 7 April 2024).
  48. Spennemann, Dirk H. R. 2023. Will the Age of Generative Artificial Intelligence Become an Age of Public Ignorance? Preprints 2023091528. [Google Scholar]
  49. The New York Times. 2023. A Valentine, from A.I. to You. February 13. Available online: https://www.nytimes.com/interactive/2023/02/13/opinion/valentines-day-chatgpt.html (accessed on 4 April 2024).
  50. Thomson, T. J., Daniel Angus, Paula Dootson, Edward Hurcombe, and Adam Smith. 2022. Visual Mis/Disinformation in Journalism and Public Communications: Current Verification Practices, Challenges, and Future Opportunities. Journalism Practice 16: 938–62. [Google Scholar] [CrossRef]
  51. Twipe. 2023. The “Wild West” of Generative AI Experiments, for News Publishers. Mediamakersmeet. May 12. Available online: https://mediamakersmeet.com/the-wild-west-of-generative-ai-experiments-for-news-publishers/ (accessed on 6 April 2024).
  52. Wang, Sitong, Samia Menon, Tao Long, Keren Henderson, Dingzeyu Li, Kevin Crowston, Mark Hansen, Jeffrey V. Nickerson, and Lydia B. Chilton. 2023. ReelFramer: Co-creating news reels on social media with generative AI. arXiv arXiv:2304.09653. [Google Scholar]
  53. Zagorulko, Dmytro I. 2023. ChatGPT in newsrooms: Adherence of AI-generated content to journalism standards and prospects for its implementation in digital media. Vcheni Zapysky TNU Imeni VI Vernadskoho 34: 319–25. [Google Scholar] [CrossRef]
  54. Zhong, Y, and Han Zhang. 2019. “Kuaibi Xiaoxin”: Xinhua News Agency’s First Robot Reporter. Xinwenzhanxian. February 27. Available online: http://media.people.com.cn/GB/n1/2019/0227/c425664-30905230.html (accessed on 6 April 2024).
  55. Zhou, Tong. 2015. This Is a Manuscript Written by a Robot in One Second. September 11. Available online: https://www.im2maker.com/news/20150911/475.html (accessed on 9 March 2024).
  56. Zhou, Mi, Vibhanshu Abhishek, Timothy Derdenger, Jaymo Kim, and Kannan Srinivasan. 2024. Bias in Generative AI. arXiv arXiv:2403.02726. [Google Scholar]
Figure 1. Application of GAI in news production and the associated ethical challenges.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

