Entry

Artificial Intelligence and the Transformation of the Media System

by
Georgiana Camelia Stănescu
Department of Arts and Media, University of Craiova, 200585 Craiova, Romania
Encyclopedia 2026, 6(2), 45; https://doi.org/10.3390/encyclopedia6020045
Submission received: 27 December 2025 / Revised: 12 January 2026 / Accepted: 5 February 2026 / Published: 10 February 2026
(This article belongs to the Collection Encyclopedia of Social Sciences)

Definition

Artificial intelligence is increasingly being used in all branches of the media system and has transformed the way specialists in this field work in recent years. Currently, applications of artificial intelligence are used across a range of processes involved in the production, editing, distribution, and consumption of media content. These include technologies such as generative chatbots, automated transcription, writing, translation, and editing tools, as well as applications for image and video creation. All of these types of applications have taken over a significant portion of the traditional activities carried out by media professionals. From a technological point of view, these uses primarily rely on machine learning, natural language processing, and computer vision techniques, complemented by generative models that automatically analyze, generate, and interpret text, sound, and images. Although these technologies contribute to increased efficiency, faster work, and reduced operating costs, they also pose significant risks, particularly regarding the spread of false information. From a theoretical perspective, artificial intelligence goes beyond the status of a technological tool, being conceptualized as a communicational actor that actively intervenes in the generation, structuring, and circulation of messages, influencing the relationships between producers, content, and audiences in the current media environment.

1. Introduction

Artificial intelligence (AI), particularly generative AI (GenAI), which includes systems capable of generating textual, visual, audio, or multimodal content, is expanding rapidly [1,2,3] and playing an increasingly important role in transforming the contemporary media ecosystem [4]. Interest in AI has grown significantly as its concrete applications expand across industry, society, and public policy [5]. AI-based applications are widely used in all branches of the media sphere and continue to evolve rapidly [6], from journalism and social platforms to strategic communication, influencing content production processes, information distribution, message personalization, and decision-making.
The literature on the use of AI in the media has expanded rapidly in recent years, addressing applications such as automated journalism [7,8,9], content moderation [10,11], and audience analysis [12,13]. However, existing research is scattered across diverse disciplinary perspectives, including media studies, computer science, social sciences, and public policy. As a result, few works provide an integrated, state-of-the-art synthesis of consolidated knowledge on the technological, ethical, and institutional implications of AI in the media.
Existing studies highlight benefits such as increased productivity, improved content accessibility, and the sustainability of data-driven journalism [14,15]. At the same time, research points to persistent risks, including the amplification of misinformation, diminished editorial control, and the erosion of public trust in media institutions [16,17].
In this context, media literacy and artificial intelligence literacy have become essential components for the responsible use of AI in the media. Both media professionals and the general public need to understand how algorithmic systems work, the limitations of automatically generated content, and the ethical implications of information automation [18,19]. At the same time, the literature emphasises the need to develop clear editorial policies and institutional governance frameworks that regulate the use of AI in accordance with the fundamental values of the media [20].
In an analytical sense, this entry correlates the main areas of application of artificial intelligence in the media with their technological, ethical, and institutional implications. The approach follows the media production chain, then moderation, verification, and governance, and synthesizes the existing literature around common axes of analysis, providing a guidance tool for researchers, practitioners, and decision-makers.

2. History

The relationship between artificial intelligence and the media system has evolved gradually alongside the development of digital technologies and automation. Natural language generation technology, which involves the automatic production of text from structured digital data, has undergone rapid development over the last ten years [8]. Still, its emergence can be traced back to early research into machine translation in the 1950s [21]. In fact, the term artificial intelligence was coined by John McCarthy, who explained it as “the science and engineering of making intelligent machines” [22,23]. Marvin Minsky, another founder of artificial intelligence research, defined AI as “the science of making machines do things that would require intelligence if done by humans” [24]. Early research focused primarily on computer science and analyzed the relationship between intelligent applications and human behavior [24]. Later, the idea of replacing humans with artificial intelligence emerged, as McLuhan [25] explained. Early applications focused on areas such as logical problem solving, expert systems used in medicine and industry, automatic planning, and robotics [26]. These directions were dominated by symbolic approaches and rule-based systems, which sought to reproduce human reasoning in well-defined contexts. Subsequently, interest in artificial intelligence expanded beyond the field of computer science and began to affect communication science research as well. In fact, some researchers have explained that artificial intelligence has been linked to communication from the outset; one example is Alan Turing’s test, which focuses on how people perceive and interpret interaction with a machine when assessing its intelligence [27,28]. Between 1980 and 2000, research in the field of artificial intelligence began to focus increasingly on language and machine translation [29].
This shift was made possible by more powerful computers and the development of increasingly efficient statistical methods, followed later by advanced machine learning techniques. Natural language has become a central area of artificial intelligence because it is the basis of human interaction: through language, we produce knowledge, communicate in public spaces, and organize our social lives [30]. In this context, technologies capable of automatically analyzing and translating texts were the first through which artificial intelligence began to have a direct impact on the media and the way journalism is practiced. During this period, research started to focus on automating media processes through rule-based systems and expert systems. These technologies were mainly used for digital archiving, content classification, and media database management [31].
After 2010, the literature began to document the emergence of automated journalism [32,33]. At this stage, AI was mainly perceived as a tool for streamlining content production, with algorithms that edit, aggregate, and distribute content, reshaping both media production and consumption processes [34].
As AI applications expanded into newsrooms and media platforms, academic research began to address the ethical, social, and professional implications of using algorithms in the media [19,35,36,37]. The literature has highlighted risks such as lack of algorithmic transparency, reproduction of biases, erosion of editorial responsibility, and impact on journalistic autonomy. During this period, the concept of algorithmic accountability [38] became central to studies on AI and media.
The introduction of advanced generative AI models (such as large language models and multimodal systems) marked a new stage in the evolution of the field [39]. AI is no longer just a support tool, but an active player in the production, personalisation, and distribution of media content. This transformation has intensified debates about the authenticity of information, disinformation, deepfakes, and the need for robust editorial governance frameworks.
Understanding the historical development of artificial intelligence in the media field is important because it highlights the evolution of the objectives and capabilities of these technologies. Early applications of AI in the media focused primarily on automating routine tasks such as content indexing, news recommendation, and digital archive management, contributing to increased operational efficiency. Subsequent stages included significant advances in machine learning, big data analysis, and content distribution personalization, transforming the way information is produced and consumed. The emergence of generative artificial intelligence and large language models represents a major turning point, enabling the automatic generation of text, images, and audiovisual material, as well as more sophisticated interactions with audiences. This historical perspective highlights that current AI tools in the media are the result of incremental innovations and continuous technological learning, providing an essential framework for understanding their responsible and effective use in the contemporary media system.

3. Artificial Intelligence Applications in the Media Sector

3.1. Newsgathering

In the contemporary media context, the newsgathering process is increasingly influenced by the integration of artificial intelligence [40], which provides advanced tools for identifying, evaluating, and interpreting topics with the potential to become news [41]. Artificial intelligence-based systems can support content discovery [42] by analysing large volumes of data [43], both structured (such as court records or administrative databases) and unstructured (legislative documents, reports, or press releases), facilitating the detection of patterns and journalistically relevant themes. This enables access to relevant information [44]. In particular, generative artificial intelligence models can help summarise complex documents and support the sensemaking process, providing journalists with a quick initial understanding of the information and possible editorial angles. AI systems can also identify trends that anticipate which topics have news potential [1]. In addition, AI-based systems can generate ideas for journalistic topics and influence the selection of topics by analysing user interest [45]. Although publications relying exclusively on AI-generated content have begun to emerge, they tend to lack credibility and are therefore not considered reliable journalistic sources. Thus, artificial intelligence does not replace the professional judgment of journalists but functions as a complementary tool that requires constant human validation to ensure accuracy, relevance, and compliance with the ethical standards of journalistic practice. Editorial responsibility, critical evaluation of information, and compliance with ethical standards remain exclusively human tasks, essential for maintaining the credibility and democratic function of journalism.

3.2. Generative Intelligence and Text Writing

In the media sphere, generative artificial intelligence has produced a series of significant structural changes in the way newsrooms work, but also in public communication, influencing both content production processes and the relationship between media institutions and the public. Generative artificial intelligence is a set of technologies that enable the production of original content [2], such as text, images, sounds, or video, by leveraging models trained on large data sets and interacting with requests expressed in the form of text prompts [46]. Therefore, the use of generative intelligence systems allows for the automation of standardised news writing, rapid analysis of large volumes of data, and adaptation of media messages to the needs of diverse audiences, increasing the operational efficiency and competitiveness of media organisations in an accelerated digital environment [47]. In the media field, generative models such as ChatGPT v5, Gemini v3, Grok v4, or Copilot are used to write articles on a wide range of topics. From press releases or Eurostat reports, a complete news text for TV, radio, or digital media can be drafted in a matter of seconds. However, media specialists have raised a number of issues regarding compliance with professional standards. Specialists have explained that writing with the help of generative artificial intelligence based on structured data sets is prone to errors, but summarizing documents can help initiate the journalistic writing process [48]. In the editorial process, AI-based tools can improve activities such as automatic correction of grammatical and spelling errors and suggestions for text changes that improve fluency or coherence [49]. Such tools can also generate texts for social media.

3.3. Generating Images and Video with Artificial Intelligence

The use of AI-generated images in the media has expanded rapidly, becoming an increasingly relevant tool for the production of visual content in traditional media as well as in new forms of media [50]. Generative models allow the creation of realistic and stylised images from simple textual descriptions, significantly reducing the costs and time required for visual production [51,52]. In this context, media organisations use AI-generated photos or videos to illustrate various topics when there is no footage available or to supplement audiovisual narratives in situations where original material is missing or difficult to obtain. This technology is undoubtedly one of the most influential technological innovations of recent decades [53]. These applications are relatively accessible, allow for creative intervention by the person formulating the prompt, and enable aesthetic standards to be met without the need for complex technical equipment. However, the integration of artificially generated images in the media raises issues regarding transparency and the potential for visual manipulation, which requires clear labelling of content generated by artificial intelligence and maintaining rigorous editorial control to protect public trust and the integrity of information. That is why institutions such as the European Union [54] have attempted to create a legislative framework for this field [55].
Although often viewed as a controversial technology, deepfake-based artificial intelligence can be used, under certain conditions, to create specific sequences in video productions. In documentaries in particular, platforms and applications based on code and AI technologies add a new dimension to audiovisual storytelling, allowing for the realistic reconstruction of historical moments or the integration of public figures into digital reconstructions for explanatory and educational purposes. Some researchers have argued that, in the creative industries, this technology should be referred to as synthetic media rather than deepfake.
In Romania, deepfake technology based on artificial intelligence was used by a national television team to recreate a unique moment with a deceased comedy actress. The project involved the use of an actress with similar features, over whom the face and voice of the deceased artist were superimposed using artificial intelligence. The process was carried out using Deepswap software, and the result was an original sequence, not a reinterpretation of older recordings. The script and dialogue were created specifically for this project, and the voice was generated entirely using artificial intelligence technologies [56].

3.4. Integrating Voice Cloning into Media

Today, voice cloning technologies based on artificial intelligence allow for the faithful reproduction of a person’s voice using only a few minutes of audio recordings to generate varied and convincing statements [57]. In digital media, voice cloning has established itself as a strategic technology for the rapid and scalable production of audio-visual content, adapted to online consumption and the dynamics of digital platforms. Voice cloning is a simple technology that transforms text into a voice that is very similar to that of a human [58]. In newsrooms and media organisations, voice cloning is used to quickly produce audio versions of articles, video summaries of news stories, or content adapted for social media feeds. Synthetic voices can be configured to match the identity of a media brand, thus contributing to the consistency of editorial discourse and reducing dependence on traditional voice recordings. At the same time, this technology allows for rapid content customisation by generating different voices depending on the platform, target audience, or editorial format. All of these emerging technologies have led to a point where it is becoming increasingly difficult to differentiate between a human voice and one generated by artificial intelligence. The ability of text-to-speech technologies to faithfully reproduce the intonation, rhythm, and emotional nuances of the human voice contributes to blurring the line between artificial and human voices, with significant implications for the media [59,60,61]. In journalistic audio content, the voice has traditionally functioned as an indicator of human presence, professional authority, and editorial responsibility. When artificially generated voices become difficult to distinguish from human ones, these markers are weakened, complicating source attribution, authenticity assessment, and the relationship of trust between the media and the public. 
At the same time, the use of voice cloning in digital media requires responsible practices, particularly with regard to transparency towards the public. For this reason, clear labeling of artificial voices, explicit editorial rules, and transparent communication about the role of automation are essential for maintaining public trust and the responsible use of voice synthesis and cloning technologies in journalism. Moreover, the potential for misuse—including for information manipulation or fraud—requires explicit recognition of the associated risks [62]. Reducing these risks depends on implementing measures such as transparency regarding the use of synthetic voices, obtaining informed consent, and directing voice cloning applications toward socially valuable purposes [63].
Text-to-speech (TTS) technologies enable the faithful replication of the human voice, including its emotional nuances [64].
In social media, voice cloning is often attributed to digital avatars, images, or videos generated by artificial intelligence, contributing to the emergence of hybrid forms of digital storytelling. Creators and media institutions can use synthetic voices to explain complex topics in an accessible format, to produce multilingual content, or to maintain a constant flow of posts without logistical constraints. In an online space characterised by information overload and competition for attention, these practices are subject to the same requirements of transparency and responsible use.

3.5. Virtual Presenters, Avatars, and Influencers in Media Production

Advances in speech synthesis and visual modeling enable AI presenters to deliver scripted content with an increasingly high level of fluency and realism [65], which explains the growing use of virtual presenters in media production.
An AI presenter is a virtual character that replicates the presentation capabilities of a human presenter using technologies such as speech synthesis, facial expression generation, and other advanced digital methods [66].
These virtual entities are used to provide information, explain topics of public interest such as politics [67], or promote messages in a coherent and consistent manner, adapted to the accelerated pace of media consumption. In the online space, AI avatars can function as digital presenters or virtual influencers, capable of interacting with the public, maintaining a stable visual identity, and generating content frequently, without the constraints specific to human presence. They are visually polished, can work non-stop without breaks or pay, and do not age [68]. In addition, AI presenters allow for productions tailored to specific requirements. The process involves first establishing editorial needs, followed by configuring the AI presenter according to these requirements. For example, an AI presenter for morning news programs is designed and customised according to the particularities of this type of program [69].
The emergence of virtual presenters and influencers created with the help of artificial intelligence may change the traditional way of working in classic and new media. Journalists and content creators are less involved in direct appearances and can focus more on editorial work, strategy, and controlling the messages conveyed through technology. In the field of influencing, AI avatars are used for promotional campaigns, storytelling, or educational content, offering the advantage of total control over the message and a constant presence on social platforms. The use of virtual influencers is a strategic advantage in digital communication, as they tend to capture the attention of young audiences more effectively, especially Generation Z, whose viewing intentions are positively influenced by the innovation and originality associated with content generated by virtual influencers [70]. However, this practice influences the public’s perception of authenticity and trust, making it necessary to use virtual identities transparently and responsibly in order to avoid confusion between human and artificially generated communication [38,71].

3.6. AI-Powered Innovations in Video Post-Production

Artificial intelligence-based technologies play an important role and are extremely useful to image editors, as they can improve the quality of photos and videos [72,73] through video upscaling [74], de-noising [75], image upscaling, and re-colourisation [76] processes. These applications use machine learning and computer vision algorithms to increase the quality of existing video or image material. Text-based diffusion models have also been remarkably successful in generation and editing [77]. Unlike older resolution enhancement applications, AI-based programs first analyse the camera angles and visual context. This makes images or photos clearer, more stable, and closer to the standards of television or media platforms.
In the media, video upscaling is also frequently used to restore and adapt archives to modern broadcast formats such as HD, Full HD, or 4K, without requiring the original content to be re-filmed [78]. This is particularly relevant for television stations and digital platforms that leverage historical material, older reports, or archive footage for reuse. At the same time, image upscaling is integrated into the work of media specialists to improve graphics, photographs, or visual elements used in news, documentaries, and promotional material, raising the overall standard of quality.
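To make the idea of upscaling concrete, the following minimal sketch doubles the resolution of a tiny grayscale frame by nearest-neighbour duplication. This is not how AI upscalers work (they predict plausible missing detail from learned image statistics, as described above); the toy only illustrates the geometric operation that such systems refine:

```python
# Nearest-neighbour 2x upscaling: every pixel becomes a 2x2 block.
# AI-based upscalers instead *predict* the missing detail from visual
# context; this naive version shows only the resolution change.
def upscale2x(img):
    """img: 2-D list of grayscale pixel values; returns a 2x-larger grid."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(2)]  # duplicate horizontally
        out.append(wide)
        out.append(list(wide))                     # duplicate vertically
    return out

frame = [[0, 255],
         [255, 0]]                                 # 2x2 checkerboard
print(upscale2x(frame))                            # 4x4 checkerboard blocks
```

Unlike this blocky duplication, the context-aware models cited above analyse the surrounding image before synthesising new pixels, which is why their results look sharper rather than merely larger.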
De-noising, or the process of removing noise from images, is one of the essential applications of deep learning techniques [79]. The main goal is to restore an image so that it is as close as possible to its “clean” or original version.
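The goal of de-noising, recovering a signal close to its clean original, can be illustrated with a deliberately simple stand-in. Deep-learning de-noisers learn this mapping from large datasets; the sketch below instead uses a 3-sample moving-average filter on a 1-D signal (think of a single image row), which already suppresses an isolated noise spike:

```python
# Toy de-noising: a 3-tap moving-average filter on a 1-D signal.
# Deep-learning de-noisers learn far more sophisticated mappings from
# data; this illustrates only the objective (suppress noise, keep signal).
def denoise(signal):
    """Replace each sample with the mean of its 3-sample neighbourhood."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - 1): i + 2]  # clipped at the edges
        out.append(sum(window) / len(window))
    return out

noisy = [10, 14, 9, 12, 30, 11, 10]  # the 30 is an impulse-noise spike
print(denoise(noisy))                # spike is smoothed toward its neighbours
```

The trade-off visible even in this toy, noise suppression at the cost of blurring genuine detail, is exactly what learned de-noisers are trained to overcome.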
Re-colourisation refers to the technological process of adding colour to images that do not have it (such as black and white images) or to old photos, using artificial intelligence [80].
The integration of these technologies into media post-production significantly optimises the way and time of work, reducing the costs associated with manual processes of correction and restoration of visual materials.

3.7. The Role of Speech Recognition in Improving Media Workflows

Speech recognition is a technology that enables the automatic and accurate conversion of spoken language into written text and is widely used in applications capable of interpreting voice commands or providing answers to verbal questions [81]. Artificial intelligence-based speech recognition is increasingly used in media to make work processes faster and more efficient [82]. This technology quickly converts speech into text, which helps transcribe interviews, video, or audio material, and organise content more easily. This allows journalists and content creators to save time and focus more on writing and analysis instead of repetitive technical tasks [83].
In digital media and online platforms, speech recognition is widely used for the automatic generation of subtitles, descriptions, and summaries of audio-visual content. This technology also facilitates the rapid search for relevant information in extensive media materials and supports the reuse of content in multiple formats, adapted to different distribution channels [84]. In this context, the integration of automatic speech recognition as a core component of media processing pipelines enables the streamlining of production workflows and the scalable handling of large volumes of audiovisual content, while maintaining accessibility and supporting efficient downstream editing and distribution processes [85].

3.8. Artificial Intelligence-Assisted Content Moderation

Content moderation using artificial intelligence algorithms is a common practice in the digital media ecosystem, both on social platforms and in online publications that allow comments [86,87]. In the context of vast volumes of user-generated content, AI-based systems are used to identify, classify, and limit the distribution of materials that violate editorial policies or platform rules. Thus, posts that include hate speech [88], harassment, misinformation, violent or sexual content, spam, as well as coordinated information manipulation behaviors are removed using algorithms. In practice, AI-assisted moderation combines automatic content filtering, through detection and sorting processes, with soft moderation interventions, such as tagging, warning users, or reducing content visibility through ranking mechanisms, as well as referring sensitive cases to human moderators [89,90,91]. In this way, artificial intelligence functions as a preliminary assessment mechanism, enabling rapid risk management in an information environment characterised by high speed of circulation and heightened potential for virality [92].
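The tiered flow described above (automatic filtering, soft interventions such as down-ranking, and escalation of sensitive cases to human moderators) can be sketched as a toy rule-based pipeline. All word lists, thresholds, and decision labels below are illustrative placeholders, not the policy of any real platform, whose systems rely on learned classifiers rather than keyword rules:

```python
# Toy three-tier moderation sketch: hard filtering, soft moderation,
# and escalation to a human moderator. The word lists and thresholds
# are hypothetical placeholders, not real platform policy.

BLOCKLIST = {"slur1", "slur2"}           # hypothetical hard-violation terms
WATCHLIST = {"miracle", "cure", "hoax"}  # hypothetical borderline terms

def moderate(post: str) -> str:
    words = {w.strip(".,!?").lower() for w in post.split()}
    if words & BLOCKLIST:
        return "remove"                  # automatic content filtering
    borderline = len(words & WATCHLIST)
    if borderline >= 2:
        return "escalate_to_human"       # sensitive case for a human moderator
    if borderline == 1:
        return "reduce_visibility"       # soft moderation (label / down-rank)
    return "publish"

print(moderate("This miracle cure works!"))   # escalate_to_human
print(moderate("New study published today"))  # publish
```

Real systems replace the keyword sets with trained classifiers and ranking signals, but the staged structure, machine triage first, human judgment for ambiguous cases, mirrors the hybrid practice the literature describes [89,90,91].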
At the same time, the use of artificial intelligence in content moderation raises significant normative and ethical issues, such as the opacity of algorithmic criteria, the lack of consistency in decisions across different languages and cultural contexts, the risk of overblocking or underblocking, and the possibility of reproducing biases existing in training data [93,94,95]. For this reason, studies show that it is very important to have clear rules and transparency in the use of artificial intelligence so that content moderation does not limit freedom of expression and does not affect public trust in the media [66,96].

3.9. Verifying Information and Combating Disinformation

In the current digital environment, disinformation has intensified considerably, especially on social networks, and automated content generation is scaling up the challenge. The phenomenon of fake news can affect public trust, socio-political decisions, and institutional stability, making it essential to detect and correct false information quickly [97].
Artificial intelligence (AI) is used to automate and improve information verification processes by classifying and analysing the digital signatures of textual and multimodal content. Among the most widely used techniques are machine learning algorithms and deep learning neural networks, capable of distinguishing real news from fabricated news based on semantic patterns, syntactic structure, and stylistic characteristics specific to disinformation [98]. Recent studies show that sophisticated models based on transformers and hybrid techniques can detect fake news with high accuracy [99]. AI extends the analysis to multimodal content, including manipulated images and videos, which is relevant in the context of the proliferation of deepfakes and visual manipulations. Explainable multimodal models combine textual and visual signals, allowing for more robust identification of false or misleading content in media sources [100]. At the same time, detection techniques are not limited to the content itself, but also monitor dissemination patterns on social networks [101]. In addition to automated detection, the current literature emphasises explainable, multimodal, and contextually adaptable models [102].
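The underlying principle of these detectors, learning lexical and stylistic patterns from labelled examples, can be illustrated with a minimal bag-of-words Naive Bayes classifier. This is a deliberately simplified stand-in: the transformer-based and multimodal systems cited above use vastly richer representations, and the four training headlines here are invented toy data:

```python
# Minimal bag-of-words Naive Bayes sketch of fake-news classification.
# Production detectors use transformer models and multimodal signals;
# this toy shows only how lexical patterns are learned from labels.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs -> per-label word counts, priors."""
    counts, totals = {}, Counter()
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label, wc in counts.items():
        # log prior + Laplace-smoothed log likelihood of each word
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(wc.values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((wc[w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

data = [  # invented toy headlines, labelled by hand
    ("shocking secret cure doctors hate", "fake"),
    ("you will not believe this miracle trick", "fake"),
    ("parliament approves national budget for 2026", "real"),
    ("central bank holds interest rates steady", "real"),
]
counts, totals = train(data)
print(classify("shocking miracle trick doctors hate", counts, totals))  # fake
```

Even this toy captures the intuition in the sources: sensationalist lexical style correlates with fabricated content, which is why stylistic features remain useful signals alongside semantic and network-based ones.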
By using these technologies, media institutions and social platforms improve their ability to respond quickly to false content, supporting quality journalism and maintaining public trust.
Although artificial intelligence is used to verify information and identify misinformation, its use for this purpose involves specific risks that need to be openly discussed. Automated systems can make mistakes, either by mislabeling accurate information or missing problematic content. They can also be influenced by context or tricked by deliberate tactics, and they can pick up biases from the data they were trained on. All of these limitations can affect the fairness of editorial decisions and call into question the legitimacy of interventions made based on these systems.
In addition, over-reliance on automated tools can create a false sense of security, reducing journalists’ attention and critical thinking and shifting responsibility to technologies that are not fully transparent. For this reason, AI-assisted fact-checking must be complemented by clear editorial procedures, internal control mechanisms, and transparent communication about the limitations of these tools [103].

3.10. Examples of AI-Driven Practices in the Media Sector

Media organizations such as Bloomberg, BBC, and Associated Press have been using these technologies for several years to increase productivity and analyze large volumes of data, with visible results in news automation, supporting investigations through computerized tools, and developing solutions to combat misinformation. For example, Quinonez and Meij provide a detailed analysis of how Bloomberg has integrated tools such as BloombergGPT and initiatives developed within the News Innovation Lab, highlighting the adaptation of editorial procedures, the restructuring of workflows, and, above all, the human oversight mechanisms implemented for the responsible use of artificial intelligence [104]. The authors also point out a number of vulnerabilities, including the risk of hostile attacks on systems and the lack of independent validation mechanisms from external audiences.
The Associated Press (AP) is one of the first newsrooms to integrate natural language generation (NLG) into journalistic practices. The technology has been used primarily for writing financial reports and automatic data analysis, helping to increase editorial efficiency without compromising the accuracy of published information [105].
De-Lima-Santos and Salaverría conducted a comprehensive analysis of how La Nación in Argentina automated some of its tasks. Although the newsroom integrated a series of AI applications, infrastructure and cost constraints limited the expansion of automation. The case illustrates that the adoption of automated systems also depends on resource capacity [106].
At the same time, the literature points to an approach characterised by “cautious optimisation,” highlighting risks related to the loss of editorial autonomy through dependence on external technological infrastructures and algorithms, as well as a lack of transparency towards the public. These challenges are also reflected in the level of trust among audiences, who tend to be sceptical of content generated entirely by artificial intelligence and prefer hybrid models based on collaboration between journalists and automated systems.
Current research on the use of artificial intelligence in the media is marked by three major structural limitations. First, there is a significant time lag between the rapid pace of AI tool implementation in newsrooms and the much slower development of ethical, normative, and regulatory frameworks, which makes governance predominantly reactive. Second, the empirical literature is geographically concentrated in high-income countries, while regions in the Global South are understudied, even though they face distinct challenges related to infrastructure, language, censorship, and political pressures. Third, the field is characterised by marked disciplinary fragmentation, with a separation between technical research, ethical studies, and journalistic analyses, which limits the development of integrated frameworks for the responsible use of artificial intelligence in the media [107].

4. Challenges and Ethical Issues in the Use of Artificial Intelligence in the Media

The integration of artificial intelligence in the media sphere has brought significant benefits in terms of efficiency, production speed, and diversification of content formats. At the same time, it has important limitations that conflict with professional norms and values [46], and its widespread use raises a number of ethical challenges and concerns [45,108,109] that directly influence the credibility of the media, its relationship with the public, and the role of professionals in the field. One of the most critical issues concerns the accuracy of information: artificial intelligence systems, especially generative ones, can produce content that is linguistically plausible but factually incorrect or incomplete, which increases the risk of misinformation in the absence of rigorous editorial verification [110].
Another major challenge is transparency [111]. The public is not always informed whether a news article, video, image, or voice has been generated or modified using artificial intelligence [112]. The lack of clear labelling of synthetic content can mislead media consumers and undermine their trust. In this context, transparency becomes an essential ethical requirement, especially in a media ecosystem characterised by information overload and intense competition for attention.
Issues related to manipulation and falsification are another critical dimension [113]. Technologies such as deepfake videos or voice cloning can be used for legitimate purposes, but also to manipulate public opinion, compromise people’s reputations, or spread fake news. The ability of artificial intelligence to create highly realistic content makes it increasingly difficult to distinguish between the real and the synthetic, putting additional pressure on newsrooms and information verification mechanisms.
A major concern relates to algorithmic opacity, where researchers [114] warn that obscure decision-making processes can undermine democratic autonomy and institutional accountability.
Furthermore, journalism’s dependence on large technology companies represents both a technical shift and a reconfiguration of power structures that carries profound democratic risks. On the one hand, the power to shape public opinion is shifting from traditional media organizations to platforms that provide critical data and infrastructure [115]. This trend accelerates the centralization of power in the hands of a small number of technology companies, forcing local publications to tie themselves to big tech platforms in order to survive [116]. These companies thus become major players in the development and application of AI in journalism, deepening existing dependencies [117]. On the other hand, the implementation of AI-generated summaries in search engines threatens the economic foundation of journalism by drastically reducing traffic to original sources [118]. This practice also erodes the brand value of media organizations, as the context of the information is lost in favor of the synthesis provided by the platform. AI thus becomes an intermediary between journalists and audiences, an “interpreter” that synthesizes content so that journalists’ words may never reach the public directly [119]. Furthermore, the use of AI challenges “human originality”, the reliance on one’s own ideas and direct observations that forms the essence of professional identity [120]. Although the literature identifies common themes, particularly in the area of ethical concerns, research findings on the use of artificial intelligence in the media vary significantly depending on the geographical, economic, and political context.
Comparative studies indicate a structural gap between the Global North and the Global South: media and fact-checking organizations in Europe have the institutional and technological resources to develop their own AI systems, while organizations in Latin America face major financial and institutional constraints [121]. In Africa, the adoption of AI in newsrooms is limited by infrastructural disparities, political pressures, and the lack of stable regulatory frameworks, unlike in the Western context [122]. Similarly, case studies in Argentina show that although AI technologies can support data journalism and investigations, the high costs of infrastructure and access to data restrict their widespread use [123].
As discussed above, AI-assisted verification itself carries distinct risks: detection models can produce false positives and negatives, remain sensitive to contextual shifts and adversarial tactics, and reproduce biases from their training data, affecting both the fairness of editorial decisions and the legitimacy of interventions. Overreliance on such tools can likewise create a false sense of security, reducing professional vigilance and shifting responsibility to opaque systems; AI-assisted verification must therefore be accompanied by clear editorial procedures, internal auditing, and transparency about the tools’ limitations.
To limit the adverse effects of artificial intelligence, such as misinformation and hate speech, international bodies have developed guidelines to clarify how these technologies should be used and assessed [38]. In this context, the European Union has adopted a regulatory framework for artificial intelligence [54], structured around the principle of risk assessment, which imposes obligations proportional to the level of risk associated with each system. The EU framework on AI (AI Act) is built on a risk-based approach that differentiates obligations according to the potential impact on rights and safety: certain practices are prohibited; high-risk systems are subject to data governance, documentation, conformity assessment, and human oversight requirements; and limited-risk uses carry transparency obligations (e.g., informing users when they interact with an automated system and labeling synthetic content in relevant circumstances). For the media sector, the relevance of this architecture is twofold: (1) it shifts the discussion from voluntary ethics to compliance and accountability, and (2) it creates institutional expectations regarding traceability, transparency, and risk management, with direct effects on editorial practices and public trust.
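The risk-based architecture described above can be summarized as a tier-to-obligation mapping. The sketch below is a didactic simplification, not a legal summary: the tier names follow the Act's structure, but the obligation lists are abbreviated paraphrases.

```python
# Didactic simplification of the AI Act's risk-based structure;
# obligation lists are abbreviated paraphrases, not legal text.
OBLIGATIONS = {
    "prohibited":   ["practice banned outright"],
    "high_risk":    ["data governance", "technical documentation",
                     "conformity assessment", "human oversight"],
    "limited_risk": ["inform users they are interacting with an AI system",
                     "label synthetic content in relevant circumstances"],
    "minimal_risk": ["no specific obligations (voluntary codes possible)"],
}

def obligations_for(tier: str) -> list[str]:
    """Look up the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

# A newsroom chatbot would plausibly fall in the limited-risk tier:
print(obligations_for("limited_risk"))
```

For media organizations, the practical consequence is that the same editorial AI tool can trigger different compliance duties depending on how it is deployed, which is precisely what shifts the discussion from voluntary ethics to accountability.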

5. Discussion

Artificial intelligence (AI) applications have become part of the current media system, influencing all stages of media production: from newsgathering and editorial production to content moderation and information verification. According to recent literature, AI is no longer used exclusively as a tool for efficiency, but as a socio-technical actor that reconfigures the relationships between media institutions, platforms, and audiences [14,20].
A first point of convergence with existing studies is the augmentative, rather than substitutive, nature of AI use in newsrooms. Research based on journalists’ perceptions shows that generative tools are appreciated for increasing speed and reducing repetitive tasks, but are viewed with reservation regarding accuracy and editorial autonomy [1,2].
In the area of video post-production, research confirms the maturation of super-resolution, de-noising, and restoration technologies, with clear benefits for media archives and adaptation to modern broadcasting standards [77,79,81]. However, from a media perspective, these technical improvements must also be evaluated editorially, as “quality” is not only technical, but also semantic and ethical [74,75].
Regarding speech recognition, there is broad consensus on its usefulness for transcription, indexing, and accessibility, contributing to workflow optimisation and content reuse across multiple platforms [86,87].
One area where the literature is particularly convergent is AI-assisted content moderation. Studies show that the practice is inevitably hybrid [10,87,88], and the challenges are not only technical but also institutional [94]. Recent research on moderation with large language models shifts the focus from accuracy to legitimacy, confirming the relevance of algorithmic accountability frameworks [11,38].
Finally, regarding the phenomenon of disinformation, research highlights AI’s dual role: on the one hand, it reduces barriers to the production of false content; on the other hand, it is used for detection and multimodal analysis [16,101].
Recent studies indicate high performance of detection models, but emphasize the need to combine them with editorial procedures and explainable models [102,103].
Although some technological trends suggest convergence (similar tools, global infrastructures, standardized practices), the literature points to cultural and market differences in adoption, professional perceptions, and public acceptance of AI. These variations are influenced by media organizations’ resources, regulatory regimes, local professional norms, platform language and infrastructure, and levels of trust in media and institutions. Therefore, generalizations must be nuanced: what works as good practice in a well-funded newsroom or in a market with clear rules may yield different results in contexts with limited resources or fragmented media ecologies, which warrants greater attention to cross-cultural comparisons and market studies. In addition, the political and ideological context significantly influences research results. In China, the use of AI is analyzed in relation to the consolidation of ideological echo chambers, and limited access to empirical data, caused by censorship, fundamentally differentiates this context from liberal democracies [123]. In the Gulf region, AI acceptance is correlated with specific cultural factors, such as institutional trust and hedonic motivation, highlighting differences from Western media markets [124].
Finally, journalism’s growing dependence on large technology companies appears to be a central issue that cuts across technical, ethical, and democratic dimensions. Platform-controlled infrastructures, AI-generated summaries in search engines, and opaque algorithmic mediation shift agenda-setting power from media institutions to technology providers, threatening editorial autonomy, economic sustainability, and media pluralism [119,120,121,122,123]. These dynamics reinforce the need to place the adoption of artificial intelligence within broader debates on platform governance, power asymmetries, and democratic accountability.

6. Conclusions

Artificial intelligence has entered the media system forcefully, providing advanced tools that influence the production, moderation, and verification of informational content. At the same time, integrating these technologies poses significant ethical, institutional, and professional challenges. The lack of fully harmonised regulatory frameworks, differences in digital maturity among media organisations, and asymmetries between technological capabilities and editorial control underscore the need for a strategic, responsible integration of artificial intelligence into the media ecosystem.
The integration of artificial intelligence into the media should be grounded in four key pillars. These are consistent with the European Union’s risk-based approach and its emphasis on transparency and accountability, while adapting these principles to the specific needs of the media sector.
  • Editorial responsibility and decision-making
Clear editorial standards must accompany the use of artificial intelligence in the media. Human oversight, transparency regarding the use of content assisted or generated by artificial intelligence, and the explicit delineation of editorial responsibility are essential conditions for maintaining journalistic credibility and public trust.
  • Media literacy and artificial intelligence literacy
Developing skills to understand artificial intelligence among media professionals and the general public is crucial. Knowing how algorithmic systems work, their limitations, and their potential biases supports a critical and informed relationship with automated media content.
  • Responsible technological integration
Artificial intelligence should not be used exclusively as a tool for efficiency or cost reduction. Its integration into journalistic workflows, content production, and post-production must support the fundamental professional values of the media, such as accuracy, diversity, and contextualization of information.
  • Institutional governance and regulatory adaptation
Media organizations and digital platforms must develop coherent governance frameworks that balance technological innovation with the protection of ethical and democratic values. Internal policies must be transparent, flexible, and adapted to specific technological, cultural, and legislative contexts.
Beyond these operational principles, the analysis highlights the need for a more reflexive approach to artificial intelligence as a socio-technological system integrated into existing power relations. The growing dependence of media organizations on proprietary technological infrastructures, digital platforms, and external AI providers generates structural power asymmetries between technology companies, platforms, and media institutions. These imbalances affect editorial autonomy, economic sustainability, and media pluralism, especially in the case of smaller or resource-constrained media organizations. Therefore, the integration of artificial intelligence into journalism cannot be treated exclusively as a technological or managerial decision, but must be understood as part of a broader structural transformation of the media field.
In this context, the issue of ethics cannot be reduced to voluntary codes or abstract principles, but is closely linked to questions of governance, responsibility, and legitimacy: who develops artificial intelligence systems, who controls their use, who sets the rules, and who is accountable in the event of negative effects. Similarly, regulatory mechanisms require critical analysis. Although the risk-based regulatory framework promoted by the European Union is an important step towards accountability and transparency, it is not free from ideological assumptions and does not solve all governance problems. At the same time, industry self-regulation models remain unclear in terms of legitimacy, applicability, and ability to protect the public interest.
Operationalising these principles requires evidence-based governance mechanisms, such as transparency by design, human oversight of editorial decisions with significant impact, compliance with data protection standards, and ensuring equitable access to artificial intelligence-based technologies within the media ecosystem.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Cools, H.; Diakopoulos, N. Uses of Generative AI in the Newsroom: Mapping Journalists’ Perceptions of Perils and Possibilities. J. Pract. 2024, 1–19. [Google Scholar] [CrossRef]
  2. Van Dalen, A. Revisiting the Algorithms behind the Headlines. How Journalists Respond to Professional Competition of Generative AI. J. Pract. 2024, 1–18. [Google Scholar] [CrossRef]
  3. Xu, J.; Qu, R.; Zhou, S.; Ren, J. Generative Artificial Intelligence and the Media Digital Divide: Comparison of Four Tiers of Chinese Media. J. Pract. 2025, 1–19. [Google Scholar] [CrossRef]
  4. Leaver, T.; Srdarov, S. ChatGPT Isn’t Magic. M/C J. 2023, 26. [Google Scholar] [CrossRef]
  5. Mühlhoff, R. Human-Aided Artificial Intelligence: Or, How to Run Large Computations in Human Brains? Toward a Media Sociology of Machine Learning. New Media Soc. 2019, 22, 1868–1884. [Google Scholar] [CrossRef]
  6. Silvia, T. Artificial Intelligence in Journalism: Changing the News; McFarland: Jefferson, NC, USA, 2025. [Google Scholar]
  7. Wang, R.; Ophir, Y. Behind the Black Box: The Moderating Role of the Machine Heuristic on the Effect of Transparency Information about Automated Journalism on Hostile Media Bias Perception. Journalism 2024, 27, 103–121. [Google Scholar] [CrossRef]
  8. Caswell, D.; Dörr, K. Automated Journalism 2.0: Event-Driven Narratives. J. Pract. 2017, 12, 477–496. [Google Scholar] [CrossRef]
  9. Thäsler-Kordonouri, S.; Barling, K. Automated Journalism in UK Local Newsrooms: Attitudes, Integration, Impact. J. Pract. 2023, 19, 58–75. [Google Scholar] [CrossRef]
  10. Gillespie, T. Content Moderation, AI, and the Question of Scale. Big Data Soc. 2020, 7, 205395172094323. [Google Scholar] [CrossRef]
  11. Huang, T. Content Moderation by LLM: From Accuracy to Legitimacy. Artif. Intell. Rev. 2025, 58, 320. [Google Scholar] [CrossRef]
  12. Steensen, S.; Ferrer-Conill, R.; Peters, C. (Against a) Theory of Audience Engagement with News. J. Stud. 2020, 21, 1662–1680. [Google Scholar] [CrossRef]
  13. Lim, J.S.; Shin, D.; Zhang, J.; Masiclat, S.; Luttrell, R.; Kinsey, D. News Audiences in the Age of Artificial Intelligence: Perceptions and Behaviors of Optimizers, Mainstreamers, and Skeptics. J. Broadcast. Electron. Media 2022, 67, 353–375. [Google Scholar] [CrossRef]
  14. Broussard, M.; Diakopoulos, N.; Guzman, A.L.; Abebe, R.; Dupagne, M.; Chuan, C.-H. Artificial Intelligence and Journalism. J. Mass Commun. Q. 2019, 96, 673–695. [Google Scholar] [CrossRef]
  15. Hassan, A.; Albayari, A. The Usage of Artificial Intelligence in Journalism. In Studies in Computational Intelligence; Springer: Cham, Switzerland, 2022; pp. 175–197. [Google Scholar]
  16. Bontridder, N.; Poullet, Y. The Role of Artificial Intelligence in Disinformation. Data Policy 2021, 3, e32. [Google Scholar] [CrossRef]
  17. García-Faroldi, L.; Teruel, L.; Blanco, S. Unmasking AI’s Role in the Age of Disinformation: Friend or Foe? J. Media 2025, 6, 19. [Google Scholar] [CrossRef]
  18. Tiernan, P.; Costello, E.; Donlon, E.; Parysz, M.; Scriney, M. Information and Media Literacy in the Age of AI: Options for the Future. Educ. Sci. 2023, 13, 906. [Google Scholar] [CrossRef]
  19. Sánchez-García, P.; Diez-Gracia, A.; Mayorga, I.R.; Jerónimo, P. Media Self-Regulation in the Use of AI: Limitation of Multimodal Generative Content and Ethical Commitments to Transparency and Verification. J. Media 2025, 6, 29. [Google Scholar] [CrossRef]
  20. Lamprou, S.; Dekoulou, P.; Kalliris, G. The Critical Impact and Socio-Ethical Implications of AI on Content Generation Practices in Media Organizations. Societies 2025, 15, 214. [Google Scholar] [CrossRef]
  21. Reiter, E. Natural Language Generation. In The Handbook of Computational Linguistics and Natural Language Processing; Clark, A., Fox, C., Lappin, S., Eds.; Wiley-Blackwell: Oxford, UK, 2010; pp. 574–598. [Google Scholar]
  22. Myers, A. Stanford’s John McCarthy, Seminal Figure of Artificial Intelligence, Dies at 84. Available online: https://news.stanford.edu/stories/2011/10/stanfords-john-mccarthy-seminal-figure-artificial-intelligence-dies-84 (accessed on 20 December 2025).
  23. Manning, C. Artificial Intelligence Definitions. 2020. Available online: https://hai-production.s3.amazonaws.com/files/2020-09/AI-Definitions-HAI.pdf (accessed on 20 December 2025).
  24. Stonier, T. The Evolution of Machine Intelligence. In Beyond Information; Springer: London, UK, 1992. [Google Scholar] [CrossRef]
  25. McLuhan, M. Understanding Media: The Extensions of Man; McGraw Hill: New York, NY, USA, 1964. [Google Scholar]
  26. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson: Harlow, UK, 2021. [Google Scholar]
  27. Westerman, D.; Edwards, A.P.; Edwards, C.; Luo, Z.; Spence, P.R. I-It, I-Thou, I-Robot: The perceived humanness of AI in human-machine. Commun. Stud. 2020, 71, 393–408. [Google Scholar] [CrossRef]
  28. Shi, Y.; Sun, L. How Generative AI Is Transforming Journalism: Development, Application and Ethics. J. Media 2024, 5, 582–594. [Google Scholar] [CrossRef]
  29. Dörr, K.N. Mapping the Field of Algorithmic Journalism. Digit. J. 2016, 4, 700–722. [Google Scholar] [CrossRef]
  30. Brown, P.F.; Della Pietra, S.A.; Della Pietra, V.J.; Mercer, R.L. The Mathematics of Statistical Machine Translation: Parameter Estimation. Comput. Linguist. 1993, 19, 263–311. [Google Scholar]
  31. Colavizza, G.; Blanke, T.; Jeurgens, C.; Noordegraaf, J. Archives and AI: An Overview of Current Debates and Future Perspectives. J. Comput. Cult. Herit. 2021, 15, 1–15. [Google Scholar] [CrossRef]
  32. Anderson, C.W. Towards a Sociology of Computational and Algorithmic Journalism. New Media Soc. 2012, 15, 1005–1021. [Google Scholar] [CrossRef]
  33. Broussard, M. Artificial Intelligence for Investigative Reporting. Digit. J. 2014, 3, 814–831. [Google Scholar] [CrossRef]
  34. Pavlik, J.V. Innovation and the Future of Journalism. Digit. J. 2013, 1, 181–193. [Google Scholar] [CrossRef]
  35. Kim, H. AI in Journalism: Creating an Ethical Framework. Bachelor’s Thesis, Syracuse University, Syracuse, NY, USA, 2019. [Google Scholar]
  36. Phillips, A. Transparency and the New Ethics of Journalism. J. Pract. 2010, 4, 373–382. [Google Scholar] [CrossRef]
  37. Thurman, N.; Lewis, S.C.; Kunert, J. Algorithms, Automation, and News. Digit. J. 2019, 7, 980–992. [Google Scholar] [CrossRef]
  38. Diakopoulos, N. Algorithmic Accountability. Digit. J. 2014, 3, 398–415. [Google Scholar] [CrossRef]
  39. Bender, S. Generative-AI, the Media Industries, and the Disappearance of Human Creative Labour. Media Pract. Educ. 2024, 26, 200–217. [Google Scholar] [CrossRef]
  40. Porlezza, C. Promoting Responsible AI: A European Perspective on the Governance of Artificial Intelligence in Media and Journalism. Communications 2023, 48, 370–394. [Google Scholar] [CrossRef]
  41. Diakopoulos, N.; Trielli, D.; Lee, G. Towards Understanding and Supporting Journalistic Practices Using Semi-Automated News Discovery Tools. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–30. [Google Scholar] [CrossRef]
  42. Shoemaker, P.J.; Reese, S.D. Mediating the Message in the 21st Century: A Media Sociology Perspective; Routledge: Oxfordshire, UK, 2013. [Google Scholar] [CrossRef]
  43. Hermida, A.; Simon, F.M. AI in the Newsroom: Lessons from the Adoption of the Globe and Mail’s Sophi. J. Pract. 2025, 19, 2323–2340. [Google Scholar] [CrossRef]
  44. Cools, H.; Van Gorp, B.; Opgenhaffen, M. New Organizations, Different Journalistic Roles, and Innovative Projects: How Second-Generation Newsroom Innovation Labs Are Changing the News Ecosystem. J. Pract. 2022, 18, 1605–1620. [Google Scholar] [CrossRef]
  45. Deuze, M.; Beckett, C. Imagination, Algorithms and News: Developing AI Literacy for Journalism. Digit. J. 2022, 10, 1913–1918. [Google Scholar] [CrossRef]
  46. Lorenz, P.; Perset, K.; Berryhill, J. Initial Policy Considerations for Generative Artificial Intelligence; OECD Artificial Intelligence Papers; OECD Publishing: Paris, France, 2023. [Google Scholar]
  47. Gutiérrez-Caneda, B.; Vázquez-Herrero, J.; López-García, X. AI Application in Journalism: ChatGPT and the Uses and Risks of an Emergent Technology. Prof. Inf. 2023, 32, e320514. [Google Scholar] [CrossRef]
  48. Nishal, S.; Diakopoulos, N. Envisioning the Applications and Implications of Generative AI for News Media. arXiv 2024, arXiv:2402.18835. [Google Scholar] [CrossRef]
  49. Du, W.; Kim, Z.M.; Raheja, V.; Kumar, D.; Kang, D. Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision. In Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022); ACL: Stroudsburg, PA, USA, 2022; pp. 96–108. [Google Scholar] [CrossRef]
  50. Ho, J.; Chan, W.; Saharia, C.; Whang, J.; Gao, R.; Gritsenko, A.; Kingma, D.P.; Poole, B.; Norouzi, M.; Fleet, D.J.; et al. Imagen Video: High Definition Video Generation with Diffusion Models. arXiv 2022, arXiv:2210.02303. [Google Scholar] [CrossRef]
  51. Zhang, C.; Zhang, C.; Zhang, M.; Kweon, I.S.; Kim, J. Text-to-Image Diffusion Models in Generative AI: A Survey. arXiv 2023, arXiv:2303.07909. [Google Scholar] [CrossRef]
  52. Kondratyuk, D.; Yu, L.; Gu, X.; Lezama, J.; Huang, J.; Schindler, G.; Hornung, R.; Birodkar, V.; Yan, J.; Chiu, M.-C.; et al. VideoPoet: A Large Language Model for Zero-Shot Video Generation. arXiv 2024, arXiv:2312.14125. [Google Scholar] [CrossRef]
  53. Storey, V.C.; Yue, W.T.; Zhao, J.L.; Lukyanenko, R. Generative Artificial Intelligence: Evolving Technology, Growing Societal Impact, and Opportunities for Information Systems Research. Inf. Syst. Front. 2025, 27, 2081–2102. [Google Scholar] [CrossRef]
  54. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA Relevance). Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689 (accessed on 20 December 2025).
  55. Thomson, T.J.; Thomas, R.J.; Matich, P. Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policies. Digit. J. 2024, 13, 1693–1714. [Google Scholar] [CrossRef]
  56. Boerescu, G. Moment unic la Protevelion 2024: Regretata Draga Olteanu Matei va avea un Moment Spectaculos ca Hologramă. ProTV.ro, 28 December 2023. Available online: https://www.protv.ro/articol/95460-moment-unic-la-protevelion-2024-regretata-draga-olteanu-matei-va-avea-un-moment-spectaculos-ca-holograma (accessed on 10 January 2026).
  57. Bendel, O. The synthetization of human voices. AI Soc. 2019, 34, 83–89. [Google Scholar] [CrossRef]
  58. Franganillo, J. La inteligencia artificial generativa y su impacto en la creación de contenidos mediáticos. Methaodos Rev. Cienc. Soc. 2023, 11, m231102a10. [Google Scholar] [CrossRef]
  59. Genelza, G.G. A Systematic Literature Review on AI Voice Cloning Generator: A Game-Changer or a Threat? J. Emerg. Technol. 2024, 4, 54–61. [Google Scholar]
  60. Dale, R. The voice synthesis business: 2022 update. Nat. Lang. Eng. 2022, 28, 401–408. [Google Scholar] [CrossRef]
  61. González-Docasal, A.; Álvarez, A. Enhancing Voice Cloning Quality through Data Selection and Alignment-Based Metrics. Appl. Sci. 2023, 13, 8049. [Google Scholar] [CrossRef]
  62. Berkowitz, A.E.; Sweeney, M.E. Look Who’s Talking: Voice Cloning as Tension Point Between Identity and Data. Philos. Technol. 2025, 38, 131. [Google Scholar] [CrossRef]
  63. Verma, P.; Oremus, W. AI Voice Clones Mimic Politicians and Celebrities, Reshaping Reality. The Washington Post, 13 October 2023. Available online: https://www.washingtonpost.com/technology/2023/10/13/ai-voice-cloning-deepfakes/ (accessed on 20 December 2025).
  64. Milewski, K.; Zaporowski, S.; Czyżewski, A. Comparison of the ability of neural network model and humans to detect a cloned voice. Electronics 2023, 12, 4458. [Google Scholar] [CrossRef]
  65. Choi, M.; Jang, J.-W. AI in the Newsroom: How Presentation of AI Anchor and Viewers’ Familiarity with AI Shape Perceptions of AI Anchor. Mass Commun. Soc. 2026, 1–16. [Google Scholar] [CrossRef]
  66. Wang, X. AI Anchors’ Development Status and the Prospect of Traditional Hosts in the Era of Artificial Intelligence. Front. Soc. Sci. Technol. 2023, 5, 30–34. [Google Scholar] [CrossRef]
  67. Ndlovu, M. Audience Perceptions of AI-Driven News Presenters: A Case of ‘Alice’ in Zimbabwe. Media Cult. Soc. 2024, 46, 1692–1706. [Google Scholar] [CrossRef]
  68. Tait, A. ‘Here Is the News. You Can’t Stop Us’: AI Anchor Zae-In Grants Us an Interview. The Guardian, 20 October 2023. Available online: https://www.theguardian.com/tv-and-radio/2023/oct/20/here-is-the-news-you-cant-stop-us-ai-anchor-zae-in-grants-us-an-interview (accessed on 10 January 2026).
  69. Li, L. Impact of AI Virtual Anchors on Traditional News Anchors. Int. J. Knowl. Manag. 2024, 21, 1–17. [Google Scholar] [CrossRef]
  70. Choudhry, A.; Han, J.; Xu, X.; Huang, Y. I Felt a Little Crazy Following a ‘Doll’. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–28. [Google Scholar] [CrossRef]
  71. Kim, D.; Wang, Z. The Ethics of Virtuality: Navigating the Complexities of Human-like Virtual Influencers in the Social Media Marketing Realm. Front. Commun. 2023, 8, 1205610. [Google Scholar] [CrossRef]
  72. Gavran, I.; Honcharuk, S.; Mykhalov, V.; Stepanenko, K.; Tsimokh, N. The Impact of Artificial Intelligence on the Production and Editing of Audiovisual Content. Preserv. Digit. Technol. Cult. 2025, 54, 223–235. [Google Scholar] [CrossRef]
  73. Sun, P. A Study of Artificial Intelligence in the Production of Film. SHS Web Conf. 2024, 183, 03004. [Google Scholar] [CrossRef]
  74. Wang, X.; Xie, L.; Dong, C.; Shan, Y. Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV); IEEE: New York, NY, USA, 2021. [Google Scholar]
  75. Laine, S.; Karras, T.; Lehtinen, J.; Aila, T. High-quality self-supervised deep image denoising. arXiv 2019, arXiv:1901.10277. [Google Scholar] [CrossRef]
  76. Salmona, A.; Bouza, L.; Delon, J. DeOldify: A Review and Implementation of an Automatic Colorization Method. Image Process. Line 2022, 12, 347–368. [Google Scholar] [CrossRef]
  77. Zhou, S.; Yang, P.; Wang, J.; Luo, Y.; Loy, C.C. Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution. arXiv 2024, arXiv:2312.06640. [Google Scholar] [CrossRef]
  78. Amiriparian, S.; Hübner, T.; Karas, V.; Gerczuk, M.; Ottl, S.; Schuller, B.W. DeepSpectrumLite: A Power-Efficient Transfer Learning Framework for Embedded Speech and Audio Processing from Decentralized Data. Front. Artif. Intell. 2022, 5, 856232. [Google Scholar] [CrossRef]
  79. Lehtinen, J.; Munkberg, J.; Hasselgren, J.; Laine, S.; Karras, T.; Aittala, M.; Aila, T. Noise2Noise: Learning Image Restoration without Clean Data. arXiv 2018, arXiv:1803.04189. [Google Scholar] [CrossRef]
  80. Zhang, H.; Goodfellow, I.; Metaxas, D.; Odena, A. Self-Attention Generative Adversarial Networks. arXiv 2018, arXiv:1805.08318. [Google Scholar] [CrossRef]
  81. Gillain, E. Demystifying Artificial Intelligence: Symbolic, Data-Driven, Statistical and Ethical AI; De Gruyter: Berlin, Germany, 2024. [Google Scholar]
  82. Fieiras-Ceide, C.; Vaz-Álvarez, M.; Túñez-López, M. Artificial Intelligence Strategies in European Public Broadcasters: Uses, Forecasts and Future Challenges. Prof. Inf. 2022, 31, 20. [Google Scholar] [CrossRef]
  83. De-Lima-Santos, M.F.; Ceron, W. Artificial intelligence in news media: Current perceptions and future outlook. J. Media 2021, 3, 13–26. [Google Scholar] [CrossRef]
  84. Hannun, A.; Case, C.; Casper, J.; Catanzaro, B.; Diamos, G.; Elsen, E.; Prenger, R.; Satheesh, S.; Sengupta, S.; Coates, A.; et al. Deep Speech: Scaling up end-to-end speech recognition. arXiv 2014, arXiv:1412.5567. [Google Scholar]
  85. Lucca, A.; Pierri, F. From Speech to Subtitles: Evaluating ASR Models in Subtitling Italian Television Programs. arXiv 2025, arXiv:2512.19161. [Google Scholar] [CrossRef]
  86. Balaji, T.K.; Annavarapu, C.S.R.; Bablani, A. Machine Learning Algorithms for Social Media Analysis: A Survey. Comput. Sci. Rev. 2021, 40, 100395. [Google Scholar] [CrossRef]
  87. Pradel, F.; Zilinsky, J.; Kosmidis, S.; Theocharis, Y. Toxic Speech and Limited Demand for Content Moderation on Social Media. Am. Political Sci. Rev. 2024, 118, 1895–1912. [Google Scholar] [CrossRef]
  88. Wang, S.; Kim, K.J. Content Moderation on Social Media: Does It Matter Who and Why Moderates Hate Speech? Cyberpsychol. Behav. Soc. Netw. 2023, 26, 527–534. [Google Scholar] [CrossRef]
  89. Gillespie, T. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media; Yale University Press: New Haven, CT, USA, 2018. [Google Scholar]
  90. Roberts, S.T. Behind the Screen: Content Moderation in the Shadows of Social Media; Yale University Press: New Haven, CT, USA, 2019. [Google Scholar]
  91. Jhaver, S.; Birman, I.; Gilbert, E.; Bruckman, A. Human–Machine Collaboration for Content Regulation: The Case of Reddit. ACM TOCHI 2019, 26, 1–35. [Google Scholar] [CrossRef]
  92. Klonick, K. The New Governors: The People, Rules, and Processes Governing Online Speech. Harv. Law Rev. 2018, 131, 1598–1670. [Google Scholar]
  93. Gorwa, R.; Binns, R.; Katzenbach, C. Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data Soc. 2020, 7, 1–15. [Google Scholar] [CrossRef]
  94. Binns, R. Fairness in Machine Learning: Lessons from Political Philosophy. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency; PMLR: New York, NY, USA, 2018; pp. 149–159. [Google Scholar]
  95. Koenecke, A.; Nam, A.; Lake, E.; Nudell, J.; Quartey, M.; Mengesha, Z.; Toups, C.; Rickford, J.R.; Jurafsky, D.; Goel, S. Racial disparities in automated speech recognition. Proc. Natl. Acad. Sci. USA 2020, 117, 7684–7689. [Google Scholar] [CrossRef]
  96. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An ethical framework for a good AI society. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef]
  97. Park, S.; Nan, X. Generative AI and Misinformation: A Scoping Review of the Role of Generative AI in the Generation, Detection, Mitigation, and Impact of Misinformation. AI Soc. 2025, 1–15. [Google Scholar] [CrossRef]
  98. Alshuwaier, F.A.; Alsulaiman, F.A. Fake News Detection Using Machine Learning and Deep Learning Algorithms: A Comprehensive Review and Future Perspectives. Computers 2025, 14, 394. [Google Scholar] [CrossRef]
  99. Ayyasamy, R.K.; Ponnusamy, C.; Bhargavi, K.N.; Cherukuvada, S.; Babu, G.C.; Amutha, S.; Gamu, D.T. A Hybrid Deep Learning Framework for Fake News Detection Using LSTM-CGPNN and Metaheuristic Optimization. Sci. Rep. 2025, 15, 41522. [Google Scholar] [CrossRef]
  100. Jadhav, R.; Meshram, V.; Bhosle, A.; Patil, K.; Dash, S.; Jadhav, S. Explainable Multilingual and Multimodal Fake-News Detection: Toward Robust and Trustworthy AI for Combating Misinformation. Front. Artif. Intell. 2025, 8, 1690616. [Google Scholar] [CrossRef]
  101. Nasser, M.; Arshad, N.I.; Ali, A.; Alhussian, H.; Saeed, F.; Da’u, A.; Nafea, I. A Systematic Review of Multimodal Fake News Detection on Social Media Using Deep Learning Models. Results Eng. 2025, 26, 104752. [Google Scholar] [CrossRef]
  102. Yi, J.; Xu, Z.; Huang, T.; Yu, P. Challenges and Innovations in LLM-Powered Fake News Detection: A Synthesis of Approaches and Future Directions. arXiv 2025, arXiv:2502.00339. [Google Scholar] [CrossRef]
  103. Dierickx, L.; Opdahl, A.L.; Khan, S.A.; Lindén, C.-G.; Rojas, D.C.G. A Data-Centric Approach for Ethical and Trustworthy AI in Journalism. Ethics Inf. Technol. 2024, 26, 64. [Google Scholar] [CrossRef]
  104. Quinonez, C.; Meij, E. A New Era of AI-assisted Journalism at Bloomberg. AI Mag. 2024, 45, 187–199. [Google Scholar] [CrossRef]
  105. Pavlik, J.V. Automation, algorithms, artificial intelligence and cross-border journalism. In The Palgrave Handbook of Cross-Border Journalism; Springer: Cham, Switzerland, 2024. [Google Scholar]
  106. De-Lima-Santos, M.-F.; Salaverría, R. From Data Journalism to Artificial Intelligence: Challenges Faced by La Nación in Implementing Computer Vision in News Reporting. Palabra Clave 2021, 24, e2437. [Google Scholar] [CrossRef]
  107. Molla, M.A.M.; Ahsan, M.M. Artificial Intelligence and Journalism: A Systematic Bibliometric and Thematic Analysis of Global Research. arXiv 2025, arXiv:2507.10891. [Google Scholar] [CrossRef]
  108. Noain-Sánchez, A. Addressing the impact of artificial intelligence on journalism: The perception of experts, journalists and academics. Commun. Soc. 2022, 35, 105–121. [Google Scholar] [CrossRef]
  109. Calvo, D.; Cano-Orón, L.; Morales-I-Gras, J. Unstoppable Implementation. Technological Imaginaries on Artificial Intelligence in Southern European Journalism. J. Pract. 2025, 19, 2209–2229. [Google Scholar] [CrossRef]
  110. Mayopu, R.G.; Nalluri, V.; Chen, L.-S. Detecting ChatGPT Virtual News in the Era of Artificial Intelligence. Enterp. Inf. Syst. 2025, 19, 2508169. [Google Scholar] [CrossRef]
  111. Deuze, M. What is Journalism? Professional Identity and Ideology of Journalists Reconsidered. Journalism 2005, 6, 442–464. [Google Scholar] [CrossRef]
  112. Triantafyllou, S.; Panagopoulos, A.M.; Kapos, P. AI Pioneers and Stragglers in Greece: Challenges, Gaps, and Opportunities for Journalists and Media. Societies 2025, 15, 209. [Google Scholar] [CrossRef]
  113. De-Lima-Santos, M.-F.; Yeung, W.N.; Dodds, T. Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media. AI Soc. 2024, 40, 2585–2603. [Google Scholar] [CrossRef]
  114. Molitorisz, S. A legal cure for news choice overload. Policy Internet 2024, 16, 643–660. [Google Scholar] [CrossRef]
  115. Dodds, T.; De Vreese, C.; Helberger, N.; Resendez, V.; Seipp, T. Popularity-Driven Metrics: Audience Analytics and Shifting Opinion Power to Digital Platforms. J. Stud. 2023, 24, 403–421. [Google Scholar] [CrossRef]
  116. Carlson, M.; Robinson, S.; Lewis, S.C. News After Trump: Journalism’s Crisis of Relevance in a Changed Media Culture; Oxford University Press: Oxford, UK, 2021. [Google Scholar]
  117. Simon, F.M. Uneasy Bedfellows: AI in the News, Platform Companies and the Issue of Journalistic Autonomy. Digit. J. 2022, 10, 1832–1854. [Google Scholar] [CrossRef]
  118. Hagey, K.; Kruppa, M.; Bruell, A. News Publishers See Google’s AI Search Tool as a Traffic-Destroying Nightmare. The Wall Street Journal. Available online: https://www.wsj.com/tech/ai/news-publishers-see-googles-ai-search-tool-as-a-traffic-destroying-nightmare-52154074 (accessed on 29 January 2026).
  119. Guzman, A.L.; Lewis, S.C. What Generative AI Means for the Media Industries, and Why It Matters to Study the Collective Consequences for Advertising, Journalism, and Public Relations. Emerg. Media 2024, 2, 347–355. [Google Scholar] [CrossRef]
  120. Cazzamatta, R.; Sarısakaloğlu, A. AI-Generated Misinformation: A Case Study on Emerging Trends in Fact-Checking Practices across Brazil, Germany, and the United Kingdom. Emerg. Media 2025, 3, 214–251. [Google Scholar] [CrossRef]
  121. Adjin-Tettey, T.D.; Muringa, T.; Danso, S.; Zondi, S. The role of AI in journalism practice in two African countries. J. Media 2024, 5, 846–860. [Google Scholar] [CrossRef]
  122. Umejei, E.; Ayisi, A.; Phiri, M.; Tallam, E. Artificial Intelligence and Journalism in Four African Countries: Optimists, Pessimists, and Pragmatists. J. Pract. 2025, 19, 2249–2265. [Google Scholar] [CrossRef]
  123. Ibañez, D.B.; Jamil, S.; De La Garza Montemayor, D.J. Disinformation and Artificial Intelligence: The Case of Online Journalism in China. Estud. Sobre Mensaje Periodístico 2023, 29, 761–770. [Google Scholar] [CrossRef]
  124. Cools, H.; De Vreese, C.H. From Automation to Transformation with AI-Tools: Exploring the Professional Norms and the Perceptions of Responsible AI in a News Organization. Digit. J. 2025, 1–20. [Google Scholar] [CrossRef]