Editorial

Journalism, Media, and Artificial Intelligence: Let Us Define the Journey

by Rashid Mehmood
Faculty of Computer and Information Systems, Islamic University of Madinah, Madinah 42351, Saudi Arabia
Journal. Media 2025, 6(3), 122; https://doi.org/10.3390/journalmedia6030122
Submission received: 21 July 2025 / Accepted: 28 July 2025 / Published: 30 July 2025
This editorial introduces the Special Issue “Journalism, Media, and Artificial Intelligence: Let Us Define the Journey,” which explores the evolving relationship between journalism and artificial intelligence (AI). This issue collates fifteen peer-reviewed contributions from 36 authors spanning 12 countries, collectively examining AI’s role in content production, investigative reporting, ethical governance, audience perception, and scholarly reflection. A parameter-based framework classifies each contribution under one of five core themes, organizing this diverse collection of work while recognizing the cross-cutting relevance of individual papers. This editorial synthesizes key insights, identifies emerging trends, and proposes a conceptual framework to guide future research. It highlights the challenges and opportunities posed by AI, particularly in relation to trust, transparency, accountability, and professional responsibility. This collection of research aims to support a more informed, inclusive, and critically grounded understanding of AI’s role in shaping the future of journalism.

1. Introduction

We live in an information age, yet, ironically, achieving the core function of journalism—i.e., to provide people with access to unbiased, accurate, and trustworthy information—has never been more difficult. The late 20th-century critique by Herman and Chomsky, articulated in their propaganda model (Herman & Chomsky, 1988), remains alarmingly relevant: the structural inequalities of wealth and power continue to shape what news is produced, how it is framed, and what ultimately reaches the public. These dynamics are exacerbated by contemporary phenomena such as partisanship, misinformation, platform-driven content cycles, and algorithmic opacity.
As UN Secretary-General António Guterres noted, “at a time when disinformation and mistrust of the news media are growing, a free press is essential for peace, justice, sustainable development, and human rights” (UN News, 2019). Yet, despite its foundational role in democratic societies, journalism today struggles to meet these expectations, challenged by financial pressures, weakened editorial independence, and increasing technological disruptions.
This Special Issue, titled “Journalism, Media, and Artificial Intelligence: Let Us Define the Journey”, was created in response to these deep-rooted concerns. Through it, we asked the following questions: What role can artificial intelligence (AI) play in building the next generation of journalism? How can AI be used, responsibly and effectively, across the journalism lifecycle, from news gathering and production to editorial curation and audience engagement?
In the call for papers for this Special Issue, we emphasized the transformative potential of AI-powered and data-driven methods (Beckett, 2019; Canavilhas, 2022), such as those under the umbrella of deep journalism (Ahmad et al., 2022; Mehmood, 2022). These approaches promise not only scalable information discovery but also multi-perspective, cross-sectional, and impartial reporting that challenges bias and makes rigorous insights accessible to all.
We received 29 submissions to this Special Issue, of which 15 were accepted for publication following peer review. These contributions represent 36 authors from 12 countries, reflecting the global and interdisciplinary nature of this Special Issue. The papers cover a rich diversity of geographies, disciplines, and research methods, encompassing both empirical studies and critical reviews. To meaningfully analyze these contributions and reflect on the collective journey they chart, we have organized them into the following five core thematic parameters:
  • AI in Journalistic Content Production;
  • AI for Investigative and Data-Driven Journalism;
  • Ethical and Governance Frameworks;
  • Audience Trust, Perception, and Human–AI Collaboration;
  • Scholarly Reviews, Meta-Analysis, and Risk Discourse.
We use the term parameter to refer to a thematically cohesive construct that synthesizes fragmented research around a shared conceptual or functional concern. These parameters serve not only to organize the discussion but also to act as design elements for identifying, aligning, and evolving the key dimensions of AI integration into journalism. Each parameter reflects a dynamic area of transformation, enabling us to trace the applications, challenges, and directions across both research and practice.
Each paper is assigned to one primary parameter, where it is discussed in detail. Papers that also engage with other parameters are noted as secondary contributions in the relevant sections to highlight their cross-cutting relevance; however, each paper is fully discussed only in the section for the parameter to which it holds primary membership. This two-tiered classification allows us to balance thematic clarity with intellectual depth and avoid duplication, while also recognizing the interdisciplinary nature of this field. Together, the fifteen papers in this Special Issue form a timely and insightful body of work that helps us to define and interrogate the evolving relationship between journalism, media, and artificial intelligence.
To provide a structured view of this Special Issue’s contributions, we have developed a taxonomy that maps all fifteen papers to the five core parameters (see Figure 1). Each paper is placed under its primary thematic focus using short, general titles that aid comprehension and highlight the article’s central contribution. These titles are deliberately broad to point to the wider domains where future research and development can evolve, while still remaining specific enough to reflect the content of each paper. This visual structure offers readers a clear overview of the Special Issue’s scope and thematic distribution, and the diversity of the methodological and conceptual insights presented.
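For computationally inclined readers, this two-tiered scheme can be thought of as a simple mapping from each paper to a primary parameter plus optional secondary parameters. The sketch below encodes a subset of the papers in this way and groups them by parameter; the shortened parameter labels and the encoding itself are our own illustrative shorthand, not the structure or tooling behind Figure 1.

```python
# Illustrative sketch only: the shortened parameter labels, paper keys, and data
# structure below are our own shorthand for the two-tiered classification
# described in the text, not the contents of Figure 1 or any tooling used here.

PARAMETERS = [
    "Content Production",
    "Investigative & Data-Driven Journalism",
    "Ethics & Governance",
    "Audience Trust & Human-AI Collaboration",
    "Reviews & Risk Discourse",
]

# Each paper maps to (primary parameter, [secondary parameters]); only a subset
# of the fifteen papers is shown, following the assignments given in this editorial.
papers = {
    "Albizu-Rivas et al. (2024)": ("Content Production", []),
    "Alaql et al. (2023)": ("Investigative & Data-Driven Journalism",
                            ["Content Production", "Reviews & Risk Discourse"]),
    "Sánchez-García et al. (2025)": ("Ethics & Governance",
                                     ["Content Production", "Reviews & Risk Discourse"]),
    "Lermann Henestrosa & Kimmerle (2024)": ("Audience Trust & Human-AI Collaboration",
                                             ["Ethics & Governance"]),
    "Vicente & Burnay (2024)": ("Reviews & Risk Discourse", ["Content Production"]),
}

def by_parameter(assignments):
    """Group papers by parameter, separating primary from secondary membership."""
    grouped = {p: {"primary": [], "secondary": []} for p in PARAMETERS}
    for paper, (primary, secondary) in assignments.items():
        grouped[primary]["primary"].append(paper)
        for s in secondary:
            grouped[s]["secondary"].append(paper)
    return grouped

for parameter, members in by_parameter(papers).items():
    print(parameter, members)
```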
In the sections that follow, we examine each of these thematic parameters, highlighting how they individually and collectively shape the evolving relationship between journalism and artificial intelligence. This is followed by a synthesis of their cross-cutting insights, the introduction of a conceptual framework for future research, and concluding reflections on the journey defined by this Special Issue.

2. AI in Journalistic Content Production

One of the areas where AI’s introduction into journalism is most visible is in content production. This includes writing, editing, headline generation, content personalization, and the optimization of media workflows. The five primary contributions under this parameter illustrate how AI is being introduced into the foundational tasks of media creation, both in large organizations and in resource-constrained environments.
Albizu-Rivas et al. (2024) investigate the application of AI among Spanish slow journalism practitioners. Based on interview data, their study finds that current usage of AI tools is minimal, though participants express cautious interest in learning more about them. Concerns are raised around ethical boundaries and job security, and the authors emphasize that quality, creativity, and emotional value are still firmly rooted in human authorship.
Gherheș et al. (2024) compare ChatGPT-generated and human-authored headlines in the context of Romanian media. Survey results show that the majority of respondents were more attracted to the AI-generated headlines, despite expressing distrust of clickbait styles in principle. This study highlights the effectiveness of AI in creating content that attracts engagement, while also raising its ethical implications.
Ali et al. (2024) examine the behavioural factors influencing the adoption of generative AI tools for media content creation in the Arab Gulf region. Using a model based on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) and data from nearly 500 users, they find that hedonic motivation, trust, and habit are among the strongest predictors of an individual’s intention to use such tools. Their study points to growing familiarity with and interest in AI-supported content workflows.
Binlibdah (2024) presents a study on AI use in strategic communication among Saudi social marketing firms. The findings show that AI acts as a mediator between strategic inputs and personalized media content, improving service efficiency. Although the context is commercial rather than journalistic, the application of AI to enhance content delivery and audience engagement aligns with the broader transformations occurring in media production.
Pinto and Barbosa (2024) document the integration of AI into Brazilian digital journalism. Their analysis shows that small- and medium-sized newsrooms are increasingly using bots and open-source tools to support content creation. These efforts are often developed independently of large institutional infrastructures and offer an alternative model of AI-driven media innovation.
Three other papers make secondary contributions to this parameter. Sánchez-García et al. (2025) discuss editorial self-regulation and the importance of setting boundaries for generative AI in content production. Alaql et al. (2023), while primarily focused on data-driven journalism, develop a machine learning pipeline that can assist in the discovery of newsworthy themes, which has implications for content creation. Vicente and Burnay (2024) analyze recommender systems in the OTT context and touch on the automation of content distribution, a topic which connects to AI-assisted content delivery workflows.
Together, these studies reflect the growing use of AI to support or augment media production. While the motivations, tools, and levels of adoption differ across settings, the central observation remains: AI is becoming a component of journalistic practice in ways that challenge traditional workflows, professional identities, and notions of authorship.

3. AI for Investigative and Data-Driven Journalism

Investigative journalism often seeks to uncover patterns, behaviours, or irregularities that are not immediately visible to the public. The papers in this parameter demonstrate how artificial intelligence can be used to support such efforts by processing large-scale data, extracting insights, and enhancing journalists’ capacity to inform the public on complex societal issues.
Santos (2023) conducts a thematic analysis of how AI technologies are being applied in the detection of disinformation, identifying core strategies such as fact-checking, sentiment analysis, and hybrid systems involving human oversight. While focused on disinformation detection tools, the paper also directly addresses the infrastructure required for investigative work in the digital age and contributes to discussions on transparency, credibility, and scalable content verification.
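To illustrate the hybrid, human-in-the-loop pattern that Santos identifies, the following minimal Python sketch shows one way such a triage step could be organized. It is an illustrative assumption on our part rather than a system from the paper; the scoring source, thresholds, and routing labels are placeholders.

```python
# Minimal sketch of the hybrid, human-in-the-loop pattern described above.
# The scoring source, thresholds, and routing labels are placeholder assumptions,
# not details of any system reviewed by Santos (2023).

from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    model_score: float  # 0.0 = likely reliable, 1.0 = likely disinformation

def triage(claim: Claim, auto_pass: float = 0.2, auto_flag: float = 0.8) -> str:
    """Route a claim based on an automated score while keeping humans in the loop."""
    if claim.model_score <= auto_pass:
        return "publish"        # automated check is confident the claim is reliable
    if claim.model_score >= auto_flag:
        return "flag"           # automated check is confident the claim is disinformation
    return "human_review"       # uncertain cases are escalated to a fact-checker

print(triage(Claim("Example headline to verify", model_score=0.55)))  # -> human_review
```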
Alaql et al. (2023) present a machine learning pipeline developed to extract and analyze labour market trends from LinkedIn posts. Their approach classifies data into generational, industrial, and employment-related categories using parameter modelling. By designing a software tool to generate these insights from user-generated media, the paper demonstrates how AI can be embedded into the process of investigative journalism, enabling more structured, holistic, and multi-perspective reporting.
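For readers interested in how such classification pipelines are typically assembled, the sketch below shows a generic text-classification setup in the same spirit, using scikit-learn to assign short user-generated posts to broad thematic categories. It is an illustrative assumption on our part, not the pipeline developed by Alaql et al. (2023); the example posts, labels, and model choices are invented.

```python
# Generic, illustrative text-classification setup (requires scikit-learn); it is
# not the pipeline developed by Alaql et al. (2023). Example posts, labels, and
# model choices are invented for demonstration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

posts = [
    "Hiring junior data analysts, remote work possible",
    "Celebrating 30 years in precision manufacturing this month",
    "Graduate programme now open to final-year students",
    "Factory expansion creates 200 new production jobs",
]
labels = ["employment", "industry", "employment", "industry"]

# TF-IDF features feed a simple linear classifier.
classifier = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
classifier.fit(posts, labels)

print(classifier.predict(["Apprenticeship openings for recent graduates"]))
```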
In addition to these primary papers, Pinto and Barbosa (2024) provide relevant secondary insights. Although their study is focused on Brazilian digital journalism broadly, they note how open-source AI tools are being used for data mining and pattern recognition, particularly in contexts where investigative capacity is limited by financial or institutional constraints. Their work highlights how lightweight AI systems are increasing the accessibility of investigative functions.
These papers collectively show that AI is not only an asset for routine reporting, but also a means to pursue deeper, evidence-based stories. By automating elements of data collection, classification, and verification, AI is positioned to expand the investigative capabilities of journalists, especially in circumstances where human resources are limited or time constraints are severe.

4. Ethical and Governance Frameworks

As artificial intelligence becomes more integrated into journalism, questions around ethics, regulation, and accountability become increasingly urgent. This parameter examines how media institutions, scholars, and practitioners are responding to these challenges through the frameworks of transparency, self-regulation, and professional responsibility.
Sánchez-García et al. (2025) analyze a collection of editorial stylebooks and internal AI guidelines from 45 journalistic organizations. Their findings show that media institutions are actively crafting rules to limit the use of generative AI in content creation, emphasizing human oversight, verification, respect for copyright, and a commitment to transparency. This paper is a valuable contribution to understanding how internal governance is emerging from within the industry.
Shi and Sun (2024) review the development and application of generative AI in journalism and explore its ethical implications. They argue that while generative AI tools offer substantial opportunities, they also raise concerns related to professional integrity and epistemic responsibility. The authors propose that journalists, media organizations, and audiences must share the responsibility of ensuring that these technologies are deployed in a manner consistent with journalistic values.
Several other papers make secondary contributions to this parameter. Lermann Henestrosa and Kimmerle (2024) study how readers perceive the credibility of articles labelled as AI- or human-authored. Their findings reveal a drop in perceived trust and intelligence when content is presented as AI-generated, suggesting that transparency in authorship has ethical implications for reader engagement. González-Arias and López-García (2024), while primarily focused on discourse, examine how national press systems construct the risks associated with AI, often basing this on professional norms and institutional expectations.
Ioscote et al. (2024) and Sonni et al. (2024) both conduct meta-level studies of AI in journalism and identify ethical concerns as a recurring research theme. Their work shows that questions of professional responsibility, algorithmic fairness, and regulatory standards are gaining visibility in the scholarly discourse on this topic. These contributions strengthen the argument that ethical governance must be integral to both journalism in practice and relevant research.
Together, the papers in this parameter highlight that governance in AI journalism is no longer a future concern; it is already being shaped by internal media practices, public perception, and academic critique. Addressing this dimension is essential if AI is to enhance rather than undermine journalism’s role in democratic societies.

5. Audience Trust, Perception, and Human–AI Collaboration

The public’s perception of AI-generated content and their willingness to trust or engage with it play a central role in shaping the future of journalism. This parameter focuses on how audiences respond to AI involvement in news production and what levels of automation they find acceptable across the different phases of journalism.
Lermann Henestrosa and Kimmerle (2024) conduct an experiment to test how readers evaluate a science article when it is labelled as being AI-authored versus human-authored. Although the content is identical in both cases, the AI-labelled version is consistently perceived as less credible, less intelligent, and less human. This study reveals the cognitive and emotional barriers that audiences may experience when interacting with AI-mediated journalism.
Heim and Chan-Olmsted (2023) analyze consumer preferences across various stages of news production, from information gathering to editing. Their structural equation modelling suggests that audiences generally prefer lower levels of AI integration. However, if human editorial control is maintained, trust and usage intentions can remain stable or even improve. These findings suggest that transparency, clear role division, and human oversight remain critical for audience acceptance of AI.
Other contributions offer secondary insights into this parameter. Gherheș et al. (2024) demonstrate that although users claim to reject clickbait, they are more likely to select AI-generated headlines that mimic such features. This dissonance between stated preferences and actual behaviour points to a more complex landscape of trust and engagement. Ali et al. (2024) report that trust is one of the key factors influencing behavioural intention to use generative AI tools in media creation, emphasizing the role of user perception in AI adoption.
Shi and Sun (2024), while primarily focused on ethics, also highlight the shared responsibility of journalists and audiences in maintaining editorial standards when AI tools are utilized. Their analysis supports the idea that trust is not only a function of content quality, but also of audience awareness, expectations, and perceived legitimacy.
This group of papers underscores that trust is not merely a residual outcome of media content, but a structural condition that must be actively managed. Whether through transparent labelling, human-in-the-loop frameworks, or shared ethical commitments, audience trust remains a decisive factor in the future relationship between journalism and artificial intelligence.

6. Scholarly Reviews, Meta-Analysis, and Risk Discourse

This parameter collates contributions that take stock of the broader field of AI and journalism through systematic reviews, bibliometric studies, and discursive analyses. These papers examine how AI is framed by the media, how it is being studied in the academic literature, and how risk, ethics, and innovation are being conceptualized across different contexts.
González-Arias and López-García (2024) examine the way AI-related risks are constructed in the press of four European countries. Their study analyzes how newspapers in Belgium, France, Portugal, and Spain represent AI’s societal impact. Fourteen types of risk and seven voice categories are identified, showing that media portrayals are shaped by national contexts, institutional cultures, and professional norms.
Ioscote et al. (2024) offer a ten-year retrospective study of AI in journalism research, with the scope covering publications from 2014 to 2023. Through a combined bibliometric and content analysis, they find that themes such as automation, ethics, and the future of journalistic labour have gained increasing attention. However, they also identify a gap in research regarding education and training for journalists in AI-related competencies.
Sonni et al. (2024) present a bibliometric and content analysis of 331 Scopus-indexed articles from 2019 to 2023, highlighting major research clusters around fake news, algorithms, and automated journalism. Their paper emphasizes that ethical considerations are frequently mentioned, even in technically oriented work, indicating a shift toward more critical and reflective approaches in journalism studies.
Vicente and Burnay (2024) conduct a systematic review of AI-based recommender systems in over-the-top (OTT) media services. Although their primary focus is not on journalism per se, their analysis of algorithmic mediation in content distribution reveals important challenges related to transparency, personalization, and media diversity. These findings resonate with current debates about AI-driven editorial control in journalism.
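To make the kind of algorithmic mediation under discussion concrete, the following minimal sketch shows a content-based recommender that ranks catalogue items by cosine similarity between a user profile and item features. It is an illustration under our own assumptions, not a system reviewed by Vicente and Burnay (2024); the items, features, and profile values are invented.

```python
# Minimal content-based recommender sketch; items, features, and profile values
# are invented for illustration and do not describe any system reviewed by
# Vicente and Burnay (2024).

import numpy as np

# Rows: catalogue items; columns: hypothetical content features
# (news, drama, documentary, sport).
catalogue = {
    "Evening News":       np.array([1.0, 0.0, 0.3, 0.1]),
    "Political Drama":    np.array([0.4, 1.0, 0.2, 0.0]),
    "Nature Documentary": np.array([0.1, 0.0, 1.0, 0.0]),
    "Match Highlights":   np.array([0.2, 0.0, 0.0, 1.0]),
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user_profile, k=2):
    """Rank catalogue items by similarity to the user's viewing profile."""
    ranked = sorted(catalogue, key=lambda item: cosine(user_profile, catalogue[item]),
                    reverse=True)
    return ranked[:k]

# A viewer whose history leans toward news and documentaries.
print(recommend(np.array([0.9, 0.1, 0.7, 0.0])))
```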
In addition to these four primary contributions, several papers offer relevant secondary insights. Sánchez-García et al. (2025) provide a comparative synthesis of internal editorial guidelines, which functions as a quasi-review of self-regulatory practices. Alaql et al. (2023) develop a methodological tool that could support future empirical studies in media analytics. Shi and Sun (2024) engage critically with the broader implications of generative AI and call for frameworks that integrate philosophical, ethical, and audience-centered considerations.
Together, these contributions provide a panoramic view of how AI is currently being researched, represented, and critiqued in journalism and media. They do not merely summarize the field, but rather shape it by raising new questions about accountability, governance, and the shifting boundaries of journalistic practice in the age of automation.

7. Synthesis and Reflections

The fifteen papers in this Special Issue collectively map the broad and evolving landscape at the intersection of journalism, media, and artificial intelligence. They show that AI is no longer a marginal or speculative addition to journalism, but an active agent reshaping editorial workflows, audience engagement, investigative capacity, and institutional norms. Across the five thematic parameters, several cross-cutting insights emerge.
First, ethical and governance concerns are not confined to dedicated policy discussions. Even in studies primarily focused on content production, audience behaviour, or technical innovation, questions of transparency, authorship, and editorial control repeatedly surface. This is evident in the analyses of institutional self-regulation (Sánchez-García et al., 2025), generative AI ethics (Shi & Sun, 2024), AI authorship perceptions (Lermann Henestrosa & Kimmerle, 2024), and even headline design (Gherheș et al., 2024). These works reflect an underlying consensus that innovation in journalism must remain anchored to professional values, even as tools evolve.
Second, trust and audience perceptions are central to the uptake and success of AI in journalism. The findings from multiple studies indicate that readers react differently based on how AI involvement is disclosed (Lermann Henestrosa & Kimmerle, 2024), and that the perceived credibility of content changes when the authorship is attributed to machines (Heim & Chan-Olmsted, 2023). Whether trust is reduced by the label of machine authorship (Lermann Henestrosa & Kimmerle, 2024), or maintained when it is known that humans are in control (Heim & Chan-Olmsted, 2023), the social contract between journalists and audiences is being renegotiated in real time. These responses underscore the need for transparency, meaningful human oversight, and shared ethical frameworks (Shi & Sun, 2024).
Third, the adoption of AI appears to occur through a variety of pathways. In some cases, innovation is led by major media organizations formalizing internal policies and workflows (Sánchez-García et al., 2025). In others, grassroots or small-scale initiatives use open-source tools to experiment with AI-driven reporting (Pinto & Barbosa, 2024). The divergence in adoption patterns is also visible in the regional studies presented from the Arab Gulf (Ali et al., 2024), Brazil (Pinto & Barbosa, 2024), and Spain (Albizu-Rivas et al., 2024); these cases highlight the importance of local context and institutional capacity in shaping how AI is used and what kinds of journalism it enables or displaces.
Fourth, there is a growing effort to critically assess the state of the field of AI in journalism itself. Bibliometric reviews (Ioscote et al., 2024; Sonni et al., 2024), discourse studies (González-Arias & López-García, 2024), and systematic reflections (Vicente & Burnay, 2024) contribute to a more grounded understanding of the academic landscape. These studies suggest that while research on AI in journalism has expanded significantly, there remain gaps in education, interdisciplinary collaboration, and geographic representation.
Finally, the use of AI for investigative purposes points to a promising future for deep journalism. By automating aspects of the discovery process and enabling new forms of data analysis, AI can help to highlight issues that were previously difficult to investigate (Alaql et al., 2023; Santos, 2023). However, realizing this potential will depend on the availability of ethical, transparent, and user-friendly tools that serve public-interest-related objectives rather than institutional or commercial agendas.
This Special Issue does not claim to offer a complete roadmap. Instead, it presents a set of grounded, situated explorations that help to define the journey ahead. What emerges is not a single trajectory, but rather a pluralistic and dynamic field where questions of ethics, utility, trust, and responsibility must be addressed at every stage of technological integration.

8. A Framework for Future Research

Building on the five parameters and the cross-cutting themes identified in this Special Issue, we outline a conceptual framework that can guide future research at the intersection of journalism, media, and artificial intelligence (see Figure 2). The diversity of the papers published herein reveals not only multiple areas of inquiry but also a set of shared concerns that demand a coherent and responsive research agenda. Rather than proposing a fixed model or linear roadmap, we suggest thinking in terms of a dynamic cycle that reflects how journalism and AI interact across different layers of media practice and responsibility.
At the heart of this framework is the recognition that AI is now embedded across the journalism lifecycle. The papers included in this issue have shown how AI is used in the creation of content through writing tools, in personalization engines, and in workflow automation (Albizu-Rivas et al., 2024; Ali et al., 2024; Binlibdah, 2024; Gherheș et al., 2024; Pinto & Barbosa, 2024). These studies emphasize that the focus of content production is not only output, but also editorial judgement, communicative intent, and audience reception. As such, future research must address both the technological performance of AI systems and the epistemological foundations of the content they help to produce.
Equally important is the role of AI in discovery and investigation. As demonstrated by the studies on disinformation detection and labour market analysis (Alaql et al., 2023; Santos, 2023), AI has the capacity to highlight patterns, classify themes, and support journalistic inquiry into socially relevant issues. Future research in this area should continue to explore how machine learning and natural language processing can enhance the depth and breadth of journalistic investigations, particularly in domains where public data is underused or inaccessible. The challenge lies not only in developing AI’s technical capacity, but in ensuring that such tools are usable, interpretable, and aligned with the goals of public-interest journalism.
Ethical and governance dimensions must also remain central in future studies. Papers in this issue have shown that media organizations are beginning to formalize internal policies on AI use (Sánchez-García et al., 2025), and that scholars are articulating normative frameworks for responsible innovation (Shi & Sun, 2024). These efforts should be extended to include comparative studies of regulatory regimes, institutional accountability mechanisms, and the newsroom-level implementation of ethical AI practices. As the influence of generative and algorithmic systems grows, so too does the urgency of embedding ethics and governance into everyday journalistic practices.
Audience trust and perception have emerged as critical variables mediating how AI-generated or AI-assisted content is received (Heim & Chan-Olmsted, 2023; Lermann Henestrosa & Kimmerle, 2024). Several contributions show that users evaluate content differently depending on its presentation, authorship disclosure, and perceived authenticity (Ali et al., 2024; Gherheș et al., 2024; Shi & Sun, 2024). Future research must investigate how trust is built, lost, and potentially restored in environments where automation plays an increasingly visible role. This includes attention to transparency practices, co-authorship models, and audience expectations across different cultures and media systems.
Finally, continued investment in meta-reflection and scholarly synthesis is required. The bibliometric and discursive studies included in this issue (González-Arias & López-García, 2024; Ioscote et al., 2024; Sonni et al., 2024; Vicente & Burnay, 2024) not only chart the past progress of this field, but also reveal areas of underdevelopment, such as media education, regional disparities, and methodological diversity. Future work should not only track academic output, but critically evaluate the assumptions, vocabularies, and frameworks used to structure research on AI and journalism.
This emerging framework proposes that future research engage with AI in journalism not only as a set of tools or topics, but as a system of responsibilities. These responsibilities span the technical, editorial, institutional, and civic dimensions of journalistic work. A sustainable research agenda must therefore be interdisciplinary, reflexive, and explicitly committed to preserving journalism’s democratic and public-serving functions in an increasingly automated world.

9. Conclusions

This Special Issue began with a call to critically examine and proactively shape the evolving relationship between journalism and artificial intelligence. At its core, the call emphasized the urgent need to improve public access to unbiased information in a media landscape marked by structural inequalities, rapid technological shifts, and growing public distrust. Drawing inspiration from the propaganda model articulated by Herman and Chomsky, and reaffirmed by international voices such as UN Secretary-General António Guterres, we asked how journalism could reclaim its public-serving mission in an increasingly algorithmic environment. The proposed answer lay in exploring AI not simply as a disruptive force, but as a possible enabler of next-generation journalism, a form of journalism that is rigorous, inclusive, data-informed, and transparent.
The fifteen papers included in this issue offer rich, multidimensional responses to this challenge. Across five thematic parameters (AI in content production, AI for investigative journalism, ethical and governance frameworks, audience trust and perception, and scholarly reviews and risk discourse), these contributions collectively chart a field which is in motion. They show that AI is used to write headlines, streamline workflows, detect disinformation, map labour market trends, and personalize content delivery. At the same time, they reveal that journalists, audiences, and institutions are grappling with deep questions about authorship, credibility, professional responsibility, and public trust.
The contributions discussed in this editorial are organized using a two-level classification system. Each paper has been assigned to one primary parameter, where it is discussed in detail, and may also be referenced as a secondary contribution to another parameter where thematically relevant. This structure reflects the inherently interdisciplinary nature of the field, while maintaining clarity and coherence in thematic analysis.
In reflecting on the insights generated across these parameters, we have proposed a conceptual framework for future research, the Journalism–AI Responsibility Cycle, which emphasizes the interdependence of technological innovation, ethical governance, audience engagement, and institutional accountability. This model invites researchers to move beyond narrow application studies and engage with AI in journalism as a system of responsibilities that span creation, investigation, trust, and critical reflection.
The title of this Special Issue, “Let Us Define the Journey”, is not rhetorical. It is a call for a collaborative agenda-setting process, one that recognizes the urgency of the current moment and the opportunity that AI offers to transform journalism not only technologically, but institutionally and ethically. The journey defined in these pages is still unfolding, but the contributions presented herein offer strong foundations and critical guideposts for the path ahead.

Funding

This article is derived from a research grant funded by the Research, Development, and Innovation Authority (RDIA), Kingdom of Saudi Arabia, with grant number 12615-iu-2023-IU-R-2-1-EI-.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Ahmad, I., AlQurashi, F., & Mehmood, R. (2022). Potrika: Raw and balanced newspaper datasets in the Bangla language with eight topics and five attributes. arXiv, arXiv:2210.09389. [Google Scholar]
  2. Alaql, A. A., AlQurashi, F., & Mehmood, R. (2023). Data-driven deep journalism to discover age dynamics in multi-generational labour markets from LinkedIn media. Journalism and Media, 4(1), 120–145. [Google Scholar] [CrossRef]
  3. Albizu-Rivas, I., Parratt-Fernández, S., & Mera-Fernández, M. (2024). Artificial intelligence in slow journalism: Journalists’ uses, perceptions, and attitudes. Journalism and Media, 5(4), 1836–1850. [Google Scholar] [CrossRef]
  4. Ali, M. S. M., Wasel, K. Z. A., & Abdelhamid, A. M. M. (2024). Generative AI and media content creation: Investigating the factors shaping user acceptance in the Arab Gulf states. Journalism and Media, 5(4), 1624–1645. [Google Scholar] [CrossRef]
  5. Beckett, C. (2019). New powers, new responsibilities. A global survey of journalism and artificial intelligence. Polis. Available online: https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/ (accessed on 20 July 2025).
  6. Binlibdah, S. (2024). Investigating the role of artificial intelligence to measure consumer efficiency: The use of strategic communication and personalized media content. Journalism and Media, 5(3), 1142–1161. [Google Scholar] [CrossRef]
  7. Canavilhas, J. (2022). Artificial intelligence and journalism: Current situation and expectations in the Portuguese sports media. Journalism and Media, 3(3), 510–520. [Google Scholar] [CrossRef]
  8. Gherheș, V., Fărcașiu, M. A., & Cernicova-Buca, M. (2024). Are ChatGPT-generated headlines better attention grabbers than human-authored ones? An assessment of salient features driving engagement with online media. Journalism and Media, 5(4), 1817–1835. [Google Scholar] [CrossRef]
  9. González-Arias, C., & López-García, X. (2024). Rethinking the relation between media and their audience: The discursive construction of the risk of artificial intelligence in the press of Belgium, France, Portugal, and Spain. Journalism and Media, 5(3), 1023–1037. [Google Scholar] [CrossRef]
  10. Heim, S., & Chan-Olmsted, S. (2023). Consumer trust in AI–human news collaborative continuum: Preferences and influencing factors by news production phases. Journalism and Media, 4(3), 946–965. [Google Scholar] [CrossRef]
  11. Herman, E., & Chomsky, N. (1988). Manufacturing consent: The political economy of the mass media. Pantheon Books. [Google Scholar]
  12. Ioscote, F., Gonçalves, A., & Quadros, C. (2024). Artificial intelligence in journalism: A ten-year retrospective of scientific articles (2014–2023). Journalism and Media, 5(3), 873–891. [Google Scholar] [CrossRef]
  13. Lermann Henestrosa, A., & Kimmerle, J. (2024). The effects of assumed AI vs. human authorship on the perception of a GPT-generated text. Journalism and Media, 5(3), 1085–1097. [Google Scholar] [CrossRef]
  14. Mehmood, R. (2022). ‘Deep journalism’ driven by AI can aid better government. The Mandarin. Available online: https://www.themandarin.com.au/201467-deep-journalism-driven-by-ai-can-aid-better-government/ (accessed on 20 July 2025).
  15. Pinto, M. C., & Barbosa, S. O. (2024). Artificial intelligence (AI) in Brazilian digital journalism: Historical context and innovative processes. Journalism and Media, 5(1), 325–341. [Google Scholar] [CrossRef]
  16. Santos, F. C. C. (2023). Artificial intelligence in automated detection of disinformation: A thematic analysis. Journalism and Media, 4(2), 679–687. [Google Scholar] [CrossRef]
  17. Sánchez-García, P., Diez-Gracia, A., Mayorga, I. R., & Jerónimo, P. (2025). Media self-regulation in the use of AI: Limitation of multimodal generative content and ethical commitments to transparency and verification. Journalism and Media, 6(1). [Google Scholar] [CrossRef]
  18. Shi, Y., & Sun, L. (2024). How generative AI is transforming journalism: Development, application and ethics. Journalism and Media, 5(2), 582–594. [Google Scholar] [CrossRef]
  19. Sonni, A. F., Putri, V. C. C., & Irwanto, I. (2024). Bibliometric and content analysis of the scientific work on artificial intelligence in journalism. Journalism and Media, 5(2), 787–798. [Google Scholar] [CrossRef]
  20. UN News. (2019). A free press is ‘cornerstone’ for accountability and ‘speaking truth to power’: Guterres. Available online: https://news.un.org/en/story/2019/05/1037741 (accessed on 20 July 2025).
  21. Vicente, P. N., & Burnay, C. D. (2024). Recommender systems and over-the-top services: A systematic review study (2010–2022). Journalism and Media, 5(3), 1259–1278. [Google Scholar] [CrossRef]
Figure 1. Taxonomy of journalism, media, and AI, as seen through the lens of this Special Issue’s contributions organized by five core parameters.
Figure 2. A framework to guide future research in journalism, media, and AI.
