The Critical Impact and Socio-Ethical Implications of AI on Content Generation Practices in Media Organizations
Abstract
1. Introduction
2. Materials and Methods
2.1. Search Strategy and Inclusion Criteria
2.2. Inclusion and Exclusion Criteria
2.3. PRISMA Flow Diagram
2.4. Thematic Analysis
3. Literature Review
3.1. Language-Based AI and Media Bias Detection
| Author(s) | Approach | Limitation | Ethical Safeguard |
|---|---|---|---|
| Dörr (2016) [5] | NLG for detecting bias | Depends on dataset quality | Requires human editorial oversight |
| Aparna (2021) [1] | Critique of algorithmic neutrality | Data bias persists | Systemic review mechanisms |
| Carlson (2018) [2] | Critique of AI objectivity in newsrooms | Editorial automation risks | Human judgment remains essential |
| Raza et al. (2024) [8] | Dbias for news fairness detection | Early development stage | Open-source transparency tools |
| Wach et al. (2023) [10] | SLR on AI’s media impact | Limited primary data | Advocates for stakeholder transparency |
| Turner Lee et al. (2019) [11] | Best practices for bias detection | Tech gaps in policy translation | Policy-linked design protocols |
| Obermeyer et al. (2019) [13] | Bias case in health algorithms | Misapplied proxies lead to bias | Diverse training and impact audits |
| Spinde et al. (2021) [9] | BABE model using expert bias labels | Annotation subjectivity | External expert panels |
| Harvard Business Review (2023) [14] | Fairness risks in AI systems | Lack of auditability | Ethical AI audit frameworks |
3.2. Storytelling and the Human–AI Narrative Shift
3.3. Ethical Frameworks and Institutional Governance
3.4. Summary
4. Limitations
1. Language Bias. The review was conducted exclusively in English, potentially excluding valuable non-English studies. Impact: This may introduce a Eurocentric bias, limiting the applicability of the conclusions to non-English-speaking media environments.
2. Database Coverage. Sources were drawn from a combination of academic databases and grey-literature repositories, including ProQuest, Google Scholar, and institutional reports. Impact: The exclusion of region-specific or subscription-only databases may have introduced selection bias, narrowing the range of perspectives and frameworks captured.
3. Temporal Scope. The review emphasized literature published between 2016 and 2024 to capture recent developments. Impact: This may overlook earlier foundational work and the historical evolution of AI–media interactions, limiting longitudinal insight.
4. Researcher Bias. Despite the use of thematic coding and inter-coder reliability checks (Cohen’s kappa = 0.89), following Braun & Clarke’s six-phase framework [7], subjective interpretation of themes may still have influenced the analysis. Impact: Researcher positionality could have shaped thematic emphasis, affecting the internal validity of the synthesis.
5. Grey Literature Integration. Several industry white papers and policy documents were included to reflect current practice. Impact: These may lack peer review, raising concerns about credibility and evidentiary robustness, although their practical relevance is high.
5. Discussion
5.1. AI and Bias Detection
5.2. Human–AI Storytelling
5.3. Ethics and Governance
5.4. Theoretical Implications
5.5. Practical Implications
- Adopt hybrid editorial models that balance AI automation with human oversight;
- Invest in AI literacy and training across journalistic roles;
- Develop in-house ethics protocols tailored to the specific uses of AI in production;
- Apply bias-detection tools cautiously and ensure cross-checks with human editors.
6. Future Research Agenda
7. Conclusions
- Bias Mitigation Protocols that combine algorithmic tools with editorial checks;
- Creative Diversity Indexes to monitor lexical and thematic variation in AI-generated narratives;
- Cross-disciplinary training programs in AI ethics tailored for media professionals.
Author Contributions
Funding
Institutional Review Board Statement
Conflicts of Interest
Appendix A. Coding Manual
1. Coding Framework Development
- AI Bias Detection and Mitigation;
- Storytelling Transformation and Human–AI Co-Creation;
- Ethical Governance and Institutional Frameworks.
2. Definitions of Key Codes
| Main Code | Definition | Example Indicators |
|---|---|---|
| AI Bias Detection | Systems used to detect or reduce bias in news output | Dbias tools, audits, training data diversity |
| Editorial Oversight | Human supervision mechanisms integrated into AI systems | Internal review boards, cross-checking AI outputs |
| Storytelling Automation | AI’s role in automating or co-authoring narratives | Automated summaries, AI-generated articles |
| Lexical Homogenization | Loss of linguistic diversity in AI-generated content | <10% lexical variance across platforms |
| Ethical Frameworks | Formal policies or principles guiding AI use | Newsroom ethics charters, EC guidelines |
| Education and Training | Institutional response through curriculum or training | Journalism degrees adding AI modules |
| Governance Models | Organizational or regulatory structures managing AI | Internal boards, public regulators, industry standards |
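The “<10% lexical variance” indicator above is not formally defined in the sources reviewed. One plausible operationalization, the average share of each article’s vocabulary that appears in no other article in the set, can be sketched as follows; the function name, sample texts, and metric choice are illustrative assumptions, not the study’s actual method:

```python
import re

def vocab(text):
    """Lowercased word types in a text."""
    return set(re.findall(r"[a-z']+", text.lower()))

def lexical_variance(texts):
    """Average fraction of each text's vocabulary that is unique to it,
    relative to the pooled vocabulary of the other texts (illustrative metric)."""
    scores = []
    for i, t in enumerate(texts):
        others = set().union(*(vocab(o) for j, o in enumerate(texts) if j != i))
        v = vocab(t)
        scores.append(len(v - others) / len(v) if v else 0.0)
    return sum(scores) / len(scores)

# Three hypothetical versions of the same story across platforms
articles = [
    "The council approved the budget after a lengthy debate on spending.",
    "The council approved the budget after a long debate on spending plans.",
    "City council signs off on budget following debate over spending.",
]
print(f"{lexical_variance(articles):.0%}")
```

Under this reading, near-identical AI-generated variants of a story would score close to 0%, and a set of articles falling below a 10% threshold would flag potential homogenization.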
3. Coding Process
- Step 1: Three researchers independently coded 10 sample articles for familiarization.
- Step 2: Codes were compared and reconciled to ensure intercoder reliability.
- Step 3: A unified coding matrix was used for the full set of 44 articles.
- Step 4: Coded data were clustered under thematic umbrellas and summarized in tables and visuals (e.g., Table 1, Storytelling Continuum, Conceptual Framework diagram).
Intercoder Reliability:
- Cohen’s kappa was 0.89;
- Disagreements were resolved through consensus meetings.
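The reported agreement figure corresponds to the standard two-coder Cohen’s kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch of the computation, using made-up codes for ten articles rather than the study’s actual annotations:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes for 10 sample articles (one disagreement)
a = ["bias", "ethics", "story", "bias", "ethics", "story", "bias", "ethics", "bias", "story"]
b = ["bias", "ethics", "story", "bias", "ethics", "bias", "bias", "ethics", "bias", "story"]
print(round(cohens_kappa(a, b), 2))  # prints 0.85
```

Values above 0.80 are conventionally read as strong agreement, so the study’s 0.89 indicates a highly consistent application of the coding manual.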
4. Analytical Decisions and Reflexivity Notes
- Articles were double-coded when they applied to multiple themes;
- Commercial whitepapers or opinion pieces were excluded unless peer-reviewed or cited by scholarly literature.
- Reflexivity was also considered: the coders acknowledged the potential for disciplinary bias (media studies versus computer science) and actively cross-validated interpretations.
References
1. Aparna, D. Coded Bias: An Insightful Look at AI Algorithms and Their Risks to Society. Forbes 2021. Available online: https://www.forbes.com/ (accessed on 1 July 2025).
2. Carlson, M. Automating Judgment? Algorithmic Judgment, News Knowledge and Journalistic Professionalism. New Media Soc. 2018, 20, 1755–1772.
3. Diakopoulos, N. Automating the News; Harvard University Press: Cambridge, MA, USA, 2019.
4. Shi, Y.; Sun, L. How Generative AI is Transforming Journalism. J. Media 2024, 5, 582–594.
5. Dörr, K.N. Mapping the Field of Algorithmic Journalism. Digit. J. 2016, 4, 700–722.
6. Snyder, H. Literature Review as a Research Methodology: An Overview and Guidelines. J. Bus. Res. 2019, 104, 333–339.
7. Braun, V.; Clarke, V. Using Thematic Analysis in Psychology. Qual. Res. Psychol. 2006, 3, 77–101.
8. Raza, S.; Reji, D.J.; Ding, C. Dbias: Ensuring Fairness in News. Int. J. Data Sci. Anal. 2024, 17, 89–102.
9. Spinde, T.; Plank, M.; Krieger, J.-D.; Ruas, T.; Gipp, B.; Aizawa, A. Neural Media Bias Detection Using Distant Supervision With BABE—Bias Annotations By Experts. In Findings of the Association for Computational Linguistics: EMNLP; Association for Computational Linguistics: Punta Cana, Dominican Republic, 2021; Volume 101, pp. 1206–1222.
10. Wach, K.; Duong, C.D.; Ejdys, J.; Kazlauskaite, R.; Korzynski, P.; Mazurek, G.; Ziemba, E. AI’s Impact on Media and Society: A Systematic Review. Entrep. Bus. Econ. Rev. 2023, 11, 45–67.
11. Turner Lee, N.; Resnick, P.; Barton, G. Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms. Brookings 2019. Available online: https://www.brookings.edu/ (accessed on 1 July 2025).
12. Broussard, M. Artificial Unintelligence; MIT Press: Cambridge, MA, USA, 2018.
13. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Racial Bias in Health Algorithms. Science 2019, 366, 447–453.
14. Harvard Business Review. Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI. Harvard Business Review 2023. Available online: https://hbr.org/ (accessed on 1 July 2025).
15. Stray, J. The Curious Journalist’s Guide to Artificial Intelligence; Columbia Journalism School: New York, NY, USA, 2021.
16. Gupta, N.; Tenove, C. The Promise and Peril of AI in Newsrooms. Columbia J. Rev. 2021, 59, 22–29.
17. Urman, A.; Katz, S. What They Do in the Shadows. Inf. Commun. Soc. 2022, 25, 1234–1250.
18. Grobman, S. McAfee Reveals 2024 Cybersecurity Predictions. Express Computer 2023. Available online: https://www.expresscomputer.in/ (accessed on 1 July 2025).
19. Zuboff, S. The Age of Surveillance Capitalism; PublicAffairs: New York, NY, USA, 2019.
20. World Economic Forum. Why AI Bias May Be Easier to Fix Than Humanity’s. WEF Agenda 2023. Available online: https://www.weforum.org/agenda/ (accessed on 1 July 2025).
21. Lewis, S.C.; Guzman, A.L.; Schmidt, T.R. Automation and Human–Machine Communication: Rethinking Roles and Relationships of Humans and Machines in News. Digit. J. 2019, 7, 176–195.
22. Shah, A. Media and Artificial Intelligence: Current Perceptions and Future Outlook. Acad. Mark. Stud. J. 2024, 28, 45–59.
23. Bodó, B.; Helberger, N.; De Vreese, C.H. Algorithmic News Recommenders. Digit. J. 2019, 7, 246–265.
24. Thurman, N.; Lewis, S.C.; Kunert, J. Algorithms, Automation, and News. Digit. J. 2019, 7, 980–992.
25. Brigham, N.G.; Gao, C.; Kohno, T.; Roesner, F.; Mireshghallah, N. Case Studies of Generative AI Use. In Proceedings of the NeurIPS SoLaR Workshop, New Orleans, LA, USA, 9 December 2024.
26. Beckett, C. New Powers, New Responsibilities. LSE Report 2019. Available online: https://www.lse.ac.uk/ (accessed on 1 July 2025).
27. Sundar, S.S. Rise of Machine Agency. J. Comput.-Mediat. Commun. 2020, 25, 74–88.
28. Bouguerra, R. AI and Ethical Values in Media. Ziglobitha J. 2024, 12, 34–47.
29. Wardle, C.; Derakhshan, H. Information Disorder. Council of Europe 2017. Available online: https://www.coe.int/ (accessed on 1 July 2025).
30. Gutiérrez-Caneda, B. Ethics and Journalistic Challenges in the AI Age. Front. Commun. 2024, 9, 123–141.
31. Press Gazette. The Ethics of Using Generative AI to Create Journalism. Press Gazette 2023. Available online: https://pressgazette.co.uk/ (accessed on 1 July 2025).
32. European Commission. Ethics Guidelines for Trustworthy AI. EC 2021. Available online: https://digital-strategy.ec.europa.eu/ (accessed on 1 July 2025).
33. Poynter Institute. How to Create Newsroom AI Ethics Policy. Poynter.org 2024. Available online: https://www.poynter.org/ (accessed on 1 July 2025).
34. Gondwe, G. Typology of Generative AI in Journalism. J. Media Stud. 2024, 38, 112–130.
35. De-Lima-Santos, M.F.; Yeung, W.N.; Dodds, T. AI Guidelines in Global Media. AI Soc. J. 2024, 39, 201–215.
36. Dierickx, L.; Lindén, C.; Opdahl, A. Data-Centric Ethics in Journalism AI. Ethics Inf. Technol. 2024, 26, 77–93.
37. Reynders, D. 7th Evaluation of the Code of Conduct. European Commission 2023. Available online: https://commission.europa.eu/document/download/5dcc2a40-785d-43f0-b806-f065386395de_en (accessed on 1 July 2025).
38. Ferrara, E. Should ChatGPT Be Used for Journalism? Nat. Hum. Behav. 2023, 7, 1–2.
39. Pasquale, F. New Laws of Robotics; Harvard University Press: Cambridge, MA, USA, 2020.
40. Stark, L.; Hutson, J. Automating Judgment (Follow-up Study). New Media Soc. 2021, 23, 345–363.
41. Couldry, N.; Mejias, U.A. The Costs of Connection; Stanford University Press: Stanford, CA, USA, 2019.
42. Markkula Center for Ethics. How Must Journalists and Journalism View Generative AI? SCU Ethics Blog 2024. Available online: https://www.scu.edu/ethics/ (accessed on 1 July 2025).
43. Knight Foundation. AI in Journalism: Practice and Implications. Knight Foundation 2019. Available online: https://knightfoundation.org/ (accessed on 1 July 2025).
44. Harvard Magazine. Artificial Intelligence Limitations. Harvard Magazine 2018. Available online: https://www.harvardmagazine.com/ (accessed on 1 July 2025).
| Author(s) | Storytelling Application | Observed Impact | Ethical/Creative Concern |
|---|---|---|---|
| Shi & Sun (2024) [4] | Generative models for summaries and localization | Increased efficiency, lower engagement | Loss of narrative richness |
| Lewis et al. (2019) [21] | Human–machine collaboration in newsrooms | Flexible co-authorship roles | Ambiguity in accountability |
| Brigham et al. (2024) [25] | AI-assisted longform journalism | Faster production cycles | Shallower narrative structure |
| Sundar (2020) [27] | Machine agency framework | Hybrid creativity model | Need for role delineation |
| Bouguerra (2024) [28] | AI as co-author in value-driven journalism | Conditional creativity based on ethical design | Value alignment challenges |
| Beckett (2019) [26] | Global newsroom adoption in low-resource settings | Cost reduction, increased automation | Ethical trade-offs in underserved markets |
| Shah (2024) [22] | Analysis of lexical sameness | Homogenization across platforms | Reduced content diversity |
| Source | Framework Type | Implementation Example | Scope | Strengths | Limitations |
|---|---|---|---|---|---|
| European Commission (2021) [32] | Regulatory Guidelines | Trustworthy AI principles | EU-wide | Sets baseline ethical standards | Voluntary adoption; uneven enforcement |
| Poynter Institute (2024) [33] | Organizational Ethics Charter | Adopted by 142 newsrooms | Global | Practical newsroom application | Lacks auditing mechanism |
| Gondwe (2024) [34] | Curricular Model | AI ethics integration in journalism education | Academic | Long-term professional development | Not yet widespread |
| de-Lima-Santos et al. (2024) [35] | Policy Review | Global AI media policy analysis | Cross-national | Comparative insight | No universal standard |
| Ferrara (2023) [38] | Educational Program | Stanford’s AI+Media Lab | Curricular | Combines theory and application | Early-stage rollout |
| Bouguerra (2024) [28] | Organizational Ethics Board | AI review boards in newsrooms | Internal | Context-specific governance | Risk of internal bias |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Lamprou, S.; Dekoulou, P.; Kalliris, G. The Critical Impact and Socio-Ethical Implications of AI on Content Generation Practices in Media Organizations. Societies 2025, 15, 214. https://doi.org/10.3390/soc15080214