Artificial Intelligence and Journalistic Ethics: A Comparative Analysis of AI-Generated Content and Traditional Journalism
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This is a very interesting paper on an increasingly relevant topic: the use of artificial intelligence in media production from the perspective of journalism ethics. The study, based on a comparative analysis of AI-generated content and human-written journalistic pieces, together with readers' perceptions, also opens a necessary window onto journalism practice in Kazakhstan, an understudied region in media and communication research.
Although this is a well-developed work of clear interest to the journal's readers, the authors should improve several aspects related to the conceptual framework and the structure of the text in order to gain clarity and consistency before the paper can be considered for publication in Journalism and Media. The elements that should be revised are as follows:
- Authors indicate that AI-generated articles tend to exhibit "greater objectivity". The use of the term "objectivity" here requires rethinking in light of what the academic literature on journalistic ethics indicates. In media ethics, objectivity is not a static state or a fixed standard, but a method or approach followed by journalists that aims at navigating biases and perspectives and ensuring fair reporting. Stephen Ward (2011) argues that objectivity in journalism is not about achieving a completely neutral, unbiased perspective, but rather about employing a method of "pragmatic objectivity". Kovach and Rosenstiel (2001) refer to objectivity as a method of verification and a disciplined approach to gathering and presenting information. As they explain, the journalist must make decisions and cannot be objective, but journalistic methods are objective. Thus, if objectivity actually refers to a working method developed by professionals/humans, how can we consider AI-generated texts objective? In this context it would perhaps be more accurate to describe them as 'neutral' or 'balanced' with regard to the selection of sources and approaches.
Some important references are fully recommended here:
Kieran, M. (2022). Media Ethics
Ward, Stephen J.A. (2011). Ethics and the Media: An Introduction
Kovach, B. & Rosenstiel, T. (2001). The Elements of Journalism: What Newspeople Should Know and the Public Should Expect
- Regarding the structure of this article, the presentation of content could gain in clarity if it were organized differently. The introduction, literature review, and theoretical framework are all mixed together in the same Introduction section. They should be divided into at least two different parts. The theoretical framework and literature review should be separate and include subheadings according to content, e.g. the evolution of AI in journalism, AI ethical concerns, and the dichotomy between, and perceptions of, AI-generated texts and human-written ones.
Also, a more structured differentiation between the literature review and the theoretical framework would be appropriate in order to clearly define the research gap and open a discussion about the different ways AI is used in current journalistic media from an ethical perspective, and how the authors understand those uses and their implications for journalism practice and credibility.
On the other hand, in the discussion section the authors contextualize the findings within the existing literature, but these results are not thoroughly interpreted in relation to the research questions. The discussion should therefore be better articulated, showing to what extent the study's results answer the research questions posed in the introduction, and what new understanding or insights have emerged.
Likewise, to complete the Conclusion, the authors should address the limitations of this study and suggest potential lines of further research that may stem from it.
- Concerning Methodology, the explanation should also be more robust in several ways:
Authors do not describe the sample completely: they refer to articles published "by major news agencies in Kazakhstan", but what are their names, and how many different agencies are included in the sample? This is important for comparing whether uses differ between media outlets. Although some agency names can be inferred below (from line 162, where the data sets are described), we do not know whether these are the only agencies that exist in this country or just a selection from a larger number. Either way, those names should be mentioned earlier.
Moreover, to better contextualize this study, given that Kazakhstan is a less studied country in communication research, the authors should include some explanation of the media system in Kazakhstan.
Also, authors refer to "an evaluation of articles written by both artificial intelligence and traditional journalists among a Kazakhstani audience using a survey method". However, it is not clearly explained how many articles are comprised in the sample. Are they just the six included in the data set, or were they selected from a larger pool of news pieces? What type of articles are they? Only news?
We read that "all six articles were selected due to their popularity among the Kazakhstani audience", but according to what source or metrics are these the most popular?
Authors mention they conducted a survey, but how did they develop it? What population did they address, and how was the sample selected? How many people were initially contacted, and how many agreed to participate?
These aspects need to be better justified and further detailed so as to fully understand both the design and the scope of this study.
-In relation to the Introduction (theoretical framework), to better contextualize the AI capability to generate automated texts, I suggest including some references about two key concepts not mentioned in this article: Machine Learning, and Natural Language Processing.
We read in lines 81-82: "Amponsah & Atianashie (2024) highlight ethical concerns around the use of AI, including bias in data and breaches of confidentiality". Maybe some explanation of algorithmic bias can be added here.
- Authors support the idea of transparency and accountability in the use of AI as two of the most challenging tasks in journalism, but why is that? This needs to be further developed. On the one hand, transparency implies that journalists let readers know how and to what extent they used AI in their pieces, regardless of whether the software is commonly used and accepted.
- Another problem the authors do not mention is that AI-generated pieces do not always include links to the sources from which they take their data, and this may affect their degree of trustworthiness. As AI-generated texts proliferate, how can people better assess the veracity of these texts? One increasingly common solution applied by media outlets is linking AI-generated text to the sources from which it was derived. This could also be part of a bigger discussion, though.
Author Response
Summary
We would like to express our sincere gratitude to the reviewers for taking the time to read and evaluate our manuscript entitled “Artificial Intelligence and Journalistic Ethics: A Comparative Analysis of AI-Generated Content and Traditional Journalism”. We appreciate the constructive and insightful comments provided, which have helped us improve the clarity and depth of our research.
All comments have been carefully addressed in a point-by-point manner. The corresponding revisions and corrections have been made and are highlighted in green in the revised manuscript, as requested. We hope that these changes meet the reviewers’ and the editor’s expectations.
Comments 1: Authors indicate that AI-generated articles tend to exhibit "greater objectivity". The use of the term "objectivity" here requires rethinking in light of what the academic literature on journalistic ethics indicates. In media ethics, objectivity is not a static state or a fixed standard, but a method or approach followed by journalists that aims at navigating biases and perspectives and ensuring fair reporting. Stephen Ward (2011) argues that objectivity in journalism is not about achieving a completely neutral, unbiased perspective, but rather about employing a method of "pragmatic objectivity". Kovach and Rosenstiel (2001) refer to objectivity as a method of verification and a disciplined approach to gathering and presenting information. As they explain, the journalist must make decisions and cannot be objective, but journalistic methods are objective. Thus, if objectivity actually refers to a working method developed by professionals/humans, how can we consider AI-generated texts objective? In this context it would perhaps be more accurate to describe them as 'neutral' or 'balanced' with regard to the selection of sources and approaches.
Response 1: Thank you for your detailed and thoughtful comment regarding the use of the term objectivity. We agree that, according to the academic literature on journalistic ethics, objectivity refers to a professional method rather than a static or inherent quality of a text.
In response, we have:
- Replaced the term “objectivity” with more precise alternatives such as “neutrality” and “balance” in the Abstract, Hypothesis, Results, Discussion, and Conclusion sections.
- Added a new paragraph to the Discussion section clarifying the meaning of objectivity in journalism, supported by the recommended references: Ward (2011), Kovach & Rosenstiel (2001) (lines 653-659).
- Included these references in the bibliography and highlighted all changes in green in the revised manuscript.
These modifications better align our terminology with established theoretical frameworks and improve the conceptual clarity of the manuscript. Thank you again for this insightful recommendation.
Comments 2: Regarding the structure of this article, the presentation of content could gain in clarity if it were organized differently. The introduction, literature review, and theoretical framework are all mixed together in the same Introduction section. They should be divided into at least two different parts. The theoretical framework and literature review should be separate and include subheadings according to content, e.g. the evolution of AI in journalism, AI ethical concerns, and the dichotomy between, and perceptions of, AI-generated texts and human-written ones.
Also, a more structured differentiation between the literature review and the theoretical framework would be appropriate in order to clearly define the research gap and open a discussion about the different ways AI is used in current journalistic media from an ethical perspective, and how the authors understand those uses and their implications for journalism practice and credibility.
Response 2: Thank you very much for your valuable feedback and insightful suggestions regarding the structure of the article. We sincerely appreciate your attention to clarity and academic standards, particularly your note about the need to separate the Introduction, Literature Review, and Theoretical Framework.
We have made the following structural revisions:
- The Introduction is now a standalone section that includes only the background, research aim, hypothesis, and research questions. It no longer contains theoretical or literature review content.
- A new section, 2. Literature Review, has been created and organized into clear subheadings, as per your recommendations:
2.1 Evolution of AI in Journalism
2.2 Capabilities of AI in Journalism
2.3 Ethical Concerns in AI-Powered Journalism
2.4 Comparative Studies: AI vs Human Journalism
2.5 Research Gaps
These subheadings align directly with the structure suggested in your review and allow for better thematic navigation and analytical clarity.
- Another standalone section, 3. Theoretical Framework, has been added to differentiate the conceptual foundation of the study from the literature review. This section includes:
3.1 Journalistic Ethics Theory
3.2 Media Credibility and Audience Perception
These theoretical lenses were selected because they directly support the study’s focus on ethical quality and audience trust in AI-generated versus human-written news content.
- In total, eight new sources were added to strengthen both the theoretical framework and the literature review. All newly added references have been marked in green in the manuscript for your convenience.
We hope that this revised structure improves the clarity and scholarly rigor of the article and meets the expectations outlined in your review.
Comments 3: On the other hand, in the discussion section the authors contextualize the findings within the existing literature, but these results are not thoroughly interpreted in relation to the research questions. The discussion should therefore be better articulated, showing to what extent the study's results answer the research questions posed in the introduction, and what new understanding or insights have emerged.
Response 3: In response, we have thoroughly revised the Discussion section to ensure that each research question is directly addressed and interpreted in light of the study’s findings. The revised discussion is now structured around RQ1 and RQ2, clearly indicating how the results answer these questions. We also added a new subsection highlighting the emerging insights, particularly regarding the performance of AI across different journalistic topics and its limitations in complex or ethically sensitive domains.
Additionally, one new source was added (Szabo, 2023) to further support our interpretation of audience trust dynamics across AI-authored and human-authored texts. All changes are marked in green in the manuscript for your convenience.
Comments 4: Likewise, to complete the Conclusion, the authors should address the limitations of this study and suggest potential lines of further research that may stem from it.
Response 4: In response, the Conclusion section has been expanded with two new paragraphs. The first paragraph outlines the limitations of the study, including scope, sample, and contextual constraints. The second paragraph presents potential future research directions, such as exploring AI-generated content in other cultural and linguistic settings, assessing audience reactions to AI transparency, and investigating AI-human collaboration in journalistic workflows (lines 762-772).
These additions aim to enhance the academic rigor and practical relevance of the study’s conclusion. All changes are also marked in green for ease of reference.
We hope these revisions fully address your concerns and improve the clarity and completeness of the article. Thank you once again for your constructive feedback.
Comments 5: Concerning Methodology, the explanation should also be more robust in several ways:
Authors do not describe the sample completely: they refer to articles published "by major news agencies in Kazakhstan", but what are their names, and how many different agencies are included in the sample? This is important for comparing whether uses differ between media outlets. Although some agency names can be inferred below (from line 162, where the data sets are described), we do not know whether these are the only agencies that exist in this country or just a selection from a larger number. Either way, those names should be mentioned earlier.
Moreover, to better contextualize this study, given that Kazakhstan is a less studied country in communication research, the authors should include some explanation of the media system in Kazakhstan.
Also, authors refer to "an evaluation of articles written by both artificial intelligence and traditional journalists among a Kazakhstani audience using a survey method". However, it is not clearly explained how many articles are comprised in the sample. Are they just the six included in the data set, or were they selected from a larger pool of news pieces? What type of articles are they? Only news?
We read that "all six articles were selected due to their popularity among the Kazakhstani audience", but according to what source or metrics are these the most popular?
Response 5: Thank you for this valuable comment. The methodology section has been revised and significantly expanded to address all points raised:
Clarification of the sample and media outlets:
We have now explicitly named all six Kazakhstani media outlets from which journalist-written articles were selected: Azattyq, Inform.kz, Sputnik.kz, Zhas Alash, Minber.kz, and Tengrinews.kz. A sentence was added to clarify that these outlets were chosen due to their national relevance, thematic diversity, and wide public recognition (lines 246-250).
Contextualization of the Kazakhstani media system:
To provide contextual background, we included a short explanation of Kazakhstan’s media environment. Specifically, we discussed the role of state control, the distinction between state-run and independent outlets, and the importance of analytical content (lines 251-258). Two sources were added to support this section:
- Utemissov Z. Z., & Koshkenov N. (2021). Kazakhstan media challenges in the context of media and government relations in the Republic of Kazakhstan. Bulletin of L.N. Gumilyov Eurasian National University, 134(1), 37-51.
- Atay, S. (2025). The Role of Independent and Alternative Media in Conducting Investigative Journalism. Bulletin of L.N. Gumilyov Eurasian National University, 150(1), 7-18.
Clarification of article selection procedure:
We explained that for each of the six thematic areas (politics, economy, law, sports, education and science, and social issues), a topic of strong public interest was first identified. Then, from each of the six selected media outlets, one article per topic was chosen – specifically the one with the highest view count on the outlet’s website. This process is now clearly described (lines 308-312).
Clarification of article types:
The revised text explicitly states that the selected materials represent various journalistic genres, including both straight news and analytical reviews (lines 313-319).
Clarification of survey design:
We clarified that twelve articles were presented to the audience – six written by AI, and six by journalists. Each pair of articles covered the same topic, allowing for direct comparison (lines 284-286).
All newly added or revised content has been highlighted in green in the manuscript for easier identification.
Comments 6: Authors mention they conducted a survey, but how did they develop it? What population did they address, and how was the sample selected? How many people were initially contacted, and how many agreed to participate?
These aspects need to be better justified and further detailed so as to fully understand both the design and the scope of this study.
Response 6: Thank you for this important observation. In response to your comment, we have significantly expanded the Methodology section to provide greater clarity and detail on the survey design, population, and sampling strategy. The following changes have been made:
Survey development:
We explained that the survey was designed with five comparative questions per article pair, evaluating: article structure, writing style, factual accuracy, citation of sources, and completeness of information (lines 279-280).
These were close-ended questions, requiring respondents to compare the AI-generated article with its journalist-written counterpart. A total of 30 comparative judgments were collected from each participant (lines 283-284).
Survey population and sampling:
We clarified that the survey was distributed via a closed WhatsApp group named “Liga doktorantov KazNU”, which includes 841 members, mainly doctoral students and university instructors. Participation was voluntary, and the group is typically used to support academic research (lines 267-271 and 274-277).
Respondents were 26 to 48 years old, had higher education, and worked or studied in academic fields (line 277).
Clarification of article distribution:
It is now clearly stated that twelve articles were shown to participants: six written by AI and six by journalists. Each article pair corresponded to the same topic, allowing for consistent comparative evaluation.
All newly added or revised content in the manuscript is highlighted in green for ease of review.
Comments 7: In relation to the Introduction (theoretical framework), to better contextualize the AI capability to generate automated texts, I suggest including some references about two key concepts not mentioned in this article: Machine Learning, and Natural Language Processing.
We read in lines 81-82: "Amponsah & Atianashie (2024) highlight ethical concerns around the use of AI, including bias in data and breaches of confidentiality". Maybe some explanation of algorithmic bias can be added here.
Response 7: Thank you for this insightful comment. In response, we have made two key updates to strengthen both the introduction and the Literature Review sections:
Clarification of core AI concepts
To better contextualize how AI systems are able to generate text, we added a concise explanation of Machine Learning (ML) and Natural Language Processing (NLP) in the part of the introduction. These technologies are now introduced as foundational mechanisms behind generative models such as ChatGPT (lines 27-33). Relevant references have been added to support this addition, including:
- Solanki, A., & Jain, D. K. (2020). Emerging Trends and Applications in Cognitive Computing. Recent Advances in Computer Science and Communications, 13(5), 812-817.
- Carrasco Ramírez, J. G. (2024). Natural language processing advancements: Breaking barriers in human-computer interaction. Journal of Artificial Intelligence General Science (JAIGS), 3(1), 31-39.
Explanation of algorithmic bias
We expanded the Literature Review, under the subsection on ethical concerns (Section 2.3), by adding a short paragraph that defines and contextualizes the concept of algorithmic bias. This addition helps clarify the ethical risks posed by biased training data in AI systems and how such bias may manifest in journalistic outputs (lines 114-119). Two further references were added:
- Stinson, C. (2022). Algorithms are not neutral: Bias in collaborative filtering. AI and Ethics, 2(4), 763-770.
- Cossette-Lefebvre, H., & Maclure, J. (2023). AI’s fairness problem: understanding wrongful discrimination in the context of automated decision-making. AI and Ethics, 3(4), 1255-1269.
Comments 8: Authors support the idea of transparency and accountability in the use of AI as two of the most challenging tasks in journalism, but why is that? This needs to be further developed. On the one hand, transparency implies that journalists let readers know how and to what extent they used AI in their pieces, regardless of whether the software is commonly used and accepted.
Response 8: Thank you for highlighting the need to elaborate on the challenges of transparency and accountability in AI-powered journalism.
In response, we have revised Section 2.3: Ethical Concerns in AI-powered Journalism to further clarify why these two principles are particularly difficult to uphold when using AI tools in newsrooms. Specifically, we added an explanation that transparency requires journalists to disclose whether and how AI was used – even if such tools are widely adopted – and that this practice is not yet standardized across media organizations (lines 121-127).
We appreciate this constructive comment, as it helped us improve the theoretical clarity and ethical depth of the article.
Comments 9: Another problem the authors do not mention is that AI-generated pieces do not always include links to the sources from which they take their data, and this may affect their degree of trustworthiness. As AI-generated texts proliferate, how can people better assess the veracity of these texts? One increasingly common solution applied by media outlets is linking AI-generated text to the sources from which it was derived. This could also be part of a bigger discussion, though.
Response 9: Thank you for this insightful observation. We agree that the absence of source citation in AI-generated content is a critical issue that affects the overall trustworthiness and credibility of such texts. In response to your suggestion, we expanded the Discussion section to further elaborate on this point.
This new addition discusses how the lack of transparent citation mechanisms in AI-generated texts prevents audiences from verifying the origin of facts (lines 673-678).
Thank you again for helping us strengthen the clarity and depth of our argument.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The article presents a topic of interest and offers a relevant contribution to the field. It constitutes a comparative study between AI-generated content and articles authored by professional journalists.
The introduction is well-founded and thorough, although it is notably longer than other sections of the article, such as the discussion and the conclusions.
Including additional references on Media Ethics and Journalism Ethics related to AI would enrich the introduction’s bibliography, especially given the emphasis on ethics in the article’s title.
It would be necessary to include contextual information on the media structure and media pluralism in Kazakhstan. Additionally, data on the penetration of AI technologies within the Kazakhstani media landscape —and particularly their usage, or at least the overall technology penetration among the population— should be provided, along with audience reach and media penetration statistics for the selected outlets. A brief introduction to the media outlets chosen for the analysis would also be beneficial, along with a detailed justification for their selection.
Furthermore, providing more detailed demographic information about the 97 respondents (such as gender, age, etc.) would help to better contextualize and understand the results.
More detailed information about the questions included in the questionnaire is needed. Furthermore, it is important to specify the response options provided to participants, as well as the methods used to grade or assess these responses.
The discussion section is rather brief and lacks sufficient development. It would be valuable to compare your analysis of the six AI-generated news items with the results obtained from the audience survey. In particular, a comparison between your evaluation of news containing factual inaccuracies and the audience’s assessment would provide meaningful insights, allowing for a deeper discussion of these inaccurate contents.
Similarly, the conclusions are concise and would benefit from further elaboration. Moreover, strengthening the connection between the conclusions and the Research Questions 1 (RQ1) and 2 (RQ2) would improve the overall coherence of the study.
Author Response
Summary
We would like to express our sincere gratitude to the reviewers for taking the time to read and evaluate our manuscript entitled “Artificial Intelligence and Journalistic Ethics: A Comparative Analysis of AI-Generated Content and Traditional Journalism”. We appreciate the constructive and insightful comments provided, which have helped us improve the clarity and depth of our research.
All comments have been carefully addressed in a point-by-point manner. The corresponding revisions and corrections have been made and are highlighted in yellow in the revised manuscript, as requested. We hope that these changes meet the reviewers’ and the editor’s expectations.
Comments 1: The introduction is well-founded and thorough, although it is notably longer than other sections of the article, such as the discussion and the conclusions.
Response 1: Thank you for your observation regarding the relative length of the introduction section. In response, we have made several key structural adjustments:
The introduction has been significantly shortened and now includes only the background of the study, the research aim, the hypothesis, and the research questions.
The theoretical and contextual information previously included in the introduction has been relocated to two separate sections:
- Literature Review
- Theoretical Framework
To ensure overall structural balance, we have also expanded and strengthened both the Discussion and Conclusion sections, providing deeper analysis in response to the research questions and highlighting the study’s limitations and potential directions for future research.
We believe these revisions now ensure better proportionality between all sections and improve the overall clarity and coherence of the manuscript. All changes have been marked in yellow in the revised file for your convenience.
Comments 2: Including additional references on Media Ethics and Journalism Ethics related to AI would enrich the introduction’s bibliography, especially given the emphasis on ethics in the article’s title.
Response 2: Thank you for this helpful suggestion. We have incorporated additional theoretical references that directly address Media Ethics and Journalism Ethics in the context of artificial intelligence. These additions appear primarily in the Theoretical Framework section, specifically in 3.1 Journalistic Ethics Theory (lines 185-191 and 201-202) and 2.3 Ethical Concerns in AI-Powered Journalism (lines 117-120 and 125-128).
We have elaborated on concepts such as algorithmic bias, transparency, and accountability, and explained why these are considered some of the most pressing ethical challenges in AI-assisted journalism. To support these points, we added recent and relevant sources, including:
- Stinson, C. (2022). Algorithms are not neutral: Bias in collaborative filtering. AI and Ethics, 2(4), 763-770.
- Cossette-Lefebvre, H., & Maclure, J. (2023). AI’s fairness problem: understanding wrongful discrimination in the context of automated decision-making. AI and Ethics, 3(4), 1255-1269.
- Mukta, S. (2025). Moral Agency and Responsibility in Artificial Intelligence: Can Autonomous Systems Be Held Ethically Accountable? Journal of Artificial Intelligence General Science (JAIGS), 8(1), 198-207.
- Ward, S. J. (2011). Ethics and the media: An introduction. Cambridge University Press.
- Kovach, B. & Rosenstiel, T. (2001). The Elements of Journalism: What Newspeople Should Know and the Public Should Expect. Crown Publishers
- Sigsgaard, M. E. (2024). Striking the (im)balance – a review of the relative prevalence of meta-ethical models in AI journalism research. Journalism.
All new references have been clearly marked in yellow in the manuscript.
These additions help strengthen the article’s theoretical grounding in ethics and align more closely with the core focus expressed in the article’s title and research questions.
Comments 3: It would be necessary to include contextual information on the media structure and media pluralism in Kazakhstan. Additionally, data on the penetration of AI technologies within the Kazakhstani media landscape – and particularly their usage, or at least the overall technology penetration among the population – should be provided, along with audience reach and media penetration statistics for the selected outlets. A brief introduction to the media outlets chosen for the analysis would also be beneficial, along with a detailed justification for their selection.
Response 3: Thank you for your thoughtful feedback. In response, we have significantly revised the “Methods and Materials” section to better contextualize the media environment in Kazakhstan and justify our media selection.
Media System Context:
We have added a brief overview of the structure of the media landscape in Kazakhstan, including the role of state-run vs. independent outlets (lines 253-260). To support this, we included two scholarly references:
Utemissov, Z. Z., & Koshkenov, N. Zh. (2021). Kazakhstan media challenges in the context of media and government relations in the Republic of Kazakhstan. Bulletin of L.N. Gumilyov Eurasian National University, 134(1), 37-51.
Atay, S. (2025). The Role of Independent and Alternative Media in Conducting Investigative Journalism. Bulletin of L.N. Gumilyov Eurasian National University, 150(1), 7-18.
Media Outlet Justification:
We now explicitly name the six selected outlets (Azattyq, Inform.kz, Sputnik.kz, Zhas Alash, Minber.kz, and Tengrinews.kz) and explain why each was chosen – based on their national recognition, topic coverage, editorial independence, and popularity (lines 248-252 and 311-322).
AI Penetration Context:
We have added a statistical reference from Informburo.kz reporting that Kazakhstan ranks 48th globally in AI readiness, leading Central Asia (Introduction, lines 47-51, and Research gap, lines 165-167). This contextualizes the relevance of studying AI in journalism in Kazakhstan.
Caspian Post. (2024, September 20). IMF Index: Kazakhstan Tops Central Asia in AI Readiness.
Media Penetration and Popularity Metrics:
All new online sources were cited using APA format and highlighted in yellow in the revised manuscript for clarity.
These additions aim to strengthen the justification for our case study approach and provide readers with a clearer understanding of both the national media context and the technological landscape in Kazakhstan.
Comments 4: Furthermore, providing more detailed demographic information about the 97 respondents (such as gender, age, etc.) would help to better contextualize and understand the results.
More detailed information about the questions included in the questionnaire is needed. Furthermore, it is important to specify the response options provided to participants, as well as the methods used to grade or assess these responses.
Response 4: Thank you for your insightful comment. In response, we have substantially revised the “Methods and Materials” section to provide detailed information about the development and structure of the survey, the evaluation criteria, and the respondent demographics. The following key additions were made and marked in yellow in the revised manuscript:
Survey structure and response options:
We clarified that the survey consisted of five closed-ended questions for each article pair, covering five criteria: article structure, writing style, factual accuracy, citation of sources, and completeness of information. Respondents were asked to select which article (AI-generated or journalist-written) performed better in each criterion. These same five questions were replicated across six thematic areas, totaling 30 comparison points per respondent (lines 282-290).
Demographics of the participants:
We added that the survey was distributed among members of the closed WhatsApp group “Liga doktorantov KazNU”, which includes 841 members – primarily doctoral students and university lecturers. From this group, 97 respondents voluntarily participated between October and December 2024. The sample consisted of highly educated individuals, all holding higher education degrees. Most were either PhD students or faculty members affiliated with Al-Farabi Kazakh National University, aged between 26 and 48 (lines 269-273 and 276-280).
These clarifications improve the transparency of our data collection process and provide better context for interpreting the survey findings. We appreciate your suggestion and have addressed it accordingly.
Comments 5: The discussion section is rather brief and lacks sufficient development. It would be valuable to compare your analysis of the six AI-generated news items with the results obtained from the audience survey. In particular, a comparison between your evaluation of news containing factual inaccuracies and the audience’s assessment would provide meaningful insights, allowing for a deeper discussion of these inaccurate contents.
Response 5: Thank you for your helpful comment regarding the need for a more developed discussion section. In response, we have:
- Expanded the discussion to include direct comparisons between the content analysis and audience survey results for all six topics.
- Highlighted how factual inaccuracies in AI-generated articles (e.g., incorrect dates, missing context, outdated names) correspond with lower audience trust ratings, particularly in political and economic news (lines 700-708 and 711-716).
- Added a new sentence confirming that the findings from the content analysis and survey data are closely aligned across all topics (lines 718-720).
These additions strengthen the discussion and address your suggestion for deeper analysis. Thank you again for your valuable feedback.
Comments 6: Similarly, the conclusions are concise and would benefit from further elaboration. Moreover, strengthening the connection between the conclusions and the Research Questions 1 (RQ1) and 2 (RQ2) would improve the overall coherence of the study.
Response 6: Thank you for your valuable suggestion regarding the need to strengthen the connection between the conclusions and the research questions.
In response, we have added a sentence in the second paragraph of the Conclusion section explicitly linking the findings to RQ1 and RQ2 (lines 746-754). This addition clarifies how the comparative analysis addressed differences in content quality (RQ1) and how the survey results reflected audience perceptions (RQ2), thereby reinforcing the alignment between the study’s objectives and its outcomes.
All revisions have been highlighted in yellow in the updated manuscript.
Thank you again for your insightful feedback.
Reviewer 3 Report
Comments and Suggestions for Authors
This is a thoughtful and timely manuscript that addresses the growing intersection between artificial intelligence and journalistic ethics. I was particularly impressed by the study’s clear comparative framework, which examines AI-generated articles alongside those written by professional journalists across diverse topic areas such as politics, law, education, and sports. The decision to focus on Kazakhstan not only brings regional nuance to a largely Western-dominated discourse but also enriches our understanding of how AI technologies are evaluated in underrepresented media contexts.
The research design is clearly laid out, the data collection and analysis are rigorous, and the integration of both content analysis and survey data enhances the reliability and depth of the findings. The argumentation is coherent and logically developed, and the manuscript is well-structured from introduction to conclusion. The paper also reflects a strong command of the literature, drawing on recent and relevant sources to support its claims. Ethical considerations are addressed appropriately, and the limitations of AI in journalistic writing, particularly around factual accuracy and source attribution, are presented with balanced critique.
A light editorial review may help further polish the text, but overall, the work is already of a high standard. I believe it will be a valuable contribution to current debates on AI integration in the media, especially from a global and ethical perspective.
Author Response
Summary
We would like to express our sincere gratitude to the reviewers for taking the time to read and evaluate our manuscript entitled “Artificial Intelligence and Journalistic Ethics: A Comparative Analysis of AI-Generated Content and Traditional Journalism”. We appreciate the constructive and insightful comments provided, which have helped us improve the clarity and depth of our research.
Comments 1: A light editorial review may help further polish the text, but overall, the work is already of a high standard. I believe it will be a valuable contribution to current debates on AI integration in the media, especially from a global and ethical perspective.
Response 1: Thank you for your positive and encouraging feedback. We greatly appreciate your thoughtful remarks and recognition of the study’s relevance, clarity, and methodological rigor.
In response to your suggestion for a light editorial review, we have conducted a thorough proofreading of the entire manuscript to refine grammar, sentence structure, and stylistic consistency in line with American English academic standards.
These changes have been incorporated throughout the text to improve clarity and polish, while preserving the original meaning and structure of the work.
Thank you again for your valuable support.
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
I would like to thank the author(s) for making the required changes, which have significantly improved both the quality and the scientific contribution of the paper.