Unveiling the Mechanics of AI Adoption in Journalism: A Multi-Factorial Exploration of Expectation Confirmation, Knowledge Management, and Sustainable Use
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This manuscript contributes to our understanding of AI adoption in journalism. Integrating the Expectation Confirmation Model, Knowledge Management factors, and additional constructs provides a comprehensive framework that advances theoretical understanding in this domain. The methodology is sound, and the statistical analysis is rigorous.
I recommend the following improvements before publication:
- Enhanced contextualization of the Chinese journalistic environment and how findings might differ in other media contexts
- More thorough discussion of limitations, particularly regarding the cross-sectional design and self-reported data
- Improved language clarity in some sections to better express the nuanced findings
- More specific practical implications for different stakeholders (journalists, media organizations, AI developers)
- Consider addressing potential social desirability bias in the methodology section
Overall, this is a substantial manuscript addressing a timely topic with important implications for the evolving field of journalism in the digital age.
Comments on the Quality of English Language
The manuscript demonstrates good English proficiency but would benefit from some stylistic and grammatical refinements to enhance clarity and readability. Occasional awkward sentence constructions and minor grammatical issues throughout the text should be addressed. Specific areas for improvement include:
Some overly complex sentences could be simplified for better comprehension
Occasional inconsistencies in tense usage
Some terminology that could be more precisely defined
A few instances of repetitive phrasing that should be varied
While the content is understandable, polishing the language would help ensure the critical findings are communicated more effectively to the journal's international readership. I recommend thorough proofreading by a native English speaker or professional editor before final publication.
Author Response
Comments 1: Enhanced contextualization of the Chinese journalistic environment and how findings might differ in other media contexts
Response 1: We thank the reviewer for this comment. We have substantially improved the contextualization of the Chinese journalistic ecosystem in the revised manuscript, particularly in Section 2.2 (“AI Technologies in a Chinese News Industry”). We expanded on how China’s AI adoption is shaped not only by technological advancement but also by regulatory and cultural considerations, such as content governance mechanisms, ideological alignment requirements, and institutional sensitivities that affect how AI tools are trained and applied in journalistic practice. In addition, we clarify how these structural elements reinforce different algorithmic behaviours, creating diverging performance biases between Chinese and Western media ecosystems. For example, Chinese AI models are often trained to be more cautious around politically sensitive topics and calibrated toward narratives that promote social stability, whereas Western applications of AI tend to be more pluralistic, accommodating adversarial journalism norms and a wider spectrum of political topics. To demonstrate the wider relevance of our findings, we also added a paragraph discussing the limited transferability of some AI affordances, that is, the need for "contextual calibration" when interpreting our findings in contexts other than China. We have amplified this point in the literature review and discussion sections of the revised manuscript. We hope these revisions meet the reviewer’s expectations and help strengthen and clarify the application and contributions of the study.
Comments 2: More thorough discussion of limitations, particularly regarding the cross-sectional design and self-reported data
Response 2: We appreciate the reviewer’s insightful recommendation to discuss the study’s limitations more thoroughly. We have substantially amended the article (Section 8.1: Limitations) to offer a more robust and critical discussion of our methodological limitations. Specifically, we acknowledge that the cross-sectional design precludes causal inference and longer-term observation of journalists’ perceptions of, or practices with, AI technologies. In response, we place greater emphasis on the need for longitudinal work to examine how expectation confirmation, satisfaction, and continued use of AI tools change over time. We also discuss how self-reported data may introduce bias, including social desirability bias and respondents’ inclination to answer in a professionally favorable way, particularly for constructs such as trust, satisfaction, and perceived usefulness. We did our best to mitigate these biases through survey anonymity and self-administration; however, we acknowledge that residual biases likely remain. We now explicitly recommend that future work include objective behavioral data (for example, system logs or platform analytics) to triangulate self-report measures and improve construct validity. We believe these expanded limitations add transparency to the findings and discussion and provide a clearer roadmap for methodological rigor.
Comments 3: Improved language clarity in some sections to better express the nuanced findings
Response 3: We truly appreciate the reviewer’s emphasis on conveying the nuanced implications of our findings with greater language clarity. In response, we carefully read through and edited portions of the manuscript, primarily Sections 6 (Findings) and 7 (Discussion), to improve clarity and conceptual flow. The revisions entail: (1) sharpening the distinctions between constructs such as perceived usefulness, satisfaction, and sustainable use, especially with regard to their roles in the expectation confirmation model; (2) strengthening the causal logic linking knowledge management factors with sustained AI use; and (3) strengthening the interpretation of SEM results by identifying both statistical significance and practical relevance, primarily through clearer transitional statements and concise summaries. For example, we replaced ambiguous or overly technical phrasing (e.g., “enhance performance” or “journalists are more likely to embrace”) with more explicit statements of the mechanisms linking these variables to practice (e.g., “perceived usefulness strengthens journalists’ continued intention to use AI by connecting tools’ functionality with newsroom workflows”). We hope these edits improve the clarity and flow of the findings while preserving the complexity of our analysis and interpretation. All edits are marked in the revised manuscript.
Comments 4: More specific practical implications for different stakeholders (journalists, media organizations, AI developers)
Response 4: We appreciate the reviewer’s thoughtful suggestion. In response, we have revised the Discussion and Conclusion sections to include stakeholder-specific practical implications derived from the study’s empirical findings. For journalists, we highlight the need for purposefully designed training programs that reinforce AI literacy and facilitate knowledge acquisition and application. Informed by our findings, journalists with greater technology affinity and trust in AI use these tools more sustainably, so one of the greatest professional development opportunities is to support journalists who are less comfortable with technology. For media organizations, we now emphasize the need to embed AI tools into editorial processes through participatory design and feedback loops. We also describe how an organization might build institutional trust in AI through transparent governance structures that align with journalistic values of credibility and autonomy. For AI developers, we offer clearer guidance on user-centered design, particularly the need to ensure AI systems are extensible, context-sensitive, and able to enhance content production without disrupting newsroom routines. Finally, we recommend incorporating local regulatory and cultural cues into algorithmic models, so that AI tool development remains compatible across media environments, particularly in complex and diverse geopolitical contexts such as China. These practical implications are now explicitly framed and contextualized for each stakeholder group as actionable guidance. We believe these additions significantly strengthen the paper’s real-world and policy relevance.
Comments 5: Consider addressing potential social desirability bias in the methodology section
Response 5: We thank the reviewer for this valuable comment. In response, we have added detail to the Methodology (Section 4.1) about the issue of social desirability bias. Having noted our use of self-reported survey data, we now examine how social desirability bias may manifest in responses to constructs such as trust, satisfaction, and perceived usefulness. To minimize this bias, we note that: (1) the online survey was anonymous, reducing social pressure and reputational concerns among respondents; (2) participation was voluntary, and respondents were assured that the data would be used only for academic research and that identifiable information would be kept confidential; and (3) the questionnaire design did not lead respondents, and the length of the response scales diluted any single response bias. Furthermore, in the Limitations section we disclose that some bias may nevertheless remain. We recommend that future studies collect objective behavioral measures, such as usage logs and interaction logs, to triangulate self-reported data and further reduce potential bias. We hope these additions sufficiently address the reviewer’s comment and add transparency and methodological clarity to the study.
Reviewer 2 Report
Comments and Suggestions for Authors
The topic is relevant and novel, and the approach is potentially interesting. However, the way the authors present their research has some weaknesses.
Section 2 (comprising a single subsection) seems more of a contextualization of the study, and could be probably combined with the Introduction.
The theoretical framework needs further explanation and clarity. The authors use variables from two different models (ECM and KM), complemented by elements from the TAM. Although the conceptual research framework might be relevant and correct as it is, insufficient argumentation is provided as to why these models and variables are selected, and what their contribution to academic knowledge would be.
Figure 1 could be presented after all the Hs, including them in the model, with expected directions.
There are 12 hypotheses. This is not wrong per se, but the model gets quite complex. Besides, the Hs mention significant impact, but not whether this impact is positive or negative, which makes it less interesting, especially considering that previous literature probably offers sufficient basis to expect a positive or negative relationship for some of the Hs. Moreover, the expression “significantly impacts on” is not correct.
Importantly, the creation and validation of the constructs is not addressed. It is only mentioned that the items were collected from existing scales, but no examples are provided, nor explanations, Cronbach’s alphas, or previous factorial analyses... It is hard to evaluate the quality of the study without this information.
In general, the whole manuscript is very focused on the testing of the model, and there is limited attention to the social or empirical relevance of the study beyond the specific variables and models. The model can be seen as a tool to understand a social phenomenon (adoption of AI by Chinese journalists), but testing the model seems to be the central aspect of the study. The literature review and introduction are generally shallow, and the discussion shows how the Hs have been confirmed, but the real implications and the contributions of the study are not really addressed.
There is no mention of limitations or future lines of work.
Author Response
Reviewer 2
Comments 1: Section 2 (comprising a single subsection) seems more of a contextualization of the study, and could be probably combined with the Introduction.
Response 1: We thank the reviewer for this constructive structural suggestion. After careful consideration, we agree that Section 2, which provides a contextual overview of AI and the media (with a focus on the global perspective and the Chinese media system), functions more as a lengthy contextual background justification for the study than as a literature review in the traditional sense. Thus, in the revised manuscript, we have: (1) Combined the content of Section 2 into the Introduction, especially its second half, to create a cohesive contextual narrative that establishes the study’s motivation, the regulatory and technological context, and the relevance for journalism in China. (2) Supported this integration by refining the problem statement, providing stronger justification for the study’s conceptual framework, and preparing the reader for the research questions that follow. We believe this structural revision improves the overall flow, coherence, and readability of the manuscript while retaining all contextual information relevant to the study’s originality and relevance. Again, we thank the reviewer for this helpful suggestion.
Comments 2: The theoretical framework needs further explanation and clarity. The authors use variables from two different models (ECM and KM), complemented by elements from the TAM. Although the conceptual research framework might be relevant and correct as it is, insufficient argumentation is provided as to why these models and variables are selected, and what their contribution to academic knowledge would be.
Response 2: We truly appreciate the reviewer’s request for better clarity and justification of our theoretical approach. In response, we have modified Section 3 (Theoretical Approach) to more clearly articulate our rationale for drawing constructs from the Expectation Confirmation Model (ECM), the Knowledge Management (KM) framework, and selected elements of the Technology Acceptance Model (TAM). Specifically, we have added clarification on the following points:
- We clarify that ECM offers a well-established basis for examining post-adoption behavior, which is central to our inquiry into sustainable AI use by journalists. The ECM constructs (expectation confirmation, perceived usefulness, and satisfaction) help explain how initial expectations lead to intentions for continued use, thereby situating our work in the literature on post-adoption technology use.
- We justify the KM constructs (knowledge sharing, knowledge acquisition, and knowledge application) on the basis that journalism is fundamentally knowledge-intensive, and AI tools function as mediators of knowledge processes throughout a journalist’s workflow, able to organize, disseminate, and apply knowledge in the journalism context. This extends prior KM work to a new domain: individual-level technology use in journalism.
- We introduce TAM elements (especially perceived ease of use) on the strength of extensive empirical work indicating their explanatory power during initial technology adoption, particularly in AI studies. Adding this construct to the hybrid model increases our ability to make sense of journalists’ subjective assessments of the ease of use of AI systems.
- Finally, we added clarification about the theoretical contribution of this hybrid model, which integrates post-adoption behavior (ECM), cognitive and organizational knowledge processes (KM), and ease-of-use perceptions (TAM) to provide an integrated theoretical explanation of the sustainable use of AI tools in journalistic practice, particularly in a non-Western, variable-resource context such as China. We hope these theoretical refinements, visible in the modified manuscript, better articulate the internal coherence of the model and its contribution to prior literature in journalism, media studies, and technology adoption.
Comments 3: Figure 1 could be presented after all the Hs, including them in the model, with expected directions.
Response 3: We appreciate the reviewer’s sound suggestion regarding the position and form of Figure 1 (Conceptual Research Framework). In the revised manuscript we have made the following revisions: Figure 1 has been placed after the complete list of hypotheses (H1–H12), so the reader sees the theoretical rationale and each hypothesis before the visual summary of the hypothesized relationships. The figure has also been modified to include all of the hypotheses in the structural paths of the model. In the revised figure, each arrow is labeled with its corresponding hypothesis (e.g., H1, H2, ..., H12), and the anticipated directional relationships are shown with unidirectional arrows. Additionally, we added a brief sentence preceding the figure to orient the reader: “Figure 1 below illustrates the proposed research model, summarizing all hypothesized relationships among the constructs.” This reorganization and labeling enhances the clarity, coherence, and visual utility of the conceptual framework, as suggested.
Comments 4: There are 12 hypotheses. This is not wrong per se, but the model gets quite complex. Besides, the Hs mention significant impact, but not whether this impact is positive or negative, which makes it less interesting, especially considering that previous literature probably offers sufficient basis to expect a positive or negative relationship for some of the Hs. Moreover, the expression “significantly impacts on” is not correct.
Response 4: We sincerely thank the reviewer for this detailed and thoughtful feedback. We have carefully considered each point and implemented several important revisions to improve the clarity, precision, and scholarly rigor of the manuscript.
- Regarding the number and complexity of our hypotheses (H1–H12): We recognize the concern about proposing twelve hypotheses and appreciate the complexity of including so many in the paper. However, we believe this is warranted, given that our hypotheses capture the integrative components of our theoretical model, which draws on Expectation Confirmation Model (ECM) constructs, Knowledge Management (KM) theory, and selected constructs from the Technology Acceptance Model (TAM). Each hypothesis was explicitly developed to propose and measure a distinct theoretical relationship that contributes to our understanding of sustainable AI use in journalism. To reduce the cognitive burden on the reader and enhance clarity of presentation, we have reorganized the hypotheses section by: grouping the hypotheses thematically (i.e., ECM-based, KM-based, and TAM-based constructs); preceding each group with a short rationale that situates it in the theoretical literature; and including a summary of the contribution each group makes to the overall research model. This modular presentation retains the depth of the analysis while improving readability and coherence.
- On the absence of directional language in hypotheses: We thank the reviewer for pointing out that our hypotheses stated “significant influence” or “impact” without specifying whether the influence was hypothesized to be positive or negative. Although our study was theoretically grounded and drew on empirical studies, this omission reduced the interpretive clarity of our hypotheses. We have revised all 12 hypotheses to state the expected direction of influence. For example: H1: Expectation confirmation has a positive influence on perceived usefulness of AI technologies for journalistic purposes. H5: Perceived satisfaction has a positive influence on sustainable use of AI technologies in journalism. Each directional expectation is now supported by citations in the text, including seminal studies (e.g., Bhattacherjee, 2001; Davis et al., 1989; Al-Emran & Teo, 2020), to enhance theoretical clarity and rigor.
- On the grammatical usage of “significantly impacts on”: We appreciate the reviewer pointing out this language issue. We agree that the phrase “significantly impacts on” is not grammatically acceptable in academic English, and we have corrected it throughout the manuscript. In the revised manuscript, we replaced this phrase with more accurate, academically accepted alternatives, such as “positively influences,” “is significantly associated with,” and “has a significant effect on.” We have applied these changes consistently in the hypothesis statements, results interpretations, and discussion sections.
Comments 5: Importantly, the creation and validation of the constructs is not addressed. It is only mentioned that the items were collected from existing scales, but no examples are provided, nor explanations, Cronbach’s alphas, or previous factorial analyses... It is hard to evaluate the quality of the study without this information.
Response 5: We appreciate the reviewer for this valuable and constructive feedback on the construct creation and construct validation process. We agree that transparent reporting on item development and psychometric evaluation is important in gauging the methodological quality and trustworthiness of the study. In response to this concern, we have made several important revisions to the revised manuscript, specifically in Section 4.1 (Questionnaire Design) and Section 5.4 (Measurement Model Evaluation). Below, we summarize the main new additions and reworded clarifications:
- Construct and item sources are now clearer: all items used to measure each construct were adapted from previously validated, peer-reviewed scales in information systems, AI use, knowledge management, and technology adoption contexts. Specifically: for constructs such as Expectation Confirmation, Perceived Usefulness, and Satisfaction, the items were derived from Bhattacherjee’s Expectation-Confirmation Model (2001). Ease of Use and Technology Affinity items were adapted from TAM-related studies (see Davis, 1989; Trautwein et al., 2021). Knowledge Sharing, Knowledge Acquisition, and Knowledge Application were taken from the KM literature, specifically Al-Emran & Teo (2020) and Al-Sharafi et al. (2023). Personal Trust was informed by established, validated trust scales in AI acceptance (see Pham & Nguyet, 2023; Kim et al., 2008). In the revised paper, we added Appendix A, which lists all 41 survey items with their sources, making the paper transparent and replicable.
- Reliability Analysis (Cronbach’s Alpha and Composite Reliability): To measure the internal consistency of each construct, both Cronbach’s alpha (α) and composite reliability (CR) were computed and incorporated into Table 2 (Section 5.4). All constructs fulfilled the criterion for internal consistency, with Cronbach’s alpha values well above the acceptable limit of 0.70, ranging from 0.838 to 0.920. With scores above 0.84, the CR values indicated excellent reliability. (An illustrative sketch of how these criteria are computed appears after this list.)
- Convergent Validity (Factor Loadings and AVE): In addition, we assessed convergent validity in terms of: the outer loadings (λ) of the individual items, which were all greater than the acceptability threshold of 0.70, and the Average Variance Extracted (AVE) for each construct, which was in every case above the threshold of 0.50. These values are also reported in Table 2. Following Hair et al. (2019), the loadings and AVE together indicate that a sufficient amount of variance is explained by each construct and its indicators.
- Discriminant Validity (Fornell-Larcker and HTMT): To assess discriminant validity, we used the Fornell-Larcker criterion, which indicated that the square root of the AVE for each construct was greater than the inter-construct correlations, and the HTMT ratio, which showed values below 0.85. Based on the findings in Table 2, each construct is distinct from all others, so the criteria for discriminant validity have been met.
- Appropriateness of a Confirmatory Approach: Since the current model was built on theory and previously validated measures, we took a confirmatory approach using Partial Least Squares Structural Equation Modeling (PLS-SEM) rather than exploratory factor analysis (EFA). This confirmatory approach aligns with recommendations in the IS and communication literature, especially for studies with multiple latent variables and many hypothesized effects.
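To make these psychometric criteria concrete, the following is a minimal illustrative sketch in Python. The loadings and simulated respondent data are hypothetical; the values reported in Table 2 were produced with PLS-SEM software, not with this code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an n_respondents x k_items matrix (one construct)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum() ** 2
    return s / (s + (1.0 - loadings ** 2).sum())

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    return (loadings ** 2).mean()

# Hypothetical standardized outer loadings for one construct (all > 0.70)
lam = np.array([0.81, 0.84, 0.79, 0.88])

# Simulate respondents consistent with those loadings, to illustrate alpha
rng = np.random.default_rng(42)
factor = rng.normal(size=(300, 1))
items = factor * lam + rng.normal(size=(300, 4)) * np.sqrt(1.0 - lam ** 2)

print(f"alpha     = {cronbach_alpha(items):.3f}")       # internal consistency, want > 0.70
print(f"CR        = {composite_reliability(lam):.3f}")  # want > 0.70
print(f"AVE       = {ave(lam):.3f}")                    # convergent validity, want > 0.50
print(f"sqrt(AVE) = {np.sqrt(ave(lam)):.3f}")           # Fornell-Larcker: must exceed the
                                                        # construct's correlations with others
```

With these hypothetical loadings, the sketch yields CR ≈ 0.90 and AVE ≈ 0.69, comfortably above the 0.70 and 0.50 thresholds discussed above.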
Comments 6: In general, the whole manuscript is very focused on the testing of the model, and there is limited attention to the social or empirical relevance of the study beyond the specific variables and models. The model can be seen as a tool to understand a social phenomenon (adoption of AI by Chinese journalists), but testing the model seems to be the central aspect of the study. The literature review and introduction are generally shallow, and the discussion shows how the Hs have been confirmed, but the real implications and the contributions of the study are not really addressed.
Response 6: We appreciate the reviewer for this insightful and important comment, which raises larger considerations about the balance between methodological rigor and empirical or social significance. We recognize that, while testing the structural model is one component of the study, the work must also be understood in a larger sociotechnical context and anchored in meaningful real-world implications, particularly given the study’s focus on the adoption of AI technologies among journalists in China. In response, we have made the following major changes to advance the relevance, depth, and contribution of the manuscript to scholarly and professional audiences:
- Expanded Introduction and Review of Related Literature (Sections 1 and 2)
We made substantial additions to the Introduction to clarify: the real-world context of AI deployment in Chinese newsrooms, including the regulatory context, labor restructuring, and differences in the technological capacities of resource-rich and resource-poor organizations; the societal importance of understanding how journalists engage with AI tools, not only in terms of use value but also in terms of journalistic autonomy and agency, ethical considerations, and changes to their editorial work; and the growing international interest in analyzing AI-human collaboration in the production of journalism, which situates our paper within a broader international conversation about the future of journalism under digital transformation. We also revised the literature review to go beyond a strictly instrumental account of AI technologies and integrate more critical and contextual accounts, including algorithmic governance (Diakopoulos, 2017), institutional trust, and the relevance of cultural-linguistic framing in AI models.
- Expanded Theoretical and Social Framing (Section 3)
We reframed the model not only as a statistical model but also as a heuristic for understanding a broader socio-technological adoption process. In this version, we are more explicit about how each construct (e.g., trust, knowledge use, technology affinity) maps onto the lived experience of journalists in China, who work within the context of: top-down digital governance, bottom-up professional adaptation, and new and changing norms of editorial autonomy and innovation.
- Expanded Discussion of Empirical and Practical Implications (Section 7)
We have rewritten the discussion section to better emphasize: practical opportunities for various stakeholders (journalists, media managers, AI developers), such as AI-specific training design, AI onboarding protocols for newsrooms, and regulations that inform AI development; the contextual relevance of the findings, particularly how adoption by Chinese journalists is shaped not only by internal motivations (trust, perceived usefulness) but also by external constraints (infrastructure, policy); and the limitations of a purely functionalist model when applied to high-stakes industries such as journalism, and how our study bridges organizational and behavioral considerations to address this limitation.
- Clarified Academic Contribution
We now more clearly articulate the study’s contribution to the academic literature, particularly in the following ways: we extend both ECM and KM theories into the field of media studies and journalism, where their application remains sparse; we provide a non-Western empirical lens on AI adoption that contributes to global knowledge on digital transformation in journalism; and we illustrate how political conditions and social contexts shape individual-level technology use, particularly in authoritarian or semi-regulated media environments.
Comments 7: There is no mention of limitations or future lines of work.
Response 7: We appreciate the reviewer for indicating the value of explicitly discussing the study’s limitations and outlining future research directions. In response, we have incorporated a new section, “8. Limitations and Future Directions,” that robustly discusses methodological and contextual limitations. There, we highlight that the cross-sectional design does not allow causal inference or observation of change over time, and that self-reported data may introduce social desirability bias. In response to the latter concern, we discuss the anonymous survey methods used and encourage the use of objective behavioural data (e.g., system usage logs) in future studies. We also note that the purposive sampling of Chinese journalists, although contextually valuable, limits generalizability, and we call for comparative studies across different media ecologies and regulatory contexts. For future research, we suggest longitudinal studies to capture changes in journalist-AI relationships over time, comparative studies to understand cultural and institutional variation, and the inclusion of moderating variables (e.g., organizational support, algorithmic transparency) to capture the complex realities of sustainable AI use in journalism. We believe these additions address the reviewer’s concern and improve the manuscript both academically and in its relevance to the ongoing realities of journalism across the globe.
Reviewer 3 Report
Comments and Suggestions for Authors
The introduction effectively establishes the growing importance of AI in journalism. It provides context with appropriate references, as well as identifies clear research gaps. Nevertheless, the introduction would benefit from explicitly stating the research objectives or questions in bullet points or a summary paragraph. Also, a clearer distinction between the challenges of AI adoption in developed vs. developing countries (or resource-rich vs. resource-constrained environments) would be helpful.
The literature review is well-structured into clear subsections, providing comprehensive discussions and citing recent and relevant studies. Yet, some references (e.g., Eren, 2021; Li & Fang, 2019) are repeated across hypotheses. It may help consolidate the evidence before the hypotheses section to improve clarity.
The theoretical framework integrates the Expectation Confirmation Model (ECM), Knowledge Management (KM), and constructs from the Technology Acceptance Model (TAM). While comprehensive, the framework could discuss potential mediators or moderators (e.g., organizational size, type of media outlet). A clearer theoretical justification for why trust and technology affinity are included outside ECM/KM is recommended.
The method is good, except that the convenience sampling method should be acknowledged as a limitation in the main body, not just in methods. In addition, it would help elaborate more on non-response bias mitigation and ethical clearance, especially since data collection was digital and voluntary. The data analysis and results are suitable and clearly presented. Still, the author(s) may consider providing effect size interpretations alongside beta values for practical significance.
Matters in the discussion section are well articulated. However, the discussion could be enhanced by differentiating between short-term adoption drivers and long-term sustainability factors. Similarly, more commentary on cultural or institutional factors specific to China would strengthen external validity claims.
The conclusion could integrate future research directions, such as testing this model in other countries or media contexts and encouraging the use of longitudinal or experimental designs to confirm causality. Moreover, the manuscript can provide a dedicated limitations section discussing the generalizability due to non-probability sampling, self-report bias and cultural specificity (China). Additionally, ethical review or informed consent details are absent – this should be explicitly stated for transparency.
Finally, the author(s) used recent, credible, and relevant references (2020–2024). However, the references have minor inconsistencies in citation formatting (e.g., missing DOI links and placement of page ranges). Perhaps summarizing prior models (e.g., ECM, TAM) in a Table can reduce in-text citations.
Author Response
Comments 1: The introduction effectively establishes the growing importance of AI in journalism. It provides context with appropriate references, as well as identifies clear research gaps. Nevertheless, the introduction would benefit from explicitly stating the research objectives or questions in bullet points or a summary paragraph. Also, a clearer distinction between the challenges of AI adoption in developed vs. developing countries (or resource-rich vs. resource-constrained environments) would be helpful.
Response 1: We appreciate the reviewer’s critical and positive feedback. Accordingly, we have clarified the Introduction, articulating the salient research questions in a standalone paragraph formatted as a bulleted list for clarity and reader accessibility. These research questions (now named in the text as RQ1–RQ3) address expectation confirmation, knowledge management processes, and the personal and technological factors sustaining AI use among Chinese journalists. We also added contextual framing that distinguishes AI adoption challenges in developed versus developing media environments (or resource-rich versus resource-constrained environments). Drawing on the global literature and the specific case of China, we note that developed countries face primarily ethical and institutional challenges, while developing countries struggle predominantly with weak infrastructure, unreliable access to training, and unprecedented regulatory pressure. Clarifying these points enhances both the conceptual depth and international relevance of the work. The point raised by the reviewer enabled us to significantly improve the clarity and contextual richness of the manuscript’s opening.
Comments 2: The literature review is well-structured into clear subsections, providing comprehensive discussions and citing recent and relevant studies. Yet, some references (e.g., Eren, 2021; Li & Fang, 2019) are repeated across hypotheses. It may help consolidate the evidence before the hypotheses section to improve clarity.
Response 2: We appreciate the reviewer’s positive comment on the structure of our literature review and the thoughtful recommendation to eliminate redundant citations. Some central studies (e.g., Eren, 2021; Li & Fang, 2019) were cited multiple times, for instance under both Expectation Confirmation and Perceived Usefulness, because they support the development of multiple constructs. For us, this repetition demonstrated the empirical support behind our proposal, but we concur with the reviewer that some of it is unnecessary and affects the clarity and readability of the theoretical framework. As a revision, we have restructured the beginning of Section 3 (Theoretical Approach) so that each major component of the model (i.e., ECM, KM, and TAM-related variables) opens with an integrated literature synthesis paragraph. In these consolidated sections, we now summarize the relevant empirical studies together (e.g., Bhattacherjee, 2001; Al-Emran & Teo, 2020; and others). This allows us to: (1) present the collective weight of supporting evidence rather than relying on isolated studies; (2) eliminate repetitive citation patterns in the individual hypothesis statements; and (3) provide a coherent introduction before formally articulating the hypotheses. We have kept essential citations in the hypotheses for traceability, limiting them to those that directly support each claim. This approach, consolidating citations at the start of each section while retaining targeted citations within hypotheses, balances scholarly quality and efficiency. We feel this revision significantly improves the intellectual flow of the evidence and helps readers understand how the theoretical model connects to prior evidence-based studies. Finally, we thank the reviewer for this suggestion, which greatly supported the clarity and scholarly quality of our manuscript.
Comments 3: The theoretical framework integrates the Expectation Confirmation Model (ECM), Knowledge Management (KM), and constructs from the Technology Acceptance Model (TAM). While comprehensive, the framework could discuss potential mediators or moderators (e.g., organizational size, type of media outlet). A clearer theoretical justification for why trust and technology affinity are included outside ECM/KM is recommended.
Response 3: We appreciate the reviewer’s thoughtful and insightful critique of our theoretical framework, and we thank the reviewer for recognizing our effort to combine ECM with KM and aspects of TAM into a comprehensive, complementary model of the factors contributing to sustained AI use by Chinese journalists. In light of the reviewer’s two important points, we have revised Section 3 (Theoretical Approach) to strengthen both the theoretical justification and the future conceptual development of the model. (1) On potential mediators or moderators (e.g., organization size, media type): We agree that addressing mediating or moderating variables could provide greater insight into the complexities of AI adoption and use. While our study tested a model of direct effects, primarily to ensure theoretical parsimony and owing to sample size concerns, we have added a new paragraph in Section 3 acknowledging potentially important organizational-level moderators: organizational size may influence the availability of resources for AI infrastructure and internal knowledge sharing, and media type (e.g., state-owned vs. commercial; traditional vs. digital native) may affect editorial autonomy, risk tolerance, and willingness to adopt new technologies. Future research could examine these moderators empirically using multi-group SEM or interaction terms, as discussed in the new Limitations and Future Directions section.
(2) On the theoretical justification of trust and technology affinity outside ECM/KM: We appreciate the reviewer for pointing out that our earlier justification did not go beyond a brief definition. We have expanded the theoretical basis for including Personal Trust and Technology Affinity as psychological antecedents that are especially applicable to AI technologies in news production, with implications for the design of AI systems, organizational affordance and agency, and the risks arising from human-AI interaction. Building on trust research that is foundational in the information systems and AI literatures (see Kim et al., 2008; Duan et al., 2019), we frame Trust and Technology Affinity as constructs that contribute to the model independently of satisfaction and usefulness, rather than as flat determinants of technology acceptance; this is especially important given the opaque algorithmic nature of many AI products used in Chinese journalism. Technology affinity is not a new construct; it has been considered and validated in past extensions of TAM (see, for example, Trautwein et al., 2021). In a rapidly changing environment such as journalism, where journalists’ cognitive readiness varies widely (digital natives have vastly different familiarities than digital immigrants), technology affinity offers explanatory power beyond ease of use and serves as an index of future exploratory use. We have cited these theoretical foundations more clearly and, in the revised model, situated both constructs as boundary spanners that complement the ECM/KM core, connecting a journalist’s psychological readiness with an organization’s learning capacities. We believe these revisions strengthen the theoretical foundation of the model, and we thank the reviewer for encouraging us to clarify the role of these two constructs.
Comments 4: The method is good, except that the convenience sampling method should be acknowledged as a limitation in the main body, not just in methods. In addition, it would help elaborate more on non-response bias mitigation and ethical clearance, especially since data collection was digital and voluntary. The data analysis and results are suitable and clearly presented. Still, the author(s) may consider providing effect size interpretations alongside beta values for practical significance.
Response 4: We sincerely thank the reviewer for the positive review of our methodological approach and for raising several valuable points related to transparency and interpretability.
- Acknowledging Convenience Sampling as a Limitation: We agree that the use of convenience sampling, while a well-established practice in exploratory research with specialized professional populations such as journalists, can lead to sampling bias and limit generalizability. Although we mentioned this initially in Section 4.2 (Data Collection and Participants), we now explicitly acknowledge convenience sampling as a limitation in Section 8 (Limitations and Future Directions). Specifically, we acknowledge that this sampling strategy likely led to the overrepresentation of digitally active journalists and does not fully capture the diversity of AI-adoption behaviors in the broader journalistic landscape in China.
- Non-response bias mitigation and ethical clearance: We expanded on how we mitigated response bias. Specifically, we designed the online questionnaire with forced-response settings (no questions could be skipped), ensuring complete datasets. Furthermore, to mitigate selection bias, we distributed the survey across multiple networks and outreach channels (WeChat, WhatsApp, etc.). On ethics, we state that the study received approval from an institutional academic research ethics review committee, and that participation was voluntary, anonymous, and confidential. Respondents could withdraw at any time, and data were stored securely on a university system for academic purposes only. We confirmed this information in the method section and added a summary in the ethical statement at the end of the paper.
- Effect Size Interpretations to Support Practical Significance: We appreciate the reviewer’s recommendation to go beyond statistical significance and interpret effect sizes. While beta coefficients (β) indicate the strength and direction of relationships, we now include brief interpretations in Section 5.5 (Structural Model Assessment) that contextualize their practical significance. For example, the highest path coefficient, Perceived Usefulness → Sustainable Use (β = 0.221), suggests a moderate effect; we note that while usefulness plays a statistically meaningful role in sustainable AI use, it must be interpreted alongside other significant factors, including knowledge sharing and ease of use. We also interpret the R² values (the model accounts for 75.1% of the variance in sustainable use) in terms of explanatory power, following standard SEM interpretation guidelines (Hair et al., 2019); a minimal sketch of one common effect-size computation appears below. We believe these changes improve the transparency, ethical integrity, and applied interpretability of the study. The reviewer’s recommendations have helped us clarify and strengthen the methodological and analytical rigour of the manuscript, and we are grateful for them.
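To illustrate what such an effect-size interpretation involves, here is a minimal sketch of Cohen's f², the effect-size statistic Hair et al. (2019) recommend reporting alongside path coefficients in PLS-SEM. The full-model R² of 0.751 is the value reported in the manuscript; the reduced-model R² below is hypothetical, invented purely for illustration.

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f^2 for one predictor: the change in R^2 when the predictor is
    dropped, scaled by the variance the full model leaves unexplained."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

def effect_label(f2):
    """Conventional thresholds (Cohen, 1988): 0.02 small, 0.15 medium, 0.35 large."""
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"

r2_full = 0.751     # reported R^2 for Sustainable Use
r2_reduced = 0.715  # hypothetical R^2 with Perceived Usefulness omitted

f2 = f_squared(r2_full, r2_reduced)
print(f"f^2 = {f2:.3f} ({effect_label(f2)})")  # 0.145 -> "small", near the medium cutoff
```

Reporting f² in this way separates a path's statistical significance from its practical contribution to explained variance, which is precisely the distinction the reviewer asked us to draw.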
Comments 5: Matters in the discussion section are well articulated. However, the discussion could be enhanced by differentiating between short-term adoption drivers and long-term sustainability factors. Similarly, more commentary on cultural or institutional factors specific to China would strengthen external validity claims.
Response 5: We sincerely appreciate the reviewer’s positive evaluation of the Discussion section and are particularly grateful for the thoughtful suggestions to deepen the analysis of our findings. In that regard, we have amended Section 7 (Discussion) with two improvements addressing these recommendations:
(1) Distinction Between Short-Term Adoption Drivers and Long-Term Sustainability Factors. We now distinguish more clearly between factors that facilitate the short-term adoption of AI tools and factors that contribute to the long-term sustainability of use, as follows:
(1) Perceived Ease of Use, Technological Affinity, and Expectation Confirmation are treated as short-term enablers, in the sense that they lower entry barriers, shape a first attitudinal response, and motivate initial experimentation with AI-based technology. (2) In contrast, Knowledge Sharing, Knowledge Application, Trust, and Perceived Usefulness are framed as long-term sustainability drivers, allowing deeper integration of the technology into daily workflows and supporting the transition from experimentation to routine use over time. In doing so, we provide a more dynamic analytical understanding of AI adoption as a temporal, multi-phased process rather than a static behavioral outcome.
(2) Broader Discussion of the Cultural and Institutional Specificities of the Chinese Context. To strengthen the external validity of the study and enhance its contextual richness, we have expanded the discussion of the China-specific cultural, historical, and institutional factors that shape AI technology adoption in journalism. For example: (1) top-down digital transformation policies, such as media convergence directives and mandates for AI-assisted news production; (2) regulatory constraints related to content moderation and algorithmic compliance systems, which shape how AI tools are developed and used in Chinese newsrooms; and (3) cultural factors such as a higher tolerance of automation, hierarchical newsroom structures, and risk aversion in editorial practices, which may shape journalists’ attitudes toward trust and usability in ways that differ from Western contexts. We also clarify that while some of the results are likely generalizable (e.g., the role of ease of use in knowledge application), other findings may be context-bound (e.g., trust in AI within state-guided innovation frameworks). In the Future Directions section, we explicitly call for cross-national comparative research to validate these distinctions. We believe these additions give the discussion a more nuanced social context, and we again appreciate the reviewer’s contribution toward elevating the analytical rigor and global relevance of the manuscript.
Comments 6: The conclusion could integrate future research directions, such as testing this model in other countries or media contexts and encouraging the use of longitudinal or experimental designs to confirm causality. Moreover, the manuscript can provide a dedicated limitations section discussing the generalizability due to non-probability sampling, self-report bias and cultural specificity (China). Additionally, ethical review or informed consent details are absent – this should be explicitly stated for transparency.
Response 6: We appreciate the reviewer’s thorough and helpful comments. In response, we have made a number of substantive changes to improve the transparency, rigor, and future-oriented value of the manuscript. First, we expanded the Conclusion to include a strong future research agenda, indicating the need to test the proposed model in multiple geopolitical and media contexts and to evaluate its cross-cultural applicability. We also emphasize the need for longitudinal or experimental designs in future research to strengthen causal claims about constructs such as expectation confirmation, knowledge application, and sustained AI use. Second, we added a dedicated section (Section 8), “Limitations and Future Directions,” which now explicitly discusses the study’s caveats: convenience (non-probability) sampling, self-report and social desirability response biases, and the cultural and institutional specificity of the Chinese media environment, which may limit external generalizability. Third, we included an explicit statement on ethical transparency, noting in the Method section that ethics approval was obtained from the institutional academic ethics committee and clarifying the voluntary and anonymous nature of the survey; we repeat these clarifications in a new sub-section (Ethical Considerations) for visibility. We believe these alterations add significant value to the manuscript in terms of methodological transparency, recognition of contextual limitations, and usefulness for future comparative and longitudinal scholarship. Again, we sincerely appreciate the reviewer’s guidance.
Comments 7: Finally, the author(s) used recent, credible, and relevant references (2020–2024). However, the references have minor inconsistencies in citation formatting (e.g., missing DOI links and placement of page ranges). Perhaps summarizing prior models (e.g., ECM, TAM) in a Table can reduce in-text citations.
Response 7: We appreciate the reviewer’s kind comments regarding the quality and currency of the citations, and the helpful suggestions related to citation style and presentation. Accordingly, we have reviewed and edited all citations to conform with the journal’s citation style. In particular, we have: (1) included missing DOI links where applicable; (2) made page ranges consistent across all journal articles; and (3) ensured consistency in the formatting of author names, publication year placement, and punctuation in both the reference list and the in-text citations. Following the suggestion to reduce citation load and improve conceptual clarity, we also included a summary table (now Table 1) that presents the key theoretical models relevant to this study (ECM, KM, and TAM), their core constructs, primary sources, and applications to AI adoption in journalism. This addition not only reduces redundant reference citations but also shows readers visually how our integrated framework builds on, and adds to, those models. We feel this structural revision improves readability and theoretical accessibility, particularly for audiences less steeped in interdisciplinary theory. We appreciate the reviewer’s feedback on citation style and the suggestions, which improve both the academic rigor and the editorial standard of our manuscript.
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
The manuscript has significantly improved, and the suggestions have been adequately addressed. Only two minor comments:
- Figure 1 has indeed changed location, but this reviewer fails to see the mention of each Hypothesis in the figure. The sentence preceding the figure is also different in the response letter and the manuscript.
- Section 7 is called Findings, although it mostly focuses on the implications of the study. Perhaps the heading could be changed, as Findings could be confused with the Results.
Author Response
Comment 1: Figure 1 has indeed changed location, but this reviewer fails to see the mention of each Hypothesis in the figure. The sentence preceding the figure is also different in the response letter and the manuscript.
Response 1:
We appreciate the reviewer’s insightful feedback regarding Figure 1 and the sentence preceding it. We have taken the reviewer’s suggestion into account and made the following revisions to enhance clarity and consistency:
- Figure 1 has been relocated to follow the complete list of hypotheses (H1–H12), as we believe this order ensures the reader fully understands the theoretical basis and the hypotheses before reviewing the visual representation of their relationships. Additionally, we have updated the figure to include clear labels on each arrow, corresponding to the relevant hypotheses (e.g., H1, H2, H3,..., H12). This modification allows the reader to more easily track the relationships between the constructs and the hypotheses.
- Regarding the sentence preceding the figure, we have made a correction to align it with the response letter. The revised sentence is now: “Figure 1 below illustrates the proposed research model, summarizing all hypothesized relationships among the constructs.” This change ensures consistency between the manuscript and the response letter, addressing the reviewer’s concern.
These adjustments enhance the clarity and coherence of the conceptual framework, and we hope this revision meets the reviewer’s expectations. We are grateful for the constructive feedback, which helped us improve the manuscript.
Comment 2: Section 7 is called Findings, although it mostly focuses on the implications of the study. Perhaps the heading could be changed, as Findings could be confused with the Results.
Response 2: We appreciate the reviewer’s feedback regarding the heading of Section 7. We agree that the title “Findings” could potentially be confusing, as it may be interpreted as referring to the results of the study rather than the implications. To address this concern, we have revised the section title to “Discussion and Implications” to more accurately reflect the content of the section, which focuses on interpreting the study’s findings and discussing their broader implications.
This change helps to clarify the distinction between the Findings section, which presents the raw results, and the Discussion and Implications section, which interprets these results and explores their significance in the context of AI adoption in journalism.
We hope this revision meets the reviewer’s expectations and enhances the clarity of the manuscript.