Abstract
There is now widespread use of large language model (LLM)-based generative artificial intelligence (AI) tools in academic research and writing. While these tools are convenient, fast, and output-enhancing, they also arguably raise ethical issues, such as questionable authenticity and plagiarism. Here, I explore epistemological aspects of AI use in academic writing and posit that there is evidence for three related pitfalls that should not be ignored: (1) epistemic detriment in the form of illusions of understanding, (2) the potential for cognitive dulling or impairment, and (3) AI dependency (habitual and/or emotional). Thus, any potential infringements of academic ethics aside, AI use in academic writing carries intrinsic problems that are epistemic in nature. These epistemic downsides call for restraint and moderation, beyond regulatory measures that address ethical issues in AI use.
1. Introduction
Large language model (LLM)-based generative artificial intelligence (AI) agents have dramatically changed the way academic research is pursued and reported (AI for Science [AI4S], 2025). It is undeniable that AI agents have greatly facilitated research and development in many subfields of science and engineering, as well as the arts and humanities. For example, in line with the notion of having an “AI scientist” within a laboratory or research group, LLMs have been proposed to become de facto “Co-PIs” in research tasks ranging from literature triage to hypothesis generation (Prasad et al., 2025). However, the use of such agents in academic writing has been controversial (Rentier, 2025), largely because of perceptions of unoriginality and nontransparency, problems such as AI-based plagiarism (or AIgiarism, a portmanteau of ‘AI’ and ‘plagiarism’ (Tang, 2023)), and inaccuracies and fabrications resulting from AI hallucination (Sun et al., 2024). Earlier attempts to include AI agents such as ChatGPT as coauthors of academic papers were quickly outlawed, and most journals and publishers have since issued guidelines on the disclosure or declaration of AI use in academic writing (Yin et al., 2025).
Despite the prevailing guidelines, full compliance has, perhaps not unexpectedly, proven somewhat untenable. Several studies have revealed signs of fairly widespread undeclared use of AI in published works and preprints (Strzelecki, 2024; Glynn, 2024; Kwon, 2025b; Kobak et al., 2025; Maupin et al., 2025; Spick et al., 2025; Liang et al., 2025). It is perhaps unsurprising that computer science papers contain AI-generated content (Jacobs, 2025), but the American Association for Cancer Research (AACR) has also found a significant number of AI-generated manuscript abstracts and peer review reports, many of them undisclosed (Nadaff, 2025). The percentages highlighted in these reports likely underestimate the prevalence of AI writing in citable published works and preprints, because AI-crafted text is difficult to distinguish from text written by humans (Casal & Kessler, 2023) and AI detection software is not particularly reliable.
Should we even be against the extensive use of AI in academic writing? A recent survey by Nature showed that researchers are split in their views (Kwon, 2025a). Although the issue remains controversial, given the current trend and rapid pace of development, it may be a common perception that “resistance is futile” and that one should either embrace AI or risk falling behind. Issues of academic ethics associated with AI use have been extensively discussed (Stahl & Eke, 2024; Kim, 2024), and LLM-based generative transformers are becoming less error prone (Gibney, 2025). A particularly strong argument made by proponents of AI use in academic writing is that it improves language dexterity, facilitates academic communication for the less privileged (Giglio & da Costa, 2023; Prakash et al., 2025), and levels the competitive playing field for non-native users of English. Even prominent scholars in the field who were cautious in drafting earlier AI-use disclosure guidelines have reversed their stance to propose that such disclosures should be voluntary (Hosseini et al., 2025).
Research and academic ethics or integrity considerations (Arar et al., 2025) notwithstanding, it is important to recognize generative AIs such as LLM-based generative pretrained transformers (GPTs) (hereafter AI for short) for what they are meant to be. Alvarado, for example, has articulated that AI is an epistemic technology, one that is “… primarily designed, developed and deployed to be used in epistemic contexts …” (Alvarado, 2023). In essence, AI is designed to, and should, facilitate the acquisition of knowledge. But is this notion always true? I posit that while it holds for many aspects of academic research in general, epistemic pitfalls lurk in the specific context of academic writing. I refer to academic writing in a broad sense, from students writing term papers and project reports to academic researchers at all levels writing their research papers. Below, I first review the findings and evidence for these epistemic downsides and then discuss why it is still best to do, or at least to intellectually self-anchor, one’s own writing in academic work.
2. Epistemic Downsides of AI Use in Academic Writing
AI’s high accuracy in pattern recognition, fast output, and wide coverage might entice the user to perceive it as having epistemic expertise or authority (Hauswald, 2025). One might ask: if this notion is true, why not let the expert do the writing? However, the notion may be somewhat delusional, as it presumes in AI an equivalent of human intellect and belief. As argued by Ferrario and colleagues, “… epistemic expertise requires a relation with understanding that AI systems do not satisfy and intellectual abilities that these systems do not manifest” (Ferrario et al., 2024). Furthermore, there is emerging evidence that this epistemic reliance on AI has negative consequences for the user’s cognitive state and capacity. I describe three such interrelated downsides below.
2.1. Epistemic Detriment—An Illusion of Understanding
An important negative consequence of extended and extensive use of AI is epistemic detriment. I use the term “epistemic detriment” here to distinguish such negative consequences from “epistemic harm” or “epistemic injustice”, which are more conventionally used in discussions of bias, exclusion, and misleading information in AI-generated content (Humphreys, 2025; Kay et al., 2024) and are ethical violations by nature. Epistemic detriment instead refers broadly to the tradeoff between the ease and speed with which content is gained and the quality of knowledge attained.
One example of such epistemic detriment is an illusion of understanding. In their seminal commentary, Messeri and Crockett noted that “… the proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less” (Messeri & Crockett, 2024, p. 49), suggesting that AI solutions can create illusions of understanding in which we think we understand more than we actually do. This notion applies to AI-generated writing. With simple prompts, AI allows an author to write more and write faster, effectively and substantially increasing productivity under one’s name. Yet even when due diligence is exercised and one goes through the AI-produced content carefully to weed out any perceivable inaccuracies, one cannot truly and thoroughly master or own the knowledge within that content. Being presented with a ready draft, without piecing together the information and synthesizing the literature that manuscript writing requires, drastically weakens the epistemological process. The undesirable consequence is that the extensively AI-aided human author does not fully understand what she claims under her authorship.
A case in point related to the illusion of understanding is that the use of AI has allowed some apparently very productive authors (Conroy, 2024) to write articles outside their primary field of work or expertise, the contents of which they cannot confidently or convincingly claim to truly know. Another disconcerting trend is the explosion of formulaic articles in the biomedical literature (Spick et al., 2025), for example, papers relating single predictors to specific health conditions based on data available in public databases (Suchak et al., 2025), which were likely crafted by AI but whose authorship is claimed by those without true expertise in health statistics. Formulaic articles or writings are not limited to works in the sciences. In an online experiment, Doshi and Hauser studied the impact of AI ideas on the production of short stories and found that while access to AI ideas improved the creativity of less creative writers, AI-enabled stories resembled each other more than stories crafted by humans alone (Doshi & Hauser, 2024). Although anyone, imaginatively creative or otherwise, can now write stories of equivalent quality with AI, to think that human creativity in this regard is enhanced by AI would be delusional.
Such an illusion of understanding may have profound consequences. Kosmyna and colleagues provided empirical evidence for the phenomenon of “cognitive debt”, an accumulation of long-term cognitive costs from over-reliance on AI. The authors used electroencephalography (EEG) to record participants’ brain activity, assessing cognitive engagement and cognitive load as measures of neural activation during essay writing tasks performed by three groups (an LLM group, a Search Engine group, and a Brain-only group). The essays were analyzed using natural language processing (NLP) and scored by both human and AI judges. Comparing the AI-using groups with the “Brain-only” control showed that when mental effort is spared in the short term with AI, it comes with longer-term costs that include “… diminished critical thinking, reduced creativity and independent thought, increased vulnerability to bias and manipulation, and shallow information processing” (Kosmyna et al., 2025). Worryingly, self-reported ownership of essays was lowest in the AI-using group and highest in the “Brain-only” group, with AI users also struggling to accurately quote their own work. Granted that the study has logistical and scale limitations, it aptly illustrates how AI-using authors, despite thinking and claiming otherwise, are neither confident about nor sufficiently knowledgeable of the AI-generated content.
2.2. AI-Induced Cognitive Dulling/Impairment
The “cognitive debt” induced by AI use might arguably be limited to the full cognitive assimilation of specific content written by AI. However, there are also suggestions and evidence that AI use could result in more general cognitive dulling or impairment in learning and memory, as well as loss of creativity (Ahmad et al., 2023; Bai et al., 2023; Dergaa et al., 2024; Zhai et al., 2024; Gerlich, 2025; Jose et al., 2025). For example, Barcaui’s report on a randomized controlled trial of AI use by undergraduate students showed that unrestricted use during learning can impair the long-term retention of learned materials, likely via a reduction in the cognitive effort required for durable memory formation (Barcaui, 2025). Gerlich’s survey of online participants in the UK showed a significant negative correlation between frequent AI use and critical thinking abilities, apparently mediated by increased cognitive offloading (Gerlich, 2025). Another survey, by Lee and colleagues, found that higher confidence in generative AI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking (Lee et al., 2025). A systematic review by Zhai and colleagues concluded that over-reliance on AI, as learners and users increasingly favor fast and optimal solutions, negatively impacts cognitive abilities (Zhai et al., 2024).
Beyond mere plausibility, therefore, there is emerging evidence that consistent and prolonged use of AI eventually dulls the intellect and blunts creativity. There are two sides to this notion. On the one hand, using AI, a writer could canvass and present many more ideas that are new to her, and this might create a self-perceived as well as others-perceived illusion of heightened intellect and creativity. However, when many use AI pretrained on the same or similar corpora of data, novelty and creativity become somewhat normalized. Moreover, the AI user, anesthetized against the pain of knowledge acquisition by the ease of achievement through cognitive offloading to AI, relaxes her own intellectual edge and cognitive intensity. The net result would be cognitive dulling, if not impairment, over time. Interestingly, and perhaps ironically, Dayan and colleagues found, by subjecting LLMs to the Montreal Cognitive Assessment (MoCA) test of cognitive capacity, that almost all models showed signs of mild cognitive impairment and, anthropomorphically speaking, that the “older” chatbots performed worse (Dayan et al., 2024).
2.3. AI Reliance/Dependency
The ease of cognitive offloading to AI creates a psychological state of reliance on AI for one’s functioning in studies and/or work (Zhang et al., 2024), perhaps even more so for effort-intensive and time-consuming tasks like academic writing. There are two related aspects to this reliance: habitual and emotional. Habitual reliance is perhaps akin to the loss of the impetus and ability of some to effectively perform mental or pencil-and-paper arithmetic once electronic calculators became widely available. AI reliance or dependency in this regard can potentially impair learning and performance, and this impairment could be masked by an initial apparent enhancement of these educational parameters. For example, Bastani and colleagues found that while access to GPT-4 improved the performance of high school students in mathematics, students performed worse than those who never had access once such access was subsequently removed (Bastani et al., 2024).
In terms of academic writing, habitual AI dependency might not be an issue under “normal” working conditions in which AI tools are readily accessible. However, it might become a problem when conditions preclude such accessibility. One could easily imagine students going into a state of panic when needing to write essays or construct elaborate responses in examination settings without the aid of AI. In related cases, inaccessibility of AI tools under particular research conditions (such as fieldwork in places with poor signal coverage, or areas with information technology infrastructure devastated by natural disasters) may significantly reduce productivity and reporting quality for those suffering from AI dependency.
Human emotional dependency on AI is now widely documented in different contexts (Adam, 2025). AI users in academia or research could also become emotionally attached to or dependent on AI (Nature Machine Intelligence Editorial, 2025), so much so that the latter is valued as an intellectually equivalent partner or colleague. This is prominently manifested in the tendency of some authors to insist on calling an AI chatbot a collaborator, or on listing it as a coauthor. There may, however, be only a fine line between engaging AI as a writing partner and engaging it as a crutch. While the former might be productive and beneficial (Nguyen et al., 2024), the latter could prove crippling in certain situations. Imagine a creative author whose thoughts and ideas, through persistent interaction with AI, gradually become dependent on or entwined with the AI’s output, in what is perceived as a flourishing partnership. When such interaction is stalled, impeded, or otherwise broken under various conceivable circumstances, the author might be unable to pick up the pieces and return to her old self, and would thus be crippled in terms of intellectual competence and creativity without her AI partner.
3. Concluding Remarks
3.1. Conclusions
The use of AI can cause epistemic harm to individuals in general. Recent cases, including dire consequences of ill advice given by AI chatbots and facial misidentification in the United States, have prompted Nelson to suggest that the summer of 2025 will be remembered as AI’s cruel summer, when “… the unheeded risks and dangers of AI became undeniably clear” (Nelson, 2025). Beyond the possibility of AI chatbots fueling delusional thought in vulnerable individuals (Morrin et al., 2025), new epistemological issues and challenges arise with the increasingly widespread use of AI. Schneider (2025) has identified several such issues, including problems with digital privacy, misplaced epistemic trust in AI companions, the use of personality profiling and adaptive language, and the misattribution of sentience to AI. The author likened this to a “boiling frog” problem, in which gradual exposure dulls the sense of immediate danger, but the combination of issues ultimately results in detrimental engagement and diminished human autonomy (Schneider, 2025). Social downsides of AI use are thus emerging amidst the enthusiasm for embracing AI in our daily lives. The discussion above adds another aspect to such downsides, pertaining specifically to epistemic pitfalls in academic writing.
Although the discussion above focuses on academic writing for grades or publication, it is conceivable that other types of writing within the academic realm could suffer the same epistemic pitfalls when AI is used extensively. Take, for example, the crafting of ethics protocols for animal and human participant research for ethics committee approval. Compliance with one’s own written protocols requires one to truly understand not just the intricacies of the work procedures, but also the underlying risk–benefit assessments and ethical reasoning. Overdependence on AI in crafting such protocols could result in epistemic blindsight on the part of the researcher. The same would likely apply to the drafting of standard operating procedures and risk assessments in various academic settings, such as safety and health. Furthermore, although the discussion here focuses on epistemic rather than ethical aspects of AI in academic writing, it should be noted that recent studies have shown that offloading to AI can ultimately increase dishonest behavior (Köbis et al., 2025), which in the context of science and research includes untruthful or exaggerated reporting.
3.2. Recommendations
Are we being overly pessimistic here? After all, electronic calculators and computers have advanced, rather than stifled, industrial civilization as a whole and mathematics as a subject of intellectual pursuit. Would academic authors who adopt AI writing agents as part of their extended mind (Tang, 2025b) not become better and smarter overall as a result? There is an important caveat to this rosy notion. When the gadgets that are supposed to extend the human mind instead weaken or stifle its central biological component, because critical activities are offloaded from the mind to the gadgets, the benefits of mind extension diminish, and on balance the extension might even become detrimental to the core. Appropriate use of AI can benefit research, but it is critical for an individual to know when to offload to technology (Ferdman, 2025), and when not to. It is also important for academia to work toward preserving human intellect and creativity while integrating AI into our pursuit of knowledge.
With the above in mind, how should we move forward? Firstly, empirical studies pointing toward epistemic downsides of generative AI use are emerging (such as those cited above) but are still fairly limited in scope. More quality work should be conducted to better document and decipher the intricacies of all three aspects of the epistemic downsides of AI use (illusion of understanding, cognitive dulling, and AI dependency), some of which remain underexplored (for example, emotional AI dependency in academic writing). Such investigations would benefit from an interdisciplinary approach, combining expertise from fields such as neuropsychology, education, and AI cognition. Only with a fuller comprehension of these epistemic issues associated with AI use can we work toward antidotes and solutions. Secondly, academia must find ways of distinguishing AI writing from genuine products of human intellect. For the moment, I think (while others might disagree) it would be premature and careless to move away from disclosures or declarations of AI use in academic papers. If anything, such disclosures should be more elaborate, to better demarcate human from AI contributions (Tang, 2025a). Likewise, students should be taught to use AI in an epistemically responsible manner. Finally, while other, more upstream aspects of academic research (such as analytical depth and methodological power) are benefiting tremendously from the use of AI, it would appear paradoxical should academic writing alone continue to be hamstrung. We must find ways to properly synergize human intellect and AI efficiency in this critical phase of academic work.
Funding
This research received no external funding.
Data Availability Statement
No new data were created in this work.
Acknowledgments
The author is grateful to all reviewers for their insightful and constructive comments, which improved the manuscript.
Conflicts of Interest
The author declares no conflicts of interest.
References
- Adam, D. (2025). Supportive? Addictive? Abusive? How AI companions affect our mental health. Nature. Available online: https://www.nature.com/articles/d41586-025-01349-9 (accessed on 11 November 2025).
- Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Science Communications, 10, 311.
- AI for Science (AI4S). (2025). A data-driven look at AI’s transformative impact on the future of science. Nature. Available online: https://www.nature.com/articles/d42473-025-00164-0 (accessed on 15 October 2025).
- Alvarado, R. (2023). AI as an epistemic technology. Science and Engineering Ethics, 29, 32.
- Arar, K. H., Özen, H., Polat, G., & Turan, S. (2025). Artificial intelligence, generative artificial intelligence and research integrity: A hybrid systemic review. Smart Learning Environment, 12, 44.
- Bai, L., Liu, X., & Su, J. (2023). ChatGPT: The cognitive effects on learning and memory. Brain-X, 1, e30.
- Barcaui, A. (2025). ChatGPT as a cognitive crutch: Evidence from a randomized controlled trial on knowledge retention. Available online: https://ssrn.com/abstract=5353041 (accessed on 15 October 2025).
- Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024). Generative AI can harm learning. The Wharton School research paper. Available online: https://ssrn.com/abstract=4895486 (accessed on 15 October 2025).
- Casal, J. E., & Kessler, M. (2023). Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing. Research Methods in Applied Linguistics, 2(3), 100068.
- Conroy, G. (2024). Surge in number of ‘extremely productive’ authors concerns scientists. Nature, 625, 14–15. Available online: https://www.nature.com/articles/d41586-023-03865-y (accessed on 15 October 2025).
- Dayan, R., Uliel, B., & Koplewitz, G. (2024). Age against the machine—Susceptibility of large language models to cognitive impairment: Cross sectional analysis. British Medical Journal, 387, e081948.
- Dergaa, I., Ben Saad, H., Glenn, J. M., Amamou, B., Ben Aissa, M., Guelmami, N., Fekih-Romdhane, F., & Chamari, K. (2024). From tools to threats: A reflection on the impact of artificial-intelligence chatbots on cognitive health. Frontiers in Psychology, 15, 1259845.
- Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10, eadn5290.
- Ferdman, A. (2025). Practices worth preserving: Knowing when to offload to technology. Philosophy & Technology, 38, 100.
- Ferrario, A., Facchini, A., & Termine, A. (2024). Experts or authorities? The strange case of the presumed epistemic superiority of artificial intelligence systems. Minds & Machines, 34, 30.
- Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6.
- Gibney, E. (2025). Can researchers stop AI making up citations? Nature. Available online: https://www.nature.com/articles/d41586-025-02853-8 (accessed on 15 October 2025).
- Giglio, A. D., & da Costa, M. U. P. (2023). The use of artificial intelligence to improve the scientific writing of non-native English speakers. Revista da Associacao Medica Brasileira, 69(9), e20230560.
- Glynn, A. (2024). Suspected undeclared use of Artificial Intelligence in the academic literature: An analysis of the academ-AI dataset. arXiv, arXiv:2411.15218.
- Hauswald, R. (2025). Artificial epistemic authorities. Social Epistemology, 39, 716–725.
- Hosseini, M., Gordijn, B., Kaebnick, G. E., & Holmes, K. (2025). Disclosing generative AI use for writing assistance should be voluntary. Research Ethics, 21, 728–735.
- Humphreys, D. (2025). AI’s epistemic harm: Reinforcement learning, collective bias, and the new AI culture war. Philosophy & Technology, 38, 102.
- Jacobs, P. (2025). One-fifth of computer science papers may include AI content. Science. Available online: https://www.science.org/content/article/one-fifth-computer-science-papers-may-include-ai-content (accessed on 15 October 2025).
- Jose, B., Cherian, J., Verghis, A. M., Varghise, S. M., Mumthas, S., & Joseph, S. (2025). The cognitive paradox of AI in education: Between enhancement and erosion. Frontiers in Psychology, 16, 1550621.
- Kay, J., Kasirzadeh, A., & Mohamed, S. (2024). Epistemic injustice in generative AI. arXiv, arXiv:2408.11441v1. Available online: https://arxiv.org/html/2408.11441v1 (accessed on 15 October 2024).
- Kim, S. J. (2024). Research ethics and issues regarding the use of ChatGPT-like artificial intelligence platforms by authors and reviewers: A narrative review. Science Editing, 11(2), 96–106.
- Kobak, D., González-Márquez, R., Horvát, E. Á., & Lause, J. (2025). Delving into LLM-assisted writing in biomedical publications through excess vocabulary. Science Advances, 11(27), eadt3813.
- Kosmyna, N., Hauptmann, E., Tuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., Braustein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv, arXiv:2506.08872.
- Köbis, N., Rahwan, Z., Rilla, R., Supriyatno, B. I., Bersch, C., Ajaj, T., Bonnefon, J. F., & Rahwan, I. (2025). Delegation to artificial intelligence can increase dishonest behaviour. Nature, 646(8083), 126–134.
- Kwon, D. (2025a). Is it OK for AI to write science papers? Nature survey shows researchers are split. Nature, 641(8063), 574–578.
- Kwon, D. (2025b). Science sleuths flag hundreds of papers that use AI without disclosing it. Nature, 641(8062), 290–291.
- Lee, H., Kim, S., Chen, J., Patel, R., & Wang, T. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In CHI conference on human factors in computing systems (CHI ’25) (pp. 1–23). ACM Digital Library.
- Liang, W., Zhang, Y., Wu, Z., Lepp, H., Ji, W., Zhao, X., Cao, H., Liu, S., He, S., Huang, Z., Yang, D., Potts, C., Manning, C. D., & Zou, J. (2025). Quantifying large language model usage in scientific papers. Nature Human Behaviour.
- Maupin, D., Suchak, T., Barnett, A., & Spick, M. (2025). Dramatic increases in redundant publications in the Generative AI era. medRxiv, 2025.09.09.25335401.
- Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627, 49–58.
- Morrin, H., Nicholls, L., Levin, M., Yiend, J., Iyengar, U., DelGuidice, F., Bhattacharyya, S., MacCabe, J., Tognin, S., Twumasi, R., Alderson-Day, B., & Pollak, T. (2025). Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). PsyArXiv.
- Nadaff, M. (2025). AI tool detects LLM-generated text in research papers and peer reviews. Nature. Available online: https://www.nature.com/articles/d41586-025-02936-6 (accessed on 15 October 2025).
- Nature Machine Intelligence Editorial. (2025). Emotional risks of AI companions demand attention. Nature Machine Intelligence, 7, 981–982.
- Nelson, A. (2025). An ELSI for AI: Learning from genetics to govern algorithms. Science, 389(6765), eaeb0393.
- Nguyen, A., Hong, Y., Dang, B., & Huang, X. (2024). Human-AI collaboration patterns in AI-assisted academic writing. Studies in Higher Education, 49(5), 847–864.
- Prakash, A., Aggarwal, S., Varghese, J. J., & Varghese, J. J. (2025). Writing without borders: AI and cross-cultural convergence in academic writing quality. Humanities and Social Sciences Communications, 12, 1058.
- Prasad, D., Khandeshi, A., Sartin, S., Jain, R., Dahdaleh, N., Lesniak, M., Luo, Y., & Ahuja, C. (2025). Will AI become our co-PI? NPJ Digital Medicine, 8(1), 440.
- Rentier, E. S. (2025). To use or not to use: Exploring the ethical implications of using generative AI in academic writing. AI Ethics, 5, 3421–3425.
- Schneider, S. (2025). Chatbot epistemology. Social Epistemology, 39, 570–589.
- Spick, M., Onoja, A., Harrison, C., Stender, S., Byrne, J., & Geifman, N. (2025). Quantifying new threats to health and biomedical literature integrity from rapidly scaled publications and problematic research. medRxiv, 2025.07.07.25331008.
- Stahl, B. C., & Eke, D. (2024). The ethics of ChatGPT–Exploring the ethical issues of an emerging technology. International Journal of Information Management, 74, 102700.
- Strzelecki, A. (2024). ‘As of my last knowledge update’: How is content generated by ChatGPT infiltrating scientific papers published in premier journals? Learned Publishing, 38(1), e1650.
- Suchak, T., Aliu, A. E., Harrison, C., Zwiggelaar, R., Geifman, N., & Spick, M. (2025). Explosion of formulaic research articles, including inappropriate study designs and false discoveries, based on the NHANES US national health database. PLoS Biology, 23(5), e3003152.
- Sun, Y., Sheng, D., Zhou, Z., & Wu, Y. (2024). AI hallucination: Towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanities and Social Sciences Communications, 11, 1278.
- Tang, B. L. (2023). The underappreciated wrong of AIgiarism—Bypass plagiarism that risks propagation of erroneous and bias content. EXCLI Journal, 22, 907–910.
- Tang, B. L. (2025a). Undeclared AI-assisted academic writing as a form of research misconduct. Science Editor, 48.
- Tang, B. L. (2025b). Will widespread use of artificial intelligence tools in manuscript writing mark the end of human scholarship as we know it? Science Editing, 12(2), 231–233.
- Yin, S., Huang, S., Xue, P., Xu, Z., Lian, Z., Ye, C., Ma, S., Liu, M., Hu, Y., Lu, P., & Li, C. (2025). Generative artificial intelligence (GAI) usage guidelines for scholarly publishing: A cross-sectional study of medical journals. BMC Medicine, 23(1), 77.
- Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environment, 11, 28.
- Zhang, S., Zhao, X., Zhou, T., & Kim, J. H. (2024). Do you have AI dependency? The roles of academic self-efficacy, academic stress, and performance expectations on problematic AI usage behavior. International Journal of Educational Technology in Higher Education, 21, 34.