Review

Ethical Dilemmas in Using AI for Academic Writing and an Example Framework for Peer Review in Nephrology Academia: A Narrative Review

by Jing Miao 1, Charat Thongprayoon 1,*, Supawadee Suppadungsuk 1,2, Oscar A. Garcia Valencia 1, Fawad Qureshi 1 and Wisit Cheungpasitporn 1

1 Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
2 Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bang Phli 10540, Samut Prakan, Thailand
* Author to whom correspondence should be addressed.
Clin. Pract. 2024, 14(1), 89-105; https://doi.org/10.3390/clinpract14010008
Submission received: 15 November 2023 / Revised: 23 December 2023 / Accepted: 28 December 2023 / Published: 30 December 2023

Abstract:
The emergence of artificial intelligence (AI) has greatly propelled progress across various sectors, including the field of nephrology academia. However, this advancement has also given rise to ethical challenges, notably in scholarly writing. AI’s capacity to automate labor-intensive tasks like literature reviews and data analysis has created opportunities for unethical practices, with scholars incorporating AI-generated text into their manuscripts, potentially undermining academic integrity. This situation gives rise to a range of ethical dilemmas that not only question the authenticity of contemporary academic endeavors but also challenge the credibility of the peer-review process and the integrity of editorial oversight. Instances of this misconduct are highlighted, spanning from lesser-known journals to reputable ones, and even infiltrating graduate theses and grant applications. This subtle AI intrusion hints at a systemic vulnerability within the academic publishing domain, exacerbated by the publish-or-perish mentality. Solutions aimed at mitigating the unethical employment of AI in academia include the adoption of sophisticated AI-driven plagiarism detection systems, a robust augmentation of the peer-review process with an “AI scrutiny” phase, comprehensive training for academics on ethical AI usage, and the promotion of a culture of transparency that acknowledges AI’s role in research. This review underscores the pressing need for collaborative efforts among academic nephrology institutions to foster an environment of ethical AI application, thus preserving academic integrity in the face of rapid technological advancements. It also makes a plea for rigorous research to assess the extent of AI’s involvement in the academic literature, evaluate the effectiveness of AI-enhanced plagiarism detection tools, and understand the long-term consequences of AI utilization on academic integrity.
An example framework has been proposed to outline a comprehensive approach to integrating AI into nephrology academic writing and peer review. Using proactive initiatives and rigorous evaluations, a harmonious environment that harnesses AI’s capabilities while upholding stringent academic standards can be envisioned.

1. Introduction

Artificial intelligence (AI) is now a cornerstone of contemporary technological progress, fueling breakthroughs in a wide array of fields—from healthcare and finance to transportation and the arts—leading to enhanced efficiency and productivity [1]. In the medical realm, AI systems are poring over patient histories to forecast health outcomes [2], while in the financial world, they are dissecting market fluctuations to fine-tune investment approaches [3]. Self-driving vehicles are transforming how we think about transportation [4], and in the realm of entertainment, AI is the unseen curator of your music playlists and film queues [5]. The scope of AI’s reach is both vast and awe-inspiring, especially when considering the capabilities of generative AI large language models such as ChatGPT [6], Bard [7], Bing Chat [8], and Claude [9]. Generative AI refers to a subset of AI that generates content, including text and images, by utilizing natural language processing. OpenAI introduced ChatGPT, an AI chatbot employing natural language processing to emulate human conversation. Its latest iteration, GPT-4, possesses image analysis capabilities known as GPT-4 Vision [10]. Google’s Bard is another AI-driven chat tool utilizing natural language processing and machine learning to simulate human-like conversations [7]. Microsoft’s Bing Chat, integrated into Bing’s search engine, enables users to engage with an AI chatbot for search inquiries instead of typing queries. It operates on the same model as ChatGPT (GPT-4) from OpenAI [8]. Claude, developed by Anthropic, is yet another AI chatbot in the field, currently powered by a language model called Claude 2 [9].
Within academia, AI’s growing influence is reshaping traditional methodologies [11]. These AI tools, such as chatbots, are capable of providing personalized medical advice [12], disseminating educational materials and improving medical education [13,14,15], aiding in clinical decision-making processes [16,17,18], identifying medical emergencies [19], and providing empathetic responses to patient queries [20,21,22]. Specifically, in our nephrology-focused research, we have explored chatbot applications in critical care nephrology [23], kidney transplant care [24], renal diet support [25], nephrology literature searches [26], and answering nephrology-related questions [27]. Despite its potential, there are apprehensions about ChatGPT evolving into a “Weapon of Mass Deception”, emphasizing the necessity for rigorous assessments to mitigate inaccuracies [28]. The World Health Organization (WHO) has called for caution in using AI models to protect and promote healthcare, due to major concerns such as safety, effectiveness, and ethics [21,22,29,30]. The remarkable surge in ChatGPT’s presence within the medical literature, accumulating more than 1400 citations on PubMed by October 2023, highlights a pivotal moment in the merging of AI and healthcare. The increasing adoption of natural language processing models like ChatGPT in various forms of writing, including scientific and scholarly publications, presents a notable shift in the academic domain [31]. These tools offer the potential to streamline academic writing and the peer review process, enhancing efficiency significantly [32,33]. However, this trend is accompanied by several critical concerns. Key among these are the issues of accuracy, bias, relevance, and the reasoning capabilities of these models. Additionally, there is growing apprehension regarding the impact these tools might have on the authenticity and credibility of academic work, resulting in ethical and societal dilemmas [34,35].
The integration of chatbots and similar technologies in academic settings, therefore, necessitates a careful and thorough examination to address these challenges effectively.
In the field of nephrology, the possibility that chatbots, whether deliberately or inadvertently, might generate incorrect references or introduce errors threatens the reliability of the medical literature [26]. Similarly, a study assessing the capability of ChatGPT to summarize possible mechanisms of acute kidney injury in patients with coronavirus disease 2019 (COVID-19), with references, found that hallucination is the most significant drawback of ChatGPT [36,37]. In addition, a prospective cross-sectional global survey in urology showed that among 456 urologists, almost half (48%) use ChatGPT or other large language models for medical research, with fewer (20%) using the technology in patient care, and more than half (62%) thinking there are potential ethical concerns when using ChatGPT for scientific or academic writing [38]. Practices that compromise academic integrity or disseminate misleading or false information could significantly affect patient care and the overall comprehension of scientific principles. This scenario underscores the need for vigilant assessment and regulation in the academic and peer review processes to uphold the standards of scholarly work.
This review highlights the importance of collaborative efforts among nephrology academic stakeholders to cultivate an ethical AI environment, safeguarding the integrity of scholarly discourse in the face of fast-paced technological progress. It promotes extensive research to gauge AI’s presence in the academic literature, assess the effectiveness of AI-powered plagiarism detection tools, and gain insights into the lasting effects of AI integration on academic integrity. By actively engaging in these initiatives and conducting thorough assessments, we can strive for a harmonious coexistence with AI while upholding the highest standards of academic excellence.

2. AI’s Unethical Role in Scholarly Writing

The transformative impact of AI on various sectors is well documented, and academia is no exception [39,40,41]. While AI has been praised for its ability to expedite research by sifting through massive datasets and running complex simulations, its foray into the realm of academic writing is sparking debate. AI large language model tools like ChatGPT offer tantalizing possibilities: automating literature reviews, suggesting appropriate research methods, and even assisting in the composition of scholarly articles [42]. Ideally, these advancements could liberate researchers to concentrate on groundbreaking ideas and intricate problem-solving. Yet, the reality diverges sharply from this optimistic scenario (Figure 1).
Recent discoveries have unveiled a more troubling aspect of AI’s role in academic writing [42,43,44,45]. Scholars have been caught red-handed, incorporating verbatim text from AI language models into their peer-reviewed articles. Each of these AI tools brings something different to the table: ChatGPT excels in natural language processing, Bard AI is adept at crafting academic prose, Bing Chat is designed for conversational engagement, and Claude AI can distill complex documents into summaries. Despite their potential for good, these tools have been exploited in ways that erode the bedrock of academic integrity. This malpractice has been detected across a spectrum of journals, from lesser-known outlets to those with substantial academic influence [22,46].
The ethical concerns surrounding this issue are multifaceted and deeply disquieting. Firstly, it casts a pall over the very core of academic integrity and the esteemed peer-review process. When scholars are willing to present machine-generated text as their own work, it raises doubts about the genuineness and caliber of contemporary academic pursuits. Secondly, it erodes the credibility of coauthors, editors, and reviewers who are entrusted with upholding scholarly rigor. How did these articles manage to evade detection at the various checkpoints designed to safeguard quality? The answer might lie in systemic weaknesses within the academic publishing landscape, where the imperative to publish at any cost may be compromising scholarly excellence. Moreover, this problem extends beyond academic articles alone. There is evidence to suggest that even grant applications, vital for securing research funding, have been tainted by AI-generated content. This disconcerting revelation raises profound questions about the allocation of research funds and the overarching integrity of academic research.
The recent guidelines issued by the World Association of Medical Editors (WAME) place strong emphasis on the position that, from both an ethical and a legal standpoint, AI chatbots should not be recognized as coauthors of manuscripts in the scientific literature [47]. This not only underscores the pressing need for standardized reporting and the implementation of checklists for the utilization of AI tools in medical research, but also advocates for meticulous disclosure of pertinent information about the AI tool employed, including its name, version, and specific prompts. Such transparency is pivotal to upholding the credibility and trustworthiness of AI-assisted academic writing. On the other hand, it has also been recognized that ChatGPT and other AI language models hold the potential to function as personal assistants for journal editors and reviewers [28]. By automating certain repetitive tasks, these AI tools could enhance and streamline their workflow, thereby potentially optimizing the review process. However, it is important to acknowledge that further research and guidance are essential in this domain.
Numerous studies have highlighted that ChatGPT, while proficient in various tasks, shows limitations when dealing with scientific and mathematical concepts that require advanced cognitive skills. This becomes particularly noticeable in tasks demanding deep understanding and complex problem-solving abilities [48,49,50,51]. Nephrology, distinct from other medical specialties, primarily focuses on diagnosing and treating kidney diseases, including chronic kidney disease, acute renal failure, hypertension, and electrolyte imbalances. It sits at the unique intersection of fluid, electrolyte, and acid–base balance, all fundamental to overall body homeostasis. Long-term care of chronic conditions in nephrology demands deep knowledge of kidney physiology, pathology, immunology, and sometimes oncology and pharmacology. Given its complexity, especially in areas like electrolyte and acid–base disorders requiring intricate calculations, the application of AI models like ChatGPT in nephrology poses significant challenges. These include nuanced interpretations and subtle calculations, making AI integration in nephrology academic writing more complex than in other specialties.

2.1. Examples of Academic Papers That Have Used AI-Generated Content, Focusing on ChatGPT-Based Chatbots

In a blinded, randomized, noninferiority controlled study, GPT-4 was found to be equal to humans in writing introductions regarding publishability, readability, and content quality [52]. An article using GPT-3 to write a review on “effects of sleep deprivation on cognitive function” demonstrated ChatGPT’s adherence to ICMJE co-authorship criteria, including conception, drafting, and accountability [53]. However, it revealed challenges with accurate referencing. Another paper had GPT-3 generate content on Rapamycin and Pascal’s wager, effectively summarizing benefits and risks and advising healthcare consultation, listing ChatGPT as first author [54]. A further example, testing ChatGPT’s capability to draft a scholarly manuscript introduction and expand it with references, showed promising outcomes. However, it became evident that all references generated by the AI were fictitious. This underscores the limitation of relying solely on ChatGPT for medical writing tasks, particularly in contexts where accurate and real references are critical [55].
In nephrology, there are currently only a small number of published papers featuring AI-generated content. However, this is still concerning, as it poses questions about the integrity of academic publications. Our prior study employed ChatGPT for a conclusion in the study “Assessing the Accuracy of ChatGPT on Core Questions in Glomerular Disease” [56]. A letter to the editor suggested that academic journals should clarify the proportion of AI language model-generated content in papers, and that excessive use should be considered academic misconduct [57]. Many scientists disapprove of ChatGPT being listed as an author on research papers [58,59]. Recently, however, some science journals have overturned their bans on ChatGPT-assisted papers; the publishing group of the American Association for the Advancement of Science (AAAS) now allows authors to incorporate AI-written text and figures into papers if the use of the technology is acknowledged and explained [60]. Similarly, the WAME Recommendations on ChatGPT and Chatbots in Scholarly Publications were updated due to the rapid increase in chatbot usage in scholarly publishing and concerns about content authenticity. These revised recommendations guide authors and reviewers on appropriately attributing chatbot use in their work. They also stress the necessity for journal editors to have tools for manuscript screening to ensure content integrity [61]. Although ChatGPT’s language generation skills are remarkable, it is important to use it as a supplementary tool rather than a substitute for human expertise, especially in medical writing. Caution and verification are essential when employing AI in such contexts to ensure accuracy and reliability. We should proactively learn about the capabilities, constraints, and possible future developments of these AI tools [62].

2.2. Systemic Failures: The Root of the Problem

Such lapses in oversight raise critical questions about the efficacy of the peer-review system, which is intended to serve as a multilayered defense for maintaining academic integrity. The first layer that failed was the coauthors, who apparently did not catch the AI-generated content. The second layer was the editorial oversight, which should have flagged the issue before the paper was even sent for peer review. Currently, numerous AI solutions, such as GPTZero, Turnitin AI detection, and AI Detector Pro, have been created for students, research mentors, educators, journal editors, and others to identify texts produced by ChatGPT, though the majority of these tools operate on a subscription model [44]. The third layer was the peer-review process itself, intended to be a stringent evaluation of a paper’s merit and originality. A study showed that ChatGPT has the potential to generate human-quality text [63], which raises concerns about the ability to determine whether research was written by a human or an AI tool. As ChatGPT and other language models continue to improve, distinguishing between AI-generated and human-written text is likely to become increasingly difficult [64]. A study of 72 experienced reviewers of applied linguistics research article manuscripts showed that only 39% were able to distinguish between AI-produced and human-written texts; the top four rationales used by reviewers were a text’s continuity and coherence, specificity or vagueness of details, familiarity and voice, and writing quality at the sentence level [65]. Additionally, the accuracy of identification varied depending on the specific texts examined [65]. The fourth layer was the revision phase, where the paper should have been corrected based on reviewers’ feedback, yet the AI-generated text remained. The fifth and final layer was the proofing stage, where the paper should have undergone a last round of checks before being published.
These lapses serve as instructive case studies, spotlighting the deficiencies in the current peer-review system. The breakdown at these various checkpoints suggests that there are underlying systemic problems that risk undermining the quality and integrity of scholarly work.

2.3. The Infiltration of AI in Academic Theses

The problem of AI-generated content is not limited to scholarly articles; it has also infiltrated graduate-level theses. A survey conducted by Intelligent revealed that nearly 30% of college students have used ChatGPT to complete a written assignment, and although 75% considered it a form of cheating, they continue to use it for academic writing [66]. For example, a master’s thesis from the Department of Letters and English Language displayed unmistakable signs of AI-generated text [67]. The thesis, focused on Arab American literary characters and titled “The Reality of Contemporary Arab-American Literary Character and the Idea of the Third Space Female Character Analysis of Abu Jaber Novel Arabian Jazz”, included several phrases commonly produced by AI language models like ChatGPT. Among these were disclaimers such as “I apologize, but as an AI language model, I am unable to rewrite any text without having the original text to work with”. The presence of such language in a master’s thesis is a concerning sign that AI-generated content is seeping into even the most rigorous levels of academic scholarship. Dr. Jayachandran, a writing instructor, published a book titled “ChatGPT Guide to Scientific Thesis Writing”. This comprehensive guide offers expert guidance on crafting the perfect abstract, selecting an impactful title, conducting comprehensive literature reviews, and constructing compelling research chapters for undergraduate, postgraduate, and doctoral students [68]. This situation calls into question the effectiveness of existing safeguards for maintaining academic integrity within educational institutions. While there is no research indicating the extent of AI tool usage in nephrology-related academic theses, the increasing application of these tools in this field is noteworthy.

2.4. The Impact on Grant Applications

The issue of using AI-generated content is not limited to just academic papers and theses; it is also infiltrating the grant application process. A recent article [69] in The Guardian highlighted that some assessor reports were crafted with the help of ChatGPT. One academic even found the term “regenerate response” in their assessor reports, which is a feature specific to the ChatGPT interface. A Nature survey of over 1600 researchers worldwide revealed that more than 25% use AI to assist with manuscript writing and more than 15% use the technology to aid in grant proposal writing [70]. The use of ChatGPT in grant proposal writing has not only significantly reduced the workload but has also produced outstanding results, suggesting that the grant application process itself may be flawed [71]. This also raises concerns that peer reviewers, who play a crucial role in allocating research funds, might not be diligently reviewing the applications they are tasked with assessing. The ramifications of this oversight are significant, with the potential for misallocation of crucial research funding. This issue is exacerbated by the high levels of stress and substantial workloads that academics routinely face. Researchers are often tasked with reviewing a considerable number of lengthy grant proposals, in addition to fulfilling their regular academic duties such as publishing, peer reviewing, and administrative responsibilities. Given the enormity of these pressures, it becomes more understandable why some might resort to shortcuts like using AI-generated content to cope with their responsibilities. At present, the degree to which AI tools are employed in nephrology grant applications is unclear, yet given the rapid rise in AI adoption, attention should be drawn to this area.

2.5. The Inevitability of AI in Academia

The incorporation of AI into academic endeavors is not just a possibility; it is an unavoidable progression [72]. As we approach this transformative juncture, it becomes imperative for universities, publishers, and other academic service providers to give due consideration to AI tools. This entails comprehending their capabilities, recognizing their limitations, and being mindful of the ethical considerations tied to their utilization [73]. Rather than debating whether AI should be used, the primary focus should revolve around how it can be harnessed responsibly and effectively [74]. To ensure that AI acts as a supportive asset rather than an impediment to academic integrity, it is essential to establish clear guidelines and ethical parameters. For example, AI could be deployed to automate initial phases of literature reviews or data analysis, tasks that are often time-consuming but may not necessarily require human creativity [26,68]. However, it is crucial that the use of AI remains transparent, and any content generated using AI should be distinctly marked as such to uphold the integrity of the academic record. The key lies in striking a balance that permits the ethical and efficient application of AI in academia. This involves formulating policies and processes that facilitate academics’ use of AI tools while simultaneously ensuring that these tools are employed in a manner that upholds the stringent standards of academic work. By doing so, we can leverage the potential of AI to propel research and scholarship forward, all while preserving the quality and integrity that constitute the cornerstones of academia.

2.6. Proposed Solutions and Policy Recommendations

  • Advanced AI-driven plagiarism detection: AI-generated content often surpasses the detection capabilities of conventional plagiarism checkers. Implementing next-level, AI-driven plagiarism detection technologies could significantly alter this landscape. Such technologies should be designed to discern the subtle characteristics and structures unique to AI-generated text, facilitating its identification during the review phases. A recent study compared Japanese stylometric features of texts generated using ChatGPT (GPT-3.5 and GPT-4) and those written by humans, and verified the classification performance of a random forest classifier for the two classes [75]. The results showed that a random forest classifier focusing on the rate of function words achieved 98.1% accuracy, and one using all stylometric features reached 100% on all performance indices, including accuracy, recall, precision, and F1 score [75].
  • Revisiting and strengthening the peer-review process: The integrity of academic work hinges on a robust peer-review system, which has shown vulnerabilities in detecting AI-generated content. A viable solution could be the mandatory inclusion of an “AI scrutiny” phase within the peer-review workflow. This would equip reviewers with specialized tools for detecting AI-generated content. Furthermore, academic journals could deploy AI algorithms to preliminarily screen submissions for AI-generated material before they reach human evaluators.
  • Training and resources for academics on ethical AI usage: While academics excel in their specialized domains, they may lack awareness of the ethical dimensions of AI application in research. Educational institutions and scholarly organizations should develop and offer training modules that focus on the ethical and responsible deployment of AI in academic endeavors. These could range from using AI in data analytics and literature surveys to crafting academic papers. In this era of significant advancements, we must recognize and embrace the potential of chatbots in education while simultaneously emphasizing the necessity for ethical guidelines governing their use. Chatbots offer a plethora of benefits, such as providing personalized instruction, facilitating 24/7 access to support, and fostering engagement and motivation. However, it is crucial to ensure that they are used in a manner that aligns with educational values and promotes responsible learning [76]. In an effort to uphold academic integrity, the New York City Department of Education implemented a comprehensive ban on the use of AI tools on its network and devices [77]. Similarly, the International Conference on Machine Learning (ICML) prohibited authors from submitting scientific writing generated by AI tools [78]. Furthermore, many scientists disapprove of ChatGPT being listed as an author on research papers [58].
  • Acknowledgment for AI as contributor: The use of ChatGPT as an author of academic papers is a controversial issue that raises important questions about accountability and contributorship [79]. On the one hand, ChatGPT can be a valuable tool for assisting with the writing process. It can help to generate ideas, organize thoughts, and produce clear and concise prose. However, ChatGPT is not a human author. It cannot understand the nuances of human language or the complexities of academic discourse. As a result, ChatGPT-generated text can often be superficial and lacking in originality. In addition, the use of ChatGPT raises concerns about accountability. Who is responsible for the content of a paper that is written using ChatGPT? Is it the human user who prompts the chatbot, or is it the chatbot itself? If a paper is found to be flawed or misleading, who can be held accountable? The issue of contributorship is also relevant. If a paper is written using ChatGPT, who should be listed as the author? Should the human user be listed as the sole author, or should ChatGPT be given some form of credit? Therefore, promoting a culture of transparency and safeguarding the integrity of academic work necessitates the acknowledgment of AI’s contribution in research and composition endeavors. It is crucial for authors to openly disclose the degree of AI assistance in a specially designated acknowledgment section within the publication. This acknowledgment should specify the particular roles played by AI, whether in data analysis, literature reviews, or drafting segments of the manuscript, alongside any human oversight exerted to ensure ethical deployment of AI. For example: “Acknowledgment: We hereby recognize the aid of [Specific AI Tool/Technology] in carrying out data analytics, conducting literature surveys, and drafting initial versions of the manuscript. 
This AI technology enabled a more streamlined research process, under the careful supervision of [Names of Individuals] to comply with ethical guidelines. The perspectives generated by AI significantly contributed to the articulation of arguments in this publication, affirming its valuable input to our work”.
  • Inevitability of Technological Integration: While recognizing ethical concerns, the argument asserts that the adoption of advanced technologies such as AI in academia is inevitable. It recommends shifting the focus from resistance to the establishment of robust ethical frameworks and guidelines to ensure responsible AI usage [76]. From this perspective, taking a proactive stance on AI integration, firmly rooted in ethical principles, can facilitate the utilization of AI’s advantages in academia while mitigating the associated risks of unethical AI use. By fostering a culture of transparency, accountability, and continuous learning, there is a belief that the academic community can navigate the complexities of AI. This includes crafting policies that clearly define the ethical use of AI tools, creating mechanisms for disclosing AI assistance in academic work, and promoting collaborative efforts to explore and comprehend the implications of AI in academic writing and research.
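The stylometric random-forest approach cited in the first bullet above can be made concrete with a small sketch. Everything here is invented for illustration: the English function-word list, the four toy texts, and their labels are placeholders (the cited study used Japanese stylometric features and a substantial corpus), so this shows the shape of the technique rather than reproducing the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative English function words; the cited study used Japanese
# stylometric features, so this list is only a stand-in.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "is", "that", "it", "as", "for"]

def function_word_rates(text):
    """Rate of each function word relative to total word count."""
    words = text.lower().split()
    total = max(len(words), 1)
    return [words.count(w) / total for w in FUNCTION_WORDS]

# Toy corpus with invented labels: 1 = AI-generated, 0 = human-written.
corpus = [
    ("It is important to note that the model is designed to assist users.", 1),
    ("As an AI language model, it is essential to consider the context of the request.", 1),
    ("We grabbed coffee and argued about the draft for an hour.", 0),
    ("My advisor scribbled notes in the margins of every single page.", 0),
]
X = np.array([function_word_rates(text) for text, _ in corpus])
y = np.array([label for _, label in corpus])

# Fit a random forest on the function-word-rate features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Score a new (toy) passage; the output is a label, not a verdict.
sample = function_word_rates("It is essential to note that the approach is robust.")
prediction = int(clf.predict([sample])[0])
print(prediction)
```

In practice, such a classifier is only as reliable as the corpus it is trained on, which is one reason detection accuracy varies across text types and why the headline figures above should not be assumed to transfer to other languages or genres.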

3. Ideal Proposal for AI Integration in Nephrology Academic Writing and Peer Review

Nephrology is a rapidly evolving field, and AI integration has the potential to significantly advance research and scholarship. Nevertheless, as highlighted in previous discussions about ethical dilemmas [80], there is an urgent need to develop a framework to ensure responsible AI utilization, transparency, and academic integrity in nephrology and related fields. This proposed framework outlines a comprehensive approach to integrating AI into nephrology academic writing and peer review, drawing on the expertise of leading nephrologists (Table 1).

3.1. Transparent AI Assistance Acknowledgment

In the realm of nephrology research, it is essential that authors openly recognize the utilization of AI tools [56]. This recognition should find a dedicated space within their publications, shedding light on the specific roles that AI plays in data analysis, literature reviews, or manuscript drafting. As an example, consider a nephrology research paper that acknowledges AI’s involvement like this: “We extend our gratitude to [Specific AI Tool/Technology] for its contributions in data analysis and literature reviews. AI-driven insights were seamlessly integrated into our research, guided by the expertise of distinguished nephrologists [Names of Nephrologists]”.

3.2. Enhanced Peer Review Process with AI Scrutiny

To preserve academic rigor and uphold integrity, it is advisable for nephrology journals to integrate an “AI evaluation” stage into their peer-review process. Peer reviewers should be well-informed about the potential influence of AI on the manuscripts under their review and should be equipped to recognize AI-generated text. This phase, therefore, should incorporate nephrology experts with a deep understanding of AI applications. These experts can assess the incorporation of AI-generated content, verifying its adherence to established standards and ethical guidelines in nephrology research.
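As a purely illustrative sketch of how such an “AI evaluation” stage might be triaged, the following Python fragment routes submissions to expert scrutiny based on disclosed AI assistance or a detector score; the field names, the `ai_likelihood` score, and the threshold are hypothetical assumptions, not an existing editorial system or detection tool.

```python
# Hypothetical triage rule for the proposed "AI evaluation" stage.
# The ai_likelihood score stands in for the output of whichever
# AI-text detector a journal chooses to adopt.

def needs_ai_scrutiny(disclosed_ai_use: bool, ai_likelihood: float,
                      threshold: float = 0.5) -> bool:
    """Flag a manuscript for expert AI scrutiny.

    A manuscript enters the AI-scrutiny phase if the authors disclosed
    AI assistance, or if an automated detector estimates a likelihood of
    AI-generated text at or above the journal's chosen threshold.
    """
    return disclosed_ai_use or ai_likelihood >= threshold

# Triage of three invented submissions.
submissions = [
    {"id": "MS-001", "disclosed": True,  "score": 0.10},
    {"id": "MS-002", "disclosed": False, "score": 0.82},
    {"id": "MS-003", "disclosed": False, "score": 0.05},
]
flagged = [s["id"] for s in submissions
           if needs_ai_scrutiny(s["disclosed"], s["score"])]
print(flagged)  # MS-001 (disclosed use) and MS-002 (high detector score)
```

In this sketch, disclosure alone is sufficient to trigger expert review, reflecting the principle that acknowledged AI assistance should always be examined rather than penalized.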

3.3. AI Ethics Training for Nephrologists

Specialized training in the ethical use of AI tools should be provided to nephrologists and their fellow researchers. This curriculum should encompass key subjects, including the potential advantages and pitfalls of AI in nephrology research, techniques to recognize and mitigate biases in AI tools, and methods to ensure transparency and accountability in AI-driven research. These educational programs can be delivered through workshops, webinars, and online courses. Nephrology experts are uniquely positioned to enlighten their colleagues about the responsible application of AI, preserving AI's value in nephrology research. Moreover, we stress the significance of fostering collaboration between nephrologists and AI specialists. Through this joint effort, we can create and implement AI tools that are not only ethical but also effective and advantageous to the nephrology field. Collaborative training initiatives with AI experts can also offer a comprehensive understanding of AI's capabilities and limitations.

3.4. AI as a Collaborative Contributor

Nephrology experts should advocate for a collaborative culture that recognizes AI as a valuable research partner [24]. AI's proficiency in data analysis, pattern recognition, and literature reviews can free nephrologists to pursue novel research questions and clinical applications. For example, AI can be employed to analyze extensive patient datasets, uncovering trends and patterns that would be difficult or impossible for nephrologists to identify on their own [81]. AI can be used to craft innovative diagnostic tools and algorithms, enabling nephrologists to enhance the precision and efficiency of kidney disease diagnosis and monitoring. Additionally, AI holds the potential to inform new therapeutic strategies for kidney disease, encompassing personalized treatment plans and the discovery of new drugs. Publications resulting from these collaborations should emphasize the synergistic relationship between AI and nephrologist expertise, demonstrating how AI-generated insights enhance the nephrology field.
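To make the kind of unsupervised pattern discovery described above concrete, the following minimal sketch groups synthetic eGFR values (mL/min/1.73 m2) into two clusters with a simple k-means loop. The values, cluster count, and starting centers are invented for illustration only; real analyses, such as the consensus clustering of transplant recipients in [81], involve far richer features and rigorous validation.

```python
# Illustrative only: a tiny 1-D k-means (Lloyd's algorithm) on synthetic
# eGFR values, separating preserved from reduced kidney function.

def kmeans_1d(values, centers, iterations=10):
    """Repeatedly assign each value to its nearest center, then recompute
    each center as the mean of its assigned values."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)),
                          key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

egfr = [95, 88, 102, 90, 28, 35, 22, 30]   # synthetic patient values
centers, clusters = kmeans_1d(egfr, centers=[50.0, 60.0])
print(sorted(centers))  # one low-eGFR center, one near-normal center
```

The point of the sketch is the workflow, not the algorithm: the clustering step surfaces candidate patient subgroups, which nephrologists then interpret clinically, mirroring the human-AI collaboration advocated above.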

3.5. Continuous Monitoring and Research

Nephrologists should play a leading role in continuously evaluating the impact of AI on nephrology research. This requires implementing long-term studies to track changing perceptions, the emergence of AI-focused research trends, and their implications for the quality and integrity of nephrology publications. Surveys and interviews with nephrologists can gauge their perspectives on AI, their current use of AI in research, and their expectations for AI's future role in nephrology. Moreover, an analysis of the nephrology literature can be undertaken to pinpoint developing trends in AI-centric research and appraise AI's influence on the caliber and credibility of nephrology publications. Additionally, experts in nephrology can provide valuable insights in studies evaluating the efficacy of AI-enhanced plagiarism detection tools tailored specifically to the nephrology literature, ensuring their alignment with the distinct features of the field.
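The literature-trend component of such monitoring can be illustrated with a minimal sketch that counts AI-related publications per year from a bibliographic extract. The records below are synthetic; an actual analysis would operate on PubMed or Scopus exports and a validated definition of "AI-related".

```python
# Illustrative sketch: tally AI-related publications per year from
# synthetic (year, mentions_ai) metadata rows.
from collections import Counter

records = [
    (2021, False), (2022, True), (2022, False),
    (2023, True), (2023, True), (2023, True),
]
trend = Counter(year for year, mentions_ai in records if mentions_ai)
print(sorted(trend.items()))  # [(2022, 1), (2023, 3)]
```

Even this trivial tally shows the shape of the intended long-term studies: a per-year signal that can be compared against quality and integrity indicators for the same publication cohorts.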

3.6. Ethics Checklist

Recently, the CANGARU (ChatGPT, Generative Artificial Intelligence and Natural Large Language Models for Accountable Reporting and Use) Guidelines have been proposed as a comprehensive framework for ensuring ethical standards in AI research [82]. The Ethics Checklist derived from these guidelines serves as a preemptive step in the AI integration process: incorporating it into manuscript submissions ensures that ethical dimensions are considered early and systematically, significantly reducing the risk of ethical dilemmas arising in later stages of research. Effective implementation and review of this checklist (Table 2) depend on collaboration among authors, journal editors, and ethicists, thereby fostering responsible AI utilization in nephrology. A vital metric for tracking progress in this domain is the number of manuscripts assessed for ethical adherence, demonstrating a resolute commitment to transparency and research integrity.
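As an illustration of how an editorial office might operationalize such screening, the following sketch checks a submission form against a handful of required checklist items. The field names are hypothetical; a journal would map them to its own submission system and to the full checklist in Table 2.

```python
# Hypothetical screening of a submission form against selected
# Ethics Checklist items (field names invented for illustration).

REQUIRED_WHEN_AI_USED = [
    "acknowledgment_section",   # transparency: AI's role acknowledged
    "tool_named",               # AI tool/technology and version identified
    "human_oversight",          # expert supervision of AI contributions
    "limitations_discussed",    # integrity: AI limitations addressed
]

def screen_submission(form: dict) -> list:
    """Return the missing checklist items (an empty list means the
    submission passes preliminary ethical screening)."""
    if not form.get("ai_involved", False):
        return []  # checklist does not apply when no AI was used
    return [item for item in REQUIRED_WHEN_AI_USED if not form.get(item)]

submission = {"ai_involved": True, "acknowledgment_section": True,
              "tool_named": True, "human_oversight": False,
              "limitations_discussed": True}
print(screen_submission(submission))  # -> ['human_oversight']
```

Counting how many submissions pass or fail such a screen would directly yield the success metric proposed above: the number of manuscripts assessed for ethical adherence.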

4. Future Studies and Research Directions

The significance of conducting a thorough analysis to grasp the extent of AI's presence in academic writing cannot be overstated. There is an immediate need to quantify the prevalence and influence of AI in the scholarly literature, thereby offering a clear picture of the current landscape. An exhaustive exploration spanning various academic disciplines and levels of scholarship could yield valuable insights into the ubiquity of AI-generated content in academic discourse. Such research can reveal the diverse applications of AI, pinpoint commonly used AI tools, and gauge the transparency with which they are employed. Moreover, it may spotlight academic domains where AI plays a substantial role, signaling areas demanding prompt attention.
Conventional plagiarism detection tools may struggle to recognize AI-generated content because of the advanced capabilities of contemporary AI writing assistants. Consequently, there is an urgent need to appraise the efficacy of AI-enhanced plagiarism detection technologies for identifying AI-generated text. These evaluations could provide a deeper understanding of the capabilities and limitations of these advanced tools and their potential integration into existing plagiarism detection and academic evaluation frameworks. Furthermore, the insights gleaned from these inquiries could inform the development of more robust, AI-focused plagiarism detection systems capable of adapting to evolving AI writing techniques.
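A minimal sketch of the core of such an evaluation is shown below: computing precision and recall for a hypothetical AI-text detector against documents of known provenance. The labels and predictions are synthetic; real evaluations would use large, curated corpora of human- and AI-written academic text.

```python
# Illustrative evaluation of a hypothetical AI-text detector.

def precision_recall(y_true, y_pred):
    """Precision and recall for the positive class (1 = AI-generated)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Synthetic ground truth vs. detector verdicts (1 = AI, 0 = human).
truth     = [1, 1, 1, 1, 0, 0, 0, 0]
predicted = [1, 1, 0, 1, 0, 1, 0, 0]
p, r = precision_recall(truth, predicted)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.75, recall=0.75
```

In an editorial setting the two errors carry asymmetric costs: a false positive wrongly casts suspicion on a human author, whereas a false negative lets undisclosed AI text through, so both metrics need to be reported rather than a single accuracy figure.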
To comprehend the long-term ramifications of AI utilization in academic work, it is imperative to undertake longitudinal studies that track changes over time. These investigations could delve into shifts in attitudes toward AI, the evolution of AI-related plagiarism, and its impact on the caliber and authenticity of scholarly work. They may also shed light on how the integration of AI into the academic literature influences the reliability of scholarly publications, the peer-review process, and the broader academic community (Figure 2).

5. Conclusions

The extensive utilization of AI-generated content in academic papers underscores profound issues deeply ingrained within the academic realm. These issues manifest in various ways, including the relentless pressure to publish, shortcomings in peer-review procedures, and an absence of effective safeguards against AI-driven plagiarism. The failure to detect and rectify AI-authored material during the evaluation process erodes the fundamental integrity of scholarly work. Furthermore, the inappropriate deployment of AI technology jeopardizes the rigorous ethical standards maintained by the academic community.
Resolving this challenge necessitates collaborative efforts from all stakeholders in academia. Educational institutions, academic journals, and researchers collectively bear the responsibility to combat unethical AI usage in scholarly publications. Potential solutions encompass fostering an environment characterized by transparency and the ethical use of AI, enhancing peer-review systems with technology tailored to identify AI-generated plagiarism, and advocating for higher ethical standards throughout the academic community. Additionally, the provision of clear guidelines for the responsible use of AI tools and the education of scholars about AI ethics are indispensable measures. Through proactive initiatives, we can navigate the intricate interplay between AI technology and academic integrity, ensuring the preservation of the latter even in the face of technological advancements.

Author Contributions

Conceptualization, J.M. and W.C.; methodology, J.M. and C.T.; validation, J.M., C.T., F.Q. and W.C.; investigation, J.M. and W.C.; resources, J.M.; data curation, J.M., C.T., S.S., O.A.G.V., F.Q. and W.C.; writing—original draft preparation, J.M., C.T., S.S., O.A.G.V., F.Q. and W.C.; writing—review and editing, J.M., C.T., S.S., O.A.G.V., F.Q. and W.C.; visualization, F.Q. and W.C.; supervision, W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data supporting this study are available in the original publications, reports, and preprints cited in the reference list.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Moshawrab, M.; Adda, M.; Bouzouane, A.; Ibrahim, H.; Raad, A. Reviewing Federated Machine Learning and Its Use in Diseases Prediction. Sensors 2023, 23, 2112.
  2. Rojas, J.C.; Teran, M.; Umscheid, C.A. Clinician Trust in Artificial Intelligence: What is Known and How Trust Can Be Facilitated. Crit. Care Clin. 2023, 39, 769–782.
  3. Boukherouaa, E.B.; Shabsigh, M.G.; AlAjmi, K.; Deodoro, J.; Farias, A.; Iskender, E.S.; Mirestean, M.A.T.; Ravikumar, R. Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance; International Monetary Fund (IMF eLIBRARY): Washington, DC, USA, 2021; Volume 2021, pp. 5–20.
  4. Gülen, K. A Match Made in Transportation Heaven: AI and Self-Driving Cars. Available online: https://dataconomy.com/2022/12/28/artificial-intelligence-and-self-driving/ (accessed on 29 December 2022).
  5. Frąckiewicz, M. The Future of AI in Entertainment. Available online: https://ts2.space/en/the-future-of-ai-in-entertainment/ (accessed on 24 June 2023).
  6. Introducing ChatGPT. Available online: https://openai.com/blog/chatgpt (accessed on 18 April 2023).
  7. Bard. Available online: https://bard.google.com/chat (accessed on 21 March 2023).
  8. Bing Chat with GPT-4. Available online: https://www.microsoft.com/en-us/bing?form=MA13FV (accessed on 14 October 2023).
  9. Meet Claude. Available online: https://claude.ai/chats (accessed on 7 February 2023).
  10. OpenAI. GPT-4V(ision) System Card. Available online: https://cdn.openai.com/papers/GPTV_System_Card.pdf (accessed on 25 September 2023).
  11. Majnaric, L.T.; Babic, F.; O’Sullivan, S.; Holzinger, A. AI and Big Data in Healthcare: Towards a More Comprehensive Research Framework for Multimorbidity. J. Clin. Med. 2021, 10, 766.
  12. Joshi, G.; Jain, A.; Araveeti, S.R.; Adhikari, S.; Garg, H.; Bhandari, M. FDA Approved Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices: An Updated Landscape. Available online: https://www.medrxiv.org/content/10.1101/2022.12.07.22283216v3 (accessed on 12 December 2022).
  13. Oh, N.; Choi, G.S.; Lee, W.Y. ChatGPT goes to the operating room: Evaluating GPT-4 performance and its potential in surgical education and training in the era of large language models. Ann. Surg. Treat. Res. 2023, 104, 269–273.
  14. Eysenbach, G. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers. JMIR Med. Educ. 2023, 9, e46885.
  15. Sallam, M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare 2023, 11, 887.
  16. Reese, J.T.; Danis, D.; Caulfied, J.H.; Casiraghi, E.; Valentini, G.; Mungall, C.J.; Robinson, P.N. On the limitations of large language models in clinical diagnosis. medRxiv 2023.
  17. Eriksen, A.V.; Möller, S.; Ryg, J. Use of GPT-4 to Diagnose Complex Clinical Cases. NEJM AI 2023, 1–3.
  18. Kanjee, Z.; Crowe, B.; Rodman, A. Accuracy of a Generative Artificial Intelligence Model in a Complex Diagnostic Challenge. JAMA 2023, 330, 78–80.
  19. Zuniga Salazar, G.; Zuniga, D.; Vindel, C.L.; Yoong, A.M.; Hincapie, S.; Zuniga, A.B.; Zuniga, P.; Salazar, E.; Zuniga, B. Efficacy of AI Chats to Determine an Emergency: A Comparison Between OpenAI’s ChatGPT, Google Bard, and Microsoft Bing AI Chat. Cureus 2023, 15, e45473.
  20. Ayers, J.W.; Poliak, A.; Dredze, M.; Leas, E.C.; Zhu, Z.; Kelley, J.B.; Faix, D.J.; Goodman, A.M.; Longhurst, C.A.; Hogarth, M.; et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern. Med. 2023, 183, 589–596.
  21. Lee, P.; Bubeck, S.; Petro, J. Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine. N. Engl. J. Med. 2023, 388, 1233–1239.
  22. Mello, M.M.; Guha, N. ChatGPT and Physicians’ Malpractice Risk. JAMA Health Forum 2023, 4, e231938.
  23. Suppadungsuk, S.; Thongprayoon, C.; Miao, J.; Krisanapan, P.; Qureshi, F.; Kashani, K.; Cheungpasitporn, W. Exploring the Potential of Chatbots in Critical Care Nephrology. Medicines 2023, 10, 58.
  24. Garcia Valencia, O.A.; Thongprayoon, C.; Jadlowiec, C.C.; Mao, S.A.; Miao, J.; Cheungpasitporn, W. Enhancing Kidney Transplant Care through the Integration of Chatbot. Healthcare 2023, 11, 2518.
  25. Qarajeh, A.; Tangpanithandee, S.; Thongprayoon, C.; Suppadungsuk, S.; Krisanapan, P.; Aiumtrakul, N.; Garcia Valencia, O.A.; Miao, J.; Qureshi, F.; Cheungpasitporn, W. AI-Powered Renal Diet Support: Performance of ChatGPT, Bard AI, and Bing Chat. Clin. Pract. 2023, 13, 1160–1172.
  26. Suppadungsuk, S.; Thongprayoon, C.; Krisanapan, P.; Tangpanithandee, S.; Garcia Valencia, O.; Miao, J.; Mekraksakit, P.; Kashani, K.; Cheungpasitporn, W. Examining the Validity of ChatGPT in Identifying Relevant Nephrology Literature: Findings and Implications. J. Clin. Med. 2023, 12, 5550.
  27. Miao, J.; Thongprayoon, C.; Garcia Valencia, O.A.; Krisanapan, P.; Sheikh, M.S.; Davis, P.W.; Mekraksakit, P.; Suarez, M.G.; Craici, I.M.; Cheungpasitporn, W. Performance of ChatGPT on Nephrology Test Questions. Clin. J. Am. Soc. Nephrol. 2023.
  28. Temsah, M.H.; Altamimi, I.; Jamal, A.; Alhasan, K.; Al-Eyadhy, A. ChatGPT Surpasses 1000 Publications on PubMed: Envisioning the Road Ahead. Cureus 2023, 15, e44769.
  29. VanderLinden, S. Exploring the Ethics of AI. Available online: https://alchemycrew.com/exploring-the-ethics-of-ai/ (accessed on 22 July 2021).
  30. WHO Calls for Safe and Ethical AI for Health. Available online: https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health (accessed on 16 May 2023).
  31. Dergaa, I.; Chamari, K.; Zmijewski, P.; Ben Saad, H. From human writing to artificial intelligence generated text: Examining the prospects and potential threats of ChatGPT in academic writing. Biol. Sport 2023, 40, 615–622.
  32. Hosseini, M.; Horbach, S. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Res. Integr. Peer Rev. 2023, 8, 4.
  33. Leung, T.I.; de Azevedo Cardoso, T.; Mavragani, A.; Eysenbach, G. Best Practices for Using AI Tools as an Author, Peer Reviewer, or Editor. J. Med. Internet Res. 2023, 25, e51584.
  34. Hosseini, M.; Rasmussen, L.M.; Resnik, D.B. Using AI to write scholarly publications. Account. Res. 2023, 1–9.
  35. Wang, X.; Liu, X.Q. Potential and limitations of ChatGPT and generative artificial intelligence in medical safety education. World J. Clin. Cases 2023, 11, 7935–7939.
  36. Lemley, K.V. Does ChatGPT Help Us Understand the Medical Literature? J. Am. Soc. Nephrol. 2023.
  37. Jin, Q.; Leaman, R.; Lu, Z. Retrieve, Summarize, and Verify: How Will ChatGPT Affect Information Seeking from the Medical Literature? J. Am. Soc. Nephrol. 2023, 34, 1302–1304.
  38. Eppler, M.; Ganjavi, C.; Ramacciotti, L.S.; Piazza, P.; Rodler, S.; Checcucci, E.; Gomez Rivas, J.; Kowalewski, K.F.; Belenchon, I.R.; Puliatti, S.; et al. Awareness and Use of ChatGPT and Large Language Models: A Prospective Cross-sectional Global Survey in Urology. Eur. Urol. 2023, in press.
  39. Kurian, N.; Cherian, J.M.; Sudharson, N.A.; Varghese, K.G.; Wadhwa, S. AI is now everywhere. Br. Dent. J. 2023, 234, 72.
  40. Gomes, W.J.; Evora, P.R.B.; Guizilini, S. Artificial Intelligence is Irreversibly Bound to Academic Publishing—ChatGPT is Cleared for Scientific Writing and Peer Review. Braz. J. Cardiovasc. Surg. 2023, 38, e20230963.
  41. Kitamura, F.C. ChatGPT Is Shaping the Future of Medical Writing But Still Requires Human Judgment. Radiology 2023, 307, e230171.
  42. Huang, J.; Tan, M. The role of ChatGPT in scientific communication: Writing better scientific review articles. Am. J. Cancer Res. 2023, 13, 1148–1154.
  43. Guleria, A.; Krishan, K.; Sharma, V.; Kanchan, T. ChatGPT: Ethical concerns and challenges in academics and research. J. Infect. Dev. Ctries. 2023, 17, 1292–1299.
  44. Liu, H.; Azam, M.; Bin Naeem, S.; Faiola, A. An overview of the capabilities of ChatGPT for medical writing and its implications for academic integrity. Health Inf. Libr. J. 2023, 40, 440–446.
  45. Zheng, H.; Zhan, H. ChatGPT in Scientific Writing: A Cautionary Tale. Am. J. Med. 2023, 136, 725–726.e6.
  46. Kleebayoon, A.; Wiwanitkit, V. ChatGPT in medical practice, education and research: Malpractice and plagiarism. Clin. Med. 2023, 23, 280.
  47. Gandhi Periaysamy, A.; Satapathy, P.; Neyazi, A.; Padhi, B.K. ChatGPT: Roles and boundaries of the new artificial intelligence tool in medical education and health research–correspondence. Ann. Med. Surg. 2023, 85, 1317–1318.
  48. Mihalache, A.; Popovic, M.M.; Muni, R.H. Performance of an Artificial Intelligence Chatbot in Ophthalmic Knowledge Assessment. JAMA Ophthalmol. 2023, 141, 589–597.
  49. Giannos, P.; Delardas, O. Performance of ChatGPT on UK Standardized Admission Tests: Insights From the BMAT, TMUA, LNAT, and TSA Examinations. JMIR Med. Educ. 2023, 9, e47737.
  50. Takagi, S.; Watari, T.; Erabi, A.; Sakaguchi, K. Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study. JMIR Med. Educ. 2023, 9, e48002.
  51. Bhayana, R.; Krishna, S.; Bleakney, R.R. Performance of ChatGPT on a Radiology Board-style Examination: Insights into Current Strengths and Limitations. Radiology 2023, 307, e230582.
  52. Sikander, B.; Baker, J.J.; Deveci, C.D.; Lund, L.; Rosenberg, J. ChatGPT-4 and Human Researchers Are Equal in Writing Scientific Introduction Sections: A Blinded, Randomized, Non-inferiority Controlled Study. Cureus 2023, 15, e49019.
  53. Osmanovic-Thunström, A.; Steingrimsson, S. Does GPT-3 qualify as a co-author of a scientific paper publishable in peer-review journals according to the ICMJE criteria? A case study. Discov. Artif. Intell. 2023, 3, 12.
  54. ChatGPT Generative Pre-trained Transformer; Zhavoronkov, A. Rapamycin in the context of Pascal’s Wager: Generative pre-trained transformer perspective. Oncoscience 2022, 9, 82–84.
  55. Wattanapisit, A.; Photia, A.; Wattanapisit, S. Should ChatGPT be considered a medical writer? Malays. Fam. Physician 2023, 18, 69.
  56. Miao, J.; Thongprayoon, C.; Cheungpasitporn, W. Assessing the Accuracy of ChatGPT on Core Questions in Glomerular Disease. Kidney Int. Rep. 2023, 8, 1657–1659.
  57. Tang, G. Letter to editor: Academic journals should clarify the proportion of NLP-generated content in papers. Account. Res. 2023, 1–2.
  58. Stokel-Walker, C. ChatGPT listed as author on research papers: Many scientists disapprove. Nature 2023, 613, 620–621.
  59. Bahsi, I.; Balat, A. The Role of AI in Writing an Article and Whether it Can Be a Co-author: What if it Gets Support From 2 Different AIs Like ChatGPT and Google Bard for the Same Theme? J. Craniofac. Surg. 2023.
  60. Grove, J. Science Journals Overturn Ban on ChatGPT-Authored Papers. Available online: https://www.timeshighereducation.com/news/science-journals-overturn-ban-chatgpt-authored-papers#:~:text=The%20prestigious%20Science%20family%20of,intelligence%20tools%20in%20submitted%20papers (accessed on 16 November 2023).
  61. Zielinski, C.; Winker, M.A.; Aggarwal, R.; Ferris, L.E.; Heinemann, M.; Lapena, J.F., Jr.; Pai, S.A.; Ing, E.; Citrome, L.; Alam, M.; et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. Colomb. Médica 2023, 54, e1015868.
  62. Daugirdas, J.T. OpenAI’s ChatGPT and Its Potential Impact on Narrative and Scientific Writing in Nephrology. Am. J. Kidney Dis. 2023, 82, A13–A14.
  63. Dönmez, I.; Idil, S.; Gulen, S. Conducting Academic Research with the AI Interface ChatGPT: Challenges and Opportunities. J. STEAM Educ. 2023, 6, 101–118.
  64. Else, H. Abstracts written by ChatGPT fool scientists. Nature 2023, 613, 423.
  65. Casal, J.E.; Kessler, M. Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing. Res. Methods Appl. Linguist. 2023, 2, 100068.
  66. Nearly 1 in 3 College Students Have Used ChatGPT on Written Assignments. Available online: https://www.intelligent.com/nearly-1-in-3-college-students-have-used-chatgpt-on-written-assignments/ (accessed on 23 January 2023).
  67. Kamilia, B. The Reality of Contemporary Arab-American Literary Character and the Idea of the Third Space Female Character Analysis of Abu Jaber Novel Arabian Jazz. Ph.D. Thesis, Kasdi Merbah Ouargla University, Ouargla, Algeria, 2023.
  68. Jayachandran, M. ChatGPT: Guide to Scientific Thesis Writing. Independently Published. 2023. Available online: https://www.barnesandnoble.com/w/chatgpt-guide-to-scientific-thesis-writing-jayachandran-m/1144451253 (accessed on 5 December 2023).
  69. Lu, D. Are Australian Research Council Reports Being Written by ChatGPT? Available online: https://www.theguardian.com/technology/2023/jul/08/australian-research-council-scrutiny-allegations-chatgpt-artifical-intelligence (accessed on 7 July 2023).
  70. Van Noorden, R.; Perkel, J.M. AI and science: What 1,600 researchers think. Nature 2023, 621, 672–675.
  71. Parrilla, J.M. ChatGPT use shows that the grant-application system is broken. Nature 2023, 623, 443.
  72. Khan, S.H. AI at Doorstep: ChatGPT and Academia. J. Coll. Physicians Surg. Pak. 2023, 33, 1085–1086.
  73. Jeyaraman, M.; Ramasubramanian, S.; Balaji, S.; Jeyaraman, N.; Nallakumarasamy, A.; Sharma, S. ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research. World J. Methodol. 2023, 13, 170–178.
  74. Meyer, J.G.; Urbanowicz, R.J.; Martin, P.C.N.; O’Connor, K.; Li, R.; Peng, P.C.; Bright, T.J.; Tatonetti, N.; Won, K.J.; Gonzalez-Hernandez, G.; et al. ChatGPT and large language models in academia: Opportunities and challenges. BioData Min. 2023, 16, 20.
  75. Zaitsu, W.; Jin, M. Distinguishing ChatGPT(-3.5, -4)-generated and human-written papers through Japanese stylometric analysis. PLoS ONE 2023, 18, e0288453.
  76. Koo, M. Harnessing the potential of chatbots in education: The need for guidelines to their ethical use. Nurse Educ. Pract. 2023, 68, 103590.
  77. Yang, M. New York City Schools Ban AI Chatbot That Writes Essays and Answers Prompts. Available online: https://www.theguardian.com/us-news/2023/jan/06/new-york-city-schools-ban-ai-chatbot-chatgpt (accessed on 6 January 2023).
  78. Vincent, J. Top AI Conference Bans Use of ChatGPT and AI Language Tools to Write Academic Papers. Available online: https://www.theverge.com/2023/1/5/23540291/chatgpt-ai-writing-tool-banned-writing-academic-icml-paper (accessed on 5 January 2023).
  79. Siegerink, B.; Pet, L.A.; Rosendaal, F.R.; Schoones, J.W. ChatGPT as an author of academic papers is wrong and highlights the concepts of accountability and contributorship. Nurse Educ. Pract. 2023, 68, 103599.
  80. Garcia Valencia, O.A.; Suppadungsuk, S.; Thongprayoon, C.; Miao, J.; Tangpanithandee, S.; Craici, I.M.; Cheungpasitporn, W. Ethical Implications of Chatbot Utilization in Nephrology. J. Pers. Med. 2023, 13, 1363.
  81. Thongprayoon, C.; Vaitla, P.; Jadlowiec, C.C.; Leeaphorn, N.; Mao, S.A.; Mao, M.A.; Pattharanitima, P.; Bruminhent, J.; Khoury, N.J.; Garovic, V.D.; et al. Use of Machine Learning Consensus Clustering to Identify Distinct Subtypes of Black Kidney Transplant Recipients and Associated Outcomes. JAMA Surg. 2022, 157, e221286.
  82. Cacciamani, G.E.; Eppler, M.B.; Ganjavi, C.; Pekan, A.; Biedermann, B.; Collins, G.S.; Gill, I.S. Development of the ChatGPT, Generative Artificial Intelligence and Natural Large Language Models for Accountable Reporting and Use (CANGARU) Guidelines. arXiv 2023, arXiv:2307.08974.
Figure 1. Ethical concerns surrounding AI’s role in scholarly writing.
Figure 2. Future studies and research directions.
Table 1. Framework for AI integration in nephrology academic writing and peer review.
  • Transparent AI assistance acknowledgment. Objective: ensure full disclosure of AI contributions in research. Action items: (1) add an acknowledgment section to the paper; (2) specify AI’s role. Stakeholders involved: authors, journal editors. Metric for success: number of publications with transparent acknowledgments.
  • Enhanced peer review process with AI scrutiny. Objective: maintain academic rigor and integrity in the use of AI. Action items: (1) add an “AI Scrutiny” phase to peer review; (2) train reviewers on AI. Stakeholders involved: peer reviewers, AI experts. Metric for success: reduced rate of publication errors related to AI misuse.
  • AI ethics training for nephrologists. Objective: equip nephrologists with the knowledge to use AI ethically. Action items: (1) develop training modules; (2) conduct workshops. Stakeholders involved: nephrologists, ethicists, AI experts. Metric for success: number of trained personnel.
  • AI as a collaborative contributor. Objective: foster a culture where AI and human expertise are seen as complementary. Action items: (1) advocate for collaboration in publications; (2) develop guidelines for collaboration. Stakeholders involved: nephrologists, AI developers. Metric for success: number of collaborative publications.
  • Continuous monitoring and research. Objective: understand the impact of AI on the field and adapt accordingly. Action items: (1) initiate long-term studies; (2) develop AI-specific plagiarism tools. Stakeholders involved: nephrologists, data scientists. Metric for success: published long-term impact studies.
  • Ethics checklist. Objective: ensure preliminary ethical compliance in AI usage. Action item: integrate the ethics checklist into manuscript submission. Stakeholders involved: authors, journal editors, ethicists. Metric for success: number of manuscripts screened for ethical compliance.
Table 2. Proposed AI Ethics Checklist for journal submissions.
AI Ethics Checklist for Journal Submissions
General Information
  • Manuscript Title:
  • Corresponding Author:
  • Co-Authors:
  • Date of Submission:
AI Involvement
  • ☐ No AI involvement
  • ☐ AI was involved in this research
(If AI was not involved, you may skip the rest of this checklist.)
AI Contribution
  • ☐ Data Collection
  • ☐ Data Analysis
  • ☐ Literature Review
  • ☐ Manuscript Drafting
  • ☐ Other: _______________
AI Tools and Technologies
  • Name of AI Tool/Technology:
  • Version:
  • Provider/Developer:
Ethical Considerations
  • Transparency
    • ☐ The manuscript includes an acknowledgment section detailing AI’s role.
    • ☐ The algorithms used are described in detail or cited.
    • ☐ Any data sets used for training the AI are described or cited.
  • Data Privacy and Consent
    • ☐ All data used respect privacy norms and regulations.
    • ☐ Informed consent was obtained for data collection, if applicable.
  • Bias and Fairness
    • ☐ Measures were taken to minimize bias in AI algorithms.
    • ☐ The manuscript discusses potential biases in AI analysis and results.
  • Human Oversight
    • ☐ AI’s contributions were supervised by experts in the field.
    • ☐ The manuscript specifies the extent of human oversight.
  • Integrity and Accountability
    • ☐ The manuscript discusses the limitations of AI involvement.
    • ☐ Authors are accountable for AI’s contributions and any potential errors.
  • Peer Review Preparedness
    • ☐ The manuscript is prepared for AI scrutiny during the peer review process.
    • ☐ Any custom code is made available for review, if required by the journal.
Author’s Declaration
I, the undersigned, declare that the information provided in this checklist is accurate and complete to the best of my knowledge.
Signature: ___________________________
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Miao, J.; Thongprayoon, C.; Suppadungsuk, S.; Garcia Valencia, O.A.; Qureshi, F.; Cheungpasitporn, W. Ethical Dilemmas in Using AI for Academic Writing and an Example Framework for Peer Review in Nephrology Academia: A Narrative Review. Clin. Pract. 2024, 14, 89-105. https://doi.org/10.3390/clinpract14010008

