Healthcare
  • Review
  • Open Access

19 March 2023

ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns

1 Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman 11942, Jordan
2 Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman 11942, Jordan
This article belongs to the Section Artificial Intelligence in Healthcare

Abstract

ChatGPT is an artificial intelligence (AI)-based conversational large language model (LLM). The potential applications of LLMs in health care education, research, and practice could be promising if the associated valid concerns are proactively examined and addressed. The current systematic review aimed to investigate the utility of ChatGPT in health care education, research, and practice and to highlight its potential limitations. Following the PRISMA guidelines, a systematic search was conducted to retrieve English records in PubMed/MEDLINE and Google Scholar (published research or preprints) that examined ChatGPT in the context of health care education, research, or practice. A total of 60 records were eligible for inclusion. Benefits of ChatGPT were cited in 51/60 (85.0%) records and included: (1) improved scientific writing and enhanced research equity and versatility; (2) utility in health care research (efficient analysis of datasets, code generation, literature reviews, saving time to focus on experimental design, and drug discovery and development); (3) benefits in health care practice (streamlining the workflow, cost saving, documentation, personalized medicine, and improved health literacy); and (4) benefits in health care education including improved personalized learning and the focus on critical thinking and problem-based learning. Concerns regarding ChatGPT use were stated in 58/60 (96.7%) records including ethical, copyright, transparency, and legal issues, the risk of bias, plagiarism, lack of originality, inaccurate content with risk of hallucination, limited knowledge, incorrect citations, cybersecurity issues, and risk of infodemics. The promising applications of ChatGPT can induce paradigm shifts in health care education, research, and practice. However, the embrace of this AI chatbot should be conducted with extreme caution considering its potential limitations.
As it currently stands, ChatGPT does not qualify to be listed as an author in scientific articles unless the ICMJE/COPE guidelines are revised or amended. An initiative involving all stakeholders in health care education, research, and practice is urgently needed. This will help to set a code of ethics to guide the responsible use of ChatGPT among other LLMs in health care and academia.

1. Introduction

Artificial intelligence (AI) can be defined as the multidisciplinary approach of computer science and linguistics that aspires to create machines capable of performing tasks that normally require human intelligence [1]. These tasks include the ability to learn, adapt, rationalize, understand, and to fathom abstract concepts as well as the reactivity to complex human attributes such as attention, emotion, creativity, etc. [2].
The history of AI as a scientific discipline can be traced back to the mid-20th century at the Dartmouth Summer Research Project on AI [3]. This was followed by the development of machine learning (ML) algorithms that allow decision-making or predictions based on patterns in large datasets [4]. Subsequently, the development of neural networks (brain-mimicking algorithms), genetic algorithms (finding optimal solutions for complex problems by applying evolutionary principles), and other advanced techniques followed [5].
Launched in November 2022, “ChatGPT” is an AI-based large language model (LLM) trained on massive text datasets in multiple languages, with the ability to generate human-like responses to text input [6]. Developed by OpenAI (OpenAI, L.L.C., San Francisco, CA, USA), the name ChatGPT reflects both its nature as a chatbot (a program able to understand and generate responses through a text-based interface) and its basis in the generative pre-trained transformer (GPT) architecture [6,7]. The GPT architecture utilizes a neural network to process natural language, generating responses based on the context of the input text [7]. The superiority of ChatGPT over its GPT-based predecessors can be linked to its ability to respond in multiple languages, generating refined and highly sophisticated responses based on advanced modeling [6,7].
In the scientific community and academia, ChatGPT has received mixed responses, reflecting the history of controversy regarding the benefits vs. risks of advanced AI technologies [8,9,10]. On one hand, ChatGPT, among other LLMs, can be beneficial in conversational and writing tasks, helping to increase the efficiency and accuracy of the required output [11]. On the other hand, concerns have been raised regarding possible bias arising from the datasets used in ChatGPT training, which can limit its capabilities and result in factual inaccuracies that alarmingly appear scientifically plausible (a phenomenon termed hallucination) [11]. Additionally, security concerns and the potential for cyber-attacks spreading misinformation through LLMs should also be considered [11].
The innate resistance of the human mind to any change is a well-described phenomenon and can be understood from evolutionary and social psychology perspectives [12]. Therefore, the concerns and debate that arose immediately following the widespread release of ChatGPT are understandable. The attention that ChatGPT received spanned several disciplines. In education, for example, the release of ChatGPT could mark the end of essays as assignments [13]. In health care practice and academic writing, factual inaccuracies, ethical issues, and the fear of misuse including the spread of misinformation should be considered [14,15,16].
The versatility of human intelligence (HI) compared to AI is related to its biological evolutionary history, adaptability, creativity, capacity for emotional intelligence, and ability to understand complex abstract concepts [2]. However, HI-AI cooperation can be beneficial if an accurate and reliable output of AI is ensured. The promising utility of AI in health care has been outlined previously, with possible benefits in personalized medicine, drug discovery, and the analysis of large datasets, aside from the potential applications to improve diagnosis and clinical decisions [17,18]. Additionally, the utility of AI chatbots in health care education is an interesting area to probe. This is related to the massive amount of information and the various concepts that health care students are required to grasp [19]. However, all of these applications should be weighed cautiously against the valid concerns, risks, and categorical failures experienced and cited in the context of LLM applications. Specifically, Borji comprehensively highlighted the caveats of ChatGPT use, which included, but were not limited to, the generation of inaccurate content, the risk of bias and discrimination, lack of transparency and reliability, cybersecurity concerns, ethical consequences, and societal implications [20].
Therefore, the aim of the current review was to explore the future perspectives of ChatGPT as a prime example of LLMs in health care education, academic/scientific writing, health care research, and health care practice based on the existing evidence. Importantly, the current review objectives extended to involve the identification of potential limitations and concerns that could be associated with the application of ChatGPT in the aforementioned areas in health care settings.

2. Materials and Methods

2.1. Search Strategy and Inclusion Criteria

The current systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [21]. The information sources included PubMed/MEDLINE and Google Scholar.
The eligibility criteria involved any type of published scientific research or preprints (article, review, communication, editorial, opinion, etc.) addressing ChatGPT that fell under the following categories: (1) health care practice/research; (2) health care education; and (3) academic writing.
The exclusion criteria included: (1) non-English records; (2) records addressing ChatGPT in subjects other than those mentioned in the eligibility criteria; and (3) articles from non-academic sources (e.g., newspapers, internet websites, magazines, etc.).
The exact PubMed/MEDLINE search strategy, which concluded on 16 February 2023, was as follows: (ChatGPT) AND (("2022/11/30"[Date - Publication] : "3000"[Date - Publication])), which yielded 42 records.
The search on Google Scholar was conducted using Publish or Perish (Version 8) [22]. The search term was “ChatGPT” for the years 2022–2023, and the search, which concluded on 16 February 2023, yielded 238 records.

2.2. Summary of the Record Screening Approach

The records retrieved following the PubMed/MEDLINE and Google Scholar searches were imported to EndNote v.20 for Windows (Thomson ResearchSoft, Stanford, CA, USA), which yielded a total of 280 records.
Next, screening of the title/abstract was conducted for each record with the exclusion of duplicate records (n = 40), followed by the exclusion of records published in languages other than English (n = 32). Additionally, the records that fell outside the scope of the review (records that examined ChatGPT in a context outside health care education, health care practice, or scientific research/academic writing) were excluded (n = 80). Moreover, the records published in non-academic sources (e.g., newspapers, magazines, Internet websites, blogs, etc.) were excluded (n = 18).
Afterward, full screening of the remaining records (n = 110) was carried out, with the exclusion of an additional 41 records that fell outside the scope of the current review. An additional nine records were excluded due to the inability to access their full text, as they were subscription-based. This yielded a total of 60 records eligible for inclusion in the current review.
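The record-flow arithmetic described above can be summarized in a minimal sketch (illustrative only, not part of the review methodology) that reproduces the reported counts at each screening stage:

```python
# Sketch of the PRISMA record flow reported in the text.
# All counts are taken directly from the review; the dictionaries
# and variable names are illustrative.

retrieved = {"PubMed/MEDLINE": 42, "Google Scholar": 238}
total = sum(retrieved.values())  # 280 records imported to EndNote

# Exclusions during title/abstract screening
excluded_title_abstract = {
    "duplicate records": 40,
    "non-English records": 32,
    "outside review scope": 80,
    "non-academic sources": 18,
}
after_screening = total - sum(excluded_title_abstract.values())  # 110

# Exclusions during full-text screening
excluded_full_text = {
    "outside review scope": 41,
    "full text inaccessible (subscription-based)": 9,
}
included = after_screening - sum(excluded_full_text.values())  # 60 eligible records

print(total, after_screening, included)
```

Running the sketch confirms the flow from 280 identified records down to the 60 eligible for inclusion.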

2.3. Summary of the Descriptive Search for ChatGPT Benefits and Risks in the Included Records

Each of the included records was searched specifically for the following: (1) type of record (preprint, published research article, opinion, commentary, editorial, review, etc.); (2) the listed benefits/applications of ChatGPT in health care education, health care practice, or scientific research/academic writing; (3) the listed risks/concerns of ChatGPT in health care education, health care practice, or scientific research/academic writing; and (4) the main conclusions and recommendations regarding ChatGPT in health care education, health care practice, or scientific research/academic writing.
Categorization of the benefits/applications of ChatGPT was as follows: (1) educational benefits in health care education (e.g., generation of realistic and variable clinical vignettes, customized clinical cases with immediate feedback based on the student’s needs, enhanced communications skills); (2) benefits in academic/scientific writing (e.g., text generation, summarization, translation, and literature review in scientific research); (3) benefits in scientific research (e.g., efficient analysis of large datasets, drug discovery, identification of potential drug targets, generation of codes in scientific research); (4) benefits in health care practice (e.g., improvements in personalized medicine, diagnosis, treatment, lifestyle recommendations based on personalized traits, documentation/generation of reports); and (5) being a freely available package.
Categorization of the risks/concerns of ChatGPT was as follows: (1) ethical issues (e.g., risk of bias, discrimination based on the quality of training data, plagiarism); (2) hallucination (the generation of scientifically incorrect content that sounds plausible); (3) transparency issues (black box application); (4) risk of declining need for human expertise with subsequent psychologic, economic and social issues; (5) over-detailed, redundant, excessive content; (6) concerns about data privacy for medical information; (7) risk of declining clinical skills, critical thinking and problem-solving abilities; (8) legal issues (e.g., copyright issues, authorship status); (9) interpretability issues; (10) referencing issues; (11) risk of academic fraud in research; (12) incorrect content; and (13) infodemic risk.

3. Results

A total of 280 records were identified, and following the full screening process, a total of 60 records were eligible to be included in the review. The PRISMA flowchart of the record selection process is shown in Figure 1.
Figure 1. Flowchart of the record selection process based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.

3.1. Summary of the ChatGPT Benefits and Limitations/Concerns in Health Care

Summaries of the main conclusions of the included studies regarding ChatGPT utility in academic writing, health care education, and health care practice/research are provided in Table 1 for the records comprising editorials/letters to the editors, in Table 2 for the records comprising research articles, commentaries, news articles, perspectives, case studies, brief reports, communications, opinions, or recommendations, and in Table 3 for the records representing preprints.
Table 1. A summary of the main conclusions of the included records comprising editorials/letters to the editors.
Table 2. A summary of the main conclusions of the included records comprising research articles, commentaries, news articles, perspectives, case studies, brief reports, communications, opinions, or recommendations.
Table 3. A summary of the main conclusions of the included records representing preprints.

3.2. Characteristics of the Included Records

A summary of the record types included in the current review is shown in Figure 2.
Figure 2. Summary of the types of included records (n = 60). Preprints (not peer reviewed) are highlighted in grey while published records are highlighted in blue.
One-third of the included records were preprints (n = 20), with the most common preprint server being medRxiv (n = 6, 30.0%), followed by SSRN and arXiv (n = 4, 20.0% each). Editorials/letters to editors were the second most common type of included records (n = 19, 31.7%).

3.3. Benefits and Possible Applications of ChatGPT in Health Care Education, Practice, and Research Based on the Included Records

The benefits of ChatGPT were most frequently cited in the context of academic/scientific writing, mentioned in 31 records (51.7%). Examples included efficiency and versatility in writing high-quality text, improved language, readability, and translation promoting research equity, and accelerated literature review. Benefits in scientific research followed, mentioned in 20 records (33.3%). Examples included the ability to analyze massive data including electronic health records or genomic data, more free time to focus on experimental design, and drug design and discovery. Benefits in health care practice were mentioned in 14 records (23.3%), with examples including personalized medicine, prediction of disease risk and outcome, streamlining the clinical workflow, improved diagnostics, documentation, cost saving, and improved health literacy. Educational benefits in health care disciplines were mentioned in seven records (11.7%), with examples including the generation of accurate and versatile clinical vignettes, an improved personalized learning experience, and use as an adjunct in group learning. Being a free package was mentioned as a benefit in two records (3.3%, Figure 3).
Figure 3. Summary of benefits/applications of ChatGPT in health care education, research, and practice based on the included records.

3.4. Risks and Concerns toward ChatGPT in Health Care Education, Practice, and Research Based on the Included Records

Ethical concerns were commonly mentioned, appearing in 33 records (55.0%), especially in the context of the risk of bias (mentioned in 18 records, 30.0%) and plagiarism (mentioned in 14 records, 23.3%), alongside data privacy and security issues.
Other concerns involved the risk of incorrect/inaccurate information, mentioned in 20 records (33.3%); citation/reference inaccuracy or inadequate referencing, mentioned in 10 records (16.7%); transparency issues, mentioned in 10 records (16.7%); legal issues, mentioned in seven records (11.7%); knowledge restricted to the period before 2021, mentioned in six records (10.0%); the risk of misinformation spread, mentioned in five records (8.3%); over-detailed content, mentioned in five records (8.3%); copyright issues, mentioned in four records (6.7%); and lack of originality, mentioned in four records (6.7%, Figure 4).
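The percentages above are each count divided by the 60 included records, rounded to one decimal place. A minimal sketch (illustrative only; the labels are paraphrased from the text) verifying the computation:

```python
# Recompute the concern frequencies reported in the text as percentages
# of the 60 included records; counts are taken directly from the review.

N_RECORDS = 60

concern_counts = {
    "incorrect/inaccurate information": 20,
    "citation/reference inaccuracy": 10,
    "transparency issues": 10,
    "legal issues": 7,
    "knowledge restricted to pre-2021": 6,
    "misinformation spread": 5,
    "over-detailed content": 5,
    "copyright issues": 4,
    "lack of originality": 4,
}

# Percentage of included records mentioning each concern, one decimal place
percentages = {k: round(100 * v / N_RECORDS, 1) for k, v in concern_counts.items()}
print(percentages)
```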
Figure 4. Summary of risks/concerns of ChatGPT use in health care education, research, and practice based on the included records.

4. Discussion

The far-reaching consequences of ChatGPT, among other LLMs, can be described as a paradigm shift in academia and health care practice [16]. The discussion of its potential benefits, future perspectives, and importantly, its limitations, appears timely and relevant [80].
Therefore, the current review aimed to highlight these issues based on the current evidence. The following common themes emerged from the available literature.

4.1. Benefits of ChatGPT in Scientific Research

ChatGPT, as a prime example of LLMs, can be described as a promising, or even revolutionary, tool for scientific research in both academic writing and in the research process itself. Specifically, ChatGPT was listed in several sources as an efficient and promising tool for conducting comprehensive literature reviews and generating computer code, thereby saving time for the research steps that require more effort from human intelligence (e.g., the focus on experimental design) [14,27,28,41,44,46,47,60,71,72]. Additionally, ChatGPT can be helpful in generating queries for comprehensive systematic reviews with high precision, as shown by Wang et al., although the authors highlighted transparency issues and its unsuitability for high-recall retrieval [61]. Moreover, the utility of ChatGPT extends to improving language and the ability to express and communicate research ideas and results, ultimately speeding up the publication process with the faster availability of research results [23,25,29,40,48,63,66]. This is particularly relevant for researchers who are non-native English speakers [23,25,63]. Such a practice can be acceptable considering the already existing English editing services provided by several academic publishers. Subsequently, this can help to promote equity and diversity in research [46,55].

4.2. Limitations of ChatGPT Use in Scientific Research

On the other hand, the use of ChatGPT in academic writing and scientific research should be conducted in light of several limitations that could compromise the quality of research, as follows. First, superficial, inaccurate, or incorrect content was frequently cited as a shortcoming of ChatGPT use in scientific writing [14,28,29,40,60]. The ethical issues, including the risk of bias based on training datasets and plagiarism, were also frequently mentioned, aside from the lack of transparency regarding content generation, which justifies the occasional description of ChatGPT as a black box technology [14,25,26,40,44,45,46,47,48,55,60,63,65,72]. Importantly, the concept of ChatGPT hallucination could be risky if the generated content is not thoroughly evaluated by researchers and health providers with proper expertise [37,56,73,77,79]. This comes in light of the ability of ChatGPT to generate incorrect content that appears plausible from a scientific point of view [81].
Second, several records mentioned current problems regarding citation inaccuracies, insufficient references, and ChatGPT referencing non-existent sources [23,26]. This was clearly shown in two recently published case studies with ChatGPT use in a journal contest [29,49,50]. These case studies discouraged the use of ChatGPT, citing the lack of scientific accuracy, the limited updated knowledge, and the lack of ability to critically discuss the results [38,49,50]. Therefore, ChatGPT-generated content, albeit efficiently produced, should be meticulously examined prior to its inclusion in any research manuscripts or proposals for research grants.
Third, the generation of non-original, over-detailed, or excessive content can be an additional burden for researchers who should carefully supervise the ChatGPT-generated content [14,24,25,26,65,71]. This can be addressed by supplying ChatGPT with proper prompts (text input), since varying responses might be generated based on the exact approach of prompt construction [72,82].
Fourth, as it currently stands, the knowledge of ChatGPT is limited to the period before 2021, based on the datasets used in its training [6]. Thus, ChatGPT cannot currently be used as a reliable, up-to-date source for literature review [83]. Nevertheless, ChatGPT can help organize the literature into a coherent format, provided it is supplemented with reliable and up-to-date references [28,74].
Fifth, the risk of research fraud (e.g., ghostwriting, falsified or fake research) involving ChatGPT should be considered seriously [23,37,38,66,72,79] as well as the risk of generating mis- or disinformation with the subsequent possibility of infodemics [26,46,48,66].
Sixth, legal issues in relation to ChatGPT use were also raised by several records including copyright issues [14,38,44,55,79]. Finally, the practice of listing ChatGPT as an author does not appear to be acceptable based on the current ICMJE and COPE guidelines for determining authorship, as illustrated by Zielinski et al. and Liebrenz et al. [43,48]. This comes in light of the fact that authorship entails legal obligations that are not met by ChatGPT [43,48]. However, other researchers have suggested the possibility of ChatGPT inclusion as an author in some specified instances [60,64].
A few instances were encountered in this review where ChatGPT was listed as an author, which may point to the initial perplexity of a few publishers regarding the role of LLMs, including ChatGPT, in research [36,54]. The disapproval of including ChatGPT or any other LLM in the list of authors was clearly explained in Science, Nature, and the Lancet editorials, which referred to such practice as scientific misconduct, and this view was echoed by many scientists [24,27,35,40,45]. In the case of ChatGPT use in the research process, several records advocated the need for proper and concise disclosure and documentation of ChatGPT or LLM use in the methodology or acknowledgement sections [35,63,65]. A noteworthy and comprehensive record by Borji can be used as a categorical guide to the issues and concerns of ChatGPT use, especially in the context of scientific writing [20].

4.3. Benefits of ChatGPT in Health Care Practice

From the health care practice perspective, the current review showed cautious excitement regarding ChatGPT applications. The ability of ChatGPT to help streamline the clinical workflow appears promising, with possible cost savings and increased efficiency in health care delivery [31,37,39,77]. This was illustrated recently by Patel and Lam, highlighting the ability of ChatGPT to produce efficient discharge summaries, which can be valuable in reducing the burden of documentation in health care [53]. Additionally, ChatGPT, among other LLMs, can have a transformative potential in health care practice via enhancing diagnostics, prediction of disease risk and outcome, and drug discovery, among other areas in translational research [51,52,68]. Moreover, ChatGPT showed moderate accuracy in determining the imaging steps needed in breast cancer screening and in the evaluation of breast pain, which can be a promising application in decision making in radiology [69]. ChatGPT in health care settings also has the prospect of refining personalized medicine and the ability to improve health literacy by providing easily accessible and understandable health information to the general public [30,32,59,73,74]. This utility was demonstrated by ChatGPT responses highlighting the need to consult health care providers, among other reliable sources, in specific situations [16,54].

4.4. Concerns Regarding ChatGPT Use in Health Care Practice

On the other hand, several concerns regarding ChatGPT use in health care settings were raised. Ethical issues including the risk of bias and transparency issues appeared as recurring major concerns [51,68,69,77]. Additionally, the generation of inaccurate content can have severe negative consequences in health care; therefore, this valid concern should be cautiously considered in health care practice [30,32,53,84]. This concern also extends to involve the ability of ChatGPT to provide justification for incorrect decisions [69].
Other ChatGPT limitations including the issues of interpretability, reproducibility, and the handling of uncertainty were also raised, which can have harmful consequences in health care settings including health care research [68,72,73]. In the area of personalized medicine, the lack of transparency and unclear information regarding the sources of data used for ChatGPT training are important issues in health care settings considering the variability observed among different populations in several health-related traits [69]. The issue of reproducibility between the ChatGPT prompt runs is of particular importance, which can be a major limitation in health care practice [51].
Medico-legal and accountability issues in the case of medical errors caused by ChatGPT application should be carefully considered [44]. Importantly, the current LLMs including ChatGPT are unable to comprehend the complexity of biologic systems, which is an important concept needed in health care decisions and research [52,68]. The concerns regarding data governance, health care cybersecurity, and data privacy should draw specific attention in the discussion regarding the utility of LLMs in health care [32,39,53].
Other issues accompanying ChatGPT applications in health care include the lack of personal and emotional perspectives needed in health care delivery and research [30,55]. However, ChatGPT emulation of empathetic responses was reported in a preprint in the context of hepatic disease [74]. Additionally, the issue of devaluing the function of the human brain should not be overlooked; therefore, stressing the indispensable human role in health care practice and research is important to address any psychologic, economic, and social consequences that could accompany the application of LLM tools in health care settings [72].

4.5. Benefits and Concerns Regarding ChatGPT Use in Health Care Education

In the area of health care education, ChatGPT appears to have a massive transformative potential. The need to rethink and revise the current assessment tools in health care education comes in light of ChatGPT’s ability to pass reputable exams (e.g., the USMLE) and the possibility of ChatGPT misuse, which could result in academic dishonesty [24,34,58,59,62,76,85,86,87].
Specifically, on an ophthalmology examination, Antaki et al. showed that ChatGPT currently performs at the level of an average first-year resident [70]. Such a result highlights the need to focus on questions involving the assessment of critical and problem-based thinking [34]. Additionally, the utility of ChatGPT in health care education can involve tailoring education to the needs of the student with immediate feedback [46]. Interestingly, a recent preprint by Benoit showed the promising potential of ChatGPT in rapidly crafting consistent, realistic clinical vignettes of variable complexity that can be a valuable educational source at lower cost [67]. Thus, ChatGPT can be useful in health care education, including enhancing communication skills, given proper academic mentoring [42,57,67]. However, copyright issues should be taken into account regarding ChatGPT-generated clinical vignettes, aside from the issue of inaccurate references [67]. Additionally, ChatGPT availability can be considered a motivation in health care education based on the personalized interaction it provides, enabling powerful self-learning as well as its utility as an adjunct in group learning [30,33,36,57,58].
Other limitations of ChatGPT use in health care education include concerns regarding the quality of training datasets, which could result in biased content, and inaccurate information limited to the period before 2021. Additional concerns include the current inability of ChatGPT to handle images, its low performance in some topics (e.g., failure to pass a parasitology exam for Korean medical students), and the issue of possible plagiarism [33,56,57,58,70,75]. Despite ChatGPT’s versatility in the context of academic education [79], the use of ChatGPT content in research assignments was discouraged, as it is currently insufficient, biased, or misleading [36,78].

4.6. Future Perspectives

As stated comprehensively in a commentary by van Dis et al., there is an urgent need to develop guidelines for ChatGPT use in scientific research, taking into account the issues of accountability, integrity, transparency, and honesty [46,88]. Thus, the application of ChatGPT to advance academia and health care should be carried out ethically and responsibly, taking into account the potential risks and concerns it entails [47,89].
More studies are needed to evaluate the content of LLMs, including their potential impact in advancing academia and science, with a particular focus on health care settings [90]. In academic writing, a question arises as to whether authors would prefer an AI editor and an AI reviewer, considering the previous flaws in the editorial and peer review processes [91,92,93]. A similar question would also arise in health care settings involving the personal preference for emotional support from health care providers over the potential efficiency of AI-based systems.
In health care education, more studies are needed to evaluate the potential impact of ChatGPT on the quality and efficiency of both educational content and assessment tools. ChatGPT utility to help in refining communication skills among health care students is another aspect that should be further explored as well as the applications of LLMs in the better achievement of the intended learning outcomes through personalized and instantaneous feedback for the students.

4.7. Strengths and Limitations

The current review represents the first rapid and concise overview of ChatGPT utility in health care education, research, and practice. However, the results of the current review should be viewed carefully in light of several shortcomings, which include: (1) the quality of the included records can be variable, compromising the generalizability of the results; (2) the exclusion of non-English records might have resulted in selection bias; (3) the exclusion of several records that could not be accessed could have resulted in missing relevant data, despite being small in number; (4) the inclusion of preprints that have not been peer reviewed, which might also compromise the generalizability of the results; (5) the swift growth of literature addressing ChatGPT applications/risks mandates the need for further studies and reviews, considering that the search in this review was concluded on 16 February 2023; and (6) this systematic review was based on the screening and interpretation of a single author, which may limit the interpretability of the results; therefore, future systematic reviews should consider collaborative work to improve the quality and credibility of the review results.

5. Conclusions

The imminent dominant use of LLM technology, including the widespread use of ChatGPT, in health care education, research, and practice is inevitable. Considering the valid concerns raised regarding its potential misuse, appropriate guidelines and regulations are urgently needed, with the engagement of all stakeholders involved, to ensure the safe and responsible use of ChatGPT. The proactive embrace of LLM technologies, with careful consideration of the possible ethical and legal issues, can limit potential future complications. If properly implemented, ChatGPT, among other LLMs, has the potential to expedite innovation in health care and can aid in promoting equity and diversity in research by overcoming language barriers. Therefore, a science-driven debate regarding the pros and cons of ChatGPT is strongly recommended, and its possible benefits should be weighed against the possible risks of misleading results and fraudulent research [94].
Based on the available evidence, health care professionals could be described as cautiously enthusiastic regarding the huge potential of ChatGPT, among other LLMs, in clinical decision-making and optimizing the clinical workflow. "ChatGPT in the Loop: Humans in Charge" can be the proper motto to follow, based on the intrinsic value of human knowledge and expertise in health care research and practice [14,25,55]. An inspiring example of this motto could be drawn from the relationship between the human character Cooper and the robotic character TARS in Christopher Nolan's movie Interstellar [95].
However, before its widespread adoption, an evaluation of the impact of ChatGPT from the health care perspective in a real-world setting should be conducted (e.g., using a risk-based approach) [96]. Based on the title of an important perspective article, "AI in the hands of imperfect users" by Kostick-Quenet and Gerke [96], the real-world impact of ChatGPT, among other LLMs, should be properly evaluated to prevent any negative consequences of its potential misuse. The same innovative and revolutionary tool can be severely deleterious if used improperly. An example to illustrate such severe negative consequences of ChatGPT misuse can be drawn from Formula 1 racing, as follows. In the 2004 Formula 1 season, the Ferrari F2004 (a highly successful Formula 1 racing car) broke several Formula 1 records in the hands of Michael Schumacher, one of the most successful Formula 1 drivers of all time. However, in my own hands, as a humble researcher without expertise in Formula 1 driving, the same highly successful car would only break walls and be damaged beyond repair.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data supporting this systematic review are available in the original publications, reports, and preprints that were cited in the reference section. In addition, the analyzed data that were used during the current systematic review are available from the author on reasonable request.

Acknowledgments

I am sincerely grateful to the reviewers for their time and effort in reviewing the manuscript, which provided insightful and valuable comments that helped to improve the quality of the final manuscript to a great degree.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Sarker, I.H. AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems. SN Comput. Sci. 2022, 3, 158. [Google Scholar] [CrossRef] [PubMed]
  2. Korteling, J.E.; van de Boer-Visschedijk, G.C.; Blankendaal, R.A.M.; Boonekamp, R.C.; Eikelboom, A.R. Human- versus Artificial Intelligence. Front. Artif. Intell. 2021, 4, 622364. [Google Scholar] [CrossRef] [PubMed]
  3. McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Mag. 2006, 27, 12. [Google Scholar] [CrossRef]
  4. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]
  5. Domingos, P. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, 1st ed.; Basic Books, A Member of the Perseus Books Group: New York, NY, USA, 2018; p. 329. [Google Scholar]
  6. OpenAI. OpenAI: Models GPT-3. Available online: https://beta.openai.com/docs/models (accessed on 14 January 2023).
  7. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar] [CrossRef]
  8. Wogu, I.A.P.; Olu-Owolabi, F.E.; Assibong, P.A.; Agoha, B.C.; Sholarin, M.; Elegbeleye, A.; Igbokwe, D.; Apeh, H.A. Artificial intelligence, alienation and ontological problems of other minds: A critical investigation into the future of man and machines. In Proceedings of the 2017 International Conference on Computing Networking and Informatics (ICCNI), Lagos, Nigeria, 29–31 October 2017; pp. 1–10. [Google Scholar]
  9. Howard, J. Artificial intelligence: Implications for the future of work. Am. J. Ind. Med. 2019, 62, 917–926. [Google Scholar] [CrossRef]
  10. Tai, M.C. The impact of artificial intelligence on human society and bioethics. Tzu Chi. Med J. 2020, 32, 339–343. [Google Scholar] [CrossRef]
  11. Deng, J.; Lin, Y. The Benefits and Challenges of ChatGPT: An Overview. Front. Comput. Intell. Syst. 2023, 2, 81–83. [Google Scholar] [CrossRef]
  12. Tobore, T.O. On Energy Efficiency and the Brain’s Resistance to Change: The Neurological Evolution of Dogmatism and Close-Mindedness. Psychol. Rep. 2019, 122, 2406–2416. [Google Scholar] [CrossRef]
  13. Stokel-Walker, C. AI bot ChatGPT writes smart essays—Should professors worry? Nature, 9 December 2022. [Google Scholar] [CrossRef]
  14. Stokel-Walker, C.; Van Noorden, R. What ChatGPT and generative AI mean for science. Nature 2023, 614, 214–216. [Google Scholar] [CrossRef]
  15. Chatterjee, J.; Dethlefs, N. This new conversational AI model can be your friend, philosopher, and guide … and even your worst enemy. Patterns 2023, 4, 100676. [Google Scholar] [CrossRef] [PubMed]
  16. Sallam, M.; Salim, N.A.; Al-Tammemi, A.B.; Barakat, M.; Fayyad, D.; Hallit, S.; Harapan, H.; Hallit, R.; Mahafzah, A. ChatGPT Output Regarding Compulsory Vaccination and COVID-19 Vaccine Conspiracy: A Descriptive Study at the Outset of a Paradigm Shift in Online Search for Information. Cureus 2023, 15, e35029. [Google Scholar] [CrossRef] [PubMed]
  17. Johnson, K.B.; Wei, W.Q.; Weeraratne, D.; Frisse, M.E.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J.L. Precision Medicine, AI, and the Future of Personalized Health Care. Clin. Transl. Sci. 2021, 14, 86–93. [Google Scholar] [CrossRef] [PubMed]
  18. Rajpurkar, P.; Chen, E.; Banerjee, O.; Topol, E.J. AI in health and medicine. Nat. Med. 2022, 28, 31–38. [Google Scholar] [CrossRef]
  19. Paranjape, K.; Schinkel, M.; Nannan Panday, R.; Car, J.; Nanayakkara, P. Introducing Artificial Intelligence Training in Medical Education. JMIR Med. Educ. 2019, 5, e16048. [Google Scholar] [CrossRef]
  20. Borji, A. A Categorical Archive of ChatGPT Failures. arXiv 2023, arXiv:2302.03494. [Google Scholar] [CrossRef]
  21. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef]
  22. Harzing, A.-W. Publish or Perish. Available online: https://harzing.com/resources/publish-or-perish (accessed on 16 February 2023).
  23. Chen, T.J. ChatGPT and Other Artificial Intelligence Applications Speed up Scientific Writing. Available online: https://journals.lww.com/jcma/Citation/9900/ChatGPT_and_other_artificial_intelligence.174.aspx (accessed on 16 February 2023).
  24. Thorp, H.H. ChatGPT is fun, but not an author. Science 2023, 379, 313. [Google Scholar] [CrossRef]
  25. Kitamura, F.C. ChatGPT Is Shaping the Future of Medical Writing but Still Requires Human Judgment. Radiology 2023, 230171. [Google Scholar] [CrossRef]
  26. Lubowitz, J. ChatGPT, An Artificial Intelligence Chatbot, Is Impacting Medical Literature. Arthroscopy, 2023; in press. [Google Scholar] [CrossRef]
  27. Nature editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023, 613, 612. [Google Scholar] [CrossRef]
  28. Moons, P.; Van Bulck, L. ChatGPT: Can artificial intelligence language models be of value for cardiovascular nurses and allied health professionals. Available online: https://academic.oup.com/eurjcn/advance-article/doi/10.1093/eurjcn/zvad022/7031481 (accessed on 8 February 2023).
  29. Cahan, P.; Treutlein, B. A conversation with ChatGPT on the role of computational systems biology in stem cell research. Stem. Cell. Rep. 2023, 18, 1–2. [Google Scholar] [CrossRef]
  30. Ahn, C. Exploring ChatGPT for information of cardiopulmonary resuscitation. Resuscitation 2023, 185, 109729. [Google Scholar] [CrossRef] [PubMed]
  31. Gunawan, J. Exploring the future of nursing: Insights from the ChatGPT model. Belitung Nurs. J. 2023, 9, 1–5. [Google Scholar] [CrossRef]
  32. D’Amico, R.S.; White, T.G.; Shah, H.A.; Langer, D.J. I Asked a ChatGPT to Write an Editorial About How We Can Incorporate Chatbots Into Neurosurgical Research and Patient Care. Neurosurgery 2023, 92, 993–994. [Google Scholar] [CrossRef]
  33. Fijačko, N.; Gosak, L.; Štiglic, G.; Picard, C.T.; John Douma, M. Can ChatGPT Pass the Life Support Exams without Entering the American Heart Association Course? Resuscitation 2023, 185, 109732. [Google Scholar] [CrossRef] [PubMed]
  34. Mbakwe, A.B.; Lourentzou, I.; Celi, L.A.; Mechanic, O.J.; Dagan, A. ChatGPT passing USMLE shines a spotlight on the flaws of medical education. PLoS Digit. Health 2023, 2, e0000205. [Google Scholar] [CrossRef]
  35. Huh, S. Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers. J. Educ. Eval. Health Prof. 2023, 20, 5. [Google Scholar] [CrossRef]
  36. O’Connor, S. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Educ. Pract. 2023, 66, 103537. [Google Scholar] [CrossRef]
  37. Shen, Y.; Heacock, L.; Elias, J.; Hentel, K.D.; Reig, B.; Shih, G.; Moy, L. ChatGPT and Other Large Language Models Are Double-edged Swords. Radiology 2023, 230163. [Google Scholar] [CrossRef]
  38. Gordijn, B.; Have, H.t. ChatGPT: Evolution or revolution? Med. Health Care Philos. 2023, 26, 1–2. [Google Scholar] [CrossRef]
  39. Mijwil, M.; Aljanabi, M.; Ali, A. ChatGPT: Exploring the Role of Cybersecurity in the Protection of Medical Information. Mesop. J. CyberSecurity 2023, 18–21. [Google Scholar] [CrossRef]
  40. The Lancet Digital Health. ChatGPT: Friend or foe? Lancet Digit. Health 2023, 5, e112–e114. [Google Scholar] [CrossRef]
  41. Aljanabi, M.; Ghazi, M.; Ali, A.; Abed, S. ChatGpt: Open Possibilities. Iraqi J. Comput. Sci. Math. 2023, 4, 62–64. [Google Scholar] [CrossRef]
  42. Kumar, A. Analysis of ChatGPT Tool to Assess the Potential of its Utility for Academic Writing in Biomedical Domain. Biol. Eng. Med. Sci. Rep. 2023, 9, 24–30. [Google Scholar] [CrossRef]
  43. Zielinski, C.; Winker, M.; Aggarwal, R.; Ferris, L.; Heinemann, M.; Lapeña, J.; Pai, S.; Ing, E.; Citrome, L. Chatbots, ChatGPT, and Scholarly Manuscripts WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications. Maced J. Med. Sci. 2023, 11, 83–86. [Google Scholar] [CrossRef]
  44. Biswas, S. ChatGPT and the Future of Medical Writing. Radiology 2023, 223312. [Google Scholar] [CrossRef]
  45. Stokel-Walker, C. ChatGPT listed as author on research papers: Many scientists disapprove. Nature 2023, 613, 620–621. [Google Scholar] [CrossRef]
  46. van Dis, E.A.M.; Bollen, J.; Zuidema, W.; van Rooij, R.; Bockting, C.L. ChatGPT: Five priorities for research. Nature 2023, 614, 224–226. [Google Scholar] [CrossRef] [PubMed]
  47. Lund, B.; Wang, S. Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi. Tech. News, 2023; ahead-of-print. [Google Scholar] [CrossRef]
  48. Liebrenz, M.; Schleifer, R.; Buadze, A.; Bhugra, D.; Smith, A. Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. Lancet Digit. Health 2023, 5, e105–e106. [Google Scholar] [CrossRef] [PubMed]
  49. Manohar, N.; Prasad, S.S. Use of ChatGPT in Academic Publishing: A Rare Case of Seronegative Systemic Lupus Erythematosus in a Patient With HIV Infection. Cureus 2023, 15, e34616. [Google Scholar] [CrossRef] [PubMed]
  50. Akhter, H.M.; Cooper, J.S. Acute Pulmonary Edema After Hyperbaric Oxygen Treatment: A Case Report Written With ChatGPT Assistance. Cureus 2023, 15, e34752. [Google Scholar] [CrossRef] [PubMed]
  51. Holzinger, A.; Keiblinger, K.; Holub, P.; Zatloukal, K.; Müller, H. AI for life: Trends in artificial intelligence for biotechnology. N. Biotechnol. 2023, 74, 16–24. [Google Scholar] [CrossRef]
  52. Mann, D. Artificial Intelligence Discusses the Role of Artificial Intelligence in Translational Medicine: A JACC: Basic to Translational Science Interview With ChatGPT. J. Am. Coll. Cardiol. Basic Trans. Sci. 2023, 8, 221–223. [Google Scholar] [CrossRef]
  53. Patel, S.B.; Lam, K. ChatGPT: The future of discharge summaries? Lancet Digit. Health 2023, 5, e107–e108. [Google Scholar] [CrossRef]
  54. Zhavoronkov, A. Rapamycin in the context of Pascal’s Wager: Generative pre-trained transformer perspective. Oncoscience 2022, 9, 82–84. [Google Scholar] [CrossRef]
  55. Hallsworth, J.E.; Udaondo, Z.; Pedrós-Alió, C.; Höfer, J.; Benison, K.C.; Lloyd, K.G.; Cordero, R.J.B.; de Campos, C.B.L.; Yakimov, M.M.; Amils, R. Scientific novelty beyond the experiment. Microb. Biotechnol. 2023; Online ahead of print. [Google Scholar] [CrossRef]
  56. Huh, S. Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: A descriptive study. J. Educ. Eval. Health Prof. 2023, 20, 1. [Google Scholar] [CrossRef]
  57. Khan, A.; Jawaid, M.; Khan, A.; Sajjad, M. ChatGPT-Reshaping medical education and clinical management. Pak. J. Med. Sci. 2023, 39, 605–607. [Google Scholar] [CrossRef]
  58. Gilson, A.; Safranek, C.W.; Huang, T.; Socrates, V.; Chi, L.; Taylor, R.A.; Chartash, D. How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment. JMIR Med. Educ. 2023, 9, e45312. [Google Scholar] [CrossRef] [PubMed]
  59. Kung, T.H.; Cheatham, M.; Medenilla, A.; Sillos, C.; De Leon, L.; Elepaño, C.; Madriaga, M.; Aggabao, R.; Diaz-Candido, G.; Maningo, J.; et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit. Health 2023, 2, e0000198. [Google Scholar] [CrossRef]
  60. Marchandot, B.; Matsushita, K.; Carmona, A.; Trimaille, A.; Morel, O. ChatGPT: The Next Frontier in Academic Writing for Cardiologists or a Pandora’s Box of Ethical Dilemmas. Eur. Heart J. Open 2023, 3, oead007. [Google Scholar] [CrossRef]
  61. Wang, S.; Scells, H.; Koopman, B.; Zuccon, G. Can ChatGPT Write a Good Boolean Query for Systematic Review Literature Search? arXiv 2023, arXiv:2302.03495. [Google Scholar] [CrossRef]
  62. Cotton, D.; Cotton, P.; Shipway, J. Chatting and Cheating. Ensuring academic integrity in the era of ChatGPT. EdArXiv, 2023; Preprint. [Google Scholar] [CrossRef]
  63. Gao, C.A.; Howard, F.M.; Markov, N.S.; Dyer, E.C.; Ramesh, S.; Luo, Y.; Pearson, A.T. Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. bioRxiv 2022. [Google Scholar] [CrossRef]
  64. Polonsky, M.; Rotman, J. Should Artificial Intelligent (AI) Agents be Your Co-author? Arguments in favour, informed by ChatGPT. SSRN, 2023; Preprint. [Google Scholar] [CrossRef]
  65. Aczel, B.; Wagenmakers, E. Transparency Guidance for ChatGPT Usage in Scientific Writing. PsyArXiv, 2023; Preprint. [Google Scholar] [CrossRef]
  66. De Angelis, L.; Baglivo, F.; Arzilli, G.; Privitera, G.P.; Ferragina, P.; Tozzi, A.E.; Rizzo, C. ChatGPT and the Rise of Large Language Models: The New AI-Driven Infodemic Threat in Public Health. SSRN, 2023; Preprint. [Google Scholar] [CrossRef]
  67. Benoit, J. ChatGPT for Clinical Vignette Generation, Revision, and Evaluation. medRxiv, 2023; Preprint. [Google Scholar] [CrossRef]
  68. Sharma, G.; Thakur, A. ChatGPT in Drug Discovery. ChemRxiv, 2023; Preprint. [Google Scholar] [CrossRef]
  69. Rao, A.; Kim, J.; Kamineni, M.; Pang, M.; Lie, W.; Succi, M.D. Evaluating ChatGPT as an Adjunct for Radiologic Decision-Making. medRxiv 2023. [Google Scholar] [CrossRef]
  70. Antaki, F.; Touma, S.; Milad, D.; El-Khoury, J.; Duval, R. Evaluating the Performance of ChatGPT in Ophthalmology: An Analysis of its Successes and Shortcomings. medRxiv, 2023; Preprint. [Google Scholar] [CrossRef]
  71. Aydın, Ö.; Karaarslan, E. OpenAI ChatGPT generated literature review: Digital twin in healthcare. SSRN, 2022; Preprint. [Google Scholar] [CrossRef]
  72. Sanmarchi, F.; Bucci, A.; Golinelli, D. A step-by-step Researcher’s Guide to the use of an AI-based transformer in epidemiology: An exploratory analysis of ChatGPT using the STROBE checklist for observational studies. medRxiv, 2023; Preprint. [Google Scholar] [CrossRef]
  73. Duong, D.; Solomon, B.D. Analysis of large-language model versus human performance for genetics questions. medRxiv, 2023; Preprint. [Google Scholar] [CrossRef]
  74. Yeo, Y.H.; Samaan, J.S.; Ng, W.H.; Ting, P.-S.; Trivedi, H.; Vipani, A.; Ayoub, W.; Yang, J.D.; Liran, O.; Spiegel, B.; et al. Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. medRxiv, 2023; Preprint. [Google Scholar] [CrossRef]
  75. Bašić, Ž.; Banovac, A.; Kružić, I.; Jerković, I. Better by You, better than Me? ChatGPT-3 as writing assistance in students’ essays. arXiv, 2023; Preprint. [Google Scholar] [CrossRef]
  76. Hisan, U.; Amri, M. ChatGPT and Medical Education: A Double-Edged Sword. Researchgate, 2023; Preprint. [Google Scholar] [CrossRef]
  77. Jeblick, K.; Schachtner, B.; Dexl, J.; Mittermeier, A.; Stüber, A.T.; Topalis, J.; Weber, T.; Wesp, P.; Sabel, B.; Ricke, J.; et al. ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on Simplified Radiology Reports. arXiv 2022, arXiv:2212.14882. [Google Scholar] [CrossRef]
  78. Nisar, S.; Aslam, M. Is ChatGPT a Good Tool for T&CM Students in Studying Pharmacology? SSRN, 2023; Preprint. [Google Scholar] [CrossRef]
  79. Lin, Z. Why and how to embrace AI such as ChatGPT in your academic life. PsyArXiv, 2023; Preprint. [Google Scholar] [CrossRef]
  80. Taecharungroj, V. “What Can ChatGPT Do?”; Analyzing Early Reactions to the Innovative AI Chatbot on Twitter. Big Data Cogn. Comput. 2023, 7, 35. [Google Scholar] [CrossRef]
  81. Cascella, M.; Montomoli, J.; Bellini, V.; Bignami, E. Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios. J. Med. Syst. 2023, 47, 33. [Google Scholar] [CrossRef] [PubMed]
  82. Nachshon, A.; Batzofin, B.; Beil, M.; van Heerden, P.V. When Palliative Care May Be the Only Option in the Management of Severe Burns: A Case Report Written With the Help of ChatGPT. Cureus 2023, 15, e35649. [Google Scholar] [CrossRef] [PubMed]
  83. Kim, S.G. Using ChatGPT for language editing in scientific articles. Maxillofac. Plast. Reconstr. Surg. 2023, 45, 13. [Google Scholar] [CrossRef] [PubMed]
  84. Ali, S.R.; Dobbs, T.D.; Hutchings, H.A.; Whitaker, I.S. Using ChatGPT to write patient clinic letters. Lancet Digit. Health, 2023; Online ahead of print. [Google Scholar] [CrossRef]
  85. Shahriar, S.; Hayawi, K. Let’s have a chat! A Conversation with ChatGPT: Technology, Applications, and Limitations. arXiv 2023, arXiv:2302.13817. [Google Scholar] [CrossRef]
  86. Alberts, I.L.; Mercolli, L.; Pyka, T.; Prenosil, G.; Shi, K.; Rominger, A.; Afshar-Oromieh, A. Large language models (LLM) and ChatGPT: What will the impact on nuclear medicine be? Eur. J. Nucl. Med. Mol. Imaging, 2023; Online ahead of print. [Google Scholar] [CrossRef]
  87. Sallam, M.; Salim, N.A.; Barakat, M.; Al-Tammemi, A.B. ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study. Narra J. 2023, 3, e103. [Google Scholar] [CrossRef]
  88. Quintans-Júnior, L.J.; Gurgel, R.Q.; Araújo, A.A.S.; Correia, D.; Martins-Filho, P.R. ChatGPT: The new panacea of the academic world. Rev. Soc. Bras. Med. Trop. 2023, 56, e0060. [Google Scholar] [CrossRef]
  89. Homolak, J. Opportunities and risks of ChatGPT in medicine, science, and academic publishing: A modern Promethean dilemma. Croat. Med. J. 2023, 64, 1–3. [Google Scholar] [CrossRef]
  90. Checcucci, E.; Verri, P.; Amparore, D.; Cacciamani, G.E.; Fiori, C.; Breda, A.; Porpiglia, F. Generative Pre-training Transformer Chat (ChatGPT) in the scientific community: The train has left the station. Minerva. Urol. Nephrol. 2023; Online ahead of print. [Google Scholar] [CrossRef]
  91. Smith, R. Peer review: A flawed process at the heart of science and journals. J. R. Soc. Med. 2006, 99, 178–182. [Google Scholar] [CrossRef]
  92. Mavrogenis, A.F.; Quaile, A.; Scarlat, M.M. The good, the bad and the rude peer-review. Int. Orthop. 2020, 44, 413–415. [Google Scholar] [CrossRef]
  93. Margalida, A.; Colomer, M. Improving the peer-review process and editorial quality: Key errors escaping the review and editorial process in top scientific journals. PeerJ. 2016, 4, e1670. [Google Scholar] [CrossRef]
  94. Ollivier, M.; Pareek, A.; Dahmen, J.; Kayaalp, M.E.; Winkler, P.W.; Hirschmann, M.T.; Karlsson, J. A deeper dive into ChatGPT: History, use and future perspectives for orthopaedic research. Knee Surg. Sports Traumatol. Arthrosc. 2023; Online ahead of print. [Google Scholar] [CrossRef]
  95. Nolan, C. Interstellar; 169 minutes; Legendary Entertainment: Burbank, CA, USA, 2014. [Google Scholar]
  96. Kostick-Quenet, K.M.; Gerke, S. AI in the hands of imperfect users. Npj Digit. Med. 2022, 5, 197. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
