Review

Educational Applications of AI-Based Chatbots in Nursing: A Scoping Review

by Francisco Fernandes 1,*, Rúben Encarnação 2, José Alves 2, Carla Pais-Vieira 2, Suzinara Beatriz Soares de Lima 1 and Paulo Alves 2

1 Graduate Program in Nursing (PPGENF/UFSM), Federal University of Santa Maria, Santa Maria 97105-900, Brazil
2 Center for Interdisciplinary Research in Health (CIIS), Faculty of Health Sciences and Nursing, Catholic University of Portugal, 4169-005 Porto, Portugal
* Author to whom correspondence should be addressed.
Nurs. Rep. 2026, 16(3), 87; https://doi.org/10.3390/nursrep16030087
Submission received: 29 January 2026 / Revised: 20 February 2026 / Accepted: 28 February 2026 / Published: 3 March 2026

Abstract

Background/Objectives: The rapid expansion of generative artificial intelligence (AI) and large language model-based chatbots has accelerated their adoption in higher education, including nursing. This scoping review mapped the use of AI-based chatbots in nursing education, including curricular domains, pedagogical approaches, educational outcomes, and implementation challenges. Methods: A scoping review was conducted following the Joanna Briggs Institute methodology and reported in accordance with the PRISMA-ScR guideline. Searches were performed across major bibliographic databases and grey literature sources. Quantitative, qualitative, and mixed-methods studies addressing the use of AI chatbots in nursing education or professional training were included. Data were extracted using a standardized instrument and synthesized through descriptive statistics and qualitative content analysis. Results: Sixty-six studies (2019–2025) were included, with significant growth observed after 2023. Most studies employed quasi-experimental designs (37.9%) and were implemented in academic settings (83.3%). Application formats varied across online, hybrid, simulation-based, and classroom models. Reported benefits included improved learning performance, clinical reasoning, and student engagement. Key challenges involved the reliability of AI outputs, academic integrity, data protection, and limited institutional governance. Conclusions: AI-based chatbots represent promising tools to enhance nursing education, particularly when integrated into structured pedagogical strategies with active faculty supervision. Their use can support the development of clinical reasoning, student engagement, and personalized learning. However, methodological heterogeneity, ethical concerns, and governance gaps highlight the need for careful implementation and further rigorous research to ensure safe, effective, and pedagogically sound integration.

1. Introduction

Artificial Intelligence (AI), a term established by John McCarthy in 1955, represents the capability of machines to perform tasks and solve problems that traditionally depend on human intelligence, such as natural language processing, pattern recognition, and decision-making [1,2,3]. Recently, AI has undergone significant evolution with the development of Generative Artificial Intelligence (GenAI). Powered by large language models (LLMs), GenAI can automatically generate diverse content, including text, images, audio, and video, by learning from large volumes of data [4,5]. Prominent examples such as ChatGPT, Google Gemini, and Llama illustrate this transformation, generating content that responds to specific user requests [6].
Among the most prominent AI tools, chatbots have been the focus of numerous studies in higher education, demonstrating the potential to positively impact various aspects of learning. Evidence suggests that incorporating these tools can lead to improved student outcomes, including greater satisfaction [7]. Within healthcare and health professions education, recent reviews indicate that integrating AI into curricula and professional education can contribute to enhanced learning, assessment, and competency development [8,9]. This digital and technological revolution requires educational programs and future professionals to adapt in order to keep pace with sectoral evolution and to be prepared to use AI ethically and effectively.
However, despite the increasing adoption of chatbots, there is a lack of structured evidence regarding their pedagogical design, educational effectiveness, and ethical considerations within nursing education, especially in light of recent advances in generative AI.
In the context of nursing, the adoption of new technologies in professional education is crucial for developing essential skills and competencies that meet real-world needs and emerging care models. Nursing education increasingly requires adaptability to innovations and the use of tools that simulate clinical scenarios, thereby enhancing decision-making and clinical reasoning. The World Health Organization (WHO) emphasizes the need for innovation in health education to strengthen the global workforce [10]. Given the growing integration of these technologies in healthcare and the potential demonstrated by chatbots in higher education, it becomes imperative to investigate how these AI tools can be incorporated and utilized to enhance nursing education.
Given the promise of improved education, a comprehensive mapping of the available evidence is warranted. Scoping reviews offer a robust methodological approach to examine the extent, scope, and nature of research activity on a given topic, as well as to identify knowledge gaps and research priorities [11].
A preliminary search of major databases (CINAHL, PubMed, Scopus, and Web of Science) and registries (Open Science Framework and PROSPERO) identified three reviews related to the use of chatbots in nursing education. The first, a systematic review by Zhang et al. [12], included only qualitative studies and conducted its search in November 2024, without incorporating grey literature. The second, a scoping review by Labrague and Sabei [7], similarly considered studies published up to 2024 but excluded grey literature sources and studies not published in English. The third, a scoping review protocol by Rodrigues et al. [13], while addressing Intelligent Tutoring Systems broadly, does not specifically focus on the distinct characteristics of conversational AI chatbots. Although these reviews provide valuable information, they do not offer a comprehensive and up-to-date mapping of the evidence, particularly regarding different study designs, emerging literature, and contributions from grey literature. Therefore, a new scoping review is needed to capture the full range of available evidence, including recent publications and grey literature, and to provide a more comprehensive understanding of how chatbots are being used in nursing education.
For the purposes of this review, AI-based chatbots were defined as computer-based conversational agents capable of interacting with users through natural language processing, including generative large language model-based systems, as well as rule-based or hybrid conversational agents used to support educational processes.
Given the above, it is pertinent to conduct a scoping review that investigates and maps the use of AI-based chatbots in nursing education, both in academic education and in the professional development of nurses and nursing students at undergraduate and postgraduate levels. Understanding the state of the art on this topic will help identify the potential, challenges, and gaps in the literature, thereby contributing to the advancement of pedagogical practices and technological innovation in nursing education.
Specifically, this review aims to:
  • Identify the areas of the nursing curriculum in which chatbots are being applied.
  • Describe how AI-based chatbots are being used, including the pedagogical strategies applied in nursing education.
  • Map the main outcomes associated with the use of chatbots in nursing education.
  • Identify the main challenges and limitations reported in integrating chatbots into nursing education.

2. Materials and Methods

This scoping review protocol was prospectively registered in the Open Science Framework (OSF) (DOI: 10.17605/OSF.IO/DBYA7) [14]. The review was conducted and reported in accordance with the PRISMA-ScR guideline. Given the emerging nature of the topic and the limited knowledge regarding the application of AI-based chatbots in nursing education, a scoping review was selected as the most appropriate methodological approach [11], as it allows for the comprehensive mapping of available evidence irrespective of study design.
The review followed the methodology recommended by the Joanna Briggs Institute (JBI) for scoping reviews [11,15]. Following protocol registration, the manuscript title was refined to improve clarity and alignment with the final scope of the review. This modification was limited to the title and did not affect the research objectives, eligibility criteria, methodological approach, or analytical framework defined in the original protocol.
The review followed systematic steps, including the formulation of the research question, comprehensive literature searching, screening of eligible studies, data extraction and organization, evidence synthesis, and structured presentation of the results. The completed PRISMA-ScR checklist [16] is provided as Supplementary Material (Table S1).

2.1. Research Question

In scoping reviews, it is recommended that research questions be formulated broadly and clearly to encompass the concept to be explored, the target population, and the outcomes or context of interest, thereby guiding a systematic and comprehensive search [17].
To achieve the study objectives, the research question was formulated using the PCC mnemonic (Population, Concept, Context): What evidence currently exists regarding the use of AI-based chatbots in nursing education?

2.2. Search Strategy

To ensure comprehensive coverage of the available literature, systematic searches were conducted across multiple electronic databases, including PubMed, CINAHL Complete, Scopus, Web of Science, SciELO, Cochrane Library, and VHL/LILACS. In addition to bibliographic databases, grey literature sources were searched through OpenAIRE, Open Dissertations, BDTD/CAPES, ProQuest™ Dissertations & Theses Citation Index, and Google Scholar in order to identify relevant materials not indexed in conventional journals. This combined strategy was designed to maximize sensitivity and ensure broad identification of evidence on AI-based chatbots in nursing education.
The final searches across all sources, including bibliographic databases and grey literature, were completed on 13 October 2025. This date was considered the definitive search date for the purposes of this review. The complete search strategies for each database are provided in Table S2 in the Supplementary Material.
The Google Scholar search was performed using a structured query with the restrictive operator allintitle in order to increase retrieval specificity and prioritize studies explicitly focused on AI-based chatbots in nursing education. This approach was intentionally adopted to enhance alignment with the scope of the review and reduce the retrieval of irrelevant records, which is consistent with recommended practices for improving precision in Google Scholar searches on emerging topics.
All records retrieved from Google Scholar were saved within the platform and subsequently exported using the built-in citation export function in RefMan (RIS) format. These exported records were then imported into the Rayyan web platform (Rayyan Systems Inc., Cambridge, MA, USA), available at https://www.rayyan.ai (accessed on 20 October 2025) [18], where they were combined with records retrieved from other grey literature sources and bibliographic databases.

2.3. Eligibility Criteria

Eligibility criteria were defined using the JBI PCC framework. The Population comprised nursing students and professionals; the Concept focused on AI-based chatbots as educational tools; and the Context included teaching–learning processes and professional training in nursing.
Studies were included if they analyzed the use of AI-based chatbots in formal educational contexts, such as undergraduate and postgraduate programs, as well as in non-formal contexts, including training courses, continuing education, and professional development programs. Research focused exclusively on AI applications in clinical care, management, or diagnostic contexts, without a direct relationship to teaching–learning processes, was excluded.
To ensure a comprehensive mapping of the literature, this review included a wide range of empirical evidence, such as quantitative, qualitative, and mixed-methods studies encompassing experimental, quasi-experimental, cross-sectional, developmental, implementation, and case study designs, as well as grey literature sources reporting original empirical data from theses and dissertations. Only studies that explicitly addressed the use of AI-based chatbots in teaching, learning, or professional development within nursing education contexts were considered. Secondary research articles (e.g., systematic or narrative reviews), conceptual or theoretical articles, expert commentaries, discussion papers, consensus documents, educational reference materials, editorials, and letters to the editor were excluded.
The registered protocol initially allowed the inclusion of review studies; however, during the review process, the eligibility criteria were refined to include only primary studies in order to directly map original evidence and avoid duplication of synthesized findings. This modification did not affect the review objectives or overall methodological approach. Protocol deviations were transparently reported in accordance with PRISMA-ScR recommendations [11,16], ensuring methodological transparency and consistency.
Sources published in any language and from any year were considered, aiming to provide a complete mapping of relevant evidence. The review team possesses proficiency in English, Spanish, and Portuguese, allowing direct evaluation of studies published in these languages. For articles published in other languages, translations were arranged as needed to reduce language bias and ensure inclusion.

2.4. Evidence Screening and Study Selection

The study selection process was conducted in structured and sequential phases to ensure methodological rigor and transparency. Prior to full screening, a calibration exercise was performed to refine the application of the eligibility criteria and align reviewers’ interpretations.
Following the execution of the search strategies, all retrieved records were imported into the Rayyan web platform [18]. Duplicate records were automatically identified by the platform and subsequently verified and removed manually by the reviewers. The remaining records underwent independent title and abstract screening according to the predefined eligibility criteria.
Records considered potentially eligible were exported to Zotero software (v8.0.3; Corporation for Digital Scholarship, Vienna, VA, USA), which was used to retrieve, manage, and organize full-text reports for detailed assessment. No arbitrary numerical limits were applied to the Google Scholar search, and all retrieved records were screened.
Full-text reports were obtained through institutional access or, when necessary, by contacting the corresponding authors. Two reviewers independently assessed each full-text report based on the predefined PCC criteria. Discrepancies at any stage were resolved through discussion and consensus, with consultation of a third reviewer when required to ensure consistency in the final decision. Reference lists of included studies were also manually screened to identify additional relevant publications.
Inter-rater agreement for full-text eligibility assessment was calculated using Cohen’s kappa coefficient. Agreement was almost perfect according to conventional benchmarks (κ = 0.82), indicating high consistency between reviewers prior to consensus resolution.
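Cohen’s kappa, as reported above, adjusts raw agreement for the agreement expected by chance. A minimal sketch of the calculation follows; the two reviewers’ include/exclude decisions are hypothetical illustrations, not the review’s actual screening data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical decisions:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: sum over categories of the product of marginal proportions
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical full-text decisions by two independent reviewers
a = ["inc", "inc", "exc", "exc", "inc", "exc", "exc", "inc", "exc", "exc"]
b = ["inc", "inc", "exc", "exc", "exc", "exc", "exc", "inc", "exc", "exc"]
print(round(cohens_kappa(a, b), 2))  # → 0.78
```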

2.5. Data Extraction and Organization

Data extraction was conducted using a structured spreadsheet developed specifically for this review to ensure consistency and transparency. The extracted variables included publication year, country of origin, study design, characteristics of the AI-based chatbot (e.g., rule-based or generative), educational context, target population, implementation setting, pedagogical strategy, and reported educational outcomes.
The classification of chatbot types, educational applications, pedagogical strategies, and outcomes followed a combined inductive and deductive approach. Initial categories were informed by existing educational and technological frameworks and were iteratively refined during the data extraction process to reflect patterns observed across the included studies. The categories were not mutually exclusive, as chatbot implementations frequently encompassed multiple functions and educational purposes.
Explicit decision rules were applied to guide category assignment. Studies were classified into all relevant categories when sufficient information was provided in the methods, intervention description, or results sections. No forced single-category assignment was applied. For example, when a chatbot was used both as a learning support tool and as a virtual tutor, the study was assigned to both categories. When classification information was unclear or insufficient, categorization was based solely on explicitly reported data, and no assumptions were made.
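The non-exclusive categorization rule described above can be sketched as a mapping from each study to a set of categories; the category labels and the example study below are illustrative assumptions, not extracted data:

```python
# Minimal sketch of non-mutually-exclusive category assignment.
# Category labels are hypothetical placeholders for the review's coding scheme.
CATEGORIES = {"learning_support", "virtual_tutor", "simulation", "assessment"}

def assign_categories(reported_functions):
    """Assign a study to every category explicitly supported by its report.
    When information is unclear, nothing is assumed: unrecognized or
    unreported functions simply contribute no category."""
    return {f for f in reported_functions if f in CATEGORIES}

# A study used both as a learning support tool and as a virtual tutor
# is assigned to both categories, with no forced single-category choice.
study = {"id": "example-study", "reported": ["learning_support", "virtual_tutor"]}
print(sorted(assign_categories(study["reported"])))  # → ['learning_support', 'virtual_tutor']
```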
Data extraction and categorization were performed independently by two reviewers. Discrepancies were resolved through discussion and consensus to ensure consistency and methodological rigor. A descriptive synthesis was subsequently undertaken to summarize study characteristics, technological approaches, and educational implementation trends, consistent with the objectives of a scoping review.

2.6. Data Analysis and Synthesis

Data analysis was conducted by the reviewers involved in the previous stages, following an approach compatible with the objectives of a scoping review. Quantitative data were analyzed using descriptive statistics, while qualitative data were synthesized using content analysis.
Included publications were grouped into analytical categories according to how AI was applied to support nursing education. Considering the methodological, conceptual, and outcome diversity of the included studies, meta-analysis was not feasible, which is consistent with the exploratory and descriptive nature of this type of review [11].
The results are presented using a structured descriptive approach supported by summary tables and figures. This format was selected to improve readability and facilitate synthesis, given the heterogeneity of study designs, chatbot technologies, educational settings, and reported outcomes.

3. Results


3.1. Study Selection

The database searches identified 2957 records. After removing 1364 duplicate records, 1593 records were screened by title and abstract, of which 1281 were excluded. A total of 312 reports were sought for retrieval and assessed for full-text eligibility, of which 262 were excluded for not meeting the eligibility criteria, resulting in the inclusion of 50 studies from bibliographic databases. The reasons for exclusion at the full-text stage were systematically recorded and categorized in accordance with the predefined PCC eligibility framework. The most frequent reasons included the absence of a direct focus on nursing education contexts, lack of implementation of chatbot-based AI interventions, or the presentation of secondary or non-empirical publications. Additional exclusions involved studies primarily addressing clinical applications without an explicit educational component, protocol-only reports without results, and publications with insufficient methodological detail to allow reliable categorization. These decisions were applied consistently across reviewers following independent assessment and consensus procedures to ensure methodological coherence and transparency.
In addition, 1014 records were identified through other sources, including grey literature searches. After screening 891 records by title and abstract, 842 were excluded. The remaining 49 reports were assessed for full-text eligibility, and 33 were excluded, resulting in the inclusion of 16 studies from grey literature sources.
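The selection counts reported above for the two streams reconcile arithmetically, which can be verified with a short sketch using only the figures stated in the text:

```python
# Bibliographic databases stream
identified_db = 2957
duplicates = 1364
screened_db = identified_db - duplicates                  # 1593 screened by title/abstract
fulltext_db = screened_db - 1281                          # 312 sought for full-text review
included_db = fulltext_db - 262                           # 50 included

# Grey literature / other sources stream
screened_grey = 891
fulltext_grey = screened_grey - 842                       # 49 assessed at full text
included_grey = fulltext_grey - 33                        # 16 included

total_included = included_db + included_grey
print(total_included)  # → 66
```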
Overall, 66 studies met all eligibility criteria and were included in the final synthesis [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84]. The study selection process is presented in Figure 1, in accordance with the PRISMA-ScR guidelines [15].
A detailed numerical reconciliation of the study selection process, including records identified, duplicates removed, screened records, full-text assessments, and final inclusions across bibliographic databases and grey literature sources, is provided in Supplementary Material (Table S4). In addition, Table S4 presents the categorized reasons for full-text exclusion to ensure full methodological transparency and auditability of the selection process, in accordance with PRISMA 2020 reporting recommendations.

3.2. Study Characteristics

The characteristics of the included publications are summarized in Table 1, with the full results presented in the corresponding data extraction table (see Supplementary Material, Table S3).
The 66 included studies were published between 2019 and 2025, with a marked increase from 2023 onward. Most publications occurred in 2025 (n = 43; 65.2%) [19,20,21], followed by 2024 (n = 12; 18.2%) [22,23] and 2023 (n = 6; 9.1%) [24,25]. Earlier years contributed only a small number of studies, including three publications in 2022 [26,27] and isolated studies published in 2021 and 2019 [28,29], reflecting the recent and rapidly expanding nature of research on AI-based chatbots in nursing education. Of the 66 included studies, five were preprints that had not yet undergone peer review at the time of data extraction. These were retained to ensure comprehensive mapping of this rapidly evolving field.
Geographically, research activity was concentrated in Asia (n = 42; 63.6%) [30], particularly in China [31], Taiwan [32,33], and South Korea [19,30]. North America contributed seven studies, all from the United States [34,35], while Europe [26,36], Africa, South America, and Oceania were less represented [37]. A small proportion of studies involved international collaborations [38,39], indicating emerging global engagement but an uneven regional distribution.
Methodologically, quasi-experimental designs predominated (n = 25; 37.9%) [23,27,40], followed by qualitative and cross-sectional approaches. Randomized controlled trials were comparatively scarce, and several studies adopted developmental, methodological, or quality improvement designs. Most investigations targeted undergraduate nursing students, with fewer studies addressing postgraduate or continuing professional education. In terms of educational implementation, chatbots were primarily used to support learning-focused activities, with a smaller number integrating teaching and assessment functions.
Implementation settings were predominantly academic (n = 55; 83.3%) [23,27,41], and delivery formats varied across online, blended, simulation-based, and classroom-based models. Intervention duration ranged from single-session applications to multi-week or course-embedded designs, although reporting of duration was inconsistent across some studies.

3.3. Technological and Educational Applications of AI Chatbots

Across the included studies, large language model (LLM)-based generative systems predominated [42,43], with ChatGPT representing the most frequently reported tool [23,27,44]. Additional generative AI applications were described in a smaller subset of studies [45,46], while earlier implementations more commonly relied on rule-based or knowledge-based architectures [25,28]. A limited number of studies employed adaptive non-generative systems or AI-driven virtual patient simulations [33], reflecting technological heterogeneity across publication years.
Chatbots were primarily deployed through web-based interfaces, including direct access to LLM platforms and web-integrated educational systems [23,27,44], with fewer implementations embedded in mobile applications, institutional learning management systems, or clinical simulation environments [28,33,40]. This distribution underscores the rapid adoption of accessible generative AI platforms within higher education settings.
From a pedagogical perspective, AI-based chatbots were most frequently positioned as supplementary learning support tools. Common applications included self-directed study assistance, clarification of academic content, scaffolded tutoring, and guided feedback [23,27,41]. More advanced integrations involved clinical case discussions, scenario-based reasoning, and virtual patient simulations [28,33], targeting higher-order competencies such as clinical judgment, communication skills, and decision-making.
Curricular integration spanned foundational nursing knowledge, specialty areas, and simulation-based learning contexts. Improvements in knowledge acquisition and short-term academic performance were the most consistently reported outcomes [23,27,40], while gains in clinical reasoning were more frequently associated with case-oriented and simulation-based applications [28,33]. At the affective level, increased engagement, motivation, and perceived usefulness were recurrent findings [23,41,45], although concerns regarding reliability, trust in AI-generated outputs, and academic integrity were also reported [44,47].
Overall, AI-based chatbots have largely been adopted as complementary tools within existing curricular structures rather than as fully integrated pedagogical systems. Learning support and tutoring remain the dominant applications, whereas simulation-based and reasoning-oriented implementations appear more closely aligned with the development of advanced clinical competencies.

3.4. Educational Applications and Outcomes

The included studies reported the application of AI-based chatbots across multiple domains of nursing education, frequently addressing more than one curricular area within the same intervention. Chatbots were most commonly applied to general nursing knowledge and core curriculum topics (n = 15; 22.7%) [23,27], followed by applications targeting clinical reasoning and the nursing process (n = 10; 15.2%) [33,40], as well as clinical simulation and case-based learning (n = 9; 13.6%) [28,33]. Additional applications encompassed specialty nursing areas such as pediatrics, mental health, and critical care (n = 7; 10.6%) [23,41]; maternal and obstetric nursing (n = 6; 9.1%) [33]; communication skills and clinical history-taking (n = 6; 9.1%) [25,28]; and academic writing and research skills (n = 6; 9.1%) [44,45]. Less frequently, chatbots supported medical terminology acquisition (n = 4; 6.1%) [27] and educational technology or AI ethics content (n = 3; 4.5%) [47].
Pedagogically, integration strategies were predominantly pragmatic and functional. Most studies embedded chatbots as supportive tools within existing teaching and learning processes rather than as components of formally articulated educational frameworks [23,27,44]. Learning-centered strategies predominated, particularly self-directed study support, clarification of doubts, content reinforcement, and the provision of immediate feedback [23,27,41]. A subset of studies incorporated chatbots into virtual tutoring, guided case discussions, and formative assessment activities [44,45], while more advanced implementations involved clinical case simulations and virtual patient scenarios designed to strengthen clinical reasoning and decision-making competencies [28,33]. Although these approaches align conceptually with case-based and simulation-based pedagogies, explicit theoretical frameworks were rarely reported.
Reported outcomes spanned cognitive, affective, and behavioral domains [23,27]. Improvements in knowledge acquisition and learning performance were the most frequently documented outcomes (n = 18) [23,27,44], particularly in quasi-experimental and controlled studies. Gains in skills and competency development (n = 14) [33,40], as well as improvements in clinical reasoning and critical thinking (n = 9), were more commonly associated with simulation-based and case-oriented applications [28,33]. At the affective level, chatbot use was associated with increased engagement, motivation, and self-directed learning (n = 9) [23,41], along with positive perceptions of usefulness and accessibility (n = 8) [44,45]. However, variability in trust toward AI-generated outputs and concerns regarding reliability and academic integrity were also reported [44,47].
Table 2 summarizes the main application areas and their associated educational outcomes.
Educational domains were coded as non-mutually exclusive categories; therefore, individual studies could contribute to more than one application area.
Figure 2 presents a conceptual synthesis integrating the reported educational benefits and implementation challenges identified across the included studies.

3.5. Implementation Challenges and Barriers

The integration of AI-based chatbots into nursing education revealed recurring challenges across technological, pedagogical, ethical, and organizational dimensions [44,47]. Technical limitations included restricted functionality, system instability, and the need for ongoing technical support during implementation [27,45,46]. Concerns regarding the accuracy and reliability of AI-generated information were frequently emphasized, particularly given the implications for patient safety in health education contexts [25,44,47]. Several studies highlighted the importance of continuous content validation and expert supervision to ensure safe and pedagogically appropriate use [23,40].
Pedagogical challenges were often linked to limited curricular integration and insufficient educator preparation. The absence of clearly articulated instructional frameworks contributed to superficial or supplementary adoption rather than systematic integration [23,27,40]. Additionally, concerns about potential overreliance on chatbots and reductions in independent critical thinking were reported, particularly when chatbot use occurred without structured pedagogical guidance [41,44,45]. Discussions surrounding academic integrity, authorship, and appropriate use in assessment contexts were also identified [44,47].
Ethical and legal considerations—including data privacy, confidentiality, and trust in AI-generated responses—were reported across several studies [25,47]. Many interventions were conducted within single institutions, involved small sample sizes, or had short durations, thereby limiting generalizability and long-term inferences regarding educational impact [33,40,41]. Furthermore, some chatbot applications remained at pilot or early implementation stages, lacking robust empirical validation or real-world testing within nursing education programs [28,46].

4. Discussion

This scoping review mapped 66 studies on the use of AI-based chatbots and related systems in nursing education, revealing a marked increase in publications from 2023 onward and a strong concentration in recent years. This temporal pattern reflects the rapid diffusion of generative AI in academic environments and its pragmatic incorporation into educational practice, largely driven by the accessibility of large language model-based tools such as ChatGPT. Similar trends have been observed across broader educational contexts, where the adoption of generative AI has accelerated pedagogical experimentation and research production [23,31].
Overall, the included studies consistently reported educational benefits, particularly improvements in knowledge acquisition, academic performance, skills development, and clinical reasoning. These outcomes were more frequently demonstrated in quasi-experimental and controlled studies, which reported measurable gains in learning performance and simulated clinical tasks, while qualitative and cross-sectional studies provided complementary insights into student engagement, acceptance, and perceived usefulness [19,20,21,22]. Collectively, these findings support the role of AI-based chatbots as effective supplementary tools within the nursing teaching–learning process, particularly when aligned with pedagogical objectives and intentionally integrated into instructional design.
Most implementations positioned chatbots as auxiliary resources to support studying, clarify doubts, and assist with academic tasks, contributing to increased learner autonomy, efficiency, and motivation [23,30,37]. This pattern of use may enable educators to devote greater attention to higher-order pedagogical activities, including clinical discussion, reflective supervision, and formative assessment, thereby reinforcing the central role of faculty mediation. More advanced applications, such as virtual tutors, AI-generated clinical cases, and virtual patient simulations, although less frequently reported, were more directly associated with the development of applied competencies, including clinical communication, decision-making, and diagnostic reasoning [26,28,35]. These findings highlight that the educational value of chatbots depends not solely on technological capability but on their integration within structured pedagogical strategies and supervised learning environments.
The integration of chatbots into simulation-based learning environments further demonstrated the potential to enhance realism, interactivity, and individualized feedback, particularly during structured phases such as clinical case analysis and guided reflection. These findings align with established simulation-based learning literature, which emphasizes the importance of instructional structure and guided debriefing in promoting clinical competence development [85,86]. However, concerns related to response accuracy, clinical realism, and the need for expert validation indicate that AI should be implemented as a supportive component within instructional design rather than as a replacement for human facilitation [19,33].
From a theoretical perspective, these findings can be interpreted through constructivist learning theory and self-regulated learning models, in which learners develop knowledge through guided interaction, feedback, and reflection [87]. AI-based chatbots may contribute to these processes by providing accessible, immediate, and adaptive feedback, thereby supporting metacognitive engagement and autonomous learning.
A cross-cutting finding identified across the included studies relates to the performance–trust paradox. Although chatbots demonstrated adequate performance in specific educational tasks, students and educators frequently reported lower levels of trust in AI-generated outputs [22]. This discrepancy reflects broader challenges in human–AI interaction, where perceived reliability, transparency, and explainability influence trust calibration and user acceptance [88]. In nursing education, these findings emphasize the importance of ensuring content accuracy, promoting critical appraisal skills, and maintaining appropriate pedagogical supervision, particularly in contexts involving clinical reasoning.
Ethical and academic integrity considerations also emerged as central themes. Studies reported concerns related to plagiarism, overreliance on AI tools, unclear authorship attribution, and the absence of consistent institutional policies [44,47,48]. Existing literature suggests that effective management of these risks requires comprehensive institutional strategies, including clear guidelines, transparent disclosure of AI use, and assessment designs aligned with authentic competencies and critical reasoning [89]. These considerations are particularly relevant in nursing education, where professional responsibility, ethical conduct, and patient safety constitute core educational outcomes.
Beyond pedagogical considerations, the integration of AI-based chatbots in nursing education requires alignment with institutional governance and data protection frameworks. Regulatory instruments such as the General Data Protection Regulation (GDPR) and the European Union Artificial Intelligence Act emphasize transparency, accountability, and human oversight in AI deployment [90,91]. These frameworks highlight the importance of ensuring lawful, responsible, and ethically grounded implementation of AI technologies, particularly in domains closely linked to clinical practice and public trust [92,93].
Taken together, the findings of this scoping review indicate that AI-based chatbots have substantial potential to support nursing education when integrated as complementary tools within structured pedagogical frameworks. Their educational value depends not only on technological capabilities but also on appropriate instructional design, faculty supervision, and institutional governance, reinforcing the importance of aligning technological innovation with established educational principles.

4.1. Practical Implications and Challenges

The integration of chatbots and large language models into nursing education has occurred predominantly through the pragmatic adoption of readily available tools, often preceding formal curricular integration. Evidence suggests that educational outcomes are more favorable when chatbots are used as supplementary tools rather than as replacements for teaching, particularly when accompanied by active faculty supervision, especially in activities involving clinical reasoning and decision-making. This approach aligns with constructivist learning models that emphasize guided autonomy and metacognitive development [87].
A major challenge concerns the reliability and safety of AI-generated outputs. Although chatbots have demonstrated adequate performance in specific tasks, variability in response accuracy and perceived reliability highlights the need for verification protocols, clear communication of AI limitations, and the development of learners’ critical appraisal skills. Aligning AI-generated content with clinical guidelines, institutional protocols, and evidence-based standards is essential to ensure pedagogical validity and patient safety.
Ethical and academic integrity considerations also require structured institutional responses. The widespread availability of AI tools necessitates clear policies regarding acceptable use, transparent disclosure, and assessment strategies that prioritize reasoning, clinical judgment, and reflective practice [89]. Additionally, data protection, privacy, and governance considerations must be addressed, particularly when external platforms are used. Ensuring compliance with regulatory requirements and establishing institutional oversight mechanisms are essential to support the responsible and ethical implementation of AI technologies in nursing education.
Educators should integrate AI-based chatbots through structured instructional design aligned with defined learning objectives and clinical competencies. Faculty supervision, clear pedagogical framing, and appropriate integration into teaching strategies are essential to maximize educational benefits while mitigating potential risks.

4.2. Future Directions

Despite the rapid growth of evidence in this field, several gaps remain and should guide future research. First, the predominance of quasi-experimental designs and self-reported outcomes highlights the need for multicenter studies with longitudinal follow-up and the use of objective performance measures, such as OSCE stations, standardized rubrics, and simulation-based assessments. Anchoring evaluations in established educational evaluation models may help distinguish short-term learning gains from sustained changes in professional performance [94].
Second, the adoption and implementation of AI-based chatbots should be examined through the lens of theoretical models of technology acceptance and use. Frameworks such as the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and the Unified Theory of Acceptance and Use of Technology (UTAUT) can help elucidate determinants related to perceived usefulness, ease of use, social norms, and behavioral intentions [95,96,97,98]. These models may be complemented by implementation science frameworks, such as the Consolidated Framework for Implementation Research (CFIR), to identify organizational barriers and facilitators and to support the sustainability of educational interventions involving AI [99].
Third, future studies should systematically compare different technological and pedagogical approaches, including rule-based chatbots versus LLM-based systems, prompting strategies, integration with simulation and virtual reality, and differential effects across educational levels (undergraduate, postgraduate, and continuing professional education). Greater alignment with emerging reporting and validation guidelines for AI-based interventions, such as CONSORT-AI and SPIRIT-AI, is also recommended to enhance transparency, reproducibility, and methodological rigor [100].
Finally, future research should explicitly address issues of equity, accessibility, and contextualization, including linguistic diversity, digital access, institutional resources, and cultural appropriateness, in order to prevent the integration of generative AI from exacerbating existing educational inequalities.

4.3. Implications for Clinical Practice Readiness

Beyond their immediate educational applications, AI-based chatbots may also contribute to the development of clinical readiness among nursing students. Evidence from the included studies suggests that chatbot-supported learning environments can foster clinical reasoning through scenario-based problem solving and structured decision-making exercises. For example, one study demonstrated improvements in students’ ability to interpret clinical situations and prioritize nursing actions within simulated contexts [30]. Similarly, another study highlighted the potential of AI-driven chatbot interactions to support reflective clinical judgment and guided reasoning processes [40].
In addition to supporting decision-making skills, AI-based chatbots may serve as tools for developing communication and patient education competencies. Simulated dialogue with virtual patients has been shown to allow learners to practice explaining health-related information and adapting communication strategies to diverse scenarios [33]. Such applications align with competency-based nursing education approaches that emphasize not only knowledge acquisition but also critical thinking, communication, and professional preparedness.
However, it is important to distinguish between the pedagogical use of chatbots as learning support tools and their potential deployment as clinical decision-support systems. Although several studies reported increased engagement, confidence, and perceived competence, many relied on short-term or self-reported outcomes. Therefore, further longitudinal and performance-based research is needed to determine whether chatbot-assisted education translates into measurable improvements in real-world clinical practice.

4.4. Limitations

The limitations of this review operate at two interrelated levels.
First, limitations inherent to the included evidence must be acknowledged. Many studies were conducted within single institutions or specific educational contexts, frequently involved small sample sizes, had short intervention durations, and relied heavily on self-reported outcomes. The substantial methodological and conceptual heterogeneity across study designs, technological architectures, pedagogical strategies, and outcome measures limited direct comparability and precluded conclusions regarding sustained or transferable effects on clinical competence development. In addition, the use of the allintitle operator in Google Scholar, while increasing search specificity, may have reduced sensitivity and resulted in the omission of potentially relevant studies.
Second, as characteristic of scoping review methodology, the primary aim of this study was to map the breadth, nature, and distribution of evidence rather than to evaluate effectiveness through quantitative synthesis or causal inference. Accordingly, no formal critical appraisal of methodological quality or risk of bias was undertaken, consistent with established methodological guidance for scoping reviews [11,17,66,101]. While this approach prioritizes comprehensive coverage, it limits the ability to assess the internal validity of individual studies or to weigh findings according to methodological rigor.
The inclusion of preprint studies represents an additional consideration. Although these reports had not undergone formal peer review at the time of data extraction, they were retained to ensure comprehensive coverage of this rapidly evolving field. Importantly, a sensitivity analysis excluding preprints did not materially alter the overall thematic distribution of technological applications, pedagogical strategies, or reported outcomes, suggesting that their inclusion did not substantively influence the principal conclusions.
The rapid evolution of generative artificial intelligence technologies introduces a structural risk of partial obsolescence. New models, deployment frameworks, governance regulations, and empirical findings continue to emerge at an accelerated pace. Consequently, periodic updates and future systematic reviews incorporating formal quality appraisal and longitudinal outcome assessment will be necessary to consolidate and extend the present findings. Furthermore, publication bias and selective reporting cannot be excluded, particularly given the novelty and positive framing frequently associated with generative AI innovations.

5. Conclusions

This scoping review provides a comprehensive and up-to-date mapping of the rapidly expanding integration of AI-based chatbots in nursing education, capturing the post-2023 acceleration driven by generative large language model technologies. Beyond documenting growth trends, the review synthesizes how technological architectures, pedagogical strategies, and governance considerations intersect within nursing education contexts.
Across diverse settings, AI-based chatbots have primarily functioned as pedagogically supportive tools rather than autonomous instructional systems. Their educational contribution appears most meaningful when embedded within structured learning designs, aligned with curricular objectives, and mediated through active faculty supervision. Applications involving simulation, clinical reasoning, and guided tutoring demonstrate particular promise for fostering higher-order competencies, although their effectiveness remains contingent upon instructional coherence and contextual integration.
At the same time, substantial methodological heterogeneity, limited theoretical grounding, and the predominance of short-term and self-reported outcomes constrain the strength of current inferences. Ethical, governance, and accountability challenges—particularly those related to trust calibration, academic integrity, data protection, and institutional oversight—emerge as central determinants of responsible implementation.
Collectively, the evidence suggests that the educational value of AI-based chatbots in nursing does not reside solely in technological capability but in their integration within ethically governed, pedagogically intentional, and institutionally supported frameworks. Future research should advance beyond exploratory designs toward longitudinal, multicenter, and performance-based evaluations aligned with standardized reporting and regulatory guidance.
By consolidating dispersed evidence and highlighting structural, pedagogical, and governance dimensions, this review contributes a foundation for more theoretically grounded and policy-informed integration of AI technologies in nursing education. Ensuring that AI adoption strengthens professional standards, safeguards patient safety, and promotes equitable access will be essential for translating innovation into sustainable educational advancement.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nursrep16030087/s1, Table S1: PRISMA-ScR Checklist; Table S2: Search Strategy; Table S3: Extracted Data; Table S4: Numerical Reconciliation of Study Selection Process and Reasons for Full-Text Exclusion. Ref. [102] is cited in the Supplementary Materials file.

Author Contributions

Conceptualization, Methodology, Investigation, Data Curation, Formal Analysis, Project Administration, Visualization, Writing—Original Draft: F.F. Conceptualization, Methodology, Investigation, Formal Analysis, Data Curation, Visualization, Writing—Review & Editing: R.E. Formal Analysis, Data Curation, Visualization, Writing—Review & Editing: S.B.S.d.L. Writing—Review & Editing: J.A. Writing—Review & Editing: C.P.-V. Conceptualization, Methodology, Investigation, Formal Analysis, Data Curation, Supervision, Visualization, Writing—Review & Editing, Funding acquisition: P.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by FCT—Fundação para a Ciência e Tecnologia, I.P., under project reference UID/04279/2025 (DOI: https://doi.org/10.54499/UID/04279/2025)—Centro de Investigação Interdisciplinar em Saúde—and by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data supporting the findings of this study are available through open-access repositories. The review protocol, search strategies, and Supplementary Materials are published in the Athena Health & Research Journal and registered in the Open Science Framework (OSF). The published protocol is accessible at https://doi.org/10.62741/ahrj.v3iSuppl.124, and the OSF registration can be found at https://doi.org/10.17605/OSF.IO/DBYA7. No new primary data were generated for this scoping review.

Public Involvement Statement

There was no public involvement in any aspect of this research.

Guidelines and Standards Statement

This scoping review was conducted in accordance with the JBI methodology for scoping reviews [11], and this manuscript was reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guideline [16].

Use of Artificial Intelligence

AI or AI-assisted tools were not used in drafting any aspect of this manuscript.

Acknowledgments

The authors would like to acknowledge the support of the Catholic University of Portugal, particularly the Centre for Interdisciplinary Research in Health (CIIS) and the Wounds Research Lab, for providing an intellectually stimulating research environment and institutional support. The authors also wish to acknowledge the Federal University of Santa Maria for its collaboration and academic support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Helm, J.M.; Swiergosz, A.M.; Haeberle, H.S.; Karnuta, J.M.; Schaffer, J.L.; Krebs, V.E.; Spitzer, A.I.; Ramkumar, P.N. Machine learning and artificial intelligence: Definitions, applications, and future directions. Curr. Rev. Musculoskelet. Med. 2020, 13, 69–76. [Google Scholar] [CrossRef]
  2. Wang, P. On defining artificial intelligence. J. Artif. Gen. Intell. 2019, 10, 1–37. [Google Scholar] [CrossRef]
  3. Jakhar, D.; Kaur, I. Artificial intelligence, machine learning and deep learning: Definitions and differences. Clin. Exp. Dermatol. 2020, 45, 131–132. [Google Scholar] [CrossRef]
  4. Pham, T.D.; Karunaratne, N.; Exintaris, B.; Liu, D.; Lay, T.; Yuriev, E.; Lim, A. The impact of generative AI on health professional education: A systematic review in the context of student learning. Med. Educ. 2025, 59, 1280–1289. [Google Scholar] [CrossRef] [PubMed]
  5. Subillaga, O.; Coulter, A.P.; Tashjian, D.; Seymour, N.; Hubbs, D. Artificial intelligence-assisted narratives: Analysis of surgical residency personal statements. J. Surg. Educ. 2025, 82, 103566. [Google Scholar] [CrossRef]
  6. Sengar, S.S.; Hasan, A.B.; Kumar, S.; Carroll, F. Generative artificial intelligence: A systematic review and applications. Multimed. Tools Appl. 2025, 84, 23661–23700. [Google Scholar] [CrossRef]
  7. Labrague, L.J.; Sabei, S.A. Integration of AI-powered chatbots in nursing education: A scoping review of their utilization, outcomes, and challenges. Teach. Learn. Nurs. 2025, 20, e285–e293. [Google Scholar] [CrossRef]
  8. Shaw, K.; Henning, M.A.; Webster, C.S. Artificial intelligence in medical education: A scoping review of the evidence for efficacy and future directions. Med. Sci. Educ. 2025, 35, 1803–1816. [Google Scholar] [CrossRef]
  9. Feigerlova, E.; Hani, H.; Hothersall-Davies, E. A systematic review of the impact of artificial intelligence on educational outcomes in health professions education. BMC Med. Educ. 2025, 25, 129. [Google Scholar] [CrossRef]
  10. World Health Organization. Digitalized Health Workforce Education: An Elicitation of Research Gaps and Selection of Case Studies; World Health Organization: Geneva, Switzerland, 2023. [Google Scholar]
  11. Peters, M.D.J.; Marnie, C.; Tricco, A.C.; Pollock, D.; Munn, Z.; Alexander, L.; McInerney, P.; Godfrey, C.M.; Khalil, H. Updated methodological guidance for the conduct of scoping reviews. JBI Evid. Synth. 2020, 18, 2119–2126. [Google Scholar] [CrossRef]
  12. Zhang, S.; Yang, X.; Zhang, L.; Liu, H.; Chen, X.; Yang, X.; Hu, Y.; Liu, Q.; He, Y. Exploring the role and potential of chatbots in learning from the perspective of nursing students: A systematic review of qualitative studies. Int. Nurs. Rev. 2025, 72, e70060. [Google Scholar] [CrossRef]
  13. Rodrigues, D.D.; Alves, J.; Ribeiro, L.; Pereira, R. Intelligent tutoring systems in nursing education: A scoping review protocol. Servir 2025, 2, e41879. [Google Scholar] [CrossRef]
  14. Fernandes, F.; Encarnação, R.; Lima, S.; Alves, P. Artificial intelligence chatbots in nursing education: A scoping review protocol. Athena Health Res. J. 2026, 3. [Google Scholar] [CrossRef]
  15. Pollock, D.; Peters, M.D.J.; Khalil, H.; McInerney, P.; Alexander, L.; Tricco, A.C.; Evans, C.; de Moraes, É.B.; Godfrey, C.M.; Pieper, D.; et al. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evid. Synth. 2023, 21, 520. [Google Scholar] [CrossRef]
  16. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  17. Levac, D.; Colquhoun, H.; O’Brien, K.K. Scoping studies: Advancing the methodology. Implement. Sci. 2010, 5, 69. [Google Scholar] [CrossRef] [PubMed]
  18. Ouzzani, M.; Hammady, H.; Fedorowicz, Z.; Elmagarmid, A. Rayyan—A web and mobile app for systematic reviews. Syst. Rev. 2016, 5, 210. [Google Scholar] [CrossRef] [PubMed]
  19. Park, S.-A.; Kim, H.Y. Development and effects of a scenario-based labor nursing simulation education program using an artificial intelligence tutor: A quasi-experimental study. Womens Health Nurs. 2025, 31, 143–154. [Google Scholar] [CrossRef] [PubMed]
  20. Wang, R.; Raman, A. Enhancing nursing education: An AI-powered chatbots for fostering engagement and higher-order thinking skills. Res. Sq. 2025. [Google Scholar] [CrossRef]
  21. Arkan, B.; Ordin, Y.; Yilmaz, D. Artificial intelligence applications in nursing education: A systematic and pedagogical perspective. Comput. Inform. Nurs. 2025, 43, 198–206. [Google Scholar] [CrossRef]
  22. Chang, C.Y.; Chen, Y.H.; Lin, Y.J. Nursing students’ perceptions of chatbot-assisted learning: A cross-sectional study. Nurse Educ. Today 2024, 128, 105874. [Google Scholar] [CrossRef]
  23. Ni, Z.; Peng, R.; Zheng, X.; Xie, P. Embracing the future: Integrating ChatGPT into China’s nursing education system. Int. J. Nurs. Sci. 2024, 11, 295–299. [Google Scholar] [CrossRef]
  24. Ahn, J.; Park, H.O. Development of a case-based nursing education program using generative artificial intelligence. J. Korean Acad. Soc. Nurs. Educ. 2023, 29, 234–246. [Google Scholar] [CrossRef]
  25. Chen, Y.; Lin, Q.; Chen, X.; Liu, T.; Ke, Q.; Yang, Q.; Guan, B.; Ming, W.K. Needs assessment for history-taking instruction using a chatbot for nursing students: A qualitative focus group study. Digit. Health 2023, 9, 20552076231185435. [Google Scholar] [CrossRef]
  26. Rodriguez-Arrastia, M.; Martinez-Ortigosa, A.; Ruiz-Gonzalez, C.; Ropero-Padilla, C.; Roman, P.; Sanchez-Labraca, N. Experiences and perceptions of final-year nursing students of using a chatbot in a simulated emergency situation: A qualitative study. J. Nurs. Manag. 2022, 30, 3874–3884. [Google Scholar] [CrossRef]
  27. Han, J.W.; Park, J.; Lee, H. Analysis of the effect of an artificial intelligence chatbot educational program on non-face-to-face classes: A quasi-experimental study. BMC Med. Educ. 2022, 22, 830. [Google Scholar] [CrossRef]
  28. Shorey, S.; Ang, E.; Yap, J.; Ng, E.D.; Lau, S.T.; Chui, C.K. A virtual counseling application using artificial intelligence for communication skills training in nursing education: Development study. J. Med. Internet Res. 2019, 21, e14658. [Google Scholar] [CrossRef]
  29. Chuang, Y.H.; Chen, Y.T.; Kuo, C.L. The design and application of a chatbot in clinical nursing education. J. Nurs. 2021, 68, 19–24. [Google Scholar] [CrossRef]
  30. Han, J.; Park, J.; Lee, H. Development and effects of a chatbot education program for self-directed learning in nursing students. BMC Med. Educ. 2025, 25, 825. [Google Scholar] [CrossRef] [PubMed]
  31. Topaz, M.; Peltonen, L.-M.; Michalowski, M.; Stiglic, G.; Ronquillo, C.; Pruinelli, L.; Song, J.; O’Connor, S.; Miyagawa, S.; Fukahori, H. The ChatGPT effect: Nursing education and generative artificial intelligence. J. Nurs. Educ. 2025, 64, e40–e43. [Google Scholar] [CrossRef] [PubMed]
  32. Chang, L.-C.; Wang, Y.-N.; Lin, H.-L.; Liao, L.-L. Registered nurses’ attitudes towards ChatGPT and self-directed learning: A cross-sectional study. J. Adv. Nurs. 2025, 81, 3811–3820. [Google Scholar] [CrossRef]
  33. Chen, P.-J. Effectiveness of integrating generative artificial intelligence with virtual reality for maternity communication simulation: A randomized controlled trial. Clin. Simul. Nurs. 2025, 105, 101786. [Google Scholar] [CrossRef]
  34. Dabney, B.W.; DeNotto, L.A. A qualitative exploration of student perceptions on the use of AI in an undergraduate nursing research course. J. Nurs. Educ. 2025, 64, 620–626. [Google Scholar] [CrossRef] [PubMed]
  35. Vaughn, J.; Ford, S.H.; Scott, M.; Jones, C.; Lewinski, A. Enhancing healthcare education: Leveraging ChatGPT for innovative simulation scenarios. Clin. Simul. Nurs. 2024, 87, 101487. [Google Scholar] [CrossRef]
  36. Laun, M.; Puderbach, L.; Hirt, K.; Wyss, E.L.; Friemert, D.; Hartmann, U.; Wolff, F. Chatbots in education: Outperforming students but perceived as less trustworthy. Contemp. Educ. Psychol. 2025, 81, 102373. [Google Scholar] [CrossRef]
  37. Elliott, M.; Williams, J.; Aldwikat, R.; Wong, P. Using ChatGPT to enhance student learning: A case study in a nursing curriculum. Teach. Learn. Nurs. 2025, 20, e309–e312. [Google Scholar] [CrossRef]
  38. Shokr, E.A. Integrating a knowledge-based artificial intelligence chatbot into nursing training programs: A comparative quasi-experimental study in Egypt and Saudi Arabia. BMC Nurs. 2025, 24, 1245. [Google Scholar] [CrossRef]
  39. Abou Hashish, E.A.; Alsayed, S.A.; Abdel Razek, N.M.F. Embracing AI in academia: A mixed methods study of nursing students’ and educators’ perspectives on using ChatGPT. PLoS ONE 2025, 20, e0327981. [Google Scholar] [CrossRef]
  40. Benfatah, M.; Elazizi, I.; Lamiri, A.; Belhaj, H.; Nejjari, C.; Youlyouz-Marfak, I. AI-assisted prebriefing to enhance simulation readiness in nursing education. Teach. Learn. Nurs. 2025, 21, e57–e63. [Google Scholar] [CrossRef]
  41. Gunawan, J.; Aungsuroch, Y.; Marzilli, C.; Nazliansyah; Chaerani, E.; Montayre, J. Artificial intelligence chatbot as perceived by nursing students: A qualitative study. SAGE Open 2024, 14, 21582440241303453. [Google Scholar] [CrossRef]
  42. Aslan, F. Postgraduate nursing students’ experiences with ChatGPT: A descriptive phenomenological study. J. Prof. Nurs. 2025, 59, 148–154. [Google Scholar] [CrossRef] [PubMed]
  43. Kang, S.R.; Kim, S.J.; Kang, K.A. Awareness of using chatbots and factors influencing usage intention among nursing students in South Korea: A descriptive study. Child Health Nurs. Res. 2023, 29, 290–299. [Google Scholar] [CrossRef]
  44. Kazley, A.S.; Andresen, C.; Mund, A.; Blankenship, C.; Segal, R. Is use of ChatGPT cheating? Students of health professions perceptions. Med. Teach. 2025, 47, 894–898. [Google Scholar] [CrossRef] [PubMed]
  45. Moskovich, L.; Rozani, V. Health professions students’ perceptions of ChatGPT in healthcare and education: A mixed-methods study. BMC Med. Educ. 2025, 25, 98. [Google Scholar] [CrossRef]
  46. Eltaybani, S.; Ali, H.F.M.; Abdelhalim, G.E. Exploring nurse educators’ and students’ use of large language models for academic purposes in a developing country. Nurse Educ. Pract. 2025, 87, 104502. [Google Scholar] [CrossRef] [PubMed]
  47. Durmuş Sarıkahya, S.; Özbay, Ö.; Torpuş, K.; Usta, G.; Çınar Özbay, S. The impact of ChatGPT on nursing education: A qualitative study based on the experiences of faculty members. Nurse Educ. Today 2025, 152, 106755. [Google Scholar] [CrossRef]
  48. Sultan, H.M.; Sam, B.J.; Pillai, R.R. Nursing students’ perceptions and ethical considerations of ChatGPT usage in nursing education: A cross-sectional study. Teach. Learn. Nurs. 2025, 20, e1197–e1206. [Google Scholar] [CrossRef]
  49. Olla, P.; Wodwaski, N.; Long, T. Beyond the bot: A dual-phase framework for evaluating AI chatbot simulations in nursing education. Nurs. Rep. 2025, 15, 280. [Google Scholar] [CrossRef]
  50. Sandoval Peña, J.M.; Macalupú Ipanaqué, J.V.; Rufino Sosa, M.; García Paz, E.S.; Morocho Ricalde, C.J. Chatbot as an artificial intelligence program in autonomous learning in nursing students. Migr. Lett. 2023, 20, 85–104. [Google Scholar] [CrossRef]
  51. Ahmed, F.R.; Rushdan, E.E.; Al-Yateem, N.; Almaazmi, A.N.; Subu, M.A.; Hijazi, H.; Abdelbasset, W.K.; Mottershead, R.; Ahmed, A.A.; Aburuz, M.E. AI in higher education: Nursing students’ perspectives on ChatGPT. Teach. Learn. Nurs. 2025, 20, e408–e413. [Google Scholar] [CrossRef]
  52. Benfatah, M.; Marfak, A.; Saad, E.; Hilali, A.; Nejjari, C.; Youlyouz-Marfak, I. Assessing the efficacy of ChatGPT as a virtual patient in nursing simulation training. Teach. Learn. Nurs. 2024, 19, e486–e493. [Google Scholar] [CrossRef]
  53. Reid, J.A. Building clinical simulations with ChatGPT in nursing education. J. Nurs. Educ. 2025, 64, e6–e7. [Google Scholar] [CrossRef]
  54. Ghaffari, R.; Ghaffari, F.; Mehrabi, M.; Sabery, M. Effectiveness of ChatGPT for clinical scenario generation. Adv. Biomed. Res. 2025, 14, 172. [Google Scholar]
  55. Makhlouf, E.; Alenezi, A.; Shokr, E.A. Effectiveness of designing a knowledge-based chatbot system. Nurse Educ. Today 2024, 137, 106159. [Google Scholar] [CrossRef]
  56. Ríos Gonzales, J.D.R.; Tomanguilla Reyna, J.T.; Vereau Amaya, E.A.; Vásquez Luján, I.G. Evaluation of the impact of ChatGPT on research skills. Int. J. Learn. Teach. Educ. Res. 2025, 24, 370–390. [Google Scholar] [CrossRef]
  57. Saleh, Z.T.; Rababa, M.; Elshatarat, R.A.; Alharbi, M.; Alhumaidi, B.N.; Al-Za’areer, M.S.; Jarrad, R.A.; Al Niarat, T.F.; Almagharbeh, W.T.; Al-Sayaghi, K.M.; et al. Faculty perceptions regarding AI chatbots in nursing education. BMC Nurs. 2025, 24, 440. [Google Scholar] [CrossRef] [PubMed]
  58. Çalık, A.; Özkul, D. Exploring the utility of ChatGPT in nursing care plan development. Lokman Hekim Health Sci. 2025, 5, 170–180. [Google Scholar] [CrossRef]
  59. Ma, Y.; Liu, T.; Qi, J.; Gan, Y.; Cheng, Q.; Wang, J.; Xiao, M. Facilitators and barriers of large language model adoption. J. Adv. Nurs. 2025, 81, 4856–4870. [Google Scholar] [CrossRef]
  60. Higashitsuji, A.; Otsuka, T.; Watanabe, K. Impact of ChatGPT on case creation efficiency. Teach. Learn. Nurs. 2025, 20, e159–e166. [Google Scholar] [CrossRef]
  61. Gonzalez-Garcia, A.; Bermejo-Martinez, D.; Lopez-Alonso, A.I.; Trevisson-Redondo, B.; Martín-Vázquez, C.; Perez-Gonzalez, S. Impact of ChatGPT usage on nursing education. Heliyon 2025, 11, e41559. [Google Scholar] [CrossRef]
  62. Bouriami, A.; Takhdat, K.; Barkatou, S.; Chiki, H.; Boussaa, S.; El Adib, A.R. Nurse educators’ use of ChatGPT. Educ. Med. 2025, 26, 101006. [Google Scholar] [CrossRef]
  63. Chan, V.C. Integrating generative artificial intelligence in writing courses. J. Prof. Nurs. 2025, 57, 85–91. [Google Scholar] [CrossRef]
  64. Aboulfotoh, M.A. Integration of Poe AI chatbot into medical vocabulary learning. Int. J. Res. Educ. Sci. 2025, 11, 554. [Google Scholar]
  65. Albikawi, Z.F.; Abuadas, M.H. Impact of ChatGPT utilization on psychiatric nursing education. Univers. J. Public Health 2025, 13, 403–412. [Google Scholar] [CrossRef]
  66. Wang, Y.F.; Hsu, M.H.; Wang, M.Y.F. Gamified mobile learning chatbot. Health Educ. J. 2025, 84, 174–188. [Google Scholar] [CrossRef]
  67. Karaçay, P. Nursing students’ experiences toward using ChatGPT. Teach. Learn. Nurs. 2025, 20, 104063. [Google Scholar] [CrossRef]
  68. Han, S.; Kang, H.S.; Gimber, P.; Lim, S. Nursing students’ perceptions of generative artificial intelligence. Nurs. Rep. 2025, 15, 68. [Google Scholar] [CrossRef]
  69. Chang, C.Y.; Hwang, G.J.; Gau, M.L. Mobile chatbot approach for nursing training. Br. J. Educ. Technol. 2022, 53, 171–188. [Google Scholar] [CrossRef]
  70. Kestel, S.; Calik, A.; Kus, M. Effect of chatbot-supported instruction on nursing students. J. Prof. Nurs. 2025, 60, 93–100. [Google Scholar] [CrossRef]
  71. Yin, J.; Hao, X.; Xing, G.; Xu, M. Effects of ChatGPT-driven blended teaching model. Nurse Educ. Pract. 2025, 88, 104545. [Google Scholar] [CrossRef] [PubMed]
  72. Shin, H.; De Gagne, J.C.; Kim, S.S.; Hong, M. The Impact of Artificial Intelligence-Assisted Learning on Nursing Students’ Ethical Decision-making and Clinical Reasoning in Pediatric Care: A Quasi-Experimental Study. Comput. Inform. Nurs. 2024, 42, 704–711. [Google Scholar] [CrossRef]
  73. Başaran, F.; Duru, P. Impact of Kahoot and ChatGPT educational technologies. Sex. Disabil. 2024, 42, 801–815. [Google Scholar] [CrossRef]
  74. Clodfelter, A.D. Improving Alarm Management Practices Wireless Bed Exit Alerts on Medical-Surgical Units. Comput. Inform. Nurs. 2025, 43, e01324. [Google Scholar] [CrossRef] [PubMed]
  75. Bumbach, M.D. Use of AI-powered ChatGPT for nursing education. J. Nurs. Educ. 2024, 63, 564–567. [Google Scholar] [CrossRef] [PubMed]
  76. Russell, R.G.; White, J.; Karns, A.; Rodriguez, K.; Jeffries, P.R.; Sengstack, P. Toward amplifying the good in nursing education: A quality improvement study on implementing artificial intelligence-based assistants in a learning system. Nurs. Outlook 2025, 73, 102483. [Google Scholar] [CrossRef]
  77. Sağlam, R.K.; Kalanlar, B. Use of ChatGPT: Perspectives of Graduate Students in Public Health Nursing. Nurse Educ. Pract. 2025, 88, 104585. [Google Scholar] [CrossRef] [PubMed]
  78. Salazar, C.F. Using cloud-based chatbot builder in education. Int. J. Eng. Trends Technol. 2023, 71, 301–314. [Google Scholar] [CrossRef]
  79. Khlaif, Z.N.; Salameh, N.; Ajouz, M.; Mousa, A.; Itmazi, J.; Alwawi, A.; Alkaissi, A. Using generative AI in nursing education. BMC Med. Educ. 2025, 25, 926. [Google Scholar] [CrossRef]
  80. Afonso, D.L.A. Uso da Inteligência Artificial em Chatbot para Apoio aos Estudantes na Área da Saúde; Universidade Federal de São Paulo: São Paulo, Brazil, 2024. [Google Scholar]
  81. Goktas, P.; Kucukkaya, A.; Karacay, P. Utilizing GPT-4.0 in nursing education. Teach. Learn. Nurs. 2024, 19, e358–e367. [Google Scholar] [CrossRef]
  82. Hsu, M.H.; Wang, Y.F.; Wang, M.Y.F. Mastering medical terminology with ChatGPT and Termbot. J. Educ. Comput. Res. 2024, 83, 352–358. [Google Scholar] [CrossRef]
  83. Kim, D.; Choi, Y.R.; Lee, Y.N.; Park, W.H.; Kwon, D.Y.; Chang, S.O. Towards web-based adaptive learning on the behavioral and psychological symptoms of dementia care for nursing staff of long-term care facilities: A quasi-experimental study. Res. Sq. 2023. [Google Scholar] [CrossRef]
  84. Mohamed, A.M.; Alanezi, N.A.; Darrag, Y.; Alrwuaili, N.S.; Alshamrani, R.A.H.; Berdida, D.J.E. The role of AI chatbots in nursing students’ autonomous learning in mastering medical vocabulary: A quasi-experimental study. Teach. Learn. Nurs. 2025, 20, e76–e85. [Google Scholar] [CrossRef]
  85. Jeffries, P.R. A framework for designing, implementing, and evaluating simulations used as teaching strategies in nursing. Nurs. Educ. Perspect. 2005, 26, 96–103. [Google Scholar] [PubMed]
  86. International Nursing Association for Clinical Simulation and Learning. Healthcare Simulation Standards of Best Practice™. Clin. Simul. Nurs. 2021, 58, 66. [Google Scholar] [CrossRef]
  87. Zimmerman, B.J. Becoming a self-regulated learner: An overview. Theory Pract. 2002, 41, 64–70. [Google Scholar] [CrossRef]
  88. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef]
  89. Bretag, T.; Harper, R.; Burton, M.; Ellis, C.; Newton, P.; van Haeringen, K.; Saddiqui, S.; Rozenberg, P. Contract cheating and assessment design: Exploring the relationship. Assess. Eval. High. Educ. 2019, 44, 676–691. [Google Scholar] [CrossRef]
  90. Hohmann, B.; Kollár, G. Reflections on the data protection compliance of AI systems under the EU AI Act. Cogent Soc. Sci. 2025, 11, 2560654. [Google Scholar] [CrossRef]
  91. Haynes, M.d.L. Governing at a distance: The EU AI Act and GDPR as pillars of global privacy and corporate governance. SSRN 2025. [Google Scholar] [CrossRef]
  92. Lau, P.L. The AI Act and data protection: The interplay between artificial intelligence and data: The AI Act and the GDPR. In The European Artificial Intelligence Act: Promises and Perils? Raposo, V.L., Ed.; Springer Nature: Cham, Switzerland, 2025; pp. 289–313. [Google Scholar]
  93. Finch, W.W.; Butt, M. Gaps in AI-compliant complementary governance frameworks’ suitability and structural asymmetries: A systematic review. J. Cybersecur. Priv. 2025, 5, 101. [Google Scholar] [CrossRef]
  94. Kirkpatrick, D.L.; Kirkpatrick, J.D. Evaluating Training Programs: The Four Levels, 3rd ed.; Berrett-Koehler: San Francisco, CA, USA, 2006. [Google Scholar]
  95. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  96. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  97. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  98. Bosnjak, M.; Ajzen, I.; Schmidt, P. The theory of planned behavior: Selected recent advances and applications. Eur. J. Psychol. 2020, 16, 352–356. [Google Scholar] [CrossRef]
  99. Damschroder, L.J.; Aron, D.C.; Keith, R.E.; Kirsh, S.R.; Alexander, J.A.; Lowery, J.C. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implement. Sci. 2009, 4, 50. [Google Scholar] [CrossRef]
  100. Cruz Rivera, S.; Liu, X.; Chan, A.-W.; Denniston, A.K.; Calvert, M.J. Guidelines for clinical trial protocols involving artificial intelligence: The SPIRIT-AI extension. Nat. Med. 2020, 26, 1351–1363. [Google Scholar] [CrossRef] [PubMed]
  101. Arksey, H.; O’Malley, L. Scoping studies: Towards a methodological framework. Int. J. Soc. Res. Methodol. 2005, 8, 19–32. [Google Scholar] [CrossRef]
  102. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef] [PubMed]
Figure 1. PRISMA 2020 flow diagram [16].
Figure 2. Synthesis of reported outcomes and challenges identified in studies on AI-based chatbots in nursing education.
Table 1. Study characteristics (n = 66).

Characteristic                                 n      %
Year of publication
  2019                                         1      1.5
  2021                                         1      1.5
  2022                                         3      4.5
  2023                                         6      9.1
  2024                                        12     18.2
  2025                                        43     65.2
Region
  Europe                                       3      4.5
  North America                                7     10.6
  Asia                                        42     63.6
  Africa                                       5      7.6
  South America                                3      4.5
  Oceania                                      1      1.5
  Multiple regions                             5      7.6
Study design
  Quasi-experimental study                    25     37.9
  Randomized controlled trial                  4      6.0
  Cross-sectional survey                       8     12.1
  Qualitative study                           14     21.2
  Mixed-methods study                          7     10.6
  Methodological/developmental study           4      6.0
  Case study/quality improvement initiative    4      6.0
Note: Percentages were calculated based on the total number of included studies (n = 66) and rounded to one decimal place.
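The note above states that percentages were derived from the total of 66 included studies and rounded to one decimal place. A minimal sketch of that calculation, using the year-of-publication counts from Table 1 (the dictionary and variable names here are illustrative, not from the paper):

```python
# Illustrative recomputation of Table 1 percentages from raw study counts.
# counts: studies per publication year, as reported in Table 1.
counts = {"2019": 1, "2021": 1, "2022": 3, "2023": 6, "2024": 12, "2025": 43}
total = 66  # total number of included studies

# Percentage of the total for each year, rounded to one decimal place.
percentages = {year: round(n / total * 100, 1) for year, n in counts.items()}

print(percentages)
# → {'2019': 1.5, '2021': 1.5, '2022': 4.5, '2023': 9.1, '2024': 18.2, '2025': 65.2}
```

The rounded values reproduce the table exactly; note that because each cell is rounded independently, such percentage columns may not sum to precisely 100.0.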
Table 2. Summary of chatbot application areas in nursing education and the most frequently reported outcomes.

Learning support (34 studies)
  Examples of use: Self-directed study, clarification of doubts, academic task assistance, concept explanation.
  Most frequently reported outcomes: Improved knowledge acquisition, increased autonomy, enhanced engagement, improved academic performance.

Clinical simulation (11 studies)
  Examples of use: Virtual patients, case-based interaction, scenario-based reasoning, simulation support.
  Most frequently reported outcomes: Improved clinical reasoning, increased confidence, enhanced decision-making skills.

Virtual tutoring (12 studies)
  Examples of use: Guided feedback, question-answer interaction, personalized tutoring, scaffolding.
  Most frequently reported outcomes: Improved learning outcomes, increased engagement, improved skill acquisition.

Teaching support (6 studies)
  Examples of use: Content preparation, instructional material generation, teaching assistance.
  Most frequently reported outcomes: Increased teaching efficiency, improved instructional design.

Assessment and skills practice (9 studies)
  Examples of use: Formative quizzes, structured clinical responses, skills rehearsal, competency assessment.
  Most frequently reported outcomes: Improved skill performance, reinforcement of learning, enhanced competency development.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
