Article

Development and Usability Evaluation of a Leap Motion-Based Controller-Free VR Training System for Inferior Alveolar Nerve Block

1 Clinical Coordinating Center, College of Dentistry, Chosun University, Gwangju 61452, Republic of Korea
2 Dental Biomedical Engineering, College of Dentistry, Chosun University, Gwangju 61452, Republic of Korea
3 Department of Oral and Maxillofacial Surgery, Chosun University Dental Hospital, Gwangju 61452, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(3), 1325; https://doi.org/10.3390/app16031325
Submission received: 2 January 2026 / Revised: 23 January 2026 / Accepted: 26 January 2026 / Published: 28 January 2026

Abstract

This study developed a virtual reality (VR) simulator for training the inferior alveolar nerve block (IANB) procedure using Leap Motion-based hand tracking and the Unity engine, and evaluated its interaction performance, task-level outcomes within the simulator, and usability. Built on a 3D anatomical model, the system provides a pre-clinical practice environment for realistic syringe manipulation and visually guided needle insertion, enabling repeated rehearsal of the procedural workflow. Interaction stability was assessed using participant-level gesture recognition rates and input latency. Usability was evaluated via a questionnaire addressing ease of use, cognitive load, and perceived educational usefulness. The results indicated participant-level mean gesture recognition rates of 88.8–90.5% and mean response latencies of approximately 64–66 ms. In usability testing (n = 40), the item related to perceived procedural skill improvement received the highest score (4.25/5.0). Because this study did not include controlled comparisons with conventional training or objective measures of clinical competency transfer, the findings should be interpreted as preliminary evidence of technical feasibility and learner-perceived usefulness within a simulated setting. Controlled comparative studies using objective learning outcomes are warranted.

1. Introduction

1.1. Background of the Study

To perform safe and accurate procedures in clinical settings, clinicians must possess comprehensive competencies, including anatomical knowledge, procedural skills, patient-care experience, appropriate equipment use, and sound clinical judgment. Dentistry is no exception. In particular, local anesthesia is a fundamental prerequisite for most dental procedures and demands a high level of accuracy and proficiency. The inferior alveolar nerve block (IANB) is one of the most widely used mandibular anesthesia techniques in dental practice, yet it is associated with a relatively high failure rate [1,2]. Successful IANB requires precise identification of the injection site, appropriate needle angulation and depth, and a thorough understanding of surrounding anatomy. Accordingly, dental schools use multiple educational approaches—didactic instruction, preclinical laboratory training, clinical clerkships, procedural demonstrations, and model-based practice—to improve students’ competence in local anesthesia techniques [3,4].
However, training for high-difficulty procedures such as IANB may remain insufficient when relying solely on conventional preclinical and clinical education [5,6]. First, model-based training is limited by the anatomical fidelity of available models. This makes it difficult for learners to understand and verify insertion path, angle, depth, and key anatomical landmarks. Second, hands-on training requires costly practice models and repeated use of disposables (e.g., needles, anesthetic cartridges, gloves, and disinfectants), which increases the financial burden. Third, patient safety concerns restrict repeated practice opportunities for beginners and may lead to uneven hands-on exposure across students [4,7]. In addition, training settings often provide limited exposure to anatomical variations encountered in real practice. As a result, conventional approaches alone may not be sufficient to achieve consistent IANB learning outcomes, and skill disparities among learners can persist.
One potential approach is an educational tool that facilitates intuitive anatomical visualization and provides feedback that can be tailored to a learner’s proficiency level while enabling repeated rehearsal of procedural workflow. In recent years, VR-based education systems have been explored as an adjunct to conventional pre-clinical training by offering immersive 3D visualization and simulated practice without involving real patients. VR environments can also enable structured procedural rehearsal and basic syringe-handling exercises in controlled scenarios. Interaction devices such as Leap Motion can provide controller-free hand input, allowing users to perform syringe-related actions (e.g., grasping, orientation adjustment, and simplified tool-use motions) within the simulator. However, the extent to which such VR-based systems transfer to clinical competency or demonstrate comparative effectiveness relative to standard teaching methods remains to be established and requires controlled comparative studies. Based on this motivation, we developed a Leap Motion-based VR IANB training system and evaluated gesture-recognition performance and usability as preliminary indicators of technical feasibility and learner-perceived usefulness within the simulated setting.

1.2. Related Work on VR-Based Medical Education

Immersive technologies such as VR have been increasingly explored as educational platforms for hands-on clinical learning, procedural skill acquisition, and preoperative planning [8,9,10,11,12,13]. Early clinical training applications emphasized scenario-based learning and teamwork. For example, the University of Northampton integrated multiple HMDs with a shared display to support nurses’ clinical response and team coordination [14], and the University of Oxford introduced Oxford Medical Simulation to improve the clinical competencies of healthcare professionals in a hospital setting [14,15]. In nursing education, VR simulation has also been investigated as a strategy for clinical procedure skills training in quasi-experimental and immersive simulation studies, supporting its broader applicability in undergraduate curricula [16,17]. In emergency training, VR simulation has been evaluated in Advanced Trauma Life Support (ATLS) using a randomized controlled design, reporting feasibility for structured medical education contexts [18].
VR has also been applied to anatomy learning and emergency-response training, where spatial understanding and rapid decision-making are essential. Falah et al. developed a VR-based cardiac anatomy education system using stereoscopic 3D visualization to address limitations of traditional anatomy instruction [19,20,21]. Multiuser VR has been explored for guided training scenarios such as anaphylactic shock response [22] and interprofessional sepsis recognition and management, demonstrating the feasibility of team-oriented immersive simulations [23]. Procedure-oriented emergency education has likewise been investigated for airway management and tracheostomy training, indicating that immersive simulation can support workflow learning in high-stakes clinical skills [24,25,26]. In addition, VR-based CPR training has been studied for skill acquisition and maintenance, further expanding immersive education into time-critical resuscitation tasks [27]. In gastrointestinal endoscopy education, digital platforms including VR/AR have been summarized and validated in dedicated immersive training systems for physicians and nurses [28,29].
Beyond conceptual learning, many studies have examined whether VR improves procedural performance compared with conventional methods. In surgical and interventional education, immersive simulators have been developed for ophthalmic surgical skills [30] and vitreoretinal surgery training using portable/affordable VR approaches [31]. For tasks requiring accurate trajectory and depth control, immersive MR/VR systems have been proposed and evaluated for pedicle screw placement [32,33] and ventriculostomy catheter insertion [34]. In minimally invasive surgery education, VR has been used in laparoscopic skill training programs, including curricula integrating smartphone-based learning and VR modules, as well as VR-based training targeting laparoscopic assistance skills [35,36]. Additional procedure-skills training studies have used VR to improve clinical manipulation performance in medical students [37] and have extended immersive procedure training into vascular intervention education through personalized VR-based systems [38]. Comparative studies have also evaluated immersive versus non-immersive simulation formats (e.g., arthroscopy), reporting similar effectiveness across modalities in certain settings [39]. Several controlled comparisons in the earlier literature similarly reported that VR-based training can match or outperform traditional approaches across different skills and learner groups [40,41,42,43,44].
In dentistry and cranio-maxillofacial education, immersive simulation has been explored for anatomically complex workflows and preclinical skill development. Immersive reality training has been reported for Le Fort I orthognathic surgery, supporting feasibility for complex maxillofacial procedure education [45]. VR haptic-based dental simulators have also been used to assess or predict preclinical crown preparation performance, suggesting that simulator-derived metrics can relate meaningfully to conventional evaluation in dental education [46]. Importantly, needle-related education has been examined in immersive environments: VR-based ultrasound-guided needling education for regional anaesthesia has been evaluated in a randomized controlled trial and reported learning outcomes comparable to instructor-led education [47]. Hand-tracking-based interaction has also been explored in procedure-education contexts, including HMD–Leap Motion-based tools for fracture education [48,49], indicating the feasibility of more natural, controller-free interaction in immersive training.
A key methodological dimension across prior work is the interaction modality used to manipulate tools and perform procedures in immersive environments. Many systems rely on HMD controllers, while others incorporate haptics or MR-based guidance to increase realism and learning support [50,51]. However, for dental local anesthesia education—particularly inferior alveolar nerve block (IANB), which requires precise syringe manipulation and anatomically informed needle positioning [52,53,54]—VR systems can be designed to incorporate anatomically layered 3D visualization and controller-free syringe interaction via hand tracking, while assessing interaction robustness using real-time indicators (e.g., gesture-recognition rate and response latency) together with user-centered usability evaluation. Accordingly, the present study develops a Leap Motion-based VR simulator for IANB education and evaluates interaction performance and usability as preliminary evidence of technical feasibility and learner-perceived usefulness within the simulated setting.

2. Materials and Methods

This section describes the development process, interaction design, and the methods for performance and usability evaluation of a VR simulator for inferior alveolar nerve block (IANB) education. The proposed simulator was designed to operate in a PC-based VR environment. Users can observe three-dimensional anatomical structures through a head-mounted display (HMD) and perform local anesthesia procedures by manipulating a virtual syringe using a Leap Motion-based hand-tracking device. The simulation was implemented through (1) development of 3D models reflecting facial anatomy, (2) design of the user interface (UI), and (3) implementation of a Leap Motion-based interaction module for syringe manipulation. In addition, to verify the simulator’s potential for educational use, we conducted a Leap Motion gesture recognition performance evaluation and a user-based usability evaluation. Figure 1 shows the overall architecture of the IANB VR simulator developed in this study.

2.1. Construction of the 3D Anatomical Model

The 3D anatomical model used in the IANB education simulator consists of four structures: skin, muscle, bone, and nerve. Based on standard anatomical references, the 3D anatomical models were created using 3ds Max 2024.2 and Cinema 4D R23. The skin layer was first modeled based on an actual facial shape, and the superficial and deep facial muscles were organized in a hierarchical structure to represent the facial soft-tissue anatomy. Next, the skull—including the mandible and maxilla—was modeled in detail to reproduce the morphology of the craniofacial skeletal anatomy. Finally, nerve structures directly related to IANB were implemented, completing the anatomical model spanning the skin–muscle–bone–nerve layers. Figure 2 shows the facial anatomical structures developed for the simulator.
In detail, the skin layer was created by sculpting the overall facial topography based on the actual facial surface geometry. After establishing the basic head shape with reference to standard facial proportions, additional details—such as the lips, buccal region, and jawline—were added to reflect an open-mouth posture. The entire skin layer was then constructed as a polygon mesh, and surface textures were adjusted to clearly delineate the boundaries with the muscle and skeletal structures.
The muscle layer was created by focusing on key muscles in the maxillofacial region that are relevant to IANB. Both superficial and deep facial muscles—including the masseter, buccinator, medial and lateral pterygoid muscles, orbicularis oris, and temporalis—were modeled as separate meshes. The muscle models were organized in a hierarchical structure, enabling independent on/off toggling within the VR authoring tool. In addition, surface normal and shading processing were applied to visually represent muscle fiber directionality, thereby facilitating understanding of the anatomical relationships among facial muscles.
The craniofacial skeletal structures were modeled with reference to maxillofacial anatomical resources and morphological information derived from facial CBCT data. In particular, regions directly related to the IANB procedure—such as the ramus, mandibular foramen, lingula, and mandibular canal—were modeled and calibrated to average adult dimensions. The maxilla, mandible, and temporal bone were modeled as independent meshes, and the head axis, proportions, and surface curvature were iteratively refined to maintain alignment with the skin and muscle layers.
The nerve model was constructed by extracting only the structures directly relevant to IANB from the numerous nerve pathways in the face. Each nerve pathway was modeled as a polyline-based mesh, and enhanced color, emissive materials, and thickness adjustments were applied to improve visibility, allowing users to perceive nerve location and direction in VR. In addition, colliders were assigned to the nerve models to detect syringe insertion and potential contact. This enabled the system to detect cases in which the syringe trajectory intersected or approached the nerve structures excessively and to provide feedback accordingly.
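For illustration, the proximity detection described above can be sketched geometrically: assuming the nerve is represented as a polyline of 3D points and the needle tip as a single point, the shortest tip-to-segment distance determines whether feedback should be triggered. The following minimal Python sketch shows this idea; in the actual system, contact detection is handled by Unity colliders, and the 2 mm warning radius and function names here are hypothetical.

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Shortest distance from point p to the line segment a-b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def needle_near_nerve(tip, nerve_polyline, warn_radius=2.0):
    """True if the needle tip comes within warn_radius (mm, illustrative)
    of any segment of the polyline-based nerve model."""
    d = min(point_to_segment_distance(tip, nerve_polyline[i], nerve_polyline[i + 1])
            for i in range(len(nerve_polyline) - 1))
    return d <= warn_radius

# Hypothetical nerve path and needle tip position (mm)
nerve = [np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0]), np.array([20.0, 5.0, 0.0])]
print(needle_near_nerve(np.array([9.0, 1.5, 0.0]), nerve))  # True: tip is 1.5 mm away
```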

2.2. Design of a Leap Motion-Based Interaction Method

In this study, an interaction system was implemented to track hand movements in real time using a Leap Motion Controller (Ultraleap Ltd., Bristol, UK) and, based on this tracking, to manipulate a local-anesthesia syringe in a virtual environment. Leap Motion provides a total of 26 skeletal features per hand, including the wrist, palm, joint positions of each finger, and fingertip (tip) positions; these data are typically streamed to the virtual environment through the Leap Motion SDK (Ultraleap Hyperion 6.2.0). The transmitted skeletal data were mapped to a predefined 3D hand model in the virtual environment, enabling the user’s real hand movements to be visualized with the same pose and trajectory in VR. Figure 3 shows the skeletal information acquired by Leap Motion and the corresponding mapped 3D hand model.
Based on the skeletal data mapped to the 3D hand model, gestures required for performing the local anesthesia procedure were defined. Each gesture was specified using feature values such as the degree of finger flexion, finger-to-finger distance, palm normal direction, and wrist rotation. Each gesture was designed to be recognized when these values satisfied predefined threshold ranges. Figure 4 shows the methods used to compute the feature values for gesture definition.
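As an illustration of how such feature values can be derived from skeletal data, the following minimal Python sketch computes a finger flexion angle, a fingertip-to-fingertip distance, and a palm-direction check from 3D joint positions. The joint naming, the 30° tolerance, and the NumPy-based implementation are illustrative assumptions; the actual threshold ranges are those given in Table 1.

```python
import numpy as np

def flexion_angle(mcp, pip, tip):
    """Angle (degrees) at the PIP joint between the proximal and distal
    bone directions: near 180 deg for an extended finger, smaller when flexed."""
    u, v = mcp - pip, tip - pip
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def tip_distance(tip_a, tip_b):
    """Euclidean distance between two fingertips (e.g., thumb-index)."""
    return float(np.linalg.norm(tip_a - tip_b))

def palm_facing(palm_normal, reference, max_deg=30.0):
    """True if the palm normal deviates from a reference direction by
    at most max_deg (tolerance value is illustrative)."""
    cos = np.dot(palm_normal, reference) / (
        np.linalg.norm(palm_normal) * np.linalg.norm(reference))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) <= max_deg

# Hypothetical joint positions (mm) for a nearly extended index finger
print(flexion_angle(np.array([0.0, 0.0, 0.0]),
                    np.array([40.0, 0.0, 0.0]),
                    np.array([75.0, 5.0, 0.0])))  # close to 180 deg
```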
A total of three gestures were defined. For the left hand (or the right hand for left-handed users), one gesture was defined for GUI interaction in the virtual environment. This posture maintained the thumb and index finger extended while fully flexing the remaining three fingers (middle, ring, and little fingers), and it was recognized as a GUI selection gesture when the predefined threshold angle was satisfied. When this gesture was activated, the user could point to and select desired items in the VR GUI (e.g., syringe volume selection and anatomical structure visualization options). For the right hand (or the left hand for left-handed users), two gestures related to syringe manipulation for local anesthesia were defined. First, the syringe-grasping gesture (grip gesture) maintained the ring and little fingers fully flexed; when the thumb, index finger, and middle finger were flexed within predefined ranges, the syringe model was attached as a child object of the hand model and was configured to translate and rotate in accordance with the right-hand movement. Second, the anesthetic injection gesture (injection gesture) was activated in the grip-gesture state when the distance between the thumb tip and the index fingertip decreased below a specified value. When activated, an animation visualizing the injection of anesthetic solution inside the virtual syringe was triggered, and the injection progress was displayed visually. Figure 5 shows the three defined gestures and their roles, and Table 1 shows the parameter ranges configured for recognizing each gesture.
To prevent gesture conflicts and unintended state switching, two constraints were applied. First, a gesture was defined to become active only when its recognition condition was continuously maintained for at least 1 s, which suppresses transient mis-tracking and noise-induced triggers. Second, while syringe-related gestures (grip/injection) were performed, the left-hand GUI gesture recognition was temporarily disabled to prevent accidental GUI selections during procedural interaction. Gesture thresholds were fixed across participants, and no per-user calibration (e.g., hand-size normalization) was applied in this study to maintain a lightweight and reproducible rule-based pipeline.
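A minimal sketch of these two constraints is given below: a dwell-time debouncer that activates a gesture only after its condition has held continuously for 1 s, and a mutual-exclusion rule that ignores GUI gestures while the grip gesture is active (the injection gesture only occurs within the grip state). Class and function names are hypothetical; in the actual system this logic runs inside the Unity update loop.

```python
import time

class GestureDebouncer:
    """Activates a gesture only after its recognition condition has held
    continuously for dwell_s seconds (1 s in this study), suppressing
    transient mis-tracking and noise-induced triggers."""
    def __init__(self, dwell_s=1.0):
        self.dwell_s = dwell_s
        self.hold_start = None
        self.active = False

    def update(self, condition_met, now=None):
        now = time.monotonic() if now is None else now
        if not condition_met:
            self.hold_start = None
            self.active = False
        elif self.hold_start is None:
            self.hold_start = now
        elif now - self.hold_start >= self.dwell_s:
            self.active = True
        return self.active

# Mutual exclusion: while the syringe gesture is active, GUI gestures are ignored.
grip_gesture, gui_gesture = GestureDebouncer(), GestureDebouncer()

def gui_gesture_allowed():
    return not grip_gesture.active
```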
Through this gesture-based interaction, the VR simulator enabled users to perform the local anesthesia procedure. First, the user performed the left-hand GUI selection gesture to choose the required items for the simulation within the virtual environment. This allowed the user to switch the visualization states of the anatomical structures (skin, muscle, bone, nerve structures) and to set simulation parameters such as syringe volume. Next, when the grip gesture was performed to grasp the virtual syringe, the syringe model was linked to the hand movement, allowing the user to approach the 3D anatomical model and adjust the needle insertion position and angle in a manner similar to the actual procedure. Finally, when the injection gesture was activated, the anesthetic injection process was visualized, enabling the learner to perform the IANB procedure step by step in the virtual environment. Through this sequence of interactions, users could repeatedly learn the full procedural workflow in VR—from GUI selection to syringe grasping, contact with the anatomical model, and anesthetic injection. Figure 6 shows the local anesthesia VR simulation during execution.

2.3. Performance and Usability Evaluation Methods

To verify the utility of the implemented simulator, performance and usability evaluations were conducted. Because the simulator’s core functionality relied on Leap Motion-based interaction, the performance evaluation focused on gesture recognition accuracy and input response latency. Gesture recognition accuracy was calculated by having participants perform the three predefined gestures (GUI selection, grip, and injection) for a fixed number of trials and dividing the number of correctly recognized instances by the total number of trials. The system was configured to record each recognition event as either success or failure. The cumulative counts of successes and failures were then used to compute the recognition accuracy for each gesture. Input response latency was defined as the time from the moment a gesture was performed to the moment the system switched the corresponding gesture state to active. To measure this, 20 participants performed each gesture 20 times, and latency was calculated by comparing the Leap Motion frame timestamp at the time of execution with the internal log timestamp at which the gesture recognition event occurred. The mean response time and standard deviation were computed for each gesture to assess the suitability for real-time interaction. To facilitate the performance evaluation, a simplified testbed environment containing only the 3D hand model and syringe model was built separately. In addition, a minimal GUI was implemented to confirm real-time gesture recognition status and response time, enabling immediate monitoring of gesture success and system response during the evaluation. Figure 7 shows the testbed environment for performance evaluation and the evaluation procedure.
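For clarity, the two interaction metrics can be summarized as in the following Python sketch, which computes a per-participant recognition rate from logged success/failure outcomes and per-trial latencies from paired gesture-frame and recognition-event timestamps. The log format shown is a simplified assumption.

```python
import statistics

def recognition_rate(outcomes):
    """Per-participant recognition rate (%): successes / total trials."""
    return 100.0 * sum(outcomes) / len(outcomes)

def latencies_ms(gesture_times, activation_times):
    """Per-trial latency: Leap Motion frame timestamp at gesture execution
    vs. internal log timestamp of the recognition event (seconds in, ms out)."""
    return [1000.0 * (a - g) for g, a in zip(gesture_times, activation_times)]

# Hypothetical log for one participant: 20 grip-gesture trials, 18 recognized
outcomes = [1] * 18 + [0] * 2
print(recognition_rate(outcomes))              # 90.0
lat = latencies_ms([0.000, 1.500], [0.065, 1.562])
print(statistics.mean(lat), statistics.stdev(lat))  # 63.5 ms, ~2.1 ms
```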
Additionally, task-level performance metrics were collected from the same 20 participants to quantify procedural execution in the simulator. Task completion time was defined as the elapsed time required to complete a standardized practice sequence (GUI option selection → syringe grasping → needle insertion → injection activation/completion), and was computed from logged start/end timestamps. Because the simulator permits free needle trajectories, task success was assessed using a target-based criterion rather than enforcing a single predefined path. Specifically, a nerve-related target region of interest (ROI) was defined in the 3D anatomical model, and an attempt was labeled as a “hit” if the needle tip entered the target ROI during insertion; otherwise, it was labeled as a “miss.” The target-hit rate (hits/attempts) was used as a practical proxy for whether the learner reached an anatomically relevant target zone. Together, these task-level metrics complemented the interaction-level measures (gesture recognition accuracy and input latency) and provided additional quantitative indicators of procedural execution in the simulator.
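A minimal sketch of the task-level metrics follows: completion time as the difference of logged start/end timestamps, and the ROI hit test as a point-in-region check along the needle-tip trajectory. The spherical ROI geometry is an assumption made for illustration; the ROI shape is not otherwise specified here.

```python
import numpy as np

def completion_time_s(start_ts, end_ts):
    """Elapsed time of the standardized workflow from logged timestamps."""
    return end_ts - start_ts

def roi_hit(tip_positions, roi_center, roi_radius):
    """Label an attempt a 'hit' if the needle tip enters the target ROI at
    any point during insertion (sphere geometry is an assumption)."""
    tips = np.asarray(tip_positions, dtype=float)
    return bool((np.linalg.norm(tips - roi_center, axis=1) <= roi_radius).any())

def hit_rate_percent(hits, attempts=20):
    """Target-hit rate as defined in Section 2.3: hits / attempts."""
    return 100.0 * hits / attempts
```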
Next, to evaluate the overall usability of the simulator, a questionnaire-based method was used. Because the simulator targets an IANB-specific educational workflow, we employed a procedure-oriented questionnaire to assess domain-relevant aspects that may not be sufficiently captured by generic usability instruments alone. Specifically, the questionnaire covered perceived support for understanding the local anesthesia procedure (Q1) and intraoral anatomy (Q2), realism of HMD-based viewpoint changes (Q3), perceived realism of Leap Motion-based practice (Q4), engagement (Q5), perceived potential for procedural skill improvement (Q6), VR-related discomfort (Q7), and intention to use VR for other dental procedures (Q8). The questionnaire comprised eight items, each rated on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). Table 2 lists the questionnaire items used in the usability evaluation.
The usability evaluation was conducted with 40 dental students who had completed didactic instruction on local anesthesia/IANB as part of the dental anesthesiology curriculum and had participated in mannequin/phantom-head-based practice sessions. Participants’ academic year was recorded (Year 2 = 20, Year 3 = 20). To minimize potential confounding effects of VR familiarity on subjective ratings, only participants with no prior VR experience were recruited.
Participants were instructed to complete the full interaction workflow using the simulator, including GUI control, syringe grasping, syringe manipulation, and local anesthetic injection. During the session, instructor intervention was minimized so that participants could naturally experience the system’s controllability and intuitiveness. After completing the simulation, the questionnaire was administered to collect subjective usability ratings of the overall system. Based on the collected scores, the overall acceptability of the simulator and areas requiring improvement were analyzed. Figure 8 shows participants performing the simulator for the usability evaluation.

3. Results

3.1. Performance Evaluation Results

3.1.1. Interaction Performance Evaluation Results

The performance evaluation of the IANB education simulator was conducted in a PC-based testbed in which an HMD (Meta Quest 2) and Leap Motion were connected. Twenty participants were instructed to repeat each of the three gestures (GUI selection, grip, and injection) 20 times, and gesture recognition accuracy and input response latency were measured. Table 3 shows the gesture recognition rates for each participant and the response times for the corresponding gestures.
As shown in Table 3, analysis of the recognition rates for the three gestures (GUI selection, grip, and injection) confirmed overall high stability and real-time performance. Recognition rate was computed per participant as the proportion of correctly recognized trials out of 20 attempts, and the reported values represent mean ± SD across participants (n = 20), together with 95% confidence intervals (CIs) for the mean. Here, SD reflects between-participant variability in the participant-level metrics (each participant’s rate computed from 20 repetitions), rather than within-participant trial-to-trial variability. The 95% CI corresponds to the confidence interval of the participant-level mean and was computed using the t-distribution (df = 19): mean ± t_{0.975, 19} × (SD/√20). The mean recognition rate was 89.25 ± 7.79% (95% CI: 85.50–92.99) for the GUI selection gesture, 88.75 ± 7.40% (95% CI: 85.20–92.30) for the grip gesture, and 90.50 ± 8.20% (95% CI: 86.56–94.44) for the injection gesture, indicating that all three gestures achieved stable performance above 88% on average. In the participant-level analysis, most participants exhibited high recognition rates in the 85–100% range, whereas a few participants (Participants #1, #10, #17, #18, #19) showed relatively lower recognition rates. This may be attributed to their unusually large or small hand sizes, which could have caused the finger flexion angles and finger-to-finger distance values estimated by Leap Motion to repeatedly fall outside the predefined gesture-recognition thresholds. In addition, some participants showed temporary drops in recognition for specific gestures, which may have occurred when rapid hand movements caused fingertips to momentarily leave the Leap Motion field of view (FoV) or when transient sensor noise affected joint-position estimates, pushing gesture features (e.g., flexion and distance) outside the threshold ranges. Except for these cases, gesture execution patterns were generally consistent, suggesting that the gesture-based interaction provided sufficient reliability and stability for an IANB VR education environment.
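For reproducibility, the participant-level summary described above (mean ± SD with a t-based 95% CI, df = 19) can be computed as in the following sketch using NumPy and SciPy; the rate values shown are hypothetical.

```python
import numpy as np
from scipy import stats

def mean_ci(rates, alpha=0.05):
    """Mean, between-participant SD, and t-based 95% CI of the mean,
    following Section 3.1.1: mean +/- t_{0.975, n-1} * SD / sqrt(n)."""
    rates = np.asarray(rates, dtype=float)
    n, m, sd = len(rates), rates.mean(), rates.std(ddof=1)
    half = stats.t.ppf(1 - alpha / 2, n - 1) * sd / np.sqrt(n)
    return m, sd, (m - half, m + half)

# Hypothetical participant-level recognition rates (%) for one gesture, n = 20
rates = [90, 85, 95, 90, 100, 80, 90, 85, 95, 90,
         85, 95, 90, 100, 80, 90, 85, 95, 90, 85]
print(mean_ci(rates))
```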
Gesture response time was evaluated as the latency from the moment a participant performed a gesture to the moment the system transitioned the corresponding gesture state to active. Response time was treated as a system-level latency (gesture execution → state activation). The mean response time was 65.45 ± 8.16 ms (95% CI: 61.53–69.37) for the GUI selection gesture, 65.30 ± 6.71 ms (95% CI: 62.03–68.47) for the grip gesture, and 63.75 ± 7.36 ms (95% CI: 60.22–67.28) for the injection gesture, with all three gestures exhibiting stable response times below 70 ms. These results indicate that Leap Motion-based input processing satisfied real-time interaction requirements (commonly < 100 ms) [55] and was suitable for providing natural action–response coupling in procedure-training content such as IANB simulation. However, for participants with lower gesture recognition rates, unstable joint data or intermittent mis-tracking was also observed, and a tendency toward occasional increases in time-to-activation was noted. This suggests that reduced recognition stability can co-occur with increased time-to-activation in some cases; nevertheless, across the full participant group, no delay occurred at a level that would hinder real-time interaction.

3.1.2. Task-Level Performance Evaluation Results

Task-level performance was evaluated with the same 20 participants who took part in the interaction performance test. Two task-level metrics were analyzed: (1) task completion time for executing the standardized simulator workflow and (2) target-hit rate indicating whether the needle tip reached the predefined nerve-related region of interest (ROI). Table 4 shows the task-level performance evaluation results.
Task completion time was defined as the elapsed time required to finish a standardized workflow (GUI option selection → syringe grasping → needle insertion attempt → injection activation/completion). The mean task completion time was 280.2 ± 23.12 s (95% CI: 269.10–291.30), with a participant-level range of 257–337 s. Needle-placement performance was assessed using a target-based hit/miss criterion rather than mm-level 3D positional error. A predefined nerve-related region of interest (ROI) was specified in the 3D anatomical model, and an attempt was labeled as a hit if the needle tip entered the ROI during insertion; otherwise, it was labeled as a miss. Each participant completed 20 attempts, and the hit success rate was computed as (number of hits/20). The mean hit success rate was 95.0 ± 3.87% (95% CI: 93.14–96.86), ranging from 90% to 100% across participants (SD reflects between-participant variability in the proportion, whereas the CI reflects uncertainty in the participant-level mean).
While some participants exhibited longer completion times—potentially reflecting additional hand repositioning and/or repeated state transitions within the workflow—the ROI hit success rate remained consistently high. This suggests that, despite occasional variability in gesture recognition, participants were generally able to localize and approach the target nerve region reliably within the simulator. However, because the ROI hit metric showed a near-ceiling distribution and is a binary educational success proxy, it should be interpreted descriptively and does not quantify mm-level placement accuracy or clinically validated anesthesia outcomes.

3.1.3. Statistical Validation of Interaction and Task Performance

To evaluate the statistical validity of the interaction-performance outcomes (gesture recognition rate and response time) and the task-performance outcome (completion time), we first tested the associations between interaction metrics and completion time using correlation analyses. We then compared completion time between groups stratified by interaction-performance level using group-comparison tests. In addition, we conducted a sensitivity-based power analysis to estimate the minimum detectable effect size given the limited sample.
To examine the association between interaction performance and task efficiency, Spearman’s rank correlation analyses were conducted between task completion time and (i) total gesture recognition rate and (ii) total gesture response time. The ROI hit success rate was not included in the correlation analysis because it exhibited a near-ceiling distribution (90–100%) and represents a hit/miss educational success proxy rather than a continuous mm-level precision measure; thus, it was reported descriptively only. Table 5 summarizes the correlation results.
Task completion time showed a strong negative correlation with total gesture recognition rate (ρ = −0.911, p < 0.001; N = 20), indicating that higher recognition rates were associated with shorter completion times. In contrast, task completion time was positively correlated with total gesture response time (ρ = 0.837, p < 0.001; N = 20), suggesting that longer response times were associated with longer completion times.
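A sketch of this correlation analysis using SciPy is shown below; the per-participant arrays are assumed to come from the logs described in Section 2.3.

```python
from scipy import stats

def interaction_task_correlations(completion_s, total_recognized, total_latency_ms):
    """Spearman rank correlations between task completion time and the two
    interaction metrics (as in Table 5). Inputs are per-participant
    arrays (N = 20)."""
    rho_rec, p_rec = stats.spearmanr(completion_s, total_recognized)
    rho_lat, p_lat = stats.spearmanr(completion_s, total_latency_ms)
    return {"recognition": (rho_rec, p_rec), "latency": (rho_lat, p_lat)}
```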
To examine whether task performance differs by interaction-performance level, participants were stratified into a high-recognition group (≥56; n = 9) and a low-recognition group (≤55; n = 11) using the median total gesture recognition count (55), and task completion time was compared between groups. Because normality assumptions were not satisfied, the Mann–Whitney U test was used. Table 6 summarizes the Mann–Whitney U test results.
The group comparison revealed a significant difference in task completion time between the two recognition-level groups (Mann–Whitney U = 0, Z = −3.764, p < 0.001). Consistent with the rank pattern (High: mean rank 5.00; Low: mean rank 15.00), the high-recognition group exhibited shorter completion times than the low-recognition group. The effect size was large (r = 0.842), indicating a substantial performance gap associated with interaction-performance level.
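The group comparison can be reproduced as in the following sketch, where Z is recovered from U via the normal approximation (without tie correction, an assumption) and the effect size is computed as r = |Z|/√N. With the reported group sizes (n = 9 and n = 11) and U = 0, this yields Z ≈ −3.76 and r ≈ 0.84, consistent with Table 6.

```python
import math
from scipy import stats

def compare_completion_times(high_times, low_times):
    """Mann-Whitney U test on completion times of the high- vs. low-recognition
    groups, with effect size r = |Z| / sqrt(N)."""
    u, p = stats.mannwhitneyu(high_times, low_times, alternative='two-sided')
    n1, n2 = len(high_times), len(low_times)
    # Normal approximation to recover Z from U (no tie correction).
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    r = abs(z) / math.sqrt(n1 + n2)
    return u, p, z, r
```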
Finally, a sensitivity-based power analysis (two-sided α = 0.05) indicated that, with N = 20, the study achieves 80% power to detect large monotonic associations of approximately |ρ| ≥ 0.59 in correlation analyses, suggesting limited sensitivity to small-to-moderate effects but sufficient sensitivity for large effects.
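This sensitivity estimate is consistent with the standard Fisher z approximation for correlation tests, as the following sketch shows; with N = 20, α = 0.05 (two-sided), and 80% power, the minimum detectable correlation is approximately |ρ| = 0.59.

```python
import math
from scipy import stats

def min_detectable_rho(n=20, alpha=0.05, power=0.80):
    """Minimum |rho| detectable at the given power, using the Fisher z
    approximation for a two-sided correlation test."""
    z_a = stats.norm.ppf(1 - alpha / 2)   # ~1.960
    z_b = stats.norm.ppf(power)           # ~0.842
    c = (z_a + z_b) / math.sqrt(n - 3)    # required Fisher-z effect
    return math.tanh(c)                   # back-transform to rho

print(round(min_detectable_rho(), 2))  # ~0.59, matching Section 3.1.3
```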

3.2. Usability Evaluation Results

A questionnaire-based usability evaluation was conducted with 40 dental students to verify the educational usefulness of the simulator. Responses to each item were collected, and the mean and standard deviation were calculated. Figure 9a shows the response results for each questionnaire item, and Figure 9b shows the mean and standard deviation for each questionnaire item.
Among 40 participants who completed the usability evaluation across eight items, the mean total score was approximately 3.6, indicating a neutral-to-positive user experience. High satisfaction was reported regarding the simulator’s educational value for hands-on practice and the potential scalability of VR-based dental training, and most participants rated the simulator as helpful for learning local anesthesia. In addition, negative experiences such as motion sickness or dizziness were reported at a low level, suggesting that the system provided a stable usage environment.
Item-level analysis showed that the highest score was obtained for Q6 (improvement of practical skills), with a mean of 4.25 (SD = 0.73), indicating that participants perceived the simulator as potentially beneficial for improving local anesthesia procedural proficiency. For Q1 (local anesthesia method) and Q2 (facial anatomy), mean scores exceeded 3.6, and 95% of responses were neutral or higher, suggesting that the simulator was effective in supporting foundational concept acquisition and understanding. The item related to the VR experience via the HMD (Q3) yielded a mean score above 3.5, with 87% of responses neutral or higher, implying that participants accepted the VR viewing environment but did not perceive it as fully identical to the real clinical view. For Q5 and Q7, which addressed interest and discomfort during VR-based practice, mean scores exceeded 3.4 and 82.5% of responses were neutral or higher. These results suggest that the VR simulator may provide educational motivation and that, although some participants reported mild motion sickness, no severe discomfort was observed. Together, these findings support that a VR environment can provide usability suitable for enhancing immersion and sustaining practice engagement. For Q8, which assessed the perceived potential of VR use in dentistry, the mean score exceeded 3.7 and 95% of responses were neutral or higher, indicating that the simulator was recognized as an educational tool that could be extended beyond local anesthesia to other dental procedures, reflecting the broader applicability and educational scalability of the VR simulator in dental practice. The lowest score was observed for Q4 (realism), with a mean of 2.9 and only 62.5% of responses neutral or higher. This was likely due not only to joint-position estimation errors or occasional field-of-view (FoV) loss in Leap Motion-based hand tracking, but also to the absence of haptic feedback—such as needle insertion resistance, tissue contact sensation, and pressure changes—unlike haptic-enabled devices. Because local anesthesia procedures rely heavily on tactile cues, the lack of such feedback likely contributed to lower realism ratings.
The usability questionnaire consisted of eight single items rated on a 5-point Likert scale; thus, responses to each item are discrete and ordinal in nature. Accordingly, mean-based inference assuming normality is limited at the single-item level. Therefore, overall usability was summarized using the sum score (Total) and the mean score (Avg) across the eight items. Table 7 shows the Shapiro–Wilk normality test results for these summary scores.
The Shapiro–Wilk test indicated no evidence to reject normality for the usability summary scores (Total: W = 0.956, n = 40, p = 0.122; Avg: W = 0.956, n = 40, p = 0.122). Therefore, we report the mean ± SD and the 95% confidence interval (CI) of the mean for Total (28.63 ± 4.40, CI: 27.20, 30.05) and Avg (3.58 ± 0.55, CI: 3.40, 3.76), computed using the t-distribution (df = 39).
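The normality check and the summary-score CI can be computed as in the following sketch; the input vector is assumed to hold the 40 participants' 8-item sum scores.

```python
import numpy as np
from scipy import stats

def summarize_usability(total_scores, alpha=0.05):
    """Shapiro-Wilk normality test and t-based 95% CI of the mean for the
    8-item sum score (Total), as reported in Table 7."""
    x = np.asarray(total_scores, dtype=float)
    w, p = stats.shapiro(x)
    n, m, sd = len(x), x.mean(), x.std(ddof=1)
    half = stats.t.ppf(1 - alpha / 2, n - 1) * sd / np.sqrt(n)
    return (w, p), (m, sd, (m - half, m + half))
```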

4. Discussion

This study developed a Leap Motion-based VR simulator for inferior alveolar nerve block (IANB) training and evaluated (i) interaction performance, (ii) task-level outcomes within the simulator, and (iii) learner-reported usability. The simulator is designed as a supplementary, pre-clinical tool to support anatomical visualization and rehearsal of procedural workflow under visual guidance. Accordingly, the results should be interpreted as evidence of technical feasibility and learner-perceived usefulness in a simulated setting. Because this study did not include controlled comparisons (e.g., versus mannequin/phantom-head training or alternative simulators) and did not assess clinical outcomes, the present findings do not support claims of comparative effectiveness or superiority over conventional training.
In the usability evaluation, participants reported that the simulator helped them understand the basic concepts and steps of local anesthesia and rated the 3D anatomical visualization positively. These findings suggest that an immersive 3D environment can support multi-view inspection of anatomical structures and repeated rehearsal of procedural sequencing. Such capabilities are consistent with the intended role of the simulator as a scalable platform for practice opportunities, while educational effectiveness relative to existing training modalities remains to be established in future comparative studies.
The interaction evaluation yielded participant-level mean recognition rates of 88.8–90.5% across the three gestures and mean response times of approximately 64–66 ms, indicating that gesture processing latency was generally compatible with real-time simulator flow. Nevertheless, some participants exhibited lower or intermittently fluctuating recognition rates, which is consistent with sensor-based hand tracking characteristics such as inter-individual differences in hand size and joint flexibility, occasional departures from the Leap Motion field of view (FoV), and transient joint-estimation errors due to sensor noise. These factors can affect threshold-based gesture features (e.g., flexion angles and inter-finger distances) and may lead to sporadic state activation delays.
To connect interaction robustness to task efficiency within the simulator, we analyzed task-level performance from the same 20 participants. The standardized workflow completion time averaged 280.2 s (SD = 23.12). Inferential analyses suggested that completion time was strongly associated with interaction performance: it showed a strong negative association with total gesture recognition rate (Spearman’s ρ = −0.911, p < 0.001; N = 20) and a strong positive association with total gesture response time (ρ = 0.837, p < 0.001; N = 20). In addition, participants stratified by the median total recognition count (55) demonstrated a significant difference in completion time (Mann–Whitney U = 0, Z = −3.764, p < 0.001; r = 0.842), with the higher-recognition group completing the workflow faster. Collectively, these within-simulator analyses indicate that interaction stability can influence procedural efficiency during simulated training; however, they do not establish transfer to real clinical performance.
Needle-placement performance was assessed using an ROI-based hit criterion, and the target-hit outcome remained high (mean hit rate = 95.0%, SD = 3.87; 19.0/20 hits on average). Because this metric is a binary educational success proxy with a near-ceiling distribution (90–100%) and does not quantify continuous mm-level error, it should be interpreted descriptively. It does not capture clinically validated accuracy, anesthetic success, or other clinical outcomes.
The use of Leap Motion-based hand tracking (rather than haptic devices) was chosen to enable an accessible and low-burden setup that can be deployed in typical educational spaces with minimal hardware. This design supports repeated practice of procedural steps, target localization strategies, and workflow rehearsal under visual guidance. However, many dental local-anesthesia simulators incorporate haptic or visuo-haptic feedback to convey insertion-related cues and improve motor fidelity; therefore, the present simulator should be viewed as complementary to hands-on or haptics-augmented training rather than a replacement for it [56].
The usability pattern reflected these design trade-offs: Q6 (perceived potential for improving procedural skills) received the highest score, whereas Q4 (realism) received the lowest score. This is plausibly explained by tracking artifacts inherent to mid-air hand tracking and the absence of tactile cues (e.g., insertion resistance, tissue contact sensation, pressure changes) that are important in clinical needle procedures [57,58,59]. Clinical IANB depends on tactile feedback (including bony contact and resistance changes) and on syringe-hand biomechanics with tissue contact constraints. Because the present simulator provides primarily visual and motion-based feedback and adopts simplified, sensor-friendly postures for stable tracking, it may not fully reproduce these tactile and biomechanical components. Therefore, the findings should be interpreted as supporting pre-clinical rehearsal of anatomical targeting and procedural sequencing under visual guidance, while tactile skill acquisition remains dependent on hands-on or haptics-augmented training.
Gesture recognition was implemented using a threshold-based, rule-based approach to avoid training-data collection and model training, reduce computational cost, and provide transparent behavior suitable for real-time educational VR. Prior work suggests that learning-based approaches using Leap Motion skeletal input can improve robustness by learning user motion patterns [60]. Accordingly, future work will examine learning-based recognition and user-specific calibration routines (e.g., estimating hand size and range of motion) to reduce sensitivity to inter-individual differences and improve recognition stability [61,62,63].
This study has several limitations. First, realism was constrained by the absence of tactile feedback. Second, gesture recognition performance varied across individuals, and the user study was conducted without personalized calibration. Third, outcomes were limited to within-simulator performance and subjective usability, without evaluation of real clinical outcomes or objective competencies in external settings. Fourth, no controlled comparisons against standard teaching methods were conducted; therefore, comparative effectiveness cannot be inferred. Finally, the sample size was modest, providing sensitivity for large effects but potentially limited power for small-to-moderate effects. Future research should incorporate controlled comparative designs (e.g., versus mannequin/phantom-head practice) and objective learning outcomes (e.g., error rates, completion time, retention, and clinically meaningful proxies), and examine whether improvements observed in the simulator transfer to real or validated training environments.

5. Conclusions

This study presented a VR simulator for IANB training using Leap Motion hand tracking and evaluated its interaction performance, task-level outcomes, and usability in a simulated setting. The results showed gesture-based interaction performance with mean recognition rates of approximately 89–91% and mean latencies of approximately 64–66 ms, which were generally compatible with real-time simulator operation. Task-level outcomes further indicated consistently high ROI-based target-hit rates under visual guidance. Inferential analyses also suggested that workflow completion time was strongly associated with interaction performance (recognition rate and response time) and differed significantly between groups stratified by recognition level.
These findings should be interpreted as preliminary evidence of technical feasibility and learner-perceived usefulness within a simulated environment, rather than as evidence of improved real-world procedural competency, comparative effectiveness versus existing training modalities, or clinical outcomes. Realism was constrained by the absence of tactile feedback, and recognition stability varied across individuals, reflecting known limitations of sensor-based mid-air hand tracking.
Future work will incorporate user-specific calibration and learning-based gesture recognition to improve robustness, and will explore optional tactile augmentation to increase procedural realism. Educational outcomes will be evaluated using controlled comparative study designs (e.g., in comparison with mannequin/phantom-head practice and haptics-augmented simulators) with objective metrics such as error rates, completion time, retention, and validated performance proxies, and will further examine whether simulator gains transfer to external or clinically relevant settings.

Author Contributions

Conceptualization, S.-Y.M. and H.-J.K.; methodology, S.-Y.M. and J.-S.K.; software, K.-W.K. and H.-J.K.; validation, J.-S.K. and H.-J.K.; formal analysis, H.-J.K.; investigation, J.-S.K.; resources, H.-J.K.; data curation, S.-Y.M. and H.-J.K.; writing—original draft preparation, J.-S.K.; writing—review and editing, S.-Y.M. and H.-J.K.; visualization, J.-S.K. and K.-W.K.; supervision, S.-Y.M.; project administration, S.-Y.M.; funding acquisition, S.-Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education under Grant RS-2022-NR075669, and in part by the National Research Foundation of Korea (NRF) grant funded by the Korean Government [Ministry of Science and ICT (MSIT)] under Grant RS-2023-00214613.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Chosun University Dental Hospital (CUDHIRB 2202001Q01).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Khalil, H. A basic review on the inferior alveolar nerve block techniques. Anesth. Essays Res. 2014, 8, 3–8.
2. Milani, A.S.; Froughreyhani, M.; Rahimi, S.; Zand, V.; Jafarabadi, M.A. Volume of anesthetic agents and IANB success: A systematic review. Anesth. Prog. 2018, 65, 16–23.
3. Marei, H.F.; Al-Jandan, B.A. Simulation-based local anaesthesia teaching enhances learning outcomes. Eur. J. Dent. Educ. 2013, 17, e44–e48.
4. Lee, J.S.; Graham, R.; Bassiur, J.P.; Lichtenthal, R.M. Evaluation of a local anesthesia simulation model with dental students as novice clinicians. J. Dent. Educ. 2015, 79, 1411–1417.
5. Moussa, R.; Alghazaly, A.; Althagafi, N.; Eshky, R.; Borzangy, S. Effectiveness of virtual reality and interactive simulators on dental education outcomes: Systematic review. Eur. J. Dent. 2022, 16, 14–31.
6. Patil, S.; Bhandi, S.; Awan, K.H.; Licardi, F.W.; Di Blasio, M.; Ronsivalle, V.; Cicciù, M.; Minervini, G. Effectiveness of haptic feedback devices in preclinical training of dental students—A systematic review. BMC Oral Health 2023, 23, 739.
7. Rosenberg, M.; Orr, D.L.; Starley, E.D.; Jensen, D.R. Student-to-student local anesthesia injections in dental education: Moral, ethical, and legal issues. J. Dent. Educ. 2009, 73, 127–132.
8. Reinschluessel, A.V.; Muender, T.; Salzmann, D.; Doering, T.; Malaka, R.; Weyhe, D. Virtual reality for surgical planning–evaluation based on two liver tumor resections. Front. Surg. 2022, 9, 821060.
9. Rad, A.A.; Vardanyan, R.; Lopuszko, A.; Alt, C.; Stoffels, I.; Schmack, B.; Ruhparwar, A.; Zhigalov, K.; Zubarevich, A.; Weymann, A. Virtual and augmented reality in cardiac surgery. Braz. J. Cardiovasc. Surg. 2022, 37, 123–127.
10. Bakhuis, W.; Sadeghi, A.H.; Moes, I.; Maat, A.P.W.M.; Siregar, S.; Bogers, A.J.J.C.; Mahtab, E.A.F. Essential surgical plan modifications after virtual reality planning in 50 consecutive segmentectomies. Ann. Thorac. Surg. 2023, 115, 1247–1255.
11. Oyekunle, D.; Matthew, U.O.; Waliu, A.O.; Fatai, L.O. Healthcare applications of Augmented Reality (AR) and Virtual Reality (VR) simulation in clinical education. J. Clin. Images Med. Case Rep. 2024, 5, 3141.
12. Queisner, M.; Eisenträger, K. Surgical planning in virtual reality: A systematic review. J. Med. Imaging 2024, 11, 062603.
13. Ujiie, H.; Chiba, R.; Yamaguchi, A.; Nomura, S.; Shiiya, H.; Fujiwara-Kuroda, A.; Kaga, K.; Eitel, C.; Clapp, T.R.; Kato, T. Developing a Virtual Reality Simulation System for Preoperative Planning of Robotic-Assisted Thoracic Surgery. J. Clin. Med. 2024, 13, 611.
14. Pottle, J. Virtual reality and the transformation of medical education. Future Healthc. J. 2019, 6, 181–185.
15. Oxford Medical Simulation. Available online: https://oxfordmedicalsimulation.com (accessed on 1 November 2024).
16. Yoon, H.; Lee, E.; Kim, C.J.; Shin, Y. Virtual Reality Simulation-Based Clinical Procedure Skills Training for Nursing College Students: A Quasi-Experimental Study. Healthcare 2024, 12, 1109.
17. Jallad, S.T.; Işık, B. The Effectiveness of Immersive Virtual Reality Simulation as an Innovative Learning Strategy for Acquisition of Clinical Skills in Nursing Education: Experimental Design. Games Health J. 2025, 14, 110–118.
18. Birrenbach, T.; Stuber, R.; Müller, C.E.; Sutter, P.-M.; Hautz, W.E.; Exadaktylos, A.K.; Müller, M.; Wespi, R.; Sauter, T.C. Virtual Reality Simulation to Enhance Advanced Trauma Life Support Trainings—A Randomized Controlled Trial. BMC Med. Educ. 2024, 24, 666.
19. Falah, J.; Khan, S.; Alfalah, T.; Alfalah, S.F.M.; Chan, W.; Harrison, D.K.; Charissis, V. Virtual Reality medical training system for anatomy education. In Proceedings of the 2014 Science and Information Conference, London, UK, 27–29 August 2014; pp. 752–758.
20. Arango, S.; Gorbaty, B.; Tomhave, N.; Shervheim, D.; Buyck, D.; Porter, S.T.; Laizzo, P.A.; Perry, T.E. A high-resolution virtual reality-based simulator to enhance perioperative echocardiography training. J. Cardiothorac. Vasc. Anesth. 2023, 37, 299–305.
21. Kim, J.S.; Kim, K.W.; Kim, S.R.; Woo, T.G.; Chung, J.W.; Yang, S.W.; Moon, S.Y. An Immersive Virtual Reality Simulator for Echocardiography Examination. Appl. Sci. 2024, 14, 1272.
22. Schild, J.; Misztal, S.; Roth, B.; Flock, L.; Luiz, T.; Lerner, D.; Herkersdorf, M.; Weaner, K.; Neuberaer, M.; Franke, A.; et al. Applying multi-user virtual reality to collaborative medical training. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen/Reutlingen, Germany, 18–22 March 2018; pp. 775–776.
23. Zackoff, M.W.; Cruse, B.; Sahay, R.D.; Zhang, B.; Sosa, T.; Schwartz, J.; Depinet, H.; Schumacher, D.; Geis, G.L. Multiuser Immersive Virtual Reality Simulation for Interprofessional Sepsis Recognition and Management. J. Hosp. Med. 2024, 19, 185–192.
24. Gohman, T.; Nisar, H.; Gupta, A.; Javed, M.J.; Rau, N. Development and Usability of a Virtual Reality Umbilical Venous Catheter Placement Simulator. Int. J. Comput. Assist. Radiol. Surg. 2024, 19, 881–889.
25. Siddique, H.; Abbas, A.; Abdul-Hamid, A. Effectiveness of Virtual Reality Simulation for Tracheostomy Education Among Healthcare Professionals. Cureus 2025, 17, e97083.
26. Jain, S.; Barua Chowdhury, B.D.; Mosier, J.M.; Subbian, V.; Hughes, K.; Son, Y.-J. Design and Development of an Integrated Virtual Reality (VR)-Based Training System for Difficult Airway Management. IEEE J. Transl. Eng. Health Med. 2025, 13, 49.
27. Zhang, N.; Ye, G.; Yang, C.; Zeng, P.; Gong, T.; Tao, L.; Zheng, Y.; Liu, Y. Benefits of Virtual Reality Training for Cardiopulmonary Resuscitation Skill Acquisition and Maintenance. Prehosp. Emerg. Care 2025, 29, 843–849.
28. Pagani, W.; Buysse, T.; Dua, K.S. Digital Platforms, Virtual Reality, and Augmented Reality in Gastrointestinal Endoscopy Training. Clin. Endosc. 2025, 58, 653–661.
29. Henniger, D.; Engelke, M.; Kreiser, J.; Riemer, V.; Wierzba, E.; Dimitriadis, S.; Meining, A.; Seufert, T.; Ropinski, T.; Hann, A. Validation of the ViGaTu Immersive Virtual Reality Endoscopy Training System for Physicians and Nurses. J. Gastrointestin. Liver Dis. 2024, 33, 226–233.
30. Wang, N.; Yang, S.; Gao, Q.; Jin, X. Immersive Teaching Using Virtual Reality Technology to Improve Ophthalmic Surgical Skills for Medical Postgraduate Students. Postgrad. Med. 2024, 136, 487–495.
31. Antaki, F.; Doucet, C.; Milad, D.; Giguère, C.-É.; Ozell, B.; Hammamji, K. Democratizing Vitreoretinal Surgery Training with a Portable and Affordable Virtual Reality Simulator in the Metaverse. Transl. Vis. Sci. Technol. 2024, 13, 5.
32. McCloskey, K.; Turlip, R.; Ahmad, H.S.; Ghenbot, Y.G.; Chauhan, D.; Yoon, J.W. Virtual and augmented reality in spine surgery: A systematic review. World Neurosurg. 2023, 173, 96–107.
33. He, Q.; Li, J.; Xiao, Y.; Tian, J.; Mao, N.; Lu, S.; Zhao, Y. An Immersive Mixed Reality Surgical Self-Training System for Precise Pedicle Screw Placement: A Randomized Controlled Trial. BMC Med. Educ. 2025, 25, 1360.
34. Konovalov, A.; Okishev, D.; Pilipenko, Y.; Grebenev, F.; Artemyev, A.; Mamedbekova, G.; Abzalov, T.; Erokhin, I.; Sergeeva, S.; Eliava, S. Application of Virtual Reality for Developing a Ventriculostomy Simulator: Concept and Initial Results. Surg. Neurol. Int. 2025, 16, 525.
35. Liang, Y.; Huang, H.; Tan, Y.-B.; Li, T.; Huang, W.; Zhang, Q.-L.; Liu, Z.-W.; Kuang, M. Construction and Implementation of a Laparoscopic Skill Training Course Based on a Smartphone Application and Virtual Reality. BMC Med. Educ. 2024, 24, 1111.
36. Chen, X.; Liao, P.; Liu, S.; Wang, L.; Zhao, Y.; Hu, J.; Zhang, Z. Effect of Virtual Reality Training to Enhance Laparoscopic Assistance Skills. BMC Med. Educ. 2024, 24, 29.
37. Knobovitch, R.M.; Tokuno, J.; Botelho, F.; Fried, H.B.; Carver, T.E.; Fried, G.M. Virtual Reality Training Improves Procedural Skills in Mannequin-Based Simulation in Medical Students: A Pilot Randomized Controlled Trial. Surg. Innov. 2025, 32, 364–373.
38. Li, P.; Xu, B.; Zhang, X.; Fang, D.; Zhang, J. Design and development of a personalized virtual reality-based training system for vascular intervention surgery. Comput. Methods Programs Biomed. 2024, 249, 108142.
39. Rahman, O.F.; Kunze, K.N.; Yao, K.; Kwiecien, S.Y.; Ranawat, A.S.; Banffy, M.B.; Kelly, B.T.; Galano, G.J. Hip Arthroscopy Simulator Training with Immersive Virtual Reality Has Similar Effectiveness to Nonimmersive Virtual Reality. Arthroscopy 2024, 40, 2840–2849.e3.
40. McKinney, B.; Dbeis, A.; Lamb, A.; Frousiakis, P.; Sweet, S. Virtual Reality Training in Unicompartmental Knee Arthroplasty: A Randomized, Blinded Trial. J. Surg. Educ. 2022, 79, 1526–1535.
41. Andersen, N.L.; Jensen, R.O.; Konge, L.; Laursen, C.B.; Falster, C.; Jacobsen, N.; Elhakim, M.T.; Bojsen, J.A.; Riishede, M.; Franse, M.L.; et al. Immersive Virtual Reality in Basic Point-of-Care Ultrasound Training: A Randomized Controlled Trial. Ultrasound Med. Biol. 2023, 49, 178–185.
42. Kennedy, G.A.; Pedram, S.; Sanzone, S. Improving safety outcomes through medical error reduction via virtual reality-based clinical skills training. Saf. Sci. 2023, 165, 106200.
43. O’Connor, M.; Rainford, L. The impact of 3D virtual reality radiography practice on student performance in clinical practice. Radiography 2023, 29, 159–164.
44. Gan, W.; Mok, T.-N.; Chen, J.; She, G.; Zha, Z.; Wang, H.; Li, H.; Li, J.; Zheng, X. Researching the application of virtual reality in medical education: One-year follow-up of a randomized trial. BMC Med. Educ. 2023, 23, 3.
45. Stevanie, C.; Ariestiana, Y.Y.; Anshar, M.; Sukotjo, C.; Boffano, P.; Forouzanfar, T.; Kurniawan, S.H.; Ruslin, M. Immersive Reality Surgical Training for Le Fort I Orthognathic Surgery: Initial Results of a Randomized Feasibility Study. J. Cranio-Maxillofac. Surg. 2025, 53, 1009–1017.
46. Hsu, M.; Chang, Y.-C. Prediction of Dental Students’ Pre-Clinical Crown Preparation Performances by a Virtual Reality Haptic-Based Dental Simulator. J. Dent. Sci. 2025, 20, 2301–2306.
47. Chuan, A.; Bogdanovych, A.; Moran, B.; Chowdhury, S.; Lim, Y.C.; Tran, M.T.; Lee, T.Y.; Duong, J.; Qian, J.; Bui, T.; et al. Using Virtual Reality to Teach Ultrasound-Guided Needling Skills for Regional Anaesthesia: A Randomized Controlled Trial. J. Clin. Anesth. 2024, 97, 111535.
  48. Ayoub, A.; Pulijala, Y. The application of virtual reality and augmented reality in Oral & Maxillofacial Surgery. BMC Oral Health 2019, 19, 238. [Google Scholar] [CrossRef]
  49. Ultraleap. Available online: https://www.ultraleap.com (accessed on 1 November 2024).
  50. Kim, K.-W.; Kim, J.-S.; Kim, H.-J.; Moon, S.-Y. Development and Evaluation of a Virtual Reality and Haptic-Integrated Clinical Simulation System for OSCE-Based Blood-Related Procedures. IEEE Access 2025, 13, 192947–192957. [Google Scholar] [CrossRef]
  51. Tashiro, Y.; Miyafuji, S.; Kojima, Y.; Kiyofuji, S.; Kin, T.; Igarashi, T.; Koike, H. MR Microsurgical Suture Training System with Level-Appropriate Support. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24), Honolulu, HI, USA, 11–16 May 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1–19. [Google Scholar] [CrossRef]
  52. Khoury, J.; Townsend, G. Neural Blockade Anaesthesia of the Mandibular Nerve and Its Terminal Branches: Rationale for Different Anaesthetic Techniques Including Their Advantages and Disadvantages. Anesthesiol. Res. Pract. 2011, 1, 307423. [Google Scholar] [CrossRef]
  53. Lee, C.R.; Yang, H.J. Alternative techniques for failure of conventional inferior alveolar nerve block. J. Dent. Anesth. Pain Med. 2019, 19, 125. [Google Scholar] [CrossRef]
  54. Roi, C.I.; Roi, A.; Nicoară, A.; Motofelea, A.C.; Riviș, M. Retromolar Triangle Anesthesia Technique: A Feasible Alternative to Classic? J. Clin. Med. 2023, 12, 5829. [Google Scholar] [CrossRef]
  55. Miller, R.B. Response time in man-computer conversational transactions. In AFIPS '68 (Fall, Part I), Proceedings of the Fall Joint Computer Conference, Part I, San Francisco, CA, USA, 9–11 December 1968; Thomson Book Company: Washington, DC, USA, 1968; pp. 267–277. [Google Scholar] [CrossRef]
  56. Samuel, S.; Elvezio, C.; Khan, S.; Bitzer, L.Z.; Moss-Salentijn, L.; Feiner, S. Visuo-haptic VR and AR guidance for dental nerve block education. IEEE Trans. Vis. Comput. Graph. 2024, 30, 2839–2848. [Google Scholar] [CrossRef]
  57. Rangarajan, K.; Davis, H.; Pucher, P.H. Systematic review of virtual haptics in surgical simulation: A valid educational tool? J. Surg. Educ. 2020, 77, 337–347. [Google Scholar] [CrossRef]
  58. Fahmi, F.; Tanjung, K.; Nainggolan, F.; Siregar, B.; Mubarakah, N.; Zarlis, M. Comparison study of user experience between virtual reality controllers, leap motion controllers, and senso glove for anatomy learning systems in a virtual reality environment. IOP Conf. Ser. Mater. Sci. Eng. 2020, 851, 012024. [Google Scholar] [CrossRef]
  59. Cox, C.M.J.; Hicks, B.; Gopsill, J.; Snider, C. From haptic interaction to design insight: An empirical comparison of commercial hand-tracking technology. Proc. Des. Soc. 2023, 3, 1965–1974. [Google Scholar] [CrossRef]
  60. Kim, K.-W.; Kim, J.-S.; Park, H.-J. Hand Gesture-based Interface for Navigating a Virtual Space using Leap Motion and Machine Learning. Korean J. Comput. Des. Eng. 2021, 26, 239–248. [Google Scholar] [CrossRef]
  61. Aziz, O.; Musngi, M.; Park, E.J.; Mori, G.; Robinovitch, S.N. A comparison of accuracy of fall detection algorithms (threshold-based vs. machine learning) using waist-mounted tri-axial accelerometer signals from a comprehensive set of falls and non-fall trials. Med. Biol. Eng. Comput. 2017, 55, 45–55. [Google Scholar] [CrossRef] [PubMed]
  62. Björkelund, A.; Ohlsson, M.; Forberg, J.L.; Mokhtari, A.; de Capretz, P.O.; Ekelund, U.; Björk, J. Machine learning compared with rule-in/rule-out algorithms and logistic regression to predict acute myocardial infarction based on troponin T concentrations. JACEP Open 2021, 2, e12363. [Google Scholar] [CrossRef]
  63. Morgan, D.J.; Bame, B.; Zimand, P.; Dooley, P.; Thom, K.A.; Harris, A.D.; Bentzen, S.; Ettinger, W.; Garrett-Ray, S.D.; Tracy, J.K.; et al. Assessment of machine learning vs standard prediction rules for predicting hospital readmissions. JAMA Netw. Open 2019, 2, e190348. [Google Scholar] [CrossRef]
Figure 1. VR Simulator Architecture for IANB Education.
Figure 2. Constructed 3D anatomical models: (a) skin; (b) muscle; (c) bone; and (d) nerve. Colors are for visualization only.
Figure 3. Leap Motion data–3D model mapping: (a) Acquired Leap Motion data; (b) Hand motion; (c) 3D model mapping.
Figure 4. Feature computation for gesture definition: (a) Angle calculation; (b) Distance calculation.
Figure 5. Three defined gestures: (a) GUI selection; (b) Grip; (c) Injection.
Figure 6. Implemented VR simulator during use: (a) GUI selection; (b) Syringe grip; (c) Exploration of the local anesthesia target area; (d) Syringe injection; (e) Anatomical model visualization.
Figure 7. Constructed testbed and performance evaluation setup: (a) Testbed interface; (b) Participant performing the evaluation.
Figure 8. Usability evaluation session in progress.
Figure 9. Results of the usability evaluation: (a) Response rates; (b) Averages and standard deviations.
Table 1. Gesture recognition features and distance and angle ranges.

| Gesture Type | Feature Points | Range |
| GUI selection | p_md, p_mp, p_mm | 70° < θ_mp < 100° |
| | p_rd, p_rp, p_rm | 70° < θ_rp < 100° |
| | p_ld, p_lp, p_lm | 70° < θ_lp < 100° |
| | p_tt, p_p | D_tp > 1 |
| | p_it, p_p | D_ip > 0.7 |
| Grip | p_rd, p_rp, p_rm | 70° < θ_rp < 100° |
| | p_ld, p_lp, p_lm | 70° < θ_lp < 100° |
| | p_md, p_mp, p_mm | 110° < θ_mp < 130° |
| | p_id, p_ip, p_im | 110° < θ_ip < 130° |
| | p_it, p_tt | D_it > 0.5 |
| Injection | p_rd, p_rp, p_rm | 70° < θ_rp < 100° |
| | p_ld, p_lp, p_lm | 70° < θ_lp < 100° |
| | p_md, p_mp, p_mm | 110° < θ_mp < 130° |
| | p_id, p_ip, p_im | 110° < θ_ip < 130° |
| | p_it, p_tt | D_it < 0.5 |

Note: Subscripts i/m/r/l denote the index, middle, ring, and little fingers, and d/p/m their distal, proximal, and metacarpal points; tt and it are the thumb and index tips, and p the palm reference point. θ is the joint angle (degrees, Figure 4a) and D the distance between two points (Figure 4b).
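Read as pseudocode, Table 1 defines a threshold-based classifier: each finger's flexion is the angle at the proximal joint spanned by the distal and metacarpal points (Figure 4a), and fingertip distances (Figure 4b) disambiguate the three gestures. The Python sketch below illustrates this rule set; the landmark keys (e.g., 'md' = middle distal, 'tt' = thumb tip, 'p' = palm point), the function names, and the treatment of the distance thresholds as the system's normalized units are our assumptions, since the published implementation runs inside Unity.

```python
import numpy as np

def joint_angle(p_d, p_p, p_m):
    """Flexion angle (degrees) at the proximal joint p_p, spanned by
    the distal point p_d and the metacarpal point p_m (cf. Figure 4a)."""
    p_d, p_p, p_m = map(np.asarray, (p_d, p_p, p_m))
    v1, v2 = p_d - p_p, p_m - p_p
    cos_t = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

def dist(a, b):
    """Euclidean distance between two landmarks (cf. Figure 4b)."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def classify_gesture(lm):
    """Apply the Table 1 thresholds to one frame of hand landmarks.

    lm maps landmark names to 3D positions; 'i'/'m'/'r'/'l' prefix the
    index/middle/ring/little fingers, 'd'/'p'/'m' suffix the distal/
    proximal/metacarpal points, 'it'/'tt' are the index/thumb tips and
    'p' the palm point (naming is illustrative, not the Leap API's).
    """
    theta = {f: joint_angle(lm[f + 'd'], lm[f + 'p'], lm[f + 'm'])
             for f in 'imrl'}
    d_it = dist(lm['it'], lm['tt'])
    d_tp, d_ip = dist(lm['tt'], lm['p']), dist(lm['it'], lm['p'])

    # Pointing hand: middle/ring/little folded, thumb and index away from palm.
    if all(70 < theta[f] < 100 for f in 'mrl') and d_tp > 1.0 and d_ip > 0.7:
        return 'GUI selection'
    # Syringe hold: ring/little folded, index/middle half-flexed.
    if all(70 < theta[f] < 100 for f in 'rl') and all(110 < theta[f] < 130 for f in 'im'):
        # Closing the thumb-index pinch below the threshold triggers the injection.
        return 'injection' if d_it < 0.5 else 'grip'
    return None
```

In a real-time loop these rules would be evaluated on every tracking frame; the response times in Table 3 correspond to the delay between performing a gesture and the simulator entering the matching state.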
Table 2. Questionnaire items for the usability evaluation.

| Item No. | Questionnaire Item |
| Q1 | Practicing local anesthesia using the VR simulator helps me understand the local anesthesia technique. |
| Q2 | The simulator's virtual anatomical model helps me understand intraoral anatomy. |
| Q3 | Viewpoint changes using the HMD are similar to real-life viewing. |
| Q4 | Practice using the Leap Motion device feels realistic, similar to real hands-on practice. |
| Q5 | The practice session using the VR simulator is interesting. |
| Q6 | I believe that practicing with the VR simulator can improve my procedural skills. |
| Q7 | I did not experience nausea, dizziness, or headache while using the simulator. |
| Q8 | I would like to learn other dental procedures using a VR simulator. |
Table 3. Results of the Interaction Performance Evaluation.

| Gesture Type | Item | Participants #1–#20 | Avg ± SD | 95% CI [L–U] |
| GUI selection | Success count | 15, 18, 19, 18, 20, 19, 17, 20, 19, 16, 18, 19, 20, 18, 17, 19, 15, 16, 16, 18 | 17.85 ± 1.56 | 17.10–18.60 |
| | Recognition rate (%) | 75, 90, 95, 90, 100, 95, 85, 100, 95, 80, 90, 95, 100, 90, 85, 95, 75, 80, 80, 90 | 89.25 ± 7.79 | 85.51–92.99 |
| | Response time (ms) | 86, 65, 67, 72, 60, 59, 70, 48, 55, 78, 63, 59, 60, 65, 69, 62, 73, 67, 70, 61 | 65.45 ± 8.16 | 61.53–69.37 |
| Grip | Success count | 16, 18, 19, 20, 19, 18, 19, 17, 19, 15, 18, 18, 18, 19, 16, 18, 17, 15, 16, 20 | 17.75 ± 1.48 | 17.04–18.46 |
| | Recognition rate (%) | 80, 90, 95, 100, 95, 90, 95, 85, 95, 75, 90, 90, 90, 95, 80, 90, 85, 75, 80, 100 | 88.75 ± 7.40 | 85.20–92.30 |
| | Response time (ms) | 76, 63, 65, 50, 57, 61, 63, 69, 58, 75, 68, 68, 63, 65, 71, 64, 68, 75, 71, 55 | 65.25 ± 6.71 | 62.03–68.47 |
| Injection | Success count | 16, 19, 20, 17, 18, 20, 19, 20, 18, 15, 19, 20, 20, 19, 17, 18, 15, 16, 17, 19 | 18.10 ± 1.64 | 17.31–18.89 |
| | Recognition rate (%) | 80, 95, 100, 85, 90, 100, 95, 100, 90, 75, 95, 100, 100, 95, 85, 90, 75, 80, 85, 95 | 90.50 ± 8.20 | 86.56–94.44 |
| | Response time (ms) | 73, 60, 53, 67, 63, 50, 59, 55, 68, 77, 61, 53, 59, 63, 70, 68, 73, 72, 68, 63 | 63.75 ± 7.36 | 60.22–67.28 |
| Total | Success count | 47, 55, 58, 55, 57, 57, 55, 57, 56, 46, 55, 57, 58, 56, 50, 55, 47, 47, 49, 57 | 53.70 ± 4.12 | 51.75–55.68 |
| | Recognition rate (%) | 78, 92, 97, 92, 95, 95, 92, 95, 93, 77, 92, 95, 97, 93, 83, 92, 78, 78, 82, 95 | 89.50 ± 6.87 | 86.20–92.80 |
| | Response time (ms) | 235, 188, 185, 189, 180, 170, 192, 172, 181, 230, 192, 180, 182, 193, 210, 194, 214, 214, 209, 179 | 194.45 ± 17.88 | 185.86–203.04 |

Note: Success count = number of correctly recognized trials out of 20 attempts per participant. Recognition rate (%) = (success count / 20) × 100. Response time (ms) = latency from gesture execution to state activation. Values are reported as mean ± SD across participants (n = 20); 95% CI denotes the confidence interval of the mean (t-distribution, df = 19).
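As a worked check of the summary statistics in Table 3, the snippet below (an illustrative recomputation, not the authors' analysis script) rebuilds the mean and the 95% confidence interval for the GUI-selection recognition rates using the t-distribution with df = 19.

```python
import numpy as np
from scipy import stats

# Per-participant GUI-selection recognition rates (%) from Table 3.
rates = np.array([75, 90, 95, 90, 100, 95, 85, 100, 95, 80,
                  90, 95, 100, 90, 85, 95, 75, 80, 80, 90])

n = rates.size                  # 20 participants
mean = rates.mean()             # 89.25
s = rates.std(ddof=1)           # 7.99 (sample SD; reproduces the CI below)
# The tabulated +/- 7.79 appears to match the population SD instead:
s_pop = rates.std(ddof=0)       # 7.79

half = stats.t.ppf(0.975, df=n - 1) * s / np.sqrt(n)
print(f"{mean:.2f}, 95% CI [{mean - half:.2f}, {mean + half:.2f}]")
# -> 89.25, 95% CI [85.51, 92.99], matching Table 3
```

The same computation applies line for line to the Table 4 metrics, which use an identical n = 20, df = 19 layout.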
Table 4. Results of the Task-Level Performance Evaluation.

| Item | Participants #1–#20 | Avg ± SD | 95% CI [L–U] |
| Task completion time (s) | 337, 275, 261, 282, 267, 268, 299, 257, 259, 324, 273, 259, 257, 263, 289, 270, 301, 312, 291, 260 | 280.20 ± 23.12 | 269.10–291.30 |
| ROI hit | 18, 19, 20, 18, 20, 19, 18, 20, 19, 18, 18, 19, 20, 19, 20, 19, 18, 19, 19, 20 | 19.00 ± 0.77 | 18.63–19.37 |
| ROI hit rate (%) | 90, 95, 100, 90, 100, 95, 90, 100, 95, 90, 90, 95, 100, 95, 100, 95, 90, 95, 95, 100 | 95.00 ± 3.87 | 93.14–96.86 |

Note: Task completion time (s) = time to complete the standardized workflow. ROI hit = number of successful attempts (out of 20) where the needle tip entered the predefined ROI. ROI hit rate (%) = (ROI hit / 20) × 100. Values are mean ± SD across participants (n = 20); 95% CI is for the mean (t-distribution, df = 19).
Table 5. Spearman's rank correlations between interaction-performance metrics and task completion time (N = 20).

| Variable Pair | Spearman's ρ | p-Value | N |
| Total recognition rate vs. task completion time | −0.911 | <0.001 | 20 |
| Total response time vs. task completion time | 0.837 | <0.001 | 20 |

Note: ρ, Spearman's rank correlation coefficient; p-values are two-sided. Total recognition rate (%) and total response time (ms) are participant-level summaries across the three gestures.
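Both coefficients in Table 5 follow directly from the participant-level summaries reproduced in Tables 3 and 4; a minimal SciPy sketch (again illustrative rather than the original analysis code):

```python
from scipy.stats import spearmanr

# Participant-level totals from Table 3 and completion times from Table 4.
total_recognition = [78, 92, 97, 92, 95, 95, 92, 95, 93, 77,
                     92, 95, 97, 93, 83, 92, 78, 78, 82, 95]          # %
total_response = [235, 188, 185, 189, 180, 170, 192, 172, 181, 230,
                  192, 180, 182, 193, 210, 194, 214, 214, 209, 179]  # ms
completion_time = [337, 275, 261, 282, 267, 268, 299, 257, 259, 324,
                   273, 259, 257, 263, 289, 270, 301, 312, 291, 260]  # s

for name, x in [("recognition rate", total_recognition),
                ("response time", total_response)]:
    rho, p = spearmanr(x, completion_time)   # ties handled via average ranks
    print(f"total {name} vs. completion time: rho = {rho:.3f}, p = {p:.2g}")
# Expected to be close to Table 5: rho = -0.911 and 0.837, both p < 0.001.
```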
Table 6. Mann–Whitney U test comparing task completion time between recognition-level groups.

| Mann–Whitney U | Z | p-Value | Effect Size r |
| 0 | −3.764 | <0.001 | 0.842 |

Note: Groups were defined by the median total recognition count (55). p-values are two-sided. Effect size r was computed as |Z|/√N (N = 20).
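The median-split comparison in Table 6 can be reproduced from the same participant-level values. The sketch below is a hedged approximation: it reports the smaller of the two U statistics (as statistics packages conventionally do) and derives Z from the normal approximation without a tie correction, so the figures land near, but not exactly on, the tabulated ones.

```python
import numpy as np
from scipy.stats import mannwhitneyu

total_success = np.array([47, 55, 58, 55, 57, 57, 55, 57, 56, 46,
                          55, 57, 58, 56, 50, 55, 47, 47, 49, 57])
completion = np.array([337, 275, 261, 282, 267, 268, 299, 257, 259, 324,
                       273, 259, 257, 263, 289, 270, 301, 312, 291, 260])

median = np.median(total_success)              # = 55
low = completion[total_success <= median]      # lower-recognition group
high = completion[total_success > median]      # higher-recognition group

u1, p = mannwhitneyu(low, high, alternative='two-sided')
n1, n2 = len(low), len(high)
u = min(u1, n1 * n2 - u1)                      # conventionally reported U
# Normal approximation for Z (no tie correction, illustration only).
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r = abs(z) / np.sqrt(n1 + n2)                  # effect size r = |Z| / sqrt(N)
print(f"U = {u}, Z = {z:.3f}, p = {p:.4f}, r = {r:.3f}")
# -> U = 0, Z = -3.761, r = 0.841 (Table 6: -3.764 and 0.842 with tie correction)
```

U = 0 reflects complete separation: every participant in the lower-recognition group took longer than every participant in the higher-recognition group.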
Table 7. Shapiro–Wilk normality test for usability summary scores (n = 40).

| Variable | Shapiro–Wilk W | N | p-Value |
| Total (sum score) | 0.956 | 40 | 0.122 |
| Avg (mean score) | 0.956 | 40 | 0.122 |

Note: Total = sum of eight items; Avg = Total/8 (range 1–5). p-values are two-sided.
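The two rows of Table 7 are necessarily identical because Avg is Total divided by 8 and the Shapiro–Wilk statistic is invariant to linear rescaling. A minimal sketch of the test call follows; the score array is a clearly labeled placeholder, since the per-participant usability scores are not tabulated here.

```python
import numpy as np
from scipy.stats import shapiro

# Placeholder data only: 40 hypothetical mean usability scores (1-5 scale).
rng = np.random.default_rng(0)
avg_scores = np.clip(rng.normal(4.0, 0.5, size=40), 1.0, 5.0)
total_scores = avg_scores * 8          # linear rescaling: same W and p

w_avg, p_avg = shapiro(avg_scores)
w_tot, p_tot = shapiro(total_scores)
assert np.isclose(w_avg, w_tot)        # Shapiro-Wilk is scale-invariant
print(f"W = {w_avg:.3f}, p = {p_avg:.3f}")
# Table 7 reports W = 0.956, p = 0.122 for the study's real data.
```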
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
