Article

A Randomized Controlled Trial of ABCD-IN-BARS Drone-Assisted Emergency Assessments

1
School of Nursing and Health Sciences, Hong Kong Metropolitan University, 1 Sheung Shing Street, Ho Man Tin, Hong Kong, China
2
Hong Kong Institute of Paramedicine, 11D, Pos Shau Centre, 115 How Ming Street, Kwun Tong, Kowloon, Hong Kong, China
3
S.K. Yee School of Health Sciences, Saint Francis University, 2 Chui Ling Lane, Tseung Kwan O, New Territories, Hong Kong, China
*
Author to whom correspondence should be addressed.
Drones 2025, 9(10), 687; https://doi.org/10.3390/drones9100687
Submission received: 9 August 2025 / Revised: 24 September 2025 / Accepted: 30 September 2025 / Published: 3 October 2025

Abstract

Emergency medical services confront significant challenges in delivering timely patient assessments within geographically isolated or disaster-impacted regions. While drones (unmanned aircraft systems, UAS) show transformative potential in healthcare, standardized protocols for drone-assisted patient evaluations remain underdeveloped. This study introduces the ABCD-IN-BARS protocol, a 9-step telemedicine checklist integrating patient-assisted maneuvers and drone technology to systematize remote emergency assessments. A wait-list randomized controlled trial with 68 first-aid-trained volunteers evaluated the protocol’s feasibility. Participants underwent web-based modules and in-person simulations and were randomized into immediate-training or wait-list control groups. The ABCD-IN-BARS protocol was developed via a content validity approach, incorporating expert-rated items from the telemedicine literature. Outcomes included time-to-assessment and provider confidence (Modified Cooper–Harper Scale), measured at baseline, post-training, and 3-month follow-up. Ethical approval and informed consent were obtained. Most participants could complete the assessment with a cue card within 4 min. A mixed-design repeated measures ANOVA assessed the effect of Time (baseline, post-test, 3-month follow-up; within-subject) on assessment durations. Assessment times improved significantly across the three time points (p = 0.008) under the standardized protocol, while patterns were similar across groups (p = 0.101), reflecting skill retention at 3 months regardless of injury scenario. Protocol adherence in simulated injury identification increased from 63.3% pre-training to 100% post-training. Provider confidence remained high (MCH scores: 2.4–2.7/10), and Technology Acceptance Model (TAM) ratings emphasized strong Perceived Usefulness (PU2: M = 4.48) despite moderate ease-of-use challenges (EU2: M = 4.03).
Qualitative feedback highlighted workflow benefits but noted challenges in drone maneuvering. The ABCD-IN-BARS protocol effectively standardizes drone-assisted emergency assessments, demonstrating retained proficiency and high usability. While sensory limitations persist, its modular design and alignment with ABCDE principles offer a scalable solution for prehospital care in underserved regions. Further multicenter validation is needed to generalize findings.

1. Introduction

Emergency medical services confront significant challenges in delivering timely patient assessments within geographically isolated or disaster-impacted regions. Drones (unmanned aircraft systems, UAS), remotely operated aircraft with expanding healthcare applications [1], offer transformative potential. Krey et al. [2] delineate four key medical use cases: medical supply delivery, telemedicine diagnostics, emergency surveillance, and intra-hospital transport. Systematic reviews [3,4,5] confirm drones’ efficacy in logistics and environmental assessments. Despite these advancements, a significant research gap persists in the standardized use of drones for remote patient assessment. No structured protocol exists for conducting a comprehensive clinical evaluation, especially one aligned with established clinical frameworks such as the ABCDE (Airway, Breathing, Circulation, Disability, Exposure) approach, which lacks telemedicine adaptations [6] and support for drone-mediated video interactions. Therefore, despite accelerated telehealth adoption during the COVID-19 pandemic [7], there is no literature on the reproducibility, safety, and scalability of drone-assisted medical interventions in real-world settings, highlighting a critical need for evidence-based telemedicine protocols tailored to robotic systems.
To address this gap, we introduce the ABCD-IN-BARS protocol, a novel 9-step checklist integrating drone technology with evidence-based telemedicine practices. This protocol emphasizes patient-assisted maneuvers (e.g., self-palpation) and provider-guided observations to systematize remote assessments. Through a randomized controlled trial involving first-aid-trained volunteers, this study evaluates the feasibility and effectiveness of ABCD-IN-BARS in simulated emergency scenarios via drone, focusing on training outcomes and operational standardization. We hypothesize that the protocol will improve the feasibility, accuracy, and efficiency of remote assessments conducted via drones. This study addresses critical human-factor and systems-integration challenges at the intersection of robotics and medicine, paving the way for scalable, reproducible drone-mediated emergency care and ultimately enhancing the response times and reach of medical services in the most challenging, time-sensitive situations.

2. Methods

2.1. Study Design and Participation

Sixty-eight first-aid-trained volunteers were recruited via convenience sampling in Hong Kong. This sample was chosen to ensure a controlled and reproducible evaluation of the protocol structure before future studies with real patients. These volunteers possess basic emergency medical training similar to that of real-world non-medical search and rescue teams, making them suitable for simulating the target users. Inclusion criteria were completion of first aid certificate training and age above 18 years. Exclusion criteria were professional medical experience, such as former emergency medical technicians, or advanced clinical training, such as doctors, nurses, and allied health professionals. This ensured that participants had no prior familiarity with the specific protocol and no advanced medical knowledge that could introduce performance bias. Recruitment occurred between August and November 2023; follow-up concluded in January 2024.

2.2. Intervention Protocol

2.2.1. Training

Participants completed a web-based module covering drone regulations (Hong Kong Civil Aviation Department), basic control maneuvers, and safety protocols. Novices progressed from visual observers to manual pilots under instructor guidance, practicing telemedicine assessments while instructors provided real-time support and took control of the drone when necessary for safety. Both intervention and control groups received identical training (web-based modules and supervised drone operation sessions at High Island Reservoir, Sai Kung) and applied the ABCD-IN-BARS protocol identically during assessments. Both groups were further randomized to evaluate standardized patients in either non-injury scenarios (scripted responses indicating no pain or functional limitations) or simulated acute ankle sprains (scripted injury-specific responses; ankle sprain is the most common injury in Hong Kong search and rescue operations, typically sustained when hikers are trapped on mountains after slips and falls).

2.2.2. Device

A DJI Mini 4 Pro (249 g) drone transmitted real-time video to a remote controller, with a 68 g walkie-talkie enabling one-way audio communication (patient to pilot), as shown in Figure 1. Flight duration was capped at 19 min, and wind speeds were monitored via anemometer; flights were aborted if wind speeds exceeded 12 m/s (43.2 km/h). All flights in this study operated within these limits, and image jitter remained negligible for vision-dependent measurements thanks to the drone’s built-in gimbal.

2.2.3. Drone In-Flight Patient Assessment Development

The design of the ABCD-IN-BARS protocol followed a quantitative content validity approach [8]. In the development stage, potential items were generated from telemedicine studies [6,9] and the Medical Priority Dispatch System key question set [10], then grouped into domains (ABCD, etc.). In the second stage, judgment and quantification, an item list was assembled based on face validity from the authors’ experience in telemedicine patient assessment, and experts rated each item’s relevance on a 5-point Likert scale. Items with significant expert agreement (p < 0.05) were retained; the remaining items were ranked by the standard deviation (SD) of their ratings, and low-ranking items were excluded, yielding the final ABCD-IN-BARS protocol [8]. Participants who received the intervention were not involved in the design or reporting of this trial, given its focus on provider training.
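As a sketch of this retention logic, the snippet below computes an item-level content validity index (I-CVI, the proportion of experts rating an item 4 or 5 on the 5-point scale) and ranks retained items by rating dispersion. The 0.78 cut-off is Lynn’s widely cited I-CVI threshold rather than the p < 0.05 criterion used in this study, and the item names and ratings are invented for illustration.

```python
import statistics

def item_cvi(ratings, relevant_min=4):
    """Item-level content validity index: the proportion of experts
    rating the item as relevant (>= relevant_min on a 1-5 scale)."""
    return sum(r >= relevant_min for r in ratings) / len(ratings)

def retain_items(items, threshold=0.78):
    """Keep items whose I-CVI meets the threshold, then rank them by
    the standard deviation of their ratings (lower SD = stronger
    expert agreement), echoing the SD-based ranking described above."""
    kept = {name: r for name, r in items.items() if item_cvi(r) >= threshold}
    return sorted(kept, key=lambda name: statistics.stdev(kept[name]))

# Hypothetical panel of five experts rating three candidate items
ratings = {
    "ask_can_you_speak":  [5, 5, 4, 5, 4],
    "check_jewelry":      [2, 3, 1, 2, 5],   # low agreement, dropped
    "self_palpate_pulse": [4, 4, 4, 4, 5],
}
print(retain_items(ratings))  # check_jewelry is excluded
```

A real panel would also document the number of experts and the exact agreement statistic, per Lynn [8].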

2.2.4. ABCD-IN-BARS Protocols

Airway, Breathing, Circulation, Disability, Injuries, Neck, Back, Abdomen, Range of Motion, Stand (ABCD-IN-BARS) is a structured telemedicine framework designed for drone-assisted emergency evaluations. The ABCD-IN-BARS checklist is available in Supplementary File S1. Below, each component is detailed with evidence-based rationale and citations supporting its clinical application.

2.2.5. Airway (A)

Providers initiate airway assessment by instructing the patient, “Can you speak?” to evaluate vocalization and rule out obstruction. Visual inspection via the drone camera focuses on oral secretions, facial or neck swelling (e.g., periorbital edema), or asymmetry. Difficulty speaking between breaths (DSBB), a key indicator of critical illness in prehospital triage (prevalence: 50.3% [11]), is documented as a sign of potential airway compromise.

2.2.6. Breathing (B)

Patients are instructed to “Take a deep breath and let it out slowly.” Providers count respiratory rates and observe for accessory muscle use, paradoxical breathing, or cyanosis via drone feed. Respiratory distress is identified if patients cannot hold their breath [12], with telemedicine-based assessments demonstrating strong inter-rater reliability (κ = 0.85 for distress; [13]).
Patients experiencing dyspnea are directed to lean forward to engage diaphragmatic contraction and increase lung volume [14]. This maneuver has been shown to improve breathing in dyspneic adults [15] and pediatric asthma cases [16], as the forward-leaning posture promotes diaphragmatic contraction and chest wall motion, thereby easing respiration.

2.2.7. Circulation (C)

Remote pulse assessment involves patient self-palpation of carotid pulses, with instructions to tap their thigh once (60 bpm) or twice (120 bpm) per heartbeat. Suspected arrhythmias are cross-verified via wearable devices (if available). Providers zoom the drone camera to detect pallor or diaphoresis. This method aligns with telecardiology studies showing 94.6% concordance between remote and in-person pulse checks [17].

2.2.8. Disability (D)

Patients are instructed to point to the drone to assess visual clarity. Motor coordination is assessed using the finger-to-nose test and arm elevation (45° for 5 s) to evaluate upper extremity strength (C6–T1 nerve roots). These protocols, derived from the Telehealth version of the Buffalo Concussion Physical Examination (Tele-BCPE), demonstrated reliable inter-modality agreement (ICC = 0.95 [95% CI 0.86–0.98, p < 0.001]) and inter-rater agreement (ICC = 0.88 [95% CI 0.71–0.95, p < 0.001]) compared with in-person examination [18].

2.2.9. Injuries (I)

Patients are directed to “Identify wounds, bleeding, or deformities” and point to affected areas. Providers assess wound severity via drone camera, while patients self-report pain (0–10 scale). This approach aligns with evidence suggesting telemedicine reduces emergency departments’ length of stay in trauma cases [19].

2.2.10. Neck (N)

Cervical spine evaluation follows the Canadian C-Spine Rules (CCR): high-risk factors (age ≥ 65, dangerous mechanism, paresthesia), active head rotation (45° left/right), and absence of midline tenderness [20]. Restricted motion, pain, or guarding observed via drone feed indicates potential injury. The same approach was adopted in the Tele-BCPE, which was mentioned above [18].

2.2.11. Back (B)

Patients are asked to “touch your back” and report tenderness or stiffness. Asymmetry in rotation, tenderness over spinous processes, or inability to complete the movement may indicate fractures or soft tissue injury, consistent with tele-spine assessment frameworks [21].

2.2.12. Abdomen (A)

Abdominal evaluation involves patient self-palpation: “Press deeply and point to tender areas.” Providers observe guarding, distension, or pain-related facial expressions via drone. Telemedicine assessments correlate with in-person imaging decisions (80% agreement; [22]), though limited by absent haptic feedback.

2.2.13. Range of Motion (R)

Patients perform joint movements (e.g., “squeeze and move your joints”) while providers assess for swelling, crepitus, or restricted mobility, consistent with telehealth spine examination guidelines [21].

2.2.14. Stand (S)

Gait and balance are evaluated by instructing patients to “stand up and walk if safe.” The drone’s camera captures unsteadiness, limping, or inability to bear weight. The Romberg test (standing with eyes closed) further detects proprioceptive deficits.

2.3. Study Outcome Measures

Study outcomes were assessed at three time points: baseline (pre-training), post-course, and 3-month follow-up. The primary outcome, time-to-assessment (seconds) for each protocol domain, was quantified using synchronized timestamps across all phases. Secondary outcomes included provider confidence, measured via the Modified Cooper–Harper Scale (MCH; 1 = excellent, 10 = unacceptable) adapted for unmanned aerial vehicle pilot performance by Herrington et al. [23] and inter-rater reliability (Cohen’s κ) between drone-assisted and in-person assessments. Baseline measurements served as the control to evaluate post-intervention changes and longitudinal retention of protocol proficiency.

2.4. Sample Size

A sample size of 34 participants per group (68 total) was calculated to detect a medium effect size (Cohen’s d = 0.5, approximately f = 0.25), consistent with prior drone intervention studies reporting effect sizes of Cohen’s d = 0.542–0.833 [24]. Using G*Power 3.1.9.7 for repeated measures ANOVA, within–between interaction, with two groups and three time points, assuming 80% power, a 5% significance level, and a correlation of 0.5 between repeated measurements, required 28 participants per group. To align with longitudinal design requirements and mitigate a 20% attrition rate common in intervention studies, recruitment was increased to 34 per group, ensuring robust detection of time-dependent changes in non-injury assessment outcomes.
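The G*Power figure can be sanity-checked programmatically. The sketch below assumes G*Power’s noncentrality convention for the within–between interaction, λ = N·f²·m/(1 − ρ) with sphericity (ε = 1); it approximates, rather than exactly reproduces, G*Power’s internal algorithm.

```python
from scipy import stats

def power_rm_interaction(n_total, k=2, m=3, f=0.25, rho=0.5, alpha=0.05):
    """Approximate power for the within-between interaction of a
    repeated-measures ANOVA, assuming the G*Power convention
    lambda = n_total * f^2 * m / (1 - rho) and sphericity (eps = 1)."""
    lam = n_total * f**2 * m / (1 - rho)
    df1 = (k - 1) * (m - 1)                      # interaction df
    df2 = (n_total - k) * (m - 1)                # error df
    f_crit = stats.f.ppf(1 - alpha, df1, df2)    # rejection threshold
    return stats.ncf.sf(f_crit, df1, df2, lam)   # P(reject | effect f)

# Smallest balanced total sample reaching 80% power under these assumptions
n = 6
while power_rm_interaction(n) < 0.80:
    n += 2
print(n, round(power_rm_interaction(n), 3))
```

Inflating the resulting sample by 20% for attrition, as the study did, then gives the recruitment target.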

2.5. Statistical Analysis and Randomization

This study employed a wait-list controlled, randomized trial design. All participants were initially assessed at baseline (Time 0: Pre-training) using their own traditional, non-standardized assessment methods (which served as the control data). Subsequently, all participants received the identical comprehensive ABCD-IN-BARS protocol training (the intervention). Participants were randomly allocated into two groups: Cohort A: immediate Training (n = 34) or Cohort B: Delayed Training (wait list) (n = 34), in 1:1 ratio using a computer-generated sequence (block randomization, block size = 4) stratified by prior drone experience (<10 vs. ≥10 h). Allocation concealment was ensured using sequentially numbered, opaque envelopes opened after baseline assessments. An independent research assistant generated the sequence and enrolled participants. Baseline demographics (e.g., age, prior drone experience) were compared between groups using independent t-tests (continuous variables) or chi-square tests (categorical variables, e.g., gender), confirming randomization integrity. This design allows for two primary analyses to isolate the training effect by comparing the Immediate Training Group against the Delayed Training group at the first post-training assessment (Time 1), where the sole difference is the receipt of training. This design also allows examining the scenario effect by comparing Cohort A against Cohort B after all participants had been trained.
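The allocation scheme described above (1:1 ratio, permuted blocks of 4, one sequence per drone-experience stratum) can be sketched as follows; the seeds and arm labels are illustrative, not the actual sequence used in the trial.

```python
import random

def permuted_block_sequence(n, block_size=4, seed=0):
    """1:1 allocation via permuted blocks: every block of `block_size`
    holds equal numbers of each arm, shuffled, so interim group sizes
    never drift apart by more than block_size // 2."""
    rng = random.Random(seed)
    seq = []
    while len(seq) < n:
        block = (["Immediate"] * (block_size // 2)
                 + ["Delayed"] * (block_size // 2))
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n]

# One independent sequence per stratum (prior drone experience),
# mirroring the stratified design described above
strata = {
    "<10 h":  permuted_block_sequence(34, seed=1),
    ">=10 h": permuted_block_sequence(34, seed=2),
}
```

In practice the sequence would be generated once by the independent research assistant and sealed in the numbered opaque envelopes before enrollment.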
The primary statistical analysis examined the training effect by comparing the Immediate and Delayed Training groups at the first post-training assessment via an independent t-test. The effects of scenario (simulated injury vs. non-injury) and Time (T0, T1, T2) were analyzed via repeated-measures ANOVA using the General Linear Model in SPSS 30, applying Greenhouse-Geisser corrections where sphericity assumptions were violated. According to Cohen [25], partial eta squared values of 0.01, 0.06, and 0.14 are interpreted as small, medium, and large effect sizes, respectively. Outcome assessors were blinded to group allocation during analysis, though participant and instructor blinding was impractical due to the hands-on training. Participants with missing data or who were unable to attend follow-up were excluded. Post-training focus group discussions, structured around the Technology Acceptance Model (TAM, Table 1), provided qualitative insights. No protocol changes were made after trial commencement.
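SPSS applies the Greenhouse-Geisser correction internally; as an illustration of what is being estimated, the epsilon statistic can be computed directly from a subjects × conditions matrix. This is a sketch using simulated durations, not the study dataset.

```python
import numpy as np

def greenhouse_geisser_epsilon(data):
    """Greenhouse-Geisser epsilon for a (subjects x conditions) array:
    eps = tr(S*)^2 / ((k - 1) * sum(S*^2)), where S* is the
    double-centered covariance matrix of the k conditions.
    eps = 1 means sphericity holds; the ANOVA degrees of freedom
    are multiplied by eps to correct for violations."""
    S = np.cov(data, rowvar=False)           # k x k covariance of conditions
    k = S.shape[0]
    J = np.eye(k) - np.ones((k, k)) / k      # centering matrix
    S_star = J @ S @ J                       # double-center rows and columns
    return np.trace(S_star) ** 2 / ((k - 1) * np.sum(S_star ** 2))

# Simulated assessment durations for 30 subjects at three time points
rng = np.random.default_rng(0)
eps = greenhouse_geisser_epsilon(rng.normal(200, 30, size=(30, 3)))
# eps always lies in [1/(k - 1), 1]; corrected df = eps * uncorrected df
```

For k = 3 conditions, epsilon is bounded between 0.5 and 1, which is why the study’s corrected df of 1.09 (= ε × 2) signals a strong sphericity violation.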

2.6. Ethical Considerations

Ethical approval was granted by the Research and Ethics Committee of Caritas Institute of Higher Education (Ref: HRE230218). Written informed consent was obtained from all participants, and procedures adhered to the Declaration of Helsinki. Participants were informed that their experiences and interview content would be shared in research dissemination, with anonymity guaranteed to protect their identities. The trial followed CONSORT guidelines for randomized designs [26], with no interim analyses conducted. The trial was registered with the Chinese Clinical Trial Registry (Registration Number: ChiCTR2500104207).

3. Results

All 68 participants completed the post-test, with 61 (89.7%) retained at the 3-month follow-up (Figure 1). Seven participants were lost due to scheduling conflicts; details are shown in Figure 2. Baseline demographics, including age, prior drone experience, and gender, were balanced between groups (Table 2), with no significant differences (all p > 0.05). No technical malfunctions or adverse events occurred, and wind speeds remained below the safety threshold (<12 m/s) during all sessions.

3.1. Assessment Efficiency and Skill Retention

Assessment times showed a statistically significant difference (p = 0.04, 95% CI = 1.45 to 67.12) between the Immediate group at T1 post-training (mean duration = 196.52 s, SD = 81.125) and the Delayed Training group at T0 using their usual practice without ABCD-IN-BARS training (mean duration = 230.8 s, SD = 81.13). A mixed-design repeated measures ANOVA examined the effects of Time (baseline, post-test, 3-month follow-up) and Group (non-injury vs. simulated injury) on assessment durations. The non-injury group’s mean durations were 181.90 s (SD = 81.43) at baseline, 188.33 s (SD = 17.23) post-test, and 189.10 s (SD = 19.31) at follow-up. The simulated injury group showed higher means of 207.06 s (SD = 87.02), 262.65 s (SD = 26.97), and 260.87 s (SD = 31.35) at the same time points. Since Mauchly’s test indicated a violation of sphericity (W = 0.169, χ2(2) = 103.00, p < 0.001), Greenhouse-Geisser corrections were applied. Results revealed a significant main effect of Time, F(1.09, 64.46) = 7.59, p = 0.008, partial η2 = 0.114, showing that assessment durations changed over time under the standardized protocol; this corresponds to a medium effect size by Cohen’s benchmarks [25]. The Time × Group interaction was not significant, F(1.09, 64.46) = 2.38, p = 0.101, partial η2 = 0.072, indicating similar patterns between the non-injury and simulated injury groups under the standardized assessment protocol. Variability in durations was comparable between groups at baseline and post-test, suggesting skill retention. Notably, Range of Motion (ROM) times improved significantly (p = 0.02), and the simulated injury group improved slightly, reducing mean time from 38.48 to 37.00 s, reflecting maintained skill after 3 months. Before training, 36.7% of injuries were missed because the relevant question was not asked; after training, injury identification reached 100%.
Detailed results of each component are in Table 3.
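As a consistency check, partial eta squared can be recovered from a reported F statistic and its degrees of freedom via the standard identity η²p = F·df1 / (F·df1 + df2). The snippet below reproduces the reported value for the main effect of Time.

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Recover partial eta squared from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# Main effect of Time as reported above: F(1.09, 64.46) = 7.59
print(round(partial_eta_squared(7.59, 1.09, 64.46), 3))  # prints 0.114
```

The result matches the reported partial η² = 0.114, a medium effect by Cohen’s benchmarks [25].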

3.2. Provider Confidence and Protocol Acceptance

Modified Cooper–Harper (MCH) scores indicated retained proficiency, rising only slightly from 2.4 (SD = 0.5) post-test to 2.7 (SD = 0.46) at the 3-month follow-up (scale: 1 = excellent, 10 = unacceptable). Technology Acceptance Model (TAM) ratings highlighted strong agreement on utility. Perceived Usefulness (PU) received high scores for reducing operation time (PU1: M = 4.11, SD = 0.97) and facilitating remote assessments (PU2: M = 4.48, SD = 0.67). Within Perceived Ease of Use (EU), clarity of the protocol (EU1: M = 4.95, SD = 0.28) contrasted with moderate ratings for learning ease (EU2: M = 4.03, SD = 0.97). Notably, Intention to Use (IU) reflected unanimous recognition of drone utility (IU2: M = 5.00, SD = 0.00), though confidence in adoption varied (IU1/IU3: M = 4.02, SD = 1.01).
Qualitative feedback contextualized these findings. Participants praised the protocol’s workflow efficiency (PU) but noted challenges in environmental navigation, impacting EU scores. One paramedic noted, “The checklist is intuitive, but aligning the drone’s camera for abdominal exams requires practice.” The high IU2 score underscored the system’s perceived value in rural emergencies. These insights align with TAM’s emphasis on usability and perceived utility as adoption determinants [27], highlighting opportunities for refining drone stabilization and connectivity to bolster EU and IU metrics.

4. Discussion

4.1. Comparison with ABCDE Literature and Protocol Efficacy

The ABCDE (Airway, Breathing, Circulation, Disability, Exposure) framework, first formalized for trauma care by Styner in 1976 [28], has long provided a systematic foundation for in-person emergency assessments. However, adherence to such frameworks remains variable (18–84% in clinical settings), as highlighted in Bruinink et al.’s [29] scoping review, underscoring the need for structured tools to standardize evaluations. In addition, the trauma ABCDE protocol requires physical contact, which is not possible in a telemedicine setting. The ABCD-IN-BARS protocol extends this legacy into telemedicine, demonstrating that structured remote frameworks can replicate critical aspects of in-person assessments while addressing prehospital challenges. By integrating simulation training, a factor shown by Peran et al. [30] to improve adherence by 17% (p = 0.023), our protocol achieved sustained reductions in assessment times and high inter-rater reliability. Unlike static telehealth models, its modular design enables dynamic prioritization of life-threatening conditions (e.g., airway compromise or hemorrhage) in real time, a critical advantage in unpredictable emergency environments. Several baseline assessments were very rapid because participants identified the injury very quickly; however, such rapid initial judgments risked being wrong or incomplete. Consequently, 36.7% of injured-scenario cases were missed at the baseline (T0) test.

4.2. Alignment with Existing Evidence

Our findings align with emerging research on drone-assisted diagnostics while addressing unique telemedicine constraints. Improvements in Range of Motion (ROM) assessments mirror Iyer et al.’s [21] findings, where patient-assisted maneuvers enhanced remote evaluations of musculoskeletal injuries. However, sensory-dependent tasks, such as abdominal tenderness assessment, remained limited by the absence of haptic feedback [9]. This limitation underscores a broader challenge in remote care: while visual inspection (e.g., distension, facial expressions) and patient self-reports provide valuable data, they cannot fully replace tactile cues like guarding or rebound tenderness—key indicators of acute pathologies such as appendicitis. Despite these constraints, the protocol’s emphasis on structured workflows and patient engagement (e.g., self-palpation) mitigated variability, supporting its role as a triage tool in resource-limited settings.

4.3. Limitations and Future Directions

Generalizability is constrained by the study’s convenience sampling of first-aid-trained volunteers, which may overestimate proficiency in untrained lay responder populations. Additionally, the lack of participant and instructor blinding introduces potential performance bias, though outcome assessor blinding mitigated this risk. While this study demonstrates that the ABCD-IN-BARS protocol can standardize the process and improve the efficiency of remote assessments, its ultimate clinical utility hinges on the diagnostic accuracy of the outcomes it produces. Future studies should validate the protocol in real-world, diverse populations, including non-emergency scenarios (e.g., chronic conditions) and across cultural differences, to ensure acceptance and clinically reliable diagnoses. Technical refinements, such as AI-assisted image analysis for breathing and pulse counting [31], are poised to mitigate the usability limitations of drone teleoperation.

5. Conclusions

The ABCD-IN-BARS protocol represents a pivotal innovation and significant advancement in drone-assisted emergency care, balancing structured efficiency with diagnostic reliability. By adapting the established ABCDE framework for trauma and emphasizing simulation training, it addresses critical gaps in remote assessment standardization. The protocol’s modular design and strong user acceptance (evidenced by TAM scores) position it as a scalable foundation for prehospital telemedicine, especially in underserved regions. Its design simplicity and reliance on drone technology make it uniquely suitable for deployment in disaster zones, rural settings, and other low-resource environments where conventional medical infrastructure is absent or compromised. To ensure broader adoption and robust validation, future efforts should prioritize multicenter trials under real-world conditions, involving lay rescuers across diverse geographic and socioeconomic contexts. Such studies will not only generalize the protocol’s efficacy but also confirm its operational resilience across a wide spectrum of emergency scenarios.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/drones9100687/s1, Table S1: ABCD-IN-BARS assessment script.

Author Contributions

Conceptualization, C.K.J.C.; methodology, F.L.N.T. and J.Y.; software, C.K.J.C.; validation, J.Y. and F.L.N.T.; formal analysis, J.Y., S.Y.J.H. and A.Y.; investigation, J.Y., A.Y. and Z.T.; resources, A.Y.; data curation, J.Y.; writing—original draft preparation, C.K.J.C. and F.L.N.T.; writing—review and editing, C.K.J.C., S.Y.J.H., A.Y., J.Y., F.L.N.T. and Z.T.; project administration, A.Y. and C.K.J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Institutional Development Grant of Saint Francis University, IDG20220201 and IDGP250101.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Research and Ethics Committee of Caritas Institute of Higher Education (HRE230218).

Informed Consent Statement

Informed consent to participate in the study was obtained from all participants. In addition, all participants were assured that their shared experiences and interview content would be reported anonymously in international journals.

Data Availability Statement

The data supporting the findings of this study are available from the corresponding author upon reasonable request. De-identified data are available upon reasonable request for up to 5 years before data destruction. The ABCD-IN-BARS checklist is available in Supplementary File S1.

Acknowledgments

The authors thank the Hong Kong Adventure Corps and research participants for their time dedicated to this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Royo-Vela, M.; Black, M. Drone images versus terrain images in advertisements: Images’ verticality effects and the mediating role of mental simulation on attitude towards the advertisement. J. Mark. Commun. 2018, 26, 21–39. [Google Scholar] [CrossRef]
  2. Krey, M.; Seiler, R. Usage and acceptance of drone technology in healthcare: Exploring patients and physicians perspective. In Proceedings of the 52nd Hawaii International Conference on System Sciences (HICSS), Grand Wailea, HI, USA, 8–11 January 2020; pp. 4135–4144. [Google Scholar] [CrossRef]
  3. Konert, A.; Smereka, J.; Szarpak, L. The Use of Drones in Emergency Medicine: Practical and Legal Aspects. Emerg. Med. Int. 2019, 2019, 3589792. [Google Scholar] [CrossRef] [PubMed]
  4. Robakowska, M.; Ślęzak, D.; Żuratyński, P.; Tyrańska-Fobke, A.; Robakowski, P.; Prędkiewicz, P.; Zorena, K. Possibilities of Using UAVs in Pre-Hospital Security for Medical Emergencies. Int. J. Environ. Res. Public Health 2022, 19, 10754.
  5. Roberts, N.B.; Ager, E.; Leith, T.; Lott, I.; Mason-Maready, M.; Nix, T.; Gottula, A.; Hunt, N.; Brent, C. Current summary of the evidence in drone-based emergency medical services care. Resusc. Plus 2023, 13, 100347.
  6. Russell, S.W.; Artandi, M.K. Approach to the telemedicine physical examination: Partnering with patients. Med. J. Aust. 2022, 216, 131–134.
  7. Hollander, J.E.; Carr, B.G. Virtually Perfect? Telemedicine for Covid-19. N. Engl. J. Med. 2020, 382, 1679–1681.
  8. Lynn, M.R. Determination and Quantification of Content Validity. Nurs. Res. 1986, 35, 382–386.
  9. Benziger, C.P.; Huffman, M.D.; Sweis, R.N.; Stone, N.J. The Telehealth Ten: A Guide for a Patient-Assisted Virtual Physical Examination. Am. J. Med. 2020, 134, 48–51.
  10. Slovis, C.M.; Carruth, T.B.; Seitz, W.J.; Thomas, C.M.; Elsea, W.R. A priority dispatch system for emergency medical services. Ann. Emerg. Med. 1985, 14, 1055–1060.
  11. Clawson, J.; Barron, T.; Scott, G.; Siriwardena, A.N.; Patterson, B.; Olola, C. Medical Priority Dispatch System Breathing Problems Protocol Key Question Combinations are Associated with Patient Acuity. Prehospital Disaster Med. 2012, 27, 375–380.
  12. Lumb, A.B.; Thomas, C.B. Nunn and Lumb’s Applied Respiratory Physiology, 9th ed.; Elsevier Ltd.: Edinburgh, UK; Amsterdam, The Netherlands, 2021.
  13. Siew, L.; Hsiao, A.; McCarthy, P.; Agarwal, A.; Lee, E.; Chen, L. Reliability of Telemedicine in the Assessment of Seriously Ill Children. Pediatrics 2016, 137, e20150712.
  14. Delgado, H.R.; Braun, S.R.; Skatrud, J.B.; Reddan, W.G.; Pegelow, D.F. Chest wall and abdominal motion during exercise in patients with chronic obstructive pulmonary disease. Am. Rev. Respir. Dis. 1982, 126, 200–205.
  15. Topcuoglu, C.; Yumin, E.T.; Saglam, M.; Cankaya, T.; Konuk, S.; Ozsari, E.; Goksuluk, M.B. Neural Respiratory Drive During Different Dyspnea Relief Positions and Breathing Exercises in Individuals With COPD. Respir. Care 2024, 69, 1129–1137.
  16. Alay, G.K.; Yıldız, S. Comparison of Forward-Leaning and Fowler Position: Effects on Vital Signs, Pain, and Anxiety Scores in Children With Asthma Exacerbations. Respir. Care 2024, 69, 968–974.
  17. Scott, G.; Clawson, J.; Rector, M.; Massengale, D.; Thompson, M.; Patterson, B.; Olola, C.H. The Accuracy of Emergency Medical Dispatcher-Assisted Layperson-Caller Pulse Check Using the Medical Priority Dispatch System Protocol. Prehospital Disaster Med. 2012, 27, 252–259.
  18. Jack, A.I.; Digney, H.T.; Bell, C.A.; Grossman, S.N.; McPherson, J.I.; Saleem, G.T.; Haider, M.N.; Leddy, J.J.; Willer, B.S.; Balcer, L.J.; et al. Testing the Validity and Reliability of a Standardized Virtual Examination for Concussion. Neurol. Clin. Pract. 2024, 14, e200328.
  19. Lewis, E.R.; Thomas, C.A.; Wilson, M.L.; Mbarika, V.W.A. Telemedicine in Acute-Phase Injury Management: A Review of Practice and Advancements. Telemed. E-Health 2012, 18, 434–445.
  20. Stiell, I.G.; Clement, C.M.; Grimshaw, J.; Brison, R.J.; Rowe, B.H.; Schull, M.J.; Lee, J.S.; Brehaut, J.; McKnight, R.D.; Eisenhauer, M.A.; et al. Implementation of the Canadian C-Spine Rule: Prospective 12 centre cluster randomised trial. BMJ 2009, 339, b4146.
  21. Iyer, S.; Shafi, K.; Lovecchio, F.; Turner, R.; Albert, T.J.; Kim, H.J.; Press, J.; Katsuura, Y.; Sandhu, H.; Schwab, F.; et al. The Spine Physical Examination Using Telemedicine: Strategies and Best Practices. Glob. Spine J. 2020, 12, 8–14.
  22. Hayden, E.M.; Borczuk, P.; Dutta, S.; Liu, S.W.; White, B.A.; Lavin-Parsons, K.; Zheng, H.; Filbin, M.R.; Zachrison, K.S. Can video-based telehealth examinations of the abdomen safely determine the need for imaging? J. Telemed. Telecare 2021, 29, 761–774.
  23. Herrington, S.M.; Muhammad Fields, T. Pilot Training and Task Based Performance Evaluation of an Unmanned Aerial Vehicle. In Proceedings of the AIAA Scitech 2021 Forum, Virtual, 11–15 and 19–21 January 2021.
  24. Fink, F.; Kalter, I.; Steindorff, J.-V.; Helmbold, H.K.; Paulicke, D.; Jahn, P. Identifying factors of user acceptance of a drone-based medication delivery: User-centered control group design approach. JMIR Hum. Factors 2024, 11, e51587.
  25. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Routledge: Oxford, UK, 1988.
  26. Schulz, K.F. CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomized Trials. Ann. Intern. Med. 2010, 152, 726.
  27. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340.
  28. Bush, M.S. A Trauma Course Born Out of Personal Tragedy. Royal College of Surgeons. 2016. Available online: https://www.rcseng.ac.uk/news-and-events/blog/personal-tragedy (accessed on 17 May 2025).
  29. Bruinink, L.J.; Linders, M.; de Boode, W.P.; Fluit, C.R.M.G.; Hogeveen, M. The ABCDE approach in critically ill patients: A scoping review of assessment tools, adherence and reported outcomes. Resusc. Plus 2024, 20, 100763.
  30. Peran, D.; Kodet, J.; Pekara, J.; Mala, L.; Truhlar, A.; Cmorej, P.C.; Lauridsen, K.G.; Sari, F.; Sykora, R. ABCDE Cognitive Aid Tool in Patient Assessment—Development and Validation in a Multicenter Pilot Simulation Study. BMC Emerg. Med. 2020, 20, 95.
  31. Van Esch, R.J.C.; Cramer, I.C.; Verstappen, C.; Kloeze, C.; Bouwman, R.A.; Dekker, L.; Montenij, L.; Bergmans, J.; Stuijk, S.; Zinger, S. Camera-Based Continuous Heart and Respiration Rate Monitoring in the ICU. Appl. Sci. 2025, 15, 3422.
Figure 1. The drone, walkie-talkie, and ABCD-IN-BARS protocol cue card.
Figure 2. CONSORT flow diagram illustrating enrollment, allocation, follow-up, and analysis of participants.
Table 1. Focus group discussion questions based on the Technology Acceptance Model (TAM).

Construct | Operational Definition | Measured Items
Perceived Usefulness (PU) | The pilot's belief that using a drone with the MARS guide to assist in patient assessment would improve the process. | PU1: MARS reduced operation time. PU2: The drone allows the pilot to assess a patient at a difficult-to-access location.
Perceived Ease of Use (EU) | The level of ease the pilot experiences when using a drone with the MARS guide. | EU1: MARS would be clear and understandable. EU2: I would find it easy to learn MARS. EU3: I would be able to use MARS easily and become proficient in achieving my desired outcomes.
Intention to Use (IU) | The intention to use can uncover possible challenges in drone patient assessment and offer useful insight into what works and what does not; such feedback facilitates a better understanding of how users engage with the system. | IU1: I intend to use MARS to perform patient assessment. IU2: Use of drones is important in patient assessment. IU3: I am confident in using drones in patient assessment with the MARS guide.
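To illustrate how construct-level scores such as the PU and EU means reported in the abstract could be derived from the items in Table 1, the minimal sketch below averages 5-point Likert responses per TAM construct. All ratings and the participant counts are hypothetical placeholders, not study data.

```python
# Sketch (not the authors' analysis code): mean score per TAM construct
# from per-item 5-point Likert ratings. Ratings are synthetic examples.
ratings = {
    "PU1": [4, 5, 4, 4], "PU2": [5, 4, 5, 4],
    "EU1": [4, 4, 3, 4], "EU2": [3, 4, 4, 3], "EU3": [4, 3, 4, 4],
    "IU1": [4, 4, 5, 4], "IU2": [5, 4, 4, 4], "IU3": [4, 4, 4, 5],
}
constructs = {
    "PU": ["PU1", "PU2"],
    "EU": ["EU1", "EU2", "EU3"],
    "IU": ["IU1", "IU2", "IU3"],
}

# Construct mean = total of all item ratings / total number of responses.
means = {
    c: sum(sum(ratings[i]) for i in items) / sum(len(ratings[i]) for i in items)
    for c, items in constructs.items()
}
```

Item-level means (e.g., the reported PU2 M = 4.48) follow the same pattern applied to a single item's ratings.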
Table 2. Baseline demographic and clinical characteristics of study groups (within-group analysis).

Characteristic | Cohort A (Immediate) (n = 30) | Cohort B (Wait List) (n = 31) | p Value
Age, years | 44.23 (SD 11.68) | 41.32 (SD 11.9) | 0.34
Gender, M / F | 27 / 3 | 29 / 2 | 0.62
Previous experience with drone, Yes / No | 15 / 15 | 15 / 16 | 0.9
Baseline unstructured assessment duration, s | 192.84 (SD 87.8) | 194.52 (SD 82.77) | 0.99
Post-training (T1) duration, s | 230.8 (SD 39.28) | 221.55 (SD 47.81) | 0.41
3-month follow-up (T2) duration, s | 226.7 (SD 38.42) | 224.48 (SD 50.31) | 0.85
Simulated injured scenario | 17 | 14 | 0.37
Missed diagnosis | 6 (35.29%) | 5 (35.71%) | 0.9
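The between-cohort p values in Table 2 come from two-sample comparisons of group means. As a minimal sketch of the underlying statistic, the code below computes Welch's unequal-variance t statistic for an age comparison; the samples are small synthetic placeholders, not study data, and a real analysis would also convert t to a p value via the t distribution.

```python
# Sketch (not the authors' analysis code): Welch's two-sample t statistic
# for comparing cohort means, as in the age row of Table 2.
from statistics import mean, variance
from math import sqrt

cohort_a = [44, 52, 38, 61, 40, 35, 49, 44]  # hypothetical ages, immediate group
cohort_b = [41, 36, 55, 39, 30, 47, 42, 38]  # hypothetical ages, wait-list group

na, nb = len(cohort_a), len(cohort_b)
va, vb = variance(cohort_a), variance(cohort_b)  # sample (n - 1) variances

# Welch's t: mean difference over the combined standard error; no
# equal-variance assumption between the two cohorts.
se = sqrt(va / na + vb / nb)
t_stat = (mean(cohort_a) - mean(cohort_b)) / se
```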
Table 3. Performance on each patient assessment step at post-test and 3-month follow-up, by scenario type (non-injury vs. injured).

Step | Post-test: Non-injury, mean s (SD) | Post-test: Injured, mean s (SD) | 3-month: Non-injury, mean s (SD) | 3-month: Injured, mean s (SD) | Paired t-test p (post vs. 3-month) | 95% CI of difference
General | 13.6 (2.16) | 13.9 (2.4) | 13.8 (2.23) | 14.16 (2.38) | 0.47 | −0.80 to 0.38
Airway | 7.17 (2.23) | 6.87 (1.82) | 6.9 (2.1) | 7.06 (1.67) | 0.85 | −0.33 to 0.39
Breathing | 14.93 (3.25) | 13.55 (2.73) | 15.67 (3.09) | 13.45 (2.19) | 0.28 | −0.88 to 0.25
Circulation | 61.23 (17.5) | 60.97 (16.29) | 58.03 (12.42) | 61 (16.05) | 0.59 | −4.13 to 7.34
Disability | 8.1 (1.63) | 9.32 (2.07) | 8.23 (1.48) | 9.65 (2.06) | 0.24 | −0.62 to 0.16
Injury | 10.93 (1.78) | 25.74 (3.8) | 10.83 (1.44) | 26.23 (4.06) | 0.4 | −0.65 to 0.27
Neck | 17.3 (1.92) | 20.06 (2.7) | 17.87 (1.96) | 20.1 (2.74) | 0.3 | −0.86 to 0.27
Back | 11.2 (1.96) | 12.81 (1.99) | 11.23 (1.74) | 12.71 (1.7) | 0.89 | −0.43 to 0.49
Abdomen | 17.5 (3.24) | 19.61 (3.29) | 17.37 (3.48) | 19.55 (2.86) | 0.78 | −0.60 to 0.80
Range of Motion | 23.47 (4.5) | 38.48 (4.65) | 22.97 (4.38) | 37 (5.65) | 0.02 | 0.19 to 1.81
Stand | 14.07 (2.12) | 27.68 (6.54) | 14.63 (1.71) | 27.71 (6.19) | 0.4 | −0.99 to 0.40
Total | 188.33 (17.2) | 262.65 (26.97) | 189.10 (19.31) | 260.87 (31.35) | 0.85 | −4.91 to 5.96
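The final columns of Table 3 report paired comparisons of post-test versus 3-month follow-up times. The sketch below shows one such comparison from per-participant step durations; the eight timings are synthetic placeholders (not study data), and 2.365 is the two-tailed 95% t critical value assumed for n − 1 = 7 degrees of freedom.

```python
# Sketch (not the authors' analysis code): paired t statistic and 95% CI
# for post-test vs. follow-up durations on one assessment step.
from statistics import mean, stdev
from math import sqrt

post = [13.2, 14.1, 13.8, 12.9, 14.5, 13.6, 13.3, 14.0]    # seconds, hypothetical
follow = [13.5, 14.0, 14.2, 13.1, 14.4, 13.9, 13.4, 14.3]  # same participants

# Paired design: analyze the within-participant differences.
diffs = [p - f for p, f in zip(post, follow)]
n = len(diffs)
d_mean = mean(diffs)
se = stdev(diffs) / sqrt(n)
t_stat = d_mean / se

# 95% CI for the mean difference (critical value 2.365 for 7 df).
ci = (d_mean - 2.365 * se, d_mean + 2.365 * se)
```

A CI excluding zero (as in the Range of Motion row) corresponds to a significant change between the two time points.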
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Chan, C.K.J.; Tung, F.L.N.; Ho, S.Y.J.; Yip, J.; Tsui, Z.; Yip, A. A Randomized Controlled Trial of ABCD-IN-BARS Drone-Assisted Emergency Assessments. Drones 2025, 9, 687. https://doi.org/10.3390/drones9100687