Article

A Mobile Triage Robot for Natural Disaster Situations

by
Darwin-Alexander Angamarca-Avendaño
1,*,
Diego-Alexander Zhañay-Salto
1 and
Juan-Carlos Cobos-Torres
1,2,*
1
Unidad Académica de Ingeniería, Industria y Construcción (Carrera de Electricidad), Universidad Católica de Cuenca, Cuenca 010105, Ecuador
2
Unidad Académica de Posgrado, Universidad Católica de Cuenca, Cuenca 010107, Ecuador
*
Authors to whom correspondence should be addressed.
Electronics 2026, 15(3), 559; https://doi.org/10.3390/electronics15030559
Submission received: 12 December 2025 / Revised: 22 January 2026 / Accepted: 24 January 2026 / Published: 28 January 2026
(This article belongs to the Special Issue Robotics: From Technologies to Applications)

Abstract

This research describes the development of an autonomous robotic triage system, carried out by a student through project-based and challenge-based learning methodologies, aimed at solving real-world problems using applied technologies. The system operated in three phases: environment exploration, victim detection through computer vision supported by autonomous navigation, and remote measurement of vital signs. The system incorporated SLAM algorithms for mapping and localization, YOLOv8 pose for human detection and posture estimation, and remote photoplethysmography (rPPG) for contactless vital-sign measurement. This configuration was integrated into a mobile platform (myAGV) equipped with a robotic manipulator (myCobot 280) and tested in scenarios simulating real emergency conditions. All three triage phases defined in this case study were executed continuously and autonomously, enabling navigation in unknown environments, human detection, and accurate positioning in front of victims to measure vital signs without human intervention. Although limitations were identified in low-light environments or in cases of facial obstruction, the modular ROS-based architecture was designed to be adaptable to other mobile platforms, thereby extending its applicability to more demanding scenarios and reinforcing its value as both an educational and technological solution in emergency response contexts.

1. Introduction

Emergency situations arising from natural disasters, human-induced incidents, or the search for missing persons pose significant challenges for response and rescue teams, where time is critical for preserving human lives. Most successful rescue operations take place within the first 24 h after the onset of a disaster, a period known as the critical rescue window. Studies indicate that in events such as earthquakes, between 85% and 95% of survivors are rescued within this window. According to an analysis of the earthquake that occurred in the Philippines in 1990, 99% of the people found alive were rescued during the first 48 h following the disaster [1].
However, in many cases, access to affected zones is highly hazardous due to environmental uncertainty, thereby endangering rescue personnel. This situation delays the initial inspection of the area until a safe and feasible contingency plan is implemented. To address this problem, the use of robotics and computer vision technologies has enabled the development of systems that assist in triage processes during emergency situations. These robots, both ground-based and aerial (UAVs), are equipped with advanced sensors and artificial intelligence (AI) algorithms. Such systems are capable of accessing hard-to-reach zones, detecting trapped individuals, assessing vital signs, and even autonomously assigning medical priority categories, as proposed by Senthilkumaran et al. [2].
Several studies have explored the use of robots in rescue operations. Cobos-Torres et al. [3] developed a rapid triage system based on a KUKA youBot robot equipped with sensors and an RGB-D camera to identify victims in hard-to-reach zones. In another study, Sharma et al. [4] integrated artificial intelligence to detect vital signs through rPPG and to categorize patients using the CTAS scale, while Ádám et al. [5] proposed a patient-to-hospital allocation system based on geolocation and medical urgency. Other studies have focused on affordable, low-cost solutions. The studies reported in [6,7,8] describe robots built with simple microcontrollers, environmental sensors, and cameras, providing basic functions such as monitoring, tracking, and real-time video transmission. Building on this line of research, Reghunath et al. [9] developed a robot equipped with PIR sensors and a Doppler radar for human detection, which was capable of transmitting GPS alerts and visual data to emergency teams. Regarding the use of drones, studies such as [10,11,12] have developed aerial triage systems that integrate computer vision, 5G networks, and deep learning. These systems have demonstrated effectiveness in detecting victims, assessing vital signs through rPPG, and supporting coordinated operations between ground and aerial robots. Additional studies have contributed to the evolution of triage robotics with innovative detection strategies and lightweight architectures. Pająk et al. [13] proposed a drone-based system capable of estimating heart rate remotely using videoplethysmography (VPG), achieving a mean error of under 10 bpm in outdoor tests. Similarly, Cruz Ulloa et al. [14] presented an autonomous thermal-vision robot for victim recognition, validated in real-time rescue simulations where the system effectively distinguished humans from background heat signatures. In the area of contact-based sensing, Kerr et al. [15] introduced a fuzzy logic triage system using tactile sensors for vital-sign classification, demonstrating potential for low-computation embedded triage. For rapid deployment, Tian et al. [16] designed LightSeek-YOLO, a lightweight deep-learning model tailored for victim detection in disaster scenes using limited onboard computational resources. Finally, Lee et al. [17] implemented a multi-agent rescue system using online SLAM, enabling cooperative mapping and exploration between robots with reduced processing delay in unknown environments. Other approaches have addressed challenges such as energy autonomy and robust locomotion: studies [18,19,20] have developed intelligent predictors of battery consumption, humanoid robots with firefighting capabilities, and quadrupeds that integrate perception, planning, and predictive control to operate on uneven terrain.
Beyond technical feasibility, this project was designed as an active learning experience, grounded in methodologies such as Project-Based Learning (PBL), Challenge-Based Learning (CBL), design-based learning, and agile methodologies involving collaboration between students and professors. Recent studies have shown that PBL significantly enhances both technical and soft skills in engineering [21,22,23], while CBL fosters creativity, critical thinking, and autonomy [24], all of which are increasingly emphasized in robotics education. These approaches focus on the design of real artifacts under real constraints [25], while agile methodologies and collaborative learning based on Scrum and eduScrum have shown improvements in the organization and effectiveness of teamwork [26].
This methodology provided an appropriate framework for assigning a comprehensive challenge to the student: developing an autonomous robotic system for contactless triage in simulated disaster environments using remote sensors and open-source software. This central challenge guided the system design and structured the development process within the Project-Based and Challenge-Based Learning approaches.
This work presents two complementary contributions. From a technical perspective, it proposes an integrated robotic triage system that combines autonomous frontier-based navigation using SLAM, vision-based victim detection and posture estimation through YOLOv8-Pose, and contactless vital-sign measurement using remote photoplethysmography (rPPG). Unlike previous studies that address these capabilities in isolation, the proposed system integrates navigation, perception, and physiological monitoring within a single ROS-based operational architecture, enabling continuous autonomous operation in unknown environments without human intervention.
From an educational perspective, this study presents a structured case study on the application of Project-Based Learning (PBL) and Challenge-Based Learning (CBL) approaches in robotics education. The project was intentionally designed as an open-ended challenge in which the student was required to make autonomous technical decisions, adapt existing robotic software frameworks, and iteratively validate solutions under realistic constraints. Beyond the functional prototype, the work documents how active learning methodologies can support the acquisition of interdisciplinary competencies in robotics, including system integration, autonomous decision-making, and applied research.
In this article, Section 2 describes the methodology applied to implement the robotic triage system, beginning with the hardware proposal and followed by the three triage phases defined for this project:
Phase 1: Autonomous navigation based on frontier detection and SLAM, along with victim search.
Phase 2: Computer vision-guided navigation with obstacle avoidance.
Phase 3: Measurement of vital signs using remote photoplethysmography (rPPG).
Furthermore, Section 3 presents the results obtained in each phase of the triage process, which were validated through experiments conducted with seven participants in a simulated emergency environment.

2. Materials and Methods

This project was developed as part of an interdisciplinary academic challenge proposed within the Robotics course in the Electrical Engineering program. The main objective was to design and implement a robotic system capable of performing autonomous triage in simulated disaster scenarios, approached from a pedagogical perspective using Challenge-Based Learning (CBL). Through this methodology, students are expected to acquire knowledge and practical skills by addressing real or context-specific problems, thereby promoting autonomy, applied research, innovation, and critical thinking.

2.1. Active Methodological Approach

The system was developed through iterative phases, consistent with Challenge-Based Learning (CBL) and Problem-Based Learning (PBL) methodologies, as summarized in Table 1. The student was encouraged to make technical decisions, explore available solutions, select appropriate tools, and experimentally validate the outcomes. Each phase of this project provided a learning opportunity in which technical knowledge and skills were integrated with practical experience in simulation and prototyping environments. A key tool in this process was ROS (Robot Operating System), a middleware for robotic development that has been widely adopted in both academic and industrial settings. ROS provides a modular architecture that facilitates the integration of components and, most importantly in educational contexts, offers access to a global community of developers who release reusable packages, document their experiences, and actively engage in specialized forums. These features make ROS an ideal platform for projects based on active learning methodologies, as they substantially lower technical entry barriers and foster autonomous learning.

2.2. Overall System Design

The system architecture follows a distributed perception–action scheme, as illustrated in Figure 1. The mobile robot myAGV (Elephant Robotics, Shenzhen, China) provides locomotion and environmental perception through a 2D LiDAR, enabling autonomous mapping and navigation via SLAM. Visual data acquired by the camera mounted on the robotic manipulator myCobot 280 Pi (Elephant Robotics, Shenzhen, China) are transmitted to the main computer through ROS topics, where computer vision and vital-sign estimation algorithms are executed.
  • Holonomic mobile robot (myAGV): Designed with SLAM algorithms and autonomous navigation capabilities for exploration of unknown environments and victim detection.
  • 6-DoF robotic manipulator (myCobot 280 Pi): Equipped with a panoramic camera as the end-effector, enabling facial recognition and the remote measurement of vital signs through rPPG techniques.
Both systems were integrated into a modular architecture based on ROS Noetic Ninjemys, which enabled task synchronization, planning, and distributed execution. Device-to-computer communication was handled over a Wi-Fi network, with a computer serving as the master node for data processing and decision-making.

2.3. System Operation

The system follows a phase-based control logic (Figure 2). Phase I remains active during exploration until a victim is confirmed by YOLOv8-Pose. This event triggers Phase II, where the robot replans and approaches while maintaining visual tracking. Phase III starts only when the robot reaches a stable observation pose and the face ROI is reliably acquired for rPPG. Phase transitions are executed by enabling and disabling the corresponding ROS nodes via services.
Phase I—Navigation and Victim Detection:
A real-time map was generated using the SLAM algorithm while the explore_lite package planned trajectories toward unexplored zones. In parallel, victim detection was carried out with the YOLOv8-Pose model applied to images from the camera carried by the robotic arm.
Phase II—Replanning and Approaching the Victim:
When a person was detected, the system activated a computer vision-guided navigation strategy integrated with LiDAR data. The robot reoriented its trajectory to avoid obstacles while keeping the victim centered in the image through image segmentation. Meanwhile, the robotic manipulator adjusted its position to maintain visual contact with the victim.
Phase III—Measurement of Vital Signs:
Once in position, the system applied remote photoplethysmography (rPPG) techniques to estimate heart rate and oxygen saturation. The camera mounted on the robotic arm was then carefully moved toward a region of interest (ROI) on the victim’s face in order to capture images under controlled conditions. Based on the measured physiological parameters, the system performs a basic and preliminary physiological classification aimed at supporting early triage assessment by identifying normal or potentially abnormal vital-sign ranges. This classification does not constitute a clinical diagnosis or autonomous medical decision; instead, the estimated values and their preliminary interpretation are transmitted telematically to the user’s computer, where they are monitored in real time by a human operator, who remains responsible for interpreting the information and making any subsequent triage or prioritization decisions.
After completing the measurement, the estimated victim position obtained from the SLAM map was stored to avoid repeated vital-sign measurements of the same individual during subsequent exploration. Under this sequential strategy, the system attends to one victim at a time and does not implement an explicit prioritization mechanism among multiple victims.
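The phase logic above can be summarized as a small state machine. The sketch below is illustrative only (the enum and function names are ours, not the system's code): the real implementation switches the corresponding ROS nodes on and off through services.

```python
from enum import Enum, auto

class TriagePhase(Enum):
    EXPLORE = auto()    # Phase I: SLAM exploration and victim search
    APPROACH = auto()   # Phase II: vision-guided approach with replanning
    MEASURE = auto()    # Phase III: rPPG vital-sign measurement

def next_phase(phase, victim_confirmed=False, pose_stable=False,
               face_roi_acquired=False, measurement_done=False):
    """Return the phase that should be active given the current events.

    Mirrors Section 2.3: Phase I runs until YOLOv8-Pose confirms a
    victim; Phase II runs until the robot holds a stable observation
    pose with a reliable face ROI; after the measurement is stored,
    the system resumes exploration for further victims.
    """
    if phase is TriagePhase.EXPLORE and victim_confirmed:
        return TriagePhase.APPROACH
    if phase is TriagePhase.APPROACH and pose_stable and face_roi_acquired:
        return TriagePhase.MEASURE
    if phase is TriagePhase.MEASURE and measurement_done:
        return TriagePhase.EXPLORE
    return phase
```

Because transitions depend only on discrete events, the same logic applies regardless of which nodes implement each phase, which is what makes the node-enable/disable scheme via ROS services workable.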

2.4. ROS Architecture Design for Triage

The triage system was based on ROS, which integrated and managed the navigation, victim detection, robotic manipulation, and vital signs monitoring modules. This modular architecture allowed efficient communication among the nodes responsible for data processing, motion control, and real-time decision-making. The triage system comprised two main subsystems: The first subsystem included the mobile robot and the manipulator, which together carried out exploration and image acquisition of the environment. The second subsystem consisted of a computer that processed the data and managed decision-making according to system status. Figure 3 shows the ROS node architecture, illustrating the intercommunication between nodes through topics and services.
The system was structured into multiple nodes within ROS, each assigned a specific task in the operation process:
  • main_agv: Managed exploration, victim detection, and measurement of vital signs.
  • patrol_agv: Implemented SLAM to generate a map of the real environment and plan trajectories.
  • mycobot: Executed a set of motion sequences with the robotic manipulator to scan the entire zone using the camera.
  • detect_person: Applied computer vision to identify victims.
  • follow_person, follow_agv, follow_mycobot: Operated jointly to perform trajectory replanning after victim detection and to position the system for measuring vital signs.
  • heartrate: Measured the vital signs of the victim using rPPG.
  • my_cobot_services: Managed service calls for controlling the movement of the robotic manipulator.
  • usb_cam: Streamed real-time compressed video to the computer.
  • myAGV (/scan, /cmd_vel): Provided LiDAR data and robot velocity control, respectively.

2.4.1. Autonomous Navigation and Victim Detection

For this project, ROS was used to integrate existing navigation and exploration packages, providing the student with direct access to solutions already validated in real-world applications. The first phase of the system addressed the challenge of enabling the robot to autonomously explore an unknown environment. To this end, the Simultaneous Localization and Mapping (SLAM) [27] algorithm was applied using the Gmapping package provided within the official myAGV ROS navigation stack [28], allowing the robot to generate a real-time map of the environment while simultaneously estimating its own position. The SLAM module relied on LiDAR data and odometry and was executed using the default parameter configuration supplied by the myAGV ROS package, without manual tuning of map resolution or update rate. In addition, the ROS explore_lite package [29] was used to achieve complete exploration of the environment by identifying unexplored frontiers based on the map data generated by SLAM.
This node subscribed to the nav_msgs/OccupancyGrid and map_msgs/OccupancyGridUpdate topics and automatically planned trajectories to unexplored zones. The generated goal points were transmitted to the move_base module, which calculated optimal paths that avoided obstacles, ensuring smooth and fully autonomous navigation without human intervention (Figure 4).
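One small step in this pipeline is converting a chosen frontier cell of the nav_msgs/OccupancyGrid into a metric goal point of the kind forwarded to move_base. The helper below is a hypothetical sketch, assuming the standard ROS map convention (origin at the lower-left corner, row-major cells of size `resolution`); it is not code from the system itself.

```python
def grid_to_world(col, row, origin_x, origin_y, resolution):
    """Convert an OccupancyGrid cell (col, row) to map-frame coordinates.

    world = origin + (cell index + 0.5) * resolution, i.e. the centre
    of the cell; a point of this form is what gets packaged into a
    navigation goal for move_base once a frontier cell is selected.
    """
    x = origin_x + (col + 0.5) * resolution
    y = origin_y + (row + 0.5) * resolution
    return x, y
```

With a typical 0.05 m/cell map whose origin sits at (−10, −10), cell (0, 0) maps to roughly (−9.975, −9.975), the centre of the first cell.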
This exploration logic is illustrated in Figure 5, which shows how the robot builds a map in real time while autonomously planning trajectories toward unexplored zones.
Person detection was carried out in parallel with the navigation process. To this end, the student integrated a pre-trained YOLOv8l-pose computer vision model [30], as reported by the original authors, for human detection and pose estimation. This process was executed through the detect_person node, which analyzed the images captured by the camera mounted on the robotic manipulator. At the same time, the manipulator executed sequential scanning to expand its field of view. Upon detecting a person, an alert was issued by the main node (main_agv), activating the second triage phase, which involved computer vision-guided navigation, as illustrated in Figure 6.

2.4.2. Victim Detection and Trajectory Replanning

The system enabled dynamic trajectory replanning whenever a victim was detected. For this purpose, the second-phase nodes (follow_person, follow_agv, follow_mycobot) operated jointly to approach the detected victim. Trajectory replanning was based on LiDAR data combined with continuous victim tracking performed by the robotic manipulator. To ensure safe navigation and effective obstacle avoidance, a criterion was applied to select the direction with the greatest available space, establishing a safety perimeter of 0.4 m. This process is mathematically described in Equation (1). This procedure allowed the robot to avoid obstacles in real time while continuously keeping the victim within the field of view of the camera. The camera was positioned 0.5 m above the ground to provide a wide visual range and minimize the likelihood of small obstacles interfering with victim detection (see Figure 7).
From a pedagogical perspective, this phase of the project was developed following the principles of Project-Based Learning (PBL), in which the student not only addressed a specific technical problem but also integrated multiple technologies—perception, planning, and actuation—into a functional system. This process fostered autonomous decision-making, empirical validation of results, and complex problem-solving within a simulated environment, which are key elements of active and applied learning.
θ_opt = argmax_{θ_i ∈ [−90°, 90°]} (r_i)        (1)
where
  • θ_opt is the direction with the greatest available free space,
  • r_i represents the distances measured by the LiDAR along the frontal semicircle, and
  • argmax indicates that the selected angle θ_i corresponds to the maximum measured distance.
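Equation (1) can be applied directly to a LiDAR scan. The sketch below assumes the frontal semicircle has already been extracted from /scan and treats the 0.4 m safety perimeter as the minimum admissible range; the helper name is illustrative, not taken from the system's code.

```python
def select_heading(ranges, angles_deg, safety_radius=0.4):
    """Pick the heading with the most free space (Equation (1)).

    ranges: LiDAR distances r_i in metres over the frontal semicircle.
    angles_deg: corresponding beam angles theta_i in [-90, 90] degrees.
    Beams shorter than the 0.4 m safety perimeter are treated as
    blocked. Returns the angle of the farthest admissible beam, or
    None if every beam violates the safety perimeter.
    """
    best_angle, best_range = None, safety_radius
    for r, theta in zip(ranges, angles_deg):
        if -90.0 <= theta <= 90.0 and r > best_range:
            best_angle, best_range = theta, r
    return best_angle
```

Returning None when all beams fall inside the perimeter gives the caller a natural signal to stop or back off rather than steer into an obstacle.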

2.4.3. Trajectory Replanning Through Computer Vision

Simultaneously, the follow_person node analyzed the position of the victim within the image, dividing the frame into three control zones. Based on this position, the robot continuously adjusted its orientation—left, center, or right—to keep the victim centered in its field of view. Figure 8 illustrates the segmented regions within the 640 × 480 image, which met the following conditions required for autonomous navigation:
  • Left zone (0–240 px): If the victim was detected in this zone, the robot would rotate to the right until the target was centered in the image.
  • Central zone (240–400 px): Within this range, the victim would be correctly aligned, so the robot would move forward toward the target.
  • Right zone (400–640 px): If the victim was detected in this zone, the robot would rotate to the left until the target was centered in the image and then move forward.
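The three-zone rule above reduces to a one-line mapping from the victim's horizontal pixel position to a motion command. A minimal sketch, with illustrative command strings standing in for the actual /cmd_vel messages:

```python
def steering_command(victim_x_px):
    """Map the victim's horizontal pixel position to a motion command.

    Zone boundaries follow Section 2.4.3 for a 640 x 480 frame:
    left zone 0-240 px, central zone 240-400 px, right zone 400-640 px.
    """
    if victim_x_px < 240:
        return "rotate_right"   # victim on the left: turn right to centre it
    if victim_x_px < 400:
        return "move_forward"   # victim centred: advance toward the target
    return "rotate_left"        # victim on the right: turn left to centre it
```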
Integrating autonomous navigation with computer vision effectively addressed a key challenge in the project: enabling the robot to approach the victim safely and efficiently while maintaining continuous visual contact and avoiding collisions.
From an educational standpoint, this phase contributed significantly to the student’s development of practical skills, as it required real-time interpretation of visual data and its translation into autonomous robotic actions. By implementing and validating this system, the student enhanced his ability to analyze dynamic situations, make well-founded technical decisions, and refine solutions based on observed outcomes, marking a clear advancement within the Project-Based Learning (PBL) approach.

2.4.4. Coordination Between Navigation and Manipulation

Figure 9 illustrates the trajectory-replanning process that combined the algorithms from both modules described above. Initially, the robot navigated using the SLAM algorithm, and once a victim was detected, the system transitioned to the second phase, where trajectory replanning was carried out through computer vision while obstacle avoidance relied on the established perimeter. When an obstacle was detected between the robot and the victim, the system evaluated the available free space and adjusted its path in real time, avoiding obstacles until a clear route was found. Throughout the process, the camera maintained continuous visual contact with the victim, supported by the robotic manipulator, which continuously repositioned itself whenever necessary. As the robot approached the victim, it aligned its viewpoint with the victim’s head to ensure accurate measurement of vital signs within a defined region of interest (ROI). Integrating computer vision-based navigation with a LiDAR-defined perimeter allowed the robot to operate effectively in unpredictable environments, maintaining visual reference to the victim while avoiding entrapment by obstacles.

2.5. Vital-Sign Estimation Methodology

The third phase of the triage process involved the non-invasive assessment of the detected victim’s vital signs. Once the robot was properly positioned, the manipulator adjusted the camera to remain at a distance of 0.3 m from the victim’s face, as shown in Figure 10. During this phase, the student revised the initial proposal, which had considered placing sensors directly on the victim, after identifying that such an approach was not feasible in a simulated disaster scenario due to ethical and operational limitations. As an alternative, the student independently explored a computer-vision-based approach and implemented a remote photoplethysmography (rPPG) package described in [31] to extract the pulsatile signal directly from the real-time video. Key physiological parameters, including heart rate and oxygen saturation (SpO2), were derived from this signal without requiring physical contact. This process was part of a self-directed study in signal processing and computer vision, representing a valuable instance of active learning in which the student applied research, critical analysis, and experimental validation to integrate a realistic and technically robust solution.
The heartrate node managed this analysis, processing the images to extract physiological values within a well-defined region of interest (ROI). This phase required careful adjustment of focus, illumination, and system stability, providing valuable experience in the integration of biometric sensing with robotics.
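The actual system uses the rPPG package described in [31]; the sketch below shows only the core principle behind camera-based heart-rate estimation — recovering the dominant pulsatile frequency from the mean green-channel intensity of the facial ROI — and is not that package's implementation (which additionally estimates SpO2).

```python
import numpy as np

def estimate_bpm(green_trace, fps, bpm_band=(45.0, 180.0)):
    """Estimate heart rate from the ROI's mean green-channel trace.

    green_trace: per-frame mean green intensity of the face ROI.
    fps: camera frame rate in frames per second.
    Removes the DC component, takes the FFT, and picks the dominant
    frequency inside a plausible heart-rate band; peak frequency in
    Hz times 60 gives beats per minute.
    """
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()                               # remove DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= bpm_band[0] / 60.0) & (freqs <= bpm_band[1] / 60.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak
```

Restricting the search to a 45–180 bpm band is what makes the estimate robust to slow illumination drift (below the band) and camera noise (above it), which is also why focus, lighting, and stability had to be adjusted carefully in this phase.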

2.6. Educational Value of the Project

This project allowed the student to put into practice the knowledge acquired in previous courses, including electronics, control, signal processing, programming, and artificial intelligence. The Challenge-Based Learning (CBL) and Project-Based Learning (PBL) approaches fostered both the transfer of theoretical knowledge to real-world contexts and the gradual development of technical and transversal skills that are essential to engineering practice.
Throughout the development of the robotic triage system, the student strengthened key skills such as:
  • integrating hardware and software effectively,
  • solving complex and unstructured problems,
  • managing time and planning toward specific objectives,
  • experimenting under conditions of technical uncertainty, and
  • applying engineering thinking and evidence-based decision-making.
This methodological approach fostered an active, autonomous, and contextualized learning experience that aligns with current needs in engineering education.

3. Results

This section presents the results of implementing the triage system, assessing the performance of victim detection during autonomous navigation, the robot’s trajectory replanning capabilities, and the measurement of vital signs. Each module was developed, integrated, and validated by the student, who actively participated in the design, programming, and tuning of the algorithms under simulated conditions. All experiments were conducted in a controlled indoor environment designed to simulate an emergency scenario, with stable lighting conditions, predefined obstacle layouts representing debris, and volunteers remaining in static postures. A total of seven volunteers participated in the experimental evaluation, covering variability in body pose and facial characteristics to verify the functional behavior of the system. The results highlight the system’s technical performance as well as the student’s learning process and decision-making throughout each phase.

3.1. Autonomous Frontier-Based Exploration

The frontier-based exploration system was tested without a predefined map, enabling real-time exploration and map generation. During testing, the robot successfully generated a detailed map, which allowed accurate localization and efficient navigation in unknown environments. As the robot explored, the explore_lite package detected unexplored frontiers and sent navigation commands to move_base to cover the area, as shown in Figure 11.

3.2. Person Detection

Figure 12 shows the results of person detection using YOLOv8l-pose, highlighting the system’s ability to accurately identify victims in diverse positions and scenarios. The left section shows instances where no individuals were detected, confirming that the model maintains a low false detection rate in environments without victims. The right section displays successful detections at varying confidence levels, where the model recognized victim postures with confidence scores between 70% and 91%, validating its effectiveness in identifying bodies under diverse conditions and orientations. Throughout testing, the system consistently demonstrated high reliability, detecting victims even in partially obstructed scenes, confirming its ability to operate effectively in complex environments without compromising accuracy.

3.3. Computer Vision-Guided Navigation

The navigation system adjusted its trajectory in real time based on the position of the victim within the field of view of the camera. Figure 13 illustrates the complete computer vision-guided navigation process, from person detection to the final robot position in front of the victim. The upper part of the image shows the progressive detection of a person, with confidence levels increasing as the robot approached the target. The lower part shows the sequence of the robot’s trajectory, where navigation was guided by the position of the victim within the field of view of the camera. Throughout this process, the system continuously adjusted its heading by segmenting the image into zones to correct its orientation and approach the victim.

3.4. Measurement of Vital Signs

Once the mobile robot positioned itself and the manipulator reached the desired orientation and focus with respect to the victim, as shown in Figure 14a, phase III of the triage process began. This phase initially involved identifying the region of interest (ROI) for measuring vital signs. Figure 14b shows the selection of the ROI in the facial region, highlighting key areas for vital sign analysis such as the forehead. The blue and green lines indicate the reference points used to position the ROI on the person’s forehead. Precise segmentation of this region ensured reliable heart rate measurement by minimizing motion-related interference.
To assess system performance, vital-sign measurements were carried out on seven volunteers simulating an accident scenario, as illustrated in Figure 15. In each case, the robot completed all three triage phases, successfully achieving autonomous navigation, person detection, and computer vision-guided navigation while accurately positioning itself for vital sign measurement. To validate the system’s accuracy, heart rate and oxygen saturation values obtained through rPPG were compared with those recorded by a fingertip pulse oximeter, confirming the reliability of the proposed method.
The Bland–Altman analysis was performed to assess the agreement between the values obtained using rPPG and those recorded by the pulse oximeter. In total, 210 measurements were analyzed, corresponding to 30 temporal samples collected from each of the seven participants. Table 2 shows that the rPPG method tended to measure slightly lower SpO2 values than the pulse oximeter, with a mean difference of −1.2143. The limits of agreement were wide (−8.4309 to 6.0023), indicating that the difference between the two methods varied across measurements. Moreover, this difference was statistically significant (p < 0.0001). For heart rate, the mean difference was 0.1095, with narrower limits of agreement (−2.6036 to 2.8227) and no significant difference relative to the pulse oximeter (p = 0.2529). These findings suggest that rPPG provides more consistent heart rate measurements, whereas the greater variability observed in SpO2 indicates that further adjustments may be required to improve its accuracy. The higher variability observed in SpO2 estimation is consistent with known limitations of camera-based rPPG methods. While heart rate is primarily derived from temporal variations in the pulsatile signal, SpO2 estimation relies on subtle amplitude and spectral differences that are more sensitive to illumination changes, skin tone, camera noise, and small facial movements. In addition, the use of a standard RGB camera without controlled illumination or multi-wavelength sensing limits the robustness of SpO2 estimation compared to contact-based pulse oximeters. These factors contribute to the wider limits of agreement observed for SpO2, despite acceptable performance in heart rate estimation.
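The Bland–Altman quantities reported in Table 2 follow the standard definition: the bias is the mean of the paired differences, and the limits of agreement are the bias ± 1.96 times the standard deviation of those differences. A generic sketch of this computation (not the study's analysis script, and the numbers below are not the study's data):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return (bias, lower_loa, upper_loa) for paired measurements.

    bias = mean(a - b); limits of agreement = bias +/- 1.96 * SD of
    the differences, as used in Section 3.4 to compare rPPG readings
    against the fingertip pulse oximeter.
    """
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)        # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Wide limits of agreement, as seen for SpO2, indicate that individual rPPG readings can deviate substantially from the oximeter even when the average bias is small.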
Figure 16a shows that the rPPG method and the pulse oximeter can be considered interchangeable for measuring heart rate (BPM). This is supported by a small mean bias and relatively narrow limits of agreement. Despite variations among participants, the data suggest that rPPG provides reliable heart rate measurements. Figure 16b, corresponding to oxygen saturation (SpO2) measurements, shows a slight negative bias, indicating that rPPG tends to produce lower values than the pulse oximeter. Nevertheless, most data points lie within the limits of agreement, suggesting that rPPG may be suitable for SpO2 monitoring in applications where a small margin of error is acceptable.
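The limits of agreement discussed above follow the standard Bland–Altman construction: the bias is the mean of the paired differences, and the limits are the bias ± 1.96 times the standard deviation of those differences. A minimal sketch of this computation is shown below; the data are synthetic (drawn to resemble the reported SpO2 statistics), and the function name is illustrative rather than taken from the study.

```python
import numpy as np

def bland_altman(reference, estimate):
    """Compute Bland-Altman statistics for paired measurements.

    Returns the mean difference (bias) and the 95% limits of
    agreement (bias +/- 1.96 * SD of the paired differences).
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    diff = estimate - reference          # e.g., rPPG minus oximeter
    bias = diff.mean()
    sd = diff.std(ddof=1)                # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative synthetic data (NOT the study measurements):
# 210 paired SpO2 readings with a bias near -1.2
rng = np.random.default_rng(0)
oximeter = rng.normal(97.0, 1.0, size=210)
rppg = oximeter + rng.normal(-1.2, 3.7, size=210)

bias, lo, hi = bland_altman(oximeter, rppg)
print(f"bias={bias:.2f}, LoA=[{lo:.2f}, {hi:.2f}]")
```

With real data, points falling largely inside [lo, hi] support the interchangeability reading applied to the heart rate results above.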

3.5. Technical Analysis

The development of this robotic triage system using ROS exemplifies the effective application of active learning methodologies, specifically Project-Based Learning (PBL) and Challenge-Based Learning (CBL). Throughout the project, the student played an active, self-directed role, addressing real-world technical challenges that required research, critical analysis, evidence-based decision-making, and empirical validation. These aspects are summarized in Table 3.
Each phase of the project provided a valuable opportunity to apply interdisciplinary knowledge, from autonomous navigation and computer vision to real-time biometric signal processing. Moreover, using ROS facilitated the integration of multiple functional nodes, promoting learning about distributed architectures, software reusability, and open collaboration—skills that are essential in contemporary technological environments.
Through this project, the student was able to:
  • Integrate theory and practice in real-world scenarios.
  • Develop systemic and interdisciplinary thinking.
  • Strengthen core technical skills in robotics, vision, and sensing.
  • Exercise autonomy, innovation, and real-world problem-solving.

3.6. Educational Analysis

The project was developed using PBL/CBL (Project-Based Learning/Challenge-Based Learning), focusing learning on solving each challenge with autonomy and decision-making. The student identified the problem, defined the objectives and functional requirements, compared available hardware and sensors, and selected the components most suitable for integration in ROS. The student also developed the system architecture; assembled the prototype; configured communication between the myAGV, myCobot, and the RGB camera; and carried out functional tests and calibration of the rPPG measurements. As documented in Table 4, these activities, such as defining the problem, comparing alternatives, designing the ROS integration, validating system behavior, and evaluating overall performance, demonstrate the development of critical thinking, applied research, and continuous improvement within a realistic engineering context.

4. Discussion

The development of the robotic triage system validated the proposed technical functionalities—autonomous navigation, victim detection, trajectory replanning, and remote vital-sign measurement—while simultaneously providing a comprehensive educational experience aligned with the principles of Project-Based Learning (PBL) and Challenge-Based Learning (CBL).
From a PBL perspective, the student engaged in an iterative process to address a real, context-specific problem, requiring the integration of prior knowledge in electronics, control systems, computer vision, programming, and signal processing. The system’s modular design and the use of ROS enabled direct engagement with distributed architectures, reusable nodes, and sensor-based planning. Beyond this technical development, the student also reported specific difficulties during the project, summarized in Table 5: the need to select suitable hardware in the absence of predefined specifications, the requirement to adapt existing ROS functionalities to support autonomous navigation, synchronization issues in the system workflow, and instability in the rPPG measurements. These issues were resolved through hardware comparison and selection, modification of ROS modules, implementation of a coordination node, and iterative calibration of the sensing process. Together, these actions reflect the student’s direct application of the PBL cycle, in which problems are identified, tested, and resolved through systematic adjustments. Several studies have shown that using ROS as a development platform in academic settings facilitates the integration of complex systems and fosters active, self-directed learning through the PBL approach, providing realistic testing environments and collaborative development communities [32,33].
From a Challenge-Based Learning (CBL) perspective, the student encountered specific challenges that required innovative solutions. A notable example was the shift in the approach to vital-sign measurement: after determining that placing sensors on the victim was not feasible, the student independently researched and implemented an rPPG system based on computer vision, demonstrating initiative, critical thinking, and adaptability. Such unplanned decisions, absent from the initial proposal, reveal an active, reflective, and problem-oriented learning process: core pillars of the CBL approach. As reported in previous studies, this methodology serves as a catalyst for motivation, critical thinking, and deep learning in educational robotics contexts [34].
Furthermore, the project fostered the development of transferable skills such as goal-oriented planning, effective time management, experimentation under uncertainty, and collaboration through open-source tools. Collectively, these elements contributed to an authentic educational experience consistent with the current demands of engineering training.
From a technical perspective, the results confirm the effectiveness of the robotic triage system in disaster and emergency scenarios, enabling safe exploration of hazardous areas, efficient victim localization, and remote assessment of vital signs without endangering human personnel. Recent global events, particularly the COVID-19 pandemic, have highlighted the importance of developing autonomous systems that can operate effectively in environments characterized by health or structural risks. In contrast to previous studies, this project integrated multiple technologies into a unified operational framework, combining autonomous navigation using SLAM, person detection through YOLOv8-Pose, and non-invasive vital-sign measurement using rPPG. While Bavle et al. [35] used SLAM to achieve situational awareness without incorporating victim detection, and Caputo et al. [36] used YOLOv5 on drones to locate people without assessing their vital signs, the project developed in this study integrated these components into a coherent, field-oriented triage solution.
Implementing YOLOv8-Pose allowed the system to detect victims and estimate their posture, maintaining robust performance under partial occlusion and in complex disaster scenarios. This could be further improved by integrating thermal vision and deep learning methods, as suggested in [37,38].
In terms of vital-sign measurement, rPPG provides a distinct advantage over contact-based methods such as pulse oximetry. Previous studies, including [39] with the Dr. Spot robot, have demonstrated that heart and respiratory rates can be estimated remotely with good accuracy, although performance is influenced by lighting conditions and facial visibility. This study also identified these limitations and suggests, as a future improvement, the integration of active infrared LED illumination.
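To make the rPPG principle concrete: heart rate can be recovered from the periodic intensity fluctuations of the skin within the facial ROI, for example by locating the dominant spectral peak of the mean green-channel trace inside the physiological band. The sketch below is a simplified, FFT-based illustration of this idea on synthetic data; the study itself uses a wavelet-based technique [31], and all names and parameters here are illustrative assumptions.

```python
import numpy as np

def estimate_bpm(green_trace, fps, lo_hz=0.7, hi_hz=3.0):
    """Estimate heart rate (BPM) from a camera-based rPPG trace.

    green_trace: mean green-channel intensity per frame (facial ROI).
    fps: camera frame rate in Hz.
    The strongest spectral peak inside the physiological band
    (0.7-3.0 Hz, i.e., 42-180 BPM) is taken as the pulse frequency.
    """
    signal = np.asarray(green_trace, dtype=float)
    signal = signal - signal.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)   # physiological band only
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak                           # Hz -> beats per minute

# Synthetic example: a 1.2 Hz pulse (72 BPM) sampled at 30 fps for 10 s
rng = np.random.default_rng(1)
fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
trace = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
bpm = estimate_bpm(trace, fps)
print(f"estimated heart rate: {bpm:.1f} BPM")
```

This frequency-domain view also clarifies why SpO2 is harder: it depends on relative amplitudes across wavelengths rather than a single dominant frequency, so illumination and camera noise degrade it more.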
Overall, this project provides a feasible approach to autonomous triage and exemplifies how applied technology development can serve as an educational framework that consolidates learning through solving real-world robotics challenges.
To contextualize the contribution of the proposed system within existing research, a comparative analysis was performed with representative approaches addressing similar triage and victim localization tasks. Table 6 summarizes the technologies used, their capabilities, and main limitations. The results show that integrating autonomous navigation, posture estimation, and remote vital-sign measurement positions the system proposed in this study as a more comprehensive solution compared with other approaches focused solely on detection or partial measurement. Furthermore, the analysis reveals potential areas for improvement regarding system robustness under adverse conditions, such as limited visibility and low illumination.
Table 7 summarizes the main evaluation conditions, identified limitations, and directions for improvement of the system. Consequently, the results should be interpreted within the scope of controlled scenarios, where the architecture showed stable operation. The implementation and validation carried out in this project revealed challenges associated with dynamic environments, lighting variations, limited sensing, and larger-scale operation. These observations suggest lines of improvement aimed at semantic perception, adaptive safety margins, multimodal sensing, and greater computing capacity and mobility, in order to move toward more autonomous operation in more demanding disaster scenarios.
As future work, it would be valuable to explore learning-based control strategies, such as Deep Deterministic Policy Gradient (DDPG), as a means to enhance adaptability in continuous control tasks. While recent studies have demonstrated the effectiveness of DDPG for robotic control in continuous action spaces, particularly in manipulator trajectory control [40], adapting such approaches to mobile robot navigation and victim tracking would require further investigation and system-specific design. Nevertheless, learning-based methods may offer improved robustness and generalization in dynamic or unstructured environments compared to purely rule-based strategies.

5. Conclusions

The robotic triage project provided a solid learning experience aligned with the principles of Project-Based Learning (PBL) and Challenge-Based Learning (CBL). Through addressing a context-based challenge, the student integrated technical knowledge in electronics, control, computer vision, and signal processing while developing essential skills such as autonomy, critical thinking, experimentation under uncertainty, and goal-oriented planning. Self-directed research, particularly during the remote vital-sign measurement phase, highlighted the student’s ability to reassess strategies and apply innovative solutions in real-world contexts.
From a technical standpoint, the system successfully executed the three proposed triage phases autonomously and efficiently: environment exploration, victim detection through computer vision-guided navigation, and remote vital-sign measurement. Integrating SLAM, YOLOv8-Pose, and rPPG algorithms into a mobile platform equipped with a manipulator enabled continuous operation without human intervention, even in unmapped environments. Person detection proved reliable across diverse scenarios, delivering fast and consistent responses. Although false detections were infrequent, they were managed through a progressive verification mechanism that allowed the system to return to the exploration phase whenever the presence of a victim could not be confirmed.
During the measurement phase, the system achieved stable positioning and accurate facial capture, yielding vital-sign estimates that closely matched those obtained from reference devices. However, certain limitations were observed under low-light conditions or when the victim’s face was partially obscured by dirt, blood, or visible injuries, as these factors can compromise both ROI detection and signal quality. These findings highlight real-world challenges that should be addressed in future research, for instance, by incorporating active illumination or multispectral vision.
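The progressive verification mechanism described above can be read as a small state machine: a candidate detection is confirmed only after several consecutive positive frames, and any negative frame sends the robot back to exploration. The following is a minimal sketch under that interpretation; the class, threshold, and phase names are illustrative, not the system’s actual implementation.

```python
class DetectionVerifier:
    """Confirm a victim only after N consecutive positive detections.

    A single negative frame resets the counter, returning the robot
    to the exploration phase instead of committing to a possible
    false positive.
    """

    def __init__(self, required_hits=5):
        self.required_hits = required_hits
        self.hits = 0

    def update(self, detected: bool) -> str:
        """Feed one frame's detection result; return the current phase."""
        if detected:
            self.hits += 1
            if self.hits >= self.required_hits:
                return "measurement"      # victim confirmed
            return "verification"         # keep observing the candidate
        self.hits = 0
        return "exploration"              # candidate rejected, resume search

verifier = DetectionVerifier(required_hits=3)
phases = [verifier.update(d) for d in [True, True, False, True, True, True]]
print(phases)
```

A per-frame counter like this trades a short confirmation delay for robustness against transient false positives from the detector.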
Moreover, due to its modular architecture developed on ROS, the system is not limited to the mobile robot used in this study and can be readily adapted to other platforms with more advanced navigation or manipulation capabilities. The use of standard ROS navigation and SLAM components further supports scalability to larger environments and more complex layouts when deployed on platforms with extended mobility, sensing range, or computational resources. Overall, this work contributes a functional robotic triage framework within an emerging research area, validating both the technical feasibility of the proposed approach and its pedagogical effectiveness as an active learning strategy for robotics education in multidisciplinary, real-world contexts.

Author Contributions

Conceptualization, D.-A.A.-A., D.-A.Z.-S. and J.-C.C.-T.; Data curation, D.-A.A.-A. and D.-A.Z.-S.; Formal analysis, D.-A.A.-A., D.-A.Z.-S. and J.-C.C.-T.; Funding acquisition, J.-C.C.-T.; Investigation, D.-A.A.-A. and J.-C.C.-T.; Methodology, D.-A.A.-A., D.-A.Z.-S. and J.-C.C.-T.; Project administration, J.-C.C.-T.; Software, D.-A.A.-A. and D.-A.Z.-S.; Supervision, J.-C.C.-T.; Validation, D.-A.A.-A. and J.-C.C.-T.; Visualization, D.-A.A.-A. and J.-C.C.-T.; Writing—original draft, D.-A.A.-A.; Writing—review and editing, J.-C.C.-T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the project “CompostBot robot de servicio para tratamiento de residuos orgánicos domiciliarios de cocina”, PICCG24-14 at Universidad Católica de Cuenca. The authors also declare that they received financial support for the research, authorship, and/or publication of this article from the formative research project entitled “Diseño y construcción de robot móvil omnidireccional didáctico”, code PIFCVI24, funded through an official call by the Undergraduate Research Area of the Universidad Católica de Cuenca.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to its proof-of-concept nature, conducted under controlled laboratory conditions, involving only non-invasive measurements and no clinical intervention, diagnosis, or medical decision-making.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Acknowledgments

This study was conducted as part of the activities of the Robótica, Automatización, Sistemas Inteligentes y Embebidos (RobLab) laboratory and Sistemas Embebidos y Visión Artificial en Ciencias Arquitectónicas, Agropecuarias, Ambientales y Automática (SEVA4CA) research group at Universidad Católica de Cuenca.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Peleg, K.; Reuveni, H.; Stein, M. Earthquake Disasters—Lessons to Be Learned. Isr. Med. Assoc. J. 2002, 4, 361–365. [Google Scholar]
  2. Senthilkumaran, R.K.; Prashanth, M.; Viswanath, H.; Kotha, S.; Tiwari, K.; Bera, A. ARTEMIS: AI-Driven Robotic Triage Labeling and Emergency Medical Information System. arXiv 2023, arXiv:2309.08865. [Google Scholar]
  3. Cobos-Torres, J.-C.; Alhama Blanco, P.J.; Sánchez, A.R.; Abderrahim, M. Desarrollo de Un Sistema Robótico de Triaje Rápido Para Situaciones de Catástrofe. Figshare. 2019. Available online: https://figshare.com/articles/journal_contribution/Desarrollo_de_un_sistema_rob_tico_de_triaje_r_pido_para_situaciones_de_cat_strofe/7817810/1 (accessed on 23 January 2026).
  4. Sharma, D.; Rashno, E.; Zulkernine, F.; El Khodary, E.; Beninger, M.; Almeida, R.; Tao, J.; Alaca, F.; Elgazzar, K. Triage-Bot: An Assistive Triage Framework. In 2024 IEEE International Conference on Digital Health (ICDH), Shenzhen, China, 7–13 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 138–140. [Google Scholar]
  5. Ádám, N.; Vaľko, D.; Chovancová, E. Using Modern Hardware and Software Solutions for Mass Casualty Incident Management. In 2023 IEEE 27th International Conference on Intelligent Engineering Systems (INES), Nairobi, Kenya, 26–28 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 000183–000188. [Google Scholar]
  6. Saha, A.; Islam, M.Z.; Mondal, S. Hazard Hunter: A Low Cost Search and Rescue Robot. In 2024 International Conference on Signal Processing, Computation, Electronics, Power and Telecommunication (IConSCEPT), Karaikal, India, 4–5 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  7. Xin, C.; Qiao, D.; Hongjie, S.; Chunhe, L.; Haikuan, Z. Design and Implementation of Debris Search and Rescue Robot System Based on Internet of Things. In 2018 International Conference on Smart Grid and Electrical Automation (ICSGEA), Changsha, China, 9–10 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 303–307. [Google Scholar]
  8. Narayana Raju, P.J.; Gaurav Pampana, V.; Pandalaneni, V.; Gugapriya, G.; Baskar, C. Design and Implementation of a Rescue and Surveillance Robot Using Cross-Platform Application. In 2022 International Conference on Inventive Computation Technologies (ICICT), Lalitpur, Nepal, 20–22 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 644–648. [Google Scholar]
  9. Reghunath, S.; Mony, S.S.; Sreeyuktha, M.; Saranya, R. Human Detection Robot; AIJR Publisher: Balrampur, India, 2023; pp. 304–310. [Google Scholar]
  10. Mösch, L.; Pokee, D.Q.; Barz, I.; Müller, A.; Follmann, A.; Moormann, D.; Czaplik, M.; Pereira, C.B. Automated Unmanned Aerial System for Camera-Based Semi-Automatic Triage Categorization in Mass Casualty Incidents. Drones 2024, 8, 589. [Google Scholar] [CrossRef]
  11. Lu, J.; Wang, X.; Chen, L.; Sun, X.; Li, R.; Zhong, W.; Fu, Y.; Yang, L.; Liu, W.; Han, W. Unmanned Aerial Vehicle Based Intelligent Triage System in Mass-Casualty Incidents Using 5G and Artificial Intelligence. World J. Emerg. Med. 2023, 14, 273. [Google Scholar] [CrossRef] [PubMed]
  12. Kuswadi, S.; Adji, S.I.; Sigit, R.; Tamara, M.N.; Nuh, M. Disaster Swarm Robot Development: On Going Project. In 2017 International Conference on Electrical Engineering and Informatics (ICELTICs), Banda Aceh, Indonesia, 18–20 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 45–50. [Google Scholar]
  13. Pająk, A.; Przybyło, J.; Augustyniak, P. Touchless Heart Rate Monitoring from an Unmanned Aerial Vehicle Using Videoplethysmography. Sensors 2023, 23, 7297. [Google Scholar] [CrossRef]
  14. Cruz Ulloa, C.; Prieto Sánchez, G.; Barrientos, A.; Del Cerro, J. Autonomous Thermal Vision Robotic System for Victims Recognition in Search and Rescue Missions. Sensors 2021, 21, 7346. [Google Scholar] [CrossRef]
  15. Kerr, E.; McGinnity, T.M.; Coleman, S.; Shepherd, A. Human Vital Sign Determination Using Tactile Sensing and Fuzzy Triage System. Expert Syst. Appl. 2021, 175, 114781. [Google Scholar] [CrossRef]
  16. Tian, X.; Zheng, Y.; Huang, L.; Bi, R.; Chen, Y.; Wang, S.; Su, W. LightSeek-YOLO: A Lightweight Architecture for Real-Time Trapped Victim Detection in Disaster Scenarios. Mathematics 2025, 13, 3231. [Google Scholar] [CrossRef]
  17. Lee, S.; Kim, H.; Lee, B. An Efficient Rescue System with Online Multi-Agent SLAM Framework. Sensors 2019, 20, 235. [Google Scholar] [CrossRef]
  18. Ramezani, M.; Amiri Atashgah, M.A. Energy-Aware Hierarchical Reinforcement Learning Based on the Predictive Energy Consumption Algorithm for Search and Rescue Aerial Robots in Unknown Environments. Drones 2024, 8, 283. [Google Scholar] [CrossRef]
  19. Shafkat Tanjim, M.S.; Rafi, S.A.; Oishi, A.N.; Barua, S.; Dey, H.C.; Babu, M.R. Image Processing Intelligence Analysis for Robo-Res 1.0: A Part of Humanoid Rescue-Robot. In 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, 5–7 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1229–1232. [Google Scholar]
  20. Grandia, R.; Jenelten, F.; Yang, S.; Farshidian, F.; Hutter, M. Perceptive Locomotion Through Nonlinear Model-Predictive Control. IEEE Trans. Robot. 2023, 39, 3402–3421. [Google Scholar] [CrossRef]
  21. Robledo-Rella, V.; Neri, L.; García-Castelán, R.M.G.; Gonzalez-Nucamendi, A.; Valverde-Rebaza, J.; Noguez, J. A Comparative Study of a New Challenge-Based Learning Model for Engineering Majors. Front. Educ. 2025, 10, 1545071. [Google Scholar] [CrossRef]
  22. Ramírez de Dampierre, M.; Gaya-López, M.C.; Lara-Bercial, P.J. Evaluation of the Implementation of Project-Based-Learning in Engineering Programs: A Review of the Literature. Educ. Sci. 2024, 14, 1107. [Google Scholar] [CrossRef]
  23. Lavado-Anguera, S.; Velasco-Quintana, P.-J.; Terrón-López, M.-J. Project-Based Learning (PBL) as an Experiential Pedagogical Methodology in Engineering Education: A Review of the Literature. Educ. Sci. 2024, 14, 617. [Google Scholar] [CrossRef]
  24. Farizi, S.F.; Umamah, N.; Soepeno, B. The Effect of the Challenge Based Learning Model on Critical Thinking Skills and Learning Outcomes. Anatol. J. Educ. 2023, 8, 191–206. [Google Scholar] [CrossRef]
  25. Oo, T.Z.; Kadyirov, T.; Kadyjrova, L.; Józsa, K. Design-Based Learning in Higher Education: Its Effects on Students’ Motivation, Creativity and Design Skills. Think. Skills Creat. 2024, 53, 101621. [Google Scholar] [CrossRef]
  26. Neumann, M.; Baumann, L. Agile Methods in Higher Education: Adapting and Using EduScrum with Real World Projects. In 2021 IEEE Frontiers in Education Conference (FIE), Lincoln, NE, USA, 13–16 October 2021; IEEE: Piscataway, NJ, USA, 2021. [Google Scholar]
  27. Macenski, S.; Jambrecic, I. SLAM Toolbox: SLAM for the Dynamic World. J. Open Source Softw. 2021, 6, 2783. [Google Scholar] [CrossRef]
  28. Elephant Robotics. myagv_ros: ROS Package for MyAGV Mobile Robot. 2020. Available online: https://github.com/elephantrobotics/myagv_ros (accessed on 23 January 2026).
  29. Hörner, J. Map-Merging for Multi-Robot System. Bachelor’s Thesis, Charles University in Prague, Faculty of Mathematics and Physics, Prague, Czech Republic, 2016. [Google Scholar]
  30. Ultralytics. YOLOv8 Pose Models. Available online: https://github.com/ultralytics/ultralytics/issues/1915 (accessed on 16 March 2025).
  31. Cobos Torres, J.C.; Abderrahim, M. Measuring Heart and Breath Rates by Image Photoplethysmography Using Wavelets Technique. IEEE Lat. Am. Trans. 2017, 15, 1864–1868. [Google Scholar] [CrossRef]
  32. Wilkerson, S.A.; Gadsden, S.A.; Lee, A.; Vandemark, R.N.; Hill, E.; Gadsden, A.D. ROS as an Undergraduate Project-Based Learning Enabler; American Society for Engineering Education: Salt Lake City, UT, USA, 2018. [Google Scholar]
  33. Wilkerson, S.; Forsyth, J.; Korpela, C. Project Based Learning Using the Robotic Operating System (ROS) for Undergraduate Research Applications. In Proceedings of the 2017 ASEE Annual Conference & Exposition, Columbus, OH, USA, 24 June 2017; ASEE Conferences. [Google Scholar]
  34. Hernandez, J.L.; Roman, G.; Saldana, C.K.; Rios, C.A. Application of the Challenge-Based Learning Methodology, as a Trigger for Motivation and Learning in Robotics. In 2020 X International Conference on Virtual Campus (JICV), Tetouan, Morocco, 3–5 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–4. [Google Scholar]
  35. Bavle, H.; Sanchez-Lopez, J.L.; Cimarelli, C.; Tourani, A.; Voos, H. From SLAM to Situational Awareness: Challenges and Survey. Sensors 2023, 23, 4849. [Google Scholar] [CrossRef]
  36. Caputo, S.; Castellano, G.; Greco, F.; Mencar, C.; Petti, N.; Vessio, G. Human Detection in Drone Images Using YOLO for Search-and-Rescue Operations. In AIxIA 2021—Advances in Artificial Intelligence; Springer: Cham, Switzerland, 2022; pp. 326–337. [Google Scholar]
  37. Tsai, P.-F.; Liao, C.-H.; Yuan, S.-M. Using Deep Learning with Thermal Imaging for Human Detection in Heavy Smoke Scenarios. Sensors 2022, 22, 5351. [Google Scholar] [CrossRef] [PubMed]
  38. Ben-Shoushan, R.; Brook, A. Fused Thermal and RGB Imagery for Robust Detection and Classification of Dynamic Objects in Mixed Datasets via Pre-Trained High-Level CNN. Remote Sens. 2023, 15, 723. [Google Scholar] [CrossRef]
  39. Huang, H.-W.; Chen, J.; Chai, P.R.; Ehmke, C.; Rupp, P.; Dadabhoy, F.Z.; Feng, A.; Li, C.; Thomas, A.J.; da Silva, M.; et al. Mobile Robotic Platform for Contactless Vital Sign Monitoring. Cyborg Bionic Syst. 2022, 2022, 9780497. [Google Scholar] [CrossRef] [PubMed]
  40. Ben Hazem, Z.; Guler, N.; Altaif, A.H. Model-Free Trajectory Tracking Control of a 5-DOF Mitsubishi Robotic Arm Using Deep Deterministic Policy Gradient Algorithm. Discov. Robot. 2025, 1, 4. [Google Scholar] [CrossRef]
Figure 1. Hardware architecture.
Figure 2. Triage system operation.
Figure 3. Nodes used in the triage system, where numbers indicate the different operational phases (I–III) and arrows represent the interactions between the corresponding nodes and services.
Figure 4. Explore_lite architecture [29].
Figure 5. Frontier-based exploration.
Figure 6. Person detection in autonomous navigation.
Figure 7. Established perimeter for obstacle avoidance.
Figure 8. Segmented scenes for computer vision-based navigation: (a) rightward orientation to align the robot toward the central target; (b) forward motion for frontal approach; (c) leftward orientation to refine alignment and positioning.
Figure 9. Navigation replanning and victim tracking through computer vision. Red arrows indicate the robot visual focus toward the victim, while black arrows represent the robot navigation trajectory.
Figure 10. Robot positioning for measuring vital signs.
Figure 11. Frontier-based exploration and map generation.
Figure 12. Person detection using YOLOv8.
Figure 13. Computer vision-guided navigation and obstacle avoidance using LiDAR. Blue arrows illustrate the sequence of robot movements throughout the navigation process.
Figure 14. ROI selection for measuring vital signs: (a) robot positioning, (b) ROI selection.
Figure 15. Accident scenario simulations.
Figure 16. Bland–Altman plots comparing rPPG and pulse oximeter measurements for (a) BPM and (b) SpO2. The solid line represents the mean difference, and the dotted lines indicate the ±1.96 standard deviation limits of agreement.
Table 1. Methodological stages of the project and expected learning outcomes.
Stage | Student Role | Pedagogical Purpose | Key Learning Outcome | Evaluation Method
1. Proposed Challenge | Assumes the task of configuring and programming a robot capable of performing autonomous triage of victims in a simulated environment. | Motivate through a real-world problem (challenge-based learning) and connect robotics with social impact. | Understanding the context, defining the problem, formulating objectives and functional requirements. | Review of the problem statement and initial proposal: clarity of the challenge, relevance of the approach, and technical justification.
2. Technical Investigation | Analyzes hardware, sensors, and software options; compares architectures and vision/control tools in ROS. | Develop autonomy and critical thinking through guided inquiry. | Evidence-based selection of components (sensors, camera, actuators, ROS), understanding principles of detection and triage. | Technical report or research logbook: relevance, rigor, and justification of technological decisions.
3. System Design and Integration | Develops the system architecture; programs navigation, detection, and vital-sign measurement nodes. | Foster active learning through practical, experiential integration. | Competence in hardware–software integration, modular design, technical problem-solving, and collaborative work. | Practical performance assessment and peer review of the functional prototype: architectural coherence and basic operation.
4. Validation and Adjustment | Performs tests, debugs errors, calibrates sensors, and optimizes robot behavior. | Apply the scientific method: hypothesis, testing, and iterative improvement. | Development of analytical thinking, technical autonomy, and continuous-improvement skills. | Experimental validation rubric: robot performance (victim detection, autonomy, vital-sign measurement) and evidence of iteration and refinement.
5. Final Evaluation and Reflection | Evaluates the overall performance of the system and proposes improvements or future versions. | Promote metacognition and authentic assessment of learning. | Critical thinking, technical communication, and self-regulation of learning. | Final presentation and defense of the project, including robot demonstration and technical report delivery. Comprehensive triage-flow assessment, peer review, innovation, and justification of improvements.
Table 2. Statistical comparison between the rPPG method and the pulse oximeter in the measurement of SpO2 and BPM.
Metric | SpO2 Value | BPM Value
Sample size | 210 | 210
Arithmetic mean | −1.2143 | 0.1095
95% confidence interval (mean) | −1.7152 to −0.7134 | −0.07879 to 0.2978
P (H0: mean = 0) | <0.0001 | 0.2529
Lower limit | −8.4309 | −2.6036
95% confidence interval (lower) | −9.2883 to −7.5735 | −2.9260 to −2.2813
Upper limit | 6.0023 | 2.8227
95% confidence interval (upper) | 5.1450 to 6.8597 | 2.5003 to 3.1450
Regression equation | y = 22.1236 − 0.2401x | y = 2.6192 − 0.03554x
Table 3. Phases of the robotic triage system: challenges, solutions, and methodologies.
Table 3. Phases of the robotic triage system: challenges, solutions, and methodologies.
| Phase | Challenge | Solution | Methodology | Node(s)/Technology Used |
|---|---|---|---|---|
| Phase 1: Autonomous navigation based on frontier detection and SLAM, along with victim search | Exploring an unknown environment without human intervention. | Implementing SLAM for autonomous mapping and exploration. | PBL/CBL | patrol_agv, move_base, explore_lite |
| | Detecting people in real time during exploration. | Integrating a model to identify human bodies and estimate posture. | PBL | detect_person, YOLOv8-pose |
| Phase 2: Computer vision-guided navigation with obstacle avoidance | Replanning the trajectory upon detecting a victim. | Coordinating modules for computer vision-based navigation and real-time obstacle avoidance. | PBL | follow_person, follow_agv, follow_mycobot |
| | Avoiding obstacles while keeping the victim within the field of view. | Combining LiDAR data and robotic manipulator control to maintain continuous vision. | PBL | LiDAR, camera + manipulator |
| | Coordinating navigation and computer vision. | Synchronizing movement, visual tracking, and camera positioning. | PBL | main_agv, manipulator, camera |
| Phase 3: Contactless measurement of vital signs using rPPG | Measuring vital signs without placing sensors directly on the victim. | Researching and applying the rPPG technique to estimate heart rate and SpO2 without contact. | CBL/self-directed study | Heartrate, rPPG algorithm |
| | Positioning the camera correctly in front of the victim’s face. | Adjusting the camera to 0.3 m from the victim’s face using the manipulator, with a defined ROI. | PBL | myCobot, usb_cam, frontal ROI |
| | Stabilizing the signal to obtain reliable measurements. | Adjusting lighting, focus, and distance to minimize interference and optical noise. | PBL | Computer vision + signal processing |
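The core idea behind the Phase 3 rPPG measurement can be sketched briefly: average a color channel over the facial ROI in each frame, remove the slow trend, and take the dominant spectral peak within the physiological band as the heart rate. The code below is a minimal illustration of that general technique, not the system's actual pipeline; the function name, parameters, and synthetic trace are assumptions.

```python
# Minimal sketch of green-channel rPPG heart-rate estimation: detrend the
# per-frame ROI mean and pick the strongest FFT peak in the 0.7-3.0 Hz band
# (roughly 42-180 BPM). Input would normally come from the robot's camera.
import numpy as np

def estimate_bpm(green_trace, fps, lo_hz=0.7, hi_hz=3.0):
    """Heart rate (BPM) from a per-frame green-channel mean trace."""
    x = np.asarray(green_trace, float)
    x = x - x.mean()                              # remove the DC component
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo_hz) & (freqs <= hi_hz)    # physiological band only
    peak = freqs[band][np.argmax(power[band])]
    return 60.0 * peak

# Synthetic 10 s trace at 30 fps: a 1.2 Hz pulse (72 BPM) plus sensor noise.
fps = 30
t = np.arange(300) / fps
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) \
        + np.random.default_rng(1).normal(0, 0.1, t.size)
print(estimate_bpm(trace, fps))  # → 72.0
```

This also makes the Phase 3 difficulties concrete: poor lighting or facial occlusion shrinks the pulsatile component relative to the noise floor, which is why the student had to stabilize lighting, focus, and the 0.3 m camera distance.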
Table 4. Summary of Project Stages and Obtained Results.
| Stage | Delivered Evidence | Results |
|---|---|---|
| 1. Proposed Challenge | Document detailing the analysis and definition of the problem, objectives, scope, and importance of the project, and a functional requirements matrix. | The main problem of the autonomous triage system was established, defining the objectives, scope, and relevance of the project in emergency contexts. The student identified the technical requirements needed for prototype development. |
| 2. Technical Research | Technical logbook with a comparative analysis of sensors, modules, and robotic platforms available in the lab. Their SDKs and compatibility with ROS were considered as the basis for system development and integration. | The student evaluated options for available hardware, sensors, and robotic platforms. The most suitable components were selected, and the general system architecture in ROS was defined. |
| 3. System Design and Integration | Integration diagram of ROS nodes and communication configuration between the manipulator robot (myCobot), the mobile platform (myAGV), and the RGB camera used for visual perception. Fully assembled and operational prototype. | The myCobot manipulator and myAGV mobile platform were integrated and mechanically adapted. ROS control, communication, and perception nodes were configured using an RGB camera, achieving stable and coordinated system operation. |
| 4. Validation and Adjustment | Functional validation records, person-detection tests, autonomous navigation trials, and comparison of rPPG data obtained from the camera against a reference sensor. ROS parameter adjustments were made to improve accuracy and stability. | Person detection, autonomous navigation of the myAGV, and synchronization with the myCobot were validated. rPPG values showed good correlation with the physical sensor after calibration, resulting in stable system performance. |
| 5. Final Evaluation and Reflection | Final technical report, documented source code, system evaluation rubric, and peer-evaluation records. | The overall system performance was evaluated, confirming detection, navigation, and vital-sign estimation. The myAGV–myCobot setup demonstrated stable operation and synchronization within the ROS architecture. Peer evaluation provided feedback on technical execution and collaboration, identifying improvements for rPPG accuracy and motion response. |
Table 5. Student-Reported Difficulties and Implemented Solutions.
| Stage | Difficulty and Student-Expressed Solution |
|---|---|
| 1. Proposed Challenge | “The task did not specify the hardware; I analyzed alternatives and selected the most suitable mobile platform, robotic arm, and camera for the system.” |
| 2. Technical Research | “The reviewed projects focused on basic control; I investigated the capabilities already available in the robot under ROS and adapted them to the proposed challenge to incorporate autonomous navigation.” |
| 3. System Design and Integration | “I had issues with synchronizing the system workflow, so I implemented an orchestrator node to coordinate the modules.” |
| 4. Validation and Adjustment | “The rPPG measurements were unstable; I adjusted lighting/distance until achieving consistent readings.” |
| 5. Final Evaluation and Reflection | “The complete triage sequence was validated, confirming the integration of the system as a functional solution.” |
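Stage 3 mentions an orchestrator node that coordinates the modules. The authors' ROS implementation is not shown here; the plain-Python state machine below only illustrates the coordination pattern described in the paper (the three triage phases run in sequence, and losing visual contact restarts the search from exploration). The phase callables are hypothetical stand-ins for ROS service calls.

```python
# Illustrative sketch (not the authors' ROS node) of a sequential triage
# orchestrator: Phase 1 explore -> Phase 2 approach -> Phase 3 measure,
# restarting exploration whenever a phase fails (e.g., the victim is lost).
EXPLORE, APPROACH, MEASURE, DONE = "explore", "approach", "measure", "done"

def run_triage(explore, approach, measure, max_cycles=10):
    """Run the phases in order; any phase returning False restarts the search."""
    state = EXPLORE
    for _ in range(max_cycles):
        if state == EXPLORE:
            state = APPROACH if explore() else EXPLORE   # victim detected?
        elif state == APPROACH:
            state = MEASURE if approach() else EXPLORE   # visual contact lost
        elif state == MEASURE:
            if measure():                                # vital signs acquired
                return DONE
            state = EXPLORE
    return state

# Example: the approach fails once (victim lost), then the cycle succeeds.
attempts = iter([False, True])
result = run_triage(lambda: True, lambda: next(attempts), lambda: True)
print(result)  # → "done"
```

Centralizing the sequencing in one coordinator keeps the perception and navigation nodes independent, which is the synchronization problem the student reported solving.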
Table 6. Comparative analysis of robotic triage systems.
| Study/System | Technologies Used | Capabilities | Limitations |
|---|---|---|---|
| Proposed system (YOLOv8-Pose + rPPG + SLAM) | YOLOv8-Pose, SLAM, rPPG, RGB camera | Accurate person and posture detection; autonomous navigation; contactless vital-sign measurement | Sensitive to low lighting; requires a visible, well-lit face; limited performance under severe occlusions |
| YOLOv5 on drones | YOLOv5, aerial RGB camera | Fast aerial person detection; efficient for visual searches | Does not measure vital signs or perform autonomous navigation |
| Thermal vision + YOLO | YOLOv4, thermal (IR) camera | Detection in smoky or low-visibility environments; contactless operation | Does not measure pulse; low resolution; no posture estimation |
| Dr. Spot (rPPG + RGB and IR camera) | rPPG with fixed RGB and IR cameras mounted on robot | Highly accurate remote monitoring of heart rate, temperature, and respiration | Sensitive to low lighting; requires a visible, well-lit face; limited performance under severe occlusions |
| Contact-based sensors | Pulse oximeters, thermometers, respiratory bands | Clinically accurate measurement of pulse, respiration, temperature, pressure, and SpO2 | Requires physical contact; slow for mass triage applications |
Table 7. Summary of Identified Limitations and Future Research Directions.
| Aspect | Condition/Limitation Identified | Possible Improvement/Future Research |
|---|---|---|
| Operating environment | Evaluation carried out in simulated and controlled emergency scenarios | Progressive validation in larger, more dynamic environments |
| Environmental dynamics | SLAM-based, frontier-based exploration assumes mostly static environments | Incorporating dynamic mapping and semantic perception |
| Navigation | Sensitive to moving obstacles and changes in the scene | Adaptive planning strategies |
| Visual tracking | Stable operation at moderate speed (0.25 m/s); possible instability on uneven terrain | Dynamic adjustment of camera orientation based on orientation sensors |
| Obstacle avoidance | Use of a fixed safety distance for simplicity and stability | Context-dependent adaptive safety margins |
| Vision (YOLOv8-Pose) | Capable of detecting human poses with partial occlusions; sensitive to unfavorable lighting or severe occlusions | Image pre-processing and controlled lighting; in adverse scenarios, incorporation of complementary sensors (RGB-D or thermal camera) for greater robustness |
| rPPG estimation | Dependent on good lighting and facial visibility | Controlled lighting and incorporation of thermal sensors |
| Sensors | The impact of noise and calibration of the RGB camera (rPPG/detection) and LiDAR (navigation) was not quantified | Noise modeling, recalibration, and systematic verification of sensors |
| Scalability | Validation in areas of limited size due to mobile-platform mobility restrictions | Platforms with greater locomotion capability on uneven terrain |
| Victim management | Sequential care of a single victim; no explicit prioritization | Multi-objective management and prioritization criteria |
| ROS architecture | Sequential execution using services; delays not formally assessed | Latency optimization and analysis in complex scenarios |
| Computational load | Intensive processing performed off-board the robotic platform | On-board execution of SLAM and detection, and on-demand rPPG, to reduce latency and reliance on external compute |
| Sensing | Exclusive use of RGB vision | Integration of thermal or depth sensors |
| Failure cases | If visual contact with the victim is lost, the system restarts the search from Phase 1 | Recovery strategies and perceptual redundancy |
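Several Table 7 entries refer to the frontier-based exploration that drives Phase 1. As a sketch of the underlying idea (not the explore_lite implementation itself), frontier cells are free cells adjacent to unknown space in the occupancy grid; the robot navigates toward them until no frontiers remain. The grid encoding below assumes the ROS OccupancyGrid convention (−1 unknown, 0 free, 100 occupied), and the helper name is illustrative.

```python
# Sketch of frontier detection on an occupancy grid: a frontier cell is a
# free cell (0) with at least one unknown (-1) 4-connected neighbor.
# Encoding assumption: -1 unknown, 0 free, 100 occupied (ROS OccupancyGrid).
import numpy as np

def frontier_cells(grid):
    """Return (row, col) indices of free cells bordering unknown space."""
    g = np.asarray(grid)
    free = g == 0
    unknown = g == -1
    near_unknown = np.zeros_like(unknown)
    # Shift the unknown mask one cell in each direction (4-connectivity).
    near_unknown[1:, :] |= unknown[:-1, :]
    near_unknown[:-1, :] |= unknown[1:, :]
    near_unknown[:, 1:] |= unknown[:, :-1]
    near_unknown[:, :-1] |= unknown[:, 1:]
    return np.argwhere(free & near_unknown)

grid = np.array([
    [0,   0, -1],
    [0, 100, -1],
    [0,   0,  0],
])
print(frontier_cells(grid))  # → [[0 1] [2 2]]
```

This also shows why the approach assumes a mostly static scene: the frontier set is recomputed from the current map only, so cells freed or blocked by moving obstacles are not distinguished from genuinely unexplored space.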

Share and Cite

Angamarca-Avendaño, D.-A.; Zhañay-Salto, D.-A.; Cobos-Torres, J.-C. A Mobile Triage Robot for Natural Disaster Situations. Electronics 2026, 15, 559. https://doi.org/10.3390/electronics15030559
