Review

Enhancing Emergency Response: The Critical Role of Interface Design in Mining Emergency Robots

1 Department of Mining Engineering, University of Kentucky, Lexington, KY 40506, USA
2 Research Associate, SenseLab Space Informatics, Technical University of Crete, 73100 Chania, Greece
3 Petroleum Recovery Research Center, New Mexico Tech, Socorro, NM 87801, USA
4 Department of Electrical Engineering, Colorado School of Mines, Golden, CO 80401, USA
5 Department of Mechanical Engineering, New Mexico Tech, Socorro, NM 87801, USA
* Author to whom correspondence should be addressed.
Robotics 2025, 14(11), 148; https://doi.org/10.3390/robotics14110148
Submission received: 25 August 2025 / Revised: 3 October 2025 / Accepted: 16 October 2025 / Published: 24 October 2025
(This article belongs to the Section Industrial Robots and Automation)

Abstract

While robotic technologies have shown great promise in enhancing productivity and safety, their integration into the mining sector, particularly for search and rescue (SAR) missions, remains limited. The success of these systems depends not only on their technical capabilities, but also on the effectiveness of human–robot interaction (HRI) in high-risk, time-sensitive environments. This review synthesizes key human factors, including cognitive load, situational awareness, trust, and attentional control, that critically influence the design and operation of robotic interfaces for mine rescue missions. Drawing on established cognitive theories such as Endsley’s Situational Awareness Model, Wickens’ Multiple Resource Theory, Mental Model theory, and Cognitive Load Theory, we identified core challenges in current SAR interface design for mine rescue missions and mapped them to actionable design principles. We proposed a human-centered framework tailored to underground mine rescue operations, with specific recommendations for layered feedback, multimodal communication, and adaptive interfaces. By contextualizing cognitive science in the domain of mining emergencies, this work offers a structured guide for designing intuitive, resilient, and operator-supportive robotic systems.

1. Introduction

The rapid advancement of robotics has transformed industrial practices, moving from isolated robotic systems to collaborative platforms that operate alongside humans in complex environments [1]. In many applications, full automation remains impractical due to technical limitations and contextual uncertainty. As a result, human–robot collaboration, especially in high-risk or unstructured domains, has emerged as a critical focus of research [2,3]. In the mining industry, one of the most critical and high-stakes applications of robotics is in mine search and rescue (SAR) operations, where it may be too dangerous—or entirely impossible—for humans to enter. In such scenarios, robots are deployed as remote extensions of human responders. However, these systems must be operated effectively (often under severe time pressure) through interfaces that convey situational awareness data, support decision-making, and enable safe control. Poorly designed interfaces can increase operator workload, reduce situational awareness, and compromise mission success.
While human–robot interaction (HRI) has been extensively studied in domains such as manufacturing and defense, its application in mine emergency response remains limited [4,5,6]. The unique constraints of underground mining (confined geometry, unpredictable hazards, degraded sensor input, and communication interruptions) create specific challenges for HRI design that are not addressed by existing interface standards. In this context, we conducted a targeted review and discussion of interface design for mine rescue robotics, with a focus on human-centered design principles grounded in cognitive psychology. While earlier reviews [7,8] provided hardware-oriented summaries, they did not systematically integrate the cognitive HRI models emerging from 2023–2025 studies [9,10]. This paper aims to synthesize and advance the field by bridging these new findings into a cognitive-model framework for SAR interface design, drawing together insights from HRI and cognitive science to guide interface development for safety-critical mining environments. Specifically, we address the following research questions:
  • RQ1: Which human factors most critically affect the design and operation of robotic interfaces in underground mine SAR missions?
  • RQ2: How can interface design mitigate these challenges to improve operator performance and mission safety?
To answer these questions, we draw on established cognitive models, including Endsley’s Situational Awareness Model [11], Wickens’ Multiple Resource Theory [12], Mental Model theory [13,14], and Cognitive Load Theory [15], to identify key design requirements for SAR interfaces. We then analyze interface modalities used in existing mine rescue robots and extract patterns of success and failure. Based on this analysis, we propose a structured set of design guidelines aligned with the cognitive demands of SAR operations. In doing so, we aim to bridge the gap between cognitive science and robotic interface engineering in high-risk mining contexts. Throughout, we emphasize that human-centered interface design is not a secondary consideration but a mission-critical element of effective emergency response.

2. Methodology

This review adopts a narrative approach, employing qualitative and analytical methods to evaluate human–robot interaction (HRI) requirements in underground mine rescue robotics. The methodology integrates three main components: literature synthesis, cognitive model application, and comparative evaluation (see Figure 1).

2.1. Literature Search and Selection

A structured literature search was conducted across major academic databases, including IEEE Xplore, Scopus, ACM Digital Library, and Web of Science, covering the period from 2010 to 2025. Search terms included “mine rescue robots,” “human–robot interaction,” “situational awareness,” “cognitive models,” and “emergency interface design.”
  • Records identified through database searching: ~300
  • Duplicates removed and irrelevant records excluded: ~190
  • Studies assessed for eligibility: ~110
  • Studies included in final synthesis: 110 peer-reviewed journal and conference papers
Inclusion criteria required studies to (1) address HRI in high-risk contexts or discuss cognitive frameworks, and (2) be peer-reviewed journal or conference papers in English. Exclusion criteria removed duplicates and non-English works.

2.2. Data Analysis

Although no formal scoring system was applied, priority was given to highly cited and peer-reviewed works to ensure robustness and coverage of influential contributions. The final pool of ~110 studies was systematically reviewed to identify recurring themes, methodological approaches, and design considerations. In addition, 10 technical reports were added to the selected studies.
To guide interpretation, four established cognitive models were applied as theoretical lenses. These frameworks provided structured insight into operator workload, decision-making, and emergency interface design in mine rescue robotics.

2.3. Comparative Evaluation

Finally, a comparative evaluation of existing Search and Rescue (SAR) robot interfaces was conducted. This allowed identification of design gaps, human factor implications, and opportunities for improvement. Synthesizing cognitive frameworks with literature findings enabled the development of design principles and recommendations to guide future HRI-focused interface design in mining emergency robotics. These three components are not isolated steps but interdependent elements of the review. The narrative review provided the evidence base from 110 studies, which was then interpreted through established cognitive models to frame operator challenges and design principles. The comparative evaluation subsequently mapped these principles onto existing SAR robot interfaces to highlight gaps and opportunities for improvement. Together, these stages form a layered methodology in which literature synthesis supplies the inputs, cognitive models provide the analytical lens, and comparative evaluation demonstrates applied relevance in real-world mining rescue contexts. This methodological approach directly addresses RQ1 by mapping the critical human factors affecting interface design in underground mine SAR missions.

3. Robotic Technologies in Mining: An Overview

Robotic systems, particularly Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), are increasingly deployed in the mining industry to improve operational efficiency, data collection, and worker safety. UAVs have gained wide adoption in both surface and underground environments due to their versatility and ability to carry advanced sensors. They support a range of applications, including ecological monitoring, slope stability assessment, and terrain surveying. UAVs have been used to map areas impacted by acid mine drainage, evaluate land degradation from coal mining, and create high-resolution 3D models of mine features such as unstable slopes [16,17,18,19,20]. For structural monitoring, UAVs enable detailed topographic and geological assessments [21]. UGVs can also play a vital role in enhancing safety and automation in underground mining. These robots are equipped with infrared cameras, gas sensors, and stereo vision systems to perform tasks such as machinery inspection, anomaly detection, and hazard identification in confined spaces [7,22]. Importantly, UAVs and UGVs have been adapted for high-risk applications such as search and rescue missions, where they navigate ahead of personnel to assess unstable areas. Notable examples of UGVs include Wolverine, Gemini Scout, and Numbat, purpose-built for mine rescue operations [22,23,24]. Human factors were sometimes considered in these robots; for instance, the Gemini Scout was equipped with simplified joystick controls and pan-tilt cameras to reduce operator workload and maintain situational awareness in low-visibility environments [24]. However, the Numbat system highlighted the challenges of cognitive overload under poor lighting and terrain complexity, underscoring the need for user-centered interfaces that align with operator mental models [7].

Search and Rescue Robots in Mining Emergencies

Rescue robots are classified as field robots, built to operate in unstructured, dynamic, and high-risk environments. In underground mine emergencies, robotic systems face collapsed roofs, toxic gases, water ingress, and debris, all of which can impair mobility, localization, and sensing. These challenges often require human operators to control the robot remotely, relying on delayed or degraded feedback. Even basic actions, such as traversing debris or identifying hidden passages, become difficult under such conditions, compounded by GPS-denied environments, darkness, and communication delays. To address these challenges, a variety of coal mine rescue robots (CMRRs) have been developed. Early examples like RATLER [25,26,27] and Numbat [28,29,30] had basic mobility and sensing, while later systems such as ANDROS Wolverine V2 [7,31] and Groundhog [23] improved on navigation and environmental mapping. More recent designs like the Gemini Scout [23,24,32], Subterranean Robot [33], and CUMT-V [34,35] incorporate explosion-proof designs, partial autonomy, and multimodal sensing systems. These robots share several operational challenges, including environmental hazards (e.g., impaired sensor performance due to debris), teleoperation dependence, limited operator interfaces, and modularity and sensory-overload issues. Kadous et al. provided one of the earliest systematic evaluations of SAR robot user interfaces, grounding design in awareness, efficiency, familiarity, and responsiveness. Their RoboCup Rescue trials showed that intuitive, game-like layouts reduced learning time and improved teleoperation performance [36].
Despite their life-saving potential, robotic systems remain underdeveloped due to high R&D costs, limited market incentives, and the infrequent nature of such emergencies (see Figure 2). To date, most research and development efforts have also focused on enhancing the robustness and mobility of rescue robots. While these are essential capabilities, HRI dimensions remain largely overlooked. For instance, in the case of ANDROS Wolverine, a critical failure occurred when a single operator was unable to simultaneously manage robot navigation and location tracking, ultimately compromising the mission. Similarly, the Numbat robot presented significant usability challenges when operators faced increased cognitive load due to low visibility and unfamiliar terrain, which ultimately contributed to mission failure [7]. By contrast, the Gemini Scout integrated human-factor considerations such as simplified control schemes and real-time visual feedback to support situational awareness, demonstrating the value of user-centered design in high-stress contexts. Lessons from these systems highlight the importance of applying cognitive models to improve perception under degraded feedback, and mental models to align system behavior with operator expectations. These examples underscore the urgent need to integrate human-factors principles into interface design. Developing intuitive, user-centered control systems that account for operator workload, situational awareness, and environmental stressors is essential to enhancing mission effectiveness in high-risk search and rescue operations. Effective collaboration between human mine rescuers and robotic systems is fundamental to mission success, especially in high-stakes environments characterized by uncertainty, stress, and limited communication. While only a few representative examples are discussed here, a broader set of SAR robots and their HRI characteristics is analyzed in Section 9, where we detail their interface designs, associated challenges, and how these challenges can be interpreted and addressed through cognitive models. A broader review of SAR robot platforms and design requirements has been presented in our earlier work [37], which complements this paper’s focus by providing detailed coverage of state-of-the-art robotic systems. In the present manuscript, we build on that foundation to emphasize interface design and the integration of human factors through cognitive models.

4. HRI in Underground Mine Emergencies

HRI refers to the study and design of systems in which humans and robots interact in shared environments or collaborative tasks. In the field of HRI, predicting human intentions is an essential step for safe and intuitive collaboration. A central design goal in this research area is to infer human goals during task performance from acquired sensor data [7]. This involves translating observable signals into estimates of operator intent, which can support adaptive robot responses [7,38,39,40,41,42]. In mine emergency scenarios, where environments are often unpredictable, ensuring safety in HRI is critical to prevent injuries and enable effective collaboration [42,43,44].
For HRI in mine rescue to be effective, robotic systems must prioritize safety, ease of use, and the ability to flexibly adapt to rapidly changing underground conditions, all while reducing the physical and cognitive burden on human operators [44]. HRI in emergencies may fall into different categories depending on the nature of the interaction (see Table 1). In human–robot coexistence (HRCx), robots may operate alongside miners or rescuers in shared tunnels without direct coordination, focusing mainly on safe navigation and collision avoidance [45,46,47,48,49]. Human–robot cooperation (HRCp) becomes relevant when robots and humans pursue shared goals in real time, such as scanning debris fields or stabilizing structures, requiring machine vision, force sensing, and situational awareness [50,51]. At its most advanced, human–robot collaboration (HRC) involves physical or cognitive cooperation, such as robots predicting human actions or responding to voice commands during rescue tasks [52,53,54]. These interaction modes are essential to developing intelligent robotic systems that can be deployed not only to assist, but also to actively partner with humans during high-stakes mine emergency operations.
Robots in underground mining often follow an HRCx model in which humans remain in the loop to supervise, guide, or intervene as needed. In dangerous and unpredictable environments, robots handle high-risk tasks, such as entering the mine ahead of human rescuers to verify safety, while humans contribute critical thinking and adaptability. This setup requires effective human–robot interaction, supported by interfaces designed with human factors in mind to ensure clarity, reduce cognitive load, and enhance situational awareness. For these reasons, semi-autonomous robots are preferred in search and rescue: they can operate independently in hazardous areas but still allow human oversight when crucial decisions are needed. This balance ensures both safety and flexibility in emergency operations. Mitigating the challenges related to HRI can help in the successful application of robots in emergency scenarios. Semi-autonomous robots operating in high-risk environments face numerous challenges, and integrating human factors into interface design is crucial to ensure these robots can work effectively and safely with human teammates.

5. Human Factors Challenges in SAR Interface Design

Designing interfaces for mine rescue robots requires more than technical competence; it demands a deep understanding of human cognitive and behavioral limitations under stress. In underground emergency settings, responders must make rapid decisions based on limited and often ambiguous feedback from robotic systems. A poorly designed interface can overload the operator, impair situational awareness, and erode trust, ultimately compromising the mission. In this section, we examine five critical human-factors challenges that interface designers must address, particularly in the context of underground rescue missions. Each factor is grounded in cognitive science and has direct implications for interface performance (see Table 2).

5.1. Situational Awareness (SA)

Situational Awareness, defined by Endsley as the “perception of elements in the environment, comprehension of their meaning, and projection of their status in the near future,” is essential for successful teleoperation. In SAR missions, low-quality video feeds, poor lighting, limited sensors, and time pressure degrade SA [11]. A well-known challenge is the “soda-straw” effect, where narrow field-of-view cameras create tunnel vision, preventing the operator from understanding the robot’s spatial context. Operators become overly focused on short-term control, losing the broader picture of where the robot is and what it is encountering. To mitigate these issues, interface designers should consider implementing multi-camera views, panoramic stitching, or map overlays that help operators reconstruct the robot’s spatial position in real time. Environmental hazards that fall outside the operator’s immediate field of view should be flagged with visual or auditory alerts to support anticipatory decision-making [55].
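To make the out-of-view alerting idea concrete, the following minimal sketch checks whether a detected hazard lies outside the camera’s field of view and, if so, emits a directional cue. It is an illustrative sketch only; the pose convention, FOV value, and function names are our own assumptions rather than any fielded system’s logic.

```python
import math

def angle_diff(a_deg, b_deg):
    """Smallest signed difference a - b, wrapped to [-180, 180) degrees."""
    return (a_deg - b_deg + 180.0) % 360.0 - 180.0

def out_of_view_alert(robot_pose, camera_fov_deg, hazard_xy):
    """Return a directional cue if a hazard falls outside the camera's FOV.

    robot_pose: (x, y, heading_deg); heading measured counterclockwise from +x.
    """
    x, y, heading = robot_pose
    bearing = math.degrees(math.atan2(hazard_xy[1] - y, hazard_xy[0] - x))
    rel = angle_diff(bearing, heading)
    if abs(rel) <= camera_fov_deg / 2.0:
        return None  # hazard is on screen; an overlay marker suffices
    side = "left" if rel > 0 else "right"  # positive = counterclockwise of heading
    return f"Hazard {abs(rel):.0f} degrees to the {side}, outside camera view"

# Facing north (+y) with a 60-degree FOV; a hazard due east is flagged to the right.
print(out_of_view_alert((0.0, 0.0, 90.0), 60.0, (5.0, 0.0)))
```

Paired with an audible tone panned to the same side, such a cue supports anticipatory decision-making without forcing the operator to scan every feed.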

5.2. Cognitive Load

Rescue operations are inherently high-pressure tasks. The operator must manage navigation, communication, sensor interpretation, and decision-making, often simultaneously. According to Cognitive Load Theory (CLT) [15], performance declines when working memory is overwhelmed. Interfaces that present too much information, or require excessive manual input, can exceed cognitive capacity, leading to errors or delays. This is exacerbated when users are unfamiliar with robotic systems, which is common in mine SAR. To reduce cognitive load, designers should structure interfaces with layered information hierarchies, prioritizing critical data while allowing secondary data to be accessed as needed. Automating routine tasks (such as scanning gas concentrations or recalibrating position) and applying visual hierarchy (e.g., size, color, or grouping) can also reduce operator strain and response time. Szafir and Szafir proposed a framework linking human–robot interaction with data visualization, emphasizing task-oriented presentation (overview, filtering, anomaly detection) to support rapid operator sensemaking and reduce cognitive load [56].
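The layered-hierarchy idea can be sketched as a small priority filter, shown below. This is a hedged illustration, assuming three tiers and two load levels; the readout names and tier assignments are hypothetical, not drawn from any cited interface.

```python
from dataclasses import dataclass

@dataclass
class Readout:
    label: str
    value: str
    priority: int  # 1 = mission-critical, 2 = supporting, 3 = diagnostic

def visible_readouts(readouts, operator_load):
    """Progressive disclosure: under high load show only tier 1; under normal
    load show tiers 1-2; tier 3 stays behind an explicit user request."""
    max_tier = 1 if operator_load == "high" else 2
    return [r for r in sorted(readouts, key=lambda r: r.priority)
            if r.priority <= max_tier]

panel = [
    Readout("CH4 concentration", "1.2% vol", 1),
    Readout("O2 level", "19.4% vol", 1),
    Readout("Battery", "63%", 2),
    Readout("Comms link quality", "moderate", 2),
    Readout("Motor temperatures", "nominal", 3),
]
for r in visible_readouts(panel, operator_load="high"):
    print(f"{r.label}: {r.value}")  # only the two tier-1 gas readings appear
```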

5.3. Trust and Transparency

Trust determines whether an operator will rely on, ignore, or underutilize the robot. Mistrust can cause users to second-guess automated systems, while over-trust can lead to complacency [57]. Trust calibration is therefore essential. Interfaces should aim to provide transparent feedback about the robot’s intentions, status, and reasoning [58]. For instance, if a robot autonomously selects a route, it should also indicate why that route was chosen, such as gas concentration levels, obstacle proximity, or heat signatures. Including explainable AI (XAI) modules and displaying confidence levels or diagnostic information can help operators assess reliability [59]. Additionally, manual override options should be accompanied by clear feedback about the consequences of intervention [60].
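One way to operationalize this transparency is to attach a structured explanation payload to each autonomous decision, as in the sketch below. The class name, factor labels, and weights are invented for illustration; a real planner would populate them from its cost function.

```python
from dataclasses import dataclass, field

@dataclass
class RouteChoice:
    """Transparency payload accompanying an autonomous route decision."""
    route_id: str
    confidence: float  # 0..1, the planner's self-assessed reliability
    factors: dict = field(default_factory=dict)  # reason -> relative weight

    def explain(self):
        ranked = sorted(self.factors.items(), key=lambda kv: kv[1], reverse=True)
        lines = [f"Selected '{self.route_id}' (confidence {self.confidence:.0%}) because:"]
        lines += [f"  - {reason} (weight {w:.2f})" for reason, w in ranked]
        return "\n".join(lines)

choice = RouteChoice(
    route_id="north drift bypass",
    confidence=0.82,
    factors={"lower CH4 along path": 0.45,
             "greater obstacle clearance": 0.35,
             "cooler thermal signature": 0.20},
)
print(choice.explain())
```

Surfacing the same payload next to the manual-override control would also satisfy the recommendation that interventions come with clear feedback about their consequences.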

5.4. Attention and Individual Differences

Mine rescue operators, similar to urban search and rescue operators, often come from varied professional backgrounds and may differ in their sensory preferences and cognitive strategies [61]. Some may respond more effectively to visual signals, while others benefit from auditory or haptic cues [62]. The cognitive burden also increases when users are forced to shift frequently between different input and output modalities. To address this challenge, interfaces should support multimodal feedback and allow for customization based on user preferences, roles, or cognitive profiles [61]. Consistency within and between modes is essential, and critical information should be delivered through redundant channels, when possible, to reduce the need for modality switching [61].

5.5. Stress and Fragility

High-pressure environments activate physiological stress responses that impair fine motor control, working memory, and judgment [63]. These effects are amplified in rescue scenarios, where operators face personal risk, urgency, and information ambiguity. Interface designs should therefore prioritize robustness and simplicity. This includes using large, forgiving interaction targets (both physical and on-screen), minimizing the depth of command menus, and relying on clear visual or audio cues [64]. Redundancy is especially important: if visual feedback is degraded, audio alerts or haptic signals can ensure continuity of situational awareness [65]. Chitikena et al. highlight that SAR robot design must integrate ethical and design considerations at the development stage, focusing on snake robots in confined and hazardous environments as a case for balancing usability, safety, and responsibility [66]. These are among the human factors that most critically affect robotic operation in SAR missions. They are closely tied to established cognitive models, which provide the theoretical foundation for understanding and mitigating these issues in interface design. Section 6 therefore links each of these challenges to cognitive frameworks such as Situational Awareness, Multiple Resource Theory, Cognitive Load Theory, and Mental Models.

6. Cognitive Models in Interface Design

Effective interface design for underground mine rescue robots must be grounded in an understanding of how human cognition operates under high-risk, time-sensitive conditions. Cognitive models offer structured frameworks that guide designers in anticipating operators’ needs, reducing error rates, and improving performance [10,67]. In this section, we examine four cognitive models that are particularly relevant to human–robot interaction (HRI) in the context of mine rescue: Endsley’s Situational Awareness Model [11], Wickens’ Multiple Resource Theory [12], Cognitive Load Theory [15], and the concept of mental models [68], together with two complementary frameworks, the Technology Acceptance Model and Ecological Interface Design. Each model is discussed in terms of its conceptual foundation and its direct implications for interface design (see Table 3).

6.1. Endsley’s Situational Awareness Model

In underground rescue operations, operators frequently struggle to maintain SA due to poor lighting, unreliable communication, narrow camera views, and ambiguous sensor feedback. These limitations hinder the operator’s ability to track the robot’s location, understand surrounding hazards, or anticipate system behavior. To support situational awareness in these environments, interface designs should facilitate all three levels of SA [11]. For perception, multi-camera feeds, panoramic stitching, and environmental overlays can provide a more complete visual field. For comprehension, summarized visualizations such as annotated maps, sensor fusion dashboards, or hazard heatmaps help users interpret environmental conditions more accurately [69]. To enhance projection, systems may include predictive path modeling or alerts for anticipated environmental changes, helping operators make forward-looking decisions in real time [70]. Recent work emphasizes that situational awareness (SA) is the “linchpin” enabling operators to detect and interpret unfolding hazards rapidly [55]. In high-risk HRI settings (e.g., mine rescue), interfaces should therefore continually update environmental and robot status so that operators can maintain this critical SA and make timely decisions.
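Level 3 projection can be supported even by simple trend extrapolation. The sketch below projects a rising gas reading a short horizon ahead and raises a pre-alarm; the two-point linear model and the threshold are illustrative assumptions, not regulatory limits.

```python
def project_reading(history, horizon_s):
    """Extrapolate a sensor trend from (timestamp_s, value) pairs.

    A least-squares fit over a longer window would be more robust;
    a two-point slope keeps the sketch short.
    """
    (t0, v0), (t1, v1) = history[-2], history[-1]
    slope = (v1 - v0) / (t1 - t0)
    return v1 + slope * horizon_s

ch4 = [(0, 0.8), (30, 1.1), (60, 1.5)]   # % vol, rising over the last minute
projected = project_reading(ch4, horizon_s=120)
PRE_ALARM = 2.5                          # illustrative pre-alarm level only
if projected >= PRE_ALARM:
    print(f"Projected CH4 in 2 min: {projected:.1f}% vol - consider withdrawal")
```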

6.2. Wickens’ Multiple Resource Theory

In the context of mine rescue robotics, many systems concentrate their informational output in the visual modality (video feeds, maps, and sensor displays), which forces operators to continually shift and split their visual attention under stressful conditions. To address this limitation, interfaces should be designed to distribute cognitive load across different sensory pathways. For example, auditory alerts can complement visual displays to offload information from overloaded channels, while haptic feedback may provide additional spatial or directional cues. The evidence is nuanced, however: Han et al. (2021) found that simply adding audio cues to a robot interface did not significantly reduce operator workload or improve performance [71]. Designers should therefore ensure that simultaneous tasks draw from separate resource pools to minimize interference, rather than merely layering extra cues onto an already busy channel [12,71]. When modality assignments respect these resource boundaries, complementary auditory alerts alongside visual displays can offload perceptual demand and reduce operator workload during complex teleoperation, and this cross-modality approach helps sustain operator performance in extended missions and under stress [70,71].
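The resource-pool principle can be made concrete with a small modality arbiter that routes a new alert away from an already saturated channel, as sketched below. The channel capacities and spill-over order are assumptions chosen for illustration, not empirically derived values.

```python
CHANNEL_CAPACITY = {"visual": 3, "auditory": 2, "haptic": 1}  # concurrent items (illustrative)

def assign_modality(active, preferred):
    """Route an alert to its preferred channel, spilling over when that
    channel is saturated, so simultaneous demands land on separate pools."""
    order = [preferred] + [c for c in CHANNEL_CAPACITY if c != preferred]
    for channel in order:
        if active.get(channel, 0) < CHANNEL_CAPACITY[channel]:
            active[channel] = active.get(channel, 0) + 1
            return channel
    return "queue"  # all channels saturated: defer non-critical alerts

load = {"visual": 3, "auditory": 0, "haptic": 0}  # video, map, sensor panes active
print(assign_modality(load, preferred="visual"))  # -> 'auditory': the visual pool is full
```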

6.3. Cognitive Load Theory

In high-stakes scenarios like mine rescue, extraneous load is particularly problematic. Interfaces that are cluttered, unintuitive, or inconsistent impose unnecessary burdens on the operator’s working memory, impeding their ability to focus on mission-critical decisions. To mitigate extraneous cognitive load, interface layouts should be streamlined and consistent, using spatial grouping and visual hierarchy to reduce scanning effort. Complex data should be broken into manageable chunks, with critical information presented by default and nonessential data revealed only upon user request. This strategy, often called progressive disclosure, ensures that the interface remains usable even when the operator is under cognitive strain or time pressure. Additionally, simplifying navigation paths and reducing the number of input actions required for common tasks can further ease cognitive burden [72]. In short, Cognitive Load Theory stresses minimizing extraneous demands, since overload can impair performance (e.g., by increasing errors) under pressure [73]; clear, streamlined interfaces conserve the operator’s working memory for the task itself, which is essential for safety in emergency scenarios.

6.4. Mental Models

When a robot acts contrary to an operator’s expectations (for example, taking an unexpected path or failing to respond to a command), the result is a model mismatch, which leads to confusion, error, and ultimately mistrust in the system [74]. In the mine rescue domain, where many users are not trained roboticists, mismatches are particularly common due to limited feedback, inconsistent behavior, or opaque autonomous decision-making [75]. Interface designs should therefore help users develop and maintain accurate mental models of system behavior. Predictable feedback loops, consistent command-response relationships, and transparent status indicators all reinforce user confidence [68]. Systems that include a rationale for autonomous actions (for instance, justifying a detour due to detected gas levels) foster trust and understanding. Additionally, offering training environments or simulated missions allows users to interact with the system in a low-risk setting, improving familiarity and reducing mismatch in real operations [68]. Combining visual, haptic, and auditory cues helps users build a coherent model of the teleoperation task [76]. Such multi-sensory feedback makes the robot’s state and intentions more transparent, enabling better-informed decisions in high-risk HRI.

6.5. Technology Acceptance Model

Another important model that informs the adoption of mine rescue robots is the Technology Acceptance Model (TAM). Originally developed by Davis and later consolidated by Lee et al. [77], TAM emphasizes that two primary beliefs—perceived usefulness (PU) and perceived ease of use (PEOU)—drive an individual’s acceptance of new technology. PU reflects the degree to which operators believe the robot will enhance mission performance and safety, while PEOU concerns how intuitive and cognitively manageable the interface is under stressful conditions. The 2003 review established TAM as one of the most widely applied and validated models in information systems, showing that PU consistently predicts adoption more strongly than PEOU, and that PEOU often exerts its influence indirectly by shaping perceptions of usefulness [77]. In mine rescue contexts, this distinction is critical: no matter how simple an interface appears, if operators do not perceive the robot as effective in improving safety or mission success, adoption is unlikely. More recent extensions of TAM have incorporated trust, social influence, and behavioral intention as key predictors of acceptance [78,79,80]. For high-risk HRI, trust is especially important: operators must believe that the robot’s decisions, data, and performance are reliable before integrating it into life-saving operations. Thus, effective adoption of mine rescue robots depends on designing interfaces that are transparent, cognitively efficient, and demonstrably useful, ensuring that TAM’s key factors—PU, PEOU, and trust—are satisfied to foster strong behavioral intention and long-term acceptance.
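In structural terms, these TAM relationships are commonly written as a pair of regression-style equations; the schematic form below is generic (coefficients are estimated per study), with trust added in line with the extensions cited above:

```latex
\begin{aligned}
\mathrm{PU} &= \beta_{1}\,\mathrm{PEOU} + \beta_{2}\,\mathrm{Trust} + \varepsilon_{1},\\
\mathrm{BI} &= \beta_{3}\,\mathrm{PU} + \beta_{4}\,\mathrm{PEOU} + \beta_{5}\,\mathrm{Trust} + \varepsilon_{2}.
\end{aligned}
```

The first equation captures PEOU’s indirect influence through PU, while the typically larger coefficient on PU in the second equation reflects its stronger direct effect on behavioral intention (BI).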

6.6. Ecological Interface Design

Ecological Interface Design (EID) is a cognitive-engineering framework that explicitly aligns an interface with the work domain’s underlying constraints and affordances [81]. In EID, designers perform an abstraction-hierarchy analysis of the system to identify its goals, processes, and physical components, and then “embed the semantics of the task” into the display [82]. The interface thus makes invariant relationships perceptually available, so that operators can directly perceive what actions are possible rather than computing them mentally. In Gibsonian terms, the interface builds in affordances: operators see available actions and system limits at a glance, effectively enabling users to “simulate deep control by relying on surface control.” By making hidden constraints and relationships visible, EID greatly reduces cognitive load. Empirical studies confirm that EID-style displays markedly improve performance. For example, compared to a traditional speedometer, an EID-designed speed display significantly lowered drivers’ cognitive demands while improving vehicle control accuracy, cutting workload roughly in half [83]. More recent work in air traffic control has also shown that EID-based visual analytics improved novices’ understanding of complex information and enhanced conflict-resolution decision-making [84]. Applying these principles to mine-rescue robot interfaces should similarly lighten operator burden and support faster, more accurate decision-making under the stress of an underground emergency.
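As a toy illustration of making a constraint boundary perceptually available, the sketch below renders a text gauge in which methane’s approximate explosive band in air (roughly 5–15% by volume) is drawn directly on the scale, so the operator sees at a glance where the current reading sits relative to the limit. The gauge span and rendering are our own illustrative choices.

```python
def methane_gauge(ch4_pct, width=40):
    """Text gauge that draws the explosive band (~5-15% vol in air) on the
    scale itself, instead of showing a bare number the operator must
    compare against remembered limits."""
    lo, hi, full = 5.0, 15.0, 20.0  # explosive band and gauge span, % vol
    pos = min(int(ch4_pct / full * width), width - 1)
    cells = ["#" if lo <= i * full / width <= hi else "-" for i in range(width)]
    cells[pos] = "|"  # current reading marker
    status = ("EXPLOSIVE RANGE" if lo <= ch4_pct <= hi
              else "below band" if ch4_pct < lo else "above band")
    return f"[{''.join(cells)}] CH4 {ch4_pct:.1f}% vol ({status})"

print(methane_gauge(1.2))   # marker well left of the hatched band
print(methane_gauge(7.5))   # marker inside the band: danger is directly visible
```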
Table 3. Cognitive models linked to interface strategies with SAR-specific examples.
| Cognitive Model | Description | Importance in SAR | HRI Relevance |
|---|---|---|---|
| Endsley’s Situational Awareness (SA) | Defines SA as a three-tiered process: perception of environmental elements, comprehension of their meaning, and projection of their future status. | Operators need to monitor robot location, gas levels, victim status, and structural hazards, all in real time. | Improves operator awareness during interaction; essential for decision-making in dynamic environments. |
| Wickens’ Multiple Resource Theory | Suggests that human attention is divided across separate cognitive and sensory channels, such as visual-spatial, auditory-verbal, and manual-motor resources. | Reduces overload and optimizes attention during multitasking, especially when the operator must control, monitor, and communicate simultaneously, by distributing information across different resources. | Helps avoid overload by using different sensory channels for information delivery. |
| Cognitive Load Theory | Distinguishes intrinsic load (task complexity), extraneous load (poor design or presentation), and germane load (mental effort directed toward learning or schema construction). | Stress and complexity can impair memory and performance; simplicity boosts decision speed and accuracy. | Prevents performance degradation by reducing unnecessary cognitive effort in complex systems. |
| Mental Model (Norman) | The internal representations that users develop to understand and predict how a system behaves. | Users must quickly grasp system behavior, often under stress; mismatched expectations lead to errors. | Ensures intuitive interaction by aligning system behavior with user expectations. |
| Technology Acceptance Model (TAM) | Explains technology adoption via perceived usefulness (PU) and perceived ease of use (PEOU); later extensions add trust and behavioral intention. | Acceptance hinges on effectiveness, reliability, and ease of operation, especially under stress. | Promotes adoption by maximizing perceived usefulness and ease of use. |
| Ecological Interface Design (EID) | Embeds system constraints and affordances into the interface using abstraction hierarchies, enabling operators to directly perceive limits and possibilities rather than compute them mentally. | Under degraded sensing and rapid decisions, EID makes limits visible (gas, energy, comms, terrain), speeding action and improving resilience. | Accelerates decisions by making constraints and affordances directly visible [83,84]. |
Building directly on the human-factor challenges outlined in Section 5, this section applies cognitive models to explain and structure design strategies that address those challenges.
These models form the foundation of the user-centered design framework proposed in this paper. By grounding interface features in cognitive science, rescue robot systems can better support operator decision-making under high stress and uncertainty.
While this review emphasizes the well-established cognitive models above, other theoretical perspectives may also offer valuable insights. For instance, perception–action cycle models emphasize the continuous feedback loop between human operators and dynamic environments, which could enrich analyses of operator adaptation in high-stakes contexts [85]. Similarly, ecological interface design, treated in Section 6.6, structures displays around system constraints and affordances, offering a complementary pathway to reduce cognitive burden and support intuitive decision-making [86]. By incorporating these cognitive models into SAR interface design, operator performance can be improved, supporting safe mission completion.

7. Interface Design Modalities and Technologies

Search and rescue robots rely on specialized interfaces to navigate complex mine layouts, detect dangers, and transmit critical data to rescue teams [34] (see Table 4). Visual teleoperation interfaces remain fundamental; operators typically control robots via cameras and dashboards. However, rough underground conditions make pure teleoperation challenging [87]. Researchers have enhanced visual interfaces using immersive technologies and smart UIs. For example, a 2021 study merged multiple camera feeds into a single “detail-in-context” view, reducing operators’ cognitive load and improving situational awareness [88]. This integrated “detail-in-context” visual approach aligns with Cognitive Load Theory [15], as it minimizes extraneous mental effort by presenting information in a cohesive view. It also supports Endsley’s Situational Awareness model [11] by enhancing perception and comprehension, giving operators a clearer mental picture of the environment with less cognitive strain [87].
Similarly, controlled experiments with virtual reality (VR) interfaces showed significant gains in operator awareness and reduced workload in simulated multi-robot rescue missions, compared to conventional monitor displays [89]. These results are consistent with Multiple Resource Theory [12], which posits that engaging multiple senses (e.g., immersive 3D vision in VR) can reduce interference and workload. The VR interface likely helped operators form a better mental model of the 3D space [13], improving spatial understanding and decision-making without overloading their cognitive resources. In augmented reality (AR), Tabrez et al. compared descriptive (state-based) and prescriptive (action-based) AR guidance for shared situational awareness, finding that combined cues improved trust calibration, reduced workload, and enhanced decision support [90]. Walker et al. introduced a mixed-reality Cyber-Physical Control Room for robot teleoperation, reporting a 28% increase in efficiency and improved human–robot team coordination in simulated disaster response [91].
Visual feedback is still “the cornerstone” of situational understanding [92], but adding other modalities helps. Haptic feedback systems, such as force-feedback joysticks or vibrotactile cues, can supplement camera views by conveying collisions or terrain resistance to the operator. Studies report that adding haptic cues improves teleoperation success rates, precision, and even user trust [93]. By providing tactile feedback, the system distributes cognitive workload across modalities (supporting MRT’s principles [12]) and offers immediate physical cues that bolster the operator’s trust in the robot’s feedback [94]. This haptic channel serves as an additional “eyes-free” information source, aiding attentional focus and reducing reliance on vision alone, a benefit under high-stress conditions. Such multimodal interfaces (combining video, audio, and haptics) tend to boost operator confidence and performance in both lab-based trials and field tests.
Beyond traditional console controls, researchers have explored more natural interaction methods. Voice-controlled interfaces have been proposed to allow operators to guide robots via spoken commands in mine rescue scenarios. One review of coal mine rescue robots noted the inclusion of voice recognition systems (alongside obstacle sensors and gas detectors) to facilitate underground rescue tasks [34].
In principle, voice commands could free operators’ hands and lower their training burden, though reliability in noisy mine environments remains an open concern (e.g., ensuring accurate speech recognition amid machinery noise). Finally, autonomous and semi-autonomous control interfaces are increasingly important. Many modern rescue robots offer multiple modes of control—from direct teleoperation to waypoint-based autonomy [87]. Overall, the literature suggests combining intuitive interfaces with autonomous assistance yields the best results. Visual teleoperation dashboards (especially enhanced with VR or smart overlays), tactile feedback devices, and even voice/gesture controls each contribute to better situational awareness and usability. Field and simulation studies consistently report that rescue personnel perform best—with fewer errors and lower workload—when interfaces are multimodal and support some autonomy, rather than relying on manual camera-only teleoperation. Dennler et al. developed the RoSiD toolkit, enabling operators to design multimodal robot signals, illustrating the value of user-tailored communication channels in reducing errors and enhancing collaboration [95]. Ranganeni et al. presented Access Teleop Kit, an open-source toolkit that allows drag-and-drop customization of teleoperation dashboards, demonstrating that customizable UIs improve accessibility and adaptability across diverse user needs [96]. In summary, multimodal interfaces and autonomy directly address the human factors highlighted in earlier sections—they prevent overloading any single sensory channel, lower mental workload by sharing tasks with the automation [16], and help maintain operator situational awareness even in chaotic environments [11]. By considering the human operator (actor) in the loop and applying cognitive models (e.g., ensuring information presentation fits within working memory limits [15] and that the interface supports attentional control [78]), these designs improve user performance and safety.

8. Design Recommendations for Underground HRI Interfaces

In this section, we synthesize insights to explicitly answer RQ1 and RQ2, connecting identified human factors with interface design implications. For semi-autonomous robots deployed in underground mine rescue operations, effective human–robot interaction is essential because human operators remain continuously in the loop. As demonstrated throughout this discussion, the success of such systems depends not only on the robot’s mechanical capabilities, but also on the quality of interaction facilitated by the interface. In the extreme conditions of subterranean emergencies, operators must maintain situational awareness, interpret live sensor data, and make high-stakes decisions quickly, often under severe physical and cognitive stress. Core human factors such as mental workload [9,12], attentional control [97], trust calibration [98,99,100], and working memory limitations directly affect performance in these conditions. Ignoring these human factors risks overwhelming the operator, causing delays, or inducing fatal mistakes. Human-centered design therefore cannot be an afterthought; it must be treated as a core design principle from the outset of system development.
The cognitive models used in this study informed how we structured the interface, prioritized information, and implemented control logic. Rather than treat these as abstract theories, we translated them into concrete design strategies, such as layered, hierarchical displays for different cognitive stages, separating information by modality, simplifying alert mechanisms, and providing intuitive feedback for user actions [11,15,101,102]. Drawing on these models and supported by recent literature [103,104,105,106], we propose a set of human-centered design recommendations (summarized below) for underground rescue robot interfaces. These recommendations are grounded not only in theoretical principles but also in empirical HRI research and field-tested usability practices. The overarching goal is to empower the human operator: to ensure that despite the technological complexity of rescue robots, the interface remains readable, actionable, and trustworthy under pressure. By combining insights from psychology and human-factors engineering with advances in human-centered AI (such as adaptive interfaces and explainable autonomy) and continual feedback from real-world users, the interface effectively becomes a teammate rather than a hindrance to the operator. As rescue robotics continues to evolve, we strongly advise that these design guidelines be validated and refined through realistic field exercises, first-responder feedback sessions, and post-mission usability debriefs. Ongoing interdisciplinary collaboration will be essential in this endeavor, bringing together engineers, computer scientists, psychologists, ergonomics specialists, human–machine interaction experts, and the mine rescue personnel who ultimately must trust and use these life-saving robotic systems.
Research shows that poor interface design can elevate operator stress and cognitive fatigue, undermining performance and user acceptance [107,108]. For instance, Murphy et al. emphasize that attention to human factors and usability is as critical as advances in autonomy [109]. In mine rescue missions, interface design must reflect how humans perceive, think, and behave under stress, guided by cognitive models and ergonomic principles [12,14]. Interfaces must also address mining-specific challenges, such as signal disruptions and limited visibility underground. Furthermore, underground mine rescue differs significantly from urban disaster response, demanding unique HRI solutions and training protocols [23,110]. Navigation capabilities and their representations in the interface are particularly important, as they directly affect situational awareness and team coordination [111]. To meet these challenges, designers should adopt user-centered processes by involving experienced mine rescuers early in development [112,113] and draw on universal design principles and virtual interface techniques to improve real-world usability [114]. By integrating theory, user input, and domain-specific needs, human-centered interface design can significantly enhance robotic system performance in mine emergencies.
The following recommendations are derived directly from the cognitive challenges and interface evaluations discussed in previous sections. We applied cognitive models to interpret how operators process information, handle stress, and interact with autonomous systems in underground environments. These models were then translated into design strategies grounded in empirical HRI findings and usability principles. Each recommendation addresses a specific gap identified in prior sections— whether it relates to perceptual overload, trust misalignment, high workload, or latency constraints—ensuring that the proposed solutions are both theoretically informed and practically relevant.
Effective interface design in mine emergencies must adhere to several key human-factors principles to ensure usability and mission success (see Figure 3). These include:
  • Layered “SA-first” Displays with Ecological Structure:
    Situational awareness is best supported by a central “mission pane” that fuses live video, map overlays, and hazard indicators, surrounded by compact widgets for system status (battery, communication, tether). By applying ecological interface design principles—such as making constraint boundaries visually legible—operators can perceive safe vs. unsafe states without exhaustive scanning. This approach reduces extraneous cognitive load and supports higher-level projection [83].
  • Multimodal Feedback to Reduce Visual Bottlenecks:
    Relying solely on visual channels in underground, low-visibility settings risks overloading the operator. Incorporating audio cues (e.g., rising tones for gas trends) and haptic signals (e.g., joystick vibration near obstacles) helps distribute key information across modalities. In shared-autonomy teleoperation, haptic feedback has been shown to improve both task performance and user satisfaction, enhancing situational awareness [115].
  • Camera Strategy for Mitigating the “Soda-Straw” Effect:
    To avoid narrow, tunnel-vision views, interfaces should provide multiple simultaneous camera/LiDAR feeds (forward, rear, side) with stitched or panoramic views and optional immersive VR/AR modes for planning. Work in immersive teleoperation shows improved spatial awareness and control under challenging environments [116].
  • Latency-Aware Control with Predictive Aids:
    Given underground communication constraints, latency can degrade control and raise operator stress. Adding predictive displays, such as a “ghost” robot trajectory preview, enables the operator to anticipate the robot’s motion despite delays; a minimal sketch of such a ghost-pose preview follows this list. A low-cost predictive display improved operator performance by ~20% under latency in teleoperation experiments [117]. More advanced approaches combine XR-based intention-overlay displays and shared control to maintain operability under variable delays [118].
  • Shared Autonomy with Transparent Handover:
    Providing operators with autonomy modes (e.g., obstacle-avoid, path-following, safe-stop) must be accompanied by clear, real-time explanations of the robot’s decisions to maintain trust and situational alignment. Learning-based shared autonomy methods dynamically adjust assistance based on operator intent and constraints [119], and mobile-manipulator reviews emphasize variable autonomy for reducing workload in hazardous tasks [120].
  • Context-Sensitive, Workload-Aware Interfaces:
    Interfaces should default to minimal, essential displays and dynamically surface detailed panels or alarms when critical events occur (e.g., gas rise, fault). This adaptive design prevents information overload and helps operators focus. It aligns with calls for renewed user-centric teleoperation design to manage cognitive load [121].
  • Sensor Fusion for Hazard Projection:
    Integrating gas sensors, thermal imagery, LiDAR, and inertial measurements into a unified dashboard with predictive cues enables operators to anticipate hazardous conditions. This aligns interface behavior with operator mental models and supports proactive planning. In teleoperation survey work, fused perception-control frameworks are recommended for complex environments [122].
  • Audio & Communication as Core Interface Features:
    In subterranean contexts, where video may degrade or fail, robust audio and voice channels become primary information conduits. Interfaces should maintain duplex audio with noise suppression, map distinct tones to hazard classes, and provide short spoken alerts (“Low O2—Stop”). Teleoperation latency studies confirm that audio and haptic feedback mitigate cognitive load and stabilize performance under delay [123].
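As referenced in the latency-aware control item above, the following sketch dead-reckons a “ghost” pose from the age of the last telemetry packet, assuming constant velocity and turn rate over the delay. The motion model, names, and numbers are illustrative assumptions, not a cited system’s implementation.

```python
import math
import time

def ghost_pose(last_pose, velocity, latency_s):
    """Dead-reckoned 'ghost' preview: where the robot has likely moved during
    the telemetry delay, under a constant velocity / turn-rate assumption
    that is adequate for sub-second to few-second latencies."""
    x, y, heading = last_pose  # m, m, rad
    v, omega = velocity        # m/s, rad/s
    mid_heading = heading + omega * latency_s / 2.0  # midpoint integration
    return (x + v * latency_s * math.cos(mid_heading),
            y + v * latency_s * math.sin(mid_heading),
            heading + omega * latency_s)

# Telemetry received 0.8 s ago; render the ghost at the predicted pose so the
# operator steers against the predicted state rather than the stale one.
received_at = time.time() - 0.8
print(ghost_pose((12.0, 3.5, 0.0), velocity=(0.6, 0.1),
                 latency_s=time.time() - received_at))
```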
These principles, grounded in human factors research, guide the creation of interfaces that improve the usability and effectiveness of mine emergency robots, contributing to better safety outcomes and more efficient emergency responses.

9. Illustration of the Current State of HRI in Mine Rescue and Areas for Improvement

To further illustrate the current state of HRI in mine rescue and identify areas for improvement, Table 5 presents a comparative analysis of several existing SAR robot interfaces. Each system is evaluated in terms of its interface design, the challenges encountered during deployment, and its human-factor implications (with reference to cognitive models). This comparison highlights common gaps, such as interfaces that overload the operator’s visual channel or lack support for situation projection, which inform the design guidelines above (see Table 5). To ensure coverage of the state of the art, Table 5 integrates findings from the broader literature pool, providing a fuller picture of SAR robot design and its alignment (or misalignment) with human-centered interface principles. Across these varied systems, several recurring interface gaps emerge:
  • Overreliance on a single modality: Many robots place nearly all feedback on visual displays, leading to operator overload under stress and violating multiple-resource design principles (MRT) [124,125].
  • Limited support for projection: Few interfaces offer trend-based cues or predictive overlays to help the operator anticipate future states, reducing the ability to project situations per Endsley’s SA model [126].
  • Lack of adaptivity: Interfaces generally do not adjust to differences in operator cognitive abilities or workload, despite well-documented variability among users (e.g., due to training or fatigue) [127,128].
  • Reactive rather than proactive feedback: Alerts and information are often only provided after issues become obvious, missing opportunities to preempt problems or guide decision-making [129,130,131,132].
These findings underscore the need to move beyond hardware-centric control systems toward cognitively aligned, user-centered interfaces. By grounding interface design in well-established human-factors models, such as Endsley’s situational awareness framework, Wickens’ multiple resource theory, and cognitive load theory, developers can reduce cognitive strain on the operator, improve decision accuracy, and ultimately enhance mission outcomes for underground mine rescue. The table below presents the interface design challenges and human-factor implications relevant to improving operator performance in SAR missions.
Table 5. Comparative Analysis of SAR Robot Interfaces and Human Factor Gaps.
| Name | Interface Details | Challenges | Human Factor Implications |
|---|---|---|---|
| RATLER [8,25] | Console-based teleoperation with stereo/video cameras via RF/microwave | Rough terrain navigation, latency, communication reliability | Interface delays and poor feedback impair perception and projection (SA). Recommend enhancing real-time feedback and predictive cues to support SA and reduce visual/manual load (MRT). Delayed/low-fidelity feedback depresses PU; simplify control/visuals to raise PEOU; add status rationale and confidence cues to strengthen trust. |
| Numbat [8] | Fiber-optic tethered GUI + joystick station with multiple camera feeds | High cognitive workload due to poor lighting and terrain; interface complexity | Overload illustrates high intrinsic cognitive load (CLT). Simplified visual channels and adaptive feedback are needed. Mapping views to user mental models improves response efficiency. Interface complexity under poor lighting lowers PEOU; show mission benefit (gas/map fusion) to raise PU; transparent fault reporting builds trust. |
| ANDROS Wolverine (V2) [7] | Rugged laptop GUI, joystick/gamepad, touchscreen diagnostics; multiple cameras | Operator overload during tracking and control in mission-critical scenarios | Interface must integrate state + control feedback to support comprehension (SA). Failure highlights poor multitasking design violating MRT; automation could offload user demand (CLT). Overload and split attention reduce PEOU; integrate tracking + control to raise PU; clear autonomy/override cues calibrate trust. |
| Cave Crawler [133] | PC GUI with SLAM-enabled mapping, laser + sensors; wired/wireless control | Low bandwidth, underground delay, darkness | Communication loss reduces perception and projection (SA). Interfaces should use buffered visuals and simplified overlays to match user mental models under degraded feedback. Bandwidth dropouts undermine trust and PU; buffered video + progressive disclosure improve PEOU; signal-quality badges restore trust. |
| Souryu V [134] | Handheld controller for serial crawler, video feedback | Narrow, cluttered environments; limited visibility | Tight spaces increase visual-spatial complexity (CLT). Ergonomic feedback and spatial mapping must reinforce mental models and reduce split attention (MRT). Tight-space teleoperation mapping raises effort (PEOU); add 3D/path hints to raise PU; obstacle-detection reliability indicators support trust. |
| Inuktun VGTV [110] | Tethered video monitor + gamepad, video feeds via color/BW cameras | Navigation in confined spaces, cable drag | Physical drag and split displays raise extraneous load (CLT). Feedback delay undermines comprehension (SA). Control-response alignment must follow mental-model predictability. Cable drag + split displays hurt PEOU; unified view + tether-tension widgets improve PU; link-quality/tension health boosts trust. |
| Western Australia Water Co. Robot [34] | Fiber-optic tethered inspection robot with cameras and gas sensors | Tether navigation in debris-filled passages; heavy design limited retrieval | Tether strain adds physical and cognitive workload (CLT). Intuitive layout and tether-tension indicators improve mental-model stability and reduce operator frustration. Tether handling burden lowers PEOU; retrieval aids and route previews raise PU; stable comms + clear failure modes increase trust. |
| Subterranean Robot [135] | Semi-autonomous amphibious robot with GUI/joystick control | Water-filled shafts, zero visibility, sensor dropout, slippage | Sensor loss breaks the perception chain (SA). Interface must predict/react to sensor gaps while maintaining operator trust via consistent logic (mental models, CLT). Sensor loss breaks trust and harms PU; graceful degradation + confidence bands aid PEOU; data provenance restores trust. |
| Leader [34] | Remotely operated robot with multiple cameras and gas sensors | Limited communication, unstructured post-explosion environment | Dynamic terrain affects projection and comprehension (SA). Modular views and signal-confidence indicators help the user maintain situational awareness and a stable mental control loop. Comms limits reduce perceived benefit (PU); fused camera/gas overlays raise PU and PEOU; latency/signal badges calibrate trust. |
| Gemini Scout [23] | Rugged PC + joystick; onboard gas sensor, thermal + pan-tilt cameras | Hazardous mines, poor visibility | High visual demand stresses the visual resource channel (MRT). Sensor fusion and heat-map overlays support comprehension (SA) and reduce cognitive switching. EID can further reduce workload by visually encoding hazard thresholds (e.g., gas explosion limits, tether strain) to help operators instantly perceive safe vs. unsafe states. High visual demand stresses PEOU; heatmaps + waypoint assist raise PU; route-choice rationale (why this path) strengthens trust. |
| MINBOT II [136] | Remote console + GUI, skid-steer teleoperation with sensors | Rugged terrain, visibility issues | Predictive control aids reduce intrinsic cognitive load (CLT). Aligning visual layout with terrain types supports projection (SA) and operator mental modeling. Rugged-terrain control cost lowers PEOU; predictive assists and terrain-aware UI raise PU; stability/fail-safe feedback builds trust. |
| CUMT-V [8] | GUI console with joystick; fiber + wireless relay; semi-autonomous drive | Operator burden in terrain switching; signal reliability | Frequent mode-switching disrupts situational continuity (SA). Adaptive displays and automation handover can offload demand (CLT) and reduce confusion across visual/manual channels (MRT). Frequent mode switches harm PEOU; consistent handover UX and unified controls raise PU; explicit mode/authority cues improve trust. |
| KRZ I | Teleoperated system with IR camera, posture and obstacle alarms | Navigation under zero illumination, explosion-proof constraints | Poor lighting reduces perception (SA). IR must be presented with integrated overlays. Alarm overload risks channel interference (MRT), so prioritization and pre-attentive cues are key. Alarm overload hurts PEOU; prioritized, grouped alerts raise PU; IR calibration/health indicators support trust. |
Mobile Inspection Platform [137]Remote GUI with video and gas sensor logging interfaceMapping gas in explosive zones; communication reliabilityMission-critical data must support comprehension under time stress (SA). Clear alert hierarchy + sensor syncing align with mental models and reduce user error. Unsynced gas/video lowers PU; timestamped sync + actionable thresholds raise PEOU/PU; sensor health bars bolster trust.
Tele Rescuer [138]VR/AR immersive GUI with gamepad + 3D mapping and gas sensorsVR latency, data overload, remote comms limitationsOverload reflects split-channel conflict (MRT). Excess sensory input burdens working memory (CLT). Interface should support selective attention and layered information access. VR latency/data overload reduces PEOU; limit channels + prediction panels raise PU; stable FPS/lag indicators calibrate trust.
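As a concrete illustration of the EID recommendation noted in the Gemini Scout row above, the sketch below maps a methane reading directly onto a display band so that safe versus unsafe states can be perceived pre-attentively. Methane’s explosive range of roughly 5–15% by volume is well established; the 1% caution band and the color assignments are illustrative assumptions rather than regulatory values.

```python
# Hedged sketch: EID-style encoding of gas hazard thresholds into display bands.
def ch4_display_state(ch4_pct: float) -> tuple[str, str]:
    """Map a methane reading (% by volume) to a display band and color."""
    if 5.0 <= ch4_pct <= 15.0:
        return "EXPLOSIVE RANGE", "red"  # within methane's explosive limits
    if ch4_pct > 15.0:
        return "OVER-RICH", "orange"     # too rich to ignite, but dilution hazard
    if ch4_pct >= 1.0:
        return "CAUTION", "yellow"       # illustrative pre-alarm band (assumed)
    return "SAFE", "green"


print(ch4_display_state(0.4))  # ('SAFE', 'green')
print(ch4_display_state(6.2))  # ('EXPLOSIVE RANGE', 'red')
```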

10. Conclusions

Underground mine search and rescue (SAR) operations present extreme conditions that challenge human cognition, decision-making, and situational awareness. While robotics plays a growing role in these missions, the effectiveness of SAR robots remains highly dependent on how intuitively and reliably human operators can interact with their interfaces under stress. This paper highlights the critical human–robot interaction (HRI) challenges faced in SAR environments, including cognitive overload, fragmented perception, attentional breakdowns, and inconsistent trust. Through a systematic analysis of existing SAR robot interfaces and the integration of cognitive science, we identified how these challenges can be addressed through thoughtful interface design. Grounded in four foundational cognitive models (Endsley’s Situational Awareness Model, Wickens’ Multiple Resource Theory, Cognitive Load Theory, and Mental Models), this work proposes a layered framework for designing operator-centric interfaces. Design strategies such as layered feedback, multimodal alerts, adaptive layouts, and familiar metaphors are shown to support key human factors and improve operational performance. This framework not only bridges theory and practice but also offers practical guidance for interface developers and HRI researchers aiming to enhance transparency, reduce workload, and increase safety in mine rescue robotics. Together, these findings directly answer RQ1 and RQ2, highlighting both the critical human factors in SAR interface design and the strategies through which these challenges can be mitigated. The main contributions of this article include:
  • Identified HRI limitations in current SAR robot systems through cognitive and operational analysis.
  • Developed a cognitive model–based interface design framework tailored for underground emergency response.
  • Mapped specific design strategies to human factor needs with examples from real-world SAR robots.
  • Supported a shift from technology-centered to human-centered design thinking in mining robotics.
Future work should focus on designing and prototyping an interface that embodies the proposed framework and testing its performance through simulated SAR missions. This will include iterative user evaluation to validate how well the design supports cognitive performance, situational awareness, and trust under stress.
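As one hedged illustration of the adaptive-layout strategy proposed above, the sketch below selects which interface panels to display from a normalized workload estimate. The panel names, thresholds, and the workload signal itself are hypothetical placeholders that a prototype would need to define and validate empirically.

```python
# Hedged sketch: workload-adaptive panel selection (all names are assumptions).
def select_panels(workload: float) -> list[str]:
    """Return the panel set to display for a normalized workload in [0, 1]."""
    critical = ["video_feed", "gas_levels", "emergency_stop"]
    secondary = ["map_overlay", "battery_status", "comms_quality"]
    tertiary = ["mission_log", "diagnostics"]
    if workload > 0.8:
        return critical                     # high load: critical information only
    if workload > 0.5:
        return critical + secondary         # moderate load: hide tertiary detail
    return critical + secondary + tertiary  # low load: full layout
```

In practice such a policy would also need hysteresis or rate limiting so that panels do not flicker in and out as the workload estimate fluctuates around a threshold.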

11. Limitations and Future Work

This study has some limitations that should be acknowledged. The analysis relied primarily on peer-reviewed literature and technical reports, which may not fully capture undocumented field practices or tacit knowledge held by mine rescue personnel. The proposed framework and design guidelines are therefore high-level and conceptual, and practical implementation will require context-specific adaptations and trade-offs not addressed here. Future research should focus on prototyping interfaces that embody the proposed framework and subjecting them to realistic testing. Iterative user evaluations, including simulation-based SAR scenarios and field exercises, will be essential to validate how well the framework supports cognitive performance, situational awareness, workload management, and trust under operational stress. Such empirical work will not only refine the proposed guidelines but also bridge the gap between conceptual models and deployable mine rescue systems.

Author Contributions

Writing—original draft preparation, R.B.; UAV-related contributions, K.M.J.; supervision, P.R.; project PI, V.A., H.K., S.S., M.H. and P.R. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded by the National Institute for Occupational Safety and Health (NIOSH), grant number U60OH012351.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to acknowledge the National Institute for Occupational Safety and Health (NIOSH) for supporting this research. The authors used ChatGPT (GPT-5, OpenAI) to refine language clarity. We thoroughly reviewed the final text and take full responsibility for its content and accuracy.

Conflicts of Interest

The authors declare no conflicts of interest. The funding sponsor (NIOSH) had no role in the design of the study; in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CLT | Cognitive Load Theory
EID | Ecological Interface Design
HRC | Human–Robot Collaboration
HRCp | Human–Robot Cooperation
HRCx | Human–Robot Coexistence
HRI | Human–Robot Interaction
MM | Mental Models
MRT | Multiple Resource Theory
MSHA | Mine Safety and Health Administration
NIOSH | National Institute for Occupational Safety and Health
PEOU | Perceived Ease of Use (TAM)
PU | Perceived Usefulness (TAM)
SA | Situational Awareness
SAR | Search and Rescue
TAM | Technology Acceptance Model
UAV | Unmanned Aerial Vehicle
UGV | Unmanned Ground Vehicle

References

  1. Lee, W.; Alkouz, B.; Shahzaad, B.; Bouguettaya, A. Package Delivery Using Autonomous Drones in Skyways. In Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2021 ACM International Symposium on Wearable Computers, Virtual, 21–26 September 2021; pp. 48–50.
  2. Asadi, E.; Li, B.; Chen, I.M. Pictobot: A Cooperative Painting Robot for Interior Finishing of Industrial Developments. IEEE Robot. Autom. Mag. 2018, 25, 82–94. [Google Scholar] [CrossRef]
  3. Belanger, F.; Resor, J.; Crossler, R.E.; Finch, T.A.; Allen, K.R. Smart Home Speakers and Family Information Disclosure Decisions. In Proceedings of the AMCIS 2021 Proceedings, Montreal, QC, Canada, 9–13 August 2021. [Google Scholar]
  4. Dahn, N.; Fuchs, S.; Gross, H.-M. Situation Awareness for Autonomous Agents. In Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication, Nanjing, China, 27–31 August 2018. [Google Scholar]
  5. Davids, A. Urban search and rescue robots: From tragedy to technology. IEEE Intell. Syst. 2002, 17, 81–83. [Google Scholar] [CrossRef]
  6. Bin Motaleb, A.K.; Hoque, M.B.; Hoque, M.A. Bomb disposal robot. In Proceedings of the 2016 International Conference on Innovations in Science, Engineering and Technology (ICISET), Dhaka, Bangladesh, 28–29 October 2016; pp. 1–5. [Google Scholar]
  7. Murphy, R.; Kravitz, J.; Stover, S.; Shoureshi, R. Mobile robots in mine rescue and recovery. IEEE Robot. Autom. Mag. 2009, 16, 91–103. [Google Scholar] [CrossRef]
  8. Zhai, G.; Zhang, W.; Hu, W.; Ji, Z. Coal Mine Rescue Robots Based on Binocular Vision: A Review of the State of the Art. IEEE Access 2020, 8, 130561–130575. [Google Scholar] [CrossRef]
  9. Caiazzo, C.; Savković, M.; Pusica, M.; Milojevic, D.; Leva, M.; Djapan, M. Development of a Neuroergonomic Assessment for the Evaluation of Mental Workload in an Industrial Human-Robot Interaction Assembly Task: A Comparative Case Study. Machines 2023, 11, 995. [Google Scholar] [CrossRef]
  10. Senaratne, H.; Tian, L.; Sikka, P.; Williams, J.; Howard, G.; Kulic, D.; Paris, C. A Framework for Dynamic Situational Awareness in Human Robot Teams: An Interview Study. ACM Trans. Hum.-Robot. Interact. 2025, 15, 5. [Google Scholar] [CrossRef]
  11. Endsley, M. Toward a Theory of Situation Awareness in Dynamic Systems. Human. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
  12. Wickens, C.D. Multiple resources and mental workload. Hum. Factors 2008, 50, 449–455. [Google Scholar] [CrossRef]
  13. Norman, D.A. The Design of Everyday Things; Basic Books, Inc.: New York, NY, USA, 2002. [Google Scholar]
  14. Sweller, J. Cognitive Load During Problem Solving: Effects on Learning. Cogn. Sci. 1988, 12, 257–285. [Google Scholar] [CrossRef]
  15. Sweller, J.; Ayres, P.; Kalyuga, S. Cognitive Load Theory. In Psychology of Learning and Motivation; Academic Press: Cambridge, MA, USA, 2011. [Google Scholar] [CrossRef]
  16. Ren, H.; Zhao, Y.L.; Xiao, W.; Hu, Z.Q. A review of UAV monitoring in mining areas: Current status and future perspectives. Int. J. Coal Sci. Technol. 2019, 6, 320–333. [Google Scholar] [CrossRef]
  17. Xiao, W.; Chen, J.; Da, H.; Ren, H.; Zhang, J.; Zhang, L. Inversion and Analysis of Maize Biomass in Coal Mining Subsidence Area Based on UAV Images. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach. 2018, 49, 169–180. [Google Scholar] [CrossRef]
  18. Jackisch, R.; Lorenz, S.; Zimmermann, R.; Möckel, R.; Gloaguen, R. Drone-Borne Hyperspectral Monitoring of Acid Mine Drainage: An Example from the Sokolov Lignite District. Remote Sens. 2018, 10, 385. [Google Scholar] [CrossRef]
  19. Martinez Rocamora, B.; Lima, R.R.; Samarakoon, K.; Rathjen, J.; Gross, J.N.; Pereira, G.A.S. Oxpecker: A Tethered UAV for Inspection of Stone-Mine Pillars. Drones 2023, 7, 73. [Google Scholar] [CrossRef]
  20. Yucel, M.A.; Turan, R.Y. Areal Change Detection and 3D Modeling of Mine Lakes Using High-Resolution Unmanned Aerial Vehicle Images. Arab. J. Sci. Eng. 2016, 41, 4867–4878. [Google Scholar] [CrossRef]
  21. Pathak, S. UAV based Topographical mapping and accuracy assessment of Orthophoto using GCP. Mersin Photogramm. J. 2023, 6, 1–8. [Google Scholar] [CrossRef]
  22. Szrek, J.; Zimroz, R.; Wodecki, J.; Michalak, A.; Góralczyk, M.; Worsa-Kozak, M. Application of the Infrared Thermography and Unmanned Ground Vehicle for Rescue Action Support in Underground Mine-The AMICOS Project. Remote Sens. 2021, 13, 69. [Google Scholar] [CrossRef]
  23. Reddy, A.H.; Kalyan, B.; Murthy, C.S.N. Mine Rescue Robot System–A Review. Procedia Earth Planet. Sci. 2015, 11, 457–462. [Google Scholar] [CrossRef]
  24. Sandia National Laboratories. Gemini-Scout Mine Rescue Vehicle. Available online: https://www.sandia.gov/research/gemini-scout-mine-rescue-vehicle/ (accessed on 15 October 2025).
  25. Krotkov, E.; Bares, J.; Katragadda, L.; Simmons, R.; Whittaker, R. Lunar rover technology demonstrations with Dante and Ratler. In Proceedings of the JPL, Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space 1994, Pasadena, CA, USA, 18–20 October 1994. [Google Scholar]
  26. Purvis, J.W.; Klarer, P.R. RATLER: Robotic All-Terrain Lunar Exploration Rover. In Proceedings of the The Sixth Annual Workshop on Space Operations Applications and Research (SOAR 1992), Houston, TX, USA, 4–6 August 1992. [Google Scholar]
  27. Klarer, P. Space Robotics Programs at Sandia National Laboratories; Sandia National Labs.: Albuquerque, NM, USA, 1993. [Google Scholar]
  28. Ralston, J.C.; Hainsworth, D.W.; Reid, D.C.; Anderson, D.L.; McPhee, R.J. Recent advances in remote coal mining machine sensing, guidance, and teleoperation. Robotica 2001, 19, 513–526. [Google Scholar] [CrossRef]
  29. Hainsworth, D.W. Teleoperation user interfaces for mining robotics. Auton. Robot. 2001, 11, 19–28. [Google Scholar] [CrossRef]
  30. Ralston, J.; Hainsworth, D. The Numbat: A Remotely Controlled Mine Emergency Response Vehicle; Springer: London, UK, 1998; pp. 53–59. [Google Scholar] [CrossRef]
  31. Li, Y.; Li, M.; Zhu, H.; Hu, E.; Tang, C.; Li, P.; You, S. Development and applications of rescue robots for explosion accidents in coal mines. J. Field Robot. 2020, 37, 466–489. [Google Scholar] [CrossRef]
  32. Green, J. Mine Rescue Robots Requirements. In Proceedings of the 6th Robotics and Mechatronics Conference (RobMech), Durban, South Africa, 30–31 October 2013. [Google Scholar]
  33. Chung, T.; Orekhov, V.; Maio, A. Into the Robotic Depths: Analysis and Insights from the DARPA Subterranean Challenge. Annu. Rev. Control Robot. Auton. Syst. 2022, 6, 477–502. [Google Scholar] [CrossRef]
  34. Wang, Y.; Tian, P.; Zhou, Y.; Chen, Q. The Encountered Problems and Solutions in the Development of Coal Mine Rescue Robot. J. Robot. 2018, 2018, 8471503. [Google Scholar] [CrossRef]
  35. Li, M.; Zhu, H.; Tang, C.; You, S.; Li, Y. Coal Mine Rescue Robots: Development, Applications and Lessons Learned. In Proceedings of the 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021), Changsha, China, 24–26 September 2022; pp. 2127–2138. [Google Scholar] [CrossRef]
  36. Kadous, M.W.; Sheh, R.K.-M.; Sammut, C. Effective user interface design for rescue robotics. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; pp. 250–257. [Google Scholar]
  37. Bakzadeh, R.; Androulakis, V.; Khaniani, H.; Shao, S.; Hassanalian, M.; Roghanchi, P. Robots in Mine Search and Rescue Operations: A Review of Platforms and Design Requirements. Preprints 2023. [Google Scholar] [CrossRef]
  38. Hidalgo, M.; Reinerman-Jones, L.; Barber, D. Spatial Ability in Military Human-Robot Interaction: A State-of-the-Art Assessment. In International Conference on Human-Computer Interaction; Springer: Cham, Switzerland, 2019. [Google Scholar]
  39. Caccavale, R.; Finzi, A. Flexible Task Execution and Attentional Regulations in Human-Robot Interaction. IEEE Trans. Cogn. Dev. Syst. 2017, 9, 68–79. [Google Scholar] [CrossRef]
  40. Goodrich, M.A.; Schultz, A.C. Human-Robot Interaction: A Survey. Found. Trends® Hum.-Comput. Interact. 2007, 1, 203–275. [Google Scholar] [CrossRef]
  41. Perzanowski, D.; Schultz, A.C.; Adams, W.; Marsh, E.; Bugajska, M. Building a multimodal human-robot interface. IEEE Intell. Syst. 2001, 16, 16–21. [Google Scholar] [CrossRef]
  42. Haddadin, S.; Haddadin, S.; Khoury, A.; Rokahr, T.; Parusel, S.; Burgkart, R.; Bicchi, A.; Albu-Schäffer, A. On making robots understand safety: Embedding injury knowledge into control. Int. J. Robot. Res. 2012, 31, 1578–1602. [Google Scholar] [CrossRef]
  43. Haddadin, S.; Albu-Schäffer, A.; Hirzinger, G. Requirements for Safe Robots: Measurements, Analysis and New Insights. Int. J. Robot. Res. 2009, 28, 1507–1527. [Google Scholar] [CrossRef]
  44. Sanchez Restrepo, S.; Raiola, G.; Chevalier, P.; Lamy, X.; Sidobre, D. Iterative virtual guides programming for human-robot comanipulation. In Proceedings of the 2017 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), Munich, Germany, 3–7 July 2017; pp. 219–226. [Google Scholar]
  45. Andrisano, A.O.; Leali, F.; Pellicciari, M.; Pini, F.; Vergnano, A. Hybrid Reconfigurable System design and optimization through virtual prototyping and digital manufacturing tools. Int. J. Interact. Des. Manuf. 2012, 6, 17–27. [Google Scholar] [CrossRef]
  46. Schiavi, R.; Bicchi, A.; Flacco, F. Integration of Active and Passive Compliance Control for Safe Human-Robot Coexistence. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009. [Google Scholar]
  47. Shen, Y.; Reinhart, G.; Tseng, M.M. A Design Approach for Incorporating Task Coordination for Human-Robot-Coexistence within Assembly Systems. In Proceedings of the Annual IEEE Systems Conference (SysCon) Proceedings, Vancouver, BC, Canada, 13–16 April 2015; IEEE: New York, NY, USA, 2012. [Google Scholar]
  48. Gaz, C.; Magrini, E.; De Luca, A. A model-based residual approach for human-robot collaboration during manual polishing operations. Mechatronics 2018, 55, 234–247. [Google Scholar] [CrossRef]
  49. Luca, A.D.; Flacco, F. Integrated control for pHRI: Collision avoidance, detection, reaction and collaboration. In Proceedings of the International Conference on Biomedical Robotics and Biomechatronics, Rome, Italy, 24–27 June 2012. [Google Scholar]
  50. Mörtl, A.; Lawitzky, M.; Kucukyilmaz, A.; Sezgin, M.; Basdogan, C.; Hirche, S. The role of roles: Physical cooperation between humans and robots. Int. J. Robot. Res. 2012, 31, 1656–1674. [Google Scholar] [CrossRef]
  51. Krüger, J.; Lien, T.K.; Verl, A. Cooperation of human and machines in assembly lines. CIRP Ann. 2009, 58, 628–646. [Google Scholar] [CrossRef]
  52. Chandrasekaran, B.; Conrad, J.M. Human-Robot Collaboration: A Survey. In Proceedings of the IEEE SoutheastCon 2015, Fort Lauderdale, FL, USA, 9–12 April 2015. [Google Scholar]
  53. Flacco, F.; Kroeger, T.; De Luca, A.; Khatib, O. A Depth Space Approach for Evaluating Distance to Objects. J. Intell. Robot. Syst. 2015, 80, 7–22. [Google Scholar] [CrossRef]
  54. Mainprice, J.; Sisbot, E.A.; Siméon, T.; Alami, R. Planning Safe and Legible Hand-over Motions for Human-Robot Interaction. In Proceedings of the IARP/IEEE-RAS/EURON Workshop on Technical Challenges for Dependable Robots in Human Environments, Toulouse, France, January 2010. [Google Scholar]
  55. Pak, H.; Mostafavi, A. Situational Awareness as the Imperative Capability for Disaster Resilience in the Era of Complex Hazards and Artificial Intelligence. arXiv 2025, arXiv:2508.16669. [Google Scholar] [CrossRef]
  56. Szafir, D.; Szafir, D.A. Connecting Human-Robot Interaction and Data Visualization. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 9–11 March 2021; pp. 281–292. [Google Scholar]
  57. Kok, B.C.; Soh, H. Trust in Robots: Challenges and Opportunities. Curr. Robot. Rep. 2020, 1, 297–309. [Google Scholar] [CrossRef]
  58. Wagner, A.R.; Robinette, P. Chapter 9-An explanation is not an excuse: Trust calibration in an age of transparent robots. In Trust in Human-Robot Interaction; Nam, C.S., Lyons, J.B., Eds.; Academic Press: Cambridge, MA, USA, 2021; pp. 197–208. [Google Scholar] [CrossRef]
  59. Alonso, V.; de la Puente, P. System Transparency in Shared Autonomy: A Mini Review. Front. Neurorobot 2018, 12, 83. [Google Scholar] [CrossRef]
  60. Wit, J.d.; Vogt, P.; Krahmer, E. The Design and Observed Effects of Robot-performed Manual Gestures: A Systematic Review. J. Hum.-Robot. Interact. 2023, 12, 1–62. [Google Scholar] [CrossRef]
  61. de Barros, P.G.; Lindeman, R.W. Multi-Sensory Urban Search-and-Rescue Robotics: Improving the Operator’s Omni-Directional Perception. Front. Robot. AI 2014, 1, 14. [Google Scholar] [CrossRef]
  62. Triantafyllidis, E.; McGreavy, C.; Gu, J.; Li, Z. Study of Multimodal Interfaces and the Improvements on Teleoperation. IEEE Access 2020, 8, 78213–78227. [Google Scholar] [CrossRef]
  63. Sosnowski, M.J.; Brosnan, S.F. Under pressure: The interaction between high-stakes contexts and individual differences in decision-making in humans and non-human species. Anim. Cogn. 2023, 26, 1103–1117. [Google Scholar] [CrossRef]
  64. NASA-STD-3001; NASA Space Flight Human-System Standard, Volume 2: Human Factors, Habitability, and Environmental Health. NASA Technical Standard: Washington, DC, USA, 2022; Volume 2, Revision C.
  65. Khaliq, M.; Munawar, H.; Ali, Q. Haptic Situational Awareness Using Continuous Vibrotactile Sensations. arXiv 2021, arXiv:2108.04611. [Google Scholar] [CrossRef]
  66. Chitikena, H.; Sanfilippo, F.; Ma, S. Robotics in Search and Rescue (SAR) Operations: An Ethical and Design Perspective Framework for Response Phase. Appl. Sci. 2023, 13, 1800. [Google Scholar] [CrossRef]
  67. Sarter, N. Multiple-Resource Theory as a Basis for Multimodal Interface Design: Success Stories, Qualifications, and Research Needs. In Attention: From Theory to Practice; Oxford University Press: Oxford, UK, 2006; pp. 187–195. [Google Scholar]
  68. Tabrez, A.; Luebbers, M.; Hayes, B. A Survey of Mental Modeling Techniques in Human–Robot Teaming. Curr. Robot. Rep. 2020, 1, 259–267. [Google Scholar] [CrossRef]
  69. Endsley, M.R. Situation Awareness Misconceptions and Misunderstandings. J. Cogn. Eng. Decis. Mak. 2015, 9, 4–32. [Google Scholar] [CrossRef]
  70. Rey-Becerra, E.; Wischniewski, S. Mastering a robot workforce: Review of single human multiple robots systems and their impact on occupational safety and health and system performance. Ergonomics 2025, 1–25. [Google Scholar] [CrossRef]
  71. Han, Z.; Norton, A.; McCann, E.; Baraniecki, L.; Ober, W.; Shane, D.; Skinner, A.; Yanco, H.A. Investigation of Multiple Resource Theory Design Principles on Robot Teleoperation and Workload Management. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021. [Google Scholar]
  72. National Aeronautics Space Administration. Risk of Inadequate Human-System Integration Architecture. In Human Research Program, Human Factors and Behavioral Performance; NASA Lyndon B. Johnson Space Center: Houston, TX, USA, 2021. [Google Scholar]
  73. Cavicchi, S.; Abubshait, A.; Siri, G.; Mustile, M.; Ciardo, F. Can humanoid robots be used as a cognitive offloading tool? Cogn. Res. Princ. Implic. 2025, 10, 17. [Google Scholar] [CrossRef]
  74. Richter, P.; Wersing, H.; Vollmer, A.-L. Improving Human-Robot Teaching by Quantifying and Reducing Mental Model Mismatch. arXiv 2025, arXiv:2501.04755. [Google Scholar] [CrossRef]
  75. Javaid, M.; Estivill-Castro, V. Explanations from a Robotic Partner Build Trust on the Robot’s Decisions for Collaborative Human-Humanoid Interaction. Robotics 2021, 10, 51. [Google Scholar] [CrossRef]
  76. Zheng, C.; Wang, K.; Gao, S.; Yu, Y.; Wang, Z.; Tang, Y. Design of multi-modal feedback channel of human–robot cognitive interface for teleoperation in manufacturing. J. Intell. Manuf. 2025, 36, 4283–4303. [Google Scholar] [CrossRef]
  77. Lee, Y.; Kozar, K.; Larsen, K. The Technology Acceptance Model: Past, Present, and Future. Commun. Assoc. Inf. Syst. 2003, 12, 50. [Google Scholar] [CrossRef]
  78. Cimperman, M.; Makovec Brenčič, M.; Trkman, P. Analyzing older users’ home telehealth services acceptance behavior-applying an Extended UTAUT model. Int. J. Med. Inform. 2016, 90, 22–31. [Google Scholar] [CrossRef] [PubMed]
  79. Al-Emran, M.; Shaalan, K. Recent Advances in Technology Acceptance Models and Theories; Springer: Cham, Switzerland, 2021. [Google Scholar]
  80. Wang, Z.; Wang, Y.; Zeng, Y.; Su, J.; Li, Z. An investigation into the acceptance of intelligent care systems: An extended technology acceptance model (TAM). Sci. Rep. 2025, 15, 17912. [Google Scholar] [CrossRef] [PubMed]
  81. Vicente, K.J.; Rasmussen, J. A Theoretical Framework for Ecological Interface Design; Risø National Laboratory: Roskilde, Denmark, 1988. [Google Scholar]
  82. Ma, J.; Li, Y.; Zuo, Y. Design and Evaluation of Ecological Interface of Driving Warning System Based on AR-HUD. Sensors 2024, 24, 8010. [Google Scholar] [CrossRef] [PubMed]
  83. Schewe, F.; Vollrath, M. Ecological interface design effectively reduces cognitive workload–The example of HMIs for speed control. Transp. Res. Part. F Traffic Psychol. Behav. 2020, 72, 155–170. [Google Scholar] [CrossRef]
  84. Zohrevandi, E.; Westin, C.; Vrotsou, K.; Lundberg, J. Exploring Effects of Ecological Visual Analytics Interfaces on Experts’ and Novices’ Decision-Making Processes: A Case Study in Air Traffic Control. Comput. Graph. Forum 2022, 41, 453–464. [Google Scholar] [CrossRef]
  85. Neisser, U. Cognition and Reality: Principles and Implications of Cognitive Psychology; W. H. Freeman: San Francisco, CA, USA, 1976. [Google Scholar]
  86. Vicente, K.J.; Rasmussen, J. Ecological interface design: Theoretical foundations. IEEE Trans. Syst. Man Cybern. 1992, 22, 589–606. [Google Scholar] [CrossRef]
  87. Zhao, J.; Gao, J.; Zhao, F.; Liu, Y. A Search-and-Rescue Robot System for Remotely Sensing the Underground Coal Mine Environment. Sensors 2017, 17, 2426. [Google Scholar] [CrossRef]
  88. Seo, S.H. Novel Egocentric Robot Teleoperation Interfaces for Search and Rescue. In Department of Computer Science; University of Manitoba: Winnipeg, MB, Canada, 2021. [Google Scholar]
  89. Roldán, J.J.; Peña-Tapia, E.; Martín-Barrio, A.; Olivares-Méndez, M.A.; Del Cerro, J.; Barrientos, A. Multi-Robot Interfaces and Operator Situational Awareness: Study of the Impact of Immersion and Prediction. Sensors 2017, 17, 1720. [Google Scholar] [CrossRef]
  90. Tabrez, A.; Luebbers, M.B.; Hayes, B. Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human-Robot Teaming. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, 9–13 May 2022; International Foundation for Autonomous Agents and Multiagent Systems: Auckland, New Zealand, 2022; pp. 1256–1264. [Google Scholar]
  91. Walker, M.; Gramopadhye, M.; Ikeda, B.; Burns, J.; Szafir, D. The Cyber-Physical Control Room: A Mixed Reality Interface for Mobile Robot Teleoperation and Human-Robot Teaming. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’24), Boulder, CO, USA, 11–15 March 2024; pp. 762–771.
  92. Moniruzzaman, M.D.; Rassau, A.; Chai, D.; Islam, S. Teleoperation methods and enhancement techniques for mobile robots: A comprehensive survey. Robot. Auton. Syst. 2021, 150, 103973. [Google Scholar] [CrossRef]
  93. Louca, J.; Eder, K.; Vrublevskis, J.; Tzemanaki, A. Impact of Haptic Feedback in High Latency Teleoperation for Space Applications. J. Hum.-Robot. Interact. 2024, 13, 16. [Google Scholar] [CrossRef]
  94. Lee, Y.J.; Park, M.W. 3D tracking of multiple onsite workers based on stereo vision. Autom. Constr. 2019, 98, 146–159. [Google Scholar] [CrossRef]
  95. Dennler, N.; Delgado, D.; Zeng, D.; Nikolaidis, S.; Matarić, M. The RoSiD Tool: Empowering Users to Design Multimodal Signals for Human-Robot Collaboration. In International Symposium on Experimental Robotics; Springer Nature: Cham, Switzerland, 2024; pp. 3–10. [Google Scholar]
  96. Ranganeni, V.; Dhat, V.; Ponto, N.; Cakmak, M. AccessTeleopKit: A Toolkit for Creating Accessible Web-Based Interfaces for Tele-Operating an Assistive Robot. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, Pittsburgh, PA, USA, 13–16 October 2024; Association for Computing Machinery: Pittsburgh, PA, USA, 2024; p. 87. [Google Scholar]
  97. Draheim, C.; Pak, R.; Draheim, A.; Engle, R. The role of attention control in complex real-world tasks. Psychon. Bull. Rev. 2022, 29, 1143–1197. [Google Scholar] [CrossRef] [PubMed]
  98. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Human. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef]
  99. Yang, X.J.; Schemanske, C.; Searle, C. Toward Quantifying Trust Dynamics: How People Adjust Their Trust After Moment-to-Moment Interaction With Automation. Human. Factors: J. Human. Factors Ergon. Soc. 2021, 65, 001872082110347. [Google Scholar] [CrossRef]
  100. Guo, X.; Liu, Y.; Ma, X.; Fu, H. Human trust effect in remote human–robot collaboration construction task for different level of automation. Adv. Eng. Inform. 2025, 68, 103647. [Google Scholar] [CrossRef]
  101. Wickens, C. Multiple resources and performance prediction. Theor. Issues Ergon. Sci. 2002, 3, 159–177. [Google Scholar] [CrossRef]
  102. Marvel, J.A.; Bagchi, S.; Zimmerman, M.; Antonishek, B. Towards Effective Interface Designs for Collaborative HRI in Manufacturing: Metrics and Measures. J. Hum.-Robot. Interact. 2020, 9, 25. [Google Scholar] [CrossRef]
  103. Murphy, R.R.; Tadokoro, S.; Kleiner, A. Disaster Robotics. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 1577–1604. [Google Scholar]
  104. Bouzón, I.; Pascual, J.; Costales, C.; Crespo, A.; Cima, C.; Melendi, D. Design, Implementation and Evaluation of an Immersive Teleoperation Interface for Human-Centered Autonomous Driving. Sensors 2025, 25, 4679. [Google Scholar] [CrossRef]
  105. Nourbakhsh, I.; Sycara, K.; Koes, M.; Yong, M.; Lewis, M.; Burion, S. Human-Robot Teaming for Search and Rescue. Pervasive Comput. IEEE 2005, 4, 72–79. [Google Scholar] [CrossRef]
  106. Diab, M.; Demiris, Y. A framework for trust-related knowledge transfer in human–robot interaction. Auton. Agents Multi-Agent. Syst. 2024, 38, 24. [Google Scholar] [CrossRef]
  107. Hancock, P.A.; Parasuraman, R. Human factors and safety in the design of intelligent vehicle-highway systems (IVHS). J. Saf. Res. 1992, 23, 181–198. [Google Scholar] [CrossRef]
  108. Panakaduwa, C.; Coates, S.; Munir, M.; De Silva, O. Optimising Visual User Interfaces to Reduce Cognitive Fatigue and Enhance Mental Well-being. In Proceedings of the 35th Annual Conference of the International Information Management Association (IIMA), Manchester, UK, 2–4 September 2024; University of Salford: Media City, Manchester, UK, 2024. [Google Scholar]
  109. Murphy, R.; Tadokoro, S. Disaster Robotics: Results from the ImPACT Tough Robotics Challenge; Springer: Berlin/Heidelberg, Germany, 2019; pp. 507–528. [Google Scholar]
  110. Casper, J.; Murphy, R. Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Trans. Syst. Man Cybern. Part. B Cybern. A Publ. IEEE Syst. Man Cybern. Soc. 2003, 33, 367–385. [Google Scholar] [CrossRef] [PubMed]
  111. Drury, J.; Scholtz, J.; Yanco, H.A. Awareness in human-robot interactions. In Proceedings of the 2003 IEEE International Conference on Systems, Man and Cybernetics (SMC’03; Cat. No.03CH37483), Washington, DC, USA, 8 October 2003; Volume 1, pp. 912–918. [Google Scholar]
  112. Ali, R.; Islam, T.; Prato, B.; Chowdhury, S.; Al Rakib, A. Human-Centered Design in Human-Robot Interaction Evaluating User Experience and Usability. Bull. Bus. Econ. 2023, 12, 454–459. [Google Scholar] [CrossRef]
  113. Kujala, S. User involvement: A review of the benefits and challenges. Behav. Inf. Technol. 2003, 22, 1–16. [Google Scholar] [CrossRef]
  114. Wheeler, S.; Engelbrecht, H.; Hoermann, S. Human Factors Research in Immersive Virtual Reality Firefighter Training: A Systematic Review. Front. Virtual Real. 2021, 2, 671664. [Google Scholar] [CrossRef]
  115. Zhang, D.; Tron, R.; Khurshid, R. Haptic Feedback Improves Human-Robot Agreement and User Satisfaction in Shared-Autonomy Teleoperation. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021. [Google Scholar]
  116. Kocer, B.; Stedman, H.; Kulik, P.; Caves, I.; van Zalk, N.; Pawar, V.; Kovac, M. Immersive View and Interface Design for Teleoperated Aerial Manipulation. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 4919–4926. [Google Scholar]
  117. Dybvik, H.; Løland, M.; Gerstenberg, A.; Slåttsveen, K.B.; Steinert, M. A low-cost predictive display for teleoperation: Investigating effects on human performance and workload. Int. J. Hum.-Comput. Stud. 2021, 145, 102536. [Google Scholar] [CrossRef]
  118. Zhu, Y.; Fusano, K.; Aoyama, T.; Hasegawa, Y. Intention-reflected predictive display for operability improvement of time-delayed teleoperation system. ROBOMECH J. 2023, 10, 17. [Google Scholar] [CrossRef]
  119. Manschitz, S.; Güler, B.; Ma, W.; Ruiken, D. Sampling-Based Grasp and Collision Prediction for Assisted Teleoperation. arXiv 2025, arXiv:2504.18186. [Google Scholar] [CrossRef]
  120. Contreras, C.A.; Rastegarpanah, A.; Chiou, M.; Stolkin, R. A mini-review on mobile manipulators with Variable Autonomy. Front. Robot. AI 2025, 12, 1540476. [Google Scholar] [CrossRef]
  121. Rea, D.J.; Seo, S.H. Still Not Solved: A Call for Renewed Focus on User-Centered Teleoperation Interfaces. Front. Robot. AI 2022, 9, 704225. [Google Scholar] [CrossRef]
  122. Luo, J.; He, W.; Yang, C. Combined Perception, Control, and Learning for Teleoperation: Key Technologies, Applications, and Challenges. Cogn. Comput. Syst. 2020, 2, 33–43. [Google Scholar] [CrossRef]
  123. Kamtam, S.B.; Lu, Q.; Bouali, F.; Haas, O.C.L.; Birrell, S. Network Latency in Teleoperation of Connected and Autonomous Vehicles: A Review of Trends, Challenges, and Mitigation Strategies. Sensors 2024, 24, 3957. [Google Scholar] [CrossRef] [PubMed]
  124. Pollak, A.; Biolik, E.; Chudzicka-Czupała, A. New Luddites? Counterproductive Work Behavior and Its Correlates, Including Work Characteristics, Stress at Work, and Job Satisfaction Among Employees Working With Industrial and Collaborative Robots. Hum. Factors Ergon. Manuf. Serv. Ind. 2025, 35, e70016. [Google Scholar] [CrossRef]
  125. Van Dijk, W.; Baltrusch, S.J.; Dessers, E.; De Looze, M.P. The Effect of Human Autonomy and Robot Work Pace on Perceived Workload in Human-Robot Collaborative Assembly Work. Front. Robot. Ai 2023, 10, 1244656. [Google Scholar] [CrossRef]
  126. Mutzenich, C.; Durant, S.; Helman, S.; Dalton, P. Situation Awareness in Remote Operators of Autonomous Vehicles: Developing a Taxonomy of Situation Awareness in Video-Relays of Driving Scenes. Front. Psychol. 2021, 12, 727500. [Google Scholar] [CrossRef]
  127. Xie, B. Visual Simulation of Interactive Information of Robot Operation Interface. Autom. Mach. Learn. 2023, 4, 61–67. [Google Scholar] [CrossRef]
  128. Chang, Y.L.; Luo, D.H.; Huang, T.R.; Goh, J.O.; Yeh, S.L.; Fu, L.C. Identifying Mild Cognitive Impairment by Using Human–Robot Interactions. J. Alzheimer’s Dis. 2022, 85, 1129–1142. [Google Scholar] [CrossRef]
  129. Lu, C.L.; Huang, J.T.; Huang, C.I.; Liu, Z.Y.; Hsu, C.C.; Huang, Y.Y.; Chang, P.K.; Ewe, Z.L.; Huang, P.J.; Li, P.L.; et al. A Heterogeneous Unmanned Ground Vehicle and Blimp Robot Team for Search and Rescue Using Data-Driven Autonomy and Communication-Aware Navigation. Field Robot. 2022, 2, 557–594. [Google Scholar] [CrossRef]
  130. Belkaid, M.; Kompatsiari, K.; De Tommaso, D.; Zablith, I.; Wykowska, A. Mutual Gaze With a Robot Affects Human Neural Activity and Delays Decision-Making Processes. Sci. Robot. 2021, 6, eabc5044. [Google Scholar] [CrossRef]
  131. Ciardo, F.; Wykowska, A. Robot’s Social Gaze Affects Conflict Resolution but Not Conflict Adaptations. Journal of Cognition 2022, 5, 2. [Google Scholar] [CrossRef]
  132. Suh, H.T.; Xiong, X.; Singletary, A.; Ames, A.D.; Burdick, J.W. Energy-Efficient Motion Planning for Multi-Modal Hybrid Locomotion. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 7027–7033. [Google Scholar] [CrossRef]
  133. Morris, A.; Ferguson, D.; Omohundro, Z.; Bradley, D.; Silver, D.; Baker, C.; Thayer, S.; Whittaker, C.; Whittaker, W. Recent developments in subterranean robotics. J. Field Robot. 2006, 23, 35–57. [Google Scholar] [CrossRef]
  134. Arai, M.; Tanaka, Y.; Hirose, S.; Kuwahara, H.; Tsukui, S. Development of “Souryu-IV” and “Souryu-V:” serially connected crawler vehicles for in-rubble searching operations. J. Field Robot. 2008, 25, 31–65. [Google Scholar] [CrossRef]
  135. Ray, D.N.; Dalui, R.; Maity, A.; Majumder, S. Sub-terranean robot: A challenge for the Indian coal mines. Online J. Electron. Electr. Eng. 2009, 2, 217–222. [Google Scholar]
  136. Wang, W.; Dong, W.; Su, Y.; Wu, D.; Du, Z. Development of Search-and-rescue Robots for Underground Coal Mine Applications. J. Field Robot. 2014, 31, 386–407. [Google Scholar] [CrossRef]
  137. Bołoz, Ł.; Biały, W. Automation and robotization of underground mining in Poland. Appl. Sci. 2020, 10, 7221. [Google Scholar] [CrossRef]
  138. Moczulski, W.; Cyran, K.; Novak, P.; Rodriguez, A.; Januszka, M. TeleRescuer-a concept of a system for teleimmersion of a rescuer to areas of coal mines affected by catastrophes. Proc. Inst. Veh. 2015, 2, 57–62. [Google Scholar]
Figure 1. Methodological framework showing how the narrative review, cognitive model application, and comparative evaluation are integrated to identify HRI design gaps and guide recommendations.
Figure 2. Constraints, research gaps, and focus areas in robotic systems for mine rescue missions.
Figure 3. HRI Interface Design Priorities and Essential Design Principles for Emergency Interfaces.
Table 1. HRI Categories in Emergency Scenarios.

HRI Category | Description | Emergency Scenario
Human–robot coexistence (HRCx) | Robots and humans operate in shared environments with separate tasks and minimal interaction | Robots (UGV and UAV) navigate a collapsed area prior to search and rescue entry
Human–robot cooperation (HRCp) | Robots and humans pursue a shared goal with coordinated actions in time and space | Rescuers and robots inspect the same area simultaneously; robots scan for structural integrity while rescuers perform triage
Human–robot collaboration (HRC) | Direct or indirect communication and intentional cooperation between human and robot | Robot assists responders by interpreting hand gestures or spoken instructions to deliver tools, provide lighting, or stabilize equipment
Table 2. HRI challenges in mining emergency scenarios and design implications.

Challenge | Description | Design Implication
Situational Awareness | Difficulty maintaining an accurate mental model of the robot’s environment due to limited visual feedback and sensor data. | Use multi-camera views and panoramic stitching; integrate map overlays and spatial cues; provide visual or auditory alerts for out-of-view hazards
Cognitive Load | Overload of working memory caused by simultaneous tasks such as navigation, communication, and sensor monitoring. | Layer information hierarchies (critical vs. secondary); automate routine or repetitive tasks; apply visual grouping and hierarchy to reduce search time
Trust and Transparency | Misalignment between perceived and actual system reliability, leading to under-trust or over-trust of automation. | Incorporate explainable AI modules; display confidence levels and system rationale; provide manual override with clear consequence feedback
Attention and Individual Differences | Cognitive strain from modality switching and diverse operator preferences for visual, auditory, or haptic cues. | Support multimodal (visual, auditory, haptic) feedback; allow user customization based on role/preference; ensure redundant communication across channels
Stress and Fragility | Cognitive and motor performance decline under high-pressure and emergency conditions. | Use large, forgiving interface elements; minimize menu depth and complex interactions; build in redundant cues (e.g., visual + audio alerts)
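To ground two of the implications in Table 2 (layered information hierarchies and redundant cues), the sketch below routes alerts to progressively more feedback channels as severity rises. The severity levels and channel names are assumptions for illustration only.

```python
# Hedged sketch: severity-layered alert routing with redundant multimodal cues.
from dataclasses import dataclass


@dataclass
class Alert:
    message: str
    severity: int  # 2 = critical, 1 = secondary, 0 = informational


def dispatch(alert: Alert) -> list[str]:
    """Route an alert to more channels as its severity rises."""
    channels = ["screen_banner"]         # every alert is at least visual
    if alert.severity >= 1:
        channels.append("audio_tone")    # secondary alerts add an auditory cue
    if alert.severity >= 2:
        channels.append("haptic_pulse")  # critical alerts add a tactile cue
    return channels


print(dispatch(Alert("CO rising in heading 3", severity=2)))
# ['screen_banner', 'audio_tone', 'haptic_pulse']
```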
Table 4. Comparative Analysis of HRI Interfaces in Mine Rescue.

Interface Type | Key Features | Advantages | Limitations | Mining SAR Suitability
Graphical User Interface (GUI) | 2D/3D displays with video, maps, and telemetry; standard input devices | Familiar and easy to implement; supports multiple data streams | High visual load; requires fine motor control; vulnerable to clutter | Widely used; needs optimization for low-light and noisy data environments
Multimodal Interface | Combines visual, auditory, and haptic feedback | Distributes cognitive load across modalities; improves redundancy and accessibility | Risk of sensory conflict; requires synchronization and user customization | Highly suitable if well-calibrated; reduces modality overload
Haptic/Tactile Interface | Physical cues such as vibration or resistance to simulate contact or force | Enhances feedback in low-visibility conditions; supports intuitive sensing | Limited precision; requires calibration; may be difficult to use with PPE or remotely | Useful in niche contexts or advanced systems; not widely deployed yet
Immersive Interface (AR/VR) | Simulates environment using 3D visualization through headsets or projections | Enhances spatial awareness and planning; ideal for training or mission rehearsal | Susceptible to motion sickness and fatigue; hardware and reliability constraints in field use | Promising for planning and training; less practical for real-time control in rescue scenarios
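As a brief illustration of the customization and fail-safe concerns in the multimodal row of Table 4, the sketch below filters feedback modalities by operator preference while guaranteeing a visual fallback so that cues can never be silenced entirely; the modality names are illustrative.

```python
# Hedged sketch: preference-based modality filtering with a guaranteed fallback.
def effective_modalities(preferences: dict[str, bool]) -> set[str]:
    """Keep operator-enabled modalities, falling back to visual if none remain."""
    enabled = {modality for modality, on in preferences.items() if on}
    return enabled if enabled else {"visual"}  # never leave the operator cue-less


print(effective_modalities({"visual": False, "auditory": False, "haptic": True}))
# {'haptic'}
```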