Review

Safety Engineering for Humanoid Robots in Everyday Life—Scoping Review

Department of Mechatronics and Automation, Faculty of Engineering, University of Szeged, 6720 Szeged, Hungary
* Author to whom correspondence should be addressed.
Electronics 2025, 14(23), 4734; https://doi.org/10.3390/electronics14234734
Submission received: 30 October 2025 / Revised: 24 November 2025 / Accepted: 28 November 2025 / Published: 1 December 2025

Abstract

As humanoid robots move from controlled industrial environments into everyday human life, their safe integration is essential for societal acceptance and effective human–robot interaction (HRI). This scoping review examines engineering safety frameworks for humanoid robots across four core domains: (1) physical safety in HRI, (2) cybersecurity and software robustness, (3) safety standards and regulatory frameworks, and (4) ethical and societal implications. In the area of physical safety, recent research trends emphasize proactive, multimodal perception-based collision avoidance, the use of compliance mechanisms, and fault-tolerant control to handle hardware failures and falls. In cybersecurity and software robustness, studies increasingly address the full threat landscape, secure real-time communication, and reliability of artificial intelligence (AI)-based control. The analysis of standards and regulations reveals a lag between technological advances and the adaptation of key safety standards in current research. Ethical and societal studies show that safety is also shaped by user trust, perceived safety, and data protection. Within the corpus of 121 peer-reviewed studies published between 2021 and 2025 and included in this review, most work concentrates on physical safety, while cybersecurity, standardization, and socio-ethical aspects are addressed less frequently. These gaps point to the need for more integrated, cross-domain approaches to safety engineering for humanoid robots.

1. Introduction

The field of humanoid robotics has undergone rapid development over the past decade, increasingly moving beyond laboratories and strictly controlled industrial production lines [1,2], while the reliability of the underlying software and hardware components has become a critical concern [3,4]. Applications such as assisting astronauts in micro-gravity environments [5] or supporting complex supply chains with platforms like Tesla Optimus or COMAN+ [6,7] are no longer distant visions but active research directions. In parallel, humanoid robots are gaining ground in direct human-centered services as assistive and companion systems [8,9], including support for individuals living with dementia [10,11,12], where safe physical interaction, such as reproducing gentle touch, has become a key research focus [9]. The presence and behavior of such robots strongly influence user trust and empathy, which play a crucial role in caregiving contexts [13]. As rehabilitation tools, humanoid robots support individuals with mobility impairments in maintaining balance and restoring natural gait patterns [14,15], even on uneven terrain. Furthermore, social robots are increasingly used in therapies for children with autism spectrum disorder, where their predictable behavior makes them effective mediators for social skill development [16,17], and their anthropomorphic form enhances user acceptance [18].
This paradigm shift, from isolated operation toward direct human–robot interaction (HRI) and collaboration (HRC), places safety engineering at the forefront of research and development efforts.
This paper uses human–robot interaction (HRI) as an umbrella term for any form of co-presence and interaction between humans and humanoid robots, including social, assistive, and supervisory scenarios. Human–robot collaboration (HRC), in contrast, refers to tightly coupled, shared-task settings in which humans and robots jointly manipulate objects or share workspaces in proximity. This usage is consistent with established HRI taxonomies, where HRI is broadly defined as understanding, designing, developing, and evaluating robotic systems to be used with or by humans, while HRC denotes tightly coupled physical cooperation in shared workspaces [19,20].
While the safety of an industrial robot cell could traditionally be ensured through physical barriers and strict procedural constraints, safety in dynamic, unstructured environments shared with humans (e.g., homes, hospitals, and construction sites) is fundamentally more complex and requires proactive and adaptive approaches [21], in which the robot must continuously respond to human motion and intention [22,23]. Ensuring safety in these settings demands a rethinking of risk assessment methodologies [24,25], while control architectures must also be capable of enforcing safety constraints [26,27]. The success of physical human–robot interaction (pHRI) ultimately depends on the robot’s ability to operate safely, reliably, and in a manner that accounts for human intention [28,29].
The literature extensively addresses the technical challenges of humanoid robotics. Numerous review studies analyze motion planning [30,31], trajectory optimization [32,33], and navigation strategies [34], visual perception systems [35], fall dynamics during bipedal locomotion [36], fall prediction [37], and controlled fall execution [38], as well as benchmarking methodologies for performance evaluation [39]. At the same time, systematic reviews focusing on safety aspects of human–robot collaboration and risk assessment highlight the lack of standardization [24,25] and the absence of unified methodologies [40].
The aim of this scoping review is to synthesize the available scientific publications and provide a comprehensive overview of the current state and key challenges of safety engineering for humanoid robots. Compared to previous reviews that typically focus on individual aspects of safety (e.g., collision avoidance, risk assessment, or social acceptance), this work provides an integrated, cross-domain mapping of safety engineering for humanoid robots. The present scoping review considers humanoid robots operating in everyday environments and integrates physical interaction safety, cybersecurity and software robustness, formal safety standards and regulatory frameworks, and ethical and societal implications within a single framework. The paper identifies not only technical trends but also misalignments between engineering practice, regulatory guidance, and societal expectations.
The paper is structured as follows. Section 2 describes the review methodology, Section 3 discusses physical safety in direct interaction, Section 4 analyzes software robustness and cybersecurity threats, Section 5 reviews standards and regulatory frameworks, and Section 6 examines ethical and societal implications. An integrated approach across these domains is essential for fostering reliable, effective, and socially accepted human–robot interactions in contemporary society.

2. Materials and Methods

This study is a scoping review that aims to map the existing scientific literature related to the safety engineering of humanoid robots and to identify the key research directions, foundational concepts, and current gaps in the field. The methodology follows the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses—Extension for Scoping Reviews) guidelines [41].

2.1. Search Strategy

The systematic literature search was conducted across three major scientific databases: Web of Science, Scopus, and IEEE Xplore. These databases cover the fields of engineering, computer science, medicine, and the social sciences, ensuring a multidisciplinary perspective on the topic. No review protocol was registered for this review, as protocol registration is not mandatory for scoping reviews. Data were charted using a piloted extraction form with fields for bibliographic data, robot/platform and context, safety domain (RQ1–RQ4), methods, outcomes/metrics, and standard/regulatory references.

2.2. Study Selection Process and Eligibility Criteria

The selection process was carried out in multiple stages. Peer-reviewed journal articles and full-length conference papers (2021–2025) in English were included in the screened corpus; standards and pre-2021 seminal works were treated as contextual sources and were not screened. Studies were excluded if they were non-peer-reviewed (such as conference abstracts or book reviews), did not focus on safety-related topics, or fell outside the defined publication timeframe. To enhance the reliability of the selection process, each stage of screening (title, abstract, and full-text) followed pre-defined inclusion and exclusion criteria. After applying these criteria, only studies with clear and explicit relevance to humanoid robot safety were retained for full-text analysis.
This study is structured around four core research questions (RQ), which are illustrated in Figure 1. These domains, which also define the main challenges, are addressed sequentially in this paper:
  • RQ1: What hardware and control design approaches are applied in human-shared environments?
  • RQ2: How do studies address software and cybersecurity in shared environments?
  • RQ3: How do studies map or interpret the requirements of the ISO 10218 series, ISO/TS 15066, and related standards?
  • RQ4: How are social risks and user acceptance handled in human-shared environments?
The search strategy was based on Boolean combinations of keywords, as shown in Table 1.
As Figure 2 shows, the screening process comprised four consecutive stages. Database searches identified 3973 records. After deduplication and filtering by year, language, and peer-review status, 1212 records remained. Title screening excluded 845, leaving 367 for abstract screening. After excluding 217 at the abstract stage, 150 publications underwent full-text assessment. Finally, 29 reports did not meet the eligibility criteria (e.g., no humanoid focus, wrong type/format), yielding 121 included studies (research corpus). Standards and pre-2021 contextual sources were recorded separately and are not represented in PRISMA counts.
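The screening arithmetic can be verified directly from the counts reported above (a minimal consistency check, not part of the original PRISMA workflow):

```python
def screen(initial: int, exclusions: dict) -> int:
    """Subtract successive exclusion counts from an initial record count."""
    remaining = initial
    for n in exclusions.values():
        remaining -= n
    return remaining

# Counts from the PRISMA flow described above
included = screen(1212, {"title": 845, "abstract": 217, "full_text": 29})
print(included)  # 121 included studies
```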
The last search was performed on 8 October 2025. Figure 3 shows the yearly distribution of included publications: 10 (2021), 14 (2022), 24 (2023), 23 (2024), and 50 (2025), in total n = 121. The marked increase in 2025 (~41% of all included items) indicates accelerating research activity in the field. The thematic analysis of the collected information forms the basis of the following sections.
Given the aims of a scoping review, the goal was not exhaustive coverage but to obtain a sufficiently rich and diverse corpus to map key research directions. The final set of 121 peer-reviewed studies provides broad coverage across hardware and control design, cybersecurity, standardization, and ethical/societal aspects. While some relevant work may still be missed, the size and diversity of the corpus are adequate to identify major themes, gaps, and emerging trends in safety engineering for humanoid robots.
Figure 3 shows the numerical distribution of the 121 included studies across the four safety domains (RQ1–RQ4). Physical safety (RQ1) represents the largest share of the corpus, followed by ethical and societal implications (RQ4), whereas comparatively fewer studies focus on cybersecurity and software robustness (RQ2) and on standards and regulation (RQ3). Combined with the temporal trend in Figure 3, this suggests that while research activity is accelerating overall, the growth is uneven across safety domains.

3. Physical Safety in Human–Robot Interaction

Physical safety represents the foundational layer of safety in human–robot interaction (HRI). Its primary objective is to prevent human injury and environmental damage during robot motion and physical contact. This section discusses the main research directions across three defensive layers: proactive collision avoidance, the minimization of collision impacts, and the handling of emergencies and system-level failures.
Several interacting factors shape physical safety in HRI. Key determinants include the robot’s mass and inertia, joint speed limits, and the stiffness or compliance of both actuators and contact surfaces. The geometry and location of contacts influence injury risk, as do environmental constraints, such as confined spaces, obstacles, and uneven terrain. Safe operation, therefore, requires not only proactive collision avoidance, but also force and pressure limitation, compliant mechanical design, reliable emergency-stop and fail-safe behaviors, and explicit consideration of the surrounding environment. In practice, these requirements are addressed through speed and separation monitoring, power and force limiting, careful shaping and padding of contact surfaces, body region-specific force and pressure limits, and fall- and fault-management strategies that bring the system to a safe state in the case of anomalies.
Figure 4 shows the multi-layered physical safety architecture for humanoid HRI. The framework consists of three defensive zones protecting the interaction core: the outer prevention layer focuses on proactive collision avoidance and planning (Section 3.1); the middle contact layer ensures impact mitigation through compliance and force limiting (Section 3.2), and the inner emergency layer handles critical failures via fail-safe mechanisms and fall management (Section 3.3).

3.1. Collision Avoidance and Proximity Detection

The highest level of safety is achieved by completely avoiding unexpected or undesirable physical contact. This imposes complex perception, planning, and control demands on the robot, particularly in dynamic, changing environments [42], where safe navigation also requires identifying and avoiding hazardous areas [43,44,45].
Modern safety frameworks often adopt behavior-based approaches. Scianca et al. [1] propose a three-level safety behavior model comprising override, temporary override, and proactive behaviors. The proactive layer aims to reduce risk without interrupting task execution, for example, by modifying the robot’s speed or trajectory in response to human motion in the environment [1]. Similarly, control architectures that rely on a safety metric (e.g., minimum separation distance) dynamically scale or reshape the robot’s trajectory to keep safety above a pre-defined threshold [26,46]. For redundant manipulators, exploiting null-space motion offers additional freedom to raise the safety level during inverse-kinematics solutions [47,48], without affecting the primary task [49].
Bertoni et al. [50] presented a framework in which proximity sensors distributed across the robot’s body provide continuous awareness of the immediate workspace, enabling seamless and safe interaction even in confined spaces. Research is moving toward multimodal sensor fusion, which may in the future even encompass biomimetic chemical sensing [51]. An active safety strategy combining visual and tactile perception within a hierarchical control architecture has also been proposed [52,53,54]: the outer, vision-based loop foresees potential collisions and decelerates the robot, while the inner, tactile loop manages contact forces once contact has occurred [52]. Huang et al. [55] propose an active vision mechanism in which cameras have additional rotational degrees of freedom, allowing the observation direction to be dynamically optimized to minimize risk.
Collision detection and identification is a challenging problem, as it must distinguish accidental, hazardous collisions from intentional, task-specific contacts. This challenge is visualized in Figure 5, which contrasts safe, intentional contact (A) with a hazardous, accidental collision (B). Safety strategies can be broadly categorized into collision-avoidance and collision-detection methods. Regarding detection, Sharkawy et al. [56] distinguish between model-based approaches, which rely on dynamic models and proprioceptive sensors (e.g., torque sensors), and data-driven approaches that utilize machine learning algorithms and external sensors to identify contacts in unstructured environments. They emphasize that while model-based methods are robust, data-driven techniques are increasingly vital for handling the complexity of unpredictable human behaviors. Zhang et al. [57] developed an online scheme based on supervised learning and Bayesian decision theory that, using sensor signals, detects and classifies physical interactions in under 20 ms with 99.6% accuracy. Wong et al. [58] extended this approach with continuous, multimodal recognition of human intent and attention: in addition to tactile data, their system analyzes images from a robot-mounted camera, taking into account human posture and gaze direction to determine whether a touch was intentional or accidental.
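A minimal example of the model-based detection principle distinguished by Sharkawy et al. [56] is a torque-residual check: measured joint torques are compared against model-predicted torques, and a residual above a tuned per-joint threshold flags a possible collision. The sketch below is illustrative only; real detectors use full dynamic models, filtered residuals, and learned classifiers to separate intentional from accidental contact.

```python
import numpy as np

def collision_flags(tau_measured, tau_model, threshold):
    """Flag joints whose torque residual exceeds a per-joint threshold.

    tau_measured : measured joint torques [Nm]
    tau_model    : torques predicted by the robot's dynamic model [Nm]
    threshold    : per-joint residual thresholds [Nm] (tuned in practice)
    """
    residual = np.abs(np.asarray(tau_measured) - np.asarray(tau_model))
    return residual > np.asarray(threshold)

# Joint 2 shows an unexpected ~4 Nm residual -> possible collision
flags = collision_flags(
    tau_measured=[5.0, 12.0, 3.0],
    tau_model=[5.2, 8.0, 2.9],
    threshold=[1.0, 1.0, 1.0],
)
```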
Motion planning algorithms must also integrate safety constraints. Neural network-based dynamic learning schemes can avoid mutual collisions between the arms during cooperative tasks with dual-arm humanoid robots [59,60], as well as full-body self-collisions [61]. In addition to probabilistic, fast-reacting methods and nature-inspired dynamic path-planning strategies [62,63], multi-agent reinforcement learning is promising for safe navigation, particularly in multi-robot collaboration where safety is enforced during both training and execution, for example, via model predictive control-based safety filters [64], and can also support scheduling the motion of multiple robots based on symbolic control [65].
Table 2 summarizes the reviewed approaches to collision avoidance and proximity detection, grouped by their primary sensing modality, and highlights their main strengths and limitations.

3.2. Force Limitation and Compliant Design

If a collision cannot be avoided, the secondary line of defense aims to minimize impact forces and energy transfer, thereby reducing the risk of injury. This is achieved in two principal ways: through software-based (active) and hardware-based (passive) compliance.
Some robot control systems can actively respond to external forces. Impedance control is a widely used approach that models the robot’s behavior as a virtual mass-spring-damper system. Salt Ducaju et al. [66,67] employ model predictive control to adapt the robot’s impedance online to guarantee safety during human collaboration. The method uses control barrier functions with linear safety constraints to ensure the system state remains within a safe set. For humanoids equipped with high-stiffness actuators, precise torque control is indispensable. Liu et al. [68] propose a scheme based on dynamic current control and internal model control to improve torque-tracking accuracy.
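The virtual mass-spring-damper behavior underlying impedance control can be sketched as a discrete-time simulation of one Cartesian axis; the gains and time step below are illustrative assumptions, not parameters of the cited controllers.

```python
def impedance_step(x, v, f_ext, m=2.0, d=20.0, k=100.0, dt=0.001):
    """One semi-implicit Euler step of m*a + d*v + k*x = f_ext.

    x, v  : deviation and velocity relative to the reference pose
    f_ext : external force applied by the human [N]
    m,d,k : virtual mass, damping, stiffness (illustrative values)
    """
    a = (f_ext - d * v - k * x) / m
    v_next = v + a * dt
    x_next = x + v_next * dt
    return x_next, v_next

# A sustained 10 N push deflects the end-effector toward f/k = 0.1 m,
# i.e., the robot yields compliantly instead of rigidly resisting contact.
x, v = 0.0, 0.0
for _ in range(10000):  # 10 s of simulated time
    x, v = impedance_step(x, v, f_ext=10.0)
```

Raising k makes the robot stiffer and more precise; lowering it makes contact softer, which is exactly the trade-off the adaptive schemes of Salt Ducaju et al. [66,67] tune online under safety constraints.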
Passive compliance arises from the robot’s mechanical design. Materials and structures in soft robotics, such as dielectric elastomer actuators and pneumatic artificial muscles (PAMs), inherently enable safer interaction [69,70]. Wang et al. [71] note that although soft robots have long been considered safe, there are still no established frameworks for evaluating perceived safety, which hampers their adoption. Biomechanical inspirations also play an important role. Yang et al. [72,73] developed a passive structure analogous to the human knee meniscus that effectively damps axial impact forces during walking, and the design of robot legs likewise follows principles that combine static stability with dynamic adaptability. Cable-driven manipulators, fluidic-network-based synergistic robot hands, and ultra-high-flexibility hands actuated by a single motor with pneumatic clutches—such as the SMUFR hand—similarly embody this safe, lightweight design philosophy, which is essential for manipulating fragile objects such as fruit [74,75,76,77]. Pang et al. [78] propose optimization algorithms for these systems to achieve sufficient stiffness.

3.3. Emergency Stop and Fail-Safe Mechanisms

To handle the highest-risk events, the system must provide robust emergency-response and fault-tolerance capabilities. These include managing loss of stability (falls) and hardware failures.
Falls in humanoid robots pose serious hazards to both the robot and its surroundings. The literature intensively investigates fall dynamics and prevention strategies. Subburaman et al. [36] provide a comprehensive survey of humanoid fall control, encompassing fall prediction, controlled falling, and post-fall recovery. Zhang et al. [37] developed a fall-prediction algorithm that extracts time-series features from robot state data and predicts loss of stability more than 1 s in advance, enabling timely intervention. When a fall is unavoidable, the objective shifts to damage mitigation. Cai et al. [38] constructed a library of “self-protective” fall trajectories that a robot can select online, given its current state, to contact the ground in the safest possible posture. For wheeled humanoid robots, variable stiffness mechanisms are being developed to attenuate impact forces during collisions and prevent toppling [79]. Maintaining stability is fundamental and requires robust multi-contact motion-planning and control frameworks, as well as feasibility-driven schemes that adapt gait generation in real time [80,81]. Modern humanoids can perform adaptive transitions between bipedal and wheeled locomotion, and even execute dynamic jumping maneuvers, all of which demand high-level stability control [82,83].
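A common, simplified fall-risk indicator (distinct from the feature-based predictor of Zhang et al. [37]) is the capture point of the linear inverted pendulum model: if it leaves the support polygon, the robot cannot come to rest without taking a step. A one-dimensional sketch, with invented example values:

```python
import math

def capture_point(x_com, v_com, z_com, g=9.81):
    """Instantaneous capture point: x_cp = x + v * sqrt(z / g).

    x_com, v_com : horizontal CoM position [m] and velocity [m/s]
    z_com        : CoM height [m] (linear inverted pendulum assumption)
    """
    return x_com + v_com * math.sqrt(z_com / g)

def fall_risk(x_com, v_com, z_com, support_min, support_max):
    """True if the capture point lies outside the 1-D support interval."""
    cp = capture_point(x_com, v_com, z_com)
    return not (support_min <= cp <= support_max)

# CoM above the foot centre but moving fast forward:
# the capture point exits the foot, so a step or controlled fall is needed.
risky = fall_risk(x_com=0.0, v_com=0.8, z_com=0.9,
                  support_min=-0.1, support_max=0.1)
```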
Hardware failures can also have severe consequences. Prevention requires robust control and proper thermal management of hardware components, such as high-performance chips [84]. Khan et al. [85,86] examined scenarios in which a joint actuator failure in a robotic arm induces chaotic, hazardous motions. They proposed adaptive, fault-tolerant sliding-mode control that detects the onset of chaotic behavior, stabilizes the system, and restores task tracking, enabling the safe completion or shutdown of critical operations. The concept of passive safety, as demonstrated by Ding et al. [87], can be embedded into motion learning frameworks to ensure that a robot can stop rapidly without falling, even under unexpected conditions.
To synthesize the technological solutions discussed across these defensive layers, Table 3 provides a structured comparison of the key physical safety methods. The table classifies these strategies according to their operational phase and highlights their respective advantages and limitations to assist in selecting appropriate safety measures.

4. Cybersecurity and Software Robustness in HRI

With the proliferation of networked humanoid robots that make autonomous decisions, software reliability and cybersecurity have become at least as critical as physical safety. A software fault or a malicious attack can create immediate physical hazards. This section examines key aspects of software safety, from preventing unauthorized access to ensuring the reliability of AI models.

4.1. Access Control, Authentication, and System Integrity

Robots operating in public or private spaces can be targets of cyberattacks. Oruma et al. [88] conducted a systematic literature review that mapped the complete threat landscape of social robots. They identified types of potential threat actors, their motivations, system vulnerabilities, and possible attack vectors. The threats fall into four main categories: cybersecurity (e.g., data theft and takeover of system control), social (e.g., deceiving the robot and abuse of trust), physical (e.g., vandalism), and public-space-related (e.g., obstructing the robot’s motion). The study proposes a taxonomy for organizing these threats and highlights that safety and privacy are closely interrelated, transdisciplinary issues.
Handling sensitive data is particularly critical. Hajiabbasi et al. [89] introduced a robot-based cyber–physical banking customer management system in which a Pepper robot collects biometric data (e.g., fingerprints) for identification. To guarantee system integrity and data security, the architecture combines deep learning-based biometric modeling, symmetric and asymmetric encryption modules, and a blockchain network for secure, decentralized data transmission and validation.
A case study of two humanoid social service robots shows that neglecting privacy and security by design leads to serious vulnerabilities, especially in WebSocket-based operational control, and offers accountability-focused remediation guidance [90].

4.2. Secure Communication and Command Validation

As outlined in Section 4.1, security risks in humanoid robotics fall into four main categories: cybersecurity (e.g., data theft and system takeover), social, physical, and public-space-related threats [88]. To systematically visualize these vulnerabilities, Figure 6 illustrates the cyber–physical threat landscape, detailing the specific attack surfaces and potential threats associated with each vector. Communication-related risks arise at two levels. Internal component communication risks involve vulnerabilities in protocols like WebSockets [90] and the lack of redundancy in real-time buses (e.g., EtherCAT), where disturbances can lead to loss of control stability [3,91]. External communication risks (robot-to-robot or remote operation) include command manipulation or session hijacking during teleoperation, and man-in-the-middle attacks during over-the-air (OTA) updates. The latter are particularly critical, as attackers may inject malicious code to deliberately disable safety protocols such as collision avoidance [92,93].
Humanoid robots, particularly high-DOF electro-hydraulic systems, process large volumes of sensor and control data in real time. The stability of the control architecture and the reliability of communication are fundamental to safe operation; implementing any safety algorithm is feasible only on top of a stable and reliable control system [91].
Research has examined the use of EtherCAT bus technology, which enables fast and deterministic communication. Ahn et al. [3] developed a dual-channel system for the TOCABI humanoid robot based on open-source software. The dual-channel, redundant design increases fault tolerance under communication disturbances, reduces network load, and improves real-time performance capabilities—essential for high-DOF systems requiring hard real-time operation. Ghandour et al. [91] likewise propose a distributed, real-time, modular, and adaptable control architecture for electrohydraulic humanoids that improves the update rate by 20% and reduces master-side latency by 40% compared with other humanoids, thereby providing a stable foundation for integrating future safety and protection algorithms.

4.3. Remote Operation

Telepresence and teleoperation are key application domains of humanoid robotics, enabling intervention in hazardous or otherwise inaccessible environments. Ali and Kamal [94] employ a humanoid-based telepresence platform, ARAtronic, for the remote visualization and supervision of faults in industrial machinery. This approach enhances operator safety by keeping humans away from potentially dangerous areas.
Remote access introduces significant cybersecurity risks. The communication channel between the robot and the operator can be vulnerable, enabling hijacking of robot control or manipulation of commands. Agrawal and Kumar [2] developed an IoT-based system to synchronize a robotic arm in real time with a human operator’s motion for handling hazardous materials. The system’s precision and reliability hinge on a secure communication channel, underscoring the cybersecurity challenges of remote operations.
Over-the-air (OTA) software updates are a form of remote operation that allows robot functionality to be improved and extended without physical access. While essential for long-term maintenance, they also introduce a substantial new attack surface. An insecure OTA process can have catastrophic consequences. Attackers may intercept the update package and inject malicious code into the robot’s control system [92]. This can result in full takeover of the robot, theft of sensitive data (e.g., navigation maps and camera feeds), or the deliberate disabling of safety protocols (e.g., collision avoidance), creating immediate physical hazards. Designing a secure OTA architecture is therefore critical and should include end-to-end encryption of update packages, cryptographic firmware signing, and a device-side secure bootloader that verifies software integrity at every startup [93].
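The verify-before-install flow described above can be sketched as follows. A real deployment would use asymmetric signatures (e.g., Ed25519) with the public key anchored in a secure bootloader; the HMAC below is a standard-library stand-in chosen only to keep the sketch self-contained, and all names and payloads are hypothetical.

```python
import hashlib
import hmac

def verify_update(package: bytes, signature: bytes, key: bytes) -> bool:
    """Verify an update package before installation (sketch only).

    Uses a constant-time comparison so a tampered package or forged
    signature is rejected without leaking timing information.
    """
    expected = hmac.new(key, package, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

key = b"device-provisioned-secret"      # hypothetical provisioning secret
pkg = b"firmware v2.1 payload"          # hypothetical update package
sig = hmac.new(key, pkg, hashlib.sha256).digest()  # signed at build time

ok = verify_update(pkg, sig, key)              # untampered: install proceeds
tampered = verify_update(pkg + b"!", sig, key)  # bit-flipped: install refused
```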

4.4. Robustness and Reliability of Software

Safety encompasses not only the defense against malicious attacks but also the intrinsic robustness of software and hardware. Control policies based on AI, particularly deep reinforcement learning (DRL), are becoming increasingly widespread, yet their reliability remains a critical concern. Research is also exploring the use of large language models for complex motion control, and the learning-from-failure paradigm likewise aims to enhance robustness [95,96,97].
Bodmann et al. [4] investigated the reliability of the Google Coral Edge TPU running deep reinforcement learning models. The hardware was subjected to accelerated neutron irradiation to emulate the long-term effects of natural cosmic radiation. Their results showed that the error rate of the Edge TPU executing DRL was up to 18-fold higher than allowed by international reliability standards. Radiation-induced faults most often led to complete model failure or erroneous outputs (e.g., incorrect velocity or position), despite feedback and inherent redundancy in the DRL systems. This study highlights that physical environmental effects pose serious threats to the AI components governing intelligent robot behavior, and that developing fault-tolerant AI hardware and software is essential.
Beyond radiation effects, other environmental factors can also undermine software robustness and hardware reliability. Strong electromagnetic fields, electrostatic discharge, or power-quality disturbances may corrupt sensor readings and communication signals or trigger spurious resets in safety-related controllers. Extreme temperatures, dust, and humidity can accelerate hardware degradation, leading to latent faults that only manifest under load. These examples underline that environmental qualification and hardening of both hardware and software are integral parts of safety engineering for humanoid robots, especially when deployed in industrial or medical environments [84].
Another key aspect of software robustness is systematic testing and validation. Sekkat et al. [98] developed an open-source digital twin of the Pepper robot in the ROS 2 framework. Digital twins enable complex algorithms to be tested in a realistic, scalable, and safe simulation environment prior to deployment on the physical robot, and they also support precise elasto-geometric calibration, which underpins reliable and accurate motion execution [99,100]. Operational anomaly detection is likewise part of software robustness. The CLUE-AI framework introduced by Altan and Sariel [101] employs a convolutional, three-branch architecture that fuses visual, auditory, and proprioceptive data streams to identify anomalies during everyday object-manipulation tasks (e.g., dropping an object), thereby allowing the robot to respond to unexpected events.
Across the surveyed work, cybersecurity measures range from biometric access control [89] and encrypted communication to secure over-the-air updates [92] and anomaly detection in software behavior [101]. Table 4 contrasts these approaches in terms of deployment complexity and their protective focus. To visualize these relationships, Figure 7 presents a trade-off analysis mapping the reviewed measures. As illustrated, measures prioritizing Availability (A) and Integrity (I) typically yield a higher direct impact on physical safety (red nodes). In contrast, data-centric approaches like biometric access (blue nodes), while critical for Privacy (P) and Confidentiality (C), often entail high complexity with a less-direct effect on preventing immediate physical harm. This highlights that while data protection is essential for trust, explicit design of safety and security is required to address physical risks effectively.

5. Safety Standards and Regulatory Framework

Clear, harmonized, and technology-responsive standards and regulatory frameworks are essential for the safe integration of humanoid robots. At present, this domain lags significantly behind technological progress, creating uncertainty for both developers and end users.

5.1. Risk Assessment Methods and Limitations

Traditional risk-assessment methods, such as Process Failure Mode and Effects Analysis (PFMEA), Hazard and Operability Studies (HAZOP), and Fault Tree Analysis (FTA), were originally designed for more static industrial settings. Their applicability to dynamic, complex human–robot collaboration (HRC) scenarios is limited because they struggle to model the unpredictability of human behavior and continuously evolving interaction dynamics [25]. In a systematic review, Arents et al. [24] identified a concerning trend: more than half of the examined HRC studies did not reference any specific safety standard, and 25% reported no explicit safety measures during experiments. This practice points to significant shortcomings in research culture and safety awareness.
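To make the classical calculus concrete, a minimal fault-tree evaluation under the standard independence assumption might look as follows (all basic-event probabilities are hypothetical). The limitation noted above is visible in the sketch itself: the gate structure is static and cannot represent evolving human behavior or interaction dynamics:

```python
# Minimal fault-tree evaluation with independent basic events.
# All probabilities are hypothetical, purely to illustrate the FTA calculus.

def or_gate(*p):
    """P(at least one input event occurs) = 1 - prod(1 - p_i)."""
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):
    """P(all input events occur) = prod(p_i)."""
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Top event: "unintended robot motion near a person"
p_sensor_fault = 1e-3   # hypothetical per-hour probabilities
p_sw_fault     = 5e-4
p_estop_fail   = 1e-4

# Hazard occurs if (sensor OR software) fails AND the e-stop also fails
p_top = and_gate(or_gate(p_sensor_fault, p_sw_fault), p_estop_fail)
print(f"{p_top:.2e}")
```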
Filling the existing knowledge gaps requires new, data-driven methods. Zuo et al. [102] applied network analysis to 303 HRI accident reports to identify archetypal incident patterns. They distinguished seven principal patterns, such as “unexpected activation”, “sensor and signal communication errors”, and “classic hazard pitfalls in robot-assisted work”. These archetypes can provide a structured framework for future risk analyses and the design of preventive measures. Integrating artificial intelligence into risk assessment is likewise promising yet challenging. Alenjareghi et al. [25] highlighted that although AI can improve the identification of process failures, current approaches often rely heavily on historical data and insufficiently account for human factors. A shift is also needed in benchmarking: Aller et al. [39] emphasize that non-functional aspects—such as safety and the quality of HRI—are largely overlooked by current benchmarking procedures, even though measuring them is essential for safe deployment in real-world settings.

5.2. Regulatory Frameworks and Future Directions

Although underrepresented in the research literature, foundational standards exist that provide baseline guidance for the safety engineering of humanoid robots, particularly in industrial and collaborative applications. An overview of the relevant robotics safety and functional safety standards, together with their focus areas, is presented in Table 5.
While Section 5.1 highlighted the limitations of current risk assessment methods, it is crucial to understand which existing standards apply to which domain. Figure 8 provides a mapping of these standards against key humanoid application scenarios. As shown, while ISO 10218 remains the core framework for industrial tasks, service-oriented domains like healthcare and home care rely primarily on ISO 13482. IEC 61508 serves as a cross-cutting supporting standard, underpinning the functional safety of programmable electronic systems across all domains.
ISO 10218-1 and 10218-2: This two-part standard forms the cornerstone of industrial robot safety [103,104]. It defines safety requirements for the design, manufacture, and integration of robots, manipulators, and robot systems. Although it originally focused on fenced robot cells, it also lays the foundations for collaborative systems. While standards compliance is essential in industrial settings, autonomous mobile robots (AMRs) also benefit from transparent intent signaling in human–robot work areas, which enhances operational efficiency and operator experience [105].
ISO/TS 15066:2016: This technical specification complements ISO 10218 and specifically addresses the safe operation of collaborative robots [106]. It provides key guidance for risk assessment and defines the four principal collaborative modes: safety-rated monitored stop, hand guiding, speed and separation monitoring, and power and force limitation. The latter is particularly relevant for humanoid physical interaction, as it specifies biomechanical thresholds for quasi-static and transient contacts.
ISO 13482:2014: This standard addresses the safety requirements for personal care robots [107,108]. It covers three main types: mobile servant robots, person-carrier robots, and physical assistant robots. Because humanoid robots are often deployed in healthcare and home-care contexts, this standard offers essential guidance for safe use in non-industrial environments.
IEC 61508: The foundational functional safety standard that defines lifecycle safety requirements for electrical, electronic, and programmable electronic systems [109]. It is essential when developing safety-critical software and hardware components, including emergency-stop circuits and safety PLCs.
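As a simplified illustration of the speed and separation monitoring mode, the protective separation distance can be computed from the human approach speed, the robot speed, and the robot's reaction and stopping behavior. The sketch below follows the general structure of the ISO/TS 15066 formulation but uses hypothetical values and collapses the intrusion and measurement-uncertainty allowances into optional terms:

```python
def protective_separation(v_h, v_r, t_r, t_s, s_stop, c=0.0, z=0.0):
    """Simplified protective separation distance for speed and separation
    monitoring (SSM). Structure follows ISO/TS 15066: human travel during
    robot reaction + stopping time, robot travel during its reaction time,
    robot stopping distance, plus intrusion (c) and uncertainty (z)
    allowances. All numeric values used below are illustrative only.
    """
    s_h = v_h * (t_r + t_s)   # human motion while the robot reacts and stops
    s_r = v_r * t_r           # robot motion during its reaction time
    return s_h + s_r + s_stop + c + z

# Hypothetical numbers: human at 1.6 m/s, robot at 1.0 m/s,
# 0.1 s reaction time, 0.3 s stopping time, 0.2 m stopping distance
d = protective_separation(v_h=1.6, v_r=1.0, t_r=0.1, t_s=0.3, s_stop=0.2)
print(round(d, 3))  # minimum separation in metres
```

If the measured human–robot distance falls below this value, the controller must reduce speed or trigger a safety-rated monitored stop; for humanoids, the challenge is that v_r and s_stop vary with posture and payload.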
While indispensable, these standards have clear limitations. They were largely designed for well-structured, pre-defined tasks and interactions, and are difficult to apply to highly autonomous, learning-capable humanoid robots operating in unpredictable human environments, where the range of possible interaction scenarios is virtually unbounded. Research should, therefore, focus not only on applying existing standards but also on extending and adapting them [110].
A particularly sensitive and still-evolving aspect of regulation concerns AI-based software. Under recent regulatory developments, such as the EU AI Act, many AI components used in humanoid robots (e.g., for navigation, manipulation, or social interaction) will likely fall into high-risk categories, triggering stringent requirements on data governance, transparency, robustness, and human oversight [110]. For humanoid safety engineering, this implies that safety, functional performance, and legal compliance can no longer be treated separately: the same AI system that plans motions or interprets social cues must be demonstrably robust, auditable, and aligned with both safety standards and AI regulation. Future work will need to bridge traditional robot safety standards with emerging AI-specific requirements [25,110].

6. Ethical and Social Implications

Safety in humanoid robotics extends far beyond purely technical parameters. It is also a social and psychological construct that fundamentally shapes whether people accept and trust these machines and how they integrate them into everyday life. Successful societal adoption requires a deep understanding of ethical norms and user perceptions.
To structure these non-technical dimensions, Figure 9 proposes a conceptual framework for social safety. It illustrates how robot design features and ethical safeguards (such as privacy protection) jointly influence human perception, ultimately determining trust and societal acceptance.

6.1. Transparency, Trust, and Human Acceptance

Trust is one of the cornerstones of successful HRI [111]. Modulating a robot’s behavior and demeanor influences perceived empathy and trust, and physical appearance—particularly anthropomorphism—also has a significant effect [112]. Users were less willing to trust a non-humanoid robot with the supervision of valuable objects, personal information, or living agents (e.g., pets and children) than a humanoid. Interestingly, the decrease in trust was mediated by different factors: for objects and information, lower perceived intelligence and likability, whereas for living agents, lower likability and perceived “aliveness” were decisive [113]. This indicates that people apply different mental models when evaluating a robot’s capabilities depending on task context [114]. Studies in commercial settings likewise confirm that trust in robots emerges from a complex interplay of multiple factors [115]. At the same time, gender presentation (male/female/neutral) was not found to have a significant effect on trust [113].
User perception changes dynamically with experience. Tobis et al. [12] showed that older adults’ perceptions of robots improved significantly after a real interaction compared with viewing photographs only. The effect was especially pronounced among lonely individuals, underscoring the importance of interaction for building acceptance, particularly in institutional elder care [116,117]. Users also tend to attribute intentionality and responsibility to robots, even when they know the behavior is preprogrammed. This phenomenon, the Fundamental Attribution Error, has likewise been observed in HRI, further complicating questions of responsibility in cases of robot-caused errors or harm [118]. Perceived safety and user emotions should also be examined in application-specific contexts, such as food-serving robots, where design directly influences users’ willingness to try the food [119].

6.2. Privacy Protection

When humanoid robots enter human spaces, homes, and hospital rooms, privacy becomes paramount. Equipped with cameras, microphones, and other sensors, they continuously collect data about their surroundings and the people within them, raising serious privacy concerns.
Slane and Pedersen [10] focus on social assistive robots used in elder care. These devices are hybrid in nature—simultaneously health monitoring tools, digital assistants, and safety aides. Their qualitative studies with older adults highlighted concerns about data handling. Incorporating seniors’ perspectives into future privacy and AI-governance debates is essential. Similarly, Liao et al. [11] identified privacy concerns among healthcare professionals and caregivers regarding the use of the Pepper robot in dementia care.
In analyses of the threat landscape for social robots operating in public spaces, Oruma et al. [88] likewise identified privacy as a central cybersecurity concern. Technical measures include real-time anonymization of collected visual data, as demonstrated by the banking system of Hajiabbasi et al. [89], to protect the identities of individuals captured by cameras. Developing ethical and legal frameworks that clearly regulate the handling, storage, and use of robot-collected data is a prerequisite for safe and socially accepted HRI—particularly when AI-driven systems may affect patient safety [120,121].

6.3. Social Interaction and Human Relationships

Humanoid robots are increasingly designed for roles that require direct social interaction, whether in therapy, education, or companionship. In this context, safety entails not only avoiding physical harm but also ensuring psychological and emotional well-being.
Elder care is one of the most promising domains. Numerous studies examine the impact of humanoid robots on loneliness, depression, and quality of life in nursing homes and home-care settings. Research evaluates the effectiveness of robot interventions for mitigating loneliness, analyzes caregivers’ and patients’ views on technology adoption, and shows that real interaction positively shapes older adults’ perceptions of robots [12,117]. Robots such as TIAGO and Pepper can initiate conversations, provide reminders, or reduce feelings of isolation through their presence [120]. Mazuz and Yamazaki [13] propose a trauma-informed care approach to companion-robot design, placing the creation of safety, trust, self-compassion, and self-efficacy at the center of the design process.
Another important area is robot-assisted therapy for children with autism spectrum disorder. With their predictable and simplified social cues, humanoid robots can serve as effective mediators for developing social and communication skills [16,17]. Aryania et al. [16] applied a risk-averse multi-armed bandit algorithm to decide which movement sequence the robot should demonstrate to maximize a child’s social engagement during imitation tasks. Roštšinskaja et al. [18] examined interactions between the Pepper robot and children with neurological disorders and found that an anthropomorphic design increases acceptance, indicating that the robot may be a useful tool for fostering social skills.
Robots are also used as storytellers in education to support young children’s language development. Wang et al. [122] found that children who listened to stories told by a robot showed better comprehension and higher attention levels than when the story was told via a tablet or even by a human. Robots can also enable creative interactions, such as co-creating stories or even dancing, which can further strengthen social bonding [123,124].

6.4. Affective and Multimodal Human–Robot Interaction

For seamless and safe interaction, the robot must interpret subtle, nonverbal aspects of human behavior—including intent, attention, and emotional state. This capability is a central focus of research on affective (emotional) and multimodal human–robot interactions.
Multimodal perception entails integrating data from multiple sensory channels (e.g., vision, audition, and touch). Wong et al. [58] combine binary signals from tactile sensors with posture and gaze analysis from a robot-mounted camera to distinguish intentional from accidental touches with high accuracy. This enables context-appropriate responses; for example, immediately stopping after an accidental collision while responding cooperatively to an intentional touch (e.g., a handover) [125]. Communicating with intent from the robot side is likewise critical. In a warehouse setting, Bhattathiri et al. [105] showed that when an autonomous mobile robot clearly communicates its intended motion (e.g., upcoming turns), both productivity and human coworkers’ trust and comfort improve.
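A rule-based caricature of this fusion logic is sketched below; the feature names and thresholds are invented for illustration, whereas Wong et al. [58] use learned classifiers over richer tactile, posture, and gaze features:

```python
def classify_touch(tactile_active: bool, gaze_on_robot: bool,
                   approach_speed: float) -> str:
    """Toy fusion rule distinguishing intentional from accidental touch.
    Feature names and the 0.5 m/s speed threshold are hypothetical."""
    if not tactile_active:
        return "no_contact"
    # A person looking at the robot while moving slowly is likely
    # initiating a deliberate interaction (e.g., a handover).
    if gaze_on_robot and approach_speed < 0.5:
        return "intentional"
    return "accidental"

def react(label: str) -> str:
    """Map the classification to a context-appropriate safety response."""
    return {"no_contact": "continue",
            "intentional": "cooperate",
            "accidental": "stop"}[label]

print(react(classify_touch(True, True, 0.2)))   # deliberate handover
print(react(classify_touch(True, False, 1.2)))  # accidental collision
```

The safety-relevant point is the asymmetry of the responses: ambiguous contact defaults to stopping, and only clearly intentional contact licenses a cooperative reaction.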
Affective interaction focuses on the role of emotions. Kiilavuori et al. [126] used psychophysiological measures (skin conductance, facial electromyographic activity, and heart rate) to show that eye contact with a humanoid robot elicits automatic affective and attentional responses similar to those evoked by human eye contact, indicating activation of the brain’s social processing mechanisms. Users’ emotional state also significantly shapes interaction. In an escape room experiment, Dong and Jeon [127] found that happy participants rated the robot as more likable and safer, whereas angry participants complied more with the robot’s instructions but were less successful at the task. Modulating the robot’s behavior and demeanor (e.g., positive/negative) likewise affects perceived empathy and trust [128]. Enhancing robots’ emotional expressivity, for example through realistic facial expressions or gentle gestures such as caressing, can further deepen the human–robot relationship [129].
In two experiments with young children, one study compared a humanoid robot, a non-humanoid robot, and a human using conflict and single-source paradigms [130]. Humanoid appearance increased selective trust only in direct comparison. When the robot was the only informant, appearance made no difference, and older children chose the humanoid less often than younger ones. Children remembered information from robots and humans equally well.

7. Discussion

7.1. Discussion and Summary of Evidence

The findings indicate that safety is a highly complex, multidisciplinary challenge that extends far beyond the boundaries of traditional industrial robotics. The evidence is summarized according to the review’s research questions.
RQ1: The literature shows a clear trend in physical safety moving toward proactive, multimodal, perception-based collision avoidance. This is coupled with the integration of both active (software-based) and passive (hardware-based) compliance mechanisms to minimize impact forces. Furthermore, robust operation relies heavily on the development of fault-tolerant control systems and effective fall-management strategies to handle hardware failures or loss of stability.
RQ2: Cybersecurity and software robustness are becoming increasingly salient as robots grow more networked and autonomous. The analysis confirms the fundamental importance of addressing the full threat landscape, designing secure real-time communication architectures, and critically assessing the reliability of AI components, which can be vulnerable to environmental factors like cosmic radiation.
RQ3: The literature identifies a significant lag in standardization and regulation compared to the rapid pace of technological advancement. Key standards (like ISO 10218 and ISO/TS 15066) are often underrepresented in research. This gap hinders safe, widespread deployment and highlights a critical need for unified, evidence-based risk assessment methodologies and updated benchmarking procedures that include safety metrics.
RQ4: The literature confirms that safety is also a socio-psychological domain. Ethical and societal factors, such as user trust, perceived safety, data protection (privacy), and the quality of social interaction, are shown to shape the success and acceptance of humanoid robots at least as much as their technical capabilities.
Table 6 shows the summary of evidence, key issues, methods, and references.
This scoping review synthesizes, based on the available literature, the key domains of safety engineering for humanoid robots.
Research in physical safety is clearly moving toward proactive, multimodal perception-based collision avoidance and the integration of active and passive compliance mechanisms. Fault-tolerant systems and fall-management strategies are crucial for robust operation. Cybersecurity and software robustness are becoming increasingly salient as robots grow more networked and autonomous. Analyzing the threat landscape, designing secure communication architectures, and assessing the reliability of AI components are fundamental. Significant lag persists in standardization and regulation, hindering safe, widespread deployment. Unified, evidence-based risk assessment and benchmarking methodologies are needed. Finally, ethical and societal factors such as trust, acceptance, privacy, and the quality of social interaction shape the success of humanoid robots at least as much as technical capability.
Future research should strive for tighter integration across these domains. A safe humanoid robot is not merely physically non-harmful, but also a reliable software system, operationally transparent, and socially accepted. Advancing these dimensions together is essential for the field of humanoid robotics.

7.2. Challenges and Restrictions

Humanoid robot safety is constrained by several persistent technical and non-technical challenges. On the technical side, robust perception and prediction in cluttered, human-populated environments remain difficult, especially when safety-critical decisions must be taken under uncertainty and in real time. Integrating multimodal sensing, compliant actuation, and fault-tolerant control into compact humanoid platforms is still an open engineering problem. From a methodological perspective, risk assessment and benchmarking frameworks are not yet fully adapted to learning-capable, highly autonomous humanoids, and they rarely capture socio-psychological aspects, such as perceived safety or trust. On the regulatory and societal side, safety standards, certification processes, and AI-specific regulation lag behind technological developments, creating uncertainty for developers and operators. Addressing these restrictions requires coordinated progress in sensing, control, standardization, and governance.

7.3. Conclusions and Recommendations

This scoping review shows that safety engineering for humanoid robots is inherently multidimensional, spanning physical interaction safety, cybersecurity and software robustness, formal standards and regulatory frameworks, and ethical and societal implications. Most of the current research effort is concentrated on physical safety, while cybersecurity, standardization, and socio-ethical aspects receive comparatively less systematic attention.
From these findings, several recommendations emerge. First, future work should more tightly integrate physical safety mechanisms with cybersecurity and software robustness, treating safety and security as co-dependent rather than separate design problems. Second, there is a need for safety benchmarks and risk assessment methods that explicitly account for learning-based controllers and human factors in humanoid HRI. Third, closer alignment between technical research and evolving standards and AI regulation is essential to ensure that safety solutions remain deployable in real-world settings. Finally, socio-psychological dimensions of safety (trust, perceived safety, acceptance, and privacy) should be treated as first-class outcomes alongside traditional engineering performance metrics.

7.4. Limitations

This scoping review has several limitations. First, as a scoping review, its purpose is to map the extent and nature of the literature, not to perform a formal quality assessment (critical appraisal) of the included studies’ methodologies. Although multiple databases and pre-defined criteria were used, we cannot rule out the omission of pertinent studies arising from search strategy choices, database indexing, or screening decisions. This review did not implement duplicate charting or independent extraction, which may increase the risk of extraction error. We mitigated this risk by using pre-defined decision rules for data charting.
Second, the search strategy was intentionally limited to 2021–2025 to capture the current state of the art. Pre-2021 citations and standards were used for context only and were not part of the screened corpus. This choice necessarily excludes foundational safety research published before this period. Third, this review was limited to publications in English, potentially omitting relevant studies in other languages.
Finally, this review is subject to publication bias, as it focuses on the peer-reviewed academic literature. Significant research and development in humanoid robot safety is conducted within private corporations and is often proprietary, unpublished, or released only in non-peer-reviewed formats. Therefore, the findings represent the state of the public scientific literature, not necessarily the full state of the industry.

Author Contributions

Conceptualization, D.K. and J.S.; Writing—review and editing, D.K. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
AMR: Autonomous Mobile Robot
ASD: Autism Spectrum Disorder
DOF: Degrees of freedom
DRL: Deep reinforcement learning
E/E/PE: Electrical, Electronic, and Programmable Electronic Safety-Related Systems
FTA: Fault Tree Analysis
HAZOP: Hazard and Operability Studies
HRC: Human–robot collaboration
HRI: Human–robot interaction
IoT: Internet of Things
MPC: Model predictive control
OTA: Over-the-air
PAM: Pneumatic artificial muscle
PFMEA: Process Failure Mode and Effects Analysis
PFL: Power and force limitation
pHRI: Physical human–robot interaction
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses—Extension for Scoping Reviews
RQ: Research questions
SMUFR: Single-Motor Ultra-High-Flexibility
SRMS: Safety-rated monitored stop
SSM: Speed and separation monitoring

References

  1. Scianca, N.; Ferrari, P.; De Simone, D.; Lanari, L.; Oriolo, G. A Behavior-Based Framework for Safe Deployment of Humanoid Robots. Auton. Robots 2021, 45, 435–456. [Google Scholar] [CrossRef]
  2. Agrawal, A.K.; Kumar, J. Humanoid Left Arm Collaboration Empowered by IoT and Synchronization of Human Joints: IOT BASED JOINT SYNCHRONIZED COLLABORATIVE HUMANOID ARM. J. Sci. Ind. Res. JSIR 2024, 83, 819–829. [Google Scholar] [CrossRef]
  3. Ahn, J.; Park, S.; Sim, J.; Park, J. Dual-Channel EtherCAT Control System for 33-DOF Humanoid Robot TOCABI. IEEE Access 2023, 11, 44278–44286. [Google Scholar] [CrossRef]
  4. Bodmann, P.R.; Saveriano, M.; Kritikakou, A.; Rech, P. Neutrons Sensitivity of Deep Reinforcement Learning Policies on EdgeAI Accelerators. IEEE Trans. Nucl. Sci. 2024, 71, 1480–1486. [Google Scholar] [CrossRef]
  5. Pohl, C.; Hegemann, P.; An, B.; Grotz, M.; Asfour, T. Humanoid robotic system for grasping and manipulation in decontamination tasks: Humanoides Robotersystem für das Greifen und Manipulieren bei Dekontaminierungsaufgaben. at-Automatisierungstechnik 2022, 70, 850–858. [Google Scholar] [CrossRef]
  6. Shamsuddoha, M.; Nasir, T.; Fawaaz, M.S. Humanoid Robots like Tesla Optimus and the Future of Supply Chains: Enhancing Efficiency, Sustainability, and Workforce Dynamics. Automation 2025, 6, 9. [Google Scholar] [CrossRef]
  7. Ruscelli, F.; Rossini, L.; Hoffman, E.M.; Baccelliere, L.; Laurenzi, A.; Muratore, L.; Antonucci, D.; Cordasco, S.; Tsagarakis, N.G. Design and Control of the Humanoid Robot COMAN+: Hardware Capabilities and Software Implementations. IEEE Robot. Autom. Mag. 2025, 32, 12–23. [Google Scholar] [CrossRef]
  8. Lavin, P.; Lesage, M.; Monroe, E.; Kanevsky, M.; Gruber, J.; Cinalioglu, K.; Rej, S.; Sekhon, H. Humanoid Robot Intervention vs. Treatment as Usual for Loneliness in Long-Term Care Homes: Study Protocol for a Pilot Randomized Controlled Trial. Front. Psychiatry 2022, 13, 1003881. [Google Scholar] [CrossRef]
  9. Lapresa, M.; Lauretti, C.; Cordella, F.; Reggimenti, A.; Zollo, L. Reproducing the Caress Gesture with an Anthropomorphic Robot: A Feasibility Study. Bioinspir. Biomim. 2024, 20, 016010. [Google Scholar] [CrossRef]
  10. Slane, A.; Pedersen, I. Bringing Older People’s Perspectives on Consumer Socially Assistive Robots into Debates about the Future of Privacy Protection and AI Governance. AI Soc. 2025, 40, 691–710. [Google Scholar] [CrossRef]
  11. Liao, Y.-J.; Jao, Y.-L.; Boltz, M.; Adekeye, O.T.; Berish, D.; Yuan, F.; Zhao, X. Use of a Humanoid Robot in Supporting Dementia Care: A Qualitative Analysis. SAGE Open Nurs. 2023, 9, 23779608231179528. [Google Scholar] [CrossRef] [PubMed]
  12. Tobis, S.; Piasek-Skupna, J.; Neumann-Podczaska, A.; Religioni, U.; Suwalska, A. Determinants of Attitude to a Humanoid Social Robot in Care for Older Adults: A Post-Interaction Study. Med. Sci. Monit. 2023, 29, e941205. [Google Scholar] [CrossRef] [PubMed]
  13. Mazuz, K.; Yamazaki, R. Trauma-Informed Care Approach in Developing Companion Robots: A Preliminary Observational Study. Front. Robot. AI 2025, 12, 1476063. [Google Scholar] [CrossRef] [PubMed]
  14. Lippi, V.; Mergner, T. A Challenge: Support of Standing Balance in Assistive Robotic Devices. Appl. Sci. 2020, 10, 5240. [Google Scholar] [CrossRef]
  15. Yu, J.; Zhang, S.; Wang, A.; Li, W.; Song, L. Musculoskeletal Modeling and Humanoid Control of Robots Based on Human Gait Data. PeerJ Comput. Sci. 2021, 7, e657. [Google Scholar] [CrossRef]
  16. Aryania, A.; Aghdasi, H.S.; Heshmati, R.; Bonarini, A. Robust Risk-Averse Multi-Armed Bandits with Application in Social Engagement Behavior of Children with Autism Spectrum Disorder While Imitating a Humanoid Robot. Inf. Sci. 2021, 573, 194–221. [Google Scholar] [CrossRef]
  17. Puglisi, A.; Caprì, T.; Pignolo, L.; Gismondo, S.; Chilà, P.; Minutoli, R.; Marino, F.; Failla, C.; Arnao, A.A.; Tartarisco, G.; et al. Social Humanoid Robots for Children with Autism Spectrum Disorders: A Review of Modalities, Indications, and Pitfalls. Children 2022, 9, 953. [Google Scholar] [CrossRef]
  18. Roštšinskaja, A.; Saard, M.; Korts, L.; Kööp, C.; Kits, K.; Loit, T.-L.; Juhkami, J.; Kolk, A. Unlocking the Potential of Social Robot Pepper: A Comprehensive Evaluation of Child-Robot Interaction. J. Pediatr. Health Care 2025, 39, 572–584. [Google Scholar] [CrossRef]
  19. Goodrich, M.A.; Schultz, A.C. Human-Robot Interaction: A Survey. Found. Trends Hum.-Comput. Interact. 2007, 1, 203–275. [Google Scholar] [CrossRef]
  20. Sheridan, T.B. Human–Robot Interaction: Status and Challenges. Hum. Factors 2016, 58, 525–532. [Google Scholar] [CrossRef]
  21. Sun, Y.; Jeelani, I.; Gheisari, M. Safe Human-Robot Collaboration in Construction: A Conceptual Perspective. J. Saf. Res. 2023, 86, 39–51. [Google Scholar] [CrossRef]
  22. Lv, M.; Feng, Z.; Yang, X.; Liu, B. A Method for Human–Robot Complementary Collaborative Assembly Based on Knowledge Graph. CCF Trans. Pervasive Comput. Interact. 2025, 7, 70–86. [Google Scholar] [CrossRef]
  23. Duan, H.; Wang, P.; Yang, Y.; Li, D.; Wei, W.; Luo, Y.; Deng, G. Reactive Human-to-Robot Dexterous Handovers for Anthropomorphic Hand. IEEE Trans. Robot. 2025, 41, 742–761. [Google Scholar] [CrossRef]
  24. Arents, J.; Abolins, V.; Judvaitis, J.; Vismanis, O.; Oraby, A.; Ozols, K. Human–Robot Collaboration Trends and Safety Aspects: A Systematic Review. J. Sens. Actuator Netw. 2021, 10, 48. [Google Scholar] [CrossRef]
  25. Alenjareghi, M.J.; Keivanpour, S.; Chinniah, Y.A.; Jocelyn, S.; Oulmane, A. Safe Human-Robot Collaboration: A Systematic Review of Risk Assessment Methods with AI Integration and Standardization Considerations. Int. J. Adv. Manuf. Technol. 2024, 133, 4077–4110. [Google Scholar] [CrossRef]
  26. Palmieri, J.; Di Lillo, P.; Lippi, M.; Chiaverini, S.; Marino, A. A Control Architecture for Safe Trajectory Generation in Human-Robot Collaborative Settings. IEEE Trans. Autom. Sci. Eng. 2025, 22, 365–380. [Google Scholar] [CrossRef]
  27. Shi, K.; Chang, J.; Feng, S.; Fan, Y.; Wei, Z.; Hu, G. Safe Human Dual-Robot Interaction Based on Control Barrier Functions and Cooperation Functions. IEEE Robot. Autom. Lett. 2024, 9, 9581–9588. [Google Scholar] [CrossRef]
  28. Farajtabar, M.; Charbonneau, M. The Path towards Contact-Based Physical Human–Robot Interaction. Robot. Auton. Syst. 2024, 182, 104829. [Google Scholar] [CrossRef]
  29. Sandoval, E.B.; Sosa, R.; Cappuccio, M.; Bednarz, T. Human–Robot Creative Interactions: Exploring Creativity in Artificial Agents Using a Storytelling Game. Front. Robot. AI 2022, 9, 695162. [Google Scholar] [CrossRef]
  30. Rutili de Lima, C.; Khan, S.G.; Tufail, M.; Shah, S.H.; Maximo, M.R.O.A. Humanoid Robot Motion Planning Approaches: A Survey. J. Intell. Robot. Syst. 2024, 110, 86. [Google Scholar] [CrossRef]
  31. Vikas; Parhi, D.R. Chaos-Based Optimal Path Planning of Humanoid Robot Using Hybridized Regression-Gravity Search Algorithm in Static and Dynamic Terrains. Appl. Soft Comput. 2023, 140, 110236. [Google Scholar] [CrossRef]
  32. Kashyap, A.K.; Parhi, D.R.; Pandey, A. Multi-Objective Optimization Technique for Trajectory Planning of Multi-Humanoid Robots in Cluttered Terrain. ISA Trans. 2022, 125, 591–613. [Google Scholar] [CrossRef]
  33. Li, F.; Kim, Y.-C.; Lyu, Z.; Zhang, H. Research on Path Planning for Robot Based on Improved Design of Non-Standard Environment Map With Ant Colony Algorithm. IEEE Access 2023, 11, 99776–99791. [Google Scholar] [CrossRef]
  34. De Luca, A.; Muratore, L.; Tsagarakis, N.G. Autonomous Navigation with Online Replanning and Recovery Behaviors for Wheeled-Legged Robots Using Behavior Trees. IEEE Robot. Autom. Lett. 2023, 8, 6803–6810. [Google Scholar] [CrossRef]
  35. Bin, T.; Yan, H.; Wang, N.; Nikolić, M.N.; Yao, J.; Zhang, T. A Survey on the Visual Perception of Humanoid Robot. Biomim. Intell. Robot. 2025, 5, 100197. [Google Scholar] [CrossRef]
  36. Subburaman, R.; Kanoulas, D.; Tsagarakis, N.; Lee, J. A Survey on Control of Humanoid Fall Over. Robot. Auton. Syst. 2023, 166, 104443. [Google Scholar] [CrossRef]
  37. Zhang, C.; Gao, J.; Chen, Z.; Zhong, S.; Qiao, H. Fall Analysis and Prediction for Humanoids. Robot. Auton. Syst. 2025, 190, 104995. [Google Scholar] [CrossRef]
  38. Cai, Z.; Yu, Z.; Chen, X.; Huang, Q.; Kheddar, A. Self-Protect Falling Trajectories for Humanoids with Resilient Trunk. Mechatronics 2023, 95, 103061. [Google Scholar] [CrossRef]
  39. Aller, F.; Pinto-Fernandez, D.; Torricelli, D.; Pons, J.L.; Mombaur, K. From the State of the Art of Assessment Metrics Toward Novel Concepts for Humanoid Robot Locomotion Benchmarking. IEEE Robot. Autom. Lett. 2020, 5, 914–920. [Google Scholar] [CrossRef]
  40. Lee, S.H.; Kim, J.S.; Yu, S. The Impact of Care Robots on Older Adults: A Systematic Review. Geriatr. Nurs. 2025, 65, 103507. [Google Scholar] [CrossRef]
  41. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
  42. Shamsah, A.; Gu, Z.; Warnke, J.; Hutchinson, S.; Zhao, Y. Integrated Task and Motion Planning for Safe Legged Navigation in Partially Observable Environments. IEEE Trans. Robot. 2023, 39, 4913–4934. [Google Scholar] [CrossRef]
  43. Ma, J.; Dai, H.; Mu, Y.; Wu, P.; Wang, H.; Chi, X.; Fei, Y.; Zhang, S.; Liu, C. DOZE: A Dataset for Open-Vocabulary Zero-Shot Object Navigation in Dynamic Environments. IEEE Robot. Autom. Lett. 2024, 9, 7389–7396. [Google Scholar] [CrossRef]
  44. Lee, D.; Nahrendra, I.M.A.; Oh, M.; Yu, B.; Myung, H. TRG-Planner: Traversal Risk Graph-Based Path Planning in Unstructured Environments for Safe and Efficient Navigation. IEEE Robot. Autom. Lett. 2025, 10, 1736–1743. [Google Scholar] [CrossRef]
  45. Li, Z.; Zeng, J.; Chen, S.; Sreenath, K. Autonomous Navigation of Underactuated Bipedal Robots in Height-Constrained Environments. Int. J. Robot. Res. 2023, 42, 565–585. [Google Scholar] [CrossRef]
  46. Palleschi, A.; Hamad, M.; Abdolshah, S.; Garabini, M.; Haddadin, S.; Pallottino, L. Fast and Safe Trajectory Planning: Solving the Cobot Performance/Safety Trade-Off in Human-Robot Shared Environments. IEEE Robot. Autom. Lett. 2021, 6, 5445–5452. [Google Scholar] [CrossRef]
  47. Dou, R.; Yu, S.; Li, W.; Chen, P.; Xia, P.; Zhai, F.; Yokoi, H.; Jiang, Y. Inverse Kinematics for a 7-DOF Humanoid Robotic Arm with Joint Limit and End Pose Coupling. Mech. Mach. Theory 2022, 169, 104637. [Google Scholar] [CrossRef]
  48. Kang, M.; Fan, Z.; Yu, X.; Wan, H.; Chen, Q.; Wang, P.; Fu, L. Division-Merge Based Inverse Kinematics for Multi-DOFs Humanoid Robots in Unstructured Environments. Comput. Electron. Agric. 2022, 198, 107090. [Google Scholar] [CrossRef]
  49. Xiong, Y.; Zhai, D.-H.; Xia, Y. Robust Whole-Body Safety-Critical Control for Sampled-Data Robotic Manipulators via Control Barrier Functions. IEEE Trans. Autom. Sci. Eng. 2025, 22, 16050–16061. [Google Scholar] [CrossRef]
  50. Bertoni, L.; Baccelliere, L.; Muratore, L.; Tsagarakis, N.G. A Proximity-Based Framework for Human-Robot Seamless Close Interactions. IEEE Robot. Autom. Lett. 2025, 10, 8514–8521. [Google Scholar] [CrossRef]
  51. Li, Y.; Zheng, H.; Xu, L.; Chen, L.; Xia, F.; Song, Y. A Biomimetic Nanofluidic Tongue for Highly Selective and Sensitive Bitterness Perception. J. Mater. Chem. A 2025, 13, 31023–31033. [Google Scholar] [CrossRef]
  52. Xu, C.; Zhou, Y.; He, B.; Wang, Z.; Zhang, C.; Sang, H.; Liu, H. An Active Strategy for Safe Human–Robot Interaction Based on Visual–Tactile Perception. IEEE Syst. J. 2023, 17, 5555–5566. [Google Scholar] [CrossRef]
  53. Jassim, H.S.; Akhter, Y.; Aalwahab, D.Z.; Neamah, H.A. Recent Advances in Tactile Sensing Technologies for Human-Robot Interaction: Current Trends and Future Perspectives. Biosens. Bioelectron. X 2025, 26, 100669. [Google Scholar] [CrossRef]
  54. Bao, R.; Tao, J.; Zhao, J.; Dong, M.; Li, J.; Pan, C. Integrated Intelligent Tactile System for a Humanoid Robot. Sci. Bull. 2023, 68, 1027–1037. [Google Scholar] [CrossRef] [PubMed]
  55. Huang, X.; Ying, Y.; Dong, W. CEASE: Collision-Evaluation-Based Active Sense System for Collaborative Robotic Arms. IEEE Trans. Instrum. Meas. 2024, 73, 1–11. [Google Scholar] [CrossRef]
  56. Sharkawy, A.-N.; Koustoumpardis, P.N. Human–Robot Interaction: A Review and Analysis on Variable Admittance Control, Safety, and Perspectives. Machines 2022, 10, 591. [Google Scholar] [CrossRef]
  57. Zhang, Z.; Qian, K.; Schuller, B.W.; Wollherr, D. An Online Robot Collision Detection and Identification Scheme by Supervised Learning and Bayesian Decision Theory. IEEE Trans. Autom. Sci. Eng. 2021, 18, 1144–1156. [Google Scholar] [CrossRef]
  58. Wong, C.Y.; Vergez, L.; Suleiman, W. Vision- and Tactile-Based Continuous Multimodal Intention and Attention Recognition for Safer Physical Human–Robot Interaction. IEEE Trans. Autom. Sci. Eng. 2024, 21, 3205–3215. [Google Scholar] [CrossRef]
  59. Zhang, Z.; Zhang, M.; Guo, J.; He, H. Barrier Offset Varying-Parameter Dynamic Learning Network for Solving Dual-Arms Human-Like Behavior Generation. IEEE Trans. Cogn. Dev. Syst. 2025, 17, 1199–1211. [Google Scholar] [CrossRef]
  60. Luo, Y.; Zhang, M.; Liu, Y.; Lin, J.; Zhang, Z. Dynamic Neural Learning for Obstacle Avoidance of Humanoid Robot Performing Cooperative Tasks. Neurocomputing 2025, 633, 129727. [Google Scholar] [CrossRef]
  61. Zheng, B.; Liang, D.; Huang, Q.; Liu, Y.; Zhang, P.; Wan, M.; Song, W.; Wang, B. Frame-By-Frame Motion Retargeting with Self-Collision Avoidance from Diverse Human Demonstrations. IEEE Robot. Autom. Lett. 2024, 9, 8706–8713. [Google Scholar] [CrossRef]
  62. Dai, S.; Hofmann, A.; Williams, B. Fast-Reactive Probabilistic Motion Planning for High-Dimensional Robots. SN Comput. Sci. 2021, 2, 484. [Google Scholar] [CrossRef]
  63. Li, H.; Wang, Y.; Guo, Y.; Duan, J. Vole Foraging-Inspired Dynamic Path Planning of Wheeled Humanoid Robots Under Workshop Slippery Road Conditions. Biomimetics 2025, 10, 277. [Google Scholar] [CrossRef] [PubMed]
  64. Dawood, M.; Pan, S.; Dengler, N.; Zhou, S.; Schoellig, A.P.; Bennewitz, M. Safe Multi-Agent Reinforcement Learning for Behavior-Based Cooperative Navigation. IEEE Robot. Autom. Lett. 2025, 10, 6256–6263. [Google Scholar] [CrossRef]
  65. Özbaltan, M.; Özbaltan, N.; Bıçakcı Yeşilkaya, H.S.; Demir, M.; Şeker, C.; Yıldırım, M. Task Scheduling of Multiple Humanoid Robot Manipulators by Using Symbolic Control. Biomimetics 2025, 10, 346. [Google Scholar] [CrossRef]
  66. Fan, Z.; Gao, F.; Chen, Z.; Yin, Y.; Yang, L.; Xi, Q.; Yang, E.; Luo, X. Force-Compliance MPC and Robot-User CBFs for Interactive Navigation and User-Robot Safety in Hexapod Guide Robots. IEEE Trans. Autom. Sci. Eng. 2025, 22, 20296–20310. [Google Scholar] [CrossRef]
  67. Salt Ducaju, J.; Olofsson, B.; Johansson, R. Model-Based Predictive Impedance Variation for Obstacle Avoidance in Safe Human–Robot Collaboration. IEEE Trans. Autom. Sci. Eng. 2025, 22, 9571–9583. [Google Scholar] [CrossRef]
  68. Liu, Y.; Chen, X.; Yu, Z.; Yu, H.; Meng, L.; Yokoi, H. High-Precision Dynamic Torque Control of High Stiffness Actuator for Humanoids. ISA Trans. 2023, 141, 401–413. [Google Scholar] [CrossRef]
  69. Jing, Z.; Luo, A.; Liu, X.; Wang, H.; Li, H.; Song, B.; Lu, S. Precision Actuation Method for Humanoid Eye Expression Robots Integrating Deep Reinforcement Learning. Sens. Actuators Phys. 2025, 393, 116762. [Google Scholar] [CrossRef]
  70. Liang, D.; Sun, N.; Wu, Y.; Liu, G.; Fang, Y. Fuzzy-Sliding Mode Control for Humanoid Arm Robots Actuated by Pneumatic Artificial Muscles with Unidirectional Inputs, Saturations, and Dead Zones. IEEE Trans. Ind. Inform. 2022, 18, 3011–3021. [Google Scholar] [CrossRef]
  71. Wang, Y.; Wang, G.; Ge, W.; Duan, J.; Chen, Z.; Wen, L. Perceived Safety Assessment of Interactive Motions in Human–Soft Robot Interaction. Biomimetics 2024, 9, 58. [Google Scholar] [CrossRef]
  72. Yang, L.; Zhao, Z. A Meniscus-Like Structure in Anthropomorphic Joints to Attenuate Impacts. IEEE Trans. Robot. 2024, 40, 3109–3126. [Google Scholar] [CrossRef]
  73. Zhang, J.; Chen, X.; Yu, Z.; Han, L.; Gao, Z.; Zhao, Q.; Huang, G.; Li, K.; Huang, Q. HTEC Foot: A Novel Foot Structure for Humanoid Robots Combining Static Stability and Dynamic Adaptability. Def. Technol. 2025, 44, 30–51. [Google Scholar] [CrossRef]
  74. Huang, Y.; Chen, Y.; Zhang, X.; Zhang, H.; Song, C.; Ota, J. A Novel Cable-Driven 7-DOF Anthropomorphic Manipulator. IEEE ASME Trans. Mechatron. 2021, 26, 2174–2185. [Google Scholar] [CrossRef]
  75. Higashi, K.; Koyama, K.; Ficuciello, F.; Ozawa, R.; Kiyokawa, T.; Wan, W.; Harada, K. Synergy Hand Using Fluid Network: Realization of Various Grasping/Manipulation Styles. IEEE Access 2024, 12, 164966–164978. [Google Scholar] [CrossRef]
  76. Xiong, Q.; Li, D.; Zhou, X.; Xin, W.; Wang, C.; Ambrose, J.W.; Yeow, R.C.-H. Single-Motor Ultraflexible Robotic (SMUFR) Humanoid Hand. IEEE Trans. Med. Robot. Bionics 2024, 6, 1666–1677. [Google Scholar] [CrossRef]
  77. Qiu, Y.; Ye, Z.; Tan, X.; Dai, M.; Ge, S.; Zhao, X.; Kong, D.; Ruan, Y. Fruit Grasping Evaluation Based on a Humanoid Underactuated Manipulator and an Adaptive Grasping Algorithm. Smart Agric. Technol. 2025, 11, 101007. [Google Scholar] [CrossRef]
  78. Pang, S.; Shang, W.; Dai, S.; Deng, J.; Zhang, F.; Zhang, B.; Cong, S. Stiffness Optimization of Cable-Driven Humanoid Manipulators. IEEE ASME Trans. Mechatron. 2024, 29, 4168–4178. [Google Scholar] [CrossRef]
  79. Lin, S.; Liu, H.; Wu, C.; Huang, L.; Chen, Y. Anti-Falling of Wheeled Humanoid Robots Based on a Novel Variable Stiffness Mechanism. Smart Mater. Struct. 2025, 34, 085002. [Google Scholar] [CrossRef]
  80. Ferrari, P.; Rossini, L.; Ruscelli, F.; Laurenzi, A.; Oriolo, G.; Tsagarakis, N.G.; Mingo Hoffman, E. Multi-Contact Planning and Control for Humanoid Robots: Design and Validation of a Complete Framework. Robot. Auton. Syst. 2023, 166, 104448. [Google Scholar] [CrossRef]
  81. Scianca, N.; Smaldone, F.M.; Lanari, L.; Oriolo, G. A Feasibility-Driven MPC Scheme for Robust Gait Generation in Humanoids. Robot. Auton. Syst. 2025, 189, 104957. [Google Scholar] [CrossRef]
  82. Lai, J.; Chen, X.; Yu, Z.; Chen, Z.; Dong, C.; Liu, X.; Huang, Q. Towards High Mobility and Adaptive Mode Transitions: Transformable Wheel-Biped Humanoid Locomotion Strategy. ISA Trans. 2025, 158, 184–196. [Google Scholar] [CrossRef]
  83. Zhao, Z.; Sun, S.; Huang, H.; Gao, Q.; Xu, W. Design and Control of Continuous Jumping Gaits for Humanoid Robots Based on Motion Function and Reinforcement Learning. Procedia Comput. Sci. 2024, 250, 51–57. [Google Scholar] [CrossRef]
  84. Du, X.; Ye, Y.; Jiao, B.; Kong, Y.; Yu, L.; Liu, R.; Yun, S.; Lu, D.; Qiao, J.; Liu, Z.; et al. A Flexible Thermal Management Method for High-Power Chips in Humanoid Robots. Device 2025, 3, 100576. [Google Scholar] [CrossRef]
  85. Khan, S.G. Adaptive Chaos Control of a Humanoid Robot Arm: A Fault-Tolerant Scheme. Mech. Sci. 2023, 14, 209–222. [Google Scholar] [CrossRef]
  86. Khan, S.G.; Bendoukha, S.; Abdelmalek, S. Chaos Stabilization and Tracking Recovery of a Faulty Humanoid Robot Arm in a Cooperative Scenario. Vibration 2019, 2, 87–101. [Google Scholar] [CrossRef]
  87. Ding, J.; Lam, T.L.; Ge, L.; Pang, J.; Huang, Y. Safe and Adaptive 3-D Locomotion via Constrained Task-Space Imitation Learning. IEEE ASME Trans. Mechatron. 2023, 28, 3029–3040. [Google Scholar] [CrossRef]
  88. Oruma, S.O.; Sánchez-Gordón, M.; Colomo-Palacios, R.; Gkioulos, V.; Hansen, J.K. A Systematic Review on Social Robots in Public Spaces: Threat Landscape and Attack Surface. Computers 2022, 11, 181. [Google Scholar] [CrossRef]
  89. Hajiabbasi, M.; Akhtarkavan, E.; Majidi, B. Cyber-Physical Customer Management for Internet of Robotic Things-Enabled Banking. IEEE Access 2023, 11, 34062–34079. [Google Scholar] [CrossRef]
  90. Biström, D.; Westerlund, M.; Duncan, B.; Jaatun, M.G. Privacy and Security Challenges for Autonomous Agents: A Study of Two Social Humanoid Service Robots. In Proceedings of the 2022 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), Bangkok, Thailand, 13–16 December 2022; pp. 230–237. [Google Scholar]
  91. Ghandour, M.; Jleilaty, S.; Ait Oufroukh, N.; Olaru, S.; Alfayad, S. Real-Time EtherCAT-Based Control Architecture for Electro-Hydraulic Humanoid. Mathematics 2024, 12, 1405. [Google Scholar] [CrossRef]
  92. Park, C.-Y.; Lee, S.-J.; Lee, I.-G. Secure and Lightweight Firmware Over-the-Air Update Mechanism for Internet of Things. Electronics 2025, 14, 1583. [Google Scholar] [CrossRef]
  93. Catuogno, L.; Galdi, C. Secure Firmware Update: Challenges and Solutions. Cryptography 2023, 7, 30. [Google Scholar] [CrossRef]
  94. Ali, A.R.; Kamal, H. Robust Fault Detection in Industrial Machines Using Hybrid Transformer-DNN With Visualization via a Humanoid-Based Telepresence Robot. IEEE Access 2025, 13, 115558–115580. [Google Scholar] [CrossRef]
  95. Baltes, J.; Christmann, G.; Saeedvand, S. A Deep Reinforcement Learning Algorithm to Control a Two-Wheeled Scooter with a Humanoid Robot. Eng. Appl. Artif. Intell. 2023, 126, 106941. [Google Scholar] [CrossRef]
  96. Sun, S.; Li, C.; Zhao, Z.; Huang, H.; Xu, W. Leveraging Large Language Models for Comprehensive Locomotion Control in Humanoid Robots Design. Biomim. Intell. Robot. 2024, 4, 100187. [Google Scholar] [CrossRef]
  97. Wu, M.; Cao, Y. Robust Human-Machine Teaming Through Reinforcement Learning from Failure via Sparse Reward Densification. IEEE Control Syst. Lett. 2025, 9, 2315–2320. [Google Scholar] [CrossRef]
  98. Sekkat, H.; Moutik, O.; El Kari, B.; Chaibi, Y.; Tchakoucht, T.A.; El Hilali Alaoui, A. Beyond Simulation: Unlocking the Frontiers of Humanoid Robot Capability and Intelligence with Pepper’s Open-Source Digital Twin. Heliyon 2024, 10, e34456. [Google Scholar] [CrossRef]
  99. Lin, X.; Guo, Z.; Jin, X.; Guo, H. Digital Twin-Enabled Safety Monitoring System for Seamless Worker-Robot Collaboration in Construction. Autom. Constr. 2025, 174, 106147. [Google Scholar] [CrossRef]
  100. Bonnet, V.; Mirabel, J.; Daney, D.; Lamiraux, F.; Gautier, M.; Stasse, O. Practical Whole-Body Elasto-Geometric Calibration of a Humanoid Robot: Application to the TALOS Robot. Robot. Auton. Syst. 2023, 164, 104365. [Google Scholar] [CrossRef]
  101. Altan, D.; Sariel, S. CLUE-AI: A Convolutional Three-Stream Anomaly Identification Framework for Robot Manipulation. IEEE Access 2023, 11, 48347–48357. [Google Scholar] [CrossRef]
  102. Zuo, Y.; Guo, B.H.W.; Goh, Y.M.; Lim, J.-Y. Identifying Human-Robot Interaction (HRI) Incident Archetypes: A System and Network Analysis of Accidents. Saf. Sci. 2025, 191, 106959. [Google Scholar] [CrossRef]
  103. ISO 10218-1:2025; Robotics—Safety Requirements Part 1: Industrial Robots. ISO: Geneva, Switzerland, 2025.
  104. ISO 10218-2:2025; Robotics—Safety Requirements Part 2: Industrial Robot Applications and Robot Cells. ISO: Geneva, Switzerland, 2025.
  105. Bhattathiri, S.S.; Bogovik, A.; Abdollahi, M.; Hochgraf, C.; Kuhl, M.E.; Ganguly, A.; Kwasinski, A.; Rashedi, E. Unlocking Human-Robot Synergy: The Power of Intent Communication in Warehouse Robotics. Appl. Ergon. 2024, 117, 104248. [Google Scholar] [CrossRef] [PubMed]
  106. ISO/TS 15066:2016; Robots and Robotic Devices—Collaborative Robots. ISO: Geneva, Switzerland, 2016.
  107. Jacobs, T.; Virk, G.S. ISO 13482–The New Safety Standard for Personal Care Robots. In Proceedings of the ISR/Robotik 2014; 41st International Symposium on Robotics, Munich, Germany, 2–3 June 2014; pp. 1–6. [Google Scholar]
  108. Fosch-Villaronga, E.; Calleja, C.J.; Drukarch, H.; Torricelli, D. How Can ISO 13482:2014 Account for the Ethical and Social Considerations of Robotic Exoskeletons? Technol. Soc. 2023, 75, 102387. [Google Scholar] [CrossRef]
  109. IEC 61508:2010; Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems—Part 1: General Requirements. IEC: Geneva, Switzerland, 2010.
  110. Fosch-Villaronga, E.; Shaffique, M.R.; Schwed-Shenker, M.; Mut-Piña, A.; van der Hof, S.; Custers, B. Science for Robot Policy: Advancing Robotics Policy through the EU Science for Policy Approach. Technol. Forecast. Soc. Change 2025, 218, 124202. [Google Scholar] [CrossRef]
  111. Chen, K. Does Ascribing Robots with Mental Capacities Increase or Decrease Trust in Middle-Aged and Older Adults? Int. J. Soc. Robot. 2025, 17, 1617–1631. [Google Scholar] [CrossRef]
  112. Tsumura, T.; Yamada, S. Shaping Empathy and Trust Toward Agents: The Role of Agent Behavior Modification and Attitude. IEEE Access 2025, 13, 116908–116923. [Google Scholar] [CrossRef]
  113. Holbrook, C.; Krishnamurthy, U.; Maglio, P.P.; Wagner, A.R. Physical Anthropomorphism (but Not Gender Presentation) Influences Trust in Household Robots. Comput. Hum. Behav. Artif. Hum. 2025, 3, 100114. [Google Scholar] [CrossRef]
  114. Song, C.S.; Kim, Y.-K. The Role of the Human-Robot Interaction in Consumers’ Acceptance of Humanoid Retail Service Robots. J. Bus. Res. 2022, 146, 489–503. [Google Scholar] [CrossRef]
  115. Song, C.S.; Kim, Y.-K.; Jo, B.W.; Park, S.-h. Trust in Humanoid Robots in Footwear Stores: A Large-N Crisp-Set Qualitative Comparative Analysis (csQCA) Model. J. Bus. Res. 2022, 152, 251–264. [Google Scholar] [CrossRef]
  116. Tobis, S.; Piasek-Skupna, J.; Neumann-Podczaska, A.; Suwalska, A.; Wieczorowska-Tobis, K. The Effects of Stakeholder Perceptions on the Use of Humanoid Robots in Care for Older Adults: Postinteraction Cross-Sectional Study. J. Med. Internet Res. 2023, 25, e46617. [Google Scholar] [CrossRef]
  117. Tobis, S.; Piasek-Skupna, J.; Suwalska, A.; Wieczorowska-Tobis, K. The Impact of Real-World Interaction on the Perception of a Humanoid Social Robot in Care for Institutionalised Older Adults. Technologies 2025, 13, 189. [Google Scholar] [CrossRef]
  118. Horstmann, A.C.; Krämer, N.C. The Fundamental Attribution Error in Human-Robot Interaction: An Experimental Investigation on Attributing Responsibility to a Social Robot for Its Pre-Programmed Behavior. Int. J. Soc. Robot. 2022, 14, 1137–1153. [Google Scholar] [CrossRef]
  119. Lee, C.-L.; Kwak, H.S. Effect of Cooking and Food Serving Robot Design Images and Information on Consumer Liking, Willingness to Try Food, and Emotional Responses. Food Res. Int. 2025, 214, 116626. [Google Scholar] [CrossRef] [PubMed]
  120. Misaroș, M.; Stan, O.P.; Enyedi, S.; Stan, A.; Donca, I.; Miclea, L.C. A Method for Assessing the Reliability of the Pepper Robot in Handling Office Documents: A Case Study. Biomimetics 2024, 9, 558. [Google Scholar] [CrossRef] [PubMed]
  121. Johnson, E.A.; Dudding, K.M.; Carrington, J.M. When to Err Is Inhuman: An Examination of the Influence of Artificial Intelligence-Driven Nursing Care on Patient Safety. Nurs. Inq. 2024, 31, e12583. [Google Scholar] [CrossRef]
  122. Wang, Z.; Law, T.S.-T.; Yeung, S.S.S. Let Robots Tell Stories: Using Social Robots as Storytellers to Promote Language Learning among Young Children. Comput. Hum. Behav. Artif. Hum. 2025, 6, 100210. [Google Scholar] [CrossRef]
  123. Sánchez-Orozco, D.; Valdez, R.; Uchuari, Y.; Fajardo-Pruna, M.; Quero, L.C.; Algabri, R.; Yumbla, E.Q.; Yumbla, F. YAREN: Humanoid Torso Robot Platform for Research, Social Interaction, and Educational Applications. IEEE Access 2025, 13, 106175–106187. [Google Scholar] [CrossRef]
  124. Kim, J.; Kang, T.; Song, D.; Ahn, G.; Yi, S.-J. Development of Dual-Arm Human Companion Robots That Can Dance. Sensors 2024, 24, 6704. [Google Scholar] [CrossRef]
  125. Zhou, X.; Menassa, C.C.; Kamat, V.R. Siamese Network with Dual Attention for EEG-Driven Social Learning: Bridging the Human-Robot Gap in Long-Tail Autonomous Driving. Expert Syst. Appl. 2025, 291, 128470. [Google Scholar] [CrossRef]
  126. Kiilavuori, H.; Sariola, V.; Peltola, M.J.; Hietanen, J.K. Making Eye Contact with a Robot: Psychophysiological Responses to Eye Contact with a Human and with a Humanoid Robot. Biol. Psychol. 2021, 158, 107989. [Google Scholar] [CrossRef]
  127. Dong, J.; Jeon, M. Happiness Improves Perceptions and Game Performance in an Escape Room, Whereas Anger Motivates Compliance with Instructions from a Robot Agent. Int. J. Hum.-Comput. Stud. 2025, 202, 103547. [Google Scholar] [CrossRef]
  128. Banerjee, S.; González-Jiménez, H.; Zheng, L. Help Please! Deriving Social Support from Geminoid DK, Pepper, and AIBO as Companion Robots. Int. J. Hum.-Comput. Stud. 2025, 203, 103577. [Google Scholar] [CrossRef]
  129. Trieu, N.; Nguyen, T. Enhancing Emotional Expressiveness in Biomechanics Robotic Head: A Novel Fuzzy Approach for Robotic Facial Skin’s Actuators. CMES–Comput. Model. Eng. Sci. 2025, 143, 477–498. [Google Scholar] [CrossRef]
  130. Cao, X.; Wu, Y.; Nielsen, M.; Wang, F. Does Appearance Affect Children’s Selective Trust in Robots’ Social and Emotional Testimony? J. Appl. Dev. Psychol. 2025, 96, 101739. [Google Scholar] [CrossRef]
Figure 1. Paper roadmap: overview of research questions and thematic sections.
Figure 2. Identification of studies via databases.
Figure 3. Yearly distribution of included publications.
Figure 4. The multi-layered physical safety architecture for humanoid HRI.
Figure 5. Context-dependent contact risk in human–robot shared spaces: (A) low-risk, task-specific contact; (B) high-risk, accidental contact.
Figure 6. Overview of the cyber–physical threat landscape. The humanoid robot is exposed to risks from four primary vectors: (1) Network/Cloud, via compromised firmware updates and injection attacks. (2) Teleoperation channels susceptible to session hijacking. (3) Physical access, targeting sensors and hardware components. (4) Social interaction, where user trust is exploited through deception.
Figure 7. Trade-off analysis of cybersecurity measures in humanoid HRI regarding deployment complexity and physical safety impact. The bubbles are color-coded by primary security focus (data vs. control) and labeled with the specific threat dimensions addressed: Confidentiality (C), Integrity (I), Availability (A), and Privacy (P).
Figure 8. Applicability matrix of safety standards across humanoid robot application domains. The heatmap classifies the relevance of each standard: “Core” indicates the primary regulatory framework for the specific environment, “Supporting” denotes standards providing essential functional safety requirements or collaborative guidelines, and “Limited” suggests partial applicability where specific clauses may be relevant but do not cover the full use case.
Figure 9. Conceptual framework of social safety and trust in humanoid HRI. The model links technical robot features (Section 6.1 and Section 6.4) with human perception (Section 6.1 and Section 6.3), mediated by essential ethical safeguards like privacy protection (Section 6.2). The alignment of these factors is a prerequisite for achieving social acceptance and successful HRI.
Table 1. Search queries and databases linked to each research question.

| Research Question (RQ) | Keywords/Queries | Databases |
|---|---|---|
| RQ1 | (“humanoid robot” OR “anthropomorphic robot”) AND (“human–robot interaction” OR HRI OR “human–robot collaboration” OR HRC) AND (safety OR “collision avoidance” OR fall) | Web of Science; Scopus; IEEE Xplore |
| RQ2 | (“humanoid robot” OR “anthropomorphic robot”) AND (cybersecurity OR “secure boot” OR “over-the-air” OR authentication OR “access control”) | Web of Science; Scopus; IEEE Xplore |
| RQ3 | (“humanoid robot” OR “collaborative robot” OR cobot) AND (“ISO 10218” OR “ISO/TS 15066” OR “ISO 13482” OR “IEC 61508” OR standard OR certification OR “risk assessment”) | Web of Science; Scopus; IEEE Xplore |
| RQ4 | (“humanoid robot” OR “anthropomorphic robot”) AND (ethical OR trust OR acceptance OR privacy OR GDPR) | Web of Science; Scopus; IEEE Xplore |
Table 2. Summary of collision management approaches, categorized by sensing modality.

| Approach | Primary Sensing Modality | Sensing Technology | Main Safety Function | References |
|---|---|---|---|---|
| Behavior-based safety layers | Human pose + robot state | External vision/motion tracking + kinematics | Proactive speed and trajectory adaptation | [1,26,46,47,48,49] |
| Null-space motion | Internal state + obstacle position | Joint/task-space kinematics + obstacle estimates | Avoidance using redundancy without stopping the task | [47,48,49] |
| Proximity-based “skins” | On-body proximity | Distributed proximity sensors | Blind-spot safety | [50] |
| Emerging sensing | Chemical | Biomimetic chemical sensors | Detection of hazardous substances | [51] |
| Visual–tactile hierarchy | Vision + tactile | Vision (prediction) + tactile (feedback) | Pre-impact deceleration + post-impact force control | [52] |
| Active vision | Vision | Actuated cameras with dynamic gaze control | Minimizing occlusions and optimizing field of view | [55] |
| Dynamic path planning | Range + robot state | Range sensors + bio-inspired planners | Stability on slippery or dynamic terrain | [62,63] |
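The proactive, distance-based approaches summarized above can be reduced to a speed-scaling law in the spirit of speed and separation monitoring (SSM): full speed beyond a monitored distance, a protective stop inside a minimum separation, and a ramp in between. The sketch below is a minimal, hypothetical illustration; the threshold values are placeholders, not figures prescribed by the cited studies or by ISO/TS 15066.

```python
def ssm_speed_scale(separation_m: float,
                    protective_distance_m: float = 0.5,
                    monitored_distance_m: float = 2.0) -> float:
    """Speed-scaling factor for a minimal SSM-style safety layer.

    Returns 0.0 (protective stop) when the measured human-robot
    separation falls below the protective distance, 1.0 (full speed)
    beyond the monitored distance, and a linear ramp in between.
    Distance values are illustrative only.
    """
    if separation_m <= protective_distance_m:
        return 0.0
    if separation_m >= monitored_distance_m:
        return 1.0
    return ((separation_m - protective_distance_m)
            / (monitored_distance_m - protective_distance_m))
```

The controller multiplies the nominal joint or Cartesian velocity command by this factor each cycle, which is why frequent close approaches translate directly into the productivity loss noted in Table 3.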
Table 3. Comparison of physical safety methods in humanoid robotics: advantages, disadvantages, and classification.

| Category | Key Method/Approach | Advantages | Disadvantages | References |
|---|---|---|---|---|
| Perception and Detection | Multimodal Sensor Fusion (Vision + Tactile) | Distinguishes between accidental and intentional contact; high-accuracy contextual awareness | High computational cost; data synchronization challenges; sensor noise | [52,53,54,58] |
| | AI-based Classification (Supervised Learning, Bayesian) | Fast response (<20 ms); high classification accuracy (99.6%); enables context-appropriate reaction | Requires extensive training data; reliance on model generalization; potential “black box” unreliability | [57,58] |
| | Proximity Sensing Skins | Whole-body awareness; effective in confined spaces and for blind spots | Wiring complexity; calibration maintenance; potentially limited range compared to vision | [50,51] |
| Proactive Avoidance (Pre-Impact) | Safety Metrics/Separation Monitoring | Continuously guarantees a minimum safety distance; standard-compliant approach (SSM) | Can reduce robot productivity (frequent stops); conservative behavior in dense crowds | [26,46] |
| | Null-Space Motion | Utilizes kinematic redundancy to avoid obstacles without interrupting the primary task | Applicable only to redundant manipulators; limited by joint limits and singularity issues | [47,48,49] |
| | Active Vision/Dynamic Planning | Optimizes observation direction to minimize occlusion; adapts trajectory in real time | Complex control architecture; depends heavily on environment predictability | [55,59,60,61,62,63,64,65] |
| Impact Mitigation (Contact) | Active Compliance (Impedance/Admittance Control) | Software-adjustable stiffness; adaptable to different tasks; no hardware modification needed | Bandwidth limited by control loop and actuators; risk of instability; requires precise force sensing | [56,66,67,68] |
| | Passive Compliance (Soft Robotics, Mechanisms) | Inherently safe; infinite-bandwidth response to impact; energy absorption | Lower positioning precision; difficult to model and control; lower payload capacity | [69,70,71,72,73,74,75,76,77,78] |
| Emergency Response (Fail-Safe) | Fall Prediction and Management | Prevents or minimizes damage from stability loss; protects environment | High dynamic complexity; unavoidable risk of hardware damage if recovery fails | [36,37,38,79] |
| | Fault-Tolerant Control | Maintains stability during actuator/sensor failure; prevents chaotic motion | Design complexity; requires redundancy; effective only up to specific failure limits | [84,85,86] |
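The active-compliance entry above rests on the virtual mass-spring-damper abstraction: the controller makes the robot respond to external force as if it were a tunable second-order system. A minimal one-dimensional simulation sketch follows; the parameter values are illustrative assumptions, not gains reported in the cited works.

```python
def impedance_step(x, v, f_ext, x_ref, m=2.0, d=20.0, k=100.0, dt=0.001):
    """One semi-implicit Euler step of the 1-DOF impedance law
    m*a + d*v + k*(x - x_ref) = f_ext.
    Lowering k and d makes contact softer; raising them stiffens tracking."""
    a = (f_ext - d * v - k * (x - x_ref)) / m
    v = v + a * dt
    return x + v * dt, v

# A constant 10 N push displaces the virtual equilibrium by f_ext/k = 0.1 m.
x, v = 0.0, 0.0
for _ in range(5000):  # 5 s of simulated sustained contact
    x, v = impedance_step(x, v, f_ext=10.0, x_ref=0.0)
```

This also makes the table's "software-adjustable stiffness" concrete: the same hardware yields a gentle or a rigid response purely by changing k and d, while the bandwidth and instability caveats arise because the loop runs at a finite dt on real force measurements.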
Table 4. Cybersecurity measures in humanoid HRI: primary threat dimensions, deployment complexity, and impact on physical safety.

| Cybersecurity Measure | Primary Threat Dimensions Addressed | Deployment Complexity | Direct Impact on Physical Safety | References |
|---|---|---|---|---|
| Biometric Access Control and Blockchain | Confidentiality, Privacy, Integrity (preventing unauthorized user access and data tampering) | High (requires deep learning models and decentralized network infrastructure) | Medium (prevents unauthorized commands; primary focus is data protection) | [89,90] |
| Secure Over-the-Air (OTA) Updates | Integrity, Availability (ensuring software authenticity and secure system boot) | Medium/High (requires PKI, encryption, and secure bootloader implementation) | High (prevents injection of malicious code that could disable collision avoidance) | [92,93] |
| Real-time Communication Redundancy (e.g., EtherCAT) | Availability (ensuring continuous control-signal flow) | Medium (requires dual-channel architecture and specialized drivers) | High (prevents loss of stability and erratic motion due to signal loss) | [3,91] |
| Encrypted Teleoperation and Command Validation | Integrity, Confidentiality (preventing hijacking of remote-controlled sessions) | Medium (standard encryption protocols and authentication logic) | High (prevents attackers from remotely driving the robot into hazardous states) | [2,94] |
| AI-based Anomaly Detection | Integrity, Availability (detecting abnormal software/hardware states) | Medium (depends on model architecture, e.g., CNN or statistical methods) | Medium (enables fail-safe reaction to unexpected operational faults) | [85,101] |
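The secure-OTA row can be made concrete with a minimal signing-and-verification sketch. Real deployments, as the table notes, rely on asymmetric signatures under a PKI; to stay self-contained, the sketch below substitutes HMAC-SHA-256 with a shared device key (a simplifying assumption, and the key and image bytes are hypothetical), while keeping the essential bootloader logic: tag the full image at the vendor, recompute and compare on the device, and reject the update on any mismatch.

```python
import hashlib
import hmac

def sign_firmware(image: bytes, key: bytes) -> bytes:
    """Vendor side: compute an integrity tag over the full firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def accept_update(image: bytes, tag: bytes, key: bytes) -> bool:
    """Device side: constant-time comparison, so a tampered or truncated
    image is rejected before the bootloader ever executes it."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"device-provisioned-secret"            # hypothetical shared key
firmware = b"collision-avoidance module v2"   # stand-in image bytes
tag = sign_firmware(firmware, key)
```

A single flipped or appended byte anywhere in the image changes the tag, so `accept_update(firmware + b"x", tag, key)` returns False; this is the property that blocks the malicious-code-injection path described in the table.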
Table 5. Robotics safety standards and focus areas.

| Standard | Title | Focus Area |
|---|---|---|
| ISO 10218-1:2025 | Robotics—Safety requirements—Part 1: Industrial robots | Safety requirements specific to industrial robots |
| ISO 10218-2:2025 | Robotics—Safety requirements—Part 2: Industrial robot applications and robot cells | Safety of industrial robot applications and robot cells |
| ISO/TS 15066:2016 | Robots and robotic devices—Collaborative robots | Safety requirements for collaborative industrial robot systems |
| ISO 13482:2014 | Robots and robotic devices—Safety requirements for personal care robots | Mobile servant robots, physical assistant robots, person-carrier robots |
| IEC 61508:2010 | Functional safety of E/E/PE safety-related systems | Electrical, electronic, and programmable electronic safety-related systems |
Table 6. Summary of evidence—safety domains, key issues, methods, and key references.
Table 6. Summary of evidence—safety domains, key issues, methods, and key references.
| Safety Domain | Key Issue | Methods/Approaches | Key References (ID) |
|---|---|---|---|
| RQ1—Physical safety | Proactive collision avoidance in shared spaces | Behavior-based safety layers; safety-metric controllers (minimum separation); null-space motion; proximity sensing + multimodal fusion; active vision | [1,26,46,47,48,49,50,51,52,53,54,55] |
| | Distinguishing accidental vs. intentional contact | Collision detection and identification (supervised learning + Bayesian decision); posture/gaze-aware intent inference | [57,58] |
| | Safety-constrained motion planning | Neural dynamic schemes; probabilistic/nature-inspired planners; multi-agent RL with MPC safety filters; symbolic scheduling for multi-robot systems | [59,60,61,62,63,64,65] |
| | Active (software) compliance | Impedance control (virtual mass-spring-damper); online MPC tuning; control-barrier enforcement; precise torque control | [66,67,68] |
| | Passive (hardware) compliance | Soft-robotic structures (PAMs); biomechanical damping (meniscus-like); lightweight hands (cable/fluidic/SMUFR); stiffness optimization | [66,67,68,69,70,71,72,73,74,75] |
| | Fall safety and stability | Early fall prediction; online selection of self-protective fall motions | [36,37,38,79,80,81,82,83] |
| | Fault tolerance and reliability | Thermal management of high-performance chips; adaptive fault-tolerant sliding-mode control; passive-safety stop behaviors | [84,85,86,87] |
| RQ2—Cybersecurity | Unauthorized access/command hijacking (telepresence/IoT) | Deep learning-based biometric modeling; symmetric and asymmetric encryption modules; decentralized data transmission | [88,89] |
| | Secure over-the-air updates | End-to-end encryption of update packages; cryptographic firmware signing; device-side secure bootloader and integrity checks | [92,93] |
| | Data/privacy protection in sensing | Real-time anonymization of collected visual data; blockchain network for secure data transmission and validation | [89,120,121] |
| | Deterministic, reliable, real-time communication | EtherCAT fieldbus with dual-channel redundancy; modular, distributed real-time control | [3,91] |
| RQ3—Standards and regulation | Baseline industrial robot safety | ISO 10218-1/-2 requirements for design, manufacture, and integration | [103,104] |
| | Collaborative operation guidance | ISO/TS 15066 (modes: SRMS, hand-guiding, SSM, PFL; biomechanical limits) | [106] |
| | Personal-care robot safety | ISO 13482 (mobile servant, person-carrier, physical assistant robots) | [107] |
| | Functional safety lifecycle | IEC 61508 (safety-critical software and hardware) | [108] |
| | Risk assessment and benchmarking gaps | PFMEA/HAZOP/FTA + data-driven incident archetypes; evidence-based benchmarking incl. safety and HRI quality | [24,25,39,102] |
| RQ4—Ethical and social implications | Trust, acceptance, anthropomorphism | Behavior/demeanor tuning; study of anthropomorphism's effect on trust; analysis of real-world interactions' positive effect on perception | [111,112,113,114,115] |
| | Elder-care adoption and user perception | Real-world interactions in elder care; inclusion of users' perspectives in governance | [12,116,117,118] |
| | ASD therapy/education use | Predictable social cues; multi-armed bandit approaches; storytelling for language and bonding | [16,17,18,122] |
| | Nonverbal understanding and intent communication | Multimodal perception (vision/touch; posture/gaze); clear motion-intent signaling in shared work | [58,125,126,127] |
| | Privacy and data governance | Development of ethical/legal frameworks for data handling, storage, and use; incorporation of user (e.g., seniors') perspectives into AI governance | [10,11,120,121] |
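Among the active-compliance methods summarized in Table 6, impedance control makes the robot behave as a virtual mass-spring-damper around a desired pose, so contact forces produce bounded, spring-like deflection instead of rigid resistance. The one-degree-of-freedom sketch below (illustrative parameters, not taken from the reviewed papers) shows the core update: under a constant external force, the virtual spring settles at a deflection of roughly f/K from the setpoint.

```python
def impedance_step(x, v, x_d, f_ext, M=2.0, D=20.0, K=100.0, dt=0.001):
    """One semi-implicit Euler step of M*a + D*v + K*(x - x_d) = f_ext."""
    a = (f_ext - D * v - K * (x - x_d)) / M   # virtual mass-spring-damper
    v = v + a * dt
    x = x + v * dt
    return x, v

# A constant 10 N push deflects the virtual spring toward x_d + f/K = 0.1 m.
x, v = 0.0, 0.0
for _ in range(5000):          # 5 s of simulated time at dt = 1 ms
    x, v = impedance_step(x, v, x_d=0.0, f_ext=10.0)
```

The chosen M, D, K give a damping ratio near 0.7, so the response settles without oscillation; tuning these gains (offline or online, as in the MPC-tuning work cited above) trades off tracking stiffness against contact softness.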
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kóczi, D.; Sárosi, J. Safety Engineering for Humanoid Robots in Everyday Life—Scoping Review. Electronics 2025, 14, 4734. https://doi.org/10.3390/electronics14234734
