Search Results (769)

Search Parameters:
Keywords = Human-Robot Collaboration

24 pages, 3856 KB  
Article
Human–Robot Interaction: External Force Estimation and Variable Admittance Control Incorporating Passivity
by Jun Wan, Zihao Zhou, Nuo Yun, Kehong Wang and Xiaoyong Zhang
Robotics 2026, 15(5), 84; https://doi.org/10.3390/robotics15050084 - 22 Apr 2026
Abstract
In the context of Industry 5.0, human–robot collaboration increasingly demands intuitive, safe, and sensorless interaction for tasks such as hand-guided teaching and concurrent manipulation. However, conventional admittance control systems are prone to instability due to abrupt changes in human arm stiffness and their reliance on accurate dynamic models. To address these challenges, this paper proposes a sensorless external force estimation and variable admittance control method that models robot dynamic uncertainties and interaction forces as normally distributed stochastic quantities. An improved particle swarm optimization algorithm is introduced to calibrate the variance parameters, enhancing estimation accuracy and robustness. Furthermore, an energy-based variable admittance control strategy is developed, which preserves system passivity by adaptively adjusting inertia and damping gains based on real-time energy variations. The proposed method was validated on a redundant robot platform. Experimental results show that the external force and torque estimation errors remain below 3 N and 3 N·m, respectively, with lower detection delays and errors than those of a first-order generalized momentum observer in collision detection. Variable admittance experiments demonstrate that the system maintains passivity and stable interaction even under sudden arm stiffness changes. The approach is well-suited for industrial applications requiring safe, sensorless, and compliant human–robot collaboration. Full article
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)
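The energy-monitoring idea behind this kind of variable admittance control can be sketched in one degree of freedom: integrate M·a + D·v = F_ext, track a simple energy tank, and stiffen damping when the tank drains. All gains, the tank rule, and the adaptation factors below are invented for illustration; this is not the paper's controller.

```python
# 1-DoF variable admittance sketch with a passivity-style energy tank.
# Hypothetical parameters throughout; not the published method.

def simulate(forces, dt=0.01, M=2.0, D_min=5.0, D_max=50.0,
             tank0=2.0, tank_min=0.5):
    x, v, tank = 0.0, 0.0, tank0
    D = D_min
    xs = []
    for F in forces:
        a = (F - D * v) / M          # admittance dynamics M*a + D*v = F
        v += a * dt
        x += v * dt
        # Tank gains dissipated power, loses power injected by the human.
        tank += (D * v * v - F * v) * dt
        if tank < tank_min:
            # Near-empty tank: raise damping to keep the port passive.
            D = min(D_max, D * 1.5)
            tank = tank_min
        else:
            # Otherwise relax damping back toward the compliant setting.
            D = max(D_min, D * 0.99)
        xs.append(x)
    return xs

# Push with 5 N for 1 s, then release.
traj = simulate([5.0] * 100 + [0.0] * 100)
```

With a constant push the velocity settles near F/D and the end-effector keeps drifting forward after release while the velocity decays, which is the intended compliant behavior.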
25 pages, 5544 KB  
Article
Retrofitting a Legacy Industrial Robot Through Monocular Computer Vision-Based Human-Arm Posture Tracking and 3-DoF Robot-Axis Control (A1–A3)
by Paúl A. Chasi-Pesantez, Eduardo J. Astudillo-Flores, Valeria A. Dueñas-López, Jorge O. Ordoñez-Ordoñez, Eldad Holdengreber and Luis Fernando Guerrero-Vásquez
Robotics 2026, 15(4), 82; https://doi.org/10.3390/robotics15040082 - 21 Apr 2026
Abstract
This paper presents a low-cost retrofitting pipeline for a legacy industrial robot that uses a single RGB webcam and monocular 2D keypoint tracking to estimate human-arm posture angles θ^(h) and map them to robot-axis joint targets q_cmd^(r) for A1–A3 on a KUKA KR5-2 ARC HW, while keeping the wrist orientation (A4–A6) fixed. Rather than targeting full six-DoF manipulation, the main contribution is an experimental characterization of how far monocular 2D posture-to-axis mapping can be used reliably for coarse placement and safeguarded low-speed demonstrations on a legacy robot platform. Vision-side accuracy was evaluated per axis against goniometer-based reference angles θ_ref^(h), showing low errors for A2–A3 within the tested range and larger errors for A1 due to monocular yaw/depth ambiguity and occlusions. The study also analyzes failure modes during simultaneous multi-joint motion, where performance degrades notably, especially for A2 and A3, and reports practical mitigation directions such as improved viewpoints, multi-view/depth sensing, and stricter dropout handling. Runtime behavior is additionally characterized through a loop timing budget, with an end-to-end latency of 185.44 ms and an effective loop frequency of 5.39 Hz, which is consistent with low-speed online operation within the demonstrated scope. The system was implemented in a fenced industrial cell with restricted access and emergency stop; no collaborative operation is claimed. Full article
(This article belongs to the Special Issue Artificial Vision Systems for Robotics)
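The core keypoint-to-angle step that monocular posture tracking relies on is a planar angle at a joint formed by three 2D keypoints. A generic sketch (not the paper's calibration, keypoint model, or axis mapping):

```python
import math

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by 2D points a-b-c,
    e.g. shoulder-elbow-wrist for an elbow flexion estimate."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cos_t))

# Forearm straight up, upper arm straight right: a 90-degree elbow.
theta = joint_angle((0, 1), (0, 0), (1, 0))
```

Because this uses only image-plane coordinates, out-of-plane rotation (yaw) and depth are unobservable, which is exactly the A1 ambiguity the paper reports.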
35 pages, 13759 KB  
Article
BioLAMR: A Biomimetically Inspired Large Language Model Adaptation Framework for Automatic Modulation Recognition
by Yubo Mao, Wei Xu, Jijia Sang and Haoan Liu
Biomimetics 2026, 11(4), 288; https://doi.org/10.3390/biomimetics11040288 - 21 Apr 2026
Abstract
Automatic modulation recognition (AMR) is increasingly relevant to communication-sensing front ends in robotic and human–robot collaborative systems, where reliable spectrum awareness and adaptive wireless reception are desired. However, existing methods often degrade sharply at low signal-to-noise ratios (SNRs), and large language models (LLMs) are not natively compatible with continuous I/Q signals due to the inherent modality gap. We propose BioLAMR, a GPT-2 adaptation framework for AMR inspired by the auditory system’s parallel time–frequency processing and cortical hierarchy. The framework combines bio-inspired dual-domain feature extraction with parameter-efficient LLM adaptation. BioLAMR includes three components. First, a lightweight dual-domain fusion (LDDF) module extracts complementary time- and frequency-domain features and fuses them through channel and spatial attention. Second, a convolutional embedding module converts continuous I/Q signals into GPT-2-compatible sequences without discrete tokenization. Third, a hierarchical fine-tuning strategy updates only 8.9% of parameters to preserve pretrained knowledge while adapting to modulation recognition. Experiments on the RadioML2016.10a and RadioML2016.10b benchmarks show that BioLAMR achieves overall accuracies of 64.99% and 67.43%, outperforming the strongest competing method by 2.60 and 2.47 percentage points, respectively. Under low-SNR conditions, it reaches 36.78% and 38.14%, the best results among the compared methods. Ablation studies verify the contribution of each component. These results demonstrate that combining dual-domain signal modeling with parameter-efficient GPT-2 adaptation is an effective route to robust AMR in challenging wireless environments. Full article
(This article belongs to the Section Locomotion and Bioinspired Robotics)
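The dual-domain idea, extracting complementary time- and frequency-domain views of the same I/Q burst before fusion, can be illustrated with fixed statistics instead of learned features. Everything below (feature choices, bin count) is a toy stand-in for the paper's attention-fused LDDF module:

```python
import numpy as np

def dual_domain_features(iq, n_bins=8):
    """Toy dual-domain feature vector for a complex I/Q burst:
    time-domain amplitude/phase statistics plus coarse FFT-magnitude
    bins. Illustrative only; BioLAMR learns and fuses features with
    channel/spatial attention rather than using fixed statistics."""
    amp = np.abs(iq)
    phase = np.unwrap(np.angle(iq))
    time_feats = [amp.mean(), amp.std(), np.diff(phase).mean()]
    spec = np.abs(np.fft.fft(iq))
    bins = spec.reshape(n_bins, -1).mean(axis=1)   # coarse spectrum
    return np.concatenate([time_feats, bins])

# A noisy complex tone at 0.1 cycles/sample stands in for an I/Q capture.
rng = np.random.default_rng(0)
t = np.arange(64)
sig = np.exp(2j * np.pi * 0.1 * t) + 0.05 * rng.standard_normal(64)
feats = dual_domain_features(sig)
```

The mean phase increment approximates the carrier offset while the spectral bins capture bandwidth and symmetry, the kind of complementary cues a modulation classifier exploits.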
21 pages, 13356 KB  
Article
In Situ Casting Integrated with FDM 3D Printing: Curing Behavior, Process Constraints, and Mechanical Demonstration
by Supatpromrungsee Saetia, Pimolkan Piankitrungreang and Ratchatin Chancharoen
Polymers 2026, 18(8), 1003; https://doi.org/10.3390/polym18081003 - 21 Apr 2026
Abstract
Dispensing-based in situ casting offers a practical route for introducing dense or mechanically distinct polymer regions into fused deposition modeling (FDM) parts during fabrication. This study investigates the curing-dependent process constraints governing stable integration of in situ casting within an FDM workflow. In the proposed process, FDM is used to fabricate thermoplastic confinement geometries, after which liquid polymer is dispensed into selected cavities and cured before printing resumes. Two representative curing systems were examined: a UV-curable photopolymer and a two-component epoxy resin. The experimental program included UV curing characterization under perpendicular 405 nm exposure, infrared thermal imaging of curing-induced heat generation and dissipation, confined curing of epoxy resin, layer-wise integration within an FDM-printed cavity, and a representative mechanical linkage demonstration. The results show that UV-based in situ casting is constrained by the coupled effects of curing depth, peak temperature, and visible deformation, making staged curing with intermediate thermal relaxation necessary for stable operation. In contrast, the epoxy system enabled bulk cavity filling with lower peak temperature, but required substantially longer curing time, introducing a different process limitation. A layer-wise UV curing strategy enabled successful stacking of four cast layers within an FDM-printed confinement without visible leakage or shell collapse. Mechanical testing of hybrid linkage specimens further showed that localized casting can modify structural stiffness through selective reinforcement. These findings demonstrate that dispensing-based in situ casting can be integrated into FDM when thermal, temporal, and filling constraints are explicitly managed, and they provide practical process guidance for hybrid polymer fabrication involving confined casting during printing. Full article
47 pages, 10208 KB  
Article
Graph-Based Task Allocation for Multi-Agent Fleet Management: A Genetic Algorithm Approach with LLM Integration
by Beril Yalcinkaya, Micael S. Couceiro, Salviano Soares and António Valente
Appl. Sci. 2026, 16(8), 3851; https://doi.org/10.3390/app16083851 - 15 Apr 2026
Abstract
Efficient task allocation and coordination are critical for heterogeneous multi-agent systems operating in dynamic field environments. This paper presents a closed-loop framework that integrates Large Language Models (LLMs) with graph-based optimisation to enable end-to-end task decomposition, allocation, and adaptive execution. High-level task scripts are initially parsed by an LLM into structured execution flows, which are transformed into Directed Acyclic Graphs (DAGs) capturing action-level dependencies. A Genetic Algorithm (GA) then optimises agent-to-task assignments by minimising makespan under capability and battery constraints. To ensure robustness, the framework incorporates an LLM-driven recovery module that enables localised replanning under execution failures without interrupting unaffected agents. System-level experiments in a high-fidelity agroforestry simulation demonstrate a 37% increase (p<0.001) in harvesting productivity and a 19% reduction in human idle time compared to manual baselines. Under mid-execution failures, the system maintains significantly higher performance, with replanning latencies averaging 24 s. The framework scales to large fleets (up to 1000 agents) and effectively enhances human–robot collaboration through structured, dependency-aware coordination. Full article
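The GA-over-DAG step can be sketched at toy scale: a chromosome assigns each task to an agent, and fitness is the makespan from list scheduling in topological order. Task data, operators, and rates below are invented; the paper's encoding also handles capability and battery constraints.

```python
import random

# Toy GA for agent-to-task assignment on a DAG (illustrative only).
TASKS = {"a": 3, "b": 2, "c": 4, "d": 1}           # durations
DEPS  = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
ORDER = ["a", "b", "c", "d"]                        # topological order
AGENTS = 2

def makespan(assign):
    free = [0.0] * AGENTS            # when each agent is next available
    done = {}                        # task finish times
    for t in ORDER:
        ag = assign[t]
        start = max([free[ag]] + [done[d] for d in DEPS[t]])
        done[t] = start + TASKS[t]
        free[ag] = done[t]
    return max(done.values())

def ga(pop=30, gens=40, seed=1):
    rng = random.Random(seed)
    P = [{t: rng.randrange(AGENTS) for t in ORDER} for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=makespan)         # elitist selection on makespan
        elite = P[: pop // 2]
        children = []
        for _ in range(pop - len(elite)):
            p1, p2 = rng.sample(elite, 2)
            child = {t: rng.choice((p1[t], p2[t])) for t in ORDER}
            if rng.random() < 0.2:   # point mutation
                t = rng.choice(ORDER)
                child[t] = rng.randrange(AGENTS)
            children.append(child)
        P = elite + children
    return min(P, key=makespan)

best = ga()
```

Here the optimum (makespan 8) requires putting b and c on different agents so they run in parallel between a and d, which the GA finds quickly on this instance.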
35 pages, 57348 KB  
Article
A Target-Oriented Shared-Control Framework for Adaptive Spatial and Kinematic Support in Mixed Reality Teleoperation
by Soma Okamoto and Kosuke Sekiyama
Electronics 2026, 15(8), 1653; https://doi.org/10.3390/electronics15081653 - 15 Apr 2026
Abstract
Mixed Reality (MR) teleoperation offers an intuitive interface for Human-Robot Collaboration (HRC), yet it often faces the “Embodiment Gap”—a physical and kinematic mismatch between human operators and robotic platforms. Existing MR systems primarily rely on a “direct mapping” approach, where user movements are transferred directly to the robot. This forces operators to manually adapt to robotic constraints, such as singularities and joint limits, making task performance heavily dependent on individual skill. This study proposes Mixed reality Adaptive Spatial and Kinematic support (MASK), an adaptive shared-control framework designed to bridge the “Gulf of Execution” and “Gulf of Evaluation” by separating target selection from reachability and kinematic feasibility. The MASK system integrates three core modules: (1) Target Object Identification (TOI) based on body motion features to identify the intended manipulation target; (2) a Base Relocation Module (BRI) utilizing Inverse Reachability Maps to optimize the robot’s spatial configuration; and (3) a Kinematic Correction Module (KCM) that autonomously resolves kinematic constraints through pose blending and null-space optimization. Initial experimental results suggest that MASK reduces the operator’s cognitive and physical load by shifting the burden of kinematic resolution from the human to the system. This approach enables high-precision manipulation through an intuitive interface, potentially reducing the performance gap between different levels of operator proficiency. Full article
(This article belongs to the Special Issue Artificial Intelligence for Cyber-Physical Systems)
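The pose-blending step in the KCM can be pictured as interpolating a user-commanded joint pose toward a feasible one and clamping to joint limits. The function name, blend rule, and numbers below are assumptions, a toy stand-in for the paper's null-space optimization:

```python
def blend_pose(q_user, q_safe, limits, alpha):
    """Blend user-commanded joint angles toward a kinematically
    feasible pose, then clamp each joint to its limits. A toy
    version of pose blending; the blend rule is an assumption."""
    blended = []
    for qu, qs, (lo, hi) in zip(q_user, q_safe, limits):
        q = (1.0 - alpha) * qu + alpha * qs
        blended.append(min(hi, max(lo, q)))
    return blended

# Push 70% of the way toward the feasible pose, clamped to limits.
q = blend_pose([2.0, -1.5], [1.0, -0.5], [(-1.8, 1.8), (-1.0, 1.0)], 0.7)
```

Raising alpha shifts authority from the operator to the system, which is the shared-control trade-off MASK adapts online.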
28 pages, 3548 KB  
Article
Edge Computing Approach to AI-Based Gesture for Human–Robot Interaction and Control
by Nikola Ivačko, Ivan Ćirić and Miloš Simonović
Computers 2026, 15(4), 241; https://doi.org/10.3390/computers15040241 - 14 Apr 2026
Abstract
This paper presents an edge-deployable vision-based framework for human–robot interaction using a xArm collaborative robot and a single RGB camera mounted on the robot wrist, and lightweight AI-based perception modules. The system enables intuitive, contact-free control by combining hand understanding and object detection within a unified perception–decision–control pipeline. Hand landmarks are extracted using MediaPipe Hands, from which continuous hand trajectories, static gestures, and dynamic gestures are derived. Task objects are detected using a YOLO-based model, and both hand and object observations are mapped into the robot workspace using ArUco-based planar calibration. To ensure stable robot motion, the hand control signal is smoothed using low-pass and Kalman filtering, while dynamic gestures such as waving are recognized using a lightweight LSTM classifier. The complete pipeline runs locally on edge hardware, specifically NVIDIA Jetson Orin Nano and Raspberry Pi 5 with a Hailo AI accelerator. Experimental evaluation includes trajectory stability, gesture recognition reliability, and runtime performance on both platforms. Results show that filtering significantly reduces hand-tracking jitter, gesture recognition provides stable command states for control, and both edge devices support real-time operation, with Jetson achieving consistently lower runtime than Raspberry Pi. The proposed system demonstrates the feasibility of low-cost edge AI solutions for responsive and practical human–robot interaction in collaborative industrial environments. Full article
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
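The Kalman smoothing of the hand control signal can be sketched per coordinate with a constant-velocity model. The noise parameters and frame rate below are assumptions, not the paper's tuning:

```python
# Minimal constant-velocity Kalman filter for one hand coordinate
# (state = [position, velocity], measurement = noisy pixel position).

def kalman_smooth(zs, dt=1 / 30, q=50.0, r=4.0):
    x, v = zs[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]     # state covariance
    out = []
    for z in zs:
        # Predict with F = [[1, dt], [0, 1]] and process noise q*dt.
        x += v * dt
        P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + q * dt
        # Update with the pixel measurement z (H = [1, 0]).
        S = P00 + r
        K0, K1 = P00 / S, P10 / S
        y = z - x
        x += K0 * y
        v += K1 * y
        P = [[(1 - K0) * P00, (1 - K0) * P01],
             [P10 - K1 * P00, P11 - K1 * P01]]
        out.append(x)
    return out

# Alternating detections 6 px apart stand in for landmark jitter.
noisy = [100 + (i % 2) * 6 for i in range(60)]
smooth = kalman_smooth(noisy)
```

The filtered track settles near the midpoint with much less than the 6 px input jitter, the kind of attenuation that keeps the commanded robot motion stable.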
27 pages, 2982 KB  
Review
Intelligent Algorithms for Prefabricated Concrete Component Production Scheduling: A Bibliometric Review of Trends, Collaboration Networks, and Emerging Frontiers
by Yizhi Yang and Tao Zhou
Buildings 2026, 16(8), 1523; https://doi.org/10.3390/buildings16081523 - 13 Apr 2026
Abstract
Precast concrete (PC) component production scheduling is essential to the efficiency and reliability of industrialized construction. Although intelligent algorithms have been widely applied in this field, the relationships among research evolution, collaboration patterns, and industrial applicability remain insufficiently understood. To address this issue, this study presents a bibliometric review of 1272 publications indexed in the Web of Science Core Collection from 1990 to 2025. CiteSpace was employed to analyze publication trends, collaboration networks, co-citation structures, keyword co-occurrence, and burst terms. On this basis, a technology adaptability evaluation framework was developed to assess the alignment between algorithmic advances and industrial implementation in terms of dynamic adaptability, verification completeness, and technological generation gap. The results indicate that the field has evolved through four broad stages, from early static optimization to multi-objective coordination, digital twin-enabled dynamic scheduling, and emerging human-centric intelligent autonomous systems. The analysis also shows an increasing convergence of operations research, computer science, and civil engineering. However, a gap remains between academic output and industrial application. Specifically, 32% of the retrieved studies focused on genetic algorithms, whereas only 6% reported full-process industrial validation. In addition, Gen 4.0-related studies showed a technological generation gap of 82.5%, indicating that many frontier technologies have not yet reached broad industrial implementation. The collaboration network further reveals a “high-output, low-synergy” pattern, in which major publishing countries contribute substantially to the literature but exhibit limited cross-institutional integration. This study provides a structured overview of the development of PC component production scheduling research and highlights future directions for digital twin integration, human–robot collaboration, and cross-sector validation platforms. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
19 pages, 3273 KB  
Article
A Comprehensive Analysis of Human–Machine Interaction: Teaching Pendant vs. Gesture Control in Industrial Robotics
by Robert Kristof, Valentin Ciupe, Erwin-Christian Lovasz and Ghadeer Ismael
Actuators 2026, 15(4), 210; https://doi.org/10.3390/act15040210 - 8 Apr 2026
Abstract
In collaborative robotics, efficiency and user experience play a central role. This study looks at how perceived performance differs from measured performance when comparing two ways of controlling industrial robots: traditional teaching pendants and wearable EMG-based gesture control. A Myo Armband was used as an accessible 8-channel EMG platform, and three experiments were carried out on a Universal Robots UR10e to test pick-and-place tasks and precision positioning. Time and accuracy data were gathered together with blind feedback from 13 participants through a multi-criteria analysis framework. Even though the teaching pendant turned out to be more accurate in every scenario, 85% of participants still rated gesture control higher in overall satisfaction. These results point to a notable gap between what users perceive and how they actually perform and suggest that user experience deserves more weight in the design of future robot control interfaces. Full article
(This article belongs to the Special Issue Actuation and Sensing of Intelligent Soft Robots—2nd Edition)
15 pages, 1148 KB  
Article
Collaborative Robotic Systems for Pre-Analytical Processing of Biological Specimens in a Medical Laboratory
by Andrey G. Komarov, Pavel O. Bochkov, Arkadiy S. Goldberg, Vasiliy G. Akimkin and Pavel P. Tregub
Diagnostics 2026, 16(7), 1093; https://doi.org/10.3390/diagnostics16071093 - 4 Apr 2026
Abstract
Background/Objectives: The increasing volume of laboratory testing and the tightening of quality standards have rendered automation tasks in medical laboratories highly relevant. Conventional total laboratory automation (TLA) systems demonstrate high throughput; however, their economic and organizational efficiency is often constrained by their complex integration and substantial implementation costs. In this context, collaborative robots (cobots) are attracting increasing attention due to their ability to perform pre-analytical and logistical tasks in close association with laboratory personnel. The objective of the present study was the systematic integration of commercially available cobots into the pre-analytical workflow of a large centralized laboratory. Methods: The implemented system incorporated a set of specialized modules, including decapping, barcode orientation and verification, analyzer loading, aliquoting, and specimen sorting, with bidirectional integration into the Laboratory Information System (LIS). The architectural design, control algorithms, and primary effects on labor input and operational turnaround time were evaluated. Results: The results demonstrated that the implementation of cobots into laboratory processes led to an 87% reduction in labor input, a 3.4% improvement in liquid aliquoting accuracy, and an overall improvement in nominal throughput, while requiring minimal personnel training. However, human operators performed the aliquoting procedure significantly faster than cobots, with an average speed advantage of 42.5%. Conclusions: The use of collaborative robotic systems in centralized medical laboratories appears promising due to their operational efficiency and flexibility compared to conventional automation platforms and manual workflows. The effect of the use of cobots on the quality/accuracy of the tests needs to be evaluated, and perhaps a larger study of multiple laboratories needs to be conducted to confirm the results are generalizable. Full article
20 pages, 6648 KB  
Article
Sensorless Collision Detection and Classification in Collaborative Robots Using Stacked GRU Networks
by Jong Hyeok Lee, Minjae Hong and Kyu Min Park
Actuators 2026, 15(4), 206; https://doi.org/10.3390/act15040206 - 4 Apr 2026
Abstract
The increasing deployment of collaborative robots in industrial manufacturing environments has enabled close human–robot collaboration, making rapid and reliable collision detection essential for worker safety. This paper presents a learning-based framework for real-time detection and classification of hard and soft collisions using stacked Gated Recurrent Unit (GRU) networks. A two-stage pipeline is introduced, in which collision detection and collision type classification are performed sequentially using separate models, and its performance is validated through extensive experiments on a collision dataset collected from a six-joint collaborative robot executing random point-to-point motions. Without requiring joint torque sensors, unmodeled joint friction is implicitly compensated through learning for both detection and classification. Compared to our previous work, the proposed method achieves improved detection performance, and its robustness is further demonstrated through systematic generalization experiments under simulated dynamic model uncertainties. In addition, the classification model accurately distinguishes between hard and soft collisions, providing a basis for differentiated post-collision reaction strategies. Overall, the proposed sensorless collision detection and classification framework provides a practical and cost-effective solution for real-world industrial human–robot collaboration. Full article
(This article belongs to the Special Issue Machine Learning for Actuation and Control in Robotic Joint Systems)
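The recurrence underlying a stacked-GRU detector is the standard GRU cell update. A single-cell forward pass in NumPy (weights are random placeholders; the paper trains such cells on collision datasets from a six-joint robot):

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU cell update: gated blend of the old hidden state
    with a candidate state computed from the current input."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)                 # update gate
    r = sig(Wr @ x + Ur @ h)                 # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
n_in, n_h = 6, 8          # e.g. 6 joint signals, 8 hidden units
W = [rng.standard_normal((n_h, d)) * 0.1
     for d in (n_in, n_h, n_in, n_h, n_in, n_h)]
h = np.zeros(n_h)
for _ in range(20):       # run over a short window of joint signals
    x = rng.standard_normal(n_in)
    h = gru_step(x, h, *W)
```

A detection head would then threshold a linear readout of `h`; stacking means feeding each cell's hidden sequence to the next cell as input.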
20 pages, 895 KB  
Article
The Dual-Edged Sword Effect of Human–Robot Collaboration on Migrant Workers’ Well-Being: Evidence from China
by Ruonan Wang and Guangsheng Zhang
Behav. Sci. 2026, 16(4), 526; https://doi.org/10.3390/bs16040526 - 1 Apr 2026
Abstract
Migrant workers are a unique group under China’s urban–rural dual system, and improving their well-being is an intrinsic requirement for achieving common prosperity. In recent years, human–robot collaboration has emerged as a novel work paradigm. Understanding how human–robot collaboration influences the well-being of migrant workers is a vital yet insufficiently investigated question. With the conservation of resources theory as an analytical framework, this study empirically analyzes data from a two-stage survey of 382 migrant workers in Chinese manufacturing enterprises. The findings suggest that perceived human–robot collaboration can positively influence migrant workers’ well-being by facilitating perceived decent work, while also negatively affecting their well-being by increasing job replacement anxiety. Perceived organizational support plays a positive moderating role in two distinct aspects: on the one hand, the relationship between perceived human–robot collaboration and perceived decent work; on the other hand, the mediating effect through which perceived decent work connects perceived human–robot collaboration with the well-being of migrant workers. Conversely, perceived organizational support exerts a negative moderating effect on the association between perceived human–robot collaboration and job replacement anxiety, while job replacement anxiety functions as a mediator in the relationship linking perceived human–robot collaboration to migrant workers’ well-being. This study advances understanding of how human–robot collaboration in manufacturing enterprises affects the well-being of migrant workers. Full article
19 pages, 1725 KB  
Review
A Comprehensive Narrative Review of Abrupt Movements in Human–Robot Interaction
by Greta Di Vincenzo, Elisa Digo, Valerio Cornagliotto, Laura Gastaldi and Stefano Pastorelli
Appl. Sci. 2026, 16(7), 3350; https://doi.org/10.3390/app16073350 - 30 Mar 2026
Abstract
Human–robot interaction (HRI) takes place in dynamic environments where both humans and robots act as active agents, making the system inherently unpredictable. Abrupt movements can originate from either side and include human reflexes, fatigue, or unexpected reactions, as well as robot malfunctions, control errors, or task changes. These unpredictable events generate significant risks for both interaction fluency and safety, affecting not only the physical domain (e.g., collisions, excessive forces) but also cognitive aspects such as trust and predictability. Although different application areas present domain-specific challenges, a comprehensive overview of abrupt movements in HRI is still lacking, especially in the industrial scenario. This review aims to consolidate current knowledge regarding how abrupt phenomena are analyzed, prevented, and mitigated across various contexts and to offer new insights for researchers. In detail, after describing the literature search and the screening process, the review categorizes abrupt events, highlights key methodological approaches, and identifies gaps and future directions. By providing a structured synthesis of existing strategies, this work guides researchers in developing safer and more adaptive HRI frameworks capable of handling unpredictability. Full article
(This article belongs to the Special Issue Latest Advances and Prospects of Human-Robot Interaction (HRI))
21 pages, 2794 KB  
Article
Enhancing Trust in Collaborative Assembly Through Resilient Adversarial Reinforcement Learning
by Dario Antonelli, Khurshid Aliev and Bo Yang
Appl. Sci. 2026, 16(7), 3244; https://doi.org/10.3390/app16073244 - 27 Mar 2026
Abstract
Collaborative robots, or cobots, are designed to improve productivity and safety in industrial settings. However, effective Human–Robot Collaboration (HRC) relies heavily on the human operator’s trust in the robotic partner. This study posits that trust is significantly enhanced by the robot’s ability to adapt to unpredictable human behavior. To achieve this adaptability, we propose applying an Adversarial Reinforcement Learning (ARL) framework to the robot’s activity planning. We model the assembly process as a Markov Decision Process (MDP) on a Directed Acyclic Graph (DAG). The robot learns an assembly policy using an on-policy algorithm while a simulated human agent, trained with the same algorithm, acts as an adversary that introduces disturbances and delays. We applied the proposed approach to a simple industrial case study and evaluated it on complex assembly sequences generated synthetically. Although the ARL-trained robot did not outperform conventional assembly optimization algorithms in terms of task completion time, it guaranteed robustness against human variability. This ensured task completion within a bounded timeframe regardless of human actions. By demonstrating consistent performance and adaptability in the face of uncertainty, the robot exhibits the Ability and Benevolence components of the ABI model of trust. This fosters a more resilient and trustworthy collaborative environment. Full article
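The adversarial training loop can be shrunk to a matrix game for intuition: the robot picks a motion style, a simulated human picks a disturbance, and both learn from the resulting task time. The costs, action names, and independent Q-updates below are invented; the paper trains both agents on-policy over a DAG-based MDP.

```python
import random

# Toy adversarial RL sketch: robot minimizes task time, the simulated
# human adversary maximizes it. Illustrative only.
COST = {("fast", "calm"): 1.0, ("fast", "disturb"): 5.0,
        ("slow", "calm"): 2.0, ("slow", "disturb"): 2.0}

def train(episodes=3000, eps=0.2, alpha=0.1, seed=0):
    rng = random.Random(seed)
    Qr = {"fast": 0.0, "slow": 0.0}      # robot: lower cost is better
    Qh = {"calm": 0.0, "disturb": 0.0}   # adversary: higher cost is better
    for _ in range(episodes):
        ar = (rng.choice(list(Qr)) if rng.random() < eps
              else min(Qr, key=Qr.get))
        ah = (rng.choice(list(Qh)) if rng.random() < eps
              else max(Qh, key=Qh.get))
        c = COST[(ar, ah)]
        # Running-average updates; not full minimax backups, but enough
        # for the robot to learn the worst-case-robust choice here.
        Qr[ar] += alpha * (c - Qr[ar])
        Qh[ah] += alpha * (c - Qh[ah])
    return Qr, Qh

Qr, Qh = train()
robust = min(Qr, key=Qr.get)
```

As in the paper's finding, the robust policy ("slow" here) is not the fastest against a cooperative partner, but it bounds completion time regardless of what the adversary does.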
34 pages, 1413 KB  
Systematic Review
A Systematic Review of Safety-Driven Approaches in Human–Robot Collaborative Systems
by Akhtar Khan, Maaz Akhtar, Sheheryar Mohsin Qureshi, Muzzamil Mustafa, Naser A. Alsaleh and Imran Ahmad
Sensors 2026, 26(7), 2079; https://doi.org/10.3390/s26072079 - 27 Mar 2026
Abstract
Collaboration between humans and robots (HRC) is advancing rapidly due to the intersection of robotics and generative artificial intelligence (GenAI). The current paper includes a systematic review of 103 studies on the role of generative models, including Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), diffusion models, and Large Language Models (LLMs) in improving the safety, trust, and adaptability of collaborative robotics using a PRISMA-based systematic approach. The review recognizes four major themed areas of GenAI-based safety frameworks—namely, data-driven simulation to synthesize hazards, predictive reasoning to forecast human motion, adaptive control to reduce risks dynamically, and trust-aware cognition to explain human–robot interaction. Findings indicate that generative models transform robotic safety from a reactive mechanism to proactive, contextual and interpretable systems. Nevertheless, real-time performance, interpretability, standard benchmarking, and ethical assurance are still some of the challenges to be overcome. The paper proposes a taxonomy linking generative modeling layers and physical, cognitive and ethical aspects of HRC safety, and gives a roadmap of certifiable hybrid systems with generative foresight and deterministic control. This synthesis provides a foundation for developing transparent, adaptive, and trustworthy collaborative robotic systems. Full article
(This article belongs to the Special Issue Feature Review Papers in Sensors and Robotics)