Search Results (56)

Search Parameters:
Keywords = HRC task

24 pages, 776 KB  
Article
How to Facilitate Safety Performance of Human–Robot Collaboration in High-Rise Construction Scenarios: An Empirical Model
by Peng Lin, Guangchong Chen, Ningshuang Zeng and Qiming Li
Systems 2025, 13(11), 1000; https://doi.org/10.3390/systems13111000 - 7 Nov 2025
Abstract
Despite the growing use of collaborative robots in high-rise construction, ensuring safe human–robot collaboration (HRC) in hazardous environments remains a critical challenge. Addressing the gap left by previous studies, which optimized human, robot, or task factors in isolation without a systemic coordination perspective, this study develops and empirically validates a Human–Task–Robot Alignment (HTRA) framework to explain how alignment mechanisms enhance safety performance in construction HRC. Data from 84 high-rise HRC projects were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results reveal that HTRA serves as a pivotal mechanism for safety improvement: Human–Robot Alignment (HRA) and Task–Human Alignment (THA) exert stronger effects on bilateral safety perception than Task–Robot Alignment (TRA), underscoring the centrality of human factors at the current stage of HRC development. Moreover, this study identifies a sequential cognitive–behavioral path from safety perception to safety intention and safety behavior performance, explaining how alignment enhances safety performance. Standardized HRC guidelines further strengthen HRA, facilitating safer and more efficient collaboration. This study extends Task–Technology Fit theory to a triadic human–task–robot context and advances the concept of bilateral safety, providing theoretical and managerial guidance for developing next-generation safe collaboration systems in construction.
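The path-model logic behind this kind of analysis can be illustrated with a toy computation. The sketch below standardizes indicators into composite construct scores and estimates one structural path by ordinary least squares; this is a simplified stand-in for PLS-SEM (which estimates indicator weights iteratively), and the data and equal weighting are invented assumptions, with only the construct names and n = 84 taken from the abstract.

```python
import numpy as np

def composite(indicators):
    """Standardize indicators and average them into a construct score
    (equal weights; real PLS-SEM estimates the weights iteratively)."""
    z = (indicators - indicators.mean(0)) / indicators.std(0)
    return z.mean(axis=1)

rng = np.random.default_rng(0)                 # simulated data, n = 84 projects
hra = composite(rng.normal(size=(84, 3)))      # Human-Robot Alignment items
tha = composite(rng.normal(size=(84, 3)))      # Task-Human Alignment items
perception = composite(rng.normal(size=(84, 4)))

# Structural path: safety perception regressed on the alignment constructs.
X = np.column_stack([np.ones(84), hra, tha])
beta, *_ = np.linalg.lstsq(X, perception, rcond=None)
print(dict(zip(["intercept", "HRA", "THA"], beta.round(3))))
```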

51 pages, 1430 KB  
Article
The Effect of Critical Factors on Team Performance of Human–Robot Collaboration in Construction Projects: A PLS-SEM Approach
by Guodong Zhang, Xiaowei Luo, Wei Li, Lei Zhang and Qiming Li
Buildings 2025, 15(20), 3685; https://doi.org/10.3390/buildings15203685 - 13 Oct 2025
Viewed by 776
Abstract
Human–Robot Collaboration (HRC) in construction projects promises enhanced productivity, safety, and quality, yet realizing these benefits requires understanding the multifaceted human and robotic factors that influence team performance. This study develops and validates a multidimensional framework that links key human abilities (operational skill, decision-making ability, and learning ability) and robot capacities (functionality and operability) to HRC team performance, with task complexity considered as a contextual influence. A field survey of construction practitioners (n = 548) was analyzed using partial least squares structural equation modeling (PLS-SEM) to test direct effects and human–robot synergies. Results reveal that all five main effects are positive and significant, indicating that both human abilities and robot capacities contribute significantly. Moreover, every hypothesized two-way interaction is supported, evidencing strong interaction effects. Three-way moderation analyses further reveal that task complexity significantly strengthened the interactions of human abilities with robot functionality, whereas its interactions with robot operability were not significant. The study contributes an integrated, theory-driven model of HRC team performance that accounts for human abilities and robot capacities under varying task complexity, along with validated constructs that can be used to diagnose and predict performance. The findings offer actionable guidance for project managers: prioritize user-friendly robot operability to translate worker expertise into performance across a wide range of tasks, invest in training to strengthen operators' skills and decision-making, and, for complex tasks, pair highly skilled workers with high-functionality robots to maximize performance gains.
(This article belongs to the Section Construction Management, and Computers & Digitization)
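The two- and three-way effects reported above follow the standard product-term construction. A minimal sketch of how such moderation terms are tested on simulated data follows; variable names, coefficients, and the plain-OLS setup are illustrative assumptions (the paper tests latent constructs within PLS-SEM), with only n = 548 taken from the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 548                                   # matches the paper's sample size
skill, functionality, complexity = rng.normal(size=(3, n))
y = (0.4 * skill + 0.3 * functionality + 0.2 * skill * functionality
     + 0.1 * skill * functionality * complexity
     + rng.normal(scale=0.5, size=n))

X = sm.add_constant(np.column_stack([
    skill, functionality, complexity,
    skill * functionality,                # two-way interaction (synergy)
    skill * functionality * complexity,   # three-way moderation by complexity
]))
print(sm.OLS(y, X).fit().summary())
```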

18 pages, 2542 KB  
Article
A Two-Stage MLP-LSTM Network-Based Task Planning Method for Human–Robot Collaborative Assembly Scenarios
by Zhenyu Pan and Weiming Wang
Appl. Sci. 2025, 15(20), 10922; https://doi.org/10.3390/app152010922 - 11 Oct 2025
Viewed by 297
Abstract
In many assembly scenarios, effective collaboration between humans and robots can improve efficiency and quality. However, arranging human–robot collaborative (HRC) tasks efficiently remains a significant challenge. In a collaborative workspace where humans and robots assemble a shared product, determining the task allocation between them is of crucial importance. To address this issue, offline feasible HRC paths are established based on assembly task constraint information. The HRC process is then simulated within a virtual environment using these feasible paths. Human assembly intentions are expressed explicitly through human assembly trajectories and implicitly through simulation results such as assembly time and human–robot resource allocation. Furthermore, a two-stage MLP-LSTM network is trained on the assembly simulation database. In the first stage, a sequence generation model is trained on high-quality HRC processes; in the second, the network learns human evaluation patterns to score the generated sequences. Task allocation for HRC is ultimately performed based on the high-scoring generated sequences. The effectiveness of the proposed method is demonstrated through assembly scenarios for two products. Compared with traditional optimization methods such as depth-first search (DFS) and greedy search, the human collaboration ratio improves by 10% and the collaborative quality evaluation by 3%.
(This article belongs to the Section Mechanical Engineering)
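The generate-then-score structure described above fits in a few dozen lines. The sketch below is a hedged reconstruction from the abstract alone: layer sizes, feature dimensions, and the pooling choice are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SequenceGenerator(nn.Module):
    """Stage 1: an MLP encodes per-step task features; an LSTM unrolls them
    into logits over the next assembly action (human or robot operation)."""
    def __init__(self, feat_dim=16, hidden=64, n_actions=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, task_feats):              # (B, T, feat_dim)
        h, _ = self.lstm(self.encoder(task_feats))
        return self.head(h)                     # (B, T, n_actions) logits

class SequenceScorer(nn.Module):
    """Stage 2: learns human evaluation patterns, mapping a whole candidate
    sequence to a scalar quality score used to rank allocations."""
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, seq_feats):               # (B, T, feat_dim)
        _, (h_n, _) = self.lstm(seq_feats)
        return self.score(h_n[-1]).squeeze(-1)  # (B,) scores
```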

22 pages, 3918 KB  
Article
Evaluating Mental Workload and Productivity in Manufacturing: A Neuroergonomic Study of Human–Robot Collaboration Scenarios
by Carlo Caiazzo, Marko Djapan, Marija Savkovic, Djordje Milojevic, Arso Vukicevic and Luca Gualtieri
Machines 2025, 13(9), 783; https://doi.org/10.3390/machines13090783 - 1 Sep 2025
Viewed by 1248
Abstract
The field of human–robot collaboration (HRC) still lacks studies that evaluate mental workload (MWL) through objective measurement to assess the mental state of operators in assembly tasks. This study presents a comparative neuroergonomic analysis of mental workload and productivity across three laboratory conditions: in the first, the participant assembles a component without robot intervention (standard scenario); in the second, the participant performs the same activity in collaboration with the robot (collaborative scenario); in the third, the participant is fully guided through the task in collaboration with the robot (collaborative guided scenario) via a system of guiding labels based on Poka-Yoke principles. Participants' mental workload was assessed through a combined analysis of subjective (NASA-TLX) and objective (electroencephalogram, EEG) measures. Objective MWL was computed as the power ratio β/α (beta: stress indicator; alpha: relaxation indicator). The study also used observational measurements to calculate a productivity index in terms of accurately assembled components across the three scenarios. Repeated-measures ANOVA showed that mental workload decreased significantly in the activities involving the cobot. Productivity also increased when shifting from the manual scenario to the cobot-assisted one (18.4%) and to the collaborative guided scenario supported by Poka-Yoke principles (33.87%).
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
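The β/α index is straightforward to compute from an EEG trace. A minimal sketch using Welch's method follows; the sampling rate, band edges, and window length are illustrative assumptions, and the paper's preprocessing pipeline is not reproduced.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(x, fs, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=int(fs * 2))   # 2-second windows
    mask = (f >= lo) & (f < hi)
    return trapezoid(pxx[mask], f[mask])            # integrate PSD over band

def beta_alpha_ratio(eeg, fs=256.0):
    alpha = band_power(eeg, fs, 8.0, 13.0)          # alpha: relaxation indicator
    beta = band_power(eeg, fs, 13.0, 30.0)          # beta: stress indicator
    return beta / alpha

# Example on synthetic data: one minute of noise sampled at 256 Hz.
print(beta_alpha_ratio(np.random.default_rng(0).normal(size=256 * 60)))
```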

28 pages, 1856 KB  
Article
Trust-Based Modular Cyber–Physical–Human Robotic System for Collaborative Manufacturing: Modulating Communications
by S. M. Mizanoor Rahman
Machines 2025, 13(8), 731; https://doi.org/10.3390/machines13080731 - 17 Aug 2025
Viewed by 562
Abstract
The objective was to propose a human–robot bidirectional trust-triggered cyber–physical–human (CPH) system framework for human–robot collaborative assembly in flexible manufacturing and to investigate how modulating communications in the CPH system affects system performance and human–robot interaction (HRI). As the research method, we developed a one human–one robot hybrid cell where a human and a robot collaborated to assemble different manufacturing components in a flexible manufacturing setup. We configured the human–robot collaborative system as three interconnected components of a CPH system: (i) a cyber system, (ii) a physical system, and (iii) a human system. We divided the functions of the CPH system into three interconnected modules: (i) communication, (ii) computing or computation, and (iii) control. We derived a model to compute the human's and robot's bidirectional trust in each other in real time. We implemented the trust-triggered CPH framework on the human–robot collaborative assembly setup and modulated the communication methods among the cyber, physical, and human components in different innovative ways in three separate experiments. The results show that modulating the communication methods triggered by bidirectional trust affects the effectiveness of the CPH system in terms of human–robot interaction and task performance (efficiency and quality) in different ways, and that communication methods combining an appropriately higher number of communication modes (cues) produce better HRI and task performance. Based on a comparative study, the results demonstrate the efficacy and superiority of configuring the HRC system as a modular CPH system over conventional HRC systems in terms of HRI and task performance. Configuring human–robot collaborative systems as CPH systems can transform the design, development, analysis, and control of such systems and enhance their scope, ease, and effectiveness for applications such as industrial manufacturing, construction, transport and logistics, and forestry.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
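The bidirectional trust computation is the trigger of this framework. The paper derives its own model, so the linear performance-driven update below, a common form in the HRI trust literature, is offered purely as a hedged illustration with invented weights.

```python
import numpy as np

def update_trust(trust, performance, faults, a=0.7, b=0.25, c=0.15):
    """new trust = inertia * old trust + gain * recent performance
    - penalty * faults, clipped to [0, 1]; a, b, c are assumed weights."""
    return float(np.clip(a * trust + b * performance - c * faults, 0.0, 1.0))

# Both directions are updated each cycle from the partner's observed behavior.
human_trust_in_robot, robot_trust_in_human = 0.5, 0.5
robot_perf, human_perf, robot_faults, human_faults = 0.9, 0.8, 0, 1
human_trust_in_robot = update_trust(human_trust_in_robot, robot_perf, robot_faults)
robot_trust_in_human = update_trust(robot_trust_in_human, human_perf, human_faults)
```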

32 pages, 2341 KB  
Review
Human and Multi-Robot Collaboration in Indoor Environments: A Review of Methods and Application Potential for Indoor Construction Sites
by Francis Xavier Duorinaah, Mathanraj Rajendran, Tae Wan Kim, Jung In Kim, Seulbi Lee, Seulki Lee and Min-Koo Kim
Buildings 2025, 15(15), 2794; https://doi.org/10.3390/buildings15152794 - 7 Aug 2025
Cited by 1 | Viewed by 2122
Abstract
The integration of robotic agents into complex indoor construction environments is increasing, particularly through human–robot collaboration (HRC) and multi-robot collaboration (MRC). These collaborative frameworks hold great potential to enhance productivity and safety. However, indoor construction environments present unique challenges, such as dynamic layouts, constrained spaces, and variable lighting conditions, which complicate the safe and effective deployment of collaborative robot teams. Existing studies have primarily addressed various HRC and MRC challenges in manufacturing, logistics, and outdoor construction, with limited attention given to indoor construction settings. To this end, this review presents a comprehensive analysis of human–robot and multi-robot collaboration methods within various indoor domains and critically evaluates the potential of adopting these methods for indoor construction. This review presents three key contributions: (1) it provides a structured evaluation of current human–robot interaction techniques and safety-enhancing methods; (2) it presents a summary of state-of-the-art multi-robot collaboration frameworks, including task allocation, mapping, and coordination; and (3) it identifies major limitations in current systems and provides research directions for enabling scalable, robust, and context-aware collaboration in indoor construction. By bridging the gap between current robotic collaboration methods and the needs of indoor construction, this review lays the foundation for the development of adaptive and optimized collaborative robot deployment frameworks for indoor built environments.
(This article belongs to the Special Issue Automation and Robotics in Building Design and Construction)
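As a concrete instance of the task-allocation methods such reviews survey, optimal one-robot-one-task assignment can be computed with the Hungarian algorithm; the cost matrix below is invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: estimated cost (e.g., travel time + setup) for robot i on task j
cost = np.array([[4.0, 1.5, 3.0],
                 [2.0, 5.0, 1.0],
                 [3.5, 2.5, 2.0]])
robots, tasks = linear_sum_assignment(cost)          # minimizes total cost
print(list(zip(robots, tasks)), cost[robots, tasks].sum())
```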

17 pages, 8252 KB  
Article
Probing Augmented Intelligent Human–Robot Collaborative Assembly Methods Toward Industry 5.0
by Qingwei Nie, Yiping Shen, Ye Ma, Shuqi Zhang, Lujie Zong, Ze Zheng, Yunbo Zhangwa and Yu Chen
Electronics 2025, 14(15), 3125; https://doi.org/10.3390/electronics14153125 - 5 Aug 2025
Cited by 1 | Viewed by 708
Abstract
Facing the demands of Human–Robot Collaborative (HRC) assembly for complex products under Industry 5.0, this paper proposes an intelligent assembly method that integrates Large Language Model (LLM) reasoning with Augmented Reality (AR) interaction. To address issues such as poor visibility, difficulty in knowledge acquisition, and strong decision dependency in the assembly of complex aerospace products within confined spaces, an assembly task model and structured process information are constructed. Combined with a retrieval-augmented generation mechanism, the method realizes knowledge reasoning and optimization suggestion generation. An improved ORB-SLAM2 algorithm is applied to achieve virtual–real mapping and component tracking, further supporting the development of an enhanced visual interaction system. The proposed approach is validated through a typical aerospace electronic cabin assembly task, demonstrating significant improvements in assembly efficiency, quality, and human–robot interaction experience, thus providing effective support for intelligent HRC assembly.
(This article belongs to the Special Issue Human–Robot Interaction and Communication Towards Industry 5.0)
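A retrieval-augmented generation mechanism of the kind mentioned above pairs an embedding-based retriever with an LLM prompt. A generic skeleton follows; the `embed` and `generate` helpers are hypothetical stand-ins, as the abstract does not specify the models or prompt format.

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=3):
    """Return the k process-knowledge snippets most similar to the query."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return [docs[i] for i in np.argsort(-sims)[:k]]

# Assumed usage: embed() and generate() are placeholders for an embedding
# model and an LLM call, respectively.
# context = retrieve(embed(question), doc_vecs, process_docs)
# answer = generate(f"Assembly context:\n{context}\n\nQuestion: {question}")
```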

36 pages, 699 KB  
Article
A Framework of Indicators for Assessing Team Performance of Human–Robot Collaboration in Construction Projects
by Guodong Zhang, Xiaowei Luo, Lei Zhang, Wei Li, Wen Wang and Qiming Li
Buildings 2025, 15(15), 2734; https://doi.org/10.3390/buildings15152734 - 2 Aug 2025
Cited by 2 | Viewed by 1996
Abstract
The construction industry has been troubled by a shortage of skilled labor and by safety accidents in recent years. More and more robots are therefore being introduced to undertake dangerous and repetitive jobs so that human workers can concentrate on higher-value, creative problem-solving tasks. Nevertheless, although human–robot collaboration (HRC) shows great potential, most existing evaluation methods still focus on the individual performance of either the human or the robot, and systematic indicators for a whole HRC team remain insufficient. To fill this research gap, the present study constructs a comprehensive evaluation framework for HRC team performance in construction projects. First, a detailed literature review is carried out and three theories are integrated to derive a preliminary set of 33 indicators. An expert questionnaire survey (N = 15) is then used to revise and verify the model empirically. The survey yielded a Cronbach's alpha of 0.916, indicating excellent internal consistency. The indicators rated highest in importance were task completion time (µ = 4.53) and dynamic separation distance (µ = 4.47) on a 5-point scale. Eight indicators were excluded because their mean importance ratings fell below the 3.0 threshold, leaving a framework of five main dimensions and 25 concrete indicators. Finally, an AHP-TOPSIS method is used to evaluate HRC team performance. The AHP analysis reveals that experts prioritize Safety (weight = 0.2708) over Productivity (weight = 0.2327), establishing a safety-first principle for successful HRC deployment. The framework is demonstrated through a case study of a human–robot plastering team, whose team performance scored as fair. This shows that the framework can help practitioners identify strengths and weaknesses in HRC team performance and derive targeted improvement strategies. Furthermore, the framework offers construction managers a scientific basis for deciding on robot deployment and team assignment, thus promoting safer, more efficient, and more creative HRC in construction projects.
(This article belongs to the Section Construction Management, and Computers & Digitization)
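The AHP-TOPSIS pipeline is standard and compact: AHP derives dimension weights from a pairwise comparison matrix via its principal eigenvector, and TOPSIS ranks alternatives by closeness to the ideal solution. A sketch follows; the matrices are invented, and the paper's actual expert judgments and indicator values are not reproduced.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights from an AHP pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def topsis_closeness(X, w, benefit):
    """Rank alternatives (rows of X) by closeness to the ideal solution."""
    V = X / np.linalg.norm(X, axis=0) * w              # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Toy example: 3 teams scored on safety and productivity (both benefit criteria).
w = ahp_weights(np.array([[1.0, 1.2], [1 / 1.2, 1.0]]))  # safety slightly preferred
print(topsis_closeness(np.array([[0.8, 0.6], [0.5, 0.9], [0.7, 0.7]]),
                       w, benefit=np.array([True, True])))
```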

22 pages, 1470 KB  
Article
An NMPC-ECBF Framework for Dynamic Motion Planning and Execution in Vision-Based Human–Robot Collaboration
by Dianhao Zhang, Mien Van, Pantelis Sopasakis and Seán McLoone
Machines 2025, 13(8), 672; https://doi.org/10.3390/machines13080672 - 1 Aug 2025
Cited by 1 | Viewed by 871
Abstract
To enable safe and effective human–robot collaboration (HRC) in smart manufacturing, it is critical to seamlessly integrate sensing, cognition, and prediction into the robot controller for real-time awareness, response, and communication inside a heterogeneous environment (robots, humans, and equipment). The proposed approach takes advantage of the prediction capabilities of nonlinear model predictive control (NMPC) to execute safe path planning based on feedback from a vision system. To satisfy the requirements of real-time path planning, an embedded solver based on a penalty method is applied. However, due to tight sampling times, NMPC solutions are approximate; therefore, the safety of the system cannot be guaranteed. To address this, we formulate a novel safety-critical paradigm that uses an exponential control barrier function (ECBF) as a safety filter. Several common human–robot assembly subtasks have been integrated into a real-life HRC assembly task to validate the performance of the proposed controller and to investigate whether integrating human pose prediction can help with safe and efficient collaboration. The robot uses OptiTrack cameras for perception and dynamically generates collision-free trajectories to the predicted target interactive position. Results for a number of different configurations confirm the efficiency of the proposed motion planning and execution framework, with a 23.2% reduction in execution time achieved for the HRC task compared to an implementation without human motion prediction.
(This article belongs to the Special Issue Visual Measurement and Intelligent Robotic Manufacturing)
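An ECBF safety filter of this kind reduces to a one-constraint quadratic program with a closed-form solution. Below is a hedged sketch for a double-integrator relative model; the gains, safety radius, and the simplification that human acceleration is negligible are illustrative choices, not the paper's design.

```python
import numpy as np

def ecbf_filter(u_nom, p_rel, v_rel, k1=4.0, k2=4.0, d_safe=0.3):
    """Minimally modify the nominal command u_nom (robot acceleration) so
    that h_ddot + k1*h_dot + k2*h >= 0 holds for h = |p_rel|^2 - d_safe^2."""
    h = p_rel @ p_rel - d_safe**2          # h > 0 <=> outside the safety radius
    h_dot = 2.0 * p_rel @ v_rel
    a = 2.0 * p_rel                        # h_ddot = 2*v.v + a @ u (affine in u)
    b = 2.0 * (v_rel @ v_rel) + k1 * h_dot + k2 * h
    residual = a @ u_nom + b
    if residual >= 0.0:                    # nominal command already satisfies ECBF
        return u_nom
    return u_nom - residual * a / (a @ a + 1e-9)   # closed-form min-norm correction

u_safe = ecbf_filter(np.array([0.5, -0.2]),        # nominal NMPC acceleration
                     np.array([0.4, 0.1]),         # position relative to human
                     np.array([-0.3, 0.0]))        # relative velocity
```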

27 pages, 3211 KB  
Article
Hybrid Deep Learning-Reinforcement Learning for Adaptive Human-Robot Task Allocation in Industry 5.0
by Claudio Urrea
Systems 2025, 13(8), 631; https://doi.org/10.3390/systems13080631 - 26 Jul 2025
Cited by 1 | Viewed by 1977
Abstract
Human-Robot Collaboration (HRC) is pivotal for flexible, worker-centric manufacturing in Industry 5.0, yet dynamic task allocation remains difficult because operator states—fatigue and skill—fluctuate abruptly. I address this gap with a hybrid framework that couples real-time perception and double-estimating reinforcement learning. A Convolutional Neural Network (CNN) classifies nine fatigue–skill combinations from synthetic physiological cues (heart rate, blink rate, posture, and wrist acceleration); its outputs feed a Double Deep Q-Network (DDQN) whose state vector also includes task-queue and robot-status features. The DDQN optimises a multi-objective reward balancing throughput, workload and safety and executes at 10 Hz within a closed-loop pipeline implemented in MATLAB R2025a and RoboDK v5.9. Benchmarking on a 1000-episode HRC dataset (2500 allocations per episode) shows the hybrid CNN+DDQN controller raises throughput to 60.48 ± 0.08 tasks·min⁻¹ (+21% vs. rule-based, +12% vs. SARSA, +8% vs. Dueling DQN, +5% vs. PPO), trims operator fatigue by 7% and sustains 99.9% collision-free operation (one-way ANOVA, p < 0.05; post-hoc power 1 − β = 0.87). Visual analyses confirm responsive task reallocation as fatigue rises or skill varies. The approach outperforms strong baselines (PPO, A3C, Dueling DQN) by mitigating Q-value over-estimation through double learning, providing robust policies under stochastic human states and offering a reproducible blueprint for multi-robot, Industry 5.0 factories. Future work will validate the controller on a physical Doosan H2017 cell and incorporate fairness constraints to avoid workload bias across multiple operators.
(This article belongs to the Section Systems Engineering)
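The double-estimation step that distinguishes DDQN from vanilla DQN fits in a few lines. A hedged PyTorch sketch of the target computation follows; network shapes and the discount factor are placeholders, while the state composition (fatigue/skill classes plus task-queue and robot-status features) comes from the abstract.

```python
import torch

@torch.no_grad()
def ddqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    """Double-DQN bootstrap target: the online net selects the next action,
    the target net evaluates it, mitigating Q-value over-estimation."""
    next_a = online_net(next_state).argmax(dim=1, keepdim=True)
    next_q = target_net(next_state).gather(1, next_a).squeeze(1)
    return reward + gamma * (1.0 - done) * next_q

# Typical use in the training loop (sketch):
# target = ddqn_target(r, s_next, d, online_net, target_net)
# loss = torch.nn.functional.smooth_l1_loss(
#     online_net(s).gather(1, a.unsqueeze(1)).squeeze(1), target)
```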

23 pages, 5168 KB  
Article
Multi-Scale Feature Mixed Attention Network for Cloud and Snow Segmentation in Remote Sensing Images
by Liling Zhao, Junyu Chen, Zichen Liao and Feng Shi
Remote Sens. 2025, 17(11), 1872; https://doi.org/10.3390/rs17111872 - 28 May 2025
Cited by 1 | Viewed by 749
Abstract
The coexistence of cloud and snow is very common in remote sensing images and presents persistent challenges for automated interpretation systems, primarily due to their highly similar visible-light spectral characteristics in optical imagery. This intrinsic spectral ambiguity significantly impedes accurate cloud and snow segmentation, particularly in delineating fine boundary features between cloud and snow regions. Much research on deep learning-based cloud and snow segmentation has been conducted, but deficiencies remain in extracting fine boundaries between cloud and snow regions, and existing segmentation models often misclassify cloud and snow bodies with similar features. This work proposes a Multi-scale Feature Mixed Attention Network (MFMANet). The framework integrates three key components: (1) a Multi-scale Pooling Feature Perception Module to capture multi-level structural features, (2) a Bilateral Feature Mixed Attention Module that enhances boundary detection through spatial-channel attention, and (3) a Multi-scale Feature Convolution Fusion Module to reduce edge blurring. We tested the model on CSWV, a high-resolution cloud and snow dataset based on WorldView-2 imagery, which meets the training and testing requirements of cloud and snow segmentation tasks. On this dataset, we compare MFMANet with other classical deep learning segmentation algorithms. The experimental results show that MFMANet achieves better segmentation accuracy and robustness: its average MIoU is 89.17%, about 0.9% higher than CSDNet and about 0.7% higher than UNet. Further verification on the HRC_WHU dataset shows that the MIoU of the proposed model reaches 91.03%, again outperforming the compared segmentation methods.
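Spatial-channel attention of the kind used for boundary enhancement follows a familiar pattern; below is a generic CBAM-style block as an illustration, not the paper's exact Bilateral Feature Mixed Attention Module.

```python
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    """Generic spatial-channel attention: reweight channels, then locations."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(), nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                       # channel attention
        pooled = torch.cat([x.mean(1, keepdim=True),  # avg- and max-pooled maps
                            x.max(1, keepdim=True).values], dim=1)
        return x * self.spatial(pooled)               # spatial attention

y = SpatialChannelAttention(32)(torch.randn(1, 32, 64, 64))
```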

34 pages, 20595 KB  
Article
Collision-Free Path Planning in Dynamic Environment Using High-Speed Skeleton Tracking and Geometry-Informed Potential Field Method
by Yuki Kawawaki, Kenichi Murakami and Yuji Yamakawa
Robotics 2025, 14(5), 65; https://doi.org/10.3390/robotics14050065 - 17 May 2025
Viewed by 1338
Abstract
In recent years, the realization of a society in which humans and robots coexist has become highly anticipated. Robots are therefore expected to be versatile across operating environments and highly responsive, to ensure safety and enable dynamic task execution. To meet these demands, we design a comprehensive system composed of two primary components: high-speed skeleton tracking and path planning. For tracking, we implement a high-speed skeleton tracking method that combines deep learning-based detection with optical flow-based motion extraction, and we introduce a dynamic search-area adjustment technique that focuses on the target joint to extract the desired motion more accurately. For path planning, we propose a high-speed, geometry-informed potential field model that addresses four key challenges: (P1) avoiding local minima, (P2) suppressing oscillations, (P3) adapting to dynamic environments, and (P4) handling obstacles with arbitrary 3D shapes. We validated the effectiveness of our high-frequency feedback control and the proposed system through a series of simulations and real-world collision-free path planning experiments. Our skeleton tracking operates at 250 Hz, eight times faster than conventional deep learning-based methods, and our path planning method runs at over 10,000 Hz. The proposed system offers versatility across working environments together with low latency, and we hope it will contribute to a foundational motion generation framework for human–robot collaboration (HRC), applicable to a wide range of downstream tasks while ensuring safety in dynamic environments.
(This article belongs to the Special Issue Visual Servoing-Based Robotic Manipulation)
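For reference, the classic attractive/repulsive potential-field update that geometry-informed variants build on looks like this; the gains, influence radius, and step size are invented values.

```python
import numpy as np

def potential_step(q, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=0.5, lr=0.01):
    """One gradient-descent step on U_att + U_rep from position q."""
    grad = k_att * (q - goal)                          # attractive gradient
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 1e-6 < d < rho0:                            # inside influence radius
            grad += k_rep * (1.0 / rho0 - 1.0 / d) / d**3 * (q - obs)
    return q - lr * grad

q = np.array([0.0, 0.0])
for _ in range(200):                                   # walk toward the goal
    q = potential_step(q, goal=np.array([2.0, 1.0]),
                       obstacles=[np.array([1.0, 0.5])])
```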

27 pages, 5560 KB  
Article
A Stackelberg Trust-Based Human–Robot Collaboration Framework for Warehouse Picking
by Yang Liu, Fuqiang Guo and Yan Ma
Systems 2025, 13(5), 348; https://doi.org/10.3390/systems13050348 - 3 May 2025
Cited by 1 | Viewed by 1327
Abstract
The warehouse picking process is one of the most critical components of logistics operations. Human–robot collaboration (HRC) is seen as an important trend in warehouse picking, as it combines the strengths of both humans and robots in the picking process. However, current human–robot collaboration frameworks lack effective communication between humans and robots, which results in inefficient task execution during picking. To address this, this paper considers trust as a communication bridge between humans and robots and proposes a Stackelberg trust-based human–robot collaboration framework for warehouse picking, aiming to achieve efficient and effective human–robot collaborative picking. In this framework, HRC with trust for warehouse picking is defined as a Partially Observable Stochastic Game (POSG) model. We model human fatigue with the logistic function and incorporate its impact into the efficiency reward function of the POSG. Based on the POSG model, belief space is used to assess human trust, and human strategies are formed. An iterative Stackelberg trust strategy generation (ISTSG) algorithm is designed to achieve optimal long-term collaboration benefits between humans and robots, solved via the Bellman equation. The generated human–robot decision profile is formalized as a Partially Observable Markov Decision Process (POMDP), and the properties of human–robot collaboration are specified as PCTL (probabilistic computation tree logic) with rewards, covering efficiency, accuracy, trust, and human fatigue. The probabilistic model checker PRISM is exploited to verify and analyze the corresponding properties of the POMDP. We take the popular human–robot collaboration robot TORU as a case study. The experimental results show that our framework improves the efficiency of human–robot collaboration for warehouse picking and reduces worker fatigue while ensuring the required accuracy of human–robot collaboration.
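The abstract specifies a logistic fatigue model feeding the efficiency reward; a minimal version looks as follows, where the slope, midpoint, and the discount form are assumed values for illustration.

```python
import numpy as np

def fatigue(t_min, k=0.08, t0=60.0):
    """Logistic fatigue level in [0, 1] after t_min minutes of picking
    (slope k and midpoint t0 are assumed values)."""
    return 1.0 / (1.0 + np.exp(-k * (t_min - t0)))

def efficiency_reward(base_reward, t_min):
    return base_reward * (1.0 - fatigue(t_min))   # fatigue discounts efficiency

print(efficiency_reward(1.0, 30), efficiency_reward(1.0, 90))
```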

20 pages, 3622 KB  
Article
Bayesian Prototypical Pruning for Transformers in Human–Robot Collaboration
by Bohua Peng and Bin Chen
Mathematics 2025, 13(9), 1411; https://doi.org/10.3390/math13091411 - 25 Apr 2025
Viewed by 1371
Abstract
Action representations are essential for developing the mutual cognition needed for efficient human–AI collaboration, particularly in human–robot collaborative (HRC) workspaces. Using video Transformers to let robots understand human intentions has accordingly become an emerging research direction. Despite their remarkable success in capturing long-range dependencies, local redundancy in video frames can inflate the inference latency of Transformers due to overparameterization. Token pruning has recently emerged as a computationally efficient solution that selectively removes input tokens with minimal impact on task performance. However, existing sparse coding methods often require an exhaustive threshold search, leading to intensive hyperparameter tuning. In this paper, Bayesian Prototypical Pruning (ProtoPrune), a novel end-to-end Bayesian framework, is proposed for token pruning in video understanding. To improve robustness, ProtoPrune leverages prototypical contrastive learning for fine-grained action representations, bringing sub-action-level supervision to the video token pruning task. With variational dropout, the method bypasses the exhaustive threshold search. Experiments show that the proposed method achieves a pruning rate of 37.2% while retaining 92.9% of task performance with Uniformer and ActionCLIP, significantly improving computational efficiency. A convergence analysis ensures the stability of the method. The proposed approach offers a theoretically grounded and hardware-friendly solution for deploying video Transformers in real-world HRC environments.
(This article belongs to the Special Issue Bayesian Learning and Its Advanced Applications)
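Score-based token pruning itself is compact. The sketch below keeps the top 62.8% of tokens, matching the reported 37.2% pruning rate, by an arbitrary saliency score; ProtoPrune's Bayesian and prototype machinery is deliberately left out.

```python
import torch

def prune_tokens(tokens, scores, keep_ratio=0.628):   # 1 - 0.372 pruning rate
    """tokens: (B, N, D); scores: (B, N) saliency, higher = more important."""
    k = max(1, int(tokens.shape[1] * keep_ratio))
    idx = scores.topk(k, dim=1).indices.sort(dim=1).values   # keep temporal order
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))

x = torch.randn(2, 100, 64)                           # 100 video tokens per clip
kept = prune_tokens(x, torch.randn(2, 100))           # -> (2, 62, 64)
```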

28 pages, 6530 KB  
Article
Obstacle Avoidance Technique for Mobile Robots at Autonomous Human-Robot Collaborative Warehouse Environments
by Lucas C. Sousa, Yago M. R. Silva, Vinícius B. Schettino, Tatiana M. B. Santos, Alessandro R. L. Zachi, Josiel A. Gouvêa and Milena F. Pinto
Sensors 2025, 25(8), 2387; https://doi.org/10.3390/s25082387 - 9 Apr 2025
Cited by 2 | Viewed by 4355
Abstract
This paper presents an obstacle avoidance technique for a mobile robot in human-robot collaborative (HRC) tasks. The proposed solution integrates fuzzy logic rules with a convolutional neural network (CNN) to detect objects during vehicle movement. The goal is to improve the robot's autonomous navigation and ensure the safety of people and equipment in dynamic environments. The technique provides key references to the robot's internal control system, guiding it to continuously adjust its velocity and yaw to avoid obstacles (humans and moving objects) while following the path planned for its task. The approach aims to improve operational safety without compromising productivity, addressing critical challenges in collaborative robotics. The system was tested in a simulated environment using the Robot Operating System (ROS) and Gazebo to demonstrate the effectiveness of the navigation and obstacle avoidance. The results indicate that the framework enables real-time adaptation and safe interaction between the robot and obstacles in complex, changing industrial workspaces.
(This article belongs to the Section Sensors and Robotics)
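The fuzzy side of such a controller maps sensed obstacle distance to a velocity command through membership functions and a small rule base. A toy Mamdani-style sketch follows; the membership shapes, rule table, and speeds are invented, not the paper's rule base.

```python
import numpy as np

def falling(x, a, b):   # membership: 1 below a, linearly to 0 at b
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def rising(x, a, b):    # membership: 0 below a, linearly to 1 at b
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def tri(x, a, b, c):    # triangular membership peaking at b
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def speed_command(dist_m):
    """Rules: NEAR -> stop, MID -> slow, FAR -> cruise."""
    w = np.array([falling(dist_m, 0.3, 1.0),    # NEAR
                  tri(dist_m, 0.5, 1.5, 2.5),   # MID
                  rising(dist_m, 2.0, 3.0)])    # FAR
    v = np.array([0.0, 0.4, 1.0])               # rule consequents (m/s)
    return float(w @ v / (w.sum() + 1e-9))      # weighted-average defuzzification

print(speed_command(0.8), speed_command(2.8))   # slow vs. near-cruise
```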
