Search Results (97)

Search Parameters:
Keywords = collaborative robot joint

24 pages, 19432 KB  
Article
Robot Learning from Teleoperated Demonstrations: A Pilot Study Towards Automating Mastic Deposition in Construction Sites
by Irati Rasines, Erlantz Loizaga, Rebecca Erlebach, Anurag Bansal, Sara Sillaurren, Patricia Rosen, Sascha Wischniewski, Arantxa Renteria and Itziar Cabanes
Robotics 2025, 14(8), 114; https://doi.org/10.3390/robotics14080114 - 19 Aug 2025
Abstract
The construction industry faces significant challenges due to the physically demanding and hazardous nature of tasks such as manual filling of expansion joints with mastic. Automating mastic filling presents additional difficulties due to the variability of mastic density with temperature, which creates a constantly changing environment that requires adaptive control strategies to ensure consistent application quality. This pilot study tests a new human–robot collaborative approach for automating mastic application in concrete expansion joints. The system learns the task from demonstrations performed by expert construction operators teleoperating the robot. The study evaluates the usability, efficiency, and adoption of robotic assistance in joint-filling tasks compared to traditional manual methods. The analysis combines execution-time and joint-quality measurements, psychophysiological signal analysis, and post-task user feedback, enabling a comprehensive assessment of task performance and both objective and subjective evaluations of technology acceptance. The findings underscore the effectiveness of automated systems in improving safety and productivity on construction sites, while also identifying key areas for technological improvement.
(This article belongs to the Section Industrial Robots and Automation)
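
For intuition, here is a minimal behavior-cloning sketch of the learning-from-demonstration idea: fit a policy that maps robot state to the expert's commanded motion from logged teleoperation data. The state/action layout and the ridge regressor are illustrative assumptions, not the authors' pipeline.

```python
# Minimal behavior-cloning sketch: fit a policy mapping robot state to the
# expert's commanded end-effector velocity from logged teleoperation pairs.
# The feature layout and regressor are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration log: state = [x, y, z, nozzle_temp], action = velocity.
states = rng.normal(size=(500, 4))
actions = 0.5 * states[:, :3] + 0.01 * rng.normal(size=(500, 3))

# Ridge-regression policy: W = argmin ||S W - A||^2 + lam ||W||^2.
lam = 1e-2
S, A = states, actions
W = np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ A)

def policy(state):
    """Return the commanded velocity for a given robot state."""
    return state @ W

print(policy(states[0]))
```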

24 pages, 17213 KB  
Review
Empowering Smart Soybean Farming with Deep Learning: Progress, Challenges, and Future Perspectives
by Huihui Sun, Hao-Qi Chu, Yi-Ming Qin, Pingfan Hu and Rui-Feng Wang
Agronomy 2025, 15(8), 1831; https://doi.org/10.3390/agronomy15081831 - 28 Jul 2025
Abstract
This review comprehensively examines the application of deep learning technologies across the entire soybean production chain, encompassing areas such as disease and pest identification, weed detection, crop phenotype recognition, yield prediction, and intelligent operations. By systematically analyzing mainstream deep learning models, optimization strategies (e.g., model lightweighting, transfer learning), and sensor data fusion techniques, the review identifies their roles and performance in complex agricultural environments. It also highlights key challenges including data quality limitations, difficulties in real-world deployment, and the lack of standardized evaluation benchmarks. In response, promising directions such as reinforcement learning, self-supervised learning, interpretable AI, and multi-source data fusion are proposed. Specifically for soybean automation, future advancements are expected in areas such as high-precision disease and weed localization, real-time decision-making for variable-rate spraying and harvesting, and the integration of deep learning with robotics and edge computing to enable autonomous field operations. This review provides valuable insights and future prospects for promoting intelligent, efficient, and sustainable development in soybean production through deep learning.
(This article belongs to the Section Precision and Digital Agriculture)
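
One optimization strategy the review surveys, transfer learning, in a minimal hedged sketch: freeze an ImageNet-pretrained backbone and fine-tune only a new classifier head. The class count and hyperparameters are placeholders.

```python
# Transfer-learning sketch: reuse a pretrained backbone for a small
# soybean-disease dataset. Class count and learning rate are placeholders.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                         # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 5)       # e.g., 5 disease classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                     # stand-in leaf-image batch
y = torch.randint(0, 5, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```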

22 pages, 11043 KB  
Article
Digital Twin-Enabled Adaptive Robotics: Leveraging Large Language Models in Isaac Sim for Unstructured Environments
by Sanjay Nambiar, Rahul Chiramel Paul, Oscar Chigozie Ikechukwu, Marie Jonsson and Mehdi Tarkian
Machines 2025, 13(7), 620; https://doi.org/10.3390/machines13070620 - 17 Jul 2025
Abstract
As industrial automation evolves towards human-centric, adaptable solutions, collaborative robots must overcome challenges in unstructured, dynamic environments. This paper extends our previous work on developing a digital shadow for industrial robots by introducing a comprehensive framework that bridges the gap between physical systems and their virtual counterparts. The proposed framework advances toward a fully functional digital twin by integrating real-time perception and intuitive human–robot interaction capabilities. The framework is applied to a hospital test lab scenario, where a YuMi robot automates the sorting of microscope slides. The system incorporates a RealSense D435i depth camera for environment perception, Isaac Sim for virtual environment synchronization, and a locally hosted large language model (Mistral 7B) for interpreting user voice commands. These components work together to achieve bi-directional synchronization between the physical and digital environments. The framework was evaluated through 20 test runs under varying conditions. A validation study measured the performance of the perception module, simulation, and language interface, with a 60% overall success rate. Additionally, synchronization accuracy between the simulated and physical robot joint movements reached 98.11%, demonstrating strong alignment between the digital and physical systems. By combining local LLM processing, real-time vision, and robot simulation, the approach enables untrained users to interact with collaborative robots in dynamic settings. The results highlight its potential for improving flexibility and usability in industrial automation.
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
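
The paper reports 98.11% synchronization accuracy between simulated and physical joint movements; the abstract does not give the exact metric, so the sketch below assumes a range-normalized mean absolute joint error.

```python
# One way to score digital-twin synchronization, assuming (the abstract does
# not spell out the metric) accuracy = 100 * (1 - mean absolute joint error
# normalized by each joint's motion range).
import numpy as np

def sync_accuracy(q_phys: np.ndarray, q_sim: np.ndarray) -> float:
    """q_phys, q_sim: (T, n_joints) joint-angle trajectories in radians."""
    err = np.abs(q_phys - q_sim)                     # per-sample tracking error
    span = q_phys.max(axis=0) - q_phys.min(axis=0)   # motion range per joint
    span = np.where(span > 1e-9, span, 1.0)          # guard static joints
    return 100.0 * (1.0 - (err / span).mean())

t = np.linspace(0, 2 * np.pi, 200)
q_phys = np.stack([np.sin(t), np.cos(t)], axis=1)
q_sim = q_phys + 0.01 * np.random.default_rng(1).normal(size=q_phys.shape)
print(f"{sync_accuracy(q_phys, q_sim):.2f}%")
```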

32 pages, 11752 KB  
Article
A Variable Stiffness System for Impact Analysis in Collaborative Robotics Applications with FPGA-Based Force and Pressure Data Acquisition
by Andrea D’Antona, Saverio Farsoni, Jacopo Rizzi and Marcello Bonfè
Sensors 2025, 25(13), 3913; https://doi.org/10.3390/s25133913 - 23 Jun 2025
Abstract
The integration of robots into collaborative environments, where they physically interact with humans, requires systems capable of ensuring both safety and performance. This work introduces the development of a Variable Stiffness Impact Testing Device (VSITD), designed to emulate physical human–robot interaction by replicating biomechanical properties such as muscle elasticity and joint compliance. The proposed system integrates a Variable Stiffness Mechanism (VSM) with a multi-sensor configuration that includes a high-resolution force-sensitive resistor (FSR) matrix, piezoelectric load cells, and an FPGA-based acquisition unit. The FPGA enables fast acquisition of contact forces and pressures, while the mechanical stiffness configuration of the VSM can be rapidly reconfigured to simulate a wide range of impact scenarios. The device is intended to validate the compliance of robotic work cells with the ISO/TS 15066 safety standard in collaborative applications. The modularity and flexibility of the VSITD make it suitable for evaluating a wide range of collaborative robotic platforms, providing a reliable tool for pre-deployment validation in shared workspaces. By combining real-time sensing with adaptable stiffness control, the VSITD establishes a new benchmark for safety testing in human–robot collaboration scenarios.
(This article belongs to the Special Issue Collaborative Robotics: Prospects, Challenges and Applications)
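
A sketch of the validation step such a device supports: compare a measured transient peak force against a per-body-region limit in the spirit of ISO/TS 15066. The limit values below are illustrative placeholders, not the normative numbers.

```python
# Hedged sketch of an ISO/TS 15066-style pass/fail check on impact data.
# The limits are illustrative placeholders, not the standard's values.
from dataclasses import dataclass

FORCE_LIMITS_N = {        # hypothetical example limits per body region
    "hand": 140.0,
    "forearm": 160.0,
    "chest": 140.0,
}

@dataclass
class ImpactTrial:
    body_region: str
    peak_force_n: float   # peak contact force from the load cells
    peak_pressure_kpa: float

def passes_force_limit(trial: ImpactTrial) -> bool:
    """True if the measured peak force stays under the region's limit."""
    return trial.peak_force_n <= FORCE_LIMITS_N[trial.body_region]

trial = ImpactTrial("hand", peak_force_n=120.0, peak_pressure_kpa=180.0)
print(passes_force_limit(trial))  # True: 120 N <= 140 N
```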

23 pages, 8989 KB  
Article
Research on Robot Force Compensation and Collision Detection Based on Six-Dimensional Force Sensor
by Yunyi Wang, Zhijun Wang, Yongli Feng and Yanghuan Xu
Machines 2025, 13(7), 544; https://doi.org/10.3390/machines13070544 - 23 Jun 2025
Abstract
To address the shortcomings of existing robot collision detection algorithms that use six-dimensional force sensors, a force compensation algorithm based on Kane's dynamics is proposed, along with a collision detection algorithm that combines the six-dimensional force sensor data with the robot's outer surface equations to derive the coordinates of collision points on the robot body. Firstly, a collision detection model for a joint-type collaborative robot is presented. Secondly, based on Kane's dynamics equations, a force compensation model for the joint-type collaborative robot is established and the corresponding force compensation algorithm is derived. Thirdly, a collision detection algorithm is derived and illustrated with a joint robot whose link outer surfaces are cylindrical; the collision is categorized into nine cases, and the coordinates of the collision point are solved for each case. Finally, force compensation and collision detection experiments are conducted on an AUBO-I5 joint-type collaborative robot. The force compensation results show that the comparison curves for forces/torques in the three directions are consistent, with a relative error below 5.6%. The collision detection results indicate that the computed collision positions match the actual collision positions, verifying the correctness of the theoretical analysis of both algorithms. The research results provide a theoretical basis for ensuring safety in human–robot collaboration.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
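
For intuition, here is a generic tool-gravity compensation sketch for a wrist-mounted six-axis force/torque sensor; the paper's Kane-dynamics compensation also covers inertial terms, which this omits.

```python
# Generic tool-gravity compensation for a wrist-mounted six-axis F/T sensor:
# subtract the wrench induced by the known tool mass from the raw reading.
# Illustrates the idea only; the paper's compensation is derived from Kane's
# dynamics and also accounts for motion-induced (inertial) terms.
import numpy as np

def compensate(wrench_raw, R_sensor_to_base, tool_mass, tool_com):
    """wrench_raw: (6,) [Fx,Fy,Fz,Tx,Ty,Tz] in the sensor frame.
    R_sensor_to_base: (3,3) rotation of the sensor frame w.r.t. the base.
    tool_com: (3,) tool centre of mass expressed in the sensor frame."""
    g_base = np.array([0.0, 0.0, -9.81])
    f_grav = R_sensor_to_base.T @ (tool_mass * g_base)  # gravity in sensor frame
    t_grav = np.cross(tool_com, f_grav)                 # torque about the sensor
    return wrench_raw - np.concatenate([f_grav, t_grav])

raw = np.array([1.2, -0.3, -19.9, 0.05, -0.40, 0.01])
print(compensate(raw, np.eye(3), tool_mass=2.0, tool_com=np.array([0, 0, 0.04])))
```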

20 pages, 10186 KB  
Article
SC-CoSF: Self-Correcting Collaborative and Co-Training for Image Fusion and Semantic Segmentation
by Dongrui Yang, Lihong Qiao and Yucheng Shu
Sensors 2025, 25(12), 3575; https://doi.org/10.3390/s25123575 - 6 Jun 2025
Abstract
Multimodal image fusion and semantic segmentation play pivotal roles in autonomous driving and robotic systems, yet their inherent interdependence remains underexplored. To address this gap and overcome performance bottlenecks, we propose SC-CoSF, a novel coupled framework that jointly optimizes these tasks through synergistic learning. Our approach replaces traditional duplex encoders with a weight-sharing CNN encoder, implicitly aligning multimodal features while reducing parameter overhead. The core innovation lies in our Self-correction and Collaboration Fusion Module (Sc-CFM), which integrates (1) a Self-correction Long-Range Relationship Branch (Sc-LRB) to strengthen global semantic modeling, (2) a Self-correction Fine-Grained Branch (Sc-FGB) for enhanced visual detail retention through local feature aggregation, and (3) a Dual-branch Collaborative Recalibration (DCR) mechanism for cross-task feature refinement. This design preserves critical edge textures and color contrasts for segmentation while leveraging segmentation-derived spatial priors to guide fusion. We further introduce the Interactive Context Recovery Mamba Decoder (ICRM) to restore lost long-range dependencies during the upsampling process; meanwhile, we propose the Region Adaptive Weighted Reconstruction Decoder (ReAW), which is mainly used to reduce feature redundancy in image fusion tasks. End-to-end joint training enables gradient propagation across all task branches via shared parameters, exploiting inter-task consistency for superior performance. Experiments demonstrate significant improvements over independently optimized baselines in both fusion quality and segmentation accuracy.
(This article belongs to the Section Sensing and Imaging)
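
A skeleton of the weight-sharing encoder idea, with all layer sizes invented for illustration; the actual Sc-CFM, ICRM, and ReAW modules are far richer than these placeholder heads.

```python
# Sketch of the weight-sharing idea: one CNN encoder processes both the
# visible and infrared images (IR replicated to 3 channels), and two task
# heads (fusion, segmentation) consume the concatenated shared features.
import torch
import torch.nn as nn

class SharedEncoderNet(nn.Module):
    def __init__(self, n_classes=9):
        super().__init__()
        self.encoder = nn.Sequential(   # shared weights for both modalities
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.fusion_head = nn.Conv2d(128, 3, 1)       # fused RGB image
        self.seg_head = nn.Conv2d(128, n_classes, 1)  # per-pixel class logits

    def forward(self, rgb, ir):
        f = torch.cat([self.encoder(rgb), self.encoder(ir)], dim=1)
        return self.fusion_head(f), self.seg_head(f)

net = SharedEncoderNet()
fused, logits = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
print(fused.shape, logits.shape)
```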

16 pages, 5223 KB  
Article
Design and Control of Bio-Inspired Joints for Legged Robots Driven by Shape Memory Alloy Wires
by Xiaojie Niu, Xiang Yao and Erbao Dong
Biomimetics 2025, 10(6), 378; https://doi.org/10.3390/biomimetics10060378 - 6 Jun 2025
Abstract
Bio-inspired joints play a pivotal role in legged robots, directly determining their motion capabilities and overall system performance. While shape memory alloy (SMA) actuators present superior power density and silent operation compared to conventional electromechanical drives, their inherent nonlinear hysteresis and restricted strain capacity (typically less than 5%) limit actuation range and control precision. This study proposes a bio-inspired joint integrating an antagonistic actuator configuration and differential dual-diameter pulley collaboration, achieving amplified joint stroke (±60°) and bidirectional active controllability. Leveraging a comprehensive experimental platform, precise reference input tracking is realized through adaptive fuzzy control. Furthermore, an SMA-driven bio-inspired leg is developed based on this joint, along with a motion retargeting framework to map human motions onto the robotic leg. Human gait tracking experiments conducted on the leg platform validate its motion performance and explore practical applications of SMA in robotics.
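
A minimal sketch of the antagonistic actuation principle: a signed angle error heats exactly one of two opposing SMA wires (heating contracts the wire and pulls the joint its way). The gain is a placeholder, and the paper uses adaptive fuzzy control rather than this proportional rule.

```python
# Antagonistic SMA control allocation sketch: map a signed angle error to a
# heating duty cycle on one of the two opposing wires. Placeholder gain only;
# the paper's controller is adaptive fuzzy, not proportional.
def antagonistic_duty(angle_err_deg: float, kp: float = 0.02):
    """Return (flexor_duty, extensor_duty), each clipped to [0, 1]."""
    u = kp * angle_err_deg
    flexor = min(max(u, 0.0), 1.0)      # positive error -> heat flexor wire
    extensor = min(max(-u, 0.0), 1.0)   # negative error -> heat extensor wire
    return flexor, extensor

print(antagonistic_duty(+30.0))  # (0.6, 0.0)
print(antagonistic_duty(-10.0))  # (0.0, 0.2)
```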

26 pages, 7159 KB  
Article
Methodology for Human–Robot Collaborative Assembly Based on Human Skill Imitation and Learning
by Yixuan Zhou, Naisheng Tang, Ziyi Li and Hanlei Sun
Machines 2025, 13(5), 431; https://doi.org/10.3390/machines13050431 - 19 May 2025
Abstract
With the growing demand for personalized and flexible production, human–robot collaboration technology has received increasing attention. However, enabling robots to accurately perceive and align with human motion intentions remains a significant challenge. To address this, a novel human–robot collaborative control framework is proposed, which utilizes electromyography (EMG) signals as an interaction interface and integrates human skill imitation with reinforcement learning. Specifically, to manage the dynamic variation in muscle coordination patterns induced by joint angle changes, a temporal graph neural network enhanced with an Angle-Guided Attention mechanism is developed. This method adaptively models the topological relationships among muscle groups, enabling high-precision three-dimensional dynamic arm force estimation. Furthermore, an expert reward function and a fuzzy experience replay mechanism are introduced in the reinforcement learning model to guide the human skill learning process, thereby enhancing collaborative comfort and smoothness. The proposed approach is validated through a collaborative assembly task. Experimental results show that the proposed arm force estimation model reduces estimation errors by 10.38%, 8.33%, and 11.20% across the three spatial directions compared to a conventional deep long short-term memory (Deep-LSTM) network. Moreover, it significantly outperforms state-of-the-art methods, including traditional imitation learning and adaptive admittance control, in terms of collaborative comfort, smoothness, and assembly accuracy.
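
The abstract does not detail the Angle-Guided Attention mechanism, so the following is only a guess at its shape: attention over muscle-group features whose queries are biased by an embedding of the current joint angles, so coordination weights shift with arm configuration. All dimensions and the bias form are assumptions.

```python
# Speculative sketch of an angle-conditioned attention step, for intuition
# only: joint angles bias the attention queries over muscle-group features.
import torch
import torch.nn as nn

class AngleGuidedAttention(nn.Module):
    def __init__(self, d=32, n_joints=3):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.angle_bias = nn.Linear(n_joints, d)   # joint angles -> query bias
        self.scale = d ** 0.5

    def forward(self, muscle_feats, joint_angles):
        # muscle_feats: (B, n_muscles, d); joint_angles: (B, n_joints)
        q = self.q(muscle_feats) + self.angle_bias(joint_angles).unsqueeze(1)
        scores = q @ self.k(muscle_feats).transpose(1, 2) / self.scale
        return torch.softmax(scores, dim=-1) @ self.v(muscle_feats)

m = AngleGuidedAttention()
print(m(torch.randn(2, 8, 32), torch.randn(2, 3)).shape)  # (2, 8, 32)
```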

34 pages, 20595 KB  
Article
Collision-Free Path Planning in Dynamic Environment Using High-Speed Skeleton Tracking and Geometry-Informed Potential Field Method
by Yuki Kawawaki, Kenichi Murakami and Yuji Yamakawa
Robotics 2025, 14(5), 65; https://doi.org/10.3390/robotics14050065 - 17 May 2025
Abstract
In recent years, the realization of a society in which humans and robots coexist has become highly anticipated. As a result, robots are expected to exhibit versatility regardless of their operating environments, along with high responsiveness, to ensure safety and enable dynamic task execution. To meet these demands, we design a comprehensive system composed of two primary components: high-speed skeleton tracking and path planning. For tracking, we implement a high-speed skeleton tracking method that combines deep learning-based detection with optical flow-based motion extraction. In addition, we introduce a dynamic search area adjustment technique that focuses on the target joint to extract the desired motion more accurately. For path planning, we propose a high-speed, geometry-informed potential field model that addresses four key challenges: (P1) avoiding local minima, (P2) suppressing oscillations, (P3) ensuring adaptability to dynamic environments, and (P4) handling obstacles with arbitrary 3D shapes. We validated the effectiveness of our high-frequency feedback control and the proposed system through a series of simulations and real-world collision-free path planning experiments. Our high-speed skeleton tracking operates at 250 Hz, which is eight times faster than conventional deep learning-based methods, and our path planning method runs at over 10,000 Hz. The proposed system offers both versatility across different working environments and low latency. Therefore, we hope that it will contribute to a foundational motion generation framework for human–robot collaboration (HRC), applicable to a wide range of downstream tasks while ensuring safety in dynamic environments.
(This article belongs to the Special Issue Visual Servoing-Based Robotic Manipulation)
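
For reference, here is the classic artificial-potential-field step that the geometry-informed model generalizes (to arbitrary 3D obstacle shapes, with anti-oscillation and local-minimum handling); the gains and geometry below are arbitrary.

```python
# Classic artificial-potential-field step: descend an attractive quadratic
# potential toward the goal plus a repulsive term near obstacles. This is the
# textbook baseline, not the paper's geometry-informed model.
import numpy as np

def apf_step(p, goal, obstacles, k_att=1.0, k_rep=0.5, d0=0.5, step=0.05):
    """Take one fixed-size step down the combined potential gradient."""
    f = k_att * (goal - p)                       # attractive term
    for ob in obstacles:
        d = np.linalg.norm(p - ob)
        if 1e-9 < d < d0:                        # repulsion only inside radius d0
            f += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (p - ob) / d
    n = np.linalg.norm(f)
    return p if n < 1e-9 else p + step * f / n

p, goal = np.zeros(3), np.array([1.0, 1.0, 0.0])
obstacles = [np.array([0.5, 0.4, 0.0])]
for _ in range(200):
    if np.linalg.norm(goal - p) < 0.05:
        break
    p = apf_step(p, goal, obstacles)
print(np.round(p, 2))  # ends near the goal after skirting the obstacle
```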

35 pages, 5913 KB  
Article
Embedding Fear in Medical AI: A Risk-Averse Framework for Safety and Ethics
by Andrej Thurzo and Vladimír Thurzo
AI 2025, 6(5), 101; https://doi.org/10.3390/ai6050101 - 14 May 2025
Abstract
In today’s high-stakes arenas—from healthcare to defense—algorithms are advancing at an unprecedented pace, yet they still lack a crucial element of human decision-making: an instinctive caution that helps prevent harm. Inspired by both the protective reflexes seen in military robotics and the human amygdala’s role in threat detection, we introduce a novel idea: an integrated module that acts as an internal “caution system”. This module does not experience emotion in the human sense; rather, it serves as an embedded safeguard that continuously assesses uncertainty and triggers protective measures whenever potential dangers arise. Our proposed framework combines several established techniques. It uses Bayesian methods to continuously estimate the likelihood of adverse outcomes, applies reinforcement learning strategies with penalties for choices that might lead to harmful results, and incorporates layers of human oversight to review decisions when needed. The result is a system that mirrors the prudence and measured judgment of experienced clinicians—hesitating and recalibrating its actions when the data are ambiguous, much like a doctor would rely on both intuition and expertise to prevent errors. We call on computer scientists, healthcare professionals, and policymakers to collaborate in refining and testing this approach. Through joint research, pilot projects, and robust regulatory guidelines, we aim to ensure that advanced computational systems can combine speed and precision with an inherent predisposition toward protecting human life. Ultimately, by embedding this cautionary module, the framework is expected to significantly reduce AI-induced risks and enhance patient safety and trust in medical AI systems. It seems inevitable for future superintelligent AI systems in medicine to possess emotion-like processes.
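
A sketch of the "caution system" idea under stated assumptions: a Beta-Bernoulli posterior over the probability of an adverse outcome, with deferral to human oversight when an upper credible bound crosses a threshold. The prior, quantile, and threshold are illustrative choices, not the paper's calibration.

```python
# Bayesian caution-module sketch: maintain a Beta posterior over the adverse-
# outcome probability and defer to a human when the upper credible bound is
# too high. Prior, quantile, and threshold are illustrative assumptions.
from scipy.stats import beta

class CautionModule:
    def __init__(self, a=1.0, b=19.0, risk_threshold=0.10):
        self.a, self.b = a, b            # Beta prior: ~5% expected harm rate
        self.threshold = risk_threshold

    def update(self, harmed: bool):
        self.a += harmed                 # Bernoulli-Beta conjugate update
        self.b += not harmed

    def requires_human_review(self) -> bool:
        # Act on an upper credible bound, not the posterior mean (risk-averse).
        upper = beta.ppf(0.95, self.a, self.b)
        return upper > self.threshold

cm = CautionModule()
for outcome in [False, False, True, False]:
    cm.update(outcome)
print(cm.requires_human_review())
```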

21 pages, 7841 KB  
Article
Robot Joint Vibration Suppression Method Based on Improved ADRC
by Gang Wang, Shuhua Fang and Qiangren Xu
Appl. Sci. 2025, 15(10), 5476; https://doi.org/10.3390/app15105476 - 13 May 2025
Abstract
This article proposes an innovative joint vibration suppression method for six-axis collaborative robots. A permanent magnet synchronous motor (PMSM) and harmonic reducer are considered as a whole system in the design. A novel active disturbance rejection control (ADRC) scheme is implemented by applying vibration suppression measures to the low-speed end of the joint reducers. The mechanism of vibration generated by robot joints was analyzed, a vibration model for harmonic reducers was built, and a new extended state observer was designed to analyze the position disturbance data from the two encoders in each robot joint. Vibration signals were extracted and fed into the ADRC algorithm to suppress joint vibration. A six-axis robot experimental platform was built and used to explore the changes in the robot's vibration trajectories under different speed conditions with the ADRC algorithm applied. The experimental results show that the new vibration suppression algorithm reduced the fluctuation amplitude of the robot's trajectory: at low speed, the amplitude, vibration velocity, and vibration acceleration decreased by 0.533 mm, 10.12 mm/s, and 0.49 mm/s², respectively, while the velocity stability of the PMSM also improved. The study thus evaluates the vibration suppression performance of the ADRC algorithm at different robot speeds, showing that it effectively suppresses joint vibration.
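
The core of ADRC is the extended state observer; below is a textbook linear ESO for a second-order joint model, with placeholder gains. The paper's observer additionally fuses the two joint encoders, which this sketch does not.

```python
# Minimal linear extended state observer (ESO): estimate joint position z1,
# velocity z2, and the lumped disturbance z3 from the measured position y and
# control input u. Gains and b0 are placeholders (bandwidth w=100 rad/s).
def eso_step(z, y, u, dt, b0=1.0, beta=(300.0, 3e4, 1e6)):
    """z = [z1, z2, z3]; returns the updated estimate after one sample."""
    e = y - z[0]                           # innovation
    z1 = z[0] + dt * (z[1] + beta[0] * e)
    z2 = z[1] + dt * (z[2] + beta[1] * e + b0 * u)
    z3 = z[2] + dt * (beta[2] * e)         # lumped disturbance estimate
    return [z1, z2, z3]

z = [0.0, 0.0, 0.0]
for _ in range(1000):                      # constant reading, zero input
    y = 0.5                                # stand-in encoder measurement
    z = eso_step(z, y, u=0.0, dt=1e-3)
print([round(v, 3) for v in z])            # converges toward [0.5, 0, 0]
```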

27 pages, 6796 KB  
Article
Comparative Analysis of Post Hoc Explainable Methods for Robotic Grasp Failure Prediction
by Aneseh Alvanpour, Cagla Acun, Kyle Spurlock, Christopher K. Robinson, Sumit K. Das, Dan O. Popa and Olfa Nasraoui
Electronics 2025, 14(9), 1868; https://doi.org/10.3390/electronics14091868 - 3 May 2025
Abstract
In human–robot collaborative environments, predicting and explaining robotic grasp failures is crucial for effective operation. While machine learning models can predict failures accurately, they often lack transparency, limiting their utility in critical applications. This paper presents a comparative analysis of three post hoc explanation methods—Tree-SHAP, LIME, and TreeInterpreter—for explaining grasp failure predictions from white-box and black-box models. Using a simulated robotic grasping dataset, we evaluate these methods based on their agreement in identifying important features, similarity in feature importance rankings, dependency on model type, and computational efficiency. Our findings reveal that Tree-SHAP and TreeInterpreter demonstrate stronger consistency with each other than with LIME, particularly for correctly predicted failures. The choice of ML model significantly affects explanation consistency, with simpler models yielding more agreement across methods. TreeInterpreter offers a substantial computational advantage, operating approximately 24 times faster than Tree-SHAP and over 2000 times faster than LIME for complex models. All methods consistently identify effort in joint 1 across fingers 1 and 3 as critical factors in grasp failures, aligning with mechanical design principles. These insights contribute to developing more transparent and reliable robotic grasping systems, enabling better human–robot collaboration through improved failure understanding and prevention.
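
One way agreement between explainers can be quantified is rank correlation between their feature-importance vectors; the values below are made up, standing in for Tree-SHAP and LIME outputs on one prediction.

```python
# Explainer-agreement sketch: Spearman rank correlation between the feature-
# importance vectors two methods assign to the same prediction. The numbers
# are invented stand-ins for Tree-SHAP and LIME outputs.
import numpy as np
from scipy.stats import spearmanr

features = ["j1_f1_effort", "j1_f3_effort", "j2_f1_effort", "palm_force"]
treeshap_imp = np.array([0.42, 0.31, 0.08, 0.02])
lime_imp = np.array([0.35, 0.12, 0.30, 0.05])

rho, _ = spearmanr(treeshap_imp, lime_imp)
print(f"rank agreement: {rho:.2f}")

top_k = 2   # do the methods agree on the most important features?
agree = set(np.argsort(treeshap_imp)[-top_k:]) & set(np.argsort(lime_imp)[-top_k:])
print("shared top features:", [features[i] for i in agree])
```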

9 pages, 2950 KB  
Proceeding Paper
Cost-Effective Triboelectric-Assisted Sensory Actuator Designed for Intelligent Robot and Exoskeleton
by Haowen Liu, Yusong Chu, Yudong Zhao, Guanyu Zhu, Xuan Li, Minglu Zhu and Tao Chen
Eng. Proc. 2024, 78(1), 11; https://doi.org/10.3390/engproc2024078011 - 18 Apr 2025
Abstract
Joint actuators are key components in the innovation and iterative optimization of robots, with a significant impact on both robot performance and manufacturing cost. Conventional industrial collaborative robots often use high-precision position and torque sensors, which are not cost-effective or energy-efficient in specific applications like assistive exoskeletons, legged robots, or wheeled robots. Alternatively, we propose a triboelectric-assisted sensory actuator that balances lightweight design, performance, and affordability for large-scale applications. The actuator combines a high-power-density motor and a low-reduction-ratio gearbox with an integrated rotational triboelectric sensor, yielding high dynamic performance and low power consumption. The feasibility of the prototype is initially verified by characterizing the angular positioning accuracy and the back drivability. Experiments indicate that the rotational triboelectric sensor is able to accurately detect the angular displacement of the actuator from its self-generated signals. Overall, a highly integrated actuator module combining the actuation and sensing circuitry is fabricated as a compact design ready for assembly into a complete intelligent robot. This actuator holds great potential as a cost-effective, energy-efficient, and versatile solution for modern robotics, crucial for advancing this field and improving human convenience.
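
A sketch of self-generated angle sensing: count the pulses a rotational triboelectric sensor emits and convert the count to degrees. The grating pitch (degrees per pulse) is a made-up value for illustration.

```python
# Triboelectric angle-sensing sketch: one voltage pulse per grating segment,
# so counting rising edges gives angular displacement. Pitch is hypothetical.
import numpy as np

DEG_PER_PULSE = 2.0   # hypothetical grating pitch

def angle_from_signal(v: np.ndarray, thresh: float = 0.5) -> float:
    """Count rising edges above `thresh` and convert to degrees."""
    above = v > thresh
    rising = np.count_nonzero(above[1:] & ~above[:-1])
    return rising * DEG_PER_PULSE

t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 45 * t)   # 45 pulses -> 90 degrees of rotation
print(angle_from_signal(signal))      # 90.0
```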

21 pages, 85270 KB  
Article
Multi-Humanoid Robot Arm Motion Imitation and Collaboration Based on Improved Retargeting
by Xisheng Jiang, Baolei Wu, Simin Li, Yongtong Zhu, Guoxiang Liang, Ye Yuan, Qingdu Li and Jianwei Zhang
Biomimetics 2025, 10(3), 190; https://doi.org/10.3390/biomimetics10030190 - 19 Mar 2025
Abstract
Human–robot interaction (HRI) is a key technology in the field of humanoid robotics, and motion imitation is one of the most direct ways to achieve efficient HRI. However, due to significant differences in structure, range of motion, and joint torques between the human body and robots, motion imitation remains a challenging task. Traditional retargeting algorithms, while effective in mapping human motion to robots, typically either ensure similarity in arm configuration (joint space-based) or focus solely on tracking the end-effector position (Cartesian space-based). This creates a conflict between the liveliness and accuracy of robot motion. To address this issue, this paper proposes an improved retargeting algorithm that ensures both the similarity of the robot's arm configuration to that of the human body and accurate end-effector position tracking. Additionally, a multi-person pose estimation algorithm is introduced, enabling real-time capture of multiple imitators' movements using a single RGB-D camera. The captured motion data are used as input to the improved retargeting algorithm, enabling multi-robot collaboration tasks. Experimental results demonstrate that the proposed algorithm effectively ensures consistency in arm configuration and precise end-effector position tracking. Furthermore, the collaborative experiments validate the generalizability of the improved retargeting algorithm and the superior real-time performance of the multi-person pose estimation algorithm.
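
The joint-space/Cartesian trade-off the improved retargeting resolves can be written as a weighted objective; here is a toy version on a planar two-link arm, with illustrative weights and link lengths (the paper's algorithm handles full humanoid arms).

```python
# Toy retargeting trade-off: minimize a weighted sum of (a) distance to the
# human's joint configuration and (b) end-effector position error.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.3, 0.25                     # illustrative link lengths [m]

def fk(q):
    """End-effector position of a planar 2-link arm."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

q_human = np.array([0.8, 0.6])         # retargeted human arm configuration
x_target = np.array([0.35, 0.30])      # desired end-effector position

def cost(q, w_cfg=1.0, w_ee=50.0):
    # w_cfg keeps the pose lively (human-like); w_ee keeps tracking accurate.
    return w_cfg * np.sum((q - q_human) ** 2) + w_ee * np.sum((fk(q) - x_target) ** 2)

q_star = minimize(cost, x0=q_human).x
print(np.round(q_star, 3), np.round(fk(q_star), 3))
```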

19 pages, 15931 KB  
Article
Voronoi-GRU-Based Multi-Robot Collaborative Exploration in Unknown Environments
by Yang Lei, Jian Hou, Peixin Ma and Mingze Ma
Appl. Sci. 2025, 15(6), 3313; https://doi.org/10.3390/app15063313 - 18 Mar 2025
Abstract
In modern society, the autonomous exploration of unknown environments has attracted extensive attention due to its broad applications, such as in search and rescue operations, planetary exploration, and environmental monitoring. This paper proposes a novel collaborative exploration strategy for multiple mobile robots, aiming to quickly realize the exploration of entire unknown environments. Specifically, we investigate a hierarchical control architecture comprising an upper decision-making layer and a lower planning and mapping layer. In the upper layer, the next frontier point for each robot is determined using Voronoi partitioning and the Multi-Agent Twin Delayed Deep Deterministic policy gradient (MATD3) deep reinforcement learning algorithm in a centralized-training, decentralized-execution framework. In the lower layer, navigation planning is achieved using the A* and Timed Elastic Band (TEB) algorithms, while an improved Cartographer algorithm is used to construct a joint map for the multi-robot system. In addition, improved Robot Operating System (ROS) and Gazebo simulation environments shorten simulation times, further alleviating the slow training of high-precision simulation engines. Finally, the simulation results demonstrate the superiority of the proposed strategy, which achieves over 90% exploration coverage in unknown environments with a significantly reduced exploration time. Compared to MATD3, Multi-Agent Proximal Policy Optimization (MAPPO), Rapidly-Exploring Random Tree (RRT), and Cost-based methods, our strategy reduces time consumption by 41.1%, 47.0%, 63.9%, and 74.9%, respectively.
(This article belongs to the Special Issue Advanced Technologies in AI Mobile Robots)
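
The upper layer's Voronoi partitioning in miniature: each frontier cell is assigned to its nearest robot so that robots explore disjoint regions. Coordinates are made up, and the paper pairs this partitioning with MATD3 to choose targets within each region.

```python
# Voronoi-style frontier assignment sketch: every frontier cell belongs to
# its nearest robot, yielding disjoint exploration regions. Toy coordinates.
import numpy as np

robots = np.array([[0.0, 0.0], [8.0, 1.0], [4.0, 7.0]])
frontiers = np.array([[1.0, 1.5], [7.0, 0.5], [3.5, 6.0], [5.0, 4.0]])

# Distance matrix (n_frontiers, n_robots) -> nearest-robot index per frontier.
d = np.linalg.norm(frontiers[:, None, :] - robots[None, :, :], axis=2)
owner = d.argmin(axis=1)

for r in range(len(robots)):
    print(f"robot {r}: frontiers {frontiers[owner == r].tolist()}")
```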
