Search Results (14)

Search Parameters:
Keywords = human arm motion mapping

23 pages, 6584 KB  
Article
Bilateral Teleoperation of Aerial Manipulator with Hybrid Mapping Framework for Physical Interaction
by Lingda Meng, Yongfeng Rong and Wusheng Chou
Sensors 2025, 25(18), 5844; https://doi.org/10.3390/s25185844 - 19 Sep 2025
Viewed by 627
Abstract
Bilateral teleoperation combines the agility of robotic manipulators with the ability to perform complex contact tasks guided by human expertise, making it pivotal in environments beyond human access. However, because the limited workspace of existing master robots necessitates frequent mapping mode switches, and because the workspaces of the master and slave systems are markedly heterogeneous and asymmetric, teleoperation of mobile manipulators remains challenging. In this study, we introduce a 7-DOF upper-limb exoskeleton as the master control device, designed to align with the motion coordination of the human arm. For teleoperation mapping, a hybrid heterogeneous teleoperation control framework with a variable mapping scheme is proposed for an aerial manipulator performing physical operations. The system incorporates mode switching driven by the operator's hand gestures, seamlessly and intuitively integrating the advantages of position control and rate control to enable transitions adapted to diverse task requirements. Comparative teleoperation experiments were conducted using a fully actuated aerial platform equipped with a compliant 3D end-effector performing physical aerial writing tasks. The mode-switching algorithm was validated experimentally, showing no instability during transitions and achieving position-tracking RMSEs of 7.7% and 5.2% along the X- and Y-axes, respectively. The approach holds significant potential for future UAM inspection and physical operation scenarios. Full article
(This article belongs to the Section Sensors and Robotics)
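
The hybrid position/rate mapping with gesture-driven mode switching described in this abstract can be pictured with a minimal sketch, assuming a simple Cartesian master/slave pair; the class, gains, and re-anchoring rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class HybridMapper:
    """Minimal sketch of a hybrid position/rate teleoperation mapping.

    Position mode: master displacement maps directly to a slave offset.
    Rate mode: master displacement from a reference acts as a velocity command.
    Re-anchoring on every mode switch keeps the commanded pose continuous.
    """

    def __init__(self, k_pos=1.5, k_rate=0.4):
        self.k_pos = k_pos        # position-mode scaling (slave m per master m)
        self.k_rate = k_rate      # rate-mode scaling (slave m/s per master m)
        self.mode = "position"
        self.master_ref = np.zeros(3)
        self.slave_cmd = np.zeros(3)

    def switch_mode(self, gesture_closed, master_pos):
        """Toggle mode on an operator hand gesture (e.g., a closed fist)."""
        new_mode = "rate" if gesture_closed else "position"
        if new_mode != self.mode:
            self.mode = new_mode
            self.master_ref = master_pos.copy()   # re-anchor: no command jump

    def update(self, master_pos, dt):
        delta = master_pos - self.master_ref
        if self.mode == "position":
            # Incremental position mapping: accumulate scaled master motion.
            self.slave_cmd += self.k_pos * delta
            self.master_ref = master_pos.copy()
        else:
            # Rate mapping: offset from the reference integrates as velocity.
            self.slave_cmd += self.k_rate * delta * dt
        return self.slave_cmd

mapper = HybridMapper()
print(mapper.update(np.array([0.01, 0.0, 0.0]), dt=0.01))
```

Re-anchoring at each switch is one simple way to avoid command jumps when the mode changes.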

17 pages, 1929 KB  
Article
Bio-Signal-Guided Robot Adaptive Stiffness Learning via Human-Teleoperated Demonstrations
by Wei Xia, Zhiwei Liao, Zongxin Lu and Ligang Yao
Biomimetics 2025, 10(6), 399; https://doi.org/10.3390/biomimetics10060399 - 13 Jun 2025
Cited by 1 | Viewed by 754
Abstract
Robot learning from human demonstration pioneers an effective mapping paradigm for endowing robots with human-like operational capabilities. This paper proposes a bio-signal-guided robot adaptive stiffness learning framework grounded in the conclusion that muscle activation of the human arm is positively correlated with the endpoint stiffness. First, we propose a human-teleoperated demonstration platform enabling real-time modulation of robot end-effector stiffness by human tutors during operational tasks. Second, we develop a dual-stage probabilistic modeling architecture employing the Gaussian mixture model and Gaussian mixture regression to model the temporal–motion correlation and the motion–sEMG relationship, successively. Third, a real-world experiment was conducted to validate the effectiveness of the proposed skill transfer framework, demonstrating that the robot achieves online adaptation of Cartesian impedance characteristics in contact-rich tasks. This paper provides a simple and intuitive way to plan the Cartesian impedance parameters, transcending the classical method that requires complex human arm endpoint stiffness identification before human demonstration or compensation for the difference in human–robot operational effects after human demonstration. Full article
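
A minimal sketch of the underlying premise, that normalized muscle activation modulates Cartesian endpoint stiffness online, is given below; the stiffness bounds, damping rule, and signal names are assumptions, and this is not the GMM/GMR pipeline used in the paper.

```python
import numpy as np

K_MIN, K_MAX = 200.0, 1200.0   # assumed Cartesian stiffness bounds (N/m)

def stiffness_from_activation(activation):
    """Map normalized sEMG activation (0..1) to an isotropic stiffness matrix,
    following the premise that activation correlates positively with stiffness."""
    a = np.clip(activation, 0.0, 1.0)
    return (K_MIN + a * (K_MAX - K_MIN)) * np.eye(3)

def impedance_force(x, x_des, dx, dx_des, activation, damping_ratio=0.7):
    """Cartesian impedance law F = K(x_des - x) + D(dx_des - dx), with damping
    derived from the current stiffness under a unit-mass assumption."""
    K = stiffness_from_activation(activation)
    D = 2.0 * damping_ratio * np.sqrt(K)      # element-wise sqrt of diagonal K
    return K @ (x_des - x) + D @ (dx_des - dx)

# Example: higher activation stiffens the response to the same tracking error.
err = np.array([0.01, 0.0, 0.0])
print(impedance_force(np.zeros(3), err, np.zeros(3), np.zeros(3), 0.2))
print(impedance_force(np.zeros(3), err, np.zeros(3), np.zeros(3), 0.9))
```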

18 pages, 5409 KB  
Article
Research on Motion Transfer Method from Human Arm to Bionic Robot Arm Based on PSO-RF Algorithm
by Yuanyuan Zheng, Hanqi Zhang, Gang Zheng, Yuanjian Hong, Zhonghua Wei and Peng Sun
Biomimetics 2025, 10(6), 392; https://doi.org/10.3390/biomimetics10060392 - 11 Jun 2025
Cited by 1 | Viewed by 772
Abstract
Existing motion transfer methods for bionic robot arms are based on kinematic equivalence or simplified dynamic models and frequently fail to handle dynamic compliance and real-time adaptability in complex human-like motions. To address this shortcoming, this study presents a motion transfer method from the human arm to a bionic robot arm based on a hybrid PSO-RF (Particle Swarm Optimization–Random Forest) algorithm to improve joint-space mapping accuracy and dynamic compliance. Initially, a high-precision optical motion capture (Mocap) system was used to record human arm trajectories, and Kalman filtering and a Rauch–Tung–Striebel (RTS) smoother were applied to reduce noise and phase lag. Subsequently, the joint angles of the human arm were computed through geometric vector analysis. Although geometric vector analysis offers an initial estimate of the joint angles, its deterministic framework accumulates error when reflective markers are occluded or kinematic singularities occur. To overcome this limitation, five action sequences were designed to build the training database for the PSO-RF model, which predicts joint angles across different actions. Finally, an experimental platform was built to validate the motion transfer method, and the experiments showed that the system attains high prediction accuracy (R2 = 0.932 for the elbow joint angle) and real-time performance with a latency of 0.1097 s. This work advances compliant human–robot interaction by addressing joint-level dynamic transfer challenges and presents a framework for applications in intelligent manufacturing and rehabilitation robotics. Full article
(This article belongs to the Special Issue Advances in Biological and Bio-Inspired Algorithms)
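
The geometric vector analysis step, computing a joint angle from motion-capture markers before the learned correction, might look like the sketch below; the marker set and angle convention are assumptions rather than the authors' exact formulation.

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Elbow flexion angle (rad) from three marker positions via the angle
    between the upper-arm and forearm vectors."""
    u = shoulder - elbow          # upper-arm vector (elbow -> shoulder)
    f = wrist - elbow             # forearm vector   (elbow -> wrist)
    cos_a = np.dot(u, f) / (np.linalg.norm(u) * np.linalg.norm(f))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# Example markers (metres): a roughly 90-degree elbow.
s = np.array([0.0, 0.0, 1.4])
e = np.array([0.0, 0.0, 1.1])
w = np.array([0.3, 0.0, 1.1])
print(np.degrees(elbow_angle(s, e, w)))   # ~90
```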

15 pages, 2957 KB  
Article
A Vector-Based Motion Retargeting Approach for Exoskeletons with Shoulder Girdle Mechanism
by Jiajia Wang, Shuo Pei, Junlong Guo, Mingsong Bao and Yufeng Yao
Biomimetics 2025, 10(5), 312; https://doi.org/10.3390/biomimetics10050312 - 12 May 2025
Cited by 1 | Viewed by 639
Abstract
The shoulder girdle plays a dominant role in coordinating the natural movements of the upper arm. Inverse kinematics, optimization, and data-driven approaches are commonly used for motion retargeting; however, these methods do not consider shoulder girdle movement. When the kinematic structures of the human and the exoskeleton share a similar joint configuration, analytical motion retargeting can be applied to exoskeletons with a shoulder girdle mechanism. This paper proposes a vector-based analytical motion retargeting approach for exoskeletons with a shoulder girdle mechanism, which maps the vectors of the upper-limb segments to the joint space. Simulation results using four different motion descriptions confirm the method's accuracy and efficiency. Full article
(This article belongs to the Special Issue Bionic Wearable Robotics and Intelligent Assistive Technologies)
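
One way to picture a vector-based mapping from limb-segment vectors to joint space is the spherical decomposition of the upper-arm vector sketched below; the angle conventions are assumed for illustration and do not reproduce the paper's shoulder-girdle formulation.

```python
import numpy as np

def shoulder_angles(shoulder, elbow):
    """Decompose the upper-arm segment vector into an assumed elevation/azimuth pair.

    elevation: angle between the upper arm and the downward vertical,
    azimuth:   rotation of its horizontal projection about the vertical axis.
    """
    v = elbow - shoulder
    v = v / np.linalg.norm(v)
    elevation = np.arccos(np.clip(np.dot(v, [0.0, 0.0, -1.0]), -1.0, 1.0))
    azimuth = np.arctan2(v[1], v[0])
    return elevation, azimuth

elev, azim = shoulder_angles(np.array([0.0, 0.0, 1.4]), np.array([0.2, 0.1, 1.2]))
print(np.degrees(elev), np.degrees(azim))
```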

21 pages, 85270 KB  
Article
Multi-Humanoid Robot Arm Motion Imitation and Collaboration Based on Improved Retargeting
by Xisheng Jiang, Baolei Wu, Simin Li, Yongtong Zhu, Guoxiang Liang, Ye Yuan, Qingdu Li and Jianwei Zhang
Biomimetics 2025, 10(3), 190; https://doi.org/10.3390/biomimetics10030190 - 19 Mar 2025
Cited by 1 | Viewed by 2393
Abstract
Human–robot interaction (HRI) is a key technology in the field of humanoid robotics, and motion imitation is one of the most direct ways to achieve efficient HRI. However, due to significant differences in structure, range of motion, and joint torques between the human body and robots, motion imitation remains a challenging task. Traditional retargeting algorithms, while effective in mapping human motion to robots, typically either ensure similarity in arm configuration (joint space-based) or focus solely on tracking the end-effector position (Cartesian space-based). This creates a conflict between the liveliness and accuracy of robot motion. To address this issue, this paper proposes an improved retargeting algorithm that ensures both the similarity of the robot’s arm configuration to that of the human body and accurate end-effector position tracking. Additionally, a multi-person pose estimation algorithm is introduced, enabling real-time capture of multiple imitators’ movements using a single RGB-D camera. The captured motion data are used as input to the improved retargeting algorithm, enabling multi-robot collaboration tasks. Experimental results demonstrate that the proposed algorithm effectively ensures consistency in arm configuration and precise end-effector position tracking. Furthermore, the collaborative experiments validate the generalizability of the improved retargeting algorithm and the superior real-time performance of the multi-person pose estimation algorithm. Full article
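
The trade-off this improved retargeting resolves, staying close to the human arm configuration while tracking the end-effector, can be written as a single weighted cost; the planar two-link arm, weights, and solver below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.3, 0.25          # assumed link lengths of a planar 2-link arm (m)

def fk(q):
    """Forward kinematics of the planar arm: joint angles -> hand position."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def retarget(q_human, x_target, w_pose=0.1):
    """Minimize end-effector error plus a penalty for deviating from the
    captured human joint configuration (the 'similarity' term)."""
    cost = lambda q: (np.sum((fk(q) - x_target) ** 2)
                      + w_pose * np.sum((q - q_human) ** 2))
    return minimize(cost, q_human, method="BFGS").x

q_human = np.array([0.6, 0.8])                    # captured human-like posture
x_target = fk(q_human) + np.array([0.02, -0.01])  # slightly shifted target
print(retarget(q_human, x_target))
```

Raising w_pose favours a human-like configuration; lowering it favours end-effector accuracy, which is exactly the conflict the abstract describes.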

22 pages, 45649 KB  
Article
A Whole-Body Coordinated Motion Control Method for Highly Redundant Degrees of Freedom Mobile Humanoid Robots
by Hao Niu, Xin Zhao, Hongzhe Jin and Xiuli Zhang
Biomimetics 2024, 9(12), 766; https://doi.org/10.3390/biomimetics9120766 - 16 Dec 2024
Cited by 1 | Viewed by 2266
Abstract
Humanoid robots are becoming a global research focus. Due to the limitations of bipedal walking technology, mobile humanoid robots equipped with a wheeled chassis and dual arms have emerged as the most suitable configuration for performing complex tasks in factory or home environments. To address the high redundancy issue arising from the wheeled chassis and dual-arm design of mobile humanoid robots, this study proposes a whole-body coordinated motion control algorithm based on arm potential energy optimization. By constructing a gravity potential energy model for the arms and a virtual torsional spring elastic potential energy model with the shoulder-wrist line as the rotation axis, we establish an optimization index function for the arms. A neural network with variable stiffness is introduced to fit the virtual torsional spring, representing the stiffness variation trend of the human arm. Additionally, a posture mapping method is employed to map the human arm potential energy model to the robot, enabling realistic humanoid movements. Combining task-space and joint-space planning algorithms, we designed experiments for single-arm manipulation, independent object retrieval, and dual-arm carrying in a simulation of a 23-degree-of-freedom mobile humanoid robot. The results validate the effectiveness of this approach, demonstrating smooth motion, the ability to maintain a low potential energy state, and conformity to the operational characteristics of the human arm. Full article
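
The arm potential-energy index (a gravity term plus a virtual torsional spring about the shoulder–wrist axis) can be sketched as a scalar function of the elbow swivel angle, minimized over that redundancy; the segment parameters, spring stiffness, and grid search below are assumptions for illustration only.

```python
import numpy as np

L1, L2 = 0.30, 0.25            # assumed upper-arm / forearm lengths (m)
M1, M2 = 2.0, 1.5              # assumed segment masses (kg)
G, K_SPRING = 9.81, 3.0        # gravity, virtual torsional spring stiffness

def elbow_position(shoulder, wrist, swivel):
    """Elbow location on its self-motion circle for a fixed hand pose,
    parameterized by the swivel (redundancy) angle about the shoulder-wrist axis."""
    n = wrist - shoulder
    d = np.linalg.norm(n)
    n = n / d
    a = (d**2 + L1**2 - L2**2) / (2.0 * d)          # distance to circle centre
    r = np.sqrt(max(L1**2 - a**2, 0.0))              # circle radius
    u = np.cross(n, [0.0, 0.0, 1.0])
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    return shoulder + a * n + r * (np.cos(swivel) * u + np.sin(swivel) * v)

def potential_index(shoulder, wrist, swivel):
    """Gravity PE of both segments (approximate mid-segment centres of mass)
    plus the elastic energy of the virtual torsional spring on the swivel angle."""
    elbow = elbow_position(shoulder, wrist, swivel)
    com_upper_z = 0.5 * (shoulder[2] + elbow[2])
    com_fore_z = 0.5 * (elbow[2] + wrist[2])
    return M1 * G * com_upper_z + M2 * G * com_fore_z + 0.5 * K_SPRING * swivel**2

shoulder = np.array([0.0, 0.0, 1.4])
wrist = np.array([0.35, 0.10, 1.15])
angles = np.linspace(-np.pi, np.pi, 361)
best = min(angles, key=lambda s: potential_index(shoulder, wrist, s))
print(np.degrees(best))
```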

18 pages, 9319 KB  
Article
Mapping Method of Human Arm Motion Based on Surface Electromyography Signals
by Yuanyuan Zheng, Gang Zheng, Hanqi Zhang, Bochen Zhao and Peng Sun
Sensors 2024, 24(9), 2827; https://doi.org/10.3390/s24092827 - 29 Apr 2024
Cited by 6 | Viewed by 3144
Abstract
This paper investigates a method for precisely mapping human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with joint angles accurately calculated from an Inertial Measurement Unit, allows action recognition and mapping through deep learning algorithms. First, signal acquisition and processing were carried out, covering sensor placement and the acquisition of data from various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions). Interference was then removed with filters, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with salient features. Additionally, this paper constructs a hybrid network model combining Convolutional Neural Networks and Artificial Neural Networks and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fit between sEMG signals and joint angles was established based on a backpropagation neural network incorporating a momentum term and adaptive learning-rate adjustment. Finally, prosthetic arm control experiments based on the gesture recognition and joint angle prediction models achieved highly accurate arm movement prediction and execution. This work not only validates the potential of sEMG signals for the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices. Full article
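
A preprocessing chain of the kind described (band-pass filtering, rectification, moving-average smoothing, normalization) might look like the sketch below; the cut-off frequencies, window length, and sampling rate are assumed values rather than those used in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0                      # assumed sampling rate (Hz)

def preprocess_semg(raw, low=20.0, high=450.0, win=100):
    """Band-pass filter, full-wave rectify, moving-average smooth, and
    normalize one sEMG channel to obtain a clean activation envelope."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, raw)              # zero-phase band-pass
    rectified = np.abs(filtered)                # full-wave rectification
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    peak = envelope.max()
    return envelope / peak if peak > 0 else envelope

# Synthetic example: noisy burst of "muscle activity" in the second half.
t = np.arange(0, 2.0, 1.0 / FS)
raw = np.random.randn(t.size) * (0.2 + (t > 1.0))
print(preprocess_semg(raw).max())
```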

16 pages, 9389 KB  
Article
Teleoperated Grasping Using Data Gloves Based on Fuzzy Logic Controller
by Chunxiao Lu, Lei Jin, Yufei Liu, Jianfeng Wang and Weihua Li
Biomimetics 2024, 9(2), 116; https://doi.org/10.3390/biomimetics9020116 - 15 Feb 2024
Cited by 2 | Viewed by 2322
Abstract
Teleoperated robots have attracted significant interest in recent years, and data gloves are among the devices commonly used to operate them. However, existing solutions still face two challenges: how data gloves capture human operational intentions and how they achieve accurate mapping. To address these challenges, we propose a novel teleoperation method using data gloves based on a fuzzy logic controller. First, data from the flex sensors on the data gloves are collected and normalized to identify human manipulation intentions. Then, a fuzzy logic controller is designed to convert finger flexion information into motion control commands for robot arms. Finally, experiments demonstrate the effectiveness and precision of the proposed method. Full article
(This article belongs to the Special Issue Intelligent Human-Robot Interaction: 2nd Edition)
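
A toy version of the fuzzy step (triangular membership functions over normalized finger flexion, defuzzified into a closing-speed command) is sketched below; the membership breakpoints and output levels are assumptions, not the controller reported in the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_close_speed(flexion):
    """Map normalized finger flexion (0..1) to a gripper closing speed via
    three fuzzy sets (open / half / closed) and weighted-average defuzzification."""
    x = float(np.clip(flexion, 0.0, 1.0))
    memberships = {
        "open":   tri(x, -0.4, 0.0, 0.4),
        "half":   tri(x,  0.1, 0.5, 0.9),
        "closed": tri(x,  0.6, 1.0, 1.4),
    }
    speeds = {"open": 0.0, "half": 0.03, "closed": 0.08}   # assumed m/s outputs
    num = sum(memberships[k] * speeds[k] for k in memberships)
    den = sum(memberships.values())
    return num / den if den > 0 else 0.0

for f in (0.1, 0.5, 0.9):
    print(f, fuzzy_close_speed(f))
```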

21 pages, 2015 KB  
Article
Extreme Learning Machine/Finite Impulse Response Filter and Vision Data-Assisted Inertial Navigation System-Based Human Motion Capture
by Yuan Xu, Rui Gao, Ahong Yang, Kun Liang, Zhongwei Shi, Mingxu Sun and Tao Shen
Micromachines 2023, 14(11), 2088; https://doi.org/10.3390/mi14112088 - 12 Nov 2023
Cited by 1 | Viewed by 1482
Abstract
To obtain accurate position information, a one-assistant method fusing extreme learning machine (ELM)/finite impulse response (FIR) filters with vision data is proposed for inertial navigation system (INS)-based human motion capture. In the proposed method, when vision is available, the vision-based human position is fed to an FIR filter that accurately outputs the human position, while another FIR filter outputs the human position from INS data; an ELM is used to build a mapping between the output of the FIR filter and the corresponding error. When vision data are unavailable, the FIR filter provides the human position and the ELM provides its estimation error from the mapping built in the previous stage. For the right-arm elbow, the proposed method improves the cumulative distribution functions (CDFs) of the position errors by about 12.71%, demonstrating its effectiveness. Full article
(This article belongs to the Special Issue Machine-Learning-Assisted Sensors)
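
An extreme learning machine is simple enough to sketch directly: fixed random hidden weights, a sigmoid layer, and a closed-form least-squares output layer, here trained on synthetic data to predict a filter's systematic error in the spirit of the fusion scheme above; dimensions and data are illustrative assumptions.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, closed-form output."""

    def __init__(self, n_in, n_hidden=50, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)
        self.beta = None                             # output weights (learned)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

    def fit(self, X, y):
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y            # least-squares solution
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy use: learn a filter's systematic error from its own inputs, mimicking
# "map filter output -> estimation error, then correct when vision drops out".
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))                 # e.g., recent filter outputs
error = 0.05 * np.sin(X[:, 0]) + 0.02 * X[:, 1] ** 2  # synthetic systematic error
model = ELM(n_in=3).fit(X, error)
print(np.mean(np.abs(error - model.predict(X))))      # residual after correction
```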

20 pages, 3837 KB  
Article
Design and Research of Multimodal Fusion Feedback Device Based on Virtual Interactive System
by Zhen Zhang, Kenan Shi, Pan Ge, Taisheng Zhang, Manman Xu and Yu Chen
Actuators 2023, 12(8), 331; https://doi.org/10.3390/act12080331 - 16 Aug 2023
Viewed by 2442
Abstract
This paper proposes a kinesthetic–tactile fusion feedback system based on virtual interaction. Combining an analysis of human fingertip deformation characteristics with an upper-limb motion mechanism, a fingertip tactile feedback device and an arm kinesthetic feedback device are designed and analyzed for blind instructors. To verify the effectiveness of the method, virtual touch experiments are conducted through the mapping relationship between the master–slave devices and the virtual end. The results show that the average recognition rate of virtual objects is 79.58% and that recognition speed improves by 41.9% compared with the case without force feedback, indicating that the kinesthetic–tactile feedback device provides richer haptic perception information in virtual feedback and improves the recognition rate of haptic perception. Full article
(This article belongs to the Special Issue Actuators for Haptic Feedback Applications)

15 pages, 8783 KB  
Article
A Dual-Armed Robotic Puncture System: Design, Implementation and Preliminary Tests
by Yongzhuo Gao, Xiaomin Liu, Xu Zhang, Zhanfeng Zhou, Wenhe Jiang, Lei Chen, Zheng Liu, Dongmei Wu and Wei Dong
Electronics 2022, 11(5), 740; https://doi.org/10.3390/electronics11050740 - 28 Feb 2022
Cited by 5 | Viewed by 3761
Abstract
Traditional renal puncture surgery requires manual operation, which suffers from poor puncture performance, a low surgical success rate, and a high incidence of postoperative complications. Robot-assisted puncture surgery can effectively improve puncture accuracy, raise the success rate of surgery, and reduce postoperative complications. This paper presents a dual-armed robotic puncture scheme to assist surgeons. The system is divided into an ultrasound scanning arm and a puncture arm. Both robotic arms are designed with compliant positioning and master–slave control functions, and the control system is implemented. The puncture arm's position and posture are decoupled by the wrist RCM mechanism and the arm decoupling mechanism. Following the independent joint control principle, the compliant positioning function is realized through single-joint human–computer interactive admittance control, and simulations and tests verify its functions and performance. A differential incremental master–slave mapping strategy realizes the master–slave control function, and an error feedback link is introduced to solve the cumulative error problem in master–slave control. A prototype of the dual-armed robotic puncture system is built, and animal tests verify its effectiveness. Full article
(This article belongs to the Special Issue Physical Diagnosis and Rehabilitation Technologies)
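
The differential (incremental) master–slave mapping with an error-feedback correction reduces to a short update rule; the scaling factor and feedback gain below are assumed values, and the measured slave pose would come from the robot rather than the placeholder used here.

```python
import numpy as np

class IncrementalMapper:
    """Minimal sketch of differential master-slave mapping with error feedback.

    Each cycle the scaled master increment is added to the slave command, and a
    small correction toward the measured slave pose limits cumulative drift.
    """

    def __init__(self, scale=0.5, k_err=0.1):
        self.scale = scale          # motion scaling, master -> slave
        self.k_err = k_err          # error-feedback gain (0..1)
        self.prev_master = None
        self.slave_cmd = np.zeros(3)

    def step(self, master_pos, slave_measured):
        if self.prev_master is None:
            self.prev_master = master_pos.copy()
        delta = master_pos - self.prev_master
        self.prev_master = master_pos.copy()
        # Incremental mapping plus feedback of the command/measurement mismatch.
        self.slave_cmd += self.scale * delta
        self.slave_cmd -= self.k_err * (self.slave_cmd - slave_measured)
        return self.slave_cmd

mapper = IncrementalMapper()
print(mapper.step(np.array([0.02, 0.0, 0.0]), slave_measured=np.zeros(3)))
```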

15 pages, 50586 KB  
Article
High-Speed Dynamic Projection Mapping onto Human Arm with Realistic Skin Deformation
by Hao-Lun Peng and Yoshihiro Watanabe
Appl. Sci. 2021, 11(9), 3753; https://doi.org/10.3390/app11093753 - 21 Apr 2021
Cited by 14 | Viewed by 7109
Abstract
Dynamic projection mapping for a moving object according to its position and shape is fundamental for augmented reality to resemble changes on a target surface. For instance, augmenting the human arm surface via dynamic projection mapping can enhance applications in fashion, user interfaces, prototyping, education, medical assistance, and other fields. For such applications, however, conventional methods neglect skin deformation and have a high latency between motion and projection, causing noticeable misalignment between the target arm surface and projected images. These problems degrade the user experience and limit the development of more applications. We propose a system for high-speed dynamic projection mapping onto a rapidly moving human arm with realistic skin deformation. With the developed system, the user does not perceive any misalignment between the arm surface and projected images. First, we combine a state-of-the-art parametric deformable surface model with efficient regression-based accuracy compensation to represent skin deformation. Through compensation, we modify the texture coordinates to achieve fast and accurate image generation for projection mapping based on joint tracking. Second, we develop a high-speed system that provides a latency between motion and projection below 10 ms, which is generally imperceptible by human vision. Compared with conventional methods, the proposed system provides more realistic experiences and increases the applicability of dynamic projection mapping. Full article
(This article belongs to the Collection Virtual and Augmented Reality Systems)

14 pages, 3802 KB  
Article
Mapping Three Electromyography Signals Generated by Human Elbow and Shoulder Movements to Two Degree of Freedom Upper-Limb Robot Control
by Pringgo Widyo Laksono, Kojiro Matsushita, Muhammad Syaiful Amri bin Suhaimi, Takahide Kitamura, Waweru Njeri, Joseph Muguro and Minoru Sasaki
Robotics 2020, 9(4), 83; https://doi.org/10.3390/robotics9040083 - 9 Oct 2020
Cited by 14 | Viewed by 5308
Abstract
This article addresses issues related to human–robot cooperation tasks, focusing especially on robotic operation using bio-signals. In particular, we propose a control scheme for a robot arm based on electromyography (EMG) signals that allows cooperative tasks between humans and robots and enables teleoperation. A basic framework for achieving the task and analyzing EMG signals from upper-limb muscles to map hand motion is presented. The objective of this work is to investigate the application of a wearable EMG device to control a robot arm in real time. Three EMG sensors are attached to the brachioradialis, biceps brachii, and anterior deltoid as the targeted muscles. Three motions were performed by moving the arm about the elbow joint, the shoulder joint, and a combination of the two joints, giving two degrees of freedom. Five subjects participated in the experiments. The results indicate that the system achieved an overall accuracy ranging from 50% to 100% across the three motions for all subjects. The study further shows that upper-limb motion discrimination can be used to control a robotic manipulator arm simply and at low computational cost. Full article
(This article belongs to the Section Sensors and Control in Robotics)
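
A thresholded discrimination of the three muscle channels into elbow/shoulder commands, in the spirit of the scheme above, might look like this; the thresholds and joint speeds are assumptions, not the authors' classifier.

```python
THRESH = 0.3          # assumed activation threshold after normalization
ELBOW_SPEED = 0.5     # assumed joint speeds (rad/s)
SHOULDER_SPEED = 0.4

def emg_to_joint_commands(brachioradialis, biceps, anterior_deltoid):
    """Discriminate elbow / shoulder / combined motion from three normalized
    sEMG activations (0..1) and return (elbow_vel, shoulder_vel) commands."""
    elbow_active = max(brachioradialis, biceps) > THRESH
    shoulder_active = anterior_deltoid > THRESH
    elbow_vel = ELBOW_SPEED if elbow_active else 0.0
    shoulder_vel = SHOULDER_SPEED if shoulder_active else 0.0
    return elbow_vel, shoulder_vel

print(emg_to_joint_commands(0.6, 0.2, 0.1))   # elbow only
print(emg_to_joint_commands(0.5, 0.4, 0.7))   # combined 2-DOF motion
```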

22 pages, 2145 KB  
Article
AMiCUS—A Head Motion-Based Interface for Control of an Assistive Robot
by Nina Rudigkeit and Marion Gebhard
Sensors 2019, 19(12), 2836; https://doi.org/10.3390/s19122836 - 25 Jun 2019
Cited by 28 | Viewed by 5787
Abstract
Within this work we present AMiCUS, a Human-Robot Interface that enables tetraplegics to control a multi-degree-of-freedom robot arm in real time using solely head motion, empowering them to perform simple manipulation tasks independently. The article describes the hardware, software and signal processing of AMiCUS and presents the results of a volunteer study with 13 able-bodied subjects and 6 tetraplegics with severe head motion limitations. As part of the study, the subjects performed two different pick-and-place tasks. Usability was assessed with a questionnaire, and the overall performance and main control elements were evaluated with objective measures such as completion rate and interaction time. The results show that the mapping of head motion onto robot motion is intuitive and the feedback provided is useful, enabling smooth, precise and efficient robot control and resulting in high user acceptance. Furthermore, the robot did not move unintentionally, giving a positive prognosis for the safety requirements involved in certifying a product prototype. On top of that, AMiCUS enabled every subject to control the robot arm, independent of prior experience and degree of head motion limitation, making the system available for a wide range of motion-impaired users. Full article
(This article belongs to the Special Issue Assistance Robotics and Biosensors 2019)
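
A head-motion-to-robot-velocity mapping of the general kind described, with a dead zone so that small involuntary head movements do not move the robot, is sketched below; the angle limits, dead zone, and axis assignment are assumptions rather than the AMiCUS design.

```python
import numpy as np

DEADZONE = np.radians(5.0)    # assumed dead zone on head angles
MAX_ANGLE = np.radians(25.0)  # assumed full-deflection head angle
MAX_SPEED = 0.1               # assumed peak Cartesian speed (m/s)

def head_to_velocity(roll, pitch, yaw):
    """Map head orientation (rad, relative to a neutral pose) to a Cartesian
    velocity command, with a dead zone and saturation per axis."""
    def axis(angle):
        if abs(angle) < DEADZONE:
            return 0.0
        scaled = (abs(angle) - DEADZONE) / (MAX_ANGLE - DEADZONE)
        return np.sign(angle) * MAX_SPEED * min(scaled, 1.0)

    # Assumed assignment: pitch -> x, roll -> y, yaw -> z.
    return np.array([axis(pitch), axis(roll), axis(yaw)])

print(head_to_velocity(np.radians(2.0), np.radians(15.0), np.radians(-30.0)))
```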
