Search Results (1,155)

Search Parameters:
Keywords = Robotic Hand

19 pages, 2708 KB  
Article
A TPU-Based 3D Printed Robotic Hand: Design and Its Impact on Human–Robot Interaction
by Younglim Choi, Minho Lee, Seongmin Yea, Seunghwan Kim and Hyunseok Kim
Electronics 2026, 15(2), 262; https://doi.org/10.3390/electronics15020262 - 7 Jan 2026
Abstract
This study outlines the design and evaluation of a biomimetic robotic hand tailored for Human–Robot Interaction (HRI), focusing on improvements in tactile fidelity driven by material choice. Thermoplastic polyurethane (TPU) was selected over polylactic acid (PLA) based on its reported elastomeric characteristics and mechanical compliance described in prior literature. Rather than directly matching human skin properties, TPU was perceived as providing a softer and more comfortable tactile interaction compared to rigid PLA. The robotic hand was anatomically reconstructed from an open-source model and integrated with AX-12A and MG90S actuators to simplify wiring and enhance motion precision. A custom PCB, built around an ATmega2560 microcontroller, enables real-time communication with ROS-based upper-level control systems. Angular displacement analysis of repeated gesture motions confirmed the high repeatability and consistency of the system. A repeated-measures user study involving 47 participants was conducted to compare the PLA- and TPU-based prototypes during interactive tasks such as handshakes and gesture commands. The TPU hand received significantly higher ratings in tactile realism, grip satisfaction, and perceived responsiveness (p < 0.05). Qualitative feedback further supported its superior emotional acceptance and comfort. These findings indicate that incorporating TPU in robotic hand design not only enhances mechanical performance but also plays a vital role in promoting emotionally engaging and natural human–robot interactions, making it a promising approach for affective HRI applications.
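The angular-displacement analysis mentioned above lends itself to a simple repeatability statistic; the sketch below is one plausible formulation (the statistic, array layout, and function name are assumptions, not the paper's definition).

```python
import numpy as np

def gesture_repeatability(trials):
    """Repeatability of a repeated gesture: mean per-sample standard
    deviation of joint angular displacement across trials.
    trials: array shaped (n_trials, n_samples); lower is more repeatable."""
    trials = np.asarray(trials, dtype=float)
    return np.std(trials, axis=0).mean()
```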

22 pages, 5346 KB  
Article
A Body Power Hydraulic Prosthetic Hand
by Christopher Trent Neville-Dowler, Charlie Williams, Yuting Zhu and Kean C. Aw
Robotics 2026, 15(1), 14; https://doi.org/10.3390/robotics15010014 - 4 Jan 2026
Viewed by 93
Abstract
Limb amputations are a growing global challenge. Electrically powered prosthetic hands are heavy, expensive, and battery dependent. Body-powered prostheses offer a simpler and lighter alternative; however, existing designs require high body forces to operate, exhibit poor aesthetics, and have limited dexterity. This study presents the design of a hydraulically actuated soft bending finger with a simple and scalable manufacturing process, realised in a five-fingered body-powered prosthetic hand that is lightweight, comfortable, and representative of a human hand. The actuator was formed from two silicone materials of different stiffness (stiff Smooth-Sil 950 and flexible Ecoflex 00-30) and reinforced with double-helix fibres to generate bending under internal hydraulic pressure. A shoulder-mounted hydraulic system was designed to convert scapular elevation and protraction into actuator pressure. Finite element analysis and physical tests were performed to examine the bending and blocking-force performance of the actuators. The physical actuators achieved bending angles up to 230 degrees at 60 kPa and blocking forces of 5.9 N at 100 kPa. The prosthetic system was able to grasp and hold a 320-g water bottle. The results demonstrate a soft actuator design with simple and scalable manufacturing and show how these actuators can be incorporated into a body-powered prosthesis. This study provides a preliminary demonstration of the feasibility of human-powered prosthetics and motivates continued research. This work makes progress towards an affordable and functional body-powered prosthetic hand that can improve the lives of transradial amputees.
(This article belongs to the Section Soft Robotics)

23 pages, 5819 KB  
Article
Finger Unit Design for Hybrid-Driven Dexterous Hands
by Chong Deng, Wenhao Lu, Yizhou Qian, Yongjian Liu, Meng Ning and Ziheng Zhan
Biomimetics 2026, 11(1), 35; https://doi.org/10.3390/biomimetics11010035 - 4 Jan 2026
Viewed by 179
Abstract
Dexterous hands are the core end-effectors of humanoid robots, and their design is a key research focus in this field. With multiple independent finger units, the units’ dexterity directly determines the hand’s operational performance, yet achieving three-degree-of-freedom (3-DOF) anthropomorphic motion remains a key design challenge. To address this, this paper proposes a hybrid-driven index finger unit that combines the advantages of linkage and tendon–cable drives to realize 3-DOF anthropomorphic motion, and adopts independent drive/transmission modules to simplify manufacturing and increase flexibility in parameter optimization. Validated via motion dynamics, DOF, and operational force assessments, this design offers key unit technology for dexterous hand development and serves as a reference for optimizing multi-DOF anthropomorphic finger designs.
(This article belongs to the Section Locomotion and Bioinspired Robotics)

21 pages, 1566 KB  
Article
Robot-Assisted Mirror Therapy for Upper Limb and Hand Recovery After Stroke: Clinical Efficacy and Insights into Neural Mechanisms
by Shixin Li, Jiayi Zhang, Yang Xu and Yonghong Yang
J. Clin. Med. 2026, 15(1), 350; https://doi.org/10.3390/jcm15010350 - 2 Jan 2026
Viewed by 213
Abstract
Objective: This study investigated the efficacy and neural mechanisms of robot-assisted mirror therapy (RMT) for post-stroke upper limb rehabilitation. RMT integrates the multimodal feedback of mirror therapy with robotic precision and repetition to enhance cortical activation and neuroplasticity. Methods: Seventy-eight stroke patients were randomly assigned to control, mirror therapy (MT), or RMT groups. All received conventional rehabilitation; the MT group additionally underwent mirror therapy, and the RMT group received robot-assisted mirror therapy combined with functional electrical stimulation. The primary outcome was the Fugl–Meyer Assessment for Upper Extremity (FMA-UE), with secondary measures including spasticity, dexterity, daily living, and quality of life. Functional near-infrared spectroscopy (fNIRS) was applied to assess cortical activation and connectivity at baseline, post-intervention, and one-month follow-up. Results: All groups showed significant time effects, though between-group differences were limited. Subgroup analysis revealed that patients at Brunnstrom stages I–II in the MT group achieved greater improvements in upper limb function, dexterity, and daily living ability. fNIRS findings showed enhanced activation in the right sensory association cortex and increased prefrontal–sensory connectivity. Conclusions: While all interventions improved motor outcomes, MT yielded slightly superior recovery associated with neuroplastic changes. RMT demonstrated high safety, compliance, and potential benefit for patients with severe motor deficits.
(This article belongs to the Section Brain Injury)

24 pages, 19110 KB  
Article
Low-Code Mixed Reality Programming Framework for Collaborative Robots: From Operator Intent to Executable Trajectories
by Ziyang Wang, Zhihai Li, Hongpeng Yu, Duotao Pan, Songjie Peng and Shenlin Liu
Robotics 2026, 15(1), 9; https://doi.org/10.3390/robotics15010009 - 29 Dec 2025
Viewed by 170
Abstract
Efficient and intuitive programming strategies are essential for enabling robots to adapt to small-batch, high-mix production scenarios. Mixed reality (MR) and programming by demonstration (PbD) have shown great potential to lower the programming barrier and enhance human–robot interaction by leveraging natural human guidance. However, traditional offline programming methods, while capable of generating industrial-grade trajectories, remain time-consuming, costly to debug, and heavily dependent on expert knowledge. Conversely, existing MR-based PbD approaches primarily focus on improving intuitiveness but often suffer from low trajectory quality due to hand jitter and the lack of refinement mechanisms. To address these limitations, this paper introduces a coarse-to-fine human–robot collaborative programming paradigm. In this paradigm, the operator’s role is elevated from a low-level “trajectory drawer” to a high-level “task guider”. By leveraging sparse key points as guidance, the paradigm decouples high-level human task intent from machine-level trajectory planning, enabling their effective integration. The feasibility of the proposed system is validated through two industrial case studies and comparative quantitative experiments against conventional programming methods. The results demonstrate that the coarse-to-fine paradigm significantly improves programming efficiency and usability while reducing operator cognitive load. Crucially, it achieves this without compromising the final output, automatically generating smooth, high-fidelity trajectories from simple user inputs. This work provides an effective pathway toward reconciling programming intuitiveness with final trajectory quality.
(This article belongs to the Section AI in Robotics)
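To make the coarse-to-fine idea concrete, the sketch below refines sparse operator key points into a dense, smooth trajectory; the cubic-spline pass is an illustrative stand-in for the paper's trajectory planner, and all names and parameters are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def keypoints_to_trajectory(keypoints, n_samples=200):
    """Refine sparse key points (k, 3) captured from the operator into a
    smooth, densely sampled Cartesian trajectory (n_samples, 3)."""
    kp = np.asarray(keypoints, dtype=float)
    u = np.linspace(0.0, 1.0, len(kp))       # uniform parameterization
    spline = CubicSpline(u, kp, axis=0)      # C2-continuous interpolation
    return spline(np.linspace(0.0, 1.0, n_samples))
```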

30 pages, 1992 KB  
Article
Biomimetic Approach to Designing Trust-Based Robot-to-Human Object Handover in a Collaborative Assembly Task
by S. M. Mizanoor Rahman
Biomimetics 2026, 11(1), 14; https://doi.org/10.3390/biomimetics11010014 - 27 Dec 2025
Viewed by 330
Abstract
We presented a biomimetic approach to designing robot-to-human handover of objects in a collaborative assembly task. We developed a human–robot hybrid cell where a human and a robot collaborated with each other to perform the assembly operations of a product in a flexible manufacturing setup. Firstly, we investigated human psychology and biomechanics (kinetics and kinematics) for human-to-robot handover of an object in the human–robot collaborative set-up in three separate experimental conditions: (i) human possessed high trust in the robot, (ii) human possessed moderate trust in the robot, and (iii) human possessed low trust in the robot. The results showed that human psychology was significantly impacted by human trust in the robot, which also impacted the biomechanics of human-to-robot handover, i.e., human hand movement slowed down, the angle between human hand and robot arm increased (formed a braced handover configuration), and human grip forces increased if human trust in the robot decreased, and vice versa. Secondly, inspired by those empirical results related to human psychology and biomechanics, we proposed a novel robot-to-human object handover mechanism (strategy). According to the novel handover mechanism, the robot varied its handover configurations and motions through kinematic redundancy with the aim of reducing potential impulse forces on the human body through the object during the handover when robot trust in the human was low. We implemented the proposed robot-to-human handover mechanism in the human–robot collaborative assembly task in the hybrid cell. The experimental evaluation results showed significant improvements in human–robot interaction (HRI) in terms of transparency, naturalness, engagement, cooperation, cognitive workload, and human trust in the robot, and in overall performance in terms of handover safety, handover success rate, and assembly efficiency. The results can help design and develop human–robot handover mechanisms for human–robot collaborative tasks in various applications such as industrial manufacturing and manipulation, medical surgery, warehouse, transport, logistics, construction, machine shops, goods delivery, etc.
(This article belongs to the Special Issue Human-Inspired Grasp Control in Robotics 2025)

24 pages, 2830 KB  
Article
Real-Time Radar-Based Hand Motion Recognition on FPGA Using a Hybrid Deep Learning Model
by Taher S. Ahmed, Ahmed F. Mahmoud, Magdy Elbahnasawy, Peter F. Driessen and Ahmed Youssef
Sensors 2026, 26(1), 172; https://doi.org/10.3390/s26010172 - 26 Dec 2025
Viewed by 313
Abstract
Radar-based hand motion recognition (HMR) presents several challenges, including sensor interference, clutter, and the limitations of small datasets, which collectively hinder the performance and real-time deployment of deep learning (DL) models. To address these issues, this paper introduces a novel real-time HMR framework that integrates advanced signal pre-processing, a hybrid convolutional neural network–support vector machine (CNN–SVM) architecture, and efficient hardware deployment. The pre-processing pipeline applies filtration, squared absolute value computation, and normalization to enhance radar data quality. To improve the robustness of DL models against noise and clutter, time-series radar signals are transformed into binarized images, providing a compact and discriminative representation for learning. A hybrid CNN-SVM model is then utilized for hand motion classification. The proposed model achieves a high classification accuracy of 98.91%, validating the quality of the extracted features and the efficiency of the proposed design. Additionally, it reduces the number of model parameters by approximately 66% relative to the most accurate recurrent baseline (CNN–GRU–SVM) and by up to 86% relative to CNN–BiLSTM–SVM, while achieving the highest SVM test accuracy of 92.79% across all CNN–RNN variants that use the same binarized radar images. For deployment, the model is quantized and implemented on two System-on-Chip (SoC) FPGA platforms—the Xilinx Zynq ZCU102 Evaluation Kit and the Xilinx Kria KR260 Robotics Starter Kit—using the Vitis AI toolchain. The system achieves end-to-end accuracies of 96.13% (ZCU102) and 95.42% (KR260). On the ZCU102, the system achieved a 70% reduction in execution time and a 74% improvement in throughput compared to the PC-based implementation. On the KR260, it achieved a 52% reduction in execution time and a 10% improvement in throughput relative to the same PC baseline. Both implementations exhibited minimal accuracy degradation relative to a PC-based setup—approximately 1% on ZCU102 and 2% on KR260. These results confirm the framework’s suitability for real-time, accurate, and resource-efficient radar-based hand motion recognition across diverse embedded environments.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
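A minimal sketch of the pre-processing chain described above — filtration, squared absolute value, normalization, and binarization into a compact image. The filter design, image size, and threshold are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_radar(signal, fs, cutoff_hz=50.0, img_size=(64, 64), thresh=0.5):
    """Filter a radar time series, square its magnitude, normalize to
    [0, 1], and binarize into an image for the CNN front end."""
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")   # clutter suppression
    energy = np.abs(filtfilt(b, a, signal)) ** 2          # squared absolute value
    energy = (energy - energy.min()) / (energy.max() - energy.min() + 1e-12)
    img = np.resize(energy, img_size)                     # pad/truncate (illustrative)
    return (img > thresh).astype(np.uint8)
```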

11 pages, 4787 KB  
Article
Vision-Based Hand Function Evaluation with Soft Robotic Rehabilitation Glove
by Mukun Tong, Michael Cheung, Yixing Lei, Mauricio Villarroel and Liang He
Sensors 2026, 26(1), 138; https://doi.org/10.3390/s26010138 - 25 Dec 2025
Viewed by 297
Abstract
Advances in robotic technology for hand rehabilitation, particularly soft robotic gloves, have significant potential to improve patient outcomes. While vision-based algorithms pave the way for fast and convenient hand pose estimation, most current models struggle to accurately track hand movements when soft robotic gloves are used, primarily due to severe occlusion. This limitation reduces the applicability of soft robotic gloves in digital and remote rehabilitation assessment. Furthermore, traditional clinical assessments like the Fugl-Meyer Assessment (FMA) rely on manual measurements and subjective scoring scales, lacking the efficiency and quantitative accuracy needed to monitor hand function recovery in data-driven personalised rehabilitation. Consequently, few integrated evaluation systems provide reliable quantitative assessments. In this work, we propose an RGB-based evaluation system for soft robotic glove applications, which is aimed at bridging these gaps in assessing hand function. By incorporating the Hand Mesh Reconstruction (HaMeR) model fine-tuned with motion capture data, our hand estimation framework overcomes occlusion and enables accurate continuous tracking of hand movements with reduced errors. The resulting functional metrics include conventional clinical benchmarks such as the mean per joint angle error (MPJAE) and range of motion (ROM), providing quantitative, consistent measures of rehabilitation progress and achieving tracking errors lower than 10°. In addition, we introduce adapted benchmarks such as the angle percentage of correct keypoints (APCK), mean per joint angular velocity error (MPJAVE) and angular spectral arc length (SPARC) error to characterise movement stability and smoothness. This extensible and adaptable solution demonstrates the potential of vision-based systems for future clinical and home-based rehabilitation assessment.
(This article belongs to the Special Issue Advanced Sensors Technologies for Soft Robotic System)
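The clinical benchmarks named above are straightforward to compute; below is a minimal sketch of MPJAE and ROM under an assumed (frames, joints) angle layout — the function names and shapes are assumptions, not the paper's code.

```python
import numpy as np

def mpjae(pred, ref):
    """Mean per-joint angle error (degrees): average absolute difference
    between predicted and reference joint angles, shaped (frames, joints)."""
    return np.mean(np.abs(np.asarray(pred) - np.asarray(ref)))

def range_of_motion(angles):
    """Per-joint range of motion: max minus min angle over a trial."""
    a = np.asarray(angles)
    return a.max(axis=0) - a.min(axis=0)
```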

19 pages, 6764 KB  
Article
A Dual-Validation Framework for Temporal Robustness Assessment in Brain–Computer Interfaces for Motor Imagery
by Mohamed A. Hanafy, Saykhun Yusufjonov, Payman SharafianArdakani, Djaykhun Yusufjonov, Madan M. Rayguru and Dan O. Popa
Technologies 2025, 13(12), 595; https://doi.org/10.3390/technologies13120595 - 18 Dec 2025
Viewed by 364
Abstract
Brain–computer interfaces using motor imagery (MI-BCIs) offer a promising noninvasive communication pathway between humans and engineered equipment such as robots. However, for MI-BCIs based on electroencephalography (EEG), the reliability of the interface across recording sessions is limited by temporal non-stationarity. Overcoming this barrier is critical to translating MI-BCIs from controlled laboratory environments to practical uses. In this paper, we present a comprehensive dual-validation framework to rigorously evaluate the temporal robustness of EEG signals of an MI-BCI. We collected data from six participants performing four motor imagery tasks (left/right hand and foot). Features were extracted using Common Spatial Patterns, and ten machine learning classifiers were assessed within a unified pipeline. Our method integrates within-session evaluation (stratified K-fold cross-validation) with cross-session testing (bidirectional train/test), complemented by stability metrics and performance heterogeneity assessment. Findings reveal minimal performance loss between conditions, with an average accuracy drop of just 2.5%. The AdaBoost classifier achieved the highest within-session performance (84.0% system accuracy, F1-score: 83.8%/80.9% for hand/foot), while the K-nearest neighbors (KNN) classifier demonstrated the best cross-session robustness (81.2% system accuracy, F1-score: 80.5%/80.2% for hand/foot, 0.663 robustness score). This study shows that robust performance across sessions is attainable for MI-BCI evaluation, supporting the pathway toward reliable, real-world clinical deployment.
(This article belongs to the Collection Selected Papers from the PETRA Conference Series)
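The dual-validation protocol pairs within-session cross-validation with cross-session train/test; the sketch below shows the shape of that pipeline using MNE's CSP implementation and a KNN classifier. Variable names and hyperparameters are assumptions, not the paper's configuration.

```python
from mne.decoding import CSP                      # Common Spatial Patterns
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def dual_validation(X_s1, y_s1, X_s2, y_s2):
    """X_s*: EEG epochs shaped (trials, channels, samples); y_s*: labels."""
    clf = make_pipeline(CSP(n_components=4), KNeighborsClassifier())
    # Within-session: stratified K-fold on session 1.
    within = cross_val_score(clf, X_s1, y_s1,
                             cv=StratifiedKFold(n_splits=5)).mean()
    # Cross-session: train on session 1, test on session 2 (swap the
    # sessions for the bidirectional variant).
    cross = clf.fit(X_s1, y_s1).score(X_s2, y_s2)
    return within, cross
```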

15 pages, 2369 KB  
Article
The Effect of Tactile Feedback on the Manipulation of a Remote Robotic Arm via a Haptic Glove
by Christos Papakonstantinou, Konstantinos Giannakos, George Kokkonis and Maria S. Papadopoulou
Electronics 2025, 14(24), 4964; https://doi.org/10.3390/electronics14244964 - 18 Dec 2025
Viewed by 496
Abstract
This paper investigates the effect of tactile feedback on the power efficiency and timing of controlling a remote robotic arm using a custom-built haptic glove. The glove integrates flex sensors to monitor finger movements and vibration motors to provide tactile feedback to the user. Communication with the robotic arm is established via the ESP-NOW protocol using an Arduino Nano ESP32 microcontroller (Arduino, Turin, Italy). This study examines the impact of tactile feedback on task performance by comparing precision, completion time, and power efficiency in object manipulation tasks with and without feedback. Experimental results demonstrate that tactile feedback significantly enhances the user’s control accuracy, reduces task execution time, and enables the user to control hand movement precisely during object grasping. The results also highlight the importance of tactile feedback in teleoperation systems. These findings have implications for improving human–robot interaction in remote manipulation scenarios, such as assistive robotics, remote surgery, and hazardous environment operations.
(This article belongs to the Special Issue Advanced Research in Technology and Information Systems, 2nd Edition)
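A typical way to turn a flex-sensor reading into a joint command is a clamped linear map; the sketch below illustrates that idea, with calibration constants that are assumptions rather than values from the paper.

```python
def flex_to_angle(raw, raw_min=200, raw_max=800, angle_max=90.0):
    """Map a flex-sensor ADC reading to a finger bend angle (degrees)
    by clamped linear interpolation between calibration endpoints."""
    raw = min(max(raw, raw_min), raw_max)
    return angle_max * (raw - raw_min) / (raw_max - raw_min)
```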

40 pages, 3275 KB  
Article
Siphon-Based Deadlock Prevention of Complex Automated Manufacturing Systems Using Generalized Petri Nets
by František Čapkovič
Electronics 2025, 14(24), 4889; https://doi.org/10.3390/electronics14244889 - 12 Dec 2025
Viewed by 249
Abstract
Modern AMSs (automated manufacturing systems) bring many benefits on the one hand, but on the other hand, they are cumbersome to coordinate. AMSs consist of various subsystems (e.g., production lines) that share a finite number of resources (robots, machines, buffers, automated guided vehicles, etc.). This forces AMS designers to build flexible and decentralized systems. However, in these cases, the danger of deadlocks exists. Consequently, such a situation requires the application of advanced supervisors. One solution to the deadlock problem is the application of Petri nets. This paper is motivated by AMS control based on deadlock prevention by means of ordinary Petri nets (OPNs) and generalized Petri nets (GPNs). This paper examines two areas of AMS Petri net-based model structures and presents methods of deadlock prevention. First, simpler structures of AMSs modeled by OPNs and GPNs are investigated, and then more complex structures of AMSs modeled by the same kinds of Petri nets (PNs) are analyzed. The siphon-based approach is used for deadlock prevention in all of these cases. The principal results are introduced, explained, and illustrated through examples, in particular Example 1 and Example 2.
(This article belongs to the Section Artificial Intelligence)
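For readers unfamiliar with the siphon property that underpins this prevention approach: a place set S is a siphon iff every transition that feeds a place in S also consumes from S (•S ⊆ S•). A minimal check follows, with the incidence-matrix layout as an assumption.

```python
def is_siphon(S, pre, post):
    """S: iterable of place indices. pre[p][t] / post[p][t]: arc weights
    place->transition / transition->place. Returns True iff S is a siphon."""
    feeders = {t for p in S for t, w in enumerate(post[p]) if w > 0}    # •S
    consumers = {t for p in S for t, w in enumerate(pre[p]) if w > 0}   # S•
    return feeders <= consumers
```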

18 pages, 4234 KB  
Article
A Four-Chamber Multimodal Soft Actuator and Its Application
by Jiabin Yang, Helei Zhu, Gang Chen, Jianbo Cao, Jiwei Yuan and Kaiwei Wu
Actuators 2025, 14(12), 602; https://doi.org/10.3390/act14120602 - 9 Dec 2025
Viewed by 282
Abstract
Soft robotics represents a rapidly advancing and significant subfield within modern robotics. However, existing soft actuators often face challenges including unwanted deformation modes, limited functional diversity, and a lack of versatility. This paper presents a four-chamber multimodal soft actuator with a centrally symmetric layout and independent pneumatic control. While building on existing multi-chamber concepts, the design incorporates a cruciform constraint layer and inter-chamber gaps to improve directional bending and reduce passive chamber deformation. An empirical model based on the vector superposition of single- and dual-chamber inflations is developed to describe the bending behavior. Experimental results show that the actuator can achieve omnidirectional bending with errors below 5% compared to model predictions. To demonstrate versatility, the actuator is implemented in two distinct applications: a three-finger soft gripper that can grasp objects of various shapes and perform in-hand twisting maneuvers, and a steerable crawling robot that mimics inchworm locomotion. These results highlight the actuator’s potential as a reusable and adaptable driving unit for diverse soft robotic tasks.
(This article belongs to the Section Actuators for Robotics)
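The vector-superposition idea behind the empirical model can be sketched directly: each chamber contributes a bending vector along its own direction, scaled by pressure, and the resultant gives the overall bending magnitude and direction. The linear gain and chamber layout below are assumptions, not the paper's fitted model.

```python
import numpy as np

def bending_vector(pressures, chamber_dirs_deg=(0, 90, 180, 270), k=1.0):
    """Superpose per-chamber bending contributions (gain k * pressure along
    each chamber direction); returns (magnitude, direction in degrees)."""
    t = np.deg2rad(chamber_dirs_deg)
    vx = float(np.sum(k * np.asarray(pressures) * np.cos(t)))
    vy = float(np.sum(k * np.asarray(pressures) * np.sin(t)))
    return np.hypot(vx, vy), np.degrees(np.arctan2(vy, vx))
```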

23 pages, 65396 KB  
Article
Comparative Analysis of the Accuracy and Robustness of the Leap Motion Controller 2
by Daniel Matuszczyk, Mikel Jedrusiak, Denis Fisseler and Frank Weichert
Sensors 2025, 25(24), 7473; https://doi.org/10.3390/s25247473 - 8 Dec 2025
Viewed by 567
Abstract
Along with the ongoing success of virtual/augmented reality (VR/AR) and human–machine interaction (HMI) in the professional and consumer markets, new compatible and inexpensive hand tracking devices are required. One of the contenders in this market is the Leap Motion Controller 2 (LMC2), successor to the popular Leap Motion Controller (LMC1), which has been widely used for scientific hand-tracking applications since its introduction in 2013. To quantify ten years of advances, this study compares both controllers using quantitative tracking metrics and characterizes the interaction space above the sensor. A robot-actuated 3D-printed hand and a motion-capture system provide controlled movements and external reference data. In the central tracking volume, the LMC2 achieves improved performance, reducing palm-position error from 7.9–9.8 mm (LMC1) to 5.2–5.3 mm (LMC2) and lowering positional variability from 1.3–2.2 mm to 0.4–0.8 mm. Dynamic tests confirm stable tracking for both devices. For boundary experiments, the LMC2 maintains continuous detection at distances up to 666 mm, compared to 250–275 mm (LMC1), and detects hands entering the field of view from distances up to 646 mm. Both devices show reduced accuracy toward the edges of the tracking volume. Overall, the results provide a grounded characterization of LMC2 performance in its newly emphasized VR/AR-relevant interaction spaces, while the metrics support cross-comparison with earlier LMC1-based studies and transfer to related application scenarios.
(This article belongs to the Section Sensors and Robotics)
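The palm-position figures above are per-frame Euclidean errors against the motion-capture reference; a minimal sketch of that computation (frame alignment between the two streams is assumed, and the function name is hypothetical):

```python
import numpy as np

def palm_position_error(tracked, reference):
    """Mean and standard deviation of the Euclidean palm-position error
    (same units as input, e.g. mm); both arrays shaped (frames, 3)."""
    err = np.linalg.norm(np.asarray(tracked) - np.asarray(reference), axis=1)
    return err.mean(), err.std()
```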

22 pages, 12089 KB  
Article
A Brain–Computer Interface for Control of a Virtual Prosthetic Hand
by Ángel del Rosario Zárate-Ruiz, Manuel Arias-Montiel and Christian Eduardo Millán-Hernández
Computation 2025, 13(12), 287; https://doi.org/10.3390/computation13120287 - 6 Dec 2025
Viewed by 1137
Abstract
Brain–computer interfaces (BCIs) have emerged as an option for improved communication between humans and technological devices. This article presents a BCI based on the steady-state visual evoked potentials (SSVEP) paradigm and low-cost hardware to control a virtual prototype of a robotic hand. An LED-based device is proposed as a visual stimulator, and the Open BCI Ultracortex Biosensing Headset is used to acquire the electroencephalographic (EEG) signals for the BCI. The processing and classification of the obtained signals are described. Classifiers based on artificial neural networks (ANNs) and support vector machines (SVMs) are compared, demonstrating that the SVM-based classifiers outperform those based on ANNs. The classified EEG signals are used to implement different movements in a virtual prosthetic hand using a co-simulation approach, demonstrating the feasibility of using BCIs to control robotic hands.
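SSVEP decoding commonly reduces to scoring spectral power at each stimulation frequency; the sketch below illustrates that baseline. The stimulation frequencies and single-channel input are assumptions, not the paper's setup, which feeds such features to SVM/ANN classifiers.

```python
import numpy as np

def ssvep_target(eeg, fs, stim_freqs=(8.0, 10.0, 12.0, 15.0)):
    """Pick the attended stimulus as the frequency with the highest
    spectral power. eeg: 1-D occipital-channel signal sampled at fs Hz."""
    power = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = [power[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]
```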

32 pages, 6322 KB  
Article
Development of a Robotic Manipulator for Piano Performance via Numbered Musical Notation Recognition
by Pu-Sheng Tsai, Ter-Feng Wu and Chen-Ting Liao
Machines 2025, 13(12), 1121; https://doi.org/10.3390/machines13121121 - 5 Dec 2025
Viewed by 443
Abstract
This paper presents a piano-playing robotic system that integrates numbered musical notation recognition with automated manipulator control. The system captures the notation using a camera, applies four-point detection for perspective correction, and performs measure segmentation through an orthogonal projection method. A pixel-scanning technique is then used to locate the positions of numerical notes, pitch dots, and rhythmic markers. Digit recognition is achieved using a CNN model trained on both the MNIST handwritten digit dataset and a custom computer-font digit dataset (CFDD), enabling robust identification of numerical symbols under varying font styles. The hardware platform consists of a 3D-printed robotic hand mounted on a linear rail and driven by an ESP32-based embedded controller with custom driver circuits. According to the recognized musical notes, the manipulator executes lateral positioning and vertical key-press motions to reproduce piano melodies. Experimental results demonstrate reliable notation recognition and accurate performance execution, confirming the feasibility of combining computer vision and robotic manipulation for low-cost, automated musical performance.
(This article belongs to the Special Issue Advances and Challenges in Robotic Manipulation)
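The four-point perspective correction step maps the detected corners of the notation sheet onto an upright rectangle; a minimal OpenCV sketch follows (corner ordering and output size are assumptions, not the paper's values).

```python
import cv2
import numpy as np

def correct_perspective(image, corners, out_w=800, out_h=600):
    """Warp the sheet so measure segmentation sees an upright rectangle.
    corners: four (x, y) points in TL, TR, BR, BL order."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))
```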
