Search Results (91)

Search Parameters:
Keywords = robot hand-arm system

20 pages, 23222 KiB  
Article
A Multi-View Three-Dimensional Scanning Method for a Dual-Arm Hand–Eye System with Global Calibration of Coded Marker Points
by Tenglong Zheng, Xiaoying Feng, Siyuan Wang, Haozhen Huang and Shoupeng Li
Micromachines 2025, 16(7), 809; https://doi.org/10.3390/mi16070809 - 13 Jul 2025
Viewed by 548
Abstract
To achieve robust and accurate collaborative 3D measurement under complex noise conditions, a global calibration method for dual-arm hand–eye systems and multi-view 3D imaging is proposed. A multi-view 3D scanning approach based on ICP (M3DHE-ICP) integrates a multi-frequency heterodyne coding phase solution with ICP optimization, effectively correcting stitching errors caused by robotic arm attitude drift. After correction, the average 3D imaging error is 0.082 mm, reduced by 0.330 mm. A global calibration method based on encoded marker points (GCM-DHE) is also introduced. By leveraging spatial geometry constraints and a dynamic tracking model of marker points, the transformation between multi-coordinate systems of the dual arms is robustly solved. This reduces the average imaging error to 0.100 mm, 0.456 mm lower than that of traditional circular calibration plate methods. In actual engineering measurements, the average error for scanning a vehicle’s front mudguard is 0.085 mm, with a standard deviation of 0.018 mm. These methods demonstrate significant value for intelligent manufacturing and multi-robot collaborative measurement.
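For readers unfamiliar with the stitching step, pairwise ICP refinement of two partial scans is the core operation being corrected here. A minimal Open3D sketch, with placeholder file paths and an assumed correspondence threshold (not values from the paper), seeded by an initial transform from the calibrated hand–eye chain:

```python
import numpy as np
import open3d as o3d

# Two partial scans captured from different arm poses (placeholder file paths).
source = o3d.io.read_point_cloud("scan_view_1.ply")
target = o3d.io.read_point_cloud("scan_view_2.ply")

# Initial alignment from the calibrated hand-eye chain (identity as a stand-in).
T_init = np.eye(4)

# Point-to-plane ICP needs target normals; the 1.0-unit correspondence
# threshold is an assumed value, not one reported by the paper.
target.estimate_normals()
result = o3d.pipelines.registration.registration_icp(
    source, target, 1.0, T_init,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("refined transform:\n", result.transformation)
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
```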

18 pages, 2469 KiB  
Article
A Next-Best-View Method for Complex 3D Environment Exploration Using Robotic Arm with Hand-Eye System
by Michal Dobiš, Jakub Ivan, Martin Dekan, František Duchoň, Andrej Babinec and Róbert Málik
Appl. Sci. 2025, 15(14), 7757; https://doi.org/10.3390/app15147757 - 10 Jul 2025
Viewed by 276
Abstract
The ability to autonomously generate up-to-date 3D models of robotic workcells is critical for advancing smart manufacturing, yet existing Next-Best-View (NBV) methods often rely on paradigms ill-suited for the fixed-base manipulators found in dynamic industrial environments. To address this gap, this paper proposes a novel NBV method for the complete exploration of a 6-DOF robotic arm’s workspace. Our approach integrates a collision-based information gain metric, a potential field technique to generate candidate views from exploration frontiers, and a tunable fitness function to balance information gain with motion cost. The method was rigorously tested in three simulated scenarios and validated on a physical industrial robot. Results demonstrate that our approach successfully maps the majority of the workspace in all setups, with a balanced weighting strategy proving most effective for combining exploration speed and path efficiency, a finding confirmed in the real-world experiment. We conclude that our method provides a practical and robust solution for autonomous workspace mapping, offering a flexible, training-free approach that advances the state of the art for on-demand 3D model generation in industrial robotics.
(This article belongs to the Special Issue Smart Manufacturing and Industry 4.0, 2nd Edition)
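The tunable fitness function the abstract describes, trading information gain against motion cost, reduces to a weighted score over candidate views. A generic sketch, with hypothetical gain/cost callables standing in for the authors' metrics:

```python
def nbv_fitness(gain, cost, w=0.5):
    """Score a candidate view: higher expected information gain is better,
    higher motion cost is worse; w in [0, 1] tunes the balance."""
    return w * gain - (1.0 - w) * cost

def select_next_best_view(candidates, info_gain, motion_cost, w=0.5):
    """candidates: iterable of candidate camera poses generated at frontiers;
    info_gain / motion_cost: hypothetical evaluators for each pose."""
    return max(candidates, key=lambda v: nbv_fitness(info_gain(v), motion_cost(v), w))
```

A balanced w (around 0.5) matches the weighting strategy the paper found most effective for combining exploration speed with path efficiency.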

31 pages, 9881 KiB  
Article
Guide Robot Based on Image Processing and Path Planning
by Chen-Hsien Yang and Jih-Gau Juang
Machines 2025, 13(7), 560; https://doi.org/10.3390/machines13070560 - 27 Jun 2025
Viewed by 275
Abstract
While guide dogs remain the primary aid for visually impaired individuals, robotic guides continue to be an important area of research. This study introduces an indoor guide robot designed to physically assist a blind person by holding their hand with a robotic arm and guiding them to a specified destination. To enable hand-holding, we employed a camera combined with object detection to identify the human hand and a closed-loop control system to manage the robotic arm’s movements. For path planning, we implemented a Dueling Double Deep Q Network (D3QN) enhanced with a genetic algorithm. To address dynamic obstacles, the robot utilizes a depth camera alongside fuzzy logic to control its wheels and navigate around them. A 3D point cloud map is generated to determine the start and end points accurately. The D3QN algorithm, supplemented by variables defined using the genetic algorithm, is then used to plan the robot’s path. As a result, the robot can safely guide blind individuals to their destinations without collisions.
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots and UAVs, 2nd Edition)
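For context, the "Double" half of a D3QN changes how the bootstrap target is built (the online network selects the next action, the target network evaluates it), while the "Dueling" half splits Q into value and advantage streams. A NumPy sketch of both standard ideas, with arrays standing in for network outputs:

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    return value + advantages - np.mean(advantages)

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double DQN bootstrap: the online net picks the greedy action,
    the target net evaluates it, curbing Q-value overestimation."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))         # selection (online net)
    return reward + gamma * q_target_next[a_star]  # evaluation (target net)

# Toy example with four discrete actions:
q_next_online = dueling_q(1.0, np.array([0.2, -0.1, 0.5, 0.0]))
q_next_target = dueling_q(0.9, np.array([0.1, 0.0, 0.4, 0.1]))
print(double_dqn_target(1.0, q_next_online, q_next_target))
```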

46 pages, 1347 KiB  
Review
Emerging Frontiers in Robotic Upper-Limb Prostheses: Mechanisms, Materials, Tactile Sensors and Machine Learning-Based EMG Control: A Comprehensive Review
by Beibit Abdikenov, Darkhan Zholtayev, Kanat Suleimenov, Nazgul Assan, Kassymbek Ozhikenov, Aiman Ozhikenova, Nurbek Nadirov and Akim Kapsalyamov
Sensors 2025, 25(13), 3892; https://doi.org/10.3390/s25133892 - 22 Jun 2025
Viewed by 1164
Abstract
Hands are central to nearly every aspect of daily life, so losing an upper limb due to amputation can severely affect a person’s independence. Robotic prostheses offer a promising solution by mimicking many of the functions of a natural arm, leading to an increasing need for advanced prosthetic designs. However, developing an effective robotic hand prosthesis is far from straightforward. It involves several critical steps, including creating accurate models, choosing materials that balance biocompatibility with durability, integrating electronic and sensory components, and perfecting control systems before final production. A key factor in ensuring smooth, natural movements lies in the method of control. One popular approach is to use electromyography (EMG), which relies on electrical signals from the user’s remaining muscle activity to direct the prosthesis. By decoding these signals, we can predict the intended hand and arm motions and translate them into real-time actions. Recent strides in machine learning have made EMG-based control more adaptable, offering users a more intuitive experience. Alongside this, researchers are exploring tactile sensors for enhanced feedback, materials resilient in harsh conditions, and mechanical designs that better replicate the intricacies of a biological limb. This review brings together these advancements, focusing on emerging trends and future directions in robotic upper-limb prosthesis development.
(This article belongs to the Section Wearables)
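EMG-decoding pipelines of the kind this review covers typically window the raw signal, extract time-domain features, and feed a classifier. A generic scikit-learn sketch; the feature set, window length, and classifier are illustrative choices, not recommendations from the review, and the data here is synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def td_features(window):
    """Classic time-domain EMG features for one window of shape (n_samples, n_channels)."""
    mav = np.mean(np.abs(window), axis=0)                        # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))                  # root mean square
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)   # zero crossings
    return np.concatenate([mav, rms, zc])

def sliding_windows(emg, size=200, step=50):
    for start in range(0, len(emg) - size + 1, step):
        yield emg[start:start + size]

# Synthetic stand-in for an 8-channel recording with per-window grasp labels:
rng = np.random.default_rng(0)
emg = rng.standard_normal((2000, 8))
X = np.stack([td_features(w) for w in sliding_windows(emg)])
y = rng.integers(0, 3, len(X))                  # three hypothetical grasp classes
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("predicted grasp class:", clf.predict(X[-1:]))
```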

23 pages, 2568 KiB  
Article
Reinforcement Learning-Driven Digital Twin for Zero-Delay Communication in Smart Greenhouse Robotics
by Cristian Bua, Luca Borgianni, Davide Adami and Stefano Giordano
Agriculture 2025, 15(12), 1290; https://doi.org/10.3390/agriculture15121290 - 15 Jun 2025
Viewed by 839
Abstract
This study presents a networked cyber-physical architecture that integrates a Reinforcement Learning-based Digital Twin (DT) to enable zero-delay interaction between physical and digital components in smart agriculture. The proposed system allows real-time remote control of a robotic arm inside a hydroponic greenhouse, using a sensor-equipped Wearable Glove (SWG) for hand motion capture. The DT operates in three coordinated modes: Real2Digital, Digital2Real, and Digital2Digital, supporting bidirectional synchronization and predictive simulation. A core innovation lies in the use of a Reinforcement Learning model to anticipate hand motions, thereby compensating for network latency and enhancing the responsiveness of the virtual–physical interaction. The architecture was experimentally validated through a detailed communication delay analysis, covering sensing, data processing, network transmission, and 3D rendering. While results confirm the system’s effectiveness under typical conditions, performance may vary under unstable network scenarios. This work represents a promising step toward real-time adaptive DTs in complex smart greenhouse environments.
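The latency-compensation idea, predicting the operator's hand one network delay ahead so the twin and the physical arm stay synchronized, can be illustrated with the simplest possible stand-in for the paper's learned model: a constant-velocity extrapolator over glove samples.

```python
import numpy as np

def predict_ahead(pose_prev, pose_now, dt, latency):
    """Extrapolate the glove pose `latency` seconds into the future,
    assuming constant velocity over the last sample interval."""
    velocity = (pose_now - pose_prev) / dt
    return pose_now + velocity * latency

# e.g. a 3-D wrist position sampled at 100 Hz with 40 ms round-trip latency
# (all numbers illustrative):
p_prev, p_now = np.array([0.10, 0.20, 0.30]), np.array([0.11, 0.20, 0.31])
print(predict_ahead(p_prev, p_now, dt=0.01, latency=0.04))
```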

23 pages, 5095 KiB  
Article
Human-Machine Interaction: A Vision-Based Approach for Controlling a Robotic Hand Through Human Hand Movements
by Gerardo García-Gil, Gabriela del Carmen López-Armas and José de Jesús Navarro
Technologies 2025, 13(5), 169; https://doi.org/10.3390/technologies13050169 - 23 Apr 2025
Cited by 1 | Viewed by 725
Abstract
An anthropomorphic robot is a mechanical device designed to perform human-like tasks, such as manipulating objects, and has been one of the significant contributions in robotics over the past 60 years. This paper presents an advanced system for controlling a robotic arm using user hand gestures and movements. It eliminates the need for traditional sensors or physical controls by implementing an intuitive approach based on MediaPipe and computer vision. The system recognizes the user’s hand movements and translates them into commands sent to a microcontroller, which operates a robotic hand equipped with six servomotors: five for the fingers and one for the wrist. The wrist stands out for its orthonormal design, which avoids occlusion problems in rotations of up to 180° and guarantees precise control. Unlike conventional systems, this approach uses only a 2D camera to capture movements, simplifying the design and reducing costs. The proposed system replicates the user’s movements with high precision, expanding the possibilities of human-robot interaction. Notably, the system has been able to replicate the user’s hand gestures with an accuracy of up to 95%.
(This article belongs to the Special Issue Image Analysis and Processing)
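The pipeline described, a single 2D camera plus MediaPipe hand landmarks driving six servos, can be sketched as follows. MediaPipe's Hands API is real; the flexion-to-angle mapping and the serial command format are hypothetical stand-ins:

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        wrist = lm[0]
        # Crude flexion proxy: fingertip-to-wrist distance in normalized image
        # coordinates, mapped to a 0-180 degree servo command (hypothetical mapping).
        for tip in (4, 8, 12, 16, 20):  # thumb..pinky fingertip indices
            d = ((lm[tip].x - wrist.x) ** 2 + (lm[tip].y - wrist.y) ** 2) ** 0.5
            angle = max(0, min(180, int(d * 600)))
            # e.g. serial.write(f"{tip}:{angle}\n".encode())  -- protocol is hypothetical

cap.release()
```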

19 pages, 11348 KiB  
Article
Vision-Based Grasping Method for Prosthetic Hands via Geometry and Symmetry Axis Recognition
by Yi Zhang, Yanwei Xie, Qian Zhao, Xiaolei Xu, Hua Deng and Nianen Yi
Biomimetics 2025, 10(4), 242; https://doi.org/10.3390/biomimetics10040242 - 15 Apr 2025
Viewed by 667
Abstract
This paper proposes a grasping method for prosthetic hands based on object geometry and symmetry axis. The method utilizes computer vision to extract the geometric shape, spatial position, and symmetry axis of target objects and selects appropriate grasping modes and postures through the extracted features. First, grasping patterns are classified based on the analysis of hand-grasping movements. A mapping relationship between object geometry and grasp patterns is established. Then, target object images are captured using binocular depth cameras, and the YOLO algorithm is employed for object detection. The SIFT algorithm is applied to extract the object’s symmetry axis, thereby determining the optimal grasp point and initial hand posture. An experimental platform is built based on a seven-degree-of-freedom (7-DoF) robotic arm and a multi-mode prosthetic hand to conduct grasping experiments on objects with different characteristics. Experimental results demonstrate that the proposed method achieves high accuracy and real-time performance in recognizing object geometric features. The system can automatically match appropriate grasp modes according to object features, improving grasp stability and success rate.
(This article belongs to the Special Issue Human-Inspired Grasp Control in Robotics 2025)
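One common way to recover a bilateral symmetry axis with SIFT is to match an image against its own mirror, so that every matched pair votes for the axis. The abstract does not spell out the authors' exact procedure, so the OpenCV sketch below is a generic version of that technique with a placeholder image path:

```python
import cv2
import numpy as np

img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
mirrored = cv2.flip(img, 1)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img, None)
kp2, des2 = sift.detectAndCompute(mirrored, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

# Each match pairs a point with its mirror image; after un-flipping x, the
# midpoints of those pairs cluster along a vertical symmetry axis.
w = img.shape[1]
mid_x = [(kp1[m.queryIdx].pt[0] + (w - kp2[m.trainIdx].pt[0])) / 2 for m in good]
print("estimated vertical symmetry axis at x ≈", np.median(mid_x))
```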

23 pages, 10794 KiB  
Article
Hand–Eye Separation-Based First-Frame Positioning and Follower Tracking Method for Perforating Robotic Arm
by Handuo Zhang, Jun Guo, Chunyan Xu and Bin Zhang
Appl. Sci. 2025, 15(5), 2769; https://doi.org/10.3390/app15052769 - 4 Mar 2025
Viewed by 726
Abstract
In subway tunnel construction, current hand–eye integrated drilling robots use a camera mounted on the drilling arm for image acquisition. However, dust interference and long-distance operation degrade image quality, affecting the stability and accuracy of the visual recognition system. Additionally, the computational complexity of high-precision detection models limits deployment on resource-constrained edge devices, such as industrial controllers. To address these challenges, this paper proposes a dual-arm tunnel drilling robot system with hand–eye separation, using a first-frame localization and follower tracking method. The vision arm (“eye”) provides real-time position data to the drilling arm (“hand”), ensuring accurate and efficient operation. The study employs an RFBNet model for initial-frame localization, replacing the original VGG16 backbone with ShuffleNet V2. This reduces model parameters by 30% (135.5 MB vs. 146.3 MB) through channel splitting and depthwise separable convolutions, lowering computational complexity. Additionally, the GIoU loss function replaces the traditional IoU, optimizing bounding box regression via the minimum enclosing box; this resolves the vanishing-gradient problem of IoU for non-overlapping boxes and improves average precision (AP) by 3.3% (from 0.91 to 0.94). For continuous tracking, a SiamRPN-based algorithm combined with Kalman filtering and PID control ensures robustness against occlusions and nonlinear disturbances, increasing the success rate by 1.6% (0.639 vs. 0.629). Experimental results show that this approach significantly improves tracking accuracy and operational stability, achieving 31 FPS inference speed on edge devices and providing a deployable solution for the safety and efficiency needs of tunnel construction.
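The GIoU loss referenced above augments IoU with a penalty derived from the smallest enclosing box, which keeps the gradient informative even when the predicted and ground-truth boxes do not overlap. A direct transcription of the standard definition:

```python
def giou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2). GIoU = IoU - (area(C) - union) / area(C),
    where C is the smallest box enclosing both; the loss is 1 - GIoU."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (c_area - union) / c_area

print(1 - giou((0, 0, 2, 2), (1, 1, 3, 3)))  # loss for two partially overlapping boxes
```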

15 pages, 3274 KiB  
Article
Gesture-Controlled Robotic Arm for Small Assembly Lines
by Georgios Angelidis and Loukas Bampis
Machines 2025, 13(3), 182; https://doi.org/10.3390/machines13030182 - 25 Feb 2025
Cited by 1 | Viewed by 1593
Abstract
In this study, we present a gesture-controlled robotic arm system for small assembly lines. Robotic arms are extensively used in industrial applications; however, they typically require special treatment and qualified personnel to set up and operate them. Hand gestures can provide a natural means of human–robot interaction, enabling straightforward control without significant operator training. Our goal is to develop a safe, low-cost, and user-friendly system for environments that often involve non-repetitive and custom automation processes, such as small factory setups. Our system estimates the 3D position of the user’s joints in real time with the help of AI and real-world data provided by an RGB-D camera. Joint coordinates are then translated into the robotic arm’s desired poses in a simulated environment (ROS), thus achieving gesture control. The experiments we conducted show that the system provides the performance required to control a robotic arm effectively and efficiently.
(This article belongs to the Special Issue AI-Integrated Advanced Robotics Towards Industry 5.0)
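Estimating a user's 3D joint positions from an RGB-D camera comes down to back-projecting each detected 2D joint through the pinhole model using its depth reading. A minimal sketch; the intrinsics below are illustrative values, not the paper's calibration:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with depth z (meters)
    into camera-frame coordinates (X, Y, Z)."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# e.g. a wrist joint detected at pixel (640, 360) with a 1.2 m depth reading,
# using illustrative intrinsics for a 1280x720 RGB-D stream:
print(backproject(640, 360, 1.2, fx=615.0, fy=615.0, cx=640.0, cy=360.0))
```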

19 pages, 8196 KiB  
Article
Human–Robot Interaction Using Dynamic Hand Gesture for Teleoperation of Quadruped Robots with a Robotic Arm
by Jianan Xie, Zhen Xu, Jiayu Zeng, Yuyang Gao and Kenji Hashimoto
Electronics 2025, 14(5), 860; https://doi.org/10.3390/electronics14050860 - 21 Feb 2025
Cited by 3 | Viewed by 2462
Abstract
Human–Robot Interaction (HRI) using hand gesture recognition offers an effective and non-contact approach to enhancing operational intuitiveness and user convenience. However, most existing studies primarily focus on either static sign language recognition or the tracking of hand position and orientation in space. These approaches often prove inadequate for controlling complex robotic systems. This paper proposes an advanced HRI system leveraging dynamic hand gestures for controlling quadruped robots equipped with a robotic arm. The proposed system integrates both semantic and pose information from dynamic gestures to enable comprehensive control over the robot’s diverse functionalities. First, a Depth–MediaPipe framework is introduced to facilitate the precise three-dimensional (3D) coordinate extraction of 21 hand bone keypoints. Subsequently, a Semantic-Pose to Motion (SPM) model is developed to analyze and interpret both the pose and semantic aspects of hand gestures. This model translates the extracted 3D coordinate data into corresponding mechanical actions in real-time, encompassing quadruped robot locomotion, robotic arm end-effector tracking, and semantic-based command switching. Extensive real-world experiments demonstrate the proposed system’s effectiveness in achieving real-time interaction and precise control, underscoring its potential for enhancing the usability of complex robotic platforms.
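The SPM model's two-channel idea, reading a semantic command and a continuous pose from the same 21-keypoint stream, can be sketched schematically. Everything named below (the classifier, the command string, the mode set) is a hypothetical stand-in, not the paper's interface:

```python
def interpret_gesture(keypoints_3d, classify, mode):
    """Two-channel reading of a hand-keypoint stream, in the spirit of the SPM
    model: a semantic channel (command switching) and a pose channel (tracking).
    `classify`, the command strings, and the modes are all hypothetical."""
    command = classify(keypoints_3d)                       # semantic channel
    if command == "switch_mode":
        return {"mode": "arm" if mode == "locomotion" else "locomotion"}
    wrist, index_tip = keypoints_3d[0], keypoints_3d[8]    # MediaPipe indexing
    return {"mode": mode, "origin": wrist, "target": index_tip}  # pose channel
```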

14 pages, 4877 KiB  
Article
Systematic Evaluation of IMU Sensors for Application in Smart Glove System for Remote Monitoring of Hand Differences
by Amy Harrison, Andrea Jester, Surej Mouli, Antonio Fratini and Ali Jabran
Sensors 2025, 25(1), 2; https://doi.org/10.3390/s25010002 - 24 Dec 2024
Viewed by 1950
Abstract
Human hands have over 20 degrees of freedom, enabled by a complex system of bones, muscles, and joints. Hand differences can significantly impair dexterity and independence in daily activities. Accurate assessment of hand function, particularly digit movement, is vital for effective intervention and rehabilitation. However, current clinical methods rely on subjective observations and limited tests. Smart gloves with inertial measurement unit (IMU) sensors have emerged as tools for capturing digit movements, yet their sensor accuracy remains underexplored. This study developed and validated an IMU-based smart glove system for measuring finger joint movements in individuals with hand differences. The glove measured 3D digit rotations and was evaluated against an industrial robotic arm. Tests included rotations around three axes at 1°, 10°, and 90°, simulating extension/flexion, supination/pronation, and abduction/adduction. The IMU sensors demonstrated high accuracy and reliability, with minimal systematic bias and strong positive correlations (r > 0.95 across all tests). Agreement matrices revealed high agreement (<1°) in 24 trials, moderate agreement (1–10°) in 12 trials, and low agreement (>10°) in only 4 trials. The root mean square error (RMSE) ranged from 1.357° to 5.262° for the 90° tests, 0.094° to 0.538° for the 10° tests, and 0.129° to 0.360° for the 1° tests. Likewise, the mean absolute error (MAE) ranged from 0.967° to 4.679° for the 90° tests, 0.073° to 0.386° for the 10° tests, and 0.102° to 0.309° for the 1° tests. The sensors provided precise measurements of digit angles across 0–90° in multiple directions, enabling reliable clinical assessment, remote monitoring, and improved diagnosis, treatment, and rehabilitation for individuals with hand differences.
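The reported RMSE and MAE are the standard error metrics between glove readings and the robot-arm ground truth; for reference:

```python
import numpy as np

def rmse(measured, truth):
    """Root mean square error between measured and reference angles."""
    return float(np.sqrt(np.mean((np.asarray(measured) - np.asarray(truth)) ** 2)))

def mae(measured, truth):
    """Mean absolute error between measured and reference angles."""
    return float(np.mean(np.abs(np.asarray(measured) - np.asarray(truth))))

# e.g. five repeated 10-degree flexion trials against the robot-arm reference
# (numbers illustrative, not from the study):
trials = [9.8, 10.3, 9.6, 10.1, 10.4]
print(rmse(trials, [10] * 5), mae(trials, [10] * 5))
```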

19 pages, 6078 KiB  
Article
Using a Guidance Virtual Fixture on a Soft Robot to Improve Ureteroscopy Procedures in a Phantom
by Chun-Feng Lai, Elena De Momi, Giancarlo Ferrigno and Jenny Dankelman
Robotics 2024, 13(9), 140; https://doi.org/10.3390/robotics13090140 - 18 Sep 2024
Viewed by 1438
Abstract
Manipulating a flexible ureteroscope is difficult due to its bendable body and hand–eye coordination problems, especially when exploring the lower pole of the kidney. Though robotic interventions have been adopted in various clinical scenarios, they are rarely used in ureteroscopy. This study proposes a teleoperation system consisting of a soft robotic endoscope together with a Guidance Virtual Fixture (GVF) to help users explore the kidney’s lower pole. The soft robotic arm was a cable-driven, 3D-printed design with a helicoid structure. The GVF was dynamically constructed from the endoscopic camera’s video stream. Through a haptic controller, the GVF provides haptic feedback that guides users along a trajectory. In the user study, participants were asked to follow trajectories while the soft robotic arm was in a retroflexed posture. The results suggest that the GVF can reduce errors in trajectory tracking tasks once users receive proper training and gain experience. Based on the NASA Task Load Index questionnaires, most participants preferred having the GVF when manipulating the robotic arm. In conclusion, the results demonstrate the benefits and potential of a robotic arm with a GVF. More research is needed to investigate the effectiveness of GVFs and robotic endoscopes in ureteroscopic procedures.
(This article belongs to the Section Soft Robotics)
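A guidance virtual fixture is typically rendered as a spring that pulls the haptic device toward the nearest point of the reference trajectory. A minimal sketch of that idea, with an illustrative stiffness and trajectory rather than the study's parameters:

```python
import numpy as np

def gvf_force(tool_pos, trajectory, stiffness=50.0):
    """Attract the haptic device toward the closest vertex of a polyline
    trajectory; returns the guidance force in newtons (pure spring model)."""
    pts = np.asarray(trajectory)
    nearest = pts[np.argmin(np.linalg.norm(pts - tool_pos, axis=1))]
    return stiffness * (nearest - tool_pos)

# Illustrative trajectory (meters) and current device position:
path = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.02, 0.005, 0.0)]
print(gvf_force(np.array([0.012, 0.004, 0.0]), path))
```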

25 pages, 6749 KiB  
Article
Application of Artificial Neuromolecular System in Robotic Arm Control to Assist Progressive Rehabilitation for Upper Extremity Stroke Patients
by Jong-Chen Chen and Hao-Ming Cheng
Actuators 2024, 13(9), 362; https://doi.org/10.3390/act13090362 - 16 Sep 2024
Cited by 1 | Viewed by 1576
Abstract
Regaining free movement of the hands is the greatest hope of stroke patients, yet recovery is a long road for many. If artificial intelligence can assist human arm movement, the likelihood of stroke patients returning to normal hand function might be significantly increased. This study uses the artificial neuromolecular system (ANM system) developed in our laboratory as the core of motion control, aiming to learn to control a robotic arm so that it produces actions similar to human rehabilitation exercises and transitions between different activities. The research adopts two methods. The first is hypothetical exploration, a so-called “artificial world” simulation: the V-REP (Virtual Robot Experimentation Platform) is used to conduct experimental runs and capture relevant data, from which an action database is systematically established. Using these data, the ANM system, with its self-organization and learning capabilities, develops relationships between actions and establishes how one activity can be converted into another. The second method uses clinical data from a hospital in Toronto, Canada. Our experimental results show that the ANM system can learn continuously for problem-solving. In addition, three experiments on adaptive learning, transfer learning, and cross-task learning further confirm that the ANM system can build on previously learned behavior to complete new tasks through autonomous learning, instead of learning from scratch.

23 pages, 11097 KiB  
Article
Multimodal Framework for Fine and Gross Upper-Limb Motor Coordination Assessment Using Serious Games and Robotics
by Edwin Daniel Oña, Norali Pernalete and Alberto Jardón
Appl. Sci. 2024, 14(18), 8175; https://doi.org/10.3390/app14188175 - 11 Sep 2024
Viewed by 1359
Abstract
A critical element of neurological function is eye–hand coordination: the ability of our vision system to coordinate the information received through the eyes to control, guide, and direct the hands to accomplish a task. Recent evidence shows that this ability can be disturbed by strokes or other neurological disorders, with critical consequences for motor behaviour. This paper presents a system based on serious games and multimodal devices aimed at improving the assessment of eye–hand coordination. The system implements gameplay that involves drawing specific patterns (labyrinths) to capture hand trajectories. The user can draw the path using multimodal devices such as a mouse, a stylus with a tablet, or robotic devices. Multimodal input devices can allow for the evaluation of complex coordinated movements of the upper limb that involve the synergistic motion of arm joints, depending on the device. A preliminary test of technological validation with healthy volunteers was conducted in the laboratory. The Dynamic Time Warping (DTW) index was used to compare hand trajectories without considering time-series lag. The results suggest that this multimodal framework allows for measuring differences between fine and gross motor skills. Moreover, the results support the viability of this system for developing a high-resolution metric for measuring eye–hand coordination in neurorehabilitation.
(This article belongs to the Special Issue Robotics, IoT and AI Technologies in Bioengineering)
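The Dynamic Time Warping index used here compares two trajectories while allowing them to be locally stretched in time; its dynamic-programming form is compact enough to state in full:

```python
import numpy as np

def dtw_distance(a, b):
    """DTW between two sequences (scalars or points): the minimum cumulative
    pointwise distance over all monotone alignments of a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two similar hand paths sampled at different speeds score close to zero:
print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))
```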

63 pages, 37620 KiB  
Article
BLUE SABINO: Development of a BiLateral Upper-Limb Exoskeleton for Simultaneous Assessment of Biomechanical and Neuromuscular Output
by Christopher K. Bitikofer, Sebastian Rueda Parra, Rene Maura, Eric T. Wolbrecht and Joel C. Perry
Machines 2024, 12(9), 617; https://doi.org/10.3390/machines12090617 - 3 Sep 2024
Cited by 3 | Viewed by 2394
Abstract
Arm and hand function play a critical role in the successful completion of everyday tasks. Lost function due to neurological impairment impacts millions of lives worldwide. Despite improvements in the ability to assess and rehabilitate arm deficits, knowledge about underlying sources of impairment and related sequelae remains limited. The comprehensive assessment of function requires the measurement of both biomechanical and neuromuscular contributors to performance during the completion of tasks that often use multiple joints and span three-dimensional workspaces. To our knowledge, the complexity of movement and diversity of measures required are beyond the capabilities of existing assessment systems. To bridge current gaps in assessment capability, a new exoskeleton instrument is developed with comprehensive bilateral assessment in mind. The development of the BiLateral Upper-limb Exoskeleton for Simultaneous Assessment of Biomechanical and Neuromuscular Output (BLUE SABINO) expands on prior iterations toward full-arm assessment during reach-and-grasp tasks through the development of a dual-arm and dual-hand system, with 9 active degrees of freedom per arm and 12 degrees of freedom (six active, six passive) per hand. Joints are powered by electric motors driven by a real-time control system with input from force and force/torque sensors located at all attachment points between the user and exoskeleton. Biosignals from electromyography and electroencephalography can be simultaneously measured to provide insight into neurological performance during unimanual or bimanual tasks involving arm reach and grasp. Design trade-offs achieve near-human performance in exoskeleton speed and strength, with positional measurement at the wrist having an error of less than 2 mm and a range of motion approximately equivalent to the 50th-percentile human. The system’s adjustability in seat height, shoulder width, arm length, and orthosis width accommodates subjects from approximately the 5th-percentile female to the 95th-percentile male. Integration between precision actuation, human–robot-interaction force/torque sensing, and biosignal acquisition successfully provides simultaneous measurement of human movement and neurological function. The bilateral design enables use with left- or right-side impairments as well as intra-subject performance comparisons. With the resulting instrument, the authors plan to investigate underlying neural and physiological correlates of arm function, impairment, learning, and recovery.
(This article belongs to the Special Issue Advances in Assistive Robotics)
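Exoskeleton joints driven from force/torque sensors at the attachment points are commonly run under admittance control, which converts measured interaction force into commanded motion. The abstract does not name BLUE SABINO's control law, so the one-axis sketch below is a generic illustration with made-up parameters:

```python
def admittance_step(force, vel, mass=2.0, damping=10.0, dt=0.001):
    """One-axis admittance law M*a + B*v = F_ext: integrate the measured
    interaction force into a velocity command for the joint servo."""
    accel = (force - damping * vel) / mass
    return vel + accel * dt  # new commanded velocity

vel = 0.0
for _ in range(100):            # 0.1 s of a constant 5 N user push at 1 kHz
    vel = admittance_step(5.0, vel)
print(round(vel, 4), "m/s commanded after 0.1 s")
```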
