Search Results (47)

Search Parameters:
Keywords = hybrid robot hand

31 pages, 34013 KB  
Article
Vision-Based 6D Pose Analytics Solution for High-Precision Industrial Robot Pick-and-Place Applications
by Balamurugan Balasubramanian and Kamil Cetin
Sensors 2025, 25(15), 4824; https://doi.org/10.3390/s25154824 - 6 Aug 2025
Viewed by 668
Abstract
High-precision 6D pose estimation for pick-and-place operations remains a critical problem for industrial robot arms in manufacturing. This study introduces an analytics-based solution for 6D pose estimation designed for a real-world industrial application: it enables the Stäubli TX2-60L (manufactured by Stäubli International AG, Horgen, Switzerland) robot arm to pick up metal plates from various locations and place them into a precisely defined slot on a brake pad production line. The system uses a fixed eye-to-hand Intel RealSense D435 RGB-D camera (manufactured by Intel Corporation, Santa Clara, California, USA) to capture color and depth data. A robust software infrastructure developed in LabVIEW (ver. 2019) integrated with the NI Vision (ver. 2019) library processes the images through a series of steps, including particle filtering, equalization, and pattern matching, to determine the X-Y positions and Z-axis rotation of the object. The Z-position of the object is calculated from the camera’s intensity data, while the remaining X-Y rotation angles are determined using the angle-of-inclination analytics method. It is experimentally verified that the proposed analytical solution outperforms a hybrid method (YOLO-v8 combined with PnP/RANSAC algorithms). Experimental results across four distinct picking scenarios demonstrate the proposed solution’s superior accuracy, with position errors under 2 mm, orientation errors below 1°, and a perfect success rate in pick-and-place tasks.
(This article belongs to the Section Sensors and Robotics)
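
The pose pipeline summarized above is implemented in LabVIEW with the NI Vision library, which is not reproduced here. Purely as an illustration of the same stages (equalization, rotated pattern matching for X-Y position and Z-rotation, depth-based Z, and a plane-fit interpretation of the angle-of-inclination step), here is a hedged Python/OpenCV sketch; the function name, rotation search step, and plane-fit tilt recovery are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch (not the authors' LabVIEW/NI Vision code): estimate the
# in-plane position/rotation of a plate by template matching on an equalized
# image, read Z from the aligned depth map, and recover the two tilt angles
# from a plane fitted to the depth patch around the match.
import cv2
import numpy as np

def estimate_pose(gray, depth, template, angles=range(0, 360, 2)):
    gray_eq = cv2.equalizeHist(gray)                      # contrast equalization
    best = (-1.0, None, None)                             # (score, top-left, angle)
    h, w = template.shape
    for a in angles:                                      # brute-force Z-rotation search
        M = cv2.getRotationMatrix2D((w / 2, h / 2), a, 1.0)
        rot = cv2.warpAffine(template, M, (w, h))
        res = cv2.matchTemplate(gray_eq, rot, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best[0]:
            best = (score, loc, a)
    score, (x, y), rz = best
    patch = depth[y:y + h, x:x + w].astype(np.float32)    # depth around the match
    z = float(np.median(patch[patch > 0]))                # object Z in depth units
    # Plane fit z = a*x + b*y + c -> tilt about X and Y ("angle of inclination");
    # slopes are depth units per pixel; a real system would convert via intrinsics.
    ys, xs = np.nonzero(patch > 0)
    A = np.c_[xs, ys, np.ones(len(xs))]
    (a_, b_, _), *_ = np.linalg.lstsq(A, patch[ys, xs], rcond=None)
    rx, ry = np.degrees(np.arctan(b_)), np.degrees(np.arctan(a_))
    return (x + w / 2, y + h / 2, z), (rx, ry, rz), score
```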

23 pages, 3542 KB  
Article
An Intuitive and Efficient Teleoperation Human–Robot Interface Based on a Wearable Myoelectric Armband
by Long Wang, Zhangyi Chen, Songyuan Han, Yao Luo, Xiaoling Li and Yang Liu
Biomimetics 2025, 10(7), 464; https://doi.org/10.3390/biomimetics10070464 - 15 Jul 2025
Viewed by 466
Abstract
Although artificial intelligence technologies have significantly enhanced autonomous robots’ capabilities in perception, decision-making, and planning, their autonomy may still fail when faced with complex, dynamic, or unpredictable environments. Therefore, it is critical to enable users to take over robot control in real time and efficiently through teleoperation. The lightweight, wearable myoelectric armband, due to its portability and environmental robustness, provides a natural human–robot gesture interaction interface. However, current myoelectric teleoperation gesture control faces two major challenges: (1) poor intuitiveness due to visual-motor misalignment; and (2) low efficiency from discrete, single-degree-of-freedom control modes. To address these challenges, this study proposes an integrated myoelectric teleoperation interface. The interface integrates the following: (1) a novel hybrid reference frame aimed at effectively mitigating visual-motor misalignment; and (2) a finite state machine (FSM)-based control logic designed to enhance control efficiency and smoothness. Four experimental tasks were designed using different end-effectors (gripper/dexterous hand) and camera viewpoints (front/side view). Compared to benchmark methods, the proposed interface demonstrates significant advantages in task completion time, movement path efficiency, and subjective workload. This work demonstrates the potential of the proposed interface to significantly advance the practical application of wearable myoelectric sensors in human–robot interaction.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 4th Edition)
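
The abstract does not disclose the exact FSM, so the following Python sketch only illustrates how a finite state machine can gate continuous teleoperation modes with discrete gestures; the gesture labels, modes, and transition table are hypothetical.

```python
# Minimal sketch of an FSM-style teleoperation mode switcher, assuming
# hypothetical gesture labels from the armband classifier ("fist", "open",
# "wave_in", "rest"); it is not the authors' state machine, only an
# illustration of discrete gestures gating continuous control modes.
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()
    TRANSLATE = auto()   # wrist motion drives end-effector translation
    ROTATE = auto()      # wrist motion drives end-effector rotation
    GRIPPER = auto()     # flexion/extension opens or closes the gripper

TRANSITIONS = {
    (Mode.IDLE, "fist"): Mode.TRANSLATE,
    (Mode.TRANSLATE, "wave_in"): Mode.ROTATE,
    (Mode.ROTATE, "wave_in"): Mode.TRANSLATE,
    (Mode.TRANSLATE, "open"): Mode.GRIPPER,
    (Mode.ROTATE, "open"): Mode.GRIPPER,
    (Mode.GRIPPER, "rest"): Mode.IDLE,
}

class TeleopFSM:
    def __init__(self):
        self.mode = Mode.IDLE

    def step(self, gesture, wrist_velocity):
        """Advance one control cycle: maybe switch mode, then emit a command."""
        self.mode = TRANSITIONS.get((self.mode, gesture), self.mode)
        if self.mode is Mode.TRANSLATE:
            return {"linear": wrist_velocity, "angular": (0, 0, 0)}
        if self.mode is Mode.ROTATE:
            return {"linear": (0, 0, 0), "angular": wrist_velocity}
        if self.mode is Mode.GRIPPER:
            return {"gripper": "close" if gesture == "fist" else "open"}
        return {}  # IDLE: no motion command

fsm = TeleopFSM()
print(fsm.step("fist", (0.05, 0.0, 0.0)))   # enters TRANSLATE, commands +x motion
```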

22 pages, 4481 KB  
Article
Hybrid Deep Learning Framework for Eye-in-Hand Visual Control Systems
by Adrian-Paul Botezatu, Andrei-Iulian Iancu and Adrian Burlacu
Robotics 2025, 14(5), 66; https://doi.org/10.3390/robotics14050066 - 19 May 2025
Viewed by 1518
Abstract
This work proposes a hybrid deep learning-based framework for visual feedback control in an eye-in-hand robotic system. The framework uses an early fusion approach in which real and synthetic images define the training data. The first layer of a ResNet-18 backbone is augmented to fuse interest-point maps with RGB channels, enabling the network to capture scene geometry better. A manipulator robot with an eye-in-hand configuration provides a reference image, while subsequent poses and images are generated synthetically, removing the need for extensive real data collection. The experimental results reveal that this enriched input representation significantly improves convergence accuracy and velocity smoothness compared to a baseline that processes real images alone. Specifically, including feature point maps allows the network to discriminate crucial elements in the scene, resulting in more precise velocity commands and stable end-effector trajectories. Thus, integrating additional, synthetically generated map data into convolutional architectures can enhance the robustness and performance of the visual servoing system, particularly when real-world data gathering is challenging. Unlike existing visual servoing methods, our early fusion strategy integrates feature maps directly into the network’s initial convolutional layer, allowing the model to learn critical geometric details from the very first stage of training. This approach yields superior velocity predictions and smoother servoing compared to conventional frameworks.
(This article belongs to the Special Issue Visual Servoing-Based Robotic Manipulation)
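
As a concrete reading of the early-fusion idea, the sketch below widens the first convolution of a torchvision ResNet-18 so that RGB and an interest-point map enter the network together; the single extra channel and the 6-D output head are assumptions, not the paper's exact configuration.

```python
# Sketch of early fusion under stated assumptions: one extra interest-point-map
# channel is concatenated with RGB and fed to a ResNet-18 whose first
# convolution is widened to 4 input channels.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_early_fusion_net(extra_channels=1, out_dim=6):
    net = resnet18(weights=None)
    old = net.conv1
    net.conv1 = nn.Conv2d(3 + extra_channels, old.out_channels,
                          kernel_size=old.kernel_size, stride=old.stride,
                          padding=old.padding, bias=False)
    with torch.no_grad():   # copy the RGB filters, init the extra channel from their mean
        net.conv1.weight[:, :3] = old.weight
        net.conv1.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)
    net.fc = nn.Linear(net.fc.in_features, out_dim)   # e.g., 6-D camera velocity
    return net

model = build_early_fusion_net()
rgb = torch.randn(2, 3, 224, 224)
interest_map = torch.randn(2, 1, 224, 224)             # synthetic feature-point map
velocity = model(torch.cat([rgb, interest_map], dim=1))
print(velocity.shape)                                   # torch.Size([2, 6])
```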

25 pages, 13985 KB  
Article
A Low-Cost Prototype of a Soft–Rigid Hybrid Pneumatic Anthropomorphic Gripper for Testing Tactile Sensor Arrays
by Rafał Andrejczuk, Moritz Scharff, Junhao Ni, Andreas Richter and Ernst-Friedrich Markus Vorrath
Actuators 2025, 14(5), 252; https://doi.org/10.3390/act14050252 - 17 May 2025
Viewed by 1143
Abstract
Soft anthropomorphic robotic grippers are attractive because of their inherent compliance, which allows them to adapt to the shape of grasped objects and provides the overload protection needed for safe human–robot interaction or for gripping delicate objects with sophisticated control. The anthropomorphic design allows the gripper to benefit from the biological evolution of the human hand to create a multi-functional robotic end effector. Entirely soft grippers, however, can be less effective because they yield under high loads. A trending solution is a hybrid gripper combining soft and rigid elements. This work describes a prototype of an anthropomorphic, underactuated five-finger gripper with a direct pneumatic drive from soft bending actuators and an integrated resistive tactile sensor array. It is a hybrid construction with soft robotic structures and rigid skeletal elements, which reinforce the body, focus the direction of the actuator’s movement, and make the finger joints follow the forward kinematics. The hand is equipped with a resistive tactile dielectric elastomer sensor array that directly triggers the hand’s actuation in the sense of reflexes. The hand can execute precision grips with two and three fingers, as well as lateral and strong grip types. The softness of the actuation allows the fingers to adapt to the shape of the objects.
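
The reflex-style coupling between the tactile array and the pneumatic actuation could look roughly like the loop below; the taxel threshold, holding pressure, and read_pressure/set_pressure interfaces are hypothetical stand-ins, since the prototype's control software is not described in the abstract.

```python
# Illustrative-only reflex loop: when any taxel of the resistive array exceeds
# a contact threshold, the corresponding finger's pneumatic pressure is clamped
# so the grasp "reflex" fires without a high-level planner.
import numpy as np

CONTACT_THRESHOLD = 0.30     # normalized taxel reading treated as contact (assumed)
HOLD_PRESSURE_KPA = 35.0     # assumed safe holding pressure per finger

def reflex_step(taxels, read_pressure, set_pressure, ramp_kpa=5.0):
    """taxels: (fingers, taxels_per_finger) array of normalized readings."""
    for finger, row in enumerate(np.asarray(taxels)):
        if row.max() >= CONTACT_THRESHOLD:
            set_pressure(finger, HOLD_PRESSURE_KPA)                 # reflex: hold
        else:
            set_pressure(finger, read_pressure(finger) + ramp_kpa)  # keep closing

# Toy stand-in hardware for a dry run
pressures = [0.0] * 5
reflex_step(np.random.rand(5, 8) * 0.5,
            read_pressure=lambda f: pressures[f],
            set_pressure=lambda f, p: pressures.__setitem__(f, p))
print(pressures)
```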

27 pages, 27217 KB  
Article
Improved Anthropomorphic Robotic Hand for Architecture and Construction: Integrating Prestressed Mechanisms with Self-Healing Elastomers
by Mijin Kim, Rubaya Yaesmin, Hyungtak Seo and Hwang Yi
Biomimetics 2025, 10(5), 284; https://doi.org/10.3390/biomimetics10050284 - 1 May 2025
Viewed by 1067
Abstract
Soft pneumatic robot-arm end-effectors can facilitate adaptive architectural fabrication and building construction. However, conventional pneumatic grippers often suffer from air leakage and tearing, particularly under prolonged grasping and inflation-induced stress. To address these challenges, this study proposes an enhanced anthropomorphic gripper that integrates a pre-stressed reversible mechanism (PSRM) and a novel self-healing material (SHM), a polyborosiloxane–Ecoflex™ hybrid polymer (PEHP) developed by the authors. The results demonstrate that PSRM finger grippers can hold various objects without external pressure input (12 mm displacement under a 1.2 N applied force), and the SHM assists with the recovery of mechanical properties after external damage. The proposed robotic hand was evaluated through real-world construction tasks, including wall painting, floor plastering, and block stacking, showcasing its durability and functional performance. These findings contribute to promoting the cost-effective deployment of soft robotic hands in robotic construction.

18 pages, 5498 KB  
Article
Development and Evaluation of a Novel Upper-Limb Rehabilitation Device Integrating Piano Playing for Enhanced Motor Recovery
by Xin Zhao, Ying Zhang, Yi Zhang, Peng Zhang, Jinxu Yu and Shuai Yuan
Biomimetics 2025, 10(4), 200; https://doi.org/10.3390/biomimetics10040200 - 25 Mar 2025
Cited by 1 | Viewed by 783
Abstract
This study developed and evaluated a novel upper-limb rehabilitation device that integrates piano playing into task-oriented occupational therapy, addressing the limitations of traditional continuous passive motion (CPM) training in patient engagement and functional recovery. The system features a bi-axial sliding platform for precise 61-key positioning and a ten-link, four-loop robotic hand for key striking. A hierarchical control framework incorporates MIDI-based task mapping, finger optimization using an improved Hungarian algorithm, and impedance–admittance hybrid control for adaptive force–position modulation. An 8-week randomized controlled trial demonstrated that the experimental group significantly outperformed the control group, with a 74.7% increase in Fugl–Meyer scores (50.5 ± 2.5), a 14.6-point improvement in the box and block test (BBT), a 20.2-s reduction in nine-hole peg test (NHPT) time, and a 72.6% increase in rehabilitation motivation scale (RMS) scores (55.4 ± 3.8). The results indicate that combining piano playing with robotic rehabilitation enhances neuroplasticity and engagement, significantly improving motor function, daily activity performance, and rehabilitation adherence. This mechanical-control synergy introduces a new paradigm for music-interactive rehabilitation, with potential applications in home-based remote therapy and multimodal treatment integration.
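
The finger-optimization step can be pictured with SciPy's Hungarian solver; the travel-distance cost below is an assumed stand-in for the paper's improved Hungarian formulation, and the key indices are arbitrary.

```python
# Sketch of finger-to-key assignment using the Hungarian method
# (scipy.optimize.linear_sum_assignment); the cost model is an assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_fingers(finger_keys, target_keys):
    """finger_keys: key index currently under each finger; target_keys: keys to strike."""
    cost = np.abs(np.subtract.outer(finger_keys, target_keys)).astype(float)  # lateral travel
    rows, cols = linear_sum_assignment(cost)          # minimum-total-travel assignment
    return {int(f): int(target_keys[k]) for f, k in zip(rows, cols)}

# Five fingers resting over keys 30..34, next chord needs keys 31, 33, 36
print(assign_fingers([30, 31, 32, 33, 34], [31, 33, 36]))   # {1: 31, 3: 33, 4: 36}
```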

17 pages, 2630 KB  
Article
Multimodal Deep Learning Model for Cylindrical Grasp Prediction Using Surface Electromyography and Contextual Data During Reaching
by Raquel Lázaro, Margarita Vergara, Antonio Morales and Ramón A. Mollineda
Biomimetics 2025, 10(3), 145; https://doi.org/10.3390/biomimetics10030145 - 27 Feb 2025
Viewed by 747
Abstract
Grasping objects, from simple tasks to complex fine motor skills, is a key component of our daily activities. Our approach to facilitate the development of advanced prosthetics, robotic hands and human–machine interaction systems consists of collecting and combining surface electromyography (EMG) signals and contextual data of individuals performing manipulation tasks. In this context, the identification of patterns and prediction of hand grasp types is crucial, with cylindrical grasp being one of the most common and functional. Traditional approaches to grasp prediction often rely on unimodal data sources, limiting their ability to capture the complexity of real-world scenarios. In this work, grasp prediction models that integrate both EMG signals and contextual (task- and product-related) information have been explored to improve the prediction of cylindrical grasps during reaching movements. Three model architectures are presented: an EMG processing model based on convolutions that analyzes forearm surface EMG data, a fully connected model for processing contextual information, and a hybrid architecture combining both inputs resulting in a multimodal model. The results show that context has great predictive power. Variables such as object size and weight (product-related) were found to have a greater impact on model performance than task height (task-related). Combining EMG and product context yielded better results than using each data mode separately, confirming the importance of product context in improving EMG-based models of grasping.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
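
A minimal PyTorch sketch of the hybrid architecture is given below, assuming an 8-channel sEMG window and a three-value context vector (object size, object weight, task height); all layer sizes are illustrative rather than taken from the paper.

```python
# Three-branch sketch under assumed shapes: a 1-D CNN over an 8-channel sEMG
# window, an MLP over a small context vector, and a fused head predicting
# cylindrical grasp vs. not.
import torch
import torch.nn as nn

class MultimodalGraspNet(nn.Module):
    def __init__(self, emg_channels=8, context_dim=3, n_classes=2):
        super().__init__()
        self.emg = nn.Sequential(                      # EMG branch (temporal convolutions)
            nn.Conv1d(emg_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())     # -> (batch, 64)
        self.context = nn.Sequential(                  # context branch
            nn.Linear(context_dim, 16), nn.ReLU())
        self.head = nn.Linear(64 + 16, n_classes)      # fused classifier

    def forward(self, emg, context):
        return self.head(torch.cat([self.emg(emg), self.context(context)], dim=1))

model = MultimodalGraspNet()
logits = model(torch.randn(4, 8, 400), torch.randn(4, 3))   # 400-sample EMG windows
print(logits.shape)                                          # torch.Size([4, 2])
```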

20 pages, 36005 KB  
Article
A Carpometacarpal Thumb Tracking Device for Telemanipulation of a Robotic Thumb: Development, Prototyping, and Evaluation
by Abdul Hafiz Abdul Rahaman and Panos S. Shiakolas
Appl. Sci. 2025, 15(3), 1301; https://doi.org/10.3390/app15031301 - 27 Jan 2025
Viewed by 1074
Abstract
Hand-tracking systems are widely employed for telemanipulating grippers with high degrees of freedom (DOFs) such as an anthropomorphic robotic hand (ARH). However, tracking human thumb motion is challenging due to the complex motion of the carpometacarpal (CMC) joint. Existing hand-tracking systems can track the motion of simple joints with one DOF, but most fail to track the motion of the CMC joint or require expensive, intricately set up hardware systems to do so. This research introduces and realizes an affordable and personalizable tracking device to capture the CMC joint Flexion/Extension and Abduction/Adduction motions. Tracked human thumb motion is mapped to a robot thumb in a hybrid approach: the proposed algorithm maps the CMC joint motion to the first two joints of the robot thumb, while joint mapping is established between the metacarpophalangeal and interphalangeal joints to the last two joints. When the tracking device is paired with a flex glove outfitted with bend sensors, the developed system provides the means to telemanipulate an ARH with a four-DOF thumb and one-DOF underactuated digits. A three-stage framework is proposed to telemanipulate the fully actuated robot thumb. The tracking device and framework were evaluated through a device operation and personalization test, as well as a framework verification test. Two volunteers successfully personalized, calibrated, and tested the device using the proposed mapping algorithm. One volunteer further evaluated the framework by performing hand poses and grasps, demonstrating effective control of the robot thumb for precision and power grasps in coordination with the other digits. The successful results support expanding the system and further evaluating it as a research platform for studying human–robot interaction in grasping tasks or in manufacturing, assistive, or medical domains.
(This article belongs to the Special Issue Human–Robot Collaboration and Its Applications)
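
The hybrid mapping can be illustrated with a per-user calibration plus linear remapping, as sketched below; the joint names, ranges, and sensor values are assumptions for illustration, not the authors' algorithm.

```python
# Hedged sketch: per-user calibration records the device's raw min/max for CMC
# flexion/extension and abduction/adduction, and each reading is linearly
# remapped onto the corresponding robot-thumb joint; glove bend-sensor values
# map one-to-one onto the last two joints. Limits and names are assumed.
import numpy as np

ROBOT_LIMITS = {                # assumed robot-thumb joint ranges (radians)
    "cmc_fe": (-0.3, 1.2), "cmc_aa": (-0.5, 0.5),
    "mcp": (0.0, 1.4), "ip": (0.0, 1.2),
}

def calibrate(samples):
    """samples: dict of raw sensor traces recorded while the user explores full range."""
    return {k: (min(v), max(v)) for k, v in samples.items()}

def map_thumb(raw, cal):
    """Map one frame of raw readings to robot-thumb joint targets."""
    joints = {}
    for name, value in raw.items():
        lo, hi = cal[name]
        t = np.clip((value - lo) / (hi - lo + 1e-9), 0.0, 1.0)   # normalized position
        jlo, jhi = ROBOT_LIMITS[name]
        joints[name] = jlo + t * (jhi - jlo)
    return joints

cal = calibrate({"cmc_fe": [0.1, 0.9], "cmc_aa": [0.2, 0.8],
                 "mcp": [0.0, 1.0], "ip": [0.0, 1.0]})
print(map_thumb({"cmc_fe": 0.5, "cmc_aa": 0.6, "mcp": 0.7, "ip": 0.3}, cal))
```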

13 pages, 3285 KB  
Article
Utilization of Machine Learning and Explainable Artificial Intelligence (XAI) for Fault Prediction and Diagnosis in Wafer Transfer Robot
by Jeong Eun Jeon, Sang Jeen Hong and Seung-Soo Han
Electronics 2024, 13(22), 4471; https://doi.org/10.3390/electronics13224471 - 14 Nov 2024
Cited by 1 | Viewed by 1498
Abstract
Faults in the wafer transfer robots (WTRs) used in semiconductor manufacturing processes can significantly affect productivity. This study defines high-risk components such as bearing motors, ball screws, timing belts, robot hands, and end effectors, and generates fault data for each component based on Fluke’s law. A stacking classifier was applied for fault prediction and severity classification, and logistic regression was used to identify fault components. Additionally, to analyze the frequency bands affecting each failed component and assess the severity of faults involving two mixed components, a hybrid explainable artificial intelligence (XAI) model combining Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) was employed to inform the user about the component causing the fault. This approach demonstrated a high prediction accuracy of 95%, and its integration into real-time monitoring systems is expected to reduce maintenance costs, decrease equipment downtime, and ultimately improve productivity.
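
The classification stage (a stacking ensemble with a logistic-regression meta-learner) can be sketched with scikit-learn as below; the base learners and the synthetic stand-in data are assumptions, and the SHAP/LIME attribution step is only indicated in a comment.

```python
# Sketch of the classification stage only, with synthetic features standing in
# for the WTR data: a stacking ensemble whose meta-learner is logistic
# regression, as the abstract describes; the base learners are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)   # 5 fault classes (toy stand-in)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
# Per-class feature attributions could then be computed with SHAP/LIME on
# clf.predict_proba, as in the paper's hybrid XAI step (not shown here).
```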

21 pages, 7973 KB  
Article
Research on Target Hybrid Recognition and Localization Methods Based on an Industrial Camera and a Depth Camera in Complex Scenes
by Mingxin Yuan, Jie Li, Borui Cao, Shihao Bao, Li Sun and Xiangbin Li
Electronics 2024, 13(22), 4381; https://doi.org/10.3390/electronics13224381 - 8 Nov 2024
Cited by 1 | Viewed by 1086
Abstract
To improve the target visual recognition and localization accuracy of robotic arms in complex scenes with similar targets, hybrid recognition and localization methods based on an industrial camera and a depth camera are proposed. First, according to the speed and accuracy requirements of target recognition and localization, YOLOv5s is introduced as the basic algorithm model for target hybrid recognition and localization. Then, to improve the accuracy of target recognition and coarse localization based on the industrial camera (eye-to-hand), the AFPN feature fusion module, simple and parameter-free attention module (SimAM), and soft non-maximum suppression (Soft NMS) are introduced. To improve the accuracy of target recognition and fine localization based on the depth camera (eye-in-hand), the SENetV2 backbone network structure, dynamic head module, deformable attention mechanism, and chain-of-thought prompted adaptive enhancer network are introduced. After that, a dual-camera platform for target hybrid recognition and localization is constructed, and the hand–eye calibration and the collection and preparation of the image datasets required for model training are completed. Finally, for the docking of the oil filling port, the hybrid recognition and localization experiments are completed in sequence. The test results show that in target recognition and coarse localization based on the industrial camera, the recognition accuracy of the designed model reaches 99%, and the average localization errors in the horizontal and vertical directions are 2.22 mm and 3.66 mm, respectively. In target recognition and fine localization based on the depth camera, the recognition accuracy of the designed model reaches 98%, and the average errors in the depth, horizontal, and vertical directions are 0.12 mm, 0.28 mm, and 0.16 mm, respectively. These results not only verify the effectiveness of the target hybrid recognition and localization methods based on dual cameras, but also demonstrate that they meet the high-precision recognition and localization requirements in complex scenes.
(This article belongs to the Special Issue Applications and Challenges of Image Processing in Smart Environment)
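
Of the listed modules, Soft NMS is simple enough to sketch in a few lines; the Gaussian decay form and sigma below follow the common Soft-NMS formulation and are not values from the paper.

```python
# Reference-style sketch of Gaussian Soft-NMS: instead of discarding
# overlapping boxes, their confidence is decayed by exp(-IoU^2 / sigma).
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep = []
    while len(boxes):
        i = int(np.argmax(scores))
        keep.append((boxes[i], scores[i]))
        box = boxes[i]
        boxes = np.delete(boxes, i, axis=0); scores = np.delete(scores, i)
        if len(boxes):
            scores *= np.exp(-(iou(box, boxes) ** 2) / sigma)     # Gaussian decay
            mask = scores > score_thresh
            boxes, scores = boxes[mask], scores[mask]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]])
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))   # overlapping box survives with a decayed score
```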

18 pages, 5076 KB  
Article
Gesture-Controlled Robotic Arm for Agricultural Harvesting Using a Data Glove with Bending Sensor and OptiTrack Systems
by Zeping Yu, Chenghong Lu, Yunhao Zhang and Lei Jing
Micromachines 2024, 15(7), 918; https://doi.org/10.3390/mi15070918 - 16 Jul 2024
Cited by 9 | Viewed by 2920
Abstract
This paper presents a gesture-controlled robotic arm system designed for agricultural harvesting, utilizing a data glove equipped with bending sensors and OptiTrack systems. The system aims to address the challenges of labor-intensive fruit harvesting by providing a user-friendly and efficient solution. The data glove captures hand gestures and movements using bending sensors and reflective markers, while the OptiTrack system ensures high-precision spatial tracking. Machine learning algorithms, specifically a CNN+BiLSTM model, are employed to accurately recognize hand gestures and control the robotic arm. Experimental results demonstrate the system’s high precision in replicating hand movements, with a Euclidean Distance of 0.0131 m and a Root Mean Square Error (RMSE) of 0.0095 m, in addition to robust gesture recognition, with an overall accuracy of 96.43%. This hybrid approach combines the adaptability and speed of semi-automated systems with the precision and usability of fully automated systems, offering a promising solution for sustainable and labor-efficient agricultural practices.
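
A compact PyTorch sketch of a CNN+BiLSTM classifier of this kind is shown below; the channel count, window length, and number of gesture classes are assumed for illustration.

```python
# Illustrative CNN+BiLSTM gesture classifier, not the authors' network:
# assumed 10 sensor channels over a 100-step window, 8 gesture classes.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, in_channels=10, n_gestures=8, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                     # local temporal features
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_gestures)

    def forward(self, x):                             # x: (batch, channels, time)
        feats = self.cnn(x).transpose(1, 2)           # -> (batch, time/2, 64)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])                    # classify from last time step

model = CNNBiLSTM()
print(model(torch.randn(4, 10, 100)).shape)           # torch.Size([4, 8])
```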

16 pages, 3505 KB  
Article
Assessing the Suitability of Automation Using the Methods–Time–Measurement Basic System
by Malte Jakschik, Felix Endemann, Patrick Adler, Lennart Lamers and Bernd Kuhlenkötter
Eng 2024, 5(2), 967-982; https://doi.org/10.3390/eng5020053 - 24 May 2024
Cited by 1 | Viewed by 2179
Abstract
Due to the high complexity and variety of assembly processes, hybrid assembly systems characterized by human–robot collaboration (HRC) are worthwhile. Suitable use cases must be identified efficiently to ensure cost-effectiveness and successful deployment in the respective assembly systems. This paper presents a method for evaluating the potential of HRC to derive automation suitability based on existing or to-be-collected time data. This should enable a quick and low-cost statement to be made about processes, for efficient use in potential analyses. The method is based on the Methods–Time–Measurement Basic System (MTM-1) procedure, widely used in industry, which ensures good adaptability in an industrial context. It extends existing models and examines how far assembly activities and processes can be optimized by efficiently allocating work between humans and robots. In the process model, the assembly processes are subdivided and analyzed with the help of the specified MTM motion time system. The suitability of the individual activities and sub-processes for automation is evaluated based on criteria derived from existing methods. Two four-field matrices were used to interpret and classify the analysis results. The approach is assessed using an example product from electrolyzer production, which is currently mainly assembled by hand. To achieve high statement reliability, further work is required to classify the results comprehensively.
(This article belongs to the Special Issue Feature Papers in Eng 2024)

18 pages, 9319 KB  
Article
Mapping Method of Human Arm Motion Based on Surface Electromyography Signals
by Yuanyuan Zheng, Gang Zheng, Hanqi Zhang, Bochen Zhao and Peng Sun
Sensors 2024, 24(9), 2827; https://doi.org/10.3390/s24092827 - 29 Apr 2024
Cited by 5 | Viewed by 2790
Abstract
This paper investigates a method for precise mapping of human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with the accurately calculated joint angles from an Inertial Measurement Unit, allows for action recognition and mapping through deep learning algorithms. Firstly, signal acquisition and processing were carried out, which involved acquiring data from various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions) and sensor placement. Then, interference was removed with filters, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with distinct features. Additionally, this paper constructs a hybrid network model, combining Convolutional Neural Networks and Artificial Neural Networks, and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fitting between sEMG signals and joint angles was established based on a backpropagation neural network, incorporating a momentum term and adaptive learning-rate adjustment. Finally, based on the gesture recognition and joint angle prediction model, prosthetic arm control experiments were conducted, achieving highly accurate arm movement prediction and execution. This paper not only validates the potential application of sEMG signals in the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices.
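
The preprocessing chain (filtering, rectification, moving-average smoothing, normalization) might look like the SciPy sketch below; the 20–450 Hz band, 50 Hz notch, 1 kHz sampling rate, and window length are common sEMG choices assumed here, not values from the paper.

```python
# Hedged sEMG preprocessing sketch matching the described pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess_semg(raw, fs=1000, band=(20, 450), notch=50, win=100):
    """raw: (channels, samples) array -> smoothed, normalized envelopes."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, raw, axis=-1)                      # remove motion artifact / noise
    bn, an = iirnotch(notch, Q=30, fs=fs)
    x = filtfilt(bn, an, x, axis=-1)                      # suppress mains interference
    x = np.abs(x)                                         # full-wave rectification
    kernel = np.ones(win) / win
    env = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), -1, x)
    peak = env.max(axis=-1, keepdims=True) + 1e-9
    return env / peak                                     # per-channel normalization

emg = np.random.randn(6, 4000)                            # 6 channels, 4 s at 1 kHz
features = preprocess_semg(emg)
print(features.shape, float(features.max()))
```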

12 pages, 4255 KB  
Article
SMA Wire Use in Hybrid Twisting and Bending/Extending Soft Fiber-Reinforced Actuators
by Seyedreza Kashef Tabrizian, Fovel Cedric, Seppe Terryn and Bram Vanderborght
Actuators 2024, 13(4), 125; https://doi.org/10.3390/act13040125 - 28 Mar 2024
Cited by 6 | Viewed by 2478
Abstract
Soft fiber-reinforced actuators have demonstrated significant potential across various robotics applications. However, the actuation motion in these actuators is typically limited to a single type of motion behavior, such as bending, extending, or twisting. Additionally, a combination of bending with twisting and extending with twisting can occur in fiber-reinforced actuators. This paper presents two novel hybrid actuators in which shape memory alloy (SMA) wires are used as reinforcement for pneumatic actuation, and upon electrical activation, they create a twisting motion. As a result, the hybrid soft SMA-reinforced actuators can select between twisting and bending, as well as twisting and extending. In pneumatic mode, a bending angle of 40° and a longitudinal strain of 20% were achieved for the bending/twisting and extending/twisting actuators, respectively. When the SMA wires are electrically activated by the Joule effect, the actuators achieved more than 90% of the maximum twisting angle (24°) in almost 2 s. Passive recovery, facilitated by the elastic response of the soft chamber, took approximately 10 s. The double-helical reinforcement by SMA wires not only enables twisting in both directions but also serves as an active recovery mechanism to more rapidly return the finger to the initial position (within 2 s). The resulting pneumatic–electric-driven soft actuators enhance dexterity and versatility, making them suitable for applications in walking robots, in-pipe crawling robots, and in-hand manipulation.
(This article belongs to the Special Issue Innovative Actuators Based on Shape Memory Alloys)

16 pages, 92907 KB  
Article
The Claw: An Avian-Inspired, Large Scale, Hybrid Rigid-Continuum Gripper
by Mary E. Stokes, John K. Mohrmann, Chase G. Frazelle, Ian D. Walker and Ge Lv
Robotics 2024, 13(3), 52; https://doi.org/10.3390/robotics13030052 - 16 Mar 2024
Cited by 1 | Viewed by 3945
Abstract
Most robotic hands have been created at roughly the scale of the human hand, with rigid components forming the core structural elements of the fingers. This focus on the human hand has concentrated attention on operations within the human hand scale, and on the handling of objects suitable for grasping with current robot hands. In this paper, we describe the design, development, and testing of a four-fingered gripper which features a novel combination of actively actuated rigid and compliant elements. The scale of the gripper is unusually large compared to most existing robot hands. The overall goal for the hand is to explore compliant grasping of potentially fragile objects of a size not typically considered. The arrangement of the digits is inspired by the feet of birds, specifically raptors. We detail the motivation for this physical hand structure, its design and operation, and describe testing conducted to assess its capabilities. The results demonstrate the effectiveness of the hand in grasping delicate objects of relatively large size and highlight some limitations of the underlying rigid/compliant hybrid design.
(This article belongs to the Special Issue Intelligent Bionic Robots)