Search Results (56)

Search Parameters:
Keywords = autonomous object grasping

24 pages, 5534 KiB  
Article
Enhancing Healthcare Assistance with a Self-Learning Robotics System: A Deep Imitation Learning-Based Solution
by Yagna Jadeja, Mahmoud Shafik, Paul Wood and Aaisha Makkar
Electronics 2025, 14(14), 2823; https://doi.org/10.3390/electronics14142823 - 14 Jul 2025
Viewed by 198
Abstract
This paper presents a Self-Learning Robotic System (SLRS) for healthcare assistance using Deep Imitation Learning (DIL). The proposed SLRS can observe and replicate human demonstrations, thereby acquiring complex skills without explicit task-specific programming. It incorporates modular components for perception (advanced computer vision methods), actuation (dynamic real-time interaction with patients and healthcare professionals), and learning. A hybrid approach combining deep imitation learning with pose estimation algorithms facilitates autonomous learning and adaptive task execution. Environmental awareness and responsiveness were further enhanced using a Convolutional Neural Network (CNN)-based object detection mechanism built on YOLOv8 (94.3% accuracy, 18.7 ms latency) and pose estimation algorithms, alongside a MediaPipe and Long Short-Term Memory (LSTM) framework for human action recognition. The solution was tested and validated in healthcare scenarios, with the aim of overcoming current challenges such as workforce shortages, ageing populations, and the rising prevalence of chronic diseases. CAD simulation, validation, and verification of the system’s functions (assistive functions, interactive scenarios, and object manipulation) demonstrated the robot’s adaptability and operational efficiency, achieving an 87.3% task completion success rate and an over 85% grasp success rate. These results highlight the potential of an SLRS for healthcare assistance. Further work will be undertaken in hospital, care home, and rehabilitation centre environments to generate complete holistic datasets and confirm the system’s reliability and efficiency.
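
As a hedged illustration of the kind of YOLOv8 detection stage this abstract describes (not the authors' implementation), the sketch below runs a pretrained model from the open-source ultralytics package on a single frame; the weights file and image path are placeholder assumptions.

```python
# Illustrative YOLOv8 detection step, assuming the open-source `ultralytics`
# package; weights and input image are placeholders, not the paper's setup.
from ultralytics import YOLO
import cv2

model = YOLO("yolov8n.pt")            # pretrained weights (placeholder)
frame = cv2.imread("ward_scene.jpg")  # hypothetical input frame

results = model(frame)[0]             # inference on one image
for box in results.boxes:
    cls_id = int(box.cls[0])
    conf = float(box.conf[0])
    x1, y1, x2, y2 = map(int, box.xyxy[0])
    print(f"{model.names[cls_id]}: {conf:.2f} at ({x1},{y1})-({x2},{y2})")
```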

14 pages, 3695 KiB  
Article
All-Light Remote Driving and Programming of Soft Actuator Based on Selective Laser Stimulation and Modification
by Jingjing Zhang, Hai Hu, Wenliang Liang, Zhijuan Fuyang, Chenchu Zhang and Deng Pan
Polymers 2025, 17(10), 1302; https://doi.org/10.3390/polym17101302 - 9 May 2025
Viewed by 367
Abstract
Soft robots are advantageous due to their flexibility, ability to interact with humans, and multifunctional adaptability. However, developing soft robots that are unrestrained and can be reprogrammed for reversible control without damage remains a significant challenge. The majority of soft robots have a bilayer structure with internal stress, which limits their motion to pre-programmed anisotropic structures. Taking inspiration from pillworms found in nature, we propose an approach for controlling and reprogramming the motion of actuators using infrared light as the driver and a laser-melted paraffin wax (PW) shell as the controller. The dual-purpose shell not only protects the actuator but also alters its initial motion behavior, enabling multiple programming, profile modeling, object grasping, and directional crawling tasks, and thereby allowing active changes to the motion strategy in response to external stimuli. This method can also be extended to other materials with similar properties and multi-stimulus responses, offering a new pathway for developing unconstrained, autonomous soft robots and intelligent devices.
(This article belongs to the Section Polymer Membranes and Films)

19 pages, 28961 KiB  
Article
Human-like Dexterous Grasping Through Reinforcement Learning and Multimodal Perception
by Wen Qi, Haoyu Fan, Cankun Zheng, Hang Su and Samer Alfayad
Biomimetics 2025, 10(3), 186; https://doi.org/10.3390/biomimetics10030186 - 18 Mar 2025
Cited by 2 | Viewed by 1183
Abstract
Dexterous robotic grasping with multifingered hands remains a critical challenge in non-visual environments, where diverse object geometries and material properties demand adaptive force modulation and tactile-aware manipulation. To address this, we propose the Reinforcement Learning-Based Multimodal Perception (RLMP) framework, which integrates human-like grasping intuition, captured through operator-worn gloves, with tactile-guided reinforcement learning. The framework’s key innovation is its Tactile-Driven DCNN architecture, a lightweight convolutional network that achieves 98.5% object recognition accuracy from spatiotemporal pressure patterns, coupled with an RL policy refinement mechanism that dynamically correlates finger kinematics with real-time tactile feedback. Experimental results demonstrate reliable grasping performance across deformable and rigid objects while maintaining the force precision critical for fragile targets. By bridging human teleoperation with autonomous tactile adaptation, RLMP eliminates dependency on visual input and predefined object models, establishing a new paradigm for robotic dexterity in occlusion-rich scenarios.
(This article belongs to the Special Issue Biomimetic Innovations for Human–Machine Interaction)
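
As a minimal sketch of a lightweight tactile classifier in the spirit of the Tactile-Driven DCNN (the actual RLMP architecture is not reproduced here), the PyTorch block below classifies objects from stacked pressure frames; the array size, frame count, and class count are assumptions.

```python
# Minimal tactile-classification CNN (illustrative, not the RLMP network).
# Assumes a 16x16 tactile array, 8 stacked frames, and 10 object classes.
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    def __init__(self, frames=8, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(frames, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 4 * 4, n_classes)  # 16 -> 8 -> 4 spatially

    def forward(self, x):                 # x: (batch, frames, 16, 16)
        return self.head(self.features(x).flatten(1))

logits = TactileCNN()(torch.randn(2, 8, 16, 16))  # -> shape (2, 10)
```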

13 pages, 3385 KiB  
Article
Early Sacral Neuromodulation: A Promising Opportunity or an Overload for Patients with a Recent Spinal Cord Injury? A Cross-Sectional Study
by Sophina Bauer, Lukas Grassner, Doris Maier, Ludwig Aigner, Lukas Lusuardi, Julia Peters, Orpheus Mach, Karin Roider, Evelyn Beyerer, Michael Kleindorfer, Andreas Wolff, Iris Leister and Elena E. Keller
J. Clin. Med. 2025, 14(3), 1031; https://doi.org/10.3390/jcm14031031 - 6 Feb 2025
Viewed by 1187
Abstract
Background: A solid rationale exists for early sacral neuromodulation in the form of causal therapy that improves neurogenic lower urinary tract dysfunction after complete spinal cord injury. However, the short and early time frame for minimally invasive therapy poses a series of ethical and medical issues, which has impeded clinical realisation thus far. Objectives: We performed a cross-sectional study on patients with chronic spinal cord injury to learn about patients’ attitudes towards early treatment to prepare for large randomised controlled trials. Methods: A cohort of patients (n = 86, mixed genders) with spinal cord injury over two years was analysed. Their lower urinary tract-related quality of life was assessed using the Qualiveen-30 tool. The extent of neurogenic lower urinary tract dysfunction, patients’ awareness of it, and their attitude towards early sacral neuromodulation were explored with a specific questionnaire. Results: A total of 61.9% (n = 52) of patients declared that, in retrospect, they would have agreed to early treatment prior to the emergence of their autonomic dysfunction. Of these patients, 51.8% (n = 29) would have also consented to early sacral neuromodulation. Quality of life had no impact on their decision. More than half of the patients (n = 49, 57.0%) stated they had not grasped the momentous nature of neurogenic lower urinary tract dysfunction when being informed about it. This finding was subsequently correlated with a decreased lower urinary tract-related quality of life. Conclusion: Patients with neurogenic lower urinary tract dysfunction are likely to agree to an early therapeutic approach. Clinical implementation requires knowledge and acceptance of the procedure on the part of patients and their caregivers.
(This article belongs to the Special Issue Clinical Advances in Spinal Cord Injury)

28 pages, 18580 KiB  
Article
Segmented Hybrid Impedance Control for Hyper-Redundant Space Manipulators
by Mohamed Chihi, Chourouk Ben Hassine and Quan Hu
Appl. Sci. 2025, 15(3), 1133; https://doi.org/10.3390/app15031133 - 23 Jan 2025
Viewed by 827
Abstract
Hyper-redundant space manipulators (HRSMs), with their extensive degrees of freedom, offer a promising solution for complex space operations such as on-orbit assembly and manipulation of non-cooperative objects. A critical challenge lies in achieving stable and effective grasping configurations, particularly when dealing with irregularly shaped objects in microgravity. This study addresses these challenges by developing a segmented hybrid impedance control architecture tailored to multi-point contact scenarios. The proposed framework reduces the contact forces and enhances object manipulation, enabling the secure handling of irregular objects and improving operational reliability. Numerical simulations demonstrate significant reductions in the contact forces during initial engagements, ensuring stable grasping and effective force regulation. The approach also enables precise trajectory tracking, robust collision avoidance, and resilience to external disturbances. The complete non-linear dynamics of the HRSM system are derived using the Kane method, incorporating both the free-space and constrained motion phases. These results highlight the practical capabilities of HRSM systems, including their potential to grasp and manipulate obstacles effectively, paving the way for applications in autonomous on-orbit servicing and assembly tasks. By integrating advanced control strategies and robust stability guarantees, this work provides a foundation for the deployment of HRSMs in real-world space operations, offering greater versatility and efficiency in complex environments.
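
As a one-dimensional, hedged sketch of the impedance-control idea underlying the paper's segmented hybrid controller (the multi-point HRSM formulation is far richer), the block below regulates contact force through a virtual mass-spring-damper; all gains and the environment model are illustrative assumptions.

```python
# 1-DoF impedance control against a stiff surface (illustrative values only):
# M*xdd + D*xd + K*(x - x_ref) = f_ext, integrated with semi-implicit Euler.
import numpy as np

M, D, K = 1.0, 20.0, 100.0   # virtual inertia, damping, stiffness
k_env, wall = 5000.0, 0.04   # environment stiffness (N/m), surface at 0.04 m
x_ref, dt = 0.05, 0.001      # commanded position (inside the surface), step
x, xd = 0.0, 0.0

for _ in range(3000):
    f_ext = -k_env * (x - wall) if x > wall else 0.0  # contact force on robot
    xdd = (f_ext - D * xd - K * (x - x_ref)) / M      # impedance law
    xd += xdd * dt
    x += xd * dt

print(f"steady state: x = {x:.4f} m, contact force = {k_env * (x - wall):.2f} N")
```

At steady state the virtual stiffness trades position error for a bounded contact force, which is the behavior the abstract reports during initial engagements.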

18 pages, 4649 KiB  
Article
Development of an Aerial Manipulation System Using Onboard Cameras and a Multi-Fingered Robotic Hand with Proximity Sensors
by Ryuki Sato, Etienne Marco Badard, Chaves Silva Romulo, Tadashi Wada and Aiguo Ming
Sensors 2025, 25(2), 470; https://doi.org/10.3390/s25020470 - 15 Jan 2025
Viewed by 1507
Abstract
Aerial manipulation is becoming increasingly important for practical applications of unmanned aerial vehicles (UAVs) that must pick, transport, and place objects in the surrounding environment. In this paper, an aerial manipulation system consisting of a UAV, two onboard cameras, and a multi-fingered robotic hand with proximity sensors is developed. To achieve self-contained autonomous navigation to a target object, onboard tracking and depth cameras are used to detect the target and guide the UAV to it, even in a Global Positioning System-denied environment. The robotic hand can stably perform proximity-sensor-based grasping for an object that lies within a position error tolerance (a circle with a radius of 50 mm) from the center of the hand. To grasp the object successfully, the position error of the hand (i.e., of the UAV) while hovering after reaching the target must therefore be less than this tolerance. To meet this requirement, an object detection algorithm that supports accurate target localization by combining information from both cameras was developed. In addition, the camera mount orientation and UAV attitude sampling rate were determined experimentally, and these implementations were confirmed to bring the UAV position error within the grasping tolerance of the robotic hand. Finally, aerial manipulation experiments using the developed system demonstrated successful grasping of the target object.
(This article belongs to the Section Sensing and Imaging)
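
As a hedged sketch of two ideas from this abstract, fusing target estimates from the two onboard cameras and checking the 50 mm grasping tolerance, the block below uses a simple weighted average; the weights, coordinates, and frames are illustrative assumptions, not the paper's algorithm.

```python
# Weighted fusion of target positions from two cameras, plus the 50 mm
# in-plane tolerance check; all numbers are illustrative assumptions.
import numpy as np

def fuse(p_tracking, p_depth, w_depth=0.7):
    """Blend two position estimates (metres, hand-centred frame)."""
    return w_depth * p_depth + (1.0 - w_depth) * p_tracking

p_track = np.array([0.03, -0.01, 0.40])  # estimate from the tracking camera
p_depth = np.array([0.02,  0.00, 0.42])  # estimate from the depth camera
p = fuse(p_track, p_depth)

radial_err = np.linalg.norm(p[:2])       # offset from the hand centre axis
print("within 50 mm grasp tolerance:", radial_err <= 0.050)
```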

18 pages, 4340 KiB  
Article
GFA-Net: Geometry-Focused Attention Network for Six Degrees of Freedom Object Pose Estimation
by Shuai Lin, Junhui Yu, Peng Su, Weitao Xue, Yang Qin, Lina Fu, Jing Wen and Hong Huang
Sensors 2025, 25(1), 168; https://doi.org/10.3390/s25010168 - 31 Dec 2024
Viewed by 891
Abstract
Six degrees of freedom (6-DoF) object pose estimation is essential for robotic grasping and autonomous driving. While estimating pose from a single RGB image is highly desirable for real-world applications, it presents significant challenges. Many approaches incorporate supplementary information, such as depth data, to derive valuable geometric characteristics. However, deep neural networks still struggle to extract adequate features from object regions in RGB images. To overcome these limitations, we introduce the Geometry-Focused Attention Network (GFA-Net), a novel framework designed for more comprehensive feature extraction that analyzes critical geometric and textural object characteristics. GFA-Net leverages Point-wise Feature Attention (PFA) to capture subtle pose differences, guiding the network to localize object regions and identify point-wise discrepancies as pose shifts. In addition, a Geometry Feature Aggregation Module (GFAM) integrates multi-scale geometric feature maps to distill crucial geometric features. The resulting dense 2D–3D correspondences are then passed to a Perspective-n-Point (PnP) module for 6-DoF pose computation. Experimental results on the LINEMOD and Occlusion LINEMOD datasets indicate that the proposed method is highly competitive with state-of-the-art approaches, achieving 96.54% and 49.35% accuracy, respectively, under the ADD-S metric with a 0.10d threshold.
(This article belongs to the Section Sensors and Robotics)
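
The final stage the abstract names, recovering the 6-DoF pose from dense 2D–3D correspondences with PnP, can be illustrated with OpenCV's solver; the correspondences and camera intrinsics below are placeholders, not GFA-Net outputs.

```python
# PnP pose recovery from 2D-3D correspondences (placeholder data).
import numpy as np
import cv2

object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                       [0, 0, 0.1], [0.1, 0.1, 0], [0.1, 0, 0.1]], np.float64)
image_pts = np.array([[320, 240], [410, 238], [322, 150],
                      [318, 242], [412, 148], [409, 240]], np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float64)  # intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)    # axis-angle -> rotation matrix
print("R =\n", R, "\nt =", tvec.ravel())
```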

15 pages, 2425 KiB  
Article
Online Self-Supervised Learning for Accurate Pick Assembly Operation Optimization
by Sergio Valdés, Marco Ojer and Xiao Lin
Robotics 2025, 14(1), 4; https://doi.org/10.3390/robotics14010004 - 30 Dec 2024
Cited by 2 | Viewed by 1402
Abstract
The demand for flexible automation in manufacturing has increased, incorporating vision-guided systems for object grasping. A key challenge, however, is in-hand error: discrepancies between the actual and estimated positions of an object in the robot’s gripper affect not only the grasp but also subsequent assembly stages. Corrective strategies used to compensate for misalignment can increase cycle times or rely on pre-labeled datasets, offline training, and validation processes, delaying deployment and limiting adaptability in dynamic industrial environments. Our main contribution is an online self-supervised learning method that automates data collection, training, and evaluation in real time, eliminating the need for offline processes. Building on this, our system collects real-time data during each assembly cycle, uses corrective strategies to adjust the data, and autonomously labels them via a self-supervised approach. It then builds and evaluates multiple regression models through an automated machine learning (AutoML) implementation. The system selects the best-performing model to correct the misalignment and dynamically chooses between corrective strategies and the learned model, optimizing cycle times and improving in-cycle performance without halting production. Our experiments show a significant reduction in cycle time while maintaining performance.
(This article belongs to the Section Industrial Robots and Automation)
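
As a hedged sketch of the model building-and-selection step (the paper's AutoML pipeline and feature set are not public), the block below fits candidate regressors to self-labelled grasp data and keeps the best cross-validated one; the synthetic data and the two candidates are illustrative assumptions.

```python
# Fit candidate regressors to self-labelled (features -> in-hand offset)
# pairs and select the best by cross-validation; data is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                                  # grasp features
y = 0.5 * X[:, 0] - 0.2 * X[:, 3] + rng.normal(0, 0.05, 200)   # offset labels

candidates = {"ridge": Ridge(),
              "forest": RandomForestRegressor(n_estimators=50, random_state=0)}
scores = {k: cross_val_score(m, X, y, cv=5).mean() for k, m in candidates.items()}
best = max(scores, key=scores.get)
model = candidates[best].fit(X, y)                             # deploy this one
print(f"selected {best} (mean CV R^2 = {scores[best]:.3f})")
```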

26 pages, 8608 KiB  
Article
Manipulability-Aware Task-Oriented Grasp Planning and Motion Control with Application in a Seven-DoF Redundant Dual-Arm Robot
by Ching-Chang Wong, Chi-Yi Tsai, Yu-Cheng Lai and Shang-Wen Wong
Electronics 2024, 13(24), 5025; https://doi.org/10.3390/electronics13245025 - 20 Dec 2024
Cited by 2 | Viewed by 1241
Abstract
Task-oriented grasp planning poses complex challenges in modern robotics, requiring the precise determination of the grasping pose of a robotic arm to grasp objects with a high level of manipulability while avoiding hardware constraints, such as joint limits, joint over-speeds, and singularities. This paper introduces a novel manipulability-aware (M-aware) grasp planning and motion control system for seven-degree-of-freedom (7-DoF) redundant dual-arm robots to achieve task-oriented grasping with optimal manipulability. The proposed system consists of two subsystems: (1) M-aware grasp planning; and (2) M-aware motion control. The former predicts task-oriented grasp candidates from an RGB-D image and selects the best grasping pose among the candidates. The latter enables the robot to select an appropriate arm to perform the grasping task while maintaining a high level of manipulability. To achieve this goal, we propose a new manipulability evaluation function to evaluate the manipulability score (M-score) of a given robot arm configuration with respect to a desired grasping pose to ensure safe grasping actions and avoid its joint limits and singularities. Experimental results demonstrate that our system can autonomously detect the graspable areas of a target object, select an appropriate grasping pose, grasp the target with a high level of manipulability, and achieve an average success rate of about 98.6%.
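
The M-score builds on classical manipulability analysis; as a hedged sketch, the block below computes Yoshikawa's measure w = sqrt(det(J J^T)) for a planar 3R arm, with link lengths assumed for illustration (the paper's 7-DoF evaluation function is not reproduced).

```python
# Yoshikawa manipulability of a planar 3R arm (illustrative link lengths).
import numpy as np

L = (0.3, 0.3, 0.2)

def jacobian(q):
    """2x3 position Jacobian; c[j] is the absolute angle of link j."""
    c = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -sum(L[j] * np.sin(c[j]) for j in range(i, 3))
        J[1, i] =  sum(L[j] * np.cos(c[j]) for j in range(i, 3))
    return J

def manipulability(q):
    J = jacobian(q)
    return np.sqrt(np.linalg.det(J @ J.T))  # 0 at singularities

print(manipulability(np.array([0.3, 0.8, -0.5])))  # higher = better conditioned
```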

18 pages, 6367 KiB  
Article
Sensor-Enhanced Smart Gripper Development for Automated Meat Processing
by Kristóf Takács, Bence Takács, Tivadar Garamvölgyi, Sándor Tarsoly, Márta Alexy, Kristóf Móga, Imre J. Rudas, Péter Galambos and Tamás Haidegger
Sensors 2024, 24(14), 4631; https://doi.org/10.3390/s24144631 - 17 Jul 2024
Cited by 1 | Viewed by 2375
Abstract
Grasping and object manipulation have been considered key domains of Cyber-Physical Systems (CPS) since the beginning of automation, as they are the most common interactions between systems, or a system and its environment. As the demand for automation spreads to increasingly complex fields of industry, smart tools with sensors and internal decision-making are becoming necessities. CPS, such as robots and smart autonomous machinery, have been introduced in the meat industry in recent decades; however, the natural diversity of animals, potential anatomical disorders and soft, slippery animal tissues require the use of a wide range of sensors, software and intelligent tools. This paper presents the development of a smart robotic gripper for deployment in the meat industry. A comprehensive review of the available robotic grippers employed in the sector is presented along with the relevant recent research projects. Based on the identified needs, a new mechatronic design and early development process of the smart gripper is described. The integrated force sensing method based on strain measurement and magnetic encoders is described, including the associated laboratory and on-site tests. Furthermore, a combined slip detection system is presented, which relies on an optical flow-based image processing algorithm using the video feed of a built-in endoscopic camera. Basic user tests and application assessments are presented.
(This article belongs to the Section Smart Agriculture)
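
As a hedged sketch of the optical-flow slip cue described above (not the authors' algorithm), the block below computes dense Farneback flow between consecutive frames of an in-gripper camera and flags slip when the mean flow magnitude exceeds a threshold; the video source and threshold are assumptions.

```python
# Slip detection from mean dense optical-flow magnitude (illustrative).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # stand-in for the endoscopic camera feed
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    if np.linalg.norm(flow, axis=2).mean() > 1.5:   # px/frame; tuned per rig
        print("possible slip detected")
    prev_gray = gray
```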

20 pages, 8337 KiB  
Article
YOLO-Based 3D Perception for UVMS Grasping
by Yanhu Chen, Fuqiang Zhao, Yucheng Ling and Suohang Zhang
J. Mar. Sci. Eng. 2024, 12(7), 1110; https://doi.org/10.3390/jmse12071110 - 2 Jul 2024
Cited by 3 | Viewed by 1473
Abstract
This study develops a YOLO (You Only Look Once)-based 3D perception algorithm for Underwater Vehicle-Manipulator Systems (UVMSs), enabling the precise object detection and localization crucial for grasping tasks. The object detection algorithm, YOLOv5s-CS, integrates an enhanced YOLOv5s model with C3SE attention and SPPFCSPC feature fusion, optimized for precise detection and two-dimensional localization in underwater environments with sparse features. Distance measurement is further improved by refining the SGBM (Semi-Global Block Matching) algorithm with a Census transform and subpixel interpolation. Ablation studies highlight the YOLOv5s-CS model’s enhanced performance, with a 3.5% increase in mAP and a 6.4% rise in F1 score over the base YOLOv5s, and a 2.1% mAP improvement with 15% faster execution than YOLOv8s. Implemented on a UVMS, the algorithm successfully completed pool grasping experiments, demonstrating its applicability to autonomous underwater robotics.
(This article belongs to the Section Ocean Engineering)
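
As a minimal sketch of the SGBM stage this abstract refines (without the Census transform or subpixel interpolation), the block below computes a disparity map with OpenCV and converts it to depth via Z = f·B/d; the image pair, focal length, and baseline are placeholders.

```python
# SGBM disparity -> depth on a rectified stereo pair (placeholder inputs).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point

f, B = 700.0, 0.12    # focal length (px) and baseline (m), assumed
with np.errstate(divide="ignore"):
    depth = np.where(disparity > 0, f * B / disparity, 0.0)      # metres
```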

19 pages, 3065 KiB  
Article
Voxel- and Bird’s-Eye-View-Based Semantic Scene Completion for LiDAR Point Clouds
by Li Liang, Naveed Akhtar, Jordan Vice and Ajmal Mian
Remote Sens. 2024, 16(13), 2266; https://doi.org/10.3390/rs16132266 - 21 Jun 2024
Cited by 1 | Viewed by 1798
Abstract
Semantic scene completion is a crucial outdoor scene understanding task that has direct implications for technologies like autonomous driving and robotics. It compensates for unavoidable occlusions and partial measurements in LiDAR scans, which may otherwise cause catastrophic failures. Due to the inherent complexity of this task, existing methods generally rely on complex and computationally demanding scene completion models, which limits their practicality in downstream applications. Addressing this, we propose a novel integrated network that combines the strengths of 3D and 2D semantic scene completion techniques for efficient LiDAR point cloud scene completion. Our network leverages a newly devised lightweight multi-scale convolutional block (MSB) to efficiently aggregate multi-scale features, thereby improving the identification of small and distant objects. It further utilizes a layout-aware semantic block (LSB), developed to grasp the overall layout of the scene to precisely guide the reconstruction and recognition of features. Moreover, we also develop a feature fusion module (FFM) for effective interaction between the data derived from two disparate streams in our network, ensuring a robust and cohesive scene completion process. Extensive experiments with the popular SemanticKITTI dataset demonstrate that our method achieves highly competitive performance, with an mIoU of 35.7 and an IoU of 51.4. Notably, the proposed method achieves an mIoU improvement of 2.6% compared to previous methods.
(This article belongs to the Section Engineering Remote Sensing)
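
As a hedged PyTorch sketch of a multi-scale convolutional block in the spirit of the MSB (channel counts and kernel sizes are assumptions; the published block may differ), parallel branches with different receptive fields are fused by a 1x1 convolution with a residual connection.

```python
# Multi-scale convolutional block sketch: parallel 1/3/5 kernels, 1x1 fusion.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, k, padding=k // 2) for k in (1, 3, 5)])
        self.fuse = nn.Conv2d(3 * ch, ch, 1)       # 1x1 fusion back to ch

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return torch.relu(self.fuse(feats)) + x    # residual connection

out = MultiScaleBlock(32)(torch.randn(1, 32, 64, 64))   # shape-preserving
```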

42 pages, 17605 KiB  
Review
Reinforcement Learning Algorithms and Applications in Healthcare and Robotics: A Comprehensive and Systematic Review
by Mokhaled N. A. Al-Hamadani, Mohammed A. Fadhel, Laith Alzubaidi and Balazs Harangi
Sensors 2024, 24(8), 2461; https://doi.org/10.3390/s24082461 - 11 Apr 2024
Cited by 30 | Viewed by 12345
Abstract
Reinforcement learning (RL) has emerged as a dynamic and transformative paradigm in artificial intelligence, offering the promise of intelligent decision-making in complex and dynamic environments. This unique feature enables RL to address sequential decision-making problems with simultaneous sampling, evaluation, and feedback. As a result, RL techniques have become suitable candidates for developing powerful solutions in various domains. In this study, we present a comprehensive and systematic review of RL algorithms and applications. The review commences with an exploration of the foundations of RL, proceeds to examine each algorithm in detail, and concludes with a comparative analysis of RL algorithms based on several criteria. It then extends to two key applications of RL: robotics and healthcare. In robotic manipulation, RL enhances precision and adaptability in tasks such as object grasping and autonomous learning. In healthcare, the review turns to cell growth problems, clarifying how RL has provided a data-driven approach for optimizing the growth of cell cultures and the development of therapeutic solutions. Overall, this review sheds light on the evolving landscape of RL and its potential in two diverse yet interconnected fields.
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)
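
As the simplest concrete instance of the RL update cycle the review surveys, the block below runs tabular Q-learning on a toy chain environment; the environment and hyperparameters are illustrative assumptions.

```python
# Tabular Q-learning on a toy chain: action 0 moves right, goal at the end.
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(1)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 0 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1)           # reward only at the goal

for _ in range(2000):                              # episodes
    s = 0
    for _ in range(50):                            # steps per episode
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # TD update
        s = s2

print(Q.argmax(axis=1))   # greedy policy: action 0 (move right) dominates
```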

18 pages, 13149 KiB  
Article
Posture Optimization of the TIAGo Highly-Redundant Robot for Grasping Operation
by Albin Bajrami, Matteo-Claudio Palpacelli, Luca Carbonari and Daniele Costa
Robotics 2024, 13(4), 56; https://doi.org/10.3390/robotics13040056 - 23 Mar 2024
Cited by 5 | Viewed by 2528
Abstract
This study explores the optimization of the TIAGo robot’s configuration for grasping operations, with a focus on the context of aging. Featuring a mobile base and a robotic arm, the TIAGo robot can conveniently aid individuals with disabilities, including those with motor and cognitive impairments, in both domestic and clinical settings. Its capabilities include recognizing visual targets such as faces or gestures using stereo cameras, as well as interpreting vocal commands through acoustic sensors to execute tasks. For example, the robot can grasp and lift objects such as a glass of water and navigate autonomously in order to fulfill a request. The paper presents the position and differential kinematics that form the basis for using the robot in numerous application contexts. Here, they are used to evaluate the kinematic performance of the robot relative to an assigned pose, searching for the optimal configuration among the infinitely many configurations afforded by the robot’s redundancy. Ultimately, the article provides insight into how to use the robot effectively in gripping operations and presents kinematic models of the TIAGo robot.
(This article belongs to the Special Issue Kinematics and Robot Design VI, KaRD2023)
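
As a hedged, low-dimensional analogue of the posture search described above (TIAGo's kinematics are not reproduced; a planar 3R arm with assumed link lengths stands in), the block below sweeps the redundant joint, solves the remaining 2R inverse kinematics, and keeps the best-conditioned posture for an assigned point.

```python
# Posture optimization over redundancy for a planar 3R arm (illustrative).
import numpy as np

L1, L2, L3 = 0.3, 0.3, 0.2
target = np.array([0.45, 0.25])

def ik_2r(p, l1, l2):
    """Elbow-down 2R IK for point p; None if unreachable."""
    c2 = (p @ p - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1:
        return None
    q2 = np.arccos(c2)
    q1 = np.arctan2(p[1], p[0]) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

best = None
for q1 in np.linspace(-np.pi, np.pi, 360):         # sweep the redundant joint
    sol = ik_2r(target - L1 * np.array([np.cos(q1), np.sin(q1)]), L2, L3)
    if sol is None:
        continue
    q = np.array([q1, sol[0] - q1, sol[1]])        # relative joint angles
    c, L = np.cumsum(q), (L1, L2, L3)
    J = np.array([[-sum(L[j] * np.sin(c[j]) for j in range(i, 3)) for i in range(3)],
                  [ sum(L[j] * np.cos(c[j]) for j in range(i, 3)) for i in range(3)]])
    k = np.linalg.cond(J)                          # lower = better conditioned
    if best is None or k < best[0]:
        best = (k, q)

print("best posture (rad):", np.round(best[1], 3), "cond(J):", round(best[0], 2))
```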

18 pages, 6176 KiB  
Article
On Automated Object Grasping for Intelligent Prosthetic Hands Using Machine Learning
by Jethro Odeyemi, Akinola Ogbeyemi, Kelvin Wong and Wenjun Zhang
Bioengineering 2024, 11(2), 108; https://doi.org/10.3390/bioengineering11020108 - 24 Jan 2024
Cited by 3 | Viewed by 2542
Abstract
Prosthetic technology has witnessed remarkable advancements, yet challenges persist in achieving autonomous grasping control while ensuring the user’s experience is not compromised. Current electronic prosthetics often require extensive training for users to gain fine motor control over the prosthetic fingers, hindering their usability and acceptance. To address this challenge and improve the autonomy of prosthetics, this paper proposes an automated method that leverages computer vision-based techniques and machine learning algorithms. In this study, three reinforcement learning algorithms, namely Soft Actor-Critic (SAC), Deep Q-Network (DQN), and Proximal Policy Optimization (PPO), are employed to train agents for automated grasping tasks. The results indicate that the SAC algorithm achieves the highest success rate of 99% among the three algorithms at just under 200,000 timesteps. This research also shows that an object’s physical characteristics can affect the agent’s ability to learn an optimal policy. Moreover, the findings highlight the potential of the SAC algorithm in developing intelligent prosthetic hands with automatic object-gripping capabilities.
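
As a hedged sketch of training an SAC agent like those compared above, using the open-source stable-baselines3 package (the paper's grasping environment is not public, so a standard continuous-control task stands in):

```python
# SAC training sketch with stable-baselines3 on a stand-in environment.
import gymnasium as gym
from stable_baselines3 import SAC

env = gym.make("Pendulum-v1")          # placeholder for the grasping env
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=200_000)   # the paper reports SAC succeeding
                                       # at just under 200,000 timesteps

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```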
