Search Results (45)

Search Parameters:
Keywords = approach-to-grasp movement

18 pages, 12842 KB  
Article
Progressive Policy Learning: A Hierarchical Framework for Dexterous Bimanual Manipulation
by Kang-Won Lee, Jung-Woo Lee, Seongyong Kim and Soo-Chul Lim
Mathematics 2025, 13(22), 3585; https://doi.org/10.3390/math13223585 - 8 Nov 2025
Viewed by 820
Abstract
Dexterous bimanual manipulation remains a challenging task in reinforcement learning (RL) due to the vast state–action space and the complex interdependence between the hands. Conventional end-to-end learning struggles to handle this complexity, and multi-agent RL often faces limitations in stably acquiring cooperative movements. To address these issues, this study proposes a hierarchical progressive policy learning framework for dexterous bimanual manipulation. In the proposed method, one hand’s policy is first trained to stably grasp the object, and, while maintaining this grasp, the other hand’s manipulation policy is progressively learned. This hierarchical decomposition reduces the search space for each policy and enhances both the connectivity and the stability of learning by training the subsequent policy on the stable states generated by the preceding policy. Simulation results show that the proposed framework outperforms conventional end-to-end and multi-agent RL approaches. The proposed method was demonstrated via sim-to-real transfer on a physical dual-arm platform and empirically validated on a bimanual cube manipulation task. Full article
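The staged training idea described above can be summarised in a few lines. The following is a minimal sketch only, not the authors' code: `GraspEnv`, `ManipEnv`, and the linear `Policy` with a placeholder update rule are hypothetical stand-ins, and a real system would use an RL algorithm such as PPO or SAC for the update step.

```python
import numpy as np

# Sketch of progressive (staged) policy training for a two-hand task.
# Stage 1 trains the grasping hand alone; stage 2 keeps that policy fixed
# and trains the second hand on the stable states it produces.

class Policy:
    def __init__(self, obs_dim, act_dim, lr=1e-3):
        self.w = np.zeros((obs_dim, act_dim))
        self.lr = lr

    def act(self, obs):
        return np.tanh(obs @ self.w)          # deterministic linear-tanh policy

    def update(self, obs, advantage):
        # Placeholder gradient step; a real system would use PPO/SAC here.
        self.w += self.lr * np.outer(obs, advantage)


def train_stage(env, policy, frozen_policy=None, episodes=100):
    """Train `policy`; if `frozen_policy` is given, it controls the other hand."""
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            action = policy.act(obs)
            other = frozen_policy.act(obs) if frozen_policy is not None else None
            obs, reward, done = env.step(action, other_action=other)
            policy.update(obs, reward * np.ones(policy.w.shape[1]))
    return policy

# Stage 1: learn a stable grasp with one hand (GraspEnv is hypothetical).
# grasp_policy = train_stage(GraspEnv(), Policy(obs_dim=32, act_dim=16))
# Stage 2: hold the grasp fixed, learn manipulation with the other hand.
# manip_policy = train_stage(ManipEnv(), Policy(obs_dim=32, act_dim=16),
#                            frozen_policy=grasp_policy)
```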

17 pages, 3936 KB  
Article
Markerless Force Estimation via SuperPoint-SIFT Fusion and Finite Element Analysis: A Sensorless Solution for Deformable Object Manipulation
by Qingqing Xu, Ruoyang Lai and Junqing Yin
Biomimetics 2025, 10(9), 600; https://doi.org/10.3390/biomimetics10090600 - 8 Sep 2025
Viewed by 730
Abstract
Contact-force perception is a critical component of safe robotic grasping. With the rapid advances in embodied intelligence technology, humanoid robots have enhanced their multimodal perception capabilities. Conventional force sensors face limitations, such as complex spatial arrangements, installation challenges at multiple nodes, and potential interference with robotic flexibility. Consequently, these conventional sensors are unsuitable for biomimetic robot requirements in object perception, natural interaction, and agile movement. Therefore, this study proposes a sensorless external force detection method that integrates SuperPoint-Scale Invariant Feature Transform (SIFT) feature extraction with finite element analysis to address force perception challenges. A visual analysis method based on the SuperPoint-SIFT feature fusion algorithm was implemented to reconstruct a three-dimensional displacement field of the target object. Subsequently, the displacement field was mapped to the contact force distribution using finite element modeling. Experimental results demonstrate a mean force estimation error of 7.60% (isotropic) and 8.15% (anisotropic), with RMSE < 8%, validated by flexible pressure sensors. To enhance the model’s reliability, a dual-channel video comparison framework was developed. By analyzing the consistency of the deformation patterns and mechanical responses between the actual compression and finite element simulation video keyframes, the proposed approach provides a novel solution for real-time force perception in robotic interactions. The proposed solution is suitable for applications such as precision assembly and medical robotics, where sensorless force feedback is crucial. Full article
(This article belongs to the Special Issue Bio-Inspired Intelligent Robot)
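For readers interested in the displacement-to-force step described above, here is a minimal sketch under a linear-elasticity assumption: the global stiffness matrix `K` is assumed to have been assembled offline in an FE package, and the nodal displacement vector `u` to come from the vision pipeline (e.g., matched keypoints across frames). All names and numbers are illustrative, not the authors' implementation.

```python
import numpy as np

# Sketch of the displacement-to-force mapping only: under small-deformation
# linear elasticity, nodal forces follow f = K u; the resultant over the
# contact nodes approximates the external contact force.

def estimate_contact_forces(K, u, contact_nodes):
    """f = K u under linear elasticity; return the resultant on contact nodes."""
    f = K @ u                                    # nodal force vector (3*n_nodes,)
    f = f.reshape(-1, 3)                         # one 3D force per node
    return f[contact_nodes].sum(axis=0)          # net contact force estimate

# Toy example with 4 nodes (12 DOFs) and a diagonal stiffness placeholder.
K = np.eye(12) * 500.0                           # N/m, illustrative only
u = np.zeros(12); u[0:3] = [0.002, 0.0, -0.001]  # measured nodal displacements (m)
print(estimate_contact_forces(K, u, contact_nodes=[0]))
```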

17 pages, 16817 KB  
Article
Design and Implementation of an Autonomous Mobile Robot for Object Delivery via Homography-Based Visual Servoing
by Jung-Shan Lin, Yen-Che Hsiao and Jeih-Weih Hung
Future Internet 2025, 17(9), 379; https://doi.org/10.3390/fi17090379 - 24 Aug 2025
Viewed by 1534
Abstract
This paper presents the design and implementation of an autonomous mobile robot system able to deliver objects from one location to another with minimal hardware requirements. Unlike most existing systems, our robot uses only a single camera—mounted on its robotic arm—to guide both its movements and the pick-and-place process. The robot detects target signs and objects, automatically navigates to desired locations, and accurately grasps and delivers items without the need for complex sensor arrays or multiple cameras. The main innovation of this work is a unified visual control strategy that coordinates both the vehicle and the robotic arm through homography-based visual servoing. Our experimental results demonstrate that the system can reliably locate, pick up, and place objects, achieving a high success rate in real-world tests. This approach offers a simple yet effective solution for object delivery tasks and lays the groundwork for practical, cost-efficient mobile robots in automation and logistics. Full article
(This article belongs to the Special Issue Mobile Robotics and Autonomous System)

19 pages, 2450 KB  
Review
First Web Space Reconstruction in Acquired Defects: A Literature-Based Review and Surgical Experience
by Cesare Tiengo, Francesca Mazzarella, Luca Folini, Stefano L’Erario, Pasquale Zona, Daniele Brunelli and Franco Bassetto
J. Clin. Med. 2025, 14(10), 3428; https://doi.org/10.3390/jcm14103428 - 14 May 2025
Viewed by 1338
Abstract
The first web space of the hand plays a fundamental role in daily hand function, facilitating crucial movements, such as pinching, grasping, and opposition. The structural anomalies of acquired defects of this anatomical region, whether secondary to trauma, burns, or post-oncological surgical resections, necessitate meticulous reconstructive strategies to ensure both functional restoration and aesthetic integrity. Given the complexity and variability of first web defects, a broad spectrum of reconstructive techniques has been developed, ranging from skin grafting and local flap reconstructions to advanced microsurgical approaches. This review comprehensively examines the existing literature on first web reconstruction techniques, analyzing their indications, advantages, and limitations. Additionally, it explores innovative techniques and emerging trends in the field, such as tissue engineering, regenerative medicine, and composite tissue allotransplantation, which may revolutionize future reconstructive strategies. The primary objective is to provide clinicians with an evidence-based guide to selecting the most appropriate reconstructive strategy tailored to individual patient needs. Furthermore, we incorporate our institutional experience in managing first web defects, highlighting key surgical principles, patient outcomes, and challenges encountered. Through this analysis, we aim to refine the understanding of first web reconstruction and contribute to the ongoing evolution of hand surgery techniques. Full article
(This article belongs to the Special Issue Innovation in Hand Surgery)

34 pages, 9384 KB  
Article
MEMS and IoT in HAR: Effective Monitoring for the Health of Older People
by Luigi Bibbò, Giovanni Angiulli, Filippo Laganà, Danilo Pratticò, Francesco Cotroneo, Fabio La Foresta and Mario Versaci
Appl. Sci. 2025, 15(8), 4306; https://doi.org/10.3390/app15084306 - 14 Apr 2025
Cited by 12 | Viewed by 3636
Abstract
The aging population has created a significant challenge worldwide; social and healthcare systems need to ensure elderly individuals receive the necessary care services to improve their quality of life and maintain their independence. In response to this need, developing integrated digital solutions, such as IoT-based wearable devices combined with artificial intelligence applications, offers a technological platform for creating Ambient Intelligence (AmI) and Ambient Assisted Living (AAL) environments. These advancements can help reduce hospital admissions and lower healthcare costs. In this context, this article presents an IoT application based on MEMS (micro-electro-mechanical systems) sensors integrated into a state-of-the-art microcontroller (STM55WB) for recognizing the movements of older individuals during daily activities. Human activity recognition (HAR) is a field within computational engineering that focuses on automatically classifying human actions through data captured by sensors. This study has multiple objectives: to recognize movements such as grasping, leg flexion, circular arm movements, and walking in order to assess the motor skills of older individuals. The implemented system allows these movements to be detected in real time and transmitted to a monitoring server, where healthcare staff can analyze the data. The analysis methods employed include machine learning algorithms to identify movement patterns, statistical analysis to assess the frequency and quality of movements, and data visualization to track changes over time. These approaches enable the accurate assessment of older people’s motor skills and facilitate the prompt identification of abnormal situations or emergencies. Additionally, a user-friendly technological solution is designed to be acceptable to the elderly, minimizing the discomfort and stress associated with using technology. Finally, the goal is to ensure that the system is energy-efficient and cost-effective, promoting sustainable adoption. The results obtained are promising: the model achieved a high level of accuracy in recognizing specific movements, thus contributing to a precise assessment of the motor skills of the elderly. Notably, movement recognition was accomplished using a Random Forest artificial intelligence model. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
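A minimal sketch of the classification stage named above (windowed inertial data, simple statistical features, a Random Forest) is given below; the window length, feature set, and synthetic data are assumptions for illustration and do not reflect the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def window_features(acc, win=128):
    """Split a (n_samples, 3) accelerometer stream into windows of summary stats."""
    feats = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        feats.append(np.hstack([w.mean(axis=0), w.std(axis=0),
                                w.min(axis=0), w.max(axis=0)]))
    return np.array(feats)

# Synthetic stand-in for labelled recordings of grasping / flexion / walking etc.
rng = np.random.default_rng(0)
X = np.vstack([window_features(rng.normal(loc=c, size=(1280, 3))) for c in range(4)])
y = np.repeat(np.arange(4), 10)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```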

22 pages, 6830 KB  
Article
Topological Design and Modeling of 3D-Printed Grippers for Combined Precision and Coarse Robotics Assembly
by Mohammad Mayyas, Naveen Kumar, Zahabul Islam, Mohammed Abouheaf and Muteb Aljasem
Actuators 2025, 14(4), 192; https://doi.org/10.3390/act14040192 - 14 Apr 2025
Viewed by 1065
Abstract
This study presents a topological design and modeling framework for 3D-printed robotic grippers, tailored for combined precision and coarse robotics assembly. The proposed methodology leverages topology optimization to develop multi-scale-compliant mechanisms, comprising a symmetrical continuum structure of five beams. The proposed methodology centers on the hybrid kinematics for precision and coarse operations of the gripper, parametrizing beam deformations in response to a defined set of boundary conditions and varying input loads. The research employs topology analysis to draw a clear correlation between input load and resultant motion, with a particular emphasis on the mechanism’s capacity to integrate both fine and coarse movements efficiently. Additionally, the paper pioneers an innovative solution to the ubiquitous point-contact problem encountered in grasping, intricately weaving it with the stiffness matrix. The overarching aim remains to provide a streamlined design methodology, optimized for manufacturability, by harnessing the capabilities of contemporary 3D fabrication techniques. This multifaceted approach, underpinned by the multiscale grasping method, promises to significantly advance the domain of robotic gripping and manipulation across applications such as micro-assembly, biomedical manipulation, and industrial robotics. Full article

17 pages, 2630 KB  
Article
Multimodal Deep Learning Model for Cylindrical Grasp Prediction Using Surface Electromyography and Contextual Data During Reaching
by Raquel Lázaro, Margarita Vergara, Antonio Morales and Ramón A. Mollineda
Biomimetics 2025, 10(3), 145; https://doi.org/10.3390/biomimetics10030145 - 27 Feb 2025
Viewed by 1062
Abstract
Grasping objects, from simple tasks to complex fine motor skills, is a key component of our daily activities. Our approach to facilitate the development of advanced prosthetics, robotic hands and human–machine interaction systems consists of collecting and combining surface electromyography (EMG) signals and contextual data of individuals performing manipulation tasks. In this context, the identification of patterns and prediction of hand grasp types is crucial, with cylindrical grasp being one of the most common and functional. Traditional approaches to grasp prediction often rely on unimodal data sources, limiting their ability to capture the complexity of real-world scenarios. In this work, grasp prediction models that integrate both EMG signals and contextual (task- and product-related) information have been explored to improve the prediction of cylindrical grasps during reaching movements. Three model architectures are presented: an EMG processing model based on convolutions that analyzes forearm surface EMG data, a fully connected model for processing contextual information, and a hybrid architecture combining both inputs resulting in a multimodal model. The results show that context has great predictive power. Variables such as object size and weight (product-related) were found to have a greater impact on model performance than task height (task-related). Combining EMG and product context yielded better results than using each data mode separately, confirming the importance of product context in improving EMG-based models of grasping. Full article
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
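The hybrid architecture described (a convolutional branch over forearm sEMG plus a fully connected branch over contextual variables, fused before the output layer) can be sketched as follows; layer sizes, channel counts, and the two-class head are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class HybridGraspNet(nn.Module):
    """Toy multimodal model: sEMG (1-D conv branch) + context (MLP branch)."""
    def __init__(self, emg_channels=8, context_dim=3):
        super().__init__()
        self.emg_branch = nn.Sequential(
            nn.Conv1d(emg_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),          # -> (batch, 64)
        )
        self.ctx_branch = nn.Sequential(
            nn.Linear(context_dim, 16), nn.ReLU(),          # e.g. size, weight, height
        )
        self.head = nn.Sequential(
            nn.Linear(64 + 16, 32), nn.ReLU(), nn.Linear(32, 2),
        )

    def forward(self, emg, ctx):
        fused = torch.cat([self.emg_branch(emg), self.ctx_branch(ctx)], dim=1)
        return self.head(fused)                             # class logits

model = HybridGraspNet()
logits = model(torch.randn(4, 8, 400), torch.randn(4, 3))   # 4 windows of 400 samples
print(logits.shape)                                         # torch.Size([4, 2])
```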

22 pages, 6057 KB  
Article
Enhancing Telexistence Control Through Assistive Manipulation and Haptic Feedback
by Osama Halabi, Mohammed Al-Sada, Hala Abourajouh, Myesha Hoque, Abdullah Iskandar and Tatsuo Nakajima
Appl. Sci. 2025, 15(3), 1324; https://doi.org/10.3390/app15031324 - 27 Jan 2025
Cited by 1 | Viewed by 2414
Abstract
The COVID-19 pandemic brought telepresence systems into the spotlight, yet manually controlling remote robots often proves ineffective for handling complex manipulation tasks. To tackle this issue, we present a machine learning-based assistive manipulation approach. This method identifies target objects and computes an inverse kinematic solution for grasping them. The system integrates the generated solution with the user’s arm movements across varying inverse kinematic (IK) fusion levels. Given the importance of maintaining a sense of body ownership over the remote robot, we examine how haptic feedback and assistive functions influence ownership perception and task performance. Our findings indicate that incorporating assistance and haptic feedback significantly enhances the control of the robotic arm in telepresence environments, leading to improved precision and shorter task completion times. This research underscores the advantages of assistive manipulation techniques and haptic feedback in advancing telepresence technology. Full article
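The notion of an "IK fusion level" can be illustrated with a simple blend between the operator-tracked joint configuration and the IK solution computed for the detected object; the linear blend and variable names below are assumptions for illustration, not the authors' controller.

```python
import numpy as np

def fuse_joint_targets(q_user, q_ik, alpha):
    """Blend user-tracked and IK-computed joint targets (radians).

    alpha = 0 is pure manual teleoperation, alpha = 1 is fully assisted grasping.
    """
    alpha = np.clip(alpha, 0.0, 1.0)
    return (1.0 - alpha) * np.asarray(q_user) + alpha * np.asarray(q_ik)

q_user = [0.10, -0.40, 0.85, 0.00, 0.30, -0.10]   # from arm tracking
q_ik   = [0.05, -0.55, 0.95, 0.10, 0.25, -0.05]   # from the grasp planner's IK
for level in (0.0, 0.5, 1.0):
    print(level, np.round(fuse_joint_targets(q_user, q_ik, level), 3))
```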

15 pages, 18745 KB  
Article
Robust Adaptive Robotic Visual Servo Grasping with Guaranteed Field of View Constraints
by Liang Li, Junqi Luo, Peitao Hong, Wenhao Bai, Zhenyu Zhang and Liucun Zhu
Actuators 2024, 13(11), 457; https://doi.org/10.3390/act13110457 - 14 Nov 2024
Viewed by 1889
Abstract
Visual servo grasping technology has garnered significant attention in intelligent manufacturing for its potential to enhance both the flexibility and precision of robotic operations. However, traditional approaches frequently encounter challenges such as task failure when visual features move outside the camera’s field of view (FoV) and system instability due to interaction matrix singularities, limiting the technology’s effectiveness in complex environments. This study introduces a novel control strategy that leverages an asymmetric time-varying performance function to address the issue of visual feature escape. By strictly limiting the range of feature error, our approach ensures that visual features consistently remain within the camera’s FoV, thereby enhancing both transient and steady-state system performance. Furthermore, we have developed an adaptive damped least squares controller that dynamically adjusts the damping term to mitigate numerical instability resulting from interaction matrix singularities. The effectiveness of our method has been validated through grasping experiments involving significant rotations around the camera’s optical axis and other complex movements. Full article
(This article belongs to the Section Actuators for Robotics)
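A damped least-squares visual-servoing step with singularity-dependent damping, of the general kind described above, might look like the sketch below; the adaptation rule based on the smallest singular value of the interaction matrix is a common choice and an assumption here, not necessarily the paper's controller.

```python
import numpy as np

def dls_camera_velocity(L, error, gain=0.5, sigma_thresh=0.1, mu_max=0.05):
    """Return a 6-DoF camera velocity from interaction matrix L and feature error."""
    sigma_min = np.linalg.svd(L, compute_uv=False).min()
    # Increase damping only when the smallest singular value becomes small.
    mu2 = 0.0 if sigma_min >= sigma_thresh else mu_max * (1 - sigma_min / sigma_thresh) ** 2
    J = L.T @ L + mu2 * np.eye(L.shape[1])
    return -gain * np.linalg.solve(J, L.T @ error)

L = np.random.default_rng(1).normal(size=(8, 6))   # 4 point features x 2 rows each
e = np.full(8, 0.02)                               # normalised feature error
print(np.round(dls_camera_velocity(L, e), 4))
```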

16 pages, 3645 KB  
Article
A Statistical Approach for Functional Reach-to-Grasp Segmentation Using a Single Inertial Measurement Unit
by Gregorio Dotti, Marco Caruso, Daniele Fortunato, Marco Knaflitz, Andrea Cereatti and Marco Ghislieri
Sensors 2024, 24(18), 6119; https://doi.org/10.3390/s24186119 - 22 Sep 2024
Viewed by 4415
Abstract
The aim of this contribution is to present a segmentation method for the identification of voluntary movements from inertial data acquired through a single inertial measurement unit placed on the subject’s wrist. Inertial data were recorded from 25 healthy subjects while performing 75 consecutive reach-to-grasp movements. The approach herein presented, called DynAMoS, is based on an adaptive thresholding step on the angular velocity norm, followed by a statistics-based post-processing on the movement duration distribution. Post-processing aims at reducing the number of erroneous transitions in the movement segmentation. We assessed the segmentation quality of this method using a stereophotogrammetric system as the gold standard. Two popular methods already presented in the literature were compared to DynAMoS in terms of the number of movements identified, onset and offset mean absolute errors, and movement duration. Moreover, we analyzed the sub-phase durations of the drinking movement to further characterize the task. The results show that the proposed method performs significantly better than the two state-of-the-art approaches (i.e., percentage of erroneous movements = 3%; onset and offset mean absolute error < 0.08 s), suggesting that DynAMoS could make more effective home monitoring applications for assessing the motion improvements of patients following domicile rehabilitation protocols. Full article
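The two-step idea (adaptive thresholding of the angular-velocity norm, then duration-based post-processing) can be sketched as follows; the specific threshold rule and outlier criterion are illustrative assumptions rather than the published DynAMoS algorithm.

```python
import numpy as np

def segment_movements(gyro, fs, k=1.0, smooth=25, min_z=-2.0, max_z=2.0):
    """gyro: (n, 3) angular velocity; returns a list of (onset, offset) sample indices."""
    norm = np.linalg.norm(gyro, axis=1)
    norm = np.convolve(norm, np.ones(smooth) / smooth, mode="same")   # light smoothing
    thr = norm.mean() + k * norm.std()               # adaptive, data-driven threshold
    active = norm > thr
    edges = np.flatnonzero(np.diff(active.astype(int)))
    if active[0]:
        edges = np.r_[0, edges]
    if active[-1]:
        edges = np.r_[edges, len(active) - 1]
    segments = list(zip(edges[0::2], edges[1::2]))
    # Post-processing: keep segments whose duration is typical of the distribution.
    durations = np.array([(b - a) / fs for a, b in segments])
    z = (durations - durations.mean()) / (durations.std() + 1e-9)
    return [s for s, zi in zip(segments, z) if min_z <= zi <= max_z]

# Example: 60 s of synthetic 100 Hz gyro data with three bursts of motion.
rng = np.random.default_rng(0)
g = rng.normal(scale=0.05, size=(6000, 3))
for start in (500, 2000, 4000):
    g[start:start + 150] += rng.normal(scale=3.0, size=(150, 3))
print(segment_movements(g, fs=100))
```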

35 pages, 12036 KB  
Article
Transfer Learning and Deep Neural Networks for Robust Intersubject Hand Movement Detection from EEG Signals
by Chiang Liang Kok, Chee Kit Ho, Thein Htet Aung, Yit Yan Koh and Tee Hui Teo
Appl. Sci. 2024, 14(17), 8091; https://doi.org/10.3390/app14178091 - 9 Sep 2024
Cited by 16 | Viewed by 2003
Abstract
In this research, five systems were developed to classify four distinct motor functions—forward hand movement (FW), grasp (GP), release (RL), and reverse hand movement (RV)—from EEG signals, using the WAY-EEG-GAL dataset where participants performed a sequence of hand movements. During preprocessing, band-pass filtering was applied to remove artifacts and focus on the mu and beta frequency bands. The initial system, a preliminary study model, explored the overall framework of EEG signal processing and classification, utilizing time-domain features such as variance and frequency-domain features such as alpha and beta power, with a KNN model for classification. Insights from this study informed the development of a baseline system, which innovatively combined the common spatial patterns (CSP) method with continuous wavelet transform (CWT) for feature extraction and employed a GoogLeNet classifier with transfer learning. This system classified six unique pairs of events derived from the four motor functions, achieving remarkable accuracy, with the highest being 99.73% for the GP–RV pair and the lowest 80.87% for the FW–GP pair in intersubject classification. Building on this success, three additional systems were developed for four-way classification. The final model, ML-CSP-OVR, demonstrated the highest intersubject classification accuracy of 78.08% using all combined data and 76.39% for leave-one-out intersubject classification. This proposed model, featuring a novel combination of CSP-OVR, CWT, and GoogLeNet, represents a significant advancement in the field, showcasing strong potential as a general system for motor imagery (MI) tasks that is not dependent on the subject. This work highlights the prominence of the research contribution by demonstrating the effectiveness and robustness of the proposed approach in achieving high classification accuracy across different motor functions and subjects. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
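To make the CSP step concrete, the sketch below computes common spatial patterns filters and the usual log-variance features for a two-class problem; band-pass filtering, the continuous wavelet transform images, and the GoogLeNet classifier are omitted, and all data here are synthetic placeholders rather than the WAY-EEG-GAL dataset.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(epochs_a, epochs_b, n_filters=6):
    """epochs_*: (n_trials, n_channels, n_samples). Returns (n_filters, n_channels)."""
    def mean_cov(epochs):
        covs = [np.cov(e) for e in epochs]               # channel covariance per trial
        covs = [c / np.trace(c) for c in covs]           # normalise per-trial power
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(epochs_a), mean_cov(epochs_b)
    # Generalised eigendecomposition of (Ca, Ca + Cb); eigenvalues are ascending.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    pick = np.r_[order[: n_filters // 2], order[-(n_filters // 2):]]
    return vecs[:, pick].T

def csp_features(epoch, W):
    """Log-variance of the CSP-projected signals, the usual feature vector."""
    proj = W @ epoch
    var = proj.var(axis=1)
    return np.log(var / var.sum())

rng = np.random.default_rng(0)
Xa = rng.normal(size=(20, 16, 256))   # e.g. "grasp" epochs: 16 channels, 256 samples
Xb = rng.normal(size=(20, 16, 256))   # e.g. "release" epochs
W = csp_filters(Xa, Xb)
print(csp_features(Xa[0], W).shape)   # (6,)
```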

16 pages, 3410 KB  
Article
Feature Extraction Based on Sparse Coding Approach for Hand Grasp Type Classification
by Jirayu Samkunta, Patinya Ketthong, Nghia Thi Mai, Md Abdus Samad Kamal, Iwanori Murakami and Kou Yamada
Algorithms 2024, 17(6), 240; https://doi.org/10.3390/a17060240 - 3 Jun 2024
Cited by 1 | Viewed by 1517
Abstract
The kinematics of the human hand exhibit complex and diverse characteristics unique to each individual. Various techniques such as vision-based, ultrasonic-based, and data-glove-based approaches have been employed to analyze human hand movements. However, a critical challenge remains in efficiently analyzing and classifying hand grasp types based on time-series kinematic data. In this paper, we propose a novel sparse coding feature extraction technique based on dictionary learning to address this challenge. Our method enhances model accuracy, reduces training time, and minimizes overfitting risk. We benchmarked our approach against principal component analysis (PCA) and sparse coding based on a Gaussian random dictionary. Our results demonstrate a significant improvement in classification accuracy: achieving 81.78% with our method compared to 31.43% for PCA and 77.27% for the Gaussian random dictionary. Furthermore, our technique outperforms in terms of macro-average F1-score and average area under the curve (AUC) while also significantly reducing the number of features required. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
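A sparse-coding feature-extraction pipeline of the kind described (dictionary learning on kinematic windows, sparse codes fed to a classifier) can be sketched as follows; the dictionary size, sparsity penalty, linear SVM, and synthetic data are illustrative assumptions, not the paper's configuration or dataset.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_grasps, n_joints, n_frames = 120, 15, 40
X = rng.normal(size=(n_grasps, n_joints * n_frames))   # stand-in kinematic windows
y = rng.integers(0, 3, size=n_grasps)                  # stand-in grasp-type labels

# Learn an overcomplete dictionary, encode each window sparsely, classify the codes.
clf = make_pipeline(
    DictionaryLearning(n_components=64, alpha=1.0, max_iter=50,
                       transform_algorithm="lasso_lars", random_state=0),
    SVC(kernel="linear"),
)
print(cross_val_score(clf, X, y, cv=3).mean())
```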

16 pages, 2574 KB  
Article
Ascent and Attachment in Pea Plants: A Matter of Iteration
by Silvia Guerra, Giovanni Bruno, Andrea Spoto, Anna Panzeri, Qiuran Wang, Bianca Bonato, Valentina Simonetti and Umberto Castiello
Plants 2024, 13(10), 1389; https://doi.org/10.3390/plants13101389 - 16 May 2024
Cited by 6 | Viewed by 2663
Abstract
Pea plants (Pisum sativum L.) can perceive the presence of potential supports in the environment and flexibly adapt their behavior to clasp them. How pea plants control and perfect this behavior during growth remains unexplored. Here, we attempt to fill this gap by studying the movement of the apex and the tendrils at different leaves using three-dimensional (3D) kinematical analysis. We hypothesized that plants accumulate information and resources through the circumnutation movements of each leaf. Information generates the kinematical coordinates for the final launch towards the potential support. Results suggest that developing a functional approach to grasp movement may involve an interactive trial and error process based on continuous cross-talk across leaves. This internal communication provides evidence that plants adopt plastic responses in a way that optimally corresponds to support search scenarios. Full article
(This article belongs to the Special Issue Plant Behavioral Ecology)

19 pages, 2760 KB  
Article
Explainable Multimodal Graph Isomorphism Network for Interpreting Sex Differences in Adolescent Neurodevelopment
by Binish Patel, Anton Orlichenko, Adnan Patel, Gang Qu, Tony W. Wilson, Julia M. Stephen, Vince D. Calhoun and Yu-Ping Wang
Appl. Sci. 2024, 14(10), 4144; https://doi.org/10.3390/app14104144 - 14 May 2024
Cited by 3 | Viewed by 2384
Abstract
Background: A fundamental grasp of the variability observed in healthy individuals holds paramount importance in the investigation of neuropsychiatric conditions characterized by sex-related phenotypic distinctions. Functional magnetic resonance imaging (fMRI) serves as a meaningful tool for discerning these differences. Among deep learning models, graph neural networks (GNNs) are particularly well-suited for analyzing brain networks derived from fMRI blood oxygen level-dependent (BOLD) signals, enabling the effective exploration of sex differences during adolescence. Method: In the present study, we introduce a multi-modal graph isomorphism network (MGIN) designed to elucidate sex-based disparities using fMRI task-related data. Our approach amalgamates brain networks obtained from multiple scans of the same individual, thereby enhancing predictive capabilities and feature identification. The MGIN model adeptly pinpoints crucial subnetworks both within and between multi-task fMRI datasets. Moreover, it offers interpretability through the utilization of GNNExplainer, which identifies pivotal sub-network graph structures contributing significantly to sex group classification. Results: Our findings indicate that the MGIN model outperforms competing models in terms of classification accuracy, underscoring the benefits of combining two fMRI paradigms. Additionally, our model discerns the most significant sex-related functional networks, encompassing the default mode network (DMN), visual (VIS) network, cognitive (CNG) network, frontal (FRNT) network, salience (SAL) network, subcortical (SUB) network, and sensorimotor (SM) network associated with hand and mouth movements. Remarkably, the MGIN model achieves superior sex classification accuracy when juxtaposed with other state-of-the-art algorithms, yielding a noteworthy 81.67% improvement in classification accuracy. Conclusion: Our model’s superiority emanates from its capacity to consolidate data from multiple scans of subjects within a proven interpretable framework. Beyond its classification prowess, our model guides our comprehension of neurodevelopment during adolescence by identifying critical subnetworks of functional connectivity. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Biomedical Data Analysis)

17 pages, 3772 KB  
Article
A Semiautonomous Control Strategy Based on Computer Vision for a Hand–Wrist Prosthesis
by Gianmarco Cirelli, Christian Tamantini, Luigi Pietro Cordella and Francesca Cordella
Robotics 2023, 12(6), 152; https://doi.org/10.3390/robotics12060152 - 13 Nov 2023
Cited by 10 | Viewed by 3740
Abstract
Alleviating the burden on amputees in terms of high-level control of their prosthetic devices is an open research challenge. EMG-based intention detection presents some limitations due to movement artifacts, fatigue, and stability. The integration of exteroceptive sensing can provide a valuable solution to overcome such limitations. In this paper, a novel semiautonomous control system (SCS) for wrist–hand prostheses using a computer vision system (CVS) is proposed and validated. The SCS integrates object detection, grasp selection, and wrist orientation estimation algorithms. By combining CVS with a simulated EMG-based intention detection module, the SCS guarantees reliable prosthesis control. Results show high accuracy in grasping and object classification (≥97%) at a fast frame analysis frequency (2.07 FPS). The SCS achieves an average angular estimation error ≤18° and stability ≤0.8° for the proposed application. Operative tests demonstrate the capabilities of the proposed approach to handle complex real-world scenarios and pave the way for future implementation on a real prosthetic device. Full article
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)
