Search Results (52)

Search Parameters:
Keywords = motion intention recognition

40 pages, 2250 KiB  
Review
Comprehensive Comparative Analysis of Lower Limb Exoskeleton Research: Control, Design, and Application
by Sk Hasan and Nafizul Alam
Actuators 2025, 14(7), 342; https://doi.org/10.3390/act14070342 - 9 Jul 2025
Viewed by 285
Abstract
This review provides a comprehensive analysis of recent advancements in lower limb exoskeleton systems, focusing on applications, control strategies, hardware architecture, sensing modalities, human-robot interaction, evaluation methods, and technical innovations. The study spans systems developed for gait rehabilitation, mobility assistance, terrain adaptation, pediatric use, and industrial support. Applications range from sit-to-stand transitions and post-stroke therapy to balance support and real-world navigation. Control approaches vary from traditional impedance and fuzzy logic models to advanced data-driven frameworks, including reinforcement learning, recurrent neural networks, and digital twin-based optimization. These controllers support personalized and adaptive interaction, enabling real-time intent recognition, torque modulation, and gait phase synchronization across different users and tasks. Hardware platforms include powered multi-degree-of-freedom exoskeletons, passive assistive devices, compliant joint systems, and pediatric-specific configurations. Innovations in actuator design, modular architecture, and lightweight materials support increased usability and energy efficiency. Sensor systems integrate EMG, EEG, IMU, vision, and force feedback, supporting multimodal perception for motion prediction, terrain classification, and user monitoring. Human–robot interaction strategies emphasize safe, intuitive, and cooperative engagement. Controllers are increasingly user-specific, leveraging biosignals and gait metrics to tailor assistance. Evaluation methodologies include simulation, phantom testing, and human–subject trials across clinical and real-world environments, with performance measured through joint tracking accuracy, stability indices, and functional mobility scores. Overall, the review highlights the field’s evolution toward intelligent, adaptable, and user-centered systems, offering promising solutions for rehabilitation, mobility enhancement, and assistive autonomy in diverse populations. Following a detailed review of current developments, strategic recommendations are made to enhance and evolve existing exoskeleton technologies.
(This article belongs to the Section Actuators for Robotics)
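As a concrete illustration of the traditional impedance control the review mentions, here is a minimal single-joint sketch; the gains, state names, and example values are illustrative assumptions, not taken from any surveyed system.

```python
import numpy as np

def impedance_torque(q, qd, q_ref, qd_ref, K=40.0, B=2.5):
    """Single-joint impedance law: render a virtual spring-damper
    between the measured joint state (q, qd) and a reference
    trajectory (q_ref, qd_ref). Gains K and B are illustrative."""
    return K * (q_ref - q) + B * (qd_ref - qd)

# Example: the joint lags 0.1 rad behind the reference at matched velocity.
tau = impedance_torque(q=0.5, qd=1.0, q_ref=0.6, qd_ref=1.0)
print(f"assistive torque: {tau:.2f} N·m")  # 4.00 N·m
```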

19 pages, 3216 KiB  
Article
Orbital Behavior Intention Recognition for Space Non-Cooperative Targets Under Multiple Constraints
by Yuwen Chen, Xiang Zhang, Wenhe Liao, Guoning Wei and Shuhui Fan
Aerospace 2025, 12(6), 520; https://doi.org/10.3390/aerospace12060520 - 9 Jun 2025
Viewed by 721
Abstract
To address the issue of misclassification and diminished accuracy that is prevalent in existing intent recognition models for non-cooperative spacecraft due to the omission of environmental influences, this paper presents a novel recognition framework leveraging a hybrid neural network subject to multiple constraints. The relative orbital motion of the targets is characterized and categorized through the use of Clohessy–Wiltshire equations, forming the foundation of a constrained intention dataset employed for training and evaluation. Furthermore, the method incorporates a composite architecture combining a convolutional neural network (CNN), long short-term memory (LSTM) unit, and self-attention (SA) mechanism to enhance recognition performance. The experimental results demonstrate that the integrated CNN-LSTM-SA model attains a recognition accuracy of 98.6%, significantly surpassing traditional methods and neural network models. Additionally, it demonstrates high efficiency, indicating significant promise for practical applications in avoiding spacecraft collisions and performing orbital maneuvers.
(This article belongs to the Special Issue Asteroid Impact Avoidance)
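For reference, the Clohessy–Wiltshire equations used to characterize the targets' relative orbital motion take the following standard linearized form (x radial, y along-track, z cross-track, n the mean motion of the reference orbit); the paper's exact parameterization may differ.

```latex
\begin{aligned}
\ddot{x} - 2n\dot{y} - 3n^{2}x &= 0,\\
\ddot{y} + 2n\dot{x} &= 0,\\
\ddot{z} + n^{2}z &= 0.
\end{aligned}
```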

18 pages, 4639 KiB  
Article
Using Hybrid Feature and Classifier Fusion for an Asynchronous Brain–Computer Interface Framework Based on Steady-State Motion Visual Evoked Potentials
by Bo Hu, Jun Xie, Huanqing Zhang, Junjie Liu and Hu Wang
Appl. Sci. 2025, 15(11), 6010; https://doi.org/10.3390/app15116010 - 27 May 2025
Viewed by 332
Abstract
This study proposes an asynchronous brain–computer interface (BCI) framework based on steady-state motion visual evoked potentials (SSMVEPs), designed to enhance the accuracy and robustness of control state recognition. The method integrates filter bank common spatial patterns (FBCSPs) and filter bank canonical correlation analysis (FBCCA) to extract complementary spatial and frequency domain features from EEG signals. These multimodal features are then fused and input into a dual-classifier structure consisting of a support vector machine (SVM) and extreme gradient boosting (XGBoost). A weighted fusion strategy is applied to combine the probabilistic outputs of both classifiers, allowing the system to leverage their respective strengths. Experimental results demonstrate that the fused FB(CSP + CCA)-(SVM + XGBoost) model achieves superior performance in distinguishing intentional control (IC) and non-control (NC) states compared to models using a single feature type or classifier. Furthermore, the visualization of feature distributions using UMAP shows improved inter-class separability when combining FBCSP and FBCCA features. These findings confirm the effectiveness of both feature-level and classifier-level fusion in asynchronous BCI systems. The proposed approach offers a promising and practical solution for developing more reliable and user-adaptive BCI applications, particularly in real-world environments requiring flexible control without external cues.
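A minimal sketch of the classifier-level fusion described above, assuming pre-extracted and fused FBCSP/FBCCA feature vectors and an illustrative fusion weight; the paper's actual weighting scheme may differ.

```python
import numpy as np
from sklearn.svm import SVC
from xgboost import XGBClassifier

def fused_predict(X_train, y_train, X_test, w=0.5):
    """Weighted fusion of SVM and XGBoost posterior probabilities
    for IC (intentional control) vs. NC (non-control) states.
    w is an illustrative fusion weight, not the paper's value."""
    svm = SVC(probability=True).fit(X_train, y_train)
    xgb = XGBClassifier(eval_metric="logloss").fit(X_train, y_train)
    p = w * svm.predict_proba(X_test) + (1 - w) * xgb.predict_proba(X_test)
    return p.argmax(axis=1)

# Toy usage with random stand-ins for fused FBCSP + FBCCA features.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(80, 12)), rng.integers(0, 2, 80)
print(fused_predict(X[:60], y[:60], X[60:]))
```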

28 pages, 6367 KiB  
Article
Human Action Recognition from Videos Using Motion History Mapping and Orientation Based Three-Dimensional Convolutional Neural Network Approach
by Ishita Arora and M. Gangadharappa
Modelling 2025, 6(2), 33; https://doi.org/10.3390/modelling6020033 - 18 Apr 2025
Viewed by 1406
Abstract
Human Activity Recognition (HAR) has recently attracted the attention of researchers. Human behavior and human intention are rapidly driving the intensification of HAR research. This paper proposes a novel Motion History Mapping (MHI) and Orientation-based Convolutional Neural Network (CNN) framework for action recognition and classification using Machine Learning. The proposed method extracts oriented rectangular patches over the entire human body to represent the human pose in an action sequence. This distribution is represented by a spatially oriented histogram. The frames were trained with a 3D Convolutional Neural Network model, thus saving time and increasing the Classification Correction Rate (CCR). The K-Nearest Neighbor (KNN) algorithm is used for the classification of human actions. The uniqueness of our model lies in the combination of the Motion History Mapping approach with an Orientation-based 3D CNN, thereby enhancing precision. The proposed method is demonstrated to be effective using four widely used and challenging datasets. A comparison of the proposed method’s performance with current state-of-the-art methods finds that its Classification Correction Rate is higher than that of the existing methods. Our model’s CCRs are 92.91%, 98.88%, 87.97% and 87.77% for the KTH, Weizmann, UT-Tower and YouTube datasets, respectively, remarkably higher than the existing techniques. Thus, our model significantly outperforms the existing models in the literature.
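To make the motion-history idea concrete, here is a minimal NumPy sketch of the classic motion history image update rule; this is an assumption about the formulation behind what the paper calls Motion History Mapping, with illustrative decay and threshold values.

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=30, thresh=25):
    """Classic MHI update: pixels whose frame-to-frame change exceeds
    `thresh` are set to `tau`; all others decay by 1 toward 0."""
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > thresh
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

# Toy usage on two random grayscale frames.
f0 = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
f1 = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
mhi = update_mhi(np.zeros((120, 160), dtype=np.int16), f0, f1)
```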

27 pages, 4596 KiB  
Review
Review of sEMG for Exoskeleton Robots: Motion Intention Recognition Techniques and Applications
by Xu Zhang, Yonggang Qu, Gang Zhang, Zhiqiang Wang, Changbing Chen and Xin Xu
Sensors 2025, 25(8), 2448; https://doi.org/10.3390/s25082448 - 13 Apr 2025
Cited by 2 | Viewed by 1722
Abstract
The global aging trend is becoming increasingly severe, and the demand for life assistance and medical rehabilitation for frail and disabled elderly people is growing. As the best solution for assisting limb movement, guiding limb rehabilitation, and enhancing limb strength, exoskeleton robots are becoming the focus of attention from all walks of life. This paper reviews the progress of research on upper limb exoskeleton robots, sEMG technology, and intention recognition technology. It analyzes the literature using keyword clustering analysis and comprehensively discusses the application of sEMG technology, deep learning methods, and machine learning methods in the process of human movement intention recognition by exoskeleton robots. The review finds that the focus of current research is finding algorithms with strong adaptability and high classification accuracy. Finally, traditional machine learning and deep learning algorithms are discussed, and future research directions are proposed, such as using a deep learning algorithm based on multi-information fusion to fuse EEG signals, electromyographic signals, and basic reference signals. A model with stronger generalization ability is obtained after training, thereby improving the accuracy of human movement intention recognition based on sEMG technology, which provides important support for the realization of human–machine fusion-embodied intelligence of exoskeleton robots.

19 pages, 1026 KiB  
Article
Surface EMG Sensing and Granular Gesture Recognition for Rehabilitative Pouring Tasks: A Case Study
by Congyi Zhang, Dalin Zhou, Yinfeng Fang, Naoyuki Kubota and Zhaojie Ju
Biomimetics 2025, 10(4), 229; https://doi.org/10.3390/biomimetics10040229 - 7 Apr 2025
Viewed by 575
Abstract
Surface electromyography (sEMG) non-invasively captures the electrical activity generated by muscle contractions, offering valuable insights into motion intentions. While sEMG has been widely applied to general gesture recognition in rehabilitation, there has been limited exploration of specific, intricate daily tasks, such as the pouring action. Pouring is a common yet complex movement requiring precise muscle coordination and control, making it an ideal focus for rehabilitation studies. This research proposes a granular computing-based deep learning approach utilizing the ConvMixer architecture enhanced with feature fusion and granular computing to improve gesture recognition accuracy. Our findings indicate that the addition of hand-crafted features significantly improves model performance; specifically, the ConvMixer model’s accuracy improved from 0.9512 to 0.9929. These results highlight the potential of our approach in rehabilitation technologies and assistive systems for restoring motor functions in daily activities.
(This article belongs to the Special Issue Artificial Intelligence (AI) in Biomedical Engineering)
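For orientation, a minimal PyTorch sketch of a single ConvMixer block of the kind the architecture builds on; layer sizes are illustrative, and the paper's feature-fusion and granular-computing additions are not shown.

```python
import torch
import torch.nn as nn

class ConvMixerBlock(nn.Module):
    """One ConvMixer block: depthwise conv with a residual connection,
    followed by a pointwise (1x1) conv, each with GELU + BatchNorm."""
    def __init__(self, dim=64, kernel_size=9):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(), nn.BatchNorm2d(dim))
        self.pointwise = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(), nn.BatchNorm2d(dim))

    def forward(self, x):
        x = x + self.depthwise(x)   # residual over the depthwise mixing
        return self.pointwise(x)

x = torch.randn(1, 64, 16, 16)    # e.g., a patch-embedded sEMG feature map
print(ConvMixerBlock()(x).shape)  # torch.Size([1, 64, 16, 16])
```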

38 pages, 8562 KiB  
Review
Research on Control Strategy Technology of Upper Limb Exoskeleton Robots: Review
by Libing Song, Chen Ju, Hengrui Cui, Yonggang Qu, Xin Xu and Changbing Chen
Machines 2025, 13(3), 207; https://doi.org/10.3390/machines13030207 - 3 Mar 2025
Cited by 2 | Viewed by 2410
Abstract
Upper limb exoskeleton robots, as highly integrated wearable devices with the human body structure, hold significant potential in rehabilitation medicine, human performance enhancement, and occupational safety and health. The rapid advancement of high-precision, low-noise acquisition devices and intelligent motion intention recognition algorithms has led to a growing demand for more rational and reliable control strategies. Consequently, the control systems and strategies of exoskeleton robots are becoming increasingly prominent. This paper innovatively takes the hierarchical control system of exoskeleton robots as the entry point and comprehensively compares the current control strategies and intelligent technologies for upper limb exoskeleton robots, analyzing their applicable scenarios and limitations. The current research still faces challenges such as the insufficient real-time performance of algorithms and limited individualized adaptation capabilities. It is recognized that no single traditional control algorithm can fully meet the intelligent interaction requirements between exoskeletons and the human body. The integration of many advanced artificial intelligence algorithms into intelligent control systems remains restricted. Meanwhile, the quality of control is closely related to the perception and decision-making system. Therefore, the combination of multi-source information fusion and cooperative control methods is expected to enhance efficient human–robot interaction and personalized rehabilitation. Transfer learning and edge computing technologies are expected to enable lightweight deployment, ultimately improving the work efficiency and quality of life of end-users.
(This article belongs to the Special Issue Advances and Challenges in Wearable Robotics)

12 pages, 1046 KiB  
Article
Assessing the Recognition of Social Interactions Through Body Motion in the Routine Care of Patients with Post-Lingual Sensorineural Hearing Loss
by Cordélia Fauvet, Léa Cantini, Aude-Eva Chaudoreille, Elisa Cancian, Barbara Bonnel, Chloé Sérignac, Alexandre Derreumaux, Philippe Robert, Nicolas Guevara, Auriane Gros and Valeria Manera
J. Clin. Med. 2025, 14(5), 1604; https://doi.org/10.3390/jcm14051604 - 27 Feb 2025
Viewed by 419
Abstract
Background: Body motion significantly contributes to understanding communicative and social interactions, especially when auditory information is impaired. The visual skills of people with hearing loss are often enhanced and compensate for some of the missing auditory information. In the present study, we investigated the recognition of social interactions by observing body motion in people with post-lingual sensorineural hearing loss (SNHL). Methods: In total, 38 participants with post-lingual SNHL and 38 matched normally hearing individuals (NHIs) were presented with point-light stimuli of two agents who were either engaged in a communicative interaction or acting independently. They were asked to classify the actions as communicative vs. independent and to select the correct action description. Results: No significant differences were found between the participants with SNHL and the NHIs when classifying the actions. However, the participants with SNHL showed significantly lower performance compared with the NHIs in the description task due to a higher tendency to misinterpret communicative stimuli. In addition, acquired SNHL was associated with a significantly higher number of errors, with a tendency to over-interpret independent stimuli as communicative and to misinterpret communicative actions. Conclusions: The findings of this study suggest a misinterpretation of visual understanding of social interactions in individuals with SNHL and over-interpretation of communicative intentions in SNHL acquired later in life.

29 pages, 32678 KiB  
Article
An Active Control Method for a Lower Limb Rehabilitation Robot with Human Motion Intention Recognition
by Zhuangqun Song, Peng Zhao, Xueji Wu, Rong Yang and Xueshan Gao
Sensors 2025, 25(3), 713; https://doi.org/10.3390/s25030713 - 24 Jan 2025
Cited by 3 | Viewed by 1594
Abstract
This study presents a method for the active control of a follow-up lower extremity exoskeleton rehabilitation robot (LEERR) based on human motion intention recognition. Initially, to effectively support body weight and compensate for the vertical movement of the human center of mass, a vision-driven follow-and-track control strategy is proposed. Subsequently, an algorithm for recognizing human motion intentions based on machine learning is proposed for human-robot collaboration tasks. A muscle–machine interface is constructed using a bi-directional long short-term memory (BiLSTM) network, which decodes multichannel surface electromyography (sEMG) signals into flexion and extension angles of the hip and knee joints in the sagittal plane. The hyperparameters of the BiLSTM network are optimized using the quantum-behaved particle swarm optimization (QPSO) algorithm, resulting in a QPSO-BiLSTM hybrid model that enables continuous real-time estimation of human motion intentions. Further, to address the uncertain nonlinear dynamics of the wearer-exoskeleton robot system, a dual radial basis function neural network adaptive sliding mode controller (DRBFNNASMC) is designed to generate control torques, thereby enabling the precise tracking of motion trajectories generated by the muscle–machine interface. Experimental results indicate that the follow-up-assisted frame can accurately track human motion trajectories. The QPSO-BiLSTM network outperforms traditional BiLSTM and PSO-BiLSTM networks in predicting continuous lower limb motion, while the DRBFNNASMC controller demonstrates superior gait tracking performance compared to the fuzzy compensated adaptive sliding mode control (FCASMC) algorithm and the traditional proportional–integral–derivative (PID) control algorithm.
(This article belongs to the Section Wearables)
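A minimal PyTorch sketch of the BiLSTM muscle–machine interface described above: multichannel sEMG windows in, sagittal-plane hip and knee angles out. Channel count, window length, and hidden size are assumptions; the QPSO hyperparameter search and the DRBFNNASMC controller are not shown.

```python
import torch
import torch.nn as nn

class SEMGToJointAngles(nn.Module):
    """BiLSTM regressor: (batch, time, channels) sEMG -> 2 joint angles."""
    def __init__(self, n_channels=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # hip and knee flexion/extension

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # angles at the last time step

window = torch.randn(4, 200, 8)  # 4 windows, 200 samples, 8 sEMG channels
print(SEMGToJointAngles()(window).shape)  # torch.Size([4, 2])
```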

19 pages, 1971 KiB  
Article
A Hierarchical-Based Learning Approach for Multi-Action Intent Recognition
by David Hollinger, Ryan S. Pollard, Mark C. Schall, Howard Chen and Michael Zabala
Sensors 2024, 24(23), 7857; https://doi.org/10.3390/s24237857 - 9 Dec 2024
Viewed by 1131
Abstract
Recent applications of wearable inertial measurement units (IMUs) for predicting human movement have often entailed estimating action-level (e.g., walking, running, jumping) and joint-level (e.g., ankle plantarflexion angle) motion. Although action-level or joint-level information is frequently the focus of movement intent prediction, contextual information is necessary for a more thorough approach to intent recognition. Therefore, a combination of action-level and joint-level information may offer a more comprehensive approach to predicting movement intent. In this study, we devised a novel hierarchical-based method combining action-level classification and subsequent joint-level regression to predict joint angles 100 ms into the future. K-nearest neighbors (KNN), bidirectional long short-term memory (BiLSTM), and temporal convolutional network (TCN) models were employed for action-level classification, and a random forest model trained on action-specific IMU data was used for joint-level prediction. A joint-level action-generic model trained on multiple actions (e.g., backward walking, kneeling down, kneeling up, running, and walking) was also used for predicting the joint angle. Compared with a hierarchical-based approach, the action-generic model had lower prediction error for backward walking, kneeling down, and kneeling up. Although the TCN and BiLSTM classifiers achieved classification accuracies of 89.87% and 89.30%, respectively, they did not surpass the performance of the action-generic random forest model when used in combination with an action-specific random forest model. This may have been because the action-generic approach was trained on more data from multiple actions. This study demonstrates the advantage of leveraging large, disparate data sources over a hierarchical-based approach for joint-level prediction. Moreover, it demonstrates the efficacy of an IMU-driven, task-agnostic model in predicting future joint angles across multiple actions.
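A compact sketch of the hierarchical idea: classify the action first, then route the IMU window to an action-specific joint-angle regressor. The two-stage structure and the 100 ms-ahead target follow the abstract; the feature shapes, models, and data below are illustrative stand-ins.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
actions = ["walking", "running", "kneeling_down"]
X = rng.normal(size=(300, 24))                  # IMU feature windows
y_act = rng.integers(0, len(actions), 300)      # action labels
y_ang = rng.normal(size=300)                    # joint angle 100 ms ahead

clf = KNeighborsClassifier().fit(X, y_act)      # action-level stage
regs = {a: RandomForestRegressor().fit(X[y_act == a], y_ang[y_act == a])
        for a in range(len(actions))}           # action-specific stage

x_new = X[:1]
a_hat = clf.predict(x_new)[0]                   # 1) recognize the action
angle = regs[a_hat].predict(x_new)[0]           # 2) predict the future angle
print(actions[a_hat], f"{angle:+.2f} rad")
```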

23 pages, 4654 KiB  
Article
Effective Acoustic Model-Based Beamforming Training for Static and Dynamic HRI Applications
by Alejandro Luzanto, Nicolás Bohmer, Rodrigo Mahu, Eduardo Alvarado, Richard M. Stern and Néstor Becerra Yoma
Sensors 2024, 24(20), 6644; https://doi.org/10.3390/s24206644 - 15 Oct 2024
Viewed by 1908
Abstract
Human–robot collaboration will play an important role in the fourth industrial revolution in applications related to hostile environments, mining, industry, forestry, education, natural disasters and defense. Effective collaboration requires robots to understand human intentions and tasks, which involves advanced user profiling. Voice-based communication, rich in complex information, is key to this. Beamforming, a technology that enhances speech signals, can help robots extract semantic, emotional, or health-related information from speech. This paper describes the implementation of a system that provides substantially improved signal-to-noise ratio (SNR) and speech recognition accuracy to a moving robotic platform for use in human–robot interaction (HRI) applications in static and dynamic contexts. This study focuses on training deep learning-based beamformers using acoustic model-based multi-style training with measured room impulse responses (RIRs). The results show that this approach outperforms training with simulated RIRs or matched measured RIRs, especially in dynamic conditions involving robot motion. The findings suggest that training with a broad range of measured RIRs is sufficient for effective HRI in various environments, making additional data recording or augmentation unnecessary. This research demonstrates that deep learning-based beamforming can significantly improve HRI performance, particularly in challenging acoustic environments, surpassing traditional beamforming methods.
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)
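To illustrate the multi-style training idea, a minimal SciPy sketch that reverberates clean speech with a measured room impulse response and adds noise at a target SNR before the example is fed to a beamformer trainer; the signals below are toy stand-ins.

```python
import numpy as np
from scipy.signal import fftconvolve

def reverberate(clean, rir, snr_db=10.0, rng=np.random.default_rng(0)):
    """Convolve clean speech with a measured RIR and add noise at a
    target SNR, producing one 'style' of training example."""
    wet = fftconvolve(clean, rir)[: len(clean)]
    noise = rng.normal(size=len(wet))
    noise *= np.linalg.norm(wet) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return wet + noise

clean = np.random.default_rng(1).normal(size=16000)  # 1 s at 16 kHz (stand-in)
rir = np.exp(-np.arange(4000) / 800.0)               # toy decaying RIR
noisy = reverberate(clean, rir)
```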

14 pages, 5641 KiB  
Article
Estimation of Lower Limb Joint Angles Using sEMG Signals and RGB-D Camera
by Guoming Du, Zhen Ding, Hao Guo, Meichao Song and Feng Jiang
Bioengineering 2024, 11(10), 1026; https://doi.org/10.3390/bioengineering11101026 - 15 Oct 2024
Cited by 4 | Viewed by 1880
Abstract
Estimating human joint angles is a crucial task in motion analysis, gesture recognition, and motion intention prediction. This paper presents a novel model-based approach for generating reliable and accurate human joint angle estimation using a dual-branch network. The proposed network leverages combined features derived from encoded sEMG signals and RGB-D image data. To ensure the accuracy and reliability of the estimation algorithm, the proposed network employs a convolutional autoencoder to generate a high-level compression of sEMG features aimed at motion prediction. Considering the variability in the distribution of sEMG signals, the proposed network introduces a vision-based joint regression network to maintain the stability of combined features. Taking into account latency, occlusion, and shading issues with vision data acquisition, the feature fusion network utilizes high-frequency sEMG features as weights for specific features extracted from image data. The proposed method achieves effective human body joint angle estimation for motion analysis and motion intention prediction by mitigating the effects of non-stationary sEMG signals.
(This article belongs to the Special Issue Bioengineering of the Motor System)
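A hedged PyTorch sketch of the fusion step described above, in which sEMG-derived features weight vision features; the dimensions and the specific gating form (a learned sigmoid weighting) are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SEMGWeightedFusion(nn.Module):
    """Use sEMG features to produce per-dimension weights for the
    vision (RGB-D) features, down-weighting unreliable visual cues."""
    def __init__(self, semg_dim=32, vis_dim=128):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(semg_dim, vis_dim), nn.Sigmoid())

    def forward(self, semg_feat, vis_feat):
        return self.gate(semg_feat) * vis_feat  # elementwise weighting

fused = SEMGWeightedFusion()(torch.randn(4, 32), torch.randn(4, 128))
print(fused.shape)  # torch.Size([4, 128])
```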

22 pages, 7744 KiB  
Article
Improved Taillight Detection Model for Intelligent Vehicle Lane-Change Decision-Making Based on YOLOv8
by Ming Li, Jian Zhang, Weixia Li, Tianrui Yin, Wei Chen, Luyao Du, Xingzhuo Yan and Huiheng Liu
World Electr. Veh. J. 2024, 15(8), 369; https://doi.org/10.3390/wevj15080369 - 15 Aug 2024
Cited by 1 | Viewed by 1975
Abstract
With the rapid advancement of autonomous driving technology, the recognition of vehicle lane-changing can provide effective environmental parameters for vehicle motion planning, decision-making and control, and has become a key task for intelligent vehicles. In this paper, an improved method for vehicle taillight detection and intent recognition based on YOLOv8 (You Only Look Once version 8) is proposed. Firstly, the CARAFE (Content-Aware ReAssembly of FEatures) module is introduced to address the fine-perception issues of small targets, enhancing taillight detection accuracy. Secondly, the TriAtt (Triplet Attention Mechanism) module is employed to improve the model’s focus on key features, particularly in the identification of positive samples, thereby increasing model robustness. Finally, by optimizing the EfficientP2Head (a small-object auxiliary head based on depth-wise separable convolutions) module, the detection capability for small targets is further strengthened while maintaining the model’s practicality and lightweight characteristics. Upon evaluation, the enhanced algorithm achieves a precision of 93.27%, a recall of 79.86%, and a mean average precision (mAP) of 85.48%, demonstrating that the proposed method achieves effective taillight detection.
(This article belongs to the Special Issue Motion Planning and Control of Autonomous Vehicles)

11 pages, 12531 KiB  
Article
Effects of Exercise on the Inter-Session Accuracy of sEMG-Based Hand Gesture Recognition
by Xiangyu Liu, Chenyun Dai, Jionghui Liu and Yangyang Yuan
Bioengineering 2024, 11(8), 811; https://doi.org/10.3390/bioengineering11080811 - 9 Aug 2024
Cited by 1 | Viewed by 1287
Abstract
Surface electromyography (sEMG) is commonly used as an interface in human–machine interaction systems due to its high signal-to-noise ratio and easy acquisition. It can intuitively reflect the motion intentions of users and is thus widely applied in gesture recognition systems. However, wearable sEMG-based gesture recognition systems are susceptible to changes in environmental noise, electrode placement, and physiological characteristics. This can result in significant performance degradation of the model in inter-session scenarios, giving users a poor experience. Currently, to handle noise from environmental changes and electrode shift from variations in wear, numerous studies have proposed various data-augmentation methods and highly generalized networks to improve inter-session gesture recognition accuracy. However, few studies have considered the impact of individual physiological states. In this study, we hypothesized that user exercise could cause changes in muscle conditions, leading to variations in sEMG features and subsequently affecting the recognition accuracy of the model. To verify this hypothesis, we collected sEMG data from 12 participants performing the same gesture tasks before and after exercise, and then used Linear Discriminant Analysis (LDA) for gesture classification. For the non-exercise group, the inter-session accuracy declined by only 2.86%, whereas that of the exercise group decreased by 13.53%. This finding confirms that exercise is indeed a critical factor contributing to the decline in inter-session model performance.
(This article belongs to the Section Biosignal Processing)
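The classification stage reported above is straightforward to reproduce in outline: train LDA on one session's sEMG features and test on a later, shifted session. The features, class count, and shift below are random stand-ins for the real recordings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_pre = rng.normal(size=(200, 16))        # session 1: pre-exercise features
y_pre = rng.integers(0, 6, 200)           # 6 gesture classes (illustrative)
X_post = X_pre + rng.normal(scale=0.5, size=X_pre.shape)  # shifted session 2

lda = LinearDiscriminantAnalysis().fit(X_pre, y_pre)
print("inter-session accuracy:", lda.score(X_post, y_pre))
```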

20 pages, 6297 KiB  
Article
Task-Motion Planning System for Socially Viable Service Robots Based on Object Manipulation
by Jeongmin Jeon, Hong-ryul Jung, Nabih Pico, Tuan Luong and Hyungpil Moon
Biomimetics 2024, 9(7), 436; https://doi.org/10.3390/biomimetics9070436 - 17 Jul 2024
Cited by 4 | Viewed by 2021
Abstract
This paper presents a software architecture to implement a task-motion planning system that can improve human-robot interactions by including social behavior when social robots provide services related to object manipulation to users. The proposed system incorporates four main modules: knowledge reasoning, perception, task planning, and motion planning for autonomous service. This system adds constraints to the robot motions based on the recognition of the object affordance from the perception module and environment states from the knowledge reasoning module. Thus, the system performs task planning by adjusting the goal of the task to be performed, and motion planning based on the functional aspects of the object, enabling the robot to execute actions consistent with social behavior to respond to the user’s intent and the task environment. The system is verified through simulated experiments consisting of several object manipulation services such as handover and delivery. The results show that, by using the proposed system, the robot can provide different services depending on the situation, even if it performs the same tasks. In addition, the system demonstrates a modular structure that enables the expansion of the available services by defining additional actions and diverse planning modules.
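A schematic Python sketch of the module flow the architecture describes: perception and knowledge reasoning constrain task planning, which emits motion-level actions. All class, method, and action names here are hypothetical placeholders, not the paper's API.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    object_affordance: str   # from the perception module
    environment: str         # from the knowledge reasoning module

def plan_service(state: WorldState, user_intent: str) -> list:
    """Task planning: adjust the task goal to the recognized affordance
    and environment, then emit motion-level actions (placeholders)."""
    if user_intent == "handover" and state.object_affordance == "graspable":
        return ["approach_user", "orient_object_socially", "extend_arm"]
    return ["place_on_surface", "notify_user"]  # fallback delivery behavior

print(plan_service(WorldState("graspable", "living_room"), "handover"))
```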
