
Search Results (292)

Search Parameters:
Keywords = gait dataset

14 pages, 1893 KiB  
Article
Unlocking the Potential of Smart Environments Through Deep Learning
by Adnan Ramakić and Zlatko Bundalo
Computers 2025, 14(8), 296; https://doi.org/10.3390/computers14080296 - 22 Jul 2025
Viewed by 180
Abstract
This paper examines the potential of artificial intelligence in smart environments. Environments such as houses and residential and commercial buildings are becoming smarter through technologies including sensors, smart devices and elements based on artificial intelligence. These technologies are used, for example, to achieve different levels of security, to provide personalized comfort and control, and for ambient assisted living. We investigated a deep learning approach and describe its use in this context. Accordingly, we developed and describe four deep learning models: for hand gesture recognition, emotion recognition, face recognition and gait recognition. These models are intended for various tasks in smart environments. To present possible applications of the models, a house is used in this paper as an example of a smart environment. The models were developed using the TensorFlow platform together with Keras, and four different datasets were used to train and validate them. The results are promising and are presented in this paper. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))

19 pages, 1818 KiB  
Article
Explainable AI Highlights the Most Relevant Gait Features for Neurodegenerative Disease Classification
by Gianmarco Tiddia, Francesca Mainas, Alessandra Retico and Piernicola Oliva
Appl. Sci. 2025, 15(14), 8078; https://doi.org/10.3390/app15148078 - 21 Jul 2025
Viewed by 291
Abstract
Gait analysis is a valuable tool for aiding in the diagnosis of neurological diseases, providing objective measurements of human gait kinematics and kinetics. These data enable the quantitative estimation of movement abnormalities, which helps to diagnose disorders and assess their severity. In this regard, machine learning techniques and explainability methods offer an opportunity to enhance anomaly detection in gait measurements and support a more objective assessment of neurodegenerative disease, providing insights into the most relevant gait parameters used for disease identification. This study employs several classifiers and explainability methods to analyze gait data from a public dataset composed of patients affected by degenerative neurological diseases and healthy controls. The work investigates the relevance of spatial, temporal, and kinematic gait parameters in distinguishing such diseases. The findings are consistent among the classifiers employed and in agreement with known clinical findings about the major gait impairments for each disease. This work promotes the use of data-driven assessments in clinical settings, helping reduce subjectivity in gait evaluation and enabling broader deployment in healthcare environments. Full article
(This article belongs to the Special Issue Machine Learning in Biomedical Sciences)
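The explainability idea in the abstract above can be illustrated with a minimal, self-contained sketch (our assumption, not the authors' pipeline): permutation feature importance, a common model-agnostic method, applied to a toy gait-parameter matrix. The feature names and the threshold "model" are hypothetical; shuffling a feature and measuring the accuracy drop ranks its relevance.

```python
# Toy permutation-importance sketch. Feature names, the label rule,
# and the threshold classifier are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200
stride_len = rng.normal(1.2, 0.2, n)   # drives the label
cadence = rng.normal(110, 10, n)       # pure noise feature
X = np.column_stack([stride_len, cadence])
y = (stride_len < 1.1).astype(int)     # "impaired" if strides are short

def accuracy(data, labels, threshold=1.1):
    # Trivial threshold "model" that only looks at feature 0.
    return float(np.mean((data[:, 0] < threshold).astype(int) == labels))

base = accuracy(X, y)
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # destroy feature j's signal
    importances.append(base - accuracy(Xp, y))

names = ["stride_length", "cadence"]
ranked = sorted(zip(names, importances), key=lambda t: -t[1])
```

The ranking recovers the informative parameter: permuting `stride_length` costs accuracy, permuting `cadence` does not.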

24 pages, 824 KiB  
Article
MMF-Gait: A Multi-Model Fusion-Enhanced Gait Recognition Framework Integrating Convolutional and Attention Networks
by Kamrul Hasan, Khandokar Alisha Tuhin, Md Rasul Islam Bapary, Md Shafi Ud Doula, Md Ashraful Alam, Md Atiqur Rahman Ahad and Md. Zasim Uddin
Symmetry 2025, 17(7), 1155; https://doi.org/10.3390/sym17071155 - 19 Jul 2025
Viewed by 380
Abstract
Gait recognition is a reliable biometric approach that uniquely identifies individuals based on their natural walking patterns. It is widely used because gait is difficult to camouflage and does not require a person's cooperation. Face-based person recognition systems often fail to determine an offender's identity when the offender conceals their face with a helmet or mask to evade identification. In such cases, gait-based recognition is ideal for identifying offenders, and most existing work leverages deep learning (DL) models. However, a single model often fails to capture a comprehensive selection of refined patterns in the input data in the presence of external factors such as variations in viewing angle, clothing, and carrying conditions. In response, this paper introduces a fusion-based multi-model gait recognition framework that leverages convolutional neural networks (CNNs) and a vision transformer (ViT) in an ensemble manner to enhance recognition performance. Here, CNNs capture spatiotemporal features, while the ViT's multiple attention layers focus on particular regions of the gait image. The first step in the framework is to obtain the Gait Energy Image (GEI) by averaging a height-normalized gait silhouette sequence over one gait cycle, which preserves the left–right symmetry of gait. The GEI is then fed through multiple pre-trained models, fine-tuned to extract deep spatiotemporal features. Three fusion strategies are evaluated: decision-level fusion (DLF), which takes each model's decision and applies majority voting; feature-level fusion (FLF), which combines features from the individual models through pointwise addition before recognition; and a hybrid fusion that combines DLF and FLF.
The performance of the multi-model fusion-based framework was evaluated on three publicly available gait databases: CASIA-B, OU-ISIR D, and the OU-ISIR Large Population dataset. The experimental results demonstrate that the fusion-enhanced framework achieves superior performance. Full article
(This article belongs to the Special Issue Symmetry and Its Applications in Image Processing)
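Two steps described in the abstract above are simple enough to sketch directly (shapes and model outputs below are toy values, not the paper's data): the GEI as the pixel-wise mean of binary silhouettes over one gait cycle, and decision-level fusion as a majority vote over per-model predictions.

```python
# Minimal GEI + decision-level-fusion sketch with synthetic inputs.
import numpy as np
from collections import Counter

def gait_energy_image(silhouettes):
    """silhouettes: (T, H, W) height-normalized binary frames from one cycle."""
    return silhouettes.astype(float).mean(axis=0)   # values in [0, 1]

def majority_vote(predictions):
    """predictions: one identity label per model (e.g. CNNs and a ViT)."""
    return Counter(predictions).most_common(1)[0][0]

cycle = np.stack([np.eye(4, dtype=np.uint8)] * 3)   # 3 identical toy frames
gei = gait_energy_image(cycle)
decision = majority_vote(["id_7", "id_7", "id_3"])  # two models outvote one
```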

21 pages, 1118 KiB  
Review
Integrating Large Language Models into Robotic Autonomy: A Review of Motion, Voice, and Training Pipelines
by Yutong Liu, Qingquan Sun and Dhruvi Rajeshkumar Kapadia
AI 2025, 6(7), 158; https://doi.org/10.3390/ai6070158 - 15 Jul 2025
Viewed by 1374
Abstract
This survey provides a comprehensive review of the integration of large language models (LLMs) into autonomous robotic systems, organized around four key pillars: locomotion, navigation, manipulation, and voice-based interaction. We examine how LLMs enhance robotic autonomy by translating high-level natural language commands into low-level control signals, supporting semantic planning and enabling adaptive execution. Systems like SayTap improve gait stability through LLM-generated contact patterns, while TrustNavGPT achieves a 5.7% word error rate (WER) under noisy voice-guided conditions by modeling user uncertainty. Frameworks such as MapGPT, LLM-Planner, and 3D-LOTUS++ integrate multi-modal data—including vision, speech, and proprioception—for robust planning and real-time recovery. We also highlight the use of physics-informed neural networks (PINNs) to model object deformation and support precision in contact-rich manipulation tasks. To bridge the gap between simulation and real-world deployment, we synthesize best practices from benchmark datasets (e.g., RH20T, Open X-Embodiment) and training pipelines designed for one-shot imitation learning and cross-embodiment generalization. Additionally, we analyze deployment trade-offs across cloud, edge, and hybrid architectures, emphasizing latency, scalability, and privacy. The survey concludes with a multi-dimensional taxonomy and cross-domain synthesis, offering design insights and future directions for building intelligent, human-aligned robotic systems powered by LLMs. Full article

22 pages, 3299 KiB  
Article
Lokomat-Assisted Robotic Rehabilitation in Spinal Cord Injury: A Biomechanical and Machine Learning Evaluation of Functional Symmetry and Predictive Factors
by Alexandru Bogdan Ilies, Cornel Cheregi, Hassan Hassan Thowayeb, Jan Reinald Wendt, Maur Sebastian Horgos and Liviu Lazar
Bioengineering 2025, 12(7), 752; https://doi.org/10.3390/bioengineering12070752 - 10 Jul 2025
Viewed by 431
Abstract
Background: Lokomat-assisted robotic rehabilitation is increasingly used for gait restoration in patients with spinal cord injury (SCI). However, the objective evaluation of treatment effectiveness through biomechanical parameters and machine learning approaches remains underexplored. Methods: This study analyzed data from 29 SCI patients undergoing Lokomat-based rehabilitation. A dataset of 46 variables including range of motion (L-ROM), joint stiffness (L-STIFF), and muscular force (L-FORCE) was examined using statistical methods (paired t-test, ANOVA, and ordinary least squares regression), clustering techniques (k-means), dimensionality reduction (t-SNE), and anomaly detection (Isolation Forest). Predictive modeling was applied to assess the influence of age, speed, body weight, body weight support, and exercise duration on biomechanical outcomes. Results: No statistically significant asymmetries were found between left and right limb measurements, indicating functional symmetry post-treatment (p > 0.05). Clustering analysis revealed a weak structure among patient groups (Silhouette score ≈ 0.31). Isolation Forest identified minimal anomalies in stiffness data, supporting treatment consistency. Regression models showed that body weight and body weight support significantly influenced joint stiffness (p < 0.01), explaining up to 60% of the variance in outcomes. Conclusions: Lokomat-assisted robotic rehabilitation demonstrates high functional symmetry and biomechanical consistency in SCI patients. Machine learning methods provided meaningful insight into the structure and predictability of outcomes, highlighting the clinical value of weight and support parameters in tailoring recovery protocols. Full article
(This article belongs to the Special Issue Regenerative Rehabilitation for Spinal Cord Injury)
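The symmetry check at the heart of the study above can be sketched with a paired t-test on left vs. right limb measurements. This is a hedged, numpy-only illustration with synthetic data, not the study's code; the 2.05 cutoff is the approximate two-sided 5% critical value for 28 degrees of freedom.

```python
# Paired t-test sketch: left vs. right knee stiffness, synthetic values.
import math
import numpy as np

def paired_t(x, y):
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / math.sqrt(n))
    return float(t), n - 1          # t statistic, degrees of freedom

rng = np.random.default_rng(42)
left = rng.normal(10.0, 1.0, 29)            # 29 patients, as in the study
right = left + rng.normal(0.0, 0.3, 29)     # near-symmetric right side
t_stat, dof = paired_t(left, right)
symmetric = abs(t_stat) < 2.05              # fail to reject => symmetry
```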

17 pages, 1691 KiB  
Article
Towards Explainable Graph Embeddings for Gait Assessment Using Per-Cluster Dimensional Weighting
by Chris Lochhead and Robert B. Fisher
Sensors 2025, 25(13), 4106; https://doi.org/10.3390/s25134106 - 30 Jun 2025
Viewed by 263
Abstract
As gait pathology assessment systems improve in both accuracy and efficiency, the prospect of using them in real healthcare applications is becoming more realistic. Although gait analysis systems have proven capable of detecting gait abnormalities in supervised tasks in laboratories and clinics, there is comparatively little investigation into making such systems explainable to the healthcare professionals who would use gait analysis in home-based settings. Existing machine learning models pose a "black box" problem: healthcare professionals are expected to "trust" the model making diagnoses without understanding its underlying reasoning. To address this practical barrier, an end-to-end pipeline is introduced here for creating graph feature embeddings, generated using a bespoke Spatio-temporal Graph Convolutional Network and per-joint Principal Component Analysis. The latent graph embeddings produced by this framework lead to a novel semi-supervised weighting function which quantifies and ranks the most important joint features, which are used to provide a description of each pathology. Using these embeddings with a K-means clustering approach, the proposed method also outperforms the state of the art by between 4.53% and 16% in classification accuracy across three datasets comprising a total of 14 different simulated gait pathologies, from minor limping to ataxic gait. The resulting system provides a workable improvement to at-home gait assessment by giving accurate and explainable descriptions of the nature of detected gait abnormalities without the need for prior labeled descriptions of detected pathologies. Full article

30 pages, 2018 KiB  
Article
Comprehensive Performance Comparison of Signal Processing Features in Machine Learning Classification of Alcohol Intoxication on Small Gait Datasets
by Muxi Qi, Samuel Chibuoyim Uche and Emmanuel Agu
Appl. Sci. 2025, 15(13), 7250; https://doi.org/10.3390/app15137250 - 27 Jun 2025
Viewed by 378
Abstract
Detecting alcohol intoxication is crucial for preventing accidents and enhancing public safety. Traditional intoxication detection methods rely on direct blood alcohol concentration (BAC) measurement via breathalyzers and wearable sensors. These methods require the user to purchase and carry external hardware such as breathalyzers, which is expensive and cumbersome, so convenient, unobtrusive detection methods that use equipment users already own are desirable. Recent research has explored machine learning-based approaches that use smartphone accelerometers to classify intoxicated gait patterns. While neural network approaches have emerged, collecting intoxicated gait data is so challenging that gait datasets are often too small for such approaches; to avoid overfitting on small datasets, traditional machine learning (ML) classification is preferred. A comprehensive set of ML features has been proposed, but until now no work has systematically evaluated the performance of the various categories of gait features for the alcohol intoxication detection task using traditional machine learning algorithms. This study evaluates 27 signal processing features handcrafted from accelerometer gait data across five domains: time, frequency, wavelet, statistical, and information-theoretic. The data were collected from 24 subjects who experienced simulated intoxication using alcohol-impairment goggles. Correlation-based feature selection (CFS) was employed to rank the features most correlated with alcohol-induced gait changes, revealing that 22 features exhibited statistically significant correlations with BAC levels. These statistically significant features were used to train supervised classifiers and assess their impact on alcohol intoxication detection accuracy. Statistical features yielded the highest accuracy (83.89%), followed by time-domain (83.22%) and frequency-domain features (82.21%). Classifying all 22 significant features across domains with a random forest model improved accuracy to 84.9%. These findings suggest that incorporating a broader set of signal processing features enhances the accuracy of smartphone-based alcohol intoxication detection. Full article
(This article belongs to the Special Issue AI-Based Biomedical Signal and Image Processing)
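The feature-ranking step described above can be illustrated in the spirit of CFS (an assumption on our part, not the paper's exact procedure): rank handcrafted features by the absolute Pearson correlation with BAC. The feature names and data below are synthetic.

```python
# Correlation-based feature ranking sketch with synthetic gait features.
import numpy as np

rng = np.random.default_rng(1)
n = 120
bac = rng.uniform(0.0, 0.12, n)                        # simulated BAC levels
features = {
    "sway_energy": 3.0 * bac + rng.normal(0, 0.02, n),  # BAC-linked feature
    "step_entropy": rng.normal(0, 1, n),                # uninformative noise
}

def rank_by_correlation(feats, target):
    # |Pearson r| of each feature against the target, best first.
    scores = {k: abs(np.corrcoef(v, target)[0, 1]) for k, v in feats.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank_by_correlation(features, bac)
```

In the real study the 22 features whose correlations were statistically significant would then be passed to the classifiers.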

27 pages, 3417 KiB  
Article
GaitCSF: Multi-Modal Gait Recognition Network Based on Channel Shuffle Regulation and Spatial-Frequency Joint Learning
by Siwei Wei, Xiangyuan Xu, Dewen Liu, Chunzhi Wang, Lingyu Yan and Wangyu Wu
Sensors 2025, 25(12), 3759; https://doi.org/10.3390/s25123759 - 16 Jun 2025
Viewed by 516
Abstract
Gait recognition, as a non-contact biometric technology, offers unique advantages in scenarios requiring long-distance identification without active cooperation from subjects. However, existing gait recognition methods predominantly rely on single-modal data, which demonstrates insufficient feature expression capabilities when confronted with complex factors in real-world environments, including viewpoint variations, clothing differences, occlusion problems, and illumination changes. This paper addresses these challenges by introducing a multi-modal gait recognition network based on channel shuffle regulation and spatial-frequency joint learning, which integrates two complementary modalities (silhouette data and heatmap data) to construct a more comprehensive gait representation. The channel shuffle-based feature selective regulation module achieves cross-channel information interaction and feature enhancement through channel grouping and feature shuffling strategies. This module divides input features along the channel dimension into multiple subspaces, which undergo channel-aware and spatial-aware processing to capture dependency relationships across different dimensions. Subsequently, channel shuffling operations facilitate information exchange between different semantic groups, achieving adaptive enhancement and optimization of features with relatively low parameter overhead. The spatial-frequency joint learning module maps spatiotemporal features to the spectral domain through fast Fourier transform, effectively capturing inherent periodic patterns and long-range dependencies in gait sequences. The global receptive field advantage of frequency domain processing enables the model to transcend local spatiotemporal constraints and capture global motion patterns. 
Concurrently, the spatial domain processing branch balances the contributions of frequency and spatial domain information through an adaptive weighting mechanism, maintaining computational efficiency while enhancing features. Experimental results demonstrate that the proposed GaitCSF model achieves significant performance improvements on mainstream datasets including GREW, Gait3D, and SUSTech1k, breaking through the performance bottlenecks of traditional methods. The implications of this research are significant for improving the performance and robustness of gait recognition systems when implemented in practical application scenarios. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
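The frequency-domain intuition above — that the FFT exposes the inherent periodicity of a gait sequence — can be shown on a synthetic signal. This is a minimal sketch, not the GaitCSF architecture: a noisy ~1 Hz "stride" signal whose dominant spectral peak recovers the stride frequency.

```python
# FFT sketch: recover the stride frequency of a synthetic gait signal.
import numpy as np

fs = 50.0                                # sampling rate, Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)       # 10 s of signal
stride_hz = 1.0                          # ~1 stride per second
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * stride_hz * t) + 0.1 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
dominant = float(freqs[np.argmax(spectrum[1:]) + 1])   # skip the DC bin
```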

21 pages, 12445 KiB  
Article
Parkinson’s Disease Detection via Bilateral Gait Camera Sensor Fusion Using CMSA-Net and Implementation on Portable Device
by Jinxuan Wang, Hua Huo, Wei Liu, Changwei Zhao, Shilu Kang and Lan Ma
Sensors 2025, 25(12), 3715; https://doi.org/10.3390/s25123715 - 13 Jun 2025
Viewed by 476
Abstract
The annual increase in the incidence of Parkinson’s disease (PD) underscores the critical need for effective detection methods and devices. Gait video features based on camera sensors, as a crucial biomarker for PD, are well-suited for detection and show promise for the development of portable devices. Consequently, we developed a single-step segmentation method based on Savitzky–Golay (SG) filtering and a sliding window peak selection function, along with a Cross-Attention Fusion with Mamba-2 and Self-Attention Network (CMSA-Net). Additionally, we introduced a loss function based on Maximum Mean Discrepancy (MMD) to further enhance the fusion process. We evaluated our method on a dual-view gait video dataset that we collected in collaboration with a hospital, comprising 304 healthy control (HC) samples and 84 PD samples, achieving an accuracy of 89.10% and an F1-score of 81.11%, thereby attaining the best detection performance compared with other methods. Based on these methodologies, we designed a simple and user-friendly portable PD detection device. The device is equipped with various operating modes—including single-view, dual-view, and prior information correction—which enable it to adapt to diverse environments, such as residential and elder care settings, thereby demonstrating strong practical applicability. Full article
(This article belongs to the Section Sensing and Imaging)
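The single-step segmentation idea above — smooth the signal, then pick peaks with a sliding window — can be sketched as follows. This is a hedged illustration, not the paper's method: a plain moving average stands in for the Savitzky–Golay filter, and the window size and height threshold are assumptions.

```python
# Step segmentation sketch: smoothing + sliding-window peak selection.
import numpy as np

def smooth(x, w=5):
    # Moving average as a stand-in for Savitzky-Golay filtering.
    return np.convolve(x, np.ones(w) / w, mode="same")

def window_peaks(x, half=10, min_height=0.5):
    # Keep samples that are the maximum of their local window.
    peaks = []
    for i in range(half, len(x) - half):
        win = x[i - half:i + half + 1]
        if x[i] >= min_height and x[i] == win.max():
            peaks.append(i)
    return peaks

t = np.arange(0, 4, 0.02)                 # 4 s at 50 Hz (synthetic)
accel = -np.cos(2 * np.pi * 1.0 * t)      # ~1 peak per second
steps = window_peaks(smooth(accel))       # one index per detected step
```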

57 pages, 4508 KiB  
Review
Person Recognition via Gait: A Review of Covariate Impact and Challenges
by Abdul Basit Mughal, Rafi Ullah Khan, Amine Bermak and Atiq ur Rehman
Sensors 2025, 25(11), 3471; https://doi.org/10.3390/s25113471 - 30 May 2025
Viewed by 845
Abstract
Human gait identification is a biometric technique that permits recognizing an individual from a long distance, drawing on features such as movement, timing, and clothing. It is especially useful in video surveillance scenarios, where biometric systems allow people to be recognized without intruding on their privacy. In computer vision, one of the essential and most difficult tasks is tracking a person across multiple camera views, specifically re-identifying the same person in diverse scenes. However, the accuracy of gait identification systems is significantly affected by covariate factors such as view angle, clothing, walking speed, occlusion, and low-light conditions. Previous studies have often overlooked the influence of these factors, leaving a gap in the comprehensive understanding of gait recognition systems. This paper provides a comprehensive review of the most effective gait recognition methods, assessing their performance across various image source databases while highlighting the limitations of existing datasets. It also explores the influence of key covariate factors, such as viewing angle, clothing, and environmental conditions, on model performance, and compares traditional gait recognition methods with advanced deep learning techniques, offering theoretical insights into the impact of covariates and addressing real-world application challenges. The comparisons and discussions presented provide valuable insights for developing a robust, improved gait-based identification framework for future advancements. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensor-Based Gait Recognition)

19 pages, 1662 KiB  
Systematic Review
Scoping Review of Machine Learning Techniques in Marker-Based Clinical Gait Analysis
by Kevin N. Dibbern, Maddalena G. Krzak, Alejandro Olivas, Mark V. Albert, Joseph J. Krzak and Karen M. Kruger
Bioengineering 2025, 12(6), 591; https://doi.org/10.3390/bioengineering12060591 - 30 May 2025
Cited by 1 | Viewed by 707
Abstract
The recent proliferation of novel machine learning techniques in quantitative marker-based 3D gait analysis (3DGA) has shown promise for improving interpretations of clinical gait analysis. The objective of this study was to characterize the state of the literature on machine learning in marker-based 3D gait analysis and to provide clinical insights that may be used to improve clinical analysis and care. Methods: A scoping review of the literature was conducted using the PubMed and Web of Science databases. Search terms were identified by the authors from eight relevant articles and supplemented by experts in clinical gait analysis and machine learning. Inclusion was decided by the adjudication of three reviewers. Results: The search identified 4324 articles matching the search terms; adjudication identified 105 relevant papers. The most commonly applied techniques were support vector machines, neural networks (NNs), and logistic regression. The most common clinical conditions evaluated were cerebral palsy, Parkinson's disease, and post-stroke. Conclusions: ML has been used broadly in the literature. Recent advances in deep learning have been more successful on larger datasets, while traditional techniques are robust on small datasets and can outperform NNs in accuracy and explainability. Explainable AI (XAI) techniques can improve model interpretability but have not been broadly used. Full article
(This article belongs to the Special Issue Biomechanics of Human Movement and Its Clinical Applications)

12 pages, 3764 KiB  
Article
Estimation of Three-Dimensional Ground Reaction Force and Center of Pressure During Walking Using a Machine-Learning-Based Markerless Motion Capture System
by Ru Feng, Ukadike Christopher Ugbolue, Chen Yang and Hui Liu
Bioengineering 2025, 12(6), 588; https://doi.org/10.3390/bioengineering12060588 - 29 May 2025
Viewed by 576
Abstract
Objective: We developed two neural network models to estimate the three-dimensional ground reaction force (GRF) and center of pressure (COP) based on marker trajectories obtained from a markerless motion capture system. Methods: Gait data were collected using two cameras and three force plates. Each gait dataset contained kinematic data and kinetic data from the stance phase. A multi-layer perceptron (MLP) and convolutional neural network (CNN) were constructed to estimate each component of GRF and COP based on the three-dimensional trajectories of the markers. A total of 100 samples were randomly selected as the test set, and the estimation performance was evaluated using the correlation coefficient (r) and relative root mean square error (rRMSE). Results: The r-values for MLP in each GRF component ranged from 0.918 to 0.989, with rRMSEs between 5.06% and 12.08%. The r-values for CNN in each GRF component ranged from 0.956 to 0.988, with rRMSEs between 6.03% and 9.44%. For the COP estimation, the r-values for MLP ranged from 0.727 to 0.982, with rRMSEs between 6.43% and 27.64%, while the r-values for CNN ranged from 0.896 to 0.977, with rRMSEs between 6.41% and 7.90%. Conclusions: It is possible to estimate GRF and COP from markerless motion capture data. This approach provides an alternative method for measuring kinetic parameters without force plates during gait analysis. Full article
(This article belongs to the Special Issue Biomechanics in Sport and Motion Analysis)
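The two evaluation metrics used above can be sketched directly. The formulas below follow common usage (rRMSE as RMSE normalized by the reference range, in percent); the paper's exact normalization may differ, and the GRF trace here is synthetic.

```python
# Pearson r and relative RMSE between an estimated and a measured GRF trace.
import numpy as np

def pearson_r(est, ref):
    return float(np.corrcoef(est, ref)[0, 1])

def rrmse(est, ref):
    rmse = np.sqrt(np.mean((est - ref) ** 2))
    return float(100.0 * rmse / (ref.max() - ref.min()))  # % of ref range

ref = np.sin(np.linspace(0, np.pi, 101)) * 800.0   # toy vertical GRF, N
est = ref + 20.0                                   # constant 20 N bias
r = pearson_r(est, ref)
err = rrmse(est, ref)
```

A constant bias leaves r at 1 while rRMSE reports the 2.5% offset, which is why the paper reports both.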

16 pages, 2191 KiB  
Article
Retrospective Frailty Assessment in Older Adults Using Inertial Measurement Unit-Based Deep Learning on Gait Spectrograms
by Julius Griškevičius, Kristina Daunoravičienė, Liudvikas Petrauskas, Andrius Apšega and Vidmantas Alekna
Sensors 2025, 25(11), 3351; https://doi.org/10.3390/s25113351 - 26 May 2025
Viewed by 626
Abstract
Frailty is a common syndrome in the elderly, marked by an increased risk of negative health outcomes such as falls, disability and death. It is important to detect frailty early and accurately to apply timely interventions that can improve health results in older adults. Traditional evaluation methods often depend on subjective evaluations and clinical opinions, which might lack consistency. This research uses deep learning to classify frailty from spectrograms based on IMU data collected during gait analysis. The study retrospectively analyzed an existing IMU dataset. Gait data were categorized into Frail, PreFrail, and NoFrail groups based on clinical criteria. Six IMUs were placed on lower extremity segments to collect motion data during walking activities. The raw signals from accelerometers and gyroscopes were converted into time–frequency spectrograms. A convolutional neural network (CNN) trained solely on raw IMU-derived spectrograms achieved 71.4% subject-wise accuracy in distinguishing frailty levels. Minimal preprocessing did not improve subject-wise performance, suggesting that the raw time–frequency representation retains the most salient gait cues. These findings suggest that wearable sensor technology combined with deep learning provides a robust, objective tool for frailty assessment, offering potential for clinical and remote health monitoring applications. Full article
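The signal-to-spectrogram conversion described above can be sketched with a short-time FFT. This is an illustration under assumptions (window and hop sizes, sampling rate, and the synthetic 2 Hz "gait" signal are ours), producing the kind of time–frequency image a CNN could classify.

```python
# Short-time-FFT spectrogram sketch for a synthetic IMU accelerometer trace.
import numpy as np

def spectrogram(x, win=64, hop=32):
    window = np.hanning(win)
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win] * window       # windowed segment
        frames.append(np.abs(np.fft.rfft(seg)))   # magnitude spectrum
    return np.array(frames).T                     # (freq_bins, time_frames)

fs = 100.0
t = np.arange(0, 5, 1 / fs)                # 5 s walking bout
imu = np.sin(2 * np.pi * 2.0 * t)          # ~2 Hz gait harmonic
spec = spectrogram(imu)
```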
15 pages, 2549 KiB  
Article
Automated Implementation of the Edinburgh Visual Gait Score (EVGS)
by Ishaasamyuktha Somasundaram, Albert Tu, Ramiro Olleac, Natalie Baddour and Edward D. Lemaire
Sensors 2025, 25(10), 3226; https://doi.org/10.3390/s25103226 - 21 May 2025
Viewed by 656
Abstract
The Edinburgh Visual Gait Score (EVGS) is a commonly used clinical scale for assessing gait abnormalities, providing insight into diagnosis and treatment planning. However, its manual implementation is resource-intensive, requiring time, expertise, and a controlled environment for video recording and analysis. To address these issues, an automated approach for scoring the EVGS was developed. Unlike past methods that depend on controlled environments or simulated videos, the proposed approach integrates pose estimation with new algorithms that handle operational challenges present in the dataset, such as minor camera movement during sagittal recordings, slight zoom variations in coronal views, and partial visibility (e.g., a missing head) in some videos. The system uses OpenPose for pose estimation and new algorithms for automatic gait event detection, stride segmentation, and computation of the 17 EVGS parameters across the sagittal and coronal planes. Evaluation on gait videos of patients with cerebral palsy showed high accuracy for parameters such as hip and knee flexion, while pelvic rotation and hindfoot alignment scoring need improvement. This automated EVGS approach can reduce clinician workload through rapid, automated gait analysis and enable mobile-based applications for clinical decision-making. Full article
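To illustrate the kind of computation such a pipeline performs, the sketch below derives a sagittal-plane knee flexion angle, one of the joint measures the abstract says scored well, from three 2D pose keypoints of the sort OpenPose produces. The function name and the example coordinates are hypothetical, not part of the published system:

```python
# Sketch: knee flexion angle from 2D pose keypoints (hip, knee, ankle).
# Keypoint values are made-up examples; the angle convention (0 deg = fully
# extended leg) is an assumption for illustration.
import numpy as np

def knee_flexion_deg(hip, knee, ankle):
    """Flexion = deviation of the thigh-shank angle from a straight leg."""
    thigh = np.asarray(hip, dtype=float) - np.asarray(knee, dtype=float)
    shank = np.asarray(ankle, dtype=float) - np.asarray(knee, dtype=float)
    cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    inner = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return 180.0 - inner  # straight leg -> inner angle 180 deg -> 0 deg flexion

# Collinear hip-knee-ankle: fully extended leg
print(round(knee_flexion_deg((0, 0), (0, 1), (0, 2)), 1))  # 0.0
# Knee displaced forward: some flexion
print(round(knee_flexion_deg((0, 0), (0.2, 1), (0, 2)), 1))
```

A full implementation would apply this per video frame after gait event detection, then sample the angle at the specific gait cycle instants the EVGS items reference.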
(This article belongs to the Section Biomedical Sensors)
29 pages, 1306 KiB  
Review
Artificial Vision Systems for Mobility Impairment Detection: Integrating Synthetic Data, Ethical Considerations, and Real-World Applications
by Santiago Felipe Luna-Romero, Mauren Abreu de Souza and Luis Serpa Andrade
Technologies 2025, 13(5), 198; https://doi.org/10.3390/technologies13050198 - 13 May 2025
Viewed by 1059
Abstract
Global estimates suggest that over a billion people, more than 15% of the world's population, live with some form of mobility disability, underscoring the pressing need for innovative technological solutions. Recent advancements in artificial vision systems, driven by deep learning and image processing techniques, offer promising avenues for detecting mobility aids and monitoring gait or posture anomalies. This paper presents a systematic review, conducted in accordance with ProKnow-C guidelines, of key methodologies, datasets, and ethical considerations in mobility impairment detection from 2015 to 2025. Our analysis reveals that convolutional neural network (CNN) approaches such as YOLO and Faster R-CNN frequently outperform traditional computer vision methods in accuracy and real-time efficiency, though their success depends on the availability of large, high-quality datasets that capture real-world variability. While synthetic data generation helps mitigate dataset limitations, models trained predominantly on simulated images often perform worse in uncontrolled environments because of the domain gap. Moreover, ethical and privacy concerns related to the handling of sensitive visual data remain insufficiently addressed, highlighting the need for robust privacy safeguards, transparent data governance, and effective bias-mitigation protocols. Overall, this review emphasizes the potential of artificial vision systems to transform assistive technologies for mobility impairments and calls for multidisciplinary efforts to ensure these systems are technically robust, ethically sound, and widely adoptable. Full article
