A Comprehensive Review of Vision-Based Sensor Systems for Human Gait Analysis
Abstract
1. Introduction
- A summary of the latest advances in vision sensor-based gait analysis systems is provided, covering hardware and software innovations from 2019 to 2024.
- Key algorithms for human gait analysis, feature extraction, and classification are discussed, including advanced machine learning models such as CNNs (convolutional neural networks), LSTMs (long short-term memory networks), and hybrid models (a minimal sketch of such a hybrid model follows this list).
- Different types of vision sensors, such as 2D cameras, 3D cameras, and markerless motion capture systems, are evaluated, and their advantages, limitations, and suitability for various applications in human gait analysis are discussed.
- Important applications of gait analysis in fields such as healthcare, sports science, and safety are explored through practical examples and case studies.
- Current challenges and shortcomings are identified, possible future directions are consolidated, and recommendations are made.
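To make the hybrid CNN–LSTM idea concrete, below is a minimal PyTorch sketch: 1D convolutions extract short-range temporal features from keypoint sequences, and an LSTM models longer-range gait-cycle dynamics. This is a generic illustration, not the architecture of any study reviewed here; the input layout (sequences of 2D joint coordinates, 17 keypoints × 2 = 34 features per frame) and the two-class output (e.g., normal vs. abnormal gait) are assumptions chosen for the example.

```python
# Hypothetical CNN-LSTM gait classifier (PyTorch).
# Input: (batch, time, joints * 2) joint-coordinate sequences.
# Output: logits for an assumed two-class task.
import torch
import torch.nn as nn

class CnnLstmGaitClassifier(nn.Module):
    def __init__(self, n_features=34, n_classes=2, hidden=64):
        super().__init__()
        # 1D convolutions over time extract short-range motion features.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # The LSTM models longer-range dynamics across the gait cycle.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, n_features)
        z = self.cnn(x.transpose(1, 2))         # -> (batch, 64, time)
        z, _ = self.lstm(z.transpose(1, 2))     # -> (batch, time, hidden)
        return self.head(z[:, -1])              # classify from last time step

model = CnnLstmGaitClassifier()
logits = model(torch.randn(8, 120, 34))         # e.g., 120 frames of 17 keypoints
print(logits.shape)                             # torch.Size([8, 2])
```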
2. Survey Methodology
3. Results
3.1. Types of Vision Sensors
3.1.1. 2D Camera
3.1.2. 3D Camera
3.1.3. Multi-Camera System
3.1.4. Other Camera Systems
3.2. Algorithms for Human Gait Analysis
3.2.1. Long Short-Term Memory (LSTM) Networks
3.2.2. Convolutional Neural Networks (CNNs)
3.2.3. Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) Networks
3.2.4. Support Vector Machines (SVMs)
3.2.5. Pose Estimation Algorithms
3.2.6. Dynamic Time Warping (DTW)
3.3. Gait Parameter Analysis
3.3.1. Spatial Parameters
3.3.2. Temporal Parameters
3.3.3. Kinematic Parameters
3.3.4. Other Parameters
3.4. Data Processing and Feature Extraction
- Segmentation and Tracking: Segmentation and tracking in vision-based gait analysis employ advanced algorithms such as OpenPose [15] or DeepLabCut to identify and continuously track the subject across video frames [44]. These methods rely on keypoints and pose estimation to ensure precise motion tracking, enabling detailed analysis of gait dynamics and biomechanical parameters with high accuracy and reliability.
- Feature Extraction: Feature extraction in gait analysis involves capturing joint coordinates [43], body segment lengths, and joint angles. These data points provide crucial insights into human movement patterns, aiding clinical diagnostics and biomechanical research in assessing mobility and gait function [40].
- Parameter Calculation: Parameter calculation in gait analysis uses the extracted features to compute spatial (e.g., step length) [28], temporal (e.g., cadence), and kinematic parameters (e.g., joint angles). These metrics offer insights into human locomotion for clinical assessment and rehabilitation, helping optimize interventions to improve movement efficiency and quality of life [27] (a simplified sketch of these steps follows this list).
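The sketch below makes the feature-extraction and parameter-calculation steps concrete: it computes a joint angle from three keypoints and basic spatiotemporal parameters from heel trajectories. It is a simplified illustration under stated assumptions (keypoints already extracted by a pose estimator such as OpenPose, pixel coordinates already scaled to metres, and heel strikes already detected), not the pipeline of any specific reviewed system; all values in the usage example are hypothetical.

```python
# Minimal sketch: from per-frame 2D keypoints to a joint angle and
# spatiotemporal gait parameters. Assumes keypoints were already
# extracted by a pose estimator and scaled from pixels to metres.
import numpy as np

def joint_angle(a, b, c):
    """Angle at b (degrees) between segments b->a and b->c,
    e.g., knee angle from hip (a), knee (b), ankle (c) keypoints."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def spatiotemporal(heel_x_m, heel_strike_frames, fps):
    """Mean step length (m), step time (s), and cadence (steps/min)
    from heel x-positions in metres at detected heel-strike frames."""
    strikes = np.asarray(heel_strike_frames)
    x = np.asarray(heel_x_m)[strikes]
    step_lengths = np.abs(np.diff(x))        # spatial parameter
    step_times = np.diff(strikes) / fps      # temporal parameter
    cadence = 60.0 / step_times.mean()
    return step_lengths.mean(), step_times.mean(), cadence

# Toy usage with hypothetical keypoints and heel-strike frames:
hip, knee, ankle = (0.52, 0.90), (0.50, 0.50), (0.55, 0.10)
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} deg")
sl, st, cad = spatiotemporal(
    heel_x_m=np.linspace(0.0, 3.0, 120),     # heel moves 3 m over 120 frames
    heel_strike_frames=[10, 40, 70, 100], fps=30)
print(f"step length {sl:.2f} m, step time {st:.2f} s, cadence {cad:.0f} steps/min")
```

Real systems add camera calibration, filtering of noisy keypoints, and robust gait-event detection on top of these basics.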
3.5. Number of Participants and the Experiment
3.6. Camera Location
3.6.1. Mobile Robot
3.6.2. Laboratory Environment
3.6.3. Fixed in One Position
3.7. Applications of Vision-Based Gait Analysis
3.7.1. Sports Performance Analysis
3.7.2. Healthcare and Clinical Diagnostics
3.7.3. Home Monitoring and Elderly Care
3.7.4. Walking Speed Classification
3.7.5. Fall Risk Assessment
3.7.6. Rehabilitation and Mobility Enhancement
4. Discussion
4.1. Vision-Based Sensors
4.2. Optimal Measurement Position for Vision Sensors
4.3. Strengths, Weaknesses, and Applicability of Gait Analysis Algorithms
4.4. Comprehensive Analysis of Gait Parameters in Vision-Based Gait Analysis
4.5. Review Limitations
4.6. Existing Challenges and Future Works
4.6.1. Existing Challenges
4.6.2. Future Works
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
1. Talaa, S.; Fezazi, M.E.; Jilbab, A.; Alaoui, H.E.Y. Computer Vision-Based Approach for Automated Monitoring and Assessment of Gait Rehabilitation at Home. Int. J. Online Biomed. Eng. 2023, 19, 139–157.
2. Luo, J.; Tjahjadi, T. View and Clothing Invariant Gait Recognition via 3D Human Semantic Folding. IEEE Access 2020, 8, 100365–100383.
3. Cai, S.; Shao, M.; Du, M.; Bao, G.; Fan, B. A Binocular-Camera-Assisted Sensor-to-Segment Alignment Method for Inertial Sensor-Based Human Gait Analysis. IEEE Sens. J. 2023, 23, 2663–2671.
4. Rodrigues, T.B.; Catháin, C.; Devine, D.; Moran, K.; O’Connor, N.E.; Murray, N. An evaluation of a 3D multimodal marker-less motion analysis system. In Proceedings of the 10th ACM Multimedia Systems Conference, Amherst, MA, USA, 18–21 June 2019; pp. 213–221.
5. Hatamzadeh, M.; Busé, L.; Chorin, F.; Alliez, P.; Favreau, J.D.; Zory, R. A kinematic-geometric model based on ankles’ depth trajectory in frontal plane for gait analysis using a single RGB-D camera. J. Biomech. 2022, 145, 111358.
6. Ceriola, L.; Mileti, I.; Taborri, J.; Donati, M.; Rossi, S.; Patanè, F. Comparison of Two Video-Based Methods for Knee Joint Angle Measurement: A Preliminary Study. In Proceedings of the 2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Milano, Italy, 25–27 October 2023; pp. 155–160.
7. Wang, Z.; Deligianni, F.; Voiculescu, I.; Yang, G.Z. A Single RGB Camera Based Gait Analysis with A Mobile Tele-Robot for Healthcare. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual, 1–5 November 2021; pp. 6933–6936.
8. Rani, V.; Kumar, M. Human gait recognition: A systematic review. Multimed. Tools Appl. 2023, 82, 37003–37037.
9. Sethi, D.; Bharti, S.; Prakash, C. A comprehensive survey on gait analysis: History, parameters, approaches, pose estimation, and future work. Artif. Intell. Med. 2022, 129, 102314.
10. Sethi, D.; Prakash, C.; Bharti, S. Latest Trends in Gait Analysis Using Deep Learning Techniques: A Systematic Review. In Artificial Intelligence and Speech Technology, Proceedings of the Third International Conference, AIST 2021, Delhi, India, 12–13 November 2021; Dev, A., Agrawal, S.S., Sharma, A., Eds.; Springer: Cham, Switzerland, 2022; pp. 363–375.
11. Alfayeed, S.M.; Saini, B.S. Human Gait Analysis Using Machine Learning: A Review. In Proceedings of the 2021 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), Dubai, United Arab Emirates, 17–18 March 2021; pp. 550–554.
12. Chen, B.; Chen, C.; Hu, J.; Sayeed, Z.; Qi, J.; Darwiche, H.F.; Little, B.E.; Lou, S.; Darwish, M.; Foote, C.; et al. Computer Vision and Machine Learning-Based Gait Pattern Recognition for Flat Fall Prediction. Sensors 2022, 22, 7960.
13. D’Antonio, E.; Taborri, J.; Mileti, I.; Rossi, S.; Patane, F. Validation of a 3D Markerless System for Gait Analysis Based on OpenPose and Two RGB Webcams. IEEE Sens. J. 2021, 21, 17064–17075.
14. Peebles, A.T.; Carroll, M.M.; Socha, J.J.; Schmitt, D.; Queen, R.M. Validity of Using Automated Two-Dimensional Video Analysis to Measure Continuous Sagittal Plane Running Kinematics. Ann. Biomed. Eng. 2021, 49, 455–468.
15. Ino, T.; Samukawa, M.; Ishida, T.; Wada, N.; Koshino, Y.; Kasahara, S.; Tohyama, H. Validity of AI-Based Gait Analysis for Simultaneous Measurement of Bilateral Lower Limb Kinematics Using a Single Video Camera. Sensors 2023, 23, 9799.
16. Ho, M.Y.; Kuo, M.C.; Chen, C.S.; Wu, R.M.; Chuang, C.C.; Shih, C.S.; Tseng, Y.J. Pathological Gait Analysis With an Open-Source Cloud-Enabled Platform Empowered by Semi-Supervised Learning-PathoOpenGait. IEEE J. Biomed. Health Inform. 2024, 28, 1066–1077.
17. Wade, L.; Needham, L.; Evans, M.; McGuigan, P.; Colyer, S.; Cosker, D.; Bilzon, J. Examination of 2D frontal and sagittal markerless motion capture: Implications for markerless applications. PLoS ONE 2023, 18, e0293917.
18. Vairis, A.; Boyak, J.; Brown, S.; Bess, M.; Bae, K.H.; Petousis, M. Gait Analysis Using Video for Disabled People in Marginalized Communities. In Intelligent Human Computer Interaction, Proceedings of the 12th International Conference, IHCI 2020, Daegu, South Korea, 24–26 November 2020; Springer: Cham, Switzerland, 2021; Volume 12616, pp. 145–153.
19. Muñoz-Ospina, B.; Alvarez-Garcia, D.; Clavijo-Moran, H.J.C.; Valderrama-Chaparro, J.A.; García-Peña, M.; Herrán, C.A.; Urcuqui, C.C.; Navarro-Cadavid, A.; Orozco, J. Machine Learning Classifiers to Evaluate Data From Gait Analysis With Depth Cameras in Patients With Parkinson’s Disease. Front. Hum. Neurosci. 2022, 16, 826376.
20. Menascu, S.; Vinogradsky, A.; Baransi, H.; Kalron, A. Range of motion abnormalities in the lower limb joints during gait in youth with multiple sclerosis. Mult. Scler. Relat. Disord. 2024, 82, 105417.
21. Lee, D.; Soon, J.; Choi, G.; Kim, K.; Bahn, S. Identification of the Visually Prominent Gait Parameters for Forensic Gait Analysis. Int. J. Environ. Res. Public Health 2022, 19, 2467.
22. O’Gorman, L.; Liu, X.; Sarker, M.I.; Milanova, M. Video analytics gait trend measurement for fall prevention and health monitoring. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 489–496.
23. Giannakou, E.; Fotiadou, S.; Gourgoulis, V.; Mavrommatis, G.; Aggelousis, N. A Comparative Analysis of Symmetry Indices for Spatiotemporal Gait Features in Early Parkinson’s Disease. Neurol. Int. 2023, 15, 1129–1139.
24. Duncan, L.; Zhu, S.; Pergolotti, M.; Giri, S.; Salsabili, H.; Faezipour, M.; Ostadabbas, S.; Mirbozorgi, S.A. Camera-Based Short Physical Performance Battery and Timed Up and Go Assessment for Older Adults with Cancer. IEEE Trans. Biomed. Eng. 2023, 70, 2529–2539.
25. Kanko, R.M.; Laende, E.K.; Strutzenberger, G.; Brown, M.; Selbie, W.S.; DePaul, V.; Scott, S.H.; Deluzio, K.J. Assessment of spatiotemporal gait parameters using a deep learning algorithm-based markerless motion capture system. J. Biomech. 2021, 122, 110414.
26. Keller, V.T.; Outerleys, J.B.; Kanko, R.M.; Laende, E.K.; Deluzio, K.J. Clothing condition does not affect meaningful clinical interpretation in markerless motion capture. J. Biomech. 2022, 141, 111182.
27. Martín, G.D.S.; Reyes-González, L.; Sainz-Ruiz, S.; Rodríguez-Cobo, L.; López-Higuera, J.M. Automatic ankle angle detection by integrated RGB and depth camera system. Sensors 2021, 21, 1909.
28. Aung, N.; Bovonsunthonchai, S.; Hiengkaew, V.; Tretriluxana, J.; Rojasavastera, R.; Pheung-Phrarattanatrai, A. Concurrent validity and intratester reliability of the video-based system for measuring gait poststroke. Physiother. Res. Int. 2020, 25, e1803.
29. Guess, T.M.; Bliss, R.; Hall, J.B.; Kiselica, A.M. Comparison of Azure Kinect overground gait spatiotemporal parameters to marker based optical motion capture. Gait Posture 2022, 96, 130–136.
30. Horsak, B.; Eichmann, A.; Lauer, K.; Prock, K.; Krondorfer, P.; Siragy, T.; Dumphart, B. Concurrent validity of smartphone-based markerless motion capturing to quantify lower-limb joint kinematics in healthy and pathological gait. J. Biomech. 2023, 159, 111801.
31. Amsaprabhaa, M.; Jane, Y.N.; Nehemiah, H.K. A survey on spatio-temporal framework for kinematic gait analysis in RGB videos. J. Vis. Commun. Image Represent. 2021, 79, 103218.
32. Sikandar, T.; Rabbi, M.F.; Ghazali, K.H.; Altwijri, O.; Alqahtani, M.; Almijalli, M.; Altayyar, S.; Ahamed, N.U. Using a deep learning method and data from two-dimensional (2D) marker-less video-based images for walking speed classification. Sensors 2021, 21, 2836.
33. Scicluna, J.; Seychell, D.; Spiteri, D.B. 2D Video Dataset for Detailed Pose Estimation of the Running Form. In Proceedings of the 2023 IEEE EMBS Special Topic Conference on Data Science and Engineering in Healthcare, Medicine and Biology, St. Julians, Malta, 7–9 December 2023; pp. 123–124.
34. Zult, T.; Allsop, J.; Tabernero, J.; Pardhan, S. A low-cost 2-D video system can accurately and reliably assess adaptive gait kinematics in healthy and low vision subjects. Sci. Rep. 2019, 9, 18385.
35. Kobsar, D.; Osis, S.T.; Jacob, C.; Ferber, R. Validity of a novel method to measure vertical oscillation during running using a depth camera. J. Biomech. 2019, 85, 182–186.
36. Ammann, E.; Meier, R.L.; Rutz, E.; Vavken, P.; Studer, K.; Camathias, C. Trochleoplasty improves knee flexion angles and quadriceps function during gait only if performed bilaterally. Knee Surg. Sport. Traumatol. Arthrosc. 2020, 28, 2067–2076.
37. Sinsurin, K.; Valldecabres, R.; Richards, J. An exploration of the differences in hip strength, gluteus medius activity, and trunk, pelvis, and lower-limb biomechanics during different functional tasks. Int. Biomech. 2020, 7, 35–43.
38. Tanpure, S.; Phadnis, A.; Nagda, T.; Rathod, C.; Chavan, A.; Gad, M. Do they feel better when they walk better? 3D gait analysis study in total knee arthroplasty for Indian osteoarthritic knees. J. Arthrosc. Jt. Surg. 2023, 10, 85–90.
39. Kaur, R.; Motl, R.W.; Sowers, R.; Hernandez, M.E. A Vision-Based Framework for Predicting Multiple Sclerosis and Parkinson’s Disease Gait Dysfunctions—A Deep Learning Approach. IEEE J. Biomed. Health Inform. 2023, 27, 190–201.
40. Vats, V.K.; Saikumar, R.; Prakash, C. Identification of knee angle trajectory in Indian outfit using Pose Analysis. In Proceedings of the 2022 4th International Conference on Artificial Intelligence and Speech Technology (AIST), Delhi, India, 9–10 December 2022.
41. Sivaprakash, P.; Kumar, S.S.; Felix, C.S.; Kayalvili, S.; Kumar, M.S.; Devi, S. Detection of Anomalies in Pedestrian Walking Using OpenPose and Bidirectional LSTM. In Proceedings of the 2023 4th International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 20–22 September 2023; pp. 1638–1645.
42. Vuong, G.H.; Tran, M.T. Resnet Video 3D for Gait Retrieval: A Deep Learning Approach to Human Identification. In Proceedings of the 12th International Symposium on Information and Communication Technology, Ho Chi Minh, Vietnam, 7–8 December 2023; pp. 886–892.
43. Jaén-Vargas, M.; Leiva, K.R.; Fernandes, F.; Gonçalves, S.B.; Silva, M.T.; Lopes, D.S.; Olmedo, J.S. A deep learning approach to recognize human activity using inertial sensors and motion capture systems. In Fuzzy Systems and Data Mining VII; IOS Press BV: Amsterdam, The Netherlands, 2021; Volume 340, pp. 250–256.
44. Nishizaki, K.; Izumiura, T.; Nishizaki, H.; Izumi, S.I.; Ikegami, H. Evaluation of a gait analysis tool using posture estimation technology in clinical rehabilitation. In Proceedings of the 2021 IEEE 3rd Global Conference on Life Sciences and Technologies (LifeTech), Nara, Japan, 9–11 March 2021; pp. 84–87.
45. Pattanapisont, T.; Siritanawan, P.; Kotani, K.; Kondo, T.; Karnjana, J. Gait Image Analysis Based on Human Body Parts Model. In Proceedings of the 2023 IEEE International Conference on Agents (ICA), Kyoto, Japan, 4–6 December 2023; pp. 24–27.
46. Albert, J.A.; Owolabi, V.; Gebel, A.; Brahms, C.M.; Granacher, U.; Arnrich, B. Evaluation of the pose tracking performance of the Azure Kinect and Kinect v2 for gait analysis in comparison with a gold standard: A pilot study. Sensors 2020, 20, 5104.
47. Lonini, L.; Moon, Y.; Embry, K.; Cotton, R.J.; McKenzie, K.; Jenz, S.; Jayaraman, A. Video-Based Pose Estimation for Gait Analysis in Stroke Survivors during Clinical Assessments: A Proof-of-Concept Study. Digit. Biomarkers 2022, 6, 9–18.
48. Zarei, H.; Norasteh, A.A.; Lieberman, L.J.; Ertel, M.W.; Brian, A. Effects of proprioception and core stability training on gait parameters of deaf adolescents: A randomized controlled trial. Sci. Rep. 2023, 13, 21867.
49. Yamamoto, M.; Shimatani, K.; Ishige, Y.; Takemura, H. Verification of gait analysis method fusing camera-based pose estimation and an IMU sensor in various gait conditions. Sci. Rep. 2022, 12, 17719.
50. Ruiz-Malagón, E.J.; García-Pinillos, F.; Molina-Molina, A.; Soto-Hermoso, V.M.; Ruiz-Alias, S.A. RunScribe Sacral Gait Lab™ Validation for Measuring Pelvic Kinematics during Human Locomotion at Different Speeds. Sensors 2023, 23, 2604.
51. Nagymáté, G.; Kiss, R.M. Affordable gait analysis using augmented reality markers. PLoS ONE 2019, 14, e0212319.
52. Boldo, M.; Marco, R.D.; Martini, E.; Nardon, M.; Bertucco, M.; Bombieri, N. On the reliability of single-camera markerless systems for overground gait monitoring. Comput. Biol. Med. 2024, 171, 108101.
53. Ripic, Z.; Kuenze, C.; Andersen, M.S.; Theodorakos, I.; Signorile, J.; Eltoukhy, M. Ground reaction force and joint moment estimation during gait using an Azure Kinect-driven musculoskeletal modeling approach. Gait Posture 2022, 95, 49–55.
54. Tasjid, M.S.; Marouf, A.A. Leveraging Smartphone Sensors for Detecting Abnormal Gait for Smart Wearable Mobile Technologies. Int. J. Interact. Mob. Technol. 2021, 15, 167–175.
55. Albuquerque, P.; Machado, J.P.; Verlekar, T.T.; Correia, P.L.; Soares, L.D. Remote gait type classification system using markerless 2D video. Diagnostics 2021, 11, 1824.
56. Arizpe-Gómez, P.; Harms, K.; Janitzky, K.; Witt, K.; Hein, A. Towards automated self-administered motor status assessment: Validation of a depth camera system for gait feature analysis. Biomed. Signal Process. Control 2024, 87, 105352.
57. Washabaugh, E.P.; Shanmugam, T.A.; Ranganathan, R.; Krishnan, C. Comparing the accuracy of open-source pose estimation methods for measuring gait kinematics. Gait Posture 2022, 97, 188–195.
58. Needham, L.; Evans, M.; Cosker, D.P.; Wade, L.; McGuigan, P.M.; Bilzon, J.L.; Colyer, S.L. The accuracy of several pose estimation methods for 3D joint centre localisation. Sci. Rep. 2021, 11, 20673.
59. Jamsrandorj, A.; Nguyen, M.D.; Park, M.; Kumar, K.S.; Mun, K.R.; Kim, J. Vision-Based Gait Events Detection Using Deep Convolutional Neural Networks. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual, 1–5 November 2021; pp. 1936–1941.
60. Guffanti, D.; Brunete, A.; Hernando, M.; Gambao, E.; Alvarez, D. ANN-Based Optimization of Human Gait Data Obtained from a Robot-Mounted 3D Camera: A Multiple Sclerosis Case Study. IEEE Robot. Autom. Lett. 2022, 7, 8901–8908.
61. Yagi, K.; Sugiura, Y.; Hasegawa, K.; Saito, H. Gait Measurement at Home Using A Single RGB Camera. Gait Posture 2020, 76, 136–140.
62. Li, A.; Hou, S.; Cai, Q.; Fu, Y.; Huang, Y. Gait Recognition With Drones: A Benchmark. IEEE Trans. Multimed. 2024, 26, 3530–3540.
63. Veerubhotla, A.; Ehrenberg, N.; Ibironke, O.; Pilkar, R. Accuracy comparison of machine learning algorithms at various wear-locations for activity identification post stroke: A pilot analysis. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual, 1–5 November 2021; pp. 6106–6109.
64. Dorsch, E.M.; Röhling, H.M.; Zocholl, D.; Hafermann, L.; Paul, F.; Schmitz-Hübsch, T. Progression events defined by home-based assessment of motor function in multiple sclerosis: Protocol of a prospective study. Front. Neurol. 2023, 14, 1258635.
65. Foo, M.J.; Chang, J.S.; Ang, W.T. Real-Time Foot Tracking and Gait Evaluation with Geometric Modeling. Sensors 2022, 22, 1661.
66. Severin, A.C.; Gean, R.P.; Barnes, S.G.; Queen, R.; Butler, R.J.; Martin, R.; Barnes, C.L.; Mannen, E.M. Effects of a corrective heel lift with an orthopaedic walking boot on joint mechanics and symmetry during gait. Gait Posture 2019, 73, 233–238.
67. Topham, L.K.; Khan, W.; Al-Jumeily, D.; Waraich, A.; Hussain, A.J. A diverse and multi-modal gait dataset of indoor and outdoor walks acquired using multiple cameras and sensors. Sci. Data 2023, 10, 320.
68. Rajesh, P.; Anand, P.P.; Babu, C.R.; Karthik, V.; Kumar, A. Affordable human motion analysis system. IOP Conf. Ser. Mater. Sci. Eng. 2020, 912, 062025.
69. Yoon, S.; Jung, H.W.; Jung, H.; Kim, K.; Hong, S.K.; Roh, H.; Oh, B.M. Development and validation of 2D-LiDAR-based gait analysis instrument and algorithm. Sensors 2021, 21, 414.
70. Röhling, H.M.; Otte, K.; Rekers, S.; Finke, C.; Rust, R.; Dorsch, E.M.; Behnia, B.; Paul, F.; Schmitz-Hübsch, T. RGB-Depth Camera-Based Assessment of Motor Capacity: Normative Data for Six Standardized Motor Tasks. Int. J. Environ. Res. Public Health 2022, 19, 16989.
71. Zeng, Y.; Wu, L.; Xie, D. Gait Analysis based on Azure Kinect 3D Human Skeleton. In Proceedings of the 2021 International Conference on Computer Information Science and Artificial Intelligence (CISAI), Kunming, China, 17–19 September 2021; pp. 1059–1062.
72. Sims, D.T.; Burden, A.; Payton, C.; Onambélé-Pearson, G.L.; Morse, C.I. A quantitative description of self-selected walking in adults with Achondroplasia using the gait profile score. Gait Posture 2019, 68, 150–154.
73. Clark, R.A.; Mentiplay, B.F.; Hough, E.; Pua, Y.H. Three-dimensional cameras and skeleton pose tracking for physical function assessment: A review of uses, validity, current developments and Kinect alternatives. Gait Posture 2019, 68, 193–200.
Paper | Year | Model | Systematic Review | Publication Year Range of Studied Articles | Number of Studies Included | Differences with Respect to Our Article
---|---|---|---|---|---|---
[8] | 2022 | SVM, KNN, CNN, LSTM | Yes | 2015–2022 | 128 | Only some vision sensors are mentioned, and their functions are not described in detail.
[9] | 2021 | Various traditional ML and DL models | Yes | 2010–2021 | 204 | No mention of 3D cameras.
[10] | 2023 | Deep learning models (e.g., CNN, RNN) | Yes | 2018–2023 | 29 | Only 2D video and image analysis.
[11] | 2020 | Machine learning models | Yes | 2015–2020 | 36 | Only 36 articles were analyzed, and the coverage of machine learning methods is not comprehensive.
[12] | 2022 | ML techniques (e.g., SVM, KNN) | Yes | 2017–2022 | 89 | Limited to vision-based deep learning approaches.
Database | Query |
---|---
IEEE Xplore | (“All Metadata”: camera* OR “All Metadata”: video*) AND (“All Metadata”: human gait*) AND (“All Metadata”: walk* OR “All Metadata”: run* OR “All Metadata”: jog*) |
Scopus | (TITLE-ABS-KEY (camera* OR video*) AND TITLE-ABS-KEY (human AND gait*) AND TITLE-ABS-KEY (walk* OR run* OR jog*)) AND PUBYEAR > 2018 |
Web of Science | camera* OR video* (Abstract) and run* OR walk* OR jog* (Abstract) and human gait* (Abstract) |
PubMed | ((camera* [Title/Abstract] OR video* [Title/Abstract]) AND (run* [Title/Abstract] OR walk* [Title/Abstract] OR jog* [Title/Abstract])) AND (human gait* [Title/Abstract]) Filters: 2019–2024
Inclusion Criteria | Exclusion Criteria
---|---
Gait analysis systems using vision sensors. | Book chapters, review papers, letters, short communications, technical notes, conference proceedings.
Sensor modalities include 2D, 3D, ordinary, depth, multi-camera, and other vision-based systems. | Only evaluated vision sensor technology for step counts, distance, and activity classification.
Included at least one defined gait outcome measure, such as spatiotemporal or kinematic parameters. | Studies not evaluating straight walking (e.g., change-of-direction tasks or cutting manoeuvres).
Written in English. | Aimed only to evaluate computer algorithms, machine learning, or statistical approaches.
Based on video recording and gait analysis. | Studies concerning non-human animal subjects.
Investigated gait variability or regularity. | Studies involving clinical gait analysis only.
Open-access articles. | Studies involving wearable sensors, such as IMUs.
Authors | Vision Sensors | Frame Rates | Location | Gait Parameters | Limitations and Further Study
---|---|---|---|---|---
Ceriola et al. [6] | A Logitech BRIO 4K Stream Edition camera. | Full HD (1920 × 1080 px) at 60 Hz. | The subject performed four trials of one-minute walking on a treadmill at three different speeds. | Stride length/stride time/step length/step width. | The 30 Hz sample rate limits Azure Kinect’s accuracy at faster speeds and for stance/swing time measurements.
D’Antonio et al. [13] | Two identical high-definition webcams (Logitech BRIO 4K Stream Edition). | 60 fps at 720 × 1280 px. | The webcams were placed in three different positions: back–back, lateral–lateral, and back–lateral. | Specific lower-limb joint angles. | The system’s kinematic accuracy is limited by camera configuration and locomotion activity.
Peebles et al. [14] | A 10-camera motion capture system (Oqus 700, Qualisys, Gothenburg, Sweden) and a single consumer-grade video camera (Hero 6 Black, GoPro, San Mateo, CA, USA). | 120 fps at an image size of 1920 × 1080 px. | The camera was placed at five positions, spanning angles of 35.9° to 0°. | Knee, ankle, and foot angle at contact; peak knee flexion; knee flexion excursion; and knee–ankle flexion vector coding variability. | The study’s limitations include marker identification variance, single-session design, and sagittal plane focus.
Ino et al. [15] | A high-speed digital video camera (Bonita Video 720C, Vicon Motion Systems Ltd., Oxford, UK). | 120 Hz. | The camera captured sagittal views from the participant’s right side to ensure accurate visualization of the motion. | Ankle, knee, and hip joints. | Study limitations include AI misidentification, inherent 3D-MA errors, and a lack of validation in abnormal gait patterns.
Ho et al. [16] | ZED camera. | 30 fps at a resolution of 1080p. | Participants walked along a 4 m line. | Stride length/stride time/step length/step width. | Study limitations include requiring a 3D camera and varying performance with different cameras.
Wade et al. [17] | Fifteen Qualisys cameras (Oqus, Qualisys, Gothenburg, Sweden) and two machine vision cameras. | 200 Hz. | One machine vision camera was set up in the sagittal plane (right-hand side of the body during walking) and one in the frontal plane (directly in front of the participant). | Ankle, hip, and knee. | Limitations include marker-based errors, indoor settings, and a small, healthy adult participant group.
Vairis et al. [18] | Two iPhone 6s. | 8-megapixel cameras at 30 frames per second. | One camera recorded the side view of the walk; the second camera, at 90° to the first, recorded the subject walking toward it. | |
Muñoz-Ospina et al. [19] | Portable low-cost devices (KinecteMotion). | 30 Hz. | The camera is placed in front of the participant. | Swing magnitude (left/right), swing time (left/right), swing speed (left/right), and arm swing asymmetry (ASA). | Study limitations include a small sample size, a single dataset, no gait speed matching, and unimplemented ML algorithms.
Menascu et al. [20] | A six-camera (CX1 sensor unit) CODA 3D motion analysis system. | 4 s at 200 Hz. | The camera is placed in front of the participant. | Walking speed, stride length, stride time, cadence (steps/min), step length, stance phase, and step time. | Limitations include a small sample size, retrospective design, unrecorded factors, and no control group.
Lee et al. [21] | Four different cameras. | | Each camera was placed three meters from the center of the pedestrian path, and the cameras at the front and back were about one meter from the end of the walkway. | Step length, walking speed, swing/stance phase, foot angle, knee angle, and knee varus/valgus. | Limitations include a lack of dominant hand data and potential manual measurement errors.
O’Gorman et al. [22] | Video camera. | 30 fps. | The camera captures a person’s gait as that person walks at 90° to the camera view. | Speed, variability, step duration, activity level, stride length, and multi-scale entropy. | Limitations include constrained system setup and challenges in achieving accurate gait measurement.
Giannakou et al. [23] | Six optoelectronic cameras and two Kistler force plates. | 100 fps and 1000 Hz. | Six cameras record the movement of the participants’ lower limbs during gait, and the force plates are positioned in the middle of the walkway. | Spatiotemporal parameters, including cadence, step time, stride time, single support, double support, walking speed, step length, stride length, step width, and foot angle. | Symmetry indices may artificially inflate or deflate asymmetry levels due to parameter type or magnitude.
Hatamzadeh et al. [5] | Reference: a 10 m long OptoGait system; a single Microsoft Azure Kinect camera. | OptoGait: 1.041 cm spatial and 1000 Hz temporal resolution; Azure Kinect: 30 Hz. | The camera is placed at a 1 m distance from the end of the OptoGait walkway at an 80 cm height. | Step time, step length, stride time, stride length, and gait speed. | Limitations include validation solely on healthy individuals, limited walking distance (6 m), and unassessed OptoGait biases in phase percentage calculations.
Duncan et al. [24] | Two USB cameras (ZEALINNO 1080P Webcam, Shenzhen MLK Technology Company Limited, Shenzhen, China), one Raspberry Pi V2.1 camera module, and one smartphone. | The physical dimensions of the CBMT platform are 13 cm × 19 cm × 36 cm (without the tripod and marker); the gap between the left and right cameras is 273.5 mm to detect depth more accurately. | The left and right cameras are used for gait speed tests; the center camera is used for standing balance, 5TSS, and TUG tests. | | Limitations include validation only on a small sample of healthy volunteers and older adults, and a reliance on the erratic walking trajectory for balance assessment.
Kanko et al. [25] | Seven Qualisys 3+ cameras (Qualisys AB, Gothenburg, Sweden), which were used to record marker trajectories, and eight Qualisys Miqus cameras. | Both systems recorded at 85 Hz. | The seven Qualisys cameras recorded marker trajectories, and the eight Qualisys Miqus cameras, which recorded 2D video, were positioned around an instrumented treadmill. | Step length, stride length, stride width, step time, cycle time, swing time, stance time, double limb support time, and trial-average gait speed. | Limitations include sample bias toward healthy, young individuals and the unknown sensitivity of the markerless system to health status and environmental factors.
Keller et al. [26] | Eight Sony RX0 II cameras (Sony Corporation, Minato, Japan). | 60 fps. | | Gait speed/step length/stride length/stride width/step time/cycle time/swing time/stance time/double-limb support time. | Limitations include restricted clothing conditions and an unexplored impact on joint moments, warranting further investigation in clinical populations.
Martín et al. [27] | Kinect v2. | 30 Hz. | The Kinect v2 is placed at a 1 m height to make the recording; the person stands between 1.5 and 4.5 m away. | Ankle angle. | The method’s significant limitation is its non-real-time processing, taking approximately half an hour for one minute of Kinect data, which may hinder timely results in certain scenarios.
Aung et al. [28] | Gold standard: an FDM system (Zebris, Germany); a video camera (Sanyo Xacti VPC-GH1, Vietnam). | FDM: 100 Hz; video: 60 Hz. | Placed perpendicular to the participants. | Step length, step time, stride length, gait velocity, and cadence. | The study’s limitations include parallax and perspective errors and a lack of intertester reliability.
Guess et al. [29] | A 12-camera Vicon optical motion capture system and a single Azure Kinect. | Vicon: 100 Hz; Azure Kinect: 30 Hz. | The Azure Kinect DK was placed 2.7 m from the center of the Vicon capture space and 1.0 m above the floor. | Stride length/stride time/step length/step width. | The 30 Hz sample rate limits Azure Kinect’s accuracy at faster speeds and for stance/swing time measurements.
Horsak et al. [30] | A 16-camera motion capture system (Nexus 2.14, Vicon, Oxford, UK) and two iOS smartphones (iPhone 11 and 12 Pro). | Vicon: 120 Hz; smartphones: 720 × 1280 px at 60 Hz. | The cameras were positioned approximately 35 degrees off the center of the walkway, with the lens of each camera at a height of approximately 1.5 m. | Pelvis, hip, knee, and ankle joints. | The study’s limitations include using healthy participants to mimic pathological gait, potential anatomical differences affecting pose estimation, and possible minor variations from cross-correlation data alignment.
Paper | Number of Participants | Participant Characteristics
---|---|---
[26] | 29 healthy subjects (18 males and 11 females) | Mean age of 21.5 ± 1.3 (SD) years, mean height of 1.71 ± 0.08 (SD) m, mean mass of 65.03 ± 10.64 (SD) kg, and mean body mass index (BMI) of 22.12 ± 2.44 (SD) kg·m−2, after excluding subjects wearing inconsistent footwear in both clothing conditions.
[56] | 26 initially, but data from 24 were evaluated. | The study included 26 participants: 15 with movement disorders (14 with Parkinson’s disease and 1 with hereditary spastic spinal paralysis) and 11 without any disorders.
[37] | 20 healthy participants (12 males and 8 females) | Twenty healthy participants were included after excluding those with current musculoskeletal issues, prior lower extremity surgery or trauma, or medical conditions impacting movement.
[51] | 10 healthy participants | Aged between 18 and 84 years (average age 28.6 years, SD 19.6). The average height was 1.71 m (SD 0.06), and the average weight was 66.8 kg (SD 17.8). Participants were excluded if they had any musculoskeletal disorders.
[57] | 32 healthy participants (22 males and 10 females) | They walked along a 6.5 m indoor walkway while video and motion capture data were recorded. Data from one participant were excluded because they belonged to a different subset.
[58] | 15 healthy participants (7 males [1.82 ± 0.11 m, 85.7 ± 11.1 kg], 8 females [1.65 ± 0.08 m, 63.2 ± 6.0 kg]) | In a single session, participants performed ten walking trials, ten running trials, and ten counter-movement jumps in random order, wearing a full-body marker set. Two motion capture systems recorded the movements simultaneously.
[59] | 11 subjects (8 males and 3 females) | Age = 24.2 ± 3.8 years, height = 170.7 ± 6.4 cm, and weight = 71.7 ± 16.1 kg, with no presence or history of neurological disorders.
Han, X.; Guffanti, D.; Brunete, A. A Comprehensive Review of Vision-Based Sensor Systems for Human Gait Analysis. Sensors 2025, 25, 498. https://doi.org/10.3390/s25020498