Detecting Toe-Off Events Utilizing a Vision-Based Method
Abstract
1. Introduction
2. Related Work
2.1. Wearable Sensors-Based Methods
2.2. Vision-Based Methods
3. Toe-Off Events Detection Based on CSD-Maps
3.1. Consecutive Silhouettes Difference Maps
3.1.1. 2-CSD-Maps
3.1.2. n-CSD-Maps
Algorithm 1 Algorithm for generating n-CSD-maps
Require: n consecutive silhouette images. Parameters w and h represent the width and height of the silhouette images, respectively. Parameter n represents the number of consecutive silhouette images.
Ensure: The CSD-map.
For every pixel position (x, y), with x ranging over the width w and y over the height h, the algorithm accumulates the pixel-wise differences between each pair of consecutive silhouette images and writes the accumulated value into the CSD-map at (x, y); the completed CSD-map is returned.
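As one concrete reading of Algorithm 1, the NumPy sketch below accumulates absolute pixel-wise differences between successive binary silhouettes. The function name generate_n_csd_map and the use of absolute differences are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def generate_n_csd_map(silhouettes):
    """Sketch of an n-CSD-map: accumulate pixel-wise differences
    between each pair of consecutive binary silhouette images.

    silhouettes: list of n binary arrays of identical shape (h, w),
    with foreground pixels equal to 1 and background pixels equal to 0.
    """
    if len(silhouettes) < 2:
        raise ValueError("At least two consecutive silhouettes are required.")
    csd_map = np.zeros(silhouettes[0].shape, dtype=np.float32)
    for k in range(1, len(silhouettes)):
        # Pixels that changed between silhouette k-1 and silhouette k
        # contribute to the consecutive-silhouettes difference map.
        csd_map += np.abs(silhouettes[k].astype(np.float32)
                          - silhouettes[k - 1].astype(np.float32))
    return csd_map
```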
Algorithm 2 Algorithm for normalizing a CSD-map
Require: The original CSD-map image; the width w and the height h of the normalized CSD-map.
Ensure: The normalized CSD-map.
The original CSD-map is rescaled to the target size of w × h pixels, and the resulting normalized CSD-map is returned.
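The intent of Algorithm 2 is size normalization, i.e., resizing every CSD-map to one fixed input size (48 × 32 in the experiments of Section 4.3). The sketch below uses OpenCV's cv2.resize and adds an optional value rescaling step; both the interpolation choice and the rescaling, as well as the interpretation of 48 × 32 as height × width, are assumptions made for illustration.

```python
import cv2
import numpy as np

def normalize_csd_map(csd_map, width=32, height=48):
    """Sketch of CSD-map size normalization: rescale the map to a
    fixed width x height so that all samples share one input size."""
    resized = cv2.resize(csd_map.astype(np.float32), (width, height),
                         interpolation=cv2.INTER_LINEAR)
    # Optionally rescale values to [0, 1] so the CNN input range is stable.
    max_val = resized.max()
    return resized / max_val if max_val > 0 else resized
```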
3.2. Convolutional Neural Network
4. Experiments and Results Analysis
4.1. Database
4.2. Toe-Off Frame Definition and Data Preparation
4.3. Experimental Configuration
- Configuration of n-CSD-maps. Several pre-tests were conducted under the viewing angles of 72°, 90° and 108° to choose the size of the normalized CSD-maps. As shown in Figure 8, the pre-test results show that different sizes of normalized CSD-maps cause almost no change in detection accuracy. The main reason is that CSD-maps are generated from binary pedestrian silhouettes, so reducing the size of the normalized CSD-maps does not substantially change the detection accuracy of this method. Thus, in the following experiments, the size of normalized CSD-maps is set to 48 × 32. As for the parameter n of n-CSD-maps, it is set to 2, 3, 4, 5, and 6; that is, 2-CSD-maps, 3-CSD-maps, 4-CSD-maps, 5-CSD-maps and 6-CSD-maps are used in the experiments. The reason is that increasing n brings little gain in detection accuracy while costing more time for feature extraction, as shown in Table 1 and Figure 9.
- Configuration of training set and test set. The samples from subject #001 to subject #090 of each viewing angle are selected for model training. The remaining samples (from subject #091 to subject #124) are used for testing.
- Configuration of the CNN solver. The initial learning rate is 0.001, the momentum is 0.9 and the weight decay is 0.0005. The maximum number of iterations in each experiment is 20,000. The weights in the CNN are initialized from a zero-mean Gaussian distribution with a standard deviation of 0.0001, and the biases are set to one. These settings and the subject-wise train/test split are summarized in the sketch after this list.
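The following sketch only collects the solver hyperparameters listed above and illustrates the subject-wise split; the dictionary layout, the helper name split_by_subject and the subject_id field are hypothetical, and no particular deep-learning framework is implied.

```python
# Solver hyperparameters as stated in Section 4.3.
SOLVER_CONFIG = {
    "base_learning_rate": 0.001,
    "momentum": 0.9,
    "weight_decay": 0.0005,
    "max_iterations": 20_000,
    "weight_init": {"type": "gaussian", "mean": 0.0, "std": 0.0001},
    "bias_init": 1.0,
}

def split_by_subject(samples):
    """Subject-wise split: subjects #001-#090 for training and
    subjects #091-#124 for testing. Each sample is assumed to carry
    an integer subject_id attribute (a hypothetical field name)."""
    train = [s for s in samples if s.subject_id <= 90]
    test = [s for s in samples if 91 <= s.subject_id <= 124]
    return train, test
```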
4.4. Experimental Results and Discussion
5. Conclusions and Future Work
- Compared with wearable sensor-based methods, this method can detect toe-off events from 2D video data without requiring cooperation from participants. In the field of medicine, wearable sensor-based methods are usually the first choice for gait analysis because of their high accuracy. However, they suffer from the drawbacks of demanding substantial user cooperation and being constrained by power consumption. The method proposed in this paper, which also achieves good accuracy for toe-off event detection using only a web camera, can overcome these disadvantages of wearable sensor-based gait analysis.
- Compared with other vision-based methods, this method provides better accuracy for toe-off event detection. Gait cycle detection is a basic step of gait recognition, and an accurate toe-off event detection algorithm leads to accurate gait cycle detection. Thus, the method proposed in this paper would be beneficial to gait recognition.
- A much larger database is needed to test the practical performance of toe-off event detection under different conditions.
- The CSD-map provides a good feature representation for detecting toe-off events from video data. It would also be applicable to detecting other gait events, such as heel strike, foot flat, mid-stance, heel-off, and mid-swing.
Author Contributions
Funding
Conflicts of Interest
References
Table 1. Detection accuracy of n-CSD-maps under different viewing angles.

n-CSD-Maps | 36° | 54° | 72° | 90° | 108° | 126° | 144° |
---|---|---|---|---|---|---|---|
2-CSD | 93.2% | 94.34% | 94.3% | 96.26% | 94.68% | 94.83% | 92.72% |
3-CSD | 93.45% | 94.74% | 95.14% | 96.52% | 95.54% | 95.52% | 93.08% |
4-CSD | 93.52% | 95.18% | 95.24% | 96.58% | 95.64% | 95.58% | 93.16% |
5-CSD | 93.55% | 95.38% | 95.36% | 96.62% | 95.74% | 95.62% | 93.22% |
6-CSD | 93.63% | 95.4% | 95.44% | 96.78% | 95.78% | 95.65% | 93.44% |