# Can Ensemble Deep Learning Identify People by Their Gait Using Data Collected from Multi-Modal Sensors in Their Insole?


## Abstract


## 1. Introduction

#### Related Work

## 2. Gait Information

#### 2.1. Data Source

#### 2.2. Data Pre-Processing

#### 2.2.1. Walking Cycle Detection

#### 2.2.2. Standard Format

## 3. Network Design


#### 3.1. Convolutional Neural Network

#### 3.2. Recurrent Neural Network

#### 3.3. Fully Connected Network

#### 3.4. Averaging Ensemble Model

## 4. Experimental Results

#### 4.1. Datasets and Evaluation Method

#### 4.2. Latent Space and Training Time

#### 4.3. Identification Accuracy

#### 4.3.1. Tri-Modal Sensing

#### 4.3.2. Bi-Modal Sensing

#### 4.3.3. Uni-Modal Sensing

## 5. Discussion and Future Work

## 6. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References


**Figure 3.** Mean of eight pressure values (blue lines) and the convolution of the mean with a Gaussian function with standard deviation $\sigma = 0.2$ s (red lines).
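The smoothing step behind Figure 3 can be sketched as follows. This is a minimal illustration, not the authors' code: the eight pressure channels are averaged and convolved with a unit-area Gaussian of standard deviation 0.2 s; the 100 Hz sampling rate is an assumption used only to convert seconds to samples.

```python
import numpy as np

def smooth_pressure(pressure, fs=100.0, sigma_s=0.2):
    """Average eight pressure channels and convolve the result with a
    Gaussian of standard deviation sigma_s seconds (0.2 s, as in Figure 3).
    `pressure` has shape (T, 8); `fs` is an assumed sampling rate in Hz."""
    mean = pressure.mean(axis=1)          # (T, 8) -> (T,)
    sigma = sigma_s * fs                  # standard deviation in samples
    half = int(4 * sigma)                 # truncate the kernel at 4 sigma
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()                # normalize to unit area
    return np.convolve(mean, kernel, mode="same")
```

The smoothed curve suppresses within-step fluctuations, so the peaks and valleys that delimit walking cycles are easier to locate than on the raw mean.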

**Figure 4.** The procedure of data standardization. (**a**) Original format. (**b**) The length of the format is fixed at $d=87$. (**c**) The same sensing modalities of both feet are concatenated in the standard format.
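The standardization in Figure 4 can be sketched as follows; this is an illustrative reimplementation, with linear interpolation used as a simple stand-in for the paper's resampling step.

```python
import numpy as np

def to_standard_format(left, right, d=87):
    """Resample one walking cycle from each foot to a fixed length d and
    concatenate the two feet along the time axis (Figure 4b,c).
    `left` and `right` have shape (T_i, C): T_i samples of C channels of
    one sensing modality. Returns an array of shape (2*d, C)."""
    def resample(cycle):
        T, C = cycle.shape
        src = np.linspace(0.0, 1.0, T)    # original time grid
        dst = np.linspace(0.0, 1.0, d)    # fixed-length time grid
        return np.stack([np.interp(dst, src, cycle[:, c]) for c in range(C)],
                        axis=1)
    return np.concatenate([resample(left), resample(right)], axis=0)
```

Because every cycle is mapped to the same length $d$, cycles of different durations become directly comparable inputs for the networks.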

**Figure 7.** t-SNE plots of the output of the fully connected layer with 256 units in the CNN and RNN architectures. (**a**) MCCV (30%); (**b**) Sub-MCCV (50%); (**c**) MCCV (50%).

**Figure 8.** Identification accuracies of the proposed method using tri-modal sensing for different k-values. (**a**) MCCV (30%); (**b**) Sub-MCCV (50%); (**c**) MCCV (50%).

**Figure 9.** Identification accuracies of the proposed method using bi-modal sensing for different k-values. (**a**) Pressure-Acceleration; (**b**) Pressure-Rotation; (**c**) Acceleration-Rotation.

**Figure 10.** Identification accuracies of the proposed method using uni-modal sensing for different k-values. (**a**) Pressure; (**b**) Acceleration; (**c**) Rotation.

**Figure 11.** Identification accuracies of the RNN for two values of the hyper-parameter $d$ ($d=87$ and $d=43$), using tri-modal sensing for different k-values. (**a**) MCCV (30%); (**b**) Sub-MCCV (50%); (**c**) MCCV (50%).

**Table 1.** The total number of samples for different k-values. MCCV stands for Monte Carlo cross-validation.

| k | # of Samples | MCCV (30%) Train | MCCV (30%) Test | Sub-MCCV (50%) Train | Sub-MCCV (50%) Test | MCCV (50%) Train | MCCV (50%) Test |
|---|---|---|---|---|---|---|---|
| 1 | 4750 | 3325 | 1425 | 2000 | 2000 | 2375 | 2375 |
| 2 | 2368 | 1658 | 710 | 1000 | 1000 | 1184 | 1184 |
| 3 | 1570 | 1099 | 471 | 600 | 600 | 785 | 785 |
| 4 | 1176 | 823 | 353 | 500 | 500 | 588 | 588 |
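The MCCV splits in Table 1 draw the test set uniformly at random (30% or 50% of the samples) and train on the rest, repeating the draw across rounds. A minimal sketch, with the function name and seed handling being illustrative choices:

```python
import random

def mccv_split(n_samples, test_fraction, seed=0):
    """One round of Monte Carlo cross-validation: a random test_fraction
    of the indices becomes the test set; the remainder is the train set."""
    rng = random.Random(seed)             # fresh seed per round
    idx = list(range(n_samples))
    rng.shuffle(idx)
    n_test = round(n_samples * test_fraction)
    return idx[n_test:], idx[:n_test]     # (train indices, test indices)

# k = 1 in Table 1: 4750 samples, 30% test -> 3325 train / 1425 test
train_idx, test_idx = mccv_split(4750, 0.30)
```

Repeating this with different seeds and averaging the resulting accuracies gives the MCCV figures reported in Tables 2 and 3.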

**Table 2.** Identification accuracies of the proposed method using tri-modal sensing (pressure, acceleration, and rotation) for different k-values.

| Validation | k | ${\mathcal{M}}_{\mathit{cnn}}$ | ${\mathcal{M}}_{\mathit{rnn}}$ | ${\mathcal{M}}_{\mathit{ens}}$ |
|---|---|---|---|---|
| MCCV (30%) | 1 | 0.9896 | 0.9786 | 0.9922 |
| MCCV (30%) | 2 | 0.9918 | 0.9676 | 0.9940 |
| MCCV (30%) | 3 | 0.9928 | 0.9456 | 0.9942 |
| MCCV (30%) | 4 | 0.9947 | 0.9395 | 0.9950 |
| Sub-MCCV (50%) | 1 | 0.9888 | 0.9754 | 0.9914 |
| Sub-MCCV (50%) | 2 | 0.9912 | 0.9516 | 0.9921 |
| Sub-MCCV (50%) | 3 | 0.9915 | 0.9190 | 0.9930 |
| Sub-MCCV (50%) | 4 | 0.9935 | 0.9056 | 0.9946 |
| MCCV (50%) | 1 | 0.9892 | 0.9776 | 0.9926 |
| MCCV (50%) | 2 | 0.9929 | 0.9606 | 0.9937 |
| MCCV (50%) | 3 | 0.9934 | 0.9378 | 0.9940 |
| MCCV (50%) | 4 | 0.9949 | 0.9253 | 0.9958 |
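The ${\mathcal{M}}_{\mathit{ens}}$ column in Table 2 combines the CNN and RNN predictions; an averaging ensemble over per-class probabilities can be sketched as below. The unweighted mean is an assumption about the exact combination rule.

```python
import numpy as np

def ensemble_predict(prob_cnn, prob_rnn):
    """Average the per-class probabilities of the two networks and
    predict the class with the highest mean probability.
    Both inputs have shape (N, num_classes)."""
    avg = (prob_cnn + prob_rnn) / 2.0
    return avg.argmax(axis=1)

# The CNN and RNN disagree, but the averaged distribution favors class 0.
p_cnn = np.array([[0.7, 0.2, 0.1]])
p_rnn = np.array([[0.3, 0.6, 0.1]])
pred = ensemble_predict(p_cnn, p_rnn)   # mean = [0.5, 0.4, 0.1]
```

Averaging lets the stronger CNN stabilize the RNN's weaker predictions, consistent with ${\mathcal{M}}_{\mathit{ens}}$ matching or exceeding both individual models in nearly every row of Table 2.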

**Table 3.** Identification accuracies of the proposed method using bi-modal sensing (combinations of two of pressure, acceleration, and rotation) for k = 1.

| Validation | Sensing | ${\mathcal{M}}_{\mathit{cnn}}$ | ${\mathcal{M}}_{\mathit{rnn}}$ | ${\mathcal{M}}_{\mathit{ens}}$ |
|---|---|---|---|---|
| MCCV (30%) | ${\mathbf{x}}_{p}\left(t\right),{\mathbf{x}}_{a}\left(t\right)$ | 0.9888 | 0.9762 | 0.9919 |
| MCCV (30%) | ${\mathbf{x}}_{a}\left(t\right),{\mathbf{x}}_{r}\left(t\right)$ | 0.9823 | 0.9512 | 0.9853 |
| MCCV (30%) | ${\mathbf{x}}_{r}\left(t\right),{\mathbf{x}}_{p}\left(t\right)$ | 0.9886 | 0.9732 | 0.9916 |
| Sub-MCCV (50%) | ${\mathbf{x}}_{p}\left(t\right),{\mathbf{x}}_{a}\left(t\right)$ | 0.9893 | 0.9651 | 0.9917 |
| Sub-MCCV (50%) | ${\mathbf{x}}_{a}\left(t\right),{\mathbf{x}}_{r}\left(t\right)$ | 0.9825 | 0.9412 | 0.9836 |
| Sub-MCCV (50%) | ${\mathbf{x}}_{r}\left(t\right),{\mathbf{x}}_{p}\left(t\right)$ | 0.9887 | 0.9603 | 0.9908 |
| MCCV (50%) | ${\mathbf{x}}_{p}\left(t\right),{\mathbf{x}}_{a}\left(t\right)$ | 0.9884 | 0.9697 | 0.9910 |
| MCCV (50%) | ${\mathbf{x}}_{a}\left(t\right),{\mathbf{x}}_{r}\left(t\right)$ | 0.9831 | 0.9459 | 0.9844 |
| MCCV (50%) | ${\mathbf{x}}_{r}\left(t\right),{\mathbf{x}}_{p}\left(t\right)$ | 0.9893 | 0.9688 | 0.9918 |

**Table 4.** Identification accuracies of the proposed method using uni-modal sensing (pressure, acceleration, or rotation) for k = 1.

| Validation | Sensing | ${\mathcal{M}}_{\mathit{cnn}}$ | ${\mathcal{M}}_{\mathit{rnn}}$ | ${\mathcal{M}}_{\mathit{ens}}$ |
|---|---|---|---|---|
| MCCV (30%) | ${\mathbf{x}}_{p}\left(t\right)$ | 0.9847 | 0.9673 | 0.9885 |
| MCCV (30%) | ${\mathbf{x}}_{a}\left(t\right)$ | 0.9808 | 0.9086 | 0.9798 |
| MCCV (30%) | ${\mathbf{x}}_{r}\left(t\right)$ | 0.9821 | 0.9173 | 0.9824 |
| Sub-MCCV (50%) | ${\mathbf{x}}_{p}\left(t\right)$ | 0.9818 | 0.9559 | 0.9867 |
| Sub-MCCV (50%) | ${\mathbf{x}}_{a}\left(t\right)$ | 0.9808 | 0.8860 | 0.9789 |
| Sub-MCCV (50%) | ${\mathbf{x}}_{r}\left(t\right)$ | 0.9796 | 0.9004 | 0.9800 |
| MCCV (50%) | ${\mathbf{x}}_{p}\left(t\right)$ | 0.9777 | 0.9580 | 0.9851 |
| MCCV (50%) | ${\mathbf{x}}_{a}\left(t\right)$ | 0.9801 | 0.9139 | 0.9796 |
| MCCV (50%) | ${\mathbf{x}}_{r}\left(t\right)$ | 0.9808 | 0.9130 | 0.9808 |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Moon, J.; Minaya, N.H.; Le, N.A.; Park, H.-C.; Choi, S.-I. Can Ensemble Deep Learning Identify People by Their Gait Using Data Collected from Multi-Modal Sensors in Their Insole? *Sensors* **2020**, *20*, 4001. https://doi.org/10.3390/s20144001