Article

Detection of Walking Features Using Mobile Health and Deep Learning

Sungchul Lee 1 and Hyunhwa Lee 2
1 Division of Computer Science and Engineering, Sun Moon University, Asan-si 31460, Korea
2 School of Nursing, University of Nevada, Las Vegas, NV 89154, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(11), 5444; https://doi.org/10.3390/app12115444
Submission received: 27 April 2022 / Revised: 17 May 2022 / Accepted: 23 May 2022 / Published: 27 May 2022

Abstract

This study identifies seven human subjects’ walking features by training a deep learning model with motion sensor data. Using the proposed mobile health application, developed to collect sensor data from an Android device, we collected walking data from seven subjects, six with a history of mild traumatic brain injury (mTBI) and one without. The sensors measure acceleration in m/s² along the X, Y, and Z axes (accelerometer); the rate of rotation around each spatial axis (gyroscope); and the nine parameters of a rotation vector, together with the rotation vector components along the X, Y, and Z axes (software-based rotation vector sensor). We built a deep learning model using TensorFlow and Keras to identify the walking features of the seven subjects. The data were grouped into the following categories: accelerometer (X, Y, Z); gyroscope (X, Y, Z); rotation (X, Y, Z); rotation vector (nine parameters); and a combination of all of the preceding categories. Each data set was then used to train and test the deep learning model. According to the Keras evaluation function, the model trained with the rotation vector data classifies the walking characteristics of the subjects with 99.5% accuracy, and the model trained with all data sets combined classifies them with 99.9% accuracy.

1. Introduction

Because the human body is composed of many different biomechanical systems (e.g., multiple joints, diverse sensorimotor responses, muscular and skeletal interconnectivity), individuals have an inherently unstable biomechanical structure. For individuals with a traumatic brain injury (TBI), the brain injury may further exacerbate problems with this unstable structure. Mild TBI, or mild traumatic brain injury (mTBI), also contributes to several body balance problems, because unspecified areas of the injured brain may negatively influence the center of gravity (COG) of the human body. In fact, individuals with even a past history of mTBI can experience long-term difficulties in sensorimotor functions, including visual function or eye movements and muscle imbalance, which may cause limited mobility and impaired body balance [1,2,3,4,5]. It is therefore important to understand the challenges associated with maintaining daily balance, such as walking balance, among those with a history of mTBI. Previous research indicates that mTBI increases the range of COG movement [6,7], making walking different from that of individuals without mTBI, and COG is in turn related to walking balance [8].
Previous studies of human balance and related health problems have used a variety of measurement tools, such as force plates for postural stability testing [6], motion capture systems for gait analysis [9], visual motor equipment to measure eye movements [10], and body sensors to measure gait patterns [8,11]. The major limitation of these tools, however, is that they require affected individuals to visit a clinical or research laboratory for assessment, which makes them unsuitable for measuring human body balance in a mobile health (mHealth) context.
mHealth methodologies [11,12] are inexpensive and widely available, and they are used for monitoring gait disturbances associated with chronic diseases [13]. Additionally, because it runs on a smartphone, mHealth is a convenient tool for providing real-time assessment and feedback. Real-time assessment and feedback [14] can both detect and help prevent further functional deterioration, as well as save affected individuals multiple trips to clinics [15,16].
In this study, smartphone sensors collected data on leisurely walking for subsequent analysis using deep learning. Data for six mTBI subjects and one non-TBI subject were collected using an mHealth application that accesses two hardware sensors and one software sensor on a smartphone worn by each subject. The on-board accelerometer recorded acceleration in m/s² in the X, Y, and Z directions, and the gyroscope measured the rate of rotation around each spatial axis. In addition, nine parameters associated with a rotation vector, together with the rotation vector components along the X, Y, and Z axes, were recorded using the software-based rotation vector sensor available on the Android device. Each subject walked a round trip of about 440 feet (ca. 220 feet each way). The walking data for the seven subjects were imported into a deep learning model built with TensorFlow [17] and Keras [18] in R and grouped into several data sets to compare classification accuracy. Training with accelerometer data yielded 50.5% accuracy in classifying the subjects’ walking; training with gyroscope data yielded 48.3%; training with the rotation vector components along the X, Y, and Z axes yielded 77.1%; training with the nine parameters of the rotation vector yielded 99.5%; and training with all data sets combined yielded 99.9% accuracy.

2. Experiment Methods

2.1. Mobile Health Application

The mHealth application was developed to measure and record acceleration and rotation data in real time using an Android smartphone’s motion sensors [19,20]. Figure 1a shows the mHealth application in use during data collection. The application was developed in Android Studio for Android mobile platforms with software development kit (SDK) versions greater than 21 [21]. It was installed on a Samsung Galaxy S8 running the Android 7.0 mobile operating system.
The smartphone measures rotation and acceleration through an angle around each axis (X, Y, Z), as shown in Figure 1a. The mHealth application collects rotation and acceleration data at 10 ms intervals (100 samples per second) using the on-board accelerometer, gyroscope, and software-based rotation vector sensor.
The data are then saved to an SQLite database and CSV (comma-separated values) files stored on the smartphone. The application (Figure 1b) also collects subjects’ background information prior to real-time data collection, including any head, back, or leg pain; prior concussion experience; and gender, race, height, and weight.
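For reference, the saved CSV files can be loaded directly into R for the analysis described in Section 3. The following is a minimal sketch; the file name and exact column names are illustrative assumptions, not the application’s actual export format.

```r
# Load one subject's walking data exported by the mHealth application.
# File name and column names are illustrative assumptions.
walk <- read.csv("A01_walking.csv")

# Expected columns (assumed): a timestamp plus accX, accY, accZ,
# gyroX, gyroY, gyroZ, rotX, rotY, rotZ, and the nine
# rotation vector parameters.
str(walk)
head(walk)
```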

2.2. Data Collection

A total of seven subjects were tested, and their walking data were collected with the mHealth application. Table 1 describes their background information, including age, race, gender, number of mTBI injuries, year of last injury, weight, and height. The subjects’ ages ranged from 19 to 35 years. Among the six subjects who reported a history of mTBI, three (50%) had more than one mTBI, and two were injured within the past year. One subject did not have any history of mTBI.
During the experiment, each subject wore a smartphone in the pocket of a waistband located at the center of the front of the body, as shown in Figure 2a. The smartphone screen faced the direction of travel, with the top of the phone turned to the right side of the body. The subjects walked along a straight path at a comfortable pace in an indoor hallway of a building. They were instructed to walk leisurely to a destination ca. 220 feet from the starting location and then return, as shown in Figure 2b.
Each subject thus walked a round trip of ca. 440 feet. Data collection in the mHealth application began when the subject touched the “begin” button on the phone screen at the start of the trial and was terminated when the subject returned to the starting location.

3. Analysis and Experimental Results

We analyzed the subjects’ acceleration in m/s²; the rate of rotation around the spatial X, Y, and Z axes; the nine parameters of the rotation vector; and the rotation vector components along the X, Y, and Z axes. Table 2 shows the resulting statistics for one of the subjects in terms of acceleration (accX, accY, accZ); the rate of rotation as measured by the on-board gyroscope (gyroX, gyroY, gyroZ); and the rotation vector components assessed by the software-based rotation vector sensor (rotX, rotY, rotZ). Table 2 lists the minimum, maximum, mean, median, variance (var), standard deviation (SD), and quartiles (Q). For example, A01 shows higher acceleration downward (X axis in Figure 2a) than upward (−X axis in Figure 2a) due to the effect of gravity. In addition, although subjects intended to walk in a straight line (Z axis in Figure 2a), some deviated to the left (−Y axis in Figure 2a) or right (Y axis in Figure 2a) of the target path. Statistical summaries of all subjects’ walking data are given in Appendix A, and Appendix B shows scatter plots of rotation X, Y, and Z for the first 30 s of walking data.
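As an illustration, the statistics in Table 2 can be reproduced in base R along the following lines. This is a sketch assuming the data frame `walk` from Section 2.1; note that in Table 2 the reported Q1 coincides with the minimum and Q3 with the median, so the quartile columns appear to be the 0th, 25th, 50th, and 75th percentiles.

```r
# Per-column summary statistics in the format of Table 2.
# 'walk' is assumed to contain the nine hardware-sensor columns.
sensor_cols <- c("accX", "accY", "accZ",
                 "gyroX", "gyroY", "gyroZ",
                 "rotX", "rotY", "rotZ")

stats <- t(sapply(walk[sensor_cols], function(x) {
  # Q1-Q4 as the 0th, 25th, 50th, and 75th percentiles,
  # matching the pattern of the published values (assumption).
  q <- quantile(x, probs = c(0, 0.25, 0.50, 0.75))
  c(Min = min(x), Max = max(x), Mean = mean(x), Med = median(x),
    Var = var(x), SD = sd(x),
    Q1 = q[[1]], Q2 = q[[2]], Q3 = q[[3]], Q4 = q[[4]])
}))
round(stats, 3)
```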
Data corresponding to the first 10 s (1000 rows) and the last 10 s (1000 rows) of each subject’s session were removed to account for start- and stop-button activation within the mHealth application. The data were then analyzed with a deep learning model developed with TensorFlow and Keras in R, as shown in Figure 3. The model uses one input layer, three hidden layers, and one output layer. The output layer uses the softmax activation function [22], and the input and hidden layers use the rectified linear unit (ReLU) activation function [23]. Figure 4 shows the details of each layer. The input and hidden layers each contain 256 neurons (a value chosen arbitrarily), and the output layer contains 7 neurons for classifying the seven subjects. The dropout rate [24] is 50% for the input and hidden layers to avoid overfitting to the training data set (x_data_train, y_data_train). For initializing the neuron weights, the model uses the Glorot normal initializer (also called the Xavier normal initializer) [25].
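A minimal sketch of this architecture with the keras R package follows, under the assumptions stated above; the original code in Figure 3a may differ in detail.

```r
library(keras)

n_features <- ncol(x_data_train)  # number of sensor input columns

# One input layer and three hidden layers of 256 ReLU neurons each,
# 50% dropout after each, Glorot (Xavier) normal weight initialization,
# and a 7-neuron softmax output layer for the seven subjects.
model <- keras_model_sequential() %>%
  layer_dense(units = 256, activation = "relu", input_shape = n_features,
              kernel_initializer = initializer_glorot_normal()) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 256, activation = "relu",
              kernel_initializer = initializer_glorot_normal()) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 256, activation = "relu",
              kernel_initializer = initializer_glorot_normal()) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 256, activation = "relu",
              kernel_initializer = initializer_glorot_normal()) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 7, activation = "softmax")
```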
Figure 3b shows the compilation of the deep learning model. The model is compiled with the categorical cross-entropy loss function [17,18] to classify the seven categories and the Adam algorithm [26] to minimize the cost of the model. For training, we used the subject IDs as labels (y_data) and the time series data (accX, accY, accZ, gyroX, gyroY, gyroZ, rotX, rotY, rotZ, and the nine parameters of the rotation vector) as input data (x_data). The data were split 70:30 into a training set (x_data_train, y_data_train) and a test set (x_data_test, y_data_test), so approximately 100,000 rows were used for training and approximately 43,000 rows for testing. The model was trained on the training set for 300 epochs, as shown in Figure 3c. Figure 5 shows the loss and accuracy of the model during training, where the X axis represents the epoch and the Y axis represents the optimizer loss and the evaluation accuracy. To compare accuracy, the model was trained with five different data sets: acceleration in the X, Y, and Z directions (Acc X, Y, Z); gyroscope data (Gyro X, Y, Z); the rotation vector components along the X, Y, and Z axes (Rot X, Y, Z); the nine parameters of the rotation vector (Rot vector—9); and all of the preceding data sets combined. With the Acc data (Figure 5a) and Gyro data (Figure 5b), the model gradually minimized loss and increased accuracy over roughly the first 50 epochs; however, beyond 100 epochs there were few significant gains in accuracy or reductions in loss.
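In the keras R package, the compile and training steps described above might look as follows; this is a sketch assuming y_data_train has been one-hot encoded (e.g., with `to_categorical()`).

```r
# Categorical cross-entropy loss with the Adam optimizer, as described.
model %>% compile(
  loss = "categorical_crossentropy",
  optimizer = optimizer_adam(),
  metrics = "accuracy"
)

# Train for 300 epochs on the ~100,000-row training split (Figure 3c).
history <- model %>% fit(x_data_train, y_data_train, epochs = 300)

# Loss and accuracy curves over epochs, as in Figure 5.
plot(history)
```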
The models trained with the Rot data (Figure 5c), the rotation vector data (Figure 5d), and all data sets combined (Figure 5e) each showed significant decreases in loss and similarly significant increases in accuracy. The rotation vector (Figure 5d) and the combination of all data sets (Figure 5e) reached up to 100% accuracy during training, while accuracy for the Rot data (Figure 5c) increased to approximately 80%. Among the data sets, the model performed best when trained with the nine-parameter rotation vector (Figure 5d) and with all data sets combined (Figure 5e).
The trained models were then evaluated with the test data set (x_data_test, y_data_test) using the Keras code in Figure 3d.
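In the keras R package, this evaluation step reduces to a single call; a sketch assuming the variables defined above:

```r
# Returns the loss and accuracy on the held-out 30% test split,
# corresponding to the values reported in Table 3.
model %>% evaluate(x_data_test, y_data_test)
```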
Table 3 shows the results of this evaluation. The model trained with acceleration data (Acc X, Y, Z) classifies the subjects’ walking characteristics with 50.5% accuracy on the test data (x_data_test). The model trained with gyroscope data (Gyro X, Y, Z) has an accuracy of 48.3%. The model trained with the rotation vector components along the X, Y, and Z axes (Rot X, Y, Z) reaches 77.1% accuracy; because Rot X, Y, Z is calculated from the nine parameters of the rotation vector, this model achieves a higher accuracy than the models trained with the accelerometer and gyroscope data sets. The model trained with the nine parameters of the rotation vector (Rot vector—9) achieves 99.5% accuracy, and the model trained with all data sets achieves 99.9% accuracy in matching each subject’s walking features to the correct subject (y_data_test) on the test data (x_data_test). According to this evaluation, the nine parameters of the rotation vector were the most effective single data set for training the deep learning model to classify the walking characteristics of the seven subjects.

4. Conclusions and Future Work

We analyzed the walking characteristics of six mTBI subjects and one non-mTBI subject to identify their walking features by training a deep learning model with multiple data sets. We collected the walking data using the mHealth application, originally developed to measure acceleration in m/s² in the X, Y, and Z directions from the accelerometer; the rate of rotation around each spatial axis from the gyroscope; and the nine parameters of the rotation vector, together with the rotation vector components along the X, Y, and Z axes, from the software-based rotation vector sensor on the Android device. The sensor data were then used to train a deep learning model developed with TensorFlow and Keras in R. According to the model evaluations, the nine parameters of the rotation vector were the most effective single data set for training the deep learning model to accurately identify the walking characteristics of the seven subjects (99.5% accuracy).
With clinical data from mTBI and non-mTBI subjects, we will build a deep learning model that can predict (or be matched with) different levels of post-injury symptom severity on the Glasgow Coma Scale [27]. We hope that this model and the mHealth application together will contribute to the early detection of acute decline in sensorimotor function after mTBI based on patient walking data. Additionally, the mHealth application will be useful for examining and screening clinical symptom progress during post-mTBI recovery [28].

Author Contributions

Conceptualization, S.L.; methodology, S.L.; software, S.L.; validation, S.L.; formal analysis, S.L.; investigation, S.L. and H.L.; resources, H.L. and S.L.; data curation, H.L.; writing—original draft preparation, S.L.; writing—review and editing, H.L.; visualization, S.L.; supervision, H.L.; project administration, H.L.; funding acquisition, H.L. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Sun Moon University Research Grant of 2021.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Biomedical Institutional Review Board of the University of Nevada, Las Vegas (UNLV) (#975928-9, approved on 20 May 2019; #1386038-7, approved on 21 August 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the subjects to publish this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Statistical information of acceleration, the rate of rotation around each spatial axis from the gyroscope, and the rotation vector components along the X, Y, and Z axes (Q denotes quantiles).
| ID | Sensor | Min | Max | Mean | Med | Var | SD | Q1 | Q2 | Q3 | Q4 |
|----|--------|-----|-----|------|-----|-----|----|----|----|----|----|
| A01 | accX | 4.111 | 17.638 | 9.459 | 8.731 | 4.848 | 2.202 | 4.111 | 7.844 | 8.731 | 10.873 |
| A01 | accY | −5.259 | 5.494 | −0.276 | −0.309 | 2.207 | 1.486 | −5.259 | −1.286 | −0.309 | 0.692 |
| A01 | accZ | −6.954 | 7.104 | −0.174 | 0.230 | 3.122 | 1.767 | −6.954 | −1.101 | 0.230 | 1.105 |
| A01 | gyroX | −1.185 | 2.714 | 0.014 | 0.042 | 0.190 | 0.436 | −1.185 | −0.315 | 0.042 | 0.344 |
| A01 | gyroY | −1.260 | 1.510 | 0.002 | −0.033 | 0.094 | 0.306 | −1.260 | −0.213 | −0.033 | 0.181 |
| A01 | gyroZ | −1.115 | 1.229 | 0.002 | −0.008 | 0.103 | 0.322 | −1.115 | −0.209 | −0.008 | 0.200 |
| A01 | rotX | −0.756 | 0.749 | 0.033 | −0.412 | 0.260 | 0.510 | −0.756 | −0.466 | −0.412 | 0.552 |
| A01 | rotY | −0.600 | 0.011 | −0.493 | −0.493 | 0.004 | 0.060 | −0.600 | −0.544 | −0.493 | −0.444 |
| A01 | rotZ | −0.662 | 0.672 | 0.056 | −0.393 | 0.251 | 0.501 | −0.662 | −0.433 | −0.393 | 0.567 |
| A02 | accX | 3.429 | 20.308 | 9.247 | 9.126 | 8.345 | 2.889 | 3.429 | 7.083 | 9.126 | 11.076 |
| A02 | accY | −6.690 | 5.626 | −0.130 | −0.074 | 2.734 | 1.653 | −6.690 | −1.103 | −0.074 | 1.003 |
| A02 | accZ | −8.901 | 3.527 | −2.067 | −1.641 | 4.072 | 2.018 | −8.901 | −3.553 | −1.641 | −0.565 |
| A02 | gyroX | −0.799 | 3.170 | 0.016 | 0.003 | 0.076 | 0.276 | −0.799 | −0.141 | 0.003 | 0.150 |
| A02 | gyroY | −0.936 | 1.213 | −0.002 | 0.029 | 0.112 | 0.335 | −0.936 | −0.271 | 0.029 | 0.226 |
| A02 | gyroZ | −0.994 | 0.845 | −0.004 | 0.010 | 0.079 | 0.281 | −0.994 | −0.241 | 0.010 | 0.229 |
| A02 | rotX | −0.783 | 0.783 | 0.079 | −0.430 | 0.346 | 0.589 | −0.783 | −0.495 | −0.430 | 0.682 |
| A02 | rotY | −0.703 | 0.013 | −0.493 | −0.555 | 0.014 | 0.116 | −0.703 | −0.605 | −0.555 | −0.379 |
| A02 | rotZ | −0.622 | 0.623 | 0.067 | −0.345 | 0.220 | 0.469 | −0.622 | −0.389 | −0.345 | 0.548 |
| A04 | accX | 3.520 | 18.975 | 9.341 | 8.983 | 5.798 | 2.408 | 3.520 | 7.494 | 8.983 | 10.952 |
| A04 | accY | −6.312 | 7.734 | 0.226 | 0.397 | 5.936 | 2.436 | −6.312 | −1.764 | 0.397 | 2.075 |
| A04 | accZ | −12.249 | 3.984 | −1.193 | −0.390 | 8.057 | 2.838 | −12.249 | −1.809 | −0.390 | 0.592 |
| A04 | gyroX | −1.867 | 2.629 | 0.015 | 0.006 | 0.482 | 0.694 | −1.867 | −0.576 | 0.006 | 0.574 |
| A04 | gyroY | −1.911 | 2.213 | 0.004 | −0.014 | 0.274 | 0.524 | −1.911 | −0.343 | −0.014 | 0.320 |
| A04 | gyroZ | −1.087 | 1.237 | −0.004 | 0.005 | 0.146 | 0.381 | −1.087 | −0.257 | 0.005 | 0.230 |
| A04 | rotX | −0.731 | 0.729 | 0.027 | −0.427 | 0.353 | 0.594 | −0.731 | −0.566 | −0.427 | 0.625 |
| A04 | rotY | −0.610 | 0.061 | −0.447 | −0.449 | 0.004 | 0.059 | −0.610 | −0.489 | −0.449 | −0.406 |
| A04 | rotZ | −0.696 | 0.691 | 0.011 | −0.427 | 0.280 | 0.529 | −0.696 | −0.517 | −0.427 | 0.543 |
| C17 | accX | 2.249 | 17.106 | 9.411 | 9.174 | 5.381 | 2.320 | 2.249 | 7.679 | 9.174 | 11.153 |
| C17 | accY | −4.707 | 6.073 | 0.096 | 0.206 | 1.790 | 1.338 | −4.707 | −0.730 | 0.206 | 0.907 |
| C17 | accZ | −8.789 | 6.298 | 1.599 | 1.819 | 4.147 | 2.036 | −8.789 | 0.715 | 1.819 | 3.005 |
| C17 | gyroX | −1.485 | 3.033 | 0.015 | 0.014 | 0.168 | 0.410 | −1.485 | −0.263 | 0.014 | 0.267 |
| C17 | gyroY | −1.246 | 1.663 | 0.002 | −0.039 | 0.185 | 0.430 | −1.246 | −0.300 | −0.039 | 0.228 |
| C17 | gyroZ | −1.053 | 1.275 | 0.011 | −0.021 | 0.080 | 0.283 | −1.053 | −0.200 | −0.021 | 0.202 |
| C17 | rotX | −0.638 | 0.658 | 0.047 | −0.292 | 0.180 | 0.425 | −0.638 | −0.353 | −0.292 | 0.495 |
| C17 | rotY | −0.610 | 0.024 | −0.477 | −0.504 | 0.006 | 0.076 | −0.610 | −0.545 | −0.504 | −0.407 |
| C17 | rotZ | −0.776 | 0.767 | 0.057 | −0.362 | 0.255 | 0.505 | −0.776 | −0.418 | −0.362 | 0.594 |
| C20 | accX | 3.271 | 15.704 | 9.413 | 9.617 | 4.846 | 2.201 | 3.271 | 7.698 | 9.617 | 11.002 |
| C20 | accY | −3.575 | 4.934 | 0.435 | 0.447 | 1.728 | 1.314 | −3.575 | −0.517 | 0.447 | 1.290 |
| C20 | accZ | −6.564 | 2.625 | −0.880 | −0.340 | 2.924 | 1.710 | −6.564 | −1.711 | −0.340 | 0.297 |
| C20 | gyroX | −0.955 | 2.356 | 0.016 | −0.013 | 0.103 | 0.321 | −0.955 | −0.208 | −0.013 | 0.211 |
| C20 | gyroY | −1.137 | 1.324 | 0.001 | −0.009 | 0.131 | 0.362 | −1.137 | −0.271 | −0.009 | 0.260 |
| C20 | gyroZ | −1.028 | 1.051 | −0.002 | 0.001 | 0.090 | 0.301 | −1.028 | −0.185 | 0.001 | 0.171 |
| C20 | rotX | −0.761 | 0.763 | 0.075 | −0.415 | 0.300 | 0.547 | −0.761 | −0.464 | −0.415 | 0.628 |
| C20 | rotY | −0.626 | 0.011 | −0.480 | −0.510 | 0.011 | 0.103 | −0.626 | −0.579 | −0.510 | −0.386 |
| C20 | rotZ | −0.664 | 0.651 | 0.050 | −0.403 | 0.251 | 0.501 | −0.664 | −0.443 | −0.403 | 0.557 |
| C21 | accX | 4.314 | 15.072 | 9.442 | 9.086 | 2.473 | 1.572 | 4.314 | 8.384 | 9.086 | 10.512 |
| C21 | accY | −3.994 | 5.240 | 0.578 | 0.500 | 1.523 | 1.234 | −3.994 | −0.220 | 0.500 | 1.371 |
| C21 | accZ | −6.882 | 5.154 | −0.264 | −0.029 | 2.322 | 1.524 | −6.882 | −0.957 | −0.029 | 0.684 |
| C21 | gyroX | −1.607 | 2.609 | 0.013 | 0.021 | 0.206 | 0.454 | −1.607 | −0.316 | 0.021 | 0.318 |
| C21 | gyroY | −1.276 | 1.426 | 0.005 | −0.016 | 0.066 | 0.256 | −1.276 | −0.164 | −0.016 | 0.166 |
| C21 | gyroZ | −0.763 | 0.650 | 0.002 | −0.003 | 0.039 | 0.197 | −0.763 | −0.134 | −0.003 | 0.156 |
| C21 | rotX | −0.727 | 0.727 | −0.007 | −0.449 | 0.247 | 0.497 | −0.727 | −0.496 | −0.449 | 0.500 |
| C21 | rotY | −0.604 | 0.077 | −0.516 | −0.517 | 0.001 | 0.032 | −0.604 | −0.533 | −0.517 | −0.502 |
| C21 | rotZ | −0.687 | 0.695 | −0.042 | −0.477 | 0.232 | 0.482 | −0.687 | −0.515 | −0.477 | 0.447 |
| CS03 | accX | 1.900 | 21.839 | 9.326 | 8.969 | 13.316 | 3.649 | 1.900 | 6.293 | 8.969 | 12.385 |
| CS03 | accY | −6.451 | 6.068 | −0.004 | 0.024 | 5.227 | 2.286 | −6.451 | −1.941 | 0.024 | 1.924 |
| CS03 | accZ | −15.912 | 7.004 | −1.262 | −0.502 | 9.513 | 3.084 | −15.912 | −2.084 | −0.502 | 0.646 |
| CS03 | gyroX | −2.138 | 3.812 | 0.019 | 0.075 | 0.746 | 0.864 | −2.138 | −0.750 | 0.075 | 0.754 |
| CS03 | gyroY | −2.553 | 2.288 | −0.005 | −0.046 | 0.192 | 0.438 | −2.553 | −0.272 | −0.046 | 0.243 |
| CS03 | gyroZ | −1.259 | 1.548 | −0.004 | 0.015 | 0.303 | 0.551 | −1.259 | −0.454 | 0.015 | 0.435 |
| CS03 | rotX | −0.718 | 0.718 | 0.051 | 0.507 | 0.323 | 0.568 | −0.718 | −0.515 | 0.507 | 0.616 |
| CS03 | rotY | −0.658 | 0.072 | −0.473 | −0.482 | 0.007 | 0.086 | −0.658 | −0.541 | −0.482 | −0.411 |
| CS03 | rotZ | −0.690 | 0.698 | 0.050 | 0.501 | 0.260 | 0.510 | −0.690 | −0.461 | 0.501 | 0.557 |

Appendix B

Scatter plots of rotation X, Y, and Z for the first 30 s of walking data.
[Scatter plots, three per subject, of rotation X vs. Y, rotation X vs. Z, and rotation Y vs. Z for subjects A01, A02, A04, C17, C20, C21, and CS03.]

References

1. Bruttini, C.; Esposti, R.; Bolzoni, F.; Vanotti, A.; Mariotti, C.; Cavallari, P. Temporal disruption of upper-limb anticipatory postural adjustments in cerebellar ataxic patients. Exp. Brain Res. 2015, 233, 197–203.
2. Lee, H.; Lee, S.; Salado, L.; Estrada, J.; White, J.; Muthukumar, V.; Lee, S.P.; Mohapatra, S. Proof-of-Concept Testing of a Real-Time mHealth Measure to Estimate Postural Control During Walking: A Potential Application for Mild Traumatic Brain Injuries. Asian/Pac. Isl. Nurs. J. 2018, 3, 175–189.
3. Montecchi, M.G.; Muratori, A.; Lombardi, F.; Morrone, E.; Briant, K. Trunk recovery scale: A new tool to measure posture control in patients with severe acquired brain injury. A study of the psychometric properties. Eur. J. Phys. Rehabil. Med. 2013, 49, 341–351.
4. Jurgens, R.; Nasios, G.; Becker, W. Vestibular optokinetic and cognitive contribution to the guidance of passive self-rotation toward instructed targets. Exp. Brain Res. 2003, 151, 90–107.
5. Lee, H.; Lee, S.; Black, I.; Salado, L.; Estrada, J.; Isla, K. Long-Term Impact of Mild Traumatic Brain Injuries on Multiple Functional Outcomes and Epigenetics: A Pilot Study with College Students. Appl. Sci. 2020, 10, 4131.
6. Degani, A.M.; Santos, M.M.; Leonard, C.T.; Rau, T.F.; Patel, S.A.; Mohapatra, S.; Danna-Dos-Santos, A. The effects of mild traumatic brain injury on postural control. Brain Inj. 2016, 31, 49–56.
7. Merchant-Borna, K.; Jones, C.M.; Janigro, M.; Wasserman, E.B.; Clark, R.A.; Bazarian, J.J. Evaluation of Nintendo Wii Balance Board as a Tool for Measuring Postural Stability After Sport-Related Concussion. J. Athl. Train. 2017, 52, 245–255.
8. Tsuruoka, M.; Tsuruoka, Y.; Shibasaki, R.; Yasuoka, Y. Spectral Analysis of Walking with Shoes and without Shoes. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 6125–6128.
9. Venugopalan, J.; Cheng, C.; Stokes, T.H.; Wang, M.D. Kinect-based rehabilitation system for patients with traumatic brain injury. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 4625–4628.
10. Heitger, M.H.; Jones, R.D.; Anderson, T.J. A new approach to predicting postconcussion syndrome after mild traumatic brain injury based upon eye movement function. In Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Vancouver, BC, Canada, 20–25 August 2008; pp. 3570–3573.
11. Petersen, E.; Zech, A.; Hamacher, D. Walking barefoot vs. with minimalist footwear—Influence on gait in younger and older adults. BMC Geriatr. 2020, 20, 88.
12. Patrick, K.; Griswold, W.G.; Raab, F.; Intille, S.S. Health and the mobile phone. Am. J. Prev. Med. 2008, 35, 177–181.
13. Juen, J.; Cheng, Q.; Prieto-Centurion, V.; Krishnan, J.A.; Schatz, B. Health monitors for chronic disease by gait analysis with mobile phones. Telemed. J. E-Health 2014, 20, 1035–1041.
14. Rosenbaum, S.B.; Lipton, M.L. Embracing chaos: The scope and importance of clinical and pathological heterogeneity in mTBI. Brain Imaging Behav. 2012, 6, 255–282.
15. Cancela, J.; Pastorino, M.; Tzallas, A.T.; Tsipouras, M.G.; Rigas, G.; Arredondo, M.T.; Fotiadis, D.I. Wearability assessment of a wearable system for Parkinson’s disease remote monitoring based on a body area network of sensors. Sensors 2014, 14, 17235–17255.
16. Lamont, R.M.; Daniel, H.L.; Payne, C.L.; Brauer, S.G. Accuracy of wearable physical activity trackers in people with Parkinson’s disease. Gait Posture 2018, 63, 104–108.
17. TensorFlow. TensorFlow Core v2.2.0. Available online: https://www.tensorflow.org/api_docs/python/tf/keras/losses/CategoricalCrossentropy (accessed on 1 May 2020).
18. Keras. Available online: https://keras.io/ (accessed on 1 May 2020).
19. Android Developers. Motion Sensors. Available online: https://developer.android.com/guide/topics/sensors/sensors_motion#sensors-motion-rotate (accessed on 10 April 2020).
20. Lee, S.; Hwang, E.; Kim, Y.; Demir, F.; Lee, H.; Mosher, J.J.; Jang, E.; Lim, K. Mobile Health App for Adolescents: Motion Sensor Data and Deep Learning Technique to Examine the Relationship between Obesity and Walking Patterns. Appl. Sci. 2022, 12, 850.
21. Android Developers. API Reference. Available online: https://developer.android.com/reference (accessed on 10 April 2020).
22. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006.
23. Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010.
24. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. System and Method for Addressing Overfitting in a Neural Network. U.S. Patent US9406017B2, 2 August 2016.
25. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), Sardinia, Italy, 13–15 May 2010.
26. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
27. Teasdale, G.; Jennett, B. Assessment of coma and impaired consciousness. A practical scale. Lancet 1974, 2, 81–84.
28. Bond, M.R. The stages of recovery from severe head injury with special reference to late outcome. Int. Rehabil. Med. 1979, 1, 155–159.
Figure 1. (a) Measuring sensor data during walking; accelerometer, gyroscope, and rotation data along the X, Y, and Z axes were collected. (b) Subject information; background information potentially related to walking was collected from the subjects.
Figure 2. (a) A subject wearing the smartphone. (b) XYZ-axis orientations of the smartphone during the walking balance test. The X axis points to the right (−) and left (+) side of the subject; the Y axis points up (−) and down (+); the Z axis points forward (+) and backward (−). Subjects wearing the smartphone walked about 440 feet in total.
Figure 3. (a) Deep learning model using TensorFlow and Keras. (b) Compiling the model. (c) Training the model. (d) Evaluating the model.
Figure 4. Deep learning model.
Figure 5. (a) Training the model with Acc XYZ. (b) Training the model with Gyro XYZ. (c) Training the model with Rot XYZ. (d) Training the model with Rot vector—9. (e) Training the model with all data sets.
Table 1. Human subjects’ general information.

| ID | Age | Race | Gender | # of mTBI Injuries | Year of Last Injury | Weight | Height |
|----|-----|------|--------|--------------------|---------------------|--------|--------|
| A01 | 23 | Asian | Female | 1 | 2019 | 127 | 5′1″ |
| A02 | 35 | Asian | Male | 5 | 2018 | 148 | 5′4″ |
| A04 | 20 | Asian | Female | 1 | 2015 | 117 | 4′11″ |
| C20 | 32 | White | Female | 3 | 2015 | 155 | 5′4″ |
| C21 | 19 | White | Female | 4 | 2019 | 132 | 5′8″ |
| C17 | 21 | White | Female | 1 | 2015 | 115 | 5′1″ |
| CS03 | 24 | White | Female | 0 | 0 | 115 | 5′2″ |
Table 2. Statistical information of subject A01’s data.

| ID | Sensor | Min | Max | Mean | Med | Var | SD | Q1 | Q2 | Q3 | Q4 |
|----|--------|-----|-----|------|-----|-----|----|----|----|----|----|
| A01 | accX | 4.111 | 17.638 | 9.459 | 8.731 | 4.848 | 2.202 | 4.111 | 7.844 | 8.731 | 10.873 |
| A01 | accY | −5.259 | 5.494 | −0.276 | −0.309 | 2.207 | 1.486 | −5.259 | −1.286 | −0.309 | 0.692 |
| A01 | accZ | −6.954 | 7.104 | −0.174 | 0.230 | 3.122 | 1.767 | −6.954 | −1.101 | 0.230 | 1.105 |
| A01 | gyroX | −1.185 | 2.714 | 0.014 | 0.042 | 0.190 | 0.436 | −1.185 | −0.315 | 0.042 | 0.344 |
| A01 | gyroY | −1.260 | 1.510 | 0.002 | −0.033 | 0.094 | 0.306 | −1.260 | −0.213 | −0.033 | 0.181 |
| A01 | gyroZ | −1.115 | 1.229 | 0.002 | −0.008 | 0.103 | 0.322 | −1.115 | −0.209 | −0.008 | 0.200 |
| A01 | rotX | −0.756 | 0.749 | 0.033 | −0.412 | 0.260 | 0.510 | −0.756 | −0.466 | −0.412 | 0.552 |
| A01 | rotY | −0.600 | 0.011 | −0.493 | −0.493 | 0.004 | 0.060 | −0.600 | −0.544 | −0.493 | −0.444 |
| A01 | rotZ | −0.662 | 0.672 | 0.056 | −0.393 | 0.251 | 0.501 | −0.662 | −0.433 | −0.393 | 0.567 |
Table 3. Loss and accuracy of the model.

| Model Data Type | Loss | Accuracy |
|-----------------|------|----------|
| Acc X, Y, Z | 1.291 | 0.505 (50.5%) |
| Gyro X, Y, Z | 1.316 | 0.483 (48.3%) |
| Rot X, Y, Z | 0.540 | 0.771 (77.1%) |
| Rot vector—9 | 0.014 | 0.995 (99.5%) |
| All data sets | 0.010 | 0.999 (99.9%) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
