Accelerometer-Based Human Activity Recognition for Patient Monitoring Using a Deep Neural Network

Esther Fridriksdottir
Alberto G. Bonomi
Department of Patient Care & Measurements, Philips Research Laboratories, 5656AE Eindhoven, The Netherlands
Author to whom correspondence should be addressed.
Sensors 2020, 20(22), 6424;
Submission received: 21 September 2020 / Revised: 5 November 2020 / Accepted: 8 November 2020 / Published: 10 November 2020


The objective of this study was to investigate the accuracy of a Deep Neural Network (DNN) in recognizing activities typical for hospitalized patients. A data collection study was conducted with 20 healthy volunteers (10 males and 10 females, age = 43 ± 13 years) in a simulated hospital environment. A single triaxial accelerometer mounted on the trunk was used to measure body movement and recognize six activity types: lying in bed, upright posture, walking, wheelchair transport, stair ascent and stair descent. A DNN consisting of a three-layer convolutional neural network followed by a long short-term memory layer was developed for this classification problem. Additionally, features were extracted from the accelerometer data to train a support vector machine (SVM) classifier for comparison. The DNN reached 94.52% overall accuracy on the holdout dataset compared to 83.35% for the SVM classifier. In conclusion, a DNN is capable of recognizing types of physical activity in simulated hospital conditions using data captured by a single triaxial accelerometer. The method described may be used for continuous monitoring of patient activities during hospitalization to provide additional insights into the recovery process.

1. Introduction

Hospitalized patients spend most of their time inactive and lying in bed [1,2,3]. This is especially concerning for older patients, as physical inactivity following hospitalization can lead to functional decline [4]. On the other hand, stable or improved activity levels can serve as a valuable input for assessing patient discharge readiness [5]. Currently, monitoring the mobility of hospitalized patients relies largely on direct observation by caregivers. There are multiple tools available to assess the mobility and functional ability of patients, and the choice of assessment tool depends on feasibility and the clinician’s preference. These tools are mainly divided into two categories: self-report and performance-based measures [6]. Self-report questionnaires are easy to use and rapid, which makes them preferable to performance-based measures [6]. However, self-report is based on the patient’s perception of their mobility rather than actual performance, which can lead to misleading results due to recall bias and under-reporting [7]. On the other hand, performance-based measures, such as the timed “up and go” [8] or the 6-minute walk test (6-MWT) [9], provide objective evidence about the capabilities of the patient. The downside of performance-based measures is that setting up a test course requires equipment and measurements that can be time-consuming for the clinician.
Wearable accelerometers have the potential to act as powerful tools for objectively evaluating the health status of patients during recovery and for enabling evaluation of rehabilitation and other medical interventions [10]. Metrics such as the amount of time spent in an upright position and daily step count have been found to be related to length of hospital stay [5,11,12,13]. In addition, posture detection algorithms can provide important information for preventing pressure ulcer formation [14]. These metrics can be determined using human activity recognition (HAR) based on camera systems or wearable sensors such as accelerometers, gyroscopes, magnetometers and barometric pressure sensors. Processing signals from wearable sensors requires considerably less computational power than the camera-based approach and intrudes less on privacy. HAR using accelerometers embedded in smartwatches and smartphones as fitness trackers has recently become widely accepted in the consumer industry. However, step detection shows high error rates during slow walking and when using a walking aid [15,16], which remains one of the challenges for applying this technology in clinical settings, such as patient monitoring.
HAR can be achieved by extracting hand-crafted features from sensor data and training classifiers that learn patterns and relationships between features and class labels. This is the traditional approach using feature-based machine learning methods. Another approach that has recently become a popular choice for HAR is deep neural networks (DNNs). DNNs have advanced considerably in recent years and have brought about breakthroughs in fields such as visual object recognition and natural language processing [17]. The advantage of DNNs over conventional machine learning approaches is that they automatically extract high-level features from raw input, so hand-crafted feature extraction is not required.
This paper introduces a classification model that can recognize typical activities of patients during hospitalization using a single accelerometer mounted on the trunk. Two different approaches will be explored and compared: a deep learning approach and a feature-based machine learning approach. The aim is to investigate how accurately a deep learning algorithm can recognize activities typical for hospitalized patients using a single trunk-worn accelerometer.

2. Related Work

2.1. Methods Used for Human Activity Recognition (HAR)

2.1.1. Feature-Based Approaches

Several acceleration features have been found to be valuable for HAR. These features are often based on the frequency of the signal or the statistical distribution of signal values.
A few examples are: tilt angle estimates to discriminate between lying and upright [10], discrete wavelet transform or vertical velocity estimates to recognize sit-to-stand transitions [18,19] and signal power to distinguish static activities from dynamic [10,20,21,22]. Machine learning classifiers, such as random forests, k-nearest neighbors and support vector machines, are often used to process acceleration features and classify activity types [23,24,25,26,27,28,29,30].
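As an illustration of the tilt-angle idea mentioned above, the following NumPy sketch estimates trunk tilt from the mean acceleration of a segment; the function name and the example thresholds are hypothetical, and it assumes the sensor's y-axis is aligned with the caudo-cranial (head-to-foot) axis of the body, as in the setup used later in this study.

```python
import numpy as np

def trunk_tilt_deg(acc_segment):
    """Estimate trunk tilt (degrees) from a (n_samples, 3) accelerometer
    segment, assuming the y-axis lies along the caudo-cranial axis."""
    mean_axes = acc_segment.mean(axis=0)   # averaging suppresses body movement
    g = np.linalg.norm(mean_axes)          # gravity magnitude (~1 g at rest)
    # Angle between the body's longitudinal axis and the gravity vector
    return np.degrees(np.arccos(np.clip(mean_axes[1] / g, -1.0, 1.0)))

# Standing still: gravity mostly along y -> small tilt angle
standing = np.tile([0.05, 0.99, 0.02], (600, 1))
# Lying supine: gravity mostly along z -> tilt near 90 degrees
lying = np.tile([0.03, 0.05, 0.99], (600, 1))
```

A simple rule such as "tilt below some threshold means upright, above it means lying" would then discriminate the two postures.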

2.1.2. Deep Neural Networks (DNNs)

Two types of DNN structures have been shown to perform well in accelerometer-based HAR in the literature: convolutional neural networks (CNNs) and a type of recurrent neural network called long short-term memory (LSTM); combinations of the two have also been used. CNNs have been applied to sensor data for HAR with outstanding performance [31,32,33,34,35,36,37,38,39,40,41]. Previous studies proposed augmenting the feature vector extracted by a CNN with several statistical features [33,34]. Avilés-Cruz et al. [35] developed a three-headed CNN model for recognizing six activities. The three CNNs work in parallel, all receiving the same input signal coming from a triaxial accelerometer and a triaxial gyroscope. The feature maps of the three CNNs are flattened and concatenated before they are passed into a fully connected layer and, lastly, an output layer with a softmax activation.
Other studies have shown the relevance of using LSTM networks for HAR [36,42,43,44,45]. Lastly, a few studies have suggested augmenting CNNs with LSTM layers [37,46,47]. Karim et al. [37] proposed a model architecture in which a three-layer CNN and an LSTM layer extract features from sensor data in parallel. The resulting feature vectors are then concatenated and passed into a softmax classification layer. Others added LSTM layers after the CNN [46,47].

2.2. Human Activity Recognition (HAR) for Patient Monitoring

Using accelerometers for monitoring mobility of patients has been shown to be suitable for application in a clinical setting [10]. Aminian et al. [20] presented a rule-based HAR model comprising two accelerometers worn on the chest and thigh to classify lying, sitting, standing and dynamic activities. The model was tested on three hospitalized patients and compared to patient self-report. The authors found a significant discrepancy between the sensor outcome and patient self-report, which was explained by subjective bias of patients. The authors suggested that the thresholds used for classification should be adapted to each patient for improved performance. Rauen et al. [22] used a rule-based HAR model to monitor position changes of 30 immobile patients in early neuro-rehabilitation using triaxial accelerometers worn on the chest and thigh. The chest-worn accelerometer performed considerably better than the thigh sensor and was able to detect all position changes of the patients, and a few in addition to what was recorded in the standard written care documentation. The authors concluded that their approach was promising for monitoring position changes of immobile patients and evaluating their overall health.

3. Methods

3.1. Data Collection

Twenty healthy subjects, ten males and ten females (age = 43 ± 13 years, weight = 78 ± 15 kg, BMI = 26 ± 3 kg/m2), were recruited for the data collection. The inclusion criterion for volunteers was age in the range of 18–65 years. This age range was selected to represent the typical age of hospitalized patients, and a wide range of BMI was allowed for the participants in the study. Exclusion criteria were pregnancy, movement disorders, hypersensitivity to stainless steel and allergy to medical grade adhesives. According to the regulations in the Netherlands, the study was waived as non-medical research and therefore approval by an institutional review board was not needed. The Internal Committee for Biomedical Experiments at Philips approved the study. Informed consent was obtained from all volunteers.
A GENEActiv (Activinsights Ltd., Kimbolton, UK) sensor was attached to the left side of the trunk of subjects using a medical grade double-sided adhesive. Careful orientation of the device allowed alignment of the y-axis of the GENEActiv device to the caudo-cranial direction of the body, resulting in alignment of the x-axis and the z-axis along the medio-lateral and antero-posterior directions of the body, respectively. This accelerometer placement was used as it had proven to be an effective location for accelerometer-derived vital signs monitoring in patients. The sensor measured acceleration at a 100 Hz sampling frequency with 12 bit resolution in the range of ±8 g (1 g = 9.8 m/s2). Additional wearable sensors collected data during the measurement sessions; however, these data were not used for the classification models described in this paper. All sessions were recorded with a video camera for activity class label annotation purposes. Prior to the start of the data collection protocol, the accelerometers were all calibrated by orienting each sensor axis along the vertical direction and setting the average signal to 1 g.
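The calibration step described above amounts to a per-axis gain correction. A minimal sketch, with hypothetical names and readings, might look like:

```python
import numpy as np

def calibration_scale(static_vertical_acc):
    """Per-axis gain so that a static reading with the axis held along
    the vertical direction averages to 1 g.
    `static_vertical_acc`: 1-D array of raw samples (in g)."""
    return 1.0 / np.mean(static_vertical_acc)

raw = np.full(1000, 1.02)        # hypothetical uncalibrated readings (g)
scale = calibration_scale(raw)   # gain correcting the slight offset
calibrated = raw * scale         # now averages to 1 g
```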
The protocol consisted of various activities typical for hospitalized patients, such as lying in bed, eating and drinking, performing physiotherapy exercises and walking with and without walking aids at very slow to normal pace. A summary of the protocol can be found in Table 1 in chronological order. The order of activities was not randomized between subjects. The subjects were asked to act as patients in the hospital (i.e., move slowly) for all tasks except the Ebbeling test and the 6-MWT, since these tests were used to determine the subjects’ fitness and physical performance.

3.2. Data Preprocessing

Out of the 20 volunteers, one subject was not able to complete the 6-MWT and the walking up/down stairs activities due to fatigue. For another subject, the acceleration signals during the 6-MWT had notably larger peaks than for all other subjects, caused by one of the other devices used for data collection colliding with the GENEActiv sensor during this activity. The 6-MWT acceleration data for this subject was removed from the dataset because this periodic collision between devices is not expected during measurements outside the laboratory environment.
Activities were manually annotated and synced with the acceleration signal. Camera recordings were used to review the volunteers’ activities during the protocol and generate annotations of start and stop times for the various tasks. A single researcher reviewed the captured videos to generate activity label timestamps. The activities of the protocol were categorized into six activity classes: lying, upright (sitting or standing), walking, stair ascent, stair descent and wheelchair transport. The dataset was split randomly into training, validation and test subsets based on participant IDs. Data from 50% of the subjects was used for training, 25% for validation and 25% for final testing.
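A subject-wise split like the one described above can be sketched in a few lines; the function name and seed are illustrative, not taken from the paper.

```python
import random

def split_by_subject(subject_ids, seed=0):
    """Randomly split participant IDs into 50% train / 25% validation /
    25% test, so that no subject's data leaks across subsets."""
    ids = sorted(set(subject_ids))
    random.Random(seed).shuffle(ids)
    n_train, n_val = len(ids) // 2, len(ids) // 4
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

# 20 participants, as in this study
train_ids, val_ids, test_ids = split_by_subject(range(1, 21))
```

Splitting on participant IDs (rather than on segments) is what makes the holdout evaluation subject-independent.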
A fixed-length sliding-window technique, with the window length set to 6 s and 50% overlap, was used to segment the data. This segment length was chosen to make sure that relevant information was captured in each data segment during activities like slow walking and wheelchair transport: for slow walking, 6 s intervals guaranteed the presence of at least two steps, while for slow wheelchair transport the movements were often repetitive with a period of 3–4 s. Labels were assigned to each segment by class majority. Segments containing only unlabelled data or a majority of unlabelled data, such as during breaks between activities, were not used for training the classifiers.
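The segmentation step can be sketched as follows; function and variable names are illustrative, with `None` standing in for unlabelled samples.

```python
import numpy as np
from collections import Counter

def segment(acc, labels, fs=100, win_s=6, overlap=0.5):
    """Segment acceleration into fixed-length windows with overlap.

    acc:    (n_samples, 3) array of x/y/z acceleration.
    labels: list of per-sample labels, None marking unlabelled samples.
    Windows whose majority label is None are discarded, mirroring the
    preprocessing described above.
    """
    win = int(win_s * fs)
    step = int(win * (1 - overlap))
    segments, seg_labels = [], []
    for start in range(0, len(acc) - win + 1, step):
        majority, _ = Counter(labels[start:start + win]).most_common(1)[0]
        if majority is None:
            continue                      # mostly unlabelled -> skip
        segments.append(acc[start:start + win])
        seg_labels.append(majority)
    if not segments:
        return np.empty((0, win, 3)), []
    return np.stack(segments), seg_labels

# 18 s of dummy data -> 5 overlapping 6 s windows
acc = np.zeros((1800, 3))
labels = ['walking'] * 1800
segs, seg_labels = segment(acc, labels)
```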

3.3. Classification

Two different classifiers were trained and their performances compared. The first classifier was a DNN that achieves automatic feature extraction from the normalized acceleration segments. The second classifier was a support vector machine (SVM) that required handcrafted features as input. Figure 1 shows the different data preparation needed for the two classification models.

3.3.1. Deep Neural Network

Figure 2 shows the model architecture of the DNN. Normalized acceleration segments with dimensions 600 × 3 (6 s of x-, y- and z-acceleration sampled at 100 Hz) were used as input for the DNN. Three convolutional layers (filters: 8, 8 and 16 with kernel sizes: 23, 10 and 7, respectively) followed by an LSTM layer (units: 6) performed automatic feature extraction for the classification. The convolutional layers used a ReLU activation function and zero padding to avoid losing information at the boundaries of the input data. Max pooling layers (pool sizes: 10, 4 and 2, respectively), also with zero padding, and dropout layers (ratio: 30%) followed the convolutional layers to reduce the risk of overfitting. Batch normalization layers were added after each convolutional layer as they have been shown to be effective in accelerating the training of DNNs [45,48]. The last layer is a fully connected layer with a softmax activation that returns the classification predictions. The model was trained using an Adam optimizer [49] and a batch size of 100. Hyperparameters such as the number of filters, kernel size, pool size, dropout ratio and batch size were determined by iterating over one hyperparameter at a time. The model was developed using Keras with a TensorFlow backend.
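Using the layer sizes reported above, the architecture might be sketched in Keras as follows. The exact ordering of batch normalization, pooling and dropout around each convolution, and the choice of loss function, are assumptions here rather than details taken from Figure 2.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(n_classes=6):
    """CNN-LSTM sketch: filters 8/8/16, kernels 23/10/7, pools 10/4/2,
    LSTM with 6 units, softmax output over the activity classes."""
    inputs = keras.Input(shape=(600, 3))     # 6 s at 100 Hz, 3 axes
    x = inputs
    for filters, kernel, pool in [(8, 23, 10), (8, 10, 4), (16, 7, 2)]:
        x = layers.Conv1D(filters, kernel, padding="same",
                          activation="relu")(x)   # zero padding at boundaries
        x = layers.BatchNormalization()(x)        # after each conv layer
        x = layers.MaxPooling1D(pool, padding="same")(x)
        x = layers.Dropout(0.3)(x)                # 30% dropout ratio
    x = layers.LSTM(6)(x)                         # temporal feature summary
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```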
Due to class imbalance, models were trained using a balanced batch generator provided by the imbalanced-learn library [50]. The purpose of the balanced batch generator was to make sure that every batch contained equal amounts of samples from each class. The batch generator did so by creating copies of randomly selected samples belonging to all classes except the majority class of the batch.
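A minimal NumPy stand-in for such a generator (not the imbalanced-learn implementation itself) could look like:

```python
import numpy as np

def balanced_batches(X, y, batch_size=100, seed=0):
    """Yield batches with (approximately) equal samples per class by
    randomly re-drawing, with replacement, from each class's pool."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    per_class = batch_size // len(classes)
    while True:
        idx = np.concatenate([
            rng.choice(np.flatnonzero(y == c), per_class, replace=True)
            for c in classes
        ])
        rng.shuffle(idx)
        yield X[idx], y[idx]

# Heavily imbalanced toy data: 18 samples of class 0, 2 of class 1
X = np.arange(20).reshape(20, 1)
y = np.array([0] * 18 + [1] * 2)
xb, yb = next(balanced_batches(X, y, batch_size=10))
```

Drawing with replacement from the minority class effectively creates copies of its samples within each batch, which is the behavior described above.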

3.3.2. Feature-Based Classifier

A total of 86 features, from both time and frequency domains, were extracted from each acceleration segment. The features are listed in Table 2 and have previously been proposed for HAR [23,51,52]. Each feature was computed from the x-, y-, z-acceleration and the acceleration magnitude. Features were normalized to zero mean and unit standard deviation.
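As a hedged illustration of the kind of time- and frequency-domain features listed in Table 2 (the actual 86-feature set is not reproduced here), a few can be computed per axis as follows:

```python
import numpy as np

def basic_features(seg, fs=100):
    """A few illustrative features for one axis (1-D array `seg`)."""
    feats = {
        "mean": seg.mean(),                 # time domain
        "std": seg.std(),
        "signal_power": np.mean(seg ** 2),
    }
    # Frequency domain: dominant frequency of the mean-removed signal
    spectrum = np.abs(np.fft.rfft(seg - seg.mean()))
    freqs = np.fft.rfftfreq(len(seg), d=1 / fs)
    feats["dominant_freq"] = freqs[np.argmax(spectrum)]
    return feats

# 6 s test tone at 2 Hz, sampled at 100 Hz
t = np.arange(600) / 100
feats = basic_features(np.sin(2 * np.pi * 2.0 * t))
```

In the paper's setup each such feature would be computed for the x-, y- and z-axes and the acceleration magnitude, then normalized before classification.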
Principal component analysis (PCA) has commonly been used for reducing the dimensionality of feature sets used for HAR [53,54,55,56]. By using the first 30 principal components, 99% of the cumulative variance of the original data can be maintained. An SVM with a radial basis function kernel was used, with the γ parameter set to γ = 0.001. Class weights were inversely proportional to class size to deal with class imbalance. The classifier was implemented using Sklearn [57]. Feature normalization and the PCA transform parameters were fitted on the training dataset and then applied to the validation and testing datasets.
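Combining the steps above, a scikit-learn sketch of the feature-based pipeline might look as follows; the synthetic data is purely illustrative, and `class_weight="balanced"` is assumed to match the "inversely proportional to class size" weighting described above. A `Pipeline` also guarantees that the scaler and PCA are fitted on training data only and merely applied to validation/test data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

clf = make_pipeline(
    StandardScaler(),                 # zero mean, unit variance per feature
    PCA(n_components=30),             # first 30 components (~99% variance)
    SVC(kernel="rbf", gamma=0.001,    # RBF kernel, gamma as reported
        class_weight="balanced"),     # weights inverse to class size
)

# Synthetic stand-in: 200 segments x 86 features, 6 activity classes
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 86))
y_train = rng.integers(0, 6, size=200)
clf.fit(X_train, y_train)
```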

4. Results

The dataset contained approximately 23,000 labelled segments in total. Roughly 64% of the segments belonged to the walking class, while the wheelchair class, which was the smallest class, accounted for less than 2% of all the samples. Both the DNN and SVM classifiers were evaluated on the same holdout dataset containing data from 25% of the subjects. Table 3 shows the performance scores of both classification models. The DNN reached a considerably better performance, with 94.52% overall accuracy compared to 83.35% for the SVM. The between-subject variability in the DNN classification accuracy within the holdout dataset was 6%. The F1-score is often considered a better metric for classification problems with imbalanced datasets and is therefore also listed in the table.
Figure 3 shows the accuracy and loss of the DNN model on training and validation datasets. The performance of the model stops improving around the 50th training epoch. The model performs similarly for the training data and the validation data, which indicates low risk of overfitting.
Figure 4a shows the confusion matrix resulting from applying the DNN to the holdout data. Lying in bed was correctly classified for 100% of the segments. Segments labelled as upright and walking were correctly classified 94.7% and 94.9% of the time, respectively. The stair ascent, stair descent and wheelchair classes had slightly poorer classification rates of 82.1%, 85.1% and 86%, respectively. For comparison, the confusion matrix of the SVM on the same holdout data is shown in Figure 4b. The classification rate of the SVM is less for all classes except for lying in bed and wheelchair.
Figure 5 shows the percentage of wrongly classified segments per activity of the holdout dataset to indicate which activities are more difficult to classify than others. Slow walking, walking with walking aid and walking up/down stairs are the most challenging activities to classify for both models.

5. Discussion

This study demonstrated that a DNN model could be used to accurately classify activities that are typical for hospitalized patients using an accelerometer worn on the trunk. The DNN model showed substantially higher accuracy than a feature-based SVM on the presented laboratory data. Continuous patient monitoring using this approach could add insight into the recovery process by providing objective information about patients’ mobility and behavior. The DNN architecture was relatively small, with 3 convolutional layers, a recurrent layer and a final dense layer. This model architecture and the number of operations required for real-time data processing make the implementation of the DNN feasible for embedded processing in wearable devices equipped with modern processors capable of running computing libraries such as TensorFlow Lite [58].
Monitoring patient activity requires accurate walking detection at slow speeds as patients often ambulate at less than 1 km/h [59]. At very slow walking speeds, both classifiers had difficulties detecting walking. The DNN misclassified 27% of segments in the holdout dataset representing walking at 0.4 km/h as upright position. The ratio of misclassified segments improved as speed increased and for speeds higher than 1 km/h, 100% of the segments were correctly classified as walking. Segments representing walking with a 4-wheel rollator, walker and crutches were misclassified as upright for 18% to 26% of the segments. Activities while standing such as dressing/undressing, washing hands and brushing teeth were sometimes mistaken as walking or wheelchair. That may be due to small movements that resemble acceleration signals belonging to those two classes. The walking up/down stairs activities had 9% to 26% misclassification rates, which was expected partly because the acceleration signals while walking up/down stairs resemble those during walking in the corridor. In addition, in between floors there were parts where the subjects had to walk a few steps on a flat level before continuing walking up/down the stairs. These short flat level parts were not specifically annotated and therefore it is possible that there were some segments labelled as walking up/down stairs that should have been labelled as walking.
The amount of misclassified segments is considerably higher for the SVM. Walking with crutches was the activity with the highest percentage of misclassifications, in total 82%. These segments were misclassified as upright, wheelchair, stair ascent and stair descent. Walking with an anterior walker and a 4-wheel rollator follow, with misclassification rates of 67% and 53%, respectively. Many of the activities while standing or sitting, such as dressing, undressing, physiotherapy and reading, were falsely predicted as belonging to the wheelchair class. The difficulties of the SVM in predicting walking with a walking aid and the wheelchair class might indicate that different features were needed for these patient-specific activities.
A limitation of this study is that the algorithm was trained and tested using laboratory data. Previous studies have shown that the performance of algorithms in laboratory conditions may not accurately reflect performance in daily life [60]. This especially applies to algorithms such as DNNs that require large and representative datasets to generalize. However, preliminary testing including the unlabelled activities from the dataset collected for this study indicates good performance on new data, with just a few false positives for wheelchair and stair walking activities. Figure 6 shows the predictions of the DNN classifier for segments of the whole recording session of a representative participant from the holdout dataset. Another limitation is that this study does not address the challenge of monitoring changes in activity patterns in patients, which is an important target when considering the clinical applicability of the presented model to support assessment of patient recovery during hospitalization.

6. Conclusions

This work showed that a single trunk-worn accelerometer has the potential to monitor mobility of patients in hospitals. The DNN model presented in this report is a reliable algorithm for recognizing activities that are typical of daily patient behavior in the hospital. The model can accurately detect walking at speeds down to 1 km/h. This method has the potential to provide nurses and doctors insight into the recovery process of their patients and valuable objective information for making decisions regarding patient discharge. Future studies are needed to validate the classification model in continuous monitoring of hospitalized patients.

Author Contributions

Conceptualization, E.F. and A.G.B.; methodology, E.F. and A.G.B.; software, E.F. and A.G.B.; validation, E.F. and A.G.B.; formal analysis, E.F.; investigation, E.F.; resources, E.F. and A.G.B.; data curation, E.F. and A.G.B.; writing—original draft preparation, E.F.; writing—review and editing, E.F. and A.G.B.; visualization, E.F.; supervision, A.G.B.; project administration, A.G.B.; funding acquisition, A.G.B. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Conflicts of Interest

Both authors were employed at Philips Research Laboratories at the time of the study.


References
  1. Brown, C.J.; Redden, D.T.; Flood, K.L.; Allman, R.M. The underrecognized epidemic of low mobility during hospitalization of older adults. J. Am. Geriatr. Soc. 2009, 57, 1660–1665.
  2. Kuys, S.S.; Dolecka, U.E.; Guard, A. Activity level of hospital medical inpatients: An observational study. Arch. Gerontol. Geriatr. 2012, 55, 417–421.
  3. Mudge, A.M.; McRae, P.; McHugh, K.; Griffin, L.; Hitchen, A.; Walker, J.; Cruickshank, M.; Morris, N.R.; Kuys, S. Poor mobility in hospitalized adults of all ages. J. Hosp. Med. 2016, 11, 289–291.
  4. Brown, C.J.; Friedkin, R.J.; Inouye, S.K. Prevalence and Outcomes of Low Mobility in Hospitalized Older Patients. J. Am. Geriatr. Soc. 2004, 52, 1263–1270.
  5. Sallis, R.; Roddy-Sturm, Y.; Chijioke, E.; Litman, K.; Kanter, M.H.; Huang, B.Z.; Shen, E.; Nguyen, H.Q. Stepping toward discharge: Level of ambulation in hospitalized patients. J. Hosp. Med. 2015, 10, 384–389.
  6. Chung, J.; Demiris, G.; Thompson, H.J. Instruments to assess mobility limitation in community-dwelling older adults: A systematic review. J. Aging Phys. Act. 2015, 23, 298–313.
  7. Appelboom, G.; Yang, A.H.; Christophe, B.R.; Bruce, E.M.; Slomian, J.; Bruyère, O.; Bruce, S.S.; Zacharia, B.E.; Reginster, J.-Y.; Connolly, E.S.; et al. The promise of wearable activity sensors to define patient recovery. J. Clin. Neurosci. 2014, 21, 1089–1093.
  8. Podsiadlo, D.; Richardson, S. The Timed “Up & Go”: A Test of Basic Functional Mobility for Frail Elderly Persons. J. Am. Geriatr. Soc. 1991, 39, 142–148.
  9. Enright, P.L. The Six-Minute Walk Test. Respir. Care 2003, 48, 783–785.
  10. Culhane, K.M.; Lyons, G.M.; Hilton, D.; Grace, P.A.; Lyons, D. Long-term mobility monitoring of older adults using accelerometers in a clinical environment. Clin. Rehabil. 2004, 18, 335–343.
  11. Browning, L.; Denehy, L.; Scholes, R.L. The quantity of early upright mobilisation performed following upper abdominal surgery is low: An observational study. Aust. J. Physiother. 2007, 53, 47–52.
  12. Daskivich, T.J.; Houman, J.; Lopez, M.; Luu, M.; Fleshner, P.; Zaghiyan, K.; Cunneen, S.; Burch, M.; Walsh, C.; Paiement, G.; et al. Association of Wearable Activity Monitors With Assessment of Daily Ambulation and Length of Stay Among Patients Undergoing Major Surgery. JAMA Netw. Open 2019, 2, e187673.
  13. Cook, D.J.; Thompson, J.E.; Prinsen, S.K.; Dearani, J.A.; Deschamps, C. Functional Recovery in the Elderly After Major Surgery: Assessment of Mobility Recovery Using Wireless Technology. Ann. Thorac. Surg. 2013, 96, 1057–1061.
  14. Dhillon, M.S.; McCombie, S.A.; McCombie, D.B. Towards the Prevention of Pressure Ulcers with a Wearable Patient Posture Monitor Based on Adaptive Accelerometer Alignment. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 4513–4516.
  15. Beevi, F.H.A.; Miranda, J.; Pedersen, C.F.; Wagner, S. An Evaluation of Commercial Pedometers for Monitoring Slow Walking Speed Populations. Telemed. e-Health 2016, 22, 441–449.
  16. Ridder, R.; Blaiser, C. Activity trackers are not valid for step count registration when walking with crutches. Gait Posture 2019, 70, 30–32.
  17. Rusk, N. Deep learning. Nat. Methods 2016, 13, 35.
  18. Najafi, B.; Aminian, K.; Paraschiv-Ionescu, A.; Loew, F.; Büla, C.J.; Robert, P. Ambulatory system for human motion analysis using a kinematic sensor: Monitoring of daily physical activity in the elderly. IEEE Trans. Biomed. Eng. 2003, 50, 711–723.
  19. Godfrey, A.; Bourke, A.; Ólaighin, G.; Ven, P.V.; Nelson, J. Activity classification using a single chest mounted tri-axial accelerometer. Med. Eng. Phys. 2011, 33, 1127–1135.
  20. Aminian, K.; Robert, P.; Buchser, E.E.; Rutschmann, B.; Hayoz, D.; Depairon, M. Physical activity monitoring based on accelerometry: Validation and comparison with video observation. Med. Biol. Eng. Comput. 1999, 37, 304–308.
  21. Jeon, A.-Y.; Ye, S.-Y.; Park, J.-M.; Kim, K.-N.; Kim, J.-H.; Jung, D.-K.; Jeon, G.-R.; Ro, J.-H. Emergency Detection System Using PDA Based on Self-Response Algorithm. In Proceedings of the 2007 International Conference on Convergence Information Technology (ICCIT 2007), Gyeongju, Korea, 21–23 November 2007; pp. 1207–1212.
  22. Rauen, K.; Schaffrath, J.; Pradhan, C.; Schniepp, R.; Jahn, K. Accelerometric Trunk Sensors to Detect Changes of Body Positions in Immobile Patients. Sensors 2018, 18, 3272.
  23. Zhu, J.; San-Segundo, R.; Pardo, J.M. Feature extraction for robust physical activity recognition. Hum. Cent. Comput. Inf. Sci. 2017, 7, 16.
  24. Attal, F.; Mohammed, S.; Dedabrishvili, M.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. Physical Human Activity Recognition Using Wearable Sensors. Sensors 2015, 15, 31314–31338.
  25. Moncada-Torres, A.; Leuenberger, K.; Gonzenbach, R.; Luft, A.; Gassert, R. Activity classification based on inertial and barometric pressure sensors at different anatomical locations. Physiol. Meas. 2014, 35, 1245–1263.
  26. Parera, J.; Angulo, C.; Rodríguez-Molinero, A.; Cabestany, J. User Daily Activity Classification from Accelerometry Using Feature Selection and SVM. In Proceedings of the 10th International Work-Conference on Artificial Neural Networks (IWANN 2009), Salamanca, Spain, 10–12 June 2009; pp. 1137–1144.
  27. Cleland, I.; Kikhia, B.; Nugent, C.; Boytsov, A.; Hallberg, J.; Synnes, K.; McClean, S.; Finlay, D. Optimal Placement of Accelerometers for the Detection of Everyday Activities. Sensors 2013, 13, 9183–9200.
  28. Awais, M.; Chiari, L.; Ihlen, E.A.F.; Helbostad, J.L.; Palmerini, L. Physical Activity Classification for Elderly People in Free-Living Conditions. IEEE J. Biomed. Health Inform. 2019, 23, 197–207.
  29. Sasaki, J.E.; Hickey, A.; Staudenmayer, J.; John, D.; Kent, J.A.; Freedson, P.S. Performance of Activity Classification Algorithms in Free-living Older Adults. Med. Sci. Sport. Exerc. 2016, 48, 941–950.
  30. Lyden, K.; Keadle, S.K.; Staudenmayer, J.; Freedson, P.S. A method to estimate free-living active and sedentary behavior from an accelerometer. Med. Sci. Sport. Exerc. 2014, 46, 386–397.
  31. Jiang, W.; Yin, Z. Human Activity Recognition Using Wearable Sensors by Deep Convolutional Neural Networks. In Proceedings of the 2015 ACM Multimedia Conference (MM 2015), Brisbane, Australia, 26–30 October 2015; pp. 1307–1310.
  32. Hur, T.; Bang, J.; Huynh-The, T.; Lee, J.; Kim, J.-I.; Lee, S. Iss2Image: A Novel Signal-Encoding Technique for CNN-Based Human Activity Recognition. Sensors 2018, 18, 3910.
  33. Almaslukh, B.; Artoli, A.M.; Al-Muhtadi, J. A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition. Sensors 2018, 18, 3726.
  34. Ignatov, A. Real-time human activity recognition from accelerometer data using Convolutional Neural Networks. Appl. Soft Comput. 2018, 62, 915–922.
  35. Avilés-Cruz, C.; Ferreyra-Ramírez, A.; Zúñiga-López, A. Coarse-Fine Convolutional Deep-Learning Strategy for Human Activity Recognition. Sensors 2019, 19, 1556.
  36. Uddin, M.Z.; Hassan, M.M. Activity Recognition for Cognitive Assistance Using Body Sensors Data and Deep Convolutional Neural Network. IEEE Sens. J. 2019, 19, 8413–8419.
  37. Karim, F.; Majumdar, S.; Darabi, H.; Chen, S. LSTM Fully Convolutional Networks for Time Series Classification. IEEE Access 2018, 6, 1662–1669.
  38. Jeong, C.Y.; Kim, M. An Energy-Efficient Method for Human Activity Recognition with Segment-Level Change Detection and Deep Learning. Sensors 2019, 19, 3688.
  39. Ha, S.; Yun, J.-M.; Choi, S. Multi-modal Convolutional Neural Networks for Activity Recognition. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; pp. 3017–3022.
  40. Cheng, W.-Y.; Scotland, A.; Lipsmeier, F.; Kilchenmann, T.; Jin, L.; Schjodt-Eriksen, J.; Wolf, D.; Zhang-Schaerer, Y.-P.; Garcia, I.F.; Siebourg-Polster, J.; et al. Human Activity Recognition from Sensor-Based Large-Scale Continuous Monitoring of Parkinson’s Disease Patients. In Proceedings of the 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Philadelphia, PA, USA, 17–19 July 2017; pp. 249–250.
  41. Chen, Y.; Xue, Y. A Deep Learning Approach to Human Activity Recognition Based on Single Accelerometer. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; pp. 1488–1492.
  42. Inoue, M.; Inoue, S.; Nishida, T. Deep recurrent neural network for mobile human activity recognition with high throughput. Artif. Life Robot. 2018, 23, 173–185. [Google Scholar] [CrossRef] [Green Version]
  43. Chen, W.-H.; Baca, C.A.; Tou, C.-H. LSTM-RNNs combined with scene information for human activity recognition. In Proceedings of the 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom), Dalian, China, 12–15 October 2017; pp. 1–6. [Google Scholar]
  44. Welhenge, A.M.; Taparugssanagorn, A. Human activity classification using long short-term memory network. Signal Image Video Process. 2019, 13, 651–656. [Google Scholar] [CrossRef]
  45. Zebin, T.; Sperrin, M.; Peek, N.; Casson, A.J. Human activity recognition from inertial sensor time-series using batch normalized deep LSTM recurrent networks. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 1–4. [Google Scholar]
  46. Li, H.; Trocan, M. Personal Health Indicators by Deep Learning of Smart Phone Sensor Data. In Proceedings of the 2017 3rd IEEE International Conference on Cybernetics (CYBCONF), Exeter, UK, 21–23 June 2017; pp. 1–5. [Google Scholar]
  47. Ordóñez, F.; Roggen, D. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition. Sensors 2016, 16, 115. [Google Scholar] [CrossRef] [Green Version]
  48. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
  49. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  50. Lemaitre, G.; Nogueira, F.; Aridas, C.K. Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning. J. Mach. Learn. Res. 2017, 18, 1–5. [Google Scholar]
  51. Arif, M.; Kattan, A. Physical Activities Monitoring Using Wearable Acceleration Sensors Attached to the Body. PLoS ONE 2015, 10, e0130851. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Duarte, F.; Lourenço, A.; Abrantes, A. Classification of Physical Activities using a Smartphone: Evaluation study using multiple users. Procedia Technol. 2014, 17, 239–247. [Google Scholar] [CrossRef] [Green Version]
  53. Altun, K.; Barshan, B.; Tunçel, O. Comparative study on classifying human activities with miniature inertial and magnetic sensors. Pattern Recognit. 2010, 43, 3605–3620. [Google Scholar] [CrossRef]
  54. Sharma, A.; Lee, H.J.; Chung, W.-Y. Principal Component analysis based Ambulatory monitoring of elderly. J. Korea Inst. Inf. Commun. Eng. 2008, 12, 2105–2110. [Google Scholar]
  55. Chen, Y.; Wang, Z. A hierarchical method for human concurrent activity recognition using miniature inertial sensors. Sens. Rev. 2017, 37, 101–109. [Google Scholar] [CrossRef]
  56. Nguyen, V.N.; Yu, H. Novel Automatic Posture Detection for In-patient Care Using IMU Sensors. In Proceedings of the 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), Manila, Philippines, 12–15 November 2013; pp. 31–36. [Google Scholar]
  57. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. SciKit-learn: Machine Learning in {P}ython. J. Mach. Learn. 2011, 12, 2825–2830. [Google Scholar]
  58. David, R.; Duke, J.; Jain, A.; Reddi, V.J.; Jeffries, N.; Li, J.; Kreeger, N.; Nappier, I.; Natraj, M.; Regev, S.; et al. TensorFlow Lite Micro: Embedded Machine Learning on TinyML Systems. arXiv 2020, arXiv:2010.08678. [Google Scholar]
  59. Alfredsson, J.; Stebbins, A.; Brennan, J.M.; Matsouaka, R.; Afilalo, J.; Peterson, E.D.; Vemulapalli, S.; Rumsfeld, J.S.; Shahian, D.; Mack, M.J.; et al. Gait speed predicts 30-day mortality after transcatheter aortic valve replacement: Results from the Society of Thoracic Surgeons/American College of Cardiology Transcatheter Valve Therapy Registry. Circulation 2016, 133, 1351–1359. [Google Scholar] [CrossRef] [PubMed]
  60. Gyllensten, I.C.; Bonomi, A.G. Identifying types of physical activity with a single accelerometer: Evaluating laboratory-trained algorithms in daily life. IEEE Trans. Biomed. Eng. 2011, 58, 2656–2663. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart showing how acceleration segments are handled by a feature-based machine-learning classifier, in this case a support vector machine (SVM), versus a deep neural network (DNN).
Figure 2. Model architecture of the deep neural network. Acceleration segments with dimension N × 600 × 3, where N represents the number of segments, were used as input. Batch normalization layers are omitted for simplicity. The dimensions of the feature maps before each feature-extraction layer are noted below the layers.
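The N × 600 × 3 input described in the Figure 2 caption can be obtained by slicing the continuous tri-axial accelerometer stream into fixed-length windows. A minimal NumPy sketch, assuming non-overlapping 600-sample windows (the study's exact window length and overlap are not restated here, so treat both as assumptions):

```python
import numpy as np

def segment_signal(acc, window=600):
    """Slice a continuous tri-axial stream (T x 3) into non-overlapping
    windows, yielding an (N x window x 3) array for the network input."""
    n = acc.shape[0] // window    # number of complete windows
    trimmed = acc[: n * window]   # drop the incomplete tail
    return trimmed.reshape(n, window, 3)

# Example: 65 s of synthetic tri-axial data at ~100 Hz
stream = np.random.randn(6500, 3)
segments = segment_signal(stream)
print(segments.shape)  # (10, 600, 3)
```

With overlapping windows the same idea applies, using a stride smaller than the window length when stepping through the stream.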
Figure 3. Learning curves during training of the deep neural network (DNN). (a) Accuracy of the training and validation data, (b) loss of the training and validation data.
Figure 4. Normalized confusion matrices on holdout data of (a) the deep neural network (DNN) and (b) the support vector machine (SVM) classifier.
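The row-normalized matrices in Figure 4 can be reproduced from raw labels and predictions: each row is divided by the number of true samples of that class, so entry (i, j) is the fraction of class-i samples predicted as class j. A small NumPy sketch (the function name and toy labels are illustrative, not from the paper):

```python
import numpy as np

def normalized_confusion(y_true, y_pred, n_classes):
    """Row-normalized confusion matrix: entry (i, j) is the fraction
    of true-class-i samples that were predicted as class j."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    row_sums = cm.sum(axis=1, keepdims=True)
    return cm / np.where(row_sums == 0, 1, row_sums)  # guard empty rows

y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 2]
print(normalized_confusion(y_true, y_pred, 3))
```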
Figure 5. Percentage of wrong predictions per activity by (a) the deep neural network (DNN) and (b) the support vector machine (SVM). The colors represent the wrongly predicted class.
Figure 6. Predictions of the deep neural network (DNN) when the whole recording session of one subject is passed into the model. The grey areas represent unlabelled activities, which were not included when training the model.
Table 1. Activities used in the study protocol, with corresponding class labels and duration per participant. 6-MWT stands for 6-minute walk test.
Activity Label | Class Label | Duration (mm:ss)

Activities in and around bed
Lie supine | Lying in bed | 3:00
Lie left | Lying in bed | 0:30
Lie right | Lying in bed | 0:30
Restless in bed | Lying in bed | 1:00
Physiotherapy in bed | Lying in bed | 1:00
Reclined | Lying in bed | 0:30
Sitting edge of bed | Upright | 0:30
Standing next to bed | Upright | 0:30

Treadmill activities
0.4 km/h | Walking | 2:00
0.6 km/h | Walking | 2:00
0.8 km/h | Walking | 2:00
1.0 km/h | Walking | 2:00
1.2 km/h | Walking | 2:00
1.5 km/h | Walking | 2:00
2.0 km/h | Walking | 2:00
3.0 km/h | Walking | 2:00
4.0 km/h | Walking | 2:00

Activities of daily hospital living
Physiotherapy on a chair | Upright | 1:00
Sit-to-stand transitions | Upright | 1:00

Hospital ambulation
Patient transport in wheelchair | Upright | 1:00
Washing hands, brushing teeth | Upright | 1:00
Anterior walker | Walking | 1:00
IV pole | Walking | 1:00
4-wheel rollator | Walking | 1:00
Self-propelled wheelchair | Wheelchair | 1:00

Stair walking
Stair ascent, one leg injured | Stair ascent | 1:00
Stair descent, one leg injured | Stair descent | 1:00
Stair ascent | Stair ascent | 1:00
Stair descent | Stair descent | 1:00
Table 2. The features extracted from each acceleration segment. Each feature was extracted from four signals: the x-, y- and z-acceleration and the acceleration magnitude.
Feature | Description
Mean | Mean value of the vector
Absolute mean | Mean of the absolute values in the vector
Median | Median value of the vector
Mean absolute deviation | Mean absolute deviation of the vector
Standard deviation | Standard deviation of the vector
Variance | Variance of the vector
Minimum value | Lowest value in the vector
Maximum value | Highest value in the vector
Full range | Difference between the maximum and minimum value of the vector
Interquartile range | Difference between the 1st and 3rd quartile
Area | Sum of all values in the vector
Absolute area | Sum of all absolute values in the vector
Energy | Sum of the squared components of the vector
Correlation | Correlation coefficients between each pair of vectors
Skewness | Shape of the distribution
Kurtosis | Shape of the distribution
Spectral entropy | A measure of the complexity of the signal
Spectral centroid | Mean of the Fourier transform
Spectral variance | Variance of the Fourier transform
Spectral skewness | Skewness of the Fourier transform
Spectral kurtosis | Kurtosis of the Fourier transform
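As a hedged illustration of how a subset of the Table 2 features might be computed for a single axis vector, the NumPy sketch below covers a few time-domain statistics and spectral entropy (the study's exact implementation is not specified here; the function name and the spectral-entropy formulation are assumptions):

```python
import numpy as np

def extract_features(segment):
    """Compute a subset of the Table 2 features for one axis vector;
    the full set repeats these over x, y, z and the magnitude signal."""
    feats = {
        "mean": segment.mean(),
        "abs_mean": np.abs(segment).mean(),
        "std": segment.std(),
        "full_range": segment.max() - segment.min(),
        "iqr": np.percentile(segment, 75) - np.percentile(segment, 25),
        "energy": np.sum(segment ** 2),
    }
    # Spectral entropy: Shannon entropy of the normalized power spectrum
    psd = np.abs(np.fft.rfft(segment)) ** 2
    p = psd / psd.sum()
    feats["spectral_entropy"] = -np.sum(p * np.log2(p + 1e-12))
    return feats

x = np.sin(np.linspace(0, 20 * np.pi, 600))  # synthetic axis signal
print(sorted(extract_features(x)))
```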
Table 3. Classification performance of the deep neural network (DNN) and support vector machine (SVM) on holdout data. Precision, recall and F1-scores are reported as weighted averages.
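The weighted averages reported in Table 3 weight each per-class score by its support, i.e., the number of true samples in that class. A minimal sketch of the computation (the scores and supports below are hypothetical):

```python
import numpy as np

def weighted_average(per_class_scores, support):
    """Support-weighted average of per-class scores, as used for the
    weighted precision/recall/F1 numbers in Table 3."""
    support = np.asarray(support, dtype=float)
    return float(np.dot(per_class_scores, support) / support.sum())

# Hypothetical per-class F1-scores and class supports
f1 = [0.95, 0.90, 0.80]
support = [100, 50, 50]
print(weighted_average(f1, support))  # (0.95*100 + 0.90*50 + 0.80*50) / 200
```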