Physical Activity Recognition Based on a Parallel Approach for an Ensemble of Machine Learning and Deep Learning Classifiers
Abstract
1. Introduction
- A large dataset to support HAR system development, recorded from participants wearing a comfortable smart textile garment with a single embedded waist-worn accelerometer.
- A parallel architecture that combines traditional and deep learning pattern classification algorithms, for improved computational efficiency and classification accuracy, which we refer to as an ensemble learning architecture. This architecture covers both the training and testing stages of algorithm development, for ease of application development.
- A parallel implementation of this ensemble learning architecture (a minimal sketch follows this list).
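To make the parallel ensemble idea concrete, the sketch below runs three placeholder pipelines in separate processes with Python's multiprocessing module and fuses their per-window predictions by majority vote. The pipeline stubs, the window shape, and the sampling rate are assumptions used only for illustration; they are not the implementation detailed in Section 3.3.

```python
# Minimal sketch of the parallel ensemble idea: three pipelines evaluated in
# separate processes, predictions fused by majority voting.  The pipeline
# functions are hypothetical stubs standing in for the SVM, k-NN, and CNN
# pipelines described in Section 3.3.
from multiprocessing import Pool

import numpy as np

def svm_pipeline(windows):  # handcrafted features -> ReliefF -> SVM (stub)
    return np.zeros(len(windows), dtype=int)

def knn_pipeline(windows):  # LDA features -> k-NN (stub)
    return np.ones(len(windows), dtype=int)

def cnn_pipeline(windows):  # 1D CNN on raw segments (stub)
    return np.ones(len(windows), dtype=int)

def apply_pipeline(pipeline, windows):
    return pipeline(windows)

def majority_vote(per_pipeline_predictions):
    """Fuse predictions of shape (n_pipelines, n_windows) by majority vote."""
    stacked = np.stack(per_pipeline_predictions)
    # For each window, pick the label most frequently predicted across pipelines.
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, stacked)

if __name__ == "__main__":
    # 100 dummy windows of 3-axis acceleration; the window length (64 samples) is an assumption.
    windows = np.random.randn(100, 64, 3).astype(np.float32)
    with Pool(processes=3) as pool:
        preds = pool.starmap(apply_pipeline,
                             [(svm_pipeline, windows),
                              (knn_pipeline, windows),
                              (cnn_pipeline, windows)])
    fused = majority_vote(preds)
    print(fused[:10])
```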
2. Related Work
3. Materials and Methods
3.1. Data Acquisition
3.1.1. Participants
3.1.2. Equipment
3.1.3. Research Protocol
3.2. Preprocessing
3.2.1. Labeling Procedure
- Index: line number in the CSV file (millions of data points);
- Participant number;
- Participant reference: nomenclature as saved in the LIO server;
- Trial number;
- Timestamp (in milliseconds);
- x-axis accelerometer data;
- y-axis accelerometer data;
- z-axis accelerometer data;
- Euclidean norm of the acceleration;
- Activity name (the class label); a loading sketch follows this list.
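As a minimal illustration of working with this file layout, the following sketch loads the labeled CSV with pandas and runs basic sanity checks. The file name and column identifiers are hypothetical placeholders for the fields listed above, and the sketch assumes the file has a header row.

```python
# Minimal sketch of loading and inspecting the labeled CSV described above.
# Column names are hypothetical placeholders for the fields listed in Section 3.2.1.
import pandas as pd

COLUMNS = [
    "index", "participant", "participant_ref", "trial",
    "timestamp_ms", "acc_x", "acc_y", "acc_z", "acc_norm", "activity",
]

df = pd.read_csv("labeled_accelerometer_data.csv", header=0, names=COLUMNS)

# Per-activity sample counts, useful for spotting class imbalance early.
print(df["activity"].value_counts())

# Timestamps should be monotonically increasing within each participant/trial.
assert (df.groupby(["participant", "trial"])["timestamp_ms"]
          .apply(lambda s: s.is_monotonic_increasing).all())
```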
3.2.2. Validation
3.2.3. Segmentation
3.3. Overview of the Proposed Architecture
3.3.1. First Pipeline
Handcrafted Feature Extraction
- Time domain metrics: mean, variance, standard deviation, maximum, minimum, Root Mean Square (RMS), kurtosis, skewness, Euclidean norm, and l1-norm;
- Frequency domain metrics: energy and maximum magnitude of the Fast Fourier Transform (FFT). A sketch of these feature computations follows this list.
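The sketch below computes these time- and frequency-domain metrics on a single one-dimensional window with NumPy and SciPy. The exact energy definition and the exclusion of the DC bin for the maximum FFT magnitude are assumptions of this sketch, as is the 64-sample window length in the usage example.

```python
# Sketch of the handcrafted features listed above, computed on a single
# 1-D window of acceleration samples (e.g., one axis or the Euclidean norm).
import numpy as np
from scipy.stats import kurtosis, skew

def handcrafted_features(window: np.ndarray) -> np.ndarray:
    # Time-domain metrics
    time_feats = [
        window.mean(), window.var(), window.std(),
        window.max(), window.min(),
        np.sqrt(np.mean(window ** 2)),   # Root Mean Square (RMS)
        kurtosis(window), skew(window),
        np.linalg.norm(window),          # Euclidean (l2) norm
        np.sum(np.abs(window)),          # l1-norm
    ]
    # Frequency-domain metrics from the FFT magnitude spectrum
    spectrum = np.abs(np.fft.rfft(window))
    freq_feats = [
        np.sum(spectrum ** 2),           # energy (sum of squared magnitudes)
        spectrum[1:].max(),              # maximum magnitude, ignoring the DC bin
    ]
    return np.array(time_feats + freq_feats)

# Example on a dummy 1-s window (the 64-sample length is an assumption)
features = handcrafted_features(np.random.randn(64))
print(features.shape)  # (12,)
```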
Feature Selection Using ReliefF
Multiclass Support Vector Machine (SVM) Classifier
3.3.2. Second Pipeline
Automatic Feature Extraction Using LDA
K-NN Classifier
3.3.3. Third Pipeline
CNN for Multivariate Time Series
- The input layer, which has as many neurons as the dimension of the input segment.
- Hidden layers of a deep network are designed to learn hierarchical feature representations of the data. During training, a set of hyperparameters was tuned manually, and the weights were initialized randomly [47]. The weights were then updated by gradient descent, using the backpropagation algorithm, so as to minimize the cost function on the training set. The choice of the model, the architecture, and the cost function is crucial to obtain a network that generalizes well, and is in general problem- and data-dependent.
- The output layer has l neurons, one per class, which corresponds to the multiclass classification problem addressed in this application.
- The convolution block, composed of a convolutional layer and a pooling layer. These two layers form the essential components of the feature extractor, which learns features from the raw data automatically (feature learning). The convolutional layer implements the receptive field and shared weight concepts: neurons in a convolutional layer are locally connected to the neurons inside their receptive fields in the previous layer, and neurons in a given layer are organized in planes in which all neurons share the same set of weights (also called filters or kernels). The set of outputs of the neurons in such a plane is called a feature map, and the number of feature maps equals the number of filters. A pooling layer performs either average subsampling (mean-pooling) or maximum subsampling (max-pooling); for a time series, pooling simply reduces the length, and thus the resolution, of the feature maps.
- The fully connected block, which performs the classification based on the features learned by the convolutional blocks (an illustrative sketch of such an architecture follows this list).
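For illustration, a minimal 1-D CNN with this convolution/pooling plus fully connected structure can be written in TensorFlow/Keras as follows. The window length, number of filters, kernel sizes, and dense layer width are assumptions of this sketch, not the architecture reported in the paper; only the number of output classes (10 activities) follows from the results tables.

```python
# Illustrative 1-D CNN for multivariate (3-axis) accelerometer windows.
import tensorflow as tf

NUM_CLASSES = 10   # number of activity classes in this study
WINDOW_LEN = 64    # assumed samples per 1-s window
NUM_AXES = 3       # x, y, z acceleration

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW_LEN, NUM_AXES)),
    # Convolution block 1: feature learning on the raw signal
    tf.keras.layers.Conv1D(filters=32, kernel_size=5, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    # Convolution block 2
    tf.keras.layers.Conv1D(filters=64, kernel_size=5, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    # Fully connected block: classification on the learned feature maps
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```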
3.3.4. Fusion Stage
3.3.5. Computational Optimization
4. Results
4.1. Experimental Design
4.1.1. Weighting Imbalanced Classes
4.1.2. Multiclass Performance Measures
4.2. Experimental Results
4.2.1. Classification Result Using the Handcrafted Feature Engineering Based Approach
4.2.2. Classification Result Using the Automatic Feature Extraction-Based Approach
4.2.3. Classification Result Using Feature Learning-Based Approach
4.2.4. Classification Results Using the Ensemble Learning Based Approach
4.3. Discussion of the Recognition Rate Results
4.4. Performance Speed Analysis
4.5. Comparison Results
- Partitioning the dataset into training and testing subsets using the leave-one-subject-out cross-validation;
- Segmenting the raw accelerometer signals using a 1-s fixed-size overlapping sliding window (FOSW) with 50% overlap (a windowing sketch follows this list);
- Handling class imbalance in all the learning techniques;
- Making a comparative analysis on the basis of performance measures such as the F1-score, precision, and recall, as well as confusion matrices, using the mean score estimated on each group of out-of-fold predictions.
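A sketch of the 1-s fixed-size overlapping sliding window segmentation with 50% overlap is given below; the 64 Hz sampling rate is an assumption used only to size the example and should be replaced by the rate of the target dataset.

```python
# Sketch of the fixed-size overlapping sliding window (FOSW) segmentation:
# 1-s windows with 50% overlap over a multivariate accelerometer signal.
import numpy as np

def fosw_segment(signal: np.ndarray, window_len: int, overlap: float = 0.5) -> np.ndarray:
    """Split a (n_samples, n_axes) signal into overlapping windows.

    Returns an array of shape (n_windows, window_len, n_axes).
    """
    step = int(window_len * (1.0 - overlap))
    starts = range(0, len(signal) - window_len + 1, step)
    return np.stack([signal[s:s + window_len] for s in starts])

# Example: 10 s of dummy 3-axis data at an assumed 64 Hz -> 1-s windows, 50% overlap
fs = 64
data = np.random.randn(10 * fs, 3)
windows = fosw_segment(data, window_len=fs, overlap=0.5)
print(windows.shape)  # (19, 64, 3)
```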
5. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Acampora, G.; Cook, D.; Rashidi, P.; Vasilakos, A. A Survey on Ambient Intelligence in Health Care. Proc. IEEE. Inst. Electr. Electron. Eng. 2013, 101, 2470–2494.
2. Kristoffersson, A.; Lindén, M. A Systematic Review on the Use of Wearable Body Sensors for Health Monitoring: A Qualitative Synthesis. Sensors 2020, 20, 1502.
3. Plötz, T.; Hammerla, N.Y.; Olivier, P. Feature Learning for Activity Recognition in Ubiquitous Computing. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011; pp. 1729–1734.
4. Chen, L.; Hoey, J.; Nugent, C.D.; Cook, D.J.; Yu, Z. Sensor-Based Activity Recognition. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2012, 42, 790–808.
5. Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209.
6. Attal, F.; Mohammed, S.; Debarishvili, M.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. Physical Human Activity Recognition Using Wearable Sensors. Sensors 2015, 15, 31314–31338.
7. Nweke, H.F.; Teh, Y.W.; Al-Garadi, M.A.; Alo, U.R. Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Syst. Appl. 2018, 105, 233–261.
8. Wang, Y.; Cang, S.; Yu, H. A survey on wearable sensor modality centred human activity recognition in health care. Expert Syst. Appl. 2019, 137, 167–190.
9. Bao, L.; Intille, S.S. Activity Recognition from User-Annotated Acceleration Data. In Pervasive; Ferscha, A., Mattern, F., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3001, pp. 1–17.
10. Wang, J.; Chen, Y.; Hao, S.; Peng, X.; Hu, L. Deep learning for sensor-based activity recognition: A survey. Pattern Recognit. Lett. 2019, 119, 3–11.
11. Garcia, K.D.; de Sá, C.R.; Poel, M.; Carvalho, T.; Mendes-Moreira, J.; Cardoso, J.M.; de Carvalho, A.C.; Kok, J.N. An ensemble of autonomous auto-encoders for human activity recognition. Neurocomputing 2021, 439, 271–280.
12. Kwapisz, J.; Weiss, G.; Moore, S. Activity Recognition Using Cell Phone Accelerometers. SIGKDD Explor. 2010, 12, 74–82.
13. Attila, R.; Didier, S. Creating and Benchmarking a New Dataset for Physical Activity Monitoring. In Proceedings of the 5th International Conference on PErvasive Technologies Related to Assistive Environments, Heraklion, Crete, Greece, 6–8 June 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 1–8.
14. Baños, O.; Gálvez, J.M.; Damas, M.; Pomares, H.; Rojas, I. Window Size Impact in Human Activity Recognition. Sensors 2014, 14, 6474–6499.
15. Damasevicius, R.; Vasiljevas, M.; Salkevicius, J.; Woźniak, M. Human Activity Recognition in AAL Environments Using Random Projections. Comput. Math. Methods Med. 2016, 2016, 4073584.
16. Bayat, A.; Pomplun, M.; Tran, D.A. A Study on Human Activity Recognition Using Accelerometer Data from Smartphones. Procedia Comput. Sci. 2014, 34, 450–457.
17. Kern, N.; Schiele, B.; Schmidt, A. Multi-Sensor Activity Context Detection for Wearable Computing. In Ambient Intelligence; Aarts, E., Collier, R.W., van Loenen, E., de Ruyter, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 220–232.
18. Bidargaddi, N.; Sarela, A.; Klingbeil, L.; Karunanithi, M. Detecting walking activity in cardiac rehabilitation by using accelerometer. In Proceedings of the 2007 3rd International Conference on Intelligent Sensors, Sensor Networks and Information, Melbourne, Australia, 3–6 December 2007; pp. 555–560.
19. Cheung, V.H.; Gray, L.; Karunanithi, M. Review of Accelerometry for Determining Daily Activity Among Elderly Patients. Arch. Phys. Med. Rehabil. 2011, 92, 998–1014.
20. Cleland, I.; Kikhia, B.; Nugent, C.; Boytsov, A.; Hallberg, J.; Synnes, K.; McClean, S.; Finlay, D. Optimal Placement of Accelerometers for the Detection of Everyday Activities. Sensors 2013, 13, 9183–9200.
21. King, R.C.; Villeneuve, E.; White, R.J.; Sherratt, R.S.; Holderbaum, W.; Harwin, W.S. Application of data fusion techniques and technologies for wearable health monitoring. Med. Eng. Phys. 2017, 42, 1–12.
22. Nweke, H.F.; Teh, Y.W.; Ghulam, M.; Al-Garadi, M.A. Data fusion and multiple classifier systems for human activity detection and health monitoring: Review and open research directions. Inf. Fusion 2019, 46, 147–170.
23. Zeng, M.; Nguyen, L.T.; Yu, B.; Mengshoel, O.J.; Zhu, J.; Wu, P.; Zhang, J. Convolutional Neural Networks for human activity recognition using mobile sensors. In Proceedings of the 6th International Conference on Mobile Computing, Applications and Services, Austin, TX, USA, 6–7 November 2014; pp. 197–205.
24. Cai, J.; Yang, K.Q.; Zhang, Y. Real-Time Physical Activity Recognition Using a Single Triaxial Accelerometer Based on HMM. In Advanced Manufacturing and Information Engineering, Intelligent Instrumentation and Industry Development; Trans Tech Publications Ltd.: Freienbach, Switzerland, 2014; Volume 602, pp. 2221–2224.
25. Fu, Y.; Cao, L.; Guo, G.; Huang, T.S. Multiple Feature Fusion by Subspace Learning. In Proceedings of the 2008 International Conference on Content-Based Image and Video Retrieval; Association for Computing Machinery: New York, NY, USA, 2008; pp. 127–134.
26. Tao, D.; Jin, L.; Yuan, Y.; Xue, Y. Ensemble Manifold Rank Preserving for Acceleration-Based Human Activity Recognition. IEEE Trans. Neural Networks Learn. Syst. 2016, 27, 1392–1404.
27. Daghistani, T.; Alshammari, R. Improving Accelerometer-Based Activity Recognition by Using Ensemble of Classifiers. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 128–133.
28. Ruta, D.; Gabrys, B. An Overview of Classifier Fusion Methods. Comput. Inf. Syst. 2000, 7, 1–10.
29. Bengio, Y.; Courville, A.; Vincent, P. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
30. Figo, D.; Diniz, P.C.; Ferreira, D.R.; Cardoso, J.M. Preprocessing Techniques for Context Recognition from Accelerometer Data. Pers. Ubiquitous Comput. 2010, 14, 645–662.
31. He, Z.; Jin, L. Activity recognition from acceleration data based on discrete consine transform and SVM. In Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, USA, 11–14 October 2009; pp. 5041–5044.
32. Atallah, L.; Lo, B.; King, R.; Yang, G. Sensor Positioning for Activity Recognition Using Wearable Accelerometers. IEEE Trans. Biomed. Circuits Syst. 2011, 5, 320–329.
33. Bicocchi, N.; Mamei, M.; Zambonelli, F. Detecting activities from body-worn accelerometers via instance-based algorithms. Pervasive Mob. Comput. 2010, 6, 482–495.
34. Jatoba, L.C.; Grossmann, U.; Kunze, C.; Ottenbacher, J.; Stork, W. Context-aware mobile health monitoring: Evaluation of different pattern recognition methods for classification of physical activity. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 5250–5253.
35. Rodriguez-Martin, D.; Samà, A.; Perez-Lopez, C.; Català, A.; Cabestany, J.; Rodriguez-Molinero, A. SVM-based posture identification with a single waist-located triaxial accelerometer. Expert Syst. Appl. 2013, 40, 7203–7211.
36. Li, A.; Ji, L.; Wang, S.; Wu, J. Physical activity classification using a single triaxial accelerometer based on HMM. In Proceedings of the IET International Conference on Wireless Sensor Network 2010 (IET-WSN 2010), Beijing, China, 15–17 November 2010; pp. 155–160.
37. Xu, S.; Tang, Q.; Jin, L.; Pan, Z. A Cascade Ensemble Learning Model for Human Activity Recognition with Smartphones. Sensors 2019, 19, 2307.
38. Catal, C.; Tufekci, S.; Pirmit, E.; Kocabag, G. On the use of ensemble of classifiers for accelerometer-based activity recognition. Appl. Soft Comput. 2015, 37, 1018–1022.
39. Cherif, N.; Ouakrim, Y.; Benazza-Benyahia, A.; Mezghani, N. Physical Activity Classification Using a Smart Textile. In Proceedings of the 2018 IEEE Life Sciences Conference (LSC), Montreal, QC, Canada, 28–30 October 2018; pp. 175–178.
40. Arlot, S.; Celisse, A. A Survey of Cross Validation Procedures for Model Selection. Stat. Surv. 2009, 4.
41. Liono, J.; Qin, A.K.; Salim, F.D. Optimal Time Window for Temporal Segmentation of Sensor Streams in Multi-Activity Recognition. In Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, Hiroshima, Japan, 28 November–1 December 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 10–19.
42. Kira, K.; Rendell, L.A. A Practical Approach to Feature Selection. In Machine Learning Proceedings 1992; Sleeman, D., Edwards, P., Eds.; Morgan Kaufmann: San Francisco, CA, USA, 1992; pp. 249–256.
43. Kononenko, I.; Šimec, E.; Robnik-Sikonja, M. Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Appl. Intell. 1997, 7, 39–55.
44. Hearst, M.; Dumais, S.; Osman, E.; Platt, J.; Scholkopf, B. Support vector machines. Intell. Syst. Appl. IEEE 1998, 13, 18–28.
45. Xanthopoulos, P.; Pardalos, P.M.; Trafalis, T.B. Linear discriminant analysis. In Robust Data Mining; Springer: New York, NY, USA, 2013; pp. 27–33.
46. Peterson, L.E. K-nearest neighbor. Scholarpedia 2009, 4, 1883.
47. LeCun, Y.A.; Bottou, L.; Orr, G.B.; Müller, K. Efficient BackProp. In Neural Networks: Tricks of the Trade, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 9–48.
48. Hammerla, N.Y.; Halloran, S.; Plötz, T. Deep, Convolutional, and Recurrent Models for Human Activity Recognition Using Wearables. In Twenty-Fifth International Joint Conference on Artificial Intelligence; AAAI Press: New York, NY, USA, 2016; pp. 1533–1540.
49. LeCun, Y.; Yoshua, B. Convolutional Networks for Images, Speech, and Time Series. In The Handbook of Brain Theory and Neural Networks; MIT Press: Cambridge, MA, USA, 1998; pp. 255–258.
50. Lam, L.; Suen, S.Y. Application of majority voting to pattern recognition: An analysis of its behavior and performance. IEEE Trans. Syst. Man Cybern. Part A Syst. Humans 1997, 27, 553–568.
51. Multiprocessing—Process-Based Parallelism. Available online: https://docs.python.org/3/library/multiprocessing.html (accessed on 5 July 2021).
52. Gerber, F.; Nychka, D.W. Parallel cross-validation: A scalable fitting method for Gaussian process models. Comput. Stat. Data Anal. 2021, 155, 107113.
53. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: tensorflow.org (accessed on 5 July 2021).
54. Smith, S.L.; Kindermans, P.J.; Le, Q.V. Do not Decay the Learning Rate, Increase the Batch Size. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
55. Krawczyk, B. Learning from imbalanced data: Open challenges and future directions. Prog. Artif. Intell. 2016, 5, 221–232.
56. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437.
57. Ismail Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Deep learning for time series classification: A review. Data Min. Knowl. Discov. 2019, 33, 917–963.
58. Benavoli, A.; Corani, G.; Mangili, F. Should We Really Use Post-Hoc Tests Based on Mean-Ranks? J. Mach. Learn. Res. 2016, 17, 152–161.
59. Demsar, J. Statistical Comparisons of Classifiers over Multiple Data Sets. J. Mach. Learn. Res. 2006, 7, 1–30.
60. Casale, P.; Pujol, O.; Radeva, P. Human Activity Recognition from Accelerometer Data Using a Wearable Device. In Iberian Conference on Pattern Recognition and Image Analysis; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6669, pp. 289–296.
61. Bulbul, E.; Cetin, A.; Dogru, I.A. Human Activity Recognition Using Smartphones. In Proceedings of the 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Ankara, Turkey, 19–21 October 2018; pp. 1–6.
62. Brewer, B.W. Fostering Treatment Adherence in Athletic Therapy. Athl. Ther. Today 1998, 3, 30–32.
63. Abd, R.; Ku, N.K.; Elamvazuthi, I.; Izhar, L.I.; Capi, G. Classification of Human Daily Activities Using Ensemble Methods Based on Smart phone Inertial Sensors. Sensors 2018, 18, 4132.
64. Barandas, M.; Folgado, D.; Fernandes, L.; Santos, S.; Abreu, M.; Bota, P.; Liu, H.; Schultz, T.; Gamboa, H. TSFEL: Time Series Feature Extraction Library. SoftwareX 2020, 11, 100456.
65. Christ, M.; Braun, N.; Neuffer, J.; Kempa-Liehr, A.W. Time Series FeatuRe Extraction on basis of Scalable Hypothesis tests (tsfresh—A Python package). Neurocomputing 2018, 307, 72–77.
66. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning Deep Features for Discriminative Localization. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929.
Class | Precision | Recall | F1-Score | Support |
---|---|---|---|---|
up the stairs | 0.87 | 0.96 | 0.91 | 268 |
down the stairs | 0.99 | 0.88 | 0.93 | 199 |
walk | 0.98 | 0.94 | 0.96 | 702 |
run | 0.94 | 1.00 | 0.97 | 372 |
sit | 0.77 | 1.00 | 0.87 | 345 |
fall-right | 1.00 | 1.00 | 1.00 | 5 |
fall-left | 1.00 | 1.00 | 1.00 | 6 |
fall-front | 1.00 | 0.67 | 0.80 | 6 |
fall-back | 0.83 | 1.00 | 0.91 | 10 |
lying | 1.00 | 0.73 | 0.85 | 394 |
macro avg | 0.94 | 0.92 | 0.92 | 2307 |
weighted avg | 0.93 | 0.92 | 0.92 | 2307 |
Class | Precision | Recall | F1-Score | Support |
---|---|---|---|---|
up the stairs | 0.91 | 0.81 | 0.85 | 268 |
down the stairs | 0.77 | 0.35 | 0.48 | 199 |
walk | 0.77 | 0.96 | 0.86 | 702 |
run | 0.97 | 0.82 | 0.89 | 372 |
sit | 0.94 | 1.00 | 0.97 | 345 |
fall-right | 1.00 | 0.80 | 0.89 | 5 |
fall-left | 1.00 | 1.00 | 1.00 | 6 |
fall-front | 1.00 | 0.83 | 0.91 | 6 |
fall-back | 1.00 | 0.70 | 0.82 | 10 |
lying | 0.99 | 1.00 | 1.00 | 394 |
macro avg | 0.93 | 0.83 | 0.87 | 2307 |
weighted avg | 0.88 | 0.88 | 0.87 | 2307 |
Class | Precision | Recall | F1-Score | Support |
---|---|---|---|---|
up the stairs | 0.99 | 0.94 | 0.97 | 268 |
down the stairs | 0.99 | 0.96 | 0.98 | 199 |
walk | 0.98 | 1.00 | 0.99 | 702 |
run | 0.98 | 1.00 | 0.99 | 372 |
sit | 1.00 | 1.00 | 1.00 | 345 |
fall-right | 1.00 | 1.00 | 1.00 | 5 |
fall-left | 0.86 | 1.00 | 0.92 | 6 |
fall-front | 0.86 | 1.00 | 0.92 | 6 |
fall-back | 1.00 | 0.90 | 0.95 | 10 |
lying | 1.00 | 1.00 | 1.00 | 394 |
macro avg | 0.97 | 0.98 | 0.97 | 2307 |
weighted avg | 0.99 | 0.99 | 0.99 | 2307 |
Class | Precision | Recall | F1-Score | Support |
---|---|---|---|---|
up the stairs | 0.98 | 0.94 | 0.96 | 268 |
down the stairs | 1.00 | 0.96 | 0.98 | 199 |
walk | 0.98 | 0.99 | 0.99 | 702 |
run | 0.98 | 1.00 | 0.99 | 372 |
sit | 1.00 | 1.00 | 1.00 | 345 |
fall-right | 1.00 | 1.00 | 1.00 | 5 |
fall-left | 0.86 | 1.00 | 0.92 | 6 |
fall-front | 1.00 | 1.00 | 1.00 | 6 |
fall-back | 1.00 | 0.90 | 0.95 | 10 |
lying | 1.00 | 1.00 | 1.00 | 394 |
macro avg | 0.98 | 0.98 | 0.98 | 2307 |
weighted avg | 0.99 | 0.99 | 0.99 | 2307 |
Dataset | Methods | F1-Score | Precision | Recall |
---|---|---|---|---|
WISDM | The proposed method | 0.77 ± 0.07 | 0.77 ± 0.07 | 0.77 ± 0.07 |
WISDM | Ensemble (DT-LR-MLP) [38] | 0.73 ± 0.11 | 0.73 ± 0.11 | 0.73 ± 0.11 |
WISDM | Adaboost [61] | 0.46 ± 0.13 | 0.46 ± 0.13 | 0.46 ± 0.13 |
WISDM | Random Forest [60] | 0.72 ± 0.11 | 0.72 ± 0.11 | 0.72 ± 0.11 |
Hexoskin | The proposed method | 0.85 ± 0.12 | 0.85 ± 0.12 | 0.85 ± 0.12 |
Hexoskin | Ensemble (DT-LR-MLP) [38] | 0.79 ± 0.14 | 0.79 ± 0.14 | 0.79 ± 0.14 |
Hexoskin | Adaboost [61] | 0.49 ± 0.11 | 0.49 ± 0.11 | 0.49 ± 0.11 |
Hexoskin | Random Forest [60] | 0.81 ± 0.14 | 0.81 ± 0.14 | 0.81 ± 0.14 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).