An Optimal Feature Selection Method for Human Activity Recognition Using Multimodal Sensory Data
Abstract
1. Introduction
- A generic two-level hierarchical framework is presented to recognize complex, long-term human activities of daily living (i.e., composite activities) from raw sensory data;
- An optimal feature selection technique is presented that exploits the atomic scores of the underlying atomic activities in a composite activity (see the sketch after this list);
- The proposed method was evaluated on the CogAge dataset, where it achieved a significant improvement in accuracy over existing methods, highlighting the effectiveness of the framework.
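The second contribution can be made concrete with a minimal sketch, assuming the first level yields one mean confidence ("atomic score") per atomic activity for each composite class, and that atomic activities are retained until a chosen fraction of the total score mass is covered. The function name, the cumulative-mass rule, and the threshold value below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_atomic_features(atomic_scores, threshold=0.80):
    """Select the most informative atomic activities for one composite class.

    atomic_scores: 1-D array of per-atomic-activity scores for one composite
    activity (hypothetical layout); threshold: fraction of the total score
    mass to retain. Returns the indices of the selected atomic activities.
    """
    order = np.argsort(atomic_scores)[::-1]               # strongest first
    cumulative = np.cumsum(atomic_scores[order])
    k = int(np.searchsorted(cumulative, threshold * cumulative[-1])) + 1
    return np.sort(order[:k])

# Stand-in data: 61 atomic scores for one composite activity.
rng = np.random.default_rng(42)
selected = select_atomic_features(rng.random(61), threshold=0.80)
print(f"{selected.size} of 61 atomic activities retained")
```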
2. Related Work
2.1. Handcrafted Feature-Based Techniques
2.2. Codebook-Based Feature Extraction
2.3. Deep Learning-Based Techniques
3. Proposed Method
3.1. Atomic Activities Recognition
3.1.1. Codebook Construction
3.1.2. Codeword Assignment
3.1.3. Model Training
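Sections 3.1.1–3.1.3 name the standard codebook pipeline; the following is a minimal sketch under common assumptions from the codebook literature cited in Section 2.2: subsequences are cut from each sensor stream with a sliding window, clustered with k-means so that the cluster centers become codewords, and each sequence is then encoded as a normalized histogram of codeword assignments. Window size, stride, codebook size, and function names are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def sliding_windows(signal, w=32, stride=16):
    """Extract overlapping subsequences from a 1-D sensor stream."""
    return np.array([signal[i:i + w] for i in range(0, len(signal) - w + 1, stride)])

def build_codebook(train_signals, n_codewords=128, w=32):
    """Codebook construction (Section 3.1.1): cluster subsequences; centers become codewords."""
    windows = np.vstack([sliding_windows(s, w) for s in train_signals])
    return KMeans(n_clusters=n_codewords, n_init=10, random_state=42).fit(windows)

def encode(signal, codebook, w=32):
    """Codeword assignment (Section 3.1.2): normalized histogram of nearest codewords."""
    labels = codebook.predict(sliding_windows(signal, w))
    hist = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()
```

For model training (Section 3.1.3), the per-sensor histograms can be concatenated and fed to the atomic activity classifier; hard assignment is shown here, whereas soft assignment is a common variant in the cited codebook work.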
3.2. Feature Selection for Composite Activities
3.3. Composite Activity Recognition
3.3.1. Support Vector Machine
3.3.2. Random Forest
3.4. Experimental Setup
4. Results and Discussion
4.1. Dataset
- Smartphone’s accelerometer (sp-acc): Provides measurements of acceleration (including gravity) in meters per second squared (m/s²) across the three dimensions x, y, and z;
- Smartphone’s gyroscope (sp-gyro): Provides angular velocity measurements in radians per second (rad/s) across the three dimensions x, y, and z;
- Smartphone’s gravity sensor (sp-grav): Provides the gravity force in meters per second squared (m/s²) across the three dimensions x, y, and z;
- Smartphone’s linear accelerometer (sp-linacc): Provides measurements of acceleration (excluding gravity) in meters per second squared (m/s²) across the three dimensions x, y, and z;
- Smartphone’s magnetometer (sp-mag): Provides the intensity of the Earth’s magnetic field in tesla (T) across the three dimensions x, y, and z, which helps determine the orientation of the phone;
- Smartwatch’s accelerometer (sw-acc): Provides measurements of acceleration (including gravity) in meters per second squared (m/s²) across the three dimensions x, y, and z;
- Smartwatch’s gyroscope (sw-gyro): Provides angular velocity measurements in radians per second (rad/s) across the three dimensions x, y, and z;
- Smart glasses’ accelerometer (sg-acc): Provides measurements of acceleration in meters per second squared (m/s²) across the three dimensions x, y, and z. (A minimal sketch of organizing these streams follows this list.)
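As a rough illustration, the eight tri-axial streams might be organized as follows; the short modality keys mirror the abbreviations above, the column layout (timestamp, x, y, z) follows the raw-data table shown later, and the file names are purely hypothetical.

```python
import pandas as pd

# Eight tri-axial modalities of the CogAge dataset (keys mirror the list above).
MODALITIES = ["sp-acc", "sp-gyro", "sp-grav", "sp-linacc",
              "sp-mag", "sw-acc", "sw-gyro", "sg-acc"]

def load_stream(path):
    """Load one sensor stream as timestamp plus x/y/z columns (layout assumed)."""
    return pd.read_csv(path, names=["timestamp", "x", "y", "z"])

streams = {m: load_stream(f"{m}.csv") for m in MODALITIES}  # hypothetical file names
```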
4.2. Results
4.3. Performance Evaluation
4.4. Performance Comparison with Existing State-of-the-Art Techniques
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Köping, L.; Shirahama, K.; Grzegorzek, M. A general framework for sensor-based human activity recognition. Comput. Biol. Med. 2018, 95, 248–260.
- Civitarese, G.; Sztyler, T.; Riboni, D.; Bettini, C.; Stuckenschmidt, H. POLARIS: Probabilistic and ontological activity recognition in smart-homes. IEEE Trans. Knowl. Data Eng. 2019, 33, 209–223.
- Yang, R.; Wang, B. PACP: A position-independent activity recognition method using smartphone sensors. Information 2016, 7, 72.
- Khan, M.H. Human Activity Analysis in Visual Surveillance and Healthcare; Logos Verlag GmbH: Berlin, Germany, 2018; Volume 45.
- Fan, S.; Jia, Y.; Jia, C. A feature selection and classification method for activity recognition based on an inertial sensing unit. Information 2019, 10, 290.
- Nisar, M.A.; Shirahama, K.; Li, F.; Huang, X.; Grzegorzek, M. Rank pooling approach for wearable sensor-based ADLs recognition. Sensors 2020, 20, 3463.
- Amjad, F.; Khan, M.H.; Nisar, M.A.; Farid, M.S.; Grzegorzek, M. A comparative study of feature selection approaches for human activity recognition using multimodal sensory data. Sensors 2021, 21, 2368.
- Li, F.; Shirahama, K.; Nisar, M.A.; Köping, L.; Grzegorzek, M. Comparison of feature learning methods for human activity recognition using wearable sensors. Sensors 2018, 18, 679.
- Lara, O.D.; Labrador, M.A. A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209.
- Ke, S.R.; Thuc, H.L.U.; Lee, Y.J.; Hwang, J.N.; Yoo, J.H.; Choi, K.H. A review on video-based human activity recognition. Computers 2013, 2, 88–131.
- Rani, V.; Kumar, M.; Singh, B. Handcrafted features for human gait recognition: CASIA-A dataset. In Proceedings of the International Conference on Artificial Intelligence and Data Science, Hyderabad, India, 17–18 December 2021; pp. 77–88.
- Schönberger, J.L.; Hardmeier, H.; Sattler, T.; Pollefeys, M. Comparative evaluation of hand-crafted and learned local features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1482–1491.
- Saba, T.; Mohamed, A.S.; El-Affendi, M.; Amin, J.; Sharif, M. Brain tumor detection using fusion of hand crafted and deep learning features. Cogn. Syst. Res. 2020, 59, 221–230.
- Patel, B.; Srikanthan, S.; Asani, F.; Agu, E. Machine learning prediction of TBI from mobility, gait and balance patterns. In Proceedings of the 2021 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Washington, DC, USA, 16–18 December 2021; pp. 11–22.
- Khan, M.H.; Farid, M.S.; Grzegorzek, M. Spatiotemporal features of human motion for gait recognition. Signal Image Video Process. 2019, 13, 369–377.
- Fatima, R.; Khan, M.H.; Nisar, M.A.; Doniec, R.; Farid, M.S.; Grzegorzek, M. A systematic evaluation of feature encoding techniques for gait analysis using multimodal sensory data. Sensors 2024, 24, 75.
- Zhao, B.; Lu, H.; Chen, S.; Liu, J.; Wu, D. Convolutional neural networks for time series classification. J. Syst. Eng. Electron. 2017, 28, 162–169.
- Tran, L.; Hoang, T.; Nguyen, T.; Kim, H.; Choi, D. Multi-model long short-term memory network for gait recognition using window-based data segment. IEEE Access 2021, 9, 23826–23839.
- Sargano, A.B.; Angelov, P.; Habib, Z. A comprehensive review on handcrafted and learning-based action representation approaches for human activity recognition. Appl. Sci. 2017, 7, 110.
- Dong, M.; Han, J.; He, Y.; Jing, X. HAR-Net: Fusing deep representation and hand-crafted features for human activity recognition. In Proceedings of the International Conference on Signal and Information Processing, Networking and Computers, Ji’nan, China, 23–25 May 2018; pp. 32–40.
- Ferrari, A.; Micucci, D.; Mobilio, M.; Napoletano, P. Hand-crafted features vs residual networks for human activities recognition using accelerometer. In Proceedings of the 2019 IEEE 23rd International Symposium on Consumer Technologies (ISCT), Ancona, Italy, 19–21 June 2019; pp. 153–156.
- Khan, M.A.; Sharif, M.; Akram, T.; Raza, M.; Saba, T.; Rehman, A. Hand-crafted and deep convolutional neural network features fusion and selection strategy: An application to intelligent human action recognition. Appl. Soft Comput. 2020, 87, 105986.
- Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine. In Proceedings of the Ambient Assisted Living and Home Care: 4th International Workshop, IWAAL 2012, Vitoria-Gasteiz, Spain, 3–5 December 2012; pp. 216–223.
- Ronao, C.A.; Cho, S.B. Human activity recognition using smartphone sensors with two-stage continuous hidden Markov models. In Proceedings of the 2014 10th International Conference on Natural Computation (ICNC), Xiamen, China, 19–21 August 2014; pp. 681–686.
- Rana, R.; Kusy, B.; Wall, J.; Hu, W. Novel activity classification and occupancy estimation methods for intelligent HVAC (heating, ventilation and air conditioning) systems. Energy 2015, 93, 245–255.
- Seera, M.; Loo, C.K.; Lim, C.P. A hybrid FMM-CART model for human activity recognition. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 182–187.
- Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. A public domain dataset for human activity recognition using smartphones. In Proceedings of ESANN, Bruges, Belgium, 24–26 April 2013; Volume 3, p. 3.
- Chen, Z.; Zhang, L.; Cao, Z.; Guo, J. Distilling the knowledge from handcrafted features for human activity recognition. IEEE Trans. Ind. Inform. 2018, 14, 4334–4342.
- Abid, M.H.; Nahid, A.A. Two unorthodox aspects in handcrafted-feature extraction for human activity recognition datasets. In Proceedings of the 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), Khulna, Bangladesh, 14–16 September 2021; pp. 1–4.
- Liang, H.; Sun, X.; Sun, Y.; Gao, Y. Text feature extraction based on deep learning: A review. EURASIP J. Wirel. Commun. Netw. 2017, 2017, 211.
- Khan, M.H.; Farid, M.S.; Grzegorzek, M. A comprehensive study on codebook-based feature fusion for gait recognition. Inf. Fusion 2023, 92, 216–230.
- Khan, M.H.; Farid, M.S.; Grzegorzek, M. A generic codebook based approach for gait recognition. Multimed. Tools Appl. 2019, 78, 35689–35712.
- Azmat, U.; Jalal, A. Smartphone inertial sensors for human locomotion activity recognition based on template matching and codebook generation. In Proceedings of the 2021 International Conference on Communication Technologies (ComTech), Rawalpindi, Pakistan, 21–22 September 2021; pp. 109–114.
- Ryu, J.; McFarland, T.; Haas, C.T.; Abdel-Rahman, E. Automatic clustering of proper working postures for phases of movement. Autom. Constr. 2022, 138, 104223.
- Shirahama, K.; Grzegorzek, M. On the generality of codebook approach for sensor-based human activity recognition. Electronics 2017, 6, 44.
- Uddin, M.Z.; Thang, N.D.; Kim, J.T.; Kim, T.S. Human activity recognition using body joint-angle features and hidden Markov model. ETRI J. 2011, 33, 569–579.
- Siddiqui, S.; Khan, M.A.; Bashir, K.; Sharif, M.; Azam, F.; Javed, M.Y. Human action recognition: A construction of codebook by discriminative features selection approach. Int. J. Appl. Pattern Recognit. 2018, 5, 206–228.
- Khan, M.H.; Farid, M.S.; Grzegorzek, M. Person identification using spatiotemporal motion characteristics. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 166–170.
- Gaikwad, N.B.; Tiwari, V.; Keskar, A.; Shivaprakash, N. Efficient FPGA implementation of multilayer perceptron for real-time human activity classification. IEEE Access 2019, 7, 26696–26706.
- Mesquita, C.M.; Valle, C.A.; Pereira, A.C. Dynamic portfolio optimization using a hybrid MLP-HAR approach. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, Australia, 1–4 December 2020; pp. 1075–1082.
- Rustam, F.; Reshi, A.A.; Ashraf, I.; Mehmood, A.; Ullah, S.; Khan, D.M.; Choi, G.S. Sensor-based human activity recognition using deep stacked multilayered perceptron model. IEEE Access 2020, 8, 218898–218910.
- Azmat, U.; Ghadi, Y.Y.; Shloul, T.a.; Alsuhibany, S.A.; Jalal, A.; Park, J. Smartphone sensor-based human locomotion surveillance system using multilayer perceptron. Appl. Sci. 2022, 12, 2550.
- Hassan, M.M.; Uddin, M.Z.; Mohamed, A.; Almogren, A. A robust human activity recognition system using smartphone sensors and deep learning. Future Gener. Comput. Syst. 2018, 81, 307–313.
- Mekruksavanich, S.; Jitpattanakul, A. Deep convolutional neural network with RNNs for complex activity recognition using wrist-worn wearable sensor data. Electronics 2021, 10, 1685.
- Chung, S.; Lim, J.; Noh, K.J.; Kim, G.; Jeong, H. Sensor data acquisition and multimodal sensor fusion for human activity recognition using deep learning. Sensors 2019, 19, 1716.
- Tong, L.; Ma, H.; Lin, Q.; He, J.; Peng, L. A novel deep learning Bi-GRU-I model for real-time human activity recognition using inertial sensors. IEEE Sens. J. 2022, 22, 6164–6174.
- Batool, S.; Khan, M.H.; Farid, M.S. An ensemble deep learning model for human activity analysis using wearable sensory data. Appl. Soft Comput. 2024, 159, 111599.
- Khodabandelou, G.; Moon, H.; Amirat, Y.; Mohammed, S. A fuzzy convolutional attention-based GRU network for human activity recognition. Eng. Appl. Artif. Intell. 2023, 118, 105702.
- Varamin, A.A.; Abbasnejad, E.; Shi, Q.; Ranasinghe, D.C.; Rezatofighi, H. Deep auto-set: A deep auto-encoder-set network for activity recognition using wearables. In Proceedings of the 15th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, New York, NY, USA, 5–7 November 2018; pp. 246–253.
- Malekzadeh, M.; Clegg, R.G.; Haddadi, H. Replacement autoencoder: A privacy-preserving algorithm for sensory data analysis. In Proceedings of the 2018 IEEE/ACM Third International Conference on Internet-of-Things Design and Implementation (IoTDI), Orlando, FL, USA, 17–20 April 2018; pp. 165–176.
- Jia, G.; Lam, H.K.; Liao, J.; Wang, R. Classification of electromyographic hand gesture signals using machine learning techniques. Neurocomputing 2020, 401, 236–248.
- Rubio-Solis, A.; Panoutsos, G.; Beltran-Perez, C.; Martinez-Hernandez, U. A multilayer interval type-2 fuzzy extreme learning machine for the recognition of walking activities and gait events using wearable sensors. Neurocomputing 2020, 389, 42–55.
- Gavrilin, Y.; Khan, A. Across-sensor feature learning for energy-efficient activity recognition on mobile devices. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–7.
- Mohammadian Rad, N.; Van Laarhoven, T.; Furlanello, C.; Marchiori, E. Novelty detection using deep normative modeling for IMU-based abnormal movement monitoring in Parkinson’s disease and autism spectrum disorders. Sensors 2018, 18, 3533.
- Prasath, R.; O’Reilly, P.; Kathirvalavakumar, T. Mining Intelligence and Knowledge Exploration; Springer: Berlin/Heidelberg, Germany, 2014.
- Almaslukh, B.; AlMuhtadi, J.; Artoli, A. An effective deep autoencoder approach for online smartphone-based human activity recognition. Int. J. Comput. Sci. Netw. Secur. 2017, 17, 160–165.
- Mohammed, S.; Tashev, I. Unsupervised deep representation learning to remove motion artifacts in free-mode body sensor networks. In Proceedings of the 2017 IEEE 14th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Eindhoven, The Netherlands, 9–12 May 2017; pp. 183–188.
- Malekzadeh, M.; Clegg, R.G.; Cavallaro, A.; Haddadi, H. Protecting sensory data against sensitive inferences. In Proceedings of the 1st Workshop on Privacy by Design in Distributed Systems, Porto, Portugal, 23–26 April 2018; pp. 1–6.
- Malekzadeh, M.; Clegg, R.G.; Cavallaro, A.; Haddadi, H. Mobile sensor data anonymization. In Proceedings of the International Conference on Internet of Things Design and Implementation, Montreal, QC, Canada, 15–18 April 2019; pp. 49–58.
- Gao, X.; Luo, H.; Wang, Q.; Zhao, F.; Ye, L.; Zhang, Y. A human activity recognition algorithm based on stacking denoising autoencoder and LightGBM. Sensors 2019, 19, 947.
- Bai, L.; Yeung, C.; Efstratiou, C.; Chikomo, M. Motion2Vector: Unsupervised learning in human activity recognition using wrist-sensing data. In Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, London, UK, 9–13 September 2019; pp. 537–542.
- Wahla, S.Q.; Ghani, M.U. Visual fall detection from activities of daily living for assistive living. IEEE Access 2023, 11, 108876–108890.
- Tang, Y.; Zhang, L.; Min, F.; He, J. Multi-scale deep feature learning for human activity recognition using wearable sensors. IEEE Trans. Ind. Electron. 2022, 70, 2106–2116.
- Li, Y.; Yang, G.; Su, Z.; Li, S.; Wang, Y. Human activity recognition based on multienvironment sensor data. Inf. Fusion 2023, 91, 47–63.
- Ordóñez, F.J.; Roggen, D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115.
- Rivera, P.; Valarezo, E.; Choi, M.T.; Kim, T.S. Recognition of human hand activities based on a single wrist IMU using recurrent neural networks. Int. J. Pharma Med. Biol. Sci. 2017, 6, 114–118.
- Aljarrah, A.A.; Ali, A.H. Human activity recognition using PCA and BiLSTM recurrent neural networks. In Proceedings of the 2019 2nd International Conference on Engineering Technology and its Applications (IICETA), Al-Najef, Iraq, 27–28 August 2019; pp. 156–160.
- Hu, Y.; Zhang, X.Q.; Xu, L.; He, F.X.; Tian, Z.; She, W.; Liu, W. Harmonic loss function for sensor-based human activity recognition based on LSTM recurrent neural networks. IEEE Access 2020, 8, 135617–135627.
- Martindale, C.F.; Christlein, V.; Klumpp, P.; Eskofier, B.M. Wearables-based multi-task gait and activity segmentation using recurrent neural networks. Neurocomputing 2021, 432, 250–261.
- Singh, N.K.; Suprabhath, K.S. HAR using bi-directional LSTM with RNN. In Proceedings of the 2021 International Conference on Emerging Techniques in Computational Intelligence (ICETCI), Hyderabad, India, 25–27 August 2021; pp. 153–158.
- Kaya, Y.; Topuz, E.K. Human activity recognition from multiple sensors data using deep CNNs. Multimed. Tools Appl. 2024, 83, 10815–10838.
- Reddy, C.R.; Sreenivasulu, A. Deep learning approach for suspicious activity detection from surveillance video. In Proceedings of the 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA), Bangalore, India, 5–7 March 2020.
- Abuhoureyah, F.S.; Wong, Y.C.; Isira, A.S.B.M. WiFi-based human activity recognition through wall using deep learning. Eng. Appl. Artif. Intell. 2024, 127, 107171.
- Ashfaq, N.; Khan, M.H.; Nisar, M.A. Identification of optimal data augmentation techniques for multimodal time-series sensory data: A framework. Information 2024, 15, 343.
- Sezavar, A.; Atta, R.; Ghanbari, M. DCapsNet: Deep capsule network for human activity and gait recognition with smartphone sensors. Pattern Recognit. 2024, 147, 110054.
- Lalwani, P.; Ramasamy, G. Human activity recognition using a multi-branched CNN-BiLSTM-BiGRU model. Appl. Soft Comput. 2024, 154, 111344.
- Wei, X.; Wang, Z. TCN-attention-HAR: Human activity recognition based on attention mechanism time convolutional network. Sci. Rep. 2024, 14, 7414.
- Koutroumbas, K.; Theodoridis, S. Pattern Recognition; Academic Press: Cambridge, MA, USA, 2008.
- Cai, J.; Luo, J.; Wang, S.; Yang, S. Feature selection in machine learning: A new perspective. Neurocomputing 2018, 300, 70–79.
- Wang, H.; Kläser, A.; Schmid, C.; Liu, C.L. Dense trajectories and motion boundary descriptors for action recognition. Int. J. Comput. Vis. 2013, 103, 60–79.
- Durgesh, K.S.; Lekha, B. Data classification using support vector machine. J. Theor. Appl. Inf. Technol. 2010, 12, 1–7.
- Nurhanim, K.; Elamvazuthi, I.; Izhar, L.; Ganesan, T. Classification of human activity based on smartphone inertial sensor using support vector machine. In Proceedings of the 2017 IEEE 3rd International Symposium in Robotics and Manufacturing Automation (ROMA), Kuala Lumpur, Malaysia, 19–21 September 2017; pp. 1–5.
- Khatun, M.A.; Yousuf, M.A.; Ahmed, S.; Uddin, M.Z.; Alyami, S.A.; Al-Ashhab, S.; Akhdar, H.F.; Khan, A.; Azad, A.; Moni, M.A. Deep CNN-LSTM with self-attention model for human activity recognition using wearable sensor. IEEE J. Transl. Eng. Health Med. 2022, 10, 1–16.
- Mohammed, A.; Kora, R. A comprehensive review on ensemble deep learning: Opportunities and challenges. J. King Saud Univ.-Comput. Inf. Sci. 2023, 35, 757–774.
- Khan, M.H.; Farid, M.S.; Grzegorzek, M. Vision-based approaches towards person identification using gait. Comput. Sci. Rev. 2021, 42, 100432.
- Khan, M.H.; Farid, M.S.; Grzegorzek, M. A non-linear view transformations model for cross-view gait recognition. Neurocomputing 2020, 402, 100–111.
- Snoek, C.G.; Worring, M.; Smeulders, A.W. Early versus late fusion in semantic video analysis. In Proceedings of the 13th Annual ACM International Conference on Multimedia, Singapore, 6–11 November 2005; pp. 399–402.
- Vapnik, V.N. An overview of statistical learning theory. IEEE Trans. Neural Netw. 1999, 10, 988–999.
- Zhang, J.; Marszałek, M.; Lazebnik, S.; Schmid, C. Local features and kernels for classification of texture and object categories: A comprehensive study. Int. J. Comput. Vis. 2007, 73, 213–238.
- Shirahama, K.; Köping, L.; Grzegorzek, M. Codebook approach for sensor-based human activity recognition. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, Heidelberg, Germany, 12–16 September 2016; pp. 197–200.
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
- Cutler, A. Random Forests for Regression and Classification; Utah State University: Ovronnaz, Switzerland, 2010.
- Augustinov, G.; Nisar, M.A.; Li, F.; Tabatabaei, A.; Grzegorzek, M.; Sohrabi, K.; Fudickar, S. Transformer-based recognition of activities of daily living from wearable sensor data. In Proceedings of the 7th International Workshop on Sensor-Based Activity Recognition and Artificial Intelligence, Rostock, Germany, 19–20 September 2022; pp. 1–8.
- Nisar, M.A.; Shirahama, K.; Irshad, M.T.; Huang, X.; Grzegorzek, M. A hierarchical multitask learning approach for the recognition of activities of daily living using data from wearable sensors. Sensors 2023, 23, 8234.
Feature | Formula
---|---
Minimum | $\min_i x_i$
Maximum | $\max_i x_i$
Mean | $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$
Median | $Me = x_{((N+1)/2)}$ for odd $N$; the mean of the two middle values of the sorted window for even $N$
Standard Deviation | $\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2}$
Variance | $\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2$
Skewness | $\frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_i - \bar{x}}{\sigma}\right)^3$
Kurtosis | $\frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_i - \bar{x}}{\sigma}\right)^4$
Root Mean Square | $\sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}$
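A minimal sketch of computing these nine statistics over one window of samples with NumPy and SciPy; whether the paper uses biased or unbiased estimators (e.g., for skewness and kurtosis) is an assumption here.

```python
import numpy as np
from scipy import stats

def window_features(x):
    """Nine time-domain statistics for a 1-D window of sensor samples."""
    x = np.asarray(x, dtype=float)
    return np.array([
        x.min(), x.max(), x.mean(), np.median(x),
        x.std(), x.var(),
        stats.skew(x), stats.kurtosis(x),   # kurtosis() returns the excess form
        np.sqrt(np.mean(x ** 2)),           # root mean square
    ])
```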
Composite Activity Name | Atomic Activities Count at 90% | Atomic Activities Count at 80% | Atomic Activities Count at 70%
---|---|---|---
Brushing Teeth | 24 | 14 | 8
Cleaning Room | 20 | 11 | 8
Handling Medication | 25 | 15 | 9
Preparing Food | 26 | 15 | 9
Styling Hair | 17 | 9 | 6
Using Phone | 18 | 10 | 7
Washing Hands | 16 | 9 | 6
Support Vector Machine | Random Forest
---|---
Cross-validation: 5-fold | Cross-validation: 5-fold
Random state: 42 | Random state: 42
Kernel: linear | Number of trees: 200
 | Maximum depth: 10
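A minimal scikit-learn sketch of the two classifiers configured as in the table above; the feature matrix `X` and label vector `y` are placeholders for the composite-activity feature representations.

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

svm = SVC(kernel="linear", random_state=42)
rf = RandomForestClassifier(n_estimators=200, max_depth=10, random_state=42)

# 5-fold cross-validated accuracy, as in the experimental setup.
# X, y = ...  # composite-activity feature vectors and labels
# print(cross_val_score(svm, X, y, cv=5).mean())
# print(cross_val_score(rf, X, y, cv=5).mean())
```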
Sensor (Unit) | Timestamp | x-Axis | y-Axis | z-Axis
---|---|---|---|---
Smartphone accelerometer (m/s²) | 1522061483421 | 0.5344391 | 3.7631073 | 9.24202
 | 1522061483425 | 0.54881287 | 3.7607117 | 9.191742
 | 1522061483430 | 0.57754517 | 3.7559204 | 9.160614
Smartphone gyroscope (rad/s) | 1521205075060 | 0.89419556 | −0.5268707 | −0.1633606
 | 1521205075065 | 1.170105 | 0.0015106201 | −0.18891907
 | 1521205075070 | 1.074234 | −0.015533447 | −0.24751282
Smartphone magnetometer (tesla) | 1521205075001 | −78.93925 | −21.0886 | −95.36343
 | 1521205075011 | −79.0144 | −21.200943 | −95.100975
 | 1521205075021 | −78.97682 | −21.575928 | −95.0634
Smartphone gravity (m/s²) | 1521205075001 | 5.426985 | 7.1766686 | 3.9004674
 | 1521205075006 | 5.4208546 | 7.198336 | 3.8689373
 | 1521205075011 | 5.416293 | 7.219007 | 3.836677
Dataset | CogAge
---|---
No. of Atomic Activities | 61
No. of Composite Activities | 7
No. of Participants for Atomic Activities | 8
No. of Participants for Composite Activities | 6
Instances of Atomic Activities | 9700
Instances of Composite Activities | 890
Devices and Their Placement | Smart glasses (eyes), smartwatch (wrist), smartphone (waist)
Sensors (Unit) | 8 in total: smartphone accelerometer (m/s²), smartphone gyroscope (rad/s), smartphone gravity (m/s²), smartphone linear accelerometer (m/s²), smartphone magnetometer (tesla), smartwatch accelerometer (m/s²), smartwatch gyroscope (rad/s), smart glasses accelerometer (m/s²)
Composite Activity Feature Representation | 61
Sr. No | Activity Name | No. of Instances (Example Points)
---|---|---|
1. | Brushing Teeth | 128 |
2. | Cleaning Room | 129 |
3. | Handling Medication | 137 |
4. | Preparing Food | 113 |
5. | Styling Hair | 127 |
6. | Using Phone | 128 |
7. | Washing Hands | 128 |
The 61 atomic activities of the CogAge dataset:

Bending, Lying, Sitting, Squatting, Standing, Walking, Bring, CleanFloor, CleanSurface, CloseBigBox, CloseDoor, CloseDrawer, CloseLidByRotate, CloseOtherLid, CloseSmallBox, Drink, DryOffHand, DryOffHandByShake, EatSmall, Gargle, GettingUp, Hang, LyingDown, OpenBag, OpenBigBox, OpenDoor, OpenDrawer, OpenLidByRotate, OpenOtherLid, OpenSmallBox, PlugIn, PressByGrasp, PressFromTop, PressSwitch, PutFromBottle, PutFromTapWater, PutOnFloor, PutHighPosition, Read, Rotate, RubHands, ScoopPut, SittingDown, SquattingDown, StandingUp, StandUpFromSquatting, TakeFromFloor, TakeFromHighPosition, TakeOffJacket, TakeOut, TalkByTelephone, ThrowOut, ThrowOutWater, TouchSmartPhoneScreen, Type, Unhang, Unplug, WearJacket, Write, CloseTapWater, OpenTapWater.
Sr. No | Composite Activity | No. of Selected Features |
---|---|---|
1. | Brushing Teeth (BT) | 24 |
2. | Cleaning Room (CR) | 20 |
3. | Handling Medication (HM) | 25 |
4. | Preparing Food (PF) | 26 |
5. | Styling Hair (SH) | 17 |
6. | Using Phone (UP) | 18 |
7. | Washing Hands (WH) | 16 |
Atomic Score (%) | Selected Features | SVM Accuracy (%) | RF Accuracy (%)
---|---|---|---
65 | 19 | 84.4 | 95.48
70 | 22 | 87.3 | 95.25
75 | 28 | 86.0 | 95.93
80 | 35 | 90.5 | 96.38
85 | 39 | 92.55 | 95.03
90 | 48 | 94.1 | 96.61
100 | 61 | 93.9 | 96.16
Methods | Year | Accuracy (%) |
---|---|---|
Rank pooling fused with max and average pooling [6] | 2020 | 68.65 |
Handcrafted features [7] | 2021 | 79 |
Transformers [93] | 2022 | 73.36 |
Hierarchical multi-task learning [94] | 2023 | 82.31
Multi-branch hybrid Conv-LSTM network [74] | 2024 | 82.16 |
Proposed method (using SVM) | 2024 | 94.1 |
Proposed method (using RF) | 2024 | 96.61 |
Method | Classification Time (ms) |
---|---|
Ashfaq et al. [74] | 5.5
Proposed method (using SVM) | 3.3 |
Proposed method (using RF) | 1.1 |