Sudden Fall Detection of Human Body Using Transformer Model
Abstract
1. Introduction
2. Literature Review
2.1. Camera-Based Related Systems
2.2. Wearable Sensor-Based Systems
3. Proposed Methodology
3.1. Data Collection and Pre-Processing
3.2. Selection of Key Points and Calculation of Key Point Change Speed
4. Experiments and Results
4.1. Definition of Normalization, Input Data and Output Data
4.2. Transformer Encoder-Decoder Architecture with Attention Mechanism
4.3. Model Architecture and Training
5. Results and Analysis
5.1. Performance Measure
5.2. Experimental Results
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Posture Type | Number of Video Clips | Posture Description
---|---|---
Falling | 100 | Falling posture
Stand_up | 100 | Transitioning from lying down or sitting to standing
Standing | 100 | Standing posture
Sit_down | 100 | Sitting posture
Lie_down | 100 | Transitioning from standing to lying down
Sleeping | 100 | Sleeping posture with little movement
Lie | 100 | Lying down posture
Index | Frame | 0x (Nose) | 0y (Nose) | 0z (Nose) | 1x (Left_Eye_Inner) | 1y (Left_Eye_Inner) | … | Central_Body_Speed | Left_Knee_Speed | Right_Knee_Speed | Y
---|---|---|---|---|---|---|---|---|---|---|---
1 | 1 | 0.379765 | 0.553287 | 0.208805 | 0.381411 | 0.55061 | … | 0 | 0 | 0 | falling
 | 2 | 0.362753 | 0.185408 | −0.32835 | 0.364276 | 0.17145 | … | 0.11449 | 0.05578 | 0.083724 | 
 | 3 | 0.362748 | 0.185095 | −0.30977 | 0.364278 | 0.171292 | … | 0.033503 | 0.020768 | 0.015855 | 
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮
 | 9899 | 0.249462 | 0.940241 | −0.25696 | 0.239478 | 0.939346 | … | 0.001084 | 0.001716 | 0.006252 | 
 | 9900 | 0.253466 | 0.935217 | −0.20723 | 0.242784 | 0.934585 | … | 0.003875 | 0.003117 | 0.000977 | 
100 | 9901 | 0.251388 | 0.93537 | −0.21042 | 0.243521 | 0.931843 | … | 0.001172 | 0.001893 | 0.002287 | 
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮
1 | 1 | 0.494136 | 0.256956 | −0.41847 | 0.497415 | 0.239821 | … | 0.002505 | 0.018862 | 0.013497 | standing
 | 2 | 0.490758 | 0.246923 | −0.40555 | 0.493804 | 0.230732 | … | 0.004472 | 0.003269 | 0.002368 | 
 | 3 | 0.488241 | 0.242574 | −0.40686 | 0.490833 | 0.227228 | … | 0.007852 | 0.009989 | 0.005957 | 
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮
 | 9899 | 0.507184 | 0.241033 | −0.35604 | 0.512011 | 0.230799 | … | 0.005119 | 0.031786 | 0.006606 | 
 | 9900 | 0.505589 | 0.240715 | −0.34864 | 0.510574 | 0.230771 | … | 0.002204 | 0.013514 | 0.006841 | 
100 | 9901 | 0.496076 | 0.237336 | −0.33496 | 0.501637 | 0.227336 | … | 0.001037 | 0.006635 | 0.004 | 
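The speed columns in the table above (e.g., Central_Body_Speed, Left_Knee_Speed) record how fast a key point moves between consecutive frames. The paper's exact formulation is not reproduced here; a minimal sketch, assuming speed is the per-frame Euclidean displacement of a landmark's (x, y, z) coordinates (the hypothetical function name `keypoint_speed` is ours), would be:

```python
import numpy as np

def keypoint_speed(coords: np.ndarray) -> np.ndarray:
    """Frame-to-frame Euclidean displacement of one landmark.

    coords: array of shape (n_frames, 3), one (x, y, z) row per frame.
    Returns an array of shape (n_frames,); the first frame is assigned
    speed 0, matching the zero entries in the first row of the table.
    """
    deltas = np.diff(coords, axis=0)          # (n_frames - 1, 3) displacements
    speeds = np.linalg.norm(deltas, axis=1)   # Euclidean distance per step
    return np.concatenate([[0.0], speeds])

# Example: a landmark moving 0.1 along x each frame
coords = np.array([[0.0, 0.5, 0.2],
                   [0.1, 0.5, 0.2],
                   [0.2, 0.5, 0.2]])
print(keypoint_speed(coords))  # [0.  0.1 0.1]
```

With landmarks from MediaPipe Pose (as cited by the authors), `coords` would be one landmark's coordinates stacked across the frames of a clip.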
Index | Frame | 0x (Nose) | 0y (Nose) | 0z (Nose) | 1x (Left_Eye_Inner) | 1y (Left_Eye_Inner) | … | Central_Body_Speed | Left_Knee_Speed | Right_Knee_Speed | Y
---|---|---|---|---|---|---|---|---|---|---|---
1 | 1 | 0.374411 | 0.569179 | 0.685071 | 0.372001 | 0.577581 | … | 0 | 0 | 0 | falling
 | 2 | 0.353146 | 0.244493 | 0.346481 | 0.350831 | 0.244334 | … | 0.280930 | 0.180228 | 0.233038 | 
 | 3 | 0.353140 | 0.244216 | 0.358191 | 0.350833 | 0.244196 | … | 0.082207 | 0.067104 | 0.044132 | 
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮
 | 9899 | 0.211527 | 0.910703 | 0.391483 | 0.196643 | 0.919246 | … | 0.002661 | 0.005545 | 0.017402 | 
 | 9900 | 0.216533 | 0.906268 | 0.422828 | 0.200727 | 0.915062 | … | 0.009507 | 0.01007 | 0.002718 | 
100 | 9901 | 0.213935 | 0.906404 | 0.42082 | 0.201638 | 0.912651 | … | 0.002876 | 0.006116 | 0.006367 | 
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮
1 | 1 | 0.517380 | 0.30764 | 0.289674 | 0.515324 | 0.304426 | … | 0 | 0 | 0 | standing
 | 2 | 0.513157 | 0.298785 | 0.297817 | 0.510864 | 0.296438 | … | 0.006145 | 0.060945 | 0.037568 | 
 | 3 | 0.510012 | 0.294947 | 0.296995 | 0.507193 | 0.293358 | … | 0.010973 | 0.010563 | 0.006591 | 
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮
 | 9899 | 0.533691 | 0.293586 | 0.329027 | 0.533359 | 0.296497 | … | 0.012561 | 0.102704 | 0.018387 | 
 | 9900 | 0.531697 | 0.293306 | 0.33369 | 0.531582 | 0.296472 | … | 0.005409 | 0.043665 | 0.019042 | 
100 | 9901 | 0.519806 | 0.290323 | 0.342313 | 0.520541 | 0.293453 | … | 0.002543 | 0.021437 | 0.011134 | 
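The table above shows the data after the normalization defined in Section 4.1: unlike the raw table, the z coordinates are no longer negative and values fall in [0, 1]. The authors' exact scheme is not reproduced here; a common choice consistent with that range is column-wise min-max scaling, sketched below (the function name `min_max_normalize` is ours):

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Scale each column of x independently to the range [0, 1]."""
    mins = x.min(axis=0)
    maxs = x.max(axis=0)
    return (x - mins) / (maxs - mins)  # assumes each column is non-constant

# Toy example: two coordinate columns (e.g., 0x and 0z) over three frames
data = np.array([[0.379765, -0.32835],
                 [0.362753, -0.30977],
                 [0.249462,  0.20880]])
scaled = min_max_normalize(data)
print(scaled)  # every column now spans exactly [0, 1]
```

In practice the scaling statistics would be computed on the training split only and reused for validation and test data, to avoid leakage.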
Model | Optimizer | Learning Rate | Units | Dropout Rate | Batch Size | F1-Score | Precision
---|---|---|---|---|---|---|---
LSTM1 | Adam | 0.01 | 64 | 0.2 | 32 | 0.922 | 0.927
LSTM2 | SGD | 0.01 | 64 | 0.3 | 32 | 0.916 | 0.921
LSTM3 | AdamW | 0.01 | 64 | 0.4 | 64 | 0.955 | 0.958
GRU1 | Adam | 0.01 | 64 | 0.2 | 32 | 0.946 | 0.949
GRU2 | SGD | 0.01 | 64 | 0.3 | 32 | 0.657 | 0.668
GRU3 | AdamW | 0.01 | 128 | 0.4 | 64 | 0.873 | 0.903

Model | Optimizer | Layers | Dropout Rate | Activation Function | Batch Size | F1-Score | Precision
---|---|---|---|---|---|---|---
TRANSFORMER 1 | Adam | t1, t2, t3, t4, t5 | 0.1 | ReLU | 32 | 0.918 | 0.921
TRANSFORMER 2 | Adam | t1, t2, t3, t4, t5 | 0.3 | ReLU | 32 | 0.964 | 0.962
TRANSFORMER 3 | AdamW | t1, t2, t3, t4, t5 | 0.4 | ReLU | 64 | 0.974 | 0.976
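The transformer variants compared above all rest on the attention mechanism described in Section 4.2. As a minimal illustration (a generic sketch of scaled dot-product attention, not the authors' full encoder-decoder), treating each row of `q`, `k`, `v` as one frame's feature vector:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)        # (seq_q, seq_k)
    # Numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))  # 4 frames, 8-dim features per frame
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape)         # (4, 8): one attended vector per frame
print(w.sum(axis=-1))    # each row of attention weights sums to 1
```

In the full model this operation runs per head, with learned projections producing Q, K, and V from the normalized key point sequences.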
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kibet, D.; So, M.S.; Kang, H.; Han, Y.; Shin, J.-H. Sudden Fall Detection of Human Body Using Transformer Model. Sensors 2024, 24, 8051. https://doi.org/10.3390/s24248051