Manta Ray Foraging Optimization with Transfer Learning Driven Facial Emotion Recognition
Abstract
1. Introduction
- An intelligent TLDFER-ADAS technique, encompassing preprocessing, Xception-based feature extraction, QDNN classification, and MRFO-based parameter tuning, is presented for facial emotion classification;
- To the best of our knowledge, the TLDFER-ADAS technique presented here has not previously been reported in the literature;
- Tuning the parameters of the QDNN model with the MRFO algorithm helps achieve significant classification performance;
- The emotion recognition performance was validated on two facial expression datasets, FER-2013 and CK+.
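The MRFO-based parameter-tuning step named in the contributions above can be illustrated with a minimal NumPy sketch. It follows the standard chain/cyclone/somersault foraging updates of the Manta Ray Foraging Optimization algorithm, but the objective here (a toy sphere function standing in for the QDNN validation loss), the bounds, and the population settings are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def sphere(x):
    # Toy objective standing in for the QDNN validation loss (hypothetical).
    return float(np.sum(x ** 2))

def mrfo(obj, dim=2, pop=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Simplified Manta Ray Foraging Optimization (minimization)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))          # manta ray positions
    fit = np.array([obj(x) for x in X])
    best = X[fit.argmin()].copy()
    best_fit = float(fit.min())
    for t in range(1, iters + 1):
        for i in range(pop):
            r = rng.uniform(1e-12, 1.0)               # avoid log(0)
            if rng.random() < 0.5:
                # Cyclone foraging: spiral around a reference point.
                r1 = rng.random()
                beta = 2.0 * np.exp(r1 * (iters - t + 1) / iters) \
                       * np.sin(2.0 * np.pi * r1)
                if t / iters < rng.random():
                    ref = rng.uniform(lb, ub, dim)    # explore: random point
                else:
                    ref = best                        # exploit: best so far
                prev = X[i - 1] if i > 0 else ref
                X[i] = ref + r * (prev - X[i]) + beta * (ref - X[i])
            else:
                # Chain foraging: follow the ray ahead and the best plankton.
                alpha = 2.0 * r * np.sqrt(abs(np.log(r)))
                prev = X[i - 1] if i > 0 else best
                X[i] = X[i] + r * (prev - X[i]) + alpha * (best - X[i])
            X[i] = np.clip(X[i], lb, ub)
            f = obj(X[i])
            if f < best_fit:
                best_fit, best = f, X[i].copy()
        # Somersault foraging: flip around the best position found so far.
        S = 2.0  # somersault factor
        for i in range(pop):
            r2, r3 = rng.random(), rng.random()
            X[i] = np.clip(X[i] + S * (r2 * best - r3 * X[i]), lb, ub)
            f = obj(X[i])
            if f < best_fit:
                best_fit, best = f, X[i].copy()
    return best, best_fit
```

In the paper's setting, `obj` would evaluate the QDNN on held-out data for a candidate hyperparameter vector; here the sphere function merely demonstrates that the three foraging phases drive the population toward an optimum.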
2. Related Works
3. The Proposed Model
3.1. Contrast Enhancement
3.2. Xception Based Feature Extraction
3.3. Driver Emotion Recognition
4. Results and Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Class | FER-2013 | CK+ (Last Frame) |
---|---|---|
Angry | 4593 | 45 |
Disgust | 547 | 59 |
Fear | 5121 | 25 |
Happy | 8989 | 69 |
Sad | 6077 | 28 |
Surprise | 4002 | 83 |
Neutral | 6198 | 327 |
Total Number of Samples | 35,527 | 636 |
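The results tables report a 70%/30% training/testing partition of these samples. Such a split is typically drawn per class so that the distribution shown above is preserved; the following small NumPy sketch of a stratified split is an illustration (the function name, seed, and rounding rule are assumptions, not the authors' procedure).

```python
import numpy as np

def stratified_split(labels, train_frac=0.7, seed=0):
    """Split sample indices per class, preserving the class distribution."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        # Shuffle this class's indices, then take the first 70% for training.
        idx = rng.permutation(np.flatnonzero(labels == c))
        k = int(round(train_frac * len(idx)))
        train_idx.extend(idx[:k])
        test_idx.extend(idx[k:])
    return np.array(train_idx), np.array(test_idx)
```

For example, a label vector with 10 samples of class 0 and 20 of class 1 yields 7 + 14 = 21 training indices and 3 + 6 = 9 testing indices.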
FER-2013 Dataset

Class | Accuracy | Precision | Recall | F-Score | AUC Score |
---|---|---|---|---|---|
Entire Dataset | | | | | |
Angry | 99.52 | 98.57 | 97.69 | 98.13 | 98.74 |
Disgust | 99.62 | 84.80 | 91.77 | 88.15 | 95.76 |
Fear | 99.23 | 96.10 | 98.69 | 97.38 | 99.01 |
Happy | 99.29 | 98.79 | 98.39 | 98.59 | 98.99 |
Sad | 98.97 | 98.84 | 95.11 | 96.94 | 97.44 |
Surprise | 99.26 | 97.03 | 96.40 | 96.72 | 98.01 |
Neutral | 99.16 | 96.46 | 98.79 | 97.61 | 99.01 |
Average | 99.29 | 95.80 | 96.69 | 96.22 | 98.14 |
Training Phase (70%) | | | | | |
Angry | 99.52 | 98.30 | 98.03 | 98.17 | 98.89 |
Disgust | 99.61 | 84.17 | 91.88 | 87.86 | 95.81 |
Fear | 99.24 | 96.18 | 98.69 | 97.42 | 99.02 |
Happy | 99.31 | 98.93 | 98.35 | 98.64 | 99.00 |
Sad | 98.97 | 98.80 | 95.15 | 96.94 | 97.46 |
Surprise | 99.23 | 97.16 | 95.94 | 96.55 | 97.79 |
Neutral | 99.12 | 96.27 | 98.72 | 97.48 | 98.96 |
Average | 99.29 | 95.69 | 96.68 | 96.15 | 98.13 |
Testing Phase (30%) | | | | | |
Angry | 99.51 | 99.24 | 96.88 | 98.04 | 98.38 |
Disgust | 99.64 | 86.29 | 91.52 | 88.82 | 95.64 |
Fear | 99.21 | 95.91 | 98.69 | 97.28 | 98.99 |
Happy | 99.23 | 98.47 | 98.47 | 98.47 | 98.98 |
Sad | 98.97 | 98.92 | 95.03 | 96.94 | 97.41 |
Surprise | 99.33 | 96.74 | 97.45 | 97.09 | 98.51 |
Neutral | 99.24 | 96.87 | 98.95 | 97.90 | 99.13 |
Average | 99.31 | 96.06 | 96.71 | 96.36 | 98.15 |
CK+ Dataset

Class | Accuracy | Precision | Recall | F-Score | AUC Score |
---|---|---|---|---|---|
Entire Dataset | | | | | |
Angry | 99.06 | 91.49 | 95.56 | 93.48 | 97.44 |
Disgust | 99.69 | 100.00 | 96.61 | 98.28 | 98.31 |
Fear | 99.53 | 95.83 | 92.00 | 93.88 | 95.92 |
Happy | 98.58 | 96.88 | 89.86 | 93.23 | 94.75 |
Sad | 99.37 | 96.15 | 89.29 | 92.59 | 94.56 |
Surprise | 99.69 | 98.80 | 98.80 | 98.80 | 99.31 |
Neutral | 98.43 | 97.31 | 99.69 | 98.49 | 98.39 |
Average | 99.19 | 96.64 | 94.54 | 95.53 | 96.95 |
Training Phase (70%) | | | | | |
Angry | 99.33 | 92.59 | 96.15 | 94.34 | 97.84 |
Disgust | 99.78 | 100.00 | 97.73 | 98.85 | 98.86 |
Fear | 99.33 | 93.33 | 87.50 | 90.32 | 93.63 |
Happy | 99.10 | 97.78 | 93.62 | 95.65 | 96.68 |
Sad | 99.33 | 95.24 | 90.91 | 93.02 | 95.34 |
Surprise | 99.78 | 100.00 | 98.28 | 99.13 | 99.14 |
Neutral | 98.43 | 97.47 | 99.57 | 98.51 | 98.38 |
Average | 99.29 | 96.63 | 94.82 | 95.69 | 97.12 |
Testing Phase (30%) | | | | | |
Angry | 98.43 | 90.00 | 94.74 | 92.31 | 96.79 |
Disgust | 99.48 | 100.00 | 93.33 | 96.55 | 96.67 |
Fear | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
Happy | 97.38 | 94.74 | 81.82 | 87.80 | 90.61 |
Sad | 99.48 | 100.00 | 83.33 | 90.91 | 91.67 |
Surprise | 99.48 | 96.15 | 100.00 | 98.04 | 99.70 |
Neutral | 98.43 | 96.94 | 100.00 | 98.45 | 98.44 |
Average | 98.95 | 96.83 | 93.32 | 94.87 | 96.27 |
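The per-class scores in the tables above follow the standard one-vs-rest definitions of accuracy, precision, recall, and F-score. A minimal sketch of how such percentages are computed from label vectors is given below (the function name is illustrative; AUC is omitted because it requires predicted class probabilities rather than hard labels).

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes):
    """One-vs-rest accuracy/precision/recall/F-score per class, in percent."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    out = {}
    for c in range(n_classes):
        # Confusion-matrix counts for class c against all other classes.
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        tn = np.sum((y_pred != c) & (y_true != c))
        acc = (tp + tn) / len(y_true)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[c] = {"accuracy": 100 * acc, "precision": 100 * prec,
                  "recall": 100 * rec, "f_score": 100 * f1}
    return out
```

Note that one-vs-rest accuracy counts true negatives, which is why the per-class accuracy figures in the tables sit well above the precision and recall for rare classes such as Disgust.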
Accuracy (%)

Methods | FER-2013 | CK+ (Last Frame) |
---|---|---|
TLDFER-ADAS | 99.31 | 99.29 |
DNN | 95.65 | 96.03 |
Asm-SVM | 97.91 | 94.29 |
PGC | 96.39 | 95.81 |
FPD-NN | 97.03 | 96.22 |
Improved FRCNN | 94.78 | 94.29 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Mustafa Hilal, A.; Elkamchouchi, D.H.; Alotaibi, S.S.; Maray, M.; Othman, M.; Abdelmageed, A.A.; Zamani, A.S.; Eldesouki, M.I. Manta Ray Foraging Optimization with Transfer Learning Driven Facial Emotion Recognition. Sustainability 2022, 14, 14308. https://doi.org/10.3390/su142114308