Masked Face Emotion Recognition Based on Facial Landmarks and Deep Learning Approaches for Visually Impaired People
Abstract
1. Introduction
- The facial emotion recognition component of the smart glasses design was implemented to help BVI people understand and communicate with the people around them. Existing smart glasses designs offer no facial emotion recognition method for low-light, noisy environments; our system informs users about their immediate surroundings through real-time audio output [22];
- We used a low-light image enhancement technique to reduce misclassification in scenarios where the upper part of the face is too dark or the contrast is low (a minimal enhancement sketch follows this list);
- To recognize facial emotion, the facial-landmark modality employs the MediaPipe face mesh method [26] (see the landmark-extraction sketch below). The results indicate a dual capability: the model can identify emotional states in both masked and unmasked faces;
- We created a CNN model with feature extraction, fully connected, and softmax classification layers, adopting the Mish activation function in each convolutional layer (a minimal model sketch is given below). The use of Mish is a notable design choice with the potential to improve classification accuracy.
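To make the enhancement step concrete, the sketch below applies CLAHE to the luminance channel as a simple stand-in for a learned low-light enhancer such as TBEFN [23]. This is not the authors' enhancement network, and the input file name is hypothetical.

```python
# Minimal low-light enhancement sketch (stand-in for a learned model).
import cv2

def enhance_low_light(bgr_image):
    """Boost local contrast in dark images via CLAHE on the L channel."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)  # equalize only the luminance channel
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

frame = cv2.imread("dark_face.jpg")  # hypothetical input path
enhanced = enhance_low_light(frame)
```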
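The landmark stage can be sketched with MediaPipe's Python API. The eye and eyebrow indices below are collected from MediaPipe's published connection constants; the exact upper-face landmark subset used in the paper is not reproduced here, and the input path is hypothetical.

```python
# Sketch: extracting upper-face (visible) landmarks with MediaPipe Face Mesh.
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

# Collect the landmark indices belonging to the eyes and eyebrows.
upper_face_ids = set()
for start, end in (mp_face_mesh.FACEMESH_LEFT_EYE | mp_face_mesh.FACEMESH_RIGHT_EYE |
                   mp_face_mesh.FACEMESH_LEFT_EYEBROW | mp_face_mesh.FACEMESH_RIGHT_EYEBROW):
    upper_face_ids.update((start, end))

with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
    image = cv2.imread("masked_face.jpg")  # hypothetical input path
    results = mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        h, w = image.shape[:2]
        landmarks = results.multi_face_landmarks[0].landmark
        # Pixel coordinates of the upper-face landmarks only.
        upper_points = [(int(landmarks[i].x * w), int(landmarks[i].y * h))
                        for i in sorted(upper_face_ids)]
```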
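A minimal sketch of such a classifier follows, assuming 48 × 48 single-channel inputs; the layer widths and depths are illustrative and do not reproduce the paper's exact architecture. Mish, defined as x · tanh(softplus(x)), replaces ReLU after each convolution, with softmax applied at the output.

```python
# Illustrative CNN with Mish activations after each convolution
# and a softmax classification head (not the paper's exact model).
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.Mish(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.Mish(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.Mish(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.Mish(),  # 48 -> 24 -> 12 -> 6
            nn.Linear(256, num_classes),             # logits; softmax below
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = EmotionCNN()
probs = torch.softmax(model(torch.randn(1, 1, 48, 48)), dim=1)
```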
2. Related Works
2.1. Upper and Lower Parts of The Face
2.2. Facial Landmarks
3. Materials and Methods
3.1. Datasets for Facial Emotion Recognition
3.2. Proposed Method for Facial Emotion Recognition
3.2.1. Low-Contrast Image Enhancement Model
3.2.2. Recognizing Emotions from Masked Facial Images
3.2.3. Generating and Detecting Synthetic Masked Face
3.2.4. Infinity Shape Creation
3.2.5. Normalizing Infinity Shape
3.2.6. Landmark Detection
3.2.7. Feature Extraction
3.2.8. Emotion Classification
4. Experimental Results and Analysis
4.1. Qualitative Evaluation
4.2. Quantitative Evaluation Using AffectNet Dataset
4.3. Evaluation Based on Confusion Matrix
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Burger, D. Accessibility of brainstorming sessions for blind people. In LNCS, Proceedings of the ICCHP, Paris, France, 9–11 July 2014; Miesenberger, K., Fels, D., Archambault, D., Penaz, P., Zagler, W., Eds.; Springer: Cham, Switzerland, 2014; Volume 8547, pp. 237–244.
- Van Kleef, G.A. How emotions regulate social life: The emotions as social information (EASI) model. Curr. Dir. Psychol. Sci. 2009, 18, 184–188.
- Hess, U. Who to whom and why: The social nature of emotional mimicry. Psychophysiology 2020, 58, e13675.
- Mukhamadiyev, A.; Khujayarov, I.; Djuraev, O.; Cho, J. Automatic Speech Recognition Method Based on Deep Learning Approaches for Uzbek Language. Sensors 2022, 22, 3683.
- Keltner, D.; Sauter, D.; Tracy, J.; Cowen, A. Emotional Expression: Advances in Basic Emotion Theory. J. Nonverbal Behav. 2019, 43, 133–160.
- Mukhiddinov, M.; Jeong, R.-G.; Cho, J. Saliency Cuts: Salient Region Extraction based on Local Adaptive Thresholding for Image Information Recognition of the Visually Impaired. Int. Arab. J. Inf. Technol. 2020, 17, 713–720.
- Susskind, J.M.; Lee, D.H.; Cusi, A.; Feiman, R.; Grabski, W.; Anderson, A.K. Expressing fear enhances sensory acquisition. Nat. Neurosci. 2008, 11, 843–850.
- Guo, K.; Soornack, Y.; Settle, R. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion. Vis. Res. 2019, 157, 112–122.
- Ramdani, C.; Ogier, M.; Coutrot, A. Communicating and reading emotion with masked faces in the Covid era: A short review of the literature. Psychiatry Res. 2022, 114755.
- Canal, F.Z.; Müller, T.R.; Matias, J.C.; Scotton, G.G.; de Sa Junior, A.R.; Pozzebon, E.; Sobieranski, A.C. A survey on facial emotion recognition techniques: A state-of-the-art literature review. Inf. Sci. 2021, 582, 593–617.
- Maithri, M.; Raghavendra, U.; Gudigar, A.; Samanth, J.; Barua, P.D.; Murugappan, M.; Chakole, Y.; Acharya, U.R. Automated emotion recognition: Current trends and future perspectives. Comput. Methods Programs Biomed. 2022, 215, 106646.
- Xia, C.; Pan, Z.; Li, Y.; Chen, J.; Li, H. Vision-based melt pool monitoring for wire-arc additive manufacturing using deep learning method. Int. J. Adv. Manuf. Technol. 2022, 120, 551–562.
- Li, W.; Zhang, L.; Wu, C.; Cui, Z.; Niu, C. A new lightweight deep neural network for surface scratch detection. Int. J. Adv. Manuf. Technol. 2022, 123, 1999–2015.
- Mukhiddinov, M.; Akmuradov, B.; Djuraev, O. Robust text recognition for Uzbek language in natural scene images. In Proceedings of the 2019 International Conference on Information Science and Communications Technologies (ICISCT), Tashkent, Uzbekistan, 4–6 November 2019; pp. 1–5.
- Khamdamov, U.R.; Djuraev, O.N. A novel method for extracting text from natural scene images and TTS. Eur. Sci. Rev. 2018, 1, 30–33.
- Chen, X.; Wang, X.; Zhang, K.; Fung, K.-M.; Thai, T.C.; Moore, K.; Mannel, R.S.; Liu, H.; Zheng, B.; Qiu, Y. Recent advances and clinical applications of deep learning in medical image analysis. Med. Image Anal. 2022, 79, 102444.
- Avazov, K.; Abdusalomov, A.; Mukhiddinov, M.; Baratov, N.; Makhmudov, F.; Cho, Y.I. An improvement for the automatic classification method for ultrasound images used on CNN. Int. J. Wavelets Multiresolution Inf. Process. 2021, 20, 2150054.
- Mellouk, W.; Handouzi, W. Facial emotion recognition using deep learning: Review and insights. Procedia Comput. Sci. 2020, 175, 689–694.
- Saxena, A.; Khanna, A.; Gupta, D. Emotion Recognition and Detection Methods: A Comprehensive Survey. J. Artif. Intell. Syst. 2020, 2, 53–79.
- Ko, B.C. A Brief Review of Facial Emotion Recognition Based on Visual Information. Sensors 2018, 18, 401.
- Dzedzickis, A.; Kaklauskas, A.; Bucinskas, V. Human Emotion Recognition: Review of Sensors and Methods. Sensors 2020, 20, 592.
- Mukhiddinov, M.; Cho, J. Smart Glass System Using Deep Learning for the Blind and Visually Impaired. Electronics 2021, 10, 2756.
- Lu, K.; Zhang, L. TBEFN: A Two-Branch Exposure-Fusion Network for Low-Light Image Enhancement. IEEE Trans. Multimedia 2020, 23, 4093–4105.
- Mollahosseini, A.; Hasani, B.; Mahoor, M.H. AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild. IEEE Trans. Affect. Comput. 2017, 10, 18–31.
- Aqeel, A. MaskTheFace. 2020. Available online: https://github.com/aqeelanwar/MaskTheFace (accessed on 28 October 2022).
- MediaPipe Face Mesh. Available online: https://google.github.io/mediapipe/solutions/face_mesh.html (accessed on 2 November 2022).
- Roberson, D.; Kikutani, M.; Döge, P.; Whitaker, L.; Majid, A. Shades of emotion: What the addition of sunglasses or masks to faces reveals about the development of facial expression processing. Cognition 2012, 125, 195–206.
- Gori, M.; Schiatti, L.; Amadeo, M.B. Masking Emotions: Face Masks Impair How We Read Emotions. Front. Psychol. 2021, 12, 669432.
- Noyes, E.; Davis, J.P.; Petrov, N.; Gray, K.L.H.; Ritchie, K.L. The effect of face masks and sunglasses on identity and expression recognition with super-recognizers and typical observers. R. Soc. Open Sci. 2021, 8, 201169.
- Carbon, C.-C. Wearing Face Masks Strongly Confuses Counterparts in Reading Emotions. Front. Psychol. 2020, 11, 566886.
- Gulbetekin, E.; Fidancı, A.; Altun, E.; Er, M.N.; Gürcan, E. Effects of mask use and race on face perception, emotion recognition, and social distancing during the COVID-19 pandemic. Res. Sq. 2021, PPR533073.
- Pazhoohi, F.; Forby, L.; Kingstone, A. Facial masks affect emotion recognition in the general population and individuals with autistic traits. PLoS ONE 2021, 16, e0257740.
- Gosselin, F.; Schyns, P.G. Bubbles: A technique to reveal the use of information in recognition tasks. Vis. Res. 2001, 41, 2261–2271.
- Blais, C.; Roy, C.; Fiset, D.; Arguin, M.; Gosselin, F. The eyes are not the window to basic emotions. Neuropsychologia 2012, 50, 2830–2838.
- Wegrzyn, M.; Vogt, M.; Kireclioglu, B.; Schneider, J.; Kissler, J. Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLoS ONE 2017, 12, e0177239.
- Beaudry, O.; Roy-Charland, A.; Perron, M.; Cormier, I.; Tapp, R. Featural processing in recognition of emotional facial expressions. Cogn. Emot. 2013, 28, 416–432.
- Schurgin, M.W.; Nelson, J.; Iida, S.; Ohira, H.; Chiao, J.Y.; Franconeri, S.L. Eye movements during emotion recognition in faces. J. Vis. 2014, 14, 14.
- Kotsia, I.; Buciu, I.; Pitas, I. An analysis of facial expression recognition under partial facial image occlusion. Image Vis. Comput. 2008, 26, 1052–1067.
- Yan, J.; Zheng, W.; Cui, Z.; Tang, C.; Zhang, T.; Zong, Y. Multi-cue fusion for emotion recognition in the wild. Neurocomputing 2018, 309, 27–35.
- Jung, H.; Lee, S.; Yim, J.; Park, S.; Kim, J. Joint Fine-Tuning in Deep Neural Networks for Facial Expression Recognition. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2983–2991.
- Kollias, D.; Zafeiriou, S.P. Exploiting Multi-CNN Features in CNN-RNN Based Dimensional Emotion Recognition on the OMG in-the-Wild Dataset. IEEE Trans. Affect. Comput. 2020, 12, 595–606.
- Hasani, B.; Mahoor, M.H. Facial Expression Recognition Using Enhanced Deep 3D Convolutional Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 2278–2288.
- Fabiano, D.; Canavan, S. Deformable synthesis model for emotion recognition. In Proceedings of the 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), Lille, France, 14–18 May 2019.
- Ngoc, Q.T.; Lee, S.; Song, B.C. Facial Landmark-Based Emotion Recognition via Directed Graph Neural Network. Electronics 2020, 9, 764.
- Khoeun, R.; Chophuk, P.; Chinnasarn, K. Emotion Recognition for Partial Faces Using a Feature Vector Technique. Sensors 2022, 22, 4633.
- Nair, P.; Cavallaro, A. 3-D Face Detection, Landmark Localization, and Registration Using a Point Distribution Model. IEEE Trans. Multimedia 2009, 11, 611–623.
- Shah, M.H.; Dinesh, A.; Sharmila, T.S. Analysis of Facial Landmark Features to determine the best subset for finding Face Orientation. In Proceedings of the 2019 International Conference on Computational Intelligence in Data Science (ICCIDS), Gurugram, India, 6–7 September 2019; pp. 1–4.
- Riaz, M.N.; Shen, Y.; Sohail, M.; Guo, M. eXnet: An Efficient Approach for Emotion Recognition in the Wild. Sensors 2020, 20, 1087.
- Shao, J.; Qian, Y. Three convolutional neural network models for facial expression recognition in the wild. Neurocomputing 2019, 355, 82–92.
- Miao, S.; Xu, H.; Han, Z.; Zhu, Y. Recognizing Facial Expressions Using a Shallow Convolutional Neural Network. IEEE Access 2019, 7, 78000–78011.
- Wang, K.; Peng, X.; Yang, J.; Meng, D.; Qiao, Y. Region Attention Networks for Pose and Occlusion Robust Facial Expression Recognition. IEEE Trans. Image Process. 2020, 29, 4057–4069.
- Farzaneh, A.H.; Qi, X. Facial expression recognition in the wild via deep attentive center loss. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 2402–2411.
- Shi, J.; Zhu, S.; Liang, Z. Learning to amend facial expression representation via de-albino and affinity. arXiv 2021, arXiv:2103.10189.
- Li, S.; Deng, W. Reliable Crowdsourcing and Deep Locality-Preserving Learning for Unconstrained Facial Expression Recognition. IEEE Trans. Image Process. 2018, 28, 356–370.
- Li, Y.; Zeng, J.; Shan, S.; Chen, X. Occlusion Aware Facial Expression Recognition Using CNN With Attention Mechanism. IEEE Trans. Image Process. 2018, 28, 2439–2450.
- Farkhod, A.; Abdusalomov, A.B.; Mukhiddinov, M.; Cho, Y.-I. Development of Real-Time Landmark-Based Emotion Recognition CNN for Masked Faces. Sensors 2022, 22, 8704.
- Gross, R.; Matthews, I.; Cohn, J.; Kanade, T.; Baker, S. Multi-PIE. Image Vis. Comput. 2010, 28, 807–813.
- Lucey, P.; Cohn, J.F.; Kanade, T.; Saragih, J.; Ambadar, Z.; Matthews, I. The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), San Francisco, CA, USA, 13–18 June 2010; pp. 94–101.
- Lyons, M.; Akamatsu, S.; Kamachi, M.; Gyoba, J. Coding facial expressions with Gabor wavelets. In Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan, 14–16 April 1998.
- Pantic, M.; Valstar, M.; Rademaker, R.; Maat, L. Web-Based Database for Facial Expression Analysis. In Proceedings of the 2005 IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands, 6–8 July 2005.
- McDuff, D.; Kaliouby, R.; Senechal, T.; Amr, M.; Cohn, J.; Picard, R. Affectiva-MIT facial expression dataset (AM-FED): Naturalistic and spontaneous facial expressions collected. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 23–28 June 2013; pp. 881–888.
- Mavadati, S.M.; Mahoor, M.H.; Bartlett, K.; Trinh, P.; Cohn, J.F. DISFA: A Spontaneous Facial Action Intensity Database. IEEE Trans. Affect. Comput. 2013, 4, 151–160.
- Sneddon, I.; McRorie, M.; McKeown, G.; Hanratty, J. The Belfast Induced Natural Emotion Database. IEEE Trans. Affect. Comput. 2011, 3, 32–41.
- Goodfellow, I.J.; Erhan, D.; Carrier, P.L.; Courville, A.; Mirza, M.; Hamner, B.; Cukierski, W.; Tang, Y.; Thaler, D.; Lee, D.-H.; et al. Challenges in representation learning: A report on three machine learning contests. Neural Netw. 2015, 64, 59–63.
- FER-2013 Dataset. Available online: https://www.kaggle.com/datasets/msambare/fer2013 (accessed on 28 October 2022).
- Mehendale, N. Facial emotion recognition using convolutional neural networks (FERC). SN Appl. Sci. 2020, 2, 446.
- Anwar, A.; Raychowdhury, A. Masked face recognition for secure authentication. arXiv 2020, arXiv:2008.11104.
- Zafeiriou, S.; Papaioannou, A.; Kotsia, I.; Nicolaou, M.A.; Zhao, G. Facial affect “in-the-wild”: A survey and a new database. In Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Affect “in-the-wild” Workshop, Las Vegas, NV, USA, 27–30 June 2016.
- Dhall, A.; Goecke, R.; Joshi, J.; Wagner, M.; Gedeon, T. Emotion recognition in the wild challenge 2013. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction, Sydney, Australia, 9–13 December 2013; pp. 509–516.
- Benitez-Quiroz, C.F.; Srinivasan, R.; Martinez, A.M. EmotioNet: An accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. In Proceedings of the IEEE International Conference on Computer Vision & Pattern Recognition (CVPR16), Las Vegas, NV, USA, 27–30 June 2016.
- Mollahosseini, A.; Hasani, B.; Salvador, M.J.; Abdollahi, H.; Chan, D.; Mahoor, M.H. Facial expression recognition from world wild web. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Las Vegas, NV, USA, 26 June–1 July 2016.
- Cai, J.; Gu, S.; Zhang, L. Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images. IEEE Trans. Image Process. 2018, 27, 2049–2062.
- Chen, W.; Wang, W.; Yang, W.; Liu, J. Deep Retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560.
- MediaPipe Face Detection. Available online: https://google.github.io/mediapipe/solutions/face_detection.html (accessed on 28 October 2022).
- Bazarevsky, V.; Kartynnik, Y.; Vakunov, A.; Raveendran, K.; Grundmann, M. BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs. arXiv 2019, arXiv:1907.05047.
- Chen, Y.; Wang, J.; Chen, S.; Shi, Z.; Cai, J. Facial Motion Prior Networks for Facial Expression Recognition. In Proceedings of the 2019 IEEE Visual Communications and Image Processing (VCIP), Sydney, Australia, 1–4 December 2019; pp. 1–4.
- Georgescu, M.-I.; Ionescu, R.T.; Popescu, M. Local Learning With Deep and Handcrafted Features for Facial Expression Recognition. IEEE Access 2019, 7, 64827–64836.
- Hayale, W.; Negi, P.; Mahoor, M. Facial Expression Recognition Using Deep Siamese Neural Networks with a Supervised Loss function. In Proceedings of the 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition, Lille, France, 14–18 May 2019; pp. 1–7.
- Zeng, J.; Shan, S.; Chen, X. Facial expression recognition with inconsistently annotated datasets. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 222–237.
- Antoniadis, P.; Filntisis, P.P.; Maragos, P. Exploiting Emotional Dependencies with Graph Convolutional Networks for Facial Expression Recognition. In Proceedings of the 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition, Jodhpur, India, 15–18 December 2021; pp. 1–8.
- Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5. Sensors 2022, 22, 9384.
- Mukhiddinov, M.; Muminov, A.; Cho, J. Improved Classification Approach for Fruits and Vegetables Freshness Based on Deep Learning. Sensors 2022, 22, 8192.
- Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. Automatic Fire Detection and Notification System Based on Improved YOLOv4 for the Blind and Visually Impaired. Sensors 2022, 22, 3307.
- Patro, K.; Samantray, S.; Pławiak, J.; Tadeusiewicz, R.; Pławiak, P.; Prakash, A.J. A hybrid approach of a deep learning technique for real-time ECG beat detection. Int. J. Appl. Math. Comput. Sci. 2022, 32, 455–465.
- Li, Y.; Zeng, J.; Shan, S.; Chen, X. Patch-gated CNN for occlusion-aware facial expression recognition. In Proceedings of the 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 2209–2214.
- Li, Y.; Lu, Y.; Li, J.; Lu, G. Separate loss for basic and compound facial expression recognition in the wild. In Proceedings of the Asian Conference on Machine Learning, Nagoya, Japan, 17–19 November 2019; pp. 897–911.
- Wang, C.; Wang, S.; Liang, G. Identity- and Pose-Robust Facial Expression Recognition through Adversarial Feature Learning. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 238–246.
- Farzaneh, A.H.; Qi, X. Discriminant distribution-agnostic loss for facial expression recognition in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 406–407.
- Wen, Y.; Zhang, K.; Li, Z.; Qiao, Y. A discriminative feature learning approach for deep face recognition. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 499–515.
| Models | Facial Features | Emotions | Datasets | Recognition in the Dark |
|---|---|---|---|---|
| eXnet [48] | Upper and lower | 7 | FER-2013, CK+, RAF-DB | No |
| Shao et al. [49] | Upper and lower | 7 | CK+, FER-2013 | No |
| Miao et al. [50] | Upper and lower | 7 | FER-2013, CASME II, SAMM | No |
| Wang et al. [51] | Upper and lower | 8 | FERPlus, AffectNet, RAF-DB | No |
| Farzaneh et al. [52] | Upper and lower | 7 | RAF-DB, AffectNet | No |
| Shi et al. [53] | Upper and lower | 8 | RAF-DB, AffectNet | No |
| Li et al. [54] | Upper and lower | 7 | RAF-DB | No |
| Li et al. [55] | Upper and lower | 7 | RAF-DB, AffectNet | No |
| Khoeun et al. [45] | Upper | 8 | CK+, RAF-DB | No |
| Our previous work [56] | Upper | 7 | FER-2013 | No |
| The proposed work | Upper | 7 | AffectNet | Yes |
| Datasets | Total Size | Image Size | Emotion Categories | Number of Subjects | Condition |
|---|---|---|---|---|---|
| Multi-PIE [57] | 750,000 | 3072 × 2048 | 6 | 337 | Controlled |
| Aff-Wild [68] | 10,000 | Various | Valence and arousal | 2000 | Wild |
| Lucey et al. [58] | 10,708 | ~ | 7 | 123 | Controlled |
| Pantic et al. [60] | 1500 | 720 × 576 | 6 | 19 | Controlled |
| AM-FED [61] | 168,359 | 224 × 224 | 14 action units | 242 | Spontaneous |
| DISFA [62] | 130,000 | 1024 × 768 | 12 action units | 27 | Spontaneous |
| FER-2013 [65] | ~35,887 | 48 × 48 | 7 | ~35,887 | Wild |
| AFEW [69] | Videos | Various | 7 | 130 | Wild |
| EmotioNet [70] | 1,000,000 | Various | 23 | ~10,000 | Wild |
| FER-Wild [71] | 24,000 | Various | 7 | ~24,000 | Wild |
| AffectNet [24] | 1,000,000 | Various | 8 | ~450,000 | Wild |
| Components | Specifications | Descriptions |
|---|---|---|
| GPU | 2 × GeForce RTX 2080 Ti 11 GB | Two GPUs are installed |
| CPU | Intel Core 9th Gen i7-9700K (4.90 GHz) | |
| RAM | DDR4 64 GB (16 GB × 4) | Samsung DDR4 16 GB PC4-21300 |
| Storage | SSD: 512 GB/HDD: 4 TB (2 TB × 2) | |
| Motherboard | ASUS PRIME Z390-A STCOM | |
| OS | Ubuntu Desktop | Version: 18.04 LTS |
| Models | Accuracy | Models | Accuracy |
|---|---|---|---|
| Wang et al. [51] | 52.97% | Li et al. [86] | 58.89% |
| Farzaneh et al. [52] | 65.20% | Wang et al. [87] | 57.40% |
| Shi et al. [53] | 65.2% | Farzaneh et al. [88] | 62.34% |
| Li et al. [55] | 58.78% | Wen et al. [89] | 64.9% |
| Li et al. [85] | 55.33% | Proposed model | 69.3% |
| Facial Emotions | Precision | Sensitivity | Recall | Specificity | F-Score | Accuracy |
|---|---|---|---|---|---|---|
| Happiness | 87.9% | 89.4% | 90.3% | 90.8% | 89.6% | 89% |
| Sadness | 57.2% | 58.7% | 54.6% | 55.2% | 56.3% | 54.6% |
| Anger | 56.7% | 57.3% | 62.8% | 63.5% | 59.4% | 62.8% |
| Fear | 53.4% | 54.2% | 45.2% | 45.7% | 49.6% | 45.2% |
| Disgust | 74.1% | 74.8% | 72.5% | 73.0% | 73.2% | 72.5% |
| Surprise | 82.8% | 83.5% | 80.7% | 81.4% | 81.5% | 80.7% |
| Neutral | 62.3% | 63.1% | 65.4% | 65.9% | 63.7% | 65.4% |
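For reference, per-class scores of this kind can be derived from a one-vs-rest reading of the confusion matrix. The sketch below computes precision, recall (i.e., sensitivity), specificity, and F-score in the standard way; it is a generic illustration with a random example matrix, not the authors' evaluation code.

```python
# Per-class metrics from a multi-class confusion matrix (one-vs-rest).
import numpy as np

def per_class_metrics(cm):
    """cm[i, j] = number of samples of true class i predicted as class j."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as class i but wrong
    fn = cm.sum(axis=1) - tp          # true class i but missed
    tn = cm.sum() - (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # also called sensitivity
    specificity = tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f_score

# Illustrative 7-class example with random counts.
rng = np.random.default_rng(0)
cm = rng.integers(0, 100, size=(7, 7))
precision, recall, specificity, f_score = per_class_metrics(cm)
```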