Imaging Studies for Face and Gesture Analysis

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (1 September 2021) | Viewed by 14119

Special Issue Editors


Guest Editor
Department of Computing and Mathematics, Manchester Metropolitan University, Manchester M15 6BH, UK
Interests: face informatics; micro-expression; human behavior; human motion; gesture

Guest Editor
Sudan University of Science and Technology, Khartoum, Sudan
Interests: facial micro-expressions; face recognition; face and gesture analysis; handwriting recognition; optical character recognition (OCR)

Guest Editor
Department of Computer Science, Shaqra University, Riyadh, Saudi Arabia
Interests: digital image processing; facial wrinkle detection; facial features analysis; face age estimation

Guest Editor
School of Computer Science and Information Technology, Sharg El-Neil College, Khartoum, Sudan
Interests: face analysis; face age estimation; facial wrinkle detection

Special Issue Information

Dear Colleagues,

Face and (body) gestures are rapidly becoming an area of intense interest in computer science and human–computer interaction. Face and body gestures are signals that enable better communication and the study of human behavior. They are mostly used in security applications in public places such as banks and airports, and they are also used in marketing and entertainment. However, due to variations in face images and the fact that people may perform the same gesture differently, efficient methods and algorithms for their analysis are still needed. This Special Issue focuses on imaging studies for face and gesture analysis, including the creation of new datasets, the development of new technologies and algorithms for measuring human behavior, the psychology and perception of human behavior, image processing, machine learning, and deep learning in face and gesture analysis, and the implications of human behavior for healthcare applications. We aim to promote interaction between researchers, scholars, practitioners, engineers, and students from across industry and academia to discuss all aspects of human behavior. We welcome original works that address a wide range of issues, including, but not limited to:

  • Face and gesture recognition;
  • Face age estimation and facial wrinkle analysis;
  • Face inpainting;
  • Subtle/micro face and gesture movement analysis;
  • Technology in automated human behavior measurement;
  • Machine learning in human behavior analysis;
  • Real-time face and motion analysis;
  • Face and motion recognition on mobile devices;
  • Analysis of human motion (face and/or body gesture) for healthcare applications;
  • Novel datasets for face and gesture analysis;
  • Applications in face and gesture analysis.

Applications in other domains are welcome; in such cases, please contact the Guest Editors first.

Dr. Moi Hoon Yap
Dr. Walied Ali Merghani
Dr. Remah Mutasim Ibrahim Albashir
Dr. Omaima FathElrahman Osman
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • face recognition
  • face age estimation
  • facial microexpressions
  • face informatics
  • human motion analysis
  • face and gesture analysis
  • behavioral analysis
  • facial wrinkle detection
  • face inpainting

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

11 pages, 1769 KiB  
Article
Masked Face Analysis via Multi-Task Deep Learning
by Vatsa S. Patel, Zhongliang Nie, Trung-Nghia Le and Tam V. Nguyen
J. Imaging 2021, 7(10), 204; https://doi.org/10.3390/jimaging7100204 - 5 Oct 2021
Cited by 6 | Viewed by 2903
Abstract
Face recognition with wearable items is a challenging task in computer vision and involves the problem of identifying humans wearing a face mask. Masked face analysis via multi-task learning can effectively improve performance in many fields of face analysis. In this paper, we propose a unified framework for predicting the age, gender, and emotions of people wearing face masks. We first construct FGNET-MASK, a masked face dataset for the problem. Then, we propose a multi-task deep learning model to tackle the problem. In particular, the model takes masked face images as input and shares weights across tasks to yield predictions of age, expression, and gender. Through extensive experiments, the proposed framework is shown to outperform existing methods.
(This article belongs to the Special Issue Imaging Studies for Face and Gesture Analysis)
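The core idea of the multi-task setup described in the abstract — one shared trunk whose features feed several task-specific heads — can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's model: the dimensions, head names, and random weights are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 128-d face embedding feeds a shared trunk
# whose 64-d output is re-used by three task-specific heads.
D_IN, D_SHARED = 128, 64

W_shared = rng.standard_normal((D_IN, D_SHARED)) * 0.01

# One linear head per task: age (regression), gender (2-way), expression (7-way).
heads = {
    "age": rng.standard_normal((D_SHARED, 1)) * 0.01,
    "gender": rng.standard_normal((D_SHARED, 2)) * 0.01,
    "expression": rng.standard_normal((D_SHARED, 7)) * 0.01,
}

def forward(x):
    """One forward pass through the shared trunk yields all three predictions."""
    h = np.maximum(x @ W_shared, 0.0)          # shared ReLU features
    return {task: h @ W for task, W in heads.items()}

x = rng.standard_normal((4, D_IN))             # a batch of 4 face embeddings
out = forward(x)
print({k: v.shape for k, v in out.items()})
```

Because the trunk is computed once per image, the marginal cost of each extra task is only its small head, which is what makes the multi-task formulation attractive for masked-face analysis.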

11 pages, 1831 KiB  
Article
Ground Truth Data Generator for Eye Location on Infrared Driver Recordings
by Sorin Valcan and Mihail Gaianu
J. Imaging 2021, 7(9), 162; https://doi.org/10.3390/jimaging7090162 - 27 Aug 2021
Cited by 6 | Viewed by 2080
Abstract
Labeling is a very costly and time-consuming process that aims to generate datasets for training neural networks across many functionalities and projects. It has a huge impact in the automotive field of driver monitoring, where much of the budget is spent on image labeling. This paper presents an algorithm for generating ground truth data for 2D eye location in infrared images of drivers. The algorithm is implemented with many detection restrictions, which makes it very accurate but not necessarily very consistent. The resulting dataset requires no manual correction and will be used to train neural networks, which we expect to achieve very good accuracy and much better consistency in eye detection than the initial algorithm. This paper shows that very good quality ground truth data for training neural networks can be generated automatically, which is still an open topic in the automotive industry.
(This article belongs to the Special Issue Imaging Studies for Face and Gesture Analysis)

14 pages, 1736 KiB  
Article
Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer
by Chuin Hong Yap, Ryan Cunningham, Adrian K. Davison and Moi Hoon Yap
J. Imaging 2021, 7(8), 142; https://doi.org/10.3390/jimaging7080142 - 11 Aug 2021
Cited by 4 | Viewed by 3488
Abstract
Long video datasets of facial macro- and micro-expressions remain in strong demand given the current dominance of data-hungry deep learning methods. There are few methods for generating long videos that contain micro-expressions, and there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generating synthetic long videos and recommend assessment methods to inspect dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method, StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image from the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH through an analysis based on the facial action units detected by OpenFace. For quantitative measurement, our findings show high correlation between the original and synthetic data on two Action Units (AUs), AU12 and AU6, with Pearson's correlations of 0.74 and 0.72, respectively. This is further supported by the evaluation method provided by OpenFace on those AUs, which also yields high scores of 0.85 and 0.59. Additionally, optical flow is used to visually compare the original and transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool for micro-expression research, especially in the spotting task.
(This article belongs to the Special Issue Imaging Studies for Face and Gesture Analysis)
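The quantitative check described above — Pearson's correlation between AU intensity traces of the original and synthetic videos — is simple to reproduce. The sketch below uses made-up AU12 intensity values purely for illustration; the real traces come from OpenFace outputs on the SAMM and SAMM-SYNTH clips.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson's correlation coefficient between two AU intensity series."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative AU12 intensity traces for an original clip and its
# style-transferred counterpart (values are invented, not from the paper).
au12_original  = [0.1, 0.3, 0.9, 1.4, 1.1, 0.5, 0.2]
au12_synthetic = [0.2, 0.4, 1.0, 1.3, 1.0, 0.6, 0.1]

r = pearson_r(au12_original, au12_synthetic)
print(round(r, 3))
```

A value of r near 1 indicates that the synthetic video reproduces the temporal dynamics of the action unit, which is the property the paper's AU12/AU6 figures of 0.74 and 0.72 measure.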

20 pages, 21639 KiB  
Article
FACS-Based Graph Features for Real-Time Micro-Expression Recognition
by Adamu Muhammad Buhari, Chee-Pun Ooi, Vishnu Monn Baskaran, Raphaël C. W. Phan, KokSheik Wong and Wooi-Haw Tan
J. Imaging 2020, 6(12), 130; https://doi.org/10.3390/jimaging6120130 - 30 Nov 2020
Cited by 14 | Viewed by 4544
Abstract
Several studies on micro-expression recognition have contributed mainly to accuracy improvement. However, computational complexity has received comparatively less attention, which increases the cost of micro-expression recognition in real-time applications. In addition, the majority of existing approaches require at least two frames (i.e., the onset and apex frames) to compute features for every sample. This paper puts forward new facial graph features based on 68-point landmarks using the Facial Action Coding System (FACS). The proposed feature extraction technique (FACS-based graph features) uses facial landmark points to build a graph for each Action Unit (AU), where the measured distance and gradient of every segment within an AU graph are presented as features. Moreover, the proposed technique performs micro-expression (ME) recognition from a single input frame. Results indicate that the proposed FACS-based graph features achieve up to 87.33% recognition accuracy with an F1-score of 0.87 using leave-one-subject-out cross-validation on the SAMM dataset. In addition, the proposed technique computes features in 2 ms per sample on a Xeon E5-2650 machine.
(This article belongs to the Special Issue Imaging Studies for Face and Gesture Analysis)
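The per-segment distance and gradient features described in the abstract can be illustrated with a short sketch. The landmark indices, coordinates, and the two-edge "AU graph" below are hypothetical stand-ins (loosely suggesting eyebrow landmarks), not the paper's actual AU graph definitions.

```python
import math

def segment_features(landmarks, edges):
    """For each edge (i, j) of an AU graph over facial landmarks, compute the
    segment length and its gradient (orientation angle), concatenated into
    one flat feature vector."""
    feats = []
    for i, j in edges:
        (x1, y1), (x2, y2) = landmarks[i], landmarks[j]
        dx, dy = x2 - x1, y2 - y1
        feats.append(math.hypot(dx, dy))   # segment distance
        feats.append(math.atan2(dy, dx))   # segment gradient, in radians
    return feats

# Toy example: three landmark points and a two-edge graph
# (indices 17/19/21 are illustrative only).
pts = {17: (10.0, 20.0), 19: (14.0, 17.0), 21: (18.0, 20.0)}
au_edges = [(17, 19), (19, 21)]

feats = segment_features(pts, au_edges)
print(feats)
```

Because only a handful of distances and angles are computed per frame, features like these are cheap enough for the real-time, single-frame setting the paper targets.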
