Article
Peer-Review Record

Lightweight Micro-Expression Recognition on Composite Database

Appl. Sci. 2023, 13(3), 1846; https://doi.org/10.3390/app13031846
by Nur Aishah Ab Razak 1,* and Shahnorbanun Sahran 2,*
Submission received: 16 December 2022 / Revised: 14 January 2023 / Accepted: 21 January 2023 / Published: 31 January 2023
(This article belongs to the Special Issue Deep Learning Architectures for Computer Vision)

Round 1

Reviewer 1 Report

Here, the authors propose an MER 19 model developed from a truncated EfficientNet-B0 model consisting of 15 layers with only 867k parameters. They then demonstrate how transfer learning from a larger and more varied macro-expression database (FER2013) in a lightweight deep-learning network, before fine-tuning on the CDE dataset, can achieve high MER performance using only static images as input. A few corrections are required before acceptance.


1. A few more recent and relevant references should be included to extend the literature review.


2. Grammatical and technical errors should be rectified. 

 

3. The authors chose the MER 19 model to identify hidden emotions but did not specify why.

 

4. The discussion section should be extended so that it is easier for the journal's readers to understand.

5. The authors must compare existing technologies with the proposed one.

Author Response

Response 1: We have updated the paper with three recent papers from 2022-2023. We have also added the relevant previous works, and how they relate back to this work, in the Introduction and Related Work sections.

Response 2: The article has now been corrected using the MS Word proofreading tool.

Response 3: We are not sure what is meant by the "MER 19 model" referred to by the reviewer. Assuming that it refers to the proposed Efficient-ME model, we have added more explanation of why EfficientNet, and specifically its lightest variant, the EfficientNet-B0 model, was chosen in the Related Work and Proposed Model sections.
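For readers who want a concrete picture of what truncating EfficientNet-B0 into a small backbone can look like, a minimal PyTorch sketch is given below. The cut-off stage, head size, and three-class output are illustrative assumptions and not the exact Efficient-ME configuration described in the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

# Standard EfficientNet-B0 backbone (no pretrained weights loaded here).
backbone = efficientnet_b0(weights=None)

# Keep only the early feature stages; the cut-off index (4) is a
# hypothetical truncation point, not the paper's exact 15-layer design.
truncated = nn.Sequential(*list(backbone.features.children())[:4])

# Small classification head; three micro-expression classes is an
# assumption based on common composite-dataset protocols.
model = nn.Sequential(
    truncated,
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(40, 3),  # stage 3 of EfficientNet-B0 outputs 40 channels
)

x = torch.randn(1, 3, 224, 224)                      # one static image
print(model(x).shape)                                # torch.Size([1, 3])
print(sum(p.numel() for p in model.parameters()))    # rough parameter count
```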

Response 4: Section 4 has been rewritten as a series of experiments that relate back to the paper's contributions. The discussions for Experiments 2 and 3 (the impact of the source database in transfer learning, and loss-function manipulation) have been paraphrased and clarified further to improve understanding.

Response 5: We believe we have compared against the existing technologies outlined in Section 3.5.2 (Baselines) that use the same composite dataset in Experiment 1. The latest research (RNAS-MER) is also included for comparison.

 

Reviewer 2 Report

In this paper, the authors propose a novel lightweight deep learning model to address the computational complexity of MER tasks. A simple algorithm tuning that manipulates the loss function has also proved useful in dealing with the imbalance problem, which is a drawback of CDE datasets. This is an interesting study; here are some of my comments.

- In Section 3.2, the modules should be explained in the authors' own words and in combination with their role in MER tasks.

- The authors mention some classic training strategies but do not make many targeted improvements. I do not think this part needs to be described in detail; it should simply be summarized.

- Is there any further research on the lightweight limit of deep learning models? I believe the relevant conclusions would contribute to lightweight research in the MER field, which is also mentioned in this manuscript.

- I am suspicious of algorithm tuning as a way to circumvent the imbalanced-dataset problem. In the experiment, Efficient-ME with FL performs better than Efficient-ME with CE on the balanced datasets, while their results on the imbalanced datasets are similar. Why can we conclude that the imbalanced-dataset problem can be circumvented by algorithm tuning? I hope the authors will reinterpret this part of the experiment.

- There are also some problems with the naming of charts in the paper. The names of some charts leave the reader confused about what they are meant to show. For example, although they form a comparative experiment, Table 6 and Table 7 are named completely independently.

- Some related works should be discussed, such as 10.1007/s11042-016-3971-4, 10.1016/j.jvcir.2014.12.007, 10.1016/j.image.2017.07.006.

Author Response

Response 1: The introduction to Section 3.2 has been updated with more context on why the EfficientNet-B0 model is used as the base model.

Response 2: We assumed that the "classic training strategies" mentioned by the reviewer refer to the transfer learning technique (Sections 3.3 and 3.4). These sections have therefore been summarized accordingly, as the reviewer correctly observed that no targeted improvements were made in this process.

Response 3: To our knowledge, there has not been much discussion of the lightweight limit of deep learning models in MER. In Facial Expression Recognition (FER), however, there has been a lot of work on reducing network complexity, but these works still do not establish how small a deep learning model can be before it loses performance/accuracy.

Response 4: We have clarified and rewritten the conclusion of the loss-function experiment: instead of the overly generic claim that any imbalanced-dataset problem can be tackled by using the FL loss function, it now states the effectiveness of using the FL loss function in transfer learning when the source database's class distribution does not match the target's. We hope this is clearer to the reviewers and readers.
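For context on the FL vs. CE comparison discussed here, a minimal sketch of the focal loss (Lin et al.) as it is commonly implemented in PyTorch is shown below; the gamma value is an illustrative default rather than the paper's tuned setting, and the class-balancing alpha term is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Multi-class focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).

    The (1 - p_t)^gamma factor down-weights easy, well-classified samples
    so training focuses on hard (often minority-class) examples; with
    gamma = 0 this reduces to ordinary cross-entropy.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                      # probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()

# Example with 4 samples and 3 expression classes (class count assumed).
logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 2])
print(focal_loss(logits, targets))
```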

Response 5: We have renamed the charts in question and ensured that all the other tables and figures are named clearly to convey our intent.

Response 6: With due respect, in our opinion the suggested related works are more relevant to digital image processing than to Micro-Expression Recognition, so we have omitted them.

Reviewer 3 Report

The article focuses on problems in the field of micro-expression recognition. Addressing the complexity, high computational cost, and small, unbalanced datasets of previous micro-expression recognition methods, the proposed method redefines the unbalanced-dataset problem as an algorithm-tuning problem in a lightweight network, with transfer learning from a larger database and static images as input. The loss function is tuned and a new network architecture, Efficient-ME, is proposed. Experimental data in the article show that the proposed method leads to a significant improvement in the performance metrics (UF1 score, UAR score) of micro-expression recognition.

However, there are some problems with the article. The background of micro-expression recognition and the previous work in the field are not sufficiently described in the Introduction, so the relevant research background needs to be added there, and all the papers used should be cited correctly. In the Related Work section, it should be made clear and concise what has been done in this research. The experiments in Section 4.2 use the ImageNet dataset; it is recommended that experiments using some other dataset also be conducted.

 

Author Response

Response 1: The Introduction has been extended to add relevant work with regard to MER, and the problem statement has been clarified based on this related work.

Response 2: Citations have been checked for correctness and completeness.

Response 3: The Related Work section is now divided into sub-sections that explain the connection between previous works and what was done in this work.

Response 4: A new experiment has been added in Section 4.2 (Experiment 2) with the CK+ dataset. The discussion and results have been updated to reflect this new finding.
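For readers unfamiliar with the pretrain-then-fine-tune pattern referred to in these responses (pretraining on a macro-expression dataset such as FER2013 or CK+, then fine-tuning on the composite micro-expression data), a generic PyTorch sketch follows; the toy backbone, class counts, and the choice to freeze the transferred layers are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def build_model(num_classes):
    # Toy placeholder backbone; in the paper this role is played by the
    # truncated EfficientNet-B0 (Efficient-ME).
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, num_classes),
    )

# Stage 1: pretrain on a larger macro-expression dataset (FER2013 has 7 classes).
pretrain_model = build_model(num_classes=7)
# ... train pretrain_model on the macro-expression data here ...

# Stage 2: fine-tune on the composite micro-expression dataset (3 classes assumed).
finetune_model = build_model(num_classes=3)

# Copy all weights except the final classifier, whose shape differs.
state = {k: v for k, v in pretrain_model.state_dict().items()
         if not k.startswith("4.")}      # "4." is the Linear head in this sketch
finetune_model.load_state_dict(state, strict=False)

# Optionally freeze the transferred layers and train only the new head.
for name, param in finetune_model.named_parameters():
    param.requires_grad = name.startswith("4.")

optimizer = torch.optim.Adam(
    (p for p in finetune_model.parameters() if p.requires_grad), lr=1e-3)
```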

 

Round 2

Reviewer 2 Report

The authors have successfully addressed my major concerns. I recommend accepting this manuscript.
