Article
Peer-Review Record

Facial Expression Recognition Based on Discrete Separable Shearlet Transform and Feature Selection

Algorithms 2019, 12(1), 11; https://doi.org/10.3390/a12010011
by Yang Lu, Shigang Wang * and Wenting Zhao
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 11 December 2018 / Revised: 24 December 2018 / Accepted: 25 December 2018 / Published: 31 December 2018

Round 1

Reviewer 1 Report

The approach can be divided into five steps. Firstly, all test and training images are preprocessed. Secondly, the DSST is applied to the preprocessed facial expression images, and all of the transform coefficients are collected as the original feature set. Thirdly, an improved normalized mutual information feature selection is proposed to find the optimal subset of the original feature set, thereby retaining the key classification information of the original data. Fourthly, after feature extraction and selection, the feature space is further reduced by employing linear discriminant analysis. Finally, a support vector machine is used to recognize the expressions.
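The feature-selection step above can be sketched in code. The following is a minimal, illustrative numpy implementation of plain normalized mutual information ranking over discretized features; it is not the authors' exact "improved" criterion, and the function names, bin count, and toy data are assumptions made for the example.

```python
import numpy as np

def normalized_mutual_information(x, y, bins=8):
    """Normalized mutual information between a continuous feature x
    and discrete class labels y: NMI = 2*I(X;Y) / (H(X) + H(Y))."""
    # Discretize the feature into equal-width bins.
    edges = np.histogram_bin_edges(x, bins=bins)
    x_d = np.digitize(x, edges[1:-1])
    # Empirical joint and marginal distributions.
    joint, _, _ = np.histogram2d(
        x_d, y, bins=(len(np.unique(x_d)), len(np.unique(y))))
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    # I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) )
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return 2.0 * mi / (hx + hy) if (hx + hy) > 0 else 0.0

def select_top_k(features, labels, k):
    """Rank each candidate feature (e.g., one DSST coefficient) by its
    NMI with the class labels and keep the indices of the k best."""
    scores = [normalized_mutual_information(features[:, j], labels)
              for j in range(features.shape[1])]
    return np.argsort(scores)[::-1][:k]

# Toy data: feature 0 tracks the label, feature 1 is pure noise.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
features = np.column_stack([labels + 0.05 * rng.normal(size=200),
                            rng.normal(size=200)])
print(select_top_k(features, labels, 1))  # feature 0 should rank first
```

In the full pipeline described above, the selected coefficient subset would then be passed through linear discriminant analysis before SVM classification.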

 

(1)    Why use DSST? What is its advantage?

(2)    Why not use traditional signal processing methods?

(3)    Did you consider using conventional Fourier analysis? What are its shortcomings?

(4)    Why is preprocessing necessary?

(5)    How do you select the important features?

(6)    In Equation 23, why should o be less than r?

(7)    Some related references could be discussed; see "Emotion Recognition based on Biorthogonal Wavelet Entropy, Fuzzy Support Vector Machine, and Stratified Cross Validation" and "Facial Emotion Recognition via Discrete Wavelet Transform, Principal Component Analysis, and Cat Swarm Optimization".


Author Response

Dear Editor and Reviewers:

   We sincerely thank the editors and all reviewers for their valuable feedback, which has improved the quality of our manuscript (#algorithms-415349, "Facial Expression Recognition Based on Discrete Separable Shearlet Transform and Feature Selection"). All of the reviewers' comments are addressed and replied to in this document. The modifications made to the manuscript are highlighted in blue for ease of review. We have responded to your comments point by point in the Word document.

   Finally, we thank you again for your careful review, and we look forward to your further suggestions.

Sincerely,

Yang Lu, Shigang Wang* and Wenting Zhao

 

School of Communication Engineering, Jilin University, No. 5372 Nanhu Road, Changchun 130012, China.

 

E-mail:

Yang Lu: [email protected];

Shigang Wang: [email protected];

Wenting Zhao: [email protected].





Author Response File: Author Response.pdf

Reviewer 2 Report

The authors propose a novel facial expression recognition approach based on discrete separable shearlet transform (DSST) and a normalized mutual information feature selection.

The proposed research work is somehow interesting as it addresses a current topic of great interest to the research community. Overall, the paper is well-written and easy to follow.

The authors have well-thought-out their main ideas and contributions and the provided theoretical analysis is concrete and correct without the reviewer having any major concern.

Based on the proposed topic, the reviewer sees the relevance of the proposed facial expression recognition approach, as well as its applicability to current recommender systems for improving content- and context-aware recommendation, e.g.,

"A holistic approach for personalization, relevance feedback & recommendation in enriched multimedia content." Multimedia Tools and Applications 77, no. 1 (2018): 283-326,

"Personalized multimedia content retrieval through relevance feedback techniques for enhanced user experience." In Telecommunications (ConTEL), 2015 13th International Conference on, pp. 1-8. IEEE, 2015,

"Recommender systems survey." Knowledge-Based Systems 46 (2013): 109–132, and "Fab: Content-based, collaborative recommendation." Communications of the ACM 40, no. 3 (1997): 66–72.


The authors are encouraged to provide this angle in the provided related work and update the corresponding reference list. This aspect of their research will strengthen its applicability and its potential to be also applied in the field of recommendations and relevance feedback approaches.


Moreover, the authors should provide the complexity analysis of the quantization algorithm. What is the implementation cost of the proposed framework?


What is the time complexity of the proposed framework, so as to be implemented in a real time scenario?


The results are sufficient to support the main attributes of the proposed framework. However, the authors are encouraged to provide some discussion regarding the nature of the adopted datasets, i.e., JAFFE, MMI, etc.


The overall manuscript should be checked regarding the usage of the English language, as it has many typos, grammar and syntax errors.

 


Author Response

Dear Editor and Reviewers:

We sincerely thank the editors and all reviewers for their valuable feedback, which has improved the quality of our manuscript (#algorithms-415349, "Facial Expression Recognition Based on Discrete Separable Shearlet Transform and Feature Selection"). All of the reviewers' comments are addressed and replied to in this document. The modifications made to the manuscript are highlighted in blue for ease of review. We have responded to your comments point by point in the Word document.

Finally, we thank you again for your careful review, and we look forward to your further suggestions.

Sincerely,


Yang Lu, Shigang Wang* and Wenting Zhao


School of Communication Engineering, Jilin University, No. 5372 Nanhu Road, Changchun 130012, China


E-mail:

Yang Lu: [email protected];

Shigang Wang: [email protected];

Wenting Zhao: [email protected].






Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Accept in current form.

Author Response

Dear Editor and Reviewers:

We sincerely thank the editors and all reviewers for their valuable feedback, which has improved the quality of our manuscript (#algorithms-415349, "Facial Expression Recognition Based on Discrete Separable Shearlet Transform and Feature Selection"). Thank you for your suggestions on our work, and thank you again for your approval of our article.


Finally, we thank you again for your careful review.


Sincerely,

 

Yang Lu, Shigang Wang* and Wenting Zhao

School of Communication Engineering, Jilin University, No. 5372 Nanhu Road, Changchun 130012, China.

 

E-mail:

Yang Lu: [email protected]; Shigang Wang: [email protected]; Wenting Zhao: [email protected].



 




Reviewer 2 Report

The authors have addressed all the comments raised by the reviewer. However, they should check their reference list, as it contains errors. Many references, such as [5]-[9], are wrongly listed, as the authors use the first names of the authors instead of their last names. Also, several remaining typos in the manuscript should be corrected.

Author Response

Dear Editor and Reviewers:

We sincerely thank the editors and all reviewers for their valuable feedback, which has improved the quality of our manuscript (#algorithms-415349, "Facial Expression Recognition Based on Discrete Separable Shearlet Transform and Feature Selection"). All of the reviewers' comments are addressed and replied to in this document. The modifications made to the manuscript are highlighted in blue for ease of review.


 

Reviewer comments and suggestions for authors are provided here:

>
> Reviewer #2:
> 1. The authors have addressed all the comments raised by the reviewer. However, they should check their references list, as it is wrong. Many references such as [5]-[9] are wrongly listed, as the authors use the first name of the authors instead of their last name. Also, there are several remaining typos in the manuscript that should be checked.

Reply: We apologize for the errors in the references. We have checked all of the references and corrected the incorrect entries. We have also examined the full manuscript and corrected the remaining typos. Although these modifications are not listed one by one, we guarantee that they do not affect the content or framework of the paper.


The revised references are now: 

[5] Stai, E.; Kafetzoglou, S.; Tsiropoulou, E.E.; Papavassiliou, S. A holistic approach for personalization, relevance feedback & recommendation in enriched multimedia content. Multimedia Tools and Applications. 2018, 77, 283-326.

[6] Pouli, V.; Kafetzoglou, S.; Tsiropoulou, E.E.; Dimitriou, A.; Papavassiliou, S. Personalized multimedia content retrieval through relevance feedback techniques for enhanced user experience. In 2015 13th International Conference on Telecommunications (ConTEL); IEEE: London, UK, 13-15 July 2015; pp. 1-8.

[7] Bobadilla, J.; Ortega, F.; Hernando, A.; Gutiérrez, A. Recommender systems survey. Knowledge-Based Systems. 2013, 46, 109–132.

[8] Balabanovic, M.; Shoham, Y. Fab: content-based, collaborative recommendation. Communications of the ACM. 1997, 40, 66-72.

[9] Mohammadi, M.R.; Fatemizadeh, E.; Mahoor, M.H. PCA-based dictionary building for accurate facial expression recognition via sparse representation. Journal of Visual Communication & Image Representation. 2014, 25, 1082-1092.


Finally, we thank you again for your careful review, and we look forward to your further suggestions.

 

Sincerely,

 

Yang Lu, Shigang Wang* and Wenting Zhao

 

School of Communication Engineering, Jilin University, No. 5372 Nanhu Road, Changchun 130012, China.

 

E-mail:

Yang Lu: [email protected]; Shigang Wang: [email protected]; Wenting Zhao: [email protected].



Author Response File: Author Response.pdf
