Peer-Review Record

Machine Learning Models for Classification of Human Emotions Using Multivariate Brain Signals

Computers 2022, 11(10), 152; https://doi.org/10.3390/computers11100152
by Shashi Kumar G. S. 1, Ahalya Arun 1, Niranjana Sampathila 2,* and R. Vinoth 1
Reviewer 1: Anonymous
Reviewer 3:
Submission received: 28 July 2022 / Revised: 7 October 2022 / Accepted: 7 October 2022 / Published: 13 October 2022
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI)

Round 1

Reviewer 1 Report

1. There is no in-depth analysis of the reasons for the highest classification accuracy using the BiLSTM network.

2. The classification models used are all existing ones; it is hoped that improvements can be made on the basis of these models.

Author Response

Response to the Reviewer’s comments

Note: The changes made in the manuscript are highlighted with BLUE colored text.

Reviewer 1:

 

R1: Comments and Suggestions for Authors. We thank the reviewer for the valuable comments and suggestions.

  1. There is no in-depth analysis of the reasons for the highest classification accuracy using the BiLSTM network.

Response: The classification of positive and negative emotions, with and without removing the outlier samples of the EEG signal, is performed using Bi-LSTM for all scalp regions, viz. frontal, parietal, occipital and temporal, and also for all 32 electrodes. Among these, the frontal region electrodes (Fp1, F3, F4 and Fp2) and the full set of 32 electrodes showed the highest classification accuracy. Without removing the outlier samples of the EEG signal, the classification accuracy obtained is 90.25% and 91.25% for all 32 electrodes and the four frontal electrodes, respectively. After removing the outlier samples, the classification accuracy increased to 92.15% and 94.95% for all 32 electrodes and the four frontal electrodes, respectively. Line 421
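
The record does not specify the outlier-removal criterion. Purely as an illustrative sketch (in Python, assuming a simple z-score rule on the PSD feature vectors, which is a hypothetical reading, not necessarily the authors' actual method):

    import numpy as np

    def drop_outliers(features, z_thresh=3.0):
        """Drop feature rows whose largest absolute z-score exceeds z_thresh.

        The z-score rule and threshold are assumptions for illustration;
        the manuscript's actual outlier criterion is not stated here.
        """
        z = (features - features.mean(axis=0)) / features.std(axis=0)
        keep = np.abs(z).max(axis=1) <= z_thresh
        return features[keep], keep

    # Example: 1280 trials x 4 frontal-electrode PSD features (synthetic data)
    feats = np.random.randn(1280, 4)
    clean, mask = drop_outliers(feats)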

  2. The classification models used are all existing ones; it is hoped that improvements can be made on the basis of these models.

Response: Yes, all the models used are existing ones; they have been improved through electrode selection, which reduces the data size.

Author Response File: Author Response.pdf

Reviewer 2 Report

NOTES:

- Define the acronyms

- Explain your method better (improve the explanation) and the English writing (sometimes it is not correct)

- I am not writing here all the situations, but the explanation is missing essential details

ABSTRACT:

- Define PSD - Power Spectral Density, BiLSTM, ANN, among others, in the text. Do not use acronyms without defining them.

- What are the innovations? The method or the use of electrodes? Explain

- 94.95% accuracy is ok, but what increased your method accuracy compared to other state-of-the-art methods?

INTRODUCTION:

- Define the acronyms e.g. PSD 

- The innovation and the objectives of your article are not well explained

- What is new, and what is "old"? It is essential to introduce your method and explain it. Use schematics and/or Figures also

- Missing a paragraph explaining the article organization "In Section XX, it will be ...."

RELATED WORK:

- Divide the related work into smaller subsections according to, e.g., the application

- You talk here a little bit about frontal and pre-frontal regions. You have not explained your method yet. Use figures and schematics to explain this

METHODOLOGY:

- Start the Section by introducing the subsections' contents

- Where is the system's explanation? The electrodes position figure? The working mode?

- Define the acronyms e.g. DEAP (I know that the acronym is introduced in Section 2 - Related Work, but the acronym is not defined)

- "...preprocessed MATLAB data" What preprocessing was made? I am not writing here all the situations, but the explanation is missing essential details

- Bad quality figures, the resolution of the images is bad

- Define all the equation parameters, what is x_n, etc.

- You do not have to explain well-known methods here like the LSTM (e.g. Figure 7), but your innovation and the modifications that you have made to adapt it to your specific case

- Figure 10? What is this? I cannot understand anything from the figure. Was it divided into a or b? Why?

- Line 307 - "Acuuracy"?

-"..It is calculated using the following equation (4)." This is not correct. It is "t is calculated using the following equation:"

- The metrics Accuracy, Precision, etc. are well known. Do not waste article space explaining them again here

- Table 2, 3, etc. Why are you showing values here? Add an "Experimental Results" Section

- Processing time? Real-time analysis?

CONCLUSIONS:

- Indicate your method increase of performance using e.g. percentage and if it's a real-time analysis

Author Response

Response to the Reviewer’s comments

Note: The changes made in the manuscript are highlighted with BLUE coloured text.

Reviewer 2:

 

R2: Comments and Suggestions for Authors. We thank the reviewer for the valuable comments and suggestions.

1. Define the acronyms

Response: All the acronyms are defined.

  2. Explain your method better (improve the explanation) and the English writing (sometimes it is not correct)

Response: The EEG signals from various electrodes in different scalp regions, viz. frontal, parietal, temporal and occipital, are studied. Region-based classification is performed considering each scalp region separately. Among all the scalp regions, the frontal region electrodes performed better and gave the highest classification accuracy. The results indicate that the use of the set of frontal electrodes (Fp1, F3, F4, Fp2) for emotion recognition can simplify the acquisition and processing of EEG data. Line 229

English writing and grammatical errors are corrected.

ABSTRACT:

  3. Define PSD - Power Spectral Density, BiLSTM, ANN, among others, in the text. Do not use acronyms without defining them.

Response: All the acronyms are defined.

  4. What are the innovations? The method or the use of electrodes? Explain

Response: The EEG signals from various electrodes in different scalp regions, viz. frontal, parietal, temporal and occipital, are studied. Region-based classification is performed considering each scalp region separately. Among all the scalp regions, the frontal region electrodes performed better and gave the highest classification accuracy. The results indicate that the use of the set of frontal electrodes (Fp1, F3, F4, Fp2) for emotion recognition can simplify the acquisition and processing of EEG data. Line 229
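
For concreteness, selecting these four frontal electrodes from the preprocessed DEAP files could look like the following sketch (Python; the channel indices assume the standard DEAP/Geneva channel ordering, and the file name is illustrative):

    import pickle
    import numpy as np

    # Load one subject's preprocessed DEAP recording (file name is illustrative)
    with open("s01.dat", "rb") as f:
        subject = pickle.load(f, encoding="latin1")

    eeg = subject["data"][:, :32, :]    # 40 trials x 32 EEG channels x 8064 samples
    labels = subject["labels"]          # 40 trials x (valence, arousal, dominance, liking)

    # Frontal electrodes, assuming the Geneva ordering: Fp1=0, F3=2, Fp2=16, F4=19
    frontal_idx = [0, 2, 16, 19]
    frontal_eeg = eeg[:, frontal_idx, :]  # 40 trials x 4 channels x 8064 samples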

  5. 94.95% accuracy is ok, but what increased your method accuracy compared to other state-of-the-art methods?

Response: In the proposed method, a 2-dimensional model (Arousal & Valence) with a minimum number of electrodes is considered. The experimental results demonstrate that region-based classification provides higher accuracy compared to selecting all 32 electrodes. A number of recent neurophysiological studies have reported a correlation between EEG signals and emotions. Studies showed that the frontal scalp seems to store more emotional activation compared to other regions of the brain such as the temporal, parietal and occipital regions. In the experimental results of this study, the frontal region gives higher classification accuracy. Also, among the different brain regions, the frontal region showed improved performance in classifying positive and negative emotions. Line 399

The classification of positive and negative emotions, with and without removing the outlier samples of the EEG signal, is performed using Bi-LSTM for all scalp regions, viz. frontal, parietal, occipital and temporal, and also for all 32 electrodes. Among these, the frontal region electrodes (Fp1, F3, F4 and Fp2) and the full set of 32 electrodes showed the highest classification accuracy. Without removing the outlier samples of the EEG signal, the classification accuracy obtained is 90.25% and 91.25% for all 32 electrodes and the four frontal electrodes, respectively. After removing the outlier samples, the classification accuracy increased to 92.15% and 94.95% for all 32 electrodes and the four frontal electrodes, respectively. Line 421

 

INTRODUCTION:

  6. Define the acronyms e.g. PSD

Response: All the acronyms are defined.

  7. The innovation and the objectives of your article are not well explained

Response: In the proposed method, a 2-dimensional model (Arousal & Valence) with a minimum number of electrodes is considered. The experimental results demonstrate that region-based classification provides higher accuracy compared to selecting all 32 electrodes. A number of recent neurophysiological studies have reported a correlation between EEG signals and emotions. Studies showed that the frontal scalp seems to store more emotional activation compared to other regions of the brain such as the temporal, parietal and occipital regions. In the experimental results of this study, the frontal region gives higher classification accuracy. Also, among the different brain regions, the frontal region showed improved performance in classifying positive and negative emotions. Line 399

  8. What is new, and what is "old"? It is essential to introduce your method and explain it. Use schematics and/or Figures also

Response: Most of the initial research on emotion recognition was carried out on observable verbal or non-verbal emotional expressions (facial expressions, patterns of body gestures). However, tone of voice and facial expression can be deliberately hidden or over-expressed, and in other cases, where the person is physically disabled or introverted, emotions may not be expressible through these channels. This makes such methods less reliable for measuring emotions. In contrast, emotion recognition methods based on physiological signals such as EEG, ECG, EMG and GSR are more reliable, as humans cannot intentionally control them. Among them, the EEG signal is directly generated by the central nervous system and is closely related to the human emotional state. Line 90.

  9. Missing a paragraph explaining the article organization "In Section XX, it will be ...."

Response: The rest of the paper is organized as follows: the literature review in Section 2 is based on the categorization of emotion into two emotion models; Section 3 describes the proposed methodology; experiments and results are given in Section 4; and concluding remarks are given in Section 5. Line 100

RELATED WORK:

  10. Divide the related work into smaller subsections according to, e.g., the application

Response: As per the comments, the related work is divided into smaller subsections.

  11. You talk here a little bit about frontal and pre-frontal regions. You have not explained your method yet. Use figures and schematics to explain this

Response: Electrode placements in the dorsolateral prefrontal cortex and the orbital frontal cortex are F3, F4 and Fp1, Fp2, respectively [54]. Line 213

 METHODOLOGY:

  12. Start the Section by introducing the subsections' contents

Response: In this study, machine learning techniques are adopted to classify emotional states. Based on the 2-dimensional Russell’s emotional model, states of emotion have been classified for each subject using EEG data. The Power Spectral Density (PSD) of the EEG signal for each video and every participant is extracted using MATLAB code. The PSD is given as input to the model, which then classifies the emotions into a positive class and a negative class. Line 223

  13. Where is the system's explanation? The electrodes position figure? The working mode?

Response: The internationally accepted 10-20 arrangement for electrode placement is generally followed while placing the electrodes atop the scalp to cover the brain lobes. From nasion to inion, measurements are taken in the median and transverse planes. Electrode placement locations are determined by dividing the transverse and median planes into intervals of 10%-20% of the distance, as shown in Fig. 2. The scalp regions over which the electrodes are placed are indicated by letters: Frontal (F), Occipital (O), Temporal (T) and Parietal (P). The brain hemispheres over which these electrodes are placed are denoted by odd numbers for the left hemisphere and even numbers for the right. Line 196

 

  14. Define the acronyms e.g. DEAP (I know that the acronym is introduced in Section 2 - Related Work, but the acronym is not defined)

Response: Defined as the Database for Emotion Analysis using Physiological Signals (DEAP) test set. Line 119

  1. "...preprocessed MATLAB data" What preprocessing was made? I am not writing here all the situations, but the explanation is missing essential details

Response: Pre-processing of the data is very much required for improving the signal-to-noise ratio of EEG data. Eye blinks, facial and neck muscle activity, and body movement are the major EEG artefacts. To address these artefacts, pre-processing of the EEG signals was done by the authors of the database before making it publicly accessible. Line 272

  16. Bad quality figures, the resolution of the images is bad

Response: The resolution of all figures is improved.

  17. Define all the equation parameters, what is x_n, etc.

Response: All the equation parameters are defined.

  18. You do not have to explain well-known methods here like the LSTM (e.g. Figure 7), but your innovation and the modifications that you have made to adapt it to your specific case.

Response: Figure 7 is changed to Fig. 8. It is an existing model that has been improved through electrode selection, which reduces the data size; no tuning of attributes for any network has been performed.

  19. Figure 10? What is this? I cannot understand anything from the figure. Was it divided into a or b? Why?

Response: Figure 10 is changed to Figure 11. It is a typical accuracy and loss curve. It was not divided into a or b. Line 355

  20. Line 307 - "Acuuracy"?

Response: Spelling mistake has been corrected. Line 363

  1. -"..It is calculated using the following equation (4)." This is not correct. It is "t is calculated using the following equation:"

Response: Corrections are made as follows: the following measures were used to test model reliability: accuracy, precision, recall and F1-score are calculated from equations (3), (4), (5) and (6), respectively. Line 360.

  22. The metrics Accuracy, Precision, etc. are well known. Do not waste article space explaining them again here

Response: Definitions of all evaluation metrics are removed.

  23. Table 2, 3, etc. Why are you showing values here? Add an "Experimental Results" Section

Response: The values mentioned in Table 2 and Table 3 are obtained experimentally, and they appear under the Results and Discussion section.

  24. Processing time? Real-time analysis?

Response: The processing took around 48 hours. It is not a real-time analysis; the freely available DEAP dataset has been used.

CONCLUSIONS:

  25. Indicate your method's increase of performance using e.g. percentage and if it's a real-time analysis

Response: We did not do a real-time analysis. The freely available DEAP dataset has been used with the authors’ permission. Line 286

 

Author Response File: Author Response.pdf

Reviewer 3 Report


The subject matter is interesting in terms of basic research. Nevertheless, the authors do not clearly state how it can be used in real life.

Moreover, the authors should mention other works that use face analysis to remotely determine the emotions of the targeted persons. This approach has real-world applications, especially in security, in therapy... The following references should be cited with this in mind:

Past, Present, and Future of Face Recognition: A Review. Electronics 2020, 9, 1188. https://doi.org/10.3390/electronics9081188

Automatic emotion recognition for groups: a review, in IEEE Transactions on Affective Computing, doi: 10.1109/TAFFC.2021.3065726.

They can also be oriented towards modalities that complement the face (gestures, speech ...):

An EEG-Based Brain Computer Interface for Emotion Recognition and Its Application in Patients with Disorder of Consciousness in IEEE Transactions on Affective Computing, vol. 12, no. 4, pp. 832-842, 1 Oct.-Dec. 2021, doi: 10.1109/TAFFC.2019.2901456.

Finally, I am surprised that the authors do not mention the possibility of classification of emotions at a distance using the Brain-Computer Interface concept:

Automatic Pain Estimation from Facial Expressions: A Comparative Analysis Using Off-the-Shelf CNN Architectures. Electronics 2021, 10, 1926. https://doi.org/10.3390/electronics10161926

Moreover, the authors must define all the acronyms from their first use including in the abstract where PSD and BiLSTM are not defined.

Is the number of participants equal to 32 sufficient to validate your approach?

In all curves, the x-axis and y-axis must be defined.

The relation (2) is false

The relation (3) does not represent the PSD!

Why use xn once and x[n] another time?

The flow chart has to be redone.

The classification part is naive and the authors do not give any information about the input parameters.

In the definition of F1 score, replace "*" by the multiplication symbol.

The conclusion must be more precise and the results must be quantified.

Author Response

Response to the Reviewer’s comments

Note: The changes made in the manuscript are highlighted with BLUE coloured text.

Reviewer 3:

R3: Comments and Suggestions for Authors. We thank the reviewer for the valuable comments and suggestions.

1. The subject matter is interesting in terms of basic research. Nevertheless, the authors do not clearly state how it can be used in real life.

Response: The improved accuracy enables this system to be used in different real-time applications such as wearable sensor design and biofeedback applications for monitoring stress and psychological wellbeing. Line 433

2. Moreover, the authors should mention other works that use face analysis to remotely determine the emotions of the targeted persons. This approach has real-world applications, especially in security, in therapy... The following references should be cited with this in mind:

Response: Cited. Ref. [55] to [58]. Line 556.

3. Past, Present, and Future of Face Recognition: A Review. Electronics 2020, 9, 1188. https://doi.org/10.3390/electronics9081188

Response: Cited as Ref. [55]. Line 558.

  4. Automatic emotion recognition for groups: a review, in IEEE Transactions on Affective Computing, doi: 10.1109/TAFFC.2021.3065726.

Response: Cited as Ref. [56]. Line 560.

  5. They can also be oriented towards modalities that complement the face (gestures, speech ...):

Response: Yes, there are different patterns for understanding emotional states, including speech, face and gestures. The physiology behind emotional states is associated with the limbic system, which is a major part of the brain. The EEG signal originates directly from the brain and significantly carries the signature of emotional states. Line 409

  6. An EEG-Based Brain Computer Interface for Emotion Recognition and Its Application in Patients with Disorder of Consciousness, in IEEE Transactions on Affective Computing, vol. 12, no. 4, pp. 832-842, Oct.-Dec. 2021, doi: 10.1109/TAFFC.2019.2901456.

Response: Cited as Ref. [57]. Line 562.

  7. Finally, I am surprised that the authors do not mention the possibility of classification of emotions at a distance using the Brain-Computer Interface concept:

Response: The classification of positive and negative emotions, with and without removing the outlier samples of the EEG signal, is performed using Bi-LSTM for all scalp regions, viz. frontal, parietal, occipital and temporal, and also for all 32 electrodes. Among these, the frontal region electrodes (Fp1, F3, F4 and Fp2) and the full set of 32 electrodes showed the highest classification accuracy. Without removing the outlier samples of the EEG signal, the classification accuracy obtained is 90.25% and 91.25% for all 32 electrodes and the four frontal electrodes, respectively. After removing the outlier samples, the classification accuracy increased to 92.15% and 94.95% for all 32 electrodes and the four frontal electrodes, respectively. Line 421

 

  8. Automatic Pain Estimation from Facial Expressions: A Comparative Analysis Using Off-the-Shelf CNN Architectures. Electronics 2021, 10, 1926. https://doi.org/10.3390/electronics10161926

Response: Cited as Ref. [58]. Line 557.

  9. Moreover, the authors must define all the acronyms from their first use including in the abstract where PSD and BiLSTM are not defined.

Response: All acronyms are well defined.

  10. Is the number of participants equal to 32 sufficient to validate your approach?

Response: Yes, our approach is validated with data from 32 subjects.

  11. In all curves, the x-axis and y-axis must be defined.

Response: The x-axis and y-axis label parameters are defined.

  12. The relation (2) is false

Response: Yes. The earlier equation (2) has been removed.

 

The DFT of the EEG signal is calculated using equation (1).

  13. The relation (3) does not represent the PSD!

Response: Equation has been corrected.

where x[n] is a finite-duration data sequence recorded from any electrode, N is the number of samples in the sequence, and fs is the sampling frequency. Line 306
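
For reference, a periodogram-style PSD estimate of this form, P(f_k) = |X(f_k)|^2 / (fs * N), can be computed as in the sketch below (Python; the exact estimator and normalization used in the manuscript may differ):

    import numpy as np
    from scipy.signal import periodogram

    fs = 128.0                          # DEAP sampling rate after downsampling
    x = np.random.randn(60 * int(fs))   # placeholder for a 60 s EEG segment
    f, pxx = periodogram(x, fs=fs)      # pxx[k] ~ |X(f_k)|^2 / (fs * N), one-sided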

  14. Why use xn once and x[n] another time?

Response: Equation has been corrected. Line 305

  15. The flow chart has to be redone.

Response: The flow chart has been redrawn. Line 309

  16. The classification part is naive and the authors do not give any information about the input parameters.

Response: The proposed work uses the DEAP dataset for EEG signals. The EEG data used is pre-processed by downsampling the signal to 128 Hz and removing EOG artefacts. A bandpass filter has been applied to acquire the signal between 4.0 and 45.0 Hz. The data is averaged to a common reference and segmented into 60 s windows to get the EEG signal for each video. The Power Spectral Density (PSD) of the EEG signal for each video and every participant is extracted using MATLAB code. Line 286.
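
The DEAP authors applied this preprocessing before release, so it is not repeated in the proposed work; purely as an illustration, a comparable 4-45 Hz bandpass step might look like this sketch (Python, on an assumed synthetic input):

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 128.0                                 # sampling rate after downsampling
    b, a = butter(4, [4.0, 45.0], btype="bandpass", fs=fs)

    raw = np.random.randn(60 * int(fs))        # placeholder 60 s raw channel
    filtered = filtfilt(b, a, raw)             # zero-phase 4-45 Hz bandpass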

  17. In the definition of F1 score, replace "*" by the multiplication symbol.

Response: Replaced "*" with the multiplication symbol. Line 371.

  18. The conclusion must be more precise and the results must be quantified.

Response: The classification of positive and negative emotions, with and without removing the outlier samples of the EEG signal, is performed using Bi-LSTM for all scalp regions, viz. frontal, parietal, occipital and temporal, and also for all 32 electrodes. Among these, the frontal region electrodes (Fp1, F3, F4 and Fp2) and the full set of 32 electrodes showed the highest classification accuracy. Without removing the outlier samples of the EEG signal, the classification accuracy obtained is 90.25% and 91.25% for all 32 electrodes and the four frontal electrodes, respectively. After removing the outlier samples, the classification accuracy increased to 92.15% and 94.95% for all 32 electrodes and the four frontal electrodes, respectively. Line 421


Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

NOTES:

- The figures' overall quality is terrible, mainly due to the resolution used 

- The system explanation is not yet well performed. You have to be clearer in your explanation. You present a flow chart in Figure 6 that I cannot read

RELATED WORK:

- Divide the related work into smaller subsections according to, e.g., the application - Not done

METHODOLOGY:

- Where is the system's explanation? The electrode position figure? The working mode? - Missing a proper explanation of the system's working mode. Basically

- You do not have to explain well-known methods here like the LSTM (e.g. Figure 7), but your innovation and the modifications that you have made to adapt it to your specific case - Missing a better explanation of the innovation and the performed modifications

- Figure 10? What is this? I cannot understand anything from the figure. Was it divided into a or b? Why? - I cannot read anything from Figure 11

- Processing time? Real-time analysis? - You said, "Processing time took around 48 hours. It's not a Real-time analysis. Freely available DEAP datasets have been used." - Do I want a method that takes 48 hours to give me results? Processing time is important in any system. If I have infinite time, I can design an "infinitely good algorithm"

CONCLUSIONS:

- Indicate your method's increase of performance using e.g. percentage, and if it's a real-time analysis - Do I want a method that takes 48 hours to give me results?

Author Response

Response to the Reviewer’s comments
Note: The changes made in the manuscript are highlighted with GREEN coloured text. 

Reviewer 2:

R2: Comments and Suggestions for Authors. We thank the reviewer for the valuable comments and suggestions.

1. The figures' overall quality is terrible, mainly due to the resolution used  

Response: The figures’ resolution has been improved.
2. The system explanation is not yet well performed. You have to be clearer in your explanation. You present a flow chart in Figure 6 that I cannot read 
Response: We updated the explanation of the system with the following details: Line 393
To detect emotions from EEG data, a deep-learning BiLSTM network is used. To adopt a pure subject-independent strategy, the model is trained and tested on the DEAP database. All 32 and the four frontal (Fp1, Fp2, F3 and F4) EEG electrodes are chosen from DEAP. Based on the valence rating, positive and negative emotions are classified. The DEAP database is available in accordance with the valence and arousal scale. Fig. 6 shows that a valence greater than 1 indicates positive emotion and a valence less than 1 indicates negative emotion.
In the proposed approach, PSD features are extracted from the EEG signal. The normalized PSD features are used for training the LSTM and BiLSTM architectures. The BiLSTM outputs are connected through a fully connected layer and a softmax layer. This softmax layer is used to generate the positive or negative emotion status.
Flow chart in Figure 6 is redrawn.
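
A minimal sketch of such a BiLSTM classifier, assuming Keras/TensorFlow; the input shape, layer sizes and optimizer are illustrative assumptions, not the authors' exact configuration:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Assumed input: a sequence of normalized PSD feature frames
    n_timesteps, n_features = 60, 4   # e.g. 60 frames x 4 frontal electrodes (assumed)

    model = models.Sequential([
        layers.Input(shape=(n_timesteps, n_features)),
        # Two LSTMs: one reads the sequence forward, the other backward
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(32, activation="relu"),
        # Softmax output over the two classes: positive vs. negative emotion
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])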
 
RELATED WORK: 
3. Divide the related work into smaller subsections according to e.g. to the application - Not done
Response: We have divided the related work section into sub sections based on applications of Machine Learning (ML) and Deep Learning network. 
METHODOLOGY: 
4. Where is the system's explanation? The electrode position figure? The working mode? - Missing a proper explanation of the system's working mode.
Response: Line 393. To detect emotions from EEG data, a deep-learning BiLSTM network is used. To adopt a pure subject-independent strategy, the model is trained and tested on the DEAP database. All 32 and the four frontal (Fp1, Fp2, F3 and F4) EEG electrodes are chosen from DEAP. Based on the valence rating, positive and negative emotions are classified. The DEAP database is available in accordance with the valence and arousal scale. Fig. 6 shows that a valence greater than 1 indicates positive emotion and a valence less than 1 indicates negative emotion.
In the proposed approach, PSD features are extracted from the EEG signal. The normalized PSD features are used for training the LSTM and BiLSTM architectures. The BiLSTM outputs are connected through a fully connected layer and a softmax layer. This softmax layer is used to generate the positive or negative emotion status.

Line 224. The internationally accepted 10-20 arrangement for electrode placement is generally followed while placing the electrodes atop the scalp to cover the brain lobes. From nasion to inion, measurements are taken in the median and transverse planes. Electrode placement locations are determined by dividing the transverse and median planes into intervals of 10%-20% of the distance, as shown in Fig. 2.
 
Fig. 2. EEG electrode placement.
Line 231. The numbers 10 and 20 indicate the distance between adjacent electrodes (10% or 20% of the total front-back or right-left distance of the skull). Each site has a letter to identify the lobe and a number to identify the hemisphere location. F stands for Frontal, T for Temporal, C for Central (although there is no central lobe, the letter C is used for identification purposes), P for Parietal, and O for Occipital. Z (zero) refers to an electrode placed on the midline. Even numbers refer to electrode positions on the right hemisphere, while odd numbers refer to the left one.

5. You do not have to explain well-known methods here like the LSTM (e.g. Figure 7), but your innovation and the modifications that you have made to adapt it to your specific case - Missing a better explanation of the innovation and the performed modifications
Response: Figure 7 is modified as Fig. 8.
Line 383. The modified LSTM model, referred to as Bi-LSTM, consists of two LSTM models, where one LSTM takes the input in the forward direction and the other takes the input in the backward direction. The BiLSTM model classifies the emotion into positive and negative using the sum of Valence and Dominance, and the sum of Arousal and Liking.
6. Figure 10? What is this? I cannot understand anything from the figure. Was it divided into a or b? Why? - I cannot read anything from Figure 11 
Response: Figure 10 is a typical Bi-LSTM network architecture.
The previously provided Figure 10 is now relabelled as Figure 11. It is a typical accuracy and loss curve that explains the performance of the proposed model. It was not divided into a or b.
7. Processing time? Real-time analysis? - You said, "Processing time took around 48 hours. It's not a Real-time analysis. Freely available DEAP datasets have been used." - Do I want a method that takes 48 hours to give me results? Processing time is important in any system. If I have infinite time, I can design an "infinitely good algorithm" 
Response: The time taken to build the BiLSTM system is about 48 hours during the training phase. Once the system has been trained, the time taken (in real time) for a decision on a given test sample is negligibly small.
CONCLUSIONS: 
8. Indicate your method's increase of performance using e.g. percentage, and if it's a real-time analysis 
Response: The proposed work could be implemented for real-time applications. However, the algorithms developed are tested using the DEAP dataset.
9. Do I want a method that takes 48 hours to give me results? 
Response: No sir.
What we have mentioned is that the time taken to build the BiLSTM system is about 48 hours during the training phase. Once the system has been trained, the time taken (in real time) for a decision on a given test sample is negligibly small.

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors have globally answered our remarks; nevertheless, relation (2) is still false. Moreover, f is not the sampling frequency, and T and delta(t) are not defined in the manuscript.

I ask the authors to be a little more serious and professional if they want their manuscript to be accepted.

Author Response

Response sheet attached

Author Response File: Author Response.pdf
