Article
Peer-Review Record

Unsupervised Deep Anomaly Detection for Industrial Multivariate Time Series Data

Appl. Sci. 2024, 14(2), 774; https://doi.org/10.3390/app14020774
by Wenqiang Liu 1, Li Yan 2, Ningning Ma 1, Gaozhou Wang 2, Xiaolong Ma 1, Peishun Liu 1,* and Ruichun Tang 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 16 December 2023 / Revised: 11 January 2024 / Accepted: 13 January 2024 / Published: 16 January 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The paper is in general well written. However, some improvements are needed, as noted below.

 

A short description of paper’s structure should be added at the end of the introduction.

 

It is recommended to refer to the particular elements in the manuscript (like “Fig. 1”) rather than writing “The following diagram illustrates the network architecture we propose:” (line 164).

 

Fig. 1 – what do the symbols mean (z, d, p)? This is not clear from the picture, nor from the corresponding text.

 

Formatting in some places should be revised (see e.g. lines 326-332).

 

Fig. 3 is much too small.

 

Tables 3 and 6 need better formatting (the numbers are too close to each other).

 

What are the limitations of the proposed approach? Please discuss them.

 

What is the next step of your research? Such information would be nice at the end of the paper.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

REVIEW OF MANUSCRIPT applsci-2802840

The paper describes an algorithm for unsupervised deep anomaly detection for industrial multivariate time series data.

The paper is well prepared, the literature review is detailed and informative, the method is well presented despite the complexity of the subject, and the conclusions are clear. The authors use six benchmark datasets to compare the proposed AT-DCAEP method with ten other methods.

I have some comments to correct/improve the manuscript:

Equation 12: The variable ct should be written with a capital C (i.e., Ct).

Line 270: Please refer explicitly to the names of the activation functions σ and tanh. Presumably these are the sigmoid and hyperbolic tangent functions, but this should be stated in the text.

Line 296: In the sequence 𝐻={ℎ0,ℎ0,⋯,ℎ𝑡−1,ℎ𝑡}, please replace the second h0 with h1.
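For reference, the standard LSTM formulation presumably underlying Equations 9–14 is given below, with σ the sigmoid function, tanh the hyperbolic tangent, ⊙ element-wise multiplication, and [h_{t−1}, x_t] vector concatenation (standard notation; the manuscript's exact symbols and indexing may differ):

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f [h_{t-1}, x_t] + b_f\right) \\
i_t &= \sigma\!\left(W_i [h_{t-1}, x_t] + b_i\right) \\
\tilde{C}_t &= \tanh\!\left(W_C [h_{t-1}, x_t] + b_C\right) \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \\
o_t &= \sigma\!\left(W_o [h_{t-1}, x_t] + b_o\right) \\
h_t &= o_t \odot \tanh(C_t), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}
\end{aligned}
```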

Please provide some more information about the software you used or developed: did you use commercial software and connect existing modules, or did you develop your own source code? If the latter, which programming language was used?

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

Comments and Suggestions for Authors

The researchers provided a new model for anomaly detection. Currently, it is not surprising to apply NLP methods to other areas of AI research, such as time-series analysis in this case. Using attention from transformers, the authors trained a hybrid model called the Attention-Based Deep Convolutional Autoencoding Prediction Network. The key idea they used came from GANs, where one part of the network tries to deceive another. In the first part of the network, the authors extract low-dimensional features using convolutional layers and multi-head attention. Finally, the second part of the neural network predicts the reconstruction error.
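The reconstruction-error principle described above can be sketched in a few lines: an encoder compresses each multivariate window, a decoder reconstructs it, and the reconstruction error serves as the anomaly score. The linear encoder/decoder and the random weights below are illustrative placeholders only, not the paper's AT-DCAEP model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_enc):
    """Project a multivariate window to a low-dimensional latent representation."""
    return np.tanh(x @ w_enc)

def decode(z, w_dec):
    """Map the latent representation back to the input space."""
    return z @ w_dec

def anomaly_score(x, w_enc, w_dec):
    """Mean squared reconstruction error, used as the anomaly score."""
    x_hat = decode(encode(x, w_enc), w_dec)
    return float(np.mean((x - x_hat) ** 2))

# Toy example: windows of 16 time steps over 8 sensor channels, latent size 3.
w_enc = rng.normal(scale=0.1, size=(8, 3))
w_dec = rng.normal(scale=0.1, size=(3, 8))

normal_window = rng.normal(scale=0.1, size=(16, 8))     # low-amplitude signal
anomalous_window = rng.normal(scale=5.0, size=(16, 8))  # abnormal high-amplitude burst

# An out-of-distribution window reconstructs poorly, so its score is higher.
assert anomaly_score(anomalous_window, w_enc, w_dec) > \
       anomaly_score(normal_window, w_enc, w_dec)
```

In the paper's setting, the untrained linear maps would be replaced by the trained convolutional/attention encoder and the prediction network, but the scoring logic is the same.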

 

Some issues with the mathematical notation in Equations 9–14 were found. Most articles on LSTMs use strict mathematical symbols for multiplication, but the authors used square brackets and did not define them afterward. The symbols for real numbers at line 294 should be consistent with mathematical conventions. In general, the manuscript is well structured and clearly written; references are used where needed, and the goal is explicitly conveyed.

 

The steps of the model's construction are coherent; each includes an explanation of the disadvantage overcome at that step.

 

While reading the related work section, a question arose from the intersection of CNNs and LSTMs: C-LSTM was included in the related work, but ConvLSTM, which likewise combines the advantages of CNNs and LSTMs, was not. It is also recommended to update the references and include recent sources (no earlier than 2018).

 

 

The experiment section is well written, but a paragraph on data preprocessing, if any was performed, should definitely be included. The neural network was tested on various datasets to conduct a comprehensive analysis, and the authors achieved this successfully. No link to a code repository was provided, so the question of reproducibility remains open.

 

The figures in the article are of low quality and should be improved. Figure 5 should be decomposed. No concerns were found with the tables.

 

 

Most references are relevant, except the LSTM reference from 1967 (ref. 15); the background of neural network models should be out of the scope of the article.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

I am satisfied with the responses and the revision.
