Applied Sciences
  • Article
  • Open Access

27 July 2022

Fv-AD: F-AnoGAN Based Anomaly Detection in Chromate Process for Smart Manufacturing

1 Department of Smart Factory Convergence, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon 16419, Korea
2 Department of Applied Data Science, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon 16419, Korea
3 Department of Advanced Materials Science & Engineering, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon 16419, Korea
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue AI Applications in the Industrial Technologies

Abstract

Anomaly detection for quality prediction has become increasingly important as data collection has expanded in fields such as smart factories and healthcare systems. In existing manufacturing processes, various attempts have been made to improve discrimination accuracy despite the data imbalance inherent in anomaly detection. Because predicting the quality of a chromate process strongly influences the completeness of the process, anomaly detection is important. However, obtaining image data by monitoring the manufacturing process is difficult, and prediction is challenging owing to data imbalance. Accordingly, the proposed model employs an unsupervised Generative Adversarial Network (GAN), trains on normal images only, and augments the Fast Unsupervised Anomaly Detection with GAN (F-AnoGAN) baseline with a visualization component that provides a more intuitive judgment of defects in chromate process data. In addition, anomaly scores are calculated by mapping to the latent space, and new data are applied to confirm anomaly detection and the corresponding defect locations. This paper thus presents a GAN architecture that detects anomalies in chromate facility data in a smart manufacturing environment; it achieves meaningful performance, and the added visualization component provides an explainable interpretation. Experiments on chromate process data show that the loss value, anomaly score, and anomaly position accurately distinguish abnormal images.

1. Introduction

Supervised learning is difficult to apply to anomaly prediction because it is limited to domains where abnormal data can be collected. Recently, unsupervised or semi-supervised models have been increasingly used for anomaly detection. The GAN, an unsupervised model, can mitigate data-imbalance problems and is widely used for anomaly detection [1,2], and GANs have been successfully demonstrated as deep generative models. GAN-based anomaly detection methods detect anomalies using only the normal data learned during training. GAN-based anomaly-detection models continue to be developed, and the GAN is actively studied in the image field. Recently, semi-supervised learning has also been used to improve classification problems [3].
Anomaly detection distinguishes normal from abnormal (defective) data, and making this distinction is a critical issue in every domain. AI can preferentially flag products that are expected to be defective during the manufacturing process, shortening working hours. AI also mitigates problems such as workers' lack of expertise and human error by reviewing first and suggesting the products deemed defective. Anomaly detection is applicable in a variety of domains but usually requires an annotation process for the unusual data. Furthermore, data collection in the manufacturing industry is difficult because defective samples occur far less frequently than normal samples [4].
Therefore, domains with severe data imbalance require unsupervised-learning-based anomaly detection. In this paper, we train a GAN on normal data only, input new data into the trained model, compare the generated image with the original image, and predict anomalies from the difference [5]. We also propose augmenting the discriminator portion of the model with a visualization of the output, making the outlier position more intuitive and descriptive.
Data imbalance impedes anomaly detection in the manufacturing process, and various studies address it using batch sampling [6] and augmentation techniques. We expect to avoid the problem by training only on normal data [7]. Therefore, this study applies the fast anomaly detection with GAN (F-AnoGAN) technique to manufacturing process data that are difficult to label. Even in the absence of abnormal data, the GAN can be trained with normal data only by generating and learning from arbitrary data. AnoGAN was the first study to use a GAN for anomaly detection; we apply F-AnoGAN, a method developed from AnoGAN, to manufacturing process data. Localization additionally highlights damaged areas of an image, making them visible [8].
Among GAN models, we propose detecting and localizing anomalies in the chromate process using the F-AnoGAN technique, an anomaly-detection GAN. The anomalous part of the image is rendered as an image, and performance is verified and demonstrated by the loss value and anomaly score. Furthermore, the F-AnoGAN model proposed in this paper is intended for real-world anomaly detection and use, and it outperforms the earlier Anomaly GAN (AnoGAN) model on quantitative indicators [9].
This paper proposes a method for predicting outliers in the quality-check stage from image data generated by the chromate process. The predictive performance of the proposed Fv-AnoGAN, and the visualization it provides, proved significant, suggesting that image data from the manufacturing process can contribute to outlier-detection prediction.
The rest of this paper is structured as follows. Section 2 introduces the anomaly detection cases and existing anomaly detection models used in the smart factory. Section 3 elaborates on the proposed F-AnoGAN model's components and approaches, as well as its key ideas. Section 4 presents the experimental environment, structure, and evaluation results of the dataset. Finally, Section 5 concludes the paper with a summary of evaluation results and directions for future research.

3. Fv-AD: F-AnoGAN Based Anomaly Detection

3.1. System Architecture

As the output of anomaly detection, we aim to present anomalous images together with localization results and anomaly scores. The proposed anomaly-detection model is named fast visualization anomaly detection GAN (Fv-AnoGAN).
Figure 4 shows an overview of the Fv-AnoGAN model and its localization section. As in F-AnoGAN, a WGAN and an encoder are used, and the model is composed of input data, layers, a generator, a discriminator, an encoder layer, and the output. Training uses images of normal data collected from the chromate process as input. The input data are first used to train the WGAN, and encoder training is then performed with the normal images on top of this well-trained GAN. The encoder is trained against an intermediate layer of the discriminator using the residual loss in feature space. After training the GAN and encoder, in the detection step, unseen data are passed through the trained model to compute an anomaly score and to inspect the localized anomaly image. The F-AnoGAN model, which performs well in anomaly detection, was used in this study; it was developed to improve on AnoGAN, the first GAN-based anomaly-detection method. Figure 5 shows the overall structure of the system and how the F-AnoGAN model processes image data generated by the chromate process.
Figure 4. Fv-AD Model.
Figure 5. System architecture for chromate process.
The visualization of the abnormal part is then produced using the localization technique, along with the anomaly score. F-AnoGAN training is divided into two stages: WGAN and encoder [20]. A GAN trains two models, a generator G and a discriminator D: G models the distribution of the training data and attempts to produce similar images, while D categorizes the fake data generated by G; the two compete in an adversarial process. The WGAN improves convergence by redefining the training metric, replacing the Jensen-Shannon divergence with the Wasserstein distance. The first step in building the model is to train the WGAN so that generator G learns only the distribution of normal data, producing only normal images. The encoder is then trained to map images to the latent space. In AnoGAN, mapping an image to the latent space starts from a random initialization and requires iterative optimization, which often maps poorly. To address this, F-AnoGAN employs an autoencoder-style encoder model [29], which compresses the input data into a latent vector. The encoder is needed because the GAN weights are fixed at this point: if a query image were entered into the GAN at test time, the generator would produce only normal data unrelated to the query. The mapping to latent space is therefore based on the encoder, which is trained to enable the inverse mapping x → z, and anomaly detection is performed with the trained model using the discriminator-feature residual loss and the image reconstruction error of G(z).
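The two training stages described above can be sketched in outline; a minimal sketch, where `train_wgan` and `train_encoder` are hypothetical stand-ins for the actual WGAN and encoder training loops:

```python
def train_fv_anogan(normal_images, train_wgan, train_encoder):
    """Two-stage training outline: (1) train the WGAN (generator G and
    discriminator D) on normal images only; (2) with G and D frozen,
    train the encoder E that maps images into the latent space."""
    G, D = train_wgan(normal_images)        # stage 1: adversarial training
    E = train_encoder(normal_images, G, D)  # stage 2: GAN weights fixed
    return G, D, E
```

At detection time, a query image x is passed through E and G to obtain the reconstruction G(E(x)) used for scoring and localization.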

3.2. Calculating Anomaly Score

GAN training yields the generator G, which maps the latent space to images, z → x, but not the inverse mapping from the image manifold to the latent space, x → z, which is required for outlier detection. AnoGAN solves this by iteratively optimizing the latent code, whereas F-AnoGAN learns an encoder E that enables the reverse mapping E(x): x → z. F-AnoGAN proposes three encoder training schemes; this study uses the last one, the izi_f (image → z → image with feature loss) scheme, which uses a feature-space loss on top of the image → z → image reconstruction. During training, the mapping from a real image to the latent space z is performed by the encoder model; when a query image x arrives, this allows a reconstruction G(E(x)) resembling x to be created quickly. The izi architecture has the same structure as an autoencoder or CAE [30], with the generator acting as the decoder behind the encoder. During training, the encoder learns the mapping from a real image to z, combined with G, which maps z back to image space. This architecture is similar to an image-to-image autoencoder, and training minimizes the MSE between x and G(E(x)); the izi training objective therefore enforces similarity in image space. The loss function of the izi encoder training architecture is as follows: it places the generator behind the encoder and minimizes the MSE loss.
L_izi(x) = (1/n) ‖x − G(E(x))‖²,
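As a concrete illustration, the izi loss above is a per-pixel mean squared error; a minimal NumPy sketch, where the reconstruction G(E(x)) is passed in as a precomputed array rather than produced by an actual network:

```python
import numpy as np

def loss_izi(x, g_e_x):
    """izi loss: (1/n) * ||x - G(E(x))||^2, with n the number of pixels."""
    x, g_e_x = np.asarray(x, float), np.asarray(g_e_x, float)
    return np.sum((x - g_e_x) ** 2) / x.size

x = np.ones((4, 4))
assert loss_izi(x, x) == 0.0                  # perfect reconstruction -> zero loss
assert loss_izi(x, np.zeros((4, 4))) == 1.0   # unit per-pixel error -> loss 1.0
```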
Minimizing the pixel-by-pixel difference alone does not guarantee a realistic normal image: an image with a small residual may be output even for an abnormal image, so the image-space residual alone is not a reliable signal. This motivates extending the architecture by also comparing statistics computed from the real and reconstructed images. These statistics are computed with the discriminator's intermediate-layer feature map f(·), whose output dimension is denoted n_d [5]. The discriminator feature in F-AnoGAN is inspired by the feature-matching technique proposed in [31] and is related to the loss used in the initial outlier-detection work for iteratively mapping z values. The izi objective constrains only the generated image; its shortcoming is that the quality of the mapping to the latent space is unknown, so the exact result cannot be predicted from the residual loss alone. Adding the feature statistics of the real and generated images yields the izi_f architecture, whose loss function is as follows:
L_izif(x) = (1/n) ‖x − G(E(x))‖² + (κ/n_d) ‖f(x) − f(G(E(x)))‖²,
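A NumPy sketch of the izi_f loss, with the reconstruction G(E(x)) and the discriminator's intermediate features f(x) and f(G(E(x))) passed in as precomputed arrays (a hypothetical helper, not the authors' implementation):

```python
import numpy as np

def loss_izif(x, g_e_x, f_x, f_gex, kappa=1.0):
    """izi_f loss: image-space MSE plus the kappa-weighted MSE between
    discriminator features; n = pixel count, n_d = feature dimension."""
    x, g_e_x = np.asarray(x, float), np.asarray(g_e_x, float)
    f_x, f_gex = np.asarray(f_x, float), np.asarray(f_gex, float)
    image_term = np.sum((x - g_e_x) ** 2) / x.size
    feature_term = kappa * np.sum((f_x - f_gex) ** 2) / f_x.size
    return image_term + feature_term

# With identical features, izi_f reduces to the izi image term.
assert loss_izif([1.0, 1.0], [0.0, 0.0], [0.5], [0.5]) == 1.0
```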
From the formula, the discriminator features are related to the loss used in AnoGAN, and the discriminator-feature term reuses the parameters obtained when training the GAN, providing a good mapping between the image and latent spaces. The parameters learned during WGAN training, including the discriminator parameters, are fixed during encoder training [5], and the Figure 6 architecture, the encoder training architecture selected from the F-AnoGAN model, guides encoder training in the image and latent spaces at the same time.
Figure 6. i z i f Encoder training.
After training the GAN and encoder, the anomaly score is calculated by entering a query image x. The anomaly score is computed with the following equation [5].
A(x) = A_R(x) + κ · A_D(x)
The anomaly score represents the deviation between the query and reconstructed images during outlier detection. As the formula in [5] shows, it has the same form as the izi_f loss and typically yields a high score for an abnormal image and a low score for a normal image. The residual loss used during training is employed to calculate the score. Because the model learns only normal images, the encoder reconstructs images similar to the input only when x is a normal image.
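The scoring rule follows directly from the equation above; in the sketch below, A_R is the image-space residual and A_D the discriminator-feature residual, computed as in the izi_f loss, and κ is the weighting constant:

```python
def anomaly_score(residual, feature_residual, kappa=1.0):
    """A(x) = A_R(x) + kappa * A_D(x): high for abnormal images,
    low for normal images the GAN can reconstruct well."""
    return residual + kappa * feature_residual

assert anomaly_score(1.0, 0.5) == 1.5
# Larger residuals always yield a larger score:
assert anomaly_score(0.2, 0.1) > anomaly_score(0.01, 0.005)
```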

4. Experiment and Results

4.1. Experiment Environment

Python 3.7 and PyTorch 1.7.0 were used for the model configurations proposed in this paper. We defined the models with the torch.nn and torchvision modules and used GPU acceleration in a Google Colaboratory cloud environment, so that all training and experiments ran under the same library versions and version conflicts were avoided. Google Colaboratory Pro provides a graphics processing unit (GPU; e.g., a T4 or P100). We obtained good results in this environment; however, the training performance and the degree of convergence of the neural networks may vary depending on the GPU.

4.2. Datasets

In this paper, experiments were conducted using manufacturing-process image data collected from an actual chromate process [32]. Chromate has a large impact on process completeness, so the data were collected for quality prediction, and we used them to classify normal and poor quality. Figure 7 depicts part of the collected dataset.
Figure 7. Datasets.
Chromate treatment is used as a post-treatment after galvanizing or cadmium plating; chromate treatment after galvanizing is, in fact, essential. Coating a rust-preventive film using bichromate is applied to products that require gloss. In the main film-forming reaction, zinc dissolves, the hydrogen-ion concentration at the zinc interface decreases, bichromate ions are reduced, and the film precipitates on the zinc surface. In the chromate process, if the solution concentration is low, the result is not white; if the concentration is high, the film thickness is reduced. There are various plating methods; the dataset used here comes from the chromate step of an electroplating process.
Data are collected in real time through a PLC and sensors and stored as CSV files indexed by collection time; images for visual quality evaluation of the product are stored in PNG form through an image-collection device. The dataset therefore consists of CSV and PNG files, and the image data are used to train automatic visual quality inspection that detects defective products more accurately. The training data comprise 1103 normal and 76 abnormal images, enabling experiments on imbalanced data. The input images are then resized to 64 × 64, 128 × 128, and 256 × 256, the sizes that showed the best performance across various experiments.
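The resizing step can be sketched with a simple nearest-neighbour routine; this is a stand-in for illustration, under the assumption that the real pipeline resizes the PNG images with torchvision transforms:

```python
import numpy as np

CANDIDATE_SIZES = (64, 128, 256)  # input sizes compared in the experiments

def resize_nearest(img, size):
    """Nearest-neighbour resize of a 2-D grayscale array to (size, size)."""
    h, w = img.shape
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    return img[rows][:, cols]

img = np.arange(10000, dtype=float).reshape(100, 100)
batch = {s: resize_nearest(img, s) for s in CANDIDATE_SIZES}
assert batch[64].shape == (64, 64)
```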

4.3. Performance Matrix

This subsection uses two tables to validate the numerical values of the anomaly scores. Separating normal and abnormal data, the tables show the image distance, the z distance learned by the encoder, the anomaly score, and the loss value. The values in the two tables are the anomaly-score statistics for the test data, with label 0 representing anomalous data and label 1 representing normal data.
image_distance is the MSE loss in the loss function; MSE loss is commonly used for differences between images or between segmentation masks. It is obtained by subtracting the target image from the generated image, and its average value for the abnormal data is 0.06, whereas the average for the normal data in Table 1 is 0.01, indicating a difference between the two classes.
Table 1. Performance matrix of anomaly detection score for normal data.
z_distance also uses the MSE loss function; unlike image_distance, it measures the difference between the generated value and the target in the latent representation learned by the encoder. In Table 2, the average z_distance is 0.11, while Table 1 shows that the z_distance of the normal data is 0.005, confirming a significant numerical difference between the two classes.
Table 2. Performance matrix of anomaly detection score for anomaly data.
The anomaly score is described in Section 3. The difference between the discriminator features of the generated value and of the target is referred to as the loss value.
Table 1 presents the experimental outlier-detection scores for the normal data. Compared with the results for the abnormal data in Table 2, there is a clear difference in all of the figures.

4.4. Results and Analysis

For WGAN training, the final hyperparameter settings were: epochs = 100, lr = 0.0002, batch_size = 32, b1 = 0.6, b2 = 0.999, latent_dim = 100, sample_interval = 400. For encoder training, the settings were: epochs = 200, batch_size = 32, lr = 0.0002, b1 = 0.5, b2 = 0.999, latent_dim = 100, sample_interval = 400. Training was performed with these settings, and the following performance was confirmed.
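The reported settings, collected in one place for reference; the grouping of b1/b2 as Adam-style betas is an assumption based on the naming:

```python
# Final hyperparameter settings reported for the two training stages.
WGAN_HPARAMS = dict(epochs=100, lr=0.0002, batch_size=32,
                    b1=0.6, b2=0.999, latent_dim=100, sample_interval=400)
ENCODER_HPARAMS = dict(epochs=200, lr=0.0002, batch_size=32,
                       b1=0.5, b2=0.999, latent_dim=100, sample_interval=400)

# An Adam optimizer would then be configured with betas=(b1, b2), e.g.:
# torch.optim.Adam(G.parameters(), lr=WGAN_HPARAMS["lr"],
#                  betas=(WGAN_HPARAMS["b1"], WGAN_HPARAMS["b2"]))
```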
Figure 8 shows a histogram of the anomaly scores for normal and abnormal data. In the histogram, the abnormal-image distribution lies at 0.15 or higher and the normal-image distribution at 0.06 or lower, so the threshold for the anomaly score can be set to any value between 0.06 and 0.15. The separation between the normal and abnormal data is significant and clearly visible; the anomaly score can thus serve as an important indicator of the anomaly-detection result in the process. In the graph, the x-axis represents the anomaly score and the y-axis the count. Furthermore, the anomaly score proposed in F-AnoGAN distinguishes defective from normal data, and the comparison image obtained from the difference between the test and generated images accurately determines the abnormal part.
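A threshold chosen from the gap in the histogram can be applied directly; a minimal sketch using the label convention of Tables 1 and 2 (0 = anomaly, 1 = normal), where the value 0.10 is an assumed choice inside the reported (0.06, 0.15) gap:

```python
NORMAL_MAX = 0.06    # normal scores clustered at or below this value
ABNORMAL_MIN = 0.15  # abnormal scores clustered at or above this value
THRESHOLD = 0.10     # assumed choice inside the (0.06, 0.15) gap

def classify(score, threshold=THRESHOLD):
    """Label a query image from its anomaly score: 1 = normal, 0 = anomaly."""
    return 1 if score < threshold else 0

assert classify(0.01) == 1   # typical normal score
assert classify(0.20) == 0   # typical abnormal score
```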
Figure 8. Histogram of anomaly score.
Figure 9 shows the loss curves obtained using the F-AnoGAN model. The x-axis represents the number of data points and the y-axis the loss value. The figure shows that F-AnoGAN outperforms AnoGAN in the experiments: the loss value of F-AnoGAN is lower than that of AnoGAN, and the images created by F-AnoGAN are more sophisticated than those created by AnoGAN.
Figure 9. Loss function.
The image on the left side of Figure 10 shows image data of a defective product from the chromate process. The product was found to be defective due to a crack at the bottom, and anomaly detection was performed using the generated image. The middle image was created by the generator; although it does not attain the resolution of the real image, it is a plausible fake. The final image compares the real image mapped through the encoder with the fake image produced by the generator; the darkened part of the image indicates the abnormal region, which can be identified visually.
Figure 10. Bottom Crack Image Data Comparison.
The leftmost image in Figure 11 shows image data of a defective product from the chromate process. The product was found to be defective due to a crack at the top, and anomaly detection was performed with the generated image. The middle image is a fake generated by the generator; compared to AnoGAN, it shows high resolution. The last image compares the real image mapped through the encoder with the generated fake. Comparing Figures 10 and 11, the darkened region appears in the corresponding location in each case, showing that the bottom crack in Figure 10 and the top crack in Figure 11 were each detected where they occur.
Figure 11. Top Crack Image Data Comparison.
Figure 12 shows the anomaly-detection result for a real image without a crack. The image on the left shows image data of a normal product obtained during the chromate process. The middle image was generated by the generator; it does not reach the resolution of the real image but produces a plausible fake. The final image compares the real image mapped through the encoder with the generated fake. Unlike Figures 10 and 11, no dark black region indicating an abnormal location is found, demonstrating that the normal image was correctly detected.
Figure 12. Normal Image Data Comparison.
Localization is used for anomaly detection: after training with F-AnoGAN, the location of the anomaly can be checked. Because the image is binarized using a threshold, the position of the abnormal part is rendered in dark black. Figure 13 shows that, after the proposed F-AnoGAN learns the chromate process data, abnormalities can be judged and predicted using localization and anomaly scores.
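The thresholded binarization described above can be sketched as a residual map between the real image and its reconstruction; `thresh` is an assumed value, and the marked pixels correspond to the regions rendered dark in the paper's figures:

```python
import numpy as np

def localize(real, fake, thresh=0.5):
    """Binarize the absolute residual between the real image and its
    reconstruction: pixels whose difference exceeds `thresh` are marked
    as the anomalous region."""
    residual = np.abs(np.asarray(real, float) - np.asarray(fake, float))
    return (residual > thresh).astype(np.uint8)

real = np.zeros((8, 8))
fake = np.zeros((8, 8))          # reconstruction of a normal surface
real[6:, :] = 1.0                # simulate a crack at the bottom of the image
mask = localize(real, fake)
assert mask[6:, :].all()         # the bottom rows are marked as anomalous
```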
Figure 13. Localization images.
Table 3 compares the approaches according to encoder training. The same WGAN training was used in each case, with an unrestricted encoder with a linear output layer. Three versions, the ziz, izi, and izi_f encoder training architectures based on WGAN, are compared, and the izi_f encoder model proposed in this paper achieves the best performance.
Table 3. Encoder training performance.
Although various evaluation indicators exist for measuring a model, in this study, the model's performance was confirmed and verified using the five items shown in Table 4. The refinement of the data during preprocessing has a large impact on accuracy, which can vary with data quality; the image data collected in the chromate process achieve high performance in all models, and the proposed Fv-AnoGAN model exhibited the highest performance.
Table 4. Model performance.
The fast visualization anomaly detection (Fv-AD) model was newly defined in this study, and positive and useful results were obtained by applying it to data generated in the chromate process for outlier detection. The abnormal location in the defective image was determined using unsupervised-learning-based F-AnoGAN and validated using outlier scores. Because the post-processing step greatly affects quality, the approach is expected to solve problems in the chromate process caused by worker inexperience or by defects that humans fail to catch at the manufacturing site, enabling accurate quality prediction. Furthermore, among the various unsupervised learning techniques, these attempts using chromate process data are considered meaningful. In future research, we will extend F-AnoGAN by studying a combined model that lets artificial intelligence present explainable results during model training, combining an XAI model [33] with F-AnoGAN.

5. Conclusions

Manufacturing processes have long studied real-time monitoring methods that detect process anomalies early to achieve product homogeneity [34]. In the existing manufacturing process, various attempts have been made to determine outliers accurately despite data imbalance. In addition, anomaly detection is difficult because of technical limitations or workers' limited efficiency. Quality issues are not limited to the mass-production stage; the whole chain of product planning, R&D, mass production, and service must be managed as one. To solve these problems, we proposed a manufacturing-process model using F-AnoGAN, which has excellent outlier-detection performance. Across various experiments, data from the actual manufacturing process showed optimal performance.
As for the expected effects, the completeness of the chromate process significantly affects quality, and chromium, an environmentally regulated substance, can be substituted. The method is expected to be useful to manufacturers that require cost reduction and practical quality prediction, and it is anticipated that it will be applied to processes similar to electroplating, such as melt plating and chemical plating. In the future, by comparing more models, we may be able to build a model that discriminates outliers at the pixel level, and we intend to improve efficiency by optimizing parameters or reducing training and generation time.
Because pGAN uses the WGAN-GP loss, which further increases reliability and robustness, adopting pGAN is also considered a future complement to the model [22]; optimization to create lightweight models suitable for use in a variety of domains is also expected.

Author Contributions

Conceptualization, C.P. and S.L.; methodology, C.P.; software, C.P. and S.L.; validation, C.P. and S.L.; formal analysis, D.C.; investigation, D.C.; resources, C.P.; data curation, C.P.; writing—original draft preparation, C.P., S.L. and D.C.; writing—review and editing, S.L.; visualization, S.L.; supervision, J.J.; project administration, J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2018-0-01417) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). Also, this work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1F1A1060054).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in this published article.

Acknowledgments

This research was supported by Sungkyunkwan University and the BK21 FOUR (Graduate School Innovation) funded by the Ministry of Education (MOE, Korea) and the National Research Foundation of Korea (NRF), and by the ITRC (Information Technology Research Center) support program (IITP-2022-2018-0-01417) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014.
  2. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434.
  3. Pham, H.; Dai, Z.; Xie, Q.; Le, Q.V. Meta pseudo labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 21–25 June 2021; pp. 11557–11568.
  4. Chalapathy, R.; Chawla, S. Deep learning for anomaly detection: A survey. arXiv 2019, arXiv:1901.03407.
  5. Schlegl, T.; Seeböck, P.; Waldstein, S.M.; Langs, G.; Schmidt-Erfurth, U. f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. Med. Image Anal. 2019, 54, 30–44.
  6. Lee, K.; Lim, J.; Bok, K.; Yoo, J. Handling method of imbalance data for machine learning: Focused on sampling. J. Korea Contents Assoc. 2019, 19, 567–577.
  7. Li, Z.; Kamnitsas, K.; Glocker, B. Analyzing overfitting under class imbalance in neural networks for image segmentation. IEEE Trans. Med. Imaging 2020, 40, 1065–1077.
  8. Huang, Y.; Juefei-Xu, F.; Guo, Q.; Liu, Y.; Pu, G. FakeLocator: Robust localization of GAN-based face manipulations. IEEE Trans. Inf. Forensics Secur. 2022; Early Access.
  9. Schlegl, T.; Seeböck, P.; Waldstein, S.M.; Schmidt-Erfurth, U.; Langs, G. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In Proceedings of the International Conference on Information Processing in Medical Imaging, Boone, NC, USA, 25–30 June 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 146–157.
  10. Nam, H. A case study on the application of process abnormal detection process using big data in smart factory. Korean J. Appl. Stat. 2021, 34, 99–114.
  11. Jiang, W.; Hong, Y.; Zhou, B.; He, X.; Cheng, C. A GAN-based anomaly detection approach for imbalanced industrial time series. IEEE Access 2019, 7, 143608–143619.
  12. Lu, H.; Du, M.; Qian, K.; He, X.; Wang, K. GAN-based data augmentation strategy for sensor anomaly detection in industrial robots. IEEE Sens. J. 2021.
  13. Kiran, B.R.; Thomas, D.M.; Parakkal, R. An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos. J. Imaging 2018, 4, 36.
  14. Oz, M.A.N.; Mercimek, M.; Kaymakci, O.T. Anomaly localization in regular textures based on deep convolutional generative adversarial networks. Appl. Intell. 2022, 52, 1556–1565.
  15. Munir, M.; Siddiqui, S.A.; Dengel, A.; Ahmed, S. DeepAnT: A deep learning approach for unsupervised anomaly detection in time series. IEEE Access 2018, 7, 1991–2005.
  16. Li, D.; Chen, D.; Goh, J.; Ng, S.K. Anomaly detection with generative adversarial networks for multivariate time series. arXiv 2018, arXiv:1809.04758.
  17. Li, D.; Chen, D.; Jin, B.; Shi, L.; Goh, J.; Ng, S. MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks. In Proceedings of the International Conference on Artificial Neural Networks, Munich, Germany, 17–19 September 2019; pp. 703–716.
  18. Akcay, S.; Atapour-Abarghouei, A.; Breckon, T.P. GANomaly: Semi-supervised anomaly detection via adversarial training. In Proceedings of the Asian Conference on Computer Vision, Perth, Australia, 2–6 December 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 622–637.
  19. Bae, S.; Kim, M.; Jung, H. GAN system using noise for image generation. J. Korea Inst. Inf. Commun. Eng. 2020, 24, 700–705.
  20. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 214–223.
  21. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved training of Wasserstein GANs. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017.
  22. Berg, A.; Ahlberg, J.; Felsberg, M. Unsupervised learning of anomaly detection from contaminated image data using simultaneous encoder training. arXiv 2019, arXiv:1905.11034.
  23. Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.; Wang, Z.; Paul Smolley, S. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2794–2802.
  24. Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation. arXiv 2017, arXiv:1710.10196.
  25. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
  26. Venkataramanan, S.; Peng, K.C.; Singh, R.V.; Mahalanobis, A. Attention guided anomaly localization in images. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 485–503.
  27. Park, C.H.; Kim, T.; Kim, J.; Choi, S.; Lee, G.H. Outlier detection by clustering-based ensemble model construction. KIPS Trans. Softw. Data Eng. 2018, 7, 435–442.
  28. Yoo, J.; Choo, J. A study on the test and visualization of change in structures associated with the occurrence of non-stationary of long-term time series data based on unit root test. KIPS Trans. Softw. Data Eng. 2019, 8, 289–302.
  29. Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P.A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 1096–1103.
  30. Pathak, D.; Krahenbuhl, P.; Donahue, J.; Darrell, T.; Efros, A.A. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2536–2544.
  31. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved techniques for training GANs. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, 5–10 December 2016.
  32. KAMP (Korea AI Manufacturing Platform). CNC Machine AI Dataset; KAIST (UNIST, EPM SOLUTIONS), 2022. Available online: https://www.kamp-ai.kr/front/dataset/AiData.jsp (accessed on 11 June 2020).
  33. Das, A.; Rad, P. Opportunities and challenges in explainable artificial intelligence (XAI): A survey. arXiv 2020, arXiv:2006.11371.
  34. Bentley, K.H.; Kleiman, E.M.; Elliott, G.; Huffman, J.C.; Nock, M.K. Real-time monitoring technology in single-case experimental design research: Opportunities and challenges. Behav. Res. Ther. 2019, 117, 87–96.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
