Article

Feature Disentangling Autoencoder for Anomaly Detection of Reactor Core Temperature with Feature Increment Strategy

1 State Key Laboratory of Nuclear Power Safety Technology and Equipment, China Nuclear Power Engineering Co., Ltd., Shenzhen 518172, Guangdong, China
2 College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
* Authors to whom correspondence should be addressed.
Processes 2023, 11(5), 1486; https://doi.org/10.3390/pr11051486
Submission received: 21 March 2023 / Revised: 11 May 2023 / Accepted: 11 May 2023 / Published: 14 May 2023
(This article belongs to the Section Automation Control Systems)

Abstract
Anomaly detection for core temperature has great significance in maintaining the safety of nuclear power plants. However, traditional auto-encoder-based anomaly detection methods may extract redundant latent space features, which can lead to missing and false alarms. To address this problem, the idea of feature disentangling is introduced under the auto-encoder framework in this paper. First, a feature disentangling auto-encoder (DAE) is proposed, where a latent space disentangling loss is designed to disentangle the features. We further propose an incrementally feature disentangling auto-encoder (IDAE), an improved version of DAE. In the IDAE model, an incremental feature generation strategy is developed, which enables the model to evaluate the disentangling degree and adaptively determine the feature dimension. Furthermore, an iterative training framework is designed, which focuses on the parameter training of the newly incremented feature, overcoming the difficulty of model training. Finally, we illustrate the effectiveness and superiority of the proposed method on a real nuclear reactor core temperature dataset. IDAE achieves average false alarm rates of 4.745% and 6.315% and average missing alarm rates of 6.4% and 2.9%, respectively, on the two monitoring statistics, outperforming the other methods.

1. Introduction

Nuclear power generation is a clean and effective method of power generation. In China, nuclear power is developing efficiently and steadily, and it plays an increasingly important role in optimizing the energy structure [1,2,3,4]. The nuclear power generation process is complex in scale and structure, with the reactor core being the energy core of the whole nuclear power system. Since the fission reaction in the core proceeds continuously under high temperature and high pressure, nuclear power generation has higher requirements for process safety and reliability than traditional power generation. Core temperature is the most direct representation of core health. An abnormal core temperature distribution could lead to serious accidents such as core meltdowns, causing serious casualties and economic losses. Therefore, it is important to perform accurate and reliable anomaly detection for reactor core temperature [5,6,7,8].
Traditional core temperature anomaly detection is usually based on mechanistic analysis methods, which are widely used in the process monitoring systems of large power plants nowadays. These methods model the process mechanism related to core temperature and determine abnormal conditions from empirical knowledge [9,10,11,12]. However, they are limited by high labor costs, low detection efficiency and poor real-time performance. Recently, data-driven anomaly detection methods have advanced greatly [13,14,15]. These methods do not rely on mechanism-related empirical knowledge; instead, they utilize the large amount of data collected during system operation to capture the information hidden within and achieve efficient, real-time anomaly detection. At present, research on data-driven anomaly detection methods for reactor core temperature data is still at an early stage. Thus, here, we expand the scope of current research and introduce related work in industrial scenarios to provide some ideas for core temperature anomaly detection.
In industrial scenarios, data-driven anomaly detection methods based on multivariate statistical analysis are widely used [16,17,18,19]. These methods aim to map process data from a high-dimensional space to a low-dimensional feature space, extract the feature information of the original data and calculate statistics for process fault detection. Li and Yan [20] combined the characteristics of principal component analysis (PCA) and independent component analysis (ICA) to establish a comprehensive process monitoring model, fully considering the Gaussian and non-Gaussian distributions in process data. Shang et al. [21] proposed a process monitoring strategy based on Slow Feature Analysis (SFA), which can distinguish real faults that cause dynamic process anomalies from regular switching of operating conditions, accomplishing the fault detection task with higher accuracy. The above methods have strong interpretability and are relatively easy to implement. However, they are not suitable for industrial processes with strong nonlinearity. There exist complicated nonlinear correlations between core temperature variables at different measurement points, and these methods cannot effectively capture the nonlinear characteristics of temperature data. As a result, they cannot achieve sufficient detection accuracy in core temperature anomaly detection tasks.
In recent years, anomaly detection methods based on deep learning have developed rapidly and have gradually been applied to anomaly detection tasks in industrial processes with complicated nonlinearity [22,23,24,25,26]. These methods build a deep neural network model and use the nonlinear mappings between neurons to learn the nonlinear features hidden in the data to detect anomalies. Among them, anomaly detection methods based on the auto-encoder (AE) and its variants are representative. Chen et al. [27] designed a conditional discriminant auto-encoder (CDAE) network to establish steady sub-mode models for nonstationary and nonlinear industrial processes, which can help improve monitoring performance. Yu et al. [28] used the elastic net strategy to sparsify the encoder network parameters of a denoising auto-encoder model, which gave the model strong robustness against noise and enabled it to extract nonlinear features more efficiently. However, in the latent space generated by an AE, there may exist a high degree of correlation between the features in different dimensions, leading to information redundancy. Based on these correlated features, the calculated monitoring statistics may fail to accurately describe the degree of deviation of test data from the model in the feature space, resulting in missing and false alarms in anomaly detection tasks.
To address the above problem, we introduce the idea of feature disentangling under the auto-encoder framework for the first time. First, a feature disentangling auto-encoder (DAE) is proposed, where a latent space disentangling loss is specially designed to endow the traditional AE model with the capability of feature disentangling. However, the model training and the determination of the disentangled feature dimension remain challenges for DAE. An improved version of DAE, named the incrementally feature disentangling auto-encoder (IDAE), is then proposed to deal with these challenges. In the IDAE network, an incremental feature generation strategy and an iterative training framework are designed. These enable the IDAE network to adaptively determine the appropriate feature dimension while focusing on the parameter training of the newly incremented feature, overcoming the difficulty of model training. Furthermore, a comprehensive anomaly detection scheme for core temperature is developed by constructing two monitoring statistics based on the disentangled feature space and residual space, helping to achieve fewer false and missing alarms in the anomaly detection tasks.
The novel contributions are summarized as follows:
(1) We introduce the idea of feature disentangling into anomaly detection tasks under the AE framework for the first time. A DAE network is thus obtained, where a latent space disentangling loss is designed to disentangle the latent space features while retaining the ability to capture the nonlinear relations in temperature data;
(2) A feature incremental and stepwise iterative training framework is proposed to revise DAE and thus obtain the IDAE network. This enables the IDAE network to achieve a higher degree of feature disentangling while adaptively determining the appropriate feature dimension for the anomaly detection tasks;
(3) We develop a comprehensive anomaly detection scheme for reactor core temperature by constructing two statistics based on the disentangled feature space and residual space, which achieves fewer missing and false alarms.
The remainder of this article is organized as follows. In Section 2, the designed DAE network is introduced and its limitations are analyzed. Section 3 presents the structure and training framework of the IDAE network. Section 4 describes the online anomaly detection procedure. Section 5 reports the experimental results of the proposed methods. Finally, the conclusion is drawn in the last section.

2. Feature Disentangling under the AE Framework

In reactor core temperature anomaly detection tasks, the traditional AE model faces the problem of extracting redundant latent space features, which may lead to missing and false alarms. Therefore, we introduce the idea of feature disentangling under the AE framework to extract the independent features potentially hidden in the core temperature variables. A feature disentangling auto-encoder (DAE) is first designed to implement our idea.
The designed DAE network uses the same structure as the traditional AEs. As shown in Figure 1, DAE transforms high-dimensional core temperature data into low-dimensional features and recovers the temperature data from the features using the “Encoder” and “Decoder” networks.
In the training stage of the DAE model, the original input core temperature data are collected during the regular operation of nuclear power plants. We define the input core temperature data matrix as $X \in \mathbb{R}^{m \times n} = [x_1, x_2, \ldots, x_n]$, where $n$ is the number of temperature data samples and $m$ is the number of temperature variables collected at different locations in the reactor core. $X$ is input into the “Encoder” network to calculate the latent space feature matrix $H \in \mathbb{R}^{k \times n}$, where $k$ is the feature dimension,
$$H = \sigma(WX + b) \tag{1}$$
where $W$ is a weight matrix, $b$ is a bias vector, and $\sigma(\cdot)$ is a nonlinear activation function.
The “Decoder” network reconstructs the temperature data matrix $\hat{X}$ from the feature matrix $H$,
$$\hat{X} = \hat{\sigma}(\hat{W}H + \hat{b}) \tag{2}$$
where $\hat{W}$ is a weight matrix, $\hat{b}$ is a bias vector, and $\hat{\sigma}(\cdot)$ is a nonlinear activation function of the decoder. Define $\theta = \{W, b, \hat{W}, \hat{b}\}$ as the set of the parameters mentioned above.
In a traditional AE network, these parameters can be obtained by solving the following optimization problem,
$$\theta^{*} = \arg\min_{\theta} L(X, \hat{X}) = \arg\min_{\theta} \frac{1}{n} \sum_{t=1}^{n} \left\| x_t - \hat{x}_t \right\|_2^2 \tag{3}$$
where $x_t \in \mathbb{R}^{m \times 1}$, $t \in [1, 2, \ldots, n]$, is an input temperature data sample and $\hat{x}_t \in \mathbb{R}^{m \times 1}$ is the corresponding reconstructed sample. $L(X, \hat{X})$ is named the reconstruction error. The optimization problem is typically solved using stochastic gradient descent [20].
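For concreteness, the sketch below shows how such a single-hidden-layer AE and the reconstruction objective of Equations (1)–(3) can be written in PyTorch, the framework used later in Section 5.2. The layer sizes, the tanh activation and the synthetic batch are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, m: int, k: int):
        super().__init__()
        # H = sigma(W X + b), Equation (1)
        self.encoder = nn.Sequential(nn.Linear(m, k), nn.Tanh())
        # X_hat = sigma_hat(W_hat H + b_hat), Equation (2)
        self.decoder = nn.Sequential(nn.Linear(k, m), nn.Tanh())

    def forward(self, x):
        h = self.encoder(x)
        return h, self.decoder(h)

model = AutoEncoder(m=40, k=6)                     # m: number of temperature variables
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(64, 40)                            # stand-in batch of normalized samples
h, x_hat = model(x)
loss = ((x - x_hat) ** 2).sum(dim=1).mean()        # reconstruction error of Equation (3)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```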
To realize feature disentangling in the DAE network, we design a latent space disentangling loss and add it to the optimization objective of the model. Consider the extracted feature matrix $H \in \mathbb{R}^{k \times n} = [h_1, h_2, \ldots, h_k]^T$, where $h_i \in \mathbb{R}^{1 \times n}$, $i \in [1, 2, \ldots, k]$, are feature vectors, shown as the thin green bars in Figure 1. The covariance matrix $C$ of the feature matrix $H$ is defined as,
$$C = \begin{bmatrix} \mathrm{Cov}(h_1, h_1) & \cdots & \mathrm{Cov}(h_1, h_k) \\ \vdots & \ddots & \vdots \\ \mathrm{Cov}(h_k, h_1) & \cdots & \mathrm{Cov}(h_k, h_k) \end{bmatrix} \tag{4}$$
$$\mathrm{Cov}(h_i, h_j) = E(h_i \cdot h_j) - E(h_i) \cdot E(h_j) \tag{5}$$
where the function $\mathrm{Cov}(h_i, h_j)$ calculates the covariance between two feature vectors $h_i$ and $h_j$, and $E(h_i)$ is the mean value of $h_i$.
If the features of all dimensions are independent, the covariance between features of different dimensions is 0 and the covariance matrix $C$ is diagonal. Therefore, the latent space disentangling loss is designed to constrain $C$ to be diagonal.
Define $S_{main}$ and $S_{other}$ as the sums of squares of the main diagonal elements and of the other elements of matrix $C$, respectively, which are expressed as,
$$S_{main} = \sum_{i \in [1, \ldots, k]} \mathrm{Cov}(h_i, h_i)^2 \tag{6}$$
$$S_{other} = \sum_{i, j \in [1, \ldots, k],\, i \neq j} \mathrm{Cov}(h_i, h_j)^2 \tag{7}$$
The latent space disentangling loss $L_{DAE}(H)$ aims to enlarge the difference between the main diagonal elements and the other elements to promote the independence of the features; it is defined as,
$$L_{DAE}(H) = -\left( \frac{1}{k} S_{main} - \frac{1}{k^2 - k} S_{other} \right) \tag{8}$$
As shown in Figure 1, the DAE network combines this disentangling loss with the reconstruction loss to perform the training process, generating all the disentangled features at one time. In other words, the entire procedure of feature generation for core temperature anomaly detection is under an end-to-end training framework.
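As a minimal sketch, the loss of Equation (8) can be computed directly from a feature matrix with torch.cov. The leading minus sign follows our reconstruction of the formula above, and the weighting between the two loss terms (the hyper-parameter β mentioned in Section 5.3) is an assumption here.

```python
import torch

def dae_disentangling_loss(H: torch.Tensor) -> torch.Tensor:
    """Latent space disentangling loss of Equation (8); H has shape (k, n)."""
    k = H.shape[0]
    C = torch.cov(H)                         # covariance matrix of Equation (4)
    s_main = (torch.diagonal(C) ** 2).sum()  # squared diagonal elements, Equation (6)
    s_other = (C ** 2).sum() - s_main        # squared off-diagonal elements, Equation (7)
    return -(s_main / k - s_other / (k ** 2 - k))

# End-to-end DAE objective (beta weights the disentangling term):
# loss = reconstruction_loss + beta * dae_disentangling_loss(H)
```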
Unfortunately, it is almost impracticable for DAE to achieve feature disentangling by relying solely on this latent space disentangling loss. On the one hand, the feature disentangling loss $L_{DAE}(H)$ in Equation (8) is relatively complicated, being composed of two summations and involving the computation of $k \times k$ covariances. It is difficult for DAE to effectively optimize such a complicated loss under an end-to-end AE training framework. On the other hand, the selection of the feature dimension $k$ is not easy, yet it can have a significant impact on core temperature anomaly detection performance. If $k$ is too large, redundancy among the features is inevitable, making feature disentangling difficult to realize. If $k$ is too small, the information loss during feature extraction will be large, so the features cannot adequately represent the information in the input core temperature data.
Therefore, although the idea of feature disentangling introduced in the DAE network is meaningful, the model remains difficult to solve: training the disentangling model and determining the feature dimension remain challenging.

3. The IDAE Network for Incrementally Feature Disentangling

In this section, we revise the DAE model and propose the IDAE network, which can incrementally generate the disentangled features and resolve the issues encountered by the DAE model. We will first introduce the structure of the IDAE network and then present the iterative strategy that IDAE uses to generate disentangled features step by step.

3.1. Structure of IDAE Network

To resolve the two issues in the DAE model, we revise it and propose the IDAE network. Instead of generating all $k$ latent space features at one time, the IDAE network generates the features step by step and applies disentangling constraints under an iterative training framework. This enables the IDAE network to realize more sufficient feature disentangling while adaptively determining the appropriate feature dimension for core temperature anomaly detection.
The structure of the IDAE network is shown in Figure 2. Between the input and output, there exist a hidden layer H and a buffer layer Q with multiple sublayers inside. The thin green and blue bars represent the existing feature vectors in layer H and layer Q, respectively. The hidden layer H stands for the latent space discussed in this paper, and the features inside are regarded as the latent space features. Instead of fixing the feature dimension before training and generating all the features at one time as in the DAE model in Figure 1, both layer Q and layer H in the IDAE network are designed to allow for feature increments. The thin white bars with a dotted outline in Figure 2 represent the places for feature increments. In addition to the Encoder and Decoder modules, we name the neural network between layer Q and layer H the Latent module. In other words, the output of layer Q can be mapped into layer H through the Latent module.
To generate the latent space features step by step in the core temperature anomaly detection task, we begin to train this IDAE network with only a one-dimensional feature in the hidden layer. If the reconstruction performance is poor, we add another one-dimensional feature to hidden layer H and retrain the network to compensate for the information loss. Meanwhile, we need to make sure the incremented feature is independent of the formerly generated one; in other words, they should be disentangled. If the degree of feature disentangling is not high enough, we perform feature increments in buffer layer Q and retrain the network. This helps introduce more information from the input core temperature data to layer H, making it easier to realize feature disentangling. In the next subsection, we will show in detail how the feature increment operation is implemented and introduce the complete training framework of the IDAE network.

3.2. Iterative Training Framework of IDAE Network

To describe the entire training framework of the IDAE network more clearly, here we set the number of sublayers in buffer layer Q to 1. As shown in Figure 3, in the initial state of training, the IDAE network aims to generate a one-dimensional latent space feature in layer H and has $c$ features in layer Q. The features in layers H and Q are shown as the single green bar and the group of $c$ blue bars, respectively, in the figure. We define the mapping functions of the Encoder, Latent and Decoder modules when generating a one-dimensional latent space feature as $f_{encoder}^{1}$, $f_{latent}^{1}$, $f_{decoder}^{1}$, with corresponding network parameters $\theta_{encoder}^{1}$, $\theta_{latent}^{1}$, $\theta_{decoder}^{1}$.
For the input core temperature data matrix $X \in \mathbb{R}^{m \times n}$, IDAE first maps it into the buffer layer to obtain a feature matrix $Q \in \mathbb{R}^{c \times n} = [q_1, q_2, \ldots, q_c]^T$,
$$Q = f_{encoder}^{1}(X) \tag{9}$$
The matrix $Q$ is then mapped into the one-dimensional latent space feature $h_1 \in \mathbb{R}^{1 \times n}$ through the Latent module,
$$h_1 = f_{latent}^{1}(Q) \tag{10}$$
Eventually, the feature $h_1$ is mapped to the output space to obtain the reconstructed core temperature data matrix $\hat{X} \in \mathbb{R}^{m \times n}$,
$$\hat{X} = f_{decoder}^{1}(h_1) \tag{11}$$
The optimization problem of the initial training only includes minimizing the reconstruction loss. Let $\theta^{1} = \{\theta_{encoder}^{1}, \theta_{latent}^{1}, \theta_{decoder}^{1}\}$; then it can be expressed as,
$$\theta^{1*} = \arg\min_{\theta^{1}} L(X, \hat{X}) \tag{12}$$
Here, we define a metric $E = L(X, \hat{X})$ to describe the ability of the IDAE model to reconstruct core temperature data and set the corresponding threshold $E_{th}$. If the model shows $E > E_{th}$ after the first training, the one-dimensional feature $h_1$ cannot capture enough information from the core temperature data for reconstruction. Then, we need to apply feature increments in layer H to enhance the reconstruction capability. We add one extra feature $h_2$ to be trained, shown as the yellow bar in Figure 4. After the increment operation, we input the core temperature data again and retrain the IDAE model to generate the new feature $h_2$. Meanwhile, we expect the feature $h_1$ to be unaffected; therefore, part of the network parameters must be kept unchanged during this retraining process. We indicate the updatable and unchanged parameters by the orange and grey parts marked in the modules in Figure 4, respectively. We define the new mapping functions of the Encoder, Latent and Decoder modules when generating the second latent space feature as $f_{encoder}^{2}$, $f_{latent}^{2}$, $f_{decoder}^{2}$, with corresponding network parameters $\theta_{encoder}^{2}$, $\theta_{latent}^{2}$, $\theta_{decoder}^{2}$.
For the parameter group $\theta_{encoder}^{2}$ in the Encoder module, we keep it equal to $\theta_{encoder}^{1}$ and unchanged during this retraining process. For the parameter group $\theta_{latent}^{2}$ in the Latent module, the $\theta_{latent}^{1}$ part is kept unchanged and some extra parameters $\tilde{\theta}_{latent}^{1}$ are added to generate the new feature $h_2$. In other words, $\theta_{latent}^{2} = \{\theta_{latent}^{1}, \tilde{\theta}_{latent}^{1}\}$ and only $\tilde{\theta}_{latent}^{1}$ is updated during the retraining process, which is indicated by the extra orange parts marked in the Latent module. The parameters $\theta_{decoder}^{2}$ in the Decoder module are updated completely.
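One way to realize this "freeze the old, train the new" scheme in PyTorch is to give the Latent module one linear head per latent feature and freeze all existing heads before appending a new one. This per-head decomposition is an illustrative assumption; the paper specifies the parameter groups but not a concrete implementation.

```python
import torch
import torch.nn as nn

class IncrementalLatent(nn.Module):
    """Latent module mapping buffer features Q to latent features H."""
    def __init__(self, c: int):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(c, 1)])    # theta_latent^1 generates h_1

    def add_feature(self):
        for p in self.parameters():                      # freeze all trained heads
            p.requires_grad_(False)
        # trainable extra parameters (theta~_latent) for the new feature h_k
        self.heads.append(nn.Linear(self.heads[0].in_features, 1))

    def forward(self, q):                                # q: (batch, c)
        return torch.cat([head(q) for head in self.heads], dim=1)   # H: (batch, k)
```

During retraining, only the parameters with requires_grad=True (the new head and the whole decoder) are passed to the optimizer, so $h_1$ is reproduced unchanged while $h_2$ is learned.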
A feature disentangling loss $L_{IDAE}(H)$ is added when generating $h_2$ to keep it independent of $h_1$. This loss differs from the loss $L_{DAE}(H)$ of the DAE network. Instead of considering the whole covariance matrix, $L_{IDAE}(H)$ calculates the average covariance between the newly generated latent space feature being trained and the previously trained features,
$$L_{IDAE}(H) = \frac{1}{k-1} \sum_{i=1}^{k-1} \left| \mathrm{Cov}(h_k, h_i) \right| \tag{13}$$
where $k$ is the current dimension of the latent space features, $h_k$ is the $k$th feature to be trained and $h_i$ $(i = 1, 2, \ldots, k-1)$ represents the previously trained features.
The optimization objective is now composed of the reconstruction loss term and this disentangling loss term. Let $\theta^{2} = \{\theta_{encoder}^{2}, \theta_{latent}^{2}, \theta_{decoder}^{2}\}$; the objective is expressed as,
$$\theta^{2*} = \arg\min_{\theta^{2}} L(X, \hat{X}) + \mu L_{IDAE}(H) \tag{14}$$
where $\mu$ is a model hyper-parameter. In the subsequent training process, the model keeps using this optimization objective.
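A sketch of Equations (13) and (14) is given below. The absolute value keeps the metric non-negative, matching the threshold test on $D$ introduced shortly; this sign convention is our reading of the source formula.

```python
import torch

def idae_disentangling_loss(H: torch.Tensor) -> torch.Tensor:
    """Equation (13): average covariance between the newest feature h_k
    (last row of H, shape (k, n)) and the k-1 previously trained features."""
    C = torch.cov(H)                 # (k, k) covariance matrix
    return C[-1, :-1].abs().mean()   # mean |Cov(h_k, h_i)| for i = 1, ..., k-1

# Training objective after the first latent increment, Equation (14):
# loss = reconstruction_loss + mu * idae_disentangling_loss(H)   # mu = 0.5 in Section 5.2
```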
Here, we discuss the impact of the loss weight $\mu$. If $\mu$ is too small, the effort IDAE devotes to feature disentangling in each training epoch is insufficient. This leads to more feature increment operations in buffer layer Q and increases the model training time. If $\mu$ is too large, the IDAE model is forced to pay significantly less attention to data reconstruction. This may cause information loss in the latent space features, so that IDAE keeps generating meaningless features and fails to satisfy the termination condition of the iterative training framework. In the experiments, we determine the loss weight by selecting the value that achieves the lowest false alarm rate on the training set through cross-validation.
Here, we define a metric $D = L_{IDAE}(H)$ to describe the degree of latent space feature disentangling of the IDAE network and set the corresponding threshold $D_{th}$. If the model shows $D > D_{th}$ after generating the second feature $h_2$, then $h_2$ and $h_1$ are still correlated to a relatively large extent. In this case, we apply feature increments in buffer layer Q to help introduce more information from the input space to layer H, making it easier to realize feature disentangling. We add one extra feature $q_{c+1}$ to layer Q, shown as the left yellow bar in Figure 5. After the increment operation, we combine $q_{c+1}$ with the former feature matrix $Q$ to obtain $Q = [q_1, q_2, \ldots, q_c, q_{c+1}]^T$ and retrain the IDAE network to regenerate the feature $h_2$. Meanwhile, we still expect the feature $h_1$ to be unaffected. The update operations for the model parameters are summarized as follows.
For the parameters $\theta_{encoder}^{2}$ in the Encoder module, the $\theta_{encoder}^{1}$ part is kept unchanged and some extra parameters $\tilde{\theta}_{encoder}^{1}$ are added to generate the new feature $q_{c+1}$. In other words, $\theta_{encoder}^{2} = \{\theta_{encoder}^{1}, \tilde{\theta}_{encoder}^{1}\}$ and only $\tilde{\theta}_{encoder}^{1}$ is updated during the retraining process, which is indicated by the extra orange parts marked in the Encoder module. For the parameters $\theta_{latent}^{2}$ in the Latent module, we perform the same operation as in the previous retraining process: the $\theta_{latent}^{1}$ part is kept unchanged and the $\tilde{\theta}_{latent}^{1}$ part is updated to regenerate the feature $h_2$. The parameters $\theta_{decoder}^{2}$ in the Decoder module are updated completely.
Based on the detailed description of the feature increment operations above, the complete iterative training framework is summarized as follows:
Case 1:  $D \leq D_{th}$ and $E \leq E_{th}$.
Operation 1: This indicates that for the IDAE network, the reconstruction capability and the degree of feature disentangling both satisfy the requirement. The training is ended.
Case 2:  $D \leq D_{th}$ and $E > E_{th}$.
Operation 2: This indicates that for the IDAE network, the reconstruction capability still needs to be improved. Then, we need to perform feature increments in layer H and retrain the network.
Case 3:  $D > D_{th}$ and $E \leq E_{th}$.
Operation 3: This indicates that for the IDAE network, the degree of feature disentangling is still not high enough. Then, we need to perform feature increments in layer Q and retrain the network.
Case 4:  $D > D_{th}$ and $E > E_{th}$.
Operation 4: This indicates that for the IDAE network, neither the reconstruction capability nor the degree of feature disentangling satisfies the requirement. Then, we prioritize improving the degree of feature disentangling; the operation is the same as Operation 3.
The IDAE network calculates the metrics D and E while being iteratively trained under the proposed framework using the core temperature data. When Case 1 is satisfied, we save the network parameters and terminate the training process.
If the buffer layer Q has multiple sublayers, the feature increment operation in layer Q should be modified and organized as follows:
Assuming the number of sublayers in layer Q is $u$, we name the $u$ sublayers $Q_1, Q_2, \ldots, Q_u$ in ascending order of their distance from layer H. When the model shows $D > D_{th}$, to introduce as much information into the hidden layer H as possible with fewer operations, we first perform feature increments in sublayer $Q_1$ and retrain the network. If $D > D_{th}$ still holds after this increment operation, we then perform feature increments in sublayer $Q_2$. We perform increment operations in the order $Q_1, Q_2, \ldots, Q_u$ until all sublayers have experienced this operation once; then, we return to sublayer $Q_1$ and start the second round of feature increments. We repeat the above procedure until $D \leq D_{th}$ is satisfied.
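Putting the four cases together, the complete framework can be summarized by the following Python-style pseudocode. The helpers train_until_converged, reconstruction_error and disentangling_degree, as well as the increment methods, are assumed wrappers around the operations described above, not functions defined in the paper.

```python
def train_idae(model, X, E_th=0.15, D_th=0.8, max_rounds=100):
    next_sublayer = 0                              # round-robin pointer over Q_1, ..., Q_u
    for _ in range(max_rounds):
        train_until_converged(model, X)            # updates only the unfrozen parameters
        E = reconstruction_error(model, X)         # metric E = L(X, X_hat)
        D = disentangling_degree(model, X)         # metric D = L_IDAE(H)
        if D <= D_th and E <= E_th:                # Case 1: both requirements met, stop
            return model
        if D > D_th:                               # Cases 3/4: disentangling has priority
            model.increment_buffer_feature(next_sublayer)
            next_sublayer = (next_sublayer + 1) % model.num_sublayers
        else:                                      # Case 2: reconstruction still lacking
            model.increment_latent_feature()
    raise RuntimeError("Case 1 not reached within max_rounds")
```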

4. Online Anomaly Detection for Nuclear Reactor Core Temperature

During the online anomaly detection process, we use the T2 and SPE statistics to obtain real-time health information on the nuclear reactor core and decide whether the operation status of the whole nuclear power process is abnormal. The T2 and SPE statistics describe the degree of deviation of test data from the model in the feature space and the residual space, respectively. After calculating these two statistics on normal temperature data without anomalies, we obtain the two corresponding control limits as the criteria for determining anomalies. The control limits for T2 and SPE can be obtained through the kernel density estimation method [29]. During real-time application, if either T2 or SPE exceeds its control limit, an anomaly in core temperature is indicated.
After finishing the training of the IDAE model, we first calculate the two statistics and their corresponding control limits based on the temperature data collected during the regular operation of nuclear power plants. The calculation is shown as follows.
For one input temperature data sample $x_t \in \mathbb{R}^{m \times 1}$, $t \in [1, 2, \ldots, n]$, we use the IDAE network to extract the latent space feature $h_t \in \mathbb{R}^{k \times 1}$ and obtain the corresponding reconstructed sample $\hat{x}_t \in \mathbb{R}^{m \times 1}$. The residual error is defined as,
$$e = x_t - \hat{x}_t \tag{15}$$
Then, the T2 and SPE statistics can be defined as,
$$T^2 = h_t^T \Omega^{-1} h_t \tag{16}$$
$$\mathrm{SPE} = e^T e \tag{17}$$
where $\Omega$ represents the empirical covariance matrix of the latent features estimated from the modeling data. We define the corresponding control limits for T2 and SPE as $Cl_{T^2}$ and $Cl_{SPE}$.
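The offline computation can be sketched as follows. The KDE-based control limit is obtained by numerically inverting the estimated cumulative density at the 95% confidence level; the grid resolution and the stand-in arrays are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

def control_limit(stat_values: np.ndarray, confidence: float = 0.95) -> float:
    """Kernel density estimate of the statistic, then invert its CDF numerically."""
    kde = gaussian_kde(stat_values)
    grid = np.linspace(stat_values.min(), 1.5 * stat_values.max(), 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]
    return float(grid[np.searchsorted(cdf, confidence)])

H = np.random.randn(6, 6000)                       # stand-in latent features (k x n)
E = 0.1 * np.random.randn(40, 6000)                # stand-in residuals (m x n)
Omega_inv = np.linalg.inv(np.cov(H))               # inverse of the feature covariance
T2 = np.einsum("kn,kj,jn->n", H, Omega_inv, H)     # Equation (16), per sample
SPE = (E ** 2).sum(axis=0)                         # Equation (17), per sample
Cl_T2, Cl_SPE = control_limit(T2), control_limit(SPE)
```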
In the online anomaly detection stage, when a new temperature data sample $x_{new} \in \mathbb{R}^{m \times 1}$ arrives and is input into the IDAE network, the latent space feature $h_{new} \in \mathbb{R}^{k \times 1}$ is extracted and the reconstructed sample $\hat{x}_{new} \in \mathbb{R}^{m \times 1}$ is output. The residual error is calculated as,
$$e_{new} = x_{new} - \hat{x}_{new} \tag{18}$$
Then, the T2 and SPE statistics can be calculated through the following expressions,
$$T^2 = h_{new}^T \Omega^{-1} h_{new} \tag{19}$$
$$\mathrm{SPE} = e_{new}^T e_{new} \tag{20}$$
During the online anomaly detection process for reactor core temperature, if either T2 or SPE exceeds $Cl_{T^2}$ or $Cl_{SPE}$, respectively, it indicates the occurrence of an anomaly in core temperature.
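Reusing the quantities from the sketch above, the online decision rule for a single new sample reduces to two comparisons:

```python
h_new, e_new = H[:, -1], E[:, -1]                  # stand-ins for the features of x_new
T2_new = h_new @ Omega_inv @ h_new                 # Equation (19)
SPE_new = e_new @ e_new                            # Equation (20)
anomaly = (T2_new > Cl_T2) or (SPE_new > Cl_SPE)   # alarm if either limit is exceeded
```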
It is worth noting that we apply the IDAE model to real-time applications only after the model is well trained, at which point the model structure is fixed and no real-time training is required. Therefore, IDAE can achieve real-time detection without requiring many computational resources.

5. Case Study

In this section, we will first introduce the core temperature dataset that we use, and then we present the model training settings. We will illustrate the effectiveness and superiority of our methods through a latent space disentangling degree experiment and an anomaly detection experiment. Finally, we will provide validation and parameter verification.

5.1. Dataset Description

The nuclear reactor core is the energy center of the nuclear power plant, mainly composed of the reactor vessel and the fuel assembly inside. The structure is briefly shown in Figure 6 [30]. The core temperature data are collected by sensors located on the top, middle and bottom layers of the reactor core at a sampling interval of 1 min. We built one training set with data collected during the regular operation of the reactor core, which is used for IDAE model training and validation. Two test sets, Test Set 1 and Test Set 2, were constructed using data collected when the core temperature experienced abrupt and slow changes, respectively, representing two kinds of anomalies. Test Sets 1 and 2 are used for testing the performance of IDAE in anomaly detection tasks. Each of the three data sets contains 6000 samples and 40 temperature variables, and both kinds of anomalies in the test sets occur at the 2000th sample point. Inspired by [31,32], we choose the box-and-whisker plot to briefly demonstrate the data distribution in Figure 7. We select the No. 2, No. 20 and No. 38 temperature variables, collected on the top, middle and bottom layers of the reactor core, respectively, as representatives. The top and bottom lines of each rectangle denote the values corresponding to the 25th and 75th percentiles of the data, respectively, the line inside the box represents the median value, and the vertical lines on the top and bottom of the boxes are known as whiskers. It can be found that the No. 2 temperature exhibits a positively skewed pattern.
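A box plot like Figure 7 can be reproduced with matplotlib as sketched below; the data array and the column indices for the No. 2, No. 20 and No. 38 variables are assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

X_train = np.random.randn(6000, 40)       # stand-in training data (samples x variables)
cols = [1, 19, 37]                        # assumed columns of variables No. 2, 20, 38
names = ["No. 2 (top)", "No. 20 (middle)", "No. 38 (bottom)"]
plt.boxplot([X_train[:, c] for c in cols])   # whiskers at 1.5 IQR by default
plt.xticks([1, 2, 3], names)
plt.ylabel("Temperature")
plt.show()
```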
Remark: The nuclear core temperature dataset used here comes from an industrial collaborator rather than open online sources and has not been used in previously published papers by other authors. Therefore, we use two test sets covering two typical fault types, abrupt and slow changes in core temperature, to verify the universality of IDAE in detecting different kinds of anomalies.

5.2. Training Settings

For model training, the IDAE network is formed by a cascade of one input layer, one buffer layer Q, one hidden layer H and one output layer. The number of sublayers in buffer layer Q is set to 1. In the initial state, the dimensions of the input and output are both set to 40, equal to the number of temperature variables, and the dimensions of layers Q and H are set to 3 and 1, respectively. The thresholds $D_{th}$ and $E_{th}$ are set to 0.8 and 0.15, respectively. The hyper-parameter $\mu$ is set to 0.5. The batch size and the initial learning rate are chosen to be 64 and 0.001, respectively. In this experiment, the GPU platform was an NVIDIA GeForce RTX 3060, the programming language was Python 3.9 and the deep learning framework was PyTorch.

5.3. Visualization of the Latent Space Disentangling Degree

In this subsection, we use the training set to train the different networks and compare the degree of feature disentangling in the latent space they extracted through a visualization experiment.
After the IDAE network completes the iterative training, the feature numbers in its layers Q and H are 8 and 6, respectively. We calculate the covariance matrix of the six-dimensional latent space features and visualize it through the heat map shown in Figure 8. In the heat map, each color block corresponds to an element of the covariance matrix, and a darker color indicates a larger element value. From Figure 8, we can tell that for the IDAE network, the color of the blocks on the main diagonal is significantly darker than that of the other blocks. This shows that the covariances between features in different dimensions are numerically small, almost negligible. In other words, the covariance matrix is close to a diagonal matrix and the degree of feature disentangling of the IDAE network is remarkably high.
We also use the AE [33] and DAE networks to perform the same experiment for comparison. We set the latent space dimension of these two networks to 6, equal to that of the IDAE network. The hyper-parameter β of the DAE network is set to 0.5. After training, we visualize the covariance matrices of the latent space features of these two networks through the heat maps shown in Figure 9 and Figure 10, respectively. In Figure 9, we can see that the element values in the covariance matrix of the AE network's latent space features are completely randomly arranged, which shows that there is no property of feature disentangling in the AE network's latent space. For the DAE network, the non-main-diagonal elements of its covariance matrix have lower values than those of the AE network; however, those values are not so small as to be negligible. Compared to the IDAE network, the degree of feature disentangling of the DAE network is still not high enough. In conclusion, DAE achieves feature disentangling to a certain extent, and IDAE shows a much higher degree of feature disentangling than the other two networks. This indicates the effectiveness of the feature increment design and the iterative training framework.

5.4. Core Temperature Anomaly Detection Experiment and Discussions

In this subsection, we demonstrate the anomaly detection experiments using different models and compare the results. To evaluate the anomaly detection capability, we use the missing alarm rate (MAR) and the false alarm rate (FAR), which are defined as,
$$\mathrm{MAR} = \frac{N_{MAE}}{N_{abnormal}} \tag{21}$$
$$\mathrm{FAR} = \frac{N_{FAE}}{N_{normal}} \tag{22}$$
where $N_{MAE}$, $N_{abnormal}$, $N_{FAE}$ and $N_{normal}$ are the numbers of missing alarm events, abnormal samples, false alarm events and normal samples, respectively.
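For clarity, a small helper implementing Equations (21) and (22) is sketched below, assuming per-sample boolean alarm and label arrays (each alarm event counted per sample; the fault-start index is taken from Section 5.1).

```python
import numpy as np

def mar_far(alarms: np.ndarray, labels: np.ndarray):
    """MAR = missed alarms / abnormal samples; FAR = false alarms / normal samples."""
    abnormal = labels.astype(bool)
    normal = ~abnormal
    mar = np.sum(~alarms & abnormal) / abnormal.sum()   # Equation (21)
    far = np.sum(alarms & normal) / normal.sum()        # Equation (22)
    return mar, far

labels = np.arange(6000) >= 2000     # stand-in: fault present from the 2000th sample on
alarms = np.zeros(6000, dtype=bool)  # stand-in detector output (no alarms raised)
print(mar_far(alarms, labels))       # -> (1.0, 0.0) for this trivial case
```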
In addition to the three network models used in the previous subsection, we add the PCA method to model the normal data, establishing four anomaly detection models based on the PCA [34], AE [33], DAE and IDAE methods. We take the natural logarithm of the statistics and the corresponding control limits to show the results more clearly.
Figure 11 shows the monitoring results of the T2 and SPE statistics on Test Set 1 using the four methods. The blue curves represent the T2 and SPE values and the red straight lines denote the control limits with 95% confidence. For the fuel assembly overheating fault, which manifests as an abrupt temperature change, the T2 and SPE statistics of all four methods exceed their control limits at around the 1200th sample, indicating that the process anomaly is successfully detected. According to Table 1 and Table 2, which show the MAR and FAR values of the different methods on the two test sets, the MAR values of these four methods are relatively low on Test Set 1. Apart from the T2 statistic of PCA, which produces many false alarms before the fault is introduced, the FAR values of the other methods are also at a low level. Generally speaking, however, IDAE and DAE show better performance than the other methods.
Figure 12 shows the monitoring results of the T2 and SPE statistics on Test Set 2 using the four methods. The core temperature experiences a slow change when the fault occurs in Test Set 2, and the fault detection time of each method is delayed compared with the previous test set. According to Table 1 and Table 2, the anomaly detection model based on PCA still produces many false alarms before the fault is introduced, and its MAR value also increases significantly, showing poor monitoring performance. The AE network can capture the nonlinear relationships between core temperature variables but fails to achieve feature disentangling; its feature covariance matrix may be close to singular, which leads to numerically unstable T2 statistics. The FAR value of the AE method is relatively high and its monitoring performance is also poor. The DAE network achieves feature disentangling to a certain extent, so its FAR value is lower than that of the AE method; however, the disentangling of its features is not sufficient, and the overall performance remains to be improved. The IDAE network realizes a high degree of feature disentangling through its feature increment design and iterative training framework; its MAR and FAR values for both the T2 and SPE statistics are at a low level. The monitoring performance of the IDAE network is significantly superior to that of the other methods, which proves the effectiveness and superiority of the proposed method.

5.5. Validations and Parameter Verifications

In this subsection, we present the results of the ten-fold cross-validation and verify the impact of the loss weight $\mu$.
For the ten-fold cross-validation, we split the training set into 10 subsets with equal sample numbers, selecting one subset in turn for validation and using the other 9 for training. We calculate the FAR values for the 10 validation experiments and present the results in Table 3, where Exp. 1-10 denote the 10 validation experiments. Since there are no fault samples in the training set, the FAR value only indicates the proportion of samples whose statistics exceed the control limit at 95% confidence. As shown in Table 3, the FAR of each experiment is below 5%, which validates the effectiveness of our IDAE model.
To verify the impact of the IDAE loss weight $\mu$, we train the IDAE model with different values and evaluate the detailed performance. Specifically, 0.5 is the weight selected for the anomaly detection tasks in this paper, and 0.1 and 10 are contrasting smaller and larger choices. As shown in Table 4, compared with $\mu$ = 0.5, training takes almost twice as long when $\mu$ = 0.1. When $\mu$ = 10, the model fails to converge; we terminate the iterative training at 500 s, at which point the feature dimension has already increased to 8. Compared with $\mu$ = 0.5, the MAR and FAR of IDAE both increase greatly on the two test sets when $\mu$ = 10, showing the degradation of model performance.

6. Conclusions

In this work, we introduced the idea of feature disentangling into anomaly detection tasks under the AE framework. Based on this idea, we first proposed the DAE model, where a latent space disentangling loss is designed to disentangle the features. We then revealed that the training of this model and the determination of the feature dimension of the disentangled latent space remain challenging. We further proposed an improved version of DAE, named the IDAE model, to address these issues. Taking advantage of the designed incremental feature generation strategy and the iterative training framework, the IDAE model achieves more sufficient feature disentangling while adaptively determining the appropriate feature dimension for anomaly detection. We showed that IDAE realizes a higher degree of feature disentangling than DAE and AE through a latent space visualization experiment. Finally, through a core temperature anomaly detection experiment, the proposed IDAE model achieved average FARs of 4.745% and 6.315% and average MARs of 6.4% and 2.9% on the two test sets using the T2 and SPE statistics, respectively, outperforming the other methods. In particular, IDAE improved the MAR and FAR, respectively, by 3.7% and 3% over the DAE model using T2, which validates the effectiveness of the proposed feature incremental and stepwise iterative training framework.
It is worth noting that our method exhibits a certain degree of generality and can be extended to other applications. However, since such applications are not the primary focus of this article, this paper does not present experimental findings on other open-source data.
In actual nuclear power processes, the operating conditions may change as power production progresses, leading to variations in the distribution of core temperature data. Therefore, in future research, we aim to direct more effort towards anomaly detection for the nuclear power process under varying operating conditions.

Author Contributions

Methodology, H.L., X.L., W.M. and J.C.; software, J.C.; validation, H.L., X.L., W.M. and J.C.; writing—original draft preparation, J.C.; writing—review and editing, X.C. and C.Z.; visualization, J.C.; supervision, C.Z. and W.W.; project administration, C.Z.; funding acquisition, C.Z., H.L., X.L. and W.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers No. 62125306 and No. 11975181.

Data Availability Statement

The core temperature dataset used in this work is unavailable due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

AE: Auto-Encoder
CDAE: Conditional Discriminant Auto-Encoder
DAE: Feature Disentangling Auto-Encoder
FAR: False Alarm Rate
ICA: Independent Component Analysis
IDAE: Incrementally Feature Disentangling Auto-Encoder
MAR: Missing Alarm Rate
PCA: Principal Component Analysis
SFA: Slow Feature Analysis

References

1. Singh, P.; Singh, L.K. Design of safety critical and control systems of Nuclear Power Plants using Petri nets. Nucl. Eng. Technol. 2019, 51, 1289–1296.
2. Xu, Y.; Kang, J.; Yuan, J. The prospective of nuclear power in China. Sustainability 2018, 10, 2086.
3. Wang, G.; Chao, Y.; Jiang, T.; Chen, Z. Facilitating developments of solar thermal power and nuclear power generations for carbon neutral: A study based on evolutionary game theoretic method. Sci. Total Environ. 2022, 814, 151927.
4. Wang, C.; Raza, S.A.; Adebayo, T.S.; Yi, S.; Shah, M.I. The roles of hydro, nuclear and biomass energy towards carbon neutrality target in China: A policy-based analysis. Energy 2023, 262, 125303.
5. Zhang, H.; Bao, H.; Shorthill, T.; Quinn, E. An integrated risk assessment process of safety-related digital I&C systems in nuclear power plants. Nucl. Technol. 2023, 209, 377–389.
6. Oh, C.; Kim, D.H.; Lee, J.I. Application of data driven modeling and sensitivity analysis of constitutive equations for improving nuclear power plant safety analysis code. Nucl. Eng. Technol. 2023, 55, 131–143.
7. Adumene, S.; Islam, R.; Amin, M.T.; Nitonye, S.; Yazdi, M.; Johnson, K.T. Advances in nuclear power system design and fault-based condition monitoring towards safety of nuclear-powered ships. Ocean Eng. 2022, 251, 111156.
8. Zhao, C. Perspectives on nonstationary process monitoring in the era of industrial artificial intelligence. J. Process Control 2022, 116, 255–272.
9. Bowman, S.M. SCALE 6: Comprehensive nuclear safety analysis code system. Nucl. Technol. 2011, 174, 126–148.
10. Sakthivel, M.; Madhusoodanan, K. Core Temperature Monitoring System for Prototype Fast Breeder Reactor. Nucl. Sci. Eng. 2012, 170, 290–293.
11. Oettingen, M.; Kim, J. Detection of Numerical Power Shift Anomalies in Burnup Modeling of a PWR Reactor. Sustainability 2023, 15, 3373.
12. Hartert, L.; Nuzillard, D.; Jeannot, J.P. Dynamic detection of nuclear reactor core incident. Signal Process. 2013, 93, 468–475.
13. Song, P.; Zhao, C.; Huang, B. MPGE and RootRank: A sufficient root cause characterization and quantification framework for industrial process faults. Neural Netw. 2023, 161, 397–417.
14. Li, X.; Huang, T.; Cheng, K.; Qiu, Z.; Sichao, T. Research on anomaly detection method of nuclear power plant operation state based on unsupervised deep generative model. Ann. Nucl. Energy 2022, 167, 108785.
15. Zhao, C.; Yu, W.; Gao, F. Data analytics and condition monitoring methods for nonstationary batch processes—Current status and future. Acta Autom. Sin. 2020, 46, 2072–2091.
16. Dong, Y.; Qin, S.J.; Hashimoto, I. Dynamic latent variable analytics for process operations and control. Comput. Chem. Eng. 2018, 114, 69–80.
17. Song, P.; Zhao, C. Slow down to go better: A survey on slow feature analysis. IEEE Trans. Neural Netw. Learn. Syst. 2022.
18. Feng, X.; Kong, X.; Du, B.; Luo, J. Adaptive LII-RMPLS based data-driven process monitoring scheme for quality-relevant fault detection. J. Control Decis. 2022, 9, 477–488.
19. Chen, X.; Zheng, J.; Zhao, C.; Wu, M. Full Decoupling High-Order Dynamic Mode Decomposition for Advanced Static and Dynamic Synergetic Fault Detection and Isolation. IEEE Trans. Autom. Sci. Eng. 2022, 1–15.
20. Li, Z.; Yan, X. Fault-relevant optimal ensemble ICA model for non-Gaussian process monitoring. IEEE Trans. Control Syst. Technol. 2019, 28, 2581–2590.
21. Shang, C.; Yang, F.; Gao, X.; Huang, X.; Suykens, J.A.; Huang, D. Concurrent monitoring of operating condition deviations and process dynamics anomalies with slow feature analysis. AIChE J. 2015, 61, 3666–3682.
22. Zhao, R.; Yan, R.; Chen, Z.; Mao, K.; Wang, P.; Gao, R.X. Deep learning and its applications to machine health monitoring. Mech. Syst. Signal Process. 2019, 115, 213–237.
23. Song, P.; Zhao, C.; Huang, B. SFNet: A slow feature extraction network for parallel linear and nonlinear dynamic process monitoring. Neurocomputing 2022, 488, 359–380.
24. Zhang, S.; Qiu, T. A dynamic-inner convolutional autoencoder for process monitoring. Comput. Chem. Eng. 2022, 158, 107654.
25. Zhang, J.; Zhao, C. Condition-driven probabilistic adversarial autoencoder with nonlinear Gaussian feature learning for nonstationary process monitoring. J. Process Control 2022, 117, 140–156.
26. Zhu, J.; Shi, H.; Song, B.; Tao, Y.; Tan, S. Information concentrated variational auto-encoder for quality-related nonlinear process monitoring. J. Process Control 2020, 94, 12–25.
27. Chen, X.; Zhao, C. Conditional discriminative autoencoder and condition-driven immediate representation of soft transition for monitoring complex nonstationary processes. Control Eng. Pract. 2022, 122, 105090.
28. Yu, W.; Zhao, C. Robust monitoring and fault isolation of nonlinear industrial processes using denoising autoencoder and elastic net. IEEE Trans. Control Syst. Technol. 2019, 28, 1083–1091.
29. Parzen, E. On estimation of a probability density function and mode. Ann. Math. Stat. 1962, 33, 1065–1076.
30. Wang, C.; Sun, X.; Sabharwall, P. CFD investigation of MHTGR natural circulation and decay heat removal in P-LOFC accident. Front. Energy Res. 2020, 8, 129.
31. Li, D.C.; Huang, W.K.; Lin, Y.S. New Product Short-Term Demands Forecasting with Boxplot-Based Fractional Grey Prediction Model. Appl. Sci. 2022, 12, 5131.
32. Vakharia, V.; Castelli, I.E.; Bhavsar, K.; Solanki, A. Bandgap prediction of metal halide perovskites using regression machine learning models. Phys. Lett. A 2022, 422, 127800.
33. Sakurada, M.; Yairi, T. Anomaly detection using autoencoders with nonlinear dimensionality reduction. In Proceedings of the 2nd Workshop on Machine Learning for Sensory Data Analysis (MLSDA 2014), Gold Coast, QLD, Australia, 2 December 2014; pp. 4–11.
34. Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52.
Figure 1. The conceptual structure of the DAE network. It consists of an encoder and a decoder module. The loss function of DAE is composed of a feature disentangling loss and a reconstruction loss, which aims to realize the capability of feature disentangling and reconstruction at the same time.
Figure 2. The structure of the IDAE network. It consists of an encoder, a latent and a decoder module to connect the input and output with the feature incrementable layers Q and H. The thin blue and green bars represent the existing features and the thin white bars with a dotted outline stand for the places for feature incrementing.
Figure 3. The initial state when training the IDAE network with 1 sublayer in the buffer layer Q. The loss function during the training only includes the reconstruction loss.
Figure 4. The IDAE network training when performing feature increments in hidden layer H. The loss function during the training includes the reconstruction loss and feature disentangling loss. The grey part of the parameters remains unchanged. The orange part of the parameters is updated.
Figure 5. The IDAE network training when performing feature increments in buffer layer Q. The loss function during the training includes the reconstruction loss and feature disentangling loss. The grey part of the parameters remains unchanged. The orange part of the parameters is updated.
Figure 6. The structure of the reactor core. (a) Cut-away view of the reactor pressure vessel. (b) Cross-section view of the reactor core.
Figure 7. Box plot reflecting the distributions of 3 representative temperature variables. The line inside the rectangular box represents the median value. The vertical lines on the top and bottom of these boxes are known as whiskers. Typically, outliers are plotted as individual circles.
Figure 8. The heat map of the IDAE network feature covariance matrix. The darker the color block, the larger the corresponding element value.
Figure 9. The heat map of the AE network feature covariance matrix.
Figure 10. The heat map of the DAE network feature covariance matrix.
Figure 11. Monitoring graphs of statistic T2 and SPE for Test Set 1 using (a) PCA, (b) AE, (c) DAE and (d) IDAE methods. In each subfigure, the blue curves represent the T2 and SPE values and the red straight lines denote the control limit with 95% confidence. If either statistic of T2 and SPE exceeds the corresponding control limit, it indicates the occurrence of an anomaly in core temperature.
Figure 12. Monitoring graphs of statistic T2 and SPE for Test Set 2 using (a) PCA, (b) AE, (c) DAE and (d) IDAE methods. In each subfigure, the blue curves represent the T2 and SPE values and the red straight lines denote the control limit with 95% confidence. If either statistic of T2 and SPE exceeds the corresponding control limit, it indicates the occurrence of an anomaly in core temperature.
Table 1. Comparison of the MAR (%) value of different models on two test sets.
Test Data     PCA [34]        AE [33]         DAE             IDAE
              T2      SPE     T2      SPE     T2      SPE     T2      SPE
Test Set 1    0.21    0.25    0.19    0.10    0.19    0.08    0.15    0.08
Test Set 2    30.88   6.50    28.29   6.89    20.13   6.21    12.65   5.72
Table 2. Comparison of the FAR (%) value of different models on two test sets.
Test Data     PCA [34]        AE [33]         DAE             IDAE
              T2      SPE     T2      SPE     T2      SPE     T2      SPE
Test Set 1    13.30   8.13    7.60    5.41    6.91    6.00    3.85    5.22
Test Set 2    14.41   20.82   13.2    8.20    8.66    8.73    5.64    7.41
Table 3. Ten-fold cross validation for IDAE model on the training set.
Metric           Exp.1  Exp.2  Exp.3  Exp.4  Exp.5  Exp.6  Exp.7  Exp.8  Exp.9  Exp.10
FAR (%)   T2     4.96   4.96   4.97   4.99   4.97   4.98   4.96   4.96   4.96   4.96
          SPE    4.96   4.97   4.97   4.97   4.97   4.99   4.99   4.96   4.99   4.97
Table 4. Verification for the impact of IDAE loss weight μ .
                                   μ = 0.1    μ = 0.5    μ = 10
Training Time (s)                  310.96     161.82     Cannot converge (stop at 500 s)
MAR (%)    Test Set 1    T2        0.15       0.15       0.66
                         SPE       0.10       0.08       0.26
           Test Set 2    T2        12.77      12.65      17.35
                         SPE       5.73       5.72       7.52
FAR (%)    Test Set 1    T2        3.85       3.85       5.33
                         SPE       5.58       5.22       7.75
           Test Set 2    T2        5.74       5.64       16.15
                         SPE       8.03       7.41       9.33
