A Hybrid Intelligent Fault Diagnosis Strategy for Chemical Processes Based on Penalty Iterative Optimization

Process faults are one of the main reasons a system becomes unreliable, and they directly affect system safety. In addition, the varying degrees of noise present in industrial data make it difficult for deep-learning-based fault diagnosis methods to extract effective features. To address these problems, this paper improves the deep belief network (DBN) by introducing a penalty factor that iterates toward the optimal penalty term, avoiding the local optima of a conventional DBN and improving diagnosis accuracy, thereby minimizing the impact of noise while improving fault diagnosis and process safety. Combining this with the adaptive noise-reduction capability of an adaptive lifting wavelet (ALW), a practical chemical process fault diagnosis model (ALW-DBN) is proposed. On the Tennessee–Eastman (TE) benchmark process, the ALW-DBN model is compared with other methods, showing that the fault diagnosis performance of the enhanced DBN combined with adaptive wavelet denoising is significantly improved. In addition, the ALW-DBN performs better under different noise levels in an acid gas absorption process, demonstrating its high adaptability to noise.


Introduction
With the advancement of industrial intelligence, modern industry places higher requirements on system reliability and safety. This has driven the rapid development of real-time risk management methods for efficiently detecting faults that threaten system reliability and eliminating uncertain noise that affects system safety. Among these, fault detection and diagnosis (FDD) methods play a central role [1].
Research on FDD technology began in 1971 [2], and the field is now developing rapidly. FDD methods can be classified into three categories: quantitative model-based, qualitative model-based and process history-based methods [3][4][5]. Among these, quantitative process history-based (data-driven) methods possess the greatest potential for application in chemical processes [6]. One branch of data-driven methods relies on statistical measures such as principal component analysis (PCA) [7][8][9], independent component analysis (ICA) [10][11][12] and partial least squares (PLS) [13][14][15] for feature extraction and dimensionality reduction. The advantage of these methods is that they simplify the analysis of complex high-dimensional data and thus improve efficiency. Pattern classification methods, such as the artificial neural network (ANN) [16][17][18] and support vector machine (SVM) [19][20][21][22], have low generalization error rates and simple training processes. In recent years, new models have also been developed by combining different data-driven methods to achieve the desired results, such as ICA-PCA [23] and PCA-XGBoost [24].

Methods

Enhanced Deep Belief Network (DBN)
The traditional DBN, proposed by Hinton [40], consists of multiple restricted Boltzmann machines (RBMs) and a top-level back-propagation neural network (BPNN).

Traditional DBN
In the traditional DBN, the RBM is a neural network with two layers of neurons, known as the visible layer and the hidden layer. The visible layer consists of units that accept input data, whereas the hidden layer is separated from the input and contains units for feature extraction. The units within each layer are isolated from each other, but the units between the two layers are fully interconnected, and a real number called the weight is associated with each interlayer connection. The structure of an RBM is illustrated in Figure 1. For a given state (v, h), an energy function can be defined as

E_θ(v, h) = −∑_i b_i v_i − ∑_j c_j h_j − ∑_i ∑_j v_i w_ij h_j,

where v_i and h_j represent the binary states of visible unit i and hidden unit j, respectively, b_i and c_j are their biases and w_ij is the weight between units i and j.
According to the energy function E_θ(v, h), the joint probability distribution of the state (v, h) can be derived as

p_θ(v, h) = (1/Z_θ) exp(−E_θ(v, h)),

where Z_θ is a normalization factor (the partition function), expressed as

Z_θ = ∑_{v,h} exp(−E_θ(v, h)).

In an RBM, the conditional probability that a hidden-layer neuron is activated given an input state is

P(h_j = 1 | v) = σ(c_j + ∑_i v_i w_ij).

Because an RBM is bidirectionally connected, the visible-layer neurons can likewise be activated with conditional probability

P(v_i = 1 | h) = σ(b_i + ∑_j w_ij h_j),

where σ is the sigmoid (logistic) function.
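As an illustrative sketch (not the authors' implementation), the energy function and the two conditional probabilities above, together with one step of CD-1 training, can be written in NumPy as follows; the layer sizes and learning rate are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h, W, b, c):
    # E(v, h) = -sum_i b_i v_i - sum_j c_j h_j - sum_ij v_i w_ij h_j
    return -(b @ v) - (c @ h) - (v @ W @ h)

def p_h_given_v(v, W, c):
    # P(h_j = 1 | v) = sigmoid(c_j + sum_i v_i w_ij)
    return sigmoid(c + v @ W)

def p_v_given_h(h, W, b):
    # P(v_i = 1 | h) = sigmoid(b_i + sum_j w_ij h_j)
    return sigmoid(b + W @ h)

def cd1_step(v0, W, b, c, lr=0.1):
    # One contrastive-divergence (CD-1) update of the RBM parameters.
    ph0 = p_h_given_v(v0, W, c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden state
    v1 = p_v_given_h(h0, W, b)                        # mean-field reconstruction
    ph1 = p_h_given_v(v1, W, c)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)
    return W, b, c

n_v, n_h = 6, 4
W = 0.01 * rng.standard_normal((n_v, n_h))
b, c = np.zeros(n_v), np.zeros(n_h)
v = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
h = np.array([1.0, 0.0, 0.0, 1.0])
print(energy(v, h, W, b, c))    # scalar energy of the state (v, h)
W, b, c = cd1_step(v, W, b, c)  # parameters move toward the data distribution
```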
The proposed enhanced DBN is a deep neural network consisting of multiple RBMs as its bottom layers and an improved BPNN as its topmost layer. The structure of the enhanced DBN is shown in Figure 2. Layer 1 is the input layer, and layer 4 is the feature layer; layers 1-4 can be considered three stacked RBMs. The weight matrices associated with these RBMs, W_1, W_2 and W_3, are called detection weights, and their transposes, W_1^T, W_2^T and W_3^T, are called generative weights. The topmost layer forms a BPNN with the fourth layer and uses the label data of the training samples as the prediction target to fine-tune the entire network. In this layer, this paper adds a penalty factor to the BPNN to optimize the output and replace manual tuning with automatic iterative optimization. Training an enhanced DBN involves two steps: pretraining and fine-tuning. Pretraining trains each RBM sequentially through unsupervised learning, using the output of each RBM as the input of the next. The standard method for training RBMs is the contrastive divergence (CD) method proposed by Hinton [41]. After all RBMs are trained, the output of the last RBM is used as the input of the BPNN.
Fine-tuning of the DBN is achieved by supervised learning. Hinton proposed the wake-sleep algorithm [42], in which, during the wake phase, the generative weights are adjusted using the errors between the input data and the reconstructed data. However, the traditional DBN may accumulate errors as the standardized data matrix passes through the RBM layers one by one, eventually trapping training in a local optimum. In addition, the traditional DBN adds the penalty term manually during optimization; this has extremely low precision, and the adjustment process is cumbersome. To avoid this situation, the traditional penalty terms are improved, and penalty factors are introduced to optimize the standardized matrix automatically.

Penalty Factor
This paper introduces a penalty factor between the fourth and fifth layers of the DBN. The constrained optimization problem is converted to an unconstrained one to improve the diagnosis accuracy.
Consider the constraint relating the input dataset f to its decomposition components x_k:

f = ∑_k x_k.

The following augmented objective function can then be constructed:

L({x_k}, λ) = r_k ‖f − ∑_k x_k‖² + ⟨λ, f − ∑_k x_k⟩,

where r_k is the penalty factor and λ is the Lagrange multiplier. An iterative update method is used to find the saddle point of this Lagrangian expression, which is the optimal solution of the model. The optimization steps are as follows:
1. Introduce the penalty factor into the standardized DBN dataset.

2. Initialize the decomposition x_k^(1) and the Lagrange multiplier λ^(1).

3. Iterate from the initial point and update x_k^(j+1) as shown in Equation (9):

x_k^(j+1) = argmin_{x_k} L({x_k}, λ^j).     (9)

4. After updating x, update the Lagrange multiplier λ as shown in Equation (10):

λ^(j+1) = λ^j + r_k (f − ∑_k x_k^(j+1)).     (10)

5. Use Equation (11) to determine whether the current iteration result meets the set error requirement:

∑_k ‖x_k^(j+1) − x_k^j‖² / ‖x_k^j‖² < ε.     (11)

If the error requirement is not met, return to step 3 and continue iterating. If it is met, the current optimal penalty coefficient r is obtained and fed into the DBN to reduce the cumulative error generated by each layer of RBM training. The optimal penalty parameter is thus solved by iterative convergence (i.e., each iteration's result x_k^j is checked against the optimum), and the optimal penalty term for the training dataset is matched to achieve the purpose of DBN training optimization.
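The five steps above follow the standard augmented-Lagrangian iteration pattern. A minimal sketch on a toy equality-constrained problem (our own example, not the paper's DBN objective) illustrates the x-update, the multiplier update and the convergence check:

```python
import numpy as np

# Toy problem: minimize x1^2 + x2^2  subject to  x1 + x2 = 1.
# Augmented Lagrangian:
#   L(x, lam) = x1^2 + x2^2 + lam*(x1 + x2 - 1) + (r/2)*(x1 + x2 - 1)^2
def solve(r=1.0, tol=1e-10, max_iter=200):
    lam = 0.0
    for j in range(max_iter):
        # Step 3: x-update -- closed-form minimizer of L for fixed lam
        # (by symmetry x1 = x2 = x, so dL/dx = 2x + lam + r*(2x - 1) = 0)
        x = (r - lam) / (2.0 + 2.0 * r)
        residual = 2.0 * x - 1.0          # constraint violation x1 + x2 - 1
        # Step 4: multiplier update, lam^(j+1) = lam^j + r * residual
        lam += r * residual
        # Step 5: convergence check on the residual
        if abs(residual) < tol:
            break
    return x, lam, j

x, lam, iters = solve()
print(x, lam)   # approaches the optimum x1 = x2 = 0.5 with multiplier -1
```

Each pass tightens the constraint residual geometrically, which is the same mechanism the paper relies on to match the optimal penalty term to the training dataset.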
Compared with the traditional DBN method of manually setting penalty terms, the enhanced DBN performs iterative calculations on the entire standardized dataset to obtain the optimal penalty term for the current dataset, giving it stronger adaptability to different datasets. The cumulative error generated by each RBM layer of the enhanced DBN is minimized, so the generalization error of the final training process is reduced and the training error is smaller. Compared with traditional manual adjustment over multiple training runs, this method is more accurate and convenient.

Adaptive Lifting Wavelet Method
Modern industrial process data are often accompanied by noise interference, which affects process fault diagnosis. To handle the noise in the chemical process data in this study, the original lifting wavelet (LW) soft threshold method proposed by Wim Sweldens [38,43] is enhanced, and an ALW based on an adaptive soft threshold method is proposed. The ALW transformation consists of three steps: splitting, prediction and updating.
As shown in Figure 3, P is the prediction operator, U is the updating operator, c[n] is the lifted low-frequency coefficient, d[n] is the lifted high-frequency coefficient and X e [n] and X o [n] are the even and odd items of sampling, respectively.
Multi-level decomposition of the original signal is obtained by iterating Equations (12)–(14) on the original input signal.
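One level of the split-predict-update transformation can be sketched as follows; the linear predictor and updater used here are a common textbook choice and are only assumptions, since the paper's Equations (12)–(14) define its own operators:

```python
import numpy as np

def lifting_step(x):
    """One level of a lifting wavelet transform (illustrative linear operators).

    Split -> predict -> update, producing the low-frequency coefficients c[n]
    and high-frequency coefficients d[n]. The predictor/updater are the common
    linear (CDF(2,2)-style) choices, used here only as an example.
    """
    xe, xo = x[0::2].astype(float), x[1::2].astype(float)  # split: even / odd
    # Predict: d[n] = xo[n] - (xe[n] + xe[n+1]) / 2  (last sample replicated)
    xe_next = np.append(xe[1:], xe[-1])
    d = xo - 0.5 * (xe + xe_next)
    # Update: c[n] = xe[n] + (d[n-1] + d[n]) / 4  (first sample replicated)
    d_prev = np.insert(d[:-1], 0, d[0])
    c = xe + 0.25 * (d_prev + d)
    return c, d

x = np.arange(16, dtype=float)  # a perfectly linear signal
c, d = lifting_step(x)
print(d)  # detail coefficients are ~0 for a linear signal (away from the edge)
```

Because the predictor interpolates linearly, a linear signal yields near-zero detail coefficients, which is exactly the sparsity property that threshold denoising exploits.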
The number of wavelet decomposition levels is selected using the wavedec() function in MATLAB, and the signal-to-noise ratio (SNR) is used to evaluate the choice of decomposition level. The SNR is calculated as

SNR = 10 log₁₀ ( ∑_n x(n)² / ∑_n (x(n) − x̂(n))² ),

where x(n) is the original signal, x̂(n) is the signal after noise reduction and n runs over the signal length.
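A direct implementation of this SNR definition, evaluated on an illustrative noisy sine signal of our own choosing:

```python
import numpy as np

def snr_db(x, x_denoised):
    """SNR = 10 * log10( sum x(n)^2 / sum (x(n) - x_hat(n))^2 ), in dB."""
    x = np.asarray(x, float)
    noise = x - np.asarray(x_denoised, float)
    return 10.0 * np.log10(np.sum(x**2) / np.sum(noise**2))

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 8 * np.pi, 1000))
noisy = clean + 0.1 * rng.standard_normal(clean.size)
print(snr_db(clean, noisy))   # roughly 17 dB for this noise level
```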
Traditional LW noise reduction uses either a hard or a soft threshold, which have different noise-reduction effects. The two threshold functions are given in Equations (16) and (17):

W_h = w, if |w| > T; W_h = 0, otherwise,     (16)

W_s = sgn(w)(|w| − T), if |w| > T; W_s = 0, otherwise,     (17)

where W_h and W_s represent the hard- and soft-thresholded coefficients, respectively, and w is the wavelet coefficient. The threshold T is

T = σ √(2 ln N),     (18)

σ = Med(|w|) / 0.6745,     (19)

where σ is the noise standard deviation, N is the number of coefficients and Med is the median function.
Improving the soft threshold function yields a new threshold function, the adaptive soft threshold, expressed in Equation (20):

W_a = sgn(w)(|w| − kT), if |w| > T; W_a = 0, otherwise,     (20)

where k is the adaptive coefficient, expressed as k = 1 − e^(−α(|w|−T)²), and α is a positive number. The threshold obtained by the threshold estimation algorithm of Equations (18) and (19) is a fixed value. Processing the wavelet coefficients with this fixed threshold produces an "over-killing" phenomenon, causing greater distortion when reconstructing the signal. Using a single traditional threshold for the wavelet coefficients at all decomposition scales therefore filters out too much useful signal and causes distortion. In view of this analysis, this study adopts an adaptive threshold to process the wavelet coefficients obtained at different decomposition scales.
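The three threshold functions can be sketched as follows; the adaptive rule encodes our reading of Equation (20), with the shrinkage kT modulated by k = 1 − e^(−α(|w|−T)²):

```python
import numpy as np

def hard_threshold(w, T):
    # W_h = w if |w| > T, else 0
    return np.where(np.abs(w) > T, w, 0.0)

def soft_threshold(w, T):
    # W_s = sgn(w) * (|w| - T) if |w| > T, else 0
    return np.sign(w) * np.maximum(np.abs(w) - T, 0.0)

def adaptive_soft_threshold(w, T, alpha=1.0):
    # Our reconstruction of Eq. (20): retained coefficients shrink by k*T,
    # where k = 1 - exp(-alpha * (|w| - T)^2) grows with the margin above T.
    k = 1.0 - np.exp(-alpha * np.square(np.abs(w) - T))
    return np.where(np.abs(w) > T, np.sign(w) * (np.abs(w) - k * T), 0.0)

w = np.array([-3.0, -0.5, 0.2, 1.2, 4.0])
T = 1.0
print(hard_threshold(w, T))           # sub-threshold coefficients zeroed, rest kept
print(soft_threshold(w, T))           # surviving coefficients shrunk by T
print(adaptive_soft_threshold(w, T))  # shrinkage varies with the margin above T
```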
After wavelet decomposition, the noise standard deviation δ_j of the jth-layer wavelet coefficients is calculated as in Equation (21):

δ_j = √( (1/N_j) ∑_{k=1}^{N_j} (W_k − W̄)² ),     (21)

where N_j is the number of wavelet coefficients in the jth layer, W_k are the jth-layer wavelet coefficients and W̄ is their mean. When processing the jth-layer coefficients, the threshold is set to T_j, calculated as in Equation (22):

T_j = δ_j √(2 ln N_j),     (22)

where δ_j is the noise standard deviation of the jth-layer wavelet coefficients and N_j is the number of coefficients in that layer.
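A sketch of the per-level threshold computation, under the reconstruction of Equations (21) and (22) given above (the sample standard deviation of one level's coefficients combined with the universal-threshold factor √(2 ln N_j)):

```python
import numpy as np

def layer_threshold(coeffs_j):
    """Per-level threshold as reconstructed from Eqs. (21)-(22):
    delta_j = sqrt( (1/N_j) * sum (W_k - W_mean)^2 )   (noise std estimate)
    T_j     = delta_j * sqrt(2 * ln N_j)
    """
    w = np.asarray(coeffs_j, float)
    n_j = w.size
    delta_j = np.sqrt(np.mean(np.square(w - w.mean())))
    return delta_j * np.sqrt(2.0 * np.log(n_j))

rng = np.random.default_rng(2)
detail = 0.5 * rng.standard_normal(256)  # detail coefficients of one level
print(layer_threshold(detail))           # near 0.5 * sqrt(2 * ln 256) ~ 1.66
```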
To evaluate the adaptive soft threshold, we used the root mean square error (RMSE) between the original signal and the denoised signal:

RMSE = √( (1/n) ∑_n (x(n) − x′(n))² ),

where x(n) is the noise-free signal and x′(n) is the denoised signal. In general, the smaller the RMSE, the closer the denoised signal is to the original clean signal, indicating a better denoising effect. By selecting different adaptive wavelet coefficients k and calculating the corresponding RMSE values, the k value with the smallest RMSE is taken as the optimal adaptive coefficient.
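A direct implementation of the RMSE criterion and of the coefficient selection it drives; the candidate denoised signals below are hypothetical stand-ins for the outputs obtained with different k values:

```python
import numpy as np

def rmse(x, x_denoised):
    # RMSE = sqrt( (1/n) * sum (x(n) - x'(n))^2 )
    x = np.asarray(x, float)
    return np.sqrt(np.mean(np.square(x - np.asarray(x_denoised, float))))

# Picking the adaptive coefficient: evaluate candidate k values and keep the
# one whose denoised output has the smallest RMSE against the clean reference.
clean = np.array([0.0, 1.0, 0.0, -1.0])
candidates = {1: clean + 0.2, 2: clean + 0.05}  # hypothetical denoised outputs
best_k = min(candidates, key=lambda k: rmse(clean, candidates[k]))
print(best_k)   # -> 2
```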

Fault Diagnosis Model
We propose a fault diagnosis model based on ALW-DBN for chemical processes. The framework of our model is illustrated in Figure 4.
First, the process data were decomposed by the adaptive lifting wavelet transform, and the adaptive soft threshold method was used to reduce noise after the splitting, prediction and update stages. The denoised data were then divided into a training set and a test set at a ratio of 4:1. The unlabeled samples in the training set were pretrained with a layer-by-layer unsupervised algorithm. Pretraining was divided into two stages: 100 iterations with a momentum parameter of 0.5, followed by 200 iterations with a momentum parameter of 0.9. In the pretraining stage, the number of data samples updated in each batch depended on the actual number of process data samples. The model consisted of five RBM layers. After each RBM was fully trained, the entire network was fine-tuned. In the fine-tuning stage, the learning rate α was determined, and the weights W_1, W_2, W_3, W_1^T, W_2^T and W_3^T were adjusted and updated by the wake-sleep algorithm of supervised learning. Because local optima may occur at this stage, a penalty factor r was added to optimize the penalty term of the model. The penalty function was iterated over the standardized, pretrained network data to obtain the best penalty term r_k^j for the training dataset, correcting the BPNN weights and reducing the accumulation of errors caused by multi-layer DBN training. After fine-tuning, the label data generated by the training and test sets were input into the ALW-DBN diagnosis model. Finally, we checked whether the FDR reached the set threshold a; if not, the model was retrained.

Tennessee-Eastman Process Description
The TE process is widely used as a benchmark for studies in control, optimization and fault diagnosis. The flowchart of the TE benchmark process is depicted in Figure 5 [44]. The process faults are listed in Table 1.

Table 1. Process faults for the Tennessee–Eastman process (first eight faults shown).

Fault Number  Description                                               Type
01            A/C feed ratio, B composition constant (Stream 4)         Step
02            B composition, A/C ratio constant (Stream 4)              Step
03            D feed temperature (Stream 2)                             Step
04            Reactor cooling water inlet temperature                   Step
05            Condenser cooling water inlet temperature                 Step
06            A feed loss (Stream 1)                                    Step
07            C header pressure loss, reduced availability (Stream 4)   Step
08            A, B, C feed composition (Stream 4)                       Random variation

The data came from the Matlab simulation code of the TE process built by the University of Washington. The sampling interval in the simulation was 3 min, and the number of data features was 52. The original report contained two cases of data, each with 22 sets: normal-state data and 21 types of fault-state data. For the first case, the simulation ran in the normal state for 1 h and continued for 24 h after the disturbance was added. For the second case, the simulation ran in the normal state for 8 h and continued for 40 h after the disturbance was added. The first case included 10,580 samples and the second 21,120, so the total sample size for model training and testing was 31,700. The ratio of the training set to the test set was 4:1. The data input format for training and testing was sample × feature.

Results and Discussion
The first step of the improved algorithm was to denoise the historical data with lifting wavelets. Before choosing the threshold function, the SNR was used to determine the optimal number of decomposition levels. The SNRs of different decomposition levels are shown in Table 2; according to this comparison, the best number of wavelet decomposition levels for the TE process was three.
This study used the three threshold methods in three sets of comparative experiments on monitoring variables of the TE process. The monitoring variable in Figure 6a is the reactor liquid level for the TE process simulated continuously for 50 h under normal conditions with a sampling interval of 3 min. The monitoring variable in Figure 6b is the reactor liquid level after the TE process was simulated for 8 h under normal conditions, the fault 1 disturbance signal was added and the simulation continued for 40 h. The figure shows that the data processed by the adaptive soft threshold method contained fewer high-frequency disturbance signals than those processed by the soft and hard threshold methods, and the trend was more stable. According to the test results, the RMSE of the hard threshold method was 0.506 and that of the soft threshold method was 0.421. The RMSEs of the adaptive soft threshold method under different adaptive coefficients are presented in Table 3; the noise reduction effect was best when k = 6. Compared with the traditional fixed threshold, the adaptive soft threshold method removed the noise more effectively while retaining the effective signal.
Then, the denoised data were normalized, and the training set was input into the first RBM for training. All RBMs were trained layer by layer with an unsupervised greedy initialization algorithm. After all RBMs were trained, the back-propagation algorithm was used to fine-tune the structure and weights of the entire network, with the label data corresponding to the training samples as the output target. Finally, the optimal parameter combination was selected by minimizing the error. Figure 7 depicts the error curves during this adjustment process. In Figure 7, curve (a) shows the fine-tuning error without the penalty factor and curve (b) the error with the penalty factor, both with 640 samples per batch of updated data; the horizontal axis is in units of 1000 iterations. The figure shows that the training process without the penalty factor oscillated and even diverged. Although the training process with the penalty factor converged more slowly, it gradually converged to a lower range as training proceeded.
To obtain a suitable fault diagnosis model, we tested different hyperparameters, including the number of network layers, the number of neurons per layer, the learning rate, the numbers of pretraining and fine-tuning rounds, the momentum and the batch size. Our final DBN model included six layers of neurons: the 52-dimensional input layer followed by layers of 50, 50, 30, 20 and 22 neurons. The pretraining learning rate was 0.00005. In the fine-tuning phase, the learning rate started at 0.0005 and was reduced to 0.00001, after which training continued for 75,000 iterations. Pretraining was divided into two stages: 100 iterations with a momentum parameter of 0.5, followed by 200 iterations with a momentum parameter of 0.8. The number of data samples updated per batch was 160 in the pretraining phase and 640 in the fine-tuning phase. The optimal penalty coefficient obtained by the iterative penalty optimization was 0.00000005.
After unsupervised learning, the classification layer was added to the feature extraction model, and the label data were used for training. For each fault state and the normal state, the fault diagnosis rate (FDR) and false positive rate (FPR) are defined as [28]

FDR = p / N_s × 100%,  FPR = (1 − q / N_o) × 100%,

where p is the number of samples of a given state classified to that state, N_s is the total number of samples of that state, q is the number of other-state samples classified to another state and N_o is the total number of other-state samples. The fault diagnosis results of the TE process were compared with other machine learning methods: PCA, the Bayesian method and the SSVM method for the FDR, and the original DBN and the enhanced DBN additionally for the FPR. The results are shown in Table 4 (columns (A)–(G) correspond to PCA, Bayesian [46], SSVM [21], the original DBN, the enhanced DBN, ALW-DBN and DCNN [26], respectively). In terms of the FDR, the ALW-DBN model proposed in this paper was higher than the other methods in most fault situations, and its average FDR was the highest among all methods, especially for faults 3, 9 and 15, indicating that the ALW-DBN model was more sensitive to the harder-to-diagnose faults.
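Under the p/q definitions above, the two rates can be computed per state as follows (a sketch of our reading of the definitions in [28], not the authors' evaluation code):

```python
import numpy as np

def fdr_fpr(y_true, y_pred, state):
    """FDR/FPR for one state, following the p/q definitions in the text:
    p = samples of `state` classified as `state`,
    q = other-state samples classified to another (non-`state`) class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    is_state = y_true == state
    p = np.sum(is_state & (y_pred == state))
    q = np.sum(~is_state & (y_pred != state))
    n_s, n_o = np.sum(is_state), np.sum(~is_state)
    fdr = 100.0 * p / n_s           # correctly diagnosed share of this state
    fpr = 100.0 * (n_o - q) / n_o   # other-state samples wrongly flagged
    return fdr, fpr

# Toy labels: state 1 = one fault class, state 0 = everything else
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0]
fdr, fpr = fdr_fpr(y_true, y_pred, state=1)
print(fdr, fpr)   # 75.0 20.0
```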

Table 4. FDR (%) for methods (A)–(G) and FPR (%) for methods (D)–(G), by fault type.
To further quantify fault diagnosis performance, the fault detection time (FDT) was introduced and compared with the DCNN method proposed by Wu et al. [26]. Several operating conditions were set, and 17 faults were selected for comparison; the results are shown in Table 5. The ALW-DBN method proposed in this paper outperformed the DCNN in FDT. The reason is that the ALW-DBN network not only eliminated the errors generated within network training but also eliminated the noise outside the TE process and restored the true signal values, reducing the required detection time.

Acid Gas Absorption Process Description
Methyldiethanolamine (MDEA) is often used as an absorbent in chemical processes to absorb acid gases. A flowchart of the typical absorption process is depicted in Figure 8.
Stream 111 is the absorbent MDEA. It first exchanges heat with room-temperature cooling water through the heat exchanger E-105 and is cooled to 21 °C. It then enters the absorption tower C-101 from the top. The raw material gas, stream 102, enters the absorption tower C-101 from the bottom, flows countercurrently with the absorbent MDEA and absorbs the acid gases (H2S and CO2) from the natural gas. The overhead gas of the absorption tower C-101 is natural gas containing a large amount of moisture, which then enters the downstream dehydration system for further dehydration and purification to meet the national natural gas standards. The bottom product of the absorption tower C-101 is rich amine liquid containing acid gas. After heat exchange, the rich amine liquid enters the regeneration tower to desorb the acid gas and regenerate the absorbent. According to the piping and instrumentation diagram (P&ID), the following variables are monitored: V1, the absorbent MDEA volume flow rate; V2, the absorption tower's absorbent feed temperature; V3, the absorption tower's top pressure; V4, the natural gas feed flow; and V5, the bottom liquid level of the absorption tower. The enhanced DBN and ALW-DBN models were tested and compared on this real chemical process.

Results and Discussion
To test the adaptability and accuracy of the enhanced DBN and ALW-DBN under different noise amplitudes, we used HYSYS to perform a dynamic simulation of the acid gas absorption process. The transfer function module of HYSYS was used to adjust the process variables (PVs) through the control variables, and noise was generated through transfer functions with standard deviations of 1, 2, 4, 6, 8 and 10 in different simulation runs. Four types of faults were introduced to the normal situation: fault 1, where the inlet temperature of the heat exchanger slowly rose to 31 °C; fault 2, where the normal absorbent composition of MDEA (0.48) and water (0.49) slowly changed to MDEA (0.08) and water (0.92), reducing the absorbed acid gas composition; fault 3, where the gas molar flow rate, 5202 kmol/h under normal conditions, was set to 7000 kmol/h; and fault 4, where the bottom product outlet valve of the absorption tower was adjusted, with the valve opening reduced by 10%. Finally, the PV values of all modules were generated and analyzed.
According to the simulated industrial data, the optimal number of wavelet decomposition layers was determined by testing different layer counts; the SNRs of the different wavelet layers are shown in Table 6. The larger the noise standard deviation, the more layers needed to be decomposed and the larger the corresponding SNR. We then selected 80% of the simulation results as the training set and the rest as the test set, with the noise standard deviation set to 2. The noise reduction effect on variable 2 (the absorption tower's absorbent feed temperature), before and after noise reduction under normal conditions and fault 1, is shown in Figure 9. For this industrial process, the hard threshold noise reduction effect was not ideal, as shown in Figure 9, whereas the adaptive soft threshold method removed most of the noise in the data and recovered the underlying state of the data, performing well in a practical industrial setting. The selection of adaptive wavelet coefficients under different noise standard deviations is presented in Table 7; the adaptive wavelet coefficient k was matched to each noise level to achieve the greatest degree of noise reduction. The denoised data were then input into the ALW-DBN model for training, with the penalty factor set to calculate the training error. The influence of adding the penalty factor on the ALW-DBN training error under different noise standard deviations is shown in Figure 10. The training results show that, for noise of every standard deviation tested, the ALW-DBN model with a penalty factor outperformed the model without one.
The iterative selection mechanism of penalty factors selected the optimal penalty factors for different systems and eliminated the cumulative error of each layer of the RBM during the training process. The overall DBN training mechanism was optimized.
The ALW-DBN model was applied to the fault diagnosis of the acid gas absorption process under different levels of noise. The diagnosis results are listed in Tables 8 and 9. The larger the noise standard deviation, the less effective information could be extracted from the data; accordingly, even after the ALW method was applied, a larger noise standard deviation left less effective fault information for the deep belief network to extract. The FDT of the ALW-DBN was then compared with that of the enhanced DBN for the acid gas absorption process, with the results shown in Table 10. Based on these diagnosis results, the ALW-DBN model performed better than the enhanced DBN model without noise reduction: the enhanced DBN optimized only by the penalty factor was not accurate under different degrees of industrial noise, whereas the ALW-DBN, which removes redundant, irrelevant noise with the adaptive soft threshold method, diagnosed more accurately than the enhanced DBN alone. The diagnostic rates were 93.75% and 77.1%, respectively. For fault 3, however, the diagnostic results of the enhanced DBN and ALW-DBN models were nearly identical, because the gas flow rate of stream 102A returned to 0 kmol/h after the flow stabilized, regardless of the added noise; fault 3 was therefore insensitive to the choice between the two diagnostic models.

Conclusions
To more accurately extract fault characteristics and handle different noise levels, an ALW-DBN model based on ALW noise reduction and an enhanced deep belief network was proposed. The adaptive soft threshold method adaptively sets the threshold function to match different datasets. The introduction of the penalty factor eliminates the cumulative error of each RBM layer during DBN training, thereby reducing the error of the final training result and improving the diagnosis accuracy. By using the adaptively wavelet-denoised data as the input of the enhanced DBN, the two optimization algorithms are combined into a complete ALW-DBN fault diagnosis model.
For the TE process, noise reduction comparisons were performed on both normal and fault data. The results indicate that adaptive soft threshold noise reduction is more suitable for the enhanced DBN model. Comparing the original DBN, the enhanced DBN and the ALW-DBN on the diagnosis rates of the 21 fault types, the ALW-DBN model had the best diagnostic performance, with an average FDR of 96.21% and an FPR of 3.0%. Compared with other machine learning methods, it achieved a relatively large improvement in both FDR and FPR. Compared with the DCNN in terms of FDT, the average FDT of the ALW-DBN model was 33.65 min, better than the DCNN result. This shows that the enhanced DBN model combined with adaptive wavelet noise reduction yields better diagnostic results.
The ALW-DBN model also performed well at different noise levels in the acid gas absorption process. When the noise standard deviation was less than 8, the FDR of the ALW-DBN model exceeded 95%, and its average FDR over the five noise levels was 93.75%, compared with 77.1% for the enhanced DBN method; the average FPR of the ALW-DBN model was 0.537%, whereas that of the enhanced DBN model was 2.98%. This demonstrates that the ALW-DBN model offers good diagnostic accuracy and noise adaptability for actual chemical processes under different noise effects. Regarding the FDT, effective features could be extracted from the process data faster after adaptive noise reduction; compared with the improved DBN model, the ALW-DBN model shortened the average FDT by 4.4 min, showing its effectiveness in fault diagnosis.
At present, the ALW-DBN model has clear advantages in industrial chemical process applications. However, because of the embedded iterative penalty optimization algorithm, training the ALW-DBN model is more time-consuming. The model also makes errors when identifying fault types not present in the training data, and its autonomous learning ability is weak. Additionally, compared with the DCNN method, the DBN has obvious disadvantages in extracting temporal information from chemical data. These shortcomings limit the practical application of the method in complex chemical processes. In future work, we will investigate how to strengthen the generality of the method and improve its performance, which will help formulate safe, reliable management measures and effective accident prevention plans for the maintenance of industrial chemical processes.