A Novel Dynamic Process Monitoring Algorithm: Dynamic Orthonormal Subspace Analysis

Abstract: Orthonormal subspace analysis (OSA) has been proposed to handle the subspace decomposition and principal component selection issues in traditional key performance indicator (KPI)-related process monitoring methods such as partial least squares (PLS) and canonical correlation analysis (CCA). However, the static OSA algorithm is not appropriate for a dynamic process, since OSA pays no attention to the auto-correlation in the variables. Therefore, a novel dynamic OSA (DOSA) algorithm is proposed to capture the auto-correlative behavior of process variables while still monitoring KPIs accurately. This study also discusses whether it is necessary to expand the dimension of both the process variables matrix and the KPI matrix in DOSA. Test results on a mathematical model and the Tennessee Eastman (TE) process show that DOSA can address the dynamic issue while retaining the advantages of OSA.


Introduction
Process monitoring and fault detection are two important aspects of process systems engineering because they are key to ensuring the safety and normal operation of industrial processes [1]. Traditional data-driven algorithms such as principal component analysis (PCA) [2] and independent component analysis (ICA) [3] have therefore been proposed to monitor processes and improve product quality. PCA and ICA can effectively detect faults in a process. However, in the actual production process at a modern industrial plant, a large number of controllers, sensors, and actuators are widely distributed, and not all data need to be analyzed [4,5]. That is to say, not all process variables directly affect safety and product quality. The information highly relevant to product quality and economic benefits is called key performance indicators (KPIs), and its role should be emphasized in process monitoring [6,7]. It is worth mentioning that both PCA and ICA monitor KPI-related and KPI-unrelated components simultaneously, and they perform poorly in detecting faults in KPI-related components because the fault information might be submerged in the disturbances of numerous KPI-unrelated components. As such, KPI-related process monitoring methods such as the partial least squares (PLS) [8] and canonical correlation analysis (CCA) [9] algorithms have developed rapidly in recent decades, and this development is essential for ensuring production safety and obtaining superior operating performance.
However, there are still some drawbacks to these traditional KPI-related algorithms. First, the residual subspace calculated by the PLS algorithm is non-orthogonal to the principal component (PC) subspace, which means that some KPI-related information may leak into the residual subspace [10,11]. Second, the CCA algorithm requires KPIs to be available during both the offline training and online monitoring stages, as it uses KPI variables to construct indices [12,13]. Third, both the PLS and CCA algorithms are unable to extract PCs [14,15].
To address the above issues in traditional KPI-related algorithms, Lou et al. proposed orthonormal subspace analysis (OSA) [16]. OSA divides the process data and KPIs into three orthonormal subspaces, namely, the subspaces of KPI-related components, of KPI-unrelated components in the process data, and of process-unrelated components in the KPIs. Furthermore, the cumulative percent variance (CPV) method is used to select the number of PCs in the OSA algorithm. Because the OSA algorithm can monitor each subspace independently, it is not limited by the availability of KPIs during the offline and online stages.
The original OSA was proposed to address monitoring issues in static processes, so it assumes that the observations are time-independent. However, dynamic features widely exist in most industrial processes, and, hence, the auto-correlation in the variables interferes with the extraction of the KPI-related information [17,18]. As a result, the subspaces obtained by the OSA algorithm are not orthonormal in dynamic processes.
The "time lag shift" method, which lists the historical data as additional variables alongside the original variable set, is an effective measure for handling the dynamic issue, and it has been adopted in the PLS and CCA algorithms, yielding the dynamic PLS (DPLS) and dynamic CCA (DCCA) algorithms. Therefore, in this paper, the "time lag shift" method is also combined with the OSA algorithm, yielding the dynamic OSA (DOSA) algorithm, which is applied to the Tennessee Eastman (TE) process to illustrate its efficiency.
The contributions of this study are as follows. First, this study proposes DOSA to deal with the low detection rates caused by process dynamics. DOSA can determine whether a fault in a dynamic process originates from KPI-related or KPI-unrelated process variables or from the measurement of the KPIs. Second, this study discusses whether it is necessary to expand the dimension of both the process variables matrix and the KPI matrix in order to reduce the computation. At the same time, a new method to select the time lag number in the "time lag shift" structure is proposed. Additionally, we analyze the impact of the sampling period on DOSA. Third, we place an emphasis on the real-time nature of the information and design new monitoring indices. Finally, this study compares the detection rates of the OSA, DOSA, DPLS, and DCCA algorithms.
The remainder of this paper is organized into five sections. Section 2 discusses the classical OSA algorithm and the "time lag shift" method. Section 3 proposes DOSA for dynamic process monitoring. Section 4 compares the DOSA algorithm with other KPI-related algorithms on the TE process. Section 5 reviews the contributions of this work.

Orthonormal Subspace Analysis
Here, we take X ∈ R^{n×s} as the process variables matrix (where n is the number of samples and s is the number of process variables), and, as in the standard PLS identification technique, we introduce the KPI matrix Y ∈ R^{n×r} (where r is the number of KPIs). OSA decomposes both X and Y into the following bilinear terms:

X = T_com Ξ_X^T + E_OSA, Y = T_com Ξ_Y^T + F_OSA, (1)

where T_com ∈ R^{n×φ} (φ is the number of principal components) contains the common latent variables shared by X and Y; Ξ_X ∈ R^{s×φ} and Ξ_Y ∈ R^{r×φ} are the transformation matrices; and E_OSA ∈ R^{n×s} and F_OSA ∈ R^{n×r} are the residual matrices. OSA, along with PLS and CCA, is thus called a KPI-related algorithm. As opposed to PLS and CCA, the extracted subspaces of OSA are proved to be orthogonal [16]. That is to say, T_com, E_OSA, and F_OSA in Equation (1) are mutually orthogonal and, most importantly, they can be monitored independently.
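As an illustration of the bilinear model in Equation (1), the following sketch builds X and Y from a shared latent matrix and verifies that the common part and the residuals are orthogonal. The dimensions and coefficient matrices are arbitrary; this is not the OSA estimation algorithm of [16], only a demonstration of the decomposition that it targets.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, r, phi = 500, 6, 3, 2

# Shared latent variables (common to X and Y) with orthonormal columns
T_com, _ = np.linalg.qr(rng.standard_normal((n, phi)))

Xi_X = rng.standard_normal((s, phi))   # transformation matrix for X
Xi_Y = rng.standard_normal((r, phi))   # transformation matrix for Y

# Residuals projected orthogonal to the common subspace
P = np.eye(n) - T_com @ T_com.T
E = P @ rng.standard_normal((n, s)) * 0.1
F = P @ rng.standard_normal((n, r)) * 0.1

X = T_com @ Xi_X.T + E   # Equation (1)
Y = T_com @ Xi_Y.T + F

# The common part and the residuals are orthogonal by construction
assert np.allclose(T_com.T @ E, 0, atol=1e-8)
assert np.allclose(T_com.T @ F, 0, atol=1e-8)
```

Because the three parts are orthogonal, each can in principle be monitored independently, which is the property OSA exploits.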

The "Time Lag Shift" Method
The OSA algorithm in Section 2.1 implicitly assumes that the current observations are statistically independent of the historical observations [19,20]. That is to say, OSA only considers the correlation between variables at the same time and does not consider the mutual influence of variables at different times. However, most data from industrial processes show some degree of dynamic behavior; that is, the sampling data at different times are correlated. For such a process, the static OSA algorithm is not applicable.
The most common way to address such a problem is to use an autoregressive (AR) model to describe the dynamic characteristics. Similarly, the OSA algorithm can be extended to take the serial correlations into account by augmenting each observation vector, X(t) ∈ R^{1×s} or Y(t) ∈ R^{1×r}, at the current time t with the previous l_x or l_y observations in the following manner [21]:

[X(t), X(t−1), …, X(t−l_x)] and [Y(t), Y(t−1), …, Y(t−l_y)], (2)

As can be seen in Equation (2), the first s columns of the augmented X(t) and the first r columns of the augmented Y(t) represent the data at the current time, and the rest represent the data at past times. By stacking these rows for n sampling times, one obtains the augmented matrices X ∈ R^{n×[(l_x+1)×s]} and Y ∈ R^{n×[(l_y+1)×r]}.
By performing this dimension expansion on the data matrices in Equation (2), the static OSA method can be used to analyze the autocorrelation, cross-correlation, and lagged correlation among the data simultaneously. That is to say, X and Y will be decomposed by OSA. More details can be found in Section 3.
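The augmentation in Equation (2) can be sketched as follows; the function name and the layout (current sample first, followed by the lagged ones) are our own choices for illustration.

```python
import numpy as np

def time_lag_shift(data, lag):
    """Augment each observation with its `lag` previous observations.

    data: (N, m) array of N samples of m variables.
    Returns an (N - lag, (lag + 1) * m) matrix whose first m columns
    hold the current sample and the remaining columns hold the lags.
    """
    N, m = data.shape
    rows = []
    for t in range(lag, N):
        # current sample first, then t-1, ..., t-lag, as in Equation (2)
        rows.append(np.concatenate([data[t - k] for k in range(lag + 1)]))
    return np.asarray(rows)

X = np.arange(12.0).reshape(6, 2)   # 6 samples, 2 variables
X_aug = time_lag_shift(X, lag=2)
print(X_aug.shape)   # (4, 6)
```

Note that the first `lag` samples are dropped, so the augmented matrix has slightly fewer rows than the original.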

Determination of the Lag Number
As the traditional lag determination methods, such as the Akaike information criterion (AIC) [22] and the Bayesian information criterion (BIC) [23], are only suitable for a steady state, a new lag determination method should be proposed for DOSA.
Suppose the relationship between the data at the current time and at past times is as follows:

X(t) = Σ_{i=1}^{l_x} X(t−i) A_i^T + D_x(t), Y(t) = Σ_{i=1}^{l_y} Y(t−i) B_i^T + D_y(t), (3)

where A = [A_1, A_2, …, A_{l_x}] and B = [B_1, B_2, …, B_{l_y}] are the coefficient matrices, and D_x(t) ∈ R^{1×s} and D_y(t) ∈ R^{1×r} denote the disturbances introduced at each time, which are statistically independent of the past data. The coefficient matrices A and B can be estimated by the least squares method:

Â^T = (Z_x^T Z_x)^{−1} Z_x^T X_c, B̂^T = (Z_y^T Z_y)^{−1} Z_y^T Y_c, (4)

where Z_x and Z_y stack the lagged observations [X(t−1), …, X(t−l_x)] and [Y(t−1), …, Y(t−l_y)] row by row, and X_c and Y_c stack the current observations X(t) and Y(t). Therefore, D_x(t) and D_y(t) can be estimated as follows:

D̂_x(t) = X(t) − Σ_{i=1}^{l_x} X(t−i) Â_i^T, D̂_y(t) = Y(t) − Σ_{i=1}^{l_y} Y(t−i) B̂_i^T. (5)

Then, the optimal numbers of time lags are the ones that make the following indices

Lag_x = (1/n) Σ_t ||D̂_x(t)||², Lag_y = (1/n) Σ_t ||D̂_y(t)||² (6)

reach their minimum, such that the indices no longer change significantly if we continue increasing the time lag. As opposed to X(t) and Y(t), D_x(t) and D_y(t) are time-uncorrelated and independent of the initial states of X(t) and Y(t), so this criterion can be applied to a dynamic process in both steady and unsteady states.
Additionally, we set up an index to quantify when 'the value of Lag_x or Lag_y no longer changes significantly', as shown in Equation (7):

RC% = |Lag_i − Lag_{i−1}| / Lag_{i−1} × 100%, (7)

where Lag_i represents the value of Lag_x or Lag_y when the lag number is l_x (l_x > 1) or l_y (l_y > 1), and Lag_{i−1} represents the value of Lag_x or Lag_y when the lag number is l_x − 1 or l_y − 1. If the value of RC% drops below 5%, we say that 'the value of Lag_x or Lag_y no longer changes significantly'.
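Under these definitions, the lag selection can be sketched as below: the Lag index is taken to be the mean squared one-step residual of a least-squares AR fit, and the 5% RC% rule is applied as described. The function names are illustrative, not from the paper.

```python
import numpy as np

def lag_index(data, lag):
    """Mean squared AR residual (the Lag index of Equation (6)) for a given lag."""
    N, _ = data.shape
    # Rows of Z hold [x(t-1), ..., x(t-lag)]; rows of T hold x(t).
    Z = np.hstack([data[lag - i : N - i] for i in range(1, lag + 1)])
    T = data[lag:]
    A, *_ = np.linalg.lstsq(Z, T, rcond=None)   # least-squares fit, Equation (4)
    D = T - Z @ A                                # estimated disturbances, Equation (5)
    return float(np.mean(np.sum(D**2, axis=1)))

def select_lag(data, max_lag=6, tol=0.05):
    """Smallest lag after which RC% stays below 5% (Equation (7))."""
    prev = lag_index(data, 1)
    for l in range(2, max_lag + 1):
        cur = lag_index(data, l)
        if abs(cur - prev) / prev < tol:         # RC% < 5%
            return l - 1
        prev = cur
    return max_lag

# A first-order autoregressive process should need only one lag.
rng = np.random.default_rng(1)
e = rng.standard_normal((2000, 2))
x = np.zeros_like(e)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + e[t]
print(select_lag(x))
```

Unlike AIC or BIC, this criterion only asks when the residual energy stops improving, which is why it also applies to unsteady operating regimes.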

DOSA Procedure
Step 1. The "time lag shift" method mentioned in Section 2.2. Calculate the lag numbers l_x and l_y using Equation (6). Then, augment X(t) and Y(t) with the previous observations, as shown in Equation (2). In doing so, we obtain the augmented matrices X and Y with n samples.
Step 2. Traditional OSA mentioned in Section 2.1.
(a) Calculate the Y-related component X_OSA ∈ R^{n×[(l_x+1)×s]} and the X-related component Y_OSA using Equation (8). X_OSA and Y_OSA are both called 'the common component' and are shown to be equal in reference [16]. In industrial processes, we tend to focus on the process variables related to the KPIs. By extracting the common components and monitoring them (Step 3), one can know whether there are faults in the variables related to the KPIs.
(b) Calculate the unique components

E_OSA = X − X_OSA, F_OSA = Y − Y_OSA, (9)

where E_OSA and F_OSA are both called 'the unique component'. By extracting and monitoring the unique components (Step 3), one can know whether there are faults in the variables unrelated to the KPIs.
(c) Extract the PCs in X_OSA using the PCA decomposition, because the variables in X_OSA might be highly correlated:

X_OSA = T_comx P_com^T + E_pca, (10)

where T_comx ∈ R^{n×k} is the score matrix of the common component; P_com ∈ R^{[(l_x+1)×s]×k} is the loading matrix of the common component; E_pca ∈ R^{n×[(l_x+1)×s]} is the residual matrix; and k is the number of PCs. In this step, the PCs are selected by using the CPV method, and the threshold value follows the usual PCA criterion, e.g., 85%.
In theory, the score matrices of the common components X_OSA and Y_OSA are equal unless there is something wrong with the relationship between X and Y. We use the sum of squares of the difference between the score matrices to monitor whether there are faults in the relationship between X and Y (Step 3). Similarly to Equation (10), the score matrix of the other common component is T_comy = Y_OSA P_com.
Step 3. Monitoring indices calculation. Taking into account the real-time nature of the information, PCA monitoring is not directly performed on X_OSA, E_OSA, and F_OSA, because these components contain a great amount of information from past times. The calculation of the indices is as follows: (a) The first s columns of X_OSA are monitored by the PCA approach, generating the T²_C and SPE_C indices. That is to say, we only monitor the data at the current time. (b) Similarly, the first s columns of E_OSA and the first r columns of F_OSA are monitored by the PCA approach, generating the indices T²_E, T²_F, SPE_E, and SPE_F. (c) Furthermore, if there is something wrong with the relationship between X and Y, there will be significant differences between the score matrices T_comx and T_comy. Therefore, the following index can be used to test for an abnormal relationship:

SPE_XY(t) = [T_comx(t) − T_comy(t)][T_comx(t) − T_comy(t)]^T. (11)

Figure 1 summarizes the procedure presented above.
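The current-time indices of Step 3 can be sketched as follows: a PCA model is fitted on normal data, Hotelling's T² and the SPE are computed for new samples, and the SPE_XY index compares the two score matrices of the common component. This is a generic illustration, not the exact implementation of the paper.

```python
import numpy as np

def pca_indices(train, test, n_pc):
    """Hotelling's T^2 and SPE of `test` under a PCA model fitted on `train`."""
    mu = train.mean(axis=0)
    U, S, Vt = np.linalg.svd(train - mu, full_matrices=False)
    P = Vt[:n_pc].T                              # loadings
    lam = S[:n_pc] ** 2 / (len(train) - 1)       # variances of the retained PCs
    scores = (test - mu) @ P
    t2 = np.sum(scores**2 / lam, axis=1)         # T^2 index
    resid = (test - mu) - scores @ P.T
    spe = np.sum(resid**2, axis=1)               # SPE index
    return t2, spe

def spe_xy(T_comx, T_comy):
    """Row-wise squared difference of the two score matrices (Equation (11))."""
    d = T_comx - T_comy
    return np.sum(d**2, axis=1)

rng = np.random.default_rng(2)
normal = rng.standard_normal((300, 5))   # training data (normal operation)
new = rng.standard_normal((50, 5))       # samples to be monitored
t2, spe = pca_indices(normal, new, n_pc=2)
```

In practice, a control limit (e.g., from an F- or chi-squared approximation) would be attached to each index; a sample is flagged when the statistic exceeds its limit.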
A Numerical Simulation Example

To analyze the characteristics of the DOSA method and to compare its performance with that of the OSA algorithm, we use a simple simulation process to illustrate their monitoring performances. Consider a large-scale process in which each single subprocess can be expressed using a time-invariant state-space model, in which s_1, s_2, and s_3 are independent Gaussian-distributed vectors; ξ and ζ are the noise components, which are independent of the process measurements; and C and E and D and F are the coefficient matrices of the dynamic and static parts, respectively. Here, we take three algorithms into consideration: OSA; the DOSA that expands the dimension of X, denoted as DOSA-X; and the DOSA that expands the dimension of both X and Y, denoted as DOSA-XY.

The Optimal Numbers of Time Lag

To determine the numbers of time lags, dynamic models with several different lag numbers are fitted to the normal data. Here, l_x and l_y are the numbers of lags in the matrices X and Y, respectively. In this work, we set l_x ∈ [0, 1, …, 6] and l_y ∈ [0, 1, …, 6], and several values of Lag_x and Lag_y are shown in Figures 2 and 3. From the analyses shown in Figures 2 and 3, the values of Lag_x decreased as long as l_x was less than or equal to 3, and the values of Lag_y tended to decrease as long as l_y was less than or equal to 3. Beyond that point, the values of both Lag_x and Lag_y would not decrease significantly if we continued increasing l_x and l_y. Therefore, the optimal lag numbers were l_x = 3 and l_y = 3, and this can be seen intuitively in the diagrams. Furthermore, several values of Lag_x, Lag_y, and RC% are listed in Tables 1 and 2.
Table 1. The values of Lag_x under different l_x values.

From the data presented in Tables 1 and 2, the values of RC% were less than 5% when l_x and l_y gradually increased from 3. This also means that the optimal numbers of lags were l_x = 3 and l_y = 3, which is consistent with the true value.
Here, we take the traditional BIC method, which imposes a larger penalty than the AIC, as an example to calculate the optimal lag number for this model. When selecting the best model from a set of alternative models, the model with the lowest BIC should be chosen.
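For reference, the BIC of a least-squares vector AR fit can be sketched as below, using the common form n·ln(RSS/n) + k·ln(n). The exact BIC variant used in the paper is not specified, so this formulation is an assumption.

```python
import numpy as np

def bic_ar(data, lag):
    """BIC of a least-squares vector AR(lag) fit (assumed n*ln(RSS/n) + k*ln(n) form)."""
    N, m = data.shape
    Z = np.hstack([data[lag - i : N - i] for i in range(1, lag + 1)])  # lagged data
    T = data[lag:]                                                     # current data
    A, *_ = np.linalg.lstsq(Z, T, rcond=None)
    rss = np.sum((T - Z @ A) ** 2)     # residual sum of squares
    n_eff = len(T)                     # effective number of samples
    k = Z.shape[1] * m                 # number of estimated coefficients
    return n_eff * np.log(rss / n_eff) + k * np.log(n_eff)

# On a first-order AR process, the BIC should pick one lag.
rng = np.random.default_rng(3)
e = rng.standard_normal((2000, 2))
x = np.zeros_like(e)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + e[t]
best = min(range(1, 7), key=lambda l: bic_ar(x, l))
```

Because the penalty term k·ln(n) grows with every added lag, the BIC stops adding lags once the residual improvement becomes small, which is exactly the behavior that fails on unsteady data.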
From the data presented in Tables 3 and 4, the optimal numbers of lags were l_x = 2 and l_y = 3. However, we introduced a third-order lag, as mentioned in Section 3.3.1. Therefore, instead of the BIC, the method proposed in this work was applied to test the algorithm.

(a) Fault 1: a step change with an amplitude of 3 in s_1. The static parameter s_1 belongs to the unique part of X. The detection rates and false alarm rates of the three algorithms are shown in Table 5. In Table 5, the detection rate of T²_E was extremely high, so we could correctly infer that the fault occurred in the unique part of X. In other words, there was probably a fault in the process variables rather than in the measurement of the KPIs. More importantly, the detection rates of the two dynamic monitoring methods were higher than the detection rate of OSA. Thus, the dynamics problem could be solved by DOSA in this case. Furthermore, expanding the dimension of both X and Y worked better than expanding the dimension of X alone. It can be hypothesized that expanding the dimension of the matrix can improve the sensitivity of the algorithm to the fault.
(b) Fault 2: a step change with an amplitude of 3 in s_3. The static parameter s_3 belongs to the unique part of Y. The results are shown in Table 6. As can be seen in Table 6, even though we had expanded the dimension of X, the detection rates of all of the indices were extremely low. The index T²_F performed better only when the dimensions of both X and Y were expanded. This means that the fault occurred in the unique part of Y; that is to say, there was a fault in the measurement of the KPIs rather than in the process variables. In addition, the detection rate of DOSA-XY was considerably higher than those of the other two algorithms. Thus, expanding the dimensions of both data matrices performs well in a dynamic process while also solving the dynamics issue.

(c) Fault 3: a step change with an amplitude of 3 in s_2. The static parameter s_2 belongs to the common part of both X and Y. The results are shown in Table 7. As can be seen in Table 7, we could not judge the location of the fault if we did not expand the dimension of Y, because the detection rates of most of the indices were about 50%. The index T²_C performed better when the dimensions of both X and Y were expanded. This means that the fault occurred in the common part of both X and Y; that is to say, there was a fault in both the process variables and the measurement of the KPIs. In addition, the detection rate of DOSA-XY was considerably higher than those of the other two algorithms.
(d) Fault 4: a change in the coefficient matrix D, which affects the relationship between X and Y. The results are shown in Table 8. In Table 8, the index SPE_XY, which specifically detects the relationship between X and Y, performed well. We could infer that there was a high probability of a fault in D or F. Moreover, the detection rate of DOSA-XY was considerably higher than those of the other two algorithms. Thus, expanding the dimensions of both X and Y performs well while also solving the dynamics issue. In sum, it is necessary to expand the dimension of both X and Y.

The Effect of the Sampling Period

In this section, we take the effect of the sampling rate on the DOSA algorithm into account. The dynamic models and faults in Sections 3.3.1 and 3.3.3 still apply to this section.
First, this section discusses the effect of doubling the sampling period on the selection of the lag number. We still set l_x ∈ [0, 1, …, 6] and l_y ∈ [0, 1, …, 6]; several values of Lag_x and Lag_y and the corresponding rates of change are listed in Tables 9 and 10.
Table 9. The values of Lag_x for doubling the sampling period.

As shown in Tables 9 and 10, the optimal lag numbers were l_x = 1 and l_y = 1, because the values of RC% were less than 5% when l_x and l_y gradually increased from 1. That is to say, the optimal lag numbers were affected by the sampling period. Thus, the effect of the sampling period on the detection rates of DOSA was also a concern.
(a) Fault 1: the fault occurs in the unique part of X. The experimental comparison of the primitive and doubled sampling periods is shown in Table 11. As shown in the table, the detection rate of T²_E decreased by about 9%, and the detection rate of SPE_E decreased by about 4%.

(d) Fault 4: the fault occurs in the coefficient matrix D, which affects the relationship between X and Y. The experimental comparison of the primitive and doubled sampling periods is shown in Table 14. As can be seen in Table 14, there was no significant change in the detection rate of SPE_XY.

Based on the above testing results, we can see that the change in sampling period affected the determination of the lag numbers. The detection rates were also slightly affected. That is to say, the DOSA algorithm is sensitive to the change in sampling period, because the AR model constructed by DOSA will differ with the sampling period. We hope to solve this problem as we continue to improve this project in the future.

Conclusion
As shown by the above results, we can conclude the following:
(1) It is necessary to expand the dimension of both X and Y.
(2) DOSA could adequately solve the dynamics issue.
(3) DOSA is able to directly analyze the location of the fault; thus, we can know whether a fault actually occurs in the KPI-related process variables, the KPI-unrelated process variables, or the measurement of the KPIs.
(4) DOSA is sensitive to the change in sampling period.

Tennessee Eastman Process
In this section, we briefly introduce the industrial benchmark of the Tennessee Eastman (TE) process [24,25]. All the discussed methods will be further applied to it to demonstrate their efficiencies. The TE process model is a realistic simulation program of a chemical plant, which is widely accepted as a benchmark for control and monitoring studies [26]. The flow diagram of the process is described in [27,28], and the FORTRAN code of the process is available on the Internet. The process produces two products from four reactants, as shown in Equation (14):

A(g) + C(g) + D(g) → G(liq), A(g) + C(g) + E(g) → H(liq), (14)

The TE process has 52 variables, including 41 process variables and 11 manipulated variables. Table 15 lists a set of 15 known faults introduced to the TE process. The training and test sets were collected by running 25 h and 48 h simulations, respectively, in which faults were introduced 1 h and 8 h into the simulation, and each variable was sampled every 3 min. Thus, the training sets consist of 500 samples, whereas the test sets contain 960 samples per simulation [29,30].

The Numbers of Time Lag in TE Process
Here, L_x and L_y are the lag numbers in the augmented process variables matrix and the augmented KPI matrix, respectively. In this work, we set L_x ∈ [0, 1, …, 6] and L_y ∈ [0, 1, …, 6]. Several values of Lag_x and Lag_y and their corresponding rates of change are listed in Tables 16 and 17. From the data presented in Tables 16 and 17, the values of Lag_x tended to be the lowest if L_x was less than or equal to 3, and the values of Lag_y tended to be the lowest if L_y was less than or equal to 3. At that point, the rates of change were less than 5% when L_x and L_y gradually increased from 3. That is to say, the values of Lag_x and Lag_y would not decrease significantly if we continued increasing L_x and L_y. Therefore, the optimal numbers of lags were L_x = 3 and L_y = 3.

Simulation Study
We focus on the ability to detect KPI-related faults in the TE process. Table 18 lists a set of nine KPI-related faults introduced to the TE process, together with the detection and false alarm rates of four algorithms: OSA, DOSA, dynamic CCA (DCCA), and dynamic PLS (DPLS). Considering the data presented in Table 18, DOSA shows better performance than the other algorithms on KPI-related faults. Meanwhile, the DOSA algorithm showed a great advantage over the OSA algorithm on Faults 1-2, 8, and 12-13. From this analysis, it can be concluded that the DOSA algorithm performs better than the OSA algorithm on dynamic problems. Figure 4 shows the monitoring charts of OSA and DOSA for these faults. The blue line represents the value of the statistic, and the red line represents the control limit. When the blue line is higher than the red line, a fault has been detected. It is obvious that the DOSA algorithm is more sensitive to these faults.

Conclusions
In this paper, we have presented an improved version of the OSA algorithm for large-scale process monitoring, called the DOSA algorithm, and compared its performance against DPLS and DCCA, which are KPI-related algorithms that are also used to solve dynamic problems.
Considering the testing results of the dynamic model, this article showed that it is necessary to expand the dimension of both the process variables matrix and the KPI matrix when using the DOSA algorithm. Furthermore, the DOSA algorithm is able to adequately solve the dynamics issue; thus, we can know whether a fault actually occurs in the KPI-related or KPI-unrelated process variables or in the measurement of the KPIs.
The comparative study was conducted using the Tennessee Eastman benchmark process, and we can conclude that the DOSA algorithm achieves better detection rates of faults from the analysis of the results obtained.However, the DOSA algorithm is sensitive to the change in sampling period.We intend to solve this problem as we continue the improvement of this project in the future.



Figure 1.
The flow chart of DOSA.



Figure 2.
The values of Lag_x under different l_x values.

Figure 3.
The values of Lag_y under different l_y values.

Table 2.
The values of Lag_y under different l_y values.

Table 3.
The values of BIC under different l_x values.

Table 4.
The values of BIC under different l_y values.

Table 5 .
Fault 1 detection rates and false alarm rates of three algorithms.

Table 6 .
Fault 2 detection rates and false alarm rates of three algorithms.

Table 7 .
Fault 3 detection rates and false alarm rates of three algorithms.
(d) Fault 4: the matrix D changed to D_f.

Table 8 .
Fault 4 detection rates and false alarm rates of three algorithms.

Table 10 .
The values of Lag_y for doubling the sampling period.

Table 11 .
Comparison of primitive and doubled sampling periods (Fault 1).

(b) Fault 2: the fault occurs in the unique part of Y. The experimental comparison of the primitive and doubled sampling periods is shown in Table 12. As shown in the table, the detection rate of T²_F decreased by about 8%, and the detection rate of SPE_F decreased by about 3%.

(c) Fault 3: the fault occurs in the common part of X and Y. The experimental comparison of the primitive and doubled sampling periods is shown in Table 13. As shown in the table, the detection rate of T²_C decreased by about 8%, and the detection rate of SPE_C decreased by about 5%.

Table 12 .
Comparison of primitive and doubled sampling periods (Fault 2).

Table 13 .
Comparison of primitive and doubled sampling periods (Fault 3).

Table 14 .
Comparison of primitive and doubled sampling periods (Fault 4).

Table 15 .
Descriptions of known faults in TE process.

Table 16 .
The values of Lag_x under different L_x values.

Table 17.
The values of Lag_y under different L_y values.

Table 18 .
Testing results of KPI-related faults for the TE process.