Abstract
With the increasing demands for process safety and manufacturing efficiency, process monitoring has garnered significant attention from both academia and industry over the past few decades. Process monitoring aims to detect deviations from normal operating conditions by analyzing data features extracted under predefined normal states. However, the inherent non-stationarity of real industrial processes can compromise the accurate definition of these normal conditions, thereby limiting the effectiveness of traditional multivariate statistical process monitoring (MSPM) methods. A common strategy to address non-stationarity is to employ projection matrices that transform non-stationary time series into stationary ones, upon which monitoring statistics are constructed. Nevertheless, this approach often overlooks the valuable information contained in the non-stationary subspace, leading to insufficient extraction of fault-relevant features. Fault signatures may manifest in both stationary and non-stationary components of the process data. To overcome these limitations, an integrated monitoring framework that combines Stationary Subspace Analysis (SSA), a Stacked Autoencoder (SAE), and Support Vector Data Description (SVDD) is proposed in this research. Specifically, SSA was first applied to decompose the process data into stationary and non-stationary subspaces. Monitoring statistics were then constructed directly in the stationary subspace, while reconstruction errors from the SAE were used to capture features in the non-stationary subspace. Finally, SVDD was used to fuse the dual-space statistical indicators, enabling comprehensive fault detection. The proposed method was validated on the Tennessee Eastman process and real industrial processes. Comparative results demonstrate that it outperformed existing non-stationary monitoring techniques in terms of monitoring performance.
1. Introduction
Process monitoring is a crucial means to ensure the stable and safe operation of industrial processes []. With the accumulation of massive production data in distributed control systems, data-driven process monitoring methods have been extensively developed and applied. Multivariate Statistical Process Monitoring (MSPM) has attracted considerable attention due to its low dependence on prior process knowledge and ease of implementation.
Currently, MSPM mainly includes principal component analysis (PCA), partial least squares (PLS), independent component analysis (ICA), and so on []. These methods typically operate by projecting high-dimensional process data into a low-dimensional feature space that preserves most of the original information []. Within this low-dimensional space, the distribution of normal operating conditions can be characterized by certain statistics, for example, Hotelling's T2 statistic [,], thus enabling effective fault detection. Most MSPM methods assume that the process operates under a predefined normal and stable condition, which means that the process variables are stationary []. However, in large-scale and complex chemical processes, non-stationary variables commonly exist due to factors such as equipment aging, planned operational adjustments, and external disturbances [], posing significant challenges to the monitoring performance of traditional MSPM methods. A primary strategy for non-stationary process monitoring is to eliminate the non-stationary trends through preprocessing before establishing models [], and a common approach is differencing. Time series in general industrial processes can be converted to stationary ones by differencing at most twice [], which removes seasonality, cyclicality, or other forms of non-stationary behavior. However, dynamic information of the process may be lost during differencing, which could compromise monitoring performance []. Adaptive model updating strategies are also applied to address non-stationarity, in which the model structure and parameters are continuously updated. However, it is difficult to determine when to update the model because faults and non-stationary trends are hard to distinguish, and if the model is updated with fault data, the effectiveness of monitoring will be compromised.
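To make the differencing idea concrete, the short Python sketch below (assuming NumPy and statsmodels are available; the random-walk series and the augmented Dickey–Fuller check are illustrative, not data from this work) shows a non-stationary series becoming stationary after a single difference:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Illustrative only: a random walk is non-stationary, but its first difference
# passes an ADF stationarity test. In practice, differencing also removes
# dynamic information that may be fault-relevant.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1000))          # integrated (non-stationary) series
print("raw series p-value:  ", adfuller(x)[1])            # typically > 0.05
print("first-difference p-value:", adfuller(np.diff(x))[1])  # typically << 0.05
```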
In addition to these methods, long-term equilibrium relationship analysis is considered an effective approach. Its core idea is to extract stable collaborative relationships from non-stationary variables. Cointegration analysis (CA), originally developed for the analysis of economic variables [,], was introduced to monitor non-stationary processes. Zhao et al. [] proposed a total variable decomposition and distributed modeling algorithm based on sparse cointegration analysis (SCA) to fully explore the underlying relationships among non-stationary variables. Hu et al. [] proposed a dual cointegration analysis for the diagnosis of common and specific non-stationary fault variations.
Stationary subspace analysis (SSA), first proposed by von Bünau et al. [], is another long-term equilibrium relationship analysis method, which aims to separate stationary and non-stationary sources from mixed signals. By using the stationary components to build monitoring models, SSA enables effective monitoring of non-stationary processes. Wu [] considered the dynamic characteristics of non-stationary processes and proposed dynamic stationary subspace analysis (DSSA), in which a time-shift technique is introduced to model dynamic relationships and the Mahalanobis distance is adopted to monitor the stationary components of the augmented data. The results of three cases demonstrated the performance of DSSA. Chen [] developed an exponential analytic stationary subspace analysis (EASSA) algorithm to estimate the stationary sources more accurately and with better numerical stability. Two case studies demonstrated that real faults could be distinguished from normal changes.
In large-scale chemical processes, non-stationary variables induced by equipment aging, operational adjustments, and external disturbances present a critical challenge for reliable process monitoring. Traditional non-stationary process monitoring methods predominantly emphasize the modeling of stationary features, while often overlooking fault-relevant information hidden within the non-stationary subspace. This limitation arises from the monitoring design of traditional SSA, which focuses exclusively on extracting stationary components. Consequently, potential fault signatures, such as gradual drifts, oscillatory behaviors, or dynamic anomalies, embedded in the non-stationary components are frequently discarded. This omission compromises monitoring sensitivity and leads to increased missed detection rates in practical applications.
Autoencoders (AE), as typical deep learning models, have demonstrated strong feature extraction capabilities through their encoder-decoder structure []. Their basic principle is to minimize the error between the input and the reconstructed data, thereby learning latent features that effectively represent the input. A stacked autoencoder (SAE) is formed by stacking multiple autoencoders layer by layer, with each hidden representation serving as the input to the next encoder []. Through multi-layer nonlinear mappings, the SAE can gradually extract higher-order features, making it well suited to capturing complex relationships in non-stationary process data.
Motivated by these considerations, this study proposes a hybrid process monitoring framework that integrates SSA and SAE to jointly exploit information from both the stationary and non-stationary subspaces. Specifically, SSA is first applied to decompose process data into stationary and non-stationary components. In the stationary subspace, conventional monitoring statistics are constructed to capture stationary variations. In the non-stationary subspace, an SAE is employed to learn deep latent features, and the reconstruction error is used as a monitoring statistic, thereby retaining fault-relevant dynamic information that would otherwise be ignored. To achieve unified decision-making, Support Vector Data Description (SVDD) [] is then adopted to fuse the monitoring statistics from both subspaces. SVDD provides a powerful one-class classification framework that encloses normal operating data within a hypersphere in feature space, allowing effective discrimination between normal and abnormal conditions. This integration not only enhances sensitivity to both steady-state and dynamic faults but also improves robustness against process non-stationarity. The proposed framework was validated on the benchmark Tennessee Eastman (TE) process and two industrial processes. Experimental results demonstrate that the proposed framework significantly outperformed conventional SSA and several deep learning-based monitoring methods, offering superior detection accuracy and lower false alarm rates.
2. Theory and Methods
2.1. Stationary Subspace Analysis
SSA is a blind source separation approach that factorizes the observed signal $x(t)$ into stationary and non-stationary sources based on Equation (1) as follows:

$$x(t) = A s(t) = A \begin{bmatrix} s^{\mathrm{s}}(t) \\ s^{\mathrm{n}}(t) \end{bmatrix} \tag{1}$$

where $A$ is an invertible mixing matrix, and $s^{\mathrm{s}}(t)$ and $s^{\mathrm{n}}(t)$ are the stationary and non-stationary sources, respectively. The goal of SSA is to separate the stationary sources and non-stationary sources by estimating a separation matrix $B = A^{-1}$ as follows:

$$\hat{s}(t) = \begin{bmatrix} \hat{s}^{\mathrm{s}}(t) \\ \hat{s}^{\mathrm{n}}(t) \end{bmatrix} = B x(t) = \begin{bmatrix} B^{\mathrm{s}} \\ B^{\mathrm{n}} \end{bmatrix} x(t) \tag{2}$$

where $B^{\mathrm{s}}$ and $B^{\mathrm{n}}$ are the stationary and non-stationary projection matrices.
The process data were first divided into $N$ consecutive and non-overlapping epochs $\{X_1, X_2, \dots, X_N\}$. For any candidate projection matrix $B^{\mathrm{s}}$, it was possible to obtain the mean $\hat{\mu}_i$ and covariance matrix $\hat{\Sigma}_i$ of the stationary sources in each epoch, thus obtaining the epoch distribution $\mathcal{N}(\hat{\mu}_i, \hat{\Sigma}_i)$.

The distance between the stationary sources and the standard normal distribution was calculated in each epoch, measured by the Kullback–Leibler divergence:

$$D_{\mathrm{KL}}\!\left(\mathcal{N}(\hat{\mu}_i, \hat{\Sigma}_i)\,\|\,\mathcal{N}(0, I)\right) = \frac{1}{2}\!\left(\operatorname{tr}(\hat{\Sigma}_i) + \hat{\mu}_i^{\top}\hat{\mu}_i - d_s - \log\det\hat{\Sigma}_i\right) \tag{3}$$

where $d_s$ is the number of stationary sources. These divergences were summed over all epochs to construct an objective function as follows:

$$L(B^{\mathrm{s}}) = \sum_{i=1}^{N} D_{\mathrm{KL}}\!\left(\mathcal{N}(\hat{\mu}_i, \hat{\Sigma}_i)\,\|\,\mathcal{N}(0, I)\right) \tag{4}$$

Equation (4) corresponds to the following optimization objective:

$$\hat{B}^{\mathrm{s}} = \arg\min_{B^{\mathrm{s}}} \sum_{i=1}^{N} D_{\mathrm{KL}}\!\left(\mathcal{N}(\hat{\mu}_i, \hat{\Sigma}_i)\,\|\,\mathcal{N}(0, I)\right) \tag{5}$$
The problem is usually solved by the gradient descent method to obtain the optimal stationary projection matrix $\hat{B}^{\mathrm{s}}$ and the stationary sources $\hat{s}^{\mathrm{s}}(t)$.
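As an illustration of this optimization, the following Python sketch (a minimal numerical version assuming NumPy and SciPy; the epoch count, the orthonormal parameterization via QR decomposition, and the optimizer are simplifying choices of ours, not the authors' implementation) minimizes the summed KL divergence of Equation (4):

```python
import numpy as np
from scipy.optimize import minimize

def ssa_projection(X, d_s, n_epochs=10, seed=0):
    """Estimate a stationary projection B_s (d_s x d) by minimizing the summed
    KL divergence between each epoch's projected distribution and N(0, I).
    X: (N, d) data matrix, already standardized."""
    N, d = X.shape
    epochs = np.array_split(X, n_epochs)

    def unpack(theta):
        # Orthonormalize an unconstrained d x d matrix and keep d_s directions.
        Q, _ = np.linalg.qr(theta.reshape(d, d))
        return Q[:, :d_s].T                       # (d_s, d) projection

    def objective(theta):
        B = unpack(theta)
        loss = 0.0
        for E in epochs:
            S = E @ B.T                           # projected epoch, (n_i, d_s)
            mu = S.mean(axis=0)
            Sigma = np.cov(S, rowvar=False) + 1e-6 * np.eye(d_s)
            # KL( N(mu, Sigma) || N(0, I) ), cf. Equation (3)
            loss += 0.5 * (np.trace(Sigma) + mu @ mu - d_s
                           - np.linalg.slogdet(Sigma)[1])
        return loss

    rng = np.random.default_rng(seed)
    res = minimize(objective, rng.standard_normal(d * d), method="L-BFGS-B")
    return unpack(res.x)                          # stationary projection matrix
```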
2.2. Stacked Autoencoder
The structure of the autoencoder is divided into two parts, the encoder and the decoder. For input data $x$, the encoder maps it to the hidden layer in the following form:

$$h = f(W_e x + b_e)$$

where $h$ is the hidden-layer feature vector, $W_e$ is the encoder weight matrix, $b_e$ is the bias vector, and $f(\cdot)$ is the activation function. The decoder reconstructs the hidden-layer features to obtain the reconstructed data $\hat{x} = g(W_d h + b_d)$, with the loss function given as the following:

$$L(x,\hat{x}) = \|x - \hat{x}\|^2$$
A stacked autoencoder, in turn, is obtained by stacking and combining multiple autoencoders, where the hidden-layer features of the current layer are used as inputs to the next encoder.
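For illustration, the PyTorch sketch below builds a three-layer stacked autoencoder of this kind (the layer widths and the end-to-end training, used here instead of layer-wise pretraining for brevity, are simplifying assumptions; the ReLU, Adam, and MSE settings match those reported later in Section 3.1):

```python
import torch
import torch.nn as nn

class SAE(nn.Module):
    """Three-layer stacked autoencoder used to reconstruct the non-stationary
    components (hidden widths are illustrative assumptions)."""
    def __init__(self, n_inputs, hidden=(32, 16, 8)):
        super().__init__()
        dims = [n_inputs, *hidden]
        enc, dec = [], []
        for i in range(len(dims) - 1):
            enc += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
        for i in reversed(range(len(dims) - 1)):
            dec += [nn.Linear(dims[i + 1], dims[i]), nn.ReLU()]
        dec[-1] = nn.Identity()                    # linear output layer
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_sae(model, X_ns, epochs=200, lr=1e-3):
    """Train on the non-stationary components X_ns (torch.FloatTensor, N x d)
    with Adam and an MSE reconstruction loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X_ns), X_ns)
        loss.backward()
        opt.step()
    return model
```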
2.3. Support Vector Data Description
The goal of support vector data description is to find a hypersphere of minimum radius $R$ centered at $a$ that includes all the training objects; when a sample point falls outside the region, the sample can be considered as an anomaly.

Its optimization problem can be formulated as follows:

$$\min_{R,\,a,\,\xi}\; R^2 + C\sum_{i=1}^{n}\xi_i \quad \text{s.t.}\quad \|\phi(x_i)-a\|^2 \le R^2 + \xi_i,\;\; \xi_i \ge 0,\;\; i=1,\dots,n$$

where $C$ is a trade-off parameter, $\xi_i$ are slack variables, and $\phi(\cdot)$ is a nonlinear mapping induced by a kernel function $K(\cdot,\cdot)$. The radius of the hypersphere was calculated through the nonlinear mapping by the kernel function as the following, for any support vector $x_k$ on the boundary:

$$R^2 = K(x_k,x_k) - 2\sum_{i}\alpha_i K(x_i,x_k) + \sum_{i,j}\alpha_i\alpha_j K(x_i,x_j)$$

where $\alpha_i$ are the Lagrange multipliers of the dual problem. The Gaussian kernel function is employed as the following:

$$K(x_i,x_j) = \exp\!\left(-\frac{\|x_i-x_j\|^2}{2\sigma^2}\right)$$

For a new sample $z$, the distance $D$ to the center of the hypersphere region can be calculated using the following equation:

$$D^2 = \|\phi(z)-a\|^2 = K(z,z) - 2\sum_{i}\alpha_i K(x_i,z) + \sum_{i,j}\alpha_i\alpha_j K(x_i,x_j)$$

If $D > R$, it is considered that a fault may have occurred.
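Because SVDD with a Gaussian kernel is closely related to the one-class SVM, a lightweight way to prototype this step is to use scikit-learn's OneClassSVM as a stand-in (a sketch only; the nu and gamma values are illustrative assumptions rather than tuned values from this work):

```python
import numpy as np
from sklearn.svm import OneClassSVM

def fit_svdd(scores_train, nu=0.01, gamma="scale"):
    """scores_train: (N, 2) array holding the two monitoring statistics per
    sample. An RBF one-class SVM is used as a practical surrogate for SVDD."""
    model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma)
    model.fit(scores_train)
    return model

def detect(model, scores_online):
    """Returns True where a sample falls outside the learned boundary (alarm)."""
    return model.decision_function(scores_online) < 0
```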
2.4. Proposed Monitoring Strategy
The modeling and implementation steps of the method are shown in Figure 1, which is mainly divided into two parts: offline modeling and online monitoring.
Figure 1.
The proposed process monitoring framework.
Offline modeling
The offline modeling consists of the following steps:
Step 1: The training data were normalized with Equation (14):

$$\tilde{x} = \frac{x - \mu}{\sigma} \tag{14}$$

where $\mu$ and $\sigma$ denote the sample mean and the standard deviation of the training data, respectively.
Step 2: SSA was employed on the normalized data to extract the stationary and non-stationary components. A Mahalanobis distance statistic was established for the stationary components, while the non-stationary components were input into a stacked autoencoder.
Step 3: The SAE model was established to reconstruct the non-stationary components, and a monitoring statistic based on the reconstruction error was then established.
Step 4: The two statistics were concatenated together, and each statistic can be regarded as a spatial coordinate of a sampling point. Then, all sampling points were mapped to a high-dimensional space via SVDD to find a hypersphere region of minimum size. The radius of the hypersphere is the control limit.
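A compact sketch of these offline steps, reusing the illustrative ssa_projection, SAE, train_sae, and fit_svdd helpers from the sketches in Section 2 (treating the non-stationary part as the orthogonal residual of the stationary projection, and all parameter values, are simplifying assumptions rather than the authors' exact implementation), could look as follows:

```python
import numpy as np
import torch

def offline_modeling(X_train, d_s=6):
    """Offline stage sketch: normalize, split subspaces, build the two
    statistics, and fit the SVDD fusion model."""
    # Step 1: normalization (Equation (14))
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    Xn = (X_train - mu) / sigma

    # Step 2: SSA decomposition; Mahalanobis distance on the stationary part
    Bs = ssa_projection(Xn, d_s)                      # stationary projection
    S = Xn @ Bs.T                                     # stationary components
    Ns = Xn - (Xn @ Bs.T) @ Bs                        # orthogonal (non-stationary) residual
    S_mean, S_cov_inv = S.mean(axis=0), np.linalg.pinv(np.cov(S, rowvar=False))
    md = np.einsum("ij,jk,ik->i", S - S_mean, S_cov_inv, S - S_mean)

    # Step 3: SAE reconstruction error on the non-stationary part
    sae = train_sae(SAE(Ns.shape[1]), torch.tensor(Ns, dtype=torch.float32))
    with torch.no_grad():
        rec = sae(torch.tensor(Ns, dtype=torch.float32)).numpy()
    re = ((Ns - rec) ** 2).sum(axis=1)

    # Step 4: fuse the two statistics with SVDD
    svdd = fit_svdd(np.column_stack([md, re]))
    return dict(mu=mu, sigma=sigma, Bs=Bs, S_mean=S_mean,
                S_cov_inv=S_cov_inv, sae=sae, svdd=svdd)
```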
Online monitoring
The online monitoring consists of the following steps:
Step 1: The online data were normalized using the mean and standard deviation of the offline data.
Step 2: The normalized data were projected into the stationary and non-stationary subspaces. A Mahalanobis distance statistic was established for the stationary subspace, while the non-stationary components were input into the trained stacked autoencoder.
Step 3: The non-stationary components were reconstructed by the trained SAE model to obtain the reconstruction error-based statistics.
Step 4: The two statistics were concatenated and mapped by SVDD to obtain the distance between the new sample and the hypersphere's center, which serves as the fusion statistic. When the fusion statistic exceeds the control limit, it is deemed that a fault has occurred.
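A matching online-monitoring sketch, reusing the model dictionary returned by the offline sketch above (again illustrative rather than the authors' exact implementation), is given below:

```python
import numpy as np
import torch

def online_monitoring(X_new, model):
    """Online stage sketch: apply the offline scalers/projections, compute the
    two statistics, and alarm when SVDD places a sample outside the hypersphere."""
    Xn = (X_new - model["mu"]) / model["sigma"]               # Step 1
    S = Xn @ model["Bs"].T                                    # Step 2: stationary part
    Ns = Xn - (Xn @ model["Bs"].T) @ model["Bs"]              #          non-stationary part
    d = S - model["S_mean"]
    md = np.einsum("ij,jk,ik->i", d, model["S_cov_inv"], d)   # Mahalanobis distance
    with torch.no_grad():                                     # Step 3: reconstruction error
        rec = model["sae"](torch.tensor(Ns, dtype=torch.float32)).numpy()
    re = ((Ns - rec) ** 2).sum(axis=1)
    return detect(model["svdd"], np.column_stack([md, re]))   # Step 4: SVDD fusion
```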
3. Results and Discussion
3.1. TE Case
The TE process, developed by Eastman Chemical Company [,], is a simulated chemical process consisting of 5 units and 53 variables. The TE process includes 21 pre-defined faults, which are usually used to verify the performance of monitoring methods. Of the 53 variables, 19 are component variables whose sampling frequency is lower than that of the other variables, and one variable remains constant; these are usually not selected for process monitoring tasks. Thus, the remaining 33 process variables were selected. The process flow chart of the TE process is shown in Figure 2, and the variable information is listed in Table 1. The fault details are described in Table 2. For each fault, the same training set of 500 sampling points was employed; each test set contains 960 sampling points, with the fault introduced at the 160th sampling point.
Figure 2.
The Tennessee Eastman process flowchart.
Table 1.
Variables information of the TE process.
Table 2.
Fault information of the TE process.
Traditional principal component analysis (PCA), SSA, SAE, a variational autoencoder (VAE), nSSA-SAE (which considers only the reconstruction errors of the non-stationary subspace), and an SSA-one-class support vector machine (SSA-OCSVM) [] were compared with the proposed integrated monitoring framework. The SAE consists of three encoding and decoding layers. The ReLU activation function was applied in all hidden layers. The model was trained with the Adam optimizer using a learning rate of 0.001 and a mean squared error (MSE) loss function. The Fault Detection Rate (FDR), False Alarm Rate (FAR), and Fault Detection Time (FDT) (the index corresponding to five consecutive alarm sample points) were applied to evaluate the process monitoring performance and can be calculated by Equations (15) and (16). The results are shown in Table 3, Table 4 and Table 5 and Figure 3.
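For reference, the three indices can be computed from an alarm sequence as in the sketch below, assuming the usual ratio definitions behind Equations (15) and (16) (detected fault samples over all fault samples for FDR, and alarms during normal operation over all normal samples for FAR) together with the five-consecutive-alarm rule for FDT stated above:

```python
import numpy as np

def fdr_far_fdt(alarms, fault_start, consecutive=5):
    """alarms: boolean array over the test set; fault_start: index where the
    fault is introduced (160 in the TE case). FDT is the first index at which
    `consecutive` successive alarms occur after the fault starts."""
    normal, faulty = alarms[:fault_start], alarms[fault_start:]
    fdr = 100.0 * faulty.sum() / len(faulty)      # assumed Equation (15)
    far = 100.0 * normal.sum() / len(normal)      # assumed Equation (16)
    fdt, run = None, 0
    for i, a in enumerate(faulty):
        run = run + 1 if a else 0
        if run == consecutive:
            fdt = fault_start + i - consecutive + 1
            break
    return fdr, far, fdt
```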
Table 3.
Comparison monitoring results (FDR)% of 21 faults in the TE process.
Table 4.
Comparison monitoring results (FAR)% of 21 faults in the TE process.
Table 5.
Comparison monitoring results (FDT) of 21 faults in the TE process.
Figure 3.
Comparison of average FDR, FAR, and FDT.
Many existing studies have demonstrated that faults 3, 9, and 15 are difficult to monitor because of their tiny deviations, so they are not considered in the comparison results. It can be observed from Table 3 and Table 4 that the proposed method performed optimally for most faults, especially faults 5, 10, 16, and 19. This may be attributed to the introduction of SSA, which separates stationary and non-stationary components from the original data and makes the model more sensitive to these faults. This is also supported by the fact that the SSA-based methods (SSA, nSSA-SAE, SSA-OCSVM, and the proposed method) greatly improved the FDR compared with traditional PCA, VAE, and SAE, which supports the claim that fault information can be hidden in the non-stationary subspace. In addition, although the FAR of the proposed method is not the lowest, it remains within an acceptable range, confirming the effectiveness of the proposed method.
Faults 1 and 19 were selected as examples for further analysis. For fault 1, as shown in Figure 4, all methods proved effective in detecting the fault and promptly issued an alarm upon its occurrence. This is attributable to the relatively straightforward nature of fault 1, in which the variables deviate substantially when the fault occurs. For fault 19, as shown in Figure 5, methods such as PCA, VAE, and SAE, which do not account for process non-stationarity, performed poorly. This may stem from the fault's association with non-stationary variables, whereas these methods operate under the assumption of process stationarity. SSA, SSA-OCSVM, and nSSA-SAE, which consider only one aspect of the process information (either stationary or non-stationary), consequently exhibited lower fault detection rates. The higher detection rate of nSSA-SAE compared with SSA further underscores the necessity of incorporating non-stationary subspace information. The proposed method enables timely alarm generation upon fault occurrence and achieves the highest fault detection rate for fault 19, demonstrating its superiority over the other SSA-based and reconstruction-based monitoring methods.
Figure 4.
Comparison monitoring results of fault 1 in TEP: (a) PCA (T2), (b) PCA (Q), (c) SSA, (d) SSA−OCSVM, (e) VAE, (f) SAE, (g) nSSA−SAE, and (h) the proposed method.
Figure 5.
Comparison monitoring results of fault 19 in TEP: (a) PCA (T2), (b) PCA (Q), (c) SSA, (d) SSA−OCSVM, (e) VAE, (f) SAE, (g) nSSA−SAE, and (h) the proposed method.
3.2. Industrial Case 1
Figure 6 shows the flowchart of a catalytic reforming process at a petrochemical company, which contains four reactors, four heating furnaces, and a key piece of equipment, the plate heat exchanger. In this process, the feed rates of naphtha and circulating hydrogen are the key factors affecting the pressure drop at the hot end of the plate heat exchanger. Owing to external disturbances such as production load adjustments, process variables such as the heat exchanger hot-end pressure drop and the circulating hydrogen feed rate often show obvious non-stationary trends. In actual operation, the hot-end pressure drop of the heat exchanger frequently shows an abnormal increase, which poses difficulties for operators and makes it hard to determine accurately whether the change is caused by normal fluctuations or potential faults.
Figure 6.
Flowchart of the catalytic reforming process.
In addition, the heat exchange efficiency is affected by the increase in pressure drop, resulting in increased fuel gas consumption of the heating furnace. When the pressure drop rises to an unacceptable level, the plant must shut down, which causes huge losses to upstream and downstream production plans and factory profits. Therefore, effective monitoring of this non-stationary process is of great significance for early abnormal warning, assisting operators in taking timely intervention measures, avoiding the escalation of faults, and bringing significant economic benefits to the enterprise.
Figure 7 shows the variation in the hot-end pressure drop with the hydrogen feed rate and the total naphtha feed rate. Owing to frequent adjustments in production load, the naphtha feed rate exhibits significant fluctuations, resulting in multiple operating conditions, a characteristic feature of non-stationary processes. Under normal operating conditions, the hot-end pressure drop typically varies synchronously with both the hydrogen and naphtha feed rates, indicating a stable process relationship. However, within the red shaded region, when the naphtha feed rate remained relatively stable, a reduction in the hydrogen feed rate paradoxically caused the hot-end pressure drop to increase, suggesting a potential process malfunction. Thus, 4000 sampling points that included abnormal increases in the hot-end pressure drop were selected. The sampling interval was 1 min, and each sampling point consisted of 27 variables, including the circulating hydrogen pressure, the reactor inlet-outlet pressure difference, and the reactor outlet temperature. The detailed information is given in Table 6. The first 2500 sampling points were used as offline data to train the monitoring model, while the remaining 1500 sampling points were used as online data to validate the performance of the monitoring model. Around the 460th point of the test set, an abnormal increase in the hot-end pressure drop occurred.
Figure 7.
Trend of pressure drop as hydrogen feed rate and naphtha feed rate varied.
Table 6.
Variables information in continuous catalytic reforming process.
Figure 8.
Comparison monitoring results of industrial process 1: (a) PCA (T2), (b) PCA (Q), (c) SSA, (d) SSA−OCSVM, (e) VAE, (f) SAE, (g) nSSA−SAE, and (h) the proposed method.
Table 7.
Comparison monitoring results of industrial process 1.
As shown in Figure 8, PCA (T2) failed to detect the fault. Although PCA (Q), the SSA-based methods, VAE, and SAE could detect the fault, they also generated many false alarms when no fault had occurred, which may mislead operators and lead to incorrect operations. The nSSA-SAE method effectively reduced the probability of identifying normal non-stationary trends as faults by considering fault information in the non-stationary subspace. However, its false alarm rate was still higher than that of the proposed method. Although the FDR of the proposed integrated monitoring framework was slightly lower than that of nSSA-SAE, it significantly reduced the FAR compared with the comparison methods, demonstrating the rationality of jointly considering stationary and non-stationary information and proving the effectiveness of the proposed method. Although the FDT of some methods was lower than that of the proposed method, this is attributable to false alarms and is therefore unreliable. Consequently, the FDT is a less meaningful measure of model performance in this case.
We further analyzed the impact of the number of stationary components and the depth of the SAE on the performance of the proposed method, taking this case as an example.
Figure 9 demonstrates that the number of stationary components has little effect on the FDR, with all FDR values remaining at a high level. The FAR is minimized (below 5%) when the number of stationary components equals six (the value selected in this work), which indicates that the fault information is largely concealed within the non-stationary components and that the detection of this fault is primarily driven by the reconstruction error.
Figure 9.
Effect of the number of stationary components on FDR and FAR.
In Figure 10, the FDR for most models reached 95%, with a FAR below 5%. However, due to the inherent uncertainty in deep learning model training, it remains unclear how the number of SAE layers affects FDR and FAR in this case. Nevertheless, it is certain that higher layer counts necessitate greater computational resources for both training and inference. Consequently, the number of SAE layers should not be excessively high.
Figure 10.
Effect of the number of SAE layers on FDR and FAR.
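The kind of sensitivity analysis shown in Figures 9 and 10 can be outlined with a simple sweep such as the one below, built on the illustrative offline/online sketches and metric function above (the candidate values are assumptions, not the grid used in this work):

```python
def sweep_stationary_components(X_train, X_test, fault_start, candidates=range(2, 11)):
    """Re-train the monitoring model for each number of stationary components
    and record FDR/FAR on the test set (illustrative grid only)."""
    results = {}
    for d_s in candidates:
        model = offline_modeling(X_train, d_s=d_s)
        alarms = online_monitoring(X_test, model)
        fdr, far, _ = fdr_far_fdt(alarms, fault_start)
        results[d_s] = (fdr, far)
    return results
```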
3.3. Industrial Case 2
An industrial Dimethyl Ether/Methanol-to-Olefin (DMTO) process of a chemical company in China was investigated as the second industrial case study to demonstrate the performance of the proposed framework in monitoring industrial non-stationary processes. The DMTO process is an advanced coal-to-chemicals technology that converts methanol or dimethyl ether into light olefins, mainly ethylene and propylene, through catalytic reactions. In industrial applications, the DMTO process exhibits highly complex dynamics due to catalyst deactivation and strong nonlinear coupling among temperature, pressure, and flow rate variables. The detailed information is given in Table 8.
Table 8.
Variables information in DMTO.
A section of historical production data collected by the DCS, comprising a total of 7500 samples, was selected to validate the monitoring performance of the proposed and compared methods. The sampling interval was 15 s, and each sample contained 44 process variables, including temperature, pressure, flow rate, and variables associated with devices such as the two main reactors in the DMTO unit.
Figure 11 illustrates the four variables measuring the bottom temperature of the main reactor dense-phase bed. Before about the 1070th point (the red line), these variables followed a similar trend; at that point, an unknown offset occurred in 1190TI1134E, which suggests that a fault occurred.
Figure 11.
Trend of the main reactor dense phase bed bottom temperature.
Figure 12 illustrates the performance of the proposed method and the comparative approaches in monitoring the DMTO process. As shown, PCA (T2) and SAE exhibited delayed alarms and failed to sustain them after the fault occurred, owing to their insensitivity to non-stationary behavior. Because the VAE assumes a normal distribution in its latent space, it may not be suitable for resampling non-stationary variables, causing it to alarm over almost the entire test set. Meanwhile, PCA (Q) and nSSA-SAE, the latter of which reconstructs only the non-stationary components, generated a number of false alarms before the fault occurred, resulting in higher false alarm rates; SSA-OCSVM exhibited similar behavior. SSA and the proposed method showed similar performance, but SSA also produced a relatively high false alarm rate, which may lead to misjudgment by operators. As shown in Table 9, although the FDR of the proposed method was not the highest, it still reached 99.47%, which meets industrial practice requirements, and its FAR was only 1.31%. Overall, the proposed method demonstrated the best performance in the DMTO process.
Figure 12.
Comparison monitoring results of industrial process 2: (a) PCA (T2), (b) PCA (Q), (c) SSA, (d) SSA-OCSVM, (e) VAE, (f) SAE, (g) nSSA-SAE, and (h) the proposed method.
Table 9.
Comparison monitoring results of industrial process 2.
4. Conclusions
This work tackles the challenge of non-stationary process monitoring by proposing a novel joint monitoring framework that integrates stationary subspace analysis (SSA) with stacked autoencoders (SAE). The proposed method effectively captures both stationary and non-stationary features, enabling the identification of latent fault information and improving fault detection performance under non-stationary conditions. In practical production scenarios, it can successfully differentiate between normal process variations and abnormal behaviors, providing timely fault warnings and significantly reducing the false alarm probability. It can also detect minor abnormalities early, helping to avoid accidents that could result in economic losses or even casualties. Experimental results demonstrate that the proposed approach outperforms conventional process monitoring methods in terms of detection accuracy and robustness.
In addition, the method can be adapted to different processes by adjusting the weight coefficients of the stationary-subspace monitoring statistic and the non-stationary reconstruction-error statistic. To account for process dynamics, time lags could be introduced into the model; this will be studied in future work.
Author Contributions
Conceptualization, J.R. and C.J.; methodology, J.R. and C.J.; software, J.R., C.J.; validation, J.R.; formal analysis, J.R.; investigation, J.R.; resources, J.R., C.J.; data curation, J.R., J.W., W.S.; writing—original draft preparation, J.R.; writing—review and editing, C.J., J.W., W.S.; visualization, J.R.; supervision, J.W., W.S.; project administration, J.W., W.S.; funding acquisition, J.W., W.S. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the National Natural Science Foundation of China (grant number 22278018).
Data Availability Statement
The TEP data can be accessed from Ref. []. The real industrial case data are from a company and are confidential.
Acknowledgments
The authors acknowledge the support received from the foundation.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Ji, C.; Sun, W. A review on data-driven process monitoring methods: Characterization and mining of industrial data. Processes 2022, 10, 335.
- Qin, S.J. Survey on data-driven industrial process monitoring and diagnosis. Annu. Rev. Control 2012, 36, 220–234.
- Wang, J.; He, Q.P. Multivariate statistical process monitoring based on statistics pattern analysis. Ind. Eng. Chem. Res. 2010, 49, 7858–7869.
- Kano, M.; Hasebe, S.; Hashimoto, I.; Ohno, H. A new multivariate statistical process monitoring method using principal component analysis. Comput. Chem. Eng. 2001, 25, 1103–1113.
- Li, G.; Qin, S.J.; Zhou, D. Geometric properties of partial least squares for process monitoring. Automatica 2010, 46, 204–210.
- Scott, D.; Shang, C.; Huang, B.; Huang, D. A holistic probabilistic framework for monitoring nonstationary dynamic industrial processes. IEEE Trans. Control Syst. Technol. 2020, 29, 2239–2246.
- Yu, Z.; Wang, G.; Jiang, Q.; Yan, X.; Cao, Z. Enhanced variational autoencoder with continual learning capability for multimode process monitoring. Control Eng. Pract. 2025, 156, 106219.
- Ji, C.; Ma, F.; Wang, J.; Sun, W. Early Identification of Abnormal Deviations in Nonstationary Processes by Removing Non-Stationarity. Comput. Aided Chem. Eng. 2022, 49, 1393–1398.
- Rao, J.; Ji, C.; Wang, J.; Sun, W.; Romagnoli, J.A. High-Order Nonstationary Feature Extraction for Industrial Process Monitoring Based on Multicointegration Analysis. Ind. Eng. Chem. Res. 2024, 63, 9489–9503.
- Chen, Q.; Kruger, U.; Leung, A.Y. Cointegration testing method for monitoring nonstationary processes. Ind. Eng. Chem. Res. 2009, 48, 3533–3543.
- Engle, R.F.; Granger, C.W. Co-integration and error correction: Representation, estimation, and testing. Econometrica 1987, 55, 251–276.
- Granger, C.W. Some properties of time series data and their use in econometric model specification. J. Econom. 1981, 16, 121–130.
- Zhao, C.; Sun, H.; Tian, F. Total variable decomposition based on sparse cointegration analysis for distributed monitoring of nonstationary industrial processes. IEEE Trans. Control Syst. Technol. 2019, 28, 1542–1549.
- Hu, Y.; Zhao, C. Fault diagnosis with dual cointegration analysis of common and specific nonstationary fault variations. IEEE Trans. Autom. Sci. Eng. 2019, 17, 237–247.
- Von Bünau, P.; Meinecke, F.C.; Király, F.C.; Müller, K.R. Finding stationary subspaces in multivariate time series. Phys. Rev. Lett. 2009, 103, 214101.
- Wu, D.; Sheng, L.; Zhou, D.; Chen, M. Dynamic stationary subspace analysis for monitoring nonstationary dynamic processes. Ind. Eng. Chem. Res. 2020, 59, 20787–20797.
- Chen, J.; Zhao, C. Exponential stationary subspace analysis for stationary feature analytics and adaptive nonstationary process monitoring. IEEE Trans. Ind. Inform. 2021, 17, 8345–8356.
- Li, P.; Pei, Y.; Li, J. A comprehensive survey on design and application of autoencoder in deep learning. Appl. Soft Comput. 2023, 138, 110176.
- Liu, G.; Bao, H.; Han, B. A stacked autoencoder-based deep neural network for achieving gearbox fault diagnosis. Math. Probl. Eng. 2018, 2018, 5105709.
- Tax, D.M.; Duin, R.P. Support vector data description. Mach. Learn. 2004, 54, 45–66.
- Yin, S.; Ding, S.X.; Haghani, A.; Hao, H.; Zhang, P. A comparison study of basic data-driven fault diagnosis and process monitoring methods on the benchmark Tennessee Eastman process. J. Process Control 2012, 22, 1567–1581.
- Downs, J.J.; Vogel, E.F. A plant-wide industrial process control problem. Comput. Chem. Eng. 1993, 17, 245–255.
- Di, J.; Rao, J.; Ji, C.; Wang, J.; Sun, W. Constant component separation for nonlinear time-varying process monitoring based on stationary subspace analysis and one-class SVM. Can. J. Chem. Eng. 2025.