Data Driven Detection of Different Dissolved Oxygen Sensor Faults for Improving Operation of the WWTP Control System

Sensor faults frequently occur in wastewater treatment plant (WWTP) operation, leading to incomplete monitoring or poor control of the plant. Reliable operation of the WWTP considerably depends on the aeration control system, which is essentially assisted by the dissolved oxygen (DO) sensor. The detection of different DO sensor faults, such as bias, drift, wrong gain, loss of accuracy, fixed value, or complete failure, was investigated based on Principal Component Analysis (PCA). The PCA was considered together with two statistical approaches, i.e., Hotelling's T² and the Squared Prediction Error (SPE). Data used in the study were generated using the previously calibrated first-principle Activated Sludge Model no. 1 for the Anaerobic-Anoxic-Oxic (A2O) reactor configuration. The equation-based model was complemented with control loops for DO concentration control in the aerobic reactor and nitrate concentration control in the anoxic reactor. The PCA data-driven model was successfully used for the detection of the six investigated DO sensor faults. The statistical detection approaches were compared in terms of promptness, effectiveness, and accuracy. The obtained results revealed the way faults originating from DO sensor malfunction can be detected, and the efficiency of the detection approaches for the automatically controlled WWTP.


Introduction
Clean water availability has become one of the ten major problems of humanity, as water demand driven by population growth, urbanization, and industrialization, together with environmental concerns associated with wastewater-treatment technologies, has been considerably amplified during the last decade. Domestic, industrial, agricultural, mining, and commercial sources of wastewater require efficient removal of organic or inorganic pollutants, present in solid or suspended forms.
Taking into account the increasing demands of European and international regulations regarding the standards of wastewater treatment and required water quality characteristics, the wastewater treatment plant (WWTP) becomes the most important actor, but its operation is difficult and challenging. Detailed descriptions of wastewater treatment processes and their associated models are very complex, show non-linear behavior, and are characterized by many continuously changing variables. Besides setting the structure and finding accurate parameter values for the model, the most demanding difficulties for WWTP modeling and control of operation are the daily, weekly, or seasonal changes in influent composition. Furthermore, comparison of control strategies operating in different plants is complex and requires building simulation benchmarks [1].
The International Water Association (IWA), as a professional organization, is known for its extensive contributions regarding solutions to worldwide water challenges, following its assumed mission to advance standards for sustainable water management. The WWTP operation efficiency can be dramatically affected by erroneous information provided by the DO sensor, leading to the deterioration of the expected control functioning, with negative consequences such as degraded effluent quality, increased energy consumption, reduced environmental sustainability performance, or even temporary plant shutdown. Few studies have addressed the issue of DO sensor faults. One study developed a PCA method for detecting three different fault types [25], one approached a failure-type fault of the DO sensor and of the level sensor [14], another paper investigated the wrong output signal originating from the sensor [26], and a study used PCA to detect deviations due to sensor clogging [27]. Other studies focused on implementing several detection methods using set-membership identification and radial-basis neural networks [28], binary classifiers [29], or impulse-response-based methods [30].
The purpose of the present study was to investigate and propose new detection solutions for specific faults that occur due to defective DO sensors, such as bias, drift, wrong gain, loss of accuracy, fixed value, or complete failure. The novelty and innovation of this work consist in comparatively revealing their occurrence and effects, and in highlighting the reliable improvements brought to the WWTP control system by PCA-based models aiming to detect different types of DO sensor faults.
The paper is organized to present (i) the first-principle calibrated wastewater treatment plant model used for carrying out dynamic simulations, followed by (ii) the core theoretical aspects of the PCA fault-identification approach, (iii) the six different types of investigated DO sensor faults, in association with their software implementation, (iv) the results and discussion of the performance obtained by the proposed PCA fault-identification methodology, and (v) the conclusions of the investigations for improving the operation of the municipal wastewater treatment plant used as a case study.

Process Model
The municipal WWTP investigated in this study was characterized by an anaerobic-anoxic-aerobic (A2O) configuration. After wastewater enters the WWTP, the influent is subject to mechanical filtration and sand and fat separation, followed by primary sedimentation. The wastewater leaves the primary clarifier and reaches the biodegradation tanks, where the activated sludge technology performs the carbon, nitrogen, and phosphorus removal. The biodegradation basins were divided into three different zones: anaerobic, anoxic, and aerobic. The growth of phosphorus-accumulating microorganisms takes place in the anaerobic zone, while the reduction of the nitrates and nitrites (NO) formed in the aerobic reactors takes place in the anoxic zone [31,32]. Physical separation is performed in the secondary settler, where the sludge is separated from the semiliquid mixture. The treated water is discharged to the emissary river. Part of the sludge from the secondary clarifier is sent to the anaerobic digestion unit, while another part is returned to the anaerobic biodegradation tank; the latter stream forms the external recycle. An additional recycle stream, denoted as the nitrate or internal recycle, is used to return nitrates from the aerobic reactor into the anoxic zone in order to perform the denitrification step [33]. Tables 1 and 2 present the principal design and operating parameters of the municipal plant. BSM1 and ASM1 represent the foundation of the WWTP model developed and studied in the present paper. The necessary modifications were applied to the original BSM1 model in order to comply with the A2O configuration of the investigated municipal WWTP. The model of the WWTP consists of a set of differential and algebraic equations that describe each of the main structural units: primary settler [34], anaerobic bioreactor, anoxic bioreactor, three aerobic bioreactors [3], and secondary settler [35].
Data for dry weather were collected from municipal WWTP measurements and underwent a process of reconciliation. The modified BSM1 model was updated with the plant-collected data and calibrated accordingly. For this study, a previously developed and calibrated MATLAB model of the municipal WWTP, based on ASM1, BSM1, and MATLAB & Simulink™ software, was used to conduct dynamic-state simulations of both normal and abnormal DO sensor operation [36]. The calibration of the model parameters was performed based on optimization. The mathematical model was implemented in the C++ programming language and compiled as MATLAB executable files, in order to gain simulation speed for the Simulink S-functions and to spare computational resources.
The importance of automatic control is largely recognized, both for stable and safe WWTP operation and for counteracting pollution propagation in the emissary river [37,38]. The synergy created by automatic control and optimization is highly valuable [39,40]. Two automatic control structures were used in this study: a dissolved oxygen control loop and a nitrates and nitrites concentration control loop. The first loop controlled the DO concentration in the third aerated bioreactor, while the second loop controlled the nitrates and nitrites concentration in the anoxic bioreactor. A DO setpoint value of 2 mg O2/L in the last aerated reactor was imposed for the DO concentration control loop [41]. In order to change the air flow rate entering the aerated reactors, the DO controller computed the control signal that manipulated the air control valves. The air flow rate computed by the DO controller was unequally distributed over the sequence of the three aerobic reactors, using the following weighting factors: 0.9 for aerobic reactor three, 0.5 for aerobic reactor four, and 0.3 for aerobic reactor five. This distribution scheme was established on the basis of the oxygen requirements of the nitrification bioreactors. For the second control loop, the nitrates and nitrites (NO) concentration in the second (anoxic) reactor was controlled by manipulating the nitrate recycle flow rate. A setpoint value of 0.01 mg N/L was used for this control loop. Proportional-Integral (PI) controllers were considered for the two feedback control loops. The control system design is presented in Figure 1.
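The control structure described above can be sketched as a discrete PI law whose output is split unequally among the three aerobic reactors. This is a minimal illustration, not the plant implementation: the PI gains, the sampling step, and the normalization of the split are assumptions; only the 0.9/0.5/0.3 weighting factors and the 2 mg O2/L setpoint come from the text.

```python
import numpy as np

# Hypothetical PI gains and a 15-min step, chosen for illustration only.
KP, KI = 500.0, 50.0
DT = 15.0 / (24 * 60)                 # step expressed in days
WEIGHTS = np.array([0.9, 0.5, 0.3])   # aerobic reactors three, four, five

def pi_step(setpoint, measured, integral):
    """One step of a discrete PI law; returns (total air flow, updated integral)."""
    error = setpoint - measured
    integral += error * DT
    u = KP * error + KI * integral
    return max(u, 0.0), integral      # air flow rate cannot be negative

def split_air_flow(total):
    """Distribute the controller output over the three aerobic reactors.
    Normalized here so the split sums to the total flow (an assumption;
    the text applies the factors directly)."""
    return WEIGHTS / WEIGHTS.sum() * total

# DO setpoint 2 mg O2/L, a measured value of 1.6 mg O2/L
flow, integ = pi_step(2.0, 1.6, 0.0)
per_reactor = split_air_flow(flow)
```

With these toy gains, reactor three always receives the largest share of the computed air flow, mirroring the oxygen-demand ranking stated in the text.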
Dynamic-state simulations for both normal and abnormal operation (i.e., caused by faulty DO sensor operation) were carried out for each type of investigated fault, and the data were subsequently processed for fault detection.


Principal Component Analysis
PCA is a data-mining methodology relying on a process model built with ordinary process data and used in statistical process monitoring (SPM). PCA models are principally used to identify from raw data the correlation between process variables. The method reduces the large model training data set by linearly transforming it into a smaller one that still contains the key information. The resulting data set consists of a score matrix and a loadings matrix that include information revealed by the reduced set of variables [42].
Consider the data matrix X ∈ ℝ^{m×n}, where m represents the number of samples and n the number of process variables, collected during normal operation. The matrix X is normalized to the zero-mean and unit-variance matrix X̃. To reduce the original space dimension, the matrix X̃ is decomposed into a score matrix T and a loading matrix P. The expression of this decomposition is:

X̃ = T P^T + E = \sum_{i=1}^{k} t_i p_i^T + E,

where t_i is a score vector, p_i is a loading vector, with i = 1, 2, 3, ..., k, and E is the residual matrix. The number of selected principal components, k, is calculated using the cumulative percent variance (CPV). It measures the percent of variance encapsulated by the first k principal components [43]. It can be computed using the equation:

CPV(k) = \left( \sum_{j=1}^{k} \lambda_j \Big/ \sum_{j=1}^{n} \lambda_j \right) \times 100\%,

where λ_j is the j-th eigenvalue of the covariance matrix C:

C = \frac{1}{m-1} X̃^T X̃ = V S V^T,

where the columns of the matrix V are the eigenvectors of the covariance matrix C and S is the diagonal matrix that contains the eigenvalues of C, arranged in decreasing order of magnitude. The original, normalized data space can be reconstructed according to the following relationship:

X̃ = T P^T + E = X̂ + E.
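The model construction above can be sketched numerically: normalize X, eigendecompose the covariance matrix C = V S V^T, and pick k from the CPV. The sketch below uses synthetic data with three strong latent directions; the data, seed, and the 98% CPV threshold default are illustrative assumptions.

```python
import numpy as np

def fit_pca(X, cpv_threshold=98.0):
    """Fit a PCA monitoring model: returns training stats, loadings P,
    scores T, eigenvalues in decreasing order, and the retained count k."""
    mu, sigma = X.mean(axis=0), X.std(axis=0, ddof=1)
    Xn = (X - mu) / sigma                      # zero-mean, unit-variance X~
    C = Xn.T @ Xn / (X.shape[0] - 1)           # covariance matrix
    eigvals, V = np.linalg.eigh(C)             # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]          # re-sort to decreasing magnitude
    eigvals, V = eigvals[order], V[:, order]
    cpv = 100.0 * np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cpv, cpv_threshold) + 1)  # first k reaching the CPV
    P = V[:, :k]                               # loading matrix
    T = Xn @ P                                 # score matrix
    return mu, sigma, P, T, eigvals, k

# synthetic "normal operation" data: 3 latent directions plus small noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))
X = latent @ rng.normal(size=(3, 10)) + 0.05 * rng.normal(size=(500, 10))
mu, sigma, P, T, eigvals, k = fit_pca(X)
```

With three dominant latent directions, the CPV criterion retains a small number of components, analogous to the k = 8 selection reported later for the plant data.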

Hotelling's T 2 Statistic
In order to measure variations in the principal component subspace (PCS), a multivariate version of Student's t-test, the Hotelling's T² value [44], can be calculated for a sample vector x as:

T^2 = x^T P S_k^{-1} P^T x,

where S_k is the matrix formed by the first k lines and columns of the matrix S. T² is a measure of the squared distance from the sample (observation) point to the origin of the principal-components model space.

For a process to be considered normal, it is necessary that the calculated T² be less than or equal to the threshold value T²_α:

T^2_\alpha = \frac{k(m-1)}{m-k} F_\alpha(k, m-k),

where F_α(k, m−k) is the critical value of the F-distribution, with α being the confidence level, usually taking values between 90% and 99%, and k and m−k being the degrees of freedom.
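A minimal numerical sketch of the T² statistic and its threshold follows. The F-distribution critical value F_α(k, m−k) is passed in as a number (e.g., from a statistics table or `scipy.stats.f.ppf`) to keep the sketch dependency-free; the trivial loadings in the toy check are an illustrative assumption.

```python
import numpy as np

def hotelling_t2(x, P, eigvals_k):
    """T^2 for a normalized sample x: project onto the k retained loadings P
    and weight the squared scores by the inverse retained eigenvalues
    (the diagonal of S_k)."""
    t = P.T @ x                                   # score vector of the sample
    return float(np.sum(t ** 2 / eigvals_k))

def t2_threshold(k, m, f_crit):
    """T^2_alpha = k(m-1)/(m-k) * F_alpha(k, m-k)."""
    return k * (m - 1) / (m - k) * f_crit

# toy check: trivial 2-component loadings in a 3-variable space
P = np.eye(3)[:, :2]
stat = hotelling_t2(np.array([0.1, -0.2, 0.0]), P, np.array([1.0, 0.5]))
limit = t2_threshold(2, 100, 3.09)   # 3.09 stands in for F_alpha(2, 98)
```

An in-model sample like the one above yields a small T² well below the threshold; only excursions within the PCS inflate it.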

Squared Prediction Error Statistics
Variations in the residual subspace (RS) are measured using the Squared Prediction Error, also known as the Q-Statistic [44], as the sum of the squares of the residuals:

SPE = \lVert r \rVert^2 = r^T r,

where r is the residual vector:

r = (I - P P^T) x,

with I as the identity matrix, for a sample vector x. SPE is a measure of the distance from the sample point to the model plane of the principal components. The threshold for SPE can be determined by:

SPE_\alpha = \theta_1 \left[ \frac{C_\alpha \sqrt{2 \theta_2 h_0^2}}{\theta_1} + 1 + \frac{\theta_2 h_0 (h_0 - 1)}{\theta_1^2} \right]^{1/h_0},

with C_α the standard normal deviate value for the 1−α percentile and where:

\theta_i = \sum_{j=k+1}^{n} \lambda_j^i, \quad i = 1, 2, 3, \qquad h_0 = 1 - \frac{2 \theta_1 \theta_3}{3 \theta_2^2}.
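The SPE statistic and its threshold can be sketched directly from the expressions above. C_α is passed in as a plain number (e.g., 2.326 for α = 99%); the eigenvalues in the toy call are illustrative assumptions, not plant values.

```python
import numpy as np

def spe(x, P):
    """SPE (Q statistic) for a normalized sample x: squared norm of the
    residual r = (I - P P^T) x."""
    r = x - P @ (P.T @ x)
    return float(r @ r)

def spe_threshold(eigvals, k, c_alpha):
    """Jackson-Mudholkar threshold built from theta_i = sum_{j>k} lambda_j^i."""
    lam = eigvals[k:]                       # eigenvalues of the residual subspace
    th1, th2, th3 = lam.sum(), (lam ** 2).sum(), (lam ** 3).sum()
    h0 = 1.0 - 2.0 * th1 * th3 / (3.0 * th2 ** 2)
    return th1 * (c_alpha * np.sqrt(2.0 * th2 * h0 ** 2) / th1
                  + 1.0 + th2 * h0 * (h0 - 1.0) / th1 ** 2) ** (1.0 / h0)

# toy check: residual energy lands entirely on the discarded third axis
P = np.eye(3)[:, :2]
q = spe(np.array([0.1, -0.2, 0.3]), P)
limit = spe_threshold(np.array([2.0, 1.0, 0.2, 0.1]), k=2, c_alpha=2.326)
```

Because the retained loadings span the first two axes, only the third component of the sample contributes to q.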

Sensors Faults
Maintaining the required standards for water quality and efficiency of the wastewater treatment process may be ensured when abnormal operating circumstances are promptly identified and corrected. Many of the problems that occur in WWTPs originate from faulty sensors. Sensor faults are caused by several issues such as questionable-quality signal [45], poor connection, or sensor failure [2].
Processes 2021, 9, 1633
Considering the major role of the dissolved oxygen sensor in the overall WWTP operation, the identification of several types of DO sensor faults was investigated in the present study. The six considered types of sensor faults are defined and described as follows:
• Bias, also known as a shift or an offset of the sensor's generated signal values, occurs when the sensor is miscalibrated and the delivered value is shifted with respect to the true one [46];
• Drift appears as a deviation from the real value that varies in time [46];
• Wrong gain, also known as a calibration error, occurs when the slope of the sensor is inaccurately established in the calibration step [47];
• Loss of accuracy occurs when the sensor exhibits a value characterized by imprecision around the true value, which is often erroneously qualified as the true value of the measurement [46];
• Fixed value occurs when the sensor always displays a constant value [47];
• Complete failure (minimum or maximum) occurs when the measured value is either the minimum or the maximum calibration value of the sensor [47].

The previously presented faults are graphically exemplified in Figure 2, which shows the comparison between the normal (true) and the faulty sensor signals as they evolve in time.


Methodology of Faults Implementation
Dedicated software applications were built in order to reproduce the real behavior of the six fault types considered for the DO concentration sensor, and they were implemented in the WWTP dynamic model. For each fault type, the generated faulty value was sent as the measured process variable to the Proportional-Integral DO controller. The DO control loop manipulated the air flow rate accordingly, so as to bring the measured DO to the desired setpoint of 2 mg O2/L with zero offset.
The bias fault was characterized by a fixed difference between the true value and the faulty output value of the sensor. For this fault type, the measured DO concentration was considered to show a bias of +1.5 mg O2/L. The drift was simulated as an increasing bias, described by a ramp signal varying in time that was added to the true DO value. The constant value of 0.05 mg O2/L (i.e., 2.5% of the 2 mg O2/L nominal setpoint value of the DO control loop) was integrated in time and added to the true DO value, in order to mimic the drift.
The wrong gain fault was simulated by the loss of effectiveness of the sensor due to a gain factor different to 1. It consisted of a wrong calibration gain of 1.4, that led to an inadequate correlation factor between the input and output of the sensor. The wrong gain was smoothly introduced in time, according to a first-order filter with the time constant of 0.3 days.
The loss of accuracy fault was implemented using a uniform random-number generator that produced a value in the interval [−2.5,2.5], with a sampling time of 0.1 days. This signal was sent to a first-order filter with the time constant of 0.01 days and then passed to a saturation block that limited its output value to the positive interval [0,6]. This random signal was then added to the DO true value to produce the faulty measured process variable.
The fixed-value fault was characterized by a constant value of the sensor output signal, irrespective of the changes of the true DO process variable value. For this study, a constant value of 2.2 mg O 2 /L was considered (i.e., 10% higher than the 2 mg O 2 /L nominal setpoint value of the DO control loop).
The complete failure of a sensor was again characterized by constant values of the sensor output signal, but they were the minimum or the maximum values of the sensor calibration interval. The two extreme values of 0.1 mg O 2 /L and of 6 mg O 2 /L were chosen for the complete failure minimum and maximum alternatives.
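The six fault models described above can be sketched as simple transformations of a true DO signal sampled every 15 min. The magnitudes follow the text (+1.5 mg O2/L bias, a 0.05 mg O2/L integrated drift rate, gain 1.4, fixed value 2.2 mg O2/L, failure extremes 0.1 and 6 mg O2/L); the first-order filters of the wrong-gain and loss-of-accuracy faults are simplified out, and the saturation is applied here to the faulty measurement rather than to the filtered noise, which are assumptions of this sketch.

```python
import numpy as np

DT = 15.0 / (24 * 60)   # 15-min sampling time, in days

def bias(do_true):
    return do_true + 1.5                       # fixed +1.5 mg O2/L offset

def drift(do_true, rate=0.05):
    t = np.arange(len(do_true)) * DT
    return do_true + rate * t                  # integrated constant = time ramp

def wrong_gain(do_true, gain=1.4):
    return gain * do_true                      # wrong calibration slope

def loss_of_accuracy(do_true, rng):
    noise = rng.uniform(-2.5, 2.5, size=len(do_true))
    return np.clip(do_true + noise, 0.0, 6.0)  # bounded to the [0, 6] range

def fixed_value(do_true, value=2.2):
    return np.full_like(do_true, value)        # constant output, 10% above setpoint

def complete_failure(do_true, high=True):
    return np.full_like(do_true, 6.0 if high else 0.1)
```

Feeding any of these faulty signals back to the PI controller, in place of the true DO, reproduces the closed-loop fault scenarios used for data generation.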
Statistical theory is applied to process monitoring and relies on the assumption that characteristics of the data variations change only when an abnormal condition (fault) occurs in the process [48]. The failure of a single sensor at a time was also assumed.

Normal and Abnormal Data Sets
In order to generate the necessary data sets, seven different simulations were built: one for normal operation and six for faulty operation. The plant model was simulated for 168 days of operation, in the WWTP configuration with the DO and NO control loops. The faults were introduced starting on the 140th day of the simulation scenarios. The first 100 days of normal operation were used to bring the plant to its regular state. Data obtained by simulation during the next 40 days (i.e., days 100 to 139) of normal WWTP operation were collected with a sampling time of 15 min and used to construct the data matrix X of the PCA model. Data generated in the time period between the 140th and the 160th day of abnormal (DO sensor fault) operation were used to test the fault detection capability of the PCA model.
The set of twenty WWTP process variables considered for building the PCA model consisted of secondary settler concentration and flow rate variables of the clean water effluent (6 variables), and sludge concentration and flow rate variables of the bottom effluent (14 variables), including temperature. The variables taken into consideration were: total nitrogen (N total ), total Kjeldahl nitrogen (TKN), chemical oxygen demand (COD), nitrate and nitrite nitrogen (S NO ), free and saline ammonia (S NH ), total suspended solids (TSS), slowly biodegradable substrate (X S ), heterotrophic biomass (X B,H ), autotrophic biomass (X B,A ), inert particulate products (X P ), particulate biodegradable organic nitrogen (X ND ), soluble biodegradable organic nitrogen (S ND ), dissolved oxygen concentration (S O ), readily biodegradable substrate (S S ), alkalinity (S alk ), waste flow rate (Q W ), and temperature (T). The first six variables of the presented set were considered for the clean effluent and the last fourteen for its bottom. Inert suspended organic matter (X I ) and inert soluble organic matter (S I ) concentration variables were excluded, as they would not add significant information to the study [49].

PCA Model Construction
A set of 3840 observations from the normal operating period of 40 days was used in order to set up the PCA model. The obtained data matrix X of dimensions 3840 × 20 represented the training matrix. The scores matrix T and the loadings matrix P were obtained after scaling the data matrix. A threshold value of 98% was chosen for CPV_k. As a result, the appropriate number of principal components k that led to a good representation of the whole data set was considered to be 8. This selection is based on the CPV_k values presented in Table 3. The number of chosen principal components was confirmed by a Scree test [50,51]. The scree plot presented in Figure 3 shows that the selected number of 8 eigenvalues was comprehensive and properly captured the variability of the process. Based on this selection, the covariance matrix C and the diagonal matrix S were determined. Using a 99% confidence level, a threshold value of 20.16 was determined for T²_α and a threshold of 1.26 for SPE_α. Any vector x of measured variables showing a value of either T² or SPE higher than the corresponding threshold indicated abnormal sensor operation.
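The per-sample decision rule described above can be sketched as follows: normalize a new measurement vector with the training statistics, compute both statistics, and flag the sample when either exceeds its threshold (20.16 for T²_α and 1.26 for SPE_α in this study). The toy loadings and eigenvalues below are illustrative assumptions standing in for the trained 20-variable, 8-component model.

```python
import numpy as np

T2_LIMIT, SPE_LIMIT = 20.16, 1.26   # study thresholds at 99% confidence

def monitor(x_raw, mu, sigma, P, eigvals_k,
            t2_limit=T2_LIMIT, spe_limit=SPE_LIMIT):
    """Return (faulty?, T^2, SPE) for one raw measurement vector."""
    x = (x_raw - mu) / sigma              # normalize with training mean/std
    t = P.T @ x                           # scores in the PCS
    t2 = float(np.sum(t ** 2 / eigvals_k))
    r = x - P @ t                         # residual in the RS
    q = float(r @ r)
    return (t2 > t2_limit or q > spe_limit), t2, q

# toy model: 4 variables, 2 retained components
mu, sigma = np.zeros(4), np.ones(4)
P, eigvals_k = np.eye(4)[:, :2], np.ones(2)
ok_flag, t2_ok, q_ok = monitor(np.array([0.1, 0.1, 0.1, 0.1]), mu, sigma, P, eigvals_k)
bad_flag, t2_bad, q_bad = monitor(np.array([0.0, 0.0, 5.0, 5.0]), mu, sigma, P, eigvals_k)
```

The second sample places all its energy in the residual subspace, so it is caught by the SPE limit even though its T² stays small, mirroring the behavior reported below for the drift fault.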

Fault Detection
Seven testing data matrices were constructed with the vectors of measured variables from abnormal operation: one for each of the six fault types, except for the complete-failure fault type, for which two matrices were built, one for the minimum and one for the maximum faulty DO sensor value. A number of 1920 samples, from simulation day 140 up to day 160, was used for each testing data matrix. After determining the values of T² and SPE for each of the sample vectors of the testing matrices, the obtained values were compared with the previously computed thresholds in order to determine faulty sensor operation. Graphic representations of the T² and SPE values for each of the normal training and faulty testing samples, together with the T²_α and SPE_α thresholds, clearly highlight the presence of the DO sensor fault.
Firstly, the training data set was tested with the proposed PCA model. As may be observed in Figure 4, all of the T² and SPE values from this data set were confirmed to belong to normal operation (fault-free values). Additionally, the T² plot reveals that the difference between the normal operation values and their associated threshold was larger than the difference shown by the SPE plot. This means that in the normal operation mode, the most relevant data are provided by the SPE statistic.

The fault detection results are presented in Figures 5-11, for each type of investigated fault. Results show that WWTP DO sensor faults were successfully detected by the proposed PCA approach. For the majority of the DO sensor fault types, the statistic thresholds were overridden soon after the DO sensor fault acted. This is presented in the detailed parts of Figures 5-11. Moreover, the T² and SPE plots revealed high overshooting values of the statistic thresholds during the faulty sensor operation. Figures 5, 10 and 11 show that bias and complete failure, either maximum or minimum, were the types of sensor faults that were most promptly discovered. The fixed-value sensor fault type, presented in Figure 9, closely followed in this ranking, with a detection time of about 1.5 h. The wrong-gain fault detection time was roughly double that of the previous ones, as may be seen in Figure 7. Figure 8 depicts the loss-of-accuracy fault, which was detected in a slightly longer time of 4 h, although this depends on the random amplitude of the fault component value.
The fault type requiring the longest time to be uncovered was the sensor drift type of fault. This may be observed from Figure 6 and it revealed a detection time of about 5.5 or 8.75 h. A longer detection time for the drift sensor fault was expected, as the faulty signal grows slowly in time and it passes over the statistical threshold only when its amplitude becomes significant.
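Detection times like the ones reported above can be read off a statistic trace as the first sample after the fault onset that exceeds the threshold; with the 15-min sampling time of this study, each sample accounts for 0.25 h. The drift-like synthetic trace below is an illustrative assumption, not plant data.

```python
import numpy as np

def detection_delay_hours(stat, threshold, onset_idx, dt_hours=0.25):
    """Hours from fault onset to the first threshold crossing (None if never)."""
    over = np.flatnonzero(stat[onset_idx:] > threshold)
    if over.size == 0:
        return None
    return float(over[0] * dt_hours)

# synthetic drift-like statistic: flat at 0.5, then ramping after sample 100;
# it first exceeds the 1.26 SPE threshold 20 samples (5.0 h) after the onset
stat = np.concatenate([np.full(100, 0.5), 0.5 + 0.04 * np.arange(200)])
delay = detection_delay_hours(stat, threshold=1.26, onset_idx=100)
```

Slowly growing faults such as drift accumulate amplitude before crossing the threshold, which is why their delays dominate the ranking discussed above.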
Compared to previously reported results, which typically investigated only one specific type of DO sensor fault, the present study carried out a more comprehensive investigation of six sensor fault types. As an exception, one study developed PCA-based detection for three fault types, i.e., drift, bias, and precision degradation (loss of accuracy) [25], while another investigated the failure of the DO sensor and the level sensor [11].
In the present study, for the majority of the investigated DO sensor fault types, the Hotelling's T² statistic and the Squared Prediction Error statistic performed the detection with the same efficiency. However, the results of the present work showed that the SPE detection method proved more effective than the T² statistic for the drift type of fault. Previous results showed that faults could be better detected with SPE statistics because fault-generated changes were mainly identified in the RS, rather than in the PCS [11]. The results of the present work are also in agreement with another previous study that investigated the detection time for four different faults, out of which only one was related to the DO sensor located in the aerobic reactor [26]. However, the present study revealed that for this specific fault type, the PCA methodology detected the fault within a few hours, while in a prior study the drift fault of a clogging DO sensor was only detected by different PCA algorithms after more than one day [27].
The PCA investigation revealed the most important process variables that contribute, in decreasing order, to the first four Principal Components, which explain 93.18% of the total variance. They are, for Principal Component 1: N total and S NO_bottom ; for Principal Component 2: X ND and X S ; for Principal Component 3: T and TKN; and for Principal Component 4: COD and S O .

Conclusions
In this study, a PCA-based sensor fault detection method was proposed and its performance was tested, for the DO sensor of the WWTP used as a case study. The DO sensor is part of the control loop designed to provide efficiency to the nitrification process. Keeping the DO value in the aerated reactors at the desired setpoint was achieved by manipulating the air flow rate, and aeration was of primary importance for WWTP energy consumption.
The detection algorithm and its applicability to six different DO sensor fault types were individually investigated. The considered sensor fault types were: bias, drift, wrong gain, loss of accuracy, fixed value, and complete failure. A calibrated WWTP model was used to generate the data sets corresponding to normal and faulty sensor operation. Both Hotelling's T² and SPE statistical tests performed successfully in identifying the presence of the fault, yet in some cases the latter had a faster detection ability. Bias, fixed-value, and complete-failure faults were detected promptly, showing less than 2 h detection time following the DO sensor fault occurrence. The loss-of-accuracy detection time depended on the random component of the fault, implying a longer time of about 3-4 h. The detection of the drift and wrong-gain faults took longer than that of the other faults, up to 8-9 h, but strongly depended on the additive or multiplicative magnitude of these fault types.
The PCA fault detection algorithm can be implemented within the software of the Supervisory Control and Data Acquisition system, used for monitoring and control of the WWTP. The available sensors or soft sensors based on the available measurements have to provide the information on the most relevant variables. The operating staff must be trained to interpret the fault detection information, relating to the fault diagnosis analysis, and to take appropriate normal operation restoration actions. Auto-correction algorithms and adaptive PCA may be involved in the more advanced stages of the implementation.
This study has shown the potential of the proposed methodology and demonstrated the efficiency of the presented PCA-based fault detection approach in detecting the DO sensor faults of the controlled WWTP. Its performance is of major practical importance for complex, non-linear, time-varying, and controlled wastewater treatment processes, in support of their safe and efficient management.