# Process Monitoring of Quality-Related Variables in Wastewater Treatment Using Kalman-Elman Neural Network-Based Soft-Sensor Modeling


## Abstract


## 1. Introduction

In wastewater treatment processes, online measurement of quality-related variables such as BOD_{5} (biochemical oxygen demand for 5 days), COD (chemical oxygen demand), and TN (total nitrogen) is usually difficult [2,3]. To describe the physical, chemical, and biological reactions in wastewater, many differential equations are required, which discourages process model construction. Because prior knowledge is not required, data-driven soft-sensor technology has become the most commonly used method for measuring the quality-related variables of biological treatment processes in wastewater treatment. Essentially, data-driven soft-sensor technology constructs a mathematical model that describes the relationship between input and output variables, so that hard-to-measure variables can be predicted without resorting to an accurate mechanistic model [4].

## 2. Preliminary Materials and Methods

#### 2.1. Kalman Filter

#### 2.2. SR-UKF Algorithm

#### 2.3. Elman Neural Network

## 3. Proposed Prediction Model and Validation Materials

#### 3.1. Elman Network Based on SR-UKF (Elman-SR-UKF)

#### 3.2. Update the Model with Adaptive Noise Variance

**Algorithm 1** Elman-SR-UKF algorithm

1: Initialization:

2: ${\widehat{\mathit{x}}}_{0}=E\left[{\mathit{x}}_{0}\right]$, ${\mathit{P}}_{{\mathit{x}}_{0}}=E\left[\left(\mathit{x}-{\widehat{\mathit{x}}}_{0}\right){\left(\mathit{x}-{\widehat{\mathit{x}}}_{0}\right)}^{T}\right]$, ${\mathit{S}}_{{\mathit{x}}_{0}}=chol\left({\mathit{P}}_{{\mathit{x}}_{0}}\right)$ (10)

3: For $k=1,2,\cdots$

4: Time update equations:

5: ${\widehat{\mathit{x}}}_{k}^{-}={\widehat{\mathit{x}}}_{k-1}$ (11)

6: ${\mathit{S}}_{{\mathit{x}}_{k}}^{-}=qr\left(\left[\begin{array}{cc}{\mathit{S}}_{{\mathit{x}}_{k-1}}& {\mathit{S}}_{{\mathit{R}}_{k}}\end{array}\right]\right)$ (12)

7: Calculate sigma points:

8: ${\mathsf{\chi}}_{k|k-1}=\left[\begin{array}{ccc}{\widehat{\mathit{x}}}_{k}^{-}& {\widehat{\mathit{x}}}_{k}^{-}+\gamma {\mathit{S}}_{{\mathit{x}}_{k}}^{-}& {\widehat{\mathit{x}}}_{k}^{-}-\gamma {\mathit{S}}_{{\mathit{x}}_{k}}^{-}\end{array}\right]$ (13)

9: Measurement update equations:

10: ${\mathcal{Y}}_{k|k-1}=h\left({\mathsf{\chi}}_{k|k-1},{\mathit{u}}_{k}\right)$ (14)

11: ${\widehat{\mathit{z}}}_{k}^{-}\approx {\displaystyle \sum _{i=0}^{2L}{w}_{i}^{(m)}{y}_{i,k|k-1}}$ (15)

12: ${\mathit{S}}_{{\tilde{\mathit{z}}}_{k}}=qr\left\{\left[\begin{array}{cc}\sqrt{{w}_{1}^{\left(c\right)}}\left({\mathcal{Y}}_{1:2L,k|k-1}-{\widehat{\mathit{z}}}_{k}^{-}\right)& {\mathit{S}}_{{\mathit{Q}}_{k}}\end{array}\right]\right\}$ (16)

13: ${\mathit{S}}_{{\tilde{\mathit{z}}}_{k}}=cholupdate\left\{{\mathit{S}}_{{\tilde{\mathit{z}}}_{k}},{\mathcal{Y}}_{0,k|k-1}-{\widehat{\mathit{z}}}_{k}^{-},{w}_{0}^{\left(c\right)}\right\}$ (17)

14: ${\mathit{P}}_{{\mathit{x}}_{k}{\mathit{z}}_{k}}={\displaystyle \sum _{i=0}^{2L}}{w}_{i}^{\left(c\right)}\left({\mathsf{\chi}}_{i,k|k-1}-{\widehat{\mathit{x}}}_{k}^{-}\right){\left({\mathcal{Y}}_{i,k|k-1}-{\widehat{\mathit{z}}}_{k}^{-}\right)}^{T}$ (18)

15: ${K}_{k}=\left({\mathit{P}}_{{\mathit{x}}_{k}{\mathit{z}}_{k}}/{\mathit{S}}_{{\tilde{\mathit{z}}}_{k}}^{T}\right)/{\mathit{S}}_{{\tilde{\mathit{z}}}_{k}}$ (19)

16: ${\widehat{\mathit{x}}}_{k}={\widehat{\mathit{x}}}_{k}^{-}+{K}_{k}\left({\mathit{z}}_{k}-{\widehat{\mathit{z}}}_{k}^{-}\right)={\widehat{\mathit{x}}}_{k}^{-}+{K}_{k}{\tilde{\mathit{z}}}_{k}$ (20)

17: ${\mathit{H}}_{k}={\mathit{P}}_{{\mathit{x}}_{k}{\mathit{z}}_{k}}^{T}{\left({\mathit{P}}_{{\mathit{x}}_{k}}^{-}\right)}^{-1}$, ${\mathit{P}}_{{\mathit{x}}_{k}}^{-}={\mathit{S}}_{{\mathit{x}}_{k}}^{-}{\left({\mathit{S}}_{{\mathit{x}}_{k}}^{-}\right)}^{T}$ (21)

18: ${\mathit{S}}_{{\mathit{x}}_{k}}=qr\left(\left[\begin{array}{cc}{\mathit{S}}_{{\mathit{x}}_{k}}^{-}-{K}_{k}{\mathit{H}}_{k}{\mathit{S}}_{{\mathit{x}}_{k}}^{-}& {K}_{k}{\mathit{S}}_{{\mathit{Q}}_{k}}\end{array}\right]\right)$ (22)

where ${\mathit{S}}_{{\mathit{x}}_{k}}$ and ${\mathit{S}}_{{\mathit{Q}}_{k}}$ are the square-root forms of ${\mathit{P}}_{{\mathit{x}}_{k}}$ and ${\mathit{Q}}_{k}$, respectively; that is, ${\mathit{P}}_{{\mathit{x}}_{k}}={\mathit{S}}_{{\mathit{x}}_{k}}{\mathit{S}}_{{\mathit{x}}_{k}}^{T}$ and ${\mathit{Q}}_{k}={\mathit{S}}_{{\mathit{Q}}_{k}}{\mathit{S}}_{{\mathit{Q}}_{k}}^{T}$. ${\mathit{R}}_{k}$ is the process noise covariance. $\gamma =\sqrt{L+\lambda}$ is a composite scaling parameter. ${\mathit{z}}_{k}$ and ${\tilde{\mathit{z}}}_{k}$ are the true value and the innovation at time step $k$, respectively. The $diag\{\cdot\}$ operator zeros all elements of a square matrix except the main diagonal. $qr\{\cdot\}$ and $cholupdate\{\cdot\}$ are standard MATLAB functions for QR decomposition and Cholesky factor updating, respectively. The operator “/” denotes the MATLAB right-division operation. The derivation of Equation (22) follows that of Equation (42).

#### 3.2.1. Adaptive Noise Estimation

#### 3.2.2. Adaptive Sage-Husa Noise Estimation

#### 3.3. Handling Outliers

#### 3.3.1. Outliers Identification

#### 3.3.2. Parameter Adjustment

#### 3.4. Weight Constraining

- (1) $\varphi $ is a continuously differentiable function on $\left[-\infty ,+\infty \right]$;
- (2) ${\varphi}^{-1}$ exists and is a continuous function on $\left[-\mu ,+\mu \right]$;
- (3) ${\mathrm{lim}}_{\mu \to \infty}\varphi \left({\tilde{\mathit{x}}}_{k}^{i,j},\mu \right)={\tilde{\mathit{x}}}_{k}^{i,j}={\mathit{x}}_{k}^{i,j}$.
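One common squashing function that satisfies the three conditions above is the scaled hyperbolic tangent, $\varphi(x,\mu)=\mu\tanh(x/\mu)$. This is an illustrative choice on our part (the paper's exact $\varphi$ may differ): it is smooth on the whole real line, invertible on $(-\mu,+\mu)$, and recovers the identity as $\mu \to \infty$.

```python
import numpy as np

def phi(x, mu):
    """Smooth squashing function mapping R into (-mu, +mu);
    condition (1): continuously differentiable everywhere."""
    return mu * np.tanh(np.asarray(x, dtype=float) / mu)

def phi_inv(y, mu):
    """Inverse of phi, continuous on (-mu, +mu); condition (2)."""
    return mu * np.arctanh(np.asarray(y, dtype=float) / mu)
```

Condition (3) holds because $\mu\tanh(x/\mu)=x - x^{3}/(3\mu^{2})+\cdots \to x$ as $\mu \to \infty$, so an unconstrained filter ($\mu=\infty$ in Tables 3 and 4) is recovered as a limiting case.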

## 4. Evaluation Methods and Validation Materials

#### 4.1. Evaluation Methods

#### 4.2. Materials for the Case Study

## 5. Results and Discussion

The activation functions of the hidden layer and the output layer are the **logsig** and **purelin** functions, respectively. The initial weights are random numbers in [−0.5, 0.5]. Equations (43)–(45) are used as the evaluation criteria for predictive performance. The parameter definitions of the model are shown in Table 2. The software used in this study was MATLAB R2016a.
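Equations (43)–(45) are not reproduced in this excerpt; assuming the standard definitions of the root mean square error (RMSE), the correlation coefficient (R), and the mean relative error (MRE), the evaluation criteria can be sketched as follows (function names are our own):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between measured and predicted series."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def corr(y_true, y_pred):
    """Pearson correlation coefficient R."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])

def mre(y_true, y_pred):
    """Mean relative error (assumes y_true has no zero entries)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred) / np.abs(y_true)))
```

Each criterion would be computed per output variable (SS-S, DBO-S, DQO-S), as reported in Tables 3 and 4.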

**Remark 1.**

**Remark 2.**

N_{2}O during the reaction. In addition to activated sludge processes, the proposed methods can be extended to other processes such as oxidation ditches (ODs) and sequencing batch reactors (SBRs).

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

Abbreviation | Definition
---|---
BOD_{5} | biochemical oxygen demand for 5 days
COD | chemical oxygen demand
TN | total nitrogen
SVI | sludge volume index
ODs | oxidation ditches
SBRs | sequencing batch reactors
KF | Kalman filter
EKF | extended Kalman filter
UKF | unscented Kalman filter
SR-UKF | square-root unscented Kalman filter
NN | neural network
RNN | recurrent neural network
RTRL | real-time recurrent learning
BPTT | back propagation through time
LM | Levenberg-Marquardt
SUT | scaled unscented transformation
RMSE | root mean square error
R | correlation coefficient
RMSSD | root mean of diagonal square sum
MR | multiple correlation coefficient
DQO-S | output chemical demand of oxygen
DBO-S | output biological demand of oxygen
SS-S | output suspended solids
Elman-BPTT | Elman network based on back propagation through time algorithm
Elman-GDM | Elman network based on momentum gradient descent algorithm
Elman-LM | Elman network based on Levenberg-Marquardt algorithm
RTRL-LM | real-time recurrent learning based on Levenberg-Marquardt algorithm
Elman-RTRL-LM | Elman network based on RTRL-LM

## References

- Cheng, H.; Wu, J.; Huang, D.; Liu, Y.; Wang, Q. Robust adaptive boosted canonical correlation analysis for quality-relevant process monitoring of wastewater treatment. ISA Trans.
**2021**, 117, 210–220. [Google Scholar] [CrossRef] [PubMed] - Kadlec, P.; Gabrys, B.; Strandt, S. Data-driven soft sensors in the process industry. Comput. Chem. Eng.
**2009**, 33, 795–814. [Google Scholar] [CrossRef] [Green Version] - Li, D.; Liu, Y.; Huang, D. Development of Semi-supervised Multiple-output Soft-sensors with Co-training and Tri-training MPLS and MRVM. Chemom. Intell. Lab. Syst.
**2020**, 199, 103970. [Google Scholar] [CrossRef] - Daoping, H.; Yiqi, L.; Yan, L. Soft sensor research and its application in wastewater treatment. CIESC J.
**2011**, 62, 7–15. [Google Scholar] - Wu, J.; Cheng, H.; Liu, Y.; Huang, D.; Yuan, L.; Yao, L. Learning soft sensors using time difference–based multi-kernel relevance vector machine with applications for quality-relevant monitoring in wastewater treatment. Environ. Sci. Pollut. Res.
**2020**, 27, 28986–28999. [Google Scholar] [CrossRef] [PubMed] - Liu, Y.; Xie, M. Rebooting data-driven soft-sensors in process industries: A review of kernel methods. J. Process Control
**2020**, 89, 58–73. [Google Scholar] [CrossRef] - Jiang, Y.; Yin, S.; Dong, J.; Kaynak, O. A Review on Soft Sensors for Monitoring, Control and Optimization of Industrial Processes. IEEE Sens. J.
**2020**, 21, 12868–12881. [Google Scholar] [CrossRef] - Jawad, J.; Hawari, A.H.; Javaid Zaidi, S. Artificial neural network modeling of wastewater treatment and desalination using membrane processes: A review. Chem. Eng. J.
**2021**, 419, 129540. [Google Scholar] [CrossRef] - Zhao, L.; Dai, T.J.; Qiao, Z.; Sun, P.Z.; Hao, J.Y.; Yang, Y.K. Application of artificial intelligence to wastewater treatment: A bibliometric analysis and systematic review of technology, economy, management, and wastewater reuse. Process Saf. Environ. Prot.
**2020**, 133, 169–182. [Google Scholar] [CrossRef] - Guan, X.Z.; Song, T.L.; Yan-Hai, X.U.; Liu, Y. Comparison of prediction on BP neural networks and Elman neural networks in wastewater treatment. Tech. Autom. Appl.
**2014**, 33, 1–3, 25. [Google Scholar] - Liang, Y. Application of Elman neural network in short-term load forecasting. In Proceedings of the International Conference on Artificial Intelligence & Computational Intelligence, Sanya, China, 23–24 October 2010. [Google Scholar]
- Howard, B.D.; Mark, H.B.; Orlando De, J.; Martin, T.H. Neural Network Design; Martin Hagan: Stillwater, OK, USA, 2014. [Google Scholar]
- Williams, R.J. A learning algorithm for continually running fully recurrent networks. Neural. Comput.
**1989**, 1, 270–280. [Google Scholar] [CrossRef] - Werbos, P.J. Backpropagation through time: What it does and how to do it. Proc. IEEE
**1990**, 78, 1550–1560. [Google Scholar] [CrossRef] [Green Version] - Hagan, M.T.; Menhaj, M.B. Training feedforward networks with the Marquardt algorithm. IEEE Trans. Neural Netw.
**2002**, 5, 989–993. [Google Scholar] [CrossRef] [PubMed] - Bianchini, M.; Gori, M. On the problem of local minima in recurrent neural networks. IEEE Trans. Neural Netw.
**1994**, 5, 167–177. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Zhang, F.; Reynolds, A.C.; Oliver, D.S. An Initial Guess for the Levenberg–Marquardt Algorithm for Conditioning a Stochastic Channel to Pressure Data. Math. Geol.
**2003**, 35, 67–88. [Google Scholar] [CrossRef] - Snyder, C. Introduction to the Kalman filter. In Advanced Data Assimilation for Geosciences; Oxford University Press: Oxford, MI, USA, 2010; pp. 75–120. [Google Scholar]
- Julier, S.J.; Uhlmann, J.K.; Durrant-Whyte, H.F. A new approach for filtering nonlinear systems. In Proceedings of the American Control Conference—ACC’95, Seattle, WA, USA, 21–23 June 1995. [Google Scholar]
- Sharad, S.; Lance, W. Training multilayer perceptrons with the extended kalman algorithm. In Proceedings of the 1st International Conference on Neural Information Processing Systems, Cambridge, MA, USA, 1 January 1988; pp. 133–140. [Google Scholar]
- Puskorius, G.V.; Feldkamp, L.A. Parameter-Based Kalman Filter Training: Theory and Implementation. In Kalman Filtering and Neural Networks; Wiley: New York, NY, USA, 2001. [Google Scholar]
- Wu, X.; Wang, Y. Extended and Unscented Kalman filtering based feedforward neural networks for time series prediction. Appl. Math. Model.
**2012**, 36, 1123–1131. [Google Scholar] [CrossRef] - Lima, D. Neural Network Training Using Unscented and Extended Kalman Filter. Robot. Autom. Eng. J.
**2017**, 1, 555–558. [Google Scholar] - Merwe, R.; Wan, E.A. The square-root unscented Kalman filter for state and parameter estimation. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Salt Lake City, UT, USA, 7–11 May 2001. [Google Scholar]
- Merwe, R.; Wan, E. Sigma-point Kalman filters for probabilistic inference in dynamic state-space models. In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, (ICASSP ‘03), Hong Kong, China, 6–10 April 2003. [Google Scholar]
- Todorović, B.; Stanković, M.; Moraga, C. Recurrent Neural Networks Training Using Derivative Free Nonlinear Bayesian Filters. In Proceedings of the International Joint Conference on Advances in Computational Intelligence, Rome, Italy, 22–24 October 2014; pp. 383–410. [Google Scholar]
- Liu, K.; Zhao, W.; Sun, B.; Wu, P.; Zhang, P. Application of Updated Sage–Husa Adaptive Kalman Filter in the Navigation of a Translational Sprinkler Irrigation Machine. Water
**2019**, 11, 1269. [Google Scholar] [CrossRef] [Green Version] - Julier, S.J. The scaled unscented transformation. In Proceedings of the American Control Conference, Anchorage, AK, USA, 8–10 May 2002. [Google Scholar]
- Elman, J.L. Finding structure in time. Cogn. Sci.
**1990**, 14, 179–211. [Google Scholar] [CrossRef] - Wei, C.; Jie, W. Kalman and gradation algorithms used in recurrent neural networks. J. South China Univ. Technol.
**1998**, 4, 44–48. [Google Scholar] - Xiaobing, L.; Tianshuang, T.Y.Q. A new training algorithm based on Kalman filter of essential Elman network. J. Dalian Univ. Technol.
**2009**, 49, 276–281. [Google Scholar] - Simon, D. Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006. [Google Scholar]
- Mohamed, A.H.; Schwarz, K.P. Adaptive Kalman filtering for INS/GPS. J. Geod.
**1999**, 73, 193–203. [Google Scholar] [CrossRef] - Ju, H.Y.; Du, Y.K.; Shin, V. Window length selection in linear receding horizon filtering. In Proceedings of the 2008 International Conference on Control, Automation and Systems, Seoul, Korea, 14–17 October 2008. [Google Scholar]
- Soken, H.E.; Hajiyev, C. Robust adaptive unscented Kalman filter for attitude estimation of pico satellites. Int. J. Adapt. Control Signal Process.
**2014**, 28, 107–120. [CrossRef] - Shi, Y.; Han, C.; Liang, Y. Adaptive UKF for target tracking with unknown process noise statistics. In Proceedings of the 2009 12th International Conference on Information Fusion, Seattle, WA, USA, 6–9 July 2009. [Google Scholar]
- Chang, G. Robust Kalman filtering based on Mahalanobis distance as outlier judging criterion. J. Geod.
**2014**, 88, 391–401. [Google Scholar] [CrossRef] - Haykin, S. Kalman Filtering and Neural Networks; Wiley: New York, NY, USA, 2001. [Google Scholar]
- Dua, D.; Graff, C. UCI Machine Learning Repository. 2017. Available online: https://archive.ics.uci.edu/ml/index.php (accessed on 20 November 2021).
- Liu, Y.; Huang, D.; Li, Y. Development of Interval Soft Sensors Using Enhanced Just-in-Time Learning and Inductive Confidence Predictor. Ind. Eng. Chem. Res.
**2012**, 51, 3356–3367. [Google Scholar] [CrossRef] - Jing, W. Study on Kernel Function Modeling Method for Unsteady Process of Wastewater Treatment. Ph.D. Dissertation, South China University of Technology, Guangzhou, China, 2020. [Google Scholar]
- Liang, N.; Huang, G.; Saratchandran, P.; Sundararajan, N. A Fast and Accurate Online Sequential Learning Algorithm for Feedforward Networks. IEEE Trans. Neural Netw.
**2006**, 17, 1411–1423. [Google Scholar] [CrossRef] [PubMed] - Nguyen, T.V.; Bonilla, E. Collaborative multi-output gaussian processes. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence, UAI, Quebec City, QC, Canada, 23–27 July 2014; pp. 643–652. [Google Scholar]
- Zhou, Y.; Guo, S.; Xu, C.Y.; Chang, F.J.; Yin, J. Improving the Reliability of Probabilistic Multi-Step-Ahead Flood Forecasting by Fusing Unscented Kalman Filter with Recurrent Neural Network. Water
**2020**, 12, 578. [Google Scholar] [CrossRef] [Green Version] - Li, D.; Huang, D.; Yu, G.; Liu, Y. Learning Adaptive Semi-Supervised Multi-Output Soft-Sensors with Co-training of Heterogeneous Models. IEEE Access
**2020**, 8, 46493–46504. [Google Scholar] [CrossRef] - Li, D.; Huang, D.; Liu, Y. A novel two-step adaptive multioutput semisupervised soft sensor with applications in wastewater treatment. Environ. Sci. Pollut. Res.
**2021**, 28, 1–15. [Google Scholar] [CrossRef] [PubMed]

**Figure 1.** (**A**) The simplified topology of the Elman neural network; (**B**) a detailed expansion of module A.

**Figure 4.** Prediction profiles of output variables compared with real values (the first 80 time-series samples in the testing dataset).

**Figure 5.** Convergence profiles of different weight constraints (${\mathit{P}}_{{\mathit{x}}_{0}}=0.05\mathit{I}$).

No | Variables | Comments
---|---|---
1 | DBO-E | Input biological demand of oxygen to plant
2 | DQO-E | Input chemical demand of oxygen to plant
3 | DBO-P | Input biological demand of oxygen to primary settler
4 | PH-D | Input pH to secondary settler
5 | DBO-D | Input biological demand of oxygen to secondary settler
6 | DQO-D | Input chemical demand of oxygen to secondary settler
7 | SS-D | Input suspended solids to secondary settler
8 | SED-D | Input sediments to secondary settler
9 | RD-DBO-P | Performance input biological demand of oxygen in primary settler
10 | RD-SS-P | Performance input suspended solids to primary settler
11 | RD-DBO-S | Performance input biological demand of oxygen to secondary settler
12 | RD-DQO-S | Performance input chemical demand of oxygen to secondary settler
13 | RD-DBO-G | Global performance input biological demand of oxygen
14 | RD-DQO-G | Global performance input chemical demand of oxygen
15 | RD-SS-G | Global performance input suspended solids
16 | RD-SED-G | Global performance input sediments
17 | PH-S | pH in the effluent
18 | SED-S | Sediments in the effluent

Models | Parameters
---|---
Elman-SR-UKF | $\alpha =1$, $\beta =0$, $\kappa =2$, forgetting factor $b=0.955$, initial process covariance ${\mathit{R}}_{0}=1\times {10}^{-5}\mathit{I}$, initial measurement covariance ${\mathit{Q}}_{0}=0.5\mathit{I}$, initial error covariance ${\mathit{P}}_{{\mathit{x}}_{0}}=0.01\mathit{I}$, moving window $N=20$, statistic ${\alpha}_{\chi}=0.05$
Other Elman models | learning rate lr = 0.01, iterations = 1000

Models | Metric | SS-S | DBO-S | DQO-S | RMSSD | MR | Epochs
---|---|---|---|---|---|---|---
Elman-BPTT | RMSE | 4.957 | 3.591 | 16.125 | 17.271 | 0.688 | 1000
 | R | 0.622 | 0.709 | 0.732 | | |
 | MRE | 0.246 | 0.172 | 0.190 | | |
Elman-GDM | RMSE | 4.598 | 3.308 | 14.479 | 15.569 | 0.781 | 1000
 | R | 0.705 | 0.799 | 0.834 | | |
 | MRE | 0.224 | 0.154 | 0.166 | | |
Elman-LM | RMSE | 25.961 | 3.3581 | 13.531 | 31.356 | 0.7648 | 975
 | R | 0.471 | 0.906 | 0.917 | | |
 | MRE | 1.273 | 0.145 | 0.135 | | |
Elman-RTRL-LM | RMSE | 21.184 | 2.434 | 10.142 | 24.529 | 0.770 | 80
 | R | 0.457 | 0.922 | 0.931 | | |
 | MRE | 0.870 | 0.092 | 0.091 | | |
Elman-SR-UKF (μ = ∞) | RMSE | 3.330 | 1.765 | 8.028 | 8.847 | 0.917 | 30
 | R | 0.857 | 0.946 | 0.949 | | |
 | MRE | 0.151 | 0.076 | 0.081 | | |
ELM | RMSE | 5.773 | 4.007 | 18.065 | 19.431 | 0.626 | /
 | R | 0.523 | 0.664 | 0.690 | | |
 | MRE | 0.278 | 0.193 | 0.205 | | |
MGPR | RMSE | 4.412 | 3.057 | 13.012 | 14.121 | 0.807 | /
 | R | 0.742 | 0.826 | 0.854 | | |
 | MRE | 0.213 | 0.141 | 0.139 | | |

**Table 4.**RMSE, R, RMSSD, and MR values of the output variables with different weight constraints (${\mathit{P}}_{{\mathit{x}}_{0}}=0.05\mathit{I}$).

Elman-SR-UKF with Constrained Weights | Metric | SS-S | DBO-S | DQO-S | RMSSD | MR | Maximum Weight
---|---|---|---|---|---|---|---
$\mu =1$ | RMSE | 3.967 | 2.053 | 8.701 | 17.271 | 0.688 | 0.80
 | R | 0.792 | 0.927 | 0.939 | | |
 | MRE | 0.181 | 0.091 | 0.089 | | |
$\mu =2$ | RMSE | 3.646 | 1.894 | 8.805 | 9.625 | 0.903 | 1.303
 | R | 0.834 | 0.938 | 0.938 | | |
 | MRE | 0.168 | 0.082 | 0.089 | | |
$\mu =5$ | RMSE | 3.596 | 1.851 | 8.666 | 9.507 | 0.908 | 2.15
 | R | 0.841 | 0.942 | 0.941 | | |
 | MRE | 0.165 | 0.079 | 0.086 | | |
$\mu =\infty $ | RMSE | 3.563 | 1.802 | 8.26 | 9.177 | 0.912 | 4.12
 | R | 0.847 | 0.945 | 0.946 | | |
 | MRE | 0.160 | 0.076 | 0.080 | | |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Liu, Y.; Yuan, L.; Li, D.; Li, Y.; Huang, D.
Process Monitoring of Quality-Related Variables in Wastewater Treatment Using Kalman-Elman Neural Network-Based Soft-Sensor Modeling. *Water* **2021**, *13*, 3659.
https://doi.org/10.3390/w13243659
