# An Advanced Learning-Based Multiple Model Control Supervisor for Pumping Stations in a Smart Water Distribution System

## Abstract


## 1. Introduction

## 2. Related Work

- Identification of feasible learning tasks that can provide useful information for the users.
- Selection of adequate machine learning algorithms for each task.
- Preprocessing stage that is often particular to a given scenario.
- Evaluation of generated knowledge.
- Integration of AI with the monitoring and control system.

#### 2.1. Deep Learning Frameworks and Applications

#### 2.2. IoT, Sensor Networks and Protocols

#### 2.3. Reactive Programming

#### 2.4. Multiple Model Control

## 3. Methodology

- For machine learning, we use a decision tree classification algorithm to have a baseline for the performance evaluation. Then, we use the random forest classification algorithm to see whether combining multiple classifiers using an ensemble method improves the accuracy of prediction.
- For deep learning, we use a multi-layer perceptron to have a baseline for the neural network models and see if it outperforms the classic machine learning algorithms. Then, we use a recurrent neural network (RNN) with multiple LSTM layers to determine whether a deep neural network architecture is a good fit for our problem and whether it outperforms both classic machine learning algorithms and the multi-layer perceptron.

#### 3.1. Machine Learning Models

#### 3.1.1. Decision Trees

#### 3.1.2. Ensemble Methods

#### 3.2. Deep Learning Models

#### 3.2.1. Deep Feed Forward Network

#### 3.2.2. Long Short-Term Memory

- ${x}_{t}\in {\mathbb{R}}^{m}$ is the input features vector of dimension m;
- ${h}_{t}\in {\mathbb{R}}^{n}$ is the hidden state vector as well as the unit’s output vector of dimension n, where the initial value is ${h}_{0}=0$;
- ${\tilde{c}}_{t}\in {\mathbb{R}}^{n}$ is the input activation vector;
- ${c}_{t}\in {\mathbb{R}}^{n}$ is the cell state vector, with the initial value ${c}_{0}=0$;
- ${W}_{i},{W}_{f},{W}_{o},{W}_{c}\in {\mathbb{R}}^{n\times m}$ are the weight matrices applied to the current input for the input gate, forget gate, output gate and cell state;
- ${V}_{i},{V}_{f},{V}_{o},{V}_{c}\in {\mathbb{R}}^{n\times n}$ are the recurrent weight matrices applied to the previous hidden state ${h}_{t-1}$ for the input gate, forget gate, output gate and cell state;
- ${b}_{i},{b}_{f},{b}_{o},{b}_{c}\in {\mathbb{R}}^{n}$ are the bias vectors for the input gate, forget gate, output gate and cell state;
- ${\delta}_{s}(\cdot)={\sigma}_{g}(\cdot)\in [0,1]$ is the sigmoid activation function;
- ${\delta}_{h}(\cdot)=\tanh(\cdot)\in [-1,1]$ is the hyperbolic tangent activation function;
- $\odot$ is the element-wise (Hadamard) product.
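The update equations defined above can be sketched as a single LSTM forward step in NumPy. The dict-based parameter layout and the random initialization are illustrative assumptions of this sketch, not the paper's implementation:

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, W, V, b):
    """One LSTM step following the definitions above.
    W, V, b are dicts keyed by 'i', 'f', 'o', 'c' holding the
    W_* (n x m), V_* (n x n) and b_* (n,) parameters."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    i_t = sigmoid(W['i'] @ x_t + V['i'] @ h_prev + b['i'])      # input gate
    f_t = sigmoid(W['f'] @ x_t + V['f'] @ h_prev + b['f'])      # forget gate
    o_t = sigmoid(W['o'] @ x_t + V['o'] @ h_prev + b['o'])      # output gate
    c_tilde = np.tanh(W['c'] @ x_t + V['c'] @ h_prev + b['c'])  # input activation
    c_t = f_t * c_prev + i_t * c_tilde   # cell state update (Hadamard products)
    h_t = o_t * np.tanh(c_t)             # hidden state / unit output
    return h_t, c_t

# Illustrative dimensions and random parameters; h_0 = 0, c_0 = 0 as above
m, n = 3, 4
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(n, m)) for k in 'ifoc'}
V = {k: rng.normal(size=(n, n)) for k in 'ifoc'}
b = {k: np.zeros(n) for k in 'ifoc'}
h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.normal(size=m), h, c, W, V, b)
```

Since $o_t \in (0,1)$ and $\tanh(c_t) \in (-1,1)$, the output $h_t$ is bounded in $(-1,1)$ component-wise.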

#### 3.3. Modeling of Laminar Flow in Pipes

- L is the length of the pipe;
- D is the diameter of the pipe;
- $\rho $ is the fluid density;
- S is the pipe section.

- F is the flow through the pipe;
- $\mathsf{\Delta}P$ is the reduction in pressure along the pipe;
- $\alpha $ is the flow coefficient;
- $\nu =\frac{F\left(t\right)}{S}$ is the flow velocity;
- $M=\rho L S$ is the fluid mass;
- ${V}_{0}$ is the fluid volume in static conditions.

#### 3.3.1. Laminar Flow in Short Pipes

- ${K}_{p}=0.5$ is the process gain;
- ${T}_{p}={\alpha}^{2}\frac{{V}_{0}}{{F}_{0}}$ is the time constant of the process.

#### 3.3.2. Laminar Flow in Long Pipes

- ${K}_{p}=0.5$ is the process gain;
- ${T}_{p}=\frac{{L}^{5}}{2k{F}_{0}S}$ is the time constant of the process.
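Both the short-pipe and long-pipe cases reduce to a first-order process $H_p(s)=\frac{K_p}{T_p s+1}$. As a minimal sketch (the discretization step and horizon below are arbitrary choices), a forward-Euler simulation of the unit step response converges to the gain $K_p$:

```python
import numpy as np

def first_order_step(Kp, Tp, dt=0.01, t_end=None):
    """Unit step response of H_p(s) = Kp / (Tp*s + 1), forward-Euler."""
    if t_end is None:
        t_end = 8 * Tp  # simulate well past the settling time
    n = int(t_end / dt)
    y = np.zeros(n)
    for k in range(1, n):
        # Tp * dy/dt + y = Kp * u, with u = 1 (unit step)
        y[k] = y[k - 1] + dt / Tp * (Kp * 1.0 - y[k - 1])
    return y

# Process gain Kp = 0.5 as in the models above; Tp chosen for illustration
y = first_order_step(Kp=0.5, Tp=2.0)
```

After one time constant the response reaches $K_p(1 - e^{-1}) \approx 0.316$, and it settles at $K_p = 0.5$.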

#### 3.3.3. Consideration of Process Model

#### 3.4. Multiple Model Control

- $y\left[k\right]$ is the process output at sample k;
- ${y}_{i}\left[k\right]$ is the output of model i at sample k;
- ${\epsilon}_{i}\left[k\right]$ is the output error of model i at sample k;
- ${u}_{i}\left[k\right]$ is the controller output for model i at sample k;
- ${w}_{i}\left[k\right]$ is the controller weight for model i at sample k;
- $\alpha >0$ is the weighting factor for the instantaneous error;
- $\beta >0$ is the weighting factor for long-term accuracy;
- $\lambda >0$ is the forgetting factor that limits the active window over the model error ${\epsilon}_{i}\left[k\right]$.

$${\epsilon}_{i}\left[k\right]=y\left[k\right]-{y}_{i}\left[k\right]$$

$${J}_{i}\left[k\right]=\alpha {\epsilon}_{i}^{2}\left[k\right]+\beta \sum _{j=1}^{k}{e}^{-\lambda (k-j)}{\epsilon}_{i}^{2}\left[j\right]$$

$$u\left[k\right]=\sum _{i=1}^{N}{w}_{i}\left[k\right]\cdot {u}_{i}\left[k\right]$$
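The supervisor computation can be sketched as follows. The exponential sum in $J_i[k]$ is accumulated recursively, and the inverse-cost normalization $w_i = (1/J_i)/\sum_j(1/J_j)$ is an assumed weighting scheme for illustration; the exact mapping from $J_i$ to $w_i$ may differ in a given implementation:

```python
import numpy as np

def mmc_output(y, y_models, u_models, alpha=1.0, beta=0.5, lam=0.1):
    """Blend N candidate controller outputs using the cost J_i.
    y: (K,) process output; y_models, u_models: (N, K) model outputs
    and per-model controller outputs."""
    N, K = y_models.shape
    S = np.zeros(N)   # running sum_{j<=k} e^{-lam(k-j)} eps_i^2[j]
    u = np.zeros(K)
    for k in range(K):
        eps = y[k] - y_models[:, k]      # model errors eps_i[k]
        S = np.exp(-lam) * S + eps**2    # recursive exponential forgetting
        J = alpha * eps**2 + beta * S    # cost per model
        w = 1.0 / (J + 1e-12)            # assumed inverse-cost weighting
        w /= w.sum()                     # normalized weights
        u[k] = w @ u_models[:, k]        # blended controller output
    return u

# Illustrative case: model 0 matches the process, model 1 is biased,
# so the weight should concentrate on controller 0.
K = 50
y = np.sin(np.linspace(0, 3, K))
y_models = np.vstack([y, y + 0.5])
u_models = np.vstack([np.ones(K), np.zeros(K)])
u = mmc_output(y, y_models, u_models)
```

In this toy case the blended output stays near 1, the output of the controller associated with the accurate model.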

#### 3.5. Evaluation Methods

#### 3.5.1. Classification Evaluation Metrics

#### 3.5.2. Control Evaluation Metrics

- $\sigma $ is the overshoot;
- ${\epsilon}_{st}$ is the steady state error;
- ${r}_{st}$ is the (steady state) controller setpoint;
- ${y}_{st}$ is the steady state closed loop response;
- ${y}_{max}$ is the maximum closed loop response with regard to the steady state;
- ${\sigma}_{u}\left(N\right)$ is the standard deviation of the controller output in discrete time;
- $u\left[k\right]$ is the discrete controller output at sample k;
- $\overline{u}$ is the mean controller output.

## 4. Results

#### 4.1. Experimental Model

- $Q\left[k\right]$ is the flow in $L/h$ at sample k;
- $N\left[k\right]$ is the counter value at sample k;
- ${f}_{int}$ is the timer interrupt frequency—i.e., 100 Hz;
- ${f}_{osc}$ is the clock frequency—i.e., 16 MHz;
- $PS$ is the clock prescaler—i.e., 64;
- $TOP$ is the counter limit—i.e., $OCR1A=2499$.
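These constants are mutually consistent with the standard CTC-mode timer relation $f_{int} = f_{osc} / (PS \cdot (TOP + 1))$, which is presumably how the 100 Hz interrupt is obtained (the CTC-mode assumption is ours):

```python
f_osc = 16_000_000   # clock frequency (Hz)
PS = 64              # clock prescaler
TOP = 2499           # counter limit (OCR1A)

# CTC-mode timer interrupt frequency: f_int = f_osc / (PS * (TOP + 1))
f_int = f_osc / (PS * (TOP + 1))
print(f_int)  # 100.0 Hz
```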

#### 4.2. Reactive IoT Platform

#### 4.3. Evaluation of Advanced Learning Methods

The measurements for the first test scenario (**1-N**) are shown in Figure 4. The sampling is time-based, capturing the transient response and the steady-state measurements.

- **1-N** (one-way single valve sequence). The valves are opened and closed in sequence, from the first valve to the last. The next valve is opened and the current valve is closed, with no delay in between, accounting for the transient state in network reconfiguration. The pump is set to 80% output for the duration of the experiment.
- **1-N-1** (single valve return sequence). The valves are opened and closed in sequence, from the first valve to the last and then back to the first. This case is evaluated for both 80% and 50% pump outputs.
- **GRAY** (Gray-code valve sequence). The valves are opened and closed according to the 6-bit Gray code, so that consecutive combinations differ by a single valve and the entire range of combinations is covered. The pump is set to 80% output for the duration of the experiment.

- **Dense**. For the deep feed-forward network, i.e., a dense model, a multi-layer perceptron (MLP) is used for multi-class softmax classification. Three hidden layers are used, each with 32 units and a dropout rate of $0.5$. The hidden layers use the ReLU activation function, while softmax is used for the output layer, which gives the outputs as probabilities. The model is implemented in Keras as a Sequential model using dense layers. It is trained for a maximum of 500 epochs with a stop criterion implemented as an EarlyStopping monitor based on the loss function (categorical_crossentropy for multi-class classification).
- **RNN**. A recurrent neural network (RNN) is implemented as a sequential model with three LSTM hidden layers, while the output layer is a dense layer with a softmax activation function. The two-dimensional input data (samples, sensor nodes) are reshaped into the 3D representation required for training the model. While the LSTM model has a higher learning rate than the dense model, it is more computationally intensive, and is therefore trained for a fixed number of five epochs using the categorical_crossentropy loss function.
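The 2D-to-3D reshaping mentioned for the RNN can be sketched with NumPy. The dataset sizes and the sliding-window variant below are illustrative assumptions, as the sequence length is not specified here:

```python
import numpy as np

# Hypothetical dataset: 100 samples from 6 sensor nodes
X = np.random.rand(100, 6)

# Keras LSTM layers expect input of shape (samples, timesteps, features).
# The simplest reshape treats each sample as a sequence of length 1:
X3d = X.reshape((X.shape[0], 1, X.shape[1]))  # -> (100, 1, 6)

# Alternatively, a sliding window of w consecutive samples per sequence
# preserves short-term temporal context for the LSTM:
w = 5
windows = np.stack([X[i:i + w] for i in range(len(X) - w + 1)])  # -> (96, 5, 6)
```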

- **DT**. A decision tree classifier is implemented using the DecisionTreeClassifier in scikit-learn, with the default splitting criterion based on the Gini index. The accuracy is evaluated in terms of the predicted classes for the integer-encoded test dataset.
- **RF**. A random forest classifier is implemented using the RandomForestClassifier in scikit-learn, with the default splitting criterion based on the Gini index and 100 estimators. The accuracy is evaluated in terms of the predicted classes for the integer-encoded test dataset, while the model aims for improved accuracy and reduced overfitting compared to a single decision tree.
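A minimal sketch of the two classifiers, using a synthetic stand-in dataset (the toy target below is an assumption for illustration, not the experimental data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in: 6 features (one per sensor node), integer-encoded classes
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a separable toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

# DT baseline with the default Gini splitting criterion
dt = DecisionTreeClassifier(criterion="gini").fit(X_tr, y_tr)
# RF ensemble with 100 estimators, as in the setup above
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)

acc_dt = accuracy_score(y_te, dt.predict(X_te))
acc_rf = accuracy_score(y_te, rf.predict(X_te))
```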

One model stands out in the **GRAY** test scenario, showing considerably lower accuracy than the other models for the classification problem. In the best model chart in Figure 5b, the results are similar. The RNN model is more consistent and outperforms the other models in every scenario, for both average and best model evaluations, while the RF model shows surprisingly good results in this evaluation.

The **1-N-1** test scenario is notable in that the accuracy is better for the dataset obtained with 50% pump output, which can be explained by the reduced pressure and therefore less disturbance during the transient state of valve switching.

As expected, the models trained on single valve sequences (**1-N**, **1-N-1**) are not able to predict combinations of valves when compared to models trained on the entire range of valve combinations (**GRAY**). However, with the RNN model, both the single valve class of models and the **GRAY** model yield similar results on the **1-N** and **1-N-1** datasets, showing good adaptability to different scenarios having similar configurations. The RF classifier achieves good accuracy for the **GRAY** model across the experimental dataset, showing less overfitting than the DT/GRAY model and comparable results to the RNN/GRAY model in this case.

#### 4.4. Extension of the Multiple Model Control Structure

- **Controller design** is based on experimental identification or closed-loop PID tuning strategies for first- and second-order models. For first-order models having the transfer function ${H}_{p}\left(s\right)=\frac{K}{Ts+1}$, the methods include evaluating the step response to calculate the amplification factor K and time constant T, and applying a pseudorandom binary sequence (PRBS) to the process for more advanced and automated identification. For PID tuning, the closed-loop response is evaluated, and the coefficients ${K}_{r}$, ${T}_{i}$ and ${T}_{d}$ are adjusted to match the performance criteria (asymptotic error $\epsilon\left(t\right)$, settling time ${t}_{s}$ and overshoot).
- **Controller selection** is based on learning from simulated scenarios, aiming at improved adaptability and replacing or complementing the more traditional switching algorithm based on model outputs. In this case, the RNN model returns the probabilities for each class at each sample k, which are then used for switching (on/off, weighted output) the associated controllers. Classes (${c}_{k}\in \mathbb{N}$) define network configurations, represented as one-hot encoded binary combinations of valves; i.e., $y={2}^{{c}_{k}}$.
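The class-to-configuration mapping $y = 2^{c_k}$ can be illustrated as follows; the bit ordering (valve 1 as the least significant bit) is an assumption of this sketch:

```python
def class_to_valves(c_k, n_valves=6):
    """Map a class index c_k to its one-hot encoded valve combination
    y = 2**c_k, returned as a binary open/closed vector over n_valves
    valves (bit 0 = valve 1, an assumed ordering)."""
    y = 2 ** c_k
    return [(y >> i) & 1 for i in range(n_valves)]

print(class_to_valves(0))  # [1, 0, 0, 0, 0, 0]  -> valve 1 open
print(class_to_valves(3))  # [0, 0, 0, 1, 0, 0]  -> valve 4 open
```

Each class therefore activates exactly one valve, matching the single valve scenarios; the GRAY scenario instead spans all $2^6$ binary combinations.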

## 5. Discussion

## 6. Materials and Methods

## 7. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Predescu, A.; Truică, C.-O.; Apostol, E.-S.; Mocanu, M.; Lupu, C.
An Advanced Learning-Based Multiple Model Control Supervisor for Pumping Stations in a Smart Water Distribution System. *Mathematics* **2020**, *8*, 887.
https://doi.org/10.3390/math8060887
