Open Access

*Water* **2019**, *11*(1), 88; https://doi.org/10.3390/w11010088

Article

Comparison of Multiple Linear Regression, Artificial Neural Network, Extreme Learning Machine, and Support Vector Machine in Deriving Operation Rule of Hydropower Reservoir

^{1} Bureau of Hydrology, ChangJiang Water Resources Commission, Wuhan 430010, China

^{2} School of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan 430074, China

^{3} Institute of Hydropower and Hydroinformatics, Dalian University of Technology, Dalian 116024, China

^{*} Author to whom correspondence should be addressed.

Received: 26 November 2018 / Accepted: 29 December 2018 / Published: 7 January 2019

## Abstract


Operation rules play an important role in the scientific management of hydropower reservoirs, because a scientifically sound operating rule can help operators make an approximately optimal decision with limited runoff prediction information. In past decades, various effective methods have been developed by researchers all over the world, but there are few publications evaluating the performances of different methods in deriving hydropower reservoir operation rules. To achieve a satisfactory scheduling process with limited streamflow data, four methods are used to derive the operation rule of hydropower reservoirs: multiple linear regression (MLR), artificial neural network (ANN), extreme learning machine (ELM), and support vector machine (SVM). Then, the data from 1952 to 2015 for the Hongjiadu reservoir in China are chosen as the study case, and several quantitative statistical indexes are adopted to evaluate the performances of the different models. The radial basis function is chosen as the kernel function of SVM, while the sigmoid function is used in the hidden layers of ELM and ANN. The simulations show that the three artificial intelligence algorithms (ANN, SVM, and ELM) provide better performances than the conventional MLR and scheduling graph method. Hence, for scholars in the hydropower operation field, the application of artificial intelligence algorithms in deriving the operation rule of a hydropower reservoir might be a challenge, but represents valuable research work for the future.

Keywords: hydropower reservoir; operation rule derivation; multiple linear regression; artificial neural network; extreme learning machine; support vector machine; dynamic programming

## 1. Introduction

As a classical tool for adjusting natural runoff, reservoirs play an increasingly important role in human society [1]. In practice, reservoirs need to satisfy a variety of practical requirements from various administrative departments, such as flood control, power generation, agricultural irrigation, water supply, and ecological protection [2]. In addition, booming socio-economic development has caused an unprecedented imbalance between water supply and water demand [3], and it is of great necessity to make the utmost of the regulation abilities of all reservoirs [4]. As a result, reservoir operation optimization has become one of the most significant tasks in water resources and power systems over the past decades [5]. In general, when the inflow per scheduling period is known, the global optimal solution for the reservoir operation problem can be easily obtained using dynamic programming or other optimization methods [6]. Traditionally, this dispatching pattern is identified as deterministic optimization, and the corresponding scheduling result denotes the best solution found in this scenario [7]. Nevertheless, it is difficult to capture perfect future runoff information because of the limitations of existing runoff forecasting technology. That is to say, deterministic optimization is just a potential reflection of the fixed runoff case, but is not suitable for uncertain environments. In recent years, fast-growing computer technology has markedly promoted the collection, processing, and storage of the multi-source heterogeneous data produced over the entire life-cycle of a hydropower reservoir, which indicates that abundant data are available to provide potential technical support for operators. Hence, a natural idea for handling the above issue is to derive the reservoir operation rule from actual data and planning data [8].

Implicit stochastic optimization (ISO) is a tool developed to achieve this goal. The key idea behind the ISO method is to derive a near-optimal reservoir operation rule from long-term historical data [9]. Since its origin, ISO has attracted intensive attention from researchers all over the world, and many effective methods have been developed to enhance its practicality [10]. Thus far, all of the existing methods can be roughly divided into two different groups [11]: the first includes traditional techniques like the scheduling graph method (SGM) and multiple linear regression (MLR); the other is artificial intelligence (AI) approaches represented by the artificial neural network (ANN), extreme learning machine (ELM), and support vector machine (SVM). The former involves classical methods, but they often fail to consider the latest operation data and to deal with the complex nonlinearity between the dependent variable and the independent variables [12], while the latter can not only effectively alleviate the above defects, but also scientifically analyze large-scale datasets [13]. Over the past few decades, extensive applications of AI-based methods have been published, because these methods can produce accurate results for a variety of engineering problems.

ANN is inspired by the working mechanism of the human brain and nervous system and has been widely applied to solve a variety of practical engineering problems. ANN can be treated as a special signal processing system with numerous interconnected layers linked by weight vectors between two neighboring layers. For instance, the authors of [14] used a particle swarm optimization model to train the parameters of ANN in stage prediction of the Shing Mun River; the authors of [15] verified the feasibility of support vector regression and ANN in river stage prediction; the authors of [16] developed a hybrid ANN method based on quantum-behaved particle swarm optimization for daily runoff forecasting; the authors of [17] used ANN to forecast the ice conditions of the Yellow River in the Inner Mongolia reach; the authors of [18] compared the performances of several AI-based methods (like ANN and SVM) in monthly discharge prediction; the authors of [19] made full use of ANN to forecast concurrent flows in a river system; and, based on ANN and SVM, the authors of [20] developed a hybrid forecasting method to effectively improve the forecast accuracy of monthly streamflow. Therefore, the above literature indicates that ANN can provide reasonable results in water resources problems.

ELM is a novel training method for single-hidden layer feed-forward neural networks. After randomly determining the input-hidden weights and hidden biases, ELM can directly obtain the hidden-output weights by calculating the Moore–Penrose generalized inverse of the hidden output matrix. ELM has better generalization ability and a faster learning rate than the gradient-based method, promoting its widespread application in practice. For instance, the authors of [21] used wavelet neural networks and ELM to forecast monthly discharge; the authors of [22] used ELM and quantum-behaved particle swarm optimization to predict daily runoff; the author of [23] proposed a robust ELM method and then verified its feasibility in indoor positioning; the authors of [24] developed a weighted ELM for imbalance learning; the authors of [25] used a base-flow separation, binary-coded swarm optimization, and ELM for neural network river forecasting; the authors of [26] used binary-coded particle swarm optimization and ELM to develop a data-driven input variable selection method for rainfall-runoff modeling; and the authors of [27] developed a hybrid ELM model for multi-step short-term wind speed forecasting. Thus, existing simulations have fully demonstrated that ELM is a promising tool to address complicated regression and classification problems.

SVM is a supervised machine learning method based on the Vapnik–Chervonenkis dimension theory and the structural risk minimization principle. It was proven in theory that SVM is able to guarantee global optimization for regression or classification problems. Recently, growing attention has been paid to the SVM method because it can produce satisfactory results in many engineering problems. For instance, the authors of [28] verified the predictability of monthly streamflow using SVM coupled with discrete wavelet transform and empirical mode decomposition; the authors of [29] used support vector machines for long-term discharge prediction; the authors of [30] developed a multi-objective ecological reservoir operation model based on an improved SVM model in which meteorological and hydrological data are used as the input information; the authors of [31] proposed an artificial bee colony method optimized SVM for system reliability analysis of slopes; and the authors of [32] used a modified SVM model based on the ensemble empirical mode decomposition to forecast the annual rainfall-runoff. Thus, various reports have fully proven the feasibility of SVM in solving engineering problems.

Although a variety of reports on reservoir operation rule derivation has been published over the past few decades, there are few publications evaluating the performances of different methods in deriving the hydropower reservoir operation rule thus far. Hence, in order to fill this gap, the primary goal of this paper is to compare the performances of several well-known methods in deriving the reservoir operation rule, including the conventional scheduling graph method (SGM), MLR, ANN, ELM, and SVM. The Hongjiadu reservoir located in southwest China is chosen as the study area, and the effectiveness of the five methods is compared using different indexes. The simulations show that the three artificial intelligence methods (ANN, ELM, and SVM) are promising tools in deriving the reservoir operation rule when compared with SGM and MLR.

The rest of this paper is organized as follows. The deterministic hydropower reservoir operation is given in Section 2. Section 3 briefly presents the theories of the methods adopted in this study. The quantitative indexes, experimental results, and discussions are presented in Section 4, and the conclusions are given in Section 5.

## 2. Deterministic Hydropower Reservoir Operation to Produce Dataset

#### 2.1. Objective Function

The scheduling process obtained from the deterministic optimization model is used to evaluate the performance of the derived reservoir operation rule. Considering that power generation is an important indicator to compare the management levels of different hydropower enterprises in a market environment, the objective function is often chosen to maximize the multi-year average electric energy production of the target hydropower reservoir [33], which can be expressed as follows:
where $E$ is the value of the objective function; $N$ is the number of years; $M$ is the number of periods per year (month here, i.e., $M=12$); ${P}_{i,j}$ is the reservoir’s power output at the jth period of the ith year; ${t}_{i,j}$ is the total hours at the jth period of the ith year; and $g\left({P}_{i,j}\right)$ denotes the penalty function, which can be described as below:
where ${P}_{i,j}^{\mathrm{min}}$ is the preset minimum power output, and $a$ and $b$ are two positive coefficients.

$$\mathrm{max}E={\displaystyle \sum _{i=1}^{N}{\displaystyle \sum _{j=1}^{M}\left[{P}_{i,j}{t}_{i,j}-g\left({P}_{i,j}\right)\right]}}$$

$$g\left({P}_{i,j}\right)=\{\begin{array}{ll}a{\left[{P}_{i,j}-{P}_{i,j}^{\mathrm{min}}\right]}^{b}& \mathrm{if}\text{}{P}_{i,j}<{P}_{i,j}^{\mathrm{min}}\\ 0& \mathrm{otherwise}\end{array}$$
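
For illustration, the penalty term can be evaluated directly; the coefficient values below are arbitrary assumptions rather than the paper's calibrated values, and an even exponent $b$ keeps the bracketed term nonnegative:

```python
def penalty(P, P_min, a=1.0, b=2.0):
    """Penalty g(P): zero once the output meets the minimum, else a polynomial term.
    The coefficients a and b are placeholders, not values from the paper."""
    return a * (P - P_min) ** b if P < P_min else 0.0

# A 100 MW output against a 150 MW minimum incurs a quadratic penalty.
shortfall_cost = penalty(100.0, 150.0)  # 1.0 * (100 - 150)**2 = 2500.0
ok_cost = penalty(200.0, 150.0)         # 0.0
```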

#### 2.2. Operation Constraints

To ensure that all the variables vary in the feasible zones, the following equality or inequality constraints are considered in the modeling process [34,35,36], including the water balance equation, storage volume limits, water spillage limits, turbine discharge, and power output limits.
where ${V}_{i,j}$, ${I}_{i,j}$, ${q}_{i,j}$, and ${s}_{i,j}$ are the storage volume, local inflow, turbine discharge, and abandoned spillage at the jth period of the ith year, respectively. ${V}_{i,j}^{\mathrm{max}}$ and ${V}_{i,j}^{\mathrm{min}}$ are the maximum and minimum storage volume at the jth period of the ith year, respectively. ${q}_{i,j}^{\mathrm{max}}$ and ${q}_{i,j}^{\mathrm{min}}$ are the maximum and minimum turbine discharge at the jth period of the ith year, respectively. ${s}_{i,j}^{\mathrm{max}}$ and ${s}_{i,j}^{\mathrm{min}}$ are the maximum and minimum water spillage at the jth period of the ith year, respectively. ${P}_{i,j}^{\mathrm{max}}$ and ${P}_{i,j}^{\mathrm{min}}$ are the maximum and minimum power output at the jth period of the ith year, respectively.

$${V}_{i,j}={V}_{i,j-1}+\left[{I}_{i,j}-\left({q}_{i,j}+{s}_{i,j}\right)\right]\cdot {t}_{i,j};\text{\hspace{1em}}i\in \left[1,N\right],j\in \left[1,M\right]$$

$${V}_{i,j}^{\mathrm{min}}\le {V}_{i,j}\le {V}_{i,j}^{\mathrm{max}};\text{\hspace{1em}}i\in \left[1,N\right],j\in \left[1,M\right]$$

$${q}_{i,j}^{\mathrm{min}}\le {q}_{i,j}\le {q}_{i,j}^{\mathrm{max}};\text{\hspace{1em}}i\in \left[1,N\right],j\in \left[1,M\right]$$

$${s}_{i,j}^{\mathrm{min}}\le {s}_{i,j}\le {s}_{i,j}^{\mathrm{max}};\text{\hspace{1em}}i\in \left[1,N\right],j\in \left[1,M\right]$$

$${P}_{i,j}^{\mathrm{min}}\le {P}_{i,j}\le {P}_{i,j}^{\mathrm{max}};\text{\hspace{1em}}i\in \left[1,N\right],j\in \left[1,M\right]$$

#### 2.3. Optimization Methods

When the long-term inflow series, initial storage, and terminal storage are known, the above optimization model will become a deterministic operation problem that can be easily resolved by the famous dynamic programming method [37,38]. Then, the corresponding dynamic programming recursive equation is given as below:
where ${e}_{i,j}\left({V}_{i,j},{V}_{i,j-1}\right)$ is the objective function value at the jth period of the ith year and ${E}_{i,j}^{\ast}\left({V}_{i,j}\right)$ denotes the optimal cumulative return from the jth period of the ith year to the first period.

$${E}_{i,j}^{\ast}\left({V}_{i,j}\right)=\mathrm{max}\left\{{e}_{i,j}\left({V}_{i,j},{V}_{i,j-1}\right)+{E}_{i,j-1}^{\ast}\left({V}_{i,j-1}\right)\right\}$$
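
The recursion above can be sketched over a discretized storage grid. The following minimal forward pass uses a toy reward in place of ${e}_{i,j}$; the grid, reward function, and units are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def reservoir_dp(inflows, volumes, reward, v0_idx):
    """Forward dynamic programming over a discretized storage grid.

    inflows : sequence of per-period inflows (toy units)
    volumes : 1-D array of feasible storage levels (the state grid)
    reward  : callable reward(v_prev, v_next, inflow) -> period benefit e_{i,j}
    v0_idx  : index of the known initial storage on the grid
    Returns E*, the best cumulative return for each terminal storage state.
    """
    n_states = len(volumes)
    E = np.full(n_states, -np.inf)   # E[k]: best return ending at state k
    E[v0_idx] = 0.0                  # known initial storage
    for inflow in inflows:
        E_next = np.full(n_states, -np.inf)
        for k_next in range(n_states):
            for k_prev in range(n_states):
                if np.isinf(E[k_prev]):
                    continue  # unreachable predecessor state
                cand = E[k_prev] + reward(volumes[k_prev], volumes[k_next], inflow)
                if cand > E_next[k_next]:
                    E_next[k_next] = cand
        E = E_next
    return E

# Toy reward: benefit proportional to release = v_prev + inflow - v_next.
def toy_reward(v_prev, v_next, inflow):
    release = v_prev + inflow - v_next
    return release if release >= 0 else -1e9  # penalize infeasible transitions

grid = np.array([0.0, 1.0, 2.0])
best = reservoir_dp([1.0, 1.0], grid, toy_reward, v0_idx=2)
# Emptying the reservoir releases all stored water plus both inflows: best[0] = 4.0
```

A production model would also enforce the discharge, spillage, and output bounds of Section 2.2 inside the transition, and recover the optimal trajectory by backtracking stored decisions.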

## 3. Brief Introductions of the Adopted Methods

Brief information on the four adopted methods is given in this section, including multiple linear regression (MLR), artificial neural network (ANN), extreme learning machine (ELM), and support vector machine (SVM). Because of its simple principle and easy implementation, MLR is seen as one of the most classical methods in the reservoir operation rule field; with strong generalization and self-learning abilities, ANN is chosen to derive the reservoir operation rule; with a faster training rate and better regression ability, ELM is also used for reservoir operation rule derivation; and because of the merits of fewer computation parameters and theoretical completeness, SVM is also an alternative tool for deriving the reservoir operation rule. Besides, numerous mature software packages have been developed to implement these methods, which can markedly reduce the workload and improve execution efficiency.

#### 3.1. Multiple Linear Regression (MLR)

Multiple linear regression (MLR) is a classical statistical tool developed to formulate the complex input–output relationship [39]. The key goal of MLR is to find an approximate linear function between a set of independent variables and the dependent variable. Without loss of generality, the regression line in MLR can be expressed as follows:
where $y$ is the dependent variable, ${x}_{i}$ is the ith independent variable, ${\beta}_{i}$ is the regression coefficient of ${x}_{i}$, $k$ is the number of independent variables, and $\epsilon $ is the residual error term.

$$y={\beta}_{0}+{\beta}_{1}{x}_{1}+\cdots +{\beta}_{i}{x}_{i}+\cdots +{\beta}_{k}{x}_{k}+\epsilon $$

Then, the above equation for a set of samples can be rewritten in a compact matrix form, which can be described as below:
where
where $n$ is the number of samples, ${x}_{m,i}$ is the value of the ith independent variable in the mth sample, and ${\epsilon}_{i}$ is the residual error of the ith sample.

$$\mathit{Y}=\mathit{X}\mathit{\beta}+\mathit{\epsilon}$$

$$\mathit{Y}=\left[\begin{array}{c}{y}_{1}\\ {y}_{2}\\ \vdots \\ {y}_{n}\end{array}\right],\text{}\mathit{\epsilon}=\left[\begin{array}{c}{\epsilon}_{1}\\ {\epsilon}_{2}\\ \vdots \\ {\epsilon}_{n}\end{array}\right],\text{}\mathit{\beta}=\left[\begin{array}{c}{\beta}_{0}\\ {\beta}_{1}\\ \vdots \\ {\beta}_{k}\end{array}\right]\text{}\mathrm{and}\text{}\mathit{X}={\left[\begin{array}{cccc}1& {x}_{1,1}& \cdots & {x}_{1,k}\\ 1& {x}_{2,1}& \cdots & {x}_{2,k}\\ \vdots & \vdots & \ddots & \vdots \\ 1& {x}_{n,1}& \cdots & {x}_{n,k}\end{array}\right]}_{n\times \left(k+1\right)}$$

Based on classical matrix operation theory, the standard least-squares method can be used to calculate the coefficient vector $\mathit{\beta}$ associated with the MLR model, which is described as below:

$$\mathit{\beta}={\left({\mathit{X}}^{\mathrm{T}}\mathit{X}\right)}^{-1}{\mathit{X}}^{\mathrm{T}}\mathit{Y}$$

In such a way, the coefficient vector $\mathit{\beta}$ is known and the obtained MLR model can be adopted to predict the dependent variable associated with a new input vector.
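
As a minimal sketch of this least-squares step (the sample data and "true" coefficients below are fabricated purely for illustration):

```python
import numpy as np

# Toy samples: two independent variables (e.g. initial level, inflow) -> discharge.
rng = np.random.default_rng(0)
X_raw = rng.uniform(0.0, 1.0, size=(50, 2))
true_beta = np.array([0.5, 2.0, -1.0])       # [beta_0, beta_1, beta_2], made up
y = true_beta[0] + X_raw @ true_beta[1:]     # noiseless, so recovery is exact

# Design matrix X with a leading column of ones for the intercept beta_0.
X = np.hstack([np.ones((X_raw.shape[0], 1)), X_raw])

# beta = (X^T X)^{-1} X^T Y; lstsq is the numerically safer equivalent.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With noisy data, `beta` would instead be the least-squares estimate minimizing the residual sum of squares.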

#### 3.2. Artificial Neural Network (ANN)

ANNs have been widely used to alleviate the shortcomings of conventional algorithms in dealing with complex problems. Without knowing an accurate mathematical description of the underlying process to be addressed, ANNs can learn hidden knowledge from the assigned data samples by establishing an input–output mapping for simulations. So far, many different types of ANN variants have appeared in the previous literature [40]. Here, the feed-forward network based on the back-propagation training method is the choice of this paper. The sketch map of the feed-forward ANN model is drawn in Figure 1. In the feed-forward ANN, there are often three kinds of layers composed of multiple interconnected neurons: the input layer receiving the external signal, the hidden layer or layers processing data in an orderly way, and the output layer exporting the predictive result.

Two key procedures are involved in the training process of the feed-forward ANN: the first is the feed-forward procedure, in which information is delivered from the input layer to the output layer via the hidden layers; the other is the back-propagation procedure, in which the derivatives of the objective function with respect to the weights are propagated back through the nodes of the network, meaning that the weights and biases of all the nodes are dynamically adjusted based on the error between the simulated values of the network and the target outputs. For any node in a layer, the transfer function is applied to the accumulated result obtained as the inner product of the input vector and the weight vector, as expressed in Equation (13). Then, the result is delivered to the next layer. In addition, the neurons in one layer are typically linked with all the neurons in the next layer, whereas connections between neurons in the same layer do not exist.
where $y$ is the output of the node; $f$ is the transfer function of the node; $b$ is the bias value of the node; and $\mathit{w}$ and $\mathit{x}$ denote the input vector and weight vector of the node, respectively.

$$y=f\left\{{\displaystyle \sum \mathit{w}\cdot \mathit{x}}+b\right\}$$
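
A single node of Equation (13) with a sigmoid transfer function can be written out directly; the weights, inputs, and bias below are arbitrary illustrative numbers:

```python
import numpy as np

def sigmoid(z):
    """Logistic transfer function f used in the hidden nodes."""
    return 1.0 / (1.0 + np.exp(-z))

def node_output(w, x, b):
    """Equation (13): inner product of weight and input vectors, plus bias, through f."""
    return sigmoid(np.dot(w, x) + b)

# Arbitrary example: w.x + b = 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1
y = node_output(np.array([0.5, -0.25]), np.array([1.0, 2.0]), b=0.1)
# sigmoid(0.1) ~ 0.52498
```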

#### 3.3. Extreme Learning Machine (ELM)

Extreme learning machine (ELM) is an emerging optimization technique developed to train the single-hidden layer feed-forward neural networks (SLFNs) [41]. In ELM, after randomly generating the input weights and hidden biases in the preset range, the hidden-output weights can be obtained via the matrix multiplication of the generalized inverse of hidden output matrix and the targeted output matrix. For a set of training samples $\left\{\left({\mathit{x}}_{t},{\mathit{y}}_{t}\right),{\mathit{x}}_{t}\in {\mathit{R}}^{n},{\mathit{y}}_{t}\in {\mathit{R}}^{m},t=1,2,\cdots ,N\right\}$, the hidden outputs of the ELM model can be expressed as below:
where ${\mathit{\alpha}}_{i}\in {\mathit{R}}^{n}$ is the weight vector linking the input layer and the ith hidden node, ${\mathit{\beta}}_{i}\in {\mathit{R}}^{m}$ is the weight vector linking the ith hidden node and the output layer, ${b}_{i}\in \mathit{R}$ is the bias value of the ith hidden node, $g(\xb7)$ is the nonlinear activation function of the hidden node, L is the number of neurons in the hidden layer, and ${\mathit{O}}_{t}\in {\mathit{R}}^{m}$ is the simulated output vector of the neural network.

$${f}_{t}={\displaystyle \sum _{i=1}^{L}{\mathit{\beta}}_{i}g\left({\mathit{\alpha}}_{i}\cdot {\mathit{x}}_{t}+{b}_{i}\right)}={\mathit{O}}_{t},t=1,2,\cdots ,N$$

Then, the above equation can be rewritten as follows:
where
where $\mathit{H}$ denotes the output matrix of the hidden layer.

$$\mathit{H}\mathit{\beta}=\mathit{T}$$

$$\mathit{H}=\left[\begin{array}{c}\mathit{h}\left({\mathit{x}}_{1}\right)\\ \vdots \\ \mathit{h}\left({\mathit{x}}_{N}\right)\end{array}\right]={\left[\begin{array}{ccc}g\left({\mathit{a}}_{1}\cdot {\mathit{x}}_{1}+{b}_{1}\right)& \cdots & g\left({\mathit{a}}_{L}\cdot {\mathit{x}}_{1}+{b}_{L}\right)\\ \vdots & \cdots & \vdots \\ g\left({\mathit{a}}_{1}\cdot {\mathit{x}}_{N}+{b}_{1}\right)& \cdots & g\left({\mathit{a}}_{L}\cdot {\mathit{x}}_{N}+{b}_{L}\right)\end{array}\right]}_{N\times L}$$

$$\mathit{\beta}={\left[\begin{array}{c}{\mathit{\beta}}_{1}^{\mathrm{T}}\\ \vdots \\ {\mathit{\beta}}_{L}^{\mathrm{T}}\end{array}\right]}_{L\times m}\text{}\mathrm{and}\text{}\mathit{T}={\left[\begin{array}{c}{\mathit{y}}_{1}^{\mathrm{T}}\\ \vdots \\ {\mathit{y}}_{N}^{\mathrm{T}}\end{array}\right]}_{N\times m}$$

The optimization objective of ELM is to find appropriate parameters such that ${\sum }_{t=1}^{N}\Vert {\mathit{O}}_{t}-{\mathit{y}}_{t}\Vert =0$ holds. Then, the coefficient matrix $\mathit{\beta}$ can be obtained by analytically determining the least-squares solution of the above-mentioned linear system, $\underset{\mathit{\beta}}{\mathrm{min}}\Vert \mathit{H}\mathit{\beta}-\mathit{T}\Vert $, and the special solution can be expressed as follows:
where ${\mathit{H}}^{\dagger}$ denotes the Moore–Penrose generalized inverse of the hidden layer output matrix.

$$\mathit{\beta}={\mathit{H}}^{\dagger}\mathit{T}$$

Then, the learning procedures for the ELM method are summarized as below:

- Step 1: Define the number of hidden neurons and the activation function of each neuron.
- Step 2: Randomly generate the input-hidden weights as well as the hidden biases.
- Step 3: Use all the data samples to obtain the output matrix of the hidden layer.
- Step 4: Calculate the hidden-output weights with a suitable method (e.g., the Moore–Penrose generalized inverse).
- Step 5: Use the trained ELM network to produce the simulated output for new samples.
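
The steps above can be sketched in a few lines of NumPy; the 1-D toy target and the random-weight range are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def elm_train(X, Y, n_hidden, seed=0):
    """ELM training: random input weights and biases (Steps 1-2), hidden output
    matrix H (Step 3), then one pseudo-inverse solve for beta (Step 4)."""
    rng = np.random.default_rng(seed)
    alpha = rng.uniform(-10.0, 10.0, size=(X.shape[1], n_hidden))  # input-hidden weights
    b = rng.uniform(-10.0, 10.0, size=n_hidden)                    # hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ alpha + b)))                     # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ Y                                   # beta = H^dagger T
    return alpha, b, beta

def elm_predict(X, alpha, b, beta):
    """Step 5: simulated output of the trained network for new samples."""
    H = 1.0 / (1.0 + np.exp(-(X @ alpha + b)))
    return H @ beta

# Toy check: fit a smooth 1-D signal with 20 random sigmoid nodes.
X = np.linspace(0.0, 1.0, 40).reshape(-1, 1)
Y = np.sin(2.0 * np.pi * X)
alpha, b, beta = elm_train(X, Y, n_hidden=20)
rmse = float(np.sqrt(np.mean((elm_predict(X, alpha, b, beta) - Y) ** 2)))
```

Note the single linear solve: unlike back-propagation, no iterative gradient descent is needed, which is the source of ELM's speed advantage.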

#### 3.4. Support Vector Machine (SVM)

As a famous technique based on statistical learning theory, the support vector machine (SVM) makes full use of the principle of structural risk minimization, rather than the classical empirical risk minimization used in conventional methods, to guarantee the generalization capability of the regression model [42]. Figure 2 shows the sketch map of the SVM model. Supposing that the ith sample has a D-dimensional input vector ${\mathit{x}}_{i}\in {\mathit{R}}^{D}$ and a scalar output ${\mathit{y}}_{i}\in \mathit{R}$, the following regression function can be employed to express the nonlinear input–output relationship in the SVM model:
where $f\left({\mathit{x}}_{i}\right)$ denotes the predicted value of the SVM model, $\phi \left({\mathit{x}}_{i}\right)$ is the nonlinear mapping function, and $\mathit{w}$ and $b$ are the parameters of the SVM model to be optimized.

$$f\left({\mathit{x}}_{i}\right)={\mathit{w}}^{\mathrm{T}}\phi \left({\mathit{x}}_{i}\right)+b,\text{}i=1,2,\cdots ,l$$

For the training dataset with l samples, the ν-SVM optimization model can be expressed as follows:
where $C$ is the parameter used to balance the empirical risk and the model complexity term ${\Vert \mathit{w}\Vert}^{2}$, and ${\xi}_{i}$ and ${\xi}_{i}^{\ast}$ are slack variables denoting the distances of the ith sample outside the $\epsilon $-tube.

$$\{\begin{array}{l}\mathrm{min}\text{}R\left(\mathit{w},\mathit{\xi},{\mathit{\xi}}^{\ast},\epsilon \right)=\frac{1}{2}{\Vert \mathit{w}\Vert}^{2}+C\left[v\epsilon +\frac{1}{l}{\displaystyle \sum _{i=1}^{l}\left({\xi}_{i}+{\xi}_{i}^{\ast}\right)}\right]\\ \mathrm{subject}\text{}\mathrm{to}:\text{\hspace{0.17em}}{y}_{i}-{\mathit{w}}^{\mathrm{T}}\phi \left({\mathit{x}}_{i}\right)-b\le \epsilon +{\xi}_{i}\text{\hspace{0.17em}}\\ \text{\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}}{\mathit{w}}^{\mathrm{T}}\phi \left({\mathit{x}}_{i}\right)+b-{y}_{i}\le \epsilon +{\xi}_{i}^{\ast}\\ \text{\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}}{\xi}_{i},{\xi}_{i}^{\ast},\epsilon \ge 0\end{array}$$

As a standard nonlinear constrained optimization problem, the above problem can be resolved by constructing the dual optimization problem based on the Lagrange multipliers technique:
where $K\left({\mathit{x}}_{i},{\mathit{x}}_{j}\right)$ is the kernel function satisfying Mercer's condition, and ${a}_{i}$ and ${a}_{i}^{*}$ are the nonnegative Lagrange multipliers.

$$\{\begin{array}{l}\mathrm{max}\text{}R\left({a}_{i},{a}_{i}^{*}\right)={\displaystyle \sum _{i=1}^{l}{y}_{i}\left({a}_{i}^{*}-{a}_{i}\right)}-\frac{1}{2}{\displaystyle \sum _{i=1}^{l}{\displaystyle \sum _{j=1}^{l}\left({a}_{i}-{a}_{i}^{*}\right)\left({a}_{j}-{a}_{j}^{*}\right)K\left({\mathit{x}}_{i},{\mathit{x}}_{j}\right)}}\\ \mathrm{subject}\text{}\mathrm{to}:\text{\hspace{0.17em}}{\displaystyle \sum _{i=1}^{l}\left({a}_{i}-{a}_{i}^{*}\right)}=0\\ \text{\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}}0\le {a}_{i},{a}_{i}^{*}\le C/l\\ \text{\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}}{\displaystyle \sum _{i=1}^{l}\left({a}_{i}+{a}_{i}^{*}\right)}\text{\hspace{0.17em}}\le C\cdot v\end{array}$$

After obtaining the best solution for the dual optimization problem, the parameters of the SVM model are known and the regression form for an unknown input vector $\mathit{x}$ is expressed as follows:

$$f\left(\mathit{x}\right)={\displaystyle \sum _{i=1}^{l}\left({a}_{i}^{\ast}-{a}_{i}\right)K\left({\mathit{x}}_{i},\mathit{x}\right)}+b$$
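
In practice this ν-SVR formulation with an RBF kernel is available off the shelf, e.g. in scikit-learn's `NuSVR`; the toy dataset and parameter values below are assumptions for illustration, not the paper's settings:

```python
import numpy as np
from sklearn.svm import NuSVR

# Fabricated smooth regression target with two input features.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(80, 2))
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.5 * X[:, 1]

# nu-SVR with a radial basis function kernel, as in the paper's SVM setup;
# nu, C, and gamma here are illustrative, not tuned values.
model = NuSVR(kernel="rbf", nu=0.5, C=10.0, gamma="scale")
model.fit(X, y)
pred = model.predict(X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

In the paper's setting, `X` would hold the rule inputs (e.g. initial water level and inflow) and `y` the optimal discharge produced by dynamic programming.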

## 4. Experimental Results

#### 4.1. Study Area and Dataset

Here, the Hongjiadu reservoir located on the mainstream of the Wu River in southwest China is chosen as the study site. This reservoir has a total drainage area of 9900 km^{2} and an average annual runoff of 4.89 billion m^{3}. The dead water level is 1076 m and the dead storage is 1.14 billion m^{3}; the normal water level is 1140 m and the corresponding storage volume is 4.5 billion m^{3}. In Hongjiadu, the flood control level is 1138 m from 1 June to 1 September, while its regulation storage is about 3.4 billion m^{3}. Obviously, the active-storage volume of the Hongjiadu reservoir is rather large in comparison with its annual inflow volume, meaning it plays a large role in determining the efficiencies to be achieved by any operation rules. Besides, the Hongjiadu reservoir has three mixed-flow turbine generating units of 200 MW each, and its total installed capacity is 600 MW. Under normal circumstances, almost all of the flow of Hongjiadu passes through the hydropower turbines. As a leading carry-over storage reservoir on the trunk stream of the Wu River, the Hongjiadu reservoir has provided comprehensive benefits promoting the healthy and orderly development of Guizhou Province since being put into operation, including power generation, ecological protection, water supply, flood control, and environment governance. In practice, various scheduling purposes can be well addressed in the derived operating rule by setting the necessary constraints on some variables, like water levels, power outputs, or discharge rates [43,44,45,46].

The actual monthly streamflow data from January 1952 to December 2015 are collected from the watershed management organization of the Wu River. Then, dynamic programming is employed to calculate the deterministic optimization results for the Hongjiadu reservoir, and the minimum power output is set as 150 MW. The optimal results (water level, inflow, and outflow) are drawn in Figure 3. For the optimized scheduling results, the first 50 years' data are used to train the model, while those of the last 13 years are employed for testing.
In addition, for the artificial intelligence algorithms (ANN, ELM, and SVM), numerical problems are often unavoidable if the smaller attribute values are dominated by the larger ones. In order to effectively avoid numerical difficulties in the modeling process, the normalization in Equation (23) is adopted to scale all the attribute values to the range of 0 to 1. All the results are obtained on a desktop computer with the Windows 7 operating system, an Intel Core i7-3770 processor, and 4 GB of random access memory (RAM).
where ${x}_{i}$ and ${\tilde{x}}_{i}$ denote the original and normalized value of the target factor, respectively.

$${\tilde{x}}_{i}=\frac{{x}_{i}-\underset{1\le i\le n}{\mathrm{min}}\left\{{x}_{i}\right\}}{\underset{1\le i\le n}{\mathrm{max}}\left\{{x}_{i}\right\}-\underset{1\le i\le n}{\mathrm{min}}\left\{{x}_{i}\right\}}$$
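
Equation (23) is the standard min–max scaling; a minimal sketch with made-up attribute values:

```python
import numpy as np

def min_max_scale(x):
    """Equation (23): scale a 1-D attribute series to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Illustrative values only; the minimum maps to 0 and the maximum to 1.
scaled = min_max_scale([150.0, 300.0, 600.0])
```

For a fair test, the minimum and maximum from the training period should also be reused to scale the testing samples.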

It should be mentioned that actual data, rather than runoff predictions, are used in deriving the candidate operation rules, in which the actual monthly inflow is an important input component and the monthly outflow is the key decision variable. When the obtained operation rule is used for production guidance, the future monthly runoff, estimated from the real-time inflow rates available on a daily basis for the past months, is used to determine the flows through the turbines and the abandoned flows through the spillway.

#### 4.2. Performance Criterion

Here, two quantitative indicators are used to test the feasibility of the different methods: average power generation (APG) and generation guarantee rate (GGR). APG reflects the simulated generation benefit of the target method in the long run, while GGR measures the degree of assurance that the simulated power output is larger than the preset minimum. Generally, a method with larger values of the two indexes has better performance. The definitions of the two indexes are given as below:
where ${\tilde{P}}_{i,j}$ is the simulated power output of the target method at the jth period of the ith year, and ${c}_{i,j}$ is the intermediate variable.

$$APG=\frac{1}{N}{\displaystyle \sum _{i=1}^{N}{\displaystyle \sum _{j=1}^{M}{\tilde{P}}_{i,j}{t}_{i,j}}}$$

$$GGR=\frac{1}{N\times M}{\displaystyle \sum _{i=1}^{N}{\displaystyle \sum _{j=1}^{M}{c}_{i,j}}},{c}_{i,j}=\{\begin{array}{l}1\text{}\mathrm{if}\left({\tilde{P}}_{i,j}\ge {P}_{i,j}^{\mathrm{min}}\right)\\ 0\text{}\mathrm{otherwise}\end{array}$$
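
Both indexes can be computed directly from a simulated output matrix; the two-year, two-period numbers below are toy values for illustration:

```python
import numpy as np

def apg_ggr(P_sim, t, P_min):
    """Average power generation and generation guarantee rate.

    P_sim : (N, M) simulated power outputs
    t     : (N, M) hours per period
    P_min : scalar (or (N, M) array) minimum power output
    """
    apg = (P_sim * t).sum() / P_sim.shape[0]   # mean annual energy, Eq. (24)
    ggr = (P_sim >= P_min).mean()              # share of periods meeting P_min, Eq. (25)
    return apg, ggr

P = np.array([[200.0, 100.0], [300.0, 250.0]])  # toy: 2 years x 2 periods, MW
hours = np.full((2, 2), 730.0)                   # roughly one month of hours
apg, ggr = apg_ggr(P, hours, P_min=150.0)
# One of four periods falls below 150 MW, so ggr = 0.75
```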

#### 4.3. Model Development

#### 4.3.1. MLR Model Development

Because of its simplicity and easy implementation, the linear operation rule is used for the purpose of comparison. The total discharge is chosen as the dependent variable, while the initial water level and inflow per period are chosen as the two independent variables. The linear operation rule for Hongjiadu reservoir per month is expressed in Equation (26), and its parameters are obtained by the MLR method described in Section 3.1. Table 1 shows the obtained coefficients for the linear operation rule per month. It can be observed that the three coefficients differ considerably from month to month, demonstrating the complexity of reservoir operation.

$${O}_{t}=a+b\times {Z}_{t-1}+c\times {I}_{t},\quad t=1,2,\cdots ,12$$

where ${O}_{t}$ is the total discharge in the $t$th month; ${I}_{t}$ is the total inflow in the $t$th month; ${Z}_{t-1}$ is the initial water level of the $t$th month; and $a$, $b$, and $c$ are the three regression parameters.
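
A least-squares fit of the monthly linear rule in Equation (26) can be sketched as follows; the data below are synthetic, and the helper name `fit_monthly_rule` is our own, not from the paper:

```python
import numpy as np

def fit_monthly_rule(z_prev, inflow, outflow):
    """Least-squares fit of O_t = a + b*Z_{t-1} + c*I_t for one calendar month."""
    A = np.column_stack([np.ones_like(z_prev), z_prev, inflow])
    coef, *_ = np.linalg.lstsq(A, outflow, rcond=None)
    return coef  # [a, b, c]

# Hypothetical samples for one month: initial level (m), inflow/outflow (m^3/s)
rng = np.random.default_rng(1)
z = rng.uniform(1076, 1140, 50)
q_in = rng.uniform(50, 400, 50)
q_out = 200.0 - 0.1 * z + 0.8 * q_in + rng.normal(0, 1.0, 50)  # synthetic "truth"
a, b, c = fit_monthly_rule(z, q_in, q_out)
print(round(b, 2), round(c, 2))  # close to the true slopes -0.1 and 0.8
```

Fitting each of the 12 months separately produces a coefficient table of the same shape as Table 1.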

#### 4.3.2. ANN Model Development

Here, a three-layer ANN model trained by the back-propagation method is used to derive the operation rule of Hongjiadu reservoir. All the hidden nodes use the sigmoid activation function, while the linear function is used in the output layer. Given that the number of hidden nodes has an important effect on the performance of the ANN model, a trial-and-error strategy is used to choose the best network structure, and the training process is terminated when the root-mean-square error (RMSE) of the testing samples reaches its minimum. Figure 4 shows the performance on the testing dataset as the number of hidden nodes varies from 3 to 18. The model performance is clearly affected by the number of hidden neurons, and the best testing performance is achieved with seven nodes in the hidden layer. Thus, the number of hidden nodes is set to seven for Hongjiadu reservoir.
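
The trial-and-error selection of the hidden-layer size can be sketched with a minimal NumPy back-propagation network. The data are synthetic, the candidate sizes are a subset of the 3-18 range tested in the paper, and the training settings (learning rate, epochs) are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_ann(X, y, n_hidden, lr=0.1, epochs=5000, seed=0):
    """Train a 3-layer net (sigmoid hidden, linear output) by batch back-propagation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = 0.0
    n = len(y)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                     # hidden-layer outputs
        err = (H @ W2).ravel() + b2 - y              # prediction error
        dH = (err[:, None] * W2.T) * H * (1.0 - H)   # back-propagated signal
        W2 -= lr * H.T @ err[:, None] / n; b2 -= lr * err.mean()
        W1 -= lr * X.T @ dH / n;          b1 -= lr * dH.mean(axis=0)
    return lambda X_new: (sigmoid(X_new @ W1 + b1) @ W2).ravel() + b2

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

# Synthetic nonlinear rule; in the paper the (normalized) inputs are level and inflow
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

# Trial-and-error selection over candidate hidden-layer sizes
scores = {h: rmse(y_te, train_ann(X_tr, y_tr, h)(X_te)) for h in (3, 7, 12)}
best = min(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The size with the smallest testing RMSE is retained, mirroring the selection shown in Figure 4.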

#### 4.3.3. ELM Model Development

Similar to the above ANN model, the sigmoid and linear activation functions are adopted in the hidden layer and output layer of the ELM model, respectively. The number of hidden nodes is set to twice the number of input nodes, and quantum-behaved particle swarm optimization (QPSO) [16,22] is employed to search for appropriate network parameters (the input-hidden weights and hidden biases). The numbers of individuals and iterations in QPSO are both set to 100, and the RMSE value is used to compare candidate parameter sets. Figure 5 illustrates the simulation results of the ELM model for Hongjiadu reservoir in 10 runs. The ELM model in the fourth run has the best performance in both generation guarantee rate and average power generation; thus, the corresponding model is chosen to derive the operation rule of Hongjiadu reservoir.
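
A minimal ELM sketch: the input-hidden weights and biases are drawn at random and the output weights are solved in closed form by least squares. Here simple random restarts stand in for the paper's QPSO search, and the data are synthetic:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_elm(X, y, n_hidden, seed):
    """ELM: random input weights/biases; output weights solved by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0, (X.shape[1], n_hidden))
    b = rng.normal(0.0, 1.0, n_hidden)
    H = sigmoid(X @ W + b)                     # random hidden-layer feature matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda X_new: sigmoid(X_new @ W + b) @ beta

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

# Synthetic data standing in for the (normalized) level/inflow -> outflow samples
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

n_hidden = 2 * X.shape[1]  # twice the number of input nodes, as in the paper
# Random restarts stand in for the QPSO search over input weights and biases
models = [train_elm(X_tr, y_tr, n_hidden, seed=s) for s in range(10)]
best = min(models, key=lambda m: rmse(y_te, m(X_te)))
print(round(rmse(y_te, best(X_te)), 3))
```

Because the output weights have a closed-form solution, each ELM candidate trains orders of magnitude faster than a back-propagated ANN, which is what makes a population-based search such as QPSO affordable.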

#### 4.3.4. SVM Model Development

In general, the kernel function plays an important role in the SVM performance. According to previous publications, the radial basis function (RBF) is one of the most commonly used kernel functions because of its better generalization ability compared with other kernels. Hence, the RBF in Equation (27) is chosen as the kernel function. Three parameters ($C,\gamma ,\epsilon $) need to be calibrated in the RBF-based SVM model: the penalty coefficient $C$ and the insensitive-loss width $\epsilon$ of the SVM, and the kernel parameter $\gamma$. In order to obtain satisfying performance, the above-mentioned QPSO method is used to optimize these parameters. Based on the simulation results, the optimal parameter combination ($C,\gamma ,\epsilon $) in the SVM model is set as (10.768, 0.456, 0.784) for operation rule derivation in Hongjiadu reservoir.

$$K\left({\mathit{x}}_{i},{\mathit{x}}_{j}\right)=\mathrm{exp}\left(-\gamma {\Vert {\mathit{x}}_{i}-{\mathit{x}}_{j}\Vert}^{2}\right)$$

where $\gamma$ is the kernel parameter to be optimized.
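
The RBF kernel of Equation (27) can be computed for a batch of samples as follows; the value γ = 0.456 is taken from the reported parameter combination, assuming the tuple is ordered (C, γ, ε):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2), Equation (27)."""
    # Squared pairwise distances via the expansion ||x||^2 + ||y||^2 - 2 x.y
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(d2, 0.0))  # clamp tiny negative round-off

X = np.array([[0.0, 0.0], [1.0, 0.0]])
print(rbf_kernel(X, X, gamma=0.456))
# diagonal entries are 1; the off-diagonal entry is exp(-0.456) ~ 0.634
```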

#### 4.4. Comparison and Discussion

For the purpose of comparison, the traditional scheduling graph method (SGM) was chosen as the benchmark. Table 2 and Figure 6 show the detailed results of the different approaches for Hongjiadu reservoir. It can be clearly observed that the dynamic programming method obtains the best scheduling results in the deterministic case; the four simulation-based methods (MLR, ANN, ELM, and SVM) provide suboptimal results compared with dynamic programming, but outperform SGM with respect to the two statistical measures. On the other hand, compared with SGM, MLR, ANN, and SVM, the ELM method generates the best solution, with approximately 9.00%, 7.57%, 3.03%, and 1.73% improvements in APG, respectively, while the generation guarantee rate is improved by about 8.01%, 4.80%, 1.87%, and 0.27%, respectively. Hence, it can be concluded that the three AI-based methods provide better results than the traditional SGM and MLR methods, and the operation rule derived by ELM has the best performance in the long-term simulations.

Figure 7 shows the average power output obtained by the different methods for Hongjiadu reservoir, and Figure 8 shows the corresponding water levels in the testing samples. It can be found that the dynamic programming (DP) method provides the most power generation in the wet season but the least in the dry season. This indicates that, in the ideal scheduling process, Hongjiadu reservoir reduces power generation in the dry season and then uses the abundant runoff to keep the reservoir operating at a high water level, enhancing the long-run efficiency of the hydroelectric generators. Besides, the SGM method tends to smooth the power output over time because it fails to raise the water level in the wet season, whereas the ELM method has a stronger capability than MLR, ANN, and SVM in mimicking the optimal scheduling process. Thus, the feasibility of the solutions obtained by the different methods is confirmed in this case.

Figure 9 presents the graphic models (outflow–inflow–water level) of the four algorithms for Hongjiadu reservoir in August. The following conclusions can be deduced: when the water level is fixed, there is a positive relationship between power output and inflow; when the inflow is fixed, the reservoir tends to increase the power output as the water level rises. On the other hand, although there are obvious differences among the graphic models, the gap in average annual power generation among the four methods is relatively small, demonstrating that rather different parameter combinations can yield nearly equivalent hydropower reservoir operation rules. Thus, operators should take the actual working condition of the hydropower reservoir into consideration when making the scheduling plan for production guidance.

From the above analysis, it can be clearly observed that dynamic programming has the best performance, while the three artificial intelligence algorithms (ANN, ELM, and SVM) provide better simulation results than SGM and MLR. The dynamic programming method divides the complicated multistage reservoir operation optimization problem into a series of relatively simple subproblems to be solved sequentially, and then seeks the global optimal solution in the discrete state space, providing the best scheduling results for simulation [47,48,49]. In general, for the reservoir operation rule per month, there is a strong nonlinearity between the independent variables (water level and inflow) and the dependent variable (outflow). The conventional MLR method based on the simulation-optimization strategy can only capture linear relationships, rather than the inherent nonlinearity of this problem, leading to a lower generation benefit of the hydropower reservoir. The SGM approach, based on historical data and engineering experience, cannot adequately account for the dynamic variation of reservoir runoff caused by climate change and human activities, reducing the overall operational efficiency of Hongjiadu reservoir. The three artificial intelligence algorithms make full use of mapping functions to project the training samples into a high-dimensional feature space, and then choose appropriate optimization strategies to find the solution that minimizes the total training error. As a result, the three artificial intelligence algorithms have some unique merits in comparison with the SGM and MLR methods, including self-learning ability (training network parameters to simulate the complex nonlinear input–output relationship), generalization ability (performing well on new data samples), and fault-tolerant ability (behaving well for a partially damaged system), producing better performances than the two traditional methods.
On the other hand, the ELM method obtains the best performance among all the methods used to derive the reservoir operation rule. The difference in the optimization principle adopted by each method is the key reason why ELM outperforms both SVM and ANN. Specifically, with strong generalization ability for a variety of feature mappings, ELM is able to approximate any continuous function by determining the global optimum over the training samples [50], while the QPSO optimizer effectively enhances the network compactness by carefully choosing the necessary parameters [51]; the traditional gradient-based ANN training method tends to fall into local optima and requires a relatively long learning time; and the SVM method can only provide a suboptimal solution, with higher computational complexity and more restrictive constraints. In addition, it should be pointed out that the relative performances of the three artificial intelligence methods may change with the problem characteristics or research objects. To sum up, it can be concluded that, in the field of reservoir operation rule derivation, future research efforts can be directed to those artificial intelligence methods with promising simulation ability.

## 5. Conclusions

This study investigates the performances of four effective methods in deriving the operation rule of a hydropower reservoir: MLR, ANN, ELM, and SVM. For the purpose of comparison, the conventional SGM approach was chosen as the benchmark. The historical streamflow data of Hongjiadu reservoir, together with the corresponding scheduling results optimized by the dynamic programming method, are adopted to develop the models. Two indexes are adopted to evaluate the performance of the different methods: average power generation and generation guarantee rate. The results indicate that the three artificial intelligence algorithms (ANN, ELM, and SVM) provide better simulation performances than SGM and MLR, showing that artificial intelligence methods are promising tools for deriving the operation rule of a hydropower reservoir. It should be noted that the performances of ANN, ELM, and SVM vary with the parameter combination, so it is of great importance to develop effective tools for choosing appropriate model parameters.

Besides, the amount of reservoir operation data will increase with the passage of time, which directly affects the major decision (the monthly outflow, i.e., the turbine discharge plus the abandoned spillage) and the core objective (such as generation benefit) involved in the operation rules. Thus, in practice, the reservoir operating rules can be updated on a monthly basis. On the other hand, in many parts of the world, climate change is creating non-stationary streamflow conditions. These show up as time trends in the annual flow of streams and as changes over time in the pattern of month-by-month contributions to the annual streamflow [52,53,54,55], which can significantly affect reservoir operation. Owing to time constraints, a check for time trends in the inflow data of the study region was not made, but this work is necessary for any future application of the methods presented here for operating-rule selection. Thus, in the future, we will deepen the research on the operation optimization of a hydropower reservoir under a changing environment.

## Author Contributions

All authors contributed extensively to the work presented in this paper. Z.-K.F. and W.-J.N. contributed to modeling and finalized the manuscripts. B.-F.F. and Y.-W.M. contributed to data analysis. C.-T.C. and J.-Z.Z. contributed to the literature review.

## Funding

This paper is supported by the National Key R&D Program of China (2017YFC0405406), National Natural Science Foundation of China (51709119), Natural Science Foundation of Hubei Province (2018CFB573), and Fundamental Research Funds for the Central Universities (HUST: 2017KFYXJJ193).

## Acknowledgments

The writers would like to express appreciation to both editors and reviewers for their valuable comments and suggestions.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

- Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Liao, S.L. Hydropower system operation optimization by discrete differential dynamic programming based on orthogonal experiment design. Energy
**2017**, 126, 720–732. [Google Scholar] [CrossRef] - Ming, B.; Chang, J.X.; Huang, Q.; Wang, Y.M.; Huang, S.Z. Optimal operation of Multi-Reservoir system Based-On cuckoo search algorithm. Water Resour. Manag.
**2015**, 29, 5671–5687. [Google Scholar] [CrossRef] - Madani, K. Game theory and water resources. J. Hydrol.
**2010**, 381, 225–238. [Google Scholar] [CrossRef] - Niu, W.J.; Feng, Z.K.; Cheng, C.T.; Wu, X.Y. A parallel multi-objective particle swarm optimization for cascade hydropower reservoir operation in southwest china. Appl. Soft Comput.
**2018**, 70, 562–575. [Google Scholar] [CrossRef] - Madani, K.; Lund, J.R.; Krone, R.B. Innovative modelling for Califomian high hydro. Int. Water Power Dam Constr.
**2012**, 64, 34–36. [Google Scholar] - Zhang, Y.; Jiang, Z.; Ji, C.; Sun, P. Contrastive analysis of three parallel modes in multi-dimensional dynamic programming and its application in cascade reservoirs operation. J. Hydrol.
**2015**, 529, 22–34. [Google Scholar] [CrossRef] - Li, X.; Wei, J.; Li, T.; Wang, G.; Yeh, W.W.G. A parallel dynamic programming algorithm for multi-reservoir system optimization. Adv. Water Resour.
**2014**, 67, 1–15. [Google Scholar] [CrossRef] - Liu, P.; Li, L.; Chen, G.; Rheinheimer, D.E. Parameter uncertainty analysis of reservoir operating rules based on implicit stochastic optimization. J. Hydrol.
**2014**, 514, 102–113. [Google Scholar] [CrossRef] - Liu, P.; Guo, S.; Xu, X.; Chen, J. Derivation of Aggregation-Based joint operating rule curves for cascade hydropower reservoirs. Water Resour. Manag.
**2011**, 25, 3177–3200. [Google Scholar] [CrossRef] - Ji, C.M.; Zhou, T.; Huang, H.T. Operating rules derivation of Jinsha reservoirs system with parameter calibrated support vector regression. Water Resour. Manag.
**2014**, 28, 2435–2451. [Google Scholar] [CrossRef] - Yang, G.; Guo, S.; Liu, P.; Li, L.; Liu, Z. Multiobjective cascade reservoir operation rules and uncertainty analysis based on PA-DDS algorithm. J. Water Res. Plan. Man.
**2017**, 143, 04017025. [Google Scholar] [CrossRef] - Wang, Y.; Guo, S.L.; Yang, G.; Hong, X.J.; Hu, T. Optimal early refill rules for Danjiangkou Reservoir. Water Sci. Eng.
**2014**, 7, 403–419. [Google Scholar] - Ma, C.; Lian, J.; Wang, J. Short-term optimal operation of Three-gorge and Gezhouba cascade hydropower stations in non-flood season with operation rules from data mining. Energ. Convers. Manag.
**2013**, 65, 616–627. [Google Scholar] [CrossRef] - Chau, K.W. Particle swarm optimization training algorithm for ANNs in stage prediction of Shing Mun River. J. Hydrol.
**2006**, 329, 363–367. [Google Scholar] [CrossRef][Green Version] - Wu, C.L.; Chau, K.W.; Li, Y.S. River stage prediction based on a distributed support vector regression. J. Hydrol.
**2008**, 358, 96–111. [Google Scholar] [CrossRef][Green Version] - Cheng, C.; Niu, W.; Feng, Z.; Shen, J.; Chau, K. Daily reservoir runoff forecasting method using artificial neural network based on quantum-behaved particle swarm optimization. Water
**2015**, 7, 4232–4246. [Google Scholar] [CrossRef] - Wang, T.; Yang, K.; Guo, Y. Application of artificial neural networks to forecasting ice conditions of the yellow river in the inner Mongolia reach. J. Hydrol. Eng.
**2008**, 13, 811–816. [Google Scholar] - Wang, W.C.; Chau, K.W.; Cheng, C.T.; Qiu, L. A comparison of performance of several artificial intelligence methods for forecasting monthly discharge time series. J. Hydrol.
**2009**, 374, 294–306. [Google Scholar] [CrossRef][Green Version] - Choudhury, P.; Roy, P. Forecasting concurrent flows in a river system using ANNs. J. Hydrol. Eng.
**2015**, 20, 06014012. [Google Scholar] [CrossRef] - Cheng, C.T.; Feng, Z.K.; Niu, W.J.; Liao, S.L. Heuristic methods for reservoir monthly inflow forecasting: A case study of Xinfengjiang Reservoir in Pearl river, China. Water
**2015**, 7, 4477–4495. [Google Scholar] [CrossRef] - Li, B.; Cheng, C. Monthly discharge forecasting using wavelet neural networks with extreme learning machine. Sci. China Technol. Sci.
**2014**, 57, 2441–2452. [Google Scholar] [CrossRef] - Niu, W.; Feng, Z.; Cheng, C.; Zhou, J. Forecasting daily runoff by extreme learning machine based on quantum-behaved particle swarm optimization. J. Hydrol. Eng.
**2018**, 23, 04018002. [Google Scholar] [CrossRef] - Lu, X.; Zou, H.; Zhou, H.; Xie, L.; Huang, G.B. Robust extreme learning machine with its application to indoor positioning. IEEE Trans. Cybern.
**2016**, 46, 194–205. [Google Scholar] [CrossRef] [PubMed] - Zong, W.; Huang, G.B.; Chen, Y. Weighted extreme learning machine for imbalance learning. Neurocomputing
**2013**, 101, 229–242. [Google Scholar] [CrossRef] - Taormina, R.; Chau, K.W.; Sivakumar, B. Neural network river forecasting through baseflow separation and binary-coded swarm optimization. J. Hydrol.
**2015**, 529, 1788–1797. [Google Scholar] [CrossRef] - Taormina, R.; Chau, K.W. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines. J. Hydrol.
**2015**, 529, 1617–1632. [Google Scholar] [CrossRef] - Li, C.; Xiao, Z.; Xia, X.; Zou, W.; Zhang, C. A hybrid model based on synchronous optimisation for multi-step short-term wind speed forecasting. Appl. Energy
**2018**, 215, 131–144. [Google Scholar] [CrossRef] - Zhu, S.; Zhou, J.; Ye, L.; Meng, C. Streamflow estimation by support vector machine coupled with different methods of time series decomposition in the upper reaches of Yangtze River, China. Environ. Earth Sci.
**2016**, 75, 531. [Google Scholar] [CrossRef] - Lin, J.Y.; Cheng, C.T.; Chau, K.W. Using support vector machines for long-term discharge prediction. Hydrol. Sci. J.
**2006**, 51, 599–612. [Google Scholar] [CrossRef][Green Version] - Yu, Y.; Wang, P.; Wang, C.; Qian, J.; Hou, J. Combined monthly inflow forecasting and multiobjective ecological reservoir operations model: Case study of the Three Gorges Reservoir. J. Water Res. Plan. Manag.
**2017**, 143, 05017004. [Google Scholar] [CrossRef] - Kang, F.; Li, J. Artificial bee colony algorithm optimized support vector regression for system reliability analysis of slopes. J. Comput. Civ. Eng.
**2016**, 30, 04015040. [Google Scholar] [CrossRef] - Wang, W.C.; Xu, D.M.; Chau, K.W.; Chen, S. Improved annual rainfall-runoff forecasting using PSO-SVM model based on EEMD. J. Hydroinform.
**2013**, 15, 1377–1390. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Lund, J.R. Optimizing hydropower reservoirs operation via an orthogonal progressive optimality algorithm. J. Water Resour. Plan. Manag.
**2018**, 144, 4018001. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Wu, X.Y. Peak operation of hydropower system with parallel technique and progressive optimality algorithm. Int. J. Electr. Power Energy Syst.
**2018**, 94, 267–275. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Wu, X.Y. Optimization of large-scale hydropower system peak operation with hybrid dynamic programming and domain knowledge. J. Clean. Prod.
**2018**, 171, 390–402. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Cheng, C.T. Optimizing electrical power production of hydropower system by uniform progressive optimality algorithm based on two-stage search mechanism and uniform design. J. Clean. Prod.
**2018**, 190, 432–442. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Wu, X.Y. Optimization of hydropower system operation by uniform dynamic programming for dimensionality reduction. Energy
**2017**, 134, 718–730. [Google Scholar] [CrossRef] - Chang, J.; Wang, X.; Li, Y.; Wang, Y.; Zhang, H. Hydropower plant operation rules optimization response to climate change. Energy
**2018**, 160, 886–897. [Google Scholar] [CrossRef] - Wang, S.; Huang, G.H.; He, L. Development of a clusterwise-linear-regression-based forecasting system for characterizing DNAPL dissolution behaviors in porous media. Sci. Total Environ.
**2012**, 433, 141–150. [Google Scholar] [CrossRef] - Chau, K.W.; Wu, C.L.; Li, Y.S. Comparison of several flood forecasting models in Yangtze River. J. Hydrol. Eng.
**2005**, 10, 485–491. [Google Scholar] [CrossRef] - Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing
**2006**, 70, 489–501. [Google Scholar] [CrossRef][Green Version] - Huang, S.Z.; Chang, J.X.; Huang, Q.; Chen, Y.T. Monthly streamflow prediction using modified EMD-based support vector machine. J. Hydrol.
**2014**, 511, 764–775. [Google Scholar] - Feng, Z.K.; Niu, W.J.; Cheng, C.T. Optimization of hydropower reservoirs operation balancing generation benefit and ecological requirement with parallel multi-objective genetic algorithm. Energy
**2018**, 153, 706–718. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Cheng, C.T. Optimal allocation of hydropower and hybrid electricity injected from inter-regional transmission lines among multiple receiving-end power grids in china. Energy
**2018**, 162, 444–452. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Wang, S.; Cheng, C.T.; Jiang, Z.Q.; Qin, H.; Liu, Y. Developing a successive linear programming model for head-sensitive hydropower system operation considering power shortage aspect. Energy
**2018**, 155, 252–261. [Google Scholar] [CrossRef] - Chang, J.; Meng, X.; Wang, Z.; Wang, X.; Huang, Q. Optimized cascade reservoir operation considering ice flood control and power generation. J. Hydrol.
**2014**, 519, 1042–1051. [Google Scholar] [CrossRef] - Niu, W.J.; Feng, Z.K.; Cheng, C.T. Optimization of variable-head hydropower system operation considering power shortage aspect with quadratic programming and successive approximation. Energy
**2018**, 143, 1020–1028. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Zhou, J.Z. Peak shaving operation of hydro-thermal-nuclear plants serving multiple power grids by linear programming. Energy
**2017**, 135, 210–219. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Zhou, J.Z.; Cheng, C.T.; Qin, H.; Jiang, Z.Q. Parallel multi-objective genetic algorithm for short-term economic environmental hydrothermal scheduling. Energies
**2017**, 10, 163. [Google Scholar] [CrossRef] - Huang, G.B.; Zhou, H.; Ding, X.; Zhang, R. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B Cybern.
**2012**, 42, 513–529. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Cheng, C.T. Multi-objective quantum-behaved particle swarm optimization for economic environmental hydrothermal energy system scheduling. Energy
**2017**, 131, 165–178. [Google Scholar] [CrossRef] - Chang, J.; Wang, Y.; Istanbulluoglu, E.; Bai, T.; Huang, Q.; Yang, D.; Huang, S. Impact of climate change and human activities on runoff in the Weihe River Basin, China. Quat. Int.
**2015**, 380–381, 169–179. [Google Scholar] [CrossRef] - Feng, Z.K.; Niu, W.J.; Zhou, J.Z.; Cheng, C.T.; Zhang, Y.C. Scheduling of short-term hydrothermal energy system by parallel multi-objective differential evolution. Appl. Soft Comput.
**2017**, 61, 58–71. [Google Scholar] [CrossRef] - Zimmer, C.A.; Heathcote, I.W.; Whiteley, H.R.; Schroter, H. Low-Impact-Development practices for stormwater: Implications for urban hydrology. Can. Water Resour. J.
**2007**, 32, 193–212. [Google Scholar] [CrossRef] - Abu-Zreig, M.; Rudra, R.P.; Lalonde, M.N.; Whiteley, H.R.; Kaushik, N.K. Experimental investigation of runoff reduction and sediment removal by vegetated filter strips. Hydrol. Process.
**2004**, 18, 2029–2037. [Google Scholar] [CrossRef]

**Figure 3.**Deterministic optimization results by dynamic programming for Hongjiadu reservoir in different periods (month).

**Figure 4.**Sensitivity of the number of hidden nodes in the ANN method for Hongjiadu reservoir. RMSE—root-mean-square error.

**Figure 5.**Simulation results of the extreme learning machine (ELM) model for Hongjiadu reservoir in 10 runs. GGR—generation guarantee rate; APG—average power generation.

**Figure 6.**Comparison of different methods for Hongjiadu reservoir. DP—dynamic programming; MLR—multiple linear regression; SGM—scheduling graph method.

**Figure 9.**Graphic models (outflow–inflow–water level) for Hongjiadu reservoir in August: (**a**) DP; (**b**) SVM; (**c**) ELM; (**d**) ANN.

**Table 1.**Obtained coefficients for the linear operation rule of Hongjiadu reservoir (odd-numbered months shown).

| Coefficient | Month 1 | Month 3 | Month 5 | Month 7 | Month 9 | Month 11 |
|---|---|---|---|---|---|---|
| a | 740.9 | 966.6 | −205.9 | −7001.2 | 2698.6 | 6297.8 |
| b | −0.54 | −0.73 | 0.30 | 6.30 | −2.34 | −5.49 |
| c | −0.04 | 0.02 | 0.58 | 0.50 | 0.73 | 0.84 |

**Table 2.**Comparison of different methods in Hongjiadu reservoir. DP—dynamic programming; MLR—multiple linear regression; ANN—artificial neural network; ELM—extreme learning machine; SVM—support vector machine; SGM—scheduling graph method; GGR—generation guarantee rate; APG—average power generation.

| Method | DP | SGM | MLR | ANN | ELM | SVM |
|---|---|---|---|---|---|---|
| APG (10^{8} kWh) | 23.38 | 21.03 | 21.36 | 22.41 | 23.11 | 22.71 |
| Gap (%) | - | −10.05 | −8.64 | −4.15 | −1.15 | −2.87 |
| GGR (%) | 98.18 | 89.84 | 92.97 | 95.83 | 97.66 | 97.40 |
| Gap (%) | - | −8.49 | −5.31 | −2.39 | −0.53 | −0.79 |

Note: Gap = (Method − DP)/DP × 100%.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).