# An Ensemble Empirical Mode Decomposition, Self-Organizing Map, and Linear Genetic Programming Approach for Forecasting River Streamflow


## Abstract


## 1. Introduction

## 2. Methodology

#### 2.1. Study Area

The data consist of average daily streamflow (m³/s) and average daily total rainfall (mm). Streamflow was measured at a single location, Lock and Dam 10, south of Winchester along the Kentucky River, while precipitation was averaged over five gauges: Manchester, Hyden, Jackson, Heidelberg, and Lexington Airport. The gauges at Manchester, Hyden, Jackson, and Heidelberg were chosen because they lie along the river network and contribute directly to runoff at Lock and Dam 10; the gauge at Lexington Airport was chosen because it lies just north of Lock and Dam 10 and provides supplementary information on precipitation in the basin.

The basin area (km²) is reported in [10], and the underlying geology is highly karst [28]. Streamflow and precipitation data were obtained from the USGS at waterdata.usgs.gov and consist of 9496 daily records split into two 13-year periods: 1 January 1960 to 31 December 1972, and 1 January 1977 to 31 December 1989. For each model, the first 13-year period was used for training and the second for validation. Using the same datasets throughout allowed for a more meaningful comparative assessment. The runoff and precipitation records for the two datasets are depicted in Figure 2 and Figure 3.

#### 2.2. Linear Genetic Programming

Algorithm 1. Example of a simple LGP (Linear Genetic Programming) individual, with its corresponding algebraic structure.
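Since Algorithm 1 is not reproduced here, the following is a minimal sketch of how a linear genetic program can be represented and executed: a linear list of register-based instructions evaluated in order, which corresponds to an algebraic expression. The instruction encoding, register count, and operator set are illustrative, not those of the Discipulus software used in the study.

```python
# A minimal register-machine view of a linear genetic program: a list of
# three-address instructions executed sequentially over a small register
# file. Encoding and names are illustrative only.

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else 1.0,  # protected division
}

def run_program(program, inputs, n_registers=4):
    """Execute instructions (op, dst, src1, src2) over registers.

    Registers are initialized with the input values (recycled if there
    are more registers than inputs); the result is read from r[0].
    """
    r = [inputs[i % len(inputs)] for i in range(n_registers)]
    for op, dst, s1, s2 in program:
        r[dst] = OPS[op](r[s1], r[s2])
    return r[0]

# Program encoding the algebraic expression r0 = (x0 + x1) * x0
program = [
    ("+", 2, 0, 1),   # r2 = r0 + r1
    ("*", 0, 2, 0),   # r0 = r2 * r0
]
print(run_program(program, [3.0, 4.0]))  # (3 + 4) * 3 = 21.0
```

Evolution then operates directly on the instruction list (crossover swaps instruction segments, mutation alters single operands or operators), which is what distinguishes linear GP from tree-based GP.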

#### 2.3. Ensemble Empirical Mode Decomposition

1. Identify all local minima and local maxima of the original time-series x(t).
2. Create an upper envelope $e_{upper}(t)$ by connecting all local maxima, and a lower envelope $e_{lower}(t)$ by connecting all local minima, using cubic spline interpolation.
3. Calculate the mean of the upper and lower envelopes: $$m\left(t\right)=\frac{e_{upper}\left(t\right)+e_{lower}\left(t\right)}{2}$$
4. Subtract the computed mean m(t) from the time-series x(t), with the difference being d(t): $$d\left(t\right)=x\left(t\right)-m\left(t\right)$$
5. Check whether d(t) meets the defined requirements of an IMF. If the requirements are met, d(t) becomes the i-th IMF, denoted $c_i(t)$, and the remainder of x(t) becomes the residue $r(t) = x(t) - c_i(t)$, from which the next IMF can be extracted. If the requirements are not met, d(t) replaces x(t) and steps (1)–(5) are repeated on d(t) until $c_i(t)$ is found.
6. Steps (1)–(5) are repeated until either r(t) becomes a monotonic function or its number of extrema becomes less than or equal to one.
7. Once completed, the original time-series x(t) can be expressed as the sum of the IMFs and the final residue r(t), with n representing the number of IMFs: $$x\left(t\right)={\sum}_{i=1}^{n}c_i\left(t\right)+r\left(t\right)$$

For EEMD, the procedure is modified in two ways:

1. Decompose, through EMD, I realizations of $x\left(t\right)+\epsilon \cdot w\left(t\right)$, where $w\left(t\right)$ represents white noise, then take the ensemble average of the i-th IMF: $$\overline{c_i\left(t\right)}=\frac{1}{I}{\sum}_{1}^{I}c_i\left(t\right)$$
2. Then, instead of step (5) above, calculate the remaining residue as: $$r\left(t\right)=x\left(t\right)-\overline{c_i\left(t\right)}$$
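A minimal sketch of the noise-assisted ensemble averaging follows, using a crude fixed-iteration sift in place of a full EMD; the number of sifting passes, noise amplitude, and ensemble size are illustrative choices.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def first_imf(x, t, n_sifts=8):
    """Crude first-IMF extraction: a fixed number of sifting passes
    (envelope-mean removal); a real EMD uses an IMF stopping criterion."""
    d = x.copy()
    for _ in range(n_sifts):
        maxima = [i for i in range(1, len(d)-1) if d[i] > d[i-1] and d[i] > d[i+1]]
        minima = [i for i in range(1, len(d)-1) if d[i] < d[i-1] and d[i] < d[i+1]]
        if len(maxima) < 4 or len(minima) < 4:
            break
        e_up = CubicSpline(t[maxima], d[maxima])(t)
        e_lo = CubicSpline(t[minima], d[minima])(t)
        d = d - (e_up + e_lo) / 2.0
    return d

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)

eps, I = 0.1, 50
# step (1): decompose I noise-perturbed realizations, ensemble-average the IMF
c1 = np.mean([first_imf(x + eps * rng.standard_normal(len(x)), t)
              for _ in range(I)], axis=0)
# step (2): the residue is taken against the averaged IMF
r = x - c1
assert np.allclose(x, c1 + r)  # decomposition is exactly invertible
```

Averaging over the noise realizations is what lets EEMD suppress the mode mixing of plain EMD: the added white noise populates the whole time-frequency plane uniformly, and its contribution cancels in the ensemble mean.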

#### 2.4. Self-Organizing Map

- First, find the BMU (best-matching unit): $$d_j\left(x\right)={\sum}_{i=1}^{n}{\left(x_i-w_i\right)}^{2}$$ where $x_i$ is the current input vector, $w_i$ is the current weight vector, and n is the number of weights.
- Determine the radius of the neighborhood: $$\sigma\left(t\right)={\sigma}_{0}{e}^{\left(-\frac{t}{\lambda}\right)}$$
- Determine the time constant: $$\lambda=\frac{\text{set number of iterations}}{\text{radius of the map}}$$
- Adjust the weights of the neurons: $$w\left(t+1\right)=w\left(t\right)+\Theta\left(t\right)L\left(t\right)\left(x\left(t\right)-w\left(t\right)\right)$$
- Apply a decaying learning rate in the update above: $$L\left(t\right)={L}_{0}{e}^{\frac{-t}{\lambda}}$$ where $L_0$ is the initial learning rate.
- Compute each neuron's influence $\Theta(t)$ from its distance d to the BMU: $$\Theta\left(t\right)={e}^{\frac{-{d}^{2}}{2{\sigma}^{2}\left(t\right)}}$$
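The steps above can be combined into a minimal SOM training loop. Grid size, iteration count, and initial learning rate below are illustrative choices, not those used in the study.

```python
import numpy as np

def train_som(data, grid_w=5, grid_h=5, n_iters=500, L0=0.5, seed=0):
    """Minimal 2-D SOM implementing the update equations above: BMU by
    squared distance, exponentially shrinking neighborhood radius
    sigma(t) and learning rate L(t), and Gaussian influence Theta."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # neuron weights and their (row, col) coordinates on the map grid
    weights = rng.random((grid_w * grid_h, n_features))
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)
    sigma0 = max(grid_w, grid_h) / 2.0      # initial neighborhood radius
    lam = n_iters / sigma0                  # time constant: iterations / map radius
    for it in range(n_iters):
        x = data[rng.integers(len(data))]
        # BMU: neuron with the smallest sum of squared differences
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        sigma = sigma0 * np.exp(-it / lam)  # shrinking radius sigma(t)
        L = L0 * np.exp(-it / lam)          # decaying learning rate L(t)
        # Gaussian influence Theta of each neuron's grid distance to the BMU
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        theta = np.exp(-d2 / (2.0 * sigma ** 2))
        # weight update: w(t+1) = w(t) + Theta * L * (x - w(t))
        weights += (theta * L)[:, None] * (x - weights)
    return weights

def quantization_error(data, weights):
    """Mean distance from each data point to its nearest neuron."""
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
# two well-separated clusters; the trained map should cover both
data = np.vstack([rng.normal(0.2, 0.02, (100, 2)),
                  rng.normal(0.8, 0.02, (100, 2))])
w = train_som(data)
init = np.random.default_rng(0).random((25, 2))  # same initial weights as seed=0
assert quantization_error(data, w) < quantization_error(data, init)
```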

## 3. Model Development

#### 3.1. LGP as a Standalone Application

#### 3.2. LGP in Hybrid Application with EEMD and SOM

#### 3.3. Practicality of the Method for Forecasting Multiple Time-Steps Ahead

#### 3.4. Model Performance

Model performance was assessed through several statistical measures: the average absolute relative error (AARE), the Nash-Sutcliffe efficiency (E), the correlation coefficient (R), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the error in maximum flow (%MF), and the coefficient of persistence (E_{per}). These measures are computed from the observed flow Q_{obs}(t) for each day t, the number of days included in the dataset N, the mean estimated flow Q_{mean}, the mean observed flow Q_{obs.mean}, the maximum estimated flow Q_{max}, and the maximum observed flow Q_{obs.max}. The parameter Q_{obs}(t−1) refers to the observed runoff measurement from the previous day.

In addition, E_{per} was calculated, which gives the relative error between the difference of the observed flow and the observed flow of the day before, and the difference of the predicted and observed flow.
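For reference, standard definitions of AARE, the Nash-Sutcliffe efficiency E, and the coefficient of persistence E_{per} consistent with the description above can be sketched as follows (the paper's full equation set is not reproduced, so treat these as the conventional forms rather than the authors' exact expressions):

```python
import numpy as np

def aare(q_obs, q_pred):
    """Average absolute relative error, in percent."""
    return 100.0 * np.mean(np.abs((q_pred - q_obs) / q_obs))

def nash_sutcliffe(q_obs, q_pred):
    """Nash-Sutcliffe efficiency E: 1 minus the ratio of squared model
    error to squared deviation of observations from their mean."""
    return 1.0 - np.sum((q_obs - q_pred) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def persistence_index(q_obs, q_pred):
    """E_per: like E, but benchmarked against the naive persistence
    forecast Q_obs(t-1) instead of the observed mean."""
    num = np.sum((q_obs[1:] - q_pred[1:]) ** 2)
    den = np.sum((q_obs[1:] - q_obs[:-1]) ** 2)
    return 1.0 - num / den

q_obs = np.array([10.0, 12.0, 15.0, 11.0, 9.0])
assert aare(q_obs, q_obs) == 0.0                     # perfect model
assert nash_sutcliffe(q_obs, q_obs) == 1.0
assert persistence_index(q_obs, q_obs) == 1.0
assert nash_sutcliffe(q_obs, np.full(5, q_obs.mean())) == 0.0  # mean-only model
```

E_per is the stricter benchmark for daily streamflow, since yesterday's flow is usually a far better naive predictor than the long-term mean.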

## 4. Results and Discussion

Some parameters improved (E_{per} and E) while others regressed (AARE and NMBE). Comparing LGP-7 to LGP-3, more parameters improved, but again the improvements were not enough to justify the added computational requirement and time. LGP-8, which utilized IMF data derived from EEMD in addition to the standard rainfall and streamflow information, showed major improvement, reducing AARE and improving E and E_{per} while better predicting values far from the mean, as indicated by its NRMSE. The set of EEMD-based models from which LGP-8 resulted consistently yielded stronger AARE, E, R, NRMSE, and E_{per} values, although EEMD appeared to create a negative bias in %MF. These models, among others, are included in the supplementary material. Further improvement was seen when an SOM was utilized to classify the EEMD and standard information before LGP application, as indicated by LGP-9. Of the designs presented, LGP-9 produced the largest reduction in error, with an AARE of 10.122, the strongest Nash-Sutcliffe efficiency of 0.987, the strongest E_{per} of 0.931, the strongest ability to predict values away from the mean (NRMSE of 0.182), and a very low magnitude of over-prediction (NMBE of 0.161). Utilizing an SOM in conjunction with EEMD-derived runoff information was shown to significantly improve the performance of models created through LGP. Figure 7 shows a scatter-plot comparison between LGP-2, LGP-6, LGP-8, and LGP-9 for capturing streamflow.

Low flows were defined as below 28.2 m³/s, medium flows between 28.2 and 282.3 m³/s, and high flows above 282.3 m³/s. LGP-2, which utilized LGP as a standalone application, produced an extremely strong AARE that was not reproduced by any other standalone LGP model created during trial and error, which likely signifies that it is an outlier program for this individual basin. LGP-3, LGP-8, and LGP-9 each had more difficulty predicting low-magnitude flows than medium- and high-magnitude flows, with LGP-9 reporting the best results for each interval.

The comparable design without the SOM returned an E_{per} of 0.522, while the EEMD-SOM-LGP design returned an AARE of 43.365, an E of 0.902, and an E_{per} of 0.591 at Q(t+3). Utilizing the EEMD-SOM-LGP design also minimized the negative effect on %MF, as the values without the SOM were all negative and in double digits. For comparative purposes, these models can be found in the supplementary material. Figure 9 compares LGP-10 with LGP as a standalone application for predicting streamflow three days ahead.

## 5. Conclusions

## Supplementary Materials

## Author Contributions

## Conflicts of Interest

## References

- Vojtěch, K.; Hanel, M.; Máca, P.; Kuráž, M.; Pech, P. Incorporating basic hydrological concepts into genetic programming for rainfall-runoff forecasting. Computing **2013**, 95, 363–380.
- Mulvaney, T.J. On the use of self-registering rain and flood gauges in making observations of the relations of rainfall and flood discharges in a given catchment. Proc. Inst. Civ. Eng. Irel. **1851**, 4, 18–33.
- Beven, K.J. Rainfall-Runoff Modelling: The Primer; John Wiley & Sons: Lancaster, UK, 2011.
- Nourani, V.; Özgür, K.; Mehdi, K. Two hybrid artificial intelligence approaches for modeling rainfall-runoff process. J. Hydrol. **2011**, 402, 41–59.
- Babovic, V.; Keijzer, M. Rainfall runoff modelling based on genetic programming. Hydrol. Res. **2002**, 5, 331–346.
- Makkeasorn, A.; Chang, N.-B.; Zhou, X. Short-term streamflow forecasting with global climate change implications—A comparative study between genetic programming and neural network models. J. Hydrol. **2008**, 352, 336–354.
- Parasuraman, K.; Amin, E.; Sean, K.C. Modelling the dynamics of the evapotranspiration process using genetic programming. Hydrol. Sci. J. **2007**, 52, 563–578.
- Aytek, A.; Murat, A. An application of artificial intelligence for rainfall-runoff modeling. J. Earth Syst. Sci. **2008**, 117, 145–155.
- Mehr, A.D.; Ercan, K.; Ehsan, O. Streamflow prediction using linear genetic programming in comparison with a neuro-wavelet technique. J. Hydrol. **2013**, 505, 240–249.
- Srinivasulu, S.; Ashu, J. A comparative analysis of training methods for artificial neural network rainfall-runoff models. Appl. Soft Comput. **2006**, 6, 295–306.
- Poli, R.; McPhee, N.; Langdon, W. A Field Guide to Genetic Programming; Creative Commons: San Francisco, CA, USA, 2008.
- Chang, L.C.; Shen, H.Y.; Wang, Y.F.; Huang, J.Y.; Lin, Y.T. Clustering-based hybrid inundation model for forecasting flood inundation depths. J. Hydrol. **2010**, 385, 257–268.
- Nourani, V.; Mehdi, K.; Mohammad, T.A. Hybrid wavelet-genetic programming approach to optimize ANN modeling of rainfall-runoff process. J. Hydrol. Eng. **2011**, 17, 724–741.
- Kisi, O.; Jalal, S. Precipitation forecasting using wavelet-genetic programming and wavelet-neuro-fuzzy conjunction models. Water Resour. Manag. **2011**, 25, 3135–3152.
- Wang, W.; Xu, D.M.; Chau, K.W.; Chen, S. Improved annual rainfall-runoff forecasting using PSO–SVM model based on EEMD. J. Hydroinform. **2013**, 15, 1377–1390.
- Di, C.; Yang, X.; Wang, X. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting. PLoS ONE **2014**, 9, e104663.
- Murtagh, F.; Hernández-Pajares, M. The Kohonen self-organizing map method: An assessment. J. Classif. **1995**, 12, 165–190.
- Ismail, S.; Ani, S.; Ruhaidah, S. A hybrid model of self-organizing maps (SOM) and least square support vector machine (LSSVM) for time-series forecasting. Expert Syst. Appl. **2011**, 38, 10574–10578.
- Ju, Q.; Yu, Z.; Hao, Z.; Ou, G.; Zhao, J.; Liu, D. Division-based rainfall-runoff simulations with BP neural networks and Xinanjiang model. Neurocomputing **2009**, 72, 2873–2883.
- Jain, A.; Sanaga, S. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques. J. Hydrol. **2006**, 317, 291–306.
- Weissling, B.P.; Xie, H. MODIS Biophysical States and NEXRAD Precipitation in a Statistical Evaluation of Antecedent Moisture Condition and Streamflow. J. Am. Water Resour. Assoc. **2009**, 45, 419–433.
- Chen, P.-A.; Chang, L.-C.; Chang, F.-J. Reinforced recurrent neural networks for multi-step-ahead flood forecasts. J. Hydrol. **2013**, 497, 71–79.
- Yang, J.-S.; Yu, S.-P.; Liu, G.-M. Multi-step-ahead predictor design for effective long-term forecast of hydrological signals using a novel wavelet neural network hybrid model. Hydrol. Earth Syst. Sci. **2013**, 17, 4981–4993.
- Kentucky Geological Survey. Kentucky River Basin Map and Chart by Daniel I. Carey. 2015. Available online: http://kgs.uky.edu/kgsweb/olops/pub/kgs/mc188_12.pdf (accessed on 20 May 2015).
- USGS (U.S. Geological Survey). Water-quality assessment of the Kentucky River Basin, Kentucky—Analysis of available surface-water-quality data through 1986. In National Water-Quality Assessment; USGS Water-Supply Paper 2351; USGS: Reston, VA, USA, 1995.
- Kentucky River Authority. Kentucky River History—Locks and Dams. 2015. Available online: http://finance.ky.gov/offices/Pages/LocksandDams.aspx (accessed on 20 May 2015).
- Johnson, L.R.; Charles, E.P. Kentucky River Development: The Commonwealth’s Waterway; Louisville District, U.S. Army Corps of Engineers: Louisville, KY, USA, 1999.
- Currens, J.C. Model Ordinance for Development on Karst in Kentucky. In Kentucky Geological Survey; University of Kentucky: Lexington, KY, USA, 2009.
- Deschaine, L.M.; Frank, D.F. Comparison of Discipulus™ Linear Genetic Programming Software with Support Vector Machines, Classification Trees, Neural Networks and Human Experts. In White Paper; RML Technologies, Inc.: Boulder, CO, USA, 2004.
- Wu, Z.H.; Huang, N.E.; Chen, X. The multi-dimensional ensemble empirical mode decomposition method. Adv. Adapt. Data Anal. **2009**, 1, 339–372.
- Torres, M.E.; Colominas, M.A.; Schlotthauer, G.; Flandrin, P. A complete ensemble empirical mode decomposition with adaptive noise. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011.
- Kohonen, T. The self-organizing map. Proc. IEEE **1990**, 78, 1464–1480.
- Kastberger, G.; Gerhard, K. Visualization of multiple influences on ocellar flight control in giant honeybees with the data-mining tool Viscovery SOMine. Behav. Res. Methods Instrum. Comput. **2000**, 32, 157–168.
- Gallego, J.A.; Rocon, E.; Koutsou, A.D.; Pons, J.L. Analysis of kinematic data in pathological tremor with the Hilbert-Huang transform. In Proceedings of the 5th International IEEE/EMBS Conference on Neural Engineering (NER), Cancun, Mexico, 27 April–1 May 2011.
- Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models part I—A discussion of principles. J. Hydrol. **1970**, 10, 282–290.
- Moriasi, D.N.; Arnold, J.G.; Van Liew, M.W.; Binger, R.L.; Harmel, R.D.; Veith, T.L. Model evaluation guidelines for systematic quantification of accuracy in watershed simulations. Trans. ASABE **2007**, 50, 885–900.
- De Vos, N.J.; Rientjes, T.H.M. Constraints of artificial neural networks for rainfall-runoff modelling: Trade-offs in hydrological state representation and model evaluation. Hydrol. Earth Syst. Sci. Discuss. **2005**, 2, 365–415.
- Ashu, J.; Sudheer, K.P.; Sanaga, S. Identification of physical processes inherent in artificial neural network rainfall runoff models. Hydrol. Process. **2004**, 18, 571–581.

**Figure 2.** Streamflow and precipitation record for the training dataset. Red represents the high flow seasons while blue represents the low flow seasons.

**Figure 3.** Streamflow and precipitation record for the validation dataset. Red represents the high flow seasons while blue represents the low flow seasons.

**Figure 4.** Ensemble Empirical Mode Decomposition-Self-Organizing Map-Linear Genetic Programming structure with tournament selection in LGP.

**Figure 6.** Runoff time-series and IMF (Intrinsic Mode Function) signal components for the validation dataset. (**a**) Original validation runoff time-series and first 4 IMF components; (**b**) IMF components 5 through 9; (**c**) IMF components 10 through 12 and the final residue.

**Figure 7.** Scatter plots of observed vs. predicted streamflow for select models. An improved fit can be seen in the EEMD-LGP (LGP-8) and EEMD-SOM-LGP (LGP-9) models. (**a**) LGP as standalone application; (**b**) SOM-LGP; (**c**) EEMD-LGP; (**d**) EEMD-SOM-LGP.

**Figure 9.** Scatter plots of observed vs. predicted streamflow for the models in LGP-10. (**a**) Streamflow at 1 day ahead; (**b**) streamflow at 2 days ahead; (**c**) streamflow at 3 days ahead; (**d**) streamflow at 4 days ahead.

**Table 1.**Input variables included for LGP as a standalone application along with comparative ANN model.

| Model | Input Variables |
|---|---|
| LGP-1 | Q(t−1), Q(t−2), Q(t−3) |
| LGP-2 | P(t), Q(t−1), Q(t−2) |
| LGP-3 | P(t), P(t−1), Q(t−1), Q(t−2), Q(t−3) |
| ANN [10] | P(t), P(t−1), P(t−2), Q(t−1), Q(t−2) |

**Table 2.**Input variables for LGP as standalone application in capturing different aspects of hydrograph.

| Model | | Input Variables |
|---|---|---|
| LGP-4 | High Season | P(t), P(t−1), Q(t−1), Q(t−2), Q(t−3) |
| | Low Season | P(t), P(t−1), Q(t−1), Q(t−2), Q(t−3) |
| LGP-5 | Rising Trends | P(t), Q(t−1), Q(t−2) |
| | Falling Trends | P(t), P(t−1), Q(t−1), Q(t−2) |

| Model | Hybrid | Input Variables |
|---|---|---|
| LGP-6 | SOM-LGP | P(t), Q(t−1), Q(t−2) |
| LGP-7 | SOM-LGP | P(t), P(t−1), Q(t−1), Q(t−2), Q(t−3) |
| LGP-8 | EEMD-LGP | P(t), Q(t−1), Q(t−2), Q(t−3), *imf set |
| LGP-9 | EEMD-SOM-LGP | P(t), Q(t−1), Q(t−2), Q(t−3), *imf set |

| Output | Input Variables for LGP-10 |
|---|---|
| Q(t+1) | P(t), Q_{pred}(t), Q(t−1), Q(t−2), Q(t−3), *imfs |
| Q(t+2) | P(t), Q_{pred}(t+1), Q_{pred}(t), Q(t−1), Q(t−2), Q(t−3), *imfs |
| Q(t+3) | P(t), Q_{pred}(t+2), Q_{pred}(t+1), Q_{pred}(t), Q(t−1), Q(t−2), Q(t−3), *imfs |
| Q(t+4) | P(t), Q_{pred}(t+3), Q_{pred}(t+2), Q_{pred}(t+1), Q_{pred}(t), Q(t−1), Q(t−2), Q(t−3), *imfs |
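The recursive input scheme used for multi-step forecasting can be sketched generically: each step's prediction Q_pred is fed back as an input to the following step. The `model` below is a toy stand-in for any trained one-step predictor, and the IMF inputs (*imfs) are omitted for simplicity.

```python
def forecast_ahead(model, p_t, q_history, n_steps):
    """Roll a one-step model forward n_steps, feeding predictions back.

    q_history holds the lagged flows [Q(t-1), Q(t-2), Q(t-3)]; the
    predictions Q_pred(t), Q_pred(t+1), ... are prepended (newest first)
    to the inputs of each later step, mirroring the table above.
    """
    preds = []
    for _ in range(n_steps):
        # inputs: rainfall, all predictions so far (newest first), lagged flows
        x = [p_t] + preds[::-1] + q_history
        preds.append(model(x))
    return preds

# toy stand-in model: average of the flow-like inputs (everything after P(t))
model = lambda x: sum(x[1:]) / len(x[1:])
out = forecast_ahead(model, p_t=2.0, q_history=[10.0, 11.0, 12.0], n_steps=4)
assert len(out) == 4
```

Because predictions are recycled as inputs, errors compound with lead time, which is consistent with the degradation of E and E_per from Q(t+1) to Q(t+4) reported for LGP-10.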

| Model Set 1 | | AARE | E | R | NMBE | NRMSE | %MF | E_{per} |
|---|---|---|---|---|---|---|---|---|
| LGP-1 | Training | 38.261 | 0.911 | 0.955 | −0.022 | 0.485 | −3.934 | 0.477 |
| | Validation | 38.305 | 0.901 | 0.949 | 0.083 | 0.502 | −3.530 | 0.477 |
| LGP-2 | Training | 17.223 | 0.943 | 0.971 | −0.828 | 0.387 | −12.740 | 0.666 |
| | Validation | 17.118 | 0.937 | 0.968 | −0.905 | 0.401 | 0.156 | 0.666 |
| LGP-3 | Training | 22.026 | 0.942 | 0.970 | 0.615 | 0.393 | −14.663 | 0.656 |
| | Validation | 23.360 | 0.942 | 0.970 | 0.312 | 0.386 | 3.959 | 0.691 |
| ANN [10] | Training | 22.39 | 0.954 | 0.976 | −0.048 | 0.349 | −2.14 | 0.728 |
| | Validation | 23.92 | 0.941 | 0.970 | −0.172 | 0.387 | −1.62 | 0.689 |

**Table 6.**Statistical measures for validation dataset of High/Low flow seasons and Rising/Falling portions of hydrograph.

| Models | | AARE | E | R | NMBE | NRMSE | %MF | E_{per} |
|---|---|---|---|---|---|---|---|---|
| LGP-4 | High | 24.081 | 0.941 | 0.970 | 0.820 | 0.300 | −0.359 | 0.714 |
| | Low | 29.051 | 0.920 | 0.959 | −2.353 | 0.650 | −3.254 | 0.585 |
| LGP-5 | Rising | 38.744 | 0.948 | 0.974 | −0.603 | 0.350 | −0.003 | 0.735 |
| | Falling | 8.511 | 0.968 | 0.984 | −1.974 | 0.236 | 1.590 | 0.907 |

| Model Set 2 | | AARE | E | R | NMBE | NRMSE | %MF | E_{per} |
|---|---|---|---|---|---|---|---|---|
| LGP-6 | Training | 22.436 | 0.944 | 0.972 | 0.808 | 0.362 | −3.311 | 0.704 |
| | Validation | 20.678 | 0.941 | 0.970 | −1.369 | 0.393 | −0.344 | 0.684 |
| LGP-7 | Training | 21.930 | 0.953 | 0.976 | 1.517 | 0.352 | −2.104 | 0.724 |
| | Validation | 22.716 | 0.943 | 0.971 | 0.261 | 0.380 | 4.597 | 0.700 |
| LGP-8 | Training | 13.454 | 0.984 | 0.992 | −0.987 | 0.207 | −1.500 | 0.905 |
| | Validation | 14.232 | 0.981 | 0.991 | −0.449 | 0.220 | −7.653 | 0.899 |
| LGP-9 | Training | 9.181 | 0.989 | 0.995 | −0.890 | 0.170 | 5.540 | 0.935 |
| | Validation | 10.122 | 0.987 | 0.994 | 0.161 | 0.182 | 6.212 | 0.931 |

**Table 8.**Statistical parameters for low, medium, and high magnitude flows on the validation dataset of select models.

| Model | | AARE | E | R | Model | | AARE | E | R |
|---|---|---|---|---|---|---|---|---|---|
| LGP-2 | Low | 14.042 | 0.851 | 0.923 | LGP-8 | Low | 21.818 | 0.743 | 0.862 |
| | Medium | 18.53 | 0.767 | 0.876 | | Medium | 11.061 | 0.952 | 0.908 |
| | High | 18.721 | 0.858 | 0.926 | | High | 9.043 | 0.978 | 0.957 |
| LGP-3 | Low | 33.151 | 0.591 | 0.768 | LGP-9 | Low | 13.221 | 0.934 | 0.872 |
| | Medium | 18.98 | 0.775 | 0.880 | | Medium | 6.748 | 0.977 | 0.954 |
| | High | 17.797 | 0.869 | 0.932 | | High | 7.377 | 0.986 | 0.974 |

| LGP-10 | | AARE | E | R | NMBE | NRMSE | %MF | E_{per} |
|---|---|---|---|---|---|---|---|---|
| Q(t+1) | Training | 16.611 | 0.969 | 0.985 | 2.327 | 0.287 | −1.055 | 0.816 |
| | Validation | 17.612 | 0.963 | 0.981 | 2.770 | 0.311 | −5.627 | 0.799 |
| Q(t+2) | Training | 28.289 | 0.936 | 0.968 | 1.047 | 0.411 | −8.476 | 0.711 |
| | Validation | 29.799 | 0.929 | 0.964 | 1.496 | 0.427 | −3.851 | 0.699 |
| Q(t+3) | Training | 41.465 | 0.908 | 0.953 | 3.348 | 0.497 | 8.041 | 0.580 |
| | Validation | 43.365 | 0.902 | 0.950 | 3.355 | 0.502 | 6.664 | 0.591 |
| Q(t+4) | Training | 51.129 | 0.890 | 0.944 | 2.468 | 0.540 | −16.739 | 0.560 |
| | Validation | 54.369 | 0.874 | 0.935 | 3.095 | 0.568 | −5.348 | 0.486 |

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Barge, J.T.; Sharif, H.O. An Ensemble Empirical Mode Decomposition, Self-Organizing Map, and Linear Genetic Programming Approach for Forecasting River Streamflow. *Water* **2016**, *8*, 247.
https://doi.org/10.3390/w8060247
