Rivers’ Temporal Sustainability through the Evaluation of Predictive Runoff Methods

The concept of sustainability is addressed in this research from a temporal perspective. Rivers are natural systems with an inherent internal memory in their runoff and, by extension, in their hydrological behavior, which should be identified, characterized and quantified. This memory, formally called temporal dependence, can be quantified for each river system. The ability to capture that temporal signature has been analyzed through different methods and techniques. However, those methods' analytical capacities are highly heterogeneous. This research finds that the most advanced methods are those whose output provides a dynamic and quantitative assessment of the temporal dependence of each river system's runoff. Since runoff can be split into temporally conditioned runoff fractions, advanced methods provide an important improvement over classic or alternative ones. The ability to characterize a basin by calculating those fractions is significant progress for water managers who need predictive tools to orient their water policies. For instance, rivers with large temporal dependence will need to be controlled and gauged by larger hydraulic infrastructures. The application of this approach may produce large investment savings in hydraulic infrastructure and minimize environmental impact through the optimization of the cost-benefit balance.


Introduction
In order to implement a real integrated water resource management (IWRM), the proper use and organization of a wide range and amount of data sources and methods is required. In this context, climatic and hydrological variables are essential [1,2].
The variability of hydrological processes is increasing on both global and local scales [3,4]. Consequently, events with values and trends far from the historical average behavior are more recurrent [5][6][7]. In order to address this new hydrological reality, and aiming to anticipate and forecast it, new approaches, methodologies and techniques have emerged that incorporate this changing behavior. However, in order to build tools for the present and future, not all the reasons that explain this increasing variability are brand new (climate change and projections into the future); the historical behavior should also be more deeply understood [8].
Hydrological temporal behavior has usually been studied from a stochastic view, and many of those concepts, assumptions and applications are still valid today. They comprise temporal-spatial correlation and statistical hydrological parameters, including basic, drought and storage parameters, as well as testing for trends, shifts and seasonality, among others [8,9].
Traditionally, hydrological models and all their mathematical development have been built on a single observed runoff record collected from a gauge station. Still today, most of the hydrological tools officially implemented in water administrations are based on this rudimentary approach [10]. However, the boom of very powerful analytical techniques and methods such as data mining (DM), artificial intelligence (AI) and machine learning (ML), among others, has produced a drastic change in several disciplines, including hydrology [8]. Consequently, river basins' hydrological behavior can now be characterized through those tools by means of huge amounts of information coming from statistical or deterministic operations [11]. This massive amount of information is used to populate sophisticated expert systems that provide quantitative and dynamic results, incorporating and quantifying the inherent hydrological uncertainty [11]. However, those techniques vary widely in performance, usage, utility, analytical power and accuracy, among other parameters. This heterogeneity has not been properly reviewed or synthesized in academic studies or research articles, and, consequently, it is worth analyzing and characterizing.
The most powerful and useful methods are the stochastic ones, which are able to incorporate and deal with the uncertainty of hydrological variables; the discipline of stochastic hydrology is where those methods belong. Furthermore, ideas and features such as temporariness or persistence, strongly related to the measure of a time series' long-term memory (Hurst coefficient) [12], the dependence-independence of internal data, and the structure and strength of causal relationships are particularly significant.
This study aims to review the most popular, powerful and innovative approaches and methods for analyzing and predicting rivers' temporal behavior. It also provides an evaluation of the strengths, weaknesses, opportunities and threats (SWOT) of that set of methods and techniques, together with a rigorous suitability assessment focused on identifying the most suitable method for developing robust sustainable river assessments. The paper is structured as follows. Section 2 reviews the main materials and methods in the literature for a rigorous and updated analysis of rivers' temporal behavior and, by extension, their applications. Section 3 contains the methodology and the main results drawn from this research, including the identification and assessment of parameters, a SWOT analysis of those techniques and methods, and a suitability assessment for selecting the best methods for temporal river sustainability. Finally, Sections 4 and 5 are dedicated to the discussion and conclusions, respectively.

Overview of Research Approaches
Traditionally, the use of hydrological models has been extensive worldwide. A model can take the form of a physical, analog or mathematical model [13]. Currently, mathematical models are preferred due to the rapid development of computer technology and because they include a chronological set of relations. However, as mentioned above, most models rely solely on a large set of theoretical mathematical algorithms, using only the historical runoff record as input rather than being fed with massive amounts of high-quality information. Predictive and descriptive methods for studying and evaluating the temporal behavior of rivers may be classified according to different criteria, mainly the following. First, considering their capacity (or incapacity) for explicitly incorporating and quantifying uncertainty throughout their functioning, methods can be categorized into deterministic and stochastic [14]. Second, taking into account their capacity for dealing with massive or small amounts of data, methods are grouped into DM, big data (BD) and ML methods on the one hand, or scarce-data methods on the other. Furthermore, methods can be classified as black box or white box, in view of the transparency and manageability of their processing [15]. Additionally, stochastic methods differ in the way they deal with uncertainty: there are methods like causal reasoning (CR) that use probability [11], while other stochastic methods, such as HJ-Biplot, use multivariate analysis and techniques [16] and are able to evaluate correlation, similarity, distance/dissimilarity and covariance [16] in the datasets.
The application of different multivariate statistical techniques has increased in the recent past. The most used methods are the following. First, HJ-Biplot, which simultaneously interprets the position of the variables, the sets and the relationships between them [16]. Furthermore, multiple discriminant analysis (MDA) [17] and cluster analysis (CA) allow classifying the units according to similarities [18]; factor analysis (FA) explores relationships among the few principal components [19]; and principal component analysis (PCA) [17] and canonical correlation analysis (CCA) [20] select variables and identify factors influencing hydrological changes.
Given the high variability, randomness and uncertainty of hydrological processes, the methods classified as stochastic and deterministic are described in detail below. In this sense, according to Koutsoyiannis [21], an appropriate modeling approach for any uncertain hydrological system should necessarily include quantification of its uncertainty within a stochastic framework. Consequently, for the same reasons, the most adequate methods are the stochastic ones [14]. However, some new deterministic approaches can be successfully applied to some river systems, especially very controlled and gauged river basins only slightly affected by climate change [14]; these are explained below.

Process-Based Hydrological Models
This type of modeling involves complex physical theory and usually needs a large amount of data and computational time. These models start from a given initial condition, which is defined and parameterized, and apply nonlinear partial differential equations that describe the hydrologic processes. They are usually accurate; however, they are often complex and need a lot of information as model inputs, such as river bathymetry; a complete set of meteorological information (air temperature, solar radiation, wind, etc.); inflow and outflow conditions; etc. As a result, the application of such models to regions with limited data is impractical [22]. These models can be classified into black, grey or white box, in view of the transparency and manageability of their processing [15]. First, the lumped model (black box) evaluates the response of the basin simply at the outlet and does not characterize the physical characteristics of the hydrological processes. Second, the semi-distributed model (grey box) is partly permitted to change in space through the division of the catchment into a number of sub-basins; it requires smaller amounts of input data than the fully distributed model. The final type is the distributed model (white box), which requires large amounts of data for parameterization and presents some problems, such as nonlinearity, scale and uncertainty [15]. One important advantage of deterministic models is that they present the inside view of a process, which enables a better understanding of the hydrological system. On the other hand, their capacity for dealing with (identifying, characterizing, evaluating or quantifying) the temporal behavior of rivers (dependence/memory) is very limited [14,23]. In this sense, traditional hydrological modeling codes such as HEC-HMS [24] or SIMPA [10], among others, have used the mathematical equations given by classic hydrology.
They implement simulation models applied to hydrographic basins, which are then compared to observational data records to achieve a good calibration. Those models follow the traditional hydrological routing procedure to obtain hydrographs as the final product [25][26][27][28].

Wavelet Transform (WT)
The wavelet transform (WT) is a method that has been widely used to reveal information (signal) both in the time domain and in the scale (frequency) domain. It splits the main time series into sub-fractions, which improves the data decomposition in forecasts. Consequently, this allows capturing useful information at different levels of data resolution. It has been extensively applied in hydrology, for example, to rainfall-runoff relations for karstic springs [29,30], scale-dependent synthetic streamflow generation [31], temporal patterns of precipitation [32,33], variabilities of hydrological processes [34,35] and hydrological forecasting and regionalization [36,37].
The discrete wavelet decomposition is known to offer a local representation of time series data using wavelet and scaling coefficients at different resolutions, obtained via Mallat's pyramidal algorithm, among others [38]. Several studies have employed the wavelet technique for various hydro-climatological applications [39]. For instance, Adamowski and Chan [40] developed a wavelet neural network conjunction model to forecast monthly groundwater levels using wavelet-decomposed data of multiple hydrological and meteorological variables. Moosavi et al. [41] compared the forecast performance of Wavelet-ANFIS and Wavelet-ANN hybrid models to the ANFIS and ANN benchmark models; the Wavelet-ANFIS model provided better groundwater level forecasts at different time horizons. Likewise, numerous groundwater level estimation studies based on hybrid Wavelet-AI models can be found in the literature [42,43]. Furthermore, such hybrid WT-ANN methods have been shown to provide good performance in hydrological studies, such as rainfall-runoff modeling [44,45] and streamflow forecasting [46,47], as well as river and groundwater level forecasting [48], among others. Several studies have also shown that WT-ANN hybrid models perform better than some other widely used models. For example, Peng et al. [47] applied the empirical wavelet transform and artificial neural networks to streamflow forecasting and concluded that the hybrid model can capture the nonlinear features of the streamflow time series and, consequently, provide more accurate forecasts than the traditional ANN method. Adamowski and Chan [40] applied the WT-ANN method to predict groundwater levels and stated that it provided better forecasting accuracy than the conventional ANN method. Similar results have also been reported by Rajaee et al. [49] and Seo et al. [50]. These studies indicate that WT-ANN hybrid models allow users to achieve forecasts with higher accuracy.
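The pyramidal decomposition described above can be illustrated with the simplest wavelet, the Haar wavelet. The following sketch (an illustrative implementation, not any specific cited model) splits a short synthetic runoff series into a coarse approximation plus detail coefficients at two resolution levels:

```python
import numpy as np

def haar_decompose(signal, levels):
    """One-dimensional Haar wavelet decomposition (Mallat's pyramid).

    Returns the final approximation coefficients plus the detail
    coefficients at each level, coarsest first. The signal length is
    assumed to be a multiple of 2**levels.
    """
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        # Pairwise averages (low-pass) and differences (high-pass),
        # scaled so the transform preserves energy (orthonormal).
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))
        approx = (even + odd) / np.sqrt(2)
    return approx, details[::-1]

# Decompose a short synthetic "runoff" series into two resolution levels.
series = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0])
approx, details = haar_decompose(series, levels=2)
print(approx)        # [16. 11.] — coarse approximation coefficients
print(details[0])    # [-6.  3.] — level-2 (coarse) details
print(details[1])    # level-1 (fine) details
```

Because the transform is orthonormal, the energy of the original series is split exactly across the approximation and detail sub-series, which is what allows forecasting models to work on each resolution level separately.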

Deterministic Machine Learning (ML) Methods
Some of the most widely used methods have been selected for this section given their high operational capacity. Most of them belong to the ML group of methods; however, there are other methods that exceed the scope of ML and are also worth including.
There are two general categories of machine learning: supervised and unsupervised. Supervised ML techniques are used when there is a piece of data to predict or explain; previous data on inputs and outputs are used to predict an output for a new input. By contrast, unsupervised ML searches for ways to relate and group data points without using a target variable to predict. In other words, it evaluates data in terms of traits and uses those traits to form clusters of items that are similar to one another. To keep a robust and coherent structure, ML methods are classified in this paper into deterministic and stochastic ones.
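The supervised/unsupervised distinction can be made concrete with a minimal sketch (the data and centers below are illustrative placeholders, not hydrological values):

```python
import numpy as np

# --- Supervised: learn a mapping from known inputs to known outputs ---
# Fit a least-squares line to past (input, output) pairs, then predict
# the output for a new, unseen input.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])           # observed targets
slope, intercept = np.polyfit(x, y, 1)
prediction = slope * 5.0 + intercept          # predict for unseen input

# --- Unsupervised: group data points without any target variable ---
# A single k-means-style assignment step around two fixed centers.
points = np.array([0.2, 0.4, 0.3, 5.1, 4.8, 5.3])
centers = np.array([0.0, 5.0])
labels = np.argmin(np.abs(points[:, None] - centers[None, :]), axis=1)

print(prediction)   # ≈ 9.85: extrapolated from the fitted line
print(labels)       # [0 0 0 1 1 1]: two clusters formed by similarity
```

The first half uses known targets to build a predictor; the second half discovers structure in the data alone, which is exactly the distinction drawn in the text.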
When there is inadequate information on influential parameters, ML methods demonstrate higher potential and efficiency and are more appropriate than conventional numerical or finite element methods [38]. ML techniques make more reliable predictions/forecasts based on their ability to identify and reveal the nonlinear, non-stationary and discriminant features of historical time-series records, and they effectively handle chaos and uncertainties associated with the input data. Furthermore, the development of hybrid ML techniques for modeling hydrologic time series is currently a very active area of research. For instance, ML models developed from wavelet-decomposed data have demonstrated better performance than models fed with raw data. The description of the selected methods below focuses on their capacity for predicting increasingly variable hydrological processes, such as river runoff.

Artificial Neural Network Methods (ANN)
During the last 20 years, ANNs have been increasingly used for a wide range of time series forecasting applications, including geophysical, meteorological and hydrological time series. Most ANN applications are forecasts in situations where the relationship between the set of input factors and the outputs is unknown. Although there is strong criticism of the capacity of ANNs for dealing with hydrological uncertainty, and their reliability in semiarid regions is seriously questioned, the ANN model has become an appreciated tool for modeling nonlinear hydrometeorological phenomena, such as precipitation forecasting [51][52][53], streamflow forecasting [54,55], groundwater-level simulation [56,57] and river water temperature forecasting [58,59]. Furthermore, regarding more specific utilities using ANNs as a tool, the NERD (neural error regression diagnosis) model of Abramowitz et al. [60] is available for data assimilation, and the SOLO (self-organizing linear output) model of Hsu et al. [61] first classifies input features into various categories and then performs function mapping for each category separately. Since hydrological conditions are increasingly changing, applications of linear and nonlinear regression models, as well as traditional ANN models, to river water temperature modeling frequently have limitations, especially in the processing of nonstationary data [62]. In this regard, the wavelet transform, as a good preprocessing method for nonstationary data, can be a potential complement to the traditional methods.
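The nonlinear input-output mapping that an ANN provides can be sketched as a one-hidden-layer forward pass. The weights below are random placeholders; in a real application they would be learned by training on historical input-output pairs (e.g. precipitation and temperature as inputs, runoff as output):

```python
import numpy as np

def ann_forward(x, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer feedforward network: a
    nonlinear mapping from input features to an output, of the kind
    used for rainfall-runoff or water-temperature forecasting."""
    hidden = np.tanh(w1 @ x + b1)    # nonlinear hidden activations
    return w2 @ hidden + b2          # linear output layer

# Illustrative fixed weights (untrained, randomly initialized).
rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 2))   # 2 inputs -> 3 hidden units
b1 = np.zeros(3)
w2 = rng.normal(size=(1, 3))   # 3 hidden units -> 1 output
b2 = np.zeros(1)

y = ann_forward(np.array([0.5, -0.2]), w1, b1, w2, b2)
print(y.shape)   # (1,): a single forecast value
```

Training adjusts `w1, b1, w2, b2` to minimize the error between such outputs and observed targets; the tanh nonlinearity is what lets the network capture the nonstationary, nonlinear features mentioned above.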

System Dynamics Methods (SDs)
SDs is basically a methodology for studying complex feedback systems [63]. Consequently, this method was not originally intended for hydrological forecasting [64][65][66][67]. However, due to the inherent uncertainty and the complex causal and dependency relationships within hydrological records, it is also recommended for that purpose. Specifically, it is particularly useful when studying complex systems with interacting elements whose behavior cannot be easily predicted. It allows one to examine system behavior modes and responses as different variables are altered. The most well-known SD packages include SIMILE [68], VENSIM (www.vensim.com), STELLA (www.iseesystems.com), POWERSIM (www.powersim.com) and SIMULINK, an add-on to MATLAB (www.mathworks.com) [63].

Traditional Techniques
The traditional statistical/time series methods, such as linear and nonlinear regression models, are usually simple to develop; however, they generally produce large modeling errors [69]. Consequently, the way they cope with uncertainty is not recommended. It is well known that a correlogram provides an idea of the intensity of temporal dependence through the correlation coefficient [70]. However, a correlogram is a static picture and an averaged result of the temporal behavior of hydrological series. This fact provokes the existence of indeterminacy points when defining the temporal dependence/independence of time series [70,71]. Traditional techniques were designed exclusively for the short term, through Pearson's correlation coefficient, and for the long term, through the Hurst coefficient, which provides an idea of the degree of dependency of hydrological events throughout time series. Within these methods, dependence has been analyzed in hydrologic studies from many perspectives and through several approaches [72]. The most common traditional measure of dependence is Pearson's correlation coefficient, which is computed under the assumption of a normal distribution to measure linear dependence [8]. This is commonly assumed in statistical approaches, such as autoregressive (AR) models or the autoregressive moving average (ARMA) model, to characterize the linear dependence among multivariate random variables in hydrology, such as precipitation, streamflow or runoff [73,74]. Nevertheless, the normality assumption does not always hold, and linear dependence is too basic to characterize far more complicated dependence structures [72]. Nonparametric correlation estimators such as Spearman's and Kendall's have also been widely used as alternative dependence measures to characterize the nonlinear dependence of hydrological variables, as demonstrated recently with the copula models [75] described later.
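The correlogram mentioned above is simply the series of Pearson correlations between a record and lagged copies of itself. A minimal sketch on a synthetic series with short-term memory (the AR(1) coefficient 0.7 is an illustrative choice):

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation function (the correlogram): Pearson-type
    correlation between the series and lagged copies of itself."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.sum(x * x)
    return np.array([np.sum(x[:len(x) - k] * x[k:]) / var
                     for k in range(1, max_lag + 1)])

# A synthetic series with memory: each value leans on the previous one.
rng = np.random.default_rng(42)
runoff = np.empty(500)
runoff[0] = 0.0
for t in range(1, 500):
    runoff[t] = 0.7 * runoff[t - 1] + rng.normal()

acf = autocorrelation(runoff, max_lag=3)
print(acf)   # lag-1 near 0.7, decaying geometrically at higher lags
```

The geometric decay of the correlogram is the "static picture" the text refers to: it summarizes average short-term dependence but gives no dynamic, event-by-event quantification of memory.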

Multivariate Adaptive Regression Splines (MARS)
This procedure, proposed by Friedman [76], is a multivariate nonparametric model that searches over all possible univariate knot locations and across interactions among all variables by means of local, so-called basis functions. The MARS algorithm is based on three sequential steps [76,77]. In the first, named the "constructive phase", the model is created by adding basis functions step by step. In order to improve the model, the best pairs of spline functions are selected: at each step, the split that minimizes some lack-of-fit criterion among all possible splits is chosen on each basis function. This iterative construction finishes when the model reaches a maximum number of basis functions. In the second step, the "pruning phase", a backward elimination process to refine the model fit is carried out: basis functions are eliminated one at a time until the lack-of-fit criterion, based on the generalized cross-validation (GCV) criterion [78], reaches a minimum. Finally, in the third phase, the optimal model is selected based on an evaluation of the properties of the different models.
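The basis functions MARS builds are reflected pairs of hinge functions, max(0, x − knot) and max(0, knot − x). The sketch below fixes a single knot and fits the pair by least squares; the full algorithm would additionally search the knot location and prune via GCV, and the kinked data are illustrative:

```python
import numpy as np

def hinge(x, knot):
    """MARS basis (hinge) functions: the reflected pair
    max(0, x - knot) and max(0, knot - x)."""
    return np.maximum(0.0, x - knot), np.maximum(0.0, knot - x)

# Piecewise-linear data with a kink at x = 3 (illustrative).
x = np.linspace(0.0, 6.0, 61)
y = np.where(x < 3.0, 1.0 * x, 3.0 + 2.5 * (x - 3.0))

# A one-knot MARS-style model: intercept + the two hinge terms,
# fitted by ordinary least squares.
h_plus, h_minus = hinge(x, knot=3.0)
design = np.column_stack([np.ones_like(x), h_plus, h_minus])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
residual = y - design @ coef

print(coef)                    # ≈ [3.0, 2.5, -1.0]
print(np.abs(residual).max())  # ≈ 0: the hinge basis fits the kink exactly
```

A global polynomial would smooth the kink away; the local hinge basis reproduces it exactly, which is the property that makes MARS flexible for high-dimensional hydrological data.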
On the other hand, the MARS procedure has recently begun to be applied in the field of hydrological forecasting due to its flexibility in modeling high-dimensional data. It is recommended for, and has been applied to, a wide range of hydrological issues, such as runoff prediction [79,80], drought prediction [81], water pollution prediction [82], pan evaporation modeling [83] and the prediction of scour depth below free overfall spillways [84]. In most of these studies, the raw time-series data are fed as input. The performance of MARS was better than that of the ANN, ANFIS, support vector machine (SVM) and M5 Tree models [38].

ARIMA/ARMA Methods
ARIMA/ARMA methods are usually used for generating synthetic series from the historical record [8]. Before tackling this generation, the historical time series needs to be normalized, which is a drawback of this technique [85]. Once the normalization is done, several synthetic series are generated by means of the model, each with the same occurrence probability as the historical one [70]. Frequently, these synthetic series are used to populate a further model/method, such as CR [23]. Consequently, this method is usually hybridized with other artificial intelligence techniques, which provide more accurate results in the modeling of complex natural processes [73,74,86,87]. This approach is applied in the hybridization with Bayesian methods for developing CR [77].
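The generation of equally likely synthetic series can be sketched with the simplest member of this family, an AR(1) model. The coefficient and noise scale below are illustrative; in practice they would be estimated from the normalized historical record:

```python
import numpy as np

def generate_ar1_ensemble(phi, sigma, n, n_series, seed=0):
    """Generate an ensemble of synthetic AR(1) series sharing the
    statistics of a (normalized) record: x_t = phi * x_{t-1} + eps_t."""
    rng = np.random.default_rng(seed)
    out = np.zeros((n_series, n))
    for t in range(1, n):
        out[:, t] = phi * out[:, t - 1] + rng.normal(0.0, sigma, n_series)
    return out

# phi and sigma would come from fitting the normalized historical
# series; the values here are illustrative.
ensemble = generate_ar1_ensemble(phi=0.6, sigma=1.0, n=1000, n_series=50)
lag1 = np.mean([np.corrcoef(s[:-1], s[1:])[0, 1] for s in ensemble])

print(ensemble.shape)   # (50, 1000): fifty equally likely synthetic series
print(round(lag1, 2))   # close to the prescribed phi = 0.6
```

Each synthetic series reproduces the prescribed lag-1 dependence of the record, which is what allows the ensemble to feed a downstream method such as CR.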

Causal Reasoning (CR) Methods
CR is based on the propagation of probability through a causal model via a Bayesian implementation. This model is based on decision variables that represent probability distributions for temporal runoff. Most CR approaches are based on Bayesian networks (BNs) and dynamic Bayesian networks (DBNs), which are probabilistic graphical models [88,89]. They easily and automatically allow defining relationships between parts of complex models through conditional probabilities, since both are based on Bayes' theorem [90]. BNs offer compact representations of the joint probability distribution over sets of random variables [91,92]. This property has led to BNs being used as decision support systems (DSS), for example in [93,94], to address problems within the IWRM paradigm [95]. Dynamic Bayesian networks (DBNs) are a generalization of hidden Markov models (HMMs) [8,96]. One of the main advantages of BNs is that they compute inference omni-directionally: given an observation with any type of evidence on any of the network's nodes (or a subset of nodes), BNs can compute the posterior probabilities of all other nodes in the network, regardless of arc direction, through observational inference [97].
The inherent ability of BNs to explicitly model uncertainty makes them suitable for temporal hydrological series analysis. Another feature of BNs is unsupervised structural learning [98], which discovers probabilistic relationships between a large number of variables without having to specify input or output nodes. This can be seen as a quintessential form of knowledge discovery, as no assumptions are required to run these algorithms on unknown datasets. Furthermore, this is strongly related to machine learning, which has been applied successfully in the hydrological context in papers such as [99,100]. Consequently, the resulting product has many similarities with a neuro-fuzzy system or adaptive neuro-fuzzy inference system (ANFIS), which has been applied in works such as [101]. Finally, an important contribution of this method is the identification, characterization and quantification of temporally conditioned runoff fractions [8]. This represents an outstanding advance over classic or alternative methods.
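The omni-directional inference described above can be shown on the smallest possible network, a single arc. The node names and probabilities below are illustrative, not calibrated values:

```python
# A minimal two-node Bayesian network: WetSeason -> HighRunoff.
p_wet = 0.4                                  # P(WetSeason)
p_high_given = {True: 0.8, False: 0.1}       # P(HighRunoff | WetSeason)

# Forward (causal) query, following the arc: P(HighRunoff)
p_high = sum(p_high_given[w] * (p_wet if w else 1 - p_wet)
             for w in (True, False))

# Diagnostic query against the arc, via Bayes' theorem:
# P(WetSeason | HighRunoff) — BNs compute such posteriors
# regardless of arc direction.
p_wet_given_high = p_high_given[True] * p_wet / p_high

print(round(p_high, 3))            # 0.38
print(round(p_wet_given_high, 3))  # 0.842
```

Observing high runoff raises the belief in a wet season from 0.4 to about 0.84; in a full CR model the same mechanism propagates evidence through many nodes representing temporally conditioned runoff fractions.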

Copula Methods
In hydrology, copula applications started after the work of De Michele and Salvadori [102], who tested Frank copulas for a joint study of the negatively associated storm intensity and duration. Other research, such as that of Zhang and Singh [103], incorporated copulas for an extreme analysis of rainfall and drought events. The modeling of dependence between extreme events is widely studied nowadays [104]. Dependence in extreme events needs to be evaluated through techniques focused on extreme distributions, such as the tail-dependence coefficient. This has been commonly used in investigating hydroclimatic extremes, such as precipitation and temperature [105].
Another important and recent area of research is the construction of a multivariate distribution in modeling different dependence structures [106]. This is applied to hydrology in the form of frequency analysis, downscaling, streamflow or rainfall simulation, geostatistical interpolation, bias correction and so on. Other methods, such as multivariate parametric distribution [107], entropy [108] and copula [75], have been developed to model various dependence structures of multivariate variables through the construction of joint or conditioned distribution.
On the other hand, it should be highlighted that the copula method is based on the description and modeling of the dependence structure between random variables independently of the marginal laws involved. In this sense, there are strong similarities with CR. The difference with CR lies in the fact that a copula is a bivariate function, while CR is based on conditioned probabilistic distributions.
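The separation of dependence structure from marginals can be sketched with a Gaussian copula (used here for simplicity instead of the Frank or extreme-value copulas cited above; the correlation value is illustrative):

```python
import numpy as np
from math import erf, sqrt

def gaussian_copula_sample(rho, n, seed=0):
    """Draw dependent uniform pairs (u, v) from a Gaussian copula with
    correlation rho: the dependence structure is specified separately
    from any marginal distribution."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0],
                                [[1.0, rho], [rho, 1.0]], size=n)
    # Probability-integral transform: standard normal CDF maps each
    # margin to a uniform on [0, 1] while preserving the dependence.
    phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))
    return phi(z)

uv = gaussian_copula_sample(rho=0.8, n=5000)
print(np.corrcoef(uv[:, 0], uv[:, 1])[0, 1])  # strong positive dependence
print(uv.min() >= 0.0 and uv.max() <= 1.0)    # True: uniform margins
```

Any marginal laws (e.g. a Gumbel law for flood peaks) can then be attached by passing `u` and `v` through the corresponding inverse CDFs, which is precisely the marginal-free property the text describes.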

Kalman and Particle Filter Methods
The Kalman filter (KF) is a method that allows estimating unobservable state variables from observable variables that may contain some measurement error. It is a recursive algorithm that requires two types of equations: those that relate the observable variables to the state variables (measurement equations) and those that determine the temporal structure of the state variables (state equations). The estimates of the state variables are made based on the dynamics of these variables (time dimension), as well as the measurements of the observable variables obtained at each moment in time (cross-sectional dimension). That is, the dynamics are summarized in two steps: first, estimate the state variables using their own dynamics (prediction stage); then, improve that first estimate using the information from the observable variables (correction stage). Once the algorithm predicts the new state at time t, it adds a correction term, and the new "corrected" state serves as the initial condition in the next stage, t + 1. In this way, the estimation of the state variables uses all the information available up to that moment and not only the information up to the stage before the moment at which the estimation is made [109].
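The prediction-correction recursion above can be written out for the simplest scalar case. The state dynamics, noise variances and synthetic data below are illustrative choices:

```python
import numpy as np

def kalman_1d(observations, a, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter. State equation: x_t = a * x_{t-1} + w
    (variance q); measurement equation: y_t = x_t + v (variance r).
    Returns the filtered state estimates."""
    x, p = x0, p0
    estimates = []
    for y in observations:
        # Prediction stage: propagate the state with its own dynamics.
        x, p = a * x, a * a * p + q
        # Correction stage: blend in the new observation.
        k = p / (p + r)              # Kalman gain
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Noisy observations of a slowly drifting state (synthetic data).
rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.3, 100))   # hidden state
obs = truth + rng.normal(0.0, 1.0, 100)        # noisy measurements
est = kalman_1d(obs, a=1.0, q=0.09, r=1.0)

print(np.mean((est - truth) ** 2) < np.mean((obs - truth) ** 2))  # True
```

The filtered estimate tracks the hidden state with a much smaller error than the raw measurements, because each correction uses all information available up to that moment. The EnKF and PF discussed below replace the scalar gain computation with sample-based approximations for high-dimensional, non-Gaussian systems.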
Recently, some novel applications based on the KF have emerged in the field of hydrologic prediction, due to its real-time adjustment and easy implementation within a dynamic state framework [110]. One of them is the ensemble Kalman filter (EnKF). This method approximates the distributions of the system states by random samples, named ensembles, and replaces the covariance matrix with the sample covariance computed from the ensembles, which is used for state updating in the KF [111]. In this sense, it is worth noting some interesting approaches in the field of hydrological prediction. For example, Moradkhani et al. [112] showed a dual estimation of parameters and state variables in a hydrologic model. Weerts and El Serafy [113] compared the capability of EnKF and particle filter (PF) methods for reducing uncertainty in the rainfall-runoff update and internal model state estimation for flood forecasting purposes. For their part, Pathiraja et al. [114] proposed an approach to detect nonstationary hydrologic model parameters in a paired catchment system.
On the other hand, the particle filter (PF) technique has emerged as an attractive alternative that removes the unrealistic Gaussian assumption of the EnKF [115], improving the reliability of hydrologic predictions [116]. The PF is able to fully represent the posterior distributions of model parameters and state variables through a number of independent random samples called particles, which are weighted and propagated sequentially by assimilating available observations. Over the past few years, the PF and its variants have been receiving increasing attention from the hydrologic community due to their ability to properly estimate the state of nonlinear and non-Gaussian systems [117,118].

HJ-Biplot
The HJ-Biplot [119] is an extension of the classical biplot methods [120]. This technique allows a joint representation, in a reduced-dimension subspace (usually a plane), of the rows and columns of a matrix of multivariate data X (n × p). It is a symmetric simultaneous representation technique that is in some ways similar to correspondence analysis but is not restricted to frequency data.
The HJ-Biplot uses markers (points/vectors), denoted g1, g2, …, gn for the rows and h1, h2, …, hp for the columns. The markers are obtained from the usual singular value decomposition (SVD) of the data matrix, X = UDV^T, where the columns of U are the eigenvectors of XX^T, the columns of V are the eigenvectors of X^TX and D is the diagonal matrix of singular values; the row markers are the rows of J = UD and the column markers are the rows of H = VD. The HJ-Biplot is widely used because it provides a higher goodness of fit for both the rows and the columns of the matrix [16].
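The marker construction above is a direct application of the SVD. A minimal sketch on an illustrative random data matrix:

```python
import numpy as np

# HJ-Biplot markers from the SVD X = U D V^T: the rows of J = U D are
# the row markers (g_i) and the rows of H = V D the column markers
# (h_j), so rows and columns are represented with the same quality.
rng = np.random.default_rng(3)
X = rng.normal(size=(10, 4))
X = X - X.mean(axis=0)            # column-centred data matrix

U, d, Vt = np.linalg.svd(X, full_matrices=False)
D = np.diag(d)
J = U @ D                         # row markers (g_1, ..., g_n)
H = Vt.T @ D                      # column markers (h_1, ..., h_p)

# Sanity checks on the factorization:
print(np.allclose(J @ Vt, X))     # True: J and V^T reconstruct X
print(J.shape, H.shape)           # (10, 4) (4, 4)
```

For the usual planar biplot, the first two columns of J and H give the coordinates of the row and column markers on the plane of the first two principal axes.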
Recently, this technique has been applied in the field of hydrology. Carrasco et al. [16] examined the relationship between physicochemical and biological variables, as well as the sampling points, across different months. This method has a descriptive character, contributing to deepening hydro-chemical knowledge and, consequently, improving the hydrological and hydrodynamic understanding and interpretation of water systems, which allowed generating new interpretations, knowledge and evaluations about the water quality of Gatun Lake. It also offers the advantage of providing high representation quality, both for the sampling points and for the physicochemical and biological variables.

Principal Component Analysis (PCA)
PCA reduces the dimension of the original dataset. This technique uses an orthogonal transformation to convert highly correlated variables into a set of linearly uncorrelated values, called principal components. Each principal component is a linear combination of the original variables. The mathematical approach can be described as adaptive data analysis (Table 1), comprising mathematical processes such as orthogonal linear transformation, rank (dimensionality) reduction and visualization of latent data structures.
Hydrological datasets often contain not only useful information but also confusing noise. Sometimes, datasets are not normally distributed or may contain outliers, and hydrological parameters are frequently correlated. Correlation indicates that some of the information contained in one variable is also contained in some of the remaining variables. In order to reveal the logical structure of the data, several procedures, such as PCA, reduce the data dimensionality and extract more effective features [121]. Therefore, in several studies, the reason for selecting PCA as a statistical method was its low noise sensitivity [121].
PCA has been used in numerous hydrological studies to assess water quality [122], to understand the role of land-atmosphere interactions in driving climate variability and its spatiotemporal variability [123] or to screen out redundant input variables in hybrid runoff prediction models [124].
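As a minimal sketch of this idea (on synthetic, hypothetical data, not a dataset from the cited studies), PCA can be carried out directly from the SVD of the centered data matrix:

```python
import numpy as np

# Synthetic hydrological-style dataset: 100 observations of 5 correlated
# variables generated from 2 underlying drivers plus noise (hypothetical).
rng = np.random.default_rng(42)
latent = rng.normal(size=(100, 2))                      # two underlying drivers
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.1 * rng.normal(size=(100, 5))   # correlated + noise

# Center the data, then apply the orthogonal transformation given by the SVD.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Principal components: linear combinations of the original variables.
scores = Xc @ Vt.T                      # projected (uncorrelated) data
explained = s**2 / np.sum(s**2)         # variance fraction per component

# Rank reduction: keep the components that explain most of the variance.
k = 2
X_reduced = scores[:, :k]
print(f"variance explained by {k} components: {explained[:k].sum():.3f}")
```

With two dominant drivers and weak noise, the first two components capture nearly all the variance, which is exactly the dimensionality-reduction behavior exploited in the studies above.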

Factorial Analysis of Variance (FAV)
Factorial analysis is usually used to help in the study of the individual and interaction effects of parameters [125]. More specifically, the factorial analysis of variance method is used to diagnose the curvilinear relationship between the parameters and the response [126,127]. In other words, the FAV technique measures the specific variations of hydrological responses, in terms of posterior distributions, to investigate the individual and interactive effects of parameters on model outputs [128]. FAV thus comprises a powerful statistical tool for exploring the main effects: it measures the variation ranges of the hydrological response under the influence of individual parameters and parameter interactions, revealing how each parameter's effect varies under the influence of another parameter [129]. The FAV technique can also quantify the sensitivity of model outputs to individual parameters, as well as to their interactions, by addressing the curvilinear character of the hydrological response as parameters vary across their multiple levels [130,131].
Consequently, the hybridization of this method with Bayesian methods is quite appropriate, direct and powerful. This combination is called Bayesian-based multilevel factorial analysis (BMFAV), and it is used to assess parameter uncertainties and their effects on hydrological model outputs. In this sense, several other applications aim to approximate the posterior distributions of model parameters with Bayesian inference [128,132]. Another important application of FAV was the development of a BMFAV method to address the dynamic influence of hydrological model parameters on runoff simulation [133].
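The separation of individual (main) effects and interaction effects that FAV relies on can be illustrated with a small two-level factorial design. The response values below are hypothetical and purely illustrative:

```python
import numpy as np

# Two-factor factorial ANOVA sketch: factors A and B at 2 levels each,
# r replicates per cell, so main effects and the A x B interaction
# can be separated (response values are hypothetical).
y = np.array([                               # y[a, b, replicate]
    [[10.1, 9.9, 10.3], [12.0, 11.8, 12.2]],  # A = low
    [[11.0, 10.8, 11.2], [15.1, 14.9, 15.3]], # A = high
])
a_lv, b_lv, r = y.shape
grand = y.mean()

# Sums of squares for the main effects.
ss_a = b_lv * r * np.sum((y.mean(axis=(1, 2)) - grand) ** 2)
ss_b = a_lv * r * np.sum((y.mean(axis=(0, 2)) - grand) ** 2)

# Interaction: cell-mean deviations not explained by the main effects.
cell = y.mean(axis=2)
ss_ab = r * np.sum((cell - y.mean(axis=(1, 2))[:, None]
                         - y.mean(axis=(0, 2))[None, :] + grand) ** 2)

# Residual (error) variation within cells.
ss_err = np.sum((y - cell[:, :, None]) ** 2)

print(f"SS_A={ss_a:.2f}  SS_B={ss_b:.2f}  "
      f"SS_AxB={ss_ab:.2f}  SS_err={ss_err:.2f}")
```

A nonzero SS_AxB term is what signals that one parameter's effect changes with the level of the other, which is the interactive behavior FAV (and, with priors, BMFAV) quantifies.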

Methodology and Results
This research comprises a methodological framework that includes several consecutive phases, listed as follows. First, the parameters for a subsequent SWOT analysis were identified; then, those parameters were assessed based on a rigorous analysis of the global scientific literature; after that, a SWOT analysis was developed; finally, a suitability assessment for rivers´ sustainability was carried out. The results drawn from this research are largely established in terms of a comparative analysis of the aforementioned techniques, aimed at ranking the methods in a multidimensional way. This will allow selecting the most suitable technique for each component of rivers´ sustainability included in this research.

Identification of Parameters
The parameters identified and established in this research for tackling the SWOT analysis are the following.
Predictability: There is high variability in the prediction capacity provided by the analyzed methods and techniques. A qualitative score (high, medium and low) is established to rank and measure that skill.
Reliability: This parameter qualitatively scores the degree of confidence and/or accuracy of the prediction generated by a given method.
Mathematical approach for dealing with uncertainty: This descriptive parameter reports the way in which the method deals with uncertainty (probability, multivariate, etc.).
Mathematical approach for processing: This provides insights into the type of algorithms and the theoretical development the method relies on.
Amount of processed information: This parameter qualitatively scores the amount of information the method requires for generating its output. A qualitative score (high, medium and low) is established for ranking and measuring.
Manageability: This provides an idea of the ease of handling; consequently, it measures the complexity and level of expertise required for implementing the method. A qualitative score (high, medium and low) is established.
Transparency: This measures the ease with which an external observer can look into the internal process. The spectrum is quite wide, ranging from total transparency to a complete black box. A qualitative score (high, medium and low) is also established for ranking and measuring.
Table 1 shows the scores and values for each method, based on a rigorous analysis of the global scientific literature and on our own experience. The reader is referred to the previous section, where the insights of each method are described, and also to the Discussion and Conclusions sections, where a summary of the essential information drawn from this research is shown.

Assessment of Parameters
The interpretation of Table 1 is not trivial. The reader may conclude that there are several similarities among the methods, and that is correct. Consequently, a further assessment seems necessary, in which the main strengths, weaknesses, opportunities and threats (SWOT) for each method are shown and summarized (Table 2). After that, a final assessment comprising a suitability evaluation is carried out (Table 3). There, the degree of a method's suitability for assuring the highest level of rivers´ sustainability depends on the final use for which the method is designed and the type of service that it is able to deliver. Table 1. Parameters assessment. Measure of the predictability and reliability: low (*), medium (**) and high (***).

SWOT analysis
This evaluation comprises the identification and inclusion of the main feature for each of the four components of the SWOT analysis (Table 2). This in-depth analysis aims to provide a guide for final users to gain a general knowledge of each method's applicability, so that the usage for the goals to be reached is optimal. There is high variety in the traits across methods; however, it is worth mentioning some of the main patterns identified here. Regarding strengths, several methods show great computational power, which makes them able to cope with large amounts of data at very fast processing speeds. Other methods are known worldwide, which makes their usage easier for non-specialist users. Process-based hydrological models show their ability for capturing and reproducing physical knowledge. Other methods, such as MARS, reveal a highly predictive nature; furthermore, KF, EnKF and PF, as well as Copulas, show their capacity for dealing with hydrological uncertainty. Regarding weaknesses, the methods also show great heterogeneity. Some of them, such as process-based hydrological models or ANN, show scarce capacity for dealing with hydrological variability and for understanding the temporal behavior of runoff, respectively. WT and SDS reveal outputs that are hard to interpret; traditional techniques show very high input requirements; other methods, like MARS, Copulas, KF, EnKF and PF, are mathematically very complex to deal with. CR shows great opacity in its functioning, and other methods show a low ability for predicting. Regarding opportunities, the most frequent and important trait is hybridization with other methods. Finally, the threat analysis also shows the methods´ deficiencies regarding opacity, as well as inaccurate, manifold, overrated and complex usage, among others.

Suitability Assessment for Rivers´ Sustainability
Measuring a method's suitability is a quite complex matter, and the procedure designed and followed in this research is explained as follows. The level of suitability depends on the final use for which the method is designed. For that reason, this analysis is divided into the different services that the method may address. These services are: average prediction (AP), current conditions simulation (CCS), temporal dependence evaluation (TDE), spatio-temporal dependence evaluation (SPTDE), extreme prediction (EP) and protection actions (PA). Finally, a global suitability score (GSS) for the sustainability assessment is provided.
The quantitative score for each service/method is explained as follows (Table 3): the highest level of suitability is symbolized with (***), the second level with (**), the third level with (*) and, finally, the absence of suitability leaves the cell blank.
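As an illustration of this scoring scheme, the star ratings can be mapped to points and averaged into a GSS. The method names and ratings below are hypothetical placeholders, not the actual values of Table 3:

```python
# Hypothetical illustration of the scoring scheme: stars are mapped to
# 0-3 points per service and averaged into a global suitability score (GSS).
STAR_VALUE = {"": 0, "*": 1, "**": 2, "***": 3}

# Placeholder ratings per method over the six services
# (AP, CCS, TDE, SPTDE, EP, PA) -- not the values of Table 3.
ratings = {
    "method A": ["***", "**", "***", "**", "*", ""],
    "method B": ["*", "", "*", "", "**", "*"],
}

def global_suitability(stars):
    """Average the per-service points into a single GSS."""
    points = [STAR_VALUE[s] for s in stars]
    return sum(points) / len(points)

# Rank the methods by their GSS, best first.
ranking = sorted(ratings, key=lambda m: global_suitability(ratings[m]),
                 reverse=True)
for m in ranking:
    print(m, round(global_suitability(ratings[m]), 2))
```

Because the GSS is an average, a method can rank first overall while still scoring blank on some individual service, which is why the per-service columns remain informative.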
The analysis and interpretation of this assessment show that the best method for predicting runoff temporal behavior is CR. On the other hand, the worst methods for this purpose are the HJ-Biplot and the Hurst coefficient.

Discussion
Hydrological processes' variability is increasing at all scales, and extreme events are more recurrent over time. In order to address this new hydrological reality, and with the aim of anticipating and forecasting it, new approaches, methodologies and techniques have been analyzed in this research. This analysis is articulated in three phases: identification and assessment of parameters, SWOT analysis and suitability assessment.
Seven parameters have been identified and evaluated, and their assessment is discussed as follows. For predictability, the methods that behave best are ANN, WT, CR and Copulas. On the other hand, considering that they were not initially or specifically conceived for developing predictions, the worst methods for developing a prediction are process-based hydrological models, linear and nonlinear regression models, the Pearson correlogram coefficient (correlogram), the Spearman and Kendall estimators, multivariate adaptive regression splines (MARS) and the HJ-Biplot. For reliability, the best methods are CR and FAV; on the contrary, the worst methods are process-based hydrological models and the Pearson correlogram coefficient (correlogram). Regarding the mathematical approach for dealing with uncertainty, 11 different values have been identified: null, null/scarce, scaling coefficients at different resolutions, regression, correlogram coefficient, optimization, mathematical residual coefficients, probability, bivariate function, random samples (ensembles) and different statistical expressions. The methods that do not include any uncertainty treatment are artificial neural networks (ANN), system dynamics (SDs) and the Hurst coefficient. Regarding the mathematical approach for processing, four values are identified: theoretical algorithms, dynamic and adaptive mathematical expressions, constant mathematical expressions and matrix visualization/cluster analysis. Then, regarding the amount of processed information, none of the studied methods requires low amounts of data, which is reasonable because, usually, the more information used to populate the method, the better the performance level obtained. For manageability, the easiest methods to work with are the Pearson correlogram coefficient (correlogram) and the Hurst coefficient; on the contrary, the most complicated and sophisticated methods to deal with are ANN, WT and SDs.
Finally, regarding transparency, the clearest methods are the Pearson correlogram coefficient (correlogram) and the Hurst coefficient, while the most opaque are ANN, SDs, CR, Copulas, the Kalman filter and the particle filter.
The SWOT analysis constitutes a multidimensional analysis largely focused on the advantages and drawbacks of each method. In this sense, one feature has been identified and included for each of the four components of the SWOT analysis, namely strength, weakness, opportunity and threat (Table 2). Some of the most important traits are the following. On the strengths side, several methods show great computational power, others are globally known, others aim to capture and reproduce physical knowledge, others have a clear predictive nature and others, such as the Kalman filter and the particle filter, show their capacity for dealing with hydrological uncertainty. Regarding weaknesses, some of the methods show scarce capacity for dealing with hydrological variability and with the temporal understanding of runoff. Hard-to-interpret outputs and very high input requirements are also common traits; other methods are mathematically very complex to deal with; opacity in functioning and low predictive skill are also important weaknesses to mention. Regarding opportunities, the most frequent and important trait is hybridization with other methods. Finally, the threat analysis also shows the methods´ deficiencies regarding opacity, as well as inaccurate, manifold, overrated and complex usage, among others.
The suitability assessment for rivers´ sustainability comprises an analysis of six services: average prediction (AP), current conditions simulation (CCS), temporal dependence evaluation (TDE), spatio-temporal dependence evaluation (SPTDE), extreme prediction (EP) and protection actions (PA). Finally, a global suitability score (GSS) for the sustainability assessment is provided. This evaluation reveals that the method that behaves best on average for achieving temporal river sustainability is causal reasoning. It should be noted that this final score is an average and that, in this case, CR has an important drawback: its great opacity across the whole implementation.
It is worth highlighting that none of the analyzed methods is, by itself, fully appropriate for implementing protection actions. For this service, the best methods are the traditional physical methods (process-based hydrological models), which allow changing the physical features of the hydrological model.

Conclusions
This review paper presents an extensive analysis of the most important methods for the hydrological understanding and prediction of rivers´ runoff behavior. There is, of course, high variability in the dimensions involved in this research; consequently, some methods are more appropriate for one dimension than for others. However, in general terms, the most recommended method is causal reasoning, despite its dark (black-box) nature, its need for large amounts of data and its difficult manageability. The reasons why CR is the best-ranked method largely comprise its flexibility, articulated through the possibility of incorporating dynamic and adaptive mathematical expressions, its reliability, its predictability, its ease of hybridization with other methods and its uncertainty treatment, which is probability, in itself a very powerful uncertainty tool.
This research may be useful for helping to achieve much greater sustainability of rivers, since their temporal hydrological behavior is far from being well understood.