Abstract: The ever-increasing availability of new remote sensing and land surface model datasets opens new opportunities for hydrologists to improve flood forecasting systems. The current study investigates the performance of two operational soil moisture (SM) products provided by the “EUMETSAT Satellite Application Facility in Support of Operational Hydrology and Water Management” (H-SAF, http://hsaf.meteoam.it/) within a recently-developed hydrological model called the “simplified continuous rainfall-runoff model” (SCRRM) and the possibility of using such a model at an operational level. The model uses SM datasets derived from external sources (i.e., remote sensing and land surface models) as input for calculating the initial wetness conditions of the catchment prior to the flood event. Hydro-meteorological data from 35 Italian catchments ranging from 800 to 7400 km2 were used for the analysis, for a total of 593 flood events. The results show that the H-SAF operational products used within SCRRM satisfactorily reproduce the selected flood events, providing a median Nash–Sutcliffe efficiency index equal to 0.64 (SM-OBS-1) and 0.60 (SM-DAS-2). Given the results obtained, along with the parsimony and simplicity of the model and its independence from continuously-recorded rainfall and evapotranspiration data, the study suggests that: (i) SM-OBS-1 and SM-DAS-2 contain useful information for flood modelling, which can be exploited in flood forecasting; and (ii) SCRRM is expected to be beneficial as a component of real-time flood forecasting systems in regions characterized by low data availability, where a continuous modelling approach can be problematic.
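The Nash–Sutcliffe efficiency (NSE) index used to score the flood simulations above has a standard definition: one minus the ratio of the residual variance to the variance of the observed flows. A minimal sketch of its computation follows; the input values are purely illustrative and are not data from the study.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    NSE = 1 is a perfect fit; NSE <= 0 means the simulation is no better a
    predictor than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual_var = np.sum((observed - simulated) ** 2)
    observed_var = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual_var / observed_var

# Illustrative discharge values only (not from the 593 events in the study):
obs = [10.0, 30.0, 55.0, 40.0, 20.0]
sim = [12.0, 28.0, 50.0, 43.0, 18.0]
print(round(nse(obs, sim), 3))
```

A median NSE of 0.64 across events, as reported for SM-OBS-1, indicates that the model explains well over half of the observed flow variance for the typical event.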
Abstract: Water is essential to all forms of life and is regarded as the most important and most heavily exploited natural resource in the world. Water moves through, and is stored in, different compartments of the Earth environment (atmosphere, glaciers, vegetation, soil, shallow and deep subsurface, rivers, lakes and oceans) at velocities and in quantities strongly controlled by current and past hydrometeorological and geological conditions. Understanding and quantifying the quantitative and qualitative aspects of physical processes such as precipitation, evaporation, transpiration, interception, infiltration, runoff, recharge and surface water and groundwater discharge form the fundamental components of the hydrological sciences. Hence, one definition of hydrology is the scientific study of the occurrence, distribution, movement and properties of water and its interaction with the environment within each phase of the hydrologic cycle. [...]
Abstract: Hydrological simulation, based on weather inputs and the physical characterization of the watershed, is a suitable approach to predict the corresponding streamflow. This work, carried out on four different watersheds, analyzed the impacts of using three different meteorological data inputs in the same model, comparing the model’s accuracy in terms of simulated versus observed streamflow. Meteorological data from the Daily Global Historical Climatology Network (GHCN-D), the North American Land Data Assimilation System (NLDAS) and the National Operational Hydrologic Remote Sensing Center’s Interactive Snow Information (NOHRSC-ISI) were used as input to the Soil and Water Assessment Tool (SWAT) hydrological model and compared as three different scenarios on each watershed. The results showed that meteorological data from an assimilation system like NLDAS achieved better results than simulations performed with ground-based meteorological data, such as GHCN-D. However, further work needs to be done to improve both the datasets and model capabilities in order to better predict streamflow.
Abstract: The increasing availability of digital databases (e.g., of climatology, topography, soils and land use) has enabled research into the generalisation of hydrological model parameter values from physical properties and the development of grid-based models. A hydrological modelling framework (HMF) is being developed to exploit this generalisation and provide a flexible gridded infrastructure, operational over regional, national or larger scales at a range of spatial and temporal resolutions. The capability of the framework is demonstrated through adaptation of an existing semi-distributed catchment-based rainfall-runoff model, CLASSIC, for which a generalised methodology exists to determine parameter values. The main change required was to ensure consistency of parameter values between the runoff procedure in CLASSIC and flow routing in the HMF. Assessment is by comparison of modelled and observed flow at grid points in Britain corresponding to gauging stations, both for catchments previously modelled and for new locations, for a range of catchment areas and physical properties and for four spatial resolutions (10, 5, 2.5 and 1 km). Good model performance is achieved for 90% of catchments tested, with a 5 km resolution proving adequate for catchments larger than 500 km2. Applications are outlined for which the framework could be used to test alternative modelling approaches or undertake consistent studies across the range of resolutions.
Abstract: Artificial Neural Networks (ANNs) are classified as a data-driven technique, which implies that their learning improves as more training data are presented. This observation is based on the premise that a longer time series of training samples will contain more events of different types, and hence, the generalization ability of the ANN will improve. However, a longer time series need not necessarily contain more information. If there is considerable repetition of the same type of information, the ANN may not become “wiser”, and one may simply be wasting computational effort and time. This study assumes that there are segments in a long time series that contain a large amount of information. The reason behind this assumption is that the information contained in any hydrological series is not uniformly distributed, and it may be cyclic in nature. If an ANN is trained using these segments rather than the whole series, the training should be as good or better, depending on the information contained in the series. A pre-processing step can be used to select information-rich data for training. However, most conventional pre-processing methods do not perform well because of the large variation in magnitude and scale and the many zeros in the data series. It is therefore difficult to identify these information-rich segments in long time series with large variation in magnitude and many zeros. In this study, the data depth function was used as a tool for the identification of critical (information-rich) segments in a time series, a method that is insensitive to large variation in magnitude or scale and to the presence of many zeros in the data. Data from two gauging sites were used to compare the performance of an ANN trained on the whole data set against one trained only on data from critical events. Selection of data for critical events was done by two methods: using the depth function (the identification of critical events (ICE) algorithm) and using random selection.
Inter-comparison of the performance of ANNs trained using the complete data sets and the pruned data sets shows that the ANN trained on data from critical events, i.e., information-rich data (whose length could be one third to one half of the series), gave results similar to those of the ANN trained using the complete data set. However, if the data set is pruned randomly, the performance of the ANN degrades significantly. The concept presented in this paper may be very useful for training data-driven models when the training time series is incomplete.
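The general idea of depth-based pruning, selecting windows of a series that contain extreme (low-depth) values rather than pruning at random, can be sketched as follows. This is a minimal illustration only: the abstract does not specify the depth function or the details of the ICE algorithm, so the rank-based univariate depth, window length and retained fraction below are all assumptions introduced for illustration.

```python
import numpy as np

def rank_depth(series):
    """Simple univariate depth: for each value, min(# points <= it, # points >= it) / n.
    Extreme values get low depth; values near the bulk of the data get high depth.
    This is a stand-in for the (unspecified) depth function used in the paper."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    below = np.array([(x <= v).sum() for v in x])
    above = np.array([(x >= v).sum() for v in x])
    return np.minimum(below, above) / n

def select_critical_segments(series, window=5, keep_fraction=0.4):
    """Rank fixed-length windows by their minimum depth (windows containing
    extreme, information-rich values rank first) and keep the top fraction.
    Window length and keep_fraction are illustrative choices, not values
    from the study."""
    x = np.asarray(series, dtype=float)
    depth = rank_depth(x)
    n_windows = len(x) // window
    scores = [depth[i * window:(i + 1) * window].min() for i in range(n_windows)]
    order = np.argsort(scores)          # lowest depth = most "critical" first
    n_keep = max(1, int(keep_fraction * n_windows))
    kept = sorted(order[:n_keep])       # preserve temporal order of kept windows
    return np.concatenate([x[i * window:(i + 1) * window] for i in kept])
```

On a mostly-flat series with a single flood-like spike, such a selection retains the windows around the spike while discarding much of the repetitive low-flow record, which is the behaviour the abstract attributes to information-rich pruning, in contrast to random pruning, which is likely to discard the rare events.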