Article

Comparison of Tree-Based Ensemble Algorithms for Merging Satellite and Earth-Observed Precipitation Data at the Daily Time Scale

by Georgia Papacharalampous *, Hristos Tyralis, Anastasios Doulamis and Nikolaos Doulamis
Department of Topography, School of Rural, Surveying and Geoinformatics Engineering, National Technical University of Athens, Iroon Polytechniou 5, 157 80 Zografou, Greece
* Author to whom correspondence should be addressed.
Hydrology 2023, 10(2), 50; https://doi.org/10.3390/hydrology10020050
Submission received: 31 December 2022 / Revised: 30 January 2023 / Accepted: 31 January 2023 / Published: 12 February 2023

Abstract

Merging satellite products and ground-based measurements is often required for obtaining precipitation datasets that simultaneously cover large regions with high density and are more accurate than pure satellite precipitation products. Machine and statistical learning regression algorithms are regularly utilized in this endeavor. At the same time, tree-based ensemble algorithms are adopted in various fields for solving regression problems with high accuracy and low computational costs. Still, information on which tree-based ensemble algorithm to select for correcting satellite precipitation products for the contiguous United States (US) at the daily time scale is missing from the literature. In this study, we worked towards filling this methodological gap by conducting an extensive comparison between three algorithms of the category of interest, specifically between random forests, gradient boosting machines (gbm) and extreme gradient boosting (XGBoost). We used daily data from the PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) and the IMERG (Integrated Multi-satellitE Retrievals for GPM) gridded datasets. We also used earth-observed precipitation data from the Global Historical Climatology Network daily (GHCNd) database. The experiments referred to the entire contiguous US and additionally included the application of the linear regression algorithm for benchmarking purposes. The results suggest that XGBoost is the best-performing tree-based ensemble algorithm among those compared. Indeed, the mean relative improvements that it provided with respect to linear regression (for the case that the latter algorithm was run with the same predictors as XGBoost) are equal to 52.66%, 56.26% and 64.55% (for three different predictor sets), while the respective values are 37.57%, 53.99% and 54.39% for random forests, and 34.72%, 47.99% and 62.61% for gbm. Lastly, the results suggest that IMERG is more useful than PERSIANN in the context investigated.

1. Introduction

Machine and statistical learning algorithms (e.g., those documented in [1,2,3]) are increasingly adopted for solving a variety of practical problems in hydrology ([4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]) and beyond ([19,20,21,22,23]). Among the entire pool of such algorithms, the tree-based ensemble ones (i.e., those combining decision trees under properly designed ensemble learning strategies; [24]) are of special interest for many practical problems, as most of their software implementations offer high predictive performance with low computational cost, together with high automation and some degree of explainability ([25,26]). Additionally, they usually do not require extensive preprocessing and hyperparameter tuning to perform well ([1,26]). On the other hand, as they are highly flexible algorithms, they are less interpretable than simpler algorithms (e.g., linear regression), due to the well-recognized trade-off between interpretability and flexibility ([2]).
Notably, the known theoretical properties of the various tree-based ensemble algorithms (including random forests, gradient boosting machines (gbm) and extreme gradient boosting (XGBoost); [27,28,29]) cannot support the selection of the most appropriate one among them for each practical problem. Instead, such a selection could rely on attentively designed empirical comparisons. Thus, such comparisons of tree-based ensemble algorithms are conducted with increasing frequency in various scientific fields ([30,31,32,33,34,35]).
Tree-based ensemble algorithms are regularly applied and compared to other machine and statistical learning algorithms for the task of merging satellite products and ground-based measurements. This task, which constitutes the general focus of this work together with the general concept of tree-based ensemble algorithms, is commonly executed in the literature with the aim of obtaining precipitation datasets that cover large geographical regions with high density and, simultaneously, are more accurate than pure satellite precipitation products. The importance of this task can be perceived through the inspection of the major research topics appearing in the hydrological literature (see, e.g., those discussed in [36,37]). Relevant examples of applications and comparisons are available in [38,39,40,41,42,43,44].
These examples refer to various temporal resolutions and many different geographical regions around the globe (see also the reviews by [45,46]), with the daily temporal resolution and the United States (US) being frequent cases. Nonetheless, a relevant comparison of tree-based ensemble algorithms for this temporal resolution and this geographical region is missing from the literature, with the closest investigation currently available being the one in [43], which focuses on China. In this work, we fill this specific literature gap. Notably, the selection of the most accurate regression algorithm from the tree-based ensemble family could be particularly useful at the daily temporal scale, at which the size of the datasets for large geographical areas might impose significant limitations on the application of other accurate machine and statistical learning regression algorithms due to their large computational costs.

2. Methods

Random forests, gbm and XGBoost were applied in a cross-validation setting (see Section 3.2) for conducting an extensive comparison in the context of merging gridded satellite products and ground-based measurements at the daily time scale. Additionally, the linear regression algorithm was applied in the same setting for benchmarking purposes. In this section, we provide brief descriptions of the four aforementioned algorithms. Extended descriptions are beyond the scope of this work, as they are widely available in the machine and statistical learning literature (e.g., in [1,2,3]). Statistical software information that ensures the work’s reproducibility is provided in Appendix A.

2.1. Linear Regression

The results of this work are reported in terms of relative scores (see Section 3.3). These scores were computed with respect to the linear regression algorithm, which models the dependent variable as a linear weighted sum of the predictor variables ([1], pp. 43–55). This algorithm was fitted by minimizing the squared error scoring function.
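As a minimal illustration only (not the exact scripts of this work), a linear regression fitted by least squares can be run in R as follows; the data frame, column names and toy values are placeholders introduced here for illustration:

```r
# Toy placeholder data; the real regression settings use the predictor sets of Table 1.
set.seed(1)
toy <- data.frame(persiann_value_1 = rgamma(400, 0.5), station_elevation = runif(400, 0, 3000))
toy$precip_obs <- toy$persiann_value_1 + rnorm(400, sd = 0.1)
train_df <- toy[1:200, ]; test_df <- toy[201:400, ]

# Ordinary least squares, i.e., minimization of the squared error scoring function.
fit_lr <- lm(precip_obs ~ ., data = train_df)
pred_lr <- predict(fit_lr, newdata = test_df)
```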

2.2. Random Forests

Random forests ([27]) are the most commonly used algorithm in the context of merging gridded satellite products and ground-based measurements (see the examples in [47]). A detailed description of this algorithm can be found in [25], along with a systematic review of its application in water resources. Notably, random forests are an ensemble learning algorithm and, more precisely, an ensemble of regression trees that is based on bagging (acronym for “bootstrap aggregation”) but with an additional randomization procedure. The latter aims at reducing overfitting. In this work, random forests were implemented with all their hyperparameters kept as default. For instance, the number of trees was equal to 500. This methodological choice is adequate, as random forests are known to perform well without tuning as long as they are applied with a large number of trees ([25]).
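A minimal sketch of such a fit, using the ranger package listed in Appendix A, is given below; the data frame, column names and toy values are placeholders introduced here for illustration and not the actual data of this work:

```r
library(ranger)

# Toy placeholder data; the real regression settings use the predictor sets of Table 1.
set.seed(1)
toy <- data.frame(persiann_value_1 = rgamma(400, 0.5), station_elevation = runif(400, 0, 3000))
toy$precip_obs <- toy$persiann_value_1 + rnorm(400, sd = 0.1)
train_df <- toy[1:200, ]; test_df <- toy[201:400, ]

# 500 trees; all remaining hyperparameters left at their defaults.
fit_rf <- ranger(precip_obs ~ ., data = train_df, num.trees = 500)
pred_rf <- predict(fit_rf, data = test_df)$predictions
```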

2.3. Gradient Boosting Machines

Another ensemble learning algorithm that was herein used with regression trees as base learners is gbm ([28,48]). The main concept behind this ensemble algorithm and, more generally, behind all the boosting algorithms (including the one described in Section 2.4) is the iterative training of new base learners on the errors of the previously trained base learners ([26,49]). In gradient boosting machines, a gradient descent algorithm is applied to minimize a loss function, which herein is the squared error scoring function. Consistency with respect to the implementation of random forests was ensured by setting the number of trees equal to 500. The remaining hyperparameters were kept as default. This latter methodological choice is expected to be adequate, as boosting procedures are designed with the ability to run as “off the shelf” procedures ([26]).
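The corresponding fit can be sketched with the gbm package listed in Appendix A; here, "gaussian" denotes the squared error loss, and the data frame, column names and toy values are placeholders introduced for illustration:

```r
library(gbm)

# Toy placeholder data; the real regression settings use the predictor sets of Table 1.
set.seed(1)
toy <- data.frame(persiann_value_1 = rgamma(400, 0.5), station_elevation = runif(400, 0, 3000))
toy$precip_obs <- toy$persiann_value_1 + rnorm(400, sd = 0.1)
train_df <- toy[1:200, ]; test_df <- toy[201:400, ]

# Squared error loss ("gaussian"), 500 trees; remaining hyperparameters at their defaults.
fit_gbm <- gbm(precip_obs ~ ., distribution = "gaussian", data = train_df, n.trees = 500)
pred_gbm <- predict(fit_gbm, newdata = test_df, n.trees = 500)
```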

2.4. Extreme Gradient Boosting

XGBoost ([29]) is the third tree-based ensemble learning algorithm and the second boosting algorithm implemented in this work. In the implementations of this work, all the hyperparameters were kept as default (as this is expected to be adequate; see the above section), except for the maximum number of iterations, which was set to 500.
Aside from applying XGBoost in a cross-validation setting for its comparison to the remaining algorithms, we also utilized it with the same hyperparameter values for ensuring some degree of explainability in terms of variable importance, within the broader explainable machine learning framework ([50,51,52]). Specifically, we computed the gain importance metric, which is available in the XGBoost algorithm. This metric assesses the “fractional contribution of each feature to the model based on the total gain of this feature’s splits”, with higher values indicating more important features ([53]).
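A minimal sketch of such a fit and of the gain-based importance computation, using the xgboost package listed in Appendix A, is given below; the predictor matrix, response and column names are toy placeholders introduced here for illustration:

```r
library(xgboost)

# Toy placeholder predictors and response.
set.seed(1)
x <- matrix(rgamma(400 * 3, 0.5), ncol = 3,
            dimnames = list(NULL, c("imerg_value_1", "persiann_value_1", "station_elevation")))
y <- x[, "imerg_value_1"] + rnorm(400, sd = 0.1)

# Squared error objective, 500 boosting iterations; remaining hyperparameters at defaults.
fit_xgb <- xgboost(data = x, label = y, nrounds = 500,
                   objective = "reg:squarederror", verbose = 0)

# Gain-based variable importance: fractional contribution of each feature to the model.
xgb.importance(model = fit_xgb)
```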

3. Data and Application

3.1. Data

For our experiments, we retrieved and used daily earth-observed precipitation, gridded satellite precipitation and elevation data for the gauged locations and grid points shown in Figure 1 and Figure 2.

3.1.1. Earth-Observed Precipitation Data

Daily precipitation totals from the Global Historical Climatology Network daily (GHCNd) database ([54,55,56]) were used for comparing the algorithms. More precisely, data from 7264 earth-located stations spanning the contiguous US (see Figure 1) were extracted. These data cover the two-year period 2014−2015 and were retrieved from the website of the NOAA National Climatic Data Center (https://www1.ncdc.noaa.gov/pub/data/ghcn/daily; accessed on 27 February 2022).

3.1.2. Satellite Precipitation Data

For comparing the algorithms, we additionally used gridded satellite daily precipitation data from the current operational PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) system (see the geographical locations of the extracted PERSIANN grid, with a spatial resolution of 0.25 degree × 0.25 degree, in Figure 2a) and the GPM IMERG (Integrated Multi-satellitE Retrievals for GPM) Late Precipitation L3 1 day 0.1 degree × 0.1 degree V06 product. These two gridded satellite precipitation databases were developed by the Centre for Hydrometeorology and Remote Sensing (CHRS) at the University of California, Irvine (UCI) and the National Aeronautics and Space Administration (NASA) Goddard Earth Sciences (GES) Data and Information Services Center (DISC), respectively. More precisely, the PERSIANN data were retrieved from the website of CHRS (https://chrsdata.eng.uci.edu; accessed on 7 March 2022) and the IMERG data were retrieved from the website of NASA Earth Data (https://doi.org/10.5067/GPM/IMERGDL/DAY/06; accessed on 10 December 2022). The extracted data cover the entire contiguous US for the two-year period 2014−2015. Notably, the extracted PERSIANN data were used in the experiments at their original spatial resolution, while the extracted GPM IMERG data were brought to a spatial resolution of 0.25 degree × 0.25 degree by applying bilinear interpolation on the CMORPH 0.25-degree grid. The data formed in this way and their grid (see Figure 2b) are those referred to in what follows as “IMERG values” and “IMERG grid”, respectively, and are the ones used in the experiments.
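The bilinear regridding step can be illustrated with a small self-contained function; the grid vectors, field values and target point below are toy placeholders and not the actual IMERG data or the exact code of this work:

```r
# Bilinear interpolation from a regular lon-lat grid to an arbitrary point.
# lons and lats must be increasing; z is a matrix with dim c(length(lons), length(lats)).
bilinear_at <- function(lon, lat, lons, lats, z) {
  i <- min(max(findInterval(lon, lons), 1), length(lons) - 1)
  j <- min(max(findInterval(lat, lats), 1), length(lats) - 1)
  tx <- (lon - lons[i]) / (lons[i + 1] - lons[i])
  ty <- (lat - lats[j]) / (lats[j + 1] - lats[j])
  (1 - tx) * (1 - ty) * z[i, j] + tx * (1 - ty) * z[i + 1, j] +
    (1 - tx) * ty * z[i, j + 1] + tx * ty * z[i + 1, j + 1]
}

# Toy 0.1-degree source grid and one target point of a coarser 0.25-degree grid.
lons <- seq(-125, -124, by = 0.1); lats <- seq(35, 36, by = 0.1)
z <- matrix(rgamma(length(lons) * length(lats), 0.5), nrow = length(lons))
bilinear_at(-124.625, 35.375, lons, lats, z)
```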

3.1.3. Elevation Data

Elevation is a key predictor variable for many hydrological processes ([57]). Therefore, we estimated its value for all the geographical locations shown in Figure 1. For this estimation, we relied on the Amazon Web Services (AWS) Terrain Tiles (https://registry.opendata.aws/terrain-tiles; accessed on 25 September 2022).
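Point elevation extraction from the AWS Terrain Tiles can be sketched with the elevatr package listed in Appendix A; the coordinates below are arbitrary examples, and the exact calls of the original workflow may differ:

```r
library(sf)
library(elevatr)

# Two example station locations (longitude, latitude; WGS84).
stations <- data.frame(lon = c(-105.27, -97.74), lat = c(40.01, 30.27))
pts <- st_as_sf(stations, coords = c("lon", "lat"), crs = 4326)

# Query the AWS Terrain Tiles service for point elevations (requires internet access).
elev <- get_elev_point(locations = pts, src = "aws")
elev$elevation
```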

3.2. Validation Setting and Predictor Variables

To formulate the regression settings of this work, we first defined the earth-observed daily total precipitation at a point of interest (e.g., station 1 in Figure 3) as the dependent variable. Then, we adopted procedures proposed in [58] to compute the observations of possible predictor variables. Separately for each of the two satellite precipitation grids (see Figure 2), we determined the four grid points closest to each of the ground-based stations depicted in Figure 1. We also computed the distances di, i = 1, 2, 3, 4 between the station and these grid points, and indexed the latter as Si, i = 1, 2, 3, 4 such that d1 < d2 < d3 < d4 (see Figure 3). Throughout this work, the distances di, i = 1, 2, 3, 4 are also, respectively, called “PERSIANN distances 1−4” or “IMERG distances 1−4” (depending on whether we refer to the PERSIANN grid or the IMERG grid) and the daily precipitation values at the grid points 1−4 are called “PERSIANN values 1−4” or “IMERG values 1−4” (depending on whether we refer to the PERSIANN grid or the IMERG grid).
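The nearest-grid-point computation can be sketched as follows; a plain Euclidean distance in degrees is used here as a placeholder metric (the actual study may have used a different distance definition), and the coordinates are toy values:

```r
# Indices and distances of the four grid points closest to a station,
# ordered so that d1 < d2 < d3 < d4.
nearest_four <- function(st_lon, st_lat, grid_lon, grid_lat) {
  d <- sqrt((grid_lon - st_lon)^2 + (grid_lat - st_lat)^2)  # placeholder distance metric
  idx <- order(d)[1:4]
  data.frame(grid_point = idx, distance = d[idx])
}

# Toy 0.25-degree grid around an example station location.
grid <- expand.grid(lon = seq(-105.5, -104.5, by = 0.25), lat = seq(39.5, 40.5, by = 0.25))
nearest_four(-105.27, 40.01, grid$lon, grid$lat)
```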
Based on the above, the predictor variables for the technical problem of interest could include the PERSIANN values 1−4, the IMERG values 1−4, the PERSIANN distances 1−4, the IMERG distances 1−4 and the station’s elevation. We defined and examined three sets of predictor variables (see Table 1). Each of them defines a different regression setting that includes 4,833,007 samples. These samples were exploited under a two-fold cross-validation scheme for comparing the three tree-based ensemble algorithms outlined in Section 2 in the context of merging gridded satellite precipitation products and ground-based precipitation measurements at the daily temporal scale. The same samples were also explored by estimating the Spearman correlation ([59]) for the various pairs of variables and by ranking the predictor variables based on their importance in the regression. The latter methodological step was performed by applying the explainable machine learning procedures offered by the XGBoost algorithm (see Section 2.4).
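A minimal sketch of the two-fold split and of the Spearman correlation estimation is given below; the sample data frame and its column names are toy placeholders introduced for illustration:

```r
set.seed(1)

# Toy samples; each real regression setting comprises 4,833,007 samples.
samples <- data.frame(true_value = rgamma(1000, 0.5), imerg_value_1 = rgamma(1000, 0.5))

# Random two-fold cross-validation split: fit on one fold, test on the other, then swap.
fold <- sample(rep(1:2, length.out = nrow(samples)))
train <- samples[fold == 1, ]
test  <- samples[fold == 2, ]

# Spearman correlation between a predictor variable and the predictand.
cor(samples$imerg_value_1, samples$true_value, method = "spearman")
```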

3.3. Performance Metrics and Assessment

The performance assessment relied on procedures proposed by [58], which are reported in what follows. First, we computed the median of the squared error function separately for each set {algorithm, predictor set, test fold}. Note that the squared error scoring function can adequately support our performance comparisons, as it is consistent for the mean functional of the predictive distributions ([60]). Subsequently, two relative scores (also referred to as “relative improvements” throughout this work) were computed for each set {algorithm, predictor set}. For that, the two median squared error (MedSE) values offered by each set {algorithm, predictor set} (each corresponding to a different test fold) were utilized, together with the corresponding MedSE values offered by the reference modeling approach, which was defined as the linear regression run with the same predictor set as the modeling approach to which the relative score referred. More precisely, each relative score was computed as the difference between the score of the set {algorithm, predictor set} and the score of the reference modeling approach, multiplied by 100 and divided by the score of the reference modeling approach. Then, mean relative scores (also referred to as “mean relative improvements” throughout this work) were computed by averaging, separately for each set {algorithm, predictor set}, the relative scores. The procedures for computing the relative scores and the mean relative scores were repeated by considering the set {linear regression, predictor set 1} as the reference modeling approach for all the sets {algorithm, predictor set}.
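In symbols, the relative score defined verbally above can be restated as follows; the notation a, p and “ref” is introduced here only for illustration and simply rewrites the computation described in the text:

$$
\mathrm{RS}_{\{a,\,p\}} \;=\; 100 \times \frac{\mathrm{MedSE}_{\{a,\,p\}} - \mathrm{MedSE}_{\mathrm{ref},\,p}}{\mathrm{MedSE}_{\mathrm{ref},\,p}}
$$

where a denotes the algorithm, p the predictor set and MedSE_ref,p the median squared error of the reference modeling approach (linear regression run with predictor set p, or with predictor set 1 in the second round of computations).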
Mean rankings of the machine and statistical learning algorithms were also computed. For that, and separately for each set {case, predictor set} (with each case belonging to one test fold only), we first ranked the four algorithms based on their squared errors. Then, we averaged these rankings, separately for each set {predictor set, test fold}. Lastly, we obtained the mean rankings reported by averaging the two previously computed mean ranking values corresponding to the same predictor set. We also computed the rankings collectively for all the predictor sets.
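The per-case ranking and its averaging can be sketched with toy squared errors for three cases; the values below are placeholders introduced for illustration:

```r
# Toy squared errors: one row per case, one column per algorithm.
sq_err <- rbind(c(4.0, 1.2, 1.5, 1.1),
                c(9.3, 2.1, 2.0, 1.8),
                c(0.9, 0.8, 1.1, 0.7))
colnames(sq_err) <- c("linear_regression", "random_forests", "gbm", "xgboost")

# Rank the four algorithms within each case (rank 1 = smallest squared error),
# then average the rankings, as done separately for each {predictor set, test fold}.
ranks <- t(apply(sq_err, 1, rank))
colMeans(ranks)
```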

4. Results

4.1. Regression Setting Exploration

Regression setting explorations can facilitate interpretations of the results of prediction experiments, at least to some extent. Therefore, in Figure 4, we present the Spearman correlation estimates for the various variable pairs appearing in the regression settings examined in this work. As could be expected, the magnitude of the relationships between the predictand (i.e., the precipitation quantity observed at the earth-located stations, which is referred to as “true value” in Figure 4) and the 17 predictor variables seems to depend, to some extent, on the satellite rainfall product. Indeed, the Spearman correlation estimates for the relationships between the predictand and the precipitation quantities from the IMERG grid are equal to 0.45, while the corresponding estimates for the case of the PERSIANN grid are equal to 0.40. The remaining relationships between the predictand and the predictor variables are far less intense, almost negligible, based on the Spearman correlation statistic. Still, they could also provide information in the regression settings.
The relationships between the predictor variables also exhibit various intensities. The most intense among them, according to the Spearman correlation statistic, are the relationships between the PERSIANN values, for which the estimates obtained are equal to 0.90, 0.91, 0.92 and 0.93. The relationships between the IMERG values are also intense, with the corresponding Spearman correlation estimates being equal to 0.80, 0.82, 0.83, 0.84 and 0.85. The Spearman correlation estimates referring to the relationships between the distances, as well as those referring to the relationships between the distances and the earth-located station’s elevation (with the latter being referred to as “station elevation” in the visualizations), are either positive or negative and smaller (in absolute terms) than the Spearman correlation estimates referring to the relationships between the PERSIANN values and the relationships between the IMERG values. Still, some of them are of similar magnitude as those referring to the relationships between the PERSIANN and IMERG values.
Furthermore, Figure 5 presents the importance scores and rankings computed for the 17 predictor variables through the XGBoost algorithm, with all of these predictor variables considered in the regression setting. The four IMERG values were found to be the most important predictor variables. Moreover, the fifth and sixth most important predictors are PERSIANN value 1 and station elevation, respectively, with PERSIANN values 2−4 following next, while the eight distances are the eight least important predictor variables. Notably, the fact that station elevation is more important than three of the four PERSIANN values could not have been anticipated by inspecting the Spearman correlation estimates (see again Figure 4).

4.2. Algorithm Comparison

Figure 6 facilitates the comparison of the four machine and statistical learning algorithms in terms of the squared error function, separately for each predictor set. The mean relative improvements (see Figure 6a) suggest that XGBoost is the best algorithm for all the predictor sets. For predictor set 1 (which incorporates, among others, information from the PERSIANN gridded precipitation dataset; see Table 1), random forests exhibit very similar performance to that of XGBoost. At the same time, for predictor set 2 (which incorporates, among others, information from the IMERG gridded precipitation dataset; see Table 1), gbm exhibits very similar performance to that of XGBoost. Additionally, the mean rankings (see Figure 6b) of random forests and XGBoost are of similar magnitude. In terms of the same criterion, gbm scores much closer to random forests and XGBoost than to linear regression.
Moreover, Figure 7 facilitates more detailed comparisons with respect to the frequency with which each algorithm appeared in each of the positions from the first to the fourth in the experiments. Here again, the comparisons can be made across both the algorithms and the predictor sets. The respective results are somewhat similar across the predictor sets. Indeed, linear regression is much more likely to be found in the fourth (i.e., the last) position than in any other position. It is also more likely to be found in the first position than in the second or third positions. At the same time, random forests are more likely to be ranked first than second, third or fourth, and gbm appears most often in the second and third positions. The last position is the least likely for both gbm and XGBoost. The latter algorithm is more likely to be ranked second; yet, the first and third positions are also far more likely for it than the last.
Lastly, Figure 8 and Figure 9 allow us to compare the degree of information offered by the two gridded precipitation products within the context of our regression problem, beyond the comparisons already allowed by the variable importance explorations using the gain metric incorporated into the XGBoost algorithm (see again Figure 5). Overall, the IMERG dataset was proven to be far more information-rich than the PERSIANN dataset, in terms of both mean relative improvement (see Figure 8a) and mean ranking (see Figure 8b). Indeed, the relative improvements with respect to the linear regression algorithm run with predictor set 1 are much larger for the tree-based algorithms when these algorithms are run with predictor set 2 than when they are run with predictor set 1. Additionally, predictor set 3 (which contains information from both gridded precipitation datasets) does not notably improve the performances in terms of mean relative improvements with respect to predictor set 2, although it does in terms of mean ranking. While the best modeling choice is {XGBoost, predictor set 3}, random forests were ranked in the first two positions more often than any other algorithm for predictor sets 2 and 3, when the ranking was made collectively for all the predictor sets (see Figure 9). Still, for the same predictor sets, XGBoost appeared in the last few positions much less often and achieved the best performance in terms of mean ranking when run with predictor set 3.

5. Discussion

Overall, XGBoost was proven to perform notably better than random forests and gbm when merging gridded satellite precipitation products and ground-based precipitation measurements for the contiguous US at the daily time scale. Notably, this result agrees with the results of multiple competitions, in which large datasets from other applied disciplines were utilized and XGBoost outperformed other boosting algorithms and random forests. The better performance of XGBoost compared with gbm could be attributed to the fact that the former algorithm was designed with additional hyperparameters compared with traditional boosting implementations, as well as with a regularization procedure for avoiding overfitting. Moreover, the variable importance scores obtained in this work and the predictive performance comparison across predictor sets indicate that the IMERG product offers more useful predictors than the PERSIANN product for the same time scale. In summary, when the former of these products is utilized (either alone or together with the latter), random forests are far behind both XGBoost and gbm in terms of accuracy. On the other hand, when PERSIANN is utilized without IMERG, gbm is far behind both XGBoost and random forests.
This latter result agrees, to some extent, with results obtained for the monthly time scale in [58], although the relative scores with respect to the linear regression algorithm were found to be somewhat lower therein. Note, however, that the comparison in this latter work relied on the PERSIANN satellite dataset only. Still, it is accurate to deduce that the improvements in performance with respect to the linear regression algorithm offered by XGBoost, gbm and random forests, when all four algorithms are run with the same predictor variables, are very large (i.e., from approximately 25% to approximately 65%) for both the daily and monthly time scales. Notably, even larger improvements could be achieved by combining predictions of diverse algorithms in advanced or even simple ensemble learning frameworks, following research efforts made in various fields (e.g., those by [24,61,62,63,64,65,66,67,68]).
As the main concepts behind the boosting and random forest families of algorithms are different (see Section 2 for brief summaries of these concepts), their combinations could be investigated in the direction of achieving these further improvements with a low computational cost. Moreover, their combination with the linear regression algorithm could also be investigated. Indeed, in some contexts, even the least accurate algorithms could benefit ensemble learning solutions (see, e.g., the relevant comparison outcome in [69]). In cases where the computational cost does not constitute a limiting factor in algorithm selection, neural network ([70,71,72]) and deep learning ([73,74]) regression algorithms could be added to the ensembles. Lastly, instead of aiming at providing accurate mean-value predictions, one could aim at providing accurate median-value predictions coupled with useful uncertainty estimates. This would require working on machine and statistical learning methods, such as those summarized and popularized in the reviews by [75,76].

6. Conclusions

Precipitation datasets that simultaneously cover large regions with high density and are more accurate than satellite precipitation products can be obtained by correcting such products using earth-observed datasets together with machine and statistical learning regression algorithms. Tree-based ensemble algorithms are adopted in various fields for solving algorithmic problems with high accuracy and lower computational cost compared with other algorithms. Still, information on which tree-based ensemble algorithm to select when the merging is conducted for the contiguous United States (US) and at the daily time scale, at which the computational requirements might constitute a crucial factor to consider along with accuracy, is missing from the literature of satellite precipitation product correction.
Herein, we worked towards filling this methodological gap. We conducted an extensive comparison between three tree-based ensemble algorithms, specifically random forests, gradient boosting machines (gbm) and extreme gradient boosting (XGBoost). We exploited daily information from the PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) and the IMERG (Integrated Multi-satellitE Retrievals for GPM) gridded datasets, and daily earth-observed information from the Global Historical Climatology Network daily (GHCNd) database. The entire contiguous US was examined, and generalizable results were obtained. These results indicate that XGBoost is more accurate than random forests and gbm. They also indicate that IMERG is more useful than PERSIANN in the context investigated.

Author Contributions

G.P. and H.T. conceptualized and designed the work with input from A.D. and N.D.; G.P. and H.T. performed the analyses and visualizations, and wrote the first draft, which was commented on and enriched with new text, interpretations and discussions by A.D. and N.D.; All authors have read and agreed to the published version of the manuscript.

Funding

This work was conducted in the context of the research project BETTER RAIN (BEnefiTTing from machine lEarning algoRithms and concepts for correcting satellite RAINfall products). This research project was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the “3rd Call for H.F.R.I. Research Projects to support Post-Doctoral Researchers” (Project Number: 7368).

Data Availability Statement

The data used in this paper are open (see Section 3.1).

Acknowledgments

The authors are sincerely grateful to the Journal for inviting the submission of this paper, and to the editor and reviewers for their constructive remarks.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

We used the R programming language ([77]) to implement the algorithms, and to report and visualize the results.
For data processing and visualizations, we used the contributed R packages caret ([78]), data.table ([79]), elevatr ([80]), ncdf4 ([81]), rgdal ([82]), sf ([83,84]), spdep ([85,86,87]) and tidyverse ([88,89]).
The algorithms were implemented using the contributed R packages gbm ([90]), ranger ([91,92]) and xgboost ([53]).
The performance metrics were computed by implementing the contributed R package scoringfunctions ([76,93]).
Reports were produced using the contributed R packages devtools ([94]), knitr ([95,96,97]) and rmarkdown ([98,99,100]).

References

  1. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer: New York, NY, USA, 2009. [Google Scholar] [CrossRef]
  2. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; Springer: New York, NY, USA, 2013. [Google Scholar] [CrossRef]
  3. Efron, B.; Hastie, T. Computer Age Statistical Inference; Cambridge University Press: New York, NY, USA, 2016. [Google Scholar] [CrossRef]
  4. Dogulu, N.; López López, P.; Solomatine, D.P.; Weerts, A.H.; Shrestha, D.L. Estimation of predictive hydrologic uncertainty using the quantile regression and UNEEC methods and their comparison on contrasting catchments. Hydrol. Earth Syst. Sci. 2015, 19, 3181–3201. [Google Scholar] [CrossRef]
  5. Xu, L.; Chen, N.; Zhang, X.; Chen, Z. An evaluation of statistical, NMME and hybrid models for drought prediction in China. J. Hydrol. 2018, 566, 235–249. [Google Scholar] [CrossRef]
  6. Quilty, J.; Adamowski, J.; Boucher, M.A. A stochastic data-driven ensemble forecasting framework for water resources: A case study using ensemble members derived from a database of deterministic wavelet-based models. Water Resour. Res. 2019, 55, 175–202. [Google Scholar] [CrossRef]
  7. Curceac, S.; Atkinson, P.M.; Milne, A.; Wu, L.; Harris, P. Adjusting for conditional bias in process model simulations of hydrological extremes: An experiment using the North Wyke Farm Platform. Front. Artif. Intell. 2020, 3, 82. [Google Scholar] [CrossRef] [PubMed]
  8. Quilty, J.; Adamowski, J. A stochastic wavelet-based data-driven framework for forecasting uncertain multiscale hydrological and water resources processes. Environ. Model. Softw. 2020, 130, 104718. [Google Scholar] [CrossRef]
  9. Rahman, A.T.M.S.; Hosono, T.; Kisi, O.; Dennis, B.; Imon, A.H.M.R. A minimalistic approach for evapotranspiration estimation using the Prophet model. Hydrol. Sci. J. 2020, 65, 1994–2006. [Google Scholar] [CrossRef]
  10. Althoff, D.; Rodrigues, L.N.; Bazame, H.C. Uncertainty quantification for hydrological models based on neural networks: The dropout ensemble. Stoch. Environ. Res. Risk Assess. 2021, 35, 1051–1067. [Google Scholar] [CrossRef]
  11. Fischer, S.; Schumann, A.H. Regionalisation of flood frequencies based on flood type-specific mixture distributions. J. Hydrol. X 2021, 13, 100107. [Google Scholar] [CrossRef]
  12. Cahyono, M. The development of explicit equations for estimating settling velocity based on artificial neural networks procedure. Hydrology 2022, 9, 98. [Google Scholar] [CrossRef]
  13. Papacharalampous, G.; Tyralis, H. Time series features for supporting hydrometeorological explorations and predictions in ungauged locations using large datasets. Water 2022, 14, 1657. [Google Scholar] [CrossRef]
  14. Mehedi, M.A.A.; Khosravi, M.; Yazdan, M.M.S.; Shabanian, H. Exploring temporal dynamics of river discharge using univariate long short-term memory (LSTM) recurrent neural network at East Branch of Delaware River. Hydrology 2022, 9, 202. [Google Scholar] [CrossRef]
  15. Rozos, E.; Koutsoyiannis, D.; Montanari, A. KNN vs. Bluecat—Machine learning vs. classical statistics. Hydrology 2022, 9, 101. [Google Scholar] [CrossRef]
  16. Rozos, E.; Leandro, J.; Koutsoyiannis, D. Development of rating curves: Machine learning vs. statistical methods. Hydrology 2022, 9, 166. [Google Scholar] [CrossRef]
  17. Granata, F.; Di Nunno, F.; Najafzadeh, M.; Demir, I. A stacked machine learning algorithm for multi-step ahead prediction of soil moisture. Hydrology 2023, 10, 1. [Google Scholar] [CrossRef]
  18. Payne, K.; Chami, P.; Odle, I.; Yawson, D.O.; Paul, J.; Maharaj-Jagdip, A.; Cashman, A. Machine learning for surrogate groundwater modelling of a small carbonate island. Hydrology 2023, 10, 2. [Google Scholar] [CrossRef]
  19. Goetz, J.N.; Brenning, A.; Petschko, H.; Leopold, P. Evaluating machine learning and statistical prediction techniques for landslide susceptibility modeling. Comput. Geosci. 2015, 81, 1–11. [Google Scholar] [CrossRef]
  20. Bahl, M.; Barzilay, R.; Yedidia, A.B.; Locascio, N.J.; Yu, L.; Lehman, C.D. High-risk breast lesions: A machine learning model to predict pathologic upgrade and reduce unnecessary surgical excision. Radiology 2018, 286, 810–818. [Google Scholar] [CrossRef]
  21. Feng, D.C.; Liu, Z.T.; Wang, X.D.; Chen, Y.; Chang, J.Q.; Wei, D.F.; Jiang, Z.M. Machine learning-based compressive strength prediction for concrete: An adaptive boosting approach. Constr. Build. Mater. 2020, 230, 117000. [Google Scholar] [CrossRef]
  22. Rustam, F.; Khalid, M.; Aslam, W.; Rupapara, V.; Mehmood, A.; Choi, G.S. A performance comparison of supervised machine learning models for Covid-19 tweets sentiment analysis. PLoS ONE 2021, 16, e0245909. [Google Scholar] [CrossRef]
  23. Bamisile, O.; Oluwasanmi, A.; Ejiyi, C.; Yimen, N.; Obiora, S.; Huang, Q. Comparison of machine learning and deep learning algorithms for hourly global/diffuse solar radiation predictions. Int. J. Energy Res. 2022, 46, 10052–10073. [Google Scholar] [CrossRef]
  24. Sagi, O.; Rokach, L. Ensemble learning: A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1249. [Google Scholar] [CrossRef]
  25. Tyralis, H.; Papacharalampous, G.; Langousis, A. A brief review of random forests for water scientists and practitioners and their recent history in water resources. Water 2019, 11, 910. [Google Scholar] [CrossRef]
  26. Tyralis, H.; Papacharalampous, G. Boosting algorithms in energy research: A systematic review. Neural Comput. Appl. 2021, 33, 14101–14117. [Google Scholar] [CrossRef]
  27. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  28. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  29. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar] [CrossRef]
  30. Fan, J.; Yue, W.; Wu, L.; Zhang, F.; Cai, H.; Wang, X.; Lu, X.; Xiang, Y. Evaluation of SVM, ELM and four tree-based ensemble models for predicting daily reference evapotranspiration using limited meteorological data in different climates of China. Agric. For. Meteorol. 2018, 263, 225–241. [Google Scholar] [CrossRef]
  31. Besler, E.; Wang, Y.C.; Chan, T.C.; Sahakian, A.V. Real-time monitoring radiofrequency ablation using tree-based ensemble learning models. Int. J. Hyperth. 2019, 36, 427–436. [Google Scholar] [CrossRef]
  32. Ahmad, T.; Zhang, D. Novel deep regression and stump tree-based ensemble models for real-time load demand planning and management. IEEE Access 2020, 8, 48030–48048. [Google Scholar] [CrossRef]
  33. Liu, C.; Zhou, S.; Wang, Y.G.; Hu, Z. Natural mortality estimation using tree-based ensemble learning models. ICES J. Mar. Sci. 2020, 77, 1414–1426. [Google Scholar] [CrossRef]
  34. Ziane, A.; Dabou, R.; Necaibia, A.; Sahouane, N.; Mostefaoui, M.; Bouraiou, A.; Khelifi, S.; Rouabhia, A.; Blal, M. Tree-based ensemble methods for predicting the module temperature of a grid-tied photovoltaic system in the desert. Int. J. Green Energy 2021, 18, 1430–1440. [Google Scholar] [CrossRef]
  35. Park, S.; Kim, C. Comparison of tree-based ensemble models for regression. Commun. Stat. Appl. Methods 2022, 29, 561–589. [Google Scholar] [CrossRef]
  36. Montanari, A.; Young, G.; Savenije, H.H.G.; Hughes, D.; Wagener, T.; Ren, L.L.; Koutsoyiannis, D.; Cudennec, C.; Toth, E.; Grimaldi, S.; et al. “Panta Rhei—Everything Flows”: Change in hydrology and society—The IAHS Scientific Decade 2013–2022. Hydrol. Sci. J. 2013, 58, 1256–1275. [Google Scholar] [CrossRef]
  37. Blöschl, G.; Bierkens, M.F.P.; Chambel, A.; Cudennec, C.; Destouni, G.; Fiori, A.; Kirchner, J.W.; McDonnell, J.J.; Savenije, H.H.G.; Sivapalan, M.; et al. Twenty-three unsolved problems in hydrology (UPH)–A community perspective. Hydrol. Sci. J. 2019, 64, 1141–1158. [Google Scholar] [CrossRef]
  38. He, X.; Chaney, N.W.; Schleiss, M.; Sheffield, J. Spatial downscaling of precipitation using adaptable random forests. Water Resour. Res. 2016, 52, 8217–8237. [Google Scholar] [CrossRef]
  39. Baez-Villanueva, O.M.; Zambrano-Bigiarini, M.; Beck, H.E.; McNamara, I.; Ribbe, L.; Nauditt, A.; Birkel, C.; Verbist, K.; Giraldo-Osorio, J.D.; Xuan Thinh, N. RF-MEP: A novel random forest method for merging gridded precipitation products and ground-based measurements. Remote Sens. Environ. 2020, 239, 111606. [Google Scholar] [CrossRef]
  40. Chen, C.; Hu, B.; Li, Y. Easy-to-use spatial random-forest-based downscaling-calibration method for producing precipitation data with high resolution and high accuracy. Hydrol. Earth Syst. Sci. 2021, 25, 5667–5682. [Google Scholar] [CrossRef]
  41. Zhang, L.; Li, X.; Zheng, D.; Zhang, K.; Ma, Q.; Zhao, Y.; Ge, Y. Merging multiple satellite-based precipitation products and gauge observations using a novel double machine learning approach. J. Hydrol. 2021, 594, 125969. [Google Scholar] [CrossRef]
  42. Fernandez-Palomino, C.A.; Hattermann, F.F.; Krysanova, V.; Lobanova, A.; Vega-Jácome, F.; Lavado, W.; Santini, W.; Aybar, C.; Bronstert, A. A novel high-resolution gridded precipitation dataset for Peruvian and Ecuadorian watersheds: Development and hydrological evaluation. J. Hydrometeorol. 2022, 23, 309–336. [Google Scholar] [CrossRef]
  43. Lei, H.; Zhao, H.; Ao, T. A two-step merging strategy for incorporating multi-source precipitation products and gauge observations using machine learning classification and regression over China. Hydrol. Earth Syst. Sci. 2022, 26, 2969–2995. [Google Scholar] [CrossRef]
  44. Militino, A.F.; Ugarte, M.D.; Pérez-Goya, U. Machine learning procedures for daily interpolation of rainfall in Navarre (Spain). Stud. Syst. Decis. Control 2023, 445, 399–413. [Google Scholar] [CrossRef]
  45. Hu, Q.; Li, Z.; Wang, L.; Huang, Y.; Wang, Y.; Li, L. Rainfall spatial estimations: A review from spatial interpolation to multi-source data merging. Water 2019, 11, 579. [Google Scholar] [CrossRef]
  46. Abdollahipour, A.; Ahmadi, H.; Aminnejad, B. A review of downscaling methods of satellite-based precipitation estimates. Earth Sci. Inform. 2022, 15, 1–20. [Google Scholar] [CrossRef]
  47. Hengl, T.; Nussbaum, M.; Wright, M.N.; Heuvelink, G.B.M.; Gräler, B. Random forest as a generic framework for predictive modeling of spatial and spatio-temporal variables. PeerJ 2018, 6, e5518. [Google Scholar] [CrossRef]
  48. Mayr, A.; Binder, H.; Gefeller, O.; Schmid, M. The evolution of boosting algorithms: From machine learning to statistical modelling. Methods Inf. Med. 2014, 53, 419–427. [Google Scholar] [CrossRef]
  49. Natekin, A.; Knoll, A. Gradient boosting machines, a tutorial. Front. Neurorobotics 2013, 7, 21. [Google Scholar] [CrossRef] [PubMed]
  50. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A review of machine learning interpretability methods. Entropy 2020, 23, 18. [Google Scholar] [CrossRef]
  51. Roscher, R.; Bohn, B.; Duarte, M.F.; Garcke, J. Explainable machine learning for scientific insights and discoveries. IEEE Access 2020, 8, 42200–42216. [Google Scholar] [CrossRef]
  52. Belle, V.; Papantonis, I. Principles and practice of explainable machine learning. Front. Big Data 2021, 4, 688969. [Google Scholar] [CrossRef]
  53. Chen, T.; He, T.; Benesty, M.; Khotilovich, V.; Tang, Y.; Cho, H.; Chen, K.; Mitchell, R.; Cano, I.; Zhou, T.; et al. xgboost: Extreme Gradient Boosting. R package version 1.6.0.1. 2022. Available online: https://CRAN.R-project.org/package=xgboost (accessed on 31 December 2022).
  54. Durre, I.; Menne, M.J.; Vose, R.S. Strategies for evaluating quality assurance procedures. J. Appl. Meteorol. Climatol. 2008, 47, 1785–1791. [Google Scholar] [CrossRef]
  55. Durre, I.; Menne, M.J.; Gleason, B.E.; Houston, T.G.; Vose, R.S. Comprehensive automated quality assurance of daily surface observations. J. Appl. Meteorol. Climatol. 2010, 49, 1615–1633. [Google Scholar] [CrossRef]
  56. Menne, M.J.; Durre, I.; Vose, R.S.; Gleason, B.E.; Houston, T.G. An overview of the Global Historical Climatology Network-Daily database. J. Atmos. Ocean. Technol. 2012, 29, 897–910. [Google Scholar] [CrossRef]
  57. Xiong, L.; Li, S.; Tang, G.; Strobl, J. Geomorphometry and terrain analysis: Data, methods, platforms and applications. Earth-Sci. Rev. 2022, 233, 104191. [Google Scholar] [CrossRef]
  58. Papacharalampous, G.; Tyralis, H.; Doulamis, A.; Doulamis, N. Comparison of machine learning algorithms for merging gridded satellite and earth-observed precipitation data. Water 2023, 15, 634. [Google Scholar] [CrossRef]
  59. Spearman, C. The proof and measurement of association between two things. Am. J. Psychol. 1904, 15, 72–101. [Google Scholar] [CrossRef]
  60. Gneiting, T. Making and evaluating point forecasts. J. Am. Stat. Assoc. 2011, 106, 746–762. [Google Scholar] [CrossRef]
  61. Bogner, K.; Liechti, K.; Zappa, M. Technical note: Combining quantile forecasts and predictive distributions of streamflows. Hydrol. Earth Syst. Sci. 2017, 21, 5493–5502. [Google Scholar] [CrossRef] [Green Version]
  62. Papacharalampous, G.; Tyralis, H.; Langousis, A.; Jayawardena, A.W.; Sivakumar, B.; Mamassis, N.; Montanari, A.; Koutsoyiannis, D. Probabilistic hydrological post-processing at scale: Why and how to apply machine-learning quantile regression algorithms. Water 2019, 11, 2126. [Google Scholar] [CrossRef]
  63. Tyralis, H.; Papacharalampous, G.; Burnetas, A.; Langousis, A. Hydrological post-processing using stacked generalization of quantile regression algorithms: Large-scale application over CONUS. J. Hydrol. 2019, 577, 123957. [Google Scholar] [CrossRef]
  64. Kim, D.; Lee, H.; Beighley, E.; Tshimanga, R.M. Estimating discharges for poorly gauged river basin using ensemble learning regression with satellite altimetry data and a hydrologic model. Adv. Space Res. 2021, 68, 607–618. [Google Scholar] [CrossRef]
  65. Lee, D.G.; Ahn, K.H. A stacking ensemble model for hydrological post-processing to improve streamflow forecasts at medium-range timescales over South Korea. J. Hydrol. 2021, 600, 126681. [Google Scholar] [CrossRef]
  66. Tyralis, H.; Papacharalampous, G.; Langousis, A. Super ensemble learning for daily streamflow forecasting: Large-scale demonstration and comparison with multiple machine learning algorithms. Neural Comput. Appl. 2021, 33, 3053–3068. [Google Scholar] [CrossRef]
  67. Granata, F.; Di Nunno, F.; de Marinis, G. Stacked machine learning algorithms and bidirectional long short-term memory networks for multi-step ahead streamflow forecasting: A comparative study. J. Hydrol. 2022, 613, 128431. [Google Scholar] [CrossRef]
  68. Li, S.; Yang, J. Improved river water-stage forecasts by ensemble learning. Eng. Comput. 2022. [Google Scholar] [CrossRef]
  69. Papacharalampous, G.; Tyralis, H. Hydrological time series forecasting using simple combinations: Big data testing and investigations on one-year ahead river flow predictability. J. Hydrol. 2020, 590, 125205. [Google Scholar] [CrossRef]
  70. Cheng, B.; Titterington, D.M. Neural networks: A review from a statistical perspective. Stat. Sci. 1994, 9, 2–30. [Google Scholar]
  71. Jain, A.K.; Mao, J.; Mohiuddin, K.M. Artificial neural networks: A tutorial. Computer 1996, 29, 31–44. [Google Scholar] [CrossRef]
  72. Paliwal, M.; Kumar, U.A. Neural networks and statistical techniques: A review of applications. Expert Syst. Appl. 2009, 36, 2–17. [Google Scholar] [CrossRef]
  73. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  74. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef]
  75. Papacharalampous, G.; Tyralis, H. A review of machine learning concepts and methods for addressing challenges in probabilistic hydrological post-processing and forecasting. Front. Water 2022, 4, 961954. [Google Scholar] [CrossRef]
  76. Tyralis, H.; Papacharalampous, G. A review of probabilistic forecasting and prediction with machine learning. ArXiv 2022, arXiv:2209.08307. [Google Scholar]
  77. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. 2022. Available online: https://www.R-project.org (accessed on 31 December 2022).
  78. Kuhn, M. caret: Classification and Regression Training. R package version 6.0-93. 2022. Available online: https://CRAN.R-project.org/package=caret (accessed on 31 December 2022).
  79. Dowle, M.; Srinivasan, A. data.table: Extension of ‘data.frame’. R package version 1.14.4. 2022. Available online: https://CRAN.R-project.org/package=data.table (accessed on 31 December 2022).
  80. Hollister, J.W. elevatr: Access Elevation Data from Various APIs. R package version 0.4.2. 2022. Available online: https://CRAN.R-project.org/package=elevatr (accessed on 31 December 2022).
  81. Pierce, D. ncdf4: Interface to Unidata netCDF (Version 4 or Earlier) Format Data Files. R package version 1.19. 2021. Available online: https://CRAN.R-project.org/package=ncdf4 (accessed on 31 December 2022).
  82. Bivand, R.S.; Keitt, T.; Rowlingson, B. rgdal: Bindings for the ‘Geospatial’ Data Abstraction Library. R package version 1.5-32. 2022. Available online: https://CRAN.R-project.org/package=rgdal (accessed on 31 December 2022).
  83. Pebesma, E. Simple features for R: Standardized support for spatial vector data. R J. 2018, 10, 439–446. [Google Scholar] [CrossRef]
  84. Pebesma, E. sf: Simple Features for R. R package version 1.0-8. 2022. Available online: https://CRAN.R-project.org/package=sf (accessed on 31 December 2022).
  85. Bivand, R.S. spdep: Spatial Dependence: Weighting Schemes, Statistics. R package version 1.2-7. 2022. Available online: https://CRAN.R-project.org/package=spdep (accessed on 31 December 2022).
  86. Bivand, R.S.; Wong, D.W.S. Comparing implementations of global and local indicators of spatial association. TEST 2018, 27, 716–748. [Google Scholar] [CrossRef]
  87. Bivand, R.S.; Pebesma, E.; Gómez-Rubio, V. Applied Spatial Data Analysis with R, 2nd ed.; Springer: New York, NY, USA, 2013. [Google Scholar] [CrossRef]
  88. Wickham, H.; Averick, M.; Bryan, J.; Chang, W.; McGowan, L.D.; François, R.; Grolemund, G.; Hayes, A.; Henry, L.; Hester, J.; et al. Welcome to the tidyverse. J. Open Source Softw. 2019, 4, 1686. [Google Scholar] [CrossRef]
  89. Wickham, H. tidyverse: Easily Install and Load the ‘Tidyverse’. R package version 1.3.2. 2022. Available online: https://CRAN.R-project.org/package=tidyverse (accessed on 31 December 2022).
  90. Greenwell, B.; Boehmke, B.; Cunningham, J. gbm: Generalized Boosted Regression Models. R package version 2.1.8.1. 2022. Available online: https://CRAN.R-project.org/package=gbm (accessed on 31 December 2022).
  91. Wright, M.N. ranger: A Fast Implementation of Random Forests. R package version 0.14.1. 2022. Available online: https://CRAN.R-project.org/package=ranger (accessed on 31 December 2022).
  92. Wright, M.N.; Ziegler, A. ranger: A fast implementation of random forests for high dimensional data in C++ and R. J. Stat. Softw. 2017, 77, 1–17. [Google Scholar] [CrossRef] [Green Version]
  93. Tyralis, H.; Papacharalampous, G. scoringfunctions: A Collection of Scoring Functions for Assessing Point Forecasts. R package version 0.0.5. 2022. Available online: https://CRAN.R-project.org/package=scoringfunctions (accessed on 31 December 2022).
  94. Wickham, H.; Hester, J.; Chang, W.; Bryan, J. devtools: Tools to Make Developing R Packages Easier. R package version 2.4.5. 2022. Available online: https://CRAN.R-project.org/package=devtools (accessed on 31 December 2022).
  95. Xie, Y. knitr: A Comprehensive Tool for Reproducible Research in R. In Implementing Reproducible Computational Research; Stodden, V., Leisch, F., Peng, R.D., Eds.; Chapman and Hall/CRC: London, UK, 2014. [Google Scholar]
  96. Xie, Y. Dynamic Documents with R and knitr, 2nd ed.; Chapman and Hall/CRC: London, UK, 2014. [Google Scholar]
  97. Xie, Y. knitr: A General-Purpose Package for Dynamic Report Generation in R. R package version 1.40. 2022. Available online: https://CRAN.R-project.org/package=knitr (accessed on 31 December 2022).
  98. Allaire, J.J.; Xie, Y.; McPherson, J.; Luraschi, J.; Ushey, K.; Atkins, A.; Wickham, H.; Cheng, J.; Chang, W.; Iannone, R. rmarkdown: Dynamic Documents for R. R package version 2.17. 2022. Available online: https://CRAN.R-project.org/package=rmarkdown (accessed on 31 December 2022).
  99. Xie, Y.; Allaire, J.J.; Grolemund, G. R Markdown: The Definitive Guide; Chapman and Hall/CRC: London, UK, 2018; ISBN 9781138359338. Available online: https://bookdown.org/yihui/rmarkdown (accessed on 31 December 2022).
  100. Xie, Y.; Dervieux, C.; Riederer, E. R Markdown Cookbook; Chapman and Hall/CRC: London, UK, 2020; ISBN 9780367563837. Available online: https://bookdown.org/yihui/rmarkdown-cookbook (accessed on 31 December 2022).
Figure 1. Map of the geographical locations of the earth-located stations that offered data for this work.
Figure 2. Maps of the geographical locations of the points composing the (a) PERSIANN and (b) IMERG grids utilized in this work.
Figure 3. Setting of the regression problem. Note that the term “grid point” is used to describe the geographical locations with satellite data, while the term “station” is used to describe the geographical locations with ground-based measurements. Note also that, throughout this work, the distances di, i = 1, 2, 3, 4 are also, respectively, called “PERSIANN distances 1−4” or “IMERG distances 1−4” (depending on whether we refer to the PERSIANN grid or the IMERG grid) and the daily precipitation values at the grid points 1−4 are called “PERSIANN values 1−4” or “IMERG values 1−4” (depending on whether we refer to the PERSIANN grid or the IMERG grid).
Figure 4. Heatmap of the Spearman correlation estimates for the various variable pairs appearing in the regression settings of this work.
Figure 5. Bar plot of the gain scores computed for the predictor variables by utilizing the extreme gradient boosting algorithm. The predictor variables are presented from the most to the least important (from top to bottom) based on the same scores.
Figure 6. Heatmaps of the: (a) relative improvement (%) in terms of the median square error metric, averaged across the two folds, as this improvement was provided by each tree-based ensemble algorithm with respect to the linear regression algorithm; (b) mean ranking of each machine and statistical learning algorithm, averaged across the two folds. The computations were made separately for each predictor set. The more reddish the color, the better the predictions on average.
Figure 7. Heatmaps of the percentages (%) with which the four machine and statistical learning algorithms were ranked from 1 to 4 for the predictor sets (ac) 1−3. The rankings summarized in this figure were computed separately for each pair {case, predictor set}. The darker the color, the higher the percentage.
Figure 8. Heatmaps of the: (a) relative improvement (%) in terms of the median square error metric, averaged across the two folds, as this improvement was provided by each tree-based ensemble algorithm with respect to the linear regression algorithm, with this latter algorithm being run with the predictor set 1; and (b) mean ranking of each machine and statistical learning algorithm, averaged across the two folds. The computations were made collectively for all the predictor sets. The more reddish the color, the better the predictions on average.
Figure 9. Heatmaps of the percentages (%) with which the four machine and statistical learning algorithms were ranked from 1 to 12 for the predictor sets (ac) 1−3. The rankings summarized in this figure were computed separately for each case and collectively for all the predictor sets.
Table 1. Inclusion of predictor variables in the predictor sets examined in this work.
Predictor Variable | Predictor Set 1 | Predictor Set 2 | Predictor Set 3
PERSIANN value 1 | × | | ×
PERSIANN value 2 | × | | ×
PERSIANN value 3 | × | | ×
PERSIANN value 4 | × | | ×
IMERG value 1 | | × | ×
IMERG value 2 | | × | ×
IMERG value 3 | | × | ×
IMERG value 4 | | × | ×
PERSIANN distance 1 | × | | ×
PERSIANN distance 2 | × | | ×
PERSIANN distance 3 | × | | ×
PERSIANN distance 4 | × | | ×
IMERG distance 1 | | × | ×
IMERG distance 2 | | × | ×
IMERG distance 3 | | × | ×
IMERG distance 4 | | × | ×
Station elevation | × | × | ×
