Review

Machine Learning for Reference Crop Evapotranspiration Modeling: A State-of-the-Art Review and Future Directions

1 Center for Agricultural Water Research in China, China Agricultural University, Beijing 100083, China
2 State Key Laboratory of Efficient Utilization of Agricultural Water Resources, China Agricultural University, Beijing 100083, China
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(9), 2038; https://doi.org/10.3390/agronomy15092038
Submission received: 11 June 2025 / Revised: 20 August 2025 / Accepted: 22 August 2025 / Published: 25 August 2025
(This article belongs to the Special Issue Water Saving in Irrigated Agriculture: Series II)

Abstract

Reference crop evapotranspiration (ETo) is a crucial component in calculating crop water requirements, and its accurate prediction is vital for effective agricultural water management and irrigation planning. Generally, the FAO-56 Penman–Monteith equation is recommended as the benchmark method for calculating ETo, but it requires extensive meteorological data, which poses challenges in regions with sparse monitoring infrastructure. This review addresses a critical gap: the lack of systematic comparative analysis of machine learning (ML) methods for ETo estimation under data-limited conditions. We review 325 studies retrieved from Web of Science covering 2001 to 2024, focusing on applications of machine learning models in ETo modeling and prediction. This review then evaluates these models, including artificial neural networks (ANN), support vector machines (SVM), ensemble learning (EL), and deep learning (DL), with respect to their characteristics, accuracy, and applicability. Crucially, EL models demonstrate superior stability and cost-effectiveness, with typical performance metrics of R2 > 0.95 and RMSE ranging from 0.1 to 0.6 mm·d−1. Notably, DL methods achieve the highest accuracy under conditions of data scarcity: using only temperature data, they attain competitive performance (R2 = 0.81, RMSE = 0.56 mm·d−1). Additionally, we synthesize optimal input variables, performance metrics, and domain-specific implementation guidelines. In summary, this study provides a comprehensive and up-to-date overview of machine learning methods for ETo modeling, thereby offering valuable insights for researchers in the field of evapotranspiration.

Graphical Abstract

1. Introduction

Reference crop evapotranspiration (ETo) provides a benchmark for calculating actual crop evapotranspiration (ETc), and the accurate estimation of both ETo and ETc is crucial for irrigation planning, water resource management, and ecological protection. The Food and Agriculture Organization of the United Nations (FAO) defines ETo as the evapotranspiration from a green grassland with a crop height of 0.12 m, a leaf resistance of 70 s/m, an albedo of 0.23, an open surface, uniformly high and vigorous crop growth, complete ground cover, and an adequate water supply [1]. More specifically, in water resource management, ETo is useful for calculating the water balance, providing insights into the water cycle and its environmental impact. In the field of ecology, ETo is essential for climate research and modeling, serving as a valuable indicator of a region’s energy balance and water cycle [2]. Technically speaking, the computation of ETo is a nonlinear, dynamic, and theory-based process [3] that involves multiple meteorological factors. ETo can be measured with instruments or estimated with empirical equations and machine learning techniques. However, the high-precision instruments and equipment used to measure evapotranspiration are not only expensive but also difficult to maintain, making them less widely available. The development of empirical equations has enhanced the understanding of the physical processes underlying ETo, leading many scholars to develop models for estimating it, such as the temperature-based Hargreaves and Samani equation [4], the FAO 24 Blaney–Criddle equation [5], and radiation-based models like the Makkink equation [6], the Ritchie equation [7], and the FAO 24 Radiation equation [5]. However, different empirical equations, which must be calibrated for accurate use, vary in their adaptability to specific environments, and their accuracy often needs further improvement.
Among them, the FAO-56 Penman–Monteith (P-M) equation, as the standard method recommended by the FAO for calculating ETo, has been shown to be the most reliable under various climatic conditions [8,9,10].
Although the FAO-56 P-M equation is a standardized method, it requires many meteorological factors as input parameters, such as maximum and minimum temperatures, mean wind speed, mean relative humidity, and solar radiation. In some areas, obtaining such comprehensive meteorological data can be challenging. Therefore, the ability to accurately estimate ETo, even with limited data, is meaningful for ensuring sustainable water use, especially in regions with sparse meteorological monitoring infrastructure. In recent years, machine learning models have made significant progress; these models excel at solving complex nonlinear and multivariate problems and are widely used in hydrological modeling [11,12,13]. Moreover, significant contributions have been made in applying machine learning methods to ETo modeling and prediction. The main methods include support vector machines (SVM), ensemble learning (EL), random forests (RF), extreme gradient boosting (XGBoost), and artificial neural networks (ANN) [14,15,16,17,18,19]. Generally speaking, machine learning methods enhance ETo calculation adaptability and robustness. By improving traditional models with limited data and integrating multiple ML techniques, they achieve higher accuracy even without comprehensive meteorological inputs [20,21,22,23,24,25,26,27].
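To make the data demands of the FAO-56 P-M method concrete, the following is a simplified daily-step sketch of the equation (vapour pressure derived from mean relative humidity, soil heat flux taken as zero); it is an illustration, not a full FAO-56 procedure:

```python
import math

def fao56_penman_monteith(t_mean, rn, g, u2, rh_mean, altitude=0.0):
    """Daily ETo (mm/day) via the FAO-56 Penman-Monteith equation.

    t_mean  : mean air temperature (deg C)
    rn      : net radiation at the crop surface (MJ m-2 day-1)
    g       : soil heat flux density (MJ m-2 day-1), ~0 at daily time steps
    u2      : wind speed at 2 m height (m s-1)
    rh_mean : mean relative humidity (%)
    altitude: station elevation (m), used for the psychrometric constant
    """
    # Saturation vapour pressure (kPa) and its slope (kPa per deg C)
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))
    delta = 4098.0 * es / (t_mean + 237.3) ** 2
    # Actual vapour pressure from mean relative humidity
    ea = es * rh_mean / 100.0
    # Psychrometric constant from atmospheric pressure at this altitude
    p = 101.3 * ((293.0 - 0.0065 * altitude) / 293.0) ** 5.26
    gamma = 0.000665 * p
    num = (0.408 * delta * (rn - g)
           + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea))
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

eto_pm = fao56_penman_monteith(20.0, 13.28, 0.0, 2.078, 63.0)
```

With these typical mid-latitude inputs the sketch returns roughly 4.5 mm/day; note how many distinct meteorological variables the method consumes compared with the reduced-input approaches discussed below.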
Therefore, this paper provides a comprehensive review of 325 studies retrieved from Web of Science covering 2001 to 2024, focusing on the research progress of machine learning models, including ANN, EL, and deep learning (DL), for the estimation of ETo. In addition, this paper discusses the impact of major modeling factors, such as input variables, performance metrics, and study areas, on the results. Comparisons between empirical formulations and machine learning simulations of ETo, as well as the modeling frameworks used in these works, are further discussed. The paper concludes by outlining the challenges and future directions for predicting ETo using machine learning techniques.
Section 2 presents the studies selected for this investigation. Section 3 explains the progress in calculating ETo with traditional empirical equations and outlines their limitations. Section 4 examines five machine learning methods used to predict ETo and provides suggestions for future improvements. Section 5 focuses on data preprocessing and the model evaluation process. Finally, Section 6 discusses the challenges and future directions of applying machine learning to estimate ETo.

2. Research Methodology and Related Literature

To fully capture the latest research developments and advances, this paper searched all relevant literature from 2001 to 2024 using Web of Science. The search terms were Title = (“Predict*” OR “Forecast*” OR “Model” OR “Estimate*”) AND Abstract = (“Reference Crop Evapotranspiration” OR “Reference Evapotranspiration”) AND All Fields = (“Machine Learning” OR “Deep Learning” OR “ANN” OR “SVM” OR “ANFIS” OR “Ensemble Learning”). A total of 406 papers were retrieved. To select papers aligned with the purpose of this study, the screening criteria were as follows: (1) the literature must focus on predicting ETo rather than other types of evapotranspiration, and (2) the literature must use machine learning models rather than statistical or mechanistic models; studies failing to meet either criterion were deemed irrelevant. After manual screening and the elimination of duplicate or irrelevant studies, a total of 325 papers were retained. Given that our research focus is on the application of ML methods for ETo modeling, rather than on the maturity or impact of the findings, all relevant studies meeting the inclusion criteria were considered valuable regardless of publication type and were analyzed uniformly. These 325 papers therefore served as the foundation for the subsequent work in this study.
Furthermore, the number of annual publications is an important indicator of the development timeline and future trends in a research area. From 2001 to 2010, only 32 research papers were published on using machine learning to predict ETo, as shown in Figure 1. Between 2011 and 2019, the number of publications increased to 83. In 2020, the number of publications rose sharply, and growth accelerated thereafter, peaking at 54 papers in 2023.
More specifically, the 325 articles we studied were published in over 50 journals, with 16 being conference papers. Agricultural Water Management and the Journal of Hydrology were the most prominent journals, each featuring 28 relevant articles, followed by Computers and Electronics in Agriculture with 27 articles. Water, Water Resources Management, and the Journal of Irrigation and Drainage Engineering each had more than 10 articles. These outlets align with our research, which lies at the intersection of three disciplines: agriculture, hydrology, and computing, utilizing machine learning to predict ETo.
In terms of study regions, the papers are distributed across 36 countries and regions worldwide. The number of publications from each country is shown in Figure 2. China has the largest number of studies, with a total of 73 publications, followed by Iran with 44 papers; India and the United States are also notable, with more than 30 research papers.

3. Traditional Empirical Equations

Since Dalton proposed his law of evaporation in the early 19th century, empirical models for estimating evapotranspiration from reference crops have evolved significantly. Over time, numerous combined theoretical and empirical equations have been developed, each considering different climatic variables and tailored to specific regions. This progression has greatly enhanced the academic community’s understanding of ETo, leading to more accurate and regionally adapted estimation methods. The FAO-56 P-M model delivers reliable ETo estimates but requires extensive meteorological data inputs. As a result, empirical equations for estimating ETo remain highly valuable where meteorological information is limited, providing practical alternatives for accurate evapotranspiration estimation in such conditions. Current empirical ETo equations, which rely on a small number of meteorological factors, fall mainly into three categories: temperature-based equations, radiation-based equations, and water surface evaporation-based equations. Table 1 summarizes the main empirical formulas for estimating ETo.
Common temperature-based models for estimating ETo include the Hargreaves and Samani (H-S) equation [4] and the FAO 24 Blaney-Criddle (FAO 24 B-C) model. The H-S equation, which originated in the arid northwestern United States, is the most widely used temperature-based ETo model [30] and has shown good agreement with measured values [1]. For example, Almorox et al. [31] studied 11 temperature-based ETo models globally and found that the Hargreaves model had the highest accuracy. Similarly, Zhang et al. [32] compared five temperature-based models in the North China Plain, and the H-S equation outperformed the others. Additionally, the H-S equation has proven to be more applicable to arid and semi-arid zones, with performance metrics such as RMSE = 4.19 mm month−1, MAE = 3.42 mm month−1, and R2 = 0.987 [33,34]. The equation, when calibrated, also shows good accuracy in wet areas [35]. On the other hand, the FAO 24 Blaney–Criddle equation was developed based on data from wetter regions of the United States, making it more suitable for humid regions. However, it tends to be less accurate when applied to arid and semi-arid regions [36,37,38]. Compared to the FAO-56 P-M formulation, temperature-based approaches typically produce higher ETo estimates. However, local calibration of parameters in temperature-based ETo simulation models can significantly improve their accuracy [33,39,40,41]. For example, an improved H-S model reduced the RMSE from 0.92 mm d−1 to 0.84 mm d−1, demonstrating the effectiveness of local calibration in the Haihe River Basin, China [42].
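As a concrete illustration of how little data the temperature-based approach needs, here is a minimal sketch of the H-S equation; note that extraterrestrial radiation Ra must still be supplied (or computed from latitude and day of year):

```python
def hargreaves_samani(t_max, t_min, ra):
    """Hargreaves-Samani ETo (mm/day).

    t_max, t_min : daily maximum / minimum air temperature (deg C)
    ra           : extraterrestrial radiation (MJ m-2 day-1)
    """
    t_mean = (t_max + t_min) / 2.0
    # 0.408 converts Ra from MJ m-2 day-1 to its mm/day evaporation equivalent
    return 0.0023 * 0.408 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

eto_hs = hargreaves_samani(30.0, 18.0, 35.0)
```

Only two temperatures are measured; this simplicity is exactly why local calibration of the coefficients, as discussed above, matters so much for accuracy.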
Common radiation-based models are the Makkink equation [6], Ritchie’s equation [7], the Priestley–Taylor equation [28], and the FAO 24 Radiation equation [5]. Among these, Ritchie’s equation is the most commonly used [30]. However, Berengena et al. [43] found that in southern Spain, which has a Mediterranean climate, both the Ritchie and Priestley–Taylor equations performed poorly, with R2 values of 0.797 and 0.830, respectively, suggesting that these models may not be well suited to all climatic conditions. Xystrakis et al. [44] compared 13 ETo models in Crete, a semi-arid region in southern Greece; among the radiation-based models, the Hansen equation performed best. The Makkink equation, on the other hand, showed variable performance and low stability across studies, indicating that its reliability may fluctuate with conditions and location. Gao et al. [45] compared the simulation results of seven empirical ETo models across three distinct climates: the arid zone of Aksu, China, the semi-arid zone of Tongchuan, and the humid zone of Starkville, U.S.A. The study found that the Priestley–Taylor method performed best in both the arid and semi-arid regions, while the Makkink method was most effective in the arid zone. Additionally, Wu et al. [46] found that the Makkink model provided more accurate ETo simulations in the humid Sichuan Basin. However, studies have also shown that the Makkink method performs poorly in Changins, Switzerland (temperate oceanic climate), and in the arid and semi-arid regions of southern Iran [47]. In contrast, the FAO 24 Radiation equation has been found to be applicable to a wider range of climate types, offering higher accuracy across different regions. Gabriela et al. [48] studied the performance of 13 empirical ETo equations across various climatic conditions.
They found that the FAO 24 Radiation equation performed optimally in Mediterranean, semi-humid, and semi-arid zones. However, it performed poorly in humid and subtropical climates. Wang et al. [49] compared 28 ETo models across different climate zones in China and found that the FAO 24 Radiation equation had the best overall performance. Based on these findings, the FAO 24 Radiation equation is recommended for ETo simulation in China due to its superior accuracy and reliability across various climatic conditions.
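For comparison with the temperature-based models, a minimal sketch of the Priestley–Taylor equation discussed above is given below, assuming sea-level pressure for the psychrometric constant:

```python
import math

def priestley_taylor(t_mean, rn, g=0.0, alpha=1.26):
    """Priestley-Taylor ETo (mm/day) from temperature and net radiation.

    alpha = 1.26 is the standard coefficient for well-watered surfaces;
    the psychrometric constant is taken at sea-level pressure for simplicity.
    """
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))
    delta = 4098.0 * es / (t_mean + 237.3) ** 2
    gamma = 0.0674  # psychrometric constant at ~101.3 kPa
    # 0.408 converts MJ m-2 day-1 into mm/day of evaporated water
    return alpha * delta / (delta + gamma) * (rn - g) * 0.408

eto_pt = priestley_taylor(20.0, 13.28)
```

The equation drops the aerodynamic (wind and humidity) term of Penman–Monteith entirely, which is one reason its performance varies so strongly across the climates surveyed above.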
In addition, ETo can be estimated by multiplying water surface evaporation data by empirical coefficients. Water surface evaporation can either be measured directly using evaporation pans or calculated indirectly with empirical equations. This method is widely used in various regions around the world due to its simplicity and low cost [50]. However, it has limitations: it requires separate measurements, and its simulation accuracy is generally lower than that of other models [51]. Additionally, this method cannot predict future ETo values, further restricting its application.
From the above studies, it is evident that empirical equations for estimating ETo are all semi-empirical and semi-theoretical, resting on certain assumptions. These equations tend to be highly regional, meaning their accuracy and applicability are strongly influenced by the complex geographic, hydrological, and climatic environments of different areas. As a result, no empirical equation generalizes well across all regions, because an equation may not perform as well outside the region where it was originally developed. It is therefore necessary to carefully evaluate and screen empirical equations before applying them to a specific region; ensuring that the chosen equation is well suited to local conditions is crucial for accurate ETo estimates. Moreover, empirical equations treat weather conditions uniformly, effectively homogenizing them across a region, so they are suitable only for large-scale ETo modeling and cannot meet the needs of modern precision agriculture.

4. Machine Learning Models

There are two main categories of machine learning: traditional machine learning (ML) models (e.g., SVM, EL, and ANN), and DL models. DL models have clear advantages over traditional ML models because they automatically learn a hierarchical representation of the original data, reducing the need for feature engineering. The “very deep” layers in DL models allow them to process massive datasets, leading to improved accuracy. Several studies have shown that DL models are more accurate than traditional ML models [52,53].
However, DL models require large labeled datasets and significant computational resources, and their training is costly. They also lack interpretability, which limits their applicability. The following sections review five families of machine learning methods for ETo modeling and prediction: ANN, SVM, adaptive neuro-fuzzy inference systems (ANFIS), EL, and DL. The classification of these machine learning model types is depicted in Figure 3, while the annual publication count for different machine learning methods is presented in Figure 4.

4.1. Artificial Neural Network

The artificial neural network (ANN) is a computational model that mimics a biological neural network. It typically consists of input, hidden, and output layers, with the complexity of the ANN increasing as more hidden layers are added. In an ANN architecture, each neuron performs weighting and bias calculations on the information it receives, then passes it to the next layer by adjusting the neural weights. This process allows the network to capture complex nonlinear relationships between variables.
The most basic ANN architecture is the Feedforward Neural Network (FFNN), where information flows in a single direction: from the input layer, through the hidden layers, and finally to the output layer. The Extreme Learning Machine (ELM) is an efficient type of FFNN with a single hidden layer that does not require iterative parameter tuning, making it fast and accurate [54]. The Backpropagation Neural Network (BPNN) updates and learns network parameters using the backpropagation algorithm. The Multilayer Perceptron (MLP) also uses backpropagation, along with nonlinear activation functions, to handle complex nonlinear relationships.
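The layer-wise weighting, activation, and backpropagation described above can be made concrete with a minimal single-hidden-layer BPNN in NumPy. The data here are synthetic (a Hargreaves-like temperature-to-ETo mapping), so this is a sketch of the training mechanics, not a validated ETo model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "meteorology -> ETo" data: columns mimic (Tmax, Tmin, Ra);
# the target follows a Hargreaves-like nonlinear relationship
X = rng.uniform([15.0, 5.0, 20.0], [40.0, 25.0, 40.0], size=(500, 3))
y = (0.0023 * 0.408 * X[:, 2] * ((X[:, 0] + X[:, 1]) / 2 + 17.8)
     * np.sqrt(np.clip(X[:, 0] - X[:, 1], 0.1, None))).reshape(-1, 1)

# Standardize inputs and target; keeps gradient descent well conditioned
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

# One hidden layer with tanh activation: a minimal BPNN
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(Xin):
    h = np.tanh(Xin @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(Xs)
rmse_initial = float(np.sqrt(np.mean((pred0 - ys) ** 2)))

for _ in range(5000):
    h, pred = forward(Xs)
    err = pred - ys                      # d(MSE)/d(pred), up to a factor
    # Backpropagate the error through the output and hidden layers
    gW2 = h.T @ err / len(ys); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh' = 1 - tanh^2
    gW1 = Xs.T @ dh / len(ys); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(Xs)
rmse_final = float(np.sqrt(np.mean((pred - ys) ** 2)))
```

The loop is exactly the iterative parameter tuning that ELM, discussed next, avoids.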
ANNs are widely used for ETo simulation and generally perform better than empirical models, including when only limited meteorological data (e.g., temperature, radiation) are available [55,56,57,58,59,60,61,62,63]. For example, Achite et al. [64] compared FFNN, Radial Basis Function Neural Network (RBFNN), and Gene Expression Programming (GEP) methods for estimating daily ETo and found the FFNN model to be more accurate. ELM has also outperformed feedforward backpropagation (FFBP) networks in ETo modeling: Abdullah et al. [65] compared the two and found the ELM model highly accurate (RMSE = 0.091 mm d−1, MAE = 0.069 mm d−1, R2 = 0.982) and computationally efficient (FFBP took 22.18 s, while ELM took 5.85 s). The ELM model also performed well with only maximum and minimum temperatures available (R2 = 0.905). Similarly, Hameed et al. [66] compared ELM with multiple linear regression (MLR) and random forest (RF), again obtaining optimal results with ELM. Studies by Dimitriadou et al. [67], Pino-Vargas et al. [68], and Mandal et al. [69] also demonstrated the high accuracy of the MLP model in simulating ETo (MAE = 0.033 mm d−1, RMSE = 0.043 mm d−1).
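The speed advantage reported for ELM comes from its closed-form training, which a short sketch on synthetic data makes clear (illustrative only; the target function is invented):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(0.0, 1.0, (300, 4))
y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + np.sin(3.0 * X[:, 0])

# ELM: input weights are random and FIXED; only the output weights are
# learned, in closed form by least squares -- no iterative training at all
W = rng.normal(0.0, 1.0, (4, 50))
b = rng.normal(0.0, 1.0, 50)
H = np.tanh(X @ W + b)                      # random hidden-layer features
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

rmse = float(np.sqrt(np.mean((H @ beta - y) ** 2)))
```

Because the only "training" is one least-squares solve, ELM is orders of magnitude faster than backpropagation, at the cost of relying on random, untuned hidden features.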
In practice, the performance of ANNs depends heavily on hyperparameter tuning, making the integration of optimization algorithms with ANN models a popular research topic. The Levenberg–Marquardt (LM) optimization algorithm, which combines the simplicity of gradient descent with the fast convergence of Newton’s method, has been widely used to train ANNs [70,71,72]. Besides, Zheng et al. [73] applied a Genetic Algorithm (GA)-improved BP neural network combined with the particle swarm optimization (PSO) algorithm for daily ETo prediction in the Shihezi area. The model achieved better results than basic ANN algorithms (RMSE = 0.163 mm d−1, MAE = 0.145 mm d−1, R2 = 0.952). Similarly, Zhao et al. [74] combined three meta-heuristic algorithms with BPNN for ETo prediction, significantly improving both computation time and accuracy.
ANNs can handle complex nonlinear relationships and can be combined with optimization algorithms to find optimal parameters and reduce convergence time, which makes them well suited to simulating ETo [74]. Moreover, ELM offers fast training and MLP provides high simulation accuracy compared to regression methods and empirical equations, highlighting the advantages of ANNs [65]. However, ANN performance remains architecture-dependent: suboptimal layer/neuron design risks overfitting or underfitting [75,76], and the current reliance on manual trial and error for architecture selection is inefficient and may yield suboptimal results. Finding the optimal architecture often involves computationally expensive trial and error [77].

4.2. Support Vector Machine

SVM is a supervised learning method based on kernel functions, known for its good generalization ability and prediction accuracy. It is widely used for regression and prediction in fields like agriculture, hydrology, meteorology, and environmental research [78,79,80]. SVM has shown high accuracy and utility in ETo estimation. Studies indicate that SVM results for ETo estimation closely match those of the P-M formula, outperforming ANN, regression, and empirical models (RMSE = 0.262 mm d−1, MAE = 0.207 mm d−1, R2 = 0.950) [3,15,81,82].
However, this high accuracy comes at the cost of substantial computational resources. Fan et al. [83] showed that while SVM has stability and accuracy comparable to EL algorithms in ETo modeling, its computational cost is about 2.0 times that of the M5Tree model and 2.4 times that of the XGBoost model. Mao et al. [84] demonstrated that SVM is more accurate and stable than GBDT in ETo simulation in Xinjiang but has a higher computational cost. This is mainly because the SVM algorithm must solve a convex quadratic programming problem, making it computationally complex and preventing parallel computation. Ikram et al. [85] coupled the SVM model with a meta-heuristic algorithm for month-by-month ETo prediction in northwest Bangladesh, improving the accuracy of the coupled SVM algorithm by about 25%. For future work, further coupling of optimization algorithms with SVM could improve parameter optimization efficiency during training, thereby reducing computational costs [85], though this adds another layer of complexity.
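A typical SVM-based ETo regression pipeline can be sketched with scikit-learn's RBF-kernel SVR; the feature ranges (standing in for Tmax, Tmin, RH, u2) and the target relationship below are invented for illustration, not drawn from the cited studies:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
# Synthetic features standing in for (Tmax, Tmin, RH, u2); the target is
# an invented ETo-like linear combination, purely for demonstration
X = rng.uniform([15.0, 5.0, 30.0, 0.5], [40.0, 25.0, 90.0, 5.0], size=(400, 4))
y = 0.1 * X[:, 0] - 0.05 * X[:, 1] - 0.02 * X[:, 2] + 0.4 * X[:, 3] + 3.0

# Standardizing inputs first matters for kernel methods such as SVR
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, y)
r2 = model.score(X, y)
```

The kernel evaluation over all support vectors is where the quadratic-programming cost noted above comes from; scaling and the C/epsilon hyperparameters are the usual tuning targets.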

4.3. Adaptive Neuro-Fuzzy Inference System

The Adaptive Neuro-Fuzzy Inference System (ANFIS) combines fuzzy logic with neural networks to connect and weight processing components. This blend of neural networks and fuzzy logic provides strong approximation capabilities. Studies have shown that ANFIS models generally have higher accuracy than ANN models [86,87], but often at a higher computational cost than basic ANNs, and lower accuracy compared to SVM [88,89]. However, there are exceptions. For example, a study by Tabari et al. [15] in Iran’s semi-arid zone found that while ANFIS sometimes outperformed SVM with different input variables, SVM consistently achieved the highest accuracy when using optimal inputs. Additionally, coupling ANFIS with optimization algorithms like the firefly algorithm, differential evolutionary algorithm, or particle swarm optimization can significantly enhance prediction accuracy [89,90,91], but inevitably increases the computational burden.
Tabari et al. [15] found that Gaussian and trapezoidal functions are the best membership functions (MFs) for ANFIS models. However, there are currently no studies on integrating two or more types of MFs within a single ANFIS model for ETo modeling, an approach that would allow different MF types to represent different input variables and could potentially improve modeling accuracy.

4.4. Ensemble Learning

The EL model involves combining multiple learners to enhance the overall performance of a model, offering both speed and stability. EL improves overall prediction accuracy by combining multiple similar or slightly different models, such as ANN and decision trees. In contrast, hybrid machine learning models focus on model diversity, where each model serves a different function in the data processing process. For example, El-Shafie et al. [92] and El-Kenawy et al. [93] used an ensemble ANN model to simulate ETo in Malaysia and Iran. This ensemble ANN model addresses the overfitting issues found in traditional ANN models, resulting in better accuracy. Roy et al. [94] used a weighted combination of multiple meta-heuristic algorithms with a coupled ANFIS model to simulate ETo in three regions of Bangladesh and the U.S.A. The results showed that the ensemble model provided better accuracy than any individual prediction model. Currently, there are two main categories of EL methods. The first is parallelization methods, where individual learners are independent and can be generated simultaneously, such as in Bagging and random forest (RF). The second category is serialization methods, where there are strong dependencies between individual learners, requiring them to be generated sequentially, as seen in Boosting [95]. Manikumari et al. [96] found that Bagged-NN and Boosted-NN EL models outperform individual neural network models in terms of accuracy. Moreover, RF, which is based on Bagging, introduces random attribute selection during the training process of decision trees [97,98]. Common models using the Boosting algorithm include Boosting Trees (BT), where the predictions of each decision tree are weighted and averaged to obtain the final prediction. Another is Gradient Boosting Decision Trees (GBDT), which improves accuracy by applying gradient descent to the fitted residuals of the decision trees [99]. 
Extreme Gradient Boosting (XGBoost) builds on GBDT by introducing parallel computing during the model prediction phase [100]. Category Gradient Boosting (CatBoost) automatically handles categorical features and introduces Ordered Boosting, a new gradient computation method that reduces overfitting [101]. Lightweight Gradient Boosting (LightGBM) uses histogram-based algorithms, feature sampling, and other techniques to create a more “lightweight” model [102].
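The Boosting idea common to GBDT, XGBoost, CatBoost, and LightGBM (sequentially fitting each new learner to the current residuals) can be illustrated with a from-scratch gradient-boosting loop over regression stumps. This is a didactic sketch on synthetic data; real implementations add regularization, second-order gradients, and far faster split search:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(400, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def fit_stump(X, r):
    """Best single-split regression stump fitted to residuals r."""
    best_sse, best = np.inf, None
    for j in range(X.shape[1]):
        for s in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= s
            if left.all() or not left.any():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            sse = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (j, s, lv, rv)
    return best

def predict_stump(stump, X):
    j, s, lv, rv = stump
    return np.where(X[:, j] <= s, lv, rv)

# Boosting: each stump fits the current residual (the negative MSE
# gradient), and predictions accumulate with a shrinkage factor
lr, pred, stumps = 0.3, np.zeros_like(y), []
rmse0 = float(np.sqrt(np.mean(y ** 2)))  # error of the all-zero start
for _ in range(200):
    stump = fit_stump(X, y - pred)
    stumps.append(stump)
    pred += lr * predict_stump(stump, X)
rmse_boosted = float(np.sqrt(np.mean((pred - y) ** 2)))
```

The sequential dependence between learners is visible in the loop: each stump can only be fitted after the previous predictions are known, which is why Boosting, unlike Bagging, cannot generate its learners in parallel.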
The RF model has implemented various strategies to enhance its efficiency and effectiveness when handling high-dimensional data. These strategies include feature selection, hyper-parameter tuning, parallel computing, and cloud-based implementations. While RF models are highly accurate, their generalization ability is not very strong. Feng et al. [103] modeled ETo using RF and generalized regression neural networks in the Sichuan Basin, China, and found that RF had slightly higher accuracy. Similarly, Wu et al. [104] demonstrated that RF could be effectively applied to ETo simulation in arid regions with limited meteorological data (RMSE = 0.352 mm d−1, MAE = 0.267 mm d−1, R2 = 0.987). However, Salam et al. [105] found that although the RF model provided higher accuracy and lower computational cost compared to the Bagging method, it exhibited lower stability. In addition, Hameed et al. [66] applied RF and ELM models to predict ETo in southern Turkey. They found that RF had higher accuracy during the training phase, but its accuracy was lower than ELM during the testing phase. This finding aligns with the results of Fan et al. [83], suggesting that the generalization ability of the RF model still needs improvement.
By contrast, XGBoost performs well in simulating ETo and is often computationally efficient. Fan et al. [83] compared the SVM, ELM, RF, M5Tree, GBDT, and XGBoost models to simulate ETo across five different climatic zones in China. The study found that GBDT had the best stability, and XGBoost had the lowest computational cost. However, the EL models, including XGBoost, were less accurate than SVM and ELM. For different EL methods, Ge et al. [18], Akar et al. [106], and Zhao et al. [107] compared XGBoost with GBDT, RF, and BT for predicting ETo. The results showed that XGBoost had the highest accuracy (MAE = 0.155 ± 0.029 mm month−1, R2 = 0.981 ± 0.007). However, CatBoost was found to be more stable than XGBoost [108]. Additionally, CatBoost and LightGBM demonstrated higher accuracy than XGBoost during the model testing stage [109]. However, Huang et al. [110] compared the SVM, RF, and CatBoost methods to simulate daily ETo. The results showed that while the CatBoost model had higher accuracy during the training phase, its accuracy was lower than SVM for six out of eight combinations of meteorological inputs during the testing phase. CatBoost exhibited increased sensitivity to missing values in meteorological data, resulting in degraded performance. This heightened sensitivity is likely attributed to its reliance on decision trees, which require complete input features to accurately map the relationship to the output. When encountering incomplete data, the model struggles to effectively capture the underlying patterns, potentially leading to underfitting.
In response to the low accuracy of the EL model, there are several strategies to address such a challenging issue. One approach is to use feature engineering methods, such as combining the EL model with principal component analysis (PCA), wavelet decomposition, and other variable analysis techniques. These methods can help improve the model’s ability to capture important patterns and enhance overall accuracy. For example, Shiri [111] improved RF accuracy by decomposing raw meteorological data using a wavelet algorithm and using the decomposed data as input to simulate ETo. Another approach is to combine the EL model with optimization algorithms for hyperparameter tuning. Existing research shows that combining EL models with the Gray Wolf Optimization algorithm [112,113], the Improved Gray Wolf Optimization [93], and Bayesian optimization [107] lead to higher accuracy compared to using a single EL model. Thirdly, the Stacking method can be used, where sub-learners are first generated, and then a secondary learner is trained to combine the outputs of the sub-learners. For example, Liu et al. [114] used the Stacking strategy to improve the accuracy of a single EL model, but this method is still relatively under-researched. Fourthly, exploring other sub-learners besides decision trees could also be a viable approach to enhance model performance.
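The Stacking strategy mentioned above, in which a secondary learner combines the outputs of sub-learners, can be sketched with scikit-learn's StackingRegressor on synthetic data; the choice of base learners and meta-learner here is illustrative, not drawn from the cited studies:

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
X = rng.uniform(-2.0, 2.0, size=(500, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2]

# Sub-learners are fitted first; a Ridge meta-learner is then trained on
# their (cross-validated) predictions to combine them
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge(),
)
stack.fit(X, y)
r2 = stack.score(X, y)
```

Because the meta-learner sees out-of-fold predictions from the sub-learners, stacking can down-weight a base model that overfits, which is the property the studies above exploit.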

4.5. Deep Learning

The DL model is a machine learning technique based on artificial neural networks, distinguished by its use of multiple hidden layers. DL models mainly include Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and Gated Recurrent Units (GRU). In ETo forecasting, LSTM is the most commonly used model; since its refinement in the early 2000s, LSTM has become the most widely used type of artificial neural network for time series forecasting problems. Bidirectional Long Short-Term Memory networks (Bi-LSTM) were developed to better capture contextual dependencies by using forward and reverse LSTM networks to process information in both directions simultaneously [115,116].
Currently, scholars predominantly use DL models for short-term (weekly) ETo forecasting. Several studies have shown that LSTM models forecast ETo more accurately than traditional neural networks, XGBoost models, and ANFIS models [117,118,119]. More specifically, Yin et al. [120] used Bi-LSTM for 7-day ETo prediction and achieved better accuracy than traditional LSTM models. Similarly, Zhang et al. [121] trained an LSTM model on data from 65 stations across four climate zones in China, providing accurate short-term daily ETo forecasts tailored to each zone. Meanwhile, some scholars have explored DL models for medium-term (monthly) and long-term (multi-month) ETo forecasts. These approaches often rely on indirect prediction methods that depend on future meteorological data; however, the accuracy of medium- and long-term climate forecasts is still limited, making such forecasts less reliable. Direct prediction methods, which iterate over historical ETo data, can also lose accuracy because errors accumulate over time. For example, Chia et al. [17] directly predicted the monthly average ETo for the next six months using CNN, GRU, and LSTM models, but performance varied significantly across sites, with only one site achieving acceptable accuracy (KGE = 0.638). Mandal et al. [69] developed CNN and LSTM models for 1–28 day gridded multi-step ETo prediction over India using the fifth-generation ECMWF atmospheric reanalysis (ERA5) dataset; however, since ERA5 is based on historical climatic data, this does not strictly represent a prediction of future ETo. Separately, Dang et al. [122] produced day-by-day ETo projections up to 2050 using an LSTM model.
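The cumulative-error problem of direct (recursive) multi-step prediction can be illustrated with a toy example; the series, the one-step linear predictor, and the horizon below are all invented for demonstration and stand in for whatever one-step model (LSTM or otherwise) is iterated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy daily ETo-like series: seasonal cycle plus noise (illustrative only)
t = np.arange(400)
series = 3.5 + 2.0 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.3, t.size)

# Fit a simple one-step-ahead linear predictor: y[t+1] ~ a*y[t] + b
a, b = np.polyfit(series[:-1], series[1:], 1)

def recursive_forecast(last_value, horizon):
    """Direct (recursive) multi-step prediction: each step feeds on the
    previous prediction, so one-step errors accumulate over the horizon."""
    preds = []
    y = last_value
    for _ in range(horizon):
        y = a * y + b
        preds.append(y)
    return np.array(preds)

horizon = 30
preds = recursive_forecast(series[299], horizon)
truth = series[300:300 + horizon]
errors = np.abs(preds - truth)
# Errors typically grow with lead time, degrading long-horizon accuracy
print(f"mean |error| days 1-5:   {errors[:5].mean():.2f}")
print(f"mean |error| days 26-30: {errors[-5:].mean():.2f}")
```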
To enhance model accuracy, optimization algorithms such as particle swarm optimization (PSO) and the COVID-19 optimization algorithm have been used for LSTM hyperparameter tuning, also reducing computational time [123,124]. Additionally, coupling DL models with one another can further improve performance. For example, Yin et al. [120] used an ANN to learn the relationship between LSTM output and actual ETo and adjusted the prediction results accordingly; this approach significantly improved accuracy and reduced computational cost (RMSE = 0.499 mm d−1, R2 = 0.899; hybrid Bi-LSTM: RMSE = 0.241 mm d−1, R2 = 0.970). Moreover, Ahmed et al. [125] used a CNN for feature extraction and fed the extracted features into a GRU model, improving the accuracy of ETo predictions. Similarly, Sharma et al. [126] developed a CNN–LSTM model that also effectively enhanced model accuracy. Regarding interpretability, Troncoso-García et al. [124] analyzed LSTM predictions using SHAP (SHapley Additive exPlanations), a technique that quantifies each feature's contribution to model predictions, to identify the meteorological factors with the greatest impact on ETo in the region.
However, DL models still face limitations, especially for complex problems [127]. For example, the accuracy of LSTM estimates tends to decrease as the prediction horizon lengthens. Adopting the Transformer model [128], which is based on a self-attention mechanism, could potentially address this issue, but studies on this topic remain scarce. In addition, DL algorithms often lack the constraints of physical mechanisms, which can lead to error accumulation over time. Mitigation strategies include reducing the number of iterations or integrating physical mechanisms into DL models, although the latter remains underexplored. Looking ahead, new algorithms such as Kolmogorov-Arnold Networks (KAN) [129], which offer higher interpretability, are beginning to be applied to ETo simulation and may achieve better results.
Finally, Table 2 summarizes the advantages, disadvantages, and future research suggestions for the different models.

5. Data Preprocessing and Model Post-Evaluation

The complete machine learning modeling process includes data collection, data preprocessing, model selection, and performance evaluation (Figure 5). Data preprocessing and selecting the right performance indicators are crucial for improving model performance.

5.1. Data Preprocessing

Constructing ETo models for data-deficient regions using ML methods is crucial. The performance of an ML model depends significantly on its parameter settings, and selecting the right input variables is a complex yet vital step that directly impacts the model’s performance and prediction accuracy.
Among these meteorological factors, temperature variables (Tmin, Tmax, Tmean) are consistently the most critical inputs for accurate ETo estimation [132,133]. Relative humidity (RH), wind speed (WS), and solar radiation (Ra) or its proxy, sunshine hours (N), typically form the next tier of essential variables, with combinations of these core inputs yielding high accuracy [134]. Though less universally critical, precipitation (P) can enhance performance, particularly in humid regions. However, the relative importance and optimal combination of these variables are not uniform across all environments. ETo sensitivity to meteorological drivers varies significantly by climatic zone, necessitating climate-specific ML input combinations. In arid/semi-arid regions, temperature and relative humidity are critical predictors, often supplemented by wind speed [135,136]. Conversely, humid/semi-humid zones prioritize solar radiation and precipitation due to energy limitations [137].
Beyond climatic considerations, the choice of ML architecture introduces another layer of complexity in input variable sensitivity, as different architectures respond differently to the selection and number of input variables. RF leverages feature importance for selection but suffers performance degradation when datasets contain numerous irrelevant features, which can obscure the signals of key predictors [132]; RF is therefore highly dependent on variable informativeness. Conversely, LSTMs inherently handle sequential dependencies for ETo prediction and can downweight irrelevant inputs [19]. However, the added model complexity from extra variables requires substantially larger training datasets to avoid poor performance.
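A minimal sketch of RF-based input screening using scikit-learn's built-in impurity importances on synthetic data (the variable ranges and target function are hypothetical, chosen so that temperature dominates and one column is pure noise):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 600
# Candidate inputs: Tmean, RH, wind speed, plus a deliberately irrelevant column
names = ["Tmean", "RH", "WS", "noise"]
X = np.column_stack([
    rng.uniform(10, 35, n),   # Tmean (degrees C)
    rng.uniform(30, 90, n),   # RH (%)
    rng.uniform(0.5, 5, n),   # WS (m/s)
    rng.normal(0, 1, n),      # irrelevant feature
])
# Hypothetical target driven mainly by temperature (illustrative only)
y = 0.15 * X[:, 0] - 0.02 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.2, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(names, rf.feature_importances_), key=lambda p: -p[1])
for name, imp in ranking:
    print(f"{name:6s} {imp:.3f}")
```

The importances sum to one, so the irrelevant column's near-zero share makes it an easy candidate for removal before retraining.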
Additionally, before selecting input variables, it is essential to preprocess and normalize the data. Preprocessing involves handling missing values, addressing outliers, and normalizing the data; normalization methods such as Z-score standardization, Min-Max normalization, and decimal scaling bring variables with different scales onto a common scale for accurate comparison and analysis. Accurate ETo estimation with ML models therefore requires careful attention to both variable selection and data preprocessing.
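The three normalization methods mentioned above can be sketched as follows (the temperature values are illustrative):

```python
import numpy as np

def z_score(x):
    """Z-score standardization: zero mean, unit variance."""
    return (x - x.mean()) / x.std()

def min_max(x, lo=0.0, hi=1.0):
    """Min-Max normalization: rescale to the interval [lo, hi]."""
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

def decimal_scaling(x):
    """Decimal scaling: divide by 10**j so that max |value| < 1."""
    j = int(np.ceil(np.log10(np.abs(x).max())))
    return x / (10 ** j)

# Example: daily maximum temperature (degrees C, illustrative values)
tmax = np.array([28.4, 31.2, 25.7, 33.9, 29.5])
print(z_score(tmax))
print(min_max(tmax))
print(decimal_scaling(tmax))
```

Min-Max is common when the downstream model uses bounded activations; Z-score is preferred when outliers would compress the Min-Max range.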
When it comes to factor selection, most scholars rely on empirical rather than theoretical foundations [88,138,139]. Combining statistically based feature selection methods with model-driven approaches makes it possible to screen and identify useful input variables effectively, enhancing the accuracy and reliability of ETo estimation models. In recent years, scholars have applied various feature selection methods to identify key factors, making factor selection more scientifically sound and enabling dimensionality reduction. These methods include factor analysis techniques such as principal component analysis (PCA) and Independent Component Analysis (ICA), as well as Canonical Correlation Analysis (CCA) and the k-Nearest Neighbor algorithm (k-NN); the correlation coefficient method has also been used, further improving the accuracy and effectiveness of factor selection in ETo modeling. Screening key factors in advance gives an in-depth understanding of the relationship between meteorological factors and ETo, reduces the computational burden of testing different variable inputs, and improves model accuracy and efficiency. For example, Zhang et al. [140] used a multiple linear regression model to analyze the effects of climatic factors such as air temperature, solar radiation, relative humidity, vapor pressure, and wind speed on ETo in the Aksu River Basin; wind speed and relative humidity emerged as the main influencing factors, with changes in wind speed accounting for more than 50% of the variation in ETo at some stations. Xing et al. [141] estimated spring ETo in South China using path analysis and identified sunshine duration as the dominant factor influencing ETo, with a correlation coefficient of 0.8357.
This was followed by minimum air temperature, average air temperature, and relative humidity, with the sub-determinants varying by city. Feng et al. [142] found through canonical correlation analysis that maximum temperature and relative humidity were most closely related to ETo in Northwest China, consistent with the findings of Huo et al. [138]. Similarly, Nagappan et al. [143] used the PCA method to determine that maximum temperature, minimum temperature, and wind speed had the greatest influence on ETo in Chennai, India. Zhao et al. [144] analyzed data from 16 stations in China using the k-Nearest Neighbor (k-NN) algorithm and found that surface radiation was the primary factor in predicting ETo at almost all stations, with importance ranging from 0.391 to 0.884; temperature was the second most important factor, with importance ranging from 0.1823 to 0.5122. In another study, Zhao et al. [145] applied the decision tree (CART) method at typical sites in arid and semi-arid zones of China and identified temperature as the main factor affecting ETo. Similarly, Adib et al. [146] found that maximum temperature and wind speed were the two most important factors influencing ETo in southwestern Iran. Raza et al. [147] found that as regions transition from humid to semi-arid to arid zones, the correlation of temperature and wind speed with ETo tends to increase, while the correlation between sunshine duration and ETo tends to decrease. Surface radiation is clearly highly correlated with ETo, but it has been studied less often because the data are relatively difficult to obtain. Among the other factors, temperature and wind speed are most closely related to ETo, followed by relative humidity. However, the reasons why the dominant factors vary across meteorological stations or regions have not been thoroughly investigated.
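A minimal sketch of the correlation coefficient method for factor screening, run on synthetic data (the variable ranges and the ETo-like target below are invented for illustration and are not the FAO-56 formulation):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 365
# Synthetic daily meteorology (illustrative ranges, not measured data)
met = {
    "Tmax": rng.uniform(15, 38, n),
    "Tmin": rng.uniform(2, 22, n),
    "RH":   rng.uniform(25, 95, n),
    "WS":   rng.uniform(0.3, 6, n),
}
# Hypothetical ETo-like target dominated by maximum temperature
eto = (0.12 * met["Tmax"] + 0.04 * met["Tmin"]
       - 0.015 * met["RH"] + 0.25 * met["WS"] + rng.normal(0, 0.3, n))

# Rank candidate inputs by absolute Pearson correlation with the target
ranks = sorted(((name, abs(np.corrcoef(x, eto)[0, 1]))
                for name, x in met.items()), key=lambda p: -p[1])
for name, r in ranks:
    print(f"{name:4s} |r| = {r:.2f}")
```

Pearson correlation only captures linear association, which is one reason nonlinear methods such as k-NN importance or CART are also used for this screening step.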
The nonlinear, uncertain, and stochastic relationships between ETo and the related meteorological factors can significantly degrade model prediction accuracy if all relevant features, such as trends, seasonality, cyclical behavior, outliers, and sudden changes in the time series, are not accurately captured. It is therefore crucial to extract useful information from the raw data as inputs to the ML models to ensure better predictions, and combining trend analysis methods with ML models enhances applicability and effectiveness. For example, Pertal et al. [148] combined wavelet transform with neural networks, while Adarsh et al. [149] combined multivariate empirical mode decomposition (MEMD) with stepwise linear regression. Jayasinghe et al. [150] combined MEMD with Boruta–random forest feature selection, developing the MEMD–Boruta–LSTM deep neural network. Similarly, Ali et al. [151] ensembled Multivariate Variational Mode Decomposition (MVMD) with Boosted Regression Trees (BRT); in each case the ensemble models significantly outperformed the standalone models.

5.2. Model Post-Evaluation

Evaluating machine learning models requires metrics appropriate to the task. For classification problems, key metrics include accuracy, recall, precision, the F1 score, and the receiver operating characteristic (ROC) curve with its area under the curve (AUC), each capturing a different performance dimension. ETo estimation, however, is a regression problem, and such model predictions are commonly evaluated with the Coefficient of Determination (R2), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the Nash–Sutcliffe Efficiency (NSE) coefficient, which are briefly explained and formulated in the following.
R2 indicates the degree of similarity between the predicted and actual results, ranging from 0 to 1, with larger values indicating a higher degree of similarity.
$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$$
where $y_i$ is the actual value, $\hat{y}_i$ is the predicted value, and $\bar{y}$ is the mean of the actual values.
RMSE is the square root of the average squared difference between the actual and predicted values. It is a widely used indicator for assessing regression models because it is easily interpretable and provides a direct measure of accuracy; the smaller the value, the smaller the difference between the predicted and actual results.
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$
where $y_i$ is the actual value, $\hat{y}_i$ is the predicted value, and $n$ is the number of data points.
MAE represents the average of the absolute difference between the actual and predicted values. It is easy to interpret and gives a direct indication of the average error magnitude. It is particularly useful to avoid the disproportionate influence of large errors that can occur with the RMSE.
$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$
where $y_i$ is the actual value, $\hat{y}_i$ is the predicted value, and $n$ is the number of data points.
NSE assesses the ratio of the remaining unexplained variance (noise) to the variance of the observations. It is widely used in hydrological modeling; NSE = 1 indicates a perfect match, and the metric emphasizes how closely the simulated values fall on the 1:1 line with the observed values.
$$NSE = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$$
where $y_i$ is the actual value, $\hat{y}_i$ is the predicted value, and $\bar{y}$ is the mean of the actual values.
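The four metrics can be computed directly from paired observed and predicted series; the daily ETo values below are illustrative only.

```python
import numpy as np

def r2(y, yhat):
    """Coefficient of determination, as defined above."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def rmse(y, yhat):
    """Root Mean Squared Error."""
    return np.sqrt(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    """Mean Absolute Error."""
    return np.mean(np.abs(y - yhat))

def nse(y, yhat):
    """Nash-Sutcliffe Efficiency (identical in form to R2 as defined here)."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# Illustrative daily ETo observations (mm/d) and model predictions
y_obs = np.array([3.2, 4.1, 5.0, 4.4, 3.8, 5.5])
y_pred = np.array([3.0, 4.3, 4.8, 4.6, 3.9, 5.2])
print(f"R2={r2(y_obs, y_pred):.3f}, RMSE={rmse(y_obs, y_pred):.3f} mm/d, "
      f"MAE={mae(y_obs, y_pred):.3f} mm/d, NSE={nse(y_obs, y_pred):.3f}")
```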
Table 3 compiles the assessment metrics reported across a range of time scales for evaluating models' ability to predict ETo. The table shows that applying machine learning to ETo prediction achieves high accuracy, but also that prediction accuracy with fewer input factors still needs improvement. In addition, while calibration can somewhat improve the accuracy of empirical equations, they still generally fall short of ML models, which typically offer more robust and precise predictions, particularly in complex or variable environments.
Table 4 builds upon the comparative accuracy analysis presented in Table 3 by summarizing the best-performing models tailored to specific climate zones—namely arid, humid, and semi-arid regions. Where prediction accuracy was comparable, models with fewer input variables were selected. This synthesis offers practical, region-specific guidance for selecting the most suitable ETo prediction model, enhancing the applicability of machine learning approaches across diverse environmental contexts.
To evaluate model performance more systematically, metrics beyond those in Table 3, such as Willmott's index of agreement (WI), the correlation coefficient (R), and the global performance index (GPI), can be applied as needed.

6. Conclusions and Recommendations

Empirical models vary in adaptability across regions, whereas ML models can be used for ETo simulation in diverse locations. A critical consideration when selecting an ML model is the trade-off between predictive accuracy and computational cost (including training time, resource intensity, and complexity). Deep learning is more accurate and can achieve short-term predictions using a single meteorological variable (e.g., temperature), but it usually requires significant computational resources and is better suited to large datasets. In contrast, traditional machine learning has lower resource requirements. In terms of training time, the EL model has the shortest computation time, while for model stability, the SVM is the most stable. Overall, the EL model is recommended for ETo simulation due to its cost-effectiveness and stability.
Although modeling ETo using ML models has shown good results, there are still some limitations: (1) Traditional ML models are prone to overfitting or underfitting, leading to poor performance on new data. (2) Traditional ML algorithms can be limited by the computational and storage requirements of large-scale datasets. (3) The quality and representativeness of the training data are crucial to the model's ability to generalize; insufficient, unrepresentative, or low-quality training data can severely impact performance, and irrelevant features can lead to poor model training. (4) The accuracy of evapotranspiration estimation depends heavily on the input variables and the ML model used. For instance, an ANN model can achieve high prediction accuracy, but if actual solar radiation data are unavailable, its performance is greatly reduced. (5) Reproducibility challenges arise from the lack of open-source code and standardized benchmarks in many reviewed studies, hindering independent validation. For our review, potential publication bias may exist, as studies from regions such as China and Iran are overrepresented, which could limit the global applicability of the findings.
As DL and reinforcement learning (RL) techniques continue to advance, the application of ML in ET or ETo simulation will become more extensive and in-depth. For example, combining multiple ML models with traditional models can enhance the accuracy and adaptability of evapotranspiration estimation. To improve generalization ability and accuracy, further optimization of algorithms and feature engineering is needed; effective strategies include increasing the amount of data, data cleaning, feature selection, and regularization. Incorporating physical mechanisms into ML models (e.g., via physics-informed neural networks that embed domain knowledge directly into the model architecture) can help better capture the nature of the evapotranspiration process. By combining mechanistic models with DL models, it is possible to introduce more domain knowledge and constraints while also reducing data requirements. Furthermore, the ethical implications of ML in water resource management, such as data privacy and potential biases in training datasets, warrant careful consideration. In conclusion, despite current shortcomings, continuous improvement of algorithms, integration of multi-source data, incorporation of physical mechanisms, and attention to ethical concerns will make ETo modeling more accurate, efficient, and responsible.

Author Contributions

Y.C. and C.Z.: data curation, methodology, formal analysis, writing—original draft, and writing—review and editing; J.H. and H.C.: investigation, visualization and conceptualization; C.W., C.Z., and Z.H.: supervision, project administration and resources. All authors have read and agreed to the published version of the manuscript.

Funding

This research was jointly supported by the National Key R & D Program of China (No. 2021YFD1900603), Young Scientist Program of Bayannaoer Research Institute of China Agricultural University: 2024BYNECAU012 and the National Natural Science Foundation of China (No. 52279050).

Data Availability Statement

Further information on the data and methodologies is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Allan, R.; Pereira, L.; Smith, M. Crop Evapotranspiration-Guidelines for Computing Crop Water Requirements-FAO Irrigation and Drainage Paper 56; FAO: Rome, Italy, 1998; Volume 56. [Google Scholar]
  2. Kang, S.; Su, X.; Tong, L.; Zhang, J.; Zhang, L.; Davies. A Warning from an Ancient Oasis: Intensive Human Activities Are Leading to Potential Ecological and Social Catastrophe. Int. J. Sustain. Dev. World Ecol. 2008, 15, 440–447. [Google Scholar] [CrossRef]
  3. Wen, X.; Si, J.; He, Z.; Wu, J.; Shao, H.; Yu, H. Support-Vector-Machine-Based Models for Modeling Daily Reference Evapotranspiration with Limited Climatic Data in Extreme Arid Regions. Water Resour. Manag. 2015, 29, 3195–3209. [Google Scholar] [CrossRef]
  4. Hargreaves, G.; Samani, Z. Reference Crop Evapotranspiration from Temperature. Appl. Eng. Agric. 1985, 1, 96–99. [Google Scholar] [CrossRef]
  5. Doorenbos, J.; Pruitt, W.O. Guidelines for Predicting Crop Water Requirements; FAO: Rome, Italy, 1975. [Google Scholar]
  6. Makkink, G.F. Testing the Penman Formula by Means of Lysimeters. J. Inst. Water Eng. 1957, 11, 277–288. [Google Scholar]
  7. Ritchie, J.T. Model for Predicting Evaporation from a Row Crop with Incomplete Cover. Water Resour. Res. 1972, 8, 1204–1213. [Google Scholar] [CrossRef]
  8. DehghaniSanij, H.; Yamamoto, T.; Rasiah, V. Assessment of Evapotranspiration Estimation Models for Use in Semi-Arid Environments. Agric. Water Manag. 2004, 64, 91–106. [Google Scholar] [CrossRef]
  9. Allen, R.G.; Clemmens, A.J.; Burt, C.M.; Solomon, K.; O’Halloran, T. Prediction Accuracy for Projectwide Evapotranspiration Using Crop Coefficients and Reference Evapotranspiration. J. Irrig. Drain. Eng. 2005, 131, 24–36. [Google Scholar] [CrossRef]
  10. Allen, R.G.; Pruitt, W.O.; Wright, J.L.; Howell, T.A.; Ventura, F.; Snyder, R.; Itenfisu, D.; Steduto, P.; Berengena, J.; Yrisarry, J.B.; et al. A Recommendation on Standardized Surface Resistance for Hourly Calculation of Reference ETo by the FAO56 Penman-Monteith Method. Agric. Water Manag. 2006, 81, 1–22. [Google Scholar] [CrossRef]
  11. Alizamir, M.; Kisi, O.; Zounemat-Kermani, M. Modelling Long-Term Groundwater Fluctuations by Extreme Learning Machine Using Hydro-Climatic Data. Hydrol. Sci. J. 2018, 63, 63–73. [Google Scholar] [CrossRef]
  12. Roy, D.K.; Biswas, S.K.; Mattar, M.A.; El-Shafei, A.A.; Murad, K.F.I.; Saha, K.K.; Datta, B.; Dewidar, A.Z. Groundwater Level Prediction Using a Multiple Objective Genetic Algorithm-Grey Relational Analysis Based Weighted Ensemble of ANFIS Models. Water 2021, 13, 3130. [Google Scholar] [CrossRef]
  13. An, Y.; Zhang, Y.; Yan, X. An Integrated Bayesian and Machine Learning Approach Application to Identification of Groundwater Contamination Source Parameters. Water 2022, 14, 2447. [Google Scholar] [CrossRef]
  14. Kisi, O.; Cimen, M. Evapotranspiration Modelling Using Support Vector Machines. Hydrol. Sci. J. 2009, 54, 918–928. [Google Scholar] [CrossRef]
  15. Tabari, H.; Kisi, O.; Ezani, A.; Hosseinzadeh Talaee, P. SVM, ANFIS, Regression and Climate Based Models for Reference Evapotranspiration Modeling Using Limited Climatic Data in a Semi-Arid Highland Environment. J. Hydrol. 2012, 444–445, 78–89. [Google Scholar] [CrossRef]
  16. Granata, F. Evapotranspiration Evaluation Models Based on Machine Learning Algorithms—A Comparative Study. Agric. Water Manag. 2019, 217, 303–315. [Google Scholar] [CrossRef]
  17. Chia, M.Y.; Huang, Y.F.; Koo, C.H.; Ng, J.L.; Ahmed, A.N.; El-Shafie, A. Long-Term Forecasting of Monthly Mean Reference Evapotranspiration Using Deep Neural Network: A Comparison of Training Strategies and Approaches. Appl. Soft Comput. 2022, 126, 109221. [Google Scholar] [CrossRef]
  18. Ge, J.; Zhao, L.; Yu, Z.; Liu, H.; Zhang, L.; Gong, X.; Sun, H. Prediction of Greenhouse Tomato Crop Evapotranspiration Using XGBoost Machine Learning Model. Plants 2022, 11, 1923. [Google Scholar] [CrossRef]
  19. Vaz, P.J.; Schütz, G.; Guerrero, C.; Cardoso, P.J.S. Hybrid Neural Network Based Models for Evapotranspiration Prediction Over Limited Weather Parameters. IEEE Access 2023, 11, 963–976. [Google Scholar] [CrossRef]
  20. Chi, D.; Wang, X.; Zhou, B.; Wang, X.; Cao, J. Real-Time Forecast Application Based on Improved Weather Forecast Model ET0. Yangtze River 2008, 8, 5–6+16. [Google Scholar] [CrossRef]
  21. Feng, Y.; Cui, N.; Gong, D. Comparison of Machine Learning Algorithms and Hargreaves Model for Reference Evapotranspiration Estimation in Sichuan. Chin. J. Agrometeorol. 2016, 37, 415–421. [Google Scholar] [CrossRef]
  22. Hu, Z.; Bashir, R.N.; Rehman, A.U.; Iqbal, S.I.; Shahid, M.M.A.; Xu, T. Machine Learning Based Prediction of Reference Evapotranspiration (ET0) Using IoT. IEEE Access 2022, 10, 70526–70540. [Google Scholar] [CrossRef]
  23. Mangalath Ravindran, S.; Moorakkal Bhaskaran, S.K.; Ambat, S.K.; Balakrishnan, K.; Manguttathil Gopalakrishnan, M. An Automated Machine Learning Methodology for the Improved Prediction of Reference Evapotranspiration Using Minimal Input Parameters. Hydrol. Process. 2022, 36, e14571. [Google Scholar] [CrossRef]
  24. Shaloo; Kumar, B.; Bisht, H.; Rajput, J.; Mishra, A.K.; Tm, K.K.; Brahmanand, P.S. Reference Evapotranspiration Prediction Using Machine Learning Models: An Empirical Study from Minimal Climate Data. Agron. J. 2024, 116, 956–972. [Google Scholar] [CrossRef]
  25. Tang, P.; Xu, B.; Gao, Z.; Gao, X. Simplified Limited Data ET0 Equation Adapted for High-Elevation Locations in Tibet. J. Hydraul. Eng. 2017, 48, 1055–1063. [Google Scholar] [CrossRef]
  26. Wei, Q.; Wei, Q.; Xu, J.; Bai, Y.; Li, X.; He, M.; Xu, J. Research on ET0 Prediction Based on Machine Learning Algorithm. Water Sav. Irrig. 2022, 2022, 9–17. [Google Scholar] [CrossRef]
  27. Zhou, R.; Wei, Z.; Zhang, Y.; Zhang, S. Prediction of Reference Crop Evapotranspiration Based on Generalized RegressionNeural Network and Particle Swarm Optimization Algorithm. China Rural. Water Hydropower 2017, 6, 1–7. [Google Scholar]
  28. Priestley, C.H.B.; Taylor, R.J. On the Assessment of Surface Heat Flux and Evaporation Using Large-Scale Parameters. Mon. Weather Rev. 1972, 100, 81–92. [Google Scholar] [CrossRef]
  29. Hansen, S. Estimation of Potential and Actual Evapotranspiration: Paper Presented at the Nordic Hydrological Conference (Nyborg, Denmark, August—1984). Hydrol. Res. 1984, 15, 205–212. [Google Scholar] [CrossRef]
  30. Xiang, K.; Li, Y.; Horton, R.; Feng, H. Similarity and Difference of Potential Evapotranspiration and Reference Crop Evapotranspiration—A Review. Agric. Water Manag. 2020, 232, 106043. [Google Scholar] [CrossRef]
  31. Almorox, J.; Quej, V.H.; Martí, P. Global Performance Ranking of Temperature-Based Approaches for Evapotranspiration Estimation Considering Köppen Climate Classes. J. Hydrol. 2015, 528, 514–522. [Google Scholar] [CrossRef]
  32. Zhang, L.; Zhao, X.; Ge, J.; Zhang, J.; Traore, S.; Fipps, G.; Luo, Y. Evaluation of Five Equations for Short-Term Reference Evapotranspiration Forecasting Using Public Temperature Forecasts for North China Plain. Water 2022, 14, 2888. [Google Scholar] [CrossRef]
  33. Awal, R.; Rahman, A.; Fares, A.; Habibi, H. Calibration and Evaluation of Empirical Methods to Estimate Reference Crop Evapotranspiration in West Texas. Water 2022, 14, 3032. [Google Scholar] [CrossRef]
  34. Fu, Y.; Shen, X.; Li, W.; Wu, X.; Zhang, Q. Applicability of Reference Crop Evapotranspiration Calculation Based on Hargreaves-Samani Regression Correction. Arid Land Geogr. 2022, 45, 1752–1760. [Google Scholar] [CrossRef]
  35. Trajkovic, S. Hargreaves versus Penman-Monteith under Humid Conditions. J. Irrig. Drain. Eng. 2007, 133, 38–42. [Google Scholar] [CrossRef]
  36. George, B.; Reddy, B.; Raghuwanshi, N.; Wallender, W. Decision Support System for Estimating Reference Evapotranspiration. J. Irrig. Drain. Eng. 2002, 128, 1–10. [Google Scholar] [CrossRef]
  37. Odhiambo, L.; Wright, W.; Yoder, R. Biological Evaluation of Methods for Estimating Daily Reference Crop Evapotranspiration at a Site in the Humid Southeast United States. Appl. Eng. Agric. 2005, 21, 197–202. [Google Scholar] [CrossRef]
  38. López-Urrea, R.; Martín de Santa Olalla, F.; Fabeiro, C.; Moratalla, A. Testing Evapotranspiration Equations Using Lysimeter Observations in a Semiarid Climate. Agric. Water Manag. 2006, 85, 15–26. [Google Scholar] [CrossRef]
  39. Abbaspour, K.C. A Comparison of Different Methods of Estimating Energy-limited Evapotranspiration in the Peace River Region of British Columbia. Atmos. Ocean. 1991, 29, 686–698. [Google Scholar] [CrossRef]
  40. Zhai, L.; Feng, Q.; Li, Q.; Xu, C. Comparison and Modification of Equations for Calculating Evapotranspiration (ET) with Data from Gansu Province, Northwest China. Irrig. Drain. 2010, 59, 477–490. [Google Scholar] [CrossRef]
  41. Valipour, M. Use of Average Data of 181 Synoptic Stations for Estimation of Reference Crop Evapotranspiration by Temperature-Based Methods. Water Resour. Manag. 2014, 28, 4237–4255. [Google Scholar] [CrossRef]
  42. Zhang, D.; Wang, C.; Han, Y.; Yuan, Y. Improvement and Applicability of Reference Crop Evapotranspiration ModelBased on Temperature in Haihe River Basin. Yellow River 2021, 43, 155–160. [Google Scholar] [CrossRef]
  43. Berengena, J.; Gavilán, P. Reference Evapotranspiration Estimation in a Highly Advective Semiarid Environment. J. Irrig. Drain. Eng. 2005, 131, 147–163. [Google Scholar] [CrossRef]
  44. Xystrakis, F.; Matzarakis, A. Evaluation of 13 Empirical Reference Potential Evapotranspiration Equations on the Island of Crete in Southern Greece. J. Irrig. Drain. Eng. 2011, 137, 211–222. [Google Scholar] [CrossRef]
  45. Gao, F.; Feng, G.; Ouyang, Y.; Wang, H.; Fisher, D.; Adeli, A.; Jenkins, J. Evaluation of Reference Evapotranspiration Methods in Arid, Semiarid, and Humid Regions. JAWRA J. Am. Water Resour. Assoc. 2017, 53, 791–808. [Google Scholar] [CrossRef]
  46. Wu, Z.; Cui, N.; Hu, X.; Gong, D.; Wang, Y.; Feng, Y.; Xing, L.; Zhu, B.; Zou, Q. Estimation of Reference Crop Evapotranspiration in Sichuan Basin Based on Improved Makkink Model. J. Drain. Irrig. Mach. Eng. 2021, 39, 509–516. [Google Scholar]
  47. Xu, C.-Y.; Singh, V.P. Evaluation and Generalization of Radiation-Based Methods for Calculating Evaporation. Hydrol. Process. 2000, 14, 339–349. [Google Scholar] [CrossRef]
  48. Gabriela Arellano, M.; Irmak, S. Reference (Potential) Evapotranspiration. I: Comparison of Temperature, Radiation, and Combination-Based Energy Balance Equations in Humid, Subhumid, Arid, Semiarid, and Mediterranean-Type Climates. J. Irrig. Drain. Eng. 2016, 142, 04015065. [Google Scholar] [CrossRef]
  49. Wang, J.; Ye, S.; Fan, Y. Applicability Evaluation of Reference Crop Evapotranspiration Calculation Models in Different Climatic Regions of China. Water Sav. Irrig. 2022, 3, 82–91. [Google Scholar] [CrossRef]
  50. Liu, W.; Sun, F. Assessing Estimates of Evaporative Demand in Climate Models Using Observed Pan Evaporation over China. J. Geophys. Res. Atmos. 2016, 121, 8329–8349. [Google Scholar] [CrossRef]
  51. Heydari, M.; Agha Majidi, R.; Beygipoor, G.; Heydari, M. Comparison and Evaluation of 38 Equations for Estimating Reference Evapotranspiration in an Arid Region. Fresenius Environ. Bull. 2014, 23, 1985–1996. [Google Scholar]
  52. Abed, M.; Imteaz, M.A.; Ahmed, A.N.; Huang, Y.F. Modelling Monthly Pan Evaporation Utilising Random Forest and Deep Learning Algorithms. Sci. Rep. 2022, 12, 13132. [Google Scholar] [CrossRef]
  53. Elzain, H.E.; Abdalla, O.A.; Abdallah, M.; Al-Maktoumi, A.; Eltayeb, M.; Abba, S.I. Innovative Approach for Predicting Daily Reference Evapotranspiration Using Improved Shallow and Deep Learning Models in a Coastal Region: A Comparative Study. J. Environ. Manag. 2024, 354, 120246. [Google Scholar] [CrossRef]
  54. Huang, G.; Zhu, Q.; Siew, C.K. Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541), Budapest, Hungary, 25–29 July 2004; Volume 2, pp. 985–990. [Google Scholar]
  55. Elbeltagi, A.; Nagy, A.; Mohammed, S.; Pande, C.B.; Kumar, M.; Bhat, S.A.; Zsembeli, J.; Huzsvai, L.; Tamás, J.; Kovács, E.; et al. Combination of Limited Meteorological Data for Predicting Reference Crop Evapotranspiration Using Artificial Neural Network Method. Agronomy 2022, 12, 516. [Google Scholar] [CrossRef]
  56. Fang, S.-L.; Lin, Y.-S.; Chang, S.-C.; Chang, Y.-L.; Tsai, B.-Y.; Kuo, B.-J. Using Artificial Intelligence Algorithms to Estimate and Short-Term Forecast the Daily Reference Evapotranspiration with Limited Meteorological Variables. Agriculture 2024, 14, 510. [Google Scholar] [CrossRef]
  57. Gocić, M.; Motamedi, S.; Shamshirband, S.; Petković, D.; Ch, S.; Hashim, R.; Arif, M. Soft Computing Approaches for Forecasting Reference Evapotranspiration. Comput. Electron. Agric. 2015, 113, 164–173. [Google Scholar] [CrossRef]
  58. Landeras, G.; Ortiz-Barredo, A.; López, J.J. Forecasting Weekly Evapotranspiration with ARIMA and Artificial Neural Network Models. J. Irrig. Drain. Eng. 2009, 135, 323–334. [Google Scholar] [CrossRef]
  59. Skhiri, A.; Ferhi, A.; Bousselmi, A.; Khlifi, S.; Mattar, M.A. Artificial Neural Network for Forecasting Reference Evapotranspiration in Semi-Arid Bioclimatic Regions. Water 2024, 16, 602. [Google Scholar] [CrossRef]
  60. Trajkovic, S.; Todorovic, B.; Stankovic, M. Forecasting of Reference Evapotranspiration by Artificial Neural Networks. J. Irrig. Drain. Eng. 2003, 129, 454–457. [Google Scholar] [CrossRef]
  61. Traore, S.; Wang, Y.M.; Chung, W.G. Predictive Accuracy of Backpropagation Neural Network Methodology in Evapotranspiration Forecasting in Dédougou Region, Western Burkina Faso. J. Earth Syst. Sci. 2014, 123, 307–318. [Google Scholar] [CrossRef]
  62. Zhang, X.; Wang, Z.; Shen, Y.; Yang, H. Comparison of Different Methods for Estimating Reference Evapotranspiration with Weather Data from Nearby Station. J. Nat. Resour. 2019, 34, 179–190. [Google Scholar] [CrossRef]
  63. Zhou, J.; Dong, Q. Using Temperature Models to Estimate ET0 in Data-Scarce Regions with Limited Solar Radiation Data. Chin. J. Agrometeorol. 2024, 45, 701–714. [Google Scholar] [CrossRef]
  64. Achite, M.; Jehanzaib, M.; Sattari, M.; Abderrezak Kamel, T.; Elshaboury, N.; Walega, A.; Krakauer, N.; Yoo, J.-Y.; Kim, T.-W. Modern Techniques to Modeling Reference Evapotranspiration in a Semiarid Area Based on ANN and GEP Models. Water 2022, 14, 1210. [Google Scholar] [CrossRef]
  65. Abdullah, S.S.; Malek, M.A.; Abdullah, N.S.; Kisi, O.; Yap, K.S. Extreme Learning Machines: A New Approach for Prediction of Reference Evapotranspiration. J. Hydrol. 2015, 527, 184–195. [Google Scholar] [CrossRef]
  66. Hameed, M.M.; AlOmar, M.K.; Mohd Razali, S.F.; Kareem Khalaf, M.A.; Baniya, W.J.; Sharafati, A.; AlSaadi, M.A. Application of Artificial Intelligence Models for Evapotranspiration Prediction along the Southern Coast of Turkey. Complexity 2021, 2021, 8850243. [Google Scholar] [CrossRef]
  67. Dimitriadou, S.; Nikolakopoulos, K.G. Artificial Neural Networks for the Prediction of the Reference Evapotranspiration of the Peloponnese Peninsula, Greece. Water 2022, 14, 2027. [Google Scholar] [CrossRef]
  68. Pino-Vargas, E.; Taya-Acosta, E.; Ingol-Blanco, E.; Torres-Rúa, A. Deep Machine Learning for Forecasting Daily Potential Evapotranspiration in Arid Regions, Case: Atacama Desert Header. Agriculture 2022, 12, 1971. [Google Scholar] [CrossRef]
  69. Mandal, N.; Chanda, K. Performance of Machine Learning Algorithms for Multi-Step Ahead Prediction of Reference Evapotranspiration across Various Agro-Climatic Zones and Cropping Seasons. J. Hydrol. 2023, 620, 129418. [Google Scholar] [CrossRef]
  70. Chauhan, S.; Shrivastava, R.K. Reference Evapotranspiration Forecasting Using Different Artificial Neural Networks Algorithms. Can. J. Civ. Eng. 2009, 36, 1491–1505. [Google Scholar] [CrossRef]
  71. Abdullahi, J.; Elkiran, G. Prediction of the Future Impact of Climate Change on Reference Evapotranspiration in Cyprus Using Artificial Neural Network. Procedia Comput. Sci. 2017, 120, 276–283. [Google Scholar] [CrossRef]
  72. Kartal, V. Prediction of Monthly Evapotranspiration by Artificial Neural Network Model Development with Levenberg–Marquardt Method in Elazig, Turkey. Environ. Sci. Pollut. Res. 2024, 31, 20953–20969. [Google Scholar] [CrossRef]
  73. Zheng, Y.; Zhang, L.; Hu, X.; Zhao, J.; Dong, W.; Zhu, F.; Wang, H. Multi-Algorithm Hybrid Optimization of Back Propagation (BP) Neural Networks for Reference Crop Evapotranspiration Prediction Models. Water 2023, 15, 3718. [Google Scholar] [CrossRef]
  74. Zhao, L.; Xing, L.; Wang, Y.; Cui, N.; Zhou, H.; Shi, Y.; Chen, S.; Zhao, X.; Li, Z. Prediction Model for Reference Crop Evapotranspiration Based on the Back-Propagation Algorithm with Limited Factors. Water Resour. Manag. 2023, 37, 1207–1222. [Google Scholar] [CrossRef]
  75. Wu, D.; Wang, G.G. Causal Artificial Neural Network and Its Applications in Engineering Design. Eng. Appl. Artif. Intell. 2021, 97, 104089. [Google Scholar] [CrossRef]
  76. Tallec, G.; Yvinec, E.; Dapogny, A.; Bailly, K. Fighting Over-Fitting with Quantization for Learning Deep Neural Networks on Noisy Labels. In Proceedings of the 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 8–11 October 2023; pp. 575–579. [Google Scholar]
  77. Ünal, H.T.; Başçiftçi, F. Evolutionary Design of Neural Network Architectures: A Review of Three Decades of Research. Artif. Intell. Rev. 2022, 55, 1723–1802. [Google Scholar] [CrossRef]
  78. Torres, A.F.; Walker, W.R.; McKee, M. Forecasting Daily Potential Evapotranspiration Using Machine Learning and Limited Climatic Data. Agric. Water Manag. 2011, 98, 553–562. [Google Scholar] [CrossRef]
  79. Shrestha, N.K.; Shukla, S. Support Vector Machine Based Modeling of Evapotranspiration Using Hydro-Climatic Variables in a Sub-Tropical Environment. Agric. For. Meteorol. 2015, 200, 172–184. [Google Scholar] [CrossRef]
  80. Ghorbani, M.A.; Shamshirband, S.; Zare Haghi, D.; Azani, A.; Bonakdari, H.; Ebtehaj, I. Application of Firefly Algorithm-Based Support Vector Machines for Prediction of Field Capacity and Permanent Wilting Point. Soil Tillage Res. 2017, 172, 32–38. [Google Scholar] [CrossRef]
  81. Hou, Z.; Yang, P.; Su, Y.; Ren, S. Simulation of ET0 Based on LS-SVM Method. J. Hydraul. Eng. 2011, 42, 743–749. [Google Scholar] [CrossRef]
  82. Kisi, O. Least Squares Support Vector Machine for Modeling Daily Reference Evapotranspiration. Irrig. Sci. 2013, 31, 611–619. [Google Scholar] [CrossRef]
  83. Fan, J.; Yue, W.; Wu, L.; Zhang, F.; Cai, H.; Wang, X.; Lu, X.; Xiang, Y. Evaluation of SVM, ELM and Four Tree-Based Ensemble Models for Predicting Daily Reference Evapotranspiration Using Limited Meteorological Data in Different Climates of China. Agric. For. Meteorol. 2018, 263, 225–241. [Google Scholar] [CrossRef]
  84. Mao, Y.; Fang, S. Research of Reference Evapotranspiration’s Simulation Based on Machine Learning. J. Geo. Inf. Sci. 2020, 22, 1692–1701. [Google Scholar] [CrossRef]
  85. Ikram, R.M.A.; Mostafa, R.R.; Chen, Z.; Islam, A.R.M.T.; Kisi, O.; Kuriqi, A.; Zounemat-Kermani, M. Advanced Hybrid Metaheuristic Machine Learning Models Application for Reference Crop Evapotranspiration Prediction. Agronomy 2023, 13, 98. [Google Scholar] [CrossRef]
  86. Citakoglu, H.; Cobaner, M.; Haktanir, T.; Kisi, O. Estimation of Monthly Mean Reference Evapotranspiration in Turkey. Water Resour. Manag. 2014, 28, 99–113. [Google Scholar] [CrossRef]
  87. Keshtegar, B.; Kisi, O.; Ghohani Arab, H.; Zounemat-Kermani, M. Subset Modeling Basis ANFIS for Prediction of the Reference Evapotranspiration. Water Resour. Manag. 2018, 32, 1101–1116. [Google Scholar] [CrossRef]
  88. Quej Chi, V.; Castillo Aguilar, C.; Almorox, J.; Rivera, B. Evaluation of Artificial Intelligence Models for Daily Prediction of Reference Evapotranspiration Using Temperature, Rainfall and Relative Humidity in a Warm Sub-Humid Environment. Ital. J. Agrometeorol. 2022, 1, 49–63. [Google Scholar] [CrossRef]
  89. Sharafi, S.; Ghaleni, M.M.; Scholz, M. Comparison of Predictions of Daily Evapotranspiration Based on Climate Variables Using Different Data Mining and Empirical Methods in Various Climates of Iran. Heliyon 2023, 9, e13245. [Google Scholar] [CrossRef] [PubMed]
  90. Aghelpour, P.; Varshavian, V.; Khodamorad Pour, M.; Hamedi, Z. Comparing Three Types of Data-Driven Models for Monthly Evapotranspiration Prediction under Heterogeneous Climatic Conditions. Sci. Rep. 2022, 12, 17363. [Google Scholar] [CrossRef] [PubMed]
  91. Tao, H.; Diop, L.; Bodian, A.; Djaman, K.; Ndiaye, P.M.; Yaseen, Z.M. Reference Evapotranspiration Prediction Using Hybridized Fuzzy Model with Firefly Algorithm: Regional Case Study in Burkina Faso. Agric. Water Manag. 2018, 208, 140–151. [Google Scholar] [CrossRef]
  92. El-Shafie, A.; Najah, A.; Alsulami, H.M.; Jahanbani, H. Optimized Neural Network Prediction Model for Potential Evapotranspiration Utilizing Ensemble Procedure. Water Resour. Manag. 2014, 28, 947–967. [Google Scholar] [CrossRef]
  93. El-kenawy, E.-S.M.; Zerouali, B.; Bailek, N.; Bouchouich, K.; Hassan, M.A.; Almorox, J.; Kuriqi, A.; Eid, M.; Ibrahim, A. Improved Weighted Ensemble Learning for Predicting the Daily Reference Evapotranspiration under the Semi-Arid Climate Conditions. Environ. Sci. Pollut. Res. 2022, 29, 81279–81299. [Google Scholar] [CrossRef]
  94. Roy, D.K.; Barzegar, R.; Quilty, J.; Adamowski, J. Using Ensembles of Adaptive Neuro-Fuzzy Inference System and Optimization Algorithms to Predict Reference Evapotranspiration in Subtropical Climatic Zones. J. Hydrol. 2020, 591, 125509. [Google Scholar] [CrossRef]
  95. Zhou, Z. Machine Learning; Tsinghua University Press: Beijing, China, 2016. [Google Scholar]
  96. Manikumari, N.; Murugappan, A.; Vinodhini, G. Time Series Forecasting of Daily Reference Evapotranspiration by Neural Network Ensemble Learning for Irrigation System. IOP Conf. Ser. Earth Environ. Sci. 2017, 80, 012069. [Google Scholar] [CrossRef]
  97. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  98. Matin, S.S.; Chelgani, S.C. Estimation of Coal Gross Calorific Value Based on Various Analyses by Random Forest Method. Fuel 2016, 177, 274–278. [Google Scholar] [CrossRef]
  99. Friedman, J.H. Stochastic Gradient Boosting. Comput. Stat. Data Anal. 2002, 38, 367–378. [Google Scholar] [CrossRef]
  100. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–16 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794. [Google Scholar]
  101. Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. CatBoost: Unbiased Boosting with Categorical Features. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 3–6 December 2018; Curran Associates Inc.: Red Hook, NY, USA, 2018; pp. 6639–6649. [Google Scholar]
  102. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 3149–3157. [Google Scholar]
  103. Feng, Y.; Cui, N.; Gong, D.; Zhang, Q.; Zhao, L. Evaluation of Random Forests and Generalized Regression Neural Networks for Daily Reference Evapotranspiration Modelling. Agric. Water Manag. 2017, 193, 163–173. [Google Scholar] [CrossRef]
  104. Wu, M.; Feng, Q.; Wen, X.; Deo, R.C.; Yin, Z.; Yang, L.; Sheng, D. Random Forest Predictive Model Development with Uncertainty Analysis Capability for the Estimation of Evapotranspiration in an Arid Oasis Region. Hydrol. Res. 2020, 51, 648–665. [Google Scholar] [CrossRef]
  105. Salam, R.; Islam, A.R.M.T. Potential of RT, Bagging and RS Ensemble Learning Algorithms for Reference Evapotranspiration Prediction Using Climatic Data-Limited Humid Region in Bangladesh. J. Hydrol. 2020, 590, 125241. [Google Scholar] [CrossRef]
  106. Akar, F.; Katipoğlu, O.; Yeşilyurt, S.; Taş, M. Evaluation of Tree-Based Machine Learning and Deep Learning Techniques in Temperature-Based Potential Evapotranspiration Prediction. Pol. J. Environ. Stud. 2023, 32, 1009–1023. [Google Scholar] [CrossRef]
  107. Zhao, L.; Wang, Y.; Shi, Y.; Zhao, X.; Cui, N.; Zhang, S. Selecting Essential Factors for Predicting Reference Crop Evapotranspiration through Tree-Based Machine Learning and Bayesian Optimization. Theor. Appl. Climatol. 2024, 155, 2953–2972. [Google Scholar] [CrossRef]
  108. Liu, X.; Dai, Z.; Wu, L.; Zhang, F.; Dong, J.; Chen, Z. Comparing the Performance of GPR, XGBoost and CatBoost Models for Calculating Reference Crop Evapotranspiration in Jiangxi Province. J. Irrig. Drain. 2021, 40, 91–96. [Google Scholar] [CrossRef]
  109. Liang, Y.; Feng, D.; Sun, Z.; Zhu, Y. Evaluation of Empirical Equations and Machine Learning Models for Daily Reference Evapotranspiration Prediction Using Public Weather Forecasts. Water 2023, 15, 3954. [Google Scholar] [CrossRef]
  110. Huang, G.; Wu, L.; Ma, X.; Zhang, W.; Fan, J.; Yu, X.; Zeng, W.; Zhou, H. Evaluation of CatBoost Method for Prediction of Reference Evapotranspiration in Humid Regions. J. Hydrol. 2019, 574, 1029–1041. [Google Scholar] [CrossRef]
  111. Shiri, J. Improving the Performance of the Mass Transfer-Based Reference Evapotranspiration Estimation Approaches through a Coupled Wavelet-Random Forest Methodology. J. Hydrol. 2018, 561, 737–750. [Google Scholar] [CrossRef]
  112. Lu, X.; Fan, J.; Wu, L.; Dong, J. Forecasting Multi-Step Ahead Monthly Reference Evapotranspiration Using Hybrid Extreme Gradient Boosting with Grey Wolf Optimization Algorithm. Comput. Model. Eng. Sci. 2020, 125, 699–723. [Google Scholar] [CrossRef]
  113. Heramb, P.; Ramana Rao, K.V.; Subeesh, A.; Srivastava, A. Predictive Modelling of Reference Evapotranspiration Using Machine Learning Models Coupled with Grey Wolf Optimizer. Water 2023, 15, 856. [Google Scholar] [CrossRef]
  114. Liu, A.; Zhao, D.; Wei, Y.; Xiao, L. Ensemble Learning Estimation of Reference Crop Evapotranspiration Taking into Account Temporal and Spatial Characteristics. J. Drain. Irrig. Mach. Eng. 2024, 42, 179–186. [Google Scholar] [CrossRef]
  115. Cho, K.; Van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations Using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; Association for Computational Linguistics: Stroudsburg, PA, USA, 2014; pp. 1724–1734. [Google Scholar]
  116. Zhang, L.; Liu, P.; Zhao, L.; Wang, G.; Zhang, W.; Liu, J. Air Quality Predictions with a Semi-Supervised Bidirectional LSTM Neural Network. Atmos. Pollut. Res. 2021, 12, 328–339. [Google Scholar] [CrossRef]
  117. Karbasi, M.; Jamei, M.; Ali, M.; Malik, A.; Yaseen, Z.M. Forecasting Weekly Reference Evapotranspiration Using Auto Encoder Decoder Bidirectional LSTM Model Hybridized with a Boruta-CatBoost Input Optimizer. Comput. Electron. Agric. 2022, 198, 107121. [Google Scholar] [CrossRef]
  118. Pan, Z.; Liu, Z.; Shen, X.; Zhang, Z.; Shi, K.; Zhang, S. Prediction Model of Reference Crop Evapotranspiration Based on Deep Learning. J. Shanxi Agric. Sci. 2023, 51, 942–952. [Google Scholar] [CrossRef]
  119. Roy, D.K.; Sarkar, T.K.; Kamar, S.S.A.; Goswami, T.; Muktadir, M.A.; Al-Ghobari, H.M.; Alataway, A.; Dewidar, A.Z.; El-Shafei, A.A.; Mattar, M.A. Daily Prediction and Multi-Step Forward Forecasting of Reference Evapotranspiration Using LSTM and Bi-LSTM Models. Agronomy 2022, 12, 594. [Google Scholar] [CrossRef]
  120. Yin, J.; Deng, Z.; Ines, A.V.M.; Wu, J.; Rasu, E. Forecast of Short-Term Daily Reference Evapotranspiration under Limited Meteorological Variables Using a Hybrid Bi-Directional Long Short-Term Memory Model (Bi-LSTM). Agric. Water Manag. 2020, 242, 106386. [Google Scholar] [CrossRef]
  121. Zhang, L.; Zhao, X.; Zhu, G.; He, J.; Chen, J.; Chen, Z.; Traore, S.; Liu, J.; Singh, V.P. Short-Term Daily Reference Evapotranspiration Forecasting Using Temperature-Based Deep Learning Models in Different Climate Zones in China. Agric. Water Manag. 2023, 289, 108498. [Google Scholar] [CrossRef]
  122. Dang, C.; Zhang, H.; Yao, C.; Mu, D.; Lyu, F.; Zhang, Y.; Zhang, S. IWRAM: A Hybrid Model for Irrigation Water Demand Forecasting to Quantify the Impacts of Climate Change. Agric. Water Manag. 2024, 291, 108643. [Google Scholar] [CrossRef]
  123. Jia, W.; Zhang, Y.; Wei, Z.; Zheng, Z.; Xie, P. Daily Reference Evapotranspiration Prediction for Irrigation Scheduling Decisions Based on the Hybrid PSO-LSTM Model. PLoS ONE 2023, 18, e0281478. [Google Scholar] [CrossRef]
  124. Troncoso-García, A.R.; Brito, I.S.; Troncoso, A.; Martínez-Álvarez, F. Explainable Hybrid Deep Learning and Coronavirus Optimization Algorithm for Improving Evapotranspiration Forecasting. Comput. Electron. Agric. 2023, 215, 108387. [Google Scholar] [CrossRef]
  125. Ahmed, A.A.M.; Deo, R.C.; Feng, Q.; Ghahramani, A.; Raj, N.; Yin, Z.; Yang, L. Hybrid Deep Learning Method for a Week-Ahead Evapotranspiration Forecasting. Stoch. Environ. Res. Risk Assess. 2022, 36, 831–849. [Google Scholar] [CrossRef]
  126. Sharma, G.; Singh, A.; Jain, S. A Hybrid Deep Neural Network Approach to Estimate Reference Evapotranspiration Using Limited Climate Data. Neural Comput. Appl. 2022, 34, 4013–4032. [Google Scholar] [CrossRef]
  127. Farooque, A.A.; Afzaal, H.; Abbas, F.; Bos, M.; Maqsood, J.; Wang, X.; Hussain, N. Forecasting Daily Evapotranspiration Using Artificial Neural Networks for Sustainable Irrigation Scheduling. Irrig. Sci. 2022, 40, 55–69. [Google Scholar] [CrossRef]
  128. Yan, H.; Deng, B.; Li, X.; Qiu, X. TENER: Adapting Transformer Encoder for Named Entity Recognition. arXiv 2019, arXiv:1911.04474. [Google Scholar] [CrossRef]
  129. Liu, Z.; Wang, Y.; Vaidya, S.; Ruehle, F.; Halverson, J.; Soljacic, M.; Hou, T.Y.; Tegmark, M. KAN: Kolmogorov–Arnold Networks. arXiv 2024, arXiv:2404.19756. [Google Scholar]
  130. Mahadeva, R.; Kumar, M.; Gupta, V.; Manik, G.; Patole, S.P. Modified Whale Optimization Algorithm Based ANN: A Novel Predictive Model for RO Desalination Plant. Sci. Rep. 2023, 13, 2901. [Google Scholar] [CrossRef]
  131. Qian, C.; Yu, Y.; Zhou, Z.-H. Pareto Ensemble Pruning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; AAAI Press: Washington, DC, USA, 2015; pp. 2935–2941. [Google Scholar]
  132. Santos, P.A.B.d.; Schwerz, F.; Carvalho, L.G.d.; Baptista, V.B.d.S.; Marin, D.B.; Ferraz, G.A.e.S.; Rossi, G.; Conti, L.; Bambi, G. Machine Learning and Conventional Methods for Reference Evapotranspiration Estimation Using Limited-Climatic-Data Scenarios. Agronomy 2023, 13, 2366. [Google Scholar] [CrossRef]
  133. Salahudin, H.; Shoaib, M.; Albano, R.; Inam Baig, M.A.; Hammad, M.; Raza, A.; Akhtar, A.; Ali, M.U. Using Ensembles of Machine Learning Techniques to Predict Reference Evapotranspiration (ET0) Using Limited Meteorological Data. Hydrology 2023, 10, 169. [Google Scholar] [CrossRef]
  134. Ayaz, A.; Rajesh, M.; Singh, S.K.; Rehana, S.; Ayaz, A.; Rajesh, M.; Singh, S.K.; Rehana, S. Estimation of Reference Evapotranspiration Using Machine Learning Models with Limited Data. AIMS Geosci. 2021, 7, 268–290. [Google Scholar] [CrossRef]
  135. Abdel-Fattah, M.K.; Kotb Abd-Elmabod, S.; Zhang, Z.; Merwad, A.-R.M.A. Exploring the Applicability of Regression Models and Artificial Neural Networks for Calculating Reference Evapotranspiration in Arid Regions. Sustainability 2023, 15, 15494. [Google Scholar] [CrossRef]
  136. Raza, A.; Fahmeed, R.; Syed, N.R.; Katipoğlu, O.M.; Zubair, M.; Alshehri, F.; Elbeltagi, A. Performance Evaluation of Five Machine Learning Algorithms for Estimating Reference Evapotranspiration in an Arid Climate. Water 2023, 15, 3822. [Google Scholar] [CrossRef]
  137. Qin, A.; Fan, Z.; Zhang, L. Hybrid Genetic Algorithm−Based BP Neural Network Models Optimize Estimation Performance of Reference Crop Evapotranspiration in China. Appl. Sci. 2022, 12, 10689. [Google Scholar] [CrossRef]
  138. Huo, Z.; Feng, S.; Kang, S.; Dai, X. Artificial Neural Network Models for Reference Evapotranspiration in an Arid Area of Northwest China. J. Arid Environ. 2012, 82, 81–90. [Google Scholar] [CrossRef]
  139. Chia, M.Y.; Huang, Y.F.; Koo, C.H. Reference Evapotranspiration Estimation Using Adaptive Neuro-Fuzzy Inference System with Limited Meteorological Data. IOP Conf. Ser. Earth Environ. Sci. 2020, 612, 012017. [Google Scholar] [CrossRef]
  140. Zhang, S.; Liu, S.; Mo, X.; Shu, C.; Sun, Y.; Zhang, C. Assessing the Impact of Climate Change on Potential Evapotranspiration in Aksu River Basin. J. Geogr. Sci. 2011, 21, 609–620. [Google Scholar] [CrossRef]
  141. Xing, X.; Liu, Y.; Zhao, W.; Kang, D.; Yu, M.; Ma, X. Determination of Dominant Weather Parameters on Reference Evapotranspiration by Path Analysis Theory. Comput. Electron. Agric. 2016, 120, 10–16. [Google Scholar] [CrossRef]
  142. Feng, K.; Tian, J.; Hong, Y. Method for Estimating Potential Evapotranspiration by Self-Optimizing Nearest Neighboralgorithm. Trans. Chin. Soc. Agric. Eng. 2019, 35, 76–83. [Google Scholar] [CrossRef]
  143. Nagappan, M.; Gopalakrishnan, V.; Alagappan, M. Prediction of Reference Evapotranspiration for Irrigation Scheduling Using Machine Learning. Hydrol. Sci. J. 2020, 65, 2669–2677. [Google Scholar] [CrossRef]
  144. Zhao, L.; Zhao, X.; Pan, X.; Shi, Y.; Qiu, Z.; Li, X.; Xing, X.; Bai, J. Prediction of Daily Reference Crop Evapotranspiration in Different Chinese Climate Zones: Combined Application of Key Meteorological Factors and Elman Algorithm. J. Hydrol. 2022, 610, 127822. [Google Scholar] [CrossRef]
  145. Zhao, L.; Zhao, X.; Li, Y.; Shi, Y.; Zhou, H.; Li, X.; Wang, X.; Xing, X. Applicability of Hybrid Bionic Optimization Models with Kernel-Based Extreme Learning Machine Algorithm for Predicting Daily Reference Evapotranspiration: A Case Study in Arid and Semiarid Regions, China. Environ. Sci. Pollut. Res. 2023, 30, 22396–22412. [Google Scholar] [CrossRef] [PubMed]
  146. Adib, A.; Kalantarzadeh, S.S.O.; Shoushtari, M.M.; Lotfirad, M.; Liaghat, A.; Oulapour, M. Sensitive Analysis of Meteorological Data and Selecting Appropriate Machine Learning Model for Estimation of Reference Evapotranspiration. Appl. Water Sci. 2023, 13, 83. [Google Scholar] [CrossRef]
  147. Raza, A.; Saber, K.; Hu, Y.; Ray, R.; Yunus; Kaya, Y.; Dehghanisanij, H.; Kisi, O.; Elbeltagi, A. Modelling Reference Evapotranspiration Using Principal Component Analysis and Machine Learning Methods under Different Climatic Environments. Irrig. Drain. 2023, 72, 1–26. [Google Scholar] [CrossRef]
  148. Partal, T. Modelling Evapotranspiration Using Discrete Wavelet Transform and Neural Networks. Hydrol. Process. 2009, 23, 3545–3555. [Google Scholar] [CrossRef]
  149. Adarsh, S.; Sanah, S.; Murshida, K.K.; Nooramol, P. Scale Dependent Prediction of Reference Evapotranspiration Based on Multi-Variate Empirical Mode Decomposition. Ain Shams Eng. J. 2018, 9, 1839–1848. [Google Scholar] [CrossRef]
  150. Jayasinghe, W.J.M.L.P.; Deo, R.C.; Ghahramani, A.; Ghimire, S.; Raj, N. Deep Multi-Stage Reference Evapotranspiration Forecasting Model: Multivariate Empirical Mode Decomposition Integrated with the Boruta-Random Forest Algorithm. IEEE Access 2021, 9, 166695–166708. [Google Scholar] [CrossRef]
  151. Ali, M.; Jamei, M.; Prasad, R.; Karbasi, M.; Xiang, Y.; Cai, B.; Abdulla, S.; Ahsan Farooque, A.; Labban, A.H. New Achievements on Daily Reference Evapotranspiration Forecasting: Potential Assessment of Multivariate Signal Decomposition Schemes. Ecol. Indic. 2023, 155, 111030. [Google Scholar] [CrossRef]
  152. Sultan Abdullah, S.; Malek, M.A.; Sultan Abdullah, N.; Mustapha, A. Feedforward Backpropagation, Genetic Algorithm Approaches for Predicting Reference Evapotranspiration. Sains Malays. 2015, 44, 1053–1059. [Google Scholar] [CrossRef]
  153. Traore, S.; Luo, Y.; Fipps, G. Deployment of Artificial Neural Network for Short-Term Forecasting of Evapotranspiration Using Public Weather Forecast Restricted Messages. Agric. Water Manag. 2016, 163, 363–379. [Google Scholar] [CrossRef]
  154. Gocić, M.; Arab Amiri, M. Reference Evapotranspiration Prediction Using Neural Networks and Optimum Time Lags. Water Resour. Manag. 2021, 35, 1913–1926. [Google Scholar] [CrossRef]
  155. Wu, L.; Fan, J. Comparison of Neuron-Based, Kernel-Based, Tree-Based and Curve-Based Machine Learning Models for Predicting Daily Reference Evapotranspiration. PLoS ONE 2019, 14, e0217520. [Google Scholar] [CrossRef] [PubMed]
  156. Üneş, F.; Kaya, Y.Z.; Mamak, M. Daily Reference Evapotranspiration Prediction Based on Climatic Conditions Applying Different Data Mining Techniques and Empirical Equations. Theor. Appl. Climatol. 2020, 141, 763–773. [Google Scholar] [CrossRef]
  157. Roy, D.K.; Lal, A.; Sarker, K.K.; Saha, K.K.; Datta, B. Optimization Algorithms as Training Approaches for Prediction of Reference Evapotranspiration Using Adaptive Neuro Fuzzy Inference System. Agric. Water Manag. 2021, 255, 107003. [Google Scholar] [CrossRef]
  158. Fan, J.; Wu, L.; Zheng, J.; Zhang, F. Medium-Range Forecasting of Daily Reference Evapotranspiration across China Using Numerical Weather Prediction Outputs Downscaled by Extreme Gradient Boosting. J. Hydrol. 2021, 601, 126664. [Google Scholar] [CrossRef]
  159. Xing, L.; Cui, N.; Guo, L.; Du, T.; Gong, D.; Zhan, C.; Zhao, L.; Wu, Z. Estimating Daily Reference Evapotranspiration Using a Novel Hybrid Deep Learning Model. J. Hydrol. 2022, 614, 128567. [Google Scholar] [CrossRef]
Figure 1. Literature trends. (a) Number of papers published per year, January 2001–2024. (b) Number of publications from journals with a cumulative publication count exceeding 5.
Figure 2. Number of publications from each country.
Figure 3. Machine learning methods classification.
Figure 4. Number of publications using different machine learning methods each year.
Figure 5. Machine learning forecasting process. Note: PCA: principal component analysis; KNN: K-Nearest Neighbors; MEMD: Multivariate Empirical Mode Decomposition; MVMD: Multivariate Variational Mode Decomposition; BRT: Boosted Regression Trees.
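Figure 5 lists PCA among the input-preprocessing steps of the forecasting workflow. A minimal sketch of that step, using synthetic standardized inputs (the data and column roles are illustrative, not taken from any cited study): correlated meteorological variables are compressed into a few leading components before a regression model is fitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic standardized inputs with correlated columns
# (e.g., Tmax and Tmin track a common temperature signal); illustrative only.
n = 200
t = rng.normal(size=n)
X = np.column_stack([
    t + rng.normal(0, 0.1, n),   # "Tmax"-like column
    t + rng.normal(0, 0.1, n),   # "Tmin"-like column
    rng.normal(size=n),          # "Rs"-like column, independent of temperature
])

Xc = X - X.mean(axis=0)                        # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                # variance fraction per component
scores = Xc @ Vt[:2].T                         # keep the 2 leading components
```

The `scores` matrix would then replace the raw inputs as features for any of the regressors discussed in this review; here the first component absorbs most of the shared temperature variance.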
Table 1. The empirical equations for estimating ETo.

Methods | Source | Equation
Temperature-based | Hargreaves and Samani [4] | ET0 = 0.0023 Ra (Ta + 17.8) (Tmax − Tmin)^0.5
Temperature-based | FAO 24 Blaney–Criddle [5] | ET0 = P (0.46 Ta + 8)
Radiation-based | Makkink [6] | ET0 = 0.61 Δ/(Δ + γ) Rs − 0.12
Radiation-based | Ritchie [7] | ET0 = Δ/(Δ + γ) Rn
Radiation-based | Priestley–Taylor [28] | ET0 = α0 Δ/(Δ + γ) (Rn − G)
Radiation-based | FAO 24 Radiation [5] | ET0 = c1 (W Rs)
Radiation-based | Hansen [29] | ET0 = 0.7 Δ/(Δ + γ) Rs

Note: Rs is incoming solar radiation, MJ·m−2·day−1; Rn is net radiation, MJ·m−2·day−1; Ra is extraterrestrial radiation, MJ·m−2·day−1; α0 is a coefficient; Δ is the slope of the saturation vapor pressure curve, kPa·°C−1; G is soil heat flux, MJ·m−2·day−1; γ is the psychrometric constant, kPa·°C−1; Ta = 0.5 (Tmax + Tmin); Tmax is maximum air temperature, °C; Tmin is minimum air temperature, °C; P is the mean daily percentage of annual daytime hours, %; u2 is wind speed at 2 m height, m·s−1; W is a weighting factor that depends on temperature and altitude; c1 = −3.019 × 10−3.
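The Hargreaves–Samani equation above needs only temperature and extraterrestrial radiation, which is why it recurs in the data-scarce scenarios reviewed here. A minimal sketch in Python (the input values are illustrative; Ra is converted from MJ·m−2·day−1 to equivalent evaporation with the standard factor 0.408 so that ET0 comes out in mm·d−1):

```python
import math

def hargreaves_samani(t_max, t_min, ra_mj):
    """Hargreaves-Samani (Table 1): ET0 = 0.0023 Ra (Ta + 17.8) (Tmax - Tmin)^0.5.

    t_max, t_min: daily maximum/minimum air temperature (deg C)
    ra_mj: extraterrestrial radiation (MJ m-2 day-1)
    Returns ET0 in mm/day.
    """
    ra = 0.408 * ra_mj               # MJ m-2 day-1 -> equivalent evaporation, mm/day
    ta = 0.5 * (t_max + t_min)       # mean air temperature, as in the table note
    return 0.0023 * ra * (ta + 17.8) * math.sqrt(t_max - t_min)

# Illustrative mid-summer values: Tmax = 30 C, Tmin = 18 C, Ra = 40 MJ m-2 day-1
et0 = hargreaves_samani(30.0, 18.0, 40.0)   # roughly 5.4 mm/day
```

Because Ra depends only on latitude and day of year, the equation runs from a station's temperature record alone, which makes it a natural baseline for the temperature-only ML models compared later in Table 3.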
Table 2. Advantages, disadvantages, and future research suggestions for different models.

Model | Advantages | Disadvantages | Recommendations for Future Research
ANN | Adaptability and flexibility, ease of implementation; ELM has fast training speed [65]; MLP has high accuracy [66]. | Model performance depends on parameters and architecture [75,76]. | Use stochastic techniques or optimization to determine the best model, reducing the dependency on experience [130].
SVM | Very good generalization and prediction accuracy [3,15]. | High computational cost [84]. | Use optimization algorithms to improve solution efficiency [85].
ANFIS | Usually more accurate than ANN [86,87]. | Less accurate than SVM [88,89]. | Integrate multiple types of optimal membership functions [15].
EL | Faster and more stable calculations [107]. | Poor generalization [83]. | Use the stacking combination method [114]; use ensemble pruning [131].
DL | Best simulation accuracy [118,119]; LSTM is widely used for time series forecasting [115,116]. | Higher data volume requirements [127]; error grows with the forecast horizon [17]. | Experiment with new architectures such as Transformers [128]; combine with physical mechanisms.
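Table 2 recommends the stacking combination method for EL models [114]. The idea can be sketched on synthetic data (all values, features, and model choices here are illustrative, not taken from the cited studies): two base learners are trained on different input subsets, and a meta-learner is regressed on their predictions. A full implementation would fit the meta-learner on out-of-fold base predictions to avoid leakage; that refinement is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily records: Tmax, Tmin (deg C) and Rs (MJ m-2 d-1); illustrative only.
n = 400
tmax = rng.uniform(20, 38, n)
tmin = tmax - rng.uniform(5, 15, n)
rs = rng.uniform(10, 28, n)
# A made-up "true" ETo combining a temperature term, a radiation term, and noise.
eto = 0.15 * (0.5 * (tmax + tmin)) + 0.12 * rs + rng.normal(0, 0.2, n)

train, test = slice(0, 300), slice(300, None)

def fit_linear(X, y):
    # Ordinary least squares with an intercept column; returns a predict function.
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xnew: np.column_stack([Xnew, np.ones(len(Xnew))]) @ coef

# Two base learners trained on different feature subsets.
base_t = fit_linear(np.column_stack([tmax[train], tmin[train]]), eto[train])  # temperature-only
base_r = fit_linear(rs[train, None], eto[train])                              # radiation-only

# Stacking step: the meta-learner regresses ETo on the base predictions.
meta_X_train = np.column_stack([base_t(np.column_stack([tmax[train], tmin[train]])),
                                base_r(rs[train, None])])
meta = fit_linear(meta_X_train, eto[train])

meta_X_test = np.column_stack([base_t(np.column_stack([tmax[test], tmin[test]])),
                               base_r(rs[test, None])])
pred = meta(meta_X_test)
rmse = float(np.sqrt(np.mean((pred - eto[test]) ** 2)))
```

Each base learner alone misses part of the signal; the meta-learner recombines them, and the test RMSE drops close to the injected noise level. In practice the base learners would be the stronger models surveyed above (RF, XGBoost, SVM) rather than linear fits.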
Table 3. ETo evaluation metrics.

| Category | References | Climate Zone | Variables | Best Model | Timescale | R2 | MAE (mm d−1) | RMSE (mm d−1) |
|---|---|---|---|---|---|---|---|---|
| ANN | Abdullah et al., 2015 [65] | Arid and Semi-arid | Tmax, Tmin, U2, Rh, N | ELM | Daily | 0.983 | 0.066 | 0.092 |
| ANN | Sultan Abdullah et al., 2015 [152] | Arid | Tmax, Tmin, U2, Rh, N | FFBP-GA | Daily | 0.982 | 0.069 | 0.088 |
| ANN | Traore et al., 2016 [153] | Humid | Tmax, Tmin, Rs | MLP | Daily | - | 0.708 | 0.143 |
| ANN | Abdullahi and Elkiran, 2017 [71] | Arid | Tmax, Tmin, Rh, U2, N, Ra | FFBP-LM | Daily | 0.9996 | - | 0.0051 |
| ANN | Gocić and Arab Amiri, 2021 [154] | Humid | Tmin, Tmax, Rh, U2, N, ea | ANN | Monthly | 0.980 | - | 0.300 |
| ANN | Dimitriadou and Nikolakopoulos, 2022 [67] | Arid and Semi-arid | Tmean, N | RBF | Daily | 0.960 | - | 0.385 |
| ANN | Zheng et al., 2023 [73] | Arid | Tmax, Tmean, Tmin, N, Rh | GA-PSO-BP | Daily | 0.952 | 0.145 | 0.163 |
| ANN | Zhao et al., 2023a [74] | Humid | T, U, N, Rh, AP | CSO-BP | Daily | 0.905 | 0.333 | 0.446 |
| ANN | Skhiri et al., 2024 [59] | Arid and Semi-arid | Tmin, Tmax, Rhmin, Rhmax, U, N | ANN | Daily | 0.992 | 0.233 | 0.326 |
| SVM | Wen et al., 2015 [3] | Arid | Tmax, Tmin, U2, Rs | SVM | Daily | 0.950 | 0.207 | 0.262 |
| SVM | Wu and Fan, 2019 [155] | Arid, Semi-arid and Humid | T | SVM | Daily | 0.829 | 0.508 | 0.718 |
| SVM | Quej et al., 2022 [88] | Humid | Tmin, Tmax, Rh, U2, Rs | SVM | Daily | 0.901 | 0.333 | 0.437 |
| SVM | Ikram et al., 2023 [85] | Humid | Tmin, Tmax, Ra, Rs, U2, Rh | SVM-PSOGSA | Daily | 0.927 | 0.178 | 0.292 |
| SVM | Zhao et al., 2023 [145] | Arid and Semi-arid | Tmin, Tmax, N, Rs, U2, Rh | SSA-KELM | Daily | 0.999 | 0.056 | 0.063 |
| SVM | Shaloo et al., 2024 [24] | Humid | Tmin, Tmax, Rh, U2, N | SVM | Daily | 0.985 | 0.168 | 0.224 |
| ANFIS | Citakoglu et al., 2014 [86] | Semi-arid | Rs, T, Rh, U2 | ANFIS | Monthly | 0.985 | 0.198 | 0.345 |
| ANFIS | Keshtegar et al., 2018 [87] | Semi-arid | Rs, T, Rh, U2 | Subset-ANFIS | Daily | - | 0.0739 | 0.1094 |
| ANFIS | Üneş et al., 2020 [156] | Humid | T, Rs, U, Rh | ANFIS | Daily | 0.906 | 0.481 | - |
| ANFIS | Roy et al., 2021b [157] | Humid | Tmin, Tmax, Rh, U2, N | FA-ANFIS | Daily | 0.993 | 0.680 | 0.149 |
| ANFIS | Aghelpour et al., 2022 [90] | Arid, Semi-arid and Humid | Tmin, Tmax, Tmean, Rhmax, Rhmin | ANFIS-DE | Monthly | 0.977 | - | 0.351 |
| EL | Manikumari et al., 2017 [96] | Humid | Tmin, Tmax, Rhmin, Rhmax, U, N | Boosted NN | Daily | 0.991 | - | 0.107 |
| EL | Fan et al., 2018 [83] | Arid, Semi-arid and Humid | Tmax, Tmin, Rs, U2 | XGBoost | Daily | 0.976 | 0.129 | 0.391 |
| EL | Salam and Islam, 2020 [105] | Humid | Tmax, Tmin, Rh, Rs, U2 | RT | Daily | 1 | 0.0002 | 0.0046 |
| EL | Wu et al., 2020 [104] | Arid | Tmax, Tmin, N, U2, Rh | RF | Daily | 0.990 | 0.255 | 0.339 |
| EL | Fan et al., 2021 [158] | Arid, Semi-arid and Humid | Tmax, Tmin, Rh, U10, Rs | XGBoost | Daily | 0.850 | - | 0.600 |
| EL | Zhao et al., 2024 [107] | Arid and Semi-arid | Tmax, Tmin, Rs, U2 | BO-XGBoost | Daily | 0.990 | 0.117 | - |
| DL | Yin et al., 2020 [120] | Arid | Tmin, Tmax, N | LSTM | Daily | 0.992 | 0.039 | 0.159 |
| DL | Roy et al., 2022 [119] | Humid | Tmin, Tmax, Rh, N | Bi-LSTM | Daily | 0.998 | 0.582 | - |
| DL | Chia et al., 2022 [17] | Humid | Tmin, Tmax, Ra, Rs, U2 | LSTM-GRU | Daily | - | 0.182 | 0.260 |
| DL | Xing et al., 2022 [159] | Semi-arid | Tmin, Tmax, Tmean, Rh | DBN-LSTM | Daily | 0.944 | - | 0.423 |
| DL | Farooque et al., 2022 [127] | Humid | Tmin, Tmax, Rh, U2 | Conv-LSTM | Daily | 0.740 | - | 0.620 |
| DL | Zhang et al., 2023 [121] | Arid, Semi-arid and Humid | T | LSTM | Daily | 0.810 | 0.418 | 0.564 |
| DL | Troncoso-García et al., 2023 [124] | Semi-arid | Tmin, Tmax, Tmean, Rhmin, Rh, Rs | LSTM-CVOA | Daily | 0.824 | 0.599 | 0.885 |

Note: Rs is incoming solar radiation; Ra is extraterrestrial radiation; N is sunshine duration; Rh is relative humidity; Rhmax is maximum relative humidity; Rhmin is minimum relative humidity; AP is atmospheric pressure; Tmax is maximum air temperature; Tmin is minimum air temperature; Tmean = 0.5 (Tmax + Tmin); U2 is wind speed at 2 m height; U10 is wind speed at 10 m height; U is wind speed.
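The R2, MAE, and RMSE values reported above are computed from paired observed (typically FAO-56 PM) and model-predicted ETo series. A minimal sketch of these three metrics, with illustrative toy values:

```python
import math

def eto_metrics(obs, pred):
    """Return (R2, MAE, RMSE) for paired observed/predicted ETo values."""
    n = len(obs)
    mean_obs = sum(obs) / n
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))   # residual sum of squares
    sst = sum((o - mean_obs) ** 2 for o in obs)          # total sum of squares
    r2 = 1.0 - sse / sst
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    rmse = math.sqrt(sse / n)
    return r2, mae, rmse

# Toy daily ETo series (mm d-1)
obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
r2, mae, rmse = eto_metrics(obs, pred)
print(f"R2={r2:.3f}, MAE={mae:.3f} mm/d, RMSE={rmse:.3f} mm/d")
# -> R2=0.980, MAE=0.150 mm/d, RMSE=0.158 mm/d
```

Because RMSE squares the residuals before averaging, it penalizes large errors more than MAE does, which is why the two can diverge noticeably for the same model.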
Table 4. The best-performing model(s) for each climate zone.

| Climate Zone | Best-Performing Model(s) |
|---|---|
| Arid | FFBP-LM [71], LSTM [120] |
| Semi-arid | BO-XGBoost [107], ANFIS [86] |
| Humid | Bi-LSTM [119], RT [105] |
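Several entries above (e.g., the SVM and LSTM rows driven only by T) use temperature as the sole input. For context, the Hargreaves-Samani (1985) equation is the conventional temperature-only baseline in the ETo literature against which such models are compared; a minimal sketch, where the caller must supply extraterrestrial radiation Ra in equivalent evaporation (mm·d−1):

```python
def hargreaves_eto(tmax, tmin, ra):
    """Hargreaves-Samani (1985) ETo estimate (mm/d).

    tmax, tmin: daily max/min air temperature (degrees C)
    ra: extraterrestrial radiation in equivalent evaporation (mm/d)
    """
    tmean = 0.5 * (tmax + tmin)  # same Tmean convention as Table 3
    return 0.0023 * ra * (tmean + 17.8) * (tmax - tmin) ** 0.5

# Example: a warm mid-latitude summer day (illustrative values)
eto = hargreaves_eto(30.0, 18.0, 15.0)
print(f"{eto:.2f} mm/d")  # roughly 5 mm/d
```

The diurnal temperature range (Tmax − Tmin) acts as a proxy for solar radiation here, which is why the equation degrades in humid or coastal climates where that proxy is weak.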

Share and Cite

MDPI and ACS Style

Chang, Y.; Zhang, C.; Huang, J.; Chang, H.; Wang, C.; Huo, Z. Machine Learning for Reference Crop Evapotranspiration Modeling: A State-of-the-Art Review and Future Directions. Agronomy 2025, 15, 2038. https://doi.org/10.3390/agronomy15092038
