Article

Comprehensive Assessment of PeriodiCT Model for Canopy Temperature Forecasting

1 CSIRO Data61, Australian Resources Research Centre, P.O. Box 1130, Bentley, WA 6102, Australia
2 CSIRO Agriculture and Food, GPO Box 1700, Canberra, ACT 2601, Australia
3 CSIRO Agriculture and Food, Locked Bag 59, Narrabri, NSW 2390, Australia
4 CSIRO Agriculture and Food, Queensland Bioscience Precinct, 306 Carmody Road, St. Lucia, QLD 4067, Australia
5 CSIRO Data61, GPO Box 1700, Canberra, ACT 2601, Australia
6 School of Agriculture and Food Sciences, The University of Queensland, St. Lucia, QLD 4072, Australia
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(7), 1665; https://doi.org/10.3390/agronomy15071665
Submission received: 12 June 2025 / Revised: 5 July 2025 / Accepted: 8 July 2025 / Published: 9 July 2025
(This article belongs to the Section Water Use and Irrigation)

Abstract

Canopy temperature is an important indicator of plants’ water status. The so-called PeriodiCT model was developed to forecast canopy temperature using ambient weather variables, providing a powerful tool for planning crop irrigation scheduling. As this model requires observed data to train its parameters before forecasts can be made, it is important to understand the data requirements of model training such that accurate forecasts are attained. In this work, we conduct a comprehensive assessment of the PeriodiCT model in terms of sample size requirements and predictabilities across sensors in a field and across seasons for the full model and sub-models. The results show that (1) 5 days’ observations are sufficient for the full model and sub-models to achieve very high predictability, with a minimum coefficient of efficiency of 0.844 for the full model and 0.840 for the sub-model using only air temperature. The predictability decreases in the following order: the full model, the sub-model without solar radiation S, the sub-model with air temperature Ta and vapor pressure VP, and the sub-model with only Ta. The predictions perform reasonably well even when only one day’s observations are used. (2) The predictability into the future is very stable as the prediction steps increase. (3) The predictabilities of the full and sub-models when using a trained model from one sensor for another sensor are comparatively good, with a minimum coefficient of efficiency of 0.719 for the full model and 0.635 for the sub-model using only air temperature. (4) The predictabilities of the full and sub-models when using trained models from one season for another season are also comparatively good, with a minimum coefficient of efficiency of 0.866 for the full model and 0.764 for the sub-model using only air temperature, although the cross-season performances are not as good as the cross-sensor performances. The importance of the predictors is in the order of air temperature, vapor pressure, wind speed, and solar radiation; vapor pressure and wind speed have similar contributions, while solar radiation contributes only marginally.

1. Introduction

Water stress is one of the main abiotic stresses limiting crop production and quality [1,2,3,4,5,6]. It is important to know the level of water stress in a crop in order to schedule irrigation [7,8]. Canopy temperature (Tc) is an indicator of crop water stress because it is closely linked to stomatal behavior, particularly stomatal conductance: stomatal closure reduces transpiration and heat loss, leading to increased leaf temperature [9,10,11], so the reduction in canopy temperature relative to the ambient environment reflects the ability of transpiration to cool the plant’s leaves. In practice, continuous measurement of canopy temperature is a useful tool to identify crops’ water stress [12,13,14] and to indicate the irrigation needed to optimize crop growth and yield [15,16], and it can be used to study drought and heat stress tolerance [17,18]. Furthermore, canopy temperature allows for more accurate estimates of the consequences of heat stress on the crop and its yield than air temperature [19,20,21,22], since canopy temperature can deviate from air temperature under field conditions because of the interplay among plant traits, plant water availability, air temperature and humidity, solar radiation, wind velocity, and the ensuing canopy microclimate [23,24].
There are different ways to model and simulate canopy temperature. Biophysics-based crop models have components to simulate canopy temperature, such as the rice sterility model [25], the wheat simulation model SIRIUS for predicting leaf appearance in wheat [26,27], the generic STICS model for crop simulation with water and nitrogen balance [28], APSIM for farming system modeling [29], and direct simulation of canopy temperature based on Monin–Obukhov similarity theory [30]. However, due to the complicated biophysical processes involved, these models face challenges in quantifying the complex factors that affect canopy temperature because of large variations in crop types, locations, and management regimes, as well as substantial exogenous, farm/location-specific information that is often not available [30]. In addition, these physics-based models often simplify the relationship between canopy temperature and weather variables (particularly by using only air temperature) or require ancillary measurements to reliably estimate the model parameters (such as heat transfer resistance in the energy balance approach) [31].
Data-driven models are popular alternatives in modeling practice [24]. Several models have previously been used to predict canopy temperature from more easily obtained weather variables, including the Brown and Zeiher day-and-night model [32], the Georgia day–night models [33], the Georgia 24 h models [33], and the generalized model proposed in [34,35]. All of these models have been applied to cotton [33], and all are linear in form, i.e., they use constant coefficients for the weather variables in multiple linear regression. Shao et al. [14] overcame the limitation of the linear setting by proposing the so-called PeriodiCT model, which outperforms all of the abovementioned models for cotton.
Various deep learning models have recently been reported for long-term, high-frequency time-series forecasts such as 15 min canopy temperature [36], but they demand a large amount of training data, and only a few of them have simple mechanisms to include covariates available in the future [37]. Furthermore, more sophisticated machine learning and statistical models need a lot of training data, as well as expertise in setting up the hyperparameters, in order to reach their optimal forecast performance [38]. As the canopy temperature interacts with the environment differently at various development stages [39], the requirement for large training data often becomes infeasible, because the model parameters can differ across development stages. Therefore, it is of interest to know whether the simpler PeriodiCT model avoids this requirement, i.e., whether a small amount of training data is sufficient to train the model parameters. Furthermore, it is also of interest to know whether a trained PeriodiCT model can be used over different fields and across different seasons.
One of the practical uses of canopy temperature is for irrigation scheduling. With affordable canopy temperature sensors, continuous canopy temperature measurements can be used for accurate irrigation scheduling through the Biologically Identified Optimal Temperature Interactive Console (BIOTIC) developed in [40], whose algorithm has been updated to the stress time temperature (STT) threshold [40]. In Australia, the potential use of the BIOTIC for furrow flood irrigation systems in cotton has been supported by the use of an accumulative STT [15,16].
Water is a precious natural resource and is costly to farmers, so better decision-making is required on irrigation scheduling [41]. In order for farmers to use the BIOTIC and STT to guide their irrigation scheduling, particularly in the case of furrow flood irrigation systems, there is a need to predict canopy temperature in the near future (1 to 7 days in advance) to determine when irrigation is potentially needed. For example, Australian cotton growers need to plan when and how to obtain the water for irrigation before it is needed, because some farms need to order water from large reservoirs that can be some distance away [42,43], so that the water can be released from the reservoirs and arrive at the farms on time.
The following considerations need to be addressed when implementing the forecasting algorithms in practice: Firstly, for each sensor, one can only use the historical data to train the model parameters for predicting future canopy temperature. Secondly, as the number of observation days can be quite small in the early season and the model can vary at different crop development stages, one is interested in knowing how many days are needed to reliably estimate the model parameters. Thirdly, there may only be a limited number of sensors in a field, and one may be interested in knowing whether a model trained at one location can be used to predict canopy temperature across fields with the same crop variety but with no canopy temperature sensors available. Fourthly, in cases where no observations are available to train the model (such as at the start of a crop season), one may be interested in knowing whether a model trained in the previous season can be used to predict the canopy temperature in the current season. Finally, it is possible that some weather variables are not available, and it is useful to know how the model performs if some weather variables are missing.
To assist in the implementation of the PeriodiCT model from economic and practical perspectives in terms of forecasting, this work aims to assess the performance of the PeriodiCT model against all of the above issues by answering the following questions: (1) How much data (days of observations, as sample size) is required in model training to achieve reasonable predictability? (2) How well does the model predict into the future? (3) How well does a model trained at one location predict the canopy temperature at another location with a similar crop variety? (4) How well does a model trained in one season predict the canopy temperature in another season in the same field with the same crop variety? The full model, using all predictors (air temperature Ta, vapor pressure VP, wind speed U, and solar radiation S), and the sub-models without S are assessed and compared with the performance of corresponding benchmarks obtained using leave-one-out cross-validation procedures.

2. Materials and Methods

2.1. Cotton Experiment Used to Test PeriodiCT

To understand the impact of sample size (the number of days used for model training), we used canopy temperature data and weather data collected from the same 5 sensors as used in Ref. [14], with sensor identification (ID) = S1303, S1304, S1305, S1306, and S1309. A temperature of 28 °C was used as the threshold to indicate the point at which the biological optimum is exceeded during the day [41]. In this study, canopy temperature at no time exceeded 28 °C for more than 5.25 h in a day, the duration beyond which yield could be impacted [15,16].
To assess the applicability of the PeriodiCT model across a field with the same crop variety, several fields in the cotton region around Narrabri are investigated.
The fields and sensors are summarized in Table 1, including Emerald (commercial farm in the Central Highlands Region of Queensland), Moree (commercial farm, Gwydir Valley), and ACRI (Namoi Valley) in Narrabri, New South Wales, over the 2014/2015 and/or 2015/2016 cropping seasons. Figure 1 presents a map of Australia indicating the study fields. The Central Highlands Region has a subtropical climate with hot and humid summers, with average temperatures ranging from 20 to 28 °C, and mild, dry winters, with average temperatures ranging from 9 to 20 °C. Rainfall is most common during the summer months (December–February). Narrabri, located in the northwest of New South Wales, Australia (latitude from 29.24741288° S to 30.24741288° S and longitude from 149.818732° E to 150.818732° E), is a major center for cotton as well as red meat production. The field sites are characterized as semi-arid, with average annual rainfall of around 657 mm. The area has mild winters and hot summers, with average summer temperatures ranging from 20 to 39 °C and average winter temperatures ranging from 0 to 20 °C. Rainfall is summer-dominant, with two-thirds of yearly rainfall occurring during the cotton season of September to February (Emerald) and October to March (Moree and ACRI). The soils in the study areas are uniform grey cracking clay (USDA Soil Taxonomy: Typic Haplustert) and alkaline, with a high clay fraction and high background fertility, making them suitable for growing various crops. Irrigated cotton farming is one of the major agricultural activities. All of the experiments used a randomized complete block design with three replicates. The experimental blocks were 180 m by 16 or 20 rows at ACRI, 550 m by 16 or 20 rows at Emerald, and 1200 m by 24 rows at Moree. The canopy temperature scheduled plots were irrigated when the estimated STT of 37 h was reached.
The traditional irrigation scheduling treatments at Emerald and Moree used combinations of soil moisture meters (capacitance probes), farmers’ experience, and intuition. All approaches used were intended to minimize stress.
To assess the applicability of the PeriodiCT model across years, the experimental data from Emerald over the 2014–2015 and 2015–2016 seasons were used as follows: the data collected in 2014–2015 were used to train the models, and then the trained models were used to predict the canopy temperature over the 2015–2016 season by using the observed weather data.
In the experiments, the canopy temperatures were measured by commercial thermal temperature sensors installed in the fields, directly facing the cotton leaves. Regular adjustments were made to ensure that the sensors remained above the cotton plants. Given that, like other models, including the Brown and Zeiher model [32] and the Georgia models [33], the PeriodiCT model was developed for irrigated crops such as cotton [44], all of the experiments and treatments took place under fully watered conditions.

2.2. PeriodiCT: Canopy Temperature Prediction Models

Based on the measured canopy temperature data, a data-driven model was constructed to predict the canopy temperature by using weather variables. The PeriodiCT model for this purpose is given in the regression form as follows:
$$T_c(t) = A(\rho(t)) + A_{T_a}(\rho(t))\,T_a(t) + A_{e_a}(\rho(t))\,e_a(t) + A_U(\rho(t))\,U(t) + A_S(\rho(t))\,S(t),$$
where t represents the Julian time (in 24 h form), $T_c(t)$ is the canopy temperature estimated by the model in °C at time t, $T_a(t)$ is the air temperature at time t in °C, $e_a(t)$ is the vapor pressure at time t in kPa, $U(t)$ is the wind speed at time t in m s−1, and $S(t)$ is the solar radiation at time t in W m−2. $\rho(t) = \mathrm{mod}_{24\,\mathrm{h}}(t)$ is a periodic function of t with a 24 h period, taking values between 0 and 24 h. The regression coefficients $A(\rho(t))$, $A_{T_a}(\rho(t))$, $A_{e_a}(\rho(t))$, $A_U(\rho(t))$, and $A_S(\rho(t))$ are functions of $\rho(t)$ to be estimated. The coefficient functions are estimated by using the procedure in Ref. [13] through the kernel
$$K_h(x) = K\!\left(\frac{x - \rho(t)}{h}\right), \quad \text{for } 0 \le x < \rho\ (= 24\text{ h}).$$
The advantage of PeriodiCT is that varying coefficients capture the need for parameter values to change over time within a day, and the periodicity constraint ensures that the parameter change should smoothly evolve over time [45]. For each experiment and treatment, the regression coefficient functions are trained by using measured canopy temperatures and corresponding weather variables.
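The varying-coefficient idea above can be sketched as a kernel-weighted least-squares fit at each time of day, with a circular distance enforcing the 24 h periodicity. This is a minimal sketch of the approach, not the authors' implementation: the Gaussian kernel, the bandwidth value, and all function names are our own illustrative assumptions.

```python
import numpy as np

def circular_dist(x, x0, period=24.0):
    """Shortest distance between times of day on a 24 h circle,
    so that 23:00 and 01:00 are treated as 2 h apart."""
    d = np.abs(x - x0) % period
    return np.minimum(d, period - d)

def fit_periodict(t_hours, X, y, t0, h=1.5, period=24.0):
    """Estimate the varying coefficients at time-of-day t0 by
    kernel-weighted least squares (a sketch of the PeriodiCT idea;
    bandwidth h and the Gaussian kernel are illustrative choices)."""
    rho = t_hours % period                       # periodic time of day
    w = np.exp(-0.5 * (circular_dist(rho, t0, period) / h) ** 2)
    Xd = np.column_stack([np.ones(len(y)), X])   # intercept + predictors
    sw = np.sqrt(w)[:, None]                     # weighted least squares
    coef, *_ = np.linalg.lstsq(Xd * sw, y * sw[:, 0], rcond=None)
    return coef                                  # [A, A_Ta, A_ea, A_U, A_S]

def predict_periodict(t_hours, X, coef_fn, period=24.0):
    """Predict Tc by evaluating the coefficient functions at each
    observation's time of day."""
    return np.array([coef_fn(ti % period) @ np.concatenate(([1.0], xi))
                     for ti, xi in zip(t_hours, X)])
```

Training amounts to evaluating `fit_periodict` over a grid of times of day; prediction then looks up the coefficients at each observation's clock time, which is how the diurnal periodicity constraint enters the forecast.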
In addition to the full PeriodiCT model using all four predictors, we also considered the sub-models, as in Ref. [44]. For the ease of notation, we used digital numbers to name the sub-models, with 1 representing air temperature, 2 vapor pressure, 3 wind speed, and 4 solar radiation. For example, sub-model 123 represents the sub-model using air temperature (as 1), vapor pressure (as 2), and wind speed (as 3).
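Under this convention, a digit string fully determines which predictors a sub-model uses; a trivial helper (variable abbreviations and function name are ours, not from the paper) makes the mapping explicit:

```python
# Digit codes used in the paper: 1 = air temperature (Ta),
# 2 = vapor pressure (ea), 3 = wind speed (U), 4 = solar radiation (S).
PREDICTORS = {"1": "Ta", "2": "ea", "3": "U", "4": "S"}

def submodel_predictors(code):
    """Map a sub-model code such as '123' to its predictor list."""
    return [PREDICTORS[d] for d in code]
```

For example, `submodel_predictors("123")` returns `["Ta", "ea", "U"]`, the three-predictor sub-model without solar radiation.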

2.3. Assessment Criteria

To assess the performance of the PeriodiCT model on the canopy temperature prediction with different lengths of historical data, one needs to train the model by using a dataset with known weather and canopy temperature data, and then apply this model to predict unknown canopy temperatures using only known corresponding weather data in the following days. The cross-validation approach was used in the assessment [46]. To do this, a consecutive k days of the experiment dataset (called training data) were used to train the model parameters by using both the weather and canopy temperature observations, and then the trained model was applied to the next l days (called validation or testing data, with lead time l) to obtain the predicted canopy temperature, using only the weather observations. We evaluated the performance for all of the combinations of k from 1 to 15 previous days and l from 1 to 14 future days. The predicted canopy temperatures were compared with the corresponding observations (which are indeed known in the validation data). For each pair of (k, l), this training and validation process was performed for all possible combinations of a given experimental dataset. We named this process forward (k, l) cross-validation.
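The forward (k, l) cross-validation described above can be sketched as a simple enumeration of train/test windows over consecutive days; the 0-based day indexing and the function name are our own assumptions:

```python
def forward_kl_splits(n_days, k, l):
    """Enumerate all forward (k, l) splits over n_days consecutive days:
    train on the k days [s, s + k) and test on the single day at lead
    time l, i.e. day s + k + l - 1 (0-based)."""
    splits = []
    for s in range(n_days - k - l + 1):
        splits.append((list(range(s, s + k)), s + k + l - 1))
    return splits
```

With 7 days of data, k = 5, and l = 1, this yields two splits: train on days 0 to 4 and test day 5, then train on days 1 to 5 and test day 6. Evaluating all (k, l) pairs with k from 1 to 15 and l from 1 to 14 then reproduces the assessment grid.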
The temperature difference ETC, absolute difference DTC (both in percentage), root-mean-square error RMSETC, and the Nash–Sutcliffe coefficient of efficiency NSTC were used to measure the model’s performance in canopy temperature prediction [47]. The percentage of relative temperature difference, the percentage of absolute difference, and the root-mean-square error are defined by
$$E_{T_c}(k, l) = \sum_{i=1}^{n} \left( T_{Ci} - \hat{T}_{Ci} \right) \Big/ \sum_{i=1}^{n} T_{Ci} \times 100\%,$$

$$D_{T_c}(k, l) = \sum_{i=1}^{n} \left| T_{Ci} - \hat{T}_{Ci} \right| \Big/ \sum_{i=1}^{n} T_{Ci} \times 100\%,$$

and

$$RMSE_{T_c}(k, l) = \sqrt{\sum_{i=1}^{n} \left( T_{Ci} - \hat{T}_{Ci} \right)^2 \big/ n},$$

respectively. The coefficient of efficiency describes how well the forecasts compare to the measurements and is defined by

$$NS_{T_c}(k, l) = 1 - \sum_{i=1}^{n} \left( T_{Ci} - \hat{T}_{Ci} \right)^2 \Big/ \sum_{i=1}^{n} \left( T_{Ci} - \bar{T}_C \right)^2,$$
where $T_{Ci}$ is the measured canopy temperature at time i, $\bar{T}_C$ is the mean measured canopy temperature in the summation, $\hat{T}_{Ci}$ is the canopy temperature at time i forecasted by the forward (k, l) cross-validation testing process, and n is the number of data points used in the summation. That is, $\hat{T}_{Ci}$ is forecasted by the model trained using the data from the (i − k − l + 1)th to the (i − l)th days and the weather data from the ith day (so the ith day has lead time l in this setting). The smaller the ETC, DTC, or RMSETC, the better the model’s predictability; the closer the value of NSTC is to 1, the more successfully the model forecasts.
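The four criteria can be computed directly from paired observed and forecasted series; the following is a straightforward transcription of the formulas above (the function and dictionary key names are our own):

```python
import numpy as np

def periodict_scores(obs, pred):
    """Relative difference E_Tc (%), absolute difference D_Tc (%),
    RMSE_Tc, and the Nash-Sutcliffe coefficient of efficiency NS_Tc."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    err = obs - pred
    return {
        "E_Tc": err.sum() / obs.sum() * 100.0,
        "D_Tc": np.abs(err).sum() / obs.sum() * 100.0,
        "RMSE_Tc": float(np.sqrt((err ** 2).mean())),
        "NS_Tc": 1.0 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum(),
    }
```

A perfect forecast gives E_Tc = D_Tc = RMSE_Tc = 0 and NS_Tc = 1; NS_Tc falls toward (and below) 0 as the forecast degrades to no better than the observed mean.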
To evaluate the model’s ability to detect crop stress status, where stress time (ST) accumulates once the canopy temperature reaches above a pre-defined threshold (which is 28 °C in our case studies; see [40]), the following four categories were defined for testing reliability: a forecast was said to be (1) a correct negative when both the forecast and measurement were below the threshold, (2) a correct hit if both the forecast and measurement were above the threshold, (3) a miss if the forecast was below the threshold while the measurement was above the threshold, and (4) a false alarm if the forecast was above the threshold while the measurement was below the threshold. By counting the number of measurement/forecast pairs for a model, a contingency table can be formed, as shown in Table 2.
The accuracy was then defined as follows:
Accuracy = (number of hits + number of correct negatives)/Total,
where “Total” is the total number of measurements (or pairs in the contingency table; see Table 2 as an example) [48].
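The contingency counts and the resulting accuracy can be tallied in a few lines; the strict-inequality convention at the threshold boundary and the function name are our own assumptions:

```python
import numpy as np

def stress_contingency(obs, pred, threshold=28.0):
    """Count hits, misses, false alarms, and correct negatives against
    a stress threshold (28 deg C in the paper's case studies), and
    return the resulting accuracy."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    hit = int(np.sum((pred > threshold) & (obs > threshold)))
    correct_neg = int(np.sum((pred <= threshold) & (obs <= threshold)))
    miss = int(np.sum((pred <= threshold) & (obs > threshold)))
    false_alarm = int(np.sum((pred > threshold) & (obs <= threshold)))
    return {"hit": hit, "correct_negative": correct_neg, "miss": miss,
            "false_alarm": false_alarm,
            "accuracy": (hit + correct_neg) / obs.size}
```

The four counts always sum to the total number of measurement/forecast pairs, so they form exactly the contingency table of Table 2.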
To assess the performance of the PeriodiCT model on canopy temperature prediction across sensors in a field, one needs to train the model by using the dataset from one sensor in a growing season, with known weather and canopy temperature data, and then apply the trained model to predict the unknown canopy temperature at another sensor, using only the known, corresponding weather data at that sensor over the whole growing season. We named this assessment one-to-one predictability assessment across sensors. There are n × (n − 1) pairs of assessments for a treatment with n sensors (n > 1). As there are often more than two sensors, the pairs of assessment are often quite numerous. Alternatively, one can assess the predictability at one sensor by using the model trained on the data from the rest of the sensors within the corresponding treatment of an experiment. We named this latter assessment multiple-to-one predictability assessment across sensors. The same assessment criteria of ETC, DTC, RMSETC, and NSTC were used to evaluate the model performance, as before, but with $\hat{T}_{Ci}$ being the forecasted temperature at time i over the whole growing season of interest.
To assess the performance of the PeriodiCT model on canopy temperature prediction across seasons, one needs to train the model by using a dataset from one or more sensors in one growing season, with known weather and canopy temperature data, and then apply this model to predict the unknown canopy temperature at a sensor in another season, using only the known, corresponding weather data. We named this assessment one-to-one predictability assessment across seasons. There are n × m pairs of assessments for a treatment with n sensors in one season and m sensors in another season. Alternatively, one can assess the predictability at one sensor by using the model trained on the data from all of the sensors in the other season. We named this latter assessment multiple-to-one predictability assessment across seasons. Again, the same assessment criteria of ETC, DTC, RMSETC, and NSTC were used to evaluate the model performance, as before, but with $\hat{T}_{Ci}$ being the forecasted canopy temperature for the selected sensor at time i over the whole growing season.
Furthermore, the predictabilities of trained models in different scenarios were assessed in comparison with the so-called leave-one-out cross-validation test. As this leave-one-out cross-validation gives the best criterion values, we called its criterion values the benchmarks.
The assessments were undertaken using a hindcast approach, where actual measured values of canopy temperature and weather variables were used. The assessments were undertaken not only for the full model, where all weather variables were assumed to be available, but also for the sub-models, covering cases where only some weather variables are available. The assessments for sub-models are also important in real forecasts, because forecasted weather is always subject to uncertainty, and the use of predictors with low skill may reduce the accuracy of canopy temperature forecasts. However, we did not consider the forecast uncertainty in our assessment, because the results would then be largely affected by the skill of the weather forecast models; this is why we used the hindcast approach. The assessments were based on the data from fully watered, furrow-flood-irrigated cotton in Australia.

3. Results

The predictabilities were assessed for all of the different cases given in each sub-section below. To sharpen our findings, we only discuss the full model, sub-model 1 (with air temperature as the only predictor), sub-model 12 (with air temperature and vapor pressure), and sub-model 123 (without solar radiation), following the order of contribution in [14]. In fact, air temperature and vapor pressure are also the most skillful variables in weather forecasting, while solar radiation is difficult to forecast. Vapor pressure and wind speed have similar contributions.

3.1. Predictability with Different Sample Sizes

3.1.1. Full Model

Utilizing the data of five sensors used in [44], we evaluated the predictions at lead time l performed by the models trained on the data in the previous k days. The percentages of relative temperature difference ETc (ranging from −0.909% to 0.321% for all sensors with different numbers of days in and different day-out predictions) were very small and, therefore, are not reported here. As there were too many trained models, and all model coefficients vary periodically to capture the variation over the time of day, with similar patterns as those shown in Ref. [14], we do not provide the trained coefficient functions here. Instead, we provide the plots of all criterion values against the number of day-out predictions for each number of day-in predictions used for training in Figure 2 (coefficients of efficiency and accuracies). It should be noted that the values obtained by using leave-one-out cross-validation when all of the data are used provide the benchmark values, as these are the best that the proposed model can achieve.
It can be seen from Figure 2a that, for all of the trained models with different numbers of days in, the predictabilities are quite stable over different numbers of days out. While the models using only data from one day (days in = 1) have reasonable ability, with the minimum coefficient of efficiency values between 0.650 (for sensor ID s1304) and 0.678 (for sensor ID s1306), the coefficient of efficiency increases steadily as the number of days in increases. However, the improvement is quite marginal after the number of days in reaches 5 days, for which the minimum coefficient of efficiency values are between 0.844 (for sensor ID s1303) and 0.874 (for sensor ID s1306), while the minimum coefficient of efficiency values increase to between 0.896 (for sensor ID s1303) and 0.912 (for sensor ID s1304) when the number of days in is 15 days.
In terms of accuracy, it can also be seen from Figure 2b that, for all of the trained models with different numbers of days in, the predictabilities are quite stable over different numbers of days out. The models using only data from one day (days in = 1) have reasonable ability, with the minimum accuracy values between 88.27% (for sensor ID s1304) and 89.67% (for sensor ID s1309). The number of days in required to achieve steady accuracy varies from sensor to sensor, and the improvement in accuracy is not monotonic. However, similar to the coefficient of efficiency, the accuracy can be quite stable when the number of days in reaches 5 for all sensors, with the minimum accuracy values between 90.40% (for sensor ID s1303) and 92.87% (for sensor ID s1306). The minimum accuracy values are between 91.34% (for sensor ID s1305) and 92.80% (for sensor ID s1309) when the number of days in is 15.
Overall, it can be concluded that in terms of both the coefficient of efficiency and accuracy, the predictability is good even if only one day’s data is used in the model training, and it will be quite satisfactory once the number of days in reaches 5 days. The reason for the outstanding model performance is that the periodicity constraint follows the natural cycle of Earth systems, and the periodicity used in the model setup gives more precision by capturing this natural diurnal pattern.

3.1.2. Sub-Models

For sub-model 123, using the air temperature, vapor pressure, and wind speed, the percentages of relative temperature difference ETc are very small (ranging from −0.907% to 0.533% for all sensors with different numbers of days in and different day-out predictions). The plots of all criterion values against the number of day-out predictions for each number of days in used are given in Figure 3 (coefficients of efficiency and accuracies). It can be seen from Figure 3a that, for all of the models with different numbers of days in, the predictabilities are quite stable over different numbers of days out, and this three-predictor sub-model has very similar coefficient of efficiency values to the full model, confirming again that the contribution of solar radiation to the model’s predictability is very minimal. The models using only data from one day (days in = 1) have reasonable ability, with the minimum coefficients of efficiency between 0.649 (for sensor ID s1305) and 0.664 (for sensor ID s1309). However, the improvement is quite marginal after the number of days in reaches 5 days, for which the minimum coefficient of efficiency values are between 0.839 (for sensor ID s1303) and 0.870 (for sensor ID s1306), while the minimum coefficient of efficiency values increase to between 0.894 (for sensor ID s1303) and 0.910 (for sensor ID s1304) when the number of days in is 15 days. There is not much lost in comparison with the leave-one-out benchmark values, which are between 0.910 (for sensor ID s1305) and 0.923 (for sensor ID s1304).
In terms of accuracy, it can also be seen from Figure 3b that, for all of the trained models with different numbers of days in, the predictabilities are quite stable over different numbers of days out. The models using only data from one day (days in = 1) have reasonable ability, with the minimum accuracy values between 88.06% (for sensor ID s1304) and 89.96% (for sensor ID s1306). The number of days in required to achieve steady accuracy varies from sensor to sensor, and the improvement in accuracy is not monotonic. However, similar to the coefficient of efficiency, the accuracy can be quite stable when the number of days in reaches 5 for all sensors, with the minimum accuracy values between 90.35% (for sensor ID s1303) and 92.40% (for sensor ID s1309). The minimum accuracy values are between 91.55% (for sensor ID s1305) and 92.54% (for sensor ID s1306) when the number of days in is 15, which are better than the leave-one-out benchmark values of between 89.92% (for sensor ID s1303) and 91.41% (for sensor ID s1306). A possible reason for this might be that the model should change over time at different crop growth stages, making the predictability better when only nearby observations are used in the model. More experimental evidence is needed to confirm this finding. However, it should be noted that the accuracy values strongly depend on the threshold value used in the calculation and cannot be used as the sole evidence of model performance.
For sub-model 12, using the air temperature and vapor pressure, the percentages of relative temperature difference ETc (ranging from −0.482% to 1.127% for all sensors with different numbers of days in and different day-out predictions) are very small and, therefore, are not reported here. The plots of all criterion values against the number of day-out predictions for each number of days in used for training are given in Figure 4 (coefficients of efficiency and accuracies). It can be seen from Figure 4a that, for all of the trained models with different numbers of days in, the predictabilities are quite stable over different numbers of days out, and this two-predictor sub-model has very similar coefficient of efficiency values to the full model and the three-predictor sub-model without solar radiation, confirming again that the contribution of wind speed to the model’s predictability is not high either. The models using only data from one day (days in = 1) have reasonable ability, with the minimum coefficient of efficiency values between 0.662 (for sensor ID s1303) and 0.709 (for sensor ID s1306). However, the improvement is quite marginal after the number of days in reaches 5 days, for which the minimum coefficient of efficiency values are between 0.837 (for sensor ID s1303) and 0.862 (for sensor ID s1306), while the minimum coefficient of efficiency values increase to between 0.883 (for sensor ID s1303) and 0.899 (for sensor ID s1304) when the number of days in is 15 days. There is not much lost in comparison with the leave-one-out benchmark values, which are between 0.893 (for sensor ID s1305) and 0.909 (for sensor ID s1304).
In terms of accuracy, it can also be seen from Figure 4b that for all of the trained models with different numbers of days in, the predictabilities are quite stable over different numbers of days out. The models using only data from one day (days in = 1) have reasonable ability, with the minimum accuracy values between 89.28% (for sensor ID s1303) and 90.57% (for sensor ID s1306). The number of days in required to achieve steady accuracy varies from sensor to sensor, and the improvement in accuracy is not monotonic. However, similar to the coefficient of efficiency, the accuracy can be quite stable when the number of days in reaches 5 for all sensors, with the minimum accuracy values between 90.74% (for sensor ID s1303) and 92.80% (for sensor ID s1306). The minimum accuracy values are between 91.44% (for sensor ID s1303) and 92.11% (for sensor ID s1306) when the number of days in is 15, which are similar to the leave-one-out benchmark values of between 90.67% (for sensor ID s1303) and 92.21% (for sensor ID s1306).
For sub-model 1, using only the air temperature, the percentages of relative temperature difference ETc are very small (ranging from −0.590% to 0.191% for all sensors with different numbers of days in and different day-out predictions). The plots of all criterion values against the number of day-out predictions for each number of days in used for training are given in Figure 5 (coefficients of efficiency and accuracies). It can be seen from Figure 5a that, for all of the trained models with different numbers of days in, the predictabilities are quite stable over different numbers of days out (except for a small decreasing trend for small numbers of days in). This one-predictor sub-model has larger coefficient of efficiency values than the two-predictor sub-model with air temperature and wind speed. Moreover, when the number of days in is small, it has larger coefficient of efficiency values than the full model and the other sub-models discussed above, as well as the three-predictor sub-model without solar radiation when the number of days in is larger. The models using only data from one day (days in = 1) have reasonable ability, with the minimum coefficient of efficiency values between 0.811 (for sensor ID s1303) and 0.838 (for sensor ID s1306). However, the improvement is quite marginal after the number of days in reaches 5 days, for which the minimum coefficient of efficiency values are between 0.840 (for sensor ID s1303) and 0.864 (for sensor ID s1304), while the minimum coefficient of efficiency values increase to between 0.847 (for sensor ID s1303) and 0.868 (for sensor ID s1304) when the number of days in is 15 days. Not much is lost in comparison with the leave-one-out benchmark values, which are between 0.861 (for sensor ID s1303) and 0.883 (for sensor ID s1304).
In terms of accuracy, it can also be seen from Figure 5b that for all of the trained models with different numbers of days in, the predictabilities are quite stable over different numbers of days out. The models using only data from one day (days in = 1) have reasonable ability, with the minimum accuracy values between 90.01% (for sensor ID s1303) and 91.89% (for sensor ID s1306). The number of days in required to achieve steady accuracy varies from sensor to sensor, and the improvement in accuracy is not monotonic. However, similar to the coefficient of efficiency, the accuracy can be quite stable when the number of days in reaches 5 for all sensors, with the minimum accuracy values between 90.14% (for sensor ID s1303) and 92.14% (for sensor ID s1306). The minimum accuracy values are between 90.33% (for sensor ID s1304) and 91.04% (for sensor ID s1306) when the number of days in is 15, similar to the leave-one-out benchmark values, which are between 89.30% (for sensor ID s1303) and 91.23% (for sensor ID s1306).
In summary, for all of the models considered (the full model, the three-predictor sub-model 123 without solar radiation, the two-predictor sub-model 12 with air temperature and vapor pressure, and the one-predictor sub-model using only air temperature), the predictabilities are quite stable as the lead time of prediction increases for any given number of days used in training. The model trained using only data from one day (i.e., days in = 1) has reasonable predictability. The improvement in predictability becomes very marginal once the number of days (i.e., days in) used for training reaches 5, and the predictabilities are quite similar to the benchmark using leave-one-out cross-validation testing over all of the data. The solar radiation has a very marginal contribution to the predictability, while the air temperature is the major contributor to the predictability.
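The three criteria reported throughout this section can be sketched in code. The functions below are a minimal illustration assuming common formulations (relative error of the temperature totals, the Nash–Sutcliffe form of the coefficient of efficiency, and accuracy as agreement above/below the 28 °C stress threshold); the paper's exact equations may differ in detail, and the data are toy values.

```python
# Minimal sketch of the three assessment criteria (assumed formulations,
# not necessarily the paper's exact equations).

def relative_error(obs, pred):
    """Relative difference between predicted and observed totals (%)."""
    return 100.0 * (sum(pred) - sum(obs)) / sum(obs)

def coefficient_of_efficiency(obs, pred):
    """1 - (error sum of squares) / (variance about the observed mean)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def threshold_accuracy(obs, pred, threshold=28.0):
    """Share of time steps (%) where obs and pred agree on exceeding the
    stress threshold used for irrigation scheduling (28 degrees C here)."""
    hits = sum((o > threshold) == (p > threshold) for o, p in zip(obs, pred))
    return 100.0 * hits / len(obs)

# Toy canopy temperature series (degrees C).
obs = [24.1, 26.5, 29.3, 31.0, 28.4, 25.2]
pred = [24.5, 26.0, 29.8, 30.2, 27.6, 25.5]
print(relative_error(obs, pred))
print(coefficient_of_efficiency(obs, pred))
print(threshold_accuracy(obs, pred))
```

Because the accuracy criterion only checks which side of the threshold each value falls on, it is far less sensitive to small temperature errors than the coefficient of efficiency, which is consistent with the caution above about relying on accuracy alone.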

3.2. Predictability Across Sensors

We now assess how the model trained based on one sensor predicts the canopy temperature for another sensor with known environmental weather variables. After comparing the predictabilities between the one-to-one and multiple-to-one methods, we found that the results were quite similar. Therefore, we only report the results for the multiple-to-one method.
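The multiple-to-one scheme above can be sketched as follows. PeriodiCT itself is abstracted here as a simple linear fit of canopy temperature on air temperature; `fit_linear` and `cross_sensor_predict` are hypothetical stand-ins, not the paper's model, and the sensor records are toy values.

```python
# Sketch of "multiple-to-one" cross-sensor evaluation: pool observations
# from every sensor except the target to train a model, then predict the
# target sensor's canopy temperature from its own weather record.
# The model here is a toy linear fit, not PeriodiCT.

def fit_linear(x, y):
    """Ordinary least squares for y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def cross_sensor_predict(sensors, target_id):
    """Train on all sensors except target_id, then predict for target_id."""
    x_train, y_train = [], []
    for sid, (ta, tc) in sensors.items():
        if sid != target_id:
            x_train += ta
            y_train += tc
    a, b = fit_linear(x_train, y_train)
    ta_target, _ = sensors[target_id]
    return [a + b * t for t in ta_target]

# Toy records per sensor: (air temperatures, canopy temperatures).
sensors = {
    "s1303": ([20.0, 25.0, 30.0], [21.0, 25.5, 30.5]),
    "s1304": ([22.0, 27.0, 32.0], [23.0, 27.5, 32.0]),
    "s1306": ([21.0, 26.0, 31.0], [22.0, 26.5, 31.5]),
}
preds = cross_sensor_predict(sensors, "s1306")
print(preds)
```

The one-to-one variant would simply train on a single source sensor instead of pooling; as reported above, the two gave quite similar results in this study.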

3.2.1. Full Model

The criterion values are given in Table 3 for the relative error, coefficient of efficiency, and accuracy, and as a comparison, for each sensor, the benchmark criterion values using the leave-one-out cross-validation approach over all data from its own sensor are given in Table 4. It can be seen that the errors for the benchmark approach are relatively small but can be larger when the predictions are made using the trained models for other sensors, particularly for the scheduling at Moree in the 2014–2016 season (between −6.442% for sensor ID s115 and 7.860% for sensor ID s111). A possible reason is that these two sensors were not well calibrated before being installed in the fields. However, the coefficient of efficiency values are very similar, with the difference between the benchmark and the cross-sensor approaches between −0.068 and 0.137. In comparison with the benchmark values, the accuracy values are quite comparable between the cross-sensor and the benchmark approaches for all of the sensors, with the difference between −2.41% (sensor ID s259) and 2.83% (sensor ID s1302), except for the results for Moree, which show two large values of difference: −14.91% (for sensor ID s111) and 15.70% (for sensor ID s255). Overall, the models trained from other sensors in the same season perform quite well, except for sensor s255 in Moree. After carefully exploring the cause of this underperformance for sensor ID s255, we found that s255 behaves differently from the other two sensors, with a substantially higher canopy temperature.

3.2.2. Sub-Models

For the three-predictor sub-model 123 with air temperature, vapor pressure, and wind speed as predictors, the criterion values are given in Table 5, and the corresponding benchmark values are given in Table 6. It can be seen that the errors for the benchmark approach are relatively small, as usual, but can be larger when the predictions are made using the models trained for other sensors, particularly for the scheduling at Moree in the 2015–2016 season (between −6.417% for sensor ID s115 and 7.818% for sensor ID s111), which had a different irrigation scheduling design. However, the coefficient of efficiency values are very similar, with the difference between −0.026 and 0.006 for all sensors in all experiments, except for the scheduling at Moree in the 2014–2016 season, with a difference of up to 0.201 (sensor ID s111). In comparison with the benchmark values, the accuracy values are similar in ACRI and Emerald but are mainly lower in scheduling at Moree in the 2014–2015 season, with two very large differences between the benchmark and cross-sensor approaches for sensor IDs s111 (15.33%) and s255 (16.40%).
In comparison with the cross-sensor results achieved by the full model (see Table 3), the cross-sensor results achieved by the three-predictor sub-model can have larger errors, with the differences between −2.572 (sensor ID s259) and 3.022 (sensor ID s032). However, the coefficient of efficiency values are very similar, with a difference of up to 0.062 (sensor ID s1303). The accuracy values are also similar, with a difference of up to 3.06% (sensor ID s1302), except for the largest value of 10.10% for sensor ID s115. Therefore, the model performance is generally stable when the solar radiation variable is not used in the model.
For the two-predictor sub-model 12, with air temperature and vapor pressure as predictors, the criterion values are given in Table 7, and the corresponding benchmark values are given in Table 8. It can be seen that the errors for the benchmark approach are relatively small, as usual, but can be larger when the predictions are made by using the models trained for other sensors, particularly for the scheduling at Moree in the 2015–2016 season (between −6.368% for sensor ID s115 and 7.865% for sensor ID s111). However, the coefficient of efficiency values are very similar, with the difference between −0.023 and 0.008 for the sensors in all experiments, except for the scheduling at Moree in the 2014–2016 season, with a difference of up to −0.201 (sensor ID s111). In comparison with the benchmark values, the accuracy values are similar in ACRI and Emerald, with a difference of up to −2.260% (sensor ID s170), but are mainly lower in scheduling at Moree in the 2014–2015 season, with two very large differences between the benchmark and cross-sensor approaches for sensor IDs s111 (16.01%) and s255 (16.88%).
In comparison with the cross-sensor results achieved by the three-predictor sub-model above (see Table 5), the cross-sensor results achieved by this two-predictor sub-model can have slightly larger errors, with differences of up to 0.139 (sensor ID 2001). However, the coefficient of efficiency values are very similar, with a difference of up to 0.029 (sensor ID s202). The accuracy values are also similar, with a difference of up to 1.36% (sensor ID s134). Therefore, the model performance is generally stable for the models with and without wind speed.
Finally, for the one-predictor sub-model 1, with air temperature as the only predictor, the criterion values are given in Table 9, and the corresponding benchmark values are given in Table 10. It can be seen that the errors for the benchmark approach are relatively small, as usual, but can be larger when the predictions are made by using the models trained for other sensors, particularly for the scheduling at Moree in the 2015–2016 season (between −6.331% for sensor ID s115 and 7.929% for sensor ID s111). However, the coefficient of efficiency values are very similar, with the difference between −0.189 and 0.009 for sensors in all experiments. In comparison with the benchmark values, the accuracy values are similar in ACRI and Emerald, with a difference of up to −1.32% (sensor ID s170), but are mainly lower in scheduling at Moree in the 2014–2015 season, with two very large differences between the benchmark and cross-sensor approaches for sensor IDs s111 (16.11%) and s255 (17.56%), which still have reasonable accuracy values of 77.47% and 76.62%, respectively.
It can be seen from Table 7 and Table 9 that the one-predictor sub-model 1 performs similarly to the two-predictor sub-models, but that the sub-model with air temperature as the only predictor usually has slightly larger relative errors (36 and 25 out of 37 sensors, respectively), smaller coefficients of efficiency (36 and 37 out of 37 sensors, respectively), and lower accuracy (29 and 24 out of 37 sensors, respectively) than the two-predictor sub-models. Therefore, the use of vapor pressure achieves better model performance.

3.3. Predictability Across Seasons

It is important to understand how a model trained on one sensor in one season can be used to predict the canopy temperature in another season. The full season data were used in both training and validation. After comparing the predictabilities between the one-to-one and multiple-to-one methods, we found that the results were quite similar. Therefore, we only report the results for the multiple-to-one method for training and the 2015–2016 season for prediction.

3.3.1. Full Model

The full models trained on the 2014–2015 season do not perform well in predicting canopy temperatures in the 2015–2016 season. We found that the problem might be caused by different patterns in solar radiation. To see this, we plotted the solar radiation in Figure 6, from which we can see that the solar radiation in 2015–2016 varied more than in 2014–2015. To check whether the unreliable predictions were also caused by the flexibility of our PeriodiCT model (too much flexibility reduces robustness), we also fitted the Georgia 24-h model, which is the linear version of the PeriodiCT model. The Georgia 24-h model trained on the 2014–2015 season also produced large prediction errors for canopy temperatures in the 2015–2016 season during the same early mornings with larger solar radiation values. We do not report these results here because they provide no additional information.
By noting that the contribution of solar radiation to the canopy temperature prediction is minimal, we do not recommend the use of solar radiation as a predictor when the trained model has abnormal behavior in prediction.

3.3.2. Sub-Models

The criterion values obtained by using sub-model 123, with air temperature, vapor pressure, and wind speed as predictors, are given in Table 11 for relative error, coefficient of efficiency, and accuracy. As the benchmarks, the criterion values obtained by using the leave-one-out cross-validation procedure are already given in Table 6. It can be seen from Table 11 that the relative errors are generally quite small for sub-model 123, using air temperature, vapor pressure, and wind speed, and for sub-model 13, using air temperature and wind speed, they are the most negative (except for the prediction for s167 using the model trained based on experiment cT-1.7MPa). However, the relative errors are generally large for sub-model 12, with air temperature and vapor pressure as predictors (up to 5.064% for sensor s167, using the cT-1.7MPa experiment for training), and for sub-model 1, with air temperature as the only predictor (up to 7.223% for sensor s167, using the cT-1.7MPa experiment for training).
In terms of the coefficient of efficiency, sub-model 123 trained using the control experiment performed quite well, with values between 0.919 and 0.923, while sub-model 123 trained using the cT-1.7MPa experiment performed reasonably, with values between 0.866 and 0.875. When sub-model 12 was used, the model trained using the control experiment had a slightly smaller coefficient of efficiency, with values between 0.903 and 0.920, but the model trained using the cT-1.7MPa experiment had a larger coefficient of efficiency, with values between 0.901 and 0.996. However, when sub-model 13 was used, the models trained using both experiments had smaller coefficients of efficiency, with values between 0.808 and 0.825 when the control experiment was used for training and between 0.717 and 0.743 when the cT-1.7MPa experiment was used for training. When the air temperature was used as the only predictor (i.e., sub-model 1), the model using the control experiment for training had a slightly smaller coefficient of efficiency (with values between 0.785 and 0.819) than sub-model 12 and sub-model 13, but the model using the cT-1.7MPa experiment for training had a smaller coefficient of efficiency (with values between 0.764 and 0.795) than sub-model 12 but a larger coefficient of efficiency than sub-model 13. In summary, the sub-models using the control experiment for training performed as follows: sub-models 123, 12, 13, and 1, in descending order. Meanwhile, the sub-models using the cT-1.7MPa experiment for training performed as follows: sub-models 12, 123, 1, and 13, in descending order, meaning that the use of wind speed as a predictor in the cT-1.7MPa experiment decreased the predictability in terms of the coefficient of efficiency.
In terms of accuracy, the sub-models 123 and 12 trained using the control experiment performed well and similarly, with the values between 88.82% and 90.57% for sub-model 123 and between 88.79% and 91.04% for sub-model 12, while sub-model 123 trained using the cT-1.7MPa experiment performed reasonably, with values between 82.12% and 85.42%, which are worse than those of sub-model 12 trained using the cT-1.7MPa experiment, with accuracy values between 88.79% and 91.21%. However, for training using both the control and cT-1.7MPa experiments, sub-model 1 (with accuracy values between 79.87% and 83.89% for training using the control experiment, and between 78.84% and 83.25% for training using the cT-1.7MPa experiment) performed worse than sub-models 123 and 12 but better than sub-model 13 (with accuracy values between 77.27% and 82.44% for training using the control experiment, and between 75.71% and 81.79% for training using the cT-1.7MPa experiment). In summary, the sub-models performed as follows: sub-models 123, 12, 1, and 13, in descending order, for the training using both the control and cT-1.7MPa experiments, except that sub-models 123 and 12 performed similarly for training using the control experiment. Overall, the conclusions in terms of accuracy are consistent with the conclusions in terms of the coefficient of efficiency.

4. Discussion

This work assessed the performance of the PeriodiCT model in terms of the sample size requirements in model training, the predictability into the future, and the applicability of the models trained on different sensors within the same field and across seasons, as well as the performances of the sub-models.
There are other data-driven models that may be applicable to canopy temperature modeling. For example, statistical autoregressive models such as ARIMA (Autoregressive Integrated Moving Average; see Ref. [49]) and LSTM (Long Short-Term Memory; see Ref. [50]) use only the past canopy temperature as the input; such models have not been applied to canopy temperature but have been used for canopy cover [51]. Other machine learning models have also been proposed for different crops, including the random forest model [52]. Various deep learning models have recently been reported for long-term, high-frequency time-series forecasts such as 15-min canopy temperature [36], but they demand large amounts of training data, and only a few of them have simple mechanisms to include covariates that become available in the future [37]. It will be an interesting research topic to compare these models under objectives similar to those used in the current work.
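As an illustration of the purely autoregressive alternatives mentioned above, the sketch below fits a toy AR(1) model by least squares and iterates it forward. The functions `fit_ar1` and `forecast` are hypothetical stand-ins for the cited ARIMA/LSTM approaches, not implementations of them, and the temperature series is fabricated for illustration only.

```python
# Toy AR(1) baseline: forecast canopy temperature from its own past only,
# with no weather covariates (a minimal stand-in for ARIMA-style models).

def fit_ar1(series):
    """Least-squares fit of the recurrence T[t] = c + phi * T[t-1]."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
          sum((xi - mx) ** 2 for xi in x)
    return my - phi * mx, phi

def forecast(series, steps, c, phi):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Fabricated canopy temperature series (degrees C).
temps = [24.0, 25.5, 27.0, 28.0, 28.5, 28.2, 27.5, 26.0]
c, phi = fit_ar1(temps)
future = forecast(temps, 3, c, phi)
print(future)
```

Unlike PeriodiCT, such a model has no simple mechanism to exploit forecast weather variables, which is the limitation noted above for several deep learning forecasters.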
As data-driven models are flexible enough to include more data as predictors, future research could use other possible hydroclimatic variables. For example, the traditional and dynamic treatments were scheduled by a combination of neutron moisture probe readings and forecasted evapotranspiration. The traditional schedules had irrigation targets based on soil moisture deficits of ~50, 55, and 60 mm, and the water applied per irrigation event reflected these deficit values. As canopy temperature is partially regulated by evapotranspiration through energy consumption at the canopy surface and can be used to infer changes in stomatal regulation and vegetation water stress [27,53], evapotranspiration could be potentially included as a predictor. Furthermore, vapor pressure deficit, instead of vapor pressure, could be a potential factor to consider [54].
Similar to other data-driven models, PeriodiCT needs to be trained. The model training and the order of the predictors’ contributions can vary in different regions under different environments and climates. As we can see from the cross-season results in this work, the solar radiation could not be used in the next season, because the solar radiation pattern changed dramatically, meaning that the climate patterns were quite different in these two seasons. It was found that humidity and cloud cover (which are directly linked to solar radiation) are important factors in semi-arid environments [55].
In this work, 28 °C was used as the threshold for the accuracy calculation, as it was tested for cotton fields in Australia. However, accuracy could be tested by using different thresholds. The threshold method was used in another study [56]. There are other indices to assess crops’ water stress, such as the crop water stress index (CWSI) to guide irrigation [12,35,57,58] and the plant stress index (PSI) for the same purpose [59].
It should be noted that this assessment was conducted by using the true weather observations as predictors. In real applications, weather forecasts will be used (except for the cross-sensor case, which can be viewed as real-time prediction and gap-filling), and these weather variables will be subject to uncertainty. It will be important to assess the model’s performance in real forecasting, which of course depends heavily on the weather forecasting model.

5. Conclusions

A periodic coefficient model named PeriodiCT was proposed to predict canopy temperature by using weather variables, aiming to assist in irrigation scheduling and planning. In order for farmers to confidently use this model, it is important to assess the model’s performance in different situations, including the sample size requirement to train the model, as the model may vary at different stages of crop development; the applicability of the trained model to other fields, which can be important for reducing the cost of the sensor installation; and across different seasons, as a trained model may be unavailable at the early stage of a new season. It is also important to know the model’s performance if some weather variables are unavailable or unreliable, such that only a sub-model can be used. This work aimed to address all of these concerns.
In the assessment, three criteria were used to assess the performance: relative error, coefficient of efficiency, and accuracy, with a threshold value of 28 °C, which is the value used to calculate the water stress time in irrigation scheduling. The performances were assessed in comparison with the benchmark criterion values obtained when leave-one-out cross-validation was used over the whole crop season, because leave-one-out cross-validation gives the best criterion values.
By using selected experimental data from Australia, comparing the results with the benchmark values from the leave-one-out cross-validation procedure, and analyzing the full model and sub-models with different numbers of weather variables, added in order of importance (air temperature, vapor pressure, wind speed, and solar radiation), we can conclude the following:
(1) For the sample size requirement, the model can be trained well by using the data for only 5 days (days in), performing quite well in prediction in comparison with the benchmark values, and the predictability remains stable as the number of days ahead (days out) increases, with only marginal improvement from additional training days. Even the models trained by using only one day's data (days in = 1) can achieve reasonable predictability. Solar radiation makes only a marginal contribution to the predictability, while vapor pressure and wind speed have very similar contributions.
(2) When a model trained using data from one sensor was applied to predict the canopy temperature for another sensor in the field, the predictability performed comparatively well in comparison with the benchmark, except for the experiment in Moree, where the cross-sensor predictabilities were lower than the benchmark. The reason for the low predictability in Moree may be the inferior sensor calibration before its installation in the field. To compare the models in terms of cross-sensor performance, the importance of the predictors is in the order of air temperature, vapor pressure, wind speed, and solar radiation, while the vapor pressure and wind speed have similar contributions and solar radiation has only marginal contributions.
(3) When the models trained by using data from one season are applied to predict the canopy temperature in the next season, the full model performs badly, with some extremely large prediction values (even when the Georgia 24-h model is used), and therefore is not recommended. The predictabilities of the sub-models decrease in comparison with the benchmarks, but they still perform reasonably well. It is interesting to see that the use of wind speed decreases the predictability.
(4) In all assessments, the air temperature is always the most important predictor, which dominates the prediction performance of PeriodiCT. By taking the reliability of predictors into account in real forecasting, we recommend the use of vapor pressure as the second most important predictor, if available, followed by wind speed if it is useful. However, solar radiation should not be used in real forecasting, as it is less reliable in weather forecasting and has only a marginal contribution to canopy temperature forecasting.
Overall, the prediction model PeriodiCT for canopy temperature, using weather variables as predictors, performs reasonably well with low data availability within a sensor and across sensors. However, for our Australian experiment, the solar radiation cannot be used, and the wind speed is not recommended when a model trained in one season is used for prediction in the next season. The assessment provided important guidance for the use of PeriodiCT in practice, adding further confidence for farmers to adopt the model in their management, particularly in irrigation scheduling.

Author Contributions

Conceptualization, Q.S., R.R., H.J. (Hiz Jamali), C.N., B.Z., H.J. (Huidong Jin), S.C.C. and M.B.; Methodology, Q.S., R.R., B.Z., H.J. (Huidong Jin), S.C.C. and M.B.; Software, Q.S.; Formal Analysis, Q.S.; Investigation, B.Z.; Resources, R.R. and M.B.; Data Curation, H.J. (Hiz Jamali) and C.N.; Writing—Original Draft, Q.S.; Writing—Review and Editing, R.R., H.J. (Hiz Jamali), C.N., B.Z., H.J. (Huidong Jin), S.C.C. and M.B.; Project Administration, R.R. and M.B.; Funding Acquisition, M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the CSIRO DigiScape Future Science Platform.

Data Availability Statement

Data are available from the authors.

Acknowledgments

The sensor data used in these analyses were collected in projects supported by the CSIRO and the Australian Cotton Research Development Corporation. Comments and suggestions from three anonymous reviewers are gratefully acknowledged.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Guilioni, L.; Jones, H.G.; Leinonen, I.; Lhomme, J.P. On the relationships between stomatal resistance and leaf temperatures in thermography. Agric. For. Meteorol. 2008, 148, 1908–1912. [Google Scholar]
  2. de Araújo, A.F.B.; Cavalcante, E.S.; Lacerda, C.F.; de Albuquerque, F.A.; da Silva Sales, J.R.; Lopes, F.B.; da Silva Ferreira, J.F.; Costa, R.N.T.; Lima, S.C.R.V.; Bezerra, M.A.; et al. Fiber Quality, Yield, and Profitability of Cotton in Response to Supplemental Irrigation with Treated Wastewater and NPK Fertilization. Agronomy 2022, 12, 2527. [Google Scholar] [CrossRef]
  3. Deb, P.; Moradkhani, H.; Han, X.; Abbaszadeh, P.; Xu, L. Assessing irrigation mitigating drought impacts on crop yields with an integrated modeling framework. J. Hydrol. 2022, 609, 127760. [Google Scholar] [CrossRef]
  4. Elkot, A.F.; Shabana, Y.; Elsayed, M.L.; Saleh, S.M.; Gadallah, M.A.M.; Fitt, B.D.L.; Richard, B.; Qi, A. Yield Responses to Total Water Input from Irrigation and Rainfall in Six Wheat Cultivars Under Different Climatic Zones in Egypt. Agronomy 2024, 14, 3057. [Google Scholar] [CrossRef]
  5. Duan, T.; Zhang, L.; Wang, G.; Liang, F. Effects of water application frequency and water use efficiency under deficit irrigation on maize yield in Xinjiang. Agronomy 2025, 15, 1110. [Google Scholar] [CrossRef]
  6. Misgina, N.A.; Beshir, H.M.; Yohannes, D.B.; Gebreyohanes, G.H. Growth, Yield, and Water Productivity of Potato Genotypes Under Supplemental and Non-Supplemental Irrigation in Semi-Arid Areas of Northern Ethiopia. Agronomy 2025, 15, 72. [Google Scholar] [CrossRef]
  7. Tennakoon, S.B.; Milroy, S.P. Crop water use and water use efficiency on irrigated cotton farms in Australia. Agric. Water Manag. 2003, 61, 179–194. [Google Scholar]
  8. Jenkins, M.; Block, D.E. A review of methods for data-driven irrigation in modern agricultural systems. Agronomy 2024, 14, 1355. [Google Scholar] [CrossRef]
  9. Brown, H.T.; Escombe, F. Researches on some of the physiological processes of green 24 plants with special references to the interchange of energy between the leaf and the 25 surroundings. Proc. R. Soc. London. Ser. B Contain. Pap. A Biol. Character 1905, 76, 29–111. [Google Scholar]
  10. Jones, H.G. Use of infrared thermometry for estimation of stomatal conductance as a possible aid to irrigation scheduling. Agric. For. Meteorol. 1999, 95, 139–149. [Google Scholar]
  11. Jones, H.G. Application of thermal imaging and infrared sensing in plant physiology and ecophysiology. Adv. Bot. Res. 2004, 41, 107–163. [Google Scholar]
  12. Jackson, R.D. Canopy temperature and crop water stress. Adv. Irrig. 1982, 1, 43–84. [Google Scholar]
  13. Peñuelas, J.; Savé, R.; Marfà, O.; Serrano, L. Remotely measured canopy temperature of greenhouse strawberries as indicator of water status and yield under mild and very mild water stress conditions. Agric. For. Meteorol. 1992, 58, 63–77. [Google Scholar]
  14. Shao, Q.; Bange, M.; Mahan, J.; Jin, H.; Jamali, H.; Zheng, B.; Chapman, S.C. A new probabilistic forecasting model for canopy temperature with consideration of periodicity and parameter variation. Agric. For. Meteorol. 2019, 265, 88–98.
  15. Conaty, W.C. Temperature-Time Thresholds for Irrigation Scheduling in Precision Application and Deficit Furrow Irrigated Cotton. Ph.D. Thesis, The University of Sydney, Camperdown, Australia, 2010.
  16. Conaty, W.C.; Burke, J.J.; Mahan, J.R.; Sutton, B.G.; Neilsen, J.E. Determining the optimum plant temperature of cotton physiology and yield to improve plant-based irrigation scheduling. Crop Sci. 2012, 52, 1828–1836.
  17. Jiang, Q.; Roche, D.; Monaco, T.A.; Hole, D. Stomatal conductance is a key parameter to assess limitations to photosynthesis and growth potential in barley genotypes. Plant Biol. 2006, 8, 515–521.
  18. Yu, M.-H.; Ding, G.-D.; Gao, G.-L.; Zhao, Y.-Y.; Yan, L.; Sai, K. Using Plant Temperature to Evaluate the Response of Stomatal Conductance to Soil Moisture Deficit. Forests 2015, 6, 3748–3762.
  19. Ninanya, J.; Ramírez, D.A.; Rinza, J.; Silva-Díaz, C.; Cervantes, M.; García, J.; Quiroz, R. Temperature as a Key Physiological Trait to Improve Yield Prediction under Water Restrictions in Potato. Agronomy 2021, 11, 1436.
  20. Gabaldón-Leal, C.; Webber, H.; Otegui, M.; Slafer, G.; Ordóñez, R.; Gaiser, T.; Lorite, I.; Ruiz-Ramos, M.; Ewert, F. Modelling the impact of heat stress on maize yield formation. Field Crops Res. 2016, 198, 226–237.
  21. Rezaei, E.E.; Webber, H.; Gaiser, T.; Naab, J.; Ewert, F. Heat stress in cereals: Mechanisms and modelling. Eur. J. Agron. 2015, 64, 98–113.
  22. Siebert, S.; Webber, H.; Zhao, G.; Ewert, F. Heat stress is overestimated in climate impact studies for irrigated agriculture. Environ. Res. Lett. 2017, 12, 054023.
  23. Malano, H.M.; Patto, M. Automation of border irrigation in South-East Australia: An overview. Irrig. Drain. Syst. 1992, 6, 9–26.
  24. Rozenstein, O.; Cohen, Y.; Alchanatis, V.; Behrendt, K.; Bonfil, D.J.; Eshel, G.; Harari, A.; Harris, W.E.; Klapp, I.; Laor, Y.; et al. Data-driven agriculture and sustainable farming: Friends or foes? Precis. Agric. 2024, 25, 520–531.
  25. van Oort, A.J.P.; Saito, K.; Zwart, S.J.; Shrestha, S. A simple model for simulating heat induced sterility in rice as a function of flowering time and transpirational cooling. Field Crops Res. 2014, 156, 303–312.
  26. Jamieson, P.D.; Semenov, M.A.; Brooking, I.R.; Francis, G.S. Sirius: A mechanistic model of wheat response to environmental variation. Eur. J. Agron. 1998, 8, 161–179.
  27. Javadian, M.; Smith, W.K.; Lee, K.; Knowles, J.F.; Scott, R.L.; Fisher, J.B.; Moore, D.J.; van Leeuwen, W.J.; Barron-Gafford, G.; Behrangi, A. Canopy Temperature Is Regulated by Ecosystem Structural Traits and Captures the Ecohydrologic Dynamics of a Semiarid Mixed Conifer Forest Site. J. Geophys. Res. Biogeosci. 2022, 127, e2021JG006617.
  28. Brisson, N.; Mary, B.; Ripoche, D.; Jeuffroy, M.H.; Ruget, F.; Nicoullaud, B.; Gate, P.; Devienne-Barret, F.; Antonioletti, R.; Durr, C.; et al. STICS: A generic model for the simulation of crops and their water and nitrogen balances. I. Theory and parameterization applied to wheat and corn. Agronomie 1998, 18, 311–346.
  29. Keating, B.A.; Carberry, P.S.; Hammer, G.L.; Probert, M.E.; Robertson, M.J.; Holzworth, D.; Huth, N.I.; Hargreaves, J.N.G.; Meinke, H.; Hochman, Z.; et al. An overview of APSIM, a model designed for farming systems simulation. Eur. J. Agron. 2003, 18, 267–288.
  30. Webber, H.; White, J.W.; Kimball, B.A.; Ewert, F.; Asseng, S.; Rezaei, E.E.; Pinter, P.J., Jr.; Hatfield, J.L.; Reynolds, M.P.; Ababaei, B.; et al. Physical robustness of canopy temperature models for crop heat stress simulation across environments and production conditions. Field Crops Res. 2016, 216, 75–88.
  31. King, B.A.; Shellie, K.C. Evaluation of neural network modeling to predict non-water-stressed leaf temperature in wine grape for calculation of crop water stress. Agric. Water Manag. 2016, 167, 38–52.
  32. Brown, P.W.; Zeiher, C.A. A model to estimate canopy temperature in the desert southwest. In Proceedings of the 1998 Beltwide Cotton Conferences, San Diego, CA, USA, 5–9 January 1998; The National Cotton Council: Memphis, TN, USA, 1998; p. 1734.
  33. Christ, E.H.; Webster, P.J.; Snider, J.L.; Toma, V.E.; Oosterhuis, D.M.; Chastain, D.R. Predicting Heat Stress in Cotton Using Probabilistic Canopy Temperature Forecasts. Agron. J. 2016, 108, 1981–1991.
  34. Hake, K.; Silvertooth, J. High temperature effects on cotton. In Physiology Today: Newsletter of the Cotton Physiology Education Program—National Cotton Council; 1990; Volume 1, pp. 1–4. Available online: https://www.cotton.org/tech/physiology/cpt/plantphysiology/upload/CPT-July90-REPOP.pdf (accessed on 15 March 2017).
  35. Idso, S.B.; Jackson, R.D.; Pinter, P.J.; Reginato, R.J.; Hatfield, J.L. Normalizing the stress-degree-day parameter for environmental variability. Agric. Meteorol. 1981, 24, 45–55.
  36. Benidis, K.; Rangapuram, S.S.; Flunkert, V.; Wang, Y.; Maddix, D.; Turkmen, C.; Gasthaus, J.; Bohlke-Schneider, M.; Salinas, D.; Stella, L.; et al. Deep learning for time series forecasting: Tutorial and literature survey. ACM Comput. Surv. 2022, 55, 1–36.
  37. Das, A.; Kong, W.; Leach, A.; Mathur, S.; Sen, R.; Yu, R. Long-term forecasting with TiDE: Time-series dense encoder. Trans. Mach. Learn. Res. 2023. Available online: https://openreview.net/forum?id=pCbC3aQB5W (accessed on 7 July 2025).
  38. Hyndman, R.J.; Athanasopoulos, G. Forecasting: Principles and Practice, 2nd ed.; OTexts: Melbourne, Australia, 2018. Available online: https://otexts.com/fpp2/ (accessed on 7 July 2025).
  39. Hatfield, J.L.; Boote, K.J.; Kimball, B.A.; Ziska, L.H.; Izaurralde, R.C.; Ort, D.; Thomson, A.M.; Wolfe, D.W. Climate impacts on agriculture: Implications for crop production. Agron. J. 2011, 103, 351–370.
  40. Upchurch, D.R.; Wanjura, D.F.; Burke, J.J.; Mahan, J.R. Biologically Identified Optimal Temperature Interactive Console (BIOTIC) for Managing Irrigation. U.S. Patent 5,539,637, 23 July 1996.
  41. Mahan, J.R.; Burke, J.J.; Wanjura, D.F.; Upchurch, D.R. Determination of temperature and time thresholds for BIOTIC irrigation of peanut on the Southern High Plains of Texas. Irrig. Sci. 2005, 23, 145–152.
  42. Richards, Q.D.; Bange, M.P.; Johnston, S.B. HydroLOGIC: An irrigation management system for Australian cotton. Agric. Syst. 2008, 98, 40–49.
  43. Roth, G.; Harris, G.; Gillies, M.; Montgomery, J.; Wigginton, D. Water-use efficiency and productivity trends in Australian irrigated cotton: A review. Crop Pasture Sci. 2013, 64, 1033–1048.
  44. Shao, Q.; Wong, H.; Li, M.; Ip, W.-C. Streamflow forecasting using functional-coefficient time series model with periodic variation. J. Hydrol. 2009, 368, 88–95.
  45. Fan, J.; Zhang, W. Statistical methods with varying coefficient models. Stat. Interface 2008, 1, 179–195.
  46. Stone, M. Cross-Validatory Choice and Assessment of Statistical Predictions. J. R. Stat. Soc. B 1974, 36, 111–133.
  47. Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models. Part I—A discussion of principles. J. Hydrol. 1970, 10, 282–290.
  48. Zar, J.H. Biostatistical Analysis; Prentice-Hall: Upper Saddle River, NJ, USA, 2008.
  49. Box, G.; Jenkins, G. Time Series Analysis: Forecasting and Control; Holden-Day: San Francisco, CA, USA, 1970.
  50. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  51. Dhal, S.B.; Kalafatis, S.; Braga-Neto, U.; Gadepally, K.C.; Landivar-Scott, J.S.; Zhao, L.; Nowka, K.; Landivar, J.; Pal, P.; Bhandari, M. Testing the Performance of LSTM and ARIMA Models for In-Season Forecasting of Canopy Cover (CC) in Cotton Crops. Remote Sens. 2024, 16, 1906.
  52. Yang, M.; Gao, P.; Zhou, P.; Xie, J.; Sun, D.; Han, X.; Wang, W. Simulating Canopy Temperature Using a Random Forest Model to Calculate the Crop Water Stress Index of Chinese Brassica. Agronomy 2021, 11, 2244.
  53. Kibler, C.L.; Trugman, A.T.; Roberts, D.A.; Still, C.J.; Scott, R.L.; Caylor, K.K.; Stella, J.C.; Singer, M.B. Evapotranspiration regulates leaf temperature and respiration in dryland vegetation. Agric. For. Meteorol. 2023, 339, 109560.
  54. Noguchi, R.; Sakuma, A.; Kakazu, Y.; Iwata, H.; Asano, Y.; Itoh, Y.; Kurimoto, I. Transpiration model for plant canopy using IoT systems in the microenvironment. SICE J. Control. Meas. Syst. Integr. 2025, 18, 2485458.
  55. Conaty, W.; Brodrick, R.; Mahan, J.; Payton, P. Climate and Its Interaction with Cotton Morphology. In Cotton, 2nd ed.; Fang, D., Percy, R.C., Eds.; Agronomy Monographs; Wiley: Hoboken, NJ, USA, 2015; Volume 57, pp. 401–417.
  56. Wanjura, D.F.; Upchurch, D.R.; Mahan, J.R. Automated irrigation based on threshold canopy temperature. Trans. ASAE 1992, 35, 153–159.
  57. Jackson, R.D.; Kustas, W.P.; Choudhury, B.J. A reexamination of the crop water stress index. Irrig. Sci. 1988, 9, 309–317.
  58. Jackson, R.D.; Reginato, R.J.; Idso, S.B. Wheat canopy temperature: A practical tool for evaluating water requirements. Water Resour. Res. 1977, 13, 651–656.
  59. Pramanik, M.; Garg, N.K.; Tripathi, S.K.; Singh, R. A New Approach of Canopy Temperature based Irrigation Scheduling of Wheat in Humid Subtropical Climate of India. Proc. Natl. Acad. Sci. India Sect. B Biol. Sci. 2017, 87, 1261–1269.
Figure 1. Indicative locations of study regions on the Australian map. The figure was produced with assistance from the R package “ozmaps” version 0.4.5 (https://github.com/mdsumner/ozmaps, accessed on 4 July 2025).
Figure 2. Plots of values of assessment criteria (y-axis) using different numbers of days (from 1 to 15) in training (represented by different lines) for prediction of Australia’s canopy temperature over different numbers of days ahead (x-axis, from 1 to 14) with selected sensors (presented in each plot) within the experiments and treatments: (a) Coefficient of efficiency and (b) accuracy. The full model 1234 uses the weather variables air temperature, vapor pressure, wind speed, and solar radiation.
Figure 3. Plots of values of assessment criteria (y-axis) using different numbers of days (from 1 to 15) in training (represented by different lines) for prediction of Australia’s canopy temperature over different numbers of days ahead (x-axis, from 1 to 14) for selected sensors (presented in each plot) within the experiments and treatments: (a) Coefficient of efficiency and (b) accuracy. The sub-model 123 uses the weather variables air temperature, vapor pressure, and wind speed.
Figure 4. Plots of values of assessment criteria (y-axis) using different numbers of days (from 1 to 15) in training (represented by different lines) for prediction of Australia’s canopy temperature over different numbers of days ahead (x-axis, from 1 to 14) for selected sensors (presented in each plot) within the experiments and treatments: (a) Coefficient of efficiency and (b) accuracy. The sub-model 12 uses the weather variables air temperature and vapor pressure.
Figure 5. Plots of values of assessment criteria (y-axis) using different numbers of days (from 1 to 15) in training (represented by different lines) for prediction of Australia’s canopy temperature over different numbers of days ahead (x-axis, from 1 to 14) for selected sensors (presented in each plot) within the experiments and treatments: (a) Coefficient of efficiency and (b) accuracy. The sub-model 1 uses only the weather variable air temperature.
Figure 6. Plots of the solar radiation over time of day (left panel) and the change in solar radiation against solar radiation reading (right panels) in the 2014–2015 season (top panels) and in the 2015–2016 season (bottom panels). The time interval for the data is 15 min.
Table 1. Summary information for canopy temperature sensors in selected fields.
| Location | Experiment | Treatment | CT Sensors | Year | Start Date | End Date |
|---|---|---|---|---|---|---|
| ACRI | Scheduling | Control | s2004, s2008, s2009 | 2014–2015 | 6 January 2015 | 4 March 2015 |
| ACRI | Scheduling | Tc-1.7MPa | s2001, s2011 | 2014–2015 | 6 January 2015 | 4 March 2015 |
| ACRI | Tc Prediction | Tc Average | s1301, s1302, s1307, s1308, s1311 | 2015–2016 | 22 December 2015 | 23 March 2016 |
| ACRI | Tc Prediction | Tc Forecast | s1303, s1304, s1305, s1306, s1309 | 2015–2016 | 22 December 2015 | 23 March 2016 |
| Emerald | Scheduling | Control | s171, s226, s253 | 2014–2015 | 11 November 2014 | 20 December 2014 |
| Emerald | Scheduling | Tc-1.7MPa | s032, s104, s259 | 2014–2015 | 11 November 2014 | 20 December 2014 |
| Emerald | Scheduling | BIOTIC | s160, s176, s234 | 2015–2016 | 10 November 2015 | 16 January 2016 |
| Emerald | Scheduling | Control | s170, s192 | 2015–2016 | 10 November 2015 | 16 January 2016 |
| Emerald | Scheduling | Dynamic | s167, s202 | 2015–2016 | 10 November 2015 | 16 January 2016 |
| Moree | Scheduling | Control | s022, s111, s115 | 2014–2015 | 25 December 2014 | 4 March 2015 |
| Moree | Scheduling | Tc-1.7MPa | s134, s162, s194 | 2014–2015 | 25 December 2014 | 4 March 2015 |
| Moree | Scheduling | Dynamic | s192, s202, s255 | 2014–2015 | 25 December 2014 | 4 March 2015 |
Table 2. Category definition matrix for reliability testing of the canopy temperature forecasts from the different models used in this study.
| Measurement \ Forecast | Prediction Above 28 °C | Prediction Below 28 °C |
|---|---|---|
| Actual above 28 °C | Number of hits | Number of misses |
| Actual below 28 °C | Number of false alarms | Number of correct negatives |
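The contingency categories in Table 2 can be counted directly from paired measured and forecast canopy temperature series. The sketch below is illustrative only (the function names are ours, not from the paper), and it assumes that the Accuracy (%) reported in the results tables is the proportion of hits plus correct negatives among all readings:

```python
import numpy as np

THRESHOLD = 28.0  # canopy temperature threshold (degrees C) from Table 2

def reliability_counts(measured, forecast, threshold=THRESHOLD):
    """Return the four Table 2 categories as counts over paired readings."""
    measured = np.asarray(measured, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    above_m = measured > threshold   # actual above threshold
    above_f = forecast > threshold   # predicted above threshold
    return {
        "hits": int(np.sum(above_m & above_f)),
        "misses": int(np.sum(above_m & ~above_f)),
        "false_alarms": int(np.sum(~above_m & above_f)),
        "correct_negatives": int(np.sum(~above_m & ~above_f)),
    }

def accuracy(counts):
    """Fraction of readings classified on the correct side of the threshold."""
    total = sum(counts.values())
    return (counts["hits"] + counts["correct_negatives"]) / total

# Toy example with four 15-min readings
measured = [27.1, 29.3, 30.2, 26.8]
forecast = [27.5, 28.6, 27.9, 26.5]
c = reliability_counts(measured, forecast)
# c == {"hits": 1, "misses": 1, "false_alarms": 0, "correct_negatives": 2}
# accuracy(c) == 0.75
```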
Table 3. Prediction performance for canopy temperature sensors in selected fields using the full model trained on other sensors within the same experiment and treatment.
| Location | Experiment | Treatment | Sensor | Error (%) | NS | Accuracy (%) |
|---|---|---|---|---|---|---|
| ACRI1415 | Scheduling | Control | s2004 | 0.068 | 0.933 | 90.55 |
| | | | s2008 | −0.550 | 0.919 | 90.93 |
| | | | s2009 | 0.249 | 0.926 | 89.19 |
| | | Tc-1.7MPa | s2001 | −0.481 | 0.916 | 89.19 |
| | | | s2011 | 0.355 | 0.939 | 93.09 |
| ACRI1516 | Tc Prediction | Tc Average | s1301 | 0.425 | 0.918 | 89.75 |
| | | | s1302 | −0.506 | 0.980 | 94.94 |
| | | | s1307 | 0.283 | 0.973 | 93.39 |
| | | | s1308 | −1.168 | 0.968 | 93.74 |
| | | | s1311 | 1.005 | 0.976 | 94.68 |
| | | Tc Forecast | s1303 | 0.473 | 0.975 | 94.20 |
| | | | s1304 | 0.739 | 0.979 | 95.18 |
| | | | s1305 | −0.857 | 0.976 | 94.22 |
| | | | s1306 | −0.488 | 0.978 | 95.14 |
| | | | s1309 | 0.209 | 0.972 | 94.16 |
| Emerald1415 | Scheduling | Control | s171 | 0.002 | 0.953 | 93.44 |
| | | | s226 | −0.004 | 0.939 | 92.67 |
| | | | s253 | 0.000 | 0.960 | 94.06 |
| | | Tc-1.7MPa | s032 | 0.018 | 0.939 | 92.39 |
| | | | s104 | −0.007 | 0.893 | 90.57 |
| | | | s259 | −0.016 | 0.923 | 89.95 |
| Emerald1516 | Scheduling | BIOTIC | s160 | −0.017 | 0.895 | 91.56 |
| | | | s176 | 0.025 | 0.890 | 92.22 |
| | | | s234 | −0.011 | 0.900 | 93.01 |
| | | Control | s170 | 0.017 | 0.889 | 90.51 |
| | | | s192 | −0.019 | 0.893 | 91.31 |
| | | Dynamic | s167 | 0.017 | 0.907 | 92.38 |
| | | | s202 | −0.009 | 0.908 | 92.19 |
| Moree1415 | Scheduling | Control | s022 | −2.157 | 0.869 | 88.85 |
| | | | s111 | 7.860 | 0.719 | 78.91 |
| | | | s115 | −6.442 | 0.734 | 93.47 |
| | | Tc-1.7MPa | s134 | 4.368 | 0.862 | 86.37 |
| | | | s162 | −2.244 | 0.887 | 91.45 |
| | | | s194 | −2.303 | 0.875 | 90.05 |
| | | Dynamic | s192 | −3.995 | 0.833 | 87.98 |
| | | | s202 | −2.328 | 0.831 | 88.53 |
| | | | s255 | 7.694 | 0.722 | 78.85 |
Table 4. Prediction performance for canopy temperature sensors in selected fields under leave-one-out cross-validation with the full model.
| Location | Experiment | Treatment | Sensor | Error (%) | NS | Accuracy (%) |
|---|---|---|---|---|---|---|
| ACRI1415 | Scheduling | Control | s2004 | −0.076 | 0.926 | 91.62 |
| | | | s2008 | −0.100 | 0.911 | 89.60 |
| | | | s2009 | −0.100 | 0.919 | 89.80 |
| | | Tc-1.7MPa | s2001 | −0.070 | 0.910 | 89.39 |
| | | | s2011 | −0.120 | 0.936 | 93.02 |
| ACRI1516 | Tc Prediction | Tc Average | s1301 | −0.059 | 0.913 | 89.78 |
| | | | s1302 | −0.047 | 0.921 | 92.11 |
| | | | s1307 | −0.082 | 0.913 | 91.63 |
| | | | s1308 | −0.114 | 0.923 | 93.81 |
| | | | s1311 | −0.059 | 0.923 | 93.01 |
| | | Tc Forecast | s1303 | −0.090 | 0.913 | 91.84 |
| | | | s1304 | −0.096 | 0.930 | 93.13 |
| | | | s1305 | −0.074 | 0.917 | 91.74 |
| | | | s1306 | −0.046 | 0.923 | 93.34 |
| | | | s1309 | −0.090 | 0.919 | 93.19 |
| Emerald1415 | Scheduling | Control | s171 | −0.084 | 0.952 | 93.61 |
| | | | s226 | −0.077 | 0.941 | 92.79 |
| | | | s253 | −0.085 | 0.959 | 94.23 |
| | | Tc-1.7MPa | s032 | −0.038 | 0.918 | 90.46 |
| | | | s104 | −0.129 | 0.954 | 93.18 |
| | | | s259 | −0.156 | 0.955 | 94.38 |
| Emerald1516 | Scheduling | BIOTIC | s160 | −0.114 | 0.902 | 92.09 |
| | | | s176 | −0.062 | 0.909 | 93.46 |
| | | | s234 | −0.073 | 0.904 | 93.65 |
| | | Control | s170 | −0.071 | 0.899 | 92.46 |
| | | | s192 | −0.068 | 0.904 | 92.28 |
| | | Dynamic | s167 | −0.066 | 0.910 | 93.38 |
| | | | s202 | −0.097 | 0.902 | 92.22 |
| Moree1415 | Scheduling | Control | s022 | −0.055 | 0.803 | 90.69 |
| | | | s111 | −0.022 | 0.856 | 93.82 |
| | | | s115 | −0.092 | 0.806 | 93.20 |
| | | Tc-1.7MPa | s134 | −0.012 | 0.847 | 93.15 |
| | | | s162 | −0.063 | 0.821 | 91.26 |
| | | | s194 | −0.158 | 0.807 | 91.28 |
| | | Dynamic | s192 | −0.117 | 0.798 | 90.83 |
| | | | s202 | −0.093 | 0.821 | 91.77 |
| | | | s255 | −0.009 | 0.855 | 94.55 |
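The NS column reported in Tables 3–10 is the Nash–Sutcliffe coefficient of efficiency [47], equal to 1 for a perfect fit and 0 when the forecast is no better than predicting the observed mean. A minimal reference implementation (our own sketch, not code from the study) is:

```python
import numpy as np

def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe coefficient of efficiency: 1.0 is a perfect fit;
    0.0 means the model does no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual_ss = np.sum((observed - predicted) ** 2)
    total_ss = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual_ss / total_ss

# Toy canopy temperature series (degrees C)
obs = [25.0, 28.0, 31.0, 27.0]
pred = [25.5, 27.5, 30.5, 27.5]
print(round(nash_sutcliffe(obs, pred), 3))  # 0.947
```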
Table 5. Prediction performance for canopy temperature sensors in selected fields using the sub-model trained on other sensors within the same experiment and treatment, with air temperature, vapor pressure, and wind speed as predictors.
| Location | Experiment | Treatment | Sensor | Error (%) | NS | Accuracy (%) |
|---|---|---|---|---|---|---|
| ACRI1415 | Scheduling | Control | s2004 | 0.024 | 0.928 | 90.04 |
| | | | s2008 | −0.579 | 0.913 | 90.31 |
| | | | s2009 | 0.211 | 0.920 | 89.29 |
| | | Tc-1.7MPa | s2001 | −0.525 | 0.909 | 89.70 |
| | | | s2011 | 0.346 | 0.934 | 93.50 |
| ACRI1516 | Tc Prediction | Tc Average | s1301 | 0.453 | 0.914 | 89.83 |
| | | | s1302 | −0.599 | 0.921 | 91.88 |
| | | | s1307 | 0.243 | 0.911 | 91.37 |
| | | | s1308 | −1.206 | 0.920 | 93.15 |
| | | | s1311 | 0.965 | 0.922 | 93.33 |
| | | Tc Forecast | s1303 | 0.438 | 0.913 | 91.84 |
| | | | s1304 | 0.705 | 0.931 | 93.16 |
| | | | s1305 | −0.893 | 0.915 | 91.70 |
| | | | s1306 | −0.524 | 0.925 | 93.15 |
| | | | s1309 | 0.175 | 0.920 | 92.65 |
| Emerald1415 | Scheduling | Control | s171 | 0.124 | 0.948 | 92.87 |
| | | | s226 | −0.478 | 0.936 | 92.67 |
| | | | s253 | −0.109 | 0.954 | 93.33 |
| | | Tc-1.7MPa | s032 | −1.228 | 0.902 | 90.94 |
| | | | s104 | −1.724 | 0.918 | 90.17 |
| | | | s259 | 0.988 | 0.946 | 91.74 |
| Emerald1516 | Scheduling | BIOTIC | s160 | −1.597 | 0.895 | 91.64 |
| | | | s176 | 2.600 | 0.886 | 91.85 |
| | | | s234 | −1.091 | 0.897 | 93.09 |
| | | Control | s170 | 1.756 | 0.886 | 90.35 |
| | | | s192 | −1.785 | 0.891 | 91.39 |
| | | Dynamic | s167 | 1.799 | 0.899 | 92.14 |
| | | | s202 | −0.887 | 0.899 | 92.26 |
| Moree1415 | Scheduling | Control | s022 | −2.171 | 0.855 | 89.12 |
| | | | s111 | 7.818 | 0.711 | 78.52 |
| | | | s115 | −6.417 | 0.722 | 83.37 |
| | | Tc-1.7MPa | s134 | 4.365 | 0.847 | 85.86 |
| | | | s162 | −2.249 | 0.868 | 91.33 |
| | | | s194 | −2.308 | 0.863 | 89.98 |
| | | Dynamic | s192 | −3.983 | 0.816 | 88.02 |
| | | | s202 | −4.314 | 0.815 | 88.36 |
| | | | s255 | 7.680 | 0.712 | 78.13 |
Table 6. Prediction performance for canopy temperature sensors in selected fields under leave-one-out cross-validation with the sub-model using air temperature, vapor pressure, and wind speed.
| Location | Experiment | Treatment | Sensor | Error (%) | NS | Accuracy (%) |
|---|---|---|---|---|---|---|
| ACRI1415 | Scheduling | Control | s2004 | −0.137 | 0.923 | 90.69 |
| | | | s2008 | −0.226 | 0.907 | 89.39 |
| | | | s2009 | −0.158 | 0.914 | 89.08 |
| | | Tc-1.7MPa | s2001 | −0.073 | 0.906 | 89.56 |
| | | | s2011 | −0.190 | 0.931 | 93.26 |
| ACRI1516 | Tc Prediction | Tc Average | s1301 | −0.056 | 0.910 | 89.67 |
| | | | s1302 | −0.052 | 0.918 | 91.93 |
| | | | s1307 | −0.011 | 0.909 | 91.37 |
| | | | s1308 | −0.076 | 0.920 | 93.54 |
| | | | s1311 | −0.017 | 0.920 | 92.99 |
| | | Tc Forecast | s1303 | −0.045 | 0.909 | 91.66 |
| | | | s1304 | −0.068 | 0.927 | 92.92 |
| | | | s1305 | −0.025 | 0.913 | 91.94 |
| | | | s1306 | −0.019 | 0.921 | 93.17 |
| | | | s1309 | −0.045 | 0.916 | 92.81 |
| Emerald1415 | Scheduling | Control | s171 | −0.170 | 0.948 | 93.41 |
| | | | s226 | −0.091 | 0.941 | 92.42 |
| | | | s253 | −0.134 | 0.954 | 93.81 |
| | | Tc-1.7MPa | s032 | −0.029 | 0.925 | 90.23 |
| | | | s104 | −0.251 | 0.944 | 91.93 |
| | | | s259 | −0.229 | 0.945 | 93.27 |
| Emerald1516 | Scheduling | BIOTIC | s160 | −0.069 | 0.900 | 92.11 |
| | | | s176 | −0.014 | 0.906 | 93.49 |
| | | | s234 | 0.039 | 0.902 | 93.43 |
| | | Control | s170 | −0.001 | 0.896 | 92.46 |
| | | | s192 | −0.017 | 0.901 | 92.36 |
| | | Dynamic | s167 | 0.017 | 0.907 | 93.30 |
| | | | s202 | 0.004 | 0.900 | 92.46 |
| Moree1415 | Scheduling | Control | s022 | −0.066 | 0.856 | 90.28 |
| | | | s111 | 0.000 | 0.912 | 94.05 |
| | | | s115 | −0.166 | 0.866 | 92.60 |
| | | Tc-1.7MPa | s134 | −0.027 | 0.900 | 93.10 |
| | | | s162 | −0.050 | 0.870 | 91.21 |
| | | | s194 | −0.133 | 0.865 | 91.10 |
| | | Dynamic | s192 | −0.117 | 0.855 | 91.93 |
| | | | s202 | −0.085 | 0.871 | 92.27 |
| | | | s255 | 0.002 | 0.910 | 94.53 |
Table 7. Prediction performance for canopy temperature sensors in selected fields using the sub-model trained on other sensors within the same experiment and treatment, with air temperature and vapor pressure as predictors.
| Location | Experiment | Treatment | Sensor | Error (%) | NS | Accuracy (%) |
|---|---|---|---|---|---|---|
| ACRI1415 | Scheduling | Control | s2004 | 0.125 | 0.914 | 90.42 |
| | | | s2008 | −0.476 | 0.900 | 90.45 |
| | | | s2009 | 0.312 | 0.905 | 89.66 |
| | | Tc-1.7MPa | s2001 | −0.386 | 0.899 | 89.90 |
| | | | s2011 | 0.406 | 0.919 | 93.36 |
| ACRI1516 | Tc Prediction | Tc Average | s1301 | 0.468 | 0.899 | 89.73 |
| | | | s1302 | −0.584 | 0.899 | 91.53 |
| | | | s1307 | 0.257 | 0.889 | 91.48 |
| | | | s1308 | −1.194 | 0.895 | 93.04 |
| | | | s1311 | 0.979 | 0.896 | 92.80 |
| | | Tc Forecast | s1303 | 0.451 | 0.893 | 91.93 |
| | | | s1304 | 0.716 | 0.911 | 92.97 |
| | | | s1305 | −0.882 | 0.892 | 91.89 |
| | | | s1306 | −0.513 | 0.904 | 93.30 |
| | | | s1309 | 0.186 | 0.897 | 92.38 |
| Emerald1415 | Scheduling | Control | s171 | 0.128 | 0.928 | 92.53 |
| | | | s226 | −0.469 | 0.920 | 92.30 |
| | | | s253 | −0.108 | 0.941 | 93.10 |
| | | Tc-1.7MPa | s032 | −1.219 | 0.884 | 90.46 |
| | | | s104 | −1.724 | 0.898 | 89.78 |
| | | | s259 | 1.008 | 0.923 | 91.34 |
| Emerald1516 | Scheduling | BIOTIC | s160 | −1.666 | 0.886 | 91.63 |
| | | | s176 | 2.530 | 0.880 | 91.82 |
| | | | s234 | −1.158 | 0.888 | 93.08 |
| | | Control | s170 | 1.685 | 0.878 | 90.08 |
| | | | s192 | −1.851 | 0.881 | 91.43 |
| | | Dynamic | s167 | 1.737 | 0.891 | 92.36 |
| | | | s202 | −0.951 | 0.891 | 92.93 |
| Moree1415 | Scheduling | Control | s022 | −2.116 | 0.835 | 88.29 |
| | | | s111 | 7.865 | 0.697 | 77.76 |
| | | | s115 | −6.368 | 0.697 | 82.94 |
| | | Tc-1.7MPa | s134 | 4.417 | 0.823 | 84.50 |
| | | | s162 | −2.192 | 0.840 | 90.25 |
| | | | s194 | −2.246 | 0.834 | 89.58 |
| | | Dynamic | s192 | −3.923 | 0.788 | 87.15 |
| | | | s202 | −4.260 | 0.786 | 87.42 |
| | | | s255 | 7.728 | 0.695 | 77.30 |
Table 8. Prediction performance for canopy temperature sensors in selected fields under leave-one-out cross-validation with the sub-model using air temperature and vapor pressure.
| Location | Experiment | Treatment | Sensor | Error (%) | NS | Accuracy (%) |
|---|---|---|---|---|---|---|
| ACRI1415 | Scheduling | Control | s2004 | −0.011 | 0.909 | 91.44 |
| | | | s2008 | −0.097 | 0.894 | 89.94 |
| | | | s2009 | −0.038 | 0.898 | 89.39 |
| | | Tc-1.7MPa | s2001 | −0.005 | 0.893 | 89.80 |
| | | | s2011 | −0.030 | 0.914 | 92.98 |
| ACRI1516 | Tc Prediction | Tc Average | s1301 | −0.035 | 0.895 | 89.48 |
| | | | s1302 | −0.030 | 0.897 | 91.47 |
| | | | s1307 | 0.026 | 0.887 | 91.43 |
| | | | s1308 | −0.028 | 0.895 | 93.30 |
| | | | s1311 | 0.024 | 0.894 | 92.76 |
| | | Tc Forecast | s1303 | −0.022 | 0.890 | 91.76 |
| | | | s1304 | −0.035 | 0.908 | 92.90 |
| | | | s1305 | 0.006 | 0.891 | 91.94 |
| | | | s1306 | 0.012 | 0.901 | 93.04 |
| | | | s1309 | −0.008 | 0.894 | 92.47 |
| Emerald1415 | Scheduling | Control | s171 | −0.191 | 0.925 | 92.67 |
| | | | s226 | −0.121 | 0.925 | 92.30 |
| | | | s253 | −0.152 | 0.939 | 93.33 |
| | | Tc-1.7MPa | s032 | −0.056 | 0.905 | 90.17 |
| | | | s104 | −0.258 | 0.921 | 91.39 |
| | | | s259 | −0.251 | 0.922 | 92.76 |
| Emerald1516 | Scheduling | BIOTIC | s160 | −0.137 | 0.892 | 92.11 |
| | | | s176 | −0.076 | 0.898 | 93.19 |
| | | | s234 | −0.029 | 0.893 | 93.46 |
| | | Control | s170 | −0.062 | 0.888 | 92.34 |
| | | | s192 | −0.088 | 0.892 | 92.33 |
| | | Dynamic | s167 | −0.047 | 0.899 | 93.24 |
| | | | s202 | −0.061 | 0.892 | 92.46 |
| Moree1415 | Scheduling | Control | s022 | −0.014 | 0.835 | 90.09 |
| | | | s111 | 0.058 | 0.898 | 93.77 |
| | | | s115 | −0.102 | 0.836 | 91.51 |
| | | Tc-1.7MPa | s134 | 0.041 | 0.876 | 92.00 |
| | | | s162 | 0.017 | 0.840 | 89.65 |
| | | | s194 | −0.076 | 0.835 | 89.91 |
| | | Dynamic | s192 | −0.068 | 0.822 | 90.76 |
| | | | s202 | −0.023 | 0.839 | 90.78 |
| | | | s255 | 0.067 | 0.895 | 94.18 |
Table 9. Prediction performance for canopy temperature sensors in selected fields using the sub-model trained on other sensors within the same experiment and treatment, with air temperature as the only predictor.
| Location | Experiment | Treatment | Sensor | Error (%) | NS | Accuracy (%) |
|---|---|---|---|---|---|---|
| ACRI1415 | Scheduling | Control | s2004 | 0.192 | 0.873 | 90.86 |
| | | | s2008 | −0.393 | 0.883 | 90.52 |
| | | | s2009 | 0.387 | 0.869 | 89.01 |
| | | Tc-1.7MPa | s2001 | −0.322 | 0.859 | 89.77 |
| | | | s2011 | 0.497 | 0.899 | 93.39 |
| ACRI1516 | Tc Prediction | Tc Average | s1301 | 0.516 | 0.877 | 89.20 |
| | | | s1302 | −0.535 | 0.874 | 90.83 |
| | | | s1307 | 0.305 | 0.867 | 91.19 |
| | | | s1308 | −1.145 | 0.870 | 92.39 |
| | | | s1311 | 1.026 | 0.871 | 92.47 |
| | | Tc Forecast | s1303 | 0.500 | 0.867 | 91.17 |
| | | | s1304 | 0.766 | 0.891 | 92.39 |
| | | | s1305 | −0.831 | 0.871 | 91.37 |
| | | | s1306 | −0.463 | 0.881 | 92.60 |
| | | | s1309 | 0.235 | 0.870 | 92.10 |
| Emerald1415 | Scheduling | Control | s171 | 0.194 | 0.911 | 91.88 |
| | | | s226 | −0.411 | 0.917 | 92.84 |
| | | | s253 | −0.046 | 0.920 | 92.30 |
| | | Tc-1.7MPa | s032 | −1.160 | 0.887 | 90.77 |
| | | | s104 | −1.660 | 0.875 | 89.49 |
| | | | s259 | 1.074 | 0.905 | 91.65 |
| Emerald1516 | Scheduling | BIOTIC | s160 | −1.659 | 0.855 | 91.48 |
| | | | s176 | 2.540 | 0.850 | 91.72 |
| | | | s234 | −1.155 | 0.859 | 93.22 |
| | | Control | s170 | 1.684 | 0.850 | 90.93 |
| | | | s192 | −1.833 | 0.848 | 91.00 |
| | | Dynamic | s167 | 1.747 | 0.863 | 91.93 |
| | | | s202 | −0.951 | 0.861 | 91.91 |
| Moree1415 | Scheduling | Control | s022 | −2.064 | 0.796 | 87.95 |
| | | | s111 | 7.929 | 0.696 | 77.47 |
| | | | s115 | −6.331 | 0.635 | 82.78 |
| | | Tc-1.7MPa | s134 | 4.495 | 0.775 | 84.09 |
| | | | s162 | −2.119 | 0.770 | 89.70 |
| | | | s194 | −2.178 | 0.765 | 88.60 |
| | | Dynamic | s192 | −3.863 | 0.716 | 86.46 |
| | | | s202 | −4.200 | 0.713 | 86.82 |
| | | | s255 | 7.813 | 0.694 | 76.62 |
Table 10. Prediction performance for canopy temperature sensors in selected fields under leave-one-out cross-validation with the sub-model using air temperature only.
| Location | Experiment | Treatment | Sensor | Error (%) | NS | Accuracy (%) |
|---|---|---|---|---|---|---|
| ACRI1415 | Scheduling | Control | s2004 | 0.132 | 0.865 | 91.44 |
| | | | s2008 | −0.030 | 0.874 | 89.66 |
| | | | s2009 | 0.067 | 0.861 | 88.74 |
| | | Tc-1.7MPa | s2001 | 0.142 | 0.850 | 89.56 |
| | | | s2011 | 0.041 | 0.890 | 93.02 |
| ACRI1516 | Tc Prediction | Tc Average | s1301 | 0.012 | 0.873 | 89.20 |
| | | | s1302 | 0.018 | 0.872 | 90.97 |
| | | | s1307 | 0.077 | 0.865 | 91.16 |
| | | | s1308 | 0.032 | 0.870 | 92.56 |
| | | | s1311 | 0.075 | 0.869 | 92.31 |
| | | Tc Forecast | s1303 | 0.038 | 0.864 | 91.15 |
| | | | s1304 | 0.016 | 0.888 | 92.38 |
| | | | s1305 | 0.053 | 0.869 | 91.61 |
| | | | s1306 | 0.071 | 0.877 | 92.56 |
| | | | s1309 | 0.059 | 0.867 | 91.93 |
| Emerald1415 | Scheduling | Control | s171 | −0.160 | 0.912 | 92.08 |
| | | | s226 | −0.071 | 0.918 | 92.36 |
| | | | s253 | −0.117 | 0.920 | 92.33 |
| | | Tc-1.7MPa | s032 | −0.025 | 0.893 | 90.85 |
| | | | s104 | −0.207 | 0.897 | 90.12 |
| | | | s259 | −0.211 | 0.907 | 92.56 |
| Emerald1516 | Scheduling | BIOTIC | s160 | −0.131 | 0.860 | 91.66 |
| | | | s176 | −0.076 | 0.868 | 92.98 |
| | | | s234 | −0.020 | 0.864 | 93.33 |
| | | Control | s170 | −0.046 | 0.861 | 92.25 |
| | | | s192 | −0.089 | 0.858 | 91.87 |
| | | Dynamic | s167 | −0.056 | 0.871 | 92.54 |
| | | | s202 | −0.053 | 0.862 | 92.11 |
| Moree1415 | Scheduling | Control | s022 | 0.061 | 0.796 | 89.42 |
| | | | s111 | 0.089 | 0.885 | 93.58 |
| | | | s115 | 0.028 | 0.758 | 91.05 |
| | | Tc-1.7MPa | s134 | 0.130 | 0.827 | 91.97 |
| | | | s162 | 0.128 | 0.768 | 89.47 |
| | | | s194 | 0.046 | 0.764 | 89.42 |
| | | Dynamic | s192 | 0.072 | 0.745 | 89.56 |
| | | | s202 | 0.113 | 0.758 | 89.86 |
| | | | s255 | 0.109 | 0.877 | 94.18 |
Table 11. Criterion values of canopy temperature prediction for sensors in the BIOTIC experiment in Emerald in the 2015–2016 season. The predictions were made using models trained on sensors in the Tc-1.7MPa and Control treatments in Emerald in the 2014–2015 season. The model labels index the predictors: air temperature (1), vapor pressure (2), and wind speed (3).
| Criterion | Experiment in 2014–2015 | Model | BIOTIC | | | Control | | Dynamic | | |
| | | | s160 | s176 | s234 | s170 | s192 | s162 | s167 | s202 |
|---|---|---|---|---|---|---|---|---|---|---|
| Errors (%) | Control | 123 | −1.399 | −0.753 | −1.276 | −0.982 | −1.441 | −1.170 | −0.733 | −1.165 |
| | | 12 | 3.809 | 4.318 | 3.925 | 4.157 | 3.774 | 4.006 | 4.361 | 3.999 |
| | | 13 | 0.182 | 0.738 | 0.350 | 0.596 | 0.152 | 0.433 | 0.824 | 0.400 |
| | | 1 | 6.377 | 6.758 | 6.534 | 6.700 | 6.357 | 6.582 | 6.866 | 6.540 |
| | Tc-1.7MPa | 123 | −0.733 | −0.029 | −0.600 | −0.278 | −0.780 | −0.487 | −0.008 | −0.477 |
| | | 12 | 4.448 | 5.020 | 4.577 | 4.839 | 4.409 | 4.664 | 5.064 | 4.661 |
| | | 13 | 0.758 | 1.360 | 0.930 | 1.200 | 0.725 | 1.020 | 1.443 | 0.992 |
| | | 1 | 6.678 | 7.116 | 6.842 | 7.037 | 6.653 | 6.898 | 7.223 | 6.860 |
| NS coef. | Control | 123 | 0.920 | 0.920 | 0.921 | 0.922 | 0.919 | 0.923 | 0.923 | 0.919 |
| | | 12 | 0.920 | 0.903 | 0.921 | 0.916 | 0.917 | 0.916 | 0.911 | 0.916 |
| | | 13 | 0.821 | 0.808 | 0.825 | 0.821 | 0.820 | 0.824 | 0.820 | 0.815 |
| | | 1 | 0.812 | 0.785 | 0.819 | 0.810 | 0.809 | 0.809 | 0.802 | 0.807 |
| | Tc-1.7MPa | 123 | 0.871 | 0.866 | 0.874 | 0.872 | 0.871 | 0.875 | 0.873 | 0.868 |
| | | 12 | 0.900 | 0.877 | 0.901 | 0.893 | 0.897 | 0.895 | 0.886 | 0.894 |
| | | 13 | 0.734 | 0.717 | 0.743 | 0.730 | 0.733 | 0.741 | 0.736 | 0.727 |
| | | 1 | 0.795 | 0.764 | 0.802 | 0.792 | 0.792 | 0.792 | 0.782 | 0.790 |
| Accuracy (%) | Control | 123 | 89.73 | 88.82 | 90.61 | 90.57 | 90.18 | 90.11 | 90.05 | 89.58 |
| | | 12 | 90.33 | 88.79 | 91.21 | 91.04 | 90.46 | 90.16 | 89.68 | 90.78 |
| | | 13 | 82.11 | 77.27 | 81.12 | 80.05 | 82.44 | 81.59 | 79.01 | 81.05 |
| | | 1 | 83.76 | 79.87 | 83.03 | 82.04 | 83.89 | 82.44 | 81.15 | 83.44 |
| | Tc-1.7MPa | 123 | 85.14 | 82.12 | 85.63 | 84.85 | 85.30 | 85.42 | 84.27 | 84.43 |
| | | 12 | 87.11 | 84.99 | 87.81 | 86.97 | 86.70 | 86.79 | 62.20 | 87.59 |
| | | 13 | 81.39 | 75.71 | 79.36 | 78.18 | 81.79 | 80.18 | 76.78 | 79.59 |
| | | 1 | 82.98 | 78.84 | 82.15 | 80.91 | 83.25 | 81.74 | 79.89 | 82.58 |

Shao, Q.; Roche, R.; Jamali, H.; Nunn, C.; Zheng, B.; Jin, H.; Chapman, S.C.; Bange, M. Comprehensive Assessment of PeriodiCT Model for Canopy Temperature Forecasting. Agronomy 2025, 15, 1665. https://doi.org/10.3390/agronomy15071665

