Assessing the Efficacy of the SWAT Auto-Irrigation Function to Simulate Irrigation, Evapotranspiration, and Crop Response to Management Strategies of the Texas High Plains

Abstract: In the semi-arid Texas High Plains, the underlying Ogallala Aquifer is experiencing continuing decline due to long-term pumping for irrigation with limited recharge. Accurate simulation of irrigation and the associated water balance components is critical for meaningful evaluation of the effects of irrigation management strategies. Modelers often employ auto-irrigation functions within models such as the Soil and Water Assessment Tool (SWAT). However, some studies have raised concerns as to whether the function adequately simulates representative irrigation practices. In this study, observations of climate, irrigation, evapotranspiration (ET), leaf area index (LAI), and crop yield derived from an irrigated lysimeter field at the USDA-ARS Conservation and Production Research Laboratory at Bushland, Texas were used to evaluate the efficacy of the SWAT auto-irrigation functions. Results indicated good agreement between simulated and observed daily ET during both the model calibration (2001–2005) and validation (2006–2010) periods for the baseline scenario (Nash-Sutcliffe efficiency, NSE ≥ 0.80). The auto-irrigation scenarios resulted in reasonable ET simulations under all thresholds of the soil water deficit (SWD) trigger, as indicated by NSE values > 0.5. However, the auto-irrigation function did not adequately represent field practices, due to the continuation of irrigation after crop maturity and excessive irrigation when SWD triggers were less than the static irrigation amount.


Introduction
Hydrologic models such as the Soil and Water Assessment Tool (SWAT) [1], the Agricultural Policy/Environmental eXtender (APEX) [2], and the European Hydrological System Model MIKE SHE [3] are widely used for assessing the impacts of best management practices at various spatial scales. Proper calibration and validation of models using observed data are required for meaningful evaluation of outputs and subsequent use in decision making. In most studies, hydrologic models have been calibrated for streamflow using measured data at the watershed outlet [4]. A limited number of studies have used measured evapotranspiration (ET) for calibrating hydrologic models [5][6][7]. Moreover, in an assessment of 257 process-based watershed modeling papers published between 1992 and 2010, few studies evaluated auto-irrigation functions for simulations of irrigation, ET, and crop responses using quality, long-term field observations.
In this study, data collected from a lysimeter located in an irrigated field at the USDA-ARS CPRL at Bushland, Texas were used for evaluating the SWAT model. Observed data used in this study included continuous measurement of daily ET, seasonal leaf area index (LAI), annual crop yield, climate data, field operations, and detailed irrigation management records from 2000 to 2010. The primary goal of this study was to assess the efficacy of the SWAT auto-irrigation function to simulate irrigation management strategies typical of the Texas High Plains region, while simultaneously simulating reasonable values for crop yield, LAI, and ET. Specifically, the objectives of this study were to: (1) calibrate the SWAT model for ET, LAI, and crop yield using measured values from a weighing lysimeter field; and (2) compare measured ET, irrigation, LAI, and crop yield to simulated values using SWAT auto-irrigation functions under different SWD thresholds triggered by the soil water content method, and quantitatively determine the shortcomings of the auto-irrigation functions.

Study Area
The study area is located at the USDA-ARS CPRL near Bushland, Texas (35.2° N, 102.1° W, ~1170 m above mean sea level). The regional climate is classified as semi-arid, with a mean annual precipitation and temperature (2000–2010) of 488 mm and 14.1 °C, respectively. The major soil is a well-drained Pullman silty clay loam (fine, mixed, superactive, thermic Torrertic Paleustoll) [18]. The study area is relatively flat, with a slope of <0.2%. A 4.7 ha irrigated field with a large weighing lysimeter located at its center was selected as the research site. An adjacent research-grade weather station, maintained in accordance with ASCE-EWRI specifications [19], was used for the climate data input. Data collected from the lysimeter field from 2000 to 2010 were processed and used in this study. Crops grown during the study period included cotton (Gossypium hirsutum L.), soybean (Glycine max L.), grain and forage sorghum [Sorghum bicolor (L.) Moench], sunflower (Helianthus annuus L.), and forage corn (Zea mays L.). A linear-move sprinkler irrigation system was used to apply water to the crops. The specific crop management practices are listed in Table 1.

Climate Data Collection and Analysis
Climate data output values (15-min interval) obtained from the research-grade grass reference weather station adjacent to the irrigated lysimeter field were integrated into daily values for use in the model simulations. Quality assurance and quality control (QA/QC) procedures were used to ensure that measured wind speed, air temperature, precipitation, relative humidity, and solar radiation values were within acceptable tolerances. In addition, all climate data were compared to data collected from a collocated, solar-powered weather station of the Texas High Plains ET Network [20]. Correlations between the two datasets were used to fill any missing climatic data of the research weather station.
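As an illustration of the correlation-based gap filling described above, the sketch below fits a linear relation between the two station records on days when both are valid, then predicts the missing days of the primary record. This is a minimal, hypothetical example with synthetic values; it is not the actual QA/QC code used at Bushland.

```python
import numpy as np

def fill_gaps(primary, secondary):
    """Fill missing (NaN) values in the primary record using a linear
    regression against a collocated secondary station."""
    primary = np.asarray(primary, dtype=float)
    secondary = np.asarray(secondary, dtype=float)
    both_valid = ~np.isnan(primary) & ~np.isnan(secondary)
    # Fit primary ~ slope * secondary + intercept on overlapping valid days
    slope, intercept = np.polyfit(secondary[both_valid], primary[both_valid], 1)
    filled = primary.copy()
    fillable = np.isnan(primary) & ~np.isnan(secondary)
    filled[fillable] = slope * secondary[fillable] + intercept
    return filled

# Synthetic daily air temperatures (deg C) with one missing primary value
primary = [12.0, 13.5, np.nan, 15.2]
secondary = [11.8, 13.2, 14.1, 15.0]
print(fill_gaps(primary, secondary))
```

In practice the regression would be built per variable (temperature, wind speed, etc.) over a long overlap period before being applied to gaps.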

Lysimeter, LAI, and Crop Yield Data Collection and Analysis
The lysimeter contains an undisturbed soil monolith collected on site, weighing ~45 Mg including the container mass. The lysimeter surface dimensions are approximately 3 m × 3 m (9 m²), with a depth of 2.3 m [21]. Voltage outputs from load cells (SM-50, Interface, Inc., Scottsdale, Arizona) with 22 kg full-range capacity were measured and recorded by data loggers (CR-7X, Campbell Scientific, Logan, Utah) at 0.5 Hz (2 s) frequency. Reported lysimeter accuracy has ranged from 0.05 mm [22,23] to 0.01 mm. Experienced support technicians and scientists were responsible for routine lysimeter maintenance and for ensuring representativeness of the surrounding field. Load cell voltage outputs were converted to mass using calibration equations, and 5 min means were used to develop a base dataset for subsequent processing [22]. Lysimeter mass (kg) was converted to a mass-equivalent relative lysimeter storage value (mm of water) by dividing it by the surface area of the lysimeter (~9 m²) and the density of water (taken as 1000 kg m⁻³). Lysimeter design and management are further described by Marek et al. [5]. Daily ET was calculated using the following soil water balance equation:

ET = P + I + R + F + ∆SW
where P is precipitation (mm), I is irrigation (mm), R is the sum of runon and runoff (taken as zero due to furrow diking to minimize runoff in this study), F is flux into or out of the volume (mm, taken as negative when exiting the control volume), and ∆SW is the change in soil water content (mm). The lysimeter is equipped with a vacuum drainage system, which collects profile drainage into partitioned tanks suspended from the bottom of the lysimeter soil tank such that drainage does not change the lysimeter mass. As such, F was assigned a value of zero. Other mass-changing events were flagged and accounted for according to data processing and data QA/QC procedures provided by Marek et al. [24]. Missing or non-suitable periods of lysimeter ET data resulting from lysimeter maintenance, calibrations, agronomic activities, and power outages were not used. Crop LAI was measured periodically during the growing season throughout the 10-year period. At the end of the growing season, crop yield was also collected.
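As a concrete sketch of the two calculations described above, the snippet below converts a change in lysimeter mass to a mass-equivalent water depth and evaluates the daily water balance. The function names and the example day are illustrative, not part of the lysimeter processing software.

```python
LYSIMETER_AREA_M2 = 9.0   # ~3 m x 3 m lysimeter surface
WATER_DENSITY = 1000.0    # kg m^-3

def mass_to_mm(mass_kg):
    """Convert a lysimeter mass change (kg) to an equivalent water depth:
    depth (m) = mass / (area * density); multiply by 1000 for mm."""
    return mass_kg / (LYSIMETER_AREA_M2 * WATER_DENSITY) * 1000.0

def daily_et(p_mm, i_mm, r_mm, f_mm, d_sw_mm):
    """Daily ET (mm) from the balance ET = P + I + R + F + dSW, using the
    sign conventions defined in the text (R and F were zero in this study)."""
    return p_mm + i_mm + r_mm + f_mm + d_sw_mm

# A 9 kg mass change on the ~9 m^2 lysimeter corresponds to 1 mm of water
storage_change = mass_to_mm(9.0)           # -> 1.0 mm
# Hypothetical day: 25.4 mm irrigation, no rain, 6 mm net storage decline
print(daily_et(0.0, 25.4, 0.0, 0.0, 6.0))  # -> 31.4 mm
```

The ~0.11 mm per kilogram resolution implied by the 9 m² surface is what makes the lysimeter's sub-0.05 mm accuracy possible.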

SWAT Single HRU Setup and Calibration
The SWAT model is a physically based, continuous, semi-distributed, watershed-scale model with spatially distributed parameters [1]. Primary model inputs include terrain, land use, soil, weather, and management practices [25]. The SWAT auto-irrigation function using the soil water content method triggers an irrigation event when the total soil water in the profile falls below field capacity by more than the user-defined SWD threshold. In this study, ArcSWAT 2012 (version 2012.10_0.7) was used. Although SWAT is commonly used for watershed-scale studies, many projects have focused on field-scale modeling. Recently, a single hydrologic response unit (HRU) method was described in detail by Moloney et al. [26] and Cibin et al. [27]. The SWAT model divides a watershed into a number of HRUs and aggregates them into subbasins. An HRU is the basic computational unit in the SWAT model and contains unique land use, soil type, and slope characteristics. An HRU, therefore, serves as a good representation of field conditions. Because the SWAT model can be set up using a single HRU, it is also useful for field-scale assessment. This emerging single-HRU method is a flexible, time-saving approach for field-scale simulations.
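The trigger logic of the soil water content method can be sketched as follows. This is a simplified, stand-alone illustration rather than SWAT source code, and the field capacity and soil water values are hypothetical.

```python
def soil_water_deficit(field_capacity_mm, soil_water_mm):
    """SWD: how far total profile soil water sits below field capacity (mm)."""
    return max(field_capacity_mm - soil_water_mm, 0.0)

def auto_irrigation_depth(field_capacity_mm, soil_water_mm,
                          swd_threshold_mm, application_mm=25.4):
    """Apply a fixed irrigation depth once the deficit exceeds the
    user-defined SWD threshold; otherwise apply nothing."""
    if soil_water_deficit(field_capacity_mm, soil_water_mm) > swd_threshold_mm:
        return application_mm
    return 0.0

# Hypothetical profile: field capacity 400 mm, SWD threshold 36 mm
print(auto_irrigation_depth(400.0, 360.0, 36.0))  # deficit 40 mm -> 25.4
print(auto_irrigation_depth(400.0, 380.0, 36.0))  # deficit 20 mm -> 0.0
```

Note that nothing in this rule consults the crop calendar, which is the behavior examined in the scenarios below.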
The irrigated lysimeter field was set up as one HRU using the single-HRU method. A single-HRU management file was created to reflect actual agronomic practices performed on their respective dates of operation, including tillage, planting, fertilization, irrigation, and harvesting. Irrigations on the lysimeter field throughout the study period were scheduled to satisfy full irrigation requirements relative to soil water in the root zone (to a depth of 1.5 m), as determined by neutron probe measurements of soil water. The soil water deficit thresholds used to schedule the actual irrigations ranged from approximately 10 mm to 100 mm for the various crops at different growth stages. Under the baseline scenario, the actual irrigation practices were input into the single-HRU management file. The soil parameters and values used in this study are shown in Table 2. A SWAT model initially calibrated for ET only on the irrigated lysimeter field by Marek et al. [5] was used and further calibrated for LAI and crop yield. The calibrated hydrologic parameters are listed in Table 3. The SWAT model simulation was structured as an 11-year (2000–2010) run. The first year was used as the model warm-up period, and the remaining years were divided into calibration (2001–2005) and validation (2006–2010) periods.
The measured field data (e.g., climate, management practices, and maximum LAI for 2000–2010) were manually input into the SWAT model. In addition, important hydrologic parameters and initial state variables were taken from the published literature [5]. The crop LAI development-related parameters were manually adjusted to obtain the best fit between simulated LAI and the observed data (Table 4). Finally, crop yield parameters were manually adjusted. This approach involved year-by-year adjustment of crop growth parameters in the SWAT plant database and provided the most accurate simulation of crop LAI, yield, and ET for subsequent evaluation of the auto-irrigation functions in SWAT.

Auto-irrigation Scenario Design and Assessment
Eight auto-irrigation scenarios were developed using different soil water deficit thresholds assigned by soil layer: SWD triggers of 10, 20, and 36 mm for the first soil layer; 40, 50, 100, and 158 mm for the second; and 337 mm for the third. In this study, the frequencies and amounts of actual irrigations were known. Such detailed information is generally not available for modeling studies, which is the primary reason that the auto-irrigation function is used in many SWAT studies. However, parameterization of the auto-irrigation scenarios in this study assumed no knowledge of the frequencies and amounts of actual irrigation. In this way, the effects of the auto-irrigation function under various SWD threshold triggers on model performance could be quantitatively determined. The results could have implications for previous studies using the SWAT auto-irrigation functions. The SWAT model performance for predicting daily ET and crop yield was evaluated under the baseline and auto-irrigation scenarios using two statistical measures: percent bias (PBIAS) and Nash-Sutcliffe efficiency (NSE) [29]. The PBIAS statistic measures the tendency of simulated values to be, on average, larger or smaller than the observed values, expressed as a percentage with an optimal value of zero; positive and negative values indicate overestimation and underestimation, respectively. The NSE statistic represents the relative magnitude of the residual variance compared to the variance of the measured data. NSE values range from −∞ to 1.0, with 1.0 being the optimal value.
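The two statistics can be computed as below. Note that the sign convention for PBIAS follows this paper's definition (positive = overestimation); some references define it with the opposite sign. The example series is synthetic.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual variance
    to the variance of the observations (optimal value 1.0)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def pbias(obs, sim):
    """Percent bias, positive when the model overestimates on average
    (sign convention as defined in this study)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

# Small synthetic check with near-perfect simulations
obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.1, 1.9, 3.2, 4.0]
print(round(nse(obs, sim), 3), round(pbias(obs, sim), 1))  # -> 0.988 2.0
```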

Evaluation of the SWAT Single-HRU Method for LAI and ET Simulations
The observed maximum LAI values were input into the crop database for each crop in each year. Seasonally measured LAI values were compared to daily simulated LAI for each year. The graphical comparison indicated that, overall, SWAT-simulated LAI matched observed data well (Figure 1). However, SWAT clearly underestimated LAI for forage corn in 2006 (Figure 1f). A large variation in measured LAI for forage corn was also observed in 2006. In that year, the forage corn was damaged by an undetermined plant virus or herbicide and replanted to a short-season variety in early July. The simulated LAI values of cotton showed an extended tail during the senescence period (Figure 1a,b,h,j), which may result from the lack of harvest aid (defoliant) applications in the SWAT model. Producers commonly apply harvest aids about three weeks prior to expected cotton harvest to facilitate boll opening and maturity, particularly in wetter-than-normal years. Crop LAI is directly proportional to the transpiration component of ET, so a reasonable partitioning of evaporation and transpiration benefits the simulation of plant growth and the water balance [30].

Following the calibration of LAI, the model performance for ET simulation was evaluated. The NSE and PBIAS values obtained during the model calibration (2001–2005) and validation (2006–2010) periods for the simulation of daily ET from the irrigated lysimeter field were 0.85 and 0.80, and 7.2% and 2.4%, respectively (Table 5). The model performance statistics were deemed "very good" during both calibration and validation periods, according to criteria outlined by Moriasi et al. [31]. Also, NSE model performance improved as compared to Marek et al. [5] during both the calibration (0.85 vs. 0.67) and validation (0.80 vs. 0.78) periods, due to the further calibration of LAI. The SWAT single-HRU method was, therefore, found to be effective in simulating both LAI and ET.
However, simulated ET was slightly higher than observed ET for cotton in all years during the senescence periods (Figures 2 and 3). This overestimation of ET coincided with the overestimation of LAI during the senescence period of cotton.

Comparison of SWAT-Simulated Crop Yields with Field Observations
Some studies have suggested using crop yield as a surrogate for the calibration of ET in SWAT, as crop yield is proportional to the ET component [12,32]. Therefore, after calibrating the model for LAI simulation, the SWAT model was further calibrated for crop yield simulation.


Simulated ET, Crop Response, and Irrigation Scheduling under SWAT Auto-Irrigation Scenarios

Model performance for yield simulations decreased considerably relative to the baseline scenario (Figure 4b). For instance, the NSE and PBIAS values were 0.67 and 40% (baseline NSE and PBIAS: 0.99 and 1.3%) under the auto-irrigation scenario with a SWD threshold of 36 mm, which achieved the best model performance for ET simulation among the auto-irrigation scenarios. Notably, the auto-irrigation scenario simulated substantially higher yields of cotton (>50%), forage corn (70%), and forage sorghum (40%), as compared to the observed data (Figure 4b). As for the LAI values, clear differences were found for cotton in 2002, 2008, and 2010 under the auto-irrigation scenario with a SWD threshold of 36 mm (Figure 6). Cotton is a perennial shrub that is cultivated as an annual cash crop in the U.S. [35], and vegetative growth continued under well-irrigated conditions. The observed maximum LAI was used as the input for each crop in this study, which was necessary for a reasonable match of LAI for the other crops under the auto-irrigation scenarios.
Interestingly, the SWAT model achieved a good range of model performance in ET simulations under SWD thresholds ranging from 36 mm to 158 mm. This suggests that a large uncertainty in the simulation of hydrologic parameters may exist when using the soil water content method of the auto-irrigation function in the SWAT model. Further analysis of the relative amounts of auto-irrigation associated with each SWD trigger is shown in Figure 7. The irrigation amount was set to 25.4 mm (1 inch) per trigger event for all simulations. Therefore, the lower SWD trigger values (10 mm and 20 mm) resulted in frequent irrigations (>300 times vs.
198 times for the baseline scenario) that exceeded field capacity, resulting in gross overestimation of total irrigation as compared to actual irrigation (Figure 7 and Table 6). Conversely, larger SWD triggers (>40 mm) resulted in corresponding decreases in simulated irrigation frequency (<197 times), due to the delay of the initial irrigation, as a greater amount of soil water depletion was allowed before an irrigation event was triggered (Table 6). In general, actual total seasonal irrigation was best approximated by the 36 and 40 mm SWD trigger scenarios. However, total irrigation amounts associated with the majority of SWD thresholds were larger than the actual irrigation amounts for cotton and soybean (Figure 7). Although the simulated irrigation under different SWD triggers often bracketed actual seasonal irrigation, these simulated values were biased. Use of the soil water content option of the auto-irrigation function resulted in excess irrigation due to the application of irrigation water after crop maturity and in the non-growing season. In essence, water is applied strictly based on the SWD trigger and does not terminate following the kill operation. This is a critical flaw in the soil water content method of the SWAT auto-irrigation function. The low NSE values for the 10 mm SWD trigger during both calibration and validation periods were caused by excessive irrigation, as the SWD trigger was less than the fixed irrigation amount of 25.4 mm. However, the low NSE value for the 337 mm SWD trigger in the calibration period was likely due to the minimal irrigation applied to the 2004 soybean and 2005 grain sorghum crops, as both years received relatively high amounts of precipitation. Compared to the baseline scenario, the 36 mm and 40 mm SWD triggers produced the most reasonable irrigation frequencies (Table 6). However, differences in irrigation frequency for the 2001 and 2010 cotton and 2003 soybean crops were evident.
Table 6.
Irrigation frequencies (number of times) of actual management and of simulated soil water deficit triggers using the auto-irrigation functions in SWAT.

Crop (year)            Actual   10 mm *   20 mm   36 mm   40 mm   50 mm   100 mm   158 mm   337 mm
2001 cotton              18       57       34      26      25      24       21       18       10
2002 cotton              23       70       38      25      24      23       19       17       12
2003 soybean             31       69       36      24      24      23       20       18        7
2004 soybean             16       54       27      17      16      14       12       10        1
2005 grain sorghum       14       57       22      12      12      11        8        5        5
2006 forage corn         27       68       37      24      23      20       15       13        7
2007 forage sorghum      18       52       20      14      13      13       10        8        5
2008 cotton              20       68       34      23      22      20       14       11        6
2009 sunflower           17       57       25      16      16      15       13       10       17
2010 cotton              14       66       33      22      22      21       19       18       70
Total frequencies       198      618      306     203     197     184      151      128       10

Note: * Soil water deficit threshold (mm H2O).
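The over-application mechanism described above can be illustrated with a toy single-bucket soil model (not SWAT internals; all numbers are hypothetical). With a 10 mm trigger and a fixed 25.4 mm application, every event overshoots field capacity and the surplus is wasted, while a 36 mm trigger stays within the bucket:

```python
def simulate_triggers(threshold_mm, days=60, fc_mm=400.0,
                      application_mm=25.4, et_mm_per_day=8.0):
    """Toy bucket: daily ET depletes storage; a fixed irrigation depth is
    applied whenever the soil water deficit exceeds the threshold. Any water
    above field capacity is counted as lost (runoff/deep percolation)."""
    storage, events, applied, lost = fc_mm, 0, 0.0, 0.0
    for _ in range(days):
        storage -= et_mm_per_day                 # crop water use
        if fc_mm - storage > threshold_mm:       # SWD trigger check
            storage += application_mm
            events += 1
            applied += application_mm
        if storage > fc_mm:                      # overshoot above FC is wasted
            lost += storage - fc_mm
            storage = fc_mm
    return events, applied, lost

for threshold in (10.0, 36.0):
    events, applied, lost = simulate_triggers(threshold)
    print(f"{threshold:5.1f} mm trigger: {events} events, "
          f"{applied:.1f} mm applied, {lost:.1f} mm lost")
```

Because the rule also ignores the crop calendar, running the same loop past a kill date would keep applying water, which is the second flaw noted above.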

Conclusions
The accurate simulation of water balance processes and their subsequent impacts on plant growth depends on quality environmental and management inputs. Detailed irrigation management information is crucial for a quantitative evaluation of changes in hydrologic components and crop response under various auto-irrigation scenarios. Several studies have alluded to potential deficiencies of the SWAT auto-irrigation functions in simulating realistic irrigation conditions. Results from this study offer a quantitative analysis of the shortcomings of the auto-irrigation functions using long-term observed data. Although the auto-irrigation functions achieved overall satisfactory agreement for ET simulation, a noticeable decrease in NSE (>0.06) was observed as compared to the baseline scenario. Considerable differences in irrigation amounts and frequencies, as well as crop yields, resulted from the range of SWD triggers, demonstrating that the auto-irrigation functions did not adequately represent field practices. Two major reasons for these results are the continuation of auto-irrigation after crop maturity and excessive irrigation when SWD triggers were less than the static irrigation amount. It is also worth noting that the irrigation trigger of the soil water content method is specified in mm of soil water deficit rather than as a percentage, as in the plant water demand option, which can easily be overlooked. Findings from this study provide useful information about the potential deficiencies of the SWAT auto-irrigation function for users, modelers, and developers. This study also emphasizes the need for the SWAT auto-irrigation functions to better simulate the water balance and crop growth in irrigated regions. We suggest the development of a more sensitive and intuitive auto-irrigation algorithm representative of actual irrigation management, which would greatly improve simulations in study areas containing significant irrigated acreage.
Management allowed depletion (MAD), a common conceptual framework for irrigation scheduling, uses a percentage of plant-available water to trigger irrigation, rather than a soil water deficit threshold based only on soil characteristics, and may be useful for inclusion in SWAT. The development and testing of MAD-based irrigation algorithms will be the focus of a future study.
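As a sketch of what a MAD-style trigger could look like, the function below fires when depletion exceeds a user-set fraction of the plant-available water held between field capacity (FC) and wilting point (WP). This is an illustrative proposal consistent with the suggestion above, not an existing SWAT option, and all values are hypothetical.

```python
def mad_trigger(soil_water_mm, fc_mm, wp_mm, mad_fraction=0.5):
    """Management allowed depletion: trigger irrigation when depletion below
    field capacity exceeds the allowed fraction of plant-available water."""
    plant_available_water = fc_mm - wp_mm     # PAW between FC and WP
    depletion = fc_mm - soil_water_mm         # current depletion below FC
    return depletion > mad_fraction * plant_available_water

# FC = 400 mm, WP = 250 mm -> PAW = 150 mm; 50% MAD fires beyond 75 mm depletion
print(mad_trigger(330.0, 400.0, 250.0))  # depletion 70 mm -> False
print(mad_trigger(320.0, 400.0, 250.0))  # depletion 80 mm -> True
```

Tying the trigger to plant-available water rather than a fixed depth lets the same MAD fraction adapt across soils and rooting depths.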