1. Introduction
Hydrologic models such as the Soil and Water Assessment Tool (SWAT) [1], Agricultural Policy/Environmental eXtender (APEX) [2], and the European Hydrological System Model MIKE SHE [3] are widely used for assessing the impacts of best management practices at various spatial scales. Proper calibration and validation of models using observed data are required for meaningful evaluation of outputs and subsequent use in decision making. In most studies, hydrologic models were calibrated for streamflow using measured data at the watershed outlet [4]. A limited number of studies have used measured evapotranspiration (ET) for calibrating hydrologic models [5,6,7]. In an assessment of 257 process-based watershed modeling papers published between 1992 and 2010, Wellen et al. [4] found that models were calibrated against measured streamflow in 96% of studies and against measured surface runoff in the remaining 4%; none of the studies evaluated the ability of the model to simulate any other hydrologic parameter. Researchers have cautioned that model calibrations based on a single measured hydrologic parameter can lead to incorrect conclusions, as errors in the simulation of one hydrologic parameter can be compensated by errors of the opposite sign in the simulation of another [8,9]. Modelers have therefore emphasized the need to use other observed hydrologic data in addition to streamflow for calibrating hydrologic models. Among hydrologic parameters, ET commonly constitutes the largest component of the water balance, particularly in semi-arid regions.
Recently, measured long-term lysimeter ET data from the USDA-ARS Conservation and Production Research Laboratory (CPRL) at Bushland, Texas, were used for the calibration and validation of the SWAT model for row crops under irrigated and dryland management [5,6]. Detailed weather, plant growth, water balance, agronomy, and management records from the weighing lysimeter fields were also used. The availability of such detailed field-scale data for model calibration is essential for improving model performance. Additionally, long-term observed irrigation data present an opportunity for quantitative analysis of the performance of the auto-irrigation function in hydrologic models. In this study, the open-source SWAT model was used, and the source code was obtained from [10].
The auto-irrigation function in SWAT is based upon one of two water stress trigger (WSTRS_ID) approaches: (1) plant water demand, or (2) soil water content. The plant water demand trigger applies water whenever a user-defined reduction in plant growth (AUTO_WSTRS) occurs due to water stress. However, actual field irrigation is commonly scheduled in response to changes in soil water content in the plant rooting zone. This management approach is most closely represented in SWAT by the soil water content approach, in which irrigation is triggered by a user-defined soil water deficit (SWD) threshold (AUTO_WSTRS), calculated as the difference between field capacity and soil water content in mm of water. Lower SWD thresholds generally result in more frequent irrigations, as soil water content reaches the threshold relatively quickly; larger SWD thresholds lengthen the interval between irrigations, as more time is needed for the additional allowed depletion of soil profile water. For the same reasons, the first irrigation of the season generally occurs earlier for smaller SWD values and later for larger ones. Once triggered, irrigation is applied at a user-defined amount (IRR_MX) [11]. If no value is provided, SWAT assigns a default irrigation amount of 25.4 mm (1 inch). This default is not reported in the user manual or theoretical documentation but can be identified from the source code. Together, the default irrigation amount and the user-defined SWD threshold establish a pseudo field capacity value used to trigger irrigations.
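To make this trigger logic concrete, the following is a minimal sketch of the soil water content approach described above, assuming a single-layer soil water balance tracked in mm. The parameter names mirror the SWAT inputs AUTO_WSTRS and IRR_MX, but the function itself is a simplified illustration, not the SWAT source code.

```python
# Simplified sketch of the SWAT soil-water-content irrigation trigger.
# Assumes a single-layer water balance in mm; parameter names mirror
# the SWAT inputs AUTO_WSTRS and IRR_MX, but this is an illustration,
# not the SWAT source code.

DEFAULT_IRR_MM = 25.4  # undocumented default used when IRR_MX is not set


def auto_irrigation_depth(field_capacity_mm: float,
                          soil_water_mm: float,
                          auto_wstrs_mm: float,
                          irr_mx_mm: float | None = None) -> float:
    """Return the irrigation depth (mm) applied on a given day.

    Irrigation is triggered when the soil water deficit (field
    capacity minus current soil water) reaches the user-defined
    threshold AUTO_WSTRS; once triggered, the fixed depth IRR_MX
    is applied (25.4 mm if IRR_MX is unspecified).
    """
    deficit_mm = field_capacity_mm - soil_water_mm
    if deficit_mm >= auto_wstrs_mm:
        return irr_mx_mm if irr_mx_mm is not None else DEFAULT_IRR_MM
    return 0.0
```

Note that when the AUTO_WSTRS threshold is smaller than the applied depth, each application exceeds the current deficit, consistent with the excessive irrigation quantified in this study.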
In semi-arid/arid regions, it is not always desirable or even possible to irrigate to field capacity for producers with limited irrigation system capacities. In most cases, producers use a percentage of plant available water (PAW) depletion to trigger irrigations. This managed allowable depletion (MAD) approach allows for more effective use of limited water resources and is dependent upon the respective soil and crop characteristics; a sketch contrasting this trigger with the SWD trigger follows this paragraph. The difference between this producer irrigation paradigm and the current auto-irrigation function in SWAT may limit the accurate simulation of irrigation and other water balance parameters. Some studies have qualitatively reported that the auto-irrigation functions in SWAT were unable to appropriately reproduce the hydrologic cycle in intensively irrigated watersheds [12,13,14,15]. Using SWAT2005, Dechmi et al. [13] noted that the default auto-irrigation functions returned excess irrigation water to the irrigation source rather than accounting for it in the water balance. Githui et al. [16] also found that monthly patterns of irrigation intervals simulated using the SWAT auto-irrigation function did not match irrigation data estimated for 2009 and 2010 from an irrigated catchment in southeast Australia. Researchers have also recommended growth stage-specific irrigation techniques to create favorable conditions for crop growth [17]. However, the current auto-irrigation functions in SWAT cannot simulate differential irrigation of crops based on growth stage. To date, no known study has reported the performance of the SWAT auto-irrigation functions for simulations of irrigation, ET, and crop responses using high-quality, long-term field observations.
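To contrast the MAD approach with the fixed SWD trigger, the following is a minimal sketch of a MAD-style trigger, assuming that plant available water is defined as field capacity minus wilting point and that applications are capped by irrigation system capacity; all names are illustrative and none correspond to existing SWAT inputs.

```python
# Illustrative sketch of a managed allowable depletion (MAD) trigger.
# Assumes a single-layer water balance in mm with plant available
# water (PAW) = field capacity - wilting point. All names are
# hypothetical; none correspond to existing SWAT inputs.

def mad_irrigation_depth(field_capacity_mm: float,
                         wilting_point_mm: float,
                         soil_water_mm: float,
                         mad_fraction: float,
                         system_capacity_mm: float) -> float:
    """Return the irrigation depth (mm) applied on a given day.

    Irrigation is triggered when the depleted fraction of PAW
    reaches the MAD fraction; the application is capped by the
    irrigation system capacity rather than refilling to field
    capacity, reflecting limited-supply management.
    """
    paw_mm = field_capacity_mm - wilting_point_mm
    depletion_mm = field_capacity_mm - soil_water_mm
    if depletion_mm >= mad_fraction * paw_mm:
        return min(depletion_mm, system_capacity_mm)
    return 0.0
```

Unlike the fixed SWD threshold, this trigger scales with both soil and crop characteristics through PAW and the chosen MAD fraction.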
In this study, data collected from a lysimeter located in an irrigated field at the USDA-ARS CPRL at Bushland, Texas, were used to evaluate the SWAT model. Observed data included continuous measurements of daily ET, seasonal leaf area index (LAI), annual crop yield, climate data, field operations, and detailed irrigation management records from 2000 to 2010. The primary goal of this study was to assess the efficacy of the SWAT auto-irrigation function in simulating irrigation management strategies typical of the Texas High Plains region, while simultaneously simulating reasonable values for crop yield, LAI, and ET. Specifically, the objectives of this study were to: (1) calibrate the SWAT model for ET, LAI, and crop yield using measured values from a weighing lysimeter field; and (2) compare measured ET, irrigation, LAI, and crop yield to values simulated by the SWAT auto-irrigation function under different SWD thresholds with the soil water content trigger, and quantitatively determine the shortcomings of the auto-irrigation functions.
4. Conclusions
The accurate simulation of water balance processes and their subsequent impacts on plant growth depends on quality environmental and management inputs. Detailed irrigation management information is crucial for a quantitative evaluation of changes in hydrologic components and crop response under various auto-irrigation scenarios. Several studies have alluded to potential deficiencies of the SWAT auto-irrigation functions in simulating realistic irrigation conditions. Results from this study offer a quantitative analysis of the shortcomings of the auto-irrigation functions using long-term observed data. Although the auto-irrigation functions achieved overall satisfactory agreement for ET simulation, a noticeable decrease in NSE (>0.06) was observed compared to the baseline scenario. Considerable differences in irrigation amounts and frequencies, as well as crop yields, resulted from the range of SWD triggers, demonstrating that the auto-irrigation functions did not adequately represent field practices. Two major reasons for these results are the continuation of auto-irrigation after crop maturity and excessive irrigation when SWD triggers were smaller than the static irrigation amount. It is also worth noting that the irrigation trigger for the soil water content method is specified in mm of soil water deficit rather than as a percentage, as in the plant water demand option, a distinction that can easily be overlooked.
Findings from this study provide useful information about the potential deficiencies of the SWAT auto-irrigation function for users, modelers, and developers, and emphasize the need for the auto-irrigation functions to better simulate the water balance and crop growth in irrigated regions. We suggest the development of a more sensitive and intuitive auto-irrigation algorithm representative of actual irrigation management, which would greatly improve simulations of study areas containing significant irrigated acreage. Managed allowable depletion (MAD), a common conceptual framework for irrigation scheduling, triggers irrigation using a percentage of plant available water rather than a soil water deficit threshold based only on soil characteristics, and may be useful for inclusion in SWAT. The development and testing of MAD-based irrigation algorithms will be the focus of a future study.