Breeding for Resilience to Water Deficit and Its Predicted Effect on Forage Mass in Tall Fescue

Resilience is increasingly part of the discussion on climate change, yet there is a lack of breeding for resilience per se. This experiment examined the genetic parameters of a novel, direct measure of resilience to water deficit in tall fescue (Lolium arundinaceum (Schreb.) Darbysh.). Heritability, genetic correlations, and predicted gain from selection were estimated for average productivity, resilience, and stability based on the forage mass of a tall fescue half-sib population grown under a line-source irrigation system with five different water levels (WL). Resilience was both measurable and moderately heritable (h² = 0.43), with gains of 2.7 to 3.1% per cycle of selection predicted. Furthermore, resilience was not correlated with average response over environments and was negatively correlated with stability, indicating that it is not a measure of responsiveness to more favorable environments. Genetic correlations among WL ranged from 0.87 to 0.56; in contrast, resilience was either not or slightly negatively genetically correlated with WL, except for moderate correlations with the 'crisis' WL. Thus, breeding for improved resilience was predicted to have little effect on forage mass at any given individual water deficit environment. Overall, results indicated that this novel metric could facilitate breeding for improved resilience per se to water deficit environments.


Introduction
The concept of 'resilience' is increasingly becoming part of the discussion on climate change, leading to what some authors have termed the "renaissance of resilience" [1]. One report has even suggested that, due to climate change, the future of any given species is a dichotomy of "resilience or decline" [2]. Resilience for a biological species has been defined as the ability to withstand a short-term crisis or perturbation, such as a drought, by absorbing the perturbation and retaining the same function [3,4]; in a broad sense, it comprises two possible components: (1) the ability to withstand a crisis and not deviate during the perturbation (i.e., resistance); and (2) the ability to recover from a crisis, including the speed of that recovery (i.e., recovery) [5].
Associated with the "renaissance of resilience" are increased academic interest and a growing number of organizations attempting to integrate an understanding of 'resilience' into their work [1]. Resilience as it relates to forage systems was reviewed by Picasso et al. [3] and Tracy et al. [6]. In the synthesis paper from the 2017 symposium "Resiliency in Forage and Grazinglands" of the Crop Science Society of America, it was concluded that long-term research projects are needed to measure and promote resilience in forages [6]. Many current papers focus on the functional plant diversity, ecology, and biotic interactions of existing forage species and genotypes to develop grazing lands resilient to a drier future [6][7][8]. A few go further and identify adaptive strategies, including types of drought tolerance, necessary for the development of future resilient forage cultivars [7,[9][10][11][12]. However, to date, there are no reports of attempts to breed for increased resilience per se, either as increased resistance to or recovery from perturbation. This is due, in part, to the fact that resilience and its cousin, stability (minimal variability over time under normal conditions), are not well understood, are sometimes confused, and are not straightforward to evaluate [13]. Additionally, there has been a lack of a measurable metric of resilience per se that could be adapted to several different species or climate change possibilities. Recently, however, Picasso et al. [3] suggested that resilience per se could be quantified as the proportion of average productivity across environments that is achieved in a 'crisis' environment, where the crisis environment is one with significantly reduced yields for most entries due to an extreme climatic event such as drought. They reported that this metric addresses the 'resistance' component of resilience and successfully identified different alfalfa genotypes and underlying mechanisms as compared to a common stability measure [3].
Selection for resilience to climate change per se implies the evaluation and identification of superior genotypes in less-than-ideal environments, which are often associated with increased environmental variance. The choice of selection environment is often debated and frequently unresolved in plant breeding [14]. In many cases, breeders use a selection environment with reduced variability to indirectly select for performance in a target environment often characterized by greater variability. Given an appropriate family structure that allows for the measurement of genetic variance and genetic correlation, the relative efficiency of indirect versus direct selection can be predicted [15]. The relative efficiency of indirect selection for traits in forage breeding has often been reported; for instance, Conaghan et al. [16] reported that selection for fresh forage mass could successfully improve dry forage mass in perennial ryegrass (Lolium perenne L.). Furthermore, the theoretical framework of indirect selection for traits can be extended to determine the correlated response for a single trait measured in two different environments by treating the trait as different and unique in each environment [14]. This approach has been widely used to evaluate indirect selection response between breeding and target environments, including high- versus low-yield, sward versus spaced-plant, monoculture versus mixture, and laboratory or greenhouse versus field environments [14,[17][18][19][20][21][22][23].
Therefore, the objectives of this research were: (1) to estimate and compare the genetic parameters of tall fescue forage mass at each of five environments characterized by varying levels of water deficit, and of the Picasso et al. [3] resilience metric as calculated from those environments; and (2) to predict the relative efficiency of indirect selection (forage mass at any given water deficit) as opposed to direct selection for the resilience-to-water-deficit per se metric. Elucidation of the genetic control of forage production under water deficit environments, as reported here, is critical to forage breeding as resilience to climate change becomes an increasingly important issue.

Plant Materials and Harvests
In 2000, 28 half-sib families (HSF) of endophyte-free tall fescue were planted in a line-source sprinkler experiment [24,25] to evaluate the effects of deficit irrigation on the expression of genetic variation and heritability estimates. The HSF were randomly selected from 130 HSF comprising an early-flowering, broad-based breeding population resulting from one cycle of selection for soft lax leaves and visual forage vigor. The 28 HSF maternally traced to the following cultivar sources: 'Advance', 'Alta', 'AU-Triumph', 'Cajun', 'Cattle Club', 'Fawn', 'Forager', 'Martin', 'Mozark', and 'Phyter'. Only 28 of the 130 HSF were used due to limitations in the line-source design. 'Kentucky-31', both as endophyte-free (E−) and endophyte-infected (E+), and 'Fawn' were included as checks. The experiment was located at the Utah State University Evans Experimental Farm, approximately 2 km south of Logan, UT, USA (41°45′ N, 111°8′ W; 1350 m above sea level). Soil at the site was a Nibley silty clay loam (fine, mixed, mesic Aquic Argiustolls).
Experimental plots were arranged as a modified split-plot design with four replications, two on each side of a line-source irrigation line, and five water levels applied as non-randomized strips (i.e., as a whole plot) (Figure 1). Plots (2.0-m long × 1.1-m wide, 2.2 m²) were planted in May 2000 with a five-row (22-cm row spacing) Wintersteiger cone seeder (Wintersteiger Corporation, Salt Lake City, UT, USA) at a depth of 1.3 cm and a seeding rate of 135 pure live seeds per linear meter of row (i.e., 12.3 kg ha⁻¹). The plots were oriented perpendicular on both sides of a line-source irrigation system, in five successively more distal 2.0-m long ranges separated by 1-m mowed alleys. Each range was designated as one of five water levels (WL), with the WL nearest the irrigation line designated WL1 and the most distal range WL5 (Figure 1). Plots were irrigated uniformly as needed during the establishment year (2000), and 56 kg N ha⁻¹ was applied in midsummer and again in the fall.
Irrigation water plus rainfall was monitored, via rain gauges with an oil overlay to prevent evaporation, from May through September in 2001, 2002, and 2003. The amount of water applied to each WL is presented as mm per week (Table 1). Reference evapotranspiration (ETo) values for these time periods were obtained from the Utah Climate Center, using the Hargreaves equation, at a weather monitoring station approximately 3.2 km from the Evans research farm (USC00425194) and compared to the water applied at each WL (Table 1). The resulting average percent ET replacement over the three growing seasons was 105, 84, 59, 40, and 18% for WL 1, 2, 3, 4, and 5, respectively (Table 1), assuming 100% irrigation efficiency. Fertilizer applications of 56 kg N ha⁻¹ of ammonium nitrate were made prior to the first harvest and after Harvests 2 and 4 each year. Following establishment in 2000, plots were harvested to an 8-cm stubble height with a Swift Current sickle bar harvester (Swift Machining & Welding Ltd., Swift Current, SK, Canada) (Figure 1). The first harvest was conducted when growth in WLs 1 and 2 was visually estimated to be between the vegetative and elongation stages [26]. Forage samples used to estimate percent dry matter were dried at 60 °C in a forced-air oven to constant weight, and the forage mass (Mg ha⁻¹) of each plot was determined.
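The percent ET replacement described above is a simple ratio of water applied to reference evapotranspiration. A minimal sketch in Python, assuming weekly totals and 100% irrigation efficiency (function name and values are illustrative, not from the study):

```python
def pct_et_replacement(applied_mm, eto_mm):
    """Percent of reference evapotranspiration (ETo) replaced by
    irrigation plus rainfall, assuming 100% irrigation efficiency."""
    return 100.0 * applied_mm / eto_mm

# Illustrative weekly values (mm): water applied at a WL vs. reference ETo
wl1 = pct_et_replacement(42.0, 40.0)  # slight over-replacement
wl5 = pct_et_replacement(7.2, 40.0)   # severe deficit
```

Values above 100% indicate over-replacement (as at WL1), and small values indicate severe deficit (as at WL5).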

Statistical and Genetic Analysis
Forage mass was analyzed within WL, either including 'harvests' in the model or using the seasonal total, considering replications and HSF as random effects, and harvests (when appropriate) and years as fixed effects. A preliminary analysis using the R-based software tool DeltaGen [27] was conducted to determine which WL exhibited significant HSF variance. Those WL that did not exhibit significant HSF variance were dropped from the average productivity, resilience, stability, and genetic correlation analyses. Average productivity (P) and resilience (R) statistics were calculated as described by Picasso et al. [3], with the modification that a coefficient was calculated for each year, HSF, replicate, and harvest (for the model that included harvests) combination across WL (i.e., treating each WL as a different environment), as shown:

P_ijrh = (1/n) Σ_l Y_ijlrh

R_ijrh = Yc_ijrh / P_ijrh

where Y_ijlrh is the yield of HSF j in year i for WL l, replicate r, and harvest h; n is the number of WL used in the calculation; and Yc_ijrh is the yield in the crisis environment of HSF j in year i for replicate r and harvest h. Thus, resilience is the proportion of the average productivity that is achieved in a "crisis" environment [3], with the WL of greatest ETo replacement deficit that exhibited significant HSF variance considered the crisis environment (i.e., WL3 for the across-harvests analysis and WL5 for the seasonal total). Because of the limited number of environments (i.e., WL), the crisis environment was included in the average productivity. Parametric stability statistics, including Plaisted and Peterson's mean variance component (θi) and Kang's rank-sum (Kr) (for a description of each, see Pour-Aboughadareh et al. [28]), were also estimated for each HSF, year, replicate, and harvest (for the model that included harvests) combination across WL environments using R v4.0.3 [29] and the code from the R package STABILITYSOFT [28].
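The productivity and resilience statistics described above can be sketched for a single HSF, year, replicate, and harvest combination. A hedged illustration in Python (the yields are invented, not from the study):

```python
def productivity(yields):
    """Average productivity P: mean yield across the WL environments
    (the crisis environment is included in the mean, as in the study)."""
    return sum(yields) / len(yields)

def resilience(yields, crisis_yield):
    """Resilience R: proportion of average productivity achieved in
    the crisis environment (Picasso et al. [3])."""
    return crisis_yield / productivity(yields)

# Illustrative yields (Mg ha^-1) at WL1..WL3 for the across-harvests
# model, with WL3 treated as the crisis environment.
y = [4.0, 3.2, 1.8]
P = productivity(y)       # mean across the three WL
R = resilience(y, y[-1])  # fraction of P retained at the crisis WL
```

With these numbers, P = 3.0 and R = 0.6, i.e., this family retains 60% of its average productivity under the crisis water level.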
Additive genetic variances (σ²A), narrow-sense heritabilities (h²), BLUP values, and additive genetic correlations (rA) for forage mass at each WL, and for average productivity, resilience, and stability, were estimated on a plot-mean basis using DeltaGen [27], assuming the variance among HSF was equivalent to (1/4)σ²A [30]. Heritability for forage mass within each WL, and for productivity, resilience, and stability, was computed with harvest in the model as:

h² = σ²F / [σ²F + σ²FH/h + σ²FY/y + σ²FHY/(hy) + σ²e/(hyr)]

or from the seasonal total as:

h² = σ²F / [σ²F + σ²FY/y + σ²e/(yr)]

where σ²F = HSF variance, σ²FH = HSF × harvest variance, σ²FY = HSF × year variance, σ²FHY = HSF × harvest × year variance, σ²e = residual error variance, and h, y, and r equal the number of harvests, years, and replicates, respectively. Predicted change from direct selection on forage mass at any single WL, or on average productivity, stability, or resilience, was calculated as:

ΔG = ckσ²F/σP

using individual harvest data or the seasonal total, respectively, where σP is the phenotypic standard deviation (the square root of the denominator of the h² calculation), the recombination unit was an isolated polycross of selected HSF (i.e., c = parental control factor = 1) [30], and the top 15% of HSF were selected (i.e., k = standardized selection differential = 1.554). The indirect response in tall fescue forage mass at one WL resulting from selection at another WL, or from selection on average productivity, resilience, or stability, was predicted using the correlated-response theory and equations described by Falconer [15] and reviewed by Burdon [18] and Cooper et al. [19]. Briefly, these papers validate that the underlying basis for correlated response to selection between traits can, in turn, be extended to the correlated response between the same trait measured in two environments. As such, additive genetic correlations were estimated as:

rA = σA(xy) / (σA(x) σA(y))

where σA(xy) is the additive genetic covariance of HSF means for traits or environments x and y, and σA(x) and σA(y) are the additive genetic standard deviations for traits or environments x and y, respectively.
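The plot-mean heritability and predicted-gain calculations described above can be sketched numerically. A hedged illustration in Python, assuming the across-harvests form of the denominator and one common half-sib family-selection form of the gain equation (the variance components are invented, not the study's estimates):

```python
import math

def h2_plot_mean(vF, vFH, vFY, vFHY, vE, h, y, r):
    """Plot-mean narrow-sense heritability for the across-harvests model;
    for the seasonal total, drop the two harvest terms.
    Returns (h2, phenotypic standard deviation)."""
    phen = vF + vFH / h + vFY / y + vFHY / (h * y) + vE / (h * y * r)
    return vF / phen, math.sqrt(phen)

def predicted_gain(k, c, vF, sigma_p):
    """Predicted gain per cycle, dG = c * k * vF / sigma_P
    (c = parental control factor, k = standardized selection differential)."""
    return k * c * vF / sigma_p

# Illustrative variance components with 5 harvests, 3 years, 4 replicates,
# as in the experiment's across-harvests model.
h2, sp = h2_plot_mean(vF=0.010, vFH=0.010, vFY=0.006, vFHY=0.012, vE=0.30,
                      h=5, y=3, r=4)
dG = predicted_gain(k=1.554, c=1.0, vF=0.010, sigma_p=sp)
```

With these invented components, h² comes out near 0.5, comparable in magnitude to the moderate heritabilities reported in the Results.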
Predicted correlated response (i.e., indirect selection) resulting from single-trait selection was calculated as:

CR_y = i_x h_x h_y rA σP(y)

where x = trait under direct phenotypic selection, y = correlated trait under indirect selection, i_x = k (i.e., standardized selection differential) imposed for trait x, h_x and h_y are the square roots of the respective heritabilities, and σP(y) = phenotypic standard deviation for trait y (i.e., the square root of the denominator in the h² calculation). In a similar line-source experiment, Asay et al. [31] found that tall fescue forage mass responded almost exclusively in a quadratic polynomial fashion across WL. Therefore, our predicted responses were plotted against WL and fit with quadratic polynomial trends. Finally, the expected relative efficiency (RE) of indirect selection to increase forage mass was calculated as the ratio of correlated response (CRy) to direct response (Rx).
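The correlated response and relative efficiency calculations above can be sketched as follows; a minimal Python illustration of the Falconer formulas, with invented heritabilities, genetic correlation, and phenotypic standard deviation (not the study's estimates):

```python
import math

def correlated_response(i_x, h2_x, h2_y, rA, sigma_py):
    """Correlated response in trait y from selection on trait x (Falconer):
    CR_y = i_x * h_x * h_y * rA * sigma_P(y)."""
    return i_x * math.sqrt(h2_x) * math.sqrt(h2_y) * rA * sigma_py

def direct_response(i_y, h2_y, sigma_py):
    """Direct response to selection on trait y: R_y = i_y * h2_y * sigma_P(y)."""
    return i_y * h2_y * sigma_py

# Illustrative values: x = average productivity, y = forage mass at one WL
CR = correlated_response(i_x=1.554, h2_x=0.66, h2_y=0.49, rA=0.8, sigma_py=0.2)
R = direct_response(i_y=1.554, h2_y=0.49, sigma_py=0.2)
RE = CR / R  # relative efficiency; algebraically rA * h_x / h_y
```

An RE above 1 (100%) means indirect selection is expected to outperform direct selection, which can occur when the selection trait is more heritable than the target, as with the average-productivity exceptions noted in the Results.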

Average Forage Mass
The effect of Harvest and the WL × Harvest interaction on forage mass were both significant (p = 0.0001). Average WL × Harvest tall fescue forage mass is shown in Figure 2. Forage mass was greatest at the first harvest for all WL, with large decreases in forage mass at the second and subsequent harvests compared to Harvest 1 (Figure 2). This was especially true for 40 and 18% ET replacement, where almost all the seasonal total forage mass was from Harvest 1 (Figure 2). The HSF × WL interaction variance was also significant (0.0024 ± 0.0007, likelihood ratio test p = 0.0001), indicating differential HSF performance at the various WL. Furthermore, the range in HSF BLUP values for forage mass was extremely narrow at 40% and 18% ET replacement, as compared to the greater ET replacement water levels (Table 2).

Table 2 notes: 1 Statistics include the Finlay and Wilkinson regression coefficient [32] as a measure of stability (bi). Only WLs that exhibited significant HSF variance were included in the calculation of statistics, with the remaining WL of greatest ETo replacement deficit considered the crisis environment (i.e., 59%ET for across harvests and 18%ET for the seasonal total). 2 The percent of evapotranspiration (%ET) replaced weekly via precipitation and irrigation at each water level. 3 Checks included 'Kentucky-31' both as endophyte-free (KY31E−) and endophyte-infected (KY31E+).

Heritability and Genetic Correlation of Forage Mass and Resilience to Deficit Irrigation
Genetic variance significance depended upon whether analyses were performed across five repeated harvests or on the seasonal total of the five harvests. The results are presented using both models, and the implications are reviewed in the 'Discussion' section. In the case of the 40% and 18% ET replacement water levels, HSF variances in the across-harvests model were not significantly different from zero (p = 0.344 and 0.273, respectively) (Table 3). Therefore, as noted in the Materials and Methods, forage mass values from these water levels were not included in the related calculations of average productivity, resilience, or stability, nor were they included in genetic correlations. In addition, the only parametric stability statistic that exhibited significant HSF variance (p = 0.027), or that approached significance (p = 0.161) in the across-harvests model (Table 3), was Finlay and Wilkinson's regression coefficient (b) [32]; therefore, no other stability parameters are included in the results and discussion. Significant HSF variance was exhibited at all WL in the seasonal total model but, as noted above, was not significant at the 40% and 18% ET replacement water levels when repeated-harvest data were included in the model (Table 3). Corresponding heritable variation for tall fescue forage mass at any given WL with significant HSF variance was moderately high, ranging from 0.49 to 0.67 (Table 3). In addition, heritability for average productivity across WL environments was relatively high (0.66 and 0.73), whereas heritabilities for resilience and stability to deficit WL were moderate to moderately low (0.49 to 0.35) (Table 3). Genetic correlations among WL were mostly high, ranging from 0.87 to 0.56 (Table 4). Thus, it was not surprising that the genetic correlations between average productivity across WL and any given WL were also high, ranging from 0.95 to 0.81 (Table 4).
In contrast, resilience was either not or negatively genetically correlated with WL, except for moderate correlations with the WL considered the crisis environment (i.e., 59%ET and 18%ET for the across-harvests and seasonal total models, respectively) (Table 4). In comparison, Spearman's rank correlations often followed different trends, even differing in sign (i.e., positive or negative) compared to the genetic correlations (Table 5). This is not totally unexpected, as the magnitude and sign of genetic correlations cannot be determined from phenotypic correlations [15], but it still has implications for gain from indirect selection.

Table 4. Genetic correlations among water levels (% ET replacement) and yield statistics for 28 tall fescue half-sib families evaluated for forage mass in a line-source irrigation experiment with 5 water levels (WL) from 2001 to 2003 near Logan, UT, USA. The top diagonal is for the seasonal total forage mass model, whereas the bottom diagonal is for the analysis across five repeated harvests.

Table 5. Spearman's rank correlations among water levels (% ET replacement) and yield statistics for 28 tall fescue half-sib families evaluated for forage mass in a line-source irrigation experiment with 5 water levels (WL) from 2001 to 2003 near Logan, UT, USA. The top diagonal is for the seasonal total model, and the bottom diagonal is for the across-harvests model. 1 Correlations are only appropriate when both traits exhibit significant genetic variation; therefore, no values are given for 40 and 18% ET replacement in the across-harvests model. 2 Statistics are average performance (Yi) over WL 1 to 3 for 'Across harvests' or WL 1 to 5 for 'Seasonal total'; resilience (Ri) considering WL3 and WL5 as the crisis environment for 'Across harvests' and 'Seasonal total', respectively; and the Finlay and Wilkinson regression coefficient [32] as a measure of stability (bi).
Heritability and genetic correlation were used to predict direct and indirect gain from selection (Figure 3). Predicted gains from direct selection for average productivity (Pi), resilience (Ri), and stability (bi) were 5.0, 2.7, and 6.8% per cycle, respectively, for the across-harvests model. Likewise, for the seasonal total model, predicted gains from direct selection for Pi, Ri, and bi were similar at 5.3, 3.1, and 5.5% per cycle, respectively. Notably, selection for improved resilience only indirectly impacted forage mass at the crisis WL (Figure 3), whereas selection for average productivity was predicted to indirectly increase forage mass at all WL (Figure 3). Direct selection at any given WL was predicted to increase forage mass by 6.3 to 4.0% per cycle and was, for the most part, more efficient than indirect selection (Figure 3). Notable exceptions were that selection on Pi had a relative efficiency of up to 108% compared with direct selection at 59%ET in the across-harvests model, and of 103 and 105% compared with direct selection at 59%ET and 40%ET replacement, respectively, in the seasonal total model (Figure 3a,b).


The Challenge of Multiple Harvests in Forage Breeding for Water Deficit
The summary of this special Agronomy issue on Applied Plant Breeding and Quantitative Genetics Research to Improve Forage and Turf Plants states that "applied forage and turf breeding comes with its unique challenges and obstacles...". This research on breeding for resilience highlights several of those challenges, including how to deal with multiple harvests within the same growing season. Many forage papers have used the seasonal total of multiple harvests as the metric of productivity (see, for example, [31,[33][34][35][36]). However, herein, we have also reported differences in heritable variation based upon a model that included data from multiple harvests. As illustrated in Figure 2, this challenge can be especially problematic for research combining perennial forages, water deficit, and temperate regions where much of the yearly precipitation comes outside of the growing season (e.g., snowfall), reducing or eliminating the effect of water deficit on the first harvest.
Examples of this include the study by Jensen et al. [37], where in a similar line-source experiment they reported significant heritability for meadow bromegrass seasonal-total forage mass at only one of five WL. In contrast, however, they found heritable variation within the first two harvests (of five) when analysis was done by harvest [37]. Asay et al. [31] examined the seasonal distribution of forage mass for 10 tall fescue cultivars grown in a line-source irrigation experiment with 5 WL and 5 to 6 harvests within each growing season. They found that the largest decline in forage occurred between Harvests 2 and 3, compared to our result of the sharpest decline after Harvest 1. They attempted to address the harvest × WL interaction by conducting an analysis on the seasonal yield minus Harvests 1 through 3, which resulted in more pronounced differences in trends among WL and cultivars [31]. Quantitative genetic theory indicates that the HSF × Year or HSF × Location variance is confounded with the HSF variance when evaluation is done in only a single location or single year, respectively; as a result, the HSF variance is often inflated [38]. Similarly, it is reasonable to assume that the HSF and HSF × Harvest variances are confounded when analyses are done on seasonal totals, possibly leading to inflated genetic variances. This would agree with our results, where significant HSF variances were observed at all WL when analysis was on seasonal total mass, as opposed to only the three least-deficit WL when harvest was included in the model (Table 3). Overall, our results support including 'harvest' in the model to obtain the most accurate genetic parameters, especially when evaluating in water deficit environments.

Forage Breeding for Resilience Per se to Water Deficit
The primary question of this research was: can h² for resilience per se be estimated, and, if so, can breeding for resilience improve tall fescue forage mass at deficit ET replacement? Related questions included: what is the genetic relationship between average productivity, stability, and resilience? Previously, Picasso et al. [3] proposed a new resilience metric and, along with Robins et al. [39], showed that the metric could differentiate resilience among alfalfa and grass cultivars. Our results add to their reports and indicate that genetic parameters for the Picasso et al. [3] resilience metric can be estimated and that, within the tested tall fescue population, resilience per se was heritable (Table 3). We also found that this resilience metric was not highly genetically correlated with average productivity and was negatively correlated with stability (i.e., bi) (Table 5). Genetic correlations indicate the degree to which two measurements reflect what is genetically the same character [15]. Thus, inasmuch as bi > 1.0 equates to high responsiveness to more favorable growing environments [40], and there was a negative genetic correlation between bi and Ri, our results indicate that the Picasso et al. [3] resilience metric Ri is a measure of resistance to perturbation rather than another estimate of responsiveness to less water deficit. This conclusion is supported by the lack of genetic correlations between resilience and non-crisis WL. In the tested tall fescue population, resilience per se was predicted to respond to selection at a rate of 2.7% per cycle (harvest-included model); however, selection on resilience was less efficient at improving forage mass at all WL than direct selection or selection on average productivity over WL.
It was notable that selection on average productivity was predicted to have the largest overall impact on forage mass across the tested WL (Figure 3) and, given its lack of correlation with Ri, the two could possibly be selected simultaneously, resulting in both increased forage mass and resilience.
Multiple authors have suggested breeding for specific drought tolerance traits to improve resilience to water deficit. For example, Kole et al. [12] identified four QTL regions associated with drought tolerance traits such as cell-membrane stability, osmotic adjustment, root traits, and leaf rolling as targets for genomics-assisted breeding for increased resilience. Volaire et al. [7] suggested that genotypes should be evaluated for "dehydration delay" in order to design resilient grasslands. However, Gilliland et al. [11] defined resilience as multifactorial in nature and, in terms of forage breeding, as "proficient and sustained delivery of highly utilizable, high yielding herbage." They concluded that breeding for a specific stress trait was not multifactorial and therefore was not resilience breeding, whereas breeding for forage mass and nutritive value in stress environments was among the most important drivers of resilience breeding in forages [11]. Their argument lends support for breeding based upon a resilience index or metric with forage mass as the measured trait, such as done in our research. Besides what we have presented herein, the only other similar measurement we could find was the proposed adaptation of a stress tolerance index (STI) to evaluate orchardgrass under water deficit environments [41]. The STI was used to identify genotypes that produced high forage mass under both stress and non-stress conditions and, similar to our Ri, is a ratio (i.e., STI = (YPi × YSi)/ȲP², where YPi and YSi are the yields of a given genotype under non-stress and stress conditions, respectively, and ȲP is the average yield of all genotypes under non-stress conditions [41]). However, the numerator of the STI ratio includes forage mass in both the stress and non-stress environments, indicating that it leans more toward measuring responsiveness than the Picasso et al. [3] Ri metric.
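The contrast between STI and Ri can be made concrete with a small hypothetical example in Python (genotype yields and the trial mean are invented): two genotypes with identical stress yields receive different STI scores because STI rewards the non-stress yield in its numerator, whereas Ri rewards retaining a larger fraction of average productivity in the crisis environment.

```python
def sti(y_nonstress, y_stress, mean_nonstress):
    """Stress tolerance index [41]: STI = (Y_P * Y_S) / Ybar_P^2, where
    Y_P and Y_S are a genotype's non-stress and stress yields and
    Ybar_P is the trial mean under non-stress conditions."""
    return (y_nonstress * y_stress) / mean_nonstress ** 2

def resilience_ri(y_stress, yields):
    """Picasso et al. [3] Ri: crisis-environment yield divided by the
    mean yield across environments (crisis included)."""
    return y_stress / (sum(yields) / len(yields))

# Two hypothetical genotypes with the same stress yield (2.0) but
# different non-stress yields; trial mean under non-stress = 4.0.
sti_a = sti(5.0, 2.0, 4.0)             # high non-stress yield raises STI
sti_b = sti(3.0, 2.0, 4.0)
ri_a = resilience_ri(2.0, [5.0, 2.0])  # large crisis drop lowers Ri
ri_b = resilience_ri(2.0, [3.0, 2.0])
```

Here STI ranks genotype A above B, while Ri ranks B above A, illustrating why STI leans toward responsiveness and Ri toward resistance.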
Overall, these reports by others, along with our results, support the use of Ri as a metric to evaluate and breed for resilience per se. Additional studies examining realized gain and/or involving other populations and environments are needed to validate this hypothesis.

Conclusions
In conclusion, our use of a line-source irrigation system to simulate multiple water deficit environments allowed us to estimate a novel resilience metric, Ri, and the genetic parameters of forage mass for a breeding population of tall fescue. Results indicated that the resilience metric was both measurable and heritable, with gains in Ri of 2.7 and 3.1% per cycle predicted for the across-harvests and seasonal total models, respectively. The resilience metric was not correlated with average response over environments and was negatively correlated with stability, indicating that Ri is not a measure of responsiveness to more favorable environments. Furthermore, our results indicated that breeding for improved Ri would have no effect on forage mass at any given individual environment except for the crisis environment. Therefore, breeding for improved Ri could be done independently and/or concurrently with breeding for improved forage mass at any given water-deficit environment. Overall, these results indicated that measurement of Ri could facilitate breeding for improved resilience per se to climate change.

Data Availability Statement:
The data presented in this study are openly available in FigShare at 10.6084/m9.figshare.16528446, [42].

Acknowledgments:
The authors gratefully acknowledge M.Z.Z. Jahufer for his support and guidance on the use of DeltaGen. This research was supported in part by the U.S. Department of Agriculture, Agricultural Research Service. USDA is an equal opportunity employer and service provider. Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture.