2.1. Case Study Location
The SNF is located in east-central California, in the Forest Service’s Pacific Southwest Region. Established in 1893, the SNF spans ~525,000 hectares between ~275 and ~4250 m in elevation. The SNF hosts over two dozen tree species, with California red fir (Abies magnifica), white fir (Abies concolor), ponderosa pine (Pinus ponderosa), lodgepole pine (Pinus contorta), and incense-cedar (Calocedrus decurrens) among the most common. The SNF also contains other varied flora and fauna across rolling foothills, heavily forested middle elevations, and alpine landscapes [57]. In 2014 and 2015, respectively, 136,900 and 117,640 cubic meters of timber were cut from the SNF, mostly as sawtimber and, to a much lesser extent, fuelwood.
In terms of historical fire regimes, over half (54%) of the SNF falls within a frequent-fire regime (≤35-year fire return interval) of low to moderate severity, primarily at lower elevations. Longer fire return intervals are found at higher elevations, with 31% of the SNF in a 35–200 year return interval of low-moderate severity fire. The remaining fire regime classes are: 8% barren/non-burnable, 4% replacement severity with a 35–200 year return interval, 1% sparsely vegetated, 1% water, 0.2% replacement severity with a less-than-35-year return interval, 0.2% replacement severity with a longer-than-200-year return interval, 0.0005% snow and ice, and 0.00008% indeterminate fire regime [58].
The influences of fire suppression and other factors have shifted forest composition toward fire-intolerant species and resulted in substantial departure from historical fire regimes, most notably in dry, low-elevation ponderosa pine forests [59]. Over the period 1992–2013, the mean annual area burned by large fires (>100 ha) that ignited in the SNF was 12,472 ha, and the mean annual burn probability for the SNF was 0.0053; the median large-fire size for the fire occurrence area we modeled (see Section 2.5) was 309 ha [60]. More information on historical fire sizes, their relation to contemporary burn probabilities, and burn probability modeling is available in [54,55,56].
The SNF makes a useful case study location because it reflects, in microcosm, many of the challenges surrounding contemporary fire and fuels management in the western U.S.: potential for large, long-duration fires; corresponding potential for high suppression expenditures; proximal at-risk human communities; accumulation of hazardous fuel loads due in part to fire exclusion; and significant treatment and restoration needs. In fact, the National Forest is home to an existing funded project under CFLRP, the Dinkey Landscape Restoration Project, a ~62,000 ha project area where the management strategy aims to “restore key features of diverse, fire-adapted forests, including heterogeneity at multiple scales, reduced surface and ladder fuels, and terrestrial and aquatic habitats for sensitive wildlife species” [59]. To provide a sense of scale, the project aims to implement mechanical treatments on approximately 14,000 ha, and prescribed fire on approximately 19,000 ha, over a 10-year planning horizon. Across the broader SNF, annual treatment rates hover around 1000–2000 ha for mechanical treatment and 800–1200 ha for prescribed fire [61]. Here, we abstract away from the specifics of that project, and from the logistics of regulated planning processes, to explore alternative spatial treatment strategies under different budget levels across the entire SNF.
From a more practical perspective, the SNF is a well-studied location, such that many of the building blocks for our analysis are readily available. These include maps of fuel treatment constraints [33], biophysically driven fuel treatment prescriptions [54], and spatial risk assessment and response planning results [55]. Of particular note is the use of potential wildland fire operations delineations (PODs) as the spatial unit of analysis for fuel treatment prioritization. As outlined in [55], the SNF pioneered the development of PODs, which are polygons whose boundary features are relevant to fire control operations (e.g., roads, ridgetops, and water bodies). PODs provide a useful spatial construct with which to summarize risk and plan strategic response to unplanned ignitions [62]. It has been suggested that treatment strategies could be designed to create “anchors” that facilitate fire management operations on landscapes with limited treatment opportunities [33]. Here, we build on that idea by locating and prioritizing fuel treatments within PODs; i.e., the treatment decision unit is the POD.
Figure 1 presents a map of the case study landscape, with POD boundaries and suitable treatment locations identified.
2.2. Model Workflow and Leverage Metrics
Figure 2 presents the basic workflow for our treatment modeling framework. Beginning in the upper left, the existing conditions (EC) landscape is the foundation for fire behavior modeling and treatment design, and serves as the basis for creating hypothetical post-treatment (PT) landscape conditions. The primary analytical steps highlighted in this diagram are: (1) optimization to generate efficient spatial treatment strategies, and (2) stochastic fire simulation to evaluate these strategies. Additional modules not directly illustrated in this framework are treatment location, prescription, and cost modeling; spatial risk assessment; and suppression cost modeling. Optimal treatment strategies are developed as a function of treatment costs, harvest volume, feasible treatment locations, and expected net value change (eNVC; see Section 2.5). All of these measures are summarized for each POD, which, as described above, is the decision unit for implementing treatments. If a POD is selected for treatment, all of the feasible treatable area within that POD is scheduled for treatment.
Boxes highlighted in grey are used in leverage calculations, the equations for which are presented in the upper right of the figure. The “optimal treatment strategies” box is highlighted in darker grey, as this is where all inputs flow and link before post-treatment conditions are simulated. Within boxes we highlight key datasets, methods, and models used, all of which are described in more detail below. All leverage metrics are calculated as ratios, with the numerator expressing the net change due to the treatment strategy in terms of annual area burned, suppression costs, and landscape expected net value change. The denominators reflect an attribute of the treatment strategy itself, in terms of area treated, treatment cost, and expected net value change within treated areas. Individual modeling components are described below. By also calculating fire-treatment encounter rates (described below), we are able to generate frequency-magnitude distributions that characterize treatment effects on avoided annual area burned and avoided suppression costs. In other words, in addition to asking how many times simulated fires interacted with treated areas, we can also ask, for instance, how often such interactions resulted in cost savings above a certain threshold. Note that all modeling related to fuel treatments, stochastic fire simulation, and risk assessment is performed on a pixelated, or rasterized, landscape with a pixel size of 180 × 180 m (see Section 2.5 for more detail).
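The leverage construction described above reduces to a simple ratio. A minimal sketch, with hypothetical numbers (all variable names and values are illustrative assumptions, not the study's results):

```python
# Hypothetical summary values for one treatment strategy (illustrative only).
annual_area_burned_ec = 12000.0   # ha/yr, existing-conditions landscape
annual_area_burned_pt = 10500.0   # ha/yr, post-treatment landscape
area_treated = 5000.0             # ha scheduled for treatment under the strategy

def leverage(net_change, strategy_attribute):
    """Leverage ratio: net change due to the treatment strategy (numerator)
    over an attribute of the strategy itself (denominator)."""
    return net_change / strategy_attribute

# Avoided annual area burned per hectare treated.
area_leverage = leverage(annual_area_burned_ec - annual_area_burned_pt,
                         area_treated)   # -> 0.3 ha avoided per ha treated
```

Analogous ratios substitute suppression costs or landscape eNVC in the numerator, and treatment cost or within-treatment eNVC in the denominator.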
2.3. Fuel Treatment Eligibility, Prescription, and Cost Modeling
We began our fuel treatment modeling by removing from consideration any locations that were not operationally or administratively feasible for mechanical treatment. To do this, we relied on previous research by [33] mapping treatment opportunities, specifically using their Scenario D, which offered the loosest constraints on treatment (allowing access to all timber within 610 m of existing roads on slopes <35%, and all timber within 305 m of existing roads on slopes <50%). We then used random forest modeling to assign each pixel on the landscape to a unique tree list corresponding to an existing Forest Inventory and Analysis (FIA) plot, following the methodology of [63]. FIA measures the size and species of each tree on each plot, which provides the basis for these tree lists. Predictor variables were chosen to optimize biomass predictions and included, among others, a suite of vegetation variables: existing vegetation group, existing forest cover, and existing forest height. These tree list data, mapped to each pixel on the landscape and corresponding to a unique empirical FIA plot, formed the basis for subsequent modeling of mechanical harvesting and treatment costs, as described below.
Forest treatment prescriptions simulated for the SNF are those described by [54], and are based upon existing-condition canopy cover percentages within specified ranges that are then treated to meet a canopy cover percentage specified by the prescription. The treatment prescriptions also change surface fuel models, which we model as under burning performed after mechanical thinning. These treatment prescriptions were designed to reduce the rate of spread and intensity of surface fires, as well as the probability of crown fire; additional details are available in [54].
To simulate these forest treatments, we used the Western Sierra variant of the Forest Vegetation Simulator (FVS) [64]. If a plot had canopy cover > 40%, a thin from below was triggered to cut trees beginning at 5 cm diameter at breast height (DBH) and progressing to larger DBHs until the treated-condition canopy cover was achieved. Using FVS, we estimated cut merchantable (tree stems with 25.4 cm minimum DBH) and non-merchantable (tops and limbs of merchantable trees and whole trees less than 25.4 cm DBH) tree components. We assumed that 15% of cut stems and 15% of branchwood from cut stems would remain in the stand, reflecting standard operating procedures under which removing all cut material is generally unachievable for ecological purposes and operational constraints. We additionally computed other plot variables required for harvest cost modeling, including trees per hectare cut, average cubic meters per tree cut, and the residue fraction of trees cut. We also computed harvested volume for use in treatment optimization (described below).
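The thin-from-below logic can be sketched as follows. This is an illustration only: it assumes, unrealistically simply, that each tree record carries its own additive canopy-cover contribution, whereas FVS applies proper crown-overlap corrections internally, and it cuts whole trees, so the final cover may undershoot the target.

```python
def thin_from_below(trees, target_cover, trigger_cover=40.0, min_dbh_cm=5.0):
    """trees: list of (dbh_cm, cover_pct) tuples for one plot's tree list.
    Returns (kept, cut) lists. Cutting starts at min_dbh_cm and proceeds
    to larger DBHs until total cover falls to the target."""
    total = sum(c for _, c in trees)
    if total <= trigger_cover:
        return list(trees), []          # no thinning triggered below 40% cover
    keep, cut = [], []
    for dbh, cover in sorted(trees, key=lambda t: t[0]):   # ascending DBH
        if dbh >= min_dbh_cm and total > target_cover:
            cut.append((dbh, cover))
            total -= cover
        else:
            keep.append((dbh, cover))
    return keep, cut

# Toy plot: (DBH cm, cover %). Total cover 49% triggers thinning to 35%.
plot = [(4.0, 1.0), (6.0, 5.0), (10.0, 8.0), (30.0, 15.0), (55.0, 20.0)]
kept, removed = thin_from_below(plot, target_cover=35.0)
```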
We then estimated harvest costs for the treatments using the Fuel Reduction Cost Simulator (FRCS) [65], assuming mechanical ground-based whole-tree harvesting for each plot. In addition to stand-specific information generated with FVS, the FRCS required additional variables. We obtained mean slope and elevation from the expert opinion of silvicultural specialists with the SNF [61]. Across the entire SNF, we estimated the mean slope at 25%, the mean elevation at 1524 m, and the mean yarding distance along the mean slope at 152 m. This is admittedly a coarse approximation, with the result that vegetation structure is the primary determinant of cost estimates; we did this to dramatically reduce computational time, as we would otherwise have had to run thousands of unique tree list-slope-elevation combinations through the FRCS. The one-way move-in distance was estimated at 121 km, the average treatment area was estimated at 20 ha, and we assumed a partial cut. The average weight of cut trees, calculated as the average across species present in the treatment areas, was 362 kg per cubic meter. Following this parameterization of the FRCS model, we estimated treatment costs for all plots on the landscape meeting the canopy cover ranges.
To estimate the costs of under burning to dispose of activity fuels generated by implementing the crown cover reduction treatment, we employed the model developed by [66]. As with the FRCS mechanical treatment cost model, we assumed a 20 ha treatment size for each entry, where treatment size is the only continuous variable. Based on the expert opinion of a fire management officer of the SNF [61], we parameterized this cost model assuming a fuel model of logging slash, the presence of threatened and endangered species, and proximity to the wildland-urban interface. These assumptions have the effect of creating higher cost estimates (treatments are cheaper farther away from human communities and from sensitive wildlife habitat). Both these and the FRCS mechanical treatment cost estimates were converted to 2012 dollars using the Gross Domestic Product deflator [67], in order to be consistent with modeled suppression costs.
The methods described above provided per-hectare treatment costs for each operationally feasible tree list present on our modeling landscape. We then further removed from consideration tree lists with low values of canopy cover reduction or of trees removed per hectare, to avoid artificially high treatment cost estimates arising from harvest parameters outside of what would normally be implemented on the ground. This resulted in 199 unique tree lists that collectively accounted for 49,490 ha eligible for treatment. We aggregated these results up to the POD scale according to the number of pixels assigned to each unique tree list within each POD, resulting in a total treatment cost estimate per POD.
Lastly, we applied two additional filters for treatment, requiring a minimum treatable area within a POD of 202 ha, and limiting consideration to PODs with a net negative eNVC value (meaning net loss from wildfire). Although PODs with a positive net value change (meaning net benefit) could be candidates for application of prescribed fire for resource benefit, we did not consider that option in this analysis, choosing instead to target PODs for fuel treatments where the potential to avoid loss to highly valued resources was greatest. Applying these filters resulted in 31 PODs eligible for treatment, comprising approximately 20,640 ha.
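The two eligibility filters amount to a simple predicate over POD-level summaries. A minimal sketch with hypothetical POD records (the field names and values are assumptions for illustration, not the study's data):

```python
# Hypothetical POD summaries: treatable_ha = feasible treatment area within
# the POD; envc = summed expected net value change (negative = net loss).
pods = [
    {"id": 1, "treatable_ha": 350.0, "envc": -120.0},
    {"id": 2, "treatable_ha": 150.0, "envc": -300.0},   # too little treatable area
    {"id": 3, "treatable_ha": 800.0, "envc": 40.0},     # net benefit: excluded
    {"id": 4, "treatable_ha": 500.0, "envc": -75.0},
]

MIN_TREATABLE_HA = 202.0

def eligible(pod):
    # Require a minimum treatable area and a net loss (negative eNVC) from wildfire.
    return pod["treatable_ha"] >= MIN_TREATABLE_HA and pod["envc"] < 0.0

eligible_pods = [p for p in pods if eligible(p)]
```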
2.4. Treatment Strategy Optimization
We developed a single-period, bi-criteria integer programming formulation to maximize risk reduction (Equation (1)) and maximize volume harvested (Equation (2)). The objective to maximize risk reduction uses as a proxy the total eNVC within areas feasible for treatment within the POD. To reiterate, the decision variable is whether to treat a given POD; selecting a POD for treatment dictates that all feasible treatment locations within the POD will be treated. The only constraint is that the total amount spent on treatment at the National Forest level must be below a defined budget level (Equation (3)); here, we explored four budgetary levels: $10.5 M, $21 M, $31.5 M, and $42 M. The basic model formulation is shown below.
i ∈ I | index for and set of feasible PODs
B | maximum allowable budget
e_i | summed expected net value change for POD i
v_i | total board foot volume harvested for POD i
c_i | total treatment cost of POD i
x_i | 0/1 variable; 1 if POD i is scheduled for treatment
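The formulation described in the text can be written as follows; the symbols here are our own shorthand (x_i the binary treatment decision for POD i, e_i its summed eNVC, which is negative for eligible PODs, v_i its harvested volume, c_i its treatment cost, and B the budget), not necessarily the notation of the original equations:

```latex
\begin{align}
\max \quad & \sum_{i \in I} -\, e_i \, x_i
  && \text{(1) risk reduction} \\
\max \quad & \sum_{i \in I} v_i \, x_i
  && \text{(2) volume harvested} \\
\text{s.t.} \quad & \sum_{i \in I} c_i \, x_i \le B, \qquad
  x_i \in \{0, 1\} \quad \forall i \in I
  && \text{(3) budget constraint}
\end{align}
```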
For each of the four budget-level scenarios we generated an efficient frontier composed of twenty solutions, by iteratively fixing the level of the volume-harvested objective and maximizing risk reduction subject to achieving that level of volume, resulting in a total of eighty optimal treatment strategies. Because of the small size of the problem, heuristics were unnecessary, and we were able to solve all iterations to optimality using the General Algebraic Modeling System (GAMS). Although this generated more solutions than we could feasibly analyze with stochastic fire simulation, evaluating the slope of these frontiers is useful in its own right for understanding how the nature of tradeoffs across objectives may vary with the available budget, and we retained all optimal solutions for possible exploration in future research.
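The frontier-generation step (an epsilon-constraint method) can be sketched on a toy instance. We used GAMS for the real problem; the brute-force search below is only an illustration on four hypothetical PODs with made-up (risk reduction, volume, cost) values:

```python
from itertools import chain, combinations

# Toy instance: per-POD (risk reduction, volume, cost). Assumed data.
pods = {
    "A": (50.0, 10.0, 4.0),
    "B": (30.0, 25.0, 5.0),
    "C": (40.0, 5.0, 3.0),
    "D": (20.0, 30.0, 6.0),
}
BUDGET = 10.0

def all_subsets(keys):
    keys = list(keys)
    return chain.from_iterable(combinations(keys, r) for r in range(len(keys) + 1))

def best_risk_reduction(min_volume):
    """One epsilon-constraint step: maximize risk reduction subject to the
    budget and to achieving at least min_volume of harvest."""
    best_subset, best_rr = None, float("-inf")
    for subset in all_subsets(pods):
        rr = sum(pods[k][0] for k in subset)
        vol = sum(pods[k][1] for k in subset)
        cost = sum(pods[k][2] for k in subset)
        if cost <= BUDGET and vol >= min_volume and rr > best_rr:
            best_subset, best_rr = subset, rr
    return best_subset, best_rr

# Sweep the fixed volume level to trace the efficient frontier.
frontier = [(v, *best_risk_reduction(v)) for v in (0.0, 20.0, 40.0)]
```

As the required volume rises, the maximal attainable risk reduction falls, which is exactly the tradeoff the frontier's slope summarizes.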
We selected six treatment strategies to feed into additional simulation analysis: one strategy from each budget level, plus two additional strategies for the $21 M budget. To do so, we first re-scaled objective function results to a 0–1 scale relative to the best-performing solution (i.e., highest risk reduction, highest volume production). We then applied even weights to the two objectives, summed the two objective scores, and selected the treatment solution with the highest overall score. For the $21 M case, we also selected the endpoint strategies of the frontier.
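The re-scaling and even-weight selection rule is straightforward; a minimal sketch with made-up frontier solutions (the numbers are illustrative assumptions):

```python
# Frontier solutions as (risk reduction, volume) pairs; hypothetical values.
solutions = [(90.0, 15.0), (80.0, 35.0), (70.0, 40.0)]

max_rr = max(s[0] for s in solutions)
max_vol = max(s[1] for s in solutions)

def score(sol, w_rr=0.5, w_vol=0.5):
    # Re-scale each objective to [0, 1] against the best-performing solution,
    # then apply even weights and sum the two scores.
    return w_rr * sol[0] / max_rr + w_vol * sol[1] / max_vol

chosen = max(solutions, key=score)
```

Note that the chosen solution need not be best on either objective individually; it is the best compromise under the even weighting.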
2.5. Stochastic Fire Simulation, Risk Assessment, and Suppression Cost Modeling
We used the Large Fire Simulator (FSim) [68] to model the ignition, spread, and eventual containment of thousands of fires across the modeling landscape. Model parameterization was facilitated by leveraging previous simulations performed on the same landscape [54,55,56,69]. Daily ignition probabilities were based on logistic regression of Energy Release Component for fuel model G (ERC-G) values in a gridded dataset for the pixel in which the Trimmer Remote Automated Weather Station is located [70]. FSim generates an ensemble of artificial yearly weather sequences (comprised of ERC-G, wind speed, and wind direction) whose statistics are representative of the local weather station records [71]. Each yearly weather sequence represents a scenario under which fire ignition and behavior are simulated. For this analysis we ran FSim for 10,024 unique yearly weather scenarios; running thousands of fire seasons is necessary to capture variation in simulated fire weather and corresponding fire activity. A past simulation performed on the same modeling landscape ran 10,000 seasons [55], which we adopted here, with minor variation due to the number of processors used. Fire growth was based on a minimum travel time algorithm [72] using the Scott and Reinhardt crown fire model [73]. Fire containment was based on daily probabilities derived from the fuel type, the number of quiescent versus active growth periods, and the length of quiescent versus active growth periods [74]. A pixel size of 180 × 180 m was used, created by nearest-neighbor resampling of LANDFIRE c2012 landscape data [58] from its native 30 m pixels; this upscaling was performed to increase computational efficiency. Fuel models were based on the 40-model set of [75]. The model was calibrated on this “existing conditions” landscape by adjusting the suppression factor (which controls the rate at which “fireline” is built to contain the fire) and the acrefract parameter (which adjusts the number of ignitions) until the mean number and size of modeled fires closely reflected the historically observed fires in the simulation area [60]. The simulation area was larger than the SNF, in order to allow fires to ignite outside the study area and burn into it; see [54].
FSim outputs include (1) an event set of fire perimeters, (2) a raster of burn probabilities produced by tallying the number of times each pixel burned during the simulation period divided by the number of years in the simulation period, and (3) rasters of the conditional probability of six flame length categories. The same set of fire ignitions and weather was used for all landscapes (current existing conditions landscape and six post-treatment landscapes; see below). Thus, fire sizes, flame lengths, and derived fire suppression costs can be directly compared across all seven runs.
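The burn probability tally in output (2) is a per-pixel frequency over simulated seasons. A minimal sketch on toy data (the season sets below are illustrative; FSim operates on rasterized perimeters over thousands of seasons):

```python
# Each "season" is the set of pixel indices burned that simulated year.
seasons = [
    {(0, 0), (0, 1)},        # season 1 burned two pixels
    {(0, 1), (1, 1)},        # season 2
    set(),                   # season 3: no fire reached these pixels
    {(0, 1)},                # season 4
]

def burn_probability(seasons, pixel):
    """Times the pixel burned, divided by the number of simulated seasons."""
    burns = sum(1 for s in seasons if pixel in s)
    return burns / len(seasons)

bp = burn_probability(seasons, (0, 1))   # burned in 3 of 4 seasons
```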
We used FSim outputs as inputs into models of fire risk and suppression cost. For the former, we relied on a widely used framework [50] that characterizes fire-related losses and benefits in terms of a weighted net value change (NVC) metric. Specifically, we used FireNVC, a geospatial risk calculation tool developed by the U.S. Forest Service [51]. NVC values are derived by integrating flame-length burn probabilities with resource- and asset-specific response functions that define net changes in value in terms of flame-length categories. These response functions are tabulations of the relative change in value if the resource or asset were to burn in one of six flame-length classes; values range from −100 (greatest possible loss of resource value) to +100 (greatest increase in value), and are generated by local resource specialists (e.g., wildlife biologists). Relative importance weights can also be incorporated to differentiate resources and assets in terms of management priority, which was done in this case by forest leadership. Resources and assets in the assessment included visual resources (e.g., scenic byways), human communities, inholdings (e.g., private industrial forests), critical infrastructure (e.g., transmission lines), timber resources, watershed resources, and critical terrestrial wildlife habitat. Beneficial effects from low-intensity fire were expected for timber, watershed, visual, and habitat resources; more information is available in [55].
NVC values can be calculated conditional on the occurrence of fire or, as we do here, in terms of statistical expectations that incorporate the probability of experiencing large fire. Henceforth, we refer to the risk metric we employed as expected net value change, or eNVC. To calculate eNVC, we leveraged pre-existing data and risk assessment results from the SNF [55]. For each new hypothetical treatment landscape and its accompanying FSim burn probability and flame length results, we re-ran risk calculations using the techniques described in [51]. To characterize risk at the POD level, we summed all pixel-level eNVC values within each POD.
To estimate suppression costs, we leveraged a recently developed regression model of suppression costs, the Spatial Stratified Cost Index (S-SCI) [52]. This model expands upon an earlier suppression cost model (simply called the SCI) used by the Forest Service [76] in a few key ways, notably by using spatial data associated with the final fire perimeter rather than the ignition point, and it has been shown to have improved predictive power [52]. Whereas previous work combining stochastic fire simulation with suppression cost modeling could only discern possible changes in suppression costs through reductions in fire size, here factors driven by changes in both fire size and shape (e.g., the proportion of different fuel types burned) can be accommodated in cost estimates. Consistent with the model’s intended use for estimating costs of nominally “large” fires, we subset the fire perimeters to include only fires that grew to over 100 ha and that ignited within the SNF. The number of fires meeting these criteria varied slightly among the treated runs, but was near 24,000.
Inputs for the suppression cost model include: fire size; fuel dryness (i.e., maximum Energy Release Component (ERC) percentile during the fire and standard deviation of ERC); proportion of the fire’s area in various land ownerships (i.e., U.S. Forest Service and U.S. Department of the Interior); proportion of the fire’s area in various land management designations (i.e., Wilderness, Inventoried Roadless Area, and other specially designated areas); mean elevation; proportion of the fire in various slope categories; proportion of the fire in various fuel types (i.e., grass, brush, timber, and slash); housing value within the fire perimeter as well as within successive buffered distances (8 km, 16 km, 32 km) of the fire perimeter; proportion of the fire in various aspect categories; fire duration; and region. We used the same data sources as [52]: primarily the Wildland Fire Decision Support Center data downloads [77] and Landscape Fire and Resource Management Planning Tools Project fuels data for the landscape c2012 [58]. Because we were estimating costs for simulated rather than observed fires, we derived fire area from a text file output by FSim, the Fire Size List, which gives the final size of each fire ignited in FSim. To obtain the maximum ERC during the fire and its standard deviation, we took each fire’s duration (also obtained from the Fire Size List) and looked up the ERCs over that period in a set of binary files output by FSim that record daily ERCs. Housing value was calculated using a Python script that called the arcpy module to iteratively select, for each fire, all housing values inside the fire and within the buffered distances of the fire, and summed these values. For the remaining predictor variables, we overlaid each fire perimeter with the predictor variable raster and extracted the variable of interest (the mean for some variables, the proportion for others) using the RMRS Raster Tool’s Zonal Statistics Tool [78]. Lastly, we used a script written in R [79] to estimate per-fire costs from these predictor variables, using the coefficients of the ordinary least squares model presented in [52].
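Once the predictors are assembled, applying the fitted OLS model is a linear combination of coefficients and predictor values. A minimal sketch; the predictor names, coefficient values, and intercept below are placeholders, not those estimated in [52]:

```python
# Placeholder coefficients for a few of the predictors named above.
coefficients = {
    "log_fire_size_ha": 0.9,
    "max_erc_percentile": 0.02,
    "prop_timber_fuel": 0.5,
}
INTERCEPT = 10.0   # placeholder intercept

def predict(fire, coef, intercept):
    """Linear predictor of the fitted OLS model for one simulated fire."""
    return intercept + sum(coef[name] * value for name, value in fire.items())

# One simulated fire's predictor values (hypothetical).
fire = {"log_fire_size_ha": 6.0, "max_erc_percentile": 90.0, "prop_timber_fuel": 0.4}
cost_prediction = predict(fire, coefficients, INTERCEPT)   # ~17.4 in model units
```

If the model is fitted in transformed (e.g., log-dollar) units, the prediction would be back-transformed accordingly; we defer to [52] for those details.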
The processing requirements for modeling suppression costs are noticeably higher than for risk calculations, and are similar to those of the fire simulation itself. Whereas risk assessment results are built from burn probabilities aggregated across all unique simulated seasons, suppression costs must be modeled on a fire-by-fire basis. In total, costs were calculated for nearly 150,000 large fires across the calibration and all post-treatment runs. Only after calculating these individual fire costs could we estimate annual suppression costs, accounting for years in which no simulated fires occurred as well as those in which several occurred. We similarly calculated distributions of annual area burned, which served as the basis for our encounter rate calculations (see below).
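The per-fire-to-annual rollup, including zero-fire years, can be sketched as follows (season identifiers and costs are toy values, not simulation outputs):

```python
from collections import defaultdict

# Per-fire results as (season_id, cost) pairs; hypothetical values.
fire_costs = [(1, 2.0e6), (1, 0.5e6), (3, 4.0e6)]
n_seasons = 4

# Sum per-fire costs within each simulated season.
annual = defaultdict(float)
for season, cost in fire_costs:
    annual[season] += cost

# Seasons with no qualifying fires contribute zero-cost years to the distribution.
annual_costs = [annual.get(season, 0.0) for season in range(1, n_seasons + 1)]
mean_annual_cost = sum(annual_costs) / n_seasons
```

The same grouping applied to fire sizes yields the annual area-burned distributions used for the encounter rate calculations.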