Peer-Review Record

Summer and Fall Extreme Fire Weather Projected to Occur More Often and Affect a Growing Portion of California throughout the 21st Century

by David E. Rother 1,*, Fernando De Sales 1, Doug Stow 1 and Joseph P. McFadden 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 15 September 2022 / Revised: 10 October 2022 / Accepted: 25 October 2022 / Published: 27 October 2022
(This article belongs to the Special Issue Fire in California)

Round 1

Reviewer 1 Report

This study uses statistically downscaled model data to analyze future changes in two fire weather indices, in terms of their means and the number of days with extreme values, during summer and fall across different ecoregions in California. The manuscript is overall well written. The BCSD method the authors used is potentially beneficial for evaluating climate model output at finer spatial scales. The overall conclusion of this study, i.e., more frequent and severe fire weather in California summer and fall, is not surprising, as it has been suggested in previous studies such as Williams et al. 2019; however, this study does provide more detail at the ecoregion level. I do have some concerns, listed below, that I hope the authors can address before publication.

Major comments:

1. One of the main points of this study is the difference in fire weather change across ecoregions. However, most of the analysis in the results section, as well as the abstract, does not make it very clear whether there are significant differences across regions, such as coast vs. inland or among vegetation types. The manuscript does have a section (4.2) discussing future changes in fire weather over different ecosystems; however, there is little direct connection or reference to the presented results.

2. I'm not quite clear about how the BCSD method works. Lines 148-149 mention that it is done using gridMET data. gridMET is a finished analysis product for many surface variables and fire indices; how is it linked to the model data? If gridMET is used here, why use TerraClimate and NARR for verification and some other analyses?

Line 155 mentions a "linear regression model": what part of the data is used to train this regression model? Is the 1979 data shown in Figure 2 part of the training data or the testing data?

Please revise the methods section to explain a bit more clearly how the BCSD method was applied in this study, for replicability and a better interpretation of the results.

3. Again, it is a bit confusing why three different observational datasets (NARR, TerraClimate, gridMET) are used for comparison with the model output when, in fact, sometimes only one of them is used. It would be better to include a separate data section clarifying what each dataset is used for and why three are needed.

Minor comments:

Figures 4 & 5. what are the numbers in the figure? Please specify in the caption. I assume 2010-2040 data is not plotted here. I suggest either plotting the continuous time series from 1979-2100, or removing the connection between 2010 and 2041.

Figure 6. Please change the unit for extreme FWI/VPD days to days/year (in other related figures as well), which is easier to interpret. In addition, why not include the plot for 2041-2070? For the figures of 2041-2070 and 2071-2100, I would suggest plotting their difference from 1981-2010 so that the change is more visible. There are also missing latitude labels in the bottom panels.

 

In my understanding, the percentile is relative to the historical period of 1981-2010. In that case, the frequency with which FWI/VPD exceed their 95th percentile value should be the same everywhere, 92 × 5% × 30 = 138 days for summer. However, there are significant spatial differences in the 1981-2010 values in Figures 6 & 7.

Author Response

REVIEWER 1

 

General Comment:

We would like to thank Reviewer 1 for their comments on our manuscript. We appreciate the time and effort that it takes to provide constructive criticism and we are grateful for your feedback.

 

Response to Reviewer 1 (reviewer comments are numbered; our responses follow):

  1. One of the main points of this study is the difference in fire weather change across ecoregions. However, most of the analysis in the results section, as well as the abstract, does not make it very clear whether there are significant differences across regions, such as coast vs. inland or among vegetation types. The manuscript does have a section (4.2) discussing future changes in fire weather over different ecosystems; however, there is little direct connection or reference to the presented results.

 

Two sentences were added to section 3.2 comparing the VPD/FWI anomaly and relative change results among northern, central, and southern California ecoregions. Section 3.3 includes a discussion of the spatial distribution of 95th-percentile days across California, and two paragraphs were added to that section to aid the interpretation of results among ecoregions. Sentences were also added to section 4.2 about the differences between northern and southern California, tying these results into the broader discussion of California fire regimes.

 

  2. I'm not quite clear about how the BCSD method works. Lines 148-149 mention that it is done using gridMET data. gridMET is a finished analysis product for many surface variables and fire indices; how is it linked to the model data? If gridMET is used here, why use TerraClimate and NARR for verification and some other analyses? Line 155 mentions a "linear regression model": what part of the data is used to train this regression model? Is the 1979 data shown in Figure 2 part of the training data or the testing data? Please revise the methods section to explain a bit more clearly how the BCSD method was applied in this study, for replicability and a better interpretation of the results.

 

The gridMET data, in its entirety (1979-2014), was used to bias correct the CMIP6 simulations. We did not split the gridMET data into training and testing segments; instead, we used two independent datasets (NARR and TerraClimate) to validate the results of our BCSD methodology. The gridMET data was used in both the bias correction step and the downscaling step, where it was used to generate the regression functions (slopes and intercepts). This information was clarified in section 2.2. Figure 2 (showing 1979 temperature for TerraClimate, raw MIROC6, and MIROC6 BCSD) is meant to illustrate the differences in resolution between the raw and BCSD data and to compare the CMIP6 data to the observations. The year 1979 was included in the bias correction, so it is technically part of the training data.

 

To address your concern about the "linear regression model" and your request to expand on the BCSD methodology, an additional paragraph (plus a few sentences) was added to section 2.2.
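For readers unfamiliar with the general BCSD workflow, the sketch below illustrates the two steps described above. It is only illustrative and is not the authors' implementation: the function and variable names (bias_correct, fit_downscaling_regression, coarse_train, fine_train) are hypothetical, and the additive bias correction shown here is an assumption, not necessarily the correction applied to the CMIP6 fields.

import numpy as np

def bias_correct(model_hist, obs_hist, model_field):
    """Shift a model field by the mean model-minus-observation bias over the
    training period (a simple additive correction, assumed for illustration)."""
    bias = model_hist.mean(axis=0) - obs_hist.mean(axis=0)
    return model_field - bias

def fit_downscaling_regression(coarse_train, fine_train):
    """Fit a per-pixel linear regression (slope and intercept) mapping the
    coarse-resolution series, interpolated to the fine grid, onto the
    fine-resolution gridMET series over the 1979-2014 training period."""
    _, ny, nx = fine_train.shape
    slope = np.empty((ny, nx))
    intercept = np.empty((ny, nx))
    for j in range(ny):
        for i in range(nx):
            slope[j, i], intercept[j, i] = np.polyfit(
                coarse_train[:, j, i], fine_train[:, j, i], 1)
    return slope, intercept

def downscale(coarse_field, slope, intercept):
    """Apply the trained regression to a bias-corrected coarse field."""
    return slope * coarse_field + intercept

In this sketch the regression is trained on the full 1979-2014 overlap between the (interpolated) coarse model output and gridMET, consistent with our statement that the entire gridMET record was used for training.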

 

  3. Again, it is a bit confusing why three different observational datasets (NARR, TerraClimate, gridMET) are used for comparison with the model output when, in fact, sometimes only one of them is used. It would be better to include a separate data section clarifying what each dataset is used for and why three are needed.

 

gridMET was not used for comparison with model output; it is the dataset we used to bias correct and downscale the CMIP6 data. We used the entirety of the gridMET dataset (1979-2014) to "train" the bias correction, and in order to avoid training and testing with the same data, we validated the results of the BCSD with two independent datasets (NARR and TerraClimate). Additional information was added to section 2.2 to elaborate on our methodology. We hope this adequately addresses your concerns about the use of three datasets. Please let us know if you would like more information.

 

Response to Reviewer 1 Minor Comments (bulleted text is the reviewer comment; our response follows):

  • Figures 4 & 5. what are the numbers in the figure? Please specify in the caption. I assume 2010-2040 data is not plotted here. I suggest either plotting the continuous time series from 1979-2100, or removing the connection between 2010 and 2041.

 

The numbers in the figures are the changes in the mid- and late-century periods relative to the historical period (1981-2010). We have added this information to the caption. We have also removed the connection between 2010 and 2041 by adding a small break to the figure; the respective time periods (historical, mid-century, and late-century) each have axis labels directly below.

 

  • Figure 6. Please change the unit for extreme FWI/VPD days to days/year (in other related figures as well), which is easier to interpret. In addition, why not include the plot for 2041-2070? For the figures of 2041-2070 and 2071-2100, I would suggest plotting their difference from 1981-2010 so that the change is more visible. There are also missing latitude labels in the bottom panels.

 

We have changed the units in Figures 6 and 7 (as well as Table 2) to days/yr instead of days/30yr as before, and all text that references these results has been updated to reflect the change in units. We have also followed your suggestion and now plot the difference between the mid-/late-century periods and the historical period instead of the total number of days. The mid-century period was added, as were the missing latitude labels in the bottom panels.

 

  • In my understanding, the percentile is relative to the historical period of 1981-2010. In that case, the frequency with which FWI/VPD exceed their 95th percentile value should be the same everywhere, 92 × 5% × 30 = 138 days for summer. However, there are significant spatial differences in the 1981-2010 values in Figures 6 & 7.

 

You are correct that the number of times a pixel could exceed its own 95th percentile out of the 2730 total JJA/SON days in the respective time periods is constant; however, we used the 95th percentile of the daily ecoregion-average series as the threshold, not each pixel's own 95th percentile. In other words, we calculated the ecoregion-average FWI/VPD for every day of the historical period and then calculated the 95th-percentile threshold from that time series. This threshold value (one per ecoregion) was also used as the threshold for the mid- and late-century periods. The number of times an individual pixel can exceed that ecoregion threshold is not the same as the number of times it could exceed its own 95th-percentile threshold (which would be constant for all pixels, regardless of ecoregion, and would depend only on the total number of days considered). The explanation of what is being calculated and what is shown in the figures is given in section 2.3, as well as in section 3.3.
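A short sketch of this thresholding logic is given below for clarity. It is illustrative only, under the assumptions in our explanation above; the names ecoregion_threshold and exceedance_days_per_year are hypothetical and do not correspond to the code used in the study.

import numpy as np

def ecoregion_threshold(daily_index_hist, ecoregion_mask):
    """95th percentile of the daily ecoregion-average FWI/VPD series over the
    historical period (one threshold per ecoregion)."""
    region_mean = daily_index_hist[:, ecoregion_mask].mean(axis=1)  # (n_days,)
    return np.percentile(region_mean, 95)

def exceedance_days_per_year(daily_index, threshold, n_years=30):
    """Per-pixel count of days exceeding the fixed ecoregion threshold,
    expressed as days per year over a 30-year period."""
    return (daily_index > threshold).sum(axis=0) / n_years

Because the threshold is a single ecoregion-wide value, pixels that are systematically hotter or drier than the ecoregion mean exceed it on more than 5% of days, which is why the 1981-2010 maps in Figures 6 and 7 show spatial variation rather than a uniform 138 days per 30 years (4.6 days/yr).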

Reviewer 2 Report

This study developed daily weather conditions and two fire weather indices (FWI and VPD) using a bias-correction and statistical downscaling technique for California based on CMIP6 simulation results. The authors then analyzed the historical and future patterns of extreme fire weather across the state. This study improves our understanding of future fire weather conditions in California. The research topic is interesting and suitable for publication in Fire. Yet I have a few concerns about the manuscript.

 

Abatzoglou (2013) and Abatzoglou et al. (2018) have developed a similar methodology and weather data at high spatial resolution (~4 km) for the United States based on CMIP5 data (https://www.climatologylab.org/). I wonder why the authors chose to develop their own methods and datasets for calculating FWI and VPD in California. Are there any gaps or issues in the existing datasets or methods? Have the authors compared their results with Abatzoglou's datasets as a reference?

Author Response

REVIEWER 2

 

General Comment:

We would like to thank Reviewer 2 for their comments on our manuscript. We appreciate the time and effort that it takes to provide constructive criticism and we are grateful for your feedback.

 

Reviewer Comment:

Abatzoglou (2013) and Abatzoglou et al. (2018) have developed a similar methodology and weather data at high spatial resolution (~4 km) for the United States based on CMIP5 data (https://www.climatologylab.org/). I wonder why the authors chose to develop their own methods and datasets for calculating FWI and VPD in California. Are there any gaps or issues in the existing datasets or methods? Have the authors compared their results with Abatzoglou's datasets as a reference?

 

Comments/Response for Reviewer 2:

We chose to develop our own methodology and dataset for calculating FWI and VPD because we wanted to use the newest CMIP6 datasets available at the time of submission. We also wanted to test and develop the methodology implemented in our submission so that it can be applied to climate model data in regions around the world (and to other climate variables) that are not necessarily covered by the aforementioned CMIP5 dataset of the conterminous United States.

 

To further answer your questions, we have not investigated the CMIP5 data by Abatzoglou and have not compared our results with it. A comparison of our data with theirs could make for an interesting study; we may look into that in future work.

 

Reviewer 3 Report

Review 1945790

This manuscript uses a bias correction and statistical downscaling technique to enhance the spatial resolution of the coarse-resolution daily meteorological outputs generated by global climate models and the associated Coupled Model Intercomparison Project simulations. It also analyzes historical and future fire weather conditions in ten different ecoregions within California using two fire weather indices, vapor pressure deficit and the fire weather index. I believe this is a topic of interest to readers of the Fire journal. The manuscript is thorough and well constructed, and its content and style fit well with the journal compared to other recent articles. I do not have any significant concerns about this generally very good article.

Minor comments:

- Lines 35-37. Avoid using "No where" in this sentence. We do not know for sure.

- Lines 581-583. "The spatial downscaling algorithm tested and implemented in this study was inexpensive (compared to dynamic downscaling)". Please provide some evidence of the better performance (time, size, etc.).

- Figures 8, 9, 10, and 11 need higher resolution. Some text is very difficult to read.

Author Response

REVIEWER 3

 

General Comment:

We would like to thank Reviewer 3 for their comments on our manuscript. We appreciate the time and effort that it takes to provide constructive criticism and we are grateful for your feedback.

 

Reviewer 3 Comments (dashed lines are the reviewer comments; our responses follow):

- Lines 35-37. Avoid using "No where" in this sentence. We do not know for sure.

 

We removed "nowhere" and replaced it with a less absolute phrase.

 

- Lines 581-583. "The spatial downscaling algorithm tested and implemented in this study was inexpensive (compared to dynamic downscaling)". Please provide some evidence of the better performance (time, size, etc.).

 

We added two references to the sentence cited by the reviewer to support the claim that statistical downscaling is less computationally expensive (more efficient) than dynamical downscaling. The two citations are Abatzoglou and Brown (2013) and Zhang et al. (2020) [13 and 15 in the submission].

 

In addition, a review from the United States Agency for International Development, A Review of Downscaling Methods for Climate Change Projections, states, "Statistical downscaling methods are computationally inexpensive in comparison to RCMs that require complex modeling of physical processes" (Sylwia Trzaska and Emilie Schnarr, 2014). This reference is not in our submission; we feel that the current citations adequately express the differences between dynamical and statistical downscaling.

 

Also, since we have not completed this study using dynamical downscaling, we have no way of knowing the time it would take or the size of the datasets that would result. If more information is required to address this concern, please let us know.

 

- Figures 8, 9, 10, and 11 need higher resolution. Some text is very difficult to read.

 

Figures 8-11 were remade with larger axis fonts to make them more legible.

 

Round 2

Reviewer 1 Report

The authors have addressed my previous comments accordingly.
