Article
Peer-Review Record

A Multimodal Data Fusion and Deep Learning Framework for Large-Scale Wildfire Surface Fuel Mapping

by Mohamad Alipour 1, Inga La Puma 2, Joshua Picotte 3, Kasra Shamsaei 4, Eric Rowell 5, Adam Watts 6, Branko Kosovic 7, Hamed Ebrahimian 4 and Ertugrul Taciroglu 8,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Reviewer 4: Anonymous
Submission received: 28 October 2022 / Revised: 30 December 2022 / Accepted: 31 December 2022 / Published: 17 January 2023
(This article belongs to the Special Issue Advances in the Measurement of Fuels and Fuel Properties)

Round 1

Reviewer 1 Report


Comments to Authors

It was a pleasure to read such a thoroughly described and detailed manuscript on fuels mapping at large scales. I would like to commend the authors on the completion of this work. I will use quotations and page numbers to direct the authors to where in the text I am referring for any comments/edits, and will use section headers to help as well. I am also going to assume that things like table formats, in-text citations, and figure descriptions are correct. Finally, unless a grammar or spelling error impacts the context or message of a sentence, I won’t point it out directly here in the comments, to keep the focus on content-related comments/edits. Thank you again for such a well-written manuscript! My comments follow…


First, I would like to point out the strength and depth of your Introduction and Background sections, which I feel are both a solid representation of the history and limitations of larger scale fuels classification and mapping for fire behavior modeling and spread simulations.


In the Research Significance sub-section, if you could give some names or examples for the various data repositories and data layers you included in your model vector up front (in the paragraphs on page 6), it might be helpful for those less familiar with the process to follow your workflow and all the inputs/outputs. Maybe combine the topics/text from pages 6 and 7 to reduce redundancy.


Figure 2 might need a bit more description given the complexity of what it is you’re trying to visualize here. I am slightly unclear on the arrow formation for the average and variance boxes and their relation to the boxes below.


Pg 14-15. “On the other hand, mis-predicting a very small number of isolated pixels has a less pronounced effect on the overall fire spread than errors in the prediction of large areas of dominant fuel types.” This might need a source to back it up; one might consider the mis-prediction of rare types to be important to the model if the topography is flat. Likely OK to keep as is; just consider adding a source.


Based on the evidence provided in the results section of the manuscript, I am convinced that 70% model accuracy using the 4% fuel type threshold is an important contribution to large-scale fuel mapping and can help improve upon the already beneficial LANDFIRE maps. Additionally, determining which layers were beneficial for which classes is also really important. Finally, showing that imagery data overall improved model accuracy helps support future methodologies with data fusion techniques. Just thinking this process could be automated in GEE, using the available high-res imagery, to even make a cool app. Just some neat ideas for potential future avenues your work could take.


You do a nice job on pages 20-22 describing where the model errors come from and showing how the pseudo-labels can introduce error into model predictions based on inaccuracies in the derived fuel maps. It might also relate to what you mentioned earlier about the visual similarities in RGB images and the ability to distinguish landcover types. And nicely done wrapping it back to one of the three main research significance points from the intro. I might suggest moving the sensitivity analysis earlier in the results/discussion section; the text on pg 25 is important and well stated and might benefit from coming earlier in that section.


Pg 27 – 112 GB of RAM would be the fastest computer I’ve ever seen haha. And to think training still took over an hour and a half. That must be the result of the DNN and CNN concatenation step?


Pg 28 – last bits of the results and discussion section are interesting, but might be better served as supplemental discussion/materials. I’m not sure if there are page limits on manuscripts for this issue, but if you need to find places to cut, this would be a potential choice. Again, not because it’s not valid, just a bit extra. You’ve mentioned the FIA database potential before in the paper too, so you wouldn’t be losing content. Just some thoughts.


Just want to emphasize again how well this manuscript is written given the serious depth of statistics and geospatial analysis described.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

In the great majority of cases of surface fuel mapping, the spatial scale of interest has been smaller, so that many small-scale, site-specific models must be trained and combined to cover the landscape at the national scale. This paper proposes surface fuel identification based on a custom deep learning framework that ingests multimodal data. The model can extract information from multispectral signatures, high-resolution imagery, and biophysical climate and terrain data. A Monte Carlo Dropout mechanism is devised to create a stochastic ensemble of models that can capture classification uncertainties and boost prediction performance. The fuel pseudo-labels are created by random geospatial sampling of existing fuel maps. However, I have some questions about the paper, as described below.
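
For readers less familiar with the Monte Carlo Dropout mechanism summarized above, the following is a minimal, hypothetical Keras-style sketch (not the authors' code) of how such a stochastic ensemble is typically produced: dropout is kept active at inference time and repeated forward passes are aggregated.

    import numpy as np

    def mc_dropout_predict(model, x, n_samples=30):
        """Illustrative Monte Carlo Dropout inference: run n_samples
        stochastic forward passes with dropout kept active
        (training=True) and return the mean class probabilities and
        their variance as a per-class uncertainty estimate. Assumes
        `model` is a tf.keras classifier ending in a softmax layer;
        the sample count is a placeholder."""
        preds = np.stack([model(x, training=True).numpy()
                          for _ in range(n_samples)])
        return preds.mean(axis=0), preds.var(axis=0)

The mean acts as the ensemble prediction, and the variance provides the classification uncertainty the summary refers to.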

1. Figure 1 shows that the different data are combined through a concatenate operation. I do not understand how the multimodal data are combined, and what are their corresponding proportional weights?
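
To make this question concrete, feature-level fusion by concatenation usually looks like the following hypothetical Keras sketch (the shapes, layer sizes, and toy image branch are placeholders, not the authors' architecture):

    from tensorflow.keras import layers, Model

    n_classes = 13                                         # placeholder class count
    img_in = layers.Input(shape=(224, 224, 3), name="image_patch")
    aux_in = layers.Input(shape=(20,), name="spectral_climate_terrain")

    x = layers.Conv2D(32, 3, activation="relu")(img_in)    # stand-in image branch
    x = layers.GlobalAveragePooling2D()(x)
    a = layers.Dense(64, activation="relu")(aux_in)        # stand-in tabular branch

    fused = layers.Concatenate()([x, a])                   # plain concatenation, no fixed weights
    fused = layers.Dropout(0.5)(fused)
    out = layers.Dense(n_classes, activation="softmax")(fused)
    model = Model([img_in, aux_in], out)

In a design like this, the branches are not assigned explicit proportional weights; the layers after the concatenation learn during training how much to rely on each modality.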

2. We know that the dataset in this article concerns ground fuels, and the authors adopt the idea of transfer learning, but they should add a description of why the CNN backbone weights can be initialized from weights pre-trained on the ImageNet dataset. Generally speaking, in ImageNet's natural scenes the difference between foreground and background is quite large, while in the dataset used in this paper the difference between foreground and background is not large. Perhaps consider more appropriate pre-trained weights for this type of scene as a reference?
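
For reference, ImageNet initialization in this kind of pipeline is usually just a warm start for the convolutional filters, with a new classification head trained on the fuel data; a hypothetical Keras sketch (class count, input size, and freeze depth are placeholder choices, not the authors' settings):

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    base = tf.keras.applications.InceptionResNetV2(
        weights="imagenet", include_top=False,
        input_shape=(224, 224, 3), pooling="avg")
    for layer in base.layers[:100]:        # optionally freeze the early, generic filters
        layer.trainable = False

    n_fuel_classes = 13                    # placeholder class count
    inputs = layers.Input(shape=(224, 224, 3))
    outputs = layers.Dense(n_fuel_classes, activation="softmax")(base(inputs))
    model = Model(inputs, outputs)

The usual rationale is that the early layers encode generic edge and texture filters that transfer across domains even when foreground/background contrast differs, while the later layers and the new head are fine-tuned on the target data.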

3. Extensive tests were carried out on the proposed system to obtain the best architecture for this model through cross-validation. Pre-trained CNN architectures, including VGGNet, ResNet, DenseNet, Inception, and Inception ResNet, were taken as the backbone for extracting visual features from NAIP images, and the best results were obtained with Inception ResNet. Please provide the comparison experiments for the different networks and analyze why Inception ResNet achieved the best performance.
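
The requested comparison is typically a loop over candidate backbones evaluated under the same cross-validation protocol; a hypothetical sketch using the standard keras.applications constructors for the families named above (the specific variants are placeholder choices):

    import tensorflow as tf

    CANDIDATES = {
        "VGG16": tf.keras.applications.VGG16,
        "ResNet50": tf.keras.applications.ResNet50,
        "DenseNet121": tf.keras.applications.DenseNet121,
        "InceptionV3": tf.keras.applications.InceptionV3,
        "InceptionResNetV2": tf.keras.applications.InceptionResNetV2,
    }

    def build_backbone(name):
        """Return an ImageNet-pretrained feature extractor for the named family."""
        return CANDIDATES[name](weights="imagenet",
                                include_top=False, pooling="avg")

    # for name in CANDIDATES:
    #     backbone = build_backbone(name)
    #     ...wrap it in the fusion model, run k-fold cross-validation,
    #     and tabulate validation accuracy per candidate...

A table of per-backbone cross-validation accuracy, together with a short note on how Inception ResNet's combination of multi-scale Inception modules and residual connections may help, would address this comment.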

4. In the Model Development and Evaluation section, I suggest that a detailed flow chart of the deep learning model framework be considered.

5. The purpose of this paper is to develop a large-scale ground fuel identification model, but it is recommended to add an experimental comparison between this model and existing models on the same large-scale dataset to highlight the advantages of the proposed model.


Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

The proposed paper addresses an important current topic and uses an innovative approach to provide new possibilities for improving wildfire modelling through better knowledge of wildland characteristics.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

This extremely well-written proof-of-concept paper presents the utility of a machine-learning approach for surface fuel identification across a broad geographic area. By combining a range of remote sensing and biophysical datasets, it introduces a flexible, effective, and relatively computationally efficient method for fuel classification. I think the uncertainty raster produced by the model represents a significant benefit to this approach over other fuel classification methods. I do have some concerns about taking the next step of moving this from a concept paper to a robust method of landscape fuel classification.

I think the paper would benefit from a more frank discussion about what an adequate field-based training dataset would look like to implement this method across large scales. There is discussion of FIA as one potential dataset; however, there are only 5,369 FIA plots in California, with just 10% of those being measured each year. In contrast, this work started with 40,000 training points. Further, I do not believe that assigning fire behavior fuel models is a component of the standard FIA monitoring protocol, so it is unclear how useful these data would be as a training dataset. Cross-walking the FIA data to surface fuel models through methods such as the Forest Vegetation Simulator rulesets comes with its own set of substantial limitations and uncertainties. Therefore, I feel that a more detailed description of the suggested use of FIA data would be beneficial.


Though the methods presented here do represent a promising approach, I think the paper needs to more directly discuss the real-world challenges that must be overcome in order for it to successfully map surface fuels at scale. There are multiple efforts underway to tackle the very real and pressing need for improved fuel maps, and though I see this deep learning approach as a great potential tool, I do not feel the paper in its current form adequately addresses the significant challenge of acquiring robust training data.

Author Response

Please see the attachment.

Author Response File: Author Response.docx
