Preliminary Results from a Wildfire Detection System Using Deep Learning on Remote Camera Images
Reviewer 1 Report
Wildfire detection systems based on ground video stations have been intensively studied over the last couple of decades; therefore, all digital image analysis methods and technologies developed during this period have also been applied to wildfire detection. In this field, machine learning (ML) is currently the hottest topic, so it is natural that it has also been applied to this very important task.
This paper is very well written, quite short, and self-explanatory, so I could find almost no objections; therefore, I suggest publishing it almost in its present form.
This paper may not offer many scientific contributions: the authors have applied a well-known ML image-recognition method to wildfire detection, though there are some contributions concerning the adaptive training process. On the other hand, from a practical point of view this work is important because it could considerably improve existing video-based wildfire detection systems. A valuable contribution is that the authors provided the source code as open source.
In order not to highlight only the good points, here are some comments that would make this paper even better:
1) This is not the first paper concerning the application of deep learning to wildfire detection; therefore, a comparison of the proposed method with other deep-learning-based papers is missing from Section 2.
2) The same holds for Section 3 about evaluation.
3) Usually, papers describing a new wildfire detection method compare it on the same training data set with other similar methods, in this case methods based on deep learning.
4) I would like to see why the InceptionV3 model is the best one for this task, but as the authors emphasize, they intend to do that in the future, so I will not insist on having that part in this paper; instead, it would be good to address comment 3.
5) The authors made the source code publicly available, which is very good, but I think it would also be useful to make the image base used for training and testing publicly available, so that other researchers could test their ideas on the same image base.
Thank you for reviewing the paper; we truly appreciate your suggestions for improvement. After further research, we were able to find some references and have compared our work to them. We also described why we chose InceptionV3, but please note that we are not claiming it is the best model; we are simply claiming that we were able to get good results by using it. Since the original submission, we have also added more information on detection timing from our system as well as more details on satellite data.
Our plan has always been to publish the dataset, but we needed to secure additional agreements with the camera networks to allow us to distribute the data. We have obtained some permissions and will be publishing roughly a quarter of the data in a week or so (more to follow later); it will be linked from our existing GitHub repository.
Reviewer 2 Report
Comments for author File: Comments.pdf
Thank you for reviewing the paper; we truly appreciate your suggestions for improvement. Since the original submission, we have also added more information on detection timing from our system as well as more details on satellite data. Here are the responses to your specific suggestions.
1-1) We’ve updated the paper to clarify the one experiment comparing fine-tuning vs. full training that we conducted months ago; it was not a fully exhaustive experiment, and we will consider re-evaluating it in the future.
1-2) We added a reference to the TensorFlow-Slim GitHub library containing the model definition.
2-1) We added sample images illustrating the diagonal shifts. The augmentation is not applied to negative (non-smoke) images because we have plenty of those.
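For readers unfamiliar with this kind of augmentation, a diagonal shift simply translates the image by the same offset along both axes. The sketch below is illustrative only: the function name, offset values, and zero-padding behavior are assumptions for the example, not details taken from the paper.

```python
import numpy as np

def diagonal_shift(image: np.ndarray, offset: int) -> np.ndarray:
    """Shift an HxWxC image by `offset` pixels along both the row and
    column axes, filling the vacated border with zeros (black)."""
    shifted = np.zeros_like(image)
    h, w = image.shape[:2]
    if offset >= 0:
        # shift down and to the right
        shifted[offset:, offset:] = image[:h - offset, :w - offset]
    else:
        # shift up and to the left
        shifted[:offset, :offset] = image[-offset:, -offset:]
    return shifted

# Per the response above, augmentation is applied only to positive
# (smoke) images; the offsets here are arbitrary example values.
smoke_img = np.random.randint(0, 255, (299, 299, 3), dtype=np.uint8)
augmented = [diagonal_shift(smoke_img, d) for d in (-20, -10, 10, 20)]
```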
2-2) We added more details on the training vs. validation split as well as precise numbers on test set.
3-1) Please note that 0.5 is just the initial threshold used as an optimization; the real threshold is based on three days' worth of historical data. We’ve updated the text to make that clearer.
3-2) We’ve updated the text to clarify that was an arbitrary selection, and in the future we plan to experiment with different time periods as well as ways to also incorporate weather conditions.
4-1) We’ve added the full confusion matrix for the test set. Internally, we use both the F1 and accuracy metrics (definitions added in the paper) as the primary way to evaluate different model iterations.
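For reference, the standard definitions of accuracy and F1 in terms of binary confusion-matrix counts can be sketched as below. The counts used are illustrative placeholders, not the paper's actual test-set numbers.

```python
def confusion_metrics(tp: int, fp: int, fn: int, tn: int):
    """Standard accuracy and F1 score from binary confusion-matrix
    counts (true/false positives and negatives)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

# Illustrative counts only -- not results from the paper.
acc, f1 = confusion_metrics(tp=80, fp=10, fn=20, tn=90)
```

F1 is often preferred over accuracy here because smoke images are much rarer than non-smoke images, and F1 is insensitive to the large true-negative count.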
5) They are mostly cloud formations, and we’ve added a reference back to the false positives in what is now Figure 6.
6) Our plan has always been to publish the dataset, but we needed to secure additional agreements with the camera networks to allow us to distribute the data. We have obtained some permissions and will be publishing roughly a quarter of the data in a week or so (more to follow later); it will be linked from our existing GitHub repository.
Reviewer 3 Report
Please find my comments attached below.
Comments for author File: Comments.pdf
Thank you for reviewing the paper; we truly appreciate your suggestions for improvement. We’ve added references for other machine-learning-based smoke detection systems and compared our work with those systems, especially on time to detection. We also added more information regarding satellites and their timing. Following your suggestion, we are also requesting that the editor switch the submission from a letter to a paper. Here are the responses to your specific suggestions.
Line 238: We’ve updated the text to add the definition of the accuracy metric we’re using.
Table 2: We’ve updated the text in the table to point out that it’s due to suppression of repeat detections.
Line 311: We’ve switched to the Vodacek reference you’ve provided.
Reviewer 2 Report
The work looks practically important. Hopefully the data will be fully opened to other researchers so they can make contributions to this important task.