Article
Peer-Review Record

End-to-End Prediction of Lightning Events from Geostationary Satellite Images

Remote Sens. 2022, 14(15), 3760; https://doi.org/10.3390/rs14153760
by Sebastian Brodehl 1,*, Richard Müller 2, Elmar Schömer 1, Peter Spichtinger 3 and Michael Wand 1
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 15 June 2022 / Revised: 30 July 2022 / Accepted: 2 August 2022 / Published: 5 August 2022
(This article belongs to the Special Issue Artificial Intelligence for Weather and Climate)

Round 1

Reviewer 1 Report

The article is devoted to the development of a method for the nowcasting of thunderstorms based on deep convolutional neural networks. Forecasts are made in 30-minute increments for lead times of up to 180 minutes.

Remote sensing data and the LINET lightning detection network are used as input. The problem is solved using the well-known U-Net architecture, originally developed for biomedical image segmentation. Comparative evaluations against the DWD nowcasting method are presented, indicating the superiority of the proposed method over the approach used by DWD.

There are several remarks on the article:

 

1. It is not clear why meteorological information, for example wind speed at cloud height, is not used.

 

2. It is not entirely clear from the text whether two- or three-dimensional convolutions are used in the neural network, or both in different parts of the network. Line 230 states that the convolution kernel has a size of 3×3, i.e., that two-dimensional convolutions are used. In that case, it is unclear how the previous remote sensing frames are used. Earlier (Fig. 2, line 211), three-dimensional inputs are mentioned (the third dimension being time). Both readings are sketched below.
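For illustration, a minimal PyTorch sketch of the two possible readings; the tensor shapes are assumptions chosen for the example, not taken from the paper:

import torch

frames = torch.randn(1, 4, 12, 64, 64)  # assumed shape: (batch, time, channels, H, W)

# Reading A: past frames folded into the channel axis, then 2D 3x3 convolutions
conv2d = torch.nn.Conv2d(4 * 12, 32, kernel_size=3, padding=1)
out2d = conv2d(frames.flatten(1, 2))    # -> (1, 32, 64, 64)

# Reading B: time kept as a separate axis, then 3D convolutions
conv3d = torch.nn.Conv3d(12, 32, kernel_size=3, padding=1)
out3d = conv3d(frames.transpose(1, 2))  # -> (1, 32, 4, 64, 64)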

3. A separate neural network is proposed for each lead time, and the number of filters increases with increasing lead time, as indicated in Tables 3 and 4. However, for lead times of more than 60 minutes, Tables 3 and 4 give different values for the number of bottleneck channels.

4. According to Table 4, the learning rate is 0.5. This value is very large for SGDW; is it correct? The default value is 0.001 (https://pytorch-optimizer.readthedocs.io/en/latest/_modules/torch_optimizer/sgdw.html); see the sketch below.
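For reference, a minimal sketch (placeholder model, not the authors' setup) contrasting the documented default with the value reported in Table 4:

import torch
import torch_optimizer

model = torch.nn.Linear(10, 1)  # placeholder model, for illustration only

# Defaults from the pytorch-optimizer documentation linked above
opt_default = torch_optimizer.SGDW(model.parameters(), lr=1e-3, weight_decay=1e-2)

# Value reported in Table 4 of the manuscript, 500x the default
opt_table4 = torch_optimizer.SGDW(model.parameters(), lr=0.5)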

5. With the split into training, test, and validation sets according to Table A1, the same thunderstorms at different stages of their development can end up in different datasets, so the assumption that these datasets are independent is highly doubtful. This is indirectly confirmed by the worse results for the independent year 2021 (Fig. 4). A storm-level split, sketched below, would avoid this.
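For example, a group-wise split along these lines (storm_ids is a hypothetical per-sample array of storm identifiers; the sample counts are made up) would keep all frames of one thunderstorm in the same subset:

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X = np.arange(100).reshape(-1, 1)        # dummy samples (e.g., image frames)
storm_ids = np.repeat(np.arange(20), 5)  # 20 storms, 5 frames each (assumed)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, groups=storm_ids))

# No storm appears in both subsets
assert not set(storm_ids[train_idx]) & set(storm_ids[test_idx])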


6. It would be useful to describe the hardware used for the computations and the performance achieved.

Reviewer 2 Report

Please see the attachment. 

Comments for author File: Comments.pdf

Author Response

Please see the attachment. 

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors developed a machine learning model to forecast future lightning events with a lead time of up to 180 minutes based on past satellite and lightning images. One important contribution is the customized loss function, which aims to handle the imbalance between positive and negative classes. The paper is recommended for publication in the journal Remote Sensing, subject to the following comments:

 

1. One important contribution is the customized loss function, which is modified from the conventional cross-entropy loss. Could you please compare the performance of your model with the customized loss function against that with the conventional loss function? This would make the contribution of your customized loss function much clearer. A sketch of such a comparison follows.
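For illustration only, a minimal sketch of such a comparison; the pos_weight value and tensor shapes are stand-in assumptions, not the paper's weighting scheme:

import torch

logits = torch.randn(4, 1, 64, 64)                   # predicted lightning logits (assumed shape)
targets = (torch.rand(4, 1, 64, 64) < 0.02).float()  # rare positive class, ~2% of pixels

conventional = torch.nn.BCEWithLogitsLoss()
weighted = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor(50.0))  # up-weight rare positives

print(conventional(logits, targets).item())
print(weighted(logits, targets).item())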

 

2. I am a little confused by the weight determination in the customized loss function. From lines 200-201, I understand that the weights are pre-calculated from the training set. Will the weights change much if we switch to another training set, or will they stay almost the same across different training, validation, and test sets? How do they impact the model performance? A sketch of one common way such a weight is pre-computed is given below.
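For context, one common way to pre-compute such a weight from training labels (a sketch under assumed dummy data, not the authors' exact formula from lines 200-201):

import torch

def pos_weight_from_labels(labels: torch.Tensor) -> torch.Tensor:
    """Ratio of negative to positive pixels, a common choice for pos_weight."""
    n_pos = labels.sum()
    n_neg = labels.numel() - n_pos
    return n_neg / n_pos.clamp(min=1.0)

train_labels = (torch.rand(1000, 64, 64) < 0.02).float()  # dummy training labels, ~2% positives
print(pos_weight_from_labels(train_labels))               # ~49, i.e., close to the 50.0 used above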

 

3. The model inputs are the satellite and lightning images. Can you also evaluate the model performance when the input is only the satellite images or only the lightning images?

 

Author Response

Please see the attachment. 

Author Response File: Author Response.pdf

Reviewer 4 Report

 

In my opinion the paper is well written. I have only some minor comments, which are reported below.

 

The abbreviation ML in Table 5 needs to be defined, or perhaps added to the abbreviation list.

 

Figure 4 caption: should “impact factor” be “improvement factor”?

 

Figure 4 and Figure 5 should be moved immediately after they are first mentioned in the text. In the current version of the manuscript, these figures appear in a different section from the one in which they are mentioned.

 

Line 436: “2017/17” should be “2016/17”.

Author Response

Please see the attachment. 

Author Response File: Author Response.pdf
