Article

3D-UNet-LSTM: A Deep Learning-Based Radar Echo Extrapolation Model for Convective Nowcasting

Shiqing Guo, Nengli Sun, Yanle Pei and Qian Li
The College of Meteorology and Oceanography, National University of Defense Technology, Changsha 410005, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(6), 1529; https://doi.org/10.3390/rs15061529
Submission received: 26 January 2023 / Revised: 5 March 2023 / Accepted: 6 March 2023 / Published: 10 March 2023

Abstract

Radar echo extrapolation is a commonly used approach for convective nowcasting. The evolution of convective systems over a very short term can be foreseen according to the extrapolated reflectivity images. Recently, deep neural networks have been widely applied to radar echo extrapolation and have achieved better forecasting performance than traditional approaches. However, it is difficult for existing methods to combine predictive flexibility with the ability to capture temporal dependencies at the same time. To leverage the advantages of the previous networks while avoiding the mentioned limitations, a 3D-UNet-LSTM model, which has an extractor-forecaster architecture, is proposed in this paper. The extractor adopts 3D-UNet to extract comprehensive spatiotemporal features from the input radar images. In the forecaster, a newly designed Seq2Seq network exploits the extracted features and uses different convolutional long short-term memory (ConvLSTM) layers to iteratively generate hidden states for different future timestamps. Finally, the hidden states are transformed into predicted radar images through a convolutional layer. We conduct 0–1 h convective nowcasting experiments on the public MeteoNet dataset. Quantitative evaluations demonstrate the effectiveness of the 3D-UNet extractor, the newly designed forecaster, and their combination. In addition, case studies qualitatively demonstrate that the proposed model has a better spatiotemporal modeling ability for the complex nonlinear processes of convective echoes.


1. Introduction

Convective nowcasting usually refers to forecasting the evolution trends of convective systems for lead times of up to a few hours, which is significant for protecting lives and property and supporting outdoor activities [1,2,3]. However, it remains challenging due to the suddenness, rapid changes, and inherent uncertainty of convective systems.
In most cases, extrapolation-based forecasts have higher skill for lead times of up to 1–2 h. Spatiotemporal extrapolation techniques use statistical models or data-driven models to extrapolate radar or satellite images into the imminent future. After the extrapolation results are obtained, convective nowcasting can be conducted with radar echo reflectivity values ≥ 35 dBZ [4] or cloud-top brightness temperatures below a certain threshold [5], and convective precipitation fields can also be estimated with the Z-R relation [6] and nonlinear mapping algorithms [7,8].
Traditional extrapolation techniques are usually based on statistical models, and most of them follow the framework of Lagrangian persistence, which utilizes the motion field calculated from recent images to extrapolate the latest available image under the assumption that the intensity and motion are constant [9]. These methods can be roughly divided into object-based extrapolation [10,11,12] and region-based extrapolation approaches [9,13,14,15]. Object-based extrapolation first identifies a convective storm cell and then extrapolates its trajectory based on the calculated motion vectors; this technique is mainly suitable for nowcasting convective storms with high intensity and stability. Region-based extrapolation focuses on the image and extrapolates all grid values without specific classifications. However, the performance of traditional extrapolation techniques is poor when they are used to forecast rapidly changing weather systems, especially for severe convection storms with abrupt intensity, location and size changes [2,16].
Recently, the continuous development of deep learning has contributed significantly to the modeling capabilities of extrapolation techniques. Deep neural networks (DNNs) are capable of modeling nonlinear processes in observation images, thus depicting complicated and rapidly developing weather phenomena such as the initiation, dissipation, and rotation of clouds. On the other hand, a data-driven solution enables DNNs to learn local weather patterns from massive historical observations, making them more suitable for regional convective forecasts. Furthermore, many studies have demonstrated that deep learning-based extrapolation methods perform better than traditional statistical extrapolation techniques [17,18,19,20]. Among those methods, the commonly used DNNs are convolutional recurrent neural networks (ConvRNNs) and convolutional neural networks (CNNs) [17,18].
ConvRNNs can explicitly model the temporal dependencies of consecutive observation images by recursively applying stacked ConvRNN units along the time direction, transmitting and updating the internal states. In prior works, most deep learning practitioners used ConvRNNs to address extrapolation-based nowcasting for convective storms and precipitation. For example, Shi et al. [17] proposed convolutional long short-term memory (ConvLSTM) to extrapolate radar images; this approach uses convolution operations instead of full connections in its state transitions. Shi et al. [21] then designed a more reasonable encoding-forecasting structure and proposed the trajectory-gated recurrent unit (TrajGRU) model to address the location-invariance problem existing in ConvLSTM. To memorize spatial and temporal information simultaneously, Wang et al. [22] presented a general framework called the predictive RNN (PredRNN), which makes the states flow in two directions. Tuyen et al. [23] designed RainPredRNN, which reduces the number of calculation operations relative to PredRNN. In addition, Jing et al. [24] exploited radar images at three altitudes to extrapolate those at one altitude and addressed the blurry prediction problem with adversarial training. A generative adversarial network (GAN) architecture was also applied by Ravuri et al. [19] to generate sharper future radar images via a ConvRNN. Moreover, since observation images can be considered video sequences continuously recorded with a fixed “camera”, other advanced ConvRNN models for video prediction [25,26] can also be applied to convective nowcasting.
Although it has been shown that simple convolutional architectures can outperform recurrent architectures on diverse sequence modeling tasks [27,28], ConvRNNs are still more commonly used for spatiotemporal sequence forecasting than CNNs. In the past two years, the application of CNNs to extrapolation-based convective nowcasting has attracted increasing attention. Unlike ConvRNN-based extrapolation methods that explicitly model time, CNN-based approaches treat the forecasting task as an image-to-image translation problem, which aims to directly transform multiple concatenated past images into a future image or image sequence through layer-by-layer mapping [18,29,30,31]. Among the numerous available CNN models, UNet [32] can combine high-level and low-level features through skip connections to exploit more comprehensive information for future image generation, leading to its increasing application in radar-based nowcasting. For example, Agrawal et al. [18] used UNet to provide three pixel-level binary classifications indicating whether the future rainfall intensity in a given pixel exceeded the corresponding thresholds. Instead of predicting classes, UNet was applied in [20,33,34,35] to extrapolate radar images directly. Han et al. [20] demonstrated that UNet achieved extrapolation performance comparable to that of a ConvRNN-based model. The recent successes of UNet in the above applications indicate that the role of CNNs in extrapolation-based convective nowcasting needs to be reconsidered.
However, these two types of DNNs still have some limitations. First, it is not easy for standard ConvRNN models to tailor their predictions at different timestamps. One reason is that their sequence-to-sequence (Seq2Seq) structures use the same weights to generate the hidden states of all timestamps. Second, CNN models mainly emphasize spatial features while weakening the temporal variations between the input images, leading to difficulty learning relatively long-range temporal dependencies. Even though a few studies have noticed that 3D convolutions can extract spatiotemporal representations [36,37], they still follow the image-to-image translation paradigm and rarely explicitly model the temporal correlations among the extracted features in the prediction stage.
To leverage the advantages of the UNet and ConvRNN models while avoiding the above limitations, we develop a radar echo extrapolation model called 3D-UNet-LSTM for convective nowcasting, which combines 3D-UNet and a newly designed Seq2Seq network in an extractor-forecaster architecture. We first adopt 3D-UNet as the extractor to extract the spatiotemporal features of the input radar reflectivity images while retaining more detailed information, such as textures. In the forecaster, the Seq2Seq network uses different unstacked ConvLSTM layers to iteratively generate hidden states for different future timestamps. Finally, these hidden states are mapped to predicted images via a convolution layer.
The remainder of this paper is organized as follows. Section 2 describes the data used in this paper, and Section 3 illustrates the proposed model, the loss function, and the evaluation metrics in detail. The experimental results are presented in Section 4. Finally, a summary and discussions are given in Section 5. Appendix A briefly introduces some prior knowledge related to our work.

2. Data

The radar reflectivity data used in this paper are provided by an open meteorological database named MeteoNet [38], which covers two geographical areas of France, the northwest zone (NW) and the southeast zone (SE) shown in Figure 1, and spans three years (2016 to 2018) at 5-min intervals.
The data were collected by the Doppler radar network of METEO FRANCE, with 3D reflectivity maps obtained from each radar's scan of the sky. The spatial resolution of the radar data is 0.01 degrees, and the projection system used is EPSG:4326.
To build our dataset, we first generate 1.5-h radar image sequences (each sequence has 19 radar images) every 25 min. Next, sequence samples are selected if the total number of pixels with reflectivity values ≥ 35 dBZ in one of their last 12 images exceeds 2000; a total of 12,503 sequence samples are collected. To reduce the computational and memory costs while maintaining adequate spatial resolution, the images in each sequence sample are resized from 565 × 784 to 104 × 160 through bilinear interpolation, yielding a spatial resolution of approximately 0.05 degrees. Finally, to test the generalization ability of the proposed model, we ensure that the training, validation, and test subsets do not overlap in time; the details are shown in Table 1.
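As an illustration of the sampling rule just described (our sketch, not the authors' released code), the following function keeps a 19-frame sequence whenever any of its last 12 frames contains more than 2000 pixels at or above 35 dBZ; the subsequent bilinear resizing to 104 × 160 is omitted, and the function name is hypothetical.

```python
# Illustrative sketch of the sequence-selection rule. `frames` is assumed to
# be a NumPy array of shape (T, H, W) holding 5-min radar frames in dBZ.
import numpy as np

def select_convective_sequences(frames, seq_len=19, stride=5,
                                threshold_dbz=35.0, min_pixels=2000):
    """Slide a 19-frame (1.5 h) window every 25 min (5 frames) and keep the
    sequence if any of its last 12 frames has >2000 pixels >= 35 dBZ."""
    samples = []
    for start in range(0, len(frames) - seq_len + 1, stride):
        seq = frames[start:start + seq_len]
        # Per-frame count of strong-echo pixels in the last 12 frames.
        strong = (seq[-12:] >= threshold_dbz).sum(axis=(1, 2))
        if strong.max() > min_pixels:
            samples.append(seq)
    return samples
```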
In addition, the reflectivity (in dBZ) can be approximated to a rainfall intensity R (mm/h) by using the Marshall-Palmer relation:
$$ \mathrm{dBZ} = 10 \log_{10} a + 10 b \log_{10} R $$
where a = 200 and b = 1.6.
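For readers who wish to convert between reflectivity and rain rate, a minimal sketch of the relation above (with a = 200 and b = 1.6) is:

```python
# Marshall-Palmer conversion between reflectivity (dBZ) and rain rate (mm/h).
import numpy as np

A, B = 200.0, 1.6

def rain_rate_to_dbz(rain_mm_h):
    # dBZ = 10*log10(a) + 10*b*log10(R)
    return 10.0 * np.log10(A) + 10.0 * B * np.log10(rain_mm_h)

def dbz_to_rain_rate(dbz):
    # Inverting the relation: R = 10**((dBZ - 10*log10(a)) / (10*b))
    return 10.0 ** ((dbz - 10.0 * np.log10(A)) / (10.0 * B))

# e.g., 35 dBZ corresponds to roughly 5.6 mm/h, and 18 dBZ to about 0.5 mm/h.
```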

3. Methodology

Consecutive radar images can directly show the evolution of convective systems. In this section, we propose a DNN model called 3D-UNet-LSTM to extrapolate future radar reflectivity images; the locations and intensities of convective systems over the very short term can then be foreseen from the extrapolated results. Given M consecutive radar images, the model predicts the subsequent N radar images. In our implementation, we use the radar images from the past 0.5 h to forecast those in the next 1 h (i.e., M = 7, N = 12). We describe the architecture of 3D-UNet-LSTM in Section 3.1 and introduce the loss function and evaluation metrics in Section 3.2 and Section 3.3, respectively.

3.1. 3D-UNet-LSTM

The proposed 3D-UNet-LSTM is an end-to-end trainable model with an extractor-forecaster architecture, as illustrated in Figure 2. In the extractor part, we use 3D-UNet [39] to extract the comprehensive spatiotemporal features of consecutive radar images. It is composed of multiple 3D convolutional layers with kernel sizes of 2 × 3 × 3, each of which is followed by a rectified linear unit (ReLU) activation function. Like UNet, the extractor contains a downsampling path, a symmetrical upsampling path and skip connections. Since skip connections require the temporal and spatial sizes of the features before each downsampling operation to be consistent with those observed after the symmetrical upsampling operation, we add a zero image before the 7 consecutive radar images and stack them along the temporal dimension as the model input. In the downsampling path, the temporal and spatial sizes of the input sequence are progressively halved by using three 3D convolutional layers with strides of 2, each followed by two 3D convolutional layers, and spatiotemporal features with different representation levels are extracted. In the upsampling path, the high-level features gradually return to the original size via three transposed 3D convolutional layers, each followed by two 3D convolutional layers. Furthermore, low-level features are received from the downsampling path through skip connections, bringing detailed information to the more comprehensive representations. Batch normalization (BN) [40] is used after the last convolutional layer to mitigate the vanishing gradient effect during backward propagation. After that, the comprehensive spatiotemporal features of the radar image sequence are output.
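To make the data flow concrete, the following Keras sketch mirrors the extractor described above. The 2 × 3 × 3 kernels, three stride-2 downsampling stages, symmetrical transposed-convolution upsampling, skip connections and the final batch normalization follow the text; the channel widths (base = 32) and the exact per-stage layer counts are our assumptions for illustration.

```python
# A simplified sketch of the 3D-UNet extractor (not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers

def conv3d_block(x, filters):
    # Two 3D convolutional layers (kernel 2x3x3), each followed by ReLU.
    for _ in range(2):
        x = layers.Conv3D(filters, (2, 3, 3), padding="same",
                          activation="relu")(x)
    return x

def build_3d_unet_extractor(input_shape=(8, 104, 160, 1), base=32):
    # Input: one zero frame + 7 radar frames, stacked along time (8 steps).
    inp = tf.keras.Input(shape=input_shape)
    x = conv3d_block(inp, base)
    skips = []
    # Downsampling path: stride-2 convs halve the temporal and spatial sizes.
    for i in range(1, 4):
        skips.append(x)
        x = layers.Conv3D(base * 2 ** i, (2, 3, 3), strides=2,
                          padding="same", activation="relu")(x)
        x = conv3d_block(x, base * 2 ** i)
    # Upsampling path: transposed convs restore the original sizes, and skip
    # connections bring the detailed low-level features back in.
    for i in range(2, -1, -1):
        x = layers.Conv3DTranspose(base * 2 ** i, (2, 3, 3), strides=2,
                                   padding="same", activation="relu")(x)
        x = layers.Concatenate()([x, skips[i]])
        x = conv3d_block(x, base * 2 ** i)
    x = layers.BatchNormalization()(x)  # BN after the last convolutional layer
    return tf.keras.Model(inp, x)
```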
The forecaster part is designed to further exploit the spatiotemporal features extracted by the extractor and to output the predicted radar images. In this part, a Seq2Seq network is presented to explicitly model time and extrapolate the hidden states step-by-step. ConvLSTM is selected as the basic unit due to its simplicity and effectiveness. Regarding the Seq2Seq structure, the two common structures in Figure A1 use shared parameters to generate the hidden states of all future timestamps, so their ability to make corresponding adjustments to the specific situations encountered at different future timestamps may be limited. To alleviate this problem, we utilize N ConvLSTM layers with different parameters to individually generate the hidden states for future timestamps in an iterative way, as shown in Figure 2. Each ConvLSTM layer has a step length of 8, a convolutional kernel size of 3 × 3 and 64 hidden state channels, thereby exploiting the long-term spatiotemporal information of the inputs and obtaining a hidden state correlated with a specific future timestamp. The hidden state output by the previous ConvLSTM layer is concatenated behind the inputs of the last 7 timestamps of that layer; these are then fed into the next layer to output the hidden state of the next future timestamp. In addition to utilizing different layers to tailor the predictions for different timestamps, the iterative design ensures that previous features, whether extracted by the extractor or generated by specific ConvLSTM layers, are reused multiple times; thus, it also helps improve the quality of long-term forecasts. Finally, the hidden state at each future timestamp is converted to a corresponding radar reflectivity image through a 2D convolutional layer with a kernel size of 1 × 1.
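A minimal sketch of this iterative procedure is given below. It assumes the extractor output has 64 channels so that hidden states can be concatenated along the time axis, and it shares one 1 × 1 output convolution across timestamps for simplicity; both are our assumptions rather than confirmed details of the original implementation.

```python
# Sketch of the iterative Seq2Seq forecaster (not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers

num_future = 12
# One ConvLSTM layer with its own parameters per future timestamp.
conv_lstms = [layers.ConvLSTM2D(64, 3, padding="same")
              for _ in range(num_future)]
to_image = layers.Conv2D(1, 1)  # 1x1 projection from hidden state to image

def iterative_forecast(feats):
    """feats: extractor output of shape (batch, 8, H, W, 64)."""
    preds, seq = [], feats
    for lstm in conv_lstms:
        h = lstm(seq)                  # (batch, H, W, 64) from an 8-step input
        preds.append(to_image(h))      # hidden state -> predicted radar image
        # Concatenate h behind the last 7 steps to form the next 8-step input.
        seq = tf.concat([seq[:, -7:], h[:, tf.newaxis]], axis=1)
    return tf.stack(preds, axis=1)     # (batch, 12, H, W, 1)
```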

3.2. Loss Function

In many spatiotemporal sequence forecasting tasks, such as video prediction and traffic flow prediction, where the pixel values of images are relatively evenly distributed, the mean absolute error (MAE) and mean squared error (MSE) are used as the loss functions to train DNN models. However, for radar reflectivity images, the proportion of low-intensity pixels is much larger than that of high-intensity pixels [21]. Training the extrapolation model with the original MAE and MSE losses will make it focus on predicting low-intensity pixels (indicating no weather echoes and weak echoes), limiting the forecasting effect in areas with relatively strong echoes associated with hazardous convection. To achieve better forecasting performance for strong echoes, we introduce a balanced reconstruction loss function $L_{B\text{-}rec}$ that assigns greater weights to the errors of higher reflectivity values in the calculation process:
$$ L_{B\text{-}rec} = \frac{1}{NHW} \sum_{n=1}^{N} \sum_{i=1}^{H} \sum_{j=1}^{W} weight_{t+n,i,j} \times \left[ \left| I_{t+n,i,j} - \hat{I}_{t+n,i,j} \right| + \left( I_{t+n,i,j} - \hat{I}_{t+n,i,j} \right)^{2} \right] $$
$$ weight_{t+n,i,j} = \begin{cases} 1, & I_{t+n,i,j} < 15\,\mathrm{dBZ} \\ 2, & 15\,\mathrm{dBZ} \le I_{t+n,i,j} < 35\,\mathrm{dBZ} \\ 5, & 35\,\mathrm{dBZ} \le I_{t+n,i,j} \end{cases} $$
where $I_{t+n,i,j}$ denotes the observed reflectivity value of the $(i,j)$-th pixel of the future image at timestamp $t+n$, and $\hat{I}_{t+n,i,j}$ denotes the corresponding predicted value. $weight_{t+n,i,j}$ is the weight assigned to each pixel according to the range of its observed reflectivity. $H$ and $W$ are the height and width of the radar images, respectively. As in previous work [20,21,41], the weight values are set empirically and finalized by experiment: the prediction errors of high reflectivity values receive larger weights than those of low reflectivity values, but the weights differ by a factor of only 2–3. We verify the effectiveness of the balanced reconstruction loss function in Section 4.
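A compact TensorFlow sketch of this loss is shown below. It assumes the targets and predictions are expressed in dBZ; if the images are normalized to [0, 1], the 15 and 35 dBZ thresholds must be scaled accordingly.

```python
# Balanced reconstruction loss: weighted sum of MAE and MSE per pixel.
import tensorflow as tf

def balanced_reconstruction_loss(y_true, y_pred):
    # Per-pixel weights 1/2/5 according to the observed reflectivity range.
    w = tf.where(y_true < 15.0, 1.0,
                 tf.where(y_true < 35.0, 2.0, 5.0))
    err = y_true - y_pred
    return tf.reduce_mean(w * (tf.abs(err) + tf.square(err)))
```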

3.3. Evaluation Metrics

To quantitatively evaluate the nowcasting performance of extrapolation models, we apply the probability of detection (POD), false-alarm ratio (FAR), bias score (BIAS), critical success index (CSI), root mean square error (RMSE) and correlation coefficient (CC) and design a temporally weighted average CSI (twaCSI) measure. These metrics can be computed based on a given threshold τ, representing a corresponding echo intensity level. CSI can provide a ratio of correct predictions. For its calculation, the observed image and predicted image are first binarized by a threshold τ. A pixel value greater than τ is set to 1; otherwise, it is set to 0. Then, TP, FN, and FP, which denote the numbers of true positives (prediction = 1, observation = 1), false negatives (prediction = 0, observation = 1) and false positives (prediction = 1, observation = 0), respectively, are obtained. The CSI is computed as
$$ CSI_{\tau} = \frac{TP}{TP + FN + FP} $$
Furthermore, considering that forecasting radar images becomes more challenging with increasing lead time, we design $twaCSI_{\tau}$ to evaluate the temporal sequence of predicted radar images. It emphasizes the CSI scores of the images predicted at later timestamps by assigning them heavier weights and is defined as
$$ twaCSI_{\tau} = \frac{\sum_{n=1}^{N} n \cdot CSI_{t+n}^{\tau}}{\sum_{n=1}^{N} n} $$
where $CSI_{t+n}^{\tau}$ is the CSI score of the predicted image at timestamp $t+n$.
POD and FAR emphasize missed events and false alarms, respectively, while BIAS indicates the deviation of the predictions:
$$ POD_{\tau} = \frac{TP}{TP + FN} $$
$$ FAR_{\tau} = \frac{FP}{TP + FP} $$
$$ BIAS_{\tau} = \frac{TP + FP}{TP + FN} $$
When BIAS > 1, the forecast is stronger than the observation; when BIAS < 1, the forecast is weaker; and when BIAS = 1, the forecast deviation is zero, corresponding to the highest prediction skill. In addition, for each predicted image, we utilize $RMSE_{\tau}$ and $CC_{\tau}$ to present the prediction error and consistency in the area where the observed reflectivities are greater than τ. Denoting the sets of observed values larger than τ and the corresponding predicted values as $s$ and $\hat{s}$, respectively, $RMSE_{\tau}$ and $CC_{\tau}$ are calculated as follows:
$$ RMSE_{\tau} = \sqrt{ \frac{1}{|s|} \sum_{i=1}^{|s|} \left( s_i - \hat{s}_i \right)^2 } $$
$$ CC_{\tau} = \frac{\mathrm{Cov}(s, \hat{s})}{\sqrt{\mathrm{Var}(s) \cdot \mathrm{Var}(\hat{s})}} $$
where $|s|$ represents the number of values in set $s$.
Specifically, we select 18 dBZ (0.5 mm/h, indicating rain or no rain [21]) and 35 dBZ (used to identify strong convection [10]) as the thresholds.
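The following NumPy sketch (our illustration, not official evaluation code) computes these scores for a single threshold τ; a small epsilon guards against division by zero in empty-event cases.

```python
# Categorical scores, twaCSI, RMSE and CC as defined above. `obs` and `pred`
# are 2D reflectivity arrays in dBZ; `obs_seq`/`pred_seq` are lists of such
# arrays ordered by lead time.
import numpy as np

def contingency(obs, pred, tau):
    o, p = obs > tau, pred > tau          # binarize by the threshold tau
    tp = np.sum(p & o)                    # hits
    fn = np.sum(~p & o)                   # misses
    fp = np.sum(p & ~o)                   # false alarms
    return tp, fn, fp

def csi(obs, pred, tau, eps=1e-9):
    tp, fn, fp = contingency(obs, pred, tau)
    return tp / (tp + fn + fp + eps)

def pod_far_bias(obs, pred, tau, eps=1e-9):
    tp, fn, fp = contingency(obs, pred, tau)
    return (tp / (tp + fn + eps),
            fp / (tp + fp + eps),
            (tp + fp) / (tp + fn + eps))

def twa_csi(obs_seq, pred_seq, tau):
    # Lead time n gets weight n, emphasizing longer-lead performance.
    n = np.arange(1, len(obs_seq) + 1)
    scores = np.array([csi(o, p, tau) for o, p in zip(obs_seq, pred_seq)])
    return np.sum(n * scores) / np.sum(n)

def rmse_cc(obs, pred, tau):
    # Error and consistency where the observed reflectivity exceeds tau.
    mask = obs > tau
    s, s_hat = obs[mask], pred[mask]
    rmse = np.sqrt(np.mean((s - s_hat) ** 2))
    cc = np.corrcoef(s, s_hat)[0, 1]
    return rmse, cc
```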

4. Experiments and Results

To evaluate the effectiveness and superiority of the proposed 3D-UNet-LSTM model, extrapolation-based 0–1 h nowcasting experiments are conducted. For comparison, six baseline models and a state-of-the-art model are reimplemented: the Eulerian persistence model (hereafter called Persistence), which assumes that future radar images do not differ from the most recent observed image; a conventional model based on optical flow (Rainymotion [14]); and five deep learning models, namely three four-layer ConvRNN models (ConvLSTM [17], PredRNN [22] and SA-ConvLSTM [26]), a UNet [32] model, and the state-of-the-art RainPredRNN [23]. Among these models, ConvLSTM adopts the “same-side” structure, while PredRNN and SA-ConvLSTM apply the “opposite-side” structure (see Appendix A.2).
We first separately train the 3D-UNet-LSTM model and the other deep learning models on the training set and validation set following the settings in Section 4.1 and then compare the performance of Persistence, Rainymotion and the well-trained models on the whole test set in Section 4.2. Then, to verify the effectiveness of the model design, Section 4.3 compares the 3D-UNet-LSTM model with two variations, including 3D-UNet. Next, in Section 4.4, we further investigate the impact of the balanced loss and adversarial loss functions on the performance of DNNs in accurately predicting convective echoes. Finally, two representative cases are studied in Section 4.5.

4.1. Implementation Details for Training

The radar reflectivity images are first normalized to [0, 1] and then fed into the DNN models. For a fair comparison, all models are trained with the balanced reconstruction loss function on the training set via the adaptive moment estimation (ADAM) optimizer [42] with an initial learning rate of 10−4. The batch size of each training iteration is set to 4. To prevent overfitting, the training process is stopped if the twaCSI35 obtained on the validation set is not improved for 20 epochs. All experiments are implemented in TensorFlow [43] and executed on a TITAN RTX GPU (24 GB).
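A schematic of this training procedure is sketched below; it is not the authors' released script, and `model`, `train_batches` and `validate_twa_csi35` are hypothetical placeholders for the model and data/evaluation helpers described above.

```python
# Training schematic: ADAM at 1e-4, batch size 4, early stopping when the
# validation twaCSI at 35 dBZ has not improved for 20 epochs.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
best, patience, wait = -1.0, 20, 0

for epoch in range(1000):
    for x, y in train_batches:                     # batches of 4 sequences
        with tf.GradientTape() as tape:
            loss = balanced_reconstruction_loss(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    score = validate_twa_csi35(model)              # hypothetical helper
    if score > best:
        best, wait = score, 0                      # improvement: reset counter
    else:
        wait += 1
        if wait >= patience:                       # 20 stale epochs: stop
            break
```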

4.2. Quantitative Evaluation of Eight Models on the Test Set

We quantitatively evaluate the overall 0–1 h nowcasting performance of the proposed 3D-UNet-LSTM model, RainPredRNN and six baseline models with the CSI, twaCSI, CC and RMSE scores (averaged over all 1137 samples) obtained on the test set. The twaCSI results and the mean CSI, CC and RMSE values obtained for all lead times at thresholds of 18 and 35 dBZ are tabulated in Table 2. Persistence has the poorest scores for all metrics. The optical flow-based Rainymotion approach clearly performs better than Persistence with the help of the calculated motion field. The six well-trained DNN models significantly outperform the above two traditional models, which demonstrates the powerful modeling capability of deep learning. Among the ConvRNN models, although PredRNN achieves almost the same performance as ConvLSTM in terms of the CSI and twaCSI, it obtains higher CC and lower RMSE scores at both thresholds, indicating that the nowcasting values of PredRNN are more precise and closely aligned with the ground truth than those of ConvLSTM. RainPredRNN performs better than PredRNN with the help of the ST-LSTM unit and appropriate hyperparameter settings. SA-ConvLSTM obtains CSI18 and twaCSI18 scores similar to those of ConvLSTM, PredRNN and RainPredRNN, yet it is superior to them when the threshold is set to 35 dBZ, particularly for twaCSI35, implying that SA-ConvLSTM has better nowcasting performance at longer lead times for echoes with high intensity levels. The UNet model, which has no special design for time series modeling, obtains even better scores for all metrics than the above three advanced ConvRNN models at the thresholds of 18 dBZ and 35 dBZ; this is noteworthy, as it shows the high potential of the UNet architecture for extrapolation-based convective nowcasting. The proposed 3D-UNet-LSTM model yields the best nowcasting scores among the eight models, which verifies its superiority. Greater improvements in the CSI and twaCSI are achieved at the 35 dBZ threshold than at the 18 dBZ threshold because we focus more on improving the prediction accuracy for convective echoes, especially at longer lead times. In addition, the best CC and RMSE scores obtained at both thresholds indicate that the predicted radar reflectivities of 3D-UNet-LSTM are more precise and thus better for estimating future rainfall intensities.
The POD, FAR and BIAS values obtained for all lead times at thresholds of 18 and 35 dBZ are tabulated in Table 3. For the forecasting of medium and strong echoes, the BIAS score of our proposed model is greater than 1; that is, the overall forecasts are stronger than the observations, because the model is designed to focus more on strong echoes. The model achieves the best POD and FAR scores at the 35 dBZ threshold (strong echoes).
Beyond that, to directly show the convective nowcasting performance over time, the CSI, CC and RMSE curves produced by the eight models at the 35 dBZ threshold against nowcasting lead times of up to 60 min are plotted in Figure 3. The results show that the performance of all extrapolation models deteriorates with increasing lead time, which is expected and mainly results from unavoidable error accumulation and increasing uncertainty in the forecasting process. RainPredRNN and PredRNN achieve similar performance on all metrics over time. In addition, we notice that although UNet achieves a better overall performance in terms of mean CSI35 and RMSE35 in Table 2 than the three ConvRNN models and RainPredRNN, this is largely due to its better scores at lead times between 5 and 30 min. Later, the performance of UNet gradually becomes comparable to that of SA-ConvLSTM and is finally exceeded by that approach at lead times beyond approximately 45 min. A presumable reason for this phenomenon is that UNet focuses on maintaining or changing the spatial appearance of radar images but fails to capture the internal temporal dependencies, which appears to limit its long-term prediction effectiveness.
In contrast, the proposed 3D-UNet-LSTM produces the best CSI35 value at every lead time within one hour and achieves a score of more than 0.25 for 60-min nowcasts, while those of the other deep learning models are in the range of 0.21 to 0.23. The same is true for RMSE35: the proposed model remains competitive over the whole period, and its superiority becomes increasingly obvious at lead times beyond 30 min. For 60-min nowcasts, it reduces the average error by almost 2 dBZ compared with UNet. In terms of CC35, the prediction results of the proposed model exhibit consistency with the observed values, especially at shorter lead times. Although its performance drops sharply as the lead time increases, our model still achieves the highest CC35 scores among all models. In general, 3D-UNet-LSTM has better early performance than UNet and consistently outperforms SA-ConvLSTM at long lead times, demonstrating its effective spatiotemporal modeling ability and better overall performance for convective nowcasting.

4.3. Evaluation of the Model Design

To evaluate the effectiveness of the 3D-UNet-LSTM model design, we first construct two variations of the model: one that removes the forecaster and retains only the 3D-UNet extractor, and another that replaces the forecaster with a two-layer ConvLSTM network (referred to as ‘3D-UNet + ConvLSTM’). Then, the overall performance of the original ConvLSTM, UNet, 3D-UNet-LSTM and these two variations is compared, as shown in Table 4. When only the 3D-UNet extractor is retained, it still outperforms ConvLSTM and UNet in terms of the metrics at the 35 dBZ threshold, indicating that the 3D-UNet extractor has good potential for convective nowcasting. However, when we attempt to use a common ConvLSTM network to further leverage the features extracted by 3D-UNet and generate future hidden states with shared parameters, the nowcasting performance decreases considerably, becoming even worse than that of the original ConvLSTM. In contrast, when our designed forecaster is utilized to produce future hidden states with different parameters, the model obtains better scores than 3D-UNet, demonstrating the effectiveness of the forecaster.
We also draw the CSI35, CC35 and RMSE35 curves of these methods for different lead times in Figure 4. It can be seen that by combining 3D-UNet and the forecaster, our model has better performance than the other approaches for nearly all lead times. The superiority of its design is more obvious for longer lead times.

4.4. Evaluation of Different Loss Functions

In the following, we train the 3D-UNet-LSTM model with different loss functions and test their effects on the prediction accuracy for convective echo regions. These loss functions are the reconstruction loss (the sum of the MAE and MSE) widely used in video prediction tasks [22,26]; the sum of the reconstruction loss and an adversarial loss, which has been applied to address the blurring problem in echo prediction [24]; the balanced reconstruction loss [21] applied in this paper; and the sum of the balanced reconstruction loss and an adversarial loss [37,44]. The scaling factor of the adversarial loss is set to 0.03 to ensure that it exerts a certain degree of influence on the model training process; when the scaling factor is set to 0.003, its influence is quite slight. The results are shown in Table 5. We can see that without using any weights for reflectivities, the reconstruction loss slightly improves the CSI18 and twaCSI18 scores but yields much poorer performance than the balanced loss functions in terms of the other metrics, especially CSI35 and twaCSI35. When we add an adversarial term to the reconstruction loss, these gaps are slightly narrowed. Among the balanced loss functions, the balanced reconstruction loss applied in this paper obtains the best scores for all evaluation metrics at the 35 dBZ threshold.
Regarding its combination with an adversarial loss, the convective nowcasting performance deteriorates with increasing scaling factors for the adversarial term. It can be concluded that compared with the original reconstruction loss, the balanced loss can significantly improve the convective nowcasting performance of a deep learning model. It seems that adding an adversarial loss to the reconstruction loss can slightly improve the prediction accuracy for convective echoes. However, for the balanced reconstruction loss, adding an adversarial loss term is of no help for further increasing the prediction precision.
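For concreteness, the combined objective evaluated in Table 5 can be sketched as follows, reusing the balanced loss from Section 3.2; the adversarial term follows the definition in Appendix A.3, and the scaling factor corresponds to the 0.003/0.03 settings above.

```python
# Sketch of the combined generator objective. `d_fake` is assumed to be the
# discriminator output D({x, G(x)}) for the predicted sequence.
import tensorflow as tf

def total_generator_loss(y_true, y_pred, d_fake, lambda_adv=0.03):
    adv = tf.reduce_mean(1.0 - d_fake)   # L_adv^g as defined in Appendix A.3
    return balanced_reconstruction_loss(y_true, y_pred) + lambda_adv * adv
```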

4.5. Representative Case Study

To qualitatively evaluate the performance of the proposed model, we select two representative cases from the test set and visually examine the nowcasts produced by different models. The images of two cases, including radar observations and nowcasts, are presented in Figure 5 and Figure 6, respectively, and are displayed every 15 min to show the evolutions of convective systems.
Figure 5 shows a representative case of local strong convective growth over northwest France at a forecasting time of T = 7 August 2018, 11:55 UTC. In the input radar images, it can be seen that an isolated convective cell is located in the west at time T - 30 min, moving northeast together with other dispersed echoes, and the formation of a new strong small-scale convective cell occurs in Region B at forecasting time T. For the ground-truth observations in the next hour, the echoes continue to move in the northeast direction, and during this period, the new convective cell gradually grows and appears to merge with the older cell. Comparing the nowcasting results of each model with the ground truth, one can observe that all models can capture the movements of most echoes. However, the optical flow-based Rainymotion method simply advects the radar echoes. It fails to forecast the subsequent growth and evolution of the newly formed convective cell because it cannot completely model nonlinear processes. In contrast, all deep learning models successfully forecast that the newly formed convective cell will grow at time T + 30 min but underestimate its intensity. This under-forecasting problem, also called blurry prediction, is common when utilizing deterministic deep learning models for radar echo extrapolation, especially with longer lead times; this is mainly because a DNN model tends to average all probable outcomes to a blurry prediction in a case in which it has difficulty dealing with future uncertainty [45]. Nonetheless, the 30-min nowcast obtained by the 3D-UNet-LSTM model is closer to the ground truth in terms of the horizontal extent of the convection than those derived from other models. For the 60-min nowcasts, the forecasted intensities of the old convective cell in the results of other deep learning models deviate considerably from the ground truth, while the 3D-UNet-LSTM model and 3D-UNet model can maintain their intensity values at relatively high levels (≥ 40 dBZ). It is noted that only the 3D-UNet-LSTM model forecasts a further growth trend in the size of the newly formed convective cell from time T + 30 min to T + 60 min, and its 60-min nowcasting result also successfully depicts the merging phenomenon of the two isolated convective echoes that occur in regions A and B one hour later.
Another representative case is shown in Figure 6, which describes the evolution of a severe squall line that occurs in southeast France at a forecasting time of T = 13 August 2018, 05:00 UTC. It is clear from the radar observations that a squall line is moving eastward while the convective area behind it gradually becomes larger, finally developing into a bow echo at time T + 60 min. As in the first case, all models provide relatively accurate moving directions for the quasi-linear convective system. The 30-min nowcasts obtained from all models, especially UNet, achieve good agreement with the radar observations, presumably because the system evolves relatively slowly during the first half hour after forecasting time T. However, for the 60-min nowcasts, it is difficult for the optical flow-based Rainymotion method to predict the subsequent convective evolution. Although the deep learning models successfully forecast that the convective area will expand, significant differences remain between their 60-min nowcasts. For example, the three ConvRNN models give the misleading information that high-impact meteorological hazards (reflectivity ≥ 40 dBZ) tend to decrease. Although UNet and 3D-UNet effectively preserve these intensities, neither they nor the ConvRNN models forecast the bow echo structure at time T + 60 min. Notably, the proposed 3D-UNet-LSTM yields a more trustworthy 60-min nowcast in Region A, with a realistic bow echo structure (the region with reflectivity ≥ 40 dBZ in Figure 6) and a more reasonable intensity distribution than those of the other models. A bow echo bows toward the direction of movement, with generally weaker reflectivity behind the bow. Only the nowcasting results of the proposed approach clearly depict the squall line-to-bow echo transition, indicating that 3D-UNet-LSTM has a better spatiotemporal modeling ability for the complex nonlinear processes of convective echoes.

5. Conclusions

In this paper, we propose a novel deep learning model called 3D-UNet-LSTM to precisely extrapolate radar reflectivity images for convective nowcasting. This model combines a well-known CNN named 3D-UNet and a newly designed Seq2Seq network in an extractor-forecaster architecture. We first apply 3D-UNet as the extractor to extract the comprehensive spatiotemporal representations of input radar images. Then, in the forecaster, the extracted features are further leveraged by the Seq2Seq network to individually generate hidden states for different future timestamps with different ConvLSTM layers. These hidden states are finally transformed into predicted radar images by a convolutional layer.
We conduct comparative experimental studies on a test set. The quantitative evaluation results show that 3D-UNet-LSTM outperforms conventional methods and state-of-the-art deep learning models regarding the prediction of convective echoes, particularly with long lead times. In addition, the evaluation of the model design demonstrates the effectiveness of the 3D-UNet extractor and the newly designed forecaster, as well as their combination. It is noteworthy that UNet-based models, especially 3D-UNet, achieve comparable or even superior performance to that of some ConvRNN-based models. We also verify the effectiveness of the utilized balanced loss function on the model performance for precisely forecasting strong echoes. Finally, representative case studies qualitatively illustrate that the 3D-UNet-LSTM model can better model the nonlinear processes of the evolutions of convective echoes and produce more reasonable and location-accurate nowcasts.
Although the quantitative and qualitative comparisons and analyses verify the superiority and effectiveness of 3D-UNet-LSTM for extrapolation-based convective nowcasting, some limitations remain and should be noted and discussed. First, like other deep learning models, the proposed model has difficulty forecasting convective initiation, which is still challenging for the meteorological community. One main reason is that the input reflectivity images cannot provide a DNN with sufficient early signals and characteristics of convective initiation. Hence, adding relevant radar variables to supplement the input reflectivities may be a promising direction. Second, the loss function has much room for improvement, and introducing an additional classification network with an effective classification loss seems to be a good solution. Third, we currently work on only one benchmark dataset and will conduct studies on different benchmark data. In future work, we will carry out research on these three aspects.

Author Contributions

Conceptualization, Q.L., N.S. and S.G.; methodology, N.S. and S.G.; validation, S.G. and N.S.; investigation, N.S.; writing—original draft preparation, S.G.; writing—review and editing, N.S. and Y.P.; supervision, Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. U2242201, 42075139, 41305138), the China Postdoctoral Science Foundation (Grant No. 2017M621700), Hunan Province Natural Science Foundation (Grant No. 2021JC0009, 2021JJ30773) and Fengyun Application Pioneering Project (FY-APP-2022.0605).

Data Availability Statement

The MeteoNet data [38] are available at https://meteonet.umr-cnrm.fr/ (accessed on 6 April 2022).

Acknowledgments

The authors would like to thank the anonymous reviewers for providing professional and insightful comments about this manuscript. Finally, we thank the contributors of the MeteoNet dataset for collecting, processing, and sharing their data.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Existing studies that have applied ConvRNNs or CNNs to conduct extrapolation-based convective nowcasting have included some important research directions, such as developing effective networks and designing loss functions. Two key issues need to be considered when designing a ConvRNN-based model: the basic ConvRNN unit and the Seq2Seq structure. In this appendix, we briefly introduce the typical ConvLSTM unit and the common Seq2Seq structures related to our method, as well as a typical adversarial loss function that is evaluated in experiments.

Appendix A.1. ConvLSTM Unit

The ConvLSTM unit is the basic component of a ConvLSTM model [17]. It receives the current input $X_t$, previous hidden state $H_{t-1}$, and temporal cell state $C_{t-1}$ to generate a new hidden state $H_t$ through a gate-controlled mechanism. This process can be formulated as
$$ i_t = \sigma \left( W_{xi} * X_t + W_{hi} * H_{t-1} + b_i \right) $$
$$ f_t = \sigma \left( W_{xf} * X_t + W_{hf} * H_{t-1} + b_f \right) $$
$$ C_t = f_t \circ C_{t-1} + i_t \circ \tanh \left( W_{xc} * X_t + W_{hc} * H_{t-1} + b_c \right) $$
$$ o_t = \sigma \left( W_{xo} * X_t + W_{ho} * H_{t-1} + b_o \right) $$
$$ H_t = o_t \circ \tanh \left( C_t \right) $$
where $W$ and $b$ represent the trainable 2D convolution kernels and biases, respectively, $\sigma$ is the sigmoid activation function, and $*$ and $\circ$ are the 2D convolution operation and the Hadamard product, respectively. The information flow is controlled by an input gate $i_t$, a forget gate $f_t$ and an output gate $o_t$.
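A compact TensorFlow sketch of these gate equations is given below. For efficiency, it fuses the four gates into a single convolution over the concatenated input and hidden state, which is mathematically equivalent to using the separate $W_{x\cdot}$ and $W_{h\cdot}$ kernels above.

```python
# Minimal ConvLSTM cell implementing the gate equations above.
import tensorflow as tf
from tensorflow.keras import layers

class ConvLSTMCell(layers.Layer):
    def __init__(self, filters, kernel_size=3):
        super().__init__()
        self.filters = filters
        # One convolution producing all four gates (i, f, o, g) at once.
        self.gates = layers.Conv2D(4 * filters, kernel_size, padding="same")

    def call(self, x_t, states):
        h_prev, c_prev = states
        z = self.gates(tf.concat([x_t, h_prev], axis=-1))
        i, f, o, g = tf.split(z, 4, axis=-1)
        c_t = tf.sigmoid(f) * c_prev + tf.sigmoid(i) * tf.tanh(g)  # cell state
        h_t = tf.sigmoid(o) * tf.tanh(c_t)                         # hidden state
        return h_t, (h_t, c_t)
```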

Appendix A.2. Seq2Seq Structures

Two Seq2Seq structures were commonly used in prior works on RNN-based radar echo extrapolation: the “same-side” structure (Figure A1a) [19,21], in which the inputs and predictions are on the same side, and the “opposite-side” structure (Figure A1b) [22,26], in which the predictions are on the opposite side of the inputs. As shown in Figure A1, both structures conduct direct multistep prediction by leveraging shared parameters to generate hidden states over all future timestamps. The “same-side” structure is more suitable for input–output transformation since the spatial and channel sizes of the inputs and predictions are allowed to differ, while the “opposite-side” structure requires them to be consistent but can reduce the difficulty of training.
Figure A1. Two commonly used Seq2Seq structures for RNN-based radar echo extrapolation (choosing ConvLSTM as the basic unit). (a) The “same-side” structure; (b) The “opposite-side” structure.

Appendix A.3. Adversarial Loss Function

A GAN [46] is a kind of architecture mostly used for image synthesis. A regular GAN-based architecture consists of a generator and a discriminator. The generator outputs images, and the discriminator is trained to distinguish whether its input is produced by the generator or derived from the training dataset (binary classification). In turn, training the generator with an adversarial loss function to fool the discriminator improves the quality of its output images.
In recent years, some studies have treated the extrapolation model as the generator and trained it in a GAN-based architecture with suitably designed adversarial loss functions to improve the textures of predicted images [19,24,44,47,48]. In that context, a simple yet effective adversarial loss function [48] can be defined as:
$$ L_{adv}^{g} = \mathbb{E}_{x} \left[ 1 - D(\{x, G(x)\}) \right] $$
$$ L_{adv}^{d} = \mathbb{E}_{x,y} \left[ 1 - D(\{x, y\}) \right] + \mathbb{E}_{x} \left[ D(\{x, G(x)\}) \right] $$
where $L_{adv}^{g}$ and $L_{adv}^{d}$ denote the loss functions of the generator $G$ and the discriminator $D$, respectively. The generator $G$ takes radar images $x$ as input and generates predicted images $G(x)$, which are intended to have the same echo distribution as the training (ground-truth) data $y$. $D(\cdot)$ is the output of the discriminator $D$, and $\{\cdot\}$ represents the concatenation operation.
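Under the convention that $D$ outputs a realness score, these two terms can be sketched as follows (our illustration; `d_real` and `d_fake` denote $D(\{x, y\})$ and $D(\{x, G(x)\})$, respectively).

```python
# Adversarial loss terms corresponding to the equations above.
import tensorflow as tf

def generator_adv_loss(d_fake):
    # L_adv^g = E_x[1 - D({x, G(x)})]: push predictions toward "real".
    return tf.reduce_mean(1.0 - d_fake)

def discriminator_adv_loss(d_real, d_fake):
    # L_adv^d = E_{x,y}[1 - D({x, y})] + E_x[D({x, G(x)})]
    return tf.reduce_mean(1.0 - d_real) + tf.reduce_mean(d_fake)
```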

References

  1. Sun, J.; Xue, M.; Wilson, J.W.; Zawadzki, I.; Ballard, S.P.; Onvlee-Hooimeyer, J.; Joe, P.; Barker, D.M.; Li, P.-W.; Golding, B.; et al. Use of NWP for Nowcasting Convective Precipitation: Recent Progress and Challenges. Bull. Am. Meteorol. Soc. 2014, 95, 409–426; [Google Scholar] [CrossRef] [Green Version]
  2. Wilson, J.W.; Feng, Y.; Chen, M.; Roberts, R.D. Nowcasting Challenges during the Beijing Olympics: Successes, Failures, and Implications for Future Nowcasting Systems. Weather Forecast. 2010, 25, 1691–1714. [Google Scholar] [CrossRef]
  3. Li, P.-W.; Wong, W.-K.; Cheung, P.; Yeung, H.-Y. An overview of nowcasting development, applications, and services in the Hong Kong Observatory. J. Meteorol. Res. 2014, 28, 859–876. [Google Scholar] [CrossRef]
  4. Mecikalski, J.R.; Bedka, K.M. Forecasting Convective Initiation by Monitoring the Evolution of Moving Cumulus in Daytime GOES Imagery. Mon. Weather Rev. 2006, 134, 49–78. [Google Scholar] [CrossRef] [Green Version]
  5. Cancelada, M.; Salio, P.; Vila, D.; Nesbitt, S.W.; Vidal, L. Backward Adaptive Brightness Temperature Threshold Technique (BAB3T): A Methodology to Determine Extreme Convective Initiation Regions Using Satellite Infrared Imagery. Remote Sens. 2020, 12, 337. [Google Scholar] [CrossRef] [Green Version]
  6. Marshall, J.S.; Langille, R.C.; Palmer, W.M.K. Measurement of rainfall by radar. J. Atmos. Sci. 1947, 4, 186–192. [Google Scholar] [CrossRef]
  7. Peng, X.; Li, Q.; Jing, J. CNGAT: A Graph Neural Network Model for Radar Quantitative Precipitation Estimation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  8. Akbari Asanjan, A.; Yang, T.; Hsu, K.; Sorooshian, S.; Lin, J.; Peng, Q. Short-Term Precipitation Forecast Based on the PERSIANN System and LSTM Recurrent Neural Networks. J. Geophys. Res. Atmos. 2018, 123, 12543–12563. [Google Scholar] [CrossRef]
  9. Germann, U.; Zawadzki, I. Scale-Dependence of the Predictability of Precipitation from Continental Radar Images. Part I: Description of the Methodology. Mon. Weather Rev. 2002, 130, 2859–2873. [Google Scholar] [CrossRef]
  10. Dixon, M.; Wiener, G. TITAN: Thunderstorm Identification, Tracking, Analysis, and Nowcasting—A Radar-based Methodology. J. Atmos. Ocean. Technol. 1993, 10, 785–797. [Google Scholar] [CrossRef]
  11. Johnson, J.T.; MacKeen, P.L.; Witt, A.; Mitchell, E.D.W.; Stumpf, G.J.; Eilts, M.D.; Thomas, K.W. The Storm Cell Identification and Tracking Algorithm: An Enhanced WSR-88D Algorithm. Weather Forecast. 1998, 13, 263–276. [Google Scholar] [CrossRef]
  12. Walker, J.R.; MacKenzie, W.M.; Mecikalski, J.R.; Jewett, C.P. An Enhanced Geostationary Satellite–Based Convective Initiation Algorithm for 0–2-h Nowcasting with Object Tracking. J. Appl. Meteorol. Climatol. 2012, 51, 1931–1949. [Google Scholar] [CrossRef]
  13. Rinehart, R.E.; Garvey, E.T. Three-dimensional storm motion detection by conventional weather radar. Nature 1978, 273, 287–289. [Google Scholar] [CrossRef]
  14. Ayzel, G.; Heistermann, M.; Winterrath, T. Optical flow models as an open benchmark for radar-based precipitation nowcasting (rainymotion v0.1). Geosci. Model Dev. 2019, 12, 1387–1402. [Google Scholar] [CrossRef] [Green Version]
  15. Pulkkinen, S.; Nerini, D.; Pérez Hortal, A.A.; Velasco-Forero, C.; Seed, A.; Germann, U.; Foresti, L. Pysteps: An open-source Python library for probabilistic precipitation nowcasting (v1.0). Geosci. Model Dev. 2019, 12, 4185–4219. [Google Scholar] [CrossRef] [Green Version]
  16. Hwang, Y.; Clark, A.J.; Lakshmanan, V.; Koch, S.E. Improved Nowcasts by Blending Extrapolation and Model Forecasts. Weather Forecast. 2015, 30, 1201–1217. [Google Scholar] [CrossRef]
  17. Shi, X.; Chen, Z.; Wang, H.; Yeung, D.-Y.; Wong, W.-K.; Woo, W.-c. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 802–810. [Google Scholar]
  18. Agrawal, S.; Barrington, L.; Bromberg, C.; Burge, J.; Gazen, C.; Hickey, J. Machine learning for precipitation nowcasting from radar images. arXiv 2019, arXiv:1912.12132. [Google Scholar] [CrossRef]
  19. Ravuri, S.; Lenc, K.; Willson, M.; Kangin, D.; Lam, R.; Mirowski, P.; Fitzsimons, M.; Athanassiadou, M.; Kashem, S.; Madge, S.; et al. Skilful precipitation nowcasting using deep generative models of radar. Nature 2021, 597, 672–677. [Google Scholar] [CrossRef] [PubMed]
  20. Han, L.; Liang, H.; Chen, H.; Zhang, W.; Ge, Y. Convective Precipitation Nowcasting Using U-Net Model. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–8. [Google Scholar] [CrossRef]
  21. Shi, X.; Gao, Z.; Lausen, L.; Wang, H.; Yeung, D.-Y.; Wong, W.-K.; Woo, W.-C. Deep learning for precipitation nowcasting: A benchmark and a new model. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5618–5628. [Google Scholar]
  22. Wang, Y.; Long, M.; Wang, J.; Gao, Z.; Yu, P.S. Predrnn: Recurrent neural networks for predictive learning using spatiotemporal lstms. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 879–888. [Google Scholar]
  23. Tuyen, D.N.; Tuan, T.M.; Le, X.-H.; Tung, N.T.; Chau, T.K.; Van Hai, P.; Gerogiannis, V.C.; Son, L.H. RainPredRNN: A New Approach for Precipitation Nowcasting with Weather Radar Echo Images Based on Deep Learning. Axioms 2022, 11, 107. [Google Scholar] [CrossRef]
  24. Jing, J.; Li, Q.; Peng, X. MLC-LSTM: Exploiting the Spatiotemporal Correlation between Multi-Level Weather Radar Echoes for Echo Sequence Extrapolation. Sensors 2019, 19, 3988. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Villegas, R.; Yang, J.; Hong, S.; Lin, X.; Lee, H. Decomposing motion and content for natural video sequence prediction. arXiv 2017, arXiv:1706.08033. [Google Scholar] [CrossRef]
  26. Lin, Z.; Li, M.; Zheng, Z.; Cheng, Y.; Yuan, C. Self-attention convlstm for spatiotemporal prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 11531–11538. [Google Scholar]
  27. Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar] [CrossRef]
  28. Chafik Bakkay, M.; Serrurier, M.; Kivachuk Burda, V.; Dupuy, F.; Citlali Cabrera-Gutierrez, N.; Zamo, M.; Mader, M.-A.; Mestre, O.; Oller, G.; Jouhaud, J.-C.; et al. Precipitation Nowcasting using Deep Neural Network. arXiv 2022, arXiv:2203.13263. [Google Scholar] [CrossRef]
  29. Prudden, R.; Adams, S.; Kangin, D.; Robinson, N.; Ravuri, S.; Mohamed, S.; Arribas, A. A review of radar-based nowcasting of precipitation and applicable machine learning techniques. arXiv 2020, arXiv:2005.04988. [Google Scholar] [CrossRef]
  30. Klein, B.; Wolf, L.; Afek, Y. A Dynamic Convolutional Layer for short range weather prediction. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 4840–4848. [Google Scholar]
  31. Ayzel, G.; Heistermann, M.; Sorokin, A.; Nikitin, O.; Lukyanova, O. All convolutional neural networks for radar-based precipitation nowcasting. Procedia Comput. Sci. 2019, 150, 186–192. [Google Scholar] [CrossRef]
  32. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  33. Ayzel, G.; Scheffer, T.; Heistermann, M. RainNet v1.0: A convolutional neural network for radar-based precipitation nowcasting. Geosci. Model Develop. 2020, 13, 2631–2644. [Google Scholar] [CrossRef]
  34. Trebing, K.; Staǹczyk, T.; Mehrkanoon, S. SmaAt-UNet: Precipitation nowcasting using a small attention-UNet architecture. Pattern Recognit. Lett. 2021, 145, 178–186. [Google Scholar] [CrossRef]
  35. Pan, X.; Lu, Y.; Zhao, K.; Huang, H.; Wang, M.; Chen, H. Improving Nowcasting of Convective Development by Incorporating Polarimetric Radar Variables Into a Deep-Learning Model. Geophys. Res. Lett. 2021, 48, e2021GL095302. [Google Scholar] [CrossRef]
  36. Che, H.; Niu, D.; Zang, Z.; Cao, Y.; Chen, X. ED-DRAP: Encoder–Decoder Deep Residual Attention Prediction Network for Radar Echoes. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  37. Wang, C.; Wang, P.; Wang, P.; Xue, B.; Wang, D. Using Conditional Generative Adversarial 3-D Convolutional Neural Network for Precise Radar Extrapolation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 5735–5749. [Google Scholar] [CrossRef]
  38. Larvor, G.; Berthomier, L.; Chabot, V.; Le Pape, B.; Pradel, B.; Perez, L. MeteoNet, An Open Reference Weather Dataset by Meteo-France. 2020. Available online: https://meteonet.umr-cnrm.fr/ (accessed on 6 April 2022).
  39. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 424–432. [Google Scholar]
  40. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  41. Niu, D.; Huang, J.; Zang, Z.; Xu, L.; Che, H.; Tang, Y. Two-Stage Spatiotemporal Context Refinement Network for Precipitation Nowcasting. Remote Sens. 2021, 13, 4285. [Google Scholar] [CrossRef]
  42. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar] [CrossRef]
  43. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016, arXiv:1603.04467. [Google Scholar] [CrossRef]
  44. Liu, H.B.; Lee, I. MPL-GAN: Toward Realistic Meteorological Predictive Learning Using Conditional GAN. IEEE Access 2020, 8, 93179–93186. [Google Scholar] [CrossRef]
  45. Oprea, S.; Martinez-Gonzalez, P.; Garcia-Garcia, A.; Castro-Vargas, J.A.; Orts-Escolano, S.; Garcia-Rodriguez, J.; Argyros, A. A Review on Deep Learning Techniques for Video Prediction. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 2806–2826. [Google Scholar] [CrossRef] [PubMed]
  46. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
  47. Tian, L.; Li, X.; Ye, Y.; Xie, P.; Li, Y. A Generative Adversarial Gated Recurrent Unit Model for Precipitation Nowcasting. IEEE Geosci. Remote Sens. Lett. 2020, 17, 601–605. [Google Scholar] [CrossRef]
  48. Veillette, M.; Samsi, S.; Mattioli, C. Sevir: A storm event imagery dataset for deep learning applications in radar and satellite meteorology. In Proceedings of the Advances in Neural Information Processing Systems, Online, 6–12 December 2020; pp. 22009–22019. [Google Scholar]
Figure 1. The geographical regions used for the radar reflectivity data (red rectangle).
Figure 2. The 3D-UNet-LSTM architecture. ‘k’ and ‘s’ represent the kernel size and the stride for a convolution, respectively.
Figure 3. The (a) CSI, (b) CC and (c) RMSE curves produced by the eight models at the 35 dBZ threshold against different lead times. All values are the scores averaged over all cases in the test set at the corresponding lead time.
Figure 4. The (a) CSI, (b) CC and (c) RMSE curves at the 35 dBZ threshold against different lead times for the evaluation of the model design.
Figure 5. A representative case of local strong convective growth in the northwestern quarter of France at a forecasting time of T = 7 August 2018, 11:55 UTC. Letters A and B represent different regions where the proposed 3D-UNet-LSTM performs well.
Figure 6. A representative case of squall line evolution in the southeastern quarter of France at a forecasting time of T = 13 August 2018, 05:00 UTC. Letter A represents the region where the proposed 3D-UNet-LSTM performs well.
Table 1. The divided subsets for training, validation, and testing.

| Subset | Period | Sample Number (NW) | Sample Number (SE) | Total |
| --- | --- | --- | --- | --- |
| Training | 2016.1–2018.5 | 5504 | 4865 | 10,369 |
| Validation | 2018.6–2018.7 | 480 | 517 | 997 |
| Test | 2018.8–2018.10 | 308 | 829 | 1137 |
Table 2. Overall performance of the eight models on the test set.

| Method | CSI18↑ | CSI35↑ | twaCSI18↑ | twaCSI35↑ | CC18↑ | CC35↑ | RMSE18↓ | RMSE35↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Persistence | 0.4181 | 0.2068 | 0.3591 | 0.1554 | 0.2644 | 0.0355 | 16.92 | 21.34 |
| Rainymotion | 0.5149 | 0.2675 | 0.4581 | 0.2107 | 0.3616 | 0.0694 | 14.01 | 17.69 |
| ConvLSTM | 0.5814 | 0.3244 | 0.5421 | 0.2786 | 0.4350 | 0.1007 | 10.70 | 12.89 |
| PredRNN | 0.5898 | 0.3278 | 0.5468 | 0.2755 | 0.4500 | 0.1256 | 10.58 | 12.78 |
| RainPredRNN | 0.5906 | 0.3314 | 0.5483 | 0.2868 | 0.4624 | 0.1363 | 10.45 | 12.63 |
| SA-ConvLSTM | 0.5811 | 0.3349 | 0.5444 | 0.2933 | 0.4422 | 0.1110 | 10.47 | 12.50 |
| UNet | *0.5938* | *0.3550* | *0.5497* | *0.2998* | *0.4707* | *0.1570* | *10.41* | *12.03* |
| 3D-UNet-LSTM | **0.5990** | **0.3742** | **0.5512** | **0.3201** | **0.4853** | **0.1760** | **9.72** | **11.34** |

The best and second-best scores are marked in bold and italics, respectively; ↑ means that higher is better, while ↓ means that lower is better.
Table 3. Evaluation scores of our proposed model with others.

| Method | POD18↑ | POD35↑ | FAR18↓ | FAR35↓ | BIAS18 | BIAS35 |
| --- | --- | --- | --- | --- | --- | --- |
| Persistence | 0.5664 | 0.3202 | 0.4205 | 0.6727 | 0.9845 | 1.0220 |
| Rainymotion | 0.6525 | 0.3585 | 0.3170 | 0.5315 | 0.9546 | 0.7718 |
| ConvLSTM | 0.7887 | 0.4776 | 0.3230 | 0.5085 | 1.1795 | 0.9820 |
| PredRNN | 0.7888 | 0.4651 | **0.3129** | *0.4923* | 1.1622 | 0.9072 |
| RainPredRNN | 0.7953 | 0.4836 | 0.3206 | 0.5049 | 1.1584 | 1.0659 |
| SA-ConvLSTM | *0.8012* | 0.5021 | 0.3319 | 0.5133 | 1.2178 | 1.0384 |
| UNet | 0.8005 | *0.5480* | *0.3145* | 0.5136 | 1.1863 | 1.1500 |
| 3D-UNet-LSTM | **0.8238** | **0.5610** | 0.3235 | **0.4844** | 1.2462 | 1.1489 |

The best and second-best scores are marked in bold and italics, respectively; ↑ means that higher is better, while ↓ means that lower is better.
Table 4. Quantitative evaluation of the model design.

| Method | CSI18↑ | CSI35↑ | twaCSI18↑ | twaCSI35↑ | CC18↑ | CC35↑ | RMSE18↓ | RMSE35↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ConvLSTM | 0.5814 | 0.3244 | 0.5421 | 0.2786 | 0.4350 | 0.1007 | 10.70 | 12.89 |
| UNet | *0.5938* | 0.3550 | *0.5497* | 0.2998 | 0.4707 | 0.1570 | 10.41 | 12.03 |
| 3D-UNet | 0.5897 | *0.3642* | 0.5439 | *0.3099* | *0.4735* | *0.1687* | *10.27* | *11.76* |
| 3D-UNet + ConvLSTM | 0.5567 | 0.3097 | 0.5197 | 0.2648 | 0.4208 | 0.1087 | 10.96 | 13.03 |
| 3D-UNet-LSTM | **0.5990** | **0.3742** | **0.5512** | **0.3201** | **0.4853** | **0.1760** | **9.72** | **11.34** |

The best and second-best scores are marked in bold and italics, respectively; ↑ means that higher is better, while ↓ means that lower is better.
Table 5. Quantitative evaluation of different loss functions.

| Loss Function | CSI18↑ | CSI35↑ | twaCSI18↑ | twaCSI35↑ | CC18↑ | CC35↑ | RMSE18↓ | RMSE35↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $L_{rec}$ | **0.6045** | 0.3302 | **0.5575** | 0.2636 | 0.4460 | 0.1114 | 11.26 | 13.86 |
| $L_{rec} + 0.03 L_{adv}^{g}$ | 0.5950 | 0.3392 | 0.5463 | 0.2794 | 0.4535 | 0.1433 | 11.08 | 13.37 |
| $L_{B\text{-}rec}$ | *0.5990* | **0.3742** | 0.5512 | **0.3201** | **0.4853** | **0.1760** | **9.72** | **11.34** |
| $L_{B\text{-}rec} + 0.003 L_{adv}^{g}$ | 0.5978 | *0.3716* | *0.5520* | *0.3161* | *0.4760* | *0.1622* | *10.12* | *11.57* |
| $L_{B\text{-}rec} + 0.03 L_{adv}^{g}$ | 0.5884 | 0.3639 | 0.5385 | 0.3058 | 0.4635 | 0.1529 | 10.76 | 12.37 |

The best and second-best scores are marked in bold and italics, respectively; ↑ means that higher is better, while ↓ means that lower is better.
