Article

Mutual Information Boosted Precipitation Nowcasting from Radar Images

1 Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China
2 School of Computing and Information, University of Pittsburgh, 4200 Fifth Avenue, Pittsburgh, PA 15260, USA
3 Institute of Science and Technology for Brain-Inspired Intelligence, MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China
4 Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 200031, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(6), 1639; https://doi.org/10.3390/rs15061639
Submission received: 15 January 2023 / Revised: 8 March 2023 / Accepted: 9 March 2023 / Published: 17 March 2023

Abstract: Precipitation nowcasting has long been a challenging problem in meteorology. While recent studies have introduced deep neural networks into this area and achieved promising results, these models still struggle with the rapid evolution of rainfall and an extremely imbalanced data distribution, resulting in poor forecasting performance for convective scenarios. In this article, we use mutual information to evaluate the amount of information in precipitation nowcasting tasks of varying forecast lengths. We propose two strategies: a mutual information-based reweighting strategy (MIR) and a mutual information-based training strategy, the time superimposing strategy (TSS). MIR improves the forecasting accuracy of neural network models for convective scenarios while maintaining their prediction performance for rainless scenarios and the overall nowcasting image quality. TSS enhances the model's forecasting performance by adopting a curriculum-learning-like method. Although the proposed strategies are simple, the experimental results show that they are effective and can be applied to various state-of-the-art models.

1. Introduction

Precipitation nowcasting aims to predict the kilometer-wise rainfall intensity within the next two hours [1]. It plays a vital role in daily life, such as traffic planning, disaster alerts, and agriculture [2]. Precipitation nowcasting is often defined as a spatiotemporal sequence prediction task [3,4,5,6,7]: a sequence of historical radar echo images is taken as input, and a sequence of future radar echo images is predicted [3]. In this paper, we denote the historical radar echo images as X and the future (to-be-predicted) radar echo images as Y. The rainfall intensity distribution of the whole dataset is P(Y) (or P(X), because both X and Y are drawn from the same distribution), and the precipitation nowcasting task can be represented as P(Y|X).
However, due to the highly skewed distribution of rainfall intensities, the traditional approach has a limited ability to forecast heavy rainfall scenarios [8]. For instance, in the Italian dataset TAASRAD19 [9], the number of pixels with a radar reflectivity greater than 50 dBZ accounts for only 0.066% of the total number of pixels, and only 0.45% of pixels exceed 40 dBZ. The same situation exists in the ECP dataset [10] and the HKO-7 dataset [11], as illustrated in Figure 1. Radar echo intensity (dBZ) does not directly correspond to rainfall intensity (mm/h); the conversion from radar reflectivity to rainfall intensity requires a precipitation estimation algorithm, such as the Z–R relation formula. This paper focuses on radar echo intensity prediction; rainfall intensity can be estimated from the predicted radar echo intensity using such an estimation algorithm [3].
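As a concrete illustration of the reflectivity-to-rainfall conversion mentioned above, the following Python sketch inverts the Z–R relation; the constants a = 200 and b = 1.6 are the classic Marshall–Palmer values and are an illustrative assumption here, since this paper does not fix specific Z–R parameters for its datasets.

import numpy as np

def dbz_to_rain_rate(dbz, a=200.0, b=1.6):
    # Invert the Z-R relation Z = a * R**b to estimate rain rate (mm/h).
    # a = 200, b = 1.6 are the classic Marshall-Palmer constants, used
    # here purely for illustration; operational datasets may use others.
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)  # dBZ -> linear reflectivity Z
    return (z_linear / a) ** (1.0 / b)

print(dbz_to_rain_rate([20.0, 40.0, 50.0]))  # drizzle -> heavy rain, in mm/h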
Heavy rainfall scenarios are often rare; however, if left unaddressed, they have more severe consequences than moderate to light rainfall scenarios. Therefore, efforts have been devoted to improving heavy rain forecasting performance, with reweighting and resampling being the most popular strategies [8,11,12]. These strategies increase the weights of heavy rainfall samples based on P(Y).
However, adjusting the sample weights or prediction losses based on P(Y) distorts the conditional distribution P(Y|X), downgrading the majority classes' performance and hurting the overall rainfall prediction accuracy.
In this work, we propose a new strategy, mutual information-based reweighting (MIR), to improve nowcasting predictions for imbalanced rainfall data. Mutual information I(X;Y) measures the dependence between random variables X and Y: high mutual information corresponds to strongly dependent X and Y (easy-to-learn tasks), and low mutual information corresponds to weakly dependent (hard-to-learn) tasks [13].
In the task of precipitation nowcasting, we calculate the mutual information of the radar echo data and observe that tasks with more mutual information exhibit greater resilience to data imbalance. Specifically, when the mutual information is high, MIR employs relatively mild reweighting factors to preserve the original conditional distribution P(Y|X). Conversely, for tasks with low mutual information, MIR employs larger reweighting factors to enhance the prediction performance. This approach boosts the performance of the minority groups without negatively impacting the overall prediction performance.
Furthermore, we propose a simple curriculum-style training strategy, the time superimposing strategy (TSS). The primary advantage of curriculum learning is that it enables machines to start with more manageable tasks and gradually progress to more challenging ones. Inspired by this, TSS first trains the model on the task with the highest mutual information and gradually stacks the lower mutual information tasks into the training task set. Regarding implementation, the TSS strategy only requires controlling the forecast length used in the loss calculation during training, which can be achieved by adding just one or two lines of code.
This work is an extension of our previous work [14], which fused MIR and TSS together. In this paper, we elaborate on the MIR and TSS strategies separately, provide a more detailed experimental analysis, and extensively discuss different aspects of the proposed strategies.
The remainder of this paper is organized as follows. Section 2 briefly reviews the related works on deep learning models and the data imbalance problem in precipitation nowcasting. In Section 3, we describe how to compute the mutual information for the precipitation nowcasting task. Then, a reweighting strategy, MIR (Section 3.2), and a curriculum-learning-style training strategy, TSS (Section 3.3), are proposed based on the mutual information of the training tasks. Extensive experiments in Section 4 reveal that the proposed MIR and TSS strategies reinforce the state-of-the-art models' performances by a large margin without downgrading the overall prediction performance. Section 5 discusses several research questions. The conclusions are drawn in Section 6.

2. Related Works

2.1. Models for Precipitation Nowcasting

Precipitation nowcasting models can be classified into three categories: numerical weather prediction methods, extrapolation-based methods, and deep-learning-based end-to-end methods [10]. This paper concentrates on the last category due to its exceptional performance. To be more specific, deep learning models can be categorized into two types: ConvLSTM-based and UNet-based models [15].
The ConvLSTM proposed by Shi et al. [3] is a notable achievement in this field. It replaces the fully connected layers in long short-term memory (LSTM) [16] with convolution layers and extends LSTM to the image domain. Subsequently, many ConvLSTM-based models emerged [17]. For example, in TrajGRU [11], the convolution layer was transformed into a non-local version and integrated with GRUs to allow the active learning of location-variant patterns. PredRNN, introduced by Wang et al. [18], separates the spatial and temporal memory and communicates them at distinct LSTM levels. Another model, by Espeholt et al. [19], uses a ConvLSTM-based approach for large-scale precipitation forecasting and is capable of predicting up to 12 h in advance.
Nowcasting models based on UNet [20], such as RainNet [21], vanilla UNet [22], and MSST-Net [23], have recently emerged thanks to the faster training of CNNs compared to RNNs. Agrawal et al. [22] treated forecasting as an image-to-image translation and thus adopted UNet to classify at a high resolution in terms of both space and intensity; SmaAt-UNet [24] takes a similar approach, equipping the basic UNet with attention modules and achieving competitive accuracy while notably reducing the number of trainable parameters. T-UNet [25] combines TrajGRU and UNet to further improve the model's forecasting ability.
Furthermore, GANs have been adopted in precipitation nowcasting tasks to improve imagery quality [26]. DGMR, proposed by Ravuri et al. [12], adopts a UNet encoder and a ConvLSTM decoder to address the blurriness problem from the perspective of generative models. These models improve nowcasting performance over the original ConvLSTM by modifying the network structure to enhance fitting ability. We argue that fitting ability is not the only key factor in this task; in this paper, rethinking the nowcasting problem from a data perspective helps us obtain better nowcasting models.

2.2. Data Imbalance

The data imbalance problem is prevalent in various forms of natural data [27]. Research on data imbalance has a long history and generally refers to the problem where the uneven distribution of P(Y) affects model training. The basic assumption is that when P(Y) of the training data is unbalanced, the model easily converges to trivial solutions, which leads to good performance on the majority class and poor performance on the minority class [28]. In Section 3.2, we challenge this assumption using a toy classification problem.
Resampling and reweighting are two common strategies for addressing the data imbalance problem. The typical strategy is over-sampling or up-weighting minority classes [29]. However, in precipitation nowcasting, resampling is usually performed patch-wise or sample-wise, which is less feasible for pixel-level imbalanced precipitation data [8]. Reweighting methods adjust the loss weights of different rainfall intensity samples to counter the impact of data imbalance [8,11]. Ravuri et al. [12] adopted importance sampling and reweighting to reduce the number of rainless samples. Although these works improve the forecast indicators for the minority class (heavy rainfall), they compromise the model performance on the majority class and the overall image quality.
There are other ways to mitigate the data imbalance problem. Feature selection techniques help to pre-process the data [30]. Recent studies also indicated that semi-supervised and self-supervised learning strategies alleviate the influence of imbalanced data [31]. In contrast to these works, we rethink the data imbalance assumption and analyze the data imbalance problem by considering mutual information.

3. Methodology

In this section, we begin by explaining how to calculate the conditional distribution and the mutual information, which is essential for identifying tasks with high or low information content. Next, we explore the connection between mutual information and the data imbalance problem and present a novel mutual information-based reweighting approach that addresses the limitations of existing methods. Finally, we introduce a curriculum-style learning strategy that guides the model to learn tasks progressively: it prioritizes tasks with a high level of mutual information, allowing the model to master them before moving on to those with lower mutual information.

3.1. Estimating the Mutual Information on Precipitation Nowcasting Tasks

Existing deep-learning-based models [10,11,18] usually regard the precipitation nowcasting task as a spatiotemporal forecasting problem. Models encode information from a sequence of n historical radar echo images and generate the sequence of m future radar echo images that is most likely to occur, which can be formulated as
$$\hat{S}_{n:n+m-1} = \arg\max_{S_{n:n+m-1}} p(S_{n:n+m-1} \mid S_{0:n-1}),$$
where $S \in \mathbb{R}^{T \times H \times W}$ is the radar echo image sequence, $T = m + n$ is the temporal length, and H and W are the height and the width of the images, respectively. Each pixel in the rainfall data has an echo intensity value within [0, 70] dBZ, corresponding to the rainfall intensity.
In information theory, the mutual information I(X;Y) quantifies the information gain achieved about Y by knowing X, and vice versa [13]. It is defined as $I(X;Y) := H(Y) - H(Y \mid X)$, where the information entropy $H(Y) := \sum_{y} p(y) \log \frac{1}{p(y)}$ and the conditional entropy $H(Y \mid X) := \sum_{x,y} p(x,y) \log \frac{1}{p(y \mid x)}$. When X and Y are independent, $H(Y \mid X) = H(Y)$ and $I(X;Y) = 0$; when X determines Y, $I(X;Y) = H(Y)$.
However, calculating the mutual information of a high-dimensional task is challenging. Mutual information I(X;Y) measures the dependence between random variables X and Y, which involves estimating the joint probability distribution P(X,Y) and the marginal distributions P(X) and P(Y). When the task is low-dimensional, it is relatively easy to obtain sufficient training data to estimate P(X,Y); when the task is high-dimensional, however, it is hard to obtain a training dataset extensive enough to estimate P(X,Y). This phenomenon is called the curse of dimensionality. As a result, previous researchers usually train large, over-parameterized generative models with limited training data to approximate P(X,Y).
To avoid training an approximate generative model for P(X,Y) estimation, we transfer the high-dimensional radar echo image intensity prediction task into a one-dimensional radar echo pixel intensity prediction task. More specifically, in this section, we regard the precipitation nowcasting task as a series of pixel prediction tasks with different forecasting lengths. As the dimension of Y shrinks to 1, estimating P(X,Y) and I(X;Y) becomes straightforward.
To calculate the joint probability distribution, we first redefine the precipitation nowcasting task at the pixel level:
$$\hat{y}_{i,t_1} = \arg\max_{y_{i,t_1}} p(y_{i,t_1} \mid x_{i,t_0}, N_{i,t_0}),$$
where $x_{i,t_0}$ denotes the value of pixel i at time $t_0$, $N_{i,t_0}$ refers to the set of spatiotemporal neighbors of pixel i at time $t_0$ (here, the neighbors of pixel i are the pixels from the length-l cube centered at pixel i), and $y_{i,t_1}$ represents the value of pixel i at time $t_1$, where $t_1 \ge t_0$. It is important to note that $x_{i,t_0} = y_{i,t_0}$. Equations (1) and (2) are equivalent only if $N_{i,t_0}$ covers the current as well as all past image pixels.
Next, we employ a three-dimensional Gaussian convolution, $G_{3d}(\cdot)$, of size $l \times l \times l$ on each pixel i to merge the information of its spatiotemporal neighbors:
$$\tilde{x}_{i,t_0} = G_{3d}(x_{i,t_0}, N_{i,t_0}).$$
During this procedure, only the first-order spatiotemporal information is kept; higher-order information such as the standard deviation and gradient direction is lost. Equation (2) can then be rewritten as:
$$\hat{y}_{i,t_1} = \arg\max_{y_{i,t_1}} p(y_{i,t_1} \mid \tilde{x}_{i,t_0}).$$
Third, we compute the conditional distribution $P(Y_{t_1} \mid \tilde{X}_{t_0})$ across the whole training dataset, which approximates $P(Y_{t_1} \mid X_{t_0})$. The conditional probability is computed as:
$$p(y \mid x) = \frac{\sum_i \mathbb{1}(y_{i,t_1} = y \text{ and } \tilde{x}_{i,t_0} = x)}{\sum_i \mathbb{1}(\tilde{x}_{i,t_0} = x)}, \quad \text{where } \mathbb{1}(c) = \begin{cases} 0 & \text{if } c \text{ is False} \\ 1 & \text{if } c \text{ is True.} \end{cases}$$
Finally, mutual information is computed as:
$$I(X_{t_0}; Y_{t_1}) = \sum_{y \in Y_{t_1}} \sum_{x \in X_{t_0}} p(y \mid x)\, p(x) \log_2 \frac{p(y \mid x)}{p(y)}.$$
Here, the probabilities p(y) and p(x) can be obtained similarly to p(y|x). The mutual information I(X;Y) indicates the degree to which X determines Y; therefore, we can use it to measure the degree to which $X_{t_0}$ determines $Y_{t_1}$.
Figure 2 displays two conditional distribution matrices. To facilitate interpretation, the rainfall intensity is divided into five categories of equal size. The mutual information of the three precipitation nowcasting datasets for different values of t, where $t := t_1 - t_0$, is shown in Figure 3. It should be noted that the mutual information does not always monotonically decrease with increasing t; for instance, it fluctuates periodically as t increases when dealing with periodic data.
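As a rough illustration of this estimation pipeline, the following NumPy/SciPy sketch computes $I(X_{t_0}; Y_{t_1})$ for one forecast offset from a stack of radar frames; the five-category quantization matches Figure 2, while the Gaussian width sigma is an illustrative assumption rather than a value fixed by the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def mutual_information(frames, dt, n_bins=5, sigma=1.0):
    # frames: (T, H, W) echo intensities in [0, 70] dBZ; dt = t1 - t0.
    # The 3-D Gaussian blur plays the role of G_3d; sigma is illustrative.
    smoothed = gaussian_filter(frames.astype(np.float64), sigma=sigma)
    edges = np.linspace(0.0, 70.0, n_bins + 1)[1:-1]   # internal bin edges
    x = np.digitize(smoothed[:-dt].ravel(), edges)     # predictor pixels
    y = np.digitize(frames[dt:].ravel(), edges)        # target pixels
    # Joint distribution p(x, y) from co-occurrence counts of pixel pairs.
    hist_edges = np.arange(n_bins + 1) - 0.5
    joint, _, _ = np.histogram2d(x, y, bins=(hist_edges, hist_edges))
    p_xy = joint / joint.sum()
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    nonzero = p_xy > 0
    return float(np.sum(p_xy[nonzero] *
                        np.log2(p_xy[nonzero] / np.outer(p_x, p_y)[nonzero])))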

3.2. Mutual Information-Based Reweighting (MIR) Strategy

While reweighting methods based on P(Y) may decrease the quality of generated images, sacrificing part of the majority's performance to improve the minority is still acceptable in precipitation nowcasting, where heavy rainfall is more critical. This subsection proposes a new reweighting scheme that uses mutual information to adjust the weighting factors. To better understand the relationship between data imbalance and mutual information, consider the following binary classification experiment.

3.2.1. Motivating Example

In this experiment, the training data are sampled from two one-dimensional Gaussian distributions A and B, where $A = \mathcal{N}(\mu_A, 1)$, $B = \mathcal{N}(\mu_B, 1)$, and $\Delta\mu := |\mu_A - \mu_B|$. The objective is to train a binary classifier to distinguish whether a testing sample is generated from A or B. The testing dataset is balanced, and a three-layer, fully connected network is used as the model.
Table 1 displays the mean absolute error (MAE) for different imbalance ratios and Δμ settings, with the mutual information values indicated in brackets. The model's prediction is considered perfect when the MAE equals 0 and is regarded as random guessing when the MAE equals 0.5.
Traditionally, the data imbalance issue has been associated with reduced performance on minority classes due to the imbalanced P(Y). This holds true when Δμ is constant. However, when the imbalance ratio is constant, the MAE decreases as Δμ and the mutual information increase, indicating that the impact of data imbalance is reduced. The model becomes resilient to data imbalance when the standard deviation equals one and Δμ ≥ 10. This experiment demonstrates that an imbalanced distribution does not necessarily lead to poor performance for the minority class: at a constant imbalance ratio, high mutual information tasks result in better model training than low mutual information tasks.
In an imbalanced setting such as 1:99, the mutual information is lower than in a balanced setting because the information entropy $H(Y_{1:99}) = 0.08$ is the upper bound of the mutual information. Therefore, the trend of the mutual information values within each imbalance ratio is more important than the values themselves. When the imbalance ratio, i.e., P(Y), is constant, mutual information can help identify settings that are more resilient to the impact of data imbalance. Thus, reweighting strategies are unnecessary for high mutual information tasks, which avoids the side effect of image quality degradation.
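The toy experiment can be reproduced in a few lines. The sketch below uses a small scikit-learn MLP in place of the three-layer, fully connected network; the network width, sample sizes, and iteration budget are illustrative assumptions rather than the paper's exact settings.

import numpy as np
from sklearn.neural_network import MLPClassifier

def toy_imbalance_mae(delta_mu, ratio_a=0.01, n=20000, seed=0):
    # Train on an imbalanced mix of A ~ N(0, 1) and B ~ N(delta_mu, 1),
    # then evaluate MAE on a balanced test set, as in Section 3.2.1.
    rng = np.random.default_rng(seed)
    n_a = int(n * ratio_a)
    x_tr = np.concatenate([rng.normal(0.0, 1.0, n_a),
                           rng.normal(delta_mu, 1.0, n - n_a)])[:, None]
    y_tr = np.concatenate([np.zeros(n_a), np.ones(n - n_a)])
    clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500,
                        random_state=seed).fit(x_tr, y_tr)
    x_te = np.concatenate([rng.normal(0.0, 1.0, 2000),
                           rng.normal(delta_mu, 1.0, 2000)])[:, None]
    y_te = np.concatenate([np.zeros(2000), np.ones(2000)])
    return float(np.abs(clf.predict(x_te) - y_te).mean())

for dm in (10.0, 3.0, 1.0):   # cf. the Δμ columns of Table 1 at ratio 1:99
    print(f"delta_mu={dm}: MAE={toy_imbalance_mae(dm):.2f}")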

3.2.2. MIR Strategy

Figure 3 shows that the mutual information is high for small values of t. For these tasks, a rebalancing strategy is unnecessary and could distort P(Y|X). To address this issue, we propose a reweighting ratio $r_t$, $0 < r_t \le 1$, used as the exponent of the reweighting factor w that is based on P(Y):
$$w_t = w^{r_t}, \quad \text{where } r_t \propto 1 / I(X; Y_t),$$
where w uses the same reweighting factors as WMSE [11]. The new weighting factor $w_t$ is directly multiplied by the respective loss to derive the reweighted loss. A simple solution is $r_t = t/m$, because the mutual information negatively correlates with t. The proposed $w_t$ meets the requirement of a nearly unweighted loss at high mutual information and a steep $w_t$ at low mutual information. This approach avoids the image-quality degradation and distribution distortion of plain reweighting strategies.
In this paper, we adopted the weighting factor w in $w_t = w^{r_t}$ from the weighted mean square error (WMSE) [11], which is
$$w(x_i) = \begin{cases} 1, & x_i < 22.4 \\ 2, & 22.4 \le x_i < 28.6 \\ 5, & 28.6 \le x_i < 33.3 \\ 10, & 33.3 \le x_i < 40.7 \\ 30, & x_i \ge 40.7, \end{cases} \quad \text{where } 0 \le x_i \le 70\ \text{dBZ}.$$
Since the degree to which $I(X_{t_0}; Y_{t_1})$ affects the model's resistance to data imbalance was unknown, we tried several naive $r_t$ solutions (a code sketch of all three follows the list):
(a) Linear in t: $r_t = \alpha t + \beta$, where $1 \le t \le 10$ and α is a constant that controls the expected growth speed of $r_t$. The code is shown in Algorithm 1.
(b) Exponential: $r_t = \alpha^{t-m}$, where $\alpha > 1$ is a constant that depends on the expected growth speed of $r_t$, and m = 10 in this paper.
(c) Linear in 1/I(X;Y): $r_t = \min(I(X_{t_0}; Y_{t_1})) / I(X_{t_0}; Y_{t_1})$. When t = 10, $\min(I(X_{t_0}; Y_{t_1})) = I(X_{t_0}; Y_{t_1})$. As shown in Figure 4, this solution behaves like a special version of the linear solution.
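A minimal sketch of the three $r_t$ schedules, assuming m = 10 forecast steps; the default α and β follow the best-performing setting for schedule (a) reported later in Table 5, and schedule (c) takes a precomputed array of mutual information estimates.

import numpy as np

def reweight_ratio(t, m=10, mode="linear", alpha=0.05, beta=0.5, mi=None):
    # Exponent r_t for w_t = w ** r_t. The alpha/beta defaults follow the
    # best schedule-(a) setting in Table 5; schedule (b) needs alpha > 1.
    if mode == "linear":            # (a) r_t = alpha * t + beta
        r = alpha * t + beta
    elif mode == "exponential":     # (b) r_t = alpha ** (t - m)
        r = alpha ** (t - m)
    else:                           # (c) r_t = min(I) / I(X; Y_t)
        mi = np.asarray(mi)
        r = mi.min() / mi[t - 1]
    return float(np.clip(r, 0.0, 1.0))  # keep 0 < r_t <= 1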
Algorithm 1 MIR Strategy
Input: Model model, Input Data x, Ground Truth gt, WMSE Function wmse, Weighting Factor w.
Output: Loss L_MIR.
1: pred ← model(x)
2: L_MIR ← 0
3: for t ← 1 to 10 do            ▷ forecasting 10 steps
4:     r ← t / 10                ▷ linear solution
5:     w_t ← w^r
6:     L_MIR += wmse(pred[t], gt[t], w_t)
7: end for
8: return L_MIR
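A PyTorch sketch of Algorithm 1 with the linear solution $r_t = t/10$ follows; the (B, T, H, W) tensor layout in dBZ units and the squared-error base loss are assumptions, not details fixed by the paper.

import torch

def wmse_weights(x):
    # Piecewise weighting factor w from WMSE [11] (see the equation above).
    w = torch.ones_like(x)
    for thr, val in ((22.4, 2.0), (28.6, 5.0), (33.3, 10.0), (40.7, 30.0)):
        w = torch.where(x >= thr, torch.full_like(x, val), w)
    return w

def mir_loss(pred, gt, m=10):
    # pred, gt: (B, T, H, W) tensors in dBZ with T = m forecast frames.
    loss = 0.0
    for t in range(1, m + 1):
        r = t / m                              # linear solution for r_t
        w_t = wmse_weights(gt[:, t - 1]) ** r  # w_t = w ** r_t
        loss = loss + (w_t * (pred[:, t - 1] - gt[:, t - 1]) ** 2).mean()
    return loss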

3.3. Time Superimposing Strategy (TSS)

Traditionally, neural network models are simultaneously trained with all tasks, from t = 1 to t = 10 . The graph in Figure 3 illustrates that the training task at t = 1 provides the highest amount of information. As the forecasting length t increases, the mutual information steadily decreases. We adopt a curriculum learning approach to improve training efficiency and reorganize the training order of different forecasting length tasks. A straightforward strategy is to start with high mutual information tasks and gradually move to low mutual information tasks.
Suppose there is a set of training tasks, and the model is trained with all the tasks in the set during every iteration of the training process. The task set starts with only the task for t = 1 , and progressively incorporates other forecasting tasks with increasing lengths until t = 10 .
To be specific, the initial training task is $P(Y_{t_1=1} \mid X_{t_0})$. In the next stage, we simultaneously train $P(Y_{t_1=1} \mid X_{t_0})$ and $P(Y_{t_1=2} \mid X_{t_0})$. In stage three, we simultaneously train $P(Y_{t_1=1} \mid X_{t_0})$, $P(Y_{t_1=2} \mid X_{t_0})$, and $P(Y_{t_1=3} \mid X_{t_0})$, and so on.
We name this method the time superimposing strategy (TSS). TSS could be simplified into a loss function controlling forecasting length. The TSS with fixed training iterations per stage is shown in Algorithm 2. More TSS variants are discussed in Section 4.3.
Algorithm 2 TSS Strategy
Input: Total Iteration iter, Iterations Per Stage L, Model's Output pred, Ground Truth gt, Loss Function L(·,·).
Output: Loss L_TSS.
1: t ← ⌈iter / L⌉                  ▷ current stage index
2: L_TSS ← L(pred[0:t], gt[0:t])   ▷ first t frames
3: return L_TSS
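A PyTorch sketch of Algorithm 2; the MSE base loss is an assumption, and any frame-wise loss function can be substituted.

import torch
import torch.nn.functional as F

def tss_loss(pred, gt, iteration, L=4000):
    # Unlock one more forecast frame every L training iterations
    # (Algorithm 2), capped at the full forecasting horizon.
    t_star = min(iteration // L + 1, pred.shape[1])
    return F.mse_loss(pred[:, :t_star], gt[:, :t_star])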

4. Experiment

4.1. Experimental Settings

4.1.1. Dataset

Three radar echo datasets were considered in the experiments: TAASRAD19 [9], HKO-7 [11], and the East China Precipitation dataset [10], referred to as TAAS, HKO, and ECP, respectively. Dataset details are shown in Table 2. We adopted the anomaly detection method of Ref. [9] to mask noisy pixels. Sequences with a raining area of less than 5% were removed during pre-processing. Datasets were split in the chronological order of observations: the first 70% of each dataset was used for training, the next 10% for validation, and the last 20% for testing.
Neural network models for precipitation nowcasting typically forecast ten consecutive echo frames, which are used to evaluate prediction performance [3]. In this study, our goal was to accurately predict precipitation about two hours ahead. Therefore, we trained models to predict a consecutive sequence of 10 echo frames with a time interval of roughly 12 min between neighboring frames, so that the final echo frame lies about two hours ahead. The original time interval between two adjacent echo frames was 5 min for TAAS and 6 min for HKO and ECP; in the experiments, we doubled these intervals for computational efficiency (10 min for TAAS, and 12 min for HKO and ECP).
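The pre-processing described above can be sketched as follows; the > 0 dBZ rain/no-rain cut and the non-overlapping slicing are illustrative assumptions, as the paper does not specify its exact windowing.

import numpy as np

def make_sequences(frames, n_in=5, n_out=10, stride=2, rain_thr=0.05):
    # stride=2 doubles the frame interval (e.g., 6 min -> 12 min);
    # windows whose raining-area fraction is below rain_thr are dropped.
    sub = frames[::stride]
    length = n_in + n_out
    samples = []
    for start in range(0, len(sub) - length + 1, length):
        window = sub[start:start + length]
        if (window > 0).mean() >= rain_thr:      # keep sufficiently rainy sequences
            samples.append((window[:n_in], window[n_in:]))
    return samples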

4.1.2. Criterion

We adopted two meteorological indicators: the critical success index (CSI) and the Heidke skill score (HSS), defined as:
$$\mathrm{CSI}_z = \frac{TP}{TP + FN + FP},$$
$$\mathrm{HSS}_z = \frac{2\,(TP \times TN - FP \times FN)}{(TP + FN)(FN + TN) + (TP + FP)(FP + TN)},$$
where z (dBZ) is the threshold and TP, FN, FP, and TN are the numbers of true positives, false negatives, false positives, and true negatives, respectively. We empirically chose 20 dBZ to denote drizzle, 30 dBZ moderate rain, and 40 dBZ heavy rain. It was also necessary to evaluate how well the predicted radar echo images match the corresponding ground truth; thus, we also report results for two popular computer vision criteria: the structural similarity index measure (SSIM) [32] and the mean square error (MSE).
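A minimal sketch computing both skill scores at a threshold z from predicted and observed dBZ arrays, following the definitions above.

import numpy as np

def csi_hss(pred, gt, z):
    # Skill scores at threshold z (dBZ); pred and gt are same-shape arrays.
    p, g = pred >= z, gt >= z
    tp = float(np.sum(p & g))
    fp = float(np.sum(p & ~g))
    fn = float(np.sum(~p & g))
    tn = float(np.sum(~p & ~g))
    csi = tp / (tp + fn + fp)
    hss = 2.0 * (tp * tn - fp * fn) / (
        (tp + fn) * (fn + tn) + (tp + fp) * (fp + tn))
    return csi, hss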
SSIM assesses the similarity between two images x and y and can be defined as:
$$\mathrm{SSIM}(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y) = \frac{2\mu_x \mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1} \cdot \frac{2\sigma_x \sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2} \cdot \frac{\sigma_{xy} + c_3}{\sigma_x \sigma_y + c_3}.$$
The brightness similarity is represented by l(x, y), the contrast similarity by c(x, y), and the structural similarity by s(x, y). The mean values of x and y are $\mu_x$ and $\mu_y$, their standard deviations are $\sigma_x$ and $\sigma_y$, and $\sigma_{xy}$ is the cross-correlation between x and y. Small positive constants $c_1$, $c_2$, and $c_3$ prevent division by zero and numerical instability. These quantities are calculated over local patches of the image.

4.1.3. Implementation Details

We set the model's input echo frame sequence length to 5 and the output sequence length to 10. Radar echo images were resized to 120 × 120. All models were optimized for 50,000 iterations using the Adam optimizer with a learning rate of 0.001 and a batch size of 32. The loss function was the L1 + L2 loss. Our experiments were implemented in PyTorch, and training was conducted on 4 Nvidia Tesla A100 GPUs.
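A minimal sketch of this training configuration; equal weighting of the L1 and L2 terms is an assumption, as the paper states the loss only as "L1 + L2".

import torch
from torch import nn

def configure_training(model: nn.Module):
    # Adam with lr = 0.001 as in Section 4.1.3.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def l1_l2_loss(pred, gt):
        # L1 + L2 loss; the 1:1 term weighting is an illustrative choice.
        return nn.functional.l1_loss(pred, gt) + nn.functional.mse_loss(pred, gt)

    return optimizer, l1_l2_loss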

4.2. Results of MIR and TSS

We applied MIR and TSS to the well-known ConvLSTM model and present the results on three precipitation nowcasting datasets in Table 3. We also evaluated two competing strategies: scheduled sampling (SS) [33] and WMSE [11]. SS is a curriculum strategy used by PredRNN that adopts a sampling procedure at each timestep t and adjusts the sampling rate based on the training iteration index; however, it is incompatible with pyramid-shaped networks such as TrajGRU, DGMR, and UNet. WMSE is a reweighting strategy utilized by TrajGRU that improves minority performance by a large margin but downgrades the performance at 20 dBZ and the overall image quality.
Table 3 shows that ConvLSTM + TSS outperforms ConvLSTM and ConvLSTM + SS in all criteria. Notably, the proposed MIR strategy improves both minority and majority performance, as its CSI and HSS at 20, 30, and 40 dBZ are higher than those of WMSE. However, both MIR and WMSE degrade MSE and SSIM, with WMSE degrading them more. ConvLSTM + TSS + MIR achieves much better performance than all baseline strategies. We conclude that TSS and MIR assist the model in learning more information by handling high mutual information tasks.
We demonstrate a forecasting example in Figure 5. Both UNet and ConvLSTM predicted the correct trend but at the wrong position, whereas ConvLSTM + TSS and ConvLSTM + TSS + MIR forecast a relatively correct position. MIR encourages the model to make more heavy rainfall predictions.
Furthermore, we applied MIR and TSS to six models to assess the universality of the proposed strategies. As shown in Table 4, we can see that TSS enhances the performance of both majority and minority classes, as Model + TSS exhibits better overall performance on CSIavg and HSSavg than the model that does not utilize TSS. Moreover, when comparing Model + TSS with Model + TSS + MIR, we observe that MIR significantly improves the performance of the minority class without compromising the majority performance and the overall image quality. By leveraging TSS and MIR, ConvLSTM (2015) outperforms the latest precipitation nowcasting models.

4.3. Hyperparameters of MIR and TSS

4.3.1. MIR

Table 5 presents the results of several MIR variants, including the three strategies ((a), (b), and (c)) described in Section 3.2, where α and β are hyperparameters controlling the reweighting factor of MIR. The results are divided into two categories, with and without TSS, demonstrating that models utilizing TSS outperform those without it. Additionally, the MIR strategy further improves the model's overall performance. Among the reweighting methods, our proposed 1/I(X;Y) solution (c) shows excellent performance (the second-highest CSI_avg) and does not require any hyperparameters. Method (a) achieves the best CSI_avg with α = 0.05 and β = 0.5.

4.3.2. TSS

The model's performance is influenced by the number of training iterations L for each forecasting length t. Table 6 shows the performance of multiple TSS variants with different L. Here, L = 1k indicates that the model was trained for 1000 iterations for each t ∈ [1, 10], for a total of 10,000 iterations; after 10,000 iterations, the model may not have converged, so it continues training with t fixed at 10 for the remaining iterations. We also propose two schedules with changing L: 1k→4k and 4k→1k. For instance, 1k→4k means $L = 1000 + (t - 1) \times \frac{3000}{10 - 1}$, hence L ∈ {1000, 1333, 1666, …, 4000}. We observed that the model's performance improved only gradually beyond L = 4k, so we selected 4k to reduce the computational cost and report the model's performance at 4k in this article. The L schedules are illustrated in Figure 6.
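The L schedules can be written compactly as below; the helper name and its string interface are illustrative.

def stage_iterations(t, mode="4k", m=10):
    # Iterations-per-stage L for the schedules of Table 6. "1k->4k" ramps
    # L linearly from 1000 to 4000 over the m stages; "4k->1k" reverses it.
    if mode == "1k->4k":
        return 1000 + (t - 1) * 3000 // (m - 1)
    if mode == "4k->1k":
        return 4000 - (t - 1) * 3000 // (m - 1)
    return int(mode.rstrip("k")) * 1000   # fixed schedules: "1k" ... "10k"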

5. Discussion

5.1. How Does MIR Work?

Figure 7 displays the visualization of P(Y|X) for TAAS. The figure exhibits the conditional distributions for five different forecasting lengths: t = 1, t = 2, t = 5, t = 10, and t = ∞. At t = ∞, P(Y|X) is equal to P(Y), so I(X;Y) = 0 and H(Y|X) = H(Y). We present the original P(Y|X), the balanced P(Y|X), and the MIR-balanced P(Y|X) from top to bottom.
The five images in the top row of Figure 7 show that, as t increases, the conditional distribution P(Y|X) approaches the long-tailed distribution P(Y). The mutual information is high for t = 1 or t = 2, indicating that these tasks are relatively easy to learn. In contrast, for t = 5 or t = 10, the mutual information is low, making it difficult for models to learn the information. When predicting an echo image in the infinite future (t = ∞), P(Y|X) equals P(Y).
The simplest and most straightforward strategy to reweight the training samples is to balance P(Y) to a uniform distribution; we visualize the corresponding conditional distribution P(Y|X) in the middle row of Figure 7. Compared with the original P(Y|X) in the first row, whether t is small (such as 1 and 2) or large (such as 5, 10, and ∞), all Y have a more uniform probability. This indicates that the strategy counteracts the imbalanced tendency when t is large, but it also changes the conditional distribution of the easy-to-learn tasks at smaller t. For instance, when t = 1, the rebalanced P(Y|X) matrix has smaller values under the diagonal than the original matrix. Since smaller-t tasks are relatively easy to learn, this rebalancing strategy provides no benefit in that scenario.
Figure 7 presents the original conditional distribution P(Y|X), the marginal distribution P(Y), and the MIR-balanced P(Y|X). The MIR approach leverages two main strategies: (i) preserving the conditional distribution of high mutual information tasks, such as t = 1 and t = 2, and (ii) readjusting the conditional distribution of low mutual information tasks with large t. As a result, the MIR-balanced P(Y|X) retains higher mutual information at smaller t and a relatively even distribution at large t.

5.2. What Is the Relationship between Mutual Information and Model Performance?

As discussed in Section 3.2, the mutual information negatively correlates with t, and the model resists the data imbalance problem better in high mutual information scenarios. To verify the impact of mutual information on the performance of precipitation nowcasting models, we conducted experiments on two models and three precipitation datasets with t = 1, 2, 5, and 10 in the training phase. Setting t = 10 allows the loss function to be calculated over all 10 predicted frames. The forecasting length was 10 frames in the inference phase for all experiments, and all results were averaged across the 10 timesteps. Table 7 records the experimental results on the three datasets for the two most well-known models: an RNN model, ConvLSTM, and a CNN model, UNet. Although ConvLSTM and UNet were proposed seven years ago, these two models still rank at the top in recent precipitation nowcasting contests due to their simple structure and good compatibility.
As shown in Table 7, in terms of the minority class (CSI_40 and HSS_40), both ConvLSTM and UNet achieve better performance at smaller t and worse performance at larger t. This demonstrates that tasks with larger I(X;Y) provide models with better resistance to data imbalance.
Furthermore, in light of the frequency-domain training behavior described by Xu et al. [35], UNet's advantage over ConvLSTM in terms of 20 dBZ, 30 dBZ, and mean squared error (MSE) indicates that UNet is more adept at capturing low-frequency information. Nevertheless, UNet exhibits inferior performance in the structural similarity index (SSIM), which may be attributed to its rough fusion and expansion of the temporal axis. Additionally, the Markov chain formulation of ConvLSTM enables it to produce smoother results, which could also account for its superior SSIM.

5.3. Mutual Information across Datasets

The mutual information differs significantly among the three datasets. Figure 3 in Section 3 indicates that the mutual information of HKO-7 declines more slowly than that of TAASRAD19. Considering the goal of enhancing the amount of information available in training tasks, HKO-7 appears to be a better training set. Hence, we conducted experiments exchanging the training and test sets of all three datasets. As shown in Table 8, the HKO-7-trained variants outperformed the other variants, consistent with the results in Table 7.

5.4. Limitations of Reweighting

Table 9 presents the experimental results of the WMSE variants with t = 1, t = 10, SS, and TSS on the TAASRAD19 dataset. The WMSE approach, which incorporates P(Y) reweighting, substantially improves CSI and HSS at 40 dBZ. However, compared with the non-weighting methods, the WMSE strategy degrades SSIM by 5.2% and MSE by 21.1% on average (averaged over the four settings in Table 9). For MIR, these figures are 0.04% and 3.5%, respectively, indicating that the WMSE method trades image quality for minority class performance.

5.5. Limitations of MIR and TSS

TSS is a curriculum learning method that trains tasks in order of increasing difficulty, using the mutual information of each task to control the training sequence. However, TSS applies only to the prediction part and not the encoding part, which limits its effectiveness on separate encoder–decoder networks such as TrajGRU. ConvLSTM uses the same model parameters for each time step, whereas UNet uses different parameters for each time step; TSS is therefore also limited for UNet. Additionally, UNet encodes and generates all frames simultaneously, which reduces the need for a curriculum-learning-style strategy, as the parameters are relatively independent. MIR, in contrast, weakens the reweighting factors of high mutual information tasks, strengthening the simplest reweighting strategy, and works well with both RNN- and CNN-structured precipitation nowcasting models.
However, the approximation method in Equation (3) degrades the higher-order information of the data and adds uncertainty to the approximated P ( Y | X ) . Improving the approximation method can lead to a more precise I ( X ; Y ) .

6. Conclusions and Future Work

In the precipitation nowcasting task, previous studies have attributed poor prediction performances regarding heavy rainfall samples to the data imbalance issue. We found that prediction performance is related to both mutual information (MI) and data imbalance.
In this paper, we redefined the precipitation nowcasting task at the pixel level to estimate the conditional distribution P(Y|X) and the mutual information I(X;Y). We found that higher I(X;Y) corresponds to better resistance to data imbalance. Inspired by this finding, our reweighting method, MIR, preserves more information by assigning mild weighting factors to high-I(X;Y) data, successfully avoiding a downgrade in the majority class's performance. By studying the relationship between I(X;Y) and the forecasting timespan t, we found that smaller t benefits the model's training. Combining this feature with the merit of curriculum learning, ordered from easy to hard, we proposed a curriculum-learning-style training strategy, TSS. The experimental results demonstrated the superiority of the proposed strategies. With the help of the approximated I(X;Y) and P(Y|X), we also explained how P(Y)-based reweighting works and identified an informative precipitation dataset. This work is only a preliminary exploration, since P(Y|X) is not fully utilized; more mutual information-based strategies remain to be discovered.

Author Contributions

Conceptualization, Y.C. and X.Z.; data curation, Y.C. and H.S.; formal analysis, Y.C.; funding acquisition, J.Z.; investigation, Y.C.; methodology, Y.C. and D.Z.; project administration, J.Z.; resources, Y.C.; software, Y.C.; supervision, H.S.; validation, H.S. and J.Z.; visualization, Y.C.; writing—original draft, Y.C.; writing—review and editing, D.Z., H.S. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Nos. 62176059, 62101136).

Informed Consent Statement

Not applicable.

Data Availability Statement

All three radar echo datasets are publicly available. TAASRAD19 can be downloaded from https://doi.org/10.3390/atmos11030267, https://doi.org/10.3390/rs11242922 (accessed on 1 March 2023). HKO-7 can be found at https://github.com/sxjscience/HKO-7 (accessed on 1 March 2023). ECP is available at https://doi.org/10.7910/DVN/2GKMQJ (accessed on 1 March 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MIR   Mutual Information-based Reweighting
TSS   Time Superimposing Strategy
LSTM  Long Short-Term Memory
CNN   Convolutional Neural Network
RNN   Recurrent Neural Network
CSI   Critical Success Index
HSS   Heidke Skill Score
SSIM  Structural Similarity Index Measure
MAE   Mean Absolute Error
MSE   Mean Square Error

References

1. Lebedev, V.; Ivashkin, V.; Rudenko, I.; Ganshin, A.; Molchanov, A.; Ovcharenko, S.; Grokhovetskiy, R.; Bushmarinov, I.; Solomentsev, D. Precipitation nowcasting with satellite imagery. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2680–2688.
2. Sun, Z.; Sandoval, L.; Crystal-Ornelas, R.; Mousavi, S.M.; Wang, J.; Lin, C.; Cristea, N.; Tong, D.; Carande, W.H.; Ma, X.; et al. A review of Earth Artificial Intelligence. Comput. Geosci. 2022, 159, 105034.
3. Shi, X.; Chen, Z.; Wang, H.; Yeung, D.Y.; Wong, W.; Woo, W. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In Proceedings of the International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 802–810.
4. Niu, D.; Huang, J.; Zang, Z.; Xu, L.; Che, H.; Tang, Y. Two-stage spatiotemporal context refinement network for precipitation nowcasting. Remote Sens. 2021, 13, 4285.
5. Huang, Q.; Chen, S.; Tan, J. TSRC: A Deep Learning Model for Precipitation Short-Term Forecasting over China Using Radar Echo Data. Remote Sens. 2023, 15, 142.
6. Tuyen, D.N.; Tuan, T.M.; Le, X.H.; Tung, N.T.; Chau, T.K.; Van Hai, P.; Gerogiannis, V.C.; Son, L.H. RainPredRNN: A New Approach for Precipitation Nowcasting with Weather Radar Echo Images Based on Deep Learning. Axioms 2022, 11, 107.
7. Zhang, F.; Wang, X.; Guan, J. A Novel Multi-Input Multi-Output Recurrent Neural Network Based on Multimodal Fusion and Spatiotemporal Prediction for 0–4 Hour Precipitation Nowcasting. Atmosphere 2021, 12, 1596.
8. Cao, Y.; Chen, L.; Zhang, D.; Ma, L.; Shan, H. Hybrid Weighting Loss for Precipitation Nowcasting from Radar Images. In Proceedings of the ICASSP 2022—2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 22–27 May 2022; pp. 3738–3742.
9. Franch, G.; Maggio, V.; Coviello, L.; Pendesini, M.; Jurman, G.; Furlanello, C. TAASRAD19, a high-resolution weather radar reflectivity dataset for precipitation nowcasting. Sci. Data 2020, 7, 1–13.
10. Chen, L.; Cao, Y.; Ma, L.; Zhang, J. A Deep Learning-Based Methodology for Precipitation Nowcasting With Radar. Earth Space Sci. 2020, 7, e2019EA000812.
11. Shi, X.; Gao, Z.; Lausen, L.; Wang, H.; Yeung, D.Y.; Wong, W.; Woo, W. Deep learning for precipitation nowcasting: A benchmark and a new model. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5617–5627.
12. Ravuri, S.; Lenc, K.; Willson, M.; Kangin, D.; Lam, R.; Mirowski, P.; Fitzsimons, M.; Athanassiadou, M.; Kashem, S.; Madge, S.; et al. Skilful precipitation nowcasting using deep generative models of radar. Nature 2021, 597, 672–677.
13. Brillouin, L. Science and Information Theory; Courier Corporation: North Chelmsford, MA, USA, 2013.
14. Cao, Y.; Zhang, D.; Zheng, X.; Shan, H.; Zhang, J. Mutual Information based Reweighting for Precipitation Nowcasting. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023.
15. Gao, Z.; Shi, X.; Wang, H.; Yeung, D.Y.; Woo, W.C.; Wong, W.K. Deep learning and the weather forecasting problem: Precipitation nowcasting. In Deep Learning for the Earth Sciences: A Comprehensive Approach to Remote Sensing, Climate Science, and Geosciences; Wiley: Hoboken, NJ, USA, 2021; pp. 218–239.
16. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
17. Sun, N.; Zhou, Z.; Li, Q.; Jing, J. Three-Dimensional Gridded Radar Echo Extrapolation for Convective Storm Nowcasting Based on 3D-ConvLSTM Model. Remote Sens. 2022, 14, 4256.
18. Wang, Y.; Wu, H.; Zhang, J.; Gao, Z.; Wang, J.; Philip, S.Y.; Long, M. PredRNN: A recurrent neural network for spatiotemporal predictive learning. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 2208–2225.
19. Espeholt, L.; Agrawal, S.; Sønderby, C.; Kumar, M.; Heek, J.; Bromberg, C.; Gazen, C.; Hickey, J.; Bell, A.; Kalchbrenner, N. Skillful Twelve Hour Precipitation Forecasts using Large Context Neural Networks. arXiv 2021, arXiv:2111.07470.
20. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
21. Ayzel, G.; Scheffer, T.; Heistermann, M. RainNet v1.0: A convolutional neural network for radar-based precipitation nowcasting. Geosci. Model Dev. 2020, 13, 2631–2644.
22. Agrawal, S.; Barrington, L.; Bromberg, C.; Burge, J.; Gazen, C.; Hickey, J. Machine learning for precipitation nowcasting from radar images. arXiv 2019, arXiv:1912.12132.
23. Ye, Y.; Gao, F.; Cheng, W.; Liu, C.; Zhang, S. MSSTNet: A Multi-Scale Spatiotemporal Prediction Neural Network for Precipitation Nowcasting. Remote Sens. 2023, 15, 137.
24. Trebing, K.; Stanczyk, T.; Mehrkanoon, S. SmaAt-UNet: Precipitation nowcasting using a small attention-UNet architecture. Pattern Recognit. Lett. 2021, 145, 178–186.
25. Zeng, Q.; Li, H.; Zhang, T.; He, J.; Zhang, F.; Wang, H.; Qing, Z.; Yu, Q.; Shen, B. Prediction of Radar Echo Space-Time Sequence Based on Improving TrajGRU Deep-Learning Model. Remote Sens. 2022, 14, 5042.
26. Xu, L.; Niu, D.; Zhang, T.; Chen, P.; Chen, X.; Li, Y. Two-Stage UA-GAN for Precipitation Nowcasting. Remote Sens. 2022, 14, 5948.
27. Chawla, N.V.; Japkowicz, N.; Kotcz, A. Special issue on learning from imbalanced data sets. ACM SIGKDD Explor. Newsl. 2004, 6, 1–6.
28. Yang, Y.; Zha, K.; Chen, Y.; Wang, H.; Katabi, D. Delving into deep imbalanced regression. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 11842–11851.
29. Johnson, J.M.; Khoshgoftaar, T.M. Survey on deep learning with class imbalance. J. Big Data 2019, 6, 1–54.
30. Liu, H.; Zhou, M.; Liu, Q. An embedded feature selection method for imbalanced data classification. IEEE/CAA J. Autom. Sin. 2019, 6, 703–715.
31. Yang, Y.; Xu, Z. Rethinking the value of labels for improving class-imbalanced learning. Adv. Neural Inf. Process. Syst. 2020, 33, 19290–19301.
32. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
33. Bengio, S.; Vinyals, O.; Jaitly, N.; Shazeer, N. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–10 December 2015; Volume 28.
34. Wang, Y.; Long, M.; Wang, J.; Gao, Z.; Yu, P.S. PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30.
35. Xu, Z.Q.J.; Zhang, Y.; Xiao, Y. Training behavior of deep neural network in frequency domain. In Proceedings of the International Conference on Neural Information Processing, Bali, Indonesia, 8–12 December 2019; pp. 264–274.
Figure 1. Distribution of different radar echo intensity levels. ECP and TAASRAD19 were collected from the north temperate zone, and HKO-7 was collected from the tropics.
Figure 2. The conditional distributions P(Y|X) of two precipitation nowcasting tasks on one dataset. Left: predicting the future radar echo intensity in the next 10 min. Right: predicting the future intensity in the next 100 min. The radar echo intensity (0–70 dBZ) is evenly divided into five categories, and the X and Y axes stand for the index of each category. The value in each cell represents the corresponding conditional probability P(Y|X). Take the 0.95 in the bottom-left corner of the left image as an example: if the current intensity is in category 1 (0–14 dBZ), the probability that the intensity in 10 min is still in category 1 is 0.95.
Figure 3. Mutual information of tasks with different forecasting lengths from three precipitation datasets.
Figure 4. Three strategies of MIR.
Figure 5. Radar reflectivity predictions of different strategies.
Figure 6. TSS strategies with different L. t stands for the number of frames for training in Algorithm 2.
Figure 7. (Top) P(Y|X) of the TAAS dataset. (Middle) P(Y)-balanced P(Y|X). (Bottom) MIR reweighting strategy. Smaller H(Y|X) corresponds to larger I(X;Y). The radar echo intensity (0–70 dBZ) is divided evenly into five categories; the X and Y axes stand for the category index.
Table 1. MAE of the imbalanced binary classification problem. The imbalance ratio refers to the ratio of the number of samples in class A to the number of samples in class B.
Each cell shows MAE (mutual information).
Imbalance Ratio | Δμ = 20     | Δμ = 10     | Δμ = 3      | Δμ = 1      | Δμ = 0.1
1:99            | 0.00 (0.08) | 0.00 (0.08) | 0.26 (0.06) | 0.48 (0.02) | 0.50 (0.01)
10:90           | 0.00 (0.47) | 0.00 (0.47) | 0.13 (0.35) | 0.44 (0.06) | 0.50 (0.08)
50:50           | 0.00 (1.00) | 0.00 (1.00) | 0.11 (0.74) | 0.40 (0.16) | 0.50 (0.09)
Table 2. Summary of three precipitation datasets.
Dataset | Location  | Interval | Annual Precipitation | Frames  | Years | Radar Diameter | Resolution
TAAS    | Trentino  | 5 min    | 780 mm               | 894,916 | 9     | 240 km         | (480, 480)
HKO     | Hong Kong | 6 min    | 2200 mm              | 238,000 | 7     | 256 km         | (480, 480)
ECP     | Shanghai  | 6 min    | 1200 mm              | 170,000 | 3     | 251 km         | (501, 501)
Table 3. Results of ConvLSTM on three datasets. ↑ denotes that higher is better; ↓ denotes that lower is better. The values 20, 30, and 40 dBZ denote the thresholds of the CSI and HSS criteria, and avg is their mean.
Training Strategy | CSI↑ avg | CSI↑ 20 | CSI↑ 30 | CSI↑ 40 | HSS↑ avg | HSS↑ 20 | HSS↑ 30 | HSS↑ 40 | SSIM↑ | MSE↓ (×10³) | Dataset
ConvLSTM     | 0.148 | 0.330 | 0.100 | 0.014 | 0.225 | 0.476 | 0.173 | 0.026 | 0.689 | 4.485  | TAAS
+SS          | 0.154 | 0.327 | 0.118 | 0.018 | 0.234 | 0.469 | 0.200 | 0.033 | 0.648 | 5.050  |
+TSS         | 0.178 | 0.373 | 0.141 | 0.020 | 0.265 | 0.523 | 0.236 | 0.036 | 0.695 | 4.253  |
+WMSE        | 0.194 | 0.336 | 0.173 | 0.075 | 0.301 | 0.482 | 0.285 | 0.135 | 0.638 | 5.461  |
+MIR         | 0.206 | 0.346 | 0.191 | 0.081 | 0.316 | 0.494 | 0.309 | 0.145 | 0.663 | 4.978  |
+TSS + MIR   | 0.210 | 0.370 | 0.190 | 0.071 | 0.318 | 0.520 | 0.308 | 0.125 | 0.681 | 4.429  |
ConvLSTM     | 0.246 | 0.439 | 0.261 | 0.036 | 0.350 | 0.589 | 0.397 | 0.064 | 0.739 | 8.069  | HKO
+SS          | 0.259 | 0.450 | 0.279 | 0.047 | 0.365 | 0.596 | 0.417 | 0.081 | 0.731 | 8.243  |
+TSS         | 0.273 | 0.472 | 0.297 | 0.051 | 0.382 | 0.619 | 0.441 | 0.088 | 0.741 | 7.358  |
+WMSE        | 0.292 | 0.436 | 0.303 | 0.136 | 0.418 | 0.579 | 0.444 | 0.232 | 0.717 | 12.292 |
+MIR         | 0.325 | 0.472 | 0.349 | 0.154 | 0.458 | 0.616 | 0.499 | 0.259 | 0.740 | 9.630  |
+TSS + MIR   | 0.330 | 0.484 | 0.354 | 0.151 | 0.461 | 0.627 | 0.505 | 0.252 | 0.740 | 7.805  |
ConvLSTM     | 0.243 | 0.461 | 0.262 | 0.006 | 0.345 | 0.618 | 0.406 | 0.011 | 0.893 | 3.224  | ECP
+SS          | 0.275 | 0.480 | 0.304 | 0.040 | 0.385 | 0.635 | 0.455 | 0.066 | 0.893 | 3.182  |
+TSS         | 0.293 | 0.513 | 0.316 | 0.050 | 0.404 | 0.663 | 0.467 | 0.083 | 0.898 | 2.803  |
+WMSE        | 0.286 | 0.467 | 0.299 | 0.092 | 0.413 | 0.622 | 0.452 | 0.165 | 0.888 | 3.763  |
+MIR         | 0.304 | 0.494 | 0.324 | 0.094 | 0.430 | 0.648 | 0.479 | 0.163 | 0.894 | 3.418  |
+TSS + MIR   | 0.322 | 0.524 | 0.343 | 0.098 | 0.447 | 0.673 | 0.500 | 0.167 | 0.898 | 2.846  |
Table 4. Results of TSS and MIR on various models on the TAAS dataset.
Training Model | CSI↑ avg | CSI↑ 20 | CSI↑ 30 | CSI↑ 40 | HSS↑ avg | HSS↑ 20 | HSS↑ 30 | HSS↑ 40 | SSIM↑
ConvLSTM [3]    | 0.148 | 0.330 | 0.100 | 0.014 | 0.225 | 0.476 | 0.173 | 0.026 | 0.689
+TSS            | 0.178 | 0.373 | 0.141 | 0.020 | 0.265 | 0.523 | 0.236 | 0.036 | 0.695
+TSS + MIR      | 0.210 | 0.370 | 0.190 | 0.071 | 0.318 | 0.520 | 0.308 | 0.125 | 0.681
UNet [20]       | 0.136 | 0.322 | 0.081 | 0.004 | 0.201 | 0.462 | 0.134 | 0.008 | 0.652
+TSS            | 0.136 | 0.333 | 0.069 | 0.005 | 0.200 | 0.475 | 0.114 | 0.010 | 0.666
+TSS + MIR      | 0.193 | 0.343 | 0.178 | 0.056 | 0.291 | 0.483 | 0.289 | 0.102 | 0.657
TrajGRU [11]    | 0.154 | 0.329 | 0.120 | 0.014 | 0.236 | 0.474 | 0.208 | 0.027 | 0.672
+TSS            | 0.156 | 0.343 | 0.119 | 0.005 | 0.234 | 0.489 | 0.202 | 0.010 | 0.676
+TSS + MIR      | 0.205 | 0.358 | 0.183 | 0.073 | 0.313 | 0.507 | 0.299 | 0.133 | 0.673
PredRNN [34] ¹  | 0.159 | 0.296 | 0.137 | 0.044 | 0.243 | 0.426 | 0.224 | 0.078 | 0.681
+TSS            | 0.170 | 0.333 | 0.149 | 0.029 | 0.256 | 0.469 | 0.244 | 0.054 | 0.706
+TSS + MIR      | 0.199 | 0.356 | 0.186 | 0.056 | 0.301 | 0.502 | 0.300 | 0.100 | 0.704
PredRNNV2 [18]  | 0.174 | 0.355 | 0.155 | 0.013 | 0.262 | 0.503 | 0.258 | 0.025 | 0.695
+TSS            | 0.176 | 0.365 | 0.144 | 0.020 | 0.262 | 0.512 | 0.239 | 0.036 | 0.696
+TSS + MIR      | 0.213 | 0.360 | 0.196 | 0.083 | 0.324 | 0.509 | 0.316 | 0.146 | 0.696
DGMR [12]       | 0.159 | 0.350 | 0.111 | 0.016 | 0.239 | 0.499 | 0.189 | 0.030 | 0.672
+GAN ²          | 0.183 | 0.351 | 0.154 | 0.044 | 0.279 | 0.499 | 0.257 | 0.080 | 0.651
+TSS            | 0.167 | 0.361 | 0.121 | 0.019 | 0.249 | 0.510 | 0.204 | 0.034 | 0.676
+TSS + MIR      | 0.220 | 0.379 | 0.192 | 0.088 | 0.331 | 0.527 | 0.311 | 0.155 | 0.687
¹ PredRNN and V2 are implemented with the SS strategy. ² DGMR is trained with the GAN loss, according to the paper.
Table 5. MIR strategies on HKO-7. False TSS stands for the t fixed to 10.
TSS   | Reweighting            | CSI↑ avg | CSI↑ 20 | CSI↑ 30 | CSI↑ 40 | HSS↑ avg | HSS↑ 20 | HSS↑ 30 | HSS↑ 40 | SSIM↑ | MSE↓ (×10³)
False | None                   | 0.246 | 0.439 | 0.261 | 0.036 | 0.350 | 0.589 | 0.397 | 0.064 | 0.739 | 8.069
      | WMSE                   | 0.295 | 0.439 | 0.314 | 0.131 | 0.422 | 0.583 | 0.459 | 0.224 | 0.717 | 12.209
      | (a) α = 0.1, β = 0     | 0.288 | 0.451 | 0.320 | 0.094 | 0.409 | 0.594 | 0.466 | 0.168 | 0.722 | 9.701
      | (a) α = 0.1, β = 0.2   | 0.289 | 0.443 | 0.314 | 0.110 | 0.413 | 0.585 | 0.458 | 0.195 | 0.718 | 11.247
      | (a) α = 0.2, β = 0     | 0.246 | 0.391 | 0.258 | 0.089 | 0.352 | 0.518 | 0.380 | 0.157 | 0.681 | 24.517
      | (a) α = 0.05, β = 0    | 0.278 | 0.460 | 0.314 | 0.059 | 0.392 | 0.607 | 0.464 | 0.106 | 0.726 | 8.086
      | (a) α = 0.05, β = 0.5  | 0.303 | 0.454 | 0.328 | 0.126 | 0.430 | 0.597 | 0.476 | 0.217 | 0.722 | 10.351
      | (b) α = 2              | 0.260 | 0.450 | 0.291 | 0.040 | 0.368 | 0.596 | 0.436 | 0.071 | 0.724 | 8.995
      | (c) I(X;Y)             | 0.293 | 0.445 | 0.316 | 0.117 | 0.418 | 0.588 | 0.461 | 0.205 | 0.720 | 10.865
True  | None                   | 0.271 | 0.477 | 0.291 | 0.045 | 0.377 | 0.623 | 0.431 | 0.077 | 0.741 | 7.392
      | WMSE                   | 0.320 | 0.463 | 0.333 | 0.165 | 0.452 | 0.603 | 0.479 | 0.274 | 0.729 | 11.612
      | (a) α = 0.1, β = 0     | 0.321 | 0.489 | 0.351 | 0.124 | 0.450 | 0.631 | 0.503 | 0.215 | 0.738 | 7.832
      | (a) α = 0.1, β = 0.2   | 0.323 | 0.485 | 0.350 | 0.135 | 0.452 | 0.626 | 0.500 | 0.231 | 0.736 | 9.085
      | (a) α = 0.2, β = 0     | 0.301 | 0.465 | 0.328 | 0.110 | 0.421 | 0.603 | 0.470 | 0.189 | 0.720 | 9.048
      | (a) α = 0.05, β = 0    | 0.304 | 0.494 | 0.343 | 0.076 | 0.422 | 0.640 | 0.496 | 0.131 | 0.741 | 7.440
      | (a) α = 0.05, β = 0.5  | 0.332 | 0.487 | 0.358 | 0.150 | 0.464 | 0.628 | 0.509 | 0.254 | 0.736 | 9.221
      | (b) α = 2              | 0.287 | 0.485 | 0.313 | 0.062 | 0.400 | 0.631 | 0.462 | 0.108 | 0.740 | 7.399
      | (c) I(X;Y)             | 0.329 | 0.493 | 0.358 | 0.136 | 0.459 | 0.635 | 0.509 | 0.232 | 0.736 | 9.007
Table 6. The TSS strategy with different L on ConvLSTM.
L (Iterations) | CSI↑ avg | CSI↑ 20 | CSI↑ 30 | CSI↑ 40 | HSS↑ avg | HSS↑ 20 | HSS↑ 30 | HSS↑ 40 | SSIM↑ | MSE↓ (×10³) | Dataset
1k     | 0.142 | 0.325 | 0.095 | 0.007 | 0.216 | 0.469 | 0.164 | 0.014 | 0.678 | 4.527 | TAAS
2k     | 0.152 | 0.348 | 0.098 | 0.011 | 0.228 | 0.496 | 0.168 | 0.020 | 0.687 | 4.299 |
3k     | 0.151 | 0.352 | 0.088 | 0.013 | 0.225 | 0.500 | 0.150 | 0.023 | 0.691 | 4.299 |
4k     | 0.153 | 0.340 | 0.105 | 0.016 | 0.231 | 0.486 | 0.177 | 0.030 | 0.692 | 4.294 |
5k     | 0.173 | 0.367 | 0.134 | 0.018 | 0.258 | 0.515 | 0.226 | 0.032 | 0.693 | 4.297 |
8k     | 0.174 | 0.363 | 0.133 | 0.025 | 0.261 | 0.512 | 0.225 | 0.046 | 0.695 | 4.312 |
10k    | 0.174 | 0.341 | 0.147 | 0.032 | 0.259 | 0.478 | 0.241 | 0.058 | 0.695 | 4.344 |
1k→4k  | 0.152 | 0.357 | 0.088 | 0.012 | 0.226 | 0.506 | 0.150 | 0.022 | 0.686 | 4.374 |
4k→1k  | 0.156 | 0.365 | 0.092 | 0.012 | 0.230 | 0.514 | 0.154 | 0.022 | 0.690 | 4.375 |
1k     | 0.269 | 0.478 | 0.291 | 0.038 | 0.375 | 0.625 | 0.434 | 0.066 | 0.737 | 7.511 | HKO
2k     | 0.265 | 0.469 | 0.283 | 0.043 | 0.373 | 0.617 | 0.425 | 0.076 | 0.738 | 7.449 |
3k     | 0.273 | 0.467 | 0.299 | 0.053 | 0.382 | 0.613 | 0.443 | 0.091 | 0.738 | 7.605 |
4k     | 0.275 | 0.474 | 0.305 | 0.048 | 0.384 | 0.620 | 0.449 | 0.082 | 0.741 | 7.358 |
5k     | 0.271 | 0.477 | 0.291 | 0.045 | 0.377 | 0.623 | 0.431 | 0.077 | 0.741 | 7.392 |
8k     | 0.273 | 0.480 | 0.287 | 0.053 | 0.382 | 0.626 | 0.428 | 0.092 | 0.742 | 7.323 |
10k    | 0.273 | 0.480 | 0.294 | 0.046 | 0.381 | 0.626 | 0.436 | 0.079 | 0.743 | 7.313 |
1k→4k  | 0.269 | 0.480 | 0.288 | 0.040 | 0.376 | 0.627 | 0.430 | 0.071 | 0.739 | 7.483 |
4k→1k  | 0.272 | 0.479 | 0.296 | 0.040 | 0.378 | 0.626 | 0.440 | 0.070 | 0.739 | 7.455 |
1k     | 0.267 | 0.496 | 0.279 | 0.026 | 0.373 | 0.650 | 0.424 | 0.045 | 0.895 | 2.948 | ECP
2k     | 0.274 | 0.486 | 0.298 | 0.039 | 0.384 | 0.640 | 0.448 | 0.066 | 0.892 | 3.158 |
3k     | 0.282 | 0.510 | 0.292 | 0.044 | 0.391 | 0.662 | 0.437 | 0.075 | 0.898 | 2.789 |
4k     | 0.290 | 0.508 | 0.314 | 0.049 | 0.401 | 0.658 | 0.465 | 0.081 | 0.898 | 2.774 |
5k     | 0.290 | 0.498 | 0.314 | 0.060 | 0.404 | 0.649 | 0.464 | 0.101 | 0.899 | 2.779 |
8k     | 0.291 | 0.506 | 0.306 | 0.061 | 0.402 | 0.656 | 0.451 | 0.099 | 0.898 | 2.895 |
10k    | 0.308 | 0.523 | 0.337 | 0.064 | 0.424 | 0.673 | 0.493 | 0.106 | 0.899 | 2.781 |
1k→4k  | 0.248 | 0.455 | 0.262 | 0.026 | 0.354 | 0.611 | 0.404 | 0.047 | 0.892 | 3.136 |
4k→1k  | 0.279 | 0.511 | 0.286 | 0.039 | 0.384 | 0.662 | 0.426 | 0.065 | 0.898 | 2.819 |
Table 7. Performance with different values of t.
Model | Training | CSI↑ avg | CSI↑ 20 | CSI↑ 30 | CSI↑ 40 | HSS↑ avg | HSS↑ 20 | HSS↑ 30 | HSS↑ 40 | SSIM↑ | MSE↓ (×10³) | Dataset
ConvLSTM | t = 1  | 0.122 | 0.230 | 0.106 | 0.031 | 0.186 | 0.330 | 0.171 | 0.055 | 0.571 | 7.135  | TAAS
         | t = 2  | 0.142 | 0.306 | 0.116 | 0.004 | 0.216 | 0.442 | 0.198 | 0.007 | 0.591 | 5.626  |
         | t = 5  | 0.153 | 0.348 | 0.093 | 0.019 | 0.228 | 0.496 | 0.154 | 0.034 | 0.695 | 4.143  |
         | t = 10 | 0.148 | 0.330 | 0.100 | 0.014 | 0.225 | 0.476 | 0.173 | 0.026 | 0.689 | 4.485  |
         | t = 1  | 0.244 | 0.380 | 0.276 | 0.075 | 0.352 | 0.519 | 0.409 | 0.128 | 0.681 | 11.972 | HKO
         | t = 2  | 0.240 | 0.422 | 0.264 | 0.035 | 0.341 | 0.567 | 0.397 | 0.060 | 0.724 | 9.542  |
         | t = 5  | 0.250 | 0.449 | 0.259 | 0.043 | 0.355 | 0.598 | 0.393 | 0.074 | 0.743 | 7.808  |
         | t = 10 | 0.246 | 0.439 | 0.261 | 0.036 | 0.350 | 0.589 | 0.397 | 0.064 | 0.739 | 8.069  |
         | t = 1  | 0.255 | 0.470 | 0.250 | 0.050 | 0.358 | 0.618 | 0.378 | 0.078 | 0.895 | 3.597  | ECP
         | t = 2  | 0.269 | 0.480 | 0.277 | 0.049 | 0.371 | 0.623 | 0.408 | 0.083 | 0.888 | 3.800  |
         | t = 5  | 0.262 | 0.490 | 0.266 | 0.030 | 0.365 | 0.643 | 0.402 | 0.051 | 0.894 | 3.125  |
         | t = 10 | 0.243 | 0.461 | 0.262 | 0.006 | 0.345 | 0.618 | 0.406 | 0.011 | 0.893 | 3.224  |
UNet     | t = 1  | 0.186 | 0.376 | 0.171 | 0.012 | 0.273 | 0.518 | 0.279 | 0.021 | 0.649 | 4.515  | TAAS
         | t = 2  | 0.190 | 0.382 | 0.188 | 0.001 | 0.275 | 0.524 | 0.301 | 0.001 | 0.656 | 4.407  |
         | t = 5  | 0.148 | 0.329 | 0.104 | 0.009 | 0.219 | 0.469 | 0.170 | 0.017 | 0.663 | 4.366  |
         | t = 10 | 0.136 | 0.322 | 0.081 | 0.004 | 0.201 | 0.462 | 0.134 | 0.008 | 0.652 | 4.591  |
         | t = 1  | 0.274 | 0.472 | 0.316 | 0.034 | 0.375 | 0.610 | 0.455 | 0.059 | 0.723 | 7.768  | HKO
         | t = 2  | 0.271 | 0.470 | 0.314 | 0.029 | 0.372 | 0.611 | 0.457 | 0.049 | 0.723 | 7.296  |
         | t = 5  | 0.267 | 0.474 | 0.311 | 0.015 | 0.366 | 0.616 | 0.455 | 0.028 | 0.698 | 7.023  |
         | t = 10 | 0.248 | 0.449 | 0.279 | 0.017 | 0.346 | 0.591 | 0.416 | 0.031 | 0.684 | 7.071  |
         | t = 1  | 0.274 | 0.502 | 0.314 | 0.005 | 0.373 | 0.646 | 0.463 | 0.009 | 0.888 | 3.090  | ECP
         | t = 2  | 0.284 | 0.521 | 0.326 | 0.005 | 0.385 | 0.668 | 0.477 | 0.010 | 0.880 | 2.879  |
         | t = 5  | 0.270 | 0.500 | 0.304 | 0.007 | 0.372 | 0.651 | 0.454 | 0.012 | 0.873 | 2.673  |
         | t = 10 | 0.244 | 0.454 | 0.269 | 0.009 | 0.343 | 0.605 | 0.409 | 0.016 | 0.874 | 2.778  |
Table 8. Switching the training set and the testing set of three datasets. * stands for TSS + MIR.
Model       | Train | Test  | CSI↑ avg | HSS↑ avg | SSIM↑ | MSE↓ (×10³)
UNet        | TAAS  | ECP   | 0.263 | 0.362 | 0.867 | 2.915
UNet        | HKO-7 | ECP   | 0.269 | 0.371 | 0.886 | 2.858
ConvLSTM    | TAAS  | ECP   | 0.191 | 0.271 | 0.880 | 3.160
ConvLSTM    | HKO-7 | ECP   | 0.258 | 0.367 | 0.892 | 3.203
ConvLSTM *  | TAAS  | ECP   | 0.262 | 0.370 | 0.748 | 3.350
ConvLSTM *  | HKO-7 | ECP   | 0.329 | 0.457 | 0.898 | 2.947
UNet        | TAAS  | HKO-7 | 0.177 | 0.250 | 0.710 | 8.505
UNet        | ECP   | HKO-7 | 0.212 | 0.299 | 0.723 | 8.723
ConvLSTM    | TAAS  | HKO-7 | 0.233 | 0.316 | 0.694 | 8.084
ConvLSTM    | ECP   | HKO-7 | 0.240 | 0.326 | 0.719 | 8.405
ConvLSTM *  | TAAS  | HKO-7 | 0.242 | 0.337 | 0.649 | 8.210
ConvLSTM *  | ECP   | HKO-7 | 0.273 | 0.384 | 0.721 | 9.266
UNet        | ECP   | TAAS  | 0.144 | 0.212 | 0.458 | 7.390
UNet        | HKO-7 | TAAS  | 0.168 | 0.246 | 0.582 | 5.566
ConvLSTM    | ECP   | TAAS  | 0.138 | 0.209 | 0.379 | 6.932
ConvLSTM    | HKO-7 | TAAS  | 0.165 | 0.252 | 0.391 | 6.326
ConvLSTM *  | ECP   | TAAS  | 0.196 | 0.298 | 0.390 | 6.906
ConvLSTM *  | HKO-7 | TAAS  | 0.212 | 0.320 | 0.410 | 6.103
Table 9. Reweighting strategy on the ConvLSTM of the TAASRAD19 dataset.
Strategy | Reweighting | CSI↑ 20 | CSI↑ 40 | HSS↑ 20 | HSS↑ 40 | SSIM↑ | MSE↓ (×10³)
t = 1    | None | 0.230 | 0.031 | 0.330 | 0.055 | 0.571 | 7.135
t = 10   |      | 0.330 | 0.014 | 0.476 | 0.026 | 0.689 | 4.485
SS       |      | 0.327 | 0.018 | 0.469 | 0.033 | 0.648 | 5.050
TSS      |      | 0.373 | 0.020 | 0.523 | 0.036 | 0.695 | 4.253
t = 1    | WMSE | 0.292 | 0.053 | 0.418 | 0.090 | 0.519 | 9.084
t = 10   |      | 0.336 | 0.075 | 0.482 | 0.135 | 0.638 | 5.461
SS       |      | 0.334 | 0.090 | 0.479 | 0.159 | 0.629 | 5.700
TSS      |      | 0.370 | 0.088 | 0.518 | 0.155 | 0.671 | 4.946
t = 1 *  | MIR  | 0.230 | 0.031 | 0.330 | 0.055 | 0.571 | 7.135
t = 10   |      | 0.346 | 0.081 | 0.494 | 0.145 | 0.663 | 4.978
SS       |      | 0.370 | 0.082 | 0.519 | 0.145 | 0.679 | 4.992
TSS      |      | 0.370 | 0.071 | 0.520 | 0.125 | 0.681 | 4.429
* The MIR strategy is identical to non-weighting when t = 1.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
