Article

RN-Net: A Deep Learning Approach to 0–2 Hour Rainfall Nowcasting Based on Radar and Automatic Weather Station Data

1 School of Computer, National University of Defense Technology, Changsha 410000, China
2 School of Meteorology and Oceanography, National University of Defense Technology, Changsha 410000, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(6), 1981; https://doi.org/10.3390/s21061981
Submission received: 1 January 2021 / Revised: 6 March 2021 / Accepted: 8 March 2021 / Published: 11 March 2021
(This article belongs to the Section Intelligent Sensors)

Abstract

Precipitation has an important impact on people's daily life and on disaster prevention and mitigation. However, it is difficult to provide accurate rainfall nowcasts due to spin-up problems in numerical weather prediction models. Furthermore, existing rainfall nowcasting methods based on machine learning and deep learning cannot provide large-area rainfall nowcasting with high spatiotemporal resolution. This paper proposes a dual-input dual-encoder recurrent neural network, namely the Rainfall Nowcasting Network (RN-Net), to solve this problem. It takes past grid rainfall data interpolated from automatic weather stations and Doppler radar mosaic data as input, and forecasts the grid rainfall data for the next 2 h. We conduct experiments on the Southeastern China dataset. With a threshold of 0.25 mm, RN-Net's rainfall nowcasting threat scores reach 0.523, 0.503, and 0.435 within 0.5 h, 1 h, and 2 h, respectively. Compared with Weather Research and Forecasting model rainfall nowcasting, the threat scores are increased by nearly four times, three times, and three times, respectively.

1. Introduction

Precipitation is the main forecast element of Numerical Weather Prediction [1] (NWP), and it has an important impact on people's daily life [2,3] and on disaster prevention and mitigation [4]. After years of development, current short-term and medium-term NWP models have become increasingly accurate. However, for rainfall nowcasting, it is difficult to give accurate forecast results due to spin-up [5] and other problems in NWP models.
In recent years, artificial intelligence has become a new engine of the global scientific and technological revolution, and some scholars have applied machine learning and deep learning to precipitation forecasting [6,7,8,9,10,11,12,13]. After Shi et al. [14] achieved precipitation intensity nowcasting by radar echo extrapolation, it emerged as a hot research topic in the meteorological community. They formulated radar echo extrapolation as a spatiotemporal prediction problem and used ConvLSTM, which applies a convolutional structure to LSTM, to predict future radar echo data from past radar echo data. They then used the Z–R relationship to convert the predicted radar echo data into precipitation intensity data to realize precipitation intensity nowcasting. They conducted experiments on a dataset composed of radar echo data from 97 days of precipitation in Hong Kong in 2011–2013. ConvLSTM reached a Critical Success Index (CSI) of 0.577 with a threshold of 0.5 mm/h in the next 1.5 h, showing a strong ability to forecast precipitation intensity. In 2017, Shi et al. [15] further proposed TrajGRU to improve precipitation intensity nowcasting. TrajGRU uses generated optical flow [16] to realize a connection structure based on position changes, so that each point in the convolution structure is connected to points with higher correlation instead of a fixed number of surrounding points. In the experiment on the HKO-7 dataset, TrajGRU's CSI reached 0.552 in the next 2 h with a threshold of 0.5 mm/h.
After Shi et al. [14] formulated radar echo extrapolation as a spatiotemporal prediction problem, many spatiotemporal prediction methods [17,18] adopted radar echo extrapolation as one of the benchmark problems for evaluating their effectiveness. Wang et al. [17] proposed a spatiotemporal prediction method called PredRNN, which solves the problem that the spatial features of each ConvLSTM layer are independent of each other across the time series. Spatiotemporal memory units are added to PredRNN and connected through a zigzag structure so that features can be propagated both spatially and temporally. They conducted experiments on a radar echo dataset in Guangzhou, where the mean square error of PredRNN improved on that of ConvLSTM by 30%. Bonnet et al. [19] applied the spatiotemporal prediction method PredRNN++ [20] to precipitation intensity nowcasting. PredRNN++ utilizes a Causal LSTM unit to integrate temporal and spatial features and a Gradient Highway Unit (GHU) that can alleviate gradient vanishing. However, precipitation intensity nowcasting based on radar echo extrapolation has two main problems. The first is that radar echo data cannot reflect the real-world distribution of precipitation, owing to the working principle of the radar and various noises. The second is that the precipitation intensity converted from radar echo data is inconsistent with the actual precipitation intensity, owing to the inaccurate Z–R relationship.
Compared with the radar echo data used in precipitation intensity nowcasting, the rainfall data used in rainfall nowcasting can be directly measured by rain gauges and other equipment, and thus reflect real-world precipitation more accurately. Currently, there are few rainfall nowcasting methods based on machine learning and deep learning. Zhang et al. [21] used a multi-layer perceptron to forecast the rainfall at 56 weather stations in China for the next 3 h. The forecast was derived from 13 physical factors related to precipitation in the surrounding area. Although the forecast results can meet the needs of nowcasting, the spatial resolution is too low to achieve large-area rainfall nowcasting.
Existing rainfall nowcasting methods based on machine learning and deep learning struggle to achieve rainfall nowcasting with high spatiotemporal resolution. To achieve this, we take the grid data interpolated from the rainfall measurements of dense automatic weather stations as the forecast object. Since the original data are directly measured by rain gauges, the grid data reflect the real-world rainfall distribution as closely as possible. In the grid data, the forecasting time resolution and spatial resolution are 30 min and 5 km, respectively, which meets the high spatiotemporal resolution requirement. As the forecast target is sequential grid data, we formulate this forecasting problem as a spatiotemporal prediction problem, which predicts future development from past spatiotemporal features [22]. Rainfall depends on both rainfall intensity and rainfall duration, so its evolution is more complicated and diverse. Therefore, we use both rainfall data and radar echo data as input to gain more meteorological spatiotemporal features to support this complex forecast.
Based on experiments with multiple models, we propose a dual-input dual-encoder RNN, namely Rainfall Nowcasting Network (RN-Net). RN-Net extracts spatiotemporal features of the rainfall and the radar echo data via dual encoders. Then, these features are combined by a fusion module. Finally, the fused features are fed into a predictor to make forecasts. In order to reasonably evaluate the effectiveness of rainfall nowcasting, we propose a new performance metric that combines multiple metrics in the field of meteorological and spatiotemporal prediction. In the experiment, 10 months of radar echo data and rainfall data in the southeastern coastal area of China were used as deep learning samples and compared with rainfall nowcasting of the Weather Research and Forecasting (WRF) [23] model. The results are expected to provide convenience for daily activities such as travel and irrigation, and provide a basis for early warning of natural disasters such as floods and mudslides.
The rest of this paper is organized as follows: Section 2 introduces the preparatory work. Section 3 details the proposed RN-Net framework. Experimental results are demonstrated in Section 4. Finally, we conclude this paper and put forward some suggestions for future work.

2. Preliminary

2.1. Data Details

The radar echo data used in this article are Doppler radar mosaic data. Radar echo data contain various kinds of echo noise, such as non-meteorological and interference echoes, which can mislead prediction. Therefore, we construct a singular point filter and a bilateral filter to filter the value domain and the spatial domain, which effectively eliminates pulsation and clutter while retaining the echo characteristics. In addition, a high-pass filter is constructed to remove data below 15 dBZ, so that only data related to precipitation are retained. Since the data will be saved in an image format, we convert the radar echo data into pixel data.
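As an illustration of the value-domain cleaning step, the NumPy sketch below shows a hypothetical version of the 15 dBZ high-pass cut and the conversion of reflectivity to pixel data. The singular point and bilateral filters are omitted, and the 70 dBZ ceiling used for scaling is an assumed constant, not a value taken from the paper.

```python
import numpy as np

def preprocess_radar(dbz, min_dbz=15.0, max_dbz=70.0):
    """Hypothetical sketch of the radar cleaning pipeline: suppress echoes
    below 15 dBZ (high-pass) and rescale the remaining reflectivity
    to 8-bit pixel values for image storage."""
    dbz = np.asarray(dbz, dtype=np.float32)
    dbz = np.where(dbz < min_dbz, 0.0, dbz)        # drop non-precipitation echo
    pixels = np.clip(dbz, 0.0, max_dbz) / max_dbz  # normalize to [0, 1]
    return (pixels * 255).astype(np.uint8)         # save as pixel data

field = np.array([[5.0, 20.0], [40.0, 80.0]])      # toy 2x2 reflectivity field
print(preprocess_radar(field))
```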
The rainfall interpolation data of automatic weather stations are selected as the rainfall data in this article. Automatic weather stations are widely distributed; the distribution of the stations used in this paper is shown in Figure 1a. Rainfall data are usually measured by rain gauges. A rain gauge collects the rainfall over a known collection area and divides the rainfall volume by that area to obtain the rainfall depth. Inspired by the E-OBS dataset [24], we interpolate the rainfall point data onto a uniform grid. We use Inverse Distance Weighting [25] (IDW) to interpolate the rainfall data of 13,655 automatic weather stations in the forecasting area onto a 240 × 240 grid. With such high-density interpolation, the actual rainfall distribution is restored as closely as possible. Figure 1b,c shows the effect of the interpolation. IDW takes the distance between the interpolation point and the sample point as the weight for a weighted average: the closer a sample point is to the interpolation point, the greater its weight. The critical equations are as follows:
$d_{ij} = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2}$
$\lambda_{ij} = \dfrac{1/d_{ij}}{\sum_{i=1}^{n} 1/d_{ij}}$
$Z(x_j, y_j) = \sum_{i=1}^{n} \lambda_{ij} Z(x_i, y_i)$
where n is the number of selected sampling points closest to the interpolation point, which is set to 16 in the experiment. ( x i , y i ) represents the coordinates of the sample point, and ( x j , y j ) denotes the coordinates of the interpolation point. Z ( · , · ) is the value of this coordinate, and d i j is the distance between the sample point and the interpolation point. λ i j is the weight of the sample point to the interpolation point. Finally, we convert the interpolated rainfall data into pixel data.
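The IDW equations can be sketched directly in NumPy. The function below is an illustrative implementation, not the authors' code; it uses the n = 16 nearest stations as in the experiment, and handles the degenerate case where a query point coincides with a station.

```python
import numpy as np

def idw_interpolate(sample_xy, sample_z, query_xy, n_neighbors=16):
    """Inverse Distance Weighting: for each query point, average the values
    of the n nearest sample points, weighted by 1/distance (the paper's
    lambda_ij), normalized so the weights sum to 1."""
    sample_xy = np.asarray(sample_xy, dtype=float)
    sample_z = np.asarray(sample_z, dtype=float)
    out = []
    for qx, qy in np.asarray(query_xy, dtype=float):
        d = np.hypot(sample_xy[:, 0] - qx, sample_xy[:, 1] - qy)
        idx = np.argsort(d)[:n_neighbors]      # n nearest stations
        d_sel = d[idx]
        if d_sel[0] == 0.0:                    # query coincides with a station
            out.append(sample_z[idx[0]])
            continue
        w = 1.0 / d_sel
        w /= w.sum()                           # normalized weights lambda_ij
        out.append(float(w @ sample_z[idx]))
    return np.array(out)

# Two stations at x=0 (0 mm) and x=2 (2 mm); midpoint interpolates to 1 mm.
print(idw_interpolate([[0, 0], [2, 0]], [0.0, 2.0], [[1, 0], [0, 0]]))
```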
In addition, the WRF model is used to compare the 0–2 h rainfall nowcasting effect of RN-Net. The WRF model [26] is configured with a one-domain nested grid system. The horizontal resolution of the domain is 5 km, with the grid points 240 × 240. The domain has 35 vertical layers, with the model top at 50 hPa. The boundary conditions are updated every 6 h from the 0.25° × 0.25° National Centers for Environmental Prediction (NCEP) Final Operational Model Global Tropospheric Analysis. The main physical parameterization schemes are shown in Table 1. The model is integrated every 6 h, the forecast time is 12 h, and the results are output every 30 min.

2.2. Problem Definition

Our goal is to forecast future automatic weather station rainfall interpolation data from past radar echo data and rainfall data. We formally define this problem as follows: suppose the current moment is $t = 0$. We have access to the radar echo data $[RE_t]_{t=-n}^{0}$ and the recent rainfall data $[RF_t]_{t=-m}^{0}$. Our task is to predict $[\widehat{RF}_t]_{t=1}^{s}$ and make it as close as possible to $[RF_t]_{t=1}^{s}$, the real rainfall data for the coming period. Specifically, our goal is to find a mapping $f$ such that
$\min_{f} \ \mathrm{loss}\left([\widehat{RF}_t]_{t=1}^{s}, [RF_t]_{t=1}^{s}\right) \quad \text{s.t.} \quad [\widehat{RF}_t]_{t=1}^{s} = f\left([RE_t]_{t=-n}^{0}, [RF_t]_{t=-m}^{0}\right)$

3. Method

3.1. Network Structure

In order to achieve high spatiotemporal resolution rainfall nowcasting, our model needs to obtain sufficient meteorological spatiotemporal features to support the forecast. Meanwhile, its RNN unit also needs to have stronger feature extraction and transmission capabilities. The network structure of RN-Net is shown in Figure 2.
Inspired by LightNet [27], RN-Net contains two encoders, a fusion module, and a predictor. The time resolution of the rainfall data from automatic weather stations is 30 min, while that of the radar echo data is 6 min. Because this difference is too large, the two types of data cannot be encoded by the same encoder. RN-Net therefore uses a radar echo encoder and a rainfall encoder to encode the two kinds of data separately and generate spatiotemporal features. A fusion module composed of CNNs fuses the spatiotemporal features of the two data sources. Finally, the fused features are input to the predictor, which outputs the forecast of future rainfall. We detail each component as follows.
Radar Echo Encoder and Rainfall Encoder: The two encoders have the same network structure and parameters. Each encoder has a three-layer structure, and each layer is composed of an RNN layer and a downsample unit. The downsample unit helps the model capture the high-level spatial features of the input data, so as to better extract spatiotemporal features. The input of the first layer is the radar echo data or the rainfall data ($[RE_t]_{t=-n}^{0}$ or $[RF_t]_{t=-m}^{0}$). The hidden features at each time step of this layer are input into the downsample unit of the next layer, and the hidden state at the last time step ($h_1^{RE}$ or $h_1^{RF}$) is used as this layer's output of the encoder. The second and third layers repeat the same process. The final output of the encoder is the hidden state of each layer, expressed as follows:
$(h_3^{RE}, h_2^{RE}, h_1^{RE}) = \mathrm{RadarEchoEncoder}([RE_t]_{t=-n}^{0})$
$(h_3^{RF}, h_2^{RF}, h_1^{RF}) = \mathrm{RainfallEncoder}([RF_t]_{t=-m}^{0})$
Fusion Module: The radar echo data contain rich meteorological features, but due to their various noises they cannot accurately reflect the real-world rainfall distribution. The rainfall data of the automatic weather stations do reflect the actual rainfall distribution. To obtain accurate rainfall nowcasting, the hidden features of the two data sources are combined. The fusion module superimposes the hidden features of the two and then fuses them deeply through a CNN. This is expressed as follows:
$(h_3^{fusion}, h_2^{fusion}, h_1^{fusion}) = \mathrm{Fusion}(h_1^{RF} \oplus h_1^{RE},\ h_2^{RF} \oplus h_2^{RE},\ h_3^{RF} \oplus h_3^{RE})$
where $\oplus$ denotes the superposition of the two hidden features.
Rainfall Predictor: The structure of the rainfall predictor is similar to that of an encoder. It also has a three-layer structure, and each layer is composed of an RNN layer and an upsample unit. The difference is that two CNN layers are added to the output part, which helps generate the forecast data $[\widehat{RF}_t]_{t=1}^{s}$ from spatiotemporal features. When forecasting, the input of the predictor is the fused spatiotemporal hidden features. The third layer unrolls the corresponding fused hidden state over the future period. The hidden state at each time step is then input to the upsample unit to generate the lower-level spatial hidden state, which is fed into the next layer of the predictor. The second and first layers repeat the same process. Finally, the two CNN layers output the rainfall nowcast based on the low-level spatiotemporal features. The formula is as follows:
$[\widehat{RF}_t]_{t=1}^{s} = \mathrm{Predictor}(h_3^{fusion}, h_2^{fusion}, h_1^{fusion})$
TrajGRU is the RNN unit used in RN-Net. It improves on ConvGRU and overcomes the fixed connection structure between memory states found in other convRNNs. For the input data, TrajGRU and ConvGRU both use convolution as the connection structure, which captures the spatial features of the input data. For the memory state, TrajGRU uses a structure-generating network to dynamically generate the optical flow between states as the connection structure. Such a flexible connection structure can more efficiently learn complex motion patterns such as rotation and zooming in spatiotemporal features. The settings (kernel size, channels, and stride) of each component of RN-Net are detailed in Table 2.
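To make the encoder structure concrete, the PyTorch sketch below shows a three-layer encoder in the spirit of the description above: each layer downsamples by a factor of two and runs a recurrent unit over the sequence, and the last hidden state of each layer is returned. A plain ConvGRU cell stands in for TrajGRU (whose flow-generating network is omitted here), and the channel counts and kernel sizes are illustrative assumptions rather than the settings from Table 2.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Stand-in recurrent unit (the paper uses TrajGRU; plain ConvGRU here)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)
        self.hid_ch = hid_ch

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

class Encoder(nn.Module):
    """One of RN-Net's two encoders: per layer, a stride-2 downsample
    followed by an RNN layer; outputs the last hidden state of each layer."""
    def __init__(self, in_ch=1, chans=(8, 16, 16)):   # channel counts assumed
        super().__init__()
        self.downs, self.cells = nn.ModuleList(), nn.ModuleList()
        prev = in_ch
        for c in chans:
            self.downs.append(nn.Conv2d(prev, c, 3, stride=2, padding=1))
            self.cells.append(ConvGRUCell(c, c))
            prev = c

    def forward(self, seq):                           # seq: (T, B, C, H, W)
        states = []
        for down, cell in zip(self.downs, self.cells):
            seq = torch.stack([down(x) for x in seq])
            h = torch.zeros(seq.shape[1], cell.hid_ch, *seq.shape[3:])
            outs = []
            for x in seq:
                h = cell(x, h)
                outs.append(h)
            seq = torch.stack(outs)                   # feed next layer
            states.append(h)                          # last hidden state
        return states

states = Encoder()(torch.zeros(2, 1, 1, 16, 16))      # 2 frames, 16x16 grid
print([tuple(s.shape) for s in states])
```

The fusion module would then concatenate the matching per-layer states of the two encoders and pass them through CNNs, and the predictor would mirror this structure with upsampling.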
In addition to RN-Net, we also try two other dual-input dual-encoder methods. When ConvLSTM is used as the RNN unit, the radar echo/rainfall encoders and the predictor must transmit cell and hidden states simultaneously, and the fusion module must fuse the two kinds of features separately. When PredRNN is used as the backbone network, the encoders and the predictor must transmit the cell state, hidden state, and spatiotemporal memory at the same time, with the spatiotemporal memory propagated in a zigzag through the network, and the fusion module must fuse the three kinds of features separately.

3.2. Implementation Details

The proposed neural networks are implemented in PyTorch [28] and trained end-to-end. All network parameters are initialized with a normal distribution. All models are optimized with an L2 loss using the Adam optimizer [29] with a starting learning rate of $10^{-4}$. Training is stopped after 40,000 iterations, and the batch size of each iteration is set to 4. The rainfall data and radar echo data, normalized to the range [0, 1], are used as network input.
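The training configuration above can be sketched as follows. The model here is a trivial stand-in for RN-Net (any `nn.Module` with the same input/output interface would do), and the normalization ceiling of 30 mm is an assumed constant; the optimizer, learning rate, L2 loss, and batch size of 4 follow the paper.

```python
import torch

# Hypothetical stand-in for RN-Net, just to make the loop runnable.
model = torch.nn.Conv2d(1, 1, 3, padding=1)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)   # lr = 10^-4 as in the paper
loss_fn = torch.nn.MSELoss()                          # L2 loss

def normalize(x, max_val):
    """Scale raw data into [0, 1] before feeding the network."""
    return torch.clamp(x / max_val, 0.0, 1.0)

def train_step(inputs, targets):
    opt.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    opt.step()
    return loss.item()

x = normalize(torch.rand(4, 1, 32, 32) * 30.0, 30.0)  # batch size 4
y = torch.rand(4, 1, 32, 32)
print(train_step(x, y))
```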

4. Experiment

In this section, we evaluate the proposed models on the Southeastern China dataset. In Section 4.1, we introduce the details of the dataset. In Section 4.2, we introduce a new rainfall nowcasting performance metric, which combines multiple evaluation metrics from the meteorological field and the spatiotemporal prediction field. Section 4.3 compares RN-Net with other methods, including eight deep learning methods and the WRF model. We visualize two representative examples for further analysis in Section 4.4. Our experimental platform uses Ubuntu 16.04, 32 GB of memory, and two Nvidia RTX 2080 GPUs.

4.1. Dataset

Radar Echo Data: We use the data from the southeast coast of China. The data are stored in a 240 × 240 grid, and its spatial resolution is 5 km. The time resolution is 6 min, and the time range includes May to September in 2018 and 2019.
Rainfall Data: The spatial range and spatial resolution of the rainfall data from automatic weather stations are the same as those of the radar echo data. The original time resolution is 10 min; because the 10 min accumulations are small and spatially sparse after interpolation, the data are aggregated to a 30 min resolution.
The dataset contains 207 days for training, 29 days for validation, and 57 days for testing. The data on some days are incomplete due to equipment failure or other reasons. Our task is defined as nowcasting the rainfall of the next 2 h based on the rainfall data of the past 2 h and the radar echo data of the past 1 h.

4.2. Performance Metric

In our method, high spatiotemporal resolution rainfall nowcasting is formulated and solved as a spatiotemporal prediction problem. The forecast result is four frames of 0.5 h cumulative rainfall interpolation data covering the next 2 h, which are compared with the actual automatic weather station rainfall interpolation data to evaluate forecast quality. To make a reasonable evaluation, we define a new performance metric by combining evaluation metrics from the fields of meteorology and spatiotemporal prediction.
Commonly used metrics for rainfall nowcasting in the meteorological field include the threat score (TS), probability of detection (POD), and false alarm rate (FAR). In the experiment, we use thresholds of 0.25 mm, 1 mm, and 2.5 mm to calculate these metrics. The threshold settings refer to the rainfall levels, and the correspondence is shown in Table 3. To show the effect of rainfall nowcasting over the next 2 h, we evaluate three time periods: 0.5 h, 1 h, and 2 h. The forecast rainfall and actual rainfall in each period are accumulated and used as evaluation data. In the field of spatiotemporal prediction, forecast results are evaluated frame by frame: the evaluation result over a period is the average of the evaluation results of each frame in that period. Applying this idea to our evaluation, the multi-frame evaluation results within 1 h and 2 h are averaged, including the Critical Success Index (CSI), POD, and FAR.
In addition, since the data are all two-dimensional grid data and are saved in the image format, we introduce two metrics, MAE and MSE, which respectively calculate the L1 distance and L2 distance between the truth data and the forecast data.
CSI and TS have the same calculation formula. TS evaluates accumulated rainfall in the period, and CSI is the average value of multiple frame evaluations in the period. The following are the calculation equations for these six evaluation metrics:
$MSE = \sum_{x=1}^{w} \sum_{y=1}^{h} (\widehat{RF}_{xy} - RF_{xy})^2$
$MAE = \sum_{x=1}^{w} \sum_{y=1}^{h} |\widehat{RF}_{xy} - RF_{xy}|$
$CSI/TS = N_A / (N_A + N_B + N_C)$
$POD = N_A / (N_A + N_C)$
$FAR = N_B / (N_A + N_B)$
Here, $w$ and $h$ are the width and height of the rainfall data, respectively. $\widehat{RF}_{xy}$ and $RF_{xy}$ are the forecast rainfall and true rainfall at coordinates $(x, y)$. $N_A$, $N_B$, $N_C$, and $N_D$ represent the numbers of true-positive, false-positive, false-negative, and true-negative grid points, respectively.
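The categorical metrics can be sketched directly from their definitions: threshold both fields into rain/no-rain maps, count the contingency-table cells, and form the ratios. The implementation below is illustrative, not the authors' evaluation code.

```python
import numpy as np

def contingency(pred, truth, threshold):
    """Count hits (N_A), false alarms (N_B), and misses (N_C)
    at a given rain/no-rain threshold."""
    p = np.asarray(pred) >= threshold
    t = np.asarray(truth) >= threshold
    return np.sum(p & t), np.sum(p & ~t), np.sum(~p & t)

def csi(pred, truth, threshold):           # CSI and TS share this formula
    na, nb, nc = contingency(pred, truth, threshold)
    return na / (na + nb + nc)

def pod(pred, truth, threshold):
    na, _, nc = contingency(pred, truth, threshold)
    return na / (na + nc)

def far(pred, truth, threshold):
    na, nb, _ = contingency(pred, truth, threshold)
    return nb / (na + nb)

# Toy example: one hit, one false alarm, one miss, one correct rejection.
pred  = np.array([0.5, 0.5, 0.0, 0.0])
truth = np.array([0.5, 0.0, 0.5, 0.0])
print(csi(pred, truth, 0.25), pod(pred, truth, 0.25), far(pred, truth, 0.25))
```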
Finally, the performance metric includes five evaluation results: the cumulative rainfall within 0.5 h, 1 h, and 2 h, and the averages over the first two frames within 1 h and the four frames within 2 h. Such a performance metric reflects not only the method's forecasting skill for rainfall at different time resolutions, but also its spatiotemporal prediction capability.

4.3. Experimental Results and Analysis

We try three input methods for the deep learning models: rainfall data as a single input, radar echo data as a single input, and rainfall data and radar echo data as a dual input. The single-input method with rainfall data is similar to the radar echo extrapolation method, forecasting future development from past data. The single-input method with radar echo data is used to test whether radar echo data are effective for rainfall nowcasting. The dual-input models can simultaneously obtain the meteorological spatiotemporal features of both radar echo data and rainfall data. Three deep learning models are tried for each input method: ConvLSTM, TrajGRU, and PredRNN. The experiment thus contains nine deep learning methods, and the best is the dual-input dual-encoder RNN using TrajGRU, namely RN-Net.
In addition, to compare with a traditional method, the WRF model is run to obtain rainfall nowcasts. Its spatial scope is the same as the dataset, and its time scope covers the testing set of the dataset. The WRF model is integrated every 6 h and forecasts the next 12 h with a rainfall time resolution of 0.5 h. Our deep learning methods forecast the rainfall for the next 2 h every 0.5 h. To compare the two types of rainfall nowcasting methods, we designed a special comparison method, shown in Figure 3. First, we extract 2 h of data every 0.5 h from the 12 h WRF model rainfall forecast, giving a total of 21 sets of data. Then, we compare each set of data with the corresponding real rainfall. However, the WRF model has a spin-up period whose duration cannot be determined, and forecasts during this period are usually discarded. To avoid the spin-up period, the best evaluation result among the 21 sets of data is used as the WRF model's evaluation result within the 12 h. Meanwhile, we also compare our deep learning methods' rainfall forecasts for these 21 time periods with the corresponding real rainfall, and the average of the 21 evaluation results is used as the deep learning methods' evaluation result within the 12 h. We compare all 12 h WRF model forecasts (integrated every 6 h) in the testing set with the deep learning methods' forecasts through the above procedure. This comparison method not only solves the problem of the different forecasting frequencies of the two kinds of methods, but also avoids the spin-up period of the WRF model.
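The window extraction in this comparison scheme is simple to verify: sliding a 2 h window through a 12 h forecast in 0.5 h steps yields exactly 21 windows. The helper below is a hypothetical illustration of that bookkeeping.

```python
def forecast_windows(total_hours=12.0, window_hours=2.0, step_hours=0.5):
    """Start times (in hours) of the overlapping 2 h evaluation windows
    extracted from one 12 h WRF run every 0.5 h."""
    starts = []
    t = 0.0
    while t + window_hours <= total_hours + 1e-9:  # tolerance for float steps
        starts.append(t)
        t += step_hours
    return starts

windows = forecast_windows()
print(len(windows))  # 21 windows, matching the paper's 21 sets of data
```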
The evaluation results of cumulative rainfall nowcasting within 0.5 h, 1 h, and 2 h are shown in Table 4, Table 5 and Table 6, respectively. The averages of the multi-frame 0.5 h rainfall forecast evaluation results within 1 h and 2 h are shown in Table 7 and Table 8, respectively. When comparing the performance of rainfall nowcasting methods, TS and CSI are the main basis. RN-Net has the highest TS and CSI among all rainfall nowcasting methods in Table 4, Table 5, Table 6, Table 7 and Table 8. Next, we compare the evaluation results of the different methods from the following three aspects.
Comparison between Deep Learning Methods and the WRF Model: As shown in Table 4, Table 5, Table 6, Table 7 and Table 8, the deep learning methods outperform the WRF model. Compared with WRF model rainfall nowcasting at the 0.25 mm threshold, RN-Net's TS within 0.5 h, 1 h, and 2 h is increased by nearly four times, three times, and three times, respectively, and RN-Net's CSI within 1 h and 2 h is increased by nearly four times and three times, respectively. The rainfall nowcasting of the WRF model is not effective, and its FAR is extremely high. This result has two causes. First, the WRF model does not use the latest observations but depends entirely on its simulations, which usually have deviations in the time domain and geographical area. Second, such methods are manually designed by meteorological experts and can hardly benefit from large amounts of historical data. In addition, the WRF model performs better at low time resolution than at high time resolution, while the deep learning methods show the opposite behavior, because they use high time resolution data as training data.
Comparison of Different Input Methods: The rainfall extrapolation method, which is similar to the radar echo extrapolation method, performs poorly, even worse than the methods that use radar echo data as input. This is because the spatiotemporal features of the rainfall data alone are too sparse to support the forecasts. The dual-input methods are better than the single-input methods.
Comparison of Different Network Components: PredRNN, which performs best among the three radar echo extrapolation models, performs worst in rainfall nowcasting. This is because the scale and quality of the dataset are not sufficient to support the training of complex deep learning models, so the methods using PredRNN are more prone to overfitting during training. Moreover, although the methods using TrajGRU are superior to those using ConvLSTM on TS/CSI and POD, they are slightly inferior on MSE, MAE, and FAR. This stems from TrajGRU's mechanism for generating the state connection structure, which needs further improvement.

4.4. Visualization Results

Figure 4 visualizes two representative cases for RN-Net, RE-RF-ConvLSTM, RE-RF-PredRNN, RE-TrajGRU, RF-TrajGRU, and the WRF model. From this figure, we observe that all of the deep learning methods except RF-TrajGRU make accurate forecasts in the first hour, which is consistent with the evaluation metrics. The rainfall data, which are the only input of RF-TrajGRU, do not provide enough meteorological spatiotemporal features to support rainfall nowcasting; even in the first half-hour there are some deviations. Moreover, the forecast results of the deep learning methods gradually fade after one hour. Among the three dual-input methods, this fading problem is most serious for the method using ConvLSTM. It is caused by the structure of the RNN [30] and the distance loss function [31]. The WRF model does not show this behavior; instead, its forecasts often contain large-scale false alarms, consistent with its performance on the evaluation metrics.

5. Conclusions

In this paper, we propose a model (namely RN-Net) for rainfall nowcasting. RN-Net is a deep neural network with dual-input and dual-encoder. RN-Net provides a more sufficient basis for forecasting by fusing the spatiotemporal features of rainfall data and radar data. On the one hand, it overcomes the drawback of conventional forecasting methods that cannot mine knowledge from historical data. On the other hand, it provides high spatiotemporal resolution forecasting that other deep learning methods cannot achieve. We conduct experiments on the Southeastern China dataset. In the experiment, RN-Net is much better than the WRF model. However, compared with the accuracy of precipitation intensity nowcasting [14,15,17,18,20], RN-Net’s rainfall nowcasting still has room for improvement. Moreover, the generalization ability of RN-Net may be poor. This is because rainfall is affected by topography, climate, season, and other factors, and our dataset only contains summer and autumn data of Southeast China.
To further improve the accuracy and generalization of rainfall nowcasting, we will extend our current work in three directions. First, we will increase the scale of the dataset and expand the area it covers. Second, we will add more input data to provide more meteorological spatiotemporal features for forecasting. Finally, we plan to explore other novel deep learning networks in future work.

Author Contributions

Conceptualization, X.W., J.G. and F.Z.; methodology, F.Z.; software, F.Z.; validation, X.W., J.G. and F.Z.; formal analysis, F.Z.; investigation, F.Z.; resources, J.G. and F.Z.; data curation, F.Z.; writing—original draft preparation, F.Z.; writing—review and editing, F.Z., M.W. and L.G.; visualization, F.Z.; supervision, F.Z.; project administration, F.Z.; funding acquisition, J.G. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Project No. 41975066).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. (a) Distribution of the automatic weather stations; (b) automatic weather station rainfall data from 9:00 a.m.–9:30 a.m. on 23 May 2017; (c) the same data after interpolation.
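The station-to-grid interpolation shown in Figure 1c can be sketched with inverse-distance weighting (IDW), a common choice for interpolating scattered rain-gauge observations onto a regular grid. The sketch below is illustrative only: the weighting power and grid spacing are assumptions, not the paper's exact configuration.

```python
import numpy as np

def idw_interpolate(xy_stations, values, grid_x, grid_y, power=2.0, eps=1e-12):
    """Interpolate scattered station rainfall onto a regular grid
    with inverse-distance weighting (IDW)."""
    gx, gy = np.meshgrid(grid_x, grid_y)              # grid cell coordinates
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)  # (G, 2) grid points
    # Pairwise distances between every grid point and every station: (G, S)
    d = np.linalg.norm(pts[:, None, :] - xy_stations[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)                      # inverse-distance weights
    grid = (w @ values) / w.sum(axis=1)               # weighted mean per cell
    return grid.reshape(gy.shape)
```

Each grid cell receives a weighted average of all station values, with nearby stations dominating; `eps` keeps the weights finite when a grid point coincides with a station.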
Figure 2. RN-Net consists of four parts: Radar Echo Encoder, Rainfall Encoder, Fusion Module, and Rainfall Predictor. The Radar Echo Encoder and the Rainfall Encoder encode the spatiotemporal features of the radar echo data $[RE_t]_{t=-n}^{0}$ and the recent rainfall data $[RF_t]_{t=-m}^{0}$, respectively. The Fusion Module then combines the radar echo and rainfall features to provide richer spatiotemporal feature support for nowcasting. Finally, the Rainfall Predictor receives the fused features and produces the forecasts $[\hat{RF}_t]_{t=1}^{s}$.
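At the tensor level, the Fusion Module's role can be illustrated as channel-wise concatenation of the two encoders' feature maps followed by a learned channel mixing. This is only a shape-level sketch with hypothetical feature maps: the real Fconv layers are 3 × 3 convolutions in PyTorch, whereas a per-pixel 1 × 1 mixing is used here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coarsest-level encoder outputs: (channels, height, width)
radar_feat = rng.standard_normal((192, 30, 30))
rain_feat = rng.standard_normal((192, 30, 30))

# Channel-wise concatenation, matching Fconv3's 384 input channels
fused_in = np.concatenate([radar_feat, rain_feat], axis=0)   # (384, 30, 30)

# A 1x1 convolution is a per-pixel linear map over channels (384 -> 192)
w = rng.standard_normal((192, 384)) * 0.05
fused = np.tensordot(w, fused_in, axes=([1], [0]))           # (192, 30, 30)
```

The fused tensor has the same spatial size as each encoder's output and the channel count expected by the Rainfall Predictor at that level.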
Figure 3. Schematic diagram of comparison method between our deep learning methods and the WRF model.
Figure 4. Two representative rainfall nowcasting cases. In (a,b), from left to right: the actual rainfall of the past two hours $[RF_t]_{t=-3}^{0}$, the actual rainfall of the next two hours $[RF_t]_{t=1}^{4}$, and the nowcasts $[\hat{RF}_t]_{t=1}^{4}$ made by RN-Net, RE-RF-ConvLSTM, RE-RF-PredRNN, RE-TrajGRU, RF-TrajGRU, and the WRF model. The value in each forecast frame is that frame's CSI at the 0.25 mm threshold.
Table 1. Physical parameterization schemes.
Name | Scheme
Microphysics | Thompson scheme
Cumulus parameterization | Kain–Fritsch (new Eta) scheme
Planetary boundary layer | Mellor–Yamada–Janjic TKE scheme
Surface layer | Revised MM5 Monin–Obukhov scheme
Longwave radiation | Rapid Radiative Transfer Model for GCMs
Shortwave radiation | Rapid Radiative Transfer Model for GCMs
Table 2. Various settings in RN-Net, including channels, kernel, and stride.
Module | Name | CH I/O | Kernel | Stride
Radar Echo Encoder or Rainfall Encoder | Econv1 | 1/8 | 5 × 5 | 3
 | ETrajGRU1 | 8/64 | 3 × 3 | 1
 | Econv2 | 64/192 | 4 × 4 | 2
 | ETrajGRU2 | 192/192 | 3 × 3 | 1
 | Econv3 | 192/192 | 3 × 3 | 3
 | ETrajGRU3 | 192/192 | 3 × 3 | 1
Fusion Module | Fconv1 | 128/64 | 3 × 3 | 1
 | Fconv2 | 384/192 | 3 × 3 | 1
 | Fconv3 | 384/192 | 3 × 3 | 1
Rainfall Predictor | PTrajGRU3 | 192/192 | 3 × 3 | 1
 | Pdeconv3 | 192/192 | 3 × 3 | 3
 | PTrajGRU2 | 192/192 | 3 × 3 | 1
 | Pdeconv2 | 192/64 | 4 × 4 | 2
 | PTrajGRU1 | 64/64 | 3 × 3 | 1
 | Pdeconv1 | 64/8 | 5 × 5 | 3
 | Oconv1 | 8/8 | 3 × 3 | 1
 | Oconv2 | 8/1 | 1 × 1 | 1
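As a quick consistency check on Table 2, each module's channel chain can be verified programmatically, and the product of the encoder's conv strides gives its total spatial downsampling factor; the layer tuples below are transcribed from the table.

```python
# Encoder layer settings from Table 2: (name, in_ch, out_ch, kernel, stride)
ENCODER = [
    ("Econv1",    1,   8,   5, 3),
    ("ETrajGRU1", 8,   64,  3, 1),
    ("Econv2",    64,  192, 4, 2),
    ("ETrajGRU2", 192, 192, 3, 1),
    ("Econv3",    192, 192, 3, 3),
    ("ETrajGRU3", 192, 192, 3, 1),
]

def check_channel_chain(layers):
    """Verify each layer's input channels match the previous layer's output."""
    for (_, _, prev_out, _, _), (name, cur_in, _, _, _) in zip(layers, layers[1:]):
        if prev_out != cur_in:
            raise ValueError(f"channel mismatch at {name}")
    return True

def downsample_factor(layers):
    """Total spatial downsampling = product of the layer strides."""
    f = 1
    for _, _, _, _, stride in layers:
        f *= stride
    return f
```

The encoder's overall downsampling is 3 × 2 × 3 = 18; the Rainfall Predictor's deconvolutions (strides 3, 2, 3) mirror this to restore the original grid resolution.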
Table 3. Correspondence between threshold and rainfall level.
Half-Hour Rainfall r (mm) | Rainfall Level
r < 0.25 | No or hardly noticeable
0.25 ≤ r < 1 | Light
1 ≤ r < 2.5 | Light to moderate
2.5 ≤ r | Moderate or greater
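The mapping in Table 3 is straightforward to encode; a small helper (the function name is ours, not from the paper):

```python
def rainfall_level(r_mm: float) -> str:
    """Map half-hour accumulated rainfall (mm) to the level used in Table 3."""
    if r_mm < 0.25:
        return "No or hardly noticeable"
    if r_mm < 1.0:
        return "Light"
    if r_mm < 2.5:
        return "Light to moderate"
    return "Moderate or greater"
```

The same thresholds (0.25, 1, and 2.5 mm) are the ones at which the skill scores in Tables 4–8 are evaluated.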
Table 4. Evaluation results of 0.5 h cumulative rainfall nowcasting. '↑' means that a higher score is better, while '↓' means that a lower score is better. 'r ≥ γ' denotes the skill score at the γ mm rainfall threshold in 0.5 h. RE, RF, and RE-RF indicate that the method uses radar echo data, rainfall data, or both as input; PredRNN, ConvLSTM, and TrajGRU are the RNN units or backbone networks used in each method.
Method | MSE↓ | MAE↓ | r ≥ 0.25 mm (TS↑ / POD↑ / FAR↓) | r ≥ 1 mm (TS↑ / POD↑ / FAR↓) | r ≥ 2.5 mm (TS↑ / POD↑ / FAR↓)
WRF | 1.077 | 26.602 | 0.131 / 0.359 / 0.763 | 0.094 / 0.278 / 0.830 | 0.066 / 0.190 / 0.868
RF-PredRNN | 0.965 | 19.793 | 0.404 / 0.495 / 0.311 | 0.303 / 0.348 / 0.298 | 0.216 / 0.243 / 0.336
RF-ConvLSTM | 0.890 | 18.889 | 0.425 / 0.506 / 0.272 | 0.348 / 0.409 / 0.299 | 0.268 / 0.310 / 0.337
RF-TrajGRU | 0.929 | 19.024 | 0.427 / 0.516 / 0.287 | 0.326 / 0.378 / 0.293 | 0.235 / 0.265 / 0.324
RE-PredRNN | 1.106 | 22.750 | 0.380 / 0.380 / 0.344 | 0.344 / 0.361 / 0.389 | 0.189 / 0.228 / 0.475
RE-ConvLSTM | 0.932 | 20.550 | 0.428 / 0.501 / 0.254 | 0.354 / 0.412 / 0.285 | 0.261 / 0.301 / 0.334
RE-TrajGRU | 0.872 | 20.514 | 0.455 / 0.546 / 0.268 | 0.400 / 0.492 / 0.318 | 0.315 / 0.392 / 0.383
RE-RF-PredRNN | 0.841 | 18.517 | 0.474 / 0.585 / 0.285 | 0.393 / 0.478 / 0.310 | 0.296 / 0.356 / 0.360
RE-RF-ConvLSTM | 0.674 | 16.531 | 0.507 / 0.577 / 0.193 | 0.452 / 0.519 / 0.221 | 0.323 / 0.446 / 0.266
RN-Net | 0.698 | 16.484 | 0.523 / 0.611 / 0.214 | 0.464 / 0.551 / 0.252 | 0.371 / 0.433 / 0.278
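The TS (threat score, also reported as CSI in Tables 7 and 8), POD, and FAR in these tables follow their standard contingency-table definitions over thresholded rainfall grids. A minimal NumPy sketch (no guard against empty categories, which would divide by zero):

```python
import numpy as np

def skill_scores(pred_mm, obs_mm, threshold):
    """Threat score (TS/CSI), probability of detection (POD), and
    false-alarm ratio (FAR) at a rainfall threshold (in mm)."""
    p = pred_mm >= threshold          # forecast rain event per grid cell
    o = obs_mm >= threshold           # observed rain event per grid cell
    hits = np.sum(p & o)
    misses = np.sum(~p & o)
    false_alarms = np.sum(p & ~o)
    ts = hits / (hits + misses + false_alarms)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return ts, pod, far
```

Higher TS and POD and lower FAR indicate better nowcasts, matching the '↑'/'↓' markers in the table headers.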
Table 5. Evaluation results of 1 h cumulative rainfall nowcasting. 'r ≥ γ' denotes the skill score at the γ mm rainfall threshold in 1 h.
Method | MSE↓ | MAE↓ | r ≥ 0.25 mm (TS↑ / POD↑ / FAR↓) | r ≥ 1 mm (TS↑ / POD↑ / FAR↓) | r ≥ 2.5 mm (TS↑ / POD↑ / FAR↓)
WRF | 3.635 | 53.311 | 0.153 / 0.419 / 0.736 | 0.129 / 0.363 / 0.759 | 0.098 / 0.294 / 0.818
RF-PredRNN | 3.104 | 40.523 | 0.395 / 0.454 / 0.245 | 0.327 / 0.295 / 0.280 | 0.249 / 0.278 / 0.261
RF-ConvLSTM | 2.963 | 39.124 | 0.424 / 0.491 / 0.244 | 0.335 / 0.374 / 0.237 | 0.274 / 0.304 / 0.262
RF-TrajGRU | 3.066 | 39.811 | 0.428 / 0.503 / 0.255 | 0.331 / 0.380 / 0.276 | 0.258 / 0.291 / 0.302
RE-PredRNN | 3.515 | 46.225 | 0.352 / 0.411 / 0.287 | 0.325 / 0.398 / 0.358 | 0.247 / 0.297 / 0.407
RE-ConvLSTM | 3.083 | 42.151 | 0.365 / 0.395 / 0.170 | 0.362 / 0.407 / 0.235 | 0.235 / 0.320 / 0.261
RE-TrajGRU | 2.818 | 41.777 | 0.423 / 0.508 / 0.283 | 0.411 / 0.486 / 0.272 | 0.356 / 0.430 / 0.324
RE-RF-PredRNN | 2.713 | 38.258 | 0.454 / 0.525 / 0.227 | 0.411 / 0.494 / 0.290 | 0.340 / 0.407 / 0.325
RE-RF-ConvLSTM | 2.260 | 34.573 | 0.465 / 0.508 / 0.154 | 0.442 / 0.489 / 0.177 | 0.392 / 0.439 / 0.214
RN-Net | 2.358 | 34.959 | 0.503 / 0.585 / 0.217 | 0.461 / 0.533 / 0.226 | 0.399 / 0.467 / 0.266
Table 6. Evaluation results of 2 h cumulative rainfall nowcasting. 'r ≥ γ' denotes the skill score at the γ mm rainfall threshold in 2 h.
Method | MSE↓ | MAE↓ | r ≥ 0.25 mm (TS↑ / POD↑ / FAR↓) | r ≥ 1 mm (TS↑ / POD↑ / FAR↓) | r ≥ 2.5 mm (TS↑ / POD↑ / FAR↓)
WRF | 12.467 | 130.027 | 0.168 / 0.466 / 0.742 | 0.160 / 0.422 / 0.758 | 0.132 / 0.384 / 0.780
RF-PredRNN | 10.196 | 87.180 | 0.341 / 0.380 / 0.230 | 0.295 / 0.333 / 0.278 | 0.235 / 0.261 / 0.295
RF-ConvLSTM | 10.018 | 84.924 | 0.352 / 0.391 / 0.217 | 0.277 / 0.297 / 0.194 | 0.227 / 0.241 / 0.202
RF-TrajGRU | 10.262 | 87.172 | 0.360 / 0.409 / 0.247 | 0.285 / 0.319 / 0.275 | 0.228 / 0.254 / 0.310
RE-PredRNN | 11.308 | 97.919 | 0.315 / 0.368 / 0.313 | 0.297 / 0.361 / 0.373 | 0.244 / 0.293 / 0.407
RE-ConvLSTM | 10.635 | 91.343 | 0.293 / 0.309 / 0.148 | 0.289 / 0.310 / 0.193 | 0.239 / 0.256 / 0.219
RE-TrajGRU | 9.655 | 90.043 | 0.378 / 0.441 / 0.274 | 0.359 / 0.413 / 0.266 | 0.318 / 0.372 / 0.313
RE-RF-PredRNN | 9.167 | 83.913 | 0.399 / 0.461 / 0.250 | 0.368 / 0.440 / 0.307 | 0.314 / 0.373 / 0.337
RE-RF-ConvLSTM | 8.282 | 77.884 | 0.393 / 0.421 / 0.144 | 0.368 / 0.394 / 0.154 | 0.325 / 0.351 / 0.183
RN-Net | 8.465 | 79.188 | 0.435 / 0.495 / 0.218 | 0.396 / 0.453 / 0.242 | 0.350 / 0.406 / 0.283
Table 7. Average evaluation results of the two frames of rainfall nowcasting within 1 h. 'r ≥ γ' denotes the skill score at the γ mm rainfall threshold in 0.5 h.
Method | MSE↓ | MAE↓ | r ≥ 0.25 mm (CSI↑ / POD↑ / FAR↓) | r ≥ 1 mm (CSI↑ / POD↑ / FAR↓) | r ≥ 2.5 mm (CSI↑ / POD↑ / FAR↓)
WRF | 1.223 | 27.484 | 0.129 / 0.355 / 0.784 | 0.092 / 0.285 / 0.846 | 0.065 / 0.190 / 0.883
RF-PredRNN | 1.091 | 21.201 | 0.349 / 0.424 / 0.339 | 0.244 / 0.280 / 0.349 | 0.163 / 0.183 / 0.405
RF-ConvLSTM | 0.890 | 20.424 | 0.347 / 0.404 / 0.276 | 0.264 / 0.303 / 0.311 | 0.189 / 0.214 / 0.365
RF-TrajGRU | 1.076 | 20.785 | 0.354 / 0.425 / 0.322 | 0.257 / 0.297 / 0.353 | 0.176 / 0.198 / 0.405
RE-PredRNN | 1.217 | 24.064 | 0.337 / 0.428 / 0.390 | 0.241 / 0.298 / 0.447 | 0.144 / 0.173 / 0.540
RE-ConvLSTM | 1.086 | 21.913 | 0.371 / 0.431 / 0.273 | 0.281 / 0.322 / 0.309 | 0.189 / 0.214 / 0.371
RE-TrajGRU | 1.013 | 22.023 | 0.407 / 0.490 / 0.295 | 0.342 / 0.423 / 0.361 | 0.257 / 0.318 / 0.435
RE-RF-PredRNN | 0.986 | 20.351 | 0.416 / 0.522 / 0.331 | 0.326 / 0.401 / 0.372 | 0.232 / 0.280 / 0.433
RE-RF-ConvLSTM | 0.840 | 18.321 | 0.446 / 0.508 / 0.219 | 0.377 / 0.433 / 0.259 | 0.303 / 0.350 / 0.308
RN-Net | 0.867 | 18.599 | 0.456 / 0.539 / 0.254 | 0.385 / 0.462 / 0.312 | 0.289 / 0.340 / 0.356
Table 8. Average evaluation results of the four frames of rainfall nowcasting within 2 h. 'r ≥ γ' denotes the skill score at the γ mm rainfall threshold in 0.5 h.
Method | MSE↓ | MAE↓ | r ≥ 0.25 mm (CSI↑ / POD↑ / FAR↓) | r ≥ 1 mm (CSI↑ / POD↑ / FAR↓) | r ≥ 2.5 mm (CSI↑ / POD↑ / FAR↓)
WRF | 1.514 | 35.190 | 0.124 / 0.357 / 0.809 | 0.084 / 0.296 / 0.873 | 0.062 / 0.190 / 0.898
RF-PredRNN | 1.270 | 23.446 | 0.273 / 0.331 / 0.398 | 0.174 / 0.199 / 0.441 | 0.107 / 0.119 / 0.519
RF-ConvLSTM | 1.246 | 22.634 | 0.249 / 0.282 / 0.286 | 0.175 / 0.197 / 0.343 | 0.113 / 0.126 / 0.447
RF-TrajGRU | 1.271 | 23.345 | 0.263 / 0.313 / 0.367 | 0.180 / 0.208 / 0.459 | 0.115 / 0.129 / 0.533
RE-PredRNN | 1.384 | 26.327 | 0.268 / 0.347 / 0.475 | 0.171 / 0.211 / 0.549 | 0.092 / 0.109 / 0.650
RE-ConvLSTM | 1.303 | 24.053 | 0.269 / 0.307 / 0.302 | 0.180 / 0.202 / 0.348 | 0.109 / 0.122 / 0.431
RE-TrajGRU | 1.239 | 24.555 | 0.320 / 0.385 / 0.359 | 0.250 / 0.307 / 0.448 | 0.171 / 0.210 / 0.537
RE-RF-PredRNN | 1.198 | 23.227 | 0.331 / 0.423 / 0.415 | 0.239 / 0.295 / 0.472 | 0.152 / 0.182 / 0.547
RE-RF-ConvLSTM | 1.093 | 21.132 | 0.342 / 0.387 / 0.262 | 0.270 / 0.307 / 0.323 | 0.201 / 0.230 / 0.391
RN-Net | 1.118 | 21.859 | 0.355 / 0.422 / 0.333 | 0.277 / 0.335 / 0.418 | 0.190 / 0.223 / 0.489
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation: Zhang, F.; Wang, X.; Guan, J.; Wu, M.; Guo, L. RN-Net: A Deep Learning Approach to 0–2 Hour Rainfall Nowcasting Based on Radar and Automatic Weather Station Data. Sensors 2021, 21, 1981. https://doi.org/10.3390/s21061981