Article

Physics-Constrained Deep Learning with Adaptive Z-R Relationship for Accurate and Interpretable Quantitative Precipitation Estimation

1 School of Artificial Intelligence, Shenzhen University, Shenzhen 518060, China
2 Faculty of Science and Technology, UOW College Hong Kong, Hong Kong, China
3 Department of Building Environment and Energy Engineering, The Hong Kong Polytechnic University, Hong Kong, China
4 Foshan Tornado Research Center, Foshan 528000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(1), 156; https://doi.org/10.3390/rs18010156
Submission received: 24 November 2025 / Revised: 17 December 2025 / Accepted: 23 December 2025 / Published: 3 January 2026

Highlights

What are the main findings?
  • A hybrid framework integrates physical knowledge with deep learning models.
  • An adaptive Z-R branch is built by extending the squeeze-and-excitation network.
What are the implications of the main findings?
  • FusionQPE’s explainability is shown by comparing contributions of DL and Z-R branches.
  • FusionQPE is trained and tested using real radar and rainfall observations.

Abstract

Quantitative precipitation estimation (QPE) from radar reflectivity is fundamental for weather nowcasting and water resource management. Conventional Z-R relationship formulas, derived from Rayleigh scattering theory, rely heavily on empirical parameter fitting, which limits estimation accuracy and generalization across different precipitation regimes. Recent deep learning (DL)-based QPE methods can capture the complex nonlinear relationships between radar reflectivity and rainfall. However, most of them overlook fundamental physical constraints, resulting in reduced robustness and interpretability. To address these issues, this paper proposes FusionQPE, a novel Physics-Constrained DL framework that integrates an adaptive Z-R formula. Specifically, FusionQPE employs a densely connected convolutional network (DenseNet) backbone to extract multi-scale spatial features from radar echoes, while a modified squeeze-and-excitation (SE) network adaptively learns the parameters of the Z-R relationship. The final rainfall estimate is obtained through a linear combination of outputs from both the DenseNet backbone and the adaptive Z-R branch, where the trained linear weights and Z-R parameters provide interpretable insights into the model's physical reasoning. Moreover, a physics-based constraint derived from the Z-R branch output is incorporated into the loss function to further strengthen physical consistency. Comprehensive experiments on real radar and rain gauge observations from Guangzhou, China, demonstrate that FusionQPE consistently outperforms both traditional and state-of-the-art DL-based QPE models across multiple evaluation metrics. The ablation and interpretability analyses further confirm that the adaptive Z-R branch improves both the physical consistency and credibility of the model's precipitation estimation.

1. Introduction

Accurate and timely rainfall nowcasting [1] is crucial in hydrological and atmospheric research, especially for short-term (0–6 h) heavy rainfall events [2]. Conventional numerical weather prediction (NWP) models [3], though widely adopted in weather forecasting, are limited by coarse spatial–temporal resolution and parameterization uncertainties [4]. Since high-precision models require extensive computational resources and long processing times, they are generally unsuitable for short-term, high-frequency nowcasting. The development of Doppler weather radar has opened a new avenue for precipitation nowcasting [5,6,7]. Specifically, future rainfall rates are forecast by extrapolating current radar echoes and transforming the predicted radar echoes into precipitation intensities, a process known as quantitative precipitation forecasting (QPF) [8]. The intermediate step of estimating precipitation from radar reflectivity is referred to as quantitative precipitation estimation (QPE) [9,10,11], which serves as a fundamental component for radar-based precipitation nowcasting.
An important and well-established empirical relationship exists between radar reflectivity (Z) and rainfall rate (R), typically expressed as $Z = aR^{b}$. This relationship is derived from a combination of physical principles (such as Rayleigh scattering and the integral definition of reflectivity) and empirical observations based on simplified raindrop size distributions and terminal velocities [12]. Conventional studies on the Z-R relationship primarily focus on optimizing the two empirical parameters a and b. Although this formulation is simple and widely implemented in operational weather radar systems, its performance is often limited by regional variability, microphysical diversity, and the inherently nonlinear nature of precipitation processes.
Deep learning (DL) has demonstrated a strong capability to automatically extract complex and nonlinear relationships from data. Consequently, DL-based QPE methods have gained increasing attention in recent years [13,14,15,16,17,18]. However, most existing approaches are purely data-driven and tend to overlook the physical constraints and interpretability inherent in the Z-R relationship. This deficiency often leads to inconsistent performance under unseen meteorological conditions and limits their generalization in real-world forecasting. To address these issues, a hybrid model combining DL with the Z-R formula has been proposed [19]. Nonetheless, that method treats the empirical Z-R relationship as an auxiliary input feature rather than an integral component of the model architecture, which leads to only marginal improvements in estimation accuracy. An effective fusion framework that deeply couples the physical Z-R relationship with DL has not yet been fully realized.
In this paper, we propose FusionQPE, a hybrid model that seamlessly integrates an adaptive Z-R relationship with DL for QPE. FusionQPE consists of a densely connected convolutional neural network (DenseNet) backbone and an adaptive Z-R branch. The DenseNet architecture [20] is adopted as the backbone owing to its efficiency in reusing features, strengthening gradient propagation, and capturing multi-scale spatial patterns. The backbone extracts deep feature maps from radar reflectivity, while the adaptive Z-R branch (inspired by the squeeze-and-excitation (SE) network [21]) is designed to automatically learn the two parameters of the Z-R relation. The learned parameters, together with the original radar reflectivity, are substituted into the Z-R equation to compute rainfall. FusionQPE contains multiple dense blocks in the backbone, each paired with an adaptive Z-R block to generate rainfall vectors. The final precipitation estimate is obtained through a linear combination of rainfall vectors derived from both adaptive Z-R branch and the backbone output, where the trained linear weight and Z-R parameters offer interpretable insights into the model’s physical reasoning. The main contributions of this paper are summarized as follows:
  • Physics-constrained fusion framework: A novel and effective hybrid framework is proposed to tightly integrate physical knowledge with DL models. The FusionQPE framework fully leverages both the physical Z-R relationship and data-driven feature extraction, providing a generalizable strategy for developing hybrid physics–AI models in other scientific and engineering domains.
  • Adaptive Z-R branch: An adaptive Z-R branch is developed by extending the SE network. This branch automatically learns the two parameters of the Z-R relationship through a channel-wise attention mechanism applied to each dense block, enabling dynamic adjustment across various precipitation patterns.
  • Interpretable fusion mechanism: A linear fusion layer is introduced to integrate the outputs of the adaptive Z-R branch and the DenseNet backbone. The learned linear weights and Z-R parameters provide quantitative interpretability, revealing the relative contribution of physical and data-driven components as well as the learned relationship between radar reflectivity and rainfall in precipitation estimation.
  • Validation on real-world data: FusionQPE is trained and evaluated on real radar and rain gauge observational datasets collected from actual weather events, demonstrating its practicality and strong potential for integration into operational weather forecasting systems.

2. Related Work

2.1. Empirical Z-R Relationship

In the seminal work of Marshall and Palmer in 1948 [22], the empirical Z-R relationship was formalized as follows:
$$ Z = aR^{b} \tag{1} $$
where Z denotes the radar reflectivity factor ($\mathrm{mm^{6}\,m^{-3}}$), and a and b are empirical coefficients determined from observations. Extensive research has been devoted to refining the parameters a and b to better characterize various precipitation types and climatic regions [22,23,24,25]. Representative formulations include the classical stratiform relation $Z = 200R^{1.6}$ (Marshall–Palmer) [22], the U.S. WSR-88D convective relation $Z = 300R^{1.4}$ [23], and the tropical relation $Z = 250R^{1.2}$ [24]. Further refinements have been made through practical application and regional analysis, such as $Z = 321R^{1.53}$ for Greece [25].
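As a concrete illustration, these fixed parameter pairs can be compared numerically. The following sketch (function and dictionary names are ours, not from the paper) inverts $Z = aR^b$ for the cited parameter sets:

```python
import numpy as np

# Published (a, b) pairs from the relations cited in the text.
ZR_PARAMS = {
    "marshall_palmer": (200.0, 1.6),    # stratiform
    "wsr88d_convective": (300.0, 1.4),  # U.S. WSR-88D convective
    "tropical": (250.0, 1.2),
}

def rain_rate(z_linear, a, b):
    """Rain rate R (mm/h) from linear reflectivity Z (mm^6 m^-3) via R = (Z/a)**(1/b)."""
    return (np.asarray(z_linear, dtype=float) / a) ** (1.0 / b)
```

For example, at 40 dBZ ($Z = 10^{4}\ \mathrm{mm^{6}\,m^{-3}}$) the Marshall–Palmer relation yields roughly 11.5 mm/h, while the other parameter sets give noticeably different rates, illustrating the regional sensitivity discussed above.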
To improve the accuracy of QPE, several adaptive schemes have been proposed to dynamically calibrate a and b across different spatial locations and time periods [26,27,28,29]. For example, Rosenfeld and Ulbrich [26] utilized different parameter sets for distinct microphysical conditions, such as convective, stratiform, tropical, and orographic precipitation. The Multi-Radar Multi-Sensor System [27] adopts multiple rain type dependent Z-R relations to reduce bias across precipitation categories. More recently, machine learning (ML)-based methods [28,29] have been introduced to infer optimal parameter sets for various rainfall types. However, adjusting the two parameters a and b alone cannot overcome the inherent limitations of the Z-R relationship, which ultimately restricts its generalization and accuracy in practical QPE applications.

2.2. DL-Based QPE Method

DL has shown remarkable capability in automatically learning complex, nonlinear mappings from large-scale datasets, making it well-suited for QPE tasks. For example, ref. [13] proposed a DL-based QPE framework that utilizes three-dimensional QPESUMS radar reflectivity to estimate two-dimensional (2-D) precipitation fields. Similarly, Li et al. proposed RQPENet [14] and StarNet [15], which incorporate densely connected convolutional networks and clique units, respectively, to enhance precipitation estimation accuracy. Additionally, ref. [16] established relationships between rainfall and both single polarimetric variables (e.g., reflectivity $Z_H$, differential reflectivity $Z_{DR}$, and specific differential phase $K_{DP}$) and multi-variable combinations ($Z_H$-$Z_{DR}$-$K_{DP}$) via DL-based models and a customized loss function to improve predictive performance.
Despite these advancements, most existing DL-based QPE approaches remain purely data-driven, relying primarily on statistical correlations between radar reflectivity and rainfall. These models often overlook the intrinsic physical relationship between the radar echo and the rainfall, which governs the transformation of radar echoes into surface precipitation intensity. As a result, their predictions may become unstable or unreliable when applied to unseen rainfall regimes or rare weather events.
To overcome this limitation, several studies have explored physics-guided learning strategies. For instance, ref. [19] proposed a hybrid model that integrates the empirical Z-R relationship into a DL architecture by using rainfall estimated from the Z-R formula as an auxiliary input feature. However, this method only indirectly leverages the physical constraint and yields limited improvement in the precipitation estimation accuracy. Consequently, developing an effective and adaptive fusion mechanism that tightly couples the Z-R relationship with DL networks remains an open challenge in radar-based QPE research.

2.3. Squeeze-and-Excitation Network

The SE network [21] is a lightweight architectural module that is designed to adaptively recalibrate channel-wise feature responses by explicitly modeling inter-dependencies among channels. It achieves this through a two-step process: squeeze for global information embedding and excitation for adaptive feature reweighting, thereby enhancing the representational capacity of convolutional neural networks. In this paper, the SE network is modified to construct the adaptive Z-R branch in the proposed FusionQPE framework. The original SE block consists of two main components: Squeeze (Global Information Embedding) and Excitation (Adaptive Recalibration). The Squeeze operator ( F s q ) applies global average pooling to the feature map to capture channel-wise information, which can be defined as:
$$ z_c = F_{sq}(x_c) = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} x_c(i,j) \tag{2} $$
where $x_c(i,j)$ denotes the feature value of the cth channel of x at the ith row and jth column. The Excitation operator ($F_{ex}$) employs two fully connected layers with Rectified Linear Unit (ReLU) and sigmoid activation functions to capture channel-wise dependencies s, which can be defined as:
$$ s = F_{ex}(z, W) = \sigma\left(W_2\,\delta(W_1 z)\right) \tag{3} $$
where $z = [z_1, z_2, \ldots, z_c, \ldots, z_C] \in \mathbb{R}^{C}$ is the channel descriptor obtained from Equation (2); $W_1 \in \mathbb{R}^{\frac{C}{r} \times C}$ and $W_2 \in \mathbb{R}^{C \times \frac{C}{r}}$ represent the weights of the two fully connected layers, respectively; r is the reduction ratio; $\delta$ and $\sigma$ are the ReLU and sigmoid activation functions, respectively. The final output of the SE block is obtained by rescaling the feature map (x) with the channel-wise dependencies s:
$$ \tilde{x}_c = F_{scale}(x_c, s_c) = s_c \cdot x_c \tag{4} $$
where $\tilde{x} = [\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_C]$ is the final recalibrated feature map of x.
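The squeeze, excitation, and scale steps above can be sketched in NumPy as a minimal forward pass (biases and the batch dimension are omitted for brevity, and the weights here are arbitrary placeholders rather than trained parameters):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def se_block(x, w1, w2):
    """Squeeze-and-excitation over a (C, H, W) feature map.

    w1: (C//r, C) and w2: (C, C//r) are the two fully connected weights,
    where r is the reduction ratio.
    """
    z = x.mean(axis=(1, 2))                     # squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excitation: FC-ReLU-FC-sigmoid -> (C,)
    return x * s[:, None, None]                 # scale: channel-wise recalibration
```

Each output channel is the input channel multiplied by a single learned scalar in (0, 1), which is the mechanism FusionQPE later modifies to emit the two Z-R parameters instead.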

3. Materials and Methods

3.1. FusionQPE Framework

To tightly couple physical knowledge with data-driven learning, this paper proposes FusionQPE, a novel Physics-Constrained DL framework that integrates an adaptive Z-R relationship. The overall architecture of FusionQPE is illustrated in Figure 1, which consists of two major components: the backbone network (Figure 1a) and the adaptive Z-R branch (Figure 1b). The backbone network mainly comprises dense convolutional blocks and transition blocks. Each dense convolutional block (Figure 1c) is constructed from multiple dense convolutional units (Figure 1d), and the adaptive Z-R branch is constructed from Z-R formula blocks (Figure 1e). The detailed structures and functions of these components are described in the following subsections.
As the weather radar completes one full volume scan approximately every six minutes, ten radar echo frames are typically obtained within one hour. The shape of the FusionQPE input (original radar reflectivity data) without additional preprocessing is $c \times h \times w$, where $c = 10$ denotes the number of time slices, and $h = 9$ and $w = 9$ denote the spatial dimensions. The 2-D convolutional (Conv) operation [30] is fundamental for capturing spatial features, and FusionQPE employs it to extract deep representations. In addition, a 2-D batch normalization (BN) layer [31] is used to stabilize training, and a ReLU activation function [32] introduces nonlinearity alongside each convolutional layer.

3.2. Backbone Network

To effectively capture the complex relationship between radar reflectivity and rainfall, a densely connected convolutional neural network is employed to learn the spatial and temporal correlations among the radar echoes within one hour and their 9 × 9 neighboring grids. The backbone network of FusionQPE consists of an initial head, four dense convolutional blocks, three transition blocks, and a final vectorization block. The four dense convolutional blocks contain 6, 12, 36, and 24 dense convolutional units, respectively. Each component is described as follows:

3.2.1. Initial Head

The adaptive Z-R branch requires the original radar reflectivity to calculate the rainfall rate. Therefore, the input to FusionQPE is the raw radar echo data. To preprocess the input and enhance feature extraction, the model first applies a BN layer, followed by a convolutional layer with a kernel size of 3 and 96 channels. A ReLU activation function is subsequently employed to introduce nonlinearity and strengthen feature representation.

3.2.2. Dense Convolutional Block

The dense convolutional block (Figure 1c) consists of multiple dense convolutional units, where the input to each unit is constructed by concatenating the feature maps of all preceding units, thereby promoting feature reuse and strengthening information flow. The dense convolutional unit, shown in Figure 1d, contains BN, ReLU, Conv ( 192 @ 1 × 1 ), BN, ReLU, and Conv ( 48 @ 3 × 3 ). The two convolutional layers correspond to 2-D convolutions with kernel sizes of 1 and 3, and output channel dimensions of 192 and 48, respectively.

3.2.3. Transition Block

The transition block is designed to condense the deep feature map and reduce spatial dimensions. In FusionQPE, the transition block is positioned after each dense convolutional block. Each transition block consists of a BN layer, a ReLU activation, a 1 × 1 convolutional layer with an unchanged channel size, and a 2 × 2 average pooling layer with a stride of 2. The output deep feature map halves the input spatial resolution, yielding a more compact and semantically enriched feature representation.
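Under the stated configuration (96-channel initial head, a growth rate of 48 channels per dense unit from the Conv(48@3×3) layer, and transition blocks that leave the channel count unchanged), the backbone's channel widths follow from simple arithmetic. A hypothetical bookkeeping sketch (function names are ours):

```python
def dense_block_channels(c_in, num_units, growth=48):
    """Channels after a dense block: each unit concatenates `growth` new channels."""
    return c_in + num_units * growth

def backbone_channels(c0=96, units=(6, 12, 36, 24), growth=48):
    """Track channel width after each of the four dense blocks.

    Transition blocks in FusionQPE keep the channel count unchanged (they only
    halve the spatial size), so only dense blocks add channels.
    """
    widths = []
    c = c0
    for n in units:
        c = dense_block_channels(c, n, growth)
        widths.append(c)
    return widths
```

This kind of check is useful when wiring the adaptive Z-R branch, since each Z-R formula block must accept the channel width of its corresponding dense block.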

3.2.4. Vectorization Block

The vectorization block is designed to transform the deep feature map from a 3-D tensor into a one-dimensional vector, representing the deep estimated rainfall vector ( y D ). It consists of a BN layer, a 1 × 1 convolutional layer with half the number of output channels, and a global average pooling layer that aggregates spatial information into a compact vector representation.

3.3. Adaptive Z-R Branch

In this paper, the Z-R branch with learnable parameters is integrated into the DL model. The raw input to the proposed FusionQPE model is dBZ, which is expressed on a logarithmic decibel (dB) scale and differs from the reflectivity factor Z. The goal of FusionQPE is to estimate the rainfall R from the radar echo dBZ. Hence, the empirical Z-R relationship in Equation (1) must first be reformulated into a mathematical expression compatible with the logarithmic input representation. This design directly addresses the limitations of purely data-driven QPE models discussed in Section 1 by embedding the Z-R relationship as the dominant physical constraint.
Since weather radars usually report reflectivity in decibel units (dBZ), the relationship between Z and dBZ is:
$$ dBZ = 10\log_{10}(Z) \tag{5} $$
which can be equivalently rewritten as:
$$ Z = 10^{dBZ/10}. \tag{6} $$
Substituting Equation (6) into Equation (1) gives:
$$ 10^{dBZ/10} = aR^{b}. \tag{7} $$
To express R as a function of dBZ, we rearrange Equation (7) as:
$$ R = a^{-1/b}\left(10^{dBZ/10}\right)^{1/b}. \tag{8} $$
In radar–rainfall estimation, it is often convenient to express this formula in a scaled form using $a'$ and $b'$:
$$ R = G(a', b') = a' \times 10^{-2} \times \left(10^{dBZ/10}\right)^{b'}, \tag{9} $$
where $a' = a^{-1/b}$ and $b' = 1/b$. The term $10^{-2}$ is an empirical scaling factor widely adopted in operational QPE systems to adjust the rainfall rate to the unit of mm/h. Therefore, Equation (9) represents the equivalent form of the traditional Z-R relationship expressed in terms of dBZ rather than Z, where the two parameters, $a'$ and $b'$, are automatically adjusted according to the corresponding radar echo in FusionQPE.
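A minimal sketch of Equation (9) (the $10^{-2}$ scaling factor and the primed parameters follow the text; the function name is illustrative):

```python
import numpy as np

def dbz_to_rain(dbz, a_prime, b_prime):
    """Equation (9): R = a' * 1e-2 * (10**(dBZ/10))**b'.

    a_prime = a**(-1/b) and b_prime = 1/b map back to the classical Z = a*R**b,
    with the 1e-2 term acting as the empirical mm/h scaling factor.
    """
    return a_prime * 1e-2 * (10.0 ** (np.asarray(dbz, dtype=float) / 10.0)) ** b_prime
```

In FusionQPE, `a_prime` and `b_prime` are not fixed constants but are produced per sample by the SE-based parameter learner described below.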
After reformulating the Z-R relationship into a form compatible with the deep learning framework, the adaptive Z-R formula branch is further developed for integration within FusionQPE. The branch is designed to estimate rainfall by integrating the physical Z-R relationship with deep feature maps extracted by the backbone network. A rainfall rate vector is computed based on the output of each dense convolutional block, allowing the model to capture multi-scale precipitation features. Hence, the adaptive Z-R branch contains four Z-R formula blocks, each corresponding to one of the four dense convolutional blocks in FusionQPE. As illustrated in Figure 1e, each Z-R formula block is developed by modifying the SE block [21].

3.3.1. Z-R Formula Block

The Z-R formula block consists of two modified SE blocks connected in series. The first modified SE block, referred to as the SE-based Parameter Learner, is responsible for adaptively learning the two parameters $a'$ and $b'$ in the Z-R relationship (Equation (9)) from the corresponding dense block features. Using the learned parameters together with the raw radar reflectivity (dBZ), a rainfall tensor is calculated via Equation (9), whose shape ($c \times h \times w$) matches that of the model input. To further capture the temporal relationship between consecutive rainfall tensors, the second modified SE block, referred to as the SE-based Temporal Relationship Capturer, is applied to the rainfall tensor to generate a refined rainfall vector of length $hw$.

3.3.2. SE-Based Parameter Learner

The deep feature map from the ith dense convolutional block is denoted as $X_i = F_i(X)$, where X represents the model input and $F_i$ denotes the operation of all layers up to and including the ith dense convolutional block. The first modified SE block acts as a parameter learner that automatically derives the two Z-R parameters ($a'$ and $b'$). Its Squeeze operator is defined as in Equation (2), where x is the ith deep feature map ($X_i$). Unlike the original Excitation operator, the modified version outputs two scalar values instead of channel-wise dependencies, applying a ReLU activation to $a'$ and no activation to $b'$. The ReLU ensures that $a'$, the scaling factor, remains non-negative, while $b'$, the exponent, is unconstrained to allow a wider dynamic range. Accordingly, the output dimension of the second fully connected layer becomes 2 ($W_2 \in \mathbb{R}^{2 \times \frac{C}{r}}$). A rainfall rate tensor $\tilde{r}_i \in \mathbb{R}^{c \times h \times w}$ is calculated using Equation (9) with the learned parameters ($a'$ and $b'$) and the model input (X).

3.3.3. SE-Based Temporal Relationship Capturer

To refine rainfall estimation, the second modified SE block serves as a temporal relationship capturer, operating on the previously calculated rainfall rate tensor $\tilde{r}_i$. The resulting rainfall rate vector ($r_i \in \mathbb{R}^{hw}$) is obtained as the weighted sum of channel-wise tensors:
$$ r_i = \mathrm{Flatten}\left(\sum_{k=1}^{c} F_{scale}(\tilde{r}_{i,k}, s_k)\right) \tag{10} $$
where $s = F_{ex}(F_{sq}(\tilde{r}_i), W)$. Compared with the original Excitation operator, the second SE block replaces its activation function with a softmax layer to normalize channel contributions and emphasize the most relevant temporal–spatial features.
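The softmax-weighted channel aggregation described above can be sketched as follows (here `logits` stands in for the excitation output before the softmax, and the function names are illustrative):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def temporal_capture(r_tensor, logits):
    """Softmax-weighted sum over the channel (time) axis of a (c, h, w)
    rainfall tensor, flattened to a length h*w vector.
    """
    s = softmax(logits)                                   # (c,) weights summing to 1
    weighted = (r_tensor * s[:, None, None]).sum(axis=0)  # (h, w)
    return weighted.ravel()                               # (h*w,)
```

Because the weights sum to one, the output is a convex combination of the per-frame rainfall fields; equal logits reduce it to a simple temporal mean.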

3.4. Fusion Layers

The deep learning backbone estimates a rainfall vector ($y_D$), while the adaptive Z-R branch computes four rainfall vectors ($r_i$, where $i = 1, \ldots, 4$) derived from four distinct Z-R formula blocks. FusionQPE concatenates these vectors into a single feature vector and employs a fully connected layer without any activation function to estimate the final rainfall rate ($r \in \mathbb{R}$). To ensure that the adaptive Z-R branch learns physically meaningful parameters rather than acting as an arbitrary feature extractor, a physical consistency constraint is incorporated into the loss function. This constraint explicitly forces the Z-R branch to approximate the ground-truth rainfall independently. Consequently, the total loss function is formulated as a constrained Mean Squared Error (MSE):
$$ L = \frac{1}{N}\sum_{n=1}^{N}\left\{\left(r_n - y_n\right)^2 + \beta\,\frac{1}{d_r}\sum_{j=1}^{d_r}\left(\mathrm{Norm}(r_{n,j}) - y_n\right)^2\right\} \tag{11} $$
where $r_n$ represents the model's final output, $r_{n,j}$ denotes the jth element of the Z-R branch output, and $y_n$ is the normalized rain gauge measurement (mm/h). Here, $d_r = h \times w \times 4$ represents the flattened dimension of the Z-R branch output, and N is the dataset size. The term $\mathrm{Norm}(\cdot)$ applies the same linear normalization defined in Equation (16) to the Z-R branch outputs, aligning them with the ground truth. Crucially, this normalization is a linear scaling operation that aligns the magnitude of the physical branch outputs with the network's optimization landscape without altering the relative distribution of rainfall intensities or compressing heavy rainfall signals. The hyperparameter $\beta$ controls the strength of this physical regularization and is set to 0.3, based on the sensitivity analysis provided in Section 4.5.
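A sketch of the constrained MSE in Equation (11), assuming the Z-R branch outputs passed in have already been normalized (names are illustrative):

```python
import numpy as np

def fusion_loss(r_final, r_branch, y, beta=0.3):
    """Constrained MSE: final-output MSE plus a physical-consistency term that
    pushes each (already normalized) Z-R branch element toward the target.

    r_final: (N,) model outputs; r_branch: (N, d_r) normalized Z-R branch
    outputs; y: (N,) normalized gauge targets; beta: regularization strength.
    """
    mse = np.mean((r_final - y) ** 2)
    phys = np.mean((r_branch - y[:, None]) ** 2)  # averages over both N and d_r
    return mse + beta * phys
```

Setting `beta=0` recovers the plain MSE of a purely data-driven model, which is the baseline the ablation study compares against.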

3.5. Interpretability of FusionQPE

Compared with purely data-driven DL-based QPE methods, the proposed FusionQPE framework offers improved interpretability through analysis of the fusion layer weights and the learned Z-R parameters. The fusion layer produces the final rainfall estimate as a linear combination of outputs from the backbone network and the adaptive Z-R branch. Therefore, its learned weights can be utilized to examine the relative contributions of each component within FusionQPE. Furthermore, to explore the influence of the adaptive Z-R branch, representative test instances across different precipitation levels are analyzed to assess the corresponding learned parameters, providing deeper insight into the model’s physical consistency and adaptive behavior.
The backbone outputs $d_B$ features, while the adaptive Z-R branch includes four Z-R formula blocks, each generating $hw$ features. The contributions of the ith backbone feature and the jth Z-R feature are denoted as $C_{bn}^{i} = |u_{bn}^{i} \times w_{bn}^{i}|$ and $C_{zr}^{j} = |u_{zr}^{j} \times w_{zr}^{j}|$, respectively, where u and w represent the corresponding input to the fusion layer and the corresponding learned weight of the fusion layer. The overall contribution ratios of the backbone network and the Z-R branch are calculated as:
$$ CR_{\mathrm{Backbone}} = \frac{\sum_{i=1}^{d_B} C_{bn}^{i}}{\sum_{i=1}^{d_B} C_{bn}^{i} + \sum_{j=1}^{4hw} C_{zr}^{j}} \tag{12} $$
$$ CR_{\mathrm{ZR}} = \frac{\sum_{j=1}^{4hw} C_{zr}^{j}}{\sum_{i=1}^{d_B} C_{bn}^{i} + \sum_{j=1}^{4hw} C_{zr}^{j}} \tag{13} $$
$$ CR_{\mathrm{ZR}}^{k} = \frac{\sum_{j=1}^{hw} C_{zr}^{j,k}}{\sum_{i=1}^{d_B} C_{bn}^{i} + \sum_{j=1}^{4hw} C_{zr}^{j}} \tag{14} $$
where $CR_{\mathrm{Backbone}}$, $CR_{\mathrm{ZR}}$, and $CR_{\mathrm{ZR}}^{k}$ denote the contribution ratios of the backbone network, the adaptive Z-R branch, and the kth Z-R block, respectively; $C_{zr}^{j,k}$ represents the contribution of the jth Z-R feature of the kth Z-R block; the relationship between Equations (13) and (14) is $CR_{\mathrm{ZR}} = \sum_{k=1}^{4} CR_{\mathrm{ZR}}^{k}$.
To further quantify the relative contributions of the backbone network and the adaptive Z-R branch, their contribution densities are estimated using a Gaussian kernel, defined as:
$$ f(x) = \frac{1}{n \cdot h}\sum_{i=1}^{n}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{x - \log_{10}(CR_i)}{h}\right)^2\right) \tag{15} $$
where x is the evaluation point, $f(x)$ is the estimated density at x, n is the number of features, $CR_i$ is the contribution ratio of the ith feature, and h is the kernel bandwidth (distinct from the spatial height h used elsewhere).
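The contribution-ratio computation of Equations (12) and (13) reduces to comparing absolute input-times-weight masses from the fusion layer; a minimal sketch (names are illustrative):

```python
import numpy as np

def contribution_ratios(u_bn, w_bn, u_zr, w_zr):
    """Contribution ratios of the backbone versus the Z-R branch.

    Each feature's contribution is |input * weight|; the ratio is that mass
    divided by the total mass across both groups, so the two ratios sum to 1.
    """
    c_bn = np.abs(u_bn * w_bn).sum()
    c_zr = np.abs(u_zr * w_zr).sum()
    total = c_bn + c_zr
    return c_bn / total, c_zr / total
```

The per-block ratio of Equation (14) follows the same pattern, restricting the numerator to the features of one Z-R block.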
The parameters $a'$ and $b'$ are adaptively learned by the four Z-R blocks based on the outputs of their corresponding dense convolutional blocks, which are derived from the raw radar echo. In other words, the two parameters are automatically adjusted according to the raw radar echo, ensuring that the relationship between radar reflectivity and rainfall is dynamically fitted. To further explain how each Z-R block affects the final estimate of FusionQPE, the two learned parameter values of the four Z-R blocks are compared with the conventional fixed parameters of Equation (9). Additionally, the output of the backbone network, the outputs of the four Z-R blocks, and the final output of FusionQPE are analyzed against the ground truth. Examples of the interpretability analysis of FusionQPE can be found in Section 5.

4. Results

4.1. Dataset

4.1.1. Study Region

The radar data utilized in this study were collected from a regional weather radar (CINRAD-SAD) located in Guangzhou, Guangdong Province, China. Corresponding ground-truth rainfall measurements were obtained from 219 automatic weather stations located in Foshan, Guangdong Province. As illustrated in Figure 2 [33], the position of the Guangzhou radar is marked by a red star and all the weather stations are indicated by blue dots. The colored background depicts an example of radar reflectivity observed during a representative precipitation event. The study area, Foshan, is located in the Pearl River Delta and features a subtropical monsoon climate with an annual average rainfall of 1600–2000 mm [34]. Precipitation is highly seasonal, concentrated in the flood season (April to September) and primarily driven by the East Asian summer monsoon and tropical cyclones.

4.1.2. Data Quality Control

This radar operates in the Volume Cover Pattern 21 (VCP21) mode, performing a full volume scan every six minutes across nine elevation angles (0.5°, 1.45°, 2.4°, 3.35°, 4.3°, 6.0°, 9.9°, 14.6°, 19.5°). The radar base data has a spatial resolution of 1 km and an azimuthal resolution of 1°. To better estimate the rainfall rate, near-surface radar echoes serve as the model input. Specifically, the Constant Altitude Plan Position Indicator (CAPPI) product at an altitude of 1 km is employed. This altitude was selected to represent near-surface conditions, thereby minimizing the vertical distance to ground gauges and reducing uncertainties associated with the Vertical Profile of Reflectivity (VPR). The CAPPI radar reflectivity is derived from the polar-coordinate base radar data using the Cressman interpolation method [35]. Although converting from polar to Cartesian coordinates may introduce minor local biases, FusionQPE processes a 9 × 9 spatial patch across 10 temporal frames rather than relying on a single grid point. This spatiotemporal context enables the network to learn robust feature representations, making the estimation resilient to interpolation artifacts and small-scale vertical variabilities.
To ensure the robustness of the model against data quality issues, we implemented a data completeness screening protocol to handle missing or anomalous radar values. Specifically, we evaluated the information density of each 10 × 9 × 9 spatiotemporal patch. Samples containing an excessive proportion of missing or anomalous values (exceeding a 50% threshold) were considered insufficiently informative for reliable QPE and were permanently discarded. This threshold-based filtering effectively removes samples that lack sufficient radar reflectivity context to establish a valid mapping with ground gauges. For the retained samples where data gaps were minor and sparse, a zero-imputation strategy was employed. This approach preserves the spatiotemporal tensor structure required for the CNN input while mitigating the impact of local data discontinuities.
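A hypothetical sketch of this screening protocol, assuming missing or anomalous values have been encoded as NaN (the 50% threshold follows the text; the function name is ours):

```python
import numpy as np

def screen_and_impute(patch, max_missing=0.5):
    """Quality control for a (10, 9, 9) reflectivity patch.

    Discard the patch (return None) when the missing fraction exceeds
    `max_missing`; otherwise zero-impute the remaining gaps so the
    spatiotemporal tensor structure required by the CNN input is preserved.
    """
    missing = np.isnan(patch)
    if missing.mean() > max_missing:
        return None
    out = patch.copy()
    out[missing] = 0.0
    return out
```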

4.1.3. Rainfall Category Distribution

The experimental dataset covers the period from 1 January 2019 to 30 June 2021, comprising approximately 140,000 samples in total. To ensure a robust and unbiased evaluation, the entire dataset was randomly shuffled and partitioned into three independent subsets: a training set ( N = 80,000), a validation set ( N = 20,000), and a test set ( N = 40,000). This random splitting strategy ensures that meteorological characteristics, including precipitation intensity distributions and storm types, are uniformly represented across all subsets.
Figure 3 illustrates the sample distributions of the training, validation, and test sets across five distinct rainfall categories: Light (<5 mm/h), Moderate (5–10 mm/h), Heavy (10–20 mm/h), Storm (20–30 mm/h), and Extreme (>30 mm/h). The histograms reveal a natural long-tail distribution, where light rainfall predominates while extreme events remain rare. Crucially, the highly consistent proportions of each category across the training, validation, and test sets demonstrate the statistical homogeneity of the data partitioning. This alignment with the region’s precipitation climatology confirms the dataset’s representativeness, providing a solid foundation for training a model capable of handling diverse, real-world weather conditions.
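The five category thresholds can be encoded as a simple binning function (the handling of values exactly at 5, 10, 20, and 30 mm/h is our assumption, since the text gives only the ranges):

```python
def rain_category(rate_mm_h):
    """Map an hourly rain rate (mm/h) to the five categories used in Figure 3."""
    if rate_mm_h < 5:
        return "Light"
    if rate_mm_h < 10:
        return "Moderate"
    if rate_mm_h < 20:
        return "Heavy"
    if rate_mm_h <= 30:
        return "Storm"
    return "Extreme"
```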

4.1.4. Data Preprocessing

To capture spatiotemporal correlations between radar echoes and precipitation, each sample consists of 10 consecutive radar reflectivity time slices (spanning one hour) covering a 9 × 9 spatial grid with 1 km resolution, along with the corresponding hourly rainfall rate. Thus, each data sample is represented as a pair $(x \in \mathbb{R}^{10 \times 9 \times 9},\, y \in \mathbb{R})$. The input tensor dimensions were strategically selected to balance physical representativeness with computational efficiency. Spatially, the 9 × 9 grid captures local reflectivity gradients and textural patterns surrounding the target station, providing sufficient context to identify precipitation regimes without introducing noise from distant, uncorrelated echoes. Temporally, the sequence length of 10 frames (equivalent to a 1 h window) effectively covers the evolution and motion trends of mesoscale convective systems, enabling the model to learn dynamic features related to storm growth and decay.
Since FusionQPE directly estimates rainfall rates from raw radar echoes, no additional preprocessing is applied to the model input (reflectivity in dBZ). The rain gauge measurements are standardized using the following normalization formula:

$$y' = \mathrm{Norm}(y) = \frac{y - y_{\mu}}{y_{std}}$$

where $y_{\mu} = \frac{1}{N} \sum_{n=1}^{N} y_n$ and $y_{std} = \sqrt{\frac{1}{N} \sum_{n=1}^{N} (y_n - y_{\mu})^2}$ are the mean and standard deviation of the training samples, respectively. Crucially, $y_{\mu}$ and $y_{std}$ were calculated exclusively on the training dataset. These fixed statistics were then applied to normalize the training, validation, and test sets alike to prevent data leakage.
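The leakage-free normalization described above can be sketched in a few lines; the function names are illustrative, and the population standard deviation matches the formula in the text.

```python
import math

def fit_norm_stats(train_rain):
    """Compute y_mu and y_std on the TRAINING gauge values only (no leakage)."""
    n = len(train_rain)
    mu = sum(train_rain) / n
    std = math.sqrt(sum((y - mu) ** 2 for y in train_rain) / n)  # population std
    return mu, std

def normalize(y, mu, std):
    """Apply the fixed training statistics to any split (train/val/test)."""
    return (y - mu) / std
```

The same `(mu, std)` pair fitted on the training split is reused verbatim for the validation and test splits.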

4.2. Comparison Methods

To verify the effectiveness of the proposed FusionQPE framework, five benchmark methods are compared: two empirical Z-R relationship formulas and three state-of-the-art DL-based QPE models.

4.2.1. Convective Relation

The convective Z-R relationship is defined as:

$$R = 0.328 \times 10^{-2} \times Z^{0.714}$$

where $Z$ denotes the radar reflectivity on a linear scale, calculated from the model input (in dBZ) using $Z = 10^{\mathrm{dBZ}/10}$. This formula is derived from $Z = 300 \times R^{1.4}$. More details can be found in [23].

4.2.2. Stratiform Relation

The stratiform Z-R relationship, corresponding to $Z = 200 \times R^{1.6}$ [22], is expressed as:

$$R = 0.863 \times 10^{-2} \times Z^{0.625}$$
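Both fixed relations are inversions of a power law $Z = a R^{b}$, so they can be sketched with one shared helper. The function names are illustrative; only the $(a, b)$ pairs and the dBZ-to-linear conversion come from the text.

```python
def dbz_to_linear(dbz):
    """Z = 10**(dBZ/10): convert reflectivity from dBZ to linear units."""
    return 10.0 ** (dbz / 10.0)

def zr_rain_rate(dbz, a, b):
    """Invert Z = a * R**b to obtain R = (Z / a)**(1/b) in mm/h."""
    return (dbz_to_linear(dbz) / a) ** (1.0 / b)

# convective relation (Z = 300 R^1.4) and stratiform relation (Z = 200 R^1.6)
r_convective = zr_rain_rate(40.0, a=300.0, b=1.4)
r_stratiform = zr_rain_rate(40.0, a=200.0, b=1.6)
```

These two fixed-parameter baselines are exactly what the adaptive Z-R branch generalizes by learning $(a, b)$ from data.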

4.2.3. ZRDL

ZRDL is a DL-based model that incorporates the Z-R relationship, proposed in 2022 [19]. ZRDL employs 2-D convolutions to learn the Z-R parameters, followed by two long short-term memory (LSTM) blocks applied to the estimated rainfall vector and a fully connected layer to produce the final rainfall rate. In [19], this model corresponds to the “CNN+LSTM+FC+ZR” case, which is referred to as ZRDL in this paper.

4.2.4. RQPENet

RQPENet, proposed in 2023 [14], employs a densely connected network architecture and uses dual-polarization radar observations ($Z$, $Z_{DR}$, and $K_{DP}$) as model inputs. Among the four DL models introduced in [14], RQPENetD1, which achieved the best performance, is selected for comparison in this study and referred to simply as RQPENet. In our experiments, the model input is adapted to accept only radar reflectivity ($Z$) sequences to align with the single-polarization dataset. As detailed in the analysis in Section 5.4, this modification allows for a valid assessment of the backbone's feature extraction efficiency without compromising its structural integrity.

4.2.5. StarNet

StarNet, proposed in 2024 [15], is a QPE model based on 3-D convolutional neural networks (3D-CNNs) and clique units. The model utilizes clique units to construct its star blocks, employs 3D convolutions to capture the spatiotemporal features of radar echoes, and adopts a reweighted B-Huber loss function to mitigate precipitation–sample imbalance. Similar to RQPENet, StarNet was originally developed for polarimetric data. In this study, we implemented its full 3D architecture to capture spatiotemporal dependencies but adapted the input layer for Z-only reflectivity sequences. Please refer to Section 5.4 for a comprehensive discussion on the validity and implications of comparing these dual-polarization architectures in a single-variable setting.
To ensure a fair and unbiased comparison, all methods, including FusionQPE and its ablation versions, are trained and evaluated under identical experimental conditions. The same dataset, data split, and optimization settings are applied to all methods. Moreover, the network structures and hyperparameter configurations of the compared models are preserved as described in their original publications, with necessary adaptations only to the input channel dimensions to accommodate the single-polarization data.

4.3. Experimental Setup

The experimental settings are as follows: the Adam optimizer is employed with a learning rate of $1.0 \times 10^{-4}$, a batch size of 1024, and 100 training epochs. The mean squared error (MSE) loss function is used for most models, except for StarNet and FusionQPE, which employ the reweighted B-Huber loss and the constrained MSE loss (refer to Equation (11)), respectively. All models were trained on a single NVIDIA A40 GPU.
To comprehensively examine the precipitation estimation performance of FusionQPE, five quantitative metrics are employed: mean absolute error (MAE), root mean squared error (RMSE), bias (BIAS), correlation coefficient (CC), and normalized standard error (NSE), defined as follows:
$$\mathrm{MAE} = \frac{1}{N} \sum_{n=1}^{N} |r_n - y_n|$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{n=1}^{N} (r_n - y_n)^2}$$

$$\mathrm{BIAS} = \frac{\sum_{n=1}^{N} r_n}{\sum_{n=1}^{N} y_n}$$

$$\mathrm{CC} = \frac{\sum_{n=1}^{N} (r_n - \bar{R})(y_n - \bar{Y})}{\sqrt{\sum_{n=1}^{N} (r_n - \bar{R})^2 \sum_{n=1}^{N} (y_n - \bar{Y})^2}}$$

$$\mathrm{NSE} = \frac{\sum_{n=1}^{N} |r_n - y_n|}{\sum_{n=1}^{N} y_n}$$
where N denotes the number of test samples, and R ¯ and Y ¯ represent the mean values of the model outputs and ground-truth samples, respectively.
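The five regression metrics defined above translate directly into code; this sketch assumes plain Python lists of predictions and observations.

```python
import math

def regression_metrics(r, y):
    """MAE, RMSE, BIAS, CC, and NSE as defined in the text;
    r = model outputs, y = gauge observations."""
    n = len(r)
    r_bar = sum(r) / n
    y_bar = sum(y) / n
    abs_err = sum(abs(ri - yi) for ri, yi in zip(r, y))
    return {
        "MAE": abs_err / n,
        "RMSE": math.sqrt(sum((ri - yi) ** 2 for ri, yi in zip(r, y)) / n),
        "BIAS": sum(r) / sum(y),
        "CC": sum((ri - r_bar) * (yi - y_bar) for ri, yi in zip(r, y))
              / math.sqrt(sum((ri - r_bar) ** 2 for ri in r)
                          * sum((yi - y_bar) ** 2 for yi in y)),
        "NSE": abs_err / sum(y),
    }
```

A perfect forecast yields MAE = RMSE = NSE = 0 and BIAS = CC = 1, which is the sanity check implied by the metric definitions.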
To strictly assess whether the performance improvements of FusionQPE are statistically significant rather than due to random chance, we employed two statistical verification methods: a paired t-test and a bootstrap confidence interval (CI). We conducted the paired t-test on the absolute errors (AE) of the prediction results. Let $e_i^A$ and $e_i^B$ denote the absolute errors of model A (FusionQPE) and model B (benchmark) for the $i$-th sample, respectively. The differences are defined as $d_i = e_i^A - e_i^B$. The t-statistic is calculated as:

$$t = \frac{\bar{d}}{s_d / \sqrt{N}}$$

where $\bar{d}$ is the mean of the differences, $s_d$ is the standard deviation of the differences, and $N$ is the number of test samples. The p-value is derived from the calculated $t$ by determining the corresponding tail probability under the Student's t-distribution with $N - 1$ degrees of freedom [36]. A p-value less than 0.05 indicates a statistically significant difference. To evaluate the stability of the evaluation metrics, we calculated the 95% confidence interval (95% CI) using the non-parametric bootstrap method. We created $K = 1000$ resampled datasets by sampling with replacement from the original test set. The RMSE metric was calculated for each resampled dataset to obtain an empirical distribution. The 95% CI is defined by the 2.5th and 97.5th percentiles of this distribution:

$$95\%\,\mathrm{CI} = [\theta^{*}_{2.5},\, \theta^{*}_{97.5}]$$

where $\theta^{*}$ represents the bootstrap estimates of the RMSE.
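Both verification procedures are short enough to sketch directly. The helper names are illustrative; the t-statistic uses the sample standard deviation with $N-1$ in the denominator, and the bootstrap uses a fixed seed for reproducibility (an assumption, since the paper does not state one).

```python
import math
import random

def paired_t_statistic(err_a, err_b):
    """t = d_bar / (s_d / sqrt(N)) on per-sample absolute-error differences;
    compare against Student's t with N-1 degrees of freedom for the p-value."""
    d = [a - b for a, b in zip(err_a, err_b)]
    n = len(d)
    d_bar = sum(d) / n
    s_d = math.sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))
    return d_bar / (s_d / math.sqrt(n))

def bootstrap_rmse_ci(preds, obs, k=1000, seed=0):
    """95% CI for RMSE from k resamples with replacement
    (2.5th and 97.5th percentiles of the empirical distribution)."""
    rng = random.Random(seed)
    n = len(preds)
    stats = []
    for _ in range(k):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(math.sqrt(sum((preds[i] - obs[i]) ** 2 for i in idx) / n))
    stats.sort()
    return stats[int(0.025 * k)], stats[min(int(0.975 * k), k - 1)]
```

A large negative t-statistic (model A's errors consistently below model B's) with p < 0.05 is the pattern reported for FusionQPE against each benchmark.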
To further assess the model's performance across different precipitation intensities, six categorical verification metrics are employed at rainfall thresholds of [5.0, 10.0, 20.0, 30.0] mm/h. These metrics include Accuracy (ACC), probability of detection (POD), false alarm ratio (FAR), critical success index (CSI), Heidke skill score (HSS), and equitable threat score (ETS), defined as follows:
$$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$\mathrm{POD} = \frac{TP}{TP + FN}$$

$$\mathrm{FAR} = \frac{FP}{TP + FP}$$

$$\mathrm{CSI} = \frac{TP}{TP + FN + FP}$$

$$\mathrm{HSS} = \frac{2 \times (TP \times TN - FN \times FP)}{denominator}$$

$$\mathrm{ETS} = \frac{TP - TP_{random}}{TP + FN + FP - TP_{random}}$$

where $denominator = (TP + FN) \times (FN + TN) + (TP + FP) \times (FP + TN)$; true positive (TP) and true negative (TN) refer to the numbers of positive and negative samples that the model predicts correctly; false positive (FP) and false negative (FN) refer to the numbers of positive and negative samples for which the model makes incorrect predictions, respectively; and $TP_{random} = \frac{(TP + FN) \times (TP + FP)}{N}$ represents the number of hits expected by random chance, where $N = TP + FN + FP + TN$ is the total number of samples. Higher values of ACC, POD, CSI, and HSS indicate better model performance, whereas a lower FAR value implies fewer false alarms. The ETS value ranges from $-1/3$ to 1, where 1 represents a perfect prediction and 0 indicates no skill. These metrics provide a comprehensive evaluation of both the numerical accuracy and categorical detection capability of the proposed framework.
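The six contingency-table scores can be computed from a threshold exceedance test as sketched below; the function name is illustrative, and the zero-denominator guards are a defensive assumption for degenerate tables.

```python
def categorical_scores(preds, obs, thr):
    """ACC, POD, FAR, CSI, HSS, ETS at a rainfall threshold thr (mm/h)."""
    tp = sum(1 for r, y in zip(preds, obs) if r >= thr and y >= thr)
    tn = sum(1 for r, y in zip(preds, obs) if r < thr and y < thr)
    fp = sum(1 for r, y in zip(preds, obs) if r >= thr and y < thr)
    fn = sum(1 for r, y in zip(preds, obs) if r < thr and y >= thr)
    n = tp + tn + fp + fn
    denom = (tp + fn) * (fn + tn) + (tp + fp) * (fp + tn)
    tp_random = (tp + fn) * (tp + fp) / n  # hits expected by chance
    ets_denom = tp + fn + fp - tp_random
    return {
        "ACC": (tp + tn) / n,
        "POD": tp / (tp + fn) if tp + fn else 0.0,
        "FAR": fp / (tp + fp) if tp + fp else 0.0,
        "CSI": tp / (tp + fn + fp) if tp + fn + fp else 0.0,
        "HSS": 2 * (tp * tn - fn * fp) / denom if denom else 0.0,
        "ETS": (tp - tp_random) / ets_denom if ets_denom else 0.0,
    }
```

For a perfect forecast with both classes present, every score except FAR equals 1 and FAR equals 0, which matches the stated optima of these metrics.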
While object-oriented metrics (e.g., FSS, SAL, MODE) are valuable for full-frame radar analysis, they are not applicable to our dataset, which consists of spatially discontinuous radar patches centered on gauge sites. Therefore, to rigorously evaluate the model’s performance in detecting precipitation events at different intensities, we employ ETS. ETS measures the fraction of observed and/or forecast events that were correctly predicted, adjusted for hits associated with random chance.

4.4. Experimental Results

The quantitative results of FusionQPE and the five benchmark methods are presented in Table 1 and Table 2, where the best performances are highlighted in bold. Among the evaluation metrics, MAE, RMSE, and NSE measure the deviation between predictions and ground truth, where smaller values indicate better performance. A BIAS value closer to 1 suggests higher consistency between estimated and observed rainfall, while a larger CC value represents stronger correlation and higher estimation reliability. As shown in Table 1, all DL-based models outperform the traditional empirical Z-R relationships, confirming that DL-based frameworks are more effective in capturing the nonlinear mapping between radar reflectivity and rainfall. Among the existing DL models, the hybrid model ZRDL and the recent StarNet achieve superior accuracy compared to RQPENet, yet both remain inferior to the proposed FusionQPE. FusionQPE achieves the best results across all metrics, significantly reducing estimation errors and demonstrating superior robustness compared with all benchmark methods.
To verify that the improvements of FusionQPE are statistically significant, we conducted paired t-tests between the absolute errors of FusionQPE and other benchmark methods. The results show significant differences with p-values < 0.001 for all comparisons, rejecting the null hypothesis that the improvements are due to random chance. Furthermore, we calculated the 95% bootstrap confidence intervals (CI) for RMSE. The CI for FusionQPE is [2.66, 2.72] mm/h, which is distinctly lower than and does not overlap with the best baseline method StarNet ([2.98, 3.04] mm/h), confirming the robustness of the proposed framework.
Table 2 reports the classification metrics of FusionQPE and the five comparison methods at rainfall thresholds of [5.0, 10.0, 20.0, 30.0] mm/h. At all rainfall thresholds, FusionQPE consistently outperforms the comparison models in overall ACC, CSI, and HSS, while maintaining the lowest FAR. Although its POD is marginally lower than that of RQPENet and StarNet, FusionQPE attains markedly higher overall detection reliability. This indicates that FusionQPE not only balances sensitivity and precision but also achieves substantially improved discriminative capability. Especially for moderate to heavy rainfall (≥20 mm/h), FusionQPE improves CSI by 17–20% and HSS by 13–16% over RQPENet and StarNet, while reducing FAR by more than 40%. These performance gains highlight the effectiveness of embedding the adaptive Z-R branch, which guides the network toward physically meaningful rainfall distributions and enhances generalization under varying meteorological conditions.
To further validate the model’s operational reliability, we analyzed the ETS as presented in Table 2. FusionQPE consistently achieves the highest scores across all evaluated thresholds, demonstrating its robustness in varying precipitation regimes. A critical advantage is observed at the 30 mm/h threshold (extreme rainfall). While comparison methods show significant performance degradation—with StarNet dropping to 0.4761 and ZRDL to 0.406 due to the smoothing effect inherent in pure regression networks—FusionQPE maintains a remarkable ETS of 0.518. This indicates that the physics-constrained dual-branch architecture effectively preserves high-intensity signals, ensuring accurate detection of extreme storm cores.
To further show model performance in detail, Figure 4 presents scatter plots comparing estimated and observed rainfall for all methods. The black solid line denotes the ideal 1:1 reference, while the red dashed line represents the regression fit. Among all models, FusionQPE produces the most compact scatter distribution around the reference line, especially for heavy rainfall events (≥30 mm/h), confirming its superior predictive accuracy and reduced bias in extreme precipitation conditions.
To further analyze model performance under different precipitation levels, five representative cases with observed rainfall rates of [0.7, 7.9, 14.1, 24.5, 36.4] mm/h were selected to represent light (≤5 mm/h), moderate (5–10 mm/h), heavy (10–20 mm/h), very heavy (20–30 mm/h), and extreme (≥30 mm/h) rainfall conditions. The input radar echoes, corresponding observed rainfall rates, and estimated precipitation maps from six models are shown in Figure 5. FusionQPE exhibits the closest agreement with the observed rainfall in both spatial distribution and intensity across all five conditions. The empirical Z-R relationships systematically underestimate precipitation, particularly during heavy and extreme rainfall, due to their fixed coefficients and lack of adaptive calibration. Although DL-based methods capture the temporal evolution of radar echoes more effectively, they tend to overestimate light rain and underestimate extreme precipitation. In contrast, FusionQPE reconstructs both the spatial morphology and intensity magnitude of precipitation fields with high fidelity, showing consistent accuracy across all rainfall regimes. These visual results further confirm that the proposed fusion mechanism enables FusionQPE to produce more precise, stable, and physically meaningful radar-based precipitation estimations under diverse weather conditions.

4.5. Ablation Study

To verify the necessity of the adaptive Z-R branch and the constraint term in the loss function, three simplified versions of FusionQPE were compared: (1) Backbone, which removes the Z-R branch entirely; (2) 2ZR, which retains only the first two Z-R blocks; and (3) MSE, which replaces the constrained MSE loss with a standard single-term MSE loss. All variants were trained and tested under the same datasets and experimental settings as FusionQPE. Table 3 reports both the regression metrics (MAE, RMSE, BIAS, CC, NSE) and the classification metrics (ACC, POD, FAR, CSI, HSS at 5.0, 10.0, 20.0, and 30.0 mm/h) for Backbone, 2ZR, MSE, and FusionQPE.
From the regression results in Table 3, removing the Z-R branch leads to a noticeable degradation in performance. Compared with Backbone, FusionQPE reduces MAE and RMSE by 0.1708 mm/h and 0.2655 mm/h, respectively, and increases CC from 0.8662 to 0.8799. These results confirm that the adaptive Z-R branch strengthens the model's capability to capture the nonlinear relationship between radar reflectivity and rainfall rate. The 2ZR variant was introduced because the outputs of the last two Z-R blocks were found to be zero in Section 5; it is constructed to examine whether those blocks are still necessary. As shown in Table 3, 2ZR performs slightly better than Backbone but remains inferior to FusionQPE. This indicates that, although the last two Z-R blocks produce zero outputs, they still contribute implicitly by improving the internal feature representations and the final fusion, and therefore remain essential for achieving optimal estimation accuracy. In other words, the Z-R blocks play a dominant role in enforcing physical consistency rather than merely serving as auxiliary regularizers. The MSE variant performs worse than all other variants. Its higher MAE/RMSE and FAR indicate that the constraint term in the loss function is crucial for stabilizing training and maintaining physically consistent estimates, especially under heavy-rain conditions where unconstrained models tend to overfit imbalanced samples.
The classification results in Table 3 further reinforce these findings. At the 5.0 mm/h threshold, FusionQPE improves ACC (0.8484 vs. 0.8415) and reduces FAR (0.2997 vs. 0.3167), while maintaining a comparable POD. At the 10.0 mm/h level, it achieves both higher detection accuracy (CSI 0.5818 vs. 0.5289) and a much lower FAR (0.2953 vs. 0.3820). At higher thresholds (20.0 and 30.0 mm/h), FusionQPE consistently yields the best CSI and HSS and the lowest FAR values, for example, at 30.0 mm/h, CSI improves from 0.4647 to 0.5205 and HSS from 0.6320 to 0.6827.
In Equation (11), the hyperparameter $\beta$ balances the trade-off between the data-driven backbone and the physics-constrained Z-R branch. A sensitivity analysis was performed with $\beta \in \{0, 0.1, 0.3, 0.5, 1.0\}$. Results indicated that $\beta = 0.3$ yielded the lowest RMSE on the validation set. Lower values ($\beta \le 0.1$) weakened the physical consistency, leading to results similar to the Backbone ablation variant, while higher values ($\beta \ge 0.5$) caused the constraint term to dominate the optimization, hindering the backbone's ability to refine residual errors.
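The $\beta$ trade-off can be illustrated with a hypothetical two-term loss. This is a sketch only: Equation (11) is not reproduced in this section, so the exact form below (a data term on the fused output plus a $\beta$-weighted physics term on the Z-R branch output) is an assumption consistent with the description, not the authors' implementation.

```python
def constrained_mse_loss(fused_out, zr_out, target, beta=0.3):
    """Hypothetical constrained MSE: data term on the fused estimate plus a
    beta-weighted consistency term on the Z-R branch output (illustrative
    stand-in for Equation (11))."""
    n = len(target)
    data_term = sum((f - y) ** 2 for f, y in zip(fused_out, target)) / n
    physics_term = sum((z - y) ** 2 for z, y in zip(zr_out, target)) / n
    return data_term + beta * physics_term
```

With `beta=0` the loss collapses to the plain MSE used by the ablation variant, while large `beta` lets the physics term dominate, mirroring the sensitivity behavior described above.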
Overall, the ablation results validate that the adaptive Z-R branch provides physically meaningful guidance (including from blocks whose direct outputs may be zero in certain cases), and the constraint loss term ensures balanced optimization and generalization. Their combination enables FusionQPE to achieve superior accuracy and robustness in radar-based quantitative precipitation estimation.

5. Discussion

5.1. Explainability Analysis of FusionQPE

From the 40,000 instances in the test dataset, 2000 representative samples were randomly drawn to calculate the contributions of the different components of FusionQPE. More details about the dataset can be found in Section 4.1. As shown in Figure 6, the 2000 randomly selected samples closely match the overall test-set distribution, indicating that this subset is representative for estimating component-wise contribution statistics.
To show the explainability of the proposed framework, the absolute weight magnitudes of all features in the last layer are computed and visualized in Figure 7. The Z-R branch dominates the final rainfall estimation, contributing approximately 86.8%, while the Backbone accounts for only 13.2% of the total contribution (computed as described in Equations (12)–(14)). In other words, the physically informed Z-R branch serves as the primary predictive component, and the Backbone provides residual correction. Figure 7b further shows that the first two Z-R blocks contribute 43.6% and 43.2%, respectively, while the latter two blocks do not directly contribute to the final rainfall estimate. However, the latter two blocks are indispensable for model training: as shown in the ablation study (Table 3), removing them (the 2ZR variant) degrades performance. This indicates that Z-R Blocks 3 and 4 act as deep supervision modules. The constraint loss in Equation (11), derived from their outputs, forces the Backbone to learn physically consistent features during backpropagation. Consequently, while their direct contributions effectively drop to zero at inference (as the optimized Backbone becomes sufficient), their presence during training is essential for guiding the network toward a robust, physics-compliant solution. These results indicate that the Z-R branch functions as the core predictive component, embedding empirical physical relationships directly into the learning process, while the Backbone provides auxiliary data-driven refinement.
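The weight-magnitude attribution used above can be sketched as a simple share computation. This is illustrative only: the function name and the assumption that the final-layer weight vector is ordered with Backbone features first are hypothetical, and the paper's exact procedure is given by Equations (12)–(14).

```python
def branch_contributions(final_weights, n_backbone):
    """Share of total absolute final-layer weight magnitude per branch,
    assuming the first n_backbone weights multiply Backbone features and
    the remainder multiply Z-R branch features (illustrative layout)."""
    mags = [abs(w) for w in final_weights]
    total = sum(mags)
    backbone_share = sum(mags[:n_backbone]) / total
    return backbone_share, 1.0 - backbone_share
```

Applied to the trained final layer, this kind of split yields the reported 13.2% (Backbone) versus 86.8% (Z-R branch) contributions.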
The feature contributions (Equation (15)) in Figure 7c reveal that Backbone features (blue) exhibit a wider spread (approximately −11 to −2), while Z-R branch features (orange) form a narrower and sharper peak near −2. This clear separation demonstrates that the Z-R branch contributes through high-magnitude, physically meaningful activations, while the Backbone provides low-magnitude auxiliary information. Also, Figure 7d shows that all the top 20 contributing features originate from the Z-R branch. This indicates that the Z-R branch functions as the core predictive component, embedding empirical physical relationships directly into the learning process. It is worth noting that the contribution analyzed in this study does not solely refer to the magnitude of the final output, but also reflects the influence of each component on feature learning and physical consistency during training.

5.2. Event-Based Performance and Operational Relevance

To evaluate the model’s operational value beyond global statistics, we conducted a detailed event-based analysis on five representative cases. These selected events cover a diverse range of storm types and intensities, spanning from stable stratiform precipitation (Case 1) to intense convective storms (Case 5). By rigorously analyzing the learned physical parameters and spatial reconstruction quality for these specific lifecycle stages, we assess the model’s adaptability to distinct meteorological regimes and its reliability in real-world operational scenarios.
To demonstrate case-based interpretability, we conducted a granular analysis of these five representative rainfall events, ranging from light to extreme intensity. As detailed in Table 4, we tracked the evolution of the learned physical parameters $(a, b)$ and decomposed the specific contributions of the Backbone and the adaptive Z-R blocks for each case. Since the loss is computed between the model outputs and normalized observations (Equation (16)), the observation column lists normalized rainfall for direct comparison. Each row corresponds to one instance, where the outputs of the individual Z-R blocks include the learned parameters $(a, b)$ of the empirical Z-R relationship. This analysis confirms that the Backbone provides complementary data-driven corrections to capture complex nonlinear patterns, while the Z-R branch provides physically grounded guidance that constrains the estimation within a physically meaningful space and improves robustness and interpretability. Specifically, Z-R Blocks 1 and 2 yield nonzero outputs with increasing $a$ values as rainfall intensity rises (while $b$ remains relatively stable), indicating a sensible learned scaling behavior.
A detailed analysis of the learned parameters in Table 4 reveals the model's physical reasoning strategy, particularly in distinguishing between light (Case 1) and extreme (Case 5) rainfall:
  - Consistent microphysical shape ($b$): the learned exponent $b$ remains remarkably stable around 0.56–0.59 across different intensities. This value is closely aligned with the Marshall–Palmer exponent ($b = 1/1.6 = 0.625$), suggesting that the fundamental drop size distribution (DSD) shape in this subtropical monsoon region maintains stratiform-like characteristics even within larger storm envelopes.
  - Adaptive intensity scaling ($a$): in contrast, the scalar parameter $a$ exhibits significant adaptability, increasing sharply from ∼7.9 in light rain to ∼25.0 (in Block 2) during extreme rainfall. Physically, an increase in $a$ (in $R = a Z^{b}$) enhances the estimated rainfall rate for a given reflectivity. This adaptation is critical for modeling high-efficiency precipitation processes often observed in South China's "Dragon Boat Water" events (warm-sector heavy rainfall), where high liquid water content generates intense rainfall without the extremely high reflectivity values typical of continental convection.
This demonstrates that FusionQPE essentially operates as an adaptive scaler: it anchors the microphysical shape to the regional climatology (via $b$) while dynamically adjusting the precipitation efficiency (via $a$) to robustly estimate extreme rainfall.

5.3. Comparison with Hybrid Model

Although both FusionQPE and ZRDL [19] integrate the empirical Z-R relationship into a deep learning framework, FusionQPE achieves superior performance (RMSE 2.6924 vs. 3.2110; see Table 1) due to its unique multi-scale physical modeling. In ZRDL, the physical parameters ($a$ and $b$) are typically regressed in a single pass from the aggregated features. In contrast, FusionQPE couples an adaptive Z-R block with each level of the dense backbone. This architecture allows the model to learn physical parameters from hierarchical deep features: the early blocks capture local, high-frequency spatial variations (shallow features), while deeper blocks capture broader storm structures (deep features). By fusing these multi-level physical estimates, FusionQPE can dynamically adjust the Z-R relationship for both small-scale convective cells and large-scale stratiform regions simultaneously. This hierarchical integration, combined with the SE-based attention mechanism and the DenseNet backbone, provides a more robust and physically consistent estimation than the single-stage parameter generation used in ZRDL.

5.4. Comparison with Dual-Polarization Architectures

It is important to address the comparison with RQPENet and StarNet, which were originally presented with dual-polarization inputs ($Z$, $Z_{DR}$, $K_{DP}$). In this study, we evaluate all models using only reflectivity ($Z$). This comparison is structurally valid because RQPENet and StarNet process multiple radar variables by concatenating them as generic input channels, without incorporating variable-specific physical constraints into their architectures. Consequently, reducing their input to a single channel ($Z$) effectively tests the feature extraction efficiency of their respective backbones without violating their structural design.
The experimental results demonstrate that FusionQPE outperforms these baselines in the Z-only setting. This highlights a critical advantage of our approach: while standard deep learning models like RQPENet rely on increasing the number of input variables (e.g., adding Z D R and K D P ) to gain information, FusionQPE maximizes the information extraction from the single primary variable (Z) by embedding the physical Z-R law directly into the network structure. This proves that embedding physical knowledge can be as effective as, or more effective than, simply increasing the dimensionality of the input data in generic networks.

5.5. Computational Efficiency and Operational Feasibility

To evaluate the potential for real-time deployment, we compared the computational costs of FusionQPE against baseline models on a single NVIDIA A40 GPU (Table 5). The results demonstrate that FusionQPE achieves a highly competitive inference speed of 29.90 ms, outperforming both the recurrent ZRDL (30.20 ms) and the 3D-based StarNet (32.55 ms). It is particularly noteworthy that FusionQPE processes data faster than StarNet despite having significantly more parameters (53.05 M vs. 3.58 M). This indicates that the proposed 2D architecture effectively circumvents the high computational latency inherent in 3D convolutions, achieving a superior balance between model capacity and inference speed. Although RQPENet exhibits slightly lower latency, FusionQPE maintains a much lower floating-point operation count (0.25 G vs. 1.04 G), suggesting efficient resource utilization. Given the operational requirement where radar data updates every 6 min, the millisecond-level latency of FusionQPE is negligible, confirming its suitability for real-time operational systems.

5.6. Limitations and Future Directions

A limitation of the current study is that the FusionQPE model was trained and validated using a single radar dataset from the Pearl River Delta. While the framework demonstrates superior performance in this subtropical monsoon region, its transferability to different climatic regimes (e.g., arid or high-latitude regions) and different radar frequencies (e.g., X-band or C-band) remains to be explored. Theoretically, the adaptive Z-R branch and the physics-constrained loss function are designed to be flexible; the model can autonomously recalibrate physical parameters to suit different microphysical environments. Future work will focus on cross-regional validation and the integration of multi-source data to further enhance the model’s robustness and global applicability.
Currently, FusionQPE provides deterministic point estimates of rainfall intensity. While we validated the statistical significance of the model’s performance via bootstrap confidence intervals (Section 4.4), the architecture does not yet output pixel-level uncertainty bounds. Future work will extend this framework into a probabilistic deep learning model (e.g., by incorporating Bayesian layers or Quantile Regression) to provide explicit confidence intervals for each prediction, which is particularly valuable for risk assessment in operational hydrology.

6. Conclusions

Quantitative precipitation estimation (QPE) based on radar is a fundamental and critical step for rainfall nowcasting. In this paper, we propose a novel deep fusion model, named FusionQPE, which integrates the Z-R relationship with DL. The proposed framework employs a DenseNet as its backbone and introduces four adaptive Z-R formula blocks. Each Z-R block is designed by modifying the SE network to learn the Z-R parameters directly from deep features. Furthermore, a constraint term based on the MSE of the Z-R branch output is incorporated into the overall loss function to better exploit the physical information encoded by the Z-R branch. Comprehensive experiments conducted on a real radar gauge dataset demonstrate that FusionQPE outperforms both conventional empirical Z-R formulas and state-of-the-art DL-based QPE methods. The ablation study further confirms the necessity of each model component, showing that Z-R blocks are essential for achieving optimal estimation accuracy. In addition, the Z-R branch enhances the interpretability and physical transparency of the model compared with purely data-driven approaches.
In future work, we plan to further develop the theoretical foundation of the adaptive Z-R branch and extend this knowledge-constrained fusion framework to other geophysical and environmental prediction tasks.

Author Contributions

Conceptualization, T.S. and H.Z.; methodology, T.S. and H.Z.; software, T.S.; validation, T.S. and K.C.; formal analysis, T.S. and H.Z.; investigation, T.S. and K.C.; resources, K.C. and Z.Z.; data curation, K.C.; writing—original draft preparation, T.S.; writing—review and editing, T.S., H.Z. and Z.Z.; visualization, T.S. and K.C.; supervision, Z.Z.; project administration, Z.Z.; funding acquisition, T.S. and Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Guangdong Basic and Applied Basic Research Foundation (2024A1515510031 and 2023A1515011438); National Natural Science Foundation of China (No. 62471310); Scientific Foundation for Youth Scholars of Shenzhen University (868-000001033384); Guangdong Meteorological Bureau General Project (GRMC2023M43); China Meteorological Administration Youth Innovation Team (CMA2024QN01).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
QPE: Quantitative Precipitation Estimation
DL: Deep Learning
FusionQPE: Fusion Model for QPE
DenseNet: Dense Convolutional Neural Network
SE: Squeeze-and-Excitation
Z: Radar Reflectivity
R: Rainfall Rate
BN: Batch Normalization
Conv: Convolutional Layer
ReLU: Rectified Linear Unit
AvgPool: Average Pooling Layer
GlobalAvgPool: Global Average Pooling Layer
LSTM: Long Short-Term Memory
FC: Fully Connected Layer
MAE: Mean Absolute Error
RMSE: Root Mean Squared Error
BIAS: Bias
CC: Correlation Coefficient
NSE: Normalized Standard Error
ACC: Accuracy
POD: Probability of Detection
FAR: False Alarm Ratio
CSI: Critical Success Index
HSS: Heidke Skill Score
ETS: Equitable Threat Score
TP: True Positive
TN: True Negative
FP: False Positive
FN: False Negative

References

  1. Schmid, F.; Wang, Y.; Harou, A. Nowcasting guidelines—A summary. Bulletin 2019, 68, 2.
  2. Schmid, W.; Mecklenburg, S.; Joss, J. Short-term risk forecasts of heavy rainfall. Water Sci. Technol. 2002, 45, 121–125.
  3. Bauer, P.; Thorpe, A.; Brunet, G. The quiet revolution of numerical weather prediction. Nature 2015, 525, 47–55.
  4. Castorina, G.; Caccamo, M.T.; Colombo, F.; Magazù, S. The role of physical parameterizations on the numerical weather prediction: Impact of different cumulus schemes on weather forecasting on complex orographic areas. Atmosphere 2021, 12, 616.
  5. Bringi, V.N.; Chandrasekar, V. Polarimetric Doppler Weather Radar: Principles and Applications; Cambridge University Press: Singapore, 2001.
  6. Fang, W.; Pang, L.; Sheng, V.S.; Wang, Q. STUNNER: Radar echo extrapolation model based on spatiotemporal fusion neural network. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5103714.
  7. Chen, S.; Shu, T.; Zhao, H.; Zhong, G.; Chen, X. TempEE: Temporal–spatial parallel transformer for radar echo extrapolation beyond autoregression. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5108914.
  8. Novák, P.; Březková, L.; Frolík, P. Quantitative precipitation forecast using radar echo extrapolation. Atmos. Res. 2009, 93, 328–334.
  9. Ryzhkov, A.; Zhang, P.; Bukovčić, P.; Zhang, J.; Cocks, S. Polarimetric radar quantitative precipitation estimation. Remote Sens. 2022, 14, 1695.
  10. Lu, X.; Li, J.; Liu, Y.; Li, Y.; Huo, H. Quantitative precipitation estimation in the Tianshan mountains based on machine learning. Remote Sens. 2023, 15, 3962.
  11. Cui, W.; Si, J.; Zhang, L.; Han, L.; Chen, Y. Enhanced multimodal-fusion network for radar quantitative precipitation estimation incorporating relative humidity data. IEEE Trans. Geosci. Remote Sens. 2025, 63, 4107213.
  12. Goyette, J.S. The Z-Relation in Theory and Practice; University of Rochester: Rochester, NY, USA, 2012.
  13. Cheng, Y.Y.; Chang, C.T.; Chen, B.F.; Kuo, H.C.; Lee, C.S. Extracting 3D radar features to improve quantitative precipitation estimation in complex terrain based on deep learning neural networks. Weather Forecast. 2023, 38, 273–289.
  14. Li, W.; Chen, H.; Han, L. Polarimetric radar quantitative precipitation estimation using deep convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4102911.
  15. Li, W.; Chen, H.; Han, L.; Lee, W.C. StarNet: A deep learning model for enhancing polarimetric radar quantitative precipitation estimation. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4106513.
  16. Jiang, H.; Hu, Z.; Zheng, J.; Wang, L.; Zhu, Y. Study on quantitative precipitation estimation by polarimetric radar using deep learning. Adv. Atmos. Sci. 2024, 41, 1147–1160.
  17. Biondi, A.; Facheris, L.; Argenti, F.; Cuccoli, F. Comparison of different quantitative precipitation estimation methods based on a severe rainfall event in Tuscany, Italy, November 2023. Remote Sens. 2024, 16, 3985.
  18. Zhao, C.; Xu, M.; Wang, Z.; Li, J.; Zheng, J.; Yuan, M.; Tao, Y.; Shi, L. Spatiotemporal distribution and applicability evaluation of remote sensing precipitation in river basins across mainland China. Remote Sens. 2025, 17, 3534.
  19. Ma, J.; Cui, X.; Jiang, N. Modelling the ZR relationship of precipitation nowcasting based on deep learning. Comput. Mater. Contin. 2022, 72, 1939–1949.
  20. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
  21. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
  22. Marshall, J.S.; Palmer, W.M.K. The distribution of raindrops with size. J. Atmos. Sci. 1948, 5, 165–166.
  23. Hunter, S.M. WSR-88D radar rainfall estimation: Capabilities, limitations and potential improvements. Natl. Weather Dig. 1996, 20, 26–38.
  24. Fournier, J.D. Reflectivity-Rainfall Rate Relationships in Operational Meteorology; National Weather Service Technical Memo, National Weather Service: Tallahassee, FL, USA, 1999.
  25. Bournas, A.; Baltas, E. Determination of the ZR relationship through spatial analysis of X-band weather radar and rain gauge data. Hydrology 2022, 9, 137.
  26. Rosenfeld, D.; Ulbrich, C.W. Cloud microphysical properties, processes, and rainfall estimation opportunities. Meteorol. Monogr. 2003, 30, 237–258.
  27. Zhang, J.; Howard, K.; Langston, C.; Kaney, B.; Qi, Y.; Tang, L.; Grams, H.; Wang, Y.; Cocks, S.; Martinaitis, S.; et al. Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimation: Initial operating capabilities. Bull. Am. Meteorol. Soc. 2016, 97, 621–638.
  28. Wang, G.; Liu, L.; Ding, Y. Improvement of radar quantitative precipitation estimation based on real-time adjustments to ZR relationships and inverse distance weighting correction schemes. Adv. Atmos. Sci. 2012, 29, 575–584.
  29. Huang, Y.; Liu, H.; Yao, Y.; Ni, T.; Feng, Y. An integrated method of multiradar quantitative precipitation estimation based on cloud classification and dynamic error analysis. Adv. Meteorol. 2017, 2017, 1475029.
  30. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25.
  31. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015; PMLR: Cambridge, MA, USA, 2015; pp. 448–456.
  32. Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814.
  33. Cai, K.; Hu, Z.; Tan, H.; Canjin, H.; Zhang, W.; Zhang, J.; Zhi, J. Research on quantitative precipitation estimation using polarimetric radar using convolutional neural networks. J. Trop. Meteorol. 2024, 40, 64.
  34. Shi, N.; Liu, J.; Kuang, Y.; Zhou, Y.; Liao, Q.; Liu, Z. Characteristics and influences of precipitation tendency in Foshan under environmental variations. J. Water Resour. Res. 2014, 3, 41–49.
  35. Cressman, G.P. An operational objective analysis system. Mon. Weather Rev. 1959, 87, 367–374.
  36. Student. The probable error of a mean. Biometrika 1908, 6, 1–25.
Figure 1. Detailed architecture of the proposed FusionQPE model: (a) backbone network, (b) adaptive Z-R branch, (c) dense convolutional block, (d) dense convolutional unit, (e) Z-R formula block.
Figure 2. The map of our dataset: the red star represents the position of the Guangzhou radar, the black line marks the boundary of our study area (Foshan), the blue dots denote the positions of the rain gauges, and the colored texture is an example of radar echoes from a weather event.
Figure 3. Distribution of rainfall samples across five intensity categories (Light, Moderate, Heavy, Storm, and Extreme) for the Training, Validation, and Test datasets.
Figure 4. Scatter plots of observed vs. estimated rainfall for six methods: (a) convective relation, (b) stratiform relation, (c) ZRDL, (d) RQPENet, (e) StarNet, and (f) FusionQPE (Ours). The black solid and red dashed lines represent the 1:1 and regression lines, respectively.
Figure 5. Five representative cases showing input radar echoes (a–e), observed rainfall, and estimated precipitation from six models (f–j) under different rainfall conditions (0.7, 7.9, 14.1, 24.5, and 36.4 mm/h). Each case includes ten radar slices over one hour. FusionQPE achieves the closest match to observations in both spatial pattern and intensity.
Figure 6. Distributions of the test dataset and the representative dataset.
Figure 7. Analyzing Contributions of Backbone and Z-R Block Features: (a) Contribution Ratios of Backbone and Z-R Branch Features, (b) Contribution Ratio of Each Z-R Block, (c) Distribution of Absolute Output Value (log10), (d) Features of Top 20 Contributions.
Table 1. Quantitative Results of FusionQPE and Benchmark Methods.
| Method | MAE ↓ (mm/h) | RMSE ↓ (mm/h) | BIAS ∼1 | CC ↑ | NSE ↓ | p-Value ↓ | 95% CI |
|---|---|---|---|---|---|---|---|
| Convective Z-R relation | 2.7900 | 4.8315 | 0.8615 | 0.6462 | 0.6316 | 0.0 | [4.7165, 4.9343] |
| Stratiform Z-R relation | 2.6362 | 4.3597 | 0.7963 | 0.6511 | 0.5968 | 0.0 | [4.2863, 4.4330] |
| ZRDL | 2.1828 | 3.2110 | 1.2354 | 0.8618 | 0.4942 | 5.0625 × 10⁻²²⁰ | [3.1662, 3.2531] |
| RQPENet | 2.2717 | 3.4024 | 1.2746 | 0.8673 | 0.5143 | 0.0 | [3.3627, 3.4418] |
| StarNet | 2.0356 | 3.0094 | 1.1618 | 0.8713 | 0.4609 | 2.6060 × 10⁻⁹⁹ | [2.9771, 3.0423] |
| FusionQPE | **1.8339** | **2.6924** | **1.0935** | **0.8799** | **0.4152** | | [2.6644, 2.7222] |
↑(↓) indicates that the higher (lower) value, the better the model performance, ∼1 denotes that the closer the bias value is to 1, the better the performance. Bold numbers represent the best performance.
Table 2. Classification Metric Results of FusionQPE and Five Comparison Methods.
Threshold: 5.0 mm/h

| Method | ACC ↑ | POD ↑ | FAR ↓ | CSI ↑ | HSS ↑ | ETS ∼1 |
|---|---|---|---|---|---|---|
| Convective Z-R relation | 0.8115 | 0.5504 | 0.3061 | 0.4429 | 0.4913 | 0.3257 |
| Stratiform Z-R relation | 0.8090 | 0.5352 | 0.3066 | 0.4328 | 0.4810 | 0.3166 |
| ZRDL | 0.8306 | 0.7901 | 0.3429 | 0.5595 | 0.5980 | 0.4265 |
| RQPENet | 0.8256 | 0.7974 | 0.3545 | 0.5545 | 0.5900 | 0.4185 |
| StarNet | 0.8340 | **0.8049** | 0.3400 | 0.5690 | 0.6080 | 0.4368 |
| FusionQPE | **0.8484** | 0.7749 | **0.2997** | **0.5819** | **0.6298** | **0.4596** |

Threshold: 10.0 mm/h

| Method | ACC ↑ | POD ↑ | FAR ↓ | CSI ↑ | HSS ↑ | ETS ∼1 |
|---|---|---|---|---|---|---|
| Convective Z-R relation | 0.9070 | 0.5145 | 0.4023 | 0.3822 | 0.5014 | 0.3346 |
| Stratiform Z-R relation | 0.9097 | 0.4412 | 0.3609 | 0.3532 | 0.4740 | 0.3106 |
| ZRDL | 0.9293 | 0.8019 | 0.3513 | 0.5591 | 0.6773 | 0.5121 |
| RQPENet | 0.9152 | 0.8421 | 0.4162 | 0.5262 | 0.6423 | 0.4731 |
| StarNet | 0.9262 | **0.8484** | 0.3750 | 0.5622 | 0.6784 | 0.5133 |
| FusionQPE | **0.9382** | 0.7695 | **0.2953** | **0.5818** | **0.7007** | **0.5393** |

Threshold: 20.0 mm/h

| Method | ACC ↑ | POD ↑ | FAR ↓ | CSI ↑ | HSS ↑ | ETS ∼1 |
|---|---|---|---|---|---|---|
| Convective Z-R relation | 0.9685 | 0.4286 | 0.5912 | 0.2646 | 0.4023 | 0.2518 |
| Stratiform Z-R relation | 0.9746 | 0.2961 | 0.4640 | 0.2357 | 0.3696 | 0.2267 |
| ZRDL | 0.9804 | 0.7455 | 0.3962 | 0.5006 | 0.6572 | 0.4894 |
| RQPENet | 0.9758 | **0.8817** | 0.4749 | 0.4905 | 0.6465 | 0.4776 |
| StarNet | 0.9823 | 0.7881 | 0.3675 | 0.5406 | 0.6928 | 0.5299 |
| FusionQPE | **0.9857** | 0.7342 | **0.2727** | **0.5757** | **0.7234** | **0.5666** |

Threshold: 30.0 mm/h

| Method | ACC ↑ | POD ↑ | FAR ↓ | CSI ↑ | HSS ↑ | ETS ∼1 |
|---|---|---|---|---|---|---|
| Convective Z-R relation | 0.9897 | 0.4408 | 0.7180 | 0.2077 | 0.3390 | 0.2041 |
| Stratiform Z-R relation | 0.9934 | 0.2041 | 0.5798 | 0.1592 | 0.2718 | 0.1573 |
| ZRDL | 0.9935 | 0.7388 | 0.5212 | 0.4095 | 0.5779 | 0.4064 |
| RQPENet | 0.9933 | **0.9061** | 0.5246 | 0.4531 | 0.6205 | 0.4498 |
| StarNet | 0.9954 | 0.6857 | 0.3869 | 0.4786 | 0.6451 | 0.4761 |
| FusionQPE | **0.9962** | 0.6735 | **0.3038** | **0.5205** | **0.6827** | **0.5183** |

↑ (↓) indicates that a higher (lower) value is better; ∼1 denotes that a value closer to 1 is better. Bold numbers represent the best performance.
Table 3. Comparison Results of Different Versions of FusionQPE.
| Version | MAE ↓ | RMSE ↓ | BIAS ∼1 | CC ↑ | NSE ↓ |
|---|---|---|---|---|---|
| Backbone | 2.0047 | 2.9579 | 1.1676 | 0.8662 | 0.4538 |
| 2ZR | 1.9652 | 2.9511 | 1.1597 | 0.8705 | 0.4449 |
| MSE | 2.2059 | 3.2737 | 1.2531 | 0.8649 | 0.4994 |
| FusionQPE | **1.8339** | **2.6924** | **1.0935** | **0.8799** | **0.4152** |

Threshold: 5.0 mm/h

| Version | ACC ↑ | POD ↑ | FAR ↓ | CSI ↑ | HSS ↑ |
|---|---|---|---|---|---|
| Backbone | 0.8415 | 0.7791 | 0.3167 | 0.5724 | 0.6169 |
| 2ZR | 0.8433 | 0.7697 | 0.3095 | 0.5722 | 0.6184 |
| MSE | 0.8266 | **0.8046** | 0.3542 | 0.5582 | 0.5938 |
| FusionQPE | **0.8484** | 0.7749 | **0.2997** | **0.5819** | **0.6298** |

Threshold: 10.0 mm/h

| Version | ACC ↑ | POD ↑ | FAR ↓ | CSI ↑ | HSS ↑ |
|---|---|---|---|---|---|
| Backbone | 0.9218 | 0.7858 | 0.3820 | 0.5289 | 0.6478 |
| 2ZR | 0.9293 | 0.7724 | 0.3441 | 0.5496 | 0.6694 |
| MSE | 0.9171 | **0.8034** | 0.4041 | 0.5200 | 0.6377 |
| FusionQPE | **0.9382** | 0.7695 | **0.2953** | **0.5818** | **0.7007** |

Threshold: 20.0 mm/h

| Version | ACC ↑ | POD ↑ | FAR ↓ | CSI ↑ | HSS ↑ |
|---|---|---|---|---|---|
| Backbone | 0.9835 | 0.7758 | 0.3414 | 0.5533 | 0.7040 |
| 2ZR | 0.9844 | 0.7928 | 0.3258 | 0.5732 | 0.7207 |
| MSE | 0.9795 | **0.8761** | 0.4266 | 0.5304 | 0.6830 |
| FusionQPE | **0.9857** | 0.7342 | **0.2727** | **0.5757** | **0.7234** |

Threshold: 30.0 mm/h

| Version | ACC ↑ | POD ↑ | FAR ↓ | CSI ↑ | HSS ↑ |
|---|---|---|---|---|---|
| Backbone | 0.9951 | 0.6980 | 0.4184 | 0.4647 | 0.6320 |
| 2ZR | 0.9946 | 0.7510 | 0.4604 | 0.4577 | 0.6253 |
| MSE | 0.9940 | **0.8245** | 0.4937 | 0.4570 | 0.6245 |
| FusionQPE | **0.9962** | 0.6735 | **0.3038** | **0.5205** | **0.6827** |
↑(↓) indicates that the higher (lower) value, the better the model performance, ∼1 denotes that the closer the bias value is to 1, the better the performance. Bold numbers represent the best performance.
Table 4. Outputs of Different Components in FusionQPE for Five Example Cases *.
| | Backbone | Z-R Block1 | Z-R Block2 | Z-R Block3 | Z-R Block4 | Estimation | Observation |
|---|---|---|---|---|---|---|---|
| Case 1 | −0.5962 | 0.0283 | 0.0015 | 0 | 0 | −0.5777 | −0.5733 |
| (a, b) 1 | | (7.94, 0.59) | (21.22, 0.43) | (0, 0.07) | (0, −0.3) | | |
| Case 2 | 0.1831 | 0.1017 | 0.1437 | 0 | 0 | 0.4172 | 0.4199 |
| (a, b) | | (9.35, 0.49) | (22.09, 0.46) | (0, 0.03) | (0, −0.24) | | |
| Case 3 | 0.8411 | 0.245 | 0.2051 | 0 | 0 | 1.2799 | 1.2751 |
| (a, b) | | (9.07, 0.56) | (22.17, 0.49) | (0, 0.05) | (0, −0.22) | | |
| Case 4 | 1.4887 | 0.6597 | 0.5887 | 0 | 0 | 2.7258 | 2.7096 |
| (a, b) | | (9.10, 0.62) | (22.55, 0.5) | (0, 0.03) | (0, −0.18) | | |
| Case 5 | 3.7583 | 0.4193 | 0.1843 | 0 | 0 | 4.3506 | 4.351 |
| (a, b) | | (11.11, 0.56) | (25.03, 0.48) | (0, 0.05) | (0, −0.21) | | |
* All output values of various components are raw numbers, and observation is normalized via Equation (16). 1 The learned values for parameters (a, b) in the Z-R formula (refers to Equation (9)).
Table 5. Comparison of computational efficiency (Parameters, FLOPs) and inference latency. Latency is measured on a single NVIDIA A40 GPU with an input size of 10 × 9 × 9 .
| Method | Params (M) | FLOPs (G) | Latency (ms) |
|---|---|---|---|
| ZRDL | 1.97 | 0.16 | 30.20 |
| RQPENet | 51.61 | 1.04 | 24.62 |
| StarNet | 3.58 | 0.20 | 32.55 |
| FusionQPE | 53.05 | 0.25 | 29.90 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Shu, T.; Zhao, H.; Cai, K.; Zhu, Z. Physics-Constrained Deep Learning with Adaptive Z-R Relationship for Accurate and Interpretable Quantitative Precipitation Estimation. Remote Sens. 2026, 18, 156. https://doi.org/10.3390/rs18010156

