MRKAN: A Multi-Scale Network for Dual-Polarization Radar Multi-Parameter Extrapolation
Highlights
- MRKAN substantially improves the joint extrapolation accuracy of Zh, Zdr, and Kdp. It outperforms conventional deep learning models across all major evaluation metrics and shows increased robustness in convective environments.
- The proposed CSMamba, GIMRBF, MOKAN, and MSFF effectively capture and fuse global, mesoscale, and local nonlinear features. Ablation experiments further confirm their complementary contributions to the overall model performance.
- The improved prediction of dual-polarization radar parameters shows substantial potential for enhancing short-term convective nowcasting. It also offers more reliable guidance for severe weather monitoring and early-warning operations.
- The proposed modules provide a methodological reference for future radar-based AI models. They also have broad applicability to other remote-sensing, precipitation estimation, and geophysical prediction tasks.
Abstract
1. Introduction
2. Related Work
2.1. Mamba
2.2. RBF
2.3. KAN
3. Materials and Methods
3.1. Overall Network Architecture
3.2. CSMamba
3.3. MOKAN
3.4. GIMRBF
3.5. MSFF
4. Results
4.1. Dual-Polarization Radar Data
4.2. Evaluation Metrics
4.3. Experimental Setup
4.4. Experimental Results and Analysis
4.5. Ablation Study
4.6. Complexity Experiment
4.7. Generalization Experiments
4.8. Additional Experiments
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References

| Variables | Threshold | Method | CSI ↑ | POD ↑ | ETS ↑ | FAR ↓ |
|---|---|---|---|---|---|---|
| Zh | = 30 dBZ | ConvLSTM | 0.4724 | 0.5361 | 0.4546 | 0.2163 |
| | | TrajGRU | 0.5398 | 0.6330 | 0.5216 | 0.2214 |
| | | PredRNN++ | 0.5359 | 0.6212 | 0.5181 | 0.2182 |
| | | MotionRNN | 0.5222 | 0.5937 | 0.5051 | 0.2031 |
| | | MIM | 0.5573 | 0.6372 | 0.5402 | 0.1960 |
| | | SmaAt | 0.5229 | 0.5932 | 0.5056 | 0.1947 |
| | | FureNet | 0.5446 | 0.6178 | 0.5275 | 0.1822 |
| | | Earthformer | 0.5709 | 0.6479 | 0.5544 | 0.1931 |
| | | RobustGAN | 0.5887 | 0.6817 | 0.5716 | 0.1943 |
| | | Ours | 0.6320 | 0.7163 | 0.6164 | 0.1652 |
| Zh | = 35 dBZ | ConvLSTM | 0.3732 | 0.4325 | 0.3631 | 0.2738 |
| | | TrajGRU | 0.4469 | 0.5434 | 0.4358 | 0.2846 |
| | | PredRNN++ | 0.4486 | 0.5359 | 0.4380 | 0.2789 |
| | | MotionRNN | 0.4313 | 0.5043 | 0.4211 | 0.2659 |
| | | MIM | 0.4711 | 0.5553 | 0.4608 | 0.2521 |
| | | SmaAt | 0.4390 | 0.5165 | 0.4286 | 0.2563 |
| | | FureNet | 0.4542 | 0.5355 | 0.4439 | 0.2445 |
| | | Earthformer | 0.4824 | 0.5585 | 0.4725 | 0.2227 |
| | | RobustGAN | 0.5056 | 0.6144 | 0.4947 | 0.2584 |
| | | Ours | 0.5533 | 0.6429 | 0.5436 | 0.2112 |
| Zh | = 40 dBZ | ConvLSTM | 0.2324 | 0.2677 | 0.2277 | 0.3666 |
| | | TrajGRU | 0.3105 | 0.3862 | 0.3044 | 0.3660 |
| | | PredRNN++ | 0.3181 | 0.3804 | 0.3125 | 0.3523 |
| | | MotionRNN | 0.2887 | 0.3366 | 0.2837 | 0.3470 |
| | | MIM | 0.3344 | 0.3940 | 0.3290 | 0.3295 |
| | | SmaAt | 0.3055 | 0.3658 | 0.3002 | 0.3402 |
| | | FureNet | 0.3207 | 0.3957 | 0.3157 | 0.3432 |
| | | Earthformer | 0.3321 | 0.3839 | 0.3267 | 0.2919 |
| | | RobustGAN | 0.3739 | 0.4696 | 0.3676 | 0.3379 |
| | | Ours | 0.4258 | 0.4996 | 0.4197 | 0.2719 |
| Zdr | = 0.5 dB | ConvLSTM | 0.3520 | 0.4478 | 0.3322 | 0.3765 |
| | | TrajGRU | 0.4130 | 0.5567 | 0.3908 | 0.3884 |
| | | PredRNN++ | 0.3983 | 0.5161 | 0.3778 | 0.3683 |
| | | MotionRNN | 0.3669 | 0.4518 | 0.3481 | 0.3358 |
| | | MIM | 0.4022 | 0.5001 | 0.3827 | 0.3306 |
| | | SmaAt | 0.3959 | 0.5060 | 0.3755 | 0.3552 |
| | | FureNet | 0.4266 | 0.5509 | 0.4051 | 0.3740 |
| | | Earthformer | 0.4238 | 0.5449 | 0.4036 | 0.3475 |
| | | RobustGAN | 0.4395 | 0.5637 | 0.4189 | 0.3351 |
| | | Ours | 0.4665 | 0.5857 | 0.4466 | 0.3012 |
| Kdp | /km | ConvLSTM | 0.3184 | 0.4944 | 0.3040 | 0.5445 |
| | | TrajGRU | 0.3594 | 0.5432 | 0.3449 | 0.4993 |
| | | PredRNN++ | 0.3682 | 0.5462 | 0.3539 | 0.4889 |
| | | MotionRNN | 0.3181 | 0.4495 | 0.3115 | 0.4915 |
| | | MIM | 0.3501 | 0.4635 | 0.3433 | 0.5094 |
| | | SmaAt | 0.3585 | 0.5474 | 0.3440 | 0.5033 |
| | | FureNet | 0.3158 | 0.5255 | 0.2991 | 0.5913 |
| | | Earthformer | 0.3878 | 0.5263 | 0.3758 | 0.4229 |
| | | RobustGAN | 0.4012 | 0.5377 | 0.3857 | 0.4878 |
| | | Ours | 0.4403 | 0.5709 | 0.4285 | 0.3619 |
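The categorical scores above follow the standard 2×2 contingency-table definitions: a field is binarized at the threshold, and hits, misses, and false alarms are counted against the observation. A minimal NumPy sketch (the `categorical_scores` helper and the toy arrays are illustrative, not taken from the paper):

```python
import numpy as np

def categorical_scores(pred, obs, threshold):
    """CSI, POD, ETS, and FAR for one exceedance threshold.

    pred, obs : arrays of the same shape (e.g. predicted and observed
    Zh fields); threshold : event threshold (e.g. 35 dBZ).
    """
    p = np.asarray(pred) >= threshold
    o = np.asarray(obs) >= threshold
    hits = np.sum(p & o)           # forecast and observed
    false_alarms = np.sum(p & ~o)  # forecast but not observed
    misses = np.sum(~p & o)        # observed but not forecast
    n = p.size
    # Hits expected by chance, used by the equitable threat score.
    hits_random = (hits + false_alarms) * (hits + misses) / n
    return {
        "CSI": hits / (hits + false_alarms + misses),
        "POD": hits / (hits + misses),
        "FAR": false_alarms / (hits + false_alarms),
        "ETS": (hits - hits_random)
               / (hits + false_alarms + misses - hits_random),
    }

# Toy example: one hit, one miss, one false alarm, one correct negative.
obs = np.array([40.0, 40.0, 20.0, 20.0])
pred = np.array([40.0, 20.0, 40.0, 20.0])
scores = categorical_scores(pred, obs, 35)  # CSI 1/3, POD 0.5, FAR 0.5, ETS 0.0
```

Higher CSI, POD, and ETS and lower FAR are better, which is what the ↑/↓ arrows in the table header denote.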
| Variables | Method | PCC ↑ | MAE ↓ | RMSE ↓ |
|---|---|---|---|---|
| Zh | ConvLSTM | 0.8360 | 6.7123 | 17.1937 |
| | TrajGRU | 0.8712 | 5.9104 | 15.6132 |
| | PredRNN++ | 0.8613 | 5.8419 | 15.8129 |
| | MotionRNN | 0.8612 | 5.9459 | 15.8042 |
| | MIM | 0.8768 | 5.4452 | 14.9840 |
| | SmaAt | 0.8675 | 5.5597 | 15.4997 |
| | FureNet | 0.8878 | 5.4955 | 14.4172 |
| | Earthformer | 0.8917 | 4.5843 | 13.6698 |
| | RobustGAN | 0.8970 | 4.8984 | 13.9789 |
| | Ours | 0.9134 | 4.0313 | 12.5607 |
| Zdr | ConvLSTM | 0.5129 | 3.4351 | 11.4436 |
| | TrajGRU | 0.5554 | 3.6932 | 11.1592 |
| | PredRNN++ | 0.5630 | 3.2703 | 10.9063 |
| | MotionRNN | 0.5527 | 3.4947 | 10.9854 |
| | MIM | 0.5899 | 3.0445 | 10.5077 |
| | SmaAt | 0.5574 | 3.3534 | 11.0670 |
| | FureNet | 0.5699 | 3.7968 | 10.9697 |
| | Earthformer | 0.5716 | 3.2305 | 10.9296 |
| | RobustGAN | 0.5950 | 3.2492 | 10.7793 |
| | Ours | 0.6299 | 3.0009 | 10.3807 |
| Kdp | ConvLSTM | 0.4995 | 1.4110 | 5.8748 |
| | TrajGRU | 0.5699 | 1.3841 | 5.5710 |
| | PredRNN++ | 0.5787 | 1.4068 | 5.4559 |
| | MotionRNN | 0.5581 | 1.1650 | 4.7771 |
| | MIM | 0.6010 | 1.1547 | 4.6997 |
| | SmaAt | 0.5677 | 1.4419 | 5.5268 |
| | FureNet | 0.5142 | 1.8210 | 5.8282 |
| | Earthformer | 0.5996 | 1.4896 | 5.8268 |
| | RobustGAN | 0.6396 | 1.6825 | 6.0355 |
| | Ours | 0.6890 | 1.1485 | 4.4563 |
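The continuous metrics above (PCC, MAE, RMSE) compare predicted and observed fields value-by-value rather than through a threshold. A minimal NumPy sketch with their standard definitions (the `continuous_scores` helper is illustrative, not from the paper):

```python
import numpy as np

def continuous_scores(pred, obs):
    """Pearson correlation, mean absolute error, and root-mean-square error."""
    pred = np.asarray(pred, dtype=float).ravel()
    obs = np.asarray(obs, dtype=float).ravel()
    err = pred - obs
    return {
        "MAE": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "PCC": np.corrcoef(pred, obs)[0, 1],  # Pearson correlation coefficient
    }

# Toy example: prediction differs from observation in one pixel.
scores = continuous_scores([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 5.0])
```

PCC is unitless in [-1, 1], while MAE and RMSE carry the units of the variable (dBZ for Zh, dB for Zdr, °/km for Kdp), so errors are only comparable within a row group.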
| Variables | Method | A | B | C | D | CSI | POD | ETS | FAR |
|---|---|---|---|---|---|---|---|---|---|
| Zh | Baseline | × | × | × | × | 0.4435 | 0.5034 | 0.4336 | 0.2809 |
| | Ours (w/o B&C&D) | √ | × | × | × | 0.4688 | 0.5309 | 0.4591 | 0.2488 |
| | Ours (w/o A&C&D) | × | √ | × | × | 0.4565 | 0.5234 | 0.4356 | 0.2737 |
| | Ours (w/o A&B&D) | × | × | √ | × | 0.4612 | 0.5829 | 0.4496 | 0.3204 |
| | Ours (w/o C&D) | √ | √ | × | × | 0.4824 | 0.5580 | 0.4723 | 0.2476 |
| | Ours (w/o B&D) | √ | × | √ | × | 0.4949 | 0.5817 | 0.4849 | 0.2373 |
| | Ours (w/o A&D) | × | √ | √ | × | 0.4779 | 0.5382 | 0.4683 | 0.2246 |
| | Ours (w/o D) | √ | √ | √ | × | 0.5184 | 0.5990 | 0.5086 | 0.2202 |
| | Ours | √ | √ | √ | √ | 0.5533 | 0.6429 | 0.5436 | 0.2112 |
| Zdr | Baseline | × | × | × | × | 0.3931 | 0.4803 | 0.3737 | 0.3067 |
| | Ours (w/o B&C&D) | √ | × | × | × | 0.4032 | 0.5063 | 0.3923 | 0.3407 |
| | Ours (w/o A&C&D) | × | √ | × | × | 0.4018 | 0.4907 | 0.3897 | 0.3517 |
| | Ours (w/o A&B&D) | × | × | √ | × | 0.4144 | 0.5081 | 0.3979 | 0.3795 |
| | Ours (w/o C&D) | √ | √ | × | × | 0.4139 | 0.5134 | 0.4045 | 0.3361 |
| | Ours (w/o B&D) | √ | × | √ | × | 0.4171 | 0.5109 | 0.3977 | 0.3067 |
| | Ours (w/o A&D) | × | √ | √ | × | 0.4131 | 0.5126 | 0.4036 | 0.3370 |
| | Ours (w/o D) | √ | √ | √ | × | 0.4419 | 0.5505 | 0.4222 | 0.3133 |
| | Ours | √ | √ | √ | √ | 0.4665 | 0.5857 | 0.4466 | 0.3012 |
| Kdp | Baseline | × | × | × | × | 0.3750 | 0.5102 | 0.3624 | 0.4226 |
| | Ours (w/o B&C&D) | √ | × | × | × | 0.3846 | 0.5293 | 0.3799 | 0.4725 |
| | Ours (w/o A&C&D) | × | √ | × | × | 0.3759 | 0.5235 | 0.3654 | 0.4752 |
| | Ours (w/o A&B&D) | × | × | √ | × | 0.3857 | 0.5271 | 0.3781 | 0.4848 |
| | Ours (w/o C&D) | √ | √ | × | × | 0.4025 | 0.5379 | 0.3925 | 0.4511 |
| | Ours (w/o B&D) | √ | × | √ | × | 0.4035 | 0.5470 | 0.3905 | 0.4218 |
| | Ours (w/o A&D) | × | √ | √ | × | 0.4022 | 0.5203 | 0.3903 | 0.3757 |
| | Ours (w/o D) | √ | √ | √ | × | 0.4188 | 0.5518 | 0.4067 | 0.3847 |
| | Ours | √ | √ | √ | √ | 0.4403 | 0.5709 | 0.4285 | 0.3619 |
| Method | Parameters (M) | Floating Point Operations (GFLOPs) | Average Inference Latency (ms) |
|---|---|---|---|
| ConvLSTM | 10.29 | 163.58 | 38.09 |
| TrajGRU | 68.23 | 321.68 | 711.94 |
| PredRNN++ | 9.39 | 769.07 | 411.36 |
| MotionRNN | 9.30 | 751.76 | 1221.08 |
| MIM | 15.58 | 1275.07 | 1675.22 |
| SmaAt | 4.04 | 10.10 | 12.91 |
| FureNet | 58.08 | 34.10 | 13.01 |
| Earthformer | 5.85 | 279.32 | 330.64 |
| RobustGAN | 8.03 | 54.65 | 81.07 |
| Ours | 148.40 | 46.14 | 115.77 |
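Parameter and GFLOP figures like those in the complexity table above can be sanity-checked with a back-of-envelope count per layer. The sketch below uses the common convention of 2 FLOPs per multiply-accumulate for a single `k×k` convolution with "same" padding; the layer sizes are assumptions for illustration, not MRKAN's actual configuration:

```python
def conv2d_cost(in_ch, out_ch, k, h, w):
    """Parameters and FLOPs for one k x k Conv2d on an h x w feature map.

    FLOPs are counted as two per multiply-accumulate (one multiply, one
    add), the convention behind most GFLOPs columns in complexity tables.
    """
    params = (k * k * in_ch + 1) * out_ch       # weights + one bias per filter
    flops = 2 * k * k * in_ch * out_ch * h * w  # MACs per pixel x 2 x pixels
    return params, flops

# Example: a 3x3 conv, 64 -> 64 channels, on a 256x256 field.
params, flops = conv2d_cost(64, 64, 3, 256, 256)  # 36928 params, ~4.83 GFLOPs
```

Summing such per-layer counts over a network reproduces the order of magnitude of the table's Parameters (M) and GFLOPs columns; inference latency, by contrast, depends on hardware and must be measured, which is why it does not track FLOPs across the compared models.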
| Variables | Threshold | Method | CSI ↑ | POD ↑ | ETS ↑ | FAR ↓ |
|---|---|---|---|---|---|---|
| Zh | = 35 dBZ | ConvLSTM | 0.3303 | 0.3997 | 0.3251 | 0.3271 |
| | | TrajGRU | 0.4147 | 0.5245 | 0.4089 | 0.3267 |
| | | PredRNN++ | 0.3969 | 0.4784 | 0.3916 | 0.2977 |
| | | MotionRNN | 0.4024 | 0.4746 | 0.3972 | 0.2848 |
| | | MIM | 0.4125 | 0.5168 | 0.4217 | 0.2833 |
| | | SmaAt | 0.4051 | 0.4953 | 0.3994 | 0.3334 |
| | | FureNet | 0.4117 | 0.5065 | 0.4054 | 0.3805 |
| | | Earthformer | 0.4531 | 0.5279 | 0.4328 | 0.2813 |
| | | RobustGAN | 0.4692 | 0.5683 | 0.4534 | 0.3132 |
| | | Ours | 0.5285 | 0.6205 | 0.5227 | 0.2727 |
| Zdr | = 0.5 dB | ConvLSTM | 0.2639 | 0.3249 | 0.2367 | 0.3995 |
| | | TrajGRU | 0.3059 | 0.3779 | 0.2767 | 0.3934 |
| | | PredRNN++ | 0.3327 | 0.4290 | 0.2995 | 0.4066 |
| | | MotionRNN | 0.3247 | 0.4036 | 0.2939 | 0.3785 |
| | | MIM | 0.3216 | 0.4053 | 0.2893 | 0.3792 |
| | | SmaAt | 0.3256 | 0.4067 | 0.2946 | 0.3753 |
| | | FureNet | 0.3238 | 0.4289 | 0.2903 | 0.4237 |
| | | Earthformer | 0.3187 | 0.4362 | 0.3049 | 0.3723 |
| | | RobustGAN | 0.3365 | 0.4516 | 0.3176 | 0.3698 |
| | | Ours | 0.3850 | 0.4934 | 0.3511 | 0.3649 |
| Kdp | /km | ConvLSTM | 0.2472 | 0.3542 | 0.2315 | 0.5611 |
| | | TrajGRU | 0.2591 | 0.3487 | 0.2432 | 0.5153 |
| | | PredRNN++ | 0.3055 | 0.4573 | 0.2864 | 0.5213 |
| | | MotionRNN | 0.2792 | 0.3747 | 0.2631 | 0.5015 |
| | | MIM | 0.2703 | 0.3681 | 0.2612 | 0.5096 |
| | | SmaAt | 0.2691 | 0.3755 | 0.2529 | 0.5130 |
| | | FureNet | 0.2910 | 0.4437 | 0.2636 | 0.6068 |
| | | Earthformer | 0.3136 | 0.4539 | 0.3171 | 0.4742 |
| | | RobustGAN | 0.3208 | 0.4622 | 0.3265 | 0.4971 |
| | | Ours | 0.3626 | 0.4888 | 0.3426 | 0.4163 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Wang, J.; Zhang, Y.; Zhu, L.; Liu, Q.; Lin, H.; Peng, H.; Wu, L. MRKAN: A Multi-Scale Network for Dual-Polarization Radar Multi-Parameter Extrapolation. Remote Sens. 2026, 18, 372. https://doi.org/10.3390/rs18020372

