Neural Network Based on Dynamic Collaboration of Flows for Temporal Downscaling
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Area and Data Processing
2.2. Temporal Downscaling for Multi-Variable Meteorological Data
2.3. Dynamic Collaboration of Flows
2.4. Multi-Scale Warping Module
2.5. Evaluation Methods
3. Results
3.1. Ablation Study
3.2. Performance Analysis
3.3. Robustness Analysis
3.4. Visual Analysis
3.5. Scalability Exploration
3.6. Transferability Analysis
4. Discussion
4.1. Interpretation of Results
4.2. Comparison with Existing Methods
4.3. Limitations and Future Work
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
VFI | Video frame interpolation
T2M | Two-meter temperature
SP | Surface pressure
SH1000 | Specific humidity at 1000 hPa
TP | Total precipitation
TDNN | Temporal downscaling neural network
DCOF | Dynamic collaboration of flows
PS | Path selection
MSWM | Multi-scale warping module
ECMWF | European Centre for Medium-Range Weather Forecasts
GAN | Generative adversarial network
ConvLSTM | Convolutional long short-term memory network
DSepConv | Deformable separable convolution
MAE | Mean absolute error
RMSE | Root mean square error
ACC | Anomaly correlation coefficient
PSNR | Peak signal-to-noise ratio
SSIM | Structural similarity index
PRECIS | Providing regional climates for impacts studies
WRF | Weather research and forecasting
AMCN | Attention mechanism-based multilevel crop network
RRIN | Residue refinement interpolation network
CAIN | Crop attention-based inception network
SepConv | Adaptive separable convolution
AdaCof | Adaptive collaboration of flows
EDSC | Enhanced deformable separable convolution
DVF | Deep voxel flow
FLAVR | Flow-agnostic video representation
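Several of the abbreviations above name the evaluation metrics reported in the results tables (MAE, RMSE, ACC, PSNR, SSIM). As an illustrative sketch only, not the authors' implementation, the following NumPy functions show one common way these metrics are computed for a pair of 2-D fields. The `clim` (climatology) argument for ACC, the `data_range` argument for PSNR/SSIM, and the single-window SSIM variant are assumptions; library SSIM implementations use a sliding Gaussian window.

```python
import numpy as np

def mae(pred, obs):
    """Mean absolute error."""
    return float(np.mean(np.abs(pred - obs)))

def rmse(pred, obs):
    """Root mean square error."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def acc(pred, obs, clim):
    """Anomaly correlation coefficient relative to a climatology field.

    Definitions vary slightly across the literature; this is the
    uncentered form on anomalies (pred - clim) and (obs - clim).
    """
    pa, oa = pred - clim, obs - clim
    return float(np.sum(pa * oa) / np.sqrt(np.sum(pa ** 2) * np.sum(oa ** 2)))

def psnr(pred, obs, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((pred - obs) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(pred, obs, data_range):
    """Single-window (global-statistics) SSIM; a simplification of the
    windowed SSIM index, using the standard constants C1 and C2."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_p, mu_o = pred.mean(), obs.mean()
    var_p, var_o = pred.var(), obs.var()
    cov = np.mean((pred - mu_p) * (obs - mu_o))
    return float(((2 * mu_p * mu_o + c1) * (2 * cov + c2))
                 / ((mu_p ** 2 + mu_o ** 2 + c1) * (var_p + var_o + c2)))
```

With a constant bias of 0.5 between `pred` and `obs`, MAE and RMSE both equal 0.5, which is a quick sanity check on the sign and scaling conventions.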
References
- Li, J.; Zhao, J.; Lu, Z. A typical meteorological day database of solar terms for simplified simulation of outdoor thermal environment on a long-term. Urban Clim. 2024, 57, 102117.
- Ding, X.; Zhao, Y.; Fan, Y.; Li, Y.; Ge, J. Machine learning-assisted mapping of city-scale air temperature: Using sparse meteorological data for urban climate modeling and adaptation. Build. Environ. 2023, 234, 110211.
- Mohanty, A.; Sahoo, B.; Kale, R.V. A hybrid model enhancing streamflow forecasts in paddy land use-dominated catchments with numerical weather prediction model-based meteorological forcings. J. Hydrol. 2024, 635, 131225.
- Long, J.; Xu, C.; Wang, Y.; Zhang, J. From meteorological to agricultural drought: Propagation time and influencing factors over diverse underlying surfaces based on CNN-LSTM model. Ecol. Inform. 2024, 82, 102681.
- Jeong, D.I.; St-Hilaire, A.; Ouarda, T.B.M.J.; Gachon, P. A multi-site statistical downscaling model for daily precipitation using global scale GCM precipitation outputs. Int. J. Climatol. 2013, 33, 2431–2447.
- Ma, M.; Tang, J.; Ou, T.; Zhou, P. High-resolution climate projection over the Tibetan Plateau using WRF forced by bias-corrected CESM. Atmos. Res. 2023, 286, 106670.
- Zhao, G.; Li, D.; Camus, P.; Zhang, X.; Qi, J.; Yin, B. Weather-type statistical downscaling for ocean wave climate in the Chinese marginal seas. Ocean Model. 2024, 187, 102297.
- Richardson, C.W. Stochastic simulation of daily precipitation, temperature, and solar radiation. Water Resour. Res. 1981, 17, 182–190.
- Richardson, C.W.; Wright, D.A. WGEN: A Model for Generating Daily Weather Variables; United States Department of Agriculture: Washington, DC, USA, 1984.
- Semenov, M. Simulation of extreme weather events by a stochastic weather generator. Clim. Res. 2008, 35, 203–212.
- Li, S.; Xu, C.; Su, M.; Lu, W.; Chen, Q.; Huang, Q.; Teng, Y. Downscaling of environmental indicators: A review. Sci. Total Environ. 2024, 916, 170251.
- Vandal, T.; Kodra, E.; Ganguly, S.; Michaelis, A.; Nemani, R.; Ganguly, A.R. DeepSD: Generating high resolution climate change projections through single image super-resolution. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; ACM: New York, NY, USA, 2017.
- Rodrigues, E.R.; Oliveira, I.; Cunha, R.; Netto, M. DeepDownscale: A deep learning strategy for high-resolution weather forecast. In Proceedings of the 2018 IEEE 14th International Conference on e-Science (e-Science), Amsterdam, The Netherlands, 29 October–1 November 2018; pp. 415–422.
- Pan, B.; Hsu, K.; AghaKouchak, A.; Sorooshian, S. Improving precipitation estimation using convolutional neural network. Water Resour. Res. 2019, 55, 2301–2321.
- Yu, M.; Liu, Q. Deep learning-based downscaling of tropospheric nitrogen dioxide using ground-level and satellite observations. Sci. Total Environ. 2021, 773, 145145.
- Qian, X.; Qi, H.; Shang, S.; Wan, H.; Rahman, K.U.; Wang, R. Deep learning-based near-real-time monitoring of autumn irrigation extent at sub-pixel scale in a large irrigation district. Agric. Water Manag. 2023, 284, 108335.
- Sahour, H.; Sultan, M.; Vazifedan, M.; Abdelmohsen, K.; Karki, S.; Yellich, J.A.; Gebremichael, E.; Alshehri, F.; Elbayoumi, T.M. Statistical applications to downscale GRACE-derived terrestrial water storage data and to fill temporal gaps. Remote Sens. 2020, 12, 533.
- Lu, M.; Jin, C.; Yu, M.; Zhang, Q.; Liu, H.; Huang, Z.; Dong, T. MCGLN: A multimodal ConvLSTM-GAN framework for lightning nowcasting utilizing multi-source spatiotemporal data. Atmos. Res. 2024, 297, 107093.
- Uz, M.; Akyilmaz, O.; Shum, C. Deep learning-aided temporal downscaling of GRACE-derived terrestrial water storage anomalies across the Contiguous United States. J. Hydrol. 2024, 645, 132194.
- Cheng, J. Research on Meteorological Forecasting System Based on Deep Learning Super-Resolution. Master’s Thesis, Wuhan University, Wuhan, China, 2021.
- Tian, H.; Wang, P.; Tansey, K.; Wang, J.; Quan, W.; Liu, J. Attention mechanism-based deep learning approach for wheat yield estimation and uncertainty analysis from remotely sensed variables. Agric. For. Meteorol. 2024, 356, 110183.
- Xiao, Y.; Wang, Y.; Yuan, Q.; He, J.; Zhang, L. Generating a long-term (2003–2020) hourly 0.25° global PM2.5 dataset via spatiotemporal downscaling of CAMS with deep learning (DeepCAMS). Sci. Total Environ. 2022, 848, 157747.
- Najibi, N.; Perez, A.J.; Arnold, W.; Schwarz, A.; Maendly, R.; Steinschneider, S. A statewide, weather-regime based stochastic weather generator for process-based bottom-up climate risk assessments in California—Part I: Model evaluation. Clim. Serv. 2024, 34, 100489.
- Busch, U.; Heimann, D. Statistical-dynamical extrapolation of a nested regional climate simulation. Clim. Res. 2001, 19, 1–13.
- Andersen, C.B.; Wright, D.B.; Thorndahl, S. CON-SST-RAIN: Continuous Stochastic Space–Time Rainfall generation based on Markov chains and transposition of weather radar data. J. Hydrol. 2024, 637, 131385.
- Werlberger, M.; Pock, T.; Unger, M.; Bischof, H. Optical flow guided TV-L1 video interpolation and restoration. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2011; pp. 273–286.
- Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; Brox, T. FlowNet: Learning optical flow with convolutional networks. In Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2758–2766.
- Sun, D.; Yang, X.; Liu, M.-Y.; Kautz, J. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8934–8943.
- Han, P.; Zhang, F.; Zhao, B.; Li, X. Motion-aware video frame interpolation. Neural Netw. 2024, 178, 106433.
- Yang, Y.; Oh, B.T. Video frame interpolation using deep cascaded network structure. Signal Process. Image Commun. 2020, 89, 115982.
- Yu, Z.; Li, H.; Wang, Z.; Hu, Z.; Chen, C.W. Multi-level video frame interpolation: Exploiting the interaction among different levels. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 1235–1248.
- Raket, L.L.; Roholm, L.; Bruhn, A.; Weickert, J. Motion compensated frame interpolation with a symmetric optical flow constraint. In International Symposium on Visual Computing; Springer: Berlin/Heidelberg, Germany, 2012; pp. 447–457.
- Kim, H.G.; Seo, S.J.; Song, B.C. Multi-frame de-raining algorithm using a motion-compensated non-local mean filter for rainy video sequences. J. Vis. Commun. Image Represent. 2015, 26, 317–328.
- Li, R.; Ma, W.; Li, Y.; You, L. A low-complex frame rate up-conversion with edge-preserved filtering. Electronics 2020, 9, 156.
- Mahajan, D.; Huang, F.; Matusik, W.; Ramamoorthi, R.; Belhumeur, P. Moving gradients: A path-based method for plausible image interpolation. ACM Trans. Graph. (TOG) 2009, 28, 1–11.
- Bao, W.; Lai, W.-S.; Zhang, X.; Gao, Z.; Yang, M.-H. MEMC-Net: Motion estimation and motion compensation driven neural network for video interpolation and enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 933–948.
- Niklaus, S.; Mai, L.; Liu, F. Video frame interpolation via adaptive convolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 670–679.
- Niklaus, S.; Mai, L.; Liu, F. Video frame interpolation via adaptive separable convolution. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 261–270.
- Cheng, X.; Chen, Z. Video frame interpolation via deformable separable convolution. Proc. AAAI Conf. Artif. Intell. 2020, 34, 10607–10614.
- Lee, H.; Kim, T.; Chung, T.Y.; Pak, D.; Ban, Y.; Lee, S. AdaCof: Adaptive collaboration of flows for video frame interpolation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5315–5324.
- Ding, T.; Liang, L.; Zhu, Z.; Zharkov, I. CDFI: Compression-Driven Network Design for Frame Interpolation. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 7997–8007.
- Su, H.; Li, Y.; Xu, Y.; Fu, X.; Liu, S. A review of deep-learning-based super-resolution: From methods to applications. Pattern Recognit. 2025, 157, 110935.
- Alfarano, A.; Maiano, L.; Papa, L.; Amerini, I. Estimating optical flow: A comprehensive review of the state of the art. Comput. Vis. Image Underst. 2024, 249, 104160.
- Wang, S.; Yang, X.; Feng, Z.; Sun, J.; Liu, J. EMCFN: Edge-based multi-scale cross fusion network for video frame interpolation. J. Vis. Commun. Image Represent. 2024, 103, 104226.
- Yue, Z.; Shi, M. Enhancing space–time video super-resolution via spatial–temporal feature interaction. Neural Netw. 2025, 184, 107033.
- Kas, M.; Kajo, I.; Ruichek, Y. Coarse-to-fine SVD-GAN based framework for enhanced frame synthesis. Eng. Appl. Artif. Intell. 2022, 110, 104699.
- Hu, J.; Guo, C.; Luo, Y.; Mao, Z. MREIFlow: Unsupervised dense and time-continuous optical flow estimation from image and event data. Inf. Fusion 2025, 113, 102642.
- Hersbach, H.; Bell, B.; Berrisford, P.; Hirahara, S.; Horányi, A.; Muñoz-Sabater, J.; Nicolas, J.; Peubey, C.; Radu, R.; Schepers, D.; et al. The ERA5 Global Reanalysis. Q. J. R. Meteorol. Soc. 2020, 146, 1999–2049.
- Cheng, X.; Chen, Z. Multiple Video Frame Interpolation via Enhanced Deformable Separable Convolution. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 7029–7045.
- Liu, J.; Tang, J.; Wu, G. Residual Feature Distillation Network for Lightweight Image Super-Resolution. In Computer Vision—ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12537, pp. 41–55.
- Tan, C.; Gao, Z.; Wu, L.; Xu, Y.; Xia, J.; Li, S.; Li, S.Z. Temporal attention unit: Towards efficient spatiotemporal predictive learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 18770–18782.
- Choi, M.; Kim, H.; Han, B.; Xu, N.; Lee, K.M. Channel Attention Is All You Need for Video Frame Interpolation. Proc. AAAI Conf. Artif. Intell. 2020, 34, 10663–10671.
- Reda, F.; Kontkanen, J.; Tabellion, E.; Sun, D.; Pantofaru, C.; Curless, B. FILM: Frame Interpolation for Large Motion. In Proceedings of the 2022 European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; pp. 250–266.
- Kalluri, T.; Pathak, D.; Chandraker, M.; Tran, D. FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation. In Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2–7 January 2023; pp. 2070–2081.
- Niklaus, S.; Mai, L.; Wang, O. Revisiting Adaptive Convolutions for Video Frame Interpolation. In Proceedings of the 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), Virtual Conference, 5–9 January 2021; pp. 1098–1108.
- Li, H.; Yuan, Y.; Wang, Q. Video Frame Interpolation Via Residue Refinement. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Virtual Conference, 4–9 May 2020; pp. 2613–2617.
- Liu, Z.; Yeh, R.A.; Tang, X.; Liu, Y.; Agarwala, A. Video frame synthesis using deep voxel flow. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4473–4481.
- Lu, L.; Wu, R.; Lin, H.; Lu, J.; Jia, J. Video Frame Interpolation with Transformer. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 3522–3532.
- Xiang, X.; Tian, Y.; Zhang, Y.; Fu, Y.; Allebach, J.P.; Xu, C. Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3367–3376.
Variable | Model | MAE ↓ | RMSE ↓ | ACC ↑ | PSNR ↑ | SSIM ↑
---|---|---|---|---|---|---
T2M | TDNN | 0.158 | 0.280 | 0.99951 | 50.507 | 0.9937
T2M | TDNN-PS | 0.170 | 0.296 | 0.99945 | 50.007 | 0.9926
T2M | TDNN-MSWM | 0.168 | 0.295 | 0.99945 | 50.049 | 0.9925
SP | TDNN | 20.064 | 26.479 | 0.99948 | 60.034 | 0.9983
SP | TDNN-PS | 20.339 | 26.643 | 0.99947 | 59.981 | 0.9982
SP | TDNN-MSWM | 21.425 | 28.956 | 0.99937 | 59.258 | 0.9982
SH1000 | TDNN | 0.071 | 0.128 | 0.99971 | 49.407 | 0.9946
SH1000 | TDNN-PS | 0.074 | 0.134 | 0.99969 | 49.021 | 0.9944
SH1000 | TDNN-MSWM | 0.073 | 0.133 | 0.99969 | 49.052 | 0.9944
Composite | TDNN | 6.764 | 15.289 | 0.99947 | 51.477 | 0.9955
Composite | TDNN-PS | 6.861 | 15.384 | 0.99946 | 51.058 | 0.9951
Composite | TDNN-MSWM | 7.222 | 16.719 | 0.99937 | 51.058 | 0.9951
Model | MAE (K) ↓ | RMSE (K) ↓ | ACC ↑ | PSNR ↑ | SSIM ↑
---|---|---|---|---|---|
Cubic | 0.850 | 1.652 | 0.9832 | 35.082 | 0.9534 |
CAIN [52] | 1.027 | 1.441 | 0.9938 | 36.272 | 0.9195 |
EDSC [49] | 0.198 | 0.319 | 0.9994 | 49.356 | 0.9925 |
AdaCof [40] | 0.178 | 0.299 | 0.9994 | 49.921 | 0.9928 |
FILM [53] | 0.748 | 1.198 | 0.9910 | 37.875 | 0.9389 |
FLAVR [54] | 0.181 | 0.294 | 0.9994 | 50.084 | 0.9926 |
SepConv++ [55] | 0.422 | 0.735 | 0.9966 | 42.117 | 0.9867 |
RRIN [56] | 3.507 | 6.671 | 0.7311 | 22.960 | 0.7376 |
SepConv [38] | 0.252 | 0.387 | 0.9991 | 47.700 | 0.9919 |
DVF [57] | 0.318 | 0.521 | 0.9983 | 45.111 | 0.9900 |
VFIformer [58] | 5.573 | 8.577 | 0.4605 | 20.777 | 0.5378 |
Zooming Slow-Mo [59] | 0.189 | 0.303 | 0.9994 | 49.811 | 0.9932 |
Ours | 0.158 | 0.280 | 0.9995 | 50.507 | 0.9937 |
Model | MAE (Pa) ↓ | RMSE (Pa) ↓ | ACC ↑ | PSNR ↑ | SSIM ↑
---|---|---|---|---|---|
Cubic | 40.488 | 57.226 | 0.9975 | 53.341 | 0.9955 |
CAIN [52] | 209.812 | 271.595 | 0.9509 | 39.814 | 0.9078 |
EDSC [49] | 35.233 | 54.136 | 0.9978 | 53.823 | 0.9972 |
AdaCof [40] | 20.777 | 27.39 | 0.9994 | 59.741 | 0.9983 |
FILM [53] | 441.655 | 792.393 | 0.6638 | 30.514 | 0.9225 |
FLAVR [54] | 28.217 | 37.046 | 0.9991 | 57.118 | 0.9971 |
SepConv++ [55] | 20.942 | 27.787 | 0.9994 | 59.616 | 0.9982 |
RRIN [56] | 1630.097 | 3923.896 | 0.1795 | 16.618 | 0.8158 |
SepConv [38] | 52.606 | 83.071 | 0.9949 | 50.104 | 0.9955 |
DVF [57] | 32.957 | 51.519 | 0.9980 | 54.253 | 0.9981 |
VFIformer [58] | 2695.569 | 4387.114 | 0.0363 | 15.649 | 0.5266 |
Zooming Slow-Mo [59] | 32.802 | 43.036 | 0.9988 | 55.816 | 0.9974 |
Ours | 20.064 | 26.479 | 0.9995 | 60.034 | 0.9983 |
Model | MAE (g/kg) ↓ | RMSE (g/kg) ↓ | ACC ↑ | PSNR ↑ | SSIM ↑
---|---|---|---|---|---|
Cubic | 0.216 | 0.391 | 0.9973 | 39.713 | 0.9690 |
CAIN [52] | 0.741 | 0.980 | 0.9928 | 31.725 | 0.8884 |
EDSC [49] | 0.087 | 0.148 | 0.9996 | 48.156 | 0.9935 |
AdaCof [40] | 0.092 | 0.171 | 0.9995 | 46.892 | 0.9924 |
FILM [53] | 0.449 | 0.690 | 0.9921 | 34.766 | 0.9223 |
FLAVR [54] | 0.084 | 0.138 | 0.9996 | 48.746 | 0.9934 |
SepConv++ [55] | 0.084 | 0.150 | 0.9996 | 48.029 | 0.9934 |
RRIN [56] | 2.762 | 3.729 | 0.7710 | 20.115 | 0.6016 |
SepConv [38] | 0.107 | 0.171 | 0.9995 | 46.914 | 0.9916 |
DVF [57] | 0.094 | 0.168 | 0.9995 | 47.021 | 0.9924 |
VFIformer [58] | 3.754 | 4.676 | 0.5414 | 18.150 | 0.4398 |
Zooming Slow-Mo [59] | 0.080 | 0.129 | 0.9996 | 49.345 | 0.9945 |
Ours | 0.071 | 0.128 | 0.9997 | 49.407 | 0.9946 |
Variable | Model | MAE ↓ | RMSE ↓ | ACC ↑ | PSNR ↑ | SSIM ↑
---|---|---|---|---|---|---
T2M | TDNN | 0.158 | 0.279 | 0.99952 | 50.545 | 0.9940
T2M | TDNN-PS | 0.169 | 0.291 | 0.99947 | 50.170 | 0.9933
T2M | TDNN-MSWM | 0.172 | 0.298 | 0.99944 | 49.951 | 0.9925
SP | TDNN | 19.811 | 26.023 | 0.99950 | 60.208 | 0.9983
SP | TDNN-PS | 22.288 | 29.860 | 0.99934 | 59.063 | 0.9982
SP | TDNN-MSWM | 22.806 | 30.814 | 0.99929 | 58.755 | 0.9982
SH1000 | TDNN | 0.069 | 0.123 | 0.99975 | 49.725 | 0.9951
SH1000 | TDNN-PS | 0.074 | 0.134 | 0.99969 | 49.015 | 0.9944
SH1000 | TDNN-MSWM | 0.073 | 0.129 | 0.99971 | 49.329 | 0.9948
Composite | TDNN | 6.679 | 15.025 | 0.99949 | 51.667 | 0.9958
Composite | TDNN-PS | 7.510 | 17.241 | 0.99933 | 51.228 | 0.9954
Composite | TDNN-MSWM | 7.684 | 17.792 | 0.99929 | 50.977 | 0.9951
Variable | Model | MAE ↓ | RMSE ↓ | ACC ↑ | PSNR ↑ | SSIM ↑
---|---|---|---|---|---|---
T2M | TDNN-1 | 0.158 | 0.280 | 0.9995 | 50.507 | 0.9937
T2M | TDNN-2 | 0.158 | 0.279 | 0.9995 | 50.545 | 0.9940
T2M | Deviation (%) | 0.00 | −0.36 | 0.00 | 0.08 | 0.03
SP | TDNN-1 | 20.064 | 26.479 | 0.9995 | 60.034 | 0.9983
SP | TDNN-2 | 19.811 | 26.023 | 0.9995 | 60.208 | 0.9983
SP | Deviation (%) | −1.26 | −1.72 | 0.00 | 0.29 | 0.00
SH1000 | TDNN-1 | 0.071 | 0.128 | 0.9997 | 49.407 | 0.9946
SH1000 | TDNN-2 | 0.069 | 0.123 | 0.9998 | 49.725 | 0.9951
SH1000 | Deviation (%) | −2.82 | −3.91 | 0.01 | 0.64 | 0.05
Model | T2M MAE (K) ↓ | T2M RMSE (K) ↓ | T2M ACC ↑ | SP MAE (Pa) ↓ | SP RMSE (Pa) ↓ | SP ACC ↑ | SH1000 MAE (g/kg) ↓ | SH1000 RMSE (g/kg) ↓ | SH1000 ACC ↑
---|---|---|---|---|---|---|---|---|---
Cubic | 2.022 | 3.542 | 0.9289 | 132.463 | 194.027 | 0.9740 | 0.645 | 1.103 | 0.9794 |
CAIN [52] | 1.200 | 1.898 | 0.9929 | 175.684 | 226.030 | 0.9736 | 0.762 | 1.118 | 0.9932 |
EDSC [49] | 0.516 | 0.756 | 0.9964 | 93.095 | 131.280 | 0.9872 | 0.274 | 0.407 | 0.9971 |
AdaCof [40] | 0.493 | 0.771 | 0.9963 | 48.026 | 69.093 | 0.9965 | 0.267 | 0.406 | 0.9971 |
FILM [53] | 0.963 | 1.390 | 0.9880 | 292.344 | 486.038 | 0.8503 | 0.420 | 0.594 | 0.9938 |
FLAVR [54] | 0.432 | 0.661 | 0.9972 | 37.254 | 49.135 | 0.9982 | 0.248 | 0.367 | 0.9976 |
SepConv++ [55] | 0.917 | 1.371 | 0.9882 | 62.429 | 79.682 | 0.9952 | 0.289 | 0.441 | 0.9966 |
SepConv [38] | 0.790 | 1.150 | 0.9917 | 156.555 | 240.836 | 0.9585 | 0.332 | 0.485 | 0.9958 |
DVF [57] | 0.646 | 0.979 | 0.9940 | 74.463 | 100.861 | 0.9924 | 0.279 | 0.422 | 0.9969 |
Ours | 0.426 | 0.661 | 0.9973 | 35.324 | 47.694 | 0.9983 | 0.240 | 0.365 | 0.9977 |
Model | MAE (mm) ↓ | RMSE (mm) ↓ | ACC ↑ | PSNR ↑ | SSIM ↑
---|---|---|---|---|---|
CAIN [52] | 0.068 | 0.230 | 0.9313 | 51.373 | 0.8869 |
EDSC [49] | 0.067 | 0.227 | 0.9329 | 51.476 | 0.9447 |
AdaCof [40] | 0.047 | 0.212 | 0.9421 | 52.095 | 0.9478 |
FILM [53] | 0.058 | 0.236 | 0.9273 | 51.131 | 0.9411 |
FLAVR [54] | 0.051 | 0.214 | 0.9407 | 51.974 | 0.9452 |
SepConv++ [55] | 0.052 | 0.231 | 0.9303 | 51.327 | 0.9434 |
SepConv [38] | 0.078 | 0.252 | 0.9188 | 50.568 | 0.9328 |
Ours | 0.046 | 0.211 | 0.9426 | 52.131 | 0.9492 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, J.; Lin, L.; Zhang, Y.; Zhang, Z.; Gao, S.; Zhao, H. Neural Network Based on Dynamic Collaboration of Flows for Temporal Downscaling. Remote Sens. 2025, 17, 1434. https://doi.org/10.3390/rs17081434