An Adaptive Multiscale Generative Adversarial Network for the Spatiotemporal Fusion of Landsat and MODIS Data
Abstract
1. Introduction
- We design an efficient adaptive multiscale pyramidal network structure that captures feature information at different scales, thereby enhancing feature representation and robustness in spatiotemporal fusion (STF) tasks.
- To fully capture the temporal variability of the high-spatial-, low-temporal-resolution (HSLT) data, we augment the model with an adaptive attention module that captures diverse types of change.
- We design a deformable convolutional module for STF. This module learns spatially adaptive offsets and masks from temporal variation information at different scales, enabling our network to capture temporal reflectance trends and dynamically expand its receptive fields (a minimal sketch of this mechanism follows the list).
- With these improvements, our AMS-STF method blends remote sensing images effectively and efficiently and achieves state-of-the-art (SOTA) performance.
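As a rough illustration of the deformable module described above (not the authors' released code), the sketch below shows how a modulated deformable convolution can take offsets and masks predicted from temporal-difference features, assuming PyTorch and torchvision ≥ 0.9; the class and variable names (e.g., `TemporalDeformBlock`, `temporal_diff`) are illustrative placeholders.

```python
# A minimal sketch, not the authors' implementation: a modulated deformable convolution
# whose sampling offsets and masks are predicted from temporal-change features.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class TemporalDeformBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.padding = kernel_size // 2
        # Regular convolution weights used by the deformable operator.
        self.weight = nn.Parameter(torch.randn(channels, channels, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(channels))
        # Offsets (2 values per sampling location) and masks (1 value per location)
        # are predicted from the temporal-difference features, so the sampling grid
        # and the per-location weighting adapt to where change occurs.
        self.offset_conv = nn.Conv2d(channels, 2 * kernel_size * kernel_size, 3, padding=1)
        self.mask_conv = nn.Conv2d(channels, kernel_size * kernel_size, 3, padding=1)
        nn.init.zeros_(self.offset_conv.weight)
        nn.init.zeros_(self.offset_conv.bias)
        nn.init.zeros_(self.mask_conv.weight)
        nn.init.zeros_(self.mask_conv.bias)

    def forward(self, x: torch.Tensor, temporal_diff: torch.Tensor) -> torch.Tensor:
        offset = self.offset_conv(temporal_diff)              # (N, 2*K*K, H, W)
        mask = torch.sigmoid(self.mask_conv(temporal_diff))   # (N, K*K, H, W), in [0, 1]
        return deform_conv2d(x, offset, self.weight, self.bias,
                             padding=self.padding, mask=mask)


# Usage: fine-image features are resampled under the guidance of coarse temporal change.
fine_feats = torch.randn(1, 32, 64, 64)      # features of the fine (Landsat-like) image
change_feats = torch.randn(1, 32, 64, 64)    # features of the coarse temporal difference
out = TemporalDeformBlock(32)(fine_feats, change_feats)
print(out.shape)  # torch.Size([1, 32, 64, 64])
```

With zero-initialized offset and mask predictors, the sampling grid starts regular and the mask is uniform, so training begins close to a plain convolution, a common stabilization choice for DCNv2-style modules.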
2. Methodology
2.1. AMS-STF Architecture
2.2. Generator
2.2.1. Overview of Multiscale Workflow
2.2.2. Encoder and Decoder Structure
2.2.3. Adaptive Attention Module Structure
2.2.4. Adaptive Fusion Module Structure
2.3. Discriminator
2.4. Loss Function
3. Experiments
3.1. Study Areas and Datasets
3.2. Models and Evaluation Metrics
3.3. Model Implementation and Settings
3.4. Experimental Results and Analysis
3.4.1. Contrast Experiments
3.4.2. Ablation Study
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhang, F.; Zhu, X.; Liu, D. Blending MODIS and Landsat Images for Urban Flood Mapping. Int. J. Remote Sens. 2014, 35, 3237–3253.
- Miah, M.T.; Sultana, M. Environmental Impact of Land Use and Land Cover Change in Rampal, Bangladesh: A Google Earth Engine-Based Remote Sensing Approach. In Proceedings of the 2022 4th International Conference on Sustainable Technologies for Industry 4.0 (STI), Dhaka, Bangladesh, 17–18 December 2022; pp. 1–7.
- Johnson, M.D.; Hsieh, W.W.; Cannon, A.J.; Davidson, A.; Bédard, F. Crop Yield Forecasting on the Canadian Prairies by Remotely Sensed Vegetation Indices and Machine Learning Methods. Agric. For. Meteorol. 2016, 218–219, 74–84.
- Luo, X.; Wang, M.; Dai, G.; Chen, X. A Novel Technique to Compute the Revisit Time of Satellites and Its Application in Remote Sensing Satellite Optimization Design. Int. J. Aerosp. Eng. 2017, 2017, e6469439.
- Liao, C.; Wang, J.; Dong, T.; Shang, J.; Liu, J.; Song, Y. Using Spatio-Temporal Fusion of Landsat-8 and MODIS Data to Derive Phenology, Biomass and Yield Estimates for Corn and Soybean. Sci. Total Environ. 2019, 650, 1707–1721.
- Li, J.; Li, Y.; He, L.; Chen, J.; Plaza, A. Spatio-Temporal Fusion for Remote Sensing Data: An Overview and New Benchmark. Sci. China Inf. Sci. 2020, 63, 140301.
- Zhu, X.; Cai, F.; Tian, J.; Williams, T.K.-A. Spatiotemporal Fusion of Multisource Remote Sensing Data: Literature Survey, Taxonomy, Principles, Applications, and Future Directions. Remote Sens. 2018, 10, 527.
- Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the Blending of the Landsat and MODIS Surface Reflectance: Predicting Daily Landsat Surface Reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218.
- Hilker, T.; Wulder, M.A.; Coops, N.C.; Linke, J.; McDermid, G.; Masek, J.G.; Gao, F.; White, J.C. A New Data Fusion Model for High Spatial- and Temporal-Resolution Mapping of Forest Disturbance Based on Landsat and MODIS. Remote Sens. Environ. 2009, 113, 1613–1627.
- Zhu, X.; Chen, J.; Gao, F.; Chen, X.; Masek, J.G. An Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model for Complex Heterogeneous Regions. Remote Sens. Environ. 2010, 114, 2610–2623.
- Li, A.; Bo, Y.; Zhu, Y.; Guo, P.; Bi, J.; He, Y. Blending Multi-Resolution Satellite Sea Surface Temperature (SST) Products Using Bayesian Maximum Entropy Method. Remote Sens. Environ. 2013, 135, 52–63.
- Huang, B.; Zhang, H.; Song, H.; Wang, J.; Song, C. Unified Fusion of Remote-Sensing Imagery: Generating Simultaneously High-Resolution Synthetic Spatial–Temporal–Spectral Earth Observations. Remote Sens. Lett. 2013, 4, 561–569.
- Zhukov, B.; Oertel, D.; Lanzl, F.; Reinhackel, G. Unmixing-Based Multisensor Multiresolution Image Fusion. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1212–1226.
- Wu, M.; Niu, Z.; Wang, C.; Wu, C.; Wang, L. Use of MODIS and Landsat Time Series Data to Generate High-Resolution Temporal Synthetic Landsat Data Using a Spatial and Temporal Reflectance Fusion Model. J. Appl. Remote Sens. 2012, 6, 063507.
- Zhang, W.; Li, A.; Jin, H.; Bian, J.; Zhang, Z.; Lei, G.; Qin, Z.; Huang, C. An Enhanced Spatial and Temporal Data Fusion Model for Fusing Landsat and MODIS Surface Reflectance to Generate High Temporal Landsat-Like Data. Remote Sens. 2013, 5, 5346–5368.
- Wang, Q.; Atkinson, P.M. Spatio-Temporal Fusion for Daily Sentinel-2 Images. Remote Sens. Environ. 2018, 204, 31–42.
- Zhu, X.; Helmer, E.H.; Gao, F.; Liu, D.; Chen, J.; Lefsky, M.A. A Flexible Spatiotemporal Method for Fusing Satellite Images with Different Resolutions. Remote Sens. Environ. 2016, 172, 165–177.
- Liu, M.; Yang, W.; Zhu, X.; Chen, J.; Chen, X.; Yang, L.; Helmer, E.H. An Improved Flexible Spatiotemporal DAta Fusion (IFSDAF) Method for Producing High Spatiotemporal Resolution Normalized Difference Vegetation Index Time Series. Remote Sens. Environ. 2019, 227, 74–89.
- Li, X.; Foody, G.M.; Boyd, D.S.; Ge, Y.; Zhang, Y.; Du, Y.; Ling, F. SFSDAF: An Enhanced FSDAF That Incorporates Sub-Pixel Class Fraction Change Information for Spatio-Temporal Image Fusion. Remote Sens. Environ. 2020, 237, 111537.
- Gevaert, C.M.; García-Haro, F.J. A Comparison of STARFM and an Unmixing-Based Algorithm for Landsat and MODIS Data Fusion. Remote Sens. Environ. 2015, 156, 34–44.
- Huang, B.; Song, H. Spatiotemporal Reflectance Fusion via Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3707–3716.
- Song, H.; Liu, Q.; Wang, G.; Hang, R.; Huang, B. Spatiotemporal Satellite Image Fusion Using Deep Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 821–829.
- Liu, X.; Deng, C.; Chanussot, J.; Hong, D.; Zhao, B. StfNet: A Two-Stream Convolutional Neural Network for Spatiotemporal Image Fusion. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6552–6564.
- Tan, Z.; Yue, P.; Di, L.; Tang, J. Deriving High Spatiotemporal Remote Sensing Images Using Deep Convolutional Network. Remote Sens. 2018, 10, 1066.
- Tan, Z.; Di, L.; Zhang, M.; Guo, L.; Gao, M. An Enhanced Deep Convolutional Model for Spatiotemporal Image Fusion. Remote Sens. 2019, 11, 2898.
- Li, Y.; Li, J.; He, L.; Chen, J.; Plaza, A. A New Sensor Bias-Driven Spatio-Temporal Fusion Model Based on Convolutional Neural Networks. Sci. China Inf. Sci. 2020, 63, 140302.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. In Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014.
- Zhang, H.; Song, Y.; Han, C.; Zhang, L. Remote Sensing Image Spatiotemporal Fusion Using a Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4273–4286.
- Chen, J.; Wang, L.; Feng, R.; Liu, P.; Han, W.; Chen, X. CycleGAN-STF: Spatiotemporal Fusion via CycleGAN-Based Image Generation. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5851–5865.
- Tan, Z.; Gao, M.; Li, X.; Jiang, L. A Flexible Reference-Insensitive Spatiotemporal Fusion Model for Remote Sensing Images Using Conditional Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5601413.
- Song, B.; Liu, P.; Li, J.; Wang, L.; Zhang, L.; He, G.; Chen, L.; Liu, J. MLFF-GAN: A Multilevel Feature Fusion with GAN for Spatiotemporal Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4410816.
- Tan, Z.; Gao, M.; Yuan, J.; Jiang, L.; Duan, H. A Robust Model for MODIS and Landsat Image Fusion Considering Input Noise. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5407217.
- Cao, H.; Luo, X.; Peng, Y.; Xie, T. MANet: A Network Architecture for Remote Sensing Spatiotemporal Fusion Based on Multiscale and Attention Mechanisms. Remote Sens. 2022, 14, 4600.
- Shang, C.; Li, X.; Yin, Z.; Li, X.; Wang, L.; Zhang, Y.; Du, Y.; Ling, F. Spatiotemporal Reflectance Fusion Using a Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5400915.
- Song, Y.; Zhang, H.; Huang, H.; Zhang, L. Remote Sensing Image Spatiotemporal Fusion via a Generative Adversarial Network With One Prior Image Pair. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5528117.
- Zhu, Z.; Tao, Y.; Luo, X. HCNNet: A Hybrid Convolutional Neural Network for Spatiotemporal Image Fusion. IEEE Trans. Geosci. Remote Sens. 2022, 60, 2005716.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
- Liu, Z.; Wang, L.; Wu, W.; Qian, C.; Lu, T. TAM: Temporal Adaptive Module for Video Recognition. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 13688–13698.
- Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; Wei, Y. Deformable Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
- Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784.
- Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015.
- Kong, X.; Liu, X.; Gu, J.; Qiao, Y.; Dong, C. Reflash Dropout in Image Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022.
- Zhu, X.; Hu, H.; Lin, S.; Dai, J. Deformable ConvNets v2: More Deformable, Better Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Emelyanova, I.V.; McVicar, T.R.; Van Niel, T.G.; Li, L.T.; van Dijk, A.I.J.M. Assessing the Accuracy of Blending Landsat–MODIS Surface Reflectances in Two Landscapes with Contrasting Spatial and Temporal Dynamics: A Framework for Algorithm Selection. Remote Sens. Environ. 2013, 133, 193–209.
- Wald, L. Data Fusion. Definitions and Architectures—Fusion of Images of Different Spatial Resolutions; Presses de l’Ecole, Ecole des Mines de Paris: Paris, France, 2002; p. 200.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
| Dates (YYYYMMDD, DOY) | Method | RMSE | SSIM | PSNR | CC |
|---|---|---|---|---|---|
| 20011007 DOY 281, 20011016 DOY 290, 20011101 DOY 306 | ESTARFM | 0.02813 | 0.93558 | 33.97905 | 0.83528 |
| | FSDAF | 0.0316 | 0.91627 | 33.27262 | 0.82409 |
| | EDCSTFN | 0.02974 | 0.92698 | 33.58072 | 0.85664 |
| | GAN-STFM | 0.03142 | 0.92221 | 33.85925 | 0.82754 |
| | MLFF-GAN | 0.04404 | 0.83763 | 30.79163 | 0.61286 |
| | RSFN | 0.02776 | 0.92472 | 34.29955 | 0.85531 |
| | AMS-STF | 0.02678 | 0.93806 | 34.67445 | 0.866 |
| 20011016 DOY 290, 20011101 DOY 306, 20011108 DOY 313 | ESTARFM | 0.02702 | 0.93325 | 35.77195 | 0.87637 |
| | FSDAF | 0.0356 | 0.89079 | 33.41748 | 0.78448 |
| | EDCSTFN | 0.02994 | 0.92175 | 34.71036 | 0.87465 |
| | GAN-STFM | 0.03467 | 0.91208 | 33.87684 | 0.80218 |
| | MLFF-GAN | 0.0449 | 0.8356 | 31.604 | 0.61908 |
| | RSFN | 0.02636 | 0.94097 | 35.63747 | 0.89394 |
| | AMS-STF | 0.02581 | 0.93934 | 35.82291 | 0.89321 |
| 20011124 DOY 329, 20011203 DOY 338, 20020104 DOY 005 | ESTARFM | 0.02767 | 0.9428 | 34.13996 | 0.8865 |
| | FSDAF | 0.02973 | 0.93062 | 33.1739 | 0.86046 |
| | EDCSTFN | 0.02819 | 0.93817 | 33.72862 | 0.89021 |
| | GAN-STFM | 0.03035 | 0.92985 | 33.15433 | 0.86365 |
| | MLFF-GAN | 0.04555 | 0.83608 | 30.0239 | 0.66096 |
| | RSFN | 0.03317 | 0.93724 | 32.71135 | 0.84387 |
| | AMS-STF | 0.02651 | 0.94702 | 34.33996 | 0.88403 |
| Dates (YYYYMMDD, DOY) | Method | RMSE | SSIM | PSNR | CC |
|---|---|---|---|---|---|
| 20041212 DOY 347, 20041228 DOY 363, 20050113 DOY 013 | ESTARFM | 0.02273 | 0.9435 | 33.80933 | 0.87566 |
| | FSDAF | 0.03618 | 0.89095 | 29.46704 | 0.72716 |
| | EDCSTFN | 0.02294 | 0.94365 | 33.88273 | 0.88355 |
| | GAN-STFM | 0.02796 | 0.92092 | 31.98121 | 0.80937 |
| | MLFF-GAN | 0.02416 | 0.93306 | 33.47799 | 0.86945 |
| | RSFN | 0.02572 | 0.94461 | 32.61475 | 0.84748 |
| | AMS-STF | 0.0221 | 0.94669 | 33.98035 | 0.89121 |
| 20041228 DOY 363, 20050113 DOY 013, 20050129 DOY 029 | ESTARFM | 0.01965 | 0.96404 | 35.33982 | 0.91764 |
| | FSDAF | 0.02509 | 0.94586 | 32.70687 | 0.83823 |
| | EDCSTFN | 0.01886 | 0.96492 | 34.99706 | 0.91067 |
| | GAN-STFM | 0.0215 | 0.95825 | 34.33843 | 0.89115 |
| | MLFF-GAN | 0.02023 | 0.96127 | 35.14885 | 0.91322 |
| | RSFN | 0.01954 | 0.96495 | 34.59207 | 0.91384 |
| | AMS-STF | 0.01674 | 0.96729 | 35.99133 | 0.9186 |
| 20050113 DOY 013, 20050129 DOY 029, 20050214 DOY 045 | ESTARFM | 0.02153 | 0.96251 | 34.53211 | 0.92726 |
| | FSDAF | 0.02483 | 0.94972 | 33.22627 | 0.90086 |
| | EDCSTFN | 0.02419 | 0.95487 | 32.94203 | 0.89944 |
| | GAN-STFM | 0.0249 | 0.95 | 32.75499 | 0.88122 |
| | MLFF-GAN | 0.02091 | 0.95746 | 34.34884 | 0.9141 |
| | RSFN | 0.02352 | 0.9595 | 33.16905 | 0.90159 |
| | AMS-STF | 0.02061 | 0.96384 | 34.46527 | 0.91821 |
| Dates (YYYYMMDD, DOY) | Method | RMSE | SSIM | PSNR | CC |
|---|---|---|---|---|---|
| 20171115 DOY 319, 20180203 DOY 034, 20180408 DOY 098 | ESTARFM | 0.08882 | 0.74887 | 21.20253 | 0.60126 |
| | FSDAF | 0.08414 | 0.75271 | 21.64235 | 0.66504 |
| | EDCSTFN | 0.06187 | 0.8313 | 24.70146 | 0.64017 |
| | GAN-STFM | 0.06215 | 0.81421 | 24.4517 | 0.56468 |
| | MLFF-GAN | 0.06008 | 0.81081 | 24.77594 | 0.53079 |
| | RSFN | 0.0712 | 0.83966 | 23.76376 | 0.69218 |
| | AMS-STF | 0.05183 | 0.83954 | 25.89058 | 0.63686 |
| 20181001 DOY 274, 20181204 DOY 338, 20190121 DOY 021 | ESTARFM | 0.03213 | 0.91176 | 30.41504 | 0.8594 |
| | FSDAF | 0.04391 | 0.83985 | 27.75718 | 0.75754 |
| | EDCSTFN | 0.03303 | 0.9046 | 30.26718 | 0.85864 |
| | GAN-STFM | 0.03839 | 0.87491 | 28.85283 | 0.80361 |
| | MLFF-GAN | 0.04002 | 0.86208 | 28.29381 | 0.79761 |
| | RSFN | 0.03397 | 0.90635 | 30.05782 | 0.84058 |
| | AMS-STF | 0.03114 | 0.91265 | 30.61632 | 0.83915 |
| 20181204 DOY 338, 20190121 DOY 021, 20190529 DOY 149 | ESTARFM | 0.04931 | 0.85414 | 26.55303 | 0.75154 |
| | FSDAF | 0.04265 | 0.87367 | 27.81658 | 0.79528 |
| | EDCSTFN | 0.04388 | 0.86271 | 27.59801 | 0.73183 |
| | GAN-STFM | 0.0489 | 0.85718 | 26.71403 | 0.68612 |
| | MLFF-GAN | 0.04002 | 0.86208 | 28.29381 | 0.79761 |
| | RSFN | 0.04345 | 0.87203 | 27.70139 | 0.77398 |
| | AMS-STF | 0.0365 | 0.88677 | 28.97685 | 0.73741 |
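The tables report the root-mean-square error (RMSE), structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and correlation coefficient (CC) between each fused image and the observed reference. A minimal per-band sketch of these metrics is given below, assuming NumPy and scikit-image; the data range and any band-wise averaging are assumptions rather than the paper's exact evaluation protocol.

```python
# A minimal sketch of the four reported metrics for a single band, assuming
# reflectance arrays scaled to [0, 1]; not the authors' evaluation code.
import numpy as np
from skimage.metrics import structural_similarity as ssim


def evaluate_band(pred: np.ndarray, ref: np.ndarray, data_range: float = 1.0) -> dict:
    mse = float(np.mean((pred - ref) ** 2))
    return {
        "RMSE": float(np.sqrt(mse)),
        "SSIM": float(ssim(ref, pred, data_range=data_range)),
        "PSNR": float(10.0 * np.log10(data_range ** 2 / mse)),
        "CC": float(np.corrcoef(pred.ravel(), ref.ravel())[0, 1]),  # Pearson correlation
    }


# Example with random data standing in for one fused band and its reference.
rng = np.random.default_rng(0)
ref = rng.random((256, 256))
pred = np.clip(ref + rng.normal(0, 0.03, ref.shape), 0, 1)
print(evaluate_band(pred, ref))
```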
| Dates (YYYYMMDD, DOY) | AAM | AFM | RMSE | SSIM | PSNR | CC |
|---|---|---|---|---|---|---|
| 20011007 DOY 281, 20011016 DOY 290, 20011101 DOY 306 | × | × | 0.02775 | 0.92073 | 34.22129 | 0.8596 |
| | × | √ | 0.02757 | 0.93393 | 34.55059 | 0.86103 |
| | √ | × | 0.02909 | 0.92958 | 34.09474 | 0.82401 |
| | √ | √ (No Mask) | 0.02991 | 0.91623 | 33.5907 | 0.8548 |
| | √ | √ | 0.02678 | 0.93806 | 34.67445 | 0.866 |
| 20011016 DOY 290, 20011101 DOY 306, 20011108 DOY 313 | × | × | 0.028 | 0.92143 | 34.88767 | 0.87814 |
| | × | √ | 0.028414 | 0.75271 | 21.64235 | 0.66504 |
| | √ | × | 0.026187 | 0.83954 | 24.70146 | 0.64017 |
| | √ | √ (No Mask) | 0.02982 | 0.92021 | 34.31486 | 0.88346 |
| | √ | √ | 0.02581 | 0.93934 | 35.82291 | 0.89321 |
| 20011124 DOY 329, 20011203 DOY 338, 20020104 DOY 005 | × | × | 0.02758 | 0.94078 | 33.85165 | 0.87784 |
| | × | √ | 0.02805 | 0.93997 | 33.63037 | 0.88589 |
| | √ | × | 0.02831 | 0.94358 | 33.87435 | 0.86329 |
| | √ | √ (No Mask) | 0.02836 | 0.94059 | 33.56888 | 0.87239 |
| | √ | √ | 0.02651 | 0.94702 | 34.33996 | 0.88403 |
| Dates (YYYYMMDD, DOY) | AAM | AFM | RMSE | SSIM | PSNR | CC |
|---|---|---|---|---|---|---|
| 20041212 DOY 347, 20041228 DOY 363, 20050113 DOY 013 | × | × | 0.03405 | 0.91585 | 30.41416 | 0.72445 |
| | × | √ | 0.02292 | 0.94464 | 33.36494 | 0.88001 |
| | √ | × | 0.03448 | 0.92023 | 30.11523 | 0.76052 |
| | √ | √ (No Mask) | 0.04037 | 0.89941 | 28.26648 | 0.69565 |
| | √ | √ | 0.0221 | 0.94669 | 33.98035 | 0.89121 |
| 20041228 DOY 363, 20050113 DOY 013, 20050129 DOY 029 | × | × | 0.02877 | 0.94279 | 31.48667 | 0.82165 |
| | × | √ | 0.01739 | 0.9671 | 35.634 | 0.91967 |
| | √ | × | 0.02843 | 0.95734 | 32.02684 | 0.83395 |
| | √ | √ (No Mask) | 0.02668 | 0.95683 | 32.90639 | 0.82552 |
| | √ | √ | 0.01674 | 0.96729 | 35.99133 | 0.9186 |
| 20050113 DOY 013, 20050129 DOY 029, 20050214 DOY 045 | × | × | 0.02961 | 0.94534 | 31.29416 | 0.82943 |
| | × | √ | 0.02409 | 0.95712 | 33.06139 | 0.90202 |
| | √ | × | 0.02824 | 0.95686 | 31.86778 | 0.85814 |
| | √ | √ (No Mask) | 0.02995 | 0.94844 | 31.23785 | 0.81604 |
| | √ | √ | 0.02061 | 0.96384 | 34.46527 | 0.91821 |
| Dates (YYYYMMDD, DOY) | AAM | AFM | RMSE | SSIM | PSNR | CC |
|---|---|---|---|---|---|---|
| 20171115 DOY 319, 20180203 DOY 034, 20180408 DOY 098 | × | × | 0.06554 | 0.81923 | 23.9781 | 0.42447 |
| | × | √ | 0.05694 | 0.81601 | 25.09822 | 0.55658 |
| | √ | × | 0.06174 | 0.82214 | 24.37606 | 0.58822 |
| | √ | √ (No Mask) | 0.06333 | 0.82478 | 24.21212 | 0.37662 |
| | √ | √ | 0.05183 | 0.83954 | 25.89058 | 0.63686 |
| 20181001 DOY 274, 20181204 DOY 338, 20190121 DOY 021 | × | × | 0.03476 | 0.90037 | 29.9774 | 0.76857 |
| | × | √ | 0.03561 | 0.89798 | 29.43434 | 0.81923 |
| | √ | × | 0.03063 | 0.91054 | 30.10023 | 0.80075 |
| | √ | √ (No Mask) | 0.03253 | 0.9024 | 30.31463 | 0.78595 |
| | √ | √ | 0.03114 | 0.91265 | 30.61632 | 0.83915 |
| 20181204 DOY 338, 20190121 DOY 021, 20190529 DOY 149 | × | × | 0.04404 | 0.87118 | 27.41348 | 0.59968 |
| | × | √ | 0.03719 | 0.88174 | 28.87261 | 0.72867 |
| | √ | × | 0.04673 | 0.85256 | 26.74685 | 0.64293 |
| | √ | √ (No Mask) | 0.04929 | 0.84671 | 26.30543 | 0.63452 |
| | √ | √ | 0.0365 | 0.88677 | 28.97685 | 0.73741 |
| Methods | Parameters |
|---|---|
| EDCSTFN | |
| GAN-STFM | (Generator) |
| | (Discriminator) |
| MLFF-GAN | (Generator) |
| | (Discriminator) |
| RSFN | (Generator) |
| | (Discriminator) |
| AMS-STF (ours) | (Generator) |
| | (Discriminator) |
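For context on how such parameter counts are typically obtained, the sketch below tallies the trainable parameters of a PyTorch module; the `generator` and `discriminator` objects here are toy placeholders and do not reproduce any of the networks compared in the table.

```python
# A minimal sketch, assuming PyTorch; the two toy modules below are placeholders only.
import torch.nn as nn


def count_parameters(model: nn.Module) -> int:
    """Number of trainable parameters, as reported in model-size tables."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


generator = nn.Sequential(
    nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 6, 3, padding=1),
)
discriminator = nn.Sequential(
    nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4),
)
print(f"Generator: {count_parameters(generator):,} parameters")
print(f"Discriminator: {count_parameters(discriminator):,} parameters")
```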
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).