Comparative Analysis of Edge Information and Polarization on SAR-to-Optical Translation Based on Conditional Generative Adversarial Networks
Abstract
1. Introduction
2. Methods
- (1) Preprocessing: optical remote sensing images and SAR images were preprocessed and split into small patches.
- (2) Feature extraction: the rich spectral information of the optical remote sensing images and the rich structural information of the SAR images were extracted as feature vectors (a minimal texture-extraction sketch follows this list).
- (3) cGANs model training: paired SAR-optical patches were input to train the cGANs until convergence. In this step, we separately input paired co-polarization (VV), cross-polarization (VH), and dual-polarization (VV&VH) SAR-optical patches.
- (4) Accuracy assessment: a neural network classifier was applied to both the generated optical images and the original optical images, and the classification results were compared.
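As a sketch of step (2), gray-level co-occurrence matrix (GLCM) textures, the "GLCM" input feature referred to throughout Section 3, can be computed per SAR patch with scikit-image. The gray-level quantization, distances, angles, and statistics below are illustrative assumptions, not the study's exact settings:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19 spelling

def glcm_features(patch, levels=32):
    """Illustrative GLCM texture vector for one 8-bit SAR patch."""
    # Quantize to fewer gray levels so the co-occurrence matrix stays small.
    q = (patch.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    # Average each Haralick-style statistic over the four directions.
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])
```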
2.1. Paired Features for Model Training From Remote Sensing Images
- (1) A Gaussian filter was used to smooth the image and filter out noise.
- (2) The gradient magnitude and direction of the filtered image were calculated. The gradient at each pixel was decomposed into x and y components by convolving the image with the Canny operator's horizontal and vertical kernels.
- (3) Non-maximum suppression: all values along the gradient direction, except for the local maxima, were suppressed to sharpen the edge features.
- (4) Hysteresis thresholding: by selecting high and low thresholds, edge pixels with weak gradient values were filtered out and edge pixels with high gradient values were retained (see the sketch after this list).
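These four steps are what standard Canny implementations perform; a minimal sketch using OpenCV, where the kernel size, σ, and thresholds are illustrative assumptions rather than the study's settings:

```python
import cv2

def canny_edges(gray, low=50, high=150):
    """Steps (1)-(4): Gaussian smoothing, then gradient computation,
    non-maximum suppression, and hysteresis thresholding."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)  # step (1)
    return cv2.Canny(blurred, low, high)           # steps (2)-(4) inside cv2.Canny
```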
2.2. SAR-to-Optical Translation
2.2.1. Conditional Generative Adversarial Networks (cGANs)
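In the conditional GAN formulation (Mirza and Osindero, 2014), both the generator G and the discriminator D observe a conditioning input x (here, the SAR-derived features), and pix2pix-style translators add an L1 reconstruction term. As a reference sketch (the λ-weighted L1 term is an assumption based on the pix2pix lineage of this setup, not a confirmed detail of this study):

$$
\mathcal{L}_{\mathrm{cGAN}}(G,D)=\mathbb{E}_{x,y}\big[\log D(x,y)\big]+\mathbb{E}_{x,z}\big[\log\big(1-D(x,G(x,z))\big)\big]
$$

$$
G^{*}=\arg\min_{G}\max_{D}\ \mathcal{L}_{\mathrm{cGAN}}(G,D)+\lambda\,\mathbb{E}_{x,y,z}\big[\lVert y-G(x,z)\rVert_{1}\big]
$$

where y is the target optical patch and z is random noise.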
2.2.2. Network Architecture
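As one illustration of this architecture family, here is a minimal pix2pix-style PatchGAN discriminator in PyTorch. This is a generic sketch under the assumption of a pix2pix-like design (U-Net generator plus PatchGAN discriminator); the layer widths are illustrative, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Pix2pix-style PatchGAN: classifies overlapping patches as real/fake."""

    def __init__(self, in_ch):
        # in_ch = SAR/feature channels + optical channels (D sees the pair).
        super().__init__()

        def block(i, o, stride=2, norm=True):
            layers = [nn.Conv2d(i, o, kernel_size=4, stride=stride, padding=1)]
            if norm:
                layers.append(nn.BatchNorm2d(o))
            layers.append(nn.LeakyReLU(0.2))
            return layers

        self.net = nn.Sequential(
            *block(in_ch, 64, norm=False),
            *block(64, 128),
            *block(128, 256),
            *block(256, 512, stride=1),
            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),  # patch logits
        )

    def forward(self, condition, image):
        return self.net(torch.cat([condition, image], dim=1))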
2.2.3. Establishing the SAR-to-Optical Translation Relationship by Model Training
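One alternating optimization step of such training, sketched in PyTorch under the same pix2pix-style assumptions (G, D, opt_g, opt_d, and lam are hypothetical names; the λ = 100 weighting follows Isola et al., not necessarily this study):

```python
import torch
import torch.nn as nn

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(G, D, opt_g, opt_d, sar, optical, lam=100.0):
    """One generator/discriminator update on a paired SAR-optical batch."""
    fake = G(sar)

    # Discriminator: push real pairs toward 1, generated pairs toward 0.
    d_real = D(sar, optical)
    d_fake = D(sar, fake.detach())
    loss_d = 0.5 * (bce(d_real, torch.ones_like(d_real))
                    + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D and stay close to the real optical patch (L1 term).
    d_fake = D(sar, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, optical)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```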
2.2.4. Optical Image Generation
2.3. Evaluation of Reconstruction Image Data Quality
3. Results
3.1. Influence of Edge Information on SAR-to-Optical Translation
3.1.1. Qualitative Evaluation of Generated Images
3.1.2. Quantitative Evaluation of Generated Images
3.2. Comparison of Different Polarization Modes
3.2.1. Optimal Polarization Mode Using Textural Information and Edge Information
3.2.2. Optimal Polarization Mode Using Only Textural Information
3.3. Accuracy Evaluation of Optimal Input Features for Different Surface Objects
3.3.1. Classification and Area Ratio Comparison
3.3.2. Correlation Comparison
4. Discussion
4.1. Effects of Different Reconstruction Methods on Different Optical Bands
4.2. Superior Reconstruction with an Adequate Textural Extraction Scale
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Table: PSNR and SSIM of the generated optical images, by polarization mode, input features, and optical band.

| Metric | Polarization | Input Features | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 | Band 6 | Band 7 |
|---|---|---|---|---|---|---|---|---|---|
| PSNR | VV | GLCM | 38.751 | 37.380 | 36.393 | 33.577 | 27.282 | 29.529 | 31.373 |
| PSNR | VV | GLCM+Canny | 38.348 | 35.602 | 35.007 | 32.360 | 26.460 | 28.280 | 29.996 |
| PSNR | VH | GLCM | 38.650 | 37.260 | 36.108 | 33.176 | 26.530 | 28.406 | 29.960 |
| PSNR | VH | GLCM+Canny | 40.465 | 37.411 | 35.978 | 32.496 | 26.366 | 28.380 | 30.359 |
| PSNR | VV&VH | GLCM | 38.317 | 36.695 | 35.463 | 32.305 | 26.180 | 28.439 | 30.183 |
| PSNR | VV&VH | GLCM+Canny | 38.454 | 37.064 | 36.183 | 33.163 | 27.330 | 29.209 | 30.920 |
| SSIM | VV | GLCM | 0.949 | 0.927 | 0.906 | 0.832 | 0.609 | 0.694 | 0.747 |
| SSIM | VV | GLCM+Canny | 0.931 | 0.900 | 0.872 | 0.790 | 0.528 | 0.597 | 0.671 |
| SSIM | VH | GLCM | 0.947 | 0.923 | 0.893 | 0.805 | 0.549 | 0.632 | 0.688 |
| SSIM | VH | GLCM+Canny | 0.948 | 0.925 | 0.898 | 0.809 | 0.582 | 0.673 | 0.721 |
| SSIM | VV&VH | GLCM | 0.943 | 0.922 | 0.892 | 0.802 | 0.535 | 0.630 | 0.693 |
| SSIM | VV&VH | GLCM+Canny | 0.949 | 0.927 | 0.900 | 0.821 | 0.603 | 0.666 | 0.722 |
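Per-band PSNR and SSIM values like those above can be computed with standard implementations; a sketch using scikit-image, where the band-last layout and 8-bit data range are assumptions about the data:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def band_scores(real, generated, band, data_range=255):
    """PSNR and SSIM for one optical band of a real vs. generated image pair."""
    r, g = real[..., band], generated[..., band]
    return (peak_signal_noise_ratio(r, g, data_range=data_range),
            structural_similarity(r, g, data_range=data_range))
```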
Table: Area ratios of vegetation, water bodies, and building land in the classification results of the real image and the generated images.

| Polarization | Input Features | Vegetation | Water Bodies | Building Land |
|---|---|---|---|---|
| Real image | - | 15.09% | 12.41% | 72.50% |
| VV | GLCM | 18.04% | 11.80% | 70.16% |
| VV | GLCM+Canny | 25.61% | 13.10% | 61.29% |
| VH | GLCM | 22.22% | 11.41% | 66.37% |
| VH | GLCM+Canny | 31.34% | 12.77% | 55.89% |
| VV&VH | GLCM | 39.85% | 11.56% | 48.52% |
| VV&VH | GLCM+Canny | 20.11% | 11.86% | 68.03% |
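Area ratios like those in the table follow directly from pixel counts of the classification maps; a minimal sketch (the integer class IDs are hypothetical):

```python
import numpy as np

def area_ratios(class_map, class_ids=(1, 2, 3)):
    """Percentage of pixels assigned to each class in a classification map.
    Hypothetical encoding: 1 = vegetation, 2 = water bodies, 3 = building land."""
    labels = np.asarray(class_map)
    return {c: 100.0 * np.count_nonzero(labels == c) / labels.size
            for c in class_ids}
```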