High-Resolution SAR-to-Multispectral Image Translation Based on S2MS-GAN
Abstract
1. Introduction
- We address the challenging task of high-resolution SAR-to-multispectral image translation. In contrast to existing S2OT solutions, the S2MS-GAN proposed in this paper preserves more remote sensing spectral information during the SAR image translation process.
- An end-to-end network model is designed, with a TV-BM3D module introduced into the generator: TV regularization and BM3D filtering effectively suppress the speckle noise in high-resolution SAR images, while a spectral attention mechanism enhances the spectral features of the generated multispectral images. Across a series of evaluation experiments, the images generated by our method achieve higher accuracy and better visual quality.
- A very high-resolution SAR-MS image dataset named S2MS-HR is constructed by performing paired preprocessing on each image at a spatial resolution of 0.3 m. S2MS-HR improves the model's generalization ability and offers a broader data basis for the interpretation and generation of SAR images.
2. Related Work
2.1. Generative Adversarial Network
2.2. Remote Sensing Image Translation
2.3. BM3D, TV, and Attention
3. Methods
3.1. S2MS-GAN Generator
3.2. S2MS-GAN Discriminator
3.3. Loss Function
3.3.1. Adversarial Loss
3.3.2. Cycle Consistency Loss
4. Experimental Section
4.1. Datasets and Preprocess
4.2. Implementation of S2MS-GAN
4.3. Evaluation Metrics
4.4. Results
4.4.1. Quantitative Evaluation
4.4.2. Qualitative Evaluation
4.5. Ablation Study
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
SAR | Synthetic Aperture Radar |
MS | Multispectral |
GAN | Generative Adversarial Network |
S2OT | SAR-to-Optical Image Translation |
S2MST | SAR-to-MS Image Translation |
BM3D | Block-Matching and 3D Filtering |
References
- Naik, P.; Dalponte, M.; Bruzzone, L. Prediction of forest aboveground biomass using multitemporal multispectral remote sensing data. Remote Sens. 2021, 13, 1282.
- Berni, J.A.; Zarco-Tejada, P.J.; Suárez, L.; Fereres, E. Thermal and narrowband multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. IEEE Trans. Geosci. Remote Sens. 2009, 47, 722–738.
- Quan, Y.; Zhong, X.; Feng, W.; Dauphin, G.; Gao, L.; Xing, M. A novel feature extension method for the forest disaster monitoring using multispectral data. Remote Sens. 2020, 12, 2261.
- Thakur, S.; Mondal, I.; Ghosh, P.B.; Das, P.; De, T.K. A review of the application of multispectral remote sensing in the study of mangrove ecosystems with special emphasis on image processing techniques. Spat. Inf. Res. 2020, 28, 39–51.
- Farlik, J.; Kratky, M.; Casar, J.; Stary, V. Multispectral detection of commercial unmanned aerial vehicles. Sensors 2019, 19, 1517.
- Li, Y.; Chen, J.; Zhu, J. A new ground accelerating target imaging method for airborne CSSAR. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4013305.
- Bermudez, J.D.; Happ, P.N.; Feitosa, R.Q.; Costa, G.A.O.P.; da Silva, S.V.; Soares, M.S. Synthesis of multispectral optical images from SAR/optical multitemporal data using conditional generative adversarial networks. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1220–1224.
- Abady, L.; Barni, M.; Garzelli, A.; Basarab, A.; Pascal, C.; Frandon, J.; Dimiccoli, M. GAN generation of synthetic multispectral satellite images. In Proceedings of the Image and Signal Processing for Remote Sensing XXVI, Online, 21–25 September 2020; Volume 11533, pp. 122–133.
- Pang, Y.; Lin, J.; Qin, T.; Chen, Z. Image-to-image translation: Methods and applications. IEEE Trans. Multimed. 2021, 24, 3859–3881.
- Alotaibi, A. Deep generative adversarial networks for image-to-image translation: A review. Symmetry 2020, 12, 1705.
- Kaji, S.; Kida, S. Overview of image-to-image translation by use of deep neural networks: Denoising, super-resolution, modality conversion, and reconstruction in medical imaging. Radiol. Phys. Technol. 2019, 12, 235–248.
- El Mahdi, B.M.; Abdelkrim, N.; Abdenour, A.; Zohir, I.; Wassim, B.; Fethi, D. A novel multispectral maritime target classification based on ThermalGAN (RGB-to-thermal image translation). J. Exp. Theor. Artif. Intell. 2023, 1, 1–21.
- El Mahdi, B.M. A novel multispectral vessel recognition based on RGB-to-thermal image translation. Unmanned Syst. 2024, 12, 627–640.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 139–144.
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
- Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784.
- Guo, Z.; Zhang, Z.; Cai, Q.; Liu, J.; Fan, Y.; Mei, S. MS-GAN: Learn to memorize scene for unpaired SAR-to-optical image translation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 11467–11484.
- Guo, J.; He, C.; Zhang, M.; Li, Y.; Gao, X.; Song, B. Edge-preserving convolutional generative adversarial networks for SAR-to-optical image translation. Remote Sens. 2021, 13, 3575.
- Zhang, M.; Xu, J.; He, C.; Shang, W.; Li, Y.; Gao, X. SAR-to-optical image translation via thermodynamics-inspired network. arXiv 2023, arXiv:2305.13839.
- Tasar, O.; Happy, S.L.; Tarabalka, Y.; Alliez, P. SemI2I: Semantically consistent image-to-image translation for domain adaptation of remote sensing data. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 26 September–2 October 2020; pp. 1837–1840.
- Merkle, N.; Auer, S.; Mueller, R.; Reinartz, P. Exploring the potential of conditional adversarial networks for optical and SAR image matching. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1811–1820.
- Yang, X.; Zhao, J.; Wei, Z.; Wang, N.; Gao, X. SAR-to-optical image translation based on improved CGAN. Pattern Recognit. 2022, 121, 108208.
- Li, Y.; Fu, R.; Meng, X.; Jin, W.; Shao, F. A SAR-to-optical image translation method based on conditional generation adversarial network (cGAN). IEEE Access 2020, 8, 60338–60343.
- Wei, J.; Zou, H.; Sun, L.; Cao, X.; He, S.; Liu, S.; Zhang, Y. CFRWD-GAN for SAR-to-optical image translation. Remote Sens. 2023, 15, 2547.
- Enomoto, K.; Sakurada, K.; Wang, W.; Kawaguchi, N.; Matsuoka, M.; Nakamura, R. Image translation between SAR and optical imagery with generative adversarial nets. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 1752–1755.
- Katkovnik, V.; Ponomarenko, M.; Egiazarian, K. Complex-valued image denoising based on group-wise complex-domain sparsity. arXiv 2017, arXiv:1711.00362.
- Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. SENTINEL 2: ESA's optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36.
- Roy, D.P.; Wulder, M.A.; Loveland, T.R.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Zhu, Z. Landsat-8: Science and product vision for terrestrial global change research. Remote Sens. Environ. 2014, 145, 154–172.
- Contreras, D.; Blaschke, T.; Tiede, D.; Jilge, M. Monitoring recovery after earthquakes through the integration of remote sensing, GIS, and ground observations: The case of L'Aquila (Italy). Cartogr. Geogr. Inf. Sci. 2016, 43, 115–133.
- Mazzanti, P.; Scancella, S.; Virelli, M.; Frittelli, S.; Nocente, V.; Lombardo, F. Assessing the performance of multi-resolution satellite SAR images for post-earthquake damage detection and mapping aimed at emergency response management. Remote Sens. 2022, 14, 2210.
- Aoki, Y.; Furuya, M.; De Zan, F. L-band synthetic aperture radar: Current and future applications to Earth sciences. Earth Planets Space 2021, 73, 56.
- Jiang, B.; Dong, X.; Deng, M.; Wan, F.; Wang, T.; Li, X.; Zhang, G.; Cheng, Q.; Lv, S. Geolocation accuracy validation of high-resolution SAR satellite images based on the Xianning validation field. Remote Sens. 2023, 15, 1794.
- Freeman, A.; Durden, S.L. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973.
- Pohl, C.; Van Genderen, J.L. Multisensor image fusion in remote sensing: Concepts, methods and applications. Int. J. Remote Sens. 1998, 19, 823–854.
- Ulaby, F.T.; Moore, R.K.; Fung, A.K. Microwave Remote Sensing: Active and Passive. Volume 1—Microwave Remote Sensing Fundamentals and Radiometry; Artech House: London, UK, 1981.
- Toriya, H.; Dewan, A.; Kitahara, I. SAR2OPT: Image alignment between multi-modal images using generative adversarial networks. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 923–926.
- Qin, R.; Liu, T. A review of landcover classification with very-high resolution remotely sensed optical images—Analysis unit, model scalability and transferability. Remote Sens. 2022, 14, 646.
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- O'Shea, K.; Nash, R. An introduction to convolutional neural networks. arXiv 2015, arXiv:1511.08458.
- Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation. arXiv 2017, arXiv:1710.10196.
- Wang, L.; Xu, X.; Yu, Y.; Yang, R.; Gui, R.; Xu, Z.; Pu, F. SAR-to-optical image translation using supervised cycle-consistent adversarial networks. IEEE Access 2019, 7, 129136–129149.
- Subramanyam, M.V.; Prasad, G. A new approach for SAR image denoising. Int. J. Electr. Comput. Eng. 2015, 5, 5.
- Devapal, D.; Kumar, S.S.; Sethunadh, R. Discontinuity adaptive SAR image despeckling using curvelet-based BM3D technique. Int. J. Wavelets Multiresolut. Inf. Process. 2019, 17, 1950016.
- Malik, M.; Azim, I.; Dar, A.H.; Asghar, S. An adaptive SAR despeckling method using cuckoo search algorithm. Intell. Autom. Soft Comput. 2021, 29, 1.
- Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
- Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 2008, 1, 248–272.
- Cai, J.F.; Chan, R.H.; Shen, Z. A framelet-based image inpainting algorithm. Appl. Comput. Harmon. Anal. 2009, 24, 131–149.
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 184–199.
- Zhang, T.; Wiliem, A.; Yang, S.; Lovell, B. TV-GAN: Generative adversarial network based thermal to visible face recognition. In Proceedings of the 2018 International Conference on Biometrics (ICB), Gold Coast, QLD, Australia, 20–23 February 2018; pp. 174–181.
- Vaswani, A. Attention is all you need. arXiv 2017, arXiv:1706.03762.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Vedaldi, A. Gather-excite: Exploiting feature context in convolutional neural networks. Adv. Neural Inf. Process. Syst. 2018, 31, 1–11.
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 11534–11542.
- Tanchenko, A. Visual-PSNR measure of image quality. J. Vis. Commun. Image Represent. 2014, 25, 874–878.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595.
- Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402.
- Kruse, F.A.; Lefkoff, A.B.; Boardman, J.W.; Heidebrecht, K.B.; Shapiro, A.T.; Barloon, P.J.; Goetz, A.F.H. The Spectral Image Processing System (SIPS)—Interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163.
Multispectral Satellite | MS Image Resolution | SAR Satellite | SAR Image Resolution |
---|---|---|---|
Landsat-8 | 15 m | Gaofen-3 | 1 m |
Sentinel-2 | 10 m | Seasat | 25 m |
WorldView-1 | 0.5 m | TerraSAR-X | 6 m |
WorldView-2 | 1.8 m | ALOS | 10 m |
QuickBird | 2.44 m | Sentinel-1 | 2 m |
Models | SSIM↑ | PSNR↑ | LPIPS↓ | MS-SSIM↑ | SAM (°)↓ |
---|---|---|---|---|---|
pix2pix | 0.1369 | 9.611 | 0.6122 | 0.1637 | 14.74 |
BicycleGAN | 0.1146 | 9.746 | 0.6560 | 0.1410 | 14.76 |
CycleGAN | 0.1760 | 11.35 | 0.4990 | 0.2241 | 13.42 |
AttentionGAN | 0.2092 | 11.62 | 0.4224 | 0.2593 | 11.37 |
U-GAT-IT | 0.1956 | 10.89 | 0.4659 | 0.2581 | 11.29 |
S2MS-GAN (Ours) | 0.2319 | 12.47 | 0.4081 | 0.2721 | 10.41 |
Models | SSIM↑ | PSNR↑ | LPIPS↓ | MS-SSIM↑ | SAM (°)↓ |
---|---|---|---|---|---|
Baseline | 0.1760 | 11.35 | 0.4990 | 0.2241 | 13.42 |
+TV-BM3D | 0.1902 | 11.93 | 0.4817 | 0.2577 | 12.93 |
+Spectral Attention | 0.1874 | 11.41 | 0.4385 | 0.2558 | 11.50 |
+All (S2MS-GAN) | 0.2319 | 12.47 | 0.4081 | 0.2721 | 10.41 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, Y.; Han, Q.; Yang, H.; Hu, H. High-Resolution SAR-to-Multispectral Image Translation Based on S2MS-GAN. Remote Sens. 2024, 16, 4045. https://doi.org/10.3390/rs16214045