Method for Generating Pseudo-NDVI from RVI Derived from Satellite-Borne SAR Imagery Data Using CycleGAN and pix2pix Models
Abstract
1. Introduction
2. Related Studies
3. Research Background
3.1. Problem Statement
3.2. Proposed Approach
4. Proposed Method
4.1. Overall Workflow
- (1) Data Acquisition: the target area is selected, and well-conditioned (cloud-free) Sentinel-2 MSI optical imagery within the target year is searched for to calculate NDVI.
- (2) Gap Identification: the NDVI time series is examined to pinpoint periods in which optical imagery is unavailable due to cloud cover or other atmospheric conditions.
- (3) SAR Processing: when gaps are identified, Sentinel-1 SAR data covering those periods are acquired; ortho-rectification (terrain correction) is applied, and RVI is calculated from the VH and VV polarization channels.
- (4) Pseudo-NDVI Generation: the calculated RVI data are fed to the trained GAN models (pix2pix or CycleGAN) to convert RVI to pseudo-NDVI.
- (5) Time Series Completion: real NDVI from cloud-free periods is combined with pseudo-NDVI from gap periods to produce a continuous, complete annual NDVI time series suitable for crop monitoring and phenological analysis.
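Step (5) above can be sketched as a simple gap-filling routine. `complete_ndvi_series` is a hypothetical helper (not from the paper), and the NDVI values are illustrative only:

```python
import numpy as np

def complete_ndvi_series(real_ndvi, pseudo_ndvi):
    """Fill cloud-gap entries (NaN) in a real NDVI time series with
    pseudo-NDVI values generated from SAR-derived RVI."""
    real_ndvi = np.asarray(real_ndvi, dtype=float)
    return np.where(np.isnan(real_ndvi), pseudo_ndvi, real_ndvi)

# Example: two cloud-covered dates (NaN) are filled from pseudo-NDVI.
real   = [0.31, np.nan, 0.52, np.nan, 0.68]
pseudo = [0.30, 0.42,   0.50, 0.61,   0.66]
print(complete_ndvi_series(real, pseudo))  # [0.31 0.42 0.52 0.61 0.68]
```

Real NDVI observations take priority wherever they exist; pseudo-NDVI is only substituted into the gaps.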
4.2. GAN-Based RVI-to-NDVI Conversion Framework
- (1) Vegetation-specific feature extraction: RVI emphasizes volume scattering from vegetation canopies while filtering out non-vegetation signals, providing a more focused input for the conversion task.
- (2) Enhanced correlation with biomass: RVI correlates more strongly with vegetation biomass and leaf area index (LAI) than raw backscatter coefficients, and these structural parameters often align with NDVI during active growth periods.
- (3) Simplified conversion task: the transformation becomes index-to-index translation rather than the more complex task of converting microwave backscatter physics to optical reflectance ratios.
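For reference, the two indices can be computed as follows. The dual-polarization RVI formulation `4·VH/(VV + VH)` is the one commonly used with Sentinel-1 (e.g., Mandal et al.); the paper's exact variant is not reproduced here, so treat this as an assumption:

```python
import numpy as np

def rvi_dual_pol(vv, vh):
    """Dual-polarization Radar Vegetation Index from linear-power
    backscatter: RVI = 4*VH / (VV + VH). Values grow with volume
    scattering from vegetation canopies."""
    vv, vh = np.asarray(vv, float), np.asarray(vh, float)
    return 4.0 * vh / (vv + vh)

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red/NIR reflectance."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

print(rvi_dual_pol(vv=0.75, vh=0.25))  # 1.0
print(ndvi(red=0.25, nir=0.75))        # 0.5
```

Both indices are bounded ratios, which is what makes the RVI-to-NDVI mapping an index-to-index translation rather than a raw-backscatter-to-reflectance one.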
- (1) pix2pix (Supervised Learning): utilizes paired RVI-NDVI training data to learn a direct pixel-level mapping with combined adversarial and reconstruction losses.
- (2) CycleGAN (Unsupervised Learning): employs cycle-consistency constraints to learn bidirectional mappings between the RVI and NDVI domains. Although designed for unpaired data, it is trained with paired data in this study for a fair comparison.
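The combined adversarial and reconstruction objective of pix2pix can be sketched as below; the non-saturating adversarial term and λ = 100 follow the original pix2pix paper and are assumptions about this study's exact configuration:

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake, target, lam=100.0):
    """Generator objective: adversarial term (on discriminator scores for
    generated patches) plus a lambda-weighted L1 reconstruction term."""
    adv = -np.mean(np.log(d_fake + 1e-12))  # push scores toward "real"
    l1 = np.mean(np.abs(fake - target))     # pixel-level reconstruction
    return adv + lam * l1

# If the discriminator is fully fooled (scores near 1) and the output
# matches the target exactly, the loss approaches zero.
fake = target = np.zeros((4, 4))
print(round(pix2pix_generator_loss(np.array([0.999]), fake, target), 3))  # 0.001
```

The L1 term is what makes pix2pix supervised: it directly penalizes per-pixel deviation from the paired NDVI target, which CycleGAN's unpaired objective cannot do.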
4.3. GAN Architecture Details
4.3.1. pix2pix (Supervised Learning)
- (1) Encoder: series of convolutional blocks with downsampling (Conv-BatchNorm-LeakyReLU) that progressively reduce spatial resolution while increasing channel depth.
- (2) Decoder: series of transposed convolutional blocks with upsampling (ConvTranspose-BatchNorm-ReLU) that reconstruct the spatial resolution.
- (3) Skip Connections: direct connections from encoder layers to the corresponding decoder layers (U-Net structure) that preserve spatial details.
- (4) Output: single-channel pseudo-NDVI with tanh activation, normalized to the range [−1, 1].

The discriminator follows the PatchGAN design:
- (1) A convolutional architecture that classifies N × N image patches as real or fake (70 × 70 receptive field per prediction).
- (2) More efficient than full-image discrimination, and better at preserving local texture details.
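The 70 × 70 receptive field follows from the canonical PatchGAN stack of 4 × 4 convolutions (three stride-2 layers, then two stride-1 layers); this layer listing is the standard pix2pix configuration and is assumed here rather than taken from the paper:

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers, computed output-to-input.
    Each layer is a (kernel_size, stride) pair."""
    rf = 1
    for k, s in reversed(layers):
        rf = (rf - 1) * s + k
    return rf

# Canonical 70x70 PatchGAN: C64-C128-C256 (stride 2), then C512 and the
# 1-channel output layer (stride 1), all with 4x4 kernels.
patchgan = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
print(receptive_field(patchgan))  # 70
```

Each scalar in the discriminator's output map therefore judges one 70 × 70 input patch, which is why local texture is discriminated well without examining the full image at once.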
4.3.2. CycleGAN (Unsupervised Learning)
- (1) Encoder: convolutional layers with downsampling to extract hierarchical features.
- (2) Transformer: residual blocks maintaining spatial resolution for feature transformation.
- (3) Decoder: transposed convolutional layers with upsampling to reconstruct output images.
- (1) Generator G: converts RVI to pseudo-NDVI (RVI → NDVI).
- (2) Generator F: converts NDVI to RVI (NDVI → RVI).
- (1) Discriminator DX: distinguishes real RVI images from generated ones.
- (2) Discriminator DY: distinguishes real NDVI images from generated ones.
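The cycle-consistency constraint ties the two generators together: mapping an RVI patch to NDVI and back (and vice versa) should reproduce the input. A minimal NumPy sketch with toy, perfectly invertible stand-in generators:

```python
import numpy as np

def l1(a, b):
    return np.mean(np.abs(a - b))

def cycle_consistency_loss(G, F, x_rvi, y_ndvi):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1: an RVI patch mapped
    to NDVI and back should return to itself, and likewise for NDVI."""
    return l1(F(G(x_rvi)), x_rvi) + l1(G(F(y_ndvi)), y_ndvi)

# Toy generators: a perfectly invertible pair gives zero cycle loss.
G = lambda x: x + 1.0   # stand-in for RVI -> NDVI
F = lambda y: y - 1.0   # stand-in for NDVI -> RVI
x, y = np.zeros((4, 4)), np.ones((4, 4))
print(cycle_consistency_loss(G, F, x, y))  # 0.0
```

In training, this term is what lets CycleGAN learn without paired supervision: any G/F pair that loses information about the input is penalized.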
4.4. Training Procedure
- Optimizer: Adam optimizer with β1 = 0.5, β2 = 0.999
- Learning Rate: Initial rate of 0.0002 with linear decay after 100 epochs
- Batch Size: 1 (to accommodate varying image sizes)
- Epochs: 200
- Alternating training of generator and discriminator
- Update discriminator once per generator update
- Learning rate decay in later epochs to stabilize training
- Early stopping based on validation loss to prevent overfitting
- RVI: Normalized to [−1, 1] based on dataset statistics
- NDVI: Already in range [−1, 1] (no additional normalization required)
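The RVI normalization step can be sketched as a linear map onto the tanh output range; the dataset minimum/maximum of 0 and 2 used in the example are placeholder statistics, not values from the study:

```python
import numpy as np

def to_unit_range(x, lo, hi):
    """Linearly map values from [lo, hi] (dataset statistics) to [-1, 1],
    matching the tanh output range of the generators."""
    return 2.0 * (np.asarray(x, float) - lo) / (hi - lo) - 1.0

def from_unit_range(x, lo, hi):
    """Inverse mapping, e.g. to read generated pseudo-NDVI back out."""
    return (np.asarray(x, float) + 1.0) * (hi - lo) / 2.0 + lo

rvi = np.array([0.0, 1.0, 2.0])      # assumed dataset min/max of 0 and 2
print(to_unit_range(rvi, 0.0, 2.0))  # [-1.  0.  1.]
```

NDVI needs no such mapping because it already lies in [−1, 1] by construction.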
5. Experiment
5.1. Data Used
5.2. Data Preprocessing
5.3. Dataset Construction
5.4. Evaluation Metrics
5.5. Intensive Study Area and Data
6. Results
6.1. Overall Performance Comparison
6.2. Qualitative Assessment
6.3. Impact of RVI-NDVI Correlation
- Group A: Correlation 0.7–1.0 (high correlation)
- Group B: Correlation 0.4–0.7 (moderate correlation)
- Group C: Correlation <0.4 (low correlation)
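Group assignment reduces to a Pearson correlation followed by thresholding at 0.7 and 0.4; `correlation_group` and the sample series below are hypothetical, not from the paper's data:

```python
import numpy as np

def correlation_group(rvi_series, ndvi_series):
    """Assign a scene to group A/B/C by the Pearson correlation between
    its RVI and NDVI time series (thresholds as in Section 6.3)."""
    r = np.corrcoef(rvi_series, ndvi_series)[0, 1]
    if r >= 0.7:
        return "A", r
    if r >= 0.4:
        return "B", r
    return "C", r

# A near-linearly related pair falls in group A.
rvi  = [0.8, 1.2, 1.9, 2.4]                 # hypothetical values
ndvi = [0.30, 0.45, 0.62, 0.75]             # hypothetical values
group, r = correlation_group(rvi, ndvi)
print(group)  # A
```

Note that a strongly negative correlation also lands in group C, since the translation model has no consistent monotone relationship to exploit in that case.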
6.4. Performance by Land Cover Type
6.5. Multi-Input Enhancement
6.6. Seasonal Variations
6.7. Metric Comparisons
7. Discussion
7.1. Why RVI-to-NDVI Works Better than Direct SAR-to-NDVI
7.2. Limitations and Failure Cases
7.3. pix2pix vs. CycleGAN: Why Supervised Learning Wins
7.4. Practical Implications for Crop Monitoring
7.5. Comparison with Alternative Approaches
7.6. Failure Analysis
8. Conclusions
- Key Methodological Innovation
- Advances in Understanding GAN Architecture Performance
- Critical Insights on Performance Dependencies
- Practical Framework for Temporal Gap Filling
- Recognition of Fundamental Physical Constraints
- Roadmap for Advancement
- Specific Technical Targets
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- FAO. The State of Food and Agriculture 2021: Making Agrifood Systems More Resilient to Shocks and Stresses; Food and Agriculture Organization of the United Nations: Rome, Italy, 2021. [Google Scholar]
- Lobell, D.B.; Field, C.B.; Ramankutty, N. Climate trends and global crop production since 1980. Science 2020, 319, 607–610. [Google Scholar] [CrossRef] [PubMed]
- Zhang, X.; Friedl, M.A.; Schaaf, C.B. Global vegetation phenology from Moderate Resolution Imaging Spectroradiometer (MODIS): Evaluation of global patterns and temporal dynamics. Remote Sens. Environ. 2019, 190, 215–230. [Google Scholar]
- Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
- Pettorelli, N.; Vik, J.O.; Mysterud, A.; Gaillard, J.M.; Tucker, C.J.; Stenseth, N.C. Using the satellite-derived NDVI to assess ecological responses to environmental change. Trends Ecol. Evol. 2005, 20, 503–510. [Google Scholar] [CrossRef]
- Huete, A.R.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
- Lucas, R.M.; Bunting, P.; Paterson, M.; Chisholm, L.A. Classification of Australian forest communities using aerial photography, CASI and HyMap imagery. Remote Sens. Environ. 2012, 126, 111–123. [Google Scholar]
- Kim, Y.; van Zyl, J.J. A time-series approach to estimate soil moisture using polarimetric radar data. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2382–2393. [Google Scholar]
- Veloso, A.; Mermoz, S.; Bouvet, A.; Le Toan, T.; Planells, M.; Dejoux, J.F.; Ceschia, E. Understanding the temporal behavior of crops using Sentinel-1 and Sentinel-2-like data for agricultural applications. Remote Sens. Environ. 2017, 199, 415–426. [Google Scholar] [CrossRef]
- Ayari, H.; Touzi, R.; Bannour, H. NDVI Estimation Using Sentinel-1 Data over Wheat Fields in Tunisia. Remote Sens. 2024, 16, 1455. [Google Scholar]
- Ramathilagam, A.B.; Natarajan, S.; Kumar, A. SAR2NDVI: A pix2pix generative adversarial network for reconstructing field-level normalized difference vegetation index time series using Sentinel-1 synthetic aperture radar data. J. Appl. Remote Sens. 2023, 17, 024514. [Google Scholar] [CrossRef]
- Li, Q.; Fang, S.; Chen, J. SAR-optical image translation using conditional adversarial networks for vegetation monitoring. ISPRS J. Photogramm. Remote Sens. 2022, 192, 112–124. [Google Scholar]
- Sonobe, R.; Seki, H.; Shimaura, H.; Mochizuki, K.-I.; Saito, G.; Yoshino, K. Simulation of NDVI imagery using Generative Adversarial Network and Sentinel-1 C-SAR data. Photogramm. Remote Sens. 2022, 61, 80–87. [Google Scholar] [CrossRef]
- Vozhehova, R.; Maliarchuk, M.; Biliaieva, I.; Lykhovyd, P.; Maliarchuk, A.; Tomnytskyi, A. Spring Row Crops Productivity Prediction Using Normalized Difference Vegetation Index. J. Ecol. Eng. 2020, 21, 176–182. [Google Scholar] [CrossRef] [PubMed]
- Pipia, L.; Muñoz-Marí, J.; Amin, E.; Belda, S.; Camps-Valls, G.; Verrelst, J. Fusing optical and SAR time series for LAI gap filling with multioutput Gaussian processes. Remote Sens. Environ. 2019, 235, 111452. [Google Scholar] [CrossRef]
- Zhang, H.; Li, H.; Lin, J.; Zhang, Y.; Fan, J.; Liu, H. Seg-CycleGAN: SAR-to-optical image translation guided by a downstream task. arXiv 2024, arXiv:2408.05777. [Google Scholar] [CrossRef]
- Grohnfeldt, C.; Schmitt, M.; Zhu, X.X. A Conditional Generative Adversarial Network to Fuse SAR and Multispectral Optical Imagery for Cloud and Haze Removal. In Proceedings of the IEEE IGARSS (International Geoscience and Remote Sensing Symposium), Valencia, Spain, 22–27 July 2018. [Google Scholar]
- Meraner, M.; Ebel, B.; Zhu, X.X.; Schmitt, M. Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion. ISPRS J. Photogramm. Remote Sens. 2020, 166, 333–346. [Google Scholar] [CrossRef]
- Mandal, D.; Kumar, V.; Ratha, D.; Dey, S.; Bhattacharya, A.; Lopez-Sanchez, J.M.; McNairn, H.; Rao, Y.S. Dual polarimetric radar vegetation index for crop growth monitoring using Sentinel-1 SAR data. Int. J. Appl. Earth Obs. Geoinf. 2020, 91, 102158. [Google Scholar] [CrossRef]
- Filgueiras, R.; Mantovani, E.C.; Althoff, D.; Filho, E.I.F.; da Cunha, F.F. Crop NDVI Monitoring Based on Sentinel 1. Remote Sens. 2019, 11, 1441. [Google Scholar] [CrossRef]
- Roßberg, T.; Schmitt, M. Estimating NDVI from Sentinel-1 SAR Data Using Deep Learning. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022. [Google Scholar]
- Arai, K. Modified pix2pixHD for Enhancing Spatial Resolution of Image for Conversion from SAR Images to Optical Images in Application of Landslide Area Detection. Information 2025, 16, 163. [Google Scholar] [CrossRef]
- Arai, K.; Nakaoka, Y.; Okumura, H. Method for Landslide Area Detection Based on EfficientNetV2 with Optical Image Converted from SAR Image Using pix2pixHD with Spatial Attention Mechanism in Loss Function. Information 2024, 15, 524. [Google Scholar] [CrossRef]
- Arai, K.; Nakaoka, Y.; Okumura, H. Method for Landslide Area Detection with RVI Data Which Indicates Base Soil Areas Changed from Vegetated Areas. Remote Sens. 2025, 17, 628. [Google Scholar] [CrossRef]
| Area Name | Area (km²) |
|---|---|
| Kamishihoro | 696.00 |
| Taiki | 815.67 |
| Shintoku | 1063.83 |
| Shikaoi | 404.70 |
| Shihoro | 259.19 |
| Otofuke | 466.02 |
| Shimizu | 402.25 |
| Memuro | 513.91 |
| Obihiro City | 619.34 |
| Sarabetu | 176.90 |
| Nakasatunai | 292.58 |
| Method | SSIM | PSNR (dB) | RMSE |
|---|---|---|---|
| CycleGAN | 0.5240 | 19.54 | 28.02 |
| pix2pix | 0.5667 | 22.24 | 20.54 |
| Improvement | +8.2% | +13.8% | −26.7% |
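The RMSE and PSNR columns above can be computed as follows; the 0–255 (8-bit) peak value is an assumption consistent with the RMSE magnitudes reported:

```python
import numpy as np

def rmse(ref, est):
    """Root-mean-square error between a reference and an estimated image."""
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    return float(np.sqrt(np.mean((ref - est) ** 2)))

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB for images on a 0..peak scale."""
    return float(20.0 * np.log10(peak / rmse(ref, est)))

ref = np.array([[100.0, 120.0], [140.0, 160.0]])
est = ref + 20.0   # uniform error of 20 grey levels
print(rmse(ref, est))            # 20.0
print(round(psnr(ref, est), 2))  # 22.11
```

Lower RMSE directly implies higher PSNR, which is why the two columns move together in the table.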
| RVI-NDVI Correlation (x) | Group |
|---|---|
| 0.7 ≤ x ≤ 1.0 | A |
| 0.4 ≤ x < 0.7 | B |
| x < 0.4 | C |
| Metric | Method | Annual Change | Improvement (%) |
|---|---|---|---|
| SSIM | Conventional | 0.45–0.55 | 24.4–25.5 |
| SSIM | Proposed | 0.56–0.69 | |
| PSNR (dB) | Conventional | 18.0–22.0 | 22.7–25.0 |
| PSNR (dB) | Proposed | 22.5–27.0 | |
| MAE | Conventional | 16.0–25.2 | 45.5–62.6 |
| MAE | Proposed | 11.0–15.5 | |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Arai, K.; Maruta, R.; Okumura, H. Method for Generating Pseudo-NDVI from RVI Derived from Satellite-Borne SAR Imagery Data Using CycleGAN and pix2pix Models. Information 2026, 17, 154. https://doi.org/10.3390/info17020154

