Search Results (816)

Search Parameters:
Keywords = optical and synthetic aperture radar

23 pages, 6167 KiB  
Article
Assessing Burned Area Detection in Indonesia Using the Stacking Ensemble Neural Network (SENN): A Comparative Analysis of C- and L-Band Performance
by Dodi Sudiana, Anugrah Indah Lestari, Mia Rizkinia, Indra Riyanto, Yenni Vetrita, Athar Abdurrahman Bayanuddin, Fanny Aditya Putri, Tatik Kartika, Argo Galih Suhadha, Atriyon Julzarika, Shinichi Sobue, Anton Satria Prabuwono and Josaphat Tetuko Sri Sumantyo
Computers 2025, 14(8), 337; https://doi.org/10.3390/computers14080337 - 18 Aug 2025
Abstract
Burned area detection plays a critical role in assessing the impact of forest and land fires, particularly in Indonesia, where both peatland and non-peatland areas are increasingly affected. Optical remote sensing has been widely used for this task, but its effectiveness is limited by persistent cloud cover in tropical regions. Synthetic Aperture Radar (SAR) offers a cloud-independent alternative for burned area mapping. This study investigates the performance of a Stacking Ensemble Neural Network (SENN) model using polarimetric features derived from both C-band (Sentinel-1) and L-band (Advanced Land Observing Satellite-2 Phased Array L-band Synthetic Aperture Radar, ALOS-2/PALSAR-2) data. The analysis covers three representative sites in Indonesia: peatland areas in (1) Rokan Hilir and (2) Merauke, and a non-peatland area in (3) Bima and Dompu. Validation is conducted using high-resolution PlanetScope imagery (Planet Labs PBC, San Francisco, CA, USA). The results show that the SENN model consistently outperforms conventional artificial neural network (ANN) approaches across most evaluation metrics. L-band SAR data yields superior performance to C-band data, particularly in peatland areas, with overall accuracy reaching 93–96% and precision between 92% and 100%. In non-peatland regions, the method achieves 76% accuracy and 89% recall, with lower performance in dry, hilly savanna landscapes. These findings demonstrate the effectiveness of the SENN, especially with L-band SAR, in improving burned area detection across diverse land types, supporting more reliable fire monitoring efforts in Indonesia.
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
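The stacking idea behind the SENN can be illustrated with a toy example: several base learners output burned-probability estimates, and a meta-learner is trained on their stacked outputs. This is a minimal numpy sketch, not the paper's architecture; the simulated base predictions and the logistic-regression meta-learner stand in for the trained neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy burned (1) / unburned (0) labels and three "base learner" probability
# outputs: simulated stand-ins for base networks of different skill levels.
y = rng.integers(0, 2, 200)
base_preds = np.stack(
    [np.clip(y + rng.normal(0, s, 200), 0, 1) for s in (0.3, 0.4, 0.5)],
    axis=1,
)  # shape (200, 3)

# Meta-learner: logistic regression on the stacked base outputs,
# fitted with plain gradient descent on the log-loss.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(base_preds @ w + b)))
    w -= 0.5 * base_preds.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

stacked = (1.0 / (1.0 + np.exp(-(base_preds @ w + b))) > 0.5).astype(int)
acc = float((stacked == y).mean())
```

In a real stacking ensemble the meta-learner is fitted on out-of-fold base predictions to avoid leakage; that step is omitted here for brevity.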

24 pages, 125401 KiB  
Article
Continuous Monitoring of Fire-Induced Forest Loss Using Sentinel-1 SAR Time Series and a Bayesian Method: A Case Study in Paragominas, Brazil
by Marta Bottani, Laurent Ferro-Famil, René Poccard-Chapuis and Laurent Polidori
Remote Sens. 2025, 17(16), 2822; https://doi.org/10.3390/rs17162822 - 14 Aug 2025
Abstract
Forest fires, intensified by climate change, threaten tropical ecosystems by accelerating biodiversity loss, releasing carbon emissions, and altering hydrological cycles. Continuous detection of fire-induced forest loss is therefore critical. However, commonly used optical-based methods often face limitations, particularly due to cloud cover and coarse spatial resolution. This study explores the use of C-band Sentinel-1 Synthetic Aperture Radar (SAR) time series, combined with Bayesian Online Changepoint Detection (BOCD), for detecting and continuously monitoring fire-induced vegetation loss in forested areas. Three BOCD variants are evaluated: two single-polarization approaches individually using VV and VH reflectivities, and a dual-polarization approach (pol-BOCD) integrating both channels. The analysis focuses on a fire-affected area in Baixo Uraim (Paragominas, Brazil), supported by field-validated reference data. BOCD performance is compared against widely used optical products, including MODIS and VIIRS active fire and burned area data, as well as Sentinel-2-based differenced Normalized Burn Ratio (dNBR) assessments. Results indicate that pol-BOCD achieves spatial accuracy comparable to dNBR (88.2% agreement), while enabling detections within a delay of three Sentinel-1 acquisitions. These findings highlight the potential of SAR-based BOCD for rapid, cloud-independent monitoring. While SAR enables continuous detection regardless of atmospheric conditions, optical imagery remains essential for characterizing the type and severity of change.
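The dNBR reference product used above is computed from pre- and post-fire Normalized Burn Ratio images, where NBR is a normalized difference of near-infrared and shortwave-infrared reflectance (Sentinel-2 bands B8 and B12). A minimal sketch with toy reflectances; the 0.27 burned/unburned cutoff is only an illustrative value, as operational studies calibrate thresholds per site.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR surface reflectance."""
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir)

# Toy pre- and post-fire reflectances for two pixels (burned, unchanged).
pre_nir, pre_swir = np.array([0.45, 0.40]), np.array([0.15, 0.18])
post_nir, post_swir = np.array([0.20, 0.38]), np.array([0.30, 0.19])

# dNBR: vegetation loss raises SWIR and lowers NIR, so NBR drops after a fire.
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
burned = dnbr > 0.27  # illustrative moderate-severity threshold
```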

27 pages, 4588 KiB  
Article
Remote Sensing as a Sentinel for Safeguarding European Critical Infrastructure in the Face of Natural Disasters
by Miguel A. Belenguer-Plomer, Omar Barrilero, Paula Saameño, Inês Mendes, Michele Lazzarini, Sergio Albani, Naji El Beyrouthy, Mario Al Sayah, Nathan Rueche, Abla Mimi Edjossan-Sossou, Tommaso Monopoli, Edoardo Arnaudo and Gianfranco Caputo
Appl. Sci. 2025, 15(16), 8908; https://doi.org/10.3390/app15168908 - 13 Aug 2025
Abstract
Critical infrastructure, such as transport networks, energy facilities, and urban installations, is increasingly vulnerable to natural hazards and climate change. Remote sensing technologies, particularly satellite imagery, offer solutions for monitoring, evaluating, and enhancing the resilience of these vital assets. This paper explores how applications based on synthetic aperture radar (SAR) and optical satellite imagery contribute to the protection of critical infrastructure by enabling near real-time monitoring and early detection of natural hazards, providing actionable insights across various European critical infrastructure sectors. Case studies demonstrate the integration of remote sensing data into geographic information systems (GISs) for promoting situational awareness, risk assessment, and predictive modeling of natural disasters, including floods, landslides, wildfires, and earthquakes. Accordingly, this study underlines the role of remote sensing in supporting long-term infrastructure planning and climate adaptation strategies. The presented work supports the goals of the EU Horizon-funded ATLANTIS project, which focuses on strengthening the resilience of critical EU infrastructures by providing authorities and civil protection services with effective tools for managing natural hazards.

22 pages, 3460 KiB  
Article
Investigating the Earliest Identifiable Timing of Sugarcane at Early Season Based on Optical and SAR Time-Series Data
by Yingpin Yang, Jiajun Zou, Yu Huang, Zhifeng Wu, Ting Fang, Jia Xue, Dakang Wang, Yibo Wang, Jinnian Wang, Xiankun Yang and Qiting Huang
Remote Sens. 2025, 17(16), 2773; https://doi.org/10.3390/rs17162773 - 10 Aug 2025
Abstract
Early-season sugarcane identification plays a pivotal role in precision agriculture, enabling timely yield forecasting and informed policy-making. Compared to post-season crop identification, early-season identification faces unique challenges, including incomplete temporal observations and spectral ambiguity among crop types. Previous studies have not systematically investigated the capability of optical and synthetic aperture radar (SAR) data for early-season sugarcane identification, which may result in suboptimal accuracy and delayed identification timelines. Both the timing for reliable identification (≥90% accuracy) and the earliest achievable timepoint matching post-season performance remain undetermined, and it is still unknown which features are effective for early-season identification. To address these questions, this study integrated Sentinel-1 and Sentinel-2 data, extracted 10 spectral indices and 8 SAR features, and employed a random forest classifier for early-season sugarcane identification by means of progressive temporal analysis. Among the 18 individual features, LSWI (Land Surface Water Index) performed best. Through feature set accumulation, the seven-dimensional feature set (LSWI, IRECI (Inverted Red-Edge Chlorophyll Index), EVI (Enhanced Vegetation Index), PSSRa (Pigment Specific Simple Ratio a), NDVI (Normalized Difference Vegetation Index), VH backscatter coefficient, and REIP (Red-Edge Inflection Point)) first reached 90% accuracy by 30 June (early-elongation stage), with peak accuracy (92.80% F1-score) comparable to the post-season level reached by 19 August (mid-elongation stage). The early-season sugarcane maps demonstrated high agreement with post-season maps: the 30 June map achieved 88.01% field-level and 90.22% area-level consistency, while the 19 August map reached 91.58% and 93.11%, respectively. The results demonstrate that sugarcane can be reliably identified with accuracy comparable to post-season mapping as early as six months prior to harvest through the integration of optical and SAR data. This study develops a robust approach for early-season sugarcane identification, which could fundamentally enhance precision agriculture operations through timely crop status assessment.
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Food Security)
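LSWI and NDVI, two of the strongest features above, are both normalized-difference ratios of Sentinel-2 bands. A minimal sketch with toy reflectances; the band assignments (B4 red, B8 NIR, B11 SWIR) follow common Sentinel-2 usage.

```python
import numpy as np

def norm_diff(a, b):
    """Generic normalized difference index (a - b) / (a + b)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (a - b) / (a + b)

# Toy Sentinel-2 surface reflectances for two pixels.
red  = np.array([0.06, 0.12])   # B4
nir  = np.array([0.42, 0.30])   # B8
swir = np.array([0.18, 0.25])   # B11

ndvi = norm_diff(nir, red)    # greenness: NDVI = (NIR - Red) / (NIR + Red)
lswi = norm_diff(nir, swir)   # water content: LSWI = (NIR - SWIR) / (NIR + SWIR)
```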

22 pages, 28581 KiB  
Article
Remote Sensing Interpretation of Geological Elements via a Synergistic Neural Framework with Multi-Source Data and Prior Knowledge
by Kang He, Ruyi Feng, Zhijun Zhang and Yusen Dong
Remote Sens. 2025, 17(16), 2772; https://doi.org/10.3390/rs17162772 - 10 Aug 2025
Abstract
Geological elements are fundamental components of the Earth’s ecosystem, and accurately identifying their spatial distribution is essential for analyzing environmental processes, guiding land-use planning, and promoting sustainable development. Remote sensing technologies, combined with artificial intelligence algorithms, offer new opportunities for the efficient interpretation of geological features. However, in areas with dense vegetation coverage, the information directly extracted from single-source optical imagery is limited, thereby constraining interpretation accuracy. Supplementary inputs such as synthetic aperture radar (SAR), topographic features, and texture information, collectively referred to as sensitive features and prior knowledge, can improve interpretation, but their effectiveness varies significantly across time and space. This variability often leads to inconsistent performance in general-purpose models, thus limiting their practical applicability. To address these challenges, we construct a geological element interpretation dataset for Northwest China by incorporating multi-source data, including Sentinel-1 SAR imagery, Sentinel-2 multispectral imagery, sensitive features (such as the digital elevation model (DEM), texture features based on the gray-level co-occurrence matrix (GLCM), geological maps (GMs), and the normalized difference vegetation index (NDVI)), as well as prior knowledge (such as base geological maps). Using five mainstream deep learning models, we systematically evaluate the performance improvement brought by various sensitive features and prior knowledge in remote sensing-based geological interpretation. To handle disparities in spatial resolution, temporal acquisition, and noise characteristics across sensors, we further develop a multi-source complement-driven network (MCDNet) that integrates an improved feature rectification module (IFRM) and an attention-enhanced fusion module (AFM) to achieve effective cross-modal alignment and noise suppression. Experimental results demonstrate that the integration of multi-source sensitive features and prior knowledge leads to a 2.32–6.69% improvement in mIoU for geological element interpretation, with base geological maps and topographic features contributing most significantly to accuracy gains.
(This article belongs to the Special Issue Multimodal Remote Sensing Data Fusion, Analysis and Application)

23 pages, 6600 KiB  
Article
Research Analysis of the Joint Use of Sentinel-2 and ALOS-2 Data in Fine Classification of Tropical Natural Forests
by Qingyuan Xie, Wenxue Fu, Weijun Yan, Jiankang Shi, Chengzhi Hao, Hui Li, Sheng Xu and Xinwu Li
Forests 2025, 16(8), 1302; https://doi.org/10.3390/f16081302 - 10 Aug 2025
Abstract
Tropical natural forests play a crucial role in regulating the climate and maintaining global ecosystem functions. However, they face significant challenges due to human activities and climate change. Accurate classification of these forests can help reveal their spatial distribution patterns and support conservation efforts. This study employed four machine learning algorithms (random forest (RF), support vector machine (SVM), logistic regression (LR), and Extreme Gradient Boosting (XGBoost)) to classify tropical rainforests, tropical monsoon rainforests, tropical coniferous forests, broadleaf evergreen forests, and mangrove forests on Hainan Island using optical and synthetic aperture radar (SAR) multi-source remote sensing data. Among these, the XGBoost model achieved the best performance, with an overall accuracy of 0.89 and a Kappa coefficient of 0.82. Elevation and red-edge spectral bands were identified as the most important features for classification. Spatial distribution analysis revealed distinct patterns, such as mangrove forests occurring at the lowest elevations and tropical rainforests occupying middle and low elevations. The integration of optical and SAR data significantly enhanced classification accuracy and boundary recognition compared to using optical data alone. These findings underscore the effectiveness of machine learning and multi-source data for tropical forest classification, providing a valuable reference for ecological monitoring and sustainable management.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
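The Kappa coefficient reported above corrects overall accuracy for chance agreement between reference and predicted labels. A minimal computation from a confusion matrix; the matrix values below are invented for illustration.

```python
import numpy as np

def cohen_kappa(conf):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: predicted).
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from the marginals."""
    conf = np.asarray(conf, float)
    n = conf.sum()
    po = np.trace(conf) / n
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n**2
    return float((po - pe) / (1 - pe))

# Toy two-class result: 85% overall accuracy, kappa accounts for chance.
conf = np.array([[45,  5],
                 [10, 40]])
kappa = cohen_kappa(conf)
```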

20 pages, 11966 KiB  
Article
Improved Photosynthetic Accumulation Models for Biomass Estimation of Soybean and Cotton Using Vegetation Indices and Canopy Height
by Jinglong Liu, Jordi J. Mallorqui, Albert Aguasca, Xavier Fàbregas, Antoni Broquetas, Jordi Llop, Mireia Mas, Feng Zhao and Yanan Wang
Remote Sens. 2025, 17(15), 2736; https://doi.org/10.3390/rs17152736 - 7 Aug 2025
Abstract
Most crops accumulate above-ground biomass (AGB) through photosynthesis, inspiring the development of the Photosynthetic Accumulation Model (PAM) and the Simplified PAM (SPAM). Both models estimate AGB from time-series optical vegetation indices (VIs) and canopy height. To further enhance model performance and evaluate applicability across crop types, an improved PAM (IPAM) is proposed with three strategies: (i) numerical integration to reduce reliance on dense observations, (ii) a Fibonacci sequence-based structural correction to improve model accuracy, and (iii) non-photosynthetic area masking to reduce overestimation. Results from both soybean and cotton demonstrate the strong performance of the PAM-series models. Among them, the proposed IPAM model achieved the highest accuracy, with mean R2 and RMSE values of 0.89 and 207 g/m2 for soybean and 0.84 and 251 g/m2 for cotton, respectively. Among the vegetation indices tested, the recently proposed Near-Infrared Reflectance of vegetation (NIRv) and kernel normalized difference vegetation index (kNDVI) yielded the most accurate results. Both Monte Carlo simulations and theoretical error propagation analyses indicate a maximum deviation of approximately 20% for both crops, which is considered acceptable given the expected inter-annual variation in model transferability. In addition, this paper discusses alternatives to height measurements and evaluates the feasibility of incorporating synthetic aperture radar (SAR) VIs, providing practical insights into the model's adaptability across diverse data conditions.
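The numerical-integration strategy (i) can be illustrated with a trapezoidal integral of a VI time series, which is the accumulation step at the core of the PAM family. The dates and NDVI values below are invented; the full IPAM additionally couples the VI curve with canopy height and applies the corrections listed above.

```python
import numpy as np

# Day-of-year acquisition dates and a toy NDVI time series over a season.
doy  = np.array([150, 165, 180, 200, 220, 240])
ndvi = np.array([0.25, 0.40, 0.62, 0.78, 0.74, 0.60])

# Accumulated-photosynthesis proxy: trapezoidal integral of the VI curve.
# Irregular acquisition gaps are handled naturally by the interval widths.
accum = float(np.sum((ndvi[1:] + ndvi[:-1]) / 2.0 * np.diff(doy)))
```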

16 pages, 3847 KiB  
Article
Water Body Extraction Methods for SAR Images Fusing Sentinel-1 Dual-Polarized Water Index and Random Forest
by Min Zhai, Huayu Shen, Qihang Cao, Xuanhao Ding and Mingzhen Xin
Sensors 2025, 25(15), 4868; https://doi.org/10.3390/s25154868 - 7 Aug 2025
Abstract
Synthetic Aperture Radar (SAR) operates day and night and in all weather conditions; since it is unaffected by cloud and rain, it overcomes the limitations of optical remote sensing and provides irreplaceable technical support for efficient water body extraction. To address the low accuracy and unstable results of single-method water body extraction from Sentinel-1 SAR images, a water body extraction method fusing the Sentinel-1 dual-polarized water index and random forest is proposed. This method enhances extraction accuracy by integrating the results of two different algorithms, reducing the biases associated with any single method. Taking Dalu Lake, Yinfu Reservoir, and Huashan Reservoir as the study areas, water body information was extracted from SAR images using the dual-polarized water index, the random forest method, and the fusion method. Taking normalized difference water index (NDWI) results derived from Sentinel-2 optical images as a reference, the accuracy of the different extraction methods on SAR images was quantitatively evaluated. The experimental results show that, compared with the dual-polarized water index and the random forest method, the fusion method on average increased overall extraction accuracy and Kappa coefficients by 3.9% and 8.2%, respectively, in the Dalu Lake experimental area; by 1.8% and 3.5% in the Yinfu Reservoir experimental area; and by 4.1% and 8.1% in the Huashan Reservoir experimental area. The fusion of the dual-polarized water index and random forest therefore effectively improves the accuracy and reliability of water body extraction from SAR images.
(This article belongs to the Section Radar Sensors)
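A dual-polarized water index of the SDWI family combines VV and VH backscatter in a logarithm, exploiting the fact that smooth open water backscatters weakly in both channels. The sketch below is only illustrative: it assumes linear-scale backscatter, and both the scaling constant and the threshold are assumptions, not the paper's calibrated values.

```python
import numpy as np

def dual_pol_water_index(vv, vh, k=10.0):
    """SDWI-style index ln(k * VV * VH) from linear-scale backscatter.
    The constant k and any water/non-water threshold must be calibrated."""
    return np.log(k * np.asarray(vv, float) * np.asarray(vh, float))

# Toy linear-scale backscatter: water pixels return far less energy.
vv = np.array([0.002, 0.15])
vh = np.array([0.0004, 0.03])

idx = dual_pol_water_index(vv, vh)
water = idx < -8.0  # assumed cutoff for this toy example only
```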

26 pages, 14923 KiB  
Article
Multi-Sensor Flood Mapping in Urban and Agricultural Landscapes of the Netherlands Using SAR and Optical Data with Random Forest Classifier
by Omer Gokberk Narin, Aliihsan Sekertekin, Caglar Bayik, Filiz Bektas Balcik, Mahmut Arıkan, Fusun Balik Sanli and Saygin Abdikan
Remote Sens. 2025, 17(15), 2712; https://doi.org/10.3390/rs17152712 - 5 Aug 2025
Abstract
Floods are among the most harmful natural disasters, and climate change has made them increasingly dangerous to urban structures and agricultural fields. This research presents a comprehensive flood mapping approach that combines multi-sensor satellite data with machine learning to evaluate the July 2021 flood in the Netherlands. Twenty-five feature scenarios were developed by combining Sentinel-1, Landsat-8, and Radarsat-2 imagery, using backscattering coefficients together with the optical Normalized Difference Water Index (NDWI), Hue, Saturation, and Value (HSV) images, and Synthetic Aperture Radar (SAR)-derived Grey Level Co-occurrence Matrix (GLCM) texture features. The Random Forest (RF) classifier was optimized before application to two different flood-prone regions: Zutphen's urban area and Heijen's agricultural land. Results demonstrated that the multi-sensor fusion scenarios (S18, S20, and S25) achieved the highest classification performance, with overall accuracy reaching 96.4% (Kappa = 0.906–0.949) in Zutphen and 87.5% (Kappa = 0.754–0.833) in Heijen. Flood-class F1 scores across scenarios ranged from 0.742 to 0.969 in Zutphen and from 0.626 to 0.969 in Heijen. The addition of SAR texture metrics enhanced flood boundary identification in both urban and agricultural settings. Radarsat-2 provided limited additional benefit, as the freely available Sentinel-1 and Landsat-8 data proved more effective. This study demonstrates that combining SAR and optical features with texture information creates a powerful and scalable flood mapping system, and that RF classification performs well in diverse landscape settings.
(This article belongs to the Special Issue Remote Sensing Applications in Flood Forecasting and Monitoring)
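GLCM texture features like those used here summarize a local grey-level co-occurrence histogram into scalar statistics such as contrast. A minimal pure-numpy version for horizontal neighbour pairs; libraries such as scikit-image provide the general multi-offset, multi-angle implementation.

```python
import numpy as np

def glcm_contrast(img, levels=4):
    """GLCM contrast for horizontal neighbour pairs (offset (0, 1)),
    computed from a small integer-quantized image patch."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels), float)
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1          # count co-occurring grey-level pairs
    glcm /= glcm.sum()           # normalize to a joint probability
    r, c = np.indices(glcm.shape)
    return float((glcm * (r - c) ** 2).sum())

flat   = np.zeros((4, 4), int)           # homogeneous surface: no texture
checks = np.indices((4, 4)).sum(0) % 2   # alternating pattern: strong texture
c_flat, c_checks = glcm_contrast(flat), glcm_contrast(checks)
```

Smooth water yields near-zero contrast while rough flooded vegetation yields high contrast, which is why texture metrics sharpen flood boundaries.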

48 pages, 18119 KiB  
Article
Dense Matching with Low Computational Complexity for Disparity Estimation in the Radargrammetric Approach of SAR Intensity Images
by Hamid Jannati, Mohammad Javad Valadan Zoej, Ebrahim Ghaderpour and Paolo Mazzanti
Remote Sens. 2025, 17(15), 2693; https://doi.org/10.3390/rs17152693 - 3 Aug 2025
Abstract
Synthetic Aperture Radar (SAR) images and optical imagery have high potential for extracting digital elevation models (DEMs). The two main approaches for deriving elevation models from SAR data are interferometry (InSAR) and radargrammetry. Adapted from photogrammetric principles, radargrammetry relies on disparity model estimation as its core component. Matching strategies in radargrammetry typically follow local, global, or semi-global methodologies. Local methods can achieve higher accuracy, but in low-texture SAR images they require larger kernel sizes, leading to quadratic computational complexity. Conversely, global and semi-global models produce more consistent and higher-quality disparity maps but are computationally more intensive than local methods with small kernels and require more memory. In this study, inspired by the advantages of local matching algorithms, a novel, computationally efficient model is proposed for extracting corresponding pixels in SAR-intensity stereo images. To enhance accuracy, the proposed two-stage algorithm operates without an image pyramid structure. Notably, unlike traditional local and global models, the computational complexity of the proposed approach remains stable as the input size or kernel dimensions increase, while memory consumption stays low. Compared to a pyramid-based local normalized cross-correlation (NCC) algorithm and adaptive semi-global matching (SGM) models, the proposed method maintains accuracy comparable to adaptive SGM while reducing processing time by up to 50% relative to pyramid SGM and achieving a 35-fold speedup over the local NCC algorithm with an optimal kernel size. Validated on a Sentinel-1 stereo pair with a 10 m ground-pixel size, the proposed algorithm yields a DEM with an average accuracy of 34.1 m.
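Local matching of the kind benchmarked here scores candidate disparities with normalized cross-correlation (NCC). A one-dimensional sketch on a synthetic scanline with exhaustive search; the paper's two-stage algorithm is considerably more elaborate, and this only illustrates the NCC baseline it is compared against.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(1)
left = rng.normal(size=(1, 64))          # one synthetic intensity scanline
true_shift = 5
right = np.roll(left, true_shift, axis=1)  # "right" image shifted by 5 px

# Slide the reference window over candidate disparities, keep the best score.
win = left[:, 20:35]
scores = [ncc(win, right[:, 20 + d:35 + d]) for d in range(0, 11)]
best = int(np.argmax(scores))
```

The quadratic cost of this scheme comes from evaluating every disparity with a full window correlation, which is exactly what the proposed method avoids.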

19 pages, 2089 KiB  
Article
Estimation of Soil Organic Carbon Content of Grassland in West Songnen Plain Using Machine Learning Algorithms and Sentinel-1/2 Data
by Haoming Li, Jingyao Xia, Yadi Yang, Yansu Bo and Xiaoyan Li
Agriculture 2025, 15(15), 1640; https://doi.org/10.3390/agriculture15151640 - 29 Jul 2025
Abstract
Based on multi-source data, including synthetic aperture radar (Sentinel-1, S1) and optical satellite images (Sentinel-2, S2), topographic data, and climate data, this study explored the performance and feasibility of different variable combinations in predicting soil organic carbon (SOC) content using three machine learning models. We trained the three models on 244 samples from the study area, using 70% of the samples for training and 30% for testing. Nine experiments were conducted under three variable scenarios to select the optimal model, which was then used to produce high-precision predictions of SOC content. Our results indicated that both S1 and S2 data are significant for SOC prediction, and that multi-sensor data yielded more accurate results than single-sensor data. The random forest (RF) model based on the integration of S1, S2, topographic, and climate data achieved the highest prediction accuracy. In terms of variable importance, the S2 data exhibited the highest contribution to SOC prediction (31.03%). SOC contents within the study region varied between 4.16 g/kg and 29.19 g/kg, showing a clear spatial trend of higher concentrations in the east than in the west. Overall, the proposed model showed strong performance in estimating grassland SOC and offers valuable scientific guidance for grassland conservation in the western Songnen Plain.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
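The RF workflow above (244 samples, 70/30 split, multi-source covariates) can be sketched in miniature with bootstrap-aggregated regression stumps, a stripped-down stand-in for a full random forest. The data below are simulated, not the study's samples, and a real application would use a proper forest implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "covariates -> SOC" data: one informative feature plus noise,
# standing in for the S1/S2/terrain/climate predictors.
X = rng.uniform(0, 1, (244, 3))
y = 4.16 + 25.0 * X[:, 0] + rng.normal(0, 1.0, 244)   # toy SOC in g/kg

n_train = int(0.7 * len(X))                            # 70/30 split as in the study
Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

def fit_stump(X, y):
    """Best single-feature threshold split minimizing squared error."""
    best = (np.inf, 0, 0.0, y.mean(), y.mean())
    for f in range(X.shape[1]):
        for t in np.quantile(X[:, f], np.linspace(0.1, 0.9, 9)):
            m = X[:, f] <= t
            if m.all() or not m.any():
                continue
            left, right = y[m].mean(), y[~m].mean()
            err = ((y[m] - left) ** 2).sum() + ((y[~m] - right) ** 2).sum()
            if err < best[0]:
                best = (err, f, t, left, right)
    return best[1:]

# Bagging: fit each stump on a bootstrap resample, average the predictions.
stumps = [fit_stump(Xtr[i], ytr[i])
          for i in (rng.integers(0, n_train, n_train) for _ in range(50))]
pred = np.mean([np.where(Xte[:, f] <= t, l, r) for f, t, l, r in stumps], axis=0)
r2 = 1 - ((yte - pred) ** 2).sum() / ((yte - yte.mean()) ** 2).sum()
```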

25 pages, 9676 KiB  
Article
A Comparative Analysis of SAR and Optical Remote Sensing for Sparse Forest Structure Parameters: A Simulation Study
by Zhihui Mao, Lei Deng, Xinyi Liu and Yueyang Wang
Forests 2025, 16(8), 1244; https://doi.org/10.3390/f16081244 - 29 Jul 2025
Abstract
Forest structure parameters are critical for understanding and managing forest ecosystems, yet sparse forests have received limited attention in previous studies. To address this research gap, this study systematically evaluates and compares the sensitivity of active Synthetic Aperture Radar (SAR) and passive optical remote sensing to key forest structure parameters in sparse forests, including Diameter at Breast Height (DBH), Tree Height (H), Crown Width (CW), and Leaf Area Index (LAI). Using the computer-graphics-based Radiosity Applicable to Porous Individual Objects (RAPID) model, we simulated 38 distinct sparse forest scenarios to generate both SAR backscatter coefficients and optical reflectance across various wavelengths, polarization modes, and incidence/observation angles. Sensitivity was assessed using the coefficient of variation (CV). The results reveal that C-band SAR in HH polarization demonstrates the highest sensitivity to DBH (CV = −6.73%), H (CV = −52.68%), and LAI (CV = −63.39%), while optical data in the red band show the strongest response to CW variations (CV = 18.83%). The study further identifies optimal acquisition configurations, with SAR data achieving maximum sensitivity at smaller incidence angles and optical reflectance performing best at forward observation angles. This study addresses a critical gap by presenting the first systematic comparison of the sensitivity of multi-band SAR and VIS/NIR data to key forest structural parameters across sparsity gradients, thereby clarifying their applicability for monitoring young and middle-aged sparse forests with high carbon sequestration potential.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
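Sensitivity via the coefficient of variation compares how strongly each signal varies across the simulated scenarios relative to its mean. A minimal sketch with invented values; note that a negative mean (as with backscatter expressed in dB) yields a negative CV, consistent with the signs reported above.

```python
import numpy as np

def cv_percent(x):
    """Coefficient of variation (standard deviation / mean) in percent."""
    x = np.asarray(x, float)
    return float(100.0 * x.std() / x.mean())

# Invented backscatter (dB) across five scenarios of increasing parameter value:
# a strongly responding channel varies far more than a flat one.
sensitive = np.array([-12.0, -10.5, -9.0, -7.4, -6.1])
flat      = np.array([-12.0, -11.9, -12.1, -12.0, -11.8])
```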

23 pages, 8942 KiB  
Article
Optical and SAR Image Registration in Equatorial Cloudy Regions Guided by Automatically Point-Prompted Cloud Masks
by Yifan Liao, Shuo Li, Mingyang Gao, Shizhong Li, Wei Qin, Qiang Xiong, Cong Lin, Qi Chen and Pengjie Tao
Remote Sens. 2025, 17(15), 2630; https://doi.org/10.3390/rs17152630 - 29 Jul 2025
Abstract
The equator's unique combination of high humidity and temperature renders optical satellite imagery highly susceptible to persistent cloud cover. In contrast, synthetic aperture radar (SAR) offers a robust alternative due to its ability to penetrate clouds with microwave imaging. This study addresses the challenges of cloud-induced data gaps and cross-sensor geometric biases by proposing an advanced optical and SAR image-matching framework specifically designed for cloud-prone equatorial regions. We use a prompt-driven visual segmentation model with automatic prompt point generation to produce cloud masks that guide cross-modal feature matching and joint adjustment of optical and SAR data. This process results in a comprehensive digital orthophoto map (DOM) with high geometric consistency, retaining the fine spatial detail of optical data and the all-weather reliability of SAR. We validate our approach across four equatorial regions using five satellite platforms with varying spatial resolutions and revisit intervals. Even in areas with more than 50% cloud cover, our method maintains sub-pixel accuracy at manually measured check points and delivers comprehensive DOM products, establishing a reliable foundation for downstream environmental monitoring and ecosystem analysis.

20 pages, 2305 KiB  
Article
Research on Accurate Inversion Techniques for Forest Cover Using Spaceborne LiDAR and Multi-Spectral Data
by Yang Yi, Mingchang Shi, Jin Yang, Jinqi Zhu, Jie Li, Lingyan Zhou, Luqi Xing and Hanyue Zhang
Forests 2025, 16(8), 1215; https://doi.org/10.3390/f16081215 - 24 Jul 2025
Viewed by 365
Abstract
Fractional Vegetation Cover (FVC) is an important parameter to reflect vegetation growth and describe plant canopy structure. This study integrates both active and passive remote sensing, capitalizing on the complementary strengths of optical and radar data, and applies various machine learning algorithms to [...] Read more.
Fractional Vegetation Cover (FVC) is an important parameter for reflecting vegetation growth and describing plant canopy structure. This study integrates active and passive remote sensing, capitalizing on the complementary strengths of optical and radar data, and applies various machine learning algorithms to retrieve FVC. The results demonstrate that, for FVC retrieval, the optimal combination of optical remote sensing bands comprises B2 (490 nm), B5 (705 nm), B8 (833 nm), B8A (865 nm), and B12 (2190 nm) from Sentinel-2, achieving an Optimal Index Factor (OIF) of 522.50. ICESat-2 LiDAR data are more suitable for extracting FVC than GEDI data, especially at a height threshold of 1.5 m, where the correlation coefficient with field-measured FVC reaches 0.763. The optimal feature variable combinations for FVC retrieval vary among vegetation types, drawing on synthetic aperture radar, optical remote sensing, and terrain data. Among the three models tested (multiple linear regression, random forest, and support vector machine), the random forest model outperformed the others, with fitting correlation coefficients all exceeding 0.974 and root mean square errors below 0.084. Adding LiDAR data to optical remote sensing combined with machine learning can effectively improve the accuracy of remote sensing retrieval of vegetation coverage. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
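The Optimal Index Factor used above for band selection has a standard definition: the sum of the band standard deviations divided by the sum of the absolute pairwise correlations, so higher values indicate more information with less redundancy. A minimal sketch follows; the tiny sample vectors in the test are made-up illustrations, not Sentinel-2 reflectances.

```python
# Optimal Index Factor (OIF) for ranking band combinations:
# OIF = (sum of band standard deviations) / (sum of |pairwise correlations|).
from itertools import combinations
from math import sqrt

def _std(xs):
    """Sample standard deviation."""
    m = sum(xs) / len(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def _pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length vectors."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs)
                      * sum((y - my) ** 2 for y in ys))

def oif(bands):
    """OIF for a list of band sample vectors (one vector per band)."""
    std_sum = sum(_std(b) for b in bands)
    corr_sum = sum(abs(_pearson(a, b)) for a, b in combinations(bands, 2))
    return std_sum / corr_sum
```

In practice one would evaluate `oif` over every candidate band subset and keep the highest-scoring combination, which is how a figure like 522.50 for the B2/B5/B8/B8A/B12 subset would be obtained.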

23 pages, 24301 KiB  
Article
Robust Optical and SAR Image Registration Using Weighted Feature Fusion
by Ao Luo, Anxi Yu, Yongsheng Zhang, Wenhao Tong and Huatao Yu
Remote Sens. 2025, 17(15), 2544; https://doi.org/10.3390/rs17152544 - 22 Jul 2025
Viewed by 434
Abstract
Image registration constitutes the fundamental basis for the joint interpretation of synthetic aperture radar (SAR) and optical images. However, robust image registration remains challenging due to significant regional heterogeneity in remote sensing scenes (e.g., co-existing urban and marine areas within a single image). [...] Read more.
Image registration constitutes the fundamental basis for the joint interpretation of synthetic aperture radar (SAR) and optical images. However, robust image registration remains challenging due to significant regional heterogeneity in remote sensing scenes (e.g., co-existing urban and marine areas within a single image). To overcome this challenge, this article proposes a novel optical–SAR image registration method named Gradient and Standard Deviation Feature Weighted Fusion (GDWF). First, a Block-local standard deviation (Block-LSD) operator is proposed to extract block-based feature points with regional adaptability. Subsequently, a dual-modal feature description is developed, constructing both gradient-based descriptors and local standard deviation (LSD) descriptors for the neighborhoods surrounding the detected feature points. To further enhance matching robustness, a confidence-weighted feature fusion strategy is proposed. By establishing a reliability evaluation model for similarity measurement maps, the contribution weights of gradient features and LSD features are dynamically optimized, ensuring adaptive performance under varying conditions. To verify the effectiveness of the method, experiments on different optical and SAR datasets compare it with the state-of-the-art algorithms MOGF, CFOG, and FED-HOPC. The experimental results demonstrate that the proposed GDWF algorithm achieves the best performance in terms of registration accuracy and robustness among all compared methods, effectively handling optical–SAR image pairs with significant regional heterogeneity. Full article
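The confidence-weighted fusion step described above can be illustrated in miniature. The paper's actual reliability evaluation model is not reproduced here; as a stand-in, the sketch scores each similarity map by a simple peak-to-mean ratio (sharper, more distinctive peaks score higher) and fuses the two maps with weights proportional to those scores. All names are hypothetical.

```python
# Confidence-weighted fusion of two similarity maps (e.g., one from gradient
# descriptors, one from local-standard-deviation descriptors), weighted by a
# crude reliability score for each map.

def reliability(sim_map):
    """Peak-to-mean ratio: a sharp, distinctive peak yields a high score."""
    flat = [v for row in sim_map for v in row]
    mean = sum(flat) / len(flat)
    return max(flat) / mean if mean > 0 else 0.0

def fuse(sim_a, sim_b):
    """Weighted sum of two same-sized similarity maps."""
    ra, rb = reliability(sim_a), reliability(sim_b)
    wa = ra / (ra + rb)
    wb = 1.0 - wa
    return [[wa * a + wb * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(sim_a, sim_b)]

def best_match(sim_map):
    """Location (row, col) of the maximum similarity, i.e. the matched offset."""
    return max(((v, (y, x)) for y, row in enumerate(sim_map)
                for x, v in enumerate(row)))[1]
```

The effect is that whichever descriptor produces the more decisive similarity surface in a given region dominates the fused map, which is the intuition behind adapting the weights to regional heterogeneity.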
