
Search Results (702)

Search Parameters:
Keywords = Sentinel-2 optical images

23 pages, 5274 KB  
Article
Assessing an Optical Tool for Identifying Tidal and Associated Mangrove Swamp Rice Fields in Guinea-Bissau, West Africa
by Jesus Céspedes, Jaime Garbanzo-León, Marina Temudo and Gabriel Garbanzo
Land 2025, 14(11), 2144; https://doi.org/10.3390/land14112144 - 28 Oct 2025
Abstract
An optical remote sensing approach was developed to identify areas with high and low salinity within the mangrove swamp rice system in West Africa. Conducted between 2019 and 2024 in Guinea-Bissau, this study examined two contrasting rice-growing environments, tidal mangrove (TM) and associated mangrove (AM), to assess changes in vegetation dynamics, soil salinity concentration, and soil chemical properties. Field sampling was conducted during the dry season to avoid waterlogging, and soil analyses included texture, cation exchange capacity, micronutrients, and electrical conductivity (ECe). Meteorological stations recorded rainfall and environmental conditions over the period. Moreover, orthorectified and atmospherically corrected surface reflectance satellite imagery from PlanetScope and Sentinel-2 was selected for their high spatial resolution and revisit frequency. From these data, vegetation dynamics were monitored using the Normalized Difference Vegetation Index (NDVI), with change detection calculated as the difference in NDVI between sequential images (ΔNDVI). Thresholds of 0.15 ≤ NDVI ≤ 0.5 and ΔNDVI > 0.1 were tested to identify significant vegetation growth, with smaller polygons (<1000 m²) removed to reduce noise. In this process, at least three temporal images per season were analyzed, and multi-year intersections were performed to enhance accuracy. Our parameter optimization tests found that a locally calibrated NDVI threshold of 0.26 improved site classification. Thus, this integrated field–remote sensing approach proved to be a reproducible and cost-effective tool for detecting AM and TM environments and assessing vegetation responses to seasonal changes, contributing to improved land and water management in the salinity-affected mangrove swamp rice system.
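The NDVI and ΔNDVI thresholding this abstract describes can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' code: applying the range test to the later image (rather than both) is my assumption.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

def growth_mask(ndvi_t0, ndvi_t1, lo=0.15, hi=0.5, d_thresh=0.1):
    """Flag significant vegetation growth with the thresholds quoted in the
    abstract: 0.15 <= NDVI <= 0.5 and delta-NDVI > 0.1 between sequential images."""
    delta = ndvi_t1 - ndvi_t0
    return (ndvi_t1 >= lo) & (ndvi_t1 <= hi) & (delta > d_thresh)
```

The small-polygon filter (<1000 m²) would then be applied to connected components of the resulting mask.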

22 pages, 3840 KB  
Article
An Optical Water Type-Based Deep Learning Framework for Enhanced Turbidity Estimation in Inland Waters from Sentinel-2 Imagery
by Yue Ma, Qiuyue Chen, Kaishan Song, Qian Yang, Qiang Zheng and Yongchao Ma
Sensors 2025, 25(20), 6483; https://doi.org/10.3390/s25206483 - 20 Oct 2025
Abstract
Turbidity is a crucial and reliable indicator that is extensively utilized in water quality monitoring through remote sensing technology. The development of accurate and applicable models for turbidity estimation is essential. While many existing studies rely on uniform models based on statistical regression or traditional machine learning techniques, the application of deep learning models for turbidity estimation remains limited. This study proposed deep learning models for turbidity estimation based on optical classification of inland waters using Sentinel-2 data. Specifically, the fuzzy c-means (FCM) clustering method was employed to classify optical water types (OWTs) based on their spectral reflectance characteristics. A weighted sum of the turbidity prediction results was generated by the OWT-based convolutional neural network-random forest (CNN-RF) model, with weights derived from the FCM membership degrees. Turbidity for four typical waters was mapped by the proposed method using Sentinel-2 images. The FCM method efficiently classified waters into three OWTs. The OWT-based weighted CNN-RF model demonstrated strong robustness and generalization performance, achieving a high prediction accuracy (R² = 0.900, RMSE = 11.698 NTU). The turbidity maps preserved the spatial continuity of the turbidity distribution and accurately reflected water quality conditions. These findings facilitate the application of deep learning models based on optical classification in turbidity estimation and enhance the capabilities of remote sensing for water quality monitoring.
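The membership-weighted blending step is simple to state once each per-OWT model has produced a prediction. In this sketch the CNN-RF models are stood in for by precomputed per-OWT predictions; the array names and shapes are illustrative assumptions.

```python
import numpy as np

def blend_by_membership(memberships, owt_preds):
    """Weighted sum of per-OWT turbidity predictions, using FCM membership
    degrees (one row per sample, rows summing to 1) as the weights,
    as described in the abstract."""
    memberships = np.asarray(memberships, dtype=float)
    owt_preds = np.asarray(owt_preds, dtype=float)
    return np.sum(memberships * owt_preds, axis=1)
```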
(This article belongs to the Special Issue Remote Sensing Image Processing, Analysis and Application)

19 pages, 2733 KB  
Article
Style Transfer from Sentinel-1 to Sentinel-2 for Fluvial Scenes with Multi-Modal and Multi-Temporal Image Fusion
by Patrice E. Carbonneau
Remote Sens. 2025, 17(20), 3445; https://doi.org/10.3390/rs17203445 - 15 Oct 2025
Abstract
Recently, there has been significant progress in the area of semantic classification of water bodies at global scales with deep learning. For the key purposes of water inventory and change detection, advanced deep learning classifiers such as UNets and Vision Transformers have been shown to be both accurate and flexible when applied to large-scale, or even global, satellite image datasets from optical (e.g., Sentinel-2) and radar sensors (e.g., Sentinel-1). Most of this work is conducted with optical sensors, which usually have better image quality, but their obvious limitation is cloud cover, which is why radar imagery is an important complementary dataset. However, radar imagery is generally more sensitive to soil moisture than optical data. Furthermore, topography and wind-ripple effects can alter the reflected intensity of radar waves, which can induce errors in water classification models that fundamentally rely on the fact that water is darker than the surrounding landscape. In this paper, we develop a solution to the use of Sentinel-1 radar images for the semantic classification of water bodies that uses style transfer with multi-modal and multi-temporal image fusion. Instead of developing new semantic classification models that work directly on Sentinel-1 images, we develop a global style transfer model that produces synthetic Sentinel-2 images from Sentinel-1 input. The resulting synthetic Sentinel-2 imagery can then be classified with existing models. This has the advantage of obviating the need for large volumes of manually labeled Sentinel-1 water masks. Next, we show that fusing an 8-year cloud-free composite of the near-infrared band 8 of Sentinel-2 to the input Sentinel-1 image improves the classification performance. Style transfer models were trained and validated with global-scale data covering the years 2017 to 2024 and including every month of the year. When tested against a global independent benchmark, S1S2-Water, the semantic classifications produced from our synthetic imagery show a marked improvement with the use of image fusion. When we use only Sentinel-1 data, we find an overall IoU (Intersection over Union) score of 0.70, but when we add image fusion, the overall IoU score rises to 0.93.
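The IoU metric quoted in this abstract is straightforward to compute for binary water masks; this is a generic sketch of the metric, not the benchmark's own evaluation code.

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(pred, truth).sum()) / float(union)
```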
(This article belongs to the Special Issue Multimodal Remote Sensing Data Fusion, Analysis and Application)

32 pages, 19967 KB  
Article
Monitoring the Recovery Process After Major Hydrological Disasters with GIS, Change Detection and Open and Free Multi-Sensor Satellite Imagery: Demonstration in Haiti After Hurricane Matthew
by Wilson Andres Velasquez Hurtado and Deodato Tapete
Water 2025, 17(19), 2902; https://doi.org/10.3390/w17192902 - 7 Oct 2025
Abstract
Recovery from disasters is a complex process requiring coordinated measures to restore infrastructure, services and quality of life. While remote sensing is a well-established means for damage assessment, so far very few studies have shown how satellite imagery can be used by technical officers of affected countries to provide crucial, up-to-date information to monitor reconstruction progress and natural restoration. To address this gap, the present study proposes a multi-temporal observatory method relying on GIS, change detection techniques and open and free multi-sensor satellite imagery to generate thematic maps documenting, over time, the impact of and recovery from hydrological disasters such as hurricanes, tropical storms and induced flooding. The demonstration is carried out with regard to Hurricane Matthew, which struck Haiti in October 2016 and triggered a humanitarian crisis in the Sud and Grand’Anse regions. Synthetic Aperture Radar (SAR) amplitude change detection techniques were applied to pre-, cross- and post-disaster Sentinel-1 image pairs from August 2016 to September 2020, while optical Sentinel-2 images were used for verification and land cover classification. With regard to inundated areas, the analysis allowed us to determine the time needed for water to recede and for rural plain areas to be reclaimed for agricultural exploitation. With regard to buildings, the cities of Jérémie and Les Cayes were not only the most impacted areas, but also those where most reconstruction efforts were made. However, some instances of new settlements located in at-risk zones, and thus susceptible to future hurricanes, were found. This result suggests that the thematic maps can support policy-makers and regulators in reducing risk and making reconstruction more resilient. Finally, to evaluate the replicability of the proposed method, a country-scale example is discussed with regard to the June 2023 flooding event.
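SAR amplitude change detection between image pairs is commonly implemented with a log-ratio operator. The abstract does not name the exact operator or threshold the authors used, so both are assumptions in this sketch.

```python
import numpy as np

def log_ratio(pre, post, eps=1e-9):
    """Log-ratio of SAR amplitudes between a pre- and post-event image."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    return np.log((post + eps) / (pre + eps))

def change_mask(pre, post, thresh=0.5):
    """Flag pixels whose absolute log-ratio exceeds a (hypothetical) threshold;
    brightening and darkening both count as change."""
    return np.abs(log_ratio(pre, post)) > thresh
```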
(This article belongs to the Special Issue Applications of GIS and Remote Sensing in Hydrology and Hydrogeology)

36 pages, 9276 KB  
Article
Understanding Landslide Expression in SAR Backscatter Data: Global Study and Disaster Response Application
by Erin Lindsay, Alexandra Jarna Ganerød, Graziella Devoli, Johannes Reiche, Steinar Nordal and Regula Frauenfelder
Remote Sens. 2025, 17(19), 3313; https://doi.org/10.3390/rs17193313 - 27 Sep 2025
Abstract
Cloud cover can delay landslide detection in optical satellite imagery for weeks, complicating disaster response. Synthetic Aperture Radar (SAR) backscatter imagery, which is widely used for monitoring floods and avalanches, remains underutilised for landslide detection due to a limited understanding of landslide signatures in SAR data. We developed a conceptual model of landslide expression in SAR backscatter (σ°) change images through iterative investigation of over 1000 landslides across 30 diverse study areas. Using multi-temporal composites and dense time series Sentinel-1 C-band SAR data, we identified characteristic patterns linked to land cover, terrain, and landslide material. The results showed either increased or decreased backscatter depending on environmental conditions, with reduced visibility in urban or mixed vegetation areas. Detection was also hindered by geometric distortions and snow cover. The diversity of landslide expression illustrates the need to consider local variability and multi-track (ascending and descending) satellite data in designing representative training datasets for automated detection models. The conceptual model was applied to three recent disaster events using the first post-event Sentinel-1 image, successfully identifying previously unknown landslides before optical imagery became available in two cases. This study provides a theoretical foundation for interpreting landslides in SAR imagery and demonstrates its utility for rapid landslide detection. The findings support further exploration of rapid landslides in SAR backscatter data and future development of automated detection models, offering a valuable tool for disaster response.

25 pages, 8517 KB  
Article
Development of an Optical–Radar Fusion Method for Riparian Vegetation Monitoring and Its Application to Representative Rivers in Japan
by Han Li, Hiroki Kurusu, Yuzuna Suzuki and Yuji Kuwahara
Remote Sens. 2025, 17(19), 3281; https://doi.org/10.3390/rs17193281 - 24 Sep 2025
Abstract
Riparian vegetation plays a critical role in maintaining ecosystem function, ensuring drainage capacity, and enhancing disaster prevention and mitigation. However, existing ground-based survey methods are limited in both spatial coverage and temporal resolution, which increases the difficulty of meeting the growing demand for rapid, dynamic, and fine-scale monitoring of riverine vegetation. To address this challenge, this study proposes a remote sensing approach that integrates Sentinel-1 synthetic aperture radar imagery with Sentinel-2 optical data. A composite vegetation index was developed by combining the normalized difference vegetation index and synthetic aperture radar backscatter coefficients, thereby enabling the joint characterization of horizontal and vertical vegetation activity. The method was first tested in the Kuji River Basin in Japan and subsequently validated across eight representative river systems nationwide using 16 sets of satellite images acquired between 2016 and 2023. The results demonstrate that the proposed method achieves an average geometric correction error of less than three pixels and yields a spatial distribution of the composite index that closely aligns with the actual vegetation conditions. Moreover, the difference rate between sparse and dense vegetation exceeded 90% across all rivers, indicating a strong discriminative capability and temporal sensitivity. Overall, this method is well-suited for the multiregional and multitemporal monitoring of riparian vegetation and offers a reliable quantitative tool for water environment management and ecological assessment.
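The abstract does not give the exact form of the composite vegetation index, so the sketch below is only one plausible construction under stated assumptions: rescale the backscatter coefficient (in dB, with assumed bounds) to [0, 1] and average it with NDVI, pairing the optical (horizontal-activity) and radar (vertical-structure) signals.

```python
import numpy as np

def composite_index(ndvi, sigma0_db, s_min=-25.0, s_max=0.0):
    """Hypothetical NDVI-backscatter composite. The dB bounds s_min/s_max
    and the equal weighting are illustrative assumptions, not the paper's."""
    sigma0_db = np.asarray(sigma0_db, dtype=float)
    s = np.clip((sigma0_db - s_min) / (s_max - s_min), 0.0, 1.0)
    return 0.5 * (np.asarray(ndvi, dtype=float) + s)
```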

21 pages, 12036 KB  
Article
Temporal Analysis of Reservoirs, Lakes, and Rivers in the Euphrates–Tigris Basin from Multi-Sensor Data Between 2018 and 2022
by Omer Gokberk Narin, Roderik Lindenbergh and Saygin Abdikan
Remote Sens. 2025, 17(16), 2913; https://doi.org/10.3390/rs17162913 - 21 Aug 2025
Abstract
Monitoring freshwater resources is essential for assessing the impacts of drought, water management and global warming. Spaceborne LiDAR altimeters allow researchers to obtain water height information, while water area and precipitation data can be obtained using different satellite systems. In our study, we examined 5 years (2018–2022) of data concerning the Euphrates–Tigris Basin (ETB), one of the most important freshwater resources of the Middle East, covering the water bodies of both the ETB and the largest lake of Türkiye, Lake Van. This multi-sensor study aimed to detect and monitor water levels and water areas in this water-scarce basin. The ATL13 product of the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) was used to determine water levels, while the normalized difference water index was applied to Sentinel-2 optical imagery to monitor the water area. Variations in both water level and area were related to the time series of precipitation data from the ECMWF Reanalysis v5 (ERA5) product. In addition, our results were compared with global HydroWeb water level data. Consequently, it was observed that the water levels in the region decreased by 5–6 m in many reservoirs after 2019. It is noteworthy that there was a decrease of approximately 14 m in water level and 684 km² in water area between July 2019 and July 2022 in Lake Therthar.
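The normalized difference water index used here for water-area mapping is a standard band ratio; this sketch uses the common McFeeters formulation with Sentinel-2 green (B3) and NIR (B8) reflectance, and the zero threshold is a conventional default rather than the study's calibrated value.

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """McFeeters normalized difference water index: (green - NIR) / (green + NIR)."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + eps)

def water_mask(green, nir, thresh=0.0):
    """Positive NDWI conventionally flags open water."""
    return ndwi(green, nir) > thresh
```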
(This article belongs to the Special Issue Multi-Source Remote Sensing Data in Hydrology and Water Management)

16 pages, 7115 KB  
Article
Generation of High-Resolution Time-Series NDVI Images for Monitoring Heterogeneous Crop Fields
by Sun-Hwa Kim, Jeong Eun, Inkwon Baek and Tae-Ho Kim
Sensors 2025, 25(16), 5183; https://doi.org/10.3390/s25165183 - 20 Aug 2025
Abstract
Various fusion methods for optical satellite images have been proposed for monitoring heterogeneous farmlands requiring high spatial and temporal resolution. In this study, a three-meter normalized difference vegetation index (NDVI) was generated by applying two spatiotemporal fusion (STF) methods, a fusion method that simultaneously generates a full-length NDVI time series (SSFIT) and the enhanced spatial and temporal adaptive reflectance fusion method (ESTARFM), to the NDVI of Sentinel-2 (S2) and PlanetScope (PS), using images from 2019 to 2021 of rice paddy and heterogeneous cabbage fields in Korea. Before fusion, S2 data were processed with the maximum NDVI composite (MNC) and a spatiotemporal gap-filling technique to minimize cloud effects. The fused NDVI image had a spatial resolution similar to PS, enabling more accurate monitoring of small and heterogeneous fields. In particular, the SSFIT technique showed higher accuracy than ESTARFM, with a root mean square error of less than 0.16 and a correlation of more than 0.8 compared to the PS NDVI. Additionally, SSFIT takes four seconds to process data in the field area, while ESTARFM requires a relatively long processing time of five minutes. In some images where ESTARFM was applied, outliers originating from S2 were still present, and heterogeneous NDVI distributions were also observed. This spatiotemporal fusion (STF) technique can be used to produce high-resolution NDVI images for any date during the rainy season required for time-series analysis.
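The maximum NDVI composite (MNC) preprocessing step mentioned here has a standard, simple form: take the per-pixel maximum over a stack of dates, which suppresses cloud-contaminated (low-NDVI) observations. A minimal sketch, with the stack layout (time as the leading axis) assumed:

```python
import numpy as np

def max_ndvi_composite(ndvi_stack):
    """Per-pixel maximum NDVI over a (time, rows, cols) stack,
    ignoring NaN (cloud-masked) samples."""
    return np.nanmax(np.asarray(ndvi_stack, dtype=float), axis=0)
```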
(This article belongs to the Special Issue Remote Sensing for Crop Growth Monitoring)

19 pages, 2569 KB  
Article
CNN-Random Forest Hybrid Method for Phenology-Based Paddy Rice Mapping Using Sentinel-2 and Landsat-8 Satellite Images
by Dodi Sudiana, Sayyidah Hanifah Putri, Dony Kushardono, Anton Satria Prabuwono, Josaphat Tetuko Sri Sumantyo and Mia Rizkinia
Computers 2025, 14(8), 336; https://doi.org/10.3390/computers14080336 - 18 Aug 2025
Cited by 1
Abstract
The agricultural sector plays a vital role in achieving the second Sustainable Development Goal: “Zero Hunger”. To ensure food security, agriculture must remain resilient and productive. In Indonesia, a major rice-producing country, the conversion of agricultural land for non-agricultural uses poses a serious threat to food availability. Accurate and timely mapping of paddy rice is therefore crucial. This study proposes a phenology-based mapping approach using a Convolutional Neural Network-Random Forest (CNN-RF) Hybrid model with multi-temporal Sentinel-2 and Landsat-8 imagery. Image processing and analysis were conducted using the Google Earth Engine platform. Raw spectral bands and four vegetation indices—NDVI, EVI, LSWI, and RGVI—were extracted as input features for classification. The CNN-RF Hybrid classifier demonstrated strong performance, achieving an overall accuracy of 0.950 and a Cohen’s Kappa coefficient of 0.893. These results confirm the effectiveness of the proposed method for mapping paddy rice in Indramayu Regency, West Java, using medium-resolution optical remote sensing data. The integration of phenological characteristics and deep learning significantly enhances classification accuracy. This research supports efforts to monitor and preserve paddy rice cultivation areas amid increasing land use pressures, contributing to national food security and sustainable agricultural practices.
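Of the four vegetation indices named in this abstract, EVI and LSWI have widely standardized definitions that can be sketched directly (RGVI is defined differently across papers, so it is omitted here; the coefficients below are the standard ones, not confirmed from this study).

```python
def evi(nir, red, blue):
    """Enhanced Vegetation Index, standard coefficient formulation:
    2.5 * (NIR - red) / (NIR + 6*red - 7.5*blue + 1)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def lswi(nir, swir):
    """Land Surface Water Index from NIR and shortwave-infrared reflectance."""
    return (nir - swir) / (nir + swir)
```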
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)

24 pages, 4639 KB  
Article
Testing Satellite Snow Cover Observations Using Time-Lapse Camera Images in Mid-Latitude Mountain Ranges (Northern Spain)
by Adrián Melón-Nava and Javier Santos-González
Geosciences 2025, 15(8), 316; https://doi.org/10.3390/geosciences15080316 - 13 Aug 2025
Abstract
Reliable monitoring of snow cover in mountainous regions remains a challenge due to frequent cloud cover and the revisit limitations of optical satellites. This study compares satellite snow-cover records with >99,000 ground-based time-lapse camera observations across northern Spain (2003–2025). Cloud cover caused major data loss, with up to 57% of satellite images affected. Effective revisit intervals (the average time between usable images) diverge substantially from nominal values: 2.3 days for MODIS, 6.9 days for Sentinel-2, and over 21 days for Landsat. A hierarchical multisensor approach with 5-day gap-filling reduced this to just 1.3 days. On dates when cameras confirmed snow, satellites underestimated snow presence by 61.6% (Sentinel-2), 71.5% (Landsat), and 79.7% (MODIS), though gap-filling approaches reduced underestimation to 49.4%—deficits largely attributable to cloud-obscured scenes. When both satellite and camera provided cloud-free observations for the same date and location, classification agreement exceeded 85%. Despite this, satellites consistently failed to detect short-lived snow events and introduced temporal biases. On average, Snow Onset Dates were detected 13–52 days later, and Snow Melt-Out Dates differed by up to 40 days compared to camera-derived records. These results have implications for snow-cover monitoring using satellite images and highlight the need for integrating ground-based observations to compensate for satellite limitations and improve snow cover seasonality assessments in complex terrains.
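The "effective revisit interval" is defined in the abstract as the average time between usable images; under that definition it reduces to a one-line computation. The formula is my reading of the definition, not the paper's code.

```python
def effective_revisit_days(span_days, usable_images):
    """Average time between usable (e.g., cloud-free) acquisitions:
    the observation span divided by the number of gaps between images."""
    if usable_images < 2:
        raise ValueError("need at least two usable images to define an interval")
    return span_days / (usable_images - 1)
```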
(This article belongs to the Section Cryosphere)

25 pages, 5956 KB  
Article
Research on Crop Classification Using U-Net Integrated with Multimodal Remote Sensing Temporal Features
by Zhihui Zhu, Yuling Chen, Chengzhuo Lu, Minglong Yang, Yonghua Xia, Dewu Huang and Jie Lv
Sensors 2025, 25(16), 5005; https://doi.org/10.3390/s25165005 - 13 Aug 2025
Cited by 1
Abstract
Crop classification plays a vital role in acquiring the spatial distribution of agricultural crops, enhancing agricultural management efficiency, and ensuring food security. With the continuous advancement of remote sensing technologies, achieving efficient and accurate crop classification using remote sensing imagery has become a prominent research focus. Conventional approaches largely rely on empirical rules or single-feature selection (e.g., NDVI or VV) for temporal feature extraction, lacking systematic optimization of multimodal feature combinations from optical and radar data. To address this limitation, this study proposes a crop classification method based on feature-level fusion of multimodal remote sensing data, integrating the complementary advantages of optical and SAR imagery to overcome the temporal and spatial representation constraints of single-sensor observations. The study was conducted in Story County, Iowa, USA, focusing on the growth cycles of corn and soybean. Eight vegetation indices (including NDVI and NDRE) and five polarimetric features (including VV and VH) were constructed and analyzed. Using a random forest algorithm to assess feature importance, NDVI+NDRE and VV+VH were identified as the optimal feature combinations. Subsequently, 16 scenes of optical imagery (Sentinel-2) and 30 scenes of radar imagery (Sentinel-1) were fused at the feature level to generate a multimodal temporal feature image with 46 channels. Using Cropland Data Layer (CDL) samples as reference data, a U-Net deep neural network was employed for refined crop classification and compared with single-modal results. Experimental results demonstrated that the fusion model outperforms single-modal approaches in classification accuracy, boundary delineation, and consistency, achieving training, validation, and test accuracies of 95.83%, 91.99%, and 90.81%, respectively. Furthermore, consistent improvements were observed across evaluation metrics, including F1-score, precision, and recall.
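Feature-level fusion as described in this abstract amounts to stacking per-date optical index images and SAR feature images along a channel axis (16 + 30 scenes giving 46 channels). A minimal sketch, with array shapes assumed:

```python
import numpy as np

def fuse_features(optical_images, sar_images):
    """Stack per-date optical index images and SAR images (each (rows, cols))
    along the channel axis into one multimodal feature image."""
    return np.stack(list(optical_images) + list(sar_images), axis=-1)
```

The resulting (rows, cols, 46) array is what a U-Net-style classifier would consume as its input tensor.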
(This article belongs to the Section Smart Agriculture)

28 pages, 24868 KB  
Article
Deep Meta-Connectivity Representation for Optically-Active Water Quality Parameters Estimation Through Remote Sensing
by Fangling Pu, Ziang Luo, Yiming Yang, Hongjia Chen, Yue Dai and Xin Xu
Remote Sens. 2025, 17(16), 2782; https://doi.org/10.3390/rs17162782 - 11 Aug 2025
Abstract
Monitoring optically-active water quality (OAWQ) parameters faces key challenges, primarily due to limited in situ measurements and the restricted availability of high-resolution multispectral remote sensing imagery. While deep learning has shown promise for OAWQ estimation, existing approaches such as GeoTile2Vec, which relies on geographic proximity, and SimCLR, a domain-agnostic contrastive learning method, fail to capture land cover-driven water quality patterns, limiting their generalizability. To address this, we present deep meta-connectivity representation (DMCR), which integrates multispectral remote sensing imagery with limited in situ measurements to estimate OAWQ parameters. Our approach constructs meta-feature vectors from land cover images to represent the water quality characteristics of each multispectral remote sensing image tile. We introduce the meta-connectivity concept to quantify the OAWQ similarity between different tiles. Building on this concept, we design a contrastive self-supervised learning framework that uses sets of quadruple tiles extracted from Sentinel-2 imagery based on their meta-connectivity to learn DMCR vectors. After the core neural network is trained, we apply a random forest model to estimate parameters such as chlorophyll-a (Chl-a) and turbidity using matched in situ measurements and DMCR vectors across time and space. We evaluate DMCR on Lake Erie and Lake Ontario, generating a series of Chl-a and turbidity distribution maps. Performance is assessed using the R² and RMSE metrics. Results show that meta-connectivity more effectively captures water quality similarities between tiles than widely utilized geographic proximity approaches such as those used in GeoTile2Vec. Furthermore, DMCR outperforms baseline models such as SimCLR with randomly cropped tiles. The resulting distribution maps align well with known factors influencing Chl-a and turbidity levels, confirming the method’s reliability. Overall, DMCR demonstrates strong potential for large-scale OAWQ estimation and contributes to improved monitoring of inland water bodies with limited in situ measurements through meta-connectivity-informed deep learning. The spatiotemporal water quality maps can support large-scale inland water monitoring and early warning of harmful algal blooms.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

16 pages, 3847 KB  
Article
Water Body Extraction Methods for SAR Images Fusing Sentinel-1 Dual-Polarized Water Index and Random Forest
by Min Zhai, Huayu Shen, Qihang Cao, Xuanhao Ding and Mingzhen Xin
Sensors 2025, 25(15), 4868; https://doi.org/10.3390/s25154868 - 7 Aug 2025
Abstract
Synthetic Aperture Radar (SAR) technology has the characteristics of all-day and all-weather functionality; accordingly, it is not affected by rainy weather, overcoming the limitations of optical remote sensing, and it provides irreplaceable technical support for efficient water body extraction. To address the issues of low accuracy and unstable results in water body extraction from Sentinel-1 SAR images using a single method, a water body extraction method fusing the Sentinel-1 dual-polarized water index and random forest is proposed. This novel method enhances water extraction accuracy by integrating the results of two different algorithms, reducing the biases associated with single-method water body extraction. Taking Dalu Lake, Yinfu Reservoir, and Huashan Reservoir as the study areas, water body information was extracted from SAR images using the dual-polarized water index, the random forest method, and the fusion method. Taking the normalized difference water index extraction results obtained via Sentinel-2 optical images as a reference, the accuracy of different water body extraction methods when used with SAR images was quantitatively evaluated. The experimental results show that, compared with the dual-polarized water index and the random forest method, the fusion method, on average, increased overall water body extraction accuracy and Kappa coefficients by 3.9% and 8.2%, respectively, in the Dalu Lake experimental area; by 1.8% and 3.5%, respectively, in the Yinfu Reservoir experimental area; and by 4.1% and 8.1%, respectively, in the Huashan Reservoir experimental area. Therefore, the fusion method of the dual-polarized water index and random forest effectively improves the accuracy and reliability of water body extraction from SAR images.
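The final fusion step combines two binary water masks, one from the dual-polarized water index and one from the random forest classifier. The abstract does not state the exact fusion rule, so this sketch parameterizes two common choices (intersection for precision, union for recall).

```python
import numpy as np

def fuse_water_masks(index_mask, rf_mask, rule="intersection"):
    """Combine the water-index mask and the random-forest mask.
    The actual rule used in the paper is not given in the abstract."""
    index_mask = np.asarray(index_mask, dtype=bool)
    rf_mask = np.asarray(rf_mask, dtype=bool)
    if rule == "intersection":
        return index_mask & rf_mask
    if rule == "union":
        return index_mask | rf_mask
    raise ValueError(f"unknown rule: {rule}")
```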
(This article belongs to the Section Radar Sensors)
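The accuracy figures in the abstract above come from scoring each SAR-derived water mask against an optical NDWI reference using overall accuracy and the Kappa coefficient. A minimal pure-Python sketch of that evaluation is shown below; the intersection-based fusion rule is an illustrative assumption, since the abstract does not specify how the two methods' results are combined.

```python
def confusion_counts(reference, predicted):
    """Tally a 2x2 confusion matrix for binary water/non-water masks."""
    tp = fp = fn = tn = 0
    for r, p in zip(reference, predicted):
        if r and p:
            tp += 1
        elif p:
            fp += 1
        elif r:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

def overall_accuracy_and_kappa(reference, predicted):
    """Overall accuracy and Cohen's Kappa, the two metrics used to
    score SAR water masks against an optical NDWI reference."""
    tp, fp, fn, tn = confusion_counts(reference, predicted)
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement
    # chance agreement from row/column marginals of the 2x2 matrix
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    return po, kappa

def fuse(mask_a, mask_b):
    """Toy fusion: a pixel is water only if both the index-based mask
    and the random-forest mask agree (one simple rule; the paper's
    actual fusion scheme may differ)."""
    return [bool(a and b) for a, b in zip(mask_a, mask_b)]
```

With a 4-pixel toy reference `[1, 1, 0, 0]` and prediction `[1, 0, 0, 0]`, the routine gives an overall accuracy of 0.75 and a Kappa of 0.5, illustrating how Kappa discounts agreement expected by chance.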
26 pages, 14923 KB  
Article
Multi-Sensor Flood Mapping in Urban and Agricultural Landscapes of the Netherlands Using SAR and Optical Data with Random Forest Classifier
by Omer Gokberk Narin, Aliihsan Sekertekin, Caglar Bayik, Filiz Bektas Balcik, Mahmut Arıkan, Fusun Balik Sanli and Saygin Abdikan
Remote Sens. 2025, 17(15), 2712; https://doi.org/10.3390/rs17152712 - 5 Aug 2025
Viewed by 1381
Abstract
Floods are among the most destructive natural disasters, and climate change has made them increasingly dangerous for urban areas and agricultural land. This study presents a comprehensive flood mapping approach that combines multi-sensor satellite data with machine learning to evaluate the July 2021 flood in the Netherlands. Twenty-five feature scenarios were built from combinations of Sentinel-1, Landsat-8, and Radarsat-2 imagery, using backscattering coefficients together with the optical Normalized Difference Water Index (NDWI), Hue-Saturation-Value (HSV) images, and Synthetic Aperture Radar (SAR)-derived Grey Level Co-occurrence Matrix (GLCM) texture features. A Random Forest (RF) classifier was optimized and applied to two flood-prone regions: Zutphen's urban area and Heijen's agricultural land. The multi-sensor fusion scenarios (S18, S20, and S25) achieved the highest classification performance, with overall accuracy reaching 96.4% (Kappa = 0.906–0.949) in Zutphen and 87.5% (Kappa = 0.754–0.833) in Heijen. Flood-class F1 scores across scenarios ranged from 0.742 to 0.969 in Zutphen and from 0.626 to 0.969 in Heijen. Adding SAR texture metrics improved flood boundary delineation in both urban and agricultural settings. Radarsat-2 contributed little to the overall results, whereas the freely available Sentinel-1 and Landsat-8 data proved more effective. This study demonstrates that combining SAR and optical features with texture information yields a robust, scalable flood mapping system, and that RF classification performs well across diverse landscapes. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Flood Forecasting and Monitoring)
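Two of the feature families named above are easy to sketch in isolation: the optical NDWI and a GLCM texture statistic. The functions below are illustrative only; the band choices, the zero threshold, and the single horizontal GLCM offset are assumptions for demonstration, not the paper's exact configuration.

```python
def ndwi(green, nir, eps=1e-9):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR).
    For Landsat-8 reflectance, Green is band 3 and NIR is band 5."""
    return [(g - n) / (g + n + eps) for g, n in zip(green, nir)]

def water_mask(ndwi_values, threshold=0.0):
    # Water pixels typically have NDWI above the threshold;
    # 0.0 is a common default, but the value is scene-dependent.
    return [v > threshold for v in ndwi_values]

def glcm_contrast(image, levels):
    """GLCM contrast for a horizontal offset of one pixel -- one of
    the SAR texture features fed to the random forest (a minimal
    sketch of a single statistic, not the full GLCM feature set)."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    # contrast = sum over (i, j) of P(i, j) * (i - j)^2
    return sum(counts[i][j] * (i - j) ** 2
               for i in range(levels) for j in range(levels)) / total
```

In a full pipeline, per-pixel NDWI, HSV, backscatter, and windowed GLCM statistics would be stacked into a feature vector per pixel and passed to the RF classifier.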
48 pages, 18119 KB  
Article
Dense Matching with Low Computational Complexity for Disparity Estimation in the Radargrammetric Approach of SAR Intensity Images
by Hamid Jannati, Mohammad Javad Valadan Zoej, Ebrahim Ghaderpour and Paolo Mazzanti
Remote Sens. 2025, 17(15), 2693; https://doi.org/10.3390/rs17152693 - 3 Aug 2025
Viewed by 807
Abstract
Synthetic Aperture Radar (SAR) and optical imagery both have high potential for extracting digital elevation models (DEMs). The two main approaches for deriving elevation models from SAR data are interferometry (InSAR) and radargrammetry. Adapted from photogrammetric principles, radargrammetry relies on disparity model estimation as its core component. Matching strategies in radargrammetry typically follow local, global, or semi-global methodologies. Local methods can achieve higher accuracy, especially in low-texture SAR images, but require larger kernel sizes, leading to quadratic computational complexity. Conversely, global and semi-global models produce more consistent and higher-quality disparity maps but are computationally more intensive than local methods with small kernels and require more memory. In this study, inspired by the strengths of local matching algorithms, a novel, computationally efficient model is proposed for extracting corresponding pixels in SAR intensity stereo images. To enhance accuracy, the proposed two-stage algorithm operates without an image pyramid structure. Notably, unlike traditional local and global models, its computational complexity remains stable as the input size or kernel dimensions increase, while memory consumption stays low. Compared with a pyramid-based local normalized cross-correlation (NCC) algorithm and adaptive semi-global matching (SGM) models, the proposed method matches the accuracy of adaptive SGM while reducing processing time by up to 50% relative to pyramid SGM and achieving a 35-fold speedup over the local NCC algorithm with an optimal kernel size. Validated on a Sentinel-1 stereo pair with a 10 m ground-pixel size, the proposed algorithm yields a DEM with an average accuracy of 34.1 m. Full article
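The baseline the paper compares against, local NCC matching, can be sketched in one dimension: for a window around a pixel in one image row, slide over candidate shifts in the other row and keep the shift with the highest normalized cross-correlation. This is a generic illustration of the NCC baseline, not the paper's proposed two-stage algorithm; window size and search range below are arbitrary.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_disparity(left, right, center, half, search):
    """For a pixel in the left row, find the horizontal shift of the
    right row that maximizes NCC -- the local-matching core that
    radargrammetric disparity estimation builds on."""
    win = left[center - half:center + half + 1]
    best_d, best_score = 0, -2.0
    for d in range(-search, search + 1):
        lo, hi = center - half + d, center + half + 1 + d
        if lo < 0 or hi > len(right):
            continue  # candidate window falls outside the row
        score = ncc(win, right[lo:hi])
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score
```

Run per pixel over a 2-D image, this search is where the quadratic cost in kernel size arises; the disparity map it produces is then converted to heights via the sensor geometry.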