Search Results (48)

Search Parameters:
Keywords = cloud and cloud-shadow masking

36 pages, 9354 KiB  
Article
Effects of Clouds and Shadows on the Use of Independent Component Analysis for Feature Extraction
by Marcos A. Bosques-Perez, Naphtali Rishe, Thony Yan, Liangdong Deng and Malek Adjouadi
Remote Sens. 2025, 17(15), 2632; https://doi.org/10.3390/rs17152632 - 29 Jul 2025
Viewed by 201
Abstract
One of the persistent challenges in multispectral image analysis is the interference caused by dense cloud cover and its resulting shadows, which can significantly obscure surface features. This becomes especially problematic when attempting to monitor surface changes over time using satellite imagery, such as from Landsat-8. In this study, rather than simply masking visual obstructions, we investigated the role and influence of clouds within the spectral data itself. To achieve this, we employed Independent Component Analysis (ICA), a statistical method capable of decomposing mixed signals into independent source components. By applying ICA to selected Landsat-8 bands and analyzing each component individually, we assessed the extent to which cloud signatures are entangled with surface data. This process revealed that clouds contribute to multiple ICA components simultaneously, indicating their broad spectral influence. Exploiting this influence across multiple wavebands, we configured a set of components that precisely delineates the extent and location of clouds. Moreover, because Landsat-8 lacks cloud-penetrating wavebands, such as those in the microwave range (e.g., SAR), the surface information beneath dense cloud cover is not captured at all, making it physically impossible for ICA to recover what is not sensed in the first place. Despite this limitation, ICA proved effective in isolating and delineating cloud structures, allowing us to selectively suppress them in reconstructed images. Additionally, the technique successfully highlighted features such as water bodies, vegetation, and color-based land-cover differences. These findings suggest that while ICA is a powerful tool for signal separation and cloud-related artifact suppression, its performance is ultimately constrained by the spectral and spatial properties of the input data. Future improvements could come from integrating data from complementary sensors, especially those operating in cloud-penetrating wavelengths, or from higher-spectral-resolution imagery with narrower bands.
(This article belongs to the Section Environmental Remote Sensing)
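As a concrete illustration of the decomposition step, here is a minimal sketch applying scikit-learn's FastICA to a stack of Landsat-8 bands; the input arrays and the subsequent manual selection of cloud-dominated components are assumptions, not the authors' exact configuration.

```python
# Sketch: ICA decomposition of stacked Landsat-8 bands (assumed inputs).
import numpy as np
from sklearn.decomposition import FastICA

def ica_decompose(bands):
    """bands: list of 2-D arrays of equal shape, one per selected band."""
    h, w = bands[0].shape
    # Each pixel is one observation; each band is one mixed signal.
    X = np.stack([b.ravel() for b in bands], axis=1).astype(np.float64)
    ica = FastICA(n_components=len(bands), random_state=0)
    S = ica.fit_transform(X)  # shape: (n_pixels, n_components)
    return [S[:, i].reshape(h, w) for i in range(S.shape[1])]

# Components are inspected manually; thresholding the cloud-dominated
# ones yields a binary cloud mask (component order is not deterministic).
```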

21 pages, 12122 KiB  
Article
RA3T: An Innovative Region-Aligned 3D Transformer for Self-Supervised Sim-to-Real Adaptation in Low-Altitude UAV Vision
by Xingrao Ma, Jie Xie, Di Shao, Aiting Yao and Chengzu Dong
Electronics 2025, 14(14), 2797; https://doi.org/10.3390/electronics14142797 - 11 Jul 2025
Viewed by 332
Abstract
Low-altitude unmanned aerial vehicle (UAV) vision is critically hindered by the Sim-to-Real Gap, where models trained exclusively on simulation data degrade under real-world variations in lighting, texture, and weather. To address this problem, we propose RA3T (Region-Aligned 3D Transformer), a novel self-supervised framework that enables robust Sim-to-Real adaptation. Specifically, we first develop a dual-branch strategy for self-supervised feature learning, integrating Masked Autoencoders and contrastive learning. This approach extracts domain-invariant representations from unlabeled simulated imagery to enhance robustness against occlusion while reducing annotation dependency. Leveraging these learned features, we then introduce a 3D Transformer fusion module that unifies multi-view RGB and LiDAR point clouds through cross-modal attention. By explicitly modeling spatial layouts and height differentials, this component significantly improves recognition of small and occluded targets in complex low-altitude environments. To address persistent fine-grained domain shifts, we finally design region-level adversarial calibration, which deploys local discriminators on partitioned feature maps. This mechanism directly aligns the texture, shadow, and illumination discrepancies that challenge conventional global alignment methods. Extensive experiments on the UAV benchmarks VisDrone and DOTA demonstrate the effectiveness of RA3T. The framework achieves +5.1% mAP on VisDrone and +7.4% mAP on DOTA over the 2D adversarial baseline, with the largest gains on small objects and sparse occlusions, while maintaining real-time performance of 17 FPS at 1024 × 1024 resolution on an RTX 4080 GPU. Visual analysis confirms that the synergistic integration of 3D geometric encoding and local adversarial alignment effectively mitigates domain gaps caused by uneven illumination and perspective variations, establishing an efficient pathway for simulation-to-reality UAV perception.
(This article belongs to the Special Issue Innovative Technologies and Services for Unmanned Aerial Vehicles)
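A minimal sketch of the region-level adversarial idea described above: a fully convolutional local discriminator emits one sim-vs-real logit per cell of the partitioned feature map. Layer sizes and channel counts are illustrative assumptions, not the RA3T architecture.

```python
# Sketch (PyTorch): per-region domain discriminator for adversarial alignment.
import torch
import torch.nn as nn

class RegionDiscriminator(nn.Module):
    def __init__(self, in_ch=256):
        super().__init__()
        # 1x1 convolutions keep the spatial grid, so every output cell is a
        # per-region domain logit (simulated = 0, real = 1).
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 128, kernel_size=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, kernel_size=1),
        )

    def forward(self, feat):              # feat: (B, C, H, W)
        return self.net(feat)             # (B, 1, H, W) region logits

disc = RegionDiscriminator()
bce = nn.BCEWithLogitsLoss()
sim_feat, real_feat = torch.randn(2, 256, 16, 16), torch.randn(2, 256, 16, 16)
d_loss = (bce(disc(sim_feat), torch.zeros(2, 1, 16, 16)) +
          bce(disc(real_feat), torch.ones(2, 1, 16, 16)))
```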

16 pages, 1389 KiB  
Technical Note
Evaluation of Cloud Mask Performance of KOMPSAT-3 Top-of-Atmosphere Reflectance Incorporating Deeplabv3+ with Resnet 101 Model
by Suhwan Kim, Doehee Han, Yejin Lee, Eunsu Doo, Han Oh, Jonghan Ko and Jongmin Yeom
Appl. Sci. 2025, 15(8), 4339; https://doi.org/10.3390/app15084339 - 14 Apr 2025
Viewed by 509
Abstract
Cloud detection is a crucial task in satellite remote sensing, influencing applications such as vegetation indices, land use analysis, and renewable energy estimation. This study evaluates the performance of cloud masks generated for KOMPSAT-3 and KOMPSAT-3A imagery using the DeepLabV3+ deep learning model with a ResNet-101 backbone. To overcome the limitations of digital number (DN) data, top-of-atmosphere (TOA) reflectance was computed and used for model training. Comparative analysis between DN and TOA reflectance demonstrated significant improvements once the TOA correction was applied. TOA reflectance combined with the NDVI channel achieved the highest precision (69.33%) and F1-score (59.27%), along with a mean intersection over union (mIoU) of 46.5%, outperforming all other configurations. In particular, this combination was highly effective in detecting dense clouds, achieving an mIoU of 48.12%, while the near-infrared, green, and red (NGR) combination performed best in identifying cloud shadows, with an mIoU of 23.32%. These findings highlight the critical role of radiometric correction, optimal channel selection, and the integration of additional synthetic indices in enhancing deep learning-based cloud detection, providing a foundation for more refined cloud masking techniques in the future.
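For readers unfamiliar with the DN-to-TOA step, a sketch of the standard radiance-based conversion follows; the gain, offset, and ESUN values come from sensor metadata, and the numbers shown here are placeholders.

```python
# Sketch: DN -> TOA reflectance via at-sensor radiance (standard form).
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au):
    """gain/offset: radiometric calibration; esun: solar irradiance;
    d_au: Earth-Sun distance in astronomical units."""
    radiance = gain * dn + offset                 # W m-2 sr-1 um-1
    solar_zenith = np.deg2rad(90.0 - sun_elev_deg)
    return (np.pi * radiance * d_au**2) / (esun * np.cos(solar_zenith))

# Placeholder metadata values, for illustration only:
toa = dn_to_toa_reflectance(dn=np.array([[812, 904]]), gain=0.01,
                            offset=0.0, esun=1536.0,
                            sun_elev_deg=55.0, d_au=1.0)
```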

18 pages, 6401 KiB  
Article
Continuous Satellite Image Generation from Standard Layer Maps Using Conditional Generative Adversarial Networks
by Arminas Šidlauskas, Andrius Kriščiūnas and Dalia Čalnerytė
ISPRS Int. J. Geo-Inf. 2024, 13(12), 448; https://doi.org/10.3390/ijgi13120448 - 11 Dec 2024
Cited by 3 | Viewed by 1606
Abstract
Satellite image generation has a wide range of applications. For example, parts of images must be restored in areas obscured by clouds or cloud shadows, or in areas that must be anonymized. Covering a large area with generated images poses the challenge that separately generated tiles must maintain structural and color continuity with adjacent generated images as well as with the actual ones. This study presents a modified architecture of the generative adversarial network (GAN) pix2pix that ensures the integrity of the generated remote sensing images. The pix2pix model comprises a U-Net generator and a PatchGAN discriminator. The generator was modified by expanding the input set with images representing the known parts of the ground truth and the respective mask. Data used for the generative model consist of Sentinel-2 (S2) RGB satellite imagery as the target data and OpenStreetMap mapping data as the input. Since forested areas and fields dominate the images, the Kneedle clusterization method was applied to create datasets that better represent other classes, such as buildings and roads. The original and updated models were trained on different datasets, and their results were evaluated using gradient magnitude (GM), Fréchet inception distance (FID), structural similarity index measure (SSIM), and multiscale structural similarity index measure (MS-SSIM) metrics. The models with the updated architecture show improvement in gradient magnitude, SSIM, and MS-SSIM values for all datasets. For images generated with the modified architecture, the average GMs of the junction region and of the full image differ by no more than 7%, whereas for the original architecture the junction-area GM is more than 13% higher. The importance of class balancing is demonstrated by the fact that, for both architectures, models trained on the dataset with a higher ratio of classes representing buildings and roads have more than 10% lower FID (162.673 vs. 190.036 for pix2pix and 173.408 vs. 195.621 for the modified architecture), more than 5% higher SSIM (0.3532 vs. 0.3284 for pix2pix and 0.3575 vs. 0.3345 for the modified architecture), and higher MS-SSIM (0.3532 vs. 0.3284 for pix2pix and 0.3575 vs. 0.3345 for the modified architecture) than models trained on the dataset without clusterization.
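A sketch of the generator-input modification the abstract describes, i.e., concatenating the map rendering with the known part of the ground truth and its mask; channel counts and tensor names are illustrative assumptions.

```python
# Sketch (PyTorch): assembling the expanded pix2pix generator input.
import torch

map_rgb   = torch.rand(1, 3, 256, 256)    # OpenStreetMap rendering (input)
known_rgb = torch.rand(1, 3, 256, 256)    # known part of the S2 target
mask      = torch.zeros(1, 1, 256, 256)   # 1 where the target is known
mask[..., :, :128] = 1.0                  # e.g., left half already observed
known_rgb = known_rgb * mask              # zero out the unknown region

# Original pix2pix conditions on 3 channels; the expanded generator sees 7.
gen_input = torch.cat([map_rgb, known_rgb, mask], dim=1)  # (1, 7, 256, 256)
```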

15 pages, 3524 KiB  
Article
Effective Detection of Cloud Masks in Remote Sensing Images
by Yichen Cui, Hong Shen and Chan-Tong Lam
Sensors 2024, 24(23), 7730; https://doi.org/10.3390/s24237730 - 3 Dec 2024
Viewed by 1211
Abstract
Effective detection of the contours of cloud masks and estimation of their distribution can be of practical help in studying weather changes and natural disasters. When detecting cloud masks (and shadows), existing deep learning methods are unable to extract the edges of clouds and backgrounds in a refined manner due to their unpredictable patterns, and they also fail to accurately identify small targets such as thin and broken clouds. To address these problems, we propose MDU-Net, a multiscale dual up-sampling segmentation network based on an encoder–decoder–decoder structure. The model uses an improved residual module to capture the multi-scale features of clouds more effectively. MDU-Net first extracts feature maps using four residual modules at different scales and then sends them to the context information full flow module for the first up-sampling. This operation refines the edges of clouds and shadows, enhancing detection performance. Subsequently, the second up-sampling module concatenates feature map channels to fuse contextual spatial information, which effectively reduces the false detection rate for unpredictable targets hidden in cloud shadows. On a self-made cloud and cloud shadow dataset based on the Landsat-8 satellite, MDU-Net achieves a pixel accuracy (PA) of 95.61% and a mean intersection over union (mIoU) of 84.97%, outperforming the other models on both metrics and in the result images. Additionally, experiments on the landcover.ai dataset demonstrate the model's generalization capability, with excellent performance in the visualization results.
(This article belongs to the Section Sensing and Imaging)
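For reference, a minimal sketch of the two reported metrics, pixel accuracy and mean IoU, computed from a confusion matrix; it assumes integer-labelled prediction and reference masks.

```python
# Sketch: pixel accuracy (PA) and mean IoU from a confusion matrix.
import numpy as np

def pa_and_miou(pred, ref, n_classes=2):
    """pred, ref: integer-labelled arrays of the same shape."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (ref.ravel(), pred.ravel()), 1)   # rows: ref, cols: pred
    pa = np.trace(cm) / cm.sum()
    # Per-class IoU = TP / (TP + FP + FN); assumes every class is present.
    ious = [cm[c, c] / (cm[c, :].sum() + cm[:, c].sum() - cm[c, c])
            for c in range(n_classes)]
    return pa, float(np.mean(ious))
```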

30 pages, 8057 KiB  
Article
Multi-Temporal Pixel-Based Compositing for Cloud Removal Based on Cloud Masks Developed Using Classification Techniques
by Tesfaye Adugna, Wenbo Xu, Jinlong Fan, Xin Luo and Haitao Jia
Remote Sens. 2024, 16(19), 3665; https://doi.org/10.3390/rs16193665 - 1 Oct 2024
Cited by 1 | Viewed by 2290
Abstract
Cloud cover is a serious problem that affects the quality of remote-sensing (RS) images. Existing cloud removal techniques suffer from notable limitations, such as being specific to certain data types, cloud conditions, and spatial extents, as well as requiring auxiliary data, which hampers their generalizability and flexibility. To address the issue, we propose a maximum-value compositing approach based on generated cloud masks. We acquired 432 daily MOD09GA L2 MODIS images covering a vast region with persistent cloud cover and various climates and land-cover types. Labeled datasets for cloud, land, and no-data were collected from selected daily images. Subsequently, we trained and evaluated RF, SVM, and U-Net models to choose the best ones. Accordingly, SVM and U-Net were chosen and employed to classify all the daily images. The classified images were then converted to two sets of mask layers used to mask clouds and no-data pixels in the corresponding daily images by setting the masked pixels' values to −0.999999. After masking, we employed the maximum-value technique to generate two sets of 16-day composite products, MaxComp-1 and MaxComp-2, corresponding to the SVM- and U-Net-derived cloud masks, respectively. Finally, we assessed the quality of our composite products by comparing them with the reference MOD13A1 16-day composite product. Based on land-cover classification accuracy, our products yielded significantly higher accuracy (5–28%) than the reference MODIS product across three classifiers (RF, SVM, and U-Net), indicating the quality of our products and the effectiveness of our techniques. In particular, MaxComp-1 yielded the best results, which further implies the superiority of SVM for cloud masking. In addition, our products appear to be more radiometrically and spectrally consistent and less noisy than MOD13A1, implying that our approach is more efficient at removing shadows and noise/artifacts. Our method yields high-quality products that are vital for investigating large regions with persistent clouds and for studies requiring time-series data. Moreover, the proposed techniques can be adopted for higher-resolution RS imagery, regardless of spatial extent, data volume, and cloud type.
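The masking-plus-compositing step can be sketched in a few lines of NumPy, assuming a (time, height, width) reflectance stack and boolean cloud/no-data masks as inputs.

```python
# Sketch: mask flagged pixels to the fill value, then take the per-pixel
# maximum over a 16-day stack, as described in the abstract.
import numpy as np

FILL = -0.999999

def max_composite(daily_images, cloud_masks):
    """daily_images: (t, h, w) reflectance; cloud_masks: (t, h, w) bool,
    True where cloud or no-data."""
    stack = np.where(cloud_masks, FILL, daily_images)
    # FILL survives in the composite only where every day is masked.
    return stack.max(axis=0)
```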

25 pages, 4231 KiB  
Article
Estimating Chlorophyll-a and Phycocyanin Concentrations in Inland Temperate Lakes across New York State Using Sentinel-2 Images: Application of Google Earth Engine for Efficient Satellite Image Processing
by Sara Akbarnejad Nesheli, Lindi J. Quackenbush and Lewis McCaffrey
Remote Sens. 2024, 16(18), 3504; https://doi.org/10.3390/rs16183504 - 21 Sep 2024
Cited by 3 | Viewed by 3310
Abstract
Harmful algal blooms (HABs) have been reported with greater frequency in lakes across New York State (NYS) in recent years. In situ sampling is used to assess water quality, but such observations are time intensive and therefore practically limited in their spatial extent. Previous research has used remote sensing imagery to estimate phytoplankton pigments (typically chlorophyll-a or phycocyanin) as HAB indicators. The primary goal of this study was to validate a remote sensing-based method to estimate cyanobacteria concentrations at high temporal (5 days) and spatial (10–20 m) resolution, to allow identification of lakes across NYS at significant risk of algal blooms, thereby facilitating targeted field investigations. We used Google Earth Engine (GEE) as a cloud computing platform to develop an efficient methodology for processing Sentinel-2 image collections at a large spatial and temporal scale. Our research used linear regression to model the correlation between in situ observations of chlorophyll-a (Chl-a) and phycocyanin and indices derived from Sentinel-2 data, evaluating the potential of remote sensing-derived inputs for estimating cyanobacteria concentrations. We tested the performance of empirical models based on seven remote-sensing-derived indices, two in situ measurements, two cloud mitigation approaches, and three temporal sampling windows across NYS lakes for 2019 and 2020. Our best base model (R2 of 0.63), using concurrent sampling data and the ESA cloud masking approach (the QA60 bitmask), related the maximum peak height (MPH) index to phycocyanin concentrations. Expanding the temporal match using a one-day time window increased the available training dataset size and improved the fit of the linear regression model (R2 of 0.71), highlighting the positive impact of a larger training dataset on model fit. Applying the Cloud Score+ method to filter clouds and cloud shadows further improved the fit of the phycocyanin estimation model (R2 of 0.84) but did not yield substantial improvements in the model's application. The fit of the Chl-a models was generally poorer, but these models were still accurate in detecting moderate and high Chl-a values. Future work will focus on exploring alternative algorithms that can incorporate diverse data sources and lake characteristics, contributing to a deeper understanding of the relationship between remote sensing data and water quality parameters. This research provides a valuable tool for cyanobacteria parameter estimation with confidence quantification to identify lakes at risk of algal blooms.
(This article belongs to the Section Engineering Remote Sensing)
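A minimal sketch of the QA60 bitmask step named above, using the Earth Engine Python API; the collection ID and date range are illustrative. Bits 10 and 11 of the QA60 band flag opaque clouds and cirrus, respectively.

```python
# Sketch (earthengine-api): Sentinel-2 QA60 cloud/cirrus masking in GEE.
import ee
ee.Initialize()

def mask_s2_qa60(img):
    qa = img.select('QA60')
    clear = (qa.bitwiseAnd(1 << 10).eq(0)      # bit 10: opaque clouds
               .And(qa.bitwiseAnd(1 << 11).eq(0)))  # bit 11: cirrus
    return img.updateMask(clear)

s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
        .filterDate('2019-06-01', '2019-09-30')
        .map(mask_s2_qa60))
```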

23 pages, 12771 KiB  
Article
Harmonized Landsat and Sentinel-2 Data with Google Earth Engine
by Elias Fernando Berra, Denise Cybis Fontana, Feng Yin and Fabio Marcelo Breunig
Remote Sens. 2024, 16(15), 2695; https://doi.org/10.3390/rs16152695 - 23 Jul 2024
Cited by 10 | Viewed by 11726
Abstract
Continuous and dense time series of satellite remote sensing data are needed for several land monitoring applications, including vegetation phenology, in-season crop assessments, and improving land use and land cover classification. Supporting such applications at medium to high spatial resolution may be challenging with a single optical satellite sensor, as the frequency of good-quality observations can be low. To optimize good-quality data availability, some studies propose harmonized databases. This work aims to develop an 'all-in-one' Google Earth Engine (GEE) web-based workflow to produce harmonized surface reflectance data from Landsat-7 (L7) ETM+, Landsat-8 (L8) OLI, and Sentinel-2 (S2) MSI top-of-atmosphere (TOA) reflectance data. Six major processing steps to generate a new source of near-daily Harmonized Landsat and Sentinel (HLS) reflectance observations at 30 m spatial resolution are proposed and described: band adjustment, atmospheric correction, cloud and cloud shadow masking, view and illumination angle adjustment, co-registration, and reprojection and resampling. The HLS workflow is applied to six equivalent spectral bands, resulting in a surface nadir BRDF-adjusted reflectance (NBAR) time series gridded to a common pixel resolution, map projection, and spatial extent. The spectrally corresponding bands and the derived Normalized Difference Vegetation Index (NDVI) were compared, and their sensor differences were quantified by regression analyses. Examples of HLS time series are presented for two potential applications: agricultural and forest phenology. The HLS product is also validated against ground measurements of NDVI, achieving very similar temporal trajectories and magnitudes of values (R2 = 0.98). The workflow and script presented in this work may be useful for the scientific community aiming to take advantage of multi-sensor harmonized time series of optical data.
(This article belongs to the Section Forest Remote Sensing)
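As an illustration of the band-adjustment step, a per-band linear transform between sensors might look like the following; the coefficients here are placeholders, not the published ETM+/OLI or MSI/OLI adjustment values.

```python
# Sketch: per-band linear band adjustment between sensors.
import numpy as np

ADJUST = {  # band: (slope, intercept) -- placeholder values only
    'red': (0.98, 0.002),
    'nir': (1.01, -0.001),
}

def adjust_band(reflectance, band):
    """Map one sensor's reflectance onto a reference sensor's scale."""
    slope, intercept = ADJUST[band]
    return slope * reflectance + intercept

red_oli_like = adjust_band(np.array([0.05, 0.12, 0.30]), 'red')
```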

21 pages, 20756 KiB  
Article
A Novel Method for Cloud and Cloud Shadow Detection Based on the Maximum and Minimum Values of Sentinel-2 Time Series Images
by Kewen Liang, Gang Yang, Yangyan Zuo, Jiahui Chen, Weiwei Sun, Xiangchao Meng and Binjie Chen
Remote Sens. 2024, 16(8), 1392; https://doi.org/10.3390/rs16081392 - 15 Apr 2024
Cited by 8 | Viewed by 4296
Abstract
Automatic and accurate detection of clouds and cloud shadows is a critical aspect of optical remote sensing image preprocessing. This paper presents a time series maximum and minimum mask method (TSMM) for cloud and cloud shadow detection. Firstly, Cloud Score+S2_HARMONIZED (CS+S2) is employed as a preliminary mask for clouds and cloud shadows. Secondly, we calculate the ratio of the maximum and sub-maximum values of the blue band in the time series, as well as the ratio of the minimum and sub-minimum values of the near-infrared band, to eliminate noise from the time series data. Finally, the maximum value of the clear blue band and the minimum value of the near-infrared band after noise removal are employed for cloud and cloud shadow detection, respectively. A national and a global dataset were used to validate the TSMM, and it was quantitatively compared against five other advanced methods or products. When clouds and cloud shadows are detected simultaneously on the S2ccs dataset, the overall accuracy (OA) reaches 0.93 and the F1 score reaches 0.85, representing increases of 3% and 9%, respectively, over the state-of-the-art CS+S2. On the CloudSEN12 dataset, the producer's accuracy (PA) and F1 score increase by 10% and 4%, respectively, compared with CS+S2. Additionally, when applied to Landsat-8 images, TSMM outperforms Fmask, demonstrating its strong generalization capability.
(This article belongs to the Special Issue Satellite-Based Cloud Climatologies)
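A sketch of the TSMM noise-screening statistics under stated assumptions: the ratio threshold and the fallback-to-runner-up rule are illustrative, not the paper's exact formulation.

```python
# Sketch: per-pixel max/sub-max (blue) and min/sub-min (NIR) statistics
# used to screen time-series noise before the cloud and shadow tests.
import numpy as np

def tsmm_stats(blue, nir, ratio_thresh=1.5):
    """blue, nir: (t, h, w) time series already pre-masked by CS+S2."""
    blue_sorted = np.sort(blue, axis=0)           # ascending along time
    nir_sorted = np.sort(nir, axis=0)
    max_b, submax_b = blue_sorted[-1], blue_sorted[-2]
    min_n, submin_n = nir_sorted[0], nir_sorted[1]
    # Where the extreme is far from the runner-up, treat it as noise
    # and fall back to the runner-up value (illustrative rule).
    blue_max = np.where(max_b / np.maximum(submax_b, 1e-6) > ratio_thresh,
                        submax_b, max_b)
    nir_min = np.where(submin_n / np.maximum(min_n, 1e-6) > ratio_thresh,
                       submin_n, min_n)
    return blue_max, nir_min    # inputs to the cloud / shadow tests
```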

44 pages, 18613 KiB  
Article
Improved Landsat Operational Land Imager (OLI) Cloud and Shadow Detection with the Learning Attention Network Algorithm (LANA)
by Hankui K. Zhang, Dong Luo and David P. Roy
Remote Sens. 2024, 16(8), 1321; https://doi.org/10.3390/rs16081321 - 9 Apr 2024
Cited by 5 | Viewed by 3024
Abstract
Landsat cloud and cloud shadow detection has a long heritage based on the application of empirical spectral tests to single image pixels, including the Landsat product Fmask algorithm, which uses spectral tests applied to optical and thermal bands to detect clouds and uses the sun-sensor-cloud geometry to detect shadows. Since Fmask was developed, convolutional neural network (CNN) algorithms, and in particular U-Net algorithms (a type of CNN with a U-shaped network structure), have been developed and applied to pixels in square patches to take advantage of both spatial and spectral information. The purpose of this study was to develop and assess a new U-Net algorithm that classifies Landsat 8/9 Operational Land Imager (OLI) pixels with higher accuracy than the Fmask algorithm. The algorithm, termed the Learning Attention Network Algorithm (LANA), is a form of U-Net with an additional attention mechanism (a type of network structure) that, unlike conventional U-Net, uses more spatial pixel information across each image patch. The LANA was trained using 16,861 annotated 512 × 512 30 m pixel Landsat 8 OLI patches extracted from 27 images and 69 image subsets that are publicly available and have been used by others for cloud mask algorithm development and assessment. The annotated data were manually refined to improve the annotation and were supplemented with another four annotated images selected to include clear, completely cloudy, and developed-land images. The LANA classifies image pixels as clear, thin cloud, cloud, or cloud shadow. To evaluate classification accuracy, five annotated Landsat 8 OLI images (composed of >205 million 30 m pixels) were classified, and the results were compared with the Fmask and a publicly available U-Net model (U-Net Wieland). The LANA had a 78% overall classification accuracy considering the cloud, thin cloud, cloud shadow, and clear classes. As the LANA, Fmask, and U-Net Wieland algorithms have different class legends, their classification results were harmonized to the same three common classes: cloud, cloud shadow, and clear. Considering these three classes, the LANA had the highest overall accuracy (89%), followed by Fmask (86%) and U-Net Wieland (85%). The LANA had the highest F1-scores for cloud (0.92), cloud shadow (0.57), and clear (0.89), and the other two algorithms had lower F1-scores, particularly for cloud (Fmask 0.90, U-Net Wieland 0.88) and cloud shadow (Fmask 0.45, U-Net Wieland 0.52). In addition, a time-series evaluation was undertaken to examine the prevalence of undetected clouds and cloud shadows (i.e., omission errors). The band-specific temporal smoothness index (TSIλ) was applied to a year of Landsat 8 OLI surface reflectance observations after discarding pixel observations labelled as cloud or cloud shadow. This was undertaken independently at each gridded pixel location in four 5000 × 5000 30 m pixel Landsat analysis-ready data (ARD) tiles. The TSIλ results broadly reflected the classification accuracy results and indicated that the LANA had the smallest cloud and cloud shadow omission errors, whereas the Fmask had the greatest cloud omission error and the second-greatest cloud shadow omission error. Detailed visual examination, true-color image examples, and classification results are included and confirm these findings. The TSIλ results also highlight the need for algorithm developers to undertake product quality assessment in addition to accuracy assessment. The LANA model, training and evaluation data, and application codes are publicly available for other researchers.
(This article belongs to the Special Issue Deep Learning on the Landsat Archive)
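The abstract does not specify the attention mechanism in detail; as a rough sketch, a generic additive attention gate of the kind used in attention U-Nets looks like this (LANA's exact design may differ).

```python
# Sketch (PyTorch): generic additive attention gate on a U-Net skip path.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, ch_skip, ch_gate, ch_mid):
        super().__init__()
        self.w_skip = nn.Conv2d(ch_skip, ch_mid, 1)
        self.w_gate = nn.Conv2d(ch_gate, ch_mid, 1)
        self.psi = nn.Conv2d(ch_mid, 1, 1)

    def forward(self, skip, gate):
        # gate: decoder context already upsampled to the skip's resolution.
        a = torch.sigmoid(self.psi(torch.relu(self.w_skip(skip) +
                                              self.w_gate(gate))))
        return skip * a    # reweight skip features by spatial attention

att = AttentionGate(ch_skip=64, ch_gate=128, ch_mid=32)
out = att(torch.randn(1, 64, 128, 128), torch.randn(1, 128, 128, 128))
```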

25 pages, 4590 KiB  
Article
Intercomparison of Same-Day Remote Sensing Data for Measuring Winter Cover Crop Biophysical Traits
by Alison Thieme, Kusuma Prabhakara, Jyoti Jennewein, Brian T. Lamb, Greg W. McCarty and Wells Dean Hively
Sensors 2024, 24(7), 2339; https://doi.org/10.3390/s24072339 - 6 Apr 2024
Cited by 4 | Viewed by 2779
Abstract
Winter cover crops are planted during the fall to reduce nitrogen losses and soil erosion and to improve soil health. Accurate estimation of winter cover crop performance and biophysical traits, including biomass and fractional vegetative groundcover, supports accurate assessment of environmental benefits. We examined the comparability of measurements between ground-based and spaceborne sensors as well as between processing levels (e.g., surface vs. top-of-atmosphere reflectance) in estimating cover crop biophysical traits. This research examined the relationships between SPOT 5, Landsat 7, and WorldView-2 same-day paired satellite imagery and handheld multispectral proximal sensors on two days during the 2012–2013 winter cover crop season. We compared two processing levels from three satellites with spatially aggregated proximal data for the red and green spectral bands as well as the normalized difference vegetation index (NDVI). We then compared NDVI-estimated fractional green cover to in situ photographs, and we derived cover crop biomass estimates from NDVI using existing calibration equations. We used slope and intercept contrasts to test whether estimates of biomass and fractional green cover differed statistically between sensors and processing levels. Compared to top-of-atmosphere imagery, surface reflectance imagery was more closely correlated with the proximal sensors, with intercepts closer to zero, regression slopes nearer the 1:1 line, and less variance between measured values. Additionally, surface reflectance NDVI derived from satellites showed strong agreement with passive handheld multispectral proximal sensor-estimated fractional green cover and biomass (adj. R2 = 0.96 and 0.95; RMSE = 4.76% and 259 kg ha−1, respectively). Although active handheld multispectral proximal sensor-derived fractional green cover and biomass estimates showed high accuracies (R2 = 0.96 and 0.96, respectively), they also demonstrated large intercept offsets (−25.5 and 4.51, respectively). Our results suggest that many passive multispectral remote sensing platforms may be used interchangeably to assess cover crop biophysical traits, whereas SPOT 5 required an adjustment in the NDVI intercept. Active sensors may require separate calibrations or intercept correction prior to combination with passive sensor data. Although surface reflectance products were highly correlated with proximal sensors, the standardized cloud mask failed to completely capture cloud shadows in Landsat 7, which dampened the signal of the NIR and red bands in shadowed pixels.
(This article belongs to the Section Environmental Sensing)
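A minimal sketch of the slope/intercept comparison, fitting satellite-derived values against proximal-sensor values with ordinary least squares; the sample data are illustrative.

```python
# Sketch: NDVI plus an OLS fit for sensor intercomparison.
import numpy as np
from scipy.stats import linregress

def ndvi(nir, red):
    return (nir - red) / (nir + red)

proximal = np.array([0.42, 0.55, 0.61, 0.70, 0.78])    # ground NDVI
satellite = np.array([0.40, 0.52, 0.63, 0.68, 0.80])   # same-day pixels
fit = linregress(proximal, satellite)
# A slope near 1 and an intercept near 0 indicate the sensors can be
# used interchangeably; large offsets call for intercept correction.
print(fit.slope, fit.intercept, fit.rvalue**2)
```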

22 pages, 5870 KiB  
Article
Hierarchical Integration of UAS and Sentinel-2 Imagery for Spruce Bark Beetle Grey-Attack Detection by Vegetation Index Thresholding Approach
by Grigorijs Goldbergs and Emīls Mārtiņš Upenieks
Forests 2024, 15(4), 644; https://doi.org/10.3390/f15040644 - 2 Apr 2024
Viewed by 3364
Abstract
This study aimed to examine the efficiency of the vegetation index (VI) thresholding approach for mapping deadwood caused by a spruce bark beetle outbreak. For this, the study upscaled individual dead spruce detections from unmanned aerial system (UAS) imagery, used as reference data, to continuous spruce deadwood mapping at the stand/landscape level via VI-thresholded binary masks calculated from Sentinel-2 satellite imagery. The study found that the Normalized Difference Vegetation Index (NDVI) was most effective for distinguishing dead spruce from healthy trees, with an accuracy of 97% using UAS imagery. The results showed that the NDVI minimises cloud and dominant-tree shadows and illumination differences during UAS imagery acquisition, keeping the NDVI relatively stable over sunny and cloudy weather conditions. As in the UAS case, the NDVI calculated from Sentinel-2 (S2) imagery was the most reliable index for spruce deadwood cover mapping using a binary threshold mask at a landscape scale. Based on the accuracy assessment, the summer leaf-on period (June–July) was found to be the most appropriate for spruce deadwood mapping by S2 imagery, with an accuracy of 85% and a deadwood detection rate of 83% in dense, closed-canopy mixed conifer forests. The study found that spruce deadwood was successfully classified by S2 imagery when the isolated dead-tree cluster covered at least 5–7 Sentinel-2 pixels.
(This article belongs to the Special Issue Forest Structure Monitoring Based on Remote Sensing)
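A sketch of the VI-thresholding step with a minimum-cluster rule reflecting the 5–7 pixel detectability limit noted above; the NDVI threshold value is an illustrative assumption.

```python
# Sketch: binary NDVI threshold mask with a minimum connected-cluster size.
import numpy as np
from scipy import ndimage

def deadwood_mask(ndvi, thresh=0.45, min_pixels=5):
    """ndvi: 2-D array; thresh is illustrative, calibrate per site."""
    mask = ndvi < thresh                      # candidate dead-spruce pixels
    labels, n = ndimage.label(mask)           # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep_ids = np.nonzero(sizes >= min_pixels)[0] + 1
    return np.isin(labels, keep_ids)          # drop clusters too small to map
```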

24 pages, 27307 KiB  
Article
Spatial–Temporal Approach and Dataset for Enhancing Cloud Detection in Sentinel-2 Imagery: A Case Study in China
by Chengjuan Gong, Ranyu Yin, Tengfei Long, Weili Jiao, Guojin He and Guizhou Wang
Remote Sens. 2024, 16(6), 973; https://doi.org/10.3390/rs16060973 - 10 Mar 2024
Cited by 5 | Viewed by 2211
Abstract
Clouds often pose challenges in the application of optical satellite images. Masking clouds and cloud shadows is a crucial step in the image preprocessing workflow. The absence of a thermal band in the Sentinel-2 series complicates cloud detection. Additionally, most existing cloud detection methods provide binary results (cloud or non-cloud), which lack information on thin clouds and cloud shadows. This study used end-to-end supervised spatial-temporal deep learning (STDL) models to enhance cloud detection in Sentinel-2 imagery over China. To support this workflow, a new dataset for time-series cloud detection, featuring high-quality labels for thin clouds and haze, was constructed through time-series interpretation. A classification system of six categories was employed to obtain more detailed results and reduce intra-class variance. Balancing accuracy and computational efficiency, we constructed four STDL models based on shared-weight convolution modules and different classification modules (dense, long short-term memory (LSTM), bidirectional LSTM (Bi-LSTM), and transformer). The results indicated that spatial and temporal features were crucial for high-quality cloud detection. The STDL models with simple architectures, trained on our dataset, achieved excellent accuracy and detailed detection of clouds and cloud shadows, even though only four bands at 10 m resolution were used. The models using the Bi-LSTM and the transformer as classifiers achieved similarly high overall accuracies; the transformer classifier was slightly less accurate than the Bi-LSTM but more computationally efficient. Comparative experiments also demonstrated that the usable-data labels and cloud detection results obtained with our workflow outperformed the existing s2cloudless, MAJA, and CS+ methods.
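A sketch of the shared-weight convolution plus Bi-LSTM pattern the abstract describes: one CNN encodes every date, and an LSTM classifies each pixel's temporal sequence. All sizes are illustrative assumptions.

```python
# Sketch (PyTorch): shared-weight CNN encoder + Bi-LSTM temporal classifier.
import torch
import torch.nn as nn

class SharedConvBiLSTM(nn.Module):
    def __init__(self, in_ch=4, feat=32, n_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(          # same weights for every date
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(feat, feat, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * feat, n_classes)

    def forward(self, x):                      # x: (B, T, C, H, W)
        b, t, c, h, w = x.shape
        f = self.encoder(x.reshape(b * t, c, h, w))            # (B*T, F, H, W)
        f = f.reshape(b, t, -1, h, w).permute(0, 3, 4, 1, 2)   # (B, H, W, T, F)
        f = f.reshape(b * h * w, t, -1)        # one sequence per pixel
        out, _ = self.lstm(f)                  # (B*H*W, T, 2F)
        logits = self.head(out[:, -1])         # classify each pixel sequence
        return logits.reshape(b, h, w, -1).permute(0, 3, 1, 2)

model = SharedConvBiLSTM()
y = model(torch.randn(1, 5, 4, 32, 32))       # -> (1, 6, 32, 32)
```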

17 pages, 4843 KiB  
Article
Cropland Inundation Mapping in Rugged Terrain Using Sentinel-1 and Google Earth Imagery: A Case Study of 2022 Flood Event in Fujian Provinces
by Mengjun Ku, Hao Jiang, Kai Jia, Xuemei Dai, Jianhui Xu, Dan Li, Chongyang Wang and Boxiong Qin
Agronomy 2024, 14(1), 138; https://doi.org/10.3390/agronomy14010138 - 5 Jan 2024
Cited by 1 | Viewed by 1878
Abstract
South China is dominated by mountainous agriculture, and its croplands are at risk of flood disasters, posing a great threat to food security. Synthetic aperture radar (SAR) has the advantage of being all-weather capable, able to penetrate clouds and monitor cropland inundation. However, SAR data can be affected by noise, i.e., radar shadows and permanent water bodies. Existing cropland data derived from open-access land cover products are not accurate enough to mask out these noise sources, mainly due to insufficient spatial resolution. This study proposed a method that extracts cropland inundation using a high-spatial-resolution cropland mask. First, the Proportional-Integral-Derivative Network (PIDNet) was applied to sub-meter-level imagery to identify cropland areas. Then, the Sentinel-1 dual-polarized water index (SDWI) and change detection (CD) were used to distinguish flooded areas from permanent open water bodies. A case study was conducted in Fujian Province, China, which endured several heavy rainfalls in summer 2022. The Intersection over Union (IoU) of the extracted cropland data reached 89.38%, and the F1-score of cropland inundation reached 82.35%. The proposed method provides support for agricultural disaster assessment and disaster emergency monitoring.
(This article belongs to the Special Issue Application of Remote Sensing and GIS Technology in Agriculture)
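A sketch of the SDWI-plus-change-detection logic; the SDWI form ln(10·VV·VH) − 8 > 0 follows one published formulation and should be checked against the paper, and the simple pre/post rule is an assumption.

```python
# Sketch: SDWI water test plus a pre/post change rule for inundation.
import numpy as np

def sdwi_water(vv, vh):
    """vv, vh: linear-scale backscatter (not dB). Formulation assumed."""
    return np.log(10.0 * vv * vh) - 8.0 > 0.0

def flooded_cropland(vv_pre, vh_pre, vv_post, vh_post, cropland):
    """cropland: high-resolution boolean mask (e.g., from PIDNet)."""
    water_pre = sdwi_water(vv_pre, vh_pre)
    water_post = sdwi_water(vv_post, vh_post)
    # New water inside the cropland mask counts as inundation; pixels
    # that are water in both dates (permanent water) are excluded.
    return water_post & ~water_pre & cropland
```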

24 pages, 5702 KiB  
Article
Progress and Limitations in the Satellite-Based Estimate of Burnt Areas
by Giovanni Laneve, Marco Di Fonzo, Valerio Pampanoni and Ramon Bueno Morles
Remote Sens. 2024, 16(1), 42; https://doi.org/10.3390/rs16010042 - 21 Dec 2023
Cited by 4 | Viewed by 2578
Abstract
The detection of burnt areas from satellite imagery is one of the most straightforward and useful applications of satellite remote sensing. In general, the approach relies on a change detection analysis applied to pre- and post-event images. This change detection analysis is usually carried out by comparing the values of specific spectral indices, such as the NBR (normalised burn ratio), BAI (burn area index), and MIRBI (mid-infrared burn index). However, some potential sources of error arise, particularly when near-real-time automated approaches are adopted. An automated approach is mandatory when burnt-area monitoring must operate systematically over a large area, such as an entire country. Potential sources of error include clouds in the pre- or post-event images, cloud or topographic shadows, agricultural practices, image pixel size, and the level of damage. Some authors have already noted differences between global databases of burnt areas based on satellite images. Sources of error may be related to the spatial resolution of the images used, the land-cover mask adopted to avoid false alarms, and the quality of the cloud and shadow masks. This paper compares different burnt-area datasets (EFFIS, ESACCI, Copernicus, FIRMS, etc.) with the objective of analysing their differences. The comparison is restricted to the Italian territory. Furthermore, the paper aims to identify the degree of approximation of these satellite-based datasets by relying on ground survey data as ground truth. To do so, ground survey data provided by CUFA (Comando Unità Forestali, Ambientali e Agroalimentari Carabinieri) and CFVA (Corpo Forestale e Vigilanza Ambientale Sardegna) were used. The results confirm the existence of significant differences between the datasets. The subsequent comparison with the ground surveys, conducted while also taking their own approximations into account, allowed us to identify the accuracy of the satellite-based datasets.
(This article belongs to the Special Issue Advances in Remote Sensing of Fire and Emergency Management)
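A sketch of the NBR-based change detection mentioned above; the dNBR threshold is ecosystem-dependent, and the value here is only illustrative.

```python
# Sketch: pre/post NBR differencing (dNBR) for burnt-area mapping.
import numpy as np

def nbr(nir, swir2):
    return (nir - swir2) / (nir + swir2)

def burnt_mask(nir_pre, swir2_pre, nir_post, swir2_post, thresh=0.1):
    dnbr = nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)
    return dnbr > thresh    # larger drops in NBR indicate burnt area
```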
