Search Results (22)

Search Parameters:
Keywords = multitemporal cloud masking

23 pages, 14567 KiB  
Article
CloudTran++: Improved Cloud Removal from Multi-Temporal Satellite Images Using Axial Transformer Networks
by Dionysis Christopoulos, Valsamis Ntouskos and Konstantinos Karantzalos
Remote Sens. 2025, 17(1), 86; https://doi.org/10.3390/rs17010086 - 29 Dec 2024
Viewed by 1290
Abstract
We present a method for cloud removal from satellite images using axial transformer networks. The method considers a set of multi-temporal images in a given region of interest, together with the corresponding cloud masks, and produces a cloud-free image for a specific day of the year. We propose the combination of an encoder-decoder model employing axial attention layers for the estimation of the low-resolution cloud-free image, together with a fully parallel upsampler that reconstructs the image at full resolution. The method is compared with various baselines and state-of-the-art methods on Sentinel-2 datasets of different coverage, showing significant improvements across multiple standard metrics used for image quality assessment. Full article
(This article belongs to the Section Remote Sensing Image Processing)
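The axial attention idea at the heart of CloudTran++ can be illustrated independently of the paper's implementation: instead of attending over all H×W positions at once (quadratic in the number of pixels), attention is applied along each row and then along each column. A minimal NumPy sketch of that factorization, not the authors' code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product self-attention over one sequence.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def axial_attention(x):
    # x: (H, W, C). Attend along each row, then along each column,
    # instead of over all H*W positions jointly.
    h, w, c = x.shape
    rows = np.stack([attention(x[i], x[i], x[i]) for i in range(h)])
    cols = np.stack([attention(rows[:, j], rows[:, j], rows[:, j])
                     for j in range(w)], axis=1)
    return cols

x = np.random.default_rng(0).normal(size=(8, 8, 4))
y = axial_attention(x)
```

For an H×W image this costs O(HW(H + W)) instead of O((HW)²), which is what makes attention tractable on full satellite tiles.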

30 pages, 8057 KiB  
Article
Multi-Temporal Pixel-Based Compositing for Cloud Removal Based on Cloud Masks Developed Using Classification Techniques
by Tesfaye Adugna, Wenbo Xu, Jinlong Fan, Xin Luo and Haitao Jia
Remote Sens. 2024, 16(19), 3665; https://doi.org/10.3390/rs16193665 - 1 Oct 2024
Cited by 1 | Viewed by 2290
Abstract
Cloud cover is a serious problem that affects the quality of remote-sensing (RS) images. Existing cloud removal techniques suffer from notable limitations, such as being specific to certain data types, cloud conditions, and spatial extents, as well as requiring auxiliary data, which hampers their generalizability and flexibility. To address this issue, we propose a maximum-value compositing approach based on generated cloud masks. We acquired 432 daily MOD09GA L2 MODIS images covering a vast region with persistent cloud cover and various climates and land-cover types. Labeled datasets for cloud, land, and no-data were collected from selected daily images. Subsequently, we trained and evaluated RF, SVM, and U-Net models to choose the best models. Accordingly, SVM and U-Net were chosen and employed to classify all the daily images. The classified images were then converted to two sets of mask layers used to mask cloud and no-data pixels in the corresponding daily images by setting the masked pixels’ values to −0.999999. After masking, we employed the maximum-value technique to generate two sets of 16-day composite products, MaxComp-1 and MaxComp-2, corresponding to the SVM- and U-Net-derived cloud masks, respectively. Finally, we assessed the quality of our composite products by comparing them with the reference MOD13A1 16-day composite product. Based on land-cover classification accuracy, our products yielded significantly higher accuracy (5–28%) than the reference MODIS product across three classifiers (RF, SVM, and U-Net), indicating the quality of our products and the effectiveness of our techniques. In particular, MaxComp-1 yielded the best results, which further implies the superiority of SVM for cloud masking. In addition, our products appear to be more radiometrically and spectrally consistent and less noisy than MOD13A1, implying that our approach is more efficient in removing shadows and noise/artifacts. 
Our method yields high-quality products that are vital for investigating large regions with persistent clouds and for studies requiring time-series data. Moreover, the proposed techniques can be adopted for higher-resolution RS imagery, regardless of the spatial extent, data volume, and type of clouds. Full article
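The compositing step described above is straightforward to sketch: cloud and no-data pixels are set to the sentinel value −0.999999, and a per-pixel maximum is taken over the 16-day window. A minimal NumPy illustration (array shapes assumed; not the authors' code):

```python
import numpy as np

FILL = -0.999999  # sentinel value used for cloud / no-data pixels

def max_value_composite(daily, masks):
    """daily: (T, H, W) reflectance or VI stack for one compositing window;
    masks: (T, H, W) boolean, True where a pixel is cloud or no-data."""
    stack = np.where(masks, FILL, daily)
    composite = stack.max(axis=0)       # per-pixel maximum over time
    all_masked = masks.all(axis=0)      # pixels with no clear observation
    return np.where(all_masked, FILL, composite)

rng = np.random.default_rng(1)
daily = rng.uniform(0.0, 0.8, size=(16, 4, 4))
masks = rng.random(size=(16, 4, 4)) < 0.3
comp = max_value_composite(daily, masks)
```

Because the sentinel is far below any valid reflectance, the maximum over the window always prefers a clear observation when one exists.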

16 pages, 4865 KiB  
Article
Combining Gaussian Process Regression with Poisson Blending for Seamless Cloud Removal from Optical Remote Sensing Imagery for Cropland Monitoring
by Soyeon Park and No-Wook Park
Agronomy 2023, 13(11), 2789; https://doi.org/10.3390/agronomy13112789 - 10 Nov 2023
Cited by 3 | Viewed by 1537
Abstract
Constructing optical image time series for cropland monitoring requires a cloud removal method that accurately restores cloud regions and eliminates discontinuity around cloud boundaries. This paper describes a two-stage hybrid machine learning-based cloud removal method that combines Gaussian process regression (GPR)-based predictions with image blending for seamless optical image reconstruction. GPR is employed in the first stage to generate initial prediction results by quantifying the temporal relationships between multi-temporal images. In particular, GPR predictive uncertainty is combined with the prediction values so that uncertainty-weighted predictions serve as the input for the next stage. In the second stage, Poisson blending is applied to eliminate discontinuity in the GPR-based predictions. The benefits of this method are illustrated through cloud removal experiments using Sentinel-2 images with synthetic cloud masks over two cropland sites. The proposed method was able to maintain the structural features and quality of the underlying reflectance in cloud regions and outperformed two existing hybrid cloud removal methods for all spectral bands. Furthermore, it demonstrated the best performance in predicting several vegetation indices in cloud regions. These experimental results indicate the benefits of the proposed cloud removal method for reconstructing cloud-contaminated optical imagery. Full article
(This article belongs to the Special Issue Use of Satellite Imagery in Agriculture—Volume II)
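The first stage described above, GPR prediction from multi-temporal observations, can be illustrated with the closed-form Gaussian process posterior for a single pixel's time series. This is a generic textbook GP sketch in NumPy with an RBF kernel; the length scale, noise level, and data are arbitrary illustrations, not the paper's configuration:

```python
import numpy as np

def rbf(a, b, length=20.0):
    # RBF (squared-exponential) kernel between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gpr_predict(t_train, y_train, t_test, noise=1e-3):
    # Posterior mean and variance of a zero-mean GP with an RBF kernel.
    K = rbf(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = rbf(t_test, t_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Clear-sky NDVI observations (day of year -> NDVI) for one pixel:
t_clear = np.array([10., 40., 70., 130., 160., 190.])
y_clear = np.array([0.2, 0.3, 0.45, 0.7, 0.65, 0.5])
t_cloudy = np.array([100.])                 # date hidden by cloud
mean, var = gpr_predict(t_clear, y_clear - y_clear.mean(), t_cloudy)
ndvi_filled = mean + y_clear.mean()
```

The posterior variance `var` is the kind of uncertainty the paper combines with the prediction to form the uncertainty-weighted input to the Poisson blending stage.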

23 pages, 37642 KiB  
Article
Automated Georectification, Mosaicking and 3D Point Cloud Generation Using UAV-Based Hyperspectral Imagery Observed by Line Scanner Imaging Sensors
by Anthony Finn, Stefan Peters, Pankaj Kumar and Jim O’Hehir
Remote Sens. 2023, 15(18), 4624; https://doi.org/10.3390/rs15184624 - 20 Sep 2023
Cited by 7 | Viewed by 1950
Abstract
Hyperspectral sensors mounted on unmanned aerial vehicles (UAV) offer the prospect of high-resolution multi-temporal spectral analysis for a range of remote-sensing applications. However, although accurate onboard navigation sensors track the moment-to-moment pose of the UAV in flight, geometric distortions are introduced into the scanned data sets. Consequently, considerable time-consuming (user/manual) post-processing rectification effort is generally required to retrieve geometrically accurate mosaics of the hyperspectral data cubes. Moreover, due to the line-scan nature of many hyperspectral sensors and their intrinsic inability to exploit structure from motion (SfM), only 2D mosaics are generally created. To address this, we propose a fast, automated and computationally robust georectification and mosaicking technique that generates 3D hyperspectral point clouds. The technique first morphologically and geometrically examines (and, if possible, repairs) poorly constructed individual hyperspectral cubes before aligning these cubes into swaths. The luminance of each individual cube is estimated and normalised, prior to being integrated into a swath of images. The hyperspectral swaths are co-registered to a targeted element of a luminance-normalised orthomosaic obtained using a standard red–green–blue (RGB) camera and SfM. To avoid computationally intensive image processing operations such as 2D convolutions, key elements of the orthomosaic are identified using pixel masks, pixel index manipulation and nearest neighbour searches. Maximally stable extremal regions (MSER) and speeded-up robust feature (SURF) extraction are then combined with maximum likelihood sample consensus (MLESAC) feature matching to generate the best geometric transformation model for each swath. 
This geometrically transforms and merges individual pushbroom scanlines into a single spatially continuous hyperspectral mosaic; this georectified 2D hyperspectral mosaic is then converted into a 3D hyperspectral point cloud by aligning it with the RGB point cloud used to create the SfM orthomosaic. High spatial accuracy is demonstrated: hyperspectral mosaics with a 5 cm spatial resolution were assembled with root mean square positional accuracies of 0.42 m. The technique was tested on five scenes comprising two types of landscape. The entire process, which is coded in MATLAB, takes around twenty minutes to process data sets covering around 30 ha at a 5 cm resolution on a laptop with 32 GB RAM and an Intel® Core i7-8850H CPU running at 2.60 GHz. Full article

20 pages, 40396 KiB  
Article
Convolutional Neural Network-Driven Improvements in Global Cloud Detection for Landsat 8 and Transfer Learning on Sentinel-2 Imagery
by Shulin Pang, Lin Sun, Yanan Tian, Yutiao Ma and Jing Wei
Remote Sens. 2023, 15(6), 1706; https://doi.org/10.3390/rs15061706 - 22 Mar 2023
Cited by 15 | Viewed by 3840
Abstract
A stable and reliable cloud detection algorithm is an important step in optical satellite data preprocessing. Existing threshold methods are mostly based on classifying the spectral features of isolated individual pixels and do not incorporate spatial information. This often leads to misclassifications of bright surfaces, such as human-made structures or snow/ice. Multi-temporal methods can alleviate this problem, but cloud-free images of the scene are difficult to obtain. To deal with this issue, we extended four deep-learning Convolutional Neural Network (CNN) models to improve global cloud detection accuracy for Landsat imagery. The inputs are simplified to all discrete spectral channels from visible to shortwave infrared wavelengths after radiometric calibration, and the United States Geological Survey (USGS) global Landsat 8 Biome cloud-cover assessment dataset is randomly divided into independent training and validation sets. Experiments demonstrate that the cloud mask of the extended U-Net model (i.e., UNmask) yields the best performance among all the models in estimating cloud amounts (cloud amount difference, CAD = −0.35%) and capturing cloud distributions (overall accuracy = 94.9%) for Landsat 8 imagery compared with the reference validation masks; in particular, it runs fast, taking only about 41 ± 5.5 s per scene. Our model can also detect broken and thin clouds over both dark and bright surfaces (e.g., urban and barren). Lastly, the UNmask model trained on Landsat 8 imagery is successfully applied to cloud detection for Sentinel-2 imagery (overall accuracy = 90.1%) via transfer learning. These results demonstrate the great potential of our model for future applications such as remote sensing satellite data preprocessing. Full article
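The two headline metrics quoted above, overall accuracy and cloud amount difference (CAD), are simple to compute from binary masks. A NumPy sketch, under the assumption that CAD is the predicted minus the reference cloud fraction in percent:

```python
import numpy as np

def cloud_metrics(pred, ref):
    """pred, ref: boolean cloud masks of the same shape.
    Returns overall accuracy and cloud amount difference (CAD),
    both in percent."""
    oa = 100.0 * (pred == ref).mean()          # fraction of agreeing pixels
    cad = 100.0 * (pred.mean() - ref.mean())   # bias in total cloud amount
    return oa, cad

ref = np.zeros((10, 10), dtype=bool)
ref[:5] = True                     # upper half of the scene is cloudy
pred = ref.copy()
pred[0, :2] = False                # the detector misses two cloudy pixels
oa, cad = cloud_metrics(pred, ref) # oa ≈ 98.0, cad ≈ -2.0
```

A negative CAD, as in the abstract's −0.35%, means the model slightly underestimates the total cloud amount even when per-pixel agreement is high.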

16 pages, 9237 KiB  
Article
How Weather Affects over Time the Repeatability of Spectral Indices Used for Geological Remote Sensing
by Harald van der Werff, Janneke Ettema, Akhil Sampatirao and Robert Hewson
Remote Sens. 2022, 14(24), 6303; https://doi.org/10.3390/rs14246303 - 13 Dec 2022
Cited by 6 | Viewed by 2316
Abstract
Geologic remote sensing studies often target surface cover that is supposed to be invariant or only changing on a geological timescale. In terms of surface material characteristics, this holds for rocks and minerals, but only to a lesser degree for soils (including alluvium, colluvium, regolith or weathered outcrop) and not for vegetation cover, for example. A view unobstructed by clouds, vegetation or fire scars is essential for persistent observation of surface mineralogy. Sensors with continuous multi-temporal operation (e.g., Landsat 8 OLI and Sentinel-2 MSI) can provide the data volume needed to arrive at an optimal seasonal acquisition and to apply data fusion approaches that create an unobstructed view. However, the acquisition environment always changes over time, driven by seasonal changes, illumination changes and the weather. Consequently, the creation of an unobstructed view does not necessarily lead to a repeatable measurement. In this paper, we evaluate the influence of weather and the resulting soil moisture conditions over a 3-year period, with alternating dry and wet periods, on the variance of several “geological” spectral indices in a semi-arid area. Sentinel-2 MSI data are chosen to calculate band ratios for green vegetation, ferric and ferrous iron oxide mineralogy and hydroxyl-bearing alteration (clay) mineralogy. The data were used “as provided”, meaning that the performance of the atmospheric correction and the geometric accuracy is not changed. The results are shown as time series for selected areas that include solid rock, beach sand, bare soil and natural vegetation surfaces. Results show that spectral index values vary not only between dry and wet periods, but also within dry periods longer than 45 days, as a result of changing soil moisture conditions long after the last rain event has passed. 
In terms of repeatability of measurements, an overall low soil-moisture level is more important for long-term stability of spectral index values than the occurrence of minor rain events. In terms of creating an unobstructed view, we found that thresholds for NDVI should not be higher than 0.1 when masking vegetation in geological remote sensing, which is lower than what is usually indicated in the literature. In conclusion, multi-temporal data are not only important for studying dynamic Earth processes, but also for improving the mapping of surfaces that are seemingly invariant. As this work is based on a few selected pixels, the obtained results should be considered indicative only and not as a numerical truth. We conclude that multi-temporal data can be used to create an unobstructed view, but also to select the data that yield the most repeatable measurements. Image selection should not be based on a certain number of rain-free days preceding data acquisition but should aim for the lowest soil moisture conditions. Consequently, weather data should be incorporated both when selecting remote sensing imagery and when analyzing multi-temporal data. Full article
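The NDVI threshold recommended above (mask anything with NDVI above 0.1 before computing geological band ratios) takes only a few lines. An illustrative NumPy sketch; the function name and the tiny epsilon guard are mine, not the paper's:

```python
import numpy as np

def vegetation_mask(red, nir, threshold=0.1):
    """Flag vegetated pixels to exclude before computing geological
    band ratios. The study above suggests NDVI > 0.1 as the cut-off,
    stricter than thresholds commonly used for vegetation mapping."""
    ndvi = (nir - red) / (nir + red + 1e-12)   # guard against 0/0
    return ndvi > threshold

red = np.array([0.30, 0.05, 0.20])
nir = np.array([0.32, 0.40, 0.21])   # bare soil, vegetation, rock
mask = vegetation_mask(red, nir)     # only the middle pixel is flagged
```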

32 pages, 16861 KiB  
Article
Assessing the Added Value of Sentinel-1 PolSAR Data for Crop Classification
by Maria Ioannidou, Alkiviadis Koukos, Vasileios Sitokonstantinou, Ioannis Papoutsis and Charalampos Kontoes
Remote Sens. 2022, 14(22), 5739; https://doi.org/10.3390/rs14225739 - 13 Nov 2022
Cited by 23 | Viewed by 4173
Abstract
Crop classification is an important remote sensing task with many applications, e.g., food security monitoring, ecosystem service mapping, and climate change impact assessment. This work focuses on mapping 10 crop types at the field level in an agricultural region located in the Spanish province of Navarre. For this, multi-temporal Synthetic Aperture Radar Polarimetric (PolSAR) Sentinel-1 imagery and multi-spectral Sentinel-2 data were jointly used. We applied the Cloude–Pottier polarimetric decomposition to the PolSAR data to compute 23 polarimetric indicators and extracted vegetation indices from the Sentinel-2 time series to generate a large feature space of 818 features. In order to assess the relevance of the different features for the crop mapping task, we ran a number of scenarios using a Support Vector Machines (SVM) classifier. The model trained using only the polarimetric data demonstrated very promising performance, achieving an overall accuracy of over 82%. A genetic algorithm was also implemented as a feature selection method for deriving an optimal feature subset. To showcase the positive effect of using polarimetric data over areas suffering from cloud coverage, we contaminated the original Sentinel-2 time series with simulated cloud masks. By incorporating the genetic algorithm, we derived a highly informative subset of 120 optical and polarimetric features, whose classification model increased the overall accuracy by 5% compared to the model trained only with Sentinel-2 features. The feature importance analysis indicated that, apart from the Sentinel-2 spectral bands and vegetation indices, several polarimetric parameters, such as Shannon entropy, the second eigenvalue and normalised Shannon entropy, are of high value in identifying crops. 
In summary, the findings of our study highlight the significant contribution of Sentinel-1 PolSAR data in crop classification in areas with frequent cloud coverage and the effectiveness of the genetic algorithm in discovering the most informative features. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Vegetation Classification)
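Behind eigenvalue-based polarimetric features like those named above sits the Cloude–Pottier decomposition of the 3×3 coherency matrix. The entropy H computed from its eigenvalues is the classic example (the paper's Shannon entropy parameters are related but defined differently). A generic NumPy sketch:

```python
import numpy as np

def cloude_pottier_entropy(T):
    """Polarimetric entropy H from a 3x3 Hermitian coherency matrix T:
    H = -sum_i p_i * log3(p_i), with p_i = lambda_i / sum(lambda).
    H = 0 for a single dominant scatterer, H = 1 for random scattering."""
    lam = np.linalg.eigvalsh(T).clip(min=0)
    p = lam / lam.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log(p)).sum() / np.log(3))

T_random = np.eye(3)                  # equal eigenvalues -> H = 1
T_single = np.diag([1.0, 0.0, 0.0])   # one dominant scatterer -> H = 0
h_random = cloude_pottier_entropy(T_random)
h_single = cloude_pottier_entropy(T_single)
```

Vegetated fields tend toward higher entropy (volume scattering) than bare soil, which is one reason such parameters help separate crops.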

19 pages, 9905 KiB  
Article
A Flexible Multi-Temporal and Multi-Modal Framework for Sentinel-1 and Sentinel-2 Analysis Ready Data
by Priti Upadhyay, Mikolaj Czerkawski, Christopher Davison, Javier Cardona, Malcolm Macdonald, Ivan Andonovic, Craig Michie, Robert Atkinson, Nikela Papadopoulou, Konstantinos Nikas and Christos Tachtatzis
Remote Sens. 2022, 14(5), 1120; https://doi.org/10.3390/rs14051120 - 24 Feb 2022
Cited by 11 | Viewed by 6017
Abstract
The rich, complementary data provided by the Sentinel-1 and Sentinel-2 satellite constellations hold considerable potential to transform Earth observation (EO) applications. However, a substantial amount of effort and infrastructure is still required for the generation of analysis-ready data (ARD) from the low-level products provided by the European Space Agency (ESA). Here, a flexible Python framework able to generate a range of consistent ARD aligned with the ESA-recommended processing pipeline is detailed. Sentinel-1 Synthetic Aperture Radar (SAR) data are radiometrically calibrated, speckle-filtered and terrain-corrected, and Sentinel-2 multi-spectral data are resampled in order to harmonise the spatial resolution between the two streams and to allow stacking with multiple scene classification masks. The global coverage and flexibility of the framework allow users to define a specific region of interest (ROI) and time window to create geo-referenced Sentinel-1 and Sentinel-2 images, or a combination of both with the closest temporal alignment. The framework can be applied to any location and is user-centric and versatile in generating multi-modal and multi-temporal ARD. Finally, the framework automatically handles the inherent challenges in processing Sentinel data, such as boundary regions with missing values in Sentinel-1 and the filtering of Sentinel-2 scenes based on ROI cloud coverage. Full article
(This article belongs to the Special Issue Sentinel Analysis Ready Data (Sentinel ARD))
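One of the automated steps mentioned above, filtering Sentinel-2 scenes by ROI cloud coverage, reduces to a per-scene cloud-fraction threshold. A hypothetical NumPy sketch (function name and threshold are illustrative, not the framework's API):

```python
import numpy as np

def filter_scenes(scene_ids, cloud_masks, max_cloud_fraction=0.2):
    """Keep only scenes whose cloud fraction inside the ROI is at or
    below the threshold. cloud_masks: (T, H, W) boolean stack aligned
    with scene_ids, True where a pixel is classified as cloud."""
    fractions = cloud_masks.reshape(len(cloud_masks), -1).mean(axis=1)
    keep = fractions <= max_cloud_fraction
    kept = [s for s, k in zip(scene_ids, keep) if k]
    return kept, fractions

masks = np.zeros((3, 4, 4), dtype=bool)
masks[1, :2] = True                     # scene "s1" is 50% cloudy
kept, frac = filter_scenes(["s0", "s1", "s2"], masks)
```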

22 pages, 9958 KiB  
Article
Detection of Invasive Black Locust (Robinia pseudoacacia) in Small Woody Features Using Spatiotemporal Compositing of Sentinel-2 Data
by Tomáš Rusňák, Andrej Halabuk, Ľuboš Halada, Hubert Hilbert and Katarína Gerhátová
Remote Sens. 2022, 14(4), 971; https://doi.org/10.3390/rs14040971 - 16 Feb 2022
Cited by 11 | Viewed by 3018
Abstract
Recognition of invasive species and their distribution is key for managing and protecting native species within both natural and man-made ecosystems. Small woody features (SWF) represent fragmented patches or narrow linear tree features that are of high importance in intensively utilized agricultural landscapes. Simultaneously, they frequently serve as expansion pathways for invasive species such as black locust. In this study, Sentinel-2 products, combined with spatiotemporal compositing approaches, are used to address the challenge of broad-area black locust mapping at a high granularity. This is accomplished by conducting a comprehensive analysis of the classification performance of various compositing approaches and multitemporal classification settings throughout four vegetation seasons. The annual, seasonal (bi-monthly), and monthly median values of cloud-masked Sentinel-2 reflectance products are aggregated and stacked into varied time-series datasets per given year. The random forest algorithm is trained, and the output classification maps are validated, using field-based reference datasets across the Danubian lowlands (Slovakia). The main results of the study proved the usefulness of spatiotemporal compositing of Sentinel-2 products for mapping black locust in small woody features across a wide area. In particular, temporally aggregated monthly composites stacked into seasonal time-series datasets yielded consistently high overall accuracies ranging from 89.10% to 91.47%, with balanced producer’s and user’s accuracies for each year’s annual series. We presume that a similar approach could be used for broader-scale distribution mapping of other species, assuming they are spectrally or phenologically distinctive, as is often the case for many invasive species. Full article
(This article belongs to the Special Issue Remote Sensing of Invasive Species)
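The monthly median compositing used above can be sketched in a few lines: cloud-masked observations become NaN, and a per-pixel `nanmedian` is taken over each month's stack. An illustrative NumPy sketch, not the authors' code:

```python
import numpy as np

def monthly_median_composite(stack, cloud_masks):
    """stack: (T, H, W) reflectances for one month; cloud_masks: (T, H, W)
    boolean. Cloudy observations are excluded via NaN before the median,
    so each output pixel is the median of its clear observations only."""
    masked = np.where(cloud_masks, np.nan, stack)
    return np.nanmedian(masked, axis=0)

# Six synthetic acquisitions of a 2x2 tile; the first date is fully cloudy.
stack = np.arange(24, dtype=float).reshape(6, 2, 2) / 24.0
masks = np.zeros((6, 2, 2), dtype=bool)
masks[0] = True
comp = monthly_median_composite(stack, masks)
```

The median (rather than the mean) makes the composite robust to residual undetected clouds and shadows, which is why it is a common choice for such stacks.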

22 pages, 7328 KiB  
Article
A Novel Strategy to Reconstruct NDVI Time-Series with High Temporal Resolution from MODIS Multi-Temporal Composite Products
by Linglin Zeng, Brian D. Wardlow, Shun Hu, Xiang Zhang, Guoqing Zhou, Guozhang Peng, Daxiang Xiang, Rui Wang, Ran Meng and Weixiong Wu
Remote Sens. 2021, 13(7), 1397; https://doi.org/10.3390/rs13071397 - 5 Apr 2021
Cited by 30 | Viewed by 7135
Abstract
Vegetation index (VI) data derived from satellite imagery play a vital role in monitoring land surface vegetation and its dynamics. Due to the excessive noise (e.g., cloud cover, atmospheric contamination) in daily VI data, temporal compositing methods are commonly used to produce composite data that minimize the negative influence of noise over a given compositing time interval. However, VI time series with high temporal resolution are preferred for many applications, such as vegetation phenology and land-change detection. This study presents a novel strategy named DAVIR-MUTCOP (DAily Vegetation Index Reconstruction based on MUlti-Temporal COmposite Products) for normalized difference vegetation index (NDVI) time-series reconstruction with high temporal resolution. The core of the DAVIR-MUTCOP method is to combine the advantages of both original daily and temporally composited products, selecting more high-quality daily observations through the temporal variation of temporally corrected composite data. The DAVIR-MUTCOP method was applied to reconstruct high-quality NDVI time series using MODIS multi-temporal products in two study areas in the continental United States (CONUS): three field experimental sites near Mead, Nebraska from 2001 to 2012 and forty-six AmeriFlux sites evenly distributed across CONUS from 2006 to 2010. In these two study areas, the DAVIR-MUTCOP method was also compared to several commonly used methods, i.e., the Harmonic Analysis of Time-Series (HANTS) method using original daily observations, Savitzky–Golay (SG) filtering using daily observations with cloud mask products as auxiliary data, and SG filtering using temporally corrected composite data. The results showed that the DAVIR-MUTCOP method significantly improved the temporal resolution of the reconstructed NDVI time series. 
It performed best in reconstructing NDVI time series across time and space (coefficient of determination R2 = 0.93–0.94 between reconstructed NDVI and ground-observed LAI). The DAVIR-MUTCOP method also remained the most robust and accurate as the filtering parameter changed (R2 = 0.99–1.00, bias = 0.001, root mean square error (RMSE) = 0.020). Only MODIS data were used in this study; nevertheless, the DAVIR-MUTCOP method offers a universal and promising way to reconstruct daily time series of other VIs or from other operational sensors, e.g., AVHRR and VIIRS. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

17 pages, 7775 KiB  
Article
Combination of Landsat 8 OLI and Sentinel-1 SAR Time-Series Data for Mapping Paddy Fields in Parts of West and Central Java Provinces, Indonesia
by Sanjiwana Arjasakusuma, Sandiaga Swahyu Kusuma, Raihan Rafif, Siti Saringatin and Pramaditya Wicaksono
ISPRS Int. J. Geo-Inf. 2020, 9(11), 663; https://doi.org/10.3390/ijgi9110663 - 4 Nov 2020
Cited by 23 | Viewed by 4924
Abstract
The rise of Google Earth Engine, a cloud computing platform for spatial data, has unlocked seamless integration for multi-sensor and multi-temporal analysis, which is useful for the identification of land-cover classes based on their temporal characteristics. Our study aims to employ temporal patterns from monthly-median Sentinel-1 (S1) C-band synthetic aperture radar data and cloud-filled monthly spectral indices, i.e., Normalized Difference Vegetation Index (NDVI), Modified Normalized Difference Water Index (MNDWI), and Normalized Difference Built-up Index (NDBI), from Landsat 8 (L8) OLI for mapping rice cropland areas in the northern part of Central Java Province, Indonesia. The harmonic function was used to fill the cloud and cloud-masked values in the spectral indices from Landsat 8 data, and smile Random Forests (RF) and Classification And Regression Trees (CART) algorithms were used to map rice cropland areas using a combination of monthly S1 and monthly harmonic L8 spectral indices. An additional terrain variable, the Terrain Roughness Index (TRI) from the SRTM dataset, was also included in the analysis. Our results demonstrated that RF models with 50 (RF50) and 80 (RF80) trees yielded better accuracy for mapping the extent of paddy fields, with user accuracies of 85.65% (RF50) and 85.75% (RF80), producer accuracies of 93.48% (RF50) and 91.63% (RF80), and overall accuracies of 92.47% (RF50) and 92.10% (RF80), respectively, while CART yielded a user accuracy of only 84.83% and a producer accuracy of 80.86%. The variable importance in both the RF50 and RF80 models showed that vertical transmit and horizontal receive (VH) polarization and harmonic-fitted NDVI were among the top five most important variables, and the variables representing February, April, June, and December contributed most to the RF model. 
The detection of VH and NDVI as the top variables which contributed up to 51% of the Random Forest model indicated the importance of the multi-sensor combination for the identification of paddy fields. Full article
(This article belongs to the Special Issue Earth Observation and GIScience for Agricultural Applications)
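The harmonic gap-filling step described above (fitting a periodic function to clear observations and evaluating it at cloud-masked dates) can be sketched with a first-order harmonic and ordinary least squares. This is a generic illustration; the study's actual harmonic model on Google Earth Engine may use more terms:

```python
import numpy as np

def harmonic_fill(doy, values, mask, period=365.0):
    """Fit a0 + a1*cos(wt) + b1*sin(wt) to the clear observations and
    use the fitted curve to replace cloud-masked values.
    doy: day-of-year per observation; mask: True where cloud-masked."""
    w = 2 * np.pi * doy / period
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    coef, *_ = np.linalg.lstsq(X[~mask], values[~mask], rcond=None)
    fitted = X @ coef
    return np.where(mask, fitted, values)

doy = np.arange(15, 366, 30, dtype=float)            # 12 monthly samples
true = 0.4 + 0.3 * np.sin(2 * np.pi * doy / 365.0)   # synthetic NDVI cycle
mask = np.zeros(12, dtype=bool)
mask[[3, 7]] = True                                  # two cloud-masked months
obs = true.copy()
obs[mask] = np.nan                                   # cloud-contaminated
filled = harmonic_fill(doy, obs, mask)
```

Because the synthetic signal here lies exactly in the model space, the filled values recover the true curve; real NDVI needs a rich enough harmonic basis for a comparable fit.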

19 pages, 4669 KiB  
Article
Gated Convolutional Networks for Cloud Removal From Bi-Temporal Remote Sensing Images
by Peiyu Dai, Shunping Ji and Yongjun Zhang
Remote Sens. 2020, 12(20), 3427; https://doi.org/10.3390/rs12203427 - 19 Oct 2020
Cited by 24 | Viewed by 4275
Abstract
Pixels of clouds and cloud shadows in a remote sensing image impact image quality, image interpretation, and subsequent applications. In this paper, we propose a novel deep-learning-based cloud removal method that automatically reconstructs the invalid pixels with auxiliary information from multi-temporal images. Our method’s innovation lies in its feature extraction and loss functions, which reside in a novel gated convolutional network (GCN) instead of a series of common convolutions. It takes the current cloudy image, a recent cloudless image, and the mask of clouds as input, without requiring external training samples, to realize a self-training process that uses the clean pixels in the bi-temporal images as natural training samples. In our feature extraction, gated convolutional layers are introduced for the first time to discriminate cloudy pixels from clean pixels, making up for a common convolution layer’s inability to do so. Our multi-level constrained joint loss function, which consists of an image-level loss, a feature-level loss, and a total variation loss, achieves local and global consistency in both shallow and deep levels of features. The total variation loss is introduced into the deep-learning-based cloud removal task for the first time to eliminate the color and texture discontinuity around the cloud outlines being repaired. On the WHU cloud dataset, with diverse land cover scenes and different imaging conditions, our experimental results demonstrated that our method consistently reconstructed the cloud and cloud-shadow pixels in various remote sensing images and outperformed several mainstream deep-learning-based methods and a conventional method on every indicator by a large margin. Full article
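The gating mechanism named above can be illustrated generically: a feature branch and a gate branch are convolved separately, and the sigmoid-activated gate modulates the features pixel by pixel, letting the network down-weight cloudy inputs. A minimal single-channel NumPy sketch with 'valid' padding; this is the generic gated-convolution idea, not the paper's network:

```python
import numpy as np

def conv2d(x, kernel):
    # 'valid' 2D convolution (cross-correlation) via sliding windows.
    win = np.lib.stride_tricks.sliding_window_view(x, kernel.shape)
    return np.einsum('ijkl,kl->ij', win, kernel)

def gated_conv(x, w_feat, w_gate):
    """Gated convolution: the gate branch learns a soft validity mask
    in [0, 1], so invalid (e.g. cloudy) regions can be suppressed
    instead of being treated like clean pixels."""
    feature = np.tanh(conv2d(x, w_feat))
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, w_gate)))   # sigmoid gate
    return feature * gate

rng = np.random.default_rng(3)
x = rng.normal(size=(8, 8))
y = gated_conv(x, rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))
```

Where the gate saturates near zero, the feature response is zeroed out regardless of the input, which is exactly the behavior an ordinary convolution cannot learn per pixel.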
17 pages, 8268 KiB  
Article
Automated Cloud and Cloud-Shadow Masking for Landsat 8 Using Multitemporal Images in a Variety of Environments
by Danang Surya Candra, Stuart Phinn and Peter Scarth
Remote Sens. 2019, 11(17), 2060; https://doi.org/10.3390/rs11172060 - 2 Sep 2019
Cited by 21 | Viewed by 7030
Abstract
Landsat 8 images have been widely used for many applications, but cloud and cloud-shadow cover remain an issue. In this study, multitemporal cloud masking (MCM), originally designed to detect cloud and cloud-shadow in Landsat 8 imagery of tropical environments, was improved for application in sub-tropical environments, with the greatest improvement in cloud masking. We added a haze optimized transformation (HOT) test and a thermal band to the previous MCM algorithm to improve its detection of haze, thin cirrus cloud, and thick cloud, and we improved its cloud-shadow detection by adding a blue band. In the visual assessment, the algorithm accurately detected thick cloud, haze, thin cirrus cloud, and cloud-shadow. In the statistical assessment, the average user's and producer's accuracies of the cloud masking results across the different land covers in the selected areas were 98.03% and 98.98%, respectively, while those of the cloud-shadow masking results were 97.97% and 96.66%. Compared with the Landsat 8 cloud cover assessment (L8 CCA) algorithm, MCM achieved better accuracies, especially in cloud-shadow masking. Our preliminary tests show that the new MCM algorithm can detect cloud and cloud-shadow in Landsat 8 imagery across a variety of environments. Full article
(This article belongs to the Special Issue Remote Sensing: 10th Anniversary)
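The user's and producer's accuracies reported above follow the standard confusion-matrix definitions for a single class (here, "cloud" or "cloud-shadow"). A minimal sketch, with pixel counts as hypothetical inputs:

```python
def masking_accuracies(tp, fp, fn):
    """User's and producer's accuracy for one mask class.

    tp: pixels correctly masked as the class (e.g. cloud)
    fp: clear pixels wrongly masked as the class (commission errors)
    fn: true class pixels missed by the mask (omission errors)
    """
    users = tp / (tp + fp)      # of pixels the mask flags, fraction correct
    producers = tp / (tp + fn)  # of true class pixels, fraction detected
    return users, producers
```

User's accuracy penalizes commission (over-masking clear land), while producer's accuracy penalizes omission (missed clouds); a good mask needs both to be high, as in the ~97–99% figures above.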
24 pages, 6307 KiB  
Article
Reconstructing Cloud Contaminated Pixels Using Spatiotemporal Covariance Functions and Multitemporal Hyperspectral Imagery
by Yoseline Angel, Rasmus Houborg and Matthew F. McCabe
Remote Sens. 2019, 11(10), 1145; https://doi.org/10.3390/rs11101145 - 14 May 2019
Cited by 5 | Viewed by 4503
Abstract
One of the major challenges in optical remote sensing is the presence of clouds, which imposes a hard constraint on the use of multispectral or hyperspectral satellite imagery for Earth observation. While some studies have used interpolation models to remove cloud-affected data, relatively few aim at restoration using multi-temporal reference images. This paper proposes not only the use of image time series, but also a geostatistical model that exploits the spatiotemporal correlation between them to fill the cloud-related gaps. Using Hyperion hyperspectral images, we demonstrate the capacity to reconstruct cloud-affected pixels and predict their underlying surface reflectance values. To do this, cloudy pixels were masked and a parametric family of non-separable covariance functions was automatically fitted using a composite likelihood estimator. A subset of cloud-free pixels per scene was used to perform kriging interpolation and to predict the spectral reflectance of each cloud-affected pixel. The approach was evaluated on a benchmark dataset of cloud-free pixels with a synthetic cloud superimposed. An overall root mean square error (RMSE) of between 0.5% and 16% reflectance was achieved, corresponding to a relative root mean square error (rRMSE) of between 0.2% and 7.5%. The spectral similarity between the predicted and reference reflectance signatures, described by a mean spectral angle (MSA) of between 1° and 11°, demonstrates the spatial and spectral coherence of the predictions. The approach provides an efficient spatiotemporal interpolation framework for cloud removal, gap-filling, and denoising in remotely sensed datasets. Full article
(This article belongs to the Special Issue Remote Sensing Image Restoration and Reconstruction)
21 pages, 8939 KiB  
Technical Note
FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond
by David Frantz
Remote Sens. 2019, 11(9), 1124; https://doi.org/10.3390/rs11091124 - 10 May 2019
Cited by 231 | Viewed by 19662
Abstract
Ever-increasing data volumes of satellite constellations call for multi-sensor analysis ready data (ARD) that relieve users of the burden of costly preprocessing. This paper describes the scientific software FORCE (Framework for Operational Radiometric Correction for Environmental monitoring), an ‘all-in-one’ solution for the mass processing and analysis of Landsat and Sentinel-2 image archives. FORCE is increasingly used to support a wide range of scientific and operational applications that require large-area coverage as well as deep and dense temporal information. FORCE is capable of generating Level 2 ARD and higher-level products. Level 2 processing comprises state-of-the-art cloud masking and radiometric correction (including corrections that go beyond the ARD specification, e.g., topographic or bidirectional reflectance distribution function correction). It further includes data cubing, i.e., spatial reorganization of the data into a non-overlapping grid system for more efficient and simpler ARD usage. However, the usage barrier of Level 2 ARD remains high due to the considerable data volume and the spatial incompleteness of valid observations (e.g., clouds). Thus, the higher-level modules temporally condense multi-temporal ARD into manageable amounts of spatially seamless data. For data mining purposes, per-pixel statistics of clear-sky data availability can be generated. FORCE provides functionality for compiling best-available-pixel composites and spectral temporal metrics, which utilize all available observations within a defined temporal window through selection and statistical aggregation techniques, respectively. These products are immediately fit for common Earth observation analysis workflows, such as machine-learning-based image classification, and are thus referred to as highly analysis ready data (hARD).
FORCE also provides data fusion functionality to improve the spatial resolution of (i) coarse continuous fields, such as land surface phenology, and (ii) Landsat ARD, using Sentinel-2 ARD as prediction targets. Quality-controlled time series preparation and analysis functionality is provided, with a number of aggregation and interpolation techniques, land surface phenology retrieval, and change and trend analyses. Outputs of this module can be ingested directly into a geographic information system (GIS) to address research questions without any further processing, i.e., hARD+. FORCE is open-source software under the terms of the GNU General Public License (version 3 or later) and can be downloaded from http://force.feut.de. Full article
(This article belongs to the Section Remote Sensing Image Processing)
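Best-available-pixel compositing, as described above, selects for each pixel one valid observation from a temporal window according to a scoring function. A minimal single-criterion sketch (distance to a hypothetical target day of year; FORCE's actual parametric scoring combines several weighted terms such as cloud distance and haze):

```python
def best_pixel_composite(observations, target_doy):
    """Per pixel, keep the valid observation closest to a target day of year.

    observations: list of (doy, image) pairs, where image maps a pixel id
                  to a value (None marks a cloudy/invalid pixel).
    Returns a dict mapping pixel id to the best available value.
    """
    composite = {}
    best_dist = {}
    for doy, image in observations:
        for pid, value in image.items():
            if value is None:  # cloudy pixel: not a candidate
                continue
            dist = abs(doy - target_doy)
            if pid not in composite or dist < best_dist[pid]:
                composite[pid] = value
                best_dist[pid] = dist
    return composite
```

Because each pixel is chosen independently, the composite is spatially seamless even when no single acquisition in the window is cloud-free, which is the property that makes such products "highly analysis ready".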