Search Results (28)

Search Parameters:
Keywords = Fmask

19 pages, 7913 KB  
Article
Integrated Satellite Driven Machine Learning Framework for Precision Irrigation and Sustainable Cotton Production
by Syeda Faiza Nasim and Muhammad Khurram
Algorithms 2025, 18(12), 740; https://doi.org/10.3390/a18120740 - 25 Nov 2025
Viewed by 710
Abstract
This study develops a satellite-driven machine-learning framework to predict optimal irrigation scheduling for cotton cultivation in Rahim Yar Khan, Pakistan. The framework leverages multispectral satellite imagery (Landsat 8 and Sentinel-2), GIS-derived climatic and land-surface data, and real-time weather information obtained from a freely accessible weather API, eliminating the need for ground-based IoT sensors. The proposed algorithm integrates FAO-56 evapotranspiration principles and water stress indices to forecast irrigation requirements across the four critical growth stages of cotton. Supervised learning algorithms, including Gradient Boosting, Random Forest, and Logistic Regression, were evaluated; Random Forest achieved the best predictive accuracy, with a coefficient of determination (R2) exceeding 0.92 and a root mean square error (RMSE) of approximately 415 kg/ha, owing to its capacity to handle complex, non-linear relations and feature interactions. The model was trained on data collected during 2023 and 2024, and its predictions for 2025 were validated against observed irrigation requirements. The proposed model enabled an average 12–18% reduction in total water application between 2023 and 2025, optimizing water use without compromising crop yield. By merging satellite imagery, GIS data, and weather API information, this approach provides a cost-effective, scalable solution for precise, stage-specific irrigation scheduling. Cloud masking was performed by applying the built-in QA bands with the Fmask algorithm to eliminate cloud and cloud-shadow pixels from the satellite imagery. Time series were generated by compositing monthly median values to ensure consistency across images.
The novelty of this study lies in its end-to-end integration framework, its application under semi-arid agronomic conditions, and its empirical validation of directly associating multi-source data with FAO-guided irrigation scheduling to support sustainable cotton cultivation. The quantification of irrigation capacity, i.e., determining how much water to apply, is identified as a focus for future research. Full article
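The masking-and-compositing step this abstract describes, screening cloud and cloud-shadow pixels and then taking monthly medians, can be sketched independently of any platform. A minimal numpy sketch, assuming the per-scene cloud/shadow mask has already been derived (the paper uses QA bands with Fmask); function and variable names are illustrative:

```python
import numpy as np

def monthly_median_composite(stack, cloud_mask, months):
    """Composite a reflectance time series into per-month medians.

    stack      : (T, H, W) float array of reflectance values
    cloud_mask : (T, H, W) bool array, True where cloud / cloud shadow
    months     : (T,) int array giving the calendar month of each scene
    Returns {month: (H, W) composite}; masked observations are excluded,
    and pixels that are cloudy in every scene of a month come out NaN.
    """
    masked = np.where(cloud_mask, np.nan, stack)   # drop contaminated pixels
    return {m: np.nanmedian(masked[months == m], axis=0)
            for m in np.unique(months)}
```

Using NaN plus `nanmedian` keeps the composite robust: any number of scenes can be cloudy at a pixel without biasing the median of the remaining clear observations.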

21 pages, 20756 KB  
Article
A Novel Method for Cloud and Cloud Shadow Detection Based on the Maximum and Minimum Values of Sentinel-2 Time Series Images
by Kewen Liang, Gang Yang, Yangyan Zuo, Jiahui Chen, Weiwei Sun, Xiangchao Meng and Binjie Chen
Remote Sens. 2024, 16(8), 1392; https://doi.org/10.3390/rs16081392 - 15 Apr 2024
Cited by 15 | Viewed by 6545
Abstract
Automatic and accurate detection of clouds and cloud shadows is a critical aspect of optical remote sensing image preprocessing. This paper provides a time series maximum and minimum mask method (TSMM) for cloud and cloud shadow detection. Firstly, the Cloud Score+S2_HARMONIZED (CS+S2) is employed as a preliminary mask for clouds and cloud shadows. Secondly, we calculate the ratio of the maximum and sub-maximum values of the blue band in the time series, as well as the ratio of the minimum and sub-minimum values of the near-infrared band in the time series, to eliminate noise from the time series data. Finally, the maximum value of the clear blue band and the minimum value of the near-infrared band after noise removal are employed for cloud and cloud shadow detection, respectively. A national and a global dataset were used to validate the TSMM, and it was quantitatively compared against five other advanced methods or products. When clouds and cloud shadows are detected simultaneously, in the S2ccs dataset, the overall accuracy (OA) reaches 0.93 and the F1 score reaches 0.85. Compared with the most advanced CS+S2, there are increases of 3% and 9%, respectively. In the CloudSEN12 dataset, compared with CS+S2, the producer’s accuracy (PA) and F1 score show increases of 10% and 4%, respectively. Additionally, when applied to Landsat-8 images, TSMM outperforms Fmask, demonstrating its strong generalization capability. Full article
(This article belongs to the Special Issue Satellite-Based Cloud Climatologies)
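The TSMM screening step reduces, per pixel, to comparing the extreme value of a time series against its runner-up. A rough per-pixel sketch; the 1.5 ratio threshold is an illustrative assumption, not the paper's calibrated value:

```python
import numpy as np

def tsmm_extremes(blue, nir, ratio_thresh=1.5):
    """Noise-screened time-series extremes in the spirit of TSMM.

    blue, nir : (T,) per-pixel time series (T >= 2, positive reflectances)
    A blue-band maximum far above its sub-maximum is treated as residual
    cloud noise and discarded; a NIR minimum far below its sub-minimum is
    treated as residual shadow noise.
    """
    b = np.sort(blue)[::-1]   # b[0] = max, b[1] = sub-max
    n = np.sort(nir)          # n[0] = min, n[1] = sub-min
    blue_max = b[1] if b[0] / b[1] > ratio_thresh else b[0]
    nir_min = n[1] if n[1] / n[0] > ratio_thresh else n[0]
    return blue_max, nir_min
```

The screened blue maximum then serves as the clear-sky bright reference for cloud detection, and the screened NIR minimum as the dark reference for shadow detection, as the abstract describes.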

44 pages, 18613 KB  
Article
Improved Landsat Operational Land Imager (OLI) Cloud and Shadow Detection with the Learning Attention Network Algorithm (LANA)
by Hankui K. Zhang, Dong Luo and David P. Roy
Remote Sens. 2024, 16(8), 1321; https://doi.org/10.3390/rs16081321 - 9 Apr 2024
Cited by 10 | Viewed by 4323
Abstract
Landsat cloud and cloud shadow detection has a long heritage based on the application of empirical spectral tests to single image pixels, including the Landsat product Fmask algorithm, which uses spectral tests applied to optical and thermal bands to detect clouds and uses the sun-sensor-cloud geometry to detect shadows. Since the Fmask was developed, convolutional neural network (CNN) algorithms, and in particular U-Net algorithms (a type of CNN with a U-shaped network structure), have been developed and are applied to pixels in square patches to take advantage of both spatial and spectral information. The purpose of this study was to develop and assess a new U-Net algorithm that classifies Landsat 8/9 Operational Land Imager (OLI) pixels with higher accuracy than the Fmask algorithm. The algorithm, termed the Learning Attention Network Algorithm (LANA), is a form of U-Net but with an additional attention mechanism (a type of network structure) that, unlike conventional U-Net, uses more spatial pixel information across each image patch. The LANA was trained using 16,861 512 × 512 30 m pixel annotated Landsat 8 OLI patches extracted from 27 images and 69 image subsets that are publicly available and have been used by others for cloud mask algorithm development and assessment. The annotated data were manually refined to improve the annotation and were supplemented with another four annotated images selected to include clear, completely cloudy, and developed land images. The LANA classifies image pixels as either clear, thin cloud, cloud, or cloud shadow. To evaluate the classification accuracy, five annotated Landsat 8 OLI images (composed of >205 million 30 m pixels) were classified, and the results compared with the Fmask and a publicly available U-Net model (U-Net Wieland). The LANA had a 78% overall classification accuracy considering cloud, thin cloud, cloud shadow, and clear classes. 
As the LANA, Fmask, and U-Net Wieland algorithms have different class legends, their classification results were harmonized to the same three common classes: cloud, cloud shadow, and clear. Considering these three classes, the LANA had the highest (89%) overall accuracy, followed by Fmask (86%), and then U-Net Wieland (85%). The LANA had the highest F1-scores for cloud (0.92), cloud shadow (0.57), and clear (0.89), and the other two algorithms had lower F1-scores, particularly for cloud (Fmask 0.90, U-Net Wieland 0.88) and cloud shadow (Fmask 0.45, U-Net Wieland 0.52). In addition, a time-series evaluation was undertaken to examine the prevalence of undetected clouds and cloud shadows (i.e., omission errors). The band-specific temporal smoothness index (TSIλ) was applied to a year of Landsat 8 OLI surface reflectance observations after discarding pixel observations labelled as cloud or cloud shadow. This was undertaken independently at each gridded pixel location in four 5000 × 5000 30 m pixel Landsat analysis-ready data (ARD) tiles. The TSIλ results broadly reflected the classification accuracy results and indicated that the LANA had the smallest cloud and cloud shadow omission errors, whereas the Fmask had the greatest cloud omission error and the second greatest cloud shadow omission error. Detailed visual examination, true color image examples and classification results are included and confirm these findings. The TSIλ results also highlight the need for algorithm developers to undertake product quality assessment in addition to accuracy assessment. The LANA model, training and evaluation data, and application codes are publicly available for other researchers. Full article
(This article belongs to the Special Issue Deep Learning on the Landsat Archive)

16 pages, 3123 KB  
Article
Exploring Urban XCO2 Patterns Using PRISMA Satellite: A Case Study in Shanghai
by Yu Wu, Yanan Xie and Rui Wang
Atmosphere 2024, 15(3), 246; https://doi.org/10.3390/atmos15030246 - 20 Feb 2024
Cited by 1 | Viewed by 2194
Abstract
As global warming intensifies, monitoring carbon dioxide (CO2) has increasingly become a focal point of research. Investigating urban XCO2 emission systems holds paramount importance, given the pivotal role of cities as major contributors to carbon emissions. Consequently, this study centers on urban locales, employing Shanghai as a case study for a comprehensive evaluation of regional XCO2 levels. We utilized high spatial resolution imagery from the PRecursore IperSpettrale della Missione Applicativa (PRISMA) satellite to conduct an XCO2 assessment over the Baoshan District with a 30 m spatial resolution from April 2021 to October 2022. Our XCO2 analysis was conducted in two steps. Firstly, we conducted a sensitivity analysis on key parameters in the inversion process, where cloud cover severely interfered with inversion accuracy. Therefore, we developed the Fmask 4.0 cloud removal and iterative maximum a posteriori differential optical absorption spectroscopy (FIMAP-DOAS) algorithm. This novel integration eliminated cloud interference during the inversion process, achieving high-precision CO2 detection in the region. Secondly, we compared the XCO2 of the region with Level-2 data from carbon monitoring satellites such as OCO-2. The comparison results showed a strong consistency, with a root mean squared error (RMSE) of 0.75 ppm for Shanghai XCO2 data obtained from the PRISMA satellite compared to OCO-2 Level-2 data and an RMSE of 1.49 ppm compared to OCO-3. This study successfully established a high-accuracy and high-spatial-resolution XCO2 satellite monitoring system for the Shanghai area. The efficacy of the FIMAP-DOAS algorithm has been demonstrated in CO2 monitoring and inversion within urban environments, with potential applicability to other cities. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)

19 pages, 10172 KB  
Article
Reconstructing Snow Cover under Clouds and Cloud Shadows by Combining Sentinel-2 and Landsat 8 Images in a Mountainous Region
by Yanli Zhang, Changqing Ye, Ruirui Yang and Kegong Li
Remote Sens. 2024, 16(1), 188; https://doi.org/10.3390/rs16010188 - 2 Jan 2024
Cited by 8 | Viewed by 4005
Abstract
Snow cover is a sensitive indicator of global climate change, and optical images are an important means for monitoring its spatiotemporal changes. Due to the high reflectivity, rapid change, and intense spatial heterogeneity of mountainous snow cover, Sentinel-2 (S2) and Landsat 8 (L8) satellite imagery with both high spatial resolution and spectral resolution have become major data sources. However, optical sensors are more susceptible to cloud cover, and the two satellite images have significant spectral differences, making it challenging to obtain snow cover beneath clouds and cloud shadows (CCSs). Based on our previously published approach for snow reconstruction on S2 images using the Google Earth Engine (GEE), this study introduces two main innovations to reconstruct snow cover: (1) combining S2 and L8 images and choosing different CCS detection methods, and (2) improving the cloud shadow detection algorithm by considering land cover types, thus further improving the mountainous-snow-monitoring ability. The Babao River Basin of the Qilian Mountains in China is chosen as the study area; 399 scenes of S2 and 35 scenes of L8 are selected to analyze the spatiotemporal variations of snow cover from September 2019 to August 2022 in GEE. The results indicate that the snow reconstruction accuracies of both images are relatively high, and the overall accuracies for S2 and L8 are 80.74% and 88.81%, respectively. According to the time-series analysis of three hydrological years, it is found that there is a marked difference in the spatial distribution of snow cover in different hydrological years within the basin, with fluctuations observed overall. Full article

26 pages, 25329 KB  
Article
A Hybrid Algorithm with Swin Transformer and Convolution for Cloud Detection
by Chengjuan Gong, Tengfei Long, Ranyu Yin, Weili Jiao and Guizhou Wang
Remote Sens. 2023, 15(21), 5264; https://doi.org/10.3390/rs15215264 - 6 Nov 2023
Cited by 24 | Viewed by 5382
Abstract
Cloud detection is critical in remote sensing image processing, and convolutional neural networks (CNNs) have significantly advanced this field. However, traditional CNNs primarily focus on extracting local features, which can be challenging for cloud detection due to the variability in the size, shape, and boundaries of clouds. To address this limitation, we propose a hybrid Swin transformer–CNN cloud detection (STCCD) network that combines the strengths of both architectures. The STCCD network employs a novel dual-stream encoder that integrates Swin transformer and CNN blocks. Swin transformers can capture global context features more effectively than traditional CNNs, while CNNs excel at extracting local features. The two streams are fused via a fusion coupling module (FCM) to produce a richer representation of the input image. To further enhance the network’s ability in extracting cloud features, we incorporate a feature fusion module based on the attention mechanism (FFMAM) and an aggregation multiscale feature module (AMSFM). The FFMAM selectively merges global and local features based on their importance, while the AMSFM aggregates feature maps from different spatial scales to obtain a more comprehensive representation of the cloud mask. We evaluated the STCCD network on three challenging cloud detection datasets (GF1-WHU, SPARCS, and AIR-CD), as well as the L8-Biome dataset to assess its generalization capability. The results show that the STCCD network outperformed other state-of-the-art methods on all datasets. Notably, the STCCD model, trained on only four bands (visible and near-infrared) of the GF1-WHU dataset, outperformed the official Landsat-8 Fmask algorithm in the L8-Biome dataset, which uses additional bands (shortwave infrared, cirrus, and thermal). Full article

20 pages, 6712 KB  
Article
Improving Cloud Detection in WFV Images Onboard Chinese GF-1/6 Satellite
by Hao Chang, Xin Fan, Lianzhi Huo and Changmiao Hu
Remote Sens. 2023, 15(21), 5229; https://doi.org/10.3390/rs15215229 - 3 Nov 2023
Cited by 5 | Viewed by 1901
Abstract
We have developed an algorithm for cloud detection in Chinese GF-1/6 satellite multispectral images, allowing us to generate cloud masks at the pixel level. Because the Chinese GF-1/6 satellites lack shortwave infrared and thermal infrared bands, bright land surfaces and snow are frequently misclassified as clouds. To mitigate this issue, we utilized MODIS standard snow data products as reference data to determine the presence of snow cover in the images. Subsequently, our algorithm was used to correct misclassifications in snow-covered mountainous regions. The experimental area selected was the perpetually snow-covered western mountains of the United States. The results indicate accurate labeling of extensive snow-covered areas, with an overall cloud detection accuracy of over 91%. Our algorithm enables users to easily determine whether pixels are affected by cloud contamination, effectively improving the accuracy of data-quality annotation and greatly facilitating subsequent data retrieval and utilization. Full article
(This article belongs to the Section Atmospheric Remote Sensing)

26 pages, 12137 KB  
Article
GF-1/6 Satellite Pixel-by-Pixel Quality Tagging Algorithm
by Xin Fan, Hao Chang, Lianzhi Huo and Changmiao Hu
Remote Sens. 2023, 15(7), 1955; https://doi.org/10.3390/rs15071955 - 6 Apr 2023
Cited by 3 | Viewed by 2701
Abstract
The Landsat and Sentinel series satellites provide their own quality tagging data products, marking the source image pixel by pixel with several specific semantic categories, generally including cloud, cloud shadow, land, water body, and snow. Due to the lack of mid-wave and thermal infrared bands, the accuracy of traditional cloud detection algorithms is unstable on Chinese Gaofen-1/6 (GF-1/6) data, and it is challenging to distinguish clouds from snow. In order to produce GF-1/6 satellite pixel-by-pixel quality tagging data products, this paper builds a training sample set of more than 100,000 image pairs, primarily from Sentinel-2 satellite data. We then adopt the Swin Transformer model, with its self-attention mechanism, for GF-1/6 satellite image quality tagging. Experiments show that the model's overall accuracy reaches the level of Fmask v4.6 with more than 10,000 training samples, and that the model can correctly distinguish between cloud and snow. Our GF-1/6 quality tagging algorithm meets the requirements of the "Analysis Ready Data (ARD) Technology Research for Domestic Satellite" project. Full article
(This article belongs to the Special Issue Gaofen 16m Analysis Ready Data)

25 pages, 9999 KB  
Article
CD_HIEFNet: Cloud Detection Network Using Haze Optimized Transformation Index and Edge Feature for Optical Remote Sensing Imagery
by Qing Guo, Lianzi Tong, Xudong Yao, Yewei Wu and Guangtong Wan
Remote Sens. 2022, 14(15), 3701; https://doi.org/10.3390/rs14153701 - 2 Aug 2022
Cited by 7 | Viewed by 4487
Abstract
Clouds in optical remote sensing images are an unavoidable existence that greatly affect the utilization of these images. Therefore, accurate and effective cloud detection is an indispensable step in image preprocessing. To date, most researchers have tried to use deep-learning methods for cloud detection. However, these studies generally use computer vision technology to improve the performances of the models, without considering the unique spectral feature information in remote sensing images. Moreover, due to the complex and changeable shapes of clouds, accurate cloud-edge detection is also a difficult problem. In order to solve these problems, we propose a deep-learning cloud detection network that uses the haze-optimized transformation (HOT) index and the edge feature extraction module for optical remote sensing images (CD_HIEFNet). In our model, the HOT index feature image is used to add the unique spectral feature information from clouds into the network for accurate detection, and the edge feature extraction (EFE) module is employed to refine cloud edges. In addition, we use ConvNeXt as the backbone network, and we improved the decoder to enhance the details of the detection results. We validated CD_HIEFNet using the Landsat-8 (L8) Biome dataset and compared it with the Fmask, FCN8s, U-Net, SegNet, DeepLabv3+ and CloudNet methods. The experimental results showed that our model has excellent performance, even in complex cloud scenarios. Moreover, according to the extended experimental results for the other L8 dataset and the Gaofen-1 data, CD_HIEFNet has strong performance in terms of robustness and generalization, thus helping to provide new ideas for cloud detection-related work. Full article

24 pages, 5753 KB  
Article
A New Clustering Method to Generate Training Samples for Supervised Monitoring of Long-Term Water Surface Dynamics Using Landsat Data through Google Earth Engine
by Alireza Taheri Dehkordi, Mohammad Javad Valadan Zoej, Hani Ghasemi, Ebrahim Ghaderpour and Quazi K. Hassan
Sustainability 2022, 14(13), 8046; https://doi.org/10.3390/su14138046 - 30 Jun 2022
Cited by 47 | Viewed by 5218
Abstract
Water resources are vital to the survival of living organisms and contribute substantially to the development of various sectors. Climatic diversity, topographic conditions, and the uneven distribution of surface water flows have made reservoirs one of the primary water supply resources in Iran. This study used Landsat 5, 7, and 8 data in Google Earth Engine (GEE) for supervised monitoring of surface water dynamics in the reservoirs of eight Iranian dams (Karkheh, Karun-1, Karun-3, Karun-4, Dez, Upper Gotvand, Zayanderud, and Golpayegan). A novel automated method was proposed for providing training samples based on an iterative K-means refinement procedure. The proposed method used the Function of Mask (Fmask) initial water map to generate final training samples. Then, Support Vector Machine (SVM) and Random Forest (RF) models were trained with the generated samples and used for water mapping. Results demonstrated the satisfactory performance of the RF model trained with samples from the proposed refinement procedure (overall accuracy of 95.13%) compared to the RF model trained directly with samples from the Fmask initial water map (overall accuracy of 78.91%), indicating the proposed approach's success in producing training samples. The performance of three feature sets was also evaluated: Tasseled-Cap (TC) achieved higher overall accuracies than Spectral Indices (SI) and Principal Component Transformation of Image Bands (PCA), while simultaneous use of all features (TC, SI, and PCA) boosted classification overall accuracy further. Moreover, long-term surface water changes showed a downward trend in five study sites. Comparing the latest year's water surface area (2021) with the maximum long-term extent showed that all study sites experienced a significant reduction (16–62%). Analysis of climatic factors' impacts also revealed that precipitation (0.51 ≤ R2 ≤ 0.79) was more strongly correlated with water surface area changes than temperature (0.22 ≤ R2 ≤ 0.39). Full article
(This article belongs to the Special Issue Artificial Intelligence and Sustainability)
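The sample-refinement idea, keeping only candidate training pixels whose unsupervised cluster agrees with the Fmask-derived water label, can be approximated as follows. This is a loose analogue of the paper's iterative procedure, not its exact algorithm; the minimal two-cluster K-means, the majority-vote rule, and all parameters are illustrative:

```python
import numpy as np

def _kmeans2(X, iters=20):
    """Minimal two-cluster K-means with farthest-point initialization."""
    far = np.linalg.norm(X - X[0], axis=1).argmax()
    centers = np.array([X[0], X[far]], dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def refine_training_samples(X, initial_water, rounds=3):
    """Iteratively discard samples whose K-means cluster disagrees with
    the initial (Fmask-derived) water / non-water label."""
    keep = np.ones(len(X), dtype=bool)
    for _ in range(rounds):
        labels = _kmeans2(X[keep])
        water = initial_water[keep].astype(bool)
        # A cluster counts as "water" if most of its members are labelled water.
        cluster_is_water = np.array([
            water[labels == j].mean() > 0.5 if np.any(labels == j) else False
            for j in (0, 1)])
        idx = np.flatnonzero(keep)
        keep[idx[cluster_is_water[labels] != water]] = False
    return keep
```

The surviving samples would then train the supervised SVM/RF classifiers, which is where the reported accuracy gain over training directly on the raw Fmask water map comes from.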

21 pages, 18547 KB  
Article
A Self-Trained Model for Cloud, Shadow and Snow Detection in Sentinel-2 Images of Snow- and Ice-Covered Regions
by Kamal Gopikrishnan Nambiar, Veniamin I. Morgenshtern, Philipp Hochreuther, Thorsten Seehaus and Matthias Holger Braun
Remote Sens. 2022, 14(8), 1825; https://doi.org/10.3390/rs14081825 - 10 Apr 2022
Cited by 15 | Viewed by 5907
Abstract
Screening clouds, shadows, and snow is a critical pre-processing step in many remote-sensing data processing pipelines that operate on satellite image data from polar and high mountain regions. We observe that the results of the state-of-the-art Fmask algorithm are not very accurate in polar and high mountain regions. Given the unavailability of large, labeled Sentinel-2 training datasets, we present a multi-stage self-training approach that trains a model to perform semantic segmentation on Sentinel-2 L1C images using the noisy Fmask labels for training and a small human-labeled dataset for validation. At each stage of the proposed iterative framework, we use a larger network architecture in comparison to the previous stage and train a new model. The trained model at each stage is then used to generate new training labels for a bigger dataset, which are used for training the model in the next stage. We select the best model during training in each stage by evaluating the multi-class segmentation metric, mean Intersection over Union (mIoU), on the small human-labeled validation dataset. This effectively helps to correct the noisy labels. Our model achieved an overall accuracy of 93% compared to the Fmask 4 and Sen2Cor 2.8, which achieved 75% and 76%, respectively. We believe our approach can also be adapted for other remote-sensing applications for training deep-learning models with imprecise labels. Full article
(This article belongs to the Special Issue Remote Sensing in Glaciology and Cryosphere Research)

21 pages, 12185 KB  
Article
An Improved Fmask Method for Cloud Detection in GF-6 WFV Based on Spectral-Contextual Information
by Xiaomeng Yang, Lin Sun, Xinming Tang, Bo Ai, Hanwen Xu and Zhen Wen
Remote Sens. 2021, 13(23), 4936; https://doi.org/10.3390/rs13234936 - 4 Dec 2021
Cited by 3 | Viewed by 4217
Abstract
GF-6 is the first optical remote sensing satellite for precision agriculture observations in China, and accurate identification of clouds in GF-6 imagery helps improve data availability. However, due to the narrow band range of GF-6, Fmask version 3.2 for Landsat is not suitable for it. Hence, this paper proposes an improved Fmask based on spectral-contextual information to resolve the inapplicability of Fmask version 3.2 to GF-6. The improvements fall into the following six aspects. The shortwave infrared (SWIR) band in the "Basic Test" is replaced by the blue band. The threshold in the original "HOT Test" is modified based on comprehensive consideration of fog and thin clouds. Bare soil and rock are detected via the relationship between the green and near-infrared (NIR) bands. Bright buildings are detected via the relationship between the upper and lower quartiles of the blue and red bands. Stratus with high humidity and fog_W (fog over water) are distinguished by the ratio of the blue and red edge position 1 bands. Temperature probability for land is replaced by the HOT-based cloud probability (LHOT), and SWIR in the brightness probability is replaced by NIR. The average cloud-pixel accuracy (true positive rate, TPR) of the improved Fmask is 95.51%. Full article
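For context, the HOT ("haze optimized transformation") test that the paper modifies flags a pixel as potential cloud in the original Fmask formulation when Blue − 0.5·Red − 0.08 > 0 on top-of-atmosphere reflectance. A one-line sketch; the 0.08 offset is the original Landsat value, standing in for the paper's adjusted GF-6 threshold:

```python
def hot_test(blue, red, offset=0.08):
    """Fmask-style Haze Optimized Transformation test on TOA reflectance:
    a pixel is a potential cloud when blue - 0.5 * red - offset > 0.
    The 0.08 offset is the original Landsat value; the paper tunes this
    threshold for GF-6 to better separate fog and thin cloud."""
    return blue - 0.5 * red - offset > 0
```

The test exploits the fact that clear-sky blue and red reflectances are highly correlated over most land surfaces, while haze and thin cloud raise the blue band disproportionately.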

22 pages, 10818 KB  
Article
KappaMask: AI-Based Cloudmask Processor for Sentinel-2
by Marharyta Domnich, Indrek Sünter, Heido Trofimov, Olga Wold, Fariha Harun, Anton Kostiukhin, Mihkel Järveoja, Mihkel Veske, Tanel Tamm, Kaupo Voormansik, Aire Olesk, Valentina Boccia, Nicolas Longepe and Enrico Giuseppe Cadau
Remote Sens. 2021, 13(20), 4100; https://doi.org/10.3390/rs13204100 - 13 Oct 2021
Cited by 38 | Viewed by 8566
Abstract
The Copernicus Sentinel-2 mission operated by the European Space Agency (ESA) has provided comprehensive and continuous multi-spectral observations of the Earth's entire land surface since mid-2015. Clouds and cloud shadows significantly decrease the usability of optical satellite data, especially in agricultural applications; therefore, an accurate and reliable cloud mask is mandatory for effective EO optical data exploitation. During the last few years, image segmentation techniques have developed rapidly through the exploitation of neural network capabilities. With this perspective, the KappaMask processor, using a U-Net architecture, was developed to generate a classification mask over northern latitudes into the following classes: clear, cloud shadow, semi-transparent cloud (thin clouds), cloud, and invalid. For training, a Sentinel-2 dataset covering the Northern European terrestrial area was labelled. KappaMask provides a 10 m classification mask for Sentinel-2 Level-2A (L2A) and Level-1C (L1C) products. The total dice coefficient on the test dataset, which was not seen by the model at any stage, was 80% for KappaMask L2A and 76% for KappaMask L1C over the clear, cloud shadow, semi-transparent and cloud classes. A comparison with rule-based cloud mask methods was then performed on the same test dataset: Sen2Cor reached a 59% dice coefficient for clear, cloud shadow, semi-transparent and cloud classes, Fmask reached 61% for clear, cloud shadow and cloud classes, and Maja reached 51% for clear and cloud classes. The closest machine-learning open-source cloud classification mask, S2cloudless, had a 63% dice coefficient while providing only cloud and clear classes; KappaMask L2A, with a more complex classification schema, outperformed S2cloudless by 17%. Full article
(This article belongs to the Special Issue Computer Vision and Deep Learning for Remote Sensing Applications)
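The per-class dice coefficients reported above (e.g. 80% for KappaMask L2A versus 59% for Sen2Cor) compare predicted and reference masks class by class. A minimal numpy sketch of that metric, using hypothetical toy masks with the paper's class labels (not the KappaMask evaluation code itself):

```python
import numpy as np

def dice_coefficient(pred, ref, cls):
    """Per-class dice: 2*|P ∩ R| / (|P| + |R|) for one class label."""
    p = (pred == cls)
    r = (ref == cls)
    inter = np.logical_and(p, r).sum()
    denom = p.sum() + r.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty -> perfect

# toy 4x4 masks; labels: 0=clear, 1=cloud shadow, 2=semi-transparent, 3=cloud
ref  = np.array([[0, 0, 3, 3], [0, 1, 3, 3], [1, 1, 2, 2], [0, 0, 2, 2]])
pred = np.array([[0, 0, 3, 3], [0, 0, 3, 3], [1, 1, 2, 2], [0, 0, 0, 2]])
for c, name in enumerate(["clear", "cloud shadow", "semi-transparent", "cloud"]):
    print(name, round(dice_coefficient(pred, ref, c), 3))
```

A mean over the four class-wise scores gives a single figure comparable to the percentages quoted in the abstract.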

24 pages, 8401 KB  
Article
Light-Weight Cloud Detection Network for Optical Remote Sensing Images with Attention-Based DeeplabV3+ Architecture
by Xudong Yao, Qing Guo and An Li
Remote Sens. 2021, 13(18), 3617; https://doi.org/10.3390/rs13183617 - 10 Sep 2021
Cited by 34 | Viewed by 4628
Abstract
Clouds in optical remote sensing images cause spectral information to change or be lost, which affects image analysis and application. Therefore, cloud detection is of great significance. However, current methods have several shortcomings: limited extendibility caused by dependence on information from multiple bands, poor robustness caused by reliance on manually determined thresholds, and limited accuracy, especially for thin clouds or complex scenes, caused by low-level manual features. Considering these shortcomings and the efficiency requirements of practical applications, we propose a light-weight deep learning cloud detection network based on the DeeplabV3+ architecture and a channel attention module (CD-AttDLV3+), using only the most common red–green–blue and near-infrared bands. In the CD-AttDLV3+ architecture, an optimized MobileNetV2 backbone network is used to reduce the number of parameters and calculations. Atrous spatial pyramid pooling effectively reduces the information loss caused by multiple down-samplings while extracting multi-scale features. CD-AttDLV3+ concatenates more low-level features than DeeplabV3+ to improve the cloud boundary quality. The channel attention module is introduced to strengthen the learning of important channels and improve the training efficiency. Moreover, the loss function is improved to alleviate the imbalance of samples. For the Landsat-8 Biome set, CD-AttDLV3+ achieves the highest accuracy in comparison with other methods, including Fmask, SVM, and SegNet, especially for distinguishing clouds from bright surfaces and detecting light-transmitting thin clouds. It can also perform well on other Landsat-8 and Sentinel-2 images. Experimental results indicate that CD-AttDLV3+ is robust, with a high accuracy and extendibility. Full article
(This article belongs to the Special Issue Deep Learning-Based Cloud Detection for Remote Sensing Images)
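The channel attention module mentioned above reweights feature-map channels via a global pooling "squeeze" followed by a small bottleneck and a sigmoid gate. A minimal numpy sketch of that pattern, with hypothetical random weights standing in for learned parameters (the paper's actual module and weights are not reproduced here):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention on a (C, H, W) feature map."""
    squeeze = feat.mean(axis=(1, 2))                 # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)           # reduction FC + ReLU -> (C/r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # expansion FC + sigmoid -> (C,)
    return feat * gate[:, None, None]                # rescale each channel in (0, 1)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))   # toy feature map: 8 channels, 4x4 spatial
w1 = rng.standard_normal((2, 8))        # reduction ratio 4: 8 -> 2
w2 = rng.standard_normal((8, 2))        # expansion: 2 -> 8
out = channel_attention(feat, w1, w2)
print(out.shape)
```

Because the gate lies in (0, 1) per channel, the module can only attenuate channels, which is what lets training emphasize the informative ones.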

22 pages, 12961 KB  
Article
Cloud Detection Using an Ensemble of Pixel-Based Machine Learning Models Incorporating Unsupervised Classification
by Xiaohe Yu and David J. Lary
Remote Sens. 2021, 13(16), 3289; https://doi.org/10.3390/rs13163289 - 20 Aug 2021
Cited by 7 | Viewed by 3678
Abstract
Remote sensing imagery, such as that provided by the United States Geological Survey (USGS) Landsat satellites, has been widely used to study environmental protection, hazard analysis, and urban planning for decades. Clouds are a constant challenge for such imagery and, if not handled correctly, can cause a variety of issues for a wide range of remote sensing analyses. Typically, cloud mask algorithms use the entire image; in this study we present an ensemble of different pixel-based approaches to cloud pixel modeling. Based on four training subsets with a selection of different input features, 12 machine learning models were created. We evaluated these models using the cropped LC8-Biome cloud validation dataset. As a comparison, Fmask was also applied to the cropped scene Biome dataset. One goal of this research is to explore a machine learning modeling approach that uses as small a training data sample as possible but still provides an accurate model. Overall, the model trained on the sample subset (1.3% of the total training samples) that includes unsupervised Self-Organizing Map classification results as an input feature has the best performance. The approach achieves 98.57% overall accuracy, 1.18% cloud omission error, and 0.93% cloud commission error on the 88 cropped test images. By comparison to Fmask 4.0, this model improves the accuracy by 10.12% and reduces the cloud omission error by 6.39%. Furthermore, using an additional eight independent validation images that were not sampled in model training, the model trained on the second largest subset with an additional five features has the highest overall accuracy at 86.35%, with 12.48% cloud omission error and 7.96% cloud commission error. This model’s overall correctness increased by 3.26%, and the cloud omission error decreased by 1.28% compared to Fmask 4.0. 
The machine learning cloud classification models discussed in this paper achieve very good performance while utilizing only a small portion of the total training pixels available. We showed that, for a pixel-based cloud classification model, because each scene has unique spectral characteristics, including a small portion of example pixels from each of the sub-regions in a scene can improve model accuracy significantly. Full article
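The pixel-based approach above classifies each pixel from its spectral feature vector rather than from whole-image context. A minimal numpy sketch of that idea, using a hypothetical nearest-centroid classifier on toy reflectance values (much simpler than the paper's 12-model ensemble, shown only to illustrate per-pixel classification):

```python
import numpy as np

def fit_centroids(pixels, labels):
    """Mean spectral signature per class from labelled training pixels.

    pixels: (N, B) array of N pixels with B band values; labels: (N,) class ids.
    """
    classes = np.unique(labels)
    centroids = np.stack([pixels[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def classify(pixels, classes, centroids):
    """Assign each pixel to the class with the nearest centroid (Euclidean)."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# toy training pixels: bright cloud (label 1) vs dark clear land (label 0)
train = np.array([[0.90, 0.80, 0.85], [0.88, 0.82, 0.90],
                  [0.10, 0.12, 0.08], [0.05, 0.10, 0.07]])
labels = np.array([1, 1, 0, 0])
classes, centroids = fit_centroids(train, labels)
pred = classify(np.array([[0.95, 0.90, 0.90], [0.05, 0.08, 0.10]]), classes, centroids)
print(pred)
```

Sampling even a few such labelled pixels from each sub-region of a scene gives the classifier examples of that scene's spectral characteristics, which is the effect the abstract describes.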
