Technical Note

Evaluation of Cloud Mask Performance of KOMPSAT-3 Top-of-Atmosphere Reflectance Incorporating Deeplabv3+ with Resnet 101 Model

Suhwan Kim, Doehee Han, Yejin Lee, Eunsu Doo, Han Oh, Jonghan Ko and Jongmin Yeom
1 Department of Earth and Environmental Sciences, Jeonbuk National University, Jeonju 54896, Republic of Korea
2 Korea Aerospace Research Institute, 169-84, Gwahak-ro, Yuseong, Daejeon 34133, Republic of Korea
3 Applied Plant Science, Chonnam National University, 77 Yongbong-ro, Buk-gu, Gwangju 61186, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(8), 4339; https://doi.org/10.3390/app15084339
Submission received: 10 February 2025 / Revised: 28 March 2025 / Accepted: 8 April 2025 / Published: 14 April 2025

Abstract

Cloud detection is a crucial task in satellite remote sensing, influencing applications such as vegetation indices, land use analysis, and renewable energy estimation. This study evaluates the performance of cloud masks generated for KOMPSAT-3 and KOMPSAT-3A imagery using the DeepLabV3+ deep learning model with a ResNet-101 backbone. To overcome the limitations of digital number (DN) data, Top-of-Atmosphere (TOA) reflectance was computed and used for model training. Comparative analysis between the DN and TOA reflectance inputs demonstrated significant improvements when the TOA correction was applied. The TOA reflectance combined with the NDVI channel achieved the highest precision (69.33%) and F1-score (59.27%), along with a mean Intersection over Union (mIoU) of 46.5%, outperforming all the other configurations. In particular, this combination was highly effective in detecting dense clouds, achieving an mIoU of 48.12%, while the Near-Infrared, green, and red (NGR) combination performed best in identifying cloud shadows with an mIoU of 23.32%. These findings demonstrate the crucial role of radiometric correction, optimal channel selection, and the integration of additional synthetic indices in enhancing deep learning-based cloud detection, providing a foundation for the development of more refined cloud masking techniques in the future.

1. Introduction

In the field of satellite remote sensing, detecting cloud masks, including shadows, is crucial. This is not only vital for analyzing atmospheric cloud properties [1,2] but also significantly impacts higher-value-added products related to the land surface, such as vegetation indices, surface reflectance, water, and land use [3,4,5]. Clouds and their shadows limit the potential applications of satellite imagery in extracting accurate land surface information [6]. In addition, cloud detection is also key to accurately estimating renewable solar energy outputs [7,8]. With the advent of the New Space era, characterized by the launch and utilization of numerous satellites, research on extracting clouds and cloud shadows from satellite imagery has increased markedly [9]. Therefore, improving our ability to accurately detect clouds and cloud shadows is crucial for ensuring the quality of satellite products and for successfully achieving objectives in various fields of application, including renewable solar energy potential.
Early cloud detection algorithms focused on simple thresholding techniques applied to multispectral satellite bands, known as rule-based classification [10,11]. These methods utilize the physical properties of clouds in different bands, particularly the visible and infrared channels. With advances in satellite technology, researchers have developed rule-based cloud detection algorithms that leverage higher spatiotemporal resolution and more spectral bands. The MODIS cloud mask product [12] utilizes a variety of tests, including brightness temperature differences, reflectance thresholds, and spatial variability, to enhance cloud detection accuracy. The most well-known cloud detection method for Landsat satellite imagery is Fmask [6,13,14], which achieves high accuracy in most circumstances, even for mountainous areas and bright land surfaces. Despite these advancements, detecting thin cirrus clouds and distinguishing cloud shadows remain challenging. Furthermore, Fmask requires auxiliary data such as elevation, along with imagery from all channels, and its computational process can take several minutes, making it difficult to apply to large-scale satellite datasets.
Traditional rule-based thresholding techniques for cloud detection have evolved over time, demonstrating their potential to improve accuracy through key spectral indices. For example, Ge et al. (2023) utilized TOA reflectance, NDVI, and several derived values to classify clouds, applying fixed thresholds to distinguish clouds from vegetation and other land surface features [15]. This approach effectively reduced the false detections caused by similar spectral characteristics. However, these methods still rely on manually set thresholds, which limits their adaptability under varying atmospheric and surface conditions. As the need for robust detection systems grew, machine learning and deep learning methods gained traction [16,17,18], improving cloud detection by learning complex patterns and better distinguishing clouds, shadows, and the Earth’s surface. Despite these improvements, these methods require large amounts of labeled training data.
Building on the potential of deep learning, Hughes and Hayes (2014) developed a neural-network-based model that utilizes both the spectral band data and pre-computed spatial features from the images to identify clouds and their shadows [19]. Weiland et al. (2019) extended this by developing a model that uses convolutional neural networks (CNNs) [20], which can process 3D inputs and simultaneously integrate both spatial and spectral information. This approach has demonstrated reliable performance in cloud detection [16,21]. As a result, numerous cloud detection training datasets have been established to support the development of cloud detection models for various satellites. These datasets vary in their data processing levels: some are derived from Top-of-Atmosphere (TOA) reflectance or radiance, while others use digital numbers (DNs) from multispectral bands. For instance, cloud detection training datasets from satellites such as Sentinel-2A and 2B (Sen2Cor) [10], Landsat 8 (L8-Biome, L8-SPARCS, L8-95 Cloud [22]; https://landsat.usgs.gov/landsat-8-cloud-cover-assessment-validation-data (accessed on 7 April 2025)), and PeruSat [23] are provided as TOA reflectance or radiance, which are radiometrically corrected products. In contrast, other studies provide training datasets in DN format for deep learning-based cloud detection models.
For instance, Kim et al. (2022) evaluated the performance of cloud detection techniques using a modified DeepLabV3+ model with heterogeneous sensors [21], such as the Landsat and KOMPSAT-3 satellites. They found that, despite improvements to the DeepLabV3+ network, which allowed the model to extract and learn channel features from two distinct training datasets, the Jaccard index for clouds remained low due to differences in the datasets. Specifically, the KOMPSAT-3 dataset was distributed in a DN format, while the Landsat L8-Biome training dataset consisted of radiometrically calibrated image products.
In other words, even though the deep learning model applied normalization, training on DN data from KOMPSAT-3 and applying the model to radiometrically calibrated Landsat data lowered accuracy. The KOMPSAT-3 data were used without additional radiometric correction because the Korea Aerospace Research Institute (KARI) provided the training datasets in DN format rather than as TOA reflectance or radiance. Although the normalization of the matchup dataset was expected to reduce the discrepancies between the datasets, it proved insufficient in this case. The absence of standardized training datasets has led to a lack of clear criteria for determining the optimal data format for training deep learning models. While several studies have demonstrated that deep learning approaches using either DNs or TOA reflectance provide more reliable cloud masking results than physical or empirical models, inconsistencies in data formats can affect model training and performance evaluations.
In conventional cloud mask algorithms, the input for cloud detection should ideally be TOA reflectance rather than DN values to reduce radiometric differences between images [6,24,25]. However, radiometrically calibrated satellite data have not always been applied in deep learning algorithms, as mentioned above. The existing deep learning models are considered robust to pixel value variations due to their complex architecture; however, the impact of converting DNs to TOA reflectance, or of using various derived indices, on cloud segmentation accuracy remains uncertain. A systematic evaluation is therefore required to establish whether these transformations are necessary for reliable cloud segmentation. Additionally, the lack of defined criteria for developing training datasets in DN or radiometrically calibrated formats from satellite data may affect other fields, such as building segmentation, road detection, and change detection. Therefore, further analysis is needed to evaluate the performance of deep learning models when radiometric corrections are applied or omitted; such an analysis would also support the development of deep learning-based cloud detection models.
In this study, we tested which training dataset is more effective when applying deep learning to Korea Multi-Purpose Satellite (KOMPSAT) imagery, specifically by using TOA reflectance and derived indices during model training rather than the default DN values. As mentioned, KARI distributed the cloud mask training dataset as DN images with corresponding label files. Therefore, we newly calculated the TOA reflectance from the KOMPSAT-3 and 3A DN training dataset using the solar geometry and ESUN of the selected images. In addition, we also tested the performance of the deep learning algorithm while varying the channel composites for the DN and TOA reflectance inputs, because the combination of satellite channels affects the accuracy of the cloud mask results [26].
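For illustration, the DN-to-TOA conversion follows the standard radiometric formula; the sketch below shows one way to implement it, where the gain, offset, ESUN, and geometry values are placeholders rather than actual KOMPSAT-3 calibration metadata.

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, solar_zenith_deg, earth_sun_dist_au):
    """Convert one band of DN values to TOA reflectance.

    Radiance L = gain * DN + offset, then
    rho_TOA = (pi * L * d^2) / (ESUN * cos(solar zenith angle)).
    Gain, offset, and ESUN come from the image metadata; the values used
    below are placeholders, not KOMPSAT-3 calibration coefficients.
    """
    radiance = gain * dn.astype(np.float32) + offset
    cos_sza = np.cos(np.deg2rad(solar_zenith_deg))
    return (np.pi * radiance * earth_sun_dist_au ** 2) / (esun * cos_sza)

# Illustrative call on a dummy 14-bit DN patch.
dn_band = np.random.randint(0, 2 ** 14, size=(800, 800), dtype=np.uint16)
toa_red = dn_to_toa_reflectance(dn_band, gain=0.02, offset=0.0, esun=1846.0,
                                solar_zenith_deg=35.0, earth_sun_dist_au=0.9917)
```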

2. Material and Methods

2.1. Overview of KOMPSAT-3 and KOMPSAT-3A Satellite Datasets

The Korea Aerospace Research Institute (KARI, Daejeon, Republic of Korea) developed and launched the high-resolution KOMPSAT-3 and KOMPSAT-3A Earth observation satellites in 2012 and 2015, respectively. KOMPSAT-3 is equipped with 2.8 m multispectral channels and a 0.7 m panchromatic (PAN) channel from its optical camera (AEISS; Advanced Earth Imaging Sensor System), providing precise optical images of the Earth's surface that are used for purposes such as environmental monitoring, disaster response, and resource management. KOMPSAT-3A, an upgraded version of KOMPSAT-3, features the existing AEISS sensor along with a newly developed infrared (IR) sensor, enabling Earth observation at night or in cloudy weather conditions. Table 1 presents the spectral channels and specifications of KOMPSAT-3 and 3A. In this study, we used the cloud detection training datasets that KARI refined and constructed from KOMPSAT-3 and KOMPSAT-3A imagery, which are publicly available on AI Hub (https://www.aihub.or.kr/ (accessed on 7 April 2025)).

2.2. KOMPSAT-3 and KOMPSAT-3A Dataset Construction Process

The cloud mask dataset used in this study was provided by the Korea Aerospace Research Institute (KARI) and was designed using a multi-step approach to ensure high reliability and accuracy. Initially, a Land Use Land Cover (LULC) map provided by the European Space Agency (ESA) was utilized to achieve balanced diversity across various land cover types in the KOMPSAT-3 and KOMPSAT-3A datasets. Subsequently, the research team gathered cloud detection training data from satellite imagery that the KARI multi-purpose satellites acquired between 2019 and 2022, focusing on images with cloud cover between 20% and 50%. This selection was made because images with cloud cover exceeding 50% can adversely affect image registration quality, making images with 20% to 50% cloud cover more suitable for dataset construction.
Approximately 20–30% of the data were initially labeled automatically using artificial intelligence (AI) techniques. After this, a team of annotation workers, who had undergone specialized training, performed a second round of manual annotation following KARI's established criteria. The data preprocessing team and the AI model development team then further refined the dataset through an iterative, collaborative review process. This rigorous process ensured the dataset's reliability, making it suitable for cloud mask development and validation for KOMPSAT-3 and KOMPSAT-3A.
In this study, the cloud detection dataset consists of multi-purpose satellite imagery, including red, green, blue (RGB) and Near-Infrared (NIR) channel data (as shown in Table 1), along with previously generated cloud detection labels. The cloud mask labels in the dataset are categorized into four classes: clear sky, thick cloud, thin cloud, and cloud shadow. In Figure 1d, the colors used to represent these classes are as follows: red for thick clouds, green for thin clouds, yellow for cloud shadows, and black for clear sky. Clear sky refers to cloud-free areas where the surface is fully visible. Thick clouds are characterized by high reflectance values in optical imagery, completely obscuring surface features beneath them. Thin clouds are commonly observed surrounding thick clouds, where surface features such as buildings or vehicles remain partially visible but appear more blurred than in clear sky regions. Additionally, pixels affected by aerosols or fog, which are often difficult to distinguish from clouds in optical imagery, are also classified as thin clouds. Cloud shadow is strictly defined as the shadow cast by clouds, where sunlight is blocked, creating darkened areas on the surface. Shadows caused by terrain or other obstructions are not included in this category.
In this study, the multi-purpose satellite images used were provided by the Korea Aerospace Research Institute (KARI) and consisted of a total of 177 images. Of these, we used 130 images for training, 16 for validation, and the remaining 31 for testing. The size of each image used for training and validation was approximately 6000 × 6000 pixels, with slight variations. To address these size inconsistencies, we divided the satellite images into patches of 800 × 800 pixels, with a 400-pixel overlap between patches to minimize discontinuities (Figure 2). This overlap ensures seamless transitions between patches during model application, helping the model generalize effectively across diverse conditions.
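A minimal sketch of this tiling step is shown below; it assumes scene dimensions that are multiples of the stride and omits the edge handling that a production pipeline would need.

```python
import numpy as np

def extract_patches(image, label, patch=800, stride=400):
    """Slice an image/label pair into 800 x 800 patches with a 400-pixel overlap."""
    pairs = []
    height, width = image.shape[:2]
    for top in range(0, height - patch + 1, stride):
        for left in range(0, width - patch + 1, stride):
            pairs.append((image[top:top + patch, left:left + patch],
                          label[top:top + patch, left:left + patch]))
    return pairs

# Example on a dummy 6000 x 6000 four-channel scene and its label mask.
scene = np.zeros((6000, 6000, 4), dtype=np.uint16)
mask = np.zeros((6000, 6000), dtype=np.uint8)
patch_pairs = extract_patches(scene, mask)   # 14 x 14 = 196 patches for this size
```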
As a result, the number of training images increased from 130 to 22,890, and the validation images increased from 16 to 2772, bringing the total to 25,662 images. However, of these, 12,504 images contained only clear sky without any clouds or shadows, leading to an imbalance in the dataset. To improve model performance, data augmentation techniques such as horizontal and vertical flipping, rotation, and brightness adjustment were applied during preprocessing to balance the number of samples across different classes (Figure 3).
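The augmentation pipeline could be expressed, for example, with the albumentations library as sketched below; the shift, scale, rotation, brightness, and contrast limits are illustrative, since only the application probabilities (Figure 3) are reported.

```python
import albumentations as A

# Augmentation pipeline mirroring the probabilities listed in Figure 3;
# the limit values are illustrative, not the authors' reported settings.
train_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=30, p=0.5),
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.0, p=0.5),  # brightness only
    A.RandomBrightnessContrast(brightness_limit=0.0, contrast_limit=0.2, p=0.3),  # contrast only
])

# Applied jointly to a patch and its label so both are transformed identically:
# augmented = train_aug(image=image_patch, mask=label_patch)
```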
In this study, we evaluated various satellite channel combinations for digital numbers (DNs) and Top-of-Atmosphere (TOA) reflectance to assess the accuracy of each dataset. Figure 4 shows sampled images from KOMPSAT with different channel combinations, including RGB, NGR, and RGB with NIR. Additionally, we tested RGBN combined with the NDVI (Normalized Difference Vegetation Index). TOA-based channel combinations were also evaluated, specifically TOA RGB and TOA RGBN with the NDVI, to determine their performance in deep learning models.
For the TOA analysis, only the channel combinations that achieved the highest accuracy in the DN-based evaluations, namely RGBN and RGBN with the NDVI, were further assessed. The model performance for each channel combination was then compared to identify the most effective setup for accurate data interpretation.
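As a simple illustration of how the NDVI channel is appended, the sketch below assumes an (H, W, 4) array ordered as red, green, blue, NIR; the actual channel ordering of the KOMPSAT patches may differ.

```python
import numpy as np

def add_ndvi_channel(rgbn, eps=1e-6):
    """Append NDVI = (NIR - Red) / (NIR + Red) as a fifth channel to an (H, W, 4) RGBN array."""
    red = rgbn[..., 0].astype(np.float32)
    nir = rgbn[..., 3].astype(np.float32)
    ndvi = (nir - red) / (nir + red + eps)          # eps avoids division by zero
    return np.dstack([rgbn.astype(np.float32), ndvi])
```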

2.3. The Network Structure of the Deep Learning Model

Clouds in satellite imagery have amorphous, unclear shapes that differ significantly from typical objects. These characteristics make cloud detection difficult and highlight the need for pixel-wise labeling, which is widely used in image segmentation [21,23]. Deep learning has significantly advanced cloud detection networks. Researchers have moved from early CNN-based segmentation methods to more sophisticated architectures, including ResNet-based and U-Net-based models, which leverage large parameter sets to effectively handle the complex nature of clouds [27,28]. Among these, advanced models like CPNet and DeepLabV3+ have shown notable performance [29,30]. CPNet, also known as Cloud-Net, builds on the U-Net framework and integrates ResNet to improve gradient flow and feature extraction. It uses an encoder–decoder architecture specifically designed for cloud detection [31].
In contrast, researchers widely use DeepLabV3+ because of its strong performance in image segmentation. It originally employs Xception as its encoder instead of ResNet, thereby reinforcing the concept that channel and spatial correlations can be considered separately [32]. However, in this experiment, the Xception module was less effective at capturing the amorphous and variable nature of cloud pixels, leading to the adoption of the deeper ResNet101 module as the encoder. ResNet101 utilizes residual learning through skip connections and a bottleneck architecture that applies 1 × 1, 3 × 3, and again 1 × 1 convolutions to effectively regulate the number of input channels, thereby reducing the parameter count and enhancing both computational efficiency and stability.
DeepLabV3+ employs an Atrous Spatial Pyramid Pooling (ASPP) module to effectively extract multi-scale features while preserving spatial resolution [30].
$$y_r(i, j) = \sum_{m}\sum_{n} x(i + r \cdot m,\ j + r \cdot n)\, w_r(m, n) \quad (1)$$
where the following hold:
  • $y_r(i, j)$: the output value at coordinate $(i, j)$ after applying atrous convolution;
  • $x(i + r \cdot m,\ j + r \cdot n)$: the pixel value in the input feature map, sampled at intervals defined by the atrous rate $r$;
  • $w_r(m, n)$: the weights of the convolutional filter (kernel);
  • $r$: the atrous rate, which determines the spacing between the sampled pixels;
  • $m, n$: the indices of the kernel dimensions.
For instance, setting $r = 1, 2, 4$ progressively expands the receptive field, enabling the network to capture both fine details and broader contextual information.
The ASPP module utilizes multiple parallel atrous convolutions with different dilation rates to capture features at various resolutions. In addition, a global context is captured via average pooling followed by a 1 × 1 convolution, as in Equation (2):
$$y_{\mathrm{pool}} = \mathrm{Conv}_{1\times 1}\big(\mathrm{AvgPool}(x)\big) \quad (2)$$
The model generates the final ASPP output by concatenating the outputs from atrous convolutions with different rates (e.g., $r = 1, 6, 12, 18$) along with $y_{\mathrm{pool}}$ and processing the result through another 1 × 1 convolution:
$$y_{\mathrm{ASPP}} = \mathrm{Conv}_{1\times 1}\big(\mathrm{Concat}(y_1, y_6, y_{12}, y_{18}, y_{\mathrm{pool}})\big) \quad (3)$$
DeepLabV3+ ResNet101’s encoder–decoder structure preserves positional information, which is crucial for semantic segmentation tasks. By aggregating multi-scale features, ASPP enables the network to interpret the image from diverse perspectives, making it particularly effective for detecting clouds of varying sizes and shapes in satellite imagery.
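A condensed PyTorch sketch of the ASPP block described by Equations (1)–(3) is given below; the channel sizes and dilation rates follow the values quoted in the text, but this is not the authors' exact implementation, which follows DeepLabV3+ [30].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions plus image-level pooling."""

    def __init__(self, in_ch=2048, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        # One branch per atrous rate; the rate-1 branch is a plain 1x1 convolution.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=1 if r == 1 else 3,
                      padding=0 if r == 1 else r, dilation=r, bias=False)
            for r in rates
        ])
        # Image-level pooling branch, Equation (2).
        self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(in_ch, out_ch, 1, bias=False))
        # Fusion of all branches, Equation (3).
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1, bias=False)

    def forward(self, x):
        size = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.pool(x), size=size, mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))
```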

2.4. Specifications of Hyperparameters

Originally introduced by Milletari et al. (2016) in their seminal work on semantic segmentation tasks, Dice Loss has proven to be an effective tool for tackling complex segmentation challenges [33]. In cloud segmentation tasks, class imbalance presents a significant hurdle, as the background (clear sky class) typically dominates the pixel distribution, vastly outnumbering the cloud regions. This imbalance often biases models toward over-predicting clear sky areas, yielding deceptively high accuracy metrics while failing to adequately detect clouds, which are the primary objects of interest. Dice Loss mitigates this issue by automatically balancing the foreground (clouds) and background (clear sky) without the need for manual class weight adjustments, a feature that distinguishes it from traditional approaches. This capability allows Dice Loss to outperform conventional loss functions, such as the multi-class logistic loss, particularly in imbalanced segmentation scenarios where minority classes require greater emphasis. Milletari et al. (2016) first demonstrated its effectiveness in medical image segmentation, a domain where similar class imbalance problems, such as distinguishing small regions of interest like tumors from large backgrounds, are prevalent [33]. Building on these findings, we extend the application of Dice Loss to cloud segmentation, anticipating comparable advantages in tackling class imbalance due to the structural similarities between these tasks. Recent studies further suggest that combining Dice Loss with pixel-weighting strategies can enhance its performance in satellite imagery, improving the delineation of cloud boundaries, an insight that reinforces its potential in this context. Dice Loss is mathematically derived from the Dice Similarity Coefficient, a metric that quantifies the overlap between the predicted segmentation and the ground truth, as expressed in Equation (4), making it particularly adept at optimizing for the precise segmentation of underrepresented classes like clouds. This derivation ensures that the loss function prioritizes overlap, offering a robust solution for improving detection in imbalanced datasets.
$$\mathrm{Dice\ Loss} = 1 - \frac{2\,\lvert GT \cap Pred \rvert}{\lvert GT \rvert + \lvert Pred \rvert} \quad (4)$$
  • GT: ground truth from labeled image;
  • Pred: predicted cloud detection image from model.
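A minimal soft Dice loss consistent with Equation (4) can be written as follows; the smoothing term and the per-class averaging are implementation choices that are not specified in the text.

```python
import torch

def dice_loss(pred_probs, target_onehot, eps=1e-6):
    """Soft multi-class Dice loss per Equation (4), averaged over classes.

    pred_probs:    (N, C, H, W) softmax probabilities from the model.
    target_onehot: (N, C, H, W) one-hot encoded ground-truth labels.
    """
    dims = (0, 2, 3)                                     # sum over batch and spatial dimensions
    intersection = (pred_probs * target_onehot).sum(dims)
    cardinality = pred_probs.sum(dims) + target_onehot.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()
```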
In addition, we determined the optimal settings for our deep learning model for cloud detection through extensive hyperparameter tuning, as shown in Table 2. Our selection of hyperparameters was guided by prior studies on cloud detection in satellite imagery, particularly those utilizing AI algorithms for KOMPSAT-3/3A satellite images [34]. Building upon these existing techniques, we incorporated experimental approaches to further refine the model. We utilized an Adam optimizer with a learning rate of 0.0003, which was adjusted by a scheduling factor of 0.5 to enhance convergence. To maintain consistent updates, the model incorporated a momentum of 0.937, and a weight decay of 0.0005 was applied to prevent overfitting. These parameters were carefully selected to optimize the model's performance in accurately detecting clouds.
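A possible optimizer and scheduler configuration reflecting Table 2 is sketched below; note that PyTorch's Adam exposes momentum only through its beta coefficients, so mapping the reported momentum of 0.937 to beta1 is an interpretation, and the scheduler type is likewise assumed.

```python
import torch

# Hypothetical setup reflecting Table 2; the model below is only a stand-in
# for DeepLabV3+ with a ResNet-101 backbone (5 input channels, 4 classes).
model = torch.nn.Conv2d(5, 4, kernel_size=3, padding=1)

# Adam has no explicit momentum argument, so beta1 = 0.937 is an interpretation
# of the reported momentum value.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4,
                             betas=(0.937, 0.999), weight_decay=5e-4)

# The scheduler type is not stated; ReduceLROnPlateau with factor 0.5 is one
# plausible reading of the "learning rate factor for scheduling".
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=5)

# After each validation epoch:
# scheduler.step(val_loss)
```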

2.5. The Performance Metrics of Cloud Detection

To evaluate the accuracy of cloud masks generated for KOMPSAT-3 and 3A imagery, this study applied a focused set of performance metrics to the test dataset, extending beyond basic accuracy to capture the nuanced aspects of segmentation quality. We calculated the selected metrics, namely accuracy, precision, recall, the F1-score, and the Intersection over Union (IoU), using the true-positive (TP), true-negative (TN), false-positive (FP), and false-negative (FN) values obtained from the pixel-wise comparisons. In this context, TP represents correctly identified cloud pixels, FP indicates non-cloud pixels misclassified as clouds, TN denotes correctly identified non-cloud pixels, and FN signifies cloud pixels missed by the model.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (5)$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad (6)$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \quad (7)$$
$$\mathrm{F1\text{-}Score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (8)$$
$$\mathrm{IoU} = \frac{TP}{TP + FP + FN} \quad (9)$$
These metrics collectively provide a robust framework for assessing the model’s performance in cloud segmentation. Accuracy, as shown in Equation (5), offers a baseline measure of the overall correctness across all pixels, while precision and recall, defined in Equations (6) and (7), respectively, break down the model’s ability to correctly classify cloud pixels and avoid errors. The F1-score, derived from Equation (8), integrates precision and recall into a single metric, emphasizing the balance between detecting clouds and minimizing false positives—a critical consideration for applications like meteorological analysis where both over- and under-detection can impact outcomes. In contrast, the IoU, as presented in Equation (9), evaluates the spatial overlap between the predicted and actual cloud regions, providing a geometric perspective on segmentation accuracy that is particularly relevant for tasks requiring precise cloud boundary delineation, such as atmospheric correction in satellite imagery processing. To extend this evaluation across the entire test dataset, the mean IoU (mIoU) was introduced, calculated as the average of the individual IoU scores across all the images, as shown in Equation (10).
$$\mathrm{mIoU} = \frac{1}{N}\sum_{i=1}^{N} \mathrm{IoU}_i \quad (10)$$
The mIoU complements the IoU by offering a holistic measure of the model’s consistency and reliability, capturing its ability to maintain effective cloud detection across the diverse imaging conditions and scene complexities inherent in KOMPSAT-3 and 3A data. By incorporating the mIoU, this study ensures a comprehensive assessment that not only evaluates the localized segmentation quality but also verifies the model’s generalizability.
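The per-class metrics and the mIoU of Equations (5)–(10) can be computed directly from the predicted and labeled masks, as in the sketch below.

```python
import numpy as np

def class_metrics(pred, truth, cls):
    """Pixel-wise precision, recall, F1-score, and IoU for one class, per Equations (5)-(9)."""
    tp = np.sum((pred == cls) & (truth == cls))
    fp = np.sum((pred == cls) & (truth != cls))
    fn = np.sum((pred != cls) & (truth == cls))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou

def mean_iou(preds, truths, cls):
    """Equation (10): average the per-image IoU of one class over the test set."""
    return float(np.mean([class_metrics(p, t, cls)[3] for p, t in zip(preds, truths)]))
```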

3. Results

3.1. Training and Validation Loss Analysis for Epochs

To evaluate the cloud segmentation model's performance across different channel inputs, this study analyzed the training and validation loss per epoch for the various channel combinations derived from the KOMPSAT-3 and 3A imagery. Figure 6 illustrates these loss curves, with the left side depicting the training data and the right side representing the validation data.
For all the channel configurations, the training loss decreased steadily with increasing epochs. Correspondingly, the validation loss also declined consistently without rising, even at higher epochs, suggesting that the model avoids overfitting to the training data. However, certain configurations, particularly those incorporating multiple spectral bands, exhibited fluctuating validation loss curves. In the context of cloud segmentation, such variability may indicate instability in convergence, meaning that the model failed to consistently adapt to the data during training and exhibited erratic performance. The RGBN+NDVI combination displayed the largest fluctuations and, as shown in Figure 7, yielded the lowest detection accuracy.

3.2. Evaluation and Analysis of the Generalization Performance of Datasets

Figure 7 shows the accuracy and mIoU scores achieved by the model for various multispectral channel combinations using a test dataset excluded from the training process. As these data were neither included nor disclosed during training, they serve as a strong benchmark for assessing the model’s generalization and real-world applicability. The results indicate that simply altering the combination of multispectral channels can influence performance. Accuracy ranges from a minimum of 86.5% to a maximum of 87.5%, while the mIoU scores vary from 43.3% to 46.5%. Figure 7 further demonstrates that the TOA reflectance-based channel combinations consistently outperform those using DN values. Notably, incorporating the NDVI with TOA yields the highest accuracy and mIoU among all the tested configurations.
Table 3 summarizes the number of parameters for each channel combination. While the total parameter count varies slightly depending on the input data configuration, these differences are relatively small compared to the overall model size. These results indicate that adding the NDVI as an additional channel slightly increases the parameter count but does not significantly affect the overall model complexity.

3.3. Evaluation by Channel Configuration

As a result of the comprehensive analysis of the data presented in Table 4, six channel combinations (RGB, NGR, RGBN, RGBN+NDVI, TOA, and TOA+NDVI) were used to evaluate the performance of the cloud detection model. Performance was measured based on the precision, recall, F1-score, and mIoU and analyzed for four classes (clear sky, thick cloud, thin cloud, and cloud shadow) as well as for overall performance. The experimental results showed that for the clear sky class, all the channel combinations achieved F1-scores greater than 93%, with RGB (93.89%) and TOA (93.85%) demonstrating particularly strong performance. In the thick cloud class, TOA+NDVI achieved the highest F1-score at 64.97%, slightly outperforming RGBN (64.91%) and RGB (64.68%), while RGBN+NDVI (59.87%) showed relatively lower performance. In the thin cloud class, TOA+NDVI (46.62%) slightly outperformed RGB (45.97%) to achieve the best performance, while the other channels remained in the 38–42% range. In contrast, the cloud shadow class showed relatively high performance with NGR (38.61%) and RGBN (37.82%), while TOA+NDVI (31.75%) performed worse than TOA (34.87%) and RGBN+NDVI (32.39%). For overall performance, TOA+NDVI (59.27%) slightly outperformed RGB (59.11%) and showed superior results compared to the other combinations (56.04–58.80%). This indicates that TOA+NDVI provides a balanced performance across various classes.
To statistically verify the provided experimental results, an ANOVA test was performed on the F1-scores of each class. The results showed F-statistics of 1228.9363 for Clear Sky_F1, 595.9905 for Thick Cloud_F1, 2164.7038 for Thin Cloud_F1, and 380.6453 for Cloud Shadow_F1, with corresponding p-values of 0.0000. This indicates statistically significant differences in the mean F1-scores across the channel combinations, allowing the rejection of the null hypothesis (H₀: no difference in mean F1-scores between all channels). The analysis was conducted at a 95% confidence level. Additionally, as shown in Table 5, post hoc pairwise comparisons using Tukey HSD (honest significant difference) revealed that TOA+NDVI showed significant differences from other channels in several classes. For instance, in Clear Sky_F1, the average difference between NGR and TOA+NDVI was 0.0072 (p-adj = 0.0), which was statistically significant but with a minimal practical difference. In Thick Cloud_F1, TOA+NDVI showed average differences of 0.0294 and 0.0136 higher than NGR and RGB, respectively (p-adj = 0.0). In Thin Cloud_F1, TOA+NDVI showed significant differences from most channels except RGB, where the average difference was 0.0017 (p-adj = 0.7708), showing no significant difference. In Cloud Shadow_F1, TOA+NDVI showed average differences of −0.0517 and −0.0312 from NGR and TOA, respectively (p-adj = 0.0), reflecting relatively lower performance. These statistical analyses confirm that TOA+NDVI excels in the clear sky, thick cloud, and thin cloud classes but shows some limitations in the cloud shadow class.
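The ANOVA and Tukey HSD tests can be reproduced, for example, with SciPy and statsmodels as sketched below; the per-image score table and its column names are hypothetical.

```python
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# df holds one per-image F1-score per row, e.g. columns "channel" and "thick_cloud_f1".
# The file name and column names are hypothetical; substitute the actual score table.
df = pd.read_csv("per_image_f1_scores.csv")

groups = [g["thick_cloud_f1"].values for _, g in df.groupby("channel")]
f_stat, p_value = f_oneway(*groups)              # one-way ANOVA across channel combinations
print(f"F = {f_stat:.4f}, p = {p_value:.4f}")

tukey = pairwise_tukeyhsd(endog=df["thick_cloud_f1"], groups=df["channel"], alpha=0.05)
print(tukey.summary())                           # pairwise mean differences and adjusted p-values
```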
When compared to the ground truth cloud labels (g) shown in Figure 8, the results in (c) reveal that all the models except the model trained with NGR data mistakenly classified a clear sky area as thin cloud in the lower-left portion of the satellite image. In contrast, (e) and (f), where TOA correction was applied, exhibited less over-detection of dense clouds compared to that of the other datasets. This indicates that the TOA correction contributed to improving cloud detection accuracy and reducing unnecessary false positives.

4. Conclusions

In this study, we evaluated the performance of cloud detection using a deep learning model, focusing on whether TOA reflectance was applied or not. Although our evaluation was relatively straightforward, the results demonstrated that incorporating TOA reflectance—commonly used as an input parameter in conventional cloud mask approaches—significantly enhanced the detection of cloud classes. The main contribution of this research lies in proposing a novel input data configuration that can improve model performance in cloud detection. Through the comprehensive analysis of various dataset combinations, we confirmed that the TOA+NDVI combination generally outperforms other configurations. This combination achieved particularly high accuracy and precision, effectively minimizing false detections. While its recall value was relatively low—resulting in more undetected cases—the highest recorded F1-score highlights that it strikes a strong balance overall. In contrast, the RGBN+NDVI combination exhibited limited performance, underscoring that some data configurations may be inherently constrained by their input characteristics. Class-by-class evaluations further supported these findings. The TOA+NDVI combination delivered superior results for both the dense and thin cloud classes, reflecting the physical properties of clouds more accurately, even when their boundaries were ambiguous. Meanwhile, the NGR combination performed well in detecting cloud shadows, indicating that different channel configurations may better capture distinct atmospheric and environmental nuances.
In conclusion, our study identifies the TOA+NDVI combination as a highly effective dataset configuration for cloud detection across a range of classes. Because the effectiveness of each combination depends on the underlying data characteristics and the selected variables, choosing the most suitable configuration for a specific class is essential. Generalizability can be enhanced through the application of diverse sensors. Future research should explore other model architectures, deep learning techniques, and variations in atmospheric conditions or temporal factors to further enhance cloud detection accuracy. These findings not only advance the field of cloud detection but can also be applied to other remote sensing domains. For instance, this approach could be valuable in disaster monitoring and response, where efficient and reliable cloud detection is essential. By using TOA or surface reflectance inputs, rather than relying solely on DN values, improved performance may be achieved in various Earth observation applications [29,30].

Author Contributions

Conceptualization, J.Y.; methodology, J.Y., H.O. and S.K.; software, H.O., S.K., D.H., Y.L. and E.D.; writing—original draft preparation, J.Y. and J.K.; writing—review and editing, J.Y.; funding acquisition, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2025-00515357). This research was also supported by the "Regional Innovation Strategy (RIS)" through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (MOE) (2023RIS-008).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated and analyzed during this study for this article are accessible at https://doi.org/10.22761/DJ2020.2.2.008.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kim, H.; Yeom, J.M.; Shin, D.; Choi, S.; Han, K.S.; Roujean, J.L. An assessment of thin cloud detection by applying bidirectional reflectance distribution function model-based background surface reflectance using Geostationary Ocean Color Imager (GOCI): A case study for South Korea. J. Geophys. Res. Atmos. 2017, 122, 8153–8172. [Google Scholar] [CrossRef]
  2. Mei, L.L.; Vountas, M.; Gómez-Chova, L.; Rozanov, V.; Jäger, M.; Lotz, W.; Burrows, J.P.; Hollmann, R. A cloud masking algorithm for the XBAER aerosol retrieval using MERIS data. Remote Sens. Environ. 2017, 197, 141–160. [Google Scholar] [CrossRef]
  3. Greco, S.; Infusino, M.; De Donato, C.; Coluzzi, R.; Imbrenda, V.; Lanfredi, M.; Simoniello, T.; Scalercio, S. Late spring frost in Mediterranean beech forests: Extended crown dieback and short-term effects on moth communities. Forests 2018, 9, 388. [Google Scholar] [CrossRef]
  4. Song, X.; Yang, C.; Wu, M.; Zhao, C.; Yang, G.; Hoffmann, W.; Huang, W. Evaluation of Sentinel-2A satellite imagery for mapping cotton root rot. Remote Sens. 2017, 9, 906. [Google Scholar] [CrossRef]
  5. Dörnhöfer, K.; Göritz, A.; Gege, P.; Pflug, B.; Oppelt, N. Water constituents and water depth retrieval from Sentinel-2A—A first evaluation in an oligotrophic lake. Remote Sens. 2016, 8, 941. [Google Scholar] [CrossRef]
  6. Zhu, Z.; Woodcock, C.E. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens. Environ. 2012, 118, 83–94. [Google Scholar] [CrossRef]
  7. Inman, R.H.; Pedro, H.T.C.; Coimbra, C.F.M. Solar forecasting methods for renewable energy integration. Prog. Energy Combust. Sci. 2013, 39, 535–576. [Google Scholar] [CrossRef]
  8. Perez, R.; Kivalov, S.; Schlemmer, J.; Hemker, K., Jr.; Hoff, T.E.; Renne, D. Validation of short and medium term operational solar radiation forecasts in the US. Solar Energy 2013, 84, 2161–2172. [Google Scholar] [CrossRef]
  9. Buttar, P.K.; Sachan, M.K. Semantic segmentation of clouds in satellite images based on U-Net++ architecture and attention mechanism. Expert Syst. Appl. 2022, 209, 118380. [Google Scholar] [CrossRef]
  10. Main-Knorn, M.; Pflug, B.; Louis, J.; Debaecker, V.; Müller-Wilm, U.; Gascon, F. Sen2Cor for Sentinel-2. In Image and Signal Processing for Remote Sensing XXIII; Presented at the Image and Signal Processing for Remote Sensing XXIII; SPIE: Warsaw, Poland, 2017; pp. 37–48. [Google Scholar] [CrossRef]
  11. Yeom, J.M.; Roujean, J.L.; Han, K.S.; Lee, K.S.; Kim, H.W. Thin cloud detection over land using background surface reflectance based on the BRDF model applied to Geostationary Ocean Color Imager (GOCI) satellite data sets. Remote Sens. Environ. 2020, 239, 111610. [Google Scholar] [CrossRef]
  12. Saunders, R.W.; Kriebel, K.T. An improved method for detecting clear sky and cloudy radiances from AVHRR data. Int. J. Remote Sens. 1988, 9, 123–150. [Google Scholar] [CrossRef]
  13. Zhu, Z.; Wang, S.; Woodcock, C.E. Improvement and expansion of the Fmask algorithm: Cloud, cloud shadow, and snow detection for Landsats 4–7, 8, and Sentinel-2 images. Remote Sens. Environ. 2015, 159, 269–277. [Google Scholar] [CrossRef]
  14. Qiu, S.; Zhu, Z.; He, B. Fmask 4.0: Improved cloud and cloud shadow detection in Landsats 4–8 and Sentinel-2 imagery. Remote Sens. Environ. 2019, 231, 111205. [Google Scholar] [CrossRef]
  15. Ge, K.; Liu, J.; Wang, F.; Chen, B.; Hu, Y. A Cloud Detection Method Based on Spectral and Gradient Features for SDGSAT-1 Multispectral Images. Remote Sens. 2023, 15, 24. [Google Scholar] [CrossRef]
  16. Wright, N.; Duncan, J.M.A.; Callow, J.N.; Thompson, S.E.; George, R.J. CloudS2Mask: A novel deep learning approach for improved cloud and cloud shadow masking in Sentinel-2 imagery. Remote Sens. Environ. 2024, 306, 114122. [Google Scholar] [CrossRef]
  17. Zupanc, A. Improving Cloud Detection with Machine Learning. Sentinel Hub Blog, 2020. Available online: https://medium.com/sentinel-hub/improving-cloud-detection-with-machine-learning-c09dc5d7cf13 (accessed on 7 April 2025).
  18. Jeppesen, J.H.; Jacobsen, R.H.; Inceoglu, F.; Toftegaard, T.S. A cloud detection algorithm for satellite imagery based on deep learning. Remote Sens. Environ. 2019, 229, 247–259. [Google Scholar] [CrossRef]
  19. Hughes, H.J.; Hayes, D.J. Automated detection of cloud and cloud shadow in single-date Landsat imagery using neural networks and spatial post-processing. Remote Sens. 2014, 6, 4907–4926. [Google Scholar] [CrossRef]
  20. Weiland, M.; Li, Y.; Martinis, S. Multi-sensor cloud and cloud shadow segmentation with a convolutional neural network. Remote Sens. Environ. 2019, 230, 111203. [Google Scholar] [CrossRef]
  21. Kim, M.; Ko, Y. A study on the cloud detection technique of heterogeneous sensors using Modified DeepLabV3+. Korean J. Remote Sens. 2022, 38, 511–521. [Google Scholar]
  22. Hughes, M.J.; Kennedy, R. High-quality cloud masking of Landsat 8 imagery using convolutional neural networks. Remote Sens. 2019, 11, 2591. [Google Scholar] [CrossRef]
  23. Morales, G.; Ramirez, A.; Telles, J. End-to-end cloud segmentation in high-resolution multispectral satellite imagery using deep learning. In Proceedings of the 2019 IEEE XXVI International Conference on Electronics, Electrical Engineering and Computing (INTERCON), Lima, Peru, 29–31 August 2019. [Google Scholar] [CrossRef]
  24. Choi, H.; Bindschadler, R. Cloud detection in Landsat imagery of ice sheets using shadow matching technique and automatic normalized difference snow index threshold value decision. Remote Sens. Environ. 2004, 91, 237–242. [Google Scholar] [CrossRef]
  25. Li, Z.; Shen, H.; Li, H.; Xia, G.; Gamba, P.; Zhang, L. Multi-feature combined cloud and cloud shadow detection in GaoFen-1 wide field of view imagery. Remote Sens. Environ. 2017, 191, 342–358. [Google Scholar] [CrossRef]
  26. Kim, B.; Lee, K.; Park, S. Burned-area mapping using post-fire PlanetScope images and a convolutional neural network. Remote Sens. 2024, 16, 2629. [Google Scholar] [CrossRef]
  27. Pešek, O.; Segal-Rozenhaimer, M.; Karnieli, A. Using convolutional neural networks for cloud detection on VENμS images over multiple land-cover types. Remote Sens. 2022, 14, 5210. [Google Scholar] [CrossRef]
  28. Fabio, L.; Piga, D.; Umberto, M.; Safouane, E.G. BenchCloudVision: A benchmark analysis of deep learning approaches for cloud detection and segmentation in remote sensing imagery. arXiv 2024, arXiv:2402.13918. [Google Scholar]
  29. Mohajerani, S.; Saeedi, P. CPNet: A context preserver convolutional neural network for detecting shadows in single RGB images. In Proceedings of the 2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP), Vancouver, BC, Canada, 29–31 August 2018; pp. 1–5. [Google Scholar]
  30. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. arXiv 2018, arXiv:1802.02611. [Google Scholar]
  31. Mohajerani, S.; Saeedi, P. Cloud-Net: An end-to-end cloud detection algorithm for Landsat 8 imagery. arXiv 2019, arXiv:1901.10077. [Google Scholar] [CrossRef]
  32. Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
  33. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar] [CrossRef]
  34. Oh, H.; Kim, B.; Kim, J. Deep learning-based cloud detection algorithm for KOMPSAT-3/3A imagery. In Proceedings of the KSAS 2020 Fall Conference, Jeju, Republic of Korea, 18–20 November 2020; The Korean Society for Aeronautical and Space Sciences: Daejeon, Republic of Korea, 2020; pp. 128–130. [Google Scholar]
Figure 1. The training dataset of cloud masks from KOMPSAT-3 and 3A on 1 January 2020 and 1 February 2023 (www.aihub.or.kr and https://dataon.kisti.re.kr, accessed on 7 April 2025). Panels (a–c) show the RGB, NGR, and TOA-corrected RGBN composites, respectively, and (d) is the cloud mask label.
Figure 2. The patching schema of the cloud detection dataset for each of the KOMPSAT satellite images and the corresponding label dataset.
Figure 3. Data augmentation techniques were applied with the following parameters: horizontal flip (50% probability); vertical flip (50% probability); Shift, Scale, and Rotate (50% probability); Random Brightness (50% probability); and contrast adjustment (30% probability).
Figure 4. The selected sample images of the channel combination of the KOMPSAT-3 and 3A multispectral images for cloud masks with deep learning.
Figure 5. The architecture of the DeeplabV3+ Resnet model for estimating cloud masks with KOMPSAT-3 and 3A TOA reflectance. The asterisk (*) denotes a convolutional layer.
Figure 6. Training and validation loss per epoch for the different channel combinations using the KOMPSAT-3 and 3A data. The (left) side represents the training data, while the (right) side represents the validation data.
Figure 7. Histograms for accuracy values and mIoU according to different channel combinations of test datasets.
Figure 8. The selected images of the cloud masks for each channel combination case (a–f) and the corresponding ground truth labels (g).
Table 1. Characteristics of satellite sensors from KOMPSAT-3 and 3A for cloud masks and whether TOA radiance was applied or not.

Satellite | KOMPSAT-3 | KOMPSAT-3A
Blue-band wavelength (nm) | 450–520 | 450–520
Green-band wavelength (nm) | 520–600 | 520–600
Red-band wavelength (nm) | 630–690 | 630–690
NIR-band wavelength (nm) | 760–900 | 760–900
Resolution (m) | 2.8 | 2.2

Publicly opened cloud detection dataset | KOMPSAT-3 | KOMPSAT-3A
Sensor | PAN, MSI | PAN, MSI
Launch year | 2015 | 2017
Data acquisition period | 3 days | 3 days
Table 2. Hyperparameter range used to select optimized deep learning model structure for cloud detection.

Parameters | Configuration
Learning rate for Adam | 0.0003
Learning rate factor for scheduling | 0.5
Momentum | 0.937
Weight decay | 0.0005
Table 3. Parameter count for each input channel configuration.

Channel | RGB | NGR | RGBN | RGBN+NDVI | TOA | TOA+NDVI
Parameters | 45,670,484 | 45,670,484 | 45,673,620 | 45,676,756 | 45,673,620 | 45,676,756
Table 4. The performance metrics of the cloud masks for each of the channel combination cases of the KOMPSAT-3 and 3A test datasets, depending on whether TOA reflectance was applied or not.

Experiment | Class | Precision | Recall | F1-Score | mIoU
RGB | Clear Sky | 90.75% | 97.26% | 93.89% | 88.49%
RGB | Thick Cloud | 48.84% | 95.76% | 64.68% | 47.80%
RGB | Thin Cloud | 68.35% | 34.63% | 45.97% | 29.84%
RGB | Cloud Shadow | 56.18% | 22.28% | 31.91% | 18.98%
RGB | Overall | 66.03% | 62.48% | 59.11% | 46.28%
NGR | Clear Sky | 89.75% | 97.20% | 93.33% | 87.49%
NGR | Thick Cloud | 48.33% | 90.75% | 63.07% | 46.06%
NGR | Thin Cloud | 64.88% | 27.60% | 38.73% | 24.02%
NGR | Cloud Shadow | 62.92% | 27.85% | 38.61% | 23.92%
NGR | Overall | 66.47% | 60.85% | 58.43% | 45.37%
RGBN | Clear Sky | 89.76% | 97.86% | 93.63% | 88.02%
RGBN | Thick Cloud | 49.92% | 92.76% | 64.91% | 48.05%
RGBN | Thin Cloud | 70.76% | 26.79% | 38.87% | 24.12%
RGBN | Cloud Shadow | 61.80% | 27.24% | 37.82% | 23.32%
RGBN | Overall | 68.06% | 61.16% | 58.80% | 45.88%
RGBN+NDVI | Clear Sky | 89.87% | 96.84% | 93.22% | 87.31%
RGBN+NDVI | Thick Cloud | 44.19% | 92.79% | 59.87% | 42.73%
RGBN+NDVI | Thin Cloud | 65.55% | 27.43% | 38.67% | 23.97%
RGBN+NDVI | Cloud Shadow | 50.87% | 23.76% | 32.39% | 19.33%
RGBN+NDVI | Overall | 62.62% | 60.20% | 56.04% | 43.33%
TOA | Clear Sky | 90.12% | 97.91% | 93.85% | 88.42%
TOA | Thick Cloud | 48.27% | 88.40% | 62.44% | 45.39%
TOA | Thin Cloud | 71.75% | 29.99% | 42.30% | 26.82%
TOA | Cloud Shadow | 59.43% | 24.68% | 34.87% | 21.12%
TOA | Overall | 67.39% | 60.24% | 58.37% | 45.44%
TOA+NDVI | Clear Sky | 90.19% | 97.58% | 93.74% | 88.22%
TOA+NDVI | Thick Cloud | 51.89% | 86.88% | 64.97% | 48.12%
TOA+NDVI | Thin Cloud | 69.18% | 35.16% | 46.62% | 30.40%
TOA+NDVI | Cloud Shadow | 66.06% | 20.89% | 31.75% | 18.87%
TOA+NDVI | Overall | 69.33% | 60.13% | 59.27% | 46.40%
Table 5. The Tukey HSD post hoc analysis of F1-scores across channel combinations for each class. The Mean Difference (MD), defined as $\mathrm{MD} = \bar{x}_{\mathrm{group1}} - \bar{x}_{\mathrm{group2}}$, quantifies the difference in F1-scores between the two channel pairs. A positive MD indicates that the first channel (group1) has a higher F1-score than the second (group2), while a negative MD suggests the opposite. The adjusted p-value (p-adj) indicates the statistical significance of the difference, with values below 0.05 marked in Bold Italic to denote significance.

group1 | group2 | Clear Sky MD | p-adj | Thick Cloud MD | p-adj | Thin Cloud MD | p-adj | Cloud Shadow MD | p-adj
NGR | RGB | 0.002 | 0 | 0.0507 | 0 | 0.0723 | 0 | −0.0432 | 0
NGR | RGBN | 0.008 | 0 | −0.0024 | 0.0966 | −0.015 | 0 | −0.0011 | 0.9723
NGR | RGBN+NDVI | 0 | 0 | 0.018 | 0 | −0.016 | 0 | −0.0204 | 0
NGR | TOA | 0.01 | 0 | −0.0305 | 0 | 0.014 | 0 | −0.022 | 0
NGR | TOA+NDVI | 0.005 | 0 | −0.0354 | 0 | 0.0741 | 0 | −0.058 | 0
RGB | RGBN | 0.007 | 0 | −0.0531 | 0 | −0.088 | 0 | 0.0422 | 0
RGB | RGBN+NDVI | 0 | 0 | −0.0327 | 0 | −0.089 | 0 | 0.0228 | 0
RGB | TOA | 0.008 | 0 | −0.0812 | 0 | −0.058 | 0 | 0.0212 | 0
RGB | TOA+NDVI | 0.004 | 0 | −0.0861 | 0 | 0.0018 | 0.58 | −0.0148 | 0
RGBN | RGBN+NDVI | −0.01 | 0 | 0.0204 | 0 | −0.001 | 0.9495 | −0.0194 | 0
RGBN | TOA | 0.001 | 0 | −0.0281 | 0 | 0.0294 | 0 | −0.0209 | 0
RGBN | TOA+NDVI | 0 | 0 | −0.033 | 0 | 0.0895 | 0 | −0.057 | 0
RGBN+NDVI | TOA | 0.011 | 0 | −0.0485 | 0 | 0.0304 | 0 | −0.0016 | 0.858
RGBN+NDVI | TOA+NDVI | 0.007 | 0 | −0.0534 | 0 | 0.0905 | 0 | −0.0376 | 0
TOA | TOA+NDVI | 0 | 0 | −0.0049 | 0 | 0.0601 | 0 | −0.036 | 0