Article

Improving the Universal Performance of Land Cover Semantic Segmentation Through Training Data Refinement and Multi-Dataset Fusion via Redundant Models

National Satellite Operation & Application Center, Korea Aerospace Research Institute, Daejeon 34133, Republic of Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(15), 2669; https://doi.org/10.3390/rs17152669
Submission received: 3 June 2025 / Revised: 28 July 2025 / Accepted: 30 July 2025 / Published: 1 August 2025

Abstract

Artificial intelligence (AI) has become a mainstream analysis tool in remote sensing. Various semantic segmentation models have been introduced to segment land cover from aerial or satellite images, achieving remarkable results. However, these models often lack universal performance on unseen images, making them difficult to deploy as a service. One of the primary reasons for this lack of robustness is overfitting caused by errors and inconsistencies in the ground truth (GT). In this study, we propose a method to mitigate these inconsistencies using redundant models and verify the improvement on a public dataset based on Google Earth images. Redundant models share the same network architecture and hyperparameters but are trained with different combinations of training and validation data from the same dataset. Because each model is exposed to different samples during training, the models yield slightly different inference results. This variability allows pixel-level confidence levels to be estimated for the GT. The confidence level is incorporated into the GT to weight the loss calculation during training of the enhanced model. Furthermore, we implemented a consensus model trained on modified masks, in which low-confidence classes are replaced by the dominant classes identified through a majority vote among the redundant models. To further improve robustness, we extended the same approach to fuse datasets with different class compositions based on imagery from the Korea Multipurpose Satellite 3A (KOMPSAT-3A). Performance evaluations were conducted on three network architectures: a simple network, U-Net, and DeepLabV3. In the single-dataset case, the enhanced and consensus models improved performance by an average of 2.49% and 2.59%, respectively, across the network architectures. In the multi-dataset scenario, the enhanced and consensus models showed average performance improvements of 3.37% and 3.02%, respectively, compared with an average increase of 1.55% without the proposed method.
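The confidence-estimation and majority-vote steps described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the redundant models' predictions and the GT are integer class maps of equal shape, and the function names, the 0.5 threshold, and the per-pixel agreement ratio used as "confidence" are illustrative assumptions.

```python
import numpy as np

def pixel_confidence(preds, gt):
    """Per-pixel GT confidence: fraction of redundant models whose
    predicted class agrees with the GT label at that pixel."""
    preds = np.stack(preds)            # (M, H, W) for M redundant models
    return (preds == gt).mean(axis=0)  # (H, W), values in [0, 1]

def consensus_mask(preds, gt, threshold=0.5):
    """Modified GT mask: pixels whose confidence falls below the
    threshold are replaced by the majority-vote class of the models."""
    preds = np.stack(preds)
    conf = (preds == gt).mean(axis=0)
    n_classes = int(max(preds.max(), gt.max())) + 1
    # per-pixel class vote counts, shape (n_classes, H, W)
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    majority = votes.argmax(axis=0)
    return np.where(conf < threshold, majority, gt)
```

The confidence map returned by `pixel_confidence` could then weight a per-pixel cross-entropy loss for the enhanced model, while `consensus_mask` produces the modified masks used to train the consensus model.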
Keywords: satellite imagery; KOMPSAT; deep learning; land cover segmentation; overfitting; semantic segmentation

Share and Cite

MDPI and ACS Style

Chang, J.Y.; Oh, K.-Y.; Lee, K.-J. Improving the Universal Performance of Land Cover Semantic Segmentation Through Training Data Refinement and Multi-Dataset Fusion via Redundant Models. Remote Sens. 2025, 17, 2669. https://doi.org/10.3390/rs17152669

AMA Style

Chang JY, Oh K-Y, Lee K-J. Improving the Universal Performance of Land Cover Semantic Segmentation Through Training Data Refinement and Multi-Dataset Fusion via Redundant Models. Remote Sensing. 2025; 17(15):2669. https://doi.org/10.3390/rs17152669

Chicago/Turabian Style

Chang, Jae Young, Kwan-Young Oh, and Kwang-Jae Lee. 2025. "Improving the Universal Performance of Land Cover Semantic Segmentation Through Training Data Refinement and Multi-Dataset Fusion via Redundant Models" Remote Sensing 17, no. 15: 2669. https://doi.org/10.3390/rs17152669

APA Style

Chang, J. Y., Oh, K.-Y., & Lee, K.-J. (2025). Improving the Universal Performance of Land Cover Semantic Segmentation Through Training Data Refinement and Multi-Dataset Fusion via Redundant Models. Remote Sensing, 17(15), 2669. https://doi.org/10.3390/rs17152669

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
