Search Results (484)

Search Parameters:
Keywords = multispectral satellite sensors

26 pages, 707 KB  
Review
Application of Multispectral Imagery and Synthetic Aperture Radar Sensors for Monitoring Algal Blooms: A Review
by Vikash Kumar Mishra, Himanshu Maurya, Fred Nicolls and Amit Kumar Mishra
Phycology 2025, 5(4), 71; https://doi.org/10.3390/phycology5040071 (registering DOI) - 2 Nov 2025
Abstract
Water pollution is a growing concern for aquatic ecosystems worldwide, with threats like plastic waste, nutrient pollution, and oil spills harming biodiversity and impacting human health, fisheries, and local economies. Traditional methods of monitoring water quality, such as ground sampling, are often limited in how frequently and widely they can collect data. Satellite imagery is a potent tool in offering broader and more consistent coverage. This review explores how Multispectral Imagery (MSI) and Synthetic Aperture Radar (SAR), including polarimetric SAR (PolSAR), are utilised to monitor harmful algal blooms (HABs) and other types of aquatic pollution. It looks at recent advancements in satellite sensor technologies, highlights the value of combining different data sources (like MSI and SAR), and discusses the growing use of artificial intelligence for analysing satellite data. Real-world examples from places like Lake Erie, Vembanad Lake in India, and Korea’s coastal waters show how satellite tools such as the Geostationary Ocean Colour Imager (GOCI) and Environmental Sample Processor (ESP) are being used to track seasonal changes in water quality and support early warning systems. While satellite monitoring still faces challenges like interference from clouds or water turbidity, continued progress in sensor design, data fusion, and policy support is helping make remote sensing a key part of managing water health. Full article

26 pages, 15315 KB  
Article
Machine and Deep Learning Framework for Sargassum Detection and Fractional Cover Estimation Using Multi-Sensor Satellite Imagery
by José Manuel Echevarría-Rubio, Guillermo Martínez-Flores and Rubén Antelmo Morales-Pérez
Data 2025, 10(11), 177; https://doi.org/10.3390/data10110177 (registering DOI) - 1 Nov 2025
Abstract
Over the past decade, recurring influxes of pelagic Sargassum have posed significant environmental and economic challenges in the Caribbean Sea. Effective monitoring is crucial for understanding bloom dynamics and mitigating their impacts. This study presents a comprehensive machine learning (ML) and deep learning (DL) framework for detecting Sargassum and estimating its fractional cover using imagery from key satellite sensors: the Operational Land Imager (OLI) on Landsat-8 and the Multispectral Instrument (MSI) on Sentinel-2. A spectral library was constructed from five core spectral bands (Blue, Green, Red, Near-Infrared, and Short-Wave Infrared). It was used to train an ensemble of five diverse classifiers: Random Forest (RF), K-Nearest Neighbors (KNN), XGBoost (XGB), a Multi-Layer Perceptron (MLP), and a 1D Convolutional Neural Network (1D-CNN). All models achieved high classification performance on a held-out test set, with weighted F1-scores exceeding 0.976. The probabilistic outputs from these classifiers were then leveraged as a direct proxy for the sub-pixel fractional cover of Sargassum. Critically, an inter-algorithm agreement analysis revealed that detections on real-world imagery are typically either of very high (unanimous) or very low (contentious) confidence, highlighting the diagnostic power of the ensemble approach. The resulting framework provides a robust and quantitative pathway for generating confidence-aware estimates of Sargassum distribution. This work supports efforts to manage these harmful algal blooms by providing vital information on detection certainty, while underscoring the critical need to empirically validate fractional cover proxies against in situ or UAV measurements. Full article
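The core idea of using classifier probabilities as a sub-pixel fractional-cover proxy can be illustrated with a minimal sketch. This is not the authors' trained ensemble: the five-band spectra below are synthetic, and a single Random Forest stands in for the five-model ensemble described in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic 5-band reflectance spectra (Blue, Green, Red, NIR, SWIR):
# class 0 = open water (low NIR), class 1 = Sargassum (elevated NIR).
water = rng.normal([0.05, 0.06, 0.05, 0.02, 0.01], 0.005, size=(200, 5))
sargassum = rng.normal([0.04, 0.07, 0.05, 0.30, 0.10], 0.02, size=(200, 5))
X = np.vstack([water, sargassum])
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A mixed pixel: linear mixture of 30% Sargassum and 70% water endmembers.
# The class-1 probability is read off as the fractional-cover proxy.
mixed = 0.7 * water.mean(axis=0) + 0.3 * sargassum.mean(axis=0)
fractional_cover = clf.predict_proba(mixed.reshape(1, -1))[0, 1]
print(f"Sargassum fractional-cover proxy: {fractional_cover:.2f}")
```

As the abstract stresses, such a proxy still needs empirical validation against in situ or UAV measurements before being read as a true sub-pixel fraction.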
(This article belongs to the Section Spatial Data Science and Digital Earth)

33 pages, 2039 KB  
Review
Monitoring Wildfire Risk with a Near-Real-Time Live Fuel Moisture Content System: A Review and Roadmap for Operational Application in New Zealand
by Michael S. Watt, Shana Gross, John Keithley Difuntorum, Jessica L. McCarty, H. Grant Pearce, Jacquelyn K. Shuman and Marta Yebra
Remote Sens. 2025, 17(21), 3580; https://doi.org/10.3390/rs17213580 - 29 Oct 2025
Viewed by 334
Abstract
Live fuel moisture content (LFMC) is a critical variable influencing wildfire behavior, ignition potential, and suppression difficulty, yet it remains challenging to monitor consistently across landscapes due to sparse field observations, rapid temporal changes, and vegetation heterogeneity. This study presents a comprehensive review of satellite-based approaches for estimating LFMC, with emphasis on methods applicable to New Zealand, where wildfire risk is increasing due to climate change. We assess the suitability of different remote sensing data sources, including multispectral, thermal, and microwave sensors, and evaluate their integration for characterizing both LFMC and fuel types. Particular attention is given to the trade-offs between data resolution, revisit frequency, and spectral sensitivity. As knowledge of fuel type and structure is critical for understanding wildfire behavior and LFMC, the review also outlines key limitations in existing land cover products for fuel classification and highlights opportunities for improving fuel mapping using remotely sensed data. This review lays the groundwork for the development of an operational LFMC prediction system in New Zealand, with broader relevance to fire-prone regions globally. Such a system would support real-time wildfire risk assessment and enhance decision-making in fire management and emergency response. Full article
Show Figures

Figure 1

26 pages, 6622 KB  
Article
Radiometric Cross-Calibration and Performance Analysis of HJ-2A/2B 16m-MSI Using Landsat-8/9 OLI with Spectral-Angle Difference Correction
by Jian Zeng, Hang Zhao, Yongfang Su, Qiongqiong Lan, Qijin Han, Xuewen Zhang, Xinmeng Wang, Zhaopeng Xu, Zhiheng Hu, Xiaozheng Du and Bopeng Yang
Remote Sens. 2025, 17(21), 3569; https://doi.org/10.3390/rs17213569 - 28 Oct 2025
Viewed by 212
Abstract
The Huanjing-2A/2B (HJ-2A/2B) satellites are China’s next-generation environmental monitoring satellites, equipped with four visible light wide-swath charge-coupled device (CCD) sensors. These sensors enable the acquisition of 16-m multispectral imagery (16m-MSI) with a swath width of 800 km through field-of-view stitching. However, traditional vicarious calibration techniques are limited by their calibration frequency, making them insufficient for continuous monitoring requirements. To address this challenge, the present study proposes a spectral-angle difference correction-based cross-calibration approach, using the Landsat 8/9 Operational Land Imager (OLI) as the reference sensor to calibrate the HJ-2A/2B CCD sensors. This method improves both radiometric accuracy and temporal frequency. The study utilizes cloud-free image pairs of HJ-2A/2B CCD and Landsat 8/9 OLI, acquired simultaneously at the Dunhuang and Golmud calibration sites between 2021 and 2024, in combination with atmospheric parameters from the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 (ERA5) dataset and historical ground-measured spectral reflectance data for cross-calibration. The methodology includes spatial matching and resampling of the image pairs, along with the identification of radiometrically stable homogeneous regions. To account for sensor viewing geometry differences, an observation-angle linear correction model is introduced. Spectral band adjustment factors (SBAFs) are also applied to correct for discrepancies in spectral response functions (SRFs) across sensors. Experimental results demonstrate that the cross-calibration coefficients differ by less than 10% compared to vicarious calibration results from the China Centre for Resources Satellite Data and Application (CRESDA). Additionally, using Sentinel-2 MSI as the reference sensor, the cross-calibration coefficients were independently validated through cross-validation. 
The results indicate that the radiometrically corrected HJ-2A/2B 16m-MSI CCD data, based on these coefficients, exhibit improved radiometric consistency with Sentinel-2 MSI observations. Further analysis shows that the cross-calibration method significantly enhances radiometric consistency across the HJ-2A/2B 16m-MSI CCD sensors, with radiometric response differences between CCD1 and CCD4 maintained below 3%. Error analysis quantifies the impact of atmospheric parameters and surface reflectance on calibration accuracy, with total uncertainty calculated. The proposed spectral-angle correction-based cross-calibration method not only improves calibration accuracy but also offers reliable technical support for long-term radiometric performance monitoring of the HJ-2A/2B 16m-MSI CCD sensors. Full article
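The spectral band adjustment factor (SBAF) idea above can be sketched in a few lines: band-average a common surface reflectance spectrum through each sensor's spectral response function (SRF) and take the ratio. The Gaussian SRFs and the sand-like spectrum below are hypothetical placeholders; real SBAFs use the published HJ-2A/2B and OLI SRFs and measured site spectra.

```python
import numpy as np

# Uniform 1 nm wavelength grid over a shared red-band region.
wl = np.arange(620.0, 700.0, 1.0)

# Hypothetical ground reflectance spectrum (desert-sand-like),
# rising gently with wavelength.
rho = 0.20 + 0.001 * (wl - 620.0)

def gaussian_srf(center, fwhm):
    # Idealized Gaussian spectral response function.
    sigma = fwhm / 2.3548
    return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

# Stand-in SRFs for the reference (OLI-like) and target (HJ-2 CCD-like)
# red bands; real SRFs come from the instrument providers.
srf_ref = gaussian_srf(655.0, 30.0)
srf_tgt = gaussian_srf(660.0, 40.0)

def band_reflectance(srf):
    # SRF-weighted (band-averaged) reflectance on the uniform grid.
    return np.sum(rho * srf) / np.sum(srf)

# SBAF converts the reference sensor's band value to the target band.
sbaf = band_reflectance(srf_tgt) / band_reflectance(srf_ref)
print(f"SBAF (target/reference): {sbaf:.4f}")
```

With nearly overlapping bands the ratio sits close to 1, and it drifts further from 1 as the SRFs or the surface spectrum diverge, which is exactly why the correction matters for percent-level calibration targets.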
(This article belongs to the Special Issue Remote Sensing Satellites Calibration and Validation: 2nd Edition)

16 pages, 3736 KB  
Article
Monitoring Harmful Algal Blooms in the Southern California Current Using Satellite Ocean Color and In Situ Data
by Min-Sun Lee, Kevin Arrigo, Alexandra Smith, C. Brock Woodson, Juhyung Lee and Fiorenza Micheli
J. Mar. Sci. Eng. 2025, 13(11), 2044; https://doi.org/10.3390/jmse13112044 - 25 Oct 2025
Viewed by 304
Abstract
Harmful algal blooms (HABs) pose increasing threats to marine ecosystems and fisheries worldwide, creating an urgent need for efficient wide-area monitoring schemes. Satellite remote sensing offers a promising approach. However, quantitative, real-time HAB monitoring via satellites remains underdeveloped. Here, we evaluated the applicability of the Normalized Red Tide Index (NRTI), originally developed for Korean waters using the Geostationary Ocean Color Imager (GOCI), in detecting and quantifying HAB in the southern California Current. Our integrated monitoring encompassed two distinct regions of the California Current—Monterey Bay (central California) and La Bocana (Baja California)—separated by a 1470-km stretch of coastline and characterized by blooms of multiple HAB species. Our objectives were threefold: (1) to validate the relationship between NRTI and HAB cell densities through field measurements, (2) to evaluate the performance of hyperspectral NRTI derived from in situ reflectance measurements compared to existing multispectral indices including MODIS ocean color products, and (3) to assess the capability of multispectral sensors to represent NRTI by comparing multispectral-derived indices against hyperspectral NRTI measurements. We found species-specific relationships between hyperspectral NRTI and in situ HAB cell densities, with Prorocentrum gracile in Baja California showing a robust logarithmic fit (R2 = 0.92) and multi-species assemblage (dominated by Akashiwo sanguinea) in Monterey Bay displaying a weak, positive correlation. MODIS-derived NRTI values were consistently lower than hyperspectral estimates due to reduced spectral resolution, but the two datasets were strongly correlated (R2 = 0.97), allowing for reliable tracking of relative bloom intensity. MODIS applications further captured distinct bloom dynamics across regions, with localized nearshore blooms in Baja California and broader offshore expansion in Monterey Bay. 
These results suggest that the NRTI-based monitoring scheme can effectively quantify HAB intensity across broad geographic scales, but its application requires explicit consideration of regional HAB assemblages. Full article
(This article belongs to the Section Marine Environmental Science)

30 pages, 11870 KB  
Article
Early Mapping of Farmland and Crop Planting Structures Using Multi-Temporal UAV Remote Sensing
by Lu Wang, Yuan Qi, Juan Zhang, Rui Yang, Hongwei Wang, Jinlong Zhang and Chao Ma
Agriculture 2025, 15(21), 2186; https://doi.org/10.3390/agriculture15212186 - 22 Oct 2025
Viewed by 403
Abstract
Fine-grained identification of crop planting structures provides key data for precision agriculture, thereby supporting scientific production and evidence-based policy making. This study selected a representative experimental farmland in Qingyang, Gansu Province, and acquired Unmanned Aerial Vehicle (UAV) multi-temporal data (six epochs) from multiple sensors (multispectral [visible–NIR], thermal infrared, and LiDAR). By fusing 59 feature indices, we achieved high-accuracy extraction of cropland and planting structures and identified the key feature combinations that discriminate among crops. The results show that (1) multi-source UAV data from April + June can effectively delineate cropland and enable accurate plot segmentation; (2) July is the optimal time window for fine-scale extraction of all planting-structure types in the area (legumes, millet, maize, buckwheat, wheat, sorghum, maize–legume intercropping, and vegetables), with a cumulative importance of 72.26% for the top ten features, while the April + June combination retains most of the separability (67.36%), enabling earlier but slightly less precise mapping; and (3) under July imagery, the SAM (Segment Anything Model) segmentation + RF (Random Forest) classification approach—using the RF-selected top 10 of the 59 features—achieved an overall accuracy of 92.66% with a Kappa of 0.9163, representing a 7.57% improvement over the contemporaneous SAM + CNN (Convolutional Neural Network) method. This work establishes a basis for UAV-based recognition of typical crops in the Qingyang sector of the Loess Plateau and, by deriving optimal recognition timelines and feature combinations from multi-epoch data, offers useful guidance for satellite-based mapping of planting structures across the Loess Plateau following multi-scale data fusion. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

29 pages, 10201 KB  
Article
Hybrid Methodological Evaluation Using UAV/Satellite Information for the Monitoring of Super-Intensive Olive Groves
by Esther Alfonso, Serafín López-Cuervo, Julián Aguirre, Enrique Pérez-Martín and Iñigo Molina
Appl. Sci. 2025, 15(20), 11171; https://doi.org/10.3390/app152011171 - 18 Oct 2025
Viewed by 355
Abstract
Advances in Earth observation technology using multispectral imagery from satellite Earth observation systems and sensors mounted on unmanned aerial vehicles (UAVs) are enabling more accurate crop monitoring. These images, once processed, facilitate the analysis of crop health by enabling the study of crop vigour, the calculation of biomass indices, and the continuous temporal monitoring using vegetation indices (VIs). These indicators allow for the identification of diseases, pests, or water stress, among others. This study compares images acquired with the Altum PT sensor (UAV) and Super Dove (satellite) to evaluate their ability to detect specific problems in super-intensive olive groves at two critical times: January, during pruning, and April, at the beginning of fruit development. Four different VIs were used, and multispectral maps were generated for each: the Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), the Normalized Difference Red Edge Index (NDRE) and the Leaf Chlorophyll Index (LCI). Data for each plant (n = 11,104) were obtained for analysis across all dates and sensors. A combined methodology (Spearman’s correlation coefficient, Student’s t-test and decision trees) was used to validate the behaviour of the variables and propose predictive models. The results showed significant differences between the sensors, with a common trend in spatial patterns and a correlation range between 0.45 and 0.68. Integrating both technologies enables multiscale assessment, optimizing agronomic management and supporting more sustainable precision agriculture. Full article
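The four vegetation indices named in the abstract are simple band-ratio formulas. A minimal sketch with made-up per-pixel reflectances follows; note that LCI has more than one formulation in the literature, and the variant used here ((NIR − RedEdge)/(NIR + Red)) is an assumption, not necessarily the one the authors applied.

```python
# Hypothetical per-pixel surface reflectances from a multispectral sensor.
green, red, rededge, nir = 0.08, 0.05, 0.20, 0.45

ndvi  = (nir - red) / (nir + red)          # Normalized Difference Vegetation Index
gndvi = (nir - green) / (nir + green)      # Green NDVI
ndre  = (nir - rededge) / (nir + rededge)  # Normalized Difference Red Edge
lci   = (nir - rededge) / (nir + red)      # one common LCI formulation

print(f"NDVI={ndvi:.2f}  GNDVI={gndvi:.2f}  NDRE={ndre:.2f}  LCI={lci:.2f}")
```

Applied per pixel over co-registered UAV and satellite rasters, these index maps are what the study's Spearman/t-test/decision-tree comparison operates on.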

18 pages, 2350 KB  
Article
Deep Ensembles and Multisensor Data for Global LCZ Mapping: Insights from So2Sat LCZ42
by Loris Nanni and Sheryl Brahnam
Algorithms 2025, 18(10), 657; https://doi.org/10.3390/a18100657 - 17 Oct 2025
Viewed by 282
Abstract
Classifying multiband images acquired by advanced sensors, including those mounted on satellites, is a central task in remote sensing and environmental monitoring. These sensors generate high-dimensional outputs rich in spectral and spatial information, enabling detailed analyses of Earth’s surface. However, the complexity of such data presents substantial challenges to achieving both accuracy and efficiency. To address these challenges, we tested the ensemble learning framework based on ResNet50, MobileNetV2, and DenseNet201, each trained on distinct three-channel representations of the input to capture complementary features. Training is conducted on the LCZ42 dataset of 400,673 paired Sentinel-1 SAR and Sentinel-2 multispectral image patches annotated with Local Climate Zone (LCZ) labels. Experiments show that our best ensemble surpasses several recent state-of-the-art methods on the LCZ42 benchmark. Full article

18 pages, 112460 KB  
Article
Gradient Boosting for the Spectral Super-Resolution of Ocean Color Sensor Data
by Brittney Slocum, Jason Jolliff, Sherwin Ladner, Adam Lawson, Mark David Lewis and Sean McCarthy
Sensors 2025, 25(20), 6389; https://doi.org/10.3390/s25206389 - 16 Oct 2025
Viewed by 686
Abstract
We present a gradient boosting framework for reconstructing hyperspectral signatures in the visible spectrum (400–700 nm) of satellite-based ocean scenes from limited multispectral inputs. Hyperspectral data is composed of many, typically greater than 100, narrow wavelength bands across the electromagnetic spectrum. While hyperspectral data can offer reflectance values at every nanometer, multispectral sensors typically provide only 3 to 11 discrete bands, undersampling the visible color space. Our approach is applied to remote sensing reflectance (Rrs) measurements from a set of ocean color sensors, including Suomi-National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS), the Ocean and Land Colour Instrument (OLCI), Hyperspectral Imager for the Coastal Ocean (HICO), and NASA’s Plankton, Aerosol, Cloud, Ocean Ecosystem Ocean Color Instrument (PACE OCI), as well as in situ Rrs data from National Oceanic and Atmospheric Administration (NOAA) calibration and validation cruises. By leveraging these datasets, we demonstrate the feasibility of transforming low-spectral-resolution imagery into high-fidelity hyperspectral products. This capability is particularly valuable given the increasing availability of low-cost platforms equipped with RGB or multispectral imaging systems. Our results underscore the potential of hyperspectral enhancement for advancing ocean color monitoring and enabling broader access to high-resolution spectral data for scientific and environmental applications. Full article
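The mapping described above, from a handful of multispectral bands to a dense visible-spectrum curve, can be sketched as multi-output gradient-boosting regression. Everything below is synthetic (Gaussian-peak stand-in spectra, five arbitrary broad bands), so it illustrates the regression setup rather than the authors' trained Rrs models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(1)
wl = np.arange(400, 701, 10)  # 31 target "hyperspectral" bands (nm)

# Synthetic smooth spectra: a Gaussian peak with random position/width.
n = 300
centers = rng.uniform(450, 650, n)
widths = rng.uniform(30, 80, n)
amps = rng.uniform(0.002, 0.02, n)
Y = amps[:, None] * np.exp(
    -0.5 * ((wl[None, :] - centers[:, None]) / widths[:, None]) ** 2)

# Simulated multispectral inputs: means over 5 broad wavelength bands.
edges = [(400, 450), (450, 510), (510, 580), (580, 640), (640, 700)]
X = np.stack([Y[:, (wl >= lo) & (wl < hi)].mean(axis=1)
              for lo, hi in edges], axis=1)

# One gradient-boosting regressor per output wavelength.
model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=50))
model.fit(X[:250], Y[:250])
pred = model.predict(X[250:])
print("predicted hyperspectral shape:", pred.shape)
```

Fitting one booster per wavelength is the simplest way to get a 5-band → 31-band mapping; it ignores cross-wavelength smoothness, which a production system would likely enforce or learn.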

23 pages, 10835 KB  
Article
Evaluation of Post-Fire Treatments (Erosion Barriers) on Vegetation Recovery Using RPAS and Sentinel-2 Time-Series Imagery
by Fernando Pérez-Cabello, Carlos Baroja-Saenz, Raquel Montorio and Jorge Angás-Pajas
Remote Sens. 2025, 17(20), 3422; https://doi.org/10.3390/rs17203422 - 13 Oct 2025
Viewed by 382
Abstract
Post-fire soil and vegetation changes can intensify erosion and sediment yield by altering the factors controlling the runoff–infiltration balance. Erosion barriers (EBs) are widely used in hydrological and forest restoration to mitigate erosion, reduce sediment transport, and promote vegetation recovery. However, precise spatial assessments of their effectiveness remain scarce, requiring validation through operational methodologies. This study evaluates the impact of EB on post-fire vegetation recovery at two temporal and spatial scales: (1) Remotely Piloted Aircraft System (RPAS) imagery, acquired at high spatial resolution but limited to a single acquisition date coinciding with the field flight. These data were captured using a MicaSense RedEdge-MX multispectral camera and an RGB optical sensor (SODA), from which NDVI and vegetation height were derived through aerial photogrammetry and digital surface models (DSMs). (2) Sentinel-2 satellite imagery, offering coarser spatial resolution but enabling multi-temporal analysis, through NDVI time series spanning four consecutive years. The study was conducted in the area of the Luna Fire (northern Spain), which burned in July 2015. A paired sampling design compared upstream and downstream areas of burned wood stacks and control sites using NDVI values and vegetation height. Results showed slightly higher NDVI values (0.45) upstream of the EB (p < 0.05), while vegetation height was, on average, ~8 cm lower than in control sites (p > 0.05). Sentinel-2 analysis revealed significant differences in NDVI distributions between treatments (p < 0.05), although mean values were similar (~0.32), both showing positive trends over four years. This study offers indirect insight into the functioning and effectiveness of EB in post-fire recovery. The findings highlight the need for continued monitoring of treated areas to better understand environmental responses over time and to inform more effective land management strategies. 
Full article
(This article belongs to the Special Issue Remote Sensing for Risk Assessment, Monitoring and Recovery of Fires)

31 pages, 1983 KB  
Review
Integrating Remote Sensing and Autonomous Robotics in Precision Agriculture: Current Applications and Workflow Challenges
by Magdalena Łągiewska and Ewa Panek-Chwastyk
Agronomy 2025, 15(10), 2314; https://doi.org/10.3390/agronomy15102314 - 30 Sep 2025
Viewed by 1241
Abstract
Remote sensing technologies are increasingly integrated with autonomous robotic platforms to enhance data-driven decision-making in precision agriculture. Rather than replacing conventional platforms such as satellites or UAVs, autonomous ground robots complement them by enabling high-resolution, site-specific observations in real time, especially at the plant level. This review analyzes how remote sensing sensors—including multispectral, hyperspectral, LiDAR, and thermal—are deployed via robotic systems for specific agricultural tasks such as canopy mapping, weed identification, soil moisture monitoring, and precision spraying. Key benefits include higher spatial and temporal resolution, improved monitoring of under-canopy conditions, and enhanced task automation. However, the practical deployment of such systems is constrained by terrain complexity, power demands, and sensor calibration. The integration of artificial intelligence and IoT connectivity emerges as a critical enabler for responsive, scalable solutions. By focusing on how autonomous robots function as mobile sensor platforms, this article contributes to the understanding of their role within modern precision agriculture workflows. The findings support future development pathways aimed at increasing operational efficiency and sustainability across diverse crop systems. Full article

22 pages, 4736 KB  
Article
Radiometric Cross-Calibration and Validation of KOMPSAT-3/AEISS Using Sentinel-2A/MSI
by Jin-Hyeok Choi, Kyoung-Wook Jin, Dong-Hwan Cha, Kyung-Bae Choi, Yong-Han Jo, Kwang-Nyun Kim, Gwui-Bong Kang, Ho-Yeon Shin, Ji-Yun Lee, Eunyeong Kim, Hojong Chang and Yun Gon Lee
Remote Sens. 2025, 17(19), 3280; https://doi.org/10.3390/rs17193280 - 24 Sep 2025
Viewed by 542
Abstract
The successful launch of Korea Multipurpose Satellite-3/Advanced Earth Imaging Sensor System (KOMPSAT-3/AEISS) on 18 May 2012 allowed the Republic of Korea to meet the growing demand for high-resolution satellite imagery. However, like all satellite sensors, KOMPSAT-3/AEISS experienced temporal changes post-launch and thus requires ongoing evaluation and calibration. Although more than a decade has passed since launch, the KOMPSAT-3/AEISS mission and its multi-year data archive remain widely used. This study focused on the cross-calibration of KOMPSAT-3/AEISS with Sentinel-2A/Multispectral Instrument (MSI) by comparing the radiometric responses of the two satellite sensors under similar observation conditions, leveraging the linear relationship between Digital Numbers (DN) and top-of-atmosphere (TOA) radiance. Cross-calibration was performed using near-simultaneous satellite images of the same region, and the Spectral Band Adjustment Factor (SBAF) was calculated and applied to account for differences in spectral response functions (SRF). Additionally, Bidirectional Reflectance Distribution Function (BRDF) correction was applied using MODIS-based kernel models to minimize angular reflectance effects caused by differences in viewing and illumination geometry. This study aims to evaluate the radiometric consistency of KOMPSAT-3/AEISS relative to Sentinel-2A/MSI over Baotou scenes acquired in 2022–2023, derive band-specific calibration coefficients and compare them with prior results, and conduct a side-by-side comparison of cross-calibration and vicarious calibration. Furthermore, the cross-calibration yielded band-specific gains of 0.0196 (Blue), 0.0237 (Green), 0.0214 (Red), and 0.0136 (NIR). These findings offer valuable implications for Earth observation, environmental monitoring, and the planning and execution of future satellite missions. Full article
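The linear DN-to-radiance relationship that the cross-calibration exploits can be shown directly with the band-specific gains quoted in the abstract. A sketch follows; the radiance units are assumed (W·m⁻²·sr⁻¹·µm⁻¹ per DN), and any offset term in the actual calibration equation is omitted here.

```python
import numpy as np

# Band-specific gains reported in the abstract for KOMPSAT-3/AEISS.
gains = {"Blue": 0.0196, "Green": 0.0237, "Red": 0.0214, "NIR": 0.0136}

def dn_to_toa_radiance(dn, band):
    """Convert raw Digital Numbers to TOA radiance via L = gain * DN
    (offset assumed zero for this sketch)."""
    return gains[band] * np.asarray(dn, dtype=float)

radiance = dn_to_toa_radiance([1000, 2000, 4000], "Red")
print(radiance)
```

In the study itself, these gains are derived by regressing KOMPSAT-3 DNs against SBAF- and BRDF-corrected Sentinel-2A/MSI TOA radiance over the near-simultaneous Baotou scenes.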

29 pages, 19475 KB  
Article
Fine-Scale Grassland Classification Using UAV-Based Multi-Sensor Image Fusion and Deep Learning
by Zhongquan Cai, Changji Wen, Lun Bao, Hongyuan Ma, Zhuoran Yan, Jiaxuan Li, Xiaohong Gao and Lingxue Yu
Remote Sens. 2025, 17(18), 3190; https://doi.org/10.3390/rs17183190 - 15 Sep 2025
Cited by 1 | Viewed by 857
Abstract
Grassland classification via remote sensing is essential for ecosystem monitoring and precision management, yet conventional satellite-based approaches are fundamentally constrained by coarse spatial resolution. To overcome this limitation, we harness high-resolution UAV multi-sensor data, integrating multi-scale image fusion with deep learning to achieve fine-scale grassland classification that satellites cannot provide. First, four categories of UAV data (RGB, multispectral, thermal infrared, and LiDAR point cloud) were collected, and a fused image tensor of 10 channels (NDVI, VCI, CHM, etc.) was constructed through orthorectification and resampling. For feature-level fusion, four deep fusion networks were designed. Among them, the MultiScale Pyramid Fusion Network, which uses a pyramid pooling module, effectively integrated spectral and structural features and achieved the best scores on all six image fusion evaluation metrics, including information entropy (6.84), spatial frequency (15.56), and mean gradient (12.54). Training and validation datasets were then constructed from visual interpretation samples. Four backbone networks (UNet++, DeepLabV3+, PSPNet, and FPN) were each paired with one of three attention modules (SE, ECA, and CBAM), yielding 12 model combinations. The UNet++ network with the SE attention module achieved the best segmentation performance on the validation set: a mean Intersection over Union (mIoU) of 77.68%, overall accuracy (OA) of 86.98%, F1-score of 81.48%, and Kappa coefficient of 0.82. For Leymus chinensis and Puccinellia distans, producer's/user's accuracy (PA/UA) reached 86.46%/82.30% and 82.40%/77.68%, respectively. Whole-image prediction confirmed the model's ability to coherently delineate patch boundaries. In conclusion, this study provides a systematic approach to integrating multi-source UAV remote sensing data for intelligent grassland interpretation, offering technical support for grassland ecological monitoring and resource assessment. Full article
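The segmentation scores reported here (mIoU, OA, F1, Kappa) all derive from a per-class confusion matrix. A minimal sketch of how they are computed, using NumPy and a made-up 3-class matrix rather than the paper's data:

```python
import numpy as np

def segmentation_metrics(cm):
    """Compute OA, mIoU, macro-F1, and Cohen's kappa from a confusion matrix.

    cm[i, j] = number of pixels whose true class is i and predicted class is j.
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)                  # correctly classified pixels per class
    fp = cm.sum(axis=0) - tp          # predicted as the class but wrong
    fn = cm.sum(axis=1) - tp          # pixels of the class that were missed
    oa = tp.sum() / total             # overall accuracy
    iou = tp / (tp + fp + fn)         # per-class Intersection over Union
    f1 = 2 * tp / (2 * tp + fp + fn)  # per-class F1
    # Cohen's kappa: agreement beyond what class frequencies predict by chance
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, iou.mean(), f1.mean(), kappa

# Toy 3-class confusion matrix (rows = ground truth, cols = prediction)
cm = [[50, 5, 2],
      [4, 40, 6],
      [1, 3, 45]]
oa, miou, f1, kappa = segmentation_metrics(cm)
# For this toy matrix, OA = 135/156 ≈ 0.865 and mIoU ≈ 0.762
```

Macro-averaging the per-class IoU and F1, as above, is the usual convention behind reported mIoU and F1 figures, though papers sometimes weight classes by pixel frequency instead.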

22 pages, 15219 KB  
Article
Integrating UAS Remote Sensing and Edge Detection for Accurate Coal Stockpile Volume Estimation
by Sandeep Dhakal, Ashish Manandhar, Ajay Shah and Sami Khanal
Remote Sens. 2025, 17(18), 3136; https://doi.org/10.3390/rs17183136 - 10 Sep 2025
Viewed by 809
Abstract
Accurate stockpile volume estimation is essential for industries that manage bulk materials across various stages of production. Conventional ground-based methods such as walking wheels, total stations, Global Navigation Satellite Systems (GNSSs), and Terrestrial Laser Scanners (TLSs) are widely used but often involve significant safety risks, particularly when accessing hard-to-reach or hazardous areas. Unmanned Aerial Systems (UASs) provide a safer and more efficient alternative for surveying irregularly shaped stockpiles. This study evaluates UAS-based methods for estimating the volume of coal stockpiles at a storage facility near Cadiz, Ohio. Two sensor platforms were deployed: a Freefly Alta X quadcopter carrying a Real-Time Kinematic (RTK) Light Detection and Ranging (LiDAR) unit (an active sensor) and a WingtraOne UAS with a Post-Processed Kinematic (PPK) multispectral camera (a passive optical sensor). Three approaches were compared: (1) LiDAR; (2) Structure-from-Motion (SfM) photogrammetry with a Digital Surface Model (DSM) and Digital Terrain Model (DTM) (SfM–DTM); and (3) an SfM-derived DSM combined with a kriging-interpolated DTM (SfM–intDTM). An automated boundary detection workflow was developed, integrating slope thresholding, Near-Infrared (NIR) spectral filtering, and Canny edge detection. Volume estimates from SfM–DTM and SfM–intDTM closely matched the LiDAR-based reference, with Root Mean Square Error (RMSE) values of 147.51 m³ and 146.18 m³, respectively. The SfM–intDTM approach achieved a Mean Absolute Percentage Error (MAPE) of ~2%, indicating strong agreement with LiDAR and improved accuracy over prior studies. A sensitivity analysis further highlighted the role of spatial resolution: RMSE remained consistent (141–162 m³) and MAPE stayed below 2.5% for resolutions between 0.06 m and 5 m, but accuracy declined at coarser resolutions, with MAPE rising to 11.76% at 10 m. This underscores the need to balance resolution against study objectives, geographic extent, and computational cost when selecting elevation data for volume estimation. Overall, UAS-based SfM photogrammetry combined with interpolated DTMs and automated boundary extraction offers a scalable, cost-effective, and accurate approach for stockpile volume estimation. The methodology suits both high-precision monitoring of individual stockpiles and broader regional-scale assessments, and can be readily adapted to other domains such as quarrying, agricultural storage, and forestry operations. Full article
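At its core, each SfM-based volume in this kind of workflow is the cell-by-cell sum of DSM-minus-DTM heights over the detected stockpile footprint, scaled by the grid cell area. A toy sketch of that differencing step (NumPy, an illustrative grid only, not the study's data or full pipeline):

```python
import numpy as np

def stockpile_volume(dsm, dtm, cell_size, mask=None):
    """Estimate volume (m³) as the sum of positive DSM−DTM heights
    times the area of each square grid cell.

    `mask` (optional boolean array) limits the sum to cells inside
    the detected stockpile boundary.
    """
    heights = np.clip(np.asarray(dsm, float) - np.asarray(dtm, float), 0, None)
    if mask is not None:
        heights = np.where(mask, heights, 0.0)
    return heights.sum() * cell_size ** 2

# Toy 4x4 grid: a 2 m-high pile on flat terrain, sampled at 0.5 m cells
dsm = np.array([[0, 0, 0, 0],
                [0, 2, 2, 0],
                [0, 2, 2, 0],
                [0, 0, 0, 0]], float)
dtm = np.zeros_like(dsm)
vol = stockpile_volume(dsm, dtm, cell_size=0.5)
# 4 cells × 2 m height × 0.25 m² cell area = 2.0 m³
```

The sensitivity result above follows directly from this formulation: coarsening `cell_size` smooths the pile's edges in the DSM, so the summed heights, and hence the volume, drift from the true value.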

23 pages, 6105 KB  
Article
YUV Color Model-Based Adaptive Pansharpening with Lanczos Interpolation and Spectral Weights
by Shavkat Fazilov, Ozod Yusupov, Erali Eshonqulov, Khabiba Abdieva and Ziyodullo Malikov
Mathematics 2025, 13(17), 2868; https://doi.org/10.3390/math13172868 - 5 Sep 2025
Viewed by 526
Abstract
Pansharpening is an image fusion technique that combines a high-spatial-resolution panchromatic (PAN) image with lower-resolution multispectral (MS) images of differing spectral characteristics, typically acquired by satellite sensors. Despite the many pansharpening methods developed in recent years, preserving both spatial detail and spectral accuracy in the fused image remains a key challenge. To tackle it, we introduce a new approach that enhances the component substitution-based Adaptive IHS method by integrating the YUV color model with weighting coefficients derived from the multispectral data. In our approach, the conventional IHS color model is replaced with the YUV model to improve spectral consistency, and Lanczos interpolation is used to upscale the MS image to the spatial resolution of the PAN image. Each channel of the MS image is then fused using adaptive weights derived from the multispectral data, producing the final pansharpened image. In experiments on the PairMax and PanCollection datasets, the proposed method showed superior spectral and spatial performance compared to several existing pansharpening techniques. Full article
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)
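For intuition, plain YUV-based component substitution, without the paper's adaptive per-channel weights or its Lanczos upsampling step, amounts to swapping the luminance (Y) channel for a statistics-matched PAN band. A minimal sketch using the BT.601 RGB↔YUV convention, which may differ from the authors' exact transform:

```python
import numpy as np

# BT.601 RGB -> YUV matrix (one common convention; an assumption here,
# not necessarily the transform used in the paper)
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def yuv_pansharpen(ms_up, pan):
    """Component substitution in YUV space: replace the luminance (Y)
    of the upsampled MS image with a PAN band matched to Y's statistics.

    ms_up: (H, W, 3) multispectral image already upsampled to PAN size
           (e.g., via Lanczos interpolation, omitted here).
    pan:   (H, W) panchromatic band.
    """
    yuv = ms_up @ RGB2YUV.T
    y = yuv[..., 0]
    # Match PAN's mean/std to Y so the substitution preserves radiometry
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12) * y.std() + y.mean()
    yuv[..., 0] = pan_m
    return yuv @ YUV2RGB.T

# Toy example: 4x4 random "upsampled MS" and PAN arrays
rng = np.random.default_rng(0)
ms_up = rng.random((4, 4, 3))
pan = rng.random((4, 4))
fused = yuv_pansharpen(ms_up, pan)
```

A useful sanity check on any such scheme: substituting Y with itself should reproduce the input MS image, confirming the forward and inverse color transforms round-trip losslessly.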
