Journal Description
Remote Sensing is an international, peer-reviewed, open access journal about the science and application of remote sensing technology, and is published semimonthly online by MDPI. The Remote Sensing Society of Japan (RSSJ) and the Japan Society of Photogrammetry and Remote Sensing (JSPRS) are affiliated with Remote Sensing, and their members receive a discount on the article processing charge.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), Ei Compendex, PubAg, GeoRef, Astrophysics Data System, Inspec, dblp, and other databases.
- Journal Rank: JCR - Q1 (Geosciences, Multidisciplinary) / CiteScore - Q1 (General Earth and Planetary Sciences)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 24.9 days after submission; acceptance to publication is undertaken in 2.5 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Companion journal: Geomatics.
- Journal Cluster of Geospatial and Earth Sciences: Remote Sensing, Geosciences, Quaternary, Earth, Geographies, Geomatics and Fossil Studies.
Impact Factor: 4.1 (2024); 5-Year Impact Factor: 4.8 (2024)
Latest Articles
Research on Rice Field Identification Methods in Mountainous Regions
Remote Sens. 2025, 17(19), 3356; https://doi.org/10.3390/rs17193356 - 2 Oct 2025
Abstract
Rice is one of the most important staple crops in China, and the rapid and accurate extraction of rice planting areas plays a crucial role in agricultural management and food security assessment. However, existing rice field identification methods face significant challenges in mountainous regions due to severe cloud contamination, insufficient utilization of multi-dimensional features, and limited classification accuracy. This study presents a novel rice field identification method based on Graph Convolutional Networks (GCNs) that effectively integrates multi-source remote sensing data tailored to complex mountainous terrain. A coarse-to-fine cloud removal strategy was developed by fusing synthetic aperture radar (SAR) imagery with temporally adjacent optical imagery, achieving high cloud removal accuracy and thereby providing reliable, clear optical data for subsequent rice mapping. A comprehensive multi-feature library comprising spectral, texture, polarization, and terrain attributes was constructed and optimized via a stepwise selection process, from which 19 key features were retained to enhance classification performance. The proposed method achieved an overall accuracy of 98.3% for rice field identification in Huoshan County of the Dabie Mountains and 96.8% consistency with statistical yearbook data. Ablation experiments demonstrated that incorporating terrain features substantially improved identification accuracy under complex topographic conditions. Comparative evaluations against support vector machine (SVM), random forest (RF), and U-Net models confirmed the superiority of the proposed method in terms of accuracy, local performance, terrain adaptability, training sample requirements, and computational cost, demonstrating its effectiveness for high-precision rice field distribution mapping in mountainous environments.
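The abstract does not give implementation details; as a rough illustration of the graph-convolution step that a GCN-based classifier of this kind relies on, the sketch below implements the standard symmetric-normalized propagation rule in NumPy. The parcel nodes, random adjacency, and the 19-feature input dimension (matching the feature count quoted above) are placeholder assumptions, not the paper's actual pipeline.
```python
# Minimal sketch of one GCN propagation layer (Kipf & Welling style).
# Illustrative only -- not the paper's actual architecture or data.
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # linear transform + ReLU

# Toy example: 5 graph nodes (e.g., image parcels), 19 input features, 8 hidden units.
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) > 0.6).astype(float)
A = np.maximum(A, A.T)                        # make the adjacency symmetric
H = rng.normal(size=(5, 19))                  # node feature matrix
W = rng.normal(size=(19, 8))                  # learnable weights (random here)
print(gcn_layer(A, H, W).shape)               # -> (5, 8)
```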
(This article belongs to the Special Issue Precision Agriculture and Crop Monitoring Based on Remote Sensing Methods)
Open Access Article
Assessing the Sensitivity of Snow Depth Retrieval Algorithms to Inter-Sensor Brightness Temperature Differences
by
Guangjin Liu, Lingmei Jiang, Huizhen Cui, Jinmei Pan, Jianwei Yang and Min Wu
Remote Sens. 2025, 17(19), 3355; https://doi.org/10.3390/rs17193355 - 2 Oct 2025
Abstract
Passive microwave remote sensing provides indispensable observations for constructing long-term snow depth records, which are critical for climatology, hydrology, and operational applications. Nevertheless, despite decades of snow depth monitoring, systematic evaluations of how inter-sensor brightness temperature differences (TBDs) propagate into retrieval uncertainties are still lacking. In this study, TBDs between DMSP-F18/SSMIS, FY-3D/MWRI, and AMSR2 sensors were quantified, and the sensitivity of seven snow depth retrieval algorithms to these discrepancies was systematically assessed. The results indicate that TBDs between SSMIS and AMSR2 are larger than those between MWRI and AMSR2, likely reflecting variations in sensor specifications such as frequency, observation angle, and overpass time. In terms of algorithm sensitivity, SPD, WESTDC, FY-3B, and FY-3D demonstrate lower sensitivity across sensors, with standard deviations of snow depth differences generally below 2 cm. In contrast, the Foster algorithm exhibits pronounced sensitivity to TBDs, with standard deviations exceeding 11 cm and snow depth differences reaching over 20 cm in heavily forested regions (forest fraction > 90%). This study provides guidance for SWE virtual constellation design and algorithm selection, supporting long-term, seamless, and consistent snow depth retrievals.
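To make the propagation of an inter-sensor brightness temperature difference into snow depth concrete, the sketch below uses a classic Chang-type spectral-difference retrieval, SD ≈ 1.59 cm/K × (TB19H − TB37H). The coefficient, channel choice, and bias values are illustrative assumptions, not the seven algorithms evaluated in the paper.
```python
# Illustrative sketch: how an inter-sensor brightness temperature difference (TBD)
# propagates into snow depth under a simple Chang-type retrieval.
# SD [cm] ~= 1.59 * (TB19H - TB37H); coefficient and channels are illustrative only.
import numpy as np

def chang_snow_depth(tb19h, tb37h, coeff=1.59):
    return coeff * (tb19h - tb37h)  # snow depth in cm

# Reference sensor observations in kelvin (made-up values)
tb19h_ref = np.array([252.0, 248.5, 245.0])
tb37h_ref = np.array([238.0, 230.0, 222.0])

# A second sensor sees the same scene with a +1.5 K bias at 19H and -0.5 K at 37H
tb19h_alt = tb19h_ref + 1.5
tb37h_alt = tb37h_ref - 0.5

sd_ref = chang_snow_depth(tb19h_ref, tb37h_ref)
sd_alt = chang_snow_depth(tb19h_alt, tb37h_alt)
print("snow depth difference (cm):", sd_alt - sd_ref)  # 1.59 * 2.0 K ~= 3.2 cm everywhere
```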
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
Open Access Article
Spatiotemporal Analysis of Vineyard Dynamics: UAS-Based Monitoring at the Individual Vine Scale
by
Stefan Ruess, Gernot Paulus and Stefan Lang
Remote Sens. 2025, 17(19), 3354; https://doi.org/10.3390/rs17193354 - 2 Oct 2025
Abstract
The rapid and reliable acquisition of canopy-related metrics is essential for improving decision support in viticultural management, particularly when monitoring individual vines for targeted interventions. This study presents a spatially explicit workflow that integrates Uncrewed Aerial System (UAS) imagery, 3D point-cloud analysis, and Object-Based Image Analysis (OBIA) to detect and monitor individual grapevines throughout the growing season. Vines are identified directly from 3D point clouds without the need for prior training data or predefined row structures, achieving a mean Euclidean distance of 10.7 cm to the reference points. The OBIA framework segments vine vegetation based on spectral and geometric features without requiring pre-clipping or manual masking. All non-vine elements—including soil, grass, and infrastructure—are automatically excluded, and detailed canopy masks are created for each plant. Vegetation indices are computed exclusively from vine canopy objects, ensuring that soil signals and internal canopy gaps do not bias the results. This enables accurate per-vine assessment of vigour. NDRE values were calculated at three phenological stages—flowering, veraison, and harvest—and analyzed using Local Indicators of Spatial Association (LISA) to detect spatial clusters and outliers. In contrast to value-based clustering methods, LISA accounts for spatial continuity and neighborhood effects, allowing the detection of stable low-vigour zones, expanding high-vigour clusters, and early identification of isolated stressed vines. A strong correlation (R2 = 0.73) between per-vine NDRE values and actual yield demonstrates that NDRE-derived vigour reliably reflects vine productivity. The method provides a transferable, data-driven framework for site-specific vineyard management, enabling timely interventions at the individual plant level before stress propagates spatially.
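As a rough illustration of the per-vine statistics described above, the sketch below computes NDRE from NIR and red-edge reflectance and a basic local Moran's I, the statistic underlying LISA, for a handful of vines. The reflectance values and the binary neighbourhood matrix are toy assumptions, not the study's workflow.
```python
# Sketch: per-vine NDRE and a basic local Moran's I (the statistic behind LISA).
# Reflectance values and the neighbour matrix are toy assumptions.
import numpy as np

def ndre(nir, red_edge):
    return (nir - red_edge) / (nir + red_edge)

def local_morans_i(x, W):
    """Local Moran's I_i = z_i * sum_j w_ij z_j / m2, with row-standardised W."""
    z = x - x.mean()
    m2 = (z ** 2).sum() / len(x)
    W_row = W / W.sum(axis=1, keepdims=True)   # row-standardise the weights
    return z * (W_row @ z) / m2

nir      = np.array([0.42, 0.45, 0.40, 0.30, 0.29])
red_edge = np.array([0.20, 0.21, 0.20, 0.22, 0.23])
vigour = ndre(nir, red_edge)

# Binary contiguity: each vine is a neighbour of the adjacent vines in the row
W = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

print("NDRE per vine   :", np.round(vigour, 3))
print("Local Moran's I :", np.round(local_morans_i(vigour, W), 3))
```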
(This article belongs to the Special Issue Remote and Proximal Sensing for Precision Agriculture and Viticulture (2nd Edition))
Open Access Article
Comparative Evaluation of SNO and Double Difference Calibration Methods for FY-3D MERSI TIR Bands Using MODIS/Aqua as Reference
by
Shufeng An, Fuzhong Weng, Xiuzhen Han and Chengzhi Ye
Remote Sens. 2025, 17(19), 3353; https://doi.org/10.3390/rs17193353 - 2 Oct 2025
Abstract
Radiometric consistency across satellite platforms is fundamental to producing high-quality Climate Data Records (CDRs). Because different cross-calibration methods have distinct advantages and limitations, comparative evaluation is necessary to ensure record accuracy. This study presents a comparative assessment of two widely applied calibration approaches—Simultaneous Nadir Overpass (SNO) and Double Difference (DD)—for the thermal infrared (TIR) bands of FY-3D MERSI. MODIS/Aqua serves as the reference sensor, while radiative transfer simulations driven by ERA5 inputs are generated with the Advanced Radiative Transfer Modeling System (ARMS) to support the analysis. The results show that SNO performs effectively when matchup samples are sufficiently large and globally representative but is less applicable under sparse temporal sampling or orbital drift. In contrast, the DD method consistently achieves higher calibration accuracy for MERSI Bands 24 and 25 under clear-sky conditions. It reduces mean biases from ~−0.5 K to within ±0.1 K and lowers RMSE from ~0.6 K to 0.3–0.4 K during 2021–2022. Under cloudy conditions, DD tends to overcorrect because coefficients derived from clear-sky simulations are not directly transferable to cloud-covered scenes, whereas SNO remains more stable though less precise. Overall, the results suggest that the two methods exhibit complementary strengths, with DD being preferable for high-accuracy calibration in clear-sky scenarios and SNO offering greater stability across variable atmospheric conditions. Future work will validate both methods under varied surface and atmospheric conditions and extend their use to additional sensors and spectral bands.
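The double-difference idea lends itself to a compact numeric illustration: each sensor is first differenced against its own radiative transfer simulation (observation minus background, O−B), and the DD is the difference of those biases. The sketch below is a toy calculation with made-up brightness temperatures, not ARMS/ERA5 output.
```python
# Toy sketch of the Double Difference (DD) concept for cross-calibration:
# DD = (O - B)_target - (O - B)_reference, with O = observed and B = simulated
# brightness temperature. Numbers are made up for illustration.
import numpy as np

obs_mersi = np.array([271.4, 265.2, 280.1])   # target sensor observations (K)
sim_mersi = np.array([271.9, 265.8, 280.5])   # simulations for the target sensor (K)
obs_modis = np.array([271.6, 265.5, 280.2])   # reference sensor observations (K)
sim_modis = np.array([271.7, 265.6, 280.3])   # simulations for the reference sensor (K)

bias_mersi = obs_mersi - sim_mersi            # O - B for the target sensor
bias_modis = obs_modis - sim_modis            # O - B for the reference sensor
dd = bias_mersi - bias_modis                  # double difference per match-up

print("target O-B   :", bias_mersi)
print("reference O-B:", bias_modis)
print("DD (K)       :", dd, "mean:", round(dd.mean(), 3))
```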
Open Access Article
Generating the 500 m Global Satellite Vegetation Productivity Phenology Product from 2001 to 2020
by
Boyu Ren, Yunfeng Cao, Jiaxin Tian, Shunlin Liang and Meng Yu
Remote Sens. 2025, 17(19), 3352; https://doi.org/10.3390/rs17193352 - 2 Oct 2025
Abstract
Accurate monitoring of vegetation phenology is vital for understanding climate change impacts on terrestrial ecosystems. While global vegetation greenness phenology (VGP) products are widely available, vegetation productivity phenology (VPP), which better reflects ecosystems’ carbon dynamics, remains largely inaccessible. This study introduces a novel global 500 m VPP dataset (GLASS VPP) from 2001 to 2020, derived from the GLASS gross primary productivity (GPP) product. Validation against three ground-based datasets—Fluxnet 2015, PhenoCam V2.0, and PEP725—demonstrated the dataset’s superior accuracy. Compared to the widely used MCD12Q2 VGP product, GLASS VPP reduced RMSE and bias by 35% and 63%, respectively, when validated against Fluxnet data. It also showed stronger correlations than MCD12Q2 when compared with PhenoCam (195 sites) and PEP725 (99 sites) observations, and it captured spatial and altitudinal phenology patterns more effectively. Overall, GLASS VPP exhibits a higher spatial integrity, stronger ecological interpretability, and improved consistency with ground observations, making it a valuable dataset for phenology modeling, carbon cycle research, and ecological forecasting under climate change.
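As a simplified illustration of productivity-phenology extraction, the sketch below pulls start- and end-of-season dates from a smooth annual GPP curve by finding where it crosses 50% of its seasonal amplitude. The synthetic curve and the 50% threshold are assumptions for illustration, not the GLASS VPP algorithm.
```python
# Simplified sketch: start/end of growing season (SOS/EOS) from a GPP time series
# using a 50%-of-amplitude threshold. Synthetic data; not the GLASS VPP algorithm.
import numpy as np

doy = np.arange(1, 366, 8)                              # 8-day composites
gpp = 2.0 + 6.0 * np.exp(-((doy - 200) / 45.0) ** 2)    # synthetic seasonal GPP curve

amplitude = gpp.max() - gpp.min()
threshold = gpp.min() + 0.5 * amplitude                 # 50% amplitude level

above = gpp >= threshold
sos = doy[np.argmax(above)]                             # first composite above the threshold
eos = doy[len(above) - 1 - np.argmax(above[::-1])]      # last composite above the threshold

print(f"SOS ~ DOY {sos}, EOS ~ DOY {eos}, peak GPP at DOY {doy[np.argmax(gpp)]}")
```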
Open Access Article
ILF-BDSNet: A Compressed Network for SAR-to-Optical Image Translation Based on Intermediate-Layer Features and Bio-Inspired Dynamic Search
by
Yingying Kong and Cheng Xu
Remote Sens. 2025, 17(19), 3351; https://doi.org/10.3390/rs17193351 - 1 Oct 2025
Abstract
Synthetic aperture radar (SAR) exhibits all-day and all-weather capabilities, granting it significant application in remote sensing. However, interpreting SAR images requires extensive expertise, making SAR-to-optical remote sensing image translation a crucial research direction. While conditional generative adversarial networks (CGANs) have demonstrated exceptional performance in image translation tasks, their massive number of parameters poses substantial challenges. Therefore, this paper proposes ILF-BDSNet, a compressed network for SAR-to-optical image translation. Specifically, first, standard convolutions in the feature-transformation module of the teacher network are replaced with depthwise separable convolutions to construct the student network, and a dual-resolution collaborative discriminator based on PatchGAN is proposed. Next, knowledge distillation based on intermediate-layer features and channel pruning via weight sharing are designed to train the student network. Then, the bio-inspired dynamic search of channel configuration (BDSCC) algorithm is proposed to efficiently select the optimal subnet. Meanwhile, the pixel-semantic dual-domain alignment loss function is designed. The feature-matching loss within this function establishes an alignment mechanism based on intermediate-layer features from the discriminator. Extensive experiments demonstrate the superiority of ILF-BDSNet, which significantly reduces the number of parameters and computational complexity while still generating high-quality optical images, providing an efficient solution for SAR image translation in resource-constrained environments.
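The parameter saving from swapping standard convolutions for depthwise separable ones is easy to quantify: a k×k standard convolution costs k·k·C_in·C_out weights, while the depthwise-plus-pointwise factorization costs k·k·C_in + C_in·C_out. The sketch below is generic arithmetic, not the paper's actual layer configuration.
```python
# Generic parameter-count comparison: standard vs. depthwise separable convolution.
# Not the paper's layer configuration -- just the arithmetic behind the compression.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in        # one k x k filter per input channel
    pointwise = c_in * c_out        # 1 x 1 convolution mixing channels
    return depthwise + pointwise

k, c_in, c_out = 3, 256, 256
std = standard_conv_params(k, c_in, c_out)
sep = depthwise_separable_params(k, c_in, c_out)
print(f"standard: {std:,}  separable: {sep:,}  reduction: {std / sep:.1f}x")
# 3x3, 256->256 channels: 589,824 vs 67,840 weights, roughly 8.7x fewer parameters
```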
Open Access Article
PyGEE-ST-MEDALUS: AI Spatiotemporal Framework Integrating MODIS and Sentinel-1/-2 Data for Desertification Risk Assessment in Northeastern Algeria
by
Zakaria Khaldi, Jingnong Weng, Franz Pablo Antezana Lopez, Guanhua Zhou, Ilyes Ghedjatti and Aamir Ali
Remote Sens. 2025, 17(19), 3350; https://doi.org/10.3390/rs17193350 - 1 Oct 2025
Abstract
Desertification threatens the sustainability of dryland ecosystems, yet many existing monitoring frameworks rely on static maps, coarse spatial resolution, or lack temporal forecasting capacity. To address these limitations, this study introduces PyGEE-ST-MEDALUS, a novel spatiotemporal framework combining the full MEDALUS desertification model with deep learning (CNN, LSTM, DeepMLP) and machine learning (RF, XGBoost, SVM) techniques on the Google Earth Engine (GEE) platform. Applied across Tebessa Province, Algeria (2001–2028), the framework integrates MODIS and Sentinel-1/-2 data to compute four core indices—climatic, soil, vegetation, and land management quality—and create the Desertification Sensitivity Index (DSI). Unlike prior studies that focus on static or spatial-only MEDALUS implementations, PyGEE-ST-MEDALUS introduces scalable, time-series forecasting, yielding superior predictive performance (R2 ≈ 0.96; RMSE < 0.03). Over 71% of the region was classified as having high to very high sensitivity, driven by declining vegetation and thermal stress. Comparative analysis confirms that this study advances the state-of-the-art by integrating interpretable AI, near-real-time satellite analytics, and full MEDALUS indicators into one cloud-based pipeline. These contributions make PyGEE-ST-MEDALUS a transferable, efficient decision-support tool for identifying degradation hotspots, supporting early warning systems, and enabling evidence-based land management in dryland regions.
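In the MEDALUS framework the sensitivity index is conventionally computed as the geometric mean of the quality indices; the sketch below combines the four indices named in the abstract (climate, soil, vegetation, land management) in that way on a tiny toy raster. The formula, value ranges, and threshold are stated assumptions for illustration, not the calibrated PyGEE-ST-MEDALUS pipeline.
```python
# Sketch of a MEDALUS-style sensitivity index as the geometric mean of
# climate, soil, vegetation and management quality indices (each in ~[1, 2]).
# Illustrative only; not the paper's calibrated implementation.
import numpy as np

rng = np.random.default_rng(42)
shape = (4, 4)                                   # tiny toy raster
cqi = rng.uniform(1.0, 2.0, shape)               # climate quality index
sqi = rng.uniform(1.0, 2.0, shape)               # soil quality index
vqi = rng.uniform(1.0, 2.0, shape)               # vegetation quality index
mqi = rng.uniform(1.0, 2.0, shape)               # land-management quality index

dsi = (cqi * sqi * vqi * mqi) ** 0.25            # geometric mean of the four indices
print(np.round(dsi, 2))
print("share of cells above 1.5 (higher = more sensitive):", float((dsi > 1.5).mean()))
```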
(This article belongs to the Special Issue Integrating Remote Sensing, Machine Learning, and Process-Based Modelling for Monitoring Environmental and Agricultural Landscapes Under Climate Change)
Open Access Article
LiteSAM: Lightweight and Robust Feature Matching for Satellite and Aerial Imagery
by
Boya Wang, Shuo Wang, Yibin Han, Linfeng Xu and Dong Ye
Remote Sens. 2025, 17(19), 3349; https://doi.org/10.3390/rs17193349 - 1 Oct 2025
Abstract
We present a (Light)weight (S)atellite–(A)erial feature (M)atching framework (LiteSAM) for robust UAV absolute visual localization (AVL) in GPS-denied environments. Existing satellite–aerial matching methods struggle with large appearance variations, texture-scarce regions, and limited efficiency for real-time UAV applications. LiteSAM integrates three key components to address these issues. First, efficient multi-scale feature extraction optimizes representation, reducing inference latency for edge devices. Second, a Token Aggregation–Interaction Transformer (TAIFormer) with a convolutional token mixer (CTM) models inter- and intra-image correlations, enabling robust global–local feature fusion. Third, a MinGRU-based dynamic subpixel refinement module adaptively learns spatial offsets, enhancing subpixel-level matching accuracy and cross-scenario generalization. The experiments show that LiteSAM achieves competitive performance across multiple datasets. On UAV-VisLoc, LiteSAM attains an RMSE@30 of 17.86 m, outperforming state-of-the-art semi-dense methods such as EfficientLoFTR. Its optimized variant, LiteSAM (opt., without dual softmax), delivers inference times of 61.98 ms on standard GPUs and 497.49 ms on NVIDIA Jetson AGX Orin, which are 22.9% and 19.8% faster than EfficientLoFTR (opt.), respectively. With 6.31M parameters, which is 2.4× fewer than EfficientLoFTR’s 15.05M, LiteSAM proves to be suitable for edge deployment. Extensive evaluations on natural image matching and downstream vision tasks confirm its superior accuracy and efficiency for general feature matching.
Open Access Article
SS3L: Self-Supervised Spectral–Spatial Subspace Learning for Hyperspectral Image Denoising
by
Yinhu Wu, Dongyang Liu and Junping Zhang
Remote Sens. 2025, 17(19), 3348; https://doi.org/10.3390/rs17193348 - 1 Oct 2025
Abstract
Hyperspectral imaging (HSI) systems often suffer from complex noise degradation during the imaging process, significantly impacting downstream applications. Deep learning-based methods, though effective, rely on impractical paired training data, while traditional model-based methods require manually tuned hyperparameters and lack generalization. To address these issues, we propose SS3L (Self-Supervised Spectral-Spatial Subspace Learning), a novel HSI denoising framework that requires neither paired data nor manual tuning. Specifically, we introduce a self-supervised spectral–spatial paradigm that learns noisy features from noisy data, rather than paired training data, based on spatial geometric symmetry and spectral local consistency constraints. To avoid manual hyperparameter tuning, we propose an adaptive rank subspace representation and a loss function designed based on the collaborative integration of spectral and spatial losses via noise-aware spectral-spatial weighting, guided by the estimated noise intensity. These components jointly enable a dynamic trade-off between detail preservation and noise reduction under varying noise levels. The proposed SS3L embeds noise-adaptive subspace representations into the dynamic spectral–spatial hybrid loss-constrained network, enabling cross-sensor denoising through prior-informed self-supervision. Experimental results demonstrate that SS3L effectively removes noise while preserving both structural fidelity and spectral accuracy under diverse noise conditions.
Open Access Article
S2M-Net: A Novel Lightweight Network for Accurate Small Ship Recognition in SAR Images
by
Guobing Wang, Rui Zhang, Junye He, Yuxin Tang, Yue Wang, Yonghuan He, Xunqiang Gong and Jiang Ye
Remote Sens. 2025, 17(19), 3347; https://doi.org/10.3390/rs17193347 - 1 Oct 2025
Abstract
Synthetic aperture radar (SAR) provides all-weather and all-day imaging capabilities and can penetrate clouds and fog, playing an important role in ship detection. However, small ships usually contain weak feature information in such images and are easily affected by noise, which makes detection challenging. In practical deployment, limited computing resources require lightweight models to improve real-time performance, yet achieving a lightweight design while maintaining high detection accuracy for small targets remains a key challenge in object detection. To address this issue, we propose a novel lightweight network for accurate small-ship recognition in SAR images, named S2M-Net. Specifically, the Space-to-Depth Convolution (SPD-Conv) module is introduced in the feature extraction stage to optimize convolutional structures, reducing computation and parameters while retaining rich feature information. The Mixed Local-Channel Attention (MLCA) module integrates local and channel attention mechanisms to enhance adaptation to complex backgrounds and improve small-target detection accuracy. The Multi-Scale Dilated Attention (MSDA) module employs multi-scale dilated convolutions to fuse features from different receptive fields, strengthening detection across ships of various sizes. The experimental results show that S2M-Net achieved mAP50 values of 0.989, 0.955, and 0.883 on the SSDD, HRSID, and SARDet-100k datasets, respectively. Compared with the baseline model, the F1 score increased by 1.13%, 2.71%, and 2.12%. Moreover, S2M-Net outperformed other state-of-the-art algorithms in FPS across all datasets, achieving a well-balanced trade-off between accuracy and efficiency. This work provides an effective solution for accurate ship detection in SAR images.
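The space-to-depth operation at the heart of SPD-Conv rearranges each 2×2 spatial block into the channel dimension, halving resolution without discarding pixels. A minimal NumPy version is sketched below; the array layout and block size are illustrative assumptions rather than the paper's exact module.
```python
# Minimal space-to-depth rearrangement (the "SPD" part of SPD-Conv):
# each scale x scale spatial block is moved into the channel dimension,
# so an (H, W, C) map becomes (H/scale, W/scale, C*scale^2) with no pixels lost.
import numpy as np

def space_to_depth(x, scale=2):
    h, w, c = x.shape
    assert h % scale == 0 and w % scale == 0
    x = x.reshape(h // scale, scale, w // scale, scale, c)
    x = x.transpose(0, 2, 1, 3, 4)               # group the scale x scale blocks
    return x.reshape(h // scale, w // scale, c * scale * scale)

feat = np.random.rand(8, 8, 16)                  # toy feature map
out = space_to_depth(feat, scale=2)
print(feat.shape, "->", out.shape)               # (8, 8, 16) -> (4, 4, 64)
```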
Open Access Review
Seeing the Trees from Above: A Survey on Real and Synthetic Agroforestry Datasets for Remote Sensing Applications
by
Babak Chehreh, Alexandra Moutinho and Carlos Viegas
Remote Sens. 2025, 17(19), 3346; https://doi.org/10.3390/rs17193346 - 1 Oct 2025
Abstract
Trees are vital to both environmental health and human well-being. They purify the air we breathe, support biodiversity by providing habitats for wildlife, prevent soil erosion to maintain fertile land, and supply wood for construction and fuel, as well as a multitude of essential products such as fruits, to name a few. Therefore, it is important to monitor and preserve them to protect the natural environment for future generations and ensure the sustainability of our planet. Remote sensing is a rapidly advancing and powerful tool that enables us to monitor and manage trees and forests efficiently and at large scale. Statistical methods, machine learning, and more recently deep learning are essential for analyzing the vast amounts of data collected, making data the fundamental component of these methodologies. The advancement of these methods goes hand in hand with the availability of sample data; therefore, a review of available high-resolution aerial datasets of trees can help pave the way for further development of analytical methods in this field. This study aims to shed light on publicly available datasets by conducting a systematic search and filtering process and an in-depth analysis of the resulting datasets, including their alignment with the FAIR (findable, accessible, interoperable, and reusable) principles and the latest trends concerning applications for such datasets.
(This article belongs to the Special Issue Advances in Deep Learning Approaches: UAV Data Analysis)
Open Access Article
Robust Satellite Techniques (RSTs) for SO2 Detection with MSG-SEVIRI Data: A Case Study of the 2021 Tajogaite Eruption
by
Rui Mota, Carolina Filizzola, Alfredo Falconieri, Francesco Marchese, Nicola Pergola, Valerio Tramutoli, Artur Gil and José Pacheco
Remote Sens. 2025, 17(19), 3345; https://doi.org/10.3390/rs17193345 - 1 Oct 2025
Abstract
Volcanic gas emissions, particularly sulfur dioxide (SO2), are crucial for volcano monitoring. SO2 has a significant impact on air quality, the climate, and human health, making it a critical component of volcano monitoring programs. Additionally, SO2 can be used to assess the state of a volcano and the progression of an individual eruption and can serve as a proxy for volcanic ash. The Tajogaite La Palma (Spain) eruption in 2021 emitted large amounts of SO2 over 85 days, with the plume reaching Central Europe. In this study, we present the results achieved by monitoring Tajogaite SO2 emissions from 19 September to 31 October 2021 at different acquisition times (i.e., 10:00 UTC, 12:00 UTC, 14:00 UTC, and 16:00 UTC). An optimized configuration of the Robust Satellite Technique (RST) approach, tailored to volcanic SO2 detection and exploiting the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) channel at an 8.7 µm wavelength, was used. The results, assessed by means of a performance evaluation compared with masks drawn from the EUMETSAT Volcanic Ash RGB, show that the RST product identified volcanic SO2 plumes on approximately 81% of eruption days, with a very low false-positive rate (2% and 0.3% for the mid/low and high-confidence-level RST products, respectively), a weighted precision of ~79%, and an F1-score of ~54%. In addition, the comparison with the Tropospheric Monitoring Instrument (TROPOMI) S5P Product Algorithm Laboratory (S5P-PAL) L3 grid Daily SO2 CBR product shows that RST-SEVIRI detections were mostly associated with SO2 plumes having a column density greater than 0.4 Dobson Units (DU). This study gives rise to some interesting scenarios regarding the near-real-time monitoring of volcanic SO2 by means of the Flexible Combined Imager (FCI) aboard the Meteosat Third-Generation (MTG) satellites, offering improved instrumental features compared with the SEVIRI.
Open Access Article
69-Year Geodetic Mass Balance of Nevado Coropuna (Peru), the World’s Largest Tropical Icefield, from 1955 to 2024
by
Julian Llanto, Ramón Pellitero, Jose Úbeda, Alan D.J. Atkinson-Gordo and José Pasapera
Remote Sens. 2025, 17(19), 3344; https://doi.org/10.3390/rs17193344 - 1 Oct 2025
Abstract
The first comprehensive mass balance estimation for the world's largest tropical icefield is presented. Geodetic mass balance was calculated using photogrammetry from aerial and satellite images spanning from 1955 to 2024. The results meet expected quality standards using some new satellite sources, such as the Peruvian PeruSAT-1, although the quality of airborne imagery is consistently lower than that of satellite sources. The Nevado Coropuna icefield remained almost stable between 1955 and 1986, with −0.04 m dh yr−1. Since then, it has undergone a sustained and accelerated negative mass balance, reaching a maximum annual dh yr−1 of −0.73 ± 0.19 in the 2020–2023 timeframe. The glacier loss is not equal across the entire ice mass but is more acute in the northern and northeastern outlet tongues. Debris-covered ice and rock glaciers show a much weaker negative mass balance signal. The impact of ENSO events is not evident in the overall ice evolution, although their long-term relevance is acknowledged. Overall, the negative response of Nevado Coropuna to global warming (−0.36 ± 0.12 m.w.e. yr−1 for the 2013 to 2024 period) is less pronounced than that of other Peruvian glaciers, but more severe than that reported for the nearby Dry Andes of Chile and Argentina.
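The conversion from an elevation-change rate to a geodetic mass balance in metres water equivalent is simple arithmetic: dh/dt multiplied by an ice-to-water density ratio, commonly taken as about 850/1000 kg m−3. The sketch below illustrates that conversion with a hypothetical thinning value and an assumed density factor, not the authors' exact figures.
```python
# Sketch: converting a surface-elevation change (dh) into a geodetic mass balance
# in metres water equivalent per year. The density conversion factor
# (850 kg m^-3 ice/firn vs 1000 kg m^-3 water) and the dh value are common
# assumptions used here for illustration, not the paper's exact figures.
DENSITY_CONVERSION = 850.0 / 1000.0     # assumed ice-to-water density ratio

def geodetic_mass_balance(dh, years, density_conversion=DENSITY_CONVERSION):
    """dh: total elevation change (m) over the period; returns m w.e. yr^-1."""
    return dh / years * density_conversion

# e.g. a hypothetical -4.6 m of thinning over an 11-year window
print(round(geodetic_mass_balance(-4.6, 11), 2), "m w.e. yr^-1")   # ~ -0.36
```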
(This article belongs to the Special Issue Earth Observation of Glacier and Snow Cover Mapping in Cold Regions)
Open Access Article
From Trends to Drivers: Vegetation Degradation and Land-Use Change in Babil and Al-Qadisiyah, Iraq (2000–2023)
by
Nawar Al-Tameemi, Zhang Xuexia, Fahad Shahzad, Kaleem Mehmood, Xiao Linying and Jinxing Zhou
Remote Sens. 2025, 17(19), 3343; https://doi.org/10.3390/rs17193343 - 1 Oct 2025
Abstract
Land degradation in Iraq’s Mesopotamian plain threatens food security and rural livelihoods, yet the relative roles of climatic water deficits versus anthropogenic pressures remain poorly attributed in space. We test the hypothesis that multi-timescale climatic water deficits (SPEI-03/-06/-12) exert a stronger effect on vegetation degradation risk than anthropogenic pressures, conditional on hydrological connectivity and irrigation. Using Babil and Al-Qadisiyah (2000–2023) as a case, we implement a four-part pipeline: (i) Fractional Vegetation Cover with Mann–Kendall/Sen’s slope to quantify greening/browning trends; (ii) LandTrendr to extract disturbance timing and magnitude; (iii) annual LULC maps from a Random Forest classifier to resolve transitions; and (iv) an XGBoost classifier to map degradation risk and attribute climate vs. anthropogenic influence via drop-group permutation (ΔAUC), grouped SHAP shares, and leave-group-out ablation, all under spatial block cross-validation. Driver attribution shows mid-term and short-term drought (SPEI-06, SPEI-03) as the strongest predictors, and conditional permutation yields a larger average AUC loss for the climate block than for the anthropogenic block, while grouped SHAP shares are comparable between the two, and ablation suggests a neutral to weak anthropogenic edge. The XGBoost model attains AUC = 0.884 (test) and maps 9.7% of the area as high risk (>0.70), concentrated away from perennial water bodies. Over 2000–2023, LULC change indicates CA +515 km2, HO +129 km2, UL +70 km2, BL −697 km2, WB −16.7 km2. Trend analysis shows recovery across 51.5% of the landscape (+29.6% dec−1 median) and severe decline over 2.5% (−22.0% dec−1). The integrated design couples trend mapping with driver attribution, clarifying how compounded climatic stress and intensive land use shape contemporary desertification risk and providing spatial priorities for restoration and adaptive water management.
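For the trend component of the pipeline, the sketch below shows a plain implementation of the Theil–Sen slope and the Mann–Kendall S statistic on a short synthetic fractional-vegetation-cover series. The synthetic series and the omission of a significance test are illustrative simplifications, not the study's exact configuration.
```python
# Plain Theil-Sen slope and Mann-Kendall S statistic on a synthetic
# fractional-vegetation-cover (FVC) series. Illustrative only.
import numpy as np
from itertools import combinations

def theil_sen_slope(t, y):
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i, j in combinations(range(len(t)), 2) if t[j] != t[i]]
    return float(np.median(slopes))

def mann_kendall_s(y):
    return int(sum(np.sign(y[j] - y[i])
                   for i, j in combinations(range(len(y)), 2)))

years = np.arange(2000, 2024)
rng = np.random.default_rng(1)
fvc = 0.45 - 0.004 * (years - 2000) + rng.normal(0, 0.01, len(years))   # browning trend

print("Sen's slope (FVC / yr):", round(theil_sen_slope(years, fvc), 4))
print("Mann-Kendall S        :", mann_kendall_s(fvc))   # S < 0 indicates a decreasing trend
```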
(This article belongs to the Special Issue Advances in Forest Degradation and Deforestation Monitoring with AI and Multi-Source Remote Sensing Data)
Open Access Article
Real-Time Terrain Mapping with Responsibility-Based GMM and Adaptive Azimuth Scan Command
by
Hyunju Lee and Dongwon Jung
Remote Sens. 2025, 17(19), 3342; https://doi.org/10.3390/rs17193342 - 1 Oct 2025
Abstract
This paper presents a real-time terrain mapping method for aircraft navigation, combining probabilistic terrain modeling with adaptive azimuth scan command adjustment. The method refines a preloaded DTED in real time using radar scan data, enabling aircraft to update and utilize terrain elevation information during flight. The terrain is represented using a Gaussian Mixture Model (GMM), where radar scan data are evaluated based on their posterior responsibilities. A conditional nested GMM refinement is selectively applied in structurally ambiguous regions to capture multi-modal elevation patterns. The azimuth scan command is adaptively adjusted based on posterior responsibilities by increasing the step size in well-mapped regions and decreasing it in areas with low responsibility. This lightweight and adaptive strategy supports real-time operation with low computational cost. Simulations across diverse terrain types demonstrate accurate grid updates and adaptive scan control, with the proposed method achieving a maximum error of 29 m, compared with 43 m for grid-based averaging and 81 m for K-means clustering. As the total number of updates is comparable to that of existing methods, the proposed approach offers an advantage for real-time applications with enhanced grid accuracy.
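The "responsibility" driving both the terrain update and the scan-step adaptation is the standard GMM posterior, γ_k(x) = π_k N(x | μ_k, σ_k²) / Σ_j π_j N(x | μ_j, σ_j²). The sketch below evaluates it for a one-dimensional mixture over elevation with made-up component parameters, not the paper's radar-driven terrain model.
```python
# Sketch: posterior responsibilities of a 1-D Gaussian mixture over elevation.
# gamma_k(x) = pi_k N(x | mu_k, sigma_k^2) / sum_j pi_j N(x | mu_j, sigma_j^2)
# Component parameters are made up; not the paper's radar-updated terrain model.
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def responsibilities(x, weights, means, sigmas):
    # weighted likelihood of each elevation sample under each mixture component
    lik = np.array([w * gaussian_pdf(x, m, s)
                    for w, m, s in zip(weights, means, sigmas)])   # shape (K, N)
    return lik / lik.sum(axis=0, keepdims=True)                    # normalise over components

elevations = np.array([512.0, 540.0, 705.0, 698.0])   # radar-scanned samples (m)
weights = [0.6, 0.4]
means   = [520.0, 700.0]                               # two terrain modes (m)
sigmas  = [15.0, 20.0]

gamma = responsibilities(elevations, weights, means, sigmas)
print(np.round(gamma, 3))      # a low maximum responsibility would flag an ambiguous region
```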
Open Access Article
MSSA: A Multi-Scale Semantic-Aware Method for Remote Sensing Image–Text Retrieval
by
Yun Liao, Zongxiao Hu, Fangwei Jin, Junhui Liu, Nan Chen, Jiayi Lv and Qing Duan
Remote Sens. 2025, 17(19), 3341; https://doi.org/10.3390/rs17193341 - 30 Sep 2025
Abstract
In recent years, the convenience and potential for information extraction offered by Remote Sensing Image–Text Retrieval (RSITR) have made it a significant focus of research in remote sensing (RS) knowledge services. Current mainstream methods for RSITR generally align fused image features at multiple scales with textual features, primarily focusing on the local information of RS images while neglecting potential semantic information. This results in insufficient alignment in the cross-modal semantic space. To overcome this limitation, we propose a Multi-Scale Semantic-Aware Remote Sensing Image–Text Retrieval method (MSSA). This method introduces Progressive Spatial Channel Joint Attention (PSCJA), which enhances the expressive capability of multi-scale image features through Window-Region-Global Progressive Attention (WRGPA) and Segmented Channel Attention (SCA). Additionally, the Image-Guided Text Attention (IGTA) mechanism dynamically adjusts textual attention weights based on visual context. Furthermore, the Cross-Modal Semantic Extraction Module (CMSE) incorporates learnable semantic tokens at each scale, enabling attention interaction between multi-scale features of different modalities and capturing hierarchical semantic associations. This multi-scale semantic-guided retrieval method ensures cross-modal semantic consistency, significantly improving the accuracy of cross-modal retrieval in RS. MSSA demonstrates superior retrieval accuracy in experiments across three baseline datasets, achieving a new state-of-the-art performance.
(This article belongs to the Section Remote Sensing Image Processing)
Open Access Article
JOTGLNet: A Guided Learning Network with Joint Offset Tracking for Multiscale Deformation Monitoring
by
Jun Ni, Siyuan Bao, Xichao Liu, Sen Du, Dapeng Tao and Yibing Zhan
Remote Sens. 2025, 17(19), 3340; https://doi.org/10.3390/rs17193340 - 30 Sep 2025
Abstract
Ground deformation monitoring in mining areas is essential for hazard prevention and environmental protection. Although interferometric synthetic aperture radar (InSAR) provides detailed phase information for accurate deformation measurement, its performance is often compromised in regions experiencing rapid subsidence and strong noise, where phase aliasing and coherence loss lead to significant inaccuracies. To overcome these limitations, this paper proposes JOTGLNet, a guided learning network with joint offset tracking, for multiscale deformation monitoring. This method integrates pixel offset tracking (OT), which robustly captures large-gradient displacements, with interferometric phase data that offers high sensitivity in coherent regions. A dual-path deep learning architecture was designed where the interferometric phase serves as the primary branch and OT features act as complementary information, enhancing the network’s ability to handle varying deformation rates and coherence conditions. Additionally, a novel shape perception loss combining morphological similarity measurement and error learning was introduced to improve geometric fidelity and reduce unbalanced errors across deformation regions. The model was trained on 4000 simulated samples reflecting diverse real-world scenarios and validated on 1100 test samples with a maximum deformation up to 12.6 m, achieving an average prediction error of less than 0.15 m—outperforming state-of-the-art methods whose errors exceeded 0.19 m. Additionally, experiments on five real monitoring datasets further confirmed the superiority and consistency of the proposed approach.
(This article belongs to the Special Issue Target Recognition and Detection Based on High Resolution Radar Images)
Open Access Article
Enhancing Wildfire Monitoring with SDGSAT-1: A Performance Analysis
by
Xinkun Zhu, Guojiang Zhang, Bo Xiang, Jiangxia Ye, Lei Kong, Wenlong Yang, Mingshan Wu, Song Yang, Wenquan Wang, Weili Kou, Qiuhua Wang and Zhichao Huang
Remote Sens. 2025, 17(19), 3339; https://doi.org/10.3390/rs17193339 - 30 Sep 2025
Abstract
Advancements in remote sensing technology have enabled the acquisition of high spatial and radiometric resolution imagery, offering abundant and reliable data sources for forest fire monitoring. To explore the capability of the Sustainable Development Science Satellite 1 (SDGSAT-1) in wildfire monitoring, a systematic and comprehensive study was conducted on smoke detection during the wildfire early warning phase, fire point identification during the fire, and burned area delineation after the wildfire. Smoke detection with SDGSAT-1 was analyzed using machine learning, and the potential of SDGSAT-1 for discriminating burned areas was assessed with the Mid-Infrared Burn Index (MIRBI) and the Normalized Burn Ratio 2 (NBR2). In addition, the fixed-threshold method and a two-channel fixed-threshold plus contextual approach were used, with Sentinel-2 as a comparison, to demonstrate the performance of SDGSAT-1 in fire point identification. The results show that the average accuracy of SDGSAT-1 burned area recognition is 90.21%, and a clear fire boundary can be obtained. The average smoke detection precision is 81.72%, the fire point accuracy is 97.40%, and the minimum identified fire area is 0.0009 km2, which implies that SDGSAT-1 offers significant advantages in the early detection and identification of small-scale fires, a capability that is important for fire emergency response and disposal. Its fire point detection performance is superior to that of Sentinel-2 and Landsat 8. SDGSAT-1 demonstrates great potential in monitoring the entire process of wildfire occurrence, development, and evolution. With its higher-resolution satellite imagery, it has become an important data source for monitoring in the field of remote sensing.
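The two burned-area indices named in the abstract have simple band forms: NBR2 = (SWIR1 − SWIR2)/(SWIR1 + SWIR2) and, in its usual literature formulation, MIRBI = 10·SWIR2 − 9.8·SWIR1 + 2, with SWIR1 near 1.6 µm and SWIR2 near 2.2 µm reflectance. The sketch below evaluates both on toy reflectance values; the band choices and MIRBI coefficients follow the common definitions rather than anything stated in this abstract.
```python
# Burned-area indices used in the study, in their common literature forms:
#   NBR2  = (SWIR1 - SWIR2) / (SWIR1 + SWIR2)
#   MIRBI = 10 * SWIR2 - 9.8 * SWIR1 + 2
# SWIR1 ~ 1.6 um and SWIR2 ~ 2.2 um surface reflectance; toy values below.
import numpy as np

def nbr2(swir1, swir2):
    return (swir1 - swir2) / (swir1 + swir2)

def mirbi(swir1, swir2):
    return 10.0 * swir2 - 9.8 * swir1 + 2.0

swir1 = np.array([0.28, 0.30, 0.22])   # unburned, unburned, burned (illustrative)
swir2 = np.array([0.18, 0.20, 0.21])

print("NBR2 :", np.round(nbr2(swir1, swir2), 3))    # drops for the burned pixel
print("MIRBI:", np.round(mirbi(swir1, swir2), 3))   # rises for the burned pixel
```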
(This article belongs to the Special Issue Multi-Scale Remote Sensing for Wetland Landscape Change Monitoring and Ecological Resilience)
Open Access Article
Unveiling Forest Density Dynamics in Saihanba Forest Farm by Integrating Airborne LiDAR and Landsat Satellites
by
Nan Wang, Donghui Xie, Lin Jin, Yi Li, Xihan Mu and Guangjian Yan
Remote Sens. 2025, 17(19), 3338; https://doi.org/10.3390/rs17193338 - 29 Sep 2025
Abstract
Forest density is a key parameter in forestry research, and its variation can significantly impact ecosystems. Saihanba, as a focal site for afforestation and restoration, offers an ideal case for monitoring these dynamics. In this study, we compared three machine learning algorithms—Random Forest, Support Vector Regression, and XGBoost—using Landsat surface reflectance data together with the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI), and reference tree densities derived from LiDAR individual tree segmentation. The best-performing algorithm, XGBoost (R2 = 0.65, RMSE = 174 trees ha−1), was then applied to generate a long-term forest density dataset for Saihanba at five-year intervals, covering the period from 1988 to 2023. Results revealed distinct differences among tree species, with larch achieving the highest accuracy (R2 = 0.65, RMSE = 161 trees ha−1), whereas spruce had larger prediction errors (RMSE = 201 trees ha−1) despite a relatively high R2 (0.70). Incorporating 30 m slope data revealed that moderate slopes (5–30°) favored faster forest recovery. From 1988 to 2023, average forest density rose from 521 to 628 trees ha−1—a 20.6% increase—demonstrating the effectiveness of restoration and providing a transferable framework for large-scale ecological monitoring.
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)
Open Access Article
The Role of Collecting Data on Various Site Conditions Through Satellite Remote Sensing Technology and Field Surveys in Predicting the Landslide Travel Distance: A Case Study of the 2022 Petrópolis Disaster in Brazil
by
Thiago Dutra dos Santos and Taro Uchida
Remote Sens. 2025, 17(19), 3337; https://doi.org/10.3390/rs17193337 - 29 Sep 2025
Abstract
Landslide runout distance is governed not only by collapse magnitude but also by site-specific geoenvironmental conditions. While remote sensing techniques have advanced landslide susceptibility mapping, their application to runout modeling remains limited. This study examined the role of collecting data on various site conditions through remote sensing and field survey datasets in predicting the landslide travel distance from the 2022 disaster in Petrópolis, Rio de Janeiro. A total of 218 multivariate linear regression models were developed using morphometric, remote sensing, and field survey variables collected across collapse, transport, and deposition zones. Results show that predictive accuracy was limited when based solely on landslide scale (R2 = 0.06–0.10) but improved substantially with the inclusion of site condition data across collapse, transport, and deposition zones (R2 = 0.49–0.51). Additionally, model performance was strongly influenced by runout path typology, with channelized flows producing the most stable and accurate predictions (R2 = 0.73–0.90), while obstructed and open-slope paths performed worse (R2 = 0.39–0.61). These findings demonstrate that empirical models integrating multizonal site-condition data and runout path typology offer a scalable, reproducible framework for landslide hazard mapping in data-scarce, complex mountainous urban environments.
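A minimal version of the kind of multivariate linear regression underlying those 218 models is sketched below using ordinary least squares and a plain R² computation. The predictor set (a collapse-scale term plus two illustrative site-condition variables) and the synthetic data are assumptions for illustration, not the study's variables.
```python
# Minimal multivariate linear regression of runout (travel) distance on a
# landslide-scale term plus site-condition predictors, with a plain R^2.
# Synthetic data and predictor names are illustrative, not the study's variables.
import numpy as np

rng = np.random.default_rng(7)
n = 60
volume      = rng.uniform(2.0, 5.0, n)      # log10 collapse volume (illustrative)
slope_deg   = rng.uniform(15.0, 45.0, n)    # mean slope along the path (degrees)
channelized = rng.integers(0, 2, n)         # 1 if the runout path is channelized

# Synthetic "true" relationship plus noise, just to have something to fit
travel = 80 * volume + 3 * slope_deg + 150 * channelized + rng.normal(0, 40, n)

X = np.column_stack([np.ones(n), volume, slope_deg, channelized])   # add intercept
coef, *_ = np.linalg.lstsq(X, travel, rcond=None)                   # OLS fit
pred = X @ coef

ss_res = np.sum((travel - pred) ** 2)
ss_tot = np.sum((travel - travel.mean()) ** 2)
print("coefficients:", np.round(coef, 2))
print("R^2:", round(1 - ss_res / ss_tot, 3))
```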
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

Topics
Topic in
Entropy, Environments, Land, Remote Sensing
Bioterraformation: Emergent Function from Systemic Eco-Engineering
Topic Editors: Matteo Convertino, Jie Li; Deadline: 30 November 2025
Topic in
Energies, Aerospace, Applied Sciences, Remote Sensing, Sensors
GNSS Measurement Technique in Aerial Navigation
Topic Editors: Kamil Krasuski, Damian Wierzbicki; Deadline: 31 December 2025
Topic in
Geosciences, Land, Remote Sensing, Sustainability
Disaster and Environment Monitoring Based on Multisource Remote Sensing Images
Topic Editors: Bing Guo, Yuefeng Lu, Yingqiang Song, Rui Zhang, Huihui Zhao; Deadline: 1 January 2026
Topic in
Agriculture, Remote Sensing, Sustainability, Water, Hydrology, Limnological Review, Earth
Water Management in the Age of Climate Change
Topic Editors: Yun Yang, Chong Chen, Hao Sun; Deadline: 31 January 2026

Special Issues
Special Issue in
Remote Sensing
Synthetic Aperture Radar (SAR) Remote Sensing for Civil and Environmental Applications
Guest Editors: Saeid Homayouni, Hossein Aghababaei, Alireza Tabatabaeenejad, Benyamin Hosseiny; Deadline: 10 October 2025
Special Issue in
Remote Sensing
Remote Sensing of Land Surface Temperature: Retrieval, Modeling, and Applications
Guest Editors: Zihan Liu, Jin Ma, Kangning Li, Lluís Pérez-Planells; Deadline: 13 October 2025
Special Issue in
Remote Sensing
Big Earth Data and Sustainable Development Goals (SDGs) Multi-Objectives Comprehensive Evaluation
Guest Editors: Lanwei Zhu, Lei Wang, Chunlin Huang, Min Chen; Deadline: 15 October 2025
Special Issue in
Remote Sensing
Photogrammetry Meets AI
Guest Editors: Fabio Remondino, Rongjun Qin; Deadline: 15 October 2025
Topical Collections
Topical Collection in
Remote Sensing
Google Earth Engine Applications
Collection Editors: Lalit Kumar, Onisimo Mutanga
Topical Collection in
Remote Sensing
Feature Papers for Section Environmental Remote Sensing
Collection Editor: Magaly Koch
Topical Collection in
Remote Sensing
Discovering A More Diverse Remote Sensing Discipline
Collection Editors: Karen Joyce, Meghan Halabisky, Cristina Gómez, Michelle Kalamandeen, Gopika Suresh, Kate C. Fickas
Topical Collection in
Remote Sensing
The VIIRS Collection: Calibration, Validation, and Application
Collection Editors: Xi Shao, Xiaoxiong Xiong, Changyong Cao