Search Results (3,128)

Search Parameters:
Keywords = optical remote sensing

24 pages, 1651 KB  
Article
FALB: A Frequency-Aware Lightweight Bottleneck with Learnable Wavelet Fusion and Contextual Attention for Enhanced Ship Classification in Remote Sensing
by Liang Huang, Yiping Song, Qiao Sun, He Yang, Lin Chen and Xianfeng Zhang
Remote Sens. 2026, 18(8), 1186; https://doi.org/10.3390/rs18081186 - 15 Apr 2026
Abstract
Ship classification in optical remote sensing requires balancing discriminative representation and model efficiency. Standard convolutional neural network (CNN) bottlenecks rely on local spatial kernels and may emphasize high-frequency texture cues, while stronger backbones increase parameter cost. We propose a frequency-aware lightweight bottleneck (FALB) that couples enhanced wavelet convolution (WTsConv) and contextual anchor attention (CAA) in a cascaded design. WTsConv adopts Sym4 wavelets and a learnable symmetric fusion weight between spatial and wavelet-reconstructed features to improve frequency-aware feature mixing. CAA is then applied to the refined features for contextual aggregation. Integrated into ResNet-50 bottlenecks, FALB is evaluated on FGSCM-52 and achieves 97.88% top-1 accuracy with 17.78 M parameters, compared with 96.92% and 25.56 M for the ResNet-50 baseline, surpassing ResNet-50 by 0.96 percentage points and outperforming the compared general-purpose baselines while reducing parameters by 30.4%. Under this experimental setting, FALB improves the observed accuracy–parameter trade-off for remote sensing ship classification.
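The learnable symmetric fusion the abstract describes can be read as a convex combination of the two branches. The sketch below is only one plausible parameterization (a single sigmoid-bounded scalar weight and the branch names are assumptions, not the authors' exact design):

```python
import numpy as np

def fused(spatial: np.ndarray, wavelet: np.ndarray, alpha: float) -> np.ndarray:
    """Symmetric convex fusion of a spatial feature map with its
    wavelet-reconstructed counterpart. `alpha` is a single learnable
    scalar; squashing it through a sigmoid keeps the two branch weights
    positive and summing to one. Sketch only: the scalar weight and the
    sigmoid bounding are assumptions, not the paper's exact design."""
    w = 1.0 / (1.0 + np.exp(-alpha))      # sigmoid(alpha), in (0, 1)
    return w * spatial + (1.0 - w) * wavelet

# alpha = 0 gives an even 50/50 blend of the two branches
blended = fused(np.ones((2, 2)), np.zeros((2, 2)), alpha=0.0)
```

During training, `alpha` would be updated by backpropagation like any other parameter, letting the network decide how much frequency-domain information to mix in.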
(This article belongs to the Special Issue Ship Imaging, Detection and Recognition for High-Resolution SAR)

20 pages, 2175 KB  
Review
A Bibliometric Analysis of Machine and Deep Learning in Remote Sensing for Precision Agriculture
by Dorijan Radočaj, Mladen Jurišić, Ivan Plaščak and Lucija Galić
Agronomy 2026, 16(8), 807; https://doi.org/10.3390/agronomy16080807 - 14 Apr 2026
Abstract
This review provides a comprehensive bibliometric analysis of the literature on the integration of remote sensing data and machine learning or deep learning algorithms in precision agriculture. The analysis covers 1056 publications indexed in the Web of Science Core Collection and identifies the temporal patterns of research, the most frequently used algorithms, the prominent remote sensing technologies, and the geographical distribution of research output. Increased research output during the period of 2013–2025 is attributed to the availability of high-level computing, satellites, and UAV imagery. Earlier machine learning studies primarily used the Random Forest and Support Vector Machine algorithms, whereas in the past few years deep learning, and especially Convolutional Neural Networks, has become dominant. The most widely used remote sensing data sources are imagery from UAVs and the Sentinel satellite missions. The evaluation revealed that most of the geographical research activity was centered in the United States and China, but research activity is increasing in most other developed countries. Research in Africa and South America remains particularly underdeveloped. Considering the rapid development of the field, fusion of optical and radar satellite imagery, UAV imagery, and weather and soil datasets is expected to further improve the representation of agricultural systems.
19 pages, 1448 KB  
Article
Integrating Multispectral and SAR Satellite Data for Alpine Wetland Mapping and Spatio-Temporal Change Analysis in the Qinghai Lake Basin
by Qianle Zhuang, Zeyu Tang, Chenggang Li, Meiting Fang and Xiaolu Ling
Remote Sens. 2026, 18(8), 1173; https://doi.org/10.3390/rs18081173 - 14 Apr 2026
Abstract
Alpine wetlands in the Qinghai Lake Basin, located on the northeastern Qinghai–Tibetan Plateau, are ecologically important but highly vulnerable to climate change and anthropogenic disturbance. Traditional field-based surveys are labor-intensive and spatially constrained, underscoring the need for automated remote sensing approaches for large-scale wetland mapping. In this study, an object-based image analysis (OBIA) framework was developed by integrating Sentinel-2 optical imagery with Sentinel-1 synthetic aperture radar (SAR) data to classify two representative plateau wetland types: marsh meadows and inland tidal flats. Seven categories of features were evaluated, including spectral features, vegetation indices, water indices, red-edge features, topographic variables, radar backscatter, and geometric-textural metrics. The Separability and Thresholds (SEaTH) algorithm was employed for feature selection and optimization prior to classification using a Random Forest model. The results indicate that incorporating geometric and textural features significantly improved classification performance, achieving an overall accuracy (OA) of 82.53% and a Kappa coefficient of 0.74. Moreover, the SEaTH-based feature optimization scheme yielded the best performance, with an OA of 86.24% and a Kappa coefficient of 0.79. Compared with the full feature set, this approach improved producer’s accuracy by 3.96–6.11% and increased overall accuracy by 1.48%. The proposed framework provides an effective and computationally efficient approach for mapping ecologically fragile alpine wetlands and offers valuable support for wetland conservation in the Qinghai Lake Basin.
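The SEaTH step ranks candidate features by how well they separate a class pair before deriving thresholds. A minimal sketch of the kind of separability measure involved, the Jeffries–Matusita distance under a Gaussian class assumption (the sample values below are illustrative, not the paper's data):

```python
import numpy as np

def jeffries_matusita(a: np.ndarray, b: np.ndarray) -> float:
    """Jeffries-Matusita separability (range 0..2) between two 1-D
    feature samples, assuming each class is Gaussian-distributed."""
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var(ddof=1), b.var(ddof=1)
    # Bhattacharyya distance between two univariate Gaussians
    bh = 0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2.0) \
        + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2)))
    return float(2.0 * (1.0 - np.exp(-bh)))

rng = np.random.default_rng(0)
marsh = rng.normal(0.60, 0.05, 500)   # made-up index values, class 1
flats = rng.normal(0.20, 0.08, 500)   # made-up index values, class 2
jm = jeffries_matusita(marsh, flats)  # near 2 -> highly separable
```

Features scoring close to 2 for the class pair would be retained; a threshold between the two class distributions then drives the rule-based split.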
27 pages, 49307 KB  
Article
Enhancing Soil Salinity Mapping by Integrating PolSAR Scattering Components and Spectral Indices in a 2D Feature Space Using RADARSAT-2 and Landsat-8 Imagery
by Bilali Aizezi, Ilyas Nurmemet, Aihepa Aihaiti, Yu Qin, Meimei Zhang, Ru Feng, Yixin Zhang and Yang Xiang
Remote Sens. 2026, 18(8), 1153; https://doi.org/10.3390/rs18081153 - 13 Apr 2026
Abstract
Soil salinization in arid oases constrains soil functioning and crop production, making spatially explicit monitoring important for land management. Multispectral optical remote sensing enables large-area salinity assessment, but in oasis environments such as the Keriya Oasis, its performance can be limited by spectral confusion between salt crusts and bright bare soils, sparse vegetation cover, and strong surface heterogeneity. Synthetic aperture radar (SAR), by contrast, provides all-weather imaging capability and sensitivity to surface scattering and dielectric-related conditions, but its salinity interpretation is often affected by surface complexity and environmental coupling. To address these limitations, a spectral index–polarimetric scattering integration framework that combines RADARSAT-2 and Landsat-8 OLI features within a simple two-dimensional (2D) feature space was developed. Two groups of models were constructed from variables selected through a data-driven screening process: (1) polarimetric feature space models based on combinations such as VanZyl volume scattering with Pauli odd-bounce or Touzi alpha scattering; and (2) multi-source feature space models that integrate the optimal polarimetric component with key spectral indicators such as SI4 and MSAVI. Among all tested models, VanZyl_vol-SI4 achieved the best performance (fitting: R2 = 0.749, RMSE = 5.798 dS m⁻¹, MAE = 4.086 dS m⁻¹; validation: R2 = 0.716, RMSE = 5.566 dS m⁻¹, MAE = 4.528 dS m⁻¹). The results indicate that integrating PolSAR scattering information with optical indices can improve salinity mapping relative to single-source feature spaces in the Keriya Oasis. The proposed 2D framework provides a concise way to compare different feature combinations and supports regional identification of salt-affected soils.
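A generic way to score one candidate 2D feature space, with the R2/RMSE/MAE metrics quoted above, is a least-squares plane fit of salinity on the two features. The feature names and data below are placeholders, not the authors' variables or results:

```python
import numpy as np

def fit_eval_plane(x1, x2, y):
    """Fit y ~ b0 + b1*x1 + b2*x2 by least squares and return the
    coefficients plus the fit metrics used to compare feature pairs
    (R2, RMSE, MAE). Illustrative sketch only."""
    X = np.column_stack([np.ones_like(x1), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    mae = float(np.mean(np.abs(resid)))
    return beta, r2, rmse, mae

rng = np.random.default_rng(1)
x1 = rng.uniform(-10.0, 0.0, 200)   # e.g. a volume-scattering component (dB)
x2 = rng.uniform(0.0, 1.0, 200)     # e.g. a spectral salinity index
y = 4.0 - 1.5 * x1 + 8.0 * x2 + rng.normal(0.0, 1.0, 200)  # synthetic EC
beta, r2, rmse, mae = fit_eval_plane(x1, x2, y)
```

Ranking each candidate pair by validation R2 (and RMSE/MAE) is then enough to identify the best-performing 2D combination.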
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

22 pages, 3734 KB  
Article
CLEAR: A Cognitive LLM-Empowered Adaptive Restoration Framework for Robust Ship Detection in Complex Maritime Scenarios
by Min Li, Xinyu Zhao and Yunfeng Wan
Remote Sens. 2026, 18(8), 1142; https://doi.org/10.3390/rs18081142 - 12 Apr 2026
Abstract
Ship detection in remote sensing imagery serves as a cornerstone of modern maritime surveillance. Existing visible-light detectors suffer from severe performance degradation in adverse environmental conditions (e.g., fog, low light) due to domain gaps. Traditional global enhancement methods often lack adaptability, leading to "negative transfer", where artifacts are introduced into clean images or the enhancement is mismatched with the degradation type. To address these challenges, we propose the CLEAR (Cognitive Large Language Model (LLM)-Empowered Adaptive Restoration) framework. Inspired by the dual-process theory of cognition, we introduce a dynamic switching mechanism between fast perception and deep reasoning. Rather than processing all images indiscriminately, CLEAR utilizes a hybrid gating mechanism to efficiently filter nominal samples, triggering a Vision–Language Model (VLM) only when necessary to diagnose the degradation and dispatch targeted restoration operators. Extensive experiments on the constructed HRSC-Robust dataset demonstrate that CLEAR achieves an overall mean Average Precision (mAP) at 0.5 Intersection-over-Union (IoU) of 86.92%, outperforming the baseline by 7.74%. Notably, it establishes a "fail-safe" mechanism for optical degradations: by adaptively resolving fog and low light, it effectively mitigates detector blindness, exemplified by a doubled Recall rate (52.52%) in dark scenarios. Furthermore, a confidence-based sparse triggering strategy ensures operational efficiency, maintaining a throughput of ~11.8 FPS in nominal conditions. This work validates the potential of VLMs for interpretable and robust remote sensing tasks.
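The confidence-gated dual path described above can be sketched as a simple dispatcher. The threshold value and path names here are illustrative assumptions, not the paper's implementation:

```python
def route(nominal_confidence: float, tau: float = 0.8) -> str:
    """Dual-process dispatch: images the gate judges nominal (clean)
    with high confidence take the fast detector path; uncertain ones
    trigger the costly VLM stage that diagnoses the degradation and
    selects a restoration operator before detection.
    `tau` is an illustrative threshold, not a value from the paper."""
    if nominal_confidence >= tau:
        return "fast_detector"           # System-1: detect directly
    return "vlm_diagnose_and_restore"    # System-2: diagnose, restore, detect
```

Because most frames in nominal conditions clear the gate, the expensive VLM call stays sparse, which is what preserves the reported ~11.8 FPS throughput.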

26 pages, 5676 KB  
Article
Light-Induced Changes in RGB Reflectance Parameters in Wheat and Pea Leaves in the Minute Range
by Yuriy Zolin, Alyona Popova, Lyubov Yudina, Leonid Andryushaev, Vladimir Sukhov and Ekaterina Sukhova
Plants 2026, 15(8), 1184; https://doi.org/10.3390/plants15081184 - 12 Apr 2026
Abstract
Parameters of reflected light, measured in narrow or broad spectral bands, are widely analyzed for remote and proximal sensing of plant responses to stressors. Specifically, parameters of reflectance in red (R), green (G), and blue (B) spectral bands measured using simple color images can be sensitive to characteristics of plants. The conventional view is that RGB reflectance primarily reveals long-term changes in plants (days, weeks, etc.). In this study, we investigated light-induced changes in RGB reflectance in wheat (Triticum aestivum L.) and pea (Pisum sativum L.) leaves. Illumination increased this reflectance for about 10 min in wheat and about 15–20 min in pea; these changes relaxed after light intensity was decreased. The changes in RGB reflectance were strongly related to the effective quantum yield of photosystem II and non-photochemical quenching of chlorophyll fluorescence under high light intensity; these relations were absent under low light intensity. We hypothesized that changes in both RGB reflectance and photosynthetic parameters were related to the light-induced changes in chloroplast localization. A simple mathematical model of optical properties and photosynthesis in leaves was developed; results of the model-based analysis supported the proposed hypothesis. Experimental analysis of the dynamics of light transmittance additionally supported this hypothesis. Our results thus show that RGB imaging can be sensitive to fast changes in plants.
(This article belongs to the Special Issue Plant Sensors in Precision Agriculture)

24 pages, 15558 KB  
Article
A Mutual-Structure Weighted Sub-Pixel Multimodal Optical Remote Sensing Image Matching Method
by Tao Huang, Hongbo Pan, Nanxi Zhou, Siyuan Zou and Shun Zhou
Remote Sens. 2026, 18(8), 1137; https://doi.org/10.3390/rs18081137 - 12 Apr 2026
Abstract
Sub-pixel matching of multimodal optical images is a critical step in the combined application of multiple sensors. However, structural noise and inconsistencies arising from variations in multimodal image responses usually limit matching accuracy. We develop phase congruency mutual-structure weighted least absolute deviation (PCWLAD), a coarse-to-fine matching framework. In the coarse matching stage, we preserve the complete structure and use an enhanced cross-modal similarity criterion to mitigate the structural information loss caused by phase congruency (PC) noise filtering. In the fine matching stage, a mutual-structure filtering and weighted least absolute deviation-based method is introduced to enhance inter-modal structural consistency and to adaptively estimate sub-pixel displacements. Experiments on three multimodal datasets—Landsat visible-infrared, short-range visible-near-infrared, and unmanned aerial vehicle (UAV) optical image pairs—show that PCWLAD achieves superior average performance compared with eight state-of-the-art methods, attaining an average matching accuracy of approximately 0.4 pixels.
(This article belongs to the Special Issue Advances in Multi-Source Remote Sensing Data Fusion and Analysis)
18 pages, 4334 KB  
Article
Multi-Source Remote Sensing-Constrained Evaluation of CMAQ Aerosol Optical Depth over Major Urban Clusters in China
by Zhaoyang Peng, Yikun Yang, Yuzhi Jin, Bin Wang, Zhouyang Zhang, Ting Pan and Zeyuan Tian
Remote Sens. 2026, 18(8), 1134; https://doi.org/10.3390/rs18081134 - 10 Apr 2026
Abstract
Aerosol optical depth (AOD) is a key indicator for quantifying aerosol radiative effects and evaluating air quality. However, atmospheric chemical transport models often exhibit systematic AOD biases, and model capability for column-integrated optical properties is not always consistent with that for near-surface particulate matter concentrations. Here, we evaluate AOD simulated by the Community Multiscale Air Quality (CMAQ) model over five major urban clusters in China, including the Beijing-Tianjin-Hebei (BTH) region, Fenwei Plain (FWP), Sichuan Basin (SCB), Yangtze River Delta (YRD), and Pearl River Delta (PRD), using satellite retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS), ground-based retrievals from the Aerosol Robotic Network (AERONET), and vertical extinction profiles from the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO). CMAQ reproduces the major spatial patterns and exhibits relatively small biases in near-surface PM2.5. However, it persistently underestimates AOD relative to MODIS, with the largest negative bias occurring in April (i.e., a typical spring month). This contrast indicates a pronounced inconsistency between column-integrated aerosol amount and surface mass density. Relative to AERONET, CMAQ shows a negative bias (NMB = −38%), whereas MODIS shows a positive bias (NMB = 56%), suggesting that both model and retrieval uncertainties contribute to the CMAQ–MODIS disagreements. CALIPSO-constrained vertical analysis further suggests that insufficient extinction above the planetary boundary layer (PBL) is an important contributor to the negative AOD bias, although the relative roles of boundary-layer and upper-layer contributions vary across regions, underscoring the importance of accurately representing aerosol vertical transport and optical processes. These results indicate that evaluations based solely on surface observations may fail to fully capture the overall structure of AOD errors, particularly given the clear differences between near-surface mass concentrations and column optical properties, which vary across regions. This also highlights the importance of improving the representation of aerosol vertical transport and optical processes in chemical transport models.
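The NMB values quoted above follow the standard normalized-mean-bias definition; a minimal sketch (the AOD numbers below are made-up placeholders, not data from the study):

```python
import numpy as np

def normalized_mean_bias(model: np.ndarray, obs: np.ndarray) -> float:
    """NMB (%) = 100 * sum(model - obs) / sum(obs), the metric used for
    comparisons such as CMAQ vs. AERONET and MODIS vs. AERONET."""
    return float(100.0 * np.sum(model - obs) / np.sum(obs))

# Placeholder AOD values, not data from the study
aeronet = np.array([0.30, 0.45, 0.25, 0.50])
cmaq    = np.array([0.20, 0.30, 0.18, 0.32])
bias = normalized_mean_bias(cmaq, aeronet)   # negative: model underestimates
```

Because both sums run over the same collocated samples, a negative NMB for the model and a positive NMB for a retrieval (against the same ground truth) can coexist, which is exactly the CMAQ/MODIS situation described.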
(This article belongs to the Section Atmospheric Remote Sensing)

17 pages, 6586 KB  
Article
Harnessing Foundation Models for Optical–SAR Object Detection via Gated–Guided Fusion
by Qianyin Jiang, Jianshang Liao, Qiuyu Lin and Junkang Zhang
ISPRS Int. J. Geo-Inf. 2026, 15(4), 160; https://doi.org/10.3390/ijgi15040160 - 8 Apr 2026
Abstract
Remote sensing object detection is fundamental to Earth observation, yet remains challenging when relying on a single sensing modality. While optical imagery provides rich spatial and textural details, it is highly sensitive to illumination and adverse weather; conversely, Synthetic Aperture Radar (SAR) offers robust all-weather acquisition but suffers from speckle noise and limited semantic interpretability. To address these limitations, we leverage the potential of foundation models for optical–SAR object detection via a novel gated–guided fusion approach. By integrating transferable and generalizable representations from foundation models into the detection pipeline, we enhance semantic expressiveness and cross-environment robustness. Specifically, a gated–guided fusion mechanism is designed to selectively merge cross-modal features with foundational priors, enabling the network to prioritize informative cues while suppressing unreliable signals in complex scenes. Furthermore, we propose a dual-stream architecture incorporating attention mechanisms and State Space Models (SSMs) to simultaneously capture local and long-range dependencies. Extensive experiments on the large-scale M4-SAR dataset demonstrate that our method achieves state-of-the-art performance, significantly improving detection accuracy and robustness under challenging sensing conditions.

65 pages, 8778 KB  
Systematic Review
Beyond Accuracy: Transferability Limits, Validation Inflation, and Uncertainty Gaps in Satellite-Based Water Quality Monitoring—A Systematic Quantitative Synthesis and Operational Framework
by Saeid Pourmorad, Valerie Graw, Andreas Rienow and Luca Antonio Dimuccio
Remote Sens. 2026, 18(7), 1098; https://doi.org/10.3390/rs18071098 - 7 Apr 2026
Abstract
Satellite remote sensing has become essential for water quality assessment across inland and coastal environments, with rapid improvements in recent years. Significant advances have been made in detecting optically active parameters (such as chlorophyll-a, suspended matter, and turbidity), showing consistently strong performance across multiple studies. Specifically, the median validation performance (R2) derived from the quantitative synthesis indicates R2 = 0.82 for chlorophyll-a (interquartile range—IQR: 0.75–0.90), R2 = 0.80 for total suspended matter (IQR: 0.78–0.85), and R2 = 0.88 for turbidity (IQR: 0.85–0.90). Conversely, the retrieval of optically inactive parameters (such as nutrients like total phosphorus and total nitrogen) remains more context dependent. It exhibits moderate, more variable results, with median R2 = 0.68 (IQR: 0.64–0.74) for total phosphorus and R2 = 0.75 (IQR: 0.70–0.80) for total nitrogen. These findings clearly illustrate the varying success of retrievals of optically active and inactive parameters and underscore the inherent difficulties of indirect estimation methods. However, high reported accuracy has yet to translate into transferable, uncertainty-informed, and operational monitoring systems. This gap stems from structural issues in validation design, physics integration, uncertainty management, and multi-sensor compatibility rather than data limitations alone. We present a PRISMA-guided, distribution-aware quantitative synthesis of 152 peer-reviewed studies (1980–2025), based on a systematic search protocol, to evaluate satellite-based retrievals of both optically active and inactive parameters. Instead of simply averaging performance, we analyse the empirical distributions of validation metrics, considering the validation protocol, sensor type, parameter category, degree of physics integration, and uncertainty quantification. The synthesis demonstrates that validation strategy often influences reported results more than the algorithm class itself, with accuracy inflated under non-independent cross-validation methods and notable variability between studies concealed by mean-based reports. Across four decades, four persistent structural challenges remain: limited transferability across sites and sensors beyond calibration areas; weak or implicit physical integration in many data-driven models; lack of or inconsistency in uncertainty quantification; and fragmented multi-sensor harmonisation that restricts operational scalability. To address these issues, we introduce two evidence-based coding frameworks: a physics-integration taxonomy (P0–P4) and an uncertainty-quantification hierarchy (U0–U4). Applying these frameworks shows that most studies remain focused on low-to-moderate levels of physics integration and primarily consider uncertainty at the prediction stage, with limited attention to upstream sources throughout the observation and inference process. Building on this structured synthesis, we propose a transferable, physics-informed, and uncertainty-aware conceptual framework that links model architecture, validation robustness, and probabilistic uncertainty to well-founded design principles. By shifting satellite water quality modelling from isolated algorithm demonstrations towards integrated, evidence-based system design, this study promotes scalable, decision-grade environmental monitoring amid the accelerating impacts of climate change.
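The median/IQR summaries reported above are the distribution-aware alternative to mean-based reporting; a minimal sketch of how such a summary is computed (the R2 values below are illustrative, not the synthesized data):

```python
import numpy as np

def median_iqr(values):
    """Median and interquartile range: the distribution-aware summary
    used across studies instead of a plain mean, which can conceal
    between-study variability."""
    med = float(np.median(values))
    q1, q3 = np.percentile(values, [25, 75])
    return med, (float(q1), float(q3))

# Illustrative per-study validation R2 values for one parameter
r2_values = [0.75, 0.80, 0.82, 0.88, 0.90]
med, iqr = median_iqr(r2_values)
```

Reporting the (Q1, Q3) band alongside the median exposes how spread out per-study performance is, which a single averaged R2 would hide.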

31 pages, 6459 KB  
Article
Cooperative Hybrid Domain Network for Salient Object Detection in Optical Remote Sensing Images
by Yi Gu, Jianhang Zhou and Lelei Yan
Remote Sens. 2026, 18(7), 1087; https://doi.org/10.3390/rs18071087 - 4 Apr 2026
Abstract
Salient Object Detection (SOD) in Optical Remote Sensing Images (ORSIs) aims to localize and segment visually prominent objects amidst complex backgrounds and extreme scale variations. However, we observe that current frequency-aware methods typically rely on a naive feature aggregation paradigm, merging frequency and spatial features via simple concatenation, addition, or direct combination. This shallow interaction overlooks the inherent semantic misalignment between the two domains, resulting in feature redundancy and poor boundary delineation. To address this limitation, we propose the Cooperative Hybrid Domain Network (CHDNet), a framework designed to facilitate synergistic cooperation between heterogeneous domains. Specifically, we propose the Cross-Domain Multi-Head Self-Attention (CD-MHSA) mechanism as a semantic bridge following the encoder. It employs a dimension expansion strategy to construct a Unified Interaction Manifold and utilizes a Frequency Anchor Interaction mechanism to achieve precise modulation of spatial textures using global spectral cues. Furthermore, to address the dual challenges of lacking explicit interpretation mechanisms for semantic co-occurrence and the susceptibility of topological structures to fracture in complex scenes during the decoding phase, we design a Multi-Branch Cooperative Decoder (MBCD) comprising three parallel paths: edge semantics, global relations, and reverse correction. This module dynamically integrates these heterogeneous clues through a Cooperative Fusion Strategy, combining explicit global dependency modeling with dual-domain reverse mining. Extensive experiments on multiple benchmark datasets demonstrate that the proposed CHDNet achieves performance superior to state-of-the-art (SOTA) methods.

38 pages, 1589 KB  
Review
Monitoring of Agricultural Crops by Remote Sensing in Central Europe: A Comprehensive Review
by Jitka Kumhálová, Jiří Sedlák, Jiří Marčan, Věra Vandírková, Petr Novotný, Matěj Kohútek and František Kumhála
Remote Sens. 2026, 18(7), 1075; https://doi.org/10.3390/rs18071075 - 3 Apr 2026
Abstract
Remote sensing has become a cornerstone of modern agricultural monitoring, addressing the dual challenges of increasing production while ensuring environmental sustainability. Based on a conceptual framework developed over the past decade, key application areas include yield estimation, phenology, stress assessment (e.g., drought), crop mapping, and land-use change detection. In Central Europe, regionally specific conditions such as fragmented land ownership, small and irregular plots, and high climate variability shape these applications. Annual field crops such as cereals, oilseeds, maize, and forage crops dominate production and represent the primary focus of monitoring efforts. Optical data from Sentinel-2 are effective for mapping crop types and analyzing phenology, especially when dense time series are available. However, persistent cloud cover during critical growth phases limits the effectiveness of optical approaches, prompting the integration of radar data from Sentinel-1. Multi-sensor strategies increase the robustness of classification and temporal continuity, supporting monitoring under adverse conditions. Reliable reference data from systems such as the Land Parcel Identification System enable parcel-level validation and facilitate object-oriented analyses in line with management needs. Future developments will increasingly rely on advanced time-series analysis, machine learning, and the integration of agrometeorological and crop model data. As climate change intensifies drought frequency and yield variability, remote sensing will play a pivotal role in enabling near-real-time monitoring and decision support within the evolving landscape of digital agriculture ecosystems. The aim of this review article is to provide an overview of crop monitoring in the Central European region over approximately the past fifteen years, emphasizing trends in subsequent technological and procedural developments.
(This article belongs to the Special Issue Crop Yield Prediction Using Remote Sensing Techniques)

22 pages, 1709 KB  
Review
Satellite Remote Sensing for Cultural Heritage Protection: The Consensus Platform and AI-Assisted Bibliometric Analysis of Scientific and Grey Literature (2010–2025)
by Claudio Sossio De Simone, Nicola Masini and Nicodemo Abate
Heritage 2026, 9(4), 149; https://doi.org/10.3390/heritage9040149 - 3 Apr 2026
Abstract
Satellite remote sensing has rapidly evolved from an experimental support tool into a structural component of preventive archaeology and cultural heritage governance. Drawing on scientific publications and policy-oriented grey literature from 2010–2025, this study provides an integrated review of how optical, SAR, and multi-sensor satellite data are used to detect archaeological sites, monitor landscape and structural change, and support risk-informed planning across diverse legal and institutional contexts. A multi-platform workflow combines AI-assisted semantic querying (Consensus), bibliometric searches (Scopus), and the collaborative management and geospatial visualisation of references through Zotero, VOSviewer (1.6.19), and QGIS (3.44)-based literature mapping, thereby linking thematic trends, co-authorship networks, and geographical patterns of research and regulation. The results show non-linear but marked publication growth, a strongly interdisciplinary profile, and the consolidation of international hubs that drive advances in Sentinel-2-based prospection, Landsat and night-time lights urbanisation metrics, and SAR time series for deformation, looting, and conflict-damage mapping. Parallel analysis of grey literature and institutional initiatives (Copernicus Cultural Heritage Task Force, national “extraordinary plans”, regional declarations, and UNESCO guidelines) reveals the codification of satellite Earth observation within rescue archaeology protocols, emergency archaeology, and long-term conservation strategies. Overall, the evidence indicates a transition towards data-driven, multi-sensor, and multi-scalar research, underpinned by open satellite data, reproducible workflows, and AI-supported evidence synthesis.
22 pages, 8737 KB  
Article
Remote Sensing of Soil Moisture in Bare Chernozems on Flat and Sloping Terrains
by Zlatomir Dimitrov, Atanas Z. Atanasov, Dessislava Ganeva, Milena Kercheva, Gergana Kuncheva, Viktor Kolchakov and Martin Nenov
Sustainability 2026, 18(7), 3373; https://doi.org/10.3390/su18073373 - 31 Mar 2026
Abstract
The aim of the current study was to select and test an appropriate model and input parameters for remote sensing retrieval of surface soil moisture (SSM) in bare Chernozems on flat and sloping terrains in northern Bulgaria under different tillage systems. Normalized synthetic aperture radar (SAR) measurements from Sentinel-1 C-band dual-pol products (Gamma-Nought in VV, ratio) were utilized in two ways to delineate SSM from the environmental factors that bias its determination. The accuracy of the obtained SSM prediction was evaluated against ground-based volumetric water content (VWC) measured in the 0–3.8 cm soil layer at multiple points using a TDR meter. The TDR VWC data were preliminarily calibrated against gravimetric measurements in the 0–5 cm soil layer. The obtained soil water retention curves for all studied variants were used to determine the range of soil moisture variation. The measured ground-based data for surface roughness generally correlate with the co-pol Gamma-Nought in VV. The data modeled with the surface soil moisture script in Sentinel Hub (SSM-SH) were calibrated using the ground-based data. Incidence angle normalization of Sentinel-1 products improved the relationship between SAR observables and SSM when the latter was expressed as the ratio of soil moisture to total porosity (rVWC). The modeling indicated the highest importance of the optical indices, together with temporal differences of radar descriptors sensitive to variations in soil moisture over time. Although the applied Random Forest Regression (RFR) model achieved higher accuracy during training (nRMSE of 7.27%, R2 of 0.86), the Gaussian Process Regression (GPR) model provided better generalization on the independent validation dataset. The results demonstrate the advantage of jointly using temporal Sentinel-1 SAR measurements and Sentinel-2 optical acquisitions to determine SSM with high accuracy under different bare-soil conditions. Full article
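Two quantities in the abstract above can be sketched concretely: a first-order incidence-angle normalization of backscatter and the porosity-relative soil moisture rVWC. The cosine-law exponent and 37.5° reference angle below are illustrative assumptions, not the calibration used in the study:

```python
import math

def normalize_incidence(gamma0_db, theta_deg, theta_ref_deg=37.5, n=2.0):
    """First-order cosine-law normalization of backscatter (dB) to a
    common reference incidence angle. Reference angle and cosine
    exponent n are assumed values for illustration."""
    theta = math.radians(theta_deg)
    theta_ref = math.radians(theta_ref_deg)
    return gamma0_db + 10.0 * n * math.log10(math.cos(theta_ref) / math.cos(theta))

def relative_vwc(vwc, porosity):
    """Express volumetric water content as a fraction of total
    porosity (rVWC), the quantity the study relates to SAR observables."""
    return vwc / porosity

# Example: an observation at 42 deg pulled to the 37.5 deg reference,
# and a VWC of 0.21 in a soil with porosity 0.48.
g = normalize_incidence(-12.0, 42.0)
r = relative_vwc(0.21, 0.48)
```

At the reference angle the correction vanishes; steeper incidence angles receive a positive dB correction under this model.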
23 pages, 18538 KB  
Article
MSRNet: Mamba-Based Self-Refinement Framework for Remote Sensing Change Detection
by Haoxuan Sun, Xiaogang Yang, Ruitao Lu, Jing Zhang, Bo Li and Tao Zhang
Remote Sens. 2026, 18(7), 1042; https://doi.org/10.3390/rs18071042 - 30 Mar 2026
Abstract
Accurate change detection (CD) in very high-resolution (VHR, <1 m) optical remote sensing images remains challenging, as it requires effective modeling of long-range bi-temporal dependencies and robustness against label noise in complex urban environments. Existing deep learning-based CD methods either rely on convolutional operations with limited receptive fields or employ global attention mechanisms with high computational cost, making it difficult to simultaneously achieve efficient global context modeling and fine-grained structural sensitivity. To address these challenges, we propose a Mamba-based self-refinement framework for remote sensing change detection (MSRNet). Specifically, we introduce an attention-enhanced oblique state space module (AOSS) to model spatio-temporal dependencies with linear complexity while preserving fine-grained structural information. The four-branch attention fusion module (FBAM) further enhances cross-dimensional feature interaction to improve the discriminative capability of differential representations. In addition, a self-refinement module (SRM) incorporates a momentum encoder to generate high-quality pseudo-labels, mitigating annotation noise and enabling learning from latent changes. Extensive experiments on two benchmark VHR datasets, LEVIR-CD and WHU-CD, demonstrate that MSRNet achieves state-of-the-art performance in both accuracy and computational efficiency. Full article
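The self-refinement module described above uses a momentum encoder to generate stable pseudo-labels. The core of such a scheme is an exponential moving average (EMA) of student weights into a teacher copy; the sketch below uses plain lists of floats and an assumed momentum value, since MSRNet's exact update rule is not specified here:

```python
def ema_update(teacher, student, m=0.999):
    """Momentum (EMA) update: move teacher weights a small step toward
    the student weights. The teacher then produces slowly-varying
    pseudo-labels, damping the effect of annotation noise.
    Weights are plain float lists for illustration."""
    return [m * t + (1.0 - m) * s for t, s in zip(teacher, student)]

# With m = 0.9 the teacher moves 10% of the way toward the student
# at each step (m = 0.9 chosen here only to make the step visible).
teacher = [0.0, 1.0]
student = [1.0, 0.0]
teacher = ema_update(teacher, student, m=0.9)
```

In practice the teacher's predictions are additionally thresholded by confidence before being used as pseudo-labels for unlabeled or noisily labeled pixels.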
(This article belongs to the Section AI Remote Sensing)
