Search Results (1,573)

Search Parameters:
Keywords = optical remote sensing image

18 pages, 1044 KB  
Article
Optical Design and Analysis of a Conical Scan-Type Slanted Off-Axis Camera
by Yiting Wang, Xi He, Zongqiang Fu, Rui Duan and Xiubin Yang
Photonics 2026, 13(4), 400; https://doi.org/10.3390/photonics13040400 - 21 Apr 2026
Abstract
Compared with the conventional push-broom imaging mode, conical scanning extends the imaging swath through rotational scanning and is suitable for high-resolution, wide-swath remote sensing. To achieve continuous full-coverage imaging, the camera must be mounted at a certain tilt angle and employ an off-axis optical system with a sufficiently large field of view (FOV). However, the tilted installation causes nonuniform irradiance and increased off-axis distortion, while wide-field off-axis imaging also introduces radiometric consistency problems in focal-plane multi-detector stitching. To address these issues, this study investigates the optical design of a tilted off-axis camera for conical-scan imaging. Under the constraints of full coverage and swath requirements, key optical parameters were jointly determined, and a lightweight wide-coverage off-axis three-mirror system was designed, optimized, and evaluated. The final system has a focal length of 1545 mm, an F-number of 8.4, and a full FOV of 23.4° × 11.7°. The modulation transfer function is greater than 0.41 at the Nyquist frequency, and the maximum distortion is less than 2.5446%. In addition, for the focal-plane optical stitching structure, the coupled effects of local structural vignetting and global geometric vignetting induced by the tilted installation were analyzed. The results show that the gray-level difference in the adjacent detector overlap regions is only 0.31–0.53 digital numbers (DN), and the full focal plane shows a smooth gray-level attenuation rate of 5.39–6.77%. These results indicate that vignetting has no significant effect on focal-plane stitching. The proposed camera is well suited for conical-scan imaging. Full article
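As a quick plausibility check on the quoted design values (not part of the article), the implied aperture diameter follows from the standard relation between focal length and F-number; a minimal sketch:

```python
# Minimal sketch (not from the article): the clear-aperture diameter implied by the
# quoted design values, using the standard relation F-number = focal length / aperture.
focal_length_mm = 1545.0   # focal length quoted in the abstract
f_number = 8.4             # F-number quoted in the abstract

aperture_diameter_mm = focal_length_mm / f_number
print(f"Implied aperture diameter: {aperture_diameter_mm:.1f} mm")  # ~183.9 mm
```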
26 pages, 31446 KB  
Article
A Training-Free Paradigm for Data-Scarce Maritime Scene Classification Using Vision-Language Models
by Jiabao Wu, Yujie Chen, Wentao Chen, Yicheng Lai, Junjun Li, Xuhang Chen and Wangyu Wu
Sensors 2026, 26(8), 2549; https://doi.org/10.3390/s26082549 - 21 Apr 2026
Abstract
Maritime Domain Awareness (MDA) relies heavily on data acquired from high-resolution optical spaceborne sensors; however, processing this massive quantity of sensor data via traditional supervised deep learning is severely bottlenecked by its dependency on exhaustively annotated datasets. Under extreme data scarcity, conventional architectures suffer severe performance degradation, rendering them impractical for time-critical, zero-day deployments. To overcome this barrier, we propose a training-free inference paradigm that leverages the extensive pre-trained knowledge of Large Vision-Language Models (VLMs). Specifically, we introduce a Domain Knowledge-Enhanced In-Context Learning (DK-ICL) framework coupled with a Macro-Topological Chain-of-Thought (MT-CoT) strategy. This approach bridges the perspective gap between natural images and top–down optical sensor imagery by translating expert remote sensing heuristics into a strict, step-by-step reasoning pipeline. Extensive evaluations demonstrate the substantial efficacy of this framework. Armed with merely 4 visual exemplars per category as in-context triggers, our MT-CoT augmented VLMs outperform traditional models trained under identical scarcity by over 38% in F1-score. Crucially, real-world case studies confirm that this zero-gradient approach maintains robust generalization on unannotated, out-of-distribution coastal clutters, achieving performance parity with data-heavy networks trained on 50 times the data volume. By substituting massive human annotation and GPU optimization with scalable logical deduction, this paradigm establishes a resource-efficient foundation for next-generation intelligent maritime sensing networks. Full article
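The listing does not reproduce the DK-ICL/MT-CoT prompts themselves; the sketch below only illustrates the generic pattern of packing a few labelled exemplars and a step-by-step reasoning instruction into a chat-style message list. All function names and message fields are hypothetical, not the authors' interface.

```python
# Hypothetical sketch of few-shot, chain-of-thought style prompt assembly for a
# vision-language model. Message structure and field names are illustrative only;
# they do not reproduce the DK-ICL / MT-CoT framework from the paper.
from typing import List, Dict

def build_icl_messages(exemplars: List[Dict], query_image_ref: str) -> List[Dict]:
    """exemplars: [{"image": <ref>, "label": <class name>}, ...] (e.g., 4 per class)."""
    system = (
        "You classify top-down maritime remote sensing scenes. "
        "Reason step by step: 1) describe the coarse layout, 2) identify water/land/ship cues, "
        "3) compare against the exemplars, 4) output one class label."
    )
    messages = [{"role": "system", "content": system}]
    for ex in exemplars:
        messages.append({"role": "user",
                         "content": [{"type": "image", "image": ex["image"]},
                                     {"type": "text", "text": "Class?"}]})
        messages.append({"role": "assistant", "content": ex["label"]})
    messages.append({"role": "user",
                     "content": [{"type": "image", "image": query_image_ref},
                                 {"type": "text", "text": "Class?"}]})
    return messages
```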
17 pages, 5384 KB  
Review
Hyperspectral Sensing Enabled by Optics-Free Sensor Architectures
by Yicheng Wang, Xueyi Wang, Xintong Guo and Yining Mu
Nanomanufacturing 2026, 6(2), 8; https://doi.org/10.3390/nanomanufacturing6020008 - 20 Apr 2026
Abstract
Hyperspectral sensing allows for the capture of spatially resolved spectral data, a capability critical for applications spanning from remote sensing to biomedical diagnostics. Nevertheless, the widespread adoption of this technology is hindered by the bulk and complexity of traditional systems based on diffractive optics. To overcome these hurdles, substantial research efforts have been dedicated to system miniaturization via component scaling and computational imaging. This review outlines the technological progression of compact hyperspectral imaging, ranging from miniaturized dispersive elements and tunable filters to computational snapshot designs using optical multiplexing. Although these approaches decrease system volume, they generally treat the sensor as a passive intensity recorder requiring external encoding. Therefore, we focus here on the rising paradigm of sensor-level integration made possible by nanomanufacturing. We examine optics-free architectures where spectral discrimination is embedded directly into the pixel, distinguishing between pixel-level nanophotonic filtering and intrinsic material-based selectivity. We specifically highlight emerging platforms such as compositionally engineered and cavity-enhanced perovskites, as well as electrically tunable organic or two-dimensional (2D) material heterostructures. To conclude, this review discusses persistent challenges regarding fabrication uniformity and stability, providing an outlook on the future of scalable and fully integrated hyperspectral vision systems. Full article
24 pages, 7609 KB  
Article
CGHD: Dual-Temporal Dataset of Composite Geological Hazards via Multi-Source Optical Remote Sensing Images
by Yuebao Wang, Guang Yang, Xiaotong Guo, Wangze Lu, Rongxiang Liu, Meng Huang and Shuai Liu
Remote Sens. 2026, 18(8), 1198; https://doi.org/10.3390/rs18081198 - 16 Apr 2026
Viewed by 269
Abstract
Geological hazards are characterized by their sudden occurrence, high destructiveness, and wide spatial impact. In particular, landslides and debris flows triggered by earthquakes and intense rainfall often lead to severe casualties and substantial property losses. Therefore, the rapid delineation of affected areas is crucial for disaster assessment and post-disaster reconstruction. To this end, several geohazard datasets have been developed from remote sensing imagery, focusing on specific regions, disaster types, and data sources, providing valuable support for geohazard detection and risk assessment. Our study addresses the diversity of real-world geological disasters in terms of their types, causes, and spatial distribution and constructs the Composite Geological Hazards Dataset (CGHD), a dual-temporal geohazard dataset that enhances generalisation and practical applicability. CGHD incorporates pre- and post-disaster remote sensing images of 14 landslide and debris flow events that occurred worldwide between 2017 and 2024, collected using four remote sensing platforms and encompassing multiple spatial scales and land-cover categories. The affected areas varied significantly in size and shape, with land-cover types including roads, buildings, vegetation, farmland, and water bodies. This resulted in 3963 pairs of pre- and post-disaster images, each with a size of 1024 × 1024 pixels. We validated the reliability of the CGHD through experiments with nine change-detection models and further evaluated its generalisation capability using an unseen dataset. The experimental results demonstrate that CGHD achieves high recognition accuracy and strong generalisation across diverse geographic environments, providing comprehensive data support for intelligent geohazard recognition and disaster assessment. Full article
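As an illustration of how such dual-temporal patch pairs are typically produced (an assumption, not the CGHD construction pipeline), a minimal tiling routine for co-registered pre-/post-event scenes might look like this:

```python
# Illustrative sketch: cutting co-registered pre-/post-event scenes into aligned
# 1024x1024 patch pairs, as used for bi-temporal change detection datasets.
# This is not the CGHD construction pipeline, just a generic tiling routine.
import numpy as np

def tile_pair(pre: np.ndarray, post: np.ndarray, size: int = 1024):
    """pre/post: co-registered arrays of shape (H, W, C); yields aligned patch pairs."""
    assert pre.shape == post.shape, "pre- and post-event scenes must be co-registered"
    h, w = pre.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield pre[y:y + size, x:x + size], post[y:y + size, x:x + size]

# Example with random data standing in for real imagery.
pre_scene = np.random.randint(0, 255, (4096, 4096, 3), dtype=np.uint8)
post_scene = np.random.randint(0, 255, (4096, 4096, 3), dtype=np.uint8)
n_pairs = sum(1 for _ in tile_pair(pre_scene, post_scene))
print(n_pairs)  # 16 patch pairs for a 4096x4096 scene
```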
40 pages, 3667 KB  
Review
Deep Learning Methods for SAR and Optical Image Fusion: A Review
by Chengyan Guo, Zhiyuan Zhang, Kexin Huang, Lan Luo, Ziqing Yang, Shuyun Shi and Junpeng Shi
Remote Sens. 2026, 18(8), 1196; https://doi.org/10.3390/rs18081196 - 16 Apr 2026
Viewed by 352
Abstract
Synthetic Aperture Radar (SAR) and optical image fusion technology plays a crucial role in remote sensing applications. It effectively combines the high spatial resolution and rich spectral information of optical images with the all-weather and penetrating observation advantages of SAR images, thereby significantly enhancing image interpretation accuracy and task execution capabilities. This paper systematically reviews deep learning-based fusion methods for SAR and optical images, with a particular focus on recent advances in deep learning models. Furthermore, it summarizes commonly used evaluation metrics for assessing fusion image quality, providing a basis for comparing and analyzing the performance of different methods. In addition, commonly used SAR-optical fusion datasets are briefly reviewed to highlight their roles in algorithm development and performance evaluation. Unlike conventional review articles, this paper further analyzes the guidance and supporting role of fusion algorithms from the perspective of typical and specific applications. Finally, it identifies key challenges and issues faced by current fusion methods, including data registration, model lightweight design, and multimodal feature alignment, and offers perspectives on future research directions. This review aims to provide routes and references for the development of SAR and optical image fusion technology. Full article
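The abstract mentions fusion-quality metrics without listing them; two widely used generic indicators, image entropy and the correlation coefficient with a source image, are sketched below as examples (they are not necessarily the metrics emphasised in the review):

```python
# Two common fusion-quality indicators, sketched with NumPy: the entropy of the fused
# image (information content) and its correlation coefficient with a source image.
import numpy as np

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def correlation_coefficient(fused: np.ndarray, source: np.ndarray) -> float:
    f = fused.astype(np.float64).ravel()
    s = source.astype(np.float64).ravel()
    return float(np.corrcoef(f, s)[0, 1])

# Synthetic demo data standing in for a fused image and one source band.
fused = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
optical = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(image_entropy(fused), correlation_coefficient(fused, optical))
```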
19 pages, 9700 KB  
Article
Integrating Multispectral and SAR Satellite Data for Alpine Wetland Mapping and Spatio-Temporal Change Analysis in the Qinghai Lake Basin
by Qianle Zhuang, Zeyu Tang, Chenggang Li, Meiting Fang and Xiaolu Ling
Remote Sens. 2026, 18(8), 1173; https://doi.org/10.3390/rs18081173 - 14 Apr 2026
Viewed by 199
Abstract
Alpine wetlands in the Qinghai Lake Basin, located on the northeastern Qinghai–Tibetan Plateau, are ecologically important but highly vulnerable to climate change and anthropogenic disturbance. Traditional field-based surveys are labor-intensive and spatially constrained, underscoring the need for automated remote sensing approaches for large-scale wetland mapping. In this study, an object-based image analysis (OBIA) framework was developed by integrating Sentinel-2 optical imagery with Sentinel-1 synthetic aperture radar (SAR) data to classify two representative plateau wetland types: marsh meadows and inland tidal flats. Seven categories of features were evaluated, including spectral features, vegetation indices, water indices, red-edge features, topographic variables, radar backscatter, and geometric-textural metrics. The Separability and Thresholds (SEaTH) algorithm was employed for feature selection and optimization prior to classification using a Random Forest model. The results indicate that incorporating geometric and textural features significantly improved classification performance, achieving an overall accuracy (OA) of 82.53% and a Kappa coefficient of 0.74. Moreover, the SEaTH-based feature optimization scheme yielded the best performance, with an OA of 86.24% and a Kappa coefficient of 0.79. Compared with the full feature set, this approach improved producer’s accuracy by 3.96–6.11% and increased overall accuracy by 1.48%. The proposed framework provides an effective and computationally efficient approach for mapping ecologically fragile alpine wetlands and offers valuable support for wetland conservation in the Qinghai Lake Basin. Full article
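The SEaTH algorithm ranks features by class separability, commonly via the Jeffries–Matusita distance under a Gaussian assumption; a simplified sketch of that ranking followed by a Random Forest fit is shown below. It is a stand-in under stated assumptions, not the authors' implementation:

```python
# Simplified sketch of SEaTH-style feature ranking (Jeffries-Matusita distance under a
# 1-D Gaussian assumption) followed by a Random Forest fit. Not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def jm_distance(a: np.ndarray, b: np.ndarray) -> float:
    """J-M distance between two 1-D feature samples, assuming Gaussian class distributions."""
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var() + 1e-12, b.var() + 1e-12
    bhat = (0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2)
            + 0.5 * np.log((v1 + v2) / (2 * np.sqrt(v1 * v2))))
    return float(2.0 * (1.0 - np.exp(-bhat)))

# X: (n_samples, n_features) object features; y: labels (e.g., marsh meadow vs. tidal flat).
# Random data stands in for real segmented-object features.
X = np.random.rand(200, 7)
y = np.random.randint(0, 2, 200)

scores = [jm_distance(X[y == 0, j], X[y == 1, j]) for j in range(X.shape[1])]
selected = np.argsort(scores)[::-1][:4]          # keep the most separable features
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, selected], y)
```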
27 pages, 49307 KB  
Article
Enhancing Soil Salinity Mapping by Integrating PolSAR Scattering Components and Spectral Indices in a 2D Feature Space Using RADARSAT-2 and Landsat-8 Imagery
by Bilali Aizezi, Ilyas Nurmemet, Aihepa Aihaiti, Yu Qin, Meimei Zhang, Ru Feng, Yixin Zhang and Yang Xiang
Remote Sens. 2026, 18(8), 1153; https://doi.org/10.3390/rs18081153 - 13 Apr 2026
Viewed by 345
Abstract
Soil salinization in arid oases constrains soil functioning and crop production, making spatially explicit monitoring important for land management. Multispectral optical remote sensing enables large-area salinity assessment, but in oasis environments such as the Keriya Oasis, its performance can be limited by spectral confusion between salt crusts and bright bare soils, sparse vegetation cover, and strong surface heterogeneity. Synthetic aperture radar (SAR), by contrast, provides all-weather imaging capability and sensitivity to surface scattering and dielectric-related conditions, but its salinity interpretation is often affected by surface complexity and environmental coupling. To address these limitations, a spectral index–polarimetric scattering integration framework that combines RADARSAT-2 and Landsat-8 OLI features within a simple two-dimensional (2D) feature space was developed. Two groups of models were constructed from variables selected through a data-driven screening process: (1) polarimetric feature space models based on combinations such as VanZyl volume scattering with Pauli odd-bounce or Touzi alpha scattering; and (2) multi-source feature space models that integrate the optimal polarimetric component with key spectral indicators such as SI4 and MSAVI. Among all tested models, VanZyl_vol-SI4 achieved the best performance (fitting: R² = 0.749, RMSE = 5.798 dS m⁻¹, MAE = 4.086 dS m⁻¹; validation: R² = 0.716, RMSE = 5.566 dS m⁻¹, MAE = 4.528 dS m⁻¹). The results indicate that integrating PolSAR scattering information with optical indices can improve salinity mapping relative to single-source feature spaces in the Keriya Oasis. The proposed 2D framework provides a concise way to compare different feature combinations and supports regional identification of salt-affected soils. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
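The reported fitting and validation statistics are standard regression metrics; a minimal sketch of computing R², RMSE, and MAE for a generic two-feature salinity model is given below (the linear model form and the synthetic data are assumptions for illustration only):

```python
# Minimal sketch: fitting a two-feature (2-D feature space) salinity model and reporting
# R^2, RMSE, and MAE, i.e. the kind of statistics quoted in the abstract. The linear model
# form and the synthetic data are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 2))             # e.g., [polarimetric component, spectral index] per sample
ec = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=2.0, size=120)  # synthetic soil EC (dS/m)

model = LinearRegression().fit(X, ec)
pred = model.predict(X)
print("R2  :", r2_score(ec, pred))
print("RMSE:", mean_squared_error(ec, pred) ** 0.5)
print("MAE :", mean_absolute_error(ec, pred))
```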
22 pages, 3734 KB  
Article
CLEAR: A Cognitive LLM-Empowered Adaptive Restoration Framework for Robust Ship Detection in Complex Maritime Scenarios
by Min Li, Xinyu Zhao and Yunfeng Wan
Remote Sens. 2026, 18(8), 1142; https://doi.org/10.3390/rs18081142 - 12 Apr 2026
Viewed by 317
Abstract
Ship detection in remote sensing imagery serves as a cornerstone of modern maritime surveillance. Existing visible light detectors suffer from severe performance degradation in adverse environmental conditions (e.g., fog, low light) due to domain gaps. Traditional global enhancement methods often lack adaptability, leading to “negative transfer”, where artifacts are introduced into clean images or the restoration is mismatched with the degradation type. To address these challenges, we propose the CLEAR (Cognitive Large Language Model (LLM)-Empowered Adaptive Restoration) framework. Inspired by the dual-process theory of cognition, we introduce a dynamic switching mechanism between fast perception and deep reasoning. Rather than processing all images indiscriminately, CLEAR utilizes a hybrid gating mechanism to efficiently filter nominal samples, triggering a Vision–Language Model (VLM) only when necessary to diagnose degradation and dispatch targeted restoration operators. Extensive experiments on the constructed HRSC-Robust dataset demonstrate that CLEAR achieves an overall mean Average Precision (mAP) at 0.5 Intersection-over-Union (IoU) of 86.92%, outperforming the baseline by 7.74%. Notably, it establishes a “fail-safe” mechanism for optical degradations. By adaptively resolving fog and low-light degradations, it effectively mitigates detector blindness, exemplified by a doubled Recall rate (52.52%) in dark scenarios. Furthermore, a confidence-based sparse triggering strategy ensures operational efficiency, maintaining a throughput of ~11.8 FPS in nominal conditions. This work validates the potential of VLMs for interpretable and robust remote sensing tasks. Full article
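The hybrid gating idea, a fast path for nominal images and VLM diagnosis only for low-confidence ones, can be summarised as a confidence-gated dispatch; the sketch below uses placeholder functions and is not the CLEAR implementation:

```python
# Schematic of a confidence-based sparse triggering strategy: run the fast detector first,
# and only invoke the (expensive) VLM diagnosis plus restoration when confidence is low.
# `detect`, `diagnose_degradation`, and `RESTORERS` are hypothetical placeholders.
def detect(image):
    """Fast detector; returns (boxes, mean_confidence). Placeholder."""
    return [], 0.35

def diagnose_degradation(image) -> str:
    """VLM-based diagnosis; returns a degradation type such as 'fog' or 'low_light'. Placeholder."""
    return "fog"

RESTORERS = {
    "fog": lambda img: img,        # e.g., a dehazing operator
    "low_light": lambda img: img,  # e.g., a low-light enhancement operator
}

def gated_pipeline(image, conf_threshold: float = 0.5):
    boxes, conf = detect(image)
    if conf >= conf_threshold:          # nominal sample: fast path, no restoration
        return boxes
    kind = diagnose_degradation(image)  # slow path: diagnose, restore, re-detect
    restored = RESTORERS.get(kind, lambda img: img)(image)
    boxes, _ = detect(restored)
    return boxes
```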
26 pages, 5676 KB  
Article
Light-Induced Changes in RGB Reflectance Parameters in Wheat and Pea Leaves in the Minute Range
by Yuriy Zolin, Alyona Popova, Lyubov Yudina, Leonid Andryushaev, Vladimir Sukhov and Ekaterina Sukhova
Plants 2026, 15(8), 1184; https://doi.org/10.3390/plants15081184 - 12 Apr 2026
Viewed by 431
Abstract
Parameters of reflected light, measured in narrow or broad spectral bands, are widely analyzed for remote and proximal sensing of plant responses to stressors. Specifically, parameters of reflectance in red (R), green (G), and blue (B) spectral bands measured using simple color images can be sensitive to characteristics of plants. The conventional view is that RGB reflectance primarily reveals long-term changes in plants (days, weeks, etc.). In this study, we investigated light-induced changes in RGB reflectance in wheat (Triticum aestivum L.) and pea (Pisum sativum L.) leaves. Illumination increased this reflectance for about 10 min in wheat and about 15–20 min in pea; these changes relaxed after light intensity was decreased. The changes in RGB reflectance were strongly related to the effective quantum yield of photosystem II and non-photochemical quenching of chlorophyll fluorescence under high light intensity; these relations were absent under low light intensity. We hypothesized that changes in both RGB reflectance and photosynthetic parameters were related to the light-induced changes in chloroplast localization. A simple mathematical model of optical properties and photosynthesis in leaves was developed; results of the model-based analysis supported the proposed hypothesis. Experimental analysis of the dynamics of light transmittance additionally supported this hypothesis. Our results thus show that RGB imaging can be sensitive to fast changes in plants. Full article
(This article belongs to the Special Issue Plant Sensors in Precision Agriculture)
24 pages, 15558 KB  
Article
A Mutual-Structure Weighted Sub-Pixel Multimodal Optical Remote Sensing Image Matching Method
by Tao Huang, Hongbo Pan, Nanxi Zhou, Siyuan Zou and Shun Zhou
Remote Sens. 2026, 18(8), 1137; https://doi.org/10.3390/rs18081137 - 12 Apr 2026
Cited by 1 | Viewed by 203
Abstract
Sub-pixel matching of multimodal optical images is a critical step in the combined application of multiple sensors. However, structural noise and inconsistencies arising from variations in multimodal image responses usually limit the accuracy of matching. Phase congruency mutual-structure weighted least absolute deviation (PCWLAD) is developed as a coarse-to-fine framework. In the coarse matching stage, we preserve the complete structure and use an enhanced cross-modal similarity criterion to mitigate structural information loss by phase congruency (PC) noise filtering. In the fine matching stage, a mutual-structure filtering and weighted least absolute deviation-based method is introduced to enhance inter-modal structural consistency and to accurately estimate sub-pixel displacements adaptively. Experiments on three multimodal datasets—Landsat visible-infrared, short-range visible-near-infrared, and unmanned aerial vehicle (UAV) optical image pairs—show that PCWLAD achieves superior average performance compared with eight state-of-the-art methods, attaining an average matching accuracy of approximately 0.4 pixels. Full article
(This article belongs to the Special Issue Advances in Multi-Source Remote Sensing Data Fusion and Analysis)
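Weighted least-absolute-deviation estimates are commonly obtained via iteratively reweighted least squares; the generic solver below illustrates only that core idea, not PCWLAD's mutual-structure weighting or phase-congruency filtering:

```python
# Generic illustration of weighted least-absolute-deviation (LAD) fitting via iteratively
# reweighted least squares (IRLS). PCWLAD combines this idea with mutual-structure weights
# and phase congruency; this sketch shows only the LAD/IRLS core.
import numpy as np

def weighted_lad(A: np.ndarray, b: np.ndarray, w: np.ndarray, iters: int = 50, eps: float = 1e-6):
    """Minimise sum_i w_i * |A_i @ x - b_i| by IRLS."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = np.abs(A @ x - b)
        irls_w = w / np.maximum(r, eps)          # LAD weights: w_i / |residual_i|
        W = np.sqrt(irls_w)[:, None]
        x = np.linalg.lstsq(W * A, W.ravel() * b, rcond=None)[0]
    return x

# Toy example: recover a sub-pixel shift (dx, dy) from noisy point correspondences.
rng = np.random.default_rng(1)
n = 100
shift_true = np.array([0.37, -0.21])
A = np.tile(np.eye(2), (n, 1))                   # each correspondence constrains both axes
b = np.tile(shift_true, n) + rng.laplace(scale=0.05, size=2 * n)
x_hat = weighted_lad(A, b, w=np.ones(2 * n))
print(x_hat)                                     # close to [0.37, -0.21]
```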
31 pages, 6459 KB  
Article
Cooperative Hybrid Domain Network for Salient Object Detection in Optical Remote Sensing Images
by Yi Gu, Jianhang Zhou and Lelei Yan
Remote Sens. 2026, 18(7), 1087; https://doi.org/10.3390/rs18071087 - 4 Apr 2026
Viewed by 322
Abstract
Salient Object Detection (SOD) in Optical Remote Sensing Images (ORSIs) aims to localize and segment visually prominent objects amidst complex backgrounds and extreme scale variations. However, we observe that current frequency-aware methods typically rely on a naive feature aggregation paradigm, merging frequency and spatial features via simple concatenation, addition, or direct combination. This shallow interaction overlooks the inherent semantic misalignment between the two domains, resulting in feature redundancy and poor boundary delineation. To address this limitation, we propose the Cooperative Hybrid Domain Network (CHDNet), a framework designed to facilitate synergistic cooperation between heterogeneous domains. Specifically, we propose the Cross-Domain Multi-Head Self-Attention (CD-MHSA) mechanism as a semantic bridge following the encoder. It employs a dimension expansion strategy to construct a Unified Interaction Manifold and utilizes a Frequency Anchor Interaction mechanism to achieve precise modulation of spatial textures using global spectral cues. Furthermore, to address the dual challenges of lacking explicit interpretation mechanisms for semantic co-occurrence and the susceptibility of topological structures to fracture in complex scenes during the decoding phase, we design a Multi-Branch Cooperative Decoder (MBCD) comprising three parallel paths: edge semantics, global relations, and reverse correction. This module dynamically integrates these heterogeneous clues through a Cooperative Fusion Strategy, combining explicit global dependency modeling with dual-domain reverse mining. Extensive experiments on multiple benchmark datasets demonstrate that the proposed CHDNet achieves performance superior to state-of-the-art (SOTA) methods. Full article
23 pages, 18538 KB  
Article
MSRNet: Mamba-Based Self-Refinement Framework for Remote Sensing Change Detection
by Haoxuan Sun, Xiaogang Yang, Ruitao Lu, Jing Zhang, Bo Li and Tao Zhang
Remote Sens. 2026, 18(7), 1042; https://doi.org/10.3390/rs18071042 - 30 Mar 2026
Viewed by 393
Abstract
Accurate change detection (CD) in very high-resolution (VHR, <1 m) optical remote sensing images remains challenging, as it requires effective modeling of long-range bi-temporal dependencies and robustness against label noise in complex urban environments. Existing deep learning-based CD methods either rely on convolutional operations with limited receptive fields or employ global attention mechanisms with high computational cost, making it difficult to simultaneously achieve efficient global context modeling and fine-grained structural sensitivity. To address these challenges, we propose a Mamba-based self-refinement framework for remote sensing change detection (MSRNet). Specifically, we introduce an attention-enhanced oblique state space module (AOSS) to model spatio-temporal dependencies with linear complexity while preserving fine-grained structural information. The four-branch attention fusion module (FBAM) further enhances cross-dimensional feature interaction to improve the discriminative capability of differential representations. In addition, a self-refinement module (SRM) incorporates a momentum encoder to generate high-quality pseudo-labels, mitigating annotation noise and enabling learning from latent changes. Extensive experiments on two benchmark VHR datasets, LEVIR-CD and WHU-CD, demonstrate that MSRNet achieves state-of-the-art performance in both accuracy and computational efficiency. Full article
(This article belongs to the Section AI Remote Sensing)
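The self-refinement module relies on a momentum encoder for pseudo-labels; the standard exponential-moving-average parameter update used by such encoders is sketched below in PyTorch (the network definition is a placeholder, not MSRNet):

```python
# Standard exponential-moving-average (momentum encoder) update, as used to generate stable
# pseudo-labels. The network definition is a placeholder; only the update rule is the point.
import torch
import torch.nn as nn

student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))
teacher = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def ema_update(student: nn.Module, teacher: nn.Module, momentum: float = 0.999) -> None:
    """Blend teacher weights toward the student: theta_t <- m * theta_t + (1 - m) * theta_s."""
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

# After each optimiser step on `student`, refresh the momentum encoder and derive pseudo-labels.
ema_update(student, teacher)
pseudo_label = (torch.sigmoid(teacher(torch.rand(1, 3, 64, 64))) > 0.5).float()
```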
21 pages, 15074 KB  
Article
Single-View High-Resolution Satellite Image Positioning by Integrating Global Open-Source Basemaps
by Zihui Xu, Ke Zhang, Xianwen Wang, Bing Wang, Yuhao Wang, Jingyu Wang, Yu Su, Feima Yuan, Bin Dong, Jianhua Li, Zhiquan Zhao and Tao Liu
Remote Sens. 2026, 18(7), 1028; https://doi.org/10.3390/rs18071028 - 29 Mar 2026
Viewed by 394
Abstract
High-resolution optical satellite data have become fundamental for acquiring globally accurate remote sensing information (e.g., object geometric and spectral characteristics). However, due to the difficulty in obtaining accurate ground control points on a global scale, achieving accurate global positioning of satellite imagery remains a technical challenge. To realize global positioning optimization without relying on accurate control points, this paper leverages open-source data such as Google Earth orthophoto maps (GE maps) and FABDEM, and proposes the Coarse-to-Fine Open-Source Basemap Integration (CFBI) method. The core idea of this method is to effectively eliminate gross errors in coarse control points by leveraging the differential projection offsets of roofs between single-view satellite images and multi-source orthophotos. On this basis, an iterative weight-selection adjustment strategy is adopted to achieve accurate positioning results. Experiments conducted in three regions, Jacksonville, New York, and Boston, demonstrate that the proposed algorithm significantly improves the positioning accuracy of satellite imagery, with an average improvement of 62.92% and accuracy within 2 m in most areas. Full article
(This article belongs to the Special Issue AI-Enhanced Remote Sensing for Image Matching and 3D Reconstruction)
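Iterative weight-selection adjustment typically down-weights observations whose residuals are large relative to the current fit, so gross errors in coarse control points lose influence; the generic reweighting sketch below illustrates the idea and is not the CFBI method itself:

```python
# Generic sketch of an iterative weight-selection adjustment: observations with large
# residuals relative to the current fit are progressively down-weighted, so gross errors
# lose influence. The exponential down-weighting rule is illustrative, not the CFBI method.
import numpy as np

def robust_adjustment(A, l, iters=10, k=2.0):
    """Solve A x ~ l while suppressing gross errors by iterative reweighting."""
    w = np.ones(len(l))
    for _ in range(iters):
        W = np.sqrt(w)[:, None]
        x, *_ = np.linalg.lstsq(W * A, np.sqrt(w) * l, rcond=None)
        v = A @ x - l                                   # residuals
        sigma = 1.4826 * np.median(np.abs(v)) + 1e-12   # robust scale estimate
        w = np.where(np.abs(v) <= k * sigma, 1.0, np.exp(-(np.abs(v) / (k * sigma)) ** 2))
    return x, w

# Toy control-point example with two simulated blunders.
rng = np.random.default_rng(0)
A = np.column_stack([np.ones(30), rng.uniform(0, 10, 30)])
l = A @ np.array([5.0, 1.2]) + rng.normal(scale=0.1, size=30)
l[[3, 17]] += 8.0                                       # gross errors
x_hat, weights = robust_adjustment(A, l)
print(x_hat, weights[[3, 17]])                          # blunders receive near-zero weight
```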
21 pages, 32230 KB  
Article
Structure-Aware Feature Descriptor with Multi-Scale Side Window Filtering for Multi-Modal Image Matching
by Junhong Guo, Lixing Zhao, Quan Liang, Xinwang Du, Yixuan Xu and Xiaoyan Li
Appl. Sci. 2026, 16(6), 3018; https://doi.org/10.3390/app16063018 - 20 Mar 2026
Viewed by 240
Abstract
Traditional image feature matching methods often fail to achieve satisfactory performance on multimodal remote sensing images (MRSIs), mainly due to significant nonlinear radiometric distortion (NRD) and complex geometric deformation caused by different imaging mechanisms. The key to successful MRSI matching lies in preserving high-frequency edge structures that are robust to geometric deformation, while overcoming nonlinear intensity mappings induced by NRD. To address these challenges, this paper proposes a novel high-precision matching framework, termed structure-aware feature descriptor with multi-scale side window filtering (SA-SWF). The proposed framework consists of three stages: (1) an anisotropic morphological scale space is constructed based on multi-scale side window filtering to strictly preserve geometric edges, and feature points are extracted using a multi-scale adaptive structure tensor with sub-pixel refinement to ensure high localization precision; (2) a structure-aware feature descriptor is constructed by integrating gradient reversal invariance and entropy-weighted attention mechanisms, rendering the multi-modal description highly robust against contrast inversion and noise; and (3) a coarse-to-fine robust matching strategy is established to progressively refine correspondences from descriptor-space matching to strict sub-pixel geometric verification, thereby minimizing alignment errors. Experiments on 60 multimodal image pairs from six categories, including infrared-infrared, optical–optical, infrared–optical, depth–optical, map–optical, and SAR–optical datasets, demonstrate that SA-SWF consistently outperforms seven state-of-the-art competitors. Across all six dataset categories, SA-SWF achieves a 100% success rate, the highest average number of correct matches (356.8), and the lowest average root mean square error (1.57 pixels). These results confirm the superior robustness, stability, and geometric accuracy of SA-SWF under severe radiometric and geometric distortions. Full article
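The quoted 1.57-pixel figure is a root-mean-square error over matched point pairs; a minimal way to compute such a score against a known ground-truth homography is sketched below (purely illustrative):

```python
# Minimal sketch of the match-accuracy metric behind figures such as "average RMSE of 1.57 px":
# matched points from one image are mapped through a ground-truth homography and compared with
# their matched positions in the other image. Illustrative only.
import numpy as np

def match_rmse(pts_src: np.ndarray, pts_dst: np.ndarray, H: np.ndarray) -> float:
    """pts_*: (N, 2) matched keypoints; H: 3x3 ground-truth homography mapping src -> dst."""
    ones = np.ones((len(pts_src), 1))
    proj = np.hstack([pts_src, ones]) @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    err = np.linalg.norm(proj - pts_dst, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

H_gt = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]])  # a pure translation
src = np.random.rand(50, 2) * 500
dst = src + np.array([5.0, -3.0]) + np.random.normal(scale=1.0, size=(50, 2))
print(match_rmse(src, dst, H_gt))  # ~sqrt(2) px for 1-px Gaussian noise per axis
```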
29 pages, 5790 KB  
Article
Self-Supervised Reservoir Water Area Detection Across Multi-Source Optical Imagery
by Guiyan Mo, Qing Yang and Xiaofeng Zhou
Remote Sens. 2026, 18(6), 918; https://doi.org/10.3390/rs18060918 - 18 Mar 2026
Viewed by 296
Abstract
Reservoirs are critical infrastructure for water and energy security, and informed decision-making requires accurate and timely monitoring of reservoir water extent. Optical remote sensing provides frequent, large-area observations; however, automated water extraction is often complicated by dam operation and surface heterogeneity, which increase spectral variability. Supervised methods, though widely used, generally require manual labels and often perform poorly when transferred across sensors and regions, limiting operational deployment. In this paper, we develop a geo-spectral feature-guided Self-Supervised Water Detection (SWD) framework, an automated algorithm designed for multi-source optical imagery. SWD consists of two stages: pixel-level classification and object-level refinement. Initially, SWD integrates spatial priors with spectral features to automatically derive high-confidence samples, which are then utilized to parameterize a Gaussian mixture model that represents the multimodal spectral distribution throughout the image. Furthermore, superpixel-constrained region growing is applied to refine the shoreline and ensure object-level consistency. We validated SWD across 36 test cases comprising three sensors, six reservoirs, and two hydrological conditions. Compared with Random Forest and U-Net, SWD achieved the best performance. Specifically, (1) in cross-scale tests, SWD achieved high consistency with IoU ≥ 0.774; (2) in cross-region transfers, SWD maintained stable generalization (SD: 0.010); and (3) in hydrological response assessments, SWD captured water-level fluctuations with minimal bias variation (ΔRE < 1%). In addition, the SWD framework is computationally efficient, with processing times of 0.49–1.29 s/Mpx on a standard CPU. This study demonstrates that SWD effectively addresses spectral variability and surface complexity in reservoir water area detection across multi-source optical imagery. It operates without manual labels or model training, enabling automated, large-scale, and multi-temporal reservoir water monitoring. Full article
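The pixel-level stage fits Gaussian mixture models to spectral features using automatically derived high-confidence samples; the sketch below shows that general pattern with scikit-learn, using a crude NDWI threshold as a stand-in for SWD's geo-spectral priors (band names and thresholds are assumptions):

```python
# Generic sketch of the pixel-level idea: derive high-confidence water / non-water samples
# (here via a crude NDWI threshold, purely as a placeholder for SWD's geo-spectral priors),
# fit Gaussian mixture models, and classify every pixel by likelihood comparison.
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_water(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    ndwi = (green - nir) / (green + nir + 1e-6)
    feats = np.stack([green.ravel(), nir.ravel(), ndwi.ravel()], axis=1)

    water_seed = ndwi.ravel() > 0.4        # high-confidence water samples (assumed threshold)
    land_seed = ndwi.ravel() < -0.1        # high-confidence non-water samples (assumed threshold)

    gmm_water = GaussianMixture(n_components=2, random_state=0).fit(feats[water_seed])
    gmm_land = GaussianMixture(n_components=3, random_state=0).fit(feats[land_seed])

    water_mask = gmm_water.score_samples(feats) > gmm_land.score_samples(feats)
    return water_mask.reshape(green.shape)

# Synthetic demo data standing in for surface-reflectance bands.
rng = np.random.default_rng(0)
green = rng.uniform(0.02, 0.3, (128, 128))
nir = rng.uniform(0.02, 0.5, (128, 128))
mask = classify_water(green, nir)
```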