Search Results (725)

Search Parameters:
Keywords = remote sensing image interpretation

31 pages, 6459 KB  
Article
Cooperative Hybrid Domain Network for Salient Object Detection in Optical Remote Sensing Images
by Yi Gu, Jianhang Zhou and Lelei Yan
Remote Sens. 2026, 18(7), 1087; https://doi.org/10.3390/rs18071087 - 4 Apr 2026
Abstract
Salient Object Detection (SOD) in Optical Remote Sensing Images (ORSIs) aims to localize and segment visually prominent objects amidst complex backgrounds and extreme scale variations. However, we observe that current frequency-aware methods typically rely on a naive feature aggregation paradigm, merging frequency and spatial features via simple concatenation, addition, or direct combination. This shallow interaction overlooks the inherent semantic misalignment between the two domains, resulting in feature redundancy and poor boundary delineation. To address this limitation, we propose the Cooperative Hybrid Domain Network (CHDNet), a framework designed to facilitate synergistic cooperation between heterogeneous domains. Specifically, we propose the Cross-Domain Multi-Head Self-Attention (CD-MHSA) mechanism as a semantic bridge following the encoder. It employs a dimension expansion strategy to construct a Unified Interaction Manifold and utilizes a Frequency Anchor Interaction mechanism to achieve precise modulation of spatial textures using global spectral cues. Furthermore, to address the dual challenges of lacking explicit interpretation mechanisms for semantic co-occurrence and the susceptibility of topological structures to fracture in complex scenes during the decoding phase, we design a Multi-Branch Cooperative Decoder (MBCD) comprising three parallel paths: edge semantics, global relations, and reverse correction. This module dynamically integrates these heterogeneous clues through a Cooperative Fusion Strategy, combining explicit global dependency modeling with dual-domain reverse mining. Extensive experiments on multiple benchmark datasets demonstrate that the proposed CHDNet achieves performance superior to state-of-the-art (SOTA) methods.

28 pages, 2875 KB  
Article
CF-Mamba: A Dual-Path Collaborative Method for Hyperspectral Image Classification
by Yapeng Wang, Guo Cao, Boshan Shi and Youqiang Zhang
Remote Sens. 2026, 18(7), 1063; https://doi.org/10.3390/rs18071063 - 2 Apr 2026
Abstract
Hyperspectral image (HSI) classification is a core task in remote sensing data interpretation. Although recently introduced state space models (SSMs), such as Mamba, have demonstrated promising performance in hyperspectral analysis due to their linear computational complexity and strong long-sequence modeling capability, existing single-stream scanning mechanisms struggle to effectively balance the intrinsic spectral continuity dependency and the high-dimensional redundancy inherent in HSI data. Moreover, they often suffer from representation discrepancies when fusing features from heterogeneous representation spaces. To address these challenges, we propose a continuous–discrete collaborative framework, termed Confluence Mamba (CF-Mamba). Specifically, the continuous modeling path (AHSE) introduces a multi-view adaptive routing mechanism to accurately capture anisotropic spectral–spatial continuous evolution patterns. Simultaneously, the discrete interaction path (IISE) employs interval sampling and channel shuffling strategies to efficiently decouple high-dimensional redundancy while maintaining fine-grained feature interactions. Furthermore, the confluence gating unit (CGU) leverages a bidirectional cross-modulation mechanism to constrain discrete feature distributions using continuous contextual information, effectively alleviating representation discrepancies during multi-scale feature fusion. Extensive experiments conducted on four benchmark datasets, namely, Indian Pines, Pavia University, Houston, and WHU-Hi-Longkou, demonstrate that CF-Mamba achieves overall accuracies of 97.77%, 99.68%, 99.06%, and 99.59%, respectively. The proposed method consistently outperforms existing CNN-, Transformer-, and Mamba-based approaches in terms of both classification performance and computational efficiency.

34 pages, 19919 KB  
Article
Unsupervised Change Detection in Heterogeneous Remote Sensing Images via Dynamic Mask Guidance
by Paixin Xie, Gao Chen, Qingfeng Zhou, Xiaoyan Li and Jingwen Yan
Remote Sens. 2026, 18(7), 1022; https://doi.org/10.3390/rs18071022 - 29 Mar 2026
Abstract
Unsupervised change detection (CD) in heterogeneous remote sensing images is intrinsically difficult due to severe sensor-specific discrepancies. In the absence of ground truth, these discrepancies result in ambiguous optimization objectives that make it difficult for models to distinguish true land-cover changes from modality-driven pseudo-changes. To address these challenges, we propose MaskUCD, a novel unsupervised framework that reformulates heterogeneous CD as a dynamic mask-driven constraint scheduling problem. Fundamentally distinct from conventional strategies that enforce selective feature alignment, MaskUCD employs a spatially adaptive optimization mechanism. Specifically, the iteratively refined mask serves as a geometric reference to guide optimization. It enforces strict feature alignment in mask-unchanged regions to suppress modality-induced discrepancies, while simultaneously promoting feature divergence in mask-changed regions to emphasize semantic inconsistencies. In this way, explicit optimization objectives are established, together with an intrinsic interpretability constraint that guides the CD process. This strategy treats the mask as a structural guide for representation learning rather than a ground-truth reference, thereby avoiding error accumulation caused by directly using inaccurate masks as supervisory signals. To facilitate this optimization, we design a specialized asymmetric autoencoder with a hybrid encoder architecture, utilizing multi-scale frequency analysis and global context modeling to enhance feature representation capabilities. Consequently, this design enables the generation of refined and semantically consistent masks, which provide increasingly precise structural guidance, yielding converged and discriminative difference maps. Extensive experiments demonstrate that MaskUCD achieves state-of-the-art performance and superior robustness compared to existing advanced methods.

18 pages, 11374 KB  
Article
CSGL-Former: Cross-Stripes Global–Local Fusion Transformer for Remote Sensing Image Dehazing
by Shuyi Feng, Xiran Zhang, Jie Yuan and Youwen Zhu
Sensors 2026, 26(7), 2102; https://doi.org/10.3390/s26072102 - 28 Mar 2026
Abstract
Remote sensing (RS) images are often degraded by atmospheric haze, which compromises both visual interpretation and downstream applications. To address this, we introduce CSGL-Former, a novel Cross-Stripes Global–Local Fusion Transformer for RS image dehazing. Our model efficiently captures anisotropic long-range dependencies using cross-stripes attention (CSA) and aggregates hierarchical global semantics via a Multi-Layer Global Aggregation (MLGA) module. In the decoder, global context is adaptively blended with fine-grained local features to restore intricate textures. Finally, inspired by the atmospheric scattering model, a soft reconstruction head restores the clear image by predicting spatially varying affine parameters, strictly preserving content fidelity while effectively removing haze. Trained end-to-end, CSGL-Former demonstrates a compelling balance of accuracy and efficiency. Extensive experiments on the RRSHID and SateHaze1K benchmarks show that our model achieves state-of-the-art or highly competitive performance against representative baselines. Ablation studies further validate the effectiveness of each proposed component.
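The atmospheric scattering model that the reconstruction head is inspired by is commonly written as I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the hazy observation, J the clear scene radiance, t the transmission map, and A the global airlight. A minimal NumPy sketch of this model and its inversion with known t and A — the underlying physics only, not the authors' learned head, which instead predicts the affine parameters:

```python
import numpy as np

# Atmospheric scattering model: I = J * t + A * (1 - t)
# I: hazy image, J: clear scene, t: transmission map, A: global airlight.

def synthesize_haze(J, t, A):
    """Apply the atmospheric scattering model to a clear image J."""
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=0.1):
    """Invert the model; clamp t to avoid amplifying noise in dense haze."""
    t = np.clip(t, t_min, 1.0)
    return (I - A * (1.0 - t)) / t

# Toy round trip with a uniform transmission map (illustrative values).
rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, size=(4, 4))   # clear image
t = np.full((4, 4), 0.6)                 # transmission
A = 0.9                                  # airlight
I = synthesize_haze(J, t, A)
J_rec = dehaze(I, t, A)
assert np.allclose(J, J_rec)             # exact inversion when t, A are known
```

In practice t and A are unknown and must be estimated or, as in learned dehazing models, absorbed into predicted per-pixel affine parameters.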
(This article belongs to the Special Issue Advanced Pattern Recognition: Intelligent Sensing and Imaging)

20 pages, 2636 KB  
Article
Inferring Wildfire Ignition Causes in Spain Using Machine Learning and Explainable AI
by Clara Ochoa, Magí Franquesa, Marcos Rodrigues and Emilio Chuvieco
Fire 2026, 9(4), 138; https://doi.org/10.3390/fire9040138 - 24 Mar 2026
Abstract
A substantial proportion of wildfires in Mediterranean regions continue to be recorded without information about the cause or source of ignition, limiting our ability to understand ignition drivers and design effective prevention strategies. In this study, we develop a spatially harmonised wildfire database for mainland Spain by integrating ignition records from the Spanish General Fire Statistics (EGIF) with fire perimeters generated from satellite images. We then apply a Random Forest classifier to infer ignition causes for events lacking cause attribution. To interpret model behaviour, we use Shapley Additive Explanation (SHAP) values at both global and local scales. Results indicate that human-caused ignitions are dominant, with intentional and negligence-related fires accounting for 52.13% of all known events, although they are associated with contrasting climatic and land-use settings. Negligence-related fires tend to occur under hot, dry and windy conditions, often in agricultural interfaces, whereas intentional fires are more frequent under cooler and wetter conditions and in areas with higher population density and land-use change. Lightning-caused fires represent a small fraction of total ignitions (3%) but exhibit a distinct climatic signature, occurring primarily in sparsely populated areas, under intermediate moisture conditions, and often leading to larger burned areas. Despite strong overall model performance (F1-score = 0.82), minority classes (e.g., lightning and fire rekindling, 0.17%) remain challenging to classify, reflecting both data imbalance and uncertainty in causal attribution. Overall, the combined use of machine learning and explainable AI provides a coherent spatial characterisation of wildfire ignition drivers across mainland Spain, highlights systematic differences among ignition causes, and identifies key limitations in existing fire cause records. This framework represents a practical step towards improving fire cause information by integrating remote sensing products with field-based fire reports, thereby supporting more targeted and evidence-based fire risk management.
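The pipeline this abstract describes — a Random Forest classifier whose predictions are then explained with SHAP values — can be sketched with scikit-learn on synthetic data; the features, sizes, and labels below are illustrative stand-ins, not the paper's harmonised wildfire database:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the ignition-cause task: five climate/land-use
# covariates and a binary "cause" label driven by the first two features.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# class_weight="balanced" is one common mitigation for the class imbalance
# the abstract notes (e.g. rare lightning and rekindling classes).
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0)
clf.fit(X_tr, y_tr)
macro_f1 = f1_score(y_te, clf.predict(X_te), average="macro")
```

For the explanation step, the separate `shap` package is typically used — `shap.TreeExplainer(clf).shap_values(X_te)` yields per-feature attributions for the kind of global and local analysis the abstract reports.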

29 pages, 5347 KB  
Article
Optimized Reinforcement Learning-Driven Model for Remote Sensing Change Detection
by Yan Zhao, Zhiyun Xiao, Tengfei Bao and Yulong Zhou
J. Imaging 2026, 12(3), 139; https://doi.org/10.3390/jimaging12030139 - 19 Mar 2026
Abstract
In recent years, deep learning has driven remarkable progress in remote sensing change detection (CD); however, practical deployment is still hindered by two limitations. First, CD results are easily degraded by imaging-induced uncertainties—mixed pixels and blurred boundaries, radiometric inconsistencies (e.g., shadows and seasonal illumination changes), and slight residual misregistration—leading to pseudo-changes and fragmented boundaries. Second, prevailing methods follow a static one-pass inference paradigm and lack an explicit feedback mechanism for adaptive error correction, which weakens generalization in complex or unseen scenes. To address these issues, we propose a feedback-driven CD framework that integrates a dual-branch U-Net with deep reinforcement learning (RL) for pixel-level probabilistic iterative refinement of an initial change probability map. The backbone produces a preliminary posterior estimate of change likelihood from multi-scale bi-temporal features, while a PPO-based RL agent formulates refinement as a Markov decision process. The agent leverages a state representation that fuses multi-scale features, prediction confidence/uncertainty, and spatial consistency cues (e.g., neighborhood coherence and edge responses) to apply multi-step corrective actions. From an imaging and interpretation perspective, the RL module can be viewed as a learnable, self-adaptive imaging optimization mechanism: for high-risk regions affected by blurred boundaries, radiometric inconsistencies, and local misalignment, the agent performs feedback-driven multi-step corrections to improve boundary fidelity and spatial coherence while suppressing pseudo-changes caused by shadows and illumination variations. Experiments on four datasets (CDD, SYSU-CD, PVCD, and BRIGHT) verify consistent improvements. Using SiamU-Net as an example, the proposed RL refinement increases mIoU by 3.07, 2.54, 6.13, and 3.1 points on CDD, SYSU-CD, PVCD, and BRIGHT, respectively, with similarly consistent gains observed when the same RL module is integrated into other representative CD backbones.
(This article belongs to the Section AI in Imaging)

22 pages, 8609 KB  
Article
Integrating SimAM Attention and S-DRU Feature Reconstruction for Sentinel-2 Imagery-Based Soybean Planting Area Extraction
by Haotong Wu, Xinwen Wan, Rong Qian, Chao Ruan, Jinling Zhao and Chuanjian Wang
Agriculture 2026, 16(6), 693; https://doi.org/10.3390/agriculture16060693 - 19 Mar 2026
Abstract
Accurate and stable acquisition of the spatial distribution of soybean planting areas is essential for supporting precision agricultural monitoring and ensuring food security. However, crop remote-sensing mapping for specific regions still faces critical data bottlenecks: high-precision, large-scale pixel-level annotation is costly, resulting in scarce available labeled samples that make it difficult to construct large-scale training datasets. Although parameter-intensive models such as FCN and SegNet can achieve sufficient end-to-end training on large-scale public remote sensing datasets like LoveDA, when directly applied to the data-limited dataset in this study area, the models are prone to overfitting, leading to a significant decline in generalization ability. To address these issues, this study proposes a lightweight U-shaped semantic segmentation model, SimSDRU-Net. The model utilizes a pre-trained VGG-16 backbone to extract shallow texture and deep semantic features. The pre-trained weights mitigate the impact of overfitting in data-limited settings. In the decoding stage, a parameter-free lightweight SimAM attention module enhances effective soybean features and suppresses soil background redundancy, while an embedded S-DRU unit fuses multi-scale features for deep complementary reconstruction to improve edge detail capture. A label dataset was constructed using Sentinel-2 images as the data source and Menard County (USA) as the study area. The USDA CDL was used as a foundation for the dataset, with Google high-resolution images serving as visual interpretation aids. In the context of the experiment, Deeplabv3+ and U-Net++ were compared with U-Net under identical conditions. The results demonstrated that SimSDRU-Net exhibited optimal performance, with MIoU of 89.03%, MPA of 93.81%, and OA of 95.96%. Specifically, SimSDRU-Net uses the SimAM attention module to generate spatial attention weights by analyzing feature statistical differences through an energy function, so as to adaptively enhance soybean texture features. Meanwhile, the S-DRU unit groups, dynamically weights, and cross-branch reconstructs multi-scale convolutional features to preserve fine boundary details and achieve accurate segmentation of soybean plots. The present study demonstrates that SimSDRU-Net integrates lightweight design and high precision in data-limited scenarios, thereby providing effective technical support for the rapid extraction of soybean planting areas in North America.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

28 pages, 15951 KB  
Article
Local–Global Aware Concept Bottleneck Models for Interpretable Image Classification
by Ci Liu, Zijie Lin and Chen Tang
Sensors 2026, 26(6), 1833; https://doi.org/10.3390/s26061833 - 14 Mar 2026
Abstract
Concept Bottleneck Models facilitate interpretable image classification by predicting human-understandable concepts prior to class labels. However, when constructed upon CLIP, they exhibit unreliable concept scores stemming from CLIP’s global representation bias and insufficient region-level sensitivity, which severely constrain their effectiveness in sensor-driven applications like remote sensing and medical imaging where localized visual evidence is critical. To mitigate this, we propose the Local–Global Aware Concept Bottleneck Model (LGA-CBM), which improves concept prediction through a training-free refinement pipeline. Building on initial CLIP-derived concept scores, LGA-CBM incorporates three key components: a Dual Masking Guided Concept Score Refinement (DMCSR) module that exploits attention weights to strengthen region–concept alignment; a Local-to-Global Concept Reidentification (L2GCR) strategy to harmonize local and global activations; and a Similar Concepts Correction Mechanism (SCCM) integrating Grounding DINO for fine-grained disambiguation. A sparse linear layer then maps the refined concepts to class labels, enabling highly interpretable classification with minimal concept usage. Experiments across six benchmark datasets demonstrate that LGA-CBM consistently achieves state-of-the-art performance in both accuracy and interpretability, producing explanations that align closely with human cognition.
(This article belongs to the Special Issue AI for Emerging Image-Based Sensor Applications)

22 pages, 3475 KB  
Article
Cross-Layer Feature Fusion and Attention-Based Class Feature Alignment Network for Unsupervised Cross-Domain Remote Sensing Scene Classification
by Jiahao Wei, Erzhu Li and Ce Zhang
Remote Sens. 2026, 18(6), 859; https://doi.org/10.3390/rs18060859 - 11 Mar 2026
Abstract
Remote sensing scene classification is one of the crucial techniques for high-resolution remote sensing image interpretation and has received widespread attention in recent years. However, acquiring high-quality labeled data is both costly and time-consuming, making unsupervised domain adaptation (UDA) an important research focus in scene classification. Existing UDA methods focus primarily on aligning the overall feature distributions across domains but neglect class feature alignment, resulting in the loss of critical class information. To address this issue, a cross-layer feature fusion and attention-based class feature alignment network (CFACA-NET) is proposed for unsupervised cross-domain remote sensing scene classification. Specifically, a multi-layer feature extraction module (MFEM) consisting of a cross-layer feature fusion module (CFFM), a multi-scale dynamic attention module (MSDAM), and a fused feature optimization module (FFOM) is designed to enhance the representation ability of scene features. A high-confidence sample selection module is further introduced, which utilizes evidence theory and information entropy to obtain reliable pseudo-labels. Finally, a class feature alignment module is proposed, incorporating a two-stage training strategy to achieve effective class feature alignment. Experimental results on three remote sensing scene classification datasets demonstrate that CFACA-NET outperforms existing state-of-the-art methods in cross-domain classification performance, effectively enhancing cross-domain adaptation capability.

25 pages, 11205 KB  
Article
Remote Sensing Image Captioning via Self-Supervised DINOv3 and Transformer Fusion
by Maryam Mehmood, Ahsan Shahzad, Farhan Hussain, Lismer Andres Caceres-Najarro and Muhammad Usman
Remote Sens. 2026, 18(6), 846; https://doi.org/10.3390/rs18060846 - 10 Mar 2026
Abstract
Effective interpretation of coherent and usable information from aerial images (e.g., satellite imagery or high-altitude drone photography) can greatly reduce human effort in many situations, both natural (e.g., earthquakes, forest fires, tsunamis) and man-made (e.g., highway pile-ups, traffic congestion), particularly in disaster management. This research proposes a novel encoder–decoder framework for captioning of remote sensing images that integrates self-supervised DINOv3 visual features with a hybrid Transformer–LSTM decoder. Unlike existing approaches that rely on supervised CNN-based encoders (e.g., ResNet, VGG), the proposed method leverages DINOv3’s self-supervised learning capabilities to extract dense, semantically rich features from aerial images without requiring domain-specific labeled pretraining. The proposed hybrid decoder combines Transformer layers for global context modeling with LSTM layers for sequential caption generation, producing coherent and context-aware descriptions. Feature extraction is performed using the DINOv3 model, which employs the gram-anchoring technique to stabilize dense feature maps. Captions are generated through a hybrid of Transformer with Long Short-Term Memory (LSTM) layers, which adds contextual meaning to captions through sequential hidden layer modeling with gated memory. The model is first evaluated on two traditional remote sensing image captioning datasets: RSICD and UCM-Captions. Multiple evaluation metrics, including Bilingual Evaluation Understudy (BLEU), Consensus-based Image Description Evaluation (CIDEr), Recall-Oriented Understudy for Gisting Evaluation (ROUGE-L), and Metric for Evaluation of Translation with Explicit Ordering (METEOR), are used to quantify the performance and robustness of the proposed DINOv3 hybrid model. The proposed model outperforms conventional Convolutional Neural Network (CNN)- and Vision Transformer (ViT)-based models by approximately 9–12% across most evaluation metrics. Attention heatmaps are also employed to qualitatively validate the proposed model when identifying and describing key spatial elements. In addition, the proposed model is evaluated on advanced remote sensing datasets, including RSITMD, DisasterM3, and GeoChat. The results demonstrate that self-supervised vision transformers are robust encoders for multi-modal understanding in remote sensing image analysis and captioning.

32 pages, 8390 KB  
Article
End-to-End Customized CNN Pipeline for Multiparameter Surface Water Quality Estimation from Sentinel-2 Imagery
by Essam Sharaf El Din, Karim M. El Zahar and Ahmed Shaker
Remote Sens. 2026, 18(5), 794; https://doi.org/10.3390/rs18050794 - 5 Mar 2026
Abstract
This study addresses the critical need for accurate, continuous monitoring of surface water quality parameters (SWQPs) using remote sensing, overcoming limitations in existing models that often rely on pre-trained networks ill-suited for complex aquatic environments. We present a customized convolutional neural network (CNN) architecture, implemented in the MATLAB environment, designed to simultaneously predict optically active (Total Organic Carbon, TOC) and non-optically active (Dissolved Oxygen, DO) parameters from eighteen Sentinel-2 Level-2A satellite images, acquired between 2023 and 2024. Our approach integrates spatial and spectral data through a customized CNN with three convolutional layers and two dense layers, optimized via adaptive learning strategies, data augmentation, and rigorous regularization to enhance predictive performance and prevent overfitting. The models were trained and validated on fused datasets of satellite imagery and in situ measurements, organized into comprehensive four-dimensional arrays capturing spectral, spatial, and sample dimensions. The results demonstrated high accuracy, with coefficient of determination (R2) values exceeding 0.97 and low root mean square error (RMSE) across training, validation, and testing subsets. Spatial prediction maps generated at high resolution revealed realistic ecological and hydrological patterns consistent with known regional water quality dynamics in New Brunswick. Our contribution, accessible to users with MATLAB, lies in the development of a transparent, adaptable, and reproducible CNN framework tailored for multiparameter water quality estimation, which extends beyond traditional empirical, site-specific regression models by enabling non-invasive, cost-effective, and continuous monitoring from satellite platforms over a large, heterogeneous province-scale domain. Additionally, model interpretability was enhanced through SHapley Additive exPlanations (SHAP) analysis, which identified key spectral bands influencing predictions and provided ecological insights, offering guidance for future sensor design and data reduction strategies. This study addresses a significant research gap by providing a dual-parameter focused, end-to-end deep learning solution optimized for province-scale remote sensing data, facilitating more informed environmental management. This study can support water managers and agencies by providing province-wide DO and TOC maps derived from freely available Sentinel-2 imagery, reducing reliance on sparse field sampling alone and helping to identify areas of low oxygen or high organic carbon. Future work will extend this framework temporally and spatially and explore hybrid CNN architectures incorporating temporal dependencies for improved generalization and accuracy.
(This article belongs to the Special Issue Remote Sensing in Water Quality Monitoring)

26 pages, 9001 KB  
Article
PSiam-HDSFNet: A Pseudo-Siamese Hybrid Dilation Spiral Feature Network for Flood Inundation Change Detection Based on Heterogeneous Remote Sensing Imagery
by Yichuang Luo, Xunqiang Gong, Yuanxin Ye, Pengyuan Lv, Shuting Yang, Ailong Ma and Yanfei Zhong
Remote Sens. 2026, 18(5), 788; https://doi.org/10.3390/rs18050788 - 4 Mar 2026
Abstract
Flood change detection from remote sensing data can be used to identify post-disaster flooded areas, providing decision support for emergency rescue and post-disaster reconstruction. Although the combination of SAR and optical images effectively addresses obscuration by clouds and rain, the inherent difference in [...] Read more.
Flood change detection from remote sensing data can be used to identify post-disaster flooded areas, providing decision support for emergency rescue and post-disaster reconstruction. Although the combination of SAR and optical images effectively addresses obscuration by clouds and rain, the inherent difference in their imaging mechanisms poses a challenge to improving the accuracy of flood area change detection. Furthermore, existing flood inundation change detection methods based on heterogeneous remote sensing imagery struggle to distinguish small ground objects within the background from the actual inundated regions. Therefore, a pseudo-Siamese hybrid dilation spiral feature network (PSiam-HDSFNet) is proposed in this paper. Firstly, the feature extraction pipeline progressively processes optical and SAR images through five-layer Enhanced Deep Residual Blocks and five-layer Residual Dense Blocks, respectively. A Hybrid Dilated Pyramid (HDP) module based on a sawtooth wave-like dilated coefficient is designed to enhance multi-scale semantics of deep features in order to selectively reinforce semantic features in flood areas and weaken the noise semantics from small ground objects. Then, a Spiral Feature Pyramid (SFP) module is designed to make the deep features of SAR and optical images more consistent in spatial structure and numerical distribution patterns, so that the features of flood areas become more prominent while the noise semantics from small ground objects are further suppressed. After that, the Galerkin-type attention with linear complexity is introduced to the decoder, rapidly reconstructing the abstract semantic information of floods into interpretable flood features. Finally, the Align OPT-SAR (AlignOS) method is designed to align SAR and optical image features, enabling subsequent flood area detection. Seven metrics are adopted in the comparison between PSiam-HDSFNet and the other 14 methods. 
The results indicate that PSiam-HDSFNet improves change detection accuracy by extracting and processing deep features from the two image types without image-domain translation; its F1 scores exceed those of the second-best method by 7.704%, 7.664%, 4.353%, and 1.111% across the four flood-coverage detection tasks. Full article
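The sawtooth-wave-like dilation schedule in the HDP module echoes the hybrid dilated convolution (HDC) design, in which rates such as (1, 2, 5) cover the receptive field without the "gridding" gaps that repeated equal rates produce. The paper's module is not reproduced here; as a minimal, hypothetical 1-D sketch (function names are ours, not the authors'), one can check a dilation schedule for such gaps:

```python
def receptive_field(dilations, k=3):
    """Receptive field of a stack of stride-1, k-tap dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += (k - 1) * d
    return rf

def coverage(dilations, k=3):
    """1-D input offsets that influence the centre output pixel."""
    offs = {0}
    for d in dilations:
        offs = {o + d * t for o in offs for t in range(-(k // 2), k // 2 + 1)}
    return offs

def has_gridding(dilations, k=3):
    """True if the stacked dilations leave unsampled gaps ('gridding')."""
    offs = sorted(coverage(dilations, k))
    return any(b - a > 1 for a, b in zip(offs, offs[1:]))
```

A sawtooth schedule such as `[1, 2, 5]` samples every pixel in its receptive field, whereas a constant schedule like `[2, 2, 2]` skips every other input pixel, which illustrates why non-uniform dilation rates are preferred for dense prediction.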

24 pages, 1346 KB  
Systematic Review
Artificial Intelligence in Cadastre: A Systematic Review of Methods, Applications, and Trends
by Jingshu Chen, Majid Nazeer, Bo Sum Lee and Man Sing Wong
Land 2026, 15(3), 411; https://doi.org/10.3390/land15030411 - 2 Mar 2026
Viewed by 843
Abstract
Land surveying and registration are core functions of land administration and essential to socio-economic development, making their accuracy and efficiency critical. To date, customary land surveying and registration have relied on manual input, which undermines efficiency and is prone to errors in data handling. During the last decade, the exponential growth of artificial intelligence (AI), in particular geospatial artificial intelligence (GeoAI), has provided new methodologies that can overcome these deficiencies. This review examines AI in cadastral management by analyzing technical solutions and trends across three areas: data collection, modeling, and common applications. It aims to provide a comprehensive survey of the current use of AI in cadastral management and to define future research avenues. Based on a comprehensive review of the literature, this study reaches the following three conclusions. (1) Automated extraction of parcel boundaries has been achieved through deep learning in data collection and processing, removing the bottleneck of manual interpretation. Models such as convolutional neural networks (CNNs) and Transformers have been used for pixel-level semantic segmentation of high-resolution remote sensing images, leading to significant improvements in efficiency and accuracy. (2) Non-spatial data have been processed with natural language processing techniques to automatically extract information and construct relationships, thus overcoming the limitations of paper-based archives and traditional relational databases. (3) Deep learning models have been applied to automatically detect parcel changes and to enable integrated analysis of spatial and non-spatial data, which has supported the transition of cadastral management from two-dimensional to three-dimensional. 
However, several challenges remain, including differences in multi-temporal data processing, spatial semantic ambiguity, and the lack of large-scale, high-quality annotated data. Future research can focus on improving model generalization, advancing cross-modal data fusion, and providing recommendations for the development of a reliable and practical intelligent cadastral system. Full article

20 pages, 69379 KB  
Article
Geothermal Anomaly Identification and Analysis Based on Remote Sensing Technology and Multi-Source Data in the Datong Basin, China
by Daozhi An, Xucai Zhang, Meihua Wei, Yanguang Liu, Wenlong Zhou and Zhiyuan Kang
Sustainability 2026, 18(5), 2407; https://doi.org/10.3390/su18052407 - 2 Mar 2026
Viewed by 281
Abstract
With increasing worldwide attention to green and sustainable energy, thermal infrared remote sensing has gained significant popularity for detecting geothermal anomalies, as it can overcome the limitations of traditional ground surveys. This study explores the potential application of thermal infrared images in geothermal exploration within the Datong Basin. We mainly utilized Landsat-8 images to obtain the land surface temperature (LST), hydrothermal alteration, and linear structures of the Datong Basin, applying the radiative transfer equation (RTE) algorithm, principal component analysis (PCA), and interactive interpretation. The results show that LST retrieval through the RTE method accurately reveals geothermal anomalies in the Datong Basin. Five areas with distinctly high LST values were identified as geothermal anomaly zones based on field investigation: Xiejiatun, Gushancun, Taipingpu, Shuitongsi, and Wenjiayao–Yuanjialiang. Hydrothermal alteration zones (dominated by clays, OH/H2O, and carbonates) in the basin were effectively estimated using the PCA method and band combinations. In total, 394 linear structures were obtained through interactive interpretation, including 45 concealed structures, all associated with deep-seated faults. The basin's primary controlling structures are the Yunmen Mountain piedmont fault (F1-1) and the northern-margin faults of Xiong'er Mountain (F1-2 and F1-3), with F1-1 and F1-3 playing a key role in regional thermal regulation. The high-LST geothermal target zones of Shuitongsi and Gushancun were identified as priority exploration areas based on remote sensing interpretation and geothermal geological conditions. Furthermore, strong consistency was verified between the remote sensing predictions and temperature measurements from four deep drill holes. 
This study confirms that remote sensing is an effective approach for geothermal potential identification, providing a scientific basis for future sustainable resource exploration in other regions. Full article
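The RTE retrieval named in the abstract is the standard single-channel inversion: at-sensor radiance is corrected for atmospheric path terms and surface emissivity, and the resulting blackbody radiance is inverted through Planck's law. A minimal sketch for Landsat-8 TIRS band 10 (K1/K2 constants come from the Landsat metadata; the atmospheric terms τ, L↑, L↓ would normally come from an atmospheric model and are illustrative inputs here):

```python
import math

# Landsat-8 TIRS band 10 thermal constants (from the Landsat metadata file)
K1 = 774.8853   # W / (m^2 sr um)
K2 = 1321.0789  # K

def planck_radiance(Ts):
    """Band-10 blackbody radiance for a surface at temperature Ts (K)."""
    return K1 / (math.exp(K2 / Ts) - 1.0)

def lst_rte(L_sensor, emissivity, tau, L_up, L_down):
    """Land surface temperature (K) via the radiative transfer equation:
    L_sensor = [eps * B(Ts) + (1 - eps) * L_down] * tau + L_up,
    solved for B(Ts) and inverted with Planck's law."""
    B = (L_sensor - L_up - tau * (1.0 - emissivity) * L_down) / (tau * emissivity)
    return K2 / math.log(K1 / B + 1.0)
```

Simulating the forward equation for a 300 K surface and inverting it recovers the input temperature exactly, which is a quick sanity check before applying the inversion to real scenes.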

23 pages, 39500 KB  
Article
An Integrated UAV-Based Solution for Hyperspectral Bidirectional Reflectance Analysis: A Case Study of the Dunhuang Radiometric Calibration Site
by Haoheng Mi, Jingwei Bai, Guangyao Zhou, Hong Guan, Peng Zhang, Hairong Tang, Kang Jiang and Yongchao Zhao
Remote Sens. 2026, 18(5), 674; https://doi.org/10.3390/rs18050674 - 24 Feb 2026
Viewed by 372
Abstract
Angular reflectance effects are essential for radiometric calibration and the interpretation of remotely sensed data, yet they remain difficult to characterize under realistic field conditions. This study presents a UAV-based approach for high-angular-resolution hyperspectral measurement of the hemispherical-directional reflectance factor (HDRF) over the Dunhuang Radiometric Calibration Site (DRCS). Six circular UAV flights were conducted at viewing zenith angles from 10° to 60° using a non-imaging hyperspectral sensor, with continuous ground-based irradiance measurements used to derive HDRF values in the 400–850 nm range. Ross–Li model fitting achieved high accuracy (R2 > 0.968), while residual analysis identified systematic discrepancies associated with forward-scattering geometry and secondary illumination from nearby solar towers, with local residuals of up to 4.5%. These results highlight the value of dense angular sampling and rapid UAV-based measurements for interpreting field-measured HDRF and for the informed application of reflectance models in calibration environments. Full article
(This article belongs to the Section Engineering Remote Sensing)
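The Ross–Li model referenced above is linear in its three kernel weights, so fits like the R2 > 0.968 reported here reduce to ordinary least squares once the volumetric (RossThick) and geometric (LiSparse) kernels are evaluated for each viewing geometry. A sketch assuming the kernels are already computed (kernel evaluation itself is omitted; function names are illustrative, not the paper's code):

```python
import numpy as np

def fit_ross_li(k_vol, k_geo, hdrf):
    """Least-squares fit of the kernel-driven Ross-Li model
    R = f_iso + f_vol * K_vol + f_geo * K_geo to measured HDRF samples.
    k_vol, k_geo, hdrf: 1-D arrays over the sampled view geometries."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(A, hdrf, rcond=None)
    return coeffs  # (f_iso, f_vol, f_geo)

def r_squared(k_vol, k_geo, hdrf, coeffs):
    """Coefficient of determination of the fitted kernel model."""
    pred = coeffs[0] + coeffs[1] * k_vol + coeffs[2] * k_geo
    ss_res = np.sum((hdrf - pred) ** 2)
    ss_tot = np.sum((hdrf - hdrf.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Because the fit is linear, residual analysis of the kind described in the abstract (per-geometry `hdrf - pred` values) falls out of the same design matrix at no extra cost.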
