Search Results (12,010)

Search Parameters:
Keywords = remote sensing image

21 pages, 15074 KB  
Article
Single-View High-Resolution Satellite Image Positioning by Integrating Global Open-Source Basemaps
by Zihui Xu, Ke Zhang, Xianwen Wang, Bing Wang, Yuhao Wang, Jingyu Wang, Yu Su, Feima Yuan, Bin Dong, Jianhua Li, Zhiquan Zhao and Tao Liu
Remote Sens. 2026, 18(7), 1028; https://doi.org/10.3390/rs18071028 (registering DOI) - 29 Mar 2026
Abstract
High-resolution optical satellite data have become fundamental for acquiring globally accurate remote sensing information (e.g., object geometric and spectral characteristics). However, due to the difficulty of obtaining accurate ground control points on a global scale, achieving accurate global positioning of satellite imagery remains a technical challenge. To realize global positioning optimization without relying on accurate control points, this paper leverages open-source data such as Google Earth orthophoto maps (GE maps) and FABDEM, and proposes the Coarse-to-Fine Open-Source Basemap Integration (CFBI) method. The core idea of this method is to effectively eliminate gross errors in coarse control points by leveraging the differential projection offsets of roofs between single-view satellite images and multi-source orthophotos. On this basis, an iterative weight-selection adjustment strategy is adopted to achieve accurate positioning results. Experiments conducted in three regions, Jacksonville, New York, and Boston, demonstrate that the proposed algorithm significantly improves the positioning accuracy of satellite imagery, with an average enhancement of 62.92% and accuracy within 2 m in most areas. Full article
(This article belongs to the Special Issue AI-Enhanced Remote Sensing for Image Matching and 3D Reconstruction)
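The iterative weight-selection adjustment at the core of CFBI can be illustrated with a minimal robust-estimation sketch. This is not the authors' implementation: the Huber-style weighting, the threshold `c`, and the reduction of the adjustment to a single global shift are all simplifying assumptions for illustration.

```python
import numpy as np

def irls_shift(offsets, n_iter=10, c=1.5):
    """Estimate a global (dx, dy) shift from noisy control-point offsets
    by iterative weight selection: residuals much larger than the median
    are downweighted, so gross errors barely influence the solution."""
    shift = offsets.mean(axis=0)                   # start from the plain mean
    for _ in range(n_iter):
        r = np.linalg.norm(offsets - shift, axis=1)
        scale = np.median(r) + 1e-9                # robust residual scale
        w = np.where(r <= c * scale, 1.0, c * scale / r)  # Huber-style weights
        shift = (w[:, None] * offsets).sum(axis=0) / w.sum()
    return shift

# 50 inlier offsets around (2, -1) metres plus two gross errors
rng = np.random.default_rng(0)
pts = rng.normal([2.0, -1.0], 0.1, size=(50, 2))
pts = np.vstack([pts, [[40.0, 30.0], [-25.0, 60.0]]])
est = irls_shift(pts)
```

The plain mean is pulled far off by the two outliers; after a few reweighting passes the estimate settles near the inlier offset.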

25 pages, 264783 KB  
Article
RDAH-Net: Bridging Relative Depth and Absolute Height for Monocular Height Estimation in Remote Sensing
by Liting Jiang, Feng Wang, Niangang Jiao, Jingxing Zhu, Yuming Xiang and Hongjian You
Remote Sens. 2026, 18(7), 1024; https://doi.org/10.3390/rs18071024 (registering DOI) - 29 Mar 2026
Abstract
Generating high-precision normalized digital surface models (nDSMs) from a single remote sensing image remains a challenging and ill-posed problem due to the absence of reliable geometric constraints. In this work, we show that monocular depth provides structurally stable cues of local geometry but lacks the global scale and vertical reference required for absolute height recovery. This intrinsic mismatch limits direct depth-to-height regression, particularly when transferring across heterogeneous terrains, land-cover compositions, and imaging conditions. Building on this idea, we propose the Relative Depth–Absolute Height Prediction Network (RDAH-Net), a framework that exploits relative depth as a geometry-aware prior while learning terrain-dependent height mappings from image appearance to absolute height. As the backbone, we employ a lightweight MobileNetV2 enhanced with a Convolutional Block Attention Module (CBAM), and further incorporate a cross-modal bidirectional attention fusion scheme with positional encoding to achieve a deep and effective fusion of image appearance and depth prior cues. Finally, a PixelShuffle-based upsampling strategy is used to sharpen prediction details and mitigate typical upsampling artifacts. Extensive experiments across diverse regions demonstrate that RDAH-Net achieves robust and generalizable height estimation, providing a practical alternative for large-scale mapping and rapid update scenarios. Full article
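The scale/offset mismatch between relative depth and absolute height that motivates RDAH-Net can be made concrete with a standard least-squares alignment sketch. This is illustrative only: the paper learns terrain-dependent mappings rather than fitting a global affine one, and `ref_mask` is a hypothetical mask of pixels with reference heights (e.g., from a lidar nDSM).

```python
import numpy as np

def align_depth_to_height(rel_depth, ref_height, ref_mask):
    """Recover the global scale and vertical offset that relative depth
    lacks, by least-squares fitting height = a * depth + b on pixels
    where a reference height is available."""
    d = rel_depth[ref_mask]
    h = ref_height[ref_mask]
    A = np.stack([d, np.ones_like(d)], axis=1)     # design matrix [d, 1]
    (a, b), *_ = np.linalg.lstsq(A, h, rcond=None)
    return a * rel_depth + b                       # apply to the full map

rel = np.array([[0.1, 0.2], [0.5, 1.0]])           # relative depth, unitless
ref = 3.0 * rel + 2.0                              # synthetic truth: scale 3, offset 2
mask = np.ones_like(rel, dtype=bool)
abs_h = align_depth_to_height(rel, ref, mask)
```

When the true relation really is affine, the fit recovers the reference exactly; the paper's point is precisely that a single global affine map fails across heterogeneous terrains.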
27 pages, 7912 KB  
Article
Hierarchical Wetland Mapping in the East China Sea Based on Integrated Multifaceted Source Features
by Jie Wang, Yixuan Zhou, Xin Fang, Shengqi Wang, Haiyang Zhang and Runbin Hu
Remote Sens. 2026, 18(7), 1023; https://doi.org/10.3390/rs18071023 (registering DOI) - 29 Mar 2026
Abstract
The East China Sea represents a critical coastal wetland region, characterized by complex geomorphology, heterogeneous land-cover composition, and diverse wetland types. Accurate delineation of coastal wetland extent is essential for ecosystem service assessment and sustainable coastal management, directly contributing to wetland-related Sustainable Development Goals (SDGs), particularly SDG 15, on ecosystem conservation and biodiversity protection. However, pronounced spectral similarity and structural heterogeneity among wetland classes pose substantial challenges to reliable classification. To address these challenges, this study developed a hierarchical classification framework integrating Random Forest, K-means clustering, and a decision tree classifier based on multi-source Sentinel-1 and Sentinel-2 imagery. Spectral, polarimetric, texture, and morphological features were systematically constructed to enhance class separability. Using this framework, a 10 m resolution coastal wetland map of the East China Sea was generated for 2023. The proposed approach achieved an overall accuracy of 91.32% and improved the discrimination of spectrally similar wetland types. Feature fusion reduced confusion among water-related classes, while object-based clustering improved the extraction of linear riverine wetlands. The resulting 10 m wetland map provides updated spatial information for ecological assessment and coastal management in the East China Sea. Full article
(This article belongs to the Special Issue Big Earth Data in Support of the Sustainable Development Goals)
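A minimal sketch of the kind of rule-based first level such a hierarchical classifier might use, built on NDVI/NDWI spectral indices. The thresholds and class codes here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def first_level_classes(nir, red, green):
    """Coarse water / vegetation / other split from Sentinel-2-style
    band reflectances using NDWI and NDVI thresholds."""
    ndvi = (nir - red) / (nir + red + 1e-9)        # vegetation index
    ndwi = (green - nir) / (green + nir + 1e-9)    # water index
    cls = np.full(nir.shape, 2)                    # 2 = other
    cls[ndvi > 0.3] = 1                            # 1 = vegetation
    cls[ndwi > 0.2] = 0                            # 0 = water (takes precedence)
    return cls

nir   = np.array([0.05, 0.60, 0.20])
red   = np.array([0.04, 0.10, 0.19])
green = np.array([0.10, 0.10, 0.20])
labels = first_level_classes(nir, red, green)
```

Finer wetland types (e.g., linear riverine wetlands) would then be resolved within each coarse class, which is where the paper's Random Forest, K-means, and object-based steps come in.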

34 pages, 20615 KB  
Article
Unsupervised Change Detection in Heterogeneous Remote Sensing Images via Dynamic Mask Guidance
by Paixin Xie, Gao Chen, Qingfeng Zhou, Xiaoyan Li and Jingwen Yan
Remote Sens. 2026, 18(7), 1022; https://doi.org/10.3390/rs18071022 (registering DOI) - 29 Mar 2026
Abstract
Unsupervised change detection (CD) in heterogeneous remote sensing images is intrinsically difficult due to severe sensor-specific discrepancies. In the absence of ground truth, these discrepancies result in ambiguous optimization objectives that make it difficult for models to distinguish true land-cover changes from modality-driven pseudo-changes. To address these challenges, we propose MaskUCD, a novel unsupervised framework that reformulates heterogeneous CD as a dynamic mask-driven constraint scheduling problem. Fundamentally distinct from conventional strategies that enforce selective feature alignment, MaskUCD employs a spatially adaptive optimization mechanism. Specifically, the iteratively refined mask serves as a geometric reference to guide optimization. It enforces strict feature alignment in mask-unchanged regions to suppress modality-induced discrepancies, while simultaneously promoting feature divergence in mask-changed regions to emphasize semantic inconsistencies. In this way, explicit optimization objectives are established, together with an intrinsic interpretability constraint that guides the CD process. This strategy treats the mask as a structural guide for representation learning rather than a ground-truth reference, thereby avoiding error accumulation caused by directly using inaccurate masks as supervisory signals. To facilitate this optimization, we design a specialized asymmetric autoencoder with a hybrid encoder architecture, utilizing multi-scale frequency analysis and global context modeling to enhance feature representation capabilities. Consequently, this design enables the generation of refined and semantically consistent masks, which provide increasingly precise structural guidance, yielding converged and discriminative difference maps. Extensive experiments demonstrate that MaskUCD achieves state-of-the-art performance and superior robustness compared to existing advanced methods. Full article
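The mask-driven constraint scheduling can be sketched as a spatially weighted objective: align features where the current mask says "unchanged", push them apart where it says "changed". This is a toy formulation under our own assumptions; the margin, weighting, and exact loss shape are not taken from the paper.

```python
import numpy as np

def mask_guided_loss(f1, f2, change_mask, margin=1.0, lam=0.5):
    """Spatially adaptive objective driven by an iteratively refined mask:
    feature alignment in unchanged regions, margin-based divergence in
    changed regions."""
    d2 = ((f1 - f2) ** 2).sum(axis=-1)             # per-pixel squared distance
    align = (d2 * (1 - change_mask)).mean()        # pull together where unchanged
    diverge = (np.maximum(0.0, margin - np.sqrt(d2)) ** 2
               * change_mask).mean()               # push apart where changed
    return align + lam * diverge

f1 = np.zeros((4, 4, 8))                           # toy feature maps
f2 = np.zeros((4, 4, 8))
mask = np.zeros((4, 4))                            # no pixels flagged as changed
loss = mask_guided_loss(f1, f2, mask)
```

With identical features and an all-unchanged mask the objective is zero, which is the fixed point such an alignment term drives toward in modality-induced pseudo-change regions.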
31 pages, 11688 KB  
Article
RShDet: An Adaptive Spectral-Aware Network for Remote Sensing Object Detection Under Haze Corruption
by Wei Zhang, Yuantao Wang, Haowei Yang and Xuerui Mao
Remote Sens. 2026, 18(7), 1020; https://doi.org/10.3390/rs18071020 (registering DOI) - 29 Mar 2026
Abstract
Remote sensing (RS) object detection faces intrinsic challenges arising from the overhead imaging paradigm and the diversity of climatic conditions. In particular, atmospheric phenomena such as clouds and haze cause severe visual degradation, making reliable object detection difficult. However, most existing detectors are developed under clear-weather conditions, which limits their generalization capability in realistic haze-degraded RS scenarios. To alleviate this issue, an adaptive spectral-aware network for RS object detection under haze interference is proposed, termed RShDet, which is designed to handle both high-altitude RS imagery and low-altitude Unmanned Aerial Vehicle (UAV) scenarios. Firstly, the Object-Centered Dynamic Enhancement (OCDE) module dynamically adjusts the spatial positions of key-value pairs through query-agnostic offsets, enabling the network to emphasize object-relevant regions while suppressing haze-induced background interference. Secondly, the Dynamic Multi-Spectral Perception and Filtering (DSPF) module introduces a multi-spectral attention mechanism that adaptively selects informative frequency components, thereby enhancing discriminative feature representations in hazy environments. Thirdly, the Frequency-Domain Multi-Feature Fusion (FDMF) module employs learnable weights to complementarily integrate amplitude and phase information in the frequency domain, enabling effective cross-task feature interaction between the enhancement and detection branches. Extensive experiments demonstrate that RShDet consistently achieves superior detection performance under hazy conditions across both synthetic and real-world benchmarks. Specifically, it achieves improvements of 2.4% mAP50 on Hazy-DOTA, 1.9% mAP on HazyDet, and 2.33% mAP on the real-world foggy dataset RTTS, surpassing existing state-of-the-art methods. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
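The amplitude/phase fusion idea behind the FDMF module can be sketched with NumPy's FFT. Scalar weights stand in for the paper's learnable weights; this is an illustration of the mechanism, not the authors' module.

```python
import numpy as np

def freq_fuse(x_enh, x_det, w_amp=0.5, w_pha=0.5):
    """Blend two feature maps in the frequency domain: mix their FFT
    amplitudes and phases separately with scalar weights, then invert."""
    F1, F2 = np.fft.fft2(x_enh), np.fft.fft2(x_det)
    amp = w_amp * np.abs(F1) + (1 - w_amp) * np.abs(F2)      # amplitude mix
    pha = w_pha * np.angle(F1) + (1 - w_pha) * np.angle(F2)  # phase mix
    return np.real(np.fft.ifft2(amp * np.exp(1j * pha)))

x = np.arange(16.0).reshape(4, 4)
fused = freq_fuse(x, x)   # fusing a map with itself must reproduce it
```

Separating amplitude (energy per frequency) from phase (structure) is what lets the enhancement and detection branches exchange complementary information.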
42 pages, 6313 KB  
Article
When Lie Groups Meet Hyperspectral Images: Equivariant Manifold Network for Few-Shot HSI Classification
by Haolong Ban, Junchao Feng, Zejin Liu, Yue Jiang, Zhenxing Wang, Jialiang Liu, Yaowen Hu and Yuanshan Lin
Sensors 2026, 26(7), 2117; https://doi.org/10.3390/s26072117 (registering DOI) - 29 Mar 2026
Abstract
Hyperspectral imagery (HSI) offers rich spectral signatures and fine-grained spatial structures for remote sensing, but practical HSI classification is often constrained by scarce labels and complex geometric disturbances, including translation, rotation, scaling, and shear. Existing deep models are typically developed under Euclidean assumptions and rely on data-hungry training pipelines, which makes them brittle in the few-shot regime. To address this challenge, we propose EMNet, a Lie-group-based Equivariant Manifold Network for few-shot HSI classification that explicitly encodes geometric invariance and improves discriminative accuracy. EMNet couples an SE(2)-based Equivariance-Guided Module (EGM) to enforce equivariance to translations and rotations with an affine Lie-group-based Characteristic Filtering Convolution (CFC) that models scaling and shearing on the feature manifold while adaptively suppressing redundant responses. Extensive experiments on WHU-Hi-HongHu, Houston2013, and Indian Pines demonstrate state-of-the-art performance with competitive complexity, achieving OAs of 95.77% (50 samples/class), 97.37% (50 samples/class), and 96.09% (5% labeled samples), respectively, and yielding up to +3.34% OA, +6.01% AA, and +4.14% Kappa over the strong DGPF-RENet baseline. Under a stricter 25-samples-per-class protocol with 10 repeated random hold-out splits, EMNet consistently improves the mean accuracy while exhibiting lower variance, indicating better stability to sampling uncertainty. On the city-scale Xiongan New Area dataset with extreme long-tail imbalance (1580 × 3750 pixels, 256 bands, and 5.925 M labeled pixels), EMNet further boosts OA from 85.89% to 93.77% under the 1% labeled-sample protocol, highlighting robust generalization for large-area mapping. 
Beyond point estimates, we report mean ± SD/SE across repeated splits and provide rigorous statistical validation by computing Yule’s Q statistic for class-wise behavior similarity, performing the Friedman test with Nemenyi post hoc comparisons for multi-method ranking significance, and presenting 95% confidence intervals together with Cohen’s d effect sizes to quantify practical improvement. Full article
(This article belongs to the Special Issue Hyperspectral Sensing: Imaging and Applications)
35 pages, 51980 KB  
Article
Structurally Consistent and Grounding-Aware Stagewise Reasoning for Referring Remote Sensing Image Segmentation
by Shan Dong, Jianlin Xie, Liang Chen, He Chen, Baogui Qi and Yunqiu Ge
Remote Sens. 2026, 18(7), 1015; https://doi.org/10.3390/rs18071015 (registering DOI) - 28 Mar 2026
Abstract
Referring Remote Sensing Image Segmentation (RRSIS) is a representative multimodal understanding task for remote sensing, which segments designated targets from remote sensing images according to free-form natural language descriptions. However, complex remote sensing characteristics, such as cluttered backgrounds, large-scale variations, small scattered targets, and repetitive textures, lead to unstable visual grounding and further spatial grounding drift, resulting in inaccurate segmentation results. Existing approaches typically perform implicit visual–linguistic fusion across encoding and decoding stages, entangling spatial grounding with mask refinement. This tightly coupled formulation lacks explicit structural constraints and is prone to cross-modal ambiguity, especially in complex remote sensing layouts. To address these limitations, we propose a Structurally consistent and Grounding-aware Stagewise Reasoning Framework (SGSRF) that follows a grounding-first, segmentation-second paradigm. The framework decomposes inference into three cascaded stages with progressively imposed structural constraints. First, Cross-modal Consistency Refinement (CCR) lays the foundation for stable spatial grounding by enhancing visual–textual structural alignment via CLIP-based features and Structural Consistency Regularization (SCR), producing well-aligned multimodal representations and reliable grounding cues. Second, Grounding-aware Prompt Generation (GPG) bridges grounding and segmentation by converting aligned representations into complementary sparse and dense prompts, which serve as explicit grounding guidance for the segmentation model. Third, Grounding Modulated Segmentation (GMS) leverages the Segment Anything Model (SAM) to generate fine-grained mask predictions under the joint guidance of prompts and grounding cues, improving spatial grounding stability and robustness to background interference and scale variation.
Extensive experiments on three remote sensing benchmarks, namely RefSegRS, RRSIS-D, and RISBench, demonstrate that SGSRF achieves state-of-the-art performance. The proposed stagewise paradigm integrates structural alignment, explicit grounding, and prompt-driven segmentation into a unified framework, providing a practical and robust solution for RRSIS in real-world Earth observation applications. Full article
18 pages, 11374 KB  
Article
CSGL-Former: Cross-Stripes Global–Local Fusion Transformer for Remote Sensing Image Dehazing
by Shuyi Feng, Xiran Zhang, Jie Yuan and Youwen Zhu
Sensors 2026, 26(7), 2102; https://doi.org/10.3390/s26072102 (registering DOI) - 28 Mar 2026
Abstract
Remote sensing (RS) images are often degraded by atmospheric haze, which compromises both visual interpretation and downstream applications. To address this, we introduce CSGL-Former, a novel Cross-Stripes Global–Local Fusion Transformer for RS image dehazing. Our model efficiently captures anisotropic long-range dependencies using cross-stripes attention (CSA) and aggregates hierarchical global semantics via a Multi-Layer Global Aggregation (MLGA) module. In the decoder, global context is adaptively blended with fine-grained local features to restore intricate textures. Finally, inspired by the atmospheric scattering model, a soft reconstruction head restores the clear image by predicting spatially varying affine parameters, strictly preserving content fidelity while effectively removing haze. Trained end-to-end, CSGL-Former demonstrates a compelling balance of accuracy and efficiency. Extensive experiments on the RRSHID and SateHaze1K benchmarks show that our model achieves state-of-the-art or highly competitive performance against representative baselines. Ablation studies further validate the effectiveness of each proposed component. Full article
(This article belongs to the Special Issue Advanced Pattern Recognition: Intelligent Sensing and Imaging)
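The atmospheric scattering model that motivates the soft reconstruction head can be inverted directly. The sketch below shows why the restoration amounts to a per-pixel affine map of the hazy input; this is the textbook inversion with known transmission and atmospheric light, not CSGL-Former's predicted parameters.

```python
import numpy as np

def dehaze_affine(I, t, A):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).
    Rearranged, J = (I - A)/t + A, i.e. an affine map of the hazy
    input with spatially varying coefficients (1/t, A*(1 - 1/t))."""
    t = np.clip(t, 0.1, 1.0)   # avoid division blow-up in dense haze
    return (I - A) / t + A

J = np.array([[0.2, 0.8], [0.5, 0.1]])   # clear image
t = np.full_like(J, 0.6)                  # transmission map
A = 0.9                                   # atmospheric light
I = J * t + A * (1 - t)                   # synthesize haze
restored = dehaze_affine(I, t, A)
```

A network that predicts the affine parameters directly, as described in the abstract, sidesteps estimating `t` and `A` explicitly while keeping this physically grounded form.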

22 pages, 6161 KB  
Article
Remote Sensing Data-Based Modelling for Analyzing Green Tide Proliferation Drivers in the Yellow Sea
by Jing Yang, Enye He, Xuanliang Ji, Qianqiu Guo, Shan Gao and Yuxuan Jiang
Remote Sens. 2026, 18(7), 1014; https://doi.org/10.3390/rs18071014 (registering DOI) - 28 Mar 2026
Abstract
Since 2007, green tides have recurrently occurred in the Yellow Sea during spring and summer, with a massive outbreak recorded in 2021. Given the critical significance of green tide monitoring and prediction for marine ecological security and sustainable development, this study developed a satellite remote sensing-validated coupled simulation system for green tide drift and growth, by integrating multi-source satellite remote sensing data and oceanographic reanalysis datasets. Leveraging this system, we systematically analyzed the spatiotemporal evolution characteristics and underlying driving mechanisms of both routine green tide processes in 2014–2015 and the extreme 2021 event. Satellite images with low cloud cover and extensive green tide distribution were screened to confirm the accuracy of green tide drift trajectories and distribution ranges for validating the model’s reliability, and the results demonstrated the spatial consistency between simulation results and satellite observations. The validated model was used to track the drift and growth–decline processes of green tides and investigate the underlying cause of high-biomass appearance in 2021. Combined with environmental parameters, our analyses revealed that variations in attachment substrates alter wind resistance coefficients, thereby potentially accelerating the northward drift velocity of green tides. Furthermore, substrate properties may exert a significant regulatory effect on the attachment, germination, and biomass accumulation of Ulva prolifera spores, which could be a leading factor driving the massive green tide outbreak. Full article
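The wind-resistance effect on drift discussed here can be sketched with a common drift parameterization, in which patch velocity is the surface current plus a wind drag factor times the wind. The coefficient value and forward-Euler step are illustrative; the paper derives substrate-dependent coefficients rather than using a fixed one.

```python
import numpy as np

def drift_step(pos, v_current, v_wind, k_wind=0.03, dt=3600.0):
    """One forward-Euler step of green tide patch drift:
    velocity = surface current + k_wind * wind. A larger k_wind
    (e.g., from altered attachment substrates) speeds the
    wind-driven component of the northward drift."""
    v = v_current + k_wind * v_wind
    return pos + v * dt

pos  = np.array([0.0, 0.0])     # metres
cur  = np.array([0.1, 0.2])     # m/s surface current
wind = np.array([5.0, 0.0])     # m/s wind
new_pos = drift_step(pos, cur, wind)   # position after one hour
```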

37 pages, 4825 KB  
Article
Effects of Cane Density on Primocane Raspberry Assessed Using UAV-Based Multispectral Imaging
by Kamil Buczyński, Magdalena Kapłan and Zbigniew Jarosz
Agriculture 2026, 16(7), 742; https://doi.org/10.3390/agriculture16070742 - 27 Mar 2026
Abstract
Cane density is a key management factor in raspberry production, directly affecting yield formation and canopy structure. However, most previous studies have focused on floricane cultivars and relied on conventional field measurements, while the response of primocane raspberries and their canopy level dynamics remain less explored. The objective of this study was to evaluate how cane density influences yield components, cane growth, and canopy structure in primocane raspberry cultivars, and to assess whether these effects can be captured using UAV-based multispectral imaging. Field experiments were conducted over two growing seasons using two primocane cultivars grown under different cane density treatments. Yield components and cane growth parameters were measured, and repeated drone multispectral surveys were performed during the production period to quantify the spatial and temporal variability of vegetation indices. Increasing cane density led to higher total yield per unit area in both cultivars, mainly through an increase in fruit number rather than fruit weight, indicating a compensatory yield response. Cane density significantly modified canopy architecture, with responses varying between cultivars and seasons. Multispectral vegetation indices revealed predominantly consistent density-dependent gradients, characterized by higher mean values and reduced spatial and temporal variability at higher cane densities. Denser cane configurations were associated with lower total temporal amplitude and smoother seasonal trajectories, indicating a stabilization of canopy reflectance dynamics. Although this overall pattern was preserved across indices, the magnitude and regularity of temporal responses were index-specific and cultivar-dependent. The results demonstrate that cane density management in primocane raspberries affects both yield formation and canopy structure, and that these effects can be effectively monitored using UAV-based multispectral imaging. 
Integrating remote sensing with field measurements offers a valuable approach for supporting data-driven optimization of raspberry production systems. Full article

25 pages, 9555 KB  
Article
EFSL-YOLO: An Improved Model for Small Object Detection in UAV Vision
by Meng Zhou, Shuke He, Chang Wang and Jing Wang
Drones 2026, 10(4), 243; https://doi.org/10.3390/drones10040243 - 27 Mar 2026
Abstract
To address the challenges in UAV remote sensing imagery, such as small object size, dense occlusion and complex background interference, this paper proposes an enhanced small object detection algorithm based on an improved YOLOv13 model for drone applications in complex weather environments. First, an enhanced feature fusion attention network (EFFA-Net) is designed in the preprocessing stage to reduce image degradation and suppress the interference caused by smoke and haze. Then, in the backbone, a swish-gated convolution (SwiGLUConv) module is designed to adaptively expand the receptive field and enhance multi-scale feature extraction, which strengthens the representation of small targets while maintaining efficient computation. Furthermore, a locally enhanced multi-scale context fusion (LF-MSCF) module is integrated into the feature fusion neck of YOLO, combining multi-head self-attention, channel attention, and spatial attention to suppress background noise and redundant responses, thereby improving detection accuracy. Extensive experiments on the VisDrone-DET2019 dataset, UAVDT dataset, and HazyDet dataset demonstrate that the proposed algorithm outperforms other mainstream methods, showcasing excellent detection accuracy and robustness in complex UAV aerial scenarios. Full article
21 pages, 11455 KB  
Article
Cross-Scale Spectral Calibration for Spatiotemporal Fusion of Remote Sensing Images
by Yishuo Tian, Xiaorong Xue, Jingtong Yang, Wen Zhang, Bingyan Lu, Xin Zhao and Wancheng Wang
Sensors 2026, 26(7), 2090; https://doi.org/10.3390/s26072090 - 27 Mar 2026
Abstract
Spatiotemporal fusion aims to generate remote sensing images with both high spatial and high temporal resolution by integrating multi-source observations. However, significant spectral inconsistencies often arise when fusing images acquired at different spatial scales, which severely degrade the radiometric fidelity and temporal reliability of the fused results. Most existing methods focus on enhancing spatial details or temporal consistency, while the cross-scale spectral discrepancy between coarse- and fine-resolution images has not been sufficiently addressed. To tackle this issue, we propose a cross-scale spectral calibration framework for spatiotemporal fusion (XSC-Net), which explicitly models and corrects spectral responses across different spatial scales. The proposed method introduces a spatial feature refinement block to enhance spatially discriminative structures and a hierarchical spectral refinement block to adaptively calibrate channel-wise spectral representations. By jointly exploiting spatial and spectral correlations, the proposed framework effectively suppresses spectral distortion while preserving fine spatial details. Extensive experiments on the public CIA and LGC datasets indicate that XSC-Net compares favorably with state-of-the-art methods, demonstrating superior performance over established baselines. Furthermore, ablation studies verify the efficacy and contribution of the proposed architectural components. Full article
(This article belongs to the Section Remote Sensors)

25 pages, 8205 KB  
Article
Forest Road Extraction via Optimized DeepLabv3+ and Multi-Temporal Remote Sensing for Wildfire Emergency Response
by Zhuoran Gao, Ziyang Li, Weiyuan Yao, Tingtao Zhang, Shi Qiu and Zhaoyan Liu
Appl. Sci. 2026, 16(7), 3228; https://doi.org/10.3390/app16073228 - 26 Mar 2026
Abstract
Forest fires occur frequently in China; however, the complex terrain and incomplete road networks severely constrain ground rescue efficiency. Accurate forest road information is essential for the optimization of emergency response and rescue force deployment. Existing road extraction algorithms are primarily designed for urban environments and exhibit limited efficacy in forest scenarios due to dense canopy, complex background interference and specific forest road features. To address this gap, this study proposes a forest road extraction method based on an enhanced DeepLabv3+ model using multi-temporal, high-resolution satellite imagery. Specifically, a Multi-Scale Channel Attention (MCSA) mechanism is embedded in skip connections to suppress background interference, while strip pooling is integrated into the Atrous Spatial Pyramid Pooling (ASPP) module to better capture slender road features. A composite Focal-Dice loss function is also constructed to mitigate sample imbalance. Finally, by applying the model in multi-temporal remote sensing images, a fusion strategy is introduced to integrate multi-seasonal road masks to enhance overall accuracy and topological integrity. Experimental results show that the proposed method achieves a precision of 54.1%, an F1-Score of 59.3%, and an IoU of 41.8%, effectively enhancing road continuity and providing robust technical support for fire-rescue decision-making. Full article
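A composite Focal–Dice loss of the kind described can be sketched in a few lines: the focal term focuses training on hard pixels, while the Dice term scores region overlap, which matters for thin, class-imbalanced road masks. The weighting `alpha` and focusing parameter `gamma` below are illustrative choices, not the paper's values.

```python
import numpy as np

def focal_dice_loss(p, y, gamma=2.0, alpha=0.5, eps=1e-7):
    """Composite loss for imbalanced binary segmentation:
    alpha * focal cross-entropy + (1 - alpha) * Dice loss."""
    p = np.clip(p, eps, 1 - eps)
    focal = -(y * (1 - p) ** gamma * np.log(p)
              + (1 - y) * p ** gamma * np.log(1 - p)).mean()
    dice = 1 - (2 * (p * y).sum() + eps) / (p.sum() + y.sum() + eps)
    return alpha * focal + (1 - alpha) * dice

y = np.array([1.0, 1.0, 0.0, 0.0])                    # ground-truth road mask
good = focal_dice_loss(np.array([0.9, 0.8, 0.1, 0.2]), y)
bad  = focal_dice_loss(np.array([0.2, 0.1, 0.8, 0.9]), y)
```

Confident correct predictions score a much lower loss than confident wrong ones, and the Dice term keeps the sparse positive class from being swamped by background pixels.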

33 pages, 172200 KB  
Article
HDCGAN+: A Low-Illumination UAV Remote Sensing Image Enhancement and Evaluation Method Based on WPID
by Kelly Chen Ke, Min Sun, Xinyi Wang, Dong Liu and Hanjun Yang
Remote Sens. 2026, 18(7), 999; https://doi.org/10.3390/rs18070999 - 26 Mar 2026
Abstract
Remote sensing images acquired by UAVs under nighttime or low-illumination conditions suffer from insufficient illumination, leading to degraded image quality, detail loss, and noise, which restrict their application in public security and disaster emergency scenarios. Although existing machine learning-based enhancement methods can recover [...] Read more.
Remote sensing images acquired by UAVs under nighttime or low-illumination conditions suffer from insufficient illumination, leading to degraded image quality, detail loss, and noise, which restrict their application in public security and disaster emergency scenarios. Although existing machine learning-based enhancement methods can recover part of the missing information, they often cause color distortion and texture inconsistency. This study proposes an improved low-illumination image enhancement method based on a Weakly Paired Image Dataset (WPID), combining the Hierarchical Deep Convolutional Generative Adversarial Network (HDCGAN) with a low-rank image fusion strategy to enhance the quality of low-illumination UAV remote sensing images. First, YCbCr color channel separation is applied to preserve color information from visible images. Then, a Low-Rank Representation Fusion Network (LRRNet) is employed to perform structure-aware fusion between thermal infrared (TIR) and visible images, thereby enabling effective preservation of structural details and realistic color appearance. Furthermore, a weakly paired training mechanism is incorporated into HDCGAN to enhance detail restoration and structural fidelity. To achieve objective evaluation, a structural consistency assessment framework is constructed based on semantic segmentation results from the Segment Anything Model (SAM). Experimental results demonstrate that the proposed method outperforms state-of-the-art approaches in both visual quality and application-oriented evaluation metrics. Full article
(This article belongs to the Section Remote Sensing Image Processing)
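As a minimal illustration of the YCbCr channel-separation step described above, the sketch below converts RGB to YCbCr so that luminance (Y) can be enhanced while chrominance (Cb, Cr) is preserved. The BT.601 full-range (JPEG-style) coefficients are an assumption, since the listing does not specify the exact transform used.

```python
def rgb_to_ycbcr(r, g, b):
    # Full-range ITU-R BT.601 forward transform (assumed, JPEG convention).
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    # Matching inverse transform.
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return r, g, b

# Brighten luminance only, leaving color (Cb, Cr) untouched.
y, cb, cr = rgb_to_ycbcr(40.0, 30.0, 20.0)
enhanced = ycbcr_to_rgb(min(y * 1.8, 255.0), cb, cr)
```

Operating on Y alone is what lets an enhancement network recover brightness and detail without introducing the color distortion the abstract attributes to earlier methods.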
25 pages, 3151 KB  
Article
FCR-TransUNet: A Novel Approach to Crop Classification in Remote Sensing Images Employing Attention and Feature Enhancement Techniques
by Yongqi Han, Xingtong Liu, Yun Zhang, Hongfu Ai, Chuan Qin and Xinle Zhang
Agriculture 2026, 16(7), 727; https://doi.org/10.3390/agriculture16070727 - 25 Mar 2026
Abstract
Accurate crop classification is critical for optimizing agricultural resource use and informing production decisions. Deep learning, with its robust feature extraction ability, has become a prevalent technique for remote sensing-based crop classification. However, agricultural landscape complexity poses three key challenges: background noise interference, [...] Read more.
Accurate crop classification is critical for optimizing agricultural resource use and informing production decisions. Deep learning, with its robust feature extraction ability, has become a prevalent technique for remote sensing-based crop classification. However, the complexity of agricultural landscapes poses three key challenges: background noise interference, class confusion caused by inter-crop spectral similarity, and blurred boundaries of small-area crops due to class imbalance. This paper proposes FCR-TransUNet, an enhanced TransUNet-based model integrating three modules: a Feature Enhancement Module (FEM) for noise filtering, a Class-Attention (CA) module, and [...]. Experimental results on the Youyi Farm and barley datasets validate the superiority of the proposed model. On the Youyi Farm dataset, FCR-TransUNet achieves an MIoU of 92.2%, an improvement of 1.8% over SAM2-UNet and 2.9% over the baseline TransUNet. On the barley dataset, it yields an MIoU of 89.9%. Ablation studies further verify the effectiveness of each designed module. To comprehensively evaluate classification performance across the full crop growth cycle, experiments were conducted using remote sensing images from May, July, and August. The results demonstrate that FCR-TransUNet exhibits strong stability and adaptability at different crop growth stages, providing a reliable solution for precision agriculture and intelligent agricultural production. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
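For reference, the MIoU figures quoted above are conventionally computed as the per-class intersection-over-union averaged across classes. A minimal sketch over flattened label maps (not the authors' evaluation code) looks like this:

```python
def mean_iou(preds, labels, num_classes):
    # Per-class IoU = TP / (TP + FP + FN), averaged over classes that occur.
    ious = []
    for c in range(num_classes):
        tp = sum(1 for p, l in zip(preds, labels) if p == c and l == c)
        fp = sum(1 for p, l in zip(preds, labels) if p == c and l != c)
        fn = sum(1 for p, l in zip(preds, labels) if p != c and l == c)
        if tp + fp + fn == 0:
            continue  # class absent from both prediction and ground truth
        ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious) if ious else 0.0

# Toy example: 4 pixels, 2 classes; one pixel of class 1 mislabeled as class 0.
miou = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```

Real evaluations accumulate these counts in a confusion matrix over all images; the loop above only illustrates the metric.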