Search Results (12,014)

Search Parameters:
Keywords = remote sensing image

21 pages, 9604 KB  
Article
Long-Term Sediment Accretion Rates of Floodplains Using Remote Sensing Waterline Extraction Method: A Case Study of Poyang Lake, China
by Yinghao Zhang, Xiao Zhang, Na Zhang, Jie Xu, Shengyang Hui and Xijun Lai
Remote Sens. 2026, 18(7), 1044; https://doi.org/10.3390/rs18071044 - 31 Mar 2026
Abstract
With a typical floodplain in Poyang Lake selected as the study area, this paper employed the remote sensing Waterline Extraction Method (WEM) to invert its topographic changes based on 264 Landsat images from 1987 to 2024. The research systematically revealed the spatiotemporal variations in sediment accretion rates over the past 40 years and their influencing factors. By comparing different WEMs, the object-based method was identified as the most suitable for this study area. Accuracy validation of the topographic inversion showed that when using no fewer than 13 images, the average elevation error rate remained below 7.0%, indicating good reliability. The period from 1987 to 2024 was divided into 15 sub-periods, and digital elevation models of the floodplain were reconstructed for each. Results indicated that: (1) the natural floodplain unaffected by sand mining experienced continuous accretion, with an average rate of approximately 3.1 ± 0.7 cm yr⁻¹ (surface elevation change) between 1987 and 2024; (2) in areas impacted by sand mining, the sediment accretion rate after mining (about 1.7 ± 0.8 cm yr⁻¹) was lower than that before mining (about 2.6 ± 2.7 cm yr⁻¹), likely due to the loss of vegetation cover reducing sediment retention capacity; (3) different vegetation types notably influenced accretion rates, with mixed Carex–T. lutarioriparia communities showing a consistently higher rate (about 3.5 ± 0.9 cm yr⁻¹) than pure Carex communities (about 1.7 ± 0.7 cm yr⁻¹), primarily attributable to differences in plant morphology, root architecture, and inundation tolerance. Further analysis revealed that riverine sediment supply was the fundamental material source for floodplain accretion. The phased decline in sediment discharge from the Ganjiang and Xiushui rivers since 1996 generally corresponds to the decreasing trend in sediment accretion rates observed after 2004. Full article
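The waterline method the abstract summarizes is compact enough to sketch: every satellite scene contributes a shoreline at a known water stage, so a stack of binary water masks brackets each pixel's elevation between the highest stage at which it stayed dry and the lowest stage at which it flooded. A minimal numpy sketch under that assumption (the midpoint rule and function names are illustrative, not the paper's implementation):

```python
import numpy as np

def invert_dem(water_masks, levels):
    """Estimate a floodplain DEM from a stack of binary water masks.

    water_masks : (n, h, w) bool array, True where the pixel is inundated
    levels      : (n,) water-surface elevations (m) at each acquisition

    For every pixel, elevation is bracketed between the highest level at
    which it stayed dry and the lowest level at which it was flooded; the
    midpoint of that bracket is returned (NaN if never or always flooded).
    """
    masks = np.asarray(water_masks, bool)
    lv = np.asarray(levels, float)[:, None, None]
    wet = np.where(masks, lv, np.inf).min(axis=0)    # lowest flooding level
    dry = np.where(~masks, lv, -np.inf).max(axis=0)  # highest dry level
    dem = (wet + dry) / 2.0
    dem[~np.isfinite(wet) | ~np.isfinite(dry)] = np.nan
    return dem

def accretion_rate(dem_early, dem_late, years):
    """Mean surface-elevation change rate (cm/yr) between two DEMs."""
    dz = np.nanmean(dem_late - dem_early)            # metres
    return 100.0 * dz / years
```

Comparing DEMs inverted for successive sub-periods then yields the accretion rates the study reports.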
23 pages, 18509 KB  
Article
MSRNet: Mamba-Based Self-Refinement Framework for Remote Sensing Change Detection
by Haoxuan Sun, Xiaogang Yang, Ruitao Lu, Jing Zhang, Bo Li and Tao Zhang
Remote Sens. 2026, 18(7), 1042; https://doi.org/10.3390/rs18071042 - 30 Mar 2026
Abstract
Accurate change detection (CD) in very high-resolution (VHR, <1 m) optical remote sensing images remains challenging, as it requires effective modeling of long-range bi-temporal dependencies and robustness against label noise in complex urban environments. Existing deep learning-based CD methods either rely on convolutional operations with limited receptive fields or employ global attention mechanisms with high computational cost, making it difficult to simultaneously achieve efficient global context modeling and fine-grained structural sensitivity. To address these challenges, we propose a Mamba-based self-refinement framework for remote sensing change detection (MSRNet). Specifically, we introduce an attention-enhanced oblique state space module (AOSS) to model spatio-temporal dependencies with linear complexity while preserving fine-grained structural information. The four-branch attention fusion module (FBAM) further enhances cross-dimensional feature interaction to improve the discriminative capability of differential representations. In addition, a self-refinement module (SRM) incorporates a momentum encoder to generate high-quality pseudo-labels, mitigating annotation noise and enabling learning from latent changes. Extensive experiments on two benchmark VHR datasets, LEVIR-CD and WHU-CD, demonstrate that MSRNet achieves state-of-the-art performance in both accuracy and computational efficiency. Full article
(This article belongs to the Section AI Remote Sensing)
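Setting MSRNet's Mamba machinery aside, the change-detection formulation it builds on can be illustrated with the classical unsupervised baseline: a per-pixel difference magnitude thresholded by Otsu's method. A self-contained numpy sketch (not the paper's method; names are illustrative):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's threshold on a 1-D array of difference magnitudes."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                      # class-0 cumulative probability
    m0 = np.cumsum(p * centers)            # class-0 cumulative weighted mean
    mg = m0[-1]                            # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)            # between-class variance per cut
    between[valid] = (mg * w0[valid] - m0[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

def change_map(img_t1, img_t2):
    """Binary change map from bi-temporal images via |difference| + Otsu."""
    d = np.abs(img_t2.astype(float) - img_t1.astype(float))
    if d.ndim == 3:                        # average over spectral bands
        d = d.mean(axis=-1)
    return d > otsu_threshold(d.ravel())
```

Deep CD methods such as MSRNet replace the raw difference with learned bi-temporal features, but the thresholded difference map remains the conceptual starting point.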
20 pages, 3297 KB  
Article
Revisiting Remote Sensing Image Dehazing via a Dynamic Histogram-Sorted Transformer
by Naiwei Chen, Xin He, Shengyuan Li, Fengning Liu, Haoyi Lv, Haowei Peng and Yuebu Qubie
Remote Sens. 2026, 18(7), 1040; https://doi.org/10.3390/rs18071040 - 30 Mar 2026
Abstract
Remote sensing images are highly susceptible to spatially non-uniform haze under complex atmospheric conditions, leading to contrast degradation and structural detail loss. Moreover, remote sensing scenes usually exhibit complex spatial structures, highly uneven haze distribution, and significant statistical variability, which further increases the difficulty of haze removal. To address this issue, we revisit the haze degradation mechanism of remote sensing imagery and propose a dynamic histogram-sorted Transformer dehazing method from the perspectives of statistical distribution modeling and region-adaptive restoration. Specifically, a Histogram-Sorted Adaptive Attention is designed to map spatial features into the statistical distribution domain through a dynamic histogram sorting mechanism, enabling explicit discrimination and precise modeling of regions with different haze densities. Meanwhile, a Perception-Adaptive Feed-Forward Network is constructed, which incorporates a stable routing-based mixture-of-experts mechanism to adaptively select restoration strategies according to local texture characteristics and global haze density, thereby significantly enhancing the adaptability of the model in complex remote sensing scenarios. Extensive experimental results demonstrate that the proposed method achieves superior performance over existing approaches across multiple remote sensing benchmark datasets, effectively improving both visual quality and robustness of remote sensing imagery. Full article
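The "histogram sorting" idea (reordering pixels by their intensity statistics so that similarly hazed regions are processed together) can be shown independently of the Transformer itself. A toy numpy sketch, which is one interpretation of the mechanism rather than the authors' code:

```python
import numpy as np

def histogram_sorted_groups(image, n_groups=4):
    """Partition pixels into equal-size groups by sorted intensity.

    Returns an integer map, same shape as `image`, assigning each pixel
    to one of n_groups brightness strata: a stand-in for the haze-density
    regions a histogram-sorted attention would attend within.
    """
    flat = image.ravel()
    order = np.argsort(flat, kind="stable")  # map into the distribution domain
    group_of_rank = np.minimum(
        np.arange(flat.size) * n_groups // flat.size, n_groups - 1)
    groups = np.empty(flat.size, int)
    groups[order] = group_of_rank            # unsort back to image layout
    return groups.reshape(image.shape)
```

In the paper the grouping is dynamic and learned; here the strata are fixed intensity quantiles.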
30 pages, 21910 KB  
Article
A New Feature Set for Texture-Based Classification of Remotely Sensed Images in a Quantum Framework
by Archana G. Pai, Koushikey Chhapariya, Krishna M. Buddhiraju and Surya S. Durbha
J. Imaging 2026, 12(4), 149; https://doi.org/10.3390/jimaging12040149 - 30 Mar 2026
Abstract
Texture feature extraction plays a crucial role in land-use and land-cover (LULC) classification for remotely sensed images. However, when these images are quantized to a limited number of gray levels to reduce data volume or noise, conventional texture descriptors often lose discriminative power. This study investigates singular values of the gray-level co-occurrence matrix (GLCM) as novel texture features for image classification, with local binary pattern (LBP), complete LBP (CLBP) statistics, and the original GLCM features proposed by Haralick et al. used for comparison. Under coarse quantization, texture descriptors of LBP and its variants, which encode micro-texture, lose detail, whereas GLCM, which encodes macro-texture, retains structural co-occurrence patterns. This study thus proposes a new feature set, namely the Singular Values of the gray-level co-occurrence matrix (SVGM), for texture discrimination. Experimental analysis indicates that SVGM achieves higher class separability by preserving dominant spatial structure while suppressing noise and redundancy. Quantitative evaluation using classical SVMs with multiple kernels, quantum learning models with different kernels, and neural baselines (ANN and 1D-CNN) further shows that SVGM consistently improves classification performance. Within our tested models, quantum kernel SVMs are competitive and achieve the best results on some datasets, while classical models perform best on others. Full article
(This article belongs to the Section Image and Video Processing)
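The SVGM feature is easy to reproduce in outline: build a normalized symmetric GLCM for a given pixel offset and keep its singular values as the texture descriptor. A minimal numpy sketch (offset handling and normalization details are assumptions, not the paper's exact protocol):

```python
import numpy as np

def glcm(image, levels, dy=0, dx=1):
    """Symmetric, normalized gray-level co-occurrence matrix.

    `image` must hold integer gray levels in [0, levels); (dy, dx) is the
    co-occurrence offset with dy, dx >= 0.
    """
    img = np.asarray(image, int)
    h, w = img.shape
    a = img[:h - dy, :w - dx].ravel()      # reference pixels
    b = img[dy:, dx:].ravel()              # offset neighbours
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)                # unbuffered pair counting
    m = m + m.T                            # make symmetric
    return m / m.sum()

def svgm_features(image, levels, k=None):
    """Singular values of the GLCM as a texture feature vector (SVGM)."""
    s = np.linalg.svd(glcm(image, levels), compute_uv=False)
    return s[:k]                           # descending; keep top-k if given
```

Because singular values are invariant to row/column permutations up to rotation of the basis, they summarize the dominant co-occurrence structure compactly, which is the property the abstract attributes to SVGM under coarse quantization.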
21 pages, 15074 KB  
Article
Single-View High-Resolution Satellite Image Positioning by Integrating Global Open-Source Basemaps
by Zihui Xu, Ke Zhang, Xianwen Wang, Bing Wang, Yuhao Wang, Jingyu Wang, Yu Su, Feima Yuan, Bin Dong, Jianhua Li, Zhiquan Zhao and Tao Liu
Remote Sens. 2026, 18(7), 1028; https://doi.org/10.3390/rs18071028 - 29 Mar 2026
Abstract
High-resolution optical satellite data have become fundamental for acquiring accurate global remote sensing information (e.g., object geometric and spectral characteristics). However, due to the difficulty in obtaining accurate ground control points on a global scale, achieving accurate global positioning of satellite imagery remains a technical challenge. To realize global positioning optimization without relying on accurate control points, this paper leverages open-source data such as Google Earth orthophoto maps (GE maps) and FABDEM, and proposes the Coarse-to-Fine Open-Source Basemap Integration (CFBI) Method. The core idea of this method is to effectively eliminate gross errors in coarse control points by leveraging the differential projection offsets of roofs between single-view satellite images and multi-source orthophotos. On this basis, an iterative weight-selection adjustment strategy is adopted to achieve accurate positioning results. Experiments conducted in three regions, Jacksonville, New York, and Boston, demonstrate that the proposed algorithm significantly improves the positioning accuracy of satellite imagery, with an average improvement of 62.92% and accuracy within 2 m in most areas. Full article
(This article belongs to the Special Issue AI-Enhanced Remote Sensing for Image Matching and 3D Reconstruction)
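The "iterative weight-selection adjustment" used here to suppress gross errors in coarse control points is a standard robust-estimation loop: solve, compute residuals, down-weight large ones, repeat. A simplified translation-only sketch in numpy (the real adjustment operates on the sensor orientation, and the IGG-style weight function below is a generic choice, not the paper's):

```python
import numpy as np

def robust_shift(src, dst, n_iter=10, k=2.0):
    """Estimate a 2-D translation from matched points while iteratively
    down-weighting gross errors (weight-selection adjustment).

    src, dst : (n, 2) matched coordinates; returns the (dx, dy) shift.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    w = np.ones(len(src))
    shift = np.zeros(2)
    for _ in range(n_iter):
        shift = np.average(dst - src, axis=0, weights=w)
        r = np.linalg.norm(dst - src - shift, axis=1)      # residual norms
        s = 1.4826 * np.median(r) + 1e-12                  # robust (MAD-like) scale
        w = np.where(r <= k * s, 1.0, (k * s / r) ** 2)    # down-weight outliers
    return shift
```

After a few iterations the gross-error matches carry negligible weight, so the estimate is driven by the consistent majority, which is the behaviour the CFBI pipeline relies on.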
25 pages, 264783 KB  
Article
RDAH-Net: Bridging Relative Depth and Absolute Height for Monocular Height Estimation in Remote Sensing
by Liting Jiang, Feng Wang, Niangang Jiao, Jingxing Zhu, Yuming Xiang and Hongjian You
Remote Sens. 2026, 18(7), 1024; https://doi.org/10.3390/rs18071024 - 29 Mar 2026
Abstract
Generating high-precision normalized digital surface models (nDSMs) from a single remote sensing image remains a challenging and ill-posed problem due to the absence of reliable geometric constraints. In this work, we show that monocular depth provides structurally stable cues of local geometry but lacks the global scale and vertical reference required for absolute height recovery. This intrinsic mismatch limits direct depth-to-height regression, particularly when transferring across heterogeneous terrains, land-cover compositions, and imaging conditions. Building on this idea, we propose the Relative Depth–Absolute Height Prediction Network (RDAH-Net), a framework that exploits relative depth as a geometry-aware prior while learning terrain-dependent height mappings from image appearance to absolute height. As the backbone, we employ a lightweight MobileNetV2 enhanced with a Convolutional Block Attention Module (CBAM), and further incorporate a cross-modal bidirectional attention fusion scheme with positional encoding to achieve a deep and effective fusion of image appearance and depth prior cues. Finally, a PixelShuffle-based upsampling strategy is used to sharpen prediction details and mitigate typical upsampling artifacts. Extensive experiments across diverse regions demonstrate that RDAH-Net achieves robust and generalizable height estimation, providing a practical alternative for large-scale mapping and rapid update scenarios. Full article
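The core gap RDAH-Net addresses, relative depth lacking global scale and a vertical reference, is handled at its simplest by fitting a scale and shift against sparse known heights. A least-squares sketch of that baseline (illustrative only; monocular depth may be inversely oriented relative to height, so the fitted scale can come out negative):

```python
import numpy as np

def align_depth_to_height(rel_depth, ctrl_idx, ctrl_height):
    """Fit h = a*d + b mapping relative depth to absolute height using a
    few control samples, then apply it to the whole depth map.

    rel_depth   : (h, w) relative-depth map
    ctrl_idx    : flat indices of control pixels
    ctrl_height : known absolute heights at those pixels
    """
    d = rel_depth.ravel()[ctrl_idx]
    A = np.stack([d, np.ones_like(d)], axis=1)     # [d, 1] design matrix
    (a, b), *_ = np.linalg.lstsq(A, ctrl_height, rcond=None)
    return a * rel_depth + b
```

A single global (a, b) is exactly the assumption the abstract argues breaks down across heterogeneous terrain, which motivates learning terrain-dependent mappings instead.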
27 pages, 7912 KB  
Article
Hierarchical Wetland Mapping in the East China Sea Based on Integrated Multifaceted Source Features
by Jie Wang, Yixuan Zhou, Xin Fang, Shengqi Wang, Haiyang Zhang and Runbin Hu
Remote Sens. 2026, 18(7), 1023; https://doi.org/10.3390/rs18071023 - 29 Mar 2026
Abstract
The East China Sea represents a critical coastal wetland region, characterized by complex geomorphology, heterogeneous land-cover composition, and diverse wetland types. Accurate delineation of coastal wetland extent is essential for ecosystem service assessment and sustainable coastal management, directly contributing to wetland-related Sustainable Development Goals (SDGs), particularly SDG 15, on ecosystem conservation and biodiversity protection. However, pronounced spectral similarity and structural heterogeneity among wetland classes pose substantial challenges to reliable classification. To address these challenges, this study developed a hierarchical classification framework integrating Random Forest, K-means clustering, and a decision tree classifier based on multi-source Sentinel-1 and Sentinel-2 imagery. Spectral, polarimetric, texture, and morphological features were systematically constructed to enhance class separability. Using this framework, a 10 m resolution coastal wetland map of the East China Sea was generated for 2023. The proposed approach achieved an overall accuracy of 91.32% and improved the discrimination of spectrally similar wetland types. Feature fusion reduced confusion among water-related classes, while object-based clustering improved the extraction of linear riverine wetlands. The resulting 10 m wetland map provides updated spatial information for ecological assessment and coastal management in the East China Sea. Full article
(This article belongs to the Special Issue Big Earth Data in Support of the Sustainable Development Goals)
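The hierarchical idea, separating water from non-water first and then splitting the remainder, can be shown with a toy spectral-index rule set. The three classes and thresholds below are illustrative stand-ins, not the study's Random Forest/K-means/decision-tree pipeline:

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def ndwi(green, nir):
    return (green - nir) / (green + nir + 1e-9)

def classify(green, red, nir, t_water=0.2, t_veg=0.3):
    """Toy hierarchical rules: 0 = open water, 1 = vegetated wetland,
    2 = bare flat / other. Water is split off first; vegetation second."""
    w = ndwi(green, nir) > t_water           # stage 1: water mask
    v = ~w & (ndvi(nir, red) > t_veg)        # stage 2: vegetation on non-water
    out = np.full(green.shape, 2)
    out[v] = 1
    out[w] = 0
    return out
```

The study's framework replaces each hand-set threshold with a learned stage (and adds polarimetric, texture, and morphological features), but the top-down class structure is the same.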
34 pages, 20615 KB  
Article
Unsupervised Change Detection in Heterogeneous Remote Sensing Images via Dynamic Mask Guidance
by Paixin Xie, Gao Chen, Qingfeng Zhou, Xiaoyan Li and Jingwen Yan
Remote Sens. 2026, 18(7), 1022; https://doi.org/10.3390/rs18071022 - 29 Mar 2026
Abstract
Unsupervised change detection (CD) in heterogeneous remote sensing images is intrinsically difficult due to severe sensor-specific discrepancies. In the absence of ground truth, these discrepancies result in ambiguous optimization objectives that make it difficult for models to distinguish true land-cover changes from modality-driven pseudo-changes. To address these challenges, we propose MaskUCD, a novel unsupervised framework that reformulates heterogeneous CD as a dynamic mask-driven constraint scheduling problem. Fundamentally distinct from conventional strategies that enforce selective feature alignment, MaskUCD employs a spatially adaptive optimization mechanism. Specifically, the iteratively refined mask serves as a geometric reference to guide optimization. It enforces strict feature alignment in mask-unchanged regions to suppress modality-induced discrepancies, while simultaneously promoting feature divergence in mask-changed regions to emphasize semantic inconsistencies. In this way, explicit optimization objectives are established, together with an intrinsic interpretability constraint that guides the CD process. This strategy treats the mask as a structural guide for representation learning rather than a ground-truth reference, thereby avoiding error accumulation caused by directly using inaccurate masks as supervisory signals. To facilitate this optimization, we design a specialized asymmetric autoencoder with a hybrid encoder architecture, utilizing multi-scale frequency analysis and global context modeling to enhance feature representation capabilities. Consequently, this design enables the generation of refined and semantically consistent masks, which provide increasingly precise structural guidance, yielding converged and discriminative difference maps. Extensive experiments demonstrate that MaskUCD achieves state-of-the-art performance and superior robustness compared to existing advanced methods. Full article
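MaskUCD's dual constraint, pulling features together where the current mask says "unchanged" and pushing them apart where it says "changed", resembles a masked contrastive objective. A numpy sketch of such a loss (an interpretation of the idea, with a hypothetical margin hinge on the changed side):

```python
import numpy as np

def mask_guided_loss(feat_a, feat_b, change_mask, margin=1.0):
    """Dynamic-mask-guided objective: align features in mask-unchanged
    regions, encourage divergence (up to a margin) in mask-changed ones.

    feat_a, feat_b : (h, w, c) per-pixel features from the two dates
    change_mask    : (h, w) bool, True where the current mask says changed
    """
    d = np.linalg.norm(feat_a - feat_b, axis=-1)   # per-pixel feature distance
    align = (d[~change_mask] ** 2).mean() if (~change_mask).any() else 0.0
    push = (np.maximum(0.0, margin - d[change_mask]) ** 2).mean() \
        if change_mask.any() else 0.0
    return align + push
```

Because the mask only schedules where each term applies, it acts as structural guidance rather than a supervisory label, matching the abstract's framing.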
31 pages, 11688 KB  
Article
RShDet: An Adaptive Spectral-Aware Network for Remote Sensing Object Detection Under Haze Corruption
by Wei Zhang, Yuantao Wang, Haowei Yang and Xuerui Mao
Remote Sens. 2026, 18(7), 1020; https://doi.org/10.3390/rs18071020 - 29 Mar 2026
Abstract
Remote sensing (RS) object detection faces intrinsic challenges arising from the overhead imaging paradigm and the diversity of climatic conditions. In particular, atmospheric phenomena such as clouds and haze cause severe visual degradation, making reliable object detection difficult. However, most existing detectors are developed under clear-weather conditions, which limits their generalization capability in realistic haze-degraded RS scenarios. To alleviate this issue, an adaptive spectral-aware network for RS object detection under haze interference is proposed, termed RShDet, which is designed to handle both high-altitude RS imagery and low-altitude Unmanned Aerial Vehicle (UAV) scenarios. Firstly, the Object-Centered Dynamic Enhancement (OCDE) module dynamically adjusts the spatial positions of key-value pairs through query-agnostic offsets, enabling the network to emphasize object-relevant regions while suppressing haze-induced background interference. Secondly, the Dynamic Multi-Spectral Perception and Filtering (DSPF) module introduces a multi-spectral attention mechanism that adaptively selects informative frequency components, thereby enhancing discriminative feature representations in hazy environments. Thirdly, the Frequency-Domain Multi-Feature Fusion (FDMF) module employs learnable weights to complementarily integrate amplitude and phase information in the frequency domain, enabling effective cross-task feature interaction between the enhancement and detection branches. Extensive experiments demonstrate that RShDet consistently achieves superior detection performance under hazy conditions across both synthetic and real-world benchmarks. Specifically, it achieves improvements of 2.4% mAP50 on Hazy-DOTA, 1.9% mAP on HazyDet, and 2.33% mAP on the real-world foggy dataset RTTS, surpassing existing state-of-the-art methods. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
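The FDMF module's amplitude and phase fusion can be outlined with a plain FFT: decompose each branch's features into magnitude and phase, blend each with a weight, and invert. A numpy sketch (the paper learns the weights per feature; naive phase averaging also ignores 2π wrap-around, so this is illustrative only):

```python
import numpy as np

def fdmf_fuse(x_enh, x_det, w_amp=0.5, w_pha=0.5):
    """Fuse two (h, w) feature maps in the frequency domain: weighted
    blends of FFT amplitude and phase, then inverse FFT back to space."""
    Fe, Fd = np.fft.fft2(x_enh), np.fft.fft2(x_det)
    amp = w_amp * np.abs(Fe) + (1 - w_amp) * np.abs(Fd)
    pha = w_pha * np.angle(Fe) + (1 - w_pha) * np.angle(Fd)
    return np.real(np.fft.ifft2(amp * np.exp(1j * pha)))
```

Fusing a map with itself is the identity regardless of the weights, which is a quick sanity check on the decomposition.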
42 pages, 6313 KB  
Article
When Lie Groups Meet Hyperspectral Images: Equivariant Manifold Network for Few-Shot HSI Classification
by Haolong Ban, Junchao Feng, Zejin Liu, Yue Jiang, Zhenxing Wang, Jialiang Liu, Yaowen Hu and Yuanshan Lin
Sensors 2026, 26(7), 2117; https://doi.org/10.3390/s26072117 - 29 Mar 2026
Abstract
Hyperspectral imagery (HSI) offers rich spectral signatures and fine-grained spatial structures for remote sensing, but practical HSI classification is often constrained by scarce labels and complex geometric disturbances, including translation, rotation, scaling, and shear. Existing deep models are typically developed under Euclidean assumptions and rely on data-hungry training pipelines, which makes them brittle in the few-shot regime. To address this challenge, we propose EMNet, a Lie-group-based Equivariant Manifold Network for few-shot HSI classification that explicitly encodes geometric invariance and improves discriminative accuracy. EMNet couples an SE(2)-based Equivariance-Guided Module (EGM) to enforce equivariance to translations and rotations with an affine Lie-group-based Characteristic Filtering Convolution (CFC) that models scaling and shearing on the feature manifold while adaptively suppressing redundant responses. Extensive experiments on WHU-Hi-HongHu, Houston2013, and Indian Pines demonstrate state-of-the-art performance with competitive complexity, achieving OAs of 95.77% (50 samples/class), 97.37% (50 samples/class), and 96.09% (5% labeled samples), respectively, and yielding up to +3.34% OA, +6.01% AA, and +4.14% Kappa over the strong DGPF-RENet baseline. Under a stricter 25-samples-per-class protocol with 10 repeated random hold-out splits, EMNet consistently improves the mean accuracy while exhibiting lower variance, indicating better stability to sampling uncertainty. On the city-scale Xiongan New Area dataset with extreme long-tail imbalance (1580 × 3750 pixels, 256 bands, and 5.925 M labeled pixels), EMNet further boosts OA from 85.89% to 93.77% under the 1% labeled-sample protocol, highlighting robust generalization for large-area mapping. 
Beyond point estimates, we report mean ± SD/SE across repeated splits and provide rigorous statistical validation by computing Yule’s Q statistic for class-wise behavior similarity, performing the Friedman test with Nemenyi post hoc comparisons for multi-method ranking significance, and presenting 95% confidence intervals together with Cohen’s d effect sizes to quantify practical improvement. Full article
(This article belongs to the Special Issue Hyperspectral Sensing: Imaging and Applications)
35 pages, 51980 KB  
Article
Structurally Consistent and Grounding-Aware Stagewise Reasoning for Referring Remote Sensing Image Segmentation
by Shan Dong, Jianlin Xie, Liang Chen, He Chen, Baogui Qi and Yunqiu Ge
Remote Sens. 2026, 18(7), 1015; https://doi.org/10.3390/rs18071015 - 28 Mar 2026
Abstract
Referring Remote Sensing Image Segmentation (RRSIS) is a representative multimodal understanding task for remote sensing, which segments designated targets from remote sensing images according to free-form natural language descriptions. However, complex remote sensing characteristics, such as cluttered backgrounds, large-scale variations, small scattered targets and repetitive textures, lead to unstable visual grounding and further spatial grounding drift, resulting in inaccurate segmentation results. Existing approaches typically perform implicit visual–linguistic fusion across encoding and decoding stages, entangling spatial grounding with mask refinement. This tightly coupled formulation lacks explicit structural constraints and is prone to cross-modal ambiguity, especially in complex remote sensing layouts. To address these limitations, we propose a Structurally consistent and Grounding-aware Stagewise Reasoning Framework (SGSRF) that follows a grounding-first, segmentation-second paradigm. The framework decomposes inference into three cascaded stages with progressively imposed structural constraints. First, Cross-modal Consistency Refinement (CCR) lays the foundation for stable spatial grounding by enhancing visual–textual structural alignment via CLIP-based features and Structural Consistency Regularization (SCR), producing well-aligned multimodal representations and reliable grounding cues. Second, Grounding-aware Prompt Generation (GPG) bridges grounding and segmentation by converting aligned representations into complementary sparse and dense prompts, which serve as explicit grounding guidance for the segmentation model. Third, Grounding Modulated Segmentation (GMS) leverages the Segment Anything Model (SAM) to generate fine-grained mask prediction under the joint guidance of prompts and grounding cues, improving spatial grounding stability and robustness to background interference and scale variation.
Extensive experiments on three remote sensing benchmarks, namely RefSegRS, RRSIS-D, and RISBench, demonstrate that SGSRF achieves state-of-the-art performance. The proposed stagewise paradigm integrates structural alignment, explicit grounding, and prompt-driven segmentation into a unified framework, providing a practical and robust solution for RRSIS in real-world Earth observation applications. Full article
18 pages, 11374 KB  
Article
CSGL-Former: Cross-Stripes Global–Local Fusion Transformer for Remote Sensing Image Dehazing
by Shuyi Feng, Xiran Zhang, Jie Yuan and Youwen Zhu
Sensors 2026, 26(7), 2102; https://doi.org/10.3390/s26072102 - 28 Mar 2026
Abstract
Remote sensing (RS) images are often degraded by atmospheric haze, which compromises both visual interpretation and downstream applications. To address this, we introduce CSGL-Former, a novel Cross-Stripes Global–Local Fusion Transformer for RS image dehazing. Our model efficiently captures anisotropic long-range dependencies using cross-stripes attention (CSA) and aggregates hierarchical global semantics via a Multi-Layer Global Aggregation (MLGA) module. In the decoder, global context is adaptively blended with fine-grained local features to restore intricate textures. Finally, inspired by the atmospheric scattering model, a soft reconstruction head restores the clear image by predicting spatially varying affine parameters, strictly preserving content fidelity while effectively removing haze. Trained end-to-end, CSGL-Former demonstrates a compelling balance of accuracy and efficiency. Extensive experiments on the RRSHID and SateHaze1K benchmarks show that our model achieves state-of-the-art or highly competitive performance against representative baselines. Ablation studies further validate the effectiveness of each proposed component. Full article
(This article belongs to the Special Issue Advanced Pattern Recognition: Intelligent Sensing and Imaging)
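The "soft reconstruction head" is grounded in the atmospheric scattering model I = J·t + A·(1 − t); solving for the clear image J gives a per-pixel affine map of the hazy input, which is what the head predicts spatially varying parameters for. A direct numpy inversion (the clip on t is an assumed stabilizer, not the paper's exact choice):

```python
import numpy as np

def dehaze_affine(I, t, A):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    Equivalent to a per-pixel affine map J = a*I + b with a = 1/t and
    b = A*(1 - 1/t); t is clipped away from zero for stability.
    """
    t = np.clip(t, 0.1, 1.0)
    return (I - A * (1.0 - t)) / t
```

In CSGL-Former the transmission-like parameters are predicted by the network rather than estimated from a physical prior, but the restoration step has this form.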
22 pages, 6161 KB  
Article
Remote Sensing Data-Based Modelling for Analyzing Green Tide Proliferation Drivers in the Yellow Sea
by Jing Yang, Enye He, Xuanliang Ji, Qianqiu Guo, Shan Gao and Yuxuan Jiang
Remote Sens. 2026, 18(7), 1014; https://doi.org/10.3390/rs18071014 - 28 Mar 2026
Abstract
Since 2007, green tides have recurrently occurred in the Yellow Sea during spring and summer, with a massive outbreak recorded in 2021. Given the critical significance of green tide monitoring and prediction for marine ecological security and sustainable development, this study developed a satellite remote sensing-validated coupled simulation system for green tide drift and growth, by integrating multi-source satellite remote sensing data and oceanographic reanalysis datasets. Leveraging this system, we systematically analyzed the spatiotemporal evolution characteristics and underlying driving mechanisms of both routine green tide processes in 2014–2015 and the extreme 2021 event. Satellite images with low cloud cover and extensive green tide distribution were screened to confirm the accuracy of green tide drift trajectories and distribution ranges for validating the model’s reliability, and the results demonstrated the spatial consistency between simulation results and satellite observations. The validated model was used to track the drift and growth–decline processes of green tides and investigate the underlying cause of high-biomass appearance in 2021. Combined with environmental parameters, our analyses revealed that variations in attachment substrates alter wind resistance coefficients, thereby potentially accelerating the northward drift velocity of green tides. Furthermore, substrate properties may exert a significant regulatory effect on the attachment, germination, and biomass accumulation of Ulva prolifera spores, which could be a leading factor driving the massive green tide outbreak. Full article
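The wind-resistance mechanism invoked for the accelerated northward drift is conventionally modelled by adding a small fraction of the 10 m wind (a windage coefficient) to the surface current. A toy Euler-step sketch (the coefficient value is illustrative; the study's point is that it varies with the attachment substrate):

```python
import numpy as np

def drift_velocity(current, wind10, windage=0.02):
    """Green-tide patch drift as surface current plus a windage term:
    v = v_current + c_w * v_wind10 (all vectors in m/s). The windage
    coefficient c_w depends on how much biomass rides above the water."""
    return np.asarray(current, float) + windage * np.asarray(wind10, float)

def advect(pos, current, wind10, dt_s, windage=0.02):
    """One explicit Euler step of patch position (metres)."""
    return np.asarray(pos, float) + drift_velocity(current, wind10, windage) * dt_s
```

A larger c_w for substrate-laden rafts directly speeds the simulated northward transport, which is the sensitivity the analysis explores.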
37 pages, 4825 KB  
Article
Effects of Cane Density on Primocane Raspberry Assessed Using UAV-Based Multispectral Imaging
by Kamil Buczyński, Magdalena Kapłan and Zbigniew Jarosz
Agriculture 2026, 16(7), 742; https://doi.org/10.3390/agriculture16070742 - 27 Mar 2026
Abstract
Cane density is a key management factor in raspberry production, directly affecting yield formation and canopy structure. However, most previous studies have focused on floricane cultivars and relied on conventional field measurements, while the response of primocane raspberries and their canopy-level dynamics remain less explored. The objective of this study was to evaluate how cane density influences yield components, cane growth, and canopy structure in primocane raspberry cultivars, and to assess whether these effects can be captured using UAV-based multispectral imaging. Field experiments were conducted over two growing seasons using two primocane cultivars grown under different cane density treatments. Yield components and cane growth parameters were measured, and repeated drone multispectral surveys were performed during the production period to quantify the spatial and temporal variability of vegetation indices. Increasing cane density led to higher total yield per unit area in both cultivars, mainly through an increase in fruit number rather than fruit weight, indicating a compensatory yield response. Cane density significantly modified canopy architecture, with responses varying between cultivars and seasons. Multispectral vegetation indices revealed predominantly consistent density-dependent gradients, characterized by higher mean values and reduced spatial and temporal variability at higher cane densities. Denser cane configurations were associated with lower total temporal amplitude and smoother seasonal trajectories, indicating a stabilization of canopy reflectance dynamics. Although this overall pattern was preserved across indices, the magnitude and regularity of temporal responses were index-specific and cultivar-dependent. The results demonstrate that cane density management in primocane raspberries affects both yield formation and canopy structure, and that these effects can be effectively monitored using UAV-based multispectral imaging. Integrating remote sensing with field measurements offers a valuable approach for supporting data-driven optimization of raspberry production systems. Full article
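The vegetation-index statistics the abstract refers to (per-survey means and spatial variability) are typically computed per pixel from multispectral band reflectances. As a hedged illustration, the sketch below computes NDVI from near-infrared and red reflectance arrays and summarizes a plot's mean and spread; the abstract does not specify which indices the authors used, so treat this as a generic example rather than their pipeline.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel Normalized Difference Vegetation Index from NIR and red
    reflectance; eps guards against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Illustrative reflectances: dense canopy reflects strongly in NIR, weakly in red.
patch = ndvi([[0.45, 0.50]], [[0.05, 0.10]])

# Plot-level summaries of the kind compared across density treatments:
mean_ndvi = patch.mean()   # higher under denser, more uniform canopies
spatial_sd = patch.std()   # lower when canopy cover is more homogeneous
```

Repeating such summaries over the seasonal series of surveys yields the temporal amplitudes and trajectories discussed in the abstract.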
25 pages, 9555 KB  
Article
EFSL-YOLO: An Improved Model for Small Object Detection in UAV Vision
by Meng Zhou, Shuke He, Chang Wang and Jing Wang
Drones 2026, 10(4), 243; https://doi.org/10.3390/drones10040243 - 27 Mar 2026
Abstract
To address the challenges in UAV remote sensing imagery, such as small object size, dense occlusion and complex background interference, this paper proposes an enhanced small object detection algorithm based on an improved YOLOv13 model for drone applications in complex weather environments. First, an enhanced feature fusion attention network (EFFA-Net) is designed in the preprocessing stage to reduce image degradation and suppress the interference caused by smoke and haze. Then, in the backbone, a swish-gated convolution (SwiGLUConv) module is designed to adaptively expand the receptive field and enhance multi-scale feature extraction, which strengthens the representation of small targets while maintaining efficient computation. Furthermore, a locally enhanced multi-scale context fusion (LF-MSCF) module is integrated into the feature fusion neck of YOLO, combining multi-head self-attention, channel attention, and spatial attention to suppress background noise and redundant responses, thereby improving detection accuracy. Extensive experiments on the VisDrone-DET2019 dataset, UAVDT dataset, and HazyDet dataset demonstrate that the proposed algorithm outperforms other mainstream methods, showcasing excellent detection accuracy and robustness in complex UAV aerial scenarios. Full article
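The LF-MSCF module described above combines multi-head self-attention with channel and spatial attention to suppress background responses. The paper's module is not reproduced here; as a hedged illustration of just the channel-attention idea, the sketch below reweights a (C, H, W) feature map by a squashed per-channel descriptor, omitting the learned bottleneck layers a real attention block would contain.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Toy channel attention on a (C, H, W) feature map: global-average-pool
    each channel, squash the descriptor to (0, 1), and rescale the map so
    weakly activated (background-dominated) channels are suppressed.
    Illustrative only; the paper's LF-MSCF module also fuses spatial
    attention and multi-head self-attention branches."""
    weights = sigmoid(feat.mean(axis=(1, 2)))  # (C,) per-channel gates
    return feat * weights[:, None, None]       # broadcast over H and W

feat = np.ones((2, 4, 4))
feat[1] *= -3.0                 # a channel with weak (negative-mean) activation
out = channel_attention(feat)   # channel 1 is attenuated more strongly
```

The gating leaves strongly activated channels nearly unchanged while shrinking weak ones, which is the mechanism attention-based necks use to damp background noise before fusion.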