Search Results (811)

Search Parameters:
Keywords = remote interference

13 pages, 3540 KB  
Article
A New Approach for Real-Time Coal–Rock Identification via Multi-Source Near-Bit Drilling Data
by Shangxin Feng, Jianfeng Hu, Zhihai Fan, Jianxi Ren, Yanping Miao and Jian Hu
Energies 2026, 19(7), 1785; https://doi.org/10.3390/en19071785 - 5 Apr 2026
Viewed by 169
Abstract
Real-time coal–rock identification is essential for intelligent mining, enabling hazard prevention and geological modeling. However, existing methods often suffer from unclear bit–rock interaction mechanisms, signal distortion, sensor remoteness, or delayed data acquisition, limiting their effectiveness in continuous operations. This study proposes a novel approach for real-time coal–rock identification based on multi-source near-bit drilling data. A near-bit data acquisition system was developed and positioned directly behind the drill bit, integrating sensors to capture high-fidelity parameters—including weight on bit (WOB), torque, rotational speed, rate of penetration (ROP), natural gamma ray, and borehole trajectory—thereby eliminating frictional interference from the drill string. A data-driven theoretical model was established to derive a near-bit drillability index (NDI) for rock strength and to correlate gamma ray responses with lithology. Field trials were conducted in a coal mine in northern Shaanxi, involving over 30 boreholes and systematic core validation. The results demonstrate that the method enables continuous, high-resolution identification of coal–rock interfaces and strength variations along the borehole trajectory, with interpreted results aligning well with core logs and achieving approximately 85% accuracy in strength estimation. By ensuring compatibility with conventional drilling rigs and supporting real-time data transmission and 3D geological updating, this study offers a practical and robust technical pathway for achieving geological transparency and real-time steering in underground coal mining.
(This article belongs to the Section H: Geo-Energy)
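The strength-from-drilling-data idea in the abstract above can be illustrated with a classical drillability proxy. The sketch below computes Teale's mechanical specific energy (MSE) from WOB, torque, rotational speed, and ROP; it is a generic index offered for intuition, not the paper's NDI, and all input values are hypothetical.

```python
import math

def mechanical_specific_energy(wob_n, torque_nm, rpm, rop_m_per_min, bit_area_m2):
    """Teale's mechanical specific energy (Pa): energy required to excavate
    a unit volume of rock. Harder rock drills slower at the same inputs,
    so MSE rises. A classic drillability proxy, not the paper's NDI."""
    thrust_term = wob_n / bit_area_m2
    rotary_term = (2.0 * math.pi * rpm * torque_nm) / (bit_area_m2 * rop_m_per_min)
    return thrust_term + rotary_term
```

In practice an index like this would be thresholded along the borehole trajectory to separate weak coal from stronger surrounding rock; the slower the penetration at a given WOB and torque, the larger the rotary term and the higher the inferred strength.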

21 pages, 1583 KB  
Article
Performance and Detectability Analysis of Resident Space Objects Using an S-Band Bi-Static Radar with the Sardinia Radio Telescope as Receiver
by Luca Schirru
Remote Sens. 2026, 18(7), 1083; https://doi.org/10.3390/rs18071083 - 3 Apr 2026
Viewed by 163
Abstract
The continuous growth of the population of Resident Space Objects (RSOs) poses increasing challenges for Space Situational Awareness (SSA), particularly in terms of detection capability and collision risk mitigation. Ground-based radar systems represent a primary class of remote sensing instruments for RSO observation; however, their deployment is often constrained by cost and infrastructure requirements. In this context, the reuse of existing large radio astronomy facilities as radar receivers offers an innovative and potentially cost-effective alternative. This paper presents a fully model-based feasibility study of S-band bi-static radar observations of RSOs using the Sardinia Radio Telescope (SRT) as a high-sensitivity ground-based receiver. The analysis is entirely analytical and is conducted in the absence of experimental radar measurements. A bi-static radar equation framework is adopted to evaluate received signal power and the resulting signal-to-noise ratio (SNR) as functions of target size, range, and observation geometry. The model explicitly incorporates thermal noise, integration time and target dynamics, radio-frequency interference (RFI), atmospheric and environmental clutter contributions, and the realistic antenna radiation pattern of the SRT through a Gaussian beam approximation. Detection thresholds, maximum observable ranges, and performance envelopes are derived for representative RSO dimensions, and the impact of off-boresight reception on detectability is quantified. The results define the operational conditions under which RSOs may be detected in an S-band bi-static configuration and demonstrate the potential of the SRT as a non-conventional ground-based instrument for space object observation, supporting future developments in SSA and space debris monitoring strategies.
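The bi-static radar equation framework described above can be sketched in a few lines: received power falls with the product of the squared transmitter and receiver ranges, and coherent integration of n pulses adds 10·log10(n) dB of SNR. All parameter values below are illustrative placeholders, not SRT or transmitter system figures.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def bistatic_snr_db(pt_w, gt_lin, gr_lin, wavelength_m, rcs_m2,
                    r_tx_m, r_rx_m, t_sys_k, bandwidth_hz,
                    n_pulses=1, loss_lin=1.0):
    """SNR (dB) from the standard bi-static radar equation with thermal
    noise kTB and coherent integration gain n_pulses. Gains and losses
    are linear (not dB); ranges are transmitter-target and target-receiver."""
    pr = (pt_w * gt_lin * gr_lin * wavelength_m ** 2 * rcs_m2) / (
        (4.0 * math.pi) ** 3 * r_tx_m ** 2 * r_rx_m ** 2 * loss_lin)
    noise = K_BOLTZMANN * t_sys_k * bandwidth_hz
    return 10.0 * math.log10(n_pulses * pr / noise)
```

Doubling the receiver range costs about 6 dB, and a tenfold integration recovers exactly 10 dB, which is how detection thresholds translate into maximum observable ranges for a given target size.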

30 pages, 9416 KB  
Article
Weed Discrimination at the Seedling Stage in Dryland Fields Under Maize–Soybean Rotation
by Yaohua Yue and Anbang Zhao
Plants 2026, 15(7), 1114; https://doi.org/10.3390/plants15071114 - 3 Apr 2026
Viewed by 151
Abstract
Under maize–soybean rotation systems, weeds and crops at the seedling stage in dryland fields exhibit high similarity in morphological structure, scale distribution, and spatial arrangement. In addition, complex illumination conditions, occlusion, and background interference further complicate accurate weed discrimination. To address these challenges, this study proposes an improved YOLOv11n-based weed detection method for seedling-stage crops under dryland rotation conditions, aiming to enhance detection accuracy and robustness in UAV-acquired field images. Three key improvements were introduced to enhance model performance: (1) the incorporation of Dynamic Convolution (DynamicConv) to adaptively strengthen feature representation for weeds with varying morphologies and scales in low-altitude remote sensing imagery; (2) the design of a SlimNeck lightweight feature fusion architecture to improve multi-scale feature propagation efficiency while reducing computational cost; and (3) the integration of a cascaded group attention (CGA) mechanism into the C2PSA module to improve discrimination capability under complex background conditions. Detection performance for broadleaf weeds and Poaceae weeds reached mAP@0.5 values of 87.2% and 73.9%, respectively, representing consistent improvements over baseline models, including YOLOv5, YOLOv6, YOLOv8, YOLOv11, and YOLOv12. Overall, the proposed method demonstrates superior detection accuracy and stability for seedling-stage weed identification under rotation conditions, providing reliable technical support for variable-rate herbicide application and precision field management.
(This article belongs to the Section Crop Physiology and Crop Production)
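The mAP@0.5 figures quoted above rest on bounding-box IoU: a detection counts as a true positive when its IoU with a ground-truth box reaches 0.5. A minimal IoU computation for axis-aligned boxes:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    mAP@0.5 averages precision over recall with this 0.5 IoU match rule."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0
```

For example, two unit-height boxes that overlap over half their width share one third of their union, so they would not match at the 0.5 threshold.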

24 pages, 25968 KB  
Article
High Spatio-Temporal Resolution CYGNSS Reflectivity Reconstruction via TCN for Enhanced Freeze/Thaw Retrieval
by Xiangle Li, Wentao Yang, Dong Wang, Weixin Li, Dandan Wang and Lei Yang
Remote Sens. 2026, 18(7), 1056; https://doi.org/10.3390/rs18071056 - 1 Apr 2026
Viewed by 280
Abstract
In recent years, the Cyclone Global Navigation Satellite System (CYGNSS) of NASA has attracted widespread attention for the retrieval of freeze/thaw (F/T) states through the analysis of reflected signals. F/T variations in high-altitude regions have long been a focal point in this field. However, these areas lack benchmark observational data with high temporal and spatial resolution. A model named Partial Convolution–Time Convolutional Network (PTCN) is proposed in this paper to reconstruct CYGNSS data at a 3 km resolution. This model integrates partial convolution with a time convolutional network (TCN) and does not rely on any auxiliary data. Partial convolution is employed to distinguish valid pixels, removing the interference of missing values. The TCN is employed to capture temporal features, enabling the reconstruction of observational data. Compared with the original observational data (at a 3 km resolution), the coverage of the reconstructed data is six times that of the original. A simulation of missing data is applied for the first time in the quantitative evaluation of observational data reconstruction. The results show that the correlation coefficient (R) of the reconstructed data reaches 0.92 and the root mean square error (RMSE) is 2.7. The reconstructed data are used for daily F/T retrieval. At both 36 km and 9 km resolutions, the F/T retrieval accuracy after reconstruction is comparable to that before reconstruction. The temporal resolution is improved by 256%, which successfully fills 92% of the observational gaps in Soil Moisture Active Passive (SMAP) data. Compared with ground-based F/T retrievals, the reconstructed F/T accuracies are 87.71% at 36 km and 82.3% at 9 km. The model successfully reconstructs high-temporal- and spatial-resolution CYGNSS data while maintaining accuracy. This method holds significant potential for future global GNSS-R remote sensing observations at high temporal and spatial resolution.
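The partial-convolution step described above (convolving over valid pixels only and renormalizing for the missing ones) can be sketched in 1-D. This is a toy illustration of the masking idea, not the paper's PTCN architecture:

```python
def partial_conv1d(x, mask, weights):
    """1-D partial convolution: each window is convolved over valid samples
    only, the result is rescaled by (kernel size / number of valid taps),
    and the output mask marks windows that saw at least one valid sample."""
    k = len(weights)
    out, out_mask = [], []
    for i in range(len(x) - k + 1):
        win, m = x[i:i + k], mask[i:i + k]
        valid = sum(m)
        if valid == 0:
            out.append(0.0)          # no information in this window
            out_mask.append(0)
        else:
            s = sum(w * xv * mv for w, xv, mv in zip(weights, win, m))
            out.append(s * k / valid)  # renormalize for the missing taps
            out_mask.append(1)
    return out, out_mask
```

With a fully valid mask this reduces to an ordinary convolution; with gaps, the renormalization keeps output magnitudes comparable, which is what lets the network treat missing CYGNSS samples as holes rather than zeros.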

23 pages, 1395 KB  
Article
A Mask-Guided Multigranular Mamba Network for Remote Sensing Change Captioning
by Yifan Qu and Huaidong Zhang
Remote Sens. 2026, 18(7), 1048; https://doi.org/10.3390/rs18071048 - 31 Mar 2026
Viewed by 276
Abstract
Remote sensing image change captioning (RSICC) aims to generate semantic textual descriptions characterizing changes between bi-temporal remote sensing images, with wide applications in disaster assessment and urban planning. However, existing methods face specific drawbacks: CNN-based models have limited ability to capture long-range spatial correlations due to local receptive fields, and Transformer-based models suffer from quadratic complexity while distributing attention uniformly across all spatial positions, resulting in weak perception of salient changes in background-dominated scenes. In this paper, we present PM3Net (Progressive Mask-guided Multigranular Mamba Network), which leverages Mamba state space models with linear complexity for efficient spatiotemporal change modeling. The Progressive Mask-guided Encoder (PME) creates dual-source change masks combining L2 norm spatial differences with cosine distance semantic differences for progressive change feature extraction from detailed structures to high-level semantics. The Mask-guided Feature Enhancement (MFE) module applies mask-weighted refinement and cross-layer fusion to emphasize salient change regions while suppressing background interference, producing multigranular visual representations. Experiments on LEVIR-MCI and WHU-CDC datasets show PM3Net achieves superior results compared to existing methods, with BLEU-4 scores of 66.89 and 73.05, respectively. The results confirm PM3Net’s ability to solve the RSICC task while demonstrating how Mamba models can succeed in this specific field.
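The dual-source change mask described above combines an L2-norm spatial difference with a cosine-distance semantic difference. A per-position sketch on toy feature vectors; the 0.5/0.5 mixing weights are assumptions for illustration, not the paper's values:

```python
import math

def change_mask(feat_a, feat_b, w_l2=0.5, w_cos=0.5):
    """Per-position change score from two feature maps (lists of vectors).
    The L2 norm of the difference captures intensity/spatial change; cosine
    distance captures semantic (directional) change between the features."""
    scores = []
    for va, vb in zip(feat_a, feat_b):
        l2 = math.sqrt(sum((a - b) ** 2 for a, b in zip(va, vb)))
        na = math.sqrt(sum(a * a for a in va))
        nb = math.sqrt(sum(b * b for b in vb))
        dot = sum(a * b for a, b in zip(va, vb))
        cos_dist = 1.0 - (dot / (na * nb) if na > 0 and nb > 0 else 0.0)
        scores.append(w_l2 * l2 + w_cos * cos_dist)
    return scores
```

Unchanged positions score zero on both terms, while a feature that rotates without changing magnitude is still flagged by the cosine term, which is the point of using two complementary sources.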

25 pages, 4776 KB  
Article
FireMambaNet: A Multi-Scale Mamba Network for Tiny Fire Segmentation in Satellite Imagery
by Bo Song, Bo Li, Hong Huang, Zhiyong Zhang, Zhili Chen, Tao Yue and Yun Chen
Remote Sens. 2026, 18(7), 1021; https://doi.org/10.3390/rs18071021 - 29 Mar 2026
Viewed by 252
Abstract
Satellite remote sensing plays an essential role in wildfire monitoring due to its large-scale observation capability. However, fire targets in satellite imagery are typically extremely small, sparsely distributed, and embedded in complex backgrounds, making accurate segmentation highly challenging for existing methods. To address these challenges, this paper proposes a multi-scale Mamba-based network for tiny fire segmentation, named FireMambaNet. The network adopts a nested U-shaped encoder-decoder architecture, primarily consisting of three modules: the Cross-layer Gated Residual U-shaped module (CG-RSU), the Fire-aware Directional Context Modulation module (FDCM), and the Multi-scale Mamba Attention Module (M2AM). The CG-RSU, as the core building block, adaptively suppresses background redundancy and enhances weak fire responses by extracting multi-scale features through cross-layer gating. The FDCM explicitly enhances the network’s ability to perceive anisotropic expansion features of fire points, such as those along the wind direction and terrain orientation, by modeling multi-directional context. The M2AM employs a Mamba state-space model to suppress background interference through global context modeling during cross-scale feature fusion, while enhancing consistency among sparsely distributed tiny fire targets. In addition, experimental validation is conducted on two subsets of the Active Fire dataset with pronounced pixel-level sparsity: Oceania and Asia4. The results show that the proposed method significantly outperforms various mainstream CNN, Transformer, and Mamba baseline models on both datasets. It achieves an IoU of 88.51% and F1 score of 93.76% on the Oceania dataset, and an IoU of 85.65% and F1 score of 92.26% on the Asia4 dataset. Compared to the best-performing CNN baseline model, the IoU is improved by 1.81% and 2.07%, respectively. Overall, FireMambaNet demonstrates significant advantages in detecting tiny fire points in complex backgrounds.
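The IoU and F1 scores reported above are linked for binary masks by F1 = 2·IoU/(1+IoU), so the two metrics always move together. A minimal computation from pixel-wise confusion counts:

```python
def segmentation_scores(pred, truth):
    """Pixel-level IoU and F1 (Dice) for flat binary masks.
    IoU = TP/(TP+FP+FN); F1 = 2TP/(2TP+FP+FN) = 2*IoU/(1+IoU)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    denom = tp + fp + fn
    iou = tp / denom if denom else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return iou, f1
```

For sparse fire masks these overlap metrics are far more informative than pixel accuracy, since a model predicting all-background would still be almost perfectly "accurate".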

30 pages, 11698 KB  
Article
RShDet: An Adaptive Spectral-Aware Network for Remote Sensing Object Detection Under Haze Corruption
by Wei Zhang, Yuantao Wang, Haowei Yang and Xuerui Mao
Remote Sens. 2026, 18(7), 1020; https://doi.org/10.3390/rs18071020 - 29 Mar 2026
Viewed by 204
Abstract
Remote sensing (RS) object detection faces intrinsic challenges arising from the overhead imaging paradigm and the diversity of climatic conditions. In particular, atmospheric phenomena such as clouds and haze cause severe visual degradation, making reliable object detection difficult. However, most existing detectors are developed under clear-weather conditions, which limits their generalization capability in realistic haze-degraded RS scenarios. To alleviate this issue, an adaptive spectral-aware network for RS object detection under haze interference is proposed, termed RShDet, which is designed to handle both high-altitude RS imagery and low-altitude Unmanned Aerial Vehicle (UAV) scenarios. Firstly, the Object-Centered Dynamic Enhancement (OCDE) module dynamically adjusts the spatial positions of key-value pairs through query-agnostic offsets, enabling the network to emphasize object-relevant regions while suppressing haze-induced background interference. Secondly, the Dynamic Multi-Spectral Perception and Filtering (DSPF) module introduces a multi-spectral attention mechanism that adaptively selects informative frequency components, thereby enhancing discriminative feature representations in hazy environments. Thirdly, the Frequency-Domain Multi-Feature Fusion (FDMF) module employs learnable weights to complementarily integrate amplitude and phase information in the frequency domain, enabling effective cross-task feature interaction between the enhancement and detection branches. Extensive experiments demonstrate that RShDet consistently achieves superior detection performance under hazy conditions across both synthetic and real-world benchmarks. Specifically, it achieves improvements of 2.4% mAP50 on Hazy-DOTA, 1.9% mAP on HazyDet, and 2.33% mAP on the real-world foggy dataset RTTS, surpassing existing state-of-the-art methods.
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
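The FDMF module's amplitude/phase integration can be illustrated on complex spectra: blend magnitudes and phases separately with weights, then recombine. A toy analogue with fixed weights standing in for the paper's learnable ones:

```python
import cmath

def fuse_spectra(spec_a, spec_b, w_amp=0.5, w_phase=0.5):
    """Frequency-domain fusion of two complex spectra (lists of complex
    bins): linearly blend the magnitudes and the phases of corresponding
    bins, then recombine with cmath.rect. Weights are illustrative."""
    fused = []
    for a, b in zip(spec_a, spec_b):
        mag = w_amp * abs(a) + (1.0 - w_amp) * abs(b)
        pha = w_phase * cmath.phase(a) + (1.0 - w_phase) * cmath.phase(b)
        fused.append(cmath.rect(mag, pha))
    return fused
```

Amplitude mostly carries contrast/energy (degraded by haze) while phase carries structure, so weighting the two independently lets an enhancement branch restore energy without disturbing edges. Note that naive linear phase blending ignores 2π wrap-around; a real implementation would handle that.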

35 pages, 51987 KB  
Article
Structurally Consistent and Grounding-Aware Stagewise Reasoning for Referring Remote Sensing Image Segmentation
by Shan Dong, Jianlin Xie, Liang Chen, He Chen, Baogui Qi and Yunqiu Ge
Remote Sens. 2026, 18(7), 1015; https://doi.org/10.3390/rs18071015 - 28 Mar 2026
Viewed by 229
Abstract
Referring Remote Sensing Image Segmentation (RRSIS) is a representative multimodal understanding task for remote sensing, which segments designated targets from remote sensing images according to free-form natural language descriptions. However, complex remote sensing characteristics, such as cluttered backgrounds, large-scale variations, small scattered targets and repetitive textures, lead to unstable visual grounding and spatial grounding drift, resulting in inaccurate segmentation results. Existing approaches typically perform implicit visual–linguistic fusion across encoding and decoding stages, entangling spatial grounding with mask refinement. This tightly coupled formulation lacks explicit structural constraints and is prone to cross-modal ambiguity, especially in complex remote sensing layouts. To address these limitations, we propose a Structurally consistent and Grounding-aware Stagewise Reasoning Framework (SGSRF) that follows a grounding-first, segmentation-second paradigm. The framework decomposes inference into three cascaded stages with progressively imposed structural constraints. First, Cross-modal Consistency Refinement (CCR) lays the foundation for stable spatial grounding by enhancing visual–textual structural alignment via CLIP-based features and Structural Consistency Regularization (SCR), producing well-aligned multimodal representations and reliable grounding cues. Second, Grounding-aware Prompt Generation (GPG) bridges grounding and segmentation by converting aligned representations into complementary sparse and dense prompts, which serve as explicit grounding guidance for the segmentation model. Third, Grounding Modulated Segmentation (GMS) leverages the Segment Anything Model (SAM) to generate fine-grained mask predictions under the joint guidance of prompts and grounding cues, improving spatial grounding stability and robustness to background interference and scale variation. Extensive experiments on three remote sensing benchmarks, namely RefSegRS, RRSIS-D, and RISBench, demonstrate that SGSRF achieves state-of-the-art performance. The proposed stagewise paradigm integrates structural alignment, explicit grounding, and prompt-driven segmentation into a unified framework, providing a practical and robust solution for RRSIS in real-world Earth observation applications.

25 pages, 9560 KB  
Article
EFSL-YOLO: An Improved Model for Small Object Detection in UAV Vision
by Meng Zhou, Shuke He, Chang Wang and Jing Wang
Drones 2026, 10(4), 243; https://doi.org/10.3390/drones10040243 - 27 Mar 2026
Viewed by 342
Abstract
To address the challenges in UAV remote sensing imagery, such as small object size, dense occlusion and complex background interference, this paper proposes an enhanced small object detection algorithm based on an improved YOLOv13 model for drone applications in complex weather environments. First, an enhanced feature fusion attention network (EFFA-Net) is designed in the preprocessing stage to reduce image degradation and suppress the interference caused by smoke and haze. Then, in the backbone, a swish-gated convolution (SwiGLUConv) module is designed to adaptively expand the receptive field and enhance multi-scale feature extraction, which strengthens the representation of small targets while maintaining efficient computation. Furthermore, a locally enhanced multi-scale context fusion (LF-MSCF) module is integrated into the feature fusion neck of YOLO, combining multi-head self-attention, channel attention, and spatial attention to suppress background noise and redundant responses, thereby improving detection accuracy. Extensive experiments on the VisDrone-DET2019 dataset, UAVDT dataset, and HazyDet dataset demonstrate that the proposed algorithm outperforms other mainstream methods, showcasing excellent detection accuracy and robustness in complex UAV aerial scenarios.
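The swish gating behind the SwiGLUConv module can be sketched with plain dot products standing in for convolutions: a swish-activated linear gate multiplies a linear value path. The weight vectors here are illustrative assumptions, not the module's actual parameters:

```python
import math

def swish(x):
    """Swish/SiLU activation: x * sigmoid(x)."""
    return x / (1.0 + math.exp(-x))

def swiglu(x, w_gate, w_value):
    """SwiGLU-style gating on a feature vector: a swish-activated linear
    gate scales a linear value path. In SwiGLUConv the two projections
    would be convolutions; dot products keep the sketch self-contained."""
    gate = swish(sum(g * xi for g, xi in zip(w_gate, x)))
    value = sum(v * xi for v, xi in zip(w_value, x))
    return gate * value
```

The gate is smooth and can go slightly negative near zero pre-activation, which in practice gives richer gradients than a hard ReLU gate while still suppressing irrelevant responses.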

25 pages, 8205 KB  
Article
Forest Road Extraction via Optimized DeepLabv3+ and Multi-Temporal Remote Sensing for Wildfire Emergency Response
by Zhuoran Gao, Ziyang Li, Weiyuan Yao, Tingtao Zhang, Shi Qiu and Zhaoyan Liu
Appl. Sci. 2026, 16(7), 3228; https://doi.org/10.3390/app16073228 - 26 Mar 2026
Viewed by 378
Abstract
Forest fires occur frequently in China; however, the complex terrain and incomplete road networks severely constrain ground rescue efficiency. Accurate forest road information is essential for the optimization of emergency response and rescue force deployment. Existing road extraction algorithms are primarily designed for urban environments and exhibit limited efficacy in forest scenarios due to dense canopy, complex background interference and specific forest road features. To address this gap, this study proposes a forest road extraction method based on an enhanced DeepLabv3+ model using multi-temporal, high-resolution satellite imagery. Specifically, a Multi-Scale Channel Attention (MSCA) mechanism is embedded in skip connections to suppress background interference, while strip pooling is integrated into the Atrous Spatial Pyramid Pooling (ASPP) module to better capture slender road features. A composite Focal-Dice loss function is also constructed to mitigate sample imbalance. Finally, by applying the model to multi-temporal remote sensing images, a fusion strategy is introduced to integrate multi-seasonal road masks to enhance overall accuracy and topological integrity. Experimental results show that the proposed method achieves a precision of 54.1%, an F1-Score of 59.3%, and an IoU of 41.8%, effectively enhancing road continuity and providing robust technical support for fire-rescue decision-making.
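The composite Focal-Dice loss mentioned above pairs a focal term, which down-weights easy pixels, with a soft Dice term, which is robust to the extreme foreground/background imbalance of thin roads. A minimal sketch; the 0.5/0.5 mixing weight and gamma = 2 are assumed defaults, not the paper's settings:

```python
import math

def focal_dice_loss(probs, targets, alpha=0.5, gamma=2.0, eps=1e-7):
    """Composite Focal + Dice loss for a flat binary mask. probs are
    predicted foreground probabilities; targets are 0/1 labels."""
    # Focal term: cross-entropy scaled by (1 - p_t)^gamma so confident,
    # correct pixels contribute almost nothing.
    focal = 0.0
    for p, t in zip(probs, targets):
        pt = p if t == 1 else 1.0 - p
        focal += -((1.0 - pt) ** gamma) * math.log(max(pt, eps))
    focal /= len(probs)
    # Soft Dice term: overlap-based, insensitive to how many background
    # pixels dominate the image.
    inter = sum(p * t for p, t in zip(probs, targets))
    dice = 1.0 - (2.0 * inter + eps) / (sum(probs) + sum(targets) + eps)
    return alpha * focal + (1.0 - alpha) * dice
```

A perfectly confident correct prediction drives both terms to zero, while the Dice term keeps penalizing a model that simply predicts background everywhere.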

25 pages, 42196 KB  
Article
Frequency–Spatial Domain Jointly Guided Perceptual Network for Infrared Small Target Detection
by Yeteng Han, Minrui Ye, Bohan Liu, Jie Li, Chaoxian Jia, Wennan Cui and Tao Zhang
Remote Sens. 2026, 18(7), 1000; https://doi.org/10.3390/rs18071000 - 26 Mar 2026
Viewed by 532
Abstract
Infrared small target detection is a critical task in remote sensing. However, it remains highly challenging due to low contrast, heavy background clutter, and large variations in target scale. Traditional convolutional networks are inadequate for joint modeling, as they cannot effectively capture both fine structural details and global contextual dependencies. To address these issues, we propose FSGPNet, a frequency–spatial domain jointly guided perceptual network that explicitly exploits complementary representations in both the frequency and spatial domains. Specifically, a Frequency–Spatial Enhancement Module (FSEM) is introduced to strengthen target details while suppressing background interference through high-frequency enhancement and Perona–Malik diffusion. To enhance global context modeling, we propose a Multi-Scale Global Perception (MSGP) module that integrates non-local attention with multi-scale dilated convolutions, enabling robust background modeling. Furthermore, a Gabor Transformer Attention Module (GTAM) is designed to achieve selective frequency–spatial feature aggregation via self-attention over multi-directional and multi-scale Gabor responses, effectively highlighting discriminative structures of various small targets. Extensive experiments are conducted on two benchmark datasets (IRSTD-1K and NUDT-SIRST) that cover typical remote sensing infrared scenarios. Quantitative and qualitative results demonstrate that FSGPNet consistently outperforms state-of-the-art methods across multiple evaluation metrics. These findings validate the effectiveness and robustness of the proposed FSGPNet for detecting small infrared targets in remote sensing applications.
(This article belongs to the Special Issue Deep Learning-Based Small-Target Detection in Remote Sensing)
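The multi-directional, multi-scale Gabor responses that GTAM attends over come from kernels like the one below: a Gaussian envelope modulating a cosine wave along orientation theta. Parameter choices are illustrative, not the paper's filter-bank settings:

```python
import math

def gabor_kernel(size, theta, wavelength, sigma, gamma=0.5):
    """Real part of a 2-D Gabor filter as a size x size list of lists.
    theta sets the stripe orientation, wavelength the stripe period,
    sigma the Gaussian envelope width, gamma the envelope ellipticity.
    A bank over several theta/wavelength values gives multi-directional,
    multi-scale responses."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation frame.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                           / (2.0 * sigma * sigma))
            row.append(env * math.cos(2.0 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```

Convolving an image with such a bank yields one response map per orientation/scale pair; attending over those maps is what lets the module pick out small targets whose local structure matches a particular direction and frequency.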

25 pages, 3151 KB  
Article
FCR-TransUNet: A Novel Approach to Crop Classification in Remote Sensing Images Employing Attention and Feature Enhancement Techniques
by Yongqi Han, Xingtong Liu, Yun Zhang, Hongfu Ai, Chuan Qin and Xinle Zhang
Agriculture 2026, 16(7), 727; https://doi.org/10.3390/agriculture16070727 - 25 Mar 2026
Viewed by 377
Abstract
Accurate crop classification is critical for optimizing agricultural resource use and informing production decisions. Deep learning, with its robust feature extraction ability, has become a prevalent technique for remote sensing-based crop classification. However, agricultural landscape complexity poses three key challenges: background noise interference, class confusion from inter-crop spectral similarity, and blurred small-area crop boundaries due to class imbalance. This paper proposes FCR-TransUNet, a TransUNet-based enhanced model integrating three modules, including a Feature Enhancement Module (FEM) for noise filtering and a Class-Attention (CA) mechanism for class confusion. Experimental results on the Youyi Farm and barley datasets validate the superiority of the proposed model. On the Youyi Farm dataset, FCR-TransUNet achieves an MIoU of 92.2%, representing an improvement of 1.8% over SAM2-UNet and 2.9% over the baseline TransUNet. On the barley dataset, it yields an MIoU of 89.9%. Ablation studies further verify the effectiveness of each designed module. To comprehensively evaluate the classification performance of FCR-TransUNet across the full crop growth cycle, experiments were conducted using remote sensing images from May, July, and August, respectively. The results demonstrate that FCR-TransUNet exhibits strong stability and adaptability at different crop growth stages, providing a reliable solution for precision agriculture and intelligent agricultural production.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

27 pages, 8177 KB  
Article
DINOv3-PEFT: A Dual-Branch Collaborative Network with Parameter-Efficient Fine-Tuning for Precise Road Segmentation in SAR Imagery
by Debao Chen, Wanlin Yang, Ye Yuan and Juntao Gu
Remote Sens. 2026, 18(7), 973; https://doi.org/10.3390/rs18070973 - 24 Mar 2026
Viewed by 220
Abstract
Extracting road networks from Synthetic Aperture Radar (SAR) data represents a core challenge in remote sensing scene analysis, particularly for applications in traffic monitoring and emergency management. The task is complicated by several inherent limitations: speckle noise degrades image quality, geometric distortions arise [...] Read more.
Extracting road networks from Synthetic Aperture Radar (SAR) data represents a core challenge in remote sensing scene analysis, particularly for applications in traffic monitoring and emergency management. The task is complicated by several inherent limitations: speckle noise degrades image quality, geometric distortions arise from the side-looking acquisition geometry, and roads often exhibit weak radiometric separation from surrounding terrain. Traditional processing pipelines and recent single-branch deep learning frameworks have shown insufficient performance when global contextual reasoning and fine-scale spatial detail must both be addressed. This work presents DINOv3-PEFT, a parameter-efficient dual-encoder network designed specifically for SAR road segmentation. The architecture employs two complementary processing streams tailored to SAR characteristics: one stream utilizes adapter-based fine-tuning applied to pre-trained DINOv3 weights (kept frozen), which captures long-distance spatial relationships crucial for maintaining network connectivity despite speckle corruption. The second stream, based on convolutional operations, focuses on extracting localized geometric features that preserve the narrow, elongated structure and sharp boundaries typical of road infrastructure. Feature fusion occurs through the Topological-Geometric Feature Integration (TGFI) Module, which synthesizes multi-scale representations hierarchically. This mechanism proves effective at bridging fragmented road segments and recovering geometric accuracy in scenarios with heavy shadow casting or signal interference. Performance evaluation on the GF-3 satellite dataset across four spatial resolutions (1 m, 3 m, 5 m, and 10 m) demonstrates the proposed method achieves an 82.61% F1-score, a 76.51% IoU, and a 98.08% overall accuracy, all averaged across the four resolutions. 
When benchmarked against six state-of-the-art methods, DINOv3-PEFT demonstrates substantial improvements in road class segmentation quality and topological connectivity preservation, supporting its robustness for operational SAR road mapping tasks. Full article
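The adapter-based fine-tuning stream described above can be illustrated with a minimal bottleneck-adapter sketch: a small down-project/up-project residual branch added to frozen backbone features, so only the two tiny projection matrices are trained. The `BottleneckAdapter` class, the dimensions, and the initialization scheme below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class BottleneckAdapter:
    """Lightweight adapter: features from a frozen backbone block pass
    through a down-project -> nonlinearity -> up-project residual branch.
    Only the two small projection matrices would be trained."""
    def __init__(self, dim, bottleneck):
        self.W_down = rng.normal(0.0, 0.02, size=(dim, bottleneck))
        # zero-initialized up-projection: the adapter starts as an exact
        # identity, so fine-tuning begins from the pre-trained behavior
        self.W_up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        return x + relu(x @ self.W_down) @ self.W_up

x = rng.normal(size=(4, 768))        # token features from a frozen ViT block
adapter = BottleneckAdapter(768, 64)
y = adapter(x)
print(np.allclose(x, y))             # -> True: identity at initialization
```

A common design choice with such adapters is the zero-initialized up-projection shown here, which keeps the frozen backbone's outputs unchanged until training moves the adapter weights.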
25 pages, 10489 KB  
Article
An Unsupervised Machine Learning-Based Approach for Combining Sentinel 1 and 2 to Assess the Severity of Fires over Large Areas Using Google Earth Engine
by Ciro Giuseppe Riccardi, Nicodemo Abate and Rosa Lasaponara
Remote Sens. 2026, 18(6), 956; https://doi.org/10.3390/rs18060956 - 23 Mar 2026
Abstract
Wildfires represent a significant global environmental challenge, necessitating advanced monitoring and assessment techniques. This study explores the integration of Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 optical data within a Google Earth Engine (GEE) framework to enhance wildfire detection, burned area estimation, and severity assessment. By leveraging SAR’s capability to penetrate atmospheric obstructions and optical data’s spectral sensitivity to vegetation changes, the proposed methodology addresses limitations of single-sensor approaches. The results demonstrate strong correlations between SAR-based indices, such as the Radar Vegetation Index (RVI) and Dual-Polarized SAR Vegetation Index (DPSVI), and traditional optical indices, including the Normalized Burn Ratio (NBR) and differenced NBR (ΔNBR). Despite challenges related to terrain influence, sensor resolution differences, and computational demands, the integration of multi-sensor data in a cloud-based environment offers a scalable and efficient solution for wildfire monitoring. During the peak of the fire events, significant atmospheric obstruction was technically verified using Sentinel-2 metadata and the QA60 cloud mask band, which confirmed persistent cloud cover and thick smoke plumes over the study areas. This interference limited the reliability of purely optical monitoring, further justifying the integration of SAR data. Future research should focus on refining data fusion techniques, incorporating additional datasets such as thermal infrared imagery and meteorological variables, and enhancing automation through artificial intelligence (AI). This study underscores the potential of remote sensing advancements in improving fire management strategies and global wildfire mitigation efforts. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Burned Area Mapping)
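The optical and radar indices named in the abstract follow standard formulations; a minimal per-pixel sketch is given below, assuming Sentinel-2 NIR/SWIR reflectances (bands B8/B12) and Sentinel-1 linear-scale backscatter. The sample pixel values are invented for illustration, and the DPSVI formulation is omitted here.

```python
def nbr(nir, swir2):
    """Normalized Burn Ratio (NBR) from NIR and SWIR reflectance."""
    return (nir - swir2) / (nir + swir2)

def rvi_dual_pol(sigma_vv, sigma_vh):
    """Dual-polarized Radar Vegetation Index from linear backscatter."""
    return 4.0 * sigma_vh / (sigma_vv + sigma_vh)

# toy pre-/post-fire reflectances for one pixel
nbr_pre = nbr(nir=0.45, swir2=0.15)     # healthy vegetation: NBR ~ 0.5
nbr_post = nbr(nir=0.20, swir2=0.30)    # burn scar: NBR ~ -0.2
dnbr = nbr_pre - nbr_post               # differenced NBR (dNBR) ~ 0.7

rvi = rvi_dual_pol(sigma_vv=0.06, sigma_vh=0.015)   # ~ 0.8
```

Larger ΔNBR values indicate more severe vegetation loss between the pre- and post-fire acquisitions, which is the quantity the study correlates against the SAR-based indices.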
26 pages, 2977 KB  
Article
HGR-QL: Optimized Q-Learning for Multi-UAV Path Planning in Mountain Search and Rescue
by Qi Liu, Daqiao Zhang, Shaopeng Li, Pei Dai and Wenjing Li
Drones 2026, 10(3), 223; https://doi.org/10.3390/drones10030223 - 22 Mar 2026
Abstract
Existing Q-Learning-based path planning methods face significant bottlenecks in large-scale collaboration, dynamic interference adaptation, and regional value differentiation, failing to meet the practical needs of mountain search and rescue. This study proposes HGR-QL, an optimized Q-Learning method for large-scale multi-UAV operations. Drawing on remote sensing datasets, a 50 × 50 dynamic grid environment is constructed with 20% fixed obstacles and 10 moving interference sources, closely simulating real mountain terrain. A hierarchical independent Q-table architecture combines individual Q-tables with regional shared Q-tables, balancing local autonomy and global collaboration. To keep UAVs focused on high-value areas identified from remote sensing, a multi-level gradient collision-avoidance reward function is constructed, preventing task deviation. Comparative experiments across three scenarios against four baselines, together with ablation tests, validate the core modules. Results show that HGR-QL outperforms its peers on key metrics: in the dynamic interference scenario it achieves a 74.47% task completion rate, 25.44 collisions, and a stable 100.00 ms communication delay. HGR-QL provides a lightweight, scalable solution that effectively enhances the efficiency, safety, and stability of mountain search and rescue, supporting the “golden 72 h” rescue window. Full article
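The tabular Q-Learning core that HGR-QL builds on can be illustrated on a toy grid. The hierarchical shared tables and the multi-level gradient reward are not reproduced here; the grid size, rewards, and hyperparameters below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

N, ACTIONS = 5, 4                  # 5x5 grid; actions: up/down/left/right
Q = np.zeros((N * N, ACTIONS))     # one Q-value per (state, action) pair
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(s, a):
    """Move on the grid, clamped at the borders; goal at bottom-right."""
    r, c = divmod(s, N)
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
    r2 = min(max(r + dr, 0), N - 1)
    c2 = min(max(c + dc, 0), N - 1)
    s2 = r2 * N + c2
    reward = 10.0 if s2 == N * N - 1 else -1.0   # step cost pushes short paths
    return s2, reward, s2 == N * N - 1

for _ in range(2000):                            # training episodes
    s = 0
    for _ in range(50):
        # epsilon-greedy action selection
        a = rng.integers(ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-Learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if done:
            break

# greedy rollout with the learned table
s, path = 0, [0]
while s != N * N - 1 and len(path) <= 20:
    s, _, _ = step(s, int(np.argmax(Q[s])))
    path.append(s)
print(len(path) - 1)   # greedy path length; 8 moves is optimal on a 5x5 grid
```

HGR-QL's hierarchical variant would additionally blend each UAV's individual table with a regional shared table and shape the reward by region value, but the update rule above is the common foundation.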
