Search Results (3,134)

Search Parameters:
Keywords = radar-based imaging

21 pages, 3963 KB  
Article
A Convective Initiation Nowcasting Algorithm Based on FY-4B Satellite AGRI and GHI Data
by Zongxin Yang, Zhigang Cheng, Wenjun Sang, Wen Zhang, Yu Huang, Yuwen Huang and Zhi Wang
Atmosphere 2026, 17(4), 380; https://doi.org/10.3390/atmos17040380 - 8 Apr 2026
Abstract
Based on Advanced Geostationary Radiation Imager (AGRI) and Geostationary High-speed Imager (GHI) data from the Fengyun-4B (FY-4B) satellite, we propose a convective initiation (CI) nowcasting algorithm for Sichuan Province, China. The algorithm combines multi-channel brightness-temperature differences, visible reflectance, and cloud-top cooling rates, using Farneback optical flow to track cloud motion and suppress the false cooling that motion alone would otherwise produce. Moreover, the high temporal resolution of GHI enables the detection of early cumulus cloud growth. The algorithm was developed using daytime CI events in the coverage area of the Mianyang radar station from 22 July to 9 August 2023, and the remaining areas in the Chengdu scan area were used for validation. The results showed that the proposed method achieves a probability of detection (POD) of 83.1%, a false alarm ratio (FAR) of 33.0%, and a critical success index (CSI) of 58.9%. Compared with the AGRI-only method and the SATCAST algorithm, the POD increases by 5.4% and 8.4%, respectively, while the CSI improves by 1.3% and 2.3%. The average lead time reaches 34.2 min, which is 4.6 min longer than AGRI-only and 7.9 min longer than SATCAST. This suggests that combining AGRI and GHI data improves the spatiotemporal resolution of CI nowcasting. This approach improves the early detection of convective initiation under the climatic background of warm cloud convection in Sichuan, offering new insights for short-term warnings of regional convective weather. Full article
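The motion-compensation idea in this abstract — follow the cloud so that apparent cooling caused by displacement is not mistaken for growth — can be sketched with a single global shift standing in for the dense Farneback flow field. All values below are hypothetical toy numbers, not the algorithm's actual thresholds:

```python
import numpy as np

def motion_compensated_cooling(bt_prev, bt_curr, shift_rc):
    """Cloud-top cooling rate (K per frame) after compensating cloud motion.

    bt_prev, bt_curr : 2-D brightness-temperature fields (K).
    shift_rc         : (drow, dcol) motion of the cloud between frames; a real
                       system would estimate a dense field with Farneback
                       optical flow instead of one global shift.
    Negative values indicate cooling, the signature of growing cumulus.
    """
    advected = np.roll(bt_prev, shift_rc, axis=(0, 1))  # move old cloud to its new position
    return bt_curr - advected

# Toy scene: a 5 K cold cloud core that moves 2 px right and cools 3 K.
bt0 = np.full((8, 8), 290.0)
bt0[3, 2] = 285.0
bt1 = np.full((8, 8), 290.0)
bt1[3, 4] = 282.0

naive = bt1 - bt0                        # motion alone fakes both cooling and warming
comp = motion_compensated_cooling(bt0, bt1, (0, 2))
print(naive[3, 4], comp[3, 4])
```

The naive difference reports a spurious −8 K at the cloud's new position, while the compensated field recovers the true −3 K cooling.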
(This article belongs to the Special Issue Meteorological Issues for Low-Altitude Economy)

21 pages, 8662 KB  
Article
Research on Vortex Radar Imaging Characteristics Based on the Scattering Distribution of Three-Dimensional Wind-Driven Sea Surface Waves
by Xiaoxiao Zhang, Haodong Geng, Xiang Su, Lin Ren and Zhensen Wu
Remote Sens. 2026, 18(8), 1111; https://doi.org/10.3390/rs18081111 - 8 Apr 2026
Abstract
The resolution and accuracy of airborne/spaceborne SAR are continuously improving, making it an effective means for observing ocean dynamic processes and detecting marine targets. In contrast, utilizing its unique orbital angular momentum (OAM) mode, vortex radar does not require temporal accumulation to achieve azimuthal resolution, making it particularly suitable for observing moving sea surfaces. This capability enables stable and continuous monitoring of dynamic ocean scenes. This paper proposes a vortex radar imaging method based on three-dimensional sea surface scattering characteristics: first, a three-dimensional wind-driven sea surface geometric model is established based on the Elfouhaily sea spectrum, and its scattering characteristics under different incident angles, wind speeds, and wind directions are analyzed using the semi-deterministic facet-based two-scale method; then, two-dimensional range-azimuth imaging is achieved through coordinate transformation, echo modeling, pulse compression, and a fast Fourier transform (FFT) in the OAM mode domain, with the correctness of the imaging algorithm verified through multiple point target imaging results. Finally, simulation results of two-dimensional sea surface vortex imaging under different incident angles are presented, and the influence of wind speed and direction on sea surface vortex imaging is analyzed. The study shows that the vortex imaging system can effectively reflect wave fluctuations and wind direction characteristics, demonstrating the feasibility and potential of vortex radar imaging in oceanographic applications. Full article
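The pulse-compression step named in this pipeline is ordinary matched filtering, which can be sketched in a few lines. The chirp parameters and delay below are hypothetical, and the OAM-mode azimuth processing is not reproduced:

```python
import numpy as np

# Pulse compression: correlate the echo with the transmitted chirp via FFT.
fs, T, B = 100e6, 10e-6, 20e6                  # sample rate, pulse length, bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)    # linear FM reference pulse

delay = 300                                    # echo delayed by 300 samples
echo = np.zeros(4096, dtype=complex)
echo[delay:delay + chirp.size] = chirp

# Matched filter = conjugate spectrum of the reference pulse (circular correlation).
n = echo.size
H = np.conj(np.fft.fft(chirp, n))
compressed = np.fft.ifft(np.fft.fft(echo) * H)
peak = int(np.argmax(np.abs(compressed)))
print(peak)                                    # peak lands at the simulated delay
```

The compressed output peaks exactly at the 300-sample round-trip delay, which is how point-target responses verify an imaging chain of this kind.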
(This article belongs to the Special Issue Observations of Atmospheric and Oceanic Processes by Remote Sensing)

27 pages, 18185 KB  
Article
SAR-Based Rotated Ship Detection in Coastal Regions Combining Attention and Dynamic Angle Loss
by Ning Wang, Wenxing Mu, Yixuan An and Tao Liu
Electronics 2026, 15(8), 1557; https://doi.org/10.3390/electronics15081557 - 8 Apr 2026
Abstract
With the expanding application of synthetic aperture radar (SAR) in ocean monitoring and port regulation, nearshore ship detection based on SAR images faces notable challenges arising from strong background scattering, dense target occlusion, and large pose variations. Therefore, this paper proposes a two-stage oriented detection network named EARS-Net to improve the accuracy of ship detection in complex nearshore environments. Specifically, a lightweight convolutional block attention module (CBAM) is embedded into the high-level semantic stages of ResNet50 to enhance discriminative ship features while suppressing interference from port infrastructure and shoreline structures. Then, a dynamic angle regression loss (DAL) is proposed, with an angle weight function designed according to the distribution of ship orientations; it assigns higher regression weights to ship targets with larger tilt angles, addressing the insufficient localization accuracy for such ships. Moreover, a training strategy that combines focal loss, multi-scale training, and rotated online hard example mining (ROHEM) is employed to alleviate sample imbalance and improve generalization in dense scenes. Experimental results on the nearshore subset of the SSDD show that EARS-Net achieves an average precision (AP) of 0.903 on the test set, demonstrating reliable detection capability under complex backgrounds and dense target distributions. These results validate the effectiveness of our method and highlight its potential as a practical engineering solution for enhancing port situational awareness and coastal security monitoring. Full article

21 pages, 54538 KB  
Article
A Combined Wavelet–SVD Denoising and Wavelet Packet Decomposition Method for Quantitative GPR-Based Assessment of Compaction
by Shaoshi Dai, Shuxin Lv, Bin Kong, Yufei Wu, Tao Su and Zhi Xu
Appl. Sci. 2026, 16(7), 3483; https://doi.org/10.3390/app16073483 - 2 Apr 2026
Abstract
Ballast compaction is a critical factor influencing ballast bed condition and the operational safety of heavy-haul railways. However, existing quantitative evaluation methods often suffer from overly idealized simulation models and limitations in signal processing and assessment frameworks. To address these issues, this study proposes a quantitative analysis approach for ballast compaction by integrating non-uniform medium simulation modeling, wavelet–Singular Value Decomposition (SVD) joint denoising, frequency–wavenumber (F-K) migration imaging, and wavelet packet decomposition (WPD)-based feature extraction. Forward simulations were conducted based on the constructed model, and the proposed methodology was validated using 1.5 GHz (1 GHz = 10⁹ Hz) ground penetrating radar (GPR) data acquired from compaction experiments. The results demonstrate that wavelet–SVD joint denoising effectively suppresses deep coherent noise caused by strong reflections from sleepers, significantly enhancing the identification of deep effective signals and ensuring accurate localization and feature extraction of compaction zones. The Geometric Mean of WPD High/Low-Frequency Energy Ratio (GMHLFER) exhibits a strong positive correlation with the degree of compaction. In simulations, as the proportion of compacted material increased from 9% to 21%, the GMHLFER rose from 21.555 to 26.581. In field tests, the value increased from 22.012 to 26.012 as compaction severity progressed from slight to severe, demonstrating stable responses across full-gradient compaction conditions and indicating high robustness and sensitivity. The proposed method provides an effective approach for quantitative characterization of ballast compaction in heavy-haul railways, and offers a promising technical pathway for intelligent inspection and condition assessment of railway ballast beds. Full article
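The SVD half of the wavelet–SVD denoising can be illustrated with eigenimage filtering: coherent noise that is identical across traces (such as sleeper ringing) is captured by the leading singular component and can be subtracted. The B-scan below is synthetic toy data and the wavelet stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy B-scan: depth samples x traces. A strong horizontally coherent band
# (stand-in for sleeper ringing) masks a weak local reflector.
n_depth, n_traces = 64, 40
bscan = 0.05 * rng.standard_normal((n_depth, n_traces))
bscan[20, :] += 5.0          # coherent noise: identical across every trace
bscan[45, 18] += 1.0         # the effective signal we want to keep

# Eigenimage filtering: the rank-1 term captures the trace-invariant band.
U, s, Vt = np.linalg.svd(bscan, full_matrices=False)
denoised = bscan - s[0] * np.outer(U[:, 0], Vt[0])

band_before = np.abs(bscan[20]).mean()
band_after = np.abs(denoised[20]).mean()
print(band_before, band_after)
```

The coherent band collapses to the noise floor while the localized reflector at (45, 18) is essentially untouched, since it contributes almost nothing to the first singular vector.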

24 pages, 10406 KB  
Article
Evaluating the Performance of AlphaEarth Foundation Embeddings for Irrigated Cropland Mapping Across Regions and Years
by Lulu Yang, Yuan Gao, Xiangyang Zhao, Nannan Liang, Ru Ma, Shixiang Xi, Xiao Zhang and Rui Wang
Remote Sens. 2026, 18(7), 1065; https://doi.org/10.3390/rs18071065 - 2 Apr 2026
Abstract
Accurate irrigated cropland mapping is critical for agricultural water management and food security. Existing image-based irrigation mapping workflows primarily rely on vegetation indices and synthetic aperture radar (SAR) backscatter features, which have limited capacity to characterize the temporal evolution of irrigation processes and crop growth conditions. The AlphaEarth Foundation (AEF) model developed by Google DeepMind provides compact embeddings with temporal semantic information learned via self-supervision, yet their utility for irrigation mapping has not been systematically assessed. In this study, a comprehensive assessment of AEF embeddings for irrigated cropland mapping was performed in terms of feature separability, classification performance, and spatiotemporal transferability. Experiments were conducted in two representative irrigated regions: the Guanzhong Plain in China and Kansas in the USA. Class separability of the 64 embedding dimensions was quantified using the Jeffries–Matusita (JM) distance. Then, the AEF embeddings were compared with the Sentinel feature set (Sentinel-2 bands, normalized difference vegetation index (NDVI), enhanced vegetation index (EVI), normalized difference water index (NDWI), and Sentinel-1 vertical transmit vertical receive (VV) and vertical transmit horizontal receive (VH)) using K-means clustering and supervised classifiers, including Decision Tree (DT), Random Forest (RF), Gradient Boosting Decision Trees (GBDT), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP). Finally, transfer experiments across 2022 and 2024 in the Guanzhong Plain and Kansas were conducted to examine cross-year and cross-region performance. The results showed that AEF embeddings consistently provide stronger class separability in both study areas, with a maximum JM distance of 1.58 (A29). Using AEF embeddings, RF achieved overall accuracies (OA) of 0.95 in the Guanzhong Plain and 0.93 in Kansas, outperforming models based on Sentinel-1/2 bands and indices. Notably, unsupervised K-means clustering on AEF embeddings yielded OA > 0.85, indicating high intrinsic separability between irrigated and rainfed croplands. Transfer experiments further demonstrate stable temporal transfer (cross-year OA > 0.87), whereas cross-region transfer is constrained by differences in irrigation regimes, crop phenology, and management practices, resulting in limited spatial generalization (OA~0.3). Overall, this study demonstrates the potential of high-information-density representations from geospatial foundation models for irrigated cropland mapping and provides methodological and technical insights to support transfer learning and operational mapping over large areas. Full article
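The Jeffries–Matusita distance used above has a closed form under a Gaussian class assumption; a one-dimensional sketch with synthetic stand-in "embedding" values (the real study works per embedding dimension):

```python
import numpy as np

def jm_distance(x1, x2):
    """Jeffries-Matusita distance between two 1-D feature samples, assuming
    each class is Gaussian. Ranges from 0 to 2; values near 2 mean the
    classes are almost fully separable."""
    m1, m2 = x1.mean(), x2.mean()
    v1, v2 = x1.var(), x2.var()
    v = 0.5 * (v1 + v2)
    # Bhattacharyya distance for two univariate Gaussians
    b = (m1 - m2) ** 2 / (8 * v) + 0.5 * np.log(v / np.sqrt(v1 * v2))
    return 2 * (1 - np.exp(-b))

rng = np.random.default_rng(1)
irrigated = rng.normal(0.0, 1.0, 5000)   # hypothetical class distributions
rainfed = rng.normal(3.0, 1.0, 5000)
print(jm_distance(irrigated, rainfed))
```

Two unit-variance classes with means three standard deviations apart give a JM distance of roughly 1.35, which calibrates how strong the reported 1.58 separability is on the 0 to 2 scale.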
(This article belongs to the Special Issue Near Real-Time (NRT) Agriculture Monitoring)

20 pages, 4887 KB  
Article
Geo-RVF: A Multi-Task Lightweight Perception System Based on Radar–Vision Fusion for USVs
by Jianhong Zhou, Zhen Huang, Yifan Liu, Gang Zhang, Yilan Yu and Zhen Tian
J. Mar. Sci. Eng. 2026, 14(7), 664; https://doi.org/10.3390/jmse14070664 - 31 Mar 2026
Abstract
Visual perception in Unmanned Surface Vehicles (USVs) suffers from drastic lighting changes and missing texture features. These factors lead to depth scale drift and motion estimation bias. Moreover, existing multi-modal fusion models are computationally complex and unfit for resource-limited edge devices. To address these problems, a lightweight Radar–Vision Fusion (Geo-RVF) algorithm is proposed. To supplement spatial information, point clouds are projected to build sparse depth maps. A probability consistency-based depth correction module is designed to suppress water noise. This helps extract accurate geometric anchors to guide visual depth propagation. Subsequently, a Recurrent Autoregressive Network (RAN) fuses radar and image features in the temporal dimension. This resolves dynamic positional deviations caused by texture degradation and distant small targets. After real-time optimization, Geo-RVF achieves 23 FPS on the Jetson Orin NX. On a collected dataset, the method attains a mean average precision (mAP50–90) of 44.2% and a mean intersection over union (mIoU) of 99%, outperforming HybridNets and Achelous. Full article
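Projecting radar point clouds into sparse depth maps, the first step above, is a standard pinhole projection that keeps the nearest return per pixel. The intrinsic matrix and points below are made-up numbers, not the paper's sensor rig:

```python
import numpy as np

def project_to_sparse_depth(points, K, h, w):
    """Project radar points (N x 3, camera coordinates, z forward) into a
    sparse depth map using pinhole intrinsics K. Keeps the nearest return
    per pixel, since radar multipath can stack several."""
    depth = np.zeros((h, w))
    z = points[:, 2]
    valid = z > 0                               # only points in front of the camera
    uv = (K @ points[valid].T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    for ui, vi, zi in zip(u, v, z[valid]):
        if 0 <= vi < h and 0 <= ui < w:
            if depth[vi, ui] == 0 or zi < depth[vi, ui]:
                depth[vi, ui] = zi
    return depth

K = np.array([[400.0, 0, 32], [0, 400.0, 24], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 10.0],      # straight ahead, 10 m
                [0.0, 0.0, 25.0],      # same pixel, farther: discarded
                [0.4, 0.1, 20.0]])     # offset target
depth = project_to_sparse_depth(pts, K, 48, 64)
print(depth[24, 32], depth[26, 40])
```

The two collinear returns collapse to the nearer 10 m value at the principal point, and the offset target lands at its projected pixel with depth 20 m.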
(This article belongs to the Section Ocean Engineering)

28 pages, 15563 KB  
Article
Rapid Detection of Ionospheric Disturbances in L-Band InSAR Systems: A Case Study Using LT-1 Data
by Huaishuai Wang, Hongjun Song, Yulun Wu, Yang Liu, Jili Wang and Xiang Zhang
Remote Sens. 2026, 18(7), 1030; https://doi.org/10.3390/rs18071030 - 29 Mar 2026
Abstract
Ionospheric effects constitute a key error source limiting the accuracy of surface deformation monitoring using L-band interferometric synthetic aperture radar (InSAR). Efficient identification of interferometric pairs affected by ionospheric disturbances is therefore essential for large-scale and high-throughput automated InSAR processing. To address this issue, a parameterized ionospheric detection method based on azimuth offsets derived from sub-aperture images is proposed. The proposed method integrates random-sampling pixel offset tracking (RS-POT) with piecewise Gaussian fitting to enable rapid and robust detection of ionospheric disturbances. Experimental validation was conducted using 50 interferometric pairs acquired by the LuTan-1 (LT-1) satellite, China’s first dual-satellite L-band SAR mission, covering high-, mid-, and low-latitude regions with varying ionospheric conditions. The results demonstrate that the proposed method can reliably identify ionospheric disturbances under diverse conditions while maintaining high computational efficiency. The proposed framework provides an effective solution for determining whether ionospheric correction is required, thereby improving the efficiency of automated interferometric processing workflows. Full article

20 pages, 13035 KB  
Article
Development of Wideband Circular Microstrip Patch Antenna for Use in Microwave Imaging for Brain Tumor Detection
by Hüseyin Özmen, Mengwei Wu and Mariana Dalarsson
Sensors 2026, 26(7), 2062; https://doi.org/10.3390/s26072062 - 25 Mar 2026
Abstract
This work presents the design of a compact, wideband circular microstrip patch antenna for microwave imaging-based brain tumor detection. The main contribution is the development of a compact antenna structure incorporating enhanced ground-plane slot modifications, which significantly improves impedance bandwidth while maintaining a small electrical size, making it highly suitable for medical imaging systems. In addition, the study integrates antenna design, safety evaluation, and microwave imaging analysis within a unified framework to assess tumor localization feasibility using a realistic head model in CST Microwave Studio. The proposed antenna is fabricated on an FR-4 substrate with dimensions of 37 × 54.5 × 1.6 mm³, corresponding to an electrical size of 0.176λ × 0.260λ × 0.0076λ at the lowest operating frequency of 1.43 GHz. Ground-plane slot enhancements are introduced to achieve wideband performance, resulting in an impedance bandwidth from 1.43 to 4 GHz and a fractional bandwidth of 94.7%. The antenna exhibits a maximum realized gain of 3.7 dB. To evaluate its suitability for medical applications, specific absorption rate (SAR) analysis is performed using a realistic human head model at multiple antenna positions and at 1.5, 2.1, 2.5, 3.3, and 3.9 GHz frequencies. The computed SAR values range from 0.109 to 1.56 W/kg averaged over 10 g of tissue, satisfying the IEEE C95.1 safety guideline limit of 2 W/kg. For tumor detection assessment, time-domain simulations are conducted in CST Microwave Studio using a monostatic radar configuration, where the antenna operates as both transmitter and receiver at twelve angular positions around the head with 30° increments. The collected scattered signals are processed using the Delay-and-Sum (DAS) beamforming algorithm to reconstruct dielectric contrast maps and localize the tumor. It should be noted that the tumor-imaging demonstrations presented in this work are based on numerical simulations, while experimental validation is limited to the characterization of the fabricated antenna. Nevertheless, the findings indicate that the proposed antenna is a promising candidate for noninvasive, low-cost microwave brain tumor imaging applications. Full article
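Delay-and-Sum beamforming as described, with twelve monostatic positions at 30° increments and back-projected round-trip delays summed per pixel, reduces to a few lines. The geometry, wave speed, and sampling below are illustrative placeholders, not the paper's head model:

```python
import numpy as np

# Minimal delay-and-sum (DAS) sketch: monostatic antennas on a circle, each
# records the round-trip echo of a point scatterer; coherently summing
# back-projected delays focuses energy at the scatterer position.
c = 3e8 / np.sqrt(40.0)                 # rough in-tissue speed, eps_r ~ 40
fs = 20e9                               # sample rate of the time traces
n_ant, radius = 12, 0.10                # twelve positions, 30 degree steps
target = np.array([0.02, 0.01])

ang = np.arange(n_ant) * (2 * np.pi / n_ant)
ants = radius * np.stack([np.cos(ang), np.sin(ang)], axis=1)

# Synthesize ideal impulse echoes at the round-trip delay of the target.
n_t = 4096
traces = np.zeros((n_ant, n_t))
for i, a in enumerate(ants):
    d = 2 * np.linalg.norm(a - target) / c
    traces[i, int(round(d * fs))] = 1.0

def das(px):
    """Sum each trace at the round-trip delay implied by pixel position px."""
    out = 0.0
    for i, a in enumerate(ants):
        d = 2 * np.linalg.norm(a - np.asarray(px)) / c
        out += traces[i, int(round(d * fs))]
    return out

print(das(target), das([-0.03, -0.02]))   # focus vs. off-target pixel
```

Evaluating `das` on a pixel grid yields the dielectric-contrast map; the delays align only at the true scatterer position, so the image peaks there.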

28 pages, 5779 KB  
Article
Recovery of Petermann Glacier Velocity from SAR Imagery Using a Spatiotemporal Hybrid Neural Network
by Zongze Li, Haimei Mo, Lebao Yang and Jinsong Chong
Appl. Sci. 2026, 16(7), 3169; https://doi.org/10.3390/app16073169 - 25 Mar 2026
Abstract
Numerous studies have demonstrated the potential of Synthetic Aperture Radar (SAR) in monitoring glacier velocity. However, owing to the complex dynamics of glaciers and the variability of their surface features, velocity fields derived from even short-interval SAR image pairs often exhibit missing parts. This study proposes a missing glacier velocity recovery method based on a spatiotemporal hybrid neural network to solve the above problem. Considering the spatiotemporal characteristics of glacier velocity fields, a hybrid network combining an Artificial Neural Network (ANN) and a Denoising Autoencoder (DAE) is developed. The ANN is first employed to capture spatial correlations associated with missing values, after which it is integrated with the DAE to model temporal variations using a time-aware loss function. An iterative weighting strategy adaptively balances spatial and temporal features during training. The method is applied to SAR–derived velocity fields of Petermann Glacier. Experimental results show that the method significantly improves the performance of glacier velocity recovery compared to traditional methods. Additionally, the study compares and analyzes the velocity of Petermann Glacier in different seasons, and the findings indicate that the glacier exhibits more pronounced seasonal differences in the accumulation zone. Full article

33 pages, 3891 KB  
Article
Correlation and Semantic Prior-Guided Multi-Scale Cross-Modal Interaction Network for SAR-OPT Image Fusion
by Xiaoyang Hou, Lingxi Zhou, Chenguo Feng, Hao Cha, Yang Liu, Liguo Liu and Haibo Liu
Remote Sens. 2026, 18(7), 975; https://doi.org/10.3390/rs18070975 - 24 Mar 2026
Abstract
Synthetic aperture radar (SAR) and optical (OPT) image fusion aims to leverage their complementary information to obtain a more comprehensive representation of ground objects. However, significant discrepancies exist between the two modalities in terms of imaging mechanisms and feature distributions. Consequently, existing multi-modal image fusion methods struggle to achieve robust cross-modal feature alignment and deep semantic consistency between the fused results and the source modalities. To address the above challenges, this paper proposes a correlation and semantic prior-guided multi-scale cross-modal interaction network (CSP-MCIN) for effective SAR-OPT image fusion. Specifically, CSP-MCIN first employs two modality-specific encoders based on ResNet-18 to extract low-level details and high-level semantic features from SAR and OPT images, respectively. Subsequently, a multi-scale interactive decoder integrating cross-modal Transformers and gated fusion units is constructed to align and aggregate semantic and detail information from both encoders. Finally, to strengthen source-modality representations, a novel loss function combining a pixel-domain correlation loss and a CLIP-guided semantic consistency loss is designed and optimized under a PCGrad-based multi-objective optimization scheme. Experimental results on public SAR-OPT image datasets demonstrate that the proposed CSP-MCIN achieves superior fusion performance and computational efficiency compared with state-of-the-art approaches. Full article
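PCGrad, the multi-objective scheme mentioned for balancing the two losses, resolves conflicting task gradients by projecting each onto the normal plane of the other. A two-gradient numpy sketch (toy vectors, not the network's actual gradients):

```python
import numpy as np

def pcgrad(g1, g2):
    """PCGrad-style conflict resolution for two task gradients: when the
    gradients conflict (negative dot product), project each onto the normal
    plane of the other before summing."""
    g1, g2 = g1.astype(float), g2.astype(float)
    p1, p2 = g1.copy(), g2.copy()
    if g1 @ g2 < 0:
        p1 = g1 - (g1 @ g2) / (g2 @ g2) * g2   # drop g1's conflicting component
        p2 = g2 - (g2 @ g1) / (g1 @ g1) * g1   # and symmetrically for g2
    return p1 + p2

# Conflicting case: the combined update no longer opposes either task.
g_a = np.array([1.0, 0.0])
g_b = np.array([-1.0, 1.0])
g = pcgrad(g_a, g_b)
print(g, g @ g_a >= 0, g @ g_b >= 0)
```

When the gradients do not conflict, the function degenerates to their plain sum, so the projection only intervenes where the two objectives pull against each other.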

27 pages, 8177 KB  
Article
DINOv3-PEFT: A Dual-Branch Collaborative Network with Parameter-Efficient Fine-Tuning for Precise Road Segmentation in SAR Imagery
by Debao Chen, Wanlin Yang, Ye Yuan and Juntao Gu
Remote Sens. 2026, 18(7), 973; https://doi.org/10.3390/rs18070973 - 24 Mar 2026
Abstract
Extracting road networks from Synthetic Aperture Radar (SAR) data represents a core challenge in remote sensing scene analysis, particularly for applications in traffic monitoring and emergency management. The task is complicated by several inherent limitations: speckle noise degrades image quality, geometric distortions arise from the side-looking acquisition geometry, and roads often exhibit weak radiometric separation from surrounding terrain. Traditional processing pipelines and recent single-branch deep learning frameworks have shown insufficient performance when global contextual reasoning and fine-scale spatial detail must both be addressed. This work presents DINOv3-PEFT, a parameter-efficient dual-encoder network designed specifically for SAR road segmentation. The architecture employs two complementary processing streams tailored to SAR characteristics: one stream utilizes adapter-based fine-tuning applied to pre-trained DINOv3 weights (kept frozen), which captures long-distance spatial relationships crucial for maintaining network connectivity despite speckle corruption. The second stream, based on convolutional operations, focuses on extracting localized geometric features that preserve the narrow, elongated structure and sharp boundaries typical of road infrastructure. Feature fusion occurs through the Topological-Geometric Feature Integration (TGFI) Module, which synthesizes multi-scale representations hierarchically. This mechanism proves effective at bridging fragmented road segments and recovering geometric accuracy in scenarios with heavy shadow casting or signal interference. Performance evaluation on the GF-3 satellite dataset across four spatial resolutions (1 m, 3 m, 5 m, and 10 m) demonstrates the proposed method achieves an 82.61% F1-score, a 76.51% IoU, and a 98.08% overall accuracy, all averaged across the four resolutions. When benchmarked against six state-of-the-art methods, DINOv3-PEFT demonstrates substantial improvements in road class segmentation quality and topological connectivity preservation, supporting its robustness for operational SAR road mapping tasks. Full article

38 pages, 5379 KB  
Review
A Scoping Review of Automated Calving Front Detection in Satellite Images and Calving Front Position Datasets
by Wojciech Milczarek, Marek Sompolski, Michał Tympalski and Anna Kopeć
Remote Sens. 2026, 18(7), 969; https://doi.org/10.3390/rs18070969 - 24 Mar 2026
Abstract
Calving front position is a key indicator of glacier and ice-sheet dynamics and an important variable for assessing mass loss and sea-level rise. Rapid growth in satellite data availability and image analysis techniques has driven the development of numerous automated calving front detection algorithms; however, the methodological landscape remains fragmented. This scoping review aims to map the existing literature on automated calving front detection, characterize the types of algorithms and data sources used, and identify trends, gaps, and challenges in current approaches. A systematic search of major bibliographic databases and complementary sources was conducted to identify studies describing automated or semi-automated calving front detection from satellite imagery or derived datasets. Eligible studies included peer-reviewed articles and relevant grey literature using optical, synthetic aperture radar (SAR), or multi-sensor data. Data were charted using a predefined framework that captures the algorithmic approach, input data characteristics, spatial and temporal coverage, validation strategies, and reported performance metrics. The review identifies a wide range of methods, from early threshold- and edge-based techniques to recent machine learning and deep learning approaches, with a strong shift toward convolutional neural networks over the past few years. Despite methodological progress, validation practices and evaluation metrics remain heterogeneous, and standardized benchmark datasets are scarce. This scoping review provides a structured overview of the field and highlights priorities for future methodological development and benchmarking. Full article
(This article belongs to the Special Issue AI, Large Language Models, and Remote Sensing for Disaster Monitoring)

27 pages, 10703 KB  
Article
WE-KAN: SAR Image Rotated Object Detection Method Based on Wavelet Domain Feature Enhancement and KAN Prediction Head
by Mingchun Li, Yang Liu, Qiang Wang and Dali Chen
Sensors 2026, 26(7), 2011; https://doi.org/10.3390/s26072011 - 24 Mar 2026
Abstract
Synthetic aperture radar (SAR) imagery plays a vital role in critical applications such as military reconnaissance and disaster monitoring. These applications require high detection accuracy. Therefore, rotated object detection has gained increasing attention. By predicting an object's orientation angle, it offers advantages over horizontal bounding boxes, especially for elongated structures such as ships and bridges in SAR scenes. However, challenges such as speckle noise and complex backgrounds in SAR imagery still hinder high-precision detection. To address this, we propose WE-KAN, a novel rotated object detection framework based on wavelet features and Kolmogorov–Arnold network (KAN) prediction. First, we enhance the backbone by incorporating wavelet domain features from SAR grayscale images. The extracted wavelet domain features and image features are fused by a proposed attention module. Second, considering the sensitivity of angle prediction, we design an angle predictor based on KAN. This architecture provides a powerful and dedicated solution for accurate angle regression. Finally, for precise rotated bounding box regression, we employ a joint loss function combining a rotated intersection over union (RIoU) with a Gaussian distance loss function. These designs improve the model's robustness to noise and its perception of fine object structures. When evaluated on the large-scale public RSAR dataset, our method achieves an AP50 of 70.1 and an mAP of 35.9 under the same training schedule and backbone network, significantly outperforming existing baselines. This demonstrates the effectiveness and robustness of our method for dense, small, and highly oriented objects in complex SAR scenes. Full article
(This article belongs to the Section Sensing and Imaging)
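As a rough illustration of the wavelet-domain features this abstract describes, the sketch below computes a one-level 2D Haar decomposition of a grayscale image in plain NumPy. The function name `haar_dwt2` and the choice of the Haar basis are assumptions (the abstract does not specify the wavelet); in the full model, these sub-bands would be fused with backbone image features by the proposed attention module rather than used directly.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar wavelet decomposition of a grayscale image.

    Returns the approximation (LL) and detail (LH, HL, HH) sub-bands,
    each half the size of the input along both axes. Detail sub-bands
    respond to edges and fine texture, which is what makes wavelet
    features attractive for speckle-corrupted SAR imagery.
    """
    img = np.asarray(img, dtype=float)
    # Pairwise averages (low-pass) and differences (high-pass) along rows.
    lo_r = (img[0::2, :] + img[1::2, :]) / 2.0
    hi_r = (img[0::2, :] - img[1::2, :]) / 2.0
    # Repeat along columns to obtain the four sub-bands.
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

For a constant (featureless) image all three detail sub-bands are zero, so any nonzero detail energy signals local structure.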

22 pages, 14276 KB  
Article
DualFOD: A Dual-Modality Deep Learning Framework for UAS-Based Foreign Object Debris Detection Using Thermal and RGB Imagery
by Owais Ahmed, Caleb S. Caldwell and Adeel Khalid
Drones 2026, 10(3), 225; https://doi.org/10.3390/drones10030225 - 23 Mar 2026
Abstract
Foreign Object Debris (FOD) poses critical risks to aircraft during takeoff and landing, resulting in billions of dollars in losses annually due to infrastructure damage and flight delays. Advancements in automated inspection technologies have enabled the use of Unmanned Aerial Systems (UAS) combined with Artificial Intelligence (AI) for rapid FOD identification. While prior research has extensively evaluated optical sensors such as RGB imaging and radar, limited work has investigated the potential of thermal imaging for improved FOD visibility under challenging environmental conditions. This study proposes DualFOD, a dual-modality detection framework that integrates a supervised YOLO12-based RGB detector with an unsupervised thermal anomaly extraction pipeline for identifying debris on runway surfaces. A decision-level fusion algorithm combines detections from both branches using spatial proximity matching to produce a unified FOD inventory. The RGB branch achieves a precision of 0.954 and mAP@0.5 of 0.890 on the held-out test set. Cross-site validation at the Cobb County Sport Aviation Complex demonstrates that thermal detection recovers debris missed by RGB at higher altitudes, with the fused output consistently outperforming either single-modality branch. This research contributes toward scalable autonomous FOD monitoring that enhances operational safety in aviation environments. Full article
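The decision-level fusion step described above can be sketched in a few lines: detections from the RGB and thermal branches are matched by center distance, and unmatched detections from either branch are kept so the fused inventory never drops an object seen by only one modality. This is a minimal illustration under assumed conventions (detections as `(x, y, label)` center tuples, a pixel threshold `max_dist`); the paper's actual matching criterion and data format may differ.

```python
from math import hypot

def fuse_detections(rgb_dets, thermal_dets, max_dist=25.0):
    """Decision-level fusion by spatial proximity matching.

    Each detection is an (x, y, label) tuple giving a box center and
    class label. An RGB and a thermal detection whose centers lie
    within max_dist pixels are merged into one "rgb+thermal" entry;
    everything unmatched is passed through with its source tag.
    """
    fused, used = [], set()
    for rx, ry, rlabel in rgb_dets:
        match = None
        for j, (tx, ty, _) in enumerate(thermal_dets):
            if j not in used and hypot(rx - tx, ry - ty) <= max_dist:
                match = j
                break
        if match is not None:
            used.add(match)
            fused.append((rx, ry, rlabel, "rgb+thermal"))
        else:
            fused.append((rx, ry, rlabel, "rgb"))
    # Thermal-only detections: debris the RGB branch missed.
    for j, (tx, ty, tlabel) in enumerate(thermal_dets):
        if j not in used:
            fused.append((tx, ty, tlabel, "thermal"))
    return fused
```

Keeping the thermal-only leftovers is what lets the fused output recover debris missed by RGB at higher altitudes, as the abstract reports.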

28 pages, 8596 KB  
Article
Synergistic Cross-Level Multimodal Representation of Radar Echoes for Maritime Target Detection
by Junfang Wang, Yunhua Wang, Jianbo Cui and Yanmin Zhang
J. Mar. Sci. Eng. 2026, 14(6), 580; https://doi.org/10.3390/jmse14060580 - 20 Mar 2026
Abstract
To address the challenge of detecting weak targets with small radar cross-sections (RCS), this work explores an integrated framework that leverages cross-level multimodal fusion of radar echoes. The method captures the target's motion properties via the Doppler spectrum and phase sequences (direct physical level), and introduces the Gramian Angular Field (GAF) to map the echo amplitude sequence into two-dimensional (2D) structured images, thereby revealing the dynamic evolution characteristics of echo energy (abstract representation level). This approach integrates direct physical attributes and abstract system-evolution features within a unified representation. To accommodate the structural differences among modalities, a heterogeneous branch processing network is designed: a Transformer captures long-range dependencies in one-dimensional (1D) sequences, while ResNet18 extracts spatial texture features from the 2D images. A self-attention mechanism is further introduced to achieve adaptive fusion of the multimodal data. Experimental results on the IPIX dataset suggest that this cross-level strategy improves detection performance across various scenarios in complex marine environments. Full article
(This article belongs to the Section Ocean Engineering)
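The GAF mapping used above is a standard transform: the 1D amplitude sequence is rescaled to [-1, 1], each sample is encoded as a polar angle, and the pairwise cosine sums form a 2D image. The sketch below shows the summation variant (GASF) in NumPy; whether the paper uses the summation or difference field, and its normalization details, are not stated in the abstract, so treat this as an assumed minimal version.

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian Angular Summation Field of a 1D amplitude sequence.

    Steps: rescale to [-1, 1], encode each sample as an angle
    phi = arccos(x_scaled), then build the symmetric 2D image
    G[i, j] = cos(phi_i + phi_j), whose texture reflects how the
    echo energy evolves over time.
    """
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    x_scaled = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

gaf = gramian_angular_field([0.1, 0.5, 0.9, 0.3])
# gaf is a 4x4 symmetric matrix with entries in [-1, 1].
```

An N-sample echo sequence thus becomes an N x N image, which is what allows a 2D CNN such as ResNet18 to extract spatial texture features from it.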
