Search Results (4,708)

Search Parameters:
Keywords = imaging radar

16 pages, 3847 KiB  
Article
Water Body Extraction Methods for SAR Images Fusing Sentinel-1 Dual-Polarized Water Index and Random Forest
by Min Zhai, Huayu Shen, Qihang Cao, Xuanhao Ding and Mingzhen Xin
Sensors 2025, 25(15), 4868; https://doi.org/10.3390/s25154868 - 7 Aug 2025
Abstract
Synthetic Aperture Radar (SAR) operates day and night in all weather conditions; because it is unaffected by rain or cloud cover, it overcomes the limitations of optical remote sensing and provides irreplaceable technical support for efficient water body extraction. To address the low accuracy and unstable results obtained when extracting water bodies from Sentinel-1 SAR images with a single method, a water body extraction method fusing the Sentinel-1 dual-polarized water index and random forest is proposed. The method improves extraction accuracy by integrating the results of two different algorithms, reducing the biases associated with single-method water body extraction. Taking Dalu Lake, Yinfu Reservoir, and Huashan Reservoir as the study areas, water body information was extracted from SAR images using the dual-polarized water index, the random forest method, and the fusion method. Taking the normalized difference water index derived from Sentinel-2 optical images as a reference, the accuracy of each method was quantitatively evaluated. The experimental results show that, compared with the dual-polarized water index and the random forest method, the fusion method increased overall water body extraction accuracy and Kappa coefficients by an average of 3.9% and 8.2%, respectively, in the Dalu Lake experimental area; by 1.8% and 3.5% in the Yinfu Reservoir experimental area; and by 4.1% and 8.1% in the Huashan Reservoir experimental area. The fusion of the dual-polarized water index and random forest therefore effectively improves the accuracy and reliability of water body extraction from SAR images.
(This article belongs to the Section Radar Sensors)
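The decision-level fusion described in this abstract can be sketched in a few lines. The index formula and the intersection rule below are illustrative assumptions, not the paper's exact algorithm: SDWI = ln(10 · VV · VH) is one published form of a Sentinel-1 dual-polarized water index, and the sketch marks a pixel as water only when the index threshold and a random forest trained on per-pixel backscatter agree.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sdwi(vv, vh):
    # One published form of the Sentinel-1 dual-polarized water index:
    # SDWI = ln(10 * sigma0_VV * sigma0_VH); water pixels score lower here
    # because open water backscatters weakly in both polarizations.
    return np.log(10.0 * vv * vh + 1e-12)

def fuse_masks(index_mask, rf_mask):
    # Decision-level fusion: a pixel counts as water only when both
    # detectors agree, which suppresses single-method false positives.
    return index_mask & rf_mask

rng = np.random.default_rng(0)
# Synthetic scene: bottom half is water (low backscatter), top half land.
vv = rng.uniform(0.001, 0.01, (64, 64))
vh = rng.uniform(0.001, 0.01, (64, 64))
vv[:32] += 0.5
vh[:32] += 0.2

idx_mask = sdwi(vv, vh) < np.median(sdwi(vv, vh))

# Random forest on per-pixel [VV, VH] features with training labels.
X = np.stack([vv.ravel(), vh.ravel()], axis=1)
y = np.r_[np.zeros(32 * 64, int), np.ones(32 * 64, int)]  # 0=land, 1=water
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
rf_mask = rf.predict(X).reshape(64, 64).astype(bool)

water = fuse_masks(idx_mask, rf_mask)
```

In practice the two masks would come from independently tuned pipelines; the intersection rule is the simplest of several possible fusion strategies (majority voting or confidence weighting are alternatives).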

27 pages, 8913 KiB  
Article
Laser Radar and Micro-Light Polarization Image Matching and Fusion Research
by Jianling Yin, Gang Li, Bing Zhou and Leilei Cheng
Electronics 2025, 14(15), 3136; https://doi.org/10.3390/electronics14153136 - 6 Aug 2025
Abstract
To address the blind spots that transparent media such as glass create in LiDAR point cloud data under low-illumination conditions, a new method is proposed that fuses a micro-light polarization image with a laser LiDAR point cloud to realize covert target reconnaissance, identification, and ranging, and the corresponding system is constructed. Building on the extraction of pixel coordinates from the 3D LiDAR point cloud, the method adds the polarization degree and polarization angle of the micro-light polarization image, as well as the reflective intensity of each LiDAR point. The mapping matrix from the radar point cloud to pixel coordinates is made to contain depth offset information and to fit better, thus optimizing the 3D point cloud converted from the micro-light polarization image. On this basis, algorithms such as 3D point cloud fusion and pseudo-color mapping further optimize the matching and fusion of the micro-light polarization image and the radar point cloud, realizing the alignment and fusion of the 2D micro-light polarization image and the 3D LiDAR point cloud. The experimental results show that the alignment rate between the 2D micro-light polarization image and the 3D LiDAR point cloud reaches 74.82%; the system can effectively detect targets hidden behind glass under low-illumination conditions and fill the blind area of LiDAR point cloud data acquisition. This study verifies the feasibility and advantages of "polarization + LiDAR" fusion for low-light glass-scene reconnaissance and provides a new technological means of covert target detection in complex environments.
(This article belongs to the Special Issue Image and Signal Processing Techniques and Applications)
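The core of aligning a 2D image with a 3D point cloud is the mapping from LiDAR points to pixel coordinates. The paper augments this mapping with polarization and depth-offset terms; the sketch below shows only the baseline geometry, a standard pinhole projection with assumed intrinsics `K` and extrinsics `[R | t]` (all values hypothetical).

```python
import numpy as np

def project_points(points_xyz, K, R, t):
    # Map 3D LiDAR points into 2D pixel coordinates of the polarization
    # camera: rigid transform into the camera frame, then a pinhole
    # perspective projection with intrinsic matrix K.
    cam = points_xyz @ R.T + t          # LiDAR frame -> camera frame
    uvw = cam @ K.T                     # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]     # divide by depth -> pixels

# Hypothetical calibration: 800 px focal length, 640x480 principal point.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

pts = np.array([[0.0, 0.0, 5.0],        # on the optical axis, 5 m away
                [1.0, 0.5, 5.0]])
uv = project_points(pts, K, R, t)
```

A point on the optical axis lands at the principal point (320, 240); the second point is offset by its lateral displacement scaled by focal length over depth. The paper's refinement adds per-point depth-offset corrections to this matrix, which the sketch omits.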

29 pages, 12050 KiB  
Article
PolSAR-SFCGN: An End-to-End PolSAR Superpixel Fully Convolutional Generation Network
by Mengxuan Zhang, Jingyuan Shi, Long Liu, Wenbo Zhang, Jie Feng, Jin Zhu and Boce Chu
Remote Sens. 2025, 17(15), 2723; https://doi.org/10.3390/rs17152723 - 6 Aug 2025
Abstract
Polarimetric Synthetic Aperture Radar (PolSAR) image classification is one of the most important applications in remote sensing. Effective superpixel generation can improve the efficiency of the subsequent classification task and restrain the influence of speckle noise to an extent. Most classical PolSAR superpixel generation approaches rely on manually extracted features, or even only on pseudocolor images; they do not make full use of polarimetric information and do not necessarily produce good superpixels. Deep learning methods can extract effective deep features, but they are difficult to combine with superpixel generation in a truly end-to-end fashion. To address these issues, this study proposes an end-to-end fully convolutional superpixel generation network for PolSAR images. It integrates the extraction of polarimetric features and the generation of PolSAR superpixels into one step: superpixels are generated from deep polarimetric features without a traditional clustering process, effectively enhancing both the quality and the efficiency of superpixel generation. Experimental results on various PolSAR datasets show that the proposed method achieves impressive superpixel segmentation by fitting the real boundaries of different types of ground objects effectively and efficiently, and it achieves excellent classification performance when connected to a very simple classification network, which helps improve the efficiency of subsequent PolSAR image classification tasks.
20 pages, 7088 KiB  
Article
SAR Images Despeckling Using Subaperture Decomposition and Non-Local Low-Rank Tensor Approximation
by Xinwei An, Hongcheng Zeng, Zhaohong Li, Wei Yang, Wei Xiong, Yamin Wang and Yanfang Liu
Remote Sens. 2025, 17(15), 2716; https://doi.org/10.3390/rs17152716 - 6 Aug 2025
Abstract
Synthetic aperture radar (SAR) images suffer from speckle noise due to their imaging mechanism, which deteriorates image interpretability and hinders subsequent tasks like target detection and recognition. Traditional denoising methods fall short of the demands for high-quality SAR image processing, and deep learning approaches trained on synthetic datasets exhibit poor generalization because noise-free real SAR images are unattainable. To solve this problem and improve the quality of SAR images, a speckle noise suppression method based on subaperture decomposition and non-local low-rank tensor approximation is proposed. Subaperture decomposition yields azimuth-frame subimages with high global structural similarity, which are modeled as low-rank and formed into a 3D tensor. The tensor is decomposed to derive a low-dimensional orthogonal basis and low-rank representation, followed by non-local denoising and iterative regularization in the low-rank subspace for data reconstruction. Experiments on simulated and real SAR images demonstrate that the proposed method outperforms state-of-the-art techniques in speckle suppression, significantly improving SAR image quality.
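The key modeling step is that subaperture images of the same scene are nearly identical, so stacking them gives an approximately low-rank matrix while speckle spreads over all singular components. The sketch below is a plain truncated-SVD stand-in for the paper's non-local low-rank tensor approximation, on synthetic data.

```python
import numpy as np

def lowrank_denoise(stack, rank):
    # Stack the K azimuth subaperture images (K, H, W) as rows of a
    # (K, H*W) matrix: the shared scene makes the matrix approximately
    # low-rank, while noise spreads over all singular components, so
    # truncating the SVD keeps structure and discards most of the noise.
    K, H, W = stack.shape
    U, s, Vt = np.linalg.svd(stack.reshape(K, H * W), full_matrices=False)
    s[rank:] = 0.0                      # keep only the leading subspace
    return (U @ np.diag(s) @ Vt).reshape(K, H, W)

rng = np.random.default_rng(1)
scene = rng.random((8, 8))                           # shared structure
stack = scene[None] + 0.3 * rng.standard_normal((16, 8, 8))  # 16 noisy views
den = lowrank_denoise(stack, rank=1)
```

The paper additionally applies non-local patch grouping and iterative regularization inside the low-rank subspace; the truncated SVD here only illustrates why the stacking step suppresses noise at all.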

27 pages, 14923 KiB  
Article
Multi-Sensor Flood Mapping in Urban and Agricultural Landscapes of the Netherlands Using SAR and Optical Data with Random Forest Classifier
by Omer Gokberk Narin, Aliihsan Sekertekin, Caglar Bayik, Filiz Bektas Balcik, Mahmut Arıkan, Fusun Balik Sanli and Saygin Abdikan
Remote Sens. 2025, 17(15), 2712; https://doi.org/10.3390/rs17152712 - 5 Aug 2025
Abstract
Floods are among the most harmful natural disasters, and climate change has made them more dangerous to urban structures and agricultural fields. This research presents a comprehensive flood mapping approach that combines multi-sensor satellite data with a machine learning method to evaluate the July 2021 flood in the Netherlands. Twenty-five feature scenarios were developed from combinations of Sentinel-1, Landsat-8, and Radarsat-2 imagery, using backscattering coefficients together with the optical Normalized Difference Water Index (NDWI), Hue, Saturation, and Value (HSV) images, and Synthetic Aperture Radar (SAR)-derived Grey Level Co-occurrence Matrix (GLCM) texture features. The Random Forest (RF) classifier was optimized before being applied to two flood-prone regions: Zutphen's urban area and Heijen's agricultural land. The multi-sensor fusion scenarios (S18, S20, and S25) achieved the highest classification performance, with overall accuracy reaching 96.4% (Kappa = 0.906–0.949) in Zutphen and 87.5% (Kappa = 0.754–0.833) in Heijen. Flood-class F1 scores across scenarios varied from 0.742 to 0.969 in Zutphen and from 0.626 to 0.969 in Heijen. The addition of SAR texture metrics enhanced flood boundary identification in both urban and agricultural settings. Radarsat-2 provided limited benefit, since the freely available Sentinel-1 and Landsat-8 data proved more effective. This study demonstrates that combining SAR and optical features with texture information creates a powerful and scalable flood mapping system, and that RF classification performs well in diverse landscape settings.
(This article belongs to the Special Issue Remote Sensing Applications in Flood Forecasting and Monitoring)
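Of the features listed above, the GLCM texture metrics are the least self-explanatory. A minimal sketch, computed in plain numpy rather than with an image-processing library: quantize the image, count co-occurring grey-level pairs for one pixel offset, and derive the "contrast" statistic, one of the standard GLCM features that would be stacked with backscatter and NDWI bands before RF classification.

```python
import numpy as np

def glcm_contrast(img, levels=8):
    # Grey Level Co-occurrence Matrix for the horizontal (0, 1) offset,
    # followed by the 'contrast' statistic sum_ij (i - j)^2 * p(i, j).
    # Smooth regions (e.g. open water) give low contrast; cluttered
    # built-up areas give high contrast.
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantize
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                 # count co-occurring pairs
    p = glcm / glcm.sum()               # normalize to probabilities
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

Libraries such as scikit-image provide the same computation (with multiple offsets, angles, and statistics) ready-made; this hand-rolled version only shows what the feature measures.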

25 pages, 2418 KiB  
Review
Contactless Vital Sign Monitoring: A Review Towards Multi-Modal Multi-Task Approaches
by Ahmad Hassanpour and Bian Yang
Sensors 2025, 25(15), 4792; https://doi.org/10.3390/s25154792 - 4 Aug 2025
Abstract
Contactless vital sign monitoring has emerged as a transformative healthcare technology, enabling the assessment of vital signs without physical contact with the human body. This paper comprehensively reviews the rapidly evolving landscape of this field, with particular emphasis on multi-modal sensing approaches and multi-task learning paradigms. We systematically categorize and analyze existing technologies based on sensing modalities (vision-based, radar-based, thermal imaging, and ambient sensing), integration strategies, and application domains. The paper examines how artificial intelligence has revolutionized this domain, transitioning from early single-modality, single-parameter approaches to sophisticated systems that combine complementary sensing technologies and simultaneously extract multiple vital sign parameters. We discuss the theoretical foundations and practical implementations of multi-modal fusion, analyzing signal-level, feature-level, decision-level, and deep learning approaches to sensor integration. Similarly, we explore multi-task learning frameworks that leverage the inherent relationships between vital sign parameters to enhance measurement accuracy and efficiency. The review also critically addresses persisting technical challenges, clinical limitations, and ethical considerations, including environmental robustness, cross-subject variability, sensor fusion complexities, and privacy concerns. Finally, we outline promising future directions, from emerging sensing technologies and advanced fusion architectures to novel application domains and privacy-preserving methodologies. This review provides a holistic perspective on contactless vital sign monitoring, serving as a reference for researchers and practitioners in this rapidly advancing field.
(This article belongs to the Section Biomedical Sensors)

17 pages, 12127 KiB  
Article
Shoreline Response to Hurricane Otis and Flooding Impact from Hurricane John in Acapulco, Mexico
by Luis Valderrama-Landeros, Iliana Pérez-Espinosa, Edgar Villeda-Chávez, Rafael Alarcón-Medina and Francisco Flores-de-Santiago
Coasts 2025, 5(3), 28; https://doi.org/10.3390/coasts5030028 - 4 Aug 2025
Abstract
The city of Acapulco was impacted by two near-consecutive hurricanes. On 25 October 2023, Hurricane Otis made landfall as a Category 5 storm, the highest on the Saffir–Simpson scale, causing extensive coastal destruction through extreme winds and waves. Nearly one year later (23 September 2024), Hurricane John, a Category 2 storm, caused severe flooding despite its lower intensity, primarily due to its unusual trajectory and prolonged rainfall. Digital shoreline analysis of PlanetScope images (captured one month before and after Hurricane Otis) revealed that the southern coast of Acapulco, specifically Zona Diamante, where the major seafront hotels are located, experienced substantial shoreline erosion (94 ha) and damage. In the northwestern section of the study area, the Coyuca Bar underwent the most dramatic geomorphological change in surface area: the bar disappeared completely on 26 October, resulting in a shoreline retreat of 85 m immediately after the passage of Hurricane Otis. Sentinel-1 Synthetic Aperture Radar (SAR) data showed that Hurricane John inundated 2385 ha, an area four times greater than the flooding caused by Hurricane Otis (567 ha). The retrofitted QGIS methodology demonstrated high reliability when compared with the limited in situ local reports. Given the increasing frequency of intense hurricanes, these methods and findings will be relevant for monitoring and managing other coastal communities affected by severe climate events.

48 pages, 16562 KiB  
Article
Dense Matching with Low Computational Complexity for Disparity Estimation in the Radargrammetric Approach of SAR Intensity Images
by Hamid Jannati, Mohammad Javad Valadan Zoej, Ebrahim Ghaderpour and Paolo Mazzanti
Remote Sens. 2025, 17(15), 2693; https://doi.org/10.3390/rs17152693 - 3 Aug 2025
Abstract
Synthetic Aperture Radar (SAR) images and optical imagery have high potential for extracting digital elevation models (DEMs). The two main approaches for deriving elevation models from SAR data are interferometry (InSAR) and radargrammetry. Adapted from photogrammetric principles, radargrammetry relies on disparity model estimation as its core component. Matching strategies in radargrammetry typically follow local, global, or semi-global methodologies. Local methods, while having higher accuracy, especially in low-texture SAR images, require larger kernel sizes, leading to quadratic computational complexity. Conversely, global and semi-global models produce more consistent and higher-quality disparity maps but are computationally more intensive than local methods with small kernels and require more memory (RAM). In this study, inspired by the advantages of local matching algorithms, a computationally efficient and novel model is proposed for extracting corresponding pixels in SAR-intensity stereo images. To enhance accuracy, the proposed two-stage algorithm operates without an image pyramid structure. Notably, unlike traditional local and global models, the computational complexity of the proposed approach remains stable as the input size or kernel dimensions increase while memory consumption stays low. Compared to a pyramid-based local normalized cross-correlation (NCC) algorithm and adaptive semi-global matching (SGM) models, the proposed method maintains good accuracy comparable to adaptive SGM while reducing processing time by up to 50% relative to pyramid SGM and achieving a 35-fold speedup over the local NCC algorithm with an optimal kernel size. Validated on a Sentinel-1 stereo pair with a 10 m ground-pixel size, the proposed algorithm yields a DEM with an average accuracy of 34.1 m.
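The NCC baseline the abstract compares against can be sketched compactly. This is the generic local matcher (not the paper's proposed two-stage algorithm): slide a kernel from the left image along the same row of the right image and keep the disparity with the highest normalized cross-correlation. All sizes and the synthetic shift below are illustrative.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation of two equally sized patches.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_disparity(left, right, row, col, k, max_d):
    # Local matching: compare the (2k+1)x(2k+1) kernel around (row, col)
    # in the left image against shifted positions on the same row of the
    # right image; return the disparity with the highest NCC score.
    ref = left[row - k:row + k + 1, col - k:col + k + 1]
    scores = [ncc(ref, right[row - k:row + k + 1, col - d - k:col - d + k + 1])
              for d in range(max_d + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
left = rng.random((32, 32))
right = np.roll(left, -3, axis=1)       # right view = left shifted by 3 px
d = match_disparity(left, right, row=16, col=16, k=3, max_d=8)
```

The quadratic cost the abstract mentions comes from re-scanning the full kernel at every candidate disparity; the paper's contribution is precisely avoiding that growth as kernel size increases.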
24 pages, 29785 KiB  
Article
Multi-Scale Feature Extraction with 3D Complex-Valued Network for PolSAR Image Classification
by Nana Jiang, Wenbo Zhao, Jiao Guo, Qiang Zhao and Jubo Zhu
Remote Sens. 2025, 17(15), 2663; https://doi.org/10.3390/rs17152663 - 1 Aug 2025
Abstract
Compared to traditional real-valued neural networks, which process only amplitude information, complex-valued neural networks handle both amplitude and phase information, leading to superior performance in polarimetric synthetic aperture radar (PolSAR) image classification tasks. This paper proposes a multi-scale feature extraction (MSFE) method based on a 3D complex-valued network to improve classification accuracy by fully leveraging multi-scale features, including phase information. We first designed a complex-valued three-dimensional network framework combining complex-valued 3D convolution (CV-3DConv) with complex-valued squeeze-and-excitation (CV-SE) modules. This framework is capable of simultaneously capturing spatial and polarimetric features, including both amplitude and phase information, from PolSAR images. Furthermore, to address robustness degradation from limited labeled samples, we introduced a multi-scale learning strategy that jointly models global and local features. Specifically, global features extract overall semantic information, while local features help the network capture region-specific semantics. This strategy enhances information utilization by integrating multi-scale receptive fields with complementary feature advantages. Extensive experiments on four benchmark datasets demonstrated that the proposed method outperforms various comparison methods, maintaining high classification accuracy across different sampling rates, thus validating its effectiveness and robustness.

15 pages, 4258 KiB  
Article
Complex-Scene SAR Aircraft Recognition Combining Attention Mechanism and Inner Convolution Operator
by Wansi Liu, Huan Wang, Jiapeng Duan, Lixiang Cao, Teng Feng and Xiaomin Tian
Sensors 2025, 25(15), 4749; https://doi.org/10.3390/s25154749 - 1 Aug 2025
Abstract
Synthetic aperture radar (SAR), as an active microwave imaging system, is capable of all-weather, all-time observation. To address the challenges that aircraft detection in SAR images faces from complex background interference, caused by the continuous scattering of airport buildings, and from the demand for real-time processing, this paper proposes YOLOv7-MTI, a recognition model that combines an attention mechanism with involution by integrating the Multi-TASP-Conv network (MTCN) module and involution operators. The MTCN module effectively extracts low-level semantic and spatial information, using a shared lightweight attention gate to achieve cross-dimensional interaction between channels and space with very few parameters, capturing dependencies among multiple dimensions and improving feature representation. Involution lets the model adaptively adjust the weights of spatial positions through dynamically parameterized convolution kernels, strengthening the discrete strong scattering points specific to aircraft while suppressing the continuous scattering of the background, thereby alleviating the interference of complex backgrounds. Experiments on the SAR-AIRcraft-1.0 dataset, which covers seven categories (A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and other), show that the mAP and mRecall of YOLOv7-MTI reach 93.51% and 96.45%, respectively, outperforming Faster R-CNN, SSD, YOLOv5, YOLOv7, and YOLOv8. Compared with the baseline YOLOv7, mAP improves by 1.47%, mRecall by 1.64%, and FPS by 8.27%, achieving an effective balance between accuracy and speed and providing research ideas for SAR aircraft recognition.
(This article belongs to the Section Radar Sensors)

31 pages, 18320 KiB  
Article
Penetrating Radar on Unmanned Aerial Vehicle for the Inspection of Civilian Infrastructure: System Design, Modeling, and Analysis
by Jorge Luis Alva Alarcon, Yan Rockee Zhang, Hernan Suarez, Anas Amaireh and Kegan Reynolds
Aerospace 2025, 12(8), 686; https://doi.org/10.3390/aerospace12080686 - 31 Jul 2025
Abstract
The increasing demand for noninvasive inspection (NII) of complex civil infrastructures requires overcoming the limitations of traditional ground-penetrating radar (GPR) systems in addressing diverse and large-scale applications. The solution proposed in this study focuses on an initial design that integrates a low-SWaP (Size, Weight, and Power) ultra-wideband (UWB) impulse radar with realistic electromagnetic modeling for deployment on unmanned aerial vehicles (UAVs). The system incorporates ultra-realistic antenna and propagation models, utilizing Finite Difference Time Domain (FDTD) solvers and multilayered media, to replicate realistic airborne sensing geometries. Verification and calibration are performed by comparing simulation outputs with laboratory measurements using varied material samples and target models. Custom signal processing algorithms are developed to extract meaningful features from complex electromagnetic environments and support anomaly detection. Additionally, machine learning (ML) techniques are trained on synthetic data to automate the identification of structural characteristics. The results demonstrate accurate agreement between simulations and measurements, as well as the potential for deploying this design in flight tests within realistic environments featuring complex electromagnetic interference.
(This article belongs to the Section Aeronautics)

29 pages, 3731 KiB  
Article
An Automated Method for Identifying Voids and Severe Loosening in GPR Images
by Ze Chai, Zicheng Wang, Zeshan Xu, Ziyu Feng and Yafeng Zhao
J. Imaging 2025, 11(8), 255; https://doi.org/10.3390/jimaging11080255 - 30 Jul 2025
Abstract
This paper proposes a novel automatic recognition method for distinguishing voids and severe loosening in road structures based on features of ground-penetrating radar (GPR) B-scan images. By analyzing differences in image texture, the intensity and clarity of top reflection interfaces, and the regularity of internal waveforms, a set of discriminative features is constructed. Based on these features, we develop the FKS-GPR dataset, a high-quality, manually annotated GPR dataset collected from real road environments, covering diverse and complex background conditions. Compared to datasets based on simulations, FKS-GPR offers higher practical relevance. An improved ACF-YOLO network is then designed for automatic detection, and the experimental results show that the proposed method achieves superior accuracy and robustness, validating its effectiveness and engineering applicability.
(This article belongs to the Section Image and Video Processing)

25 pages, 6401 KiB  
Article
Efficient Sampling Schemes for 3D Imaging of Radar Target Scattering Based on Synchronized Linear Scanning and Rotational Motion
by Changyu Lou, Jingcheng Zhao, Xingli Wu, Yuchen Zhang, Zongkai Yang, Jiahui Li and Jungang Miao
Remote Sens. 2025, 17(15), 2636; https://doi.org/10.3390/rs17152636 - 29 Jul 2025
Abstract
Three-dimensional (3D) radar imaging is essential for target detection and for measuring scattering characteristics. Cylindrical scanning, a prevalent spatial sampling technique, offers engineering advantages and has been widely used to assess the radar stealth of large aircraft. Traditional cylindrical scanning generally relies on densely sampled full-coverage techniques, leading to an excessive number of sampling points and reduced imaging efficiency, which constrains its use in rapid detection applications. This work presents an efficient 3D sampling strategy that integrates vertical linear scanning with horizontal rotational motion to overcome these restrictions. A joint angle–space sampling model is developed, and geometric constraints are applied to optimize the scanning trajectory. The experimental results demonstrate that, compared with conventional techniques, the proposed method achieves a 94% reduction in scanning duration while maintaining a peak sidelobe level ratio (PSLR) of 12 dB. Furthermore, this study shows that 3D imaging can be accomplished with a "V"-shaped trajectory alone, efficiently establishing the minimal feasible sampling aperture. This approach offers novel insights and theoretical backing for the development of high-efficiency, low-redundancy 3D radar imaging systems.
(This article belongs to the Special Issue Recent Advances in SAR: Signal Processing and Target Recognition)

24 pages, 8636 KiB  
Article
Oil Film Segmentation Method Using Marine Radar Based on Feature Fusion and Artificial Bee Colony Algorithm
by Jin Xu, Bo Xu, Xiaoguang Mou, Boxi Yao, Zekun Guo, Xiang Wang, Yuanyuan Huang, Sihan Qian, Min Cheng, Peng Liu and Jianning Wu
J. Mar. Sci. Eng. 2025, 13(8), 1453; https://doi.org/10.3390/jmse13081453 - 29 Jul 2025
Abstract
With the continuous development of the international strategic petroleum reserve system, the tonnage and number of oil tankers have been increasing. This trend has driven the expansion of offshore oil exploration and transportation, resulting in frequent ship oil spill incidents. These accidents have catastrophic impacts on the marine environment, posing a serious threat to economic development and ecological security; there is therefore an urgent need for efficient and reliable methods that detect oil spills in a timely manner and minimize potential losses. In response to this challenge, this study proposes a marine radar oil film segmentation method based on feature fusion and the artificial bee colony (ABC) algorithm. Initially, the raw experimental data are preprocessed to obtain denoised radar images. Grayscale adjustment and local contrast enhancement are then applied to the denoised images, and gray level co-occurrence matrix (GLCM) features and Tamura features are extracted from the contrast-enhanced results. The generalized least squares (GLS) method is employed to fuse the extracted texture features into a new feature fusion map. The optimal processing threshold is then determined with the direct bimodal-histogram method to delimit the effective wave regions. Finally, the ABC algorithm is used to segment the oil films. This method can provide data support for oil spill detection in marine radar images.
(This article belongs to the Section Ocean Engineering)
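The bimodal-histogram thresholding step exploits the fact that oil films and wave clutter form two separated grey-level populations. As a hedged stand-in for the paper's exact procedure, the sketch below uses Otsu's criterion, which picks the cut maximizing between-class variance of the two modes; the synthetic image and its mode locations are assumptions.

```python
import numpy as np

def bimodal_threshold(img, bins=64):
    # Otsu's criterion: choose the grey-level cut that maximizes the
    # between-class variance of the dark (oil film) and bright (wave
    # clutter) pixel populations.
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    mids = (edges[:-1] + edges[1:]) / 2
    w = np.cumsum(p)                    # probability mass of the dark class
    mu = np.cumsum(p * mids)            # cumulative mean of the dark class
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w - mu) ** 2 / (w * (1.0 - w))
    return float(mids[np.nanargmax(sigma_b)])

rng = np.random.default_rng(3)
# Synthetic bimodal image: dark oil-film pixels near 0.2, waves near 0.8.
img = np.clip(np.r_[rng.normal(0.2, 0.05, 2000),
                    rng.normal(0.8, 0.05, 2000)], 0.0, 1.0)
thr = bimodal_threshold(img)
```

For two well-separated symmetric modes the recovered threshold falls roughly midway between them, which is what the wave-region delimitation step needs.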

19 pages, 2089 KiB  
Article
Estimation of Soil Organic Carbon Content of Grassland in West Songnen Plain Using Machine Learning Algorithms and Sentinel-1/2 Data
by Haoming Li, Jingyao Xia, Yadi Yang, Yansu Bo and Xiaoyan Li
Agriculture 2025, 15(15), 1640; https://doi.org/10.3390/agriculture15151640 - 29 Jul 2025
Abstract
Based on multi-source data, including synthetic aperture radar (Sentinel-1, S1) and optical satellite images (Sentinel-2, S2), topographic data, and climate data, this study explored the performance and feasibility of different variable combinations for predicting soil organic carbon (SOC) using three machine learning models. The three models were built on 244 samples from the study area, with 70% of the samples used for training and 30% for testing. Nine experiments were conducted under three variable scenarios to select the optimal model, which was then used to produce high-precision predictions of SOC content. The results indicated that both S1 and S2 data are significant for SOC prediction, and multi-sensor data yielded more accurate results than single-sensor data. The RF model integrating S1, S2, topographic, and climate data achieved the highest prediction accuracy. In terms of variable importance, the S2 data contributed the most to SOC prediction (31.03%). SOC contents within the study region varied between 4.16 g/kg and 29.19 g/kg, showing a clear spatial trend of higher concentrations in the east than in the west. Overall, the proposed model performed strongly in estimating grassland SOC and offers valuable scientific guidance for grassland conservation in the western Songnen Plain.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
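The workflow above (244 samples, a 70/30 split, an RF model, and per-variable importances) maps directly onto a few lines of scikit-learn. Everything below is a synthetic stand-in: the four feature columns only mimic S1 backscatter, an S2-derived index, topography, and climate, and the linear SOC response is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 244 grassland samples; the columns are
# hypothetical proxies for S1, S2, topographic, and climate predictors.
rng = np.random.default_rng(42)
X = rng.random((244, 4))
soc = 4.0 + 25.0 * (0.4 * X[:, 1] + 0.3 * X[:, 0]
                    + 0.2 * X[:, 2] + 0.1 * X[:, 3])
soc += rng.normal(0.0, 0.5, 244)        # measurement noise, g/kg

# Same 70/30 split as the study, then fit and score a random forest.
X_tr, X_te, y_tr, y_te = train_test_split(X, soc, test_size=0.3,
                                          random_state=42)
rf = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_tr, y_tr)
r2 = rf.score(X_te, y_te)               # held-out R^2
importances = rf.feature_importances_   # per-variable contribution
```

The `feature_importances_` vector is the mechanism behind statements like "S2 data contributed 31.03%": each entry is that predictor's share of the impurity reduction across the forest.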
