Search Results (481)

Search Parameters:
Keywords = scattering feature extraction

22 pages, 6497 KB  
Article
Semantic Segmentation of High-Resolution Remote Sensing Images Based on RS3Mamba: An Investigation of the Extraction Algorithm for Rural Compound Utilization Status
by Xinyu Fang, Zhenbo Liu, Su’an Xie and Yunjian Ge
Remote Sens. 2025, 17(20), 3443; https://doi.org/10.3390/rs17203443 - 15 Oct 2025
Abstract
In this study, we utilize Gaofen-2 satellite remote sensing images to optimize and enhance the extraction of feature information from rural compounds, addressing key challenges in high-resolution remote sensing analysis: traditional methods struggle to effectively capture long-distance spatial dependencies for scattered rural compounds. To this end, we implement the RS3Mamba+ deep learning model, which introduces the Mamba state space model (SSM) into its auxiliary branch—leveraging Mamba’s sequence modeling advantage to efficiently capture long-range spatial correlations of rural compounds, a critical capability for analyzing sparse rural buildings. This Mamba-assisted branch, combined with multi-directional selective scanning (SS2D) and the enhanced STEM network framework (replacing a single 7 × 7 convolution with two-stage 3 × 3 convolutions to reduce information loss), works synergistically with a ResNet-based main branch for local feature extraction. We further introduce a multiscale attention feature fusion mechanism that optimizes feature extraction and fusion, enhances edge contour extraction accuracy in courtyards, and improves the recognition and differentiation of courtyards from regions with complex textures. The feature information of courtyard utilization status is finally extracted using empirical methods. A typical rural area in Weifang City, Shandong Province, is selected as the experimental sample area. Results show that the extraction accuracy reaches a mean intersection over union (mIoU) of 79.64% and a Kappa coefficient of 0.7889, improving the F1 score by at least 8.12% and mIoU by 4.83% compared with models such as DeepLabv3+ and Transformer. The algorithm is particularly effective at mitigating false alarms triggered by shadows and intricate textures, underscoring its potential as a tool for extracting rural vacancy rates. Full article
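The mIoU and Kappa figures reported above both derive from a class-confusion matrix. As a minimal sketch (not the paper's evaluation code), they can be computed like this:

```python
import numpy as np

def miou_and_kappa(cm):
    """Mean IoU and Cohen's Kappa from a square confusion matrix
    (rows = ground truth, columns = prediction)."""
    cm = np.asarray(cm, dtype=float)
    inter = np.diag(cm)                                  # per-class true positives
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter      # per-class union
    miou = float(np.mean(inter / union))
    n = cm.sum()
    p_obs = inter.sum() / n                              # observed agreement
    p_exp = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = float((p_obs - p_exp) / (1 - p_exp))
    return miou, kappa
```

A perfectly diagonal matrix gives mIoU = 1 and Kappa = 1; a uniform matrix gives Kappa = 0, since agreement equals chance.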

23 pages, 22294 KB  
Article
Persistent Scatterer Pixel Selection Method Based on Multi-Temporal Feature Extraction Network
by Zihan Hu, Mofan Li, Gen Li, Yifan Wang, Chuanxu Sun and Zehua Dong
Remote Sens. 2025, 17(19), 3319; https://doi.org/10.3390/rs17193319 - 27 Sep 2025
Viewed by 373
Abstract
Persistent scatterer (PS) pixel selection is crucial in the PS-InSAR technique, ensuring the quality and quantity of PS pixels for accurate deformation measurements. However, traditional methods like the amplitude dispersion index (ADI)-based method struggle to balance the quality and quantity of PS pixels. To adequately select high-quality PS pixels, and thus improve the deformation measurement performance of PS-InSAR, the multi-temporal feature extraction network (MFN) is constructed in this paper. The MFN combines the 3D U-Net and the convolutional long short-term memory (CLSTM) to achieve time-series analysis. Compared with traditional methods, the proposed MFN can fully extract the spatiotemporal characteristics of complex SAR images to improve PS pixel selection performance. The MFN was trained with datasets constructed by reliable PS pixels estimated by the ADI-based method with a low threshold using ∼350 time-series Sentinel-1A SAR images, which contain man-made objects, farmland, parkland, wood, desert, and waterbody areas. To test the validity of the MFN, a deformation measurement experiment was designed for Tongzhou District, Beijing, China with 38 SAR images obtained by Sentinel-1A. Moreover, the similar time-series interferometric pixel (STIP) index was introduced to evaluate the phase stability of selected PS pixels. The experimental results indicate a significant improvement in both the quality and quantity of selected PS pixels, as well as a higher deformation measurement accuracy, compared to the traditional ADI-based method. Full article
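The ADI baseline the paper compares against is simple: a pixel is kept as a PS candidate when the temporal standard deviation of its amplitude is small relative to its mean. A minimal sketch (the threshold value is a commonly quoted default, not taken from the paper):

```python
import numpy as np

def amplitude_dispersion(amp_stack):
    """ADI per pixel from a (time, height, width) stack of SAR amplitudes."""
    mu = amp_stack.mean(axis=0)
    sigma = amp_stack.std(axis=0)
    return sigma / mu

def select_ps(amp_stack, threshold=0.25):
    """Pixels with low ADI are treated as persistent scatterer candidates;
    0.25 is illustrative only."""
    return amplitude_dispersion(amp_stack) < threshold
```

Lowering the threshold raises PS quality but shrinks the selection — exactly the quality/quantity trade-off the MFN is built to escape.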

21 pages, 9052 KB  
Article
SAM–Attention Synergistic Enhancement: SAR Image Object Detection Method Based on Visual Large Model
by Yirong Yuan, Jie Yang, Lei Shi and Lingli Zhao
Remote Sens. 2025, 17(19), 3311; https://doi.org/10.3390/rs17193311 - 26 Sep 2025
Viewed by 526
Abstract
The object detection model for synthetic aperture radar (SAR) images needs to have strong generalization ability and more stable detection performance due to the complex scattering mechanism, high sensitivity of the orientation angle, and susceptibility to speckle noise. Visual large models possess strong generalization capabilities for natural image processing, but their application to SAR imagery remains relatively rare. This paper attempts to introduce a visual large model into the SAR object detection task, aiming to alleviate the problems of weak cross-domain generalization and poor adaptability to few-shot samples caused by the characteristics of SAR images in existing models. The proposed model comprises an image encoder, an attention module, and a detection decoder. The image encoder leverages the pre-trained Segment Anything Model (SAM) for effective feature extraction from SAR images. An Adaptive Channel Interactive Attention (ACIA) module is introduced to suppress SAR speckle noise. Further, a Dynamic Tandem Attention (DTA) mechanism is proposed in the decoder to integrate scale perception, spatial focusing, and task adaptation, while decoupling classification from detection for improved accuracy. Leveraging the strong representational and few-shot adaptation capabilities of large pre-trained models, this study evaluates their cross-domain and few-shot detection performance on SAR imagery. For cross-domain detection, the model was trained on AIR-SARShip-1.0 and tested on SSDD, achieving an mAP50 of 0.54. For few-shot detection on SAR-AIRcraft-1.0, using only 10% of the training samples, the model reached an mAP50 of 0.503. Full article
(This article belongs to the Special Issue Big Data Era: AI Technology for SAR and PolSAR Image)

24 pages, 6747 KB  
Article
YOLOv11-MSE: A Multi-Scale Dilated Attention-Enhanced Lightweight Network for Efficient Real-Time Underwater Target Detection
by Zhenfeng Ye, Xing Peng, Dingkang Li and Feng Shi
J. Mar. Sci. Eng. 2025, 13(10), 1843; https://doi.org/10.3390/jmse13101843 - 23 Sep 2025
Viewed by 534
Abstract
Underwater target detection is a critical technology for marine resource management and ecological protection, but its performance is often limited by complex underwater environments, including optical attenuation, scattering, and dense distributions of small targets. Existing methods have significant limitations in feature extraction efficiency, robustness in class-imbalanced scenarios, and computational complexity. To address these challenges, this study proposes a lightweight adaptive detection model, YOLOv11-MSE, which optimizes underwater detection performance through three core innovations. First, a multi-scale dilated attention (MSDA) mechanism is embedded into the backbone network to dynamically capture multi-scale contextual features while suppressing background noise. Second, a Slim-Neck architecture based on GSConv and VoV-GSCSPC modules is designed to achieve efficient feature fusion via hybrid convolution strategies, significantly reducing model complexity. Finally, an efficient multi-scale attention (EMA) module is introduced in the detection head to reinforce key feature representations and suppress environmental noise through cross-dimensional interactions. Experiments on the underwater detection dataset (UDD) demonstrate that YOLOv11-MSE outperforms the baseline model YOLOv11, achieving a 9.67% improvement in detection precision and a 3.45% increase in mean average precision (mAP50) while reducing computational complexity by 6.57%. Ablation studies further validate the synergistic optimization effects of each module, particularly in class-imbalanced scenarios where detection precision for rare categories (e.g., scallops) is significantly enhanced, with precision and mAP50 improving by 60.62% and 10.16%, respectively. This model provides an efficient solution for edge computing scenarios, such as underwater robots and ecological monitoring, through its lightweight design and high underwater target detection capability. Full article
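The multi-scale dilated attention (MSDA) described above builds on dilated convolution, which samples the input with gaps to widen the receptive field without adding weights. A minimal 1-D sketch of that underlying idea (not the paper's MSDA module):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1-D convolution (correlation form) with a dilated kernel."""
    span = (len(w) - 1) * dilation        # input distance the kernel covers
    return np.array([np.dot(x[i:i + span + 1:dilation], w)
                     for i in range(len(x) - span)])

def effective_kernel_size(k, dilation):
    """Receptive field of a k-tap kernel at a given dilation rate."""
    return k + (k - 1) * (dilation - 1)
```

Stacking parallel branches with dilations 1, 2, and 3 lets the same 3-tap kernel see 3, 5, and 7 samples respectively — the "multi-scale" part of such attention modules.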
(This article belongs to the Section Ocean Engineering)

31 pages, 8218 KB  
Article
Growth Stage-Specific Modeling of Chlorophyll Content in Korla Pear Leaves by Integrating Spectra and Vegetation Indices
by Mingyang Yu, Weifan Fan, Junkai Zeng, Yang Li, Lanfei Wang, Hao Wang and Jianping Bao
Agronomy 2025, 15(9), 2218; https://doi.org/10.3390/agronomy15092218 - 19 Sep 2025
Viewed by 329
Abstract
This study, leveraging near-infrared spectroscopy technology and integrating vegetation index analysis, aims to develop a hyperspectral imaging-based non-destructive inspection technique for swift monitoring of crop chlorophyll content by rapidly predicting leaf SPAD. To this end, a high-precision spectral prediction model was first established under laboratory conditions using ex situ lyophilized leaf samples. This model provides a core algorithmic foundation for future non-destructive field applications. A systematic study was conducted to develop prediction models for leaf SPAD values of Korla fragrant pear at different growth stages (fruit-setting, fruit-swelling, and maturity periods). This involved comparing various spectral preprocessing algorithms (AirPLS, Savitzky–Golay, multiplicative scatter correction, FD, etc.) and CARS feature selection for screening the optimal spectral feature bands. Subsequently, models were constructed using BP neural network and support vector regression algorithms. The results showed that leaf samples at different growth stages exhibited significant differences in their spectral features within the 5000–7000 cm−1 (effective features for predicting chlorophyll (SPAD)) and 7000–8000 cm−1 (moisture absorption valley) bands. The Savitzky–Golay+FD preprocessing algorithm (Savitzky–Golay smoothing combined with the first-order derivative (FD)) performed optimally in feature extraction. Growth-stage-specific models significantly outperformed whole-growth-period models, with the optimal models for the fruit-setting and fruit-swelling periods being FD-CARS-BP (coefficient of determination (R2) > 0.86), and the optimal model for the maturity period being Savitzky–Golay-FD+Savitzky–Golay-CARS-BP (R2 = 0.862). 
Furthermore, joint modeling of characteristic spectra and vegetation indices further improved prediction performance (R2 > 0.85, RMSE 2.5). This study presents a reliable method for non-destructive monitoring of chlorophyll content in Korla fragrant pears, offering significant value for nutrient management and stress early warning in precision agriculture. Full article
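The Savitzky–Golay-plus-first-derivative preprocessing the study found optimal is available directly in SciPy: `savgol_filter` with `deriv=1` smooths and differentiates in one pass. A sketch (window length and polynomial order here are illustrative, not the paper's settings):

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_fd(spectrum, window_length=7, polyorder=2):
    """Savitzky-Golay smoothing combined with the first derivative (FD)."""
    return savgol_filter(spectrum, window_length=window_length,
                         polyorder=polyorder, deriv=1, delta=1.0)
```

Because the filter reproduces any polynomial up to `polyorder` exactly, applying it to a straight-line spectrum returns the line's slope at every point.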
(This article belongs to the Section Precision and Digital Agriculture)

21 pages, 3287 KB  
Article
STFTransNet: A Transformer Based Spatial Temporal Fusion Network for Enhanced Multimodal Driver Inattention State Recognition System
by Minjun Kim and Gyuho Choi
Sensors 2025, 25(18), 5819; https://doi.org/10.3390/s25185819 - 18 Sep 2025
Viewed by 506
Abstract
Recently, studies on driver inattention state recognition as an advanced mobility application technology are being actively conducted to prevent traffic accidents caused by driver drowsiness and distraction. The driver inattention state recognition system is a technology that recognizes drowsiness and distraction using driver behavior, biosignals, and vehicle data characteristics. In existing driver drowsiness detection systems, wearable accessories cause partial occlusion of facial features, and light scattering from changes in internal and external lighting causes momentary image resolution degradation, making it difficult to recognize the driver’s condition. In this paper, we propose a transformer-based spatial temporal fusion network (STFTransNet) that fuses multi-modality information for improved driver inattention state recognition in images where the driver’s face is partially occluded by worn accessories and the instantaneous resolution is degraded by light scattering from lighting changes in the driving environment. The proposed STFTransNet consists of (i) a MediaPipe face-mesh-based facial landmark extraction process for facial feature extraction, (ii) an RCN-based two-stream cross-attention process for learning spatial features of driver face and body action images, (iii) a TCN-based temporal feature extraction process for learning temporal features of the extracted features, and (iv) an ensemble of spatial and temporal features and a classification process to recognize the final driver state. In experiments, the proposed STFTransNet achieved accuracy 4.56% higher than the existing VBFLLFA model on the NTHU-DDD public DB, 3.48% higher than the existing InceptionV3 + HRNN model on the StateFarm public DB, and 3.78% higher than the existing VBFLLFA model on the YawDD public DB. 
The proposed STFTransNet is designed as a two-stream network that takes the driver’s face and action images as input and mitigates the degradation in driver inattention state recognition performance caused by partial facial feature occlusion and light blur through spatial and temporal feature fusion. Full article
(This article belongs to the Special Issue Sensor-Based Behavioral Biometrics)

14 pages, 3320 KB  
Article
SFD-YOLO: A Multi-Angle Scattered Field-Based Optical Surface Defect Recognition Method
by Xuan Liu, Hao Sun, Jian Zhang and Chunyan Wang
Photonics 2025, 12(9), 929; https://doi.org/10.3390/photonics12090929 - 18 Sep 2025
Viewed by 591
Abstract
The surface quality of optical components plays a decisive role in advanced imaging, precision manufacturing, and high-power laser systems, where even minor defects can induce abnormal scattering and degrade system performance. Addressing the limitations of conventional single-view inspection methods, this study presents a panoramic multi-angle scattered light field acquisition approach integrated with deep learning-based recognition. A hemispherical synchronous imaging system is designed to capture complete scattered distributions from surface defects in a single exposure, ensuring both structural consistency and angular completeness of the measured data. To enhance the interpretation of complex scattering patterns, we develop a tailored lightweight network, SFD-YOLO, which incorporates the PSimam attention module for improved salient feature extraction and the Efficient_Mamba_CSP module for robust global semantic modeling. Using a simulated dataset of multi-width scratch defects, the proposed method achieves high classification accuracy with strong generalization and computational efficiency. Compared to the baseline YOLOv11-cls, SFD-YOLO improves Top-1 accuracy from 92.5% to 95.6%, while reducing the parameter count from 1.54 M to 1.25 M and maintaining low computational cost (4.0 GFLOPs). These results confirm that panoramic multi-angle scattered imaging, coupled with advanced neural architectures, provides a powerful and practical framework for optical surface defect detection, offering valuable prospects for high-precision quality evaluation and intelligent defect inversion in optical inspection. Full article
(This article belongs to the Section Lasers, Light Sources and Sensors)

22 pages, 8527 KB  
Article
MCEM: Multi-Cue Fusion with Clutter Invariant Learning for Real-Time SAR Ship Detection
by Haowei Chen, Manman He, Zhen Yang and Lixin Gan
Sensors 2025, 25(18), 5736; https://doi.org/10.3390/s25185736 - 14 Sep 2025
Viewed by 526
Abstract
Small-vessel detection in Synthetic Aperture Radar (SAR) imagery constitutes a critical capability for maritime surveillance systems. However, prevailing methodologies such as sea-clutter statistical models and deep learning-based detectors face three fundamental limitations: weak target scattering signatures, complex sea clutter interference, and computational inefficiency. These challenges create inherent trade-offs between noise suppression and feature preservation while hindering high-resolution representation learning. To address these constraints, we propose the Multi-cue Efficient Maritime detector (MCEM), an anchor-free framework integrating three synergistic components: a Feature Extraction Module (FEM) with scale-adaptive convolutions for enhanced signature representation; a Feature Fusion Module (F2M) decoupling target-background ambiguities; and a Detection Head Module (DHM) optimizing the accuracy-efficiency balance. Comprehensive evaluations demonstrate MCEM’s state-of-the-art performance: it achieves 45.1% APS on HRSID (+2.3pp over YOLOv8) and 77.7% APL on SSDD (+13.9pp over the same baseline), two of the most challenging high-clutter SAR datasets. The framework enables robust maritime surveillance in complex oceanic conditions, particularly excelling in small-target detection amidst high clutter. Full article
(This article belongs to the Section Sensing and Imaging)

19 pages, 2627 KB  
Communication
A Novel Recognition-Before-Tracking Method Based on a Beam Constraint in Passive Radars for Low-Altitude Target Surveillance
by Xiaomao Cao, Hong Ma, Jiang Jin, Xianrong Wan and Jianxin Yi
Appl. Sci. 2025, 15(18), 9957; https://doi.org/10.3390/app15189957 - 11 Sep 2025
Viewed by 386
Abstract
Effective means are urgently needed to identify non-cooperative targets intruding on airport clearance zones for the safety of low-altitude flights. Passive radars are an ideal means of low-altitude airspace surveillance for their low costs in terms of hardware and operation. However, non-ideal signals transmitted by third-party illuminators challenge feature extraction and target recognition in such radars. To tackle this problem, we propose a light-weight recognition-before-tracking method based on a beam constraint for passive radars. Under the background of sparse targets, the proposed method utilizes the continuity of target motion to identify the same target from the same array beam. Then, with its peaks detected in range-Doppler maps, a feature vector based on the biased radar cross-section is constructed for recognition. Meanwhile, to use the local scattering characteristics of targets for dynamic recognition, we introduce a parameter named normalized bistatic velocity to characterize the attitude of the target relative to the receiving station. With the proposed light-weight metric, the similarity of feature vectors between the unknown target and standard targets is measured to determine the target type. The feasibility and effectiveness of the proposed method are validated by the simulated and measured data. Full article
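The abstract measures similarity between an unknown target's feature vector and those of standard targets without naming the metric. Cosine similarity is one common lightweight choice; the sketch below is an assumption for illustration, not the paper's actual metric:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def classify(unknown, standard_targets):
    """Return the label of the standard target most similar to the unknown.
    standard_targets maps label -> feature vector."""
    return max(standard_targets,
               key=lambda label: cosine_similarity(unknown, standard_targets[label]))
```

Because the cosine ignores vector magnitude, such a metric is insensitive to overall RCS scaling, which is one reason similarity-based matching can be robust for non-cooperative targets.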

21 pages, 18869 KB  
Article
MambaRA-GAN: Underwater Image Enhancement via Mamba and Intra-Domain Reconstruction Autoencoder
by Jiangyan Wu, Guanghui Zhang and Yugang Fan
J. Mar. Sci. Eng. 2025, 13(9), 1745; https://doi.org/10.3390/jmse13091745 - 10 Sep 2025
Viewed by 348
Abstract
Underwater images frequently suffer from severe quality degradation due to light attenuation and scattering effects, manifesting as color distortion, low contrast, and detail blurring. These issues significantly impair the performance of downstream tasks. Therefore, underwater image enhancement (UIE) becomes a key technology to solve underwater image degradation. However, existing data-driven UIE methods typically rely on difficult-to-acquire paired data for training, severely limiting their practical applicability. To overcome this limitation, this study proposes MambaRA-GAN, a novel unpaired UIE framework built upon a CycleGAN architecture, which introduces a novel integration of Mamba and intra-domain reconstruction autoencoders. The key innovations of our work are twofold: (1) We design a generator architecture based on a Triple-Gated Mamba (TG-Mamba) block. This design dynamically allocates feature channels to three parallel branches via learnable weights, achieving optimal fusion of CNN’s local feature extraction capabilities and Mamba’s global modeling capabilities. (2) We construct an intra-domain reconstruction autoencoder, isomorphic to the generator, to quantitatively assess the quality of reconstructed images within the cycle consistency loss. This introduces more effective structural information constraints during training. The experimental results demonstrate that the proposed method achieves significant improvements across five objective performance metrics. Visually, it effectively restores natural colors, enhances contrast, and preserves rich detail information, robustly validating its efficacy for the UIE task. Full article
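The intra-domain reconstruction autoencoder above augments CycleGAN's standard cycle-consistency term, which penalizes the distance between an image and its round trip through both generators. A toy numpy sketch of that base term only (the paper's loss adds reconstruction-quality constraints not shown here):

```python
import numpy as np

def cycle_consistency_l1(x, g_ab, g_ba):
    """L1 cycle loss: x -> G_AB(x) -> G_BA(G_AB(x)) should return to x."""
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))
```

When the two mappings are exact inverses the loss is zero; any mismatch in the round trip is penalized, which is what lets CycleGAN train without paired data.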
(This article belongs to the Section Ocean Engineering)

18 pages, 3048 KB  
Article
Estimation of Wheat Leaf Water Content Based on UAV Hyper-Spectral Remote Sensing and Machine Learning
by Yunlong Wu, Shouqi Yuan, Junjie Zhu, Yue Tang and Lingdi Tang
Agriculture 2025, 15(17), 1898; https://doi.org/10.3390/agriculture15171898 - 7 Sep 2025
Cited by 1 | Viewed by 541
Abstract
Leaf water content is a critical metric during the growth and development of winter wheat. Rapid and efficient monitoring of leaf water content in winter wheat is essential for achieving precision irrigation and assessing crop quality. Unmanned aerial vehicle (UAV)-based hyperspectral remote sensing technology has enormous application potential in the field of crop monitoring. In this study, a UAV was used as the platform to conduct six canopy hyperspectral data samplings and field measurements of leaf water content (LWC) across four growth stages of winter wheat. Six spectral transformations were then applied to the original spectral data and combined with correlation analysis against wheat LWC; multiplicative scatter correction (MSC), standard normal variate (SNV), and first derivative (FD) were selected as the subsequent transformation methods. Additionally, competitive adaptive reweighted sampling (CARS) and the Hilbert–Schmidt independence criterion lasso (HSICLasso) were employed for feature selection to eliminate redundant information from the spectral data. Finally, three machine learning algorithms—partial least squares regression (PLSR), support vector regression (SVR), and random forest (RF)—were combined with the different data preprocessing methods, and 50 random dataset partitions and model evaluation experiments were conducted to compare the accuracy of the resulting combination models in assessing wheat LWC. The results showed significant differences in the predictive performance of the combination models. Comparing prediction accuracy on the test set, the optimal combinations for the three models are MSC + CARS + SVR (R2 = 0.713, RMSE = 0.793, RPD = 2.097), SNV + CARS + PLSR (R2 = 0.692, RMSE = 0.866, RPD = 2.053), and FD + CARS + RF (R2 = 0.689, RMSE = 0.848, RPD = 2.002). 
All three models can accurately and stably predict winter wheat LWC, and the CARS feature extraction method improves prediction accuracy and enhances model stability; among the three, the SVR algorithm shows better robustness and generalization ability. Full article
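Multiplicative scatter correction, one of the preprocessing steps compared above, fits each spectrum linearly against a reference (usually the mean spectrum) and removes the fitted offset and gain. A minimal sketch of the standard algorithm:

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction of a (samples, wavelengths) matrix."""
    X = np.asarray(spectra, dtype=float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, float)
    corrected = np.empty_like(X)
    for i, s in enumerate(X):
        slope, offset = np.polyfit(ref, s, 1)   # fit s ≈ offset + slope * ref
        corrected[i] = (s - offset) / slope     # undo the fitted gain and offset
    return corrected
```

Spectra that differ from the reference only by an affine scatter effect collapse onto the same corrected curve, which is exactly the redundancy MSC is meant to remove.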
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

24 pages, 8351 KB  
Article
The Information Consistency Between Full- and Improved Dual-Polarimetric Mode SAR for Multiscenario Oil Spill Detection
by Guannan Li, Gaohuan Lv, Tong Wang, Xiang Wang and Fen Zhao
Sensors 2025, 25(17), 5551; https://doi.org/10.3390/s25175551 - 5 Sep 2025
Viewed by 1040
Abstract
Detecting marine oil spills is vital for protecting the marine environment, ensuring maritime traffic safety, supporting marine development, and enabling effective emergency response. The dual-polarimetric (DP) synthetic aperture radar (SAR) system represents an evolution from single toward full polarization (FP) and has become an essential tool for oil spill detection with the growing availability of open-source and shared datasets. Recent research has focused on enhancing DP information structures to better exploit these data. This study introduces an improved DP model structure with modified scattering vector coefficients that ensure consistency with the corresponding components of the FP system, enabling comprehensive comparison and analysis with traditional DP structures, including theoretical and quantitative evaluations of data simulated from the FP system as well as validation using real DP scenarios. The results showed the following: (1) The polarimetric entropy HL obtained through the improved DP scattering matrix CL achieves higher information consistency, aligning closely with the FP system and performing better than the two typical DP scattering structures. (2) For multiple polarimetric features from the DP scattering matrix (both traditional and combination features), the improved DP scattering matrix CL can be used for effective oil spill extraction with prominent results. (3) For oil spill extraction, the polarimetric features based on CL tend to make a relatively high contribution, especially the H_A feature combination, leading to substantial gains in classification performance. This approach not only enriches the structural information of the DP system under VV–VH mode but also improves oil spill identification by integrating multi-structured DP features. Furthermore, it offers a practical alternative when FP data are unavailable. Full article
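The polarimetric entropy H compared across systems is derived from the eigenvalues of the coherency (or covariance) matrix: the eigenvalues are normalized to pseudo-probabilities and the entropy is taken with a log base equal to their count (base 3 for FP, base 2 for DP), so H always lies in [0, 1]. A sketch of the standard Cloude–Pottier formula:

```python
import numpy as np

def polarimetric_entropy(eigvals):
    """Cloude-Pottier entropy from non-negative coherency-matrix eigenvalues."""
    lam = np.asarray(eigvals, dtype=float)
    p = lam / lam.sum()            # pseudo-probabilities
    n = len(lam)
    p = p[p > 0]                   # 0 * log(0) -> 0 by convention
    return float(-np.sum(p * np.log(p) / np.log(n)))
```

H = 0 means a single dominant scattering mechanism (e.g. clean sea surface Bragg scattering); H = 1 means fully random scattering, which is why entropy separates oil-damped areas from open water.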
(This article belongs to the Section Environmental Sensing)

25 pages, 9041 KB  
Article
A Novel Wind Turbine Clutter Detection Algorithm for Weather Radar Data
by Fugui Zhang, Yao Gao, Qiangyu Zeng, Zhicheng Ren, Hao Wang and Wanjun Chen
Electronics 2025, 14(17), 3467; https://doi.org/10.3390/electronics14173467 - 29 Aug 2025
Viewed by 525
Abstract
Wind turbine radar echoes exhibit significant scattering power and Doppler spectrum broadening effects, which can interfere with the detection of meteorological targets and subsequently impact weather prediction and disaster warning decisions. In operational weather radar applications, the influence of wind farms on radar observations must be fully considered by meteorological departments and related institutions. In this paper, a Wind Turbine Clutter Detection Algorithm based on Random Forest (WTCDA-RF) is proposed. The level-II radar data is processed in blocks, and the spatial position invariance of wind farm clutter is leveraged for feature extraction. Samples are labeled based on position information, and valid samples are screened and saved to construct a vector sample set of wind farm clutter. Through training and optimization, the proposed WTCDA-RF model achieves an ACC of 90.92%, a PRE of 89.37%, a POD of 92.89%, and an F1-score of 91.10%, with a CSI of 83.65% and a FAR of only 10.63%. This not only enhances the accuracy of weather forecasts and ensures the reliability of radar data but also provides operational conditions for subsequent clutter removal, improves disaster warning capabilities, and ensures timely and accurate warning information under extreme weather conditions. Full article
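The scores reported for WTCDA-RF are the standard forecast-verification metrics, all derived from counts of hits, misses, and false alarms (note PRE = 1 − FAR, consistent with the 89.37%/10.63% pair above). A quick reference implementation, not the paper's code:

```python
def clutter_scores(hits, misses, false_alarms):
    """POD, FAR, CSI and F1 from a clutter-detection contingency table."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    pre = 1.0 - far                                 # precision
    f1 = 2 * pre * pod / (pre + pod)                # harmonic mean of PRE and POD
    return pod, far, csi, f1
```

CSI is the strictest of the three hit-based scores because it penalizes both misses and false alarms in one ratio, which is why it trails POD and PRE in the results above.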

26 pages, 3570 KB  
Article
Monitoring Spatiotemporal Dynamics of Farmland Abandonment and Recultivation Using Phenological Metrics
by Xingtao Liu, Shudong Wang, Xiaoyuan Zhang, Lin Zhen, Chenyang Ma, Saw Yan Naing, Kai Liu and Hang Li
Land 2025, 14(9), 1745; https://doi.org/10.3390/land14091745 - 28 Aug 2025
Abstract
Driven by both natural and anthropogenic factors, farmland abandonment and recultivation constitute complex and widespread global phenomena that impact the ecological environment and society. In the Inner Mongolia Yellow River Basin (IMYRB), a critical tension lies between agricultural production and ecological conservation, characterized by dynamic bidirectional transitions that hold significant implications for the harmony of human–nature relations and the advancement of ecological civilization. With the development of remote sensing, it has become possible to rapidly and accurately extract farmland changes and monitor their vegetation restoration status. However, mapping abandoned farmland presents significant challenges due to its scattered and heterogeneous distribution across diverse landscapes. Furthermore, subjectivity in questionnaire-based data collection compromises the precision of farmland abandonment monitoring. This study aims to extract crop phenological metrics, map farmland abandonment and recultivation dynamics in the IMYRB, and assess post-transition vegetation changes. We used Landsat time-series data to detect the land-use changes and vegetation responses in the IMYRB. The Farmland Abandonment and Recultivation Extraction Index (FAREI) was developed using crop phenology spectral features. Key crop-specific phenological indicators, including sprout, peak, and wilting stages, were extracted from annual MODIS NDVI data for 2020. Based on these key nodes, the Landsat data from 1999 to 2022 was employed to map farmland abandonment and recultivation. Vegetation recovery trajectories were further analyzed by the Mann–Kendall test and the Theil–Sen estimator. The results showed promising accuracy for farmland conversion mapping, with overall precision exceeding 79%. 
Driven by ecological restoration programs, rural labor migration, and soil salinization, two distinct phases of farmland abandonment were identified, 87.9 kha during 2002–2004 and 105.14 kha during 2016–2019, representing an approximate 19.6% increase. Additionally, the post-2016 surge in farmland recultivation was primarily linked to national food security policies and localized soil amelioration initiatives. Vegetation restoration trends indicate significant greening over the past two decades, with particularly pronounced increases observed between 2011 and 2022. In the future, more attention should be paid to the trade-off between ecological protection and food security. Overall, this study developed a novel method for monitoring farmland dynamics, offering critical insights to inform adaptive ecosystem management and advance ecological conservation and sustainable development in ecologically fragile semi-arid regions. Full article
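The trend tools named above (Theil–Sen estimator, Mann–Kendall test) are standard non-parametric methods for NDVI time series. A minimal sketch of both, with hypothetical NDVI values rather than the study's data:

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(t, y):
    """Theil-Sen estimator: the median of all pairwise slopes, robust to outliers."""
    slopes = [(y2 - y1) / (t2 - t1)
              for (t1, y1), (t2, y2) in combinations(zip(t, y), 2)
              if t2 != t1]
    return median(slopes)

def mann_kendall_s(y):
    """Mann-Kendall S statistic: sum of signs over ordered pairs.
    Positive S suggests an increasing trend, negative S a decreasing one."""
    sign = lambda v: (v > 0) - (v < 0)
    return sum(sign(y2 - y1) for y1, y2 in combinations(y, 2))

# Hypothetical annual mean NDVI for an abandoned parcel:
years = [1999, 2005, 2011, 2016, 2022]
ndvi = [0.31, 0.33, 0.36, 0.40, 0.45]
print(theil_sen_slope(years, ndvi), mann_kendall_s(ndvi))
```

In practice the S statistic is normalised into a Z-score (with a tie correction) to test significance; the sketch above shows only the core pairwise computation.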
(This article belongs to the Special Issue Connections Between Land Use, Land Policies, and Food Systems)

10 pages, 11710 KB  
Communication
Domain Wall Motion and the Interfacial Dzyaloshinskii–Moriya Interaction in Pt/Co/RuO2(Ru) Multilayers
by Milad Jalali, Kai Wang, Haoxiang Xu, Yaowen Liu and Sylvain Eimer
Materials 2025, 18(17), 4008; https://doi.org/10.3390/ma18174008 - 27 Aug 2025
Abstract
The interfacial Dzyaloshinskii–Moriya interaction (DMI) plays a pivotal role in stabilising and controlling the motion of chiral spin textures, such as Néel-type bubble domains, in ultrathin magnetic films—an essential feature for next-generation spintronic devices. In this work, we investigate domain wall (DW) dynamics in magnetron-sputtered Ta(3 nm)/Pt(3 nm)/Co(1 nm)/RuO2(1 nm) [Ru(1 nm)]/Pt(3 nm) multilayers, benchmarking their behaviour against control stacks. Vibrating sample magnetometry (VSM) was employed to determine saturation magnetisation and perpendicular magnetic anisotropy (PMA), while polar magneto-optical Kerr effect (P-MOKE) measurements provided coercivity data. Kerr microscopy visualised the expansion of bubble-shaped domains under combined perpendicular and in-plane magnetic fields, enabling the extraction of effective DMI fields. Brillouin light scattering (BLS) spectroscopy quantified the asymmetric propagation of spin waves, and micromagnetic simulations corroborated the experimental findings. The Pt/Co/RuO2 system exhibits a DMI constant of ≈1.08 mJ/m2, slightly higher than the Pt/Co/Ru system (≈1.03 mJ/m2) and much higher than the Pt/Co control (≈0.23 mJ/m2). Correspondingly, domain walls in the RuO2-capped films show pronounced velocity asymmetry under in-plane fields, whereas the symmetric Pt/Co/Pt stack shows negligible asymmetry. Despite lower depinning fields in the Ru-capped sample, its domain walls move faster than those in the RuO2-capped sample, indicating reduced pinning. Our results demonstrate that integrating RuO2 significantly alters interfacial spin–orbit interactions. Full article
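For context, BLS-based extraction of the DMI constant commonly relies on the frequency asymmetry of counter-propagating Damon–Eshbach spin waves. A sketch of the relation typically used (the symbols below — frequency asymmetry Δf, gyromagnetic ratio γ, saturation magnetisation M_s, and spin-wave wavevector k — are standard notation, not values taken from this abstract):

```latex
\Delta f = f(+k) - f(-k) = \frac{2\gamma}{\pi M_s}\, D\, k
\quad\Longrightarrow\quad
D = \frac{\pi M_s \,\Delta f}{2 \gamma k}
```

Measuring Δf at a known k and combining it with the VSM-determined M_s then yields D directly, which is presumably how the ≈1.08 mJ/m2 value above compares across the three stacks.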
(This article belongs to the Section Thin Films and Interfaces)
