
Search Results (745)

Search Parameters:
Keywords = adaptive weighted fusion

23 pages, 3312 KB  
Article
Automatic Picking Method for the First Arrival Time of Microseismic Signals Based on Fractal Theory and Feature Fusion
by Huicong Xu, Kai Li, Pengfei Shan, Xuefei Wu, Shuai Zhang, Zeyang Wang, Chenguang Liu, Zhongming Yan, Liang Wu and Huachuan Wang
Fractal Fract. 2025, 9(11), 679; https://doi.org/10.3390/fractalfract9110679 - 23 Oct 2025
Viewed by 151
Abstract
Microseismic signals induced by mining activities often have low signal-to-noise ratios, and traditional picking methods are easily affected by noise, making accurate identification of P-wave arrivals difficult. To address this problem, this study proposes an adaptive denoising algorithm based on wavelet-threshold-enhanced Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and develops an automatic P-wave arrival picking method incorporating fractal box dimension features, along with a corresponding accuracy evaluation framework. The raw microseismic signals are decomposed using the improved CEEMDAN method, with high-frequency intrinsic mode functions (IMFs) processed by wavelet-threshold denoising and low- and mid-frequency IMFs retained for reconstruction, effectively suppressing background noise and enhancing signal clarity. The fractal box dimension is applied to characterize waveform complexity over short and long time windows; by introducing fractal derivatives and short-long window differences, abrupt changes in local-to-global complexity at P-wave arrivals are revealed. Energy mutation features are extracted using the short-term/long-term average (STA/LTA) energy ratio, and noise segments are standardized via Z-score processing. A multi-feature weighted fusion scoring function is constructed to achieve robust identification of P-wave arrivals. Evaluation metrics, including picking error, mean absolute error, and success rate, are used to comprehensively assess the method’s performance in terms of temporal deviation, statistical consistency, and robustness. Case studies using microseismic data from a mining site show that the proposed method can accurately identify P-wave arrivals under different signal-to-noise conditions, with automatic picking results highly consistent with manual labels, mean errors within the sampling interval (2–4 ms), and a picking success rate exceeding 95%. The method provides a reliable tool for seismic source localization and dynamic hazard prediction in mining microseismic monitoring. Full article
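Of the features fused above, the STA/LTA energy ratio is the easiest to prototype. Below is a minimal illustrative sketch on a synthetic trace; the window lengths and threshold are arbitrary stand-ins rather than the paper's tuned values, and the fractal box-dimension features are omitted.

```python
import numpy as np

def sta_lta(signal, n_sta, n_lta):
    """Short-term / long-term average energy ratio (classic STA/LTA)."""
    energy = signal.astype(float) ** 2
    # Causal moving averages over the short and long windows.
    sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="full")[: len(energy)]
    lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="full")[: len(energy)]
    lta[lta == 0] = np.finfo(float).tiny   # guard against division by zero
    ratio = sta / lta
    ratio[:n_lta] = 0.0                    # suppress the startup transient
    return ratio

# Synthetic trace: background noise, then a stronger arrival at sample 500.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.1, 1000)
x[500:] += rng.normal(0.0, 1.0, 500)
ratio = sta_lta(x, n_sta=20, n_lta=200)
pick = int(np.argmax(ratio > 5.0))  # first sample where the ratio crosses the threshold
```

The onset shows up as a sharp jump in the ratio at the arrival, which is the energy-mutation feature the multi-feature scoring function consumes.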

23 pages, 11949 KB  
Article
MDAS-YOLO: A Lightweight Adaptive Framework for Multi-Scale and Dense Pest Detection in Apple Orchards
by Bo Ma, Jiawei Xu, Ruofei Liu, Junlin Mu, Biye Li, Rongsen Xie, Shuangxi Liu, Xianliang Hu, Yongqiang Zheng, Hongjian Zhang and Jinxing Wang
Horticulturae 2025, 11(11), 1273; https://doi.org/10.3390/horticulturae11111273 - 22 Oct 2025
Viewed by 399
Abstract
Accurate monitoring of orchard pests is vital for green and efficient apple production. Yet images captured by intelligent pest-monitoring lamps often contain small targets, weak boundaries, and crowded scenes, which hamper detection accuracy. We present MDAS-YOLO, a lightweight detection framework tailored for smart pest monitoring in apple orchards. At the input stage, we adopt the LIME++ enhancement to mitigate low illumination and non-uniform lighting, improving image quality at the source. On the model side, we integrate three structural innovations: (1) a C3k2-MESA-DSM module in the backbone to explicitly strengthen contours and fine textures via multi-scale edge enhancement and dual-domain feature selection; (2) an AP-BiFPN in the neck to achieve adaptive cross-scale fusion through learnable weighting and differentiated pooling; and (3) a SimAM block before the detection head to perform zero-parameter, pixel-level saliency re-calibration, suppressing background redundancy without extra computation. On a self-built apple-orchard pest dataset, MDAS-YOLO attains 95.68% mAP, outperforming YOLOv11n by 6.97 percentage points while maintaining a superior trade-off among accuracy, model size, and inference speed. Overall, the proposed synergistic pipeline—input enhancement, early edge fidelity, mid-level adaptive fusion, and end-stage lightweight re-calibration—effectively addresses small-scale, weak-boundary, and densely distributed pests, providing a promising and regionally validated approach for intelligent pest monitoring and sustainable orchard management, and offering methodological insights for future multi-regional pest monitoring research. Full article
(This article belongs to the Section Insect Pest Management)
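The "adaptive cross-scale fusion through learnable weighting" in BiFPN-style necks is commonly implemented as EfficientDet's fast normalized fusion. A minimal NumPy sketch under the assumption that the input maps have already been resized to a common shape (the weight values here are placeholders, not learned parameters):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Weighted fusion of same-shape feature maps, BiFPN-style.

    Weights are clipped to be non-negative and normalized to sum to one,
    so the fused map is a convex combination of the inputs.
    """
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)
    w = w / (w.sum() + eps)
    return sum(wi * fi for wi, fi in zip(w, features))

# Two feature maps fused with a 2:1 learned-weight ratio.
f1 = np.ones((4, 4))
f2 = np.zeros((4, 4))
fused = fast_normalized_fusion([f1, f2], weights=[2.0, 1.0])
```

Because the weights are normalized, a map judged unhelpful for a given scale is smoothly down-weighted rather than cut off.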

25 pages, 8305 KB  
Article
SAHI-Tuned YOLOv5 for UAV Detection of TM-62 Anti-Tank Landmines: Small-Object, Occlusion-Robust, Real-Time Pipeline
by Dejan Dodić, Vuk Vujović, Srđan Jovković, Nikola Milutinović and Mitko Trpkoski
Computers 2025, 14(10), 448; https://doi.org/10.3390/computers14100448 - 21 Oct 2025
Viewed by 262
Abstract
Anti-tank landmines endanger post-conflict recovery. Detecting camouflaged TM-62 landmines in low-altitude unmanned aerial vehicle (UAV) imagery is challenging because targets occupy few pixels and are low-contrast and often occluded. We introduce a single-class anti-tank dataset and a YOLOv5 pipeline augmented with a SAHI-based small-object stage and Weighted Boxes Fusion. The evaluation combines COCO metrics with an operational operating point (score = 0.25; IoU = 0.50) and stratifies by object size and occlusion. On a held-out test partition representative of UAV acquisition, the baseline YOLOv5 attains mAP@0.50:0.95 = 0.553 and AP@0.50 = 0.851. With tuned SAHI (768 px tiles, 40% overlap) plus fusion, performance rises to mAP@0.50:0.95 = 0.685 and AP@0.50 = 0.935—ΔmAP = +0.132 (+23.9% rel.) and ΔAP@0.50 = +0.084 (+9.9% rel.). At the operating point, precision = 0.94 and recall = 0.89 (F1 = 0.914), implying a 58.4% reduction in missed detections versus a non-optimized SAHI baseline and a +14.3 AP@0.50 gain on the small/occluded subset. Ablations attribute gains to tile size, overlap, and fusion, which boost recall on low-pixel, occluded landmines without inflating false positives. The pipeline sustains real-time UAV throughput and supports actionable triage for humanitarian demining, as well as motivating RGB–thermal fusion and cross-season/-domain adaptation. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
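Tuned SAHI slicing boils down to a tiling geometry: cover the frame with fixed-size tiles whose stride leaves the stated overlap, shifting the last row and column inward so every tile stays inside the image. Below is a plausible reconstruction of that geometry with the reported settings (768 px tiles, 40% overlap); it is not SAHI's actual internal code:

```python
def tile_boxes(img_w, img_h, tile=768, overlap=0.4):
    """Overlapping (x1, y1, x2, y2) tile boxes over an image, SAHI-style."""
    stride = max(1, int(tile * (1 - overlap)))  # 40% overlap -> 460 px stride
    xs, ys = [], []
    x = 0
    while True:
        xs.append(min(x, max(0, img_w - tile)))  # clamp last tile inside the image
        if x + tile >= img_w:
            break
        x += stride
    y = 0
    while True:
        ys.append(min(y, max(0, img_h - tile)))
        if y + tile >= img_h:
            break
        y += stride
    return [(x, y, min(x + tile, img_w), min(y + tile, img_h))
            for y in ys for x in xs]

boxes = tile_boxes(1920, 1080, tile=768, overlap=0.4)
```

Detections from the per-tile passes are then mapped back to frame coordinates and merged by Weighted Boxes Fusion.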

19 pages, 6725 KB  
Article
Chaos Fusion Mutation-Based Weighted Mean of Vectors Algorithm for Linear Antenna Array Optimization
by Zhuo Chen, Yan Liu, Liang Dong, Anyong Liu and Yibo Wang
Sensors 2025, 25(20), 6482; https://doi.org/10.3390/s25206482 - 20 Oct 2025
Viewed by 299
Abstract
This study proposes the Chaos Fusion Mutation-Based Weighted Mean of Vectors Algorithm, an advanced optimization technique within the weighted mean of vectors (INFO) framework for synthesizing unequally spaced linear arrays. The proposed algorithm incorporates three complementary mechanisms: a good-point-set initialization to enhance early population coverage, a sine–tent–cosine (STC) chaos–based adaptive parameterization to balance exploration and exploitation, and a normal-cloud mutation to preserve diversity and prevent premature convergence. Array-factor (AF) optimization is posed as a constrained problem, simultaneously minimizing sidelobe level (SLL) and achieving deep-null steering, with penalties applied to enforce geometric and engineering constraints. Across diverse array-synthesis tasks, the proposed algorithm consistently attains lower peak SLLs and more accurate nulls, with faster and more stable convergence than benchmark metaheuristics. Across five simulation scenarios, it demonstrates robust superiority, notably surpassing an enhanced IWO in the combined objectives of deep-null suppression and maximum SLL reduction. In a representative engineering example, we obtain an SLL and a deep null of approximately −32.30 and −125.1 dB, respectively, at 104°. Evaluation of the CEC2020 real-world constrained problems confirms robust convergence and competitive statistical ranking. For reproducibility, all data and code are publicly accessible, as detailed in the Data Availability section. Full article
(This article belongs to the Section Communications)
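The tent map, one of the components blended in sine–tent–cosine (STC) chaos, illustrates how chaotic parameterization spreads values over the search range. The sketch below shows the tent component alone (the full STC combination is not reproduced here); using mu = 1.99 rather than exactly 2 is a deliberate choice, since the exact-2 tent map collapses to zero in floating point:

```python
def tent_map(x0, n, mu=1.99):
    """Deterministic but well-spread chaotic sequence in [0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        # Tent map: stretch the left half up, fold the right half back.
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        seq.append(x)
    return seq

seq = tent_map(0.37, 200)
```

In the optimizer, such a sequence would drive the adaptive control parameters so that exploration does not stall on a periodic pattern.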

26 pages, 11507 KB  
Article
PLD-DETR: A Method for Defect Inspection of Power Transmission Lines
by Jianing Chen, Xin Zhang, Dawei Feng, Jiahao Li and Liang Zhu
Electronics 2025, 14(20), 4107; https://doi.org/10.3390/electronics14204107 - 20 Oct 2025
Viewed by 300
Abstract
Unmanned Aerial Vehicle (UAV)-based computer vision has emerged as a crucial approach for transmission line defect detection. However, transmission lines contain multi-scale components in complex environments, which complicates the accurate extraction of multi-scale features and necessitates a careful balance between model complexity and detection accuracy. This paper proposes a Transformer-based framework called Power Line Defect Detection Transformer (PLD-DETR). To simultaneously capture shallow texture and deep semantic information while avoiding single-path limitations, a dual-domain selection mechanism block is designed as the backbone network, enabling collaborative feature extraction at different levels. Subsequently, an adaptive sparse self-attention mechanism is introduced to dynamically adjust attention weights for improved processing of critical feature regions, aiming to enhance attention to semantically rich regions and reduce background interference. Finally, we construct a multi-branch auxiliary bidirectional feature pyramid network to address information loss in traditional feature fusion. It fuses multi-scale features from four backbone layers through top-down and bottom-up bidirectional information flow, significantly improving feature representation capability. Experimental results demonstrate that, while remaining lightweight, PLD-DETR achieves 2.7%, 7.01%, and 5.58% improvements in AP50, AP75, and AP50–95, respectively, compared to the baseline model. Compared with other transmission line defect detection methods, PLD-DETR demonstrates superior performance in both accuracy and efficiency. Full article
(This article belongs to the Special Issue AI Applications for Smart Grid)

19 pages, 1603 KB  
Article
BiLSTM-LN-SA: A Novel Integrated Model with Self-Attention for Multi-Sensor Fire Detection
by Zhaofeng He, Yu Si, Liyuan Yang, Nuo Xu, Xinglong Zhang, Mingming Wang and Xiaoyun Sun
Sensors 2025, 25(20), 6451; https://doi.org/10.3390/s25206451 - 18 Oct 2025
Viewed by 333
Abstract
Multi-sensor fire detection technology has been widely adopted in practical applications; however, existing methods still suffer from high false alarm rates and inadequate adaptability in complex environments due to their limited capacity to capture deep time-series dependencies in sensor data. To enhance robustness and accuracy, this paper proposes a novel model named BiLSTM-LN-SA, which integrates a Bidirectional Long Short-Term Memory (BiLSTM) network with Layer Normalization (LN) and a Self-Attention (SA) mechanism. The BiLSTM module extracts intricate time-series features and long-term dependencies. The incorporation of Layer Normalization mitigates feature distribution shifts across different environments, thereby improving the model’s adaptability to cross-scenario data and its generalization capability. Simultaneously, the Self-Attention mechanism dynamically recalibrates the importance of features at different time steps, adaptively enhancing fire-critical information and enabling deeper, process-aware feature fusion. Extensive evaluation on a real-world dataset demonstrates the superiority of the BiLSTM-LN-SA model, which achieves a test accuracy of 98.38%, an F1-score of 0.98, and an AUC of 0.99, significantly outperforming existing methods including EIF-LSTM, rTPNN, and MLP. Notably, the model also maintains low false positive and false negative rates of 1.50% and 1.85%, respectively. Ablation studies further elucidate the complementary roles of each component: the self-attention mechanism is pivotal for dynamic feature weighting, while layer normalization is key to stabilizing the learning process. This validated design confirms the model’s strong generalization capability and practical reliability across varied environmental scenarios. Full article
(This article belongs to the Section Sensor Networks)
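The stabilizing role of Layer Normalization described above is easy to see in isolation: each sample is standardized over its own feature dimension, so sensor readings on very different scales land in the same normalized range. A minimal NumPy sketch (scalar gain and bias for brevity, where the trained model learns per-feature gamma and beta):

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Layer Normalization over the last (feature) axis."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# Two samples of the same pattern at very different sensor scales.
features = np.array([[1.0, 2.0, 3.0, 4.0],
                     [10.0, 20.0, 30.0, 40.0]])
normed = layer_norm(features)  # both rows normalize to the same values
```

After normalization the two rows coincide, which is precisely the cross-environment distribution-shift resistance the abstract credits LN with.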

26 pages, 10675 KB  
Article
DFAS-YOLO: Dual Feature-Aware Sampling for Small-Object Detection in Remote Sensing Images
by Xiangyu Liu, Shenbo Zhou, Jianbo Ma, Yumei Sun, Jianlin Zhang and Haorui Zuo
Remote Sens. 2025, 17(20), 3476; https://doi.org/10.3390/rs17203476 - 18 Oct 2025
Viewed by 554
Abstract
In remote sensing imagery, detecting small objects is challenging due to the limited representational ability of feature maps when resolution changes. This issue is mainly reflected in two aspects: (1) upsampling causes feature shifts, making feature fusion difficult to align; (2) downsampling leads to the loss of details. Although recent advances in object detection have been remarkable, small-object detection remains unresolved. In this paper, we propose Dual Feature-Aware Sampling YOLO (DFAS-YOLO) to address these issues. First, the Soft-Aware Adaptive Fusion (SAAF) module corrects upsampling by applying adaptive weighting and spatial attention, thereby reducing errors caused by feature shifts. Second, the Global Dense Local Aggregation (GDLA) module employs parallel convolution, max pooling, and average pooling with channel aggregation, combining their strengths to preserve details after downsampling. Furthermore, the detection head is redesigned based on object characteristics in remote sensing imagery. Extensive experiments on the VisDrone2019 and HIT-UAV datasets demonstrate that DFAS-YOLO achieves competitive detection accuracy compared with recent models. Full article

20 pages, 6483 KB  
Article
Loop-MapNet: A Multi-Modal HDMap Perception Framework with SDMap Dynamic Evolution and Priors
by Yuxuan Tang, Jie Hu, Daode Zhang, Wencai Xu, Feiyu Zhao and Xinghao Cheng
Appl. Sci. 2025, 15(20), 11160; https://doi.org/10.3390/app152011160 - 17 Oct 2025
Viewed by 294
Abstract
High-definition maps (HDMaps) are critical for safe autonomy on structured roads. Yet traditional production—relying on dedicated mapping fleets and manual quality control—is costly and slow, impeding large-scale, frequent updates. Recently, standard-definition maps (SDMaps) derived from remote sensing have been adopted as priors to support HDMap perception, lowering cost but struggling with subtle urban changes and localization drift. We propose Loop-MapNet, a self-evolving, multimodal, closed-loop mapping framework. Loop-MapNet effectively leverages surround-view images, LiDAR point clouds, and SDMaps; it fuses multi-scale vision via a weighted BiFPN, and couples PointPillars BEV and SDMap topology encoders for cross-modal sensing. A Transformer-based bidirectional adaptive cross-attention aligns SDMap with online perception, enabling robust fusion under heterogeneity. We further introduce a confidence-guided masked autoencoder (CG-MAE) that leverages confidence and probabilistic distillation to both capture implicit SDMap priors and enhance the detailed representation of low-confidence HDMap regions. With spatiotemporal consistency checks, Loop-MapNet incrementally updates SDMaps to form a perception–mapping–update loop, compensating for remote-sensing latency and enabling online map optimization. On nuScenes, within 120 m, Loop-MapNet attains 61.05% mIoU, surpassing the best baseline by 0.77%. Under extreme localization errors, it maintains 60.46% mIoU, improving robustness by 2.77%; CG-MAE pre-training raises accuracy in low-confidence regions by 1.72%. These results demonstrate advantages in fusion and robustness, moving beyond one-way prior injection and enabling HDMap–SDMap co-evolution for closed-loop autonomy and rapid SDMap refresh from remote sensing. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

17 pages, 2475 KB  
Article
YOLO-LMTB: A Lightweight Detection Model for Multi-Scale Tea Buds in Agriculture
by Guofeng Xia, Yanchuan Guo, Qihang Wei, Yiwen Cen, Loujing Feng and Yang Yu
Sensors 2025, 25(20), 6400; https://doi.org/10.3390/s25206400 - 16 Oct 2025
Viewed by 388
Abstract
Tea bud targets are typically located in complex environments characterized by multi-scale variations, high density, and strong color resemblance to the background, which pose significant challenges for rapid and accurate detection. To address these issues, this study presents YOLO-LMTB, a lightweight multi-scale detection model based on the YOLOv11n architecture. First, a Multi-scale Edge-Refinement Context Aggregator (MERCA) module is proposed to replace the original C3k2 block in the backbone. MERCA captures multi-scale contextual features through hierarchical receptive field collaboration and refines edge details, thereby significantly improving the perception of fine structures in tea buds. Furthermore, a Dynamic Hyperbolic Token Statistics Transformer (DHTST) module is developed to replace the original PSA block. This module dynamically adjusts feature responses and statistical measures through attention weighting using learnable threshold parameters, effectively enhancing discriminative features while suppressing background interference. Additionally, a Bidirectional Feature Pyramid Network (BiFPN) is introduced to replace the original network structure, enabling the adaptive fusion of semantically rich and spatially precise features via bidirectional cross-scale connections while reducing computational complexity. On the self-built tea bud dataset, experimental results demonstrate that, compared to the original model, the YOLO-LMTB model achieves a 2.9% improvement in precision (P), along with increases of 1.6% and 2.0% in mAP50 and mAP50-95, respectively. Simultaneously, the number of parameters is reduced by 28.3% and the model size by 22.6%. To further validate the effectiveness of the improvement scheme, experiments were also conducted on public datasets. The results demonstrate that each enhancement module boosts the model’s detection performance and exhibits strong generalization capability. The model not only excels in multi-scale tea bud detection but also offers a valuable reference for reducing computational complexity, thereby providing a technical foundation for the practical application of intelligent tea-picking systems. Full article
(This article belongs to the Section Smart Agriculture)

20 pages, 2565 KB  
Article
GBV-Net: Hierarchical Fusion of Facial Expressions and Physiological Signals for Multimodal Emotion Recognition
by Jiling Yu, Yandong Ru, Bangjun Lei and Hongming Chen
Sensors 2025, 25(20), 6397; https://doi.org/10.3390/s25206397 - 16 Oct 2025
Viewed by 488
Abstract
A core challenge in multimodal emotion recognition lies in the precise capture of the inherent multimodal interactive nature of human emotions. Addressing the limitation of existing methods, which often process visual signals (facial expressions) and physiological signals (EEG, ECG, EOG, and GSR) in isolation and thus fail to exploit their complementary strengths effectively, this paper presents a new multimodal emotion recognition framework called the Gated Biological Visual Network (GBV-Net). This framework enhances emotion recognition accuracy through deep synergistic fusion of facial expressions and physiological signals. GBV-Net integrates three core modules: (1) a facial feature extractor based on a modified ConvNeXt V2 architecture incorporating lightweight Transformers, specifically designed to capture subtle spatio-temporal dynamics in facial expressions; (2) a hybrid physiological feature extractor combining 1D convolutions, Temporal Convolutional Networks (TCNs), and convolutional self-attention mechanisms, adept at modeling local patterns and long-range temporal dependencies in physiological signals; and (3) an enhanced gated attention fusion module capable of adaptively learning inter-modal weights to achieve dynamic, synergistic integration at the feature level. A thorough investigation of the publicly accessible DEAP and MAHNOB-HCI datasets reveals that GBV-Net surpasses contemporary methods. Specifically, on the DEAP dataset, the model attained classification accuracies of 95.10% for Valence and 95.65% for Arousal, with F1-scores of 95.52% and 96.35%, respectively. On MAHNOB-HCI, the accuracies achieved were 97.28% for Valence and 97.73% for Arousal, with F1-scores of 97.50% and 97.74%, respectively. These experimental findings substantiate that GBV-Net effectively captures deep-level interactive information between multimodal signals, thereby improving emotion recognition accuracy. Full article
(This article belongs to the Section Biomedical Sensors)
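At its simplest, the gated fusion described here is a sigmoid gate computed from the concatenated modality features that mixes them per dimension. A hypothetical NumPy sketch with random stand-in weights (GBV-Net's actual module is considerably richer, with attention-based gating):

```python
import numpy as np

def gated_fusion(visual, physio, w_gate, b_gate):
    """Per-dimension convex mix of two modality embeddings via a sigmoid gate."""
    concat = np.concatenate([visual, physio])
    gate = 1.0 / (1.0 + np.exp(-(w_gate @ concat + b_gate)))  # each entry in (0, 1)
    return gate * visual + (1.0 - gate) * physio

rng = np.random.default_rng(1)
d = 8
v = rng.normal(size=d)                       # visual embedding (stand-in)
p = rng.normal(size=d)                       # physiological embedding (stand-in)
W = rng.normal(scale=0.1, size=(d, 2 * d))   # would be learned in training
b = np.zeros(d)
fused = gated_fusion(v, p, W, b)
```

Since the gate lies in (0, 1), every fused coordinate stays between the two modalities' values, so neither signal can be entirely discarded.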

24 pages, 2221 KB  
Article
Multi-Scale Frequency-Aware Transformer for Pipeline Leak Detection Using Acoustic Signals
by Menghan Chen, Yuchen Lu, Wangyu Wu, Yanchen Ye, Bingcai Wei and Yao Ni
Sensors 2025, 25(20), 6390; https://doi.org/10.3390/s25206390 - 16 Oct 2025
Viewed by 444
Abstract
Pipeline leak detection through acoustic signal measurement faces critical challenges, including insufficient utilization of time-frequency domain features, poor adaptability to noisy environments, and inadequate exploitation of frequency-domain prior knowledge in existing deep learning approaches. This paper proposes a Multi-Scale Frequency-Aware Transformer (MSFAT) architecture that integrates measurement-based acoustic signal analysis with artificial intelligence techniques. The MSFAT framework consists of four core components: a frequency-aware embedding layer that achieves joint representation learning of time-frequency dual-domain features through parallel temporal convolution and frequency transformation, a multi-head frequency attention mechanism that dynamically adjusts attention weights based on spectral distribution using frequency features as modulation signals, an adaptive noise filtering module that integrates noise detection, signal enhancement, and adaptive fusion functions through end-to-end joint optimization, and a multi-scale feature aggregation mechanism that extracts discriminative global representations through complementary pooling strategies. The proposed method addresses the fundamental limitations of traditional measurement-based detection systems by incorporating domain-specific prior knowledge into neural network architecture design. Experimental validation demonstrates that MSFAT achieves an accuracy and F1-score of 97.2%, representing improvements of 10.5% and 10.9%, respectively, compared to standard Transformer approaches. The model maintains robust detection performance across signal-to-noise ratio conditions ranging from 5 to 30 dB, demonstrating superior adaptability to complex industrial measurement environments. Ablation studies confirm the effectiveness of each innovative module, with frequency-aware mechanisms contributing most significantly to the enhanced measurement precision and reliability in pipeline leak detection applications. Full article
(This article belongs to the Section Intelligent Sensors)

24 pages, 1450 KB  
Article
A New Wide-Area Backup Protection Algorithm Based on Confidence Weighting and Conflict Adaptation
by Zhen Liu, Wei Han, Baojiang Tian, Gaofeng Hao, Fengqing Cui, Xiaoyu Li, Shenglai Wang and Yikai Wang
Electronics 2025, 14(20), 4032; https://doi.org/10.3390/electronics14204032 - 14 Oct 2025
Viewed by 208
Abstract
To alleviate the communication burden of wide-area protection and enhance the fault tolerance of multi-source criteria, this paper introduces an improved wide-area backup protection method based on multi-source information fusion. Initially, the variation characteristics of bus sequence voltages after a fault are utilized to screen suspected fault lines, thereby reducing communication traffic. Subsequently, four basic probability assignment functions are constructed using the polarity of zero-sequence current charge, the polarity of phase-difference current charge, and the starting signals of Zone II/III distance protection from the local and adjacent lines. The confidence of each probability function is evaluated using normalized information entropy, while consistency is analyzed via Gaussian similarity, enabling dynamic allocation of fusion weights. Additionally, a conflict adaptation factor is designed to adjust the fusion strategy dynamically, improving fault tolerance in high-conflict scenarios and mitigating the impact of abnormal single criteria on decision results. Finally, the fused fault probability is used to identify the fault line. Simulation results based on the IEEE 39-bus model demonstrate that the proposed algorithm can accurately identify fault lines under different fault types and locations and remains robust under conditions such as information loss and protection maloperation or failure. Full article
(This article belongs to the Special Issue Advanced Online Monitoring and Fault Diagnosis of Power Equipment)
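The entropy-based confidence weighting can be sketched directly: a flat basic probability assignment carries little information, so its normalized entropy is high and its fusion weight small. A minimal two-hypothesis illustration (the Gaussian-similarity consistency term and the conflict-adaptation factor from the paper are omitted):

```python
import numpy as np

def entropy_confidence_weights(bpas, eps=1e-12):
    """Fusion weights from normalized information entropy of each BPA row."""
    bpas = np.asarray(bpas, dtype=float)
    p = bpas / bpas.sum(axis=1, keepdims=True)
    # Normalized entropy in [0, 1]: 0 = fully decisive, 1 = uniform.
    h = -(p * np.log(p + eps)).sum(axis=1) / np.log(p.shape[1])
    conf = 1.0 - h
    return conf / conf.sum()

# Criterion 1 is decisive, criterion 2 is nearly uninformed.
weights = entropy_confidence_weights([[0.90, 0.10],
                                      [0.55, 0.45]])
```

The decisive criterion dominates the fusion, which is how an abnormal, near-uniform single criterion is kept from corrupting the decision.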

20 pages, 49845 KB  
Article
DDF-YOLO: A Small Target Detection Model Using Multi-Scale Dynamic Feature Fusion for UAV Aerial Photography
by Ziang Ma, Chao Wang, Chuanzhi Chen, Jinbao Chen and Guang Zheng
Aerospace 2025, 12(10), 920; https://doi.org/10.3390/aerospace12100920 - 13 Oct 2025
Viewed by 594
Abstract
Unmanned aerial vehicle (UAV)-based object detection shows promising potential in intelligent transportation and disaster response. However, detecting small targets remains challenging due to inherent limitations (long-distance and low-resolution imaging) and environmental interference (complex backgrounds and occlusions). To address these issues, this paper proposes an enhanced small target detection model, DDF-YOLO, which achieves higher detection performance. First, a dynamic feature extraction module (C2f-DCNv4) employs deformable convolutions to effectively capture features from irregularly shaped objects. In addition, a dynamic upsampling module (DySample) optimizes multi-scale feature fusion by combining shallow spatial details with deep semantic features, preserving critical low-level information while enhancing generalization across scales. Finally, to balance rapid convergence with precise localization, an adaptive Focaler-ECIoU loss function dynamically adjusts training weights based on sample quality during bounding box regression. Extensive experiments on VisDrone2019 and UAVDT benchmarks demonstrate DDF-YOLO’s superiority. Compared to YOLOv8n, our model achieves gains of 8.6% and 4.8% in mAP50, along with improvements of 5.0% and 3.3% in mAP50-95, respectively. Furthermore, it exhibits superior efficiency, requiring only 7.3 GFLOPs and attaining an inference speed of 179 FPS. These results validate the model’s robustness for UAV-based detection, particularly in small-object scenarios. Full article
(This article belongs to the Section Aeronautics)
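The abstract does not spell out the Focaler-ECIoU formulation, but the Focaler family of losses is commonly described as a linear interval remapping of IoU that refocuses the regression loss on easy or hard samples. The sketch below is a minimal NumPy illustration of that idea; the thresholds `d` and `u` and the helper names are illustrative choices, not values from the paper.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def focaler_iou(iou_val, d=0.0, u=0.95):
    """Focaler-style linear remapping: IoU below d clips to 0, above u
    clips to 1, linear in between. Shifting the [d, u] interval shifts
    the regression focus toward easy or hard samples."""
    return float(np.clip((iou_val - d) / (u - d), 0.0, 1.0))

pred, gt = [0, 0, 10, 10], [2, 2, 12, 12]
v = iou(pred, gt)            # plain IoU = 64/136, about 0.47
loss = 1.0 - focaler_iou(v)  # reweighted regression loss
```

In the full Focaler-ECIoU described by the paper, this remapped term would be combined with ECIoU-style penalties on center distance and aspect ratio; those terms are omitted here for brevity.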
22 pages, 5357 KB  
Article
An Effective Approach to Rotatory Fault Diagnosis Combining CEEMDAN and Feature-Level Integration
by Sumika Chauhan, Govind Vashishtha and Prabhkiran Kaur
Algorithms 2025, 18(10), 644; https://doi.org/10.3390/a18100644 - 12 Oct 2025
Abstract
This paper introduces an effective approach for rotatory fault diagnosis, specifically focusing on centrifugal pumps, by combining complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and feature-level integration. Centrifugal pumps are critical in various industries, and their condition monitoring is essential for reliability. The proposed methodology addresses the limitations of traditional single-sensor fault diagnosis by fusing information from acoustic and vibration signals. CEEMDAN was employed to decompose raw signals into intrinsic mode functions (IMFs), mitigating noise and non-stationary characteristics. Weighted kurtosis was used to select significant IMFs, and a comprehensive set of time, frequency, and time-frequency domain features was extracted. Feature-level fusion integrated these features, and a support vector machine (SVM) classifier, optimized using the crayfish optimization algorithm (COA), identified the different health conditions. The methodology was validated on a centrifugal pump with various impeller defects, achieving a classification accuracy of 95.0%. The results demonstrate the efficacy of the proposed approach in accurately diagnosing the state of centrifugal pumps.
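The abstract does not define its weighted-kurtosis criterion; one common variant weights each IMF's kurtosis by its correlation with the raw signal, so that impulsive, fault-related modes outrank smooth or spurious ones. The NumPy sketch below illustrates that variant on synthetic "IMFs"; the weighting scheme and function names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def kurtosis(x):
    """Pearson kurtosis: E[(x - mu)^4] / sigma^4 (3.0 for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s2 = ((x - m) ** 2).mean()
    return ((x - m) ** 4).mean() / (s2 ** 2)

def select_imfs(imfs, signal, k=2):
    """Rank IMFs by kurtosis weighted by |correlation with the raw
    signal| (one common 'weighted kurtosis' criterion); keep top k."""
    scores = [kurtosis(imf) * abs(np.corrcoef(imf, signal)[0, 1])
              for imf in imfs]
    order = np.argsort(scores)[::-1][:k]
    return sorted(order.tolist())

# Toy example: an impulsive component (fault-like) outranks a smooth one.
t = np.linspace(0, 1, 400)
impulsive = np.zeros_like(t); impulsive[::50] = 1.0   # periodic impacts
smooth = np.sin(2 * np.pi * 5 * t)                    # harmonic content
signal = impulsive + smooth
chosen = select_imfs([smooth, impulsive], signal, k=1)
```

In the paper's pipeline, the selected IMFs would then feed the time/frequency/time-frequency feature extraction and the COA-tuned SVM; those stages are outside the scope of this sketch.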

27 pages, 7948 KB  
Article
Attention-Driven Time-Domain Convolutional Network for Source Separation of Vocal and Accompaniment
by Zhili Zhao, Min Luo, Xiaoman Qiao, Changheng Shao and Rencheng Sun
Electronics 2025, 14(20), 3982; https://doi.org/10.3390/electronics14203982 - 11 Oct 2025
Abstract
Time-domain signal models have been widely applied to single-channel music source separation tasks due to their ability to overcome the limitations of fixed spectral representations and phase information loss. However, the high acoustic similarity and synchronous temporal evolution between vocals and accompaniment make accurate separation challenging for existing time-domain models. These challenges are mainly reflected in two aspects: (1) the lack of a dynamic mechanism to evaluate the contribution of each source during feature fusion, and (2) difficulty in capturing fine-grained temporal details, often resulting in local artifacts in the output. To address these issues, we propose an attention-driven time-domain convolutional network for vocal and accompaniment source separation. Specifically, we design an embedding attention module to perform adaptive source weighting, enabling the network to emphasize components more relevant to the target mask during training. In addition, an efficient convolutional block attention module is developed to enhance local feature extraction. This module integrates an efficient channel attention mechanism based on one-dimensional convolution while preserving spatial attention, thereby improving the ability to learn discriminative features from the target audio. Comprehensive evaluations on public music datasets demonstrate the effectiveness of the proposed model and its significant improvements over existing approaches.
(This article belongs to the Section Artificial Intelligence)
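Efficient channel attention of the kind the abstract describes replaces fully connected squeeze-excitation layers with a single 1-D convolution across the pooled channel descriptor, keeping local cross-channel interaction at negligible cost. The NumPy sketch below shows the general mechanism, not the paper's exact module: the fixed averaging kernel stands in for a learned convolution, and the spatial-attention branch is omitted.

```python
import numpy as np

def eca_gate(features, kernel_size=3):
    """ECA-style channel gating for a (channels, time) feature map:
    global average pool per channel, 1-D convolution across the channel
    descriptor, sigmoid, then rescale each channel by its gate."""
    c, _ = features.shape
    desc = features.mean(axis=1)                    # pool: (c,)
    pad = kernel_size // 2
    padded = np.pad(desc, pad, mode="edge")
    w = np.ones(kernel_size) / kernel_size          # stand-in for learned kernel
    conv = np.array([np.dot(padded[i:i + kernel_size], w) for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))              # sigmoid weights in (0, 1)
    return features * gate[:, None]

x = np.random.default_rng(0).standard_normal((8, 128))  # 8 channels, 128 steps
y = eca_gate(x)
```

Because each gate lies in (0, 1), the output never amplifies a channel; it only attenuates less informative ones, which is the intended squeeze-and-reweight behavior.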
