Search Results (4,240)

Search Parameters:
Keywords = edge information

26 pages, 8183 KB  
Article
MEE-DETR: Multi-Scale Edge-Aware Enhanced Transformer for PCB Defect Detection
by Xiaoyu Ma, Xiaolan Xie and Yuhui Song
Electronics 2026, 15(3), 504; https://doi.org/10.3390/electronics15030504 - 23 Jan 2026
Abstract
Defect inspection of Printed Circuit Boards (PCBs) is essential for maintaining the safety and reliability of electronic products. With the continuous trend toward smaller components and higher integration levels, identifying tiny imperfections on densely packed PCB structures has become increasingly difficult and remains a major challenge for current inspection systems. To tackle this problem, this study proposes the Multi-scale Edge-Aware Enhanced Detection Transformer (MEE-DETR), a deep learning-based object detection method. Building upon the RT-DETR framework, which is grounded in Transformer-based machine learning, the proposed approach systematically introduces enhancements at three levels: backbone feature extraction, feature interaction, and multi-scale feature fusion. First, the proposed Edge-Strengthened Backbone Network (ESBN) constructs multi-scale edge extraction and semantic fusion pathways, effectively strengthening the structural representation of shallow defect edges. Second, the Entanglement Transformer Block (ETB) synergistically integrates frequency self-attention, spatial self-attention, and a frequency–spatial entangled feed-forward network, enabling deep cross-domain information interaction and consistent feature representation. Finally, the proposed Adaptive Enhancement Feature Pyramid Network (AEFPN), incorporating the Adaptive Cross-scale Fusion Module (ACFM) for cross-scale adaptive weighting and the Enhanced Feature Extraction C3 Module (EFEC3) for local nonlinear enhancement, substantially improves detail preservation and semantic balance during feature fusion. Experiments conducted on the PKU-Market-PCB dataset reveal that MEE-DETR delivers notable performance gains. Specifically, Precision, Recall, and mAP50–95 improve by 2.5%, 9.4%, and 4.2%, respectively. In addition, the model’s parameter size is reduced by 40.7%. These results collectively indicate that MEE-DETR achieves excellent detection performance with a lightweight network architecture. Full article
29 pages, 4551 KB  
Article
Graph Fractional Hilbert Transform: Theory and Application
by Daxiang Li and Zhichao Zhang
Fractal Fract. 2026, 10(2), 74; https://doi.org/10.3390/fractalfract10020074 (registering DOI) - 23 Jan 2026
Abstract
The graph Hilbert transform (GHT) is a key tool in constructing analytic signals and extracting envelope and phase information in graph signal processing. However, its utility is limited by confinement to the graph Fourier domain, a fixed phase shift, information loss for real-valued spectral components, and the absence of tunable parameters. The graph fractional Fourier transform introduces domain flexibility through a fractional order parameter α but does not resolve the issues of phase rigidity and information loss. Inspired by the dual-parameter fractional Hilbert transform (FRHT) in classical signal processing, we propose the graph FRHT (GFRHT). The GFRHT incorporates a dual-parameter framework: the fractional order α enables analysis across arbitrary fractional domains, interpolating between vertex and spectral spaces, while the angle parameter β provides adjustable phase shifts and a non-zero real-valued response (cosβ) for real eigenvalues, thereby eliminating information loss. We formally define the GFRHT, establish its core properties, and design a method for graph analytic signal construction, enabling precise envelope extraction and demodulation. Experiments on anomaly identification, speech classification and edge detection demonstrate that GFRHT outperforms GHT, offering greater flexibility and superior performance in graph signal processing. Full article
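To make the spectral mechanism above concrete, the following toy sketch works in the ordinary graph Fourier domain (i.e., with the fractional order effectively fixed) and applies a β-parameterised response with a cos β real part and a Hilbert-like sin β·sgn(λ) quadrature part to a graph signal before extracting its envelope. The graph, the exact form of the response, and the envelope step are illustrative assumptions, not the paper's GFRHT definition.

import numpy as np
import networkx as nx

# Toy graph and real-valued graph signal
G = nx.path_graph(8)
L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals, U = np.linalg.eigh(L)              # graph Fourier basis (Laplacian eigenvectors)
x = np.sin(np.linspace(0, 2 * np.pi, 8))    # graph signal on the vertices

beta = np.pi / 3                            # angle parameter (assumed role: adjustable phase shift)
x_hat = U.T @ x                             # graph Fourier transform
# Assumed beta-parameterised response: cos(beta) keeps a real-valued component,
# sin(beta) injects a 90-degree-shifted (Hilbert-like) part; sgn(eigenvalue)
# mimics the classical sgn(frequency) factor.
response = np.cos(beta) - 1j * np.sin(beta) * np.sign(eigvals)
analytic = U @ (response * x_hat)           # back to the vertex domain
envelope = np.abs(analytic)                 # envelope extraction, as described in the abstract
print(envelope)
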
22 pages, 586 KB  
Article
Onco-Hem Connectome—Network-Based Phenotyping of Polypharmacy and Drug–Drug Interactions in Onco-Hematological Inpatients
by Sabina-Oana Vasii, Daiana Colibășanu, Florina-Diana Goldiș, Sebastian-Mihai Ardelean, Mihai Udrescu, Dan Iliescu, Daniel-Claudiu Malița, Ioana Ioniță and Lucreția Udrescu
Pharmaceutics 2026, 18(2), 146; https://doi.org/10.3390/pharmaceutics18020146 - 23 Jan 2026
Abstract
We introduce the Onco-Hem Connectome (OHC), a patient similarity network (PSN) designed to organize real-world hemato-oncology inpatients by exploratory phenotypes with potential clinical utility. Background: Polypharmacy and drug–drug interactions (DDIs) are pervasive in hemato-oncology and vary with comorbidity and treatment intensity. Methods: We retrospectively analyzed a 2023 single-center cohort of 298 patients (1158 hospital episodes). Standardized feature vectors combined demographics, comorbidity (Charlson, Elixhauser), comorbidity polypharmacy score (CPS), aggregate DDI severity score (ADSS), diagnoses, and drug exposures. Cosine similarity defined edges (threshold ≥ 0.6) to build an undirected PSN; communities were detected with modularity-based clustering and profiled by drugs, diagnosis codes, and canonical chemotherapy regimens. Results: The OHC comprised 295 nodes and 4179 edges (density 0.096, modularity Q = 0.433), yielding five communities. Communities differed in comorbidity burden (Kruskal–Wallis ε2: Charlson 0.428, Elixhauser 0.650, age 0.125, all FDR-adjusted p < 0.001) but not in utilization (LOS, episodes) after FDR (ε2 ≈ 0.006–0.010). Drug enrichment (e.g., enoxaparin Δ = +0.13 in Community 2; vinblastine Δ = +0.09 in Community 3) and principal diagnoses (e.g., C90.0 23%, C91.1 15%, C83.3 15% in Community 1) supported distinct clinical phenotypes. Robustness analyses showed block-equalized features preserved communities (ARI 0.946; NMI 0.941). Community drug signatures and regimen signals aligned with diagnosis patterns, reflecting the integration of resource-use variables in the feature design. Conclusions: The Onco-Hem Connectome yields interpretable, phenotype-level insights that can inform supportive care bundles, DDI-aware prescribing, and stewardship, and it provides a foundation for phenotype-specific risk models (e.g., prolonged stay, infection, high-DDI episodes) in hemato-oncology. Full article
(This article belongs to the Special Issue Drug–Drug Interactions—New Perspectives)
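As a rough illustration of the network-construction step summarised above, the sketch below builds a small patient similarity network from synthetic standardized feature vectors using cosine similarity, the 0.6 edge threshold quoted in the abstract, and a modularity-based community routine from networkx; the real feature design, weighting, and clustering details may differ.

import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
centers = rng.normal(scale=3.0, size=(3, 12))           # three synthetic "phenotype" centers
labels = rng.integers(0, 3, size=30)
X = centers[labels] + rng.normal(size=(30, 12))         # 30 synthetic patients, 12 features
X = (X - X.mean(axis=0)) / X.std(axis=0)                # standardized feature vectors

Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
S = Xn @ Xn.T                                           # cosine similarity matrix

G = nx.Graph()
G.add_nodes_from(range(len(X)))
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        if S[i, j] >= 0.6:                              # edge threshold from the abstract
            G.add_edge(i, j, weight=float(S[i, j]))

communities = greedy_modularity_communities(G)          # modularity-based clustering (stand-in)
print(G.number_of_nodes(), G.number_of_edges(), len(communities))
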
26 pages, 4329 KB  
Review
Advanced Sensor Technologies in Cutting Applications: A Review
by Motaz Hassan, Roan Kirwin, Chandra Sekhar Rakurty and Ajay Mahajan
Sensors 2026, 26(3), 762; https://doi.org/10.3390/s26030762 (registering DOI) - 23 Jan 2026
Abstract
Advances in sensing technologies are increasingly transforming cutting operations by enabling data-driven condition monitoring, predictive maintenance, and process optimization. This review surveys recent developments in sensing modalities for cutting systems, including vibration sensors, acoustic emission sensors, optical and vision-based systems, eddy-current sensors, force sensors, and emerging hybrid/multi-modal sensing frameworks. Each sensing approach offers unique advantages in capturing mechanical, acoustic, geometric, or electromagnetic signatures related to tool wear, process instability, and fault development, while also showing modality-specific limitations such as noise sensitivity, environmental robustness, and integration complexity. Recent trends show a growing shift toward hybrid and multi-modal sensor fusion, where data from multiple sensors are combined using advanced data analytics and machine learning to improve diagnostic accuracy and reliability under changing cutting conditions. The review also discusses how artificial intelligence, Internet of Things connectivity, and edge computing enable scalable, real-time monitoring solutions, along with the challenges related to data needs, computational costs, and system integration. Future directions highlight the importance of robust fusion architectures, physics-informed and explainable models, digital twin integration, and cost-effective sensor deployment to accelerate adoption across various manufacturing environments. Overall, these advancements position advanced sensing and hybrid monitoring strategies as key drivers of intelligent, Industry 4.0-oriented cutting processes. Full article
55 pages, 3089 KB  
Review
A Survey on Green Wireless Sensing: Energy-Efficient Sensing via WiFi CSI and Lightweight Learning
by Rod Koo, Xihao Liang, Deepak Mishra and Aruna Seneviratne
Energies 2026, 19(2), 573; https://doi.org/10.3390/en19020573 - 22 Jan 2026
Abstract
Conventional sensing expends energy at three stages: powering dedicated sensors, transmitting measurements, and executing computationally intensive inference. Wireless sensing re-purposes WiFi channel state information (CSI) inherent in every packet, eliminating extra sensors and uplink traffic, though reliance on deep neural networks (DNNs), often trained and run on graphics processing units (GPUs), can negate these gains. This review highlights two core energy efficiency levers in CSI-based wireless sensing. First, ambient CSI harvesting cuts power use by an order of magnitude compared to radar and active Internet of Things (IoT) sensors. Second, integrated sensing and communication (ISAC) embeds sensing functionality into existing WiFi links, thereby reducing device count, battery waste, and carbon impact. We review conventional handcrafted and accuracy-first methods to set the stage for surveying green learning strategies and lightweight learning techniques, including compact hybrid neural architectures, pruning, knowledge distillation, quantisation, and semi-supervised training that preserve accuracy while reducing model size and memory footprint. We also discuss hardware co-design from low-power microcontrollers to edge application-specific integrated circuits (ASICs) and WiFi firmware extensions that align computation with platform constraints. Finally, we identify open challenges in domain-robust compression, multi-antenna calibration, energy-proportionate model scaling, and standardised joules-per-inference metrics. Our aim is a practical battery-friendly wireless sensing stack ready for smart home and 6G era deployments. Full article
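Of the lightweight-learning techniques listed above, magnitude pruning is the easiest to picture in isolation; the sketch below zeroes the smallest-magnitude weights of a toy dense layer and reports the surviving fraction. It illustrates the general idea only, not any specific method from the surveyed literature.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))             # toy dense-layer weight matrix

def magnitude_prune(weights, sparsity=0.8):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

W_pruned, mask = magnitude_prune(W, sparsity=0.8)
print("fraction of weights kept:", mask.mean())   # roughly 0.2 of the weights survive
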
23 pages, 6077 KB  
Article
Patient Similarity Networks for Irritable Bowel Syndrome: Revisiting Brain Morphometry and Cognitive Features
by Arvid Lundervold, Julie Billing, Birgitte Berentsen and Astri J. Lundervold
Diagnostics 2026, 16(2), 357; https://doi.org/10.3390/diagnostics16020357 - 22 Jan 2026
Abstract
Background: Irritable Bowel Syndrome (IBS) is a heterogeneous gastrointestinal disorder characterized by complex brain–gut interactions. Patient Similarity Networks (PSNs) offer a novel approach for exploring this heterogeneity and identifying clinically relevant patient subgroups. Methods: We analyzed data from 78 participants (49 IBS patients and 29 healthy controls) with 36 brain morphometric measures (FreeSurfer v7.4.1) and 6 measures of cognitive functions (5 RBANS domain indices plus a Total Scale score). PSNs were constructed using multiple similarity measures (Euclidean, cosine, correlation-based) with Gaussian kernel transformation. We performed community detection (Louvain algorithm), centrality analyses, feature importance analysis, and correlations with symptom severity. Statistical validation included bootstrap confidence intervals and permutation testing. Results: The PSN comprised 78 nodes connected by 469 edges, with four communities detected. These communities did not significantly correspond to diagnostic groups (Adjusted Rand Index = 0.011, permutation p=0.212), indicating IBS patients and healthy controls were intermixed. However, each community exhibited distinct neurobiological profiles: Community 1 (oldest, preserved cognition) showed elevated intracranial volume but reduced subcortical gray matter; Community 2 (youngest, most severe IBS symptoms) had elevated cortical volumes but reduced white matter; Community 3 (most balanced IBS/HC ratio, mildest IBS symptoms) showed the largest subcortical volumes; Community 4 (lowest cognitive performance across multiple domains) displayed the lowest RBANS scores alongside high IBS prevalence. Top network features included subcortical structures, corpus callosum, and cognitive indices (Language, Attention). Conclusions: PSN identifies brain–cognition communities that cut across diagnostic categories, with distinct feature profiles suggesting different hypothesis-generating neurobiological patterns within IBS that may inform personalized treatment strategies. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
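For the similarity step mentioned above, a common recipe matching the abstract's description is to convert pairwise distances into edge weights with a Gaussian kernel; the snippet below does this for Euclidean distances on synthetic z-scored features, with the median-distance bandwidth being an assumption rather than the paper's choice.

import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 42))                  # 20 synthetic participants, 42 features
X = (X - X.mean(axis=0)) / X.std(axis=0)       # z-score each feature

D = squareform(pdist(X, metric="euclidean"))   # pairwise Euclidean distances
sigma = np.median(D[D > 0])                    # assumed bandwidth: median heuristic
W = np.exp(-(D ** 2) / (2 * sigma ** 2))       # Gaussian-kernel similarities in (0, 1]
np.fill_diagonal(W, 0.0)                       # no self-loops in the PSN
print(W.shape, W.min(), W.max())
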
43 pages, 6577 KB  
Review
Biopolymers and Biocomposites for Additive Manufacturing of Optical Frames
by Beatriz Carvalho, Fátima Santos, Juliana Araújo, Bruna Santos, João Alhada Lourenço, Pedro Ramos and Telma Encarnação
Macromol 2026, 6(1), 8; https://doi.org/10.3390/macromol6010008 (registering DOI) - 21 Jan 2026
Abstract
Optical frames are used worldwide to correct visual impairments, protect from UV damage, or simply for fashion purposes. Optical frames are often made of poorly biodegradable and fossil-based materials, with designs not targeted to everyone’s tastes and requirements. Additive manufacturing processes allow personalisation of optical frames and the use of new sustainable biomaterials to replace fossil-based ones. This comprehensive review combines an extensive survey of the scientific literature, market trends, and information from other relevant sources, analysing the biomaterials currently used in additive manufacturing and identifying biomaterials (biopolymers, natural fibres, and natural additives) with the potential to be developed into biocomposites for printing optical frames. Requirements for optical devices were carefully considered, such as standards, regulations, and demands for manufacturing materials. By comparing with fossil-based analogues and by discussing the chemical, physical, and mechanical properties of each biomaterial, it was found that combining various materials in biocomposites is promising for achieving the desirable properties for printing optical frames. The advantages of the various techniques of this cutting-edge technology were also analysed and discussed for optical industry applications. This study aims to answer the central research question: which biopolymers and biocomposite constituents (natural fibres, plasticisers, and additives) have the ideal mechanical, thermal, physical, and chemical properties for combining into a biomaterial suitable for producing sustainable, customisable, and inclusive optical frames on demand, using additive manufacturing techniques. Full article
22 pages, 9985 KB  
Article
A Comparative Analysis of Multi-Spectral and RGB-Acquired UAV Data for Cropland Mapping in Smallholder Farms
by Evania Chetty, Maqsooda Mahomed and Shaeden Gokool
Drones 2026, 10(1), 72; https://doi.org/10.3390/drones10010072 - 21 Jan 2026
Abstract
Accurate cropland classification within smallholder farming systems is essential for effective land management, efficient resource allocation, and informed agricultural decision-making. This study evaluates cropland classification performance using Red, Green, Blue (RGB) and multi-spectral (blue, green, red, red-edge, near-infrared) unmanned aerial vehicle (UAV) imagery. Both datasets were derived from imagery acquired using a MicaSense Altum sensor mounted on a DJI Matrice 300 UAV. Cropland classification was performed using machine learning algorithms implemented within the Google Earth Engine (GEE) platform, applying both a non-binary classification of five land cover classes and a binary classification within a probabilistic framework to distinguish cropland from non-cropland areas. The results indicate that multi-spectral imagery achieved higher classification accuracy than RGB imagery for non-binary classification, with overall accuracies of 75% and 68%, respectively. For binary cropland classification, RGB imagery achieved an area under the receiver operating characteristic curve (AUC–ROC) of 0.75, compared to 0.77 for multi-spectral imagery. These findings suggest that, while multi-spectral data provides improved classification performance, RGB imagery can achieve comparable accuracy for fundamental cropland delineation. This study contributes baseline evidence on the relative performance of RGB and multi-spectral UAV imagery for cropland mapping in heterogeneous smallholder farming landscapes and supports further investigation of RGB-based approaches in resource-constrained agricultural contexts. Full article
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture—2nd Edition)
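The binary cropland/non-cropland evaluation described above reduces to training a per-pixel classifier and scoring it with AUC-ROC; the sketch below runs that loop on synthetic band values with a random forest in scikit-learn. The band set, the label rule, the classifier, and the split are placeholders, not the study's Google Earth Engine workflow.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic per-pixel reflectance: blue, green, red, red-edge, near-infrared
X = rng.uniform(0, 1, size=(n, 5))
# Toy label rule: "cropland" pixels tend to have high red-edge and NIR values
y = ((X[:, 3] + X[:, 4]) / 2 + rng.normal(0, 0.1, n) > 0.55).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]
print("AUC-ROC:", roc_auc_score(y_test, probs))
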
33 pages, 2648 KB  
Article
TABS-Net: A Temporal Spectral Attentive Block with Space–Time Fusion Network for Robust Cross-Year Crop Mapping
by Xin Zhou, Yuancheng Huang, Qian Shen, Yue Yao, Qingke Wen, Fengjiang Xi and Chendong Ma
Remote Sens. 2026, 18(2), 365; https://doi.org/10.3390/rs18020365 - 21 Jan 2026
Abstract
Accurate and stable mapping of crop types is fundamental to agricultural monitoring and food security. However, inter-annual phenological shifts driven by variations in air temperature, precipitation, and sowing dates introduce systematic changes in the spectral distributions associated with the same day of year (DOY). As a result, the “date–spectrum–class” mapping learned during training can become misaligned when applied to a new year, leading to increased misclassification and unstable performance. To tackle this problem, we develop TABS-Net (Temporal–Spectral Attentive Block with Space–Time Fusion Network). The core contributions of this study are summarized as follows: (1) we propose an end-to-end 3D CNN framework to jointly model spatial, temporal, and spectral information; (2) we design and embed CBAM3D modules into the backbone to emphasize informative bands and key time windows; and (3) we introduce DOY positional encoding and temporal jitter during training to explicitly align seasonal timing and simulate phenological shifts, thereby enhancing cross-year robustness. We conduct a comprehensive evaluation on a Cropland Data Layer (CDL) subset. Within a single year, TABS-Net delivers higher and more balanced overall accuracy, Macro-F1, and mIoU than strong baselines, including 2D stacking, 1D temporal convolution/LSTM, and transformer models. In cross-year experiments, we quantify temporal stability using inter-annual robustness (IAR); with both DOY encoding and temporal jitter enabled, the model attains IAR values close to one for major crop classes, effectively compensating for phenological misalignment and inter-annual variability. Ablation studies show that DOY encoding and temporal jitter are the primary contributors to improved inter-annual consistency, while CBAM3D reduces crop–crop and crop–background confusion by focusing on discriminative spectral regions such as the red-edge and near-infrared bands and on key growth stages. Overall, TABS-Net combines higher accuracy with stronger robustness across multiple years, offering a scalable and transferable solution for large-area, multi-year remote sensing crop mapping. Full article
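Two of the cross-year ingredients above, DOY positional encoding and temporal jitter, can be prototyped independently of the full network; the sketch below encodes acquisition dates with one-year-period sinusoids and randomly shifts them as a training-time augmentation. The encoding dimension and the jitter range are illustrative assumptions.

import numpy as np

def doy_encoding(doy, dim=8):
    """Sinusoidal positional encoding with a one-year period for a day-of-year value."""
    freqs = 2 * np.pi * np.arange(1, dim // 2 + 1) / 365.25
    return np.concatenate([np.sin(freqs * doy), np.cos(freqs * doy)])

def temporal_jitter(doys, max_shift=10, rng=None):
    """Randomly shift observation dates by up to +/- max_shift days (training only)."""
    rng = rng or np.random.default_rng()
    return (np.asarray(doys) + rng.integers(-max_shift, max_shift + 1, len(doys))) % 365

doys = [120, 150, 180, 210, 240]                  # toy acquisition dates for one pixel
jittered = temporal_jitter(doys, rng=np.random.default_rng(0))
encodings = np.stack([doy_encoding(d) for d in jittered])
print(jittered, encodings.shape)                  # one 8-dim encoding per time step
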
18 pages, 635 KB  
Article
A Federated Deep Learning Framework for Sleep-Stage Monitoring Using the ISRUC-Sleep Dataset
by Alba Amato
Appl. Sci. 2026, 16(2), 1073; https://doi.org/10.3390/app16021073 - 21 Jan 2026
Abstract
Automatic sleep-stage classification is a key component of long-term sleep monitoring and digital health applications. Although deep learning models trained on centralized datasets have achieved strong performance, their deployment in real-world healthcare settings is constrained by privacy, data-governance, and regulatory requirements. Federated learning (FL) addresses these issues by enabling decentralized training in which raw data remain local and only model parameters are exchanged; however, its effectiveness under realistic physiological heterogeneity remains insufficiently understood. In this work, we investigate a subject-level federated deep learning framework for sleep-stage classification using polysomnography data from the ISRUC-Sleep dataset. We adopt a realistic one subject = one client setting spanning three clinically distinct subgroups and evaluate a lightweight one-dimensional convolutional neural network (1D-CNN) under four training regimes: a centralized baseline and three federated strategies (FedAvg, FedProx, and FedBN), all sharing identical architecture and preprocessing. The centralized model, trained on a cohort with regular sleep architecture, achieves stable performance (accuracy 69.65%, macro-F1 0.6537). In contrast, naive FedAvg fails to converge under subject-level non-IID data (accuracy 14.21%, macro-F1 0.0601), with minority stages such as N1 and REM largely lost. FedProx yields only marginal improvement, while FedBN—by preserving client-specific batch-normalization statistics—achieves the best federated performance (accuracy 26.04%, macro-F1 0.1732) and greater stability across clients. These findings indicate that the main limitation of FL for sleep staging lies in physiological heterogeneity rather than model capacity, highlighting the need for heterogeneity-aware strategies in privacy-preserving sleep analytics. Full article
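The gap between FedAvg and FedBN reported above comes down to which parameters are aggregated across clients; a minimal sketch with per-client parameter dictionaries is shown below, where batch-normalization entries are identified by a naming convention. Equal client weighting and the "bn" key naming are assumptions for illustration.

import numpy as np

def federated_average(client_params, skip_bn=False):
    """Average per-client parameter dicts; with skip_bn=True (FedBN-style),
    batch-norm parameters stay local and are not aggregated."""
    keys = client_params[0].keys()
    global_params = {}
    for k in keys:
        if skip_bn and "bn" in k:        # assumed naming convention for BN parameters
            continue                     # FedBN: keep client-specific BN statistics
        global_params[k] = np.mean([p[k] for p in client_params], axis=0)
    return global_params

clients = [
    {"conv.weight": np.random.randn(4, 1, 3), "bn.scale": np.random.rand(4)}
    for _ in range(3)
]
print(sorted(federated_average(clients).keys()))                 # FedAvg: all keys averaged
print(sorted(federated_average(clients, skip_bn=True).keys()))   # FedBN: BN keys excluded
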
24 pages, 13473 KB  
Article
Automatic Threshold Selection Guided by Maximizing Homologous Isomeric Similarity Under Unified Transformation Toward Unimodal Distribution
by Yaobin Zou, Wenli Yu and Qingqing Huang
Electronics 2026, 15(2), 451; https://doi.org/10.3390/electronics15020451 - 20 Jan 2026
Abstract
Traditional thresholding methods are often tailored to specific histogram patterns, making it difficult to achieve robust segmentation across diverse images exhibiting non-modal, unimodal, bimodal, or multimodal distributions. To address this limitation, this paper proposes an automatic thresholding method guided by maximizing homologous isomeric similarity under a unified transformation toward unimodal distribution. The primary objective is to establish a generalized selection criterion that functions independently of the input histogram’s pattern. The methodology employs bilateral filtering, non-maximum suppression, and Sobel operators to transform diverse histogram patterns into a unified, right-skewed unimodal distribution. Subsequently, the optimal threshold is determined by maximizing the normalized Renyi mutual information between the transformed edge image and binary contour images extracted at varying levels. Experimental validation on both synthetic and real-world images demonstrates that the proposed method offers greater adaptability and higher accuracy compared to representative thresholding and non-thresholding techniques. The results show a significant reduction in misclassification errors and improved correlation metrics, confirming the method’s effectiveness as a unified thresholding solution for images with non-modal, unimodal, bimodal, or multimodal histogram patterns. Full article
(This article belongs to the Special Issue Image Processing and Pattern Recognition)
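The selection criterion above can be viewed as a one-dimensional search: for each candidate threshold, compare the transformed edge image with the binary contour obtained at that threshold and keep the threshold that maximizes the agreement. The sketch below uses Sobel gradient magnitude and ordinary Shannon mutual information from a joint histogram as a stand-in for the paper's normalized Renyi mutual information and its full preprocessing pipeline.

import numpy as np
from scipy import ndimage

def mutual_information(a, b, bins=32):
    """Shannon mutual information from a joint histogram (stand-in criterion)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
image = ndimage.gaussian_filter(rng.random((64, 64)), 3)             # toy grayscale image
edges = np.hypot(ndimage.sobel(image, 0), ndimage.sobel(image, 1))   # Sobel edge magnitude

scores = {t: mutual_information(edges, (image >= t).astype(float))
          for t in np.linspace(image.min(), image.max(), 64)[1:-1]}
best_threshold = max(scores, key=scores.get)                         # maximize the agreement score
print("selected threshold:", best_threshold)
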
17 pages, 5027 KB  
Article
Symmetry-Enhanced YOLOv8s Algorithm for Small-Target Detection in UAV Aerial Photography
by Zhiyi Zhou, Chengyun Wei, Lubin Wang and Qiang Yu
Symmetry 2026, 18(1), 197; https://doi.org/10.3390/sym18010197 - 20 Jan 2026
Abstract
In order to solve the problems of small-target detection in UAV aerial photography, such as small scale, blurred features, and complex background interference, this article proposes the ACS-YOLOv8s method to optimize the YOLOv8s network: notably, most small man-made targets in UAV aerial scenes (e.g., small vehicles, micro-drones) inherently possess symmetry, a key geometric attribute that can significantly enhance the discriminability of blurred or incomplete target features, and thus symmetry-aware mechanisms are integrated into the improved modules described below to further boost detection performance. The backbone network introduces an adaptive feature enhancement module, in which the edge and detail representation of small targets is enhanced by dynamically modulating the receptive field with deformable attention while also capturing symmetric contour features to strengthen the perception of target geometric structures; a cascaded multi-receptive field module is embedded at the end of the trunk to integrate multi-scale features in a hierarchical manner, taking into account both expressive ability and computational efficiency, with a focus on fusing symmetric multi-scale features to optimize feature representation; and the neck is integrated with a spatially adaptive feature modulation network to achieve dynamic weighting of cross-layer features and detail fidelity while also modeling symmetric feature dependencies across channels to reduce the loss of discriminative information. Experimental results based on the VisDrone2019 data set show that ACS-YOLOv8s is superior to the baseline model in precision, recall, and mAP indicators, with mAP50 increased by 2.8% to 41.6% and mAP50:90 increased by 1.9% to 25.0%, verifying its effectiveness and robustness in small-target detection in complex drone aerial-photography scenarios. Full article
(This article belongs to the Section Computer)
26 pages, 55590 KB  
Article
Adaptive Edge-Aware Detection with Lightweight Multi-Scale Fusion
by Xiyu Pan, Kai Xiong and Jianjun Li
Electronics 2026, 15(2), 449; https://doi.org/10.3390/electronics15020449 - 20 Jan 2026
Abstract
In object detection, boundary blurring caused by occlusion and background interference often hinders effective feature extraction. To address this challenge, we propose Edge Aware-YOLO, a novel framework designed to enhance edge awareness and efficient feature fusion. Our method integrates three key contributions. First, the Variable Sobel Compact Inverted Block (VSCIB) employs convolution kernels with adjustable orientation and size, enabling robust multi-scale edge adaptation. Second, the Spatial Pyramid Shared Convolution (SPSC) replaces standard pooling with shared dilated convolutions, minimizing detail loss during feature reconstruction. Finally, the Efficient Downsampling Convolution (EDC) utilizes a dual-branch architecture to balance channel compression with semantic preservation. Extensive evaluations on public datasets demonstrate that Edge Aware-YOLO significantly outperforms state-of-the-art models. On MS COCO, it achieves 56.3% mAP50 and 40.5% mAP50–95 (gains of 1.5% and 1.0%) with only 2.4M parameters and 5.8 GFLOPs, surpassing advanced models like YOLOv11. Full article
(This article belongs to the Topic Intelligent Image Processing Technology)
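The edge-aware ingredient behind the VSCIB above can be pictured as a convolution with fixed Sobel kernels; the PyTorch sketch below computes per-channel horizontal and vertical gradients and an edge-magnitude map for a feature map. The adjustable orientation and kernel size of the actual block are not reproduced here.

import torch
import torch.nn.functional as F

# Fixed 3x3 Sobel kernels for horizontal and vertical gradients
sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
sobel_y = sobel_x.t()
kernels = torch.stack([sobel_x, sobel_y]).unsqueeze(1)   # shape (2, 1, 3, 3)

def sobel_edges(feat):
    """Per-channel Sobel gradients for a (N, C, H, W) feature map."""
    n, c, h, w = feat.shape
    k = kernels.repeat(c, 1, 1, 1)                        # one kernel pair per channel
    g = F.conv2d(feat, k, padding=1, groups=c)            # (N, 2*C, H, W)
    gx, gy = g[:, 0::2], g[:, 1::2]
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)           # edge magnitude per channel

x = torch.rand(1, 8, 32, 32)                              # toy feature map
print(sobel_edges(x).shape)                               # torch.Size([1, 8, 32, 32])
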
28 pages, 8014 KB  
Article
YOLO-UMS: Multi-Scale Feature Fusion Based on YOLO Detector for PCB Surface Defect Detection
by Hong Peng, Wenjie Yang and Baocai Yu
Sensors 2026, 26(2), 689; https://doi.org/10.3390/s26020689 - 20 Jan 2026
Abstract
Printed circuit boards (PCBs) are critical in the electronics industry. As PCB layouts grow increasingly complex, defect detection processes often encounter challenges such as low image contrast, uneven brightness, minute defect sizes, and irregular shapes, making it difficult to achieve rapid and accurate automated inspection. To address these challenges, this paper proposes a novel object detector, YOLO-UMS, designed to enhance the accuracy and speed of PCB surface defect detection. First, a lightweight plug-and-play Unified Multi-Scale Feature Fusion Pyramid Network (UMSFPN) is proposed to process and fuse multi-scale information across different resolution layers. The UMSFPN uses a Cross-Stage Partial Multi-Scale Module (CSPMS) and an optimized fusion strategy. This approach balances the integration of fine-grained edge information from shallow layers and coarse-grained semantic details from deep layers. Second, the paper introduces a lightweight RG-ELAN module, based on the ELAN network, to enhance feature extraction for small targets in complex scenes. The RG-ELAN module uses low-cost operations to generate redundant feature maps and reduce computational complexity. Finally, the Adaptive Interaction Feature Integration (AIFI) module enriches high-level features by eliminating redundant interactions among shallow-layer features. The channel-priority convolutional attention module (CPCA), deployed in the detection head, strengthens the expressive power of small target features. The experimental results show that the new UMSFPN neck can help improve the AP50 by 3.1% and AP by 2% on the self-collected dataset PCB-M, which is better than the original PAFPN neck. Meanwhile, UMSFPN achieves excellent results across different detectors and datasets, verifying its broad applicability. Without pre-training weights, YOLO-UMS achieves an 84% AP50 on the PCB-M dataset, which is a 6.4% improvement over the baseline YOLO11. Comparing results with existing target detection algorithms shows that the algorithm exhibits good performance in terms of detection accuracy. It provides a feasible solution for efficient and accurate detection of PCB surface defects in the industry. Full article
(This article belongs to the Section Physical Sensors)
24 pages, 69667 KB  
Article
YOLO-ELS: A Lightweight Cherry Tomato Maturity Detection Algorithm
by Zhimin Tong, Yu Zhou, Changhao Li, Changqing Cai and Lihong Rong
Appl. Sci. 2026, 16(2), 1043; https://doi.org/10.3390/app16021043 - 20 Jan 2026
Abstract
Within the domain of intelligent picking robotics, fruit recognition and positioning are essential. Challenging conditions such as varying light, occlusion, and limited edge-computing power compromise fruit maturity detection. To tackle these issues, this paper proposes a lightweight algorithm YOLO-ELS based on YOLOv8n. Specifically, we reconstruct the backbone by replacing the bottlenecks in the C2f structure with Edge-Information-Enhanced Modules (EIEM) to prioritize morphological cues and filter background redundancy. Furthermore, a Large Separable Kernel Attention (LSKA) mechanism is integrated into the SPPF layer to expand the effective receptive field for multi-scale targets. To mitigate occlusion-induced errors, a Spatially Enhanced Attention Module (SEAM) is incorporated into the decoupled detection head to enhance feature responses in obscured regions. Finally, the Inner-GIoU loss is adopted to refine bounding box regression and accelerate convergence. Experimental results demonstrate that compared to the YOLOv8n baseline, the proposed YOLO-ELS achieves a 14.8% reduction in GFLOPs and a 2.3% decrease in parameters, while attaining a precision, recall, and mAP@50% of 92.7%, 83.9%, and 92.0%, respectively. When compared with mainstream models such as DETR, Faster-RCNN, SSD, TOOD, YOLOv5s, and YOLO11n, the mAP@50% is improved by 7.0%, 4.7%, 11.4%, 8.6%, 3.1%, and 3.2%. Deployment tests on the NVIDIA Jetson Orin Nano Super edge platform yield an inference latency of 25.2 ms and a detection speed of 28.2 FPS, successfully meeting the real-time operational requirements of automated harvesting systems. These findings confirm that YOLO-ELS effectively balances high detection accuracy with lightweight architecture, providing a robust technical foundation for intelligent fruit picking in resource-constrained greenhouse environments. Full article
(This article belongs to the Section Agricultural Science and Technology)