Search Results (1,038)

Search Parameters:
Keywords = lightweight feature map

17 pages, 16767 KB  
Article
AeroLight: A Lightweight Architecture with Dynamic Feature Fusion for High-Fidelity Small-Target Detection in Aerial Imagery
by Hao Qiu, Xiaoyan Meng, Yunjie Zhao, Liang Yu and Shuai Yin
Sensors 2025, 25(17), 5369; https://doi.org/10.3390/s25175369 - 30 Aug 2025
Abstract
Small-target detection in Unmanned Aerial Vehicle (UAV) aerial images remains a significant and unresolved challenge in aerial image analysis, hampered by low target resolution, dense object clustering, and complex, cluttered backgrounds. To address these problems, we present AeroLight, a novel and efficient detection architecture that achieves high-fidelity performance in resource-constrained environments. AeroLight is built upon three key innovations. First, we have optimized the feature pyramid at the architectural level by integrating a high-resolution head specifically designed for minute object detection. This design enhances sensitivity to fine-grained spatial details while streamlining redundant and computationally expensive network layers. Second, a Dynamic Feature Fusion (DFF) module is proposed to adaptively recalibrate and merge multi-scale feature maps, mitigating information loss during integration and strengthening object representation across diverse scales. Finally, we enhance the localization precision of irregular-shaped objects by refining bounding box regression using a Shape-IoU loss function. AeroLight is shown to improve mAP50 and mAP50-95 by 7.5% and 3.3%, respectively, on the VisDrone2019 dataset, while reducing the parameter count by 28.8% when compared with the baseline model. Further validation on the RSOD dataset and Huaxing Farm Drone dataset confirms its superior performance and generalization capabilities. AeroLight provides a powerful and efficient solution for real-world UAV applications, setting a new standard for lightweight, high-precision object recognition in aerial imaging scenarios.
(This article belongs to the Section Remote Sensors)
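The abstract does not spell out the DFF equations; as a rough illustration of what "adaptively recalibrate and merge multi-scale feature maps" can mean, here is a minimal NumPy sketch that upsamples the coarser level and blends all levels with softmax-normalized weights. The nearest-neighbour upsampling and softmax gating are assumptions for illustration, not the paper's actual module.

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def dynamic_feature_fusion(feats, logits):
    """Fuse multi-scale feature maps with softmax-normalized weights.

    feats  : list of (C, H_i, W_i) square maps, each H_i dividing the largest H
    logits : per-scale scalars; softmax turns them into fusion weights
    """
    target_h = max(f.shape[1] for f in feats)
    resized = [upsample_nearest(f, target_h // f.shape[1]) for f in feats]
    w = np.exp(logits - np.max(logits))
    w = w / w.sum()                       # softmax weights, sum to 1
    return sum(wi * fi for wi, fi in zip(w, resized))

p3 = np.ones((8, 16, 16))     # high-resolution pyramid level
p4 = np.ones((8, 8, 8)) * 2   # coarser pyramid level
fused = dynamic_feature_fusion([p3, p4], np.array([0.0, 0.0]))
# equal logits -> equal weights -> fused value (1 + 2) / 2 = 1.5 everywhere
```

In a trained network the logits would themselves be predicted from the input, which is what makes the fusion "dynamic" rather than fixed.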

24 pages, 21436 KB  
Article
ESG-YOLO: An Efficient Object Detection Algorithm for Transplant Quality Assessment of Field-Grown Tomato Seedlings Based on YOLOv8n
by Xinhui Wu, Zhenfa Dong, Can Wang, Ziyang Zhu, Yanxi Guo and Shuhe Zheng
Agronomy 2025, 15(9), 2088; https://doi.org/10.3390/agronomy15092088 - 29 Aug 2025
Abstract
Intelligent detection of tomato seedling transplant quality represents a core technology for advancing agricultural automation. However, in practical applications, existing algorithms still face numerous technical challenges, particularly with prominent issues of false detections and missed detections during recognition. To address these challenges, we developed the ESG-YOLO object detection model and successfully deployed it on edge devices, enabling real-time assessment of tomato seedling transplanting quality. Our methodology integrates three key innovations: First, an EMA (Efficient Multi-scale Attention) module is embedded within the YOLOv8 neck network to suppress interference from redundant information and enhance morphological focus on seedlings. Second, the feature fusion network is reconstructed using a GSConv-based Slim-neck architecture, achieving a lightweight neck structure compatible with edge deployment. Finally, optimization employs the GIoU (Generalized Intersection over Union) loss function to precisely localize seedling position and morphology, thereby reducing false detection and missed detection. The experimental results demonstrate that our ESG-YOLO model achieves a mean average precision mAP of 97.4%, surpassing lightweight models including YOLOv3-tiny, YOLOv5n, YOLOv7-tiny, and YOLOv8n in precision, with improvements of 9.3, 7.2, 5.7, and 2.2%, respectively. Notably, for detecting key yield-impacting categories such as “exposed seedlings” and “missed hills”, the average precision (AP) values reach 98.8 and 94.0%, respectively. To validate the model’s effectiveness on edge devices, the ESG-YOLO model was deployed on an NVIDIA Jetson TX2 NX platform, achieving a frame rate of 18.0 FPS for efficient detection of tomato seedling transplanting quality. This model provides technical support for transplanting performance assessment, enabling quality control and enhanced vegetable yield, thus actively contributing to smart agriculture initiatives. 
(This article belongs to the Section Precision and Digital Agriculture)
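The GIoU loss the authors adopt penalizes the empty area of the smallest box enclosing prediction and ground truth, which gives a useful gradient even for non-overlapping boxes. A minimal reference implementation of the GIoU score itself (the regression loss is 1 − GIoU):

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # smallest box C enclosing both inputs
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return iou - (c_area - union) / c_area

# identical boxes score 1; disjoint boxes score below 0, so the
# loss 1 - giou still ranks "far apart" worse than "nearly touching"
```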

26 pages, 4311 KB  
Article
YOLOv13-Cone-Lite: An Enhanced Algorithm for Traffic Cone Detection in Autonomous Formula Racing Cars
by Zhukai Wang, Senhan Hu, Xuetao Wang, Yu Gao, Wenbo Zhang, Yaoyao Chen, Hai Lin, Tingting Gao, Junshuo Chen, Xianwu Gong, Binyu Wang and Weiyu Liu
Appl. Sci. 2025, 15(17), 9501; https://doi.org/10.3390/app15179501 - 29 Aug 2025
Abstract
This study introduces YOLOv13-Cone-Lite, an enhanced algorithm based on YOLOv13s, designed to meet the stringent accuracy and real-time performance demands for traffic cone detection in autonomous formula racing cars on enclosed tracks. We improved detection accuracy by refining the network architecture. Specifically, the DS-C3k2_UIB module, an advanced iteration of the Universal Inverted Bottleneck (UIB), was integrated into the backbone to boost small object feature extraction. Additionally, the Non-Maximum Suppression (NMS)-free ConeDetect head was engineered to eliminate post-processing delays. To accommodate resource-limited onboard terminals, we minimized superfluous parameters through structural reparameterization pruning and performed 8-bit integer (INT8) quantization using the TensorRT toolkit, resulting in a lightweight model. Experimental findings show that YOLOv13-Cone-Lite achieves an mAP50 of 92.9% (a 4.5% enhancement over the original YOLOv13s), a frame rate of 68 Hz (double the original model's speed), and a parameter size of 8.7 MB (a 52.5% reduction). The proposed algorithm effectively addresses challenges like intricate lighting and long-range detection of small objects and offers the automotive industry a framework to develop more efficient onboard perception systems, while informing object detection in other closed autonomous environments like factory campuses. Notably, the model is optimized for enclosed tracks; its generalization to open traffic requires further validation.
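The abstract mentions TensorRT INT8 quantization; the core idea of symmetric per-tensor quantization can be sketched in a few lines (TensorRT's actual workflow additionally calibrates activation ranges on representative data, which this sketch omits):

```python
import numpy as np

def int8_quantize(x):
    """Symmetric per-tensor INT8 quantization: x ~ scale * q, q in [-127, 127]."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=64).astype(np.float32)   # stand-in for a weight tensor
q, s = int8_quantize(w)
err = np.max(np.abs(int8_dequantize(q, s) - w))
# rounding error is bounded by half a quantization step (scale / 2)
```

Storing `q` instead of `w` cuts memory 4x versus float32, which is where the reported model-size reduction largely comes from.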

15 pages, 1690 KB  
Article
OTB-YOLO: An Enhanced Lightweight YOLO Architecture for UAV-Based Maize Tassel Detection
by Yu Han, Xingya Wang, Luyan Niu, Song Shi, Yingbo Gao, Kuijie Gong, Xia Zhang and Jiye Zheng
Plants 2025, 14(17), 2701; https://doi.org/10.3390/plants14172701 - 29 Aug 2025
Abstract
To tackle the challenges posed by substantial variations in target scale, intricate background interference, and the likelihood of missing small targets in multi-temporal UAV maize tassel imagery, an optimized lightweight detection model derived from YOLOv11 is introduced, named OTB-YOLO. Here, “OTB” is an acronym derived from the initials of the model’s core improved modules: Omni-dimensional dynamic convolution (ODConv), Triplet Attention, and Bi-directional Feature Pyramid Network (BiFPN). This model integrates the PaddlePaddle open-source maize tassel recognition benchmark dataset with the public Multi-Temporal Drone Corn Dataset (MTDC). Traditional convolutional layers are substituted with ODConv to mitigate computational redundancy. A triplet attention module is incorporated to refine feature extraction within the backbone network, while a BiFPN is engineered to enhance accuracy via multi-level feature pyramids and bidirectional information flow. Empirical analysis demonstrates that the enhanced model achieves a precision of 95.6%, recall of 92.1%, and mAP@0.5 of 96.6%, marking improvements of 3.2%, 2.5%, and 3.1%, respectively, over the baseline model. Concurrently, the model’s computational complexity is reduced to 6.0 GFLOPs, rendering it appropriate for deployment on UAV edge computing platforms.
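BiFPN, one of the three modules named above, fuses its inputs with learnable non-negative weights using the "fast normalized fusion" introduced with EfficientDet (ReLU-clamped weights divided by their sum plus a small epsilon, instead of a softmax). A minimal sketch:

```python
import numpy as np

def fast_normalized_fusion(feats, raw_w, eps=1e-4):
    """BiFPN-style fusion: ReLU-clamped learnable weights, normalized to ~1.

    feats : equally shaped feature maps; raw_w : unconstrained per-input scalars.
    """
    w = np.maximum(raw_w, 0.0)        # ReLU keeps every weight non-negative
    w = w / (w.sum() + eps)           # cheap normalization, no exp() needed
    return sum(wi * f for wi, f in zip(w, feats))

a = np.full((4, 4), 1.0)
b = np.full((4, 4), 3.0)
out = fast_normalized_fusion([a, b], np.array([1.0, 1.0]))
# equal raw weights -> roughly 0.5 each -> output ~ 2 everywhere
```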

18 pages, 2738 KB  
Article
TeaAppearanceLiteNet: A Lightweight and Efficient Network for Tea Leaf Appearance Inspection
by Xiaolei Chen, Long Wu, Xu Yang, Lu Xu, Shuyu Chen and Yong Zhang
Appl. Sci. 2025, 15(17), 9461; https://doi.org/10.3390/app15179461 - 28 Aug 2025
Abstract
The inspection of the appearance quality of tea leaves is vital for market classification and value assessment within the tea industry. Nevertheless, many existing detection approaches rely on sophisticated model architectures, which hinder their practical use on devices with limited computational resources. This study proposes a lightweight object detection network, TeaAppearanceLiteNet, tailored for tea leaf appearance analysis. A novel C3k2_PartialConv module is introduced to significantly reduce computational redundancy while maintaining effective feature extraction. The CBMA_MSCA attention mechanism is incorporated to enable the multi-scale modeling of channel attention, enhancing the perception accuracy of features at various scales. By incorporating the Detect_PinwheelShapedConv head, the spatial representation power of the network is significantly improved. In addition, the MPDIoU_ShapeIoU loss is formulated to enhance the correspondence between predicted and ground-truth bounding boxes across multiple dimensions—covering spatial location, geometric shape, and scale—which contributes to a more stable regression and higher detection accuracy. Experimental results demonstrate that, compared to baseline methods, TeaAppearanceLiteNet achieves a 12.27% improvement in accuracy, reaching an mAP@0.5 of 84.06% with an inference speed of 157.81 FPS. The parameter count is only 1.83% of traditional models. The compact and high-efficiency design of TeaAppearanceLiteNet enables its deployment on mobile and edge devices, thereby supporting the digitalization and intelligent upgrading of the tea industry under the framework of smart agriculture.
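The abstract does not define C3k2_PartialConv; assuming it builds on FasterNet-style partial convolution (PConv), the trick is to convolve only a fraction of the channels and pass the rest through untouched, cutting FLOPs roughly by the channel ratio. A naive single-kernel sketch under that assumption:

```python
import numpy as np

def partial_conv(x, kernel, ratio=4):
    """Assumed FasterNet-style PConv: convolve only the first C/ratio channels
    with a shared 3x3 kernel (zero padding); remaining channels pass through."""
    c, h, w = x.shape
    cp = c // ratio
    out = x.copy()                                  # untouched channels kept
    padded = np.pad(x[:cp], ((0, 0), (1, 1), (1, 1)))
    for ch in range(cp):
        for i in range(h):
            for j in range(w):
                out[ch, i, j] = np.sum(padded[ch, i:i+3, j:j+3] * kernel)
    return out

x = np.ones((8, 5, 5))
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
y = partial_conv(x, identity)
# with an identity kernel the convolved channels are also unchanged
```

Only 2 of the 8 channels are convolved here (ratio 4), which is where the "reduce computational redundancy" claim comes from.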

33 pages, 8300 KB  
Article
Farmland Navigation Line Extraction Method Based on RS-LineNet Network and Root Subordination Relationship Optimization
by Yanlei Xu, Zhen Lu, Jian Li, Yuting Zhai, Chao Liu, Xinyu Zhang and Yang Zhou
Agronomy 2025, 15(9), 2069; https://doi.org/10.3390/agronomy15092069 - 28 Aug 2025
Abstract
Navigation line extraction is vital for visual navigation with agricultural machinery. The current methods primarily utilize plant canopy detection frames to extract feature points for navigation line fitting. However, this approach is highly susceptible to environmental changes, causing position instability and reduced extraction accuracy. To address this problem, this study aims to develop a robust navigation line extraction method that overcomes canopy-based feature instability. We propose extracting feature points from root detection frames for navigation line fitting. Compared to canopy points, root feature point positions remain more stable under natural interference and are less prone to fluctuations. A dataset of corn crop row images under multiple growth environments was collected. Based on YOLOv8n (You Only Look Once version 8, nano model), we proposed the RS-LineNet lightweight model and introduced a root subordination relationship filtering algorithm to further improve detection precision. Compared with the YOLOv8n model, RS-LineNet achieves 4.2% higher precision, 16.2% improved recall, and an 11.8% increase in mean average precision (mAP50), while reducing the model weight and parameters to 32% and 23% of the original. Navigation lines extracted under different environments exhibit a 0.8° average angular error, which is 3.1° lower than canopy-based methods. On Jetson TX2, the frame rate exceeds 12 FPS, meeting practical application requirements.
(This article belongs to the Section Precision and Digital Agriculture)
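The final fitting step (root detection centers to a navigation line) is a plain least-squares fit; the abstract does not state the exact fitting method, so this sketch assumes ordinary least squares with the line parameterized as x = a·y + b, which stays well-conditioned for the near-vertical rows seen from a forward-facing camera:

```python
import numpy as np

def fit_navigation_line(points):
    """Fit x = a*y + b through root-centre points by least squares.

    points : iterable of (x, y) pixel coordinates of root detection centres.
    Parameterizing x as a function of y avoids infinite slope for vertical rows.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.stack([y, np.ones_like(y)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
    return a, b

# root boxes centred on a perfectly vertical crop row at x = 100
a, b = fit_navigation_line([(100, 0), (100, 50), (100, 100)])
# a ~ 0 (no lateral drift), b ~ 100 (row position in pixels)
```

The reported 0.8° angular error would then be the angle between this fitted line and a hand-labeled reference line.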

26 pages, 23082 KB  
Article
SPyramidLightNet: A Lightweight Shared Pyramid Network for Efficient Underwater Debris Detection
by Yi Luo and Osama Eljamal
Appl. Sci. 2025, 15(17), 9404; https://doi.org/10.3390/app15179404 - 27 Aug 2025
Abstract
Underwater debris detection plays a crucial role in marine environmental protection. However, existing object detection algorithms generally suffer from excessive model complexity and insufficient detection accuracy, making it difficult to meet the real-time detection requirements in resource-constrained underwater environments. To address this challenge, this paper proposes a novel lightweight object detection network named the Shared Pyramid Lightweight Network (SPyramidLightNet). The network adopts an improved architecture based on YOLOv11 and achieves an optimal balance between detection performance and computational efficiency by integrating three core innovative modules. First, the Split–Merge Attention Block (SMAB) employs a dynamic kernel selection mechanism and split–merge strategy, significantly enhancing feature representation capability through adaptive multi-scale feature fusion. Second, the C3 GroupNorm Detection Head (C3GNHead) introduces a shared convolution mechanism and GroupNorm normalization strategy, substantially reducing the computational complexity of the detection head while maintaining detection accuracy. Finally, the Shared Pyramid Convolution (SPyramidConv) replaces traditional pooling operations with a parameter-sharing multi-dilation-rate convolution architecture, achieving more refined and efficient multi-scale feature aggregation. Extensive experiments on underwater debris datasets demonstrate that SPyramidLightNet achieves 0.416 on the mAP@0.5:0.95 metric, significantly outperforming mainstream algorithms including Faster-RCNN, SSD, RT-DETR, and the YOLO series. Meanwhile, compared to the baseline YOLOv11, the proposed algorithm achieves an 11.8% parameter compression and a 17.5% computational complexity reduction, with an inference speed reaching 384 FPS, meeting the stringent requirements for real-time detection. 
Ablation experiments and visualization analyses further validate the effectiveness and synergistic effects of each core module. This research provides important theoretical guidance for the design of lightweight object detection algorithms and lays a solid foundation for the development of automated underwater debris recognition and removal technologies.
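The exact wiring of SPyramidConv is not given in the abstract; as a rough 1-D illustration of "parameter-sharing multi-dilation-rate convolution", the sketch below applies one shared 3-tap kernel at several dilation rates and averages the results, so the receptive field grows without adding parameters. The averaging step is an assumption for illustration.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D convolution with a dilated 3-tap kernel."""
    xp = np.pad(x, dilation)
    return np.array([
        sum(kernel[k] * xp[i + k * dilation] for k in range(3))
        for i in range(len(x))
    ])

def shared_pyramid_conv(x, kernel, dilations=(1, 2, 3)):
    """One shared kernel applied at several dilation rates, outputs averaged:
    multi-scale context aggregation with the parameter count of one kernel."""
    outs = [dilated_conv1d(x, kernel, d) for d in dilations]
    return sum(outs) / len(outs)

signal = np.ones(16)
k = np.array([0.25, 0.5, 0.25])      # normalized smoothing kernel
y = shared_pyramid_conv(signal, k)
# interior samples see the full kernel mass at every rate -> stay at 1.0
```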

24 pages, 5170 KB  
Article
EIM-YOLO: A Defect Detection Method for Metal-Painted Surfaces on Electrical Sealing Covers
by Zhanjun Wu and Likang Yang
Appl. Sci. 2025, 15(17), 9380; https://doi.org/10.3390/app15179380 - 26 Aug 2025
Abstract
Electrical sealing covers are widely used in various industrial equipment, where the quality of their metal-painted surfaces directly affects product appearance and long-term reliability. Micro-defects such as pores, particles, scratches, and uneven paint coatings can compromise protective performance during manufacturing. In the rapidly growing new energy vehicle (NEV) industry, battery charging-port sealing covers are critical components, requiring precise defect detection due to exposure to harsh environments, like extreme weather and dust-laden conditions. Even minor defects can lead to water ingress or foreign matter accumulation, affecting vehicle performance and user safety. Conventional manual or rule-based inspection methods are inefficient, and the existing deep learning models struggle with detecting minor and subtle defects. To address these challenges, this study proposes EIM-YOLO, an improved object detection framework for the automated detection of metal-painted surface defects on electrical sealing covers. We propose a novel lightweight convolutional module named C3PUltraConv, which reduces model parameters by 3.1% while improving mAP50 by 1% and recall by 3.2%. The backbone integrates RFAConv for enhanced feature perception, and the neck architecture uses an optimized BiFPN-concat structure with adaptive weight learning for better multi-scale feature fusion. Experimental validation on a real-world industrial dataset collected using industrial cameras shows that EIM-YOLO achieves a precision of 71% (an improvement of 3.4%), with mAP50 reaching 64.8% (a gain of 2.6%) and mAP50-95 improving by 1.2%. Maintaining real-time detection capability, EIM-YOLO significantly outperforms the existing baseline models, offering a more accurate solution for automated quality control in NEV manufacturing.

19 pages, 29645 KB  
Article
Defect Detection in GIS X-Ray Images Based on Improved YOLOv10
by Guoliang Xu, Xiaolong Bai and Menghao Huang
Sensors 2025, 25(17), 5310; https://doi.org/10.3390/s25175310 - 26 Aug 2025
Abstract
Timely and accurate detection of internal defects in Gas-Insulated Switchgear (GIS) with X-ray imaging is critical for power system reliability. However, automated detection faces significant challenges from small, low-contrast defects and complex background structures. This paper proposes an enhanced object-detection model based on the lightweight YOLOv10n framework, specifically optimized for this task. Key improvements include adopting the Normalized Wasserstein Distance (NWD) loss function for small object localization, integrating Monte Carlo (MCAttn) and Parallelized Patch-Aware (PPA) attention to enhance feature extraction, and designing a GFPN-inspired neck for improved multi-scale feature fusion. The model was rigorously evaluated on a custom GIS X-ray dataset. The final model achieved a mean Average Precision (mAP) of 0.674 (IoU 0.5:0.95), representing a 5.0 percentage point improvement over the YOLOv10n baseline and surpassing other comparative models. Qualitative results also confirmed the model’s enhanced capability in detecting challenging small and low-contrast defects. This study presents an effective approach for automated GIS defect detection, with significant potential to enhance power grid maintenance efficiency and safety.
(This article belongs to the Section Sensing and Imaging)
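The NWD mentioned above comes from the tiny-object detection literature: each box is modeled as a 2-D Gaussian, and the Wasserstein distance between the Gaussians is mapped through an exponential to a bounded similarity. A sketch of the usual formulation (the normalizing constant `c` is dataset-dependent; 12.8 here is an assumed placeholder, not the paper's value):

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein Distance between boxes given as (cx, cy, w, h).

    Each box becomes a Gaussian N([cx, cy], diag((w/2)^2, (h/2)^2)); the
    squared 2-Wasserstein distance between such Gaussians has a closed form.
    """
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w2_sq = (ax - bx) ** 2 + (ay - by) ** 2 \
          + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2
    return math.exp(-math.sqrt(w2_sq) / c)

# identical boxes -> similarity exactly 1; a 2-px centre shift on a 4-px box
# still gives a smooth non-zero score, whereas IoU collapses for tiny boxes
```

That smoothness for small boxes is precisely why NWD helps the small-defect localization the abstract targets.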

19 pages, 14216 KB  
Article
LRA-YOLO: A Lightweight Power Equipment Detection Algorithm Based on Large Receptive Field and Attention Guidance
by Jiwen Yuan, Lei Hu and Qimin Hu
Information 2025, 16(9), 736; https://doi.org/10.3390/info16090736 - 26 Aug 2025
Abstract
Power equipment detection is a critical component in power transmission line inspection. However, existing power equipment detection algorithms often face problems such as large model sizes and high computational complexity. This paper proposes a lightweight power equipment detection algorithm based on large receptive field and attention guidance. First, we propose a lightweight large receptive field feature extraction module, CRepLK, which reparameterizes multiple branches into a large kernel convolution to improve the multi-scale detection capability of the model; second, we propose a lightweight ELA-guided Dynamic Sampling Fusion (LEDSF) neck, which alleviates the feature misalignment problem inherent in conventional neck networks to a certain extent; finally, we propose a lightweight Partial Asymmetric Detection Head (PADH), which exploits the redundancy of feature maps to make the detection head significantly lighter. Experimental results show that on the Insplad power equipment dataset, the number of parameters, computational cost (GFLOPs) and model weight size are reduced by 46.8%, 44.1% and 46.4%, respectively, compared with the baseline model, while the mAP is improved by 1%. Comparative experiments on three power equipment datasets show that our model achieves a compelling balance between efficiency and detection performance in power equipment detection scenarios.
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning, 2nd Edition)
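The reparameterization idea behind CRepLK (training parallel branches, then folding them into one kernel for inference) relies only on the linearity of convolution. A minimal demonstration folding a 1x1 branch into a 3x3 kernel; the branch layout is illustrative, not CRepLK's actual structure:

```python
import numpy as np

def merge_branches(k3, k1):
    """Fold a parallel 1x1 branch into a 3x3 kernel by zero-padding it to the
    centre tap: conv(x, k3) + conv(x, k1) == conv(x, merged) by linearity."""
    merged = k3.copy()
    merged[1, 1] += k1[0, 0]
    return merged

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D cross-correlation, used only for the check."""
    r = k.shape[0] // 2
    xp = np.pad(x, r)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 6))
k3 = rng.normal(size=(3, 3))
k1 = rng.normal(size=(1, 1))
two_branch = conv2d_same(x, k3) + conv2d_same(x, k1)
one_branch = conv2d_same(x, merge_branches(k3, k1))
# identical outputs, but the merged form runs a single convolution at inference
```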

18 pages, 10978 KB  
Article
A Lightweight Infrared and Visible Light Multimodal Fusion Method for Object Detection in Power Inspection
by Linghao Zhang, Junwei Kuang, Yufei Teng, Siyu Xiang, Lin Li and Yingjie Zhou
Processes 2025, 13(9), 2720; https://doi.org/10.3390/pr13092720 - 26 Aug 2025
Abstract
Visible and infrared thermal imaging are crucial techniques for detecting structural and temperature anomalies in electrical power system equipment. To meet the demand for multimodal infrared/visible light monitoring of target devices, this paper introduces CBAM-YOLOv4, an improved lightweight object detection model, which leverages a novel synergistic integration of the Convolutional Block Attention Module (CBAM) with YOLOv4. The model employs MobileNet-v3 as the backbone to reduce parameter count, applies depthwise separable convolution to decrease computational complexity, and incorporates the CBAM module to enhance the extraction of critical optical features under complex backgrounds. Furthermore, a feature-level fusion strategy is adopted to integrate visible and infrared image information effectively. Validation on public datasets demonstrates that the proposed model achieves an 18.05 frames per second increase in detection speed over the baseline, a 1.61% improvement in mean average precision (mAP), and a 2 MB reduction in model size, substantially improving both detection accuracy and efficiency through this optimized integration in anomaly inspection of electrical equipment. Validation on a representative edge device, the NVIDIA Jetson Nano, confirms the model’s practical applicability. After INT8 quantization, the model achieves a real-time inference speed of 40.8 FPS with a high mAP of 80.91%, while consuming only 5.2 W of power. Compared to the standard YOLOv4, our model demonstrates a significant improvement in both processing efficiency and detection accuracy, offering a uniquely balanced and deployable solution for mobile inspection platforms.
(This article belongs to the Special Issue Hybrid Artificial Intelligence for Smart Process Control)
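The parameter savings from the depthwise separable convolutions mentioned above follow from simple counting: a standard k x k convolution couples every input channel to every output channel, while the separable form splits that into a per-channel spatial filter plus a 1x1 channel mixer. A quick check of the arithmetic:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias terms ignored)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depthwise k x k (one spatial filter per input channel) + pointwise 1x1."""
    return c_in * k * k + c_in * c_out

std = conv_params(128, 128, 3)          # 128 * 128 * 9  = 147456
sep = dw_separable_params(128, 128, 3)  # 1152 + 16384   = 17536
ratio = std / sep                       # roughly 8.4x fewer parameters
```

In general the saving factor is about 1/c_out + 1/k², which for 3x3 kernels approaches ~9x as the channel count grows.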

18 pages, 6210 KB  
Article
A Non-Destructive System Using UVE Feature Selection and Lightweight Deep Learning to Assess Wheat Fusarium Head Blight Severity Levels
by Xiaoying Liang, Shuo Yang, Lin Mu, Huanrui Shi, Zhifeng Yao and Xu Chen
Agronomy 2025, 15(9), 2051; https://doi.org/10.3390/agronomy15092051 - 26 Aug 2025
Abstract
Fusarium head blight (FHB), a globally significant agricultural disaster, causes annual losses of dozens of millions of tons of wheat. Toxins produced by FHB, such as deoxynivalenol, further pose serious threats to human and livestock health. Consequently, rapid and non-destructive determination of FHB severity is crucial for implementing timely and precise scientific control measures, thereby ensuring wheat supply security. Therefore, this study adopts hyperspectral imaging (HSI) combined with a lightweight deep learning model. Firstly, wheat ears were inoculated with Fusarium fungi at the spike’s midpoint, and HSI data were acquired, yielding 1660 samples representing varying disease severities. Through the integration of multiplicative scatter correction (MSC) and uninformative variable elimination (UVE) methods, features are extracted from the spectral data so as to reduce feature dimensionality while preserving high classification accuracy. Finally, a lightweight FHB severity discrimination model based on MobileNetV2 was developed and deployed as an easy-to-use analysis system. Analysis revealed that UVE-selected characteristic bands for FHB severity predominantly fell within 590–680 nm (related to chlorophyll degradation), 930–1043 nm (related to water stress) and at 738 nm (related to cell wall polysaccharide decomposition). This distribution aligns with the synergistic effect of rapid chlorophyll degradation and structural damage accompanying disease progression. The resulting MobileNetV2 model achieved a mean average precision (mAP) of 99.93% on the training set and 98.26% on the independent test set. Crucially, it maintains an 8.50 MB parameter size and processes data 2.36 times faster, significantly enhancing its suitability for field-deployed equipment by optimally balancing accuracy and operational efficiency.
This advancement empowers agricultural workers to implement timely and precise control measures, supported by a model optimized for field deployment.
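The MSC preprocessing named above is a standard chemometrics step: each spectrum is regressed against a reference (typically the mean spectrum) to estimate its additive and multiplicative scatter, which is then removed. A minimal NumPy sketch with illustrative data (not the paper's):

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum x against the
    mean spectrum ref, x ~ a + b * ref, then return (x - a) / b."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(ref, x, 1)      # slope b, intercept a
        corrected[i] = (x - a) / b
    return corrected

ref_like = np.linspace(0.1, 1.0, 50)      # a synthetic underlying spectrum
# two 'measurements' differing from the same shape only by gain and offset
batch = np.stack([2.0 * ref_like + 0.3, 0.5 * ref_like - 0.1])
out = msc(batch)
# after correction both rows collapse onto one common curve
```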

24 pages, 103094 KB  
Article
A Method for Automated Detection of Chicken Coccidia in Vaccine Environments
by Ximing Li, Qianchao Wang, Lanqi Chen, Xinqiu Wang, Mengting Zhou, Ruiqing Lin and Yubin Guo
Vet. Sci. 2025, 12(9), 812; https://doi.org/10.3390/vetsci12090812 - 26 Aug 2025
Abstract
Vaccines play a crucial role in the prevention and control of chicken coccidiosis, effectively reducing economic losses in the poultry industry and significantly improving animal welfare. To ensure the production quality and immune effect of vaccines, accurate detection of chicken Coccidia oocysts in vaccines is essential. However, this task remains challenging due to the minute size of oocysts, variable spatial orientation, and morphological similarity among species. Therefore, we propose YOLO-Cocci, a chicken coccidia detection model based on YOLOv8n, designed to improve the detection accuracy of chicken coccidia oocysts in vaccine environments. Firstly, an efficient multi-scale attention (EMA) module was added to the backbone to enhance feature extraction and enable more precise focus on oocyst regions. Secondly, we developed the inception-style multi-scale fusion pyramid network (IMFPN) as an efficient neck. By integrating richer low-level features and applying convolutional kernels of varying sizes, IMFPN effectively preserves the features of small objects and enhances feature representation, thereby improving detection accuracy. Finally, we designed a lightweight feature-reconstructed and partially decoupled detection head (LFPD-Head), which enhances detection accuracy while reducing both model parameters and computational cost. The experimental results show that YOLO-Cocci achieves an mAP@0.5 of 89.6%, an increase of 6.5% over the baseline model, while reducing the number of parameters and computation by 14% and 12%, respectively. Notably, in the detection of Eimeria necatrix, mAP@0.5 increased by 14%. To verify the practical effectiveness of the improved detection algorithm, we developed client software that performs automatic detection and visualizes the results. This study will help improve the level of automated assessment of vaccine quality and thus promote the improvement of animal welfare.
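The mAP@0.5 figures quoted throughout these abstracts average the per-class average precision (AP), where each AP integrates precision over recall after matching detections to ground truth at IoU ≥ 0.5. One common all-point-interpolation variant of that per-class computation (evaluation toolkits differ in details such as score thresholds and tie handling):

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """AP with all-point interpolation from per-detection confidences.

    scores : detection confidences; is_tp : 1 if the detection matched a
    ground-truth box at IoU >= 0.5, else 0; n_gt : ground-truth object count.
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / n_gt
    precision = cum_tp / (cum_tp + cum_fp)
    env = np.maximum.accumulate(precision[::-1])[::-1]   # monotone envelope
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, env):
        ap += (r - prev_r) * p      # area under the interpolated P-R curve
        prev_r = r
    return ap

ap_perfect = average_precision([0.9, 0.8], [1, 1], n_gt=2)  # -> 1.0
ap_half = average_precision([0.9, 0.8], [1, 0], n_gt=2)     # -> 0.5
```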
17 pages, 2498 KB  
Article
FPH-DEIM: A Lightweight Underwater Biological Object Detection Algorithm Based on Improved DEIM
by Qiang Li and Wenguang Song
Appl. Syst. Innov. 2025, 8(5), 123; https://doi.org/10.3390/asi8050123 - 26 Aug 2025
Viewed by 227
Abstract
Underwater biological object detection plays a critical role in intelligent ocean monitoring and underwater robotic perception systems. However, challenges such as image blurring, complex lighting conditions, and large variations in object scale severely limit the performance of mainstream detection algorithms such as the YOLO series and Transformer-based models. Although these methods offer real-time inference, they often suffer from unstable accuracy, slow convergence, and weak small-object detection in underwater environments. To address these challenges, we propose FPH-DEIM, a lightweight underwater object detection algorithm based on an improved DEIM framework. It integrates three tailored modules for perception enhancement and efficiency optimization: a Fine-grained Channel Attention (FCA) mechanism that dynamically balances global and local channel responses to suppress background noise and enhance target features; a Partial Convolution (PConv) operator that reduces redundant computation while maintaining semantic fidelity; and a Haar Wavelet Downsampling (HWDown) module that preserves the high-frequency spatial information critical for detecting small underwater organisms. Extensive experiments on the URPC 2021 dataset show that FPH-DEIM achieves an mAP@0.5 of 89.4%, outperforming DEIM (86.2%), YOLOv5-n (86.1%), YOLOv8-n (86.2%), and YOLOv10-n (84.6%) by 3.2–4.8 percentage points. FPH-DEIM also reduces the model to 7.2 M parameters and 7.1 GFLOPs, over 13% fewer parameters and 5% fewer FLOPs than DEIM, and is smaller than the YOLO baselines by more than 2 M parameters and 14.5 GFLOPs in some cases. These results demonstrate that FPH-DEIM strikes an excellent balance between detection accuracy and lightweight deployment, making it well suited for practical use in real-world underwater environments. Full article
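The HWDown idea, keeping high-frequency detail that plain strided pooling would discard, can be illustrated with a plain-NumPy sketch (illustrative only; the function name and normalization here are assumptions, not the paper's implementation):

```python
import numpy as np

def haar_downsample(x):
    """Haar-wavelet downsampling of a (C, H, W) map to (4C, H/2, W/2).

    Each 2x2 block is decomposed into four Haar sub-bands: a low-frequency
    average (LL) plus horizontal, vertical, and diagonal detail (LH, HL, HH).
    Stacking the sub-bands on the channel axis halves the resolution without
    losing the high-frequency information useful for small objects.
    Assumes even H and W.
    """
    a = x[:, 0::2, 0::2]  # top-left of each 2x2 block
    b = x[:, 0::2, 1::2]  # top-right
    c = x[:, 1::2, 0::2]  # bottom-left
    d = x[:, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-frequency average
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return np.concatenate([ll, lh, hl, hh], axis=0)
```

On a constant input the three detail bands are exactly zero, which is why this transform concentrates small-object edges into dedicated channels a following convolution can exploit.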
18 pages, 7165 KB  
Article
Dual-Path Enhanced YOLO11 for Lightweight Instance Segmentation with Attention and Efficient Convolution
by Qin Liao, Jianjun Chen, Fei Wang, Md Harun Or Rashid, Taihua Xu and Yan Fan
Electronics 2025, 14(17), 3389; https://doi.org/10.3390/electronics14173389 - 26 Aug 2025
Viewed by 202
Abstract
Instance segmentation stands as a foundational technology in real-world applications such as autonomous driving, where the inherent trade-off between accuracy and computational efficiency remains a key barrier to practical deployment. To tackle this challenge, we propose a dual-path enhanced framework based on YOLO11l. In this framework, two improved models, YOLO-SA and YOLO-SD, are developed to enable high-performance lightweight instance segmentation. The core innovation lies in balancing precision and efficiency through targeted architectural advancements. For YOLO-SA, we embed the parameter-free SimAM attention mechanism into the C3k2 module, yielding a novel C3k2SA structure. This design leverages neural inhibition principles to dynamically enhance focus on critical regions (e.g., object contours and semantic key points) without adding to model complexity. For YOLO-SD, we replace standard backbone convolutions with lightweight SPD-Conv layers (featuring spatial awareness) and adopt DySample in place of nearest-neighbor interpolation in the upsampling path. This dual modification minimizes information loss during feature propagation while accelerating feature extraction, directly optimizing computational efficiency. Experimental validation on the Cityscapes dataset demonstrates the effectiveness of our approach: YOLO-SA increases mAP from 0.401 to 0.410 with negligible overhead; YOLO-SD achieves a slight mAP improvement over the baseline while reducing parameters by approximately 5.7% and computational cost by 1.06%. These results confirm that our dual-path enhancements effectively reconcile accuracy and efficiency, offering a practical, lightweight solution tailored for resource-constrained real-world scenarios. Full article
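The SPD-Conv replacement described above rests on a space-to-depth rearrangement. A minimal NumPy sketch of that step (the trailing channel-mixing convolution of a full SPD-Conv block is omitted, and the function name is ours, not the paper's):

```python
import numpy as np

def space_to_depth(x, scale=2):
    """Rearrange a (C, H, W) map into (C * scale**2, H/scale, W/scale).

    Unlike a strided convolution, no pixel is discarded: every spatial
    position is moved onto the channel axis, so fine detail survives the
    resolution reduction. A 1x1 convolution would normally follow to mix
    the stacked channels. Assumes H and W are divisible by scale.
    """
    c, h, w = x.shape
    assert h % scale == 0 and w % scale == 0
    slices = [x[:, i::scale, j::scale]
              for i in range(scale) for j in range(scale)]
    return np.concatenate(slices, axis=0)
```

Since the rearrangement is lossless, the sum of activations is preserved exactly, which is the property that motivates using it in place of information-discarding strided downsampling.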
(This article belongs to the Special Issue Knowledge Representation and Reasoning in Artificial Intelligence)