Search Results (269)

Search Parameters:
Keywords = YOLO 11n

18 pages, 6413 KiB  
Article
A Recognition Method for Marigold Picking Points Based on the Lightweight SCS-YOLO-Seg Model
by Baojian Ma, Zhenghao Wu, Yun Ge, Bangbang Chen, He Zhang, Hao Xia and Dongyun Wang
Sensors 2025, 25(15), 4820; https://doi.org/10.3390/s25154820 - 5 Aug 2025
Abstract
Accurate identification of picking points remains a critical challenge for automated marigold harvesting, primarily due to complex backgrounds and significant pose variations of the flowers. To overcome this challenge, this study proposes SCS-YOLO-Seg, a novel method based on a lightweight segmentation model. The approach enhances the baseline YOLOv8n-seg architecture by replacing its backbone with StarNet and introducing C2f-Star, a novel lightweight feature extraction module. These modifications substantially compress the model, reducing its size, parameter count, and computational complexity (GFLOPs). Segmentation efficiency is further optimized through a dual-path collaborative architecture (the Seg-Marigold head). Following mask extraction, picking points are determined by intersecting the optimized elliptical mask fit with the stem skeleton. Experimental results demonstrate that SCS-YOLO-Seg effectively balances model compression with segmentation performance: compared to YOLOv8n-seg, it maintains high accuracy while significantly reducing resource requirements, achieving a picking point identification accuracy of 93.36% with an average inference time of 28.66 ms per image. This work provides a robust and efficient solution for vision systems in automated marigold harvesting.
(This article belongs to the Section Smart Agriculture)
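The picking-point extraction step lends itself to a short sketch. Below is a minimal illustration, assuming binary flower and stem masks are already available from the segmentation head; `flower_mask` and `stem_mask` are hypothetical inputs, and the paper's optimized ellipse-fitting pipeline is not reproduced here.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def picking_point(flower_mask: np.ndarray, stem_mask: np.ndarray):
    """Approximate a picking point as the intersection of the fitted flower
    ellipse with the stem skeleton (a sketch, not the paper's exact method)."""
    contours, _ = cv2.findContours(flower_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                       # cv2.fitEllipse needs >= 5 points
        return None
    ellipse = cv2.fitEllipse(largest)
    outline = np.zeros_like(flower_mask, dtype=np.uint8)
    cv2.ellipse(outline, ellipse, color=1, thickness=3)
    skeleton = skeletonize(stem_mask > 0)      # thin the stem to one pixel
    ys, xs = np.nonzero(outline & skeleton)
    if len(xs) == 0:
        return None
    return int(xs.mean()), int(ys.mean())      # centroid of the intersection
```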

15 pages, 1241 KiB  
Article
Triplet Spatial Reconstruction Attention-Based Lightweight Ship Component Detection for Intelligent Manufacturing
by Bocheng Feng, Zhenqiu Yao and Chuanpu Feng
Appl. Sci. 2025, 15(15), 8676; https://doi.org/10.3390/app15158676 - 5 Aug 2025
Abstract
Automatic component recognition plays a crucial role in intelligent ship manufacturing, but existing methods suffer from low recognition accuracy and high computational cost in industrial scenarios involving small samples, component stacking, and diverse categories. To address the requirements of shipbuilding applications, a Triplet Spatial Reconstruction Attention (TSA) mechanism that combines threshold-based feature separation with triplet parallel processing is proposed, and a lightweight You Only Look Once Ship (YOLO-Ship) detection network is constructed. Unlike existing attention mechanisms that focus on either spatial reconstruction or channel attention independently, the proposed TSA integrates triplet parallel processing with spatial feature separation and reconstruction to enhance target feature representation while significantly reducing parameter count and computational overhead. Experimental validation on a small-scale dataset of actual ship components demonstrates that the improved network achieves 88.7% mean Average Precision (mAP), 84.2% precision, and 87.1% recall, improvements of 3.5%, 2.2%, and 3.8%, respectively, over the original YOLOv8n algorithm. The network requires only 2.6 M parameters and 7.5 giga floating-point operations (GFLOPs), achieving a good balance between detection accuracy and lightweight design. Future research directions include adaptive threshold learning mechanisms for varying industrial conditions and integration with surface defect detection to support comprehensive quality control in intelligent manufacturing systems.
(This article belongs to the Special Issue Artificial Intelligence on the Edge for Industry 4.0)
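Threshold-based feature separation of the kind the TSA description hints at can be sketched as a separate-and-reconstruct block. The version below is loosely modeled on published separation-reconstruction attention units and is only an illustration, not the authors' TSA.

```python
import torch
import torch.nn as nn

class ThresholdSeparation(nn.Module):
    """Separate features into informative / redundant parts by a gate
    threshold, then cross-reconstruct the two halves (illustrative only)."""
    def __init__(self, channels: int, threshold: float = 0.5):
        super().__init__()
        self.gn = nn.GroupNorm(num_groups=4, num_channels=channels)
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        normed = self.gn(x)
        # Learned per-channel GN scales act as an importance score.
        score = (self.gn.weight / self.gn.weight.sum()).view(1, -1, 1, 1)
        gate = torch.sigmoid(score * normed)
        info = torch.where(gate > self.threshold, gate, torch.zeros_like(gate)) * x
        red = torch.where(gate <= self.threshold, gate, torch.zeros_like(gate)) * x
        # Cross-reconstruction lets the informative half compensate the redundant one.
        i1, i2 = info.chunk(2, dim=1)
        r1, r2 = red.chunk(2, dim=1)
        return torch.cat([i1 + r2, i2 + r1], dim=1)

print(ThresholdSeparation(64)(torch.randn(1, 64, 32, 32)).shape)
```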

20 pages, 2316 KiB  
Article
Detection of Dental Anomalies in Digital Panoramic Images Using YOLO: A Next Generation Approach Based on Single Stage Detection Models
by Uğur Şevik and Onur Mutlu
Diagnostics 2025, 15(15), 1961; https://doi.org/10.3390/diagnostics15151961 - 5 Aug 2025
Abstract
Background/Objectives: The diagnosis of pediatric dental conditions from panoramic radiographs is uniquely challenging due to the dynamic nature of the mixed dentition phase, which can lead to subjective and inconsistent interpretations. This study aims to develop and rigorously validate an advanced deep [...] Read more.
Background/Objectives: The diagnosis of pediatric dental conditions from panoramic radiographs is uniquely challenging due to the dynamic nature of the mixed dentition phase, which can lead to subjective and inconsistent interpretations. This study aims to develop and rigorously validate an advanced deep learning model to enhance diagnostic accuracy and efficiency in pediatric dentistry, providing an objective tool to support clinical decision-making. Methods: An initial comparative study of four state-of-the-art YOLO variants (YOLOv8, v9, v10, and v11) was conducted to identify the optimal architecture for detecting four common findings: Dental Caries, Deciduous Tooth, Root Canal Treatment, and Pulpotomy. A stringent two-tiered validation strategy was employed: a primary public dataset (n = 644 images) was used for training and model selection, while a completely independent external dataset (n = 150 images) was used for final testing. All annotations were validated by a dual-expert team comprising a board-certified pediatric dentist and an experienced oral and maxillofacial radiologist. Results: Based on its leading performance on the internal validation set, YOLOv11x was selected as the optimal model, achieving a mean Average Precision (mAP50) of 0.91. When evaluated on the independent external test set, the model demonstrated robust generalization, achieving an overall F1-Score of 0.81 and a mAP50 of 0.82. It yielded clinically valuable recall rates for therapeutic interventions (Root Canal Treatment: 88%; Pulpotomy: 86%) and other conditions (Deciduous Tooth: 84%; Dental Caries: 79%). Conclusions: Validated through a rigorous dual-dataset and dual-expert process, the YOLOv11x model demonstrates its potential as an accurate and reliable tool for automated detection in pediatric panoramic radiographs. This work suggests that such AI-driven systems can serve as valuable assistive tools for clinicians by supporting diagnostic workflows and contributing to the consistent detection of common dental findings in pediatric patients. Full article
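A four-way architecture comparison like this maps directly onto the Ultralytics API. A minimal sketch follows, assuming annotations in YOLO format; `panoramic.yaml` is a hypothetical dataset config, and the variant sizes and training hyperparameters are illustrative rather than the paper's.

```python
from ultralytics import YOLO

# Hypothetical dataset config; the paper's datasets are not reproduced here.
DATA = "panoramic.yaml"
candidates = ["yolov8x.pt", "yolov9e.pt", "yolov10x.pt", "yolo11x.pt"]

results = {}
for weights in candidates:
    model = YOLO(weights)
    model.train(data=DATA, epochs=100, imgsz=640)
    metrics = model.val()              # internal validation split
    results[weights] = metrics.box.map50

best = max(results, key=results.get)   # model selection by internal mAP50
print(f"Best on internal validation: {best} (mAP50={results[best]:.3f})")
```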

24 pages, 6437 KiB  
Article
LEAD-YOLO: A Lightweight and Accurate Network for Small Object Detection in Autonomous Driving
by Yunchuan Yang, Shubin Yang and Qiqing Chan
Sensors 2025, 25(15), 4800; https://doi.org/10.3390/s25154800 - 4 Aug 2025
Abstract
The accurate detection of small objects remains a critical challenge in autonomous driving systems, where improving detection performance typically comes at the cost of increased model complexity, conflicting with the lightweight requirements of edge deployment. To address this dilemma, this paper proposes LEAD-YOLO (Lightweight Efficient Autonomous Driving YOLO), an enhanced network architecture based on YOLOv11n that achieves superior small object detection while maintaining computational efficiency. The proposed framework incorporates three innovative components. First, the backbone integrates a lightweight Convolutional Gated Transformer (CGF) module, which employs normalized gating mechanisms with residual connections, and a Dilated Feature Fusion (DFF) structure that enables progressive multi-scale context modeling through dilated convolutions; together, these enhance small object perception and environmental context understanding without compromising network efficiency. Second, the neck features a hierarchical feature fusion module (HFFM) that establishes guided feature aggregation paths through hierarchical structuring, facilitating collaborative modeling between local structural information and global semantics for robust multi-scale object detection in complex traffic scenarios. Third, the head implements a shared feature detection head (SFDH) structure, incorporating shared convolution modules for efficient cross-scale feature sharing and detail enhancement branches for improved texture and edge modeling. Extensive experiments validate the effectiveness of LEAD-YOLO: on the nuImages dataset, the method achieves 3.8% and 5.4% improvements in mAP@0.5 and mAP@[0.5:0.95], respectively, while reducing parameters by 24.1%. On the VisDrone2019 dataset, performance gains reach 7.9% and 6.4% for the corresponding metrics. These findings demonstrate that LEAD-YOLO achieves an excellent balance between detection accuracy and model efficiency, showing substantial potential for autonomous driving applications.
(This article belongs to the Section Vehicular Sensing)
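Progressive multi-scale context modeling through dilated convolutions, as the DFF description outlines, can be sketched in a few lines. The exact DFF design is not public in this abstract, so the branch count and dilation rates below are assumptions.

```python
import torch
import torch.nn as nn

class DilatedFeatureFusion(nn.Module):
    """Sketch of progressive multi-scale context via dilated convolutions,
    in the spirit of the DFF structure described in the abstract."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats, out = [], x
        for branch in self.branches:    # progressive: each branch refines the last
            out = branch(out)
            feats.append(out)
        return self.fuse(torch.cat(feats, dim=1)) + x   # residual fusion

print(DilatedFeatureFusion(64)(torch.randn(1, 64, 40, 40)).shape)
```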

20 pages, 1971 KiB  
Article
FFG-YOLO: Improved YOLOv8 for Target Detection of Lightweight Unmanned Aerial Vehicles
by Tongxu Wang, Sizhe Yang, Ming Wan and Yanqiu Liu
Appl. Syst. Innov. 2025, 8(4), 109; https://doi.org/10.3390/asi8040109 - 4 Aug 2025
Abstract
Target detection is essential in intelligent transportation and in the autonomous control of unmanned aerial vehicles (UAVs), with single-stage detection algorithms widely used for their speed. However, these algorithms face limitations in detecting small targets, especially in UAV aerial photography, where small targets are often occluded, multi-scale semantic information is easily lost, and there is a trade-off between real-time processing and computational resources. Existing algorithms struggle to extract multi-dimensional features and deep semantic information from images effectively and to balance detection accuracy with model complexity. To address these limitations, we developed FFG-YOLO, a lightweight small-target detection method for UAVs based on YOLOv8. FFG-YOLO incorporates three modules: a feature enhancement block (FEB), a feature concat block (FCB), and a global context awareness block (GCAB). These modules strengthen feature extraction from small targets, resolve semantic bias in multi-scale feature fusion, and help differentiate small targets from complex backgrounds. We also improved the positioning accuracy of small targets using a Wasserstein-distance-based loss function. Experiments showed that FFG-YOLO outperformed other algorithms, including YOLOv8n, in small-target detection while remaining lightweight, meeting the stringent real-time performance and deployment requirements of UAVs.
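The Wasserstein-based localization term can be made concrete: boxes are modeled as 2-D Gaussians and compared via a normalized Wasserstein distance (NWD), following the formulation popularized for tiny-object detection. Whether FFG-YOLO uses exactly this normalization is an assumption; the constant C is dataset-dependent.

```python
import torch

def nwd_loss(pred: torch.Tensor, target: torch.Tensor, C: float = 12.8):
    """Normalized Wasserstein distance for boxes in (cx, cy, w, h) format.
    Each box is modeled as a Gaussian N([cx, cy], diag((w/2)^2, (h/2)^2));
    the 2-Wasserstein distance between such Gaussians has a closed form."""
    g1 = torch.stack([pred[..., 0], pred[..., 1],
                      pred[..., 2] / 2, pred[..., 3] / 2], dim=-1)
    g2 = torch.stack([target[..., 0], target[..., 1],
                      target[..., 2] / 2, target[..., 3] / 2], dim=-1)
    w2 = torch.norm(g1 - g2, dim=-1)   # W2 distance between the two Gaussians
    nwd = torch.exp(-w2 / C)           # similarity in (0, 1]
    return 1.0 - nwd                   # loss: 0 when the boxes coincide

pred = torch.tensor([[10.0, 10.0, 4.0, 4.0]])
gt = torch.tensor([[11.0, 10.0, 4.0, 4.0]])
print(nwd_loss(pred, gt))
```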

17 pages, 29159 KiB  
Article
REW-YOLO: A Lightweight Box Detection Method for Logistics
by Guirong Wang, Shuanglong Li, Xiaojing Zhu, Yuhuai Wang, Jianfang Huang, Yitao Zhong and Zhipeng Wu
Modelling 2025, 6(3), 76; https://doi.org/10.3390/modelling6030076 - 4 Aug 2025
Abstract
Inventory counting of logistics boxes in complex scenarios has always been a core task in intelligent logistics systems. To address the high missed-detection rate and low computational efficiency caused by stacking, occlusion, and rotation in box detection against complex logistics backgrounds, this paper proposes a lightweight, rotated object detection model: REW-YOLO (RepViT-Block YOLO with Efficient Local Attention and Wise-IoU). By integrating structural reparameterization techniques, the C2f-RVB module was designed to reduce computational redundancy in traditional convolutions. Additionally, the ELA-HSFPN multi-scale feature fusion network was constructed to enhance edge feature extraction for occluded boxes and improve detection accuracy in densely packed scenarios. A rotation angle regression branch and a dynamic Wise-IoU loss function were introduced to further refine localization and balance sample quality. Experimental results on the self-constructed BOX-data dataset demonstrate that REW-YOLO achieves 90.2% mAP50 at 130.8 FPS with only 2.18 M parameters, surpassing YOLOv8n by 2.9% in accuracy while reducing computational cost by 28%. These improvements provide an efficient solution for automated box detection in logistics applications.
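The structural reparameterization behind RepViT-style blocks rests on folding training-time branches into a single inference-time convolution. The classic conv-BN fusion below illustrates the mechanism; it is a generic sketch of that trick, not the C2f-RVB module itself.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm into the preceding conv so inference runs one op:
    y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
      = conv'(x) with rescaled weights and an adjusted bias."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation,
                      conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.copy_(conv.weight * scale.view(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_(bn.bias + (bias - bn.running_mean) * scale)
    return fused

conv, bn = nn.Conv2d(8, 16, 3, padding=1, bias=False), nn.BatchNorm2d(16)
bn.eval()                                    # use running statistics
x = torch.randn(1, 8, 20, 20)
print(torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5))  # True
```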

17 pages, 6471 KiB  
Article
A Deep Learning Framework for Traffic Accident Detection Based on Improved YOLO11
by Weijun Li, Liyan Huang and Xiaofeng Lai
Vehicles 2025, 7(3), 81; https://doi.org/10.3390/vehicles7030081 - 4 Aug 2025
Abstract
The automatic detection of traffic accidents plays an increasingly vital role in advancing intelligent traffic monitoring systems and improving road safety. Leveraging computer vision techniques offers a promising solution, enabling rapid, reliable, and automated identification of accidents, thereby significantly reducing emergency response times. This study proposes an enhanced version of the YOLO11 architecture, termed YOLO11-AMF. The proposed model integrates a Mamba-Like Linear Attention (MLLA) mechanism, an Asymptotic Feature Pyramid Network (AFPN), and a novel Focaler-IoU loss function to optimize traffic accident detection performance under complex and diverse conditions. The MLLA module introduces efficient linear attention to improve contextual representation, while the AFPN adopts an asymptotic feature fusion strategy to enhance the expressiveness of the detection head. The Focaler-IoU further refines bounding box regression for improved localization accuracy. To evaluate the proposed model, a custom dataset of traffic accident images was constructed. Experimental results demonstrate that the enhanced model achieves precision, recall, mAP50, and mAP50-95 scores of 96.5%, 82.9%, 90.0%, and 66.0%, respectively, surpassing the baseline YOLO11n by 6.5%, 6.0%, 6.3%, and 6.3% on these metrics. These findings demonstrate the effectiveness of the proposed enhancements and suggest the model's potential for robust and accurate traffic accident detection within real-world conditions.
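Focaler-IoU is easy to state: the plain IoU is linearly remapped onto an interval [d, u] so that training focuses on easy or hard samples. A minimal sketch follows; the interval endpoints are hyperparameters, and the values shown are illustrative, not the paper's.

```python
import torch

def focaler_iou(iou: torch.Tensor, d: float = 0.0, u: float = 0.95) -> torch.Tensor:
    """Linear interval mapping of IoU (Focaler-IoU): values below d count as 0,
    values above u as 1, and values in between are rescaled. Used inside the
    box loss as L = 1 - focaler_iou."""
    return ((iou - d) / (u - d)).clamp(0.0, 1.0)

iou = torch.tensor([0.10, 0.50, 0.97])
print(focaler_iou(iou))      # tensor([0.1053, 0.5263, 1.0000])
print(1 - focaler_iou(iou))  # the corresponding loss values
```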

23 pages, 7739 KiB  
Article
AGS-YOLO: An Efficient Underwater Small-Object Detection Network for Low-Resource Environments
by Weikai Sun, Xiaoqun Liu, Juan Hao, Qiyou Yao, Hailin Xi, Yuwen Wu and Zhaoye Xing
J. Mar. Sci. Eng. 2025, 13(8), 1465; https://doi.org/10.3390/jmse13081465 - 30 Jul 2025
Abstract
Detecting underwater targets is crucial for ecological evaluation and the sustainable use of marine resources. To enhance environmental protection and optimize underwater resource utilization, this study proposes AGS-YOLO, an innovative underwater small-target detection model based on YOLO11. Firstly, this study proposes AMSA, a multi-scale attention module, and optimizes the C3k2 structure to improve the detection and precise localization of small targets. Secondly, a streamlined GSConv convolutional module is incorporated to minimize the parameter count and computational load while effectively retaining inter-channel dependencies. Finally, a novel and efficient cross-scale connected neck network is designed to achieve information complementarity and feature fusion among different scales, efficiently capturing multi-scale semantics while decreasing the complexity of the model. Compared with the baseline model, the proposed method offers notable benefits for underwater devices constrained by limited computational capability. The results demonstrate that AGS-YOLO significantly outperforms previous methods in accuracy on the DUO underwater dataset, with mAP@0.5 improving by 1.3% and mAP@0.5:0.95 improving by 2.6% relative to the baseline YOLO11n model. The proposed model also performs well on the RUOD dataset, demonstrating competent detection accuracy and reliable generalization. Overall, this study offers approaches of significant practical relevance for underwater small-target detection.
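GSConv is a published lightweight convolution (from the Slim-Neck/GSConv line of work): a dense conv produces half the output channels, a cheap depthwise conv produces the other half, and a channel shuffle mixes them. The sketch below follows that public formulation; the paper's exact integration into the neck is not reproduced.

```python
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """GSConv: half dense conv + half depthwise conv, then channel shuffle."""
    def __init__(self, c_in: int, c_out: int, k: int = 3, s: int = 1):
        super().__init__()
        c_half = c_out // 2
        self.dense = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.dense(x)
        b = self.cheap(a)                 # depthwise branch: very few parameters
        y = torch.cat([a, b], dim=1)
        n, c, h, w = y.shape              # channel shuffle to mix the branches
        return y.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)

print(GSConv(64, 128)(torch.randn(1, 64, 40, 40)).shape)  # [1, 128, 40, 40]
```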

20 pages, 3518 KiB  
Article
YOLO-AWK: A Model for Injurious Bird Detection in Complex Farmland Environments
by Xiang Yang, Yongliang Cheng, Minggang Dong and Xiaolan Xie
Symmetry 2025, 17(8), 1210; https://doi.org/10.3390/sym17081210 - 30 Jul 2025
Abstract
Injurious birds pose a significant threat to food production and the agricultural economy. To address the challenges posed by their small size, irregular shape, and frequent occlusion in complex farmland environments, this paper proposes YOLO-AWK, an improved bird detection model based on YOLOv11n. Firstly, to improve the model's ability to recognize bird targets in complex backgrounds, we introduce the attention-based intra-scale feature interaction (AIFI) module to replace the original SPPF module. Secondly, to more accurately localize and identify bird targets of different shapes and sizes, we use WIoUv3 as a new loss function. Thirdly, to suppress noise interference and improve the extraction of residual bird features, we introduce the Kolmogorov–Arnold network (KAN) module. Finally, to improve the model's detection accuracy for small bird targets, we add a small target detection head. The experimental results show that the detection performance of YOLO-AWK on the farmland bird dataset is significantly improved, with final precision, recall, mAP@0.5, and mAP@0.5:0.95 reaching 93.9%, 91.2%, 95.8%, and 75.3%, respectively, outperforming the original model by 2.7, 2.3, 1.6, and 3.0 percentage points. These results demonstrate that the proposed method offers a reliable and efficient technical solution for monitoring injurious birds in farmland.
(This article belongs to the Special Issue Symmetry and Its Applications in Image Processing)
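AIFI, as introduced in RT-DETR, applies a single transformer encoder layer to the highest-level feature map only, so attention cost stays manageable. A minimal sketch under that public formulation, with positional encoding omitted for brevity:

```python
import torch
import torch.nn as nn

class AIFI(nn.Module):
    """Intra-scale feature interaction: flatten the top-level (P5) map into a
    token sequence, run one transformer encoder layer, and fold it back
    (positional encoding omitted in this sketch)."""
    def __init__(self, channels: int, heads: int = 8):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, dim_feedforward=4 * channels,
            batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (n, h*w, c)
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).view(n, c, h, w)

p5 = torch.randn(1, 256, 20, 20)
print(AIFI(256)(p5).shape)  # torch.Size([1, 256, 20, 20])
```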

25 pages, 4296 KiB  
Article
StripSurface-YOLO: An Enhanced Yolov8n-Based Framework for Detecting Surface Defects on Strip Steel in Industrial Environments
by Haomin Li, Huanzun Zhang and Wenke Zang
Electronics 2025, 14(15), 2994; https://doi.org/10.3390/electronics14152994 - 27 Jul 2025
Abstract
Recent advances in precision manufacturing and high-end equipment technologies have imposed ever more stringent requirements on the accuracy, real-time performance, and lightweight design of online steel strip surface defect detection systems. To reconcile the persistent trade-off between detection precision and inference efficiency in complex industrial environments, this study proposes StripSurface-YOLO, a novel real-time defect detection framework built upon YOLOv8n. The core architecture integrates an Efficient Cross-Stage Local Perception module (ResGSCSP), which synergistically combines GSConv lightweight convolutions with a one-shot aggregation strategy, markedly reducing both model parameters and computational complexity. To further enhance multi-scale feature representation, this study introduces an Efficient Multi-Scale Attention (EMA) mechanism at the feature-fusion stage, enabling the network to attend more effectively to critical defect regions. Moreover, conventional nearest-neighbor upsampling is replaced by DySample, which produces deeper, high-resolution feature maps enriched with semantic content, improving both inference speed and fusion quality. To heighten sensitivity to small-scale and low-contrast defects, the model adopts Focal Loss, dynamically adjusting to sample difficulty. Extensive evaluations on the NEU-DET dataset demonstrate that StripSurface-YOLO reduces FLOPs by 11.6% and parameter count by 7.4% relative to the baseline YOLOv8n, while achieving respective improvements of 1.4%, 3.1%, 4.1%, and 3.0% in precision, recall, mAP50, and mAP50:95. Under adverse conditions, including contrast variations, brightness fluctuations, and Gaussian noise, StripSurface-YOLO outperforms the baseline model, delivering improvements of 5.0% in mAP50 and 4.7% in mAP50:95, attesting to the model's robust interference resistance. These findings underscore the potential of StripSurface-YOLO to meet the rigorous performance demands of real-time surface defect detection in the metal forging industry.
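The focal loss used for hard, low-contrast defects is the standard formulation. A short sketch, with gamma and alpha at their common defaults (the paper's settings are not given in the abstract):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha: float = 0.25, gamma: float = 2.0):
    """Binary focal loss: down-weights easy examples by (1 - p_t)^gamma so
    training concentrates on hard, low-contrast defects."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

logits = torch.tensor([2.0, -1.0, 0.3])
targets = torch.tensor([1.0, 0.0, 1.0])
print(focal_loss(logits, targets))
```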

25 pages, 4344 KiB  
Article
YOLO-DFAM-Based Onboard Intelligent Sorting System for Portunus trituberculatus
by Penglong Li, Shengmao Zhang, Hanfeng Zheng, Xiumei Fan, Yonchuang Shi, Zuli Wu and Heng Zhang
Fishes 2025, 10(8), 364; https://doi.org/10.3390/fishes10080364 - 25 Jul 2025
Abstract
This study addresses the challenges of manual measurement bias and low robustness in detecting small, occluded targets in complex marine environments during real-time onboard sorting of Portunus trituberculatus. We propose YOLO-DFAM, an enhanced YOLOv11n-based model that replaces the global average pooling in the Focal Modulation module with a spatial-channel dual-attention mechanism and incorporates the ASF-YOLO cross-scale fusion strategy to improve feature representation across varying target sizes. These enhancements significantly boost detection, achieving an mAP@50 of 98.0% and a precision of 94.6%, outperforming RetinaNet-CSL and Rotated Faster R-CNN by up to 6.3% while maintaining real-time inference at 180.3 FPS with only 7.2 GFLOPs. Unlike prior static-scene approaches, our unified framework integrates attention-guided detection, scale-adaptive tracking, and lightweight weight estimation for dynamic marine conditions. A ByteTrack-based tracking module with dynamic scale calibration, EMA filtering, and optical flow compensation ensures stable multi-frame tracking. Additionally, a region-specific allometric weight estimation model (R² = 0.9856) reduces dimensional errors by 85.7% and keeps prediction errors below 4.7% using only 12 spline-interpolated calibration sets. YOLO-DFAM provides an accurate, efficient solution for intelligent onboard fishery monitoring.
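The allometric weight model is a power law W = a·L^b fitted per body region. A minimal sketch of fitting it by linear regression in log-log space, with made-up example measurements (the paper's calibration data are not available here):

```python
import numpy as np

# Hypothetical calibration pairs: carapace length (cm) vs. weight (g).
length = np.array([8.2, 9.1, 10.3, 11.0, 12.4, 13.1])
weight = np.array([95.0, 130.0, 190.0, 235.0, 340.0, 400.0])

# Fit W = a * L^b  <=>  log W = log a + b * log L (ordinary least squares).
b, log_a = np.polyfit(np.log(length), np.log(weight), deg=1)
a = np.exp(log_a)

pred = a * length ** b
ss_res = np.sum((weight - pred) ** 2)
ss_tot = np.sum((weight - weight.mean()) ** 2)
print(f"W = {a:.3f} * L^{b:.3f}, R^2 = {1 - ss_res / ss_tot:.4f}")
```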

25 pages, 6528 KiB  
Article
Lightweight Sheep Face Recognition Model Combining Grouped Convolution and Parameter Fusion
by Gaochao Liu, Lijun Kang and Yongqiang Dai
Sensors 2025, 25(15), 4610; https://doi.org/10.3390/s25154610 - 25 Jul 2025
Abstract
Sheep face recognition technology is critical in key areas such as individual sheep identification and behavior monitoring. Existing sheep face recognition models typically require substantial computational resources, so deploying them on mobile or embedded devices leads to problems such as reduced recognition accuracy and increased recognition time. To address these problems, an improved Parameter Fusion Lightweight You Only Look Once (PFL-YOLO) sheep face recognition model based on YOLOv8n is proposed. In this study, the Efficient Hybrid Conv (EHConv) module is first integrated to enhance the model's extraction of sheep face features. The Residual C2f (RC2f) module is then introduced to facilitate the effective fusion of multi-scale feature information and improve the model's information processing capability, and the Efficient Spatial Pyramid Pooling Fast (ESPPF) module is used to fuse features of different scales. Finally, parameter fusion optimization of the detection head yields the Parameter Fusion Detection (PFDetect) module, which significantly reduces the number of model parameters and the computational complexity. The experimental results show that the PFL-YOLO model exhibits an excellent performance-efficiency balance in sheep face recognition tasks: mAP@50 and mAP@50:95 reach 99.5% and 87.4%, respectively, with accuracy close to or on par with mainstream benchmark models. At the same time, the parameter count is only 1.01 M, a reduction of 45.1%, 83.7%, 66.6%, 71.4%, and 61.2% compared to YOLOv5n, YOLOv7-tiny, YOLOv8n, YOLOv9-t, and YOLO11n, respectively, and the model size is compressed to 2.1 MB, a reduction of 44.7%, 82.5%, 65%, 72%, and 59.6% compared to the same models. These results confirm that the PFL-YOLO model maintains high-accuracy recognition while remaining lightweight and can provide a new solution for sheep face recognition on resource-constrained devices.
(This article belongs to the Section Smart Agriculture)
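The parameter savings from grouped convolution, the main lever behind hybrid-conv modules like the EHConv named here, are easy to verify: with g groups, a k×k conv needs roughly 1/g of the parameters. A quick check in generic PyTorch (not the paper's module):

```python
import torch.nn as nn

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

dense = nn.Conv2d(64, 64, 3, padding=1, bias=False)
grouped = nn.Conv2d(64, 64, 3, padding=1, groups=8, bias=False)
depthwise = nn.Conv2d(64, 64, 3, padding=1, groups=64, bias=False)

print(n_params(dense))      # 36864 = 64*64*3*3
print(n_params(grouped))    # 4608  = 36864 / 8
print(n_params(depthwise))  # 576   = 36864 / 64
```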

25 pages, 8282 KiB  
Article
Performance Evaluation of Robotic Harvester with Integrated Real-Time Perception and Path Planning for Dwarf Hedge-Planted Apple Orchard
by Tantan Jin, Xiongzhe Han, Pingan Wang, Yang Lyu, Eunha Chang, Haetnim Jeong and Lirong Xiang
Agriculture 2025, 15(15), 1593; https://doi.org/10.3390/agriculture15151593 - 24 Jul 2025
Abstract
Apple harvesting faces increasing challenges owing to rising labor costs and limited seasonal workforce availability, highlighting the need for robotic harvesting solutions in precision agriculture. This study presents a 6-DOF robotic arm system designed for harvesting in dwarf hedge-planted orchards, featuring a lightweight perception module, a task-adaptive motion planner, and an adaptive soft gripper. A lightweight approach was introduced by integrating the Faster module within the C2f module of the You Only Look Once (YOLO) v8n architecture to optimize real-time apple detection efficiency. For motion planning, a Dynamic Temperature Simplified Transition Adaptive Cost Bidirectional Transition-Based Rapidly Exploring Random Tree (DSA-BiTRRT) algorithm was developed, demonstrating significant improvements in path planning performance. The adaptive soft gripper was evaluated for its detachment and load-bearing capacities. Field experiments revealed that the direct-pull method at 150 mN·m torque outperformed the rotation-pull method at both 100 mN·m and 150 mN·m. A custom control system integrating all components was validated in partially controlled orchards, where obstacle clearance and thinning were conducted to ensure operational safety. Tests conducted on 80 apples showed a 52.5% detachment success rate and a 47.5% overall harvesting success rate, with average detachment and full-cycle times of 7.7 s and 15.3 s per apple, respectively. These results highlight the system's potential for advancing robotic fruit harvesting and contribute to the ongoing development of autonomous agricultural technologies.
(This article belongs to the Special Issue Agricultural Machinery and Technology for Fruit Orchard Management)
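"Faster" modules grafted into C2f typically build on FasterNet's partial convolution (PConv), which convolves only a fraction of the channels and passes the rest through untouched; that this paper uses exactly that block is an inference from the naming, so treat the sketch as illustrative.

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution (FasterNet-style): apply a 3x3 conv to the first
    1/ratio of the channels and leave the remainder untouched."""
    def __init__(self, channels: int, ratio: int = 4):
        super().__init__()
        self.c_conv = channels // ratio
        self.conv = nn.Conv2d(self.c_conv, self.c_conv, 3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        head, tail = x[:, :self.c_conv], x[:, self.c_conv:]
        return torch.cat([self.conv(head), tail], dim=1)

x = torch.randn(1, 64, 32, 32)
print(PConv(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```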

21 pages, 5181 KiB  
Article
TEB-YOLO: A Lightweight YOLOv5-Based Model for Bamboo Strip Defect Detection
by Xipeng Yang, Chengzhi Ruan, Fei Yu, Ruxiao Yang, Bo Guo, Jun Yang, Feng Gao and Lei He
Forests 2025, 16(8), 1219; https://doi.org/10.3390/f16081219 - 24 Jul 2025
Abstract
The accurate detection of surface defects in bamboo is critical to maintaining product quality. Traditional inspection methods rely heavily on manual labor, making the manufacturing process labor-intensive and error-prone. To overcome these limitations, this paper introduces TEB-YOLO, a lightweight and efficient defect detection model based on YOLOv5s. Firstly, EfficientViT replaces the original YOLOv5s backbone, reducing the computational cost while improving feature extraction. Secondly, BiFPN is adopted in place of PANet to enhance multi-scale feature fusion and preserve detailed information. Thirdly, an Efficient Local Attention (ELA) mechanism is embedded in the backbone to strengthen local feature representation. Lastly, the original CIoU loss is replaced with EIoU loss to enhance localization precision. The proposed model achieves a precision of 91.7% with only 10.5 million parameters, a 5.4% accuracy improvement and a 22.9% parameter reduction compared to YOLOv5s. Compared with other mainstream models, including YOLOv5n, YOLOv7, YOLOv8n, YOLOv9t, and YOLOv9s, TEB-YOLO achieves precision improvements of 11.8%, 1.66%, 2.0%, 2.8%, and 1.1%, respectively. The experimental results show that TEB-YOLO significantly improves detection precision while remaining lightweight, offering a practical and effective solution for real-time bamboo surface defect detection.
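EIoU extends CIoU by penalizing width and height differences directly, each normalized by the smallest enclosing box. A compact sketch for axis-aligned (x1, y1, x2, y2) boxes, following the published EIoU formulation:

```python
import torch

def eiou_loss(box1: torch.Tensor, box2: torch.Tensor, eps: float = 1e-7):
    """EIoU = 1 - IoU + center-distance, width, and height penalties,
    each normalized by the smallest enclosing box."""
    lt = torch.max(box1[..., :2], box2[..., :2])
    rb = torch.min(box1[..., 2:], box2[..., 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    w1, h1 = box1[..., 2] - box1[..., 0], box1[..., 3] - box1[..., 1]
    w2, h2 = box2[..., 2] - box2[..., 0], box2[..., 3] - box2[..., 1]
    iou = inter / (w1 * h1 + w2 * h2 - inter + eps)
    # Smallest enclosing box: width, height, squared diagonal
    cw = torch.max(box1[..., 2], box2[..., 2]) - torch.min(box1[..., 0], box2[..., 0])
    ch = torch.max(box1[..., 3], box2[..., 3]) - torch.min(box1[..., 1], box2[..., 1])
    diag = cw ** 2 + ch ** 2 + eps
    ctr1 = (box1[..., :2] + box1[..., 2:]) / 2
    ctr2 = (box2[..., :2] + box2[..., 2:]) / 2
    rho2 = ((ctr1 - ctr2) ** 2).sum(dim=-1)
    return (1 - iou + rho2 / diag
            + (w1 - w2) ** 2 / (cw ** 2 + eps)
            + (h1 - h2) ** 2 / (ch ** 2 + eps))

b1 = torch.tensor([[0.0, 0.0, 10.0, 10.0]])
b2 = torch.tensor([[2.0, 2.0, 12.0, 12.0]])
print(eiou_loss(b1, b2))  # small positive value; 0 only for identical boxes
```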

22 pages, 5154 KiB  
Article
BCS_YOLO: Research on Corn Leaf Disease and Pest Detection Based on YOLOv11n
by Shengnan Hao, Erjian Gao, Zhanlin Ji and Ivan Ganchev
Appl. Sci. 2025, 15(15), 8231; https://doi.org/10.3390/app15158231 - 24 Jul 2025
Abstract
Frequent corn leaf diseases and pests pose serious threats to agricultural production. Traditional manual detection methods suffer from significant limitations in both performance and efficiency. To address this, the present paper proposes a novel biotic condition screening (BCS) model for the detection of corn leaf diseases and pests, called BCS_YOLO, based on You Only Look Once version 11n (YOLOv11n). The proposed model enables accurate detection and classification of various corn leaf pathologies and pest infestations under challenging agricultural field conditions, thanks to three newly designed modules: a Self-Perception Coordinated Global Attention (SPCGA) module, a High/Low-Frequency Feature Enhancement (HLFFE) module, and a Local Attention Enhancement (LAE) module. The SPCGA module improves the model's ability to perceive fine-grained targets by fusing multiple attention mechanisms. The HLFFE module adopts a frequency-domain separation strategy to strengthen edge delineation and structural detail representation in affected areas. The LAE module improves the model's ability to discriminate between targets and backgrounds through local importance calculation and intensity adjustment. Experiments show that BCS_YOLO achieves 78.4%, 73.7%, 76.0%, and 82.0% in precision, recall, F1 score, and mAP@50, respectively, improvements of 3.0%, 3.3%, 3.2%, and 4.6% over the baseline model (YOLOv11n), while also outperforming mainstream object detection models. In summary, the proposed BCS_YOLO model provides a practical and scalable solution for efficient detection of corn leaf diseases and pests in complex smart-agriculture scenarios, demonstrating significant theoretical and application value.
(This article belongs to the Special Issue Innovations in Artificial Neural Network Applications)
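Frequency-domain separation of the kind the HLFFE module describes is often approximated spatially: a blur (e.g., average pooling) keeps the low-frequency component, and the residual carries the high frequencies. A generic sketch of that idea, not the authors' module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FreqSplit(nn.Module):
    """Split features into low- and high-frequency parts, enhance each with
    its own 1x1 conv, and recombine (a generic spatial approximation of
    frequency-domain separation)."""
    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.k = k
        self.low_proj = nn.Conv2d(channels, channels, 1)
        self.high_proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        low = F.avg_pool2d(x, self.k, stride=1, padding=self.k // 2)
        high = x - low                   # residual = edges and fine detail
        return self.low_proj(low) + self.high_proj(high)

print(FreqSplit(64)(torch.randn(1, 64, 32, 32)).shape)
```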
