Search Results (1,412)

Search Parameters:
Keywords = lightweight YOLOv5

22 pages, 7617 KB  
Article
DAS-YOLO: Adaptive Structure–Semantic Symmetry Calibration Network for PCB Defect Detection
by Weipan Wang, Wengang Jiang, Lihua Zhang, Siqing Chen and Qian Zhang
Symmetry 2026, 18(2), 222; https://doi.org/10.3390/sym18020222 - 25 Jan 2026
Abstract
Industrial-grade printed circuit boards (PCBs) exhibit high structural order and inherent geometric symmetry, where minute surface defects essentially constitute symmetry-breaking anomalies that disrupt topological integrity. Detecting these anomalies is quite challenging due to issues like scale variation and low contrast. Therefore, this paper proposes a symmetry-aware object detection framework, DAS-YOLO, based on an improved YOLOv11. The U-shaped adaptive feature extraction module (Def-UAD) reconstructs the C3K2 unit, overcoming the geometric limitations of standard convolutions through a deformation adaptation mechanism. This significantly enhances feature extraction capabilities for irregular defect topologies. A semantic-aware module (SADRM) is introduced at the backbone and neck regions. The lightweight and efficient ESSAttn improves the distinguishability of small or weak targets. At the same time, to address information asymmetry between deep and shallow features, an iterative attention feature fusion module (IAFF) is designed. By dynamically weighting and calibrating feature biases, it achieves structured coordination and balanced multi-scale representation. To evaluate the validity of the proposed method, we carried out comprehensive experiments using publicly accessible datasets focused on PCB defects. The results show that the Recall, mAP@50, and mAP@50-95 of DAS-YOLO reached 82.60%, 89.50%, and 46.60%, respectively, which are 3.7%, 1.8%, and 2.9% higher than those of the baseline model, YOLOv11n. Comparisons with mainstream detectors such as GD-YOLO and SRN further demonstrate a significant advantage in detection accuracy. These results confirm that the proposed framework offers a solution that strikes a balance between accuracy and practicality in addressing the key challenges in PCB surface defect detection. Full article
(This article belongs to the Section Computer)

20 pages, 4006 KB  
Article
Deformable Pyramid Sparse Transformer for Semi-Supervised Driver Distraction Detection
by Qiang Zhao, Zhichao Yu, Jiahui Yu, Simon James Fong, Yuchu Lin, Rui Wang and Weiwei Lin
Sensors 2026, 26(3), 803; https://doi.org/10.3390/s26030803 - 25 Jan 2026
Abstract
Ensuring sustained driver attention is critical for intelligent transportation safety systems; however, the performance of data-driven driver distraction detection models is often limited by the high cost of large-scale manual annotation. To address this challenge, this paper proposes an adaptive semi-supervised driver distraction detection framework based on teacher–student learning and deformable pyramid feature fusion. The framework leverages a limited amount of labeled data together with abundant unlabeled samples to achieve robust and scalable distraction detection. An adaptive pseudo-label optimization strategy is introduced, incorporating category-aware pseudo-label thresholding, delayed pseudo-label scheduling, and a confidence-weighted pseudo-label loss to dynamically balance pseudo-label quality and training stability. To enhance fine-grained perception of subtle driver behaviors, a Deformable Pyramid Sparse Transformer (DPST) module is integrated into a lightweight YOLOv11 detector, enabling precise multi-scale feature alignment and efficient cross-scale semantic fusion. Furthermore, a teacher-guided feature consistency distillation mechanism is employed to promote semantic alignment between teacher and student models at the feature level, mitigating the adverse effects of noisy pseudo-labels. Extensive experiments conducted on the Roboflow Distracted Driving Dataset demonstrate that the proposed method outperforms representative fully supervised baselines in terms of mAP@0.5 and mAP@0.5:0.95 while maintaining a balanced trade-off between precision and recall. These results indicate that the proposed framework provides an effective and practical solution for real-world driver monitoring systems under limited annotation conditions. Full article
(This article belongs to the Section Vehicular Sensing)
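The category-aware pseudo-label thresholding described in this abstract keeps a separate confidence cutoff per class, so rare or hard classes are not filtered as aggressively as easy ones. A minimal sketch under assumed thresholds and a hypothetical detection format (this is not the authors' implementation):

```python
# Hypothetical sketch of category-aware pseudo-label filtering for
# semi-supervised detection. Class names, thresholds, and the
# (class, confidence, box) tuple format are illustrative assumptions.

def filter_pseudo_labels(detections, class_thresholds, default=0.9):
    """Keep teacher detections whose confidence clears the per-class
    threshold; the kept boxes become pseudo-labels for the student."""
    kept = []
    for cls, conf, box in detections:
        if conf >= class_thresholds.get(cls, default):
            kept.append((cls, conf, box))
    return kept

teacher_out = [
    ("texting",  0.95, (10, 10, 50, 50)),
    ("drinking", 0.62, (20, 30, 60, 80)),
    ("texting",  0.70, (15, 12, 55, 48)),  # below the texting cutoff
]
# Lower threshold for the rarer "drinking" class (illustrative values).
thresholds = {"texting": 0.85, "drinking": 0.60}
pseudo = filter_pseudo_labels(teacher_out, thresholds)
print([d[0] for d in pseudo])  # -> ['texting', 'drinking']
```

In the paper's framework this filtering is additionally scheduled over training (delayed pseudo-labels) and the surviving labels are confidence-weighted in the loss; the sketch shows only the thresholding step.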
21 pages, 9353 KB  
Article
YOLOv10n-Based Peanut Leaf Spot Detection Model via Multi-Dimensional Feature Enhancement and Geometry-Aware Loss
by Yongpeng Liang, Lei Zhao, Wenxin Zhao, Shuo Xu, Haowei Zheng and Zhaona Wang
Appl. Sci. 2026, 16(3), 1162; https://doi.org/10.3390/app16031162 - 23 Jan 2026
Abstract
Precise identification of early peanut leaf spot is strategically significant for safeguarding oilseed supplies and reducing pesticide reliance. However, general-purpose detectors face severe domain adaptation bottlenecks in unstructured field environments due to small feature dissipation, physical occlusion, and class imbalance. To address this, this study constructs a dataset spanning two phenological cycles and proposes POD-YOLO, a physics-aware and dynamics-optimized lightweight framework. Anchored on the YOLOv10n architecture and adhering to a “data-centric” philosophy, the framework optimizes the parameter convergence path via a synergistic “Augmentation-Loss-Optimization” mechanism: (1) Input Stage: A Physical Domain Reconstruction (PDR) module is introduced to simulate physical occlusion, blocking shortcut learning and constructing a robust feature space; (2) Loss Stage: A Loss Manifold Reshaping (LMR) mechanism is established utilizing dual-branch constraints to suppress background gradients and enhance small target localization; and (3) Optimization Stage: A Decoupled Dynamic Scheduling (DDS) strategy is implemented, integrating AdamW with cosine annealing to ensure smooth convergence on small-sample data. Experimental results demonstrate that POD-YOLO achieves a 9.7% precision gain over the baseline and 83.08% recall, all while maintaining a low computational cost of 8.4 GFLOPs. This study validates the feasibility of exploiting the potential of lightweight architectures through optimization dynamics, offering an efficient paradigm for edge-based intelligent plant protection. Full article
(This article belongs to the Section Optics and Lasers)

24 pages, 2902 KB  
Article
Research on Prolonged Violation Behavior Recognition in Construction Sites Based on Artificial Intelligence
by Kai Yu, Zhenyue Wang, Lujie Zhou, Xuesong Yang, Zhaoxiang Mu and Tianyu Wang
Symmetry 2026, 18(1), 204; https://doi.org/10.3390/sym18010204 - 22 Jan 2026
Abstract
Prolonged violation behavior is characterized by sustained temporal presence, slow action changes, and similarity to normal behavior. Due to the complex construction environment, intelligent recognition algorithms face significant challenges. This paper proposes an improved YOLOv8-based model, DGEA-YOLOv8, to address these issues, using “playing with mobile phones” as a case study. The model integrates the DCNv3 module in the backbone to enhance behavior deformation adaptability and the GELAN module to improve lightweight performance and global perception in resource-limited environments. An ECA attention mechanism is added to enhance small target detection, while the ASPP module boosts multi-scale perception. ByteTrack is incorporated for continuous tracking of prolonged violation behavior in construction scenarios. Experimental results show that DGEA-YOLOv8 achieves 94.5% mAP50, a 2.95% improvement over the YOLOv8s baseline, with better data capture rates and lower ID change rates compared to algorithms like Deepsort and Strongsort. A construction-specific dataset of over 3000 images verifies the model’s effectiveness. From the perspective of data symmetry, the proposed model demonstrates strong capability in addressing asymmetric feature distributions and behavioral imbalance inherent in prolonged violations, restoring spatiotemporal consistency in detection. In conclusion, DGEA-YOLOv8 provides a precise, efficient, and adaptive solution for recognizing prolonged violation behaviors in construction sites. Full article
(This article belongs to the Section Computer)

25 pages, 2891 KB  
Article
Automated Measurement of Sheep Body Dimensions via Fusion of YOLOv12n-Seg-SSM and 3D Point Clouds
by Xiaona Zhao, Xifeng Liu, Zihao Gao, Xinran Liang, Yanjun Yuan, Yangfan Bai, Zhimin Zhang, Fuzhong Li and Wuping Zhang
Agriculture 2026, 16(2), 272; https://doi.org/10.3390/agriculture16020272 - 21 Jan 2026
Abstract
Accurate measurement of sheep body dimensions is fundamental for growth monitoring and breeding management. To address the limited segmentation accuracy and the trade-off between lightweight design and precision in existing non-contact measurement methods, this study proposes an improved model, YOLOv12n-Seg-SSM, for the automatic measurement of body height, body length, and chest circumference from side-view images of sheep. The model employs a synergistic strategy that combines semantic segmentation with 3D point cloud geometric fitting. It incorporates the SegLinearSimAM feature enhancement module, the SEAttention channel optimization module, and the ENMPDIoU loss function to improve measurement robustness under complex backgrounds and occlusions. After segmentation, valid RGB-D point clouds are generated through depth completion and point cloud filtering, enabling 3D computation of key body measurements. Experimental results demonstrate that the improved model outperforms the baseline YOLOv12n-Seg: the mAP@0.5 for segmentation reaches 94.20%, the mAP@0.5 for detection reaches 95.00% (improvements of 0.5 and 1.3 percentage points, respectively), and the recall increases to 99.00%. In validation tests on 43 Hu sheep, the R2 values for chest circumference, body height, and body length were 0.925, 0.888 and 0.819, respectively, with measurement errors within 5%. The model requires only 10.71 MB of memory and 9.9 GFLOPs of computation, enabling real-time operation on edge devices. This study demonstrates that the proposed method achieves non-contact automatic measurement of sheep body dimensions, providing a practical solution for on-site growth monitoring and intelligent management in livestock farms. Full article
(This article belongs to the Special Issue Computer Vision Analysis Applied to Farm Animals)
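Computing 3D body measurements from an RGB-D segmentation mask, as in the pipeline above, rests on pinhole back-projection of masked pixels into camera-frame points. A generic sketch with made-up intrinsics (not the paper's calibration or pipeline):

```python
def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) with depth z in metres
    maps to the 3D camera-frame point (X, Y, Z)."""
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Illustrative intrinsics (assumed values, not from the paper).
fx = fy = 600.0
cx, cy = 320.0, 240.0

# The principal point back-projects onto the optical axis.
p = backproject(320.0, 240.0, 2.0, fx, fy, cx, cy)
print(p)  # -> (0.0, 0.0, 2.0)

# A body dimension such as height could then be estimated from the
# vertical extent (max Y - min Y) of the segmented point cloud.
```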

24 pages, 7972 KB  
Article
YOLO-MCS: A Lightweight Loquat Object Detection Algorithm in Orchard Environments
by Wei Zhou, Leina Gao, Fuchun Sun and Yuechao Bian
Agriculture 2026, 16(2), 262; https://doi.org/10.3390/agriculture16020262 - 21 Jan 2026
Abstract
To address the challenges faced by loquat detection algorithms in orchard settings—including complex backgrounds, severe branch and leaf occlusion, and inaccurate identification of densely clustered fruits—which lead to high computational complexity, insufficient real-time performance, and limited recognition accuracy, this study proposes a lightweight detection model based on the YOLO-MCS architecture. First, to address fruit occlusion by branches and leaves, the backbone network adopts the lightweight EfficientNet-b0 architecture. Leveraging its composite model scaling feature, this significantly reduces computational costs while balancing speed and accuracy. Second, to deal with inaccurate recognition of densely clustered fruits, the C2f module is enhanced. Spatial Channel Reconstruction Convolution (SCConv) optimizes and reconstructs the bottleneck structure of the C2f module, accelerating inference while improving the model's multi-scale feature extraction capabilities. Finally, to overcome interference from complex natural backgrounds in loquat fruit detection, this study introduces the SimAm module during the initial detection phase. Its feature recalibration strategy enhances the model's ability to focus on target regions. According to the experimental results, the improved YOLO-MCS model outperformed the original YOLOv8 model in terms of Precision (P) and mean Average Precision (mAP) by 1.3% and 2.2%, respectively. Additionally, the model reduced GFLOPs computation by 34.1% and Params by 43.3%. Furthermore, in tests under complex weather conditions and with interference factors such as leaf occlusion, branch occlusion, and fruit mutual occlusion, the YOLO-MCS model demonstrated significant robustness, achieving an mAP of 89.9% in the loquat recognition task. This exceptional performance serves as a robust technical base for the development and research of intelligent systems for harvesting loquats. Full article
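The SimAm module mentioned in this abstract is a parameter-free attention mechanism: each activation is gated by a sigmoid of an energy-based saliency term that grows with its squared distance from the channel mean. A minimal single-channel pure-Python sketch (the regularizer `lam` uses the commonly cited default; this is a generic illustration, not the YOLO-MCS integration):

```python
import math

def simam_channel(x, lam=1e-4):
    """Parameter-free SimAM-style attention on one 2-D feature map:
    activations far from the channel mean receive larger gates."""
    flat = [v for row in x for v in row]
    mu = sum(flat) / len(flat)
    var = sum((v - mu) ** 2 for v in flat) / (len(flat) - 1)
    out = []
    for row in x:
        out_row = []
        for v in row:
            # Inverse-energy term: larger for distinctive activations.
            energy_inv = (v - mu) ** 2 / (4 * (var + lam)) + 0.5
            gate = 1 / (1 + math.exp(-energy_inv))  # sigmoid
            out_row.append(v * gate)
        out.append(out_row)
    return out

fmap = [[0.1, 0.1, 0.1],
        [0.1, 0.9, 0.1],   # the outlier activation
        [0.1, 0.1, 0.1]]
att = simam_channel(fmap)
# The distinctive activation is gated more strongly than the background.
print(att[1][1] / 0.9 > att[0][0] / 0.1)  # -> True
```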

24 pages, 69667 KB  
Article
YOLO-ELS: A Lightweight Cherry Tomato Maturity Detection Algorithm
by Zhimin Tong, Yu Zhou, Changhao Li, Changqing Cai and Lihong Rong
Appl. Sci. 2026, 16(2), 1043; https://doi.org/10.3390/app16021043 - 20 Jan 2026
Abstract
Within the domain of intelligent picking robotics, fruit recognition and positioning are essential. Challenging conditions such as varying light, occlusion, and limited edge-computing power compromise fruit maturity detection. To tackle these issues, this paper proposes a lightweight algorithm YOLO-ELS based on YOLOv8n. Specifically, we reconstruct the backbone by replacing the bottlenecks in the C2f structure with Edge-Information-Enhanced Modules (EIEM) to prioritize morphological cues and filter background redundancy. Furthermore, a Large Separable Kernel Attention (LSKA) mechanism is integrated into the SPPF layer to expand the effective receptive field for multi-scale targets. To mitigate occlusion-induced errors, a Spatially Enhanced Attention Module (SEAM) is incorporated into the decoupled detection head to enhance feature responses in obscured regions. Finally, the Inner-GIoU loss is adopted to refine bounding box regression and accelerate convergence. Experimental results demonstrate that compared to the YOLOv8n baseline, the proposed YOLO-ELS achieves a 14.8% reduction in GFLOPs and a 2.3% decrease in parameters, while attaining a precision, recall, and mAP@50% of 92.7%, 83.9%, and 92.0%, respectively. When compared with mainstream models such as DETR, Faster-RCNN, SSD, TOOD, YOLOv5s, and YOLO11n, the mAP@50% is improved by 7.0%, 4.7%, 11.4%, 8.6%, 3.1%, and 3.2%. Deployment tests on the NVIDIA Jetson Orin Nano Super edge platform yield an inference latency of 25.2 ms and a detection speed of 28.2 FPS, successfully meeting the real-time operational requirements of automated harvesting systems. These findings confirm that YOLO-ELS effectively balances high detection accuracy with lightweight architecture, providing a robust technical foundation for intelligent fruit picking in resource-constrained greenhouse environments. Full article
(This article belongs to the Section Agricultural Science and Technology)
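The Inner-GIoU loss adopted above builds on GIoU, which extends IoU with a penalty based on the smallest enclosing box so that disjoint boxes still receive a gradient. A plain-GIoU sketch (the Inner- auxiliary-box scaling is omitted here):

```python
def giou(a, b):
    """Generalized IoU of boxes (x1, y1, x2, y2): IoU minus the
    fraction of the smallest enclosing box not covered by the union."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    # Smallest axis-aligned box enclosing both inputs
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return inter / union - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))      # -> 1.0 (perfect overlap)
print(giou((0, 0, 1, 1), (2, 2, 3, 3)) < 0)  # -> True (disjoint boxes)
# The regression loss is typically 1 - giou, so even non-overlapping
# predictions are pushed toward the target.
```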

22 pages, 8145 KB  
Article
Research on Greenhouse Eggplant Fruit Detection and Tracking-Based Counting Using an Improved YOLOv5s-DeepSORT
by Jianfei Zhu, Long Bai, Caishan Liu, Chengxu Nian, Keke Zhang and Sibo Yang
Agriculture 2026, 16(2), 253; https://doi.org/10.3390/agriculture16020253 - 19 Jan 2026
Abstract
Accurate fruit counting is essential for yield evaluation and automated management in greenhouse eggplant production. This study presents a lightweight detection and counting method based on an improved YOLOv5s–DeepSORT framework. To reduce computational cost while preserving accuracy, we replace the YOLOv5s backbone with MobileNetV3, insert an Efficient Channel Attention (ECA) module to enhance discriminative fruit features, and substitute the neck C3 block with C2f to strengthen multi-scale feature fusion. Compared with the original YOLOv5s, our improved YOLOv5s increases precision by 2.3% while reducing the number of parameters and FLOPs by 37.0% and 50.9%, respectively. For counting, we integrate DeepSORT with a counting-zone strategy that increments the count once per target when the bounding-box center first enters the counting zone, thereby mitigating identity switches (ID switches) and suppressing duplicate counts. Experimental results demonstrate that the proposed method enables accurate and real-time eggplant fruit counting in complex greenhouse scenes, providing practical support for automated yield assessment on inspection robots. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
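The counting-zone rule in this abstract (increment the count once per tracked target, when its bounding-box center first enters the zone) can be sketched as follows; the zone geometry and track format are illustrative assumptions:

```python
def update_count(tracks, zone, counted_ids):
    """Increment the fruit count once per track ID when the
    bounding-box center first enters the counting zone.
    tracks: {track_id: (x1, y1, x2, y2)}; zone: (x1, y1, x2, y2)."""
    zx1, zy1, zx2, zy2 = zone
    new = 0
    for tid, (x1, y1, x2, y2) in tracks.items():
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if zx1 <= cx <= zx2 and zy1 <= cy <= zy2 and tid not in counted_ids:
            counted_ids.add(tid)  # never count this ID again
            new += 1
    return new

zone = (100, 0, 200, 480)   # illustrative vertical counting band
counted, total = set(), 0
# Frame 1: track 7 inside the zone, track 8 outside.
total += update_count({7: (120, 50, 160, 90), 8: (10, 10, 40, 40)}, zone, counted)
# Frame 2: track 7 still inside (not recounted), track 8 now inside.
total += update_count({7: (125, 55, 165, 95), 8: (110, 10, 150, 40)}, zone, counted)
print(total)  # -> 2
```

Counting per ID on first entry is what suppresses duplicate counts when a fruit lingers in the zone; identity switches from the tracker remain the main residual error source, which is why the paper pairs this rule with DeepSORT.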

16 pages, 6142 KB  
Article
Research on Image Detection of Thin-Vein Precious Metal Ores and Rocks Based on Improved YOLOv8n
by Heyan Zhou, Yuanhui Li, Yunsen Wang, Hong Zhou and Kunmeng Li
Appl. Sci. 2026, 16(2), 988; https://doi.org/10.3390/app16020988 - 19 Jan 2026
Abstract
To address the high-dilution issues arising from efficient mining methods such as medium-deep drilling for underground thin veins of precious metals, detecting raw rock fragments after blasting for subsequent sorting has become a cutting-edge research focus. With the continuous advancement of artificial intelligence, deep learning offers novel applications for rock detection. Accordingly, this study employs an improved lightweight YOLOv8n model to detect two typical thin-vein precious metal ores: gold ore and wolframite. In consideration of the computational resource constraints in underground environments, a triple optimization strategy is proposed. First, GhostConv and C2f-Ghost modules were introduced into the backbone network to reduce redundant computations while preserving feature representation capabilities. Second, the VoVGSCSP module was incorporated into the neck to further decrease model parameters and computational load. Finally, the ECA mechanism was embedded before the SPPF pooling layer to enhance feature extraction for ores and rocks, thereby improving detection accuracy. The results demonstrate that the GVE-YOLOv8 model contains only 2.28 million parameters—a 24.3% reduction compared to the original YOLOv8n. FLOPs decrease from 8.1 G to 5.6 G, and the model size reduces from 6.3 MB to 4.9 MB, while detection accuracy improves to 98.3% mAP50 and 95.3% mAP50-95. This enhanced model meets the performance requirements for accurately detecting raw ore and rock fragments after underground blasting, thereby providing a novel research method for thin-vein mining. Full article

25 pages, 12600 KB  
Article
Underwater Object Recovery Using a Hybrid-Controlled ROV with Deep Learning-Based Perception
by Inés Pérez-Edo, Salvador López-Barajas, Raúl Marín-Prades and Pedro J. Sanz
J. Mar. Sci. Eng. 2026, 14(2), 198; https://doi.org/10.3390/jmse14020198 - 18 Jan 2026
Abstract
The deployment of large remotely operated vehicles (ROVs) or autonomous underwater vehicles (AUVs) typically requires support vessels, crane systems, and specialized personnel, resulting in increased logistical complexity and operational costs. In this context, lightweight and modular underwater robots have emerged as a cost-effective alternative, capable of reaching significant depths and performing tasks traditionally associated with larger platforms. This article presents a system architecture for recovering a known object using a hybrid-controlled ROV, integrating autonomous perception, high-level interaction, and low-level control. The proposed architecture includes a perception module that estimates the object pose using a Perspective-n-Point (PnP) algorithm, combining object segmentation from a YOLOv11-seg network with 2D keypoints obtained from a YOLOv11-pose model. In addition, a Natural Language ROS Agent is incorporated to enable high-level command interaction between the operator and the robot. These modules interact with low-level controllers that regulate the vehicle degrees of freedom and with autonomous behaviors such as target approach and grasping. The proposed system is evaluated through simulation and experimental tank trials, including object recovery experiments conducted in a 12 × 8 × 5 m test tank at CIRTESU, as well as perception validation in simulated, tank, and harbor scenarios. The results demonstrate successful recovery of a black box using a BlueROV2 platform, showing that architectures of this type can effectively support operators in underwater intervention tasks, reducing operational risk, deployment complexity, and mission costs. Full article
(This article belongs to the Section Ocean Engineering)

19 pages, 5706 KB  
Article
Research on a Unified Multi-Type Defect Detection Method for Lithium Batteries Throughout Their Entire Lifecycle Based on Multimodal Fusion and Attention-Enhanced YOLOv8
by Zitao Du, Ziyang Ma, Yazhe Yang, Dongyan Zhang, Haodong Song, Xuanqi Zhang and Yijia Zhang
Sensors 2026, 26(2), 635; https://doi.org/10.3390/s26020635 - 17 Jan 2026
Abstract
To address the limitations of traditional lithium battery defect detection—low efficiency, high missed detection rates for minute/composite defects, and inadequate multimodal fusion—this study develops an improved YOLOv8 model based on multimodal fusion and attention enhancement for unified full-lifecycle multi-type defect detection. Integrating visible-light and X-ray modalities, the model incorporates a Squeeze-and-Excitation (SE) module to dynamically weight channel features, suppressing redundancy and highlighting cross-modal complementarity. A Multi-Scale Fusion Module (MFM) is constructed to amplify subtle defect expression by fusing multi-scale features, building on established feature fusion principles. Experimental results show that the model achieves an mAP@0.5 of 87.5%, a minute defect recall rate (MRR) of 84.1%, and overall industrial recognition accuracy of 97.49%. It operates at 35.9 FPS (server) and 25.7 FPS (edge) with end-to-end latency of 30.9–38.9 ms, meeting high-speed production line requirements. Exhibiting strong robustness, the lightweight model outperforms YOLOv5/7/8/9-S in core metrics. Large-scale verification confirms stable performance across the battery lifecycle, providing a reliable solution for industrial defect detection and reducing production costs. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

20 pages, 5733 KB  
Article
A Lightweight Segmentation Model Method for Marigold Picking Point Localization
by Baojian Ma, Zhenghao Wu, Yun Ge, Bangbang Chen, Jijing Lin, He Zhang and Hao Xia
Horticulturae 2026, 12(1), 97; https://doi.org/10.3390/horticulturae12010097 - 17 Jan 2026
Abstract
A key challenge in automated marigold harvesting lies in the accurate identification of picking points under complex environmental conditions, such as dense shading and intense illumination. To tackle this problem, this research proposes a lightweight instance segmentation model combined with a harvest position estimation method. Based on the YOLOv11n-seg segmentation framework, we develop a lightweight PDS-YOLO model through two key improvements: (1) structural pruning of the base model to reduce its parameter count, (2) incorporation of a Channel-wise Distillation (CWD)-based feature distillation method to compensate for the accuracy loss caused by pruning. The resulting lightweight segmentation model achieves a size of only 1.3 MB (22.8% of the base model) and a computational cost of 5 GFLOPs (49.02% of the base model). At the same time, it maintains high segmentation performance, with a precision of 93.6% and a mean average precision (mAP) of 96.7% for marigold segmentation. Furthermore, the proposed model demonstrates enhanced robustness under challenging scenarios including strong lighting, cloudy weather, and occlusion, improving the recall rate by 1.1% over the base model. Based on the segmentation results, a method for estimating marigold harvest positions using 3D point clouds is proposed. Fitting and deflection angle experiments confirm that the fitting errors are constrained within 3–12 mm, which lies within an acceptable range for automated harvesting. These results validate the capability of the proposed approach to accurately locate marigold harvest positions under top-down viewing conditions. The lightweight segmentation network and harvest position estimation method presented in this work offer effective technical support for selective harvesting of marigolds. Full article
(This article belongs to the Special Issue Orchard Intelligent Production: Technology and Equipment)
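Channel-wise distillation (CWD), used above to recover accuracy lost to pruning, normalizes each channel's spatial activations into a distribution and makes the student match the teacher via KL divergence. A pure-Python sketch in the spirit of CWD (temperature, feature layout, and scaling are generic assumptions, not the paper's exact loss):

```python
import math

def channelwise_distill_loss(teacher, student, tau=1.0):
    """Channel-wise distillation: softmax each channel's spatial
    activations (temperature tau) into a distribution, then sum the
    KL divergence from student to teacher, averaged over channels.
    teacher/student: list of channels, each a flat list of activations."""
    def softmax(xs, t):
        m = max(xs)  # subtract max for numerical stability
        exps = [math.exp((x - m) / t) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    loss = 0.0
    for t_ch, s_ch in zip(teacher, student):
        p = softmax(t_ch, tau)  # teacher spatial distribution
        q = softmax(s_ch, tau)  # student spatial distribution
        loss += sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return tau * tau * loss / len(teacher)

t_feat = [[2.0, 0.1, 0.1], [0.1, 3.0, 0.1]]
print(channelwise_distill_loss(t_feat, t_feat))                   # -> 0.0
print(channelwise_distill_loss(t_feat, [[0.1] * 3, [0.1] * 3]) > 0)  # -> True
```

The per-channel softmax is what distinguishes CWD from plain feature matching: the student is penalized for misplacing each channel's spatial focus rather than for differing activation magnitudes.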

18 pages, 3091 KB  
Article
Automated Detection of Malaria (Plasmodium) Parasites in Images Captured with Mobile Phones Using Convolutional Neural Networks
by Jhosephi Vásquez Ascate, Bill Bardales Layche, Rodolfo Cardenas Vigo, Erwin Dianderas Caut, Carlos Ramírez Calderón, Carlos Garcia Cortegano, Alejandro Reategui Pezo, Katty Arista Flores, Juan Ramírez Calderón, Cristiam Carey Angeles, Karine Zevallos Villegas, Martin Casapia Morales and Hugo Rodríguez Ferrucci
Appl. Sci. 2026, 16(2), 927; https://doi.org/10.3390/app16020927 - 16 Jan 2026
Abstract
Microscopic examination of Giemsa-stained thick blood smears remains the reference standard for malaria diagnosis, but it requires specialized personnel and is difficult to scale in resource-limited settings. We present a lightweight, smartphone-based system for automatic detection of Plasmodium parasites in thick smears captured with mobile phones attached to a conventional microscope. We built a clinically validated dataset of 400 slides from Loreto, Peru, consisting of 8625 images acquired with three smartphone models and 54,531 annotated instances of Plasmodium vivax and P. falciparum across eight morphologic classes. The workflow includes YOLOv11n-based visual-field segmentation, rescaling, tiling into 640 × 640 patches, data augmentation, and parasite detection. Four lightweight detectors were evaluated; YOLOv11n achieved the best trade-off, with an F1-score of 0.938 and an overall accuracy of 90.92% on the test subset. For diagnostic interpretability, performance was also assessed at the visual-field level by grouping detections into Vivax, Falciparum, Mixed, and Background. On a high-end smartphone (Samsung Galaxy S24 Ultra), the deployed YOLOv11n model achieved 110.9 ms latency per 640 × 640 inference (9.02 FPS). Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
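Tiling a rescaled visual-field image into 640 × 640 patches, as in the workflow above, amounts to computing crop coordinates that cover the image; the edge-handling strategy here (shifting boundary tiles inward) is an assumption, not necessarily the authors' scheme:

```python
def tile_coords(width, height, tile=640, overlap=0):
    """Top-left corners of tile x tile crops covering a width x height
    image; edge tiles are shifted inward so every crop stays in bounds."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Ensure the right and bottom edges are covered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

coords = tile_coords(1600, 1200)
print(len(coords))  # -> 6 tiles (3 across x 2 down)
print(coords[-1])   # -> (960, 560): last tile shifted to fit in bounds
```

Detections from each patch are then mapped back by adding the patch's top-left offset before grouping results at the visual-field level.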

17 pages, 2852 KB  
Article
A Lightweight Edge-AI System for Disease Detection and Three-Level Leaf Spot Severity Assessment in Strawberry Using YOLOv10n and MobileViT-S
by Raikhan Amanova, Baurzhan Belgibayev, Madina Mansurova, Madina Suleimenova, Gulshat Amirkhanova and Gulnur Tyulepberdinova
Computers 2026, 15(1), 63; https://doi.org/10.3390/computers15010063 - 16 Jan 2026
Abstract
Mobile edge-AI plant monitoring systems enable automated disease control in greenhouses and open fields, reducing dependence on manual inspection and the variability of visual diagnostics. This paper proposes a lightweight two-stage edge-AI system for strawberries, in which a YOLOv10n detector on board a mobile agricultural robot locates leaves affected by seven common diseases (including Leaf Spot) with real-time capability on an embedded platform. Patches are then automatically extracted for leaves classified as Leaf Spot and transmitted to the second module—a compact MobileViT-S-based classifier with ordinal output that assesses the severity of Leaf Spot on three levels (S1—mild, S2—moderate, S3—severe), developed on a specialised set of 373 manually labelled leaf patches. In a comparative experiment with the lightweight architectures ResNet-18, EfficientNet-B0, MobileNetV3-Small and Swin-Tiny, the proposed Ordinal MobileViT-S demonstrated the highest accuracy in assessing the severity of Leaf Spot (accuracy ≈ 0.97 with 4.9 million parameters), surpassing both the baseline models and the standard MobileViT-S with a cross-entropy loss function. On the original image set, the YOLOv10n detector achieves an mAP@0.5 of 0.960, an F1 score of 0.93 and a recall of 0.917, ensuring reliable detection of affected leaves for subsequent Leaf Spot severity assessment. The results show that the "YOLOv10n + Ordinal MobileViT-S" cascade provides practical severity-aware Leaf Spot diagnosis on a mobile agricultural robot and can serve as the basis for real-time strawberry crop health monitoring systems. Full article
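The "ordinal output" of the severity classifier can be illustrated with the common cumulative-logit decoding scheme: K − 1 binary heads score whether severity exceeds each threshold, and the predicted level is the number of thresholds passed. The abstract does not state the paper's exact ordinal formulation, so this sketch (function names and the 0.5 cutoff included) is an assumption.

```python
import math


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def decode_ordinal(logits, labels=("S1", "S2", "S3")):
    """Map K-1 cumulative logits to one of K ordered severity labels.

    logits[k] scores P(severity > level k); the prediction counts how
    many 0.5 thresholds are passed, keeping decisions order-consistent.
    """
    exceeded = sum(sigmoid(z) > 0.5 for z in logits)
    return labels[exceeded]


# Two cumulative heads for three levels: P(>S1) high, P(>S2) low -> moderate
level = decode_ordinal([1.7, -0.4])  # "S2"
```

Compared with a plain 3-way softmax and cross-entropy, this decoding (paired with an ordinal loss) penalises S1-vs-S3 confusions more than adjacent-level ones, which matches the ordered nature of severity grades.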
19 pages, 1722 KB  
Article
Light-YOLO-Pepper: A Lightweight Model for Detecting Missing Seedlings
by Qiang Shi, Yongzhong Zhang, Xiaoxue Du, Tianhua Chen and Yafei Wang
Agriculture 2026, 16(2), 231; https://doi.org/10.3390/agriculture16020231 - 15 Jan 2026
Abstract
The aim of this study was to meet the demand for accurate, real-time detection of missing seedlings in large-scale seedling production and to address the low precision of traditional models and the insufficient adaptability of mainstream lightweight models. This study proposed Light-YOLO-Pepper, a missing-seedling detection model based on an improved YOLOv8n. The SE (Squeeze-and-Excitation) attention module was introduced to dynamically suppress interference from the nutrient-soil background and enhance the features of the missing-seedling regions. Depthwise separable convolution (DSConv) replaced standard convolution, reducing computational redundancy while retaining core features. Customized anchor boxes were generated by K-means clustering to adapt to the hole sizes of 72-cell (large) and 128-cell (small, high-density) seedling trays. The results show that the overall mAP@0.5, precision and recall of the Light-YOLO-Pepper model were 93.6 ± 0.5%, 94.6 ± 0.4% and 93.2 ± 0.6%, which were 3.3%, 3.1%, and 3.4% higher than those of the YOLOv8n model, respectively. The Light-YOLO-Pepper model has only 1.82 M parameters and a computational cost of 3.2 GFLOPs, with inference speeds of 168.4 FPS on GPU and 28.9 FPS on CPU, outperforming mainstream models in lightweight and real-time performance. The precision difference between the two tray types was only 1.2%, and the precision retention rate in high-density scenes was 98.73%. The model achieves the best balance of detection accuracy, lightweight design, and scene adaptability, efficiently meeting the needs of embedded equipment and real-time detection in large-scale seedling production, and providing technical support for replanting automation. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
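Anchor generation by K-means clustering over annotated box sizes, as used above, can be sketched as follows. This is a generic illustration: plain squared-Euclidean k-means with first-k initialisation, whereas YOLO-style anchor clustering often uses 1 − IoU as the distance; the function name and the toy box sizes are hypothetical.

```python
def kmeans_anchors(boxes, k, iters=50):
    """Cluster (w, h) box sizes into k anchor shapes with plain k-means.

    Initialisation simply takes the first k boxes; empty clusters keep
    their previous centre.
    """
    centers = list(boxes[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for w, h in boxes:
            i = min(range(k), key=lambda j: (w - centers[j][0]) ** 2
                                          + (h - centers[j][1]) ** 2)
            groups[i].append((w, h))
        centers = [
            (sum(w for w, _ in g) / len(g), sum(h for _, h in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return sorted(centers)


# Two size clusters: large 72-cell tray holes vs smaller 128-cell holes
boxes = [(90, 92), (88, 95), (94, 90), (40, 42), (38, 45), (44, 41)]
anchors = kmeans_anchors(boxes, k=2)
```

The resulting (w, h) centres replace the detector's default anchors, so predicted boxes start closer to the actual tray-hole sizes and converge faster during training.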