Search Results (16)

Search Parameters:
Keywords = yolov8-sp

16 pages, 2750 KB  
Article
Combining Object Detection, Super-Resolution GANs and Transformers to Facilitate Tick Identification Workflow from Crowdsourced Images on the eTick Platform
by Étienne Clabaut, Jérémie Bouffard and Jade Savage
Insects 2025, 16(8), 813; https://doi.org/10.3390/insects16080813 - 6 Aug 2025
Viewed by 683
Abstract
Ongoing changes in the distribution and abundance of several tick species of medical relevance in Canada have prompted the development of the eTick platform—an image-based crowd-sourcing public surveillance tool for Canada enabling rapid tick species identification by trained personnel, and public health guidance based on tick species and province of residence of the submitter. Considering that more than 100,000 images from over 73,500 identified records representing 25 tick species have been submitted to eTick since the public launch in 2018, a partial automation of the image processing workflow could save substantial human resources, especially as submission numbers have been steadily increasing since 2021. In this study, we evaluate an end-to-end artificial intelligence (AI) pipeline to support tick identification from eTick user-submitted images, characterized by heterogeneous quality and uncontrolled acquisition conditions. Our framework integrates (i) tick localization using a fine-tuned YOLOv7 object detection model, (ii) resolution enhancement of cropped images via super-resolution Generative Adversarial Networks (RealESRGAN and SwinIR), and (iii) image classification using deep convolutional (ResNet-50) and transformer-based (ViT) architectures across three datasets (12, 6, and 3 classes) of decreasing granularities in terms of taxonomic resolution, tick life stage, and specimen viewing angle. ViT consistently outperformed ResNet-50, especially in complex classification settings. The configuration yielding the best performance—relying on object detection without incorporating super-resolution—achieved a macro-averaged F1-score exceeding 86% in the 3-class model (Dermacentor sp., other species, bad images), with minimal critical misclassifications (0.7% of “other species” misclassified as Dermacentor). 
Given that Dermacentor ticks represent more than 60% of tick volume submitted on the eTick platform, the integration of a low granularity model in the processing workflow could save significant time while maintaining very high standards of identification accuracy. Our findings highlight the potential of combining modern AI methods to facilitate efficient and accurate tick image processing in community science platforms, while emphasizing the need to adapt model complexity and class resolution to task-specific constraints. Full article
(This article belongs to the Section Medical and Livestock Entomology)
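The macro-averaged F1-score reported above (exceeding 86% for the 3-class model) is the unweighted mean of per-class F1 scores. A minimal sketch of that metric — the class labels below are illustrative stand-ins for the paper's three classes, not the authors' code:

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)
```

Because every class counts equally, rare classes (here, "other species") weigh as much as the dominant Dermacentor class, which is why the metric suits imbalanced submission volumes.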

22 pages, 18086 KB  
Article
Deep Learning Architecture for Tomato Plant Leaf Detection in Images Captured in Complex Outdoor Environments
by Andros Meraz-Hernández, Jorge Fuentes-Pacheco, Andrea Magadán-Salazar, Raúl Pinto-Elías and Nimrod González-Franco
Mathematics 2025, 13(15), 2338; https://doi.org/10.3390/math13152338 - 22 Jul 2025
Viewed by 809
Abstract
The detection of plant constituents is a crucial issue in precision agriculture, as monitoring these enables the automatic analysis of factors such as growth rate, health status, and crop yield. Tomatoes (Solanum sp.) are an economically and nutritionally important crop in Mexico and worldwide, which is why automatic monitoring of these plants is of great interest. Detecting leaves on images of outdoor tomato plants is challenging due to the significant variability in the visual appearance of leaves. Factors like overlapping leaves, variations in lighting, and environmental conditions further complicate the task of detection. This paper proposes modifications to the Yolov11n architecture to improve the detection of tomato leaves in images of complex outdoor environments by incorporating attention modules, transformers, and WIoUv3 loss for bounding box regression. The results show that our proposal led to a 26.75% decrease in the number of parameters and a 7.94% decrease in the number of FLOPs compared with the original version of Yolov11n. Our proposed model outperformed Yolov11n and Yolov12n architectures in recall, F1-measure, and mAP@50 metrics. Full article

21 pages, 7811 KB  
Article
Research on Broiler Mortality Identification Methods Based on Video and Broiler Historical Movement
by Hongyun Hao, Fanglei Zou, Enze Duan, Xijie Lei, Liangju Wang and Hongying Wang
Agriculture 2025, 15(3), 225; https://doi.org/10.3390/agriculture15030225 - 21 Jan 2025
Viewed by 1098
Abstract
Dead broilers within a flock can be significant vectors for disease transmission and can negatively impact the overall welfare of the remaining broilers. This study introduced a dead broiler detection method that leverages the fact that dead broilers remain stationary within the flock in videos. Dead broilers were identified through the analysis of the historical movement information of each broiler in the video. Firstly, the frame difference method was utilized to capture key frames in the video. An enhanced segmentation network, YOLOv8-SP, was then developed to obtain the mask coordinates of each broiler, and an optical flow estimation method was employed to generate optical flow maps and evaluate their movement. An average optical flow intensity (AOFI) index of broilers was defined and calculated to evaluate the motion level of each broiler in each key frame. With the AOFI threshold, broilers in the key frames were classified into candidate dead broilers and active live broilers. Ultimately, the identification of dead broilers was achieved by analyzing the frequency with which each broiler was judged to be a candidate dead broiler across all key frames in the video. We incorporated the parallelized patch-aware attention (PPA) module into the backbone network and improved the overlaps function with the custom power transform (PT) function. The box and mask segmentation mAP of the YOLOv8-SP model increased by 1.9% and 1.8%, respectively. The model’s target recognition performance for small targets and partially occluded targets was effectively improved. False and missed detections of dead broilers occurred in 4 of the 30 broiler testing videos, and the accuracy of the dead broiler identification algorithm proposed in this study was 86.7%. Full article
(This article belongs to the Special Issue Modeling of Livestock Breeding Environment and Animal Behavior)
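The two-stage decision described above — threshold each broiler's per-key-frame AOFI, then flag it as dead if it is a low-motion candidate in a large fraction of key frames — can be sketched as follows. The threshold values are hypothetical placeholders, not the paper's calibrated values:

```python
def classify_dead(aofi_per_frame, aofi_thresh=0.5, freq_thresh=0.8):
    """Flag a broiler as dead when its average optical flow intensity (AOFI)
    falls below aofi_thresh in at least freq_thresh of the key frames.
    Both thresholds are hypothetical, for illustration only."""
    candidates = [a < aofi_thresh for a in aofi_per_frame]  # candidate dead per frame
    freq = sum(candidates) / len(candidates)                # fraction of frames flagged
    return freq >= freq_thresh
```

Requiring a high flag frequency across key frames is what distinguishes a genuinely stationary (dead) bird from a live bird that happens to rest in a single frame.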

17 pages, 4310 KB  
Article
Object Detection in High-Resolution UAV Aerial Remote Sensing Images of Blueberry Canopy Fruits
by Yun Zhao, Yang Li and Xing Xu
Agriculture 2024, 14(10), 1842; https://doi.org/10.3390/agriculture14101842 - 18 Oct 2024
Cited by 6 | Viewed by 1886
Abstract
Blueberries are among the more economically rewarding fruits in the fruit industry, and detecting their fruits during the growing season is crucial for orchard farmers’ later harvesting and yield prediction. Due to the small size and dense growth of blueberry fruits, manual detection is both time-consuming and labor-intensive. We found that there are few studies utilizing drones for blueberry fruit detection. By employing UAV remote sensing technology and deep learning techniques for detection, substantial human, material, and financial resources can be saved. Therefore, this study collected and constructed a UAV remote sensing target detection dataset for blueberry canopy fruits in a real blueberry orchard environment, which can be used for research on remote sensing target detection of blueberries. To improve the detection accuracy of blueberry fruits, we proposed the PAC3 module, which incorporates location information encoding during the feature extraction process, allowing it to focus on the location information of the targets and thereby reducing the chances of missing blueberry fruits. We adopted a fast convolutional structure instead of the traditional convolutional structure, reducing the model’s parameter count and computational complexity. We proposed the PF-YOLO model and conducted experimental comparisons with several excellent models, achieving improvements in mAP of 5.5%, 6.8%, 2.5%, 2.1%, 5.7%, 2.9%, 1.5%, and 3.4% compared to Yolov5s, Yolov5l, Yolov5s-p6, Yolov5l-p6, Tph-Yolov5, Yolov8n, Yolov8s, and Yolov9c, respectively. We also introduced a non-maximal suppression algorithm, Cluster-NMF, which accelerates inference through matrix parallel computation and merges multiple high-quality target detection frames to generate an optimal detection frame, enhancing the efficiency of blueberry canopy fruit detection without compromising inference speed. Full article
(This article belongs to the Section Agricultural Product Quality and Safety)
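The abstract does not spell out Cluster-NMF itself, but as a point of reference, the classic greedy non-maximum suppression it improves upon can be sketched in a few lines of NumPy — this is the standard baseline algorithm, not the authors' implementation:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Classic greedy NMS over (x1, y1, x2, y2) boxes: keep the highest-scoring
    box, drop all boxes overlapping it above iou_thresh, and repeat."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the kept box with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # suppress heavy overlaps
    return keep
```

Cluster-NMF, as described, replaces this sequential loop with matrix-parallel computation and merges overlapping high-quality boxes rather than simply discarding them.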

23 pages, 4476 KB  
Article
YOLOv5s-Based Image Identification of Stripe Rust and Leaf Rust on Wheat at Different Growth Stages
by Qian Jiang, Hongli Wang, Zhenyu Sun, Shiqin Cao and Haiguang Wang
Plants 2024, 13(20), 2835; https://doi.org/10.3390/plants13202835 - 10 Oct 2024
Cited by 2 | Viewed by 1668
Abstract
Stripe rust, caused by Puccinia striiformis f. sp. tritici, and leaf rust, caused by Puccinia triticina, are two devastating diseases of wheat that seriously affect the safety of wheat production. Timely detection and identification of the two diseases are essential for taking effective disease management measures to reduce wheat yield losses. To realize the accurate identification of wheat stripe rust and wheat leaf rust during different growth stages, in this study, image-based identification of the two diseases was investigated using deep learning and image processing technology. Based on the YOLOv5s model, we built identification models of wheat stripe rust and wheat leaf rust during the seedling stage, stem elongation stage, booting stage, inflorescence emergence stage, anthesis stage, milk development stage, and all the growth stages. The models were tested on different testing sets from the individual growth stages and from all the growth stages. The results showed that the models performed differently in disease image identification. A model based on disease images acquired during one individual growth stage was not suitable for identifying disease images acquired during the other individual growth stages, except for the model based on disease images acquired during the milk development stage, which had acceptable identification performance on the testing sets from the anthesis stage and the milk development stage. In addition, the results demonstrated that wheat growth stages have a great influence on the image identification of the two diseases. The model built based on the disease images acquired in all the growth stages produced acceptable identification results.
Mean F1 Score values between 64.06% and 79.98% and mean average precision (mAP) values between 66.55% and 82.80% were achieved on each testing set composed of the disease images acquired during an individual growth stage and on the testing set composed of the disease images acquired during all the growth stages. This study provides a basis for the image-based identification of wheat stripe rust and wheat leaf rust during the different growth stages, and it provides a reference for the accurate identification of other plant diseases. Full article
(This article belongs to the Special Issue Plant Pathology and Epidemiology for Grain, Pulses, and Cereal Crops)
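The mean average precision (mAP) values reported above are per-class averages of AP, itself the area under an interpolated precision-recall curve. A minimal sketch of all-point interpolated AP for one class, given detections already sorted by confidence — illustrative, not the authors' evaluation code:

```python
def average_precision(tp_flags, num_gt):
    """All-point interpolated AP for one class. tp_flags[i] is 1 if the
    i-th detection (sorted by descending confidence) matches a ground truth."""
    precisions, recalls = [], []
    tp = 0
    for i, flag in enumerate(tp_flags, start=1):
        tp += flag
        precisions.append(tp / i)
        recalls.append(tp / num_gt)
    ap, prev_r = 0.0, 0.0
    for k in range(len(recalls)):
        # precision envelope: best precision achievable at recall >= recalls[k]
        p = max(precisions[k:])
        if recalls[k] > prev_r:
            ap += (recalls[k] - prev_r) * p
            prev_r = recalls[k]
    return ap
```

mAP is then the mean of this quantity over classes (here, the two rust diseases).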

23 pages, 11618 KB  
Article
Identification of Insect Pests on Soybean Leaves Based on SP-YOLO
by Kebei Qin, Jie Zhang and Yue Hu
Agronomy 2024, 14(7), 1586; https://doi.org/10.3390/agronomy14071586 - 20 Jul 2024
Cited by 7 | Viewed by 2158
Abstract
Soybean insect pests can seriously affect soybean yield, so efficient and accurate detection of soybean insect pests is crucial for soybean production. However, pest detection in complex environments suffers from the problems of small pest targets, large inter-class feature similarity, and background interference with feature extraction. To address the above problems, this study proposes the detection algorithm SP-YOLO for soybean pests based on YOLOv8n. The model utilizes FasterNet to replace the backbone of YOLOv8n, which reduces redundant features and improves the model’s ability to extract effective features. Second, we propose the PConvGLU architecture, which enhances the capture and representation of image details while reducing computation and memory requirements. In addition, this study proposes a lightweight shared detection head, which reduces the model’s parameter count and further improves accuracy through shared convolution and GroupNorm. The improved model achieves 80.8% precision, 66.4% recall, and 73% average precision, improvements of 6, 5.4, and 5.2 percentage points, respectively, over YOLOv8n. The FPS reaches 256.4, and the final model size is only 6.2 M, while the computational cost remains basically comparable to that of the original model. The detection capability of SP-YOLO is significantly enhanced compared to that of existing methods, providing an effective solution for soybean pest detection. Full article

24 pages, 4766 KB  
Article
A Multi-Information Fusion Method for Repetitive Tunnel Disease Detection
by Zhiyuan Gan, Li Teng, Ying Chang, Xinyang Feng, Mengnan Gao and Xinwen Gao
Sustainability 2024, 16(10), 4285; https://doi.org/10.3390/su16104285 - 19 May 2024
Cited by 3 | Viewed by 1650
Abstract
Existing tunnel defect detection methods often lack repeated inspections, limiting longitudinal analysis of defects. To address this, we propose a multi-information fusion approach for continuous defect monitoring. Initially, we utilized the You Only Look Once version 7 (Yolov7) network to identify defects in tunnel lining videos. Subsequently, defect localization is achieved with the Super Visual Odometer (SuperVO) algorithm. Lastly, the SuperPoint–SuperGlue Matching Network (SpSg Network) is employed to analyze similarities among defect images. By combining this information, repeated detections of the same defect across inspections are identified. SuperVO was tested in tunnels of 159 m and 260 m, showcasing enhanced localization accuracy compared to traditional visual odometry methods, with errors measuring below 0.3 m on average and 0.8 m at maximum. The SpSg Network surpassed the depth-feature-based Siamese Network in image matching, achieving a precision of 96.61%, recall of 93.44%, and F1 score of 95%. These findings validate the effectiveness of this approach in the repetitive detection and monitoring of tunnel defects. Full article
(This article belongs to the Special Issue Emergency Plans and Disaster Management in the Era of Smart Cities)
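The reported F1 score of 95% follows directly from the reported precision (96.61%) and recall (93.44%), since F1 is their harmonic mean — a one-line sanity check:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall (same units in, same out)."""
    return 2 * precision * recall / (precision + recall)
```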

22 pages, 23766 KB  
Article
Fine-Grained Feature Perception for Unmanned Aerial Vehicle Target Detection Algorithm
by Shi Liu, Meng Zhu, Rui Tao and Honge Ren
Drones 2024, 8(5), 181; https://doi.org/10.3390/drones8050181 - 3 May 2024
Cited by 8 | Viewed by 2679
Abstract
Unmanned aerial vehicle (UAV) aerial images often present challenges such as small target sizes, high target density, varied shooting angles, and dynamic poses. Existing target detection algorithms exhibit a noticeable performance decline when confronted with UAV aerial images compared to general scenes. This paper proposes a small target detection algorithm for UAVs, named Fine-Grained Feature Perception YOLOv8s-P2 (FGFP-YOLOv8s-P2), based on the YOLOv8s-P2 architecture, with the goal of improving detection accuracy while meeting real-time requirements. First, we enhance the targets’ pixel information by utilizing slice-assisted training and inference techniques, thereby reducing missed detections. Then, we propose a feature extraction module with deformable convolutions. Decoupling the learning process of the offset and the modulation scalar enables better adaptation to variations in the size and shape of diverse targets. In addition, we introduce a large-kernel spatial pyramid pooling module. By cascading convolutions, we leverage the advantages of large kernels to flexibly adjust the model’s attention to various regions of high-level feature maps, better adapting to complex visual scenes while circumventing the cost drawbacks associated with large kernels. To match the real-time detection performance of the baseline model, we propose an improved Random FasterNet Block. This block introduces randomness during convolution and captures spatial features of non-linear transformation channels, enriching feature representations and enhancing model efficiency. Extensive experiments and comprehensive evaluations on the VisDrone2019 and DOTA-v1.0 datasets demonstrate the effectiveness of FGFP-YOLOv8s-P2. This achievement provides robust technical support for efficient small target detection by UAVs in complex scenarios. Full article
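Slice-assisted inference, as described above, runs the detector on overlapping tiles of the full image so that small targets occupy more pixels, then maps tile-level boxes back to full-image coordinates. A minimal sketch of the tiling geometry — tile size and overlap are hypothetical defaults, not the paper's settings:

```python
def slice_offsets(width, height, tile=640, overlap=0.2):
    """Top-left (x, y) offsets of overlapping tiles covering the full image."""
    stride = int(tile * (1 - overlap))
    xs = list(range(0, max(width - tile, 0) + 1, stride)) or [0]
    if xs[-1] + tile < width:          # ensure the right edge is covered
        xs.append(width - tile)
    ys = list(range(0, max(height - tile, 0) + 1, stride)) or [0]
    if ys[-1] + tile < height:         # ensure the bottom edge is covered
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

def box_to_full(box, offset):
    """Map an (x1, y1, x2, y2) box from tile coordinates to full-image coordinates."""
    x1, y1, x2, y2 = box
    ox, oy = offset
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)
```

In practice the per-tile detections are merged with NMS after being mapped back, since objects near tile borders are detected in more than one tile.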

17 pages, 11061 KB  
Article
Lightweight Transmission Line Fault Detection Method Based on Leaner YOLOv7-Tiny
by Qingyan Wang, Zhen Zhang, Qingguo Chen, Junping Zhang and Shouqiang Kang
Sensors 2024, 24(2), 565; https://doi.org/10.3390/s24020565 - 16 Jan 2024
Cited by 14 | Viewed by 2243
Abstract
Aiming to address the issues of parameter complexity and high computational load in existing fault detection algorithms for transmission lines, which hinder their deployment on devices like drones, this study proposes a novel lightweight model called Leaner YOLOv7-Tiny. The primary goal is to swiftly and accurately detect typical faults in transmission lines from aerial images. This algorithm inherits the ELAN structure from the YOLOv7-Tiny network and replaces its backbone with depthwise separable convolutions to reduce model parameters. By integrating the SP attention mechanism, it fuses multi-scale information, capturing features across various scales to enhance small target recognition. Finally, an improved FCIoU Loss function is introduced to balance the contribution of high-quality and low-quality samples to the loss function, expediting model convergence and boosting detection accuracy. Experimental results demonstrate a 20% reduction in model size compared to the original YOLOv7-Tiny algorithm. Detection accuracy for small targets surpasses that of current mainstream lightweight object detection algorithms. This approach holds practical significance for transmission line fault detection. Full article
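The FCIoU loss above is a variant of IoU-based bounding-box regression; the abstract does not define FCIoU itself, but all such losses start from the plain intersection-over-union of two boxes, which can be sketched as:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

IoU-family losses are typically `1 - iou` plus penalty terms; FCIoU's specific reweighting of high- versus low-quality samples is not specified here.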

18 pages, 3000 KB  
Article
A Lightweight Man-Overboard Detection and Tracking Model Using Aerial Images for Maritime Search and Rescue
by Yijian Zhang, Qianyi Tao and Yong Yin
Remote Sens. 2024, 16(1), 165; https://doi.org/10.3390/rs16010165 - 30 Dec 2023
Cited by 16 | Viewed by 4018
Abstract
Unmanned rescue systems have become an efficient means of executing maritime search and rescue operations, ensuring the safety of rescue personnel. Unmanned aerial vehicles (UAVs), due to their agility and portability, are well-suited for these missions. In this context, we introduce a lightweight detection model, YOLOv7-FSB, and its integration with ByteTrack for real-time detection and tracking of individuals in maritime distress situations. YOLOv7-FSB is our lightweight detection model, designed to optimize the use of computational resources on UAVs. It comprises several key components: FSNet serves as the backbone network, reducing redundant computations and memory access to enhance the overall efficiency. The SP-ELAN module is introduced to ensure operational speed while improving feature extraction capabilities. We have also enhanced the feature pyramid structure, making it highly effective for locating individuals in distress within aerial images captured by UAVs. By integrating this lightweight model with ByteTrack, we have created a system that improves detection accuracy from 86.9% to 89.2% while maintaining a detection speed similar to YOLOv7-tiny. Additionally, our approach achieves a MOTA of 85.5% and a tracking speed of 82.7 frames per second, meeting the demanding requirements of maritime search and rescue missions. Full article
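The MOTA of 85.5% quoted above is the standard CLEAR MOT tracking metric: one minus the rate of misses, false positives, and identity switches relative to the number of ground-truth objects. A one-line sketch of the definition (not the authors' evaluation code):

```python
def mota(false_negatives, false_positives, id_switches, num_gt):
    """Multiple Object Tracking Accuracy (CLEAR MOT definition)."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt
```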

18 pages, 6338 KB  
Article
Single-Stage Pose Estimation and Joint Angle Extraction Method for Moving Human Body
by Shuxian Wang, Xiaoxun Zhang, Fang Ma, Jiaming Li and Yuanyou Huang
Electronics 2023, 12(22), 4644; https://doi.org/10.3390/electronics12224644 - 14 Nov 2023
Cited by 21 | Viewed by 5279
Abstract
Detecting posture changes of athletes in sports is an important task in teaching, training, and competition, but it remains challenging due to the diversity and complexity of sports postures. This paper introduces a single-stage pose estimation algorithm named yolov8-sp. This algorithm enhances the original yolov8 architecture by incorporating multi-dimensional feature fusion and an attention mechanism that automatically captures feature importance. Furthermore, angle extraction is conducted for three crucial motion joints in the motion scene, with polynomial corrections applied across successive frames. In comparison with the baseline yolov8, the improved model significantly outperforms it in AP50 (average precision at an IoU threshold of 0.5), improving from 84.5 AP to 87.1 AP, and the AP50:95, APM, and APL metrics also show varying degrees of improvement. The joint angle detection accuracy under different sports scenarios was tested, and the overall accuracy improved from 73.2% to 89.0%, which demonstrates the feasibility of the method for human posture estimation in sports and provides a reliable tool for the analysis of athletes’ joint angles. Full article
(This article belongs to the Special Issue AI Security and Safety)
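Joint angle extraction from estimated keypoints, as described above, typically computes the angle at the middle joint of a keypoint triple (e.g., hip-knee-ankle) from the dot product of the two limb vectors. A minimal sketch of that geometry — the keypoint triple is an assumption for illustration, not the paper's exact procedure:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by 2D keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])  # vector from joint to first keypoint
    v2 = (c[0] - b[0], c[1] - b[1])  # vector from joint to second keypoint
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
```

The paper additionally smooths these per-frame angles with polynomial corrections across successive frames.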

17 pages, 9687 KB  
Article
An Approach for Plant Leaf Image Segmentation Based on YOLOV8 and the Improved DEEPLABV3+
by Tingting Yang, Suyin Zhou, Aijun Xu, Junhua Ye and Jianxin Yin
Plants 2023, 12(19), 3438; https://doi.org/10.3390/plants12193438 - 29 Sep 2023
Cited by 38 | Viewed by 9040
Abstract
Accurate plant leaf image segmentation provides an effective basis for automatic leaf area estimation, species identification, and plant disease and pest monitoring. In this paper, based on our previous publicly available leaf dataset, an approach that fuses YOLOv8 and an improved DeepLabv3+ is proposed for precise image segmentation of individual leaves. First, a YOLOv8-based leaf object detection algorithm was introduced to reduce the interference of backgrounds on the second-stage leaf segmentation task. Then, an improved DeepLabv3+ leaf segmentation method was proposed to more efficiently capture bar leaves and slender petioles. Densely connected atrous spatial pyramid pooling (DenseASPP) was used to replace the ASPP module, and the strip pooling (SP) strategy was simultaneously inserted, which enabled the backbone network to effectively capture long-distance dependencies. The experimental results show that our proposed method, which combines YOLOv8 and the improved DeepLabv3+, achieves a 90.8% mean intersection over union (mIoU) value for leaf segmentation on our public leaf dataset. When compared with the fully convolutional neural network (FCN), lite-reduced atrous spatial pyramid pooling (LR-ASPP), pyramid scene parsing network (PSPnet), U-Net, DeepLabv3, and DeepLabv3+, the proposed method improves the mIoU of leaves by 8.2, 8.4, 3.7, 4.6, 4.4, and 2.5 percentage points, respectively. These results show that the performance of our method is significantly improved compared with classical segmentation methods. The proposed method can thus effectively support the development of smart agroforestry. Full article
(This article belongs to the Section Plant Modeling)
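The mIoU reported above is the per-class intersection-over-union of predicted and ground-truth label masks, averaged over classes. A minimal NumPy sketch of the metric (illustrative, not the authors' evaluation code):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes for integer label masks."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))
```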

23 pages, 12053 KB  
Article
HSSNet: A End-to-End Network for Detecting Tiny Targets of Apple Leaf Diseases in Complex Backgrounds
by Xing Gao, Zhiwen Tang, Yubao Deng, Shipeng Hu, Hongmin Zhao and Guoxiong Zhou
Plants 2023, 12(15), 2806; https://doi.org/10.3390/plants12152806 - 28 Jul 2023
Cited by 11 | Viewed by 2469
Abstract
Apple leaf diseases are one of the most important factors that reduce apple quality and yield. Object detection technology based on deep learning can detect diseases in a timely manner and help automate disease control, thereby reducing economic losses. In the natural environment, tiny apple leaf disease targets (with a resolution of less than 32 × 32 pixels) are easily overlooked. To address the problems of complex background interference, difficult detection of tiny targets, and biased prediction boxes that exist in standard detectors, in this paper, we constructed a tiny target dataset, TTALDD-4, containing four types of diseases (Alternaria leaf spot, Frogeye leaf spot, Grey spot, and Rust), and proposed the HSSNet detector based on the YOLOv7-tiny benchmark for the detection of tiny apple leaf disease targets. Firstly, the H-SimAM attention mechanism is proposed to focus on the foreground lesions in the complex background of the image. Secondly, the SP-BiFormer Block is proposed to enhance the model’s ability to perceive tiny leaf disease targets. Finally, we use the SIOU loss to reduce prediction box bias. The experimental results show that HSSNet achieves 85.04% mAP (mean average precision), 67.53% AR (average recall), and 83 FPS (frames per second). Compared with other standard detectors, HSSNet maintains a high real-time detection speed with higher detection accuracy. This provides a reference for the automated control of apple leaf diseases. Full article
(This article belongs to the Collection Application of AI in Plants)

21 pages, 9132 KB  
Article
SP-YOLO-Lite: A Lightweight Violation Detection Algorithm Based on SP Attention Mechanism
by Zhihao Huang, Jiajun Wu, Lumei Su, Yitao Xie, Tianyou Li and Xinyu Huang
Electronics 2023, 12(14), 3176; https://doi.org/10.3390/electronics12143176 - 21 Jul 2023
Cited by 7 | Viewed by 2337
Abstract
In the operation site of power grid construction, it is crucial to comprehensively and efficiently detect violations of regulations for the personal safety of the workers with a safety monitoring system based on object detection technology. However, common general-purpose object detection algorithms are difficult to deploy on low-computational-power embedded platforms situated at the edge due to their high model complexity. These algorithms suffer from drawbacks such as low operational efficiency, slow detection speed, and high energy consumption. To address this issue, a lightweight violation detection algorithm based on the SP (Segmentation-and-Product) attention mechanism, named SP-YOLO-Lite, is proposed to improve the YOLOv5s detection algorithm and achieve low-cost deployment and efficient operation of object detection algorithms on low-computational-power monitoring platforms. First, to address the issue of excessive complexity in backbone networks built with conventional convolutional modules, a Lightweight Convolutional Block was employed to construct the backbone network, significantly reducing computational and parameter costs while maintaining high detection model accuracy. Second, in response to the problem of existing attention mechanisms overlooking spatial local information, we introduced an image segmentation operation and proposed a novel attention mechanism called Segmentation-and-Product (SP) attention. It enables the model to effectively capture local informative features of the image, thereby enhancing model accuracy. Furthermore, a Neck network that is both lightweight and feature-rich is proposed by introducing Depthwise Separable Convolution and the Segmentation-and-Product attention module into the Path Aggregation Network, thus addressing the issue of high computation and parameter volume in the Neck network of YOLOv5s.
Experimental results show that, compared with the baseline YOLOv5s, the proposed SP-YOLO-Lite reduces computation and parameter counts by approximately 70% while achieving comparable detection accuracy on both the VOC dataset and our self-built SMPC dataset. Full article
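The roughly 70% reduction reported above is plausible from parameter counting alone: replacing a standard k x k convolution with a depthwise k x k plus a 1 x 1 pointwise convolution shrinks its parameter cost dramatically. The layer sizes below are illustrative, not taken from SP-YOLO-Lite.

```python
def std_conv_params(c_in, c_out, k):
    # a standard convolution learns c_out filters of shape c_in x k x k
    return c_in * c_out * k * k

def dw_sep_params(c_in, c_out, k):
    # depthwise: one k x k filter per input channel;
    # pointwise: a 1 x 1 convolution mixing channels
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 128, 256, 3
std = std_conv_params(c_in, c_out, k)   # 294912
sep = dw_sep_params(c_in, c_out, k)     # 33920
print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.2%}")
```

For this layer the separable version keeps only about 11.5% of the parameters; stacked across a Neck network, reductions of the order reported above follow even after adding back small attention modules.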
(This article belongs to the Section Artificial Intelligence)
17 pages, 6740 KB  
Article
SP-YOLOv8s: An Improved YOLOv8s Model for Remote Sensing Image Tiny Object Detection
by Mingyang Ma and Huanli Pang
Appl. Sci. 2023, 13(14), 8161; https://doi.org/10.3390/app13148161 - 13 Jul 2023
Cited by 57 | Viewed by 8524
Abstract
An improved YOLOv8s-based method is proposed to address the challenge of accurately recognizing tiny objects in remote sensing images during practical human-computer interaction. The accuracy of YOLOv8s on tiny targets is low because the downsampling module of the original algorithm causes the network to lose fine-grained feature information, and the neck network does not fuse feature information sufficiently. In this method, the strided convolution module in YOLOv8s is replaced with the SPD-Conv module, so the feature map is downsampled while fine-grained feature information is preserved, improving the learning and expressive capabilities of the network and enhancing recognition accuracy. Meanwhile, the path aggregation network is replaced with the SPANet structure, which provides richer gradient paths; this substitution enhances the fusion of feature maps at various scales, reduces model parameters, further improves detection accuracy, and makes the network more robust to complex backgrounds. Experimental verification is conducted on two challenging datasets containing tiny objects, AI-TOD and TinyPerson. Compared with the original YOLOv8s algorithm, the proposed method yields notable gains: under real-time performance constraints, mAP0.5 improves by 4.9% and 9.1% on AI-TOD and TinyPerson, respectively, and mAP0.5:0.95 improves by 3.4% and 3.2% on the same datasets. The results indicate that the proposed method enables rapid and accurate recognition of tiny objects in complex backgrounds and demonstrates better recognition precision and stability than other algorithms such as YOLOv5s and YOLOv8s. Full article
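The key property of SPD-Conv that the abstract relies on is lossless downsampling: a space-to-depth rearrangement turns a C x H x W map into 4C x H/2 x W/2, so spatial resolution halves without discarding any activations (a following non-strided convolution then mixes the channels; that step is omitted here). A minimal sketch of the rearrangement on nested lists:

```python
def space_to_depth(fmap, block=2):
    """fmap: list of C channels, each an H x W list of lists.
    Returns block*block*C channels of size H/block x W/block."""
    out = []
    for di in range(block):
        for dj in range(block):
            for ch in fmap:
                # take every block-th row/column starting at (di, dj)
                out.append([row[dj::block] for row in ch[di::block]])
    return out

x = [[[1, 2], [3, 4]]]   # 1 channel, 2 x 2
y = space_to_depth(x)    # 4 channels, 1 x 1; every value survives
```

Unlike a strided convolution, which samples and therefore discards spatial detail, every input value reappears in the output, relocated to a channel, which is why fine-grained information about tiny objects is preserved through the downsampling step.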