Search Results (35)

Search Parameters:
Keywords = Focal_EIoU Loss

17 pages, 2115 KiB  
Article
Surface Defect Detection of Magnetic Tiles Based on YOLOv8-AHF
by Cheng Ma, Yurong Pan and Junfu Chen
Electronics 2025, 14(14), 2857; https://doi.org/10.3390/electronics14142857 - 17 Jul 2025
Viewed by 230
Abstract
Magnetic tiles are an important component of permanent magnet motors, and their quality directly affects a motor's performance and service life. Defect detection must therefore be performed on magnetic tiles in industrial production so that defective tiles can be removed. The YOLOv8-AHF algorithm is proposed to improve network feature extraction and to address the missed detections and poor results that arise in surface defect detection because permanent magnet motor tiles are small, while simultaneously reducing the deviation between the predicted box and the ground-truth box. Firstly, a hybrid module combining atrous convolution and depthwise separable convolution (ADConv) is introduced in the backbone of the model to capture global and local features in magnetic tile detection images. In the neck, a hybrid attention module (HAM) is introduced to focus on the regions of interest in magnetic tile surface defect images, improving information transmission and fusion. The Focal-Enhanced Intersection over Union loss function (Focal-EIoU) is adopted to improve localization. We conducted comparative, ablation, and generalization experiments on the magnetic tile surface defect dataset. The results show that YOLOv8-AHF surpasses mainstream single-stage object detection algorithms on the evaluated metrics. Compared to the You Only Look Once version 8 (YOLOv8) algorithm, YOLOv8-AHF improves mAP@0.5, mAP@0.5:0.95, F1-Score, precision, and recall by 5.9%, 4.1%, 5%, 5%, and 5.8%, respectively. The algorithm achieves a significant performance improvement in detecting surface defects on magnetic tiles. Full article
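For reference, the Focal-EIoU loss that recurs throughout these results has a standard published form (Zhang et al., "Focal and Efficient IOU Loss for Accurate Bounding Box Regression", 2022). The listed articles adopt this loss or close variants, so any individual paper may modify it; the sketch below is the original formulation:

\[
\mathcal{L}_{\mathrm{EIoU}}
= 1 - \mathrm{IoU}
+ \frac{\rho^2(\mathbf{b},\mathbf{b}^{gt})}{c^2}
+ \frac{\rho^2(w,w^{gt})}{C_w^2}
+ \frac{\rho^2(h,h^{gt})}{C_h^2},
\qquad
\mathcal{L}_{\mathrm{Focal\text{-}EIoU}} = \mathrm{IoU}^{\gamma}\,\mathcal{L}_{\mathrm{EIoU}},
\]

where \(\mathbf{b}, w, h\) are the center, width, and height of the predicted box, \(\mathbf{b}^{gt}, w^{gt}, h^{gt}\) those of the ground-truth box, \(\rho(\cdot)\) is the Euclidean distance, \(c\), \(C_w\), and \(C_h\) are the diagonal, width, and height of the smallest box enclosing both, and \(\gamma\) is a focusing parameter that up-weights high-IoU (high-quality) examples during regression.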

18 pages, 10602 KiB  
Article
A Lightweight Network for UAV Multi-Scale Feature Fusion-Based Object Detection
by Sheng Deng and Yaping Wan
Information 2025, 16(3), 250; https://doi.org/10.3390/info16030250 - 20 Mar 2025
Viewed by 697
Abstract
To tackle the issues of small target sizes, missed detections, and false alarms in aerial drone imagery, alongside the constraints posed by limited hardware resources during model deployment, a streamlined object detection approach is proposed to enhance the performance of YOLOv8s. This approach introduces a new module, C2f_SEPConv, which incorporates Partial Convolution (PConv) and channel attention mechanisms (Squeeze-and-Excitation, SE), effectively replacing the previous bottleneck and minimizing both the model’s parameter count and computational demands. Modifications to the detection head allow it to perform more effectively in scenarios with small targets in aerial images. To capture multi-scale object information, a Multi-Scale Cross-Axis Attention (MSCA) mechanism is embedded within the backbone network. The neck network integrates a Multi-Scale Fusion Block (MSFB) to combine multi-level features, further boosting detection precision. Furthermore, the Focal-EIoU loss function supersedes the traditional CIoU loss function to address challenges related to the regression of small targets. Evaluations conducted on the VisDrone dataset reveal that the proposed method improves Precision, Recall, mAP0.5, and mAP0.5:0.95 by 4.4%, 5.6%, 6.4%, and 4%, respectively, compared to YOLOv8s, with a 28.3% reduction in parameters. On the DOTAv1.0 dataset, a 2.1% enhancement in mAP0.5 is observed. Full article
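The C2f_SEPConv module above is described only at the level of its ingredients, but both ingredients are standard: Partial Convolution (PConv, from FasterNet) convolves only a slice of the channels, and Squeeze-and-Excitation (SE) attention reweights channels from a globally pooled descriptor. A minimal PyTorch sketch of one plausible pairing follows; the SEPBottleneck wiring, split ratio, and residual connection are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SEAttention(nn.Module):
    """Squeeze-and-Excitation channel attention (Hu et al., 2018)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # excite: reweight channels

class PConv(nn.Module):
    """Partial convolution: apply a spatial conv to only a fraction of the channels."""
    def __init__(self, channels: int, ratio: float = 0.25, k: int = 3):
        super().__init__()
        self.cp = max(1, int(channels * ratio))      # channels that get convolved
        self.conv = nn.Conv2d(self.cp, self.cp, k, padding=k // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x[:, :self.cp], x[:, self.cp:]      # active slice / untouched slice
        return torch.cat([self.conv(x1), x2], dim=1)

class SEPBottleneck(nn.Module):
    """Hypothetical bottleneck pairing PConv and SE in the spirit of C2f_SEPConv."""
    def __init__(self, channels: int):
        super().__init__()
        self.pconv = PConv(channels)
        self.pw = nn.Conv2d(channels, channels, 1)   # pointwise mixing after PConv
        self.se = SEAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.se(self.pw(self.pconv(x)))   # residual connection
```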

21 pages, 4676 KiB  
Article
LCDDN-YOLO: Lightweight Cotton Disease Detection in Natural Environment, Based on Improved YOLOv8
by Haoran Feng, Xiqu Chen and Zhaoyan Duan
Agriculture 2025, 15(4), 421; https://doi.org/10.3390/agriculture15040421 - 17 Feb 2025
Cited by 4 | Viewed by 985
Abstract
To address the challenges of detecting cotton pests and diseases in natural environments, as well as the similarities in the features exhibited by cotton pests and diseases, a Lightweight Cotton Disease Detection in Natural Environment (LCDDN-YOLO) algorithm is proposed. The LCDDN-YOLO algorithm is based on YOLOv8n, and replaces part of the convolutional layers in the backbone network with Distributed Shift Convolution (DSConv). The BiFPN network is incorporated into the original architecture, adding learnable weights to evaluate the significance of various input features, thereby enhancing detection accuracy. Furthermore, it integrates Partial Convolution (PConv) and Distributed Shift Convolution (DSConv) into the C2f module, called PDS-C2f. Additionally, the CBAM attention mechanism is incorporated into the neck network to improve model performance. A Focal-EIoU loss function is also integrated to optimize the model’s training process. Experimental results show that compared to YOLOv8, the LCDDN-YOLO model reduces the number of parameters by 12.9% and the floating-point operations (FLOPs) by 9.9%, while precision, mAP@50, and recall improve by 4.6%, 6.5%, and 7.8%, respectively, reaching 89.5%, 85.4%, and 80.2%. In summary, the LCDDN-YOLO model offers excellent detection accuracy and speed, making it effective for pest and disease control in cotton fields, particularly in lightweight computing scenarios. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
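The "learnable weights" that BiFPN adds to its fusion nodes, mentioned in the abstract above, follow its fast normalized fusion: each incoming feature map gets a non-negative scalar that is learned and normalized before the weighted sum. A minimal sketch, assuming the standard BiFPN formulation (the WeightedFusion name and the ReLU used to keep weights non-negative are illustrative):

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """BiFPN-style fast normalized fusion of n feature maps with identical shapes."""
    def __init__(self, n_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))    # one learnable weight per input
        self.eps = eps

    def forward(self, feats: list) -> torch.Tensor:
        w = torch.relu(self.w)                         # keep weights non-negative
        w = w / (w.sum() + self.eps)                   # normalize so weights sum to ~1
        return sum(wi * f for wi, f in zip(w, feats))  # weighted sum of the inputs

# Usage: fuse an upsampled top-down feature with a lateral feature of equal shape.
fuse = WeightedFusion(n_inputs=2)
p4_td = fuse([torch.randn(1, 128, 40, 40), torch.randn(1, 128, 40, 40)])
```

In a full BiFPN, the fused map then passes through a (typically depthwise separable) convolution before moving on to the next node.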

17 pages, 3243 KiB  
Article
An Improved YOLOv5s-Based Algorithm for Unsafe Behavior Detection of Construction Workers in Construction Scenarios
by Yongqiang Liu, Pengxiang Wang and Haomin Li
Appl. Sci. 2025, 15(4), 1853; https://doi.org/10.3390/app15041853 - 11 Feb 2025
Cited by 1 | Viewed by 935
Abstract
Currently, the identification of unsafe behaviors among construction workers predominantly relies on manual methods, which are time-consuming, labor intensive, and inefficient. To enhance identification accuracy and ensure real-time performance, this paper proposes an enhanced YOLOv5s framework with three strategic improvements: (1) adoption of the Focal-EIoU loss function to resolve sample imbalance and localization inaccuracies in complex scenarios; (2) integration of the Coordinate Attention (CA) mechanism, which enhances spatial perception through channel-direction feature encoding, outperforming conventional SE blocks in positional sensitivity; and (3) development of a dedicated small-target detection layer to capture critical fine-grained features. Based on the improved model, a method for identifying unsafe behaviors of construction workers is proposed. Validated through a sluice renovation project in Jiangsu Province, the optimized model demonstrates a 3.6% higher recall (reducing missed detections) and a 2.2% mAP improvement over baseline, while maintaining a 42 FPS processing speed. The model effectively identifies unsafe behaviors at water conservancy construction sites, accurately detecting relevant unsafe actions, while meeting real-time performance requirements. Full article
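The Coordinate Attention (CA) mechanism credited above with better positional sensitivity than SE blocks differs from SE precisely in its pooling: instead of collapsing all spatial positions into one value per channel, it pools along height and width separately, so the attention map retains where along each axis the informative responses lie. A minimal sketch of the published CA design (Hou et al., 2021); the reduction ratio and Hardswish activation are the usual defaults and may differ from the paper's configuration:

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate Attention: factorize spatial pooling into H- and W-wise strips."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))    # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))    # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        xh = self.pool_h(x)                              # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)          # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                         # row-wise attention
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))     # column-wise attention
        return x * ah * aw                               # position-aware reweighting
```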

18 pages, 5370 KiB  
Article
Research on Blood Cell Image Detection Method Based on Fourier Ptychographic Microscopy
by Mingjing Li, Le Yang, Shu Fang, Xinyang Liu, Haijiao Yun, Xiaoli Wang, Qingyu Du, Ziqing Han and Junshuai Wang
Sensors 2025, 25(3), 882; https://doi.org/10.3390/s25030882 - 31 Jan 2025
Viewed by 816
Abstract
Fourier Ptychographic Microscopy (FPM) is a technology widely used in the field of pathology. It combines high-resolution with large field-of-view imaging and can reveal more image details. Red blood cells play an indispensable role in assessing the oxygen-carrying capacity of the human body and in screening for clinical diagnosis and treatment needs. In this paper, a blood cell dataset is constructed on an FPM experimental platform. Before training, four augmentation strategies are applied to the blood cell image data to improve the generalization and robustness of the model, and a blood cell detection algorithm, SCD-YOLOv7, is proposed. Firstly, the C-MP (Convolutional Max Pooling) module and the DELAN (Deep Efficient Learning Automotive Network) module are used in the feature extraction network to optimize feature extraction and improve the extraction of overlapping cell features by considering both channel and spatial characteristics. Secondly, the Sim-Head detection head fully exploits the global information of the deep feature maps and the local details of the shallow feature maps to improve small-target detection. Detection quality is reported as mAP (mean average precision), a comprehensive indicator for evaluating object detection algorithms that measures the accuracy and robustness of a model by averaging the precision (AP) over categories and thresholds. Finally, the Focal-EIoU (Focal Extended Intersection over Union) loss function is introduced, which not only speeds up model convergence but also significantly improves the accuracy of blood cell detection. Quantitative and qualitative analysis of ablation and comparative experiments shows that the detection accuracy of SCD-YOLOv7 on the blood cell dataset reaches 92.4%, an increase of 7.2%, while the computational load is reduced by 14.6 G. Full article
(This article belongs to the Section Sensing and Imaging)
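Since this abstract pauses to define mAP, the calculation it refers to can be made concrete. The sketch below computes the VOC-style all-point-interpolated AP for one class from ranked detections; it is a generic reference implementation with illustrative names, not the evaluation code used in the article:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """All-point-interpolated AP for one class.

    scores: per-detection confidence scores
    is_tp:  1 if the detection matches an unmatched ground-truth box at the
            chosen IoU threshold (e.g. 0.5), else 0
    n_gt:   number of ground-truth boxes of this class
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    recall = np.cumsum(tp) / max(n_gt, 1)
    precision = np.cumsum(tp) / (np.arange(len(tp)) + 1)
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]        # precision envelope
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# mAP averages this AP over classes; mAP@0.5:0.95 additionally averages over
# IoU thresholds from 0.5 to 0.95 in steps of 0.05.
```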

21 pages, 44945 KiB  
Article
Grape Target Detection Method in Orchard Environment Based on Improved YOLOv7
by Fuchun Sun, Qiurong Lv, Yuechao Bian, Renwei He, Dong Lv, Leina Gao, Haorong Wu and Xiaoxiao Li
Agronomy 2025, 15(1), 42; https://doi.org/10.3390/agronomy15010042 - 27 Dec 2024
Cited by 2 | Viewed by 791
Abstract
In response to the poor detection performance of grapes in orchards caused by issues such as leaf occlusion and fruit overlap, this study proposes an improved grape detection method named YOLOv7-MCSF based on the You Only Look Once v7 (YOLOv7) framework. Firstly, the original backbone network is replaced with MobileOne to achieve a lightweight improvement of the model, thereby reducing the number of parameters. In addition, a Channel Attention (CA) module was added to the neck network to reduce interference from the orchard background and to accelerate the inference speed. Secondly, the SPPFCSPC pyramid pooling is embedded to enhance the speed of image feature fusion while maintaining a consistent receptive field. Finally, the Focal-EIoU loss function is employed to optimize the regression prediction boxes, accelerating their convergence and improving regression accuracy. The experimental results indicate that, compared to the original YOLOv7 model, the YOLOv7-MCSF model achieves a 26.9% reduction in weight, an increase in frame rate of 21.57 f/s, and improvements in precision, recall, and mAP of 2.4%, 1.8%, and 3.5%, respectively. The improved model can efficiently and in real-time identify grape clusters, providing technical support for the deployment of mobile devices and embedded grape detection systems in orchard environments. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Crop Monitoring and Modelling)

17 pages, 7209 KiB  
Article
Sorghum Spike Detection Method Based on Gold Feature Pyramid Module and Improved YOLOv8s
by Shujin Qiu, Jian Gao, Mengyao Han, Qingliang Cui, Xiangyang Yuan and Cuiqing Wu
Sensors 2025, 25(1), 104; https://doi.org/10.3390/s25010104 - 27 Dec 2024
Cited by 1 | Viewed by 697
Abstract
In sorghum fields, high planting density, similar colors, and severe occlusion between spikes make sorghum spikes difficult to identify and detect, leading to low accuracy and high false and missed detection rates. To address this, this study proposes an improved sorghum spike detection method based on YOLOv8s. The method augments the information fusion capability of the YOLOv8 neck by integrating the Gold feature pyramid module, and refines the SPPF module with the LSKA attention mechanism to heighten the focus on critical features. To tackle class imbalance in sorghum detection and expedite model convergence, a loss function incorporating Focal-EIoU is employed. The resulting YOLOv8s-Gold-LSKA model, built on the Gold module and the LSKA attention mechanism, significantly enhances sorghum spike detection accuracy in natural field settings: it achieves a precision of 90.72%, a recall of 76.81%, a mean average precision (mAP) of 85.86%, and an F1-score of 81.19%, and it outperforms the YOLOv5s, SSD, and YOLOv8 detection models. This advancement provides technical support for the rapid and accurate recognition of multiple sorghum spike targets against natural field backgrounds, thereby improving sorghum yield estimation accuracy, contributing to increased sorghum production and harvest, and supporting the development of intelligent harvesting equipment for agricultural machinery. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)

28 pages, 8539 KiB  
Article
Enhancing YOLOv5 Performance for Small-Scale Corrosion Detection in Coastal Environments Using IoU-Based Loss Functions
by Qifeng Yu, Yudong Han, Yi Han, Xinjia Gao and Lingyu Zheng
J. Mar. Sci. Eng. 2024, 12(12), 2295; https://doi.org/10.3390/jmse12122295 - 13 Dec 2024
Cited by 3 | Viewed by 1799
Abstract
The high salinity, humidity, and oxygen-rich environments of coastal marine areas pose serious corrosion risks to metal structures, particularly in equipment such as ships, offshore platforms, and port facilities. With the development of artificial intelligence technologies, image recognition-based intelligent detection methods have provided effective support for corrosion monitoring in marine engineering structures. This study aims to explore the performance improvements of different modified YOLOv5 models in small-object corrosion detection tasks, focusing on five IoU-based improved loss functions and their optimization effects on the YOLOv5 model. First, the study utilizes corrosion testing data from the Zhoushan seawater station of the China National Materials Corrosion and Protection Science Data Center to construct a corrosion image dataset containing 1266 labeled images. Then, based on the improved IoU loss functions, five YOLOv5 models were constructed: YOLOv5-NWD, YOLOv5-Shape-IoU, YOLOv5-WIoU, YOLOv5-Focal-EIoU, and YOLOv5-SIoU. These models, along with the traditional YOLOv5 model, were trained using the dataset, and their performance was evaluated using metrics such as precision, recall, F1 score, and FPS. The results showed that YOLOv5-NWD performed the best across all metrics, with a 7.2% increase in precision and a 2.2% increase in F1 score. The YOLOv5-Shape-IoU model followed, with improvements of 4.5% in precision and 2.6% in F1 score. In contrast, the performance improvements of YOLOv5-Focal-EIoU, YOLOv5-SIoU, and YOLOv5-WIoU were more limited. Further analysis revealed that different IoU ratios significantly affected the performance of the YOLOv5-NWD model. Experiments showed that the 4:6 ratio yielded the highest precision, while the 6:4 ratio performed the best in terms of recall, F1 score, and confusion matrix results. In addition, this study conducted an assessment using four datasets of different sizes: 300, 600, 900, and 1266 images. The results indicate that increasing the size of the training dataset enables the model to find a better balance between precision and recall, that is, a higher F1 score, while also effectively improving the model’s processing speed. Therefore, the choice of an appropriate IoU ratio should be based on specific application needs to optimize model performance. This study provides theoretical support for small-object corrosion detection tasks, advances the development of loss function design, and enhances the detection accuracy and reliability of YOLOv5 in practical applications. Full article
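One reading of the "IoU ratio" experiments above (4:6 versus 6:4) is a weighted blend between the Normalized Wasserstein Distance (NWD) term and a plain IoU term inside the box regression loss. The sketch below follows the standard NWD definition, in which each box is modelled as a 2-D Gaussian; the constant c, the function names, and the blending formula are assumptions for illustration, not the authors' YOLOv5-NWD implementation:

```python
import math

def nwd(box1, box2, c: float = 12.8):
    """Normalized Wasserstein Distance between two (cx, cy, w, h) boxes.

    c is a dataset-dependent normalizer, often set near the average object size."""
    dx = box1[0] - box2[0]
    dy = box1[1] - box2[1]
    dw = (box1[2] - box2[2]) / 2.0
    dh = (box1[3] - box2[3]) / 2.0
    w2 = math.sqrt(dx * dx + dy * dy + dw * dw + dh * dh)  # 2nd-order Wasserstein distance
    return math.exp(-w2 / c)

def mixed_box_loss(box_pred, box_gt, iou, ratio: float = 0.4):
    """Blend an NWD term with an IoU term; ratio=0.4 corresponds to a 4:6 mix."""
    return ratio * (1.0 - nwd(box_pred, box_gt)) + (1.0 - ratio) * (1.0 - iou)
```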

24 pages, 6186 KiB  
Article
A Method for Detecting Tomato Maturity Based on Deep Learning
by Song Wang, Jianxia Xiang, Daqing Chen and Cong Zhang
Appl. Sci. 2024, 14(23), 11111; https://doi.org/10.3390/app142311111 - 28 Nov 2024
Cited by 3 | Viewed by 1998
Abstract
In complex scenes, factors such as tree branches and leaves occlusion, dense distribution of tomato fruits, and similarity of fruit color to the background color make it difficult to correctly identify the ripeness of the tomato fruits when harvesting them. Therefore, in this study, an improved YOLOv8 algorithm is proposed to address the problem of tomato fruit ripeness detection in complex scenarios, which is difficult to carry out accurately. The algorithm employs several technical means to improve detection accuracy and efficiency. First, Swin Transformer is used to replace the third C2f in the backbone part. The modeling of global and local information is realized through the self-attention mechanism, which improves the generalization ability and feature extraction ability of the model, thereby bringing higher detection accuracy. Secondly, the C2f convolution in the neck section is replaced with Distribution Shifting Convolution, so that the model can better process spatial information and further improve the object detection accuracy. In addition, by replacing the original CIOU loss function with the Focal–EIOU loss function, the problem of sample imbalance is solved and the detection performance of the model in complex scenarios is improved. After improvement, the mAP of the model increased by 2.3%, and the Recall increased by 6.8% on the basis of YOLOv8s, and the final mAP and Recall reached 86.9% and 82.0%, respectively. The detection speed of the improved model reaches 190.34 FPS, which meets the demand of real-time detection. The results show that the improved YOLOv8 algorithm proposed in this study exhibits excellent performance in the task of tomato ripeness detection in complex scenarios, providing important experience and guidance for tomato ripeness detection. Full article
(This article belongs to the Special Issue Recent Advances in Precision Farming and Digital Agriculture)

20 pages, 9461 KiB  
Article
Vehicle Target Detection Using the Improved YOLOv5s Algorithm
by Zhaopeng Dong
Electronics 2024, 13(23), 4672; https://doi.org/10.3390/electronics13234672 - 26 Nov 2024
Cited by 2 | Viewed by 796
Abstract
This paper explores the application of the YOLOv5s algorithm integrated with the DeepSORT tracking algorithm to vehicle target detection, leveraging improvements in data processing, loss function, network structure, and training strategy. For bounding box regression, adopting Focal-EIoU improves vehicle detection accuracy by measuring overlap more precisely and handling complex scenarios better, enhancing overall performance. The CoordConv convolution layer, which carries additional spatial position information, is employed to enhance the convolution layers of the original network and improve vehicle positioning accuracy. The principle and effectiveness of the Shuffle Attention mechanism are analyzed, and the mechanism is added to the YOLOv5s network to strengthen training and improve detection accuracy and running speed. The DeepSORT tracking algorithm is used to achieve high-speed operation and high-accuracy matching in target tracking, enabling efficient and reliable tracking of objects, and the network structure is further optimized for speed and performance. To meet the requirements of vehicle detection in practical transportation systems, real-world vehicle images are collected as a dataset for model training. The results show that the precision P of the improved YOLOv5s algorithm increases by 0.484%, and mAP_0.5:0.95 reaches 92.221%, an increase of 1.747%. Full article
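The CoordConv layer referenced above is a small change to an ordinary convolution: two extra channels holding the normalized x and y coordinates are concatenated to the input, letting the filter condition on position. A minimal sketch of the standard formulation (Liu et al., 2018); where exactly such layers replace the original convolutions in the YOLOv5s network is specific to the article:

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """CoordConv: append normalized x/y coordinate channels before a convolution."""
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))   # two extra coordinate channels
```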

16 pages, 3898 KiB  
Article
APD-YOLOv7: Enhancing Sustainable Farming through Precise Identification of Agricultural Pests and Diseases Using a Novel Diagonal Difference Ratio IOU Loss
by Jianwen Li, Shutian Liu, Dong Chen, Shengbang Zhou and Chuanqi Li
Sustainability 2024, 16(20), 8855; https://doi.org/10.3390/su16208855 - 13 Oct 2024
Cited by 4 | Viewed by 1621
Abstract
The diversity and complexity of the agricultural environment pose significant challenges for the collection of pest and disease data. Additionally, pest and disease datasets often suffer from uneven distribution in quantity and inconsistent annotation standards. Enhancing the accuracy of pest and disease recognition remains a challenge for existing models. We constructed a representative agricultural pest and disease dataset, FIP6Set, through a combination of field photography and web scraping. This dataset encapsulates key issues encountered in existing agricultural pest and disease datasets. Referencing existing bounding box regression (BBR) loss functions, we reconsidered their geometric features and proposed a novel bounding box similarity comparison metric, DDRIoU, suited to the characteristics of agricultural pest and disease datasets. By integrating the focal loss concept with the DDRIoU loss, we derived a new loss function, namely Focal-DDRIoU loss. Furthermore, we modified the network structure of YOLOV7 by embedding the MobileViTv3 module. Consequently, we introduced a model specifically designed for agricultural pest and disease detection in precision agriculture. We conducted performance evaluations on the FIP6Set dataset using mAP75 as the evaluation metric. Experimental results demonstrate that the Focal-DDRIoU loss achieves improvements of 1.12%, 1.24%, 1.04%, and 1.50% compared to the GIoU, DIoU, CIoU, and EIoU losses, respectively. When employing the GIoU, DIoU, CIoU, EIoU, and Focal-DDRIoU loss functions, the adjusted network structure showed enhancements of 0.68%, 0.68%, 0.78%, 0.60%, and 0.56%, respectively, compared to the original YOLOv7. Furthermore, the proposed model outperformed the mainstream YOLOv7 and YOLOv5 models by 1.86% and 1.60%, respectively. The superior performance of the proposed model in detecting agricultural pests and diseases directly contributes to reducing pesticide misuse, preventing large-scale pest and disease outbreaks, and ultimately enhancing crop yields. These outcomes strongly support the promotion of sustainable agricultural development. Full article

14 pages, 2621 KiB  
Article
Yolo Based Defects Detection Algorithm for EL in PV Modules with Focal and Efficient IoU Loss
by Shen Ding, Wanting Jing, Hao Chen and Congyan Chen
Appl. Sci. 2024, 14(17), 7493; https://doi.org/10.3390/app14177493 - 24 Aug 2024
Cited by 2 | Viewed by 1747
Abstract
In electroluminescence (EL) defect detection for photovoltaic (PV) cell systems, many factors degrade performance, including defect diversity, data imbalance, and scale differences. An effective EL defect detection solution based on a Focal-EIoU loss and an improved YOLOv5 is proposed. Firstly, by analyzing the detection background and the scale characteristics of EL defects, a binary classification is carried out in the system. Subsequently, a cascade detection network based on YOLOv5 is designed to further extract features from the binary-classified defects, achieving defect localization and classification. To address the problem of imbalanced defect samples, a loss function is designed based on EIoU and a Focal-F1 loss. Experimental results demonstrate the effectiveness of the approach: compared with existing CNN-based deep learning approaches, the proposed focal-loss-based method handles sample imbalance more effectively. Moreover, in the detection of 12 types of defects, the improved YOLOv5 models consistently obtain higher mAP (mean average precision) across parameter scales (YOLOv5m: 0.791 vs. 0.857, YOLOv5l: 0.798 vs. 0.862, YOLOv5x: 0.802 vs. 0.867, YOLOv5s: 0.793 vs. 0.865). Full article
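The "focal loss concept" that this article folds into its EIoU- and F1-based loss is, in its original classification form (Lin et al., 2017),

\[
\mathrm{FL}(p_t) = -\,\alpha_t\,(1 - p_t)^{\gamma}\,\log(p_t),
\]

where \(p_t\) is the predicted probability of the true class, \(\gamma\) (typically 2) down-weights easy, well-classified examples so training concentrates on hard ones, and \(\alpha_t\) balances class frequencies. How this weighting is combined with the EIoU and F1-based terms is specific to the article's design.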

27 pages, 7948 KiB  
Article
LTSCD-YOLO: A Lightweight Algorithm for Detecting Typical Satellite Components Based on Improved YOLOv8
by Zixuan Tang, Wei Zhang, Junlin Li, Ran Liu, Yansong Xu, Siyu Chen, Zhiyue Fang and Fuchenglong Zhao
Remote Sens. 2024, 16(16), 3101; https://doi.org/10.3390/rs16163101 - 22 Aug 2024
Cited by 3 | Viewed by 2893
Abstract
Typical satellite component detection is an application-valuable and challenging research field. Currently, there are many algorithms for detecting typical satellite components, but due to the limited storage space and computational resources in the space environment, these algorithms generally have the problem of excessive parameter count and computational load, which hinders their effective application in space environments. Furthermore, the scale of datasets used by these algorithms is not large enough to train the algorithm models well. To address the above issues, this paper first applies YOLOv8 to the detection of typical satellite components and proposes a Lightweight Typical Satellite Components Detection algorithm based on improved YOLOv8 (LTSCD-YOLO). Firstly, it adopts the lightweight network EfficientNet-B0 as the backbone network to reduce the model’s parameter count and computational load; secondly, it uses a Cross-Scale Feature-Fusion Module (CCFM) at the Neck to enhance the model’s adaptability to scale changes; then, it integrates Partial Convolution (PConv) into the C2f (Faster Implementation of CSP Bottleneck with two convolutions) module and Re-parameterized Convolution (RepConv) into the detection head to further achieve model lightweighting; finally, the Focal-Efficient Intersection over Union (Focal-EIoU) is used as the loss function to enhance the model’s detection accuracy and detection speed. Additionally, a larger-scale Typical Satellite Components Dataset (TSC-Dataset) is also constructed. Our experimental results show that LTSCD-YOLO can maintain high detection accuracy with minimal parameter count and computational load. Compared to YOLOv8s, LTSCD-YOLO improved the mean average precision (mAP50) by 1.50% on the TSC-Dataset, reaching 94.5%. Meanwhile, the model’s parameter count decreased by 78.46%, the computational load decreased by 65.97%, and the detection speed increased by 17.66%. This algorithm achieves a balance between accuracy and light weight, and its generalization ability has been validated on real images, making it effectively applicable to detection tasks of typical satellite components in space environments. Full article
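Of the lightweighting techniques listed, the Re-parameterized Convolution (RepConv) placed in the detection head is the least self-explanatory: it trains with parallel branches but deploys as a single convolution. A minimal sketch of that merge, assuming the common 3x3 + 1x1 two-branch form and omitting the BatchNorm folding a full RepConv block also performs (class and method names are illustrative):

```python
import torch.nn as nn
import torch.nn.functional as F

class RepConvBranches(nn.Module):
    """Training-time block with parallel 3x3 and 1x1 branches; fuse() collapses
    them into one 3x3 convolution with identical output (re-parameterization)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=True)
        self.conv1 = nn.Conv2d(channels, channels, 1, bias=True)

    def forward(self, x):
        return F.relu(self.conv3(x) + self.conv1(x))

    def fuse(self) -> nn.Conv2d:
        fused = nn.Conv2d(self.conv3.in_channels, self.conv3.out_channels,
                          3, padding=1, bias=True)
        k1_padded = F.pad(self.conv1.weight, [1, 1, 1, 1])   # center the 1x1 kernel in a 3x3
        fused.weight.data = self.conv3.weight.data + k1_padded
        fused.bias.data = self.conv3.bias.data + self.conv1.bias.data
        return fused       # single conv; apply the same activation after it at inference
```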

20 pages, 4591 KiB  
Article
On-Line Detection Method of Salted Egg Yolks with Impurities Based on Improved YOLOv7 Combined with DeepSORT
by Dongjun Gong, Shida Zhao, Shucai Wang, Yuehui Li, Yong Ye, Lianfei Huo and Zongchun Bai
Foods 2024, 13(16), 2562; https://doi.org/10.3390/foods13162562 - 16 Aug 2024
Cited by 1 | Viewed by 1385
Abstract
Salted duck egg yolk, a key ingredient in various specialty foods in China, frequently contains broken eggshell fragments embedded in the yolk due to high-speed shell-breaking processes, which pose significant food safety risks. This paper presents an online detection method, YOLOv7-SEY-DeepSORT (salted egg yolk, SEY), designed to integrate an enhanced YOLOv7 with DeepSORT for real-time and accurate identification of salted egg yolks with impurities on production lines. The proposed method utilizes YOLOv7 as the core network, incorporating multiple Coordinate Attention (CA) modules in its Neck section to enhance the extraction of subtle eggshell impurities. To address the impact of imbalanced sample proportions on detection accuracy, the Focal-EIoU loss function is employed, adaptively adjusting bounding box loss values to ensure precise localization of yolks with impurities in images. The backbone network is replaced with the lightweight MobileOne neural network to reduce model parameters and improve real-time detection performance. DeepSORT is used for matching and tracking yolk targets across frames, accommodating rotational variations. Experimental results demonstrate that YOLOv7-SEY-DeepSORT achieves a mean average precision (mAP) of 0.931, reflecting a 0.53% improvement over the original YOLOv7. The method also shows enhanced tracking performance, with Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP) scores of 87.9% and 73.8%, respectively, representing increases of 17.0% and 9.8% over SORT and 2.9% and 4.7% over Tracktor. Overall, the proposed method balances high detection accuracy with real-time performance, surpassing other mainstream object detection methods in comprehensive performance. Thus, it provides a robust solution for the rapid and accurate detection of defective salted egg yolks and offers a technical foundation and reference for future research on the automated and safe processing of egg products. Full article
(This article belongs to the Section Food Analytical Methods)
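For readers unfamiliar with the tracking metrics quoted in this entry, MOTA and MOTP are the standard CLEAR-MOT measures. MOTA aggregates the three error types over all frames t,

\[
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_t \mathrm{GT}_t},
\]

where FN, FP, IDSW, and GT are missed targets, false positives, identity switches, and ground-truth objects per frame; MOTP is the average localization agreement (distance- or IoU-based, depending on the variant) over all matched detection–ground-truth pairs. The exact matching threshold used is implementation-dependent and is not stated in the abstract.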

16 pages, 7607 KiB  
Article
YOLOv7-Branch: A Jujube Leaf Branch Detection Model for Agricultural Robot
by Ruijun Jing, Jijiang Xu, Jingkai Liu, Xiongwei He and Zhiguo Zhao
Sensors 2024, 24(15), 4856; https://doi.org/10.3390/s24154856 - 26 Jul 2024
Cited by 2 | Viewed by 1179
Abstract
The intelligent harvesting technology for jujube leaf branches presents a novel avenue for enhancing both the quantity and quality of jujube leaf tea, whereas the precise detection technology for jujube leaf branches emerges as a pivotal factor constraining its development. The precise identification and localization of jujube leaf branches using real-time object detection technology are crucial steps toward achieving intelligent harvesting. When integrated into real-world scenarios, issues such as the background noise introduced by tags, occlusions, and variations in jujube leaf morphology constrain the accuracy of detection and the precision of localization. To address these issues, we describe a jujube leaf branch object detection network based on YOLOv7. First, the Polarized Self-Attention module is embedded into the convolutional layer, and the Gather-Excite module is embedded into the concat layer to incorporate spatial information, thus achieving the suppression of irrelevant information such as background noise. Second, we incorporate implicit knowledge into the Efficient Decoupled Head and replace the original detection head, enhancing the network’s capability to extract deep features. Third, to address the issue of imbalanced jujube leaf samples, we employ Focal-EIoU as the bounding box loss function to expedite the regression prediction and enhance the localization accuracy of the model’s bounding boxes. Experiments show that the precision of our model is 85%, which is increased by 3.5% compared to that of YOLOv7-tiny. The mAP@0.5 value is 83.7%. Our model’s recognition rate, recall and mean average precision are superior to those of other models. Our method could provide technical support for yield estimation in the intelligent management of jujube orchards. Full article
(This article belongs to the Section Smart Agriculture)
