
Search Results (102)

Search Parameters:
Keywords = modified YOLOv5s

15 pages, 4592 KiB  
Article
SSAM_YOLOv5: YOLOv5 Enhancement for Real-Time Detection of Small Road Signs
by Fatima Qanouni, Hakim El Massari, Noreddine Gherabi and Maria El-Badaoui
Digital 2025, 5(3), 30; https://doi.org/10.3390/digital5030030 - 29 Jul 2025
Viewed by 246
Abstract
Many traffic-sign detection systems are available to assist drivers under challenging conditions such as small and distant signs, multiple signs on the road, and objects that resemble signs. Real-time object detection is an indispensable aspect of these systems, with detection speed and efficiency being critical parameters. To improve these parameters and enhance road-sign detection under diverse conditions, we propose SSAM_YOLOv5, a comprehensive methodology for feature extraction and small-road-sign detection, based on a modified version of YOLOv5s. First, we introduced attention modules into the backbone to focus on the region of interest within video frames; second, we replaced the activation function with the SwishT_C activation function to enhance feature extraction and balance inference speed, precision, and mean average precision (mAP@50). Compared to the YOLOv5 baseline, the proposed improvements achieved remarkable increases of 1.4% and 1.9% in mAP@50 on the Tiny LISA and GTSDB datasets, respectively, confirming their effectiveness.
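The activation swap described above can be illustrated in isolation. This is a minimal sketch assuming the SwishT family's general shape (Swish plus a scaled tanh term); the exact SwishT_C parameterization and constants are defined in the paper itself, so `alpha` below is an illustrative placeholder, not the authors' value.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def swish(x, beta=1.0):
    """Standard Swish/SiLU: x * sigmoid(beta * x)."""
    return x * sigmoid(beta * x)

def swish_t_like(x, beta=1.0, alpha=0.1):
    """Sketch of a SwishT-style variant: Swish plus a tanh term.
    alpha is illustrative; the paper's SwishT_C defines its own form."""
    return x * sigmoid(beta * x) + alpha * math.tanh(x)
```

In a YOLOv5s fork, such a function would replace the SiLU activations inside the backbone's Conv blocks.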

21 pages, 4863 KiB  
Article
Detection Model for Cotton Picker Fire Recognition Based on Lightweight Improved YOLOv11
by Zhai Shi, Fangwei Wu, Changjie Han, Dongdong Song and Yi Wu
Agriculture 2025, 15(15), 1608; https://doi.org/10.3390/agriculture15151608 - 25 Jul 2025
Viewed by 264
Abstract
In response to the limited research on fire detection in cotton pickers and the issue of low detection accuracy in visual inspection, this paper proposes a computer vision-based detection method. The method is optimized according to the structural characteristics of cotton pickers, and a lightweight improved YOLOv11 algorithm is designed for cotton fire detection in cotton pickers. The backbone of the model is replaced with the MobileNetV2 network to achieve effective model lightweighting. In addition, the convolutional layers in the original C3k2 block are optimized using partial convolutions to reduce computational redundancy and improve inference efficiency. Furthermore, a visual attention mechanism named CBAM-ECA (Convolutional Block Attention Module-Efficient Channel Attention) is designed to suit the complex working conditions of cotton pickers. This mechanism aims to enhance the model’s feature extraction capability under challenging environmental conditions, thereby improving overall detection accuracy. To further improve localization performance and accelerate convergence, the loss function is also modified. These improvements enable the model to achieve higher precision in fire detection while ensuring fast and accurate localization. Experimental results demonstrate that the improved model reduces the number of parameters by 38%, increases the frame processing speed (FPS) by 13.2%, and decreases the computational complexity (GFLOPs) by 42.8%, compared to the original model. The detection accuracy for flaming combustion, smoldering combustion, and overall detection is improved by 1.4%, 3%, and 1.9%, respectively, with an increase of 2.4% in mAP (mean average precision). Compared to other models—YOLOv3-tiny, YOLOv5, YOLOv8, and YOLOv10—the proposed method achieves higher detection accuracy by 5.9%, 7%, 5.9%, and 5.3%, respectively, and shows improvements in mAP by 5.4%, 5%, 4.8%, and 6.3%. 
The improved detection algorithm maintains high accuracy while achieving faster inference speed and fewer model parameters. These improvements lay a solid foundation for fire prevention and suppression in cotton collection boxes on cotton pickers.
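The ECA half of the CBAM-ECA module described above chooses its 1-D convolution kernel size adaptively from the channel count. A sketch of that standard ECA rule (the paper's CBAM-ECA hybrid adds CBAM-style spatial attention on top, which is not shown here):

```python
import math

def eca_kernel_size(channels, gamma=2, b=1):
    """Adaptive 1-D kernel size from the ECA formulation:
    t = |log2(C)/gamma + b/gamma|, floored and bumped to the
    nearest odd integer, following the reference implementation."""
    t = int(abs(math.log2(channels) / gamma + b / gamma))
    return t if t % 2 else t + 1
```

Wider layers thus get wider cross-channel interaction: 64 channels map to a kernel of 3, 256 channels to 5.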
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

30 pages, 15434 KiB  
Article
A DSP–FPGA Heterogeneous Accelerator for On-Board Pose Estimation of Non-Cooperative Targets
by Qiuyu Song, Kai Liu, Shangrong Li, Mengyuan Wang and Junyi Wang
Aerospace 2025, 12(7), 641; https://doi.org/10.3390/aerospace12070641 - 19 Jul 2025
Viewed by 311
Abstract
The increasing presence of non-cooperative targets poses significant challenges to the space environment and threatens the sustainability of aerospace operations. Accurate on-orbit perception of such targets, particularly those without cooperative markers, requires advanced algorithms and efficient system architectures. This study presents a hardware–software co-design framework for the pose estimation of non-cooperative targets. Firstly, a two-stage architecture is proposed, comprising object detection and pose estimation. YOLOv5s is modified with a Focus module to enhance feature extraction, and URSONet adopts global average pooling to reduce the computational burden. Optimization techniques, including batch normalization fusion, ReLU integration, and linear quantization, are applied to improve inference efficiency. Secondly, a customized FPGA-based accelerator is developed with an instruction scheduler, memory slicing mechanism, and computation array. Instruction-level control supports model generalization, while a weight concatenation strategy improves resource utilization during convolution. Finally, a heterogeneous DSP–FPGA system is implemented, where the DSP manages data pre-processing and result integration, and the FPGA performs core inference. The system is deployed on a Xilinx X7K325T FPGA operating at 200 MHz. Experimental results show that the optimized model achieves a peak throughput of 399.16 GOP/s with less than 1% accuracy loss. The proposed design reaches 0.461 and 0.447 GOP/s/DSP48E1 for two model variants, achieving a 2× to 3× improvement over comparable designs.
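Of the optimizations listed, batch-normalization fusion is purely arithmetic and easy to sketch. Per channel (scalar here for brevity), the BN affine transform folds into the preceding convolution's weight and bias, so inference needs one multiply-add instead of two:

```python
def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm statistics (gamma, beta, running mean/var)
    into the preceding convolution's weight w and bias b."""
    scale = gamma / (var + eps) ** 0.5
    return w * scale, (b - mean) * scale + beta

def conv(x, w, b):
    """Scalar stand-in for a convolution output."""
    return w * x + b

def bn(y, gamma, beta, mean, var, eps=1e-5):
    """BatchNorm in inference mode."""
    return gamma * (y - mean) / (var + eps) ** 0.5 + beta
```

The fused layer reproduces conv-then-BN exactly, which is why the fusion costs no accuracy.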
(This article belongs to the Section Astronautics & Space Science)

20 pages, 4488 KiB  
Article
OMB-YOLO-tiny: A Lightweight Detection Model for Damaged Pleurotus ostreatus Based on Enhanced YOLOv8n
by Lei Shi, Zhuo Bai, Xiangmeng Yin, Zhanchen Wei, Haohai You, Shilin Liu, Fude Wang, Xuexi Qi, Helong Yu, Chunguang Bi and Ruiqing Ji
Horticulturae 2025, 11(7), 744; https://doi.org/10.3390/horticulturae11070744 - 27 Jun 2025
Viewed by 304
Abstract
Pleurotus ostreatus, classified under the phylum Basidiomycota, order Agaricales, and family Pleurotaceae, is a prevalent gray edible fungus. Its physical damage not only compromises quality and appearance but also significantly diminishes market value. This study proposes an enhanced method for detecting Pleurotus ostreatus damage based on an improved YOLOv8n model, aiming to advance the accessibility of damage recognition technology, enhance automation in Pleurotus cultivation, and reduce labor dependency. This approach serves as a pivotal step in advancing agricultural modernization in China, while providing valuable references for subsequent research. Utilizing a self-collected, self-organized, and self-constructed dataset, we modified the feature extraction module of the original YOLOv8n by integrating a lightweight GhostHGNetv2 backbone network. During the feature fusion stage, the original YOLOv8 components were replaced with a lightweight SlimNeck network, and an Attentional Scale Sequence Fusion (ASF) mechanism was incorporated into the feature fusion architecture, resulting in the proposed OMB-YOLO model. This model achieves a remarkable balance between parameter efficiency and detection accuracy, attaining a parameter count of 2.24 M and a mAP@0.5 of 90.11% on the test set. To further lighten the model, the DepGraph method was applied to prune the OMB-YOLO model, yielding the OMB-YOLO-tiny variant. Experimental evaluations on the damaged Pleurotus dataset demonstrate that the OMB-YOLO-tiny model outperforms mainstream models in both accuracy and inference speed while reducing parameters by nearly half. With a parameter count of 1.72 M and mAP@0.5 of 90.14%, the OMB-YOLO-tiny model emerges as an optimal solution for detecting Pleurotus ostreatus damage. These results validate its efficacy and practical applicability in agricultural quality control systems.
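The pruning step can be sketched at the per-layer level. A hedged illustration: rank output channels by the L1 norm of their weights and keep the strongest fraction. DepGraph, as used in the paper, additionally groups dependent layers so pruned channels stay consistent across the whole network; that dependency bookkeeping is omitted here.

```python
def prune_channels(weights, keep_ratio=0.5):
    """Rank output channels (rows of `weights`) by L1 norm and
    return the indices of the strongest keep_ratio fraction,
    in their original order."""
    norms = [(sum(abs(v) for v in ch), i) for i, ch in enumerate(weights)]
    keep = max(1, int(len(weights) * keep_ratio))
    kept = sorted(sorted(norms, reverse=True)[:keep], key=lambda t: t[1])
    return [i for _, i in kept]
```

Pruning roughly half the channels in this way is consistent in spirit with the near-halving of parameters reported above, though the paper's criterion and per-layer ratios are its own.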

24 pages, 15144 KiB  
Article
Evaluation of Deep Learning Models for Insects Detection at the Hive Entrance for a Bee Behavior Recognition System
by Gabriela Vdoviak, Tomyslav Sledevič, Artūras Serackis, Darius Plonis, Dalius Matuzevičius and Vytautas Abromavičius
Agriculture 2025, 15(10), 1019; https://doi.org/10.3390/agriculture15101019 - 8 May 2025
Viewed by 808
Abstract
Monitoring insect activity at hive entrances is essential for advancing precision beekeeping practices by enabling non-invasive, real-time assessment of the colony’s health and early detection of potential threats. This study evaluates deep learning models for detecting worker bees, pollen-bearing bees, drones, and wasps, comparing different YOLO-based architectures optimized for real-time inference on an RTX 4080 Super and Jetson AGX Orin. A new publicly available dataset with diverse environmental conditions was used for training and validation. Performance comparisons showed that modified YOLOv8 models achieved a better precision–speed trade-off relative to other YOLO-based architectures, enabling efficient deployment on embedded platforms. Results indicate that model modifications enhance detection accuracy while reducing inference time, particularly for small object classes such as pollen. The study explores the impact of different annotation strategies on classification performance and tracking consistency. The findings demonstrate the feasibility of deploying AI-powered hive monitoring systems on embedded platforms, with potential applications in precision beekeeping and pollination surveillance.

22 pages, 8831 KiB  
Article
YOLOv8n-SMMP: A Lightweight YOLO Forest Fire Detection Model
by Nianzu Zhou, Demin Gao and Zhengli Zhu
Fire 2025, 8(5), 183; https://doi.org/10.3390/fire8050183 - 3 May 2025
Cited by 4 | Viewed by 1256
Abstract
Global warming has driven a marked increase in forest fire occurrences, underscoring the critical need for timely and accurate detection to mitigate fire-related losses. Existing forest fire detection algorithms face limitations in capturing flame and smoke features in complex natural environments, coupled with high computational complexity and inadequate lightweight design for practical deployment. To address these challenges, this paper proposes an enhanced forest fire detection model, YOLOv8n-SMMP (SlimNeck–MCA–MPDIoU–Pruned), based on the YOLO framework. Key innovations include the following: introducing the SlimNeck solution to streamline the neck network by replacing conventional convolutions with Group Shuffling Convolution (GSConv) and substituting the Cross-convolution with 2 filters (C2f) module with the lightweight VoV-based Group Shuffling Cross-Stage Partial Network (VoV-GSCSP) feature extraction module; integrating the Multi-dimensional Collaborative Attention (MCA) mechanism between the neck and head networks to enhance focus on fire-related regions; adopting the Minimum Point Distance Intersection over Union (MPDIoU) loss function to optimize bounding box regression during training; and implementing selective channel pruning tailored to the modified network architecture. The experimental results reveal that, relative to the baseline model, the optimized lightweight model achieves a 3.3% enhancement in detection accuracy (mAP@0.5), slashes the parameter count by 31%, and reduces computational overhead by 33%. These advancements underscore the model’s superior performance in real-time forest fire detection, outperforming other mainstream lightweight YOLO models in both accuracy and efficiency.
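The MPDIoU loss adopted above penalizes the distances between matching corners of the predicted and ground-truth boxes, normalized by the image dimensions. A minimal sketch of the similarity term (the training loss is then typically 1 − MPDIoU):

```python
def mpd_iou(box_a, box_b, img_w, img_h):
    """MPDIoU similarity for axis-aligned (x1, y1, x2, y2) boxes:
    IoU minus the squared top-left and bottom-right corner
    distances, each normalized by img_w**2 + img_h**2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union else 0.0
    d1 = (ax1 - bx1) ** 2 + (ay1 - by1) ** 2  # top-left distance²
    d2 = (ax2 - bx2) ** 2 + (ay2 - by2) ** 2  # bottom-right distance²
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm
```

Unlike plain IoU, the corner terms still give a useful gradient when boxes barely overlap, which helps bounding-box regression converge.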
(This article belongs to the Special Issue Intelligent Forest Fire Prediction and Detection)

21 pages, 6261 KiB  
Article
Vehicle Recognition and Driving Information Detection with UAV Video Based on Improved YOLOv5-DeepSORT Algorithm
by Binshuang Zheng, Jing Zhou, Zhengqiang Hong, Junyao Tang and Xiaoming Huang
Sensors 2025, 25(9), 2788; https://doi.org/10.3390/s25092788 - 28 Apr 2025
Viewed by 597
Abstract
To investigate whether the skid resistance of a ramp meets the requirements of vehicle driving safety and stability, simulation with an ideal driver model is inaccurate. Therefore, considering drivers’ actual driving habits, this paper proposes the use of unmanned aerial vehicles (UAVs) for the collection and extraction of vehicle driving information. To process the collected UAV video, the Google Colaboratory platform is used to modify and compile the “You Only Look Once” version 5 (YOLOv5) algorithm with Python 3.7.12, and YOLOv5 is retrained with the captured video. The results show that the precision P and recall R are satisfactory, with an F1 value of 0.86, reflecting a good P-R relationship. The loss function also stabilized at a very low level after 70 training epochs. The trained YOLOv5 then replaces the Faster R-CNN detector in the DeepSORT algorithm to improve detection accuracy and speed and to extract vehicle driving information from the UAV’s perspective. The coordinate information of the vehicle trajectory is extracted programmatically, the trajectory is smoothed, and the frame-difference method is used to calculate real-time speed information, which facilitates the establishment of a real driver model.
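The smoothing and frame-difference speed computation the abstract describes can be sketched as follows. The moving-average window and `metres_per_pixel` (the ground-sampling scale of the UAV footage) are illustrative assumptions, not values from the paper:

```python
def frame_diff_speed(track, fps, metres_per_pixel, window=3):
    """Given a pixel trajectory [(x, y), ...], smooth each axis with
    a moving average, then return per-frame speeds in m/s from the
    distance between consecutive smoothed positions."""
    def smooth(vals):
        out = []
        for i in range(len(vals)):
            lo, hi = max(0, i - window // 2), min(len(vals), i + window // 2 + 1)
            out.append(sum(vals[lo:hi]) / (hi - lo))
        return out
    xs = smooth([p[0] for p in track])
    ys = smooth([p[1] for p in track])
    return [((xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2) ** 0.5
            * metres_per_pixel * fps for i in range(1, len(track))]
```

For a vehicle tracked at one pixel per frame in 30 fps footage at 0.1 m/pixel, the interior estimates come out at 3 m/s.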

21 pages, 12927 KiB  
Article
Defect Detection Method of Carbon Fiber Unidirectional Band Prepreg Based on Enhanced YOLOv8s
by Weipeng Su, Mei Sang, Yutong Liu and Xueming Wang
Sensors 2025, 25(9), 2665; https://doi.org/10.3390/s25092665 - 23 Apr 2025
Cited by 1 | Viewed by 593
Abstract
To address the challenges in existing carbon fiber prepreg surface defect detection processes, specifically difficulties in small target detection and inaccuracies in elongated defects with large aspect ratios, this study proposes enhanced YOLOv8s-based models for defect detection on the surface of carbon fiber unidirectional band prepreg. The proposed models integrate two attention mechanisms, a Global Attention Mechanism (GAM) and a Deformable Large Kernel Attention (DLKA) mechanism, into the architecture of YOLOv8s, a lightweight version of YOLO optimized for speed. Each attention mechanism is inserted between the backbone network and the detection head to enhance feature extraction before target localization. The YOLOv8s-GAM model achieves a mean average precision (mAP@0.5) of 84.4%, precision of 79.3%, and recall of 78.3%, while the YOLOv8s-DLKA model shows improved performance with mAP@0.5 of 86.4%, precision of 82.4%, and recall of 80.2%. Compared with the original YOLOv8s model, these two modified models demonstrate improvements in mAP@0.5 of 1.6% and 3.6%, precision gains of 0.9% and 4.1%, and recall enhancements of 1.4% and 3.6%, respectively. These models provide technical solutions for precise defect identification on the surface of carbon fiber unidirectional band prepreg.
(This article belongs to the Section Fault Diagnosis & Sensors)

25 pages, 13110 KiB  
Article
An Improved Unmanned Aerial Vehicle Forest Fire Detection Model Based on YOLOv8
by Bensheng Yun, Xiaohan Xu, Jie Zeng, Zhenyu Lin, Jing He and Qiaoling Dai
Fire 2025, 8(4), 138; https://doi.org/10.3390/fire8040138 - 31 Mar 2025
Cited by 2 | Viewed by 898
Abstract
Forest fires have a great destructive impact on the Earth’s ecosystem; therefore, the top priority of current research is how to accurately and quickly monitor forest fires. Taking into account efficiency and cost-effectiveness, deep-learning-driven UAV remote sensing fire detection algorithms have emerged as a favored research trend and have seen extensive application. However, in the process of drone monitoring, fires often appear very small and are easily obstructed by trees, which greatly limits the amount of effective information that algorithms can extract. Meanwhile, considering the limitations of unmanned aerial vehicles, the algorithm model also needs to have lightweight characteristics. To address challenges such as the small targets, occlusions, and image blurriness in UAV-captured wildfire images, this paper proposes an improved UAV forest fire detection model based on YOLOv8. Firstly, we incorporate SPDConv modules, enhancing the YOLOv8 architecture and boosting its efficacy in dealing with minor objects and images with low resolution. Secondly, we introduce the C2f-PConv module, which effectively improves computational efficiency by reducing redundant calculations and memory access. Thirdly, the model boosts classification precision through the integration of a Mixed Local Channel Attention (MLCA) strategy preceding the three detection outputs. Finally, the W-IoU loss function is utilized, which adaptively modifies the weights for different target boxes within the loss computation, to efficiently address the difficulties associated with detecting small targets. The experimental results showed that the accuracy of our model increased by 2.17%, the recall increased by 5.5%, and the mAP@0.5 increased by 1.9%. In addition, the number of parameters decreased by 43.8%, with only 5.96M parameters, while the model size and GFlops decreased by 43.3% and 36.7%, respectively. 
Our model not only reduces the number of parameters and computational complexity, but also exhibits superior accuracy and effectiveness in UAV fire image recognition tasks, thereby offering a robust and reliable solution for UAV fire monitoring.
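The SPDConv module incorporated above is built on a space-to-depth rearrangement, which preserves fine detail that strided convolution or pooling would discard, which is why it helps with small targets and low-resolution frames. A minimal sketch of that rearrangement (the module then applies a non-strided convolution, not shown):

```python
def space_to_depth(img, block=2):
    """Rearrange a [C][H][W] nested-list tensor so each block x block
    spatial patch becomes block*block channels: H and W shrink by
    `block` while the channel count grows by block**2, losing no pixels."""
    c, h, w = len(img), len(img[0]), len(img[0][0])
    out = []
    for dy in range(block):
        for dx in range(block):
            for ch in range(c):
                out.append([[img[ch][y * block + dy][x * block + dx]
                             for x in range(w // block)]
                            for y in range(h // block)])
    return out
```

A 2×2 single-channel image becomes four 1×1 channels, one per pixel, so downsampling is lossless up to the following convolution.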
(This article belongs to the Special Issue Intelligent Forest Fire Prediction and Detection)

31 pages, 11795 KiB  
Article
DT-YOLO: An Improved Object Detection Algorithm for Key Components of Aircraft and Staff in Airport Scenes Based on YOLOv5
by Zhige He, Yuanqing He and Yang Lv
Sensors 2025, 25(6), 1705; https://doi.org/10.3390/s25061705 - 10 Mar 2025
Viewed by 1214
Abstract
With the rapid development and increasing demands of civil aviation, the accurate detection of key aircraft components and staff on airport aprons is of great significance for ensuring the safety of flights and improving the operational efficiency of airports. However, the existing detection models for airport aprons are relatively scarce, and their accuracy is insufficient. Based on YOLOv5, we propose an improved object detection algorithm, called DT-YOLO, to address these issues. We first built a dataset called AAD-dataset for airport apron scenes by randomly sampling and capturing surveillance videos taken from the real world to support our research. We then introduced a novel module named D-CTR in the backbone, which integrates the global feature extraction capability of Transformers with the limited receptive field of convolutional neural networks (CNNs) to enhance the feature representation ability and overall performance. A dropout layer was introduced to reduce redundant and noisy features, prevent overfitting, and improve the model’s generalization ability. In addition, we utilized deformable convolutions in CNNs to extract features from multi-scale and deformed objects, further enhancing the model’s adaptability and detection accuracy. In terms of loss function design, we modified GIoULoss to address its discontinuities and instability in certain scenes, which effectively mitigated gradient explosion and improved the stability of the model. Finally, experiments were conducted on the self-built AAD-dataset. The results demonstrated that DT-YOLO significantly improved the mean average precision (mAP). Specifically, the mAP increased by 2.6 on the AAD-dataset; moreover, other metrics also showed a certain degree of improvement, including detection speed, AP50, AP75, and so on, which comprehensively proves that DT-YOLO can be applied for real-time object detection in airport aprons, ensuring the safe operation of aircraft and efficient management of airports. 
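For reference, the standard GIoU definition that the paper's modified GIoULoss starts from (the authors' fixes for its discontinuities and instability are their own and are not reproduced here):

```python
def giou(box_a, box_b):
    """GIoU for (x1, y1, x2, y2) boxes: IoU minus the fraction of the
    smallest enclosing box C not covered by the union, so disjoint
    boxes still receive a graded (negative) score."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c_area - union) / c_area
```

Identical boxes score 1.0; far-apart boxes approach −1, which gives the regression a gradient even with zero overlap.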
(This article belongs to the Special Issue Computer Vision Recognition and Communication Sensing System)

24 pages, 8499 KiB  
Article
An Optimized YOLOv11 Framework for the Efficient Multi-Category Defect Detection of Concrete Surface
by Zhuang Tian, Fan Yang, Lei Yang, Yunjie Wu, Jiaying Chen and Peng Qian
Sensors 2025, 25(5), 1291; https://doi.org/10.3390/s25051291 - 20 Feb 2025
Cited by 3 | Viewed by 2723
Abstract
Thoroughly and accurately identifying various defects on concrete surfaces is crucial to ensure structural safety and prolong service life. However, in actual engineering inspections, the varying shapes and complexities of concrete structural defects challenge the insufficient robustness and generalization of mainstream models, often leading to misdetections and under-detections, which ultimately jeopardize structural safety. To overcome these disadvantages, an efficient concrete defect detection model called YOLOv11-EMC (efficient multi-category concrete defect detection) is proposed. Firstly, ordinary convolution is substituted with a modified deformable convolution to efficiently extract irregular defect features, significantly enhancing the model’s robustness and generalization. Then, the C3k2 module is integrated with a revised dynamic convolution module, which reduces unnecessary computations while enhancing flexibility and feature representation. Experiments show that, compared with YOLOv11, YOLOv11-EMC improves precision, recall, mAP50, and F1 by 8.3%, 2.1%, 4.3%, and 3%, respectively. Results of drone field tests show that YOLOv11-EMC successfully lowers misdetections and under-detections while simultaneously increasing detection accuracy, providing a superior methodology for tasks that require identifying tangible flaws in practical engineering applications.
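The dynamic convolution idea mentioned above aggregates several parallel kernels with input-dependent attention, raising capacity at small extra cost. A scalarized sketch of just the mixing step (in practice the attention logits come from pooled input features; here they are passed in directly):

```python
import math

def dynamic_conv_weight(kernels, logits):
    """Mix K parallel kernels (flat weight lists) into one effective
    kernel using softmax attention over per-kernel logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    attn = [e / s for e in exps]
    return [sum(a * k[i] for a, k in zip(attn, kernels))
            for i in range(len(kernels[0]))]
```

With equal logits the kernels are averaged; as one logit dominates, the effective kernel converges to that expert.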
(This article belongs to the Section Fault Diagnosis & Sensors)

17 pages, 18733 KiB  
Article
MDD-YOLOv8: A Multi-Scale Object Detection Model Based on YOLOv8 for Synthetic Aperture Radar Images
by Jie Liu, Xue Liu, Huaixin Chen and Sijie Luo
Appl. Sci. 2025, 15(4), 2239; https://doi.org/10.3390/app15042239 - 19 Feb 2025
Cited by 1 | Viewed by 1648
Abstract
The targets in Synthetic Aperture Radar (SAR) images are often tiny, irregular, and difficult to detect against complex backgrounds, leading to a high probability of missed or incorrect detections by object detection algorithms. To address this issue and improve the recall rate, we introduce an improved version of YOLOv8 (You Only Look Once), named MDD-YOLOv8. This model is not only fast but also highly accurate, with fewer instances of missed or incorrect detections. Our proposed model outperforms the baseline YOLOv8 in SAR image detection by utilizing dynamic convolution to replace static convolution (DynamicConv) and incorporating a deformable large kernel attention mechanism (DLKA). Additionally, we modify the structure of the FPN-PAN and introduce an extra detection head to better detect tiny objects. Experiments on the MSAR-1.0 dataset demonstrate that MDD-YOLOv8 achieves 87.7% precision, 76.1% recall, 78.9% mAP@50, and a 0.81 F1 score. These metrics show improvements of 8.1%, 6.0%, 6.9%, and 0.07, respectively, compared to the original YOLOv8. However, MDD-YOLOv8 increases parameters by about 20% and GFLOPs by 53% compared to YOLOv8n. To further validate the model’s effectiveness, we conducted generalization experiments on four additional SAR image datasets, showing that MDD-YOLOv8’s performance is robust and universally applicable. In summary, MDD-YOLOv8 is a robust, generalized model with strong potential for industrial applications.
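The reported 0.81 F1 score is consistent with the stated precision and recall, since F1 is their harmonic mean:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```

`f1_score(0.877, 0.761)` evaluates to about 0.815, which rounds to the reported 0.81.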
(This article belongs to the Special Issue Object Detection Technology)

21 pages, 7597 KiB  
Article
A Novel Neural Network Model Based on Real Mountain Road Data for Driver Fatigue Detection
by Dabing Peng, Junfeng Cai, Lu Zheng, Minghong Li, Ling Nie and Zuojin Li
Biomimetics 2025, 10(2), 104; https://doi.org/10.3390/biomimetics10020104 - 12 Feb 2025
Viewed by 819
Abstract
Mountainous roads are severely affected by environmental factors such as insufficient lighting and shadows from tree branches, which complicates the detection of drivers’ facial features and the determination of fatigue states. An improved method for recognizing driver fatigue states on mountainous roads using the YOLOv5 neural network is proposed. Initially, modules from Deformable Convolutional Networks (DCNs) are integrated into the feature extraction stage of the YOLOv5 framework to improve the model’s flexibility in recognizing facial characteristics and handling postural changes. Subsequently, a Triplet Attention (TA) mechanism is embedded within the YOLOv5 network to bolster image noise suppression and improve the network’s robustness in recognition. Finally, the Wing loss function is introduced into the YOLOv5 model to heighten the sensitivity to micro-features and enhance the network’s capability to capture details. Experimental results demonstrate that the modified YOLOv5 neural network achieves an average accuracy rate of 85% in recognizing driver fatigue states.
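The Wing loss introduced above behaves logarithmically for small residuals (amplifying gradients on micro-features such as facial landmarks) and linearly for large ones. A sketch using the default parameters from the original Wing-loss formulation; the fatigue-detection paper may tune them differently:

```python
import math

def wing_loss(x, w=10.0, eps=2.0):
    """Wing loss on a residual x: w * ln(1 + |x|/eps) for |x| < w,
    |x| - C otherwise, where C keeps the pieces continuous at |x| = w."""
    c = w - w * math.log(1.0 + w / eps)
    ax = abs(x)
    return w * math.log(1.0 + ax / eps) if ax < w else ax - c
```

The constant `c` is chosen so the two branches meet exactly at the transition point.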
(This article belongs to the Special Issue Bio-Inspired Robotics and Applications)

23 pages, 5106 KiB  
Article
A Real-Time Green and Lightweight Model for Detection of Liquefied Petroleum Gas Cylinder Surface Defects Based on YOLOv5
by Burhan Duman
Appl. Sci. 2025, 15(1), 458; https://doi.org/10.3390/app15010458 - 6 Jan 2025
Cited by 3 | Viewed by 1236
Abstract
Industry requires defect detection to ensure the quality and safety of products. On resource-constrained devices, real-time speed, accuracy, and computational efficiency are the most critical requirements for defect detection. This paper presents a novel approach for real-time detection of surface defects on LPG cylinders, utilising an enhanced YOLOv5 architecture referred to as GLDD-YOLOv5. The architecture integrates ghost convolution and ECA blocks to improve feature extraction with less computational overhead in the network’s backbone. It also modifies the P3–P4 head structure to increase detection speed. These changes enable the model to focus more effectively on small and medium-sized defects. Comparative analysis with other YOLO models shows that the proposed method delivers superior performance. Compared to the base YOLOv5s model, it achieved a 4.6% increase in average accuracy, a 44% reduction in computational cost, a 45% decrease in parameter count, and a 26% reduction in file size. In experimental evaluations on an RTX 2080 Ti, the model achieved an inference rate of 163.9 FPS with a total carbon footprint of 0.549 × 10⁻³ gCO₂e. The proposed technique offers an efficient and robust defect detection model with an eco-friendly solution compatible with edge computing devices.
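The parameter savings from ghost convolution can be estimated directly. Following the standard GhostNet-style formulation (an ordinary convolution produces a 1/s fraction of the output channels, then cheap depthwise operations generate the rest), the compression ratio approaches s; the paper's exact layer configuration is its own:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    """Ghost module sketch: a primary conv makes c_out // s intrinsic
    channels, then (s - 1) cheap d x d depthwise ops per intrinsic
    channel generate the remaining 'ghost' feature maps."""
    m = c_out // s
    return c_in * m * k * k + m * (s - 1) * d * d
```

For a 64-to-64-channel 3×3 layer with s = 2, parameters drop from 36,864 to 18,720, roughly halved, consistent in spirit with the 45% model-wide reduction reported above.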
(This article belongs to the Section Green Sustainable Science and Technology)

17 pages, 6147 KiB  
Article
A Fire Detection Method for Aircraft Cargo Compartments Utilizing Radio Frequency Identification Technology and an Improved YOLO Model
by Kai Wang, Wei Zhang and Xiaosong Song
Electronics 2025, 14(1), 106; https://doi.org/10.3390/electronics14010106 - 30 Dec 2024
Cited by 2 | Viewed by 871
Abstract
During flight, aircraft cargo compartments are in a confined state. If a fire occurs, it will seriously affect flight safety. Therefore, fire detection systems must issue alarms within seconds of a fire breaking out, necessitating high real-time performance for aviation fire detection systems. In addressing the issue of fire target detection, the YOLO series models demonstrate superior performance in striking a balance between computational efficiency and recognition accuracy when compared with alternative models. Consequently, this paper opts to optimize the YOLO model, introducing an enhanced object detection algorithm, FDY-YOLO, for instantaneous fire detection. Firstly, the FaB-C3 module, modified based on the FasterNet backbone network, replaces the C3 component in the YOLOv5 framework, significantly decreasing the computational burden of the algorithm. Secondly, the DySample module replaces the upsampling module to optimize the model’s ability to extract the features of small-scale flames or smoke in the early stages of a fire. Thirdly, RFID technology is introduced to manage the cameras capturing images. Finally, the model’s loss function is changed to the MPDIoU loss function, improving the model’s localization accuracy. Based on our self-constructed dataset, compared with the YOLOv5 model, FDY-YOLO achieves a 0.8% increase in mean average precision (mAP) while reducing the computational load by 40%.
(This article belongs to the Special Issue RFID Applied to IoT Devices)
