Search Results (46)

Search Parameters:
Keywords = helmet wear detection

21 pages, 10163 KB  
Article
Real-Time Deep-Learning-Based Recognition of Helmet-Wearing Personnel on Construction Sites from a Distance
by Fatih Aslan and Yaşar Becerikli
Appl. Sci. 2025, 15(20), 11188; https://doi.org/10.3390/app152011188 - 18 Oct 2025
Viewed by 484
Abstract
On construction sites, it is crucial, and in most cases mandatory, to wear safety equipment such as helmets, safety shoes, vests, and belts. The most important of these is the helmet, as it protects against head injuries and can also serve as a marker for detecting and tracking workers, since a helmet is typically visible to cameras on construction sites. Checking helmet usage, however, is a labor-intensive and time-consuming process. Considerable work has been conducted on detecting and tracking people. Some studies have involved hardware-based systems that require batteries and are often perceived as intrusive by workers, while others have focused on vision-based methods. The aim of this work is not only to detect workers and helmets, but also to identify workers through labeled helmets using symbol detection methods. Person and helmet detection were handled by training on existing datasets and achieved accurate results. For symbol detection, 14 different shapes were selected and placed on helmets in groups of three, side by side. A total of 11,243 images were annotated. YOLOv5 and YOLOv8 were trained on this dataset to obtain models. The results show that both methods achieved high precision and recall. However, YOLOv5 slightly outperformed YOLOv8 in real-time identification tests, correctly detecting the helmet symbols. A test dataset covering different distances was generated in order to measure accuracy by distance. According to the results, accurate identification was achieved at distances of up to 10 meters. In addition, a location-based symbol-ordering algorithm is proposed: since symbol detection follows no fixed order and operates on confidence values at inference time, a left-to-right reading of the detected symbols is applied.
(This article belongs to the Section Computing and Artificial Intelligence)
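As a rough illustration of the location-based, left-to-right symbol ordering described in the abstract, the sketch below sorts symbol detections by the horizontal centre of their bounding boxes. The detection tuple format, class names, and confidence threshold are assumptions for illustration; this is not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a left-to-right, location-based
# symbol-ordering step: detections are assumed to be (class_name, confidence,
# x_min, y_min, x_max, y_max) tuples produced by a YOLO-style detector.

def order_symbols_left_to_right(detections, conf_threshold=0.5):
    """Return symbol class names sorted by horizontal position."""
    kept = [d for d in detections if d[1] >= conf_threshold]
    # Sort by the horizontal centre of each box so the triple of symbols
    # printed on the helmet is read in a consistent left-to-right order.
    kept.sort(key=lambda d: (d[2] + d[4]) / 2.0)
    return [d[0] for d in kept]

# Example: three symbol detections returned in arbitrary order.
dets = [("circle", 0.91, 220, 40, 260, 80),
        ("star",   0.88,  60, 42, 100, 82),
        ("square", 0.84, 140, 41, 180, 81)]
print(order_symbols_left_to_right(dets))  # ['star', 'square', 'circle']
```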

16 pages, 6908 KB  
Article
YOLO-DCRCF: An Algorithm for Detecting the Wearing of Safety Helmets and Gloves in Power Grid Operation Environments
by Jinwei Zhao, Zhi Yang, Baogang Li and Yubo Zhao
J. Imaging 2025, 11(9), 320; https://doi.org/10.3390/jimaging11090320 - 19 Sep 2025
Viewed by 454
Abstract
Safety helmets and gloves are indispensable personal protective equipment in power grid operation environments. Traditional detection methods for safety helmets and gloves suffer from reduced accuracy due to factors such as dense personnel presence, varying lighting conditions, occlusions, and diverse postures. To enhance the detection performance of safety helmets and gloves in power grid operation environments, this paper proposes a novel algorithm, YOLO-DCRCF, based on YOLO11 for detecting the wearing of safety helmets and gloves in such settings. By integrating Deformable Convolutional Network version 2 (DCNv2), the algorithm enhances the network’s capability to model geometric transformations. Additionally, a recalibration feature pyramid (RCF) network is innovatively designed to strengthen the interaction between shallow and deep features, enabling the network to capture multi-scale information of the target. Experimental results show that the proposed YOLO-DCRCF model achieved mAP50 scores of 92.7% on the Safety Helmet Wearing Dataset (SHWD) and 79.6% on the Safety Helmet and Gloves Wearing Dataset (SHAGWD), surpassing the baseline YOLO11 model by 1.1% and 2.7%, respectively. These results meet the real-time safety monitoring requirements of power grid operation sites.
(This article belongs to the Section Computer Vision and Pattern Recognition)
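The modulated deformable convolution (DCNv2) named in the abstract can be sketched with torchvision's DeformConv2d, where a small auxiliary convolution predicts per-tap sampling offsets and modulation weights. The block layout, activation, and channel sizes below are illustrative assumptions, not the YOLO-DCRCF implementation.

```python
# Minimal sketch of swapping a standard 3x3 convolution for a modulated
# deformable convolution (DCNv2) built on torchvision.ops.DeformConv2d.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DCNv2Block(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, stride=1):
        super().__init__()
        pad = k // 2
        self.taps = k * k
        # A small conv predicts 2 sampling offsets and 1 modulation weight
        # per kernel tap and per output location.
        self.offset_mask = nn.Conv2d(in_ch, 3 * self.taps, k, stride, pad)
        self.dcn = DeformConv2d(in_ch, out_ch, k, stride, pad)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        om = self.offset_mask(x)
        offset, mask = torch.split(om, [2 * self.taps, self.taps], dim=1)
        mask = torch.sigmoid(mask)
        return self.act(self.bn(self.dcn(x, offset, mask)))

x = torch.randn(1, 64, 80, 80)
print(DCNv2Block(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])
```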

21 pages, 10256 KB  
Article
Dual-Path Attention Network for Multi-State Safety Helmet Identification in Complex Power Scenarios
by Wei Li, Rong Jia, Xiangwu Chen, Ge Cao and Ziyan Zhao
Processes 2025, 13(9), 2750; https://doi.org/10.3390/pr13092750 - 28 Aug 2025
Viewed by 500
Abstract
The environment of a power operation site is complex and changeable, and accurate identification of the wearing status of workers’ safety helmets is essential for ensuring personal safety and the stable operation of the power system. Existing research suffers from high rates of missed detections and a limited ability to discriminate fine-grained states, especially the “wrongly worn” state. Therefore, this paper proposes an intelligent method for identifying the safety helmet status of power workers based on a dual-path attention network. We embed the convolutional block attention module (CBAM) in the two paths of the backbone and neck layers of YOLOv5, enhancing feature focusing on key helmet regions through coordinated channel-spatial attention so as to suppress interference from complex backgrounds. In addition, a dedicated dataset covering power scenarios is constructed, including fine-grained state annotations under various lighting, pose, and occlusion conditions, to improve the generalization of the model. Finally, the proposed method is applied to images of electric power operation sites for experimental verification. The experimental results show that the proposed YOLO-CBAM achieves an outstanding mean average precision of 98.81% for identifying all helmet states, providing reliable technical support for intelligent safety monitoring.
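A minimal PyTorch sketch of the convolutional block attention module (CBAM) that the paper embeds in the YOLOv5 backbone and neck paths is shown below; the reduction ratio and 7x7 spatial kernel are commonly used defaults, not values taken from the paper.

```python
# Minimal CBAM sketch: channel attention (shared MLP over average- and
# max-pooled descriptors) followed by spatial attention (conv over stacked
# channel-wise average and max maps).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=2, keepdim=True).amax(dim=3, keepdim=True))
        x = x * torch.sigmoid(avg + mx)                     # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))           # spatial attention

feat = torch.randn(1, 256, 40, 40)
print(CBAM(256)(feat).shape)  # torch.Size([1, 256, 40, 40])
```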

21 pages, 5952 KB  
Article
Evaluation of Helmet Wearing Compliance: A Bionic Spidersense System-Based Method for Helmet Chinstrap Detection
by Zhen Ma, He Xu, Ziyu Wang, Jielong Dou, Yi Qin and Xueyu Zhang
Biomimetics 2025, 10(9), 570; https://doi.org/10.3390/biomimetics10090570 - 27 Aug 2025
Viewed by 3714
Abstract
With the rapid advancement of industrial intelligence, ensuring occupational safety has become an increasingly critical concern. Among the essential personal protective equipment (PPE), safety helmets play a vital role in preventing head injuries. There is a growing demand for real-time detection of helmet chinstrap wearing status during industrial operations. However, existing detection methods often encounter limitations such as user discomfort or potential privacy invasion. To overcome these challenges, this study proposes a non-intrusive approach for detecting the wearing state of helmet chinstraps, inspired by the mechanosensory hair arrays found on spider legs. The proposed method utilizes multiple MEMS inertial sensors to emulate the sensory functionality of spider leg hairs, thereby enabling efficient acquisition and analysis of helmet wearing states. Unlike conventional vibration-based detection techniques, posture signals reflect spatial structural characteristics; however, their integration from multiple sensors introduces increased signal complexity and background noise. To address this issue, an improved adaptive convolutional neural network (ICNN) integrated with a long short-term memory (LSTM) network is employed to classify the tightness levels of the helmet chinstrap using both single-sensor and multi-sensor data. Experimental validation was conducted based on data collected from 20 participants performing wall-climbing robot operation tasks. The results demonstrate that the proposed method achieves a high recognition accuracy of 96%. This research offers a practical, privacy-preserving, and highly effective solution for helmet-wearing status monitoring in industrial environments.
(This article belongs to the Section Biomimetic Design, Constructions and Devices)
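The ICNN-LSTM pipeline summarized above can be approximated, in spirit, by a plain 1-D CNN followed by an LSTM over the multi-sensor inertial sequence. The channel count, layer sizes, and three tightness classes below are assumptions for the sketch; the paper's adaptive/improved CNN is not reproduced.

```python
# Illustrative CNN + LSTM classifier for chinstrap-tightness recognition from
# multi-sensor IMU windows (not the authors' configuration).
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    def __init__(self, n_channels=24, n_classes=3, hidden=64):
        super().__init__()
        # 1-D convolutions extract local features from the raw IMU channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
        )
        # The LSTM models temporal dependencies across the window.
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        f = self.cnn(x)                # (batch, 64, time)
        out, _ = self.lstm(f.transpose(1, 2))
        return self.head(out[:, -1])   # classify from the last time step

batch = torch.randn(8, 24, 200)        # 8 windows, 24 IMU channels, 200 samples
print(CnnLstmClassifier()(batch).shape)  # torch.Size([8, 3])
```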

19 pages, 8903 KB  
Article
LSH-YOLO: A Lightweight Algorithm for Helmet-Wear Detection
by Zhao Liu, Fuwei Wang, Weimin Wang, Shenyi Cao, Xinhao Gao and Mingxin Chen
Buildings 2025, 15(16), 2918; https://doi.org/10.3390/buildings15162918 - 18 Aug 2025
Viewed by 614
Abstract
To address the high computational cost and excessive parameter count of existing helmet-wearing detection models in complex construction scenarios, this paper proposes a lightweight helmet detection model, LSH-YOLO (Lightweight Safety Helmet), based on improvements to YOLOv8. First, the KernelWarehouse (KW) dynamic convolution is introduced to replace the standard convolution in the backbone and bottleneck structures. KW dynamically adjusts convolution kernels based on input features, thereby enhancing feature extraction and reducing redundant computation. Based on this, an improved C2f-KW module is proposed to further strengthen feature representation and lower computational complexity. Second, a lightweight detection head, SCDH (Shared Convolutional Detection Head), is designed to replace the original YOLOv8 Detect head. This modification maintains detection accuracy while further reducing both computational cost and parameter count. Finally, the Wise-IoU loss function is introduced to further enhance detection accuracy. Experimental results show that LSH-YOLO increases mAP50 by 0.6%, reaching 92.9%, while reducing computational cost by 63% and parameter count by 19%. Compared to YOLOv8n, LSH-YOLO demonstrates clear advantages in computational efficiency and detection performance, significantly lowering hardware resource requirements. These improvements make the model highly suitable for deployment in resource-constrained environments for real-time intelligent monitoring, thereby advancing the fields of industrial edge computing and intelligent safety surveillance.
(This article belongs to the Special Issue AI in Construction: Automation, Optimization, and Safety)

22 pages, 2583 KB  
Article
Helmet Detection in Underground Coal Mines via Dynamic Background Perception with Limited Valid Samples
by Guangfu Wang, Dazhi Sun, Hao Li, Jian Cheng, Pengpeng Yan and Heping Li
Mach. Learn. Knowl. Extr. 2025, 7(3), 64; https://doi.org/10.3390/make7030064 - 9 Jul 2025
Viewed by 876
Abstract
The underground coal mine environment is complex and dynamic, making the application of visual algorithms for object detection a crucial component of underground safety management as well as a key factor in ensuring the safe operation of workers. We look at this in the context of helmet-wearing detection in underground mines, where over 25% of the targets are small objects. To address challenges such as the lack of effective samples for unworn helmets, significant background interference, and the difficulty of detecting small helmet targets, this paper proposes a novel underground helmet-wearing detection algorithm that combines dynamic background awareness with a limited number of valid samples to improve accuracy for underground workers. The algorithm begins by analyzing the distribution of visual surveillance data and spatial biases in underground environments. Using data augmentation techniques, it then effectively expands the number of training samples by introducing positive and negative helmet-wearing samples from ordinary scenes. Thereafter, based on YOLOv10, the algorithm incorporates a background awareness module with region masks to reduce the adverse effects of complex underground backgrounds on helmet-wearing detection. Specifically, it adds a convolution and attention fusion module in the detection head to enhance the model’s perception of small helmet-wearing objects by enlarging the detection receptive field. By analyzing the aspect ratio distribution of helmet-wearing data, the algorithm improves the aspect ratio constraints in the loss function, further enhancing detection accuracy. Consequently, it achieves precise detection of helmet wearing in underground coal mines. Experimental results demonstrate that the proposed algorithm can detect small helmet-wearing objects in complex underground scenes with a 14% reduction in background false detection rates, achieving accuracy, recall, and average precision of 94.4%, 89%, and 95.4%, respectively. Compared to other mainstream object detection algorithms, the proposed algorithm shows improvements in detection accuracy of 6.7%, 5.1%, and 11.8% over YOLOv9, YOLOv10, and RT-DETR, respectively. The algorithm proposed in this paper can be applied to real-time helmet-wearing detection in underground coal mine scenes, providing safety alerts for standardized worker operations and enhancing the level of underground security intelligence.
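At the image level, the idea of region masks can be conveyed by zeroing fixed background areas of a surveillance frame before detection, as in the toy sketch below. The paper's background awareness module applies region masks inside the network, which this sketch does not reproduce; the coordinates and frame size are invented for illustration.

```python
# Toy illustration of region masking at the image level: everything outside
# the listed work areas is zeroed before the frame is passed to a detector.
import numpy as np

def apply_region_mask(frame, keep_boxes):
    """Zero out everything outside the listed (x1, y1, x2, y2) work areas."""
    mask = np.zeros(frame.shape[:2], dtype=bool)
    for x1, y1, x2, y2 in keep_boxes:
        mask[y1:y2, x1:x2] = True
    return frame * mask[..., None]

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
masked = apply_region_mask(frame, [(100, 120, 540, 460)])
print(masked.shape, int(masked[0, 0, 0]))  # (480, 640, 3) 0
```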

13 pages, 4428 KB  
Article
YOLO-CBF: Optimized YOLOv7 Algorithm for Helmet Detection in Road Environments
by Zhiqiang Wu, Jiaohua Qin, Xuyu Xiang and Yun Tan
Electronics 2025, 14(7), 1413; https://doi.org/10.3390/electronics14071413 - 31 Mar 2025
Cited by 1 | Viewed by 1121
Abstract
Helmet-wearing detection for electric vehicle riders is essential for traffic safety, yet existing detection models often suffer from high target occlusion and low detection accuracy in complex road environments. To address these issues, this paper proposes YOLO-CBF, an improved YOLOv7-based detection network. The proposed model integrates coordinate convolution to enhance spatial information perception, optimizes the Focal EIOU loss function, and incorporates the BiFormer dynamic sparse attention mechanism to achieve more efficient computation and dynamic content perception. These enhancements enable the model to extract key features more effectively, improving detection precision. Experimental results show that YOLO-CBF achieves an average mAP of 95.6% for helmet-wearing detection in various scenarios, outperforming the original YOLOv7 by 4%. Additionally, YOLO-CBF demonstrates superior performance compared to other mainstream object detection models, achieving accurate and reliable helmet detection for electric vehicle riders.
(This article belongs to the Special Issue Advances in Computer Vision and Deep Learning and Its Applications)
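The coordinate convolution mentioned in the abstract can be sketched by concatenating two channels of normalised x/y coordinates to the input before a standard convolution, which gives the layer explicit positional awareness. The layer below is an illustrative CoordConv, not the YOLO-CBF code.

```python
# Minimal CoordConv sketch: append normalised coordinate channels, then convolve.
import torch
import torch.nn as nn

class CoordConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, k, stride, k // 2)

    def forward(self, x):
        b, _, h, w = x.shape
        # Two extra channels hold each pixel's normalised (x, y) position.
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

print(CoordConv(64, 64)(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 64, 80, 80])
```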

14 pages, 4199 KB  
Article
Lightweight Helmet-Wearing Detection Algorithm Based on StarNet-YOLOv10
by Hongli Wang, Qiangwen Zong, Yang Liao, Xiao Luo, Mingzhi Gong, Zhenyao Liang, Bin Gu and Yong Liao
Processes 2025, 13(4), 946; https://doi.org/10.3390/pr13040946 - 22 Mar 2025
Viewed by 946
Abstract
The safety helmet is equipment that construction workers must wear, and it plays an important role in protecting their lives. However, many construction workers still pay insufficient attention to wearing helmets. Therefore, real-time, high-precision intelligent detection of construction workers’ helmet wearing is crucial. To this end, this paper proposes a lightweight helmet-wearing detection algorithm based on StarNet-YOLOv10. Firstly, the StarNet network structure is used to replace the backbone network of the original YOLOv10 model while retaining the original Spatial Pyramid Pooling Fast (SPPF) and Partial Self-attention (PSA) parts. Secondly, the C2f module in the neck network is optimised by combining the PSA attention module and the GhostBottleneckv2 module, which improves the extraction of feature information and the expressive ability of the model. Finally, the head network is optimised by introducing the Large Separable Kernel Attention (LSKA) mechanism to improve the detection accuracy and efficiency of the detection head. The experimental results show that, compared with the existing Faster R-CNN, YOLOv5s, YOLOv6, and original YOLOv10 models, the StarNet-YOLOv10 model proposed in this paper offers improvements in accuracy, recall, mean average precision, computational volume, and frame rate, with accuracy as high as 83.36%, recall up to 81.17%, and mean average precision reaching 78.66%. Meanwhile, compared with the original YOLOv10 model, this model improves accuracy by 1.7%, recall by 1.62%, and mAP by 4.43%. Therefore, the proposed model can meet the detection requirements of helmet wearing and effectively reduce the safety hazards caused by not wearing helmets on construction sites.
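The core of a StarNet-style block is the "star" operation, an element-wise product of two pointwise projections of the same feature map, which cheaply mimics a high-dimensional feature mapping. The sketch below shows that operation with simplified surrounding layers; the kernel sizes, expansion factor, and normalisation are assumptions rather than the authors' exact configuration.

```python
# Hedged sketch of a StarNet-style block built around the "star" operation.
import torch
import torch.nn as nn

class StarBlock(nn.Module):
    def __init__(self, ch, expansion=3):
        super().__init__()
        hidden = ch * expansion
        self.dw1 = nn.Conv2d(ch, ch, 7, 1, 3, groups=ch)   # depthwise conv
        self.f1 = nn.Conv2d(ch, hidden, 1)                  # pointwise branch 1
        self.f2 = nn.Conv2d(ch, hidden, 1)                  # pointwise branch 2
        self.g = nn.Conv2d(hidden, ch, 1)
        self.dw2 = nn.Conv2d(ch, ch, 7, 1, 3, groups=ch)
        self.act = nn.ReLU6()

    def forward(self, x):
        y = self.dw1(x)
        y = self.act(self.f1(y)) * self.f2(y)   # the "star" operation
        y = self.dw2(self.g(y))
        return x + y                             # residual connection

print(StarBlock(64)(torch.randn(1, 64, 40, 40)).shape)  # torch.Size([1, 64, 40, 40])
```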

13 pages, 35903 KB  
Article
Detection Method for Safety Helmet Wearing on Construction Sites Based on UAV Images and YOLOv8
by Xin Jiao, Cheng Li, Xin Zhang, Jian Fan, Zhenwei Cai, Zhenglong Zhou and Ying Wang
Buildings 2025, 15(3), 354; https://doi.org/10.3390/buildings15030354 - 24 Jan 2025
Cited by 9 | Viewed by 3589
Abstract
With the increasing demand for safety management on construction sites, traditional manual inspection methods for detecting helmet usage face challenges such as low efficiency, limited coverage, and strong subjectivity, making them inadequate for modern construction site safety requirements. To address these issues, this study proposes a helmet detection method based on unmanned aerial vehicles (UAVs) and the YOLOv8 object detection algorithm. The method uses UAVs to flexibly capture construction site images, combined with an optimized YOLOv8s model, and employs transfer learning with annotated “person” and “helmet” labels. Additionally, to improve detection accuracy, the study introduces matching criteria and a time-window strategy to further reduce false positives and negatives. Experimental results demonstrate that the proposed method achieves efficient and accurate helmet usage detection in diverse construction site scenarios, significantly enhancing the automation and reliability of site safety management. Despite its excellent performance, future research should focus on optimizing real-time adaptability and improving performance in complex environments. This study provides an innovative and efficient technical solution for construction site safety management, contributing to the creation of safer and more efficient construction environments.
(This article belongs to the Section Construction Management, and Computers & Digitization)
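A time-window strategy of the kind mentioned above can be sketched as simple temporal majority filtering of per-frame results: a violation is only flagged once enough recent frames agree, which suppresses spurious single-frame detections. The window length and vote threshold below are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch of time-window filtering over per-frame "no helmet" results.
from collections import deque

class TimeWindowFilter:
    def __init__(self, window=15, min_hits=10):
        self.window = deque(maxlen=window)
        self.min_hits = min_hits

    def update(self, no_helmet_detected: bool) -> bool:
        """Feed one per-frame result; return True when an alert should fire."""
        self.window.append(no_helmet_detected)
        return sum(self.window) >= self.min_hits

f = TimeWindowFilter()
frames = [True] * 12 + [False] * 3
print([f.update(v) for v in frames][-1])  # True once enough frames agree
```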

19 pages, 3311 KB  
Article
YOLOv8s-SNC: An Improved Safety-Helmet-Wearing Detection Algorithm Based on YOLOv8
by Daguang Han, Chunli Ying, Zhenhai Tian, Yanjie Dong, Liyuan Chen, Xuguang Wu and Zhiwen Jiang
Buildings 2024, 14(12), 3883; https://doi.org/10.3390/buildings14123883 - 3 Dec 2024
Cited by 5 | Viewed by 3665
Abstract
The use of safety helmets in industrial settings is crucial for preventing head injuries. However, traditional helmet detection methods often struggle with complex and dynamic environments. To address this challenge, we propose YOLOv8s-SNC, an improved YOLOv8 algorithm for robust helmet detection in industrial scenarios. The proposed method introduces the SPD-Conv module to preserve feature details, the SEResNeXt detection head to enhance feature representation, and the C2f-CA module to improve the model’s ability to capture key information, particularly for small and dense targets. Additionally, a dedicated small object detection layer is integrated to improve detection accuracy for small targets. Experimental results demonstrate the effectiveness of YOLOv8s-SNC. When compared to the original YOLOv8, the enhanced algorithm shows a 2.6% improvement in precision (P), a 7.6% increase in recall (R), a 6.5% enhancement in mAP_0.5, and a 4.1% improvement in mean average precision (mAP). This study contributes a novel solution for industrial safety helmet detection, enhancing worker safety and efficiency.
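The SPD-Conv module cited above replaces strided downsampling with a space-to-depth rearrangement followed by a non-strided convolution, so no pixel information is discarded when the resolution is halved. The sketch below shows that structure; channel sizes are illustrative, not the paper's.

```python
# Minimal SPD-Conv sketch: space-to-depth rearrangement, then a stride-1 conv.
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # After space-to-depth with scale 2, the channel count quadruples.
        self.conv = nn.Sequential(
            nn.Conv2d(4 * in_ch, out_ch, 3, 1, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(),
        )

    def forward(self, x):
        # Rearrange each 2x2 spatial block into channels instead of discarding
        # pixels with a strided convolution.
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(x)

print(SPDConv(64, 128)(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 128, 40, 40])
```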

14 pages, 2268 KB  
Article
Enhanced Occupational Safety in Agricultural Machinery Factories: Artificial Intelligence-Driven Helmet Detection Using Transfer Learning and Majority Voting
by Simge Özüağ and Ömer Ertuğrul
Appl. Sci. 2024, 14(23), 11278; https://doi.org/10.3390/app142311278 - 3 Dec 2024
Cited by 3 | Viewed by 1924
Abstract
The objective of this study was to develop an artificial intelligence (AI)-driven model for the detection of helmet usage among workers in tractor and agricultural machinery factories, with the aim of enhancing occupational safety. A transfer learning approach was employed, utilizing nine pre-trained neural networks for the extraction of deep features. The following neural networks were employed: MobileNetV2, ResNet50, DarkNet53, AlexNet, ShuffleNet, DenseNet201, InceptionV3, Inception-ResNetV2, and GoogLeNet. Subsequently, the extracted features were subjected to iterative neighborhood component analysis (INCA) for feature selection, after which they were classified using the k-nearest neighbor (kNN) algorithm. The classification outputs of all networks were combined through iterative majority voting (IMV) to achieve optimal results. To evaluate the model, an image dataset comprising 662 images of individuals wearing helmets and 722 images of individuals without helmets, sourced from the internet, was constructed. The proposed model achieved an accuracy of 90.39%, with DenseNet201 producing the most accurate results. This AI-driven helmet detection model demonstrates significant potential in improving occupational safety by assisting safety officers, especially in confined environments, reducing human error, and enhancing efficiency.
(This article belongs to the Section Agricultural Science and Technology)
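The voting stage can be sketched as a straightforward majority vote over the per-network predictions: each pre-trained network's feature/kNN pipeline yields a label per image, and the final decision is the most common label. The labels and fixed set of voters are illustrative, and the paper's iterative variant (IMV), which adds networks in order of accuracy, is not reproduced here.

```python
# Hedged sketch of per-image majority voting across several classifiers.
from collections import Counter

def majority_vote(predictions_per_network):
    """predictions_per_network: list of per-image label lists, one per network."""
    n_images = len(predictions_per_network[0])
    fused = []
    for i in range(n_images):
        votes = [preds[i] for preds in predictions_per_network]
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

nets = [["helmet", "no_helmet", "helmet"],
        ["helmet", "helmet", "helmet"],
        ["no_helmet", "no_helmet", "helmet"]]
print(majority_vote(nets))  # ['helmet', 'no_helmet', 'helmet']
```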

18 pages, 6474 KB  
Article
A Safety Helmet Detection Model Based on YOLOv8-ADSC in Complex Working Environments
by Jingyang Wang, Bokai Sang, Bo Zhang and Wei Liu
Electronics 2024, 13(23), 4589; https://doi.org/10.3390/electronics13234589 - 21 Nov 2024
Cited by 7 | Viewed by 3004
Abstract
A safety helmet is indispensable personal protective equipment in high-risk working environments. Factors such as dense personnel, varying lighting conditions, occlusions, and different head postures can reduce the precision of traditional methods for detecting safety helmets. This paper proposes an improved YOLOv8n safety helmet detection model, YOLOv8-ADSC, to enhance the performance of safety helmet detection in complex working environments. In this model, firstly, Adaptive Spatial Feature Fusion (ASFF) and Deformable Convolutional Network version 2 (DCNv2) are used to enhance the detection head, enabling the network to more effectively capture multi-scale information of the target; secondly, a new detection layer for small targets is incorporated to enhance sensitivity to smaller targets; and finally, the Upsample module is replaced with the lightweight up-sampling module Content-Aware ReAssembly of Features (CARAFE), which increases the perception range, reduces information loss caused by up-sampling, and improves the precision and robustness of target detection. The experimental results on the public Safety-Helmet-Wearing-Dataset (SHWD) demonstrate that, in comparison to the original YOLOv8n model, the mAP@0.5 of YOLOv8-ADSC has increased by 2% for all classes, reaching 94.2%, and the mAP@0.5:0.95 has increased by 2.3%, reaching 62.4%. YOLOv8-ADSC is therefore better suited to safety helmet detection in complex working environments.
(This article belongs to the Special Issue Deep Learning in Image Processing and Segmentation)

16 pages, 5783 KB  
Article
LG-YOLOv8: A Lightweight Safety Helmet Detection Algorithm Combined with Feature Enhancement
by Zhipeng Fan, Yayun Wu, Wei Liu, Ming Chen and Zeguo Qiu
Appl. Sci. 2024, 14(22), 10141; https://doi.org/10.3390/app142210141 - 6 Nov 2024
Cited by 8 | Viewed by 3304
Abstract
In the realm of construction site monitoring, ensuring the proper use of safety helmets is crucial. To address the high parameter counts and sluggish detection speed of current safety helmet detection algorithms, a feature-enhanced lightweight algorithm, LG-YOLOv8, is introduced. Firstly, we introduce the C2f-GhostDynamicConv module, which enhances the extraction of helmet-wearing features while improving the efficiency of computing resource utilization. Secondly, the Bi-directional Feature Pyramid (BiFPN) is employed to further enrich feature information, integrating feature maps from various levels to obtain more comprehensive semantic information. Finally, to enhance the training speed of the model and achieve a more lightweight outcome, we introduce a novel lightweight asymmetric detection head (LADH-Head) to optimize the original YOLOv8-n’s detection head. Evaluations on the SWHD dataset confirm the effectiveness of the LG-YOLOv8 algorithm. Compared to the original YOLOv8-n algorithm, our approach achieves a mean Average Precision (mAP) of 94.1%, a 59.8% reduction in parameters, a 54.3% decrease in FLOPs, a 44.2% increase in FPS, and a 2.7 MB compression of the model size. Therefore, LG-YOLOv8 offers high accuracy and fast detection speed for safety helmet detection, achieving real-time, accurate detection with an ideal lightweight profile.
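The bi-directional feature pyramid mentioned above fuses feature maps with learnable, non-negative weights that are normalised before summing (BiFPN-style "fast normalized fusion"). The sketch below shows that fusion step under the assumption that the inputs have already been resized to a common shape; channel counts are illustrative.

```python
# Hedged sketch of BiFPN-style weighted feature fusion.
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.w)            # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)      # fast normalisation
        return sum(wi * f for wi, f in zip(w, feats))

p4_in = torch.randn(1, 128, 40, 40)
p4_td = torch.randn(1, 128, 40, 40)
print(WeightedFusion(2)([p4_in, p4_td]).shape)  # torch.Size([1, 128, 40, 40])
```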

16 pages, 8126 KB  
Article
Helmet Wearing Detection Algorithm Based on YOLOv5s-FCW
by Jingyi Liu, Hanquan Zhang, Gang Lv, Panpan Liu, Shiming Hu and Dong Xiao
Appl. Sci. 2024, 14(21), 9741; https://doi.org/10.3390/app14219741 - 24 Oct 2024
Cited by 1 | Viewed by 1885
Abstract
An enhanced algorithm, YOLOv5s-FCW, is put forward in this study to tackle the problems of current helmet detection (HD) methods. These issues include too many parameters, a complex network, and large computation requirements, making them unsuitable for deployment on embedded and other devices. Additionally, existing algorithms struggle with detecting small targets and do not achieve sufficiently high recognition accuracy. Firstly, the YOLOv5s backbone network is replaced by FasterNet for feature extraction (FE), which reduces the number of parameters and computational effort in the network. Secondly, a convolutional block attention module (CBAM) is added to the YOLOv5 model to improve its ability to detect small objects such as helmets by increasing its attention to them. Finally, to enhance model convergence, the WIoU_Loss loss function is adopted instead of the GIoU_Loss loss function. As reported by the experimental results, the YOLOv5s-FCW algorithm proposed in this study improves accuracy by 4.6% compared to the baseline algorithm. The proposed approach not only enhances detection of small and occluded targets but also reduces computation for the YOLOv5s model by 20%, thereby decreasing the hardware cost while maintaining higher average detection accuracy.

26 pages, 6644 KB  
Article
Investigation of Unsafe Construction Site Conditions Using Deep Learning Algorithms Using Unmanned Aerial Vehicles
by Sourav Kumar, Mukilan Poyyamozhi, Balasubramanian Murugesan, Narayanamoorthi Rajamanickam, Roobaea Alroobaea and Waleed Nureldeen
Sensors 2024, 24(20), 6737; https://doi.org/10.3390/s24206737 - 20 Oct 2024
Cited by 6 | Viewed by 3353
Abstract
The rapid adoption of Unmanned Aerial Vehicles (UAVs) in the construction industry has revolutionized safety, surveying, quality monitoring, and maintenance assessment. UAVs are increasingly used to prevent accidents caused by falls from heights or being struck by falling objects by ensuring workers comply with safety protocols. This study focuses on leveraging UAV technology to enhance labor safety by monitoring the use of personal protective equipment, particularly helmets, among construction workers. The developed UAV system utilizes the TensorFlow framework and an alert system to detect and identify workers not wearing helmets. Employing the high-precision, high-speed, and widely applicable Faster R-CNN method, the UAV can accurately detect construction workers with and without helmets in real time across various site conditions. This proactive approach ensures immediate feedback and intervention, significantly reducing the risk of injuries and fatalities. Additionally, the implementation of UAVs minimizes the workload of site supervisors by automating safety inspections and monitoring, allowing for more efficient and continuous oversight. The experimental results indicate that the UAV system’s high precision, recall, and processing capabilities make it a reliable and cost-effective solution for improving construction site safety. The precision, mAP, and frame rate of the developed Faster R-CNN-based system are 93.1%, 58.45%, and 27 FPS, respectively. This study demonstrates the potential of UAV technology to enhance safety compliance, protect workers, and improve the overall quality of safety management in the construction industry.
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)
