Search Results (93)

Search Parameters:
Keywords = night-time vehicle detection

24 pages, 7605 KiB  
Article
Pedestrian-Crossing Detection Enhanced by CyclicGAN-Based Loop Learning and Automatic Labeling
by Kuan-Chieh Wang, Chao-Li Meng, Chyi-Ren Dow and Bonnie Lu
Appl. Sci. 2025, 15(12), 6459; https://doi.org/10.3390/app15126459 - 8 Jun 2025
Viewed by 493
Abstract
Pedestrian safety at crosswalks remains a critical concern as traffic accidents frequently result from drivers’ failure to yield, leading to severe injuries or fatalities. In response, various jurisdictions have enacted pedestrian priority laws to regulate driver behavior. Nevertheless, intersections lacking clear traffic signage and environments with limited visibility continue to present elevated risks. The scarcity and difficulty of collecting data under such complex conditions pose significant challenges to the development of accurate detection systems. This study proposes a CyclicGAN-based loop-learning framework, in which the learning process begins with a set of manually annotated images used to train an initial labeling model. This model is then applied to automatically annotate newly generated synthetic images, which are incorporated into the training dataset for subsequent rounds of model retraining and image generation. Through this iterative process, the model progressively refines its ability to simulate and recognize diverse contextual features, thereby enhancing detection performance under varying environmental conditions. The experimental results show that environmental variations—such as daytime, nighttime, and rainy conditions—substantially affect the model performance in terms of F1-score. Training with a balanced mix of real and synthetic images yields an F1-score comparable to that obtained using real data alone. These results suggest that CycleGAN-generated images can effectively augment limited datasets and enhance model generalization. The proposed system may be integrated with in-vehicle assistance platforms as a supportive tool for pedestrian-crossing detection in data-scarce environments, contributing to improved driver awareness and road safety. Full article
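
The loop-learning procedure described above (train on manual labels, generate synthetic variants with CycleGAN, auto-label them, retrain, repeat) can be summarized in a short structural sketch. The Python below is only an illustration of that cycle under assumed interfaces: train_detector, cyclegan_translate, and auto_label are hypothetical placeholder functions, not the authors' code.

```python
# Structural sketch of CycleGAN-based loop learning with automatic labeling.
# All functions are illustrative placeholders, not the authors' implementation.
from typing import List, Tuple

Image = str                                       # stand-in for an image handle (e.g., a path)
Label = List[Tuple[float, float, float, float]]   # bounding boxes per image


def train_detector(images: List[Image], labels: List[Label]) -> str:
    """Placeholder: train a pedestrian-crossing detector and return a model id."""
    return f"detector_trained_on_{len(images)}_images"


def cyclegan_translate(images: List[Image], target_domain: str) -> List[Image]:
    """Placeholder: translate images into another domain (e.g., day -> night or rain)."""
    return [f"{img}@{target_domain}" for img in images]


def auto_label(model: str, images: List[Image]) -> List[Label]:
    """Placeholder: run the current detector to produce pseudo-labels."""
    return [[(0.1, 0.1, 0.5, 0.5)] for _ in images]


def loop_learning(seed_images: List[Image], seed_labels: List[Label],
                  domains=("night", "rain"), rounds: int = 3) -> str:
    """Iteratively grow the training set with auto-labeled synthetic images."""
    images, labels = list(seed_images), list(seed_labels)
    model = train_detector(images, labels)          # round 0: manual annotations only
    for _ in range(rounds):
        for domain in domains:
            synthetic = cyclegan_translate(seed_images, domain)
            pseudo = auto_label(model, synthetic)   # automatic labeling step
            images += synthetic
            labels += pseudo
        model = train_detector(images, labels)      # retrain on the enlarged dataset
    return model


if __name__ == "__main__":
    print(loop_learning(["img_001.jpg", "img_002.jpg"],
                        [[(0.2, 0.2, 0.6, 0.6)], [(0.3, 0.3, 0.7, 0.7)]]))
```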

20 pages, 4951 KiB  
Article
LNT-YOLO: A Lightweight Nighttime Traffic Light Detection Model
by Syahrul Munir and Huei-Yung Lin
Smart Cities 2025, 8(3), 95; https://doi.org/10.3390/smartcities8030095 - 6 Jun 2025
Viewed by 1105
Abstract
Autonomous vehicles are one of the key components of smart mobility, leveraging innovative technology to navigate and operate safely in urban environments. Traffic light detection (TLD) systems, as a core component of autonomous vehicles, play a key role in navigation during challenging traffic scenarios. Nighttime driving poses significant challenges for autonomous vehicle navigation, particularly with regard to the accuracy of TLD systems. Existing TLD methodologies frequently encounter difficulties under low-light conditions due to factors such as variable illumination, occlusion, and the presence of distracting light sources. Moreover, most recent works have focused only on daytime scenarios, often overlooking the significantly increased risk and complexity associated with nighttime driving. To address these critical issues, this paper introduces a novel approach for nighttime traffic light detection using the LNT-YOLO model, which is based on the YOLOv7-tiny framework. LNT-YOLO incorporates enhancements specifically designed to improve the detection of small and poorly illuminated traffic signals. Low-level feature information is utilized to recover the small-object features lost in the pyramid structure of the YOLOv7-tiny neck. A novel SEAM attention module is proposed to refine features representing both spatial and channel information by combining the Simple Attention Module (SimAM) and the Efficient Channel Attention (ECA) mechanism. The HSM-EIoU loss function is also proposed to accurately detect small traffic lights by amplifying the loss for hard-sample objects. In response to the limited availability of datasets for nighttime traffic light detection, this paper also presents the TN-TLD dataset, a newly curated collection of carefully annotated images from real-world nighttime driving scenarios featuring both circular and arrow traffic signals. Experimental results demonstrate that the proposed model achieves high accuracy in recognizing traffic lights in the TN-TLD dataset and in the publicly available LISA dataset. LNT-YOLO outperforms the original YOLOv7-tiny model and other state-of-the-art object detection models in mAP by 13.7% to 26.2% on the TN-TLD dataset and by 9.5% to 24.5% on the LISA dataset, underscoring the model's feasibility and robustness. The source code and dataset will be made available through a GitHub repository.
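
The abstract names SimAM and ECA but describes SEAM only at a high level. As a hedged illustration of how a parameter-free spatial attention (SimAM) and a channel attention (ECA) can be chained on a neck feature map, here is a minimal PyTorch sketch; the SimAM and ECA bodies follow their original papers, while the sequential composition into a "SEAM-like" block is an assumption, not the paper's exact design.

```python
import torch
import torch.nn as nn


class SimAM(nn.Module):
    """Parameter-free spatial attention (Yang et al., SimAM)."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)          # (x - mu)^2
        v = d.sum(dim=(2, 3), keepdim=True) / n                    # channel variance
        energy_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(energy_inv)


class ECA(nn.Module):
    """Efficient Channel Attention (Wang et al., ECA-Net)."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):
        y = self.pool(x)                                  # (b, c, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))    # 1D conv across channels
        y = torch.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y


class SEAMLike(nn.Module):
    """Illustrative spatial-then-channel attention block (not the paper's exact SEAM)."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.spatial = SimAM()
        self.channel = ECA(k_size)

    def forward(self, x):
        return self.channel(self.spatial(x))


if __name__ == "__main__":
    feat = torch.randn(2, 64, 40, 40)      # a small feature map from a neck stage
    print(SEAMLike()(feat).shape)          # torch.Size([2, 64, 40, 40])
```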

26 pages, 9817 KiB  
Article
FASTSeg3D: A Fast, Efficient, and Adaptive Ground Filtering Algorithm for 3D Point Clouds in Mobile Sensing Applications
by Daniel Ayo Oladele, Elisha Didam Markus and Adnan M. Abu-Mahfouz
AI 2025, 6(5), 97; https://doi.org/10.3390/ai6050097 - 7 May 2025
Viewed by 908
Abstract
Background: Accurate ground segmentation in 3D point clouds is critical for robotic perception, enabling robust navigation, object detection, and environmental mapping. However, existing methods struggle with over-segmentation, under-segmentation, and computational inefficiency, particularly in dynamic or complex environments. Methods: This study proposes FASTSeg3D, a novel two-stage algorithm for real-time ground filtering. First, Range Elevation Estimation (REE) organizes point clouds efficiently while filtering outliers. Second, adaptive Window-Based Model Fitting (WBMF) addresses over-segmentation by dynamically adjusting to local geometric features. The method was rigorously evaluated in four challenging scenarios: large objects (vehicles), pedestrians, small debris/vegetation, and rainy conditions across day/night cycles. Results: FASTSeg3D achieved state-of-the-art performance, with a mean error of <7%, error sensitivity < 10%, and IoU scores > 90% in all scenarios except extreme cases (rainy/night small-object conditions). It maintained a processing speed 10× faster than comparable methods, enabling real-time operation. The algorithm also outperformed benchmarks in F1 score (avg. 94.2%) and kappa coefficient (avg. 0.91), demonstrating superior robustness. Conclusions: FASTSeg3D addresses critical limitations in ground segmentation by balancing speed and accuracy, making it ideal for real-time robotic applications in diverse environments. Its computational efficiency and adaptability to edge cases represent a significant advancement for autonomous systems. Full article
(This article belongs to the Section AI in Autonomous Systems)
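
The paper's Range Elevation Estimation (REE) stage is described only at a high level. As a rough, hedged sketch of the general idea of organizing an unordered point cloud into an azimuth-elevation range image, which makes neighborhood queries and outlier filtering cheap, here is a NumPy example; the bin counts, vertical field of view, and nearest-return rule are illustrative choices, not the paper's parameters.

```python
import numpy as np


def to_range_image(points: np.ndarray,
                   h_bins: int = 1024, v_bins: int = 64,
                   v_fov_deg: tuple = (-25.0, 3.0)) -> np.ndarray:
    """Project an (N, 3) point cloud into a (v_bins, h_bins) range image.

    Each cell keeps the nearest range falling into it; empty cells stay at inf.
    Bin counts and vertical field of view are illustrative values only.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                                        # [-pi, pi]
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))

    v_lo, v_hi = np.deg2rad(v_fov_deg[0]), np.deg2rad(v_fov_deg[1])
    col = ((azimuth + np.pi) / (2 * np.pi) * h_bins).astype(int) % h_bins
    row = ((elevation - v_lo) / (v_hi - v_lo) * v_bins).astype(int)
    valid = (row >= 0) & (row < v_bins)

    image = np.full((v_bins, h_bins), np.inf)
    # np.minimum.at keeps the closest return per cell (a crude outlier/occlusion filter).
    np.minimum.at(image, (row[valid], col[valid]), r[valid])
    return image


if __name__ == "__main__":
    pts = np.random.uniform(-30, 30, size=(100_000, 3))
    pts[:, 2] = np.random.uniform(-2.0, 1.0, size=100_000)   # mostly near-ground heights
    ri = to_range_image(pts)
    print(ri.shape, round(float(np.isfinite(ri).mean()), 3))
```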

10 pages, 1224 KiB  
Proceeding Paper
Multi-Feature Long Short-Term Memory Facial Recognition for Real-Time Automated Drowsiness Observation of Automobile Drivers with Raspberry Pi 4
by Michael Julius R. Moredo, James Dion S. Celino and Joseph Bryan G. Ibarra
Eng. Proc. 2025, 92(1), 52; https://doi.org/10.3390/engproc2025092052 - 6 May 2025
Viewed by 461
Abstract
We developed a multi-feature drowsiness detection model employing the eye aspect ratio (EAR), mouth aspect ratio (MAR), and head pose angles (yaw, pitch, and roll), deployed on a Raspberry Pi 4 for real-time applications. The model was trained on the NTHU-DDD dataset and optimized using a long short-term memory (LSTM) deep learning algorithm implemented in TensorFlow version 2.14.0, enabling robust drowsiness detection at a rate of 10 frames per second (FPS). An embedded system running the model was constructed for live image capture, and the camera placement was adjusted for optimal positioning. The features were evaluated under diverse conditions (day, night, and with and without glasses). After training, the model showed an accuracy of 95.23%, while validation accuracy ranged from 91.81 to 95.82%. In stationary and moving vehicles, the detection accuracy ranged between 51.85 and 85.71%: single-feature configurations exhibited an accuracy of 51.85 to 72.22%, dual-feature configurations ranged from 66.67 to 75%, and an accuracy of 80.95 to 85.71% was attained with the integration of all features. Challenges in drowsiness detection included diminished accuracy with MAR alone and delayed predictions during transitions from non-drowsy to drowsy status. These findings underscore the model's applicability in detecting drowsiness while highlighting the need for refinement. Through algorithm optimization, dataset expansion, and the integration of additional features and feedback mechanisms, the model's performance and reliability can be improved.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
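
EAR and MAR are standard facial-landmark features. The sketch below follows the common EAR definition of Soukupová and Čech (two vertical eye distances over twice the horizontal distance) and an analogous MAR; the landmark ordering convention and the example thresholds in the comments are common starting points from the literature, not the paper's tuned values.

```python
import numpy as np


def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from 6 eye landmarks ordered p1..p6 (dlib-style convention)."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)


def mouth_aspect_ratio(mouth: np.ndarray) -> float:
    """MAR from 8 inner-mouth landmarks: analogous vertical/horizontal ratio."""
    v1 = np.linalg.norm(mouth[1] - mouth[7])
    v2 = np.linalg.norm(mouth[2] - mouth[6])
    v3 = np.linalg.norm(mouth[3] - mouth[5])
    h = np.linalg.norm(mouth[0] - mouth[4])
    return (v1 + v2 + v3) / (2.0 * h)


# Per-frame features for an LSTM could then be [EAR_left, EAR_right, MAR, yaw, pitch, roll].
# Thresholds such as EAR < 0.2 (eyes closing) or MAR > 0.6 (yawning) are common literature
# starting points, not the values calibrated in this paper.
if __name__ == "__main__":
    left_eye = np.array([[0, 2], [1, 3], [2, 3], [3, 2], [2, 1], [1, 1]], dtype=float)
    print(round(eye_aspect_ratio(left_eye), 3))   # 0.667 for this toy open-eye shape
```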

29 pages, 11492 KiB  
Article
Sustainable Real-Time Driver Gaze Monitoring for Enhancing Autonomous Vehicle Safety
by Jong-Bae Kim
Sustainability 2025, 17(9), 4114; https://doi.org/10.3390/su17094114 - 1 May 2025
Viewed by 638
Abstract
Despite advances in autonomous driving technology, current systems still require drivers to remain alert at all times. These systems issue warnings regardless of whether the driver is actually gazing at the road, which can lead to driver fatigue and reduced responsiveness over time, ultimately compromising safety. This paper proposes a sustainable real-time driver gaze monitoring method to enhance the safety and reliability of autonomous vehicles. The method uses a YOLOX-based face detector to detect the driver’s face and facial features, analyzing their size, position, shape, and orientation to determine whether the driver is gazing forward. By accurately assessing the driver’s gaze direction, the method adjusts the intensity and frequency of alerts, helping to reduce unnecessary warnings and improve overall driving safety. Experimental results demonstrate that the proposed method achieves a gaze classification accuracy of 97.3% and operates robustly in real-time under diverse environmental conditions, including both day and night. These results suggest that the proposed method can be effectively integrated into Level 3 and higher autonomous driving systems, where monitoring driver attention remains critical for safe operation. Full article
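
The paper's decision rule for "gazing forward" is not detailed here. As a rough, hedged illustration of the kind of geometric check involved once the face and facial features have been detected, the sketch below estimates horizontal head turn from eye and nose positions; the feature convention and the 0.15 tolerance are assumptions, not the paper's calibrated method.

```python
import numpy as np


def is_gazing_forward(left_eye: tuple, right_eye: tuple, nose: tuple,
                      yaw_tol: float = 0.15) -> bool:
    """Crude frontal-gaze check from 2D feature centers (x, y) in pixels.

    If the nose tip stays close to the midpoint between the eyes (relative to the
    interocular distance), the head is roughly frontal. The 0.15 tolerance is an
    illustrative value, not a calibrated threshold.
    """
    l_eye, r_eye, nose_pt = (np.asarray(p, dtype=float) for p in (left_eye, right_eye, nose))
    eye_mid = (l_eye + r_eye) / 2.0
    interocular = np.linalg.norm(r_eye - l_eye)
    if interocular < 1e-6:
        return False                      # degenerate detection, treat as not forward
    horizontal_offset = abs(nose_pt[0] - eye_mid[0]) / interocular
    return horizontal_offset < yaw_tol


if __name__ == "__main__":
    print(is_gazing_forward((310, 240), (370, 238), (341, 265)))   # roughly frontal -> True
    print(is_gazing_forward((310, 240), (370, 238), (395, 265)))   # turned away    -> False
```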

22 pages, 9648 KiB  
Article
Three-Dimensional Real-Scene-Enhanced GNSS/Intelligent Vision Surface Deformation Monitoring System
by Yuanrong He, Weijie Yang, Qun Su, Qiuhua He, Hongxin Li, Shuhang Lin and Shaochang Zhu
Appl. Sci. 2025, 15(9), 4983; https://doi.org/10.3390/app15094983 - 30 Apr 2025
Viewed by 648
Abstract
With the acceleration of urbanization, surface deformation monitoring has become crucial. Existing monitoring systems face several challenges, such as data singularity, the poor nighttime monitoring quality of video surveillance, and fragmented visual data. To address these issues, this paper presents a 3D real-scene (3DRS)-enhanced GNSS/intelligent vision surface deformation monitoring system. The system integrates GNSS monitoring terminals and multi-source meteorological sensors to accurately capture minute displacements at monitoring points and multi-source Internet of Things (IoT) data, which are then automatically stored in MySQL databases. To enhance the functionality of the system, the visual sensor data are fused with 3D models through streaming media technology, enabling 3D real-scene augmented reality to support dynamic deformation monitoring and visual analysis. WebSocket-based remote lighting control is implemented to enhance the quality of video data at night. The spatiotemporal fusion of UAV aerial data with 3D models is achieved through Blender image-based rendering, while edge detection is employed to extract crack parameters from intelligent inspection vehicle data. The 3DRS model is constructed through UAV oblique photography, 3D laser scanning, and the combined use of SVSGeoModeler and SketchUp. A visualization platform for surface deformation monitoring is built on the 3DRS foundation, adopting an “edge collection–cloud fusion–terminal interaction” approach. This platform dynamically superimposes GNSS and multi-source IoT monitoring data onto the 3D spatial base, enabling spatiotemporal correlation analysis of millimeter-level displacements and early risk warning. Full article
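
The abstract mentions WebSocket-based remote lighting control to improve nighttime video quality. Below is a minimal hedged client sketch using the Python websockets library; the endpoint URL and JSON command schema are invented for illustration and are not the platform's actual interface.

```python
# Minimal WebSocket client sketch for remote lighting control.
# The server URL and message format are hypothetical. Requires: pip install websockets
import asyncio
import json

import websockets


async def set_site_lighting(station_id: str, on: bool, brightness: int = 80) -> dict:
    uri = "ws://monitoring.example.com:8765/lighting"        # placeholder endpoint
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({
            "station": station_id,
            "command": "light_on" if on else "light_off",
            "brightness": brightness,                        # percent, illustrative field
        }))
        return json.loads(await ws.recv())                   # e.g., an acknowledgement


if __name__ == "__main__":
    # Turn on lighting at a monitoring station before nighttime video capture.
    print(asyncio.run(set_site_lighting("GNSS-07", on=True)))
```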

29 pages, 16077 KiB  
Article
Traffic Sign Detection and Quality Assessment Using YOLOv8 in Daytime and Nighttime Conditions
by Ziyad N. Aldoski and Csaba Koren
Sensors 2025, 25(4), 1027; https://doi.org/10.3390/s25041027 - 9 Feb 2025
Cited by 2 | Viewed by 1456
Abstract
Traffic safety remains a pressing global concern, with traffic signs playing a vital role in regulating and guiding drivers. However, environmental factors like lighting and weather often compromise their visibility, impacting human drivers and autonomous vehicle (AV) systems. This study addresses critical traffic sign detection (TSD) and classification (TSC) gaps by leveraging the YOLOv8 algorithm to evaluate detection accuracy and sign quality under diverse lighting conditions. The model achieved robust performance metrics across day and night scenarios using the novel ZND dataset, comprising 16,500 labeled images sourced from the GTSRB, GitHub repositories, and the authors' own real-world photographs. Complementary retroreflectivity assessments using handheld retroreflectometers revealed correlations between the material properties of the signs and their detection performance, emphasizing the importance of retroreflective quality, especially under night-time conditions. Additionally, video analysis highlighted the influence of sharpness, brightness, and contrast on detection rates. Human evaluations further provided insights into subjective perceptions of visibility and their relationship with algorithmic detection, underscoring areas for potential improvement. The findings emphasize the need for varied assessment methods, advanced algorithms, enhanced sign materials, and regular maintenance to improve detection reliability and road safety. This research bridges the theoretical and practical aspects of TSD, offering recommendations that could advance AV systems and inform future traffic sign design and evaluation standards.
(This article belongs to the Special Issue Intelligent Traffic Safety and Security)
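
For orientation, a typical YOLOv8 fine-tuning and inference flow with the Ultralytics API looks like the sketch below. The dataset config path "znd.yaml" and the image filename are placeholders, since the ZND dataset described above is not bundled with this listing; the API calls themselves are standard Ultralytics usage.

```python
# Typical Ultralytics YOLOv8 fine-tuning / inference flow (pip install ultralytics).
# 'znd.yaml' is a placeholder for a dataset config pointing at ZND-style images and labels.
from ultralytics import YOLO

# Fine-tune a pretrained detector on a traffic-sign dataset.
model = YOLO("yolov8s.pt")
model.train(data="znd.yaml", epochs=100, imgsz=640, batch=16)

# Run inference on a nighttime frame and inspect detections.
results = model.predict("night_frame.jpg", conf=0.25)
for box in results[0].boxes:
    cls_id = int(box.cls[0])
    print(results[0].names[cls_id], float(box.conf[0]), box.xyxy[0].tolist())
```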

22 pages, 25824 KiB  
Article
NoctuDroneNet: Real-Time Semantic Segmentation of Nighttime UAV Imagery in Complex Environments
by Ruokun Qu, Jintao Tan, Yelu Liu, Chenglong Li and Hui Jiang
Drones 2025, 9(2), 97; https://doi.org/10.3390/drones9020097 - 27 Jan 2025
Viewed by 1120
Abstract
Nighttime semantic segmentation represents a challenging frontier in computer vision, made particularly difficult by severe low-light conditions, pronounced noise, and complex illumination patterns. These challenges intensify when dealing with Unmanned Aerial Vehicle (UAV) imagery, where varying camera angles and altitudes compound the difficulty. In this paper, we introduce NoctuDroneNet (Nocturnal UAV Drone Network, hereinafter referred to as NoctuDroneNet), a real-time segmentation model tailored specifically for nighttime UAV scenarios. Our approach integrates convolution-based global reasoning with training-only semantic alignment modules to effectively handle diverse and extreme nighttime conditions. We construct a new dataset, NUI-Night, focusing on low-illumination UAV scenes to rigorously evaluate performance under conditions rarely represented in standard benchmarks. Beyond NUI-Night, we assess NoctuDroneNet on the Varied Drone Dataset (VDD), a normal-illumination UAV dataset, demonstrating the model’s robustness and adaptability to varying flight domains despite the lack of large-scale low-light UAV benchmarks. Furthermore, evaluations on the Night-City dataset confirm its scalability and applicability to complex nighttime urban environments. NoctuDroneNet achieves state-of-the-art performance on NUI-Night, surpassing strong real-time baselines in both segmentation accuracy and speed. Qualitative analyses highlight its resilience to under-/over-exposure and small-object detection, underscoring its potential for real-world applications like UAV emergency landings under minimal illumination. Full article

25 pages, 3292 KiB  
Article
Lane Detection Based on CycleGAN and Feature Fusion in Challenging Scenes
by Eric Hsueh-Chan Lu and Wei-Chih Chiu
Vehicles 2025, 7(1), 2; https://doi.org/10.3390/vehicles7010002 - 1 Jan 2025
Cited by 3 | Viewed by 1480
Abstract
Lane detection is a pivotal technology of the intelligent driving system. By identifying the position and shape of the lane, the vehicle can stay in the correct lane and avoid accidents. Image-based deep learning is currently the most advanced method for lane detection. Models using this method already have a very good recognition ability in general daytime scenes, and can almost achieve real-time detection. However, these models often fail to accurately identify lanes in challenging scenarios such as night, dazzle, or shadows. Furthermore, the lack of diversity in the training data restricts the capacity of the models to handle different environments. This paper proposes a novel method to train CycleGAN with existing daytime and nighttime datasets. This method can extract features of different styles and multi-scales, thereby increasing the richness of model input. We use CycleGAN as a domain adaptation model combined with an image segmentation model to boost the model’s performance in different styles of scenes. The proposed consistent loss function is employed to mitigate performance disparities of the model in different scenarios. Experimental results indicate that our method enhances the detection performance of original lane detection models in challenging scenarios. This research helps improve the dependability and robustness of intelligent driving systems, ultimately making roads safer and enhancing the driving experience. Full article

19 pages, 3240 KiB  
Article
Concentration and Potential Sources of Total Gaseous Mercury in a Concentrated Non-Ferrous Metals Smelting Area in Mengzi of China
by Xinyu Han, Yuqi Xie, Haojie Su, Wei Du, Guixin Du, Shihan Deng, Jianwu Shi, Senlin Tian, Ping Ning, Feng Xiang and Haitao Xie
Atmosphere 2025, 16(1), 8; https://doi.org/10.3390/atmos16010008 - 26 Dec 2024
Cited by 1 | Viewed by 601
Abstract
To investigate the concentration and potential sources of total gaseous mercury (TGM) in a concentrated non-ferrous metals smelting area in southwest China, a high-temporal-resolution automatic mercury meter was used to measure TGM in the ambient environment and in the emissions from major sources in Mengzi city. The average TGM concentration in urban air was 2.1 ± 3.5 ng·m⁻³, with a range of 0.1–61.1 ng·m⁻³ over the study period, and was highest in fall (3.3 ± 4.3 ng·m⁻³). The daytime TGM concentration (2.8 ± 3.5 ng·m⁻³) was significantly higher than the nighttime concentration (1.6 ± 1.1 ng·m⁻³), which may be attributed to increased mercury emissions from the high volume of vehicle activity during the day. To assess the contributions of local sources and long-range transport, eight pollution events were identified based on the ΔTGM/ΔCO (carbon monoxide) ratio, which showed that local sources were the key contributor to the major TGM pollution events. TGM concentrations in flue gases from eight non-ferrous industrial sources in Mengzi were also measured; the highest emission concentration reached 4.6 mg·m⁻³. Simultaneously, TGM concentrations in the ambient air around these industries and the Xidu Tunnel were found to be 1 to 4 times higher than those at the urban air sampling site. Based on air mass and PSCF analysis, under northwesterly winds these industrial and vehicular emissions were identified as the primary sources of TGM in the urban air of Mengzi.
(This article belongs to the Section Air Quality)
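
As a hedged illustration of the ΔTGM/ΔCO enhancement-ratio screening mentioned above: enhancements are taken over a rolling baseline and the ratio is compared with a cutoff to separate local smelting-type sources (mercury enhanced without proportional CO) from transported combustion plumes. The baseline window, the event threshold, and the 0.01 ng·m⁻³ per ppbv cutoff are illustrative values, not those used in the study.

```python
import numpy as np
import pandas as pd


def classify_events(df: pd.DataFrame, ratio_cutoff: float = 0.01,
                    baseline_window: str = "24h") -> pd.DataFrame:
    """Flag TGM pollution events and label their likely origin.

    df needs a DatetimeIndex and columns 'tgm' (ng m^-3) and 'co' (ppbv).
    The rolling-percentile baseline, event threshold, and ratio cutoff are
    illustrative choices, not the parameters of the Mengzi study.
    """
    base = df.rolling(baseline_window).quantile(0.10)        # smooth low-percentile baseline
    d_tgm = df["tgm"] - base["tgm"]
    d_co = df["co"] - base["co"]
    out = df.copy()
    out["event"] = d_tgm > 2.0                               # enhancement above baseline, ng m^-3
    ratio = d_tgm / d_co.where(d_co > 0)                     # ng m^-3 per ppbv
    out["likely_source"] = np.where(
        ~out["event"], "none",
        np.where(ratio > ratio_cutoff, "local point source", "transported combustion plume"),
    )
    return out


if __name__ == "__main__":
    idx = pd.date_range("2023-10-01", periods=96, freq="h")
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({"tgm": 2.0 + rng.normal(0, 0.3, 96),
                         "co": 400 + rng.normal(0, 30, 96)}, index=idx)
    demo.iloc[48:54, 0] += 8.0                               # inject a local-source-like spike
    print(classify_events(demo)["likely_source"].value_counts())
```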

18 pages, 6063 KiB  
Article
Development of Artificial Intelligent-Based Methodology to Prepare Input for Estimating Vehicle Emissions
by Elif Yavuz, Alihan Öztürk, Nedime Gaye Nur Balkanlı, Şeref Naci Engin and S. Levent Kuzu
Appl. Sci. 2024, 14(23), 11175; https://doi.org/10.3390/app142311175 - 29 Nov 2024
Cited by 2 | Viewed by 933
Abstract
Machine learning has significantly advanced traffic surveillance and management, with YOLO (You Only Look Once) being a prominent Convolutional Neural Network (CNN) algorithm for vehicle detection. This study utilizes YOLO version 7 (YOLOv7) combined with the Kalman-based SORT (Simple Online and Real-time Tracking) algorithm as one of the models used in our experiments for real-time vehicle identification. We developed the “ISTraffic” dataset. We have also included an overview of existing datasets in the domain of vehicle detection, highlighting their shortcomings: existing vehicle detection datasets often have incomplete annotations and limited diversity, but our “ISTraffic” dataset addresses these issues with detailed and extensive annotations for higher accuracy and robustness. The ISTraffic dataset is meticulously annotated, ensuring high-quality labels for every visible object, including those that are truncated, obscured, or extremely small. With 36,841 annotated examples and an average of 32.7 annotations per image, it offers extensive coverage and dense annotations, making it highly valuable for various object detection and tracking applications. The detailed annotations enhance detection capabilities, enabling the development of more accurate and reliable models for complex environments. This comprehensive dataset is versatile, suitable for applications ranging from autonomous driving to surveillance, and has significantly improved object detection performance, resulting in higher accuracy and robustness in challenging scenarios. Using this dataset, our study achieved significant results with the YOLOv7 model. The model demonstrated high accuracy in detecting various vehicle types, even under challenging conditions. The results highlight the effectiveness of the dataset in training robust vehicle detection models and underscore its potential for future research and development in this field. Our comparative analysis evaluated YOLOv7 against its variants, YOLOv7x and YOLOv7-tiny, using both the “ISTraffic” dataset and the COCO (Common Objects in Context) benchmark. YOLOv7x outperformed others with a mAP@0.5 of 0.87, precision of 0.89, and recall of 0.84, showing a 35% performance improvement over COCO. Performance varied under different conditions, with daytime yielding higher accuracy compared to night-time and rainy weather, where vehicle headlights affected object contours. Despite effective vehicle detection and counting, tracking high-speed vehicles remains a challenge. Additionally, the algorithm’s deep learning estimates of emissions (CO, NO, NO2, NOx, PM2.5, and PM10) were 7.7% to 10.1% lower than ground-truth. Full article
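
The counting stage described above (detections from YOLOv7, identities from SORT) ultimately reduces to checking when each track crosses a virtual count line. The sketch below shows only that final step on already-tracked centroids; the detector and tracker are assumed to exist upstream, and the toy per-frame input stands in for real SORT output.

```python
from typing import Dict, Tuple


def update_counts(prev_y: Dict[int, float], counted: set,
                  tracks: Dict[int, Tuple[float, float]], line_y: float) -> int:
    """Count tracks whose centroid crosses a horizontal line between two frames.

    tracks maps a persistent track id (e.g., from SORT) to the current (x, y)
    centroid in pixels. Each id is counted at most once.
    """
    new = 0
    for tid, (_, y) in tracks.items():
        if tid in prev_y and tid not in counted:
            if (prev_y[tid] - line_y) * (y - line_y) < 0:    # sign change => crossing
                counted.add(tid)
                new += 1
        prev_y[tid] = y
    return new


if __name__ == "__main__":
    prev, counted, total = {}, set(), 0
    frames = [                                   # toy per-frame tracker output
        {1: (100.0, 380.0), 2: (300.0, 420.0)},
        {1: (102.0, 405.0), 2: (301.0, 430.0)},  # track 1 crosses y=400 here
        {1: (104.0, 430.0), 2: (302.0, 390.0)},  # track 2 crosses going the other way
    ]
    for dets in frames:
        total += update_counts(prev, counted, dets, line_y=400.0)
    print("vehicles counted:", total)            # -> 2
```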

35 pages, 9872 KiB  
Article
Research and Application of YOLOv11-Based Object Segmentation in Intelligent Recognition at Construction Sites
by Luhao He, Yongzhang Zhou, Lei Liu and Jianhua Ma
Buildings 2024, 14(12), 3777; https://doi.org/10.3390/buildings14123777 - 26 Nov 2024
Cited by 16 | Viewed by 10628
Abstract
With the increasing complexity of construction site environments, robust object detection and segmentation technologies are essential for enhancing intelligent monitoring and ensuring safety. This study investigates the application of YOLOv11-Seg, an advanced target segmentation technology, for intelligent recognition on construction sites. The research focuses on improving the detection and segmentation of 13 object categories, including excavators, bulldozers, cranes, workers, and other equipment. The methodology involves preparing a high-quality dataset through cleaning, annotation, and augmentation, followed by training the YOLOv11-Seg model over 351 epochs. The loss function analysis indicates stable convergence, demonstrating the model’s effective learning capabilities. The evaluation results show an mAP@0.5 average of 0.808, F1 Score(B) of 0.8212, and F1 Score(M) of 0.8382, with 81.56% of test samples achieving confidence scores above 90%. The model performs effectively in static scenarios, such as equipment detection in Xiong’an New District, and dynamic scenarios, including real-time monitoring of workers and vehicles, maintaining stable performance even at 1080P resolution. Furthermore, it demonstrates robustness under challenging conditions, including nighttime, non-construction scenes, and incomplete images. The study concludes that YOLOv11-Seg exhibits strong generalization capability and practical utility, providing a reliable foundation for enhancing safety and intelligent monitoring at construction sites. Future work may integrate edge computing and UAV technologies to support the digital transformation of construction management. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

19 pages, 6273 KiB  
Article
Highway Safety with an Intelligent Headlight System for Improved Nighttime Driving
by Jacob Kwaku Nkrumah, Yingfeng Cai, Ammar Jafaripournimchahi, Hai Wang and Vincent Akolbire Atindana
Sensors 2024, 24(22), 7283; https://doi.org/10.3390/s24227283 - 14 Nov 2024
Cited by 2 | Viewed by 2542
Abstract
Automotive headlights are crucial for nighttime driving, but accidents frequently occur when drivers fail to dim their high beams in the presence of oncoming vehicles, causing temporary blindness and increasing the risk of collisions. To address this problem, the current study developed an intelligent headlight system using a sensor-based approach to control headlight beam intensity. This system is designed to distinguish between various light sources, including streetlights, building lights, and moving vehicle lights. The primary goal of the study was to create an affordable alternative to machine-learning-based intelligent headlight systems, which are limited to high-end vehicles due to the high cost of their components. In simulations, the proposed system achieved a 98% success rate, showing enhanced responsiveness, particularly when detecting an approaching vehicle at 90°. The system’s effectiveness was further validated through real-vehicle implementation, confirming the feasibility of the approach. By automating headlight control, the system reduces driver fatigue, enhances safety, and minimizes nighttime highway accidents, contributing to a safer driving environment. Full article
(This article belongs to the Section Electronic Sensors)
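
A hedged sketch of the sensor-based dimming logic the abstract describes: a light reading that is bright, sustained, and roughly ahead of the vehicle is treated as an oncoming headlight and triggers low beam, while brief or off-axis sources (streetlights, building lights) are ignored. The thresholds, debounce count, and sensor interface below are illustrative assumptions, not the authors' calibration.

```python
from dataclasses import dataclass


@dataclass
class BeamController:
    """Toy high/low-beam state machine driven by a front light sensor.

    intensity is an arbitrary sensor unit, angle_deg the source bearing from the
    vehicle axis. All thresholds are illustrative, not the paper's calibration.
    """
    intensity_threshold: float = 600.0   # bright enough to be another vehicle's headlight
    angle_limit_deg: float = 45.0        # roughly "ahead" of the vehicle
    debounce_frames: int = 3             # require a sustained source, not a brief glint
    _hits: int = 0
    high_beam: bool = True

    def update(self, intensity: float, angle_deg: float) -> bool:
        oncoming = intensity > self.intensity_threshold and abs(angle_deg) < self.angle_limit_deg
        self._hits = self._hits + 1 if oncoming else 0
        if self._hits >= self.debounce_frames:
            self.high_beam = False           # dim for the oncoming vehicle
        elif self._hits == 0:
            self.high_beam = True            # restore high beam once the road is clear
        return self.high_beam


if __name__ == "__main__":
    ctrl = BeamController()
    readings = [(120, 70), (750, 10), (800, 8), (820, 5), (90, 60)]   # (intensity, angle)
    print([ctrl.update(i, a) for i, a in readings])   # [True, True, True, False, True]
```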

15 pages, 2347 KiB  
Article
A Machine Vision System for Monitoring Wild Birds on Poultry Farms to Prevent Avian Influenza
by Xiao Yang, Ramesh Bahadur Bist, Sachin Subedi, Zihao Wu, Tianming Liu, Bidur Paneru and Lilong Chai
AgriEngineering 2024, 6(4), 3704-3718; https://doi.org/10.3390/agriengineering6040211 - 9 Oct 2024
Viewed by 2712
Abstract
Avian influenza outbreaks, especially high-pathogenicity avian influenza (HPAI), which causes respiratory disease and death, are a disaster for the poultry industry. The HPAI outbreak of 2014–2015 caused the loss of 60 million chickens and turkeys, and the most recent outbreak, ongoing since 2021, has led to the loss of over 50 million chickens so far in the US and Canada. Farm biosecurity management practices have been used to prevent the spread of the virus. However, existing practices for controlling transmission of the virus through wild birds, especially waterfowl, are limited; ducks, for instance, were considered hosts of avian influenza viruses in many past outbreaks. The objectives of this study were to develop a machine vision framework for tracking wild birds and to test the performance of deep learning models in detecting wild birds on poultry farms. A deep learning framework based on computer vision was designed and applied to wild bird monitoring. A night vision camera was used to collect data on wild birds near poultry farms; two species dominated the footage, the gadwall and the brown thrasher. More than 6000 pictures were extracted through random video selection and used in the training and testing processes. The model reached an overall precision of 0.95 (mAP@0.5) and is capable of automatic, real-time detection of wild birds. Missed detections mainly resulted from occlusion, because the wild birds tended to hide in grass. Future research could focus on applying the model to issue alerts about the risk posed by wild birds and on combining it with unmanned aerial vehicles to drive detected wild birds away.
(This article belongs to the Special Issue Precision Farming Technologies for Monitoring Livestock and Poultry)

27 pages, 22106 KiB  
Article
A Real-Time Embedded System for Driver Drowsiness Detection Based on Visual Analysis of the Eyes and Mouth Using Convolutional Neural Network and Mouth Aspect Ratio
by Ruben Florez, Facundo Palomino-Quispe, Ana Beatriz Alvarez, Roger Jesus Coaquira-Castillo and Julio Cesar Herrera-Levano
Sensors 2024, 24(19), 6261; https://doi.org/10.3390/s24196261 - 27 Sep 2024
Cited by 8 | Viewed by 5869
Abstract
Currently, the number of vehicles in circulation continues to increase steadily, leading to a parallel increase in vehicular accidents. Among the many causes of these accidents, human factors such as driver drowsiness play a fundamental role. In this context, one solution to address the challenge of drowsiness detection is to anticipate drowsiness by alerting drivers in a timely and effective manner. Thus, this paper presents a Convolutional Neural Network (CNN)-based approach for drowsiness detection by analyzing the eye region and the Mouth Aspect Ratio (MAR) for yawning detection. As part of this approach, endpoint delineation is optimized for extraction of the region of interest (ROI) around the eyes. An NVIDIA Jetson Nano-based device and a near-infrared (NIR) camera are used for real-time applications. A Driver Drowsiness Artificial Intelligence (DD-AI) architecture is proposed for the eye state detection procedure. In a performance analysis, the results of the proposed approach were compared with architectures based on InceptionV3, VGG16, and ResNet50V2. The Night-Time Yawning–Microsleep–Eyeblink–Driver Distraction (NITYMED) dataset was used for training, validation, and testing of the architectures. The proposed DD-AI network achieved an accuracy of 99.88% on the NITYMED test data, proving superior to the other networks. In the hardware implementation, tests were conducted in a real environment, where the DD-AI network achieved an average accuracy of 96.55% at 14 fps, confirming its superior performance.
(This article belongs to the Special Issue Applications of Sensors Based on Embedded Systems)