Search Results (49)

Search Parameters:
Keywords = motorcycle detection

23 pages, 32193 KB  
Article
Object Detection on Road: Vehicle’s Detection Based on Re-Training Models on NVIDIA-Jetson Platform
by Sleiter Ramos-Sanchez, Jinmi Lezama, Ricardo Yauri and Joyce Zevallos
J. Imaging 2026, 12(1), 20; https://doi.org/10.3390/jimaging12010020 - 1 Jan 2026
Viewed by 366
Abstract
The increasing use of artificial intelligence (AI) and deep learning (DL) techniques has driven advances in vehicle classification and detection applications for embedded devices with deployment constraints due to computational cost and response time. In the case of urban environments with high traffic congestion, such as the city of Lima, it is important to determine the trade-off between model accuracy, type of embedded system, and the dataset used. This study was developed using a methodology adapted from the CRISP-DM approach, which included the acquisition of traffic videos in the city of Lima, their segmentation, and manual labeling. Subsequently, three SSD-based detection models (MobileNetV1-SSD, MobileNetV2-SSD-Lite, and VGG16-SSD) were trained on the NVIDIA Jetson Orin NX 16 GB platform. The results show that the VGG16-SSD model achieved the highest average precision (mAP 90.7%), with a longer training time, while the MobileNetV1-SSD (512×512) model achieved comparable performance (mAP 90.4%) with a shorter time. Additionally, data augmentation through contrast adjustment improved the detection of minority classes such as Tuk-tuk and Motorcycle. The results indicate that, among the evaluated models, MobileNetV1-SSD (512×512) achieved the best balance between accuracy and computational load for its implementation in ADAS embedded systems in congested urban environments.
(This article belongs to the Special Issue Advances in Machine Learning for Computer Vision Applications)
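The contrast-adjustment augmentation credited above with improving minority classes can be sketched in a few lines. This is a minimal illustration only — the paper does not publish its augmentation parameters, and the scaling factor and flat pixel list here are assumptions:

```python
def adjust_contrast(pixels, factor):
    """Scale pixel intensities about their mean; factor > 1 raises contrast.

    `pixels` is a flat list of 0-255 intensities; results are clipped back
    into range after scaling. (Illustrative sketch -- not the paper's code.)
    """
    mean = sum(pixels) / len(pixels)
    return [min(255, max(0, round(mean + factor * (p - mean)))) for p in pixels]

# Raising contrast pushes values away from the mean (130 here):
row = [100, 120, 140, 160]
print(adjust_contrast(row, 1.5))  # [85, 115, 145, 175]
```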

24 pages, 40856 KB  
Article
UTUAV: A Drone Dataset for Urban Traffic Analysis
by Felipe Lepin, Sergio A. Velastin, Roberto León, Jesús García-Herrero, Gonzalo Rojas-Martínez and Jorge Ernesto Espinosa-Oviedo
Drones 2026, 10(1), 15; https://doi.org/10.3390/drones10010015 - 27 Dec 2025
Viewed by 359
Abstract
Vehicle detection from unmanned aerial vehicles (UAVs) has gained increasing attention due to the growing availability and accessibility of these platforms. UAV-captured videos have proven valuable in a variety of applications, including agriculture, security, and search and rescue operations. To support research in UAV-based vehicle detection, this paper introduces UTUAV: Urban Traffic Unmanned Aerial Vehicle, a dataset composed of traffic video images collected over the streets of Medellín, Colombia. The images are recorded from a semi-static position at two different altitudes (100 and 120 m) and include three manually annotated vehicle types: cars, motorcycles, and large vehicles. The analysis focuses on the main characteristics and challenges presented in the dataset. In particular, data leakage occurs when a single video is used to construct the training, validation, and evaluation sets. An inadequate data split can result in highly similar samples leaking into the evaluation set, leading to inflated performance metrics that do not reflect a model’s true generalization ability. Additionally, baseline results from recent state-of-the-art object detection models based on CNNs and Transformers (YOLOv8, YOLOv11, YOLOv12 and RT-DETR) are presented. The experiments highlight several challenges, including the difficulty of detecting small-scale objects, especially motorcycles, and limited generalization capabilities under altitude changes, a phenomenon commonly referred to as domain shift.
(This article belongs to the Section Innovative Urban Mobility)
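The data-leakage pitfall described above is avoided by splitting at the video level rather than the frame level, so near-duplicate frames from one clip can never straddle the train/validation boundary. A minimal sketch, assuming frames are tagged with a source-video id (the grouping key and ratios are illustrative, not the dataset's actual protocol):

```python
import random

def split_by_video(frames, val_ratio=0.2, seed=0):
    """Split (video_id, frame_id) pairs into train/val by source video,
    never by individual frame, to prevent cross-set leakage."""
    videos = sorted({v for v, _ in frames})
    rng = random.Random(seed)
    rng.shuffle(videos)
    n_val = max(1, int(len(videos) * val_ratio))
    val_videos = set(videos[:n_val])
    train = [f for f in frames if f[0] not in val_videos]
    val = [f for f in frames if f[0] in val_videos]
    return train, val

# Five hypothetical clips of 10 frames each:
frames = [(v, i) for v in ("vidA", "vidB", "vidC", "vidD", "vidE") for i in range(10)]
train, val = split_by_video(frames)
# Every clip's frames end up wholly in one split.
```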

19 pages, 3521 KB  
Article
Intelligent Traffic Management: Comparative Evaluation of YOLOv3, YOLOv5, and YOLOv8 for Vehicle Detection in Urban Environments in Montería, Colombia
by Darío Doria Usta, Ricardo Hundelshaussen, César López Martínez, João Felipe Coimbra Leite Costa and Diego Machado Marques
Future Transp. 2025, 5(4), 191; https://doi.org/10.3390/futuretransp5040191 - 5 Dec 2025
Viewed by 437
Abstract
This study compares the performance of three YOLO-based object detection models—YOLOv3, YOLOv5, and YOLOv8—for vehicle detection and classification at an urban intersection in Montería, Colombia. Recordings from five consecutive days, spanning three time slots, were used, totaling approximately 135,000 frames with variability in lighting and weather conditions. Frames were preprocessed by maintaining the aspect ratio and were normalized according to each model. The evaluation employed models pre-trained on COCO, without fine-tuning, enabling an objective assessment of their generalization capacity. Precision, recall, F1-score, and mAP@0.5 were computed globally and by vehicle class. YOLOv5 achieved the best balance between precision and recall (F1-score = 0.78) and the highest mAP (0.63), while YOLOv3 showed lower recall and mAP, and YOLOv8 performed competitively but slightly below YOLOv5. Cars and motorcycles were the most robust classes, whereas bicycles and trucks showed greater detection challenges. Visual evaluation confirmed stable performance on cloudy days and in light rain, with reduced accuracy under sunny conditions with high contrast. These findings highlight the potential of modern YOLO architectures for intelligent urban traffic monitoring and management. The generated dataset constitutes a replicable resource for future mobility research in similar contexts.
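The mAP@0.5 figures reported above hinge on intersection-over-union: a detection counts as a true positive only when its IoU with a ground-truth box reaches 0.5. A minimal sketch of the box-overlap computation (the boxes are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A half-overlapping detection scores IoU = 1/3, below the 0.5 threshold:
gt = (0, 0, 10, 10)
det = (5, 0, 15, 10)
print(round(iou(gt, det), 3))  # 0.333
```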

21 pages, 3387 KB  
Article
Development of an Autonomous and Interactive Robot Guide for Industrial Museum Environments Using IoT and AI Technologies
by Andrés Arteaga-Vargas, David Velásquez, Juan Pablo Giraldo-Pérez and Daniel Sanin-Villa
Sci 2025, 7(4), 175; https://doi.org/10.3390/sci7040175 - 1 Dec 2025
Viewed by 947
Abstract
This paper presents the design of an autonomous robot guide for a museum-like environment in a motorcycle assembly plant. The system integrates Industry 4.0 technologies such as artificial vision, indoor positioning, generative artificial intelligence, and cloud connectivity to enhance the visitor experience. The development follows the Design Inclusive Research (DIR) methodology and the VDI 2206 standard to ensure a structured scientific and engineering process. A key innovation is the integration of mmWave sensors alongside LiDAR and RGB-D cameras, enabling reliable human detection and improved navigation safety in reflective indoor environments, as well as the deployment of an open-source large language model for natural, on-device interaction with visitors. The current results include the complete mechanical, electronic, and software architecture; simulation validation; and a preliminary implementation in the real museum environment, where the system demonstrated consistent autonomous navigation, stable performance, and effective user interaction.
(This article belongs to the Section Computer Sciences, Mathematics and AI)

13 pages, 448 KB  
Article
Analysis of the Prevalence of Alcohol and Psychoactive Substances Among Drivers in the Material from the Department of Forensic Medicine at the Medical University of Bialystok in Poland
by Michal Szeremeta, Julia Janica, Gabriela Jurkiewicz, Marta Galicka, Julia Koścień, Julia Więcko, Jakub Perkowski, Michal Krzysztof Jeleniewski, Karol Siemieniuk and Anna Niemcunowicz-Janica
Toxics 2025, 13(11), 960; https://doi.org/10.3390/toxics13110960 - 6 Nov 2025
Viewed by 1061
Abstract
In recent years, the issue of drivers under the influence of medications and psychoactive substances as a cause of road accidents has gained increasing importance. This study aimed to assess the prevalence and blood concentration ranges of alcohol and psychoactive substances among drivers in northeastern Poland between 2013 and 2024. To determine the prevalence of medications and psychoactive substances in drivers’ blood, data were collected from 266 blood samples obtained from drivers (251 men and 15 women). Among these, 79 drivers died immediately, 61 drivers survived the accident, and 126 drivers were stopped for roadside checks. The presence of the studied substances was confirmed using gas chromatography combined with mass spectrometry detection (GC-MS) and liquid chromatography combined with mass spectrometry detection (LC-MS). Blood alcohol content was measured using headspace gas chromatography with a flame ionisation detector (HS-GC-FID). Psychoactive substances were detected in 152 of the 266 samples. Drivers testing positive for medications and psychoactive substances were most frequently stopped during roadside controls—67.46%. Among the total positive cases, psychoactive substances used alone or in combination included THC—46.3% (range 0.2–20 ng/mL), alcohol—26.8% (range 0.1–4.1‰), amphetamines—20.7% (range 15–2997 ng/mL), opiates—4.3% (morphine 66.0 ng/mL; methadone 174.0 ng/mL; ranges: tramadol 15.0–600.0 ng/mL; fentanyl 45.0–100.0 ng/mL), benzodiazepines—9.8% (ranges: diazepam 55.0–480.0 ng/mL; midazolam 17.0–1200.0 ng/mL; clonazepam 21.0–36.0 ng/mL), stimulants—6.10% (ranges: amphetamine 15.0–2997.0 ng/mL; cocaine 4.0–30.0 ng/mL; benzoylecgonine 38.0–602.0 ng/mL; PMMA 45.0–360.0 ng/mL; MDMA 20.0–75.0 ng/mL; mephedrone 37.5 ng/mL; alfa-PVP 120 ng/mL), psychotropic drugs—3.1% (carbamazepine 8.0–2100.0 ng/mL; zolpidem 233.0 ng/mL; citalopram 320.0 ng/mL; opipramol 220 ng/mL).
The most commonly used substance among car and motorcycle drivers was THC (37.7% of car drivers and 60% of motorcyclists). Among operators of other types of vehicles, alcohol was the most frequently detected substance, present in 35% of cases. The majority of drivers (81.1%) were under the influence of a single substance. Among the drivers, 7.3% consumed alcohol in combination with at least one other substance, and 11.6% used two or more substances excluding alcohol. Among the psychoactive substances most frequently used alone or in combination with others, THC was predominant. Roadside testing, based on effects similar to alcohol intoxication, was mainly conducted on male drivers.
(This article belongs to the Special Issue Current Issues and Research Perspectives in Forensic Toxicology)

10 pages, 4117 KB  
Proceeding Paper
Development of a Data-Driven Methodology for Rapid Identification of Key Performance Indicators in Motorcycle Racing
by Jan Fojtasek and Michael Bohm
Eng. Proc. 2025, 113(1), 12; https://doi.org/10.3390/engproc2025113012 - 28 Oct 2025
Viewed by 517
Abstract
This study presents a novel method for the rapid identification of key performance indicators (KPIs) from measured riding data of a Ducati Panigale V2 motorcycle, aimed at enhancing racing performance through a deeper understanding of rider-vehicle interaction. The methodology involves the design and implementation of mathematical tools within the RaceStudio3 software to analyze data from the motorcycle’s sensor system. This approach facilitates the swift detection of critical events, including gearshift delays, improper throttle control, and suspension issues. The fusion of data from the motorcycle enables a comprehensive evaluation of the rider’s influence on performance. The results demonstrate the potential of the proposed method to provide valuable insights for optimizing motorcycle setup and rider technique.
(This article belongs to the Proceedings of The Sustainable Mobility and Transportation Symposium 2025)

30 pages, 1709 KB  
Review
Performance of Advanced Rider Assistance Systems in Varying Weather Conditions
by Zia Ullah, João A. C. da Silva, Ricardo Rodrigues Nunes, Arsénio Reis, Vítor Filipe, João Barroso and E. J. Solteiro Pires
Vehicles 2025, 7(4), 105; https://doi.org/10.3390/vehicles7040105 - 24 Sep 2025
Cited by 1 | Viewed by 2226
Abstract
Advanced rider assistance systems (ARAS) play a crucial role in enhancing motorcycle safety through features such as collision avoidance, blind-spot detection, and adaptive cruise control, which rely heavily on sensors like radar, cameras, and LiDAR. However, their performance is often compromised under adverse weather conditions, leading to sensor interference, reduced visibility, and inconsistent reliability. This study evaluates the effectiveness and limitations of ARAS technologies in rain, fog, and snow, focusing on how sensor performance, algorithms, techniques, and dataset suitability influence system reliability. A thematic analysis was conducted, selecting studies focused on ARAS in adverse weather conditions based on specific selection criteria. The analysis shows that while ARAS offers substantial safety benefits, its accuracy declines in challenging environments. Existing datasets, algorithms, and techniques were reviewed to identify the most effective options for ARAS applications. However, more comprehensive weather-resilient datasets and adaptive multi-sensor fusion approaches are still needed. Advancing in these areas will be critical to improving the robustness of ARAS and ensuring safer riding experiences across diverse environmental conditions.

21 pages, 8671 KB  
Article
IFE-CMT: Instance-Aware Fine-Grained Feature Enhancement Cross Modal Transformer for 3D Object Detection
by Xiaona Song, Haozhe Zhang, Haichao Liu, Xinxin Wang and Lijun Wang
Sensors 2025, 25(18), 5685; https://doi.org/10.3390/s25185685 - 12 Sep 2025
Viewed by 913
Abstract
In recent years, multi-modal 3D object detection algorithms have experienced significant development. However, current algorithms primarily focus on designing overall fusion strategies for multi-modal features, neglecting finer-grained representations, which leads to a decline in the detection accuracy of small objects. To address this issue, this paper proposes the Instance-aware Fine-grained feature Enhancement Cross Modal Transformer (IFE-CMT) model. We designed an Instance feature Enhancement Module (IE-Module), which can accurately extract object features from multi-modal data and use them to enhance overall features while avoiding view transformations and maintaining low computational overhead. Additionally, we design a new point cloud branch network that effectively expands the network’s receptive field, enhancing the model’s semantic expression capabilities while preserving texture details of the objects. Experimental results on the nuScenes dataset demonstrate that compared to the CMT model, our proposed IFE-CMT model improves mAP and NDS by 2.1% and 0.8% on the validation set, respectively. On the test set, it improves mAP and NDS by 1.9% and 0.7%, respectively. Notably, for small object categories such as bicycles and motorcycles, the mAP improved by 6.6% and 3.7%, respectively, significantly enhancing the detection accuracy of small objects.
(This article belongs to the Section Vehicular Sensing)

7 pages, 1498 KB  
Proceeding Paper
AI and Big Data for Assessing Carbon Emission in Tourism Areas: A Pilot Study in Phuket City
by Pawita Boonrat, Voravika Wattanasoontorn, Kanruthay Ruktaengam, Konthee Boonmeeprakob and Napatsakorn Roswhan
Eng. Proc. 2025, 108(1), 23; https://doi.org/10.3390/engproc2025108023 - 1 Sep 2025
Cited by 1 | Viewed by 1107
Abstract
Artificial intelligence (AI) and big data technology were applied to assess carbon emissions in a high-tourism area in this study. In the study site, the Thalang Road in Phuket Old Town, Thailand, visitors and vehicles (including cars, motorcycles, trucks, vans, and Tuktuks) were counted using closed-circuit television (CCTV) footage and classified via the real-time detection transformer (RT-DETR) algorithm. The data were combined with records of electricity usage. From March to October 2024, 20,000 visitors per month visited the site. Electricity was the main source of carbon emissions, averaging 88 ± 11 tCO2-eq monthly. Transport accounted for 500 ± 14 kg CO2-eq. The average emission per visitor was calculated as 4.2 ± 0.4 kg CO2-eq. The results showed how sustainable tourism policies and urban planning strategies need to be developed in Phuket. Based on the results, indirect emissions from the site need to be estimated.
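As a back-of-envelope check, the per-visitor figure follows from dividing the summed monthly emission sources by the monthly visitor count. The inputs below are the rounded averages quoted above, so the result only approximates the reported 4.2 ± 0.4 kg CO2-eq:

```python
# Back-of-envelope check of the reported per-visitor carbon footprint,
# using the rounded monthly averages quoted in the abstract.
electricity_kg = 88 * 1000   # 88 tCO2-eq -> kg
transport_kg = 500           # kg CO2-eq
visitors = 20_000            # visitors per month

per_visitor = (electricity_kg + transport_kg) / visitors
print(round(per_visitor, 2))  # 4.43 -- within the reported 4.2 +/- 0.4 range
```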

27 pages, 8690 KB  
Article
Automatic Number Plate Detection and Recognition System for Small-Sized Number Plates of Category L-Vehicles for Remote Emission Sensing Applications
by Hafiz Hashim Imtiaz, Paul Schaffer, Paul Hesse, Martin Kupper and Alexander Bergmann
Sensors 2025, 25(11), 3499; https://doi.org/10.3390/s25113499 - 31 May 2025
Viewed by 2559
Abstract
Road traffic emissions are still a significant contributor to air pollution, which causes adverse health effects. Remote emission sensing (RES) is a state-of-the-art technique that continuously monitors the emissions of thousands of vehicles in traffic. Automatic number plate recognition (ANPR) systems are an essential part of RES systems to identify the registered owners of high-emitting vehicles. Recognizing number plates on L-vehicles (two-wheelers) with a standard ANPR system is challenging due to differences in size and placement across various categories. No ANPR system is designed explicitly for Category L vehicles, especially mopeds. In this work, we present an automatic number plate detection and recognition system for Category L vehicles (L-ANPR) specially developed to recognize L-vehicle number plates of various sizes and colors from different categories and countries. The cost-effective and energy-efficient L-ANPR system was implemented on roads during remote emission measurement campaigns in multiple European cities and tested with hundreds of vehicles. The L-ANPR system recognizes Category L vehicles by calculating the size of each passing vehicle using photoelectric sensors. It can then trigger the L-ANPR detection system, which begins detecting license plates and recognizing license plate numbers with the L-ANPR recognizing system. The L-ANPR system’s license plate detection model is trained using thousands of images of license plates from various types of Category L vehicles across different countries, and the overall detection accuracy with test images exceeded 90%. The L-ANPR system’s character recognition is designed to identify large characters on standard number plates as well as smaller characters in various colors on small moped license plates, achieving a recognition accuracy surpassing 70%. The reasons for false recognitions are identified, and the solutions are discussed in detail.
(This article belongs to the Section Sensing and Imaging)

26 pages, 9112 KB  
Article
On Construction of Real-Time Monitoring System for Sport Cruiser Motorcycles Using NB-IoT and Multi-Sensors
by Endah Kristiani, Tzu-Hao Yu and Chao-Tung Yang
Sensors 2024, 24(23), 7484; https://doi.org/10.3390/s24237484 - 23 Nov 2024
Cited by 3 | Viewed by 3282
Abstract
This study leverages IoT technology to develop a real-time monitoring system for large motorcycles. We collaborated with professional mechanics to define the required data types and system architecture, ensuring practicality and efficiency. The system integrates the NB-IoT for efficient remote data transmission and uses MQTT for optimized messaging. It also includes advanced database management and intuitive data visualization for enhancing the user experience. For hardware installation, the system follows strict guidelines to avoid damaging the motorcycle’s original structure, comply with Taiwan’s legal standards, and prevent unauthorized modifications. The implementation of this real-time monitoring system is anticipated to significantly reduce safety risks associated with mechanical failures as it continuously monitors inappropriate driving behaviors and detects mechanical abnormalities in real time. The study indicates that the integration of advanced technologies, such as the NB-IoT and multi-sensor systems, can lead to improved driving safety and operational efficiency. Furthermore, the research suggests that the system’s ability to provide instant notifications and alerts through the platforms’ instant messaging can enhance user responsiveness to potential hazards, thereby contributing to a safer riding experience.
(This article belongs to the Special Issue Sensing and Mobile Edge Computing)
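Monitoring systems of this kind typically publish compact JSON messages over MQTT. A minimal sketch of such a payload follows; every field name and the topic string are hypothetical illustrations, since the paper does not publish its message schema:

```python
import json
import time

def make_telemetry(sensor_readings, device_id="moto-01"):
    """Build one JSON telemetry message of the kind a motorcycle monitor
    might publish over MQTT. All field names are hypothetical."""
    return json.dumps({
        "device": device_id,
        "ts": int(time.time()),        # Unix timestamp of the sample
        "readings": sensor_readings,   # arbitrary sensor name -> value map
    })

payload = make_telemetry({"engine_temp_c": 92.5, "lean_angle_deg": 31.0})
# A broker client would then publish it, e.g. with paho-mqtt:
#   client.publish("fleet/moto-01/telemetry", payload)
```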

14 pages, 4045 KB  
Article
Vehicular Traffic Flow Detection and Monitoring for Implementation of Smart Traffic Light: A Case Study for Road Intersection in Limeira, Brazil
by Talía Simões dos Santos Ximenes, Antonio Carlos de Oliveira Silva, Guilherme Pieretti de Martino, William Machado Emiliano, Mauro Menzori, Yuri Alexandre Meyer and Vitor Eduardo Molina Júnior
Future Transp. 2024, 4(4), 1388-1401; https://doi.org/10.3390/futuretransp4040067 - 8 Nov 2024
Cited by 3 | Viewed by 4471
Abstract
This paper proposes the development of a smart traffic light prototype based on vehicular traffic flow measurement in the stretch between two avenues in the city of Limeira, SP, Brazil, focusing on the stretch towards UNICAMP’s School of Technology. To this end, we initially developed a Python code using the OpenCV library in order to detect and count vehicles. With the counting in operation, programming logic was inserted, aiming at preparing traffic light timers based on vehicular traffic. Finally, a code change overlaid the traffic lights on the display video to show the ongoing color changes, yielding a program that identifies vehicles and flow together with the virtual traffic light system itself. Vehicle counting accuracy was 75% for large vehicles, 90% for passenger cars, and 100% for motorcycles. The simulation of a smart traffic light implementation worked satisfactorily according to the post-processing of the video recorded for validation.
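Vehicle counting of the kind described above is commonly implemented by tracking object centroids and counting crossings of a virtual line in the frame. A minimal pure-Python sketch of that counting logic, assuming centroid tracks are already available (the paper does not publish its exact code):

```python
def count_line_crossings(tracks, line_y):
    """Count tracked objects whose centroid crosses a horizontal line.

    `tracks` maps a track id to its ordered list of centroid y-coordinates;
    a crossing is a transition from above the line to on/below it, counted
    at most once per track. (Sketch of the usual OpenCV counting setup.)
    """
    count = 0
    for ys in tracks.values():
        for prev, cur in zip(ys, ys[1:]):
            if prev < line_y <= cur:
                count += 1
                break  # one count per vehicle track
    return count

tracks = {
    1: [100, 140, 180, 220],   # crosses y=200 -> counted
    2: [150, 160, 170],        # never reaches the line
    3: [190, 210, 230],        # crosses -> counted
}
print(count_line_crossings(tracks, 200))  # 2
```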

18 pages, 2700 KB  
Article
Evaluating the Effectiveness of an Online Gamified Traffic Safety Education Platform for Adolescent Motorcyclists in Pakistan
by Imran Nawaz, Ariane Cuenen, Geert Wets, Roeland Paul, Tufail Ahmed and Davy Janssens
Appl. Sci. 2024, 14(19), 8590; https://doi.org/10.3390/app14198590 - 24 Sep 2024
Cited by 3 | Viewed by 3229
Abstract
This study explores the potential of online traffic safety education for adolescent motorcyclists in Pakistan. An e-learning platform, “Route 2 School” (R2S), was developed focusing on traffic knowledge, situation awareness, risk detection, and risk management. Male students (14–18 years) who commute to school by motorcycle were divided into an experimental group (EG) and a control group (CG), both completing pre- and post-measurement questionnaires. The EG showed significant improvement in knowledge, risk detection, and risk management compared to the CG, but not in situation awareness. Participants reported increased traffic safety awareness and suggested adding more interactive elements. The R2S platform’s scores revealed better performance in risk detection and risk management modules than situation awareness. Time spent on modules varied, with situation awareness requiring the most time. Adolescents expressed satisfaction with the platform, acknowledging its role in increasing traffic awareness. This study provides initial insights into the effectiveness of online traffic safety education in Pakistan, highlighting the potential to address the lack of comprehensive traffic safety education in schools. Further research and stakeholder engagement are recommended to integrate such platforms into formal education, potentially reducing traffic-related injuries among adolescent motorcyclists in developing countries.
(This article belongs to the Special Issue Technology Enhanced and Mobile Learning: Innovations and Applications)

19 pages, 3687 KB  
Article
Comparative Analysis of YOLOv8 and YOLOv10 in Vehicle Detection: Performance Metrics and Model Efficacy
by Athulya Sundaresan Geetha, Mujadded Al Rabbani Alif, Muhammad Hussain and Paul Allen
Vehicles 2024, 6(3), 1364-1382; https://doi.org/10.3390/vehicles6030065 - 10 Aug 2024
Cited by 50 | Viewed by 14180
Abstract
Accurate vehicle detection is crucial for the advancement of intelligent transportation systems, including autonomous driving and traffic monitoring. This paper presents a comparative analysis of two advanced deep learning models—YOLOv8 and YOLOv10—focusing on their efficacy in vehicle detection across multiple classes such as bicycles, buses, cars, motorcycles, and trucks. Using a range of performance metrics, including precision, recall, F1 score, and detailed confusion matrices, we evaluate the performance characteristics of each model. The findings reveal that YOLOv10 generally outperformed YOLOv8, particularly in detecting smaller and more complex vehicles like bicycles and trucks, which can be attributed to its architectural enhancements. Conversely, YOLOv8 showed a slight advantage in car detection, underscoring subtle differences in feature processing between the models. The performance for detecting buses and motorcycles was comparable, indicating robust features in both YOLO versions. This research contributes to the field by delineating the strengths and limitations of these models and providing insights into their practical applications in real-world scenarios. It enhances understanding of how different YOLO architectures can be optimized for specific vehicle detection tasks, thus supporting the development of more efficient and precise detection systems.
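The precision, recall, and F1 metrics compared above all derive from true-positive, false-positive, and false-negative counts taken from the confusion matrix. A minimal sketch with illustrative counts (not the paper's data):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative counts for one vehicle class:
p, r, f = prf1(tp=80, fp=20, fn=40)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.8 0.67 0.73
```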

20 pages, 5383 KB  
Article
Enhancing Autonomous Vehicle Perception in Adverse Weather: A Multi Objectives Model for Integrated Weather Classification and Object Detection
by Nasser Aloufi, Abdulaziz Alnori and Abdullah Basuhail
Electronics 2024, 13(15), 3063; https://doi.org/10.3390/electronics13153063 - 2 Aug 2024
Cited by 22 | Viewed by 8437
Abstract
Robust object detection and weather classification are essential for the safe operation of autonomous vehicles (AVs) in adverse weather conditions. While existing research often treats these tasks separately, this paper proposes a novel multi objectives model that treats weather classification and object detection as a single problem using only the AV camera sensing system. Our model offers enhanced efficiency and potential performance gains by integrating image quality assessment, Super-Resolution Generative Adversarial Network (SRGAN), and a modified version of You Only Look Once (YOLO) version 5. Additionally, by leveraging the challenging Detection in Adverse Weather Nature (DAWN) dataset, which includes four types of severe weather conditions, including the often-overlooked sandy weather, we have conducted several augmentation techniques, resulting in a significant expansion of the dataset from 1027 images to 2046 images. Furthermore, we optimize the YOLO architecture for robust detection of six object classes (car, cyclist, pedestrian, motorcycle, bus, truck) across adverse weather scenarios. Comprehensive experiments demonstrate the effectiveness of our approach, achieving a mean average precision (mAP) of 74.6%, underscoring the potential of this multi objectives model to significantly advance the perception capabilities of autonomous vehicles’ cameras in challenging environments.
(This article belongs to the Special Issue Advances in the System of Higher-Dimension-Valued Neural Networks)
