Search Results (1)

Search Parameters:
Keywords = optimally-weighted image-pose approach (OWIPA)

21 pages, 13627 KB  
Article
Optimally-Weighted Image-Pose Approach (OWIPA) for Distracted Driver Detection and Classification
by Hong Vin Koay, Joon Huang Chuah, Chee-Onn Chow, Yang-Lang Chang and Bhuvendhraa Rudrusamy
Sensors 2021, 21(14), 4837; https://doi.org/10.3390/s21144837 - 15 Jul 2021
Cited by 24 | Viewed by 4688
Abstract
Distracted driving is a prime factor in motor vehicle accidents. Current studies on distraction detection focus on improving detection performance through various techniques, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs). However, research on detecting distracted drivers through pose estimation is scarce. This work introduces an ensemble of ResNets, named the Optimally-Weighted Image-Pose Approach (OWIPA), to classify distraction from both original and pose estimation images. The pose estimation images are generated from HRNet and ResNet. We use ResNet101 and ResNet50 to classify the original images and the pose estimation images, respectively. An optimum weight is determined through a grid search method, and the predictions from both models are weighted by this parameter. The experimental results show that our proposed approach achieves 94.28% accuracy on the AUC Distracted Driver Dataset.
(This article belongs to the Section Intelligent Sensors)
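The abstract describes fusing the two branches' predictions as a weighted average, with the weight chosen by grid search. A minimal sketch of that fusion step, using synthetic placeholder data (the function names and toy probabilities below are illustrative assumptions, not the authors' code):

```python
import numpy as np

def fuse(p_image, p_pose, w):
    # Weighted average of the image-branch and pose-branch class
    # probabilities: w * p_image + (1 - w) * p_pose.
    return w * p_image + (1.0 - w) * p_pose

def grid_search_weight(p_image, p_pose, labels, steps=101):
    # Pick the weight w in [0, 1] that maximizes accuracy on a
    # held-out validation set, as the abstract's grid search suggests.
    best_w, best_acc = 0.0, -1.0
    for w in np.linspace(0.0, 1.0, steps):
        preds = fuse(p_image, p_pose, w).argmax(axis=1)
        acc = float((preds == labels).mean())
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy validation data: 4 samples, 3 classes (placeholder softmax outputs,
# standing in for ResNet101/ResNet50 predictions).
rng = np.random.default_rng(0)
p_image = rng.dirichlet(np.ones(3), size=4)
p_pose = rng.dirichlet(np.ones(3), size=4)
labels = np.array([0, 1, 2, 1])

w, acc = grid_search_weight(p_image, p_pose, labels)
print(f"best weight = {w:.2f}, validation accuracy = {acc:.2f}")
```

In practice the grid search would run on real validation-set softmax outputs from the two trained branches; the chosen weight is then fixed at test time.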