Applied Sciences
  • Article
  • Open Access

30 August 2022

Driver Fatigue and Distracted Driving Detection Using Random Forest and Convolutional Neural Network

1 Department of Electrical Engineering, National Chung Cheng University, Chiayi 62102, Taiwan
2 Department of Computer Science and Information Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
3 Department of Computer Science and Information Engineering, National United University, Miaoli 36003, Taiwan
* Author to whom correspondence should be addressed.

Abstract

Driver fatigue and distracted driving are the two most common causes of major accidents. Thus, the on-board monitoring of driving behaviors is key in the development of intelligent vehicles. In this paper, we propose an approach which detects driver fatigue and distracted driving behaviors using vision-based techniques. For driver fatigue detection, a single shot scale-invariant face detector (S3FD) is first used to detect the face in the image, and then the face alignment network (FAN) is utilized to extract facial features. After that, the facial features are used to determine the driver’s yawns, head posture, and the opening or closing of their eyes. Finally, the random forest technique is used to analyze the driving conditions. For distracted driving detection, a convolutional neural network (CNN) is used to classify various distracted driving behaviors, and the Adam optimizer is used to improve optimization performance. Compared with existing methods, our approach is more accurate and efficient. Moreover, distracted driving can be detected in real time on embedded hardware.

1. Introduction

Based on statistics from the Ministry of Transportation and Communications (Taiwan), driver fatigue and distracted driving are the two most common causes of major accidents. In Taiwan, about 20% of traffic accidents each year are due to driver fatigue and distracted driving, and these two behaviors have gradually become the principal causes of road traffic accidents. In some countries, driver fatigue and distracted driving are considered to be as dangerous as drunk driving, and some traffic laws forbid driving continuously for long periods. Therefore, it is crucial to detect both driver fatigue and distracted driving.
Several studies have formulated methods for detecting driver fatigue and distracted driving [,,,]. These methods can be divided into three categories depending on whether they: (1) use vehicle driving data as recorded by the onboard diagnostic systems; (2) use data on the psychological characteristics of the driver, including electroencephalogram (EEG), electrooculogram (EOG), heartbeat, and finger pulse data; or (3) use vision-based techniques to monitor the driver’s status by detecting the driver’s yawns, head posture, facial expression, and the opening or closing of their eyes.
In this paper, we propose a monitoring approach which detects driver fatigue and distracted driving, based on the random forest approach [] and a convolutional neural network (CNN) []. Our approach does not require invasive techniques to collect data. Therefore, it can be used in practical applications to generate reminders and prevent driving accidents. Figure 1 presents a flowchart of the proposed approach. The approach detects driver fatigue and distracted driving from images captured by a camera in front of the driver. To detect driver fatigue, the approach first uses a single shot scale-invariant face detector (S3FD) to detect the human face in the image [], since it performs well on faces at different scales within different regions of interest (ROI). The approach then uses the highly accurate face alignment network (FAN) to extract the features of the human face []. A total of 68 facial feature points are extracted from the image, covering features such as the eyes and mouth. The defined fatigue parameters are then computed from the extracted facial features. Finally, a random forest is trained to determine whether the driver is fatigued and whether a warning message should be issued to the driver. To detect distracted driving, we mainly use a data set which we collected ourselves. This data set contains seven categories, with six categories reflecting common types of distracted driving and one category reflecting safe driving. A CNN trained on our distracted driving data set is used for testing. A warning signal for distracted driving is generated when a sufficient number of positive detections occur in the acquired image sequence.
Figure 1. Flowchart of the proposed approach for detecting driver fatigue and distracted driving.
Our contributions are two-fold. First, we propose a monitoring approach which performs well in detecting driver fatigue and distracted driving. Driver fatigue detection is based on the random forest approach, and distracted driving detection is based on the CNN. A warning message is issued when the driver is fatigued or distracted. Second, our approach can detect distracted driving in real time by running the trained network on the Nvidia Jetson TX2 embedded platform.

3. Proposed Approach

3.1. Driver Fatigue Detection

For the detection of driver fatigue, we use the random forest algorithm because it performs well, trains quickly, and can process high-dimensional data. We first detect faces in the input images using the S3FD [], because it provides high-quality detection for faces at different scales and ROIs. Then, we use the FAN [] to extract 68 facial feature points in the image and locate features such as the eyes and mouth. This network takes a 2D representation of the face and the coordinates of the facial feature points as inputs, and is trained with four hourglass modules []. The hourglass modules apply downsampling and upsampling to capture information at every image scale, reduce the loss of image information, and finally produce a heatmap. This heatmap is used to predict the position of each facial feature point in the image. The advantage of using this network is that it can handle large or unusual face poses, and is thus generally more effective than Dlib or the cascaded regression method [].
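The paper describes this pipeline at the component level. As a point of reference, a minimal sketch using the open-source face_alignment package (released by the FAN authors, with S3FD as its default face detector) might look as follows; the exact class and enum names vary between package versions, so treat this as illustrative rather than as the authors' implementation:

```python
# Sketch: S3FD face detection + FAN landmark extraction via the
# open-source `face_alignment` package (S3FD is its default detector).
# Enum names vary by version (older releases use LandmarksType._2D).
import cv2
import face_alignment

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D,
                                  device='cuda')  # or 'cpu'

frame = cv2.cvtColor(cv2.imread('driver.jpg'), cv2.COLOR_BGR2RGB)
faces = fa.get_landmarks(frame)     # list of (68, 2) arrays, one per face
if faces:
    pts = faces[0]
    # Standard 68-point indexing for the regions used in this section:
    left_eye, right_eye = pts[36:42], pts[42:48]
    mouth = pts[48:68]
```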
For closed eye and eye blink detection, six feature points of each eye are used to define the eye aspect ratio (EAR) [], which is calculated using the length and width of the eye as follows:
$$\mathrm{EAR} = \frac{\lVert P_2 - P_6 \rVert + \lVert P_3 - P_5 \rVert}{2\,\lVert P_1 - P_4 \rVert}$$
where Pi, for i = 1, 2, …, 6, are the feature points. A closed eye is defined as EAR < 0.15 for a given period. Eye blinking behavior is indicated by the frequent fluctuation of the EAR. For the detection of eye gaze direction, the region of the eye extracted with the six feature points is first converted into a grayscale eye image. The grayscale eye image is then processed with blurring and erosion to eliminate the reflected light, and is finally binarized to derive the enclosing contour and centroid. The eye gaze direction is expressed in terms of the horizontal and vertical directions. Figure 2 illustrates the results from the processing of an eye region extracted with the six feature points.
Figure 2. Processing results for an eye region. (a) grayscale eye image, (b) blurred eye image, (c) eroded eye image, and (d) binarized eye image.
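As an illustration of the computations above, the following sketch implements the EAR, the closed-eye test, and the pupil-centroid extraction with NumPy and OpenCV. Only the 0.15 EAR threshold comes from the text; the blur kernel, erosion kernel, and binarization threshold are assumptions to be tuned:

```python
import cv2
import numpy as np

def eye_aspect_ratio(p):
    """EAR from the six eye landmarks P1..P6 (p is a (6, 2) array)."""
    return (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) \
           / (2.0 * np.linalg.norm(p[0] - p[3]))

def is_eye_closed(ear, threshold=0.15):
    return ear < threshold            # "closed" if sustained over a period

def pupil_centroid(gray_eye):
    """Blur, erode, and binarize a grayscale eye patch, then return the
    centroid of the dark (pupil) region for gaze estimation."""
    blurred = cv2.GaussianBlur(gray_eye, (5, 5), 0)   # suppress reflections
    eroded = cv2.erode(blurred, np.ones((3, 3), np.uint8))
    _, binary = cv2.threshold(eroded, 50, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(binary)
    if m['m00'] == 0:
        return None                   # no dark region found
    return m['m10'] / m['m00'], m['m01'] / m['m00']   # (x, y) centroid
```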
During a yawn, the mouth remains open for a fairly long period. Thus, we use the mouth aspect ratio (MAR), defined analogously to the EAR, to detect yawning. The MAR is defined as follows:
$$\mathrm{MAR} = \frac{\lVert P_2 - P_8 \rVert + \lVert P_3 - P_7 \rVert + \lVert P_4 - P_6 \rVert}{2\,\lVert P_1 - P_5 \rVert}$$
where Pi, for i = 1, 2, …, 8, are the eight feature points representing the mouth []. A single yawn is indicated by a MAR that exceeds a given threshold for a given period (i.e., if the mouth remains open for too long). Figure 3 presents the flowchart for how yawns are detected.
Figure 3. Flowchart for detecting yawns.
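The yawn-detection flow of Figure 3 can be sketched as a per-frame state machine: a yawn is counted when the MAR stays above a threshold for a minimum number of consecutive frames. The MAR threshold and frame count below are illustrative assumptions, not values from the paper:

```python
import numpy as np

MAR_THRESHOLD = 0.6      # assumed; tune on training data
MIN_OPEN_FRAMES = 20     # assumed; roughly 0.7 s at 30 fps

def mouth_aspect_ratio(p):
    """MAR from the eight mouth landmarks P1..P8 (p is an (8, 2) array)."""
    return (np.linalg.norm(p[1] - p[7]) + np.linalg.norm(p[2] - p[6]) +
            np.linalg.norm(p[3] - p[5])) / (2.0 * np.linalg.norm(p[0] - p[4]))

class YawnDetector:
    def __init__(self):
        self.open_frames = 0
        self.yawn_count = 0

    def update(self, mar):
        """Call once per frame; returns the cumulative yawn count."""
        if mar > MAR_THRESHOLD:
            self.open_frames += 1
            if self.open_frames == MIN_OPEN_FRAMES:   # count each yawn once
                self.yawn_count += 1
        else:
            self.open_frames = 0
        return self.yawn_count
```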
The purpose of calculating the head posture is to determine whether the driver is nodding. To determine the orientation of the head, we first establish 14 correspondences between the 2D facial feature points and the 3D face model. The identified 2D image features are then mapped onto the 3D model to derive the rotational changes of the head.
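One common way to realize such a 2D-3D mapping is OpenCV's solvePnP, sketched below. The paper only specifies that 14 correspondences are used; the camera intrinsics and the Euler-angle convention here are assumptions for illustration:

```python
import cv2
import numpy as np

def head_pitch(points_2d, points_3d, frame_shape):
    """Estimate head pitch (deg) from N 2D-3D landmark correspondences.
    points_2d: (N, 2) image points; points_3d: (N, 3) face-model points."""
    h, w = frame_shape[:2]
    camera_matrix = np.array([[w, 0, w / 2],   # crude intrinsics guess:
                              [0, w, h / 2],   # focal length ~ image width
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))             # assume no lens distortion
    ok, rvec, _ = cv2.solvePnP(points_3d.astype(np.float64),
                               points_2d.astype(np.float64),
                               camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> matrix
    # Pitch under an XYZ Euler decomposition; nodding shows up as a
    # periodic swing of this angle over consecutive frames.
    return np.degrees(np.arctan2(R[2, 1], R[2, 2]))
```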
For driver fatigue detection, six fatigue parameters are defined: (a) PERCLOS, the percentage of eye closure over a period of time; (b) blink frequency, the number of blinks over a period of time; (c) maximum close duration (MCD), the longest eye closure over a period of time; (d) NodFreq, the frequency of nodding over a period of time; (e) YawnFreq, the frequency of yawning over a period of time; and (f) GazeDir, the gaze direction.
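For concreteness, here is a sketch of how the three eye-based parameters can be derived from a per-frame closed-eye sequence; the frame rate and the minutes-based blink unit are assumptions:

```python
import numpy as np

def eye_fatigue_parameters(closed, fps=30):
    """closed: boolean array with one entry per frame (True = eye closed)."""
    closed = np.asarray(closed, dtype=bool)
    perclos = closed.mean()                       # fraction of closed frames

    # Blink frequency: open -> closed transitions per minute.
    blink_starts = np.flatnonzero(~closed[:-1] & closed[1:])
    blink_freq = len(blink_starts) / (len(closed) / fps / 60.0)

    # Maximum close duration: longest run of closed frames, in seconds.
    longest = run = 0
    for c in closed:
        run = run + 1 if c else 0
        longest = max(longest, run)
    return perclos, blink_freq, longest / fps
```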
We use the YawDD data set for training and testing []. This data set contains 322 male and female drivers, with or without glasses (or sunglasses). The camera is placed under the front mirror of the vehicle, and three or four video clips are recorded for each participant. Each video shows a different mouth condition, such as normal talking, singing, and yawning. Drivers who are talking and yawning are considered sober and drowsy, respectively. We use the six fatigue parameters to train the random forest and classify the driver status.
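Training the classifier then reduces to fitting a random forest on rows of the six parameters. The sketch below uses scikit-learn with the tree count and leaf size reported in Section 4 (10 trees, 1 sample per leaf); the feature matrix here is synthetic placeholder data standing in for parameters extracted from YawDD segments:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((100, 6))        # placeholder rows of [PERCLOS, blink freq,
                                #   MCD, NodFreq, YawnFreq, GazeDir]
y = rng.integers(0, 2, 100)     # 1 = drowsy, 0 = sober

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = RandomForestClassifier(n_estimators=10, min_samples_leaf=1)
clf.fit(X_train, y_train)

print(clf.score(X_test, y_test))       # classification accuracy
print(clf.feature_importances_)        # cf. Table 3 parameter importance
```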

3.2. Distracted Driving Detection

We use a CNN for detecting distracted driving because it is fast and computationally efficient. Figure 4 shows the convolutional neural network structure used for distracted driving detection. In the input layer, we first convert the input image to a resolution of 256 × 256 pixels. The converted image then passes through five convolutional layers, each with the ReLU activation function and max pooling. The advantage of using max pooling is that it reduces the spatial dimensions, allowing the CNN to run faster. After the convolutional layers, two fully connected layers are combined with ReLU and dropout; dropout is included to prevent over-fitting. Finally, the output layer comprises a fully connected layer with a softmax activation function for classification. We use the Adam optimizer for enhanced optimization performance. In the Kaggle distracted driving competition, the log loss was used for evaluation. Therefore, we adopt the log loss, defined as $\mathrm{LogLoss} = -(y\log(p) + (1 - y)\log(1 - p))$. In the experiments, a warning signal is generated if a specific distracted driving behavior is detected in 4 out of 10 frames.
Figure 4. Convolutional neural network structure for distracted driving detection.
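The structure in Figure 4 can be sketched in Keras as follows. The input size (256 × 256), five conv/pool stages, two fully connected layers with dropout, softmax output, and Adam optimizer follow the text; the filter counts, fully connected widths, and dropout rate are assumptions:

```python
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Input(shape=(256, 256, 3)))
for filters in (32, 64, 128, 128, 256):            # five conv + pool stages
    model.add(layers.Conv2D(filters, 3, padding='same', activation='relu'))
    model.add(layers.MaxPooling2D())                # smaller maps, faster net
model.add(layers.Flatten())
for units in (512, 128):                            # two FC layers
    model.add(layers.Dense(units, activation='relu'))
    model.add(layers.Dropout(0.5))                  # guard against over-fitting
model.add(layers.Dense(7, activation='softmax'))    # 7 driving categories

model.compile(optimizer='adam',                     # Adam optimizer
              loss='categorical_crossentropy',      # multi-class log loss
              metrics=['accuracy'])
```

The 4-out-of-10-frames warning rule from the text then amounts to a sliding window over the per-frame predictions:

```python
from collections import deque

recent = deque(maxlen=10)                 # last 10 frame-level predictions

def update_warning(pred_class, distracted_classes):
    recent.append(pred_class in distracted_classes)
    return sum(recent) >= 4               # warn on 4 of the last 10 frames
```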
To collect data on distracted driving, we use two GoPro cameras (with an image resolution of 1920 × 1080, horizontal field of view of 95.5°, and vertical field of view of 56.7°) to acquire the videos of 12 participants driving. One camera is installed in front of the driver and the other is installed to the right. The data set contains 32,776 images classified into seven categories: “drink”, “phone–left”, “phone–right”, “panel”, “texting–left”, “texting–right”, and “safe–driving”, with 1289, 6217, 5962, 4670, 5982, 6486, and 2160 images, respectively.

4. Results

We experimentally tested our approach’s ability to detect driver fatigue and distracted driving using the YawDD data set and our own data set. The experiments were performed on a PC with an Intel i7-8700HQ CPU (Santa Clara, CA, USA) and an Nvidia RTX2070 GPU (Santa Clara, CA, USA). Moreover, the experiments were run on an embedded platform, the Nvidia Jetson TX2 (Santa Clara, CA, USA), which is equipped with a dual-core Denver 2 CPU, a quad-core ARM Cortex-A57, and a 256-core GPU.

4.1. Results of Driver Fatigue Detection

We randomly selected five men and five women from the YawDD data set for the experiments. The accuracies of face detection, eye detection, and eye opening and closing detection using a CNN and the EAR metric are shown in Table 1. The average accuracies were 100% for both face and eye detection. Moreover, the average accuracies of eye opening and closing detection using a CNN and the EAR metric were 93.5% and 94.5%, respectively. Hence, the EAR metric performed better than the CNN when used to detect eye opening and closing.
Table 1. Detection accuracy. Total: total number of images. Face: face detection accuracy. Eye: eye detection accuracy. O/C CNN: detection accuracy of eye open/close using CNN. O/C EAR: detection accuracy of eye open/close using EAR.
Figure 5 shows the detection of eye opening and closing using the CNN and the EAR metric. The estimated orientation of the face, used for head posture detection, is also indicated.
Figure 5. Detection of eye opening and closing using a CNN (left) and the EAR metric (right). The images also indicate the orientation of the face used in head posture detection.
In the classification of driver fatigue, 20% of all 53 videos in the YawDD data set were used for testing and the rest were used for training. We used the random forest algorithm to classify and analyze driver fatigue. The results are shown in Table 2. The number of trees and the minimum number of samples in a leaf were set as 10 and 1, respectively.
Table 2. Results from the random forest algorithm and parameter settings.
The importance of each fatigue parameter is shown in Table 3. The results revealed that the frequency of yawning was the most crucial parameter for detecting driver fatigue, which is closely related to the use of the YawDD data set for training. The second and third most crucial parameters were blink frequency and PERCLOS, respectively.
Table 3. Importance of the fatigue parameters.
We compared our approach for detecting driver fatigue with its counterparts in the literature [,,,]. Table 4 shows the comparison of our approach with previous methods. At 91%, the proposed approach is more accurate than those of Zhang et al. [] (88.6%), Akrout and Mahdi [] (83%), Moujahid et al. [] (79.8%), and Bakheet and Al-Hamadi [] (85.6%).
Table 4. Comparison of our approach with the previous methods.
Our approach was also fast. It processed an image in 0.47–0.5 s on the PC (including approximately 0.24 s for the random forest computation) and could output a driver fatigue detection result in 2.8 s on the Jetson TX2 platform.

4.2. Results of Distracted Driving Detection

We divided our distracted driving front data set into a training set and a validation set, with 26,223 frames and 6553 frames, respectively. Table 5 shows the confusion matrix for the distracted driving front data set. We trained the CNN model on data from 11 participants and evaluated it on the data from the one remaining participant. The overall training accuracy of the CNN model was 99.7%. The average accuracy, calculated as the number of correct frames divided by the total number of frames, was 91.6%.
Table 5. Confusion matrix for the distracted driving front data set.
Figure 6 presents a sample result of detecting distracted driving.
Figure 6. Detection of distracted driving.
To compare the distracted driving detection methods, we used the Kaggle distracted driving data set for training and validation. This data set was captured by a camera mounted inside the car and contains a total of 22,424 training images and 79,726 testing images. The distracted driving behaviors in the Kaggle data set [] fall into 1 of 10 categories: “safe driving” (c0), “texting–right hand” (c1), “talking on the phone–right hand” (c2), “texting–left hand” (c3), “talking on the phone–left hand” (c4), “operating the radio” (c5), “drink” (c6), “reaching behind” (c7), “hair and makeup” (c8), and “talking to passengers” (c9). For each test image, our approach assigned a probability to each of the 10 categories. We compared the proposed distracted driving detection against several previous methods [,,]. Table 6 shows the comparison of various distracted driving detection techniques. In the proposed approach, the training accuracy and log loss were 98.3% and 0.17, respectively. The accuracy and log loss on the validation set were 97.5% and 0.11, respectively. These results demonstrate the superiority of our approach over existing methods.
Table 6. Comparison of the proposed distracted driving detection with previous methods.
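For reference, the multi-class log loss used in this evaluation can be computed with scikit-learn; the probability array below is a hypothetical stand-in for the model's softmax outputs:

```python
import numpy as np
from sklearn.metrics import log_loss

y_true = [6, 0, 2]                   # ground-truth classes c0..c9
y_prob = np.full((3, 10), 0.05)      # hypothetical softmax outputs
y_prob[np.arange(3), y_true] = 0.55  # each row sums to 1
print(log_loss(y_true, y_prob, labels=list(range(10))))
```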
Table 7 shows the confusion matrix for the Kaggle data set. The frame rates were approximately 140–170 and 40–70 fps for the PC and Jetson TX2, respectively. The results revealed that the proposed approach can achieve real-time detection on the Nvidia Jetson TX2 platform.
Table 7. Confusion matrix for the Kaggle data set.

5. Conclusions

We have proposed an approach for detecting driver fatigue and distracted driving. Our approach uses the face and posture information of the driver to determine the driving status. For driver fatigue detection, our approach uses six fatigue parameters and the YawDD data set for training. A random forest is then trained to determine whether the driver is fatigued. The average accuracies are 100% for both face and eye detection. Moreover, the average accuracy of eye opening and closing detection using the EAR metric is 94.5%. For classification, the random forest algorithm achieves an accuracy of 91%. The frequency of yawning is the most crucial parameter for detecting driver fatigue, which is closely related to the use of the YawDD data set for training. Other data sets could be added for training, and a smaller feature detection network could be used for further improvement. For distracted driving, our approach achieves real-time detection on the Nvidia Jetson TX2 platform. The results demonstrate that our approach performs better than the previous methods.
In future research, we aim to use a CNN to more accurately estimate where the driver is looking and to process eye information for detecting driver fatigue. Moreover, we aim to reduce the computational expense by using a smaller face detection network while maintaining adequate accuracy.

Author Contributions

Methodology, B.-T.D. and H.-Y.L.; Supervision, H.-Y.L. and C.-C.C.; Writing—original draft, B.-T.D.; Writing—review & editing, H.-Y.L. and C.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the Ministry of Science and Technology of Taiwan for financially supporting this research under Contract No. MOST 111-2221-E-239-027.

Institutional Review Board Statement

Some participants took part in the collection of the public databases, and they were informed that their facial expressions would be used for research. The participation of the remaining participants has been recorded in the authors’ database, and they were likewise informed that their facial expressions would be used for research. The authors also recruited a few of their students as drivers.

Data Availability Statement

Not applicable.

Acknowledgments

This paper is a revised and expanded version of a paper entitled “An on-board monitoring system for driving fatigue and distraction detection”, published in the Proceedings of the 22nd IEEE International Conference on Industrial Technology (ICIT), Valencia, Spain, 10–12 March 2021.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, K.W.; Yoon, H.S.; Song, J.M.; Park, K.R. Convolutional neural network-based classification of driver’s emotion during aggressive and smooth driving using multi-modal camera sensors. Sensors 2018, 18, 957.
  2. Lin, H.Y.; Dai, J.M.; Wu, L.T.; Chen, L.Q. A vision based driver assistance system with forward collision and overtaking detection. Sensors 2020, 20, 5139.
  3. Dong, B.T.; Lin, H.Y. An on-board monitoring system for driving fatigue and distraction detection. In Proceedings of the 22nd IEEE International Conference on Industrial Technology (ICIT), Valencia, Spain, 10–12 March 2021.
  4. Kashevnik, A.; Shchedrin, R.; Kaiser, C.; Stocker, A. Driver distraction detection methods: A literature review and framework. IEEE Access 2021, 9, 60063–60076.
  5. Distract CNN. Available online: https://github.com/nkkumawat/Driver-Distraction-Detection/branches (accessed on 5 January 2020).
  6. Zhang, S.; Zhu, X.; Lei, Z.; Shi, H.; Wang, X.; Li, S.Z. S3FD: Single shot scale-invariant face detector. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 192–201.
  7. Bulat, A.; Tzimiropoulos, G. How far are we from solving the 2D & 3D face alignment problem? (And a dataset of 230,000 3D facial landmarks). In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1021–1030.
  8. Li, Z.; Li, S.; Cheng, B.; Shi, J. Online detection of driver fatigue using steering wheel angles for real driving conditions. Sensors 2017, 17, 495.
  9. Mardi, Z.; Ashtiani, S.N.M.; Mikaili, M. EEG-based drowsiness detection for safe driving using chaotic features and statistical tests. J. Med. Signals Sens. 2011, 1, 130–137.
  10. Babaeian, M.; Mozumdar, M. Driver drowsiness detection algorithms using electrocardiogram data analysis. In Proceedings of the 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 7–9 January 2019; pp. 1–6.
  11. Salvati, L.; d’Amore, M.; Fiorentino, A.; Pellegrino, A.; Sena, P.; Villecco, F. On-road detection of driver fatigue and drowsiness during medium-distance journeys. Entropy 2021, 23, 135.
  12. Abbas, Q.; Ibrahim, M.E.A.; Khan, S.; Baig, A.R. Hypo-driver: A multiview driver fatigue and distraction level detection system. Comput. Mater. Contin. 2022, 71, 1999–2007.
  13. Danisman, T.; Bilasco, I.M.; Djeraba, C.; Ihaddadene, N. Drowsy driver detection system using eye blink patterns. In Proceedings of the 2010 International Conference on Machine and Web Intelligence, Algiers, Algeria, 3–5 October 2010; pp. 230–233.
  14. Abtahi, S.; Hariri, B.; Shirmohammadi, S. Driver drowsiness monitoring based on yawning detection. In Proceedings of the 2011 IEEE International Instrumentation and Measurement Technology Conference, Hangzhou, China, 10–12 May 2011.
  15. Savas, B.K.; Becerikli, Y. Real time driver fatigue detection based on SVM algorithm. In Proceedings of the 2018 6th International Conference on Control Engineering & Information Technology (CEIT), Istanbul, Turkey, 25–27 October 2018.
  16. Ou, W.; Shih, M.; Chang, C.; Yu, X.; Fan, C. Intelligent video-based drowsy driver detection system under various illuminations and embedded software implementation. In Proceedings of the 2015 IEEE International Conference on Consumer Electronics, Taipei, Taiwan, 6–8 June 2015; pp. 192–193.
  17. Dasgupta, A.; Rahman, D.; Routray, A. A smartphone-based drowsiness detection and warning system for automotive drivers. IEEE Trans. Intell. Transp. Syst. 2019, 20, 4045–4054.
  18. Qiao, Y.; Zeng, K.; Xu, L.; Yin, X. A smartphone-based driver fatigue detection using fusion of multiple real-time facial features. In Proceedings of the 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 9–12 January 2016; pp. 230–235.
  19. Galarza, E.E.; Egas, F.D.; Silva, F.M.; Velasco, P.M.; Galarza, E.D. Real time driver drowsiness detection based on driver’s face image behavior using a system of human computer interaction implemented in a smartphone. In Proceedings of the International Conference on Information Technology & Systems (ICITS 2018), Libertad City, Ecuador, 10–12 January 2018; pp. 563–572.
  20. Zhang, W.; Su, J. Driver yawning detection based on long short term memory networks. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017; pp. 1–5.
  21. Akrout, B.; Mahdi, W. Yawning detection by the analysis of variational descriptor for monitoring driver drowsiness. In Proceedings of the 2016 International Image Processing, Applications and Systems (IPAS), Hammamet, Tunisia, 5–7 November 2016; pp. 1–5.
  22. Abouelnaga, Y.; Eraqi, H.M.; Moustafa, M.N. Real-time distracted driver posture classification. arXiv 2017, arXiv:1706.09498.
  23. Baheti, B.; Gajre, S.; Talbar, S. Detection of distracted driver using convolutional neural network. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1145–1151.
  24. Kose, N.; Kopuklu, O.; Unnervik, A.; Rigoll, G. Real-time driver state monitoring using a CNN based spatio-temporal approach. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 3236–3242.
  25. Jain, A.; Koppula, H.S.; Raghavan, B.; Soh, S.; Saxena, A. Car that knows before you do: Anticipating maneuvers via learning temporal driving models. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 3182–3190.
  26. Chawan, P.M.; Satardekar, S.; Shah, D.; Badugu, R.; Pawar, A. Distracted driver detection and classification. Int. J. Eng. Res. Appl. 2018, 8, 51–55.
  27. Majdi, M.S.; Ram, S.; Gill, J.T.; Rodríguez, J.J. Drive-Net: Convolutional network for driver distraction detection. In Proceedings of the 2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), Las Vegas, NV, USA, 8–10 April 2018; pp. 1–4.
  28. Moslemi, N.; Azmi, R.; Soryani, M. Driver distraction recognition using 3D convolutional neural networks. In Proceedings of the 2019 4th International Conference on Pattern Recognition and Image Analysis (IPRIA), Tehran, Iran, 6–7 March 2019; pp. 145–151.
  29. Anber, S.; Alsaggaf, W.; Shalash, W. A hybrid driver fatigue and distraction detection model using AlexNet based on facial features. Electronics 2022, 11, 285.
  30. Newell, A.; Yang, K.; Deng, J. Stacked hourglass networks for human pose estimation. In Proceedings of the 14th European Conference on Computer Vision (ECCV 2016), Amsterdam, The Netherlands, 11–14 October 2016; pp. 483–499.
  31. King, D.E. Dlib-ml: A machine learning toolkit. J. Mach. Learn. Res. 2009, 10, 1755–1758.
  32. Rogalska, A.; Rynkiewicz, F.; Daszuta, M.; Guzek, K.; Napieralski, P. Blinking extraction in eye gaze system for stereoscopy movies. Open Phys. 2019, 17, 512–518.
  33. Relangi, S.; Nilesh, M.; Kumar, K.; Naveen, A. Full length driver drowsiness detection model—Utilising driver specific judging parameters. In Proceedings of the International Conference on Intelligent Manufacturing and Energy Sustainability (ICIMES 2019), Hyderabad, India, 21–22 June 2019; pp. 791–798.
  34. Abtahi, S.; Omidyeganeh, M.; Shirmohammadi, S.; Hariri, B. YawDD: A yawning detection dataset. In Proceedings of the 5th ACM Multimedia Systems Conference, Singapore, 19–21 March 2014; pp. 24–28.
  35. Moujahid, A.; Dornaika, F.; Arganda-Carreras, I.; Reta, J. Efficient and compact face descriptor for driver drowsiness detection. Expert Syst. Appl. 2021, 168, 114334.
  36. Bakheet, S.; Al-Hamadi, A. A framework for instantaneous driver drowsiness detection based on improved HOG features and Naïve Bayesian classification. Brain Sci. 2021, 11, 240.
  37. 10 Classes. Available online: https://www.kaggle.com/competitions/state-farm-distracted-driver-detection/data (accessed on 5 January 2020).
  38. Zhang, B. Apply and compare different classical image classification method: Detect distracted driver. In CS 229 Project Report; Stanford University: Stanford, CA, USA, 2016.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
