Search Results (6)

Search Parameters:
Keywords = Mouth Aspect Ratio (MAR)

19 pages, 3470 KB  
Article
Driver Monitoring System Using Computer Vision for Real-Time Detection of Fatigue, Distraction and Emotion via Facial Landmarks and Deep Learning
by Tamia Zambrano, Luis Arias, Edgar Haro, Victor Santos and María Trujillo-Guerrero
Sensors 2026, 26(3), 889; https://doi.org/10.3390/s26030889 - 29 Jan 2026
Abstract
Car accidents remain a leading cause of death worldwide, with drowsiness and distraction accounting for roughly 25% of fatal crashes in Ecuador. This study presents a real-time driver monitoring system that uses computer vision and deep learning to detect fatigue, distraction, and emotions from facial expressions. It combines a MobileNetV2-based CNN trained on RAF-DB for emotion recognition with MediaPipe’s 468 facial landmarks to compute the Eye Aspect Ratio (EAR), the Mouth Aspect Ratio (MAR), gaze, and head pose. Tests with 27 participants in both real and simulated driving environments showed strong results: 100% accuracy in detecting distraction, 85.19% for yawning, and 88.89% for eye closure. The system also effectively recognized happiness (100%) and anger/disgust (96.3%). However, it struggled with sadness and failed to detect fear, likely due to the subtlety of real-world expressions and limitations in the training dataset. Despite these challenges, the results highlight the importance of integrating emotional awareness into driver monitoring systems, which helps reduce false alarms and improve response accuracy. This work supports the development of lightweight, non-invasive technologies that enhance driving safety through intelligent behavior analysis.
(This article belongs to the Special Issue Sensor Fusion for the Safety of Automated Driving Systems)
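Under the widely used six-landmark formulation, EAR and MAR reduce to the same geometric quantity: the mean vertical gap divided by the horizontal span. A minimal sketch with hypothetical landmark coordinates (MediaPipe’s actual 468-point index mapping is omitted; the six-point ordering below is the common EAR convention, an assumption here):

```python
import math


def aspect_ratio(pts):
    """Generic aspect ratio: mean of two vertical gaps over the horizontal span.

    pts holds six (x, y) landmarks ordered p1..p6, where p1/p4 are the
    horizontal corners and (p2, p6), (p3, p5) are the vertical pairs.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    p1, p2, p3, p4, p5, p6 = pts
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)


# Hypothetical landmarks for an open eye (or mouth); not real MediaPipe output
open_eye = [(0, 0), (1, -1), (2, -1), (3, 0), (2, 1), (1, 1)]
ear = aspect_ratio(open_eye)
```

The same function serves for MAR by passing mouth landmarks instead; a closed eye or mouth drives the ratio toward zero, which is what threshold-based fatigue checks exploit.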

25 pages, 2630 KB  
Article
Lightweight and Real-Time Driver Fatigue Detection Based on MG-YOLOv8 with Facial Multi-Feature Fusion
by Chengming Chen, Xinyue Liu, Meng Zhou, Zhijian Li, Zhanqi Du and Yandan Lin
J. Imaging 2025, 11(11), 385; https://doi.org/10.3390/jimaging11110385 - 1 Nov 2025
Cited by 1 | Viewed by 983
Abstract
Driver fatigue is a primary factor in traffic accidents and poses a serious threat to road safety. To address this issue, this paper proposes a multi-feature fusion fatigue detection method based on an improved YOLOv8 model. First, the method uses an enhanced YOLOv8 model to achieve high-precision face detection. Then, it crops the detected face regions. Next, the lightweight PFLD (Practical Facial Landmark Detector) model performs keypoint detection on the cropped images, extracting 68 facial feature points and calculating key indicators related to fatigue status. These indicators include the eye aspect ratio (EAR), eyelid closure percentage (PERCLOS), mouth aspect ratio (MAR), and head posture ratio (HPR). To mitigate the impact of individual differences on detection accuracy, the paper introduces a novel sliding window model that combines a dynamic threshold adjustment strategy with an exponential weighted moving average (EWMA) algorithm. Based on this framework, blink frequency (BF), yawn frequency (YF), and nod frequency (NF) are calculated to extract time-series behavioral features related to fatigue. Finally, the driver’s fatigue state is determined using a comprehensive fatigue assessment algorithm. Experimental results on the WIDER FACE and YAWDD datasets demonstrate this method’s significant advantages in improving detection accuracy and computational efficiency. By striking a better balance between real-time performance and accuracy, the proposed method shows promise for real-world driving applications.
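The idea of pairing an EWMA with a dynamic threshold, as this paper does to absorb individual differences, could be sketched roughly as follows. The smoothing factor and margin are illustrative guesses, not the authors’ values:

```python
class EwmaThreshold:
    """Per-driver adaptive baseline: an exponentially weighted moving average
    of a feature (e.g. EAR), with an event flagged when a new sample drops
    below the baseline by a fixed margin. alpha and margin are illustrative.
    """

    def __init__(self, alpha=0.1, margin=0.05):
        self.alpha = alpha
        self.margin = margin
        self.ewma = None

    def update(self, value):
        # Fold the new sample into the running average, then test it
        # against the adapted baseline.
        if self.ewma is None:
            self.ewma = value
        else:
            self.ewma = self.alpha * value + (1 - self.alpha) * self.ewma
        return value < self.ewma - self.margin


det = EwmaThreshold()
signal = [0.30] * 20 + [0.10]  # steady open-eye EAR, then a sudden closure
flags = [det.update(v) for v in signal]
```

Because the baseline adapts to each driver’s resting EAR, a driver with naturally narrow eyes is not flagged permanently, which is the point of the dynamic threshold over a fixed cutoff.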

10 pages, 1224 KB  
Proceeding Paper
Multi-Feature Long Short-Term Memory Facial Recognition for Real-Time Automated Drowsiness Observation of Automobile Drivers with Raspberry Pi 4
by Michael Julius R. Moredo, James Dion S. Celino and Joseph Bryan G. Ibarra
Eng. Proc. 2025, 92(1), 52; https://doi.org/10.3390/engproc2025092052 - 6 May 2025
Viewed by 1408
Abstract
We developed a multi-feature drowsiness detection model employing eye aspect ratio (EAR), mouth aspect ratio (MAR), head pose angles (yaw, pitch, and roll), and a Raspberry Pi 4 for real-time applications. The model was trained on the NTHU-DDD dataset and optimized using long short-term memory (LSTM) deep learning algorithms implemented using TensorFlow version 2.14.0. The model enabled robust drowsiness detection at a rate of 10 frames per second (FPS). The system embedded with the model was constructed for live image capture. The camera placement was adjusted for optimal positioning in the system. Various features were determined under diverse conditions (day, night, and with and without glasses). After training, the model showed an accuracy of 95.23%, while the accuracy ranged from 91.81 to 95.82% in validation. In stationary and moving vehicles, the detection accuracy ranged between 51.85 and 85.71%. Single-feature configurations exhibited an accuracy of 51.85 to 72.22%, while dual-feature configurations ranged from 66.67 to 75%. An accuracy of 80.95 to 85.71% was attained with the integration of all features. Challenges in drowsiness detection included diminished accuracy with MAR alone and delayed prediction during transitions from non-drowsy to drowsy status. These findings underscore the model’s applicability in detecting drowsiness while highlighting the necessity for refinement. Through algorithm optimization, dataset expansion, and the integration of additional features and feedback mechanisms, the model can be improved in terms of performance and reliability.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
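An LSTM consumes fixed-length sequences of per-frame feature vectors (here EAR, MAR, yaw, pitch, roll), so the raw frame stream must first be sliced into windows. A minimal windowing sketch; the window length is illustrative, not taken from the paper:

```python
def make_windows(frames, window=10, step=1):
    """Slice a stream of per-frame feature vectors into overlapping
    fixed-length sequences of shape (window, n_features), suitable as
    LSTM input. window and step are illustrative values.
    """
    return [frames[i:i + window]
            for i in range(0, len(frames) - window + 1, step)]


# Hypothetical per-frame features: (EAR, MAR, yaw, pitch, roll)
frames = [(0.3, 0.2, 0.0, 0.0, 0.0)] * 12
windows = make_windows(frames)
```

At the paper’s 10 FPS, a window of 10 frames corresponds to one second of behavior, which is one plausible reason to window at that granularity.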

27 pages, 22106 KB  
Article
A Real-Time Embedded System for Driver Drowsiness Detection Based on Visual Analysis of the Eyes and Mouth Using Convolutional Neural Network and Mouth Aspect Ratio
by Ruben Florez, Facundo Palomino-Quispe, Ana Beatriz Alvarez, Roger Jesus Coaquira-Castillo and Julio Cesar Herrera-Levano
Sensors 2024, 24(19), 6261; https://doi.org/10.3390/s24196261 - 27 Sep 2024
Cited by 17 | Viewed by 9917
Abstract
Currently, the number of vehicles in circulation continues to increase steadily, leading to a parallel increase in vehicular accidents. Among the many causes of these accidents, human factors such as driver drowsiness play a fundamental role. In this context, one solution to address the challenge of drowsiness detection is to anticipate drowsiness by alerting drivers in a timely and effective manner. Thus, this paper presents a Convolutional Neural Network (CNN)-based approach for drowsiness detection by analyzing the eye region and Mouth Aspect Ratio (MAR) for yawning detection. As part of this approach, endpoint delineation is optimized for extraction of the region of interest (ROI) around the eyes. An NVIDIA Jetson Nano-based device and near-infrared (NIR) camera are used for real-time applications. A Driver Drowsiness Artificial Intelligence (DD-AI) architecture is proposed for the eye state detection procedure. In a performance analysis, the results of the proposed approach were compared with architectures based on InceptionV3, VGG16, and ResNet50V2. Night-Time Yawning–Microsleep–Eyeblink–Driver Distraction (NITYMED) was used for training, validation, and testing of the architectures. The proposed DD-AI network achieved an accuracy of 99.88% with the NITYMED test data, proving superior to the other networks. In the hardware implementation, tests were conducted in a real environment, yielding an average accuracy of 96.55% at 14 fps for the DD-AI network, thereby confirming its superior performance.
(This article belongs to the Special Issue Applications of Sensors Based on Embedded Systems)
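Yawning detection from MAR is typically a run-length test: the mouth must stay open beyond a threshold for a minimum number of consecutive frames, so that speech or brief openings are not counted. A sketch with assumed threshold and duration values (the abstract does not state the paper’s):

```python
def detect_yawns(mar_series, threshold=0.6, min_frames=15):
    """Count yawns as runs of MAR above a threshold lasting at least
    min_frames consecutive frames. Both parameters are illustrative.
    """
    yawns = 0
    run = 0
    for mar in mar_series:
        if mar > threshold:
            run += 1
        else:
            if run >= min_frames:
                yawns += 1
            run = 0
    if run >= min_frames:  # run still open at end of series
        yawns += 1
    return yawns


# Hypothetical MAR trace: closed mouth, one sustained yawn, closed mouth
series = [0.2] * 10 + [0.8] * 20 + [0.2] * 10
count = detect_yawns(series)
```

At this article’s reported 14 fps, 15 frames corresponds to roughly one second of open mouth, a plausible yawn duration floor.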

26 pages, 1060 KB  
Article
Detection of Drowsiness among Drivers Using Novel Deep Convolutional Neural Network Model
by Fiaz Majeed, Umair Shafique, Mejdl Safran, Sultan Alfarhood and Imran Ashraf
Sensors 2023, 23(21), 8741; https://doi.org/10.3390/s23218741 - 26 Oct 2023
Cited by 29 | Viewed by 9203
Abstract
Detecting drowsiness among drivers is critical for ensuring road safety and preventing accidents caused by drowsy or fatigued driving. Research on yawn detection among drivers has great significance in improving traffic safety. Although various studies have proposed deep learning-based approaches, there is still room to develop better and more accurate drowsiness detection systems using behavioral features such as mouth and eye movement. This study proposes a deep neural network architecture employing a convolutional neural network (CNN) for driver drowsiness detection. Experiments involve using the DLIB library to locate key facial points to calculate the mouth aspect ratio (MAR). To compensate for the small dataset, data augmentation is performed for the ‘yawning’ and ‘no_yawning’ classes. Models are trained and tested on the original and augmented datasets to analyze the impact on model performance. Experimental results demonstrate that the proposed CNN model achieves an average accuracy of 96.69%. Performance comparison with existing state-of-the-art approaches shows better performance of the proposed model.
(This article belongs to the Special Issue Fault-Tolerant Sensing Paradigms for Autonomous Vehicles)
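With dlib’s 68-point (iBUG) landmark layout, MAR is commonly computed from the inner-lip points, indices 60–67, with 60 and 64 as the mouth corners. A sketch of that convention (the abstract does not state the paper’s exact point choice, so the index selection here is an assumption):

```python
import math


def mouth_aspect_ratio(landmarks):
    """MAR over the inner-lip points of the 68-point iBUG layout:
    three vertical gaps (61-67, 62-66, 63-65) averaged and divided by
    the corner-to-corner width (60-64).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    vertical = (dist(landmarks[61], landmarks[67])
                + dist(landmarks[62], landmarks[66])
                + dist(landmarks[63], landmarks[65]))
    horizontal = dist(landmarks[60], landmarks[64])
    return vertical / (3.0 * horizontal)


# Hypothetical inner-lip coordinates for a wide-open mouth; not dlib output
pts = [(0, 0)] * 68
pts[60], pts[64] = (0, 0), (4, 0)    # mouth corners
pts[61], pts[67] = (1, -1), (1, 1)   # vertical pair 1
pts[62], pts[66] = (2, -1), (2, 1)   # vertical pair 2
pts[63], pts[65] = (3, -1), (3, 1)   # vertical pair 3
mar = mouth_aspect_ratio(pts)
```

In practice the landmark list would come from `dlib.shape_predictor` applied to a detected face rectangle; only the geometry is shown here.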

15 pages, 20767 KB  
Article
Research on a Real-Time Driver Fatigue Detection Algorithm Based on Facial Video Sequences
by Tianjun Zhu, Chuang Zhang, Tunglung Wu, Zhuang Ouyang, Houzhi Li, Xiaoxiang Na, Jianguo Liang and Weihao Li
Appl. Sci. 2022, 12(4), 2224; https://doi.org/10.3390/app12042224 - 21 Feb 2022
Cited by 68 | Viewed by 14243
Abstract
The research on driver fatigue detection is of great significance to improve driving safety. This paper proposes a real-time comprehensive driver fatigue detection algorithm based on facial landmarks to improve detection accuracy, which detects the driver’s fatigue status from facial video sequences without requiring drivers to wear additional devices. A tasks-constrained deep convolutional network is constructed to detect the face region based on 68 key points, which can solve the optimization problem caused by the different convergence speeds of each task. From the real-time facial video images, the eye aspect ratio (EAR), mouth aspect ratio (MAR), and percentage of eye closure time (PERCLOS) are calculated based on facial landmarks. A comprehensive driver fatigue assessment model is established to assess the fatigue status of drivers through eye/mouth feature selection. After a series of comparative experiments, the results show that this proposed algorithm achieves good performance in both accuracy and speed for driver fatigue detection.
(This article belongs to the Special Issue Human-Computer Interactions)
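PERCLOS, used here alongside EAR and MAR, is simply the fraction of frames within a window in which the eye is judged closed (EAR below a cutoff). A minimal sketch; the 0.2 cutoff is a common illustrative choice, not necessarily this paper’s:

```python
def perclos(ear_series, closed_threshold=0.2):
    """Fraction of frames in the window where EAR indicates a closed eye.
    closed_threshold is an assumed illustrative value.
    """
    closed = sum(1 for ear in ear_series if ear < closed_threshold)
    return closed / len(ear_series)


# Hypothetical one-window EAR trace with three closed-eye frames
window = [0.30, 0.28, 0.10, 0.05, 0.31, 0.29, 0.12, 0.30, 0.27, 0.33]
score = perclos(window)
```

A fatigue assessment model like the one described would then compare this score against a drowsiness criterion (PERCLOS above some percentage over a sufficiently long window).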