Search Results (60)

Search Parameters:
Keywords = eye blink detection

20 pages, 4100 KiB  
Protocol
Automated Analysis Pipeline for Extracting Saccade, Pupil, and Blink Parameters Using Video-Based Eye Tracking
by Brian C. Coe, Jeff Huang, Donald C. Brien, Brian J. White, Rachel Yep and Douglas P. Munoz
Vision 2024, 8(1), 14; https://doi.org/10.3390/vision8010014 - 18 Mar 2024
Viewed by 237
Abstract
The tremendous increase in the use of video-based eye tracking has made it possible to collect eye tracking data from thousands of participants. The traditional procedures for the manual detection and classification of saccades and for trial categorization (e.g., correct vs. incorrect) are not viable for the large datasets being collected. Additionally, video-based eye trackers allow for the analysis of pupil responses and blink behaviors. Here, we present a detailed description of our pipeline for collecting, storing, and cleaning data, as well as for organizing participant codes, which are fairly lab-specific but are nonetheless important precursory steps in establishing standardized pipelines. More importantly, we also include descriptions of the automated detection and classification of saccades, blinks, “blincades” (blinks occurring during saccades), and boomerang saccades (two nearly simultaneous saccades in opposite directions where speed-based algorithms fail to split them). These steps are almost entirely task-agnostic and can be used on a wide variety of data. We additionally describe novel findings regarding post-saccadic oscillations and provide a method to achieve more accurate estimates for saccade end points. Lastly, we describe the automated behavior classification for the interleaved pro/anti-saccade task (IPAST), a task that probes voluntary and inhibitory control. This pipeline was evaluated using data collected from 592 human participants between 5 and 93 years of age, making it robust enough to handle large clinical patient datasets. In summary, this pipeline has been optimized to consistently handle large datasets obtained from diverse study cohorts (i.e., developmental, aging, clinical) and collected across multiple laboratory sites. Full article
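The speed-based saccade detection the abstract refers to can be sketched as a simple velocity-threshold pass over the gaze trace. The function name, the 30 deg/s threshold, and the synthetic trace below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def detect_saccades(gaze_deg, fs_hz, vel_thresh=30.0):
    """Return (start, end) sample indices of candidate saccades."""
    velocity = np.abs(np.gradient(gaze_deg) * fs_hz)  # deg/s
    fast = velocity > vel_thresh
    edges = np.diff(fast.astype(int))                 # rising/falling edges of the mask
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if fast[0]:
        starts = np.r_[0, starts]
    if fast[-1]:
        ends = np.r_[ends, len(fast)]
    return list(zip(starts, ends))

# Synthetic 1 kHz trace: fixation, a rapid 10-degree shift, fixation again.
trace = np.concatenate([np.zeros(100), np.linspace(0, 10, 20), np.full(100, 10.0)])
saccades = detect_saccades(trace, fs_hz=1000)
print(saccades)
```

A purely velocity-based pass like this is exactly what fails on the "boomerang" case the abstract describes, since two opposing saccades can form one uninterrupted high-speed interval.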

12 pages, 6691 KiB  
Article
A Flexible Wearable Sensor Based on Laser-Induced Graphene for High-Precision Fine Motion Capture for Pilots
by Xiaoqing Xing, Yao Zou, Mian Zhong, Shichen Li, Hongyun Fan, Xia Lei, Juhang Yin, Jiaqing Shen, Xinyi Liu, Man Xu, Yong Jiang, Tao Tang, Yu Qian and Chao Zhou
Sensors 2024, 24(4), 1349; https://doi.org/10.3390/s24041349 - 19 Feb 2024
Viewed by 578
Abstract
There has been a significant shift in research focus in recent years toward laser-induced graphene (LIG), which is a high-performance material with immense potential for use in energy storage, ultrahydrophobic water applications, and electronic devices. In particular, LIG has demonstrated considerable potential in the field of high-precision human motion posture capture using flexible sensing materials. In this study, we investigated the surface morphology evolution and performance of LIG formed by varying the laser energy accumulation times. Further, to capture human motion posture, we evaluated the performance of highly accurate flexible wearable sensors based on LIG. The experimental results showed that the sensors prepared using LIG exhibited exceptional flexibility and mechanical performance when the laser energy accumulation was optimized three times. They exhibited remarkable attributes, such as high sensitivity (~41.4), a low detection limit (0.05%), a rapid time response (response time of ~150 ms; relaxation time of ~100 ms), and excellent response stability even after 2000 s at a strain of 1.0% or 8.0%. These findings unequivocally show that flexible wearable sensors based on LIG have significant potential for capturing human motion posture, wrist pulse rates, and eye blinking patterns. Moreover, the sensors can capture various physiological signals from pilots in real time. Full article

16 pages, 2454 KiB  
Article
Application Specific Reconfigurable Processor for Eyeblink Detection from Dual-Channel EOG Signal
by Diba Das, Mehdi Hasan Chowdhury, Aditta Chowdhury, Kamrul Hasan, Quazi Delwar Hossain and Ray C. C. Cheung
J. Low Power Electron. Appl. 2023, 13(4), 61; https://doi.org/10.3390/jlpea13040061 - 23 Nov 2023
Viewed by 1443
Abstract
The electrooculogram (EOG) is one of the most significant signals carrying eye movement information, such as blinks and saccades. There are many human–computer interface (HCI) applications based on eye blinks. For example, the detection of eye blinks can be useful for paralyzed people in controlling wheelchairs. Eye blink features from EOG signals can be useful in drowsiness detection. In some applications of electroencephalograms (EEGs), eye blinks are considered noise. The accurate detection of eye blinks can help achieve denoised EEG signals. In this paper, we aimed to design an application-specific reconfigurable binary EOG signal processor to classify blinks and saccades. This work used dual-channel EOG signals containing horizontal and vertical EOG signals. At first, the EOG signals were preprocessed, and then, by extracting only two features, the root mean square (RMS) and standard deviation (STD), blink and saccades were classified. In the classification stage, 97.5% accuracy was obtained using a support vector machine (SVM) at the simulation level. Further, we implemented the system on Xilinx Zynq-7000 FPGAs by hardware/software co-design. The processing was entirely carried out using a hybrid serial–parallel technique for low-power hardware optimization. The overall hardware accuracy for detecting blinks was 95%. The on-chip power consumption for this design was 0.8 watts, whereas the dynamic power was 0.684 watts (86%), and the static power was 0.116 watts (14%). Full article
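The two features the abstract names, root mean square (RMS) and standard deviation (STD), can be computed per window as in this minimal sketch. The window length and the toy signals are invented for illustration; in the paper these features feed an SVM classifier.

```python
import numpy as np

def window_features(signal, win):
    """Return an (n_windows, 2) array of [RMS, STD] computed per window."""
    n = len(signal) // win
    feats = []
    for i in range(n):
        w = signal[i * win:(i + 1) * win]
        feats.append((np.sqrt(np.mean(w ** 2)), np.std(w)))
    return np.array(feats)

# Toy vertical-EOG segments: a flat baseline and a blink-like pulse.
flat = np.zeros(100)
blink = np.concatenate([np.zeros(40), np.hanning(20) * 300.0, np.zeros(40)])
features = np.vstack([window_features(flat, 100), window_features(blink, 100)])
print(features)
```

Both features jump sharply for the blink-like window, which is what makes such a low-dimensional representation attractive for a small reconfigurable hardware processor.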

14 pages, 2763 KiB  
Article
Design and Usability Study of a Point of Care mHealth App for Early Dry Eye Screening and Detection
by Sydney Zhang and Julio Echegoyen
J. Clin. Med. 2023, 12(20), 6479; https://doi.org/10.3390/jcm12206479 - 12 Oct 2023
Viewed by 909
Abstract
Significantly increased eye blink rate and partial blinks have been well documented in patients with dry eye disease (DED), a multifactorial eye disorder with few effective methods for clinical diagnosis. In this study, a point of care mHealth App named “EyeScore” was developed, utilizing blink rate and patterns as early clinical biomarkers for DED. EyeScore utilizes an iPhone for a 1-min in-app recording of eyelid movements. The use of facial landmarks, eye aspect ratio (EAR) and derivatives enabled a comprehensive analysis of video frames for the determination of eye blink rate and partial blink counts. Smartphone videos from ten DED patients and ten non-DED controls were analyzed to optimize EAR-based thresholds, with eye blink and partial blink results in excellent agreement with manual counts. Importantly, a clinically relevant algorithm for the calculation of “eye healthiness score” was created, which took into consideration eye blink rate, partial blink counts as well as other demographic and clinical risk factors for DED. This 10-point score can be conveniently measured anytime in a non-invasive manner and successfully led to the identification of three individuals with DED conditions from ten non-DED controls. Thus, EyeScore can be validated as a valuable mHealth App for early DED screening, detection and treatment monitoring. Full article
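The eye aspect ratio (EAR) mentioned in the abstract is commonly defined over six eye landmarks (Soukupová and Čech); this sketch uses that common definition with made-up landmark coordinates, not EyeScore's actual code.

```python
import numpy as np

def eye_aspect_ratio(p):
    """p: (6, 2) array of eye landmarks ordered p1..p6 around the eye outline."""
    v1 = np.linalg.norm(p[1] - p[5])   # ||p2 - p6||, vertical distance
    v2 = np.linalg.norm(p[2] - p[4])   # ||p3 - p5||, vertical distance
    h = np.linalg.norm(p[0] - p[3])    # ||p1 - p4||, horizontal distance
    return (v1 + v2) / (2.0 * h)

# Invented landmark coordinates for an open and a nearly closed eye.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], float)
ear_open, ear_closed = eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye)
print(ear_open, ear_closed)
```

The EAR drops toward zero as the lids close, so a single threshold (often around 0.2) separates open from closed frames, and intermediate values can flag the partial blinks the study counts.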

18 pages, 4164 KiB  
Article
BlinkLinMulT: Transformer-Based Eye Blink Detection
by Ádám Fodor, Kristian Fenech and András Lőrincz
J. Imaging 2023, 9(10), 196; https://doi.org/10.3390/jimaging9100196 - 26 Sep 2023
Viewed by 2111
Abstract
This work presents BlinkLinMulT, a transformer-based framework for eye blink detection. While most existing approaches rely on frame-wise eye state classification, recent advancements in transformer-based sequence models have not been explored in the blink detection literature. Our approach effectively combines low- and high-level feature sequences with linear complexity cross-modal attention mechanisms and addresses challenges such as lighting changes and a wide range of head poses. Our work is the first to leverage the transformer architecture for blink presence detection and eye state recognition while successfully implementing an efficient fusion of input features. In our experiments, we utilized several publicly available benchmark datasets (CEW, ZJU, MRL Eye, RT-BENE, EyeBlink8, Researcher’s Night, and TalkingFace) to extensively show the state-of-the-art performance and generalization capability of our trained model. We hope the proposed method can serve as a new baseline for further research. Full article
(This article belongs to the Section Image and Video Processing)

12 pages, 1861 KiB  
Article
Face Keypoint Detection Method Based on Blaze_ghost Network
by Ning Yu, Yongping Tian, Xiaochuan Zhang and Xiaofeng Yin
Appl. Sci. 2023, 13(18), 10385; https://doi.org/10.3390/app131810385 - 17 Sep 2023
Cited by 1 | Viewed by 866
Abstract
The accuracy and speed of facial keypoint detection are crucial factors for effectively extracting fatigue features, such as eye blinking and yawning. This paper focuses on the improvement and optimization of facial keypoint detection algorithms, presenting a facial keypoint detection method based on the Blaze_ghost network and providing more reliable support for facial fatigue analysis. Firstly, the Blaze_ghost network is designed as the backbone network with a deeper structure and more parameters to better capture facial detail features, improving the accuracy of keypoint localization. Secondly, HuberWingloss is designed as the loss function to further reduce the training difficulty of the model and enhance its generalization ability. Compared to traditional loss functions, HuberWingloss can reduce the interference of outliers (such as noise and occlusion) in model training, improve the model’s robustness to complex situations, and further enhance the accuracy of keypoint detection. Experimental results show that the proposed method achieves significant improvements in both the NME (Normalized Mean Error) and FR (Failure Rate) evaluation metrics. Compared to traditional methods, the proposed model demonstrates a considerable improvement in keypoint localization accuracy while still maintaining high detection efficiency. Full article
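The HuberWingloss itself is not specified in this listing, so as a hedged illustration here is the standard Wing loss it presumably builds on: logarithmic for small landmark errors, linear for large ones. The parameters w and eps are the usual defaults from the Wing loss paper (Feng et al.), not values from this article.

```python
import numpy as np

def wing_loss(err, w=10.0, eps=2.0):
    """Elementwise Wing loss: logarithmic for |err| < w, linear beyond."""
    x = np.abs(err)
    c = w - w * np.log(1.0 + w / eps)  # offset that joins the two branches at |x| = w
    return np.where(x < w, w * np.log(1.0 + x / eps), x - c)

losses = wing_loss(np.array([0.0, 0.5, 50.0]))
print(losses)
```

The log branch amplifies gradients for small errors (sharpening landmark localization), while the linear branch caps the influence of outliers such as occluded points, which is the robustness property the abstract emphasizes.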

13 pages, 5232 KiB  
Article
Flexible Wearable Strain Sensors Based on Laser-Induced Graphene for Monitoring Human Physiological Signals
by Yao Zou, Mian Zhong, Shichen Li, Zehao Qing, Xiaoqing Xing, Guochong Gong, Ran Yan, Wenfeng Qin, Jiaqing Shen, Huazhong Zhang, Yong Jiang, Zhenhua Wang and Chao Zhou
Polymers 2023, 15(17), 3553; https://doi.org/10.3390/polym15173553 - 26 Aug 2023
Cited by 21 | Viewed by 1633
Abstract
Flexible wearable strain sensors based on laser-induced graphene (LIG) have attracted significant interest due to their simple preparation process, three-dimensional porous structure, excellent electromechanical characteristics, and remarkable mechanical robustness. In this study, we demonstrated that LIG with various defects could be prepared on the surface of polyimide (PI) film, patterned in a single step by adjusting the scanning speed while maintaining a constant laser power of 12.4 W, and subjected to two repeated scans under ambient air conditions. The results indicated that LIG produced at a scanning speed of 70 mm/s exhibited an obvious stacked honeycomb micropore structure, and the flexible strain sensor fabricated with this material demonstrated stable resistance. The sensor exhibited high sensitivity within a low strain range of 0.4–8.0%, with the gauge factor (GF) reaching 107.8. The sensor demonstrated excellent stability and repeatable response at a strain of 2% after approximately 1000 repetitions. The flexible wearable LIG-based sensor with a serpentine bending structure could be used to detect various physiological signals, including pulse, finger bending, back of the hand relaxation and gripping, blinking eyes, smiling, drinking water, and speaking. The results of this study may serve as a reference for future applications in health monitoring, medical rehabilitation, and human–computer interactions. Full article
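The gauge factor (GF) quoted above is defined as the relative resistance change divided by the applied strain, GF = (ΔR/R0)/ε. A minimal arithmetic sketch with invented resistance values:

```python
def gauge_factor(r0_ohm, r_ohm, strain):
    """GF = (relative resistance change) / (applied strain)."""
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

# Invented example: resistance rising from 1000 Ω to 3156 Ω at 2% strain
# reproduces the GF ≈ 107.8 quoted in the abstract.
gf = gauge_factor(1000.0, 3156.0, 0.02)
print(round(gf, 1))
```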

15 pages, 4815 KiB  
Article
Real-Time Deep Learning-Based Drowsiness Detection: Leveraging Computer-Vision and Eye-Blink Analyses for Enhanced Road Safety
by Furkat Safarov, Farkhod Akhmedov, Akmalbek Bobomirzaevich Abdusalomov, Rashid Nasimov and Young Im Cho
Sensors 2023, 23(14), 6459; https://doi.org/10.3390/s23146459 - 17 Jul 2023
Cited by 9 | Viewed by 3223
Abstract
Drowsy driving can significantly affect driving performance and overall road safety. Statistically, the main causes are decreased alertness and attention of the drivers. The combination of deep learning and computer-vision algorithm applications has been proven to be one of the most effective approaches for the detection of drowsiness. Robust and accurate drowsiness detection systems can be developed by leveraging deep learning to learn complex coordinate patterns using visual data. Deep learning algorithms have emerged as powerful techniques for drowsiness detection because of their ability to learn automatically from given inputs and feature extractions from raw data. Eye-blinking-based drowsiness detection was applied in this study, which utilized the analysis of eye-blink patterns. In this study, we used custom data for model training and experimental results were obtained for different candidates. The blinking of the eye and mouth region coordinates were obtained by applying landmarks. The rate of eye-blinking and changes in the shape of the mouth were analyzed using computer-vision techniques by measuring eye landmarks with real-time fluctuation representations. An experimental analysis was performed in real time and the results proved the existence of a correlation between yawning and closed eyes, classified as drowsy. The overall performance of the drowsiness detection model was 95.8% accuracy for drowsy-eye detection, 97% for open-eye detection, 0.84% for yawning detection, 0.98% for right-sided falling, and 100% for left-sided falling. Furthermore, the proposed method allowed a real-time eye rate analysis, where the threshold served as a separator of the eye into two classes, the “Open” and “Closed” states. Full article
(This article belongs to the Section Biomedical Sensors)
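The frame-wise “Open”/“Closed” thresholding described above reduces blink counting to finding open-to-closed transitions in a per-frame eye-openness trace. The 0.2 threshold and the toy trace below are illustrative assumptions, not the authors' trained model.

```python
def count_blinks(openness, thresh=0.2):
    """Count open-to-closed transitions in a frame-wise eye-openness trace."""
    closed = [v < thresh for v in openness]
    # A blink starts wherever a closed frame follows an open (or initial) frame.
    return sum(1 for prev, cur in zip([False] + closed, closed) if cur and not prev)

trace = [0.3, 0.31, 0.05, 0.04, 0.3, 0.32, 0.06, 0.3]  # two closed episodes
print(count_blinks(trace))
```

Counting transitions rather than closed frames keeps a long eye closure from being counted as many blinks, which matters when blink rate is the drowsiness indicator.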

17 pages, 8109 KiB  
Article
Eye-Blink Event Detection Using a Neural-Network-Trained Frame Segment for Woman Drivers in Saudi Arabia
by Muna S. Al-Razgan, Issema Alruwaly and Yasser A. Ali
Electronics 2023, 12(12), 2699; https://doi.org/10.3390/electronics12122699 - 16 Jun 2023
Viewed by 1016
Abstract
Women have been allowed to drive in Saudi Arabia since 2018, revoking a 30-year ban that also adhered to the traffic rules provided in the country. Conventional drivers are often monitored for safe driving by monitoring their facial reactions, eye blinks, and expressions. As driving experience and vehicle handling features have been less exposed to novice women drivers in Saudi Arabia, technical assistance and physical observations are mandatory. Such observations are sensed as images/video frames for computer-based analyses. Precise computer vision processes are employed for detecting and classifying events using image processing. The identified events are unique to novice women drivers in Saudi Arabia, assisting with their vehicle usage. This article introduces the Event Detection using Segmented Frame (ED-SF) method to improve the abnormal Eye-Blink Detection (EBD) of women drivers. The eye region is segmented using variation pixel extraction in this process. The pixel extraction process requires textural variation identified from different frames. The condition is that the frames are to be continuous in the event detection. This method employs a convolutional neural network with two hidden layer processes. In the first layer, continuous and discrete frame differentiations are identified. The second layer is responsible for segmenting the eye region, devouring the textural variation. The variations and discrete frames are used for training the neural network to prevent segment errors in the extraction process. Therefore, the frame segment changes are used for identifying the expressions through different inputs across different texture luminosities. This method applies to less-experienced and road-safety-knowledge-lacking woman drivers who have initiated their driving journey in Saudi-Arabia-like countries. Thus, the proposed method improves the EBD accuracy by 9.5% compared to Hybrid Convolutional Neural Networks (HCNN), Long Short-Term Neural Networks (HCNN + LSTM), Two-Stream Spatial-Temporal Graph Convolutional Networks (2S-STGCN), and the Customized Driving Fatigue Detection Method (CDFDM). Full article

18 pages, 4300 KiB  
Article
A Hardware-Based Configurable Algorithm for Eye Blink Signal Detection Using a Single-Channel BCI Headset
by Rafael López-Ahumada, Raúl Jiménez-Naharro and Fernando Gómez-Bravo
Sensors 2023, 23(11), 5339; https://doi.org/10.3390/s23115339 - 5 Jun 2023
Viewed by 1520
Abstract
Eye blink artifacts in electroencephalographic (EEG) signals have been used in multiple applications as an effective method for human–computer interaction. Hence, an effective and low-cost blinking detection method would be an invaluable aid for the development of this technology. A configurable hardware algorithm, described using hardware description language, for eye blink detection based on EEG signals from a one-channel brain–computer interface (BCI) headset was developed and implemented, showing better performance in terms of effectiveness and detection time than manufacturer-provided software. Full article
(This article belongs to the Section Intelligent Sensors)

25 pages, 1204 KiB  
Article
Research on Railway Dispatcher Fatigue Detection Method Based on Deep Learning with Multi-Feature Fusion
by Liang Chen and Wei Zheng
Electronics 2023, 12(10), 2303; https://doi.org/10.3390/electronics12102303 - 19 May 2023
Cited by 1 | Viewed by 980
Abstract
Traffic command and scheduling are the core monitoring aspects of railway transportation. Detecting the fatigued state of dispatchers is, therefore, of great significance to ensure the safety of railway operations. In this paper, we present a multi-feature fatigue detection method based on key points of the human face and body posture. Considering unfavorable factors such as facial occlusion and angle changes that have limited single-feature fatigue state detection methods, we developed our model based on the fusion of body postures and facial features for better accuracy. Using facial key points and eye features, we calculate the percentage of eye closure that accounts for more than 80% of the time duration, as well as blinking and yawning frequency, and we analyze fatigue behaviors, such as yawning, a bowed head (that could indicate sleep state), and lying down on a table, using a behavior recognition algorithm. We fuse five facial features and behavioral postures to comprehensively determine the fatigue state of dispatchers. The results show that on the 300 W dataset, as well as a hand-crafted dataset, the inference time of the improved facial key point detection algorithm based on the retina–face model was 100 ms and that the normalized average error (NME) was 3.58. On our own dataset, the classification accuracy based on a Bi-LSTM-SVM adaptive enhancement algorithm model reached 97%. Video data of volunteers who carried out scheduling operations in the simulation laboratory were used for our experiments, and our multi-feature fusion fatigue detection algorithm showed an accuracy rate of 96.30% and a recall rate of 96.30% in fatigue classification, both of which were higher than those of existing single-feature detection methods. Our multi-feature fatigue detection method offers a potential solution for fatigue level classification in vital areas of the industry, such as in railway transportation. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Deep Learning and Its Applications)
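The percentage-of-eye-closure measure described above is a PERCLOS-style metric: the fraction of frames in which the eye is at least 80% closed. This sketch uses that standard definition with an invented per-frame closure trace, not the authors' data.

```python
def perclos(closure, closed_thresh=0.8):
    """Fraction of frames in which the eye is at least 80% closed."""
    return sum(1 for c in closure if c >= closed_thresh) / len(closure)

# Invented per-frame eye-closure fractions (1.0 = fully closed).
trace = [0.1, 0.2, 0.9, 0.95, 0.85, 0.1, 0.2, 0.1, 0.9, 0.1]
print(perclos(trace))
```

Because it integrates closure over time rather than counting individual blinks, PERCLOS is less sensitive to single long blinks and is one of the most widely used drowsiness indicators.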

15 pages, 418 KiB  
Article
Evaluation of Unsupervised Anomaly Detection Techniques in Labelling Epileptic Seizures on Human EEG
by Oleg E. Karpov, Matvey S. Khoymov, Vladimir A. Maksimenko, Vadim V. Grubov, Nikita Utyashev, Denis A. Andrikov, Semen A. Kurkin and Alexander E. Hramov
Appl. Sci. 2023, 13(9), 5655; https://doi.org/10.3390/app13095655 - 4 May 2023
Cited by 7 | Viewed by 1942
Abstract
Automated labelling of epileptic seizures on electroencephalograms is an essential interdisciplinary task of diagnostics. Traditional machine learning approaches operate in a supervised fashion requiring complex pre-processing procedures that are usually labour intensive and time-consuming. The biggest issue with the analysis of electroencephalograms is the artefacts caused by head movements, eye blinks, and other non-physiological reasons. Similarly to epileptic seizures, artefacts produce rare high-amplitude spikes on electroencephalograms, complicating their separability. We suggest that artefacts and seizures are rare events; therefore, separating them from the rest of the data seriously reduces information for further processing. Based on the occasional nature of these events and their distinctive pattern, we propose using anomaly detection algorithms for their detection. These algorithms are unsupervised and require minimal pre-processing. In this work, we test the possibility of an anomaly (or outlier) detection algorithm to detect seizures. We compared the state-of-the-art outlier detection algorithms and showed how their performance varied depending on input data. Our results evidence that outlier detection methods can detect all seizures reaching 100% recall, while their precision barely exceeds 30%. However, the small number of seizures means that the algorithm outputs a set of few events that could be quickly classified by an expert. Thus, we believe that outlier detection algorithms could be used for the rapid analysis of electroencephalograms to save the time and effort of experts. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) in Neuroscience)
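As a toy stand-in for the outlier detectors benchmarked in the paper (not the actual algorithms compared), a robust MAD-based z-score illustrates the core idea: rare high-amplitude windows separate easily from background EEG and can be handed to an expert for triage. The threshold and signals are invented.

```python
import numpy as np

def flag_outlier_windows(window_amplitudes, z_thresh=3.5):
    """Flag windows whose amplitude deviates strongly from the median (MAD z-score)."""
    a = np.asarray(window_amplitudes, float)
    med = np.median(a)
    mad = np.median(np.abs(a - med)) or 1e-12   # guard against zero MAD
    z = 0.6745 * (a - med) / mad                # 0.6745 scales MAD to ~std of a normal
    return np.where(np.abs(z) > z_thresh)[0]

# 99 background EEG windows plus one seizure-like high-amplitude window.
amps = [10.0 + 0.1 * (i % 5) for i in range(99)] + [80.0]
flagged = flag_outlier_windows(amps)
print(flagged)
```

Like the methods in the paper, this is unsupervised and needs no labelled training data; the cost is precision, since artefacts produce the same kind of amplitude outliers as seizures.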

16 pages, 15848 KiB  
Article
Wearable and Invisible Sensor Design for Eye-Motion Monitoring Based on Ferrofluid and Electromagnetic Sensing Technologies
by Jiawei Tang, Patrick Luk and Yuyang Zhou
Bioengineering 2023, 10(5), 514; https://doi.org/10.3390/bioengineering10050514 - 25 Apr 2023
Viewed by 1367
Abstract
For many human body diseases, treatments in the early stages are more efficient and safer than those in the later stages; therefore, detecting the early symptoms of a disease is crucial. One of the most significant early indicators for diseases is bio-mechanical motion. This paper provides a unique way of monitoring bio-mechanical eye motion based on electromagnetic sensing technology and a ferro-magnetic material, ferrofluid. The proposed monitoring method has the advantages of being inexpensive, non-invasive, sensor-invisible and extremely effective. Most of the medical devices are cumbersome and bulky, which makes them hard to apply for daily monitoring. However, the proposed eye-motion monitoring method is designed based on ferrofluid eye make-up and invisible sensors embedded inside the frame of glasses such that the system is wearable for daily monitoring. In addition, it has no influence on the appearance of the patient, which is beneficial for the mental health of some patients who do not want to attract public attention during treatment. The sensor responses are modelled using finite element simulation models, and wearable sensor systems are created. The designed frame of the glasses is manufactured based on 3-D printing technology. Experiments are conducted to monitor eye bio-mechanical motions, such as the frequency of eye blinking. Both the quick blinking behaviour with an overall frequency of around 1.1 Hz and the slow blinking behaviour with an overall frequency of around 0.4 Hz can be observed through experimentation. Simulation and measurement results show that the proposed sensor design can be employed for bio-mechanical eye-motion monitoring. In addition, the proposed system has the advantages of invisible sensor set-up and will not affect the appearance of the patient, which is not only convenient for the daily life of the patient but also beneficial for mental health. Full article
(This article belongs to the Special Issue Electronic Wearable Solutions for Sport and Health)
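The blink-frequency figures quoted above are simply blink counts divided by observation time. A hedged arithmetic sketch with invented timestamps reproducing the ~1.1 Hz and ~0.4 Hz values:

```python
def blink_frequency(blink_times_s, duration_s):
    """Blink rate in Hz: number of detected blinks over the observation window."""
    return len(blink_times_s) / duration_s

fast = blink_frequency([t * 0.9 for t in range(11)], 10.0)  # 11 blinks in 10 s
slow = blink_frequency([0.0, 2.5, 5.0, 7.5], 10.0)          # 4 blinks in 10 s
print(fast, slow)
```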

14 pages, 3381 KiB  
Article
Facial Motion Capture System Based on Facial Electromyogram and Electrooculogram for Immersive Social Virtual Reality Applications
by Chunghwan Kim, Ho-Seung Cha, Junghwan Kim, HwyKuen Kwak, WooJin Lee and Chang-Hwan Im
Sensors 2023, 23(7), 3580; https://doi.org/10.3390/s23073580 - 29 Mar 2023
Cited by 2 | Viewed by 2101
Abstract
With the rapid development of virtual reality (VR) technology and the market growth of social network services (SNS), VR-based SNS have been actively developed, in which 3D avatars interact with each other on behalf of the users. To provide the users with more immersive experiences in a metaverse, facial recognition technologies that can reproduce the user’s facial gestures on their personal avatar are required. However, it is generally difficult to employ traditional camera-based facial tracking technology to recognize the facial expressions of VR users because a large portion of the user’s face is occluded by a VR head-mounted display (HMD). To address this issue, attempts have been made to recognize users’ facial expressions based on facial electromyogram (fEMG) recorded around the eyes. fEMG-based facial expression recognition (FER) technology requires only tiny electrodes that can be readily embedded in the HMD pad that is in contact with the user’s facial skin. Additionally, electrodes recording fEMG signals can simultaneously acquire electrooculogram (EOG) signals, which can be used to track the user’s eyeball movements and detect eye blinks. In this study, we implemented an fEMG- and EOG-based FER system using ten electrodes arranged around the eyes, assuming a commercial VR HMD device. Our FER system could continuously capture various facial motions, including five different lip motions and two different eyebrow motions, from fEMG signals. Unlike previous fEMG-based FER systems that simply classified discrete expressions, with the proposed FER system, natural facial expressions could be continuously projected on the 3D avatar face using machine-learning-based regression with a new concept named the virtual blend shape weight, making it unnecessary to simultaneously record fEMG and camera images for each user. An EOG-based eye tracking system was also implemented for the detection of eye blinks and eye gaze directions using the same electrodes. These two technologies were simultaneously employed to implement a real-time facial motion capture system, which could successfully replicate the user’s facial expressions on a realistic avatar face in real time. To the best of our knowledge, the concurrent use of fEMG and EOG for facial motion capture has not been reported before. Full article
(This article belongs to the Special Issue Sensing Technology in Virtual Reality)

17 pages, 4292 KiB  
Article
A CNN-Based Wearable System for Driver Drowsiness Detection
by Yongkai Li, Shuai Zhang, Gancheng Zhu, Zehao Huang, Rong Wang, Xiaoting Duan and Zhiguo Wang
Sensors 2023, 23(7), 3475; https://doi.org/10.3390/s23073475 - 26 Mar 2023
Cited by 6 | Viewed by 2410
Abstract
Drowsiness poses a serious challenge to road safety and various in-cabin sensing technologies have been experimented with to monitor driver alertness. Cameras offer a convenient means for contactless sensing, but they may violate user privacy and require complex algorithms to accommodate user (e.g., sunglasses) and environmental (e.g., lighting conditions) constraints. This paper presents a lightweight convolution neural network that measures eye closure based on eye images captured by a wearable glass prototype, which features a hot mirror-based design that allows the camera to be installed on the glass temples. The experimental results showed that the wearable glass prototype, with the neural network in its core, was highly effective in detecting eye blinks. The blink rate derived from the glass output was highly consistent with an industrial gold standard EyeLink eye-tracker. As eye blink characteristics are sensitive measures of driver drowsiness, the glass prototype and the lightweight neural network presented in this paper would provide a computationally efficient yet viable solution for real-world applications. Full article
(This article belongs to the Section Wearables)
