Open Access Article

Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks

Department of Computer Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
* Author to whom correspondence should be addressed.
Sensors 2020, 20(17), 4935; https://doi.org/10.3390/s20174935
Received: 5 August 2020 / Revised: 24 August 2020 / Accepted: 27 August 2020 / Published: 31 August 2020
(This article belongs to the Special Issue Human-Computer Interaction in Pervasive Computing Environments)
In smart interactive environments, such as digital museums or digital exhibition halls, it is important to accurately understand the user’s intent to ensure successful and natural interaction with the exhibition. For predicting user intent, gaze estimation has been considered one of the most effective indicators among recently developed interaction techniques (e.g., face orientation estimation, body tracking, and gesture recognition). Previous gaze estimation techniques, however, are known to be effective only in a controlled lab environment under normal lighting conditions. In this study, we propose a novel deep learning-based approach that achieves successful gaze estimation under various low-light conditions, which is anticipated to be more practical for smart interaction scenarios. The proposed approach utilizes a generative adversarial network (GAN) to enhance users’ eye images captured under low-light conditions, thereby restoring information needed for gaze estimation. The GAN-recovered images are then fed into a convolutional neural network to estimate the direction of the user’s gaze. Our experimental results on the modified MPIIGaze dataset demonstrate that the proposed approach achieves an average performance improvement of 4.53%–8.9% under low and dark light conditions, which is a promising step toward further research.
Keywords: adversarial network; deep learning; gaze estimation; low-light environment
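The pipeline summarized in the abstract has two stages: enhance a low-light eye image, then regress a gaze direction from the enhanced image. The sketch below illustrates that composition only; the function names are hypothetical, and simple gamma correction and a brightness-weighted centroid stand in for the paper's trained GAN and CNN.

```python
# Hypothetical sketch of the two-stage "Gaze in the Dark" pipeline.
# Stage 1 (stand-in for the GAN enhancer): gamma-brighten a low-light image.
# Stage 2 (stand-in for the CNN regressor): map the enhanced image to a
# (yaw, pitch) gaze direction via its brightness-weighted centroid.
# Images are lists of rows of pixel intensities in [0, 1].

def enhance_low_light(image, gamma=0.4):
    """Brighten dark pixels with gamma correction (placeholder enhancer)."""
    return [[p ** gamma for p in row] for row in image]

def estimate_gaze(image):
    """Return (yaw, pitch) in [-1, 1] from the brightness-weighted centroid
    (placeholder for a trained gaze-regression CNN)."""
    h, w = len(image), len(image[0])
    total = sum(sum(row) for row in image) or 1.0
    cx = sum(x * p for row in image for x, p in enumerate(row)) / total
    cy = sum(y * p for y, row in enumerate(image) for p in row) / total
    # Normalize centroid coordinates to [-1, 1].
    return 2.0 * cx / (w - 1) - 1.0, 2.0 * cy / (h - 1) - 1.0

def gaze_in_the_dark(low_light_image):
    """Full pipeline: enhance first, then estimate gaze."""
    return estimate_gaze(enhance_low_light(low_light_image))
```

For example, feeding a mostly dark 3x3 image with a brighter spot on its right edge yields a positive yaw, since the centroid is pulled toward the bright region after enhancement.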
Figure 1

MDPI and ACS Style

Kim, J.-H.; Jeong, J.-W. Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks. Sensors 2020, 20, 4935.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
