
Hands-Free User Interface for AR/VR Devices Exploiting Wearer’s Facial Gestures Using Unsupervised Deep Learning

Seamless Transportation Lab (STL), School of Integrated Technology, and Yonsei Institute of Convergence Technology, Yonsei University, Incheon 21983, Korea
* Author to whom correspondence should be addressed.
Sensors 2019, 19(20), 4441; https://doi.org/10.3390/s19204441
Received: 2 September 2019 / Revised: 4 October 2019 / Accepted: 10 October 2019 / Published: 14 October 2019
(This article belongs to the Special Issue Sensor Applications on Emotion Recognition)
Developing a user interface (UI) suited to headset environments is one of the challenges in the field of augmented reality (AR). This study proposes a hands-free UI for an AR headset that recognizes user intentions from the wearer's facial gestures. The gestures are detected by a custom-designed sensor that measures skin deformation based on the infrared diffusion characteristics of human skin. We designed a deep neural network classifier that infers the user's intended gesture from the skin-deformation data; the recognized gestures serve as input commands for the proposed UI system. The classifier combines a spatiotemporal autoencoder with a deep embedded clustering algorithm and is trained in an unsupervised manner. We embedded the UI device in a commercial AR headset and verified its operation through several experiments on online (real-time) sensor data. In tests with participants, the system recognized user commands with an average accuracy of 95.4%.
Keywords: hands-free interface; augmented reality; spatiotemporal autoencoder; deep embedded clustering
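The abstract names two unsupervised components: a spatiotemporal autoencoder that compresses windows of skin-deformation signals into latent vectors, and a deep embedded clustering (DEC) head that groups those latents into gesture classes. The sketch below (PyTorch; not the authors' code) illustrates one way such a pipeline could be wired; the channel count, window length, latent size, and number of gesture clusters are illustrative assumptions, not values from the paper.

```python
# A minimal sketch (PyTorch, assumed; not the authors' implementation) of the
# two components the abstract names: a spatiotemporal autoencoder over windows
# of skin-deformation sensor readings, and a deep embedded clustering (DEC)
# head that soft-assigns each latent vector to a gesture cluster. The channel
# count (8), window length (64), latent size (16), and cluster count (5) are
# hypothetical placeholders, not values reported in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatioTemporalAE(nn.Module):
    """1-D convolutional autoencoder over (channels, time) sensor windows."""
    def __init__(self, n_channels=8, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, latent_dim, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one vector
            nn.Flatten(),
        )
        self.decoder = nn.Sequential(  # mirror path: latent -> window
            nn.Linear(latent_dim, 32 * 16),
            nn.Unflatten(1, (32, 16)),
            nn.ConvTranspose1d(32, n_channels, kernel_size=4, stride=4),
        )

    def forward(self, x):  # x: (batch, channels, time)
        z = self.encoder(x)
        return z, self.decoder(z)

class DECHead(nn.Module):
    """Student's-t soft assignment of latents to K cluster centroids (DEC)."""
    def __init__(self, latent_dim=16, n_clusters=5, alpha=1.0):
        super().__init__()
        self.alpha = alpha
        self.centroids = nn.Parameter(torch.randn(n_clusters, latent_dim))

    def forward(self, z):  # z: (batch, latent_dim)
        d2 = torch.cdist(z, self.centroids).pow(2)
        q = (1.0 + d2 / self.alpha).pow(-(self.alpha + 1.0) / 2.0)
        return q / q.sum(dim=1, keepdim=True)  # soft cluster assignments

def target_distribution(q):
    """Sharpened targets p for the DEC objective KL(p || q)."""
    w = q.pow(2) / q.sum(dim=0)
    return w / w.sum(dim=1, keepdim=True)

# One unsupervised training step on a batch of 32 sensor windows.
ae, dec = SpatioTemporalAE(), DECHead()
opt = torch.optim.Adam(list(ae.parameters()) + list(dec.parameters()), lr=1e-3)
x = torch.randn(32, 8, 64)  # stand-in for real skin-deformation windows
z, x_hat = ae(x)
q = dec(z)
loss = F.mse_loss(x_hat, x) + F.kl_div(
    q.log(), target_distribution(q).detach(), reduction="batchmean"
)
opt.zero_grad()
loss.backward()
opt.step()
```

In this setup, the reconstruction term keeps the latent space faithful to the raw signals while the KL term sharpens the soft assignments toward confident clusters; at inference time, the argmax over q would give the recognized gesture class.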
MDPI and ACS Style

Cha, J.; Kim, J.; Kim, S. Hands-Free User Interface for AR/VR Devices Exploiting Wearer’s Facial Gestures Using Unsupervised Deep Learning. Sensors 2019, 19, 4441.

