Open Access Article
Sensors 2018, 18(9), 2894; https://doi.org/10.3390/s18092894

Integrating Gaze Tracking and Head-Motion Prediction for Mobile Device Authentication: A Proof of Concept

Z. Ma 1,2,†,*, X. Wang 1,†,*, R. Ma 1, Z. Wang 3 and J. Ma 1,2

1 School of Cyber Engineering, Xidian University, Xi’an 710071, China
2 Shaanxi Key Laboratory of Network and System Security, Xidian University, Xi’an 710071, China
3 ZTE Corporation, Xi’an 710114, China
† These authors contributed equally to this work.
* Authors to whom correspondence should be addressed.
Received: 24 June 2018 / Revised: 22 August 2018 / Accepted: 28 August 2018 / Published: 31 August 2018
(This article belongs to the Special Issue Wireless Body Area Networks and Connected Health)
Abstract

We introduce a two-stream model that uses reflexive eye movements to authenticate users of smart mobile devices. The model builds on two pre-trained neural networks, iTracker and PredNet, which target two independent tasks: (i) gaze tracking and (ii) future frame prediction. We design a procedure that randomly generates a visual stimulus on the screen of the mobile device while the front camera simultaneously captures the user's head motions as he or she watches it. iTracker then computes the gaze-coordinate error, which is treated as a static feature. To compensate for imprecise gaze coordinates caused by the low resolution of the front camera, we further use PredNet to extract dynamic features between consecutive frames. To resist traditional attacks (shoulder surfing and impersonation) during mobile device authentication, we combine the static and dynamic features to train a two-class support vector machine (SVM) classifier. The experimental results show that the classifier authenticates the identity of mobile device users with an accuracy of 98.6%.
Keywords: smart mobile devices; gaze tracking; head motions; authentication; neural networks
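Below is a minimal, hypothetical sketch of the fusion-and-classification step described in the abstract: static gaze-error features (such as those produced by iTracker) and dynamic inter-frame features (such as those produced by PredNet) are concatenated and used to train a two-class SVM. The feature extractors are stubbed with placeholder data, and the feature dimensions, kernel, and split are illustrative assumptions, not values taken from the paper.

# Hypothetical sketch: fuse static (gaze-error) and dynamic (inter-frame)
# features and train a two-class SVM, as outlined in the abstract.
# Feature extraction is stubbed with random placeholder data; in the real
# pipeline these vectors would come from iTracker and PredNet.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_samples = 200
static_dim = 2      # e.g., (x, y) gaze-coordinate error per stimulus point (assumed)
dynamic_dim = 64    # e.g., pooled inter-frame prediction features (assumed)

# Placeholder features standing in for the two network outputs.
static_feats = rng.normal(size=(n_samples, static_dim))
dynamic_feats = rng.normal(size=(n_samples, dynamic_dim))
labels = rng.integers(0, 2, size=n_samples)  # 1 = legitimate user, 0 = attacker

# Fuse the two streams by simple concatenation.
X = np.hstack([static_feats, dynamic_feats])

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels)

# Two-class SVM classifier; RBF kernel chosen here as a reasonable default.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))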
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Ma, Z.; Wang, X.; Ma, R.; Wang, Z.; Ma, J. Integrating Gaze Tracking and Head-Motion Prediction for Mobile Device Authentication: A Proof of Concept. Sensors 2018, 18, 2894.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
