Sensors
  • Article
  • Open Access

19 May 2018

Etracker: A Mobile Gaze-Tracking System with Near-Eye Display Based on a Combined Gaze-Tracking Algorithm

1 Xi’an Institute of Optics and Precision Mechanics of CAS, Xi’an 710119, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Department of Computer Science, Chu Hai College of Higher Education, Tuen Mun, Hong Kong, China
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Biomedical Infrared Imaging: From Sensors to Applications

Abstract

Eye tracking technology has become increasingly important for psychological analysis, medical diagnosis, driver assistance systems, and many other applications. Various gaze-tracking models have been established by previous researchers. However, there is currently no near-eye display system that offers both accurate gaze-tracking performance and a convenient user experience. In this paper, we constructed a complete prototype of the mobile gaze-tracking system ‘Etracker’ with a near-eye viewing device for human gaze tracking, and we proposed a combined gaze-tracking algorithm. In this algorithm, a convolutional neural network is used to remove blinking images and predict the coarse gaze position, and a geometric model is then defined for accurate human gaze tracking. Moreover, we proposed using the mean value of pupil centers in the calibration algorithm to compensate for pupil center changes caused by nystagmus, so that an individual user only needs to calibrate the system the first time it is used, which makes our system more convenient. Experiments on gaze data from 26 participants show that the eye center detection accuracy is 98% and that Etracker provides an average gaze accuracy of 0.53° at a rate of 30–60 Hz.

1. Introduction

In recent years, eye tracking has become an important research topic in computer vision and pattern recognition, because human gaze position is essential information for many applications, including human–computer interaction (HCI) [1,2], driver assistance [3], optometry, market data analysis, and medical diagnosis. There have been studies using human eye movements for strabismus examination [4]. Parkinson’s disease can also be detected on the basis of human eye blinking. In our previous work [5], we proposed diagnosing developmental coordination disorder (DCD) by detecting changes in patients’ gaze positions and body motion.
The fundamental issues of gaze-tracking technology include the tracking system, the tracking algorithms, and the user experience; these three issues are closely related. A tracking system may consist of eye cameras, display devices or front-facing cameras, and processing units. Tracking algorithms may include eye region detection, gaze position detection, gaze mapping algorithms, and calibration algorithms, depending on the tracking system. The user experience is highly dependent on the tracking system and algorithms. Most current gaze-tracking systems are either table-mounted or mobile systems. A table-mounted system usually works with an external display screen, which makes human–computer interaction convenient, but it is not robust to head movement. A mobile system is robust to significant head movements, but human–computer interaction through its front-facing camera is not easy. A near-eye viewing system combines the advantages of both; to the best of our knowledge, there is no complete prototype of such a system. The fundamental problem of tracking algorithms is to track human eye regions and gaze positions accurately. The cameras are sensitive to light variations and shooting distance, which can make the human eye appear highly eccentric in the recorded images. In addition, illumination changes, blinking, eyelids, and eyelashes make accurate gaze tracking very challenging. A robust gaze-tracking algorithm should have stable performance in different environments and efficiently meet the needs of various applications. From the perspective of user experience, current gaze-tracking systems still have several problems: they are inconvenient to use, occlude the participant’s field of view, have complicated operator interfaces, and need to be recalibrated before every use. Moreover, most current commercial systems are quite expensive. These factors limit the applications of gaze-tracking technology in various research topics.
To address the above issues, we propose a mobile gaze-tracking system with a near-eye viewing device, reliable tracking algorithms, and a convenient user experience. The main contributions of our research compared to previous works are as follows.
  • We create a complete prototype of an efficient, easy-to-use, and inexpensive mobile gaze-tracking system, Etracker. Compared to existing gaze-tracking systems [6,7,8,9], Etracker is small, lightweight, unobtrusive, and user-friendly. It records eye movements and computes gaze positions in real time.
  • We use a novel near-eye viewing device in the gaze-tracking system, which replaces the traditional large display devices, e.g., computer monitors, TVs, and projectors. The near-eye viewing device has a millimeter-sized display chip with a resolution of 1024 × 720 pixels and displays a 35 × 25 cm² virtual image at a distance of 0.5 m from the human eyes.
  • We propose a combined gaze estimation method based on CNNs (ResNet-101) and a geometric model. The CNN is used to remove the blinking eye images and locate the coarse gaze position, and the accurate gaze positions are detected by a geometric model. The gaze accuracy can reach 0.53°.
  • We propose using the mean value of pupil centers to smooth the changes caused by nystagmus in calibration algorithms. Therefore, an individual user only needs to calibrate it the first time.
The rest of this paper is organized as follows. In Section 2, a review of previous works on gaze-tracking systems and eye detection algorithms is presented. The proposed gaze-tracking device and algorithms are described in detail in Section 3. In Section 4, the analysis and explanation of the results are presented. Finally, discussions and conclusions are drawn in Section 5.

3. The Proposed Method

3.1. Etracker Hardware System and Experimental Environment

Our goal was to design a lightweight and convenient gaze-tracking system. The proposed gaze-tracking system, Etracker, consists of a micro lens camera and a near-eye viewing device, as shown in Figure 1 and Figure 2. The micro lens camera and the near-eye viewing device are installed on lightweight eyeglasses. This design not only reduces the device size and weight, but also avoids occlusion of the participant’s field of view by a near-eye camera. The micro lens camera measures 0.5 cm × 0.5 cm × 0.3 cm; its resolution is 640 × 480 pixels and its frame rate is 60 fps. Etracker measures 12 cm × 4 cm × 3 cm and weighs 52 g.
Figure 1. The proposed system and experimental environment.
Figure 2. Gaze-tracking device and near-eye viewing device.
Compared to a visible light camera, an infrared camera can capture clearer eye contours and is insensitive to external light changes, which is why we chose an infrared camera to capture eye images. We modified a commercial visible light camera designed for medical endoscopes to serve as the eye camera. We used a hole punch to cut a round piece of exposed film and fixed it onto the camera lens as an infrared filter. Then, six infrared LEDs with a wavelength of 850 nm were soldered onto the circuit board around the camera. We successfully used this camera to collect eye data from different participants and stored the eye images on a computer via the USB 3.0 interface. As shown in Figure 2, this tiny micro lens camera makes our system small, lightweight, and user-friendly.
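The paper does not describe the capture software. Assuming the modified endoscope camera enumerates as a standard USB video device, a minimal OpenCV capture sketch might look like the following; the device index and property support are assumptions, not details from the paper.

```python
import cv2  # OpenCV; the modified endoscope camera is assumed to behave as a standard USB video device

cap = cv2.VideoCapture(0)                    # hypothetical device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)       # 640 x 480 sensor resolution
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 60)                # 60 fps, as reported for the eye camera

frames = []
while len(frames) < 60:                      # e.g., grab 1 s of eye images
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)                     # keep frames for later gaze processing
cap.release()
```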
Traditional gaze-tracking systems usually require computer monitors or TVs to display the calibration and gaze information, which makes the gaze-tracking system bulky and inconvenient to use. Lightweight near-eye viewing technology can therefore greatly reduce the size of the gaze-tracking system. The near-eye viewing device is composed of an OLED micro-display and an optic/display holder. Its single-element optical design delivers a nearly 40° field of view, as shown in Figure 3. In our gaze-tracking system, the resolution of the near-eye viewing device reaches 1024 × 720 pixels. The near-eye viewing device can also be easily connected to a computer or to embedded devices (e.g., ARM, FPGA, MCU) via its VGA port. The characteristics of the display device used in our system are shown in Table 2. When the participant wears Etracker, the near-eye viewing device displays a series of visual content, e.g., calibration marks, advertisement images, web design photos, and videos, while the eye camera records the participant’s eye movements in real time. In our experiments, we used an Intel(R) Core(TM) i5-6600 desktop computer with 16 GB RAM and an NVIDIA GeForce GTX 745 GPU to collect and process the recorded data.
Figure 3. Near-eye viewing device.
Table 2. The performance of the near-eye viewing device in Etracker.

3.2. Workflow of Proposed Gaze-Tracking Method

The overall workflow of the proposed gaze-tracking method is shown in Figure 4. If the user is new to the system, an initial calibration is required. During the calibration, the participant must keep gazing at the flashing mark in the near-eye viewing device until it disappears. If the user has calibrated before, this step is skipped and the system proceeds directly to the second step. In the second step, we use a single micro lens infrared camera to capture the participant’s eye images in real time, as shown in Figure 1. Then, based on the participant’s eye images recorded during the calibration step, the CNNs model locates the coarse gaze positions; in this step, the CNNs are also used to remove erroneous images caused by blinking. In the fourth step, the geometric mapping model between the human eye and the near-eye viewing device is calculated based on the geometric relationship between the participant’s eye locations and gaze positions. Finally, accurate gaze tracking is realized by combining the CNNs and the geometric model. In the following sections, we introduce the proposed method in detail.
Figure 4. Workflow of the proposed gaze-tracking method.

3.3. Initial Calibration

The proposed gaze-tracking system determines whether a participant needs initial calibration based on their input information. The purpose of the initial calibration is to let Etracker learn the characteristics of the participant’s eye movements, which helps the proposed gaze-tracking system accurately estimate the participant’s gaze positions. Calibration schemes with different numbers of marks are possible, e.g., three marks or nine marks. Considering both calibration speed and accuracy, we adopted the nine-mark scheme in this paper. As shown in Figure 5, the positions of the nine calibration marks on the screen are (112, 60), (512, 60), (924, 60), (112, 360), (512, 360), (924, 360), (112, 660), (512, 660), and (924, 660). Each calibration mark comprises one black inner circle (radius: 5 pixels), one black middle circle (radius: 10 pixels), and one black outer circle (radius: 15 pixels). If the participant needs an initial calibration, the nine calibration marks flash in the near-eye viewing device in random order. Figure 6 shows the initial calibration step in detail. During the calibration, each mark stays on the display for two seconds, and the participant must keep staring at the mark until it disappears. The infrared camera records the participant’s eye movements at the different gaze positions. In order to ensure that the participant concentrates on the calibration task and fixates on the display marks accurately, we conducted the tests in a quiet experimental environment. For each calibration mark, we collected 60 eye gaze images.
Figure 5. Nine-point calibration marks.
Figure 6. Initial calibration step.
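As a concrete illustration of the calibration layout described above, the following Python/OpenCV sketch renders the nine-mark sequence; the white background, the 2-pixel stroke of the concentric circles, and the display loop are assumptions rather than details from the paper.

```python
import random
import numpy as np
import cv2

W, H = 1024, 720                                   # near-eye display resolution
MARKS = [(112, 60), (512, 60), (924, 60),
         (112, 360), (512, 360), (924, 360),
         (112, 660), (512, 660), (924, 660)]       # nine calibration positions

def calibration_frame(mark_xy):
    """Render one frame with a single active mark drawn as three concentric circles."""
    img = np.full((H, W, 3), 255, dtype=np.uint8)  # white background (assumption)
    for radius in (15, 10, 5):                     # outer, middle, inner circles
        cv2.circle(img, mark_xy, radius, (0, 0, 0), thickness=2)
    return img

# Flash the marks in random order, each shown for ~2 s (about 120 frames at 60 Hz).
for idx in random.sample(range(len(MARKS)), k=len(MARKS)):
    frame = calibration_frame(MARKS[idx])
    for _ in range(120):
        cv2.imshow("calibration", frame)
        cv2.waitKey(16)                            # ~60 Hz refresh
cv2.destroyAllWindows()
```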

3.4. Coarse Gaze Estimation and Blinking Image Removal by CNNs

In our research, we recorded the participant’s eye images through Etracker. Then, the coarse gaze positions were estimated based on convolutional neural networks (CNNs). In [29], the authors used an artificial neural network (ANN) to train a gaze detection function that could directly locate gaze positions from a series of input eye images. Although they obtained some gaze-tracking results through experiments, a gaze estimation model trained in this way is inaccurate in some cases; for example, blinking leads to false gaze predictions. In addition, they needed to train a separate model for each participant. Therefore, we aim to establish a coarse gaze estimation model based on the initial calibration data and to use this model to remove erroneous eye images, such as those captured during blinking.
Given the recent success of ResNet-101 [40] in image recognition, we fine-tuned this model for eye detection and gaze estimation. The main reason we used a pre-trained CNNs model in our work is that training a deep neural network from scratch requires a large amount of data. Fine-tuning an existing pre-trained network allowed us to achieve better detection results on our dataset. The architecture of the proposed CNNs is summarized in Table 3. The sizes of the various layers are similar to those of ResNet-101. Because our goal is to design a CNNs model that can use the information from a single eye image to efficiently and accurately predict eye status and coarse gaze positions, the number of output nodes in the fully connected layer is 10. Nine outputs of the softmax layer represent the nine coarse gaze positions, and the remaining output indicates that the eye status is blinking.
Table 3. Structure of CNNs model used in our work.
A set of eye images associated with ground-truth gaze positions, collected in the calibration step, is used to fine-tune the CNNs. We then performed eye status detection and gaze estimation based on computations within the CNNs model. Note that, in order to fine-tune the ResNet-101 model, we resized the collected eye images (640 × 480 pixels) to 224 × 224 pixels.
The structure of the CNNs model is shown in Table 3. Conv-1 to Conv-5 are the convolutional layer groups; each is repeated according to the ‘Iterations Number’ column in Table 3. The max pooling and average pooling layers are subsampling layers that reduce the number of parameters in the CNNs. A fully connected layer integrates the feature information after the convolution and pooling operations. Finally, the gaze estimation results are output by the softmax layer.
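The paper does not state the training framework or hyperparameters. The sketch below shows one plausible way to fine-tune a pre-trained ResNet-101 with the 10-way head described above (nine coarse gaze positions plus one blinking class) in PyTorch; the optimizer, learning rate, and preprocessing details are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# ImageNet-pretrained ResNet-101 with its 1000-way head replaced by 10 outputs:
# classes 0-8 are the nine coarse gaze positions, class 9 means "blinking".
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),        # eye images are resized from 640 x 480
    transforms.ToTensor(),
])

criterion = nn.CrossEntropyLoss()         # cross-entropy over the 10 softmax classes
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # assumed values

def train_step(images, labels):
    """One fine-tuning step on a batch of calibration eye images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def coarse_gaze_or_blink(image):
    """Return the predicted class: 0-8 = coarse gaze region, 9 = blink (frame discarded)."""
    model.eval()
    logits = model(image.unsqueeze(0))    # add a batch dimension
    return int(logits.argmax(dim=1))
```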

3.5. Combined Gaze-Tracking Algorithm

After estimating the coarse gaze positions, we aimed to develop a geometric model between the eye image and the near-eye viewing device that can accurately and efficiently calculate the gaze positions. In Section 3.3, we explained that participants need to perform an initial calibration when using Etracker for the first time. From the calibration data, we used the CNNs to locate the coarse gaze positions and to discard eye images with blinking. Next, we established an accurate gaze-tracking model based on the geometric relationship between eye positions and gaze points. As shown in Figure 7, the black circle represents the human eye model, the red circles represent the pupil positions at different gaze positions, and the green circle and point show the range of the coarse gaze position estimated by the CNNs described in Section 3.4.
Figure 7. Coarse gaze position estimated by CNNs.
Due to nystagmus and light changes, the eye centers in the continuously recorded images differ even when the participant maintains the same gaze position. Using only CNNs for accurate gaze tracking would require a large amount of training data and a long training time. Therefore, during the initial calibration step, we collected multiple eye images and used our previous work [41] to detect the eye centers $C_i(x_j, y_j)$. We then computed the mean value $MC_i(x, y)$ of these eye center positions, where $i$ denotes the $i$-th ($i = 1, \ldots, 9$) calibration mark and $j$ denotes the $j$-th ($j = 1, \ldots, 60$) eye image. We also used a threshold $T$ on the Euclidean distance from $C_i(x_j, y_j)$ to $MC_i(x, y)$ to remove points with large deviations from the mean:

$$C_i(x_m, y_m) = \begin{cases} \text{omitted}, & \text{if } \left\lVert C_i(x_j, y_j) - MC_i(x, y) \right\rVert > T,\\ C_i(x_j, y_j), & \text{otherwise.} \end{cases}$$
Finally, we recalculated the mean value $MC_i(x, y)$ of the remaining eye centers $C_i(x_m, y_m)$. As shown in Figure 8, the red circles represent the eye centers when the participant gazed at the same calibration mark, the green circles represent misdetections of the eye center due to blinking, and the blue cross represents the geometric center, calculated as the mean value of the coordinates.
Figure 8. Eye center positions when the participant gazed at the same calibration mark.
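A minimal NumPy sketch of this eye-center filtering step, for a single calibration mark, is given below; the concrete value of the threshold $T$ is not reported in the paper and is treated here as a tunable parameter.

```python
import numpy as np

def stable_eye_center(centers, T):
    """Mean eye center for one calibration mark, with outlier rejection.

    centers: (N, 2) array of detected eye centers C_i(x_j, y_j) for one mark
    T:       distance threshold in pixels (assumed value, chosen empirically)
    """
    centers = np.asarray(centers, dtype=float)
    mc = centers.mean(axis=0)                        # initial mean MC_i(x, y)
    dist = np.linalg.norm(centers - mc, axis=1)      # Euclidean distance to the mean
    kept = centers[dist <= T]                        # omit points farther than T
    if len(kept) == 0:                               # degenerate case: keep the initial mean
        return mc
    return kept.mean(axis=0)                         # recomputed mean of the survivors
```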
Figure 9 shows the geometric model between human eye centers and coarse gaze positions; we established accurate gaze-tracking equations based on their geometric relations. The red circles in the ‘eye moveable region’ represent the geometric eye centers, and the red circles in the ‘display region’ represent the coarse gaze positions output by the CNNs. According to the structural characteristics of human eyes, when a participant observes an object, the eyes have a larger field of view in the horizontal direction than in the vertical direction; hence, the eye moves a greater distance horizontally than vertically. We therefore established separate gaze-tracking equations for the horizontal and vertical directions:
$$\begin{aligned} D_i(x) &= a\,MC_i(x)^2 + b\,MC_i(y)^2 + c\,MC_i(x) + d\,MC_i(y) + e\,MC_i(x)\,MC_i(y) + f,\\ D_i(y) &= g\,MC_i(x)^2 + h\,MC_i(y)^2 + i\,MC_i(x) + j\,MC_i(y) + k\,MC_i(x)\,MC_i(y) + l, \end{aligned}$$
where $MC_i(x, y)$ ($i = 1, \ldots, 9$) is the mean eye center position and $D_i(x, y)$ is the corresponding coarse gaze position. We then used the coordinate correspondence between the eye centers and the coarse gaze positions to calculate the equations’ parameters, i.e., $a, b, c, \ldots, l$. Finally, we only need to detect the eye centers and apply the geometric model to track the participant’s gaze accurately and automatically.
Figure 9. The geometric model of human eye centers and coarse gaze positions.
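The paper does not state how the twelve coefficients a–l are obtained; a natural choice, sketched below, is an ordinary least-squares fit over the nine calibration pairs, which is what this illustration assumes.

```python
import numpy as np

def fit_gaze_polynomial(eye_centers, gaze_points):
    """Least-squares fit of the second-order mapping in the equations above.

    eye_centers: (9, 2) mean eye centers MC_i(x, y), one per calibration mark
    gaze_points: (9, 2) corresponding coarse gaze positions D_i(x, y) on the display
    Returns the six coefficients (a..f) for D(x) and the six (g..l) for D(y).
    """
    eye_centers = np.asarray(eye_centers, dtype=float)
    gaze_points = np.asarray(gaze_points, dtype=float)
    x, y = eye_centers[:, 0], eye_centers[:, 1]
    # Design matrix with the terms [x^2, y^2, x, y, x*y, 1]
    A = np.column_stack([x**2, y**2, x, y, x * y, np.ones_like(x)])
    coef_x, *_ = np.linalg.lstsq(A, gaze_points[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, gaze_points[:, 1], rcond=None)
    return coef_x, coef_y

def predict_gaze(coef_x, coef_y, eye_center):
    """Map a detected eye center to a gaze position in display coordinates."""
    x, y = eye_center
    feats = np.array([x**2, y**2, x, y, x * y, 1.0])
    return float(feats @ coef_x), float(feats @ coef_y)
```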

4. Results

4.1. Dataset Collection

To verify the stability and performance of Etracker, we collected data from 26 participants in a challenging real-world environment. The participants were aged between 20 and 29 years and included both males and females. Each participant performed five tests: one calibration test, two non-calibration tests to generate the training dataset, and two further non-calibration tests to generate the testing dataset. The participants were asked to take Etracker off and put it back on after each test. Some sample images are shown in Figure 10.
Figure 10. Samples of images in the dataset.

4.2. Gaze Tracking Results

In our experiments, a gaze-tracking model was established when each participant first used Etracker. The participants then repeatedly gazed at the calibration marks four times, and the established gaze-tracking model was used to predict their gaze positions. Some examples are shown in Figure 11, where the red circles represent the calibration marks and the blue crosses are the gaze positions estimated by Etracker. The results show that the distance errors between the predicted gaze positions and the ground-truth values are small.
Figure 11. Estimated gaze positions based on nine calibration marks.
In order to further illustrate the accuracy of the proposed gaze-tracking system, we use the angle $\alpha$ to measure the error between the estimated gaze position and the ground truth, as shown in Figure 12 and Equation (3). The distance from the images displayed in the near-eye viewing device to the participant’s eye is approximately $H = 50$ cm:

$$\tan \alpha = \frac{h}{H} \tag{3}$$

where $h$ is the distance on the virtual image plane between the estimated gaze position and the ground truth.
Figure 12. Error calculation model.
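For intuition, the reported angular accuracy can be converted back into a displacement on the virtual image plane using Equation (3):

$$h = H \tan\alpha \approx 50\ \text{cm} \times \tan(0.53^\circ) \approx 0.46\ \text{cm},$$

i.e., an average error of 0.53° corresponds to roughly 4.6 mm on the virtual image displayed about 0.5 m from the eye.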
We randomly selected eight participants from our dataset. Table 4 and Table 5 show the accuracy of gaze estimation using only the CNNs and using the combined gaze-tracking algorithm, respectively. The results show that the Etracker gaze accuracy is approximately 0.74° when only the CNNs are used for gaze estimation. With the combined gaze-tracking algorithm, i.e., the CNNs plus the geometrical model, the gaze accuracy improves significantly to around 0.54°. We also note that, for the CNNs method, gaze position 5 is more accurate than gaze positions 1, 3, 7, and 9 (Table 4). This is because gaze position 5 is at the center of the near-eye viewing device, where the participant’s pupil exhibits less distortion.
Table 4. The errors in coarse gaze detection when using only the CNNs method (unit: °).
Table 5. The errors in accurate gaze detection by combined gaze-tracking algorithm (unit: °).
We also calculated the errors of the estimated gaze positions in the horizontal and vertical directions, reported in Table 6. The results show that the gaze accuracy in the horizontal direction is higher than in the vertical direction. Because our infrared camera captures images from below the human eye (see Figure 2), horizontal displacements are easier to detect than vertical ones.
Table 6. Gaze-tracking errors in the horizontal and vertical directions for the combined gaze-tracking algorithm.
As shown in Table 7, we randomly selected eight participants to perform the initial calibration and then recorded the results of four repeated uses of Etracker without recalibration. We found that, even across multiple tests with different participants, there was no significant increase in the tracking error; the average gaze-tracking error of the proposed device remained around 0.54°. This shows that our proposed system is calibration-free once the initial calibration has been done.
Table 7. The errors of repeated use of Etracker with only one initial calibration (unit: °).
Figure 13 shows the gaze-tracking accuracy of the 26 participants obtained with the combined gaze-tracking algorithm. The results show that the average gaze detection accuracy is approximately 0.53°.
Figure 13. Gaze-tracking accuracy of different participants.
In our research, we built a geometric model to locate the accurate gaze positions based on the CNNs outputs (Section 3.5). Figure 14 shows some qualitative results of eye center detection using our system and Świrski’s method [32]. Świrski’s approach first performs edge detection, then randomly selects feature points on the pupil edge and uses an elliptic equation to fit the pupil edge and center. However, it fails when the pupil region is affected by external light, as in the last image. This shows that our system can successfully detect eye locations even when the pupil region is blurred. The highly precise and robust eye detection algorithm ensures that the geometric model can accurately predict the participant’s gaze positions. On our dataset, the eye center detection accuracy is 98%.
Figure 14. Some results of eye location compared to the state-of-the-art method.
A comparison between the proposed Etracker and state-of-the-art gaze-tracking devices is presented in Table 8. Note that, even when we use only CNNs for coarse gaze position estimation, the tracking accuracy is significantly better than those reported by Mayberry et al. [42], Tonsen et al. [26], and Borsato et al. [24]. Although the tracking speed of our method (60 Hz) is lower than that of Borsato et al. (1000 Hz), our device is small and highly integrated, which means it can easily be used in practical applications. Compared to the binocular gaze-tracking device proposed by Kassner et al. [9], our system requires only one IR camera, and the system cost is lower.
Table 8. Comparison with existing gaze-tracking systems.
Etracker achieves the best tracking accuracy (0.53°) among the compared gaze-tracking systems, and its gaze-tracking speed meets the requirements of daily applications. Furthermore, compared with Krafka’s system [15], in which only rough gaze positions can be detected, our system can track any participant’s gaze positions in real time, as shown in Figure 15. The participant was asked to gaze at the pen tip displayed in the near-eye viewing device. The red cross represents the participant’s gaze position calculated by Etracker, and the results show that our proposed gaze-tracking system can accurately locate the participant’s gaze position.
Figure 15. Some results of real-time gaze tracking.

4.3. Further Discussion

The experimental results demonstrate that the proposed Etracker gaze-tracking system achieves satisfactory gaze-tracking accuracy. Eye blinking is an important issue affecting the accuracy of gaze tracking; it leads to incorrect gaze data and makes the absolute orientation unusable. Most current gaze-tracking systems do not take blinking into account and only verify their devices in an ideal environment. Our proposed system addresses this problem well: we use the CNNs jointly with a geometric model for accurate gaze tracking. With the CNNs learning-based model, we can not only remove the erroneous images caused by eye blinking and nystagmus, but also predict the coarse gaze positions. After the initial calibration, participants do not need to recalibrate for later use, which makes our gaze-tracking system more user-friendly. In contrast to some existing gaze-tracking devices that can only track specific gaze positions, e.g., the nine calibration marks, our system can detect any gaze position within the participant’s field of view in real time.

5. Conclusions

In this paper, we proposed a mobile gaze-tracking system, Etracker, with a near-eye viewing device. We adopted a low-cost micro lens infrared camera to record a participant’s eye images in real time. The size and cost of the proposed gaze-tracking system are greatly reduced with this design. We do not need large monitors or a calibration board during the calibration step, both of which are required in traditional gaze-tracking systems.
In order to further improve the accuracy of the proposed gaze-tracking system, we established a combined gaze-tracking model based on CNNs and geometrical features. We fine-tuned a pre-trained ResNet-101 and used this network to remove faulty eye images, such as those caused by blinking, and to predict coarse gaze positions. In order to locate the gaze positions more accurately, we built a geometrical model based on the positional relationship between the participant’s eye centers and the near-eye viewing device. The mean value of the eye-center coordinates was used to locate stable eye centers, so that only an initial calibration is needed for an individual user. The experimental results from 26 participants showed that the proposed gaze-tracking algorithm enables the system to reach a gaze accuracy of 0.53°.
In future work, we hope to improve the gaze accuracy of Etracker by designing a binocular gaze-tracking system. We also plan to collect more participant data to further improve the accuracy and robustness of the gaze-tracking system.

Author Contributions

B.L. designed the Etracker gaze-tracking system and proposed the gaze-tracking algorithm based on CNN and a geometrical model. D.W., H.F., and W.L. helped to revise the experimental methods and article structure.

Funding

The work presented in this paper was fully supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Reference No.: UGC/FDS13/E01/17).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lopez-Basterretxea, A.; Mendez-Zorrilla, A.; Garcia-Zapirain, B. Eye/head tracking technology to improve HCI with iPad applications. Sensors 2015, 15, 2244–2264. [Google Scholar] [CrossRef] [PubMed]
  2. Liu, C.H.; Chang, P.Y.; Huang, C.Y. Using eye-tracking and support vector machine to measure learning attention in elearning. Appl. Mech. Mater. 2013, 311, 9–14. [Google Scholar] [CrossRef]
  3. Ahlstrom, C.; Kircher, K.; Kircher, A. A Gaze-Based Driver Distraction Warning System and Its Effect on Visual Behavior. IEEE Trans. Intell. Transp. Syst. 2013, 14, 965–973. [Google Scholar] [CrossRef]
  4. Chen, Z.; Fu, H.; Lo, W.L.; Chi, Z. Strabismus Recognition Using Eye-tracking Data and Convolutional Neural Networks. J. Healthc. Eng. 2018, 2018, 7692198. [Google Scholar] [CrossRef]
  5. Li, R.; Li, B.; Zhang, S.; Fu, H.; Lo, W.; Yu, J.; Sit, C.H.P.; Wen, D. Evaluation of the fine motor skills of children with DCD using the digitalised visual-motor tracking system. J. Eng. 2018, 2018, 123–129. [Google Scholar] [CrossRef]
  6. Gwon, S.Y.; Cho, C.W.; Lee, H.C.; Lee, W.O.; Park, K.R. Gaze tracking system for user wearing glasses. Sensors 2014, 14, 2110–2134. [Google Scholar] [CrossRef] [PubMed]
  7. Biswas, P.; Langdon, P. Multimodal intelligent eye-gaze tracking system. Int. J. Hum. Comput. Interact. 2015, 31, 277–294. [Google Scholar] [CrossRef]
  8. Kocejko, T.; Bujnowski, A.; Wtorek, J. Eye Mouse for Disabled. In Proceedings of the Conference on Human System Interactions, Krakow, Poland, 25–27 May 2008; pp. 199–202. [Google Scholar]
  9. Kassner, M.; Patera, W.; Bulling, A. Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction. In Proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing: Adjunct publication, Seattle, WA, USA, 13–17 September 2014; ACM: New York, NY, USA, 2014; pp. 1151–1160. [Google Scholar]
  10. Su, M.-C.; Wang, K.-C.; Chen, G.-D. An Eye Tracking System and Its Application in Aids for People with Severe Disabilities. Biomed. Eng. Appl. Basis Commun. 2006, 18, 319–327. [Google Scholar] [CrossRef]
  11. Lee, H.C.; Lee, W.O.; Cho, C.W.; Gwon, S.Y.; Park, K.R.; Lee, H.; Cha, J. Remote Gaze Tracking System on a Large Display. Sensors 2013, 13, 13439–13463. [Google Scholar] [CrossRef] [PubMed]
  12. Naqvi, R.A.; Arsalan, M.; Batchuluun, G.; Yoon, H.S.; Park, K.R. Deep learning-based gaze detection system for automobile drivers using a NIR camera sensor. Sensors 2018, 18, 456. [Google Scholar] [CrossRef] [PubMed]
  13. Kazemi, V.; Josephine, S. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, 24–27 June 2014; pp. 1867–1874. [Google Scholar]
  14. Kim, K.W.; Hong, H.G.; Nam, G.P.; Park, K.R. A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor. Sensors 2017, 17, 1534. [Google Scholar] [CrossRef] [PubMed]
  15. Krafka, K.; Khosla, A.; Kellnhofer, P.; Kannan, H.; Bhandarkar, S.; Matusik, W.; Torralba, A. Eye tracking for everyone. arXiv, 2016; arXiv:1606.05814. [Google Scholar]
  16. Cerrolaza, J.J.; Villanueva, A.; Cabeza, R. Taxonomic study of polynomial regressions applied to the calibration of video-oculographic systems. In Proceedings of the 2008 symposium on Eye Tracking Research and Applications, Savannah, GA, USA, 26–28 March 2008; pp. 259–266. [Google Scholar]
  17. Tawari, A.; Chen, K.H.; Trivedi, M.M. Where is the driver looking: Analysis of head, eye and iris for robust gaze zone estimation. In Proceedings of the IEEE International Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014; pp. 988–994. [Google Scholar]
  18. Jung, D.; Lee, J.M.; Gwon, S.Y.; Pan, W.; Lee, H.C.; Park, K.R.; Kim, H.-C. Compensation method of natural head movement for gaze tracking system using an ultrasonic sensor for distance measurement. Sensors 2016, 16, 110. [Google Scholar] [CrossRef] [PubMed]
  19. Pan, W.; Jung, D.; Yoon, H.S.; Lee, D.E.; Naqvi, R.A.; Lee, K.W.; Park, K.R. Empirical study on designing of gaze tracking camera based on the information of user’s head movement. Sensors 2016, 16, 1396. [Google Scholar] [CrossRef] [PubMed]
  20. Vora, S.; Rangesh, A.; Trivedi, M.M. On generalizing driver gaze zone estimation using convolutional neural networks. In Proceedings of the IEEE Intelligent Vehicles Symposium, Redondo Beach, CA, USA, 11–14 June 2017; pp. 849–854. [Google Scholar]
  21. Galante, A.; Menezes, P. A Gaze-Based Interaction System for People with Cerebral Palsy. Procedia Technol. 2012, 5, 895–902. [Google Scholar] [CrossRef]
  22. Pires, B.R.; Devyver, M.; Tsukada, A.; Kanade, T. Unwrapping the eye for visible-spectrum gaze tracking on wearable devices. In Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision (WACV), Tampa, FL, USA, 15–17 January 2013; pp. 369–376. [Google Scholar]
  23. Plopski, A.; Nitschke, C.; Kiyokawa, K.; Schmalstieg, D.; Takemura, H. Hybrid Eye Tracking: Combining Iris Contour and Corneal Imaging. In Proceedings of the 25th International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Kyoto, Japan, 28–30 October 2015; pp. 183–190. [Google Scholar]
  24. Borsato, F.H.; Morimoto, C.H. Episcleral surface tracking: Challenges and possibilities for using mice sensors for wearable eye tracking. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, Charleston, WV, USA, 14–17 March 2016; ACM: New York, NY, USA, 2016; pp. 39–46. [Google Scholar]
  25. Topal, C.; Gunal, S.; Koçdeviren, O.; Doğan, A.; Gerek, Ö.N. A low-computational approach on gaze estimation with eye touch system. IEEE Trans. Cybern. 2014, 44, 228–239. [Google Scholar] [CrossRef] [PubMed]
  26. Tonsen, M.; Steil, J.; Sugano, Y.; Bulling, A. InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2017, 1, 106. [Google Scholar] [CrossRef]
  27. Kocejko, T.; Ruminski, J.; Wtorek, J.; Martin, B. Eye tracking within near-to-eye display. In Proceedings of the 2015 IEEE 8th International Conference on Human System Interaction (HSI), Warsaw, Poland, 25–27 June 2015; pp. 166–172. [Google Scholar]
  28. Wang, J.; Zhang, G.; Shi, J. 2D gaze estimation based on pupil-glint vector using an artificial neural network. Appl. Sci. Basel 2016, 6, 174. [Google Scholar] [CrossRef]
  29. Valenti, R.; Gevers, T. Accurate eye center location through invariant isocentric patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1785–1798. [Google Scholar] [CrossRef] [PubMed]
  30. Markus, N.; Frljaka, M.; Pandzia, I.S.; Ahlbergb, J.; Forchheimer, R. Eye pupil localization with an ensemble of randomized trees. Pattern Recognit. 2014, 47, 578–587. [Google Scholar] [CrossRef]
  31. Timm, F.; Barth, E. Accurate eye centre localisation by means of gradients. In Proceedings of the International Conference on Computer Vision Theory and Applications, Vilamoura, Portugal, 5–7 March 2011; Volume 11, pp. 125–130. [Google Scholar]
  32. Świrski, L.; Bulling, A.; Dodgson, N. Robust real-time pupil tracking in highly off-axis images. In Proceedings of the Symposium on Eye Tracking Research and Applications, Santa Barbara, CA, USA, 28–30 March 2012. [Google Scholar]
  33. Araujo, G.M.; Ribeiro, F.M.L.; Silva, E.A.B.; Goldenstein, S.K. Fast eye localization without a face model using inner product detectors. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 1366–1370. [Google Scholar]
  34. Borza, D.; Darabant, A.S.; Danescu, R. Real-Time Detection and Measurement of Eye Features from Color Images. Sensors 2016, 16, 1105. [Google Scholar] [CrossRef] [PubMed]
  35. Fuhl, W.; Kübler, T.; Sippel, K.; Rosenstiel, W.; Kasneci, E. Excuse: Robust pupil detection in real-world scenarios. In Proceedings of the 16th International Conference on Computer Analysis of Images and Patterns (CAIP), Valletta, Malta, 2–4 September 2015; pp. 39–51. [Google Scholar]
  36. Fuhl, W.; Santini, T.; Kasneci, G.; Kasneci, E. PupilNet: Convolutional neural networks for robust pupil detection. arXiv, 2016; arXiv:1601.04902. [Google Scholar]
  37. Amos, B.; Ludwiczuk, B.; Satyanarayanan, M. Openface: A General-Purpose Face Recognition Library with Mobile Applications; CMU School of Computer Science, Carnegie Mellon University: Pittsburgh, PA, USA, 2016. [Google Scholar]
  38. Gou, C.; Wu, Y.; Wang, K.; Wang, K.; Wang, F.Y.; Ji, Q. A joint cascaded framework for simultaneous eye detection and eye state estimation. Pattern Recognit. 2017, 67, 23–31. [Google Scholar] [CrossRef]
  39. Sharma, R.; Savakis, A. Lean histogram of oriented gradients features for effective eye detection. J. Electron. Imaging 2015, 24, 063007. [Google Scholar] [CrossRef]
  40. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  41. Li, B.; Fu, H. Real time eye detector with cascaded Convolutional Neural Networks. Appl. Comput. Intell. Soft Comput. 2018, 2018, 1439312. [Google Scholar] [CrossRef]
  42. Mayberry, A.; Hu, P.; Marlin, B.; Salthouse, C.; Ganesan, D. iShadow: Design of a wearable, real-time mobile gaze tracker. In Proceedings of the 12th annual international conference on Mobile systems, applications, and services, Bretton Woods, NH, USA, 16–19 June 2014; ACM: New York, NY, USA, 2014; pp. 82–94. [Google Scholar]
