
Eye Tracking Sensors Data Analysis with Deep Learning Methods

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 6288

Special Issue Editor


Guest Editor
Department of Applied Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
Interests: eye tracking; eye movement; data mining; machine learning; artificial intelligence

Special Issue Information

Dear Colleagues,

Eye tracking has many applications, including psychology, cognitive science, neurology, ophthalmology, marketing, and human–computer interfaces. There are many different eye-tracking sensors measuring video signals, electric potential, light reflection, or coil movement. For all these techniques, the obtained data must be interpreted appropriately to be usable. This interpretation is frequently complicated and includes many aspects: pupil detection, calibration, event detection, saliency analysis, and cognitive analysis. That is why there is growing interest in utilizing deep learning methods for eye movement data processing. The aim of this Special Issue is to gather different applications of deep learning and machine learning techniques that may be used for data obtained from eye tracking sensors.

We welcome papers that present novel machine learning methods for eye movement data analysis in different areas, including (but not limited to):

  • data acquisition (feature-based and appearance-based methods);
  • calibration (explicit and implicit);
  • event detection (such as fixations and saccades);
  • analysis of the processed signal to detect saliency;
  • finding differences among people;
  • finding differences depending on stimuli;
  • analyzing the influence of other properties such as tiredness or anxiety.
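To make the calibration topic above concrete, a classic explicit calibration maps eye-feature coordinates to screen coordinates with a low-order polynomial fitted on a handful of known calibration points. The sketch below is illustrative only: the coordinate ranges, grid size, and ground-truth mapping are invented for the example.

```python
import numpy as np

def fit_calibration(eye_xy, screen_xy):
    """Fit a second-order polynomial mapping from eye-feature
    coordinates to screen coordinates (classic explicit calibration)."""
    x, y = eye_xy[:, 0], eye_xy[:, 1]
    # Design matrix with terms [1, x, y, x*y, x^2, y^2]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    # Least-squares fit; one coefficient column per screen axis
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coeffs

def apply_calibration(eye_xy, coeffs):
    """Map eye-feature coordinates to screen coordinates."""
    x, y = eye_xy[:, 0], eye_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return A @ coeffs

# Toy usage: a 9-point calibration set with a known linear mapping
rng = np.random.default_rng(0)
eye = rng.uniform(-1, 1, size=(9, 2))
screen = 960 + 500 * eye  # pretend ground truth for the example
c = fit_calibration(eye, screen)
pred = apply_calibration(eye, c)
print(np.abs(pred - screen).max())  # near zero for this noiseless toy
```

Real calibrations replace the invented mapping with fixations on displayed targets, and implicit variants estimate the same coefficients from natural viewing behaviour instead of a dedicated procedure.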

We also welcome review papers that summarize work in specific areas and give recommendations for their further development.

Dr. Pawel Kasprowski
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • eye tracking
  • deep learning
  • machine learning
  • eye movement

Published Papers (2 papers)


Research


17 pages, 669 KiB  
Article
Person-Specific Gaze Estimation from Low-Quality Webcam Images
by Mohd Faizan Ansari, Pawel Kasprowski and Peter Peer
Sensors 2023, 23(8), 4138; https://doi.org/10.3390/s23084138 - 20 Apr 2023
Cited by 1 | Viewed by 1728
Abstract
Gaze estimation is an established research problem in computer vision. It has various applications in real life, from human–computer interactions to health care and virtual reality, making it more viable for the research community. Due to the significant success of deep learning techniques in other computer vision tasks—for example, image classification, object detection, object segmentation, and object tracking—deep learning-based gaze estimation has also received more attention in recent years. This paper uses a convolutional neural network (CNN) for person-specific gaze estimation. The person-specific gaze estimation utilizes a single model trained for one individual user, contrary to the commonly-used generalized models trained on multiple people’s data. We utilized only low-quality images directly collected from a standard desktop webcam, so our method can be applied to any computer system equipped with such a camera without additional hardware requirements. First, we used the web camera to collect a dataset of face and eye images. Then, we tested different combinations of CNN parameters, including the learning and dropout rates. Our findings show that building a person-specific eye-tracking model produces better results with a selection of good hyperparameters when compared to universal models that are trained on multiple users’ data. In particular, we achieved the best results for the left eye with 38.20 MAE (Mean Absolute Error) in pixels, the right eye with 36.01 MAE, both eyes combined with 51.18 MAE, and the whole face with 30.09 MAE, which is equivalent to approximately 1.45 degrees for the left eye, 1.37 degrees for the right eye, 1.98 degrees for both eyes combined, and 1.14 degrees for full-face images.
(This article belongs to the Special Issue Eye Tracking Sensors Data Analysis with Deep Learning Methods)
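The pixel-to-degree equivalence quoted in the abstract follows from the standard visual-angle formula, which depends on the screen's pixel pitch and the viewing distance. The sketch below uses hypothetical geometry values (0.25 mm pixel pitch, 600 mm viewing distance), not the setup reported in the paper, so the resulting angle differs from the paper's figures.

```python
import math

def px_error_to_degrees(err_px, px_pitch_mm, distance_mm):
    """Convert an on-screen error in pixels to visual angle in degrees,
    using the visual-angle formula theta = 2 * atan(s / (2 * d))."""
    s = err_px * px_pitch_mm  # error size on the screen in mm
    return math.degrees(2 * math.atan(s / (2 * distance_mm)))

# Hypothetical geometry: 0.25 mm pixel pitch, 60 cm viewing distance
print(round(px_error_to_degrees(38.20, 0.25, 600), 2))  # ≈ 0.91 degrees
```

For small errors the formula is nearly linear in the pixel error, which is why gaze-estimation papers can quote a single degrees-per-pixel conversion for a fixed setup.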

Review


18 pages, 3291 KiB  
Review
Review and Evaluation of Eye Movement Event Detection Algorithms
by Birtukan Birawo and Pawel Kasprowski
Sensors 2022, 22(22), 8810; https://doi.org/10.3390/s22228810 - 15 Nov 2022
Cited by 8 | Viewed by 3781
Abstract
Eye tracking is a technology aimed at understanding the direction of the human gaze. Event detection is a process of detecting and classifying eye movements that are divided into several types. Nowadays, event detection is almost exclusively done by applying a detection algorithm to the raw recorded eye-tracking data. However, due to the lack of a standard procedure for how to perform evaluations, evaluating and comparing various detection algorithms in eye-tracking signals is very challenging. In this paper, we used data from a high-speed eye-tracker SMI HiSpeed 1250 system and compared event detection performance. The evaluation focused on fixations, saccades and post-saccadic oscillation classification. It used sample-by-sample comparisons to compare the algorithms and inter-agreement between algorithms and human coders. The impact of varying threshold values on threshold-based algorithms was examined and the optimum threshold values were determined. This evaluation differed from previous evaluations by using the same dataset to evaluate the event detection algorithms and human coders. We evaluated and compared the different algorithms from threshold-based, machine learning-based and deep learning event detection algorithms. The evaluation results show that all methods perform well for fixation and saccade detection; however, there are substantial differences in classification results. Generally, CNN (Convolutional Neural Network) and RF (Random Forest) algorithms outperform threshold-based methods.
(This article belongs to the Special Issue Eye Tracking Sensors Data Analysis with Deep Learning Methods)
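As a point of reference for the threshold-based algorithms evaluated in the paper, the classic velocity-threshold (I-VT) scheme can be sketched in a few lines: samples whose angular velocity exceeds a threshold are labelled saccades, the rest fixations. The sampling rate, threshold, and pixel-to-degree factor below are illustrative assumptions, not values from the study.

```python
import numpy as np

def ivt_classify(gaze_px, sample_rate_hz, threshold_deg_s, deg_per_px):
    """Velocity-threshold (I-VT) event detection: label each sample
    'S' (saccade) if its angular velocity exceeds the threshold,
    otherwise 'F' (fixation)."""
    # Sample-to-sample displacement in pixels
    d = np.linalg.norm(np.diff(gaze_px, axis=0), axis=1)
    # Angular velocity in degrees/second (small-angle approximation)
    vel = d * deg_per_px * sample_rate_hz
    labels = np.where(vel > threshold_deg_s, "S", "F")
    # Duplicate the first label so output aligns with input samples
    return np.concatenate([labels[:1], labels])

# Toy signal: fixation, fast rightward jump (saccade), fixation
g = np.array([[100, 100]] * 50
             + [[100 + 40 * i, 100] for i in range(1, 6)]
             + [[300, 100]] * 50, dtype=float)
lab = ivt_classify(g, sample_rate_hz=250, threshold_deg_s=30, deg_per_px=0.03)
print("".join(lab))  # a run of F, a short run of S, then F again
```

Real I-VT implementations add post-processing (merging nearby fixations, discarding too-short events), and the choice of threshold is exactly the sensitivity the paper's evaluation examines.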
