Special Issue "Eye Tracking Techniques, Applications, and Challenges"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 31 October 2021.

Special Issue Editors

Prof. Dr. Marco Porta
Guest Editor
Department of Electrical, Computer and Biomedical Engineering, Faculty of Engineering, University of Pavia, Via A. Ferrata 5, 27100 Pavia, Italy
Interests: vision-based perceptive interfaces; eye tracking; human-computer interaction
Dr. Pawel Kasprowski
Guest Editor
Department of Applied Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
Interests: eye-tracking; data mining; human-computer interaction; machine learning; signal processing
Dr. Luca Lombardi
Guest Editor
Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
Dr. Piercarlo Dondi
Guest Editor
Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Via Ferrata 5, 27100, Pavia, Italy
Interests: computer vision; human-computer interaction; 3D modelling; digital humanities

Special Issue Information

Dear Colleagues,

Eye tracking technology is becoming increasingly widespread, thanks to the recent availability of cheap commercial devices (such as gaze detection sensors). At the same time, novel techniques are continuously being developed to improve gaze detection precision, and new ways to fully exploit the potential of eye data are continuously being explored.

The number of potential applications of eye tracking is also growing constantly, including health care (both medical diagnosis and treatment), human-computer interaction, user behavior understanding, psychology, biometric identification, education, and many more.

The purpose of this Special Issue is to present recent advancements in eye tracking research, including advancements in eye signal acquisition using different optical and non-optical sensors and different ways of processing the eye movement signal. Any research that exploits eye tracking sensors, whether for human-computer interaction, user behavior understanding, biometrics, or other purposes, is also welcome.

The Special Issue will include extended versions of selected papers from ETTAC 2020, the first Workshop on Eye Tracking Techniques, Applications and Challenges, organized during the 25th International Conference on Pattern Recognition (ICPR 2020). We also solicit other contributions from researchers involved and interested in this research area.

Prof. Dr. Marco Porta
Dr. Pawel Kasprowski
Dr. Luca Lombardi
Dr. Piercarlo Dondi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Gaze detection
  • Human-computer interaction
  • Usability evaluation
  • Virtual and augmented reality
  • User behavior understanding
  • Biometrics, security and privacy
  • Medicine and health care
  • Gaze data visualization

Published Papers (11 papers)


Research

Article
Assessment of the Effect of Cleanliness on the Visual Inspection of Aircraft Engine Blades: An Eye Tracking Study
Sensors 2021, 21(18), 6135; https://doi.org/10.3390/s21186135 - 13 Sep 2021
Abstract
Background—The visual inspection of aircraft parts such as engine blades is crucial to ensure safe aircraft operation. There is a need to understand the reliability of such inspections and the factors that affect the results. In this study, the factor ‘cleanliness’ was analysed among other factors. Method—Fifty industry practitioners of three expertise levels inspected 24 images of parts with a variety of defects in clean and dirty conditions, resulting in a total of N = 1200 observations. The data were analysed statistically to evaluate the relationships between cleanliness and inspection performance. Eye tracking was applied to understand the search strategies of different levels of expertise for various part conditions. Results—The results show an inspection accuracy of 86.8% and 66.8% for clean and dirty blades, respectively. The statistical analysis showed that cleanliness and defect type influenced inspection accuracy, while expertise was surprisingly not a significant factor. In contrast, inspection time was affected by expertise along with other factors, including cleanliness, defect type and visual acuity. Eye tracking revealed that inspectors (experts) apply a more structured and systematic search with fewer fixations and revisits compared to other groups. Conclusions—Cleaning prior to inspection leads to better results. Eye tracking revealed that inspectors used an underlying search strategy characterised by edge detection and differentiation between surface deposits and other types of damage, which contributed to better performance.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)

Article
Biometric Identification Based on Eye Movement Dynamic Features
Sensors 2021, 21(18), 6020; https://doi.org/10.3390/s21186020 - 08 Sep 2021
Abstract
The paper presents studies on biometric identification methods based on the eye movement signal. New signal features were investigated for this purpose. They included its representation in the frequency domain and the largest Lyapunov exponent, which characterizes the dynamics of the eye movement signal seen as a nonlinear time series. These features, along with the velocities and accelerations used in previously conducted works, were determined for 100-ms eye movement segments. Twenty-four participants took part in the experiment, which was composed of two sessions. The users’ task was to observe a point appearing on the screen in 29 locations. The eye movement recordings for each point were used to create a feature vector in two variants: one vector for one point, and one vector including the signal for three consecutive locations. Two approaches for defining the training and test sets were applied. In the first one, 75% of randomly selected vectors were used as the training set, under the conditions of equal proportions for each participant in both sets and the disjointness of the training and test sets. Among four classifiers: kNN (k = 5), decision tree, naïve Bayes, and random forest, good classification performance was obtained for the decision tree and random forest. The efficiency of the last method reached 100%. The outcomes were much worse in the second scenario, when the training and test sets were defined based on recordings from different sessions; the possible reasons are discussed in the paper.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
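As an illustration of the kind of dynamic features described in the abstract, the sketch below splits a gaze recording into fixed-length segments and computes simple velocity, acceleration, and frequency-domain features per segment. This is a minimal, hypothetical Python/NumPy example: the sampling rate, segment length, and number of FFT bins are assumptions, and the paper's actual feature set (including the largest Lyapunov exponent) is more elaborate.

```python
import numpy as np

def segment_features(x, y, fs=1000, seg_ms=100):
    """Split a gaze recording (x, y in degrees) into fixed-length
    segments and compute, per segment, mean absolute velocity and
    acceleration plus a few low-order FFT magnitudes of the speed
    signal (a stand-in for the frequency-domain representation)."""
    n = int(fs * seg_ms / 1000)                      # samples per segment
    feats = []
    for s in range(0, len(x) - n + 1, n):
        xs, ys = x[s:s + n], y[s:s + n]
        vx, vy = np.diff(xs) * fs, np.diff(ys) * fs  # deg/s
        ax, ay = np.diff(vx) * fs, np.diff(vy) * fs  # deg/s^2
        speed = np.hypot(vx, vy)
        accel = np.hypot(ax, ay)
        spec = np.abs(np.fft.rfft(speed))[:5]        # first 5 frequency bins
        feats.append(np.concatenate(([speed.mean(), accel.mean()], spec)))
    return np.array(feats)
```

Each row of the returned array would be one segment's feature vector, which could then be concatenated across consecutive stimulus locations as the abstract describes.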

Article
Saliency-Based Gaze Visualization for Eye Movement Analysis
Sensors 2021, 21(15), 5178; https://doi.org/10.3390/s21155178 - 30 Jul 2021
Abstract
Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly show statistical analyses of eye movements and human visual attention. During these analyses, eye movement data and the saliency map are presented to the analysts as separate views or merged views. However, the analysts become frustrated when they need to memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. Therefore, it is not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior using saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data with the saliency features to interpret the visual attention. We analyze gaze behavior with the proposed visualization to show that embedding saliency features within the visualization helps analysts understand the visual attention of an observer.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)

Article
Low-Cost Eye Tracking Calibration: A Knowledge-Based Study
Sensors 2021, 21(15), 5109; https://doi.org/10.3390/s21155109 - 28 Jul 2021
Abstract
Subject calibration has been demonstrated to improve accuracy in high-performance eye trackers. However, the true weight of calibration in off-the-shelf eye tracking solutions is still not addressed. In this work, a theoretical framework to measure the effects of calibration in deep learning-based gaze estimation is proposed for low-resolution systems. To this end, features extracted from the synthetic U2Eyes dataset are used in a fully connected network in order to isolate the effect of specific user features, such as kappa angles. Then, the impact of system calibration in a real setup employing I2Head dataset images is studied. The obtained results show accuracy improvements over 50%, proving that calibration is a key process also in low-resolution gaze estimation scenarios. Furthermore, we show that, after calibration, accuracy values close to those obtained by high-resolution systems, in the range of 0.7°, could theoretically be obtained if a careful selection of image features was performed, demonstrating significant room for improvement for off-the-shelf eye tracking systems.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)

Article
OpenEDS2020 Challenge on Gaze Tracking for VR: Dataset and Results
Sensors 2021, 21(14), 4769; https://doi.org/10.3390/s21144769 - 13 Jul 2021
Abstract
This paper summarizes the OpenEDS 2020 Challenge dataset, the proposed baselines, and the results obtained by the top three winners of each competition: (1) the Gaze Prediction Challenge, with the goal of predicting the gaze vector 1 to 5 frames into the future based on a sequence of previous eye images, and (2) the Sparse Temporal Semantic Segmentation Challenge, with the goal of using temporal information to propagate semantic eye labels to contiguous eye image frames. Both competitions were based on the OpenEDS2020 dataset, a novel dataset of eye-image sequences captured at a frame rate of 100 Hz under controlled illumination, using a virtual-reality head-mounted display with two synchronized eye-facing cameras. The dataset, which we make publicly available for the research community, consists of 87 subjects performing several gaze-elicited tasks, and is divided into two subsets, one for each competition task. The proposed baselines, based on deep learning approaches, obtained an average angular error of 5.37 degrees for gaze prediction, and a mean intersection over union (mIoU) score of 84.1% for semantic segmentation. The winning solutions were able to outperform the baselines, obtaining up to 3.17 degrees for the former task and 95.2% mIoU for the latter.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
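The average angular error reported for the gaze-prediction task is the standard metric for comparing a predicted 3D gaze vector against ground truth. A minimal sketch of how such an error is typically computed (an illustrative implementation, not taken from the challenge code):

```python
import numpy as np

def angular_error_deg(pred, true):
    """Angular error in degrees between predicted and ground-truth
    3D gaze vectors (arrays of shape (..., 3)). Vectors are
    normalized, their dot product clipped to [-1, 1], and the
    arccos converted to degrees."""
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    true = true / np.linalg.norm(true, axis=-1, keepdims=True)
    cos = np.clip(np.sum(pred * true, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))
```

Averaging this quantity over all predicted frames yields a single score comparable to the 5.37-degree baseline mentioned above.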

Article
Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels
Sensors 2021, 21(14), 4686; https://doi.org/10.3390/s21144686 - 08 Jul 2021
Abstract
Many gaze data visualization techniques intuitively show eye movement together with visual stimuli. The eye tracker records a large number of eye movements within a short period. Therefore, visualizing raw gaze data with the visual stimulus appears complicated and obscured, making it difficult to gain insight through visualization. To avoid this complication, fixation identification algorithms are often employed for more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with the scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing. However, it is difficult to find out how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time manually adjusting the parameters of fixation identification algorithms. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare the velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and machine learning-based and behavior-based models on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
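The velocity-threshold identification (I-VT) algorithm compared in this paper is the simplest of the fixation identification family: samples whose point-to-point velocity falls below a threshold are labelled fixation samples, and runs of them are merged into fixation events. A minimal sketch (not the authors' implementation; the 30 deg/s threshold is a common but assumed default):

```python
import numpy as np

def ivt_fixations(x, y, t, velocity_threshold=30.0):
    """I-VT: label each sample fixation/saccade by comparing its
    point-to-point velocity (deg/s) to a threshold, then merge
    consecutive fixation samples into events reported as
    (start_time, end_time, centroid_x, centroid_y)."""
    v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)   # deg/s
    is_fix = np.concatenate(([True], v < velocity_threshold))
    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i                                   # fixation begins
        elif not f and start is not None:
            fixations.append((t[start], t[i - 1],
                              x[start:i].mean(), y[start:i].mean()))
            start = None                                # fixation ends
    if start is not None:                               # trailing fixation
        fixations.append((t[start], t[-1],
                          x[start:].mean(), y[start:].mean()))
    return fixations
```

Because the output changes sharply with the threshold, the downstream attention-map and scanpath visualizations change with it, which is exactly the sensitivity the paper investigates.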

Article
On the Improvement of Eye Tracking-Based Cognitive Workload Estimation Using Aggregation Functions
Sensors 2021, 21(13), 4542; https://doi.org/10.3390/s21134542 - 02 Jul 2021
Abstract
Cognitive workload, being a quantitative measure of mental effort, draws significant interest from researchers, as it makes it possible to monitor the state of mental fatigue. Estimation of cognitive workload becomes especially important for job positions requiring outstanding engagement and responsibility, e.g., air-traffic dispatchers, pilots, and car or train drivers. Cognitive workload estimation also finds applications in the preparation of educational materials, where it makes it possible to monitor the difficulty of specific tasks and adjust the level of the materials to the typical abilities of students. In this study, we present the results of research conducted with the goal of examining the influence of various fuzzy and non-fuzzy aggregation functions on the quality of cognitive workload estimation. Various classic machine learning models were successfully applied to the problem. The results of extensive in-depth experiments with over 2000 aggregation operators show the applicability of the approach based on aggregation functions. Moreover, the aggregation-based approach allows for further improvement of classification results. A wide range of aggregation functions is considered, and the results suggest that the combination of classical machine learning models and aggregation methods achieves high-quality cognitive workload level recognition while preserving low computational cost.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)

Communication
Ultrasound for Gaze Estimation—A Modeling and Empirical Study
Sensors 2021, 21(13), 4502; https://doi.org/10.3390/s21134502 - 30 Jun 2021
Abstract
Most eye tracking methods are light-based. As such, they can suffer from ambient light changes when used outdoors, especially for use cases where eye trackers are embedded in Augmented Reality glasses. It has recently been suggested that ultrasound could provide a low-power, fast, light-insensitive alternative to camera-based sensors for eye tracking. Here, we report on our work on modeling ultrasound sensor integration into a glasses-form-factor AR device to evaluate the feasibility of estimating eye gaze in various configurations. Next, we designed a benchtop experimental setup to collect empirical data on time-of-flight and amplitude signals for reflected ultrasound waves over a range of gaze angles of a model eye. We used these data as input for a low-complexity gradient-boosted tree machine learning regression model and demonstrate that we can effectively estimate gaze (gaze RMSE of 0.965 ± 0.178 degrees with an adjusted R² score of 90.2 ± 4.6).
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)

Article
Convolutional Neural Networks Cascade for Automatic Pupil and Iris Detection in Ocular Proton Therapy
Sensors 2021, 21(13), 4400; https://doi.org/10.3390/s21134400 - 27 Jun 2021
Cited by 1
Abstract
Eye tracking techniques based on deep learning are rapidly spreading in a wide variety of application fields. With this study, we want to exploit the potential of eye tracking techniques in ocular proton therapy (OPT) applications. We implemented a fully automatic approach based on two-stage convolutional neural networks (CNNs): the first stage roughly identifies the eye position, and the second one performs fine iris and pupil detection. We selected 707 video frames recorded during clinical operations in OPT treatments performed at our institute; 650 frames were used for training and 57 for a blind test. The estimations of iris and pupil were evaluated against the manually labelled contours delineated by a clinical operator. For iris and pupil predictions, the Dice coefficient (median = 0.94 and 0.97), Szymkiewicz–Simpson coefficient (median = 0.97 and 0.98), Intersection over Union coefficient (median = 0.88 and 0.94) and Hausdorff distance (median = 11.6 and 5.0 pixels) were quantified. Iris and pupil regions were found to be comparable to the manually labelled ground truths. Our proposed framework could provide an automatic approach to quantitatively evaluating pupil and iris misalignments, and it could be used as an additional support tool for clinical activity without in any way impacting the consolidated routine.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
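The Dice and Intersection-over-Union coefficients quoted above are standard overlap measures between a predicted segmentation mask and its manually labelled ground truth. A minimal sketch of both on binary masks (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a, b):
    """Intersection over Union between two binary masks: |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```

Both scores range from 0 (no overlap) to 1 (perfect overlap); Dice weights the intersection more heavily, which is why the reported Dice medians exceed the IoU medians for the same predictions.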

Article
Glimpse: A Gaze-Based Measure of Temporal Salience
Sensors 2021, 21(9), 3099; https://doi.org/10.3390/s21093099 - 29 Apr 2021
Cited by 1
Abstract
Temporal salience considers how visual attention varies over time. Although visual salience has been widely studied from a spatial perspective, its temporal dimension has been mostly ignored, despite arguably being of utmost importance for understanding the temporal evolution of attention on dynamic contents. To address this gap, we propose Glimpse, a novel measure to compute temporal salience based on the observer-spatio-temporal consistency of raw gaze data. The measure is conceptually simple, training-free, and provides a semantically meaningful quantification of visual attention over time. As an extension, we explored scoring algorithms to estimate temporal salience from spatial salience maps predicted with existing computational models. However, these approaches generally fall short when compared with our proposed gaze-based measure. Glimpse could serve as the basis for several downstream tasks such as segmentation or summarization of videos. Glimpse's software and data are publicly available.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
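To convey the intuition behind a consistency-based temporal salience measure, the sketch below scores each video frame by how tightly the gaze points of multiple observers cluster: frames where everyone looks at the same place score high. This is an illustrative dispersion-based proxy only, not Glimpse's published formulation.

```python
import numpy as np

def temporal_salience(gaze, eps=1e-9):
    """Per-frame salience from inter-observer gaze consistency.
    `gaze` has shape (n_frames, n_observers, 2): one (x, y) gaze
    point per observer per frame. Frames with low mean distance to
    the gaze centroid score high; scores are min-max normalized."""
    centroid = gaze.mean(axis=1, keepdims=True)              # (F, 1, 2)
    spread = np.linalg.norm(gaze - centroid, axis=2).mean(axis=1)
    score = 1.0 / (1.0 + spread)                             # high = consistent
    return (score - score.min()) / (score.max() - score.min() + eps)
```

A curve like this over time could then feed the downstream tasks the abstract mentions, such as selecting the most attended segments for video summarization.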

Article
Industrial Energy Assessment Training Effectiveness Evaluation: An Eye-Tracking Study
Sensors 2021, 21(5), 1584; https://doi.org/10.3390/s21051584 - 24 Feb 2021
Cited by 1
Abstract
It is essential to understand the effectiveness of any training program so that it can be improved accordingly. Various studies have applied standard metrics for the evaluation of visual behavior to recognize the areas of interest that attract individuals' attention, as there is a high correlation between attentional behavior and where one is focusing. However, our review of the literature suggests that studies applying eye-tracking technologies for training purposes are still limited, especially in the field of industrial energy assessment training. In this paper, the effectiveness of industrial energy assessment training was quantitatively evaluated by measuring the attentional allocation of trainees using eye-tracking technology. Moreover, this study identifies the areas that require more focus, based on evaluating the performance of subjects after receiving the training. Additionally, this research was conducted in a controlled environment to remove distractions that may be caused by environmental factors, so as to concentrate only on variables that influence the learning behavior of subjects. The experimental results showed that, after receiving the training, the subjects' performance in energy assessment was significantly improved in two areas (production, and recycling and waste management), and that the designed training program raised participants' ability to identify energy-saving opportunities to the knowledge level of experienced participants.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
