Article

Investigation of Camera-Free Eye-Tracking Glasses Compared to a Video-Based System

by Abdullah Zafar 1, Claudia Martin Calderon 1, Anne Marie Yeboah 2, Kristine Dalton 2, Elizabeth Irving 2 and Ewa Niechwiej-Szwedo 1,*

1 Department of Kinesiology & Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada
2 School of Optometry & Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
* Author to whom correspondence should be addressed.
Sensors 2023, 23(18), 7753; https://doi.org/10.3390/s23187753
Submission received: 15 July 2023 / Revised: 3 September 2023 / Accepted: 6 September 2023 / Published: 8 September 2023
(This article belongs to the Section Intelligent Sensors)

Abstract

Technological advances in eye-tracking have resulted in lightweight, portable solutions that are capable of capturing eye movements beyond laboratory settings. Eye-tracking devices have typically relied on heavier, video-based systems to detect pupil and corneal reflections. Advances in mobile eye-tracking technology could facilitate research and its application in ecological settings, allowing more traditional laboratory research methods to be modified and transferred to real-world scenarios. One recent technology, the AdHawk MindLink, introduced a novel camera-free system embedded in typical eyeglass frames. This paper evaluates the AdHawk MindLink by comparing its eye-tracking recordings with a research “gold standard”, the EyeLink II. By concurrently capturing data from both eyes, we compare the capability of each eye tracker to quantify metrics from fixation, saccade, and smooth pursuit tasks—typical elements in eye movement research—across a sample of 13 adults. The MindLink system was capable of capturing fixation stability within a radius of less than 0.5°, estimating horizontal saccade amplitudes with an accuracy of 0.04° ± 2.3°, vertical saccade amplitudes with an accuracy of 0.32° ± 2.3°, and smooth pursuit speeds with an accuracy of 0.5°/s to 3°/s, depending on the pursuit speed. While the performance of the MindLink system in measuring fixation stability, saccade amplitude, and smooth pursuit eye movements was slightly inferior to that of the video-based system, the MindLink provides sufficient gaze-tracking capabilities for dynamic settings and experiments.

1. Introduction

Eye movements have been widely used to probe sensorimotor and cognitive mechanisms, giving us deeper insight into human behavior and information processing [1,2], for example, attention [3,4,5], working memory [6,7], decision-making [8], and learning [9,10]. The neural control of eye movements engages multiple brain networks; thus, eye-tracking methodologies have been used to understand development and aging [11,12,13], to help with medical diagnostics [14,15], and to monitor disease progression [16]. More recently, eye tracking has been applied to study complex behaviors in natural environments [17], for example, navigation [18,19], occupational performance [20,21], and education [22,23].
Eye-tracking methodologies have evolved greatly over time mainly due to technological and computational advances. Cognolato, Atzori, and Müller (2018) outlined seven different eye-tracking methods: (1) electrooculogram (EOG), (2) electromagnetic methods (i.e., scleral search coils), (3) contact lenses, (4) limbus/iris–sclera boundary video-oculography, (5) pupil video-oculography, (6) pupil and corneal reflections video-oculography, and (7) dual Purkinje image corneal reflection video-oculography (see [24] for a complete review of each technique). EOG and search coils are two camera-free eye-tracking approaches currently available. Search coils provide the most accurate and precise recordings; however, this approach is the most invasive and restrictive because a copper wire coil is embedded in a contact lens and the recording must be completed while the participant is sitting in an oscillating magnetic field. The voltage recorded by the coils represents the orientation of the eyes in the magnetic field. On the other hand, EOG is less invasive because the signal, generated by changes in the electrical potential between the cornea and the retina when the eyes move, is recorded by electrodes placed on the outer canthi of the eyes. The main limitation of the EOG approach is lower accuracy in comparison to other systems [25].
The most frequently used technique with human participants is video-oculography, a non-invasive, video-based approach in which videos of the pupil and corneal reflection are recorded and processed to estimate eye position. Such systems are traditionally expensive and heavy, owing to the weight of the cameras and the frames needed to support them. Moreover, in order to achieve a high level of precision, these systems require users to restrict head motion and, therefore, lack portability or utility in more dynamic scenarios beyond constrained laboratory-based studies. In recent years, there has been an emergence of more mobile and portable gaze trackers that have allowed researchers to move into more dynamic and natural environments. However, these systems still rely on video-based technology, which limits the flexibility and processing power of these smaller (more portable) devices.
A new camera-free eye-tracking technology has emerged, which uses a compact microelectromechanical system (MEMS) to scan the eyes with low-powered beams. MEMS-based scanning allows gaze estimation without relying on the more resource-expensive (in computation and power consumption) image processing of video camera feeds. The MindLink system by AdHawk Microsystems is an example of such a camera-free system, where the entire MEMS-based tracking system is embedded into eyeglass frames, facilitating portability and utility beyond highly constrained laboratory experiments. While there is great potential for such technological advances to stimulate eye-movement research in ecological settings, a rigorous validation against the current research-grade, lab-based “gold standard” must be established.
To date, the only published evaluation of the signal quality from the MindLink system involved a comparison with the EyeLink II eye tracker, using the simultaneous data capture of a single eye per system [26]. The signal quality was mainly evaluated as the spatial accuracy and precision of the measured gaze position relative to a stimulus target. However, additional metrics are frequently used in research studies to characterize eye movements, such as saccade amplitude [27], fixation stability, peak velocity, and smooth pursuit velocity gain [28]. As such, the aim of the current work was to compare the paired eye movement recordings simultaneously captured from the EyeLink II and MindLink eye trackers in order to present a detailed analysis of the validity and capability of the MindLink system in measuring fixations, saccades, and smooth pursuit eye movements.

2. Materials and Methods

2.1. Eye-Tracking Devices

EyeLink II is a head-mounted, video-based eye tracker (SR Research Ltd., Ottawa, ON, Canada) capable of recording either monocular or binocular gaze position using pupil-only or pupil–corneal reflection tracking. Two eye-tracking cameras (one for each eye) are attached to an adjustable headband, which fits on the head of the user.
The AdHawk MindLink is a camera-free eye tracker that uses a microelectromechanical system (MEMS) to detect the binocular gaze position at a sampling frequency of up to 500 Hz. The MEMS is fit into an eyeglass frame and comprises an infrared scanner located behind each nosepiece along with five infrared sensors distributed around the inside of the eyeglass rim. The scanner emits a low-powered infrared beam across the eye, and the resulting corneal reflections are detected by the infrared sensors, which are then used to determine pupil position.
In the current study, the sampling frequency for both eye trackers was 250 Hz. Eye trackers were calibrated separately using a nine-point calibration grid following the instructions from the user’s guide for each device (Figure 1). Validation was also performed and accepted when the tracking error was less than 1 degree.

EyeLink-MindLink Signal Alignment

Temporal synchronization between the eye trackers was achieved using a TTL pulse sent from the Experiment Builder, which controlled the stimulus presentation. A TTL pulse was sent to an Arduino Uno at the start of each trial, and each pulse was time-stamped using custom Python scripts on the PC running the MindLink system. The time-stamps were used to split the MindLink recordings into trials corresponding to the EyeLink sample reports exported from the Experiment Builder. Gaze positions between eye trackers were then further aligned for each trial using the maximum of a normalized cross-correlation function between the gaze positions from each eye tracker. Trials where the cross-correlation between signals was less than 0.5 were excluded for poor tracking (13.5% of all trials).
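As a rough illustration, the alignment step can be sketched as follows. This is a minimal Python/NumPy reconstruction, not the authors' code; the function name, the equal-length trace assumption, and the roll-based shift are all illustrative:

```python
import numpy as np

def align_gaze_signals(el_gaze, ml_gaze, min_corr=0.5):
    """Align one trial of MindLink gaze to the EyeLink trace via the
    peak of the normalized cross-correlation; None on poor tracking."""
    # Z-score both traces so the zero-lag correlation value equals the
    # Pearson coefficient (partial-overlap lags are only approximate).
    el = (el_gaze - el_gaze.mean()) / el_gaze.std()
    ml = (ml_gaze - ml_gaze.mean()) / ml_gaze.std()

    xcorr = np.correlate(ml, el, mode="full") / len(el)
    lag = int(np.argmax(xcorr)) - (len(el) - 1)  # samples by which ML leads EL
    peak = float(xcorr.max())

    if peak < min_corr:        # exclusion criterion used in the study
        return None, peak
    # Shift the MindLink samples onto the EyeLink timeline (edge
    # wraparound is ignored in this sketch).
    return np.roll(ml_gaze, -lag), peak
```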

2.2. Participants

Thirteen participants were recruited from the University of Waterloo, Canada. All participants (4 males/9 females, mean age 23.5 ± 3.4 years) had normal or corrected-to-normal vision. They reported no neurological or musculoskeletal disorders. The study was reviewed and received ethics clearance through the University of Waterloo’s Office of Research Ethics Committee (ORE no. 43620). Written consent was obtained from all participants.

2.3. Stimulus and Tasks

All visual stimuli were generated using the Experiment Builder (version 2.3.38, SR Research) and presented on a 19-inch CRT monitor (Samsung SyncMaster; resolution 1024 × 768, 85 Hz refresh rate). Each participant was seated comfortably with their head stabilized in a chin rest. Participants completed all five tasks described below in a pseudo-randomized order (see Figure 2 for an example of the experimental task order).

2.3.1. Fixation Task

Participants completed five fixation trials. The fixation target was a black dot, measuring 0.25° of visual angle, presented on a white background at the center of a monitor positioned 100 cm in front of the participant. During each trial, participants were asked to look at the dot for a duration of 5 s.

2.3.2. Saccade Tasks

Participants completed horizontal and vertical saccade tasks. For both tasks, participants were seated 80 cm away from the monitor. At the beginning of each trial, a white fixation cross was presented on a black background for a duration ranging from 1000 to 1500 ms. The saccade target was a white dot (diameter 0.25°), which appeared after the fixation cross; this target was then followed by a second target presented in the opposite hemifield. Participants were instructed to start the next trial at their discretion.
Horizontal saccades: For the horizontal saccade task, targets were presented to the left or the right of the fixation (target amplitude range 3–25°; see Table 1). Participants completed 175 trials.
Vertical saccades: For the vertical saccade task, targets were presented above or below the fixation (target amplitude range 0.5–17°; see Table 1). Participants completed 60 trials.

2.3.3. Smooth Pursuit Tasks

Participants completed a horizontal and a vertical smooth pursuit task. Participants started each trial by pressing a button on the keyboard at their discretion. During each trial, participants were asked to fixate on a white fixation cross presented on a black background in the center of the monitor located 80 cm in front of them. The fixation cross disappeared 1000 ms after trial initiation; a white target dot then appeared and began oscillating at one of three pseudo-randomized speeds (see Table 1). Participants were asked to follow the dot (i.e., keep their eyes on the moving target) as it moved across the screen.
Horizontal smooth pursuit: For the horizontal smooth pursuit task, the white target dot appeared on the left or right of the fixation cross before oscillating (see Table 1 for the oscillating speeds). Participants completed 30 horizontal smooth pursuit trials.
Vertical smooth pursuit: For the vertical smooth pursuit task, the white target dot appeared above or below the fixation cross before oscillating (see Table 1 for the oscillating speeds). Participants completed 30 vertical smooth pursuit trials.

2.4. Analysis

2.4.1. Conversion to Degrees of the Visual Angle

The data from each system were converted into degrees of the visual angle either from display screen pixels (EyeLink) or a gaze vector (MindLink).
For the pixel-to-degree conversion, the following functions were used:

$$x_{\text{degrees-EL}} = \arctan\!\left(\frac{w_{\text{cm}}\left(\frac{x_{\text{px}}}{w_{\text{px}}} - 0.5\right)}{d_{\text{cm}}}\right)$$

$$y_{\text{degrees-EL}} = \arctan\!\left(\frac{h_{\text{cm}}\left(\frac{y_{\text{px}}}{h_{\text{px}}} - 0.5\right)}{d_{\text{cm}}}\right)$$

Here, $w_{\text{cm}}$ and $h_{\text{cm}}$ are the width and height of the display screen in centimeters (constant at $w_{\text{cm}} = 35.2$ cm and $h_{\text{cm}} = 26.4$ cm). Similarly, $w_{\text{px}}$ and $h_{\text{px}}$ are the width and height of the display screen in pixels ($w_{\text{px}} = 1024$ and $h_{\text{px}} = 768$). The parameter $d_{\text{cm}}$ represents the distance from the participant to the display screen (constant at $d_{\text{cm}} = 80$ cm). Finally, $x_{\text{px}}$ and $y_{\text{px}}$ are the raw gaze position coordinates in display screen pixels obtained from the EyeLink sample reports.
For the gaze vector-to-degree conversion, the following functions (implemented in MATLAB) were used:

$$x_{\text{degrees-ML}} = \arctan\!\left(\frac{v_x}{\sqrt{v_y^2 + v_z^2}}\right)$$

$$y_{\text{degrees-ML}} = \arctan\!\left(\frac{v_y}{v_z}\right)$$

Here, $v_x$ and $v_y$ are the horizontal and vertical components of the instantaneous gaze vector reported by the MindLink system, respectively. The gaze vector is a unit vector of the form $\mathbf{v}_{\text{gaze}} = (v_x, v_y, v_z)$ with its origin at the center of the pupil, the x-axis oriented toward the right, the y-axis oriented upward, and the z-axis oriented backward from the participant’s point of view.
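For concreteness, both conversions can be written as short functions. The sketch below is a NumPy rendering of the equations above, under the assumption that the width/height values pair with the 1024 × 768 landscape geometry as stated; the function names are illustrative:

```python
import numpy as np

# Display geometry from the Methods (saccade/pursuit viewing distance).
W_CM, H_CM = 35.2, 26.4    # screen width and height in cm
W_PX, H_PX = 1024, 768     # screen width and height in pixels
D_CM = 80.0                # participant-to-screen distance in cm

def eyelink_px_to_deg(x_px, y_px):
    """EyeLink: screen-pixel coordinates -> degrees from screen center."""
    x_deg = np.degrees(np.arctan(W_CM * (x_px / W_PX - 0.5) / D_CM))
    y_deg = np.degrees(np.arctan(H_CM * (y_px / H_PX - 0.5) / D_CM))
    return x_deg, y_deg

def mindlink_vec_to_deg(vx, vy, vz):
    """MindLink: unit gaze vector (x right, y up, z backward) -> degrees."""
    x_deg = np.degrees(np.arctan(vx / np.sqrt(vy**2 + vz**2)))
    y_deg = np.degrees(np.arctan(vy / vz))
    return x_deg, y_deg
```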
The average eye position (mean of the right eye and left eye) was used to extract eye movement features for the comparison between the two eye trackers, while individual eye streams were used to compare the right and left eye conjugacy within each tracker.

2.4.2. Fixation Stability

Fixation stability was assessed by calculating the standard deviations of the horizontal (x) and vertical (y) positions of the left and right eyes, which were then used to calculate a log-transformed bivariate contour ellipse area (log₁₀BCEA). Fixation stability was assessed during a 2000 ms fixation interval that did not contain blinks or other noise artifacts. The following formula was used to calculate log₁₀BCEA [29]:

$$\log_{10}\mathrm{BCEA} = \log_{10}\!\left(\pi\,\chi^2\,\sigma_x\,\sigma_y\sqrt{1-\rho^2}\right)$$

where $\chi^2 = 2.291$ is the chi-square value (2 degrees of freedom) corresponding to a probability of $P = 0.682$ ($\pm 1$ standard deviation), $\sigma_x$ is the standard deviation of the horizontal eye position, $\sigma_y$ is the standard deviation of the vertical eye position, and $\rho$ is the Pearson correlation coefficient between the horizontal and vertical eye positions.
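A minimal implementation of this measure, assuming position samples in degrees from a blink-free 2000 ms interval (the function name is illustrative):

```python
import numpy as np

def log10_bcea(x_deg, y_deg, chi2=2.291):
    """Log-transformed bivariate contour ellipse area covering ~68.2%
    of fixation samples (chi-square value for P = 0.682, 2 df)."""
    sx = np.std(x_deg, ddof=1)                # horizontal SD, degrees
    sy = np.std(y_deg, ddof=1)                # vertical SD, degrees
    rho = np.corrcoef(x_deg, y_deg)[0, 1]     # Pearson correlation
    bcea = np.pi * chi2 * sx * sy * np.sqrt(1.0 - rho**2)
    return np.log10(bcea)
```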

2.4.3. Saccades

Saccades were automatically detected using amplitude, velocity, and acceleration thresholds of 0.01°, 30°/s, and 800°/s², respectively [30]. For each saccade in the EyeLink data, a corresponding saccade in the MindLink data was selected if it fell within a 100 ms window of the start of the EyeLink saccade. A total of 12.7% of trials were excluded (14.2% for horizontal saccades, 12.1% for vertical saccades) due to poor tracking, where the cross-correlation between the signals from the two eye trackers was less than 0.5.
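The detection step can be sketched as a threshold pass over the position trace. The grouping logic below is illustrative rather than the exact algorithm of [30]:

```python
import numpy as np

def detect_saccades(pos_deg, fs=250.0, v_thresh=30.0, a_thresh=800.0,
                    amp_thresh=0.01):
    """Flag saccades in a 1-D position trace (degrees) sampled at fs Hz.

    Thresholds follow the Methods: 30 deg/s velocity, 800 deg/s^2
    acceleration, 0.01 deg minimum amplitude.
    """
    pos = np.asarray(pos_deg, dtype=float)
    vel = np.gradient(pos) * fs            # deg/s
    acc = np.gradient(vel) * fs            # deg/s^2
    moving = (np.abs(vel) > v_thresh) & (np.abs(acc) > a_thresh)

    # Group contiguous supra-threshold samples into candidate saccades
    # and keep those exceeding the amplitude threshold.
    saccades = []
    edges = np.flatnonzero(np.diff(moving.astype(int)))
    bounds = np.concatenate(([0], edges + 1, [len(pos)]))
    for start, stop in zip(bounds[:-1], bounds[1:]):
        if moving[start] and abs(pos[stop - 1] - pos[start]) >= amp_thresh:
            saccades.append((start, stop))  # sample indices of one saccade
    return saccades
```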
The main outcome measures to evaluate saccades between the two systems were the amplitude and the asymptotic peak velocity extracted from the main sequence. Amplitude was defined as the change in position (horizontal or vertical) in degrees for the detected saccade.
The main sequence was obtained by plotting the amplitude (°, abscissa) against the peak saccade velocity (°/s, ordinate) for all saccades for each participant. Curves were then fit to the data using:

$$V = V_{\max}\left(1 - e^{-c \cdot A}\right)$$

where $A$ and $V$ are the saccade amplitude and peak velocity data, respectively, and the fit parameters are $c$ and $V_{\max}$, the asymptotic peak velocity. Curve fitting was performed with SciPy (version 1.11.1) using a trust region reflective algorithm (method = ‘trf’), seed parameters of $c = 1$ and $V_{\max} = 500$, and a Cauchy loss function (loss = ‘cauchy’). $V_{\max}$ was extracted separately for each system for all participants.
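The described fit maps directly onto scipy.optimize.curve_fit; a sketch with the stated settings (the wrapper function is illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(A, vmax, c):
    """Peak saccade velocity saturating exponentially with amplitude."""
    return vmax * (1.0 - np.exp(-c * A))

def fit_main_sequence(amp_deg, peak_vel):
    """Fit the main sequence for one participant's saccades; returns the
    asymptotic peak velocity Vmax (deg/s) and rate constant c."""
    popt, _ = curve_fit(main_sequence, amp_deg, peak_vel,
                        p0=(500.0, 1.0),   # seed: Vmax = 500, c = 1
                        method="trf",      # trust region reflective
                        loss="cauchy")     # robust to outlier saccades
    vmax, c = popt
    return vmax, c
```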
Finally, saccade conjugacy between the left and right eyes was evaluated by calculating the amplitude difference between eyes (right eye saccade amplitude minus left eye saccade amplitude) for each participant. Each participant’s average amplitude difference was then statistically compared against a value of zero using a one-sample t-test for each eye tracker.
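In Python, the corresponding test is a one-sample t-test on the per-participant differences; the values below are placeholders, not study data:

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical per-participant mean amplitude differences (right minus
# left, in degrees) -- placeholders only, not the study's data.
amp_diff = np.array([-0.2, -0.5, -0.3, -0.4, -0.1, -0.6, -0.3,
                     -0.2, -0.4, -0.5, -0.3, -0.4, -0.2])
t_stat, p_value = ttest_1samp(amp_diff, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # deviation from conjugacy
```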

2.4.4. Smooth Pursuit

In order to compare the smooth pursuit between eye trackers, the gaze position from each eye tracker was first temporally aligned with the target position by shifting the gaze signal according to the maximum cross-correlation function between the gaze and target. Saccades were then automatically detected (as in Section 2.4.3) and removed from the trial.
The main outcome measure used to compare tracking quality during the smooth pursuit tasks between EyeLink and MindLink was the velocity gain, which is defined as the ratio of eye velocity to target velocity. Velocity gain was computed for the linear portions of the sinusoidal pursuit trials. For the two faster target frequencies (0.1 Hz and 0.2 Hz), the linear motion occurred over a 1000 ms window centered at the peak target velocity. For the slowest target frequency (0.01 Hz), the entire trial was approximated as linear motion, and a 1000 ms window at the start of the trial was used. Smooth pursuit data were subjected to a linear fit using the MATLAB function fit(t, r, ‘poly1’), where t is the timestamp for each sample and r is the gaze position. The first coefficient of the linear fit (i.e., the slope of the function) was used as an estimate of smooth pursuit velocity. A corresponding linear fit of the target position data was performed to estimate target velocity. A single measure of velocity gain was then calculated as the velocity from the eye tracker divided by the velocity of the target motion.
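A minimal sketch of the gain computation, using NumPy rather than the MATLAB fit call, under the assumption that only the slope of the first-order fit is needed:

```python
import numpy as np

def velocity_gain(t, gaze_deg, target_deg):
    """Velocity gain over a saccade-free, approximately linear segment.

    t: timestamps (s); gaze_deg/target_deg: positions (degrees) within
    the analysis window (centered on peak target velocity for the 0.1
    and 0.2 Hz targets, or at trial onset for 0.01 Hz).
    """
    eye_vel = np.polyfit(t, gaze_deg, 1)[0]      # slope of linear fit, deg/s
    target_vel = np.polyfit(t, target_deg, 1)[0]
    return eye_vel / target_vel
```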
Trials were removed from the analysis if the maximum cross-correlation between the gaze and target was less than 0.5 or when less than 200 ms of data remained available for the linear fit after saccade removal. In total, 9.8% of trials were excluded (6.9% for horizontal pursuit, 12.6% for vertical pursuit) due to poor tracking, where the cross-correlation was less than 0.5 between the signals from each eye tracker.
The conjugacy between the left and right eyes during smooth pursuit was calculated as the average velocity gain difference between eyes (right eye velocity gain minus left eye velocity gain). The mean conjugacy across all trials for each participant was then calculated for each eye tracker at each speed level and compared against a target value of zero using a two-way repeated-measures analysis of variance (ANOVA) with eye tracker and speed as factors.

3. Results

Validity (bias) and limits of agreement between the measures obtained from the two eye trackers were examined using Bland–Altman analysis.
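Assuming the conventional Bland–Altman construction (bias = mean difference; limits of agreement = bias ± 1.96 SD of the differences), the computation is a few lines:

```python
import numpy as np

def bland_altman(el_vals, ml_vals):
    """Bias and 95% limits of agreement for paired EL/ML measurements."""
    diff = np.asarray(el_vals) - np.asarray(ml_vals)  # EyeLink minus MindLink
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```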

3.1. Fixation Stability

Figure 3 shows typical fixation eye position profiles recorded with EyeLink II and AdHawk MindLink; Table 2 outlines summary statistics of the fixation stability measures for all participants for both eye trackers. Fixation stability was examined by comparing the standard deviation (SD) of the horizontal and vertical eye positions between devices. Figure 4 shows Bland–Altman plots illustrating the distribution of differences between the two eye-tracking systems in the standard deviation of the horizontal and vertical eye positions across all trials for each participant. The mean difference and 95% confidence interval between the two trackers (EyeLink and MindLink) was −0.08° [−0.32, 0.17] for the horizontal standard deviation and −0.08° [−0.37, 0.21] for the vertical standard deviation. The mean differences across participants are shown in Figure 5.
A Bland–Altman plot for the global measure of fixation stability, log₁₀BCEA, is shown in Figure 6. The differences between the devices for each participant are shown in Figure 7. The mean difference and 95% confidence interval between the two trackers (EyeLink and MindLink) was −0.29 log₁₀(deg²) [−0.98, 0.40] for the fixation log₁₀BCEA.

3.2. Saccades

Figure 8 shows typical horizontal and vertical saccade eye position profiles recorded with EyeLink II and AdHawk MindLink.

3.2.1. Horizontal Saccades

Horizontal saccades were characterized by comparing the amplitude measured by each device. Figure 9 shows a Bland–Altman plot illustrating the horizontal saccade amplitude agreement between the devices for both eyes. The mean difference and 95% confidence interval between the two trackers (EyeLink and MindLink) was 0.04° [−4.56, 4.63] for the horizontal saccade amplitude. For each participant, the measurement differences between devices are shown in Figure 10.
A horizontal main sequence plot using data from all participants and trials is shown in Figure 11. The dashed lines represent the calculated asymptotic maximum velocity for each system. The Bland–Altman plot in Figure 12 demonstrates the difference in detected horizontal maximum velocity between the two systems for each participant, where the mean difference and 95% confidence interval between the two trackers (EyeLink and MindLink) was −10.70°/s [−204.57, 183.17]. For each participant, the differences in detected horizontal maximum velocity values between the devices are shown in Figure 13.

3.2.2. Vertical Saccades

Vertical saccade characterization was examined by comparing the vertical saccade amplitude measured by each device. Figure 14 shows a Bland–Altman plot illustrating the vertical saccade amplitude agreement between the devices for both eyes for the first target saccade. The mean difference and 95% confidence interval between the two trackers (EyeLink and MindLink) was −0.32° [−4.84, 4.20] for the vertical saccade amplitude. For each participant, the measurement differences between devices are shown in Figure 15.
A vertical main sequence plot using data for all participants and trials is shown in Figure 16. The dashed lines represent the calculated asymptotic maximum velocity for each system. The Bland–Altman plot in Figure 17 demonstrates the difference in detected vertical maximum velocity between the two systems for each participant, where the mean difference and 95% confidence interval between the two trackers (EyeLink and MindLink) was 204.21°/s [−266.22, 674.64]. For each participant, differences in detected vertical maximum velocity values between devices are shown in Figure 18.

3.2.3. Saccade Conjugacy

Saccade conjugacy was tested for both eye trackers by performing a one-sample t-test on the saccade amplitude difference between eyes (right minus left) for each eye tracker. For both vertical and horizontal saccades, there was a significant deviation of conjugacy from the ideal value of zero in both eye trackers (p < 0.01). The summary statistics and t-test results for horizontal and vertical saccade amplitude differences are outlined in Table 3.

3.3. Smooth Pursuit

Figure 19 shows typical smooth pursuit eye position profiles recorded with EyeLink II and AdHawk MindLink, and Table 4 outlines the summary statistics of horizontal and vertical smooth pursuit measures from all participants for both eye trackers.

3.3.1. Horizontal Smooth Pursuit

Horizontal smooth pursuit characterization was examined by comparing the horizontal velocity gain measured by each device. Figure 20 shows the Bland–Altman plots, illustrating the horizontal velocity gain agreement between the devices for both eyes for each pursuit speed. The mean horizontal gain differences and 95% confidence intervals between the two trackers (EyeLink and MindLink) were 0.18 [−1.71, 2.07], 0.07 [−0.28, 0.42], and 0.06 [−0.25, 0.37] for horizontal pursuit speeds of 0.5°/s, 5°/s, and 10°/s, respectively. For each participant, the horizontal velocity gain differences between devices are shown in Figure 21.
For each participant, the horizontal velocity measurement differences between devices are shown in Figure 22. The mean horizontal velocity differences and 95% confidence intervals between the two trackers (EyeLink and MindLink) were −0.02°/s [−1.11, 1.07], 0.09°/s [−1.78, 1.97], and 0.04°/s [−3.55, 3.63] for horizontal pursuit speeds of 0.5°/s, 5°/s, and 10°/s, respectively.

3.3.2. Vertical Smooth Pursuit

Vertical smooth pursuit characterization was examined by comparing the vertical velocity gain measured by each device. Figure 23 shows Bland–Altman plots, illustrating the vertical velocity gain agreement between the devices for both eyes for each pursuit speed. The mean vertical gain differences and 95% confidence intervals between the two trackers (EyeLink and MindLink) were 0.09 [−2.16, 2.35], 0.06 [−0.53, 0.64], and 0.08 [−0.24, 0.39] for vertical pursuit speeds of 0.5°/s, 5°/s, and 10°/s, respectively. For each participant, vertical velocity gain differences between devices are shown in Figure 24.
For each participant, vertical velocity measurement differences between devices are shown in Figure 25. The mean vertical velocity differences and 95% confidence intervals between the two trackers (EyeLink and MindLink) were −0.07°/s [−1.34, 1.20], −0.21°/s [−3.16, 2.74], and 0.04°/s [−3.76, 3.83] for vertical pursuit speeds of 0.5°/s, 5°/s, and 10°/s, respectively.

3.3.3. Smooth Pursuit Conjugacy

Smooth pursuit conjugacy was tested for both eye trackers by performing a two-way repeated-measures ANOVA, where the dependent variable was the gain difference between eyes (right minus left), with pursuit speed (0.5°/s, 5°/s, and 10°/s) and eye tracker (EyeLink, MindLink, or ‘Ideal’, with the gain difference set to 0) as factors. The ANOVA results showed no significant gain differences (p > 0.05) between eyes for either eye tracker, in both the horizontal smooth pursuit (Table 5) and vertical smooth pursuit (Table 6).

4. Discussion

The aim of this study was to evaluate the ability of the MindLink eye tracker to capture eye movement characteristics frequently used in eye-tracking studies. The MindLink eye tracker is embedded in the eyeglass frame and uses novel camera-free technology to record eye positions. This technology may allow researchers to move out of the lab and into more ecological environments. As such, we compared the capabilities of the MindLink system to the traditional video-based EyeLink II system. Assessing the capabilities and limitations of a novel mobile system such as MindLink is necessary in order to understand the proper use cases of such a system. Previously, the only study assessing MindLink examined the accuracy and precision of the signal quality compared to an EyeLink eye tracker [26]. Here, we examined eye movement metrics frequently reported in eye-tracking studies.

4.1. Fixation

The Bland–Altman analysis for fixation stability (Figure 4 and Figure 6) showed greater variability in the MindLink recordings compared to the EyeLink recordings. These results are consistent with a previous study [26], where MindLink demonstrated poorer spatial precision. Notably, the difference in the fixation eye position between EyeLink and MindLink was less than 0.5° (Table 2). This level of precision is most likely sufficient for studies conducted in dynamic settings, such as sports, where fixation differences greater than 0.5° have been observed between expertise levels [31]. However, when the expected effect size between groups or conditions is less than 0.5°, the MindLink may not be a suitable device for detecting such differences.

4.2. Saccades

Saccade amplitude measurements from MindLink demonstrated high variability of agreement with the EyeLink system when considering individual participants (Figure 10 and Figure 15). The agreement lacked consistency because MindLink both under- and over-estimated saccade amplitudes and velocity gains within and across participants. In dynamic settings, such as driving, large-amplitude saccades are important for performance and are defined as having amplitudes greater than 10° [27]. The MindLink amplitude measurement error relative to EyeLink had a standard deviation of around 2.5° (Figure 9 and Figure 14), demonstrating MindLink’s ability to measure and discriminate large-scale eye movements relevant to real-world applications. Regarding the measurement of peak saccade velocity, video-based systems have previously demonstrated measurement inaccuracies compared to scleral search coil methods [32]; as such, it is difficult to know whether EyeLink or MindLink contributes more to the discrepancies in peak velocity estimation.

4.3. Smooth Pursuit

Smooth pursuit velocity gain differences between the eye trackers were dependent on the pursuit speed, with higher pursuit speeds demonstrating better agreement in velocity gain (Figure 20 and Figure 23). The high discrepancy in gain for the slower velocities was likely due to a lower limit on the pursuit velocity estimation accuracy of about 0.5°/s for horizontal pursuit and up to 3°/s for vertical pursuit, as apparent from Figure 22 and Figure 25 and Table 4. As such, the measurement of the slowest pursuit speed of 0.5°/s could have an error of 0.5°/s, resulting in a large velocity gain estimation error. However, the pursuit of targets moving at higher velocities (≥10°/s), which is common in dynamic scenes [33,34,35], can be captured relatively accurately by MindLink. Finally, the MindLink performance is comparable to another mobile eye tracker, the Pupil Labs glasses, which presented a pursuit velocity estimation error of about 2.4°/s relative to an EyeLink video-based system [36].

4.4. Limitations

This study offers the first investigation of common eye movement metrics recorded using a camera-free eye-tracking system. The new system was compared with a research-grade, video-based eye tracker during a simultaneous recording. While this allowed us to compare the same eye movements recorded with both systems, it also introduced the possibility of interference between devices. To address this issue, pilot testing was conducted prior to the experiment, in which eye movements were recorded separately by each eye tracker and the recordings were compared to assess the signals from single vs. simultaneous recordings. Results from the pilot study did not reveal any significant interference; nonetheless, signal interference between the devices could have had some small influence on our results. Another limitation of the study is the limited number of tasks used (i.e., fixation, saccade, and smooth pursuit). These tasks have been used widely in eye movement studies; however, the capability of the MindLink system remains to be examined in other tasks where participants are not constrained by a chin rest and the stimulus is presented under more ecological conditions rather than on a computer monitor. Finally, the application of MEMS technology to eye tracking is quite novel [37]. While the current tracking algorithms provide less accurate recordings when compared to the EyeLink eye tracker, it is likely that as the technology matures, the tracking algorithms will also improve.

5. Conclusions

The AdHawk MindLink is a camera-free eye-tracking system that presents many advantages over traditional video-based eye-tracking systems, including a lighter weight, a more compact design, and greater portability beyond constrained postures and settings. While a video-based system such as the EyeLink II remains superior at measuring fine eye movement details, the MindLink stands out for its smaller footprint and maintains a level of accuracy suitable for dynamic environments. Further work should explore the factors contributing to the accuracy variability observed among participants, as well as the performance of the infrared MEMS technology used by MindLink in natural light settings.

Author Contributions

Conceptualization, A.Z., K.D., E.I. and E.N.-S.; formal analysis, A.Z., C.M.C. and A.M.Y.; funding acquisition, E.I. and E.N.-S.; investigation, A.Z., C.M.C., A.M.Y., K.D., E.I. and E.N.-S.; methodology, A.Z., K.D., E.I. and E.N.-S.; resources, E.I. and E.N.-S.; software, A.Z.; supervision, K.D., E.I. and E.N.-S.; visualization, A.Z. and C.M.C.; writing—original draft, A.Z.; writing—review and editing, A.Z., C.M.C., A.M.Y., K.D., E.I. and E.N.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Sciences and Engineering Research Council of Canada (NSERC) and the Accessible Technology Program.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research Ethics Board at the University of Waterloo, Canada. (Protocol no. 43620, date of approval: 19 September 2022).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
SD  standard deviation
EL  EyeLink
ML  MindLink

References

  1. Hodgson, T.L.; Ezard, G.; Hermens, F. Eye movements in neuropsychological tasks. Curr. Top. Behav. Neurosci. 2019, 41, 393–418. [Google Scholar] [PubMed]
  2. Land, M.F.; Hayhoe, M. In what ways do eye movements contribute to everyday activities? Vis. Res. 2001, 41, 3559–3565. [Google Scholar] [CrossRef] [PubMed]
  3. Moehler, T.; Fiehler, K. Effects of central and peripheral cueing on perceptual and saccade performance. Vis. Res. 2018, 143, 26–33. [Google Scholar] [CrossRef] [PubMed]
  4. Jonikaitis, D.; Dhawan, S.; Deubel, H. Saccade selection and inhibition: Motor and attentional components. J. Neurophysiol. 2019, 121, 1368–1380. [Google Scholar] [CrossRef]
  5. Ahlström, C.; Kircher, K.; Nyström, M.; Wolfe, B. Eye tracking in driver attention research - how gaze data interpretations influence what we learn. Front. Neuroergonomics 2021, 2, 778043. [Google Scholar] [CrossRef]
  6. Parker, A.; Poole, J.; Dagnall, N. Saccade-induced retrieval enhancement and the recovery of perceptual item-specific information. Cogn. Process. 2020, 21, 223–237. [Google Scholar] [CrossRef]
  7. Bachurina, V.; Arsalidou, M. Effects of task complexity and working memory load on eye-tracking indices of cognitive effort in adults and children. J. Vis. 2020, 20, 1069. [Google Scholar] [CrossRef]
  8. Seideman, J.A.; Stanford, T.R.; Salinas, E. Saccade metrics reflect decision-making dynamics during urgent choices. Nat. Commun. 2018, 9, 2907. [Google Scholar] [CrossRef]
  9. Brams, S.; Ziv, G.; Levin, O.; Spitz, J.; Wagemans, J.; Williams, A.M.; Helsen, W.F. The relationship between gaze behaviour, expertise, and performance: A systematic review. Psychol. Bull. 2019, 145, 980–1027. [Google Scholar] [CrossRef]
  10. Gegenfurtner, A.; Lehtinen, E.; Säljö, R. Expertise differences in the comprehension of visualizations: A meta-analysis of eye-tracking research in professional domains. Educ. Psychol. Rev. 2011, 23, 523–552. [Google Scholar] [CrossRef]
  11. Irving, E.L.; Steinbach, M.J.; Lillakas, L.; Babu, R.J.; Hutchings, N. Horizontal saccade dynamics across the human life span. Investig. Ophthalmol. Vis. Sci. 2006, 47, 2478–2484. [Google Scholar] [CrossRef] [PubMed]
  12. Luna, B.; Velanova, K.; Geier, C.F. Development of eye-movement control. Brain Cogn. 2008, 68, 293–308. [Google Scholar] [CrossRef] [PubMed]
  13. Niechwiej-Szwedo, E.; Wu, S.; Nouredanesh, M.; Tung, J.; Christian, L.W. Development of eye-hand coordination in typically developing children and adolescents assessed using a reach-to-grasp sequencing task. Hum. Mov. Sci. 2021, 80, 102868. [Google Scholar] [CrossRef] [PubMed]
  14. McDonald, M.A.; Holdsworth, S.J.; Danesh-Meyer, H.V. Eye movements in mild traumatic brain injury: Ocular biomarkers. J. Eye Mov. Res. 2022, 15. [Google Scholar] [CrossRef] [PubMed]
  15. Yep, R.; Smorenburg, M.L.; Riek, H.C.; Calancie, O.G.; Kirkpatrick, R.H.; Perkins, J.E.; Huang, J.; Coe, B.C.; Brien, D.C.; Munoz, D.P. Interleaved pro/anti-saccade behavior across the lifespan. Front. Aging Neurosci. 2022, 14, 842549. [Google Scholar] [CrossRef]
  16. Anderson, T.J.; MacAskill, M.R. Eye movements in patients with neurodegenerative disorders. Nat. Rev. Neurol. 2013, 9, 74–85. [Google Scholar] [CrossRef]
  17. Lappi, O. Eye movements in the wild: Oculomotor control, gaze behaviour & frames of reference. Neurosci. Biobehav. Rev. 2016, 69, 49–68. [Google Scholar]
  18. Zhu, S.; Lakshminarasimhan, K.J.; Arfaei, N.; Angelaki, D.E. Eye movements reveal spatiotemporal dynamics of visually-informed planning in navigation. Elife 2022, 11, e73097. [Google Scholar] [CrossRef]
  19. Atik, O.; Arslan, O. Use of eye tracking for assessment of electronic navigation competency in maritime training. J. Eye Mov. Res. 2019, 12. [Google Scholar] [CrossRef]
  20. Peißl, S.; Wickens, C.D.; Baruah, R. Eye-tracking measures in aviation: A selective literature review. Int. J. Aerosp. Psychol. 2018, 28, 98–112. [Google Scholar] [CrossRef]
  21. Darici, D.; Masthoff, M.; Rischen, R.; Schmitz, M.; Ohlenburg, H.; Missler, M. Medical imaging training with eye movement modeling examples: A randomized controlled study. Med. Teach. 2023, 45, 918–924. [Google Scholar] [CrossRef]
  22. Halszka, J.; Holmqvist, K.; Gruber, H. Eye tracking in educational science: Theoretical frameworks and research agendas. J. Eye Mov. Res. 2017, 10, 1–18. [Google Scholar]
  23. Ashraf, H.; Sodergren, M.H.; Merali, N.; Mylonas, G.; Singh, H.; Darzi, A. Eye-tracking technology in medical education: A systematic review. Med. Teach. 2018, 40, 62–69. [Google Scholar] [CrossRef] [PubMed]
  24. Cognolato, M.; Atzori, M.; Müller, H. Head-mounted eye gaze tracking devices: An overview of modern devices and recent advances. J. Rehabil. Assistive Technol. Eng. 2018, 5, 2055668318773991. [Google Scholar] [CrossRef] [PubMed]
  25. Holmqvist, K.; Örbom, S.L.; Hooge, I.T.C.; Niehorster, D.C.; Alexander, R.G.; Andersson, R.; Benjamins, J.S.; Blignaut, P.; Brouwer, A.M.; Chuang, L.L.; et al. Eye tracking: Empirical foundations for a minimal reporting guideline. Behav. Res. Methods 2023, 55, 364–416. [Google Scholar] [CrossRef] [PubMed]
  26. Aziz, S.; Lohr, D.J.; Komogortsev, O. SynchronEyes: A Novel, Paired Data Set of Eye Movements Recorded Simultaneously with Remote and Wearable Eye-Tracking Devices. In Proceedings of the 2022 Symposium on Eye Tracking Research and Applications, Seattle, WA, USA, 8–11 June 2022; pp. 1–6. [Google Scholar]
  27. Cardona, G.; Quevedo, N. Blinking and driving: The influence of saccades and cognitive workload. Curr. Eye Res. 2014, 39, 239–244. [Google Scholar] [CrossRef] [PubMed]
  28. Fletcher, W.A.; Sharpe, J.A. Smooth pursuit dysfunction in Alzheimer’s disease. Neurology 1988, 38, 272. [Google Scholar] [CrossRef]
  29. González, E.G.; Wong, A.M.; Niechwiej-Szwedo, E.; Tarita-Nistor, L.; Steinbach, M.J. Eye position stability in amblyopia and in normal binocular vision. Investig. Ophthalmol. Vis. Sci. 2012, 53, 5386–5394. [Google Scholar] [CrossRef]
  30. Engbert, R.; Kliegl, R. Microsaccades uncover the orientation of covert attention. Vis. Res. 2003, 43, 1035–1045. [Google Scholar] [CrossRef] [PubMed]
  31. Di Russo, F.; Pitzalis, S.; Spinelli, D. Fixation stability and saccadic latency in elite shooters. Vis. Res. 2003, 43, 1837–1845. [Google Scholar] [CrossRef]
  32. Kimmel, D.L.; Mammo, D.; Newsome, W.T. Tracking the eye non-invasively: Simultaneous comparison of the scleral search coil and optical tracking techniques in the macaque monkey. Front. Behav. Neurosci. 2012, 6, 49. [Google Scholar] [CrossRef] [PubMed]
  33. Spering, M.; Schütz, A.C.; Braun, D.I.; Gegenfurtner, K.R. Keep your eyes on the ball: Smooth pursuit eye movements enhance prediction of visual motion. J. Neurophysiol. 2011, 105, 1756–1767. [Google Scholar] [CrossRef]
  34. Fehd, H.M.; Seiffert, A.E. Eye movements during multiple object tracking: Where do participants look? Cognition 2008, 108, 201–209. [Google Scholar] [CrossRef] [PubMed]
  35. Fooken, J.; Yeo, S.H.; Pai, D.K.; Spering, M. Eye movement accuracy determines natural interception strategies. J. Vis. 2016, 16, 1. [Google Scholar] [CrossRef]
  36. Ehinger, B.V.; Groß, K.; Ibs, I.; König, P. A new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000. PeerJ 2019, 7, e7086. [Google Scholar] [CrossRef]
  37. Sarkar, N.; Strathearn, D.; Lee, G.; Olfat, M.; Rohani, A.; Mansour, R.R. A large angle, low voltage, small footprint micromirror for eye tracking and near-eye display applications. In Proceedings of the 2015 Transducers—2015 18th International Conference on Solid-State Sensors, Actuators and Microsystems (TRANSDUCERS), Anchorage, AK, USA, 21–25 June 2015; pp. 855–858. [Google Scholar]
Figure 1. Participant instrumentation and calibration: (a) the EyeLink headset was worn on top of the MindLink frames, with the EyeLink cameras viewing the eyes through the empty frame front; (b) the MindLink system was calibrated on its collection PC laptop; (c) the EyeLink system was calibrated using the pupil-only mode on the stimulus display monitor. For both calibrations, participants’ heads were fixed in a chin rest.
Figure 2. Example of one possible ordering of the pseudo-randomized experimental task procedure.
Figure 3. Raw fixation profiles. Exemplar single-trial fixation profiles from average (left and right) eye position data recorded from EyeLink II (EL) and AdHawk MindLink (ML): (a) horizontal position, (b) vertical position.
Figure 4. Bland–Altman plot of the fixation eye position standard deviation: (a) horizontal standard deviation; (b) vertical standard deviation. Colors distinguish individual participants.
Figure 5. Mean ± SD difference (EyeLink and MindLink) of the fixation eye position: (a) horizontal standard deviation; (b) vertical standard deviation. The mean ± SD for each participant, as well as the mean ± SD difference averaged across all participants and trials, are shown.
Figure 6. Bland–Altman plot of the fixation eye position bivariate contour ellipse area (log-transformed). Colors distinguish individual participants.
Figure 7. Mean ± SD difference (EyeLink and MindLink) in the fixation eye position log₁₀BCEA for each participant, as well as the mean ± SD difference averaged across all participants and trials.
Figure 8. Saccade eye position profiles from average gaze position data recorded from EyeLink II (EL) and AdHawk MindLink (ML): (a) horizontal saccade, (b) vertical saccade.
Figure 9. Bland–Altman plot of the horizontal saccade amplitude between EyeLink and MindLink. Colors distinguish individual participants.
Figure 10. Mean ± SD difference (EyeLink and MindLink) of the horizontal saccade amplitude for each participant, as well as the mean ± SD difference averaged across all participants and trials.
Figure 11. Horizontal main sequence plot for all participants and trials. The dashed line represents the detected asymptotic maximum velocity.
Figure 12. Bland–Altman plot of the detected maximum horizontal saccade velocity between EyeLink and MindLink. Colors distinguish individual participants.
Figure 13. Bar plot of the detected maximum horizontal saccade velocity as measured by both EyeLink and MindLink for each participant, as well as mean values.
Figure 14. Bland–Altman plot of the vertical saccade amplitude between EyeLink and MindLink. Colors distinguish individual participants.
Figure 15. Mean ± SD difference (EyeLink and MindLink) of the vertical saccade amplitude for each participant, as well as the mean ± SD difference averaged across all participants and trials.
Figure 16. Vertical main sequence plot for all participants and trials. The dashed line represents the detected asymptotic maximum velocity.
Figure 17. Bland–Altman plot of the detected maximum vertical saccade velocity between EyeLink and MindLink. Colors distinguish individual participants.
Figure 18. Bar plot of the detected maximum vertical saccade velocity as measured by both EyeLink and MindLink for each participant, as well as mean values.
Figure 19. Smooth pursuit eye position profiles from average gaze position data recorded from EyeLink II (EL) and AdHawk MindLink (ML): (a) 0.5°/s, (b) 5°/s, and (c) 10°/s.
Figure 20. Bland–Altman plot of the horizontal velocity gain between EyeLink and MindLink. Colors distinguish smooth pursuit target speeds.
Figure 21. Mean ± SD difference (EyeLink and MindLink) of the horizontal velocity gain for each participant, as well as the mean ± SD difference averaged across all participants and trials.
Figure 22. Mean ± SD difference (EyeLink and MindLink) of the horizontal velocity measurement for each participant, as well as the mean ± SD difference averaged across all participants and trials.
Figure 23. Bland–Altman plot of the vertical velocity gain between EyeLink and MindLink. Colors distinguish smooth pursuit target speeds.
Figure 24. Mean ± SD difference (EyeLink and MindLink) of the vertical velocity gain for each participant, as well as the mean ± SD difference averaged across all participants and trials.
Figure 25. Mean ± SD difference (EyeLink and MindLink) of the vertical velocity measurement for each participant, as well as the mean ± SD difference averaged across all participants and trials.
Table 1. Stimulus parameters for the horizontal and vertical saccade and smooth pursuit tasks.

Task | # of Trials | Target Amplitude
Saccades, Horizontal | 175 | 3–25° (2.9, 5.8, 8.6, 11.4, 12.2, 14.4, 17.3, 20.1, 24.5°)
Saccades, Vertical | 60 | 0.5–17° (0.5, 2.9, 5.7, 6.4, 8.6, 11.9, 17.2°)
Smooth Pursuit, Horizontal | 30 | 0.01, 0.1, 0.2 Hz (0.5, 5, 10°/s)
Smooth Pursuit, Vertical | 30 | 0.01, 0.1, 0.2 Hz (0.5, 5, 10°/s)
Table 2. Summary statistics of fixation stability measures for each eye tracker.

Eye Tracker | Measure | n | Mean (°) | SD (°)
EyeLink | SD (horizontal) | 13 | 0.13 | 0.05
EyeLink | SD (vertical) | 13 | 0.18 | 0.04
EyeLink | log₁₀BCEA | 13 | −0.92 | 0.21
MindLink | SD (horizontal) | 13 | 0.21 | 0.11
MindLink | SD (vertical) | 13 | 0.25 | 0.10
MindLink | log₁₀BCEA | 13 | −0.63 | 0.30
Table 3. One-sample t-test results comparing differences in the measured saccade amplitudes between eyes for the EyeLink and MindLink trackers, for horizontal and vertical saccades.

Saccade Direction | Eye Tracker | Mean of Difference | t-Statistic | p-Value | df
Horizontal | EyeLink | −0.34 | −4.12 | 0.002 | 12
Horizontal | MindLink | −1.29 | −4.09 | 0.002 | 12
Vertical | EyeLink | −0.28 | −4.04 | 0.002 | 12
Vertical | MindLink | −1.40 | −3.73 | 0.003 | 12
Table 4. Summary statistics of smooth pursuit measures for each eye tracker.

Eye Tracker | Direction | Measure | n | 0.5°/s Mean (SD) | 5°/s Mean (SD) | 10°/s Mean (SD)
EyeLink | Horizontal | Speed | 13 | 0.58 (0.20) | 4.72 (0.20) | 9.85 (0.74)
EyeLink | Horizontal | Gain | 13 | 1.00 (0.37) | 0.95 (0.04) | 0.90 (0.07)
EyeLink | Vertical | Speed | 13 | 0.42 (0.10) | 3.26 (0.29) | 7.32 (1.10)
EyeLink | Vertical | Gain | 13 | 0.67 (0.20) | 0.65 (0.06) | 0.67 (0.10)
MindLink | Horizontal | Speed | 13 | 0.52 (0.15) | 4.36 (0.65) | 9.22 (1.10)
MindLink | Horizontal | Gain | 13 | 0.84 (0.27) | 0.87 (0.13) | 0.85 (0.10)
MindLink | Vertical | Speed | 13 | 0.49 (0.19) | 2.86 (0.55) | 6.44 (1.24)
MindLink | Vertical | Gain | 13 | 0.78 (0.35) | 0.57 (0.11) | 0.59 (0.11)
Table 5. Two-way ANOVA results for horizontal velocity gain.

Effect | DFn | DFd | F | p | η²g
Eye Tracker | 1.05 | 12.64 | 1.21 | 0.30 | 0.01
Pursuit Speed | 1.21 | 14.58 | 0.77 | 0.42 | 0.01
Eye Tracker × Pursuit Speed | 1.18 | 14.13 | 0.63 | 0.47 | 0.03
Table 6. Two-way ANOVA results for vertical velocity gain.

Effect | DFn | DFd | F | p | η²g
Eye Tracker | 1.05 | 11.50 | 0.87 | 0.38 | 0.01
Pursuit Speed | 1.07 | 11.74 | 0.02 | 0.92 | 0.0004
Eye Tracker × Pursuit Speed | 1.17 | 12.88 | 0.85 | 0.39 | 0.03

