Abstract
Recent breakthroughs in machine learning and artificial intelligence, together with the emergence of large datasets, have made the integration of eye tracking increasingly feasible not only in computing but also in many other disciplines, accelerating innovation and scientific discovery. These transformative changes often depend on intelligently analyzing and interpreting gaze data, which demands a substantial technical background. These technical barriers have remained an obstacle to the broader adoption of eye tracking technologies in certain communities. In an effort to increase accessibility and empower a broader community of researchers and practitioners to leverage eye tracking, this paper presents an open-source software platform, the Beach Environment for the Analytics of Human Gaze (BEACH-Gaze), designed to offer comprehensive descriptive and predictive analytical support. Firstly, BEACH-Gaze provides sequential gaze analytics through window segmentation in its data processing and analysis pipeline, which can be used to simulate real-time gaze-based systems. Secondly, it integrates a range of established machine learning models, allowing researchers from diverse disciplines to generate gaze-enabled predictions without advanced technical expertise. The overall goal is to simplify technical details, to aid the broader community interested in eye tracking research and applications in data interpretation, and to leverage knowledge gained from eye gaze in the development of machine intelligence. As such, we further demonstrate three use cases that apply descriptive and predictive gaze analytics to support individuals with autism spectrum disorder during technology-assisted exercises, to dynamically tailor visual cues for an individual user via physiologically adaptive visualizations, and to predict pilots’ performance in flight maneuvers to enhance aviation safety.
1. Introduction
Eye tracking is increasingly being applied across multiple disciplines and sectors, such as diagnosing medical conditions [], optimizing marketing and consumer research [], enriching user experience with immersive interactions in virtual reality and augmented reality [], enhancing automotive safety in advanced driver-assistance systems [], and creating effective educational technologies enabled by gaze-driven learning behaviors []. Comprehensive overviews and guides in eye tracking research are extensively documented in [], highlighting its diverse applications and methodological advancements. Coupled with recent advancements in machine learning, intelligent analysis and interpretation of gaze data will likely continue to drive innovation and discoveries in multiple domains that utilize eye tracking. However, the complexity involved in intelligent data analysis continues to be a barrier to the widespread adoption of eye tracking in certain communities, as such analyses typically require strong technical backgrounds. As eye tracking applications have grown to include a broader community of practitioners who may not be equipped with the programming knowledge to develop tools independently, there is a pressing need for software support that can rapidly produce insights from gaze data. Increased accessibility to intelligent gaze analytics will thus facilitate broader participation in, and adoption of, eye tracking technologies across diverse disciplines.
To this end, and to provide comprehensive, easy-to-use tool capabilities that aid researchers across disciplines in intelligent gaze analytics without requiring extensive technical skills, this paper presents an open-source software solution with integrated machine learning support to facilitate analyses of descriptive gaze measures and gaze-enabled predictions, namely, the Beach Environment for the Analytics of Human Gaze (BEACH-Gaze). The overall goal of BEACH-Gaze is to simplify the interpretation of gaze data, to democratize access to this technology, and to empower more researchers to leverage eye tracking in their work. By lowering technical barriers and increasing accessibility, BEACH-Gaze will likely enable a wider range of researchers and practitioners to harness the power of intelligent gaze analytics, accelerating innovative and impactful applications across various fields.
More specifically, BEACH-Gaze is a desktop application with a graphical user interface and is distributed under the GNU General Public License with a reusable and extensible codebase. It consists of two main modules: (i) the descriptive analytics module, which provides sequential and summative descriptive gaze measures derived from raw eye gaze, capturing gaze patterns at a moment in time as well as over time, and (ii) the predictive analytics module, which provides a range of machine learning support for gaze-enabled classifications, facilitating simulations of real-time predictions. With an overall goal of supporting collaborative development and rapid innovation in intelligent gaze analytics, the significance of this work lies in its contribution towards more accessible, comprehensive, and collaborative solutions that can accelerate research and discoveries in eye tracking research and applications across diverse disciplines. Notably, BEACH-Gaze enhances accessibility to intelligent gaze analytics and expands the scope and depth of gaze-enabled research. Additionally, the open-source software promotes collaborative development, allowing researchers to benefit from a shared pool of knowledge and tools with a reusable and extensible codebase, which can lead to more consistent and comparable research outcomes in future studies and applications.
Within the context of BEACH-Gaze, we distinguish between real-time and simulated real-time gaze modeling and predictions as follows. Real-time modeling refers to the continuous acquisition, processing, and interpretation of raw gaze data as it is being generated by the eye tracker during task execution. Simulated real-time modeling utilizes recorded gaze data in a time-sequenced manner to emulate the conditions of a real-time system. While the data is not generated live, the simulation preserves the temporal structure and progression of gaze data, allowing researchers to test prediction algorithms, evaluate system responsiveness, and refine thresholding strategies under controlled conditions. The key distinction lies in temporal immediacy and system integration, where BEACH-Gaze focuses on providing a flexible environment for iterative testing and retrospective analysis. To this end, BEACH-Gaze supports simulated real-time modeling, enabling researchers to transition from offline experimentation to real-time deployment as their systems mature and application demands evolve.
This paper presents the design, architecture, and functional capabilities of BEACH-Gaze, including its comprehensive support for descriptive and predictive gaze analytics, integrated machine learning models, and a user interface that reduces technical barriers, empowering a broader community of researchers to engage with eye tracking research and applications. In the current era of artificial intelligence and advanced data science, BEACH-Gaze aims to bridge the gap between complex computational methods and accessible, interdisciplinary research in eye tracking. The novelty of BEACH-Gaze lies in its ability to simulate real-time gaze-based predictions, support sequential and summative gaze analyses, and provide extensive classification and regression capabilities. These capabilities allow granular, time-sensitive insights and behavioral profiling, which are critical for applications in domains where understanding and responding to human attention and cognitive states in real time is essential. Moreover, its integrated machine learning models facilitate a wide range of predictive tasks. Unlike existing tools that often require extensive programming knowledge or offer limited analytical depth, BEACH-Gaze combines accessibility with advanced functionality, making it uniquely positioned to support interdisciplinary eye tracking research and applications. To demonstrate its practical utility and cross-domain applicability, three distinct use cases in diverse fields (autism, physiologically adaptive visualization, and aviation safety) are also presented in this paper; they serve to contextualize the relevance and effectiveness of BEACH-Gaze in supporting real-world interdisciplinary research and applications to accelerate scientific discovery, enhance human decision-making, and support user-centered innovation.
2. Related Work
Over the years, numerous commercial and open-source solutions have been developed to support eye tracking research and gaze data analysis. Due to concerns such as the cost of licensing renewals, limited customization options, dependency on vendors for updates and support, and the restrictions imposed by proprietary source code, there has been a growing need for open-source alternatives within the scientific community. Some notable examples are discussed below.
In the context of facilitating open-source execution of eye tracking studies, PyGaze [] offers a cross-platform toolbox, written in Python, designed for minimal-effort programming of eye tracking experiments. PyGaze aims to combine the ease of graphical experiment builders with the flexibility of programming, allowing researchers to create experiments with short, readable code without sacrificing functionality. It provides visual and auditory stimuli, collects data from a range of devices (e.g., keyboards, mice, joysticks), supports custom algorithms for eye movement detection, is compatible with multiple eye tracker brands (e.g., EyeLink, SMI, Tobii), and integrates Python libraries for enhanced functionality. Another example is GazeParser [], an open-source and multiplatform library that provides a low-cost alternative to more expensive commercial eye trackers. It integrates Python packages (e.g., OpenCV, SciPy, and Matplotlib) for data visualization and consists of a video-based eye tracker and a set of Python libraries for data recording and analysis. GazeParser is compatible with experimental control libraries such as PsychoPy [] and VisionEgg []. PsychoPy is an open-source software suite written in Python that is designed to create visual and auditory stimuli for neuroscience experiments. Though primarily developed for creating experiments in psychology, it also supports eye tracking experiments with extensive customization options for stimuli and experimental setups, is cross-platform compatible (e.g., Windows, macOS, Linux), supports simple scripts, and utilizes libraries such as OpenGL for graphics. VisionEgg is an open-source library for real-time visual stimulus generation. It is typically used in conjunction with eye tracking to create complex visual scenes by leveraging OpenGL. Written in Python with extensions in C, it supports real-time generation of complex visual scenes (e.g., 3D graphics), provides luminance and temporal calibration, and accepts various input devices (e.g., mice, movement tracking systems, digital triggers). Originating from research in physics education, OGAMA [] supports the recording and analysis of eye tracking and mouse tracking data from slideshow-based experiments. It is written in C#.NET and is open source, with features supporting slideshow design, AOI definition, and the recording and replaying of eye and mouse tracking data. Recent efforts in developing open-source tools for running eye tracking experiments also include OpenGaze [], which supports gaze and facial behavior analysis in web applications, Libretracker [], which tracks eye movements using head-mounted webcams, and the TobiiGlassesPySuite [], which supports the use of the Tobii Pro Glasses 2 in platform-independent recordings.
To support evaluations and comparisons across eye tracking systems and calibration methods, one example of an open-source and tracker-independent system is TraQuMe [], which is designed to evaluate tracker reliability and data quality. It measures spatial accuracy and precision via the proximity of the gaze point to the target and the consistency of repeated measurements. Additionally, it provides visualizations to help interpret results and works independently of specific eye tracker brands. Another example is GazeVisual-Lib [], which is designed to evaluate the performance and data quality of eye trackers. It supports the extraction of evaluative metrics (e.g., true vs. estimated gaze position, de-noising and outlier removal) and provides visualizations (e.g., heatmaps, gaze plots, AOI-specific metrics) for the researcher to thoroughly compare performance and identify limitations of their eye tracking systems.
In the context of open-source tools supporting gaze data analysis, one example is PyTrack [], which offers support for both analysis and visualization of gaze data, such as automated extraction of gaze measures (e.g., blinks, fixations, saccades, microsaccades, and pupil size from raw gaze), visualizations (e.g., gaze plots, heat maps, and dynamic visualizations), statistical tests (e.g., t-tests and ANOVA), and area of interest (AOI) analysis (e.g., number of revisits in user-defined AOIs). To overcome challenges in eye tracking studies conducted on wide-field screens with natural head movements, ref. [] presents an open-source toolkit for the Pupil Core eye tracker to support dynamic AOI analysis, such as semi-automatic and manual allocation of dynamic AOIs, extraction of dwell times and time to first entry, as well as overlaying gaze data on video. EMDAT [] is an open-source toolkit designed to support the processing and analysis of raw data derived from Tobii eye trackers via Tobii Studio. It provides built-in data cleaning functions and can generate gaze features such as fixations, saccades, pupil metrics, and temporal patterns for each participant. It also supports batch processing of multiple participants. There are also several frameworks and toolboxes supporting gaze data analysis, including those written in MATLAB, such as GlassesViewer [] focusing on analyzing data from the Tobii Pro Glasses 2 eye tracker, SacLab [] focusing on saccade analysis, EALab [] focusing on multivariate data analysis and classifications, Eye-MMV [] focusing on fixation detection using a novel two-step spatial dispersion threshold algorithm, and GazeAlyze [] focusing on static visual stimuli with extendable modules; others written in Python, such as PSOVIS [], which is designed to extract post-saccadic oscillation signals from eye movement recordings, and PyGazeAnalyzer [], which extends PyGaze and supports high-level plotting of eye tracking data; as well as those written in R, such as ETRAN-R [], which supports visualization and statistical analysis of eye tracking data. Table 1 presents a comparison of the key features of these tools.
Table 1.
Feature Comparison of Open-Source Tools for Gaze Data Analytics.
One key observation of the available open-source tools is that, despite their goal of reducing the technical effort required of researchers, they often still necessitate some basic understanding of Python, MATLAB, or R. This requirement can pose a barrier for non-technical experimenters. To address this issue and to broaden participation in eye tracking research and applications, BEACH-Gaze offers a graphical user interface (GUI) that eliminates the need for prior programming knowledge, thereby making it accessible to researchers and practitioners of all technical backgrounds. Another observation is that while there is substantial analytical support for extracting gaze activities, there is a lack of descriptive gaze measures that quantify and characterize sequential visual attention and gaze patterns. Current analytical support has predominantly emphasized summative gaze metrics that capture overall traits and tendencies across an entire recording. However, there remains a significant gap in analytical support for exploring the temporal dynamics of gaze data, overlooking how the evolution of gaze over time can inform and enhance machine intelligence. To address this, BEACH-Gaze offers window-segmented sequential analyses of descriptive gaze measures, capturing the time-series nature of gaze data and reflecting the evolution of gaze with deeper insights into user behavior.
Another key observation is that, apart from EALab, there is a notable lack of classification support for generating gaze-empowered predictions to facilitate intelligent gaze analytics. Notably, BEACH-Gaze and EALab differ in several key aspects. Firstly, BEACH-Gaze enables temporal gaze analyses to quantify how gaze patterns evolve over time as a person’s visual needs change during an interaction. Secondly, recognizing the potential evolution of gaze in the same visual scene, BEACH-Gaze allows the experimenter to customize periodic analyses of descriptive gaze measures by setting timed, scheduled intervals, focusing solely on significant gaze events, or utilizing cumulative gaze tendencies exhibited by an individual. As gaze patterns evolve, the resulting classifications will also vary based on differences observed at different time intervals. As such, to support time-based predictions throughout an interaction, BEACH-Gaze allows the experimenter to customize the timing and frequency of classifications to simulate real-time gaze-driven predictions. Lastly, BEACH-Gaze supports a wider range of models (46 classification and 31 regression models compared to 6 classifiers in EALab) and does not require the experimenter to have expertise in configuring machine learning algorithms.
3. BEACH-Gaze
BEACH-Gaze is compatible with Windows and macOS and supports the processing and analysis of raw gaze data generated from the Gazepoint GP3 and GP3 HD eye trackers []. By default, it is configured to process raw gaze data collected from 24″ monitors with full HD resolution (1920 × 1080 pixels), the highest specifications supported by the Gazepoint eye trackers. BEACH-Gaze is written in Java and is freely available to download at []. It supports individual (i.e., one person’s gaze) and batched (i.e., a group of individuals’ gazes) processing and analysis of raw gaze, analysis of experimenter-defined AOIs, descriptive gaze measures at timed intervals, and classifications thereof to simulate real-time gaze-enabled predictions.
3.1. Design Overview
Figure 1 illustrates the architecture of BEACH-Gaze. Raw gaze files (e.g., all_gaze.csv and fixations.csv) first undergo a pre-processing step to reduce noise, whereby invalid entries (as indicated by the eye tracker’s validity codes, negative values, and off-screen entries), incomplete entries (e.g., when only one eye was captured or when x and y coordinates are missing), and corrupted entries (e.g., pupil dilation exceeding physiologically possible ranges) are removed. These ranges are informed by physiological norms: anisocoria (asymmetric pupils) is rarely greater than 1 mm [], and normal pupil size in adults typically varies from 2 to 4 mm in diameter in bright light and from 4 to 8 mm in the dark [].
Figure 1.
Architecture of BEACH-Gaze.
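As a concrete illustration of this pre-processing step, the following minimal sketch filters a list of gaze samples using validity flags, on-screen bounds, and physiologically plausible pupil diameters; the record fields, method names, and exact thresholds are assumptions made for this example rather than BEACH-Gaze’s actual implementation.

```java
// Illustrative sketch of the pre-processing step described above; the record
// fields, validity flags, and pupil-size bounds are assumptions for this example
// and do not mirror BEACH-Gaze's internal data structures.
import java.util.List;
import java.util.stream.Collectors;

record GazeSample(double x, double y, boolean leftValid, boolean rightValid,
                  double leftPupilMm, double rightPupilMm) {}

public class GazePreprocessor {

    // Normal adult pupil diameters roughly span 2-8 mm depending on lighting.
    private static final double MIN_PUPIL_MM = 2.0;
    private static final double MAX_PUPIL_MM = 8.0;

    public static List<GazeSample> denoise(List<GazeSample> raw, int screenW, int screenH) {
        return raw.stream()
                // drop incomplete entries where only one eye was captured
                .filter(s -> s.leftValid() && s.rightValid())
                // drop invalid entries with negative or off-screen coordinates
                .filter(s -> s.x() >= 0 && s.y() >= 0 && s.x() <= screenW && s.y() <= screenH)
                // drop corrupted entries with physiologically implausible pupil sizes
                .filter(s -> s.leftPupilMm() >= MIN_PUPIL_MM && s.leftPupilMm() <= MAX_PUPIL_MM
                          && s.rightPupilMm() >= MIN_PUPIL_MM && s.rightPupilMm() <= MAX_PUPIL_MM)
                .collect(Collectors.toList());
    }
}
```

In practice, the bounds and validity checks would be aligned with the output format of the specific eye tracker, such as the Gazepoint validity codes mentioned above.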
Once de-noised, the data is further processed by the Descriptive Modules to produce descriptive gaze measures (DGMs, discussed in Section 3.2). Depending on the researcher’s configuration, one, two, or all modules may be executed. In the summative module, cumulative DGMs are generated for the entire duration of an eye tracking recording. This supports researchers interested in aggregated gaze behavior over time, providing an overview of visual attention patterns by consolidating spatial and temporal gaze data into a single, comprehensive view. For researchers interested in the temporal progression of gaze, such as nuanced shifts in attention that may have occurred throughout an interaction, the window-based module supports the generation of temporal DGMs at various time intervals, capturing the evolution of gaze throughout an interaction. If the researcher has defined AOIs (reflected in the raw gaze data), DGMs can also be generated for the specific AOIs, in either summative fashion (i.e., aggregated results for an entire recording) or temporal fashion (i.e., sequentially segmented results throughout a recording).
To support intelligent analytics of gaze, summative, window-based, and AOI-specific DGMs can be sent to the Predictive Modules to generate predictions for a variable that is meaningful in the given experimental scenario, such as whether a participant’s task score will be above or below a threshold, or how long a participant will fixate on a visual cue based on the DGMs produced from the Descriptive Modules. The researcher can also configure whether classification or regression is to be performed, and the predictions, along with their accuracies, are produced. Further details on the modules, DGMs, window-segmentation of temporal gaze analytics, as well as the classifications are discussed below.
3.2. Descriptive Gaze Analytics
Table 2 presents the list of DGMs generated by BEACH-Gaze from raw gaze, with the overall goal of providing a comprehensive overview of the key gaze characteristics found in the raw gaze dataset. These DGMs can be generated for each researcher-defined AOI. Additionally, BEACH-Gaze can process raw gaze files on a per-person basis, producing DGMs for that specific individual, as well as in batches to produce aggregated DGMs for an entire group of individuals. The AOI-specific DGMs are inspired by the extensive review of various gaze measures documented in [].
Table 2.
Descriptive Gaze Measures.
For a person or a group of people, the DGMs aim to summarize the different quantifiable aspects of the eye tracking dataset. For instance, where applicable, a set of descriptive statistics (e.g., sum, mean, median, standard deviation (SD), minimum, and maximum values) is calculated to help the researcher understand the distribution, central tendency, and variability of a gaze dataset. In the context of saccades, such descriptive statistics can be applied to magnitude, duration, amplitude, velocity and peak velocity, as well as relative and absolute directions of valid data points captured over time. To determine peak velocity, BEACH-Gaze implements the algorithm proposed by [] and approximates saccade amplitude via the Euclidean distance between fixations [,]. Similarly, a set of DGMs is determined for fixations that describes the fundamental characteristics of the gaze dataset, such as its spread, central tendency, and range. In addition, the smallest boundary that can wrap around all fixations is calculated via the convex hull to indicate the area within which a person’s (or a group of people’s) gaze has moved.
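To make the amplitude approximation concrete, the sketch below computes saccade amplitudes as Euclidean distances between consecutive fixations and derives the basic descriptive statistics listed above; the Fixation record and method names are hypothetical simplifications rather than BEACH-Gaze’s internal types.

```java
import java.util.ArrayList;
import java.util.DoubleSummaryStatistics;
import java.util.List;

record Fixation(double x, double y, long durationMs) {}

public class SaccadeStats {

    // Approximate saccade amplitude as the Euclidean distance between consecutive fixations.
    public static List<Double> amplitudes(List<Fixation> fixations) {
        List<Double> amps = new ArrayList<>();
        for (int i = 1; i < fixations.size(); i++) {
            double dx = fixations.get(i).x() - fixations.get(i - 1).x();
            double dy = fixations.get(i).y() - fixations.get(i - 1).y();
            amps.add(Math.hypot(dx, dy));
        }
        return amps;
    }

    // Sum, mean, min, and max come from DoubleSummaryStatistics; the standard
    // deviation is computed from the mean in a second pass.
    public static void describe(List<Double> values) {
        DoubleSummaryStatistics stats = values.stream()
                .mapToDouble(Double::doubleValue).summaryStatistics();
        double variance = values.stream()
                .mapToDouble(v -> Math.pow(v - stats.getAverage(), 2))
                .average().orElse(0.0);
        System.out.printf("sum=%.1f mean=%.1f sd=%.1f min=%.1f max=%.1f%n",
                stats.getSum(), stats.getAverage(), Math.sqrt(variance),
                stats.getMin(), stats.getMax());
    }
}
```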
To support gaze analytics for eye tracking studies involving AOIs, BEACH-Gaze produces a range of AOI-specific DGMs. To quantify how a person (or a group of people) allocates visual attention across various AOIs, BEACH-Gaze generates stationary entropy to indicate how evenly gaze is distributed across different AOIs, and transition entropy to measure the randomness of transitions between different AOIs, as defined by [] and implemented in []. In addition, the proportion of fixations, and of their durations, spent in an AOI relative to all fixations captured is generated. To quantify how a person (or group of people) navigates across various AOIs, BEACH-Gaze generates statistics such as the count and proportion of gaze transitions from one AOI to the other for each pairwise combination of AOIs. Note that for a pair of AOIs named A and B, transitions from A to B differ from transitions from B to A. Also, when determining proportions, self-transitions are defined as gaze moving away from an AOI and returning to the same AOI without visiting any other AOIs in between. Moreover, to capture sequences of AOIs, as well as emerging subsequence patterns in a person’s (or a group of people’s) gaze, BEACH-Gaze implements the algorithms proposed by []. A sequence is defined as the ordered series of AOIs that make up a person’s entire scanpath, and BEACH-Gaze uses the algorithm proposed in [] to extract subsequence patterns from these sequences, in both expanded and collapsed forms. An expanded pattern includes all fixations, including consecutive fixations within the same AOI (i.e., repetitions), whereas collapsed patterns discard AOI repetitions. For example, with five AOIs named A, B, C, D, and E, the sequence ACCDEABAAAABC becomes ACDEABABC in its collapsed form and remains unchanged in its expanded form. Additional measures such as how often specific patterns occur within the AOI sequences of a group of people (i.e., pattern frequency), how frequently a particular pattern of AOIs appears across a group of people (i.e., sequence support), the mean occurrences of a specific pattern across all sequences analyzed (i.e., mean pattern frequency), and the relative frequency of a specific pattern within the entire set of gaze sequences (i.e., proportional pattern frequency) can then be determined using BEACH-Gaze.
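The following sketch illustrates one common way to compute stationary and transition entropy from a sequence of AOI labels, following the conditional-entropy formulation typically used in the literature; normalization conventions and the treatment of self-transitions vary across implementations, so this is an illustrative approximation rather than a verbatim reproduction of the algorithms cited above.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AoiEntropy {

    // Stationary entropy over the proportion of fixations landing in each AOI.
    public static double stationaryEntropy(List<String> aoiSequence) {
        Map<String, Integer> counts = new HashMap<>();
        aoiSequence.forEach(a -> counts.merge(a, 1, Integer::sum));
        double h = 0.0;
        for (int c : counts.values()) {
            double p = (double) c / aoiSequence.size();
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    // Transition entropy: entropy of the destination AOI conditioned on the source AOI,
    // weighted by how often each source AOI occurs among all transitions.
    public static double transitionEntropy(List<String> aoiSequence) {
        Map<String, Map<String, Integer>> trans = new HashMap<>();
        Map<String, Integer> fromCounts = new HashMap<>();
        for (int i = 1; i < aoiSequence.size(); i++) {
            String from = aoiSequence.get(i - 1), to = aoiSequence.get(i);
            trans.computeIfAbsent(from, k -> new HashMap<>()).merge(to, 1, Integer::sum);
            fromCounts.merge(from, 1, Integer::sum);
        }
        int total = aoiSequence.size() - 1;
        double h = 0.0;
        for (var e : trans.entrySet()) {
            double pFrom = (double) fromCounts.get(e.getKey()) / total;
            double hRow = 0.0;
            for (int c : e.getValue().values()) {
                double p = (double) c / fromCounts.get(e.getKey());
                hRow -= p * (Math.log(p) / Math.log(2));
            }
            h += pFrom * hRow;
        }
        return h;
    }
}
```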
Lastly, BEACH-Gaze supports the generation of DGMs such as blinks per minute (i.e., blink rate), dynamic change in pupil size (for each eye and across both eyes), the relationship between the time spent on processing information and the time spent on information search (i.e., fixation-to-saccade ratio), and the total time of eye movements including both fixations and saccades (i.e., scanpath duration). To determine pupil dilation, a baseline can be set by the researcher via the GUI (discussed in Section 3.4), where a person’s pupil sizes are observed over a period of time during relatively low-demand task conditions (e.g., during calibration). Subsequent enlargement of the pupils can then be determined relative to the baseline values.
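As a minimal illustration of how these remaining measures relate to their inputs, the sketch below computes blink rate, the fixation-to-saccade ratio, and pupil change relative to a low-demand baseline; the method signatures are assumptions made for this example.

```java
public class GazeRatios {

    // Blink rate expressed as blinks per minute over the recording duration.
    public static double blinkRate(int blinkCount, double recordingSeconds) {
        return blinkCount / (recordingSeconds / 60.0);
    }

    // Ratio of time spent processing information (fixations) to time spent searching (saccades).
    public static double fixationToSaccadeRatio(double totalFixationMs, double totalSaccadeMs) {
        return totalSaccadeMs == 0 ? Double.NaN : totalFixationMs / totalSaccadeMs;
    }

    // Relative pupil dilation versus a low-demand baseline (e.g., recorded during calibration).
    public static double relativePupilChange(double currentDiameterMm, double baselineDiameterMm) {
        return (currentDiameterMm - baselineDiameterMm) / baselineDiameterMm;
    }
}
```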
3.3. Evolution of DGMs Captured via Window Segmentation
BEACH-Gaze allows the experimenter to tailor periodic analytics of DGMs, supporting temporal gaze analyses aimed at reflecting gaze evolution over time as a person’s visual needs evolve throughout an interaction, which ultimately facilitates simulations of real-time gaze-based predictions. To achieve this, BEACH-Gaze provides four window-based methods to segment and configure periodic generations of DGMs (Figure 2), namely scheduled digests via tumbling windows, cumulated gaze via expanding windows, current snapshot via hopping windows, and irregular gaze events via session windows. In all windows, the researcher can customize the window size and timed interval as appropriate depending on the needs and goals of a given scenario.
Figure 2.
Window-based DGM Analytics to Facilitate Real-Time Classifications: (a) Scheduled Digests via Tumbling Window; (b) Irregular Events via Session Window; (c) Gaze Snapshots via Hopping Window; (d) Cumulative Gaze via Expanding Window.
To capture gaze as scheduled digests, BEACH-Gaze supports DGM analytics performed in a series of non-overlapping, fixed-size tumbling windows at scheduled contiguous time intervals (Figure 2a). For example, if the size of the tumbling window is set to 30 s, then the first window would contain DGMs for gaze collected between 00:00:00 and 00:00:30, the second window would contain DGMs for gaze generated between 00:00:30 and 00:01:00, and so on until all known gaze has been analyzed.
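A tumbling segmentation of this kind can be expressed compactly as below; the Window record and method name are illustrative assumptions, and BEACH-Gaze exposes this configuration through its GUI rather than code.

```java
import java.util.ArrayList;
import java.util.List;

record Window(long startMs, long endMs) {}

public class TumblingWindows {

    // Non-overlapping, fixed-size windows: [0, size), [size, 2*size), ...
    public static List<Window> segment(long recordingMs, long windowMs) {
        List<Window> windows = new ArrayList<>();
        for (long start = 0; start < recordingMs; start += windowMs) {
            windows.add(new Window(start, Math.min(start + windowMs, recordingMs)));
        }
        return windows;
    }
}
```

With a 30 s window over a 90 s recording, this yields the boundaries 0–30 s, 30–60 s, and 60–90 s, matching the example above.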
Alternatively, the researcher can emphasize irregular gaze events detected for a person (e.g., elevated values compared to an established baseline), which can be analyzed in a series of non-overlapping, variable-size session windows (Figure 2b) with a specific timeout duration and a maximum size. In this context, irregular gaze events are significant deviations in DGMs from a baseline specific to an individual or task. For instance, during calibration, or given an established norm for a given task, a baseline representing typical DGM values can be generated for a person or a specific task. Subsequent DGMs can then be compared to this baseline, whereby increases in value are considered irregular gaze events. For example, by observing a person during the initial phase of a task, one can establish a baseline profile of gaze behavior characterized by typical patterns and tendencies (e.g., average fixation duration, average blink rate); subsequent values are then evaluated relative to this baseline profile, where increases may be interpreted as notable deviations (i.e., events). What constitutes a meaningful deviation can be determined by the researcher in consideration of the nature of the task. As an example, with a session window set to a two-minute timeout and a maximum five-minute duration, the first window is created after detecting the initial irregular gaze event and continues to search for the next event for two minutes. If no further events are found, the window ends after two minutes. If another event is found, the window renews its search for two more minutes, and it either times out when no further events are found or ends once the maximum duration of five minutes is reached.
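The session-window logic can be sketched as follows, assuming event timestamps have already been detected by comparing DGMs against the baseline; the boundary handling (e.g., whether an event arriving after the maximum duration opens a new window) is a simplifying assumption for illustration and not a statement of BEACH-Gaze’s exact behavior.

```java
import java.util.ArrayList;
import java.util.List;

public class SessionWindows {

    record Window(long startMs, long endMs) {}

    // Group chronologically ordered event timestamps (ms) into session windows:
    // a window opens at the first event, each subsequent event renews the timeout,
    // and the window closes when the timeout elapses with no further events or
    // when the maximum duration is reached.
    public static List<Window> segment(List<Long> eventTimesMs, long timeoutMs, long maxMs) {
        List<Window> windows = new ArrayList<>();
        Long start = null, lastEvent = null;
        for (long t : eventTimesMs) {
            if (start == null) {
                start = t;                                  // open a new session at this event
            } else if (t - lastEvent > timeoutMs || t - start > maxMs) {
                windows.add(new Window(start, Math.min(lastEvent + timeoutMs, start + maxMs)));
                start = t;                                  // close the previous session, open a new one
            }
            lastEvent = t;
        }
        if (start != null) {
            windows.add(new Window(start, Math.min(lastEvent + timeoutMs, start + maxMs)));
        }
        return windows;
    }
}
```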
Moreover, the most recent gaze state of a person can be reflected in their gaze snapshots, where the last known gaze is analyzed via overlapping, fixed-size hopping windows (Figure 2c) configured with a window size and a hop size. For example, with a 90 s window size and a 60 s hop size, the first window would contain DGMs for gaze collected between 00:00:00 and 00:01:30, the second window would contain DGMs between 00:01:00 and 00:02:30, and so on until reaching the end of all known gaze.
Lastly, BEACH-Gaze supports cumulative analytics processed via overlapping, variable-size expanding windows (Figure 2d). For example, with a window size configured to be three minutes, the first window would contain DGMs for gaze collected between 00:00:00 and 00:03:00, the second window would expand to contain DGMs captured between 00:00:00 and 00:06:00, and so on; the last window would contain DGMs for the entire gaze dataset generated during an interaction.
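For completeness, the overlapping hopping and expanding segmentations described in the two preceding paragraphs can be sketched in the same style; again, the types and method names are illustrative assumptions rather than BEACH-Gaze’s internal API.

```java
import java.util.ArrayList;
import java.util.List;

public class OverlappingWindows {

    record Window(long startMs, long endMs) {}

    // Hopping windows: fixed size, advanced by a smaller hop, so consecutive windows overlap.
    public static List<Window> hopping(long recordingMs, long windowMs, long hopMs) {
        List<Window> windows = new ArrayList<>();
        for (long start = 0; start < recordingMs; start += hopMs) {
            windows.add(new Window(start, Math.min(start + windowMs, recordingMs)));
        }
        return windows;
    }

    // Expanding windows: all start at zero and grow by a fixed step until the full recording is covered.
    public static List<Window> expanding(long recordingMs, long stepMs) {
        List<Window> windows = new ArrayList<>();
        for (long end = stepMs; ; end += stepMs) {
            windows.add(new Window(0, Math.min(end, recordingMs)));
            if (end >= recordingMs) break;
        }
        return windows;
    }
}
```

With a 90 s window and a 60 s hop, hopping() produces 0–90 s, 60–150 s, and so on; with a three-minute step, expanding() produces 0–3 min, 0–6 min, and so on, mirroring the examples above.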
3.4. Predictive Gaze Analytics
In addition to providing descriptive gaze measures, BEACH-Gaze builds upon the Waikato Environment for Knowledge Analysis (WEKA) [] and integrates a range of established classification models in machine learning to support advanced gaze-enabled predictions. With window-based DGMs that can be generated throughout various stages of an interaction, the resulting classifications will also vary depending on the differences observed at various time intervals. The researcher can input DGMs into the Predictive Gaze Analytics module of BEACH-Gaze to generate classifications that predict a discrete category (e.g., whether a person belongs to the successful or unsuccessful group) or a continuous value (e.g., a task score) based on the DGMs captured using one or more windows. To support these gaze-driven predictions, BEACH-Gaze integrates the WEKA API version 3.8.6 [] and leverages a broad range of established classification models (outlined in Table 3), using default WEKA configurations (which can be customized), stratified 10-fold cross-validation for model evaluation, and Bonferroni-corrected t-tests for statistical testing to ensure robust and reliable performance.
Table 3.
Classification and Regression Models Supported in BEACH-Gaze.
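Because BEACH-Gaze delegates training and evaluation to WEKA, the underlying mechanism can be illustrated with plain WEKA API calls, as in the sketch below; the ARFF file name is hypothetical, and BEACH-Gaze performs the equivalent steps internally through its GUI rather than requiring the researcher to write this code.

```java
import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.SMO;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class DgmClassificationSketch {
    public static void main(String[] args) throws Exception {
        // Load window-based DGMs (hypothetical file name); the last attribute holds
        // the class label, e.g., successful vs. unsuccessful.
        Instances data = DataSource.read("window_dgms.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Default-configuration support vector machine (sequential minimal optimization).
        Classifier classifier = new SMO();

        // Stratified 10-fold cross-validation, as used for model evaluation in BEACH-Gaze.
        Evaluation evaluation = new Evaluation(data);
        evaluation.crossValidateModel(classifier, data, 10, new Random(1));
        System.out.printf("Accuracy: %.2f%%%n", evaluation.pctCorrect());
    }
}
```

Swapping SMO for any other model listed in Table 3 (e.g., weka.classifiers.trees.RandomForest) follows the same pattern.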
3.5. User Interface
Figure 3 shows the GUI of BEACH-Gaze, highlighting the two main modules that generate descriptive (Figure 3a) and predictive analytics (Figure 3b). In the Descriptive Analytics tab, the researcher first inputs the raw gaze files (either for one person or batched for a group of people) and selects the desired output directory for the DGMs, then proceeds to choose one or more of the windows to run the analysis. As discussed in Section 3.3, timed and scheduled DGMs are generated in a temporal fashion to capture sequential gaze measures detected at various stages. If none of the windows is selected, one summative file containing all DGMs for the entire duration is generated (instead of sequential DGM files at various time intervals). The researcher can then perform classification experiments in the Predictive Analytics tab by providing the sequential DGM files (e.g., the output from Descriptive Analytics) and setting the type of model to apply (i.e., classification or regression) depending on the goal of the prediction.
Figure 3.
Graphical User Interface in BEACH-Gaze: (a) Descriptive Gaze Analytics; (b) Predictive Gaze Analytics.
In the case of a tumbling window (as illustrated in Figure 2a), the DGMs, and consequently the predictions based upon them, are generated using gaze collected in a researcher-defined time interval (e.g., a window size of 60 s is set in the example shown in Figure 3a, which contains gaze data captured during the initial 0–60 s); the window then moves on to the next adjacent interval to generate subsequent DGMs and predictions (i.e., based on gaze data captured between 60–120 s, then 120–180 s, and so on), and tumbles forward until reaching the end of an interaction.
In the expanding window (as illustrated in Figure 2d), an initial set of gaze data is analyzed (e.g., 60 s in the example in Figure 3a, meaning the first window contains gaze data captured during the initial 0–60 s), and the window is then expanded to include new gaze data at the next specified time interval (i.e., 0–120 s, followed by 0–180 s, and so on).
In the case of a hopping window (as illustrated in Figure 2c), BEACH-Gaze processes gaze using a researcher-defined window size, then moves forward to the next scheduled hop relative to the previous one. The example shown in Figure 3a has a 60 s window size and a 30 s hop size, meaning that every 30 s, gaze over the last 60 s is analyzed (i.e., DGMs and the predictions based upon them are generated using gaze captured between 0–60 s, 30–90 s, 60–120 s, and so on).
Lastly, throughout an interaction, a person may encounter pivotal moments that significantly influence their performance. These moments can manifest as distinct gaze behaviors, reflecting phases of irregular gaze events. In this context, gaze events are essentially deviations (i.e., subsequent values exceeding a baseline) from the established norms (i.e., a baseline that can be determined during calibration or at the start of an interaction) of a person’s gaze. BEACH-Gaze supports a number of measures for detecting an irregular gaze event, including saccade magnitude, saccade direction, left eye pupil diameter, right eye pupil diameter, the average of the left and right eye pupil diameters combined, fixation duration, and blink rate. When using the session window (as illustrated in Figure 2b), the analysis begins when the first event is found, and the window then keeps searching for the next event within a specified time period. The example in Figure 3a shows a 10 s timeout, a 90 s maximum duration, a 30 s baseline duration, and SACCADE_MAG selected in the dropdown menu, meaning that saccade magnitude is used to detect irregular gaze events: a baseline (i.e., the average saccade magnitude observed) is established after 30 s, whereby subsequent saccade magnitudes that are higher in value are deemed “events”. When another event is found, the session window grows (i.e., the 10 s timeout is renewed) until it reaches the maximum duration of 90 s. If no further events are found, the session window closes.
4. Use Cases
BEACH-Gaze has contributed to technology innovation and knowledge discovery across several application areas that leverage eye tracking. Three example use cases are discussed below, illustrating applications of descriptive and predictive gaze analytics: (i) supporting technology-assisted exercise applications aimed at increasing physical activity intensity for individuals with autism spectrum disorder (discussed in Section 4.1); (ii) enabling physiologically adaptive visualizations that dynamically respond to an individual’s gaze (discussed in Section 4.2); and (iii) enhancing aviation safety by predicting pilot performance during flight maneuvers based on DGMs, thereby informing the timing of critical interventions (discussed in Section 4.3). These case studies highlight the critical role of gaze analytics in enhancing machine intelligence—enabling systems to interpret human attention, adapt in real time, and make informed predictions—thereby advancing the synergy between human users and machine-empowered intelligent decision-making in the era of artificial intelligence and advanced data science.
4.1. Technology Assisted Exercise for Individuals with Autism Spectrum Disorder
The integration of exercise and technology has sparked the emergence of digital fitness since the early 2000s, with innovations designed to inspire greater physical activity []. Users of fitness devices and applications have often found these technologies effective in encouraging and sustaining physical activity, leading to enhanced health and overall well-being. However, a significant number of these tools fall short for certain groups, such as individuals with autism spectrum disorder (ASD). Research in the U.S. found that 1 in 54 children had been diagnosed with ASD as of 2016 []; ASD is characterized by deficits in social communication, restrictive interests, and repetitive behaviors []. Consequently, individuals with ASD are more likely to lead sedentary lifestyles, increasing the risk of obesity and other health issues [].
Given these factors, there is a pressing need to promote equitable access to sport by advancing technology-assisted exercise applications that encourage physical activity amongst individuals with ASD. Technology-assisted exercise applications, such as those described in [,], leverage real-time heart rate visualizations to encourage individuals with ASD to participate in longer and more intense physical activities, in both single-user and multi-user modes. More specifically, while wearing a Scosche heart rate monitor that measures heart rate in beats per minute (BPM), a user begins an exercise session as a main character that flies forward along a path as the heart rate elevates or drops in real time. The goal is to enrich the user experience of physical exercise, through technology, to increase motivation and promote engagement for individuals with ASD.
To evaluate the effectiveness of such an application, the researchers conducted a series of eye tracking usability studies involving 20 verbal individuals with ASD (all of whom were able to read and comprehend the instructions provided). Each participant completed two stationary bicycle exercise sessions (on a Matrix IC7 Indoor Stationary Bicycle) on separate days: one control session without the application that visualizes heart rates in real time, and one experimental session with the application. The order of these sessions was randomized and counterbalanced across participants to mitigate order effects.
The results indicated that, amongst the individuals with ASD, 83% achieved higher heart rates, 66.6% maintained heart rates at or above 90 BPM, and 27.7% re-engaged in their exercise to reach 90 BPM after previously dropping below it. To further investigate how the individuals interacted with the given iPadOS application, the participants were grouped into two categories using a median split of the heart rates achieved during the exercise (at 118 BPM): the above-median group and the below-median group. DGMs produced by BEACH-Gaze showed several key differences in visual attention between the two groups, as shown in Figure 4. For instance, the individuals who achieved higher heart rates searched for visual cues that were relatively close to one another (as indicated by the Pearson correlation coefficient r value of −0.511 between heart rate and mean saccade magnitude), suggesting a more consistent and controlled interaction. This is further amplified by the SD of the saccade magnitudes (with an r value of −0.273), indicating that those who achieved higher heart rates exhibited less dispersed searches, suggesting more focused gaze behaviors. Furthermore, individuals with above-median heart rates generated longer scanpaths (with an r value of 0.261), suggesting increased engagement with the iPadOS application. Similarly, positive correlations between heart rates and convex hull areas were found (with an r value of 0.633), indicating that the individuals who achieved higher heart rates also scanned a larger area as they interacted with the iPadOS application. Notably, while negative correlations are evident for the above-median group (as heart rate increased, both the mean saccade magnitude and the SD of saccade magnitude decreased, shown in Figure 4a), only positive correlations are found for the below-median group (as heart rate increased, both the mean saccade magnitude and the SD of saccade magnitude increased, shown in Figure 4b). This finding suggests that individuals who were less successful in the exercise may have shown lower engagement with the iPadOS application, as indicated by their dispersed DGMs. This, in turn, highlights the potential benefits of the proposed technology-assisted application in its effectiveness to engage individuals with ASD, thereby enhancing exercise intensity and supporting the achievement of physical activity goals.
Figure 4.
Heat map matrices showing correlation coefficient (r values) generated between pairwise DGMs and heart rates in evaluative studies using eye tracking: (a) r values generated for the above median group; (b) r values generated for the below median group.
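The r values reported in Figure 4 follow the standard Pearson formulation; as a brief illustration (not part of BEACH-Gaze’s reported output), the correlation between, for example, participants’ heart rates and a DGM such as mean saccade magnitude can be computed as follows.

```java
public class PearsonCorrelation {

    // Pearson correlation coefficient r between paired observations,
    // e.g., participants' mean heart rates and their mean saccade magnitudes.
    public static double r(double[] x, double[] y) {
        int n = x.length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= n;
        meanY /= n;
        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < n; i++) {
            cov  += (x[i] - meanX) * (y[i] - meanY);
            varX += (x[i] - meanX) * (x[i] - meanX);
            varY += (y[i] - meanY) * (y[i] - meanY);
        }
        return cov / Math.sqrt(varX * varY);
    }
}
```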
4.2. Physiologically Adaptive Visualization for Mappings Between Ontologies
Traditional visual aids supporting human interaction with structured datasets such as ontologies have typically adopted one-size-fits-all solutions, overlooking personalized visual cues that could enhance human comprehension of complex data and ontological relationships. Contributing to the advancement of adaptive visualizations for mappings between pairwise ontologies, Figure 5 shows a physiologically adaptive visualization that customizes visual cues for an individual user based on that person’s eye gaze. The goal of an adaptive visualization system such as that in [,] is to leverage signals in eye gaze to predict a user’s success in a given task. If a potential failure is predicted, real-time visual interventions, by means of highlighting key elements or de-emphasizing distractions, are triggered to guide the user’s attention and support task completion.
Figure 5.
Adaptive Visualization Driven by Real-Time Gaze-Based Predictions of User Failure: (a) An example of highlighting upon predicted user failure; (b) An example of deemphasis upon predicted user failure. Clicking on a node toggles the expansion or collapse of an ontological class. Solid triangles represent nodes with children, hollow triangles indicate nodes that are fully expanded in the visualization, and dotted nodes signify classes without children. Solid lines between nodes denote mappings between classes that are fully visible in the visualization, e.g., “Urinary_System_Part” in one ontology is mapped to “muscle” in another ontology in (a). Dotted lines represent mappings between subclasses where at least one class is not currently visible in the visualization, e.g., “leg” is mapped to a subclass of “Extremity_Part” in (b).
Contributing to recognizing when timely interventions should be invoked, a series of experiments utilizing BEACH-Gaze has demonstrated [,,,,,] the benefits of comparing different approaches to window segmentation in sequential gaze analytics when generating user predictions in the domain of human-semantic data interaction. Building upon the knowledge gained across different classification models and influential gaze measures that predict when adaptations should be initiated, the gaze-adaptive visualization [,] advances personalized visualization by providing solutions that also recognize what (e.g., adapting to an individual’s performance) and how (e.g., displaying visual overlays dynamically in real time) to adapt for an individual user. More specifically, adaptive visualization is achieved using a long short-term memory network to continuously predict a user’s task success or failure based on real-time gaze collected while the person is interacting with the visualizations. When a task failure is predicted, visual interventions (e.g., highlighting and deemphasis) are applied to direct user attention and aid task completion. An empirical evaluation of this adaptive visualization with 76 participants in a between-subjects study indicated improved user performance without tradeoffs in workload or task speed.
4.3. Enhanced Aviation Safety via Gaze-Driven Predictions of Pilot Performance
Commercial air travel is widely regarded as one of the safest modes of transportation today, with fewer than one accident per million departures []. Over the past few decades, the number of aviation accidents has steadily declined due to technological advancements such as automation, along with enhanced training and improved air traffic control procedures []. However, over-reliance and overconfidence in automated systems have potentially resulted in a lack of manual and active monitoring by flight crews, with 60–80% of aviation accidents attributed to human error []. Effective and efficient monitoring is vital for aviation safety, particularly during dynamic phases such as takeoff and landing, where accurately observing various flight instruments and integrating multiple sources of readings and visual cues are crucial for decision-making. Since most of this visual information is processed through the human eyes, there is an opportunity to investigate the feasibility of incorporating eye tracking into human-centered flight deck designs.
A first step towards realizing future intelligent aircraft that can potentially anticipate and mitigate threats at runtime is to identify the optimal timing for system intervention, such as whether a pilot will succeed or fail while performing a flight maneuver. To this end, BEACH-Gaze has enabled predictive analytics of pilots’ gaze in simulated flight scenarios. In a study involving 17 participants asked to take off in a Cessna 172 aircraft equipped with the six-pack instrument panel on the X-Plane 11 simulator, results showed that it was feasible to predict pilots’ performance in the takeoff with up to 83.5% accuracy across a range of established classifiers []. The DGMs found to be most influential in predicting a pilot’s performance in the takeoff included less dispersed gaze magnitudes, larger average saccade magnitudes, longer scanpaths, and larger convex hulls. Furthermore, pilots who performed well during the climb phase demonstrated quicker visual searches, those who performed better during the takeoff phase exhibited a wider scanned area of their visual environment, and more successful pilots reported lower cognitive workload, which was also reflected in their pupil dilations [].
In another X-Plane 12 simulated study [] involving 50 pilots performing an Instrument Landing System (ILS) approach in cloudy conditions and landing a Cessna 172 aircraft at the Seattle-Tacoma International Airport, BEACH-Gaze enabled the comparison of seven different approaches to detecting notable gaze events experienced by a pilot, such as elevated values of selected DGMs including saccade magnitude, saccade direction, pupil dilation, fixation duration, blink rate, and stationary and transition entropy. The results showed the effectiveness of leveraging session windows to detect notable gaze collected at pivotal moments of a given task when predicting pilot performance. As shown in Figure 6a, pilot success and failure could be predicted as early as 3.7 min after the task began, with accuracies up to 80.92% (after 4.3 min), using fixation duration to detect notable gaze events experienced by the pilots. Several established classifiers without special configurations outperformed a baseline classifier that predicts the majority class (i.e., zero rule), with the support vector machine classifier (i.e., sequential minimal optimization) producing predictions with higher accuracies.
Figure 6.
Predicting Pilot Success and Failure in Simulated Approach and Landing over Time using Average Fixation Duration as the Baseline to Detect Irregular Gaze Events: (a) Prediction accuracies of multiple classifiers; (b) Performance distribution of the classification models.
Demonstrating that capturing notable gaze behaviors can reflect key phases of critical events potentially indicative of a pilot’s overall performance in the ILS approach, a session-window-based approach to predicting task success and failure (a binary categorical outcome, either above or below a performance threshold) was evaluated in []. Specifically, gaze features available at runtime, such as saccade magnitude and direction, pupil dilation, fixation duration, and blink rate, were used to detect irregular gaze events. A baseline (the average values of these gaze features) for a pilot was established after observing the person for the first two minutes after task initiation. A session window was empirically determined and mapped to a four-second timeout and a sixty-second maximum duration to detect notable events. The rationale is that at the start of a task, cognitive demand is minimal, allowing for the capture of baseline behaviors reflecting a pilot’s typical procedures in visual search and information processing. As the task becomes increasingly demanding and complex (leading to increased mental stress and workload), increases from the baseline values (i.e., notable events) may highlight critical moments during flight maneuvers that can be leveraged to predict the pilot’s performance. The predictions showed improved accuracies across a range of classifiers when compared against results derived from more established gaze metrics such as stationary and transition entropy (shown in Figure 7).
Figure 7.
Classifier Performance Across Gaze Features when Predicting Pilot Success and Failure in the ILS Approach using a Session Window.
Compared to the zero rule classifier, used as a benchmark that predicts the majority class and consistently yields an accuracy of approximately 64% with minimal variance across all features, all other classifiers showed varied levels of improvement in their prediction accuracies. Random forest demonstrated superior performance across most gaze features, particularly with stationary entropy, saccade magnitude, and blink rate. It achieved high median accuracy with relatively narrow interquartile ranges, indicating both effectiveness and stability. Sequential minimal optimization also performed robustly, especially when combined with fixation duration and pupil dilation. Logistic regression and multilayer perceptron exhibited feature-dependent performance, where the former excelled when combined with blink rate and pupil dilation, and the latter showed strong results when combined with saccade magnitude and fixation duration. Gaze features including saccade magnitude, pupil dilation, and fixation duration emerged as the most discriminative, enabling higher prediction accuracies across classifiers. Overall, all gaze features led to improved prediction accuracies across classifiers when compared to established gaze measures such as transition and stationary entropy.
5. Future Work
This paper has presented BEACH-Gaze, which is designed to simplify the technical aspects of gaze data analysis, thereby making eye tracking more accessible to a broader scientific community. In an era marked by rapid advances in artificial intelligence, machine learning, and the growth of large datasets, this work seeks to lower technical barriers, enabling broader use of, and participation in, eye tracking technology. It also promotes accessible, intelligent analysis of gaze data across disciplines, supporting the development of machine intelligence and gaze-enabled intelligent systems.
BEACH-Gaze can be extended significantly in future development to provide robust support aiding the broader community interested in eye tracking research and applications. Several development work items are in the pipeline to improve and extend its features and functionalities. In particular, we plan to incorporate advanced deep learning models in BEACH-Gaze to enhance the precision and reliability of gaze-based predictions and classifications. This will likely enable more sophisticated analysis and interpretation of eye tracking data, opening up new possibilities in gaze-enabled intelligent systems, particularly in complex and dynamic interaction scenarios where traditional models may fall short. Also, we plan to support multi-task learning in more advanced predictive analytics, whereby BEACH-Gaze can perform multiple related tasks simultaneously (e.g., classifying gaze behavior while also predicting user intent), enriching the analytical depth and improving system efficiency.
Moreover, we intend to include visualizations to support graphical data analysis, such as static and dynamic heatmaps, gaze plots, time-series graphs, and group comparisons. These visualizations will enable more intuitive and powerful ways for researchers and practitioners to visually explore, analyze, and interpret gaze data, making it easier to gain actionable insights and to identify patterns and trends otherwise hidden by traditional analysis methods. Furthermore, it is possible to explore the semantic interpretation of gaze as a communicative modality. Beyond its role in reflecting cognitive processes, eye gaze may also be interpreted as a sophisticated non-verbal language that could be used to construct a dictionary of gaze-based expressions [], whereby DGMs could be mapped to communicative intent. As such, it may be possible to integrate semantic modeling to uncover patterns in gaze behavior that correspond to meaningful communicative signals, further enriching the interpretation of gaze semantics and deepening our understanding of human intent, particularly in social, educational, and clinical contexts.
Lastly, we aim to expand compatibility to support a wider range of eye trackers to make BEACH-Gaze more accessible to a broader audience. This includes integration with well-known eye tracking hardware from notable manufacturers to accommodate the diverse needs and preferences of researchers and practitioners in the field. The overall goal of these enhancements is to collectively make BEACH-Gaze a more powerful and versatile tool that facilitates a deeper understanding of gaze data and fosters innovation in gaze analytics across multiple domains, research scenarios, and user needs. Additionally, while this paper has focused on the design, architecture, functional capabilities, and cross-domain applicability of BEACH-Gaze, future evaluations involving empirical usability studies with distinct user groups (e.g., technical vs. non-technical users) can further quantify its effectiveness, efficiency, and user satisfaction.
Author Contributions
Conceptualization, B.F.; methodology, B.F.; software, B.F., K.C., A.R.S., P.G., N.G.G., A.J. and M.H.; validation, B.F.; formal analysis, B.F.; investigation, B.F.; resources, B.F.; data curation, B.F.; writing—original draft preparation, B.F.; writing—review and editing, B.F.; visualization, B.F.; supervision, B.F.; project administration, B.F.; funding acquisition, B.F. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of California State University Long Beach (protocol codes 23-075, 21-136, and 19-121; date of approval 22 October 2024).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
The data presented in this study are not openly available due to privacy and ethical restrictions. Requests to access the data should be directed to the lead author.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| BEACH-Gaze | Beach Environment for the Analytics of Human Gaze |
| AOI | area of interest |
| GUI | graphical user interface |
| DGM | descriptive gaze measure |
| SD | standard deviation |
| WEKA | Waikato Environment for Knowledge Analysis |
| ASD | autism spectrum disorder |
| BPM | beats per minute |
| ILS | Instrument Landing System |
References
- Zammarchi, G.; Conversano, C. Application of Eye Tracking Technology in Medicine: A Bibliometric Analysis. Vision 2021, 5, 56. [Google Scholar] [CrossRef]
- Ruppenthal, T.; Schweers, N. Eye Tracking as an Instrument in Consumer Research to Investigate Food from A Marketing Perspective: A Bibliometric and Visual Analysis. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 1095–1117. [Google Scholar] [CrossRef]
- Moreno-Arjonilla, J.; López-Ruiz, A.; Jiménez-Pérez, J.R.; Callejas-Aguilera, J.E.; Jurado, J.M. Eye-tracking on Virtual Reality: A Survey. Virtual Real 2024, 28, 38. [Google Scholar] [CrossRef]
- Arias-Portela, C.Y.; Mora-Vargas, J.; Caro, M. Situational Awareness Assessment of Drivers Boosted by Eye-Tracking Metrics: A Literature Review. Appl. Sci. 2024, 14, 1611. [Google Scholar] [CrossRef]
- Ke, F.; Liu, R.; Sokolikj, Z.; Dahlstrom-Hakki, I.; Israel, M. Using Eye-Tracking in Education: Review of Empirical Research and Technology. Educ. Technol. Res. Dev. 2024, 72, 1383–1418. [Google Scholar] [CrossRef]
- Duchowski, A.T. Eye Tracking Methodology: Theory and Practice, 3rd ed.; Springer: Cham, Switzerland, 2017. [Google Scholar]
- Dalmaijer, E.S.; Mathôt, S.; Van der Stigchel, S. PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments. Behav. Res. Methods 2014, 46, 913–921. [Google Scholar] [CrossRef]
- Sogo, H. GazeParser: An open-source and multiplatform library for low-cost eye tracking and analysis. Behav. Res. Methods 2013, 45, 684–695. [Google Scholar] [CrossRef]
- Peirce, J.W. PsychoPy—Psychophysics software in Python. J. Neurosci. Methods 2007, 162, 8–13. [Google Scholar] [CrossRef]
- Straw, A. Vision Egg: An open-source library for realtime visual stimulus generation. Front. Neuroinform. 2008, 2, 4. [Google Scholar] [CrossRef] [PubMed]
- Voßkühler, A.; Nordmeier, V.; Kuchinke, L.; Jacobs, A.M. OGAMA (Open Gaze and Mouse Analyzer): Open-source software designed to analyze eye and mouse movements in slideshow study designs. Behav. Res. Methods 2008, 40, 1150–1162. [Google Scholar] [CrossRef]
- Zhang, X.; Sugano, Y.; Bulling, A. Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–13. [Google Scholar] [CrossRef]
- Krause, A.F.; Essig, K. LibreTracker: A Free and Open-source Eyetracking Software for head-mounted Eyetrackers. In Proceedings of the 20th European Conference on Eye Movements, Alicante, Spain, 18–22 August 2019; p. 391. [Google Scholar]
- De Tommaso, D.; Wykowska, A. TobiiGlassesPySuite: An open-source suite for using the Tobii Pro Glasses 2 in eye-tracking studies. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, New York, NY, USA, 25–28 June 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Akkil, D.; Isokoski, P.; Kangas, J.; Rantala, J.; Raisamo, R. TraQuMe: A tool for measuring the gaze tracking quality. In Proceedings of the Symposium on Eye Tracking Research and Applications, New York, NY, USA, 26–28 March 2014; pp. 327–330. [Google Scholar] [CrossRef]
- Kar, A.; Corcoran, P. Development of Open-source Software and Gaze Data Repositories for Performance Evaluation of Eye Tracking Systems. Vision 2019, 3, 55. [Google Scholar] [CrossRef]
- Ghose, U.; Srinivasan, A.A.; Boyce, W.P.; Xu, H.; Chng, E.S. PyTrack: An end-to-end analysis toolkit for eye tracking. Behav. Res. Methods 2020, 52, 2588–2603. [Google Scholar] [CrossRef] [PubMed]
- Faraji, Y.; van Rijn, J.W.; van Nispen, R.M.A.; van Rens, G.H.M.B.; Melis-Dankers, B.J.M.; Koopman, J.; van Rijn, L.J. A toolkit for wide-screen dynamic area of interest measurements using the Pupil Labs Core Eye Tracker. Behav. Res. Methods 2023, 55, 3820–3830. [Google Scholar] [CrossRef]
- Kardan, S.; Lallé, S.; Toker, D.; Conati, C. EMDAT: Eye Movement Data Analysis Toolkit, version 1.x.; The University of British Columbia: Vancouver, BC, Canada, 2021. [Google Scholar] [CrossRef]
- Niehorster, D.C.; Hessels, R.S.; Benjamins, J.S. GlassesViewer: Open-source software for viewing and analyzing data from the Tobii Pro Glasses 2 eye tracker. Behav. Res. Methods 2020, 52, 1244–1253. [Google Scholar] [CrossRef]
- Cercenelli, L.; Tiberi, G.; Corazza, I.; Giannaccare, G.; Fresina, M.; Marcelli, E. SacLab: A toolbox for saccade analysis to increase usability of eye tracking systems in clinical ophthalmology practice. Comput. Biol. Med. 2017, 80, 45–55. [Google Scholar] [CrossRef]
- Andreu-Perez, J.; Solnais, C.; Sriskandarajah, K. EALab (Eye Activity Lab): A MATLAB Toolbox for Variable Extraction, Multivariate Analysis and Classification of Eye-Movement Data. Neuroinformatics 2016, 14, 51–67. [Google Scholar] [CrossRef] [PubMed]
- Krassanakis, V.; Filippakopoulou, V.; Nakos, B. EyeMMV toolbox: An eye movement post-analysis tool based on a two-step spatial dispersion threshold for fixation identification. J. Eye Mov. Res. 2014, 7, 1–10. [Google Scholar] [CrossRef]
- Berger, C.; Winkels, M.; Lischke, A.; Höppner, J. GazeAlyze: A MATLAB toolbox for the analysis of eye movement data. Behav. Res. Methods 2012, 44, 404–419. [Google Scholar] [CrossRef]
- Mardanbegi, D.; Wilcockson, T.D.W.; Xia, B.; Gellersen, H.W.; Crawford, T.O.; Sawyer, P. PSOVIS: An interactive tool for extracting post-saccadic oscillations from eye movement data. In Proceedings of the 19th European Conference on Eye Movements, Wuppertal, Germany, 20–24 August 2017; Available online: http://cogain2017.cogain.org/camready/talk6-Mardanbegi.pdf (accessed on 4 November 2025).
- Zhegallo, A.V.; Marmalyuk, P.A. ETRAN—R: Extension Package for Eye Tracking Results Analysis. In Proceedings of the 19th European Conference on Eye Movements, Wuppertal, Germany, 20–24 August 2017. [Google Scholar] [CrossRef]
- Gazepoint Product Page. Available online: https://www.gazept.com/shop/?v=0b3b97fa6688 (accessed on 24 July 2025).
- BEACH-Gaze Repository. Available online: https://github.com/TheD2Lab/BEACH-Gaze (accessed on 24 July 2025).
- Liu, G.T.; Volpe, N.J.; Galetta, S.L. Neuro-Ophthalmology Diagnosis and Management, 3rd ed.; Elsevier: Amsterdam, The Netherlands, 2018; Chapter 13; p. 429. ISBN 9780323340441. [Google Scholar]
- Spector, R.H. The Pupils. In Clinical Methods: The History, Physical, and Laboratory Examinations, 3rd ed.; Walker, H.K., Hall, W.D., Hurst, J.W., Eds.; Butterworths: Boston, MA, USA, 1990. [Google Scholar]
- Steichen, B. Computational Methods to Infer Human Factors for Adaptation and Personalization Using Eye Tracking. In A Human-Centered Perspective of Intelligent Personalized Environments and Systems; Human–Computer Interaction Series; Ferwerda, B., Graus, M., Germanakos, P., Tkalčič, M., Eds.; Springer: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
- Bahill, T.; McDonald, J.D. Frequency limitations and optimal step size for the two-point central difference derivative algorithm with applications to human eye movement data. IEEE Trans. Biomed. Eng. 1983, BME-30, 191–194. [Google Scholar] [CrossRef]
- McGregor, D.K.; Stern, J.A. Time on Task and Blink Effects on Saccade Duration. Ergonomics 1996, 39, 649–660. [Google Scholar] [CrossRef]
- Holmqvist, K.; Nyström, M.; Andersson, R.; Dewhurst, R.; Jarodzka, H.; van de Weijer, J. Eye Tracking: A Comprehensive Guide to Methods and Measures, 1st ed.; Oxford University Press: Oxford, UK, 2011. [Google Scholar]
- Krejtz, K.; Duchowski, A.; Szmidt, T.; Krejtz, I.; González Perilli, F.; Pires, A.; Vilaro, A.; Villalobos, N. Gaze transition entropy. ACM Trans. Appl. Percept. 2015, 13, 1–20. [Google Scholar] [CrossRef]
- Doellken, M.; Zapata, J.; Thomas, N.; Matthiesen, S. Implementing innovative gaze analytic methods in design for manufacturing: A study on eye movements in exploiting design guidelines. Procedia CIRP 2021, 100, 415–420. [Google Scholar] [CrossRef]
- Steichen, B.; Wu, M.M.A.; Toker, D.; Conati, C.; Carenini, G. Te,Te,Hi,Hi: Eye Gaze Sequence Analysis for Informing User-Adaptive Information Visualizations. In Proceedings of the 22nd International Conference on User Modeling, Adaptation, and Personalization, Aalborg, Denmark, 7–11 July 2014; Volume 8538, pp. 183–194. [Google Scholar] [CrossRef]
- West, J.M.; Haake, A.R.; Rozanski, E.P.; Karn, K.S. eyePatterns: Software for identifying patterns and similarities across fixation sequences. In Proceedings of the 2006 Symposium on Eye Tracking Research & Applications, San Diego, CA, USA, 27–29 March 2006; pp. 149–154. [Google Scholar]
- Frank, E.; Hall, M.A.; Witten, I.H. The WEKA Workbench. In Online Appendix for Data Mining: Practical Machine Learning Tools and Techniques, 4th ed.; Morgan Kaufmann: San Francisco, CA, USA, 2016. [Google Scholar]
- WEKA API. 2024. Available online: https://waikato.github.io/weka-wiki/downloading_weka/ (accessed on 24 July 2025).
- Parrott, M.; Ruyak, J.; Liguori, G. The History of Exercise Equipment: From Sticks and Stones to Apps and Phones. ACSM’s Health Fit. J. 2020, 24, 5–8. [Google Scholar] [CrossRef]
- Maenner, M.J.; Shaw, K.A.; Baio, J.; Washington, A.; Patrick, M.; DiRienzo, M.; Christensen, D.L.; Wiggins, L.D.; Pettygrove, S.; Andrews, J.G.; et al. Prevalence of Autism Spectrum Disorder Among Children Aged 8 Years—Autism and Developmental Disabilities Monitoring Network, 11 Sites, United States, 2016. Morb. Mortal. Wkly. Rep. (MMWR) Surveill. Summ. 2020, 69, 1–12. [Google Scholar] [CrossRef]
- Shaw, K.A.; Williams, S.; Hughes, M.M.; Warren, Z.; Bakian, A.V.; Durkin, M.S.; Esler, A.; Hall-Lande, J.; Salinas, A.; Vehorn, A.; et al. Statewide County-Level Autism Spectrum Disorder Prevalence Estimates—Seven U.S. States, 2018. Ann. Epidemiol. 2023, 79, 39–43. [Google Scholar] [CrossRef] [PubMed]
- Dieringer, S.T.; Zoder-Martell, K.; Porretta, D.L.; Bricker, A.; Kabazie, J. Increasing Physical Activity in Children with Autism Through Music, Prompting and Modeling. Psychol. Sch. 2017, 54, 421–432. [Google Scholar] [CrossRef]
- Fu, B.; Chao, J.; Bittner, M.; Zhang, W.; Aliasgari, M. Improving Fitness Levels of Individuals with Autism Spectrum Disorder: A Preliminary Evaluation of Real-Time Interactive Heart Rate Visualization to Motivate Engagement in Physical Activity. In Proceedings of the 17th International Conference on Computers Helping People with Special Needs, Online, 9–11 September 2020; Springer: New York, NY, USA; pp. 81–89. [Google Scholar] [CrossRef]
- Fu, B.; Orevillo, K.; Lo, D.; Bae, A.; Bittner, M. Real-Time Heart Rate Visualization for Individuals with Autism Spectrum Disorder—An Evaluation of Technology Assisted Physical Activity Application to Increase Exercise Intensity. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications—Human Computer Interaction Theory and Applications, Rome, Italy, 27–29 February 2024; SciTePress: Rome, Italy, 2024; pp. 455–463, ISBN 978-989-758-679-8, ISSN 2184-4321. [Google Scholar] [CrossRef]
- Fu, B.; Chow, N. AdaptLIL: A Real-Time Adaptive Linked Indented List Visualization for Ontology Mapping. In Proceedings of the 23rd International Semantic Web Conference, Baltimore, MD, USA, 11–15 November 2024; Volume 15232, pp. 3–22. [Google Scholar] [CrossRef]
- Chow, N.; Fu, B. AdaptLIL: A Gaze-Adaptive Visualization for Ontology Mapping; IEEE VIS: Washington, DC, USA, 2024. [Google Scholar]
- Fu, B. Predictive Gaze Analytics: A Comparative Case Study of the Foretelling Signs of User Performance During Interaction with Visualizations of Ontology Class Hierarchies. Multimodal Technol. Interact. 2024, 8, 90. [Google Scholar] [CrossRef]
- Fu, B.; Steichen, B. Impending Success or Failure? An Investigation of Gaze-Based User Predictions During Interaction with Ontology Visualizations. In Proceedings of the International Conference on Advanced Visual Interfaces, Frascati, Italy, 6–10 June 2022; pp. 1–9. [Google Scholar] [CrossRef]
- Fu, B.; Steichen, B. Supporting User-Centered Ontology Visualization: Predictive Analytics using Eye Gaze to Enhance Human-Ontology Interaction. Int. J. Intell. Inf. Database Syst. 2021, 15, 28–56. [Google Scholar] [CrossRef]
- Fu, B.; Steichen, B.; McBride, A. Tumbling to Succeed: A Predictive Analysis of User Success in Interactive Ontology Visualization. In Proceedings of the 10th International Conference on Web Intelligence, Mining and Semantics, Biarritz, France, 30 June–3 July 2020; ACM: New York, NY, USA; pp. 78–87. [Google Scholar] [CrossRef]
- Fu, B.; Steichen, B. Using Behavior Data to Predict User Success in Ontology Class Mapping—An Application of Machine Learning in Interaction Analysis. In Proceedings of the IEEE 13th International Conference on Semantic Computing, Newport Beach, CA, USA, 30 January–1 February 2019; pp. 216–223. [Google Scholar] [CrossRef]
- Fu, B.; Steichen, B.; Zhang, W. Towards Adaptive Ontology Visualization—Predicting User Success from Behavioral Data. Int. J. Semant. Comput. 2019, 13, 431–452. [Google Scholar] [CrossRef]
- Boeing. Statistical Summary of Commercial Jet Airplane Accidents: Worldwide Operations 1959–2014; Aviation Safety, Boeing Commercial Airplanes: Seattle, WA, USA, 2015. [Google Scholar]
- Lee, J.D.; Seppelt, B.D. Human factors and ergonomics in automation design. In Handbook of Human Factors and Ergonomics, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006; pp. 1570–1596. [Google Scholar]
- Shappell, S.; Detwiler, C.; Holcomb, K.; Hackworth, C.; Boquet, A.; Wiegmann, D. Human Error and Commercial Aviation Accidents: A Comprehensive, Fine-Grained Analysis Using HFACS. 2006. Available online: https://commons.erau.edu/publication/1218 (accessed on 24 July 2025).
- Fu, B.; Gatsby, P.; Soriano, A.R.; Chu, K.; Guardado, N.G. Towards Intelligent Flight Deck—A Case Study of Applied Eye Tracking in The Predictions of Pilot Success and Failure During Simulated Flight Takeoff. In Proceedings of the International Workshop on Designing and Building Hybrid Human–AI Systems, Arenzano, Italy, 3 June 2024; Volume 3701. [Google Scholar]
- Fu, B.; Soriano, A.R.; Chu, K.; Gatsby, P.; Guardado, N.G. Modelling Visual Attention for Future Intelligent Flight Deck—A Case Study of Pilot Eye Tracking in Simulated Flight Takeoff. In Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, Sardinia, Italy, 1–4 July 2024; pp. 170–175. [Google Scholar] [CrossRef]
- Fu, B.; De Jong, C.; Guardado, N.G.; Soriano, A.R.; Reyes, A. Event-Driven Predictive Gaze Analytics to Enhance Aviation Safety. In Proceedings of the ACM on Human-Computer Interaction, Yokohama, Japan, 26 April–1 May 2025; Volume 9. [Google Scholar] [CrossRef]
- Poggi, I. The Gazeionary. In The Language of Gaze (Chapter 6), 1st ed.; Routledge: London, UK, 2024. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).