1. Introduction
Gaze data quality refers to the validity of the gaze data measured and reported by an eye tracker [1]. The most common way of representing gaze data quality is by specifying gaze estimation accuracy, which refers to the difference between the true and the measured gaze positions [2]. There currently exists significant diversity in gaze accuracy measures, as described in reference [3], which leads to ambiguity in interpreting the quality of gaze data from different eye tracking systems and difficulty in comparing two or more eye trackers. Moreover, with the growing applications of gaze information in consumer devices such as augmented and virtual reality, smartphones, and smart TVs [4,5,6,7], the eye trackers used in such applications need to be thoroughly evaluated to ensure the high quality and consistency of their gaze data outputs. This calls for the development and adoption of homogeneous metrics for reporting gaze accuracy and a consistent set of methods for the complete characterization of eye trackers' data under different operating conditions [8]. Several software tools [9,10,11,12,13] have been developed over the years by gaze researchers as well as eye tracker manufacturers for gaze data analysis. The general focus of these tools is on determining eye movement characteristics (i.e., fixations, scanpaths, saccades) and studying the relationships of eye movements with human cognitive processes, such as the creation of attention maps and the understanding of regions of user interest and visual search patterns. A range of gaze datasets have also been developed by gaze researchers, aimed either at building new gaze estimation algorithms or at cognitive studies, visual saliency research, and scanpath analysis. However, gaze datasets that contain gaze and ground truth data collected under different operating conditions of an eye tracker, from multiple user platforms, are not yet publicly available.
In this paper, two open-source code and data repositories are presented that are targeted specifically toward in-depth analysis and comparison of eye gaze data quality from generic eye trackers. These repositories are (1) the GazeVisual-Lib repository of software resources hosted on GitHub and (2) the NUIG_EyeGaze01 gaze data repository hosted on Mendeley Data. This paper describes the creation, organization, and usage of these two repositories, which are aimed at the standardized evaluation of the performance of generic eye trackers. The repositories can benefit gaze researchers, developers of gaze-based systems and applications, and generic users by providing easy-to-use methods for quantitatively evaluating the gaze data outputs of an eye tracker and for comparing the quality of two or more trackers or user platforms. The key features of these two repositories are summarized in Section 1.1 and Section 1.2 below.
The motivation behind developing the GazeVisual-Lib software repository is that it can be used by gaze researchers to analyze gaze data and answer critical questions related to gaze data quality. For example: What are the performance limits and tolerances of a given eye tracker? How much is an eye tracker's accuracy affected when operating under non-ideal conditions? Which operating conditions affect the tracker's performance in a particular use case? How can two gaze datasets, or the performance of two eye tracking systems, be compared quantitatively? What are the performance bottlenecks of individual algorithms? How can gaze error patterns be detected and predicted? The software resources provided in the GazeVisual-Lib repository can help any generic user or eye gaze researcher find answers to these questions with minimal programming effort.
The motivation for developing the NUIG_EyeGaze01 data repository is to present gaze datasets collected under unique and challenging operating conditions which are not usually available to gaze researchers. The gaze data within the repository has been collected from a high-resolution eye tracker under carefully designed operating conditions so that best- and worst-case performance characteristics of an eye tracker under the influence of these conditions may be studied. These gaze datasets can help researchers to compare the variation in the data quality of multiple eye trackers, determine anomalous gaze data types, and study a tracker’s reliability and system characteristics under unconstrained operating conditions.
1.1. GazeVisual-Lib: An Open Software Repository for Eye Tracker Data Evaluation
This paper describes the GitHub code repository named GazeVisual-Lib, which contains the source code for a complete GUI application tool and a range of numerical and visualization methods for quantitative and visual exploration of eye gaze data quality. A major component of the GazeVisual-Lib repository is the source code of a desktop GUI application named GazeVisual, which takes in raw eye gaze data and implements several accuracy metrics and a range of visualizations to study gaze data quality. It also has methods for interfacing the GUI with an eye tracker for live gaze data collection [14]. Multiple videos are provided in the repository that show how to use the software to upload gaze data and derive results and visualizations. Apart from this, the repository contains codes in different sub-directories that can be used for (a) estimating gaze accuracy in angular resolution as the difference between input gaze and ground truth data coordinates, (b) metrics and visualizations for exploration of gaze data quality [8], (c) de-noising and outlier removal from gaze data, and (d) augmentation of a gaze (fixation/scanpath) dataset by seven different methods. The GazeVisual-Lib repository is hosted on GitHub and is accompanied by full documentation and guidance on the use of individual repository components. The repository can be found at github.com/anuradhakar49/GazeVisual-Lib.
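As a minimal sketch of item (a) above, the snippet below converts the on-screen distance between gaze and ground truth coordinates into degrees of visual angle using the display pixel pitch and the user-tracker distance. The column names, pixel pitch, and distance values are illustrative assumptions rather than the repository's actual interface, and the conversion shown is the common small-angle approximation.

```python
# Sketch of angular accuracy estimation; column names ("gaze_x", "gt_x", etc.),
# pixel pitch, and user distance are assumed values for illustration only.
import numpy as np
import pandas as pd

PIXEL_PITCH_MM = 0.265   # assumed display pixel pitch (mm per pixel)
USER_DIST_MM = 600.0     # assumed user-tracker distance (mm)

def angular_error_deg(gx, gy, tx, ty, pitch_mm, dist_mm):
    """Convert the on-screen gaze-vs-target error into degrees of visual angle."""
    err_px = np.hypot(gx - tx, gy - ty)          # Euclidean error in pixels
    err_mm = err_px * pitch_mm                   # error on the screen in mm
    return np.degrees(np.arctan2(err_mm, dist_mm))

df = pd.read_csv("us01_80_desk.csv")             # a file from the dataset
errors = angular_error_deg(df["gaze_x"], df["gaze_y"],
                           df["gt_x"], df["gt_y"],
                           PIXEL_PITCH_MM, USER_DIST_MM)
print("Mean gaze accuracy: %.2f deg" % errors.mean())
```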
This paper provides details on how to use the GazeVisual-Lib repository, on the installation of its dependencies, i.e., the Python libraries (www.python.org) required for running the codes from the repository, and practical illustrative examples that guide a user in running the GazeVisual GUI tool. Also, all the Python codes are made available as Jupyter notebooks within the GitHub repository, so that any user can run and adapt these resources easily.
1.2. NUIG_EyeGaze01: An Open Gaze Data Repository
In addition to the open coding resources, a new eye gaze dataset named NUIG_EyeGaze01 (Labelled eye gaze dataset) is presented in this paper. This dataset was created through dedicated experiments, using data from a high-resolution eye tracker operating on three different eye tracking platforms (a desktop, a laptop, and a tablet) under a wide range of operating conditions such as variable user head poses, user distances, screen resolutions, and platform poses. The gaze data files are made publicly available and could be useful to gaze researchers for benchmark comparison of the performance of other eye trackers, for building advanced gaze data evaluation metrics, and for understanding gaze error patterns caused by the different operating conditions mentioned above. The NUIG_EyeGaze01 dataset is hosted on Mendeley Data, an open data repository, and may be found at the following link: https://data.mendeley.com/datasets/cfm4d9y7bh/1.
In this paper, details on the data collection process for the creation of the NUIG_EyeGaze01 dataset and on the dataset organization are provided. The contents of the collected gaze data files are discussed, along with sample data from the various experiments done for the data collection. Also, a sample Python code snippet is provided that may be used to read the CSV data files in the open dataset, so that researchers can readily use these datasets and extract and manipulate the information in them. Finally, the utility and significance of the dataset and the coding resources toward gaze research are discussed.
1.3. Scope and Organization of the Paper
The scope of this paper is focused on discussing the organization and contents of the two code and data repositories described in Section 1.1 and Section 1.2. This paper describes the components of the GazeVisual-Lib repository along with detailed instructions on how these resources may be used with minimal programming effort. This is done so that readers can understand the purpose, contents, and implementation of the GazeVisual-Lib repository and so that it can be readily useful to the interdisciplinary gaze research community for the evaluation of gaze data quality. It may be noted that the mathematical derivation of the metrics, the visualizations, and the concept of the GazeVisual GUI application (present in the GitHub repository) have been discussed in detail in our previous papers [8] and [14], which provide the scientific background for the coding resources presented in the repository. In a similar way, this paper describes the content and structure of the NUIG_EyeGaze01 data repository, with details on each data file, their columns, the file naming conventions, and sample usage. These details should ensure that the collected datasets can be easily used by vision researchers. The philosophy behind the gaze data collection process has been discussed in [8].
The paper is organized as follows: Section 2 presents a literature review on contemporary gaze data evaluation software and publicly available gaze datasets. Section 3 describes the structure and contents of the GazeVisual-Lib repository, and Section 4 presents the details of the NUIG_EyeGaze01 data repository. Section 4.4 presents discussions and analysis of the collected datasets. Appendix A presents the installation instructions for the various libraries required to run the GazeVisual-Lib coding resources, and Appendix B contains a series of gaze data plots created using data from the NUIG_EyeGaze01 repository.
2. Previous Works on Open-Source Gaze Data Analysis Software and Gaze Datasets
Eye tracking has found applications in fields such as neuro- and cognitive sciences, psychology, human-computer interaction, consumer electronics, and assistive technologies [15]. The performance of an eye tracker is judged based on its accuracy (in degrees of angular resolution), which is affected by physiological limitations, the tracker setup geometry, and the type of calibration technique used [16]. Works such as reference [2] discuss the evaluation and comparison of several commercial eye trackers. In reference [5], an open-source Matlab toolkit is presented that can be interfaced with a commercial (Tobii EyeX) tracker. The paper evaluates and reports the eye tracker's performance in terms of angular accuracy, precision, latency, and sampling frequency. In reference [6], the performances of three wearable trackers, from Pupil Labs (120 Hz), SMI, and TobiiPro, are compared in terms of their accuracy under multiple viewing conditions. In reference [2], the accuracy and precision of the Eye Tribe tracker are compared with those of the SMI tracker. The work concluded that the selection of software to record and process the data is significant in obtaining high-accuracy results from an eye tracker.
Several open-source software packages and toolboxes have been developed for recording and analyzing gaze data, for example, ETCAL [7], PyGaze [10], GazeParser [17], EyeMMV [18], and GazeAlyze [19], to name a few. PyGaze is an open-source Python package built for creating eye tracking experiments, e.g., for the presentation of visual stimuli and the collection of user responses via keyboard or mouse. It also allows online detection of eye movements and supports a wide range of commercial eye trackers. Another Python-based open-source library is GazeParser, which was developed for low-cost eye tracking, gaze data recording, and analysis. It captures images from a camera to record eye position and subsequently performs calibration and synchronization of stimulus presentation along with the recording and analysis of eye movements. Eye Movements Metrics & Visualizations (EyeMMV) is a MATLAB toolbox for eye movement analysis. It contains functions for identifying fixations, for heatmap and scanpath visualizations, and for region-of-interest analysis. Another Matlab-based toolbox is GazeAlyze, which analyzes eye movement data, e.g., detecting and filtering artefacts, generating regions of interest, and producing visualizations such as path plots and fixation heat maps. It also has functions for correcting eye movement data for head position changes. The EMA toolbox [20] is implemented in Matlab for eye movement analysis and can parse gaze data from eye trackers such as those from SR Research, SMI (RED 250), and Tobii (EyeX). This toolbox allows for data conversion from normalized to pixel to degree units, the determination of saccades and their kinematics, and the creation of saliency maps. Another toolkit, named PyTrack [21], is built for analyzing and visualizing eye tracking data; it supports feature extraction with respect to blinks, fixations, saccades, micro-saccades, and pupil diameter, and generates gaze heat maps as well as micro-saccade position and velocity plots.
ETCAL [7] is a recent development among open-source gaze research tools; it is a library that provides a common platform for implementing a range of calibration procedures to determine gaze points from raw eye movement recordings. The library contains algorithms for the preparation and optimization of calibration models and for the automatic detection of gaze targets in implicit calibration scenarios. ETCAL is a useful tool for researchers who work with different calibration scenarios or want to build their own eye trackers and compare different calibration algorithms and data quality.
It may be observed that most of the software developed so far for eye trackers aims at the exploration of eye movement characteristics (detecting fixations, scanpaths, saccades, and eye movement speed, direction, and duration), the study of eye movements and their relationships with human behavior (such as building attention maps), the derivation of regions and sequences of interest, and the analysis of cognitive processes. However, only a few software tools (for example, ETCAL [7]) exist that are designed for the quantitative evaluation and visualization of gaze error characteristics, e.g., for the estimation of gaze error statistics and distributions and the comparison of gaze errors collected under different operating conditions (or error sources). Therefore, in this paper, a new open-source repository of Python-based software tools is presented that can be used for the in-depth analysis of the gaze error characteristics of data collected from any eye tracker, irrespective of the tracking platform, hardware, or algorithm.
With respect to eye gaze datasets, a multitude of them currently exists, and more are being developed by researchers to cater to individual research problems. A survey of gaze datasets was made, and it was observed that existing gaze datasets can be broadly classified into two types: those used for building and testing gaze estimation algorithms, and those used for modelling and validating user attention patterns and cognitive processes. Table 1 shows the results of this survey and presents the details of several datasets that have been developed for building and testing gaze estimation algorithms. Table 2 presents the datasets developed for saliency and cognitive studies. These datasets have been created with users looking at a series of images while their eye movements/images/videos are recorded. The collected eye movement data is then used for building and validating cognitive studies, visual attention patterns, saliency models, etc.
It is observed that gaze datasets typically include eye images/videos, eye corners, iris, blink rate, eye closure, and fixation or smooth pursuit data. Some include head pose information, while other datasets are captured under "free head motion," i.e., the exact angular positions of the user's head are not known. Some datasets include conditions such as users with/without glasses, changes in illumination and background, and varying race, age, etc. In this work, a new eye tracking dataset is built comprising gaze data from three different user platforms, aimed specifically at the benchmark evaluation of eye trackers operating under unconstrained operational scenarios; it is described in Section 4 of this paper.
4. Description of the NUIG_EyeGaze01 Gaze Data Repository
There are currently no publicly available eye gaze datasets that allow benchmark comparison and analysis of gaze data quality from two or more eye trackers. There are also no datasets that can be used to study gaze error patterns caused by various external operating conditions (or error sources) such as head poses, user distances, or platform poses. While there exist plenty of eye gaze datasets containing eye images and videos or fixations and scanpaths, none of them contain fixations and corresponding ground truth data collected from more than one eye tracking platform at different display resolutions. Without high-quality gaze data collected under measured variations of such operating conditions (or error sources), no objective or practical comparison of the performance of new and/or existing gaze tracking systems can be made.
Considering these factors, a rich and diverse gaze dataset, built from eye tracking data collected through dedicated experiments conducted under a wide range of operating conditions, is presented in an open data repository. The dataset is named NUIG_EyeGaze01 (Labelled eye gaze dataset) and is hosted in the Mendeley open data repository with the doi:10.17632/cfm4d9y7bh.1. The link to the dataset is provided in Section 1.2. This is a new kind of gaze dataset, collected from three user platforms (desktop, laptop, tablet) under the influence of one operating condition at a time. Using this dataset, the impact of different operating conditions may be observed and quantitatively compared. The conditions include fifteen different head poses, four user distances, nine different platform poses, and three display screen sizes and resolutions. Each gaze data file is labelled with the type of operating condition under which it was collected.
4.1. Description of the Gaze Data Collection Process
The gaze data collection setup for creating the NUIG_EyeGaze01 dataset included a commercial eye tracker mounted on a desktop computer, a laptop, and a tablet, whose specifications are provided in Table 4. A Tobii EyeX eye tracker was used for gaze estimation, and an Eye Tribe tracker was used for pilot data collection. Participants were seated in front of the tracker-screen setup, and their chin was fixed with a chin rest (Figure 10a). Prior to each data collection session, the eye tracker was calibrated with its own calibration software (six-point calibration). After calibration, a visual stimulus interface (Figure 10b) was presented to the participants [4], and they were asked to gaze at the specific stimulus targets that appeared on the display screen while their gaze was recorded by the eye tracker. For each experiment session, the following gaze data parameters were estimated for each user: (a) gaze position data vs. ground truth data (locations of stimuli) in pixels and millimeters; (b) gaze yaw and pitch angles vs. time, and corresponding ground truth yaw and pitch angles vs. time (ms); (c) gaze primary angular error, yaw error, and pitch error for each stimulus position and time point.
For gaze data collection under variable operating conditions of the eye tracker, a series of gaze data collection experiments was done on the desktop, laptop, and tablet platforms using the eye tracker. These experiments included (a) user distance experiments, where users were seated at 50, 60, 70, or 80 cm from the tracker; this was done for the desktop, laptop, and tablet platforms; (b) head pose experiments, where a user had to position their head at certain fixed head pose angles while their gaze data was collected (Figure 10c); this was done only for the desktop platform; and (c) platform pose experiments, where the eye tracking platform (tablet) was oriented at certain fixed pose angles (Figure 10d) while user gaze data was collected; this was done only for the tablet platform. Further details about the participants, the experimental setup, and the variables may be found in Table 4. Table 5 provides details about the contents of each CSV data file contained within the repository and a description of the data columns.
In Figure 11, Figure 12, and Figure 13, samples of eye gaze data overlaid on the ground truth locations of the stimuli from each of these experiments are provided. Gaze data is shown in black and ground truth data in blue. It can be seen that data from the different experiments look consistent but are affected by variable levels of outliers, which is why outlier removal methods are provided in the GazeVisual-Lib repository; a simple illustration of such a filter is sketched below. All these data, along with timestamps, were written in comma separated values (CSV) format for each user and each experiment session. Gaze data plots from multiple participants for the different operating conditions may be found in Appendix B of this paper.
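The exact de-noising routines live in the GazeVisual-Lib repository; the fragment below only illustrates the general idea with a median-absolute-deviation filter, one common way of dropping such outliers, and is not necessarily the repository's own method.

```python
# Illustrative outlier filter using the median absolute deviation (MAD) rule;
# this is a generic technique, not the repository's exact implementation.
import numpy as np

def mad_outlier_mask(values, thresh=3.5):
    """Boolean mask that keeps samples whose robust z-score is below `thresh`."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    if mad == 0:
        return np.ones(values.shape, dtype=bool)  # no spread: keep everything
    robust_z = 0.6745 * (values - med) / mad      # 0.6745 scales MAD to sigma
    return np.abs(robust_z) < thresh

errors = np.array([0.51, 0.62, 0.55, 8.3, 0.58, 0.60])  # toy gaze errors (deg)
print(errors[mad_outlier_mask(errors)])                 # 8.3 deg sample dropped
```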
4.2. Organization of the NUIG_EyeGaze01 Gaze Dataset on Mendeley Data
The NUIG_EyeGaze01 dataset hosted on Mendeley is shown in Figure 14. It contains gaze and ground truth data in CSV files distributed under multiple folders and subfolders, which are depicted in Figure 15. Each CSV file in the dataset contains 21 columns (Figure 16) with multiple gaze data variables estimated from the raw gaze coordinates. The variables are computed from the raw gaze data using the methods described in reference [4]. Other than the raw gaze data, the inputs for calculating the variables are the resolution and pixel pitch of the display on which gaze was tracked and the user's distance from the tracker.
Within the NUIG_EyeGaze01 (Labelled eye gaze dataset) data repository, the CSV data file names are labelled with the participant number, platform name, and operating condition. Each gaze data file name follows the convention USERNUMBER_CONDITION_PLATFORM.CSV (e.g., us01_80_desk.csv). The data files can be downloaded, and the respective column values can be read, used directly, or visualized using Python or any CSV reading program. Detailed documentation of the data is also provided within the repository.
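Given this convention, a few lines of Python suffice to recover the labels from a file name; the helper below is a hypothetical convenience written from the convention stated above, not a function shipped with the repository.

```python
# Decode USERNUMBER_CONDITION_PLATFORM.CSV file names; assumes the condition
# label contains no underscore, which holds for names like us01_80_desk.csv.
import os

def parse_gaze_filename(path):
    stem = os.path.splitext(os.path.basename(path))[0]
    user, condition, platform = stem.split("_")
    return {"user": user, "condition": condition, "platform": platform}

print(parse_gaze_filename("us01_80_desk.csv"))
# -> {'user': 'us01', 'condition': '80', 'platform': 'desk'}
```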
The NUIG_EyeGaze01 data repository is published under the CC BY-NC 3.0 license. Under this license, licensees may copy, distribute, display, and perform the work, and make derivative works based on it, provided that they give the licensor credit (attribution) and use the material only for non-commercial purposes.
There remain possibilities for extending this gaze dataset by collecting gaze data under other challenging conditions. For example, calibration could be done with a fixed head pose, and gaze data could then be collected from the subject in another head pose. This data could then be compared with data from the fixed head pose and its specific calibration. Another scenario could be collecting gaze data when the head pose and eye tracker pose change together, e.g., in an automotive environment.
4.3. Using Data from the NUIG_EyeGaze01 Repository
Users can read gaze data and other variables from any of the CSV data files present in the NUIG_EyeGaze01 repository on Mendeley Data using Python and the pandas library (after downloading the files to their computer). Figure 17 shows such a code snippet, which can be used for reading data from a gaze data CSV file and plotting the gaze error variable as a function of time.
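A sketch along those lines is given below; it assumes hypothetical column headers ("timestamp", "gaze_error"), so the actual names should be taken from Table 5 or the dataset documentation.

```python
# Read one downloaded dataset file with pandas and plot gaze error over time.
# The column names used here are assumptions; see Table 5 for the real headers.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("us01_80_desk.csv")    # file downloaded from Mendeley Data

plt.figure(figsize=(8, 3))
plt.plot(df["timestamp"], df["gaze_error"], linewidth=0.8)
plt.xlabel("Time (ms)")
plt.ylabel("Gaze error (deg)")
plt.title("Gaze error vs. time (us01_80_desk.csv)")
plt.tight_layout()
plt.show()
```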
4.4. Analysing Gaze Data from the NUIG_EyeGaze01 Repository
In order to study the characteristics of the gaze data collected from the different eye tracker platforms (desktop, tablet) and under different operating conditions, statistical analysis was done on the datasets, and the results are provided below. Table 6 and Table 7 present the gaze error statistics (mean, median absolute deviation, interquartile range, and 95% confidence intervals) from the desktop and tablet experiments, respectively. The methods for calculating gaze errors and estimating statistical metrics on the gaze error values are provided in our previous paper [4]. It may be noted that the gaze data used for this analysis is available in the NUIG_EyeGaze01 data repository, and the software codes used for the gaze data analysis are provided in the GazeVisual-Lib GitHub repository.
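For reference, the snippet below sketches how these four statistics can be computed with NumPy and SciPy; the exact estimators used in [4] may differ in detail (e.g., in how the confidence interval is formed).

```python
# Hedged sketch of the statistics reported in Tables 6 and 7: mean, median
# absolute deviation (MAD), interquartile range (IQR), and a t-based 95%
# confidence interval of the mean.
import numpy as np
from scipy import stats

def gaze_error_stats(err):
    err = np.asarray(err, dtype=float)
    mean = err.mean()
    mad = np.median(np.abs(err - np.median(err)))
    iqr = np.percentile(err, 75) - np.percentile(err, 25)
    ci = stats.t.interval(0.95, len(err) - 1, loc=mean, scale=stats.sem(err))
    return {"mean": mean, "MAD": mad, "IQR": iqr, "95% CI": ci}

rng = np.random.default_rng(0)
sample = rng.normal(1.2, 0.4, 500)   # toy gaze-error sample in degrees
print(gaze_error_stats(sample))
```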
In Table 6 and Figure 18a, the terms UD50, UD60, UD70, and UD80 correspond to gaze data from the different user-distance experiments done on the desktop platform, and R20, Y20, and P20 correspond to gaze data from the head pose roll, yaw, and pitch angle (20 degrees each) experiments. All value fields in the table have units of degrees of angular resolution. It is seen that gaze error levels are higher at low user distances and that the error reduces as the user-tracker distance increases. Errors due to head yaw are seen to have the highest magnitude, and errors due to head pitch have the highest interquartile range (or variability) in error magnitude. Also, error levels due to the various head poses are considerably higher than when the head pose is neutral (UD60 values in Table 6).
In Table 7 and Figure 18b, UD50, UD60, UD70, and UD80 correspond to gaze data from the different user-distance experiments done on the tablet platform, and R20, Y20, and P20 represent data from the tablet pose roll, yaw, and pitch angle (20 degrees each) experiments. It is seen that the magnitudes of the gaze angular errors due to tablet pose are high, with the highest error caused by platform roll variations. The error characteristics of the tablet data are quite different from those of the desktop platform, and the error magnitudes are lower for the tablet at all user distances. Also, the magnitudes of the errors due to the different platform poses (Figure 18b) are higher than the errors due to head poses (Figure 18a).
Figure 19a,b show the gaze error distributions for the data (after outlier removal) from the desktop user distance and head pose experiments. The gaze error distributions are estimated by applying a kernel density estimate [10] to the gaze error values corresponding to the different operating conditions, using a Gaussian kernel and a bandwidth value of 0.2. It is seen that each operating condition leaves a definite signature on the gaze error distributions. Distinct gaze error patterns exist for the different user distances and head poses, as the error distribution shifts toward higher, average, or lower error values under the different conditions. Similar observations are made for the tablet data under the different conditions (Figure 19c,d). The error distributions are seen to be non-Gaussian and do not resemble any known statistical distribution.
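The following fragment reproduces that kind of estimate with scikit-learn's KernelDensity (Gaussian kernel, bandwidth 0.2); the paper does not state which KDE implementation was used, so this is only one plausible realization, run here on synthetic error values.

```python
# Gaussian kernel density estimate with bandwidth 0.2, as described above;
# the input is a synthetic stand-in for a column of gaze error values.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KernelDensity

errors = np.random.default_rng(1).gamma(2.0, 0.5, 1000)   # toy errors (deg)

kde = KernelDensity(kernel="gaussian", bandwidth=0.2)
kde.fit(errors.reshape(-1, 1))                             # KDE needs 2-D input

grid = np.linspace(errors.min(), errors.max(), 200).reshape(-1, 1)
density = np.exp(kde.score_samples(grid))                  # log-density -> density

plt.plot(grid.ravel(), density)
plt.xlabel("Gaze error (deg)")
plt.ylabel("Estimated density")
plt.show()
```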
5. Utility and Impact of Open Resources toward Eye Gaze Research
The GazeVisual-Lib repository described in this paper provides a set of open and standardized methods for gaze data evaluation to the interdisciplinary eye gaze research community, so that gaze data from a variety of eye trackers, dynamic applications [50,51,52,53], or user platforms may be evaluated and compared under a unified framework. While using the repository, users can fully understand the sequence of development of the data evaluation codes, starting from raw gaze data, which makes these methods adaptable to gaze data from any source. With these methods, the practical limits and capabilities of any eye tracking system may be studied and compared quantitatively, and the methods can also be upgraded by researchers to suit their individual research problems.
Since knowing the quality of gaze data is essential for ensuring the reliability of any gaze-based application or research, the evaluation routines of the repository can be used to continuously monitor the data quality of any eye tracker, especially during real-life operations that involve variable setup and user conditions. Using the GUI application tool, users can perform in-depth gaze data evaluation without the need for detailed programming knowledge, owing to its simple interface. This is particularly important given the interdisciplinary nature of gaze research, where eye trackers are widely used by people from non-technological fields. The intended user group of the GazeVisual-Lib code repository is therefore quite diverse, ranging from developers of gaze estimation algorithms to users from fields such as human-computer interaction, psychology, and cognitive studies. Gaze data quality is a critical aspect that affects all stages of any gaze-data-based research or application, and the open-source codes for gaze data evaluation are therefore expected to be highly useful in this respect.
The experiments described in this paper have helped to develop and introduce an accessible, diverse benchmark eye gaze dataset that can aid in identifying the capabilities and limits of different eye tracking systems. Labelled gaze datasets containing the signatures of the different operating conditions that frequently affect gaze data quality on different user platforms did not previously exist, and with this in mind, the NUIG_EyeGaze01 dataset has been made publicly available. The data can be put to a wide range of uses, including modelling and comparing error patterns [54] and the development and testing of gaze anomaly detection or gaze error compensation algorithms, to name a few. These are all sparsely explored areas in gaze research that could benefit from our diverse and open data repository. Further, the datasets may also be augmented using the data augmentation routines in the GazeVisual-Lib repository; the code and data repositories are therefore complementary to each other. A major utility of presenting the data and code repositories as open resources is that they are meant to encourage research toward practical and realistic performance evaluation of eye trackers, the standardization of gaze research results, and the building of more open-source tools for these purposes.
6. Conclusions
The open-source gaze data evaluation methods of GazeVisual-Lib could be useful for researchers, engineers, and developers working with gaze estimation systems for the thorough assessment of their gaze data quality. The methods could be especially beneficial for eye trackers that operate under variable operating conditions, where gaze data quality frequently becomes unreliable. Also, the GUI application GazeVisual may be used to perform prompt and in-depth gaze data evaluation without the need for detailed programming knowledge. This could be particularly useful for the interdisciplinary gaze research community, where eye trackers are widely used in non-technological fields. The potential user group of GazeVisual-Lib is therefore quite diverse, ranging from gaze tracking system developers and researchers using eye trackers in virtual/augmented reality, human-computer interaction, and the cognitive sciences to generic users with any consumer-grade eye tracker or gaze-based application.
The new eye gaze database NUIG_EyeGaze01 presented in this paper could be beneficial to designers of gaze-based systems for benchmark comparison of their system performance under challenging operating conditions such as variations in head pose, user distance, and tracker orientation. As can be observed from the gaze data analysis results presented in Section 4.4, possible future directions of research using these gaze datasets (in conjunction with the coding resources of GazeVisual-Lib) include the comparison of gaze error patterns from multiple eye trackers, the modelling of gaze error patterns induced by different operating conditions, the study of gaze error distributions, and the development of gaze error pattern detection algorithms. These directions will depend on how gaze researchers, statisticians, and researchers working with machine learning models prefer to use these datasets. The open resources presented in this paper are envisioned to foster the collaborative development and adoption of even better resources for standardized gaze data evaluation, which ultimately can strengthen the usability and reliability of gaze estimation systems in their wide range of applications.