Article

Eye-Tracking Analysis of Interactive 3D Geovisualization

by Lukas Herman 1, Stanislav Popelka 2 and Vendula Hejlova 2
1 Masaryk University, Brno, Czech Republic
2 Palacký University, Olomouc, Czech Republic
J. Eye Mov. Res. 2017, 10(3), 1-15; https://doi.org/10.16910/jemr.10.3.2
Submission received: 23 January 2017 / Published: 31 May 2017

Abstract

This paper describes a new tool for eye-tracking data and its analysis with the use of interactive 3D models. This tool makes analysing interactive 3D models easier than time-consuming, frame-by-frame investigation of captured screen recordings with superimposed scanpaths. The main function of this tool, called 3DgazeR, is to calculate 3D coordinates (X, Y, Z coordinates of the 3D scene) for individual points of view. These 3D coordinates can be calculated from the values of the position and orientation of a virtual camera and the 2D coordinates of the gaze upon the screen. The functionality of 3DgazeR is introduced in a case study example using Digital Elevation Models as stimuli. The purpose of the case study was to verify the functionality of the tool and discover the most suitable visualization methods for geographic 3D models. Five selected methods are presented in the results section of the paper. Most of the output was created in a Geographic Information System. 3DgazeR works with generic CSV files, the SMI eye-tracker, and the low-cost EyeTribe tracker connected with the open source application OGAMA. It can compute 3D coordinates from raw data and fixations.

Introduction

The introduction summarizes the state of the art in 3D cartography eye-tracking research, followed by a presentation of previous attempts to record eye-tracking data over interactive 3D models. In the methods section, 3DgazeR and its implementation are described. The results contain five selected data visualization methods applied in the example of a simple case study. At the end of the paper, the advantages and limitations of 3DgazeR are summarized.

3D Geovisualization

Bleisch (2012) defines 3D geovisualization as a generic term used for a range of 3D visualizations representing the real world, parts of the real world, or other data with a spatial reference. With the advent of virtual globes such as Google Earth, or perhaps even earlier with the notion of a digital earth (Gore, 1998), they have become increasingly popular, and many people already know about 3D geovisualizations even though they may not call them that. Most 3D geovisualizations are digital elevation models draped with ortho or satellite imagery and relatively detailed 3D city models (Bleisch, 2012). These perspective views are often referred to as 3D maps. An overview of the usability and usefulness of 3D geovisualizations was presented by Çöltekin et al. (2016). The authors categorized the results from existing empirical studies according to visualization type, task type, and user type.
3D geovisualization is not limited to the depiction of terrain where the Z axis represents elevation. The development of a phenomenon in time is often displayed, for example, with the aid of a so-called Space-Time-Cube (STC). Hägerstraand (1970) proposed a framework for time geography to study social interaction and the movement of individuals in space and time. The STC is a visual representation of this framework where the cube's horizontal plane represents space, and the vertical axis represents time (Kveladze et al., 2013). With a Space-Time-Cube, any spatio-temporal data can be displayed. That data can be, for example, information recorded by GPS devices, statistics with location and time components, or data acquired with eye-tracking technology (Li et al., 2010).
3D maps and visualizations can generally be divided into two categories: static and interactive. Static visualizations are essentially perspective views (images) of any 3D scene. In interactive 3D visualizations, the user can control and manipulate the scene. The disadvantages of static 3D maps are mainly overlapping objects in the 3D scene and the distortion of distant objects. Inexperienced users could have problems with scene manipulation using a mouse (Wood et al., 2005).
Most of the cases referred to as 3D geovisualization are not true 3D but pseudo 3D (or 2.5D, where each X and Y coordinate corresponds to exactly one Z value). According to Kraak (1988), true 3D can be used in those cases where special equipment achieves realistic 3D projection (e.g. 3D LCD displays, holograms, stereoscopic images, anaglyphs or physical models).
Haeberling (2002) notes that there is almost no cartographic theory or principles for creating 3D maps. In his dissertation, Goralski (2009) also argues that solid knowledge of 3D cartography is still missing. A similar view can be found in other studies (Ellis & Dix, 2006; MacEachren, 2004; Slocum et al., 2001; Wood et al., 2005). These authors report that very little is known about how and in which cases 3D visualization can be effectively used. Performing an appropriate assessment of the usability of 3D maps is therefore necessary.

Usability Methods for 3D Geovisualization

Due to the massive increase in map production in recent years, it is important to focus on map usability research. Maps can be modified and optimized to better serve users based on the results of this research.
One of the first works dealing with map usability research was published by Petchenik (1977). In her work "Cognition in Cartography", she states that for the successful transfer of information between the map creator and map reader, it is necessary for the reader to understand the map in the same way as the map creator. The challenge of cognitive cartography is to understand how users read various map elements and how the meanings of those elements vary between different users.
The primary direction of cognitive cartography research leads to studies of how maps are perceived, in order to increase their efficiency and adapt their design to the needs of a specific group of users. The International Cartographic Association (ICA) has two commissions devoted to map users, the appraisal of map effectiveness, and map optimization – the Commission on Use and User Issues (http://use.icaci.org/) and the Commission on Cognitive Visualization (http://cogvis.icaci.org/). User aspects are examined with respect to the different purposes of maps (for example Stanek et al., 2010 or Kubicek et al., 2017).
Haeberling (2003) evaluated the design variables employed in 3D maps (camera angle and distance, the direction of light, sky settings and the amount of haze). Petrovic and Masera (2004) used a questionnaire to determine user preferences between 2D and 3D maps. Participants of their study had to decide which type of map they would use to solve four tasks: measuring distances, comparing elevation, determining the direction of north, and evaluating the direction of tilt. The results showed that 3D maps are better for estimating elevation and orientation than their 2D equivalents, but may cause problems for distance measuring.
Savage et al. (2004) tried to answer the question whether using 3D perspective views has an advantage over using traditional 2D topographic maps. Participants were randomly divided into two groups and asked to solve spatial tasks with a 2D or a 3D map. The results of the study showed no advantage in using 3D maps for tasks that involved estimating elevation. Additionally, in tasks where it was not necessary to determine an object’s elevation (e.g. measuring distances), the 3D variant was not as good as 2D.
User testing of 3D interactive virtual environments is relatively scarce. One of the few articles describing such an environment is that by Wilkening and Fabrikant (2013). Using the Google Earth application, they monitored the proportion of applied movement types – zoom, pan, tilt, and rotation. Bleisch et al. (2009) assessed the 3D visualization of abstract numeric data. Although speed and accuracy were measured, no information about navigation in 3D space was recorded in this study. Lokka and Çöltekin (2016) investigated memory capacity in the context of navigating a path in a virtual 3D environment. They observed the differences between age groups.
Previous studies (Sprinarova et al., 2015; Wilkening & Fabrikant, 2013; Herman & Stachon, 2016) indicate that there are considerable differences between individuals in how they read maps, especially in the strategies and procedures used to determine an answer to a question. Eye-tracking facilitates the study of these map reading strategies.

Eye-Tracking in Cartography

Although eye-tracking to study maps was first used in the late 1950s, it has seen increased use over the last ten to fifteen years. Probably the first eye-tracking study for evaluating cartographic products was that of Enoch (1959), who used as stimuli simple maps drawn on a background of aerial images. Steinke (1987) presented one of the first published summaries of the application of eye-tracking in cartography. He compiled the results of former research and highlighted the importance of distinguishing between the perceptions of user groups of different ages or education.
Today, several departments in Europe and the USA conduct eye-tracking research in cartography (Wang et al., 2016). In Olomouc, Czech Republic, eye-tracking has been used to study the output of landscape visibility analyses (Popelka et al., 2013) and to investigate cartographic principles (Brychtova et al., 2012). In Zurich, Switzerland, Fabrikant et al. (2008) evaluated a series of maps expressing the evolution of a phenomenon over time, as well as weather maps (Fabrikant et al., 2010). Çöltekin from the same university analyzed users' visual analytics strategies (Çöltekin et al., 2010). In Ghent, Belgium, paper and digital topographic maps were compared (Incoul et al., 2015) and differences in attentive behavior between novice and expert map users were analyzed (Ooms et al., 2014). Ooms et al. (2015) proposed a methodology for combining eye-tracking with user logging to reference eye-movement data to geographic objects. This approach is similar to ours, but it uses a dynamic map instead of a 3D model.

Eye-Tracking to Assess 3D Visualization

The issue of 3D visualization on maps has so far only been addressed marginally. At Texas State University, Fuhrmann et al. (2009) evaluated the differences in how a traditional topographic map and its 3D holographic equivalent were perceived. Participants were asked to suggest an optimal route. Analysis of the eye-tracking metrics showed the holographic map to be the better option.
One of the first and more complex studies dealing with eye-tracking and the evaluation of 3D maps is that by Putto et al. (2014). In this study, the impact of three types of terrain visualization was evaluated while participants solved four tasks (visual search, area selection, and route planning). The shortest average length of fixation was observed for the shaded relief, indicating that this method is the easiest for users.
Eye-tracking for evaluating 3D visualization in cartography is widely used at Palacký University in Olomouc, Czech Republic, with studies examining the differences in how 3D relief maps are perceived (Popelka & Brychtova, 2013), 3D maps of cities (Dolezalova & Popelka, 2016), a 3D model of an extinct village (Popelka & Dedkova, 2014), and tourist maps with hill-shading (Popelka, 2014). These studies showed that it is not possible to generalize the results and state that 3D is more effective than 2D or vice versa. The effectiveness of visualization depends on the exact type of stimuli and also on the task.
In all these studies, static images were used as stimuli. Nevertheless, the main advantage of 3D models is being able to manipulate them (pan, zoom, rotate). An analysis of eye-tracking data measured on interactive stimuli is costly, as eye-trackers produce video material with overlaid gaze-cursors and any classification of fixations requires extensive manual effort (Pfeiffer, 2012). Eye tracking studies dealing with interactive 3D stimuli typically comprise a time-consuming frame-by-frame analysis of captured screen recordings with superimposed scanpaths. One of the few available gaze visualization techniques for 3D contexts is the representation of fixations and saccades as 3D scanpaths (Stellmach et al., 2010a). A challenge with 3D stimuli is mapping fixations onto the correct geometrical model of the stimulus (Blascheck et al., 2014).
Several attempts to analyze eye-tracking data recorded during work with interactive 3D stimuli exist. Probably the most extensive work has been done by Stellmach, who developed a tool called SWEETER – a gaze analysis tool adapted to the Tobii eye-tracker system and the XNA Framework. SWEETER offers a coherent framework for loading 3D scenes and corresponding gaze data logs, as well as deploying adapted gaze visualization techniques (Stellmach et al., 2010b).
Another method for visualizing the gaze data of dynamic stimuli was developed by Ramloll et al. (2004). It is especially useful for retail sites allowing shoppers to examine products as interactive, non-stereoscopic 3D objects on 2D displays. In this approach, each gaze position and fixation point is mapped to a 3D object's relevant polygon. The 3D object is then flattened and overlaid with the appropriate gaze visualizations. The advantage of this flattening is that the output can be reproduced on a 2D static medium (e.g. paper).
Both approaches use a remote eye-tracker to record data. Pfeiffer (2012) used a head-mounted eye-tracking system by Arrington Research. This study extended recent approaches of combining eye-tracking with motion capture, including holistic estimations of the 3D point of regard. In addition, he presented a refined version of 3D attention volumes for representing and visualizing attention in 3D space.
Duchowski et al. (2002) developed an algorithm for binocular eye-tracking in virtual reality, which is capable of calculating the three-dimensional virtual coordinates of the viewer’s gaze.
A head-mounted eye-tracker from SMI was used in the study of Baldauf et al. (2010), who developed the application KIBITZER – a wearable gaze-sensitive system to explore urban surroundings. The eye-tracker is connected to a smartphone, and the user's eye-gaze is analyzed to scan the visible surroundings for georeferenced digital information. The user is informed about points of interest in his or her current gaze direction.
SMI glasses were also involved in the work of Paletta et al. (2013), who used them in combination with Microsoft Kinect. A 3D model of the environment was acquired with Microsoft Kinect, and gaze positions captured by the SMI glasses were mapped onto the 3D model.
Unfortunately, all the presented approaches work with specific types of device and are not generally available to the public. For this reason, we decided to develop our own application called 3DgazeR (3D Gaze Recorder). 3DgazeR can place recorded raw data and fixations into the 3D model's coordinate system. The application works primarily with geographical 3D models (DEM – Digital Elevation Models in our pilot study). Most of the case study was performed in the open source Geographic Information System QGIS. The application works with data from an SMI RED 250 device and the low-cost EyeTribe eye-tracker. The EyeTribe is connected to the open source application OGAMA; many other eye-trackers can be connected to OGAMA, and our tool will then work with their data.

Methods

We designed and implemented our own experimental application, 3DgazeR, due to the unavailability of tools allowing eye-tracking while using interactive 3D stimuli. The main function of this instrument is to calculate the 3D coordinates (X, Y, Z coordinates of the 3D scene) for individual points of view.
These 3D coordinates can be calculated from the values of the position and orientation of a virtual camera and the 2D coordinates of the gaze on the screen. 2D screen coordinates are obtained from the eye-tracking system, and the position and orientation of the virtual camera are recorded with the 3DgazeR tool (Figure 1).
3DgazeR incorporates a modular design. The three modules are:
  • Data acquisition module
  • Connecting module to combine the virtual camera data and eye-tracking system data
  • Calculating module to calculate 3D coordinates
The modular design reduces computational complexity for data acquisition. Data for gaze position and virtual camera position and orientation are recorded independently. Combining the data and calculating 3D coordinates is done in the post-processing phase. Splitting the modules for combining data and calculating 3D coordinates allows information from different eye-tracking systems (SMI RED, EyeTribe, generic CSV files) and various types of data (raw data, fixations) to be processed.
All three modules constituting 3DgazeR use only open web technologies: HTML (HyperText Markup Language), PHP (Hypertext Preprocessor), JavaScript, jQuery and the JavaScript library for rendering 3D graphics X3DOM. The X3DOM library was chosen because of its broad support in commonly used web browsers, as well as its documentation, accessibility, and the availability of software to create stimuli. X3DOM uses the X3D (eXtensible 3D) structure format and is built on HTML5, JavaScript, and WebGL. The current implementation of X3DOM uses a so-called fallback model that renders 3D scenes through an InstantReality plug-in, a Flash11 plug-in, or WebGL. To run X3DOM, no specific plug-in is needed. X3DOM is free for both non-commercial and commercial use (Behr et al., 2009). Common JavaScript events, such as onclick on 3D objects, are supported in X3DOM. A runtime API is also available and provides a proxy object for reading and modifying runtime parameters programmatically. The API functions serve for interactive navigation, resetting views or changing navigation modes. X3D data can be stored in an HTML file or as part of external files; their combination is achieved via an inline element. Particular X3D elements can be clearly distinguished through their DEF attribute, which is a unique identifier. Other principles and advantages of X3DOM are described in Behr et al. (2009), Behr et al. (2010), Herman and Reznik (2015), and Herman and Russnak (2016).

Data Acquisition Module

The data acquisition module is used to collect primary data. Its main component is a window containing the 3D model used as a stimulus. This 3D scene can be navigated or otherwise manipulated. The rendering of virtual content inside a graphics pipeline is the orthographic or perspective projection of 3D geometry onto a 2D plane. The parameters for this projection are usually defined by some form of virtual camera. Only the main parameters, the position and orientation of the virtual camera, are recorded in the proposed solution. The position and orientation of the virtual camera are recorded every 50 milliseconds (a recording frequency of 20 Hz). The recording is performed using functions from the X3DOM runtime API and JavaScript in general. The recorded position and orientation of the virtual camera are sent every two seconds to a server and stored in a CSV (Comma-Separated Values) file using a PHP script. The 3D scene loading time must also be stored for the subsequent combination with eye-tracking data; similarly, the termination of the 3D scene is stored. The interface is designed as a full-screen 3D scene, while input for answers is provided on the following screen (after the 3D scene).
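As an illustration of this recording loop, the following JavaScript sketch polls the camera pose through the X3DOM runtime API and periodically posts the buffered records to a server-side script. It is a minimal sketch rather than the authors' code: the element id, the CSV field layout, the store.php endpoint, and the use of viewMatrix()/toGL() to read the pose are assumptions.
```javascript
// Minimal sketch of the recording loop (illustrative, not the authors' code).
// Assumes an <x3d id="scene"> element rendered by X3DOM; viewMatrix() is part
// of the X3DOM runtime API, and toGL() flattens the 4x4 matrix to 16 numbers.
var x3d = document.getElementById('scene');   // hypothetical element id
var buffer = [];
var t0 = Date.now();                          // 3D scene loading time (time 0)

setInterval(function () {
  // record the current virtual camera pose every 50 ms (20 Hz)
  var vm = x3d.runtime.viewMatrix();          // view matrix (position + orientation)
  buffer.push([Date.now() - t0].concat(vm.toGL()).join(';'));
}, 50);

setInterval(function () {
  // every two seconds, flush the buffered records to the server, where a PHP
  // script (store.php is a placeholder name) appends them to a CSV file
  if (buffer.length === 0) { return; }
  var payload = buffer.splice(0, buffer.length).join('\n');
  var xhr = new XMLHttpRequest();
  xhr.open('POST', 'store.php', true);
  xhr.setRequestHeader('Content-Type', 'text/plain');
  xhr.send(payload);
}, 2000);
```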

Connection Module

The connecting module combines two partial CSV files based on timestamps. The first step is aligning the trimmed data (from the eye-tracker and from the movement of the virtual camera) using the markers for the beginning and end of the 3D scene display. The beginning in both records is designated as time 0 (Figure 2).
Each record from the eye-tracker is then assigned to the nearest previous recorded position of the virtual camera (by timestamp), which is the simplest method of joining temporal data and was straightforward to implement. The maximum time deviation (uncertainty) is then less than 50 ms (the virtual camera recording step).
Five variants of the connecting module were created – for SMI RED 250 and EyeTribe, and for raw data and fixations. The tool can also read data from a generic CSV file with three columns representing time (in milliseconds) and the X and Y coordinates. The entire connecting module is implemented in JavaScript.
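A minimal sketch of this nearest-previous-timestamp join is shown below. The field names (time, x, y) are illustrative and may differ from the actual CSV columns; both inputs are assumed to be already trimmed and re-based so that the scene start is time 0, and sorted by ascending time.
```javascript
// Sketch of the nearest-previous-timestamp join used by the connecting module.
function joinGazeWithCamera(gazeSamples, cameraRecords) {
  // both arrays are assumed sorted by ascending time (ms since scene start)
  var joined = [];
  var c = 0;
  gazeSamples.forEach(function (g) {
    // advance to the last camera record not newer than the gaze sample
    while (c + 1 < cameraRecords.length && cameraRecords[c + 1].time <= g.time) {
      c++;
    }
    joined.push({
      time: g.time,             // gaze timestamp
      screenX: g.x,             // 2D gaze coordinates on the screen
      screenY: g.y,
      camera: cameraRecords[c]  // nearest previous camera pose (max lag < 50 ms)
    });
  });
  return joined;
}
```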

Calculating Module

The calculating module comprises a similar window and 3D model to those used during testing in the data acquisition module. The same screen resolution must be used as during the acquisition of data. For every record, the intersection of the viewing ray with the displayed 3D model is calculated. The 3D scene is depicted with the virtual camera's recorded position and orientation as input. The X3DOM runtime API function getViewingRay and the screen coordinates are used as input data for this calculation. Setting and calculating the virtual camera's parameters is automated using a FOR loop. The result is a table containing timestamps, 3D scene coordinates (X, Y, Z), the DEF element the ray intersects with, and optionally, a normal vector at this intersection. If the user is not looking at any particular 3D object, this fact is also recorded, including whether the user is looking beyond the dimensions of the monitor.
This function is based on the ray casting method (see Figure 3) and is divided into three steps:
  • calculation of the viewing ray direction from the virtual camera position, orientation and screen coordinates (function calcViewRay);
  • ray casting to the scene;
  • finding the intersection with the closest object (function hitPnt).
For more information about ray casting see Hughes et al. (2014).
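The per-record calculation can be sketched as follows. getViewingRay is the runtime function named above; shootRay is assumed here as the X3DOM picking call standing in for the hitPnt step, and the record and result field names are illustrative.
```javascript
// Sketch of the per-record ray casting (illustrative; the actual module also
// restores the recorded camera pose before each query and runs inside a FOR loop).
function gazeTo3D(x3d, record) {
  // Step 1: viewing ray from the current camera pose and 2D screen coordinates.
  // getViewingRay returns the ray origin and direction; it is kept here to make
  // the step explicit, although shootRay below performs the projection internally.
  var ray = x3d.runtime.getViewingRay(record.screenX, record.screenY);

  // Steps 2 and 3: cast the ray into the scene and take the closest hit object
  // (shootRay is an assumed picking call; the authors' hitPnt helper may differ).
  var hit = x3d.runtime.shootRay(record.screenX, record.screenY);
  if (!hit || !hit.pickPosition) {
    // the gaze does not intersect any 3D object, or lies outside the monitor
    return { time: record.time, hit: false };
  }
  return {
    time: record.time,
    x: hit.pickPosition.x,   // 3D scene coordinates of the gaze
    y: hit.pickPosition.y,
    z: hit.pickPosition.z,
    def: hit.pickObject ? hit.pickObject.getAttribute('DEF') : null,
    normal: hit.pickNormal || null,  // optional normal vector at the intersection
    hit: true
  };
}
```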
For additional processing, analysis, and visualization of the calculated data, GIS software is used – primarily the open source program QGIS, but ArcGIS with 3D Analyst and ArcScene (a 3D viewing application) can also be used. We worked with QGIS version 2.12 with several additional plug-ins. The most important was the Qgis2threejs plug-in, which creates 3D models and exports terrain data, map canvas images, and overlaid vector data to a web browser supporting WebGL.

Pilot Study

Our pilot experiment was designed as exploratory research. The primary goal of this experiment was to test the possibilities of 3DgazeR in evaluating different methods of visualization and analyzing eye-tracking data acquired with an interactive 3D model.

Apparatus, Tasks and Stimuli

For the testing, we chose a low-cost EyeTribe device. At a price of $99, the EyeTribe tracker is currently the least expensive commercial eye-tracker in the world (https://theeyetribe.com). Popelka et al. (2016) compared the precision of the EyeTribe and the professional device SMI RED 250; the results of the comparison show that the EyeTribe tracker is a valuable tool for cartographic research. The eye-tracker was connected to the OGAMA software (Voßkühler et al., 2008). The device operated at a frequency of 60 Hz; however, saving information about camera orientation caused problems at frequencies higher than 20 Hz. Some computer setups were not able to store camera data correctly when the frequency was higher than 20 Hz – the resulting file was shorter than the real recording because some rows were omitted. Eye-tracking data were therefore recorded every 16.67 ms and data about camera position and orientation every 50 ms.
Two versions of the test were created – variant A and variant B. Each variant included eight tasks over almost the same 3D models (differing only in the texture used). The 3D models in variant A had no transparency, and the terrain was covered with a hypsometric color scale (from green to brown). The same hypsometric scale covered four 3D models in variant B, but transparency was set at 30%. The second half of the models in variant B had no transparency, but the terrain was covered with satellite images from Landsat 8. The order of the tasks was different in the two variants. A comparison of variant A and variant B for the same task is shown in Figure 4. Four tasks were required:
  • Which object has the highest elevation? (Variant A – tasks 1, 5; Variant B – tasks 1, 2)
  • Find the highest peak. (Variant A – tasks 2, 6; Variant B – tasks 3, 4)
  • Which elements are visible from the given position? (Variant A – tasks 3, 7; Variant B – tasks 5, 6)
  • From which positions is a given object visible? (Variant A – tasks 4, 8; Variant B – tasks 7, 8)
The first two tasks had only one correct answer, while the other two had one or more correct answers.

Design and Participants

We decided to test 20 participants, each completing both variants of the pilot test. At least three days were required between the testing sessions to decrease the learning effect when performing the second variant. Half of the participants were students of the Department of Geoinformatics with cartographic knowledge, and half were cartographic novices. Half of the participants were men, half women. The age range was 18–32 years.
Screen resolution during the experiment was 1600 × 900 and the sampling frequency was set to 60 Hz. Each participant was seated at an appropriate distance from the monitor, and the eye-tracking device was calibrated with 16 points. Calibration results of either Perfect or Good (on the scale used in OGAMA) were accepted. An external keyboard was connected to the laptop to start and end the tasks (F2 to start and F3 to end); this keyboard was controlled by a researcher. The participant performed the test using only a common PC mouse.
The experiment began with calibrating the device in the OGAMA environment. After that, participants filled in their ID and other personal information such as age, sex, etc. The experiment was captured as a screen recording.
The experiment was prepared with individual HTML pages containing questions, tasks, 3D models, and input screens for answers. The names of the CSV files in which the virtual camera movement was recorded matched the corresponding tasks in the eye-tracking experiment subsequently created in OGAMA, allowing correct combination in the connecting module.
As recording began, a page with initial information about the experiment appeared. The experiment ran in Google Chrome in full-screen mode and is available at http://eyetracking.upol.cz/3d/ (in Czech). Each task was limited to 60 seconds, and the whole experiment lasted approximately 10 to 15 minutes. A longer total experiment time may affect user performance: evidence from previous experiments shows that when a recording is longer than 20 minutes, participants start to tire and lose concentration.
Care was taken with the correct starting time of tasks. A screen with a target symbol appeared after the 3D model had loaded. The participant switched to the task by pressing the F2 key. This key press was recorded by OGAMA and used by 3DgazeR to divide the recording according to task. After that, the participant could manipulate the 3D model to discover the correct answer. The participant then pressed F3, and a screen with a selection of answers appeared.

Recording, Data Processing and Validation

The data for each task must be stored separately and, if necessary, checked or manually modified (e.g. deleting unnecessary lines at the end of a recording). The data are then processed in the connecting module, where data from the eye-tracking device are combined with the virtual camera movement. The output is then sent to the calculating module, which must be switched to full-screen mode; the calculation must take place at the same screen resolution as the testing. The output should then be modified for import into GIS software and visualized. For example, the data format of the time column had to be modified into the form required to subsequently create an animation.
These adjusted data can be imported into QGIS. CSV data are loaded and displayed using the Create a Layer from a Delimited Text File dialog. The retrieved data can be stored in GML (Geography Markup Language) or Shapefile format as point layers. After exporting and re-rendering this new layer above the 3D model, some points may have the wrong elevation (Figure 5). This distortion occurs when the 3D model is rotated while the eyes are simultaneously focused on a specific place, or when the model is rotated and the eyes follow it with smooth pursuit. To remove these distortions and fit the eye-tracking data exactly onto the model, the Point Sampling Tool plug-in in QGIS was used.

Evaluation of the Data Validity

To evaluate the validity of 3DgazeR output, we created a short animation of a 3D model with one red sphere in the middle. The diameter of the sphere was approximately 1/12 of the 3D model width. In the beginning, the sphere was located in the middle of the screen. After five seconds, the camera changed its position (the movement took two seconds), and the sphere moved to the upper left side of the screen. The camera stayed there for six seconds and then moved again, so the sphere was displayed in the next corner of the screen. This process was repeated for all four corners of the screen. The validation study is available at http://eyetracking.upol.cz/3d/.
The task of the participant was to look at the sphere at all times. The validation was performed on five participants. Recorded data were processed in the connecting and calculating modules of 3DgazeR. For the evaluation of the data validity, we analyzed how many data samples were assigned to the sphere. Average values for all five participants are displayed in Figure 6. Each bar in the graph represents one camera position (or movement). The blue color corresponds to the data samples whose gaze coordinates were assigned to the sphere; the red color is used when the gaze was recorded outside the sphere. Inaccuracies are evident for the first position of the sphere, because it took participants some time to find it; a similar problem appears at the first movement. Later, the percentage of samples recorded outside the sphere is minimal. In total, an average of 3.79% of samples was recorded outside the sphere. These results showed that the tool works correctly and that the inaccuracies are caused by the inability of the respondents to keep their eyes focused on the sphere, which was verified by watching the video recordings in OGAMA.
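The validity metric itself reduces to counting the samples whose intersected DEF identifier is not the target sphere, as in the following sketch (the samples3d structure and the 'sphere' identifier are assumptions):
```javascript
// Share of gaze samples whose intersected object is NOT the target sphere;
// samples3d is assumed to be the calculating module output with a "def" field.
function percentOffTarget(samples3d, sphereDef) {
  var off = samples3d.filter(function (s) { return s.def !== sphereDef; }).length;
  return 100 * off / samples3d.length;
}

// Example with a hypothetical identifier:
// percentOffTarget(samples3d, 'sphere')  -> about 3.79 on average in this validation
```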

Results

Visualization techniques allow researchers to analyze different levels and aspects of recorded eye tracking data in an exploratory and qualitative way. They help to analyze the spatio-temporal aspect of eye tracking data and the complex relationships it contains (Blascheck et al., 2014). We decided to use both fixations and raw data for visualization. 3D alternatives to the usual methods of eye-tracking data visualization were created, and other methods suitable for visualizing 3D eye-tracking data were explored. The following visualization methods were tested:
  • 3D raw data
  • 3D scanpath (fixations and saccades)
  • 3D attention map
  • Animation
  • Z coordinate variation over time graph

3D Raw Data

First, we tried to visualize raw data as simple points placed on the 3D surface. This method is very simple, but its main disadvantage is the poor legibility of data depicted in this way, mainly in areas with a high density of points. The size, color, and transparency of symbols can be set in the GIS software used. With this type of visualization, data from different groups of participants can be compared, as shown in Figure 7. Raw data displayed as points were used as input for creating other types of visualizations. Figure 7 shows the 3D visualization of raw data created in QGIS. Visualizing a large number of points in a 3D scene in a web browser through Three.js is hardware demanding; visualization of raw data is therefore more effective in ArcScene.

3D Scanpath

The usual approach for depicting eye-tracking data is scanpath visualization superimposed on a stimulus representation. Scanpaths show the eye-movement trajectory by drawing connected lines (saccades) between subsequent fixation positions. A traditional spherical representation of fixations was chosen, but Stellmach et al. (2010b) also demonstrate different types of representation. Cones can be used to represent fixations or viewpoints and view directions for camera paths.
The size of each sphere was determined from the length of the fixation. Fixations were detected in the OGAMA environment with the use of the I-DT algorithm. The thresholds were set to a maximum distance of 30 px and a minimum of three samples per fixation. Fixation length was used as the attribute determining the size of each sphere, and transparency (30%) was set because of overlaps. In the next step, we created 3D saccades linking the fixations; the PointConnector plug-in in QGIS was used for this purpose.
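For reference, the dispersion-threshold (I-DT) principle with the thresholds used here (30 px, at least three samples) can be sketched as follows. This is an illustration of the algorithm, not OGAMA's implementation, and OGAMA's exact distance measure may differ.
```javascript
// Minimal I-DT sketch: dispersion threshold in px, minimum window in samples.
function detectFixationsIDT(samples, maxDispersion, minSamples) {
  maxDispersion = maxDispersion || 30;
  minSamples = minSamples || 3;
  var fixations = [];
  var i = 0;
  while (i + minSamples <= samples.length) {
    var j = i + minSamples;
    if (dispersion(samples.slice(i, j)) <= maxDispersion) {
      // grow the window while the dispersion stays under the threshold
      while (j < samples.length && dispersion(samples.slice(i, j + 1)) <= maxDispersion) {
        j++;
      }
      var win = samples.slice(i, j);
      fixations.push({
        x: mean(win.map(function (s) { return s.x; })),   // fixation centroid
        y: mean(win.map(function (s) { return s.y; })),
        start: win[0].time,
        duration: win[win.length - 1].time - win[0].time  // fixation length
      });
      i = j;          // continue after the detected fixation
    } else {
      i++;            // slide the window by one sample
    }
  }
  return fixations;
}

function dispersion(win) {
  var xs = win.map(function (s) { return s.x; });
  var ys = win.map(function (s) { return s.y; });
  return (Math.max.apply(null, xs) - Math.min.apply(null, xs)) +
         (Math.max.apply(null, ys) - Math.min.apply(null, ys));
}

function mean(a) { return a.reduce(function (s, v) { return s + v; }, 0) / a.length; }
```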
This visualization method is quite clear. It provides an overview of the duration of individual fixations, their position, and their relation to each other. It shows where the participant's gaze lingered and where it stayed only briefly. Lines indicate whether a participant skipped between remote locations and back or whether the observation of the stimulus was smooth. The scanpath from one participant solving variant A, task 4 is shown in Figure 8. From the length of the fixations, it is evident that the participant observed locations near the spherical bodies defining the target points crucial for solving the task. His gaze shifted progressively from target to target, with the red target attracting the most attention.

3D Attention Map

Visual gaze analysis in three-dimensional virtual environments still lacks the methods and techniques for aggregating attentional representations. Stellmach et al. (2010b) introduced three types of attention maps suitable for 3D stimuli – projected, object-based, and surface-based attention maps. In Digital Elevation Models, the use of projected attention maps is the most appropriate.
Object-based attention maps, which are relatively similar to the concept of Areas of Interest, can also be used for eye-tracking analysis of interactive 3D models with 3DgazeR. In this case, stimuli must contain predetermined components (objects) with unique identifiers (the DEF attribute in the X3DOM library).
Projected attention maps can be created in the ArcScene environment or with the Heatmap function in QGIS. Heatmap calculates the density of features (in our case fixations) in a neighborhood around those features. Conceptually, a smoothly curved surface is fitted over each point. The important parameters for creating a heatmap are the grid cell size and search radius. We used a cell size of 25 m (about one thousandth of the terrain model size) and the default search radius (see Figure 9).
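The underlying computation is a kernel density estimate on a regular grid, sketched below for illustration. The quartic kernel, the grid layout, and the function name are assumptions; the Heatmap function's exact kernel and defaults may differ.
```javascript
// Kernel density sketch behind a projected attention map: each fixation adds a
// smooth kernel to a regular grid (25 m cells in the study); the search radius
// limits each fixation's influence.
function attentionGrid(fixations, extent, cellSize, searchRadius) {
  var cols = Math.ceil((extent.xMax - extent.xMin) / cellSize);
  var rows = Math.ceil((extent.yMax - extent.yMin) / cellSize);
  var grid = new Float32Array(cols * rows);

  fixations.forEach(function (f) {
    for (var r = 0; r < rows; r++) {
      for (var c = 0; c < cols; c++) {
        // distance from the cell centre to the fixation
        var cx = extent.xMin + (c + 0.5) * cellSize;
        var cy = extent.yMin + (r + 0.5) * cellSize;
        var d = Math.sqrt((cx - f.x) * (cx - f.x) + (cy - f.y) * (cy - f.y));
        if (d < searchRadius) {
          var t = 1 - (d / searchRadius) * (d / searchRadius);
          grid[r * cols + c] += t * t;   // quartic (biweight) kernel
        }
      }
    }
  });
  return { grid: grid, cols: cols, rows: rows };
}
```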
The advantage of projected attention maps is their clarity when visualizing a large amount of data. In a Geographic Information System, the exact color scheme of the attention map can be defined (with minimum and maximum values).
An interesting result was obtained from task 6, variant B. Figure 9 compares the resultant attention maps from participants with cartographic knowledge with those from the general public. For cartographers, the most important part of the terrain was around the blue cube. Participants without cartographic knowledge focused on other objects in the terrain. An interpretation of this behavior could be that the cartographers were consistent with the task and looked at the blue cube from different areas. By contrast, novices used the opposite approach and investigated which objects were visible from the blue cube’s position.

Animation

A suitable tool for evaluating user strategies is animation. Creating an animation with a 3D model is not possible in QGIS, so we used ArcScene (with the Create Time Animation function) for this purpose. The model can also be rotated during the animation, providing interactivity from data acquisition through to final analysis. Animations can be used to study the fixations of individuals or to compare several users. Animations can be exported from ArcScene as video files (e.g. AVI), but the animation then loses its interactivity. AVI files exported from ArcScene are available at http://eyetracking.upol.cz/3d/. A similar method to animation is taking screenshots, which can also be used in the qualitative (manual) analysis of critical moments of task solving, such as at the end or when entering an answer.

Graph

When analyzing 3D eye-tracking data, it is appropriate to concentrate on the Z coordinate (height). From the data recorded with 3DgazeR, the Z coordinate's changes over time can be displayed, so the elevations the participants looked at in the model during the test can be investigated. Data from ArcScene were exported into a DBF table and analyzed in OpenOffice Calc. A scatter plot with data points connected by lines was used. A graph can be created for one participant (see Figure 10) or for multiple participants; in this case, a graph of raw-data Z coordinate values was created.
It is apparent from this graph when participants looked at higher ground or lowlands. In Figure 10, we can see how the participant initially fluctuated between observed elevations and focused on the highest point around the 27th second of the task. In general, we conclude that this participant studied the entire terrain quite carefully and looked at a variety of low to very high elevations.

Discussion

We developed our own testing tool, 3DgazeR, because none of the software tools found through the literature review was freely available for our purposes: those tools worked with specific devices or had proprietary licenses and were not free or open source software. 3DgazeR fills this gap and is freely available to interested parties under a BSD license. An English version of 3DgazeR is available at http://eyetracking.upol.cz/3d/. Furthermore, 3DgazeR has several significant advantages:
  • It permits evaluation of different types of 3D stimuli because the X3DOM library is very flexible – for an overview of various 3D models displayed through X3DOM see Behr et al. (2010), Herman and Reznik (2015), or Herman and Russnak (2016).
  • It is based on open web technologies and is thus an inexpensive solution that does not need special software or plug-ins installed on the client or server side.
  • It combines open JavaScript libraries and PHP, and so may be easily extended or modified.
  • It writes data into a CSV file, allowing easy analysis under various commercial, freeware, and open source programs.
3DgazeR also demonstrates general approaches in creating eye-tracking analyses of interactive 3D visualizations. Some limitations of this testing tool, however, were identified during the pilot test:
  • A higher recording frequency of virtual camera position and orientation in the data acquisition module would allow greater precision during analysis
  • Some of the calculated 3D gaze data (points) are not correctly placed on a surface. This distortion occurs when the 3D model is rotated while eyes are simultaneously focused on a specific place, or when the model is rotated, and eyes track with a smooth motion. A higher frequency in recording virtual camera position and orientation can solve this problem
  • Data processing is time-consuming and involves manual effort. Automating this process and developing tools to speed up data analysis and visualization would greatly enhance its productivity.
Future development of 3DgazeR should aim at overcoming these limitations. Other possible extensions to our methodology and the 3DgazeR tool have been identified:
  • We want to modify 3DgazeR to support other types of 3D models (e.g. 3D models of buildings, machines, or similar objects), and focus mainly on designing and testing procedures for creating 3D models composed of individual parts marked with unique identifiers (as mentioned above, with a DEF attribute). Such 3D models would also allow us to create object-based attention maps. The first trials in this direction are already underway; they involve simple 3D models that are predominantly created manually. This is time-consuming and requires knowledge of the XML (eXtensible Markup Language) structure and the X3D format. We would like to simplify and automate this process as much as possible in the future.
  • We would like to increase the frequency of recording the position and orientation of the virtual camera, especially during its movement, because the uncertainty caused by merging data with different frequencies may affect further analysis. On the other hand, when there is no user interaction (the virtual camera position is not changing), it would be suitable to decrease the frequency to reduce the size of the created CSV file. The ideal solution would be recording with an adaptive frequency, depending on whether the virtual camera is moving or not.
  • We also want to improve the connecting module to use a more accurate method for joining virtual camera movement data with data from the eye-tracking system.
  • We tested primarily open source software (QGIS, OpenOffice Calc) for visualization of the results. Creation of a 3D animation was not possible in QGIS, so the commercial software ArcScene was used for this purpose; ArcScene is also more effective for raw data visualization. We also want to test the possibilities of advanced statistical analysis in an open source program, e.g. R.
3DgazeR enables each participant's strategy (e.g. Figure 8 and Figure 10) to be studied, pairs of participants to be compared, and group strategies (e.g. Figure 7 and Figure 9) to be analyzed. In the future, once the above adjustments and additions have been included, we want to use 3DgazeR for complex analysis of user interaction in virtual space and to compare 3D eye-tracking data with the user interaction recordings introduced by Herman and Stachon (2016). We would like to extend the results of existing studies, e.g. Stellmach et al. (2010b), in this manner.

Conclusions

We created an experimental tool called 3DgazeR to record eye-tracking data for interactive 3D visualizations. 3DgazeR is freely available to interested parties under a BSD license. The main function of 3DgazeR is to calculate 3D coordinates (X, Y, Z coordinates of the 3D scene) for individual points of view. These 3D coordinates can be calculated from the values of the position and orientation of a virtual camera and the 2D coordinates of the gaze upon the screen. 3DgazeR works with both the SMI eye-tracker and the low-cost EyeTribe tracker and can compute 3D coordinates from raw data and fixations. The functionality of 3DgazeR has been tested in a case study using terrain models (DEM) as stimuli. The purpose of this test was to verify the functionality of the tool and discover suitable methods for visualizing and analyzing the recorded data. Five visualization methods were proposed and evaluated: 3D raw data, 3D scanpath (fixations and saccades), 3D attention map (heat map), animation, and a graph of Z coordinate variation over time.

Ethics and Conflict of Interest

The authors declare that the contents of the article are in agreement with the ethics described in http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html and that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

A special thank you to Lucie Bartosova, who performed the testing and did a lot of work preparing data for analysis. This research was supported by grants No. MUNI/M/0846/2015 and MUNI/A/1419/2016, both awarded by Masaryk University, Czech Republic. The project was also supported by student project IGA_PrF_2017_024 of Palacký University in Olomouc, Czech Republic.

References

  1. Baldauf, M., P. Fröhlich, and S. Hutter. 2010. KIBITZER: a wearable system for eye-gaze-based mobile urban exploration. In Proceedings of the 1st Augmented Human International Conference. ACM: pp. 9–13.
  2. Behr, J., P. Eschler, Y. Jung, and M. Zöllner. 2009. X3DOM: a DOM-based HTML5/X3D integration model. In Proceedings of the 14th International Conference on 3D Web Technology. ACM: pp. 127–135.
  3. Behr, J., Y. Jung, J. Keil, T. Drevensek, M. Zoellner, P. Eschler, and D. Fellner. 2010. A scalable architecture for the HTML5/X3D integration model X3DOM. In Proceedings of the 15th International Conference on Web 3D Technology. ACM.
  4. Blascheck, T., K. Kurzhals, M. Raschke, M. Burch, D. Weiskopf, and T. Ertl. 2014. State-of-the-art of visualization for eye tracking data. Proceedings of EuroVis.
  5. Bleisch, S. 2012. 3D Geovisualization – Definition and Structures for the Assessment of Usefulness. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences I-2: 129–134.
  6. Bleisch, S., J. Burkhard, and S. Nebiker. 2009. Efficient Integration of data graphics into virtual 3D Environments. Proceedings of 24th International Cartography Conference.
  7. Brychtova, A., S. Popelka, and Z. Dobesova. 2012. Eye-tracking methods for investigation of cartographic principles. 12th International Multidisciplinary Scientific Geoconference 2: 1041–1048.
  8. Çöltekin, A., S. Fabrikant, and M. Lacayo. 2010. Exploring the efficiency of users' visual analytics strategies based on sequence analysis of eye movement recordings. International Journal of Geographical Information Science 24, 10: 1559–1575.
  9. Çöltekin, A., I. Lokka, and M. Zahner. 2016. On the Usability and Usefulness of 3D (Geo)Visualizations – A Focus on Virtual Reality Environments. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 387–392.
  10. Dolezalova, J., and S. Popelka. 2016. Evaluation of the user strategy on 2D and 3D city maps based on novel scanpath comparison method and graph visualization. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 637–640.
  11. Duchowski, A., E. Medlin, N. Cournia, H. Murphy, A. Gramopadhye, S. Nair, J. Vorah, and B. Melloy. 2002. 3-D eye movement analysis. Behavior Research Methods, Instruments, & Computers 34, 4: 573–591.
  12. Ellis, G., and A. Dix. 2006. An explorative analysis of user evaluation studies in information visualisation. Proceedings of the 2006 AVI workshop; pp. 1–7.
  13. Enoch, J. M. 1959. Effect of the size of a complex display upon visual search. JOSA 49, 3: 280–285.
  14. Fabrikant, S. I., S. Rebich-Hespanha, N. Andrienko, G. Andrienko, and D. R. Montello. 2008. Novel method to measure inference affordance in static small-multiple map displays representing dynamic processes. The Cartographic Journal 45, 3: 201–215.
  15. Fabrikant, S. I., S. R. Hespanha, and M. Hegarty. 2010. Cognitively inspired and perceptually salient graphic displays for efficient spatial inference making. Annals of the Association of American Geographers 100, 1: 13–29.
  16. Fuhrmann, S., O. Komogortsev, and D. Tamir. 2009. Investigating Hologram-Based Route Planning. Transactions in GIS 13, s1: 177–196.
  17. Góralski, R. 2009. Three-dimensional interactive maps: theory and practice. Unpublished Ph.D. Thesis, University of Glamorgan.
  18. Gore, A. 1998. The digital earth: understanding our planet in the 21st century. Australian Surveyor 43, 2: 89–91.
  19. Haeberling, C. 2002. 3D Map Presentation – A Systematic Evaluation of Important Graphic Aspects. Proceedings of ICA Mountain Cartography Workshop "Mount Hood"; pp. 1–11.
  20. Haeberling, C. 2003. Topografische 3D-Karten – Thesen für kartografische Gestaltungsgrundsätze.
  21. Hägerstraand, T. 1970. What about people in regional science? Papers in Regional Science 24, 1: 7–24.
  22. Herman, L., and T. Reznik. 2015. 3D web visualization of environmental information – integration of heterogeneous data sources when providing navigation and interaction. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 479–485.
  23. Herman, L., and J. Russnák. 2016. X3DOM: Open Web Platform for Presenting 3D Geographical Data and E-learning. Proceedings of 23rd Central European Conference; pp. 31–40.
  24. Herman, L., and Z. Stachoň. 2016. Comparison of User Performance with Interactive and Static 3D Visualization – Pilot Study. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 655–661.
  25. Hughes, J. F., A. Van Dam, J. D. Foley, and S. K. Feiner. 2014. Computer graphics: principles and practice. Pearson Education: p. 1264.
  26. Incoul, A., K. Ooms, and P. De Maeyer. 2015. Comparing paper and digital topographic maps using eye tracking. In Modern Trends in Cartography. Springer: pp. 339–356.
  27. Kraak, M. 1988. Computer-assisted cartographical 3D imaging techniques. Delft University Press: Volume 175.
  28. Kubíček, P., Č. Šašinka, Z. Stachoň, Z. Štěrba, J. Apeltauer, and T. Urbánek. 2017. Cartographic Design and Usability of Visual Variables for Linear Features. The Cartographic Journal 54, 1: 91–102.
  29. Kveladze, I., M.-J. Kraak, and C. P. van Elzakker. 2013. A methodological framework for researching the usability of the space-time cube. The Cartographic Journal 50, 3: 201–210.
  30. Li, X., A. Çöltekin, and M.-J. Kraak. 2010. Visual exploration of eye movement data using the space-time-cube. Geographic Information Science. Springer: 295–309.
  31. Lokka, I., and A. Çöltekin. 2016. Simulating navigation with virtual 3D geovisualizations – A focus on memory related factors. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 671–673.
  32. MacEachren, A. M. 2004. How maps work: representation, visualization, and design. Guilford Press: vol. 513.
  33. Ooms, K., P. De Maeyer, and V. Fack. 2014. Study of the attentive behavior of novice and expert map users using eye tracking. Cartography and Geographic Information Science 41, 1: 37–54.
  34. Ooms, K., A. Çöltekin, P. De Maeyer, L. Dupont, S. Fabrikant, A. Incoul, M. Kuhn, H. Slabbinck, P. Vansteenkiste, and L. Van Der Haegen. 2015. Combining user logging with eye tracking for interactive and dynamic applications. Behavior Research Methods 47, 4: 977–993.
  35. Paletta, L., K. Santner, G. Fritz, H. Mayer, and J. Schrammel. 2013. 3D attention: measurement of visual saliency using eye tracking glasses. CHI'13 Extended Abstracts on Human Factors in Computing Systems. ACM; pp. 199–204.
  36. Petchenik, B. B. 1977. Cognition in cartography. Cartographica: The International Journal for Geographic Information and Geovisualization 14, 1: 117–128.
  37. Petrovič, D., and P. Mašera. 2004. Analysis of user's response on 3D cartographic presentations. Proceedings of 7th meeting of the ICA Commission on Mountain Cartography; pp. 1–10.
  38. Pfeiffer, T. 2012. Measuring and visualizing attention in space with 3D attention volumes. In Proceedings of the Symposium on Eye Tracking Research and Applications. ACM: pp. 29–36.
  39. Popelka, S. 2014. The role of hill-shading in tourist maps. CEUR Workshop Proceedings; pp. 17–21.
  40. Popelka, S., and A. Brychtova. 2013. Eye-tracking Study on Different Perception of 2D and 3D Terrain Visualisation. The Cartographic Journal 50, 3: 240–246.
  41. Popelka, S., A. Brychtova, J. Svobodova, J. Brus, and J. Dolezal. 2013. Advanced visibility analyses and visibility evaluation using eye-tracking. Proceedings of 21st International Conference on Geoinformatics; pp. 1–6.
  42. Popelka, S., and P. Dedkova. 2014. Extinct village 3D visualization and its evaluation with eye-movement recording. Lecture Notes in Computer Science 8579: 786–795.
  43. Popelka, S., Z. Stachon, C. Sasinka, and J. Dolezalova. 2016. EyeTribe Tracker Data Accuracy Evaluation and Its Interconnection with Hypothesis Software for Cartographic Purposes. Computational Intelligence and Neuroscience, 1–14.
  44. Putto, K., P. Kettunen, J. Torniainen, C. M. Krause, and L. Tiina Sarjakoski. 2014. Effects of cartographic elevation visualizations and map-reading tasks on eye movements. The Cartographic Journal, 225–236.
  45. Ramloll, R., C. Trepagnier, M. Sebrechts, and J. Beedasy. 2004. Gaze data visualization tools: opportunities and challenges. In Proceedings of Eighth International Conference on Information Visualisation. pp. 173–180.
  46. Savage, D. M., E. N. Wiebe, and H. A. Devine. 2004. Performance of 2D versus 3D topographic representations for different task types. In Proceedings of Human Factors and Ergonomics Society Annual Meeting. pp. 1793–1797.
  47. Slocum, T. A., C. Blok, B. Jiang, A. Koussoulakou, D. R. Montello, S. Fuhrmann, and N. R. Hedley. 2001. Cognitive and usability issues in geovisualization. Cartography and Geographic Information Science 28, 1: 61–75.
  48. Špriňarová, K., V. Juřík, Č. Šašinka, L. Herman, Z. Štěrba, Z. Stachoň, J. Chmelík, and B. Kozlíková. 2015. Human-Computer Interaction in Real-3D and Pseudo-3D Cartographic Visualization: A Comparative Study. In Cartography – Maps Connecting the World. Springer: pp. 59–73.
  49. Staněk, K., L. Friedmannová, P. Kubíček, and M. Konečný. 2010. Selected issues of cartographic communication optimization for emergency centers. International Journal of Digital Earth 3, 4: 316–339.
  50. Steinke, T. R. 1987. Eye movement studies in cartography and related fields. Cartographica: The International Journal for Geographic Information and Geovisualization 24, 2: 40–73.
  51. Stellmach, S., L. Nacke, and R. Dachselt. 2010a. 3D attentional maps: aggregated gaze visualizations in three-dimensional virtual environments. In Proceedings of the International Conference on Advanced Visual Interfaces. ACM: pp. 59–73.
  52. Stellmach, S., L. Nacke, and R. Dachselt. 2010b. Advanced gaze visualizations for three-dimensional virtual environments. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications. ACM: pp. 109–112.
  53. Voßkühler, A., V. Nordmeier, L. Kuchinke, and A. M. Jacobs. 2008. OGAMA (Open Gaze and Mouse Analyzer): open-source software designed to analyze eye and mouse movements in slideshow study designs. Behavior Research Methods 40, 4: 1150–1162.
  54. Wang, S., Y. Chen, Y. Yuan, H. Ye, and S. Zheng. 2016. Visualizing the Intellectual Structure of Eye Movement Research in Cartography. ISPRS International Journal of Geo-Information 5, 10: 168.
  55. Wilkening, J., and S. I. Fabrikant. 2013. How users interact with a 3D geo-browser under time pressure. Cartography and Geographic Information Science 40, 1: 40–52.
  56. Wood, J., S. Kirschenbauer, J. Döllner, A. Lopes, and L. Bodum. 2005. Using 3D in visualization. In Exploring Geovisualization. Edited by J. Dykes. Elsevier: pp. 295–312.
Figure 1. Schema of 3DgazeR modules.
Figure 2. Examples of data about eye-tracking data (left) and virtual camera movement (right) and schema of their connection.
Figure 3. Principle of ray casting method for 3D scene coordinates calculation.
Figure 4. An example of stimuli from variant A – terrain covered with a hypsometric scale (left) and variant B – terrain covered with a satellite image (right).
Figure 5. Raw data displayed as a layer in GIS-software (green points – calculated 3D gaze data; red – points with incorrect elevation).
Figure 6. Evaluation of the data validity. Red color corresponds to the data samples where gaze was not recorded on the target sphere.
Figure 7. Comparison of 3D raw data (red points – females, blue points – males) for variant B, task 6.
Figure 8. Scanpath (3D fixations and saccades) of one user for variant A, task 4. Interactive version is available at http://eyetracking.upol.cz/3d/.
Figure 9. Comparison of 3D attention maps from cartographers (left) and non-cartographers (right) for variant B, task 6. Interactive versions are available at http://eyetracking.upol.cz/3d.
Figure 10. Graph of observed elevations during task (variant A, task 4, participant no. 20).
