Open Access Article

User Evaluation of Map-Based Visual Analytic Tools

Department of Geoinformatics, Faculty of Science, Palacký University in Olomouc, 17. listopadu 50, 771 46 Olomouc, Czech Republic
Department of Geography, Faculty of Science, Masaryk University, Kotlářská 2, 611 37 Brno, Czech Republic
Grammar School Brno-Řečkovice, Terezy Novákové 2, 621 00 Brno, Czech Republic
Plan4All, K Rybníčku 557, 330 12 Horní Bříza, Czech Republic
InnoConnect, Fialková 1026/16, 326 00 Plzeň, Czech Republic
HELP SERVICE – REMOTE SENSING Ltd., Husova 2117, 256 01 Benešov, Czech Republic
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2019, 8(8), 363;
Received: 7 June 2019 / Revised: 25 July 2019 / Accepted: 7 August 2019 / Published: 20 August 2019
(This article belongs to the Special Issue Smart Cartography for Big Data Solutions)


Big data have also become a big challenge for cartographers, as the majority of big data can be localized. The use of visual analytics tools comprising interactive maps stimulates inter-disciplinary actors to explore new ideas and decision-making methods. This paper deals with the evaluation of three map-based visual analytics tools by means of the eye-tracking method. The conceptual part of the paper begins with an analysis of the state of the art and ends with the design of proof-of-concept experiments. The verification part consists of the design, composition, and realization of an eye-tracking experiment in which three map-based visual analytics tools were tested for user-friendliness. A set of recommendations on GUI (graphical user interface) design and interactive functionality for map makers is formulated on the basis of the errors and shortcomings discovered in the assessed stimuli. The results of the verification were used as inputs for improving the three tested map-based visual analytics tools; they might also serve as best practice for map-based visual analytics tools in general, as well as for improving the policy making cycle elaborated by the European project PoliVisu (Policy Development based on Advanced Geospatial Data Analytics and Visualization).
Keywords: eye-tracking; interactive map; PoliVisu; usability; visual analytics; WebGLayer; HSLayers NG

1. Introduction

The flood of data, the speed of (near) real-time updates, very complex relations between any piece of information, high bias, and less time for decision making have become common features of today’s world. Cartography, geographic information systems, and web-based applications address such issues in many ways, visual analytics being one such application [1,2,3].
Within a few years, visual analytics has gone beyond the state-of-the-art through the utilization of geographic information gained from (big) data. Such an approach stimulates innovative thinking about complex challenges [4]. Visual analytics techniques and tools are used to synthesize information and derive insights from big and dynamic data, detect the expected and discover the unexpected, and provide understandable assessments and communicate them effectively [5]. A strong focus is on presenting the spatial-temporal aspect of the data in an “interactive” and “connected” way. Interactivity means that the user can interact through the GUI (graphical user interface) and the system will respond in (near) real-time. Connectivity is realized by integrated, linked, and synchronized graphs, (heat) maps, tables, or 3D views. The use of interactive maps, heat maps, and charts to understand user behavior (e.g., shifts in traffic flows/volume due to changing events) enables inter-disciplinary actors to explore new ideas together in a holistic, comprehensive, systematic, analytical, and visual manner before deploying costly solutions.
Using interactive visual analytics tools, the analysis of problems can have greater depth, as many layers of data relating to the physical and social world can be considered together. With big data tools, impacts can be explored across a whole region, rather than just one or two small localities. Instead of providing spreadsheets of uninspiring figures to illustrate the impact of, for example, urban transportation (e.g., road routing decisions in general [6]) or different types of hazards [7], visualizations provide one version of the truth for all to use.
The visual analytics tools presented in this paper are the results of several European research and innovation projects. Their development between November 2017 and October 2020 is funded by the European Horizon 2020 project PoliVisu (Policy Development based on Advanced Geospatial Data Analytics and Visualization). PoliVisu develops the traditional public policy making cycle (outlined by Patton and Sawicki [8]) using big data and visual analytics tools. The aim is to provide an open set of digital tools to leverage data to help public sector decision-making become more democratic by (a) experimenting with different policy options through impact visualization and (b) using the resulting visualizations to engage and harness the collective intelligence of policy stakeholders for collaborative solution development [9].

1.1. User Evaluation of Tools for Visual Analysis

Visual analytics, in general, should allow users to focus their full perceptual and cognitive capabilities on analytical processes. To allow this, visual analytics tools must be user-friendly and understandable to users. It is therefore logical that these tools should be evaluated by their users. Visual analytics tools are usually assessed on the basis of their use in experimental situations that employ some kind of analysis. Only a few studies have addressed their evaluation by actual users.
The evaluation of visual analytics tools usually detects design problems in the layout of their user interface as well as in the interaction with this interface [10]. Freitas et al. [10] and Scholtz [11] define the sets of criteria for evaluating visual analytics tools, addressing both the evaluation of visual representations and interaction mechanisms. Scholz et al. [12] provide further discussion of the development of an evaluation methodology for visual analytics environments. Some authors, e.g., [13,14] focus on the heuristic (expert) evaluation of visual analytics tools.
A special type of visual analysis is geospatial visual analytics, which deals with problems involving geographical space and the various objects, events, phenomena, and processes populating it [1,15]. Cartographic research of geovisual analytics has stressed interest in user studies informed by psychology and related cognitive science [1,5]. As Roth et al. [16] state, future work is needed to fully define high-level, insight-based tasks for geovisual analytics and interactive cartography.
In accordance with the above discussion, the user testing of geospatial visual analytics tools is important. Such user testing is presented, for example, by Robinson et al. [17] and Roth, Ross, and MacEachren [18]. Roth, Ross, and MacEachren [18] evaluated interactive maps for the visual analysis of criminal activities. They used both evaluation by experts (the think aloud method) and typical users (an online survey). Robinson et al. [17] evaluated the usability of geospatial visual analytics tools for epidemiology by means of various research methods (verbal protocol analysis, card-sorting, focus groups, and an in-depth case study). Schnürer, Sieber, and Çöltekin [19] compared four variants of the GUI of an interactive map by analyzing the number and distribution of mouse clicks. A number of other user studies verified the usability of web map portals (i.e., [20,21,22,23,24,25]).
However, web map portals themselves do not enable the visual analytical functions of geospatial data, though they may serve as base maps for visual analytics.
Because visual analytics tools are based on visual stimuli that are perceived by sight, eye-tracking is a suitable method for evaluating these tools. A review of eye-tracking applications for the evaluation of visual analytics is presented by Kurzhals et al. [26] and Krassanakis and Cybulski [27]. Kiefer et al. [28] state that the eye-tracking evaluation of interactive maps is challenging. Other papers, for example, Çöltekin et al. [29] and Çöltekin, Fabrikant, and Lacayo [30], describe the application of eye-tracking to compare two variants of the GUI of an interactive map. Golebiowska, Opach, and Rød [31] tested a web-based analytical tool consisting of a choropleth map, a table, and a parallel coordinates plot. Eye-tracking was also used to evaluate the geoportal MyFireWatch, which is based on an interactive map displaying the distribution of bushfires in Australia [32], and to evaluate weather forecast maps [33].
The presented paper focuses on the user evaluation of interactive maps based on WebGLayer and HSLayers NG technology. The eye-tracking method was chosen for the evaluation because it allows the visual perception of users to be studied in combination with direct observation, which in turn allows user interaction with interactive maps to be evaluated.

1.2. Description of the Investigated Tools

Visual analytics of spatiotemporal changes can bring new insights into the consequences of human decisions in many areas. We may identify several application domains, starting from smart mobility policy [6,34,35], through the monitoring of machinery and sensor data in agriculture [36,37], the monitoring of population movement and distribution [38,39], and noise mapping [40], to spatiotemporal visualizations of crime scenes [41,42]. The main focus of visual analytics is, therefore, placed on interactivity through utilizing the concept of multiple coordinated views [43,44] and dynamic queries to emphasize the impact of changes in various phenomena.
WebGLayer is the tool that was used during the Chicago Map and Flanders Map experiments on the user evaluation of visual analytics in this paper. WebGLayer is a JavaScript, WebGL (web graphics library)-based open source library for coordinated multiple view visualizations [45]. It is open-source software released under the BSD (Berkeley software distribution) license. The library is based on WebGL and uses the device's GPU for the fast rendering and filtering of data. It can render data on a map provided by third party libraries (e.g., OpenLayers, Leaflet, Google Maps API). The library is focused on spatial data and large datasets—so far, up to tens of millions of data records. It was developed and supported by the EU CIP (Competitiveness and Innovation Framework Programme) project OpenTransportNet for traffic flows and traffic accidents [5] and was later expanded and enhanced within the European Horizon 2020 PoliVisu project to facilitate a wider range of visualizations that meet cities' needs when working with smart mobility policy in the era of big data.
WebGLayer allows the development of interactive heatmap visualizations of large datasets (see a comparison of its performance to other contemporary solutions in [45]) by implementing multiple linked views to present data. Each of the views enables different interactions (such as brushing, filtering, or relationship analysis) that trigger an instant update of the other views. Thus, users benefit from immediate and dynamic data visualizations, gain a better understanding of data by applying filters, and gain the opportunity to discover relationships and patterns in the data.
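The linked-view behavior described above can be sketched in a few lines of plain JavaScript. This is only an illustrative sketch of the coordinated-multiple-views pattern under assumed names (`records`, `register`, `brush`); it is not the WebGLayer API:

```javascript
// Sketch of coordinated multiple views: every view may install a filter
// predicate, and changing any filter re-renders all linked views against
// the intersection of all active predicates. Data layout is hypothetical.
const records = [
  { day: 'Sat', hour: 18, type: 'theft' },
  { day: 'Sat', hour: 23, type: 'assault' },
  { day: 'Mon', hour: 9,  type: 'theft' },
];

const filters = new Map();   // viewId -> predicate
const views = new Map();     // viewId -> render callback

function register(viewId, render) {
  views.set(viewId, render);
}

// Brushing in one view installs a predicate and triggers an instant
// update of every linked view with the filtered subset.
function brush(viewId, predicate) {
  filters.set(viewId, predicate);
  const visible = records.filter(r =>
    [...filters.values()].every(p => p(r)));
  for (const render of views.values()) render(visible);
}

// Example: a "map" view and a "histogram" view linked together.
let mapCount = 0;
register('map', rows => { mapCount = rows.length; });
register('histogram', rows => { /* redraw bars from rows */ });

// Select Saturday crimes between 17:00 and midnight, as in Task 2.
brush('dayChart', r => r.day === 'Sat');
brush('hourChart', r => r.hour >= 17 && r.hour <= 23);
```

In WebGLayer itself the filtering and rendering run on the GPU via WebGL, which is what keeps all views responsive even for millions of records.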
HSLayers NG is a JavaScript-based framework used during the user evaluation of visual analytics experiments (Pilsen Map). The library extends OpenLayers 4 functionality. It uses modern JS frameworks instead of ExtJS 3 at the frontend. HSLayers is built in a modular way, which enables modules to be freely attached and removed as long as the dependencies of each of them are satisfied. The core of the framework is developed using AngularJS, RequireJS, and Bootstrap. The library is distributed under the MIT license. The library was customized, and a tailored application showing spatiotemporal changes of traffic volumes was created.
The following three use cases (two based on WebGLayer, one on HSLayers NG) are evaluated and described in this paper:
WebGLayer-based Chicago Map—crimes in Chicago (United States of America). The use case visualizes the reported incidents of crime that occurred in the city of Chicago between 1 January 2017 and 31 December 2017. In total, 63,216 crimes were reported in this period and published in the form of an application that is available at the URL:
HSLayers NG-based Pilsen Map—traffic intensity monitoring in Pilsen (Czech Republic). The use case visualizes historical, present, and estimated future traffic flows. The historical data from 2017 and 2018 comprise 500 million records (150 GB of data), while another million records (1 GB) from online detectors are generated each day. The developed application is available at the URL: The new traffic intensity application is in its prototype phase. Once finished, it will be migrated to the mentioned address.
WebGLayer-based Flanders Map—traffic accidents in Flanders (Belgium). The use case visualizes 63,532 traffic accidents that happened in Flanders in recent years. These represent approximately 85% of all accidents that involved injury or death registered by the police. The data were provided by the Federal Police and processed in collaboration with Informatie Vlaanderen. The developed application is available at the URL:

1.3. Research Aims

The main goal of this study was to evaluate user aspects with respect to three selected web maps using the eye-tracking and observation methods, especially with regard to user accessibility. This main goal was realized by fulfilling these sub-goals:
  • To test the user accessibility of the selected maps;
  • To find problems and shortcomings in the evaluated web maps;
  • To formulate recommendations for map authors based on these findings.

2. Methods

The aim of the study was to comprehensively evaluate the user accessibility of three newly designed PoliVisu maps, to detect possible usability problems, and to obtain feedback from users. This chapter is divided into individual subchapters that describe the process of creating an experiment and its subsequent implementation.
The whole experiment took place in an eye-tracking laboratory at the Department of Geoinformatics at Palacký University in Olomouc.

2.1. Design of the Experiment

The experiment was created in the SMI Experiment Center application and was divided into two parts—static and dynamic. Figure 1 illustrates the experiment design.

2.1.1. Static Part

The static part of the experiment contained six screenshots—two of each evaluated map. Every stimulus was presented for 10 seconds and participants viewed them freely—they did not need to complete any task. The purpose of this part was to analyze which parts of the interface attract attention. A stimulus in the form of a fixation cross was displayed for 600 ms before each screenshot. The purpose of the fixation cross was to unify the location where eye-movement trajectories would start (the center of the stimulus). Stimuli from the static part of the experiment are displayed in Figure 2. Stimuli were selected to show the interface of the web map with some filtering applied. The web maps used do not offer many possibilities for changing the layout. The only exceptions were the Pilsen map, where the "Timeline of constructions" was displayed in one stimulus, and the Flanders map, where the parallel coordinates feature was displayed. The presentation of the static part took one minute.

2.1.2. Dynamic Part

The dynamic part of the experiment was designed to test the user accessibility of all three maps. The experiment contained four tasks for each map (12 tasks in total). The last task of the experiment (connected to the Flanders map) was supplemented by a verbal interpretation of one of the map's features (the parallel coordinates plot). Participants had to express the functionality of this plot. The maps and tasks were presented in a fixed order. Working with these tasks tested how well a user was able to adapt to a new map interface and work with it effectively. The tasks were selected in close cooperation with the authors of the web maps, so the features that the authors considered important or problematic were chosen for the tasks. One of the main benefits of the web maps was the possibility of data filtering, so the majority of the tasks were focused on this functionality. The tasks were in line with the task taxonomy of Edsall et al. [46]. Participants were directly or indirectly required to use zooming, panning, accessing exact data, focusing, altering representation type, altering symbolization, posing queries, toggling visibility, conditioning, brushing, and linking. The tasks were designed so that the total length of this part did not exceed 10 minutes. Since the participants were Czech, the tasks were presented in the Czech language. Their English translation follows:
Tasks for “Chicago Map”:
  • Change the color of the heatmap.
  • Display on the map all the crimes which happened on Saturday from 17:00 to midnight.
  • Find out how many of these crimes there were (say aloud).
  • Mark any area on the map using the “Polygon” tool.
Tasks for “Pilsen Map”:
  • Change map type to “Basic OSM”.
  • Display the legend of the map.
  • Find out information about the road closure in the center of Pilsen at 17:00 on 31 October 2018.
  • Display the “Timeline of constructions” and find out when the Šumavská Bus Terminal will be closed (say aloud).
Tasks for “Flanders Map”:
  • Switch map layer to accidents.
  • View all traffic accidents that occurred during rain (“regenval”).
  • Estimate how many of these accidents happened at a speed of 40–50 km per hour (say aloud).
  • Verbally interpret the parallel coordinates plot.

2.1.3. Experiment Description

Testing was conducted in October 2018. The experiment was created in SMI Experiment Center and presented using an SMI RED 250 eye-tracker. Images were used as stimuli in the static part of the experiment. In the dynamic part, the screen recording type of stimulus was used to analyze the interactive web maps. Google Chrome was used as the browser.
Calibration was required before the actual eye-movement measurements. An error below 1.5° was considered acceptable. The average calibration error was 0.55°, and the distance between the participant and the monitor was approximately 60 cm. A chin rest was not used in the study. We checked the accuracy visually on a second monitor during the whole recording, and the re-calibration procedure was not needed. After calibration, participants were familiarized with the experiment and some personal information was gathered (name, age, gender, and experience with web maps).
Next, the static part began. For each map, two screenshots were presented in a fixed order: Chicago, Pilsen, and Flanders. Immediately after the end of the static part, the dynamic part was presented using the same maps in the same order. In the case of the dynamic part, the task was presented before the stimuli. Participants had unlimited time to assimilate the task and commit it to memory. They were also allowed unlimited time to complete the task. Participants could also skip any tasks that they were unable to complete. The whole experiment took an average of 11 minutes. This short length of the experiment helped to retain the attention of respondents and the re-calibration was not necessary. During the experiment, the movements of the participant’s eyes, the participant’s screen display, and the participant’s behavior and speech were recorded.

2.2. Participants

A total of 17 participants took part in the experiment. The participants were students and employees of the Department of Geoinformatics, Palacký University Olomouc, Czech Republic. The data were analyzed qualitatively, so the number of participants needed to reveal at least 95% of the web map problems was calculated. The testing was divided into two phases. In the first, a total of 15 participants were recorded. Data for these participants were analyzed, and the problems that occurred during their work with the web maps were identified.
The sample size calculator for discovering problems in a user interface ( was used for the calculation of the recommended sample size. This calculator produces an estimate of the number of users needed to discover a specified percentage of total problems. It uses Good–Turing discounting and a normalization procedure. The calculator is based on the binomial probability formula [47,48]. The problems identified for each participant were marked in a matrix. In this study, 16 different problems (55 occurrences in total) were identified for 15 participants. The calculator recommended a sample size of 17 participants as a requirement for identifying 95% of all problems available for discovery. For this reason, two additional participants were recorded in the second phase of the testing. Table 1 presents the list of participants, including information about their sex and experience in the field of cartography and web maps. Those with significant experience were employees or PhD students of the Department of Geoinformatics, and those with intermediate experience were bachelor-degree students. The table also contains the calibration error and tracking ratio for each participant.
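The binomial reasoning behind such calculators can be illustrated with a short script. This is a simplified sketch: it implements only the raw binomial formula, whereas the actual calculator additionally applies the Good–Turing procedure to deflate the observed discovery probability (which is why it recommends 17 participants rather than the naive estimate computed below):

```javascript
// With n participants, a problem seen by each participant with
// probability p is discovered with probability 1 - (1 - p)^n.
function discoveryRate(p, n) {
  return 1 - Math.pow(1 - p, n);
}

// Smallest n whose expected discovery rate reaches the goal (e.g., 95%).
function requiredSampleSize(p, goal) {
  return Math.ceil(Math.log(1 - goal) / Math.log(1 - p));
}

// In this study, 15 participants produced 55 occurrences of 16 distinct
// problems, giving a naive per-participant discovery probability:
const pNaive = 55 / (16 * 15);                    // ≈ 0.229
const nNaive = requiredSampleSize(pNaive, 0.95);  // naive estimate, no Good-Turing
```

The Good–Turing deflation lowers `pNaive` (the raw estimate is known to be optimistic for small samples), which raises the recommended sample size toward the 17 participants the calculator reported.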

2.3. Apparatus

The eye-tracker SMI RED 250 with a sampling frequency of 250 Hz was used for eye-movement recording. Stimuli were presented on a 24″ monitor. The eye-tracker was located in a laboratory with stable experimental conditions (light conditions, etc.) and a specially designed table (Figure 3). The experiment was prepared in SMI Experiment Center, and the recorded data were analyzed in SMI BeGaze. For the analysis of the static part of the experiment, the open-source application OGAMA [49] was used. For conversion between SMI and OGAMA, the SMI2OGAMA convertor ( was used. Fixations and saccades were identified using the I-DT algorithm with settings of 80 ms and 50 px according to [49]. The issue of fixation detection is described in several studies [50,51,52,53,54].
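A minimal sketch of the I-DT (dispersion-threshold) algorithm with the thresholds used here (80 ms minimum duration, 50 px maximum dispersion) may clarify how fixations are extracted from raw gaze samples. The sample layout is an assumption, and OGAMA's actual implementation may differ in details:

```javascript
// I-DT fixation detection: grow a window of gaze samples {x, y, t}
// (t in ms) until its spatial dispersion exceeds maxDispersion; if the
// window lasted at least minDuration, emit its centroid as a fixation.
function detectFixations(samples, maxDispersion = 50, minDuration = 80) {
  const fixations = [];
  let start = 0;
  while (start < samples.length) {
    // Grow the window while the dispersion stays under the threshold.
    let end = start;
    while (end + 1 < samples.length &&
           dispersion(samples, start, end + 1) <= maxDispersion) {
      end++;
    }
    const duration = samples[end].t - samples[start].t;
    if (duration >= minDuration) {
      const win = samples.slice(start, end + 1);
      fixations.push({
        x: win.reduce((s, p) => s + p.x, 0) / win.length,
        y: win.reduce((s, p) => s + p.y, 0) / win.length,
        duration,
      });
      start = end + 1;           // continue after the fixation
    } else {
      start++;                   // shift the window by one sample
    }
  }
  return fixations;
}

// Dispersion = (max x - min x) + (max y - min y) over the window.
function dispersion(samples, from, to) {
  const xs = [], ys = [];
  for (let i = from; i <= to; i++) { xs.push(samples[i].x); ys.push(samples[i].y); }
  return (Math.max(...xs) - Math.min(...xs)) +
         (Math.max(...ys) - Math.min(...ys));
}
```

At the RED 250's 250 Hz sampling rate, the 80 ms threshold corresponds to a window of roughly 20 samples.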

2.4. Methods of Analysis

Recorded eye-movement data were analyzed qualitatively. The main outputs of the analysis were problems identified for the three tested web maps. These problems were based on the inspection of videos of the dynamic part of the experiment overlaid by eye-movement trajectories. This type of analysis was time consuming, since all videos had to be analyzed manually.
In contrast, the static part of the experiment was analyzed using visual analytics methods. Data were first visualized in the environment of SMI BeGaze or OGAMA, and the visual analysis of these graphical outputs brought some interesting findings. The visualization methods were the attention map (see Figure 4), where the number of recorded fixations is displayed by different colors, and the focus map (e.g., upper left part of Figure 5), where a higher number of fixations leads to a clearer view of the map and darker areas indicate fewer fixations. Another analytical method, called key performance indicators in the SMI environment, shows some basic eye-tracking metrics for created areas of interest. Areas of interest (AOIs) were marked around the key parts of the interface of the web maps—their title, legend, controls, or specific filters. The map field was considered as one AOI, since we were analyzing how participants work with the interface of the web map, not with the map itself. In this study, the following metrics related to AOIs were chosen. Their description according to [55] is:
  • Sequence—order of gaze hits into the AOIs based on entry time;
  • Entry time—average time until the first fixation enters the AOI;
  • Dwell time—sum of all fixations and saccades within an AOI/number of participants;
  • Hit ratio—how many participants looked at least once into the AOI;
  • Revisitors—number of participants with more than one visit in the AOI.
The last method used for visual analytics was called sequence chart, which displays the time sequence for the visited areas of interest. Respondents’ eye-tracking data are represented by colored strips. The color of each strip represents each area of interest. Thus, the sequence chart shows the order of the AOIs visited, how long the respondents looked at them, and whether they looked at them repeatedly.
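The AOI metrics listed above can be illustrated by computing them from hypothetical per-participant fixation logs. The data layout and function names here are assumptions for illustration, not the SMI BeGaze format:

```javascript
// Compute entry time, dwell time, hit ratio, and revisitors for one AOI
// from per-participant fixation logs ({aoi, start, duration} in ms).
function aoiMetrics(recordings, aoi) {
  const entryTimes = [];       // time of the first fixation into the AOI
  let totalDwell = 0;          // summed fixation time inside the AOI
  let hits = 0, revisitors = 0;

  for (const fixations of recordings) {
    const inAoi = fixations.filter(f => f.aoi === aoi);
    if (inAoi.length === 0) continue;
    hits++;
    entryTimes.push(inAoi[0].start);
    totalDwell += inAoi.reduce((s, f) => s + f.duration, 0);

    // A revisit means the participant left the AOI and came back, i.e.,
    // the AOI appears in more than one contiguous run of fixations.
    let runs = 0;
    let prevInAoi = false;
    for (const f of fixations) {
      const cur = f.aoi === aoi;
      if (cur && !prevInAoi) runs++;
      prevInAoi = cur;
    }
    if (runs > 1) revisitors++;
  }

  return {
    entryTime: entryTimes.reduce((a, b) => a + b, 0) / entryTimes.length,
    dwellTime: totalDwell / recordings.length,   // per definition above
    hitRatio: hits / recordings.length,
    revisitors,
  };
}
```

Note that dwell time is divided by the total number of participants (as in the definition above), while entry time averages only over participants who actually looked into the AOI.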

3. Results

The results of eye-tracking measurements were divided into two sections—one for the static part and one for the dynamic part. Recommendations to authors of web maps based on the findings of this study are listed at the end of the chapter.

3.1. Static Part

3.1.1. Chicago Map

The static part of the experiment served to reveal which part of the GUI attracted the participants when they saw the map for the first time. For this purpose, attention maps showing the distribution of the first three fixations were created (Figure 4).
A fixation cross was presented before the stimuli to unify the starting point of the eye-movement trajectory. Figure 5 shows that the first fixation for most participants was directed towards the center of the screen. The second fixation remained in the map part of the stimulus. An interesting fact is that, for some participants, the first fixation was directed towards the upper-left corner of the stimulus, i.e., to the active filters panel. Attention did not fall on the title of the map located on the right above the interactive panel, since it was not very distinctive.
The focus map in the upper-left corner of Figure 5 confirmed the previous findings that some participants’ attention was attracted by the active filters panel in the upper-left corner of the map. The entry time into the AOI around the active filters was 3234 ms. The average entry time into the title of the map was 3029 ms. This value is lower, so, on average, participants looked at the title earlier than at the filters panel—yet, from the sequence chart at the bottom of Figure 5, it is evident that a relatively large number of participants started their stimulus inspection with the filters panel (participants P01, P08, P09, P10, P11, P14, and P15). There were also participants who did not look at the title of the map at all (P04, P05, P16). The interactive panels on the right side were inspected relatively marginally—participants spent only around 7%–8% of the time there.

3.1.2. Pilsen Map

The same set of analyses was performed for the Pilsen map. Attention maps of the first three fixations in the static stimulus of the Pilsen map show that participants looked into the legend that was displayed in the stimulus (Figure 6).
In the next part of the investigation, key performance indicators were analyzed. The most interesting finding is that the AOI with the title (marked green in Figure 7) was observed only sixth, i.e., as the last but one. The average entry time for this AOI was 5754 ms and it was visited by only 10 participants; thus, it might be concluded that the title was not very distinctive in the GUI and did not strongly attract the attention of the participants. In contrast, the legend AOI was observed for a relatively long time (24%) and was observed by all 17 participants.

3.1.3. Flanders Map

Attention maps of the first three fixations for the Flanders map (Figure 8) show that the early fixations of some participants were towards the title in the upper-right corner and the polygon tool in the upper-left corner. A relatively large number of participants stayed in the center of the map.
Data visualization in Figure 9 shows that the polygon tool was observed earlier than the title of the map (2186 ms versus 3538 ms). The polygon tool is apparently very distinctive and thus attracted the attention of the users instead of the title. The parallel coordinates plot at the bottom of the interface was observed by only one participant (P10), and only for a very short time. This example clearly shows the importance of the placement of a tool within the GUI. The polygon tool and the parallel coordinates plot have similar importance, but they are placed in different parts of the web map, and the attention participants paid to these tools differs significantly.

3.2. Dynamic Part

The result of the dynamic part of the experiment was a set of videos containing a record of respondents' work with the individual stimuli. The videos contained a capture of the screen overlaid with eye-movement trajectories. These videos were complemented by webcam recordings, by which the reactions of the participants and their comments were documented. A list of frequently recurring problems that users experienced in performing the given tasks was created on the basis of direct observation and the thorough inspection of the individual records. The results were divided into three parts according to the web maps used as stimuli.

3.2.1. Chicago Map

A 30-second time limit was selected as the success criterion for Task 1, and no problem was encountered during its completion. As shown in Table 2, 16 out of 17 respondents completed the task successfully within this time limit. The interface for changing the heatmap color was not problematic.
For Task 2, two main problems were identified. The participant’s task was to display crimes that occurred on a specific day at a particular time. One of the respondents was unable to fulfill this task. Three participants switched the display layer from the heatmap option to crimes before choosing parameters to display specific values. This change had no impact on the displayed values at all, only on the aesthetics of the map.
Task 3 can be considered as the most problematic of the Chicago stimulus. The participants had to find out the exact number of crimes that occurred under the conditions defined in the previous task and to say it aloud. Seven of the 17 participants gave the total number of recorded crimes on the map, which was part of the map label and was invariable. Five participants reported a different number, mostly the total number of crimes committed on the given day, but not at the specified time (see Figure 10). Four participants found no value.
Two problems were found with respect to Task 4. First, two participants were unable to locate the polygon tool and, thus, could not mark any area on the map. Second, one participant was unable to complete the generated polygon when working with the tool.

3.2.2. Pilsen Map

Table 3 shows that there were no major problems with respect to the first and fourth tasks with the Pilsen map, as all participants managed to complete the tasks successfully. Also, all but one participant managed to complete the second task, which was to display the legend. However, 11 of the 17 participants failed to complete Task 3, which was to find information about the road closure in the center of Pilsen at 17:00 on 31 October 2018. Here, the 11 participants in question clicked outside the closure, although they were sure they had clicked on it correctly. Consequently, an adjacent street was marked instead of the closure and the necessary information was not displayed. This problem was caused by the fact that the symbol for the closure was too big and that if the user did not click exactly on its center, the street behind it was selected instead. A detail of this problem can be seen in Figure 11. In the upper-left image, the user clicked slightly off-center on the symbol. From the zoomed image in the upper-right quadrant, it is evident that the click missed the road closure. In the images at the bottom, the user clicked on the center of the symbol and the correct information was displayed.
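The off-center click problem can be reproduced with a small hit-testing sketch: when selection tests the feature geometry rather than the drawn symbol, a click that visually lands on a large symbol can select a nearby feature instead. All coordinates, names, and numbers here are illustrative, not the HSLayers NG implementation:

```javascript
// Two features: a road closure drawn with a large symbol, and a nearby
// street with a thin symbol. The selectable geometry is just the point.
const features = [
  { id: 'closure', x: 100, y: 100, symbolRadius: 24 },  // big symbol
  { id: 'street',  x: 112, y: 100, symbolRadius: 2 },   // thin line nearby
];

// Problematic behavior: select the feature whose geometry is nearest
// to the click, ignoring how large the symbol is drawn.
function selectByGeometry(click) {
  let best = null, bestDist = Infinity;
  for (const f of features) {
    const d = Math.hypot(click.x - f.x, click.y - f.y);
    if (d < bestDist) { bestDist = d; best = f; }
  }
  return best.id;
}

// A possible fix: test against the drawn symbol first, so a click
// anywhere on the symbol selects the feature it represents.
function selectBySymbol(click) {
  for (const f of [...features].sort((a, b) => b.symbolRadius - a.symbolRadius)) {
    if (Math.hypot(click.x - f.x, click.y - f.y) <= f.symbolRadius) return f.id;
  }
  return selectByGeometry(click);
}

// A click 8 px off the symbol center, still well inside the drawn symbol:
const click = { x: 108, y: 100 };
```

Under geometry-based selection the click above picks the street, exactly the failure the 11 participants experienced; symbol-aware hit testing picks the closure.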

3.2.3. Flanders Map

Table 4 shows that there was no significant problem with completing Task 1 for the Flanders map. As in the case of the Chicago map, a time limit of 30 seconds was set for the task. Only one participant failed to complete the task within this limit, so it can be concluded that this element was in the right place.
There were more mistakes in Task 2. The task was to display all road accidents that happened during rain. A serious problem with the selection of the bars in the interactive panel for this map was discovered. As with the Chicago map, bar graphs could be selected interactively. However, if a user wanted to select a bar, it was not sufficient just to click on the given bar; it was also necessary to drag over the bar with the mouse cursor. Twelve of the 17 participants had problems with bar selection. Instead of dragging the mouse, they tried to mark bars with a click. When they found out that clicking did not work, they moved their attention to another part of the map and started looking for answers elsewhere.
A further problem with bar selection was reported by seven participants. They dragged their mouse cursor over the bar, but it was not selected. To correctly select the bar, it was necessary to move the cursor across the side of the bar (see Figure 12). These issues with the interface are considered serious and will be fixed in the next version of the WebGLayer library in order to improve the usability of the map.
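The fix promised for the WebGLayer library is essentially a matter of interaction logic: a short pointer movement should be treated as a click (selecting the bar under the cursor), while a longer movement should be treated as a drag (selecting every bar the sweep overlaps). The following is a minimal, framework-free sketch of that logic; the `Bar` shape, the function name, and the pixel threshold are illustrative assumptions, not WebGLayer's actual API.

```typescript
// Horizontal extent of one bar, in screen pixels.
interface Bar { index: number; x0: number; x1: number; }

// Pointer movements shorter than this count as a click, not a drag (assumed value).
const CLICK_THRESHOLD_PX = 5;

// Return the indices of bars selected by a pointer gesture from downX to upX.
function selectBars(bars: Bar[], downX: number, upX: number): number[] {
  const moved = Math.abs(upX - downX);
  if (moved < CLICK_THRESHOLD_PX) {
    // Click: select the single bar under the cursor, if any.
    const hit = bars.find(b => downX >= b.x0 && downX <= b.x1);
    return hit ? [hit.index] : [];
  }
  // Drag: select every bar the sweep overlaps, regardless of direction.
  const [lo, hi] = downX < upX ? [downX, upX] : [upX, downX];
  return bars.filter(b => b.x1 >= lo && b.x0 <= hi).map(b => b.index);
}
```

Wired to `pointerdown`/`pointerup` events, such logic makes a plain click select a bar, which would have addressed the behavior that confused 12 of the 17 participants.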
There was also a problem with Task 3. Participants were asked to estimate how many of the previously selected accidents occurred at speeds between 40 and 50 km/h. The information was displayed by highlighting only part of the bar (see Figure 13). Four respondents gave the total number of road accidents at the given speeds, not the number of accidents that occurred both at these speeds and during rain.
The last task of the whole experiment was to verbally interpret the parallel coordinates feature. None of the participants were able to describe the functionality of this feature. Most of the participants complained about a confusing design, a small and illegible font, and poorly chosen colors that disappeared against a black background. The feature itself was not accompanied by any information panel or legend. It would be helpful to add some description or guide concerning how the parallel coordinates feature works. This issue should be fixed in the next release of the WebGLayer library.
Participant 10 was the only one to notice the relationship between the values he selected in previous tasks and the parallel coordinates plot. Below is a transcription of his description of the tool:
P10: “It’s a graphical expression of what I checked here, as if I checked all the days of the week here, I see that there is a relationship between all the days in the week. Then I see here that I chose some 40 to 50 km and then I can see characteristics of accidents. Overall, everything I have checked is displayed in this map.”
Author: “And the colors?”
P10: “Why it is green and why it is red? So, red will probably be some higher number of it.”

4. Recommendations and Discussion

The aim of this section is to formulate recommendations, based on the results of observation and eye-tracking testing, for the authors of the tested maps and for authors of such maps in general, to point out the problems encountered when fulfilling the tasks, and to suggest possible improvements. We also discuss the limits of the research.

4.1. Chicago Map

The results of the static part showed that the respondents’ attention at the beginning of stimuli observation was not directed to the map title. The title of the map was not sufficiently distinctive or was placed in the wrong location.
The biggest problem in the dynamic part was determining the exact value of the selected parameters. Seven of the 17 respondents took a particular value in the information panel for the answer. However, this value indicated the total number of crimes, not the number of crimes after filtering. This mistake could have arisen because of the disadvantageous position of the information on the total number of crimes or because of the unclear way in which values in the bars were supposed to be interpreted.
Another minor problem with the Chicago map was that participants switched the displayed layer from the heatmap option to crimes before selecting the required parameters. This change had no impact on the displayed values, only on the aesthetics of the map. To overcome this problem and prevent user misunderstanding, it would be helpful to add a description of, or information about, these options.

4.2. Pilsen Map

In the static part of the experiment, the same problem as in the previous map appeared. Participants’ attention was not initially directed to the title of the map. The title of the map was not sufficiently distinctive or was placed in the wrong location.
Only one noticeable problem appeared in the dynamic part—a bug with the selection of a particular road closure. To mark the exact road closure, the user had to zoom in and click on the symbol very precisely. Otherwise, information about traffic intensity in the adjacent street was displayed instead of information about road closures. This problem needs to be fixed in the future. Since no other user issues were reported by the participants, the map can be considered to be user friendly.

4.3. Flanders Map

The results of the static part showed that the smallest number of fixations was recorded in the area of the parallel coordinates plot, although it is a relatively significant part of the map. The name of the feature did not attract the attention of the participants.
Several problems were found in the dynamic part. The most important related to the selection of individual bars in the interactive panel. The respondent had to mark these elements by dragging the mouse, not by clicking, which was very confusing for the participants. The solution to the problem would be to use the same functionality as in the Chicago map, where this interactive panel looks almost the same but allows the selection of bars by dragging as well as by mouse click.
The second major problem of the Flanders map also related to the selection of individual bars in the interactive panel. When the participant dragged the mouse precisely over the bar, it was not selected. This problem is demonstrated in Figure 12. It could be eliminated by using the same information panel as is in the Chicago map.
The last major issue concerned the understanding of the parallel coordinates plot, which is usually used to identify clusters based on attribute values [56,57]. The respondent’s task was to interpret this function verbally, but none of the participants were able to fulfil the task. To remedy this problem, it would be useful to place an information panel or legend to describe and explain its functionality. The problems mentioned by the participants included poorly selected colors that disappeared against a black background, a chaotic representation of the parallel coordinates feature, and a small and unreadable font.
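The misunderstanding is unsurprising, because the underlying mapping of a parallel coordinates plot is rarely explained in the interface itself: each record is drawn as a polyline that crosses one vertical axis per attribute, with the crossing height given by the normalized attribute value. A minimal sketch of that mapping follows; the `Axis` shape and function name are ours for illustration, not taken from the WebGLayer library.

```typescript
// One vertical axis of the plot: the attribute it represents and its value range.
interface Axis { name: string; min: number; max: number; }

// Compute the polyline vertices for one record in a parallel coordinates plot
// of the given pixel size. Axes are equally spaced left to right; values are
// normalized so that `min` maps to the bottom (y = height) and `max` to the top (y = 0).
function polyline(
  record: Record<string, number>,
  axes: Axis[],
  width: number,
  height: number
): { x: number; y: number }[] {
  const step = axes.length > 1 ? width / (axes.length - 1) : 0;
  return axes.map((axis, i) => {
    const t = (record[axis.name] - axis.min) / (axis.max - axis.min);
    return { x: i * step, y: height * (1 - t) };
  });
}
```

A tooltip or legend built around this explanation (one axis per attribute, one line per record) would likely have prevented the observed confusion.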

4.4. Research Limitations

The eye-tracking experiment consisted of two parts—static and interactive. The order of the stimuli (interactive maps) was fixed, so some influence of the "learning effect" cannot be excluded. However, this effect was minimized by the stimuli being different and by the different natures of the assigned tasks. More tasks could also have been included, but the average length of one measurement in this study was already about 11 minutes. As stated, for example, by Popelka [53,54], eye-tracking experiments should not be too long; otherwise, participants lose concentration and focus.
Another possible limitation relates to the number of participants. In general, it can be assumed that a higher number of participants generates more representative results. Also, if more non-cartographers were involved in such experiments, even more problems would probably be identified. However, in the experimental testing of interactive maps using eye-tracking, the number of participants is usually similar (i.e., 20 participants in [30], 10 participants in [58], and 16 participants in [32]). The eye-tracking testing itself proceeded without complications. No results from any of the participants had to be excluded, because all participants passed the calibration step successfully (nobody exceeded the calibration threshold, which was 1.5° of the visual angle).
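The 1.5° threshold is an angular deviation, so its on-screen meaning depends on viewing distance; the standard conversion is θ = 2·atan(s / 2d) for a deviation of size s at distance d. A small sketch of this check follows; the function names are ours, not taken from the eye-tracking software used in the study.

```typescript
// Convert an on-screen deviation (cm) at a given viewing distance (cm)
// into degrees of visual angle: theta = 2 * atan(size / (2 * distance)).
function visualAngleDeg(deviationCm: number, distanceCm: number): number {
  return (2 * Math.atan(deviationCm / (2 * distanceCm)) * 180) / Math.PI;
}

// Check a participant's calibration deviations against a threshold (e.g., 1.5°).
function passesCalibration(devXDeg: number, devYDeg: number, thresholdDeg = 1.5): boolean {
  return devXDeg <= thresholdDeg && devYDeg <= thresholdDeg;
}
```

For example, at a typical 60 cm viewing distance, an offset of about 1.57 cm on the screen corresponds to roughly 1.5° of visual angle.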
Data collected through observation and eye-tracking were evaluated using qualitative methods, which are sometimes subjective, and their application to different maps from different regions can be difficult. However, this qualitative approach served to identify individual problems and deficiencies in the tested interactive maps. A method that also provides quantitative data and can be combined with the approaches described in this paper is user logging. The combination of eye-tracking and user logging for the evaluation of interactive maps has been described by Manson et al. [58] and Ooms et al. [21]. Herman, Popelka, and Hejlová [59] analyzed user strategies when working with interactive 3D maps. In these studies, user logging was used to compare user strategies rather than to identify map control errors.
This study can serve as a guide and inspiration for similar studies on interactive map assessment, but above all, it provides recommendations for authors of interactive maps for visual analytics.

5. Conclusions

The main goal of this paper was to evaluate, from the points of view of the user and user accessibility, three newly designed interactive analytical applications developed in the PoliVisu project to visualize processed big data. Eye-tracking was the primary method of evaluation, accompanied by other qualitative research methods, such as observation and interview. The three interactive analytical applications were chosen as stimuli for verification on the basis of the provided state-of-the-art analysis. The applications differed in domain, ranging from crime in Chicago, through traffic flow monitoring in Pilsen, to traffic accidents in Flanders. The verification part of the conducted research comprised an eye-tracking experiment that was successfully designed, assembled, and executed, at the end of which all three interactive maps were evaluated. Realistic interactive tasks (spatio-temporal filtering, etc.) were used in this part of the user testing. The resulting data were subsequently analyzed and processed. On the basis of the measurements, the errors and shortcomings that respondents encountered when working with the web maps were identified. The chosen qualitative approach allowed us to analyze these results in depth; they will be used both for improving the evaluated interactive maps and the WebGLayer and HSLayers NG libraries, and for the further development of interactive maps for the visual analysis of (big) data in general.
The main findings for the developed applications as well as for map-based interactive analytical applications dealing with (big) data in general are:
  • Important features (title, legend) should be placed within the GUI on the left side.
  • If a user creates a selection based on a selected time range or another attribute value, the value of this selection should be highlighted. The value of the selection should be displayed in the maximum available data views, and the value should be given in a sufficiently large, distinctive font and in a contrasting color.
  • All the point symbols in the map should be interactive (“clickable”).
  • When selecting values in bar charts or histograms, both mouse-drag and mouse-click should be supported.
  • The parallel coordinates plot should be designed and implemented carefully. It is advisable to create tooltips or a help function for it. The parallel coordinates plot serves to identify clusters based on attribute values, so it is advisable to color these groups within the default visualization.
Findings 3 and 4 relate to a particular technology (HSLayers NG, WebGLayer), while findings 1, 2, and 5 are relevant to all interactive maps for the visual analytics of (big) data. All these findings were presented to the developers of the WebGLayer and HSLayers NG libraries, and the libraries will be updated to incorporate them. These libraries are also used in other projects, such as Data-Driven Bioeconomy (DataBio) and the Sino-EU Soil Observatory for Intelligent Land Use Management (SIEUSOIL), for example in precision agriculture and in the environmental and smart city domains.
We have also identified the following points for further research.
  • It would also be beneficial to use quantitative analyses: statistical evaluation of eye-tracking metrics such as dwell time, number of fixations, or scanpath length. Quantitative measures will be helpful when two or more variants of the maps are developed and the experiment serves to find out which of them is the most usable. In that case, the experiment will need to be designed differently; the learning effect will be significant, so a higher number of participants divided into two (or more) groups will be necessary.
  • It would be beneficial to intensify the mixed design of research experiments, which combines the advantages of quantitative and qualitative methods. Specifically, it is advisable to combine eye-tracking and observation with other research methods such as user logging (providing quantitative data about user interaction), interview, or think aloud (providing a better insight into user thinking or reasons for making errors when working with interactive maps).
  • User strategies during work with the interactive environment might be investigated. The freely available tool ScanGraph [60] is suitable for this purpose, as it serves to identify similar user strategies. Another related sub-topic is the identification of differences between expert users and laypersons when working with interactive maps, differences which have previously been identified, for example, in interactive 3D geovisualizations [34].
  • Finally, interactive maps for visual analytics are very variable, from web maps [32,36], through 3D maps [40,61], to immersive virtual environments [7], and each of these forms should be evaluated by user testing. However, because of this variability, the results of such evaluations could be transferred or compared only partially. In general, user testing should be a part of the development of any interactive map.
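The quantitative metrics mentioned in the first point (number of fixations, dwell time, and scanpath length) can be computed directly from recorded fixations. The following is a minimal sketch; the `Fixation` record shape is an assumption for illustration, not the export format of any particular eye-tracker.

```typescript
// One fixation: screen position (px) and duration (ms).
interface Fixation { x: number; y: number; durationMs: number; }

// Number of fixations, total dwell time (ms), and scanpath length (px),
// the latter being the summed Euclidean distance between consecutive fixations.
function fixationMetrics(fixations: Fixation[]): { count: number; dwellMs: number; scanpathPx: number } {
  let dwellMs = 0;
  let scanpathPx = 0;
  for (let i = 0; i < fixations.length; i++) {
    dwellMs += fixations[i].durationMs;
    if (i > 0) {
      const dx = fixations[i].x - fixations[i - 1].x;
      const dy = fixations[i].y - fixations[i - 1].y;
      scanpathPx += Math.hypot(dx, dy);
    }
  }
  return { count: fixations.length, dwellMs, scanpathPx };
}
```

Such per-participant values would then feed the statistical comparison between map variants described above.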
In general, further testing is required to better understand all user aspects of interactive maps and the factors that influence them. User evaluation of interactive maps is not yet a common practice (see, for example, Krassanakis and Cybulski [27]) and is considered challenging (Kiefer et al. [28]).

Author Contributions

Stanislav Popelka was responsible for the used methodology, advised on experimental design, collaborated on the interpretation of the results, and coordinated the preparation of the whole paper. Lukáš Herman prepared the literature review, collaborated on the interpretation and discussion of the results, and wrote the abstract and the conclusions. Michaela Pařilová conducted the experimental study. Tomáš Řezník described the use cases and collaborated on writing the abstract and conclusions. Tomáš Řezník, Karel Jedlička, Jiří Bouchal, and Michal Kepka designed and developed the three map-based visual analytics tools. All the authors revised the final text when focusing on the evaluations and recommendations.

Funding


This research received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 769608 titled “Policy Development based on Advanced Geospatial Data Analytics and Visualisation” (PoliVisu) as well as a grant from the Ministry of Education, Youth and Sports of the Czech Republic under grant agreement No. LTACH-17002 titled “Dynamic Mapping Methods Oriented to Risk and Disaster Management in the Era of Big Data”.

Acknowledgments


We would like to thank all the participants for their time and efforts.

Conflicts of Interest

The authors declare no conflict of interest.

References


  1. Andrienko, G.; Andrienko, N.; Jankowski, P.; Keim, D.; Kraak, M.J.; MacEachren, A.; Wrobel, S. Geovisual analytics for spatial decision support: Setting the research agenda. Int. J. Geogr. Inf. Sci. 2007, 21, 839–857.
  2. Konečný, M. Cartography: Challenges and potential in the virtual geographic environments era. Ann. GIS 2011, 17, 135–146.
  3. Zhu, L.F.; Wang, Z.L.; Li, Z.W. Representing Time-Dynamic Geospatial Objects on Virtual Globes Using CZML-Part I: Overview and Key Issues. ISPRS Int. J. Geo-Inf. 2018, 7, 97.
  4. Lin, H.; Batty, M.; Jørgensen, S.E.; Fu, B.; Konečný, M.; Voinov, A.; Torrens, P.; Lu, G.; Zhu, A.X.; Wilson, J.P. Virtual Environments Begin to Embrace Process-based Geographic Analysis. Trans. GIS 2015, 19, 493–498.
  5. Thomas, J.J.; Cook, K.A. Illuminating the Path: The Research and Development Agenda for Visual Analytics; IEEE Computer Society Press: Richland, WA, USA, 2005.
  6. Jedlička, K.; Hájek, P.; Čada, V.; Martolos, J.; Šťastný, J.; Beran, D.; Kolovský, F.; Kozhukh, D. Open Transport Map—Routable OpenStreetMap. In Proceedings of the 2016 IST-Africa Week Conference, Durban, South Africa, 11–13 May 2016; pp. 1–11.
  7. Havenith, H.-B.; Cerfontaine, P.; Mreyen, A.-S. How Virtual Reality Can Help Visualise and Assess Geohazards. Int. J. Digit. Earth 2019, 12, 173–189.
  8. Patton, C.V.; Sawicki, D.S. Basic Methods of Policy Analysis and Planning; Routledge: Abingdon, VA, USA, 1993; pp. 154–196.
  9. McAleer, S.R.; Kogut, P.; Raes, L. The Case for Collaborative Policy Experimentation Using Advanced Geospatial Data Analytics and Visualisation. In Proceedings of the International Conference on Internet Science, Thessaloniki, Greece, 22–24 November 2017; pp. 137–152.
  10. Freitas, C.M.; Luzzardi, P.R.; Cava, R.A.; Winckler, M.; Pimenta, M.S.; Nedel, L.P. On evaluating information visualization techniques. In Proceedings of the Working Conference on Advanced Visual Interfaces, Trento, Italy, 22–24 May 2002; pp. 373–374.
  11. Scholtz, J. Beyond usability: Evaluation aspects of visual analytic environments. In Proceedings of the 2006 IEEE Symposium on Visual Analytics Science and Technology, Baltimore, MD, USA, 31 October–2 November 2006; pp. 145–150.
  12. Scholtz, J.; Plaisant, C.; Whiting, M.; Grinstein, G. Evaluation of visual analytics environments: The road to the Visual Analytics Science and Technology challenge evaluation methodology. Inf. Vis. 2014, 13, 326–335.
  13. Tory, M.; Moller, T. Evaluating visualizations: Do expert reviews work? IEEE Comput. Graph. Appl. 2005, 25, 8–11.
  14. Scholtz, J. Developing qualitative metrics for visual analytic environments. In Proceedings of the 3rd BELIV’10 Workshop: Beyond Time and Errors: Novel Evaluation Methods for Information Visualization, Florence, Italy, 5 April 2008; pp. 1–7.
  15. Andrienko, G.L.; Andrienko, N.; Keim, D.; MacEachren, A.M.; Wrobel, S. Challenging problems of geospatial visual analytics. J. Vis. Lang. Comput. 2011, 22, 251–256.
  16. Roth, R.E.; Çöltekin, A.; Delazari, L.; Filho, H.F.; Griffin, A.; Hall, A.; Korpi, J.; Lokka, I.; Mendonça, A.; Ooms, K. User studies in cartography: Opportunities for empirical research on interactive maps and visualizations. Int. J. Cartogr. 2017, 3, 61–89.
  17. Robinson, A.C.; Chen, J.; Lengerich, E.J.; Meyer, H.G.; MacEachren, A.M. Combining usability techniques to design geovisualization tools for epidemiology. Cartogr. Geogr. Inf. Sci. 2005, 32, 243–255.
  18. Roth, R.; Ross, K.; MacEachren, A. User-centered design for interactive maps: A case study in crime analysis. ISPRS Int. J. Geo-Inf. 2015, 4, 262–301.
  19. Schnürer, R.; Sieber, R.; Çöltekin, A. The Next Generation of Atlas User Interfaces: A User Study with “Digital Natives”. In Modern Trends in Cartography; Springer: Heidelberg, Germany, 2015; pp. 23–36.
  20. Alacam, Ö.; Dalci, M. A usability study of WebMaps with eye tracking tool: The effects of iconic representation of information. In New Trends in Human-Computer Interaction; Springer: Berlin, Germany, 2009; pp. 12–21.
  21. Ooms, K.; Çöltekin, A.; De Maeyer, P.; Dupont, L.; Fabrikant, S.; Incoul, A.; Kuhn, M.; Slabbinck, H.; Vansteenkiste, P.; Van der Haegen, L. Combining user logging with eye tracking for interactive and dynamic applications. Behav. Res. Methods 2015, 47, 977–993.
  22. Stehlíková, J.; Řezníková, H.; Kočová, H.; Stachoň, Z. Visualization Problems in Worldwide Map Portals. In Modern Trends in Cartography; Brus, J., Vondrakova, A., Vozenilek, V., Eds.; Lecture Notes in Geoinformation and Cartography; Springer: Cham, Switzerland, 2015; pp. 213–225.
  23. Burian, J.; Popelka, S.; Beitlová, M. Evaluation of the Cartographical Quality of Urban Plans by Eye-Tracking. ISPRS Int. J. Geo-Inf. 2018, 7, 192.
  24. Roth, R.E.; Harrower, M. Addressing map interface usability: Learning from the Lakeshore Nature Preserve interactive map. Cartogr. Perspect. 2008, 46–66.
  25. Göbel, F.; Kiefer, P.; Raubal, M. FeaturEyeTrack: Automatic matching of eye tracking data with map features on interactive maps. GeoInformatica 2019, 1–25, in press.
  26. Kurzhals, K.; Fisher, B.; Burch, M.; Weiskopf, D. Eye tracking evaluation of visual analytics. Inf. Vis. 2016, 15, 340–358.
  27. Krassanakis, V.; Cybulski, P. A review on eye movement analysis in map reading process: The status of the last decade. Geod. Cartogr. 2019, 68, 191–209.
  28. Kiefer, P.; Giannopoulos, I.; Raubal, M.; Duchowski, A. Eye Tracking for Spatial Research: Cognition, Computation, Challenges. Spat. Cogn. Comput. 2017, 17, 1–19.
  29. Çöltekin, A.; Heil, B.; Garlandini, S.; Fabrikant, S.I. Evaluating the effectiveness of interactive map interface designs: A case study integrating usability metrics with eye-movement analysis. Cartogr. Geogr. Inf. Sci. 2009, 36, 5–17.
  30. Çöltekin, A.; Fabrikant, S.; Lacayo, M. Exploring the efficiency of users’ visual analytics strategies based on sequence analysis of eye movement recordings. Int. J. Geogr. Inf. Sci. 2010, 24, 1559–1575.
  31. Golebiowska, I.; Opach, T.; Rod, J.K. For your eyes only? Evaluating a coordinated and multiple views tool with a map, a parallel coordinated plot and a table using an eye-tracking approach. Int. J. Geogr. Inf. Sci. 2017, 31, 237–252.
  32. Brady, D.; Ferguson, N.; Adams, M. Usability of MyFireWatch for non-expert users measured by eyetracking. Aust. J. Emerg. Manag. 2018, 33, 28.
  33. Popelka, S.; Vondrakova, A.; Hujnakova, P. Eye-Tracking Evaluation of Weather Web Maps. ISPRS Int. J. Geo-Inf. 2019, 8, 256.
  34. Herman, L.; Juřík, V.; Stachoň, Z.; Vrbík, D.; Russnák, J.; Řezník, T. Evaluation of User Performance in Interactive and Static 3D Maps. ISPRS Int. J. Geo-Inf. 2018, 7, 415.
  35. Nétek, R.; Pour, T.; Slezaková, R. Implementation of Heat Maps in Geographical Information System–Exploratory Study on Traffic Accident Data. Open Geosci. 2018, 10, 367–384.
  36. Kubíček, P.; Kozel, J.; Štampach, R.; Lukas, V. Prototyping the visualization of geographic and sensor data for agriculture. Comput. Electron. Agric. 2013, 97, 83–91.
  37. Řezník, T.; Charvát, K.; Lukas, V.; Junior, K.C.; Kepka, M.; Horáková, Š.; Křivánek, Z.; Řezníková, H. Open Farm Management Information System Supporting Ecological and Economical Tasks. In Proceedings of the Environmental Software Systems. Computer Science for Environmental Protection: 12th IFIP WG 5.11 International Symposium, ISESS 2017, Zadar, Croatia, 10–12 May 2017; pp. 221–233.
  38. Burian, J.; Zajíčková, L.; Popelka, S.; Rypka, M. Spatial aspects of movement of Olomouc and Ostrava citizens. Int. Multidiscip. Sci. Geoconference SGEM: Surv. Geol. Min. Ecol. Manag. 2016, 3, 439–446.
  39. Kubíček, P.; Konečný, M.; Stachoň, Z.; Shen, J.; Herman, L.; Řezník, T.; Staněk, K.; Štampach, R.; Leitgeb, Š. Population distribution modelling at fine spatio-temporal scale based on mobile phone data. Int. J. Digit. Earth 2018, 1–22.
  40. Herman, L.; Řezník, T. 3D web visualization of environmental information-integration of heterogeneous data sources when providing navigation and interaction. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 479.
  41. Russnák, J.; Ondrejka, P.; Herman, L.; Kubíček, P.; Mertel, A. Visualization and spatial analysis of police open data as a part of community policing in the city of Pardubice (Czech Republic). Ann. GIS 2016, 22, 187–201.
  42. Horák, J.; Ivan, I.; Inspektor, T.; Tesla, J. Sparse Big Data Problem. A Case Study of Czech Graffiti Crimes. In The Rise of Big Spatial Data; Springer: Cham, Switzerland, 2017; pp. 85–106.
  43. Roberts, J.C. State of the art: Coordinated & multiple views in exploratory visualization. In Proceedings of the Fifth International Conference on Coordinated and Multiple Views in Exploratory Visualization, Zurich, Switzerland, 2 July 2007; pp. 61–71.
  44. Neset, T.-S.; Opach, T.; Lion, P.; Lilja, A.; Johansson, J. Map-based web tools supporting climate change adaptation. Prof. Geogr. 2016, 68, 103–114.
  45. Ježek, J.; Jedlička, K.; Mildorf, T.; Kellar, J.; Beran, D. Design and Evaluation of WebGL-Based Heat Map Visualization for Big Point Data. In The Rise of Big Spatial Data; Springer: Heidelberg, Germany, 2017; pp. 13–26.
  46. Edsall, R.; Andrienko, G.; Andrienko, N.; Buttenfield, B. Interactive maps for exploring spatial data. In Manual of Geographic Information Systems; ASPRS: Bethesda, MD, USA, 2008; pp. 837–857.
  47. Lewis, J.R. Testing small system customer set-up. In Proceedings of the Human Factors Society Annual Meeting, Seattle, WA, USA, 25–29 October 1982; pp. 718–720.
  48. Lewis, J.R. Evaluation of Procedures for Adjusting Problem-Discovery Rates Estimated from Small Samples. Int. J. Hum.-Comput. Interact. 2001, 13, 445–479.
  49. Voßkühler, A.; Nordmeier, V.; Kuchinke, L.; Jacobs, A.M. OGAMA (Open Gaze and Mouse Analyzer): Open-source software designed to analyze eye and mouse movements in slideshow study designs. Behav. Res. Methods 2008, 40, 1150–1162.
  50. Blignaut, P. Fixation identification: The optimum threshold for a dispersion algorithm. Atten. Percept. Psychophys. 2009, 71, 881–895.
  51. Komogortsev, O.V.; Jayarathna, S.; Koh, D.H.; Gowda, S.M. Qualitative and quantitative scoring and evaluation of the eye movement classification algorithms. In Symposium on Eye-Tracking Research & Applications; Texas State University: Austin, TX, USA, 2010; pp. 65–68.
  52. Ooms, K.; Krassanakis, V. Measuring the Spatial Noise of a Low-Cost Eye Tracker to Enhance Fixation Detection. J. Imaging 2018, 4, 96.
  53. Popelka, S. Optimal eye fixation detection settings for cartographic purposes. In Proceedings of the 14th SGEM GeoConference on Informatics, Geoinformatics and Remote Sensing, Albena, Bulgaria, 17–26 June 2014; pp. 705–712.
  54. Popelka, S. Eye-Tracking (nejen) v Kognitivní Kartografii [Eye-Tracking (Not Only) in Cognitive Cartography], 1st ed.; Palacký University Olomouc: Olomouc, Czech Republic, 2018; 248p, ISBN 978-80-244-5313-2.
  55. SensoMotoric Instruments. BeGaze Software Manual; SensoMotoric Instruments: Berlin, Germany, 2008.
  56. Edsall, R.M. The parallel coordinate plot in action: Design and use for geographic visualization. Comput. Stat. Data Anal. 2003, 43, 605–619.
  57. Zhou, H.; Yuan, X.; Qu, H.; Cui, W.; Chen, B. Visual clustering in parallel coordinates. In Proceedings of the Computer Graphics Forum, Eindhoven, The Netherlands, 26–28 May 2008; pp. 1047–1054.
  58. Manson, S.M.; Kne, L.; Dyke, K.R.; Shannon, J.; Eria, S. Using eye-tracking and mouse metrics to test usability of web mapping navigation. Cartogr. Geogr. Inf. Sci. 2012, 39, 48–60.
  59. Herman, L.; Popelka, S.; Hejlová, V. Eye-tracking Analysis of Interactive 3D Geovisualization. J. Eye Mov. Res. 2017, 10, 1–15.
  60. Doležalová, J.; Popelka, S. ScanGraph: A Novel Scanpath Comparison Method Using Visualisation of Graph Cliques. J. Eye Mov. Res. 2016, 9.
  61. Herman, L.; Russnák, J.; Stuchlík, R.; Hladík, J. Visualization of traffic offences in the city of Brno (Czech Republic): Achieving 3D thematic cartography through open source and open data. In Proceedings of the 25th Central European Conference of Useful Geography: Transfer from Research to Practice, Brno, Czech Republic, 12–13 August 2017; pp. 270–280.
Figure 1. Scheme of the experiment.
Figure 1. Scheme of the experiment.
Ijgi 08 00363 g001
Figure 2. Stimuli from the static part of the experiment.
Figure 2. Stimuli from the static part of the experiment.
Ijgi 08 00363 g002
Figure 3. Testing in the eye-tracking laboratory.
Figure 3. Testing in the eye-tracking laboratory.
Ijgi 08 00363 g003
Figure 4. Attention map for stimulus 1 of Chicago map showing first, second, and third fixation.
Figure 4. Attention map for stimulus 1 of Chicago map showing first, second, and third fixation.
Ijgi 08 00363 g004
Figure 5. Analysis of recorded data for static stimulus of Chicago map. Focus map (upper left), key performance indicators (upper right), and sequence chart (bottom).
Figure 5. Analysis of recorded data for static stimulus of Chicago map. Focus map (upper left), key performance indicators (upper right), and sequence chart (bottom).
Ijgi 08 00363 g005
Figure 6. Attention map for stimulus 1 of the Pilsen map showing first, second, and third fixations.
Figure 6. Attention map for stimulus 1 of the Pilsen map showing first, second, and third fixations.
Ijgi 08 00363 g006
Figure 7. Analysis of recorded data for static stimulus of Pilsen map. Focus map (upper left), key performance indicators (upper right), and sequence chart (bottom).
Figure 7. Analysis of recorded data for static stimulus of Pilsen map. Focus map (upper left), key performance indicators (upper right), and sequence chart (bottom).
Ijgi 08 00363 g007
Figure 8. Attention map for stimulus 5 of Flanders map showing first, second, and third fixation.
Figure 8. Attention map for stimulus 5 of Flanders map showing first, second, and third fixation.
Ijgi 08 00363 g008
Figure 9. Analysis of recorded data for the static stimulus of the Flanders map. Focus map (upper left), key performance indicators (upper right), and sequence chart (bottom).
Figure 9. Analysis of recorded data for the static stimulus of the Flanders map. Focus map (upper left), key performance indicators (upper right), and sequence chart (bottom).
Ijgi 08 00363 g009
Figure 10. Correct and wrong answers to Task 3 in the Chicago map.
Figure 11. Problematic Task 3 in the Pilsen map.
Figure 12. Problem with the selection of the bar in the Flanders Map. On the left side, the cursor is dragged over the bar, but the bar is not selected. On the right side, the correct way of selecting data is shown.
Figure 13. Problematic Task 2 in the Flanders map.
Table 1. Participants of the study.
| Participant | Sex | Experience | Dev. X (°) | Dev. Y (°) | Tracking Ratio (%) |
|---|---|---|---|---|---|
Table 2. Problems identified with the Chicago map.
| Task | Problem | No. of Participants |
|---|---|---|
| 1—Change the color of the heatmap. | Participant did not change the color for 30 s. | 1 |
| 2—Display all the crimes which happened on Saturday from 17:00 to midnight. | Participant expected that the crime layer would have an influence on the display of results. | 3 |
| | Task was not fulfilled. | 1 |
| 3—Find out how many of these crimes there were on a specific day at a specific time. | Participant gave the total number of registered crimes. | 7 |
| | Participant gave the wrong number. | 5 |
| | Participant did not find the number. | 4 |
| 4—Mark any area in the map using the “Polygon” tool. | Participant did not draw the polygon. | 2 |
| | Participant had problems with finishing the polygon. | 1 |
Table 3. Problems identified with the Pilsen map.
| Task | Problem | No. of Participants |
|---|---|---|
| 1—Change map type to “Basic OSM”. | - | 0 |
| 2—Display the legend of the map. | Participant did not find the legend. | 1 |
| 3—Find information about the road closure in the center of Pilsen at 17:00 on 31 October 2018. | Participant clicked outside the road closure. | 11 |
| 4—Display the “Timeline of constructions” and find out when the Šumavská Bus Terminal will be closed. | - | 0 |
Table 4. Problems identified with the Flanders map.
| Task | Problem | No. of Participants |
|---|---|---|
| 1—Switch the map layer to accidents. | Participant did not change the layer for 30 s. | 1 |
| 2—View all traffic accidents that occurred during rain (“regenval”). | Participant tried to mark the graph by clicking instead of dragging. | 12 |
| | Participant selected the correct field, but the graph did not change. | 7 |
| | Participant completed the task with help (“try to drag instead of click”). | 2 |
| | Task was not completed. | 2 |
| 3—Estimate how many of these accidents happened within the speed range of 40–50 km per hour. | Participant estimated the total number of accidents within the speed range (not only those occurring in rain). | 4 |
| 4—Verbally interpret the parallel coordinates plot. | Investigated separately. | - |