Article

gEYEded: Subtle and Challenging Gaze-Based Player Guidance in Exploration Games

1 Department of Digital Media, University of Applied Sciences Upper Austria, 4232 Hagenberg, Austria
2 Department of Media Informatics, University of Regensburg, 93053 Regensburg, Germany
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2019, 3(3), 61; https://doi.org/10.3390/mti3030061
Submission received: 9 July 2019 / Revised: 1 August 2019 / Accepted: 19 August 2019 / Published: 22 August 2019
(This article belongs to the Special Issue Novel User Interfaces and Interaction Techniques in the Games Context)

Abstract:
This paper investigates the effects of gaze-based player guidance on the perceived game experience, performance, and challenge in a first-person exploration game. In contrast to existing research, the proposed approach takes the game context into account by providing players not only with guidance but also granting them an engaging game experience with a focus on exploration. This is achieved by incorporating gaze-sensitive areas that indicate the location of relevant game objects. A comparative study was carried out to validate our concept and to examine if a game supported with a gaze guidance feature triggers a more immersive game experience in comparison to a crosshair guidance version and a solution without any guidance support. In general, our study findings reveal a more positive impact of the gaze-based guidance approach on the experience and performance in comparison to the other two conditions. However, subjects had a similar impression concerning the game challenge in all conditions.

1. Introduction

Gaze-based interactions have found their way into the games domain (e.g., [1,2,3,4,5,6,7,8,9,10]): Top-level “AAA” games, such as Far Cry 5 [11] or Assassin’s Creed Odyssey [12], support eye-tracking devices with the aim of providing players with a more intuitive form of game interaction [13]. Furthermore, upcoming hardware devices, such as the HTC Vive Pro Eye [14], indicate that this technology might play an important role in the coming years. Currently, gaze input is often employed as a supporting element and, in some cases, as a complete replacement for mouse input and game controllers [15]: For example, using eye-tracking technology, game objects can be focused or selected by simply looking at them [16].
However, we argue that current approaches do not make use of the full potential that gaze-based input can offer. Besides the established procedure of using gaze for explicit input, it can also be exploited as a tool to guide players through the game world. This can be done, for instance, by adapting a game scene according to where a player is currently looking, or by directing the player’s attention towards specific objects.
Although the integration of gaze appears to be a promising approach for guiding players, until now and to the best of our knowledge, no such research has been carried out in the field of games. Existing research efforts integrate gaze-based guidance in other media (e.g., images, movies): So-called Overt Gaze Direction (OGD) [17,18,19,20] and Subtle Gaze Direction (SGD) [21,22,23,24] aim at providing users with efficient and effective support for guiding them through an experience (see Section 2).
It is argued that, in contrast to other media, games incorporate the aspect of challenging the players [25]: During play, they are confronted with various obstacles (e.g., mazes, enemies, roadblocks, puzzles) that players have to overcome to win the game. Disregarding challenge in games might result in a negative game experience: If a gaze-based guidance system becomes too effective and efficient in supporting players, the game might not provide enough challenge, depriving players of the experience of mastering the game on their own. The proposed approach addresses this issue by not only building on existing methods, such as OGD and SGD, but also by reflecting on the aspects of challenge and exploratory activity embedded in exploration games. This is done by indicating, but not entirely revealing, the location of crucial game objects and elements through gaze-supported feedback (more information on the approach can be found in Section 3).
To define a focus for this work, we created a prototype (see Figure 1) embedded in a specific genre (i.e., exploration games), which we deem suitable for a gaze-based guidance system: In many cases (e.g., The Vanishing of Ethan Carter [26]), players are confronted with (visually) complex game scenarios and are typically supported through various guidance means (e.g., colour [27] or animation [28]). As the visual channel is often the focus of exploration games (e.g., finding a hidden object to progress in the game), the incorporation of the players’ gaze for guidance appears to be feasible. A comparative study is part of the paper (see Section 4) to identify the effects of such an approach in comparison to an established form of interaction in exploration games (i.e., a crosshair solution, fixed and located in the center of the screen, used instead of the gaze position to guide players).
However, it has to be noted that, although the paper deals explicitly with exploration games, the findings are also intended to be transferable to other genres and fields of application, which will be presented in Section 6. In summary, this paper features the following contributions:
  • Introducing gaze-based player guidance in exploration games
  • Investigating gaze-based guidance in comparison to a crosshair variant in exploration games
In general, readers should gain insights into how gaze-based player guidance could be integrated into games with a focus on exploration. Furthermore, game designers should receive information on how to design gaze-based interactions that provide a challenging and engaging game experience and that foster exploratory behavior among players.

2. Related Work

Player guidance forms a crucial part in shaping the gaming experience and ensures that players can master a game without getting frustrated or lost [28]. It has the potential to aid players when needed and can create a feeling of autonomy and freedom during play [25]. This requires keeping the balance between not helping at all and helping players too much while they are navigating through a level or solving a task [28]. It is essential to consider factors like a consistent color palette [27] or the placement of visual cues throughout a game [29] to provide thorough player guidance. Games with a strong focus on exploration, like What Remains of Edith Finch [30], use visual cues to guide the player through the game world [31]. These games are typically made up of visually complex scenes with a high number of interactive and non-interactive game objects that should encourage exploration. One main challenge for the artists and designers is that not all elements are relevant to the plot. It appears to be difficult to point players in the right direction without sacrificing the challenge that is necessary to keep their interest [32]. Eye-tracking technology in the game context has become a relevant field of research (e.g., [33]) and can offer a solution to this issue, as gaze data can be gathered and analyzed [34] and then used to guide the user’s gaze in the desired direction when they are in need of help.
Although it is implied by researchers, such as Bailey et al. [22], that gaze-based guidance in the games domain could offer benefits to players, no research findings that specifically deal with games are available. Current efforts focus on gaze-based guidance in other media (e.g., images, movies): In this context, Grogorick et al. [24] classify the different approaches “by the visibility of the used guidance stimulus into overt and subtle gaze direction methods”.

2.1. Subtle Gaze Direction

Subtle gaze direction (SGD) is supposed to be almost invisible to the viewer’s eye, applying barely noticeable transformations. Barth et al. [21] studied the implications of the scan path for visual communication using a sample of different videos. They used SGD by briefly showing red dots in the viewers’ periphery to attract their attention and thereby guide their gaze. Sheikh et al. [35] filmed multiple 360-degree clips that featured both audio and visual cues intended to direct the viewer’s attention towards two people having a conversation. A mix of both types of cues proved to be most successful. In 2009, Bailey et al. [22] introduced a more sophisticated strategy that uses an eye tracker and subtle image-space modulations to direct a viewer’s gaze in a digital image. It utilizes the fact that human peripheral vision sees a blurred image, in contrast to the sharp image of foveal vision. By briefly modulating a specific point of interest that lies in the peripheral region of the observer’s field of view, the viewer’s attention is directed towards that point. As soon as the foveal region comes close to the modulated area, the modulation is terminated. This way, the viewer does not realize what attracted their gaze [22]. Methods such as SGD are often used to guide the viewer’s visual attention, and even though they are usually highly effective, they can be distracting or break the immersion when the viewer wants to focus on something else [23].
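The core loop of this modulate-and-terminate strategy can be sketched in a few lines. The following is an illustrative Python sketch of the idea, not Bailey et al.'s implementation; the function name and the foveal radius value are our assumptions:

```python
def sgd_modulation_active(gaze_xy, target_xy, foveal_radius_px=60):
    """Decide whether the subtle modulation at target_xy should stay on.

    Following the idea of Subtle Gaze Direction: the modulation is shown
    only while the target lies in the viewer's peripheral vision, and is
    terminated as soon as the foveal region approaches it, so the viewer
    never consciously sees what attracted their gaze.
    The foveal radius (in pixels) is an illustrative assumption.
    """
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return distance > foveal_radius_px  # keep modulating only in the periphery
```

In a real system this check would run once per eye-tracker sample, switching the image-space modulation off the moment the gaze saccades towards the target.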

2.2. Overt Gaze Direction

Whereas SGD usually goes unnoticed, overt gaze direction (OGD) is more prominent and can be noticed by the viewer. OGD intends to direct the viewer’s gaze by applying global image transformations [24]. Unlike subtle gaze direction, OGD techniques are not intended to be invisible to the viewer’s eye but instead make use of apparent methods to guide the viewer’s gaze more effectively towards a target location within the image. Examples of OGD include an approach by Hata et al. [23], who direct the viewer’s gaze by blurring parts of an image. As the human visual system is trained to regularly examine details in its surroundings, it does not deem the blurred parts of the image interesting, as there is not much information to be obtained from them. Naturally, the eye wanders to those parts of the image that are more detailed and therefore not blurred.
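The blur-based principle can be illustrated with a minimal sketch (pure-Python, grayscale; the function name, the region-of-interest format, and the box-blur kernel are our illustrative assumptions, not Hata et al.'s method):

```python
def blur_outside_roi(image, roi, radius=1):
    """Blur every pixel outside the region of interest (ROI), so only
    the ROI stays sharp and attracts the viewer's eye.

    image: 2D list of grayscale values; roi: (x0, y0, x1, y1), inclusive.
    """
    h, w = len(image), len(image[0])
    x0, y0, x1, y1 = roi
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if x0 <= x <= x1 and y0 <= y <= y1:
                continue  # keep the region of interest sharp
            # simple box blur: average over the surrounding neighborhood
            vals = [image[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out
```

A production system would use a proper Gaussian blur on the GPU, but the guidance logic is the same: detail survives only where the viewer's gaze is meant to go.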
Besides other works that include blur as a means of directing gaze [17,18], OGD can be applied in multiple other ways such as highlighting certain parts of an image by altering the luminance channel [36], salience adjustments [21,37,38], image overlays [19], depth-of-field [39], and stylization techniques of an image [20].

2.3. Gaze Direction in Games

As mentioned previously, research on gaze direction in the context of games is hard to find, and the potential of the approach is only indicated in the available literature. One way to use gaze direction in games is implied by Bailey et al. [22]: They suggest that gaze direction could be used to guide the players’ gaze in a specific direction, away from a particular region in a scene that requires more rendering time because of its complexity or resolution. This way, the player would not notice progressively updating parts of the game.
Apart from technical considerations, gaze-based guidance could also be used to increase players’ performance in overcoming obstacles. In an experiment, Ben-Joseph et al. [40] used SGD to help users in virtual reality (VR) scenarios find an object much faster than they would have without the help of gaze direction techniques. This was done by applying modulations in the luminance channel in the peripheral vision of the player.
While SGD and OGD pose two interesting concepts for gaming, there are some downsides. SGD might not always be successful or even desired, since it only provides subtle cues, potentially leaving the player lost or unsure of how to proceed. OGD, on the other hand, poses a different solution by guiding the player’s gaze directly to the desired location. Due to the apparent process of altering the image, the game might not feel like a challenge anymore, depriving players of the experience of mastering the game on their own. In sum, little is known about how to successfully include gaze-supported guidance in the context of games.

3. Our Approach

Based on the limitations introduced in the previous section, a novel approach is put forward that aims to introduce gaze-based player guidance in the context of exploration games. The proposed concept encapsulates elements of SGD through the type of visual feedback (i.e., a vignette effect) and of OGD by explicitly pointing to specific game objects (see Section 3.1 for more information). Furthermore, the approach takes the aspect of challenging the player into account. This is achieved by incorporating, into the gaze-based guidance process, a game mechanic that is employed in many games: The location of crucial game elements and their distance to the player are only indicated, not fully revealed.
Game examples that make use of this mechanic are: L.A. Noire [41] (players investigate crime scenes and are guided via auditory and haptic feedback to obtain pieces of evidence), Alien: Isolation [42] (players have to hide from the alien and are supported by a motion tracker device that indicates the alien’s current location), Metroid II: Return of Samus [43] (players have to find and defeat enemies; the location and proximity of opponents is communicated via visual feedback), and The Legend of Zelda: A Link Between Worlds [44] (players can optionally collect creatures in order to craft better weapons; the location of these creatures is indicated through acoustic signals). From a game design perspective, this game mechanic is an essential resource for a gaze-based player guidance approach, as it utilizes the lack of information regarding the location of the desired object to foster challenge and fun. Furthermore, the central theme of the mechanic is grounded in player guidance: The withholding of information and indirect guidance are the main driving forces of exploring an area. Following this notion, the cues provided by the games leave room for interpretation (to avoid an obvious solution to the problem) and, therefore, grant a challenging and engaging player experience.
We believe that this mechanic is appropriate for guiding players via gaze because it could offer two main benefits: Firstly, gaze-based feedback leads the player to crucial elements and objects in the game level. Secondly, the inclusion of exploratory elements and challenges in the player guidance has the potential to positively influence the game experience (which is not addressed in SGD and OGD approaches, e.g., providing neither too much nor too little information about the location of relevant game objects). However, the following questions arise: How can the previously described game mechanic be combined with gaze-based input, and how is such an approach perceived in comparison to more traditional forms of input (in our case: mouse input)?
In our approach, we aim at providing answers to these questions: As in the game examples mentioned above, the lack of information regarding the exact location of the game goal(s) serves as the basis for the game design. Players are guided by gaze-based cues that are triggered when they look at specific gaze-sensitive areas in the game scenery. These areas do not directly point to the object of interest but only imply that the required game element is hidden somewhere within the area. Players are led to a particular area but then have to explore the section in detail to find the required object. Furthermore, the gaze-based cues also convey how far away a subject is from the area, reflected by the intensity of the feedback. The mechanic was also mapped to a game prototype with mouse input to compare the approach with more traditional forms of input. Finally, we were also interested in whether the mechanic itself, independent of the type of input, arouses an engaging game experience in an exploration game; for this purpose, we created a game variant that does not feature any guidance (for a more detailed description, see Section 3.1 and Section 4.1).
In the following, the game prototype, the conditions, the technical setup, the hypotheses, the procedure, the measures, and the data analysis are presented to give a more detailed picture of the approach.

3.1. Game Prototype

To find answers to our questions and to turn the approach into reality, a game prototype was created (see Figure 2). It was conceptualized as a first-person exploration game with a thief theme, where players have the opportunity to delve deep into a game level. The game’s design and progression structure are similar to other exploration games (e.g., The Stanley Parable [45], The Vanishing of Ethan Carter [26], Everybody’s Gone to the Rapture [46]), where players are required to find certain objects to progress in the game. This is also reflected in the game’s premise: The player’s avatar is a thief with a special skill (i.e., sensing the presence of valuable objects), who broke into the mansion of a rich nobleman to steal valuable gold coins. As the victim is expected to return from his walk any moment, the player has only a limited amount of time (3 min) to find and grab as many coins as possible.
To make sure that the game was perceived as an engaging quest, the game design included the following aspects: The game’s background story embeds the guidance system in the setting. The player’s avatar has a unique skill, a thief sense, that visually and indirectly indicates the location of the coins (see Section 4.1 for more information on the employed guidance approaches).
Players also received contextual information at the beginning of the game on why the quest was relevant to the player’s avatar (i.e., being a thief who has to steal valuable coins from the mansion’s owner). Furthermore, the required game objects (i.e., the coins distributed in the level) were designed to communicate an inherent value and to distinguish themselves from the other entities in the game world. In addition, one of the leading goals of the game art and level design phase was to create a level that encourages the player to explore the scenery via environmental storytelling: This was achieved by creating areas that depict the nobleman’s daily life.
The thief’s particular skill encapsulates the guidance approach presented in Section 3 to foster exploration: It indicated a section where one of the coins was hidden. To address the aspect of proximity, the feedback intensity increased as the players looked/moved closer to the target area. The approach incorporates both SGD and OGD design elements: SGD cues in the peripheral viewport guided players to specific game objects (OGD). In the following section, detailed information is given on how the gaze-based player guidance approach was realized and how it compares with other solutions.

4. Comparative Study

A study was set up to investigate the impact of the proposed approach. It consisted of three conditions that differed in the way players were guided (i.e., two feedback variants guided players either by their gaze or through a crosshair, and one variant offered no guidance support). Only minor aspects differentiate the conditions from each other to grant their comparability (see Figure 3). In all conditions, movement and picking up objects are carried out via mouse (looking around, pointing) and keyboard (WASD movement) actions. In the following, the conditions are described in detail.

4.1. Conditions

Condition 1: In the first condition, called Gaze Guidance (GazeG), players were guided to the coins via gaze. To grant them a challenging experience and to address the aspect of exploration, players received gaze-driven visual feedback through a vignette effect (i.e., a black, blurred circular mask darkening the corners and edges of the screen). When players looked at a gaze-sensitive area (where one of the coins was located), the vignette effect appeared (see Figure 4).
The design rationale behind the feedback type (i.e., vignette) and the feedback intensity was as follows: Based on the categorization by Fagerholt and Lorentzon [47], a meta-interface solution was chosen, where representations exist on a meta-layer between the player and the game world. The most obvious examples are effects rendered on the screen, such as blood spatter on the camera to indicate damage. The research team explored various visual user interface designs (i.e., visual cues in different shapes and sizes). In our case, a vignette effect was deemed suitable because it draws not only from SGD design approaches (i.e., subtly showing information in the peripheral field of vision) but also from OGD by directly guiding the player’s gaze in the foveal region (i.e., the effect intensity is driven by the distance between the player’s gaze and the coin).
By using a meta-interface design in the form of a vignette effect, the gaze-based approach can be integrated into different genres (in contrast to diegetic interfaces, which are closely tied to the narrative and the game world). Additionally, the approach is not bound to objects in the game world (in contrast to spatial game interfaces such as outlined objects) and can be integrated subtly (in contrast to “traditional” game interfaces with an overt virtual layer). Furthermore, we aimed at creating feedback that is as simple as possible: Animations in the visual gaze condition are relatively subtle (a continuous animation that increases or decreases the vignette’s size and opacity to indicate proximity) and simple (i.e., no pulsating movement) to avoid distracting players.
The vignette effect (size and opacity) was realized through the Unity post-processing effect “Vignette” [48] and is driven by two parameters:
  • the distance between the player’s avatar and the gaze-sensitive area (i.e., the effect only kicks in when the player is close to the object; in our case: within 2 m in Unity units)
  • the distance in screen space between the gaze position and the coin within a gaze-sensitive area (i.e., the closer the gaze position is to a coin, the stronger the effect; in the game prototype, a gradual intensity transition was implemented: at a distance of half the screen width between gaze and coin, 0% effect strength; at zero distance, 100% effect strength)
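The interplay of these two parameters can be sketched as follows. This is an illustrative Python sketch of the logic only; the prototype itself was realized with Unity's vignette post-processing effect, and the function and parameter names are our assumptions:

```python
def vignette_strength(player_pos, area_pos, gaze_px, coin_px,
                      screen_width_px, activation_distance_m=2.0):
    """Combine the two guidance parameters into one effect strength in [0, 1].

    player_pos / area_pos: 3D world positions (Unity units assumed).
    gaze_px / coin_px: 2D screen-space positions in pixels.
    """
    # Parameter 1: world-space distance between avatar and gaze-sensitive
    # area; the effect only kicks in within 2 m (Unity units).
    dist_world = sum((p - a) ** 2 for p, a in zip(player_pos, area_pos)) ** 0.5
    if dist_world > activation_distance_m:
        return 0.0
    # Parameter 2: screen-space distance between gaze position and coin;
    # half a screen width -> 0% strength, zero distance -> 100%.
    dist_px = ((gaze_px[0] - coin_px[0]) ** 2
               + (gaze_px[1] - coin_px[1]) ** 2) ** 0.5
    half_width = screen_width_px / 2.0
    return max(0.0, 1.0 - dist_px / half_width)
```

The returned value would then be mapped onto the vignette's size and opacity each frame.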
Condition 2: The second condition, coined Crosshair Guidance (CrossG), is similar to condition 1 with one exception: Instead of the player’s gaze, the crosshair, located and locked in the center of the screen, drives the intensity of the vignette effect (i.e., GazeG builds on CrossG by using the gaze position as a crosshair; see Figure 5). Via this condition, we wanted to investigate whether the inclusion of gaze in the guidance process leads to a different game experience. If not, one could assume that a well-established input form, such as a mouse, would be sufficient to achieve a similar or even better result. By doing so, we want to contribute to the research body of comparative studies in the field of gaze vs. mouse (e.g., [49,50,51,52,53,54,55]).
Condition 3: The last condition, No Guidance (NoG), served as a control condition: It aimed to find out whether, in general, the inclusion of a guidance system is perceived as useful and arouses an engaging game experience. Here, no cues are given that reveal the position of the gold coins.

4.2. Technical Setup

The game prototype was developed with the Unity game engine [56] using and editing assets of the Medieval Cartoon Furniture Pack [57]. For the gaze-based input in our game prototype, we used the Tobii EyeX eye tracker [58] with components of the Tobii Unity SDK for Desktop [59]. The hardware setup in the experimental study consisted of a standard desktop PC with a 27-inch monitor and standard stereo headphones for sound output. In all conditions, players played the game via mouse plus keyboard (WASD keys for moving the avatar, mouse for looking around, and the left mouse button for picking up coins) to keep the conditions comparable (only difference: in GazeG, the gaze position, and not the current orientation of the crosshair, controls the guidance).

4.3. Hypotheses

With our comparative study, we investigated three hypotheses that relate to the previously described conditions (GazeG, CrossG, and NoG). In general, we deem that the gaze-based player guidance approach contributes to an immersive experience by offering guidance and a certain degree of challenge. Furthermore, the gaze-based variant could provide a more engaging experience in comparison to other solutions by offering a natural form of interacting with the game (i.e., the eyes are employed to perceive visual information). Thus, we assume that the approach arouses a better experience than a solution that does not include gaze-based feedback (in our case: when the crosshair is used for guidance). We also wanted to investigate whether the inclusion of a guidance tool is reasonable and grants an engaging experience in the context of an exploration game in general, or whether it bores players by offering no challenge in achieving the game’s goals. It could be the case that players enjoy the challenge of getting no clues at all, with the consequence that a guidance approach is not applicable and even counter-productive. The following hypotheses are formulated:
Hypothesis 1 (Game Experience).
In an exploration game, due to the integration of gaze as a natural interaction form, players will have a better game experience when being guided through a gaze-based guidance approach than with a crosshair-based or a no-guidance approach.
Hypothesis 2 (Game Performance).
In an exploration game, players supported via guidance features will perform better concerning the game goals (here: finding specific objects) than with a no-guidance approach.
Hypothesis 3 (Game Challenge).
In an exploration game, players with no guidance support will perceive the game (goals) as more challenging (i.e., perceived game difficulty) than with gaze- and crosshair-based guidance approaches.

4.4. Participants and Procedure

This section comprises information regarding the experiment, which was conducted at the Playful Interactive Environments (PIE) research laboratory of the University of Applied Sciences Upper Austria. Subjects were recruited via mailing lists provided by the University of Applied Sciences Upper Austria. The experimenters invited subjects by providing information on the type of experiment (i.e., an experimental study in the field of games), the study location, the duration of the experiment, and the targeted age range (we were interested in subjects between 18 and 35, as they are a relevant group when it comes to games [60]). No incentives were offered. In total, 24 people between the ages of 21 and 34 participated (M = 23, SD = 3); ten were female, 14 male. To get information on the subjects’ playing habits, the following questions were asked: “How often do you play (computer-)games?” (62% several times a week, 21% several times a month, 17% daily) and “How often do you play exploration games? (Stanley Parable, Gone Home, etc.)” (17% several times a week, 83% several times a month). None of them had any previous experience with eye-tracking devices and gaze interaction. Furthermore, the test subjects signed a consent form. During the experiment, one member of the research staff was present, responsible for guiding the participants through the experimental procedure, offering support if necessary, and carrying out the interviews (responsibilities: introduction, interview).
The experimental procedure had the following structure: As a first step, the experimenter welcomed the subjects and provided a brief introduction to the scope of the study. After this, the game goals, the mechanics, and the means of interaction were introduced (i.e., ways to interact with the game world, control scheme, the functionality of the eye tracker, playing a short demo [61]). When this step was finished, the eye-tracking device was calibrated. The subjects then filled in a questionnaire covering demographic information (i.e., age, gender, education, and experience with games). After that, they played the first of three conditions, resulting in three play sessions per participant. The locations of the coins were randomized (8 coins in each condition, with 128 possible locations) to avoid biases. Furthermore, the conditions were presented in randomized order in each playtest (e.g., subject 1 played condition 3, condition 2, condition 1; subject 2 played condition 2, etc.).
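The randomization described above can be sketched as follows (an illustrative Python sketch; all names are our assumptions, and the study itself did not necessarily use this code):

```python
import random

def setup_session(seed=None):
    """Set up one play session: 8 coin locations sampled without
    replacement from 128 possible spots, and a shuffled presentation
    order of the three conditions (condition names as in the paper).
    """
    rng = random.Random(seed)
    coin_locations = rng.sample(range(128), k=8)   # 8 coins, no duplicates
    condition_order = ["GazeG", "CrossG", "NoG"]
    rng.shuffle(condition_order)                   # randomized order per subject
    return coin_locations, condition_order
```

Seeding per subject would make each session reproducible while still varying coin placement and condition order across participants.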
All subjects were asked to play all three conditions (within-subject design). In all conditions, the game was stopped after 3 min to make sure that there was enough time for playing, filling out the questionnaires, and carrying out the interviews.
By having relatively short play sessions, it was avoided that subjects might get tired or even bored, which could have had an impact on the information obtained in the interviews. Furthermore, it was one of our goals that each subject was exposed to the game within a predefined time limit to make the conditions more comparable. It was also crucial that the exposure time was not influenced by the player’s performance (i.e., collected gold). We also did not inform the players how many coins they had already collected and how many were still left. When the goal of the condition was reached (time limit exceeded/all coins found), players were asked to fill in a questionnaire about their impression of the game prototype, focusing on the perceived game experience.
The game design also reflects the goal of having a brief exposure to the game. The game is made up of simple mechanics (move, look around, and pick up gold), is easy to understand concerning its controls (mouse plus WASD keys on the keyboard), has a clear goal (collect all the gold coins), and its design is consistent across all three conditions.
Additionally, participants were interviewed (for more information on the employed measures, see Section 4.5). When players had finished filling in the questionnaire and giving answers, the experimenter presented the second of three conditions. As in the first part, players were asked to fill in the same questionnaire after completing each condition. When all conditions were completed, an informal interview was carried out by the experimenter, in which players were asked to compare the conditions with each other. The procedure took about 60 to 80 min per participant.

4.5. Measures

To measure the perceived immersive game experience, the Immersive Experience Questionnaire (IEQ) by Jennett et al. [62] was employed. It has been used in various studies (e.g., [63,64]) and measures the experience via five factors and a single question to indicate the perceived immersive experience. For the comparative study, the following factors were employed: cognitive involvement (effort and attention; CoIn), emotional involvement (affect and suspense; EmIn), control (use of the interface; Cont), challenge (game difficulty; Chal), and total immersion (ToIm).
In order to get a better impression of the factors, the following example items shall be given:
  • Emotional involvement (EmIn): “To what extent did you feel that the game was something fun you were experiencing, rather than something you were just doing?” (rated on a seven-point Likert scale ranging from “not at all” to “very much so”).
  • Challenge (Chal): “To what extent did you find the game challenging?” (rated on a seven-point Likert scale ranging from “not at all” to “very difficult”).
Apart from the IEQ, the number of collected coins (player performance regarding the game goal) was measured. Furthermore, after each condition, players were interviewed via a semi-structured interview covering the overall game experience, the perceived difficulty, the perceived visual quality of the vignette effect (only GazeG and CrossG conditions), and the usefulness and clarity of the player guidance feedback system (only GazeG and CrossG conditions). At the end of the evaluation, an interview was carried out focusing on the comparison of the conditions (overall game experience, perceived challenge, and the usefulness of the GazeG condition relative to the other conditions).
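As an illustration of how such questionnaire measures are prepared for analysis, per-factor scores are typically computed as the mean of the Likert items belonging to each factor. The sketch below uses a purely hypothetical item-to-factor assignment; the actual grouping is defined by the IEQ itself and is not reproduced here:

```python
from statistics import mean

# Hypothetical item-to-factor mapping (illustrative only; the real
# assignment is specified by the IEQ by Jennett et al.).
FACTOR_ITEMS = {"Coin": [0, 1, 2], "EmIn": [3, 4], "Cont": [5, 6],
                "Chal": [7, 8], "Toim": [9]}

def score_ieq(responses):
    """responses: list of 7-point Likert ratings, one per questionnaire item.
    Returns one mean score per factor."""
    return {factor: mean(responses[i] for i in items)
            for factor, items in FACTOR_ITEMS.items()}
```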

4.6. Data Analysis

In order to examine the underlying hypotheses, all analyses were conducted using repeated-measures analysis of variance (rANOVA). A benefit of rANOVA is the limited number of subjects required. All parametric tests were performed after validating the data against the assumptions of rANOVA. Following the argumentation of Iacovides et al. [65], normality is established as a theoretical assumption that derives from the employment of a questionnaire to measure a unidimensional latent concept. The condition of sphericity was verified via Mauchly’s sphericity tests (emotional involvement: Mauchly-W(2) = 0.77, p = 0.17; control: Mauchly-W(2) = 0.71, p = 0.23; challenge: Mauchly-W(2) = 0.98, p = 0.79; total immersion: Mauchly-W(2) = 0.71, p = 0.18; collected coins: Mauchly-W(2) = 0.97, p = 0.69). Pairwise comparisons used the Bonferroni method of adjusting the significance level for multiple comparisons (post-hoc tests). All statistical tests were carried out with SPSS 24. Significance was set at α = 0.05.
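The analysis pipeline can be sketched as follows. This is a minimal, self-contained illustration using synthetic ratings (the study itself used SPSS 24); the data, seed, and condition means below are invented for demonstration:

```python
import numpy as np
from scipy import stats

def rm_anova(data):
    """One-way repeated-measures ANOVA.
    data: (n_subjects, k_conditions) array of ratings."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between conditions
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj                  # residual
    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    f = (ss_cond / df_cond) / (ss_error / df_error)
    p = stats.f.sf(f, df_cond, df_error)
    return f, p

def bonferroni_pairwise(data):
    """Paired t-tests for all condition pairs, p-values Bonferroni-adjusted
    (multiplied by the number of comparisons and capped at 1)."""
    n, k = data.shape
    m = k * (k - 1) // 2
    out = {}
    for i in range(k):
        for j in range(i + 1, k):
            t, p = stats.ttest_rel(data[:, i], data[:, j])
            out[(i, j)] = min(p * m, 1.0)
    return out

# Synthetic ratings for 24 subjects in three conditions (GazeG, CrossG, NoG).
rng = np.random.default_rng(0)
ratings = rng.normal([5.1, 4.6, 4.9], 0.6, size=(24, 3))
f, p = rm_anova(ratings)
```

The Bonferroni adjustment multiplies each pairwise p-value by the number of comparisons (three for three conditions), which is equivalent to lowering the significance level α.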

5. Results

In the following section, the results of the study are described. Insights are given as to whether the various conditions have an impact on the perceived immersive game experience. This section is subdivided into two parts: the first deals with the analysis of the quantitative data, while the second explores themes based on the interviews.

5.1. Analysis of Quantitative Data

No order effects concerning the presented conditions could be found. Furthermore, gender, age, and the participants’ experience with playing games and exploration games were not significantly related to the ratings.

5.1.1. H1: Game Experience

GazeG received higher control ratings than CrossG and NoG, no difference between CrossG and NoG: Regarding the first scale, Cont, results of the rANOVA indicated a significant effect of the condition on the control scale ( F 2 , 46 = 6.96 , p = 0.00 , η 2 = 0.31 —see Table 1). 31% of the variation in the dependent variable (Cont) could be explained by the independent variable (type of player guidance). Bonferroni-corrected pairwise comparisons revealed that the GazeG condition (M = 5.10, SD = 0.50) had a higher perceived Cont than the CrossG condition (M = 4.57, SD = 0.78, p = 0.01 ) and the NoG condition (M = 4.86, SD = 0.63, p = 0.04 ). It was shown that CrossG and NoG did not differ significantly ( p = 0.29 ).
GazeG received higher emotional involvement ratings than CrossG and NoG, no difference between CrossG and NoG: The results of the EmIn scale revealed a significant effect of the condition on the emotional involvement ( F 2 , 46 = 16.11 , p = 0.00 , η 2 = 0.41 ). Bonferroni-corrected pairwise comparisons revealed that the GazeG (M = 4.08, SD = 0.72) had a higher emotional involvement than the CrossG condition (M = 3.51, SD = 0.79, p = 0.00 ), and the NoG condition (M = 3.56, SD = 0.99, p = 0.00 ). Results highlighted that CrossG and NoG did not differ significantly ( p = 1.00 ).
GazeG was more immersive than CrossG and NoG, no difference between CrossG and NoG: The Toim scale revealed a significant effect of the condition on the perceived immersion in the game ( F 2 , 46 = 4.78 , p = 0.01 , η 2 = 0.27 ). Bonferroni-corrected pairwise comparisons revealed that the GazeG (M = 7.42, SD = 1.28) was more immersive than the CrossG condition (M = 6.46, SD = 1.56, p = 0.00 ), and the NoG condition (M = 3.56, SD = 0.99, p = 0.03 ). Results highlighted that CrossG and NoG did not differ significantly ( p = 1.00 ).
Based on the quantitative results, H1 could be verified: as anticipated, players had a better game experience when guided via a gaze-based guidance approach than with a crosshair-based or no guidance approach.

5.1.2. H2: Game Performance

In GazeG players found more coins than in CrossG and NoG. In CrossG players collected more coins than in NoG: The Coin scale revealed a significant effect of the condition on the number of collected coins ( F 2 , 46 = 26.46 , p = 0.00 , η 2 = 0.54 ). Bonferroni-corrected pairwise comparisons revealed that in the GazeG condition (M = 6.79, SD = 1.50) subjects found significantly more coins than in the CrossG condition (M = 5.88, SD = 1.78, p = 0.05 ) and the NoG condition (M = 4.25, SD = 1.39, p = 0.00 ). Results highlighted that CrossG and NoG also differed significantly ( p = 0.01 ).
To sum up, H2 could be verified: it was assumed that players supported via guidance features would perform better concerning the game goals (here: finding coins) in comparison to a no guidance approach.

5.1.3. H3: Game Difficulty

NoG was perceived as harder than GazeG and CrossG, no difference between GazeG and CrossG: The Chal scale revealed a significant effect of the condition on the perceived challenge in the game ( F 2 , 46 = 9.00 , p = 0.00 , η 2 = 0.28 ). Bonferroni-corrected pairwise comparisons revealed that the GazeG (M = 5.01, SD = 0.52, p = 0.01 ) and the CrossG condition (M = 5.06, SD = 0.66, p = 0.01 ) received lower scores than the NoG condition (M = 5.49, SD = 0.63). Results indicated that GazeG and CrossG did not differ significantly ( p = 1.00 ).
Last, but not least, H3 was confirmed: players with no guidance support perceived the game as more challenging in comparison to the gaze- and crosshair-based guidance approaches.

5.2. Analysis of Qualitative Data

The qualitative data includes the answers to the questions asked between the condition playtests and at the end of the sessions, complementing the data gathered from the IEQ questionnaire. We analyzed the qualitative data through a thematic analysis [66]. Two researchers were involved in the analysis process, which included reviewing the data, generating codes, and searching for, reviewing, and defining themes. The researchers created a set of initial research codes (e.g., gaze as avatar empowerment, challenge and guidance, gaze as a narrative element) that were consolidated into three themes, which will be described in the following sections.

5.2.1. Gaze as a Special Skill

The first theme encapsulates the players’ perceptions of the gaze-based interaction regarding the game character (i.e., the avatar). The gaze element was seen as an integral part of the game experience, as players related to it as the avatar’s particular skill. Subject 02 mentioned that the GazeG gave him the feeling of being guided by the special skill of his character. In the CrossG condition, he solely focused on the feedback (i.e., the vignette effect) while neglecting the game objects and the scenery. This was a common issue noted by several subjects: Subject 09 reported that the guidance feature in the CrossG felt like a tool that helped to complete a task. To her, the game in general and the game character itself lost their significance (subject 09: “It felt like work and not like a game”). This view was also shared by subject 23, who noted that in the GazeG she had the feeling that she IS the character, while in the CrossG condition she had the impression of being a test subject in a game study. Following this notion, the inclusion of gaze not only had an effect on the players’ relation to the avatar but also influenced the way players were immersed in the game. Interestingly, subject 05 noted that he had a similar experience in the CrossG and the GazeG conditions regarding the identification with the game character. Another reason why subjects tended to prefer the GazeG condition was that it felt more natural and plausible (subject 19: “I like the idea that my avatar has this skill....and the use of one’s eyes makes sense”). Subject 04 mentioned that the GazeG allowed him to find game objects without moving the camera’s viewing angle (i.e., by using the eyes), making it more comfortable to find the coins (in contrast to CrossG, where it was necessary to move the view to get feedback).
Subject 06 had a similar impression: She said that “...an observer would think that my character is silly when he does not use his eyes, but his head (i.e., the crosshair) to find the coins...”. Subjects favoring CrossG over the other two conditions indicated that the CrossG did not require them to concentrate, as they looked at the vignette effect and blended out the other game elements that were not directly related to the guidance (i.e., it gave them the feeling of being productive). Only one subject (13) liked the NoG best, as he characterized himself as someone who wants to be confronted with significant challenges.

5.2.2. Challenge and Novelty

The second theme deals with the perceived challenge in the conditions and the potential novelty effects caused by the gaze-based interaction. It appears that guidance establishes a game experience that is neither too hard nor too easy; the level of difficulty was enjoyed in both guidance conditions. For example, subject 03 noted that the guidance approach, on the one hand, helped to find the coins and, on the other hand, provided some challenge through the inclusion of indirect guidance. Although the demo by Tobii was aimed at mitigating potential novelty effects, players mentioned that they needed some time to get accustomed to the gaze interaction. Notably, in the beginning, the concept of gaze-based interaction was hard to grasp. However, when the first coin was found via gaze, subjects appeared to gain a better understanding of the gaze guidance. Subjects 02 and 14 rated GazeG with the highest difficulty because they had trouble getting used to the gaze input: subject 02: “...doing two things simultaneously” (i.e., walking with the mouse and looking with the eyes) and subject 14: “it took me some time to get used to the eye tracker...”.
Following this notion, the aspect of game difficulty is intertwined with problems caused by the novelty of the device. In this regard, the interviews after each condition and at the end of the play session helped us gain a better understanding of what players experienced. Given that the playtime with the eye tracker was rather short (i.e., three minutes), players in general adapted to the new situation quickly. However, further studies need to be carried out to investigate the aspects of challenge and novelty (see Section 6).
The absence of guidance resulted in a (too) challenging game experience. Several subjects (e.g., subjects 03, 07, 12, 15, and 24) noted that in some situations they felt frustrated and did not know what to do. Subject 01 also indicated in the interview that he might have overlooked some coins; although he tried to concentrate and wanted to do his best, he did not get a feeling of accomplishment. A similar aspect was brought up by subject 23: “....it was much more difficult....I gave up after some time....the coins could be anywhere...”. Referring to the concept of (game) flow [67,68], player frustration might emerge when the challenge exceeds the skills of a player.
On the other hand, players might get bored when the challenge is low, while their skills are relatively high. Thus, game designers strive to maintain an equilibrium between challenge and skill. This balance can be seen in remarks like that of subject 13: “...it was tough...but it could be a positive thing... you have to act on your own...have to concentrate and look at things...”. Overall, none of the subjects indicated that the game was too easy. In all conditions, they had the impression of being challenged, although the NoG was perceived as the most challenging condition.

5.2.3. The Meaning and Use of Gaze

The third theme relates to the ways gaze was perceived and interpreted by the players. Gaze served not only as a tool to accomplish game goals but also as an element that adds meaning to the game experience (i.e., gaze as a meta-element that takes the player and his/her body into account). Subject 07 mentioned that the use of gaze could strengthen the connection between the player and the game by using a natural form of interaction. Subject 22 put forward that it is not just useful, but “...adds another layer to the experience...although there were no big differences between the games...the version where I had to use my eyes felt different.” This was also mentioned by subject 13: “...the gaze version is something different...the focus is somewhere else.”
The interviews revealed differences in how the guidance was employed: while most of the subjects (71%) remarked that they either walked or looked, taking their time to observe the scenery, the rest used the guidance feature in GazeG in conjunction with the WASD keyboard input. In CrossG, a majority of players (79%) mentioned employing the following guidance strategy: instead of looking at the scenery to find the coins, they relied solely on the guidance feedback (i.e., the vignette effect). They walked through a room, constantly moving the mouse cursor in circular motions and focusing on the edges of the screen. When the vignette effect appeared, they immediately stopped and looked around to see whether the effect increased in intensity. When asked how solving the challenge felt, players reported that the guidance in CrossG could be described as a tool disconnected from the game: the scenery could be exchanged with another set. Player 24: “...the game could take place somewhere else...I do not care...the only thing that I looked at was the effect.”

6. Discussion

In general, the comparative study showed that the inclusion of gaze-supported player guidance led to different experiences on different levels, depending on the guidance type. By addressing the game context via the integration of components that foster exploration (i.e., revealing the exact location of a relevant game object and scaling the feedback intensity with distance) and by including gaze-based input, players had a more positive experience on various dimensions.

6.1. Game Experience

H1 (Game Experience) could be verified, as both the qualitative and quantitative results indicate that subjects preferred the gaze-based guidance over the CrossG condition. One aspect that added to the positive impression can be identified in the interaction form (i.e., gaze as a natural interaction method in the context of exploration games). This is in line with other research projects that identified a similar effect [1,3,7,69,70,71]. One reason why CrossG received less positive game experience ratings was that the static crosshair-based approach felt like a tool (to solve a task) and not like a special skill of the character. Related findings were discovered in the work of Lankes et al. [72], where small modifications of the locus of manipulation [73] led to a reinterpretation of the player-avatar relation. In the case of our prototype, subjects tended to disregard the game world and focused on the visual guidance feedback (i.e., the vignette effect), leading to a decoupling of the player and the avatar.
In GazeG players had the feeling of seeing the game world through the eyes of the avatar, which led to a stronger bond between them and the game representation. The identification with the avatar through the inclusion of gaze was supported through the game’s narrative that explained the player guidance as a unique trait of the thief character. A similar design can be found in the gaze-enabled version of the game “Assassin’s Creed: Origins”, where the player’s avatar is an assassin with several unique skills. One of these skills is the “eagle vision” that allows the character to identify enemies with ease and tag them through his gaze. We deem that gaze could offer various ways to be employed into games to redefine the relation between the player and the avatar (design pointers are provided in the sub-section General Applicability).
Another critical factor that led to the results can be seen in the fact that the gaze-based player guidance system in the prototype was conceptualized as an integral part of the game design, and not as an add-on to a fully functional game. In many commercial games with eye-tracking support (e.g., [11,12]), gaze-based interaction features are superimposed on a game to enhance or substitute specific player actions (i.e., camera positioning, selecting objects, targeting). This leads to the impression that eye trackers in a games context feel like a luxury item and not a basic necessity [74]. In the case of the game prototype presented in this paper, a different game design approach was pursued: from the beginning, gaze was conceptualized as a vital element of the game’s design. The gaze-based interactions support the game goals (i.e., steal coins) and the players’ activities (i.e., investigating the game scenery, getting in contact with the game objects).

6.2. Game Performance

Regarding H2 (Game Performance), it was confirmed that both guidance approaches led to a better player performance, as measured by the number of coins collected (measure: Coin). This also implies that players were interested in the game goals and not negatively influenced by the novel form of (gaze) interaction. Interestingly, GazeG and CrossG also differed significantly from each other. One explanation for this result can be identified in the way players navigated through the room. As described in the previous paragraph, players tended to concentrate on the vignette feedback, disregarding the game environment. This had the effect that, in several situations, the player’s avatar got stuck without the player noticing; it took them some time to realize that they could not move. One solution to this issue would be to employ auditory or haptic feedback instead of visual cues to grant a maximum field of view. Another option would be to reduce the strength of the vignette effect by reducing its size and opacity, or to use another visualization form (e.g., small but pulsating visual elements located in the screen corners: the closer the player is to the coin, the higher the frequency of the pulsating animation). It was also found that, due to their neglect of the game environment, players searched the same places several times. In contrast, subjects playing GazeG and NoG looked at the game objects. However, the guidance feature of CrossG compensated for this performance issue, as player performance in NoG had the lowest values.
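The alternative visualizations mentioned here (a weaker vignette, or a pulsating corner indicator whose frequency rises with proximity) can be sketched as simple distance-to-intensity mappings. All constants and ranges below are hypothetical, not values from the prototype:

```python
def vignette_opacity(distance, max_distance=10.0, max_opacity=0.4):
    """Linear falloff: full opacity at the coin, zero at or beyond max_distance.
    Capping max_opacity keeps the effect from obscuring the scene."""
    closeness = max(0.0, 1.0 - distance / max_distance)
    return max_opacity * closeness

def pulse_frequency(distance, max_distance=10.0, min_hz=0.5, max_hz=4.0):
    """The closer the player is to the coin, the faster the corner
    indicator pulses."""
    closeness = max(0.0, 1.0 - distance / max_distance)
    return min_hz + (max_hz - min_hz) * closeness
```

Both mappings are monotone in distance, so the feedback always becomes stronger as the player approaches the target; a non-linear falloff (e.g., quadratic) could be substituted without changing the design idea.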

6.3. Game Challenge

In H3 (Game Challenge), the aim was to find out whether players, when being guided, felt a certain degree of challenge or were bored by the guidance feature. In this context, the results show that the type of input does not play a significant role: both GazeG and CrossG received similar ratings (measure: Chal). Although they performed significantly worse in CrossG, players had a similar impression regarding the challenge in comparison to GazeG. An explanation for this would be that the CrossG condition was seen as a different kind of game (i.e., some subjects mentioned that they would not call CrossG a game; it felt more like a chore). Another aspect that contributed to the results was that players could not track the amount of gold they had already collected (i.e., no user interface element was shown, with the exception of the vignette effect), and, thus, they could not directly monitor or compare their performance. This design decision was made consciously, as we were more interested in the perceived and not the actual performance. Last but not least, NoG was experienced as the most challenging variant: many subjects (95.83%) noted that it was much harder and more frustrating, while only a minority found an exploration game without guidance more appealing.

6.4. General Applicability

It was also noted in the interviews that the chosen genre (i.e., exploration game) fits well with the gaze-based player guidance. However, we also argue that the proposed approach is not only suitable for exploration games but also could be employed in various genres. For instance, the concept may serve as a core mechanic in a horror game: One common element of horror games is that players have minimal resources (line of sight, low ammunition) and opponents are much stronger than the player character (e.g., Amnesia [75]). Thus, players typically avoid enemies by sneaking around them. Gaze-based guidance could add a new game design element by introducing an inverted form of player guidance (i.e., avoidance). Players could be confronted with invisible enemies that are only indicated via a vignette effect triggered by the player’s gaze. Enemies pursue the player when being watched. This mechanic could lead to the dilemma of either looking at the enemies to know their current location or by avoiding them to stay alive.
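The inverted-guidance idea described above (invisible enemies that pursue the player only while being watched) could be prototyped as follows. All names, the gaze-cone threshold, and the movement model are illustrative assumptions, not part of the presented prototype:

```python
import math

GAZE_CONE_DEG = 15.0  # hypothetical: how close gaze must be to "watch" an enemy

def angle_between(gaze_dir, to_enemy):
    """Angle in degrees between the gaze direction and the enemy direction."""
    dot = sum(a * b for a, b in zip(gaze_dir, to_enemy))
    norm = (math.sqrt(sum(a * a for a in gaze_dir))
            * math.sqrt(sum(b * b for b in to_enemy)))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def update_enemy(player_pos, gaze_dir, enemy_pos, speed, dt):
    """The enemy moves toward the player only while inside the gaze cone,
    creating the dilemma of watching (and being chased) vs. looking away."""
    to_enemy = [e - p for e, p in zip(enemy_pos, player_pos)]
    watched = angle_between(gaze_dir, to_enemy) < GAZE_CONE_DEG
    if watched:
        dist = math.sqrt(sum(c * c for c in to_enemy)) or 1.0
        step = [-c / dist * speed * dt for c in to_enemy]
        enemy_pos = [e + s for e, s in zip(enemy_pos, step)]
    return enemy_pos, watched
```

Coupling the vignette intensity to `watched` would then reveal the enemy's rough location only through the player's own gaze.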
Furthermore, gaze-based player guidance could be harnessed as a navigation tool (i.e., a breadcrumb system) in open-world games, where navigation beacons could lead the player from one location to the next. These are just a few examples where a gaze-based guidance system could add to the game experience. Table 2 summarizes the potential of the approach in other game genres (categorization based on Rollings & Adams [76]).
Apart from the desktop setting, we deem that the approach can also be transferred to other application domains, such as Virtual Reality (VR) and Augmented Reality (AR). For instance, players could be guided in a VR action-adventure game based on their current gaze position, or play an AR tag game through the use of AR glasses with eye-tracking capabilities [84]. Another potential research direction can be seen in non-entertainment application areas. For example, tourists could be guided through a city to interesting landmarks without removing the opportunity to explore and find the locations by themselves. In general, these reflections concerning the use of gaze-based guidance are not meant to be complete but are intended to be understood as design pointers that should foster the discussion process.

6.5. Limitations

Although the comparative study led to some interesting findings, several limitations have to be acknowledged. First, the paper explored the potentials of a gaze-based player guidance approach in an exploration game. To broaden the field of application and to explore the transferability of the concept thoroughly, further research is required dealing with the implementation and evaluation of gaze-based guidance in the context of the different game genres presented in Table 2. Second, we did not explore the design potentials regarding the question of when the guidance should be activated: for instance, one could think of a hybrid approach consisting of the NoG and GazeG conditions. At the beginning of a game session, players start in the NoG condition; only when players do not find coins (e.g., within a predefined time limit) is the GazeG condition initialized.
Another limitation of this work is that the demographics of the test subjects were relatively narrow, mostly comprising subjects of the same age with a high level of education. In future studies, it is planned to include different age groups in a comparative study. As indicated in Section 4.4, none of the subjects had any experience with eye-tracking technologies. The aspect of novelty (i.e., being confronted with a device that you do not know) could affect the gathered results. The mitigation of novelty effects is hard to achieve, since a large number of players do not use eye-tracking technologies. We addressed this issue by including a short tutorial session (using a demo from Tobii [61]) at the beginning of each playtest. Last but not least, other types of feedback (i.e., different forms of visual feedback) and the combination of different feedback types (e.g., a combination of auditory and visual feedback) are not covered in this paper.

7. Conclusions

This paper introduced gaze-based guidance in the context of games and reported on a comparative study that examined the potential effects of the approach on the game experience in a first-person exploration game. It was revealed that players have a more engaging game experience on different levels in comparison to a guidance system via a static cursor and to a game without guidance features. Regarding future work, it is planned to investigate the following aspects: we plan to compare the introduced approach with other visual gaze guidance strategies. Further research directions are the investigation of multi-modal feedback conditions (e.g., visual and auditory) to overcome the limitations of a visual-only approach, gaze-based guidance in other genres, and the inclusion of player types [85] in the experimental setting. Furthermore, we deem gaze-based player guidance to be a relevant topic in the context of VR and AR. We plan to transfer the proposed game to VR to identify possible differences in the game experience when compared to the desktop setting. In general, we see gaze-based guidance in games as a promising field for future activities.

Author Contributions

Conceptualization, M.L. and A.H.; methodology, M.L.; software, M.L. and A.H.; validation, M.L.; formal analysis, M.L. and A.H.; investigation, M.L. and A.H.; resources, M.L.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, C.W.; visualization, M.L.; supervision, C.W.; project administration, M.L.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank all of our survey participants for their time and for the data they generously provided.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MDPI: Multidisciplinary Digital Publishing Institute
CrossG: Crosshair Guidance
GazeG: Gaze Guidance
NoG: No Guidance
OGD: Overt Gaze Direction
SGD: Subtle Gaze Direction
VR: Virtual Reality
AR: Augmented Reality
PIE: Playful Interactive Environments
IEQ: Immersive Experience Questionnaire
Coin: Cognitive involvement
EmIn: Emotional involvement
Cont: Control
Chal: Challenge
Toim: Total immersion
rANOVA: Repeated-measures analysis of variance

References

  1. Velloso, E.; Fleming, A.; Alexander, J.; Gellersen, H. Gaze-supported gaming: MAGIC techniques for first person shooters. In Proceedings of the 2015 Annual Symposium on Computer-Human Interaction in Play, CHI PLAY ’15, London, UK, 5–7 October 2015; ACM: New York, NY, USA, 2015; pp. 343–347. [Google Scholar] [CrossRef]
  2. Pfeuffer, K.; Alexander, J.; Gellersen, H. GazeArchers: Playing with individual and shared attention in a two-player look&shoot tabletop game. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, MUM ’16, Rovaniemi, Finland, 12–15 December 2016; ACM: New York, NY, USA, 2016; pp. 213–216. [Google Scholar] [CrossRef]
  3. Menges, R.; Kumar, C.; Wechselberger, U.; Schaefer, C.; Walber, T.; Staab, S. Schau genau! A gaze-controlled 3D game for entertainment and education. J. Eye Mov. Res. 2017, 10, 220. [Google Scholar]
  4. Lankes, M.; Rammer, D.; Maurer, B. Eye contact: Gaze as a connector between spectators and players in online games. In Entertainment Computing—ICEC 2017; Munekata, N., Kunita, I., Hoshino, J., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 310–321. [Google Scholar]
  5. Lankes, M.; Newn, J.; Maurer, B.; Velloso, E.; Dechant, M.; Gellersen, H. EyePlay revisited: Past, present and future challenges for eye-based interaction in games. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts, CHI PLAY ’18 Extended Abstracts, Melbourne, VIC, Australia, 28–31 October 2018; ACM: New York, NY, USA, 2018; pp. 689–693. [Google Scholar] [CrossRef]
  6. Antunes, J.; Santana, P. A study on the use of eye tracking to adapt gameplay and procedural content generation in first-person shooter games. Multimodal Technol. Interact. 2018, 2, 23. [Google Scholar] [CrossRef]
  7. Navarro, D.; Sundstedt, V. Simplifying game mechanics: Gaze as an implicit interaction method. In Proceedings of the SIGGRAPH Asia 2017 Technical Briefs, SA ’17, Bangkok, Thailand, 27–30 November 2017; ACM: New York, NY, USA, 2017; pp. 4:1–4:4. [Google Scholar] [CrossRef]
  8. Duchowski, A.T. Serious gaze. In Proceedings of the 2017 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), Athens, Greece, 6–8 September 2017; pp. 276–283. [Google Scholar] [CrossRef]
  9. Abbaszadegan, M.; Yaghoubi, S.; MacKenzie, I.S. TrackMaze: A comparison of head-tracking, eye-tracking, and tilt as input methods for mobile games. In Human-Computer Interaction. Interaction Technologies; Kurosu, M., Ed.; Springer International Publishing: Cham, Switzerland, 2018; pp. 393–405. [Google Scholar]
  10. Dechant, M.; Heckner, M.; Wolff, C. Den Schrecken im Blick: Eye tracking und survival horrorspiele. In Mensch & Computer 2013 Workshopband; Boll, S., Maaß, S., Malaka, R., Eds.; Oldenbourg Verlag: München, Germany, 2013; pp. 539–542. [Google Scholar]
  11. Ubisoft Montreal. Far Cry 5; Game [Microsoft Windows, PS4, XboxOne]; Ubisoft: Rennes, France, 2018; Last played February 2019. [Google Scholar]
  12. Ubisoft Montreal. Assassin’s Creed Odyssey; Game [Microsoft Windows, PS4, XboxOne]; Ubisoft: Rennes, France, 2018; Last played February 2019. [Google Scholar]
  13. Tobii. Tobii Gaming, PC Games with Eye Tracking, Top Games from Steam, Uplay. 2018. Available online: https://tobiigaming.com/games/ (accessed on 2 April 2019).
  14. HTC. HTC Vive Pro Eye. 2019. Available online: https://www.vive.com/eu/pro-eye/ (accessed on 15 May 2019).
  15. Velloso, E.; Carter, M. The emergence of EyePlay: A survey of eye interaction in games. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, CHI PLAY ’16, Austin, TX, USA, 16–19 October 2016; ACM: New York, NY, USA, 2016; pp. 171–185. [Google Scholar] [CrossRef]
  16. Tobii. How to Play Assassin’s Creed Origins with Tobii Eye Tracking. 2018. Available online: https://www.youtube.com/watch?time_continue=2&v=ZSoDSiI0mZw (accessed on 25 February 2019).
  17. Lintu, A.; Carbonell, N. Gaze Guidance through Peripheral Stimuli; Centre de recherche INRIA Nancy: Rocquencourt, France, 2009. [Google Scholar]
  18. De Koning, B.B.; Jarodzka, H. Attention guidance strategies for supporting learning from dynamic visualizations. In Learning from Dynamic Visualization; Springer: Cham, Switzerland, 2017; pp. 255–278. [Google Scholar]
  19. Lin, Y.C.; Chang, Y.J.; Hu, H.N.; Cheng, H.T.; Huang, C.W.; Sun, M. Tell me where to look: Investigating ways for assisting focus in 360 video. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; ACM: New York, NY, USA, 2017; pp. 2535–2545. [Google Scholar]
  20. Cole, F.; DeCarlo, D.; Finkelstein, A.; Kin, K.; Morley, R.K.; Santella, A. Directing gaze in 3D models with stylized focus. Render. Technol. 2006, 2006, 17. [Google Scholar]
  21. Barth, E.; Dorr, M.; Böhme, M.; Gegenfurtner, K.; Martinetz, T. Guiding the mind’s eye: improving communication and vision by external control of the scanpath. In Proceedings of Human Vision and Electronic Imaging XI; International Society for Optics and Photonics: Washington, DC, USA, 2006; Volume 6057, p. 60570D. [Google Scholar]
  22. Bailey, R.; McNamara, A.; Sudarsanam, N.; Grimm, C. Subtle gaze direction. ACM Trans. Graph. (TOG) 2009, 28, 100. [Google Scholar] [CrossRef]
  23. Hata, H.; Koike, H.; Sato, Y. Visual guidance with unnoticed blur effect. In Proceedings of the International Working Conference on Advanced Visual Interfaces, Bari, Italy, 7–10 June 2016; ACM: New York, NY, USA, 2016; pp. 28–35. [Google Scholar]
  24. Grogorick, S.; Stengel, M.; Eisemann, E.; Magnor, M. Subtle gaze guidance for immersive environments. In Proceedings of the ACM Symposium on Applied Perception, Cottbus, Germany, 16–17 September 2017; ACM: New York, NY, USA, 2017; p. 4. [Google Scholar]
  25. Schell, J. The Art of Game Design: A Book of Lenses; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2008. [Google Scholar]
  26. The Astronauts. The Vanishing of Ethan Carter; Game [Microsoft Windows], 2014; The Astronauts: Warsaw, Poland, 2014; Last played June 2018. [Google Scholar]
  27. Brandse, M.; Tomimatsu, K. Using color guidance to improve on usability in interactive environments. In HCI International 2014—Posters’ Extended Abstracts; Stephanidis, C., Ed.; Springer International Publishing: Cham, Switzerland, 2014; pp. 3–8. [Google Scholar]
  28. Gibson, J. Introduction to Game Design, Prototyping, and Development: From Concept to Playable Game with Unity and C#, 1st ed.; Addison-Wesley Professional: Boston, MA, USA, 2014. [Google Scholar]
  29. Totten, C. An Architectural Approach to Level Design; Taylor & Francis: Abingdon-on-Thames, UK, 2014. [Google Scholar]
  30. Giant Sparrow. What Remains of Edith Finch; Game [Microsoft Windows, PS4, XboxOne], 2017; Giant Sparrow: Santa Monica, CA, USA, 2017; Last played February 2019. [Google Scholar]
  31. Rogers, S. Level Up! The Guide to Great Video Game Design, 2nd ed.; Wiley Publishing: Hoboken, NJ, USA, 2014. [Google Scholar]
  32. Castillo, T.; Novak, J. Game Development Essentials: Game Level Design, 1st ed.; Delmar Learning: Clifton Park, NY, USA, 2008. [Google Scholar]
  33. Isokoski, P.; Joos, M.; Spakov, O.; Martin, B. Gaze controlled games. Univers. Access Inf. Soc. 2009, 8, 323–337. [Google Scholar] [CrossRef]
  34. Sundstedt, V.; Bernhard, M.; Stavrakis, E.; Reinhard, E.; Wimmer, M. Visual attention and gaze behavior in games: An object-based approach. In Game Analytics: Maximizing the Value of Player Data; Seif El-Nasr, M., Drachen, A., Canossa, A., Eds.; Springer: London, UK, 2013; pp. 543–583. [Google Scholar] [CrossRef]
  35. Sheikh, A.; Brown, A.; Watson, Z.; Evans, M. Directing Attention in 360-Degree Video. In Proceedings of the International Broadcasting Convention, IBC 2016, Amsterdam, The Netherlands, 8–12 September 2016; IBC: London, UK, 2016. [Google Scholar]
  36. Khan, A.; Matejka, J.; Fitzmaurice, G.; Kurtenbach, G. Spotlight: Directing users’ attention on large displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; ACM: New York, NY, USA, 2005; pp. 791–798. [Google Scholar]
  37. Sato, Y.; Sugano, Y.; Sugimoto, A.; Kuno, Y.; Koike, H. Sensing and controlling human gaze in daily living space for human-harmonized information environments. In Human-Harmonized Information Technology; Springer: Cham, Switzerland, 2016; Volume 1, pp. 199–237. [Google Scholar]
  38. Vig, E.; Dorr, M.; Barth, E. Learned saliency transformations for gaze guidance. In Human Vision and Electronic Imaging XVI; International Society for Optics and Photonics: Washington, DC, USA, 2011; Volume 7865, p. 78650W. [Google Scholar]
  39. Kosara, R.; Miksch, S.; Hauser, H. Focus+context taken literally. IEEE Comput. Graph. Appl. 2002, 22, 22–29. [Google Scholar] [CrossRef]
  40. Ben-Joseph, E.; Greenstein, E. Gaze Direction in Virtual Reality Using Illumination Modulation and Sound; research report; Leland Stanford Junior University: Stanford, CA, USA, 2016. [Google Scholar]
  41. Team Bondi. L.A. Noire; Game [Microsoft Windows], 2011; Team Bondi: Sydney, Australia, 2011; Last played February 2018. [Google Scholar]
  42. Creative Assembly. Alien: Isolation; Game [Microsoft Windows], 2014; Creative Assembly: Horsham, UK, 2014; Last played January 2018. [Google Scholar]
  43. MercurySteam; Nintendo EPD. Metroid: Samus Returns; Game [Nintendo 3DS], 2017; MercurySteam and Nintendo EPD: Kyoto, Japan, 2017; Last played May 2018. [Google Scholar]
  44. Nintendo EAD. Zelda: A Link between Worlds; Game [Nintendo 3DS], 2013; Nintendo EAD: Kyoto, Japan, 2013; Last played September 2018. [Google Scholar]
  45. Galactic Cafe. The Stanley Parable; Game [Microsoft Windows], 2013; Galactic Cafe: Austin, TX, USA, 2013; Last played February 2018. [Google Scholar]
  46. The Chinese Room; SCE Santa Monica Studio. Everybody’s Gone to the Rapture; Game [Microsoft Windows], 2016; The Chinese Room and SCE Santa Monica Studio: Brighton, UK, 2016; Last played May 2018. [Google Scholar]
  47. Fagerholt, E.; Lorentzon, M. Beyond the HUD—User Interfaces for Increased Player Immersion in FPS Games. Master’s Thesis, Chalmers University of Technology, Göteborg, Sweden, 2009; p. 118. [Google Scholar]
  48. Unity. Vignette. 2019. Available online: https://docs.unity3d.com/Manual/PostProcessing-Vignette.html (accessed on 24 May 2019).
  49. Lutteroth, C.; Penkar, M.; Weber, G. Gaze vs. mouse: A fast and accurate gaze-only click alternative. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, UIST ’15, Charlotte, NC, USA, 11–15 November 2015; ACM: New York, NY, USA, 2015; pp. 385–394. [Google Scholar] [CrossRef]
  50. Bednarik, R.; Tukiainen, M. Gaze vs. Mouse in Games: The Effects on User Experience; research report; University of Joensuu: Joensuu, Finland, 2008. [Google Scholar]
  51. Kasprowski, P.; Harezlak, K.; Niezabitowski, M. Eye movement tracking as a new promising modality for human computer interaction. In Proceedings of the 17th International Carpathian Control Conference (ICCC), Tatranska Lomnica, Slovakia, 29 May–1 June 2016. [Google Scholar] [CrossRef]
  52. Isokoski, P.; Martin, B. Eye Tracker Input in First Person Shooter Games. In Proceedings of the COGAIN, Turin, Italy, 4–5 September 2006; pp. 78–81. [Google Scholar]
  53. Dorr, M.; Pomarjanschi, L.; Barth, E. Gaze beats mouse: A case study on a gaze-controlled breakout. PsychNology J. 2009, 7, 197–211. [Google Scholar]
  54. Dechant, M.; Stavness, I.; Mairena, A.; Mandryk, R.L. Empirical Evaluation of Hybrid Gaze-Controller Selection Techniques in a Gaming Context. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play, CHI PLAY ’18, Melbourne, VIC, Australia, 28–31 October 2018; ACM: New York, NY, USA, 2018; pp. 73–85. [Google Scholar] [CrossRef]
  55. Hild, J.; Gill, D.; Beyerer, J. Comparing Mouse and MAGIC Pointing for Moving Target Acquisition. In Proceedings of the Symposium on Eye Tracking Research and Applications, ETRA ’14, Safety Harbor, FL, USA, 26–28 March 2014; ACM: New York, NY, USA, 2014; pp. 131–134. [Google Scholar] [CrossRef]
  56. Unity. Unity. 2018. Available online: https://unity3d.com/ (accessed on 13 January 2019).
  57. Medieval Cartoon Furniture Pack. 2019. Available online: https://assetstore.unity.com/packages/3d/environments/fantasy/medieval-cartoon-furniture-pack-15094 (accessed on 22 April 2019).
  58. Tobii Gaming. Tobii Eye Tracker 4C. 2018. Available online: https://tobiigaming.com/product/tobii-eye-tracker-4c/ (accessed on 22 March 2019).
  59. Tobii Gaming. Tobii Unity SDK for Desktop. 2018. Available online: http://developer.tobii.com/tobii-unity-sdk/ (accessed on 22 March 2019).
  60. Entertainment Software Association. Essential Facts about the Computer and Video Game Industry. 2018. Available online: https://www.theesa.com/wp-content/uploads/2019/03/ESA_EssentialFacts_2018.pdf (accessed on 10 February 2019).
  61. Tobii. Tobii SDK Guide. 2019. Available online: https://developer.tobii.com/tobii-sdk-guide/ (accessed on 3 May 2019).
  62. Jennett, C.; Cox, A.L.; Cairns, P.; Dhoparee, S.; Epps, A.; Tijs, T.; Walton, A. Measuring and defining the experience of immersion in games. Int. J. Hum.-Comput. Stud. 2008, 66, 641–661. [Google Scholar] [CrossRef]
  63. Iacovides, I.; Cox, A.L. Moving Beyond Fun: Evaluating Serious Experience in Digital Games. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Korea, 18–23 April 2015; ACM: New York, NY, USA, 2015; pp. 2245–2254. [Google Scholar] [CrossRef]
  64. Rigby, J.M.; Brumby, D.P.; Cox, A.L.; Gould, S.J.J. Watching movies on Netflix: Investigating the effect of screen size on viewer immersion. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct, MobileHCI ’16, Florence, Italy, 6–9 September 2016; ACM: New York, NY, USA, 2016; pp. 714–721. [Google Scholar] [CrossRef]
  65. Iacovides, I.; Cox, A.; Kennedy, R.; Cairns, P.; Jennett, C. Removing the HUD: The Impact of Non-Diegetic Game Elements and Expertise on Player Involvement. In Proceedings of the 2015 Annual Symposium on Computer-Human Interaction in Play, CHI PLAY ’15, London, UK, 5–7 October 2015; ACM: New York, NY, USA, 2015; pp. 13–22. [Google Scholar] [CrossRef]
  66. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef] [Green Version]
  67. Csikszentmihalyi, M. Flow: The Psychology of Optimal Experience; Harper Perennial: New York, NY, USA, 1991. [Google Scholar]
  68. Sweetser, P.; Wyeth, P. GameFlow: A model for evaluating player enjoyment in games. Comput. Entertain. 2005, 3, 3. [Google Scholar] [CrossRef]
  69. Piumsomboon, T.; Lee, G.; Lindeman, R.W.; Billinghurst, M. Exploring natural eye-gaze-based interaction for immersive virtual reality. In Proceedings of the 2017 IEEE Symposium on 3D User Interfaces (3DUI), Los Angeles, CA, USA, 18–19 March 2017; pp. 36–39. [Google Scholar] [CrossRef]
  70. Menges, R.; Kumar, C.; Sengupta, K.; Staab, S. eyeGUI: A novel framework for eye-controlled user interfaces. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction, NordiCHI ’16, Gothenburg, Sweden, 23–27 October 2016; ACM: New York, NY, USA, 2016; pp. 1–6. [Google Scholar] [CrossRef]
  71. Kumar, C.; Menges, R.; Staab, S. Eye-Controlled Interfaces for Multimedia Interaction. IEEE MultiMedia 2016, 23, 6–13. [Google Scholar] [CrossRef]
  72. Lankes, M.; Mirlacher, T.; Wagner, S.; Hochleitner, W. Whom are you looking for?: The effects of different player representation relations on the presence in gaze-based games. In Proceedings of the First ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play, CHI PLAY ’14, Toronto, ON, Canada, 19–21 October 2014; ACM: New York, NY, USA, 2014; pp. 171–179. [Google Scholar] [CrossRef]
  73. Bayliss, P. Beings in the game-world: Characters, avatars, and players. In Proceedings of the 4th Australasian Conference on Interactive Entertainment, IE’07, Melbourne, Australia, 3–5 December 2007; RMIT University: Melbourne, Australia, 2007; pp. 1–6. [Google Scholar]
  74. Horti, S. Eye Tracking for Gamers: Seeing is Believing. 2018. Available online: https://www.techradar.com/news/eye-tracking-for-gamers-seeing-is-believing (accessed on 23 July 2018).
  75. Frictional Games. Amnesia: The Dark Descent; Game [Microsoft Windows], 2010; Frictional Games: Helsingborg, Sweden, 2010; Last played October 2013. [Google Scholar]
  76. Rollings, A.; Adams, E. Andrew Rollings and Ernest Adams on Game Design; New Riders Publishing: Indianapolis, IN, USA, 2003. [Google Scholar]
  77. Nintendo EAD. Super Mario World; Game [SNES], 1992; Nintendo EAD: Kyoto, Japan, 1992; Last played May 2014. [Google Scholar]
  78. Blizzard Entertainment. Warcraft 3; Game [Microsoft Windows], 2002; Blizzard Entertainment: Irvine, CA, USA, 2002; Last played March 2012. [Google Scholar]
  79. CD Projekt Red. Witcher 3: Wild Hunt; Game [Microsoft Windows], 2015; CD Projekt: Warsaw, Poland, 2015; Last played October 2017. [Google Scholar]
  80. EA Romania. FIFA 18; Game [Microsoft Windows], 2017; EA Sports: San Mateo, CA, USA, 2017; Last played January 2018. [Google Scholar]
  81. ACES Game Studio. Microsoft Flight Simulator X; Game [Microsoft Windows], 2006; Microsoft Studios: Redmond, WA, USA, 2006; Last played April 2015. [Google Scholar]
  82. Maxis. SimCity 2000; Game [Microsoft Windows], 1993; Maxis: Redwood, CA, USA, 1993; Last played March 2016. [Google Scholar]
  83. Revolution Software. Broken Sword: The Shadow of the Templars; Game [Microsoft Windows], 1996; Virgin Interactive Entertainment: London, UK, 1996; Last played June 2017. [Google Scholar]
  84. Magic Leap. Magic Leap One: Creator Edition. 2018. Available online: https://www.magicleap.com/magic-leap-one (accessed on 9 June 2019).
  85. Nacke, L.E.; Bateman, C.; Mandryk, R.L. BrainHex: Preliminary results from a neurobiological gamer typology survey. In Entertainment Computing—ICEC 2011; Anacleto, J.C., Fels, S., Graham, N., Kapralos, B., Saif El-Nasr, M., Stanley, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 288–293. [Google Scholar]
Figure 1. Screenshot of the game prototype that was used in the comparative study (first person view).
Figure 2. In the game prototype, the player takes the role of a thief, who has to steal as many coins as possible within a limited amount of time.
Figure 3. The study consisted of three conditions: 1—the No Guidance Condition (NoG) served as the control condition and provided a crosshair, but no guidance features; 2—the Crosshair Guidance Condition (CrossG) provided guidance through the proximity of the crosshair (white circle) to the gold coins; the closer the crosshair was to a gold coin, the stronger the vignette effect; 3—in the Gaze-based Guidance Condition (GazeG), guidance was driven by the player's gaze position; the crosshair in this condition was only used to pick up objects.
Figure 4. A step-by-step example of the gaze-based guidance process: left column—player view; right column—top-down view of the level. Circles with white fill: crosshair; circles with blue and white outlines: current gaze position of the player; circles with blue fill: current player position. First row: the player looks at a gaze-sensitive area from afar; second row: the player reduces the distance to the target area and fixates the target; third row: the player looks at the target and is near the hidden object. The vignette effect is driven by both the distance between the player and the target and the distance between the current gaze position and the target.
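The paper does not publish implementation code; as a rough illustration, the distance-driven vignette described in the caption above could be computed as in the following Python sketch. The function names, radii, and linear falloff are assumptions for demonstration, not details from the study.

```python
def falloff(distance: float, radius: float) -> float:
    """Linear falloff: 1.0 at distance 0, fading to 0.0 at or beyond `radius`.
    (Assumed shape; the study does not specify the falloff curve.)"""
    return max(0.0, 1.0 - distance / radius)

def vignette_intensity(player_dist: float, gaze_dist: float,
                       player_radius: float = 10.0,
                       gaze_radius: float = 2.0) -> float:
    """Combine the player's distance to the target with the gaze point's
    distance to the gaze-sensitive area into one vignette strength in [0, 1].
    Both factors must be non-zero for feedback to appear, so the effect only
    shows when the player *looks at* a nearby target. Radii are illustrative."""
    return falloff(player_dist, player_radius) * falloff(gaze_dist, gaze_radius)

# Far away -> no vignette, even when fixating the target.
assert vignette_intensity(12.0, 0.5) == 0.0
# Close and fixating the target -> strong vignette.
assert vignette_intensity(1.0, 0.2) > 0.8
```

In the CrossG condition, the same mapping would apply with the gaze distance replaced by the distance between the screen-centered crosshair and the target.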
Figure 5. Commonalities and differences between CrossG and GazeG. Left image: in CrossG, players interact via a mouse (looking around, pointing) and a keyboard (WASD movement, crouching); the guidance (i.e., the vignette effect) is driven by the current crosshair position, which is pinned to the center of the screen. Right image: in GazeG, players move in the same way as in CrossG (i.e., mouse and keyboard); in this condition, the guidance is decoupled from the screen's center and is driven by the player's gaze position. Note: the gaze point is shown for demonstration purposes only (i.e., no visual feedback on the current gaze position was provided).
Table 1. Means and standard deviations for control (Cont), emotional involvement (EmIn), and challenge (Chal) on a scale from 1 to 7; total immersion (ToIm) on a scale from 0 to 10; and coins collected (scale: 0 to 8). Ratings are presented per condition (gaze-based guidance (GazeG), crosshair-based guidance (CrossG), no guidance (NoG)). Note: N = 24, p < 0.05.
| Measure | GazeG | CrossG | NoG | F(2,46) | p | η² |
|---|---|---|---|---|---|---|
| IEQ: Control | 5.15 (0.50) | 4.57 (0.78) | 4.88 (0.63) | 6.96 | 0.00 | 0.31 |
| IEQ: Emotional Involvement | 4.08 (0.73) | 3.51 (0.79) | 3.56 (0.99) | 16.11 | 0.00 | 0.41 |
| IEQ: Total Immersion | 7.42 (1.28) | 6.46 (1.56) | 6.25 (1.94) | 4.78 | 0.01 | 0.27 |
| Metric: Coins collected | 6.79 (1.50) | 5.88 (1.78) | 4.25 (1.39) | 26.46 | 0.00 | 0.54 |
| IEQ: Challenge | 5.01 (0.52) | 5.06 (0.66) | 5.49 (0.63) | 9.00 | 0.00 | 0.28 |
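For reference, the F statistics in Table 1 carry 2 and 46 degrees of freedom, which is consistent with a one-way repeated-measures ANOVA over k = 3 conditions and N = 24 participants (df_effect = k − 1, df_error = (k − 1)(N − 1)). A minimal sketch of this bookkeeping (illustrative; not the authors' analysis code):

```python
def rm_anova_dfs(n_subjects: int, n_conditions: int) -> tuple:
    """Degrees of freedom for a one-way repeated-measures ANOVA."""
    df_effect = n_conditions - 1                       # k - 1
    df_error = (n_conditions - 1) * (n_subjects - 1)   # (k - 1)(N - 1)
    return df_effect, df_error

# 24 participants, 3 conditions (GazeG, CrossG, NoG) -> F(2, 46)
assert rm_anova_dfs(24, 3) == (2, 46)
```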
Table 2. An overview of potential use cases of gaze-based guidance in the context of games.
| Genre & Game Example | Gaze-Based Guidance Function & Gameplay Example |
|---|---|
| Action Games: Super Mario World [77] | Power-ups & strategies indicator: via gaze, players could be made aware of strategies to overcome obstacles (e.g., an indication of the enemies' weak spots) and of the location of power-ups (e.g., mushrooms, fire flowers) hidden in a level area. |
| Strategy Games: Warcraft III: Reign of Chaos [78] | Fog-of-war add-on: although players cannot see through the fog of war directly, a gaze-based vignette effect could inform them about enemy movement in a particular area (without revealing the exact position and type of units), enabling them to develop a counter-strategy. |
| Role-Playing Games: Witcher 3: Wild Hunt [79] | Extended Witcher senses: in the game, the player has sharpened senses (the Witcher senses, i.e., visual highlighting) that help him/her identify game objects relevant to a quest; integrating gaze could offer a more challenging and more rewarding experience by only indicating the objects' location. |
| Sports Games: FIFA 2019 [80] | Team coordinator: in soccer, gaze could guide players to look at a team member who wants to interact with them (e.g., to receive a pass), which would benefit team coordination. |
| Vehicle Sims: MS Flight Simulator [81] | Advanced cockpit tutorial: in a tutorial for players with intermediate skills (e.g., starting the plane's engines and taking off), players could be guided to the relevant areas of the cockpit to solve the assignment (without revealing the exact location). |
| Management Sims: Sim City 2000 [82] | Silent counsel: players could be made aware of positive situations (e.g., population growth), negative situations (e.g., fire), and strategies (e.g., lowering taxes) not by direct pointing, but by an indication that something relevant is happening, or could be done, in a segment of the city. This would let players develop their own interpretation of the situation. |
| Adventure Games: Assassin's Creed: Origins [12] | Gaze-based waypoint beacons: instead of using map markers to reach the next quest, players could be guided by a gaze-based waypoint system. When a player looks in the direction of a waypoint (a gaze-sensitive area), he/she receives feedback via a vignette effect (similar to Lost & Found); the closer he/she gets to the waypoint, the stronger the feedback. |
| Puzzle Games: Broken Sword [83] | Hint system for puzzle solving: in point-and-click adventures such as Broken Sword, players can turn on a hint system (three hints per puzzle) when they cannot solve a given challenge. Gaze guidance could make this system more interesting by indicating, but not showing, the solution (the location of a puzzle object). |

Share and Cite

MDPI and ACS Style

Lankes, M.; Haslinger, A.; Wolff, C. gEYEded: Subtle and Challenging Gaze-Based Player Guidance in Exploration Games. Multimodal Technol. Interact. 2019, 3, 61. https://doi.org/10.3390/mti3030061
