Article

The Central Bias in Day-to-Day Viewing

by Flora Ioannidou, Frouke Hermens and Timothy L. Hodgson
University of Lincoln, Lincoln LN6 7TS, UK
J. Eye Mov. Res. 2016, 9(6), 1-13; https://doi.org/10.16910/jemr.9.6.6
Submission received: 14 June 2016 / Published: 30 September 2016

Abstract

Eye tracking studies have suggested that, when viewing images centrally presented on a computer screen, observers tend to fixate the middle of the image. This so-called ‘central bias’ was later also observed in mobile eye tracking during outdoor navigation, where observers were found to fixate the middle of the head-centered video image. It is unclear, however, whether the extension of the central bias to mobile eye tracking in outdoor navigation may have been due to the relatively long viewing distances towards objects in this task and the constant turning of the body in the direction of motion, both of which may have reduced the need for large amplitude eye movements. To examine whether the central bias in day-to-day viewing is related to the viewing distances involved, we here compare eye movements in three tasks (indoors navigation, tea making, and card sorting), each associated with interactions with objects at different viewing distances. Analysis of gaze positions showed a central bias for all three tasks that was independent of the task performed. These results confirm earlier observations of the central bias in mobile eye tracking data, and suggest that differences in the typical viewing distance during different tasks have little effect on the bias. The results could have interesting technological applications, in which the bias is used to estimate the direction of gaze from head-centered video images, such as those obtained from wearable technology.

Introduction

When viewing the external world, observers make eye movements to shift their gaze and foveate objects of interest for further visual processing. Despite many years of research, it is not fully understood how targets for such gaze shifts are selected. Saliency models have been proposed, in which viewers are assumed to shift their gaze to objects and parts of the scene that are likely to attract attention, making use of the distribution of features such as colors, edges, and contrast in the scene (Itti, Koch, & Niebur, 1998; Itti & Koch, 2001). Observers, however, also have a strong tendency to fixate the center of an image (Tatler, 2007), and it has been suggested that, when quantifying the performance of saliency models, this central bias should serve as the baseline against which a model’s performance is evaluated, pointing to a crucial role for the central bias in the allocation of visual attention (Clarke & Tatler, 2014). Similar support for this essential role of oculomotor biases was found by Tatler and Vincent (2009).
The tendency to fixate the center of an image, the central bias, has been consistently found in studies in which participants view images (Parkhurst, Law, & Niebur, 2002; Tatler, 2007), isolated words (Vitu, Kapoula, Lancelin, & Lavigne, 2004), head-centered video recordings (Cristino & Baddeley, 2009; Foulsham, Walker, & Kingstone, 2011; ’t Hart et al., 2009), and movie clips (Dorr, Martinetz, Gegenfurtner, & Barth, 2010; Tseng, Carmi, Cameron, Munoz, & Itti, 2009), and also when walking around freely (Foulsham et al., 2011; ’t Hart et al., 2009). In image and video viewing, the bias may reflect a bias towards objects of interest. When taking photographs, people tend to direct the camera in such a way that objects of interest are located in the center of the image, known as the photographer’s bias (Reinagel & Zador, 1999). Viewers of these images may exploit this tendency and therefore focus on the center of the image, where objects of interest can often be found. In free navigation, the navigator may act as a ‘photographer’ and aim for objects of interest to be located in the center of the image (Dorr et al., 2010; Schumann et al., 2008), but such a viewing strategy more likely reflects a different mechanism, such as an attempt to keep the eye in mid-orbit, from where it is easier to move the eye quickly (Biguer, Jeannerod, & Prablanc, 1982; Pelz & Canosa, 2001; Tatler, 2007). During free navigation, the observer can make gaze shifts by combining body, head, and eye rotations, and the selection of how to shift one’s gaze may depend on the size of the required gaze shift, although large variability across observers has been reported (Fuller, 1992).
Establishing the central bias during day-to-day viewing has technological implications. Mobile eye tracking equipment may not be affordable to everyone. Moreover, researchers interested in gaze coordination of larger groups of research participants, for example in school settings or transport hubs, may be interested in lower-cost options to track each individual’s gaze. Glasses with small head-centered video cameras are now widely available at a low cost. These may come in the form of the Google Glass system, the Microsoft HoloLens or other products (e.g., spy-glasses). If we can establish which section of the head-centered video image provides the best heuristic to estimate an observer’s direction of gaze, and if we can determine how accurate such heuristics would be, equipment such as spy-glasses may provide a reasonable gaze tracking alternative.
Past studies of the central bias in day-to-day viewing have focused on navigation and its comparison to viewing the same images during head-fixed eye tracking (Foulsham et al., 2011; ’t Hart et al., 2009). During navigation, gaze is directed at objects at a relatively large distance, which could influence the relative contributions of head and body movements to gaze shifts. When viewing objects at a larger distance, gaze shifts across larger angular distances can be obtained by relatively small shifts in the orientation of the eyes, and therefore eye movements may be preferred over head movements. On the other hand, navigation involves large body and head movements, and eye movements may be made to compensate for these movements. Because of these possible influences of larger viewing distances, it is therefore important to establish whether the central bias persists when tasks are performed that involve viewing objects at a closer distance. Although studies have examined eye movements during other day-to-day tasks, such as tea making (Land, Mennie, & Rusted, 1999), driving (Land & Lee, 1994), and sports (Land & McLeod, 2000), none of these studies have reported the spatial distribution of gaze positions or have directly compared these spatial distributions across tasks. Fixed-head eye tracking using images on a computer screen has suggested that task modulates the pattern of eye movements (Castelhano, Mack, & Henderson, 2009; DeAngelus & Pelz, 2009; Tatler, Wade, Kwan, Findlay, & Velichkovsky, 2010; Yarbus, 1967), but it is unclear how these results extend to day-to-day tasks.
The present study therefore aims to directly compare the central bias across three different tasks, each involving interactions with objects at different distances (see Table 1 for estimates of the distances involved in each of the tasks). As a baseline, a navigation task was used, which allows comparison with earlier studies of the central bias in navigation (Foulsham et al., 2011; ’t Hart et al., 2009). This task involves looking over relatively large distances, with observers mostly looking where they are going. While the task may also involve shorter viewing distances, for example when opening a door or pressing an elevator button, the overall viewing distance is relatively large (Table 1). The navigation task was compared to tea making (a classical task in this context, Land et al., 1999), which involves interacting with objects at arm’s length or slightly further away (before grasping the objects), and to card sorting, where the objects involved (the decks of cards) are typically held near the body (at less than an arm’s length). By comparing the tasks, we contrast two hypotheses. If compensatory eye movements for head and body movements dictate the distribution of gaze positions, the largest spread of gaze position across the head-mounted video image is expected for navigation, followed by tea making, and card sorting. If, on the other hand, gaze shifts mostly reflect adjustments of the viewing angle towards objects at small and large distances, then because objects at large distances require smaller head or eye turns than objects at smaller distances, the largest deviations of gaze position are expected for card sorting, followed by tea making and navigation.

Methods

Participants

Forty-eight participants (undergraduate and postgraduate students from the University of Lincoln) took part in our study. After removing data from six participants, whose eye movements were not visible during two of the experimental tasks (tea making and card sorting; when looking down), data from forty-two participants (14 males, 28 females; aged 18 to 46 years, mean = 21.38, SD = 5.18) remained. All participants had normal or corrected-to-normal (with contact lenses) vision. All provided written consent for their participation in the study, which was approved by the local ethics committee.

Apparatus

Eye movements were recorded using a Tobii Pro Glasses 2 ultralight head-mounted eye tracker (Figure 1a). The Tobii 2 eye tracker consists of binocular headgear in the form of a pair of glasses and a pocket-sized recording unit. The inner part of the headgear frame contains four eye cameras, which track participants’ eye movements using corneal reflection and dark pupil signals. The headgear also contains a scene camera in the center of the frame, recording video images from the point of view of the participant. Data are stored and analyzed by the recording unit, which participants wore in a pocket or on their belt. The system samples scene views at 25 Hz and eye gaze data at 50 Hz. The scene camera video resolution is 1920 by 1080 pixels, corresponding to a field of view of 82 degrees horizontally and 52 degrees vertically. The wearer’s visual field through the glasses exceeds 160 degrees horizontally and 70 degrees vertically (limited only by frame obstruction). The system is calibrated using a single calibration point, recommended to be held at around 1.5 meters in front of the participant (approximately a fully stretched arm’s length). The software uses this information to build a 3D model of the two eyes, which is then used to estimate the gaze position at different viewing distances. The exact procedure is proprietary, and we can therefore not give further details about this process. A microphone recorded sound during eye tracking, but these data were not used for the analysis. Further processing of the data was conducted offline, using custom-built Perl and Matlab scripts to extract the relevant information from the data stored by the recording unit and to analyze and plot the results.
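For reference, and assuming (as a simplification not stated in the paper) an approximately linear mapping between scene-camera pixels and visual angle, gaze coordinates can be converted to rough degrees from the image center using the resolution and field of view quoted above. A minimal sketch:

```python
# Rough pixel-to-degrees conversion for the scene camera described above,
# assuming (simplification) a linear mapping across the field of view.
SCENE_W_PX, SCENE_H_PX = 1920, 1080   # scene camera resolution
FOV_H_DEG, FOV_V_DEG = 82.0, 52.0     # nominal field of view

def pixels_to_degrees(x_px, y_px):
    """Convert a gaze sample in scene-camera pixels to degrees from the image center."""
    x_deg = (x_px - SCENE_W_PX / 2) * FOV_H_DEG / SCENE_W_PX
    y_deg = (y_px - SCENE_H_PX / 2) * FOV_V_DEG / SCENE_H_PX
    return x_deg, y_deg

# Example: a sample 10% of the image width right of center is roughly 8.2 degrees.
print(pixels_to_degrees(1920 * 0.6, 1080 * 0.5))  # ~ (8.2, 0.0)
```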

Design

Participants each performed three tasks (navigation, tea making, card sorting). The order of these tasks was counterbalanced across participants.

Procedure

Participants were asked to complete three everyday tasks (navigation, tea making and card sorting). These tasks were chosen to reflect a range of viewing distances, with navigation corresponding to visually engaging with objects at a relatively large distance, tea making with objects at a medium distance, and card sorting with objects at a close distance. Prior to the start of the first task, participants were introduced to the eye tracker and the study protocol. The eye tracker was then fitted onto the participant’s head like a normal pair of glasses, until it felt comfortable. No further adjustment of the camera or field of view was possible, as the camera is fixed within the system. Following this, a one-point calibration procedure took place. This procedure required participants to fixate a marker, placed approximately 1.5 m away from them, without moving their heads. After calibration of the eye tracker, participants were given a general description of each task and were told in what sequence they would perform the tasks. For all tasks, participants were advised to behave as naturally as possible and were informed that there was no time limit to complete them. More detailed information (e.g., what items needed to be used) was given at the beginning of each task.
Navigation task. In the navigation task, we asked participants to walk a specific route inside a building on the University of Lincoln campus (Figure 1b and Figure 2a). We selected this route for two reasons. First, it included a variety of different environmental features (e.g., a staircase, corridors). Second, the route was less busy than the rest of the building, minimizing the risk of collisions with other people. When using the stairs, participants were only asked to climb the staircase; an elevator was used to move participants between floors when they had to go down. During the task, the experimenter followed participants at a close distance in order to provide instructions about the route (e.g., turn left, go through the door), to ensure that the eye tracking equipment was functional throughout the recording, and to ensure the safety of the participant. Participants were instructed to move at their own pace and to behave as naturally as possible (e.g., move their head freely to explore the environment).
Tea making task. The tea making task was inspired by the classic experiment of Land et al. (1999). The task took place inside a kitchen located in the same building as the navigation task. Before leading participants into the kitchen, we explained that the task required them to make a cup of tea for the experimenter. Upon arriving at the kitchen, participants were given additional instructions. They were told that to complete the task they needed to use specific items that were placed inside the kitchen cupboards (see Figure 1c and Figure 2b). The items were two differently colored jars containing tea (green jar, with ‘tea’ written on the outside) and sugar (red jar, with ‘sugar’ written on the outside), a teaspoon (placed at the front of one of the cupboard shelves), a mug with a specific pattern (displaying differently colored butterflies; participants were told that this was the experimenter’s own mug and that it should be used), and a small bottle of milk, placed inside the fridge. To enhance the sense of a real kitchen, we did not remove any other items that were present in the kitchen. The locations of the task-relevant items were kept the same for all participants. We reminded participants that they should behave as naturally as possible, that they were free to search all the cupboards inside the kitchen, and that there was no time limit for the task.
Card sorting task. In the card sorting task, participants were asked to sit at a table in the same kitchen used for the tea making task and were given a stack of playing cards (Figure 1d and Figure 2c). The stack consisted of two decks of cards, each with a different design on its back. One deck had a common playing card back design, whereas the second had a Star Wars back design. Prior to the arrival of the participant, the experimenter had shuffled the two decks into a single stack and placed it on the table inside the kitchen. Participants were instructed to sort the stack of cards into two decks, according to the theme on their backs. As in the other tasks, we reminded participants that there was no time limit.

Data analysis

After extracting the raw data from the recording unit, these data were converted to files containing the horizontal and vertical gaze positions within the video image using a custom-built Perl script. To locate the start and end frames and samples for each task (the recording of the three tasks was performed in a single session, without stopping the eye tracker), the video files were visually inspected. For an initial inspection of the data, we combined the gaze data with the video recordings using a custom-built Matlab script, showing a green dot at the location in the head-mounted video image where the gaze of the participant was directed for each of the three tasks (Figure 1e–g). We then filtered the data for participants for whom the gaze position was consistently outside the video image (and therefore recorded as missing data). These participants were excluded from the analysis (see the Participants section above). Subsequent analysis involved computing the distribution, average, and spread of the gaze position in the image for the three tasks. Next, the minimal size of the ellipse needed to contain a certain percentage of gaze points was determined. Finally, sequences of frames for which the gaze position was outside the center region were examined to determine in which situations this occurred. Statistics were computed using SPSS version 21 (F- and t-values, p-values, and partial eta squared values) and JASP version 0.7.5.6 (Cohen’s d values and BF10 Bayes factors).
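The original extraction and summary steps were implemented in custom Perl and Matlab scripts, which are not reproduced here. Purely as an illustration, the per-participant summary step might look like the following Python sketch; the file name and column names are assumptions, not the study’s actual data format.

```python
import pandas as pd

# Hypothetical input: one row per gaze sample, with pixel coordinates in the
# scene video and labels for participant and task (column names are assumed).
samples = pd.read_csv("gaze_samples.csv")  # columns: participant, task, x_px, y_px

# Express gaze as a percentage of image width/height, as in the figures.
samples["x_pct"] = 100 * samples["x_px"] / 1920
samples["y_pct"] = 100 * samples["y_px"] / 1080

# Drop missing samples (gaze outside the scene image is assumed to be NaN here).
valid = samples.dropna(subset=["x_pct", "y_pct"])

# Per-participant, per-task mean and standard deviation of gaze position,
# which would feed into group averages of the kind shown in Figure 4.
summary = valid.groupby(["participant", "task"])[["x_pct", "y_pct"]].agg(["mean", "std"])
print(summary.head())
```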

Results

Figure 3a provides 2D histograms of all the samples recorded in each of the three tasks (samples across all participants). The plots suggest that participants fixated mostly along the vertical midline of the video images, without any clear task influences.
To study the distribution of individual participants, Figure 3b plots the gaze distribution of each of the participants in the form of an ellipse around their average gaze position, with separate horizontal and vertical radii equal to the standard deviation for that participant. Although many ellipses overlap, there is some variation in the position and size of the ellipses across participants.
To quantify the bias in gaze position and the variability in gaze position, Figure 4 plots the average gaze position and standard deviations of gaze position across tasks. The data plotted here were obtained by computing the average gaze location in the image for each participant separately, as well as the standard deviation of these locations, which were then averaged across participants. Gaze locations are shown as a percentage of the width and height of the video image. Repeated measures ANOVAs and BF10 factors (indicating the evidence in support of the alternative hypothesis against the null hypothesis) showed no effect of task on the horizontal gaze position (F(1.7,68.7) = 0.90, p = 0.39, ηp² = 0.022, BF10 = 0.164) and no effect of task on the vertical gaze position (F(2,82) = 1.71, p = 0.19, ηp² = 0.040, BF10 = 0.32). Likewise, no effect of task was found on the horizontal (F(2,82) = 0.81, p = 0.45, ηp² = 0.019, BF10 = 0.16) and vertical standard deviations (F(1.4,55.8) = 0.86, p = 0.39, ηp² = 0.020, BF10 = 0.17). Moreover, effect sizes were relatively small (all ηp² < 0.1).
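The ANOVAs were run in SPSS and the Bayes factors in JASP. Purely as an illustration of the type of test involved, a one-way repeated-measures ANOVA on per-participant means could be set up as in the sketch below; the data are simulated, and no sphericity correction or Bayes factor is computed.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulate per-participant mean horizontal gaze positions (% of image width)
# for 42 participants and the three tasks; illustrative values only.
rng = np.random.default_rng(0)
tasks = ["navigation", "tea", "cards"]
means = pd.DataFrame(
    [{"participant": p, "task": t, "x_mean": rng.normal(47, 4)}
     for p in range(42) for t in tasks]
)

# One-way repeated-measures ANOVA with task as the within-subject factor.
res = AnovaRM(data=means, depvar="x_mean", subject="participant", within=["task"]).fit()
print(res)  # F(2, 82) for the task factor, uncorrected for sphericity
```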
One-sample t-tests showed that horizontal gaze positions were left of the vertical midline for the navigation task (t(41) = 5.06, p < 0.001, d = 0.78; medium to large effect size) and tea making task (t(41) = 3.26, p = 0.0023, d = 0.50; medium effect size), but not for the card sorting task (t(41) = 1.83, p = 0.075, d = 0.28; small effect size). The same tests for vertical gaze positions showed that gaze was directed above the horizontal midline for the tea making task (t(41) = 3.20, p = 0.0027; d = 0.34; small effect size), and the card sorting task (t(41) = 4.15, p < 0.001; d = 0.49; medium effect size), but not for the navigation task after adjusting the critical p-value using a Bonferroni correction (t(41) = 2.20, p = 0.033; d = 0.64; medium effect size). Paired samples t-tests showed that vertical standard deviations were larger than horizontal standard deviations for all three tasks (all p<0.01 after Bonferroni correction).
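As an illustration of the one-sample tests against the image midline (again with simulated rather than the study’s data), the sketch below computes t, p, and Cohen’s d for per-participant mean horizontal gaze positions; a Bonferroni correction then amounts to comparing p against 0.05 divided by the number of tests in the family.

```python
import numpy as np
from scipy import stats

# Simulated per-participant mean horizontal gaze positions (% of image width).
x_means = np.random.default_rng(1).normal(47, 4, size=42)

# One-sample t-test against the vertical midline (50% of image width).
t, p = stats.ttest_1samp(x_means, popmean=50.0)
d = (x_means.mean() - 50.0) / x_means.std(ddof=1)  # Cohen's d for a one-sample test
print(f"t(41) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```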

Data loss

A possible cause of the central bias that needs to be examined is that recording of eye gaze may be worse along the edges of the head-mounted video image. This cause cannot be directly investigated (as the data are missing), but we can determine whether data loss varies across tasks, and where the observer was looking just before data loss occurred (which may give an indirect indication of the possible cause). Figure 5a shows that missing values occurred for around 17% of the samples. While missing values tended to be slightly less frequent for the navigation task, the difference between the three tasks did not reach statistical significance, and a Bayesian analysis suggested that the evidence for the null hypothesis (no task effect) was around 10 times as strong as the evidence for the alternative hypothesis (F(2,82) = 0.32, p = 0.73, ηp² = 0.008, BF10 = 0.099).
The spatial distribution of the samples just before missing values is provided in Figure 5. It shows that for all three tasks, just before a missing value, the observer most likely fixated the lower edge of the head-mounted image. Missing values therefore appear to be mostly due to participants looking down with their eyes rather than turning their head downwards. Other missing values are preceded by samples along the vertical midline. These may be due to blinks, but this would need to be investigated further in mobile eye tracking systems that explicitly code for blinks in the data. Overall, the distribution of the samples preceding missing data does not suggest that the central bias in our data is due to the system failing to measure eye gaze at the edge of the video image (there is no ring of pre-missing-value samples away from the edge of the video image).
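A simple way to locate the samples that immediately precede data loss, as used for the distributions in Figure 5b, is sketched below; the column names and the short toy trace are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

# Toy gaze trace for one participant/task, as % of image width/height.
trace = pd.DataFrame({
    "x_pct": [48, 49, np.nan, np.nan, 30, 12, np.nan, 51],
    "y_pct": [50, 10, np.nan, np.nan, 45,  5, np.nan, 52],
})

missing = trace["x_pct"].isna() | trace["y_pct"].isna()
# A sample is "pre-missing" if it is valid and the next sample is missing.
pre_missing = (~missing) & missing.shift(-1, fill_value=False)
print(trace[pre_missing])  # gaze positions just before data loss
```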

Using the bias as a heuristic for gaze position

The average data show that participants systematically fixate near the center of the head-centered image, and that this tendency was unaffected by the task participants performed. To establish whether these findings can be used to estimate where in the head-centered image participants fixate without eye tracker information, we compute two measures. First, we determine what percentage of the eye tracking samples is contained within ellipses of various sizes centered on the participants’ average gaze position. With this measure, it is possible to determine what area of the video image from the head-mounted camera needs to be considered to capture a certain percentage of gaze points. The second measure considers the histogram of distances between the estimated gaze position (on the basis of the central bias) and the actual gaze position. Three different strategies will be considered, differing in the amount of information used from the present findings.
Individual ellipses. A first strategy would be to first record a sample of a participant’s eye movements with a mobile eye tracker, and then remove the mobile eye tracker and fit the participant with the head-mounted video camera (e.g., spy glasses). A slight complication of this method would be that the direction of the video camera in both systems would need to be identical (i.e., how much it points downwards or upwards), but for the sake of the present analysis, this is assumed to be the case. For such an approach, one eye tracker would need to be available. For each participant a ‘calibration’ is performed, in which the central bias for that participant is estimated using the mobile eye tracker. After this calibration phase, participants are then entered into the group testing phase of the study and each asked to wear a head-centered video camera (which can be spyglasses instead of mobile eye trackers). The central bias measured with the mobile eye tracker for each participant can then be used to estimate where people are looking on the basis of the head-centered video images.
In terms of data analysis, the ellipse estimating where the participant fixates in the video image is based on that participant’s mean and standard deviation of the gaze position acquired with the eye tracker during the ‘calibration’ stage. To evaluate how well such an approach would work, Figure 6a plots the number of samples contained within ellipses for each participant as a function of the area occupied by the ellipse (as a percentage of the video image). This is done for a range of ellipse sizes from near 0% of the image to near 50% of the image. During this process, we used the observed horizontal and vertical standard deviations of gaze positions for each participant to determine the shape of the ellipse, and multiplied these standard deviations by an increasing factor to increase the size of the ellipse. The results suggest that to capture 80% of participants’ gaze locations, a surface of around 20% of the image is needed, and a surface of around 30% of the image to capture 90% of the samples. Across the three tasks, we find that similarly sized ellipses are needed to capture the same number of samples.
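A minimal sketch of this computation, assuming gaze positions expressed as percentages of the image width and height (and simulated data rather than the study’s recordings), is given below; the ellipse area is reported as a percentage of the image area, as in Figure 6a.

```python
import numpy as np

def fraction_inside_ellipse(x, y, cx, cy, sx, sy, k):
    """Fraction of gaze samples inside an ellipse centered on (cx, cy) with
    radii k*sx and k*sy (the participant's gaze SDs scaled by a factor k)."""
    inside = ((x - cx) / (k * sx)) ** 2 + ((y - cy) / (k * sy)) ** 2 <= 1.0
    return inside.mean()

# Simulated gaze positions (% of image width/height) for one participant.
rng = np.random.default_rng(2)
x = rng.normal(47, 8, 5000)
y = rng.normal(53, 12, 5000)
cx, cy, sx, sy = x.mean(), y.mean(), x.std(), y.std()

for k in (1.0, 1.5, 2.0, 2.5):
    area_pct = 100 * np.pi * (k * sx) * (k * sy) / (100 * 100)  # ellipse area as % of image
    captured = 100 * fraction_inside_ellipse(x, y, cx, cy, sx, sy, k)
    print(f"k={k}: area {area_pct:.0f}% of image, {captured:.0f}% of samples contained")
```

The same function covers the other two strategies by swapping in the group-average center and standard deviations, or the image center with equal standard deviations.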
Figure 6d provides another view of how good estimates are on the basis of people’s individual central biases. In this plot, histograms are provided of the distance between the actual gaze position and the center of the individual participants’ ellipses (used as estimates of where they look on the basis of the head-mounted video image only). This shows that most samples lie at a distance of around 20% of the size of the video image (horizontal and vertical deviations weighted equally), with only few samples beyond 40% of the size of the image. For the card sorting task, there appear to be more observations at smaller distances from the actual position, but the difference with the other two tasks is small.
Use data from the present study. A second strategy would be to use the present data to compute an average gaze location across tasks and participants (as shown in Figure 4), and to use these data as an estimate of where in the head-centered image participants fixate. This strategy would suit labs without an eye tracker, but with head-mounted video cameras, and takes advantage of the present results. As with the first strategy, the success of the approach depends on whether the angle of the head-mounted video camera (the amount by which it looks down or up from the participant’s head) is similar across configurations. For the present analysis, we assume it is. Figure 6b provides an estimate of how large the area of the ellipse based on these estimates needs to be to contain a certain percentage of gaze points. The estimates differ slightly across tasks (with navigation needing the smallest ellipses), but overall, to capture around 80% of the samples, a surface area of around 30% is needed with this second strategy. To capture 90% of the samples, around 40% of the area of the image is needed. In terms of distances between the actual and estimated gaze positions, using the average bias across participants increases the proportion of samples at longer distances from the actual position compared with using individual biases (Figure 6e).
Assume central fixation. A final strategy is to assume that participants look at the center of the image and that their horizontal and vertical standard deviations of gaze points are identical. This strategy does not require an eye tracker, and assumes that people tend to look at the center of the head-mounted image. Figure 6c estimates the size of the surface area of the image needed to capture a certain percentage of gaze points. To capture 80% of the gaze points, an area of around 36% of the image is needed, whereas to capture 90% of the samples, around 50% of the image is needed. Smaller ellipses are needed for the navigation task than for tea making or card sorting, possibly because vertical gaze positions were slightly closer to the midline for this task (Figure 4). The distances to the actual gaze position (Figure 6f) do not differ much from those of the second method considered, using the central bias observed in the present study (Figure 6e).

Periods away from the central bias

One approach to improve eye tracking on the basis of the head-centered video image alone, is to determine when participants fixate outside the center region, and to examine whether any common aspects can be found for periods of fixation outside the center region. Possibly, such periods can be detected on the basis of visual properties of the video image (e.g., motion blur), allowing for these sections of the video to be removed so that they do not contaminate the analysis.
To examine what happens during periods of fixation outside the central region, we extracted sequences of at least 5 consecutive samples (83 milliseconds) in which gaze fell outside an ellipse centered on a participant’s average gaze position, with a width and height of 1.25 times the corresponding standard deviations. Visual inspection of the extracted frames suggested that viewing outside the central region occurred mostly (1) when interacting with the experimenter, (2) when moving one’s head or body (resulting in image blur), and (3) when inside a small space (e.g., a lift; see Figure 7).
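Extracting such sequences amounts to finding runs of consecutive out-of-ellipse samples of a minimum length. A sketch of this step, on simulated gaze positions, is shown below.

```python
import numpy as np

def runs_outside(outside, min_len=5):
    """Start/end index pairs of runs of at least `min_len` consecutive True
    values, marking stretches of gaze outside the central ellipse."""
    runs, start = [], None
    for i, flag in enumerate(outside):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(outside) - start >= min_len:
        runs.append((start, len(outside)))
    return runs

# Mark samples outside an ellipse of 1.25 SDs around the mean gaze position;
# the matching video frames could then be pulled out for visual inspection.
rng = np.random.default_rng(3)
x, y = rng.normal(47, 8, 200), rng.normal(53, 12, 200)
outside = ((x - x.mean()) / (1.25 * x.std())) ** 2 + \
          ((y - y.mean()) / (1.25 * y.std())) ** 2 > 1.0
print(runs_outside(outside))
```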

Discussion

To examine how task influences the central bias in gaze behavior in day-to-day viewing, we recorded eye movements while participants performed three different tasks (navigation, tea making, card sorting) wearing a mobile eye tracker. We chose these tasks to reflect a range of viewing distances (Table 1) to examine whether the central bias is influenced by this factor. Analysis of the data showed a strong bias towards (slightly left and above) the center of the head-centered video image, which was independent of the task participants performed.
In our study, we found a bias towards gaze locations along the vertical midline, with systematically larger vertical variability in gaze locations than horizontal variability. These results contrast with earlier observations when tracking eye movements towards static and dynamic images using head-fixed eye tracking. In this latter situation, the distribution of gaze points tends to be along the horizontal midline (e.g., Cristino & Baddeley, 2009; ’t Hart et al., 2009), although a wide horizontal distribution is not systematically found (e.g., Tatler, 2007). The larger vertical variability also contrasts with some findings in mobile eye tracking (Kretch & Adolph, 2015; ’t Hart et al., 2009), although the bias towards the horizontal midline was weaker for outdoor navigation (Foulsham et al., 2011) and in infants (Kretch & Adolph, 2015).
Participants in our study tended to look slightly left of the vertical midline. Such a leftward bias was also found by Foulsham et al. (2011) while walking (mobile eye tracking), but not while watching (with a stabilized-head eye tracker). Other studies, however, have reported leftward biases during stabilized-head eye tracking. For example, when viewing fractals, observers’ first saccade tended to be directed towards the left (Foulsham & Kingstone, 2010). This tendency to make a first leftward saccade was also found for viewing natural scenes (Foulsham, Gray, Nasiopoulos, & Kingstone, 2013) and in face perception (Butler et al., 2005). Leftward biases are also found in the distribution of fixations, for example in face perception (Guo, Meints, Hall, Hall, & Mills, 2009; Guo, Smith, Powell, & Nicholls, 2012; Hermens & Zdravković, 2015) and visual search (Durgin, Doyle, & Egan, 2008). Leftward biases for faces have been explained from a right-hemispheric dominance in processing faces, but observations of leftward biases in other tasks (navigation, tea making, scene perception) suggest that the leftward bias in eye movements may have a different cause. Leftward biases in eye movements may relate to reading direction (e.g., Chokron & De Agostini, 1995; Spalek & Hammad, 2005), and it would therefore be interesting to investigate gaze biases during day-to-day tasks in participants with a dominant reading direction other than left-to-right (e.g., in Hebrew, or Asian writing systems).
Previous research has suggested that observers tend to fixate more towards the top of the image for interiors and more towards the bottom of images for urban scenes (Parkhurst et al., 2002). Our data suggest that for indoor navigation, like in outdoor navigation (Foulsham et al., 2011), there is a bias towards the top half of the image. Vertical biases in mobile eye tracking, however, need to be interpreted with care, as the recorded gaze position in the image depends on how the scene camera is oriented with respect to the observer’s head, and may therefore vary with the equipment used. To examine the extent to which the vertical bias is due to the equipment used, future studies should examine the bias for identical tasks with the different eye tracking systems, and methods should be developed to align the recorded video images of different eye tracking systems. Presently, we can only compare vertical biases (e.g., across different tasks) within the same participant measured with the same system. Differences in the orientation of the head-mounted camera can also explain why the size of the area of the video image containing 90% of all gaze points was smaller when participant specific regions were used, compared to when the region was based on data from all participants (as the downward angle of the scene camera may vary across participants).
Previous work has suggested a stronger central bias with more object interactions (Bambach, Crandall, & Yu, 2013). Our tasks varied in the number of such interactions. In navigation, object interactions were infrequent and mostly involved opening doors, holding hand rails, and pressing lift buttons (an estimated 8 object interactions per participant, based on a random subset of 10 of the participants). More object interactions took place during tea making, involving opening cupboard doors and handling the objects involved in making the tea (an estimated 34 object interactions per participant). Finally, card sorting involved continuous handling of objects (the cards). Because we did not find a difference in the central bias between tasks, our data suggest no role for the number of object interactions in the central bias, but it is unclear why we reach a different conclusion from Bambach et al. (2013) on this matter. It should be noted that our study was not specifically designed to investigate this matter, and future studies should therefore address this issue further with tasks specifically designed to compare the amount of object handling while keeping all other conditions constant.
In our study we relied on one system (the Tobii 2 glasses), which uses binocular recording, 3D modeling of the human eyes, and a single point calibration method to estimate where observers are looking in the head-mounted image. How this particular method compares with methods applied by other systems, which may use monocular eye recordings, offline calibration methods, flexible orienting of the head-mounted camera, and other methods of mounting the system on observers’ heads, is unclear. Data from our navigation task generally agree with those obtained by Foulsham et al. (2011) using a different system, which suggests that the biases that we observe are linked to human viewing strategies rather than to how the data are measured, but future studies should investigate this matter in more detail.
It may be tempting to interpret the present results and those from earlier mobile eye tracking studies (Foulsham et al., 2011; Kretch & Adolph, 2015; ’t Hart et al., 2009) as evidence for a photographer’s bias as the cause of the central bias, as, for example, put forward by Tseng et al. (2009) (however, see Tatler, 2007). In this bias, objects of interest are placed in the center of the image by the photographer, which could explain why eye tracking with these images shows a bias to the center. During day-to-day tasks, the observer may adopt a similar strategy, and place objects of interest in the middle of the head-centered image (Dorr et al., 2010; Schumann et al., 2008). Such an automatic ‘photographer’s bias’ (turning one’s head towards objects of interest) may explain why the central bias is found both when freely navigating (head movements allowed) and when watching videos of someone else navigating (no head movements allowed; Foulsham et al., 2011; ’t Hart et al., 2009). The photographer’s bias in day-to-day viewing, however, is likely to have a different cause than the aesthetic considerations that may underlie the bias for images. A likely candidate is the tendency of humans to keep the eye centered in its orbit (one of the causes of the central bias suggested by Tatler, 2007), where the position of the eye can be best estimated (Biguer et al., 1982; Pelz & Canosa, 2001), and from which the eye can most easily rotate in different directions. In such an interpretation, what the bias shows is that people move their head when orienting towards objects of interest, rather than shifting their eye gaze within the head.
Our results suggest that the central bias provides a reasonable heuristic for estimating where people look on the basis of a head-centered video image, in agreement with earlier suggestions that the bias provides a good baseline for predicting where observers fixate in static images (Clarke & Tatler, 2014; Tatler & Vincent, 2009). Our analysis shows that when the bias of a particular participant is known, 90% of an observer’s gaze samples are contained in a window of around 30% of the image. While this is still a relatively large section of the image, which may contain several objects (e.g., it will not be possible to tell which word a person is fixating while reading a text, but it will probably be clear whether the observer is looking at the book or at the wall), it demonstrates that a global sense of observers’ eye movements can be obtained from a head-mounted video image alone.
Our analysis also demonstrated that there were certain situations in which observers tend to fixate outside the central window (interacting with people, during head and body movements, inside narrow spaces). For applications in which head-mounted cameras are used as a method to estimate an observer’s gaze, it would be best to exclude these intervals, because it is likely that the observer’s gaze cannot be guessed from the central bias in these situations. Events such as these may be detected, for example, by tracking people’s head movements (with technology such as that used in mobile phones and tablets), by applying computer algorithms to detect motion blur in video images (Tong, Li, Zhang, & Zhang, 2004), by using binocular information from eye tracking systems to estimate the average viewing distance (to detect small spaces), or by using software to automatically detect faces in video images (Hsu, Abdel-Mottaleb, & Jain, 2002; Jesorsky, Kirchberg, & Frischholz, 2001; Yang & Huang, 1994) to detect social interactions. Further improvements in estimating the direction of the observer’s gaze in head-mounted video recordings may be obtained by applying saliency models (e.g., Itti & Koch, 2001), but applying such models will be computationally expensive, and may not always be a feasible option. At present, we can only speculate why participants deviate from the central bias in the observed circumstances. Possibly, direct looks at the relatively unfamiliar experimenter were avoided (Foulsham, Walker, & Kingstone, 2009; Laidlaw, Foulsham, Kuhn, & Kingstone, 2011). Possibly, participants avoid turning their head in small spaces. And possibly, the eyes move before the head follows in day-to-day tasks. Such explanations need to be addressed in more specifically designed future studies.
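As an example of the kind of automatic filtering mentioned above, a much simpler and widely used proxy for the wavelet-based blur detector of Tong et al. (2004) is the variance of the Laplacian of a video frame: low values suggest a blurred frame (e.g., during fast head movements) that could be excluded before applying the central bias heuristic. The sketch below illustrates this substitute method; the threshold and file name are arbitrary placeholders.

```python
import cv2

def frame_is_blurred(frame_bgr, threshold=100.0):
    """Flag a frame as blurred when the variance of its Laplacian is low.
    This is a simple proxy, not the wavelet method used by Tong et al. (2004)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

# Usage sketch: check the first frame of a scene video (path is hypothetical).
cap = cv2.VideoCapture("scene_video.mp4")
ok, frame = cap.read()
if ok:
    print("blurred" if frame_is_blurred(frame) else "sharp")
cap.release()
```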
Our results have direct implications for emerging technologies that allow for the recording and streaming of video from the user’s point of view. A possible application would be to provide direct information about fixated objects via visual feedback to the person wearing the technology (augmented reality, e.g., Google Glass or Microsoft’s HoloLens). For example, the device could provide a restaurant’s menu the moment a user walks past and looks at the outside of the restaurant, or a patient’s medical records when a physician turns their head towards the patient. The central bias may be an important first step in the development of such technology.

References

  1. Bambach, S., D. J. Crandall, and C. Yu. 2013. Understanding embodied visual attention in child-parent interaction. In IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL). IEEE: pp. 1–6.
  2. Biguer, B., M. Jeannerod, and C. Prablanc. 1982. The coordination of eye, head, and arm movements during reaching at a single visual target. Experimental Brain Research 46, 2: 301–304.
  3. Butler, S., I. Gilchrist, D. Burt, D. Perrett, E. Jones, and M. Harvey. 2005. Are the perceptual biases found in chimeric face processing reflected in eye-movement patterns? Neuropsychologia 43, 1: 52–59.
  4. Castelhano, M. S., M. L. Mack, and J. M. Henderson. 2009. Viewing task influences eye movement control during active scene perception. Journal of Vision 9, 3: 6, 1–15.
  5. Chokron, S., and M. De Agostini. 1995. Reading habits and line bisection: A developmental approach. Cognitive Brain Research 3, 1: 51–58.
  6. Clarke, A. D., and B. W. Tatler. 2014. Deriving an appropriate baseline for describing fixation behaviour. Vision Research 102: 41–51.
  7. Cristino, F., and R. Baddeley. 2009. The nature of the visual representations involved in eye movements when walking down the street. Visual Cognition 17, 6–7: 880–903.
  8. DeAngelus, M., and J. B. Pelz. 2009. Top-down control of eye movements: Yarbus revisited. Visual Cognition 17, 6–7: 790–811.
  9. Dorr, M., T. Martinetz, K. R. Gegenfurtner, and E. Barth. 2010. Variability of eye movements when viewing dynamic natural scenes. Journal of Vision 10, 10: 28.
  10. Durgin, F. H., E. Doyle, and L. Egan. 2008. Upper-left gaze bias reveals competing search strategies in a reverse Stroop task. Acta Psychologica 127, 2: 428–448.
  11. Foulsham, T., A. Gray, E. Nasiopoulos, and A. Kingstone. 2013. Leftward biases in picture scanning and line bisection: A gaze-contingent window study. Vision Research 78: 14–25.
  12. Foulsham, T., and A. Kingstone. 2010. Asymmetries in the direction of saccades during perception of scenes and fractals: Effects of image type and image features. Vision Research 50, 8: 779–795.
  13. Foulsham, T., E. Walker, and A. Kingstone. 2009. Gaze behaviour in the natural environment: Eye movements in video versus the real world. Journal of Vision 9, 8: 446.
  14. Foulsham, T., E. Walker, and A. Kingstone. 2011. The where, what and when of gaze allocation in the lab and the natural environment. Vision Research 51, 17: 1920–1931.
  15. Fuller, J. H. 1992. Head movement propensity. Experimental Brain Research 92, 1: 152–164.
  16. Guo, K., K. Meints, C. Hall, S. Hall, and D. Mills. 2009. Left gaze bias in humans, rhesus monkeys and domestic dogs. Animal Cognition 12, 3: 409–418.
  17. Guo, K., C. Smith, K. Powell, and K. Nicholls. 2012. Consistent left gaze bias in processing different facial cues. Psychological Research 76, 3: 263–269.
  18. Hermens, F., and S. Zdravković. 2015. Information extraction from shadowed regions in images: An eye movement study. Vision Research 113: 87–96.
  19. Hsu, R.-L., M. Abdel-Mottaleb, and A. K. Jain. 2002. Face detection in color images. IEEE Transactions on Pattern Analysis and Machine Intelligence 24, 5: 696–706.
  20. Itti, L., and C. Koch. 2001. Computational modelling of visual attention. Nature Reviews Neuroscience 2, 3: 194–203.
  21. Itti, L., C. Koch, and E. Niebur. 1998. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (11): 1254–1259.
  22. Jesorsky, O., K. J. Kirchberg, and R. W. Frischholz. 2001. Robust face detection using the Hausdorff distance. International Conference on Audio- and Video-Based Biometric Person Authentication, 90–95.
  23. Kretch, K. S., and K. E. Adolph. 2015. Active vision in passive locomotion: Real-world free viewing in infants and adults. Developmental Science 18, 5: 736–750.
  24. Laidlaw, K. E., T. Foulsham, G. Kuhn, and A. Kingstone. 2011. Potential social interactions are important to social attention. Proceedings of the National Academy of Sciences 108, 14: 5548–5553.
  25. Land, M. F., and D. N. Lee. 1994. Where do we look when we steer. Nature, 742–744.
  26. Land, M. F., and P. McLeod. 2000. From eye movements to actions: How batsmen hit the ball. Nature Neuroscience 3, 12: 1340–1345.
  27. Land, M. F., N. Mennie, and J. Rusted. 1999. Eye movements and the roles of vision in activities of daily living: Making a cup of tea. Perception 28, 4: 1311–1328.
  28. Parkhurst, D., K. Law, and E. Niebur. 2002. Modeling the role of salience in the allocation of overt visual attention. Vision Research 42, 1: 107–123.
  29. Pelz, J. B., and R. Canosa. 2001. Oculomotor behavior and perceptual strategies in complex tasks. Vision Research 41, 25: 3587–3596.
  30. Reinagel, P., and A. M. Zador. 1999. Natural scene statistics at the centre of gaze. Network: Computation in Neural Systems 10, 4: 341–350.
  31. Schumann, F., W. Einhäuser, J. Vockeroth, K. Bartl, E. Schneider, and P. Koenig. 2008. Salient features in gaze-aligned recordings of human visual input during free exploration of natural environments. Journal of Vision 8, 12: 1–17.
  32. Spalek, T. M., and S. Hammad. 2005. The left-to-right bias in inhibition of return is due to the direction of reading. Psychological Science 16, 1: 15–18.
  33. ’t Hart, B. M., J. Vockeroth, F. Schumann, K. Bartl, E. Schneider, P. Koenig, et al. 2009. Gaze allocation in natural stimuli: Comparing free exploration to head-fixed viewing conditions. Visual Cognition 17, 6–7: 1132–1158.
  34. Tatler, B. W. 2007. The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of Vision 7, 4: 1–17.
  35. Tatler, B. W., and B. T. Vincent. 2009. The prominence of behavioural biases in eye guidance. Visual Cognition 17, 6–7: 1029–1054.
  36. Tatler, B. W., N. J. Wade, H. Kwan, J. M. Findlay, and B. M. Velichkovsky. 2010. Yarbus, eye movements, and vision. i-Perception 1, 1: 7–27.
  37. Tong, H., M. Li, H. Zhang, and C. Zhang. 2004. Blur detection for digital images using wavelet transform. In 2004 IEEE International Conference on Multimedia and Expo (ICME ’04). IEEE: Vol. 1, pp. 17–20.
  38. Tseng, P.-H., R. Carmi, I. G. Cameron, D. P. Munoz, and L. Itti. 2009. Quantifying center bias of observers in free viewing of dynamic natural scenes. Journal of Vision 9, 7: 4.
  39. Vitu, F., Z. Kapoula, D. Lancelin, and F. Lavigne. 2004. Eye movements in reading isolated words: Evidence for strong biases towards the center of the screen. Vision Research 44, 3: 321–338.
  40. Yang, G., and T. S. Huang. 1994. Human face detection in a complex background. Pattern Recognition 27, 1: 53–63.
  41. Yarbus, A. L. 1967. Eye Movements During Perception of Complex Objects. Springer.
Figure 1. a) Illustration of the eye tracking equipment used (headgear and recording unit). b) Example of one of the corridors encountered by the participants during the navigation task. c) Example of the contents of one of the cupboards in the tea making task. Note the target cup in the middle of the cupboard. d) The decks of cards used for the card sorting task. These were shuffled into one stack before the start of the experiment. e) Example showing the extraction of gaze location for the navigation task (red arrows show the horizontal and vertical distance towards the top left of the image, providing the data analyzed). f) Example of a gaze location during the tea making task. g) Example of a gaze location during the card sorting task.
Figure 2. a) Illustration of the path taken by participants during the indoor navigation task. Participants starting with the navigation task began at the lab on the second floor of the building, went up the stairs to the third floor, went across this floor, took the lift down to the second floor, and then followed a different path towards the kitchen. Participants starting with one of the other two tasks (which took place in the kitchen) started in the kitchen and went up the stairs from there. b) Illustration of the layout of the kitchen. The kettle was placed on a counter, with one set of cupboards below and another set above. All items relevant to the task were placed in the cupboards above the counter (except for the milk, which was in the fridge), at the places indicated in the illustration. c) Illustration of the card sorting task. Participants were seated at the table in the kitchen and formed two piles of cards from the large stack in their hands.
Figure 3. a) 2D histograms of gaze samples across all participants, plotted separately for the three tasks. b) Ellipses showing the gaze bias for individual participants (ellipses showing one standard deviation away from the mean for each participant). Numbers along the horizontal and vertical axes indicate where in the head-centered video image the recorded gaze position was located. A value of, for example, 30% indicates that the recorded gaze was at a distance of 30% of the width of the image, measured from the left of the image (horizontal axis), or at a distance of 30% of the height of the image, measured from the bottom of the image (vertical axis).
Figure 4. Average gaze location and standard deviations across the three tasks (plotted such that horizontal averages above 50% indicate a gaze bias towards the right and vertical averages above 50% indicate a gaze bias towards the top section of the video image). Separate data plots show the average location (as a percentage of the width or height of the video image) and standard deviation of the locations (first computed per participant and then averaged). The red horizontal lines indicate the center of the image. Error bars show the standard error of the mean across participants.
Figure 5. a) Percentages of missing values per task. b) Distributions of the samples just before missing observations, providing an indication of the reason for the missing values.
Figure 6. Estimates of the percentage of gaze points contained within ellipses of various sizes modeling the central bias. Along the horizontal axis the size of the ellipse is shown (as a percentage of the total surface size of the video image; ‘surface area’), whereas on the vertical axis the percentage of gaze points contained in the ellipse is indicated (‘samples contained’). a) Estimates based on ellipses estimated for each participant individually using a baseline task with a mobile eye tracker. b) Estimates based on ellipses derived from the present data. c) Estimates based on assuming an isotropic bias (equal standard deviations in both horizontal and vertical directions) towards the center of the image. Lines in the graphs connect the individual data points. d, e, and f) Histograms of the distance between the actual gaze position and the estimated gaze position on the basis of the various bias measures (as in a, b and c).
Figure 7. Examples of frames where participants looked away from the central location. Visual inspection suggested that such frames mostly involved (1) interacting with the experimenter, (2) gaze during head and body movements (visible as blur in the image), and (3) being inside small spaces (like inside a lift).
Table 1. Approximate distances to the participants’ bodies for the different tasks. These measures will depend on how exactly the participants behaved (e.g., whether they bent forward while sorting the cards) and therefore only provide an indication of the typical distances involved in the different tasks.
