Eye direction can convey a range of useful information—the location of a threat, interest of a mate, or a source of food [1]. As such, it is not surprising that humans follow where others are looking and orient their attention to gazed-at locations [1]. In the laboratory, gaze following is often measured using a computerized task, in which participants are presented with multiple trials featuring a central face cue looking in a particular direction (e.g., to the left or right) and are asked to respond to targets appearing at the gazed-at or not-gazed-at locations (e.g., [4]). A large number of studies conducted using this so-called ‘gaze cuing’ procedure show robust overall performance facilitation on gazed-at trials, as indexed by faster response times for targets appearing at gazed-at locations relative to those appearing at not-gazed-at locations, even when gaze direction remains fully task-irrelevant and does not convey any meaningful information about the target (e.g., [5]). This result has typically been interpreted as indicating that participants orient their attention in response to gaze cues consistently and spontaneously (e.g., [4]).
In contrast, observational studies suggest that gaze following during real-world interactions does not occur on every instance in which a deviated gaze cue is available. Recent research measuring gaze following during natural behaviors shows that despite multiple opportunities, humans follow gaze cues in only a small fraction of instances (e.g., [7]). For example, Hayward et al. [8] measured the proportion of gaze following during a two-minute face-to-face conversation and found that participants followed the directional gaze cues displayed by the conversational partner in about 30% of opportunities, which is consistent with estimates of gaze following reported by similar prior work (see [7]).
While this relative infrequency of gaze following has been theorized to reflect the modulating influence of contextual and social factors available during complex real-world interactions (e.g., [7]), it is also possible that a similar frequency of occurrence may be observed during laboratory investigations as well. However, examinations of individual instances of attentional orienting during laboratory tasks have not yet been performed, because researchers typically analyze participants’ average performance across all gazed-at and not-gazed-at trials (e.g., [2]). This stands in contrast to naturalistic procedures, in which researchers analyze the proportion of discrete gaze-following behaviors displayed by the participant relative to all available opportunities. As such, naturalistic studies reflect a proportion of gaze-following behavior relative to all gaze-cue instances, while laboratory data reflect average performance across all gazed-at and not-gazed-at trials. At present, the typical analysis of laboratory data does not allow one to establish whether the overall gaze-cuing effect reflects consistent gaze orienting throughout the task. Nor does it reveal whether this overall orienting effect reflects a pattern in which attention is consistently oriented toward the target on the majority of gazed-at trials and away from the target on the majority of not-gazed-at trials.
To address this issue, in the present investigation we examined manual performance from a typical gaze-cuing task by analyzing the proportions of benefits and costs. We reasoned that the proportion of responses with reaction times falling outside of 1 standard deviation (SD) of the performance yielded by corresponding neutral trials (i.e., benefits and costs) would represent a good approximation of the frequency of individual attentional orienting behaviors [4]. This follows from existing work showing that when attention is sufficiently biased, both benefits and costs emerge in manual performance, with corresponding sensory gains found in early neural processing of the targets (e.g., [16]). As such, this provides a way to index the frequency of individual instances of attentional orienting during the typical cuing task by assessing the proportion of trials on which the attentional cue elicited a sufficiently large change in performance to be considered an attentional benefit or cost. It is worth noting that this examination of responses does not represent a novel statistical procedure, but rather an additional way of summarizing and analyzing response time (RT) data. As such, all conventions governing the analyses employed on average RT data and those that we perform here on proportions of benefits and costs apply equally to both dependent measures.
Participants were asked to complete a typical gaze-cuing task in which the central gaze cue indicated the correct target location randomly, on half of the trials. To measure any potentially unique effects of this social cue, we also included an additional condition in which a nonsocial arrow served as the central cuing stimulus [5]. Figure 1 shows the stimuli and an example task sequence. In the gaze condition, a schematic face with left- or right-deviated gaze served as the central cue (Figure 1A). In the arrow condition, an arrow pointing left or right served as the central cue (Figure 1B). To measure benefits and costs, the neutral condition in Experiment 1 reflected performance in response to nondirectional cues (Figure 1A,B). In Experiment 2, the neutral condition reflected average performance in response to targets occurring along the not-cued dimension (i.e., up and down). Trials on which response time (RT) fell more than 1 standard deviation (SD) below the average RT of corresponding neutral trials were labeled as benefits, while those that fell more than 1 SD above it were labeled as costs.
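The labeling rule above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' analysis code: the function name and the RT values are hypothetical, and only the 1 SD criterion relative to the neutral-trial mean comes from the text.

```python
from statistics import mean, stdev

def label_trials(trial_rts, neutral_rts):
    """Label each cued-trial RT as a benefit, a cost, or neither,
    relative to the mean and sample SD of the neutral-trial RTs."""
    m, sd = mean(neutral_rts), stdev(neutral_rts)
    labels = []
    for rt in trial_rts:
        if rt < m - sd:
            labels.append("benefit")   # faster than neutral by more than 1 SD
        elif rt > m + sd:
            labels.append("cost")      # slower than neutral by more than 1 SD
        else:
            labels.append("neither")
    return labels

# Hypothetical RTs in ms (not from the study): neutral mean = 400, sample SD ~ 15.8
neutral = [380, 390, 400, 410, 420]
valid = [350, 395, 450, 370, 405]
print(label_trials(valid, neutral))  # ['benefit', 'neither', 'cost', 'benefit', 'neither']
```

The per-participant proportions of benefits and costs are then simply the counts of each label divided by the number of valid (or invalid) trials.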
If the typical gaze-cuing effect reflects consistent orienting of attention throughout the task, we expected to observe benefits on at least half of valid trials and costs on at least half of invalid trials. We also expected benefits to occur on a greater proportion of valid than invalid trials, and costs on a greater proportion of invalid than valid trials.
In this study, we sought to index individual instances of covert attentional orienting within the gaze-cuing task. To this end, we examined participants’ manual performance as a function of benefits and costs, defined as responses falling more than 1 SD below and above participants’ average neutral RT, respectively. Based on past research [4], we reasoned that trials labeled as benefits and costs would provide a reasonable estimate of individual instances of attentional orienting, as on those trials attention was biased strongly enough by the cue to generate a facilitative or detrimental effect on performance. Across the two different neutral conditions manipulated in Experiments 1 and 2, we found that both benefits and costs occurred on fewer than half of valid and invalid trials, respectively. Furthermore, benefits occurred more frequently on valid than invalid trials, and costs occurred more frequently on invalid than valid trials. Together, these findings suggest three general implications.
First, our result indicating that benefits and costs occurred on fewer than half of available trials suggests that the typical average orienting effect yielded by the cuing task does not reflect consistent attentional orienting throughout the task, and dovetails well with the results from observational studies [7]. In other words, the measure of average orienting yielded by the cuing task may be driven by instances of attentional orienting occurring on a minority rather than a majority of trials. It is possible, however, that this result reflects our conservative estimate of individual instances of attentional orienting, as we defined benefits and costs as responses that deviated from average performance in a relatively extreme fashion (by 1 SD). To probe this issue, we reanalyzed the data using a more liberal benefit and cost cutoff of 0.5 SD. Not surprisingly, applying this lower benchmark increased the overall proportions of benefits and costs, which together now constituted 61.68% and 61.30% of trials in Experiments 1 and 2, respectively. However, despite this overall increase, the proportions of benefits and costs still mirrored the main results. Specifically, while benefits now occurred on 42.03% of valid trials in Experiment 1 and on 38.13% of valid trials in Experiment 2, both of these values were still statistically lower than the 50% mark (E1: t(24) = 7.62, p < 0.0001, dz = 1.52; E2: t(24) = 10.65, p < 0.0001, dz = 2.13; two-tailed, one-sample tests). The same held for costs, which occurred on 25.85% and 26.47% of invalid trials in Experiments 1 and 2, respectively; again, both values were reliably lower than 50% (E1: t(24) = 17.56, p < 0.0001, dz = 3.51; E2: t(24) = 28.18, p < 0.0001, dz = 5.64). Thus, even when the criterion for identifying benefits and costs was lowered to a more liberal 0.5 SD cutoff, benefits and costs still did not occur on a majority of trials. This supports the notion that the average orienting effect yielded by the cuing task likely reflects not consistent attentional orienting throughout the task, but isolated instances of strong performance biases occurring on less than half of available trials.
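The one-sample tests above compare per-participant proportions against the 50% benchmark. As a rough sketch of that computation (the per-participant proportions below are invented for illustration; only the 50% benchmark, the t statistic, and Cohen's dz come from the text):

```python
import math

# Hypothetical per-participant proportions of benefits on valid trials (%)
props = [41.2, 43.5, 40.8, 44.1, 39.6, 42.9, 41.7, 43.0]
chance = 50.0

n = len(props)
m = sum(props) / n
sd = math.sqrt(sum((x - m) ** 2 for x in props) / (n - 1))  # sample SD

# One-sample t statistic against the 50% mark, and Cohen's dz
t = (m - chance) / (sd / math.sqrt(n))
dz = (m - chance) / sd

# For df = 7, the two-tailed critical |t| at alpha = .05 is about 2.365,
# so |t| above that value indicates proportions reliably below 50%.
print(f"t({n - 1}) = {t:.2f}, dz = {dz:.2f}")
```

A negative t here simply reflects that the sample mean falls below the 50% benchmark; the paper reports the magnitudes.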
Second, although we found that benefits and costs occurred on a minority of trials, their representation across valid and invalid trials was consistent with a pattern predicted by the available literature. Namely, our data indicated that more benefits relative to costs occurred on valid trials and that more costs relative to benefits occurred on invalid trials. This supports our initial hypothesis and validates the present analytical approach. More specifically, the results of our proportion analyses dovetail with the conclusions from typical analyses of average performance while providing a more detailed characterization of the dynamics of attentional orienting behaviors. As such, this method may be useful for future studies investigating which characteristics of attentional cues may modulate the frequency and/or strength of instances of covert attentional orienting.
Finally, like many past studies [18], our examination did not reveal reliable differences across gaze and arrow cues. Unlike orienting to gaze direction, which is thought to reflect evolutionarily driven social biases [1], attentional orienting in response to arrows is purported to reflect an automated process that arises from overlearning the meaning of behaviorally relevant symbols [24]. Despite this fundamental difference, cuing tasks continue to show similar overall performance across gaze and arrow cues ([18]; but see [19]). The present investigation is, to our knowledge, the first to examine whether the two types of cues may also differ in individual instances of orienting. Our results for gaze and arrow cues were once again indistinguishable: both showed benefits and costs on less than half of valid and invalid trials, with the proportion of benefits on valid trials ranging between 16% and 20%, and the proportion of costs on invalid trials ranging between 14% and 16%, across the two experiments. Additionally, when we compared the proportions of benefits and costs across valid and invalid trials, the absence of significant interactions involving cue type in each experiment indicated that the greater proportion of benefits relative to costs on valid trials, and of costs relative to benefits on invalid trials, was not preferentially driven by either the social or the automated cue type.
It is nevertheless important to interpret these results within the context of two broad methodological considerations. First, we remain mindful of the fundamental differences between real-world and laboratory investigations: naturalistic investigations measure overt orienting directly, whereas laboratory performance investigations measure covert orienting indirectly. As such, the contribution of covert orienting to the overt effects assessed during naturalistic studies remains relatively unknown (e.g., [7]). In turn, it is possible that the frequency of gaze following in naturalistic settings may be underestimated, a point that will be important to address in future research (see [27] for discussion). Systematic comparisons between laboratory and naturalistic studies, in which orienting in response to discrete instances of attentional cues is manipulated and measured using both covert and overt measures, are needed to establish the correspondence across these approaches (see [8] for a recent study on this topic). Second, one also needs to remain mindful of the potential role of task parameters. Here, we employed a set of task parameters that has been used by many past gaze- and arrow-cuing studies [4]. While this allows for establishing meaningful links between the present data and the existing literature, it also raises the question of whether alterations of these typical parameters might change the estimated prevalence of attentional orienting. Increasing task difficulty, changing the nature of the response task (discrimination vs. detection), and/or increasing the realism of the stimuli (photographs vs. schematic faces) are some of the factors worth examining.
An intriguing outstanding question posed by this work is why attentional orienting does not appear to be elicited by each available cue instance. This is particularly puzzling within the context of traditional attentional theory, in which a reflexive or spontaneous orienting response is purported to reflect an automatic alignment of the orienting mechanisms with biasing occurring within the corresponding neural pathways (e.g., [29]). In the case of social attention, it has now been consistently demonstrated that such attentional effects do not occur obligatorily, but rather in a contextually and situationally appropriate manner [8]. It would be interesting to examine whether similar patterns of sporadic attentional performance biasing are also observed when other attentional cues, such as luminance increments, are used. In both cases, the current methodology enables the examination of those performance patterns and permits future investigations of the factors that may influence and determine when an attentional shift is elicited by a cue. Such performance effects could also be studied using neuroimaging approaches, which may afford further opportunities to distinguish between the neural signatures of attentional effects using single-trial analyses [33]. In turn, understanding the underlying mechanisms driving performance changes in laboratory attentional paradigms will facilitate the creation of a much-needed methodological bridge between naturalistic and laboratory approaches (e.g., [8]) and will allow for a fine-grained characterization and measurement of attentional behavior in both the real world and the laboratory (e.g., [36]).