Article

Analysis of Eye and Head Coordination in a Visual Peripheral Recognition Task

by Simon Schwab, Othmar Würmle and Andreas Altorfer
University of Bern, 3012 Bern, Switzerland
J. Eye Mov. Res. 2012, 5(2), 1-9; https://doi.org/10.16910/jemr.5.2.3
Published: 1 May 2012

Abstract: Coordinated eye and head movements occur simultaneously when scanning the visual world for relevant targets. However, measuring both eye and head movements in experiments that allow natural head movements can be challenging. This paper provides an approach to studying eye-head coordination: First, we demonstrate the capabilities and limits of the eye-head tracking system used and compare it to other technologies. Second, a behavioral task is introduced to invoke eye-head coordination. Third, a method is introduced to reconstruct signal loss in video-based oculography caused by cornea reflection artifacts in order to extend the tracking range. Finally, parameters of eye-head coordination are identified using EHCA (eye-head coordination analyzer), MATLAB software that we developed to analyze eye-head shifts. To demonstrate the capabilities of the approach, a study with 11 healthy subjects was performed to investigate motion behavior. The approach presented here is discussed as an instrument to explore eye-head coordination, which may lead to further insights into attentional and motor symptoms of certain neurological or psychiatric diseases, e.g., schizophrenia.

Introduction

In human vision, the eyes move to perceive a complete scene. Such voluntary eye movements are characterized by alternating stops (fixations) and quick shifts (saccades), which are strongly related to the anatomy of the eye: only a single spot on the retina, the fovea, contains the highest density of photoreceptors. Consequently, the eyes move to scan all objects of interest. This mechanism can be extensively studied with video-based, non-invasive eye-tracking devices, which record the movement of the pupil and, usually, the position of the cornea reflection (CR). Generally, these eye trackers need to be calibrated before the measurement. During calibration, the subject is presented with a set of fixation points, which are mapped to the positions of the eyes. All other eye positions are then interpolated with a linear mapping function to determine the subject's point of regard (POR). POR measurements are susceptible to head movements, as eye trackers are most often attached to the head and do not measure head movements. Therefore, the head is usually fixed for a precise measurement of gaze position, e.g., by placing it on a chin rest.
However, in everyday situations, head movements are often required to perceive peripheral objects. Eye and head movements are both involved in gaze control to project the object of interest onto the fovea. Targets appearing in the center of the visual field require no head movements. However, if target eccentricity extends beyond ±10°, the eye saccade is usually accompanied by a head movement (Proudlock & Gottlob, 2007). Bartz (1966) and Bizzi, Kalil, and Tagliasco (1971) were among the first to study eye and head movements in humans and primates. Bartz (1966) found that gaze shifts are initiated by a saccade, followed by rotation of the head. Then, prior to reaching the peripheral stimulus, the eyes begin to move in the opposite direction to compensate for the continuing head rotation with a compensatory eye movement (CEM) driven by the vestibulo-ocular reflex. This pioneering work demonstrated the tight coupling of eye and head movements, also known as eye-head coordination; for a review, see Proudlock and Gottlob (2007).
Approximately 40 years ago, eye-head coordination was often measured using electrooculography (EOG) in combination with a helmet attached to a potentiometer (Barnes, 1979; Bartz, 1966; Bizzi et al., 1971). Even though the potentiometer allows the study of head movements of interest (e.g., azimuthal rotation), the head is not entirely free, and head elevation and head roll are more difficult to track. Another method to measure head movements is to use ultrasonic sensors (Altorfer et al., 2000), which allow free head movements but may be more susceptible to data loss, since the sensors and receiver require a free line of sight. In other studies, search coils in a magnetic field are used to measure eye and head movements (Cecala & Freedman, 2008; Corneil & Munoz, 1999; Tweed, Glenn, & Vilis, 1995). In this method, a magnetic coil is attached to the head and a scleral search coil to the eye. The scleral search coil is probably the most precise eye movement recording method and is also suitable for animal studies (Phillips, Fuchs, Ling, Iwamoto, & Votaw, 1997). However, a major disadvantage is the invasive nature of this method: few subjects can endure a coil in the eye for a long duration, even when the eye is anesthetized (Houben, Goumans, & van der Steen, 2006; van der Geest & Frens, 2002). Therefore, it may be difficult to obtain the subjects' full cooperation, especially with children or with patients in clinical investigations. Recent work in eye-head coordination often uses EOG (Becker et al., 2009) or video-based eye tracking (Richard, Churan, Guitton, & Pack, 2011). A basic overview of the different techniques is provided in Table 1. For an in-depth comparison of eye trackers, see Duchowski (2007); for motion trackers, see Welch and Foxlin (2002).
In this paper, we use video-based eye tracking in combination with magnetic head tracking. Magnetic head tracking allows free head movements, is very accurate, and encounters hardly any interference. Video-based eye tracking is probably the most popular technique in eye movement research today. Most importantly, there is no contact with the subject's eye or face; therefore, this method is generally well accepted, which becomes especially relevant when studying patients or children. Hence, the combination of these 2 methods provides a non-invasive instrument to study eye-head coordination in humans.
Eye-head movement analysis has direct application in many fields of research, e.g., psychology, medicine, marketing, usability, virtual reality, and also the sport sciences (Land & McLeod, 2000). Experiments have investigated the role of eye-head coordination in primates (Bizzi et al., 1971; Bizzi, 1979; Phillips et al., 1997; Crawford, Ceylan, Klier, & Guitton, 1999; Populin & Rajala, 2011), in humans during natural exploration (Einhäuser et al., 2009), towards visual compared to auditory stimuli (Zambarbieri, Schmid, Versino, & Beltrami, 1997; Goossens & Van Opstal, 1997), dependent on subject expectation about the target (Oommen, Smith, & Stahl, 2004), in laboratory compared to natural settings (Thumser, Oommen, Kofman, & Stahl, 2008), and during eye-only gaze shifts containing minor head movements (Oommen & Stahl, 2005). Physiological studies applied electrical stimulation in monkeys to identify subcortical and cortical regions involved in eye-head coordination, e.g., the superior colliculus and the frontal and supplementary eye fields (Klier, Wang, & Crawford, 2001; Monteon, Constantin, Wang, Martinez-Trujillo, & Crawford, 2010; Chen & Walton, 2005), or transcranial magnetic stimulation, a brain stimulation technique that is safe for human subjects (Nagel & Zangemeister, 2003). Eye-head coordination has also been investigated in some neurological and psychiatric diseases, e.g., Parkinson's disease and schizophrenia (Hansen, Gibson, Zangemeister, & Kennard, 1990; Fukushima, Fukushima, Morita, & Yamashita, 1990).
In the next sections, we provide a detailed description of the instrument used, present a visual peripheral recognition paradigm to invoke eye-head shifts, remove CR artifacts to extend the tracking range, and finally identify parameters to quantify eye-head coordination. To demonstrate the approach, we designed an experiment that enabled a discussion of its potential and limits. At the end, we present some results to demonstrate the impact of the proposed procedure. The system presented here has the following aims: first, the basic study of eye-head physiology and coordination in healthy human subjects; second, the study of eye-head physiology in neurological and psychiatric patients, with a focus on attentional and motor symptoms, to identify pathological markers for a more complete understanding of the disease; and third, the study of the effects of psychopharmaceutical medication on focused attention and the oculomotor system.

Methods

Participants

Eleven subjects participated in the study (mean age 31 years, range 22–50 years; 6 women and 5 men). No subject had a history of neurological or psychiatric disease, and none had cervical spine dysfunction or related pain. All subjects had normal vision (no history of eye diseases, no color vision deficiency, and sufficient visual acuity to easily see the targets). None of the subjects were on psychoactive medication, except Subject 9, who was on antihistamines. All procedures were approved by the cantonal ethics committee (KEK Bern) and were carried out in full accordance with the principles of the Declaration of Helsinki. The procedure was fully explained to the subjects, who provided written consent before the experiments.

Apparatus

Eye movements of the dominant eye were recorded with a video-based infrared eye tracker (iView X HED-MHT, SMI, Germany) at a sampling rate of 200 Hz and a spatial resolution of 0.5–1°. The device used combined pupil and cornea reflection tracking and was attached to a bicycle helmet. Head movements were recorded using magnetic coils (Fastrak, Polhemus, USA) at a sampling rate of 40 Hz and a spatial resolution of 0.15°. A receiver was attached to the subject's head to track head position and orientation. We used the manufacturer's software (iView X 2.5, SMI) for recording, which provided online gaze vectors based on the head and eye tracking. A 13-point calibration within an area of 50° × 15° was used. After calibration, the system provided gaze vectors within a previously defined 3D model of the screens. Additionally, the system took into account the different centers of rotation of the eye and the head.
Stimuli were presented using our own experiment software based on PsychoPy (Peirce, 2008). This software triggered the eye and head tracking system with a TTL signal in each experimental trial for subsequent time synchronization. A projector (vp6321, Hewlett-Packard, USA) with a refresh rate of 75 Hz presented the targets on central, left, and right screens. Two mirrors were used to reflect the peripheral targets into the correct position on the left and right screens (Figure 1a). We used a response box with an accuracy of 1 ms; this setup was adapted from Stewart (2006).

Visual targets

Visual targets were presented in the center and in the periphery of the subject's visual field. Peripheral targets had an eccentricity of ±55° (center of target; see Figure 1b). Visual targets were colored squares (red or yellow) or Landolt rings (with upward or downward orientation). Targets were 6 cm × 6 cm (4.3° × 4.3°) and were presented at a viewing distance of 80 cm.
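For reference, the angular size follows from the target width w = 6 cm and the viewing distance d = 80 cm as θ = 2 · arctan(w / (2d)) = 2 · arctan(3/80) ≈ 4.3°.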

Procedure

At the beginning, visual acuity (Snellen chart), color vision (Ishihara test), visual dominance (Porta test), and handedness (Edinburgh inventory) were determined. All subjects were screened for eye diseases, diseases of the cervical vertebrae, neck and shoulder pain, drug abuse, and medication. Then, the visual peripheral recognition task was explained (Figure 1c). Each trial began with a black dot, at which the subjects had to look. Then, the first target appeared in the same position, followed by a second target either on the left or on the right side. The task was to determine whether these 2 objects were identical (in color or orientation). Subjects were instructed to respond quickly and accurately. They pressed 2 buttons using the index ("Yes") and middle finger ("No") of their dominant hand. Subjects were seated on a chair and usually made no shoulder movements; however, a second receiver was attached to the right shoulder to control for small shoulder movements. A laser was used to position the subjects at the correct viewing distance. Before the experiment, a 13-point calibration was performed and then validated.
In the experimental session, each subject performed 16 training trials followed by 96 trials in 3 blocks (32 trials per block). The trials involved color squares or Landolt rings (50% each). Likewise, peripheral targets appeared on either the left or right (50% each). For each subject, these 2 conditions were randomized within each block before the experiment started.

Signal pre-processing

Data were analyzed using our own MATLAB software. The head signal was recorded at 40 Hz and the eye signal at 200 Hz. Both time-synchronized signals were provided in a single raw data file by the manufacturer's recording software. Due to the lower sampling rate of the head signal, we used piecewise cubic Hermite interpolation to reconstruct the intermediate values (Figure 2a). Head and gaze signals were synchronized with the stimulus onset using a linear fit. Gaze vectors were transformed to visual angles in degrees, filtered (samples with velocities above 750°/s were discarded), and smoothed (moving average over 5 values, 20 ms). A translation was performed so that gaze and head positions were relative to the central fixation point; negative angles denoted shifts to the left, positive angles shifts to the right. Finally, the eye position was derived by simple subtraction of the head position from the gaze position.
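A minimal MATLAB sketch of these steps (not the authors' implementation; head_az and gaze_az are assumed row vectors holding the head and gaze azimuth of one 3 s segment on the 40 Hz and 200 Hz time bases, respectively):
t_head = 0:1/40:3;                                  % head-tracker time base (40 Hz)
t_eye = 0:1/200:3;                                  % eye-tracker time base (200 Hz)
head_up = interp1(t_head, head_az, t_eye, 'pchip'); % piecewise cubic Hermite upsampling
vel = [0, diff(gaze_az)] * 200;                     % point-to-point velocity (deg/s)
gaze_az(abs(vel) > 750) = NaN;                      % discard samples faster than 750 deg/s
gaze_sm = conv(gaze_az, ones(1,5)/5, 'same');       % 5-sample (20 ms) moving average
eye_az = gaze_sm - head_up;                         % eye position = gaze - head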
Combined pupil-CR recording is very robust for small and medium saccades, but recording larger saccades may be challenging because the tracking range of such systems is limited. One problem that sometimes occurs is that the CR is lost when the eye fixates in the periphery. This happens because, during eye rotation, the CR leaves the iris and merges with the sclera, or is covered by the eyelid or the eyelashes. However, missing data from CR artifacts can be reconstructed using the information from the raw pupil position. Therefore, we performed a curve fitting (second-degree polynomial) to fit the raw pupil position to the eye position (Figure 2b). In such a fit, the coefficients of a polynomial are determined that map the raw pupil data (in pixels) to the lossy eye position (in degrees) in a least-squares sense. A goodness-of-fit statistic, the root mean square error (RMSE), is shown in Figure 2c. The RMSE aggregates the residuals of the fit into a single measure to evaluate its precision. The median RMSE of 0.9° (n = 11, IQR 0.8–1.4) suggests a good precision of the fit. The reconstruction successfully corrected a large number of such CR artifacts and substantially improved the signal quality: missing data were reduced from 42% to 14%. Furthermore, the proposed reconstruction is not specific to eye-head coordination studies but can be applied to any video-based eye tracking data to reduce signal loss due to CR artifacts and thereby extend the tracking range.
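The reconstruction can be sketched as follows (assumed variable names; pupil_px is the raw pupil position in pixels and eye_deg the eye position in degrees, with NaN where the CR was lost):
valid = ~isnan(eye_deg);                             % samples with intact CR tracking
p = polyfit(pupil_px(valid), eye_deg(valid), 2);     % second-degree least-squares fit
eye_rec = eye_deg;
eye_rec(~valid) = polyval(p, pupil_px(~valid));      % fill lossy samples from the fit
res = polyval(p, pupil_px(valid)) - eye_deg(valid);  % residuals of the fit
rmse = sqrt(mean(res.^2));                           % goodness of fit (RMSE, in deg)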

Identification of parameters

Three-second data segments were created for each trial, starting from the onset of the peripheral target. Within such time segments (Figure 2d), the saccade, the head shift, and the CEM were detected by velocity thresholds (Salvucci & Goldberg, 2000). We used detection criteria similar to those reported by Cecala and Freedman (2008): eye onset/offset were defined using 60°/s and 15°/s velocity thresholds, head onset/offset using 20°/s and 15°/s, and CEM onset/offset using 15°/s and 5°/s. Characteristic parameters of gaze, eye, and head shifts were identified: the saccade latency was the time from peripheral target onset until the start of the saccade. Typically, the eye-head shift is initiated with the saccade; therefore, the saccade latency is a measure of reaction time towards the stimulus. Furthermore, the head offset was determined as the time between the saccade onset and the head onset; it is a measure of how much the head reaction lags behind the beginning of the saccade. These parameters correspond to those used by Zambarbieri et al. (1997). We also analyzed the POR and the head amplitude. Due to the absence of software to analyze eye-head coordination, we wrote our own MATLAB toolbox, EHCA (eye-head coordination analyzer), which is available at SourceForge (Schwab, 2011). A short introduction to the software, which also contains sample data, is provided in Appendix A.
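A simplified velocity-threshold detection of a single saccade, in the spirit of these criteria, could look as follows (a sketch only, not the EHCA implementation; eye_deg is the horizontal eye position in degrees and t the time vector of a segment starting at target onset, sampled at 200 Hz):
fs = 200;                                        % sampling rate (Hz)
vel = abs([0; diff(eye_deg(:))]) * fs;           % point-to-point velocity (deg/s)
on = find(vel > 60, 1, 'first');                 % onset: velocity exceeds 60 deg/s
off = on + find(vel(on+1:end) < 15, 1, 'first'); % offset: first return below 15 deg/s
latency = t(on);                                 % saccade latency (s), segment starts at target onset
amplitude = eye_deg(off) - eye_deg(on);          % saccade amplitude (deg)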

Results

We recorded 96 gaze shifts for each of the 11 subjects. For the purpose of this paper, only gaze shifts in the Landolt task are presented here (48 of 96 trials) because they contained more combined eye-head shifts. Only correctly answered trials were evaluated.
Most subjects made head shifts during the trials; the median number of head shifts was 44 (IQR 25–48); see Figure 3a. Three subjects made few or no head shifts: Subject 3 (20), Subject 5 (2), and Subject 11 (none). The median saccade latency was 195 ms (IQR 187–229), shown in Figure 3b. Subject 9 had the highest saccade latency (326 ms), which was a clear outlier. We found that the head shift generally started after saccade onset; the median head offset was 98 ms (IQR 50–121), shown in Figure 3c. POR is shown in Figure 3d: subjects had a median POR of 46.2° (IQR 36.7–50.2). Subjects 2, 4, 7, 8, and 11 had a median POR that approached the target position to within 5°. Subject 9 had the lowest POR at 21.8°. The median head amplitude was 14.7° (IQR 10.3–17.4), shown in Figure 3e. To determine the relative contributions of eye and head amplitude to the gaze shift, we calculated the head-eye amplitude ratio (head amplitude/eye amplitude); see Figure 3f. Generally, the head contribution to gaze shifts was smaller than the saccade contribution: subjects had a median ratio of 0.36 (IQR 0.22–0.42). Subject 9 had the highest ratio (0.74, outlier). Subjects performed a visual recognition task, and each trial was categorized as a hit, miss, correct rejection, or false alarm. We evaluated the percentage of correct answers (hits and correct rejections) and the associated response times. Generally, subjects exhibited high accuracy at the task: the median number of correct responses was 47 of 48 (IQR 45–48; Figure 4a), and the median response time was 947 ms (IQR 824–1029; Figure 4b).
In summary, we identified the following average spatial and temporal pattern: the saccade started after 195 ms (median saccade latency), and after an additional 98 ms (median head offset), the head turned in the same direction. In gaze shifts, the saccade contribution was generally larger than that of the head; the median POR was 46.2° and the median head amplitude was 14.7°.

Discussion

The purpose of this paper was to present an instrument to measure and analyze eye-head coordination. Eye-head coordination can be disturbed in some psychiatric and neurological conditions in which attention or the motor system is affected (Hansen et al., 1990; Fukushima et al., 1990; Proudlock & Gottlob, 2007). The approach presented here is non-invasive and therefore suitable for research in patients, who may have lower tolerance and compliance compared to healthy controls. Such research might bring new insights into attentional and motor symptoms associated with certain psychiatric and neurological conditions (e.g., schizophrenia, Parkinson's disease).
In our study, we observed a spatial and temporal pattern similar to that of Bartz (1966), who presented targets at the same position (55°). He found that saccades started before the head rotation and were the larger contributor to gaze shifts (POR: 51.5–52.2°, head shifts: 16.5–19.4°). The POR in our experiment was around 5° smaller. An explanation is that we used larger stimuli (4.3°) compared to Bartz (0.23°). Therefore, subjects were not required to fixate precisely on the center of the target to complete the recognition task; parafoveal vision, e.g., a fixation at the near edge of the target, was sufficient for good performance.
Typically, subjects made eye and head shifts towards the targets. Three of our subjects used a different strategy: Subjects 5 and 11 made saccades without head shifts, and Subject 3 made head shifts in only half of the trials. Such missing head shifts were compensated for by larger saccades, as there was no reduction in overall POR. Apparently, these missing head shifts had no negative influence on task performance. The tendency for head movements seems to differ across subjects, and a distinction between "head-movers" and "non-head-movers" has been proposed (Fuller, 1992). The differences are not simply related to individual ocular or neck motor range and are still under debate (Stahl, 1999). An interesting approach may be to correlate individual head contribution with personality traits. On the other hand, the larger contribution of the eye amplitude compared to the head seems to be a general principle. A reason might be that saccades are more economical than head shifts, e.g., head movements are much slower and require higher muscular activity. These results, however, apply to normal subjects. Under certain pathological conditions, e.g., oculomotor nerve palsy, head movements may play a more dominant role and can, to some degree, compensate for eye movements.
One of the participants, Subject 9, had been prescribed an antihistamine. This subject had a longer saccade latency compared to the others, a shorter POR, and a higher head-eye ratio. The higher ratio was caused by the lower eye contribution to gaze shifts, because the head amplitudes were normal. This pattern is probably associated with the subject's antihistamine medication. Interestingly, performance was good in terms of both reaction times and the number of correct responses. It is likely that the antihistamine affected eye movements, but not head movements or the subject's responses, to the same extent. It is well known that eye movement parameters are affected by some medications, especially those acting on the central nervous system; benzodiazepines and antipsychotics, for example, often cause a decrease in saccadic peak velocity (Reilly, Lencer, Bishop, Keedy, & Sweeney, 2008). The effect of antihistamines on eye movements is less well understood and requires further investigation. It is therefore very important to screen subjects for medication in eye movement studies. In our case, there was a clear reason to exclude this subject from a clinical study; however, exclusion was not critical for the purposes of this paper.
In eye-head coordination studies, a simple protocol is often used (e.g., subjects are asked to look at light points in the periphery). The visual peripheral recognition task, by contrast, is a more complex protocol. This task may be more suitable for targeting cognitive resources such as visual short-term memory and attention during gaze control. Such a protocol may be interesting for studying certain pathologies, e.g., in schizophrenia patients, who suffer from attentional and motor symptoms. Schizophrenia patients, disturbed by distracting stimuli, have profound problems focusing attention on salient cues (Nuechterlein & Dawson, 1984; Braff, 1993).
Given the high task accuracy (median 98%), the difficulty could be increased further, e.g., by reducing the stimulus presentation time or decreasing the stimulus size. This may increase the variance in correct responses and reduce the variance in POR, because subjects would need to fixate the smaller target more precisely. On the other hand, increasing the difficulty may also reduce the number of correct responses, and therefore the number of trials available for analysis. If a comparison of eye-head movement patterns in correctly vs. incorrectly answered trials is of interest, a certain number of incorrectly answered trials may even be desirable.
In this study, eye movements were measured in an experiment in which the head was free to move. The targets we used elicited relatively large saccades (up to 50°). Two problems result from this. First, CR detection may sometimes fail with video-based eye trackers at such large saccades. We addressed this problem by signal reconstruction based on pupil position, which improved the signal quality and extended the tracking range. Second, saccades performed in the experiment can sometimes be larger than those performed during calibration, which might reduce the overall system accuracy in the spatial domain. From the data, many further analyses can be performed, depending on the research question (e.g., maximal head velocities, or the saccade latency distribution in relation to a specific task).
In conclusion, the proposed measurement and quantification of eye-head coordination demonstrated (1) an easy-to-apply and non-invasive technique to precisely measure eye and head movements as well as their complex interaction, (2) a behavioral task to invoke eye-head coordination, and (3) methods of signal processing, artifact correction, and parameter identification. The application of the methodology was demonstrated by an exemplary analysis of experimental data. The proposed approach provides an effective way to quantify and analyze eye-head coordination.

Acknowledgements

The authors thank Ulrich Raub for technical assistance and Stéphanie Giezendanner for Figure 1b. The author contributions were: Simon Schwab performed the experiments, analyzed the data, developed the analysis tools, and wrote the manuscript. Othmar Würmle contributed to the analysis and revised the manuscript. Andreas Altorfer conceived and designed the experiments and revised the manuscript.

Appendix A How to use EHCA

Our EHCA software is available at SourceForge (http://sourceforge.net/projects/ehca/). To use this software, MATLAB is required. Download the software package and extract it to a directory, then add that directory to the MATLAB path (from the "File" menu, use "Set Path...") so that the functions are available from the MATLAB command line. Some basic MATLAB programming skills may be helpful when using our software.
The class ehcaEmov provides a popular eye movement analysis algorithm (I-VT) for saccade detection based on Salvucci and Goldberg (2000). The method to detect a saccade is ehcaEmov.get_saccade, which requires 5 arguments (with the data type in parentheses):
- Time scale (vector of doubles)
- Eye position, horizontal (vector of doubles)
- Onset velocity threshold (double)
- Offset velocity threshold (double)
- Maximal saccade duration threshold (double)
Both the onset and offset thresholds need to be divided by the sampling rate of your eye tracker because point-to-point velocities are calculated; e.g., if you choose an onset threshold of 60°/s and your sampling rate is 200 Hz, then your threshold is 60/200, or 0.3. The maximal saccade duration, on the other hand, has to be multiplied by the sampling rate; e.g., if you choose a maximal saccade duration of 0.3 s, your threshold is 0.3 × 200, or 60, which corresponds to the maximal number of data points allowed in a time window. Therefore, the method works with any sampling rate. The method ehcaEmov.get_saccade returns the following 6 arguments: onset time, onset position, offset time, offset position, amplitude, and duration of the saccade. Likewise, ehcaEmov.get_saccade can be used to detect head shifts; however, different thresholds should be used. Some default thresholds are defined as constants in the class ehcaDemo; these were also the thresholds used to analyze the data presented in this paper.
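For example, for a 200 Hz recording and the thresholds above, a call could look as follows (t and eye_h are assumed time and horizontal eye position vectors):
>> fs = 200;            % sampling rate (Hz)
>> on_thr = 60 / fs;    % 60°/s onset threshold in point-to-point units
>> off_thr = 15 / fs;   % 15°/s offset threshold
>> max_dur = 0.3 * fs;  % 0.3 s maximal saccade duration, in samples
>> [on_t, on_p, off_t, off_p, amp, dur] = ehcaEmov.get_saccade(t, eye_h, on_thr, off_thr, max_dur);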
Figure A1. Using the plot method, all 11 exemplary trials (Subjects 1–11) are displayed for inspection of proper parameter detection. Saccades (red), head shifts (blue), and CEM (green) are shown. Due to a pupil artifact, saccade offset could not be detected in Subject 10.
The ehcaDemo class is provided to demonstrate how the previously described methods can be used in practice. This class may be adapted for your own data. We have also provided some sample data, ehca_demo.mat, which contains 11 data segments (trials) from our 11 subjects (1 trial per subject). Here, we provide the steps required to analyze the sample data.
First, the sample data have to be loaded:
>> load ehca_demo;
This loads the struct segments, which contains the data. Within the struct, time contains the time course, head the head shifts, eye the saccades, and gaze the POR, as cell arrays. There is also a vector, nr, which contains the trial numbers chosen for this demo sample; e.g., the first trial segment is Subject 1, trial 14. Next, we create a demo object:
>> demo = ehcaDemo(segments);
The demo object loads the sample data from ehca_demo.mat and uses the methods provided in ehcaEmov to analyze the data. Eye-head movement parameters can be accessed in a struct datatype using the dot syntax. For example, to obtain the saccade onset in the second trial, type:
>> demo.eye.on_t(2)
This returns a saccade latency of 0.206 s. Likewise, saccade amplitudes, amp, and durations, dur, can be accessed in the data structure.
Very importantly, all trials should be checked for correct detection of the parameters. Therefore, we have provided 2 methods to visualize the trials (Figure A1). To plot all trials, type:
>> demo.plot_segments();
To plot one specific trial, e.g., the second trial, use:
>> demo.plot_segment(2);
The file ehcaDemo.m may be changed and adapted to analyze your own data. If necessary, different thresholds can be defined in the constants section at the top of the ehcaEmov.m file. The sampling rate and number of trials can be defined in ehcaDemo.m. If your data are stored in the same structure as our sample data (segments in ehca_demo.mat), little or no modification is required, and you can use the methods provided with ehcaDemo to analyze your data. If the data require additional smoothing or filtering, we provide the methods ehcaEmov.movavg and ehcaEmov.filter_saccades. This is only a short and condensed introduction to our software; check the class documentation (help ehcaEmov) for details on how all the methods are used.
We have provided MATLAB software and a simple use case with 11 trials to analyze. To our knowledge, this is the first MATLAB tool to analyze eye-head shifts. If EHCA is useful in your research, please cite this paper. EHCA is free software.

References

  1. Altorfer, A., S. Jossen, O. Würmle, M. L. Käsermann, K. Foppa, and H. Zimmermann. 2000. Measurement and meaning of head movements in everyday face-to-face communicative interaction. Behavior Research Methods, Instruments & Computers 32, 1: 17–32. [Google Scholar]
  2. Barnes, G. R. 1979. Vestibulo-ocular function during coordinated head and eye movements to acquire visual targets. Journal of Physiology 287: 127–147. [Google Scholar] [CrossRef] [PubMed]
  3. Bartz, A. E. 1966. Eye and head movements in peripheral vision: nature of compensatory eye movements. Science 152, 729: 1644–1645. [Google Scholar] [CrossRef] [PubMed]
  4. Becker, W., R. Jürgens, J. Kassubek, D. Ecker, B. Kramer, and B. Landwehrmeyer. 2009. Eye-head coordination in moderately affected Huntington's disease patients: do head movements facilitate gaze shifts? Experimental Brain Research 192, 1: 97–112. [Google Scholar] [CrossRef]
  5. Bizzi, E. 1979. Strategies of eye-head coordination. Progress in Brain Research 50: 795–803. [Google Scholar]
  6. Bizzi, E., R. E. Kalil, and V. Tagliasco. 1971. Eye-head coordination in monkeys: evidence for centrally patterned organization. Science 173, 3995: 452–454. [Google Scholar] [CrossRef]
  7. Braff, D. L. 1993. Information processing and attention dysfunctions in schizophrenia. Schizophrenia Bulletin 19, 2: 233–259. [Google Scholar] [CrossRef] [PubMed]
  8. Cecala, A. L., and E. G. Freedman. 2008. Amplitude changes in response to target displacements during human eye-head movements. Vision Research 48, 2: 149–166. [Google Scholar] [CrossRef]
  9. Chen, L. L., and M. M. G. Walton. 2005. Head movement evoked by electrical stimulation in the supplementary eye field of the rhesus monkey. Journal of Neurophysiology 94, 6: 4502–4519. [Google Scholar] [CrossRef]
  10. Corneil, B. D., and D. P. Munoz. 1999. Human eye-head gaze shifts in a distractor task. ii. reduced threshold for initiation of early head movements. Journal of Neurophysiology 82, 3: 1406–1421. [Google Scholar] [CrossRef]
  11. Crawford, J. D., M. Z. Ceylan, E. M. Klier, and D. Guitton. 1999. Three-dimensional eye-head coordination during gaze saccades in the primate. Journal of Neurophysiology 81, 4: 1760–1782. [Google Scholar] [CrossRef] [PubMed]
  12. Duchowski, A. T. 2007. Eye tracking methodology: Theory and practice. Springer. [Google Scholar]
  13. Einhäuser, W., F. Schumann, J. Vockeroth, K. Bartl, M. Cerf, J. Harel, et al. 2009. Distinct roles for eye and head movements in selecting salient image parts during natural exploration. Annals of the New York Academy of Sciences 1164: 188–193. [Google Scholar] [CrossRef]
  14. Fukushima, J., K. Fukushima, N. Morita, and I. Yamashita. 1990. Disturbances in the control of saccadic eye movement and eye-head coordination in schizophrenics. Journal of Vestibular Research 1, 2: 171–180. [Google Scholar] [CrossRef]
  15. Fuller, J. H. 1992. Head movement propensity. Experimental Brain Research 92, 1: 152–164. [Google Scholar] [CrossRef]
  16. Goossens, H. H., and A. J. Van Opstal. 1997. Human eye-head coordination in two dimensions under different sensorimotor conditions. Experimental Brain Research 114, 3: 542–560. [Google Scholar] [CrossRef]
  17. Hansen, H. C., J. M. Gibson, W. H. Zangemeister, and C. Kennard. 1990. The effect of treatment on eye-head coordination in Parkinson's disease. Journal of Vestibular Research 1, 2: 181–186. [Google Scholar] [CrossRef] [PubMed]
  18. Houben, M. M. J., J. Goumans, and J. van der Steen. 2006. Recording three-dimensional eye movements: scleral search coils versus video oculography. Investigative Ophthalmology & Visual Science 47, 1: 179–187. [Google Scholar] [CrossRef]
  19. Klier, E. M., H. Wang, and J. D. Crawford. 2001. The superior colliculus encodes gaze commands in retinal coordinates. Nature Neuroscience 4, 6: 627–632. [Google Scholar] [CrossRef]
  20. Land, M. F., and P. McLeod. 2000. From eye movements to actions: how batsmen hit the ball. Nature Neuroscience 3, 12: 1340–1345. [Google Scholar] [CrossRef]
  21. Monteon, J. A., A. G. Constantin, H. Wang, J. Martinez-Trujillo, and J. D. Crawford. 2010. Electrical stimulation of the frontal eye fields in the head-free macaque evokes kinematically normal 3D gaze shifts. Journal of Neurophysiology 104, 6: 3462–3475. [Google Scholar] [CrossRef]
  22. Nagel, M., and W. H. Zangemeister. 2003. The effect of transcranial magnetic stimulation over the cerebellum on the synkinesis of coordinated eye and head movements. Journal of the Neurological Sciences 213, 1-2: 35–45. [Google Scholar] [CrossRef] [PubMed]
  23. Nuechterlein, K. H., and M. E. Dawson. 1984. Information processing and attentional functioning in the developmental course of schizophrenic disorders. Schizophrenia Bulletin 10, 2: 160–203. [Google Scholar] [CrossRef]
  24. Oommen, B. S., R. M. Smith, and J. S. Stahl. 2004. The influence of future gaze orientation upon eye-head coupling during saccades. Experimental Brain Research 155, 1: 9–18. [Google Scholar] [CrossRef] [PubMed]
  25. Oommen, B. S., and J. S. Stahl. 2005. Amplitudes of head movements during putative eye-only saccades. Brain Research 1065, 1-2: 68–78. [Google Scholar] [CrossRef] [PubMed]
  26. Peirce, J. W. 2008. Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics 2: 10. [Google Scholar] [CrossRef] [PubMed]
  27. Phillips, J. O., A. F. Fuchs, L. Ling, Y. Iwamoto, and S. Votaw. 1997. Gain adaptation of eye and head movement components of simian gaze shifts. Journal of Neurophysiology 78, 5: 2817–2821. [Google Scholar] [CrossRef]
  28. Populin, L. C., and A. Z. Rajala. 2011. Target modality determines eye-head coordination in nonhuman primates: implications for gaze control. Journal of Neurophysiology 106, 4: 2000–2011. [Google Scholar] [CrossRef]
  29. Proudlock, F. A., and I. Gottlob. 2007. Physiology and pathology of eye-head coordination. Progress in Retinal and Eye Research 26, 5: 486–515. [Google Scholar] [CrossRef]
  30. Reilly, J. L., R. Lencer, J. R. Bishop, S. Keedy, and J. A. Sweeney. 2008. Pharmacological treatment effects on eye movement control. Brain and Cognition 68, 3: 415–435. [Google Scholar] [CrossRef]
  31. Richard, A., J. Churan, D. E. Guitton, and C. C. Pack. 2011. Perceptual compression of visual space during eye-head gaze shifts. Journal of Vision 11, 12: 1–17. [Google Scholar] [CrossRef]
  32. Salvucci, D. D., and J. H. Goldberg. 2000. Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 symposium on eye tracking research & applications. New York, NY, USA, ACM: pp. 71–78. [Google Scholar] [CrossRef]
  33. Schwab, S. 2011. EHCA: Eye-head coordination analyzer for the MATLAB programming language. Available online: http://sourceforge.net/projects/ehca/.
  34. Stahl, J. S. 1999. Amplitude of human head movements associated with horizontal saccades. Experimental Brain Research 126, 1: 41–54. [Google Scholar] [CrossRef] [PubMed]
  35. Stewart, N. 2006. A PC parallel port button box provides millisecond response time accuracy under Linux. Behavior Research Methods 38, 1: 170–173. [Google Scholar] [CrossRef] [PubMed]
  36. Thumser, Z. C., B. S. Oommen, I. S. Kofman, and J. S. Stahl. 2008. Idiosyncratic variations in eye-head coupling observed in the laboratory also manifest during spontaneous behavior in a natural setting. Experimental Brain Research 191, 4: 419–434. [Google Scholar] [CrossRef]
  37. Tweed, D., B. Glenn, and T. Vilis. 1995. Eye-head coordination during large gaze shifts. Journal of Neurophysiology 73, 2: 766–779. [Google Scholar] [CrossRef]
  38. van der Geest, J. N., and M. A. Frens. 2002. Recording eye movements with video-oculography and scleral search coils: a direct comparison of two methods. Journal of Neuroscience Methods 114, 2: 185–195. [Google Scholar] [CrossRef] [PubMed]
  39. Welch, G., and E. Foxlin. 2002. Motion tracking: no silver bullet, but a respectable arsenal. IEEE Computer Graphics and Applications 22, 6: 24–38. [Google Scholar] [CrossRef]
  40. Zambarbieri, D., R. Schmid, M. Versino, and G. Beltrami. 1997. Eye-head coordination toward auditory and visual targets in humans. Journal of Vestibular Research 7, 2-3: 251–263. [Google Scholar] [CrossRef]
Figure 1. Apparatus and paradigm of the experiment. Two mirrors were used to project the visual targets onto the left and right peripheral screens (a). Visual targets appeared at 3 positions (illustrated by black dots) on the left, central, and right screens (b). All 3 targets had a viewing distance of d = 80 cm.
Figure 2. Signal pre-processing: Head shifts (40 Hz) were upsampled to 200 Hz using piecewise cubic Hermite interpolation (a). In saccades (b), data loss sometimes occurred due to cornea reflection (CR) artifacts, but could successfully be removed with a polynomial fitting function upon raw pupil position data. In (c), this reconstruction was evaluated in view of a goodness of fit statistic, root mean square error (RMSE). A typical gaze shift (d) consisted of a saccade (red), a head shift (blue), and a compensatory eye movement (CEM, green). Parameters detected were saccade latency (SL), head offset (HO), point of regard (POR), and head amplitude (HA).
Figure 3. (a) Number of head shifts (HS) towards the targets for each subject with group statistic (right). Saccade latency (b) was the saccadic response time toward the peripheral stimulus onset. Subject 9 had an outlying latency at the upper end. Head offsets (c) were the time between saccade onset and head-shift onset. (d) POR indicated how closely subjects approached the targets. In eye-head shifts, head amplitudes (e) contributed less than saccades, as shown in the head-eye amplitude ratios (f), which were generally < 1. Subject 9 had the highest ratio (outlier), i.e., the highest head contribution relative to the eye.
Figure 4. Number of correct responses (a) and response times (b) for each subject, group statistics on the right.
Table 1. Comparison of popular eye and head tracking methods.
