Journal of Eye Movement Research is published by MDPI from Volume 18 Issue 1 (2025). Previous articles were published by another publisher in Open Access under a CC-BY (or CC-BY-NC-ND) licence, and they are hosted by MDPI on mdpi.com as a courtesy and upon agreement with Bern Open Publishing (BOP).
Article

Microsaccade Characterization Using the Continuous Wavelet Transform and Principal Component Analysis

by Mario Bettenbühl 1,2, Claudia Paladini 1, Konstantin Mergenthaler 2, Reinhold Kliegl 2, Ralf Engbert 2 and Matthias Holschneider 1

1 Department of Mathematics, University of Potsdam, Potsdam, Germany
2 Department of Psychology, University of Potsdam, Potsdam, Germany
J. Eye Mov. Res. 2009, 3(5), 1-14; https://doi.org/10.16910/jemr.3.5.1
Published: 30 October 2010

Abstract

During visual fixation on a target, humans perform miniature (or fixational) eye movements consisting of three components: tremor, drift, and microsaccades. Microsaccades are high-velocity, small-amplitude components of fixational eye movements. However, microsaccade shapes and statistical properties vary between individual observers. Here we show that microsaccades can be formally represented by two significant shapes, which we identified in real data using the continuous wavelet transform and the mathematical definition of singularities. For characterization and model selection, we carried out a principal component analysis, which identified a step shape with an overshoot as the first component and a bump that regulates the overshoot as the second. We conclude that microsaccades are singular events with an overshoot component that can be detected by the continuous wavelet transform.

Introduction

The act of visual fixation is the basis for the perception of a stationary target object. In early eye-movement research, miniature eye movements were discovered during fixation. Furthermore, when eye movements are suppressed, perception is disturbed. In the 1950s it was demonstrated in laboratory experiments that stationary objects rapidly fade from perception when our eyes are artificially stabilized (Ditchburn & Ginsborg, 1952; Pritchard, 1961; Riggs, Ratliff, Cornsweet, & Cornsweet, 1953). This perceptual fading is caused by the adaptation of retinal receptor systems to constant input and can occur rapidly (Coppola & Purves, 1996). Thus, while our eyes fixate a stimulus for the visual analysis of fine details, ironically, miniature eye movements must be produced to counteract perceptual fading. Consequently, the term fixational eye movements (FEM) was introduced to capture this seemingly paradoxical behavior. Perceptual performance as a function of self-generated noise is a unimodal function, which lends support to underlying nonlinear mechanisms (Starzynski & Engbert, 2009).
Fixational eye movements are classified into tremor, drift, and microsaccades (e.g., Ciuffreda & Tannen, 1995). The largest component of fixational eye movements is produced by microsaccades, which are high-velocity movements with small amplitudes. Recent findings demonstrated various neural, perceptual, and behavioral functions of microsaccades (Martinez-Conde, Macknik, & Hubel, 2004; Martinez-Conde, Macknik, Troncoso, & Hubel, 2009; Rolfs, 2009). The relevance of microsaccades to diverse neural and cognitive systems offers a possible explanation for the difficulties in identifying a specific function for microsaccades (for recent overviews see Martinez-Conde et al., 2004). The following paragraphs give a brief overview of recent findings on the functions of microsaccades:
Neural activity. Microsaccades are correlated with bursts of spikes across the visual pathway (Martinez-Conde, Macknik, & Hubel, 2000; Martinez-Conde et al., 2004; Martinez-Conde et al., 2009). Theoretical analyses suggest that they help to decorrelate neural responses in natural viewing (Rucci & Casile, 2004).
Oculomotor control of fixation. Microsaccades enhance retinal image slip (to counteract retinal fatigue) on a short time scale and control fixational errors on a long time scale (Engbert & Kliegl, 2004). Moreover, recent evidence suggests that microsaccades are triggered on perceptual demand based on estimation of retinal image slip (Engbert & Mergenthaler, 2006).
Perception. Microsaccades are important for peripheral and parafoveal vision. During the perception of bistable visual scenes, microsaccades induce transitions to visibility and counteract transitions to perceptual fading (Engbert, 2006; Martinez-Conde, Macknik, Troncoso, & Dyar, 2006; Rucci, Iovin, Poletti, & Santini, 2007). Moreover, fixational eye movements and microsaccades represent noise sources that enhance perception (Starzynski & Engbert, 2009). Furthermore during fixation, microsaccades play a part in supporting second-order visibility (Troncoso, Macknik, & Martinez-Conde, 2008).
Attention. Microsaccades can be suppressed voluntarily with focused attention (Bridgeman & Palca, 1980; Gowen, Abadi, & Poliakoff, 2005). They are also modulated by crossmodal attention with a pronounced signature in both rate and orientation (e.g., Engbert & Kliegl, 2003; Galfano, Betta, & Turatto, 2004; Hafed & Clark, 2002; R. Laubrock, Engbert, & Kliegl, 2005; Rolfs, Engbert, & Kliegl, 2005). The hypothesis that microsaccades represent an index of covert attention has been criticized by Horowitz, Fine, Fencsik, Yurgenson, and Wolfe (2007) (but see Horowitz, Fencsik, Fine, Yurgenson, & Wolfe, 2007; J. Laubrock, Engbert, Rolfs, & Kliegl, 2007), however new work by J. Laubrock, Kliegl, Rolfs, and Engbert (2010) lends support to the coupling between attention and microsaccades.
Saccadic latency. Microsaccades interact with upcoming saccadic responses, which can result in prolonged as well as shortened latencies for saccadic reactions (Rolfs, Laubrock, & Kliegl, 2006). Recently, Sinn and Engbert (2009) demonstrated that this effect contributes to the saccadic facilitation effect on natural backgrounds.
Individual differences. The pattern of successive microsaccades (called saccadic intrusion in this study) has been proposed as a stable characteristic between persons (Abadi & Gowen, 2004). In general, there is much overlap but also a few differences tied to the distinction between microsaccades and saccadic intrusions (Gowen, Abadi, Poliakoff, Hansen, & Miall, 2007).
The list of results demonstrates that microsaccades are associated with a wide range of research areas in behavior, cognition, and neural functioning. For the detection of microsaccades in trajectories of fixational eye movements, methods were developed by Boyce (1967), Martinez-Conde et al. (2000), and Engbert and Kliegl (2004) that exploit the high velocity of microsaccades. The idea underlying these methods is to describe microsaccades as high-velocity events of a certain duration. Mergenthaler and Engbert (2007) reported that fixational eye movements can be modelled as fractional Brownian motion with persistent and anti-persistent behavior on short and long time scales, respectively. They also investigated the influence of microsaccades on the scaling exponent, which determines the characteristics of the underlying structure. Our working hypothesis defines microsaccades as events of low regularity compared to the general structure of fixational eye movement trajectories. We propose a scale-free detection method for microsaccades using the continuous wavelet transform (Holschneider, 1995; Mallat, 1998). It uses structural properties of the trajectory, and the rarity of microsaccadic events within the underlying drift movement, to detect microsaccades.
Using the results obtained by a detection method based on structural properties of fixational eye movements, we continue with an analysis of the obtained shapes. We show how to arrive at a data-driven characterization of microsaccade shape by means of principal component analysis (Jolliffe, 2002). We will see that two components already suffice to describe the microsaccade shape within an appropriate range.

Methods

Microsaccades are rapid small-amplitude events with typical durations between 6 and 30 ms and amplitudes below 1° (for an overview see Engbert, 2006). In this paper, however, we propose not to use their high velocity as in Engbert and Kliegl (2003) or Engbert and Mergenthaler (2006), but rather their local singularity structure. Wavelet analysis is a well-suited tool to detect and characterize singularities within a more regular background.

Singularities and local regularity

One might first ask how the mathematically motivated term singularity applies to the description of a microsaccade. Following Zuber, Stark, and Cook (1965), we sketch a prototypical microsaccade shape in Figure 1. The y-axis is the horizontal eye position plotted over time (x-axis). Thus, in this example, after an initial period of rest, the eye moves quickly towards the right and returns partially towards the left before arriving at the new horizontal position. The illustration depicts three important characteristics of microsaccade topology: amplitude, displacement, and overshoot. We refer to the maximum excursion as the amplitude and to amplitude minus overshoot as the effective displacement (see Figure 1a). Thus, the distinction between amplitude and displacement is due to variations in the overshoot component and is relevant to kinematic as well as functional aspects of microsaccades.
Figure 1. Microsaccades can mathematically be defined as a superposition of two functions with singular points. (a) Several labeled attributes allow a further description of the eye's displacement; the eye's trajectory is drawn on the y-axis against time on the x-axis. Smoothing the (b) Heaviside step function and the (c) Dirac delta function and superposing them returns a microsaccade shape.
This schematic representation of a microsaccade shape can mathematically be described by a superposition of smoothed versions of the singularities depicted in Figure 1b and 1c. These two panels schematically show the shapes of functions that have scale-invariant singularities: the Heaviside step function and the Dirac delta function. Both functions are singular at zero. In fixational eye movement trajectories, the drift and tremor movements describe the baseline of position-time displacements. Microsaccades do not share the self-similarity of this baseline (compare Mergenthaler & Engbert, 2007); they influence it and appear as singular events within the more regular drift movement.
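As an illustration of this superposition, a smoothed step plus a smoothed delta can be sketched numerically. This is not part of the paper's analysis pipeline; the window length, smoothing scale, displacement, and overshoot values below are assumptions chosen purely for illustration:

```python
import numpy as np
from scipy.special import erf

# 60 ms window around the event; all amplitudes and the smoothing
# scale sigma are illustrative assumptions, not fitted values.
t = np.linspace(-30e-3, 30e-3, 601)   # seconds
sigma = 3e-3                          # smoothing scale of the singularities

step = 0.5 * (1 + erf(t / (np.sqrt(2) * sigma)))  # smoothed Heaviside
bump = np.exp(-t**2 / (2 * sigma**2))             # smoothed Dirac delta

displacement = 0.4   # effective displacement (assumed, in degrees)
overshoot = 0.15     # overshoot height (assumed, in degrees)
shape = displacement * step + overshoot * bump

# The maximum excursion (amplitude) exceeds the final displacement,
# reproducing the amplitude/displacement distinction of Figure 1a.
```

Because the bump adds to the rising flank of the step, the resulting curve peaks above its final resting level, which is exactly the overshoot behavior sketched in Figure 1.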

The continuous wavelet transform

A powerful tool to analyze local regularity and to detect local singularities is the continuous wavelet transform (Arneodo, Grasseau, & Holschneider, 1988; Holschneider & Tchamitchian, 1991; Mallat & Hwang, 1992). This time-frequency analysis tool has been applied for various purposes in signal processing and in data analysis in general (see e.g., Daubechies, 1992; Daubechies & Teschke, 2005; Diallo, Holschneider, Kulesh, Scherbaum, & Adler, 2006; Holschneider, 1995; Quiroga, Nadasdy, & Ben-Shaul, 2004). Here, we apply the wavelet method for the detection of singularities in fixational eye movement data.
The continuous wavelet transform of a real-valued signal s(t) with respect to a wavelet Ψ is given by
$\mathcal{W}_\Psi s(a, b) = \frac{1}{a} \int_{-\infty}^{\infty} \overline{\Psi\!\left(\frac{t - b}{a}\right)}\, s(t)\, dt$  (1)
which is a function of two parameters a and b. Here, the bar denotes the complex conjugate of Ψ. The parameter b is a translation parameter (i.e., a variation of b moves the wavelet along the time axis), whereas a > 0 is the scale (dilation) parameter. The inverse scale 1/a plays the role of a frequency. In wavelet analysis, the one-dimensional signal is transformed into the two-dimensional time-frequency plane, which tells us when (parameter b) which frequency (parameter 1/a) occurs. Other time-frequency transformations exist, for instance the windowed Fourier transform or the Gabor transform (Feichtinger & Strohmer, 1998). However, only wavelet analysis is capable of characterizing local singularities because it has no a priori limit to its time resolution.
For our analysis, we used the progressive Morlet wavelet, defined by
$\Psi(t) = e^{i \omega_0 t}\, e^{-t^2/2}$  (2)
with $a = \omega_0 / \omega$ and ω0 as internal frequency. It is an oscillating wavelet, i.e., the parameter a changes the average frequency of the scaled wavelet. The Gaussian envelope given by $e^{-t^2/2}$ is illustrated in Figure 2 together with the oscillating part of the wavelet. We checked that our findings do not depend on this particular choice of wavelet by using the Cauchy wavelet in comparison. The results are not sensitive to the choice of wavelet as long as it respects generic properties such as localization in time and frequency.
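A minimal numerical sketch of Equations (1) and (2) follows. The 1/a normalization and the direct (slow) evaluation of the integral are assumptions of this sketch; conventions differ between texts, and optimized implementations would use FFT-based convolution instead:

```python
import numpy as np

OMEGA0 = 5.336  # internal frequency used in the paper

def morlet(t, omega0=OMEGA0):
    """Progressive Morlet wavelet, Equation (2): a complex oscillation
    exp(i*omega0*t) inside the Gaussian envelope exp(-t^2/2)."""
    return np.exp(1j * omega0 * t) * np.exp(-t**2 / 2)

def cwt(signal, dt, scales, omega0=OMEGA0):
    """Continuous wavelet transform by direct evaluation of Equation (1).

    Returns a complex array of shape (len(scales), len(signal)).
    The 1/a normalization is an assumption; conventions differ.
    """
    signal = np.asarray(signal, dtype=float)
    t = np.arange(len(signal)) * dt
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for i, a in enumerate(scales):
        for j, b in enumerate(t):
            # conjugated, shifted, and dilated wavelet under the integral
            w = np.conj(morlet((t - b) / a, omega0))
            out[i, j] = np.sum(w * signal) * dt / a
    return out
```

The scale corresponding to a frequency f is a = ω0/(2πf), so a 10 Hz component of a 500 Hz recording produces large |W| at the scale OMEGA0/(2π·10).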
Figure 2. The Morlet wavelet at its smallest internal frequency ω0 = 5.336. The oscillations of the real part R(t) and the imaginary part I(t) of the complex-valued wavelet are enclosed in the positive and negative modulus of Ψ(t).

Singularity detection

For the analysis of microsaccades, we exploit the idea that microsaccades are more irregular than the background movement. In real data, however, perfect singularities cannot be observed. Perfect singularities are physiologically impossible because they would require infinite accelerations. We rather expect smoothed singularities: shapes which at large scales look singular, whereas at small scales physiological limitations enforce a more regular behavior.
A pure singularity gives rise to a cone-like structure of strong wavelet coefficients with its tip at the time point where the singularity occurs. A smoothed singularity behaves the same at all scales that are large compared to the smoothing scale. In Figure 3, such small cones that cross the whole frequency range can be identified by eye.
Numerically, singularity detection with wavelets is usually done using the so-called method of maximum modulus lines (see Marr & Hildreth, 1980; Witkin, 1983). A maximum modulus line is a line in the time-scale plane on which the modulus of the wavelet transform has a local maximum with respect to small variations in b around b0, such that
$\left.\frac{\partial}{\partial b} \left|\mathcal{W}_\Psi s(a, b)\right|\,\right|_{b = b_0} = 0$  (3)
Connecting such points gives the maximum modulus lines. It can be shown that if a signal has a singularity at a point, then there is a maximum modulus line which at small scales converges towards the location of the singularity (e.g., Mallat & Hwang, 1992). For smoothed singularities, the maximum modulus line may end at a scale which is about the smoothing scale of the singularity. For this reason we consider those maximum modulus lines which run from a fixed highest frequency (smallest scale) to a fixed lowest frequency (largest scale). The estimated position of the singularity is simply the small-scale end of the corresponding maximum modulus line.
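The two steps above, finding local maxima of the modulus per scale (Equation (3)) and chaining them across scales, can be sketched as follows. The greedy nearest-neighbor linking and its tolerance parameter are simplifying assumptions; the paper does not spell out the linking algorithm:

```python
import numpy as np

def modulus_maxima(W):
    """Boolean mask of local maxima of |W(a, b)| along the time axis b
    (a discretized version of the condition in Equation (3));
    one row of W per scale."""
    m = np.abs(W)
    mask = np.zeros(m.shape, dtype=bool)
    mask[:, 1:-1] = (m[:, 1:-1] > m[:, :-2]) & (m[:, 1:-1] >= m[:, 2:])
    return mask

def maximum_modulus_lines(mask, tol=2):
    """Chain modulus maxima across scales into maximum modulus lines.

    Rows of `mask` are assumed ordered from smallest to largest scale;
    the greedy linking with tolerance `tol` (in samples) is a
    simplification.  Only lines traversing every scale survive,
    mirroring the requirement that a line crosses the whole frequency
    range.  The first entry of each line is the small-scale end, i.e.
    the estimated singularity position.
    """
    lines = [[int(b)] for b in np.flatnonzero(mask[0])]
    for row in mask[1:]:
        cand = np.flatnonzero(row)
        kept = []
        for line in lines:
            if cand.size == 0:
                continue
            d = np.abs(cand - line[-1])
            if d.min() <= tol:
                line.append(int(cand[d.argmin()]))
                kept.append(line)
        lines = kept
    return lines
```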
Figure 3b shows a typical example of a wavelet transform for the horizontal component of fixational eye movements. The maximum modulus lines in the (a, b)-plane are highlighted in red.
Taking previous work into consideration (Ditchburn & Ginsborg, 1952; Krauskopf, Cornsweet, & Riggs, 1960), microsaccades are generally binocular, conjugate eye movements, and we therefore consider only binocular singularities. This means that the position of a singularity in one eye may not differ by more than τ from the position of a singularity in the other eye; i.e., a microsaccade at time t0 in one eye must have a simultaneously appearing microsaccade in the other eye within the time window (t0 − τ, t0 + τ). We will refer to this criterion as the binocularity criterion.
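A sketch of this matching step, assuming event times in seconds and the τ = 30 ms window reported in the Results; the greedy one-to-one pairing in temporal order is an assumption of this sketch:

```python
import numpy as np

def binocular_matches(t_left, t_right, tau=0.03):
    """Binocularity criterion: pair singularity times (in seconds) from
    the two eyes that lie within tau = 30 ms of each other.  The greedy
    one-to-one matching in temporal order is a simplification.
    """
    t_left = np.sort(np.asarray(t_left, dtype=float))
    t_right = np.sort(np.asarray(t_right, dtype=float))
    pairs = []
    j = 0
    for tl in t_left:
        # skip right-eye events that lie before the (tl - tau) window
        while j < len(t_right) and t_right[j] < tl - tau:
            j += 1
        if j < len(t_right) and abs(t_right[j] - tl) <= tau:
            pairs.append((float(tl), float(t_right[j])))
            j += 1  # each right-eye event is used at most once
    return pairs
```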
In this study, we restrict the analysis to the horizontal component of fixational eye movements. Previous work suggested that microsaccades show a preference for horizontal orientation (Engbert & Kliegl, 2003).

Microsaccade characterization

In the previous section we have shown how the continuous wavelet transform helps us to detect singularities in fixational eye movement trajectories. We use this information to extract an area around these singularities and investigate characteristics of the eye's position, e.g., the shape of a microsaccade. For each of the binocular singularities detected with the method described in the previous section, we extract K data samples corresponding to an epoch of the time series around the location of the singularity. We call this segment a signal snippet. To investigate whether typical shapes for microsaccades exist in fixational eye movement data, we used principal component analysis (PCA; see Jolliffe, 2002; Smith, 2002, for a short tutorial) to systematically describe the large variability of all possible microsaccadic shapes. Principal component analysis represents a given data set in a reference frame whose dimensions, the principal components, are such that the first accounts for as much of the variance in the data as possible, the second for as much as possible of the remaining variance, and so on. All pci together represent an empirical orthonormal system. In addition to the principal components pci, we obtain a measure of the importance of each dimension in relation to the others, given by the singular values s. One can then rewrite the shape of a microsaccade (ms) as a linear combination of these principal shapes:
$ms = c_0\, pc_0 + c_1\, pc_1 + \cdots + c_{K-1}\, pc_{K-1}$  (4)
where pci is the ith principal component, ci is the coefficient that quantifies the contribution of this ith shape, and K is the length of a signal snippet. Given this interpretation of the principal components obtained for fixational eye movements, we use principal shapes and principal components as synonyms.
Before analyzing our data for their principal components, we need to preprocess the input data. First, we remove the constant signal offset by subtracting the mean value from each signal snippet, i.e.,
$\tilde{x}_i^{(l)} = x_i^{(l)} - \frac{1}{K} \sum_{j=1}^{K} x_j^{(l)}$  (5)
with l indicating one individual signal snippet. Then we compose a matrix M of dimension N × K, with N given by the total number of detected singularities and K the length of each snippet. We subtract the ensemble mean from M, i.e.,
$\tilde{m}_{ij} = m_{ij} - \frac{1}{N} \sum_{k=1}^{N} m_{kj}$  (6)
with mij being the elements of the matrix M. Using singular value decomposition (SVD; see Venegas, 2001) we write $\tilde{M}$ as a product of three matrices
$\tilde{M} = V\, S\, U^{T}$  (7)
where U is an orthogonal K × K matrix which contains as columns the orthonormal vectors that represent the orthonormal system for all row vectors of $\tilde{M}$. The matrix V is an orthogonal N × N matrix whose columns represent the orthonormal system for all column vectors of $\tilde{M}$, and S is a rectangular, diagonal N × K matrix containing the K singular values on its diagonal. After SVD, the columns of U contain the principal components pci, i = 0, ..., K−1, which best describe the collection of the singularities along the K dimensions. The diagonal entries of S, namely s0,0, ..., sK−1,K−1, give us a measure of the importance of each pci. Therefore we evaluate
$\sigma_i = \frac{s_{ii}^2}{\sum_{h=0}^{K-1} s_{hh}^2}$  (8)
for i = 0, ..., K−1. With this measure we are able to reduce the dimensionality of the obtained orthonormal system to a lower complexity while still retaining sufficient information to describe the variability of the underlying functions to a high level.
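Equations (5)-(8) amount to the following pipeline, sketched here with numpy. The mapping of the paper's U, S, V onto numpy's SVD output is an assumption of this sketch (numpy factors a matrix as U_n diag(s) Vt, so the principal shapes appear as rows of Vt):

```python
import numpy as np

def principal_shapes(snippets):
    """PCA of signal snippets following Equations (5)-(8).

    snippets: array of shape (N, K), one row per detected singularity.
    Returns (pcs, kappa, rho, variance) where the rows of pcs are the
    principal shapes pc_i, kappa holds the per-snippet means of
    Equation (5), rho the ensemble mean of Equation (6), and variance
    the importance measure of Equation (8).
    """
    M = np.asarray(snippets, dtype=float)
    kappa = M.mean(axis=1, keepdims=True)   # Equation (5): snippet offsets
    M = M - kappa
    rho = M.mean(axis=0, keepdims=True)     # Equation (6): ensemble mean
    Mt = M - rho
    # Equation (7): numpy factors Mt = U_n @ diag(s) @ Vt, so the paper's
    # principal shapes appear here as the rows of Vt (an assumption about
    # how the two conventions map onto each other).
    _, s, Vt = np.linalg.svd(Mt, full_matrices=False)
    variance = s**2 / np.sum(s**2)          # Equation (8): importance
    return Vt, kappa.ravel(), rho.ravel(), variance
```

On synthetic snippets built from two underlying shapes, the first two variance entries sum to nearly one, mirroring the ~95% figure reported in the Results.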
Figure 3. One horizontal fixational eye movement trajectory in position-time and time-frequency representation. The markers indicate the positions of monocular (red dots and arrows) and binocular (green dots and arrows) singularities. The modulus of the wavelet transform allows the identification of maximum modulus lines. The time point at which a singularity occurs is taken at the highest frequency. The positions of maximum modulus lines match candidates in the time-position trajectory which, e.g., by visual inspection, would be identified as microsaccades.
In the remainder of this work we reconstruct the shape of binocular singularities with the principal components. We will see how these representations vary between subjects.
In summary, we identify binocular singularities that give us candidates for binocular microsaccades in our fixational eye movement study. The representation of a signal snippet with principal components opens the discussion about the importance of each component's contribution to the variability of microsaccadic eye movements. This will be discussed in the Results section below.

Experiment

Experimental data presented here was published in earlier work (Mergenthaler & Engbert, 2007; Engbert & Mergenthaler, 2006).

Participants

Twenty-four participants with an average age of 22 years (range: 19 to 51 years) took part in the experiment. All participants had normal or corrected-to-normal vision. The experiment was performed in accordance with the Declaration of Helsinki.

Task

Participants had to fixate a black square on a white background (3-by-3 pixels, corresponding to a spatial extent of 7.2 arc min, on a computer display; Iiyama Vision Master Pro 514, 40 by 30 cm, 100 Hz, 1024 by 768 pixels). They were required to perform 30 trials of 20 s each. To reduce loss of data, the participants were asked to avoid blinking during each trial. In addition, we monitored for eye blinks, and trials with occurring blinks were repeated. Every fixation trial was followed by the presentation of a photograph for 10 s, allowing participants to relax and perform inspection saccades or blinks. A typical trajectory from a trial is shown in Figure 4.
Figure 4. Representation of the trajectories in a fixational eye movement trial for both eyes. Recorded with the EyeLink II system, we obtained the horizontal (x-axis) and vertical (y-axis) position of the (a) left and (b) right eye. The trajectories are corrected to center.

Eye movement recording

Eye movements of our participants were tracked with a head-mounted eye tracker (EyeLink II, SR Research, Osgoode, Ontario, Canada). They were recorded binocularly with a sampling rate of 500 Hz and a root-mean-square spatial resolution for a dark pupil of better than 0.01° of visual angle. Participants were seated on a chair with their head placed on a chin rest. The viewing distance to the computer screen was 50 cm.

Results

The method of singularity detection identified segments of fixational eye movements that represent candidates for microsaccades. Figure 3 displays the wavelet transform of a time series of one single trial of fixational eye movement. All local maximum modulus lines which passed without interruption between 20 Hz and 50 Hz and met the condition in Equation (3) with at most ε = 5 ms at 20 Hz are included in the analyses. We have chosen this frequency range because the left and right eyes are well correlated over this range. Additionally, we want to work above a threshold of 20 Hz, as the maximum modulus lines of the wavelet transform would otherwise be influenced by modulus maxima of other structures present in our data at lower frequencies. A binocular singularity is defined by the time points at which a singularity is detected in the left and the right eye; the time points are allowed to differ by at most 30 ms. In Figure 3a we have marked the positions of singularities detected in the wavelet plane. In Table 1 we summarize the number of detected monocular singularities as well as binocular singularities per participant for the left and right eye, respectively.
In the total of 682 trials, we detected 35531 and 35066 singularities in the left and right eyes, respectively. The difference in the number of detections between eyes is lower than 1.3%, indicating good agreement between the microsaccadic processes in both eyes. After application of the binocularity criterion described above, we retain a total of 16947 binocular singularities. The mean rate of binocular singularities is 1.2 per second with a standard deviation of 0.5. In this study, seven participants contributed less than one binocular singularity per second in their fixational eye movements. As the number of singularities detected in their eye movement trials is in agreement with those of other participants, their results are suggestive of monocular events.
Figure 5. Representation of a time series snippet around the location of a singularity with its principal shapes as processed by the principal component analysis. The left side of this equation-like representation is the horizontal position in a 60 ms time window around an identified singularity. Here, it is defined by the linear combination of principal shapes. We identified that two shapes are already sufficient to represent roughly 95% of the variance contained in each individual shape. The coefficients c0, c1, ... are individual for each binocular singularity and quantify the contribution of the corresponding shape to the individual shape.
Next, we investigate the signal snippets around the detected binocular singularities for common features. Equation (4) enables us to rewrite each shape as a linear combination of reliably measured basic shapes. The contribution of each is measured by the individual coefficients for the shape. We have taken snippets of the fixational eye movement trajectory of length K = 31 around each binocular singularity position, as we are interested in the 30 ms before and after the detected time point. We preprocess the data as described in the section Microsaccade characterization and obtain a representation as shown in Figure 5, which is a visual representation of the components in Equation (4).
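The snippet extraction itself can be sketched as follows, assuming sample indices for the detected positions and the 500 Hz sampling rate of the recordings (so that K = 31 samples span the 60 ms window); skipping events too close to the trial borders is an assumption of this sketch:

```python
import numpy as np

def extract_snippets(trajectory, positions, K=31):
    """Cut K-sample windows centered on detected singularity positions.

    At the 500 Hz sampling rate of the recordings, K = 31 samples cover
    the 60 ms analysis window (30 ms before and after the event).
    `positions` are sample indices; events too close to the trial
    borders are skipped (a simplification of this sketch).
    """
    half = K // 2
    return np.array([trajectory[p - half : p + half + 1]
                     for p in positions
                     if half <= p < len(trajectory) - half])
```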
The PCA of the detected snippets reveals that the first two principal components pc0 and pc1 account for roughly 95% of the variability of microsaccadic shapes (compare Equation (8), which measures the importance of each single pci or combination thereof). Individually, pc0 accounts for 82.7% and 81.2% and pc1 for 12.2% and 13.4% of the variance in the left and right eye, respectively. We therefore restrict our analysis to these first two principal shapes. We decompose each single microsaccade ms(l) into the following terms:
  • a linear combination of pc0 and pc1
  • a vector ρ, representing the mean of the whole ensemble for each eye (see Equation (6))
  • a scalar κ(l), representing the mean of each snippet (see Equation (5))
  • a small residual vector r capturing numerical errors
Table 1. Rates of detected singularities in the horizontal eye movement in our fixation task experiment. The number of binocular singularities is a subset of all detections. The total rates are given as mean ± standard deviation.

Participant | Trials | Singularity rate left [1/s] | Singularity rate right [1/s] | Binocular rate [1/s]
1     | 30  | 2.6 | 2.7 | 1.5
2     | 29  | 3.3 | 3.3 | 2.4
3     | 30  | 2.7 | 2.8 | 1.4
4     | 30  | 2.2 | 2.1 | 0.3
5     | 22  | 2.3 | 2.3 | 0.6
6     | 30  | 2.8 | 2.9 | 1.8
7     | 30  | 2.8 | 2.8 | 1.7
8     | 30  | 2.9 | 2.8 | 1.7
9     | 30  | 2.1 | 2.0 | 0.5
10    | 17  | 2.6 | 2.5 | 1.0
11    | 28  | 2.5 | 2.4 | 1.0
12    | 30  | 2.4 | 2.4 | 0.8
13    | 29  | 2.3 | 2.3 | 0.6
14    | 30  | 2.6 | 2.5 | 1.2
15    | 29  | 3.1 | 2.9 | 1.8
16    | 30  | 2.7 | 2.5 | 1.2
17    | 29  | 2.4 | 2.4 | 0.8
18    | 23  | 2.6 | 2.5 | 1.4
20    | 29  | 2.8 | 2.7 | 1.7
21    | 29  | 2.9 | 2.9 | 1.9
22    | 30  | 2.1 | 2.1 | 0.3
23    | 29  | 2.6 | 2.5 | 1.2
24    | 30  | 2.8 | 2.7 | 1.7
25    | 29  | 2.5 | 2.5 | 1.2
Total | 682 | 2.6 ± 0.3 | 2.6 ± 0.3 | 1.2 ± 0.5
Except for the residual vector, all these components are directly computed from the data. Now, we write our model for a typical microsaccade by the linear combination
$ms^{(l)} = c_0^{(l)}\, pc_0 + c_1^{(l)}\, pc_1 + \rho + \kappa^{(l)} + r^{(l)}$  (9)
where l denotes the index of each individual microsaccade. The shapes of pc0 and pc1 are shown in Figure 6a and 6b, respectively.
The first principal shape pc0 represents a movement of the eye with a short linearly increasing part and, more importantly, an overshoot. This underlines the significance of this shape property and makes it a prominent marker for microsaccadic movements. The second component pc1 is a bump whose contribution to the microsaccade shape appears to regulate the height of the overshoot.
Figure 6. Shapes of the first two principal components which enter our model for a microsaccade. (a) pc0 is a step-like shape which tends to return after reaching the maximum amplitude. This overshoot, typical for microsaccades, dominates this shape together with the almost linearly increasing part. (b) The shape of the second component pc1 is bump-like. It determines how much overshoot each microsaccade has. The left (blue solid line) and right (red dashed line) eye agree in the shape of the first two principal components.
For an investigation of individual microsaccade shapes, we need to measure how much each principal component contributes to the individual microsaccade shape. We quantify this by computing the projection of each microsaccade ms(l) on pc0 and pc1 as follows
$c_i^{(l)} = \left\langle ms^{(l)},\, pc_i \right\rangle, \quad i = 0, 1$  (10)
with |ms(l)| = 1 and ⟨a, b⟩ = aTb denoting the scalar product of column vectors. Next, we represent the coefficient pairs (c0, c1) in a coordinate system. The axes are given by the first and second principal shapes, which means that each microsaccade shape is plotted according to its reconstruction by the principal shapes. An example representation is shown in Figure 7.
Figure 7. Representation of each microsaccade candidate in the pc0-pc1 coordinate system. Each dot is given by (c0,c1). This coordinate system shows the contribution of both model components. We take as binocular microsaccades those binocular singularities whose variability is described to 80% or more by our model (red dots between the inner blue dashed and the outer green solid circle).
With respect to these coefficient pairs, we define microsaccades as those binocular shapes whose variability is described to more than 80% by the first two principal components. Expressed as an equation, we write
c0(l)² + c1(l)² ≥ 0.8(11)
A geometrical interpretation of this condition is shown in Figure 7. Every point, each marking one binocular shape, between the inner and the outer ring is modeled as a binocular microsaccade. To investigate the probabilities for a certain shape to occur in our trials, we reduce the two-dimensional coefficient distribution to a one-dimensional distribution by taking the angle between the c0-axis and the vector pointing to the (c0,c1) pair, and obtain α(l) by
α(l) = arctan2(c1(l), c0(l))(12)
With the Gaussian kernel density estimation shown in Figure 8, we discover that the dominating shape of a microsaccade is given by pc0, a steplike shape including an overshoot. This result holds for all participants. The widths of the peaks indicate how much the second component pc1 varies the overshoot height between participants.
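The path from coefficient pairs to the density in Figure 8 can be sketched as below; the (c0, c1) values are hypothetical, and a plain Gaussian kernel density estimate stands in for the estimator used by the authors:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical coefficient pairs (c0, c1), one per binocular singularity
c0 = rng.normal(0.9, 0.05, 200)
c1 = rng.normal(0.1, 0.10, 200)

# Equation (11): keep shapes explained to at least 80% by the two components
keep = c0**2 + c1**2 >= 0.8

# Equation (12): angle between the c0-axis and the (c0, c1) vector
alpha = np.arctan2(c1[keep], c0[keep])

# one-dimensional Gaussian kernel density estimate over the angles
grid = np.linspace(-np.pi, np.pi, 361)
bw = 0.2                                  # kernel bandwidth (assumed)
z = (grid[:, None] - alpha[None, :]) / bw
density = np.exp(-0.5 * z**2).sum(axis=1) / (alpha.size * bw * np.sqrt(2 * np.pi))
peak_angle = grid[np.argmax(density)]     # near 0 when pc0 dominates
```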
At the bottom of Figure 8 we have selected, for each participant, the most probable shapes in one direction and in the opposite direction. The individual contributions of pc0 and pc1 to these shapes are marked for each participant. Apparently, interindividual differences are not based on the shapes of microsaccades but on the variation of their overshoots and, therefore, on the precision of the microsaccadic eye movement.
To sum up the results: given all binocular singularities, the first two principal components explain roughly 95% of the variance of the microsaccade shapes, and these two shapes define our microsaccade model. The steplike shape with an overshoot represents the dominant shape of microsaccadic eye movement. Additionally, the second component measures the overshoot and furthermore provides a criterion that admits a comparison of microsaccades between different participants. It does not yield an absolute measurement of microsaccade or overshoot length but a relative parameter between microsaccade amplitude and overshoot. Our model is capable of describing microsaccadic eye movements and lets us quantify characteristic statistics between individual participants.

Singularity detection and characterization in amplitude-adjusted surrogate data

In this section, we validate the applicability of our detection and characterization methods. First, we check the reliability of singularities detected with the continuous wavelet transform by comparing the original data with time series that mimic properties of the original fixational eye movement data. Second, we support the claim that the identified shapes of the first principal components are typical for microsaccadic eye movements.
Figure 8. (Upper graph) Gaussian kernel density estimation for binocular microsaccades that fit our model and (lower graph) representation of the most probable shape combination of pc0 and pc1 for each individual participant. Each microsaccade coefficient pair (c0,c1) is transformed to the one-dimensional α(l), with l being the index of a microsaccade. The density estimation reveals the dominance of the first principal shape pc0 (localization around 0 and π) over pc1. The lower graph shows the distribution of the most probable shapes in the left and right direction and supports the former statement. This result holds for all participants. The contribution of the second component pc1 to the microsaccade shape differs between them, which we see in the different widths of the peaks.
We generate time series that mimic properties of fixational eye movements while destroying microsaccades by applying an appropriate surrogate data generation method. In fixational eye movement studies we observe persistent behavior on a short time scale (Engbert & Kliegl, 2004; Engbert & Mergenthaler, 2006), which is reflected in a positive autocorrelation function of the velocities for small lags. We need to reject the null hypothesis that positively autocorrelated samples in the drift cause the high-velocity epochs detected as singularities in fixational eye movements. A surrogate data type that allows testing this null hypothesis is amplitude-adjusted phase-randomized surrogate data (Theiler, Galdrikian, Longtin, Eubank, & Farmer, 1992), which maintains the velocity distribution and approximates the autocorrelation function. The velocity of our fixational eye movements is obtained as in Engbert and Kliegl (2003) via
v(t) = (s(t + 2∆t) + s(t + ∆t) − s(t − ∆t) − s(t − 2∆t)) / (6∆t)(13)
where s(t) is the signal at position t and ∆t = 0.002 s. The generation of amplitude-adjusted surrogates is split into the following steps:
  1. Sort v and obtain a rank series r of v.
  2. Generate a series g of Gaussian distributed random numbers with the same length as v, sort it, and rearrange it according to the rank series r. In this way, we generate a time series h that is a rescaled version of v whose sample amplitudes follow a normal distribution.
  3. Transform h to Fourier space and obtain h̃.
  4. Randomize the phase: h̃φ(ω) = h̃(ω) e^(iφ(ω)), where φ is a series of uniformly distributed random numbers between −π and π with identical values for positive and negative frequencies.
  5. Calculate the inverse Fourier transform of h̃φ to obtain ĥ.
  6. Obtain a rank series of ĥ and rearrange v in accordance with this new rank series.
To return to a position-time series, one cumulatively sums the velocities divided by the sampling frequency. In the following we refer to the result as the surrogates or surrogate data of fixational eye movement time series. We take all 682 trials of surrogates and perform the same analysis as for the original data. The results for the detected singularities are shown in Table 2.
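The six generation steps above can be condensed into a short numpy sketch (an assumed implementation of the Theiler et al. procedure, not the authors' code):

```python
import numpy as np

def amplitude_adjusted_surrogate(v, seed=None):
    """Amplitude-adjusted phase-randomized surrogate of a velocity series v:
    the amplitude distribution is preserved exactly, the autocorrelation
    approximately, while microsaccade waveforms are destroyed."""
    rng = np.random.default_rng(seed)
    v = np.asarray(v, dtype=float)
    n = len(v)
    ranks = np.argsort(np.argsort(v))            # step 1: rank series of v
    h = np.sort(rng.standard_normal(n))[ranks]   # step 2: Gaussianized copy
    h_f = np.fft.rfft(h)                         # step 3: Fourier transform
    phi = rng.uniform(-np.pi, np.pi, len(h_f))
    phi[0] = 0.0                                 # keep the mean component real
    if n % 2 == 0:
        phi[-1] = 0.0                            # keep the Nyquist component real
    h_rand = np.fft.irfft(h_f * np.exp(1j * phi), n=n)  # steps 4-5
    new_ranks = np.argsort(np.argsort(h_rand))
    return np.sort(v)[new_ranks]                 # step 6: rearrange original v

v = np.random.default_rng(1).standard_normal(1000)
s = amplitude_adjusted_surrogate(v, seed=2)
# s contains exactly the original velocity samples in a new order
```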
We explain the higher number of singularities detected in the surrogate data by the conservation of each original velocity sample during surrogate data generation: epochs of high velocities in fixational eye movement data, corresponding to microsaccades, can be split into two or more singularities in the surrogates. This, however, is only true for monocular singularities. We also observe a high number of binocular singularities, on average 10 per trial. This high number results from the probability of randomly co-occurring extended events of length 2τ. Co-occurring means: a singularity in one eye happens within the window of τ milliseconds before or after a singularity in the other eye. To estimate the probability of this co-occurrence, we take a monocular time series of length T and a number of monocular singularities N.
As each singularity is in the center of a 2τ window and at least one sample ∆t apart, a significant number of data samples are possible candidates for a co-occurrence. Thus, the probability to observe a binocular event E by chance is given by
p(E) = 2τN / T(14)
which is the ratio of the time belonging to possible co-occurrence candidates to the full trajectory. For the presented data the values are: τ = 30 ms, ∆t = 2 ms, T = 20000 ms, and N = 57, which is the average number of detected monocular singularities in our surrogate data. Inserting these values, one gets p(E)·N ≈ 10 binocular events simply by chance in our surrogate data. This result is in very good agreement with the number of binocular events observed in the surrogate data: on average 10. We therefore argue that the number of binocular events in surrogates mainly depends on the probability p(E) of random co-occurrence, which is set by the time frame we use to define an event as binocular.
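Reproducing this estimate numerically with the values quoted in the text (reading Equation (14) as the ratio of co-occurrence-window time to trial time):

```python
# tau: binocularity window half-width (ms), T: trial length (ms),
# N: average number of monocular singularities per surrogate trial
tau, T, N = 30.0, 20000.0, 57

p_E = 2 * tau * N / T          # chance probability of a binocular co-occurrence
expected_binocular = p_E * N   # expected chance binocular events per trial
# p_E ≈ 0.171 and expected_binocular ≈ 9.7, i.e. roughly 10 per trial
```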
These randomly occurring binocular singularities, which can also happen in the original data, cannot be explained to more than 80% by the first principal components pc0 and pc1. Thus, applying Equation (11) filters out random binocular singularities in the analysis of the original data. Further processing of the binocular singularities with the principal component analysis (see section Microsaccade characterization) results in the principal components shown in Figure 9. In this case, the first two components explain even more than 95% of the variance in our data, and we choose pc0s and pc1s to reduce our 31-dimensional system of shapes to these two dimensions. The two surrogate components look similar to each other but differ strongly in shape from the principal components obtained for the original data. This becomes obvious when considering combinations of the two components. For surrogates, the principal component pc0s is a smoothed step and does not show any overshoot. The second component pc1s is a very smooth peak-like bump. Both pc0s and pc1s are smoothed versions of singularities, i.e., a step and a single peak as illustrated in Figure 1.
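The PCA step referenced here can be sketched via the singular value decomposition; the steplike toy shapes below are hypothetical stand-ins for the extracted 31-sample singularity windows:

```python
import numpy as np

def principal_shapes(shapes, k=2):
    """Return the first k principal components of a (shapes x samples)
    matrix and the fraction of variance they explain."""
    X = np.asarray(shapes, dtype=float)
    X = X - X.mean(axis=0)                      # center each time sample
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return Vt[:k], float(np.sum(explained[:k]))

# toy ensemble: noisy steps of varying amplitude in a 31-sample window
rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 31)
step = np.tanh(5.0 * t)
shapes = np.array([a * step + 0.05 * rng.standard_normal(31)
                   for a in rng.uniform(0.5, 1.5, 100)])
pcs, var2 = principal_shapes(shapes, k=2)
# for this nearly one-dimensional toy ensemble, var2 is close to 1
```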
Figure 9. Representation of the first principal components’ shapes for the original and the surrogate data. (a) Comparing pc0 and pc0s, the typical overshoot behavior is not present in the smooth step. The left (blue solid line) and right (red dashed line) eye again show approximately the same principal component in the original as well as in the surrogate data (green dotted and magenta dash-dotted line). (b) The second component for the original data, pc1, is a bump, whereas pc1s is a smooth peak shape.
Importantly, pc0 and pc1 of the original data show a directionality in time: they cannot be reversed. The principal components pc0s and pc1s for the surrogate data can be reversed; both fulfill the condition of point symmetry, i.e., f(−t) = −f(t), and do not show any directionality. On the basis of the described differences in pc0 and pc1, we reject the null hypothesis and conclude that the observed binocular singularities in fixational eye movement time series result from high-velocity epochs in fixations with a distinct shape, given by a linear combination of the components pc0 and pc1 of the original data as explained in our model Equation (9) and shown in Figure 6.
Table 2. Rates of the detected monocular and binocular singularities in the surrogates of the horizontal fixational eye movement data sets. L and R denote the left and right eye. The total rates are given as mean ± standard deviation.
Participant   Trials   Singularity rate L [1/s]   Singularity rate R [1/s]   Binocular rate [1/s]
1             30       2.5                        2.6                        0.4
2             29       3.1                        3.1                        0.6
3             30       2.8                        2.7                        0.4
4             30       2.4                        2.2                        0.4
5             22       2.7                        2.5                        0.5
6             30       2.9                        2.9                        0.5
7             30       3.1                        3.0                        0.6
8             30       3.2                        3.2                        0.7
9             30       2.5                        2.5                        0.4
10            17       2.7                        2.6                        0.4
11            28       3.1                        3.0                        0.6
12            30       3.0                        3.2                        0.6
13            29       2.3                        2.3                        0.3
14            30       2.8                        2.4                        0.5
15            29       3.1                        3.0                        0.6
16            30       2.7                        2.5                        0.4
17            29       2.4                        2.3                        0.4
18            23       3.3                        3.3                        0.7
20            29       3.2                        3.2                        0.7
21            29       3.4                        3.3                        0.7
22            30       2.5                        2.4                        0.4
23            29       3.3                        3.3                        0.7
24            30       2.8                        2.7                        0.4
25            29       3.4                        3.3                        0.7
Total         682      2.9±0.3                    2.8±0.4                    0.5±0.1

Discussion

We investigated the hypothesis that microsaccades can be modeled as events of lower regularity and found that the continuous wavelet transform successfully distinguishes microsaccades from the background activity (i.e., drift) of fixational eye movements. Our methods are based on a pre-defined minimal set of parameters (related to maximum modulus lines, binocularity, and the fit to the model). A validation using amplitude-adjusted surrogate data verifies that the almost simultaneously appearing structures of low regularity in fixational eye-movement trajectories cannot be explained by randomly co-occurring autocorrelated samples in both eyes’ drift movements.
In comparison to current methods that use amplitudes (or their derivatives) for the detection of microsaccades, our alternative approach can identify large-scale saccades and microsaccades within one detection procedure, e.g., in eye movements recorded during scene viewing or reading. Methods that use velocities require predefined thresholds, so an analysis either uses two detection runs, first separating saccades and then microsaccades (i.e., one threshold related to the variance in the trajectory for saccade detection and subsequently another threshold related to the remaining variance), or instead uses a threshold defined by the variance obtained in co-recorded fixation-task experiments. Using the property of lower regularity shared by microsaccades and saccades allows the identification of both within the same detection process.
A brief comparison of the performance of the velocity-threshold and continuous wavelet transform methods on eye movement trajectories in a fixation task can be found in Appendix A. The detected microsaccade positions in fixational eye movements look quite similar.
We continued our analysis with a comparison of detected microsaccades between participants and performed a principal component analysis, yielding two main components whose linear combination describes the shapes of microsaccades. These two shapes cover over 94% of the variance present in the shapes of binocular microsaccades. The resulting simple linear model for a typical microsaccade shape agrees convincingly with the microsaccadic shape reported by Zuber et al. (1965), who likewise reported an overshoot as a typical property of a microsaccade (for more recent results showing microsaccades with overshoot in eye position traces recorded with the electromagnetic induction technique, see Hafed, Goffart, & Krauzlis, 2009). The first principal component is a steplike shape with an overshoot, and the second principal component characterizes the overshoot height. Combining both, we can represent microsaccades with all possible overshoot heights, including rare microsaccades that do not share the overshoot shape. A two-dimensional coefficient pair and a measure for the amplitude yield the simplest property set for microsaccades.
Abadi and Gowen (2004) reported four types of saccadic intrusions in their studies of intrusion characteristics. The authors introduced two microsaccadic shapes, the Single Saccadic Pulse (SSP) and the Double Saccadic Pulse (DSP). Additionally, they investigated two sequences of microsaccades comprising two and three subsequent microsaccadic movements, the Monophasic Square Wave Intrusion (MSWI) and the Biphasic Square Wave Intrusion (BSWI). Our microsaccade model can separate SSP and DSP simply by inspecting the more dominant principal shape: if the first principal shape pc0 is dominant, we count an SSP, whereas if the second principal shape pc1 is leading, we see a DSP.
In perspective, an analysis of sequences of microsaccadic shapes allows studies on short- and long-term dependencies as well as investigations with co-registered EEG data, which profit from a better understanding of microsaccades and the induced potentials (Dimigen, Sommer, Hohlfeld, Jacobs, & Kliegl, under revision).
Future work will consider a stronger comparison of the two methods and the possibility of extending the detection method to horizontal and vertical (2D) eye movements and analyzing their interplay in the context of microsaccadic movements. A subsequent classification of microsaccade shapes seems promising for investigating the spatiotemporal dynamics of microsaccades and may lead to a new model for the dynamics of fixational eye movements. An understanding of the latter under the well-defined conditions of fixation-task experiments allows the modeling of eye movements at baseline. Establishing a model at this point will provide a foundation for studying trajectories and reactions to attractions of the eye in more complex experimental setups.

Acknowledgments

We thank Marco Rusconi for valuable discussions, for reading the manuscript, and for his comments. We also thank Hannes Matuschek for supplying the source code for the wavelet transformation in Python™. This research was supported by Deutsche Forschungsgemeinschaft (Research Group 868 “Computational Modeling of Behavioral, Cognitive, and Neural Dynamics“, Grant No. EN 471/3-1).

Appendix A. Comparison between microsaccade detection algorithms

In our study we introduced a new method to detect microsaccades in records of fixational eye movements. We demonstrated the usability of this method with amplitude-adjusted surrogate data. Previous algorithms for the detection of microsaccades were based on velocities of the eye position signal (Engbert & Kliegl, 2003; Engbert & Mergenthaler, 2006; Martinez-Conde et al., 2000). These methods used the property that microsaccades are high-velocity components of fixational eye movements. Table A1 shows the binocular events detected with the velocity-threshold algorithm introduced by Engbert and Kliegl (2003) and with our continuous wavelet transform method for the same data set.
Figure A1. Detections of microsaccades in an example horizontal fixational eye-movement trajectory. The two methods perform quite similarly. The wavelet detection method also identifies small-scale events, compare, e.g., around 1.3 s.
The detected rates for binocular events with the continuous wavelet transform and the velocity-threshold algorithm are significantly correlated (r = 0.96; P < 0.0001). A comparison of the detected time points for binocular events shows an overlap of 74% (the number of binocular microsaccades detected with both methods divided by the number of detections of the continuous wavelet transform method). We counted an event as occurring at the same time point in both methods if the difference was no more than τ = 16 ms.
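The overlap computation can be sketched as follows, with hypothetical detection times and the stated τ = 16 ms matching tolerance:

```python
import numpy as np

def match_events(times_a, times_b, tol=16.0):
    """Count events in times_a (ms) that have a partner in times_b
    within tol milliseconds."""
    times_b = np.sort(np.asarray(times_b, dtype=float))
    matched = 0
    for t in times_a:
        i = np.searchsorted(times_b, t)
        nearby = times_b[max(i - 1, 0):i + 1]    # closest neighbors of t
        if nearby.size and np.min(np.abs(nearby - t)) <= tol:
            matched += 1
    return matched

wt = [120.0, 540.0, 1310.0, 2900.0]   # hypothetical wavelet detections
vt = [118.0, 560.0, 2895.0]           # hypothetical velocity-threshold detections
overlap = match_events(wt, vt) / len(wt)
# 120<->118 and 2900<->2895 match; 540 misses by 20 ms; overlap = 0.5
```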
In Figure A1 we show five seconds of one fixational eye movement trial with marked time points detected by either the continuous wavelet transform or the velocity-threshold method. The detected time points are in good agreement.
A detailed investigation of distinct properties describing microsaccades detected in one but not the other method is in preparation.
Table A1. Detection of microsaccadic events in a fixational eye movement experiment with the continuous wavelet transform (WT) and velocity-threshold (VT) methods. Results are compared after application of the individual settings for each method. The VT algorithm detects just 5% fewer binocular events. The total rates are given as mean ± standard deviation.
Participant   Trials   WT binocular rate [1/s]   VT binocular rate [1/s]   Both methods [1/s]
1             30       1.5                       1.1                       0.9
2             29       2.4                       2.4                       2.0
3             30       1.4                       1.4                       1.1
4             30       0.3                       0.1                       0.1
5             22       0.6                       0.5                       0.3
6             30       1.8                       1.9                       1.5
7             30       1.7                       2.0                       1.4
8             30       1.7                       2.0                       1.4
9             30       0.5                       0.2                       0.2
10            17       1.0                       0.7                       0.4
11            28       1.0                       0.9                       0.8
12            30       0.8                       0.9                       0.5
13            29       0.6                       0.2                       0.1
14            30       1.2                       0.8                       0.6
15            29       1.8                       1.9                       1.4
16            30       1.2                       1.4                       0.9
17            29       0.8                       0.3                       0.2
18            23       1.4                       1.5                       1.1
20            29       1.7                       1.9                       1.5
21            29       1.9                       1.9                       1.7
22            30       0.3                       0.2                       0.2
23            29       1.2                       1.2                       0.9
24            30       1.7                       1.7                       1.4
25            29       1.2                       1.1                       0.9
Total         682      1.2±0.5                   1.2±0.7                   0.9±0.6

References

  1. Abadi, R., and E. Gowen. 2004. Characteristics of saccadic intrusions. Vision Research 44: 2675–2690. [Google Scholar] [CrossRef] [PubMed]
  2. Arneodo, A., G. Grasseau, and M. Holschneider. 1988. On the wavelet transform of multifractals. Physical Review Letters 61: 2281–2284. [Google Scholar] [CrossRef] [PubMed]
  3. Boyce, P. 1967. Monocular fixation in human eye movement. Proceedings of the Royal Society of London. Series B, Biological Sciences 167, 1008: 293–315. [Google Scholar]
  4. Bridgeman, B., and J. Palca. 1980. The role of microsaccades in high acuity observational tasks. Vision Research 20: 813–817. [Google Scholar] [CrossRef] [PubMed]
  5. Ciuffreda, K., and B. Tannen. 1995. Eye movement basics for the clinician. St. Louis: Mosby. [Google Scholar]
  6. Coppola, D., and D. Purves. 1996. The extraordinarily rapid disappearance of entopic images. Proceedings of the National Academy of Sciences of the United States of America 93: 8001–8004. [Google Scholar] [CrossRef]
  7. Daubechies, I., and G. Teschke. 2005. Variational image restoration by means of wavelets: simultaneous decomposition, deblurring and denoising. Applied and Computational Harmonic Analysis 19: 1–16. [Google Scholar] [CrossRef]
  8. Daubechies, I. 1992. Ten lectures on wavelets. SIAM. [Google Scholar]
  9. Diallo, M. S., M. Holschneider, M. Kulesh, F. Scherbaum, and F. Adler. 2006. Characterization of polarization attributes of seismic waves using continuous wavelet transforms. Geophysics 71: 67–77. [Google Scholar] [CrossRef]
  10. Dimigen, O., W. Sommer, A. Hohlfeld, A. Jacobs, and R. Kliegl. Co-registration of eye movements and EEG in natural reading: Analyses and review, under revision.
  11. Ditchburn, R., and B. Ginsborg. 1952. Vision with a stabilized retinal image. Nature 170: 36–37. [Google Scholar] [CrossRef]
  12. Engbert, R. 2006a. Flick-induced flips in perception. Neuron 49: 168–170. [Google Scholar] [CrossRef]
  13. Engbert, R. 2006b. Microsaccades: A microcosm for research on oculomotor control, attention, and visual perception. Progress in Brain Research 154: 177–192. [Google Scholar]
  14. Engbert, R., and R. Kliegl. 2003a. Binocular coordination in microsaccades. In The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research. Edited by J. Hyönä, R. Radach and H. Deubel. pp. 103–117. [Google Scholar]
  15. Engbert, R., and R. Kliegl. 2003b. Microsaccades uncover the orientation of covert attention. Vision Research 43, 9: 1035–1045. [Google Scholar] [CrossRef] [PubMed]
  16. Engbert, R., and R. Kliegl. 2004. Microsaccades keep the eyes’ balance during fixation. Psychological Science 15: 431–436. [Google Scholar] [CrossRef]
  17. Engbert, R., and K. Mergenthaler. 2006. Microsaccades are triggered by low retinal image slip. Proceedings of the National Academy of Sciences of the United States of America 103: 7192–7197. [Google Scholar] [CrossRef] [PubMed]
  18. Feichtinger, H., and T. Strohmer. 1998. Gabor analysis and algorithms: Theory and applications. Birkhauser. [Google Scholar]
  19. Galfano, G., E. Betta, and M. Turatto. 2004. Inhibition of return in microsaccades. Experimental Brain Research 159: 400–404. [Google Scholar] [CrossRef]
  20. Gowen, E., R. Abadi, and E. Poliakoff. 2005. Paying attention to saccadic intrusions. Cognitive Brain Research 25: 810–825. [Google Scholar] [CrossRef] [PubMed]
  21. Gowen, E., R. V. Abadi, E. Poliakoff, P. C. Hansen, and R. C. Miall. 2007. Modulation of saccadic intrusions by exogenous and endogenous attention. Brain Res 1141: 154–167. [Google Scholar] [CrossRef]
  22. Hafed, Z., and J. Clark. 2002. Microsaccades as an overt measure of covert attention shifts. Vision Research 42: 2533–2545. [Google Scholar] [CrossRef]
  23. Hafed, Z., L. Goffart, and R. Krauzlis. 2009. A Neural Mechanism for Microsaccade Generation in the Primate Superior Colliculus. Science 323, 5916: 940–943. [Google Scholar] [CrossRef]
  24. Holschneider, M. 1995. Wavelets: An analysis tool. Oxford University Press. [Google Scholar]
  25. Holschneider, M., and P. Tchamitchian. 1991. Pointwise analysis of Riemann’s nondifferentiable function. Inventiones Mathematicae 105: 157–175. [Google Scholar] [CrossRef]
  26. Horowitz, T. S., D. E. Fencsik, E. M. Fine, S. Yurgenson, and J. M. Wolfe. 2007. Microsaccades and attention: Does a weak correlation make an index? Reply to Laubrock, Engbert, Rolfs, and Kliegl (2007). Psychological Science 18: 367–368. [Google Scholar] [CrossRef]
  27. Horowitz, T. S., E. M. Fine, D. E. Fencsik, S. Yurgenson, and J. M. Wolfe. 2007. Fixational eye movements are not an index of covert attention. Psychological Science 18: 356–363. [Google Scholar]
  28. Jolliffe, I. 2002. Principal component analysis. Springer Verlag. [Google Scholar]
  29. Krauskopf, J., T. Cornsweet, and L. Riggs. 1960. Analysis of eye movements during monocular and binocular fixation. Journal of the Optical Society of America 50: 572–578. [Google Scholar] [PubMed]
  30. Laubrock, J., R. Engbert, M. Rolfs, and R. Kliegl. 2007. Microsaccades are an index of covert attention: commentary on horowitz, fine, fencsik, yurgenson, and wolfe (2007). Psychological Science 18: 364–366. [Google Scholar]
  31. Laubrock, J., R. Kliegl, M. Rolfs, and R. Engbert. 2010. When do microsaccades follow attention? Attention, Perception, & Psychophysics 72: 683–694. [Google Scholar]
  32. Laubrock, J., R. Engbert, and R. Kliegl. 2005. Microsaccade dynamics during covert attention. Vision Research 45: 721–730. [Google Scholar] [CrossRef]
  33. Mallat, S. 1998. A wavelet tour of signal processing. Academic Press. [Google Scholar]
  34. Mallat, S., and W. Hwang. 1992. Singularity detection and processing with wavelets. IEEE Transactions on Information Theory 38: 617–643. [Google Scholar]
  35. Marr, D., and E. Hildreth. 1980. Theory of Edge Detection. Proceedings of the Royal Society of London. Series B. Biological Sciences 207: 187–217. [Google Scholar]
  36. Martinez-Conde, S., S. Macknik, and D. Hubel. 2000. Microsaccadic eye movements and firing of single cells in the striate cortex of macaque monkeys. Nature Neuroscience 3: 251–258. [Google Scholar] [CrossRef] [PubMed]
  37. Martinez-Conde, S., S. Macknik, and D. Hubel. 2004. The role of fixational eye movements in visual perception. Nature Reviews Neuroscience 5: 229–240. [Google Scholar] [PubMed]
  38. Martinez-Conde, S., S. Macknik, X. Troncoso, and T. Dyar. 2006. Microsaccades counteract visual fading during fixation. Neuron 49: 297–305. [Google Scholar]
  39. Martinez-Conde, S., S. L. Macknik, X. G. Troncoso, and D. H. Hubel. 2009. Microsaccades: a neurophysiological analysis. Trends Neurosci 32: 463–475. [Google Scholar]
  40. Mergenthaler, K., and R. Engbert. 2007. Modeling the control of fixational eye movements with neurophysiological delays. Physical Review Letters 98, 13: 138104. [Google Scholar] [CrossRef] [PubMed]
  41. Pritchard, R. M. 1961. Stabilized images on the retina. Scientific American 204: 72–78. [Google Scholar] [CrossRef]
  42. Quiroga, R. Q., Z. Nadasdy, and Y. Ben-Shaul. 2004. Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Computation 16: 1661–1687. [Google Scholar] [CrossRef] [PubMed]
  43. Riggs, L., F. Ratliff, J. Cornsweet, and T. Cornsweet. 1953. The disappearance of steadily fixated visual test objects. Journal of the Optical Society of America 43: 495–501. [Google Scholar] [CrossRef]
  44. Rolfs, M. 2009. Microsaccades: small steps on a long way. Vision Research 49: 2415–2441. [Google Scholar] [CrossRef] [PubMed]
  45. Rolfs, M., R. Engbert, and R. Kliegl. 2005. Crossmodal coupling of oculomotor control and spatial attention in vision and audition. Experimental Brain Research 166: 427–439. [Google Scholar] [CrossRef]
  46. Rolfs, M., J. Laubrock, and R. Kliegl. 2006. Shortening and prolongation of saccade latencies following microsaccades. Experimental Brain Research 169: 369–376. [Google Scholar] [CrossRef]
  47. Rucci, M., and A. Casile. 2004. Decorrelation of neural activity during fixational instability: Possible implications for the refinement of v1 receptive fields. Visual Neuroscience 21: 725–738. [Google Scholar] [CrossRef]
  48. Rucci, M., R. Iovin, M. Poletti, and F. Santini. 2007. Miniature eye movements enhance fine spatial detail. Nature 447: 852–855. [Google Scholar] [CrossRef]
  49. Sinn, P., and R. Engbert. 2009. Saccadic facilitation by modulation of microsaccades in natural backgrounds. Manuscript submitted.
  50. Smith, L. 2002. A tutorial on principal components analysis.
  51. Starzynski, C., and R. Engbert. 2009. Noise-enhanced target discrimination under the influence of fixational eye movements and external noise. Chaos 19, 015112: 1–7. [Google Scholar] [CrossRef] [PubMed]
  52. Theiler, J., B. Galdrikian, A. Longtin, S. Eubank, and J. Farmer. 1992. Testing for nonlinearity in time series: the method of surrogate data. Physica D: Nonlinear Phenomena 58: 77–94. [Google Scholar] [CrossRef]
  53. Troncoso, X. G., S. Macknik, and S. Martinez-Conde. 2008. Microsaccades counteract perceptual filling-in. Journal of Vision 8, 14: 1–9. [Google Scholar] [CrossRef]
  54. Venegas, S. 2001. Statistical methods for signal detection in climate. Danish Center for Earth System Science Report 2: 96. [Google Scholar]
  55. Witkin, A. P. 1983. Scale-space filtering. In Ijcai’83: Proceedings of the eighth international joint conference on artificial intelligence. pp. 1019–1022. [Google Scholar]
  56. Zuber, B. L., L. Stark, and G. Cook. 1965. Microsaccades and the Velocity-Amplitude Relationship for Saccadic Eye Movements. Science 150: 1459–1460. [Google Scholar] [CrossRef]
