Article

Microsaccades, Drifts, Hopf Bundle and Neurogeometry

by
Dmitri Alekseevsky
1,2
1
Institute for Information Transmission Problems, B. Karetnuj per., 19, 127051 Moscow, Russia
2
Faculty of Science, University of Hradec Králové, Rokitanského 62, 500 03 Hradec Králové, Czech Republic
J. Imaging 2022, 8(3), 76; https://doi.org/10.3390/jimaging8030076
Submission received: 31 January 2022 / Revised: 3 March 2022 / Accepted: 4 March 2022 / Published: 17 March 2022

Abstract:
The first part of the paper contains a short review of image processing in early vision, both in statics, when the eyes and the stimulus are stable, and in dynamics, when the eyes participate in fixational eye movements. In the second part, we give an interpretation of Donders' and Listing's laws in terms of the Hopf fibration of the 3-sphere over the 2-sphere. In particular, it is shown that the configuration space of the eye ball (when the head is fixed) is the 2-dimensional hemisphere $S_L^+$, called the Listing hemisphere, and saccades are described as geodesic segments of $S_L^+$ with respect to the standard round metric. We study fixation eye movements (drift and microsaccades) in terms of this model and discuss the role of fixation eye movements in vision. A model of fixation eye movements is proposed that gives an explanation of the presaccadic shift of receptive fields.

1. Introduction

The main task of the visual system is processing and decoding the visual information recorded by the retinal photoreceptors and constructing a model of the external world. The photoreceptors convert the light signal into electric signals which are sent to retinal ganglion cells and then, by a conformal retinotopic mapping, to the LGN, then to the V1 cortex, the V2 cortex, etc. The visual system has a hierarchical structure and consists of many subsystems connected by direct and feedback connections.
The neurogeometry of vision deals with the construction of continuous models of various visual subsystems in terms of differential geometry and differential equations.
There are three levels of models of the visual subsystems:
  • Static, without taking into account time, i.e., under assumption that the eye and the perceived object (stimulus) are stationary;
  • Semi-dynamic, when the stimulus is stationary and the eye is moving;
  • Dynamic, when both the eye and the stimulus are in motion.
Over the past two decades, great progress has been made in understanding the functional architecture of early vision in statics and in constructing neurogeometric models of early vision systems (primary visual cortex V1, hypercolumns), see [1,2,3,4,5,6,7,8,9]. The models are based mostly on results obtained in experiments on anesthetized animals.
In natural vision, the eye always participates in different movements. According to the classical experiments of A. Yarbus [10], compensation of the eye movement leads to the loss of vision of stationary objects within 2–3 s. Moving objects remain visible, albeit poorly. Later experiments showed that the most important phase of the fixation eye movements is the drift; compensation of microsaccades does not lead to loss of vision.
As M. Rucci, E. Ahissar and D. Burr remarked [11]:
“As there are no stationary retinal signals during natural vision, motion processing is the fundamental, basic operating mode of human vision.”
They also note that due to this there is no big difference between semi-dynamic and dynamic vision.
In the first part of the paper, we briefly discuss the main results concerning static vision, which are the starting points for dealing with the dynamic one. Currently, there are some advances in the study of the dynamic case [12,13,14,15], although the description of the visual processes becomes significantly more complicated and new phenomena arise, such as saccadic remapping [16,17], the shift of receptive fields, and the compression of space and time during saccades [18,19]. The main difference between static and dynamic vision is the following. As is generally accepted, in static vision all information comes from the activation of retinal photoreceptors. In dynamic vision, the process of perception is determined by the interaction of the visual information from the retina with the dynamical information about eye movements, coded in the oculomotor system.
Even when the gaze is focused on a stationary point, the eye participates in different types of movements, called fixational eye movements (FEM). For a long time, most neurophysiologists did not pay serious attention to FEM. The situation has changed in the last two decades, see [20]. Both experimental and theoretical works have appeared that substantiate the important role of FEM in vision. Primarily, the works by M. Rucci and his coauthors [11,21,22,23,24,25] contain a detailed and critical analysis of many experimental results about the different types of FEM—tremor, drift and microsaccades—and new ideas about their role in vision.
In the dynamic case, the eye movements are controlled by the oculomotor system, and a copy of the motor command, called corollary discharge or efference copy, is sent from the superior colliculus through the MD thalamus to the frontal cortex. It plays an important role in visual stability, i.e., the compensation of the shift of retinal stimuli and the perception of stable objects as stable; see [26,27,28] for results and discussions on the problem of visual stability.
A deeper understanding of the mechanism of FEM depends on further progress in the description of image processing in the retina and visual cortex and of the oculomotor control of eye movements.
Fixational eye movements are stochastic in nature. Various stochastic models of FEM as a random walk have been proposed, see [29,30,31]. We especially note the works [32,33]. In most works, FEM are modeled by a random walk on the plane or on a lattice in the plane. However, the information about eye rotation contained in the corollary discharge treats the eye as a ball and not as a plane. For a more realistic model of FEM, consistent with the corollary discharge information, we need a more sophisticated model of saccades and drift, in which such movements are considered as rotations of the eye ball. Due to this, it is important to describe the configuration space of the eye.
A priori, the configuration space of the eye ball $B^3$, rotating around its center O, is the orthogonal group $SO(3)$ (which can be thought of as the 3-sphere with identified antipodal points, $SO(3) = S^3/\mathbb{Z}_2$).
A big surprise even for the great physicist and physiologist H. von Helmholtz was the law, discovered in the middle of the 19th century by F.C. Donders and supplemented by J.B. Listing. It states that, when the head is fixed, the real configuration space of eye positions is two-dimensional. More precisely, the direction of the gaze e 1 uniquely determines the position of the eye, described by the retinotopic orthonormal frame ( e 1 , e 2 , e 3 ) . From the point of view of the modern control theory, such a constraint is quite reasonable. The difference between the motion control on the 3-sphere and on a surface is similar to the difference in piloting a plane and driving a car.
One of the main results of the work consists of interpreting Listing's law in terms of a section $s: \tilde S^2 \to S_L^+ \subset S^3$ (which we call Listing's section) of the Hopf bundle $\chi: S^3 \to S^2$ over the punctured sphere $\tilde S^2 = S^2 \setminus \{-i\}$, where i is the direction to the nodal point of the eye sphere $S^2$ (in the standard position) and $-i$ is the direction to the center of the fovea. Listing's section is an open 2-dimensional hemisphere $S_L^+$ of the 3-dimensional sphere $S^3$, identified with the group $H_1$ of unit quaternions. This simple description of Listing's law provides a way to construct more realistic stochastic models of FEM and of the oculomotor system that controls eye movements. For example, denote by $S_E^2 = \partial B^3$ the eye sphere in the standard position. Let $A, B \in S_E^2$ be two points and $a = s(A)$, $b = s(B)$ the corresponding points of Listing's hemisphere $S_L^+$. Then the saccade with the initial gaze direction A and the final gaze direction B is the segment $ab \subset S_L^+$ of the unique geodesic $\gamma_{a,b}$ (the great semicircle) of Listing's hemisphere $S_L^+$ (with the standard metric) through the points a, b. The corresponding evolution of the gaze is the segment $AB = \chi(ab)$ of the circle $S^1_{A,B} = \tilde S_E^2 \cap \Pi(A,B,-i)$ (with the deleted point $-i$), which is the section of the punctured sphere $\tilde S_E^2$ by the plane generated by the points $A, B, -i$. So the space of saccades is the direct product $S_L^+ \times S_L^+$ of two copies of Listing's hemisphere.
We propose a deterministic model of fixation eye movements (drift and microsaccades) in terms of Listing's hemisphere. The microsaccades are considered as a mechanism of remapping the visual information, which depends on the choice of a salient point as the next gaze target. It gives a simple description of the presaccadic shift of receptive fields. We use this model to define a distance between point stimuli A, B. Then we briefly recall the basic facts of diffusion geometry, initiated by R.R. Coifman and S. Lafon [34,35], and discuss the extension of the model to the stochastic case, when the drift is considered as a random walk on Listing's hemisphere, in the framework of diffusion geometry.

2. Information Processing in Early Vision in Statics and the Functional Structure of the Retina and Primary Visual Cortex

In statics, visual information is coded in the firing of the retinal photoreceptors, cones and rods. In a first approximation, the input function of the retina may be considered as a function $I(x,y)$ on the retina which describes the density of energy of the light recorded by the photoreceptors. The visual information is primarily processed in the retina and then sent to the primary visual cortex V1 and further to V2, V3 and other visual systems for further processing and decoding. The visual information is coded by visual neurons which work as filters, that is, functionals on the space of input functions whose value depends only on the restriction of the input function to a small domain $D \subset R$ of the retina, called the receptive field (RF). Linear neurons work as linear filters, i.e., linear functionals, described as the integral $\int_D I(x,y)\,W(x,y)\,d\mathrm{vol}$ of the input function with some weight $W(x,y)$, called the receptive profile. In reality, most visual neurons have a spatiotemporal character, that is, their response depends also on a time integration of the input function.
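To make the notion of a linear filter concrete, the following Python sketch computes the discretized response $\int_D I(x,y)\,W(x,y)\,d\mathrm{vol}$ on a grid; the Gaussian weight and the step stimulus are arbitrary illustrative choices, not a specific physiological receptive profile.

import numpy as np

def linear_filter_response(I, W, dx, dy):
    # discretized integral of I(x,y) * W(x,y) over the receptive field
    return np.sum(I * W) * dx * dy

x = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, x)
dx = x[1] - x[0]

I = np.where(X > 0.0, 1.0, 0.0)              # a step edge as a toy stimulus
W = np.exp(-(X**2 + Y**2) / 0.1)             # placeholder Gaussian receptive profile
print(linear_filter_response(I, W, dx, dx))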

2.1. The Eye as an Optical Device and Input Function

The eye is a transparent ball $B^3$ together with a lens L which focuses light rays onto the retina R, see Figure 1. The retina occupies a big part of the boundary sphere $S^2 = \partial B^3$ of the eye ball. The lens is formed by the cornea and the crystalline lens. We will assume that the optical center of the lens, or nodal point N, belongs to the eye sphere $S^2$.
A beam of light emitted from a point A of a surface $\Sigma$ and passing through the nodal point N is not refracted and falls on the point $\bar A = AN \cap R$ of the intersection of the retina R with the ray AN. A beam from the point A which passes through any other point of the lens is focused and comes to the same point $\bar A \in R$. So we get a central projection of the surface $\Sigma$ to the retina R with center N, given by the map
\[ \pi : M \ni A \mapsto \bar A = AN \cap R \in R, \]
where $\bar A = AN \cap R$ is the second point of intersection of the ray AN with the retina R, see Figure 2. The central projection generically is a local diffeomorphism.
Note that if M = Π is the frontal plane (orthogonal to the line of sight) which is far enough away compared to the size of the eyeball, then the central projection π : Π R S 2 is approximately a conformal map.
The (density of) energy of light $I_R(\bar A)$ coming from a point $A \in \Sigma$ of the surface to the point $\pi(A) = \bar A$ of the retina is approximately proportional to the (density of) energy of light $I(A)$ emitted from the point A. So the input function
\[ I_R : R \to \mathbb{R}_{\ge 0}, \quad \bar A \mapsto I_R(\bar A) \]
of the retina (where $\mathbb{R}_{\ge 0}$ is the set of non-negative numbers) contains information about the density $I(A)$ of energy of light emitted from the surface $\Sigma$. The aim of static monochromatic vision is to extract from the input function $I_R$ information about the geometry of the surface. We will not speak about other characteristics of the recorded light, for example, the spectral properties, which are responsible for color vision. It seems that polarisation plays no role in human vision.
It was discovered by D. Hubel and T. Wiesel that the most important characteristics of the detected stimulus are the contours, i.e., the level sets of the input function $I_R(x,y)$ with large gradient. J. Petitot [5] gave a precise geometrical formulation of this claim as the statement that simple neurons of the V1 cortex detect infinitesimal contours, i.e., 1-jets of contours, considered as non-parametrized curves. One of the main tasks of the higher order visual subsystems is to integrate such infinitesimal contours into global ones.

2.2. Retina

2.2.1. Anatomy of Retina

The retina consists of 5 layers. In humans, it contains approximately 80 different types of cells. The bottom layer consists of receptors, photoelements which transform light energy into electric signals, see Figure 3. They measure the input function
\[ I_R : R \to \mathbb{R}_{\ge 0} \]
and send the information to ganglion cells. In the fovea, one cone is connected with one ganglion cell. In the periphery, one ganglion cell is connected with $10^2$–$10^3$ rods. There are about 1 million ganglion cells and 125–150 million receptors.

2.2.2. Ganglion Cells as Marr Filters

It was discovered by S. Kuffler that the receptive field of a typical ganglion cell is rotationally invariant (isotropic) and consists of a central disc and a surround ring. The cell works as a linear filter with a receptive profile which is either positive in the central disc and negative in the ring or vice versa. In the first case, Kuffler called it an ON-cell and in the second an OFF-cell, see Figure 4. D. Marr showed that the filter with the Laplacian of the Gaussian function as receptive profile gives a good model of the Kuffler cell and proved that image processing by a system of such filters turns a picture into a graphic image, see Figure 5. The purpose of the information processing in the retina is to regularize the input function, eliminate small artifacts of the retinal image and highlight the contours, which are the main objects of perception in early vision.
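As an illustration of the preceding paragraph, the following Python sketch builds a Laplacian-of-Gaussian receptive profile and applies it to a uniform field and to a step edge; the width sigma, the grid, and the position of the edge are arbitrary choices made only for the example.

import numpy as np

def laplacian_of_gaussian(X, Y, sigma):
    # receptive profile of a Marr-type filter: Laplacian of a 2D Gaussian (up to normalization)
    r2 = X**2 + Y**2
    return (r2 - 2.0 * sigma**2) / sigma**4 * np.exp(-r2 / (2.0 * sigma**2))

x = np.linspace(-3.0, 3.0, 121)
X, Y = np.meshgrid(x, x)
dx = x[1] - x[0]
W = laplacian_of_gaussian(X, Y, sigma=0.5)

# the filter suppresses constant illumination and reacts to contours
uniform = np.ones_like(X)
edge = np.where(X > 0.5, 1.0, 0.0)           # an edge displaced from the RF centre
print(np.sum(uniform * W) * dx * dx)         # approximately 0
print(np.sum(edge * W) * dx * dx)            # clearly non-zero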

2.2.3. Information Processing in Retina. Two Pathways from Receptors to Ganglion Cells

There are two pathways from receptors to ganglion cells. The direct path, receptor–bipolar–ganglion, activates the center of the ganglion cell, which works as a linear filter. The antagonistic surround is activated by (linear) negative feedback from horizontal cells via the indirect path: receptor–horizontal cell–(amacrine)–bipolar–ganglion. A nonlinear rectifying mechanism (associated with contrast gain control) is related to amacrine cells.
For sufficiently small contrast, ganglion P-cells work as linear Marr filters. M-cells, responsible for the perception of moving objects, work as essentially non-linear filters whose response depends on stimulus contrast and temporal frequency [36].

2.2.4. Fovea

The fovea was discovered by Leonardo da Vinci. It is a small pit in the retina which contains mostly cones, see Figure 6. The central part of the fovea, called the foveola, has a diameter of 0.35 mm ∼ 1°. It consists only of cones packed with maximum density. The fovea occupies 1% of the retina, but is projected onto almost 50% of the area of the visual cortex. When we fix the gaze on a point A, the image $\bar A$ of this point on the retina moves due to the fixation eye movements (FEM), but remains inside the fovea.

2.2.5. Inhomogeneity of the Retina and Magnification. Physiological Metric in Retina

The physical metric in the retina (considered as a sphere) is the standard metric of the sphere (the distance is measured in mm or in degrees): 1 mm = 3.5° ∼ 6 cm at a distance of 1.5 m, and 1° ∼ 0.3 mm ∼ 2.5 cm at a distance of 135 cm. The apparent diameter of the Moon and the Sun is 0.5° = 0.15 mm = 150 μm. A receptive field of neurons of the V1 cortex projected to the fovea has a diameter of 0.25°–0.7° and an area of 0.07° × 0.15° ∼ 0.12 mm². The receptive field of neurons projected onto the periphery of the retina has a diameter of up to 8°; on average this is 30 times more than in the fovea, and the RF here contains thousands of rods.
Magnification is the distance between two points of the V1 cortex which correspond to a 1 mm distance in the retina. In the fovea, 1 mm of cortex corresponds to (1/6)° ∼ 0.05 mm of retina, i.e., the cortical magnification is about 20; in the periphery, 1 mm of cortex corresponds to 6° ∼ 1.8 mm of retina, i.e., the magnification is about 0.55.
Hubel [37] remarked that the structure of the retina is very inhomogeneous. He supposed that this is one of the reasons why the information processing in the retina is very limited. On the other hand, he emphasized the amazing homogeneity of the cortex V1. It is expressed in the fact that a shift by 2 mm at any point of the cortex corresponds to a shift by the diameter of the corresponding receptive field in the retina. We define the physiological metric in the retina, in which the length of a curve is given by the number of receptive fields of neurons along this curve. This metric in the retina is proportional to the physical metric in the cortex. In particular, the diameter of the fovea, 1°, corresponds to 6 mm in the V1 cortex (Hubel).
We will discuss a possible application of this metric to the choice of an appropriate diffusion kernel for a stochastic model of the drift.

2.2.6. Conformal Retinotopic Map from the Retina to the LGN (Lateral Geniculate Nucleus) and to the Visual Cortex V1

After the image processing in the retina, the input function is encoded by the firings of the ganglion cells. Then it is sent to the LGN and the V1 cortex by the conformal retinotopic mapping, see [38,39]. There are three main pathways from the retina to the V1 cortex: the P-pathway, which is responsible for the perception of stable objects, the M-pathway, which is important for the perception of moving objects, and the K-pathway, important for color vision. In static models, only the P-pathway is considered, but for dynamic models the M-pathway is also very important. The M-pathway is more complicated than the P-pathway, since M-neurons are not linear, see [36].
Let $(x, y)$ be the standard coordinates of the tangent plane $T_FS^2$ of the eye sphere at the center F of the fovea. We will consider these coordinates as conformal coordinates on the eye sphere due to the stereographic map with center at the nodal point N. It is convenient also to introduce the complex coordinate $z = x + iy$ and the associated polar coordinates $r, \theta$, where $z = re^{i\theta}$. In physiology, the coordinate r (the geodesic distance to F) is called the eccentricity and $\theta$ the angular coordinate. In an appropriate complex coordinate in the LGN and the V1 cortex, the retinotopic map is described by a meromorphic function of the form
\[ z \mapsto F(z) = \log\frac{z+a}{z+b}, \quad a, b \in \mathbb{R}. \]
The modulus $|F'(z)|$ describes the local magnification at a point z of the retina (see E. Schwartz [38]).
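The following Python sketch evaluates the dipole log map and its local magnification $|F'(z)|$ numerically; the values of the parameters a and b are illustrative assumptions, not the fits reported in [38,39].

import numpy as np

def retinotopic_map(z, a=0.5, b=80.0):
    # dipole log map F(z) = log((z + a) / (z + b)); a, b are illustrative values (in degrees)
    return np.log((z + a) / (z + b))

def magnification(z, a=0.5, b=80.0, h=1e-6):
    # local magnification |F'(z)|, estimated by a finite difference
    return abs((retinotopic_map(z + h, a, b) - retinotopic_map(z, a, b)) / h)

for ecc in [0.1, 1.0, 5.0, 20.0]:            # eccentricity r in degrees
    print(ecc, magnification(ecc + 0j))       # magnification decreases with eccentricity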

2.3. Functional Architecture of the Primary Visual Cortex: Columns, Pinwheels, Simple and Complex Cells, Hypercolumns

The primary visual cortex V1 is a surface of depth 1.5–2 mm which consists of 6 layers. Each layer consists of columns of cells which have approximately the same receptive field. Hubel and Wiesel proposed a classification of V1 cells into simple and complex cells. Simple cells act as Gabor filters (defined by the receptive profile, that is, the Gaussian function modulated by a sine or cosine). The most important property of the Gabor filter is that it detects the orientation of a contour crossing its receptive field. There are several versions of Gabor filters, which measure at the same time other parameters of the stimuli, for example, spatial frequency, phase, etc. This means that the Gabor filter is activated only if these parameters take (with some precision) certain values. All simple cells of a regular column act as Gabor filters with almost the same center, and they detect almost the same orientation of the contour.
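The orientation selectivity of a Gabor filter can be illustrated by the following Python sketch: an odd-phase Gabor tuned to the orientation of a step edge responds much more strongly than the filter tuned to the orthogonal orientation. The wavelength, envelope width and orientations are arbitrary illustrative choices.

import numpy as np

def gabor_profile(X, Y, theta, lam, sigma, phase=0.0):
    # Gabor receptive profile: Gaussian envelope times a cosine grating along direction theta
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    return np.exp(-(X**2 + Y**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * Xr / lam + phase)

x = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, x)
dx = x[1] - x[0]

alpha = 0.3                                              # orientation of the contour (rad)
edge = np.where(X * np.cos(alpha) + Y * np.sin(alpha) > 0.0, 1.0, 0.0)

# an odd-phase Gabor tuned to the contour orientation vs. the orthogonal one
for theta in (alpha, alpha + np.pi / 2.0):
    W = gabor_profile(X, Y, theta, lam=0.5, sigma=0.3, phase=np.pi / 2.0)
    print(theta, abs(np.sum(edge * W)) * dx * dx)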
A singular column (called a pinwheel) contains simple cells which measure all possible orientations of the contour.
One of the purposes of eye movements is to produce a shift of the retinal stimulus such that the contour intersects pinwheels and is detected by their neurons.

Hypercolumns of V1 Cortex

Hubel and Wiesel proposed the deep and very productive notion of hypercolumns in the V1 cortex. Given a system of local parameters (e.g., orientation, ocular dominance, spatial frequency, temporal frequency, phase, etc.), a hypercolumn (or module) is defined as a minimal collection of (regular) columns, containing simple cells which measure all possible values of these parameters, which is sufficient to detect the local structure of the stimulus. Applying this notion to orientation and ocular dominance, they proposed the famous ice cube model of the V1 cortex. Now this notion is applied also to the V2 cortex. Usually, the area of a hypercolumn is 1–2 mm².

3. Information Processing in Dynamics

3.1. The Eye as a Rotating Rigid Ball

From a mechanical point of view, the eye is a rigid ball $B^3$ which can rotate around its center O. The retina occupies only part of the eye sphere, but for simplicity we identify it with the whole eye sphere $S^2 = \partial B^3$. We will assume that the eye nodal point N (or optical center) belongs to the eye sphere and that the opposite point F of the sphere is the center of the fovea.
For a fixed position of the head, there is a standard initial position S E 2 of the eye sphere, described by the canonical orthonormal frame e ̲ 0 = ( i , j , k ) , which determines the standard coordinates ( X , Y , Z ) of the Euclidean space E 3 with center O. We will consider these coordinates as the spatiotopic (or the world-centered) coordinates and at the same time as the head-centered coordinates. Here i indicates the standard frontal direction of the gaze, j is the lateral direction from left to right which is orthogonal to i and k is the vertical direction up.
Any other position of the eye is described by an orthogonal transformation $R \in SO(3)$ which maps the frame $\underline e_0 = (i, j, k)$ into another frame $\underline e = (e_1, e_2, e_3) = R(i, j, k)$, where $e_1$ is the new direction of the gaze. Recall that any movement $R \in SO(3)$ is a rotation $R_e^\alpha$ about some axis $e \in S_E^2$ through some angle $\alpha$.

Definition of a Straight Line by Helmholtz

If the frontal plane (orthogonal to the line of sight) is far enough away compared to the size of the eyeball, then the central projection can be considered as a conformal map.
H. von Helmholtz gave the following physiological definition of a straight line:
A straight line is a curve $\ell \subset E^3$ characterized by the following property: when the gaze moves along the curve $\ell$, the retinal image of $\ell$ does not change.
Indeed, given a straight line $\ell = \{\gamma(t)\}$, let us denote by $\Pi = \Pi(O, \ell)$ the plane through $\ell$ and the center O of the eye ball and by n its normal vector. Assume that for the standard position $S_E^2$ of the eye, the gaze is concentrated on the point $\gamma(0)$, i.e., $\gamma(0) \in \mathbb{R}i$. The retinal image of $\ell$ belongs to the intersection $\Pi \cap S_E^2$ between $\Pi$ and the standard position $S_E^2$ of the eye sphere. When the gaze moves along $\gamma(t)$, the eye rotates about the axis n. Since at each moment t the new position of the eye sphere is $S_t^2 = R_n^t S_E^2$, the retinal image
\[ S_t^2 \cap \Pi = R_n^t S_E^2 \cap \Pi = R_n^t (S_E^2 \cap \Pi) = S_E^2 \cap \Pi \]
remains the same for all t.
We will see that saccades correspond to such movements along the straight lines.

3.2. Saccades and Fixation Eye Movements: Tremor, Drift and Microsaccades

3.2.1. Saccades

Eyes participate in different types of movements [40]. We are interested only in saccades and fixation eye movements (FEMs) when the gaze is “fixed” [41].
Saccades are among the fastest movements produced by the human body. The angular speed of the eye during a saccade reaches up to 700°/s in humans for large saccades (25° of visual angle). Saccades to an unexpected stimulus normally take about 200 milliseconds (ms) to initiate and then last about 20–200 ms, depending on their amplitude. For amplitudes up to 15° or 20°, the velocity of a saccade depends linearly on the amplitude. Head-fixed saccades can have amplitudes of up to 90°, but in normal conditions saccades are far smaller, and any shift of gaze larger than about 20° is accompanied by a head movement. Most researchers define microsaccades as small saccades, i.e., saccades with so small an amplitude that during a microsaccade the retinal image of the point of fixation remains within the fovea, and even the foveola [23]. However, in [42] the authors distinguish small goal-directed voluntary eye movements from microsaccades. They showed that the properties of microsaccades are correlated with the precursory drift motion, while the amplitudes of goal-directed saccades do not depend on previous drift epochs. Microsaccades represent one of the three types of fixation eye movements.

3.2.2. Fixation Eye Movements (FEM)

The fixation eye movements are responsible for detection of local image structures and consist of tremor, drifts and microsaccades.
Tremor is an aperiodic, wave-like motion of the eyes of high frequency but very small amplitude. We hypothesize that the role of tremor is to increase the width of the contour on the retina, so that it is perceived by several rows of photoreceptors. This also allows estimating the value of the gradient along the contour. A detailed study of tremor and its influence on retinal images was made in [43], see Figure 7.
Drifts occur simultaneously with tremor and are slow motions of eyes, in which the image of the fixation point for each eye remains within the fovea. Drift is an involuntary stochastic process. However, the stochastic characteristics of the drift may depend on the local structure of the stimulus. Drifts occur between the fast, jerk-like, linear microsaccades. The main property of the FEMs is that during FEM the retina image of the point of fixation remains in the fovea and even the foveola [23]. The following Table 1 indicates the main characteristics of the FEM.
Per second, tremor moves the retinal image by 1–1.5 diameters of a foveal cone, drift by 10–15 diameters, and microsaccades by 15–300 diameters, see Figure 8.

3.2.3. The Role of Fixation Eye Movements

The papers by M. Rucci and his collaborators [21,22,23,24,25] contain very useful information about different characteristics of fixation eye movements and a detailed analysis of the role of FEM in vision. In a survey [23], the authors critically revised three main hypotheses about the role of microsaccades (MS) in vision:
(1)
the maintenance of accurate fixation;
(2)
the prevention of image fading due to fast adaptation of retinal photoreceptors;
(3)
vision of fine spatial detail.
They gave many very convincing arguments in support of hypotheses (1) and (3) and 10 arguments against hypothesis (2). We add here only one additional argument against (2). Suppose that before the MS a retinal photoreceptor in the fovea received a light signal from stimulus A. After the MS, it will receive a signal from another stimulus B, which can be even brighter. Why would this prevent the photoreceptor from adaptation?
We mention also one geometric argument why FEM are useful for vision. In monocular vision, provided that the position of the eye is fixed, the retina gets information only from the 2-dimensional Lagrangian submanifold $L(N) \simeq \mathbb{RP}^2$ of the 4-dimensional space of lines $L(E^3)$, consisting of the lines incident to the eye nodal point N. The space of lines is naturally identified with the (co)tangent bundle $TS^2 \simeq T^*S^2$ of the unit sphere. It is a symmetric pseudo-Kähler manifold of neutral signature (2,2). When the eye moves with a small amplitude, the retina gets information from a neighborhood of this 2-surface $L(N)$ in the 4-manifold $L(E^3)$.
M. Poletti and M. Rucci [23] gave evidence that during natural vision the microsaccades cannot be regarded as a random process. Their characteristics depend on the scene. Moreover, the ability to control microsaccades plays an important role in performing fine tasks, like reading, threading a needle, playing some sports (e.g., table tennis), etc. However, it seems plausible that in some cases MS can be considered as random processes. For example, when contemplating the sea, the blue sky and similar homogeneous scenes, it can be assumed that microsaccades perform a random walk. Perhaps the pleasure that a person feels when contemplating such scenes is due to the fact that the eyes get rid of the difficult work of finding new targets for microsaccades.

3.2.4. Remapping and Shift of the Receptive Fields (RFs)

In a seminal paper, J.-R. Duhamel, C.L. Colby and M.E. Goldberg [45] described the shift of receptive field of many neurons in macaque lateral intraparietal area (LIP), which shows that the visual neurons of these systems get information about the retina images of their future receptive fields. This is one of the most remarkable discoveries of neurophysiology of vision at the end of the 20th century.
Assume that the RF of a neuron before a saccade covers the retina image A ¯ of a point A and after the saccade the retina image B ¯ of another point B. Then 100 ms before the saccade, the neuron detects stimuli at the locations B ¯ . This process constitutes a remapping of the stimulus from the retina coordinates with the initial fixation point A to those of the future fixation point B. The process is governed by a copy of the motor command (corollary discharge).
For a long time, it had been assumed that the presaccadic shift of the receptive field (RF) from $\bar A$ to $\bar B$ is an anticipation of the retinal consequences of the saccade, which randomly changes the gaze direction and moves the RF of the neuron to $\bar B$. Since any point $\bar B$ of the retina can be a new position of the receptive field, this means that the information about the visual stimulus at the point $\bar B$ can be transmitted to neurons with receptive field at the point $\bar A$. This seems very doubtful, since the number of neuron pairs is too big. A solution was proposed by M. Zirnsak and T. Moore [46]. They conjectured that the presaccadic shift of RF is part of a process of remapping and reflects the selection of the targets for the saccades. Some local area of a higher center of the visual system has information about the visual stimulus concentrated at $\bar A$ and about other points of the retina. It uses this information to choose a new saccadic target $\bar B$. Just before the saccade, it sends the information about the visual stimulus at the retinal point $\bar B$ to neurons with presaccadic receptive field at $\bar A$. After the saccade, the real RFs of these neurons cover the retinal stimulus $\bar B$. Then the visual system uses the information from these neurons to correct the presaccadic information. In the last section, we propose a mechanism of realization of such presaccadic remapping.

3.2.5. Oculomotor System, Corollary Discharge and Stability Problem

In dynamics, the retinal photoreceptors are not the only source of visual information. An important part of the information about eye movements is coded in the oculomotor system. A copy of the motor commands which control eye movements, the corollary discharge (CD) or efference copy, is sent from the sensorimotor region through the MD thalamus to the frontal cortex. The mechanism of interaction of the CD information with the information from the retinal receptors processed in the visual cortex is not well known. It is very important for the solution of the stability problem, i.e., the explanation of the compensation mechanism for the shift of stimuli on the retina caused by eye movements, such that stable stimuli are perceived as stable, see [26,27,28]. Clearly, there must be a very strong synchronization between the corollary discharge and the representation of the retinal input function in the visual cortex.
The stability problem was first formulated in the eleventh century by the Persian scholar Abu’Ali al-Hasan ibn al-Hasan ibn al-Haytham (latinized, Alhazen) and was discussed by Descartes, Helmholtz, Mach, Sherrington and many others scientists.

3.3. The Geometry of the Quaternions

Now we recall the basic facts about quaternions and the Hopf bundle which we need for the reformulation of Donders' and Listing's laws in terms of Listing's section of the Hopf bundle.
Let $H = \mathbb{R}^4 = \mathbb{R}1 + \mathrm{Im}\,H = \mathbb{R}1 + E^3$ be the algebra of quaternions with unit 1, where the space $E^3$ of imaginary quaternions is the standard Euclidean vector space with the orthonormal basis $(i, j, k)$, and the product ab of two elements of $E^3$ is the sum of minus their scalar product and their cross-product:
\[ ab = -\langle a, b\rangle + a \times b. \]
The group
\[ H_1 = \{\, q = q_0 1 + \mathbf{q},\ |q|^2 := q_0^2 + |\mathbf{q}|^2 = 1 \,\} = S^3 \]
of unit quaternions is naturally identified with the three-dimensional sphere $S^3$, and its Lie algebra is the algebra $E^3 = \mathbb{R}^3$ of imaginary quaternions with the cross-product as the Lie bracket.
Denote by
\[ L : H_1 \to SO(\mathbb{R}^4), \quad a \mapsto L_a, \quad L_a q = aq, \ q \in H \]
the (exact) left representation and by
\[ R^{-1} : H_1 \to SO(\mathbb{R}^4), \quad a \mapsto R_a^{-1}, \quad R_a^{-1} q = q\bar a, \ q \in H \]
the (exact) right representation, which commutes with the left representation. They define the representation
\[ L \times R^{-1} : H_1 \times H_1 \to SO(4) = (L_{H_1} \times R_{H_1})/\mathbb{Z}_2 \]
with the kernel $\mathbb{Z}_2 = \{\pm 1\}$.
The representation
\[ \mathrm{Ad} : H_1 \to SO(4), \quad a \mapsto \mathrm{Ad}_a, \quad \mathrm{Ad}_a q = a q \bar a \]
is called the adjoint representation. It has the kernel $\mathbb{Z}_2 = \{\pm 1\}$, acts trivially on the real line $\mathbb{R}1$ and defines the isomorphism $H_1/\mathbb{Z}_2 = \mathrm{Ad}_{H_1} = SO(E^3) = SO(3)$, which shows that the group $H_1 = S^3$ is the universal covering of the orthogonal group $SO(3)$. The standard scalar product $\langle q, q\rangle = q\bar q$ in H, where for $q = q_0 1 + \mathbf{q}$ the quaternion $\bar q := q_0 - \mathbf{q}$ is the conjugate quaternion, induces the standard Riemannian metric of the unit 3-sphere $S^3 = H_1$, which is invariant with respect to the (transitive) action of the group $L_{H_1} \times R_{H_1}^{-1}$. The group $\mathrm{Ad}_{H_1}$ preserves the points $1, -1$ (which will be considered as the poles of $S^3$) and acts transitively on the equator $S_E^2 := S^3 \cap E^3$, which is the standard Euclidean unit sphere of the Euclidean space $E^3$. The geodesics of $S^3$ are the great circles (the intersections of $S^3$ with 2-subspaces of $H = \mathbb{R}^4$).
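The quaternion formalism above is easy to check numerically. The following Python sketch implements the Hamilton product, the one-parameter subgroups $e^{tv}$ and the adjoint action $\mathrm{Ad}_a x = ax\bar a$, and verifies the doubling of the angle stated in Lemma 2(i) below. All numerical values are illustrative.

import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions written as arrays (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def exp_tv(t, v):
    # one-parameter subgroup e^{t v} = cos t + sin t * v for a unit imaginary quaternion v
    return np.concatenate([[np.cos(t)], np.sin(t) * np.asarray(v)])

def Ad(a, x):
    # adjoint action of a unit quaternion a on an imaginary quaternion x (a vector in E^3)
    xq = np.concatenate([[0.0], np.asarray(x)])
    return qmul(qmul(a, xq), conj(a))[1:]

# the angle doubling of Lemma 2(i): Ad_{e^{t v}} is the rotation about v through 2t
v = np.array([0.0, 0.0, 1.0])                # the axis k
u = np.array([1.0, 0.0, 0.0])                # the vector i, which anticommutes with k
t = 0.4
print(Ad(exp_tv(t, v), u))                   # approximately (cos 2t, sin 2t, 0)
print(np.cos(2*t), np.sin(2*t))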
The following simple facts are important for us and we state them as
Lemma 1.
 (i) 
Any point $a \in S^3$ different from $\pm 1$ belongs to a unique 1-parameter subgroup $g_a = \mathrm{span}(1, a) \cap S^3$ (the meridian) and can be canonically represented as
\[ a = e^{\psi v} := \cos\psi + \sin\psi\, v, \quad 0 < \psi < \pi, \ v \in S_E^2, \]
where $v = \mathrm{pr}_{S_E^2}\, a$ is the point of the equator closest to a.
 (ii) 
Points $v \in S_E^2$ bijectively correspond to oriented 1-parameter subgroups
\[ g_v(t) := e^{tv} = \cos t + \sin t\, v \]
of $H_1$, parametrized by arclength.
 (iii) 
Any orbit $\gamma(t) = g_v(t)\,a$, $a \in S^3$, of the left action of a one-parameter subgroup $g_v(t)$ (as well as of the right action) is a geodesic of the sphere $S^3$. All geodesics are exhausted by such orbits.

3.3.1. The Adjoint Action of the Group H 1

Lemma 2.
 (i) 
The 1-parameter subgroup $g_v(t) = e^{tv}$ of $H_1$ generated by a unit vector $v \in S_E^2 \subset H_1$ acts on the sphere $S_E^2$ as the 1-parameter group $R_v^{2t}$ of rotations with respect to the axis v:
\[ \mathrm{Ad}_{g_v(t)} = R_v^{2t}. \]
 (ii) 
More generally, let
\[ \gamma(t) = g_v(t)\, a \subset S^3 = H_1 \]
be a geodesic of $S^3$, considered as the orbit of a 1-parameter subgroup $g_v(t)$. Then for $x \in S_E^2$ the adjoint action of the curve $\gamma(t)$ is given by
\[ \mathrm{Ad}_{\gamma(t)}\, x = \mathrm{Ad}_{g_v(t)a}\, x = \mathrm{Ad}_{g_v(t)}\, x^a = R_v^{2t}(x^a), \quad \text{where } x^a := \mathrm{Ad}_a\, x = a x \bar a. \]
In other words, the orbit $\mathrm{Ad}_{\gamma(t)}\, x$ is the circle obtained from the point $x^a$ by the action of the group $\mathrm{Ad}_{g_v(t)} = R_v^{2t}$ of rotations with respect to the axis v.
Proof. 
(i)
The adjoint image $\mathrm{Ad}_{g_v(t)}$ of the one-parameter subgroup is a one-parameter subgroup of $SO(E^3)$ which preserves the vector $v \in S_E^2$, hence lies in the group $R_v$ of rotations with respect to v. To calculate the angle of rotation, we apply $\mathrm{Ad}_{g_v(t)}$ to a vector $u \in S_E^2$ which anticommutes with v, as follows:
\[ \mathrm{Ad}_{g_v(t)}\, u = e^{tv} u e^{-tv} = e^{2tv} u = \cos 2t\, u + \sin 2t\, vu. \]
This shows that $\mathrm{Ad}_{g_v(t)} = R_v^{2t}$.
(ii)
follows from (i) and the following calculation:
\[ \mathrm{Ad}_{\gamma(t)}\, x = g_v(t)\, a x \bar a\, \bar g_v(t) = \mathrm{Ad}_{g_v(t)}(a x \bar a) = \mathrm{Ad}_{g_v(t)}\, x^a = R_v^{2t}\, x^a. \]

3.3.2. The Hopf Bundle and Listing’s Sphere

The Hopf bundle is defined as the natural projection
\[ \chi : S^3 = H_1 \to S_E^2 = S^3/SO(2), \quad q \mapsto \mathrm{Ad}_q\, i = q i \bar q \]
of $H_1 = S^3$ onto the $\mathrm{Ad}_{H_1}$-orbit $S_E^2 = \mathrm{Ad}_{H_1} i$ of the point i.
The base sphere $S_E^2 = S^3 \cap E^3$ is called the Euclidean 2-sphere. The points $i, -i$ will be considered as the north and south poles of $S_E^2$. We denote by $S_E^1 = \{p = \cos\theta\, j + \sin\theta\, k\}$ the equator of $S_E^2$.
The Hopf bundle is a non-trivial bundle and has no global section. However, by removing just one point $-i$, with preimage $S_E^1$, from the base sphere $S_E^2$, we will construct the canonical section
\[ s : \tilde S_E^2 = S_E^2 \setminus \{-i\} \to \tilde S^3 = S^3 \setminus S_E^1 \]
of the bundle
\[ \chi : \tilde S^3 \to \tilde S_E^2 \]
over the punctured sphere $\tilde S_E^2$.
First of all, we define Listing's sphere and Listing's hemisphere, which play a central role in the geometry of saccades. Listing's sphere is the intersection $S_L^2 = S^3 \cap i^\perp$ of the 3-sphere with the subspace $i^\perp = \mathrm{span}(1, j, k)$ spanned by the vectors $1, j, k$. In other words, it is the equator of the 3-sphere $S^3$ with respect to the poles $\pm i$, see Figure 9.
We consider the point 1 (resp., −1) as north (resp. south) pole of Listing’s sphere and denote by S L + (resp., S L ) the open north (resp., south) hemisphere and by S ¯ L + (resp., S ¯ L ) the closed hemisphere. Note that the equator S L 1 of Listing’s sphere coincides with the equator S E 1 of the Euclidean sphere S E 2 .
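A short numerical illustration of the Hopf map: writing $\chi(q) = q i \bar q$ as the first column of the rotation matrix of the unit quaternion $q = (w, x, y, z)$, the Python sketch below checks that the poles $\pm 1$ of Listing's sphere go to the north pole i and that the whole equator $S_L^1$ goes to the south pole $-i$ (cf. Theorem 1 below).

import numpy as np

def hopf(q):
    # Hopf map chi(q) = q i q_bar, written as the first column of the rotation matrix of q
    w, x, y, z = q
    return np.array([w*w + x*x - y*y - z*z,
                     2.0 * (x*y + w*z),
                     2.0 * (x*z - w*y)])

# the poles +1, -1 of Listing's sphere are mapped to the north pole i ...
print(hopf(np.array([1.0, 0.0, 0.0, 0.0])), hopf(np.array([-1.0, 0.0, 0.0, 0.0])))

# ... while the whole equator S_L^1 (unit quaternions in span(j, k)) is mapped to -i
for theta in np.linspace(0.0, np.pi, 5):
    p = np.array([0.0, 0.0, np.cos(theta), np.sin(theta)])
    print(hopf(p))                            # always (-1, 0, 0)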

3.3.3. Geometry of Listing’s Hemisphere S L +

We consider Listing's sphere as the Riemannian sphere with the induced metric of curvature 1, equipped with polar coordinates $(r, \theta)$ centered at the north pole 1. The geodesics of $S_L^2$ are great circles. Any point $a = e^{rp} = \cos r\, 1 + \sin r\, p \neq \pm 1$ of $S_L^2$ belongs to the unique 1-parameter subgroup $g_a(t) = e^{ta} = \cos t\, 1 + \sin t\, a$ of $H_1$.
Any point $a \in S_L^+$, different from 1, can be canonically represented as
\[ a = e^{rp} := \cos r\, 1 + \sin r\, p, \quad p = \cos\theta\, j + \sin\theta\, k \in S_L^1, \]
where $0 < r < \pi/2$ is the polar radius (the geodesic distance to the pole 1, so that $\varphi := \pi/2 - r$ is the geographic latitude) and $0 \le \theta < \pi$ is the geographic longitude of the point a. The point $p = \mathrm{pr}_{S_L^1}\, a$ is the geodesic projection of a to the equator, i.e., the point of the intersection of $g_a(t) = \gamma_{a,1}$ with the equator $S_L^1$ closest to a.
Note that the coordinate lines $\theta = \mathrm{const}$ are great circles (meridians); in particular, $\theta = 0$ is the zero ("Greenwich") meridian, and the coordinate lines $\varphi = \mathrm{const}$ are parallels. The only geodesic parallel is the zero parallel, i.e., the equator $S_L^1$.
The open Listing hemisphere $S_L^+$ is geodesically convex. This means that any two distinct points $a, b \in S_L^+$ determine a unique (oriented) geodesic $\gamma_{a,b}$ of the sphere $S_L^2$ and are joined by a unique geodesic segment $ab \subset S_L^+$.

Canonical Parametrization of Geodesics γ a , b S L 2

Let $a, b \in S_L^+$ be two distinct points and $\gamma_{a,b}$ the oriented geodesic. Denote by p the first point of intersection of $\gamma_{a,b}$ with the equator $S_L^1$.
If $1 \in \gamma_{a,b}$, then the geodesic is a 1-parameter subgroup and
\[ \gamma_{a,b} = \{\, e^{tp} = \cos t\, 1 + \sin t\, p \,\} \]
is its canonical parametrization.
If $1 \notin \gamma_{a,b}$, the unique top point $m \in \gamma_{a,b}$ with the maximal latitude $\varphi$ has the form $m = e^{\varphi q}$, where $q = \mathrm{pr}_{S_L^1}\, m \in S_L^1$ is the geodesic projection of m to $S_L^1$ and $\langle p, q\rangle = 0$, hence $q = \pm p i$.
Then
\[ \gamma_{a,b} = \gamma_{p,m} = \{\, \cos t\, p + \sin t\, m = e^{tv} p \,\}, \quad v = m\bar p = -mp, \]
where $v = m\bar p = -\cos\varphi\, p + \sin\varphi\, pq \in S_E^2$ and $pq = \pm i$, is the canonical parametrization of the geodesic $\gamma_{a,b}$.
The intersection $\gamma_{a,b}^+ = \gamma_{a,b} \cap S_L^+$ of the geodesic with the Listing hemisphere $S_L^+$ is called the Listing semicircle.

3.3.4. Properties of the Restriction of the Hopf Map to Listing’s Sphere

Theorem 1.
The restriction $\chi : S_L^2 \to S_E^2$ of the Hopf map $\chi$ to Listing's sphere is a branched $\mathbb{Z}_2$ covering. More precisely:
 (i) 
It maps the poles $\pm 1$ of the sphere $S_L^2$ to the north pole i of the sphere $S_E^2$ and the equator $S_L^1$ to the south pole $-i = \chi(S_L^1)$.
 (ii) 
Any point $a \in S_L^2$ different from $\pm 1$ belongs to a unique 1-parameter subgroup $g_a = e^{ta}$ (the meridian of Listing's sphere), which can be written as $g_a = g_p = e^{tp}$, where $p = \mathrm{pr}_{S_L^1}\, a = \cos\theta\, j + \sin\theta\, k \in S_L^1$ is the equatorial point of $g_a$.
The map $\chi : g_a \to S_p^1$ is a locally isometric $\mathbb{Z}_2$ covering of the meridian $g_a = \gamma_{p,1}$ of Listing's sphere $S_L^2$ onto the meridian $S_p^1$ of the Euclidean sphere $S_E^2$ through the point $p \in S_E^1$. The restriction of $\chi$ to the semicircle $g_a \cap S_L^+$ is a diffeomorphism.
 (iii) 
More generally, let $\gamma_{a,b} = \gamma_{p,m} = \{e^{tv}p\}$, $v = m\bar p$, be a geodesic through points $a, b \in S_L^+$ with the canonical parametrization
\[ \gamma_{p,m}(t) = \cos t\, p + \sin t\, m, \quad m = e^{\varphi q} = \cos\varphi\, 1 + \sin\varphi\, q. \]
It is the orbit $e^{tv}p$ of the 1-parameter group $e^{tv}$,
\[ v = m\bar p = -\cos\varphi\, p + \sin\varphi\, pq \in S_E^2, \]
and the Hopf map $\chi$ maps it into the orbit
\[ S_v^1 := \{\mathrm{Ad}_{e^{tv}}(-i)\} = \{R_v^{2t}(-i)\} \subset S_E^2 \]
of the 1-parameter group of rotations $R_v^{2t}$. In other words, the circle $S_v^1 := \chi(\gamma_{p,m}(t))$ is obtained by rotating the point $-i$ about the axis $v \in S_E^2$.
 (iv) 
The restriction of the map χ to the Listing hemisphere S L + is a diffeomorphism χ : S L + S ˜ E 2 .
Proof. 
(i)–(ii) follow from the remark that the quaternions $\pm 1$ commute with i and the quaternions from $S_L^1$ anticommute with i. Hence $\mathrm{Ad}_{e^{tp}}\, i = e^{2tp} i = i e^{-2tp}$ for $p \in S_L^1$.
(iii) We calculate
\[ \chi(\gamma_{p,m}(t)) = \chi(e^{tv}p) = e^{tv}\, p i \bar p\, e^{-tv} = \mathrm{Ad}_{e^{tv}}(-i) = R_v^{2t}(-i). \]
(iv) follows from ( i i ) or from Lemma 2. □

3.3.5. Listing Section

According to the Theorem, the Hopf map defines a diffeomorphism
\[ \chi : S_L^+ \to \tilde S_E^2 := S_E^2 \setminus \{-i\}, \]
\[ a = e^{tp} \mapsto A := \chi(a) = e^{2tp} i = R_p^{2t}\, i = \cos 2t\, i + \sin 2t\, q, \quad q := p i = -ip. \]
Since the preimage $\chi^{-1}(-i) = S_L^1$ is the equator of Listing's sphere $S_L^2$, the inverse map
\[ s := \chi^{-1} : \tilde S_E^2 \to S_L^+ \subset S^3, \]
\[ A = \cos 2t\, i + \sin 2t\, q \mapsto a := s(A) = e^{tp} = \cos t + \sin t\, p, \]
where $q \in S_E^1 = S_L^1$, $q = pi$, is a section of the principal bundle
\[ \chi : \tilde S^3 := S^3 \setminus S_L^1 \to \tilde S_E^2 = S_E^2 \setminus \{-i\}. \]
We call the section s the Listing section.
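The Listing section admits a simple closed form, illustrated by the following Python sketch: given a gaze direction $A \neq -i$ written in the basis $(i, j, k)$, it returns the unit quaternion $a = \cos t + \sin t\, p$ with $p \in \mathrm{span}(j, k)$ and checks that $\chi(s(A)) = A$. The sample gaze direction is an arbitrary test value.

import numpy as np

def hopf(q):
    w, x, y, z = q
    return np.array([w*w + x*x - y*y - z*z, 2.0*(x*y + w*z), 2.0*(x*z - w*y)])

def listing_section(A, eps=1e-12):
    # Listing section s: a gaze direction A != -i (components w.r.t. (i, j, k))
    # is sent to the unit quaternion a = cos t + sin t p, p in span(j, k), with Ad_a i = A
    A1, A2, A3 = A
    s = np.hypot(A2, A3)
    if s < eps:                               # A = i: the standard position, a = 1
        return np.array([1.0, 0.0, 0.0, 0.0])
    t = 0.5 * np.arccos(np.clip(A1, -1.0, 1.0))      # half of the eccentricity angle
    return np.array([np.cos(t), 0.0, -np.sin(t) * A3 / s, np.sin(t) * A2 / s])

A = np.array([0.2, 0.5, np.sqrt(1.0 - 0.04 - 0.25)])  # a sample gaze direction
a = listing_section(A)
print(a[1])                                   # the i-component vanishes: a lies on Listing's sphere
print(hopf(a), A)                             # chi(s(A)) recovers A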

3.4. The Physiological Interpretation: Donders’ and Listing’s Laws and Geometry of Saccades

We use the developed formalism to give an interpretation of Donders’ and Listing’s laws and to study the saccades and drifts.
We consider the Euclidean sphere $S_E^2 \subset \mathrm{Im}\,H = \mathbb{R}^3$ as the model of the eye sphere, see Figure 10 (the boundary of the eye ball $B^3 \subset \mathbb{R}^3 = \mathrm{Im}\,H$), with center at the origin 0. We assume that the head is fixed and the standard basis $\underline e_0 = (i, j, k)$ determines the standard initial position of the eye, where the first vector i (the gaze vector) indicates the standard frontal direction of the gaze, the second vector j gives the lateral direction from right to left, and k is the vertical direction up.
The coordinates $(X, Y, Z)$ associated with the standard basis are the head-centered and spatiotopic (or world-centered) coordinates. A general position of the eye, which can rotate around the center 0, is determined by the orthonormal moving (retinotopic) frame $\underline e = (e_1, e_2, e_3)$, which determines the (moving) retina-centered coordinates $(x, y, z)$.
The configuration space of the rotating sphere is identified with the orthogonal group $SO(3)$: an orthogonal transformation R defines the frame
\[ \underline e = (e_1, e_2, e_3) = R\,\underline e_0 = R(i, j, k). \]
It is more convenient to identify the configuration space with the group $H_1 = S^3$ of unit quaternions, which is the universal cover of $SO(3)$. The corresponding $\mathbb{Z}_2$-covering is given by the adjoint representation
\[ \mathrm{Ad} : H_1 \to SO(3) = H_1/\{\pm 1\}, \quad a \mapsto \mathrm{Ad}_a. \]
A unit quaternion $a \in H_1$ gives rise to the orthogonal transformation $\mathrm{Ad}_a \in SO(E^3)$ and the frame $\underline e = \mathrm{Ad}_a\,\underline e_0 = \mathrm{Ad}_a (i, j, k)$, which defines the new position of the eye. We have to remember that opposite quaternions $a, -a \in H_1$ represent the same frame and the same eye position. Note that a direction of the gaze $e_1$ determines the position $\underline e = (e_1, e_2, e_3)$ of the eye up to a rotation with respect to the axis $e_1$. Such a rotation is called a twist.
Donders' law states that if the head is fixed, then there is no twist. More precisely, the position of the gaze $A = e_1 \in S_E^2$ determines the position of the eye, i.e., there is a (local) section $s : S_E^2 \to S^3$ of the Hopf bundle
\[ \chi : S^3 = H_1 \to S_E^2 = \mathrm{Ad}_{H_1} i. \]
In other words, the admissible configuration space of the eye is two-dimensional. Physiologists were very puzzled by this surprise. Even the great physiologist and physicist Hermann von Helmholtz doubted the validity of this law and recognized it only after his own experiments. However, from the point of view of modern control theory, it is very natural and sensible. The complexity of motion control in a 3-dimensional configuration space compared to control on a surface is similar to the difference between piloting a plane and driving a car.
Listing's law specifies the section s. In our language, it can be stated as follows.
Listing's law. The section of Donders' law is the Listing section
\[ s = \chi^{-1} : \tilde S_E^2 = S_E^2 \setminus \{-i\} \to S_L^+ \subset S^3, \]
\[ A = e_1 = \cos t\, i + \sin t\, q = e^{tp}\, i \mapsto a = s(A) = e^{(t/2)p}, \]
where $p = q\bar i = iq$, which is the inverse diffeomorphism to the restriction
\[ \chi : S_L^+ \to \tilde S_E^2, \]
\[ a = e^{tp} \mapsto A := \chi(a) = \mathrm{Ad}_{e^{tp}}\, i = e^{2tp} i = R_p^{2t}\, i = \cos 2t\, i + \sin 2t\, q, \quad q = pi, \]
of the Hopf projection to Listing's hemisphere.
In other words, a gaze direction $A = e_1 = \cos t\, i + \sin t\, q \in \tilde S_E^2$ determines the position $\underline e = (e_1, e_2, e_3)$ of the eye as follows:
\[ \underline e = \mathrm{Ad}_{s(e_1)}\, \underline e_0 = \mathrm{Ad}_{e^{(t/2)p}}\, (i, j, k), \quad p = q\bar i \in S_L^1 = S_E^1. \]

Saccades

We define a saccade as a geodesic segment $ab \subset S_L^+$ of the geodesic semicircle $\gamma_{a,b}^+ = \gamma_{a,b} \cap S_L^+$. Recall that the semicircle $\gamma_{a,b}^+ = \gamma_{p,m}^+$ (where p is the first point of the intersection of the oriented geodesic $\gamma_{a,b}$ with the equator $S_L^1$, $m = e^{\varphi q}$ is the top point of $\gamma_{a,b}^+$ and q is the equatorial point of the meridian of the point m) has the natural parametrization
\[ \gamma_{p,m}^+(t) = \{\, \cos t\, p + \sin t\, m = (\cos t + \sin t\, m\bar p)\, p = e^{tv}(p) \,\}, \quad 0 < t < \pi, \]
where $v = m\bar p = (\cos\varphi + \sin\varphi\, q)\bar p = -\cos\varphi\, p + \sin\varphi\,(q\bar p)$. We may choose the vector q, defined up to a sign, such that $q\bar p = i$.
The image
\[ \chi(\gamma_{a,b}^+) = \{\mathrm{Ad}_{e^{tv}p}\, i\} = \{\mathrm{Ad}_{e^{tv}}(p i \bar p)\} = \{\mathrm{Ad}_{e^{tv}}(-i)\} = \{R_v^{2t}(-i)\} = S_v^1 \cap \tilde S_E^2 \]
is the circle $S_v^1$ (without the point $-i$), obtained by rotating the point $-i$ with respect to the axis $\mathbb{R}v$; in other words, it is the section of the punctured sphere $\tilde S_E^2$ by the plane $-i + \mathrm{span}(A+i, B+i)$ with the normal vector $v \in \mathbb{R}\,(A+i)\times(B+i)$, where $A = \chi(a)$, $B = \chi(b)$. The segment $AB \subset S_v^1$ is the gaze curve, the curve which describes the evolution of the gaze during the saccade $ab \subset \gamma_{a,b}^+$.
A natural question arises. If the gaze circle $S_v^1$ is not a meridian, it is not a geodesic of $\tilde S_E^2$, and the gaze curve $AB \subset S_v^1$ is not the shortest curve of the sphere joining A and B. Why does the eye not rotate so that the gaze curve AB is a geodesic?
The answer is the following. If all gaze curves during saccades were geodesics, then a twist would appear and the configuration space of the eye would become three-dimensional. Assume that the gaze curve of three consecutive saccades is a geodesic triangle ABC which starts and finishes at the north pole $A = i$. Since the sphere is a symmetric space, moreover a space of constant curvature, the movements along a geodesic induce a parallel translation of tangent vectors. This implies that after the saccadic movements along the triangle, the initial position $\underline e_0 = (i, j, k)$ of the eye would be rotated with respect to the normal axis i through an angle $\alpha$ proportional to the area of the triangle. Hence, a twist would appear.
Fortunately, since the retinal image of the fixation point during FEM remains in the fovea, with center at $-i$, the gaze curve remains in a small neighborhood of the standard position i. In this case, the deviation of the gaze curve AB during an MS from the geodesic is very small. This is important for energy minimization, since during wakefulness 2–3 saccades occur every second; hence more than 100,000 saccades occur during the day.
Consider the stereographic projection $\pi : \tilde S_E^2 \to T_iS_E^2$ of the sphere $\tilde S_E^2$ (from the point $-i$) onto the tangent plane at the point i. It is a conformal diffeomorphism which maps any gaze circle $S_v^1 \subset \tilde S_E^2$ onto a straight line and any gaze curve AB of a saccade ab onto an interval $A'B' = \pi(AB) = \pi(A)\pi(B)$, where $A'$ is the point of intersection of the tangent plane $T_iS_E^2 = i + \mathrm{span}(j, k)$ with the line $-i + \mathbb{R}(A + i)$, and similarly for $B'$. More precisely, $A' = -i + \frac{2}{1+\cos\psi}(A + i)$, where $A = \cos\psi\, i + \sin\psi\, q$. The spherical n-gon $A_1A_2\cdots A_n$, formed by the gaze curves $A_1A_2, \ldots, A_nA_1$ of saccades, maps into the n-gon $A_1'\cdots A_n'$ on the plane in such a way that the angles between adjacent sides are preserved.
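A numerical illustration of the geometry of a saccade, under the assumptions of this section: the saccade is the great-circle (slerp) segment between $a = s(A)$ and $b = s(B)$ in Listing's hemisphere, and its Hopf image, the gaze curve, lies on the circle cut out by the plane through A, B and $-i$. The gaze directions A and B below are arbitrary test values.

import numpy as np

def hopf(q):
    w, x, y, z = q
    return np.array([w*w + x*x - y*y - z*z, 2.0*(x*y + w*z), 2.0*(x*z - w*y)])

def listing_section(A):
    A1, A2, A3 = A
    s = np.hypot(A2, A3)
    t = 0.5 * np.arccos(np.clip(A1, -1.0, 1.0))
    return np.array([np.cos(t), 0.0, -np.sin(t) * A3 / s, np.sin(t) * A2 / s])

def slerp(a, b, t):
    # great-circle (geodesic) interpolation between unit quaternions a and b
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

A = np.array([0.9, 0.3, np.sqrt(1.0 - 0.81 - 0.09)])   # initial gaze direction
B = np.array([0.8, -0.2, np.sqrt(1.0 - 0.64 - 0.04)])  # saccade target
a, b = listing_section(A), listing_section(B)

minus_i = np.array([-1.0, 0.0, 0.0])
n = np.cross(A - minus_i, B - minus_i)        # normal of the plane through A, B and -i

for t in np.linspace(0.0, 1.0, 6):
    G = hopf(slerp(a, b, t))                  # gaze direction during the saccade
    print(np.dot(G - minus_i, n))             # ~ 0: the gaze curve stays on the gaze circle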

3.5. Listing’s Section and Fixation Eye Movements

Below we propose an approach to the description of information processing in dynamics.

3.5.1. Retinotopic Image of a Stable Stimulus during Eye Movements

Recall that the direction N = e 1 of the gaze determines the position a = s ( N ) S L + of the eye, which determines the frame e ̲ = ( e 1 , e 2 , e 3 ) : = Ad a e ̲ 0 = Ad a ( i , j , k ) and associated retinotopic coordinates.
Let the eye look for some time $[0, T]$ at a stationary surface, for example at a plane $\Pi$, and let the gaze describe a curve $N(t) \in S_E^2$, hence being directed to the points $A(t) := \mathbb{R}N(t) \cap \Pi$ of the stimulus $\Pi$. Then the eye position is defined by the curve $a(t) = s(N(t))$. We call $a(t)$ the Listing curve.
The retinal image of the points A ( t ) forms the curve A ¯ ( t ) : = N ( t ) .
Moreover, if $\bar B(0)$ is the retinal image of a point $B \in \Pi$ at $t = 0$, then, due to the eye movement, the retinal image $\bar B(t)$ of the same point B at the moment t will be
\[ \bar B(t) = \mathrm{Ad}_{\bar a(t)}\, \bar B(0), \quad \bar a = a^{-1}. \]
Hence the retinal curve $\bar B(t)$ is the retinal image of the external point B. Indeed, in retinotopic coordinates the eye is stable and the external plane $\Pi$ rotates in the opposite direction, taking at the moment t the position $\Pi(t) := \mathrm{Ad}_{\bar a(t)}\, \Pi$. The point $\bar B(t) \in \Pi(t)$ is the new position of the point $\bar B(0) = \mathrm{Ad}_{a(t)}\, \bar B(t)$.
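The relation $\bar B(t) = \mathrm{Ad}_{\bar a(t)}\,\bar B(0)$ is illustrated by the following Python sketch, which applies the inverse eye rotation along an arbitrarily chosen, purely illustrative Listing curve $a(t)$ (with $a(0) = 1$) to the initial retinal image of a stationary point.

import numpy as np

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def Ad(a, x):
    # adjoint action of the unit quaternion a on a vector x in E^3
    xq = np.concatenate([[0.0], x])
    a_inv = np.array([a[0], -a[1], -a[2], -a[3]])
    return qmul(qmul(a, xq), a_inv)[1:]

def listing_curve(t):
    # an illustrative Listing curve a(t) in S_L^+ with a(0) = 1 (a toy drift, not real data)
    p = np.array([0.0, np.cos(0.3), np.sin(0.3)])      # a fixed axis in span(j, k)
    phi = 0.02 * t                                      # small, slowly growing amplitude
    return np.concatenate([[np.cos(phi)], np.sin(phi) * p])

B_bar_0 = np.array([0.995, 0.06, 0.08])       # retinal image of a stationary point at t = 0
B_bar_0 /= np.linalg.norm(B_bar_0)

for t in np.linspace(0.0, 1.0, 5):
    a = listing_curve(t)
    a_inv = np.array([a[0], -a[1], -a[2], -a[3]])
    print(Ad(a_inv, B_bar_0))                 # retinal trace B_bar(t) = Ad_{a(t)^{-1}} B_bar(0)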

3.5.2. n-Cycles of Fixation Eye Movements

We define a fixation eye movement n-cycle as a FEM which starts and finishes at the standard eye position $a_0 = 1$ and consists of n drifts $\delta_k = \delta(a_{k-1}, a_{k-1}')$, $k = 1, \ldots, n$, and n microsaccades $S_k = a_{k-1}'a_k$ between them. We will assume that MSs are instantaneous movements and occur at times $T_1, T_2, \ldots, T_n = T$. Then the corresponding Listing curve can be written as
\[ \delta(a_0, a_0'),\ a_0'a_1,\ \delta(a_1, a_1'),\ a_1'a_2,\ \ldots,\ \delta(a_{n-1}, a_{n-1}'),\ a_{n-1}'a_n, \quad a_0 = a_n = 1. \]
We associate with the n-cycle the spherical polygon $P \subset S_L^+$ with 2n vertices (a 2n-gon)
\[ P = (a_0, a_0', a_1, a_1', \ldots, a_{n-1}, a_{n-1}', a_n), \quad a_0 = a_n = 1. \]
The sides $(a_{k-1}', a_k)$ represent the saccades $S_k = a_{k-1}'a_k$ and the sides $(a_{k-1}, a_{k-1}')$ correspond to the drifts $\delta_k = \delta(a_{k-1}, a_{k-1}')$.
Using the stereographic projection of Listing's sphere from the south pole $-1$ to the tangent plane $T_1S_L^+$, we can identify P with a 2n-gon on the tangent plane $T_1S_L^+$.
In the case of a saccade, the Listing curve is a segment $ab \subset S_L^+$. Hence all saccades of an n-cycle are determined by the positions of their initial and final points in Listing's hemisphere, i.e., by the 2n points $a_{k-1}', a_k$, $k = 1, \ldots, n$.
For example, a 3-cycle is characterised by the hexagon $a_0a_0'a_1a_1'a_2a_2'a_3$, $a_0 = a_3 = 1$, and consists of 3 drifts and 3 MSs:
\[ \delta_1 = \delta(1, a_0'),\quad S_1 = a_0'a_1,\quad \delta_2 = \delta(a_1, a_1'),\quad S_2 = a_1'a_2,\quad \delta_3 = \delta(a_2, a_2'),\quad S_3 = a_2'1. \]
An example of 3-cycle and associated hexagon is depicted in Figure 11.
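For bookkeeping purposes, an n-cycle can be represented by the list of its 2n vertices in Listing's hemisphere together with the saccade times; the following Python sketch is one possible such representation (the data structure is an illustrative assumption, not part of the model itself).

from collections import namedtuple

# Drift delta_{k+1} = delta(a_k, a_k') on [T_k, T_{k+1}]; microsaccade S_{k+1} = a_k' a_{k+1} at T_{k+1}
Drift = namedtuple("Drift", ["start", "end", "t_start", "t_end"])
Saccade = namedtuple("Saccade", ["start", "end", "time"])

def make_cycle(vertices, times):
    # vertices = [a_0, a_0', a_1, a_1', ..., a_{n-1}, a_{n-1}'] with a_0 = a_n = 1;
    # times = [T_1, ..., T_n]; returns the alternating sequence of drifts and microsaccades
    events = []
    n = len(times)
    for k in range(n):
        t_prev = times[k - 1] if k > 0 else 0.0
        events.append(Drift(vertices[2*k], vertices[2*k + 1], t_prev, times[k]))
        events.append(Saccade(vertices[2*k + 1], vertices[(2*k + 2) % (2*n)], times[k]))
    return events

# a 3-cycle with symbolic vertices (in practice, unit quaternions in Listing's hemisphere)
cycle = make_cycle(["a0", "a0'", "a1", "a1'", "a2", "a2'"], [1.0, 2.0, 3.0])
for event in cycle:
    print(event)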
We suppose that during an n-cycle with a Listing curve $a(t)$, $t \in [0, T]$, the visual system perceives local information about the stimulus, more precisely, information about points B whose retinal image belongs to the fovea. The information needed for such local pattern recognition during a FEM cycle consists of two parts:
(a)
The dynamical information about Listing’s curve a ( t ) , t [ 0 , T ] , coded in oculomotor command signals. A copy of these signals (corollary discharge (CD)) is sent from the superior colliculus through the MD thalamus to the frontal cortex. It is responsible for visual stability, that is the compensation of the eye movements and perception of stable objects as stable.
(b)
The visual information about the characteristics of a neighborhood of a point B of the stimulus, which is primarily encoded in the chain of photoreceptors along the closed retinal curve $\bar B(t) = \mathrm{Ad}_{\bar a(t)}\,\bar B(0)$, which represents the point B during the FEM. This information is then sent for decoding through the LGN to the primary visual cortex and higher order visual structures. In particular, if $A(t) = \chi(a(t)) = \mathrm{Ad}_{a(t)}(i)$ is the gaze curve with the initial direction to the external point $A \in \mathbb{R}A(0) = \mathbb{R}i$, the point of fixation A is represented by the retinal curve $\bar A(t) = \mathrm{Ad}_{\bar a(t)}(i)$ with $\bar A(0) = i$.

3.6. A Model of Fixation Eye Movements

At first, we consider a purely deterministic scheme for processing information encoded in CD and visual cortex.
Then we discuss the problem of extending this model to a stochastic one. We state our main assumptions. Unless stated otherwise, we assume that we are working in the spatiotopic coordinates associated to $a_0 = 1$.
1. We assume that the CD contains information about the eye positions $a_{k-1}', a_k$, $k = 1, \ldots, n$, at the beginning and the end of the saccades $S_k$ (which is equivalent to information about the gaze positions) and about the corresponding times $T_k$.
2. We assume also that the CD has information about the Listing curve $\delta_{k+1}(t)$, $t \in [T_k, T_{k+1}]$, of the drift $\delta_{k+1} = \delta(a_k, a_k')$ from the point $a_k$ to the point $a_k'$. (This assumption is not realistic, and later we will revise it.)
3. Let B be a point of the stable stimulus and $\bar B(0)$ its retinal image at the time $t = 0$. Then during the drift $\delta_{k+1}(t) = \delta(a_k, a_k')$ the image of B is the retinal curve $\bar B_{k+1}(t) = \mathrm{Ad}_{\bar\delta_{k+1}(t)}\,\bar B(0)$. We denote by $I^B_{k+1}(t) = I(\bar B_{k+1}(t))$ the characteristics of this image of B, which are recorded in the activation of the photoreceptors along the retinal curve $\bar B(t)$ during the drift $\delta_{k+1}$ and then in the firing of visual neurons in the V1 cortex and higher order visual subsystems. Note that the information about the external stable point B is encoded into the time-dependent vector-function $I^B_{k+1}(t)$. This is a manifestation of a phenomenon that E. Ahissar and A. Arieli [12] aptly named 'figurating space by time'.
4. We assume that (most of) the information about the drift $\delta_{k+1}(t)$, encoded in the Listing curve $\delta_{k+1}(t) \subset S_L^+$, and about the characteristic functions $I^B_{k+1}(t)$ is encoded in the coordinate system associated to the end point $a_k$ of the preceding saccade $S_k$. We remark that if $a_k = \cos\theta\, 1 + \sin\theta\, p$, $p \in S_L^1$, then the coordinate system associated with $a_k$ is obtained from the spatiotopic coordinates by the rotation through the angle $2\theta$ about the axis p of the Listing plane $\Pi(j, k)$. (These coordinates are the retinotopic coordinates at the time $T_k$.)
5. Let C be another point of the stable stimulus with retinal image $\bar C(0)$ at $t = 0$ and $I^C_{k+1}(t)$, $t \in [T_k, T_{k+1}]$, the characteristic function of the retinal image
\[ \bar C(t) = \mathrm{Ad}_{\bar\delta_{k+1}(t)}\,\bar C(0) \]
of C during the drift $\delta_{k+1}$. Then the visual system is able to calculate the visual distance between the points B, C during the drift $\delta_{k+1}$ as an appropriate distance between their characteristic functions $I^B_{k+1}, I^C_{k+1}$.
6. We assume that a change of coordinates (remapping) occurs during each saccade. For example, during a 3-cycle the system uses the coordinates associated to the following points of Listing's hemisphere:
a_0 = 1 on [0, T_1],   a_1 on [T_1, T_2],   a_2 on [T_2, T_3].
Here the interval [T_k, T_{k+1}] indicates the time of the drift δ_{k+1}, during which the coordinate system associated to a_k is used.
7. In particular, this means that the information about the characteristic function I^B_{k+1}(t) of the external point B along the retinal curve during the drift δ_{k+1} = δ(a_k, a_k) is encoded in the coordinates associated to the end point a_k of the preceding saccade S_k (which are the retinotopic coordinates at the time T_k).
To recalculate the characteristic function I^B_{k+1}(t) in terms of the spatiotopic coordinates associated to a_0 = 1, it is sufficient to know the point a_k ∈ S_L^+ (see the sketch after this list).
8. Following M. Zirnsak and T. Moore [46], we suppose that during the drift δ_{k+1} = δ(a_k, a_k) the visual system chooses an external salient point A as the target of the next gaze position. More precisely, it fixes the retinal image Ā ∈ S^2_E of this point with respect to the coordinates associated with a_k (which are the retinotopic coordinates at the moment T_k). After the next saccade S_{k+1} = (a_k, a_{k+1}) (at the moment T_{k+1}), the point Ā ∈ S^2_E will become the point F (the center of the fovea), and after the saccade the point A will be the target point of the gaze vector N = F, that is, A ∈ ℝ N.
9. This allows us to give an explanation of the presaccadic shift of receptive fields.
The above assumption means that before the time T_{k+1} of the saccade S_{k+1}, the visual system knows the future gaze vector e_1(T_{k+1}) = N = F with respect to the coordinates associated with a_k. Of course, this information may be obtained only through the collaboration of the visual system with the oculomotor system. At some moment t_shift = T_{k+1} − Δ < T_{k+1}, Δ ≈ 100 ms, these subsystems recalculate the characteristic functions I^B_{k+1}(t) from the coordinates associated to a_k into the new coordinates associated to the future gaze point a_{k+1} and send this information to neurons of the different visual subsystems.
This leads to the shift of receptive fields discovered in [45]. The information about the future characteristic functions will contain some errors, since the real position of the eye at the moment t_shift differs from the position a_k. This is observed as a dislocation (compression) of the image in space and time [16,17,18,19]. After the saccade, these errors are corrected. One way to reduce such dislocations is to increase the frequency of microsaccades.
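As a minimal illustration of the recalculation in item 7 (a sketch under stated assumptions: the convention relating retinotopic and spatiotopic coordinates is the one given in item 4, and all numerical values are invented), the following Python code builds Ad_{a_k} as the rotation about the axis p through the angle 2θ and uses it to re-express a retinal datum in spatiotopic coordinates and, at t_shift, in the coordinates of the future gaze point a_{k+1}.

```python
import numpy as np

def rot(axis, angle):
    # rotation matrix about the unit axis through the given angle (Rodrigues formula)
    axis = np.asarray(axis, dtype=float)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def Ad_matrix(theta, p):
    # Ad_{a_k} for a_k = cos(theta) + sin(theta) p is the rotation about p through 2*theta
    return rot(p, 2 * theta)

def to_spatiotopic(v_in_ak, theta_k, p_k):
    # assumed convention (item 4): the a_k-coordinates are the spatiotopic ones rotated
    # by Ad_{a_k}, so a datum given in a_k-coordinates is expressed spatiotopically by Ad_{a_k}
    return Ad_matrix(theta_k, p_k) @ v_in_ak

def presaccadic_remap(v_in_ak, theta_k, p_k, theta_k1, p_k1):
    # at t_shift: re-express the datum in the coordinates of the future gaze point a_{k+1}
    return Ad_matrix(theta_k1, p_k1).T @ to_spatiotopic(v_in_ak, theta_k, p_k)

# invented eye positions a_k, a_{k+1} (angles and Listing-plane axes are placeholders)
theta_k,  p_k  = np.deg2rad(4.0), np.array([0.0, 1.0, 0.0])
theta_k1, p_k1 = np.deg2rad(6.0), np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 0.0])          # some retinal datum, given in a_k-coordinates

print(to_spatiotopic(v, theta_k, p_k))                       # the datum in spatiotopic coordinates
print(presaccadic_remap(v, theta_k, p_k, theta_k1, p_k1))    # the datum in the future a_{k+1}-coordinates
```

Only the end points a_k and a_{k+1} enter this computation, in line with the assumption that the CD stores the saccade end points rather than the shape of the drift.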

3.6.1. Diffusion Maps and Stochastic Model of FEM

It seems that assumption 1, that the CD contains information about the eye position at the beginning and at the end of each saccade, is rather reasonable. However, assumption 2 must be clarified. Since the drift trajectory δ_{k+1}(t) (Listing curve) can be arbitrary, it is difficult to believe that the CD stores information about its shape even for a short time. It is natural to assume that the drift is a random walk and that the oculomotor system and the CD store information about the random trajectory of the drift. Similarly, the characteristic function I^B_{k+1}(t), which contains the information about the stable stimulus point B recorded by the photoreceptors during the drift δ_{k+1}, becomes a random function.

3.6.2. Diffusion Map by R.R. Coifman and S. Lafon

We briefly recall the basic ideas of the diffusion maps (or diffusion geometry) of R.R. Coifman and S. Lafon [34,35], which we will need.
The diffusion geometry on a (compact, oriented) manifold M with a volume form d vol such that ∫_M d vol = 1 is determined by a kernel k(x, y), i.e., a non-negative and symmetric (k(x, y) = k(y, x) ≥ 0) function on M × M. An example of a kernel is the Gauss kernel k_ε(x, y) = exp(−‖x − y‖²/ε), ε > 0, of a Euclidean space, or the heat kernel of a Riemannian manifold. The normalization of the kernel gives the transition Markov kernel
$$ p(x,y) := \frac{k(x,y)}{d(x)}, \qquad d(x) := \int_M k(x,y)\, d\mathrm{vol}(y), $$
which defines a random walk on M. The value p(x, y) is interpreted as the probability of jumping in one step from the point x to the point y.
The associated diffusion operator P on the space of functions is defined by
$$ (Pf)(x) = \int_M p(x,y)\, f(y)\, d\mathrm{vol}(y). $$
Then the probability density of moving from x to y in T ∈ ℕ steps is described by the kernel p_T(x, y) associated to the T-th power P^T of the operator P, such that
$$ (P^T f)(x) = \int_M p_T(x,y)\, f(y)\, d\mathrm{vol}(y). $$
It can be defined for any T ∈ ℝ in terms of the eigenvalues and eigenfunctions of the operator P [34]. Thus any point x ∈ M determines a family of bump functions p^T_x(u) := p_T(x, u) on M, which characterize the local structure of a small neighborhood of x. We call p^T_x(u) the trajectory of the random walk (or random trajectory) started from x during the time interval [0, T].
R.R. Coifman and S. Lafon [34] define the diffusion distance between points x, y ∈ M as the L²-distance between the bump functions (or random trajectories) p^T_x(u) and p^T_y(u) started from these points:
$$ D_T(x,y)^2 = \| p_x^T - p_y^T \|^2 := \int_M \big[ p_x^T(u) - p_y^T(u) \big]^2\, d\mathrm{vol}(u). $$
Let 1 = λ_0 ≥ λ_1 ≥ λ_2 ≥ λ_3 ≥ ⋯ be the eigenvalues of the diffusion operator P and ψ_0 = 1, ψ_1, ψ_2, … the associated eigenfunctions. Then for a sufficiently large number m, the diffusion distance D_T(x, y) is approximated by the function D_{m,T}(x, y) given by
$$ D_{m,T}^2(x,y) = \sum_{k=1}^{m} \lambda_k^{2T}\, \big( \psi_k(x) - \psi_k(y) \big)^2. $$
In other words, the map (called the diffusion map)
$$ \Phi_T : M \to \mathbb{R}^m, \qquad x \mapsto \Phi_T(x) = \big( \lambda_1^T \psi_1(x), \lambda_2^T \psi_2(x), \ldots, \lambda_m^T \psi_m(x) \big)^t $$
is close to an isometric map of the manifold M, equipped with the diffusion metric D_T, into the Euclidean space ℝ^m. If the manifold M is approximated by a finite system of points X = (x_1, x_2, …, x_N), the diffusion map gives a dimensionality reduction of the system X.
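For concreteness, here is a minimal NumPy sketch of this construction on a finite point cloud (a standard discrete version of the scheme above; the sample data and parameter values are invented, and the normalization is the simplest row-stochastic one rather than any particular variant discussed in [34,35]).

```python
import numpy as np

def diffusion_map(X, eps=0.5, m=2, T=1.0):
    """Discrete diffusion map of a point cloud X (N x d) with a Gauss kernel."""
    # kernel k_eps(x, y) = exp(-||x - y||^2 / eps)
    D2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    K = np.exp(-D2 / eps)
    # Markov normalization p(x, y) = k(x, y) / d(x)
    P = K / K.sum(axis=1, keepdims=True)
    # eigendecomposition of the row-stochastic diffusion operator
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)            # lambda_0 = 1 >= lambda_1 >= ...
    w, V = w.real[order], V.real[:, order]
    # diffusion coordinates Phi_T(x_i) = (lambda_1^T psi_1(x_i), ..., lambda_m^T psi_m(x_i))
    return V[:, 1:m+1] * (w[1:m+1] ** T)

def diffusion_distance(Phi, i, j):
    # D_{m,T}(x_i, x_j) = || Phi_T(x_i) - Phi_T(x_j) ||, cf. the formula above
    return np.linalg.norm(Phi[i] - Phi[j])

# toy example: noisy points on a circle (invented data)
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((200, 2))
Phi = diffusion_map(X, eps=0.2, m=2, T=1.0)
print(diffusion_distance(Phi, 0, 1))
```

The leading nontrivial diffusion coordinates recover the circle parameter, which is the kind of dimensionality reduction referred to above.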

3.6.3. Remarks on Stochastic Description of Drift as Random Walk and Possible Application of Diffusion Distance

The idea that FEM is a stochastic process that may be described as a random walk has a long history [29,30,31,32,33,42].
1. We assume that the drift is a random walk on the Listing hemisphere S_L^+ defined by some kernel. The question is how to choose an appropriate kernel. The first guess is that it is the heat kernel of the round (hemi)sphere; the short-time asymptotics of the heat kernel of the round sphere is known, see [47]. The functional structure of the retina, which records the light information, is very important for choosing the kernel. The inhomogeneity of the retina shows that the first guess is not very reasonable. It seems that the more natural assumption is that the system uses the heat kernel of the metric on the Listing hemisphere which corresponds to the physiological metric of the retina. Recall that the latter is the pull-back of the physical metric of the V1 cortex with respect to the retinotopic mapping. (A minimal simulation of such a random-walk drift is sketched after this list.)
2. We assume that the drift is a random walk on Listing's hemisphere defined by some kernel. Then by the drift trajectory δ_{k+1}(t) from the point a_k we may understand the random trajectory on S_L^+ (or the bump function) p_{a_k}(a) := p^{ΔT}_{a_k}(a) during the time interval ΔT = [T_k, T_{k+1}]. It has no fixed end point, but it allows us to calculate the probability that the end point belongs to any neighbourhood of the point a_k. The situation is similar to Feynman's path-integral formulation of quantum mechanics. Moreover, if by a point we understand not a mathematical point but a small domain, e.g., the domain which corresponds to the receptive field of a visual neuron in the V1 cortex or to the composite receptive field of a V1 column (which is 2–4 times larger) [37], then we may speak about the random drift δ_{a_k, a_{k+1}} from the point a_k to the point a_{k+1} with the bump function p^{ΔT}_{a_k, a_{k+1}}(a) ("the random trajectory"). Roughly speaking, this function gives the probability that the random drift from the point a_k to the point a_{k+1} arrives after ΔT steps at the point a ∈ S_L^+.
3. Due to the diffeomorphism defined by the Hopf map χ : S_L^+ → S̃^2_E, we may identify the random walk in S_L^+ with a random walk on the eye sphere S̃^2_E. A drift δ_{k+1}(t) = δ(a_k, a_{k+1}) in S_L^+ induces the "drift" of a point B ∈ S̃^2_E given by
$$ B(t) := \mathrm{Ad}_{\bar{\delta}_{k+1}(t)}\, B. $$
Let A be the fixation point of the gaze at the initial moment t = 0, such that its retinal image is i. Then the retinal image of the point A during the drift δ_{k+1}(t) is the curve A(t) = Ad_{δ̄_{k+1}(t)}(i). More generally, if B(0) is the retinal image at t = 0 of any other point B of the stimulus, then its retinal image during the drift δ_{k+1} is
$$ B(t) := \mathrm{Ad}_{\bar{\delta}_{k+1}(t)}\, B(0). $$
In the stochastic case, the drift δ̄_{k+1}(t) is characterized by the random trajectory p^{ΔT}_{a_k}(a), and the associated "drift" of points in S̃^2_E by the random trajectory
$$ p_{B_k}^{\Delta T}(x) := p_{a_k}^{\Delta T}(s\, x), $$
where B_k = Ad_{ā_k} B(0) and s = χ^{−1} is Listing's section. Note that the right-hand side does not depend on the point B(0).
We conjecture that the oculomotor control system detects information about the random trajectories in S_L^+ and S^2_E and that the corollary discharge gets a copy of this information.
It seems that the proposed explanation of the shift of receptive fields may be generalized to the stochastic case.
4. Let B be a point of the stable stimulus, B_0 its retinal image at t = 0 and B_k := Ad_{ā_k} B_0 its retinal image at the time T_k. Denote by I^B_k(t) = I(B_k(t)) the characteristic function which describes the visual information about the stable stimulus point B with the retinal image B_k(t) during the drift, t ∈ [T_k, T_{k+1}]. If the drift is considered as a random walk, the information about the drift curve B_k(t) ∈ S^2_E is encoded in the function p^{ΔT}_{a_k}(s x), and the characteristic function I^B_k(t) becomes a random function, described by a bump function p^{ΔT}_{I^B_k}(x) on S^2_E. We suppose that the visual system calculates the visual distance between external points B, C as the diffusion distance between the associated bump functions.
5. We also conjecture that, as in the deterministic case, the information about the random trajectory of the drift δ_{k+1} encoded in the CD and the information about the characteristic bump functions encoded in the different structures of the visual cortex are sufficient for the stabilization of visual perception. The problem reduces to the recalculation of all this information in the spatiotopic coordinates associated with the point a = 1.
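To make items 1 and 2 concrete, the following Python sketch (an illustration under simplifying assumptions: an isotropic Gaussian step kernel is used as a stand-in for the heat kernel, and all step sizes and durations are invented) simulates a drift as a random walk on the sphere by composing many small random rotations, and estimates the resulting bump function p^{ΔT}_{a_k} as an empirical histogram; the L² distance between two such histograms plays the role of the diffusion distance between bump functions.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation_step(v, sigma):
    # one small random rotation of the unit vector v: axis uniform on the sphere,
    # angle ~ N(0, sigma^2); a crude surrogate for a heat-kernel step
    axis = rng.standard_normal(3)
    axis /= np.linalg.norm(axis)
    angle = sigma * rng.standard_normal()
    # Rodrigues' rotation formula
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1 - np.cos(angle)))

def bump_histogram(start, n_steps=100, n_walks=2000, sigma=0.01, bins=20):
    # empirical estimate of the bump function p^{DeltaT}_{start}: distribution of the
    # end points of n_walks random drifts of n_steps steps, binned over (theta, phi)
    ends = []
    for _ in range(n_walks):
        v = np.array(start, dtype=float)
        for _ in range(n_steps):
            v = random_rotation_step(v, sigma)
        ends.append(v)
    ends = np.array(ends)
    theta = np.arccos(np.clip(ends[:, 2], -1, 1))
    phi = np.arctan2(ends[:, 1], ends[:, 0])
    H, _, _ = np.histogram2d(theta, phi, bins=bins,
                             range=[[0, np.pi], [-np.pi, np.pi]], density=True)
    return H

# bump functions of two nearby start points and the L^2 distance between them
p_a = bump_histogram([0.0, 0.0, 1.0])
p_b = bump_histogram([0.05, 0.0, np.sqrt(1 - 0.05**2)])
print(np.sqrt(np.sum((p_a - p_b)**2)))
```

In the intended model the isotropic kernel would be replaced by the heat kernel of the retina's physiological metric, and the histogram by the corresponding bump function p^{ΔT}_{a_k} on S_L^+.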

Funding

This research received no external funding.

Acknowledgments

I thank Andrea Spiro, who read the manuscript and made many useful comments, remarks and suggestions. I would also like to thank Andrej Balakin for useful discussions and for help in preparing the pictures. In unpublished course work at the Higher School of Economics, he derived, under some natural assumptions, Listing's law from Donders' law using the projective geometry of the orthogonal group, and showed that saccades lie on the section of the eye sphere by a plane through the point i corresponding to the center of the fovea.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bressloff, P.C.; Cowan, J.D. A spherical model for orientation and spatial-frequency tuning in a cortical hypercolumn. Philos. Trans. R. Soc. Lond. B 2003, 357, 1643–1667. [Google Scholar] [CrossRef] [PubMed]
  2. Bressloff, P.C.; Cowan, J.D. The functional geometry of local and horizontal connections in a model of V1. J. Physiol. Paris 2003, 97, 221–236. [Google Scholar] [CrossRef] [Green Version]
  3. Bressloff, P.C.; Cowan, J.D. The visual cortex as a crystal. Phys. D 2002, 173, 226–258. [Google Scholar] [CrossRef] [Green Version]
  4. Citti, G.; Sarti, A. (Eds.) Neuromathematics of Vision; Lecture Notes in Morphogenesis; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  5. Petitot, J. The neurogeometry of pinwheels as a sub-Riemannian contact structure. J. Physiol. Paris 2003, 97, 265–309. [Google Scholar] [CrossRef] [PubMed]
  6. Petitot, J. Elements of Neurogeometry; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  7. Sarti, A.; Citti, G.; Petitot, J. The symplectic structure of the primary visual cortex. Biol. Cybern. 2008, 98, 33–48. [Google Scholar] [CrossRef] [PubMed]
  8. Westheimer, G. The third dimension in the primary visual cortex. J. Physiol. 2009, 587, 2807–2816. [Google Scholar] [CrossRef]
  9. Alekseevsky, D. Conformal model of hypercolumns in V1 cortex and the Mobius group. Application to the visual stability problem. In Proceedings of the International Conference on Geometric Science of Information, Paris, France, 21–23 July 2021; pp. 65–72. [Google Scholar]
  10. Yarbus, A.L. Eye Movements and Vision; Plenum Press: New York, NY, USA, 1967. [Google Scholar]
  11. Rucci, M.; Ahissar, E.; Burr, D. Temporal Coding of Visual Space. Trends Cogn. Sci. 2018, 22, 883–895. [Google Scholar] [CrossRef] [PubMed]
  12. Ahissar, E.; Arieli, A. Figuring Space by Time. Neuron 2001, 32, 185–201. [Google Scholar] [CrossRef] [Green Version]
  13. Ahissar, E.; Arieli, A. Seeing via miniature eye movements: A dynamic hypothesis for vision. Front. Comput. Neurosci. 2012, 6, 89. [Google Scholar] [CrossRef] [Green Version]
  14. Carandini, M. What simple and complex cells compute? J Physiol. 2006, 577, 463–466. [Google Scholar] [CrossRef] [PubMed]
  15. Carandini, M.; Demb, J.B.; Mante, V.; Tolhurst, D.J.; Dan, Y.; Olshausen, B.A.; Gallant, J.L.; Rust, N.C. Do We Know What the Early Visual System Does? J. Neurosci. 2005, 25, 10577–10597. [Google Scholar] [CrossRef] [PubMed]
  16. Melcher, D.; Colby, C.L. Trans-saccadic perception. Trends Cogn Sci. 2008, 12, 466–473. [Google Scholar] [CrossRef]
  17. Wolfe, B.A.; Whitney, D. Saccadic remapping of object-selective information. Atten. Percept. Psychophys. 2015, 77, 2260–2269. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Ross, J.; Morrone, M.C.; Burr, D.C. Compression of visual space before saccades. Nature 1997, 386, 598–601. [Google Scholar] [CrossRef]
  19. Burr, D.C.; Ross, J.; Binda, P.; Morrone, M.C. Saccades compress space, time and number. Trends Cogn. Sci. 2010, 14, 528–533. [Google Scholar] [CrossRef]
  20. Hauperich, A.-K.; Young, L.K.; Smithson, H.E. What makes a microsaccade? A review of 70 years of research prompts a new detection method. J. Eye Mov. Res. 2020, 12, 1–22. [Google Scholar] [CrossRef] [PubMed]
  21. Aytekin, M.; Victor, J.D.; Rucci, M. The Visual Input to the Retina during Natural Head-Free Fixation. J. Neurosci. 2014, 17, 1201–1215. [Google Scholar] [CrossRef] [PubMed]
  22. Boi, M.; Poletti, M.; Victor, J.D.; Rucci, M. Consequences of the oculomotor cycle for the dynamics of perception. Curr. Biol. 2017, 27, 110. [Google Scholar] [CrossRef]
  23. Poletti, M.; Rucci, M. A compact field guide to the study of microsaccades: Challenges and functions. Vis. Res. 2016, 118, 83–97. [Google Scholar] [CrossRef]
  24. Rucci, M.; Poletti, M. Control and Functions of Fixational Eye Movements. Annu. Rev. Vis. Sci. 2015, 1, 499–518. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Rucci, M.; Victor, J.D. The Unsteady Eye: An Information Processing Stage, not a Bug. Trends Neurosci. 2015, 38, 195–206. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Wurtz, R.H. Neuronal mechanisms of visual stability. Vis. Res. 2008, 48, 2070–2089. [Google Scholar] [CrossRef] [Green Version]
  27. Cavanaugh, J.; Berman, R.A.; Joiner, W.M.; Wurtz, R.H. Saccadic Corollary Discharge Underlies Stable Visual Perception. J. Neurosci. 2016, 36, 31–42. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Wurtz, R.H.; Joiner, W.M.; Berman, R.A. Neuronal mechanisms for visual stability: Progress and problems. Philos. Trans. R. Soc. B 2011, 366, 492–503. [Google Scholar] [CrossRef]
  29. Vasudevan, R.; Phatak, A.V.; Smith, J.D. A stochastic model for eye movements during fixation on a stationary target. Kybernetik 1972, 11, 24–31. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Lakshminarayanan, V. Stochastic Eye Movements While Fixating on a Stationary Target. In Stochastic Processes and Their Applications; Vijayakumar, A., Sreenivasan, M., Eds.; Narosa Publishing House Private Limited: New Delhi, India, 1999; pp. 39–49. [Google Scholar]
  31. Boccignone, G. Advanced statistical methods for eye-movement analysis and modelling: A gentle introduction. arXiv 2017, arXiv:1506.07194v4. [Google Scholar]
  32. Engbert, R.; Mergenthaler, K.; Sinn, P.; Pikovsky, A. An integrated model of fixation eye movements and microsaccades. Proc. Nat. Acad. Sci. USA 2011, 108, 765–770. [Google Scholar] [CrossRef] [Green Version]
  33. Herrmann, C.J.J.; Metzler, R.; Engbert, R. A self-avoiding walk with neural delays as a model of fixational eye movements. Sci. Rep. 2017, 7, 12958. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Coifman, R.R.; Lafon, S. Diffusion maps. Appl. Comput. Harmon. Anal. 2006, 21, 5–30. [Google Scholar] [CrossRef] [Green Version]
  35. Lafon, S.; Lee, A.B. Diffusion Maps and Coarse-Graining: A Unified Framework for Dimensionality Reduction, Graph Partitioning and Data Set Parameterization. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1393–1403. [Google Scholar] [CrossRef] [Green Version]
  36. Kaplan, E.; Benardete, E. The dynamics of primate retinal ganglion cells. Prog. Brain Res. 2001, 134, 17–34. [Google Scholar]
  37. Hubel, D.H. Eye, Brain and Vision. JAMA 1988, 260, 3677. [Google Scholar]
  38. Schwartz, E. Topographic Mapping in Primate Visual Cortex: History, Anatomy and Computation; Technical Report 593; Courant Institute of Mathematical Sciences: New York, NY, USA, 1993. [Google Scholar]
  39. Schwartz, E. Spatial mapping in the primate sensory projection: Analytic structure and relevance to perception. Biol. Cybern. 1977, 25, 181–194. [Google Scholar] [CrossRef] [PubMed]
  40. Kowler, E. Eye movements: The past 25 years. Vis. Res. 2011, 51, 1457–1483. [Google Scholar] [CrossRef] [Green Version]
  41. Rolfs, M. Microsaccades: Small steps on a long way. Vis. Res. 2009, 49, 2415–2441. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Sinn, P.; Engbert, R. Small saccades versus microsaccades: Experimental distinction and model-based unification. Vis. Res. 2016, 118, 132–143. [Google Scholar] [CrossRef] [PubMed]
  43. Bowers, N.R.; Boehm, A.E.; Roorda, A. The effects of fixational tremor on the retinal image. J. Vis. 2019, 19, 8. [Google Scholar] [CrossRef] [Green Version]
  44. Martinez-Conde, S.; Macknik, S.L.; Hubel, D.H. The role of fixation eye movements in visual perception. Nat. Rev. 2004, 5, 224–240. [Google Scholar] [CrossRef] [PubMed]
  45. Duhamel, J.-R.; Colby, C.L.; Goldberg, M.E. The Updating of the Representation of Visual Space in Parietal Cortex by Intended Eye Movements. Science 1992, 255, 90–92. [Google Scholar] [CrossRef]
  46. Zirnsak, M.; Moore, T. Saccades and shifting receptive fields: Anticipating consequences or selecting targets? Trends Cogn. Sci. 2014, 18, 621–628. [Google Scholar] [CrossRef] [Green Version]
  47. Molchanov, S.A. Diffusion processes and Riemannian geometry. Uspekhi Mat. Nauk 1975, 30, 3–59. [Google Scholar] [CrossRef]
Figure 1. The Human Eye. Adapted from Wikipedia.
Figure 2. Central projection.
Figure 3. Anatomy of retina.
Figure 4. On and Off Kuffler cells.
Figure 5. Action of Marr filter.
Figure 6. Eye, retina and fovea. Adapted from Wikipedia.
Figure 7. (A) An example of an eye trace taken from an AOSLO movie. A microsaccade (magenta background) is clearly distinguishable from the ocular drift (blue background). Gray vertical gridlines demarcate frame boundaries from the AOSLO movie. Each frame is acquired over 33 ms as indicated by the scale bar. (B) An example of an image/frame from an AOSLO movie. The cone mosaic can be resolved even at the fovea. (C) An example of the AOSLO raster with a green letter E as it would appear to the subject. The small discontinuities in the eye trace at the boundaries between frames 478–479 and 480–481 are likely the result of tracking errors that occur at the edges of the frame. They are infrequent and an example is included here for full disclosure. Errors like this contribute to the peaks in the amplitude spectrum at the frame rate and higher harmonics. All original eye motion traces are available for download. Adapted from [43].
Figure 8. Microsaccades and Ocular Drifts. Adapted from Wikipedia https://commons.wikimedia.org/wiki, CC-BY.
Figure 9. Listing's sphere.
Figure 10. The eye sphere.
Figure 11. Hexagon.
Table 1. Characteristics of fixation eye movements (adapted from [44]) with refined data from [23,43] and Wikipedia.

Movement        Amplitude       Duration       Frequency         Speed
Tremor          11–60 arcsec    –              50–100 Hz         max 20 arcmin/s
Drift           1.5–4 arcmin    0.2–0.8 s      95–97% of time    50 arcmin/s
Microsaccade    1–30 arcmin     0.01–0.02 s    0.1–5 Hz          40–220 deg/s