# Investigating Human Visual Sensitivity to Binocular Motion-in-Depth for Anti- and De-Correlated Random-Dot Stimuli


## Abstract


## 1. Introduction

#### 1.1. Types of Binocular Cues to Motion-in-Depth

#### 1.2. Experimental Isolation of the Binocular Cues

#### 1.3. Experimental Evidence for an IOVD-Specific Mechanism

#### 1.4. Comparison of aIOVD and dIOVD Stimuli

## 2. Methods

#### 2.1. Setup

#### 2.2. Stimuli

#### 2.2.1. Signal Dots

#### 2.2.2. Noise Dots

#### 2.3. Procedure

#### 2.4. Participants

#### 2.5. Data Analysis

**Fitting of psychometric functions.** Cumulative normal psychometric functions were fitted to the data using MATLAB® [41] and the Palamedes toolbox [46]. Initially, we fitted psychometric functions separately for each participant and condition with a fixed guess rate (0.5) and a fixed lapse rate (0.01). The resulting threshold and slope estimates were then used as starting values for fitting the data from the three stimulus conditions (FULL, aIOVD, dIOVD) simultaneously for each participant. In these fits, the lapse rate was free to vary between participants but not between conditions, yielding a single lapse rate per participant across all conditions. The range of possible lapse rates was constrained to values between 0 and 0.06. The fits are shown in Figure 3.

The errors associated with the fitted parameters (thresholds, slopes, and lapse rates) were estimated by performing 2000 non-parametric bootstraps of the fits. All simulations converged. The standard error (SE) of each parameter estimate is given by the standard deviation of its bootstrap sampling distribution. We present 95% confidence intervals of ±1.96 SE.

Motion coherence values ranged from 0% to 100% in steps of 10%. These values were log-transformed before fitting the cumulative normal function. For clarity, thresholds and their confidence intervals are displayed on a linear scale in Figure 3 and Figure 4; the transformation from log to linear values results in asymmetric error bars.

**Model comparison.** Our aim was to determine whether the three stimulus conditions affected performance differently. To do this, we compared two models:

**Model 1**: the stimulus conditions do not affect performance differently; i.e., any apparent differences between conditions are due to sampling, while the underlying thresholds and slopes are the same in all conditions. In this case, a single psychometric function would adequately fit the data from all conditions.

**Model 2**: the conditions affect performance differently. In this case, separate psychometric functions would have to be fitted to the data, indicating that performance is not determined by a single underlying mechanism.

To determine which model provided the better fit, the data from all three conditions were fitted twice, once under the assumptions of each model: for Model 1, the data from all conditions were combined; for Model 2, the conditions were fitted separately. The likelihood ratio between the two fits was then computed. Because Model 2 has more free parameters than Model 1, Model 1 can never provide a better fit than Model 2. A likelihood ratio of one would indicate that the two models fit the data equally well; the smaller the likelihood ratio, the worse the fit of Model 1 relative to Model 2. Note that this comparison only contrasts the fits of the two models against each other; it does not check whether either model itself fits the data well. That is the purpose of the goodness-of-fit test described below.

The likelihood ratio alone does not tell us whether the data can be sufficiently explained by Model 1, because differences between the fits could be due to sampling. The appropriate question to ask is: assuming the data can be described by a single model, how likely is a likelihood ratio as low as, or lower than, the one found for the experimental data? To answer this, a 'simulated participant' was created who responded according to Model 1: random data sets were repeatedly generated from the psychometric function fitted to the combined experimental data. The two models were fitted to each simulated data set, and the likelihood ratio between them was calculated. Since Model 1 must, by construction, provide a good fit to these simulated data, any likelihood ratios smaller than one are due to sampling alone. The proportion of simulations (p) with a likelihood ratio smaller than the experimental one indicates whether the experimental likelihood ratio falls within the range expected from sampling.

We then set a criterion for p below which we considered it unlikely that a participant behaving according to Model 1 would produce likelihood ratios as small as, or smaller than, those found for the experimental data. Below this criterion, we rejected the null hypothesis that the stimulus conditions did not affect performance differently and instead assumed that separate psychometric functions were required to adequately describe the data. We chose a cut-off of $\alpha = 0.05$ and used 2000 bootstraps for each model comparison and participant. All simulations converged.

**Goodness-of-fit.** A goodness-of-fit analysis was used to test the assumptions made during the fitting procedure. We assumed that the psychometric functions were cumulative normal functions with a guess rate of 0.5 and lapse rates between 0 and 0.06 that were equal across conditions. These assumptions specified the target model, which was tested against a model that made no specific assumptions (the saturated model), i.e., one based on the observed proportions of correct responses alone. Both models were fitted to the experimental data and the likelihood ratio of the fits was computed. The same test was then performed repeatedly on simulated data generated from the target model: for each simulated data set, the likelihood ratio between the target-model fit and the saturated-model fit was computed. The proportion of simulations (p) with a likelihood ratio smaller than the experimental one indicates whether the target model provides a good fit to the experimental data (see [45]). Following [45], if this goodness-of-fit measure p was smaller than 0.05, we considered the fit unacceptably poor and concluded that the target model did not adequately describe the data. The experiment was simulated 2000 times, and all simulations converged. The results of the goodness-of-fit test are shown in Figure A6 in Appendix C.
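The fitting and model-comparison routines used in the paper are MATLAB-based (Palamedes [46]). Purely as an illustration of the logic described above, the sketch below re-implements the combined-versus-separate fit and the bootstrapped likelihood-ratio test in Python on synthetic data. It is a simplified stand-in, not the authors' code: it uses two conditions instead of three, 200 rather than 2000 simulations, and all function names, parameter values, and data are our own.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

GUESS = 0.5  # fixed guess rate for the two-alternative task

def psychometric(x, threshold, slope, lapse):
    """Cumulative-normal psychometric function with fixed guess rate."""
    return GUESS + (1.0 - GUESS - lapse) * norm.cdf(x, loc=threshold, scale=1.0 / slope)

def neg_log_likelihood(params, x, n_correct, n_trials):
    p = np.clip(psychometric(x, *params), 1e-9, 1.0 - 1e-9)
    return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1.0 - p))

def fit(x, n_correct, n_trials):
    """Maximum-likelihood fit; lapse rate constrained to [0, 0.06] as in the paper."""
    return minimize(neg_log_likelihood, x0=[np.mean(x), 1.0, 0.01],
                    args=(x, n_correct, n_trials), method="L-BFGS-B",
                    bounds=[(x.min(), x.max()), (1e-3, 50.0), (0.0, 0.06)])

rng = np.random.default_rng(1)
x = np.log(np.arange(10, 101, 10))          # log-transformed coherence levels (%)
n_trials = np.full(x.size, 40)
p_true = psychometric(x, np.log(30.0), 3.0, 0.01)
cond_a = rng.binomial(n_trials, p_true)     # two synthetic conditions with
cond_b = rng.binomial(n_trials, p_true)     # identical underlying parameters

# Model 1: one function fitted to the pooled data; Model 2: separate fits.
xx, nn = np.tile(x, 2), np.tile(n_trials, 2)
pooled = fit(xx, np.concatenate([cond_a, cond_b]), nn)
nll_separate = fit(x, cond_a, n_trials).fun + fit(x, cond_b, n_trials).fun
log_lr_data = nll_separate - pooled.fun     # log likelihood ratio, at most ~0

# Parametric bootstrap: a 'simulated participant' responding according to Model 1.
p_null = psychometric(x, *pooled.x)
n_sims, n_smaller = 200, 0
for _ in range(n_sims):
    sim_a = rng.binomial(n_trials, p_null)
    sim_b = rng.binomial(n_trials, p_null)
    nll_pool = fit(xx, np.concatenate([sim_a, sim_b]), nn).fun
    nll_sep = fit(x, sim_a, n_trials).fun + fit(x, sim_b, n_trials).fun
    if (nll_sep - nll_pool) < log_lr_data:  # simulated ratio more extreme
        n_smaller += 1
p_value = n_smaller / n_sims                # reject Model 1 if p_value < 0.05
```

Because the synthetic conditions here share one set of underlying parameters, the test should usually retain Model 1; applied to real data, a small `p_value` would indicate that separate psychometric functions are needed, mirroring the decision rule in the text.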

## 3. Results

#### Model Comparison

## 4. Discussion

#### 4.1. Comparability of aIOVD and dIOVD Stimuli

#### 4.2. Inter-Individual Variability

## 5. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Appendix A. Additional Methods

**Figure A1.** Example of a single frame of the stimulus. At the centre of the left- and right-eye displays was a square with horizontal and vertical nonius lines, of which one line of each orientation was presented to one eye and the other two lines to the other eye. The black and white random dots moved within a circular field surrounded by a ring of randomly placed binocular black and white dots at zero disparity. To help with binocular alignment of the stimulus, white binocular squares were presented in the four corners of the display.

## Appendix B. Screening Data

#### Appendix B.1. Methods

#### Appendix B.2. Results

**Figure A2.** Psychometric function fits for all 15 participants in the screening experiments. The x-axis shows motion coherence as percent signal and the y-axis proportion consistent. Filled circles show data points; curves show psychometric functions fitted to the data. FULL cue is shown in black, aIOVD in blue, and dIOVD in orange. Note that participant S5 saw motion-in-depth in the direction opposite to that perceived by the other participants.

**Figure A3.** Screening motion coherence thresholds. The x-axis lists the different participants, and the y-axis shows motion coherence thresholds as percent signal. Data for FULL cue are shown in black, aIOVD in blue, and dIOVD in orange. Error bars show 95% confidence intervals of the threshold estimates derived from a non-parametric bootstrap procedure. The horizontal red band indicates the participants and conditions for which no thresholds could be determined. Data points have been displaced horizontally to avoid complete occlusion.

## Appendix C. Additional Results

**Figure A4.** Slopes (in log-space) for six participants. FULL cue is shown in black, aIOVD in blue, and dIOVD in orange. Error bars show 95% confidence intervals of the slope estimates derived from a non-parametric bootstrap procedure. The red shaded area indicates participants for whom no thresholds could be determined.

**Figure A5.** Lapse rates for six participants. Lapse rate fits were constrained to be identical for the three stimulus types and limited to the range 0–0.06. Error bars show 95% confidence intervals of the lapse rate estimates derived from a non-parametric bootstrap procedure. The red shaded area indicates participants for whom no thresholds could be determined.

**Figure A6.** Goodness-of-fit for four models. The different models are shown on the x-axis: F vs. A vs. D, F vs. A, F vs. D, A vs. D with F: FULL cue, A: aIOVD, and D: dIOVD. The y-axis shows the five participants that were included in the analysis. Grey-shading and values in the different fields indicate the p-values for each test. The significance level for the overall comparison (first column) was $\alpha =0.05$ (significant values are shown in red). For the multiple comparisons (columns 2–4) it was adjusted to ${\alpha}_{\mathrm{bc}}=0.0167$ (significant values are shown in magenta).

## References

1. Cumming, B.G.; Parker, A.J. Binocular mechanisms for detecting motion-in-depth. Vis. Res. **1994**, 34, 483–495.
2. Harris, J.M.; Nefs, H.T.; Grafton, C.E. Binocular vision and motion-in-depth. Spat. Vis. **2008**, 21, 531–547.
3. Regan, D. Binocular correlates of the direction of motion in depth. Vis. Res. **1993**, 33, 2359–2360.
4. Rashbass, C.; Westheimer, G. Independence of conjugate and disjunctive eye movements. J. Physiol. **1961**, 159, 361–364.
5. Julesz, B. Foundations of Cyclopean Perception; University of Chicago Press: Chicago, IL, USA, 1971.
6. Allison, R.; Howard, I.; Howard, A. Motion in depth can be elicited by dichoptically uncorrelated textures. Percept. ECVP Abstr. **1998**, 27, 46.
7. Shioiri, S.; Saisho, H.; Yaguchi, H. Motion in depth based on inter-ocular velocity differences. Vis. Res. **2000**, 40, 2565–2572.
8. Rokers, B.; Cormack, L.K.; Huk, A.C. Strong percepts of motion through depth without strong percepts of position in depth. J. Vis. **2008**, 8, 6.
9. Cogan, A.I.; Kontsevich, L.L.; Lomakin, A.J.; Halpern, D.L.; Blake, R. Binocular disparity processing with opposite-contrast stimuli. Perception **1995**, 24, 33–47.
10. Cogan, A.I.; Lomakin, A.J.; Rossi, A.F. Depth in anticorrelated stereograms: Effects of spatial density and interocular delay. Vis. Res. **1993**, 33, 1959–1975.
11. Cumming, B.G.; Shapiro, S.E.; Parker, A.J. Disparity detection in anticorrelated stereograms. Perception **1998**, 27, 1367–1377.
12. Harris, J.M.; Rushton, S.K. Poor visibility of motion in depth is due to early motion averaging. Vis. Res. **2003**, 43, 385–392.
13. Cumming, B.G.; Parker, A.J. Responses of primary visual cortical neurons to binocular disparity without depth perception. Nature **1997**, 389, 280–283.
14. Neri, P.; Parker, A.J.; Blakemore, C. Probing the human stereoscopic system with reverse correlation. Nature **1999**, 401, 695–698.
15. Ohzawa, I.; DeAngelis, G.C.; Freeman, R.D. Stereoscopic depth discrimination in the visual cortex: Neurons ideally suited as disparity detectors. Science **1990**, 249, 1037–1041.
16. Cormack, L.K.; Czuba, T.B.; Knoell, J.; Huk, A.C. Binocular Mechanisms of 3D Motion Processing. Annu. Rev. Vis. Sci. **2017**.
17. Maunsell, J.H.; Van Essen, D.C. Functional properties of neurons in middle temporal visual area of the macaque monkey. II. Binocular interactions and sensitivity to binocular disparity. J. Neurophysiol. **1983**, 49, 1148–1167.
18. Czuba, T.B.; Huk, A.C.; Cormack, L.K.; Kohn, A. Area MT encodes three-dimensional motion. J. Neurosci. **2014**, 34, 15522–15533.
19. Sanada, T.M.; DeAngelis, G.C. Neural Representation of Motion-In-Depth in Area MT. J. Neurosci. **2014**, 34, 15508–15521.
20. Likova, L.T.; Tyler, C.W. Stereomotion processing in the human occipital cortex. NeuroImage **2007**, 38, 293–305.
21. Rokers, B.; Cormack, L.K.; Huk, A.C. Disparity- and velocity-based signals for three-dimensional motion perception in human MT+. Nat. Neurosci. **2009**, 12, 1050–1055.
22. Harris, J.M.; Watamaniuk, S.N. Speed discrimination of motion-in-depth using binocular cues. Vis. Res. **1995**, 35, 885–896.
23. Portfors-Yeomans, C.V.; Regan, D. Discrimination of the direction and speed of motion in depth of a monocularly visible target from binocular information alone. J. Exp. Psychol. Hum. Percept. Perform. **1997**, 23, 227–243.
24. Nefs, H.T.; O’Hare, L.; Harris, J.M. Two independent mechanisms for motion-in-depth perception: Evidence from individual differences. Front. Psychol. **2010**, 1, 155.
25. Brooks, K.R. Interocular velocity difference contributes to stereomotion speed perception. J. Vis. **2002**, 2, 218–231.
26. Shioiri, S.; Yoshizawa, M.; Ogiya, M.; Matsumiya, K.; Yaguchi, H. Low-level motion analysis of color and luminance for perception of 2D and 3D motion. J. Vis. **2012**, 12.
27. Brooks, K.R.; Mather, G. Perceived speed of motion in depth is reduced in the periphery. Vis. Res. **2000**, 40, 3507–3516.
28. Brooks, K.R.; Stone, L.S. Spatial scale of stereomotion speed processing. J. Vis. **2006**, 6, 1257–1266.
29. Fernandez, J.M.; Farell, B. Seeing motion in depth using inter-ocular velocity differences. Vis. Res. **2005**, 45, 2786–2798.
30. Fernandez, J.M.; Farell, B. Motion in depth from interocular velocity differences revealed by differential motion aftereffect. Vis. Res. **2006**, 46, 1307–1317.
31. Rokers, B.; Czuba, T.B.; Cormack, L.K.; Huk, A.C. Motion processing with two eyes in three dimensions. J. Vis. **2011**, 11.
32. Sakano, Y.; Allison, R.S.; Howard, I.P. Motion aftereffect in depth based on binocular information. J. Vis. **2012**, 12.
33. Shioiri, S.; Kakehi, D.; Tashiro, T.; Yaguchi, H. Integration of monocular motion signals and the analysis of interocular velocity differences for the perception of motion-in-depth. J. Vis. **2009**, 9.
34. Shioiri, S.; Nakajima, T.; Kakehi, D.; Yaguchi, H. Differences in temporal frequency tuning between the two binocular mechanisms for seeing motion in depth. J. Opt. Soc. Am. A Opt. Image Sci. Vis. **2008**, 25, 1574–1585.
35. Czuba, T.B.; Rokers, B.; Huk, A.C.; Cormack, L.K. Speed and eccentricity tuning reveal a central role for the velocity-based cue to 3D visual motion. J. Neurophysiol. **2010**, 104, 2886–2899.
36. Brooks, K.R. Monocular motion adaptation affects the perceived trajectory of stereomotion. J. Exp. Psychol. Hum. Percept. Perform. **2002**, 28, 1470–1482.
37. Maloney, R.T.; Kaestner, M.; Ansell, J.; Bloj, M.; Harris, J.; Wade, A. Mapping the temporal and neural properties of binocular mechanisms for motion-in-depth perception. In Perception; Sage Publications Ltd.: London, UK, 2016; Volume 45, p. 201.
38. Adelson, E.H.; Bergen, J.R. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A Opt. Image Sci. **1985**, 2, 284–299.
39. Watson, A.B.; Ahumada, A.J. Model of human visual-motion sensing. J. Opt. Soc. Am. A Opt. Image Sci. **1985**, 2, 322–341.
40. Shioiri, S.; Matsumiya, K.; Matsubara, K. Isolation of two binocular mechanisms for motion in depth: A model and psychophysics. Jpn. Psychol. Res. **2012**, 54, 16–26.
41. The MathWorks Inc. MATLAB; The MathWorks Inc.: Natick, MA, USA, 2014.
42. Brainard, D.H. The psychophysics toolbox. Spat. Vis. **1997**, 10, 433–436.
43. Kleiner, M.; Brainard, D.; Pelli, D. What’s new in Psychtoolbox-3. Perception **2007**, 36, 1.
44. Pelli, D.G. The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spat. Vis. **1997**, 10, 437–442.
45. Kingdom, F.A.; Prins, N. Psychophysics: A Practical Introduction; Elsevier: London, UK, 2010.
46. Prins, N.; Kingdom, F.A.A. Palamedes: Matlab Routines for Analyzing Psychophysical Data. 2009. Available online: http://www.palamedestoolbox.org (accessed on 29 October 2018).
47. Fulvio, J.M.; Rosen, M.L.; Rokers, B. Sensory uncertainty leads to systematic misperception of the direction of motion in depth. Atten. Percept. Psychophys. **2015**, 77, 1685–1696.
48. Fulvio, J.M.; Wang, M.; Rokers, B. Head tracking in virtual reality displays reduces the misperception of 3D motion. J. Vis. **2015**, 15, 1180.
49. Rokers, B.; Fulvio, J.M.; Pillow, J.W.; Cooper, E.A. Systematic misperceptions of 3-D motion explained by Bayesian inference. J. Vis. **2018**, 18, 23.
50. Tidbury, L.P.; Brooks, K.R.; O’Connor, A.R.; Wuerger, S.M. A systematic comparison of static and dynamic cues for depth perception. Investig. Ophthalmol. Vis. Sci. **2016**, 57, 3545–3553.
51. Gray, R.; Regan, D. Accuracy of estimating time to collision using binocular and monocular information. Vis. Res. **1998**, 38, 499–512.
52. Lee, A.R.I.; Ales, J.M.; Harris, J.M. Speed change discrimination for motion in depth with constant world and retinal speeds. In preparation.

**Figure 1.** Computational schemes for the CD (**top**) and IOVD (**bottom**) cues (see text for explanation). ‘—’ indicates differencing and ‘d/dt’ differentiation.

**Figure 2.** Schematic depiction of two consecutive frames of random-dot stereograms for FULL cue (**top left**), CD (**top right**), aIOVD (**bottom left**), and dIOVD (**bottom right**) stimuli. The grey circles show the stimuli presented to the left and the right eye, respectively, at two consecutive points in time (lower, then upper). Black and white filled dots are examples of random dots moving on the screen in the direction indicated by the red arrows. Dashed lines connect dots that are correlated between eyes (connecting the left and right eye) and/or correlated between frames (connecting the lower and upper stimulus). Check marks indicate the correlations isolated by the CD and IOVD stimuli, whereas dotted lines and open circles indicate the missing correlations between eyes (dIOVD) and frames (CD), respectively.

**Figure 3.** Psychometric function fits for six participants. The x-axis shows motion coherence as percent signal and the y-axis proportion consistent. Filled circles show data points; curves show psychometric functions fitted to the data. Note that participants S5 and S6 saw motion-in-depth in the direction opposite to that perceived by participants S1–S4.

**Figure 4.** Motion coherence thresholds for the six participants. The x-axis lists the participants, and the y-axis shows motion coherence thresholds as percent signal. Data for FULL cue are shown in black, aIOVD in blue, and dIOVD in orange. Error bars show 95% confidence intervals of the threshold estimates derived from a non-parametric bootstrap procedure. Data points have been displaced horizontally to avoid complete occlusion.

**Figure 5.** Model comparisons for four models. The different models are shown on the x-axis: F vs. A vs. D, F vs. A, F vs. D, A vs. D with F: FULL cue, A: aIOVD, and D: dIOVD. The y-axis shows the five participants that were included in the analysis. Grey-shading and values in the fields indicate the p-values for each comparison. The significance level for the overall comparison (first column) was $\alpha =0.05$ (significant values are shown in red). For the multiple comparisons (columns 2–4), it was adjusted to ${\alpha}_{\mathrm{bc}}=0.0167$ (significant values are shown in magenta).

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Giesel, M.; Wade, A.R.; Bloj, M.; Harris, J.M. Investigating Human Visual Sensitivity to Binocular Motion-in-Depth for Anti- and De-Correlated Random-Dot Stimuli. *Vision* **2018**, *2*, 41.
https://doi.org/10.3390/vision2040041

**AMA Style**

Giesel M, Wade AR, Bloj M, Harris JM. Investigating Human Visual Sensitivity to Binocular Motion-in-Depth for Anti- and De-Correlated Random-Dot Stimuli. *Vision*. 2018; 2(4):41.
https://doi.org/10.3390/vision2040041

**Chicago/Turabian Style**

Giesel, Martin, Alex R. Wade, Marina Bloj, and Julie M. Harris. 2018. "Investigating Human Visual Sensitivity to Binocular Motion-in-Depth for Anti- and De-Correlated Random-Dot Stimuli" *Vision* 2, no. 4: 41.
https://doi.org/10.3390/vision2040041