Multisensory Modulation of Vision

A special issue of Vision (ISSN 2411-5150).

Deadline for manuscript submissions: closed (31 October 2019) | Viewed by 20815

Special Issue Editor


Dr. Vincent A. Billock
Guest Editor
Naval Aerospace Medical Research Laboratory, Naval Medical Research Unit—Dayton, 2624 Q Street, WPAFB, Dayton, OH 45433-7955, USA
Interests: color vision; pattern perception; complexity theory

Special Issue Information

Dear Colleagues,

For this Special Issue on “Multisensory Modulation of Vision”, we invite a mixture of original research and review articles that illuminate the effects the other senses have on vision.

It has long been known that the other senses can modulate vision. For example, dim lights look brighter when a sound comes from the same direction, and some cells in visual cortex fire more strongly when light is paired with sound, even in anesthetized animals. Under other circumstances, visual suppression can occur. Listening to sounds can keep stabilized images from fading and can distort the number of flashes seen in a sequence. With training, auditory and tactile stimuli can be used to perceptually image the environment (sensory substitution), and there is evidence that this training exploits the unused machinery of the visual cortex. Tactile feedback is crucial in creating body-ownership illusions in highly visual virtual reality environments. Of course, some synesthetes see colors triggered by music and voices. Some of these interactions seem to be specific to modality, while others may be generic to sensory and cognitive systems, in the same way that attention and memory exert broad effects. This Special Issue casts a broad net and aims to gather a variety of interesting multisensory effects on vision.

If you are considering preparing a review article, please send a brief proposal to the Guest Editor before making a full submission:

Guest Editor: Vincent A. Billock, Associate Professor, The Ohio State University.

Dr. Vincent A. Billock
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Vision is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multisensory Perception
  • Multisensory Neural Mechanisms 
  • Computational Modeling 
  • Functional Imaging 
  • Electrophysiology

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

21 pages, 2342 KiB  
Article
Judging Relative Onsets and Offsets of Audiovisual Events
by Puti Wen, Collins Opoku-Baah, Minsun Park and Randolph Blake
Vision 2020, 4(1), 17; https://doi.org/10.3390/vision4010017 - 3 Mar 2020
Cited by 4 | Viewed by 4214
Abstract
This study assesses the fidelity with which people can make temporal order judgments (TOJ) between auditory and visual onsets and offsets. Using an adaptive staircase task administered to a large sample of young adults, we find that the ability to judge temporal order varies widely among people, with notable difficulty created when auditory events closely follow visual events. Those findings are interpretable within the context of an independent channels model. Visual onsets and offsets can be difficult to localize in time when they occur within the temporal neighborhood of sound onsets or offsets.
(This article belongs to the Special Issue Multisensory Modulation of Vision)
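For readers who want to try this kind of analysis, here is a minimal Python sketch that fits a cumulative Gaussian psychometric function to hypothetical temporal order judgment data and reads off the point of subjective simultaneity (PSS) and just-noticeable difference (JND). The SOA values and response proportions are invented placeholders, not the study's data, and the adaptive staircase procedure itself is omitted.

```python
# Minimal sketch: estimating PSS and JND from temporal order judgment data.
# All SOAs and response proportions below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Stimulus onset asynchrony in ms (negative = auditory event led the visual one)
soa = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], dtype=float)
# Proportion of "visual first" responses at each SOA (hypothetical)
p_visual_first = np.array([0.05, 0.10, 0.22, 0.40, 0.55, 0.70, 0.82, 0.93, 0.97])

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: probability of judging the visual event first."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, soa, p_visual_first, p0=[0.0, 60.0])

pss = mu                      # SOA at which both orders are reported equally often
jnd = sigma * norm.ppf(0.75)  # semi-interquartile range of the fitted function

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```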

16 pages, 1906 KiB  
Article
Stimulus Onset Modulates Auditory and Visual Dominance
by Margeaux F. Ciraolo, Samantha M. O’Hanlon, Christopher W. Robinson and Scott Sinnett
Vision 2020, 4(1), 14; https://doi.org/10.3390/vision4010014 - 29 Feb 2020
Cited by 5 | Viewed by 3881
Abstract
Investigations of multisensory integration have demonstrated that, under certain conditions, one modality is more likely to dominate the other. While the direction of this relationship typically favors the visual modality, the effect can be reversed to show auditory dominance under some conditions. The experiments presented here use an oddball detection paradigm with variable stimulus timings to test the hypothesis that a stimulus that is presented earlier will be processed first and therefore contribute to sensory dominance. Additionally, we compared two measures of sensory dominance (slowdown scores and error rates) to determine whether the type of measure used can affect which modality appears to dominate. When stimuli were presented asynchronously, analysis of slowdown scores and error rates yielded the same result; for both the 1- and 3-button versions of the task, participants were more likely to show auditory dominance when the auditory stimulus preceded the visual stimulus, whereas evidence for visual dominance was observed as the auditory stimulus was delayed. In contrast, for the simultaneous condition, slowdown scores indicated auditory dominance, whereas error rates indicated visual dominance. Overall, these results provide empirical support for the hypothesis that the modality that engages processing first is more likely to show dominance, and suggest that more explicit measures of sensory dominance may favor the visual modality.
(This article belongs to the Special Issue Multisensory Modulation of Vision)
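The two dominance measures contrasted in this abstract can be illustrated with a short sketch. The Python snippet below computes a slowdown score as the mean bimodal response time minus the mean unimodal baseline for each modality; all response times are invented placeholders, and the paper's exact scoring may differ.

```python
# Minimal sketch: slowdown scores in a bimodal oddball task.
# Response times (ms) are invented placeholders, not the study's data.
import numpy as np

rt_visual_unimodal   = np.array([412, 398, 430, 405, 421], dtype=float)
rt_auditory_unimodal = np.array([388, 401, 395, 379, 392], dtype=float)
rt_visual_bimodal    = np.array([455, 440, 462, 448, 451], dtype=float)
rt_auditory_bimodal  = np.array([402, 410, 399, 405, 396], dtype=float)

# A slowdown score is the cost of adding the other modality:
# mean bimodal RT minus mean unimodal baseline RT.
visual_slowdown = rt_visual_bimodal.mean() - rt_visual_unimodal.mean()
auditory_slowdown = rt_auditory_bimodal.mean() - rt_auditory_unimodal.mean()

# If visual responses are slowed more by an added sound than auditory
# responses are by an added visual stimulus, that asymmetry is read as
# auditory dominance (and vice versa).
dominant = "auditory" if visual_slowdown > auditory_slowdown else "visual"
print(f"visual slowdown: {visual_slowdown:.1f} ms, "
      f"auditory slowdown: {auditory_slowdown:.1f} ms -> {dominant} dominance")
```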

16 pages, 3481 KiB  
Article
Individual Differences in Multisensory Interactions: The Influence of Temporal Phase Coherence and Auditory Salience on Visual Contrast Sensitivity
by Hiu Mei Chow, Xenia Leviyah and Vivian M. Ciaramitaro
Vision 2020, 4(1), 12; https://doi.org/10.3390/vision4010012 - 5 Feb 2020
Cited by 5 | Viewed by 3136
Abstract
While previous research has investigated key factors contributing to multisensory integration in isolation, relatively little is known about how these factors interact, especially when considering the enhancement of visual contrast sensitivity by a task-irrelevant sound. Here we explored how auditory stimulus properties, namely salience and temporal phase coherence in relation to the visual target, jointly affect the extent to which a sound can enhance visual contrast sensitivity. Visual contrast sensitivity was measured by a psychophysical task in which human adult participants reported the location of a visual Gabor pattern presented at various contrast levels. We expected contrast sensitivity to be most enhanced (i.e., the lowest contrast threshold) when the visual stimulus was accompanied by a task-irrelevant sound that was weak in auditory salience and modulated in phase with the visual stimulus (strong temporal phase coherence). Our expectations were confirmed, but only when we accounted for individual differences in the optimal auditory salience level for inducing maximal multisensory enhancement effects. Our findings highlight the importance of interactions between temporal phase coherence and stimulus effectiveness in determining the strength of multisensory enhancement of visual contrast, as well as the importance of accounting for individual differences.
(This article belongs to the Special Issue Multisensory Modulation of Vision)
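Contrast thresholds of the kind reported here are typically estimated from a psychometric fit. The sketch below fits a Weibull function to hypothetical localization accuracies across contrast levels and reports the threshold and its reciprocal, contrast sensitivity. The contrast levels, accuracies, guess rate, and lapse rate are all illustrative assumptions, not the study's parameters.

```python
# Minimal sketch: contrast threshold from a Weibull fit to accuracy data.
# Contrast levels and accuracies are illustrative, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

contrast = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])  # Michelson contrast
accuracy = np.array([0.52, 0.58, 0.70, 0.86, 0.95, 0.99])  # proportion correct

def weibull(c, alpha, beta, guess=0.5, lapse=0.02):
    """Weibull psychometric function; guess and lapse rates are assumed fixed."""
    return guess + (1 - guess - lapse) * (1 - np.exp(-(c / alpha) ** beta))

# With p0 given, curve_fit fits only alpha and beta, leaving the defaults fixed.
(alpha, beta), _ = curve_fit(weibull, contrast, accuracy, p0=[0.05, 2.0])

print(f"contrast threshold = {alpha:.3f}, slope = {beta:.2f}")
print(f"contrast sensitivity = {1 / alpha:.1f}")
```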

9 pages, 573 KiB  
Communication
Musical Training Improves Audiovisual Integration Capacity under Conditions of High Perceptual Load
by Jonathan M. P. Wilbiks and Courtney O’Brien
Vision 2020, 4(1), 9; https://doi.org/10.3390/vision4010009 - 24 Jan 2020
Cited by 2 | Viewed by 2637
Abstract
In considering capacity measures of audiovisual integration, it has become apparent that there is a wide degree of variation both within participants (based on unimodal and multimodal stimulus characteristics) and between participants. Recent work has shown that performance on a number of cognitive tasks can form a regression model accounting for nearly a quarter of the variation in audiovisual integration capacity. The current study investigates whether different elements of musicality in participants can contribute to additional variation in capacity. Participants were presented with a series of rapidly changing visual displays and asked to note which elements of that display changed in synchrony with a tone. Results were fitted to a previously used model to establish capacity estimates, and these estimates were included in correlational analyses with musical training, musical perceptual abilities, and active engagement in music. We found that audiovisual integration capacity was positively correlated with the amount of musical training, and that this correlation was statistically significant under the most difficult perceptual conditions. Results are discussed in the context of the boosting of perceptual abilities by musical training, even under conditions that have previously been found to be overly demanding for participants.
(This article belongs to the Special Issue Multisensory Modulation of Vision)
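The key result is a correlation between capacity estimates and musical training. As a minimal sketch, the snippet below computes a Pearson correlation between hypothetical capacity estimates and questionnaire-style training scores; the numbers are placeholders, not the study's data.

```python
# Minimal sketch: correlating capacity estimates with musical training scores.
# Both arrays are invented placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

capacity = np.array([1.2, 2.1, 1.8, 3.0, 2.6, 1.5, 2.9, 3.4])        # capacity estimates
training = np.array([8.0, 21.0, 15.0, 34.0, 30.0, 12.0, 28.0, 40.0])  # training scores

r, p = pearsonr(capacity, training)
print(f"r = {r:.2f}, p = {p:.3f}")
```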

15 pages, 1453 KiB  
Article
The Louder, the Longer: Object Length Perception Is Influenced by Loudness, but Not by Pitch
by Pia Hauck and Heiko Hecht
Vision 2019, 3(4), 57; https://doi.org/10.3390/vision3040057 - 28 Oct 2019
Cited by 2 | Viewed by 3082
Abstract
Sound by itself can be a reliable source of information about an object’s size. For instance, we are able to estimate the size of objects merely on the basis of the sound they make when falling on the floor. Moreover, loudness and pitch are crossmodally linked to size. We investigated whether sound affects size estimation even in the presence of visual information, that is, whether manipulating the sound produced by a falling object influences visual length estimation. Participants watched videos of wooden dowels hitting a hard floor and estimated their lengths. Sound was manipulated by (A) increasing (decreasing) overall sound pressure level, (B) swapping sounds among the different dowel lengths, and (C) increasing (decreasing) pitch. Results showed that dowels were perceived to be longer with increased sound pressure level (SPL), but there was no effect of swapped sounds or pitch manipulation. However, in a sound-only condition, main effects of length and pitch manipulation were found. We conclude that we are able to perceive subtle differences in the acoustic properties of impact sounds and use them to deduce object size when visual cues are eliminated. In contrast, when visual cues are available, only loudness is potent enough to exert a crossmodal influence on length perception.
(This article belongs to the Special Issue Multisensory Modulation of Vision)
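Manipulation (A), raising or lowering the overall sound pressure level, amounts to scaling the waveform by a decibel gain. The sketch below applies a ±6 dB gain to a synthetic impact-like tone; the waveform and gain values are illustrative stand-ins for the recorded dowel sounds.

```python
# Minimal sketch: shifting a sound's overall level by a fixed decibel gain.
# The "impact" here is a synthetic decaying tone, not a recorded dowel sound.
import numpy as np

sr = 44100                                              # sample rate (Hz)
t = np.linspace(0, 0.2, int(sr * 0.2), endpoint=False)
impact = np.exp(-30 * t) * np.sin(2 * np.pi * 800 * t)  # toy impact-like waveform

def apply_gain_db(signal, gain_db):
    """Scale amplitude by gain_db decibels (20*log10 amplitude convention)."""
    return signal * 10 ** (gain_db / 20)

louder = apply_gain_db(impact, +6.0)   # roughly doubles the amplitude
quieter = apply_gain_db(impact, -6.0)  # roughly halves the amplitude

print(f"peak amplitude: original {np.max(np.abs(impact)):.2f}, "
      f"+6 dB {np.max(np.abs(louder)):.2f}, -6 dB {np.max(np.abs(quieter)):.2f}")
```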

16 pages, 3005 KiB  
Article
The A-Effect and Global Motion
by Pearl S. Guterman and Robert S. Allison
Vision 2019, 3(2), 13; https://doi.org/10.3390/vision3020013 - 28 Mar 2019
Viewed by 3225
Abstract
When the head is tilted, an objectively vertical line viewed in isolation is typically perceived as tilted. We explored whether this shift also occurs when viewing global motion displays perceived as either object-motion or self-motion. Observers stood and lay left side down while viewing (1) a static line, (2) a random-dot display of 2-D (planar) motion, or (3) a random-dot display of 3-D (volumetric) global motion. On each trial, the line orientation or motion direction was tilted from the gravitational vertical, and observers indicated whether the tilt was clockwise or counter-clockwise from the perceived vertical. Psychometric functions were fit to the data and shifts in the point of subjective verticality (PSV) were measured. When the whole body was tilted, the perceived tilt of both a static line and the direction of optic flow was biased in the direction of the body tilt, demonstrating the so-called A-effect. However, we found significantly larger shifts for the static line than for volumetric global motion, as well as larger shifts for volumetric displays than for planar displays. The A-effect was larger when the motion was experienced as self-motion than when it was experienced as object-motion. Discrimination thresholds were also more precise in the self-motion than in the object-motion conditions. The different magnitudes of the A-effect for the line and motion conditions, and for object-motion versus self-motion, may be due to differences in the combination of idiotropic (body) and vestibular signals, particularly in the case of vection, which occurs despite visual–vestibular conflict.
(This article belongs to the Special Issue Multisensory Modulation of Vision)
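The PSV shifts described here come from psychometric fits to clockwise/counter-clockwise judgments. As a minimal sketch, the snippet below fits a logistic function to hypothetical response proportions collected upright and tilted, and reports the PSV shift that indexes the A-effect; all tilt angles and proportions are invented placeholders.

```python
# Minimal sketch: PSV shift (A-effect) from a logistic psychometric fit.
# Tilt angles and response proportions are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

tilt = np.array([-16, -12, -8, -4, 0, 4, 8, 12, 16], dtype=float)  # deg

def logistic(x, psv, slope):
    """Probability of a 'clockwise' judgment as a function of stimulus tilt."""
    return 1.0 / (1.0 + np.exp(-(x - psv) / slope))

# Proportion of 'clockwise' responses, upright vs. whole-body tilted (hypothetical)
p_cw_upright = np.array([0.03, 0.08, 0.20, 0.38, 0.52, 0.68, 0.85, 0.94, 0.98])
p_cw_tilted  = np.array([0.01, 0.03, 0.07, 0.15, 0.30, 0.50, 0.72, 0.88, 0.96])

(psv_upright, _), _ = curve_fit(logistic, tilt, p_cw_upright, p0=[0.0, 3.0])
(psv_tilted, _), _  = curve_fit(logistic, tilt, p_cw_tilted, p0=[0.0, 3.0])

# A shift of the PSV toward the direction of body tilt is the A-effect.
print(f"PSV upright = {psv_upright:.1f} deg, tilted = {psv_tilted:.1f} deg, "
      f"A-effect = {psv_tilted - psv_upright:.1f} deg")
```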
