Open Access Article

Modeling the Development of Audiovisual Cue Integration in Speech Perception

Department of Psychology, Villanova University, Villanova, PA 19085, USA
* Author to whom correspondence should be addressed.
Academic Editor: Heather Bortfeld
Brain Sci. 2017, 7(3), 32; https://doi.org/10.3390/brainsci7030032
Received: 15 November 2016 / Revised: 3 March 2017 / Accepted: 16 March 2017 / Published: 21 March 2017
(This article belongs to the Special Issue Audiovisual Integration in Early Language Development)
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How, then, do listeners learn to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual inputs are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
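To make the abstract's approach concrete, here is a minimal sketch (in Python, using NumPy and scikit-learn) of the general technique described: an unsupervised Gaussian mixture model learns two phonological categories from the joint distribution of an auditory and a visual cue, and cue weights are then read off the learned distributional statistics. The specific cues, category parameters, and weighting rule below are illustrative assumptions for exposition, not the authors' actual simulation code or parameter values.

```python
# A minimal sketch (not the authors' simulations) of GMM-based statistical
# learning of audiovisual speech categories. All cue names and numbers are
# hypothetical, chosen only to illustrate the technique.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated training input: two categories (e.g., /b/ vs. /p/), each a
# bivariate Gaussian over an auditory cue (e.g., voice onset time) and a
# visual cue (e.g., lip aperture at release). The auditory cue is made
# more reliable (smaller within-category variance) than the visual cue.
n = 500
cat1 = rng.normal(loc=[0.0, 10.0], scale=[5.0, 15.0], size=(n, 2))
cat2 = rng.normal(loc=[40.0, 50.0], scale=[5.0, 15.0], size=(n, 2))
X = np.vstack([cat1, cat2])

# Unsupervised learning: the model receives unlabeled cue values and must
# discover the category structure from their distributional statistics.
gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(X)

# One common way to derive cue weights from such a model: cues whose
# between-category separation is large relative to their within-category
# variability are weighted more heavily.
separation = np.abs(gmm.means_[0] - gmm.means_[1])
reliability = separation / np.sqrt(gmm.covariances_.mean(axis=0))
weights = reliability / reliability.sum()
print("learned cue weights (auditory, visual):", weights)
```

Under these assumptions, the more reliable auditory cue receives the larger learned weight, illustrating how cue weighting can emerge from distributional statistics alone rather than from any built-in knowledge of which modality to trust.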
Keywords: speech perception; speech development; multimodal representations; audiovisual cues; statistical learning; mixture of Gaussians; cue weighting
MDPI and ACS Style

Getz, L.M.; Nordeen, E.R.; Vrabic, S.C.; Toscano, J.C. Modeling the Development of Audiovisual Cue Integration in Speech Perception. Brain Sci. 2017, 7, 32.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
