Modeling the Development of Audiovisual Cue Integration in Speech Perception
Abstract

Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
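To illustrate the kind of unsupervised statistical learning the abstract describes, the sketch below fits a two-component Gaussian mixture model by expectation-maximization to simulated two-dimensional cue data (one auditory-like dimension, one visual-like dimension). The cue values, category locations, and initialization scheme are illustrative assumptions, not the authors' actual model or stimulus parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated learner input: two phonological categories (e.g., /b/ vs. /p/),
# each emitting an auditory cue (column 0) and a visual cue (column 1).
# Means and variances here are hypothetical, chosen only for illustration.
n_per_cat = 500
cat_a = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n_per_cat, 2))
cat_b = rng.normal(loc=[4.0, 3.0], scale=1.0, size=(n_per_cat, 2))
X = np.vstack([cat_a, cat_b])  # unlabeled: the learner never sees category identity

def fit_gmm(X, k=2, iters=100):
    """Fit a k-component diagonal-covariance GMM by EM."""
    n, d = X.shape
    mu = np.quantile(X, np.linspace(0.25, 0.75, k), axis=0)  # deterministic init
    var = np.ones((k, d))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under diagonal Gaussians (log-space for stability)
        log_p = (-0.5 * ((X[:, None, :] - mu) ** 2 / var
                         + np.log(2 * np.pi * var)).sum(axis=2)
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and per-cue variances
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = np.maximum((r.T @ X ** 2) / nk[:, None] - mu ** 2, 1e-6)
    return pi, mu, var

pi, mu, var = fit_gmm(X)
# Inverse variance along each cue dimension serves as a rough proxy for a
# learned cue weight: more reliable (lower-variance) cues weigh more.
weights = 1.0 / var
print(mu)  # recovered category means along the two cue dimensions
```

The point of the sketch is that category structure and cue reliability both fall out of the distributional statistics alone, with no category labels, which is the sense in which the learning mechanism is domain-general.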
Getz, L.M.; Nordeen, E.R.; Vrabic, S.C.; Toscano, J.C. Modeling the Development of Audiovisual Cue Integration in Speech Perception. Brain Sci. 2017, 7, 32.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.