Special Issue "Audiovisual Integration in Early Language Development"

A special issue of Brain Sciences (ISSN 2076-3425).

Deadline for manuscript submissions: closed (28 February 2017).

Special Issue Editor

Prof. Dr. Heather Bortfeld
Guest Editor
Psychological Sciences, University of California-Merced, 5200 North Lake Road, Merced, CA 95343, USA
Tel. 979-587-1011
Interests: infant speech perception; language learning; language development; audiovisual speech perception; audiovisual integration; near-infrared spectroscopy

Special Issue Information

Dear Colleagues,

Speech is a sensory-rich stimulus, with information provided across multiple modalities. The reliable co-occurrence of cues across modalities supports speech comprehension and language learning, whether of a first or a subsequent language. However, despite the clear relationship between spoken language and the moving mouth that produces it, it remains unclear how sensitive early language learners—particularly infants—are to whether and how sight and sound co-occur. This Special Issue has been conceived to facilitate discussion of how and when the sight and sound of spoken language come to be processed as a unified signal. Moreover, it provides a forum for presenting theoretical and experimental advances in the study of early audiovisual speech perception and processing. Both behavioral and neurophysiological approaches are welcome.

Heather Bortfeld
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Brain Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multisensory processing
  • infant speech perception
  • audiovisual integration
  • language development
  • language learning
  • visual speech
  • temporal binding
  • spectral and temporal cues

Published Papers (5 papers)


Research

Jump to: Review

Open Access Article
Subtitling for d/Deaf and Hard-of-Hearing Children: Current Practices and New Possibilities to Enhance Language Development
Brain Sci. 2017, 7(7), 75; https://doi.org/10.3390/brainsci7070075 - 30 Jun 2017
Cited by 1
Abstract
To comprehend a subtitle fully, two parameters within the linguistic code of audiovisual texts are key to processing the subtitle itself: vocabulary and syntax. Through a descriptive and experimental study, the present article explores the transfer of the linguistic code of audiovisual texts in subtitling for deaf and hard-of-hearing children on three Spanish TV stations. In the first part of the study, we examine current practices in Spanish TV captioning to analyse whether syntax and vocabulary are adapted to satisfy deaf children’s needs and expectations regarding subtitle processing. In the second part, we propose alternative captioning criteria for these two variables based on the needs of d/Deaf and hard-of-hearing (DHH) children, suggesting a more appropriate way of displaying the written linguistic code for deaf children. Although no specific distinction is made throughout this paper, it is important to clarify these terms, as they have been widely used in the literature. Neves (2008) distinguishes between the “Deaf”, who belong to a linguistic minority, use sign language as their mother tongue, and usually identify with a Deaf community and culture; the “deaf”, who normally have an oral language as their mother tongue and feel part of the hearing community; and the “hard of hearing”, who have residual hearing and therefore share the world and the sound experience of hearers. In the experimental study, 75 Spanish DHH children aged between 8 and 13 were exposed to two options: the captions actually broadcast on TV and alternative captions created by the authors. The data gathered from this exposure were used to analyse the children’s comprehension of these two variables in order to draw conclusions about the suitability of the changes proposed in the alternative subtitles.
(This article belongs to the Special Issue Audiovisual Integration in Early Language Development)

Open Access Article
Electrophysiological Indices of Audiovisual Speech Perception in the Broader Autism Phenotype
Brain Sci. 2017, 7(6), 60; https://doi.org/10.3390/brainsci7060060 - 02 Jun 2017
Cited by 1
Abstract
When a speaker talks, the consequences can be both heard (audio) and seen (visual). A novel visual phonemic restoration task was used to assess behavioral discrimination and neural signatures (event-related potentials, or ERPs) of audiovisual processing in typically developing children with a range of social and communicative skills, assessed using the Social Responsiveness Scale, a measure of traits associated with autism. An auditory oddball design presented two types of stimuli to the listener: a clear exemplar of an auditory consonant–vowel syllable /ba/ (the more frequently occurring standard stimulus), and a syllable in which the auditory cues for the consonant were substantially weakened, creating a stimulus more like /a/ (the infrequently presented deviant stimulus). All speech tokens were paired with a face producing /ba/ or a face with a pixelated mouth containing motion but no visual speech. In this paradigm, the visual /ba/ should cause the auditory /a/ to be perceived as /ba/, creating an attenuated oddball response; in contrast, a pixelated video (without articulatory information) should not have this effect. Behaviorally, participants showed visual phonemic restoration (reduced accuracy in detecting the deviant /a/) in the presence of a speaking face. In addition, ERPs were observed in both an early time window (N100) and a later time window (P300) that were sensitive to speech context (/ba/ or /a/) and modulated by face context (speaking face with visible articulation or with pixelated mouth). Specifically, the oddball responses for the N100 and P300 were attenuated in the presence of a face producing /ba/ relative to a pixelated face, representing a possible neural correlate of the phonemic restoration effect. Notably, individuals with more traits associated with autism (yet still in the non-clinical range) had smaller P300 responses overall, regardless of face context, suggesting generally reduced phonemic discrimination.
(This article belongs to the Special Issue Audiovisual Integration in Early Language Development)

Open Access Article
Verbs in Mothers’ Input to Six-Month-Olds: Synchrony between Presentation, Meaning, and Actions Is Related to Later Verb Acquisition
Brain Sci. 2017, 7(5), 52; https://doi.org/10.3390/brainsci7050052 - 29 Apr 2017
Cited by 7
Abstract
In embodied theories of language, it is widely accepted that experience in acting generates an expectation of that action when hearing the word for it. However, how this expectation emerges during language acquisition is still not well understood. Assuming that the intermodal presentation of information facilitates perception, prior research has suggested that early in infancy, mothers perform their actions in temporal synchrony with language. Further research revealed that this synchrony is a form of multimodal responsive behavior related to the child’s later language development. Expanding on these findings, this article explores the relationship between action–language synchrony and the acquisition of verbs. Using qualitative and quantitative methods, we analyzed the coordination of verbs and actions in mothers’ input to six-month-old infants and related these maternal strategies to the infants’ later production of verbs. We found that the verbs used by mothers in these early interactions were tightly coordinated with the ongoing action and very frequently responsive to infant actions. Use of these multimodal strategies significantly predicted the number of spoken verbs in infants’ vocabularies at 24 months.
(This article belongs to the Special Issue Audiovisual Integration in Early Language Development)

Open Access Article
Modeling the Development of Audiovisual Cue Integration in Speech Perception
Brain Sci. 2017, 7(3), 32; https://doi.org/10.3390/brainsci7030032 - 21 Mar 2017
Cited by 2
Abstract
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
(This article belongs to the Special Issue Audiovisual Integration in Early Language Development)
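The abstract above describes learning audiovisual phonological categories from distributional statistics with a Gaussian mixture model. The paper's own simulations are not reproduced here; as a rough, self-contained illustration of that idea, the sketch below fits a two-component, diagonal-covariance GMM by expectation–maximization to synthetic two-dimensional (auditory cue, visual cue) tokens. All names, cue values, and category placements are invented for illustration and are not the authors' model or data.

```python
import math
import random

def make_tokens(n=200, seed=1):
    """Synthetic (auditory cue, visual cue) tokens for two hypothetical
    phonological categories (say /ba/ vs. /da/); values are invented."""
    rng = random.Random(seed)
    tokens = []
    for _ in range(n):
        tokens.append((rng.gauss(-2, 1), rng.gauss(-2, 1)))  # category A
        tokens.append((rng.gauss(2, 1), rng.gauss(2, 1)))    # category B
    return tokens

def em_gmm(data, iters=60):
    """EM for a 2-component Gaussian mixture with per-dimension variances:
    the learner acquires category means, cue variances (the inverse of a
    cue's reliability, i.e., its weight), and mixing proportions purely
    from the distribution of unlabeled tokens."""
    data = sorted(data)
    mus = [list(data[0]), list(data[-1])]  # init means at cue-space extremes
    sig = [[1.0, 1.0], [1.0, 1.0]]         # per-dimension variances
    pi = [0.5, 0.5]                        # mixing proportions
    for _ in range(iters):
        # E-step: responsibility of each component for each token
        resp = []
        for x in data:
            ps = []
            for k in range(2):
                p = pi[k]
                for d in range(2):
                    p *= math.exp(-(x[d] - mus[k][d]) ** 2 / (2 * sig[k][d])) \
                         / math.sqrt(2 * math.pi * sig[k][d])
                ps.append(p)
            s = sum(ps) or 1e-300
            resp.append([p / s for p in ps])
        # M-step: re-estimate means, variances, and mixing proportions
        for k in range(2):
            nk = sum(r[k] for r in resp) or 1e-300
            pi[k] = nk / len(data)
            for d in range(2):
                mus[k][d] = sum(r[k] * x[d] for r, x in zip(resp, data)) / nk
                sig[k][d] = max(1e-3, sum(r[k] * (x[d] - mus[k][d]) ** 2
                                          for r, x in zip(resp, data)) / nk)
    return mus, sig, pi

mus, sig, pi = em_gmm(make_tokens())
mus.sort()  # order the two learned categories along the auditory dimension
```

Run on the synthetic tokens, the learned component means recover the two category centers without any labels, which is the core of the distributional-learning argument; the fitted per-cue variances play the role of learned cue weights when classifying a new (possibly mismatched) audiovisual token by posterior probability.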

Review

Jump to: Research

Open Access Review
Contributions of Letter-Speech Sound Learning and Visual Print Tuning to Reading Improvement: Evidence from Brain Potential and Dyslexia Training Studies
Brain Sci. 2017, 7(1), 10; https://doi.org/10.3390/brainsci7010010 - 18 Jan 2017
Cited by 8
Abstract
We use a neurocognitive perspective to discuss the contribution of learning letter-speech sound (L-SS) associations and visual specialization in the initial phases of reading in dyslexic children. We review findings from associative learning studies on related cognitive skills important for establishing and consolidating L-SS associations. Then we review brain potential studies, including our own, that yielded two markers associated with reading fluency. Here we show that the marker related to visual specialization (N170) predicts word and pseudoword reading fluency in children who received additional practice in the processing of morphological word structure. Conversely, L-SS integration (indexed by the mismatch negativity, MMN) may only remain important when direct orthography-to-semantics conversion is not possible, such as in pseudoword reading. In addition, the correlation between these two markers supports the notion that multisensory integration facilitates visual specialization. Finally, we review the role of implicit learning and executive functions in audiovisual learning in dyslexia. Implications for remedial research are discussed and suggestions for future studies are presented.
(This article belongs to the Special Issue Audiovisual Integration in Early Language Development)
