The Impact of Non-Speech Cues on Speech Perception in Infancy

A special issue of Brain Sciences (ISSN 2076-3425). This special issue belongs to the section "Neurolinguistics".

Deadline for manuscript submissions: closed (5 December 2020) | Viewed by 17689

Special Issue Editor


Guest Editor
Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
Interests: speech perception; infancy; non-speech cues; multimodal cues

Special Issue Information

Dear Colleagues,

Studies of speech perception in infancy and childhood have told us much about phonological structure and learnability, but they have relied heavily on models that limit learning to auditory mechanisms and to a unidirectional flow from perception to production (rather than vice versa). In this Special Issue, we solicit papers that explore how infants and children may draw on flows of information beyond this unidirectional auditory stream. We welcome cutting-edge research on the impact of social, sensorimotor, visual, olfactory, and other cues on speech perception in infancy and childhood.

Prof. Dr. Amanda Seidl
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Brain Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • infancy
  • speech perception
  • non-speech cues
  • multimodal cues

Published Papers (6 papers)


Research


18 pages, 31486 KiB  
Article
Impaired Audiovisual Representation of Phonemes in Children with Developmental Language Disorder
by Natalya Kaganovich, Jennifer Schumaker and Sharon Christ
Brain Sci. 2021, 11(4), 507; https://doi.org/10.3390/brainsci11040507 - 16 Apr 2021
Cited by 4 | Viewed by 2353
Abstract
We examined whether children with developmental language disorder (DLD) differed from their peers with typical development (TD) in the degree to which they encode information about a talker’s mouth shape into long-term phonemic representations. Children watched a talker’s face and listened to rare changes from [i] to [u] or the reverse. In the neutral condition, the talker’s face had a closed mouth throughout. In the audiovisual violation condition, the mouth shape always matched the frequent vowel, even when the rare vowel was played. We hypothesized that in the neutral condition no long-term audiovisual memory traces for speech sounds would be activated. Therefore, the neural response elicited by deviants would reflect only a violation of the observed audiovisual sequence. In contrast, we expected that in the audiovisual violation condition, a long-term memory trace for the speech sound/lip configuration typical for the frequent vowel would be activated. In this condition then, the neural response elicited by rare sound changes would reflect a violation of not only observed audiovisual patterns but also of a long-term memory representation for how a given vowel looks when articulated. Children pressed a response button whenever they saw a talker’s face assume a silly expression. We found that in children with TD, rare auditory changes produced a significant mismatch negativity (MMN) event-related potential (ERP) component over the posterior scalp in the audiovisual violation condition but not in the neutral condition. In children with DLD, no MMN was present in either condition. Rare vowel changes elicited a significant P3 in both groups and conditions, indicating that all children noticed auditory changes. Our results suggest that children with TD, but not children with DLD, incorporate visual information into long-term phonemic representations and detect violations in audiovisual phonemic congruency even when they perform a task that is unrelated to phonemic processing.
(This article belongs to the Special Issue The Impact of Non-Speech Cues on Speech Perception in Infancy)

20 pages, 2002 KiB  
Article
Perceptual Connectivity Influences Toddlers’ Attention to Known Objects and Subsequent Label Processing
by Ryan E. Peters, Justin B. Kueser and Arielle Borovsky
Brain Sci. 2021, 11(2), 163; https://doi.org/10.3390/brainsci11020163 - 27 Jan 2021
Cited by 6 | Viewed by 2250
Abstract
While recent research suggests that toddlers tend to learn word meanings with many “perceptual” features that are accessible to the toddler’s sensory perception, it is not clear whether and how building a lexicon with perceptual connectivity supports attention to and recognition of word meanings. We explore this question in 24–30-month-olds (N = 60) in relation to other individual differences, including age, vocabulary size, and tendencies to maintain focused attention. Participants’ looking to item pairs with high vs. low perceptual connectivity—defined as the number of words in a child’s lexicon sharing perceptual features with the item—was measured before and after target item labeling. Results revealed pre-labeling attention to known items is biased to both high- and low-connectivity items: first to high, and second, but more robustly, to low-connectivity items. Subsequent object–label processing was also facilitated for high-connectivity items, particularly for children with temperamental tendencies to maintain focused attention. This work provides the first empirical evidence that patterns of shared perceptual features within children’s known vocabularies influence both visual and lexical processing, highlighting the potential for a newfound set of developmental dependencies based on the perceptual/sensory structure of early vocabularies.

17 pages, 1627 KiB  
Article
The Role of Audiovisual Speech in Fast-Mapping and Novel Word Retention in Monolingual and Bilingual 24-Month-Olds
by Drew Weatherhead, Maria M. Arredondo, Loreto Nácar Garcia and Janet F. Werker
Brain Sci. 2021, 11(1), 114; https://doi.org/10.3390/brainsci11010114 - 16 Jan 2021
Cited by 13 | Viewed by 3515
Abstract
Three experiments examined the role of audiovisual speech on 24-month-old monolingual and bilinguals’ performance in a fast-mapping task. In all three experiments, toddlers were exposed to familiar trials which tested their knowledge of known word–referent pairs, disambiguation trials in which novel word–referent pairs were indirectly learned, and retention trials which probed their recognition of the newly-learned word–referent pairs. In Experiment 1 (n = 48), lip movements were present during familiar and disambiguation trials, but not retention trials. In Experiment 2 (n = 48), lip movements were present during all three trial types. In Experiment 3 (bilinguals only, n = 24), a still face with no lip movements was present in all three trial types. While toddlers succeeded in the familiar and disambiguation trials of every experiment, success in the retention trials was only found in Experiment 2. This work suggests that the extra-linguistic support provided by lip movements improved the learning and recognition of the novel words.

9 pages, 464 KiB  
Article
Does Human Touch Facilitate Object Categorization in 6-to-9-Month-Old Infants?
by Girija Kadlaskar, Sandra Waxman and Amanda Seidl
Brain Sci. 2020, 10(12), 940; https://doi.org/10.3390/brainsci10120940 - 6 Dec 2020
Cited by 1 | Viewed by 3087
Abstract
Infants form object categories in the first months of life. By 3 months and throughout the first year, successful categorization varies as a function of the acoustic information presented in conjunction with category members. Here we ask whether tactile information, delivered in conjunction with category members, also promotes categorization. Six- to 9-month-olds participated in an object categorization task in either a touch-cue or no-cue condition. For infants in the touch-cue condition, familiarization images were accompanied by precisely-timed light touches from their caregivers; infants in the no-cue condition saw the same images but received no touches. Only infants in the touch-cue condition formed categories. This provides the first evidence that touch may play a role in supporting infants’ object categorization.

Review


12 pages, 427 KiB  
Review
A New Proposal for Phoneme Acquisition: Computing Speaker-Specific Distribution
by Mihye Choi and Mohinish Shukla
Brain Sci. 2021, 11(2), 177; https://doi.org/10.3390/brainsci11020177 - 1 Feb 2021
Cited by 5 | Viewed by 2179
Abstract
Speech is an acoustically variable signal, and one of the sources of this variation is the presence of multiple speakers. Empirical evidence has suggested that adult listeners possess remarkably sensitive (and systematic) abilities to process speech signals, despite speaker variability. These abilities include not only a sensitivity to speaker-specific variation, but also an ability to utilize speaker variation with other sources of information for further processing. Recently, many studies also showed that young children seem to possess a similar capacity. This suggests continuity in the processing of speaker-dependent speech variability, and suggests that this ability could also be important for infants learning their native language. In the present paper, we review evidence for speaker variability and speech processing in adults, and speaker variability and speech processing in young children, with an emphasis on how they make use of speaker-specific information in word learning situations. Finally, we build on these findings to make a novel proposal for the use of speaker-specific information processing in phoneme learning in infancy.

17 pages, 702 KiB  
Review
Development of the Mechanisms Underlying Audiovisual Speech Perception Benefit
by Kaylah Lalonde and Lynne A. Werner
Brain Sci. 2021, 11(1), 49; https://doi.org/10.3390/brainsci11010049 - 5 Jan 2021
Cited by 17 | Viewed by 3479
Abstract
The natural environments in which infants and children learn speech and language are noisy and multimodal. Adults rely on the multimodal nature of speech to compensate for noisy environments during speech communication. Multiple mechanisms underlie mature audiovisual benefit to speech perception, including reduced uncertainty as to when auditory speech will occur, use of correlations between the amplitude envelope of auditory and visual signals in fluent speech, and use of visual phonetic knowledge for lexical access. This paper reviews evidence regarding infants’ and children’s use of temporal and phonetic mechanisms in audiovisual speech perception benefit. The ability to use temporal cues for audiovisual speech perception benefit emerges in infancy. Although infants are sensitive to the correspondence between auditory and visual phonetic cues, the ability to use this correspondence for audiovisual benefit may not emerge until age four. A more cohesive account of the development of audiovisual speech perception may follow from a more thorough understanding of the development of sensitivity to and use of various temporal and phonetic cues.
