Search Results (31)

Search Parameters:
Keywords = phoneme discrimination

20 pages, 821 KiB  
Article
The Role of Phoneme Discrimination in the Variability of Speech and Language Outcomes Among Children with Hearing Loss
by Kerry A. Walker, Jinal K. Shah, Lauren Alexander, Stacy Stiell, Christine Yoshinaga-Itano and Kristin M. Uhler
Behav. Sci. 2025, 15(8), 1072; https://doi.org/10.3390/bs15081072 - 6 Aug 2025
Abstract
This research compares speech discrimination abilities between 17 children who are hard-of-hearing (CHH) and 13 children with normal hearing (CNH), aged 9 to 36 months, using either a conditioned head turn (CHT) or conditioned play paradigm, for two phoneme pairs /ba-da/ and /sa-ʃa/. Because CHH were tested in both aided and unaided conditions, CNH were also tested twice on each phoneme contrast to control for learning effects. When speech discrimination abilities were compared between CHH wearing hearing aids (HAs) and CNH, no statistical differences were observed in stop consonant discrimination, but a significant difference was observed for fricative discrimination. Among CHH, significant benefits were observed for /ba-da/ discrimination while wearing HAs compared to the no-HA condition. All CHH were early-identified, early-amplified, and enrolled in parent-centered early intervention services. Under these conditions, CHH demonstrated the ability to discriminate speech comparably to CNH. Additionally, repeated testing within one month did not change speech discrimination scores, indicating good test–retest reliability. Finally, this research explored infant/toddler listening fatigue in the behavioral speech discrimination task: the CHT paradigm included returning to a contrast (/a-i/) previously shown to be easier for both CHH and CNH to discriminate, to examine whether failure to discriminate /ba-da/ or /sa-ʃa/ was due to listening fatigue or off-task behavior.
(This article belongs to the Special Issue Language and Cognitive Development in Deaf Children)

15 pages, 1545 KiB  
Article
Speech Recognition in Noise: Analyzing Phoneme, Syllable, and Word-Based Scoring Methods and Their Interaction with Hearing Loss
by Saransh Jain, Vijaya Kumar Narne, Bharani, Hema Valayutham, Thejaswini Madan, Sunil Kumar Ravi and Chandni Jain
Diagnostics 2025, 15(13), 1619; https://doi.org/10.3390/diagnostics15131619 - 26 Jun 2025
Abstract
Introduction: This study aimed to compare different scoring methods (phoneme-, syllable-, and word-based) during word-recognition-in-noise testing and to examine their interaction with hearing loss severity. These scoring methods provide a structured framework for refining clinical audiological diagnosis by revealing underlying auditory processing at multiple linguistic levels. We highlight how scoring differences inform differential diagnosis and guide targeted audiological interventions. Methods: Pure-tone audiometry and word-in-noise testing were conducted on 100 subjects with a wide range of hearing loss severity. Speech recognition was scored using phoneme-, syllable-, and word-based methods. All procedures were designed to reflect standard diagnostic protocols in clinical audiology. Discriminant function analysis examined how these scoring methods differentiate the degree of hearing loss. Results: Each method provided unique information about auditory processing: phoneme-based scoring highlighted basic auditory discrimination, syllable-based scoring captured temporal and phonological processing, and word-based scoring reflected real-world listening conditions by incorporating contextual knowledge. These findings emphasize the diagnostic value of each scoring approach in clinical settings, aiding differential diagnosis and treatment planning. Conclusions: This study showed the effect of different scoring methods on differentiating hearing loss by severity. We recommend integrating phoneme-based scoring into standard diagnostic batteries to enhance early detection and personalize rehabilitation strategies. Future research should examine integration with other speech perception tests and applicability across different clinical settings.
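
As a concrete illustration of the three scoring granularities compared above, the following minimal Python sketch scores one hypothetical response at the phoneme, syllable, and word level; the example word, its segmentation, and the scoring rule are invented for illustration and are not the study's materials.

```python
# Hypothetical scoring sketch: the same response earns partial credit at the
# phoneme level, less at the syllable level, and nothing at the word level.

def proportion_correct(target_units, response_units):
    """Position-by-position scoring: credit each unit the response matches."""
    hits = sum(t == r for t, r in zip(target_units, response_units))
    return hits / len(target_units)

# Target "baseball" heard in noise, response "baseboard" (made-up example).
target_phonemes   = ["b", "eɪ", "s", "b", "ɔ", "l"]
response_phonemes = ["b", "eɪ", "s", "b", "ɔ", "r", "d"]
target_syllables   = ["base", "ball"]
response_syllables = ["base", "board"]

phoneme_score  = proportion_correct(target_phonemes, response_phonemes)    # 5/6
syllable_score = proportion_correct(target_syllables, response_syllables)  # 1/2
word_score     = float(target_syllables == response_syllables)             # 0.0, all-or-nothing

print(phoneme_score, syllable_score, word_score)
```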

29 pages, 2091 KiB  
Article
Distributional Learning and Language Activation: Evidence from L3 Spanish Perception Among L1 Korean–L2 English Speakers
by Jeong Mun and Alfonso Morales-Front
Languages 2025, 10(6), 147; https://doi.org/10.3390/languages10060147 - 19 Jun 2025
Abstract
This study investigates L3 Spanish perception patterns among L1 Korean–L2 English bilinguals with varying L3 proficiency levels, aiming to test the applicability of traditional L2 perceptual models in multilingual contexts. We conducted two experiments: a cross-linguistic discrimination task and a cross-language identification task. Results revealed unexpected outcomes unique to multilingual contexts. Participants had difficulty reliably discriminating between cross-linguistic categories and showed little improvement over time. Similarly, they did not demonstrate progress in categorizing sounds specific to each language. The absence of a clear correlation between proficiency levels and the ability to discriminate and categorize sounds suggests that input distribution and language-specific activation may play more critical roles in L3 perception, consistent with the distributional learning approach. We argue that phoneme distributions from all three languages likely occupy a shared perceptual space. When a specific language is activated, the relevant phoneme distributions become dominant, while others are suppressed. This selective activation, while not crucial in traditional L1 and L2 studies, is critical in L3 contexts, like the one examined here, where managing multiple phonemic systems complicates discrimination and categorization. These findings underscore the need for theoretical adjustments in multilingual phonetic acquisition models and highlight the complexities of language processing in multilingual settings.
(This article belongs to the Special Issue Advances in the Investigation of L3 Speech Perception)

23 pages, 2751 KiB  
Article
Speech Production Development in Mandarin-Speaking Children: A Case of Lingual Stop Consonants
by Fangfang Li
Behav. Sci. 2025, 15(4), 516; https://doi.org/10.3390/bs15040516 - 13 Apr 2025
Abstract
Lingual stops are among the earliest sounds acquired by young children, but the process of acquiring the temporal coordination of lingual gestures necessary for the production of stop consonants appears to be protracted. The current research investigates the developmental course of lingual stop consonants in 100 Mandarin-speaking 2- to 5-year-olds using the acoustic parameter voice onset time (VOT). Children were engaged in a word-repetition task and recorded while producing words beginning with /t/, /d/, /k/, and /g/. Results indicate well-established contrasts between /t/ and /d/ as well as between /k/ and /g/ by age 2. However, compared with adults' speech patterns, children's productions are characterized by greater within-category dispersion and overlap, as well as lower phoneme discriminability. Mandarin-speaking children also go through an "overshoot" stage, producing longer-than-adult VOT values, especially for the voiceless aspirated stops /t/ and /k/. Lastly, unlike adults, who exhibit gender-specific patterns in VOT, boys and girls do not show distinct VOT patterns by age 5. These results are discussed in relation to children's lingual motor control development and the organization of phonological and phonetic structures during language acquisition.
(This article belongs to the Special Issue Developing Cognitive and Executive Functions Across Lifespan)
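
As a rough illustration of the acoustic measures named above, the sketch below computes within-category dispersion and a simple pooled-SD separation index from invented VOT values; it is a toy example, not the study's analysis code.

```python
# Toy VOT summary for an aspirated/unaspirated lingual stop pair.
# All values (in ms) are invented for illustration.
import numpy as np

vot_t = np.array([75.0, 90.0, 110.0, 82.0, 130.0, 95.0])  # /t/: long-lag VOTs
vot_d = np.array([12.0, 18.0, 9.0, 22.0, 15.0, 11.0])     # /d/: short-lag VOTs

def separation_index(a, b):
    """d-like discriminability: category mean difference in pooled-SD units."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return abs(a.mean() - b.mean()) / pooled_sd

# Greater within-category SD and a smaller separation index would correspond
# to the child-like pattern described in the abstract.
print("dispersion (SD, ms):", vot_t.std(ddof=1), vot_d.std(ddof=1))
print("separation index:", separation_index(vot_t, vot_d))
```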

22 pages, 2244 KiB  
Article
Mismatch Negativity Unveils Tone Perception Strategies and Degrees of Tone Merging: The Case of Macau Cantonese
by Han Wang, Fei Gao and Jingwei Zhang
Brain Sci. 2024, 14(12), 1271; https://doi.org/10.3390/brainsci14121271 - 17 Dec 2024
Abstract
Background/Objectives: Previous studies have examined the role of working memory in cognitive tasks such as syntactic, semantic, and phonological processing, thereby contributing to our understanding of linguistic information management and retrieval. However, the real-time processing of phonological information, particularly for suprasegmental features like tone, whose contour is a time-varying signal, remains relatively underexplored within the framework of Information Processing Theory (IPT). This study aimed to address this gap by investigating the real-time processing of similar tonal information by native Cantonese speakers, thereby providing a deeper understanding of how IPT applies to auditory processing. Methods: Specifically, this study combined assessments of cognitive functions, an AX discrimination task, and electroencephalography (EEG) to investigate the discrimination results and real-time processing characteristics of native Macau Cantonese speakers perceiving three pairs of similar tones. Results: The behavioral results confirmed the completed merger of T2–T5 in Macau Cantonese and the ongoing mergers of T3–T6 and T4–T6, with perceptual merging rates of 45.46% and 27.28%, respectively. Mismatch negativity (MMN) results from the passive oddball experiment revealed distinct temporal processing patterns for the three tone pairs. Cognitive functions, particularly attention and working memory, significantly influenced tone discrimination, with more pronounced effects observed in the mean amplitude of the MMN during T4–T6 discrimination. Differences in MMN peak latency between T3–T6 and T4–T6 further suggested different perceptual strategies for these contour-related tones: the T3–T6 pair can be perceived from early signal input, whereas perceiving T4–T6 relies on constant signal input. Conclusions: This distinction in cognitive resource allocation may explain the different merging rates of the two tone pairs. By focusing on the perceptual difficulty of tone pairs and employing EEG techniques, this study revealed the temporal processing of similar tones by native speakers, providing new insights into tone phoneme processing and speech variation.
(This article belongs to the Collection Collection on Neurobiology of Language)
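
For readers unfamiliar with how MMN measures are derived, here is a minimal sketch of the standard computation (deviant average minus standard average, then mean amplitude and peak latency in a window); the data, sampling rate, and 100–250 ms window are placeholders, not the study's parameters.

```python
# Minimal MMN sketch for one electrode in a passive oddball design.
import numpy as np

fs = 500                                      # sampling rate in Hz (assumed)
standard_epochs = np.random.randn(200, 400)   # trials x samples, placeholder EEG
deviant_epochs  = np.random.randn(40, 400)

mmn = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)  # difference wave

t = np.arange(mmn.size) / fs * 1000 - 100     # time in ms; epoch starts at -100 ms
win = (t >= 100) & (t <= 250)                 # illustrative MMN analysis window

mean_amplitude  = mmn[win].mean()             # mean amplitude measure
peak_latency_ms = t[win][np.argmin(mmn[win])] # MMN is a negativity: take the minimum
print(mean_amplitude, peak_latency_ms)
```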

24 pages, 4425 KiB  
Brief Report
Transcranial Magnetic Stimulation Facilitates Neural Speech Decoding
by Lindy Comstock, Vinícius Rezende Carvalho, Claudia Lainscsek, Aria Fallah and Terrence J. Sejnowski
Brain Sci. 2024, 14(9), 895; https://doi.org/10.3390/brainsci14090895 - 2 Sep 2024
Cited by 1
Abstract
Transcranial magnetic stimulation (TMS) has been widely used to study the mechanisms that underlie motor output. Yet the extent to which TMS acts upon the cortical neurons implicated in volitional motor commands, and the focal limitations of TMS, remain subject to debate. Previous research links TMS to improved subject performance in behavioral tasks, including a bias in phoneme discrimination. Our study replicates this result, which implies a causal relationship between electromagnetic stimulation and psychomotor activity, and tests whether TMS-facilitated psychomotor activity recorded via electroencephalography (EEG) may thus serve as a superior input for neural decoding. First, we illustrate that site-specific TMS elicits a double dissociation in discrimination ability for two phoneme categories. Next, we perform a classification analysis on the EEG signals recorded during TMS and find a dissociation between stimulation site and decoding accuracy that parallels the behavioral results. We observe weak to moderate evidence for the alternative hypothesis in a Bayesian analysis of group means, with more robust results upon stimulation of a brain region governing multiple phoneme features. Overall, task accuracy was a significant predictor of decoding accuracy for phoneme categories (F(1,135) = 11.51, p < 0.0009) and individual phonemes (F(1,119) = 13.56, p < 0.0003), providing new evidence for a causal link between TMS, neural function, and behavior.
(This article belongs to the Special Issue Language, Communication and the Brain)
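
The F statistics above come from regressing decoding accuracy on task accuracy. As a sketch of how such an F(1, n-2) test is computed, the code below runs a simple linear regression on simulated data; the data and effect size are invented, and only the degrees of freedom mirror the phoneme-category analysis.

```python
# Simple-regression F test on simulated accuracies: F(1, n-2) = t^2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
task_acc   = rng.uniform(0.5, 1.0, 137)                      # behavioral accuracy
decode_acc = 0.3 + 0.4 * task_acc + rng.normal(0, 0.1, 137)  # simulated decoding accuracy

res = stats.linregress(task_acc, decode_acc)
df = task_acc.size - 2                                       # 135 here, as in F(1,135)
f_stat = (res.rvalue * np.sqrt(df / (1 - res.rvalue**2))) ** 2  # slope t statistic, squared
print(f"F(1,{df}) = {f_stat:.2f}, p = {res.pvalue:.2g}")
```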

32 pages, 7815 KiB  
Article
Neural Adaptation at Stimulus Onset and Speed of Neural Processing as Critical Contributors to Speech Comprehension Independent of Hearing Threshold or Age
by Jakob Schirmer, Stephan Wolpert, Konrad Dapper, Moritz Rühle, Jakob Wertz, Marjoleen Wouters, Therese Eldh, Katharina Bader, Wibke Singer, Etienne Gaudrain, Deniz Başkent, Sarah Verhulst, Christoph Braun, Lukas Rüttiger, Matthias H. J. Munk, Ernst Dalhoff and Marlies Knipper
J. Clin. Med. 2024, 13(9), 2725; https://doi.org/10.3390/jcm13092725 - 6 May 2024
Cited by 4
Abstract
Background: It is assumed that speech comprehension deficits in background noise are caused by age-related or acquired hearing loss. Methods: We examined young, middle-aged, and older individuals with and without hearing threshold loss using pure-tone (PT) audiometry, short-pulsed distortion-product otoacoustic emissions (pDPOAEs), auditory brainstem responses (ABRs), auditory steady-state responses (ASSRs), speech comprehension (OLSA), and syllable discrimination in quiet and noise. Results: A noticeable decline of hearing sensitivity in extended high-frequency regions, and its influence on low-frequency-induced ABRs, was striking. When testing for differences in OLSA thresholds normalized for PT thresholds (PTTs), marked differences in speech comprehension ability exist not only in noise but also in quiet, across the whole age range investigated. Listeners with poor speech comprehension in quiet exhibited relatively lower pDPOAEs and, thus, cochlear amplifier performance independent of PTT, smaller and delayed ABRs, and lower performance in vowel-phoneme discrimination below phase-locking limits (/o/-/u/). When OLSA was tested in noise, listeners with poor speech comprehension independent of PTT had larger pDPOAEs and, thus, cochlear amplifier performance, larger ASSR amplitudes, and higher uncomfortable loudness levels, all linked with lower performance in vowel-phoneme discrimination above the phase-locking limit (/i/-/y/). Conclusions: This study indicates that listening in noise in humans has a sizable disadvantage in envelope coding when basilar-membrane compression is compromised. Clearly, and in contrast to previous assumptions, both good and poor speech comprehension can exist independently of differences in PTTs and age, a phenomenon that urgently requires improved techniques for diagnosing sound processing at stimulus onset in the clinical routine.

9 pages, 909 KiB  
Article
Hearing and Language Skills in Children Using Hearing Aids: Experimental Intervention Study
by Luana Speck Polli Burigo, Anna Quialheiro, Karina Mary de Paiva, Thaiana Vargas dos Santos, Luciele Kauana Woide, Luciana Berwanger Cigana, Janaina Massignani and Patricia Haas
J. Pers. Med. 2024, 14(4), 372; https://doi.org/10.3390/jpm14040372 - 30 Mar 2024
Abstract
Introduction: Hearing loss in childhood compromises a child's auditory, linguistic, and social skill development. Stimulation and early intervention through therapy and the use of personal sound amplification devices (PSAPs) are important for improving communication. Purpose: To verify the effectiveness of speech therapy intervention on the auditory and linguistic skills of Brazilian children aged between 6 and 8 years using PSAPs. Methods: Experimental study analyzing the intervention process in children aged between 6 and 8 years with mild to severe bilateral hearing loss and prelingual deafness who are PSAP users. Diagnostic information was analyzed, and assessments and interventions were carried out using the Glendonald Auditory Screening Procedure (GASP), a phoneme discrimination test with figures (TFDF), an expressive language category classification test, and the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) questionnaire. Results: Sixteen children participated in the study, divided into a control group (CG) of six children and an intervention group (IG) of ten children. All participants underwent two protocol application sessions, and the IG underwent six speech therapy intervention sessions. On the IT-MAIS, the CG's score increased by 9% and the IG's by 3% after intervention. On the TFDF, the IG showed a 5% increase in phonemic discrimination ability. The expressive language category classification test and the GASP were considered not sensitive enough to capture changes in auditory and linguistic skills. Conclusions: The study found a significant improvement in the IG on the TFDF protocol and an increase in IT-MAIS scores in both groups.
(This article belongs to the Section Methodology, Drug and Device Discovery)

10 pages, 549 KiB  
Article
Duration Perception and Reading in Typically Developing Adults and Adults with Developmental Dyslexia: Implications for Assessment and Intervention
by Aikaterini Liapi, Susana Silva and Vasiliki Folia
Eur. J. Investig. Health Psychol. Educ. 2024, 14(3), 699-708; https://doi.org/10.3390/ejihpe14030046 - 15 Mar 2024
Cited by 1
Abstract
While the link between beat perception and reading skills is attributed to a general improvement in neural entrainment to speech units, duration perception (DP) is primarily linked to a specific aspect of speech perception, specifically discriminating phonemes of varying lengths. Our previous study found a significant correlation between DP and pseudoword reading in both typically developing (TD) individuals and adults with developmental dyslexia (DD). This suggests that, like beat perception, DP may also enhance overall speech perception. However, our previous study employed a composite measure that did not discriminate speed from accuracy. In this study, we sought to replicate the link between DP and pseudoword reading in a new sample and explore how it might vary depending on the reading parameter being measured. We analyzed the performance of 60 TD and 20 DD adults in DP, word reading, and pseudoword reading tasks, analyzing the latter for both speed and accuracy. Indeed, duration skills correlated positively with pseudoword reading accuracy. In TD adults, there was no association between DP and reading speed, whereas DD individuals exhibited slower reading speed alongside improved duration skills. We emphasize the potential usefulness of DP tasks in assessment and early intervention and raise new questions about compensatory strategies adopted by DD adults.
(This article belongs to the Collection Research in Clinical and Health Contexts)

38 pages, 4527 KiB  
Article
Mismatch Responses to Speech Contrasts in Preschoolers with and without Developmental Language Disorder
by Ana Campos, Jyrki Tuomainen and Outi Tuomainen
Brain Sci. 2024, 14(1), 42; https://doi.org/10.3390/brainsci14010042 - 31 Dec 2023
Cited by 5
Abstract
This study compared cortical responses to speech in preschoolers with typical language development (TLD) and with Developmental Language Disorder (DLD). We investigated whether top-down language effects modulate speech perception in young children in an adult-like manner. We compared cortical mismatch responses (MMRs) during the passive perception of speech contrasts in three groups of participants: preschoolers with TLD (n = 11), preschoolers with DLD (n = 16), and adults (n = 20). We also measured children's phonological skills and investigated whether they are associated with the cortical discrimination of phonemic changes involving different linguistic complexities. The results indicated top-down language effects in adults, with enhanced cortical discrimination of lexical stimuli but not of non-words. In preschoolers, the TLD and DLD groups did not differ in the MMR measures, and no top-down effects were detected. Moreover, we found no association between MMRs and phonological skills, even though the DLD group's phonological skills were significantly lower. Our findings suggest that top-down language modulations in speech discrimination may not be present during early childhood, and that children with DLD may not exhibit cortical speech perception deficits. The lack of association between phonological and MMR measures indicates that further research is needed to understand the link between language skills and cortical activity in preschoolers.

19 pages, 4308 KiB  
Article
Speaker Profiling Based on the Short-Term Acoustic Features of Vowels
by Mohammad Ali Humayun, Junaid Shuja and Pg Emeroylariffion Abas
Technologies 2023, 11(5), 119; https://doi.org/10.3390/technologies11050119 - 7 Sep 2023
Cited by 3
Abstract
Speech samples can provide valuable information about speaker characteristics, including their social backgrounds. Accent variation across speaker backgrounds is reflected in corresponding acoustic features of speech, and these acoustic variations can be analyzed to assist in tracking down criminals from speech samples available as forensic evidence. Speech accent identification has recently received significant attention in the speech forensics research community. However, most works have utilized long-term temporal modelling of acoustic features for accent classification and have disregarded the stationary acoustic characteristics of particular phoneme articulations. This paper analyzes short-term acoustic features extracted from a central time window of English vowel segments for accent discrimination. Various feature computation techniques are compared for the accent classification task. Spectral features as input give better performance than cepstral features, with the lower filters contributing more significantly to the classification task. Moreover, a detailed analysis is presented of the time window durations and frequency bin resolutions used to compute short-term spectral features for accent discrimination: longer time durations generally require higher frequency resolution to optimize classification performance. These results are significant, as they show the benefits of using spectral features for speaker profiling despite the popularity of cepstral features for other speech-related tasks.
(This article belongs to the Section Information and Communication Technologies)
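
A minimal sketch of the feature-extraction idea described above, assuming a mono waveform and invented parameter values (window duration and bin resolution are exactly the parameters the paper varies):

```python
# Short-term spectral features from the central window of a vowel segment.
import numpy as np

def central_spectral_features(vowel, fs, win_ms=25.0, n_fft=512, n_low_bins=40):
    """Magnitude spectrum of a Hamming-windowed central frame, lower bins only."""
    win_len = int(fs * win_ms / 1000)
    mid = len(vowel) // 2
    frame = vowel[mid - win_len // 2 : mid + win_len // 2]
    frame = frame * np.hamming(len(frame))
    spectrum = np.abs(np.fft.rfft(frame, n=n_fft))
    return spectrum[:n_low_bins]   # keep the lower bins, which contributed most

fs = 16000
vowel = np.sin(2 * np.pi * 700 * np.arange(fs // 10) / fs)  # 100 ms synthetic "vowel"
features = central_spectral_features(vowel, fs)
print(features.shape)  # (40,) input vector for an accent classifier
```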

20 pages, 7377 KiB  
Article
Automatic Detection System for Velopharyngeal Insufficiency Based on Acoustic Signals from Nasal and Oral Channels
by Yu Zhang, Jing Zhang, Wen Li, Heng Yin and Ling He
Diagnostics 2023, 13(16), 2714; https://doi.org/10.3390/diagnostics13162714 - 21 Aug 2023
Cited by 3
Abstract
Velopharyngeal insufficiency (VPI) is a type of pharyngeal dysfunction that causes speech impairment and swallowing disorders. Speech therapists play a key role in the diagnosis and treatment of speech disorders, but there is a worldwide shortage of experienced speech therapists, and artificial-intelligence-based computer-aided diagnosis could help fill this gap. This paper proposes an automatic system for VPI detection at the subject level, a non-invasive and convenient approach to VPI diagnosis. Based on the impaired articulation patterns of VPI patients, nasal- and oral-channel acoustic signals are collected as raw data, and the system integrates the symptom discriminant results at the phoneme level. For consonants, relative prominent frequency description and relative frequency distribution features are proposed to discriminate nasal air emission caused by VPI. For hypernasality-sensitive vowels, a cross-attention residual Siamese network (CARS-Net) is proposed to perform automatic VPI/non-VPI classification at the phoneme level; CARS-Net embeds a cross-attention module between its two branches to improve the classification model for vowels. We validate the proposed system on a self-built dataset, and its accuracy reaches 98.52%, demonstrating the feasibility of automatic VPI diagnosis.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

24 pages, 6264 KiB  
Article
Perceptual Discrimination of Phonemic Contrasts in Quebec French: Exposure to Quebec French Does Not Improve Perception in Hexagonal French Native Speakers Living in Quebec
by Scott Kunkel, Elisa Passoni and Esther de Leeuw
Languages 2023, 8(3), 193; https://doi.org/10.3390/languages8030193 - 14 Aug 2023
Cited by 1
Abstract
In Quebec French, /a ~ ɑ/ and /ε ~ aε/ are phonemic contrasts, whereas in Hexagonal French these vowels are merged to /a/ and /ε/, respectively. We tested the effects of extended exposure to Quebec French (QF) as a second dialect (D2) on Hexagonal French (HF) speakers' abilities to perceive these contrasts. Three groups of listeners were recruited: (1) non-mobile HF speakers born and living in France (HF group); (2) non-mobile QF speakers born and living in Quebec (QF group); and (3) mobile HF speakers who had moved from France to Quebec (HF>QF group). To determine any fine-grained effects of D2 exposure on the perception of vowel contrasts, participants completed a same–different discrimination task in which they listened to stimuli paired at different levels of acoustic similarity. As expected, QF listeners showed a significant advantage over the HF group in discriminating /a ~ ɑ/ and /ε ~ aε/ pairs, suggesting an own-dialect advantage in perceptual discrimination; this advantage appeared greater for the /ε ~ aε/ contrast. The QF listeners also showed an advantage over the HF>QF group, and, surprisingly, this advantage was greater than their advantage over the HF group. In other words, the acquisition of a second dialect did not enhance listeners' abilities to perceive differences between phonemic contrasts in that D2; if anything, it disadvantaged the perceptual abilities of the HF>QF group. This might be because these phonemes have, over time, become less acoustically marked for the HF>QF participants and have potentially become integrated into their D1 phonemic categories.
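
Same–different tasks of this kind are commonly summarized with a sensitivity index. The sketch below uses the simple yes/no-style d' approximation on invented counts; dedicated same–different models would adjust for the paradigm, so treat this only as the general idea.

```python
# d' from a same-different task, yes/no approximation with invented counts:
# "different" responses to different pairs are hits; to same pairs, false alarms.
from scipy.stats import norm

hits, n_different_pairs = 42, 50
false_alarms, n_same_pairs = 8, 50

hit_rate = hits / n_different_pairs
fa_rate = false_alarms / n_same_pairs
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"d' = {d_prime:.2f}")
```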

18 pages, 2587 KiB  
Article
Brain Lateralization for Language, Vocabulary Development and Handedness at 18 Months
by Delphine Potdevin, Parvaneh Adibpour, Clémentine Garric, Eszter Somogyi, Ghislaine Dehaene-Lambertz, Pia Rämä, Jessica Dubois and Jacqueline Fagard
Symmetry 2023, 15(5), 989; https://doi.org/10.3390/sym15050989 - 27 Apr 2023
Cited by 2
Abstract
Is hemisphere lateralization for speech processing linked to handedness? To answer this question, we compared hemisphere lateralization for speech processing and handedness in 18-month-old infants, the age at which infants start to produce words and reach a stable pattern of handedness. To assess hemisphere lateralization for speech perception, we coupled event-related potential (ERP) recordings with a syllable-discrimination paradigm and measured response differences to a change in phoneme or voice (different speaker) in the left and right clusters of electrodes. To assess handedness, we gave infants a 15-item grasping test. We also evaluated infants' range of vocabulary to assess whether it was associated with the direction and degree of handedness and with language-related brain asymmetries. Brain signals in response to a change in phoneme and voice were left- and right-lateralized, respectively, indicating functional brain lateralization for speech processing in infants. Handedness and brain asymmetry for speech processing were not related. In addition, there were no interactions between the range of vocabulary and asymmetry in brain responses, even for a phoneme change. However, a high degree of right-handedness and a greater vocabulary range were associated with increased ERP amplitudes in the voice condition, irrespective of hemisphere, suggesting that they influence discrimination during voice processing.
(This article belongs to the Special Issue Early Laterality in Behaviour and Brain)
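
A minimal sketch of the hemisphere-comparison logic described above, with placeholder data and hypothetical electrode-cluster indices (the actual clusters, window, and conditions are the study's):

```python
# Compare the change response (deviant minus standard) over left vs right clusters.
import numpy as np

erp = np.random.randn(2, 32, 300)   # condition (0=standard, 1=change) x channels x samples
left_cluster, right_cluster = [2, 3, 4], [28, 29, 30]  # hypothetical channel indices
win = slice(100, 150)               # illustrative analysis window

diff = erp[1] - erp[0]              # response difference per channel
left_amp  = diff[left_cluster][:, win].mean()
right_amp = diff[right_cluster][:, win].mean()

# A larger-magnitude response on one side suggests lateralization of the change response.
print("left:", left_amp, "right:", right_amp)
print("left-lateralized" if abs(left_amp) > abs(right_amp) else "right-lateralized")
```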

16 pages, 1477 KiB  
Article
Auditory Discrimination—A Missing Piece of Speech and Language Development: A Study on 6–9-Year-Old Children with Auditory Processing Disorder
by Anna Guzek and Katarzyna Iwanicka-Pronicka
Brain Sci. 2023, 13(4), 606; https://doi.org/10.3390/brainsci13040606 - 3 Apr 2023
Cited by 3
Abstract
Auditory discrimination, the hearing ability crucial for speech and language development that allows one to perceive changes in the volume, duration, and frequency of sounds, was assessed in 366 participants with normal peripheral hearing: 220 participants with auditory processing disorders (APD) and 146 typically developing (TD) children, all aged 6–9 years. Speech discrimination was tested with nonsense words using the phoneme discrimination test (PDT), and pure-tone discrimination with the frequency pattern test (FPT). The results were statistically analyzed and correlated. The median FPT result of participants with APD was less than half that of the TD group (20% vs. 50%; p < 0.05), and the PDT showed a similar pattern (21 vs. 24; p < 0.05). The FPT results of 9-year-old APD participants were worse than those of 6-year-old TD participants (30% vs. 40%; p < 0.05), indicating that a significant FPT deficit strongly suggests APD. The development of auditory discrimination does not end with the acquisition of phonemes but continues during school age; phoneme discrimination is not yet fully equalized even among typically developing 9-year-olds. Nonsense-word tests allow for reliable testing of phoneme discrimination. Children with APD should be tested with both the PDT and FPT, because together the results allow for developing individual therapeutic programs.
(This article belongs to the Section Neurolinguistics)
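
Group medians like these are typically compared with a nonparametric test. A sketch with invented scores (not the study's data):

```python
# Mann-Whitney U comparison of FPT scores (% correct) between APD and TD groups.
import numpy as np
from scipy.stats import mannwhitneyu

fpt_apd = np.array([10, 20, 20, 30, 40, 20, 10, 30])  # hypothetical APD scores
fpt_td  = np.array([40, 50, 60, 50, 40, 70, 50, 60])  # hypothetical TD scores

u, p = mannwhitneyu(fpt_apd, fpt_td, alternative="two-sided")
print(f"median APD = {np.median(fpt_apd)}%, median TD = {np.median(fpt_td)}%, "
      f"U = {u}, p = {p:.3g}")
```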