Search Results (282)

Search Parameters:
Keywords = consonants

16 pages, 855 KB  
Article
Speech Sound Production in Adults with Dyslexia
by Sabrina Turker, Natalia Kartushina and Narly Golestani
Brain Sci. 2026, 16(5), 448; https://doi.org/10.3390/brainsci16050448 - 23 Apr 2026
Abstract
Background: Dyslexia is a reading disorder that is associated with phonological processing and awareness difficulties. However, little is known about phonetic production in dyslexia. Whereas individual differences in speech sound perception were linked to native and foreign speech sound production in typical readers, this remains to be explored in dyslexia. Given the phonetic processing deficits frequently encountered in dyslexia, we aimed to pinpoint potential differences in the acoustic realization of native phonemic production in adults with dyslexia. Methods: Ten adults with dyslexia and ten age-matched typical readers produced 24 native-language minimal voiced–voiceless word pairs across three places of articulation (labial, dental, velar) in a reading task. Acoustic analyses addressed phonemic category size, between-category distance, and voice onset time (VOT). Pseudoword reading performance served as an index of phonological decoding ability. Results: For category size, we observed a trend-level group-by-type interaction (p = 0.059, η2 = 0.04): both groups showed larger category sizes for voiced than voiceless consonants, but this difference was numerically larger in typical readers. Between-category distance showed a marginal group effect (p = 0.089, η2 = 0.14), with larger differences between categories in dyslexia. VOT showed the expected effect of voicing, but no group differences. Conclusions: Our results indicate broadly preserved speech production in dyslexia, alongside subtle differences in category separation and size in dyslexia, marked by considerable inter-individual variability. Full article
(This article belongs to the Special Issue Current Advances in Developmental Dyslexia)
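The category measures described in the abstract above can be sketched as follows. This is an illustrative simplification, assuming category size is a category's VOT spread and between-category distance is the difference of category means; the numbers are invented, not the study's data.

```python
import statistics

def category_stats(voiced_vots, voiceless_vots):
    """Summarize two VOT categories: 'size' is taken here as the
    standard deviation of a category's tokens and 'distance' as the
    difference of the category means. Both are simplifications of the
    acoustic measures used in the study."""
    size_voiced = statistics.stdev(voiced_vots)
    size_voiceless = statistics.stdev(voiceless_vots)
    distance = statistics.mean(voiceless_vots) - statistics.mean(voiced_vots)
    return size_voiced, size_voiceless, distance

# Illustrative VOT values in milliseconds (not the study's data):
voiced = [8, 12, 10, 15, 9]        # short-lag (voiced) stops
voiceless = [55, 62, 58, 70, 60]   # long-lag (voiceless) stops
size_v, size_vl, distance = category_stats(voiced, voiceless)
```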
14 pages, 2521 KB  
Article
Longitudinal Correlation of Frequency-to-Place Mismatch and Postoperative Speech Perception Outcomes in Cochlear Implant Recipients: Monosyllable, Consonant, Word, and Sentence
by Toshihito Sahara, Yujiro Hoshi, Anjin Mori, Hajime Koyama, Yasuhiro Osaki, Waki Nakajima, Takeshi Fujita, Akinori Kashio and Katsumi Doi
Audiol. Res. 2026, 16(2), 56; https://doi.org/10.3390/audiolres16020056 - 10 Apr 2026
Viewed by 246
Abstract
Background/Objectives: Frequency-to-place mismatch between cochlear implant (CI) electrodes and cochlear tonotopy has been suggested to affect postoperative speech perception. This study aimed to examine the associations between frequency-to-place mismatch and speech perception outcomes across multiple linguistic levels in patients with CI and to assess how these associations change over time using postoperative computed tomography. Methods: This retrospective cohort study included 44 postlingually deafened adults who underwent unilateral cochlear implantation with a Flex28 electrode by a single surgeon at a tertiary care hospital. Speech perception was assessed using CI-2004, a Japanese speech perception test consisting of monosyllables, consonants, words, and sentences, in quiet settings at 3, 6, and 12 months after CI activation. Partial correlation analyses between frequency-to-place mismatch and postoperative speech perception scores were performed in 35 of the 44 patients, controlling for age and mean preoperative pure-tone thresholds. Results: Negative associations were observed between frequency-to-place mismatch and CI-2004 scores, particularly for monosyllable and consonant perception in uncorrected analyses. After correction for multiple comparisons, only consonant perception at 3 months after CI activation remained significant (r = −0.52, p = 0.002). Similar patterns were observed for other speech measures and at later time points, although these did not remain significant after correction. Conclusions: Frequency-to-place mismatch was associated with postoperative speech perception outcomes, particularly those involving phoneme-level recognition. After correction for multiple comparisons, only consonant perception at 3 months after CI activation remained significant. Full article
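Partial correlation of the kind used above (associations controlled for covariates) can be computed by correlating regression residuals. A minimal sketch with one covariate (the study controlled for two, age and preoperative thresholds) and invented numbers:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def residuals(v, z):
    """Residuals of a simple linear regression of v on one covariate z."""
    n = len(v)
    mz, mv = sum(z) / n, sum(v) / n
    b = sum((a - mz) * (c - mv) for a, c in zip(z, v)) / sum((a - mz) ** 2 for a in z)
    a0 = mv - b * mz
    return [c - (a0 + b * zi) for zi, c in zip(z, v)]

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z."""
    return pearson(residuals(x, z), residuals(y, z))

# Toy numbers (not the study's data): x and y look weakly related
# until the shared covariate z is partialled out.
z = [1, 2, 3, 4]
x = [2, 1, 4, 3]
y = [0, 3, 2, 5]
r_partial = partial_corr(x, y, z)
```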

30 pages, 4780 KB  
Article
Systematic Phonetic Deviations in Standard Mandarin Acquisition: Perceptual and Acoustic Evidence from Lanyin Mandarin Speakers
by Yali Liu, Siyu Zhang, Zhijun Zhao and Lingyun Xie
Appl. Sci. 2026, 16(8), 3675; https://doi.org/10.3390/app16083675 - 9 Apr 2026
Viewed by 195
Abstract
Lanyin Mandarin is a major regional variety of Mandarin Chinese with phonological characteristics that interact with the acquisition of the codified Standard Mandarin norm. This study examined the pronunciation of Standard Mandarin by 67 native speakers from the Lanyin Mandarin area using a large-scale subjective listening experiment (12 listeners, 6700 tokens), with deviations analyzed across initial consonants, finals, and tones. Based on the perceptual results, a pronunciation deviation database was established (N = 20,100 monosyllabic tokens), enabling targeted acoustic comparisons with Standard Mandarin. The results reveal several systematic patterns with quantified deviation rates. For initial consonants, the highest deviation rates were observed for /l/→/n/ (30.5%), /s/→/ts/ (25.5%), and /tsh/→/ts/ (20.2%), significantly exceeding their reverse substitutions (/n/→/l/: 13.3%, /ts/→/s/: 0.0%, /ts/→/tsh/: 15.4%; all p < 0.001). For finals, /iŋ/→/in/ showed the strongest asymmetry (61.2% vs. 21.9% for the reverse), followed by /əŋ/→/ən/ (40.2%) and /ən/→/əŋ/ (39.2%). Tonal deviations were dominated by Tone 3 identified as Tone 2 (31.7%), with Tone 1→Tone 2 at a lower rate (8.4%). These deviations exhibited significant directional asymmetries (e.g., /l/→/n/ vs. /n/→/l/: χ2(1) = 768.06, p < 0.001). Acoustic analyses indicated that consonant confusions corresponded to F2/F3 formant convergence (e.g., Lanyin-biased /l/ F2 values approached Standard Mandarin /n/), while nasal finals showed F2 fronting (higher F2 values approaching Standard Mandarin /in/). Tonal analyses revealed a compressed pitch range (2.4 semitones narrower than Standard Mandarin), with flattened Tone 3 contours contributing to Tone 2 confusion. Together, these findings demonstrate quantifiable, systematic, and directional phonetic patterns in the acquisition of Standard Mandarin by Lanyin dialect speakers, supported by converging perceptual and acoustic evidence. Full article
(This article belongs to the Section Acoustics and Vibrations)
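Directional asymmetries like /l/→/n/ vs. /n/→/l/ above are tested with a chi-square statistic on a 2×2 contingency table. A minimal version, with made-up counts rather than the study's:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Made-up counts: 30/100 tokens deviate in one direction
# vs. 10/100 in the reverse direction.
chi2 = chi_square_2x2(30, 70, 10, 90)
```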

15 pages, 1426 KB  
Article
Consonant Error Profiles and Short-Term Memory Deficits in Chinese School-Age Children with Speech Sound Disorders
by Qi Xu, Nan Peng, Xihan Li, Lei Wang, Haifeng Duan, Cuijuan Xu, Xi Wang, Bo Zhou, Jianhong Wang and Lin Wang
Behav. Sci. 2026, 16(4), 540; https://doi.org/10.3390/bs16040540 - 5 Apr 2026
Viewed by 279
Abstract
Speech sound disorder (SSD) is common in childhood and can persist, adversely affecting language, literacy, and social functioning. Yet consonant error patterns in school-age children, particularly in non-English-speaking populations, remain insufficiently characterized. Short-term memory (STM) supports phonological processing and speech learning, but its relationship with SSD severity in school-age children is not well established. This study profiles consonant errors and short-term memory in school-age Chinese children with SSD and examines short-term memory correlates and predictors of disorder severity to inform targeted interventions. A total of 142 Mandarin-speaking school-age children with SSD were recruited. For the short-term memory analyses, we randomly selected 70 children with SSD and recruited 70 typically developing controls. Speech was assessed using a word-level picture-naming task to derive consonant accuracy and characterize error types/patterns, and short-term memory was measured with the WISC-IV Digit Span (forward and backward). Substitutions predominated for most consonants, and individual phonemes often exhibited co-occurring error patterns. In addition, school-age children with SSD showed significantly poorer short-term memory than typically developing peers across multiple indices. Notably, backward digit span was positively associated with consonant accuracy and remained an independent predictor of consonant accuracy. These results advance our understanding of the mechanisms underlying SSD and provide an evidence-based rationale for future interventions that combine speech-focused therapy with cognitive training to enhance clinical outcomes. Full article
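At its simplest, the predictor analysis above (backward digit span predicting consonant accuracy) reduces to a least-squares regression. A single-predictor sketch with invented, perfectly linear numbers:

```python
def ols_fit(x, y):
    """Ordinary least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical data: backward digit span vs. percent consonants correct.
span = [2, 3, 4, 5, 6]
accuracy = [60, 65, 70, 75, 80]
intercept, slope = ols_fit(span, accuracy)
```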

20 pages, 736 KB  
Article
Cognitive Biases in Asset Pricing: An Empirical Analysis of the Alphabet Effect and Ticker Fluency in the US Market
by Antonio Pagliaro
Symmetry 2026, 18(3), 477; https://doi.org/10.3390/sym18030477 - 11 Mar 2026
Viewed by 399
Abstract
Behavioral finance theory predicts that Processing Fluency—the subjective ease of parsing a nominal stimulus—should systematically influence investor attention and asset pricing through heuristic-based decision making. Yet modern equity markets, increasingly dominated by High-Frequency Trading (HFT) and algorithmic execution, provide powerful near-instantaneous arbitrage forces that should neutralize any pricing premium arising from superficial nominal cues. Whether cognitive biases such as the “Ticker Fluency” effect and the “Alphabet Effect” persist in this algorithmic environment or have been fully arbitraged away remains an open empirical question with direct implications for the boundary conditions of Processing Fluency Theory. We address this gap by applying a deterministic Heuristic Fluency Score—based on vowel density and consonant cluster penalties—to all 492 S&P 500 constituents over 752 trading days (January 2021–January 2024), estimating individual stock Fama-French 3-Factor Alphas via daily time-series regressions, and testing whether fluency or alphabetical rank explains cross-sectional variation in abnormal returns after controlling for Liquidity, Amihud illiquidity, and GICS Sector Fixed Effects. To guard against Selection Bias, we explicitly contrast a biased illustrative case study (N = 25, 2019–2024) against the rigorous full-market analysis. We find no statistically or economically significant effect: the Fluency Score coefficient is β = 0.0036 (p = 0.495) and the Alphabet Rank coefficient is β = 0.0027 (p = 0.642), with the results robust to all tested parameterizations (λ ∈ [0.05, 0.20]; p > 0.50 throughout). These findings establish a boundary condition of Processing Fluency Theory: in algorithm-dominated, highly liquid large-cap markets, cognitive biases in nominal cues are fully absorbed by arbitrage, and ticker symbols function as neutral identifiers rather than heuristic signals.
Residual effects, if any, are more likely to manifest in attention-based or volume-related outcomes, or in less institutionalized market segments where algorithmic participation is lower. Full article
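A fluency score of the general shape described above (vowel density with consonant-cluster penalties) can be sketched as follows. The paper's actual Heuristic Fluency Score is deterministic but its exact formula is not reproduced here; this function, its penalty weight, and the example tickers are an illustrative stand-in.

```python
def fluency_score(ticker, cluster_penalty=0.1):
    """Toy pronounceability score: vowel density minus a penalty per
    consonant cluster (a run of two or more consonants). Not the
    paper's exact formula."""
    t = ticker.upper()
    vowels = set("AEIOU")
    density = sum(ch in vowels for ch in t) / len(t)
    clusters = run = 0
    for ch in t:
        if ch.isalpha() and ch not in vowels:
            run += 1
        else:
            clusters += run >= 2
            run = 0
    clusters += run >= 2
    return density - cluster_penalty * clusters

# "VISA" alternates vowels and consonants; "AMZN" is penalized
# for the MZN consonant cluster.
score_visa = fluency_score("VISA")
score_amzn = fluency_score("AMZN")
```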

25 pages, 4978 KB  
Article
Psychoacoustic Study of Simple-Tone Dyads: Frequency Ratio and Pitch
by Stefania Kaklamani and Constantinos Simserides
Acoustics 2026, 8(1), 14; https://doi.org/10.3390/acoustics8010014 - 9 Feb 2026
Viewed by 1152
Abstract
This study investigates how listeners perceive consonance and dissonance in dyads composed of simple (sine) tones, focusing on the effects of frequency ratio (R) and mean frequency (F). Seventy adult participants—categorized by musical training, gender, and age group—rated randomly ordered dyads using binary preference responses (“like” or “dislike”). Dyads represented standard Western intervals but were constructed with sine tones rather than musical notes, preserving interval ratios while varying absolute pitch. Statistical analyses reveal a consistent decrease in preference with increasing mean frequency, regardless of interval class or participant group. Octaves, fifths, fourths, and sixths showed a nearly linear decline in preference with increasing F. Major seconds were among the least preferred. Musicians rated octaves and certain consonant intervals more positively than non-musicians, while gender and age groups exhibited different sensitivity to high frequencies. The findings suggest that both interval structure and pitch range shape the perception of consonance in simple-tone dyads, with possible psychoacoustic explanations involving frequency sensitivity and auditory fatigue at higher frequencies. Full article
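The two stimulus parameters above, frequency ratio R and mean frequency F, fully determine a simple-tone dyad. A small sketch of that construction (the equal-tempered semitone conversion is standard; the example frequencies are arbitrary):

```python
import math

def dyad(f_low, ratio):
    """Build a simple-tone dyad from a lower frequency and an interval
    ratio R; return the upper frequency, the mean frequency F, and the
    interval size in equal-tempered semitones."""
    f_high = f_low * ratio
    mean_f = (f_low + f_high) / 2.0
    semitones = 12.0 * math.log2(ratio)
    return f_high, mean_f, semitones

# A just perfect fifth (R = 3:2) above 440 Hz:
f_high, mean_f, semitones = dyad(440.0, 3 / 2)
```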

23 pages, 409 KB  
Article
Morphology-Aware Segmentation and Tokenization for Turkic Languages: A CSE-Guided Framework (The Kazakh Case)
by Ualsher Tukeyev and Bekarys Rysbek
Information 2026, 17(2), 128; https://doi.org/10.3390/info17020128 - 29 Jan 2026
Viewed by 784
Abstract
The main challenge of resource-poor languages—namely, the lack of sufficiently large and linguistically informed datasets for training neural models—is addressed in this paper by developing a dataset generation technology based on a Complete Set of Endings (CSE) morphological model for Turkic languages. Building on this technology, we propose a CSE-Guided Framework for morphology-aware statistical tokenization and neural model segmentation, with Kazakh as a case study. Applying the proposed CSE-guided approach to adapt well-known tokenizers for Kazakh leads to measurable reductions in neural model training time (up to approximately 33%) in our experimental setting, primarily due to shorter tokenized sentence lengths. In addition, we extend the SOTA FEMSeg-CRF architecture by incorporating Kazakh vowel–consonant harmony rules at the embedding generation stage. Within the proposed framework, training on a corpus of CSE-generated wordforms results in the FEMSeg_kaz_v2 model, which is evaluated using intrinsic segmentation metrics. Training on a CSE-segmented sentence corpus yields FEMSeg_kaz_v3, which is further assessed using intrinsic, extrinsic, and external evaluation on a manually prepared gold-standard dataset. The paper presents a CSE-guided framework for morphology-aware tokenization and segmentation for Turkic languages, supported by corpus construction, model extensions, and multi-level evaluation. The proposed CSE-Guided Framework can potentially be adapted for other Turkic languages. Full article
(This article belongs to the Section Artificial Intelligence)
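The core idea of ending-based segmentation above can be sketched as a longest-suffix match against a set of endings. The real Complete Set of Endings model for Kazakh is far larger and linguistically curated; the mini ending set below is a toy stand-in.

```python
def cse_segment(word, endings):
    """Segment a wordform into (stem, ending) by longest-suffix match
    against a set of endings; return the word unchanged if no ending
    matches. A drastic simplification of the CSE model."""
    for end in sorted(endings, key=len, reverse=True):
        if word.endswith(end) and len(word) > len(end):
            return word[:-len(end)], end
    return word, ""

# Hypothetical mini ending set; "kitaptar" = "kitap" (book) + plural "tar".
stem, ending = cse_segment("kitaptar", {"tar", "ter", "lar", "ler", "da"})
```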

36 pages, 3446 KB  
Article
Neurodegenerative Disease-Specific Relations Between Temporal and Kinetic Gait Features Identified Using InterCriteria Analysis
by Irena Jekova, Vessela Krasteva and Todor Stoyanov
Mathematics 2026, 14(2), 340; https://doi.org/10.3390/math14020340 - 19 Jan 2026
Viewed by 568
Abstract
Gait analysis is a non-invasive, cost-effective method for detecting subtle motor changes in neurodegenerative disorders. This study uses an exploratory approach to identify temporal–kinetic gait feature relationships specific to amyotrophic lateral sclerosis (ALS) and Huntington (HUNT) and Parkinson (PARK) disease versus healthy controls (CONTROL) using recent advances in InterCriteria Analysis (ICrA). The novelty lies in the (i) comprehensive temporal–kinetic feature set, (ii) use of ICrA to characterize inter-feature coordination patterns at population and disease-group levels and (iii) interpretation in a neuromechanical context. Forty-one temporal/kinetic features were extracted from left/right leg ground reaction force and rate-of-force-development signals, considering laterality, gait phase (stance, swing, double support), magnitudes, waveform correlations, and inter-/intra-limb asymmetries. The analysis included 14,580 steps from 64 recordings in the Gait in Neurodegenerative Disease Database: 16 CONTROL (4054 steps), 13 ALS (2465), 20 HUNT (4730), 15 PARK (3331). Sensitivity analysis identified strict consonance thresholds (μ ≥ 0.75, ν ≤ 0.25), selecting <5% strongest inter-feature relations from 820 feature pairs: population level (16 positive, 14 negative), group-level (15–25 positive, 9–14 negative). 
ICrA identified group-specific consonances—present in one group but absent in others—highlighting disease-related alterations in gait coordination: ALS (15/11 positive/negative, disrupted bilateral stride coordination, prolonged stance/double-support, decoupled stride/cadence, desynchronized force-generation patterns—reflecting compensatory adaptations to muscle weakness and instability), HUNT (11/7, severe temporal–kinetic breakdown consistent with gait instability—loss of bilateral coordination, reduced swing time, slowed force development), PARK (1/2, subtle localized disruptions—prolonged stance and double-support intervals, reduced force during weight transfer, overall coordination remained largely preserved). Benchmarking vs. Pearson correlation showed strong linear agreement (R2 = 0.847, p < 0.001), confirming that ICrA captures dominant dependencies while moderating the correlation via uncertainty. These results demonstrate that ICrA provides a quantitative, interpretable framework for characterizing gait coordination patterns and can guide principled feature selection in future predictive modeling. Full article
(This article belongs to the Special Issue Advanced Intelligent Algorithms for Decision Making Under Uncertainty)
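The pairwise agreement/disagreement degrees (μ, ν) central to the abstract above can be sketched as follows, using the core comparison rule of InterCriteria Analysis (the full method also assigns ties an uncertainty degree, omitted here) and the strict thresholds the abstract reports:

```python
from itertools import combinations

def icra_pair(a, b):
    """Degrees of agreement (mu) and disagreement (nu) between two
    criteria evaluated on the same objects: compare the ordering of
    every object pair under each criterion; ties count toward
    neither degree."""
    idx_pairs = list(combinations(range(len(a)), 2))
    agree = disagree = 0
    for i, j in idx_pairs:
        da, db = a[i] - a[j], b[i] - b[j]
        if da * db > 0:
            agree += 1
        elif da * db < 0:
            disagree += 1
    n = len(idx_pairs)
    return agree / n, disagree / n

def positive_consonance(mu, nu, mu_min=0.75, nu_max=0.25):
    """Strict consonance thresholds reported in the abstract."""
    return mu >= mu_min and nu <= nu_max

# Two toy gait features that rank four steps identically:
mu, nu = icra_pair([1, 2, 3, 4], [2, 3, 4, 5])
```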

16 pages, 5040 KB  
Article
Phonetic Training and Talker Variability in the Perception of Spanish Stop Consonants
by Iván Andreu Rascón
Languages 2026, 11(1), 1; https://doi.org/10.3390/languages11010001 - 23 Dec 2025
Viewed by 983
Abstract
This study examined how variability in phonetic training input (high vs. low) influences the perception and acquisition of Spanish stop consonants by English-speaking beginners. A total of 128 participants completed 20 online identification sessions targeting /p, t, k, b, d, g/. In the high-variability condition (HVPT), learners heard tokens from six speakers, and in the low-variability condition (LVPT), all input came from a single speaker. Training followed an interleaved-talker design with immediate feedback, and perceptual learning was evaluated using a Bayesian hierarchical logistic regression analysis. Results showed improvement across sessions for both groups, with identification accuracy reaching ceiling by the end of the training sessions. Differences between HVPT and LVPT were small: LVPT showed steeper categorization trajectories in some cases due to slightly lower baselines, but neither condition yielded a measurable advantage. The pattern observed suggests that for boundary-shift contrasts such as Spanish stops, perceptual improvements are driven primarily by input quantity rather than variability. This interpretation aligns with input-based models of L2 speech learning (SLM-r, L2LP) and underscores the role of repeated exposure in restructuring phonological categories. Full article
(This article belongs to the Special Issue The Impacts of Phonetically Variable Input on Language Learning)
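The session-by-session learning trajectory described above follows the logistic-regression logic of the analysis: log-odds of a correct identification grow with session number. This sketch is non-hierarchical and uses invented coefficients, unlike the study's Bayesian hierarchical model with per-participant effects.

```python
import math

def predicted_accuracy(session, baseline_logit, slope):
    """Logistic learning trajectory: probability of a correct
    identification as a function of session number. Coefficients
    are illustrative, not fitted values from the study."""
    return 1.0 / (1.0 + math.exp(-(baseline_logit + slope * session)))

# Accuracy climbs toward ceiling over 20 training sessions.
trajectory = [predicted_accuracy(s, 0.0, 0.3) for s in range(21)]
```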

15 pages, 880 KB  
Article
Differentiating Between Human-Written and AI-Generated Texts Using Automatically Extracted Linguistic Features
by Georgios P. Georgiou
Information 2025, 16(11), 979; https://doi.org/10.3390/info16110979 - 12 Nov 2025
Cited by 10 | Viewed by 7997
Abstract
While extensive research has focused on ChatGPT in recent years, very few studies have systematically quantified and compared linguistic features between human-written and artificial intelligence (AI)-generated language. This exploratory study aims to investigate how various linguistic components are represented in both types of texts, assessing AI’s ability to emulate human writing. Using human-authored essays as a benchmark, we prompted ChatGPT to generate essays of equivalent length. These texts were analyzed using Open Brain AI, an online computational tool, to extract measures of phonological, morphological, syntactic, and lexical constituents. Despite AI-generated texts appearing to mimic human speech, the results revealed significant differences across multiple linguistic features such as specific types of consonants, nouns, adjectives, pronouns, adjectival/prepositional modifiers, and use of difficult words, among others. These findings underscore the importance of integrating automated tools for efficient language assessment, reducing time and effort in data analysis. Moreover, they emphasize the necessity for enhanced training methodologies to improve AI’s engineering capacity for producing more human-like text. Full article
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)

15 pages, 912 KB  
Article
Exploratory Behavioral Study of the Production and Processing of French Categorical Liaisons in Children with Expressive DLD
by Elisabeth Cesari, Bernard Laks and Frédéric Isel
NeuroSci 2025, 6(4), 112; https://doi.org/10.3390/neurosci6040112 - 6 Nov 2025
Viewed by 975
Abstract
Categorical liaison—defined as the obligatory pronunciation of a latent word in the form of a final consonant when followed by a vowel as the initial word or a word beginning with a silent “h” (e.g., des‿ours [dezuʁs])—is a robust phonological phenomenon in French and an informative window into morphophonological development. This exploratory behavioral study investigates the dissociation between perception and production of categorical liaisons among 24 French-speaking children aged 6–10 years diagnosed with expressive Developmental Language Disorder (DLD). A battery of nine ad hoc tasks assessed perception and production across words, pseudowords, noun phrases, and sentences. Results showed that children with DLD performed comparably to typically developing peers in perceiving unrealized categorical liaisons but exhibited significantly more omissions in production, regardless of context or age. Production deficits correlated with reduced working memory and inhibitory control. These preliminary findings provide descriptive data that can inform the development of standardized assessment tools and generate hypotheses about the cognitive mechanisms underlying categorical liaison difficulties in DLD. Full article

18 pages, 1819 KB  
Article
Speech Markers of Parkinson’s Disease: Phonological Features and Acoustic Measures
by Ratree Wayland, Rachel Meyer and Kevin Tang
Brain Sci. 2025, 15(11), 1162; https://doi.org/10.3390/brainsci15111162 - 29 Oct 2025
Cited by 1 | Viewed by 1730
Abstract
Background/Objectives: Parkinson’s disease (PD) affects both articulatory and phonatory subsystems, leading to characteristic speech changes known as hypokinetic dysarthria. However, few studies have jointly analyzed these subsystems within the same participants using interpretable deep-learning-based measures. Methods: Speech data from the PC-GITA corpus, including 50 Colombian Spanish speakers with PD and 50 age- and sex-matched healthy controls were analyzed. We combined phonological feature posteriors—probabilistic indices of articulatory constriction derived from the Phonet deep neural network—with harmonics-to-noise ratio (HNR) as a laryngeal measure. Linear mixed-effects models tested how these measures related to disease severity (UPDRS, UPDRS-speech, and Hoehn and Yahr), age, and sex. Results: PD participants showed significantly higher [continuant] posteriors, especially for dental stops, reflecting increased spirantization and articulatory weakening. In contrast, [sonorant] posteriors did not differ from controls, indicating reduced oral constriction without a shift toward more open, approximant-like articulations. HNR was predicted by vowel height and sex but did not distinguish PD from controls, likely reflecting ON-medication recordings. Conclusions: These findings demonstrate that deep-learning-derived articulatory features can capture early, subphonemic weakening in PD speech—particularly for coronal consonants—while single-parameter laryngeal indices such as HNR are less sensitive under medicated conditions. By linking spectral energy patterns to interpretable phonological categories, this approach provides a transparent framework for detecting subtle articulatory deficits and developing feature-level biomarkers of PD progression. Full article
(This article belongs to the Section Behavioral Neuroscience)
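The harmonics-to-noise ratio used as the laryngeal measure above is, at its core, a log energy ratio. A formula-level sketch only: splitting a real recording into periodic and aperiodic components (e.g., by pitch-synchronous or cepstral analysis) is not shown.

```python
import math

def hnr_db(harmonic_energy, noise_energy):
    """Harmonics-to-noise ratio in decibels from the energies of the
    periodic and aperiodic components of a voiced signal."""
    return 10.0 * math.log10(harmonic_energy / noise_energy)
```

Equal periodic and noise energy gives 0 dB; healthy modal voicing typically yields clearly positive values.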

29 pages, 1961 KB  
Article
Developing an AI-Powered Pronunciation Application to Improve English Pronunciation of Thai ESP Learners
by Jiraporn Lao-un and Dararat Khampusaen
Languages 2025, 10(11), 273; https://doi.org/10.3390/languages10110273 - 28 Oct 2025
Viewed by 2978
Abstract
This study examined the effects of using a specially designed AI-mediated pronunciation application in enhancing the production of English fricative consonants among Thai English for Specific Purposes (ESP) learners. The research utilized a quasi-experimental design involving intact classes of 74 undergraduate students majoring in Thai Dance and Music Education, divided into control (N = 38) and experimental (N = 36) groups. Grounded in Skill Acquisition Theory, the experimental group received pronunciation training via a custom-designed AI application leveraging automatic speech recognition (ASR), offering ESP-contextualized practice and real-time, individualized feedback. In contrast, the control group underwent traditional teacher-led articulatory training and teacher-assisted feedback. Pre- and post-test evaluations measured pronunciation for nine target fricatives in ESP-relevant contexts. The statistical analyses revealed significant improvements in both groups, with the AI-mediated group demonstrating substantially greater gains, particularly on challenging sounds absent in Thai, such as /θ/, /ð/, /z/, /ʃ/, and /h/. The findings underscore the potential of AI-driven interventions to address language-specific phonological challenges through personalized, immediate feedback and adaptive practices. The study provides empirical evidence for integrating advanced technology into ESP pronunciation pedagogy, informing future curriculum design for EFL contexts. Implications for theory, practice, and future research are discussed, emphasizing tailored technological solutions for language learners with specific phonological profiles. Full article

15 pages, 1071 KB  
Article
Intercriteria Decision-Making Method for Speed and Load Effects Evaluation on Upper Arm Muscles in the Horizontal Plane
by Silvija Angelova, Rositsa Raikova and Maria Angelova
Appl. Sci. 2025, 15(20), 11213; https://doi.org/10.3390/app152011213 - 20 Oct 2025
Cited by 1 | Viewed by 688
Abstract
Speed and load effects on the number and type of pairwise interactions between six elbow and shoulder muscles or muscle (m.) heads were evaluated with the intercriteria decision-making method (ICrA). Surface electromyography (sEMG) signals of the m. deltoideus pars clavicularis (Dcla), m. deltoideus pars spinata (Dspi), m. biceps brachii (BB), m. triceps brachii caput longum (TB), m. brachialis (BR), and m. anconeus (AN) were recorded in ten healthy subjects. Data were collected during cycling movements (CMs), i.e., continuous flexions and extensions of the elbow joint in the horizontal plane. The CMs were performed with and without an added load at four different speeds. The sEMG data were subjected to ICrA to identify correlations between muscle activity and speed. The ICrA results demonstrate that the added load produced a higher number of consonance relations between muscle activities. Positive consonance (PosC) appears between the Dcla-Dspi, Dspi-BR, BB-BR, and TB-BR criteria pairs for the loaded flexion phases. For extension, Dcla-BB is in a consonance relation in the unloaded phases, while in the loaded ones five muscle pairs, namely Dcla-BB, Dcla-BR, Dspi-BR, BB-BR, and TB-BR, reach PosC. The greatest number of correlations is found for the fastest phase (1 s) of flexion and extension, regardless of load. Additionally, correlation dependencies between the two faster (Sp2-Sp1) and the two slower (Sp10-Sp6) speeds were found.
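The core ICrA computation can be sketched as follows: for a pair of criteria (here, two muscles' activity levels over the same set of movement phases), count how often the two criteria order a pair of objects the same way (agreement) versus oppositely (disagreement). The data values and the consonance thresholds below are illustrative assumptions, not the paper's recordings or exact parameters.

```python
from itertools import combinations

def icra_pair(a, b):
    """Degrees of agreement (mu) and disagreement (nu) between two criteria,
    each given as a list of values over the same set of objects."""
    n = len(a)
    pairs = n * (n - 1) // 2
    s_mu = s_nu = 0
    for i, j in combinations(range(n), 2):
        da, db = a[i] - a[j], b[i] - b[j]
        if da * db > 0:      # both criteria order the object pair the same way
            s_mu += 1
        elif da * db < 0:    # the criteria order the pair oppositely
            s_nu += 1
        # ties (da == 0 or db == 0) contribute to neither degree
    return s_mu / pairs, s_nu / pairs

# Hypothetical sEMG activity levels of two muscles over five movement phases.
# A commonly used (but adjustable) rule declares positive consonance when
# mu is high and nu is low, e.g. mu >= 0.85 and nu <= 0.15.
mu, nu = icra_pair([1.0, 2.0, 3.0, 4.0, 5.0], [2.0, 2.5, 3.5, 4.0, 6.0])
print(mu, nu)  # perfectly concordant orderings -> 1.0 0.0
```

Applying this to every muscle pair, under every speed and load condition, yields the consonance tables the abstract summarizes.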
12 pages, 685 KB  
Article
Changes in Bilabial Contact Pressure as a Function of Vocal Loudness in Individuals with Parkinson’s Disease
by Jeff Searl
Appl. Sci. 2025, 15(18), 10165; https://doi.org/10.3390/app151810165 - 18 Sep 2025
Cited by 1 | Viewed by 767
Abstract
This study evaluated the impact of vocal loudness on bilabial contact pressure (BCP) during the production of bilabial English consonants in adults with Parkinson's disease (PD). Twelve adults with PD produced sentences in which the phonemes /b, p, m/ initiated a linguistically meaningful word, while BCP was sensed with a miniature pressure transducer positioned at the midline between the upper and lower lips. Stimuli were produced at two loudness levels: habitual, and twice as loud as habitual (Loud). A linear mixed model (LMM) indicated a statistically significant main effect of Condition (F (1, 714) = 16.210, p < 0.001), with Loud having greater BCP than Habitual (mean difference of 0.593 kPa). The main effect of Phoneme was also significant (F (1, 714) = 31.905, p < 0.001), with post hoc tests revealing that BCP was significantly higher for /p/ than for /m/ (p = 0.007) and for /b/ than for /m/ (p = 0.002). An additional LMM on the magnitude of the percent change in BCP in the Loud condition relative to the Habitual condition showed a significant main effect of Phoneme (F (2, 22.3) = 5.871, p = 0.006). The percent change in BCP was greatest for /p/ (47.7%), followed by /b/ (35.7%) and /m/ (27.4%), with statistically significant differences for both /p/ and /b/ compared to /m/ in post hoc tests. The results indicate that changes in vocal loudness produce changes in BCP in individuals with PD: a louder voice was associated with higher BCP for all three phonemes, although the increase was greater for the bilabial stops than for the nasal stop. These results provide initial insight into the mechanism by which therapeutic interventions focused on increasing loudness in people with PD alter oral articulatory behaviors. Future work detailing potential aerodynamic (e.g., oral air pressure build-up) and acoustic (e.g., burst intensity) correlates is needed to explain the mechanisms by which loud-focused speech treatments may improve speech intelligibility in people with PD.
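The percent-change measure analyzed in the second LMM is a simple relative difference; a minimal sketch, using illustrative pressure values rather than the study's raw data:

```python
def percent_change(habitual_kpa, loud_kpa):
    """Percent change in bilabial contact pressure from Habitual to Loud."""
    return 100.0 * (loud_kpa - habitual_kpa) / habitual_kpa

# Illustrative values only: a phoneme produced at 1.40 kPa habitually and
# 2.00 kPa in the Loud condition changes by about 42.9%.
print(round(percent_change(1.40, 2.00), 1))  # 42.9
```

Per-phoneme averages of this quantity (47.7% for /p/, 35.7% for /b/, 27.4% for /m/ in the study) are what the second model compared.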