Topic Editors

Dr. Ana Paula Soares
Human Cognition Lab, CIPsi, School of Psychology, University of Minho, 4710-057 Braga, Portugal
Dr. David Tomé
Center for Rehabilitation Research (CiR), Department of Audiology, School of Health, Polytechnic Institute of Porto (E2S-P.Porto), Porto, Portugal

Language: From Hearing to Speech and Writing

Abstract submission deadline: closed (31 October 2025)
Manuscript submission deadline: 31 December 2025

Topic Information

Dear Colleagues,

Language is a multifaceted process involving intricate interactions between cognitive, sensory, and motor systems. Hearing may allow us to speak, and reading may allow us to write, but how do these systems interact? What are their limits before impairment? Although significant progress has been made in recent years toward understanding these interactions, the combined contributions of these systems to language acquisition and processing, both in typical and atypical scenarios, are rarely considered together. This Topic seeks to bring together recent findings in this expansive research field, offering new insights into the aetiology of various spoken and reading/writing disorders, such as developmental language disorder, dyslexia, and aphasia, as well as other conditions in which language processing is disrupted, across diverse age groups, including children, adults, and the elderly. Contributions from diverse fields (e.g., Audiology, Speech–Hearing Sciences, Psychology, and Neuroscience) using a combination of different methodologies, including behavioural and neuroimaging techniques, are especially encouraged.

Dr. Ana Paula Soares
Dr. David Tomé
Topic Editors

Keywords

  • language acquisition and processing
  • audiology
  • speech
  • developmental language disorder
  • dyslexia
  • aphasia

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Brain Sciences (brainsci) | 2.8 | 5.6 | 2011 | 16.2 days | CHF 2200
Neurology International (neurolint) | 3.0 | 4.8 | 2009 | 21.4 days | CHF 1800
NeuroSci (neurosci) | 2.0 | – | 2020 | 27.1 days | CHF 1200

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (5 papers)

21 pages, 3633 KB  
Article
One System, Two Rules: Asymmetrical Coupling of Speech Production and Reading Comprehension in the Trilingual Brain
by Yuanbo Wang, Yingfang Meng, Qiuyue Yang and Ruiming Wang
Brain Sci. 2025, 15(12), 1288; https://doi.org/10.3390/brainsci15121288 - 29 Nov 2025
Abstract
Background/Objectives: The functional architecture connecting speech production and reading comprehension remains unclear in multilinguals. This study investigated the cross-modal interaction between these systems in trilinguals to resolve the debate between Age of Acquisition (AoA) and usage frequency. Methods: We recruited 144 Uyghur (L1)–Chinese (L2)–English (L3) trilinguals, a population uniquely dissociating acquisition order from social dominance. Participants completed a production-to-comprehension priming paradigm, naming pictures in one language before performing a lexical decision task on translated words. Data were analyzed using linear mixed-effects models. Results: Significant cross-language priming confirmed an integrated lexicon, yet a fundamental asymmetry emerged. The top-down influence of production was governed by AoA; earlier-acquired languages (specifically L1) generated more effective priming signals than L2. Conversely, the bottom-up efficiency of recognition was driven by social usage frequency; the socially dominant L2 was the most receptive target, surpassing the heritage L1. Conclusions: The trilingual lexicon operates via “Two Rules”: a history-driven production system (AoA) and an environment-driven recognition system (Social Usage). This asymmetrical baseline challenges simple bilingual extensions and clarifies the dynamics of multilingual language control. Full article
(This article belongs to the Topic Language: From Hearing to Speech and Writing)

24 pages, 1289 KB  
Systematic Review
Electrical Cortical Stimulation for Language Mapping in Epilepsy Surgery—A Systematic Review
by Honglin Zhu, Efthymia Korona, Sepehr Shirani, Fatemeh Samadian, Gonzalo Alarcon, Antonio Valentin and Ioannis Stavropoulos
Brain Sci. 2025, 15(12), 1267; https://doi.org/10.3390/brainsci15121267 - 26 Nov 2025
Abstract
Background: Language mapping is a critical component of epilepsy surgery, as postoperative language deficits can significantly impact patients’ quality of life. Electrical stimulation mapping has emerged as a valuable tool for identifying eloquent areas of the brain and minimising post-surgical language deficits. However, recent studies have shown that language deficits can occur despite language mapping, potentially due to variability in stimulation techniques and language task selection. The validity of specific linguistic tasks for mapping different cortical regions remains inadequately characterised. Objective: To systematically evaluate the validity of linguistic tasks used during electrical cortical stimulation (ECS) for language mapping in epilepsy surgery, analyse task-specific responses across cortical regions, and assess current evidence supporting optimal task selection for different brain areas. Methods: Following PRISMA 2020 guidelines, a systematic literature search was conducted in PubMed and Scopus covering articles published from January 2013 to November 2025. Studies on language testing with electrical cortical stimulation in epilepsy surgery cases were screened. Two reviewers independently screened 956 articles, with 45 meeting the inclusion criteria. Data extraction included language tasks, stimulation modalities (ECS, SEEG, ECoG, DECS), cortical regions, and language error types. Results: Heterogeneity in language testing techniques across various centres was identified. Visual naming deficits were primarily associated with stimulation of the posterior and basal temporal regions, fusiform gyrus, and parahippocampal gyrus. Auditory naming elicited impairments in the posterior superior and middle temporal gyri, angular gyrus, and fusiform gyrus. Spontaneous speech errors varied, with phonemic dysphasic errors linked to the inferior frontal and supramarginal gyri, and semantic errors arising from superior temporal and perisylvian parietal regions. Conclusions: Task-specific language mapping reveals distinct cortical specialisations, with systematic patterns emerging across studies. However, marked variability in testing protocols and inadequate standardisation limit reproducibility and cross-centre comparisons. Overall, refining and standardising the language task implementation process could lead to improved outcomes, ultimately minimising resection-related language impairment. Future research should validate task–region associations through prospective multicentre studies with long-term outcome assessment. Full article
(This article belongs to the Topic Language: From Hearing to Speech and Writing)

19 pages, 1247 KB  
Article
Longitudinal and Cross-Sectional Relations Between Early Rise Time Discrimination Abilities and Pre-School Pre-Reading Assessments: The Seeds of Literacy Are Sown in Infancy
by Marina Kalashnikova, Denis Burnham and Usha Goswami
Brain Sci. 2025, 15(9), 1012; https://doi.org/10.3390/brainsci15091012 - 19 Sep 2025
Abstract
Background/Objectives: The Seeds of Literacy project has followed infants at family risk for dyslexia (FR group) and infants not at family risk (NFR group) since the age of 5 months, exploring whether infant measures of auditory sensitivity and phonological skills are related to later reading achievement. Here, we retrospectively assessed relations between infant performance on a rise time discrimination task with new pre-reading behavioural measures administered at 60 months. In addition, we re-classified dyslexia risk at 60 months and again assessed relations to rise time sensitivity. Participants were re-grouped using the pre-reading behavioural measures as either dyslexia risk at 60 months (60mDR) or no dyslexia risk (60mNDR). Methods: FR and NFR children (44 English-learning children) completed assessments of rise time discrimination at 10 and/or 60 months, phonological awareness, phonological memory, rapid automatised naming (RAN), letter knowledge, and language skills (receptive vocabulary and grammatical awareness). Results: Longitudinal analyses showed significant time-lagged correlations between rise time sensitivity at 10 months and both RAN and letter knowledge at 60 months. Rise time sensitivity at 60 months was significantly poorer in those children re-grouped as 60mDR, and rise time sensitivity was significantly related to concurrent phonological awareness, RAN, letter knowledge, and receptive vocabulary, but not to tests of grammatical awareness. Conclusions: The data support the view that children’s rise time sensitivity is significantly related to their pre-reading phonological abilities. These findings are discussed in terms of Temporal Sampling theory. Full article
(This article belongs to the Topic Language: From Hearing to Speech and Writing)

16 pages, 317 KB  
Perspective
Listening to the Mind: Integrating Vocal Biomarkers into Digital Health
by Irene Rodrigo and Jon Andoni Duñabeitia
Brain Sci. 2025, 15(7), 762; https://doi.org/10.3390/brainsci15070762 - 18 Jul 2025
Abstract
The human voice is an invaluable tool for communication, carrying information about a speaker’s emotional state and cognitive health. Recent research highlights the potential of acoustic biomarkers to detect early signs of mental health and neurodegenerative conditions. Despite their promise, vocal biomarkers remain underutilized in clinical settings, with limited standardized protocols for assessment. This Perspective article argues for the integration of acoustic biomarkers into digital health solutions to improve the detection and monitoring of cognitive impairment and emotional disturbances. Advances in speech analysis and machine learning have demonstrated the feasibility of using voice features such as pitch, jitter, shimmer, and speech rate to assess these conditions. Moreover, we propose that singing, particularly simple melodic structures, could be an effective and accessible means of gathering vocal biomarkers, offering additional insights into cognitive and emotional states. Given its potential to engage multiple neural networks, singing could function as an assessment tool and an intervention strategy for individuals with cognitive decline. We highlight the necessity of further research to establish robust, reproducible methodologies for analyzing vocal biomarkers and standardizing voice-based diagnostic approaches. By integrating vocal analysis into routine health assessments, clinicians and researchers could significantly advance early detection and personalized interventions for cognitive and emotional disorders. Full article
(This article belongs to the Topic Language: From Hearing to Speech and Writing)
17 pages, 244 KB  
Hypothesis
Proprioceptive Resonance and Multimodal Semiotics: Readiness to Act, Embodied Cognition, and the Dynamics of Meaning
by Marco Sanna
NeuroSci 2025, 6(2), 42; https://doi.org/10.3390/neurosci6020042 - 12 May 2025
Abstract
This paper proposes a theoretical model of meaning-making grounded in proprioceptive awareness and embodied imagination, arguing that human cognition is inherently multimodal, anticipatory, and sensorimotor. Drawing on Peircean semiotics, Lotman’s model of cultural cognition, and current research in neuroscience, we show that readiness to act—a proprioceptively grounded anticipation of movement—plays a fundamental role in the emergence of meaning, from perception to symbolic abstraction. Contrary to traditional approaches that reduce language to a purely symbolic or visual system, we argue that meaning arises through the integration of sensory, motor, and affective processes, structured by axial proprioceptive coordinates (vertical, horizontal, sagittal). Using Peirce’s triadic model of interpretants, we identify proprioception as the modulatory interface between sensory stimuli, emotional response, and logical reasoning. A study on skilled pianists supports this view, showing that mental rehearsal without physical execution improves performance via motor anticipation. We define this process as proprioceptive resonance, a dynamic synchronization of embodied states that enables communication, language acquisition, and social intelligence. This framework allows for a critique of linguistic abstraction and contributes to ongoing debates in semiotics, enactive cognition, and the origin of syntax, challenging the assumption that symbolic thought precedes embodied experience. Full article
(This article belongs to the Topic Language: From Hearing to Speech and Writing)