Special Issue "Advances in the Neurocognition of Music and Language"

A special issue of Brain Sciences (ISSN 2076-3425). This special issue belongs to the section "Cognitive Neuroscience".

Deadline for manuscript submissions: 15 August 2019

Special Issue Editors

Guest Editor
Dr. Daniela Sammler

Otto Hahn Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany
Interests: music cognition and the brain; neurocognitive overlap of music and language; neural bases of prosody
Guest Editor
Dr. Stefan Elmer

Auditory Research Group Zurich, University of Zurich, Institute of Psychology, Binzmuehlestrasse 14, Box 1, 8050 Zurich, Switzerland
Interests: music training and transfer effects; spectrotemporal speech processing; language learning and expertise; multilingualism; speech, music and cognitive functions; functional and structural plasticity

Special Issue Information

Dear Colleagues,

Neurocomparative music and language research has seen major advances over the past two decades: the Shared Syntactic Integration Resource Hypothesis (SSIRH) has come of age, as has the Modularity of Music Processing, and yet research on the relationship between music and language has never lost its appeal. On the contrary, the field has left no stone unturned in exploring the neurofunctional similarities of syntax and rhythm, pitch and meaning, and their emotional and communicative power, in both ontogeny and phylogeny. Research on perceptual and cognitive transfer between domains has recognized the signs of the times by exploring, among other topics, learning and cognitive reserve in aging and the benefits of neural entrainment. Methods have been refined and the explanatory value of neural overlap has been questioned, all to draw a more nuanced picture of what is shared and what is not, and of what this knowledge offers practitioners. The goal of this Special Issue is to take a step back and showcase persistent neural analogies between musical and linguistic information processing and their entwined organization in human cognition, to scrutinize the limits of neural overlap and sharing, and to assess the applicability of the combined knowledge in pedagogy and therapy.

Dr. Daniela Sammler
Dr. Stefan Elmer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Brain Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 850 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Music, speech and language
  • Brain
  • Neural overlap
  • Perception and cognition
  • Learning and oscillatory dynamics
  • Therapeutic applications, cognitive reserve, and aging

Published Papers (8 papers)


Research


Open Access Article
Music Training Positively Influences the Preattentive Perception of Voice Onset Time in Children with Dyslexia: A Longitudinal Study
Brain Sci. 2019, 9(4), 91; https://doi.org/10.3390/brainsci9040091
Received: 1 April 2019 / Revised: 12 April 2019 / Accepted: 13 April 2019 / Published: 21 April 2019
PDF Full-text (2103 KB) | HTML Full-text | XML Full-text
Abstract
Previous results showed a positive influence of music training on linguistic abilities at both attentive and preattentive levels. Here, we investigate whether six months of active music training is more efficient than painting training at improving the preattentive processing of phonological parameters based on durations that are often impaired in children with developmental dyslexia (DD). Results were also compared to a control group of Typically Developing (TD) children matched on reading age. We used a Test–Training–Retest procedure and analysed the Mismatch Negativity (MMN) and the N1 and N250 components of the Event-Related Potentials to syllables that differed in Voice Onset Time (VOT), vowel duration, and vowel frequency. Results were clear-cut in showing a normalization of the preattentive processing of VOT in children with DD after music training but not after painting training. They also revealed increased N250 amplitude to duration-deviant stimuli in children with DD after music but not painting training, and no training effect on the preattentive processing of frequency. These findings are discussed in view of recent theories of dyslexia pointing to deficits in processing the temporal structure of speech. They clearly encourage the use of active music training for the rehabilitation of children with language impairments. Full article
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)
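For readers unfamiliar with the MMN measure used above, the component is conventionally quantified as the difference wave between averaged responses to deviant and standard stimuli. The following is a minimal illustrative sketch of that subtraction in Python; the array shapes, sampling rate, channel index, and analysis window are assumptions for demonstration, not the authors' pipeline.

```python
import numpy as np

# Illustrative epoched EEG: (n_trials, n_channels, n_samples), assumed 500 Hz sampling.
# In a real analysis, standard_epochs and deviant_epochs come from a preprocessing pipeline.
fs = 500.0
times = np.arange(-0.1, 0.5, 1.0 / fs)              # -100 ms to +500 ms around syllable onset
standard_epochs = np.random.randn(200, 32, times.size)
deviant_epochs = np.random.randn(40, 32, times.size)

# Average across trials, then subtract: MMN difference wave = deviant ERP - standard ERP.
erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)
mmn = erp_deviant - erp_standard                     # shape: (n_channels, n_samples)

# Mean amplitude in an assumed 100-250 ms window at an assumed fronto-central channel (index 0).
window = (times >= 0.10) & (times <= 0.25)
print("Mean MMN amplitude:", mmn[0, window].mean())
```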

Open Access Article
Impaired Recognition of Metrical and Syntactic Boundaries in Children with Developmental Language Disorders
Brain Sci. 2019, 9(2), 33; https://doi.org/10.3390/brainsci9020033
Received: 19 December 2018 / Revised: 23 January 2019 / Accepted: 31 January 2019 / Published: 5 February 2019
Cited by 1 | PDF Full-text (967 KB) | HTML Full-text | XML Full-text
Abstract
In oral language, syntactic structure is cued in part by phrasal metrical hierarchies of acoustic stress patterns. For example, many children’s texts use prosodic phrasing comprising tightly integrated hierarchies of metre and syntax to highlight the phonological and syntactic structure of language. Children with developmental language disorders (DLDs) are relatively insensitive to acoustic stress. Here, we disrupted the coincidence of metrical and syntactic boundaries as cued by stress patterns in children’s texts so that metrical and/or syntactic phrasing conflicted. We tested three groups of children: children with DLD, age-matched typically developing controls (AMC) and younger language-matched controls (YLC). Children with DLDs and younger, language-matched controls were poor at spotting both metrical and syntactic disruptions. The data are interpreted within a prosodic phrasing hypothesis of DLD based on impaired acoustic processing of speech rhythm. Full article
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)

Open Access Article
Electrical Brain Responses Reveal Sequential Constraints on Planning during Music Performance
Brain Sci. 2019, 9(2), 25; https://doi.org/10.3390/brainsci9020025
Received: 11 January 2019 / Revised: 21 January 2019 / Accepted: 26 January 2019 / Published: 28 January 2019
PDF Full-text (3799 KB) | HTML Full-text | XML Full-text
Abstract
Elements in speech and music unfold sequentially over time. To produce sentences and melodies quickly and accurately, individuals must plan upcoming sequence events, as well as monitor outcomes via auditory feedback. We investigated the neural correlates of sequential planning and monitoring processes by manipulating auditory feedback during music performance. Pianists performed isochronous melodies from memory at an initially cued rate while their electroencephalogram was recorded. Pitch feedback was occasionally altered to match either an immediately upcoming Near-Future pitch (next sequence event) or a more distant Far-Future pitch (two events ahead of the current event). Near-Future, but not Far-Future altered feedback perturbed the timing of pianists’ performances, suggesting greater interference of Near-Future sequential events with current planning processes. Near-Future feedback triggered a greater reduction in auditory sensory suppression (enhanced response) than Far-Future feedback, reflected in the P2 component elicited by the pitch event following the unexpected pitch change. Greater timing perturbations were associated with enhanced cortical sensory processing of the pitch event following the Near-Future altered feedback. Both types of feedback alterations elicited feedback-related negativity (FRN) and P3a potentials and amplified spectral power in the theta frequency range. These findings suggest similar constraints on producers’ sequential planning to those reported in speech production. Full article
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)
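As an aside on the spectral measure mentioned in this abstract, theta-band power is commonly estimated from the EEG by computing a power spectrum and integrating it over roughly 4–8 Hz. Below is a minimal sketch using Welch's method under assumed parameters (sampling rate, window length, band edges); it illustrates the general measure, not the study's actual analysis.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed EEG sampling rate (Hz)
eeg = np.random.randn(int(fs * 60))          # stand-in for one channel, 60 s of data

# Power spectral density via Welch's method (2-s Hann segments, 50% overlap by default).
freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))

# Integrate the PSD over the theta band (~4-8 Hz) to obtain band power.
theta = (freqs >= 4) & (freqs <= 8)
theta_power = np.trapz(psd[theta], freqs[theta])
print("Theta band power:", theta_power)
```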

Open Access Communication
Cross-Modal Priming Effect of Rhythm on Visual Word Recognition and Its Relationships to Music Aptitude and Reading Achievement
Brain Sci. 2018, 8(12), 210; https://doi.org/10.3390/brainsci8120210
Received: 22 October 2018 / Revised: 26 November 2018 / Accepted: 28 November 2018 / Published: 29 November 2018
PDF Full-text (1295 KB) | HTML Full-text | XML Full-text
Abstract
Recent evidence suggests the existence of shared neural resources for rhythm processing in language and music. Such overlaps could be the basis of the facilitating effect of regular musical rhythm on spoken word processing previously reported for typical children and adults, as well as adults with Parkinson’s disease and children with developmental language disorders. The present study builds upon these previous findings by examining whether non-linguistic rhythmic priming also influences visual word processing, and the extent to which such a cross-modal priming effect of rhythm is related to individual differences in musical aptitude and reading skills. An electroencephalogram (EEG) was recorded while participants listened to a rhythmic tone prime, followed by a visual target word with a stress pattern that either matched or mismatched the rhythmic structure of the auditory prime. Participants were also administered standardized assessments of musical aptitude and reading achievement. Event-related potentials (ERPs) elicited by target words with a mismatching stress pattern showed an increased fronto-central negativity. Additionally, the size of the negative effect correlated with individual differences in musical rhythm aptitude and reading comprehension skills. Results support the existence of shared neurocognitive resources for linguistic and musical rhythm processing, and have important implications for the use of rhythm-based activities for reading interventions. Full article
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)

Open Access Article
Early Influence of Musical Abilities and Working Memory on Speech Imitation Abilities: Study with Pre-School Children
Brain Sci. 2018, 8(9), 169; https://doi.org/10.3390/brainsci8090169
Received: 25 May 2018 / Revised: 27 August 2018 / Accepted: 29 August 2018 / Published: 1 September 2018
Cited by 1 | PDF Full-text (570 KB) | HTML Full-text | XML Full-text
Abstract
Musical aptitude and language talent are highly intertwined when it comes to phonetic language ability. Research on pre-school children’s musical abilities and foreign language abilities is rare but gives further insight into the relationship between language and musical aptitude. We tested pre-school children’s abilities to imitate unknown languages, to remember strings of digits, to sing, and to discriminate musical statements, as well as their intrinsic (spontaneous) singing behavior (“singing-lovers versus singing nerds”). The findings revealed that having an ear for music is linked to phonetic language abilities. The results of this investigation show that working memory capacity and phonetic aptitude are linked to high musical perception and production ability already at around the age of five. This suggests that music and (foreign) language learning capacity may be linked from childhood on. Furthermore, the findings emphasize the possibility that early-developed abilities may be responsible for individual differences in both linguistic and musical performance. Full article
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)

Review


Open Access Review
Pushing the Envelope: Developments in Neural Entrainment to Speech and the Biological Underpinnings of Prosody Perception
Brain Sci. 2019, 9(3), 70; https://doi.org/10.3390/brainsci9030070
Received: 31 December 2018 / Revised: 8 March 2019 / Accepted: 15 March 2019 / Published: 22 March 2019
PDF Full-text (1371 KB) | HTML Full-text | XML Full-text
Abstract
Prosodic cues in speech are indispensable for comprehending a speaker’s message, recognizing emphasis and emotion, parsing segmental units, and disambiguating syntactic structures. While it is commonly accepted that prosody provides a fundamental service to higher-level features of speech, the neural underpinnings of prosody processing are not clearly defined in the cognitive neuroscience literature. Many recent electrophysiological studies have examined speech comprehension by measuring neural entrainment to the speech amplitude envelope, using a variety of methods including phase-locking algorithms and stimulus reconstruction. Here we review recent evidence for neural tracking of the speech envelope and demonstrate the importance of prosodic contributions to the neural tracking of speech. Prosodic cues may offer a foundation for supporting neural synchronization to the speech envelope, which scaffolds linguistic processing. We argue that prosody has an inherent role in speech perception, and future research should fill the gap in our knowledge of how prosody contributes to speech envelope entrainment. Full article
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)
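To make the envelope-tracking methods surveyed here concrete: a common operationalization extracts the amplitude envelope of the speech signal via the Hilbert transform and relates it to the EEG, for instance with magnitude-squared coherence. Below is a minimal sketch with synthetic signals and assumed sampling rates; it illustrates the general approach rather than any reviewed study's pipeline.

```python
import numpy as np
from scipy.signal import hilbert, resample, coherence

fs_audio, fs_eeg = 16000, 250                     # assumed sampling rates (Hz)
speech = np.random.randn(fs_audio * 30)           # stand-in for 30 s of speech audio
eeg = np.random.randn(fs_eeg * 30)                # stand-in for one EEG channel

# Amplitude envelope: magnitude of the analytic (Hilbert-transformed) speech signal,
# downsampled to the EEG sampling rate.
envelope = np.abs(hilbert(speech))
envelope = resample(envelope, eeg.size)

# Magnitude-squared coherence between envelope and EEG; entrainment studies typically
# focus on the delta/theta range (~1-8 Hz), where prosodic and syllabic rates fall.
freqs, coh = coherence(envelope, eeg, fs=fs_eeg, nperseg=fs_eeg * 4)
band = (freqs >= 1) & (freqs <= 8)
print("Mean 1-8 Hz cerebro-acoustic coherence:", coh[band].mean())
```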

Open Access Review
Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music
Brain Sci. 2019, 9(3), 53; https://doi.org/10.3390/brainsci9030053
Received: 23 January 2019 / Revised: 18 February 2019 / Accepted: 26 February 2019 / Published: 1 March 2019
PDF Full-text (332 KB) | HTML Full-text | XML Full-text
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory rich stimuli, both at the level of production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener’s attention to the auditory material itself. Sound, therefore, can be considered as the meeting point between speech and music and the question can be raised as to the shared components between the interpretation of sound in the domain of speech and music. In order to answer these questions, this paper elaborates on the following topics: (i) The relationship between speech and music with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect burst in communicative sound comprehension; and (v) the acoustic features of affective sound with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness. Full article
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)
Open Access Review
Neurophysiological Markers of Statistical Learning in Music and Language: Hierarchy, Entropy and Uncertainty
Brain Sci. 2018, 8(6), 114; https://doi.org/10.3390/brainsci8060114
Received: 10 May 2018 / Revised: 14 June 2018 / Accepted: 18 June 2018 / Published: 19 June 2018
Cited by 6 | PDF Full-text (2086 KB) | HTML Full-text | XML Full-text
Abstract
Statistical learning (SL) is a method of learning based on the transitional probabilities embedded in sequential phenomena such as music and language. It has been considered an implicit and domain-general mechanism that is innate in the human brain and that functions independently of the intention to learn and of awareness of what has been learned. SL is an interdisciplinary notion that incorporates information technology, artificial intelligence, musicology, and linguistics, as well as psychology and neuroscience. A body of recent work has suggested that SL can be reflected in neurophysiological responses within the framework of information theory. This paper reviews a range of work on SL in adults and children that suggests overlapping and independent neural correlates in music and language, and that points to impairments of SL. Furthermore, this article discusses the relationships between the order of transitional probabilities (TPs) (i.e., the hierarchy of local statistics) and entropy (i.e., global statistics) with regard to SL strategies in the human brain; argues for the importance of information-theoretic approaches to understanding domain-general, higher-order, and global SL covering both real-world music and language; and proposes promising approaches for applications in therapy and pedagogy from the perspectives of psychology, neuroscience, computational studies, musicology, and linguistics. Full article
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)
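The two quantities this review contrasts, local transitional probabilities and global entropy, can be illustrated on a toy sequence. A minimal sketch follows; the syllable stream below is invented for demonstration and does not come from the reviewed studies.

```python
from collections import Counter
import math

# Toy "syllable stream" of the kind used in statistical-learning experiments (invented).
sequence = list("ABCABCABDABCABD")

# First-order transitional probabilities: P(next | current).
pair_counts = Counter(zip(sequence[:-1], sequence[1:]))
first_counts = Counter(sequence[:-1])
tp = {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}
print("P(B|A) =", tp[("A", "B")])      # high TP: 'B' reliably follows 'A'

# Global (zeroth-order) Shannon entropy of the element distribution, in bits.
counts = Counter(sequence)
total = sum(counts.values())
entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
print("Sequence entropy (bits):", round(entropy, 3))
```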

Planned Papers

The list below represents only planned manuscripts; some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Dissociation of spoken pitch and timing from sung pitch and rhythm in dysprosodic subjects following focal brain damage in two trained singers

 Diana Van Lancker Sidtis; Ji Sook Ahn; Yoonji Kim

 Singing and speech prosody share elemental acoustic components (pitch, timing, and rhythm) which may be differentially impacted by brain damage. Investigations of speech prosody and singing performance in persons who have suffered a stroke are challenging, requiring reliable measures and documentation of participants’ premorbid musical abilities. The effects of focal brain damage on timing, rhythm and fundamental frequency (F0, also pitch) in speech production and in singing were retrospectively investigated in two persons diagnosed with severely dysprosodic speech due to cerebral vascular accidents; both were experienced singers. Participant 1 suffered a large right hemisphere (RH) infarct and Participant 2 sustained a right-sided ischemic subcortical lesion, documented by CT scan and PET. Pitch and timing in lexical contrasts were acoustically analyzed, and accuracies of pitch and rhythm in personally familiar songs were measured and rated by listeners. Both study participants produced lexical contrasts with non-normal pitch trajectories but used timing relations that approached normal values. For Participant 1, familiar songs were sung with consistently incorrect tones, but accurate rhythm. Participant 2 sang with correct tones and rhythm. These case studies show a dissociation between pitch control and timing abilities for speech versus singing. Timing and rhythm were intact in both vocal tasks, while pitch control was differentially affected in speech as compared with singing. A proposal of the role of lesion location in retained and deficient prosodic components is offered. Better understanding of these dissociations may lead to better understanding of the relations between speaking and singing in cerebral function and may assist in assessment and treatment of dysprosody.
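As an illustration of the kind of acoustic pitch measurement described above, fundamental frequency (F0) can be estimated frame by frame from the autocorrelation of the waveform. The sketch below runs on a synthetic signal; the sampling rate and F0 search range are generic assumptions, not the authors' measurement protocol.

```python
import numpy as np

fs = 16000                                          # assumed sampling rate (Hz)
t = np.arange(0, 0.04, 1 / fs)                      # one 40-ms analysis frame
frame = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)  # synthetic ~220 Hz voicing

# Autocorrelation of the frame; the lag of its strongest peak gives the pitch period.
ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]

# Restrict the search to lags corresponding to a plausible F0 range (75-500 Hz).
min_lag, max_lag = int(fs / 500), int(fs / 75)
best_lag = min_lag + np.argmax(ac[min_lag:max_lag])
print("Estimated F0 (Hz):", fs / best_lag)
```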

Brain Sci. EISSN 2076-3425. Published by MDPI AG, Basel, Switzerland.