Special Issue "Advances in the Neurocognition of Music and Language"

A special issue of Brain Sciences (ISSN 2076-3425). This special issue belongs to the section "Cognitive Neuroscience".

Deadline for manuscript submissions: closed (15 August 2019).

Special Issue Editors

Dr. Daniela Sammler
Guest Editor
Otto Hahn Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany
Interests: music cognition and the brain; neurocognitive overlap of music and language; neural bases of prosody
Dr. Stefan Elmer
Guest Editor
Auditory Research Group Zurich, University of Zurich, Institute of Psychology, Binzmuehlestrasse 14, Box 1, 8050 Zurich, Switzerland
Interests: music training and transfer effects; spectrotemporal speech processing; language learning and expertise; multilingualism; speech, music and cognitive functions; functional and structural plasticity

Special Issue Information

Dear Colleagues,

Neurocomparative music and language research has seen major advances over the past two decades: the Shared Syntactic Integration Resource Hypothesis (SSIRH) has come of age and fully matured, as has the Modularity of Music Processing, and yet research on the relationship between music and language has never lost its appeal. On the contrary, the field has left no stone unturned in exploring the neurofunctional similarities of syntax and rhythm, pitch and meaning, and their emotional and communicative power, in ontogeny and phylogeny. Research on perceptual and cognitive transfer between domains has recognized the signs of the times by exploring, among other topics, learning and cognitive reserve in aging and the benefits of neural entrainment. Methods have been refined and the explanatory value of neural overlap has been questioned, all to draw a more nuanced picture of what is shared and what is not, and of what this knowledge offers practitioners. The goal of this Special Issue is to take a step back and showcase persistent neural analogies between musical and linguistic information processing and their entwined organization in human cognition, to scrutinize the limits of neural overlap and sharing, and to assess the applicability of the combined knowledge in pedagogy and therapy.

Dr. Daniela Sammler
Dr. Stefan Elmer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Brain Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Music, speech and language
  • Brain
  • Neural overlap
  • Perception and cognition
  • Learning and oscillatory dynamics
  • Therapeutic applications, cognitive reserve, and aging

Published Papers (16 papers)


Research

Jump to: Review

Open Access Article
Processing of Rhythm in Speech and Music in Adult Dyslexia
Brain Sci. 2020, 10(5), 261; https://doi.org/10.3390/brainsci10050261 - 30 Apr 2020
Abstract
Recent studies have suggested that musical rhythm perception ability can affect the phonological system. The most prevalent causal account for developmental dyslexia is the phonological deficit hypothesis. As rhythm is a subpart of phonology, we hypothesized that reading deficits in dyslexia are associated with rhythm processing in speech and in music. In a rhythmic grouping task, adults with diagnosed dyslexia and age-matched controls listened to speech streams with syllables alternating in intensity, duration, or neither, and indicated whether they perceived a strong-weak or weak-strong rhythm pattern. Additionally, their reading and musical rhythm abilities were measured. Results showed that adults with dyslexia had lower musical rhythm abilities than adults without dyslexia. Moreover, lower musical rhythm ability was associated with lower reading ability in dyslexia. However, speech grouping by adults with dyslexia was not impaired when musical rhythm perception ability was controlled: like adults without dyslexia, they showed consistent preferences. Nonetheless, rhythmic grouping was predicted by musical rhythm perception ability, irrespective of dyslexia. The results suggest associations among musical rhythm perception ability, speech rhythm perception, and reading ability. This highlights the importance of considering individual variability to better understand dyslexia and raises the possibility that musical rhythm perception ability is a key to phonological and reading acquisition.
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)

Open Access Article
Infants Segment Words from Songs—An EEG Study
Brain Sci. 2020, 10(1), 39; https://doi.org/10.3390/brainsci10010039 - 09 Jan 2020
Abstract
Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.

Open Access Article
How the Brain Understands Spoken and Sung Sentences
Brain Sci. 2020, 10(1), 36; https://doi.org/10.3390/brainsci10010036 - 08 Jan 2020
Abstract
The present study investigates whether meaning is similarly extracted from spoken and sung sentences. For this purpose, subjects listened to semantically correct and incorrect sentences while performing a correctness judgement task. In order to examine the underlying neural mechanisms, a multi-methodological approach was chosen, combining two neuroscientific methods with behavioral data. In particular, fast dynamic changes reflected in the semantically associated N400 component of the electroencephalography (EEG) were simultaneously assessed with the topographically more fine-grained vascular signals acquired by functional near-infrared spectroscopy (fNIRS). EEG results revealed a larger N400 for incorrect compared to correct sentences in both spoken and sung sentences. However, the N400 was delayed for sung sentences, potentially due to the longer sentence duration. fNIRS results revealed larger activations for spoken compared to sung sentences irrespective of semantic correctness at predominantly left-hemispheric areas, potentially suggesting a greater familiarity with spoken material. Furthermore, the fNIRS revealed a widespread activation for correct compared to incorrect sentences irrespective of modality, potentially indicating a successful processing of sentence meaning. The combined results indicate similar semantic processing in speech and song.

Open Access Article
Attention Modulates Electrophysiological Responses to Simultaneous Music and Language Syntax Processing
Brain Sci. 2019, 9(11), 305; https://doi.org/10.3390/brainsci9110305 - 01 Nov 2019
Abstract
Music and language are hypothesized to engage the same neural resources, particularly at the level of syntax processing. Recent reports suggest that attention modulates the shared processing of music and language, but the time-course of the effects of attention on music and language syntax processing is still unclear. In this EEG study we vary top-down attention to language and music, while manipulating the syntactic structure of simultaneously presented musical chord progressions and garden-path sentences in a modified rapid serial visual presentation paradigm. The Early Right Anterior Negativity (ERAN) was observed in response to both attended and unattended musical syntax violations. In contrast, an N400 was only observed in response to attended linguistic syntax violations, and a P3/P600 only in response to attended musical syntax violations. Results suggest that early processing of musical syntax, as indexed by the ERAN, is relatively automatic; however, top-down allocation of attention changes the processing of syntax in both music and language at later stages of cognitive processing.

Open Access Article
Is It Speech or Song? Effect of Melody Priming on Pitch Perception of Modified Mandarin Speech
Brain Sci. 2019, 9(10), 286; https://doi.org/10.3390/brainsci9100286 - 22 Oct 2019
Abstract
Tonal languages make use of pitch variation for distinguishing lexical semantics, and their melodic richness seems comparable to that of music. The present study investigated a novel priming effect of melody on the pitch processing of Mandarin speech. When a spoken Mandarin utterance is preceded by a musical melody, which mimics the melody of the utterance, the listener is likely to perceive this utterance as song. We used functional magnetic resonance imaging to examine the neural substrates of this speech-to-song transformation. Pitch contours of spoken utterances were modified so that these utterances can be perceived as either speech or song. When modified speech (target) was preceded by a musical melody (prime) that mimics the speech melody, a task of judging the melodic similarity between the target and prime was associated with increased activity in the inferior frontal gyrus (IFG) and superior/middle temporal gyrus (STG/MTG) during target perception. We suggest that the pars triangularis of the right IFG may allocate attentional resources to the multi-modal processing of speech melody, and the STG/MTG may integrate the phonological and musical (melodic) information of this stimulus. These results are discussed in relation to subvocal rehearsal, a speech-to-song illusion, and song perception.

Open Access Article
Event-Related Potential Evidence of Implicit Metric Structure during Silent Reading
Brain Sci. 2019, 9(8), 192; https://doi.org/10.3390/brainsci9080192 - 08 Aug 2019
Abstract
Under the Implicit Prosody Hypothesis, readers generate prosodic structures during silent reading that can direct their real-time interpretations of the text. In the current study, we investigated the processing of implicit meter by recording event-related potentials (ERPs) while participants read a series of 160 rhyming couplets, where the rhyme target was always a stress-alternating noun–verb homograph (e.g., permit, which is pronounced PERmit as a noun and perMIT as a verb). The target had a strong–weak or weak–strong stress pattern, which was either consistent or inconsistent with the stress expectation generated by the couplet. Inconsistent strong–weak targets elicited negativities between 80–155 ms and 325–375 ms relative to consistent strong–weak targets; inconsistent weak–strong targets elicited a positivity between 365–435 ms relative to consistent weak–strong targets. These results are largely consistent with effects of metric violations during listening, demonstrating that implicit prosodic representations are similar to explicit prosodic representations.

Open Access Article
Domain-Specific Expectations in Music Segmentation
Brain Sci. 2019, 9(7), 169; https://doi.org/10.3390/brainsci9070169 - 17 Jul 2019
Cited by 1
Abstract
The acoustic cues that guide the assignment of phrase boundaries in music (pauses and pitch movements) overlap with those that are known for speech prosody. Based on this, researchers have focused on highlighting the similarities and neural resources shared between music and speech prosody segmentation. The possibility that music-specific expectations add to acoustic cues in driving the segmentation of music into phrases could weaken this bottom-up view, but it remains underexplored. We tested for domain-specific expectations in music segmentation by comparing the segmentation of the same set of ambiguous stimuli under two different instructions: stimuli were either presented as speech prosody or as music. We measured how segmentation differed, in each instruction group, from a common reference (natural speech), thus focusing on how instruction affected delexicalization effects (natural speech vs. transformed versions with no phonetic content) on segmentation. We saw interactions between delexicalization and instruction on most segmentation indices, suggesting that segmentation has a music mode distinct from a speech prosody mode. Our findings highlight the importance of top-down influences in segmentation, and they contribute to rethinking the analogy between music and speech prosody.

Open Access Article
Poor Synchronization to Musical Beat Generalizes to Speech
Brain Sci. 2019, 9(7), 157; https://doi.org/10.3390/brainsci9070157 - 04 Jul 2019
Cited by 2
Abstract
The rhythmic nature of speech may recruit entrainment mechanisms in a manner similar to music. In the current study, we tested the hypothesis that individuals who display a severe deficit in synchronizing their taps to a musical beat (called beat-deaf here) would also experience difficulties entraining to speech. The beat-deaf participants and their matched controls were required to align taps with the perceived regularity in the rhythm of naturally spoken, regularly spoken, and sung sentences. The results showed that beat-deaf individuals synchronized their taps less accurately than the control group across conditions. In addition, participants from both groups exhibited more inter-tap variability to natural speech than to regularly spoken and sung sentences. The findings support the idea that acoustic periodicity is a major factor in domain-general entrainment to both music and speech. Therefore, a beat-finding deficit may affect periodic auditory rhythms in general, not just those for music.

Open Access Article
Music Training Positively Influences the Preattentive Perception of Voice Onset Time in Children with Dyslexia: A Longitudinal Study
Brain Sci. 2019, 9(4), 91; https://doi.org/10.3390/brainsci9040091 - 21 Apr 2019
Cited by 2
Abstract
Previous results showed a positive influence of music training on linguistic abilities at both attentive and preattentive levels. Here, we investigate whether six months of active music training is more efficient than painting training to improve the preattentive processing of phonological parameters based on durations that are often impaired in children with developmental dyslexia (DD). Results were also compared to a control group of Typically Developing (TD) children matched on reading age. We used a Test–Training–Retest procedure and analysed the Mismatch Negativity (MMN) and the N1 and N250 components of the Event-Related Potentials to syllables that differed in Voice Onset Time (VOT), vowel duration, and vowel frequency. Results were clear-cut in showing a normalization of the preattentive processing of VOT in children with DD after music training but not after painting training. They also revealed increased N250 amplitude to duration deviant stimuli in children with DD after music but not painting training, and no training effect on the preattentive processing of frequency. These findings are discussed in view of recent theories of dyslexia pointing to deficits in processing the temporal structure of speech. They clearly encourage the use of active music training for the rehabilitation of children with language impairments.

Open Access Article
Impaired Recognition of Metrical and Syntactic Boundaries in Children with Developmental Language Disorders
Brain Sci. 2019, 9(2), 33; https://doi.org/10.3390/brainsci9020033 - 05 Feb 2019
Cited by 4
Abstract
In oral language, syntactic structure is cued in part by phrasal metrical hierarchies of acoustic stress patterns. For example, many children’s texts use prosodic phrasing comprising tightly integrated hierarchies of metre and syntax to highlight the phonological and syntactic structure of language. Children with developmental language disorders (DLDs) are relatively insensitive to acoustic stress. Here, we disrupted the coincidence of metrical and syntactic boundaries as cued by stress patterns in children’s texts so that metrical and/or syntactic phrasing conflicted. We tested three groups of children: children with DLD, age-matched typically developing controls (AMC) and younger language-matched controls (YLC). Children with DLDs and younger, language-matched controls were poor at spotting both metrical and syntactic disruptions. The data are interpreted within a prosodic phrasing hypothesis of DLD based on impaired acoustic processing of speech rhythm.

Open Access Article
Electrical Brain Responses Reveal Sequential Constraints on Planning during Music Performance
Brain Sci. 2019, 9(2), 25; https://doi.org/10.3390/brainsci9020025 - 28 Jan 2019
Cited by 1
Abstract
Elements in speech and music unfold sequentially over time. To produce sentences and melodies quickly and accurately, individuals must plan upcoming sequence events, as well as monitor outcomes via auditory feedback. We investigated the neural correlates of sequential planning and monitoring processes by manipulating auditory feedback during music performance. Pianists performed isochronous melodies from memory at an initially cued rate while their electroencephalogram was recorded. Pitch feedback was occasionally altered to match either an immediately upcoming Near-Future pitch (next sequence event) or a more distant Far-Future pitch (two events ahead of the current event). Near-Future, but not Far-Future, altered feedback perturbed the timing of pianists’ performances, suggesting greater interference of Near-Future sequential events with current planning processes. Near-Future feedback triggered a greater reduction in auditory sensory suppression (enhanced response) than Far-Future feedback, reflected in the P2 component elicited by the pitch event following the unexpected pitch change. Greater timing perturbations were associated with enhanced cortical sensory processing of the pitch event following the Near-Future altered feedback. Both types of feedback alterations elicited feedback-related negativity (FRN) and P3a potentials and amplified spectral power in the theta frequency range. These findings suggest similar constraints on producers’ sequential planning to those reported in speech production.

Open Access Communication
Cross-Modal Priming Effect of Rhythm on Visual Word Recognition and Its Relationships to Music Aptitude and Reading Achievement
Brain Sci. 2018, 8(12), 210; https://doi.org/10.3390/brainsci8120210 - 29 Nov 2018
Cited by 2
Abstract
Recent evidence suggests the existence of shared neural resources for rhythm processing in language and music. Such overlaps could be the basis of the facilitating effect of regular musical rhythm on spoken word processing previously reported for typical children and adults, as well as adults with Parkinson’s disease and children with developmental language disorders. The present study builds upon these previous findings by examining whether non-linguistic rhythmic priming also influences visual word processing, and the extent to which such a cross-modal priming effect of rhythm is related to individual differences in musical aptitude and reading skills. An electroencephalogram (EEG) was recorded while participants listened to a rhythmic tone prime, followed by a visual target word with a stress pattern that either matched or mismatched the rhythmic structure of the auditory prime. Participants were also administered standardized assessments of musical aptitude and reading achievement. Event-related potentials (ERPs) elicited by target words with a mismatching stress pattern showed an increased fronto-central negativity. Additionally, the size of the negative effect correlated with individual differences in musical rhythm aptitude and reading comprehension skills. Results support the existence of shared neurocognitive resources for linguistic and musical rhythm processing, and have important implications for the use of rhythm-based activities for reading interventions.

Open Access Article
Early Influence of Musical Abilities and Working Memory on Speech Imitation Abilities: Study with Pre-School Children
Brain Sci. 2018, 8(9), 169; https://doi.org/10.3390/brainsci8090169 - 01 Sep 2018
Cited by 4
Abstract
Musical aptitude and language talent are highly intertwined when it comes to phonetic language ability. Research on pre-school children’s musical abilities and foreign language abilities is rare but gives further insights into the relationship between language and musical aptitude. We tested pre-school children’s abilities to imitate unknown languages, to remember strings of digits, to sing, to discriminate musical statements, and their intrinsic (spontaneous) singing behavior (“singing-lovers versus singing nerds”). The findings revealed that having an ear for music is linked to phonetic language abilities. The results of this investigation show that working memory capacity and phonetic aptitude are linked to high musical perception and production ability already at around the age of 5. This suggests that music and (foreign) language learning capacity may be linked from childhood on. Furthermore, the findings put emphasis on the possibility that early developed abilities may be responsible for individual differences in both linguistic and musical performances.

Review

Jump to: Research

Open Access Review
Pushing the Envelope: Developments in Neural Entrainment to Speech and the Biological Underpinnings of Prosody Perception
Brain Sci. 2019, 9(3), 70; https://doi.org/10.3390/brainsci9030070 - 22 Mar 2019
Cited by 3
Abstract
Prosodic cues in speech are indispensable for comprehending a speaker’s message, recognizing emphasis and emotion, parsing segmental units, and disambiguating syntactic structures. While it is commonly accepted that prosody provides a fundamental service to higher-level features of speech, the neural underpinnings of prosody processing are not clearly defined in the cognitive neuroscience literature. Many recent electrophysiological studies have examined speech comprehension by measuring neural entrainment to the speech amplitude envelope, using a variety of methods including phase-locking algorithms and stimulus reconstruction. Here we review recent evidence for neural tracking of the speech envelope and demonstrate the importance of prosodic contributions to the neural tracking of speech. Prosodic cues may offer a foundation for supporting neural synchronization to the speech envelope, which scaffolds linguistic processing. We argue that prosody has an inherent role in speech perception, and future research should fill the gap in our knowledge of how prosody contributes to speech envelope entrainment.

Open Access Review
Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music
Brain Sci. 2019, 9(3), 53; https://doi.org/10.3390/brainsci9030053 - 01 Mar 2019
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory rich stimuli, both at the level of production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener’s attention to the auditory material itself. Sound, therefore, can be considered as the meeting point between speech and music, and the question can be raised as to the shared components between the interpretation of sound in the domains of speech and music. In order to answer these questions, this paper elaborates on the following topics: (i) the relationship between speech and music with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect bursts in communicative sound comprehension; and (v) the acoustic features of affective sound with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.

Open Access Review
Neurophysiological Markers of Statistical Learning in Music and Language: Hierarchy, Entropy and Uncertainty
Brain Sci. 2018, 8(6), 114; https://doi.org/10.3390/brainsci8060114 - 19 Jun 2018
Cited by 11
Abstract
Statistical learning (SL) is a method of learning based on the transitional probabilities embedded in sequential phenomena such as music and language. It has been considered an implicit and domain-general mechanism that is innate in the human brain and that functions independently of intention to learn and awareness of what has been learned. SL is an interdisciplinary notion that incorporates information technology, artificial intelligence, musicology, and linguistics, as well as psychology and neuroscience. A body of recent studies has suggested that SL can be reflected in neurophysiological responses based on the framework of information theory. This paper reviews a range of work on SL in adults and children that suggests overlapping and independent neural correlates in music and language, and that points to impairments of SL. Furthermore, this article discusses the relationships between the order of transitional probabilities (TPs) (i.e., the hierarchy of local statistics) and entropy (i.e., global statistics) regarding SL strategies in the human brain; argues for the importance of information-theoretic approaches to understanding domain-general, higher-order, and global SL covering both real-world music and language; and proposes promising approaches for the application of therapy and pedagogy from various perspectives of psychology, neuroscience, computational studies, musicology, and linguistics.
