Open Access Article

How the Brain Understands Spoken and Sung Sentences

1 ICONE-Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
2 Department for Medical Psychology, Medical University of Innsbruck, 6020 Innsbruck, Austria
3 Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
* Author to whom correspondence should be addressed.
Brain Sci. 2020, 10(1), 36; https://doi.org/10.3390/brainsci10010036
Received: 29 November 2019 / Revised: 19 December 2019 / Accepted: 6 January 2020 / Published: 8 January 2020
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)
The present study investigates whether meaning is extracted similarly from spoken and sung sentences. For this purpose, subjects listened to semantically correct and incorrect sentences while performing a correctness judgement task. To examine the underlying neural mechanisms, a multi-methodological approach was chosen, combining two neuroscientific methods with behavioral data. In particular, fast dynamic changes reflected in the semantically associated N400 component of the electroencephalogram (EEG) were assessed simultaneously with the topographically more fine-grained vascular signals acquired by functional near-infrared spectroscopy (fNIRS). EEG results revealed a larger N400 for incorrect compared to correct sentences in both the spoken and the sung modality. However, the N400 was delayed for sung sentences, potentially due to their longer duration. fNIRS results revealed larger activations for spoken compared to sung sentences, irrespective of semantic correctness, in predominantly left-hemispheric areas, potentially reflecting greater familiarity with spoken material. Furthermore, fNIRS revealed widespread activation for correct compared to incorrect sentences, irrespective of modality, potentially indicating successful processing of sentence meaning. The combined results indicate similar semantic processing in speech and song.
Keywords: semantics; speech comprehension; singing; N400; event-related brain potentials (ERPs); functional near-infrared spectroscopy (fNIRS)
Figure 1
MDPI and ACS Style

Rossi, S.; Gugler, M.F.; Rungger, M.; Galvan, O.; Zorowka, P.G.; Seebacher, J. How the Brain Understands Spoken and Sung Sentences. Brain Sci. 2020, 10, 36.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
