Review

Non-Invasive Mapping of the Neuronal Networks of Language

by Andrew C. Papanicolaou
Department of Pediatrics, Division of Pediatric Neurology, College of Medicine, University of Tennessee Health Science Center, Memphis, TN 38013, USA
Brain Sci. 2023, 13(10), 1457; https://doi.org/10.3390/brainsci13101457
Submission received: 7 August 2023 / Revised: 13 September 2023 / Accepted: 5 October 2023 / Published: 13 October 2023
(This article belongs to the Special Issue Neurofunctional Basis of Language Processing)

Abstract

This review consists of three main sections. In the first, the Introduction, the main theories of the neuronal mediation of linguistic operations, derived mostly from studies of the effects of focal lesions on linguistic performance, are summarized. These models furnish the conceptual framework on which the design of subsequent functional neuroimaging investigations is based. In the second section, the methods of functional neuroimaging, especially those of functional Magnetic Resonance Imaging (fMRI) and of Magnetoencephalography (MEG), are detailed along with the specific activation tasks employed in presurgical functional mapping. The reliability of these non-invasive methods and their validity, judged against the results of the invasive methods, namely, the “Wada” procedure and Cortical Stimulation Mapping (CSM), is assessed and their use in presurgical mapping is justified. In the third and final section, the applications of fMRI and MEG in basic research are surveyed in six sub-sections, dealing with the neuronal networks for (1) acoustic and phonological, (2) semantic, (3) syntactic and (4) prosodic operations, (5) sign language and (6) reading and the mechanisms of dyslexia.

1. Introduction

Mapping the neuronal networks of language has been a popular scientific pursuit since functional neuroimaging methods became adequate for the task. In addition to its theoretical value, such mapping is of considerable practical utility as an adjunct to, or a replacement of, the traditional invasive presurgical mapping techniques, namely the Intracarotid Sodium Amytal test (the Wada procedure) for assessing the lateralization of the language networks and the direct cortical stimulation mapping (CSM) method for localizing parts of such networks in the cortex of the language-dominant hemisphere (for reviews, see [1,2,3,4]).
Language, conceived as a function, is not a monolithic entity but consists of at least four interrelated yet distinct subsidiary operations, each apparently associated with its own neuronal network. These are the acoustic, the phonological, the semantic and the syntactic operations. The first analyzes the acoustic features of speech sounds; the second, those features that are specific to human speech; and the third is assumed to invest these signals with meaning in ways that remain largely conjectural. Finally, the fourth provides the means for arranging words in serial order so as to constitute grammatically correct sentences, as well as for arranging phonemes in the requisite serial order for the construction (and comprehension) of words.
It is nearly certain that not all four of these operations are lateralized. The acoustic processing of both speech and non-speech stimuli is generally believed to be mediated by neuronal networks in the auditory cortex in Heschl’s Gyrus (HG) in the mid-portion of the superior temporal gyrus (STG) in the left and in the right hemispheres. In contrast, speech production (which includes the temporal arrangement of articulatory gestures for producing words as well as the arrangement of words for producing sentences) is said to involve the Inferior Frontal Gyrus (IFG), comprising the pars opercularis and pars triangularis, of the left hemisphere only. There is also considerable agreement that the IFG may be necessary for the comprehension of sentences, although its contribution may not be necessary for the comprehension of isolated words [5].
Beyond these points of general agreement, most other hypotheses regarding cortical organization for language vary among theorists. According to the dominant model that has emerged over the years, mostly on the basis of focal brain lesion studies, phonological processing requires the contribution of the posterior part of the STG (pSTG) of the left hemisphere only [6,7,8,9]. This view has been challenged, however, partly on the basis of the results of some focal lesion studies [10,11,12,13,14] and partly on the basis of some functional neuroimaging data [15,16]. Instead, a “dual-route” model has been proposed, according to which the perception of speech sounds engages networks in both the left and the right pSTG, as well as in the anterior STG, e.g., [8,17,18,19,20,21,22].
The situation with respect to the lateralization of semantic operations is even less settled. According to some investigators, semantic operations are mediated by networks in the region that encompasses the pSTG, a portion of the Middle Temporal Gyrus (MTG) and the Angular and Supramarginal gyri (AG and SMG, respectively) of the left cerebral hemisphere, that is, the classical “Wernicke’s area.” This view, which has been widely supported by clinical evidence that lesions in this area disrupt the comprehension of both words and sentences [6,23,24,25,26,27,28], has been challenged by dementia data [29,30,31,32,33,34,35] as well as focal lesion data [36]. These data indicate that whereas Wernicke’s area may appear to be necessary for both phonological and semantic processes, it is, in fact, necessary for phonological analysis only. This analysis is said to result in the emergence of word-forms (i.e., activation patterns coding the phonological aspect of words); see [37,38]. In the context of this view, therefore, disruption of Wernicke’s area would be expected to interfere not only with phonological analysis but also with word and sentence comprehension, because the latter presuppose phonological analysis, and not because the area mediates semantic operations. Semantic operations, in this alternative view, are mediated by the left Anterior Temporal Lobe (ATL) instead. However, the precise role of the ATL is not at all clear. According to one hypothesis [5], the ATL is necessary for mediating the activation of semantic circuits distributed throughout the cortex on the basis of “word form”-related input that it receives from Wernicke’s region, or for activating word-form circuits on the basis of input from semantic circuits, as in the case of object naming tasks [35,36]. Therefore, according to this alternative to the dominant model, the disruption of the left ATL by electrical stimulation would be expected to disrupt word comprehension (and, consequently, sentence comprehension), as well as object naming. But the results of a cortical stimulation mapping (CSM) study [39] have pointed to the conclusion that the left fusiform gyrus (rather than the left ATL) is necessary for word and sentence comprehension. On the basis of yet another lesion study [40], it has also been proposed that comprehension of word meaning is mediated by the inferior temporal region and the ATL bilaterally. These data lead to the expectation that, during word comprehension tasks, all these areas, rather than only the left ATL or the left Wernicke’s region, should show increased activation [21].
It is in the settlement of this largely unsettled terrain of mixed facts and conjectures that functional neuroimaging is attempting to make its contribution. In the sections that follow, we will briefly review the nature of the functional neuroimaging methods employed in that capacity, and we will comment on the justification of their use by pointing out the compatibility of their results with those of the traditional invasive brain mapping methods. In a final section, the most notable results of the application of these methods to the study of linguistic networks will be summarized.

2. The Methods of Non-Invasive Mapping

2.1. Methods

There are four types of neurophysiological events that are typically captured in functional images, each associated with a different kind of electromagnetic signal. Three of the four types of brain events imaged are aspects of what is referred to as brain baseline “activity” and stimulus- or task-specific “activation”. These are, first, electrochemical signaling among neurons, which is imaged through magnetoencephalography (MEG); second, metabolic activity rates in sets of neurons, which are imaged by means of Positron Emission Tomography (PET); and third, the rates of blood flow supplying glucose and oxygen to these sets of neurons, imaged through PET but, most commonly, through functional Magnetic Resonance Imaging (fMRI). These three aspects of activity and activation are interrelated: local rates of signaling, specific to each brain structure at rest and during its engagement in behavioral and cognitive tasks, determine to a large extent the rates at which these structures utilize glucose and oxygen. These local metabolic rates, in turn, determine, along with other factors, the rates of local blood flow. This being the case, maps of brain activity and activation representing rates of signaling, metabolism or blood flow, recorded under the same circumstances, are expected to be quite similar, especially in normal individuals, if not also in patients sustaining vascular or other brain lesions. The fourth type of brain process is a prerequisite of brain activity and activation and changes sufficiently slowly to qualify as a time-invariant, that is, structural, aspect of brain physiology. It consists of the distributions of receptors for particular classes of neurotransmitters throughout the brain and is imaged using PET.
The baseline activity and the activation patterns imaged represent three basic types of entities: first, the “functional networks” that embody the mechanisms of particular functions or subsidiary operations. They are obtained in activation experiments and are typically captured using fMRI (there are many more fMRI scanners than all MEG and PET systems combined), less frequently with MEG, occasionally using blood flow PET and, rarely, via metabolic PET. Second, the activation patterns represent signals putatively specific to particular “products” of functions, such as different classes of behavior, i.e., percepts, sensations, thoughts or sentiments. Third, the visualized patterns of resting activity may represent features of physiology specific to different diagnostic categories, personality traits, demographic categories (e.g., age, gender) and relatively long-lasting “states” of the subject (e.g., vigilance, craving, anger). Here, the methods of choice are receptor PET (which can capture category-specific receptor distributions for particular neurotransmitters) and metabolic PET, although fMRI and MEG have also been used for this purpose.

2.2. Tasks

The activation tasks used fall into two main categories: the production and the perception of phonemes, morphemes or whole sentences, presented either auditorily or visually (reading). In detail, the tasks vary widely depending on the specific aims of particular neuroimaging studies. However, a more-or-less standardized set of tasks has been adopted in clinical settings for presurgical functional mapping, represented by the following typical examples used with MEG and fMRI.
The most common task for expressive language mapping with fMRI involves the covert production of words to visual cues (for detailed discussion, see also [41,42]). Several variants of this task are in use. In verb generation tasks, the patient is asked to silently produce action verbs (e.g., “cut, slice”) in response to a printed noun (e.g., “knife”) or to produce nouns that belong to a particular semantic category. Activation maps obtained during these tasks are compared to activation maps obtained during rest or during the passive viewing of meaningless letter strings. Alternative tasks often used for expressive language mapping involve covert object naming and sentence completion (e.g., [43]). A typical object naming task consists of a standard block design alternating between stimulus and rest, during which the patient is presented with line drawings of living and inanimate objects [44] with the explicit instruction to covertly name the object upon presentation.
Receptive language mapping with fMRI generally follows the protocol described by Binder et al. [45] involving a blocked semantic/tone decision task (see also [46]). In the context of this task, patients are presented with names of animals and are cued to make a button response regarding a particular attribute of the animal. This condition is alternated with a tone decision task where patients are presented with sequences of high- and low-frequency tones and are instructed to make a button response upon hearing a sequence containing two high tones.
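To make the block-design logic shared by these expressive and receptive protocols concrete, the following minimal Python sketch builds the expected BOLD time course for alternating task and control blocks. The block length, repetition time and the double-gamma response function are illustrative assumptions, not parameters taken from the studies cited above.

```python
# Minimal sketch of a block-design language-mapping paradigm (hypothetical timings):
# 30 s "task" blocks (e.g., covert verb generation or semantic decision) alternate
# with 30 s "control" blocks (rest, letter strings or tone decision); the resulting
# boxcar is convolved with a canonical hemodynamic response function (HRF).
import numpy as np
from scipy.stats import gamma

TR = 2.0            # repetition time in seconds (assumed)
n_cycles = 5        # task/control cycles (assumed)
block_len = 15      # volumes per block (30 s at TR = 2 s)

# Boxcar: 1 during task blocks, 0 during control blocks
boxcar = np.tile(np.r_[np.ones(block_len), np.zeros(block_len)], n_cycles)

# Simple double-gamma HRF approximation, sampled at the TR
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

# Expected BOLD response for the task condition
regressor = np.convolve(boxcar, hrf)[: boxcar.size]
print(regressor.round(2))
```

In practice, a regressor of this kind is entered into a general linear model and fit voxel-wise; the voxels whose time series it explains reliably constitute the activation map that is then compared across tasks or conditions.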
For receptive language mapping with MEG, variations of the following task [47] have typically been used: patients are given a recognition memory task for spoken words, and Event Related Fields (ERFs) are recorded for each word stimulus. The stimuli (target words that are repeated and foils that are presented once) are delivered binaurally at the patient’s outer ear through two plastic tubes terminating in ear inserts, with a variable interstimulus interval of 2.5–3.5 s. Patients are asked to lift their index finger whenever they recognize a repeated word. The responding hand is counterbalanced across sessions. On occasion, a variation of this protocol has been adopted in the visual modality, whereby target and distractor stimuli are presented visually, with identical task demands (e.g., [48]). As well as eliciting reliable receptive language-related activation, the visual variant of the task has been shown to engage the inferior frontal region (see [49]). Although MEG receptive language mapping has most readily been achieved using the aforementioned protocol, other paradigms (e.g., [50,51]) have been shown to be similarly useful in identifying the receptive language cortex. Expressive language mapping with MEG is typically performed in the context of a picture naming task (e.g., [52,53]) or a covert verb generation task (e.g., [50,54]).
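The ERF logic underlying this kind of MEG protocol can be sketched schematically as follows. The array sizes, sampling rate and epoch window below are hypothetical placeholders rather than values from the protocol itself, which is assumed only to supply word-onset times with a 2.5–3.5 s jittered interstimulus interval.

```python
# Schematic computation of an Event Related Field (ERF): MEG epochs time-locked to
# word onsets are averaged so that stimulus-locked activity adds up while background
# activity averages out. All shapes and rates here are illustrative assumptions.
import numpy as np

fs = 1000                                   # sampling rate in Hz (assumed)
n_sensors, n_samples = 248, 80_000          # ~80 s of data on a 248-sensor system (assumed)
rng = np.random.default_rng(0)
meg = rng.standard_normal((n_sensors, n_samples))   # placeholder for recorded MEG data

# Word-onset sample indices with a jittered 2.5-3.5 s interstimulus interval
onsets = np.cumsum(rng.uniform(2.5, 3.5, size=20) * fs).astype(int)

tmin, tmax = int(-0.1 * fs), int(0.8 * fs)  # epoch: 100 ms pre- to 800 ms post-stimulus
epochs = np.stack([meg[:, o + tmin : o + tmax] for o in onsets
                   if o + tmin >= 0 and o + tmax <= n_samples])

# Baseline-correct each epoch using the pre-stimulus interval, then average trials
baseline = epochs[:, :, : -tmin].mean(axis=2, keepdims=True)
erf = (epochs - baseline).mean(axis=0)      # shape: (n_sensors, epoch_length)
print(erf.shape)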

2.3. The Reliability and Validity of Non-Invasive Methods

The use of the non-invasive fMRI and MEG as adjuncts to, or as replacements of, the invasive CSM and Wada procedures in presurgical mapping, as well as their use in experimental investigations involving normal subjects, is justified by the fact that they provide results compatible with those of the invasive methods. The compatibility of the lateralization results of the Wada and fMRI methods has been attested in a number of studies with patient samples ranging from 7 to 100 individuals. Results range from perfect concordance [55,56,57] to nearly perfect [58,59,60,61,62] or substantial concordance [46,63,64,65,66,67,68].
Also high is the reported compatibility between the results of the Wada and MEG methods with respect to language lateralization, reaching 87% concordance in the study with the largest sample [47], with the rest of the studies reporting uniformly high agreement [48,50,54,69,70,71,72,73,74]. Far fewer studies have compared laterality estimates for memory between fMRI and the Wada test; these involve small samples and report high [75] but also low [76] concordance, and no such comparisons exist between MEG and the Wada test.
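The lateralization estimates being compared in these studies are typically summarized as a laterality index contrasting activation in homologous left and right hemisphere regions. A minimal sketch of one conventional formulation, LI = (L - R)/(L + R), is given below; the classification cut-off is an illustrative choice, not a value taken from any of the studies cited.

```python
# A common way to quantify language lateralization from fMRI or MEG maps:
# count supra-threshold voxels (or source estimates) in homologous left- and
# right-hemisphere language regions and form a laterality index (LI).
# The 0.2 cut-off is a conventional, illustrative threshold, not study-specific.

def laterality_index(left_count: int, right_count: int) -> float:
    """LI = (L - R) / (L + R); ranges from -1 (right) to +1 (left)."""
    total = left_count + right_count
    if total == 0:
        raise ValueError("No supra-threshold activity in either hemisphere.")
    return (left_count - right_count) / total

def classify(li: float, cutoff: float = 0.2) -> str:
    if li >= cutoff:
        return "left-dominant"
    if li <= -cutoff:
        return "right-dominant"
    return "bilateral"

# Example: 840 activated voxels in left and 160 in right perisylvian ROIs (hypothetical)
li = laterality_index(840, 160)
print(f"LI = {li:.2f} -> {classify(li)}")   # LI = 0.68 -> left-dominant
```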
Furthermore, the degree of concordance between CSM and MEG localization of language-specific cortical patches is quite high. One study involving a small patient sample [77] showed the compatibility of CSM and MEG for localizing receptive language-specific cortical sites, as did a second study involving 47 patients [78].
The question then arises as to how to interpret cases of discordant localization and lateralization results between the invasive and noninvasive methods. On the basis of the assumption that CSM and the Wada test are the gold standards, the tendency is to consider discordant estimates as failures of the noninvasive methods. However, when that assumption was put to empirical test, it became obvious that neither the CSM results nor those of the Wada test should be considered the gold standard any more than the results of the noninvasive methods should. For example, using CSM, Ojemann [79,80] reported extensive temporal lobe involvement in receptive language tasks such as naming, yet Sanai et al. [81], also using CSM, found a paucity of naming sites there. In addition to limited reliability, CSM also has limited predictive value with respect to postsurgical language and memory performance when the latter are operationally defined as performance in naming tasks. For example, Ojemann and Dodrill [82] reported an 80% predictive accuracy of CSM. Cervenka et al. [83] reported that, in a series of seven patients, language deficits were not anticipated by CSM data, partly because four had amygdalohippocampectomies and, more importantly, because three developed language deficits although CSM-determined language-specific loci were not resected. Cervenka et al. [84] reported that three of four patients operated on presented language deficits that were not predicted by CSM; Hamberger et al. [85] reported that, in their experience, sparing cortical sites that were CSM-positive (i.e., their stimulation interrupted naming) did not prevent postoperative word finding difficulties; and Hermann et al. [86], in their review of the results of eight centers involving 217 patients, concluded that neither intra- nor extraoperative CSM-guided surgeries are any more effective in reducing postoperative naming deficits than non-CSM-guided surgeries.
The efficacy of the Wada procedure is also lower than would be expected of a gold standard for predicting the likelihood of postoperative language and memory deficits. In fact, the assertion that it correctly assesses language laterality has not been empirically verified against surgical outcomes except in the context of comparing its efficacy against that of fMRI. Such comparisons show that fMRI may, in fact, have better predictive efficacy than the Wada test [87].
For nearly two decades, MEG and fMRI data have been used as adjunct means of assessing language and memory laterality and language localization. In some cases, MEG especially has been used to inform the placement of subdural grid electrodes [88,89,90] in the process of identifying the location and extent of the epileptogenic zones. The question, however, remains: can the noninvasive procedures, used in tandem, replace CSM and the Wada test? This issue has been debated for some time and, until it is resolved, the invasive procedures will remain, in many but not necessarily most clinicians’ minds, the gold standards for the presurgical evaluation of language.

3. Applications in Basic Research

Whereas the clinical applications of imaging language-related networks aim at disclosing the areas not to be interfered with during surgery, the aim of basic neuroimaging research, which is far more difficult to accomplish, is to specify what particular aspects of language perception and production the networks within each of these areas mediate.

3.1. Acoustic and Phonological Operations

In the following paragraphs, I will summarize the functional neuroimaging findings that are germane to the predictions of the various models alluded to previously regarding acoustic and phonological operations. Specifically, are the mechanisms of acoustic signal analysis located within the HG, and is the activation of the HG bilaterally symmetrical for both speech (syllables or words) and non-speech sounds? Is the dominant model correct in stating that the phonological analysis of heard speech (whether of syllables, words or phrases) is mediated by a network located in the left pSTG, or are the alternative notions correct in predicting the activation of both the left and the right pSTG for this operation? Is the IFG also implicated in phoneme perception, as the “analysis by synthesis” theory of speech perception [91] implies?
It was mentioned in the introduction that it is generally asserted that the auditory analysis of non-speech sounds is mediated by the left and the right HG equally, whereas deriving the phonological or speech-specific features of sounds is mediated by distinct, left-hemisphere networks. In fact, there are two different hypotheses regarding the manner of neuronal mediation of the acoustic and phonological operations. The older hypothesis [92] is that whereas all acoustic operations extracting pitch, loudness, timbre, etc., from all sounds (both speech and non-speech) are bilaterally mediated, the phonological features that are unique to speech sounds are extracted by a separate “speech organ” (i.e., a specialized neuronal network) located in the left hemisphere. The second hypothesis states that speech sounds are processed by the same networks as all other sounds and that, if there is a hemispheric specialization, it consists of the greater efficiency of left hemisphere networks in resolving the fast temporal changes in the acoustic signal that characterize speech sounds and the greater efficiency of right hemisphere networks in analyzing the spectral composition of all sounds (see [93]). According to this second hypothesis, there is no sharp separation between acoustic and phonological operations, and the spectral and temporal analysis of both speech and non-speech signals proceeds bilaterally.
The validity of the latter hypothesis has been upheld in studies that have confirmed the specialization of the left hemisphere networks for detecting rapid temporal variations [94,95,96,97]. One PET study in particular [98] demonstrated that whereas the left HG responds more than the right to temporal variation, the right anterior STG responds more than the left to spectral variation. Yet, in a subsequent fMRI study [99], it was found that high temporal variation in tonal stimuli was associated with bilaterally symmetrical HG activation, and spectral variation with bilaterally symmetrical STG activation and activation of the posterior part of the superior temporal sulcus (STS). Moreover, in a study using electrocorticographic (ECoG) recordings, Hullett et al. [100] found that the spectro-temporal analysis of speech input engages large sectors of the STG of both hemispheres well beyond HG, with the pSTG specialized for fast temporal variations in the speech sounds and the anterior STG for slower such variations. Finally, a further complication in determining the manner of mediation of acoustic and phonological operations has arisen in an MEG study [101], the results of which challenge a long-standing assumption, namely, that phonological processing follows the acoustic analysis effected bilaterally by HG. In this study, it was found that, in the context of a speech perception task, greater left HG activation, as well as more general left-lateralized activation, starts as early as 50 ms after stimulus onset.
From two meta-analyses of neuroimaging studies of language [102,103], a somewhat different picture emerges with respect to the way acoustic and phonological operations are mediated. A distinctive aspect of that picture is the confirmation of the notion that the anterior and posterior language-related networks are co-activated during both production and comprehension tasks, an aspect that brings to mind the analysis by synthesis theory of speech perception [91]. Another is that, as far as neuroimaging data are concerned, language processing entails bilateral activation. Specifically, during speech perception and production tasks, whether listening to syllables [104], producing speech sounds [105], articulating phonemes [106], engaging in rhyming word tasks [107], attending selectively to vowels [108], detecting temporal changes in vowels [109] or covertly repeating syllables [110], HG, along with an anterior region overlapping with HG, was reliably activated bilaterally [103].
Bilateral activation was also found in a region of the pSTG that includes the planum temporale while listening to vocalizations [111], during the perception [112] and identification [104] of syllables, while listening to [113] and categorizing [114] syllables, while listening to pseudo-words [115] and also during word production [116], as reported by Vigneau et al. [103]. However, the same region also showed additional unilateral left hemisphere activation in the same tasks, thus rendering the question of whether phonological processes engage both the left and the right pSTG difficult to answer. Does the preponderance of left pSTG activation mean that the observed right pSTG activation is superfluous? Or does it mean, as some believe, that although the right pSTG is capable of performing phonological analyses of speech in isolation as much as the left pSTG is, it is actively inhibited by the left?
The same questions are raised (and also remain unsettled) by the activation observed mostly over Broca’s area, but also in its homotopic region of the right hemisphere. There, as well, activation attends both phonological production and perception tasks. For example, left-lateralized activation of the pars triangularis was found in tasks involving counting syllables [117] or identifying them [113], and activation between the pars orbitalis and the middle frontal sulcus was found during syllable articulation tasks, either overt [106] or covert [118], syllable counting [117], categorization [114], discrimination [113] and covert pseudo-word reading [119]. Similarly, the lower part of the precentral gyrus is activated bilaterally (but, again, mostly in the left hemisphere) by the same phonological production and perception tasks, such as the overt articulation of words and syllables but, importantly, also during tongue movements and non-speech motor tasks involving the articulators [105,120], overt phoneme repetition [106], covert syllable repetition [110] and syllable counting [117].
Clearly, therefore, neuroimaging studies have raised more questions than they have resolved regarding detailed aspects of the cerebral mediation of acoustic and phonological operations—a phenomenon common to all branches of ever-evolving science—confirming, meanwhile, the notion that the phonological networks are lateralized to the left STG.

3.2. Semantic Operations

Networks of semantic operations are supposed to output not simply patterns of neuronal signals but meaning as well, possibly by activating neuronal engrams of words or, possibly, because the signal patterns themselves somehow produce conscious experiences of meaning. Leaving aside the ambiguity of what it means for a pattern of neuronal signals to produce experiences either directly or indirectly, the lesion data summarized in the “Introduction” of this review appear to indicate that more than half of the left temporal lobe and extensive sectors of the left frontal and parietal lobes are somehow implicated in that process. The results of the early direct CSM studies of Penfield and his group [121] and of Ojemann and his associates [122] point to the same conclusion. Interestingly, the results of most functional neuroimaging studies summarized by Vigneau et al. [102,103] present the same picture.
The first obvious regularity evident in the neuroimaging data is that unlike phonological processing, semantic processing is more clearly lateralized in the left hemisphere. According to Vigneau et al.’s [103] meta-analysis, only 12.5% of activated sites were found in the right hemisphere during all word comprehension conditions reviewed, as opposed to 30% in the case of phonological processing. These sites fell mostly in the junction between the pars opercularis of the IFG and the middle frontal sulcus, in the anterior insula and the orbital region of the IFG. However, the activation of these right hemisphere sites was not specific to the semantic operations but also occurred in attentional tasks common to both phonological and semantic processing, as well as to working memory tasks (e.g., [123,124,125]).
In contrast, the activation of regions within the left frontal lobes does appear to be specific to the process of retrieving or creating word meaning. These regions lie anteriorly to the ones implicated in phonological processing on the opercular part of the IFG and at the junction of that region with the precentral gyrus (e.g., [126,127]) and in the orbital part of the IFG [128,129,130,131]. The manner in which these frontal regions contribute to the emergence of meaning and whether they do so in a different or complementary way to that of other areas also implicated in the same process, such as the ATL (see [36]), is not clear. What is sufficiently clear, however, is that the emergence of word meaning does appear to involve the anterior hub of the left lateralized language network and not only the posterior ones in the temporal and parietal regions or the right ATL.
In the Vigneau et al. [102] meta-analysis, several clusters of activation sites within the left temporal and parietal lobes were found in the posterior part of the superior temporal sulcus, the anterior part of the fusiform gyrus and the angular gyrus, but not in the pSTG region, against the predictions of the dominant model mentioned in the “Introduction”. The anterior superior temporal sulcus appears to be involved in the processing of written words, in that it is activated during written word categorization tasks [132,133,134,135,136] or in word reading [137,138,139,140,141], but its specific role remains conjectural, as is the precise role of the rest of the aforementioned activated regions. For example, the AG is said to be involved in conceptual knowledge [102] and the fusiform gyrus in word reading (see the subsequent section), but also in listening to words (e.g., [142]) and in word association tasks (e.g., [45,131,143,144]), but once again the neuroimaging evidence is insufficient to identify the precise role of these areas in the process of the emergence of word meaning. The same can be said for areas like the frontal pole (part of the left ATL), which are activated not only during semantic but also during syntactic tasks.
It can therefore be concluded that functional neuroimaging evidence supports the contention that, first, like phonological processing, the emergence of word meaning appears to require the involvement of the left frontal lobe and, second, that the contribution of the right hemisphere to the process is minimal and that the pSTG is not necessarily part of the process. But functional neuroimaging has yet to answer the question as to the precise role of each of the activated areas in the meaning-accessing or meaning-generating processes.

3.3. Syntactic Operations

Both the perception and the production of sentences are processes akin to the perception and production of words, in that they require the ordering of units (words and phonemes, respectively). One question, therefore, is whether the networks that mediate the ordering of phonemes are distinct and potentially distinguishable, through imaging, from those mediating the ordering of words. But the question is difficult to address for the following reason: to extract the activation pattern that is specific to the syntactic operations from the global activation pattern that also includes semantic, phonological and acoustic operations, one must contrast the activation pattern obtained during the processing of syntactically correct sentences with that obtained during processing of incorrect sentences. But what exactly the activation pattern resulting from that contrast corresponds to is not clear. It is certainly not necessary that it corresponds to the “syntax networks”, since in both cases such networks must be activated, in the first case resulting in the emergence of the meaning of the sentence, and in the second in the failure of their application to engender such meaning. It is therefore not surprising that most activation resulting from sentence processing studies is due to non-syntactic operations.
For example, in an early study [17], it was found that both the right and the left ATLs were activated while listening to syntactically correct sentences. Although this might be taken to imply that these regions are specific to syntactic operations or that they contain part of the (hypothesized) syntactic networks, such an implication is by no means warranted, given that these brain regions have also been found to be activated during a variety of tasks ranging from episodic memory retrieval (e.g., [145,146]) to word meaning retrieval (e.g., [147,148]).
The same ambiguity besets the interpretation of activation data in other parts of the temporal lobes in neuroimaging studies attempting to identify the “syntax” networks in the human brain. Progress in this area is likely to be made when alternatives (perhaps radically so) to the existing psycholinguistic models are formulated and tested.

3.4. Prosody

Prosody is used for two purposes: first, to disambiguate the meaning of utterances (linguistic prosody) and, second, to convey the affective state of the speaker (affective prosody). Although it would be reasonable to suggest that left hemisphere networks are implicated when prosody serves to disambiguate the meaning of utterances and right hemisphere networks when it conveys the affective state of the speaker, no data have settled this issue.
The few studies that address the issue of linguistic prosody do not suffice for drawing definitive conclusions. The evidence regarding the mediation of affective prosody is equally provisional. The earliest study of affective prosody using electrophysiological estimates of laterality [149] showed that attending to the affective aspect of conversations conducted in a language unknown to the subjects engages predominantly the right hemisphere, whereas attending to the phonetic aspects of the same conversations engages predominantly the left. Yet, subsequent investigations are less conclusive on this point. In a recent meta-analysis of twenty-seven neuroimaging studies of affective prosody by Witteman, Van Heuven and Schiller [150], the notion that prosodic processing involves the same fronto-temporal language networks and the notion that affective prosody is right-lateralized were confirmed. The studies analyzed fall into two categories: the first includes those in which affective prosody passages were contrasted with equivalent passages not carrying prosodic features; the second includes those in which the same prosodic passages were presented during two conditions, but the subjects’ task was to identify or distinguish the emotions expressed by the prosody in one condition and to identify other, non-affective features of the same passages in the other.
In both sets of studies, once again, a fronto-temporal network was activated, involving STG and ITG sites. Moreover, in the first set of studies (but not the second), the left and right medial frontal gyri and the left and right insulae were activated, possibly reflecting the affective state perceived or the affective state induced in the hearers by the material. With respect to lateralization, the expected right hemisphere preponderance for affective prosody involved only the transverse temporal gyrus in the first set of studies and the pSTG in the second, neither of which is known to be involved in the perception of affect, thus leaving the issue of the lateralization of affective prosody unsettled.

3.5. Sign Language

Unlike so-called “body language”, which, among oral language users, serves to express the attitude of the speaker, sign languages have many of the purely linguistic features that also characterize oral languages. For example, there is a correspondence between phonemes and particular finger movements named “cheremes” (from the Greek word for hand, cheri). There is also a correspondence between words and “signs” consisting of one or more cheremes [151]. Finally, there is a correspondence between speech prosody, both linguistic and affective, and the distinct facial expressions and body movements used for the same purposes by sign language users. It is, therefore, to be expected that there might be an overlap between the networks mediating oral and sign languages. Accordingly, the main question to be addressed here is whether neuroimaging data also accord with that expectation and whether they better define the constituent hubs or nodes of the presumed left-lateralized sign language network.
The earliest of the neuroimaging studies to address this issue was that of Neville and her associates [152]. In that study, as well as in subsequent similar ones, the following findings emerged: first, left-lateralized language networks were activated, corresponding approximately to Brodmann’s areas (BA) 44/45 within the IFG (i.e., Broca’s area) and to BA 22, roughly corresponding to part of Wernicke’s area but also extending over the entire STG, both when English speakers were reading English sentences and when native American Sign Language (ASL) users were perceiving signed sentences. Second, congenitally deaf ASL signers showed activation of right hemisphere regions homotopic to those of the oral language network. This effect, however, does not characterize bilingual signers who have been exposed to oral language, who showed bilateral activation of the receptive language cortex but engaged the left anterior regions typically associated with speech production in order to comprehend the meaning of ASL sentences, a finding that argues in favor of the notion that, regardless of whether the task requires comprehension or production, the language network is activated as a unit.
Several other studies of the comprehension of signed sentences have addressed the hypothesis of a common network at the basis of sign and oral languages. Newman and his associates [153] exposed native signers and hearing English speakers to “signed” and English sentences alternating with control stimuli that were visually similar to the linguistic stimuli but meaningless. This fMRI study supported the expected left-lateralized activation pattern and showed that both native and late signers (i.e., individuals who had learned to sign later in life) engaged the left-lateralized network, including Broca’s region, the dorsolateral prefrontal cortex, both the left and the right superior temporal sulcus and the angular gyrus (AG), during signed sentence comprehension. A third group of late signers exhibited an essentially similar activation pattern. Also, contrary to the right-dominant activation pattern found in Neville et al.’s [152] study and in concert with the Newman et al. [153] findings, Sakai and his associates [154], in an fMRI study of deaf signers of Japanese Sign Language (JSL), also found a left-lateralized network for the comprehension of signed sentences (dialogues), featuring the IFG, the mid-frontal and dorsolateral cortex, the MTG (but not the STG), the SMG and the AG.
Subsequently, Newman and associates [155,156], also using fMRI during sentence comprehension with deaf ASL signers, replicated the largely left anterior cortex activation (pars triangularis, or BA 45, of the IFG) but failed to replicate the bilateral activation of the middle part of the MTG, the superior temporal sulcus (STS) and the left AG. The same group [157] addressed the issue of whether ASL and meaningful but non-linguistic gestures would activate the same left-lateralized network, and concluded that symbolic gestures are indeed processed by the left-lateralized network, indicating that they are treated as if they were linguistic in nature.

3.6. Reading and Dyslexia

Although clinical observations have indicated that certain focal lesions result in alexia, the inability to read, they have not offered any indications regarding the networks necessary for reading, or regarding what changes in these networks may account for dyslexia when the latter is not associated with lesions. Normal reading appears to be accomplished in three stages, each of which entails its own neuronal network. The first stage, common to all visual perception, is mediated by the visual cortex; the second is peculiar to written words or to word-like stimuli and entails one of two distinct yet compatible operations. The first of these operations is known as the “grapheme–phoneme rule system”. It is supposed to convert the output of the primary visual analysis of letter shapes to their sound equivalents, and its neuronal basis was postulated as early as 1892 by Dejerine [158] to be contained in the left angular gyrus [159]. The output of this conversion operation would then access the language production and perception system, and the written word would thus be read and understood (or understood and read).
The second operation is known as the “visual word form system” [160] and its network is supposed to be located in the left fusiform gyrus at the base of the temporal lobe (e.g., [161,162,163]). The output of this network then accesses the semantic comprehension and the production networks by directly activating the circuits that code the meaning that corresponds to the word form, and thus the words are read and understood (or, again, understood and read). These two operations are not mutually exclusive. In fact, they may either both belong to the second stage of the reading process (the third stage being the engagement of the language comprehension and production system), or the visual word form system may be engaged first, sending its output to both the grapheme-to-phoneme conversion mechanism and to the semantic language comprehension system.
The temporal succession of the activation of these networks is beyond the range of either fMRI or PET, in that it unfolds within milliseconds of the arrival of the visual stimuli at the primary visual cortex. It is, however, detectable through magnetoencephalography (MEG). In fact, a series of MEG investigations [164,165,166,167,168,169] provided evidence showing that, following bilateral visual cortex activation in response to word or pseudoword stimuli, left-lateralized basal temporal activation precedes the engagement of the left posterior and anterior language networks, which are activated about 300 milliseconds after the activation of the primary visual cortex.
A meta-analysis of reading studies [160] provided further indications that the early stage of the reading process engages the left occipitotemporal region in the vicinity of the fusiform gyrus, that is, the word form area, a finding that agrees with the aforementioned MEG results as well as with the results of subsequent investigations (e.g., [163,170]). The same meta-analysis also provided indications that components of the grapheme-to-phoneme conversion process are most likely located in the left posterior STG and the supramarginal gyrus (SMG), but also in the pars opercularis part of Broca’s area (BA 44), a finding also supported by more recent investigations [170]. Subsequent meta-analyses [171,172] have yielded similar findings: the former provides additional evidence for a grapheme-to-phoneme conversion network in the inferior parietal region, and the latter evidence of reading-related activation in the ventral aspect of the left occipitotemporal and inferior parietal regions, in addition to the IFG.
Given that lesions in the left fusiform “word form area”, as well as in the vicinity of Wernicke’s area, result in alexia (among other symptoms), the question naturally arises as to whether dyslexia not associated with structural lesions is due to a malfunction of either of these two areas. The data appear to favor this explanation. For example, in an MEG study, Simos et al. [168] showed that, unlike normally reading children who engaged the pSTG region, children diagnosed with dyslexia failed to engage that region and engaged the homotopic area in the right hemisphere instead. Moreover, after successful behavioral reading intervention, the same children displayed the normal activation profile during reading. Similar findings have appeared in subsequent years. Evidence from two meta-analyses [173,174] showed that the main difference between normal and dyslexic readers was the suppressed activation of the left-lateralized language network among the latter. In addition, in another meta-analysis of 13 fMRI and PET studies of normal readers and individuals with reading difficulties, Pollack and associates [175] found that, during rhyming or reading tasks, dyslexics engaged right hemisphere structures to a greater degree than left hemisphere ones. These findings point to the possibility that dyslexia is due to a malfunctioning of the articulatory and “word form” analysis mechanisms rather than of the mechanism of grapheme-to-phoneme conversion.
Once again, although functional neuroimaging studies generate many more questions regarding the way neuronal networks engender language than they set out to answer, they do, to a considerable degree, verify most of the explanations regarding the neurophysiology of language, in all its forms, that have been and continue to be generated in the clinic through the observation and interpretation of the effects of focal lesions. It is in part due to this convergence of the two sets of relevant evidence that non-invasive presurgical brain mapping has become a reality.

4. Perspectives

This review includes studies of mostly European languages and Japanese; therefore, the results may not readily generalize to the cerebral mediation of tonal languages like Chinese, which may involve different neuronal networks, especially for phonological processing (e.g., [176]). Also, some studies involve tasks of questionable ecological validity, whereas others may not sample satisfactorily all aspects of meaningful communication (e.g., non-verbal semantics). Such shortcomings, driven largely by the practical constraints that functional neuroimaging procedures entail, especially in the pre-surgical brain mapping of patients, are appreciated and will, in all likelihood, be overcome in the future. But for this to happen in an efficient way, future research should aim not only at the procurement of new results, as is now the case, but should also be directed to the consolidation of old findings through replication within and across laboratories. Clearly, for this to happen, a change in opinion as to what constitutes commendable research activity is required on the part of both researchers and funding institutions. Such changes do not transpire easily but are absolutely necessary for weeding out inadvertent false findings that survive unchecked for years in the guise of facts, a phenomenon rather common in functional neuroimaging, as well as in many other fields of science.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Duffau, H.; Moritz-Gasser, S.; Mandonnet, E. A re-examination of neural basis of language processing: Proposal of a dynamic hodotopical model from data provided by brain stimulation mapping during picture naming. Brain Lang. 2014, 131, 1–10. [Google Scholar] [CrossRef]
  2. Papanicolaou, A.C. The Oxford Handbook of Functional Brain Imaging in Neuropsychology and Cognitive Neurosciences; Oxford University Press: Oxford, UK, 2017. [Google Scholar]
  3. Papanicolaou, A.C.; Rezaie, R.; Simos, P.G. The auditory and association cortex and the language evaluation methods. In Clinical Neurophysiology (Handbook of Clinical Neurology); Chauvel, P., Levin, P.K., Eds.; Elsevier: Amsterdam, The Netherlands, 2019. [Google Scholar]
  4. Simos, P.G.; Rezaie, R.; Papanicolaou, A.C. Applications of Magnetoencephalography in Epilepsy and Tumor Surgery; Fountas, K., Ed.; Springer: Berlin, Germany, 2020. [Google Scholar]
  5. Mesulam, M.M.; Thompson, C.K.; Weintraub, S.; Rogalski, E.J. The Wernicke conundrum and the anatomy of language comprehension in primary progressive aphasia. Brain 2015, 138 Pt 8, 2423–2437. [Google Scholar] [CrossRef]
  6. Wernicke, C. The symptom complex of aphasia: A psychological study on an anatomical basis. In Boston Studies in the Philosophy of Science; Cohen, R.S., Wartofsky, M.W., Eds.; D. Reidel Publishing Company: Dordrecht, The Netherlands, 1874/1969; pp. 34–97. [Google Scholar]
  7. Luria, A.R. Traumatic Aphasia: Its Syndromes, Psychology, and Treatment; Mouton de Gruyter: Berlin/Heidelberg, Germany, 1970. [Google Scholar]
  8. Scott, S.K.; Blank, C.C.; Rosen, S.; Wise, R.J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain 2000, 123 Pt 12, 2400–2406. [Google Scholar] [CrossRef]
  9. Rauschecker, J.P.; Scott, S.K. Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nat. Neurosci. 2009, 12, 718–724. [Google Scholar] [CrossRef]
  10. Miceli, G.; Gainotti, G.; Caltagirone, C.; Masullo, C. Some aspects of phonological impairment in aphasia. Brain Lang. 1980, 11, 159–169. [Google Scholar] [CrossRef]
  11. Buchman, A.S.; Garron, D.C.; Trost-Cardamone, J.E.; Wichter, M.D.; Schwartz, M. Word deafness: One hundred years later. J. Neurol. Neurosurg. Psychiatry 1986, 49, 489–499. [Google Scholar] [CrossRef]
  12. Poeppel, D. Pure word deafness and the bilateral processing of the speech code. Cogn. Sci. 2001, 25, 679–693. [Google Scholar] [CrossRef]
  13. Rogalsky, C.; Pitz, E.; Hillis, A.E.; Hickok, G. Auditory word comprehension impairment in acute stroke: Relative contribution of phonemic versus semantic factors. Brain Lang. 2008, 107, 167–169. [Google Scholar] [CrossRef]
  14. Rogalsky, C.; Hickok, G. The role of Broca’s area in sentence comprehension. J. Cogn. Neurosci. 2011, 23, 1664–1680. [Google Scholar] [CrossRef]
  15. Price, C.J. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012, 62, 816–847. [Google Scholar] [CrossRef]
  16. Schirmer, A.; Fox, P.M.; Grandjean, D. On the spatial organization of sound processing in the human temporal lobe: A meta-analysis. Neuroimage 2012, 63, 137–147. [Google Scholar] [CrossRef] [PubMed]
  17. Mazoyer, B.M.; Tzourio, N.; Frak, V.; Syrota, A.; Murayama, N.; Levrier, O.; Mehler, J. The cortical representation of speech. J. Cogn. Neurosci. 1993, 5, 467–479. [Google Scholar] [CrossRef] [PubMed]
  18. Narain, C.; Scott, S.K.; Wise, R.J.; Rosen, S.; Leff, A.; Iversen, S.D.; Matthews, P.M. Defining a left-lateralized response specific to intelligible speech using fMRI. Cereb. Cortex 2003, 13, 1362–1368. [Google Scholar] [CrossRef] [PubMed]
  19. Spitsyna, G.; Warren, J.E.; Scott, S.K.; Turkheimer, F.E.; Wise, R.J.S. Converging language streams in the human temporal lobe. J. Neurosci. 2006, 26, 7328–7336. [Google Scholar] [CrossRef]
  20. Poeppel, D.; Emmorey, K.; Hickok, G.; Pylkkänen, L. Towards a new neurobiology of language. J. Neurosci. 2012, 32, 14125–14131. [Google Scholar] [CrossRef]
  21. Hickok, G.; Poeppel, D. Neural basis of speech perception. In Handbook of Clinical Neurology, 3rd ed.; Aminoff, M.J., Boller, F., Swaab, D.F., Eds.; Elsevier: Amsterdam, The Netherlands, 2015; pp. 149–160. [Google Scholar]
  22. Keator, L.M.; Yourganov, G.; Faria, A.V.; Hillis, A.E.; Tippett, D.C. Application of the dual stream model to neurodegenerative disease: Evidence from a multivariate classification tool in primary progressive aphasia. Aphasiology 2022, 36, 618–647. [Google Scholar] [CrossRef] [PubMed]
  23. Marie, P. The third left frontal convolution plays no special role in the function of language. Sem. Méd. 1906, 26, 241–247. [Google Scholar]
  24. Lhermitte, F.; Gautier, J.C. Aphasia. In Handbook of Clinical Neurology; Vinken, P., Bruyn, G., Eds.; North Holland: Amsterdam, The Netherlands, 1969; pp. 4–84. [Google Scholar]
  25. Geschwind, N. Language and the brain. Sci. Am. 1972, 226, 76–83. [Google Scholar] [CrossRef]
  26. Bogen, J.E.; Bogen, G.M. Wernicke’s region—Where is it? Ann. N. Y. Acad. Sci. 1976, 280, 834–843. [Google Scholar] [CrossRef]
  27. Naeser, M.A.; Helm-Estabrooks, N.; Haas, G.; Auerbach, S.; Srinivasan, M. Relationship between lesion extent in ‘Wernicke’s area’ on computed tomographic scan and predicting recovery of comprehension in Wernicke’s aphasia. Arch. Neurol. 1987, 44, 73–82. [Google Scholar] [CrossRef]
  28. Turken, A.U.; Dronkers, N.F. The neural architecture of the language comprehension network: Converging evidence from lesion and connectivity analyses. Front. Syst. Neurosci. 2011, 5, 1. [Google Scholar] [CrossRef] [PubMed]
  29. Snowden, J.S.; Goulding, P.J.; Neary, D. Semantic dementia: A form of circumscribed cerebral atrophy. Behav. Neurol. 1989, 2, 167–182. [Google Scholar] [CrossRef]
  30. Hodges, J.R.; Patterson, K.; Oxbury, S.; Funnell, E. Semantic dementia. Progressive fluent aphasia with temporal lobe atrophy. Brain 1992, 115 Pt 6, 1783–1806. [Google Scholar] [CrossRef] [PubMed]
  31. Rogers, T.T.; Lambon Ralph, M.A.; Garrard, P.; Bozeat, S.; McClelland, J.L.; Hodges, J.R.; Patterson, K. Structure and deterioration of semantic memory: A neuropsychological and computational investigation. Psychol. Rev. 2004, 111, 205–235. [Google Scholar] [CrossRef] [PubMed]
  32. Jefferies, E.; Lambon Ralph, M.A. Semantic impairment in stroke aphasia versus semantic dementia: A case-series comparison. Brain 2006, 129 Pt 8, 2132–2147. [Google Scholar] [CrossRef] [PubMed]
  33. Patterson, K.; Nestor, P.J.; Rogers, T.T. Where do you know what you know? The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 2007, 8, 976–987. [Google Scholar] [CrossRef] [PubMed]
  34. Hurley, R.S.; Paller, K.A.; Rogalski, E.J.; Mesulam, M.M. Neural mechanisms of object naming and word comprehension in primary progressive aphasia. J. Neurosci. 2012, 32, 4848–4855. [Google Scholar] [CrossRef]
  35. Mesulam, M.M.; Wieneke, C.; Hurley, R.; Rademaker, A.; Thompson, C.K.; Weintraub, S.; Rogalski, E.J. Words and objects at the tip of the left temporal lobe in primary progressive aphasia. Brain 2013, 136 Pt 2, 601–618. [Google Scholar] [CrossRef]
  36. Schwartz, M.F.; Kimberg, D.Y.; Walker, G.M.; Faseyitan, O.; Brecher, A.; Dell, G.S.; Coslett, H.B. Anterior temporal involvement in semantic word retrieval: Voxel-based lesion-symptom mapping evidence from aphasia. Brain 2009, 132 Pt 12, 3411–3427. [Google Scholar] [CrossRef]
  37. Pulvermuller, F.; Fadiga, L. Active perception: Sensorimotor circuits as a cortical basis for language. Nat. Rev. Neurosci. 2010, 11, 351–360. [Google Scholar] [CrossRef]
  38. Pulvermuller, F. How neurons make meaning: Brain mechanisms for embodied and abstract-symbolic semantics. Trends Cogn. Sci. 2013, 17, 458–470. [Google Scholar] [CrossRef] [PubMed]
  39. Luders, H.; Dinner, D.S.; Lesser, R.P.; Morris, H.H. Evoked potentials in cortical localization. J. Clin. Neurophysiol. 1986, 3, 75–84. [Google Scholar] [CrossRef] [PubMed]
  40. Gorno-Tempini, M.L.; Dronkers, N.F.; Rankin, K.P.; Ogar, M.J.; Phengrasamy, L.; Rosen, H.J.; Johnson, J.K.; Weiner, M.W.; Miller, B.L. Cognition and anatomy in three variants of primary progressive aphasia. Ann. Neurol. 2004, 55, 335–346. [Google Scholar] [CrossRef] [PubMed]
  41. Połczyńska, M.; Japardi, K.; Curtiss, S.; Moody, T.; Benjamin, C.; Cho, A.; Vigil, C.; Kuhn, T.; Jones, M.; Bookheimer, S. Improving language mapping in clinical fMRI through assessment of grammar. NeuroImage Clin. 2017, 15, 415–427. [Google Scholar] [CrossRef]
  42. Meinhold, T.; Hofer, W.; Pieper, T.; Kudernatsch, M.; Staudt, M. Presurgical language fMRI in children, adolescents and young adults: A validation study. Clin. Neuroradiol. 2020, 30, 691–704. [Google Scholar] [CrossRef]
  43. Ashtari, M.; Perrine, K.; Elbaz, R.; Syed, U.; Thaden, E.; McIlree, C.; Dolgoff-Kaspar, R.; Clarke, T.; Diamond, A.; Ettinger, A. Mapping the functional anatomy of sentence comprehension and application to presurgical evaluation of patients with brain tumor. AJNR Am. J. Neuroradiol. 2005, 26, 1461–1468. [Google Scholar]
  44. Szekely, A.; D’Amico, S.; Devescovi, A.; Federmeier, K.; Herron, D.; Iyer, G.; Jacobsen, T.; Arévalo, A.L.; Vargha, A.; Bates, E. Timed action and object naming. Cortex 2005, 41, 7–25. [Google Scholar] [CrossRef] [PubMed]
  45. Binder, J.R.; Swanson, S.J.; Hammeke, T.A.; Morris, G.L.; Mueller, W.M.; Fischer, M.; Benbadis, S.; Frost, J.A.; Rao, S.M.; Haughton, V.M. Determination of language dominance using functional MRI: A comparison with the Wada test. Neurology 1996, 46, 978–984. [Google Scholar] [CrossRef]
  46. Janecek, J.K.; Swanson, S.J.; Sabsevitz, D.S.; Hammeke, T.A.; Raghavan, M.; Rozman, M.E.; Binder, J.R. Language lateralization by fMRI and Wada testing in 229 patients with epilepsy: Rates and predictors of discordance. Epilepsia 2013, 54, 314–322. [Google Scholar] [CrossRef]
  47. Papanicolaou, A.C.; Simos, P.G.; Castillo, E.M.; Breier, J.I.; Sarkari, S.; Pataraia, E.; Billingsley, R.L.; Buchanan, S.; Wheless, J.; Maggio, V.; et al. Magnetocephalography: A noninvasive alternative to the Wada procedure. J. Neurosurg. 2004, 100, 867–876. [Google Scholar] [CrossRef]
  48. Breier, J.I.; Simos, P.G.; Wheless, J.W.; Constantinou, J.E.; Baumgartner, J.E.; Venkataraman, V.; Papanicolaou, A.C. Language dominance in children as determined by magnetic source imaging and the intracarotid amobarbital procedure: A comparison. J. Child. Neurol. 2001, 16, 124–130. [Google Scholar] [CrossRef]
  49. Papanicolaou, A.C.; Pazo-Alvarez, P.; Castillo, E.M.; Billingsley-Marshall, R.; Breier, J.; Swank, P.; Buchanan, S.; McManis, M.; Clear, T.; Passaro, A. Functional neuroimaging with MEG: Normative language profiles. Neuroimage 2006, 33, 326–342. [Google Scholar] [CrossRef]
  50. Bowyer, S.M.; Moran, J.E.; Weiland, B.J.; Mason, K.M.; Greenwald, M.L.; Smith, B.J.; Barkley, G.L.; Tepley, N. Language laterality determined by MEG mapping with MR-FOCUSS. Epilepsy Behav. 2005, 6, 235–241. [Google Scholar] [CrossRef] [PubMed]
  51. Kamada, K.; Sawamura, Y.; Takeuchi, F.; Kuriki, S.; Kawai, K.; Morita, A.; Todo, T. Expressive and receptive language areas determined by a non-invasive reliable method using functional magnetic resonance imaging and magnetoencephalography. Neurosurgery 2007, 60, 296–305, discussion 305–306. [Google Scholar] [CrossRef] [PubMed]
  52. Bowyer, S.M.; Moran, J.E.; Mason, K.M.; Constantinou, J.E.; Smith, B.J.; Barkley, G.L.; Tepley, N. MEG localization of language-specific cortex utilizing MR-FOCUSS. Neurology 2004, 62, 2247–2255. [Google Scholar] [CrossRef]
  53. Castillo, E.M.; Simos, P.G.; Venkataraman, V.; Breier, J.I.; Wheless, J.W.; Papanicolaou, A.C. Mapping of expressive language cortex using magnetic source imaging. Neurocase 2001, 7, 419–422. [Google Scholar] [CrossRef] [PubMed]
  54. Findlay, A.M.; Ambrose, J.B.; Cahn-Weiner, D.A.; Houde, J.F.; Honma, S.; Hinkley, L.B.; Berger, M.S.; Nagarajan, S.S.; Kirsch, H.E. Dynamics of hemispheric dominance for language assessed by magnetoencephalographic imaging. Ann. Neurol. 2012, 71, 668–686. [Google Scholar] [CrossRef] [PubMed]
  55. Benson, R.R.; FitzGerald, D.B.; LeSueur, L.L.; Kennedy, D.; Kwong, K.; Buchbinder, B.; Davis, T.; Weisskoff, R.; Talavage, T.; Logan, W.; et al. Language dominance determined by whole brain functional MRI in patients with brain lesions. Neurology 1999, 52, 798–809. [Google Scholar] [CrossRef] [PubMed]
  56. Deblaere, K.; Boon, P.A.; Vandemaele, P.; Tieleman, A.; Vonck, K.; Vingerhoets, G.; Backes, W.; Defreyne, L.; Achten, E. MRI language dominance assessment in epilepsy patients at 1.0 T: Region of interest analysis and comparison with intracarotid amytal testing. Neuroradiology 2004, 46, 413–420. [Google Scholar] [CrossRef]
  57. Desmond, J.E.; Sum, J.M.; Wagner, A.D.; Demb, J.B.; Shear, P.K.; Glover, G.H.; Gabrieli, J.D.E.; Morrell, M.J. Functional MRI measurement of language lateralization in Wada-tested patients. Brain 1995, 118 Pt 6, 1411–1419. [Google Scholar] [CrossRef]
  58. Binder, J.R.; Frost, J.A.; Hammeke, T.A.; Rao, S.M.; Cox, R.W. Function of the left planum temporale in auditory and linguistic processing. Brain 1996, 119 Pt 4, 1239–1247. [Google Scholar] [CrossRef]
  59. Carpentier, A.; Pugh, K.R.; Westerveld, M.; Studholme, C.; Skrinjar, O.; Thompson, J.L.; Spencer, D.D.; Constable, R.T. Functional MRI of language processing: Dependence on input modality and temporal lobe epilepsy. Epilepsia 2001, 42, 1241–1254. [Google Scholar] [CrossRef] [PubMed]
  60. Hertz-Pannier, L.; Gaillard, W.D.; Mott, S.H.; Cuenod, C.A.; Bookheimer, S.Y.; Weinstein, S.; Conry, J.; Papero, P.H.; Schiff, S.J.; Bihan, D.L.; et al. Noninvasive assessment of language dominance in children and adolescents with functional MRI: A preliminary study. Neurology 1997, 48, 1003–1012. [Google Scholar] [CrossRef] [PubMed]
  61. Sabbah, P.; Chassoux, F.; Leveque, C.; Landre, E.; Baudoin-Chial, S.; Devaux, B.; Mann, M.; Godon-Hardy, S.; Nioche, C.; Aït-Ameur, A.; et al. Functional MR imaging in assessment of language dominance in epileptic patients. Neuroimage 2003, 18, 460–467. [Google Scholar] [CrossRef] [PubMed]
  62. Woermann, F.G.; Jokeit, H.; Luerding, R.; Freitag, H.; Schulz, R.; Guertler, S.; Okujava, M.; Wolf, P.; Tuxhorn, I.; Ebner, A. Language lateralization by Wada test and fMRI in 100 patients with epilepsy. Neurology 2003, 61, 699–701. [Google Scholar] [CrossRef]
  63. Arora, J.; Pugh, K.; Westerveld, M.; Spencer, S.; Spencer, D.D.; Constable, R.T. Language lateralization in epilepsy patients: fMRI validated with the Wada procedure. Epilepsia 2009, 50, 2225–2241. [Google Scholar] [CrossRef]
  64. Benke, T.; Koylu, B.; Visani, P.; Karner, E.; Brenneis, C.; Bartha, L.; Trinka, E.; Trieb, T.; Felber, S.; Bauer, G.; et al. Language lateralization in temporal lobe epilepsy: A comparison between fMRI and the Wada Test. Epilepsia 2006, 47, 1308–1319. [Google Scholar] [CrossRef]
  65. Jones, S.E.; Mahmoud, S.Y.; Phillips, M.D. A practical clinical method to quantify language lateralization in fMRI using whole-brain analysis. Neuroimage 2011, 54, 2937–2949. [Google Scholar] [CrossRef]
  66. Suarez, R.O.; Whalen, S.; Nelson, A.P.; Tie, Y.; Meadows, M.E.; Radmanesh, A.; Golby, A.J. Threshold-independent functional MRI determination of language dominance: A validation study against clinical gold standards. Epilepsy Behav. 2009, 16, 288–297. [Google Scholar] [CrossRef]
  67. Szaflarski, J.P.; Holland, S.K.; Jacola, L.M.; Lindsell, C.; Privitera, M.D.; Szaflarski, M. Comprehensive presurgical functional MRI language evaluation in adult patients with epilepsy. Epilepsy Behav. 2008, 12, 74–83. [Google Scholar] [CrossRef]
  68. Zaca, D.; Nickerson, J.P.; Deib, G.; Pillai, J.J. Effectiveness of four different clinical fMRI paradigms for preoperative regional determination of language lateralization in patients with brain tumors. Neuroradiology 2012, 54, 1015–1025. [Google Scholar] [CrossRef] [PubMed]
  69. Doss, R.C.; Zhang, W.; Risse, G.L.; Dickens, D.L. Lateralizing language with magnetic source imaging: Validation based on the Wada test. Epilepsia 2009, 50, 2242–2248. [Google Scholar] [CrossRef] [PubMed]
  70. Hirata, M.; Kato, A.; Taniguchi, M.; Saitoh, Y.; Ninomiya, H.; Ihara, A.; Kishima, H.; Oshino, S.; Baba, T.; Yorifuji, S.; et al. Determination of language dominance with synthetic aperture magnetometry: Comparison with the Wada test. Neuroimage 2004, 23, 46–53. [Google Scholar] [CrossRef] [PubMed]
71. McDonald, C.R.; Thesen, T.; Hagler, D.J., Jr.; Carlson, C.; Devinsky, O.; Kuzniecky, R.; Barr, W.; Gharapetian, L.; Trongnetrpunya, A.; Dale, A.M.; et al. Distributed source modeling of language with magnetoencephalography: Application to patients with intractable epilepsy. Epilepsia 2009, 50, 2256–2266. [Google Scholar] [CrossRef]
  72. Maestu, F.; Ortiz, T.; Fernandez, A.; Amo, C.; Martin, P.; Fernández, S.; Sola, R.G. Spanish language mapping using MEG: A validation study. Neuroimage 2002, 17, 1579–1586. [Google Scholar] [CrossRef] [PubMed]
  73. Merrifield, W.S.; Simos, P.G.; Papanicolaou, A.C.; Philpott, L.M.; Sutherling, W.W. Hemispheric language dominance in magnetoencephalography: Sensitivity, specificity, and data reduction techniques. Epilepsy Behav. 2007, 10, 120–128. [Google Scholar] [CrossRef]
  74. Tanaka, N.; Liu, H.; Reinsberger, C.; Madsen, J.R.; Bourgeois, B.F.; Dworetzky, B.A.; Hämäläinen, M.S.; Stufflebeam, S.M. Language lateralization represented by spatiotemporal mapping of magnetoencephalography. AJNR Am. J. Neuroradiol. 2013, 34, 558–563. [Google Scholar] [CrossRef]
  75. Detre, J.A.; Maccotta, L.; King, D.; Alsop, D.C.; Glosser, G.; D’Esposito, M.; Zarahn, E.; Aguirre, G.K.; French, J.A. Functional MRI lateralization of memory in temporal lobe epilepsy. Neurology 1998, 50, 926–932. [Google Scholar] [CrossRef]
  76. Deblaere, K.; Backes, W.H.; Tieleman, A.; Vandemaele, P.; Defreyne, L.; Vonck, K.; Hofman, P.; Boon, P.; Vermeulen, J.; Wilmink, J.; et al. Lateralized anterior mesiotemporal lobe activation: Semirandom functional MR imaging encoding paradigm in patients with temporal lobe epilepsy—Initial experience. Radiology 2005, 236, 996–1003. [Google Scholar] [CrossRef]
  77. Simos, P.G.; Papanicolaou, A.C.; Breier, J.I.; Wheless, J.W.; Constantinou, J.E.C.; Gormley, W.B.; Maggio, W.W. Localization of language-specific cortex by using magnetic source imaging and electrical stimulation mapping. J. Neurosurg. 1999, 91, 787–796. [Google Scholar] [CrossRef]
  78. Castillo, E.M.; Papanicolaou, A.C. Cortical representation of dermatomes: MEG-derived maps after tactile stimulation. Neuroimage 2005, 25, 727–733. [Google Scholar] [CrossRef] [PubMed]
  79. Ojemann, G.A. Organization of language cortex derived from investigations during neurosurgery. Semin. Neurosci. 1990, 2, 297–306. [Google Scholar]
  80. Ojemann, G.A. Cortical organization of language. J. Neurosci. 1991, 11, 2281–2287. [Google Scholar] [CrossRef] [PubMed]
  81. Sanai, N.; Mirzadeh, Z.; Berger, M.S. Functional outcome after language mapping for glioma resection. N. Engl. J. Med. 2008, 358, 18–27. [Google Scholar] [CrossRef]
  82. Ojemann, G.A.; Dodrill, C.B. Verbal memory deficits after left temporal lobectomy for epilepsy. Mechanism and intraoperative prediction. J. Neurosurg. 1985, 62, 101–107. [Google Scholar] [CrossRef]
  83. Cervenka, M.C.; Corines, J.; Boatman-Reich, D.F.; Eloyan, A.; Sheng, X.; Franaszczuk, P.J.; Crone, N.E. Electrocorticographic functional mapping identifies human cortex critical for auditory and visual naming. Neuroimage 2013, 69, 267–276. [Google Scholar] [CrossRef]
  84. Cervenka, M.C.; Boatman-Reich, D.F.; Ward, J.; Franaszczuk, P.J.; Crone, N.E. Language mapping in multilingual patients: Electrocorticography and cortical stimulation during naming. Front. Hum. Neurosci. 2011, 5, 13. [Google Scholar] [CrossRef]
  85. Hamberger, M.J.; Seidel, W.T.; McKhann, G.M., 2nd; Perrine, K.; Goodman, R.R. Brain stimulation reveals critical auditory naming cortex. Brain 2005, 128, 2742–2749. [Google Scholar] [CrossRef]
  86. Hermann, B.; Davies, K.; Foley, K.; Bell, B. Visual confrontation naming outcome after standard left anterior temporal lobectomy with sparing versus resection of the superior temporal gyrus: A randomized prospective clinical trial. Epilepsia 1999, 40, 1070–1076. [Google Scholar] [CrossRef]
  87. Sabsevitz, D.S.; Swanson, S.J.; Hammeke, T.A.; Spanaki, M.V.; Possing, E.T.; Morris, G.L.; Mueller, W.M.; Binder, J.R. Use of preoperative functional neuroimaging to predict language deficits from epilepsy surgery. Neurology 2003, 60, 1788–1792. [Google Scholar] [CrossRef]
  88. Blount, J.P.; Cormier, J.; Kim, H.; Kankirawatana, P.; Riley, K.O.; Knowlton, R.C. Advances in intracranial monitoring. Neurosurg. Focus 2008, 25, E18. [Google Scholar] [CrossRef] [PubMed]
  89. Ochi, A.; Otsubo, H. Magnetoencephalography-guided epilepsy surgery for children with intractable focal epilepsy: SickKids experience. Int. J. Psychophysiol. 2008, 68, 104–110. [Google Scholar] [CrossRef] [PubMed]
  90. Vitikainen, A.M.; Lioumis, P.; Paetau, R.; Salli, E.; Komssi, S.; Metsähonkala, L.; Paetau, A.; Kičić, D.; Blomstedt, G.; Valanne, L.; et al. Combined use of non-invasive techniques for improved functional localization for a selected group of epilepsy surgery candidates. Neuroimage 2009, 45, 342–348. [Google Scholar] [CrossRef]
  91. Liberman, A.M.; Cooper, F.S.; Shankweiler, D.P.; Studdert-Kennedy, M. Perception of the speech code. Psychol. Rev. 1967, 74, 431–461. [Google Scholar] [CrossRef] [PubMed]
  92. Liberman, A.M.; Mattingly, I.G. A specialization for speech perception. Science 1989, 243, 489–494. [Google Scholar] [CrossRef]
  93. Washington, S.D.; Tillinghast, J.S. Conjugating time and frequency: Hemispheric specialization, acoustic uncertainty, and the mustached bat. Front. Neurosci. 2015, 9, 143. [Google Scholar] [CrossRef]
  94. Belin, P.; Zilbovicius, M.; Crozier, S.; Thivard, L.; Fontaine, A.; Masure, M.C.; Samson, Y. Lateralization of speech and auditory temporal processing. J. Cogn. Neurosci. 1998, 10, 536–540. [Google Scholar] [CrossRef]
  95. Liegeois-Chauvel, C.; de Graaf, J.B.; Laguitton, V.; Chauvel, P. Specialization of left auditory cortex for speech perception in man depends on temporal coding. Cereb. Cortex 1999, 9, 484–496. [Google Scholar] [CrossRef]
  96. Nicholls, M.E.; Schier, M.; Stough, C.K.; Box, A. Psychophysical and electrophysiologic support for a left hemisphere temporal processing advantage. Neuropsychiatry Neuropsychol. Behav. Neurol. 1999, 12, 11–16. [Google Scholar]
  97. Yamasaki, T.; Goto, Y.; Taniwaki, T.; Kinukawa, N.; Kira, J.; Tobimatsu, S. Left hemisphere specialization for rapid temporal processing: A study with auditory 40 Hz steady-state responses. Clin. Neurophysiol. 2005, 116, 393–400. [Google Scholar] [CrossRef]
  98. Zatorre, R.J.; Belin, P. Spectral and temporal processing in human auditory cortex. Cereb. Cortex 2001, 11, 946–953. [Google Scholar] [CrossRef] [PubMed]
  99. Jamison, H.L.; Watkins, K.E.; Bishop, D.V.; Matthews, P.M. Hemispheric specialization for processing auditory nonspeech stimuli. Cereb. Cortex 2006, 16, 1266–1275. [Google Scholar] [CrossRef] [PubMed]
100. Hullett, P.W.; Hamilton, L.S.; Mesgarani, N.; Schreiner, C.E.; Chang, E.F. Human superior temporal gyrus organization of spectrotemporal modulation tuning derived from speech stimuli. J. Neurosci. 2016, 36, 2014–2026. [Google Scholar] [CrossRef]
101. Papanicolaou, A.C.; Kilintari, M.; Rezaie, R.; Narayana, S.; Babajani-Feremi, A. The role of the primary sensory cortices in early language processing. J. Cogn. Neurosci. 2017, 29, 1755–1765. [Google Scholar] [CrossRef] [PubMed]
  102. Vigneau, M.; Beaucousin, V.; Herve, P.Y.; Duffau, H.; Crivello, F.; Houde, O.; Tzourio-Mazoyer, N. Meta-analyzing left hemisphere language areas: Phonology, semantics, and sentence processing. Neuroimage 2006, 30, 1414–1432. [Google Scholar] [CrossRef] [PubMed]
  103. Vigneau, M.; Beaucousin, V.; Herve, P.Y.; Jobard, G.; Petit, L.; Crivello, F.; Tzourio-Mazoyer, N. What is right-hemisphere contribution to phonological, lexico-semantic, and sentence processing? Insights from a meta-analysis. Neuroimage 2011, 54, 577–593. [Google Scholar] [CrossRef]
  104. Zatorre, R.J.; Evans, A.C.; Meyer, E.; Gjedde, A. Lateralization of phonetic and pitch discrimination in speech processing. Science 1992, 256, 846–849. [Google Scholar] [CrossRef]
  105. Braun, A.R.; Varga, M.; Stager, S.; Schulz, G.; Selbie, S.; Maisog, J.M.; Ludlow, C.L. Altered patterns of cerebral activity during speech and language production in developmental stuttering. An H2(15)O positron emission tomography study. Brain 1997, 120 Pt 5, 761–784. [Google Scholar] [CrossRef]
  106. Bookheimer, S.Y.; Zeffiro, T.A.; Blaxton, T.A.; Gaillard, P.W.; Theodore, W.H. Activation of language cortex with automatic speech tasks. Neurology 2000, 55, 1151–1157. [Google Scholar] [CrossRef]
  107. Booth, J.R.; Burman, D.D.; Meyer, J.R.; Gitelman, D.R.; Parrish, T.B.; Mesulam, M.M. Modality independence of word comprehension. Hum. Brain Mapp. 2002, 16, 251–261. [Google Scholar] [CrossRef]
  108. Hugdahl, K.; Thomsen, T.; Ersland, L.; Rimol, L.M.; Niemi, J. The effects of attention on speech perception: An fMRI study. Brain Lang. 2003, 85, 37–48. [Google Scholar] [CrossRef] [PubMed]
  109. Joanisse, M.F.; Gati, J.S. Overlapping neural regions for processing rapid temporal cues in speech and nonspeech signals. Neuroimage 2003, 19, 64–79. [Google Scholar] [CrossRef] [PubMed]
  110. Wildgruber, D.; Ackermann, H.; Grodd, W. Differential contributions of motor cortex, basal ganglia, and cerebellum to speech motor control: Effects of syllable repetition rate evaluated by fMRI. Neuroimage 2001, 13, 101–109. [Google Scholar] [CrossRef] [PubMed]
  111. Belin, P.; Zatorre, R.J.; Lafaille, P.; Ahad, P.; Pike, B. Voice-selective areas in human auditory cortex. Nature 2000, 403, 309–312. [Google Scholar] [CrossRef]
  112. Jancke, L.; Wustenberg, T.; Scheich, H.; Heinze, H.J. Phonetic perception and the temporal cortex. Neuroimage 2002, 15, 733–746. [Google Scholar] [CrossRef]
  113. Sekiyama, K.; Kanno, I.; Miura, S.; Sugita, Y. Auditory-visual speech perception examined by fMRI and PET. Neurosci. Res. 2003, 47, 277–287. [Google Scholar] [CrossRef]
  114. Poeppel, D.; Guillemin, A.; Thompson, J.; Fritz, J.; Bavelier, D.; Braun, A.R. Auditory lexical decision, categorical perception, and FM direction discrimination differentially engage left and right auditory cortex. Neuropsychologia 2004, 42, 183–200. [Google Scholar] [CrossRef]
  115. Binder, J.R.; Frost, J.A.; Hammeke, T.A.; Bellgowan, P.S.; Springer, J.A.; Kaufman, J.N.; Possing, E.T. Human temporal lobe activation by speech and nonspeech sounds. Cereb. Cortex 2000, 10, 512–528. [Google Scholar] [CrossRef]
  116. Price, C.J.; Wise, R.J.; Warburton, E.A.; Moore, C.J.; Howard, D.; Patterson, K.; Friston, K.J. Hearing and saying. The functional neuro-anatomy of auditory word processing. Brain 1996, 119 Pt 3, 919–931. [Google Scholar] [CrossRef]
  117. Poldrack, R.A.; Wagner, A.D.; Prull, M.W.; Desmond, J.E.; Glover, G.H.; Gabrieli, J.D. Functional specialization for semantic and phonological processing in the left inferior prefrontal cortex. Neuroimage 1999, 10, 15–35. [Google Scholar] [CrossRef]
  118. Warburton, E.; Wise, R.J.; Price, C.J.; Weiller, C.; Hadar, U.; Ramsay, S.; Frackowiak, R.S. Noun and verb retrieval by normal subjects. Studies with PET. Brain 1996, 119 Pt 1, 159–179. [Google Scholar] [CrossRef] [PubMed]
  119. Mechelli, A.; Friston, K.J.; Price, C.J. The effects of presentation rate during word and pseudoword reading: A comparison of PET and fMRI. J. Cogn. Neurosci. 2000, 12 (Suppl. 2), 145–156. [Google Scholar] [CrossRef] [PubMed]
  120. Riecker, A.; Ackermann, H.; Wildgruber, D.; Meyer, J.; Dogil, G.; Haider, H.; Grodd, W. Articulatory/phonetic sequencing at the level of the anterior perisylvian cortex: A functional magnetic resonance imaging (fMRI) study. Brain Lang. 2000, 75, 259–276. [Google Scholar] [CrossRef]
  121. Penfield, W.; Roberts, L. Speech and Brain Mechanisms; Princeton University Press: Princeton, NJ, USA, 1959. [Google Scholar]
  122. Ojemann, G.; Ojemann, J.; Lettich, E.; Berger, M. Cortical language localization in left, dominant hemisphere. An electrical stimulation mapping investigation in 117 patients. J. Neurosurg. 1989, 71, 316–326. [Google Scholar] [CrossRef]
  123. Wager, T.D.; Smith, E.E. Neuroimaging studies of working memory: A meta-analysis. Cogn. Affect. Behav. Neurosci. 2003, 3, 255–274. [Google Scholar] [CrossRef]
  124. Zago, L.; Petit, L.; Turbelin, M.R.; Andersson, F.; Vigneau, M.; Tzourio-Mazoyer, N. How verbal and spatial manipulation networks contribute to calculation: An fMRI study. Neuropsychologia 2008, 46, 2403–2414. [Google Scholar] [CrossRef]
  125. Yang, F.G.; Edens, J.; Simpson, C.; Krawczyk, D.C. Differences in task demands influence the hemispheric lateralization and neural correlates of metaphor. Brain Lang. 2009, 111, 114–124. [Google Scholar] [CrossRef] [PubMed]
  126. Wagner, A.D.; Koutstaal, W.; Maril, A.; Schacter, D.L.; Buckner, R.L. Task-specific repetition priming in left inferior prefrontal cortex. Cereb. Cortex 2000, 10, 1176–1184. [Google Scholar] [CrossRef]
  127. Wagner, A.D.; Pare-Blagoev, E.J.; Clark, J.; Poldrack, R.A. Recovering meaning: Left prefrontal cortex guides controlled semantic retrieval. Neuron 2001, 31, 329–338. [Google Scholar] [CrossRef]
  128. Adams, R.B.; Janata, P. A comparison of neural circuits underlying auditory and visual object categorization. Neuroimage 2002, 16, 361–377. [Google Scholar] [CrossRef]
  129. Braver, T.S.; Bongiolatti, S.R. The role of frontopolar cortex in subgoal processing during working memory. Neuroimage 2002, 15, 523–536. [Google Scholar] [CrossRef] [PubMed]
  130. Binder, J.R.; McKiernan, K.A.; Parsons, M.E.; Westbury, C.F.; Possing, E.T.; Kaufman, J.N.; Buchanan, L. Neural correlates of lexical access during visual word recognition. J. Cogn. Neurosci. 2003, 15, 372–393. [Google Scholar] [CrossRef] [PubMed]
  131. Bright, P.; Moss, H.; Tyler, L.K. Unitary vs multiple semantics: PET studies of word and picture processing. Brain Lang. 2004, 89, 417–432. [Google Scholar] [CrossRef] [PubMed]
  132. Jennings, J.M.; McIntosh, A.R.; Kapur, S.; Zipursky, R.B.; Houle, S. Functional network differences in schizophrenia: A rCBF study of semantic processing. Neuroreport 1998, 9, 1697–1700. [Google Scholar] [CrossRef] [PubMed]
  133. Perani, D.; Cappa, S.F.; Schnur, T.; Tettamanti, M.; Collina, S.; Rosa, M.M.; Fazio, F. The neural correlates of verb and noun processing. A PET study. Brain 1999, 122 Pt 12, 2337–2344. [Google Scholar] [CrossRef]
  134. Chee, M.W.; Weekes, B.; Lee, K.M.; Soon, C.S.; Schreiber, A.; Hoon, J.J.; Chee, M. Overlap and dissociation of semantic processing of Chinese characters, English words, and pictures: Evidence from fMRI. Neuroimage 2000, 12, 392–403. [Google Scholar] [CrossRef]
  135. Grossman, M.; Koenig, P.; DeVita, C.; Glosser, G.; Alsop, D.; Detre, J.; Gee, J. Neural representation of verb meaning: An fMRI study. Hum. Brain Mapp. 2002, 15, 124–134. [Google Scholar] [CrossRef]
  136. Heim, S.; Opitz, B.; Friederici, A.D. Broca’s area in the human brain is involved in the selection of grammatical gender for language production: Evidence from event-related functional magnetic resonance imaging. Neurosci. Lett. 2002, 328, 101–104. [Google Scholar] [CrossRef]
  137. Howard, D.; Patterson, K.; Wise, R.; Brown, W.D.; Friston, K.; Weiller, C.; Frackowiak, R. The cortical localization of the lexicons. Positron emission tomography evidence. Brain 1992, 115 Pt 6, 1769–1782. [Google Scholar] [CrossRef]
  138. Moore, C.J.; Price, C.J. Three distinct ventral occipitotemporal regions for reading and object naming. Neuroimage 1999, 10, 181–192. [Google Scholar] [CrossRef]
139. Small, S.L.; Noll, D.C.; Perfetti, C.A.; Hlustik, P.; Wellington, R.; Schneider, W. Localizing the lexicon for reading aloud: Replication of a PET study using fMRI. Neuroreport 1996, 7, 961–965. [Google Scholar] [CrossRef] [PubMed]
  140. Fiez, J.A.; Balota, D.A.; Raichle, M.E.; Petersen, S.E. Effects of lexicality, frequency, and spelling-to-sound consistency on the functional anatomy of reading. Neuron 1999, 24, 205–218. [Google Scholar] [CrossRef] [PubMed]
  141. Fiebach, C.J.; Friederici, A.D.; Muller, K.; von Cramon, D.Y. fMRI evidence for dual routes to the mental lexicon in visual word recognition. J. Cogn. Neurosci. 2002, 14, 11–23. [Google Scholar] [CrossRef]
  142. Demonet, J.F.; Price, C.; Wise, R.; Frackowiak, R.S. Differential activation of right and left posterior sylvian regions by semantic and phonological tasks: A positron-emission tomography study in normal human subjects. Neurosci. Lett. 1994, 182, 25–28. [Google Scholar] [CrossRef] [PubMed]
  143. Binder, J.R.; Frost, J.A.; Hammeke, T.A.; Bellgowan, P.S.; Rao, S.M.; Cox, R.W. Conceptual processing during the conscious resting state. A functional MRI study. J. Cogn. Neurosci. 1999, 11, 80–95. [Google Scholar] [CrossRef] [PubMed]
  144. Davis, M.H.; Meunier, F.; Marslen-Wilson, W.D. Neural responses to morphological, syntactic, and semantic properties of single words: An fMRI study. Brain Lang. 2004, 89, 439–449. [Google Scholar] [CrossRef]
145. Andreasen, N.C.; O’Leary, D.S.; Arndt, S.; Cizadlo, T.; Rezai, K.; Watkins, G.L.; Hichwa, R.D. I. PET studies of memory: Novel and practiced free recall of complex narratives. Neuroimage 1995, 2, 284–295. [Google Scholar] [CrossRef]
  146. Wiggins, G.C.; Elisevich, K.; Smith, B.J. Morbidity and infection in combined subdural grid and strip electrode investigation for intractable epilepsy. Epilepsy Res. 1999, 37, 73–80. [Google Scholar] [CrossRef]
147. Crinion, J.T.; Lambon Ralph, M.A.; Warburton, E.A.; Howard, D.; Wise, R.J. Temporal lobe regions engaged during normal speech comprehension. Brain 2003, 126 Pt 5, 1193–1201. [Google Scholar] [CrossRef]
  148. Vingerhoets, G.; Van Borsel, J.; Tesink, C.; van den Noort, M.; Deblaere, K.; Seurinck, R.; Achten, E. Multilingualism: An fMRI study. Neuroimage 2003, 20, 2181–2196. [Google Scholar] [CrossRef]
  149. Papanicolaou, A.C.; Levin, H.S.; Eisenberg, H.M.; Moore, B.D. Evoked potential indices of selective hemispheric engagement in affective and phonetic tasks. Neuropsychologia 1983, 21, 401–405. [Google Scholar] [CrossRef] [PubMed]
  150. Witteman, J.; Van Heuven, V.J.; Schiller, N.O. Hearing feelings: A quantitative meta-analysis on the neuroimaging literature of emotional prosody perception. Neuropsychologia 2012, 50, 2752–2763. [Google Scholar] [CrossRef] [PubMed]
  151. Stokoe, W.C., Jr. Sign language structure: An outline of the visual communication systems of the American deaf. J. Deaf. Stud. Deaf. Educ. 2005, 10, 3–37. [Google Scholar] [CrossRef] [PubMed]
  152. Neville, H.J.; Bavelier, D.; Corina, D.; Rauschecker, J.; Karni, A.; Lalwani, A.; Turner, R. Cerebral organization for language in deaf and hearing subjects: Biological constraints and effects of experience. Proc. Natl. Acad. Sci. USA 1998, 95, 922–929. [Google Scholar] [CrossRef] [PubMed]
  153. Newman, A.J.; Bavelier, D.; Corina, D.; Jezzard, P.; Neville, H.J. A critical period for right hemisphere recruitment in American Sign Language processing. Nat. Neurosci. 2002, 5, 76–80. [Google Scholar] [CrossRef]
  154. Sakai, K.L.; Tatsuno, Y.; Suzuki, K.; Kimura, H.; Ichida, Y. Sign and speech: Amodal commonality in left hemisphere dominance for comprehension of sentences. Brain 2005, 128 Pt 6, 1407–1417. [Google Scholar] [CrossRef]
  155. Newman, A.J.; Supalla, T.; Hauser, P.; Newport, E.L.; Bavelier, D. Dissociating neural subsystems for grammar by contrasting word order and inflection. Proc. Natl. Acad. Sci. USA 2010, 107, 7539–7544. [Google Scholar] [CrossRef]
  156. Newman, A.J.; Supalla, T.; Hauser, P.C.; Newport, E.L.; Bavelier, D. Prosodic and narrative processing in American Sign Language: An fMRI study. Neuroimage 2010, 52, 669–676. [Google Scholar] [CrossRef]
  157. Newman, A.J.; Supalla, T.; Fernandez, N.; Newport, E.L.; Bavelier, D. Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture. Proc. Natl. Acad. Sci. USA 2015, 112, 11684–11689. [Google Scholar] [CrossRef]
158. Dejerine, J. Contribution à l’étude anatomo-pathologique et clinique des différentes variétés de cécité verbale. Comptes Rendus Société Biol. 1892, 4, 61–90. [Google Scholar] [CrossRef]
  159. Imtiaz, K.E.; Nirodi, G.; Khaleeli, A.A. Alexia without agraphia: A century later. Int. J. Clin. Pract. 2001, 55, 225–226. [Google Scholar] [CrossRef] [PubMed]
  160. Jobard, G.; Crivello, F.; Tzourio-Mazoyer, N. Evaluation of the dual route theory of reading: A metanalysis of 35 neuroimaging studies. Neuroimage 2003, 20, 693–712. [Google Scholar] [CrossRef] [PubMed]
  161. Cohen, L.; Dehaene, S.; Naccache, L.; Lehericy, S.; Dehaene-Lambertz, G.; Henaff, M.A.; Michel, F. The visual word form area: Spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain 2000, 123 Pt 2, 291–307. [Google Scholar] [CrossRef] [PubMed]
  162. Cohen, L.; Lehericy, S.; Chochon, F.; Lemer, C.; Rivaud, S.; Dehaene, S. Language-specific tuning of visual cortex? Functional properties of the Visual Word Form Area. Brain 2002, 125 Pt 5, 1054–1069. [Google Scholar] [CrossRef] [PubMed]
  163. Dehaene, S.; Cohen, L. The unique role of the visual word form area in reading. Trends Cogn. Sci. 2011, 15, 254–262. [Google Scholar] [CrossRef]
  164. Breier, J.I.; Simos, P.G.; Zouridakis, G.; Papanicolaou, A.C. Relative timing of neuronal activity in distinct temporal lobe areas during a recognition memory task for words. J. Clin. Exp. Neuropsychol. 1998, 20, 782–790. [Google Scholar] [CrossRef]
  165. Breier, J.I.; Simos, P.G.; Zouridakis, G.; Papanicolaou, A.C. Temporal course of regional brain activation associated with phonological decoding. J. Clin. Exp. Neuropsychol. 1999, 21, 465–476. [Google Scholar] [CrossRef]
  166. Simos, P.G.; Breier, J.I.; Fletcher, J.M.; Bergman, E.; Papanicolaou, A.C. Cerebral mechanisms involved in word reading in dyslexic children: A magnetic source imaging approach. Cereb. Cortex 2000, 10, 809–816. [Google Scholar] [CrossRef]
  167. Simos, P.G.; Breier, J.I.; Fletcher, J.M.; Foorman, B.R.; Castillo, E.M.; Papanicolaou, A.C. Brain mechanisms for reading words and pseudowords: An integrated approach. Cereb. Cortex 2002, 12, 297–305. [Google Scholar] [CrossRef]
  168. Simos, P.G.; Fletcher, J.M.; Bergman, E.; Breier, J.I.; Foorman, B.R.; Castillo, E.M. Dyslexia-specific brain activation profile becomes normal following successful remedial training. Neurology 2002, 58, 1203–1213. [Google Scholar] [CrossRef]
  169. Papanicolaou, A.C.; Simos, P.G.; Breier, J.I.; Fletcher, J.M.; Foorman, B.R.; Francis, D.; Davis, R.N. Brain mechanisms for reading in children with and without dyslexia: A review of studies of normal development and plasticity. Dev. Neuropsychol. 2003, 24, 593–612. [Google Scholar] [CrossRef] [PubMed]
  170. Malins, J.G.; Gumkowski, N.; Buis, B.; Molfese, P.; Rueckl, J.G.; Frost, S.J.; Mencl, W.E. Dough, tough, cough, rough: A “fast” fMRI localizer of component processes in reading. Neuropsychologia 2016, 91, 394–406. [Google Scholar] [CrossRef] [PubMed]
  171. Taylor, J.S.; Rastle, K.; Davis, M.H. Can cognitive models explain brain activation during word and pseudoword reading? A meta-analysis of 36 neuroimaging studies. Psychol. Bull. 2013, 139, 766–791. [Google Scholar] [CrossRef] [PubMed]
  172. Martin, A.; Schurz, M.; Kronbichler, M.; Richlan, F. Reading in the brain of children and adults: A meta-analysis of 40 functional magnetic resonance imaging studies. Hum. Brain Mapp. 2015, 36, 1963–1981. [Google Scholar] [CrossRef] [PubMed]
  173. Richlan, F.; Kronbichler, M.; Wimmer, H. Functional abnormalities in the dyslexic brain: A quantitative meta-analysis of neuroimaging studies. Hum. Brain Mapp. 2009, 30, 3299–3308. [Google Scholar] [CrossRef] [PubMed]
  174. Paulesu, E.; Danelli, L.; Berlingeri, M. Reading the dyslexic brain: Multiple dysfunctional routes revealed by a new meta-analysis of PET and fMRI activation studies. Front. Hum. Neurosci. 2014, 8, 830. [Google Scholar] [CrossRef]
  175. Pollack, C.; Luk, G.; Christodoulou, J.A. A meta-analysis of functional reading systems in typically developing and struggling readers across different alphabetic languages. Front. Psychol. 2015, 6, 191. [Google Scholar] [CrossRef]
  176. Valaki, C.E.; Maestu, F.; Simos, P.G.; Zhang, W.; Fernandez, A.; Amo, C.; Ortiz, T.; Papanicolaou, A.C. Cortical organization for receptive language functions in Chinese, English and Spanish: A cross-linguistic MEG study. Neuropsychologia 2004, 42, 967–979. [Google Scholar] [CrossRef]