Article

Effect of Lexical-Semantic Cues during Real-Time Sentence Processing in Aphasia

1 School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA 92182, USA
2 Department of Cognitive Science, University of California San Diego, San Diego, CA 92122, USA
3 Joint Doctoral Program in Language and Communicative Disorders, San Diego State University/University of California San Diego, San Diego, CA 92182, USA
4 Department of Speech, Language and Hearing Sciences, California State University East Bay, Hayward, CA 94542, USA
* Author to whom correspondence should be addressed.
Brain Sci. 2022, 12(3), 312; https://doi.org/10.3390/brainsci12030312
Submission received: 14 December 2021 / Revised: 13 February 2022 / Accepted: 22 February 2022 / Published: 25 February 2022

Abstract

Using a visual world eye-tracking paradigm, we investigated the real-time auditory sentence processing of neurologically unimpaired listeners and individuals with aphasia. We examined whether lexical-semantic cues provided as adjectives of a target noun modulate the encoding and retrieval dynamics of a noun phrase during the processing of complex, non-canonical sentences. We hypothesized that the real-time processing pattern of sentences containing a semantically biased lexical cue (e.g., the venomous snake) would differ from that of sentences containing unbiased adjectives (e.g., the voracious snake). More specifically, we predicted that the presence of a biased lexical cue would facilitate (1) lexical encoding (i.e., boosted lexical access) of the target noun, snake, and (2) on-time syntactic retrieval or dependency linking (i.e., an increased probability of on-time lexical retrieval at the post-verb gap site) for both groups. For unimpaired listeners, results revealed a difference in the time course of gaze trajectories to the target noun (snake) during lexical encoding and syntactic retrieval in the biased compared to the unbiased condition. In contrast, for the aphasia group, the presence of biased adjectives did not affect the time course of processing the target noun. Yet, at the post-verb gap site, the presence of a semantically biased adjective influenced syntactic re-activation. Our results extend the cue-based parsing model by offering new and valuable insights into the processes underlying sentence comprehension of individuals with aphasia.

1. Introduction

One property of language processing is the ability to integrate sentential constituents and establish linguistic relationships between non-adjacent pieces of information. Establishing these relationships creates syntactic dependencies and is critical for determining the underlying meaning of a sentence. To successfully understand an utterance, the listener must assign each noun its appropriate role with respect to the verb to which it is linked. This is accomplished automatically (i.e., through reflexive, unconscious, moment-by-moment operations that unfold in real time during sentence processing) via thematic role assignment (for example, determining which noun is the agent or actor and which noun is the theme or object of the verb). In English, which has a strict subject–verb–object word order, this process maps straightforwardly onto the order of input of a simple active sentence (1a); that is, the first noun encountered is the actor or agent, and the noun after the verb is the object. The mapping is also straightforward for more complex sentence constructions that maintain canonical word order, such as subject-relative constructions (1b):
(1a)
The girl hits the boy.
(1b)
The girl that hits the boy is angry.
(1c)
The boy_i that the girl hit_i <the boy> was angry.
This automatic process of assigning thematic roles becomes more challenging for listeners when the sentence structure deviates from the canonical word order. Sentence (1c), above, is an example of a non-canonical sentence (object-relative construction). In this example, the object of the verb ‘hit’ (/the boy/) is fronted or displaced to the beginning of the sentence, causing it to be structurally separated from its underlying (post-verb) position. When processing this sentence, the listener, upon hearing the verb, must link the verb to its object (noted by the subscript ‘i’). This retrieval process allows for its integration with the syntactic and semantic properties of the verb to facilitate interpretation.
Numerous studies have found evidence of re-activation of the direct object at the gap site (the position from which the noun phrase has been displaced is known as a gap; the ‘i’ indexation represents a link between the verb and its structurally licensed direct object) using various online methodological approaches, including probe recognition tasks [1], cross-modal priming tasks [2], and eye-tracking [3,4]. Although dependency linking occurs rapidly and relatively automatically in neurologically healthy individuals, the associated processing cost is higher in non-canonical sentences than in canonically ordered constructions due to the need to form long-distance dependencies [5,6,7,8,9]. Theoretical models of sentence processing, namely cue-based parsing, make specific predictions regarding the processing costs associated with long-distance dependencies. According to these models, the success (as measured by reaction time methods) of retrieving the displaced constituent (i.e., syntactic dependency linking) is a function of the degree of interference from similar items in memory that compete with the retrieval of the target item [9,10,11]. The higher the interference, the more likely it is that the wrong target will be retrieved. Although these models are based on data from neurotypical adults, interference has also been found to contribute to comprehension impairments in post-stroke aphasia [3,12,13]. For individuals with aphasia, the presence of interference can overwhelm the impaired system and lead to breakdowns in comprehension [14,15,16,17], hence the focus of the current study (see Section 1.3 below).
It is generally accepted that comprehension deficits in aphasia are not the result of an irrevocable loss of stored linguistic representations [15,18], but rather stem from disruptions to the automatic operations involved in sentence processing. According to some researchers, these deficits stem from processing impairments at the lexical level [16,19,20,21,22,23], which include disruptions to the processes of lexical access and integration. These are fundamental mechanisms that provide the system with timely lexical information and allow for the incorporation of that information into a syntactic frame. Lexical-level deficits can emerge from impairments of representational encoding and retrieval, which can amplify the effects of interference. As described below, the current paper investigates whether semantic-level manipulations during encoding can reduce interference effects and alleviate retrieval difficulties during sentence processing [24,25]. We expect these semantic-level manipulations during encoding to facilitate (1) lexical processing (i.e., boost representational access) and (2) online syntactic dependency linking (i.e., reduce interference effects) in neurologically unimpaired adults as well as in individuals with aphasia.

1.1. Interference Effect during Sentence Processing

As discussed previously, under many theoretical accounts, successful processing of the verb “chased” in object-relative sentences such as (2) depends on the retrieval of its syntactic direct object, bear, upon encountering the verb [26,27,28].
(2)
It was the bear_i that the hunters chased_i in the cold forest yesterday.
According to the cue-based parsing retrieval theory, a memory representation of the noun bear is formed (encoded) as a bundle or vector of certain syntactic and semantic features, such as [+nominative; +animate; +singular], that are activated when the noun is first encountered in the sentence. These features remain active in some form of memory, but outside the focus of attention, as the sentence constituents unfold. When the comprehender reaches a retrieval point (e.g., the verb), the representation of the noun phrase (NP; “the bear”) must be integrated into the structural frame to be assigned its thematic role. Cue-based parsing theory assumes that dependencies are resolved via a direct-access operation based on their representational content (i.e., content addressability) [29,30,31]. For example, at the verb ‘chased’ in sentence (2), a retrieval mechanism based on linguistic and contextual features is assumed to be immediately triggered, which seeks out a representation with the [+nominative; +animate] features (i.e., something that can be chased). During this content-addressable search, these features or retrieval cues are matched against all possible candidates (i.e., recently activated items) in memory. The likelihood of retrieving a given item is determined by the strength of the match between the features encoded with a given item and the features contained in the retrieval cue.
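To make the content-addressable matching process concrete, the sketch below (in R, the language used for the analyses reported later in this paper) scores each candidate in memory against a retrieval cue and selects the best match. The feature bundles and the simple overlap-counting score are illustrative assumptions, not the implementation of any specific published model.

```r
# Illustrative sketch of cue-based (content-addressable) retrieval:
# each memory item is encoded as a bundle of features, the retrieval cue
# triggered at the verb is matched against every candidate in parallel,
# and the best-matching item wins. Feature sets are invented for illustration.
memory_items <- list(
  bear    = c("+nominative", "+animate", "+singular"),
  hunters = c("+nominative", "+animate", "-singular")
)
retrieval_cue <- c("+nominative", "+animate", "+singular")

# Match score = number of features shared between the cue and a candidate
match_score <- function(item, cue) length(intersect(cue, item))

scores <- sapply(memory_items, match_score, cue = retrieval_cue)
scores
#>    bear hunters
#>       3       2

names(which.max(scores))  # "bear": the candidate with the strongest cue match
```

In the theory itself, match strength modulates the probability and latency of retrieval rather than guaranteeing the correct item, which is where similarity-based interference enters.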
Although the cue-based parsing approach does not make specific predictions about how the quality of encoding affects retrieval probability and latency, recent studies have shown that enriching a word, via the addition of modifiers, facilitates its subsequent retrieval compared to conditions in which the same word is left unmodified [24,25,32,33,34]. In a self-paced reading study, Hofmeister (2011) investigated reading times in neurologically unimpaired participants for sentences that contained a critical noun modified by zero, one, or two adjectives (low, mid, and high complexity conditions, respectively); see (3) below for an example of the high complexity condition. Note that the brackets, parentheses, and indexation have been added to highlight the manipulation but were not used in the study itself.
(3)
It was [the injured and dangerous bear]_i that the hunters chased_i in the cold forest yesterday.
This study reported decreased reading times at the main verb for items in the highest complexity condition (i.e., where the direct object noun was preceded by two adjectives) compared to the other conditions. In a similar experiment, such findings were also observed for nouns that were semantically richer/more specific (e.g., “soldier”) compared to nouns that were less semantically rich/specific (e.g., “person”). Hofmeister (2011) interpreted these results as evidence that, for unimpaired comprehenders, the addition of semantic and syntactic features increased the uniqueness of the target representation compared to other lexical items in the sentence and facilitated retrieval of information and subsequent integration later in the sentence. This finding is in accordance with the predictions of encoding interference, which is assumed to arise from competition associated with the encoding of items with similar features [35]. By increasing the distinctness of the target item, a higher-quality representation was created, reducing encoding interference. An additional finding was increased encoding times for the more complex NP, which may have indexed the additional cognitive effort required to perform combinatorial processing (i.e., incorporating the adjectives into the NP). Increased cognitive effort and the extended time dedicated to the NP in the input may have both served to raise the salience or activation level of the representational network [11,36].

1.2. Evidence from Aphasia of the Effect of Lexical Processing Deficits on Syntactic Processing

As previously mentioned, lexical processing accounts suggest that auditory comprehension deficits in individuals with aphasia (IWA) are mainly due to lexical processing impairments. One such account is the delayed lexical activation hypothesis [16,19], which claims that slowed lexical activation precludes timely syntactic structure building because the parser is not provided with the necessary lexical information when it is needed. Delayed lexical activation occurs in both canonical and non-canonical sentences; however, when a delay occurs in non-canonical constructions, it feeds the syntactic processor too slowly, throwing off retrieval, which results in comprehension breakdowns. Evidence for this hypothesis comes from studies with IWA that have found delayed activation of NPs when they were first encountered in an auditory object-relative sentence as well as at the gap site following a verb [16]. The delayed re-activation of the displaced direct object NP was taken as evidence that IWA are able to perform syntactic computations, but that this process is slowed compared to neurologically unimpaired comprehenders. In this same study, Love and colleagues (2008) found that when the rate of speech input was slowed, IWA showed both on-time initial lexical activation and re-activation at the post-verb gap site, and critically, improved final sentence comprehension. Several other studies with IWA using various methods have also found either delayed lexical activation and/or delayed integration of lexical information into the sentence context [21,37]. For example, Swaab and colleagues (1997), in an event-related potential study, reported that, upon hearing a sentence-final word that violated sentential-semantic constraints (e.g., “The girl dropped the candy on the sky”), IWA showed an N400 component (a neurophysiological index of semantic integration processing) that was either reduced in amplitude or delayed compared to that of neurologically unimpaired individuals [20,21]. Other studies using the eye-tracking-while-listening paradigm (ETL) have also indicated processing deficits in IWA. In an ETL visual world paradigm, participants are asked to listen to sentences over headphones while viewing a visual array displaying four items (characters mentioned in the sentence as well as item(s) unrelated to the sentence). The timing of eye gazes to the characters on the screen while listening to the unfolding sentences is argued to index underlying linguistic processing in real time [38,39,40,41]. Studies using ETL have consistently indicated a late-emerging influence of competitor interpretations during sentence processing for IWA (i.e., interference effects) in incorrectly comprehended trials, providing further evidence for delayed integration [12,13,23]. In these ETL studies, the delay in lexical activation is proposed to result in interference effects when subsequent sentence constituents are activated during auditory processing. Altogether, these studies support the idea that encoding deficits at the lexical level play a significant role in sentence processing, and consequently in comprehension deficits, for IWA. These findings are also in line with theoretical models of sentence processing such as the cue-based parsing theory, which proposes that representational encoding at the word level is the core component of the sentence processing mechanism.

1.3. The Current Study

Using the eye-tracking-while-listening method, we investigate whether local contextual information at the semantic level (provided by an adjective preceding a target lexical item) can be used to facilitate the representational encoding of nouns for listeners with and without aphasia during sentence processing. We further investigate whether the encoding pattern has any downstream effects on the retrieval of the target representations during dependency linking for both groups. Previous studies on this topic have primarily used self-paced reading paradigms to explore these effects during real-time sentence processing [24,25]. To tap into real-time auditory sentence processing, we employed the ETL method with a visual world paradigm (VWP). This method allows us to explore the time course of the proposed manipulation and its effect on processing throughout the sentence. As mentioned above, the way in which a word is encoded during sentence processing impacts the processing of subsequent constituents in the sentence. Therefore, to distill the encoding process of nouns, we examined their activation and de-activation patterns as the sentence unfolded (see Figure 1). The activation pattern is an indication of processing phonological and semantic features to access the target item (manifested as an increasing pattern of gaze movement toward an item). The de-activation pattern is an indication of a change in the level of representation at that time. Here, we operationalized de-activation as representing a shift from integrating the previously processed constituent into the sentence structure to accessing the new input. This is manifested as a decreasing pattern of gaze movement away from an item. Moreover, to distill the retrieval process involved in the verb-frame window, we examined the re-activation of the displaced object and its interference with distractor nouns in the sentence. The interference pattern is an indication of competition between the nouns that remain to be linked to the verb for thematic role assignment.
Table 1 demonstrates the different types of gaze movements that can be used as a metric to reflect varying stages of sentence processing including lexical and structural processing.
Here, we hypothesized that the lexical-semantic cues (in the form of adjectives; see examples 4a and 4b) would facilitate (1) lexical encoding (i.e., boost representational access) and, in turn, (2) on-time syntactic retrieval or dependency linking (i.e., increase the probability of on-time lexical retrieval at the gap site) during auditory sentence processing for both groups. Based on prior research, we anticipated that the overall pattern of online processing in individuals with aphasia would be delayed across conditions compared to the pattern in neurologically unimpaired individuals [13,16,23].
(4a)
Unbiased adjective: The eagle saw the voracious snake_i that the bear cautiously encountered_i <the snake> underneath the narrow bridge.
(4b)
Biased adjective: The eagle saw the venomous snake_i that the bear cautiously encountered_i <the snake> underneath the narrow bridge.

2. Methods

2.1. Participants

Eleven individuals with chronic aphasia (IWA: female = 5, M_age = 54.2 years, SD_age = 8.2) and 11 age-matched controls (AMC: female = 7, M_age = 61.9, SD_age = 2.3) were recruited for this study. The inclusion criteria for both groups were as follows: participants were monolingual native English speakers with no exposure to a foreign language before the age of six; were right-handed (premorbidly for IWA); had no self-reported history of emotional or learning disorders or drug abuse; and had self-reported normal or corrected-to-normal vision and hearing. IWA had to have experienced a single left-hemisphere stroke at least 6 months prior to participation to control for the effect of spontaneous recovery. The diagnosis and severity of aphasia were assessed using standardized aphasia examinations, the Boston Diagnostic Aphasia Examination (BDAE, version 3; [42]) and the Western Aphasia Battery-Revised (WAB-R; [43]), and were confirmed by clinical consensus. Sentence comprehension ability was assessed using the S.O.A.P. Test of Sentence Comprehension [44] (see Table 2); IWA participants in this study demonstrated comprehension deficits, which we defined as at- or below-chance performance on the comprehension of sentences with non-canonical word order (object relatives and passives). The neurologically unimpaired age-matched participants additionally had no self-reported history of brain injury. Participants were excluded from this study if they did not meet the above criteria or were unable to understand directions and complete this study.
All participants were tested at the Language and Neuroscience Group Laboratory at San Diego State University and were paid $15 per session. A review of treatment history reveals that six of our seven participants had received prior treatment for sentence-level deficits, though the extent of treatment (number of sessions, type of treatment, and treatment response) was not available.

2.2. Materials

This study utilized eye-tracking-while-listening with a visual world paradigm (ETL-VWP) to measure auditory sentence processing in real time. In this paradigm, participants listen to sentences over headphones while viewing a 2 × 2 visual array displaying four items (three characters mentioned in the sentence and one item unrelated to the sentence). The timing of eye gazes to the characters on the screen while listening to the unfolding sentences, such as those shown in Table 3, is argued to index underlying linguistic processing in real time [45,46].

2.2.1. Visual Stimuli

Visual stimuli consisted of freely available black-and-white line drawings of animals obtained from the internet and clip art resources that were resized to 450 × 450 pixels. During each trial, 4 images were displayed on the screen. Three of the images corresponded to each of the nouns in the experimental sentence. The fourth image was an unrelated control (e.g., “cat” in Table 3, above). The location of the images was counterbalanced across trials such that pictures corresponding to each noun in the sentence appeared equally as often in the 4 quadrants.
Visual stimuli pretesting: All images used in the experiment had 90% or greater name agreement on a naming pretest conducted with college-aged students naive to the goals of the present experiment (n = 34, M_age = 20.1 years, SD = 1.4).

2.2.2. Sentence Stimuli

The experimental sentences consisted of 30 sentence pairs (60 sentences total) containing non-canonical object-relative constructions that involved a long-distance dependency linking the displaced object (e.g., /snake/) and the relative clause verb (e.g., /encountered/; see Appendix A for the full list of sentences). These sentences were presented in two conditions: with a semantically neutral adjective preceding the displaced NP (i.e., the unbiased adjective condition), or with a semantically related adjective preceding the displaced NP (i.e., the biased adjective condition; see Table 3). The unbiased condition was included to control for the potential effect of the presence of a modifier increasing the salience of the NP (i.e., cognitive effort related to combinatorial processing) and to allow for isolation of the unique effect of the semantic information provided by the adjective (i.e., feature enrichment). In addition to these experimental sentences, 60 canonical sentence structures were included as non-experimental filler sentences. All sentences were recorded by a native English-speaking female at an average rate of 4.47 syllables per second. Each sentence trial was followed by a yes/no question to ensure that participants attended to the sentences.
Sentence stimuli pretesting: Two pretests were conducted to ensure the selection of strong semantically related experimental adjective–noun pairs in the biased condition. In the first pretest, neurologically unimpaired college-aged participants (n = 34, M_age = 20.1 years, SD = 1.4) were shown a series of 120 black and white line drawings one at a time and were instructed to generate a descriptive word (an adjective) that corresponded with the image pictured. Sixty adjective–noun pairs were chosen for which a minimum agreement criterion (i.e., the concurrence of adjective choice (exact or semantically related) across participants) of 50% was met (M = 61%, SD = 10%). As a follow-up, a second pretest assessing semantic relatedness was conducted on the sixty adjective–noun pairs that were generated from the first pretest (e.g., “venomous snake”). A separate group of neurologically unimpaired college-aged participants (n = 23, M_age = 23.3 years, SD = 3.7) rated the semantic relatedness of each adjective–noun pair using a 5-point Likert scale (1 = Not Related; 5 = Highly Related). The thirty adjective–noun pairs with the highest ratings were selected for the ETL-VWP experiment (M = 4.59, SD = 0.41). To create an unbiased match for each of the experimental sentences, unbiased or neutral adjectives were chosen that were matched for syllable length, lexical frequency, and phonemic onset (t(59) = 0.08, p = 0.94).

2.3. Procedure

This was a within-subjects experiment in which the trials were distributed and counterbalanced across 4 visits. Visits were spaced a minimum of one week apart. At the beginning of each visit, 10 practice trials were conducted to ensure understanding of the task. During the practice trials, the experimenter provided feedback as necessary. During each visit, participants were seated 60 cm from a computer screen with an attached Tobii X-120 eye-tracker and wore over-the-ear headphones for auditory stimulus presentation. The eye-tracker was calibrated at the beginning of each experimental session. Across each trial, gaze location was sampled at a rate of 60 Hz (every 17 ms) from both eyes. Stimuli were presented using E-Prime 2.0 software (Psychology Software Tools, Pittsburgh, PA, USA). Each trial began with a fixation cross presented for 500 ms, followed by a blank screen for 250 ms. Next, the four-picture display was presented for 1500 ms before the auditory sentence began and remained on screen for 500 ms after the sentence ended (see Figure 2). To ensure that all participants were attending to the sentences, following each trial, an offline measure was administered during which participants heard a question related to the sentence (e.g., Was the bear under the narrow bridge?). Participants were instructed to respond as quickly as possible with a binary decision via a button box (YES/NO) using their left, non-dominant hand. The questions were related to the action of either the first or the third noun phrase of each sentence so as not to bring specific attention to the target displaced object NP. Half of the questions were designed to elicit a YES response.

2.4. Analysis Approach

Preprocessing and analyses of eye-tracking data were performed using the eyetrackingR package [47] in R (R Core Team, 2019). In this study, gaze data from 60 trials across 22 individuals (11 in each of the AMC and IWA groups) were sampled. All data across both groups and conditions were inspected to ensure that gaze patterns for the initial NP were evident. Furthermore, visual inspection revealed that for two sentences (in the biased condition) there were no discernible gazes to the first noun in the sentence (N1) after its auditory presentation. Based on the rationale that the lack of gazes to N1 reflected either technical errors in gaze sampling during data collection or listeners’ difficulty distinguishing the visual items on the screen, data from these two sentences were removed from further analysis. Moreover, data from one sentence (from the unbiased condition) were excluded from analysis because the gaze patterns reflected semantic biasing towards a distractor noun in the sentence. In total, data from the 57 remaining experimental sentences were subjected to further analyses.

2.4.1. Preprocessing

Preprocessing of the eye-tracking data was conducted to check for trackloss, aggregate the data points across trials, and group them into temporal bins. Trackloss occurs when gaze data are unavailable for both of the participant’s eyes (e.g., when they turn away or blink), which results in low validity of the recorded gaze location (Tobii’s acceptable validity range is 0–2 on a scale of up to 4). Trials in which the trackloss proportion was greater than 25% were excluded from further analyses, resulting in the removal of data from 19% of the trials. After reviewing the number of remaining trials available for analysis, it was determined that any participant who had more than 50% of their trials excluded due to the criteria listed above would be removed from further analysis. This resulted in the exclusion of three participants (2 in the AMC and 1 in the IWA group) from the dataset. Data from the remaining 9 AMC and 10 IWA participants were aggregated across trials and grouped into 100 ms time bins. This approach is used as a strategy to account for the inherent dependency in time-series eye-tracking data, which can inflate type I error rates. For each bin, the proportion of gaze within each AOI was estimated from the binary response variable (within or outside of an AOI) [48]. Gaze proportions were then subjected to statistical analysis (described below).
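For readers unfamiliar with this kind of pipeline, the R sketch below illustrates the trackloss filtering, 100 ms binning, and proportion-of-gaze computation just described. It is a simplified stand-in for the eyetrackingR pipeline used here, not the actual code; the column names (participant, trial, item, condition, time_ms, trackloss, aoi_target) are hypothetical.

```r
library(dplyr)

# Simplified sketch of the preprocessing described above (not the actual
# eyetrackingR pipeline). `gaze` is assumed to hold one row per 60 Hz sample,
# with logical trackloss and aoi_target flags for each sample.
preprocess_gaze <- function(gaze) {
  gaze %>%
    group_by(participant, trial) %>%
    filter(mean(trackloss) <= 0.25) %>%         # drop trials with > 25% trackloss
    ungroup() %>%
    mutate(time_bin = floor(time_ms / 100)) %>% # collapse samples into 100 ms bins
    group_by(participant, item, condition, time_bin) %>%
    summarise(prop_target = mean(aoi_target),   # proportion of samples in the AOI
              .groups = "drop")
}

binned <- preprocess_gaze(gaze)  # `gaze` is a hypothetical raw-sample data frame
```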

2.4.2. Statistical Analysis Approaches

Growth curve analysis (GCA) was used to explore the dynamic patterns of gaze movement over time in a preselected window of interest within the sentence. The GCA approach has been widely used in the analysis of gaze data in the visual world paradigm [49,50,51,52,53,54]. GCA is a multi-level modeling technique specifically designed to capture change over time using orthogonal polynomials [50]. The effects of the variables of interest on the polynomial terms provide a way to quantify and evaluate those effects on statistically independent (i.e., orthogonal) aspects of the gaze proportion trajectory. In the GCA approach, the level 1 model captures the overall gaze time course, with the intercept term reflecting the average overall gaze proportion. The linear term reflects a monotonic change in gaze proportion (similar to a linear regression of gaze proportion as a function of time), while the quadratic term reflects the symmetric rise and fall rate around a central inflection point [50]. The level 2 submodels capture the fixed effects of experimental conditions or group effects (categorical variables) on the level 1 time terms. The models in the current study included random effects of participants and items on the intercept, linear, and quadratic time terms. Moreover, random slopes for condition were added per subject to achieve a maximal random effects structure [55]. Using the GCA approach, the fixed effects of the variables of interest were added individually and evaluated using model comparisons to examine whether a particular effect made a statistically significant contribution to model fit. Improvements in model fit were evaluated using −2 times the change in log-likelihood, which is distributed as χ2 with degrees of freedom equal to the number of parameters added [48]. In this study, all analyses were conducted with the statistical software R 3.2.1, using the package lmerTest [56].
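As a concrete illustration, the sketch below fits a growth curve model of the kind described, using lmerTest::lmer with orthogonal linear and quadratic time terms and a likelihood-ratio model comparison. It reuses the hypothetical `binned` data frame from the preprocessing sketch above; the exact fixed- and random-effects structures of the models reported here are given in the appendices, so treat this only as a schematic.

```r
library(lmerTest)  # lmer() with mixed-effects support via lme4

# Orthogonal polynomial time terms (level 1 of the growth curve model)
ot <- poly(binned$time_bin, degree = 2)
binned$ot1 <- ot[, 1]   # linear term
binned$ot2 <- ot[, 2]   # quadratic term

# Base model: overall gaze time course, with random effects for participants
# (including a random slope for condition) and for items
base_model <- lmer(
  prop_target ~ ot1 + ot2 +
    (ot1 + ot2 + condition | participant) + (ot1 + ot2 | item),
  data = binned, REML = FALSE
)

# Full model: add the fixed effect of condition on the intercept and time terms
full_model <- update(base_model, . ~ . + condition * (ot1 + ot2))

# Likelihood-ratio test: does condition significantly improve model fit?
anova(base_model, full_model)
```

The anova() comparison reports the chi-square statistic corresponding to −2 times the change in log-likelihood described above.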
Cluster analysis was used to determine whether there were any time windows in which the looking patterns significantly differed between conditions (e.g., biased versus unbiased) within groups. The rationale of this method is to identify whether there is a series of consecutive time bins that shows a significant effect of condition. If the observed cluster of consecutive time bins is larger than expected under a null distribution, we can be confident that the gaze pattern differs between the two conditions during the specified time window. Cluster analysis has been used in EEG studies [57] and in the visual world paradigm [52,58,59]. In this method, a separate test for the critical interaction was conducted at each individual time bin (see below). If a time bin (20 ms) passed a determined threshold (p-value smaller than 0.05), it was clustered together with adjacent significant bins. Finally, to correct for multiple comparisons, a non-parametric permutation test was conducted to determine the p-value for a given cluster size. For this analysis, we used the eyetrackingR divergence analysis package [47].
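The logic of the cluster-based permutation test can be sketched generically as follows. This is an illustration of the method, not the eyetrackingR implementation, and it ignores the within-subject structure that the reported analysis respects when permuting; it again assumes the hypothetical `binned` data frame, restricted to one group and containing a two-level condition column.

```r
set.seed(1)

# t statistic and significance flag for every time bin, given condition labels
bin_stats <- function(d, labels) {
  sapply(split(seq_len(nrow(d)), d$time_bin), function(idx) {
    tt <- t.test(d$prop_target[idx] ~ labels[idx])
    c(stat = unname(tt$statistic), sig = tt$p.value < 0.05)
  })
}

# Largest summed-t "mass" over runs of consecutive significant bins
max_cluster_mass <- function(tb) {
  runs   <- rle(as.logical(tb["sig", ]))
  ends   <- cumsum(runs$lengths)
  starts <- ends - runs$lengths + 1
  masses <- mapply(function(s, e, ok) if (ok) sum(tb["stat", s:e]) else 0,
                   starts, ends, runs$values)
  max(abs(masses))
}

observed <- max_cluster_mass(bin_stats(binned, binned$condition))

# Null distribution: shuffle condition labels and recompute the largest cluster
null_masses <- replicate(2000,
  max_cluster_mass(bin_stats(binned, sample(binned$condition))))

mean(null_masses >= observed)  # permutation p-value for the observed cluster
```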

3. Results

Offline processing: Recall that participants were asked a yes/no question after each trial to ensure that they paid attention to the sentences. While these data were not used to inform the online analysis, we fit a mixed-effects logistic regression model to explore group and condition differences in accuracy. The results revealed an effect of group (AMC vs. IWA); specifically, the IWA group performed worse than the AMC group (estimate = −1.12, SE = 0.23, p < 0.05). No effect of condition was found for accuracy within the AMC or IWA group (AMC: biased = 77.8%, unbiased = 79.3%; IWA: biased = 60.6%, unbiased = 61.4%).
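A model of the kind described here can be specified as below. This is a sketch with assumed variable names (accuracy, group, condition, participant, item, offline_responses), not the exact specification used: trial-level accuracy as a binary outcome, group and condition as fixed effects, and crossed random intercepts for participants and items.

```r
library(lme4)

# Sketch of the offline accuracy analysis; all object and column names are
# assumptions for illustration.
acc_model <- glmer(
  accuracy ~ group * condition + (1 | participant) + (1 | item),
  data = offline_responses, family = binomial
)
summary(acc_model)
```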
Online processing: The subsequent analyses are focused on the condition differences between each group at specified windows of interest that are discussed in Figure 3.
Figure 4 represents the time course of gazes during sentence processing for the two groups (AMC and IWA) across the two conditions (biased and unbiased). As shown in Figure 4 using colored dashed lines, there are critical parts of the sentences that were the focus of the analysis as described below.

3.1. Experimental Condition Effect on Lexical Processing

In an ongoing sentence, the way in which a word is processed will impact the processing of subsequent words in the sentence. Below, we present the processing patterns of the nouns starting from the beginning of the sentence and moving forward in time in a linear fashion. In the following sections, we examine the processing of each noun by analyzing its activation and de-activation patterns.

3.1.1. Effect of Condition on Encoding the Noun Preceding the Manipulation (N1)

Here, we examined whether the adjective manipulation affected the processing of the preceding noun (e.g., deactivation of N1 upon hearing the adjective). We specified the window of analysis to begin 100 ms after the onset of the first noun phrase (this offset allows time for planning and executing an eye movement) and to extend 2500 ms afterward (corresponding to the average offset of N2; “the eagle saw the /adjective/ snake”, see Figure 3). Gaze data and curve fits for the interaction effects of group (AMC, IWA) and condition (biased and unbiased) for processing N1 are plotted in Figure 5 (see Appendix B for GCA modeling details). Upon visual inspection, unlike the IWA, the AMC showed a stronger de-activation pattern in the biased condition compared to the unbiased condition.
The results of the individual parameter estimates revealed a simple effect of condition on the linear term, indicating that the average rate of change in gazes to and away from N1 for the AMC group differed between the biased and unbiased conditions (estimate = 0.31, SE = 0.13, t = 2.50, p = 0.02). The positive estimate indicated that the average rate of N1 processing (i.e., activation and deactivation over time) was lower in the unbiased condition than in the biased condition. Moreover, there was an interaction effect of group and condition on the linear term (estimate = −0.37, SE = 0.14, t = −2.67, p = 0.01): the negative estimate on the linear term indicated that the difference in the processing of N1 between conditions was smaller for the IWA group than for the AMC group. Table 4 shows the full results of this analysis.
In summary, the GCA analysis suggests that there is a main effect of condition for the AMC group. To understand when in the time course the difference between conditions occurred, we conducted a permutation cluster analysis. In 2000 permuted samples, with an alpha of 0.05, the analyses revealed one significant cluster within 1640–2280 ms (cluster sum statistic = 97.03, p = 0.01), which corresponded with the onset of the adjective. Therefore, the difference between conditions for AMC individuals occurred at the point where the biased adjective was heard in the sentence. Moreover, when the adjective was semantically biased toward the next upcoming item (N2), AMC listeners showed an earlier disengagement from N1.

3.1.2. Effect of Condition on Encoding the Noun following the Manipulation (N2)

Recall that time window 2 extends from the onset of the adjective until the average offset of noun 3 (“the /adjective/ snake that the bear”, see Figure 3). Here, we sought to capture the effect of adjective bias on the processing patterns of the upcoming noun (N2). We employed the same analysis approach as described above for time window 1 (see Appendix C for GCA modeling details). Table 5 shows the full results of this analysis and Figure 6 shows the trajectory of effects.
The results of the individual parameter estimates revealed a marginal effect of group on the intercept term (estimate = −0.05, SE = 0.02, t = −1.84, p = 0.066): this negative estimate, while not statistically significant at the 0.05 level, suggests that the average proportion of gazes toward N2 was lower in the IWA group than in the AMC group. In addition, there was a marginal effect of condition for the AMC group on the linear term (estimate = 0.20, SE = 0.11, t = 1.76, p = 0.08): the positive estimate indicates that the average rate of N2 processing (i.e., activation and deactivation over time) was lower in the unbiased condition than in the biased condition. While not statistically significant at the 0.05 level, this result suggests that the average rate of change in looking at N2 for the AMC group differed between the conditions. Furthermore, the results revealed a marginal interaction effect of group and condition on the linear term (estimate = −0.22, SE = 0.12, t = −1.78, p = 0.07): the negative estimate on the linear term indicated that the difference in the processing of N2 between conditions was smaller for the IWA group than for the AMC group. Overall, the GCA analysis indicated no effect of the biased condition for the IWA group.
The GCA analysis revealed a marginal difference between the groups in the proportion of gazes toward N2. Moreover, the results revealed a marginal effect of condition in the AMC group. To determine when in the time course the difference between conditions occurred for the AMC group, we conducted a permutation cluster analysis. In the 2000 permuted samples, with an alpha of 0.05, the analyses revealed one marginally significant cluster within 1920–2160 ms (cluster sum statistic = −40.56, p = 0.06), which corresponds with the offset of N2. The marginal difference between conditions for AMC individuals occurred later, when N3 was heard in the sentence. The biased condition resulted in an earlier disengagement from N2 compared to the unbiased condition. Therefore, the condition difference is mainly reflected in the earliness of disengaging from the already activated item. The analysis of the downstream effect of adjectives on encoding the third noun (N3) is discussed in Appendix D.
In summary, the results for encoding the noun phrases (N1, N2, and N3) revealed that the presence of the adjective had a local effect on the processing of the first and second nouns in the sentence for the AMC group. In the biased condition, the AMC group showed earlier disengagement from N1 and N2 upon hearing the next upcoming target nouns, which suggests that the addition of the adjective facilitated semantic integration processes as the speech stream unfolded. However, IWA showed impaired lexical processing patterns compared with AMC, as demonstrated by their lower rate and magnitude of gaze proportions toward the target nouns upon hearing them in the sentence.

3.2. Experimental Condition Effect on Syntactic Retrieval

Recall that in the post-verb window, successful dependency linking is evidenced as a re-activation of the direct-object noun (N2). The post-verb window is specified to begin at the onset of the verb and extend 1200 ms afterward (corresponding to “encountered_i underneath the narrow”, see Figure 3) to allow time for re-activation at the verb site as well as the spillover region. In this window, we explored whether re-activation occurred and inspected the presence of an interference effect during dependency linking by analyzing the gaze proportion of N2 (the to-be-retrieved noun, henceforth “target”) relative to N1 and N3 (interfering nouns, henceforth “competitors”). Based on the cue-based parsing approach, upon encountering the verb, retrieval cues are triggered to search for a direct-object noun (N2); however, there are additional noun phrases whose features overlap with the target, creating competition between the target (N2) and the non-target nouns (N1 and N3). Of importance is how the biased adjective modulates the interference effects of non-target items across the groups. The evidence for re-activation of N2 at the gap site (verb-frame) across the AMC and IWA groups is discussed in Appendix E. In the sections that follow, we examine each group separately and explore the effect of condition on the interference effects of N1 (Section 3.2.1) and N3 (Section 3.2.2) during re-activation of N2.

3.2.1. Effect of Condition on Re-Activation of N2 Relative to N1 at the Verb-Frame (Time Window 4b)

After establishing the presence of re-activation, the next question is whether the adjectives led to facilitation in retrieval. To understand the effect of condition on the proportion of N2 retrieval at the gap site, we built separate models for each group that included the interaction of the fixed effects of image (N2 vs. N1) and condition. This interaction term reflects the extent to which the difference between the N2 and N1 fixation time courses differed between conditions.
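Schematically, a per-group model of this kind can be written as below, reusing the growth-curve setup sketched in Section 2.4.2. The data frame and column names (verb_window, prop, image, condition, ot1, ot2, participant, item, group) are assumptions, and the random-effects structure is simplified; this is not the exact specification reported in the tables.

```r
library(lmerTest)

# Sketch of the AMC interference model: gaze proportions in the post-verb
# window in long format, one row per image (N1 vs. N2) per time bin, with the
# image x condition interaction estimated on the intercept and time terms.
amc_model <- lmer(
  prop ~ (ot1 + ot2) * image * condition +
    (ot1 + ot2 | participant) + (ot1 + ot2 | item),
  data = subset(verb_window, group == "AMC"),
  REML = FALSE
)
summary(amc_model)  # the image x condition terms index the interference effect
```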
The individual parameter estimates for the AMC group (Table 6) revealed that the activation of N2 was lower in the unbiased condition compared to the biased condition (estimate = −0.031, SE = 0.02, t = −1.97, p < 0.05).
The individual parameter estimates for IWA (Table 7) revealed that the activation level of the N1 competitor was higher in the unbiased condition compared to the biased one (estimate = 0.05, SE = 0.02, t = 2.15, p < 0.05). Moreover, the activation of target N2 at the intercept level was lower in the unbiased condition (estimate = −0.10, SE = 0.01, t = −6.26, p < 0.05) compared to the biased condition. There was a significant interaction effect at the linear term that revealed a later emerging increase in activation of N2 in the unbiased condition (estimate = 0.24, SE = 0.05, t = 4.79, p < 0.05).
Overall, these results indicate a larger interference effect of N1 in the unbiased condition. Nevertheless, the lexical-semantic cues in the biased condition did not seem to benefit the IWA listeners enough to robustly reactivate N2 compared to N1.
Altogether, these sets of results from AMC and IWA indicated that the adjective type affected the dynamics of target re-activation at the gap site. In the biased condition, for the AMC group, the level of N2 re-activation was higher. Moreover, for the IWA group, the level of target N2 activation was higher while N1 interference was reduced (see Figure 7, red boxes).

3.2.2. Effect of Condition on Re-Activation of N2 Relative to N3 at the Verb-Frame (Time Window 4b)

After establishing the interference effect of N1, we conducted another analysis to examine the interference effect of N3 (the subject of the relative-clause verb) during the re-activation of N2. As discussed previously, the recently activated representation of N3 can induce an interference effect during N2 re-activation. To investigate the effect of condition on the interference effect of N3, we repeated the models constructed for each group above, examining the interaction of condition with images N2 and N3.
The results of the AMC group (Table 8) revealed that the re-activation of N2 was lower in the unbiased condition when compared to the biased one (linear term estimate = −0.10, SE = 0.05, t = −2.22, p < 0.05). Moreover, the activation of N3 was higher in the unbiased condition (intercept term estimate = 0.04, SE = 0.02, t = 2.02, p < 0.05) and its rate was increasing (linear term estimate = 0.21, SE = 0.06, t = 3.19, p < 0.05) when compared to the biased condition. These results revealed that the biased adjective affected the dynamics of target re-activation at the gap site and reduced the interference effect of N3.
The results of the IWA group (Table 9) revealed an earlier increase in the rate of N2 activation over time in the biased compared to the unbiased condition (linear term estimate = 0.13, SE = 0.04, t = 3.09, p < 0.05). Yet, the activation dynamics of N3 did not differ between the conditions.
Altogether, these sets of results indicated that the biased adjective modulated the re-activation of N2 in both groups and reduced the interference effect of N3 in the AMC group (see Figure 7, red boxes). See Appendix F for the full summary of the results section.

4. Discussion

In this study, we examined whether lexical-semantic cues as premodifiers of a target noun (N2) modulated the encoding (activation and deactivation gaze patterns) and retrieval (re-activation) dynamics of a noun phrase during the auditory processing of non-canonical sentences in both age-matched neurologically unimpaired listeners (AMC) and individuals with aphasia (IWA). We hypothesized that the lexical-semantic cues (in the form of adjectives) would facilitate (1) lexical encoding (i.e., boost representational access) and (2) downstream syntactic retrieval or dependency linking (i.e., increasing the probability of lexical retrieval at the gap site) during auditory sentence processing for both groups. The results revealed that the AMC group had a higher rate of activation and deactivation of nouns in the biased compared to the unbiased/neutral condition. Moreover, at the gap site, the accessibility of the target item (the displaced object noun, N2) was higher in the biased condition, which resulted in facilitated retrieval at the gap site. Our results from the AMC group are consistent with previous studies showing that semantically richer noun phrases that are encoded more ‘deeply’ are more accessible in memory at critical syntactic positions during sentence processing [24,25,32,33,34]. In contrast to the results found for the AMC group, the presence of biased adjectives did not affect the rate of lexical access of the target noun in the IWA group upon hearing the adjective in the sentence. Yet, in the post-verb-frame window, there was higher activation of target N2 and reduction in interference from the first noun competitor (N1). In the following sections, we discuss the results of the AMC group and then turn our discussion toward the IWA group to interpret the mechanism underlying the effect of the lexical-semantic cues (premodifier, adjective) during auditory sentence processing.

4.1. Real-Time Dynamics of Lexical Encoding and Retrieval during Sentence Processing in Unimpaired Individuals

In a self-paced reading paradigm, Hofmeister (2011) found that, in neurotypical individuals, semantic complexity of the displaced object noun phrase or to-be-retrieved noun (e.g., “the injured and dangerous bear” versus “the bear”) resulted in longer reading times (i.e., longer encoding, or deeper processing) but later yielded faster reading times at sentence-internal retrieval or re-activation sites [25]. The author suggested that richer representations containing typical or highly predictable feature combinations yielded retrieval facilitation at the retrieval site during sentence processing. In the current study, using an eye-tracking-while-listening visual world paradigm, we found that the semantically biasing adjectives boosted the activation of the target noun's representational features during its initial processing. We suggest that the presence of a biased adjective led to a greater spreading of activation, such that accessing the set of features associated with the adjective primed the activation of the semantic features of the target noun [60,61,62]. In other words, the semantically biased adjective increased associative strength and representational complexity during the processing of the target lexical item. Specifically, the presence of an adjectival cue as a premodifier provides a contextually unique feature for the target item that no other competitor shares. This can reduce the interference arising from the simultaneous presence of representations with overlapping features in memory (i.e., similarity-based interference) and improve the chances of its recoverability. As suggested by Nairne [35,63,64], the probability of retrieving a memory representation increases with the similarity or feature overlap of the retrieval cues and the target, and decreases with the similarity of the cues to other memory candidates (see [65] for a full description of Nairne's feature-based retrieval model). Based on Nairne’s conceptual formulation, the probability of retrieving a representation E1, given a retrieval cue set X1, depends on the similarity or relatedness in features of X1 and E1, as well as the similarity or relatedness of X1 to other memory candidates (E2, E3, E4, …, En). This ratio model is designed to describe the distinctiveness property of a cue (e.g., “venomous snake” versus “voracious snake” when both “bear” and “eagle” can also be voracious) during retrieval.
$$\Pr(E_1 \mid X_1) = \frac{s(X_1, E_1)}{\sum_{n} s(X_1, E_n)}$$
The numerator of this formulation refers to the similarity of X1 and E1, which varies as a function of the number of matching and mismatching features between the two terms, as captured by the formulation below using the distance d between them. This means that similar items (items containing few mismatching features) will be nearby in this feature space and produce the largest similarity values.
$$s(X_1, E_1) = e^{-d(E_1, X_1)}$$
If the goal is to recover the representation E1 in the presence of a particular cue X1, the probability of retrieving E1 is highest when its features are similar to the cue X1 (the numerator of the equation) and dissimilar to other possible retrieval candidates (the denominator). Therefore, target retrieval is proportional to the cue–target match and inversely proportional to the amount of cue overload. Ultimately, the more contextually unique features E1 possesses (i.e., features that no other competitor shares), the greater the probability of compatibility with X1, and thus the better the chances of successful retrieval. In our case, the biasing adjectives make the target noun (snake: E1) distinct from the other competitor items (eagle: E2 and bear: E3), thus reducing the level of cue overload, which can result in a higher probability of retrieving target item E1. Moreover, using this feature-based model of retrieval, Hofmeister et al. (2013) suggested that increasing representational complexity increases the probability that some features will be unique and therefore helps distinguish a representation from other competitors in memory [65]. With respect to cue-based retrieval theories, such uniqueness may create a better match with the set of retrieval cues at the gap site [9]. Therefore, adding unique information can be quite helpful for memory retrieval. In the present study, regarding the downstream effects, the additional lexical-semantic information provided by the biasing adjective increased the representational complexity of the target noun and increased its distinctiveness at the time of retrieval for neurotypical individuals. The results revealed that the AMC group disengaged from the first noun phrase earlier upon hearing the biasing adjective–noun phrase compared to the neutral adjective in the unbiased condition. Moreover, they retrieved (reactivated) the target N2 earlier at the retrieval site (post-verb-frame window) and showed an increase in its rate of re-activation in the biased condition compared to the unbiased condition. Therefore, the results from the AMC group are consistent with previous studies showing that semantically richer nouns are more accessible in memory [24,25,32,34].
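As a toy illustration of how the ratio rule captures this distinctiveness advantage (the feature distances below are invented for illustration and are not estimated from our data), a biasing adjective that shrinks the cue-to-target distance while leaving the competitors far away concentrates the retrieval probability on the target:

```r
# Toy illustration of the ratio rule: similarity s = exp(-d), where d is an
# invented feature distance between the retrieval cue and each candidate noun.
ratio_rule <- function(d) { s <- exp(-d); round(s / sum(s), 2) }

# Biased adjective ("venomous"): cue features closely match the snake
ratio_rule(c(snake = 1, eagle = 4, bear = 4))
#> snake eagle  bear
#>  0.91  0.05  0.05

# Unbiased adjective ("voracious"): cue features also fit the competitors
ratio_rule(c(snake = 2, eagle = 3, bear = 3))
#> snake eagle  bear
#>  0.58  0.21  0.21
```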

4.2. Real-Time Dynamics of Lexical Encoding and Retrieval during Sentence Processing in Individuals with Aphasia

Unlike the AMC group, IWA did not demonstrate sensitivity to the lexical-semantic cues (biased adjectives) in their rate of initial lexical access. However, their re-activation of the displaced item changed in the post-verb-frame window: IWA demonstrated a reduction in the interference effect arising from the competitor item in the sentence. The lack of sensitivity of IWA to the lexical-semantic cue during real-time processing could be attributed to their inefficiency in accessing or maintaining the representational features in real time. Research exploring real-time processing in aphasia has suggested that these individuals have a delay in lexical access, causing the critical semantic features to be unavailable for fast-acting syntactic processes [13,16,37]. Yet, these deficits can be overcome when the rate of auditory input is slowed and the time constraints for retrieval are relaxed [16]. This is in line with studies of neurotypical individuals suggesting that the addition of time at specific points during processing allows for a deeper encoding of sentential constituents, leading to a strengthened representation [52,66]. In the current study, we aimed to strengthen the representations via biasing adjectives, though this approach was unsuccessful in modulating IWA's initial lexical access. One explanation for the lack of sensitivity of IWA to the contextual cue is interference from active representations that fall outside the scope of other sentential constraints, such as phonological form [67,68,69]. Although we do not have direct evidence to support this, it has been suggested that impairments in cognitive control processes (such as cognitive flexibility and inhibitory control) can increase the interference from context-independent distractors during sentence processing [70]. Investigations into the effect of these impaired processes in IWA should be considered in future endeavors.

4.3. The Underlying Nature of Lexical-Semantic Processing Deficit in Aphasia

Nozari (2019) introduced a theoretical framework that accounts for the empirical findings surrounding lexical-access deficits in aphasia [71]. The framework, which is based on language production, can be generalized to comprehension processes because it concerns shared mechanisms (namely representational semantic storage and cognitive control processes) that are involved in both production and comprehension. Nozari (2019) demonstrated that lexical access deficits in aphasia can have two distinct etiologies by presenting a case of a double dissociation between two IWA. One case showed a profile of impaired activation of semantic features of the target lexical items (activation deficit), while the other case showed a profile compatible with impaired inhibition of competing lexical items (inhibition deficit). IWA with activation deficits suffer from lower-than-normal activation of representational features (semantic or phonological), which can lead to smaller differences between items during the spread of activation and ultimately impede the absolute selection of an item [72]. IWA with inhibition deficits suffer from increased activation of semantic competitors, which hinges on the malfunction of the inhibitory process that suppresses the activation of unrelated representations. Deficits in either of these mechanisms can explain why IWA, upon first hearing a target noun, demonstrated a lack of sensitivity to the distinctiveness manipulation in the biased condition. This framework is related to the feature-based retrieval formulation of Nairne (2006 and references therein), according to which the probability that a target representation will be selected depends on the cue–target feature match and on the distinctiveness of the target from competitor items, which together dictate the level of interference among the candidate representations. Our results demonstrate that IWA do not appear to be initially sensitive to the distinctiveness property of a cue during the fast-acting real-time processing of sentences, presumably because of impairments in the timely execution of these processes. However, they evinced a later-emerging reduction in the interference effect between the target noun and the competitor noun (N1) during the post-verb-frame window, suggesting that distinctiveness is processed, just with a delay.
Altogether, if the intention is to mitigate the initial delay in lexical access, then the addition of biasing adjectives as premodifiers may not be an ideal approach for boosting representational access in memory for IWA, as they show delayed access to representational features. Future investigations may explore adding modifiers after the noun (post-modifiers, e.g., “It was the bear with large claws that the hunter chased into the evening”, as compared to a matched sentence with a neutral prepositional phrase after the target noun), as those may offer a better route for enriching the semantic features of a representation. Post-modifiers may be more efficiently encoded by IWA during online sentence processing. Studies with neurotypical individuals suggest that, in the case of postmodifiers, the memory representation (semantic and syntactic features) of the head noun becomes reactivated as the modifying information is being encoded [11]. Since the full lexical semantics of the head noun is available in the case of postmodifiers, an immediate re-activation of both syntactic and semantic information can lead to more robust representational access and allows time for lexical processing to be fully executed. Future studies need to examine the effect of pre- and postmodifiers on the sentence processing patterns of individuals with aphasia. Another viable approach to modulating sentence processing and reducing the interference effect for IWA is to directly manipulate the representational features of the target item and make them inherently mismatching with those of other competitor items in the sentence. In a similar vein, previous reading studies with neurotypical individuals have shown that a mismatch in the properties of the encoded referents of the sentence, such as “the general” and “Christopher” in (5b), can minimize similarity-based interference effects when compared to (5a) and therefore increase the probability of on-time target retrieval [5,6,73]. This approach could be more useful for IWA than increasing the syntactic and semantic representational complexity of the to-be-retrieved item by adding modifiers.
(5a) It was the general_i that the lawyer chased_i <the general> in the office yesterday.
(5b) It was the general_i that Christopher chased_i <the general> in the office yesterday.
(5c) It was the victorious four-star general_i that the lawyer chased_i <the general> in the office yesterday.
One limitation of this study is the small sample size of the aphasia group, which precluded individual-level analyses. Moving forward, it would be useful to relate features of stroke-induced lesions, such as size, location, and white-matter damage, to variability in language outcomes across individuals with aphasia.

5. Conclusions

Altogether, the current study improves our understanding of how words are encoded, processed against competitor items, and retrieved during language comprehension in neurologically unimpaired and impaired populations. We demonstrate that a boost in representational access via premodifiers (biasing adjectives) can facilitate syntactic processing in unimpaired listeners, whereas disruption in the timely activation of compatible representations can reduce sensitivity to premodifying lexical-semantic cues among IWA.

Author Contributions

Conceptualization, N.A. (Niloofar Akhavan), M.G. and T.L.; methodology, N.A. (Niloofar Akhavan), C.B. and T.L.; formal analysis, N.A. (Niloofar Akhavan), C.S. and C.B., writing—original draft preparation, N.A. (Niloofar Akhavan); writing—review and editing, N.A. (Niloofar Akhavan), C.S., C.B., N.A. (Noelle Abbott), M.G. and T.L.; supervision, T.L.; visualization, N.A. (Niloofar Akhavan) and T.L.; funding acquisition, N.A. (Niloofar Akhavan), C.S., C.B. and T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the William Orr Dingwall Foundation (N.A.); UCSD Friends of the International Center (N.A.); The David A. Swinney Fellowship (N.A. and C.J.B.); the UCSD Tribal Membership Initiative (C.M.S); NIH NIDCD award numbers T32 DC007361 (trainees: M.G., C.J.B., N.T.A, C.M.S.; PI: T.E.L.); R21 DC015263 (PI: T.E.L.); R01 DC009272 (PI: T.E.L.).

Institutional Review Board Statement

This study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of the University of California, San Diego and San Diego State University (IRB number: 171023, continuous approval since 06-03-2015).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data supporting the findings of this publication are available from the corresponding author upon request.

Acknowledgments

We thank Natalie Sullivan and Lewis Shapiro for their assistance during various stages of data collection and processing, and all of our participants, their families, and our funding agencies for supporting this work.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

Table A1. The list of sentence stimuli used in this study is shown in the table below.
Unbiased Adjective | Biased Adjective
The duck followed the perfect kitten that the cow deliberately nudged across the grassy meadow. The duck followed the playful kitten that the cow deliberately nudged across the grassy meadow.
The veterinarian greeted the popular king that the criminal mistakenly expected at the stunningly lavish gala.The veterinarian greeted the powerful king that the criminal mistakenly expected at the stunningly lavish gala.
The scorpion annoyed the anxious bull that the bee constantly pestered in the abandoned railroad yard.The scorpion annoyed the angry bull that the bee constantly pestered in the abandoned railroad yard.
The crocodile spied the weird owl that the chameleon momentarily faced in the exotic animal show.The crocodile spied the wise owl that the chameleon momentarily faced during the exotic animal show.
The crab helped the coy puppy that the rabbit relentlessly teased before playful tussle.The crab helped the cute puppy that the rabbit relentlessly teased before the playful tussle.
The lawyer visited the forgetful gymnast that the butler allegedly helped with the illegal cover-up. The lawyer visited the flexible gymnast that the butler allegedly helped with the illegal cover-up.
The magician passed the redheaded nun that the mailman compassionately soothed after the traumatic event.The magician passed the religious nun that the mailman compassionately soothed after the traumatic event.
The ladybug observed the smelly bat that the opossum deliberately avoided near the historic monument.The ladybug observed the scary bat that the opossum deliberately avoided near the historic monument.
The astronaut approached the sad jockey that the salesman incorrectly judged throughout the dinner party.The astronaut approached the short jockey that the salesman incorrectly judged throughout the dinner party.
The otter spotted the shiny octopus that seagull unsurprisingly smelled after the hot and sunny day.The otter spotted the slimy octopus that seagull unsurprisingly smelled after the hot and sunny day.
The deer noticed the male gorilla that the hummingbird thoroughly amused with the acrobatic display.The deer noticed the mean gorilla that the hummingbird thoroughly amused with the acrobatic display.
The ostrich recognized the delightful toucan that the baboon hesitantly touched during the bizarre encounter.The ostrich recognized the colorful toucan that the baboon hesitantly touched during the bizarre encounter.
The spider scared the live rooster that the porcupine accidentally bumped on the side of the country road.The spider scared the loud rooster that the porcupine accidentally bumped on the side of the country road.
The dentist helped the tired maid that the plumber heartlessly cheated in spite of the cautious investment.The dentist helped the tidy maid that the plumber heartlessly cheated in spite of the cautious investment.
The orangutan examined the defenseless cockroach that the parrot quickly located near the bottom of the staircase. The orangutan examined the disgusting cockroach that the parrot quickly located near the bottom of the staircase.

Appendix B. Model Details for Analyzing the Noun Preceding the Adjective

The data for this analysis consisted of gaze proportions to N1 across the window for the 9 AMC and 10 IWA participants. This window captures the full dynamics of N1 processing, including its activation and deactivation over time as the speech unfolds. We began the GCA with a base model that included only the time terms (linear and quadratic), without any modulation by group or condition. Adding group (AMC vs. IWA) and condition (biased vs. unbiased) parameters significantly improved the model fit (χ2(12) = 154.91, p < 0.001). Of primary interest was the condition-by-group interaction term, which reflects the extent to which the difference in N1 gaze proportion between the biased and unbiased conditions differed between participant groups. The AMC group served as the comparison group, and coefficients were estimated for the IWA relative to the AMC group.
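To make the modeling steps concrete, the sketch below shows how such a growth curve comparison could be set up in R with lme4 [56], following the general approach of Mirman [48]. The data frame and column names (gaze_df, gaze_n1, time_bin, subject, condition, group) are hypothetical placeholders and the random-effects structure is simplified; this is an illustration of the technique, not the exact code used for the reported analyses.

    library(lme4)

    # gaze_df: one row per participant x condition x time bin, with the proportion
    # of looks to N1 in that bin (column names are hypothetical).
    # Create second-order orthogonal polynomial time terms over the analysis window.
    bins <- sort(unique(gaze_df$time_bin))
    ot <- poly(bins, degree = 2)
    gaze_df$ot1 <- ot[match(gaze_df$time_bin, bins), 1]
    gaze_df$ot2 <- ot[match(gaze_df$time_bin, bins), 2]

    # Base model: linear and quadratic time only, with by-participant random slopes.
    m_base <- lmer(gaze_n1 ~ ot1 + ot2 + (ot1 + ot2 | subject),
                   data = gaze_df, REML = FALSE)

    # Full model: add condition (biased vs. unbiased), group (AMC vs. IWA),
    # and their interactions with the time terms.
    m_full <- lmer(gaze_n1 ~ (ot1 + ot2) * condition * group + (ot1 + ot2 | subject),
                   data = gaze_df, REML = FALSE)

    # Likelihood-ratio test of the improvement in fit (reported as a chi-square).
    anova(m_base, m_full)

The same comparison logic applies to the N2 and N3 windows described in Appendices C and D, with the dependent variable switched to the corresponding gaze proportion.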

Appendix C. Model Details for Analyzing the Noun Following the Adjective

To capture the full pattern of N2 processing (i.e., its activation and deactivation over time) and examine the effect of the biased adjective, we specified the analysis window from the onset of the adjective until 2500 ms afterward (corresponding to the average offset of N3, "the/adjective/snake that the bear"). The data for this analysis consisted of N2 gaze proportions over time for the 9 AMC and 10 IWA participants. Here, we built a baseline model including only the time terms (linear and quadratic). Adding group (AMC vs. IWA) and condition (biased vs. unbiased) improved the model fit (χ2(12) = 161.43, p < 0.001). The condition-by-group interaction term reflects the extent to which N2 gaze proportion differences between the biased and unbiased conditions differed between participant groups.

Appendix D. Downstream Effect of Condition on Encoding the Noun after the Manipulation (N3)

For this analysis, the window spanned from the onset of N3 until 2000 ms afterward (corresponding to the average offset of the verb, "the bear cautiously encountered"). We began the GCA using the base model that included only the time terms, without any modulation by group or condition. Adding group (AMC vs. IWA) and condition (biased vs. unbiased) improved the model fit (χ2(12) = 150.41, p < 0.001). See Figure A1 for the gaze data and curve fits for this interaction model.
Figure A1. Gaze proportion differences toward N3 between conditions and groups. Solid lines represent observed data and dashed lines represent the GCA model fit.
The individual parameter estimates (Table A2) revealed a marginal effect of condition on N3 processing in the AMC group at the intercept term (estimate = −0.06, SE = 0.03, t = −1.09, p = 0.06): the negative estimate indicates that the average proportion of gazes toward N3 was lower in the unbiased than in the biased condition. Furthermore, there was an effect of group at the intercept term (estimate = −0.11, SE = 0.05, t = −2.09, p = 0.04), indicating that the average proportion of gazes toward N3 was lower for IWA than for the AMC group (this estimate corresponds to the biased condition, which is the reference level). Additionally, the main effect of group was significant at the quadratic term (estimate = 0.23, SE = 0.10, t = 2.22, p = 0.03), indicative of a steeper rise and fall (sharper curvature) for the AMC group than for IWA in the biased condition.
Table A2. Results of GCA analysis for time window 3.
Predictors | Estimates | CI | P (Two Tailed)
(Intercept) | 0.44 | 0.36 – 0.53 | <0.001
Linear | 0.53 | 0.26 – 0.81 | <0.001
Quadratic | −0.31 | −0.47 – −0.15 | <0.001
Condition [Unbiased] | −0.06 | −0.12 – 0.00 | 0.056
Group [IWA] | −0.11 | −0.22 – −0.01 | 0.036
Linear × Condition [Unbiased] | 0.09 | −0.14 – 0.32 | 0.458
Quadratic × Condition [Unbiased] | 0.07 | −0.09 – 0.22 | 0.387
Linear × Group [IWA] | −0.20 | −0.56 – 0.16 | 0.275
Quadratic × Group [IWA] | 0.23 | 0.03 – 0.43 | 0.027
Condition [Unbiased] × Group [IWA] | 0.01 | −0.06 – 0.07 | 0.853
(Linear × Condition [Unbiased]) × Group [IWA] | −0.04 | −0.32 – 0.24 | 0.774
(Quadratic × Condition [Unbiased]) × Group [IWA] | −0.08 | −0.26 – 0.10 | 0.397
Note: The table provides the test of the full model including the interaction of group and condition on the intercept, linear, and quadratic time terms. The AMC group and the biased condition are set as the reference estimates. Results in boldface are presented in the text.
The marginal effect of condition found in the AMC group was re-evaluated using a cluster-based permutation analysis (2000 permuted samples, with an alpha of 0.05). The analysis did not yield any significant clusters indicating a difference between conditions. Because the cluster analysis did not corroborate the GCA finding, we do not report a consistent effect of group or condition on N3 processing.
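For illustration, the logic of a cluster-based permutation test over time bins can be sketched in a few lines of base R, in the spirit of Maris and Oostenveld [57]. Here diff_mat, cluster_mass, and cluster_permutation are hypothetical names, and diff_mat is assumed to be a participants-by-time-bins matrix of condition differences in gaze proportion (biased minus unbiased); the study's analyses were run through the authors' own eye-tracking pipeline, so this is a sketch of the technique rather than the code used.

    # Sum |t| within runs of adjacent bins that exceed the threshold; return the largest mass.
    cluster_mass <- function(tvals, thresh) {
      above <- abs(tvals) > thresh
      if (!any(above)) return(0)
      runs <- rle(above)
      ends <- cumsum(runs$lengths)
      starts <- ends - runs$lengths + 1
      masses <- mapply(function(s, e, keep) if (keep) sum(abs(tvals[s:e])) else 0,
                       starts, ends, runs$values)
      max(masses)
    }

    cluster_permutation <- function(diff_mat, n_perm = 2000, alpha = 0.05) {
      n_subj <- nrow(diff_mat)
      thresh <- qt(1 - alpha / 2, df = n_subj - 1)
      # One-sample t statistic per time bin on the condition differences.
      t_obs <- apply(diff_mat, 2, function(x) mean(x) / (sd(x) / sqrt(length(x))))
      obs_mass <- cluster_mass(t_obs, thresh)
      # Build the null distribution by sign-flipping each participant's differences.
      null_mass <- replicate(n_perm, {
        flips <- sample(c(-1, 1), n_subj, replace = TRUE)
        t_perm <- apply(diff_mat * flips, 2,
                        function(x) mean(x) / (sd(x) / sqrt(length(x))))
        cluster_mass(t_perm, thresh)
      })
      list(observed_mass = obs_mass, p_value = mean(null_mass >= obs_mass))
    }

    # Example usage (hypothetical data): cluster_permutation(diff_mat, n_perm = 2000)$p_value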

Appendix E. Evidence of N2 Re-Activation at the Verb-Frame (Time Window 4)

We inspected the re-activation of N2 relative to N1 as an index of syntactic re-activation in the gap-site window for two reasons. First, if gaze proportions to N2 and N1 were similar, this would mean that listeners had maintained activation of items on the screen regardless of their syntactic roles; at the verb-frame position, gazes should move away from N1, whose syntactic role should already have been assigned, so continued activation of N1 at this position would indicate an interference effect. Second, at this point in the sentence there are two NPs that have not yet been fully integrated into the syntactic structure, namely N2 and N3. The most recently encountered noun, N3, is expected to attract a high proportion of gazes, as traces of its representation can remain active after the verb is heard. Therefore, if syntactic linking is triggered at the verb offset, we would expect higher gazes toward N2, not N1. To determine whether individuals in each group showed evidence of re-activation at the verb-frame, we formed a second-order orthogonal polynomial and added the fixed effects of group (IWA vs. AMC) and image of interest (N2 vs. N1) to the model. See Figure A2 for the gaze data and curve fits for this interaction model.
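As a brief sketch of how this comparison could be coded (again with hypothetical column names, and reusing the orthogonal time terms ot1 and ot2 from the Appendix B sketch), the wide-format gaze proportions for N1 and N2 can be stacked into a long format with an image factor and modeled together with group; this is illustrative rather than the study's actual code.

    library(tidyr)
    library(lme4)

    # Stack the N1 and N2 gaze proportions into one column with an "image" factor.
    long_df <- pivot_longer(gaze_df, cols = c(gaze_n1, gaze_n2),
                            names_to = "image", values_to = "gaze_prop")
    long_df$image <- factor(long_df$image,
                            levels = c("gaze_n1", "gaze_n2"),
                            labels = c("N1", "N2"))   # N1 serves as the reference level

    # Second-order growth curve model with image (N1 vs. N2) and group (AMC vs. IWA).
    m_reactivation <- lmer(gaze_prop ~ (ot1 + ot2) * image * group +
                             (ot1 + ot2 | subject),
                           data = long_df, REML = FALSE)
    summary(m_reactivation)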
Figure A2. Gaze proportion differences to N1 and N2 between groups. Solid lines represent observed data and dashed lines represent the GCA model fit.
Adding group and image and their interactions with the higher-order time terms improved the baseline model fit (χ2(12) = 697.09, p < 0.001). This interaction term reflects the extent to which the difference between the N2 (target) and N1 (competitor) gaze time courses differed between participant groups. Table A3 reports the individual parameter estimates of the full quadratic model. The analysis revealed an effect of group such that the average gaze of IWA toward the competitor image (N1) was higher than that of the AMC group (estimate = 0.10, p < 0.05). There was also an image-by-group interaction at the intercept term (estimate = −0.19, SE = 0.05, t = −3.88, p < 0.001), indicating that the difference in overall fixation of N2 versus N1 was smaller for the IWA group than for the AMC group. As shown in the plots, the AMC group did not maintain activation of N1 and showed a higher proportion of gazes toward N2, indicative of a reduced interference effect in this group.
Altogether, IWA experienced interference effects during the verb-frame window (i.e., the gap site where syntactic dependency linking must occur). The next analysis investigates whether condition modulated the pattern of gazes toward N2 at the verb-frame position.
Table A3. Results of GCA analysis for time window 4.
Predictors | Estimates | CI | P (Two Tailed)
(Intercept) | 0.12 | 0.06 – 0.18 | <0.001
Linear | −0.01 | −0.10 – 0.07 | 0.793
Quadratic | −0.01 | −0.05 – 0.03 | 0.654
Images [N2] | 0.17 | 0.10 – 0.24 | <0.001
Group [IWA] | 0.10 | 0.02 – 0.18 | 0.016
Linear × Images [N2] | 0.02 | −0.09 – 0.14 | 0.696
Quadratic × Images [N2] | 0.01 | −0.05 – 0.07 | 0.764
Linear × Group [IWA] | 0.01 | −0.10 – 0.13 | 0.832
Quadratic × Group [IWA] | 0.02 | −0.04 – 0.08 | 0.532
Images [N2] × Group [IWA] | −0.19 | −0.29 – −0.09 | <0.001
(Linear × Images [N2]) × Group [IWA] | −0.02 | −0.18 – 0.14 | 0.803
(Quadratic × Images [N2]) × Group [IWA] | −0.00 | −0.08 – 0.08 | 0.982
Note: The table provides the test of the full model including the interaction of group and images of interest (N1 and N2) on the intercept, linear, and quadratic time terms. The AMC group and the N1 are set as reference estimates. Results in boldface are presented in the text.

Appendix F. Summary of Results

Table A4. Summary of the results for the online sentence processing of AMC and IWA based on GCA and cluster analyses.
Process: Activation and Deactivation of N1 | Activation and Deactivation of N2 | Activation and Deactivation of N3 | Dependency Linking/Re-Activation of N2
Results-AMC:
- Activation and deactivation of N1: Across the entire time course of processing N1, there was an effect of condition. The effect emerges upon hearing the adjective. In the biased condition, deactivation (looking away from N1 when processing the adjective) occurred earlier than in the unbiased condition.
- Activation and deactivation of N2: Across the entire time course of processing N2, there was a marginal effect of condition. The difference emerges at the offset of N2. In the biased condition, N2 was deactivated earlier than in the unbiased condition.
- Activation and deactivation of N3: AMC did not show a significant effect of condition for activation and deactivation of N3.
- Dependency linking/re-activation of N2: There was a condition effect for the AMC group such that they revealed a higher rate of activation of N2 in the biased compared to the unbiased condition. Moreover, the level and rate of activation of N3 were lower in the biased condition.
Results-IWA:
- Activation and deactivation of N1: IWA did not show an effect of condition for N1 processing.
- Activation and deactivation of N2: IWA did not show an effect of condition for N2.
- Activation and deactivation of N3: IWA did not show an effect of condition for N3.
- Dependency linking/re-activation of N2: IWA showed an earlier rise in the re-activation of N2 in the biased condition. They revealed a reduced interference effect in the biased condition, as manifested by reduced looks to N1, and no difference in gaze patterns toward N3 between conditions.

References

  1. McElree, B.; Foraker, S.; Dyer, L. Memory structures that subserve sentence comprehension. J. Mem. Lang. 2003, 48, 67–91. [Google Scholar] [CrossRef]
  2. Nicol, J.; Swinney, D. The role of structure in coreference assignment during sentence comprehension. J. Psycholinguist. Res. 1989, 18, 5–19. [Google Scholar] [CrossRef] [PubMed]
  3. Sheppard, S.M.; Walenski, M.; Love, T.; Shapiro, L.P. The auditory comprehension of wh-questions in aphasia: Support for the intervener hypothesis. J. Speech Lang. Hear. Res. 2015, 58, 781–797. [Google Scholar] [CrossRef] [PubMed]
  4. Koring, L.; Mak, P.; Reuland, E. The time course of argument reactivation revealed: Using the visual world paradigm. Cognition 2012, 123, 361–379. [Google Scholar] [CrossRef] [PubMed]
  5. Gordon, P.C.; Hendrick, R.; Johnson, M.; Lee, Y. Similarity-based interference during language comprehension: Evidence from eye tracking during reading. J. Exp. Psychol. Learn. Mem. Cogn. 2006, 32, 1304. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Gordon, P.C.; Hendrick, R.; Levine, W.H. Memory-Load Interference in Syntactic Processing. Psychol. Sci. 2002, 13, 425–430. [Google Scholar] [CrossRef]
  7. Gordon, P.C.; Hendrick, R.; Johnson, M. Memory interference during language processing. J. Exp. Psychol. Learn. Mem. Cogn. 2001, 27, 1411–1423. [Google Scholar] [CrossRef]
  8. Van Dyke, J.A. Interference effects from grammatically unavailable constituents during sentence processing. J. Exp. Psychol. Learn. Mem. Cogn. 2007, 33, 407. [Google Scholar] [CrossRef] [Green Version]
  9. Van Dyke, J.A.; McElree, B. Retrieval interference in sentence comprehension. J. Mem. Lang. 2006, 55, 157–166. [Google Scholar] [CrossRef] [Green Version]
  10. Lewis, R.L.; Vasishth, S.; Van Dyke, J.A. Computational principles of working memory in sentence comprehension. Trends Cogn. Sci. 2006, 10, 447–454. [Google Scholar] [CrossRef] [Green Version]
  11. Lewis, R.L.; Vasishth, S. An activation-based model of sentence processing as skilled memory retrieval. Cogn. Sci. 2005, 29, 375–419. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Thompson, C.K.; Choy, J.J. Pronominal resolution and gap filling in agrammatic aphasia: Evidence from eye movements. J. Psycholinguist. Res. 2009, 38, 255–283. [Google Scholar] [CrossRef] [Green Version]
  13. Dickey, M.W.; Choy, J.J.; Thompson, C.K. Real-time comprehension of wh- movement in aphasia: Evidence from eyetracking while listening. Brain Lang. 2007, 100, 1–22. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Caramazza, A.; Zurif, E.B. Dissociation of algorithmic and heuristic processes in language comprehension: Evidence from aphasia. Brain Lang. 1976, 3, 572–582. [Google Scholar] [CrossRef]
  15. Grodzinsky, Y. The neurology of syntax: Language use without Broca’s area. Behav. Brain Sci. 2000, 23, 1–21. [Google Scholar] [CrossRef]
  16. Love, T.; Swinney, D.; Walenski, M.; Zurif, E. How left inferior frontal cortex participates in syntactic processing: Evidence from aphasia. Brain Lang. 2008, 107, 203–219. [Google Scholar] [CrossRef] [Green Version]
  17. Murphy, E. The Oscillatory Nature of Language; Cambridge University Press: Cambridge, UK, 2020. [Google Scholar]
  18. Grodzinsky, Y. A restrictive theory of agrammatic comprehension. Brain Lang. 1995, 50, 27–51. [Google Scholar] [CrossRef]
  19. Ferrill, M.; Love, T.; Walenski, M.; Shapiro, L.P. The time-course of lexical activation during sentence comprehension in people with aphasia. Am. J. Speech-Lang. Pathol. 2012, 21, S179–S189. [Google Scholar] [CrossRef] [Green Version]
  20. Swaab, T.Y.; Brown, C.; Hagoort, P. Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca’s aphasia. Neuropsychologia 1998, 36, 737–761. [Google Scholar] [CrossRef] [Green Version]
  21. Swaab, T.; Brown, C.; Hagoort, P. Spoken Sentence Comprehension in Aphasia: Event-related Potential Evidence for a Lexical Integration Deficit. J. Cogn. Neurosci. 1997, 9, 39–66. [Google Scholar] [CrossRef] [Green Version]
  22. Hagoort, P.; Brown, C.M.; Swaab, T.Y. Lexical—semantic event–related potential effects in patients with left hemisphere lesions and aphasia, and patients with right hemisphere lesions without aphasia. Brain 1996, 119, 627–649. [Google Scholar] [CrossRef] [PubMed]
  23. Choy, J.J.; Thompson, C.K. Binding in agrammatic aphasia: Processing to comprehension. Aphasiology 2010, 24, 551–579. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Hofmeister, P.; Vasishth, S. Distinctiveness and encoding effects in online sentence comprehension. Front. Psychol. 2014, 5, 1237. [Google Scholar] [CrossRef] [Green Version]
  25. Hofmeister, P. Representational Complexity and Memory Retrieval in Language Comprehension. Lang. Cogn. Process. 2011, 26, 376–405. [Google Scholar] [CrossRef] [Green Version]
  26. Kluender, R.; Kutas, M. Bridging the gap: Evidence from ERPs on the processing of unbounded dependencies. J. Cogn. Neurosci. 1993, 5, 196–214. [Google Scholar] [CrossRef]
  27. McElree, B. Sentence Comprehension Is Mediated by Content-Addressable Memory Structures. J. Psycholinguist. Res. 2000, 29, 111–123. [Google Scholar] [CrossRef]
  28. Trueswell, J.C.; Tanenhaus, M.K.; Garnsey, S.M. Semantic influences on parsing: Use of thematic role information in syntactic ambiguity resolution. J. Mem. Lang. 1994, 33, 285–318. [Google Scholar] [CrossRef]
  29. Martin, A.E.; McElree, B. Retrieval cues and syntactic ambiguity resolution: Speed-accuracy tradeoff evidence. Lang. Cogn. Neurosci. 2018, 33, 769–783. [Google Scholar] [CrossRef] [Green Version]
  30. McElree, B.; Dosher, B.A. Serial position and set size in short-term memory: The time course of recognition. J. Exp. Psychol. Gen. 1989, 118, 346–373. [Google Scholar] [CrossRef]
  31. Parker, D.; Shvartsman, M.; Van Dyke, J.A. The cue-based retrieval theory of sentence comprehension: New findings and new challenges. In Language Processing and Disorders; Cambridge Scholars Publishing: Newcastle upon Tyne, UK, 2017; pp. 121–144. [Google Scholar]
  32. Karimi, H.; Diaz, M.; Ferreira, F. “A cruel king” is not the same as “a king who is cruel”: Modifier position affects how words are encoded and retrieved from memory. J. Exp. Psychol. Learn. Mem. Cogn. 2019, 45, 2010. [Google Scholar] [CrossRef] [Green Version]
  33. Karimi, H.; Brothers, T.; Ferreira, F. Phonological versus semantic prediction in focus and repair constructions: No evidence for differential predictions. Cogn. Psychol. 2019, 112, 25–47. [Google Scholar] [CrossRef] [PubMed]
  34. Troyer, M.; Hofmeister, P.; Kutas, M. Elaboration over a discourse facilitates retrieval in sentence processing. Front. Psychol. 2016, 7, 374. [Google Scholar] [CrossRef] [Green Version]
  35. Nairne, J.S. A feature model of immediate memory. Mem. Cogn. 1990, 18, 251–269. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Vasishth, S.; Lewis, R.L. Argument-head distance and processing complexity: Explaining both locality and antilocality effects. Language 2006, 82, 767–794. [Google Scholar] [CrossRef]
  37. Choy, J.J. Effects of Lexical Processing Deficits on Sentence Comprehension in Agrammatic Broca’s Aphasia; Northwestern University: Evanston, IL, USA, 2011. [Google Scholar]
  38. Cooper, R.M. The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cogn. Psychol. 1974, 6, 84–107. [Google Scholar] [CrossRef]
  39. Huettig, F.; Rommers, J.; Meyer, A.S. Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychol. 2011, 137, 151–171. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Tanenhaus, M.K.; Trueswell, J.C. Sentence Comprehension. In Speech, Language, and Communication; Academic Press: San Diego, CA, USA, 1995. [Google Scholar]
  41. Wendt, D.; Brand, T.; Kollmeier, B. An eye-tracking paradigm for analyzing the processing time of sentences with different linguistic complexities. PLoS ONE 2014, 9, e100186. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Goodglass, H.; Kaplan, E.; Barresi, B. BDAE-3: Boston Diagnostic Aphasia Examination, 3rd ed.; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2001. [Google Scholar]
  43. Kertesz, A. WAB-R: Western Aphasia Battery-Revised; PsychCorp: San Antonio, TX, USA, 2007. [Google Scholar]
  44. Love, T.; Oster, E. On the categorization of aphasic typologies: The SOAP (a test of syntactic complexity). J. Psycholinguist. Res. 2002, 31, 503–529. [Google Scholar] [CrossRef]
  45. Altmann, G.T.; Kamide, Y. Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition 1999, 73, 247–264. [Google Scholar] [CrossRef] [Green Version]
  46. Eberhard, K.M.; Spivey-Knowlton, M.J.; Sedivy, J.C.; Tanenhaus, M.K. Eye movements as a window into real-time spoken language comprehension in natural contexts. J. Psycholinguist. Res. 1995, 24, 409–436. [Google Scholar] [CrossRef]
  47. Dink, J.W.; Ferguson, B. eyetrackingR: An R Library for Eye-Tracking Data Analysis; 2015; Volume 6, p. 2017. Available online: www.eyetracking-r.com (accessed on 20 November 2021).
  48. Mirman, D. Growth Curve Analysis and Visualization Using R; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017. [Google Scholar]
  49. Mirman, D.; Yee, E.; Blumstein, S.E.; Magnuson, J.S. Theories of spoken word recognition deficits in aphasia: Evidence from eye-tracking and computational modeling. Brain Lang 2011, 117, 53–68. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Mirman, D.; Dixon, J.A.; Magnuson, J.S. Statistical and computational models of the visual world paradigm: Growth curves and individual differences. J. Mem. Lang. 2008, 59, 475–494. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Brown, M.; Salverda, A.P.; Dilley, L.C.; Tanenhaus, M.K. Expectations from preceding prosody influence segmentation in online sentence processing. Psychon. Bull. Rev. 2011, 18, 1189–1196. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Baker, C.; Love, T. It’s about time! Time as a parameter for lexical and syntactic processing: An eye-tracking-while-listening investigation. Lang. Cogn. Neurosci. 2021, 37, 1–21. [Google Scholar] [CrossRef]
  53. Akhavan, N.; Blumenfeld, H.K.; Love, T. Auditory Sentence Processing in Bilinguals: The Role of Cognitive Control. Front. Psychol. 2020, 11, 898. (In English) [Google Scholar] [CrossRef]
  54. Hadar, B.; Skrzypek, J.E.; Wingfield, A.; Ben-David, B.M. Working memory load affects processing time in spoken word recognition: Evidence from eye-movements. Front. Neurosci. 2016, 10, 221. [Google Scholar] [CrossRef] [Green Version]
  55. Barr, D.J.; Levy, R.; Scheepers, C.; Tily, H.J. Random effects structure for confirmatory hypothesis testing: Keep it maximal. J. Mem. Lang. 2013, 68, 255–278. [Google Scholar] [CrossRef] [Green Version]
  56. Bates, D.; Mächler, M.; Bolker, B.; Walker, S. Fitting linear mixed-effects models using lme4. arXiv 2014, arXiv:1406.5823. [Google Scholar]
  57. Maris, E.; Oostenveld, R. Nonparametric statistical testing of EEG-and MEG-data. J. Neurosci. Methods 2007, 164, 177–190. [Google Scholar] [CrossRef]
  58. Hahn, N.; Snedeker, J.; Rabagliati, H. Rapid linguistic ambiguity resolution in young children with autism spectrum disorder: Eye tracking evidence for the limits of weak central coherence. Autism Res. 2015, 8, 717–726. [Google Scholar] [CrossRef] [Green Version]
  59. Huang, Y.; Snedeker, J. Evidence from the visual world paradigm raises questions about unaccusativity and growth curve analyses. Cognition 2020, 200, 104251. [Google Scholar] [CrossRef] [PubMed]
  60. Anderson, J.R.; Budiu, R.; Reder, L.M. Theory of sentence memory as part of a general theory of memory. J. Mem. Lang. 2001, 45, 337–367. [Google Scholar] [CrossRef] [Green Version]
  61. Bradshaw, G.L.; Anderson, J.R. Elaborative encoding as an explanation of levels of processing. J. Verbal Learn. Verbal Behav. 1982, 21, 165–174. [Google Scholar] [CrossRef]
  62. Waddill, P.J.; McDaniel, M.A. Distinctiveness effects in recall. Mem. Cogn. 1998, 26, 108–120. [Google Scholar] [CrossRef] [Green Version]
  63. Nairne, J.S. Modeling distinctiveness: Implications for general memory theory. In Distinctiveness and Memory; Oxford University Press: Oxford, UK, 2006; pp. 27–46. [Google Scholar]
  64. Nairne, J.S. A Functional Analysis of Primary Memory. 2001. Available online: https://psycnet.apa.org/record/2001-00138-015 (accessed on 20 November 2021).
  65. Hofmeister, P.; Jaeger, T.F.; Arnon, I.; Sag, I.A.; Snider, N. The source ambiguity problem: Distinguishing the effects of grammar and processing on acceptability judgments. Lang. Cogn. Processes 2013, 28, 48–87. [Google Scholar] [CrossRef] [Green Version]
  66. Karimi, H.; Ferreira, F. Good-enough linguistic representations and online cognitive equilibrium in language processing. Q. J. Exp. Psychol. 2016, 69, 1013–1040. [Google Scholar] [CrossRef]
  67. Huettig, F.; Altmann, G.T. Word meaning and the control of eye fixation: Semantic competitor effects and the visual world paradigm. Cognition 2005, 96, B23–B32. [Google Scholar] [CrossRef] [Green Version]
  68. Yee, E.; Blumstein, S.E.; Sedivy, J.C. Lexical-Semantic Activation in Broca’s and Wernicke’s Aphasia: Evidence from Eye Movements. J. Cogn. Neurosci. 2008, 20, 592–612. [Google Scholar] [CrossRef] [Green Version]
  69. Kukona, A.; Cho, P.W.; Magnuson, J.S.; Tabor, W. Lexical interference effects in sentence processing: Evidence from the visual world paradigm and self-organizing models. J. Exp. Psychol. Learn. Mem. Cogn. 2014, 40, 326. [Google Scholar] [CrossRef] [Green Version]
  70. Nozari, N.; Trueswell, J.C.; Thompson-Schill, S.L. The interplay of local attraction, context and domain-general cognitive control in activation and suppression of semantic distractors during sentence comprehension. Psychon. Bull. Rev. 2016, 23, 1942–1953. [Google Scholar] [CrossRef] [Green Version]
  71. Nozari, N. The dual origin of semantic errors in access deficit: Activation vs. inhibition deficit. Cogn. Neuropsychol. 2019, 36, 31–53. [Google Scholar] [CrossRef] [PubMed]
  72. Nozari, N.; Dell, G.S.; Schwartz, M.F. Is comprehension necessary for error detection? A conflict-based account of monitoring in speech production. Cogn. Psychol. 2011, 63, 1–33. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  73. Gordon, P.C.; Hendrick, R.; Johnson, M. Effects of noun phrase type on sentence complexity. J. Mem. Lang. 2004, 51, 97–114. [Google Scholar] [CrossRef]
Figure 1. An illustration of the overall pattern of lexical processing in an ongoing sentence, which involves an activation and a de-activation phase. Activation is represented by an increase in gaze proportion toward the heard item in the sentence, while de-activation is represented by a reduction in gaze proportion over time.
Figure 2. Example of a visual world eye-tracking paradigm. The speaker icon represents the auditorily presented sentence.
Figure 3. Specified windows of interest. The arrows represent when in the sentence a prespecified window starts and ends. The windows of analysis were overlapping as we wanted to capture the full morphology of the gaze pattern toward a targeted image. In these windows, we capture the activation (gazes toward) and deactivation (gazes away) parts of lexical processing. Here, we divided our sentence into four analysis windows to capture processing patterns via the gaze dynamics to the three images of the nouns that were mentioned in the sentence (here N1 represents the illustration of eagle, N2 the snake, N3 the bear).
Figure 4. Mean gaze proportion over time toward N1 (first noun, solid salmon line), N2 (second noun, solid green line), N3 (third noun, solid blue line), and N4 (unrelated distractor noun, solid purple line), averaged across conditions for each group and beginning at the auditory onset of the sentence (N1). Shaded areas represent within-subject 95% confidence intervals. The dotted salmon line represents the mean offset of N1; the dotted green line the mean offset of N2; the dotted blue line the mean offset of N3; and the dotted black line the mean offset of the verb.
Figure 5. The plot covers the portion of the sentence "the eagle saw the/adjective/snake" and shows the gaze proportion differences to N1 between conditions and groups. Solid lines represent observed data and dashed lines represent the GCA model fit. The graphic representation of the model shows the quadratic fit, although the significant condition effect was observed at the linear term.
Figure 6. The plot covers the portion of the sentence "/adjective/snake that the bear" and shows the gaze proportion differences to N2 between conditions and groups. Solid lines represent observed data and dashed lines represent the GCA model fit. The graphic representation of the model shows the quadratic fit, although the marginal group effect was observed at the intercept term, and the interaction effect was significant at the linear term.
Figure 7. Averaged gaze proportions to N1, N2, and N3 between groups and conditions. These are the raw (unmodeled) gaze proportions toward N1, N2, and N3 in the verb-frame window. The gray shaded ribbons around the lines represent standard errors. The red boxes indicate the window in which the effect is expected.
Table 1. Gaze movement metrics of specific sentence-level processes.
Processing Level | Gaze Movement Pattern
Lexical access | Gaze movement toward a visual representation of a target noun upon hearing it in the sentence
Lexical integration | After lexical access, gaze divergence from a previously accessed target noun indicates its integration into the syntactic structure
Dependency linking | Gaze movement that returns to a noun representation that was previously activated (re-activation) when it is syntactically licensed
Interference effects | An equivalent proportion of gazes toward related as well as non-target nouns (i.e., that are not relevant at a given point in a sentence) indicates an individual's susceptibility to the interference effect
Table 2. IWA participants' characteristics (n = 11).
IWA | Sex | Years Post-Stroke | Age at Testing | Years of Education | Aphasia Subtype | Lesion Location | BDAE-v3 | WAB-R AQ | SOAP-SR (%) | SOAP-OR (%)
009 | M | 15 | 55 | 17 | Mixed non-fluent | Large L lesion, IFG (BA 44/BA45) w/posterior | 4 | 67.7 | 60 | 40
017 | M | 18 | 66 | 15 | Anomic | L anterior cerebral and middle cerebral infarct | 4 | 95.4 | 100 | 90
101 | M | 9 | 67 | 20 | Broca | Large L lesion posterior IFG (BA 44) w/posterior | 2 | 82.6 | 100 | 30
130 | M | 8 | 63 | 16 | Broca/Anomia | L IPL with posterior ext. sparing STG | 4 | 90.5 | 75 | 55
140 | F | 16 | 42 | -- | -- | L MCA infarct | 2 | 75.7 | 80 | 30
151 | F | 7 | 65 | 16 | Anomic | L MCA infarct with subcortical extension | 4 | 95.8 | 100 | 100
159 | F | 6 | 64 | 16 | Broca | L MCA infarct | 3 | 92.4 | 100 | 70
165 | F | 4 | 64 | 12 | Broca | L MCA infarct | 3 | ND | 80 | 60
169 | M | 4 | 59 | 12 | Broca | L MCA infarct | 2 | 28.2 | 80 | 40
190 | F | 6 | 76 | 12 | Broca | Left superior temporal lobe | 3 | 88.2 | 90 | 40
191 | M | 1 | 57 | 16 | Broca | L MCA infarct | 4.5 | 98.4 | 100 | 60
AMC Group | Ages 57–66 years (mean = ~61.9); 7 females, 4 males; education 14–18 years (mean = 15.7) *
M = male, F = female; L = left; LH = left hemisphere; BA = Brodmann area; IPL = inferior parietal lobule; STG = superior temporal gyrus; MCA = middle cerebral artery. BDAE = Boston Diagnostic Aphasia Examination (0 = no usable speech or auditory comprehension; 5 = minimal discernable speech handicap). SOAP SR = average percent correct on subject-relative items from the SOAP Test of Auditory Sentence Comprehension. SOAP OR = average percent correct of object relative items from the SOAP Test of Auditory Sentence Comprehension. * Missing education data for four AMC individuals.
Table 3. Example of experimental sentence and visual stimuli.
Condition | Sample Sentence | Visual Array
Unbiased Adjective | "The eagle saw the voracious snake that the bear cautiously encountered underneath the narrow bridge." | [visual array image, shared across conditions]
Biased Adjective | "The eagle saw the venomous snake that the bear cautiously encountered underneath the narrow bridge." |
Table 4. Results of GCA analysis for time window 1 (processing N1).
Predictors | Estimates | CI | P (Two Tailed)
(Intercept) | 0.37 | 0.31 – 0.42 | <0.001
Linear | −0.08 | −0.31 – 0.15 | 0.506
Quadratic | −0.53 | −0.72 – −0.34 | <0.001
Condition [Unbiased] | 0.02 | −0.03 – 0.07 | 0.497
Group [IWA] | −0.02 | −0.09 – 0.05 | 0.512
Linear × Condition [Unbiased] | 0.31 | 0.07 – 0.56 | 0.012
Quadratic × Condition [Unbiased] | −0.05 | −0.24 – 0.14 | 0.589
Linear × Group [IWA] | 0.19 | −0.09 – 0.47 | 0.178
Quadratic × Group [IWA] | 0.20 | −0.03 – 0.43 | 0.093
Condition [Unbiased] × Group [IWA] | −0.02 | −0.06 – 0.02 | 0.367
(Linear × Condition [Unbiased]) × Group [IWA] | −0.37 | −0.64 – −0.10 | 0.008
(Quadratic × Condition [Unbiased]) × Group [IWA] | 0.07 | −0.14 – 0.27 | 0.523
Note: The table provides the test of the full model including the interaction of group and condition on the intercept, linear, and quadratic time terms. The AMC group and the biased condition are set as the reference estimates. Results in boldface are presented in the text.
Table 5. Results of GCA analysis for time window 2 (processing N2).
Predictors | Estimates | CI | P (Two Tailed)
(Intercept) | 0.38 | 0.33 – 0.43 | <0.001
Linear | 0.35 | 0.07 – 0.62 | 0.014
Quadratic | −0.51 | −0.71 – −0.31 | <0.001
Condition [Unbiased] | 0.01 | −0.05 – 0.07 | 0.763
Group [IWA] | −0.05 | −0.11 – 0.00 | 0.066
Linear × Condition [Unbiased] | 0.20 | −0.02 – 0.41 | 0.078
Quadratic × Condition [Unbiased] | 0.11 | −0.07 – 0.30 | 0.237
Linear × Group [IWA] | −0.12 | −0.48 – 0.23 | 0.497
Quadratic × Group [IWA] | 0.18 | −0.06 – 0.43 | 0.149
Condition [Unbiased] × Group [IWA] | −0.00 | −0.06 – 0.06 | 0.971
(Linear × Condition [Unbiased]) × Group [IWA] | −0.22 | −0.46 – 0.02 | 0.075
(Quadratic × Condition [Unbiased]) × Group [IWA] | −0.08 | −0.27 – 0.10 | 0.362
Note: The table provides the test of the full model including the interaction of group and condition on the intercept, linear, and quadratic time terms. The AMC group and the biased condition are set as the reference estimates. Results in boldface are presented in the text.
Table 6. Results of GCA analysis of AMC data for time window 4 (processing N2 relative to N1).
Predictors | Estimates | CI | P (Two Tailed)
(Intercept) | 0.12 | 0.04 – 0.19 | 0.002
Linear | 0.01 | −0.12 – 0.14 | 0.857
Quadratic | 0.03 | −0.04 – 0.10 | 0.413
Images [N2] | 0.19 | 0.11 – 0.27 | <0.001
Condition [Unbiased] | 0.00 | −0.04 – 0.05 | 0.924
Linear × Images [N2] | 0.05 | −0.11 – 0.20 | 0.547
Quadratic × Images [N2] | −0.02 | −0.12 – 0.08 | 0.725
Linear × Condition [Unbiased] | −0.06 | −0.17 – 0.06 | 0.333
Quadratic × Condition [Unbiased] | −0.07 | −0.15 – 0.00 | 0.059
Images [N2] × Condition [Unbiased] | −0.03 | −0.06 – −0.00 | 0.049
(Linear × Images [N2]) × Condition [Unbiased] | −0.04 | −0.14 – 0.07 | 0.473
(Quadratic × Images [N2]) × Condition [Unbiased] | 0.05 | −0.05 – 0.16 | 0.323
Note: The table provides the test of the full model including the interaction of condition and images of interest (N1 and N2) on the intercept, linear, and quadratic time terms. The biased condition and the N1 are set as the reference estimate. Results in boldface are presented in the text.
Table 7. Results of GCA analysis of IWA data for time window 4 (processing N2 relative to N1).
Predictors | Estimates | CI | P (Two Tailed)
(Intercept) | 0.20 | 0.14 – 0.25 | <0.001
Linear | 0.06 | −0.01 – 0.12 | 0.081
Quadratic | −0.03 | −0.08 – 0.02 | 0.255
Images [N2] | 0.03 | −0.02 – 0.09 | 0.217
Condition [Unbiased] | 0.05 | 0.00 – 0.10 | 0.031
Linear × Images [N2] | −0.12 | −0.21 – −0.04 | 0.003
Quadratic × Images [N2] | 0.03 | −0.04 – 0.10 | 0.362
Linear × Condition [Unbiased] | −0.11 | −0.19 – −0.04 | 0.004
Quadratic × Condition [Unbiased] | 0.07 | 0.00 – 0.15 | 0.044
Images [N2] × Condition [Unbiased] | −0.10 | −0.13 – −0.07 | <0.001
(Linear × Images [N2]) × Condition [Unbiased] | 0.24 | 0.14 – 0.34 | <0.001
(Quadratic × Images [N2]) × Condition [Unbiased] | −0.05 | −0.15 – 0.05 | 0.359
Note: The table provides the test of the full model including the interaction of condition and images of interest (N1 and N2) on the intercept, linear, and quadratic time terms. The biased condition and the N1 are set as the reference estimate. Results in boldface are presented in the text.
Table 8. Results of GCA analysis of AMC data for time window 4 (processing N2 relative to N3).
Predictors | Estimates | CI | P (Two Tailed)
(Intercept) | 0.31 | 0.21 – 0.41 | <0.001
Linear | 0.07 | −0.08 – 0.21 | 0.368
Quadratic | 0.01 | −0.06 – 0.08 | 0.777
Images [N3] | 0.21 | 0.07 – 0.35 | 0.003
Condition [Unbiased] | −0.03 | −0.07 – 0.00 | 0.080
Linear × Images [N3] | −0.10 | −0.31 – 0.11 | 0.347
Quadratic × Images [N3] | −0.07 | −0.17 – 0.03 | 0.168
Linear × Condition [Unbiased] | −0.10 | −0.19 – −0.01 | 0.026
Quadratic × Condition [Unbiased] | −0.02 | −0.11 – 0.07 | 0.654
Images [N3] × Condition [Unbiased] | 0.04 | 0.00 – 0.08 | 0.044
(Linear × Images [N3]) × Condition [Unbiased] | 0.21 | 0.08 – 0.33 | 0.001
(Quadratic × Images [N3]) × Condition [Unbiased] | 0.11 | −0.02 – 0.23 | 0.089
Note: The table provides the test of the full model including the interaction of condition and images of interest (N2 and N3) on the intercept, linear, and quadratic time terms. The biased condition and the N2 are set as the reference estimate. Results in boldface are presented in the text.
Table 9. Results of GCA analysis of IWA data for time window 4 (processing N2 relative to N3).
Predictors | Estimates | CI | P (Two Tailed)
(Intercept) | 0.23 | 0.15 – 0.31 | <0.001
Linear | −0.07 | −0.14 – 0.01 | 0.073
Quadratic | 0.00 | −0.07 – 0.07 | 0.986
Images [N3] | 0.18 | 0.07 – 0.28 | 0.001
Condition [Unbiased] | −0.04 | −0.09 – 0.01 | 0.118
Linear × Images [N3] | 0.04 | −0.05 – 0.14 | 0.368
Quadratic × Images [N3] | −0.02 | −0.12 – 0.07 | 0.638
Linear × Condition [Unbiased] | 0.13 | 0.05 – 0.21 | 0.002
Quadratic × Condition [Unbiased] | 0.03 | −0.05 – 0.10 | 0.474
Images [N3] × Condition [Unbiased] | 0.01 | −0.03 – 0.04 | 0.687
(Linear × Images [N3]) × Condition [Unbiased] | −0.09 | −0.19 – 0.02 | 0.101
(Quadratic × Images [N3]) × Condition [Unbiased] | −0.08 | −0.19 – 0.02 | 0.128
Note: The table provides the test of the full model including the interaction of condition and images of interest (N2 and N3) on the intercept, linear, and quadratic time terms. The biased condition and the N2 are set as the reference estimate. Results in boldface are presented in the text.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
