Communication

Cross-Modal Priming Effect of Rhythm on Visual Word Recognition and Its Relationships to Music Aptitude and Reading Achievement

Tess S. Fotidzis, Heechun Moon, Jessica R. Steele and Cyrille L. Magne
1 Literacy Ph.D. Program, Middle Tennessee State University, Murfreesboro, TN 37132, USA
2 Institutional Research, University of South Alabama, Mobile, AL 36688, USA
3 Psychology Department, Middle Tennessee State University, Murfreesboro, TN 37132, USA
* Author to whom correspondence should be addressed.
Brain Sci. 2018, 8(12), 210; https://doi.org/10.3390/brainsci8120210
Submission received: 22 October 2018 / Revised: 26 November 2018 / Accepted: 28 November 2018 / Published: 29 November 2018
(This article belongs to the Special Issue Advances in the Neurocognition of Music and Language)

Abstract: Recent evidence suggests the existence of shared neural resources for rhythm processing in language and music. Such overlaps could be the basis of the facilitating effect of regular musical rhythm on spoken word processing previously reported for typical children and adults, as well as adults with Parkinson’s disease and children with developmental language disorders. The present study builds upon these previous findings by examining whether non-linguistic rhythmic priming also influences visual word processing, and the extent to which such a cross-modal priming effect of rhythm is related to individual differences in musical aptitude and reading skills. An electroencephalogram (EEG) was recorded while participants listened to a rhythmic tone prime, followed by a visual target word with a stress pattern that either matched or mismatched the rhythmic structure of the auditory prime. Participants were also administered standardized assessments of musical aptitude and reading achievement. Event-related potentials (ERPs) elicited by target words with a mismatching stress pattern showed an increased fronto-central negativity. Additionally, the size of the negative effect correlated with individual differences in musical rhythm aptitude and reading comprehension skills. These results support the existence of shared neurocognitive resources for linguistic and musical rhythm processing and have important implications for the use of rhythm-based activities in reading interventions.

1. Introduction

Music and language are complex cognitive abilities that are universal across human cultures. Both involve the combination of small sound units (e.g., phonemes for speech and notes for music), which, in turn, allows us to generate an unlimited number of utterances or melodies, in accordance with specific linguistic or musical grammatical rules (e.g., [1]). Of specific interest for the present study is the notion of rhythm. In music, rhythm is marked by the periodic succession of acoustic elements as they unfold over time, and some of these elements may be perceived as stronger than others. Meter is defined as the abstract hierarchical organization of these recurring strong and weak elements that emerges from rhythm. It is this metrical structure that allows listeners to form predictions and anticipations, and in turn dance or clap their hands to the beat of the music [2].
Similarly, in speech, the pattern of stressed (i.e., strong) and unstressed (i.e., weak) syllables occurring at the lexical level contributes to the metrical structure of an utterance. Lexical stress is usually defined as the relative emphasis that one or several syllables receive in a word [3]. Stress is typically realized by a combination of increased duration, loudness, and/or pitch change. In many languages, such as English, the salience of the stressed syllable is further reinforced by the fact that many unstressed syllables contain a reduced vowel [4]. Some languages are described as having fixed stress because the location of the stress is predictable. For instance, in French, the stress usually falls on the final full syllable [5]. By contrast, several languages are considered to have variable stress because the position of the stress is not predictable. In such languages, like English, stress may serve as a distinctive feature to distinguish noun-verb stress homographs [6]. For example, the word “permit” is stressed on the first syllable when used as a noun, but on the second syllable when used as a verb.
There is increasing support for the existence of rhythmic regularities in English, despite the apparent lack of physical periodicity of the stressed syllables when compared to the rhythmic structure of music (e.g., [7]). During speech production, rhythmic adjustments, such as stress shifts, may take place to avoid stress on adjacent syllables, and these shifts may give rise to a more regular alternating pattern of stressed and unstressed syllables [8]. For example, “thirteen” is normally stressed on the second syllable, but the stress can shift to the first syllable when the word is followed by a word with initial stress (e.g., “thirteen people”). These rhythmic adjustments may play a role in speech perception, as suggested by findings showing that sentences with stress shifts are perceived as more natural than sentences with stress clashes, even though words with shifted stress deviate from their default metrical structure [9].
In music, the Dynamic Attending Theory (DAT) provides a framework in which auditory rhythms are thought to create hierarchical expectancies for the signal as it unfolds over time [10,11]. According to the DAT, distinct neural oscillations entrain to the multiple hierarchical levels of the metrical structure of the auditory signal, and strong metrical positions act as attentional attractors, thus making acoustic events occurring at these strong positions easier to process. Similarly, listeners do not pay equal attention to all parts of the speech stream, and speech rhythm may influence which moments are hierarchically attended to in the speech signal. For instance, detection of a target phoneme was found to be faster if it was embedded in a rhythmically regular sequence of words (i.e., regular time interval between successive stressed syllables), thus suggesting that speech rhythm cues, such as stressed syllables, guide listeners’ attention to specific portions of the speech signal [12]. Further evidence suggests that predictions regarding speech rhythm and meter may be crucial for language acquisition [13], speech segmentation [14], word recognition [15], and syntactic parsing [16].
Given the structural similarities between music and language, a large body of literature has documented which neuro-cognitive systems may be shared between language and music (e.g., [7,17,18]), and converging evidence supports the idea that musical and linguistic rhythm perception skills partially overlap [19,20,21]. In line with these findings, several EEG studies have revealed a priming effect of musical rhythm on spoken language processing. For instance, listeners showed a more robust neural marker of beat tracking and better comprehension when stressed syllables aligned with strong musical beats in sung sentences [22]. Likewise, EEG findings demonstrated that spoken words were more easily processed when they followed non-linguistic primes with a metrical structure that matched the words’ metrical structure [23]. A follow-up study using a similar design showed that this benefit of rhythm priming on speech processing may be mediated by cross-domain neural phase entrainment [24].
The purpose of the present study was to shed further light on the effect of non-linguistic rhythmic priming on language processing (e.g., [22,23,24]). We specifically focused on words with a trochaic stress pattern (i.e., a stressed syllable followed by an unstressed syllable) because they constitute more than 85% of content words in the English lexicon [25]. This high frequency of the trochaic pattern may play a particularly prominent role in English language development, as infants seem to adopt a metrical segmentation strategy that treats a stressed syllable as the beginning of a word in the continuous speech stream [26]. Evidence in support of this important role of the trochaic pattern comes from studies conducted with English-speaking infants, who develop a preference for the trochaic pattern as early as the age of 6 months [27]. By contrast, the ability to detect words with an iambic pattern (i.e., an unstressed syllable followed by a stressed syllable) develops later, around 10.5 months, and seems to rely less on stress cues and more on additional sets of linguistic knowledge, such as phonotactic constraints (i.e., the sequences of phonemes that are allowed in a given language) and allophonic cues (i.e., the multiple phonetic variants of a phoneme, whose occurrences depend on their position in a word and their phonetic context) [13].
The first specific aim was to examine whether the cross-domain rhythmic priming effect is also present when target words are visually presented. To this end, participants were presented with rhythmic auditory prime sequences (either a repeating pattern of long-short or short-long tone pairs), followed by a visual target word with a stress pattern that either matched or mismatched the temporal structure of the prime (see Figure 1). Based on previous literature (e.g., [20,23,28]), we predicted that words that do not match the temporal structure of the rhythmic prime would elicit an increased centro-frontal negativity.
A second aim of the study was to determine whether such a rhythmic priming effect would be related to musical aptitude. Musical aptitude has been associated with enhanced perception of speech cues that are important correlates of rhythm. For instance, individuals with formal musical training are better than non-musicians at detecting violations of word pitch contours [29,30] and syllabic durations [31]. In addition, electrophysiological evidence shows that the size of a negative ERP component elicited by spoken words with an unexpected stress pattern correlates with individual differences in musical rhythm abilities [20]. Thus, in the present study, if the relationship between musical aptitude and speech rhythm sensitivity transfers to the visual domain, we expected the amplitude of the negativity elicited by the cross-modal priming effect to correlate with individual scores on a musical aptitude test.
Finally, the third study aim was to test whether the cross-modal priming effect present in the ERPs correlated with individual differences in reading achievement. Mounting evidence suggests a link between auditory rhythm sensitivity (both linguistic and musical) and reading abilities (e.g., [32,33,34,35]). As such, we collected individuals’ scores on a college readiness reading achievement test to examine whether the cross-modal ERP effect correlated with individual differences in reading comprehension skills. If rhythm perception skills relate to reading abilities as suggested by the current literature [32,33,34,35], the amplitude of the negativity elicited by the cross-modal priming effect should correlate with individual scores on the American College Testing (ACT) reading test.

2. Materials and Methods

2.1. Participants

Eighteen first-year college students took part in the experiment (14 females and 4 males, mean age = 19.5 years, age range: 18–22). All were right-handed, native English speakers with less than two years of formal musical training, and none were music majors. The study was approved by the Institutional Review Board at Middle Tennessee State University, and written consent was obtained from the participants prior to the start of the experiment.

2.2. Standardized Measures

The Advanced Measures of Music Audiation (AMMA; [36]) was used to assess participants’ musical aptitude. The AMMA has been used previously to measure correlations between musical aptitude and indices of brain activity (e.g., [20,37,38,39]). This measure was nationally standardized with a normed sample of 5336 U.S. students and offers percentile-ranked norms for both music and non-music majors. Participants were presented with 30 pairs of melodies and asked to determine whether the two melodies of each pair were the same, tonally different, or rhythmically different. The AMMA provides separate scores for rhythmic and tonal abilities. For non-music majors, reliability scores are 0.80 for the tonal score and 0.81 for the rhythm score [36].
The reading scores on the ACT exam were used to examine the relationship between reading comprehension and speech rhythm sensitivity. The ACT reading section is a standardized achievement test that comprises short passages from four categories (prose fiction, social science, humanities, and natural science) and 40 multiple-choice questions that test the reader’s comprehension of the passages. Scores range between 1 and 36. The test was administered and scored by the non-profit organization of the same name (ACT, Inc., Iowa City, IA, USA) using a paper and pencil format.

2.3. EEG Cross-Modal Priming Paradigm

Prime sequences consisted of a rhythmic tone pattern of either a long-short or short-long structure repeated three times. The tones consisted of a 500 Hz sine wave with a 10 ms rise/fall, and a duration of either 200 ms (long) or 100 ms (short). In long-short sequences, the long tone and short tone were separated by a silence of 100 ms, and each of the three successive long-short tone pairs was followed by a silence of 200 ms. In short-long sequences, the short tone and long tone were separated by a silence of 50 ms, and each of the three successive short-long tone pairs was followed by a silence of 250 ms. Because previous research has shown that native speakers of English have a cultural bias toward grouping a sequence of tones differing in duration into short-long patterns [40,41], a series of behavioral pilot experiments was conducted with different iterations of the tone sequences to determine which parameters would provide consistent perception of either long-short or short-long patterns.
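The timing parameters above fully specify the primes. For concreteness, the following is a minimal sketch in Python with NumPy (the authors do not state their synthesis tools, and the sampling rate is an assumption) that generates both sequence types; note that a tone pair plus its trailing silence spans 600 ms under either grouping, so both primes last 1.8 s.

```python
import numpy as np

FS = 44100  # assumed sampling rate; not specified in the paper

def tone(duration_s, freq=500.0, ramp_s=0.010, fs=FS):
    """500 Hz sine with 10 ms linear rise/fall ramps."""
    t = np.arange(int(duration_s * fs)) / fs
    x = np.sin(2 * np.pi * freq * t)
    n_ramp = int(ramp_s * fs)
    env = np.ones_like(x)
    env[:n_ramp] = np.linspace(0.0, 1.0, n_ramp)
    env[-n_ramp:] = np.linspace(1.0, 0.0, n_ramp)
    return x * env

def silence(duration_s, fs=FS):
    return np.zeros(int(duration_s * fs))

def prime_sequence(order="long-short"):
    """Three repetitions of a tone pair, with the silences given in the text."""
    long_t, short_t = tone(0.200), tone(0.100)
    if order == "long-short":
        pair = np.concatenate([long_t, silence(0.100), short_t, silence(0.200)])
    else:  # short-long grouping
        pair = np.concatenate([short_t, silence(0.050), long_t, silence(0.250)])
    return np.tile(pair, 3)

prime = prime_sequence("long-short")
print(f"{len(prime) / FS:.2f} s")  # 1.80 s; the short-long sequence is equally long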
Visual targets were composed of 140 English real-word bisyllabic nouns and 140 pseudowords, which were all selected from the database of the English Lexicon Project [42]. The lexical frequency of all the words was controlled using the log HAL frequency [43]. The mean log HAL frequency for each set of stress patterns was 10.28 (SD = 0.98) for trochaic sequences and 10.28 (SD = 0.97) for iambic sequences. Pseudowords were matched to the real words in terms of syllable count and word length and were used only for the purpose of the lexical decision task. Half of the real words (N = 70) had a trochaic stress pattern (i.e., stressed on the first syllable, for example, “basket”). The other half consisted of fillers with an iambic stress pattern (i.e., stressed on the second syllable, for example, “guitar”).
Short-long and long-short prime sequences were combined with the visual target words to create two experimental conditions in which the stress pattern of the target word either matched or mismatched the rhythm of the auditory prime.
We chose to analyze only the ERPs elicited by trochaic words for several reasons. First, trochaic words comprise the predominant stress pattern in English (85–90% of spoken English words according to [34]), and consequently, participants were likely familiar with their pronunciation. Second, because stressed syllables correspond to word onset in trochaic words, they introduce less temporal jitter than iambic words when computing ERPs across trials. This jitter is particularly problematic for iambic words during silent reading, because there is no direct way to measure when participants read the second syllable. Third, participants were recruited from a university located in the southeastern region of the United States, and either originated from this area or had been living in the area for several years. It is well documented that the Southern American English dialect tends to place stress on the first syllable of many iambic words, even though these words are stressed on the second syllable in standard American English (e.g., [44]). As such, rhythmic expectations are harder to predict for iambic words.

2.4. Procedure

Participants’ musical aptitude was first measured using the AMMA [36]. Following administration of the AMMA test, participants were seated in a soundproofed and electrically shielded room. Auditory prime sequences were presented through headphones, and target stimuli were visually presented on a computer screen placed approximately 3 feet in front of the participant. Words and pseudowords were written in black lowercase characters on a white background. No visual cue was provided to the participant regarding the location of the stressed syllables in the target words. Stimulus presentation was controlled using the software E-Prime 2.0 Professional with Network Timing Protocol (Psychology Software Tools, Inc., Pittsburgh, PA, USA). Participants were presented with 5 blocks of 56 stimuli. The trials were randomized within each block, and the order of the blocks was counterbalanced across participants. Each trial began with a fixation cross displayed at the center of the computer screen, which remained until 2 s after the onset of the visual target word. Participants were asked to silently read the target word and to press one button if they thought it was a real English word, or another button if they thought it was a nonword. The entire experimental session lasted 1.5 h.
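For illustration, the trial structure described above could be scripted as follows. This is a simplified sketch using PsychoPy as a stand-in for the E-Prime software the authors actually used; the window settings, response keys, and the prime_file audio file are all assumptions, and the fixation/word display timing is approximated.

```python
from psychopy import core, event, sound, visual  # assumption: PsychoPy stand-in for E-Prime

win = visual.Window(fullscr=True, color="white")
fixation = visual.TextStim(win, text="+", color="black")
target = visual.TextStim(win, text="", color="black")
clock = core.Clock()

def run_trial(prime_file, word):
    """One trial: fixation, auditory prime, visual target, lexical decision."""
    fixation.draw(); win.flip()
    prime = sound.Sound(prime_file)       # hypothetical .wav of the tone sequence
    prime.play()
    core.wait(prime.getDuration())
    target.text = word
    target.draw(); win.flip()
    clock.reset()
    # Lexical decision: real-word vs. nonword button press (key mapping assumed)
    return event.waitKeys(maxWait=2.0, keyList=["f", "j"], timeStamped=clock)
```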

2.5. EEG Acquisition and Preprocessing

EEG was recorded continuously from 128 Ag/AgCl electrodes embedded in sponges in a Hydrocel Geodesic Sensor Net (EGI, Eugene, OR, USA) placed on the scalp, connected to a NetAmps 300 amplifier, and using a MacPro computer. Electrode impedances were kept below 50 kΩ. Data were referenced online to Cz and re-referenced offline to the averaged mastoids. In order to detect blinks and vertical eye movements, the vertical and horizontal electrooculograms (EOG) were also recorded. The EEG and EOG were digitized at a sampling rate of 500 Hz. EEG preprocessing was carried out with NetStation Viewer and Waveform tools. The EEG was first filtered with a bandpass of 0.1 to 30 Hz. Data time-locked to the onset of trochaic target words were then segmented into epochs of 1100 ms, starting 100 ms prior to word onset and continuing 1000 ms post-onset. Trials containing movements, ocular artifacts, or amplifier saturation were discarded. ERPs were computed separately for each participant and each condition by averaging the artifact-free EEG segments relative to the 100 ms pre-stimulus baseline.
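These preprocessing steps map onto standard EEG tooling. Below is a minimal sketch using MNE-Python as a stand-in for the authors' NetStation pipeline; the file name, event codes, mastoid channel labels, and rejection thresholds are assumptions.

```python
import mne  # assumption: MNE-Python stand-in for the authors' NetStation tools

raw = mne.io.read_raw_egi("sub01.raw", preload=True)  # hypothetical EGI recording
raw.filter(l_freq=0.1, h_freq=30.0)                   # 0.1-30 Hz band-pass
raw.set_eeg_reference(["E57", "E100"])                # hypothetical mastoid channel labels

events = mne.find_events(raw)                         # trochaic-word-onset triggers
epochs = mne.Epochs(
    raw, events, event_id={"match": 1, "mismatch": 2},  # hypothetical condition codes
    tmin=-0.1, tmax=1.0,                              # 1100 ms epochs around word onset
    baseline=(None, 0),                               # 100 ms pre-stimulus baseline
    reject=dict(eeg=100e-6, eog=150e-6),              # assumed artifact thresholds
)
evoked_match = epochs["match"].average()              # per-condition, per-participant ERP
evoked_mismatch = epochs["mismatch"].average()
```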

2.6. Data Analysis

Statistical analyses were performed using MATLAB and the FieldTrip open source toolbox [45]. A planned comparison between the ERPs elicited by mismatching trochaic words and matching trochaic words was performed using a cluster-based permutation approach. This non-parametric data-driven approach does not require the specification of any latency range or region of interest a priori, while also offering a solution to the problem of multiple comparisons (see [46]).
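To illustrate the logic of the cluster-based permutation test, here is a sketch using the analogous MNE-Python routine rather than the FieldTrip code the authors used; the per-participant ERP arrays and the channel adjacency matrix are hypothetical inputs.

```python
import numpy as np
from mne.stats import spatio_temporal_cluster_1samp_test

# Hypothetical inputs: per-participant ERP difference (mismatch minus match),
# shape (n_subjects, n_times, n_channels); adjacency encodes channel neighborhoods
# (e.g., from mne.channels.find_ch_adjacency).
X = erps_mismatch - erps_match
t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_1samp_test(
    X, adjacency=adjacency, n_permutations=1024, tail=0
)
print([i for i, p in enumerate(cluster_pv) if p < 0.05])  # significant clusters
```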
To relate the ERP results to the behavioral measures (i.e., musical aptitude and reading comprehension), an index of sensitivity to speech rhythm cues was first calculated from the ERPs as the mean of the significant amplitude differences between ERPs elicited by matching and mismatching trochaic words, taken at each channel and time point belonging to the resulting clusters (see [20,47] for similar approaches). Pearson correlations were then tested between the ERP cluster mean difference and the participants’ scores on the AMMA and ACT reading section, respectively. A multiple regression was also computed with the ERP cluster mean difference as the outcome measure, and the AMMA Rhythm scores and ACT Reading scores as the predictor variables.
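Continuing the sketch above, the brain-behavior analysis could look as follows, with SciPy and statsmodels standing in for the authors' MATLAB code; X and clusters come from the previous block, and amma_rhythm and act_reading are hypothetical score arrays.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Speech rhythm sensitivity index: mean ERP difference over the channels and
# time points in the significant cluster, one value per participant.
mask = np.zeros(X.shape[1:], dtype=bool)   # (n_times, n_channels)
mask[clusters[0]] = True                   # assumes the first cluster is the significant one
erp_index = X[:, mask].mean(axis=1)

# Pearson correlations with the behavioral scores
r_rhythm, p_rhythm = stats.pearsonr(erp_index, amma_rhythm)
r_reading, p_reading = stats.pearsonr(erp_index, act_reading)

# Multiple regression: ERP index ~ AMMA Rhythm + ACT Reading
predictors = sm.add_constant(np.column_stack([amma_rhythm, act_reading]))
fit = sm.OLS(erp_index, predictors).fit()
print(fit.summary())                       # R-squared, F, coefficients, t and p values
```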

3. Results

3.1. Metrical Expectancy

Overall, participants performed well on the lexical decision task, as suggested by the mean accuracy rate (M = 98.82%, SD = 0.85). A paired samples t-test was computed to compare accuracy rates for real target words in the matching (M = 99.83%, SD = 0.70) and mismatching (M = 99.42%, SD = 1.40) rhythm conditions. No statistically significant difference was found between the two conditions, t(35) = 1.54, p = 0.13, two-tailed.
Analyses of the ERP data revealed that target trochaic words that mismatched the rhythmic prime elicited a significantly larger negativity from 300 to 708 ms over a centro-frontal cluster of electrodes (p < 0.001, See Figure 2).

3.2. Brain-Behavior Relationships

The negative ERP cluster mean difference showed a statistically significant positive correlation with the AMMA Rhythm scores (r = 0.74, p < 0.001; see Figure 3A) and the ACT Reading scores (r = 0.60, p = 0.009; see Figure 3B). A statistically significant positive correlation was also found between the AMMA Rhythm scores and ACT Reading scores (r = 0.55, p = 0.016; see Figure 3C). By contrast, no statistically significant correlation was found between the AMMA Tonal scores and either the negative ERP cluster mean difference (r = 0.30, p = 0.23) or the ACT Reading scores (r = 0.09, p = 0.70). The maximum Cook’s distance for the reported correlations indicated no undue influence of any single data point on the fitted models (maximum Cook’s distance < 0.5).
A multiple regression was conducted to investigate whether AMMA Rhythm scores and ACT Reading scores predicted the size of the negative ERP cluster mean difference. Table 1 summarizes the analysis results. The regression model explained 59.9% of the variance and was a statistically significant predictor of the negative ERP cluster mean difference (R2 = 0.599, F (2,15) = 11.2, p = 0.001). As can be seen in Table 1, AMMA Rhythm scores statistically significantly contributed to the model (β = 0.594, t (15) = 3.023, p = 0.009), but ACT Reading scores did not (β = 0.267, t (15) = 1.359, p = 0.194). The final predictive model was: Negative ERP Cluster Mean Difference = (0.281 × AMMA Rhythm) + (0.081 × ACT Reading) + 8.000.
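As a worked illustration, the fitted equation can be applied directly to a pair of scores; the input values below are hypothetical and serve only to show the arithmetic.

```python
def predict_erp_index(amma_rhythm, act_reading):
    """Fitted model from Table 1 (unstandardized coefficients)."""
    return 0.281 * amma_rhythm + 0.081 * act_reading + 8.000

# Hypothetical scores for illustration only:
print(predict_erp_index(30, 25))  # 0.281*30 + 0.081*25 + 8.000 = 18.455
```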

4. Discussion

The current study aimed to examine the cross-modal priming effect of non-linguistic auditory rhythm on written word processing and to investigate whether such an effect would relate to individual differences in musical aptitude and reading comprehension. As hypothesized, trochaic target words that did not match the rhythmic structure of the auditory prime were associated with an increased negativity over the centro-frontal part of the scalp. This finding is in line with previous ERP studies on speech rhythm and meter [6,15,20,28,31,48,49,50]. It has generally been proposed that this negative effect reflects either an increased N400 [15,49], or a domain-general rule-based error-detection mechanism [6,20,28,31,51,52]. The fact that similar negative effects have been reported in response to metric deviations in tone sequences (e.g., [53,54]) further supports the latter interpretation.
While the aforementioned studies were conducted either in the linguistic or the musical domain, the negative effect observed for mismatching target words was generated by non-linguistic prime sequences in the present experiment. Cason and Schön [23] previously reported a cross-domain priming effect of music on speech processing, which was reflected by a similar increased negativity when the metrical structure of the spoken target word did not match the rhythmic structure of the musical prime. Several other findings have since shown that temporal expectancies generated by rhythmically regular non-linguistic primes can facilitate spoken language processing in typical adults (e.g., [24,55]) and children [56,57], as well as adults with Parkinson’s disease [58], children with cochlear implants [59], and children with language disorders [60]. This beneficial effect may stem from the regular rhythmic structure of the prime, which provides temporally predictable cues to which internal neural oscillators can anchor [24]. The present findings support and extend this line of research by showing that this negativity is elicited even when the target words are visually presented, thus suggesting that non-linguistic rhythm can induce metrical expectations not only across distinct cognitive domains, but also across different sensory modalities [61]. These findings also provide additional evidence in favor of the view that rhythm/meter processing relies on a domain-general neural system that is not specific to language [19,21,22].
We further investigated whether this cross-modal priming effect was related to individual differences in musical aptitude. Interestingly, our results showed a statistically significant correlation between the size of the brain response elicited by unexpected stress patterns and the AMMA rhythm subscore, but not the tonal subscore. In addition, musical rhythm aptitude was a statistically significant predictor of speech rhythm sensitivity, even after controlling for reading comprehension skills. This is in line with previous ERP studies showing that adult musicians performed better than non-musicians at detecting words pronounced with an incorrect stress pattern [31]. In addition, this enhanced sensitivity to speech meter was associated with larger electrophysiological responses to incorrectly pronounced words, which was interpreted as reflecting more efficient early auditory processing of the temporal properties of speech.
Robust associations have also been found between musical rhythm skills and speech prosody perception, even after controlling for years of music education [19]. Of particular relevance to the present experiment, individual differences in brain sensitivity to speech rhythm variations can be explained by variance in musical rhythm aptitude in individuals with less than two years of musical training. For instance, in a recent experiment [20], participants’ musical aptitude was assessed using the same standardized measure of musical abilities (i.e., the AMMA) as in the present study. Participants listened to sequences consisting of four bisyllabic words for which the stress pattern of the final word either matched or mismatched the stress pattern of the preceding words. Words with a mismatching stress pattern elicited an increased negative ERP component with the same scalp distribution and latency as the one found in the current data. More importantly, participants’ musical rhythm aptitude correlated statistically significantly with the size of the negative effect. Thus, in light of the aforementioned literature, the present results confirm and extend previous data suggesting a possible transfer of learning between the musical and linguistic domains (see [62] for a review).
Adding to the growing literature showing a relationship between sensitivity to speech rhythm and reading skills, our results revealed a statistically significant positive correlation between the scores on the ACT reading subtest and the size of the negative ERP effect elicited by mismatching stress patterns. Previous studies have mainly focused on typically developing young readers, using several novel speech rhythm tasks in conjunction with standardized measures of reading abilities, and results consistently showed a correlation between performance on the speech rhythm tasks and individual differences in word reading skills [63,64,65,66]. It has been proposed that early sensitivity to speech rhythm cues may contribute to the development of phonological representations [32]. However, sensitivity to speech rhythm cues still explains unique variance in word reading skills after controlling for phonological processing skills [67], suggesting that it also contributes to reading development independently of phonological awareness.
More directly related to the present study, research with older readers and adults suggests that knowledge of the prosodic structure of words continues to play a role in skilled reading. For instance, visual word recognition is facilitated when primed by word fragments with a matching stress pattern [68,69]. Two other studies conducted on typical adults focused on lexical stress perception in isolated multisyllabic words [70,71], and found a significant relationship with reading comprehension. Likewise, adult struggling readers usually show lower performance than their typical peers on tasks measuring perception of word stress patterns or auditory rhythms [72,73,74,75] (but see [74,76]).
Interestingly, the finding that reading comprehension was not a statistically significant contributor to speech rhythm sensitivity after controlling for musical rhythm aptitude supports the Temporal Sampling Framework (TSF) proposed by Goswami [32]. According to the TSF, the link between speech rhythm sensitivity and reading skills is mediated by domain-general neurocognitive mechanisms for processing acoustic information carrying rhythmic cues. In line with this interpretation, we found a statistically significant correlation between the AMMA rhythm scores and reading achievement scores.
The OPERA (overlap, precision, emotion, repetition, attention) hypothesis formulated by Patel [77,78] further provides a potential explanation of music-training driven plasticity in brain networks involved in language. OPERA offers a set of five optimal conditions that must be met for music training to drive plasticity: (1) music and language have overlapping anatomical substrates; (2) music activities require a greater level of precision compared to language; (3) music activities evoke strong emotions; (4) music training involves repeated practice; (5) music activities require sustained attention. In line with this framework, the Precise Auditory Timing Hypothesis (PATH) proposed by Tierney and Kraus [79] predicts that music programs that focus on rhythm activities, with an emphasis on entrainment and timing, will be more effective in improving reading-related skills, such as phonological processing skills, because there are overlaps between language and music networks processing rhythmic information, and music requires a higher level of auditory-motor timing precision than language. OPERA and PATH thus provide compelling explanations for the significant relationships we report here between musical rhythm aptitude, speech rhythm sensitivity, and reading achievement. While our present study was correlational (and conducted with non-musicians), data from recent longitudinal studies using randomized controlled trials indeed show promising results of rhythm-based intervention for the development of language skills in children with reading disorders [80], and typical peers [81].
Finally, the fact that we found a “metrical” negativity to visual targets, even though participants were not allowed to sound out the words, further supports theories proposing that information about the metrical structure of a word is part of its lexical representation and automatically retrieved during silent reading [82,83]. This idea is in line with the Implicit Prosody Hypothesis (IPH) originally proposed by Fodor [84]. The IPH is closely related to the concept of verbal imagery or inner voice, which can be found in the literature throughout the 20th century [82]. According to the IPH, readers create a mental representation of the prosodic structure of the text while they are silently reading. Several studies have provided compelling evidence in support of the IPH, especially regarding lexical stress. For instance, eye-tracking studies showed that readers had longer reading times and more eye fixations for four-syllable words with two stressed syllables than for those with one stressed syllable [85], and that expectations generated by the stress pattern of successive words may affect early stages of syntactic analysis of upcoming words in written sentences [82,86]. Taken together, these results and the present data provide compelling evidence for a role of prosodic representations of a word’s stress pattern during silent reading.
One potential limitation of the current research is the use of ACT reading scores, which may not be fully representative of the participants’ reading skills. In particular, phonemic awareness, decoding, and fluency, components known to contribute substantially to reading comprehension [87], cannot be teased apart in the ACT reading subtest. Future research using a more comprehensive battery of language and reading assessments would allow a more complete understanding of which reading components are most closely related to speech rhythm perception skills.

5. Conclusions

The present data confirm and extend previous studies showing facilitating effects of a regular non-linguistic rhythm on spoken language processing (e.g., [23,55,59]) by demonstrating that this is also the case for written language processing. We propose that this cross-modal effect of rhythm is mediated by the automatic retrieval of the word’s metrical structure during silent reading (i.e., implicit prosody generated through verbal imagery). Finally, the finding that the negativity associated with this cross-modal priming effect of rhythm correlated with individual differences in musical aptitude and reading achievement further supports the potential clinical and educational implications of using rhythm-based interventions for populations with language or learning disabilities.

Author Contributions

T.S.F. collected the data and wrote the paper. H.M. collected and analyzed the data. J.R.S. wrote and edited the paper. C.L.M. conceived the idea, designed the experiments, and wrote the paper.

Funding

This study was funded by NSF Grant # BCS-1261460 awarded to Cyrille Magne and by the MTSU Foundation.

Conflicts of Interest

The authors declare no conflict of interest. The funding sources had no role in study design; in the collection, analysis and interpretation of data; in the writing of the report; nor in the decision to submit the article for publication.

References

1. Lerdahl, F.; Jackendoff, R. An overview of hierarchical structure in music. Music Percept. Interdiscip. J. 1984, 1, 229–252.
2. London, J. Hearing in Time: Psychological Aspects of Musical Meter, 1st ed.; Oxford University Press: Oxford, UK, 2004; ISBN 0-19-516081-9.
3. Fox, A. Prosodic Features and Prosodic Structures: The Phonology of Suprasegmentals; Oxford University Press: New York, NY, USA, 2000.
4. Nespor, M. On the rhythm parameter in phonology. In Logical Issues in Language Acquisition; Rocca, I., Ed.; Foris Publications: Dordrecht, The Netherlands, 1990; pp. 157–175.
5. Delattre, P. Studies in French and Comparative Phonetics, 1st ed.; Mouton: The Hague, The Netherlands, 1966.
6. Moon, H.; Magne, C. Noun/verb distinction in English stress homographs: An ERP study. Neuroreport 2015, 26, 753–757.
7. Patel, A.D. Music, Language, and the Brain; Oxford University Press: New York, NY, USA, 2008; ISBN 978-0-19-975530-1.
8. Liberman, M.; Prince, A. On stress and linguistic rhythm. Linguist. Inq. 1977, 8, 249–336.
9. Henrich, K.; Alter, K.; Wiese, R.; Domahs, U. The relevance of rhythmical alternation in language processing: An ERP study on English compounds. Brain Lang. 2014, 136, 19–30.
10. Jones, M.R.; Boltz, M. Dynamic attending and responses to time. Psychol. Rev. 1989, 96, 459–491.
11. Large, E.W.; Jones, M.R. The dynamics of attending: How people track time-varying events. Psychol. Rev. 1999, 106, 119–159.
12. Quené, H.; Port, R.F. Effects of timing regularity and metrical expectancy on spoken-word perception. Phonetica 2005, 62, 1–13.
13. Jusczyk, P.W. How infants begin to extract words from speech. Trends Cogn. Sci. 1999, 3, 323–328.
14. Mattys, S.L.; Samuel, A.G. How lexical stress affects speech segmentation and interactivity: Evidence from the migration paradigm. J. Mem. Lang. 1997, 36, 87–116.
15. Magne, C.; Astésano, C.; Aramaki, M.; Ystad, S.; Kronland-Martinet, R.; Besson, M. Influence of syllabic lengthening on semantic processing in spoken French: Behavioral and electrophysiological evidence. Cereb. Cortex 2007, 17, 2659–2668.
16. Schmidt-Kassow, M.; Kotz, S.A. Entrainment of syntactic processing? ERP-responses to predictable time intervals during syntactic reanalysis. Brain Res. 2008, 1226, 144–155.
17. Jantzen, M.G.; Large, E.W.; Magne, C. Editorial: Overlap of neural systems for processing language and music. Front. Psychol. 2016, 7, 876.
18. Peretz, I.; Vuvan, D.; Lagrois, M.-E.; Armony, J.L. Neural overlap in processing music and speech. Philos. Trans. R. Soc. B 2015, 370, 20140090.
19. Hausen, M.; Torppa, R.; Salmela, V.R.; Vainio, M.; Särkämö, T. Music and speech prosody: A common rhythm. Front. Psychol. 2013, 4, 566.
20. Magne, C.; Jordan, D.K.; Gordon, R.L. Speech rhythm sensitivity and musical aptitude: ERPs and individual differences. Brain Lang. 2016, 153, 13–19.
21. Peter, V.; McArthur, G.; Thompson, W.F. Discrimination of stress in speech and music: A mismatch negativity (MMN) study. Psychophysiology 2012, 49, 1590–1600.
22. Gordon, R.L.; Magne, C.L.; Large, E.W. EEG correlates of song prosody: A new look at the relationship between linguistic and musical rhythm. Front. Psychol. 2011, 2, 352.
23. Cason, N.; Schön, D. Rhythmic priming enhances the phonological processing of speech. Neuropsychologia 2012, 50, 2652–2658.
24. Falk, S.; Lanzilotti, C.; Schön, D. Tuning neural phase entrainment to speech. J. Cogn. Neurosci. 2017, 29, 1378–1389.
25. Cutler, A.; Carter, D.M. The predominance of strong initial syllables in the English vocabulary. Comput. Speech Lang. 1987, 2, 133–142.
26. Cutler, A.; Norris, D. The role of strong syllables in segmentation for lexical access. J. Exp. Psychol. Hum. Percept. Perform. 1988, 14, 113–121.
27. Jusczyk, P.W.; Cutler, A.; Redanz, N.J. Infants’ preference for the predominant stress patterns of English words. Child Dev. 1993, 64, 675–687.
28. Rothermich, K.; Schmidt-Kassow, M.; Schwartze, M.; Kotz, S.A. Event-related potential responses to metric violations: Rules versus meaning. Neuroreport 2010, 21, 580–584.
29. Magne, C.; Schön, D.; Besson, M. Musician children detect pitch violations in both music and language better than nonmusician children: Behavioral and electrophysiological approaches. J. Cogn. Neurosci. 2006, 18, 199–211.
30. Schön, D.; Magne, C.; Besson, M. The music of speech: Music training facilitates pitch processing in both music and language. Psychophysiology 2004, 41, 341–349.
31. Marie, C.; Magne, C.; Besson, M. Musicians and the metric structure of words. J. Cogn. Neurosci. 2011, 23, 294–305.
32. Goswami, U. A temporal sampling framework for developmental dyslexia. Trends Cogn. Sci. 2011, 15, 3–10.
33. Harrison, E.; Wood, C.; Holliman, A.J.; Vousden, J.I. The immediate and longer-term effectiveness of a speech-rhythm-based reading intervention for beginning readers. J. Res. Read. 2018, 41, 220–241.
34. Holliman, A.J.; Williams, G.J.; Mundy, I.R.; Wood, C.; Hart, L.; Waldron, S. Beginning to disentangle the prosody-literacy relationship: A multi-component measure of prosodic sensitivity. Read. Writ. 2014, 27, 255–266.
35. Thomson, J.; Jarmulowicz, L. Linguistic Rhythm and Literacy; John Benjamins Publishing Company: Amsterdam, The Netherlands, 2016.
36. Gordon, E.E. Predictive Validity Study of AMMA: A One-Year Longitudinal Predictive Validity Study of the Advanced Measures of Music Audiation; GIA Publications: Chicago, IL, USA, 1990.
37. Schneider, P.; Scherg, M.; Dosch, H.G.; Specht, H.J.; Gutschalk, A.; Rupp, A. Morphology of Heschl’s gyrus reflects enhanced activation in the auditory cortex of musicians. Nat. Neurosci. 2002, 5, 688–694.
38. Seppänen, M.; Brattico, E.; Tervaniemi, M. Practice strategies of musicians modulate neural processing and the learning of sound-patterns. Neurobiol. Learn. Mem. 2007, 87, 236–247.
39. Vuust, P.; Brattico, E.; Seppänen, M.; Näätänen, R.; Tervaniemi, M. The sound of music: Differentiating musicians using a fast, musical multi-feature mismatch negativity paradigm. Neuropsychologia 2012, 50, 1432–1443.
40. Hay, J.S.F.; Diehl, R.L. Perception of rhythmic grouping: Testing the iambic/trochaic law. Percept. Psychophys. 2007, 69, 113–122.
41. Iversen, J.R.; Patel, A.D.; Ohgushi, K. Perception of rhythmic grouping depends on auditory experience. J. Acoust. Soc. Am. 2008, 124, 2263–2271.
42. Balota, D.A.; Yap, M.J.; Hutchison, K.A.; Cortese, M.J.; Kessler, B.; Loftis, B.; Neely, J.H.; Nelson, D.L.; Simpson, G.B.; Treiman, R. The English Lexicon Project. Behav. Res. Methods 2007, 39, 445–459.
43. Lund, K.; Burgess, C. Producing high-dimensional semantic spaces from lexical co-occurrence. Behav. Res. Methods Instrum. Comput. 1996, 28, 203–208.
44. Thomas, E.R. Rural white southern accents. In A Handbook of Varieties of English: A Multimedia Reference Tool; Kortmann, B., Schneider, E.W., Eds.; Mouton de Gruyter: New York, NY, USA, 2004; pp. 300–324.
45. Oostenveld, R.; Fries, P.; Maris, E.; Schoffelen, J.-M. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. 2011, 2011, 1–9.
46. Maris, E.; Oostenveld, R. Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 2007, 164, 177–190.
47. Lense, M.D.; Gordon, R.L.; Key, A.P.F.; Dykens, E.M. Neural correlates of cross-modal affective priming by music in Williams syndrome. Soc. Cogn. Affect. Neurosci. 2014, 9, 529–537.
48. Bohn, K.; Knaus, J.; Wiese, R.; Domahs, U. The influence of rhythmic (ir)regularities on speech processing: Evidence from an ERP study on German phrases. Neuropsychologia 2013, 51, 760–771.
49. Domahs, U.; Wiese, R.; Bornkessel-Schlesewsky, I.; Schlesewsky, M. The processing of German word stress: Evidence for the prosodic hierarchy. Phonology 2008, 25, 1–36.
50. McCauley, S.M.; Hestvik, A.; Vogel, I. Perception and bias in the processing of compound versus phrasal stress: Evidence from event-related brain potentials. Lang. Speech 2012, 56, 23–44.
51. Rothermich, K.; Schmidt-Kassow, M.; Kotz, S.A. Rhythm’s gonna get you: Regular meter facilitates semantic sentence processing. Neuropsychologia 2012, 50, 232–244.
52. Schmidt-Kassow, M.; Kotz, S.A. Attention and perceptual regularity in speech. Neuroreport 2009, 20, 1643–1647.
53. Brochard, R.; Abecasis, D.; Potter, D.; Ragot, R.; Drake, C. The “Ticktock” of our internal clock: Direct brain evidence of subjective accents in isochronous sequences. Psychol. Sci. 2003, 14, 362–366.
54. Ystad, S.; Magne, C.; Farner, S.; Pallone, G.; Aramaki, M.; Besson, M.; Kronland-Martinet, R. Electrophysiological study of algorithmically processed metric/rhythmic variations in language and music. EURASIP J. Audio Speech Music Process. 2007, 2007, 03019.
55. Cason, N.; Astésano, C.; Schön, D. Bridging music and speech rhythm: Rhythmic priming and audio-motor training affect speech perception. Acta Psychol. 2015, 155, 43–50.
56. Gordon, R.L.; Shivers, C.M.; Wieland, E.A.; Kotz, S.A.; Yoder, P.J.; Devin McAuley, J. Musical rhythm discrimination explains individual differences in grammar skills in children. Dev. Sci. 2015, 18, 635–644.
57. Chern, A.; Tillmann, B.; Vaughan, C.; Gordon, R.L. New evidence of a rhythmic priming effect that enhances grammaticality judgments in children. J. Exp. Child Psychol. 2018, 173, 371–379.
58. Kotz, S.A.; Gunter, T.C. Can rhythmic auditory cuing remediate language-related deficits in Parkinson’s disease? Ann. N. Y. Acad. Sci. 2015, 1337, 62–68.
59. Cason, N.; Hidalgo, C.; Isoard, F.; Roman, S.; Schön, D. Rhythmic priming enhances speech production abilities: Evidence from prelingually deaf children. Neuropsychology 2015, 29, 102–107.
60. Przybylski, L.; Bedoin, N.; Herbillon, V.; Roch, D.; Kotz, S.A.; Tillmann, B. Rhythmic auditory stimulation influences syntactic processing in children with developmental language disorders. Neuropsychology 2013, 27, 121–131.
61. Brochard, R.; Tassin, M.; Zagar, D. Got rhythm for better and for worse. Cross-modal effects of auditory rhythm on visual word recognition. Cognition 2013, 127, 214–219.
62. Gordon, R.L.; Magne, C.L. Music and the brain: Music and cognitive abilities. In The Routledge Companion to Music Cognition; Ashley, R., Timmers, R., Eds.; Routledge: New York, NY, USA, 2017; pp. 49–62.
63. Holliman, A.J.; Wood, C.; Sheehy, K. Sensitivity to speech rhythm explains individual differences in reading ability independently of phonological awareness. Br. J. Dev. Psychol. 2008, 26, 357–367.
64. Holliman, A.J.; Wood, C.; Sheehy, K. A cross-sectional study of prosodic sensitivity and reading difficulties. J. Res. Read. 2012, 35, 32–48.
65. Whalley, K.; Hansen, J. The role of prosodic sensitivity in children’s reading development. J. Res. Read. 2006, 29, 288–303.
66. Wood, C. Metrical stress sensitivity in young children and its relationship to phonological awareness and reading. J. Res. Read. 2006, 29, 270–287.
67. Holliman, A.J.; Gutiérrez Palma, N.; Critten, S.; Wood, C.; Cunnane, H.; Pillinger, C. Examining the independent contribution of prosodic sensitivity to word reading and spelling in early readers. Read. Writ. 2017, 30, 509–521.
68. Cooper, N.; Cutler, A.; Wales, R. Constraints of lexical stress on lexical access in English: Evidence from native and non-native listeners. Lang. Speech 2002, 45, 207–228.
69. Friedrich, C.K. Neurophysiological correlates of mismatch in lexical access. BMC Neurosci. 2005, 6, 64.
70. Chan, J.S.; Wade-Woolley, L. Explaining phonology and reading in adult learners: Introducing prosodic awareness and executive functions to reading ability. J. Res. Read. 2018, 41, 42–57.
71. Heggie, L.; Wade-Woolley, L. Prosodic awareness and punctuation ability in adult readers. Read. Psychol. 2018, 39, 188–215.
72. Leong, V.; Hämäläinen, J.; Soltész, F.; Goswami, U. Rise time perception and detection of syllable stress in adults with developmental dyslexia. J. Mem. Lang. 2011, 64, 59–73.
73. Leong, V.; Goswami, U. Assessment of rhythmic entrainment at multiple timescales in dyslexia: Evidence for disruption to syllable timing. Hear. Res. 2014, 308, 141–161.
74. Mundy, I.R.; Carroll, J.M. Speech prosody and developmental dyslexia: Reduced phonological awareness in the context of intact phonological representations. J. Cogn. Psychol. 2012, 24, 560–581.
75. Thomson, J.M.; Fryer, B.; Maltby, J.; Goswami, U. Auditory and motor rhythm awareness in adults with dyslexia. J. Res. Read. 2006, 29, 334–348.
76. Dickie, C.; Ota, M.; Clark, A. Revisiting the phonological deficit in dyslexia: Are implicit nonorthographic representations impaired? Appl. Psycholinguist. 2013, 34, 649–672.
77. Patel, A.D. Why would musical training benefit the neural encoding of speech? The OPERA hypothesis. Front. Psychol. 2011, 2, 142.
78. Patel, A.D. Can nonlinguistic musical training change the way the brain processes speech? The expanded OPERA hypothesis. Hear. Res. 2014, 308, 98–108.
79. Tierney, A.; Kraus, N. Auditory-motor entrainment and phonological skills: Precise auditory timing hypothesis (PATH). Front. Hum. Neurosci. 2014, 8, 949.
80. Gordon, R.L.; Fehd, H.M.; McCandliss, B.D. Does music training enhance literacy skills? A meta-analysis. Front. Psychol. 2015, 6, 1777.
81. François, C.; Chobert, J.; Besson, M.; Schön, D. Music training for the development of speech segmentation. Cereb. Cortex 2013, 23, 2038–2043.
82. Breen, M.; Clifton, C. Stress matters: Effects of anticipated lexical stress on silent reading. J. Mem. Lang. 2011, 64, 153–170.
83. Magne, C.; Gordon, R.L.; Midha, S. Influence of metrical expectancy on reading words: An ERP study. In Proceedings of the Speech Prosody 2010 Conference, Chicago, IL, USA, 10–14 May 2010; pp. 1–4.
84. Fodor, J.D. Learning to parse? J. Psycholinguist. Res. 1998, 27, 285–319.
85. Ashby, J.; Clifton, C., Jr. The prosodic property of lexical stress affects eye movements during silent reading. Cognition 2005, 96, B89–B100.
86. Kentner, G.; Vasishth, S. Prosodic focus marking in silent reading: Effects of discourse context and rhythm. Front. Psychol. 2016, 7, 319.
87. Teaching Children to Read: An Evidence-Based Assessment of the Scientific Research Literature on Reading and Its Implications for Reading Instruction. Available online: https://www.nichd.nih.gov/sites/default/files/publications/pubs/nrp/Documents/report.pdf (accessed on 1 November 2018).
Figure 1. Rhythmic cross-modal priming experimental paradigm. The auditory prime (long-short or short-long sequence) is followed by a visual target word with a stress pattern that either matches or mismatches the prime. (Note: the stressed syllable is underlined for illustration purposes only.)
Figure 2. Rhythmic priming event-related potential (ERP) effect. Grand-average ERPs recorded for matching (purple) and mismatching (green) trochaic target words, averaged over the significant group of channels in the cluster. The latency range of the significant cluster is indicated in blue. (Note: Negative amplitude values are plotted upward. The topographic map shows the mean differences in scalp amplitudes in the latency range of the significant cluster. Electrodes belonging to the cluster are indicated with a black dot.)
Figure 3. Brain-behavior correlations. (A) Correlation between speech rhythm sensitivity (as indexed by the negative ERP cluster mean difference) and musical rhythm aptitude; (B) correlation between speech rhythm sensitivity and reading comprehension; (C) correlation between musical rhythm aptitude and reading comprehension. (Note: The solid line represents a linear fit.)
Table 1. Multiple regression coefficients 1.

Source        B      SE     β      t      p
Constant      8.000  2.033         3.935  0.001
ACT Reading   0.081  0.060  0.267  1.359  0.194
AMMA Rhythm   0.281  0.093  0.594  3.023  0.009

1 Outcome: Negative ERP cluster mean difference; B: unstandardized coefficient; SE: standard error; β: standardized coefficient; t: t-value; p: p-value; ACT: American College Testing; AMMA: Advanced Measures of Music Audiation.
