Article

Academic Vocabulary and Reading Fluency: Unlikely Bedfellows in the Quest for Textual Meaning

Annsley Frazier Thornton School of Education, Bellarmine University, Louisville, KY 40205, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2018, 8(4), 165; https://doi.org/10.3390/educsci8040165
Submission received: 6 August 2018 / Revised: 18 September 2018 / Accepted: 25 September 2018 / Published: 5 October 2018
(This article belongs to the Special Issue Vocabulary Development)

Abstract

Academic vocabulary is the specialized language used to communicate within academic settings. The Coxhead (2000) Academic Word List is one widely used taxonomy, identifying 570 headwords that represent academic vocabulary. Researchers have hypothesized that students with stronger fluent reading skills are more likely to benefit from exposure to vocabulary because they spend more time reading (Nagy and Stahl, 2007; Stanovich, 1986). In this study of 138 sixth- and seventh-grade students, we assessed academic vocabulary, indicators of fluent reading, and silent reading comprehension to gain insight into the relationships among the three. We found that reading rate mediates the relationship between academic vocabulary and reading comprehension, accounting for nearly one-third of the explained variance. Using simple slope analysis, we identified a threshold suggesting the point at which reading rate exerts a neutral effect on reading comprehension, beyond which vocabulary learning is no longer hindered.

1. Introduction

The most recent reading results from the National Assessment of Educational Progress report that some 36% of eighth-grade students scored at or above the proficient level, an increase from 34% in 2015 [1]. While the improvement is welcome, nearly two-thirds of students still achieve at a less-than-proficient level. When these scores are analyzed by ethnicity, the results reveal that 45% of White students score at or above proficiency compared to 18% and 23% of Black and Hispanic students, respectively. In fact, an inspection of below-basic scores reveals that 18% of White students score at this lowest level compared to 40% of Black and 33% of Hispanic students. Many reading processes contribute to ultimate reading achievement, one of which is the understanding of words [2]. Research has linked word understanding to improvement in text comprehension both within and across grade levels [3,4,5,6]. Ouellette and Beers [6] described the transition in sixth-grade students from decoding to dependency on breadth and depth of word understanding as critical to continued reading growth. A close corollary to general vocabulary during this period is growth in the understanding of academic vocabulary, a subset of words pertaining to academic discourse [7]. This study considers the role of academic vocabulary and the extent to which reading fluency may affect its contribution to reading comprehension in sixth- and seventh-grade students.

2. Background

2.1. Theoretical Framework

A defining feature of an educated individual is the ability to create understanding of what they read. Ideally, the reader extracts meaning at the textbase level, where a literal or “just the facts” interpretation is made of the text. The second level, the situational model level, is where the text is integrated with the reader’s existing schema [8,9]. It is at this level that the reader considers the text in conjunction with what they already know, or believe they know, about the topic. To arrive at understanding, it is important that the reader successfully develops a meaning-based representation of the text at the situational level [8,10]. To create these representations, the construction–integration (CI) model demonstrates that effective comprehension is supported by both top-down and bottom-up processes that function on an interactive basis [8]. The effective decoding of words and fluent reading of text reflect the automatic reading processes that allow critical working memory and attentional resources to be directed to the creation of a situational model [11,12]. The verbal efficiency theory claims that through rapid word identification processes the reader is able to retrieve both the phonology and word meaning that contribute to comprehension [13]. While word retrieval speed is critical in this process and has support in the research base, what is likely as important is the ability of the reader to arrive at the appropriate context-based meaning of the words [14]. This leads to the notion that the nuances found within word meanings are intricately tied to reading fluency and comprehension. This study explores the importance of academic word knowledge and its relationship with reading fluency indicators, and the ultimate contribution of both to silent reading comprehension.

2.2. Vocabulary Learning

Anderson and Freebody proposed that the extent to which a reader knows word meanings directly detracts from or enhances comprehension [15]. Cunningham and Stanovich argued that it is the volume of reading that becomes the primary contributor to differences in vocabulary knowledge between readers [16]. Stanovich’s reciprocal model of vocabulary hypothesizes that a virtuous cycle occurs where the reader’s vocabulary knowledge facilitates engagement with texts, which results in greater word knowledge, which in turn encourages more reading [17]. Stated differently, word knowledge begets word knowledge, to which Perfetti would add that it enhances text comprehension [14]. This virtuous, reciprocal cycle of reading and learning words was viewed by Stahl and Nagy as the most likely scenario accounting for vocabulary growth in readers, where those who read more expand their breadth and depth of word knowledge [18]. Of interest, then, is the operational structure of the word learning cycle and how the cycle may vary between readers of different abilities. To further understand this cycle, Perfetti and Stafura made two points about effective readers [19]. First, they are more skilled at understanding words and integrating them into a mental model of a text than are those who are less effective [20,21,22]. Second, when encountering a new word, skilled readers are more effective at acquiring orthographic and semantic information about the word [23,24,25]. A fundamental characteristic defining a skilled reader is the extent to which they read with fluency [2]. This suggests that vocabulary learning may differ based on the degree to which a student possesses the indicators associated with fluent reading.
The lexical quality hypothesis claims that a reader’s ability to rapidly retrieve a word’s phonology (its sound constituent) and its associated meaning is a potentially limiting factor to comprehension [26]. The theory provides an explanation for word learning by recognizing that lexical entries are tripartite, meaning they consist of an orthographic, a phonological, and a semantic component. The quality of each of these three components in terms of reliability, coherence, and consistency reflects the overall lexical quality of the word. It cannot be assumed that the quality of each part of a lexical entry is equal, as a reader may be able to connect the orthography to the phonology of a word but have little to no corresponding semantic association [27]. For any given reader, the inventory of high-quality word representations can be placed on a continuum ranging from few to many. Skilled readers are those whose high-quality word representations tend toward the high end of the continuum, while the inventory of less-skilled readers aggregates towards the lower end. Although both skilled and less-skilled readers have many words for which they have low-quality representations, the former group is in a much better position to improve their understanding of a new word, and hence improve its lexical quality. For example, a skilled reader may have little trouble quickly applying their orthographic and phonological knowledge to unlock an unknown word from print. At the same time, a less-skilled reader may be more likely to struggle in their attempt to pronounce the word due to less-than-adequate decoding skill. The skilled reader is thus in a better position to improve the quality of an impoverished word representation by leveraging their deeper knowledge of spelling, pronunciation, or meaning in an effort to extract new information about the word. The less-skilled reader has a shallower inventory of these foundational skills from which to draw, making it more difficult to engage in new learning of impoverished words. Hence, effective word learning for less-skilled readers often requires explicit instruction and frequent encounters with the word.

2.3. Decoding Processes and Vocabulary

The simple view of reading hypothesizes that decoding and linguistic comprehension plus the interaction of the two results in reading comprehension [28]. Two accounts of the role of vocabulary in decoding have been proposed. One view proposes that breadth rather than depth in vocabulary growth promotes decoding growth and development. As young readers add words to their lexicon they become increasingly sensitive to sublexical details in those words including their phonemes [29,30,31]. This view privileges vocabulary breadth as the mechanism that drives decoding development. The second view is derived from the triangle model that hypothesizes an interaction between decoding, vocabulary, and semantics suggesting that depth of vocabulary knowledge is related to decoding proficiency [32]. To unravel these relationships, Ouellette studied decoding, vocabulary, word recognition, and comprehension processes in fourth-grade students [33]. The author found that breadth of receptive vocabulary alone predicted decoding while expressive vocabulary predicted visual word recognition (non-decodable words). Through its association with expressive vocabulary, vocabulary depth predicted both visual word recognition and comprehension above and beyond the measures of vocabulary breadth. Ouellette surmised that the relationship between decoding and oral vocabulary is a function of the size of one’s receptive vocabulary. This supports the notion by Levelt, Roelofs, and Meyer [34] that a reader’s ability to encode phonological information about a word may at least partly explain the relationship between oral vocabulary and reading comprehension reported by other authors [35,36,37]. In sum, these results suggest a perspective where an initial emphasis on decoding is followed by a focus on building automatic sight-word reading skills that leads to fluent reading and an increase in vocabulary breadth.
Decoding processes are reflected in the degree to which a student can fluently read connected text, which along with comprehension is a necessary condition for effective reading [28]. We adopt a definition of fluent reading that includes the indicators of reading rate (word automaticity), word identification accuracy, and prosody [38,39]. Word automaticity is the ability to quickly recall the phonological representation of a printed word without the need to apply decoding strategies [12]. In addition to speed, the reader must also accurately recall the letter–sound correspondences resulting in the correct pronunciation of the word. For students struggling with reading, accurate word identification has been found to be problematic at the elementary, middle, and secondary levels [40,41,42]. However, other researchers have found that word accuracy is not a significant issue for readers [43,44,45]. Despite differences in findings, automatic and accurate recall of words is necessary for becoming a fluent reader and for maintaining the attentional resources critical to comprehension [11]. Reading is often measured by a metric where the number of words read (rate) minus reading miscues (word mispronunciations, omissions, and insertions) is calculated to reflect the number of correctly read words-per-minute. We call this metric accumaticity and distinguish it from the term reading fluency, as the latter also includes expressive reading or prosody [46]. Numerous studies over the past several decades have found that one or more of the indicators of fluent reading accounts for significant variance in reading across multiple grade levels [38,47]. Although prosody has often been ignored, it has been found to contribute unique variance to silent reading comprehension across elementary, middle, and secondary grades beyond that accounted for by automaticity and word identification accuracy [42,48,49,50,51,52,53]. Prosody is important because, as in speech, it provides a kind of cognitive architecture that aids the reader in comprehension [54,55,56,57]. Accumaticity has been found to explain from 25% to 50% or more of the variance in reading comprehension [38,42,47,52], while prosody has been shown to contribute an additional 3% to 7% of variance to reading comprehension [42,48,52,58,59,60].

2.4. Academic Vocabulary

Hiebert and Lubliner decomposed academic words into those that are general and those that are content-specific [61]. Academic vocabulary is a component of academic language and is critical to successful achievement in school [62,63,64,65]. Nagy and Townsend explained that “Academic language is the specialized language, both oral and written, of academic settings that facilitates communication and thinking about disciplinary content” [66] (p. 92). This definition situates academic language as a subset of general vocabulary and includes words such as paradigm, allocate, and preliminary that are used in both oral and written language across academic settings. Such words convey abstract ideas outside the realm of informal, casual conversation and reflect the nuanced meanings that are used across disciplines. Academic settings refer to educational institutions and the associated print and digital literature that discuss disciplinary concepts. In Beck, McKeown, and Kucan’s word tiers, academic vocabulary is situated in Tier 2 as words used across disciplines, while Tier 3 refers to discipline-specific words [67]. To further differentiate academic vocabulary from more general and pedestrian language, Biber outlined characteristics of academic words, stating that they are often derived from Latin and Greek vocabulary, consist of nouns, adjectives, and prepositions that are morphologically complex, are often used as grammatical metaphors, are abstract, and are informationally dense [68]. Latin and Greek words were introduced into English by the educated classes in the eleventh century. For example, where the vernacular was deer, the new term became venison, pig became pork, to eat became to dine, be was now exist, right was correct, tooth became dental, and by hand was referred to as manual. Additional words were introduced for which no previous corollary existed and include words such as accommodate, arbitrary, and analyze [7].
Nagy and Townsend pointed out that students from less educated backgrounds will not learn academic words without explicit instruction in their meanings and repeated encounters with their use [66]. In a seminal study, Coxhead proposed a list of 570 of the most common academic word families. Research has since shown that the study of these words can significantly improve a student’s word knowledge [7,69,70,71]. At the same time, such lists have been criticized, particularly when they have been used as lists to be taught prescriptively to students [72]. Despite these criticisms, academic words have been found to contribute significant, unique variance to academic achievement. The correlation between general vocabulary and reading comprehension is generally in the 0.70 to 0.95 range [73,74]. Townsend, Filippini, Collins, and Biancarosa found that, after controlling for general vocabulary, academic vocabulary accounted for 2% to 7% of additional variance in math, social studies, ELA (English Language Arts), and science achievement [75].

2.5. Study Rationale

The importance of word knowledge to comprehension has been long established in the literature base. Our review of the literature discussed evidence suggesting the importance of academic vocabulary to academic achievement across disciplines. It is also established that students who struggle with reading require significantly more exposure to words, particularly academic words, to incorporate them into their lexical inventory. However, there remains little evidence of the extent to which academic vocabulary is acquired by middle school students, and as a corollary there are few validated instruments to assess academic vocabulary. Stanovich, Stahl, and others have suggested that fluent readers are more likely to acquire deeper word knowledge because they spend more time engaged with text and thus have an increased exposure to words [17,74]. We assume that the same fluent reading process responsible for general vocabulary acquisition is at work in learning academic vocabulary. While there are limitations to a strict focus on academic vocabulary per se, a deeper understanding can provide insight into its role and influence on comprehension, and whether those with fluent reading skills are likely to possess larger academic vocabularies that contribute to comprehension.
The goal of this study then is to describe the extent to which reading fluency and academic vocabulary contribute to reading comprehension in sixth- and seventh-grade students. The research questions guiding this study are as follows:
  • RQ1: What is the attainment level of students on measures of academic vocabulary, reading fluency, and comprehension?
  • RQ2: Do differences exist in students between grades on the measured variables?
  • RQ3: To what extent is silent reading comprehension predicted by academic vocabulary and reading fluency indicators?
  • RQ4: Does reading fluency mediate the relationship between academic vocabulary and reading comprehension?
We hypothesized that students with adequate indicators of reading fluency possess greater academic vocabulary knowledge and reading comprehension.

3. Methods

3.1. Participants

This study was conducted in a large, urban school district in the southeast U.S. containing 24 middle schools. From these schools, ten agreed to participate in the present study. Of the students attending these schools, 49% are White, 37% African-American, 9% Hispanic/Latino, and 5% are of other ethnicities. Sixty-two percent of these students receive free or reduced-price lunch. From the 10 middle schools, 29 ELA teachers who taught sixth- and/or seventh-grade volunteered to participate in the study. A total of 320 students were randomly sampled from the class rosters of participating teachers, with half drawn from sixth-grade and the other half from seventh. Of these students, the final analytic sample is n = 138.

3.2. Assessments

Students were assessed over the course of 10 days in the early fall on measures of academic vocabulary, reading fluency (reading rate, word identification accuracy, and prosody), and reading comprehension. Vocabulary and reading fluency assessments were administered by researchers trained in their administration while a standardized, computer-based reading comprehension assessment was administered by the district.

3.3. Test of Academic Word Knowledge (TAWK)

The TAWK is a 50-item test of academic word knowledge created by the researchers. Word-level test items were drawn from the Academic Word List (AWL) created by Coxhead [7]. The 570 headwords in the AWL are divided into 10 sub-lists containing 60 words each, with sub-list 10 containing 30 words. The frequency of occurrence in academic texts of the words in each sub-list diminishes as one moves from sub-list 1 to 2 to 3, and so on. To develop items for the TAWK, an equal number of words were randomly drawn from each sub-list, resulting in 110 items. For each of these items, three answer choices were selected, with the one most closely associated with the target word being the correct answer. Over the course of five administrations of the 110 items to middle school students, a total of 50 items emerged that demonstrated acceptable discrimination (good ≥ 0.40, reasonable > 0.30 and ≤ 0.39, fair > 0.20 and ≤ 0.29, and poor ≤ 0.20) and difficulty characteristics (easy > 0.60, moderate ≥ 0.30 and < 0.60, and hard < 0.30). Nunnally and Bernstein suggested that the difficulty of test items should be mixed and that, when a test is not designed to make placement decisions for individual students, a reliability coefficient of r = 0.80 is adequate [76]. The TAWK is group-administered on a whole-class basis, and students are instructed to circle, from options A, B, and C, the word that most nearly means the same as the target word. The final 50-item version was administered to 278 middle school students across 15 schools and yielded a Cronbach’s alpha for internal consistency of 0.85. Split-half reliability using the Spearman–Brown and Guttman coefficients both equaled 0.940. Scores exhibited a normal distribution, as evidenced by a mean of 24.7 (SD = 8.31), a median of 24.0, and a mode of 24.0.
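For readers interested in the mechanics of the item analysis described above, the following is a minimal Python sketch of how item difficulty, corrected item–total discrimination, and Cronbach’s alpha can be computed from a scored response matrix. The data, variable names, and seed are hypothetical and are not drawn from the TAWK administrations; only the banding thresholds come from the text.
```python
import numpy as np

# Hypothetical 0/1 scored response matrix: rows = students, columns = items.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(278, 50))  # illustrative data only

# Item difficulty: proportion of students answering each item correctly.
difficulty = responses.mean(axis=0)

# Item discrimination: corrected item-total correlation
# (each item against the total score of the remaining items).
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

# Cronbach's alpha for internal consistency.
k = responses.shape[1]
alpha = (k / (k - 1)) * (1 - responses.var(axis=0, ddof=1).sum() / total.var(ddof=1))

# Banding follows the text: e.g., difficulty > 0.60 is "easy", discrimination >= 0.40 is "good".
print(f"alpha = {alpha:.2f}")
print("easy items:", int((difficulty > 0.60).sum()))
print("good discriminators:", int((discrimination >= 0.40).sum()))
```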

3.4. Reading Accumaticity Indicators

To assess accumaticity, students read a primarily narrative passage selected to be grade-level appropriate. Curriculum-based measures have been established as both valid and reliable measures of reading competency [77,78,79,80]. For each grade, a passage was selected based on narrativity and appropriate textual complexity. Sixth-grade students read a passage with a complexity level between 950L and 1000L, while the seventh-grade passage measured between 1050L and 1100L. Both complexity levels are within the text complexity grade bands recommended by the Common Core [81]. Narrativity was measured using Coh–Metrix, with the sixth- and seventh-grade passages measuring at 58% and 61%, respectively, where 100% would indicate a completely narrative text and 0% one that is totally informational [82]. All students read the narrative passage aloud for one minute while a researcher recorded the total number of words read and the number of reading miscues, which included mispronunciations, word omissions, and insertions of words not in the text. To compute accumaticity, miscues were subtracted from the total words read in one minute, which resulted in the number of words read correctly per minute. Test–retest reliability of the measures used in this study resulted in Pearson’s r = 0.962.
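As a simple illustration of the accumaticity calculation described above, the sketch below converts a count of words read and miscues into words-correct-per-minute. The function name and example numbers are hypothetical, not taken from the study data.
```python
def accumaticity(words_read: int, miscues: int, seconds: float = 60.0) -> float:
    """Words-correct-per-minute: words read minus miscues, scaled to one minute."""
    return (words_read - miscues) * (60.0 / seconds)

# Example: a student reads 140 words in one minute with 6 miscues.
print(accumaticity(140, 6))  # 134.0 words-correct-per-minute
```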

3.5. Reading Prosody

To assess reading prosody, a recording was made of each student reading the one-minute passage and saved for later analysis. The Multi-Dimensional Fluency Scale (MDFS) was used to assess the quality of each student’s reading [83]. When using the MDFS, the rater evaluates four domains of reading quality: pacing, smoothness, expression, and phrasing. Each of these domains is assigned a rating from a low of 1, indicating very poor quality, to a maximum of 4, indicating excellent reading. The four ratings are then summed, resulting in a composite score ranging from 4 to 16. Our interpretation of the composite is that a score ≥ 12 indicates appropriate fluency, 9–11 indicates fluency that is developing, and ≤ 8 suggests poorly developed fluency. It is important to remember that reading fluency is assessed on a per-reading basis and that fluent reading of a particular passage does not guarantee that a student reads every passage with equal fluency.
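The composite scoring and banding just described can be expressed in a short sketch; the function and example ratings are hypothetical, and the bands simply follow the interpretation given in the text.
```python
def mdfs_composite(pacing: int, smoothness: int, expression: int, phrasing: int) -> tuple:
    """Sum four 1-4 domain ratings and band the composite as interpreted in the text."""
    score = pacing + smoothness + expression + phrasing  # composite ranges from 4 to 16
    if score >= 12:
        band = "appropriate fluency"
    elif score >= 9:
        band = "developing fluency"
    else:
        band = "poorly developed fluency"
    return score, band

print(mdfs_composite(3, 3, 2, 3))  # (11, 'developing fluency')
```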
The MDFS has been found to be a valid and reliable tool for assessing reading fluency, with two particularly rigorous studies reporting reliability coefficients well above the accepted norm of 0.80 [84,85,86]. Moser, Sudweeks, Morrison, and Wilcox studied two raters who scored 144 readings of 36 fourth-grade students [87]. Each student read two narrative and two informational passages. The authors implemented a research design using a generalizability study (G-study), which found that variability attributed to readers ranged from 79.2% to 83.3% for narrative text and from 70.7% to 81.8% for informational text. The authors then used a decision study (D-study) that found reliability coefficients ranging from 0.94 to 0.97 for narrative text and 0.92 to 0.98 for informational text, suggesting high rater reliability. Smith and Paige conducted a study involving 354 narrative readings by 177 first- to third-grade students [88]. Each student had read one text each in the fall and spring of their respective grade, with all readings recorded for later analysis. Four teachers with no experience in formal reading instruction received five hours of training in the use of the MDFS, after which the raters independently evaluated each reading. Generalizability coefficients were analyzed by grade, resulting in statistics equal to 0.92, 0.93, and 0.93 for first- to third-grade, respectively. Decision study results revealed generalizability coefficients for any single rater evaluating one reading equal to 0.77, 0.77, and 0.78 for first- to third-grade, respectively. When single raters evaluated two readings, reliabilities increased to 0.87 across all grades. These two studies provide strong evidence of the reliability of the MDFS as a tool to assess oral reading fluency and prosody. Test–retest reliability of ratings in the present study is 0.948 (n = 274).

3.6. Scholastic Reading Inventory

Students were assessed for reading comprehension in the fall and spring by school personnel using the Scholastic Reading Inventory (SRI), a computer-adaptive, individually administered assessment. During the assessment, students independently and silently read a series of passages and then answered a multiple-choice inferential question. Students typically spent about 25 min taking the SRI. Test results are reported as student Lexile measures that indicate the highest level of text complexity at which the student reads with 75% comprehension [89]. The test authors provide the results of multiple studies showing concurrent validity with other standardized tests ranging from 0.88 to 0.93. Test–retest reliability is reported as 0.87 to 0.89. In the present study, the correlation between the fall and spring administrations of the SRI is Pearson’s r = 0.85 (n = 185).

4. Results

Our first research question sought to describe student attainment on the measured variables, while the second asked whether differences exist between sixth- and seventh-grade readers. Table 1 shows the means and standard deviations of the variables under study, while Table 2 displays the bivariate correlations. Numerical differences in mean scores appear to exist between grades for comprehension, vocabulary, and prosody, with seventh-grade scoring higher. However, in the case of reading rate and word identification accuracy, the sixth-grade means exceed those of seventh-grade. Accumaticity, the difference between reading rate and miscues adjusted for time, was calculated at 134.43 (33.99) words-correct-per-minute for sixth-grade and 125.22 (38.92) for seventh-grade. This places sixth-grade accumaticity at approximately the 55th percentile and seventh-graders just below the 50th percentile on oral reading norms from Hasbrouck and Tindal [90].
Moderate to large correlations were statistically significant among all variables, with the strongest occurring among comprehension, reading rate, word identification accuracy, and prosody. On the measure of academic vocabulary, mean scores show sixth- and seventh-grade attainment to be at the 52nd and 53rd percentiles, respectively.
Our second question asked whether students differed by grade on the measured variables. We conducted an analysis of variance (ANOVA) using grade as the between-subjects factor while applying a Bonferroni adjustment to account for inflated Type I error. Results revealed a statistically significant difference in favor of seventh-grade for reading comprehension (F (1,136) = 6.68, MSE = 174048.02, p = 0.011, d = 0.45), while vocabulary approached significance (F (1,136) = 4.07, MSE = 327256.98, p = 0.046), with seventh-grade scoring higher. No other variables approached statistical significance.
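The grade comparison can be illustrated with a brief sketch of a one-way ANOVA and a Cohen’s d computed from a pooled standard deviation. The data below are simulated for illustration only and do not reproduce the study’s values.
```python
import numpy as np
from scipy import stats

# Simulated comprehension scores for two grades (illustrative only).
rng = np.random.default_rng(1)
grade6 = rng.normal(950, 180, size=69)
grade7 = rng.normal(1030, 180, size=69)

# One-way ANOVA with grade as the between-subjects factor.
f_stat, p_val = stats.f_oneway(grade6, grade7)

# Cohen's d from the pooled standard deviation.
pooled_sd = np.sqrt((grade6.var(ddof=1) + grade7.var(ddof=1)) / 2)
d = (grade7.mean() - grade6.mean()) / pooled_sd

print(f"F = {f_stat:.2f}, p = {p_val:.3f}, d = {d:.2f}")
```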
The third research question asked the extent to which silent reading comprehension is predicted by academic vocabulary and the indicators of fluent reading. We proceeded by regressing silent reading comprehension onto academic vocabulary, reading rate, word identification accuracy, and reading prosody. Results (Table 3) revealed academic vocabulary and reading rate to be significant predictors of comprehension, while a test of the interaction of the two was non-significant. Regression results show that academic vocabulary explained 22.9% of the variance in comprehension (F (1,136) = 41.78, MSE = 367806.71, p < 0.001) while reading rate explained 18.0% of the variance (F (1,135) = 42.43, MSE = 327256.98, p < 0.001). Together, the two variables accounted for 40.9% of the variance in silent reading comprehension.
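The incremental-variance logic behind this analysis can be sketched as a two-step regression. The simulated data, coefficients, and seed below are illustrative assumptions, not the study’s estimates.
```python
import numpy as np
import statsmodels.api as sm

# Simulated predictors and outcome (illustrative only, not the study data).
rng = np.random.default_rng(2)
n = 138
vocab = rng.normal(25, 8, n)     # academic vocabulary score
rate = rng.normal(130, 35, n)    # reading rate, words per minute
comp = 8 * vocab + 4 * rate + rng.normal(0, 250, n)  # silent reading comprehension

# Step 1: comprehension regressed on academic vocabulary alone.
m1 = sm.OLS(comp, sm.add_constant(vocab)).fit()

# Step 2: add reading rate; the change in R-squared is its incremental contribution.
m2 = sm.OLS(comp, sm.add_constant(np.column_stack([vocab, rate]))).fit()

print(f"R2, vocabulary only: {m1.rsquared:.3f}")
print(f"R2, vocabulary + rate: {m2.rsquared:.3f} (increment = {m2.rsquared - m1.rsquared:.3f})")
```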
Our fourth research question asked if reading fluency mediates the relationship between academic vocabulary and comprehension. Results from question three revealed that, of the three fluency indicators, only reading rate was a significant predictor of comprehension. Scatterplot analysis (Figure 1) shows a nonlinear relationship between comprehension and reading rate in which the influence of rate decreases as comprehension increases. Figure 2 shows that as rate increases, vocabulary also increases, while Figure 3 shows that as vocabulary increases, so does comprehension. In sum, these graphs suggest the possibility that rate may function as a mediator between academic vocabulary and reading comprehension.
The path model defining the relationships between vocabulary, rate, and comprehension is shown in Figure 4.
Following recommendations by Baron and Kenny for determining if a variable acts as a mediator, we began by regressing the criterion variable comprehension onto academic vocabulary to determine the total effect [91]. We next regressed reading rate, the suspected mediating variable, onto academic vocabulary to determine a portion of the indirect effect. Finally, comprehension was regressed onto both reading rate and academic vocabulary. The final multiple regression (Table 4) gives estimates of the direct effect of academic vocabulary on comprehension while controlling for reading rate, as well as estimating the other component of the indirect effect, reading rate, on comprehension. The standardized beta coefficients associated with each of these conditions are all statistically significant (p < 0.001). Consistent with Baron and Kenny’s partial mediation designation is the finding that the effect of academic vocabulary on comprehension is reduced but not eliminated when reading rate is added to the model, decreasing the effect from 12.14 to 8.30 and making the effect of reading rate equal to 3.84.
Weaknesses in the Baron and Kenny method identified by Preacher and Hayes and by Preacher and Kelly concern beta estimation, particularly in the case of small sample sizes. To address these concerns, these authors recommended bootstrapping, a resampling approach that increases the stability of parameter estimates [92,93]. In line with this recommendation, we drew 1000 bootstrap samples to increase the stability of our parameter estimates. Because our bootstrapping results at the 95% confidence interval in Table 4 do not include 0, it may be concluded that all effects are significant given a null hypothesis of b = 0. The results of both the Baron and Kenny approach and the more robust bootstrap estimation method indicate that reading rate has a significant mediating effect on the relationship between academic vocabulary and silent reading comprehension.
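A minimal sketch of this mediation logic, combining the Baron and Kenny regressions with a bootstrapped confidence interval for the indirect effect, is shown below. The data are simulated and the function name is a hypothetical choice; the estimates it produces are not those reported in Table 4.
```python
import numpy as np
import statsmodels.api as sm

# Simulated variables (illustrative only): X = vocabulary, M = rate, Y = comprehension.
rng = np.random.default_rng(3)
n = 138
vocab = rng.normal(25, 8, n)
rate = 2.0 * vocab + rng.normal(0, 25, n)
comp = 8.0 * vocab + 4.0 * rate + rng.normal(0, 250, n)

def indirect_effect(x, m, y):
    """a*b indirect effect from the two Baron-and-Kenny-style regressions."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # M -> Y, controlling X
    return a * b

# Point estimate plus a 1000-sample bootstrap confidence interval.
point = indirect_effect(vocab, rate, comp)
boot = []
for _ in range(1000):
    idx = rng.choice(n, size=n, replace=True)
    boot.append(indirect_effect(vocab[idx], rate[idx], comp[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```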
Using the finding that reading rate acts as a mediator, we sought to determine the point at which it had a neutral effect on reading comprehension. In other words, how much reading rate is enough to avoid degradation in comprehension? Using the simple slope analysis recommended by Preacher, Curran, and Bauer, we proceeded through a series of iterations in which we assigned a conditional value, or cut-point, based on reading rate in order to split the analytic sample into low- and high-rate groups [94]. For each reading rate data point (e.g., 119, 120, 121, etc.), we regressed comprehension onto rate to estimate the standardized beta coefficients for each of the two groups. Results in Table 5 show that at a rate of 127 words-per-minute, the beta coefficients are nearly equal, suggesting the influence of rate is neutral for the low- and high-rate groups. Using this finding, we then split the analytic sample into low- and high-rate readers using the cut-point of 127 words-per-minute. We then conducted an analysis of variance (ANOVA) to determine the differences in academic vocabulary, comprehension, and reading rate between low- and high-rate readers. Table 6 reveals statistically significant differences for comprehension (F (1,137) = 33.70, p < 0.001, d = 0.700), academic vocabulary (F (1,137) = 6.85, p = 0.01, d = 0.32), and reading rate (F (1,137) = 242.15, p < 0.001, d = 1.88). For the low- and high-rate reading groups we then regressed comprehension onto academic vocabulary and reading rate. Results in Figure 5 and Table 3 for the low-rate group show that academic vocabulary explained 21.4% of the variance in reading comprehension (F (1,56) = 16.52, p < 0.001) while 8.5% was explained by rate (F (1,55) = 7.76, p = 0.007). Beta coefficients were b = 0.422 and 0.314 for academic vocabulary and reading rate, respectively. For the high-rate group, academic vocabulary explained 16.5% of the variance (F (1,78) = 16.56, p < 0.001) while rate explained an additional 5.9% of variance (F (1,77) = 7.00, p = 0.01). When the coefficients for the two models are compared, the impact of vocabulary on the low-rate reading group is 32% greater than the impact on the high-rate group.
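The cut-point search can be sketched as an iteration over candidate reading rates, comparing standardized slopes in the resulting low- and high-rate groups. The data, function names, candidate range, and minimum subgroup size below are hypothetical choices for illustration, not the study’s procedure or results.
```python
import numpy as np
import statsmodels.api as sm

def standardized_rate_beta(rate, comp):
    """Standardized slope of comprehension on reading rate for one subgroup."""
    z = lambda v: (v - v.mean()) / v.std(ddof=1)
    return sm.OLS(z(comp), sm.add_constant(z(rate))).fit().params[1]

def find_neutral_cutpoint(rate, comp, candidates, min_group=20):
    """Return the candidate cut-point where the low- and high-rate betas are closest."""
    best, best_gap = None, np.inf
    for c in candidates:
        low, high = rate < c, rate >= c
        if low.sum() < min_group or high.sum() < min_group:  # keep subgroups usable
            continue
        gap = abs(standardized_rate_beta(rate[low], comp[low]) -
                  standardized_rate_beta(rate[high], comp[high]))
        if gap < best_gap:
            best, best_gap = c, gap
    return best

# Illustrative use with simulated data.
rng = np.random.default_rng(4)
rate = rng.normal(130, 35, 138)
comp = 900 + 4 * rate + rng.normal(0, 200, 138)
print(find_neutral_cutpoint(rate, comp, candidates=range(110, 150)))
```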

5. Discussion

In this study we investigated the relationships among academic vocabulary, indicators of reading fluency, and reading comprehension. Mean accumaticity was at or above the 50th percentile for students in both grades, while as a group readers generally exhibited less-than-adequate reading prosody. Our results also showed that sixth-grade students outperformed those in seventh-grade on all reading fluency measures, although the differences were not statistically significant. We found statistically significant differences between grades in reading comprehension, with seventh-grade outscoring sixth, while on the measure of academic vocabulary we found no statistically significant difference between the two grades, although results approached significance.
Our multivariate regression results showed that of the three indicators of reading fluency, only reading rate was a significant predictor of comprehension, a finding consistent with those of other researchers [95]. Accompanying reading rate as a significant predictor of reading comprehension was academic vocabulary. While reading rate predicted 18% of the variance in comprehension, academic vocabulary predicted an additional 22.9% for a total of 40.9% of explained variance. We remind the reader that we measured silent reading comprehension with a standardized instrument, the results of which are often less robust than those of researcher-constructed instruments. Although we did not include a measure of general vocabulary in the study, these results provide evidence for the role of academic vocabulary and reading rate as co-predictors of reading comprehension. In previous studies we have found reading prosody to be a significant predictor of reading comprehension, so we were mildly surprised that in this sample of students it predicted no unique variance. A possible explanation for this result may rest in the constrained range of scores. We consider a score of 12 (out of a possible 16) on the MDFS to reflect appropriate prosody. In this sample, a 12 reflected attainment at the 75th percentile, nearly a full standard deviation above the sample mean of 10.4 (SD = 2.00). We suspect that the suppression of scores in the upper range may have restricted the utility of prosody as a statistically significant predictor of reading comprehension.
Given research showing that fluent readers spend more time engaged with text, which exposes them to more words than their less-fluent peers, we tested the hypotheses of Stanovich and of Stahl and Nagy that the former group would possess larger academic vocabularies that positively influence their reading comprehension. We began by testing a model showing that reading rate does indeed function as a mediator between academic vocabulary and reading comprehension. Our findings demonstrated that reading rate reduces the direct effect of academic vocabulary on comprehension by nearly one-third (32%). We interpret this result to mean that languid reading rates negatively influence growth in academic vocabulary, which may ultimately reduce reading comprehension. Digging deeper into this finding, we found through a simple slope analysis that a reading rate of 127 words-per-minute was the distinguishing point between low- and high-rate readers at which the effect on comprehension became neutralized. In other words, 127 words-per-minute was the “bar” that, when attained, appeared to negate the effect of reading rate on comprehension. To be clear, we do not recommend that our finding of 127 words-per-minute be interpreted as an absolute minimum standard for reading rate, as it is restricted to the sample in the present study. At the same time, we do believe it to be a plausible estimate that distinguishes the influence of reading rate on comprehension. Our estimate of 127 also happens to be the approximate accumaticity level equal to the 50th percentile for sixth- and seventh-grade readers as indicated by the Hasbrouck and Tindal reading norms [90]. We do not assume that these norms reflect a national sample; however, because of the large number of readers represented within each cell (approximately 15,000–17,000 readers), they do provide an opportunity for an interesting insight. If our estimated cut-point of 127 words-per-minute for the present sample were applied to the sixth- and seventh-grade Hasbrouck and Tindal fall sample, then approximately half of the students in that sample read at a less-than-adequate rate to fully benefit from exposure to academic vocabulary and its effect on comprehension. Regardless, poor reading rate represents an opportunity cost for low-rate readers, as they learn less of the critical vocabulary necessary to achieve at levels commensurate with their peers who possess adequate reading rates. Our results suggest that improved reading rate offers the opportunity for improved vocabulary learning and comprehension.
It is important to remember the hypotheses put forth by Stanovich as well as Stahl and Nagy [17,18]. The authors suggest that readers with adequate fluency are more likely to engage in reading and, as a result, experience greater vocabulary growth. Because an individual’s attentional capacity is a zero-sum resource, there is a finite amount that, when reading, is typically divided between decoding and comprehension. If resources are over-taxed because of poor decoding, there is little attention left to direct to comprehension. On the other hand, efficient decoding leaves most of the reader’s attentional capacity for making sense of the text. We say “typically” to allow for cases of fluent readers who expend little attention on decoding but do not focus their attention on creating meaning when they read. The point here is that it cannot be taken for granted that all readers who become rate-proficient will be inclined to engage in a sufficient amount of reading of the right texts to experience growth in academic vocabulary. Nonetheless, getting over the reading rate “bar” appears, from our results, to be an important accomplishment that potentially sets the student up for further vocabulary growth through improved engagement in reading.
There are several limitations to this study. The reader should keep in mind that this is a descriptive study that does not make use of the random assignment needed to control for confounds and draw causal conclusions from the results. We did not gather a measure of receptive vocabulary to determine the extent to which academic vocabulary provides additional, unique variance explaining differences in comprehension; doing so would likely reduce or perhaps eliminate the percentage of explained variance found in this study. Our analytic sample is limited to students attending 10 middle schools across a large urban district and is not generalizable to a larger population or to other geographic areas. We also advise that our results may be problematic if applied to populations that differ in socio-economic status and academic achievement.
Future research in this area can explore the extent to which students possess academic vocabulary and how this subset of vocabulary affects academic achievement. While it would be expected that correlations between general and academic vocabulary would be large, this has yet to be strongly established. As many subskills in education are distributed along socioeconomic strata, it is expected that academic vocabulary would be no different; however, this too has little empirical evidence. There is also a need to determine at what point across grade levels academic vocabulary becomes a significant predictor of reading comprehension. Academic vocabulary is also a constrained set of words that can be purposely learned. As such, we do not know if there is a point in the educational continuum at which academic vocabulary loses its predictive power on comprehension. Finally, the effect of academic vocabulary on state achievement assessments is unknown.

Author Contributions

Conceptualization, D.D.P.; Data curation, G.S.S.; Formal analysis, G.S.S.; Writing—original draft, D.D.P.; Writing—review & editing, D.D.P.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. NAEP Reading: National Average Scores. Available online: https://www.nationsreportcard.gov/reading_2017/#nation/scores?grade=8 (accessed on 26 September 2018).
  2. National Institute of Child Health and Development. National Reading Panel. Teaching Children to Read: An Evidence-Based Assessment of the Scientific Research Literature on Reading and Its Implications for Reading Instruction. Available online: https://www.nichd.nih.gov/sites/default/files/publications/pubs/nrp/Documents/report.pdf (accessed on 26 September 2018).
  3. Beck, I.L.; Perfetti, C.A.; McKeown, M.G. Effects of long-term vocabulary instruction on lexical access and reading comprehension. J. Educ. Psychol. 1982, 74, 506–521. [Google Scholar] [CrossRef]
  4. Kameenui, E.J.; Carmine, D.; Freschi, R. Effects of text construction and instructional procedures for teaching word meanings on comprehension and recall. RRQ 1982, 17, 367–388. [Google Scholar] [CrossRef]
  5. McKeown, M.G.; Beck, I.L.; Omanson, R.C.; Perfetti, C.A. The effects of long-term vocabulary instruction on reading comprehension: A replication. J. Read. Behav. 1983, 15, 3–18. [Google Scholar] [CrossRef]
  6. Ouellette, G.; Beers, A. A not-so-simple view of reading: How oral vocabulary and visual-word recognition complicate the story. Read. Writ. 2010, 23, 189–208. [Google Scholar] [CrossRef]
  7. Coxhead, A. A new academic word list. TESOL Q. 2000, 34, 213–238. [Google Scholar] [CrossRef]
  8. Kintsch, W. Comprehension: A Paradigm for Cognition; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  9. Paris, S.G.; Hamilton, E.E. The development of children’s reading comprehension. In Handbook of Research on Reading Comprehension; Israel, S.E., Duffy, G.G., Eds.; Routledge: New York, NY, USA, 2009; pp. 32–53. [Google Scholar]
  10. Gernsbacher, M.A. Language Comprehension as Structure Building; Erlbaum: Hillsdale, NJ, USA, 1990. [Google Scholar]
  11. LaBerge, D.; Samuels, S.J. Toward a theory of automatic information processing in reading. Cogn. Psychol. 1974, 6, 293–323. [Google Scholar] [CrossRef]
  12. Logan, G.D. Toward an instance theory of automatization. Psychol. Rev. 1988, 95, 492–527. [Google Scholar] [CrossRef]
  13. Perfetti, C.A. Reading Ability; Oxford University Press: New York, NY, USA, 1985. [Google Scholar]
  14. Perfetti, C.A. Reading ability: Lexical quality to comprehension. Sci. Stud. Read. 2007, 11, 357–383. [Google Scholar] [CrossRef]
  15. Anderson, R.C.; Freebody, P. Vocabulary knowledge. In Comprehension and Teaching: Research Reviews; Guthrie, J.T., Ed.; International Reading Association: Newark, DE, USA, 1981; pp. 77–117. [Google Scholar]
  16. Cunningham, A.E.; Stanovich, K.E. What reading does for the mind. Am. Educ. 1998, 22, 8–15. [Google Scholar]
  17. Stanovich, K.E. Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. RRQ 1986, 21, 360–407. [Google Scholar] [CrossRef]
  18. Stahl, S.A.; Nagy, W.E. Teaching Word Meanings; Erlbaum: Mahwah, NJ, USA, 2006. [Google Scholar]
  19. Perfetti, C.A.; Stafura, J. Word knowledge in a theory of reading comprehension. Sci. Stud. Read. 2014, 18, 22–37. [Google Scholar] [CrossRef]
  20. Perfetti, C.; Yang, C.L.; Schmalhofer, F. Comprehension skill and word-to-text integration processes. Appl. Cogn. Psychol. 2008, 22, 303–318. [Google Scholar] [CrossRef] [Green Version]
  21. Yang, C.L.; Perfetti, C.A.; Schmalhofer, F. Less skilled comprehenders’ ERPs show sluggish word-to-text integration processes. WL&L 2005, 8, 157–181. [Google Scholar]
  22. Yang, C.L.; Perfetti, C.A.; Schmalhofer, F. Event-related potential indicators of text integration across sentence boundaries. J. Exp. Psychol. Learn. Mem. Cogn. 2007, 33, 55–89. [Google Scholar] [CrossRef] [PubMed]
  23. Bolger, D.J.; Balass, M.; Landen, E.; Perfetti, C.A. Context variation and definitions in learning the meanings of words: An instance-based learning approach. Discourse Process. 2008, 45, 122–159. [Google Scholar] [CrossRef]
  24. Perfetti, C.A.; Wlotko, E.W.; Hart, L.A. Word learning and individual differences in word learning reflected in event-related potentials. J. Exp. Psychol. Learn. Mem. Cogn. 2005, 31, 1281. [Google Scholar] [CrossRef] [PubMed]
  25. Van Daalen-Kapteijns, M.M.; Elshout-Mohr, M. The acquisition of word meanings as a cognitive learning process. J. Verbal Learn. Verbal Behav. 1981, 20, 386–399. [Google Scholar] [CrossRef]
  26. Perfetti, C.A.; Hart, L. The lexical quality hypothesis. In Precursors of Functional Literacy; Vehoeven, L., Elbro, C., Reitsma, P., Eds.; John Benjamins: Amsterdam, The Netherlands, 2002; pp. 189–213. [Google Scholar]
  27. Lahey, M. Language Disorders and Language Development; Macmillan: Needham, MA, USA, 1988. [Google Scholar]
  28. Hoover, W.A.; Gough, P.B. A simple view of reading. Read. Writ. 1990, 2, 127–160. [Google Scholar] [CrossRef]
  29. Goswami, U. Early phonological development and the acquisition of literacy. In Handbook of Early Literacy Research; Neuman, S.B., Dickinson, D.K., Eds.; Guilford Press: New York, NY, USA, 2001; pp. 111–125. [Google Scholar]
  30. Metsala, J.L. Young children’s phonological awareness and nonword repetition as a function of vocabulary development. J. Educ. Psychol. 1999, 91, 3–19. [Google Scholar] [CrossRef]
  31. Walley, A.C.; Metsala, J.L.; Garlock, V.M. Spoken vocabulary growth: Its role in the development of phonological awareness and early reading ability. Read. Writ. 2003, 16, 5–20. [Google Scholar] [CrossRef]
  32. Seidenberg, M.S.; McClelland, J.L. A distributed developmental model of word recognition and naming. Psychol. Rev. 1989, 96, 523–568. [Google Scholar] [CrossRef] [PubMed]
  33. Ouellette, G.P. What’s meaning got to do with it: The role of vocabulary in word reading and reading comprehension. J. Educ. Psychol. 2006, 98, 554–566. [Google Scholar] [CrossRef]
  34. Levelt, W.J.M.; Roelofs, A.; Meyer, A.S. A theory of lexical access in speech production. BBS 1999, 22, 1–75. [Google Scholar]
  35. Dickinson, D.K.; McCabe, A.; Anastasopoulos, L.; Feinberg, E.S.; Poe, M.D. The comprehensive language approach to early literacy: The interrelationships among vocabulary, phonological sensitivity, and print knowledge among preschool-aged children. J. Educ. Psychol. 2003, 95, 465–481. [Google Scholar] [CrossRef]
  36. Scarborough, H.S. Connecting early language and literacy to later reading (dis)abilities: Evidence, theory, and practice. In Handbook of Early Literacy Research; Neuman, S.B., Dickinson, D.K., Eds.; Guilford Press: New York, NY, USA, 2001; pp. 97–110. [Google Scholar]
  37. Sénéchal, M.; Ouellette, G.; Rodney, D. The misunderstood giant: On the predictive role of early vocabulary to future reading. In Handbook of Early Literacy Research; Neuman, S.B., Dickinson, D.K., Eds.; Guilford Press: New York, NY, USA, 2006; pp. 173–182. [Google Scholar]
  38. Rasinski, T.V.; Reutzel, C.R.; Chard, D.; Linan-Thompson, S. Reading fluency. In Handbook of Reading Research; Kamil, M.L., Pearson, P.D., Moje, E.B., Afflerbach, P.P., Eds.; Routledge: New York, NY, USA, 2011; pp. 286–319. [Google Scholar]
  39. Samuels, S.J. The DIBELS tests: Is speed of barking at print what we mean by reading fluency? RRQ 2007, 42, 563–566. [Google Scholar]
  40. Hock, M.F.; Brasseur, I.F.; Deshler, D.D.; Catts, H.W.; Marquis, J.G.; Mark, C.A.; Stibling, J.W. What is the reading component skill profile of struggling adolescent readers in urban schools? LDQ 2009, 32, 21–38. [Google Scholar]
  41. Paige, D.D.; Smith, G.S.; Rasinski, T.V.; Rupley, W.H.; Magpuri-Lavell, T.; Nichols, W.D. A PATH analytic model linking foundational skills to grade 3 reading achievement. J. Educ. Res. 2018. [Google Scholar] [CrossRef]
  42. Paige, D.D.; Rasinski, T.; Magpuri-Lavell, T.; Smith, G. Interpreting the relationships among prosody, automaticity, accuracy, and silent reading comprehension in secondary students. J. Lit. Res. 2014, 46, 123–156. [Google Scholar] [CrossRef]
  43. Jenkins, J.R.; Fuchs, L.S.; van den Broek, P.; Espin, C.; Deno, S.L. Accuracy and fluency in list and context reading of skilled and RD groups: Absolute and relative performance levels. LDRP 2003, 18, 237–245. [Google Scholar] [CrossRef]
  44. Pinnell, G.S.; Pikulski, J.J.; Wixson, K.K.; Campbell, J.R.; Gough, P.B.; Beatty, A.S. Listening to Children Read Aloud: Data from NAEP’s Integrated Reading Performance Record (IRPR) at Grade 4. Available online: https://eric.ed.gov/?id=ED378550 (accessed on 26 September 2018).
  45. Wise, B.W.; Ring, J.; Olson, R.K. Training phonological awareness with and without explicit attention to articulation. J. Exp. Child Psychol. 1999, 72, 271–304. [Google Scholar] [CrossRef] [PubMed]
  46. Paige, D.D.; Rupley, W.H.; Smith, G.S.; Rasinski, T.V.; Nichols, W.; Magpuri-Lavell, T. Is prosodic reading a strategy for comprehension? J. Educ. Res. Online 2017, 9, 245–275. [Google Scholar]
  47. Kuhn, M.R.; Stahl, S.A. Fluency: A Review of Developmental and Remedial Practices. Available online: http://psycnet.apa.org/buy/2003-01605-001 (accessed on 26 September 2018).
  48. Benjamin, R.G.; Schwanenflugel, P.J. Text complexity and oral reading prosody in young children. RRQ 2010, 45, 388–404. [Google Scholar] [CrossRef]
  49. Daane, M.C.; Campbell, J.R.; Grigg, W.S.; Goodman, M.J.; Oranje, A. Fourth-Grade Students Reading Aloud: NAEP 2002 Special Study of Oral Reading. Available online: https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2006469 (accessed on 26 September 2018).
  50. Klauda, S.L.; Guthrie, J.T. Relationships of three components of reading fluency to reading comprehension. J. Educ. Psychol. 2008, 100, 310–321. [Google Scholar] [CrossRef]
  51. Paige, D.D.; Rasinski, T.V.; Magpuri-Lavell, T. Is fluent, expressive reading important for high school readers? JAAL 2012, 56, 67–76. [Google Scholar] [CrossRef]
  52. Rasinski, T.V.; Rikli, A.; Johnston, S. Reading fluency: More than automaticity? More than a concern for the primary grades? Lit. Res. Instr. 2009, 48, 350–361. [Google Scholar] [CrossRef]
  53. Valencia, S.W.; Smith, A.; Reece, A.M.; Li, M.; Wixon, K.K.; Newman, H. Oral reading fluency assessment: Issues of content, construct, criterion, and consequent validity. RRQ 2010, 45, 270–291. [Google Scholar] [CrossRef]
  54. Beckman, M.E. The parsing of prosody. Lang. Cogn. Process. 1996, 11, 17–67. [Google Scholar] [CrossRef]
  55. Cutler, A.; Dahan, D.; van Donselaar, W. Prosody in the comprehension of spoken language: A literature review. Lang. Speech 1997, 40, 141–201. [Google Scholar] [CrossRef] [PubMed]
  56. Peppé, S.; McCann, J. Assessing intonation and prosody in children with atypical language development: The PEPS-C test and the revised version. Clin. Linguist. Phon. 2003, 17, 345–354. [Google Scholar] [CrossRef] [PubMed]
  57. Sanderman, A.A.; Collier, R. Prosodic phrasing and comprehension. Lang. Speech 1997, 40, 391–409. [Google Scholar] [CrossRef]
  58. Schwanenflugel, P.J.; Benjamin, R.G. Lexical prosody as an aspect of reading fluency. Read. Writ. 2017, 30, 143–162. [Google Scholar] [CrossRef]
  59. Veenendaal, N.J.; Groen, M.A.; Verhoeven, L. The role of speech prosody and text reading prosody in children’s reading comprehension. Br. J. Educ. Psychol. 2014, 84, 521–536. [Google Scholar] [CrossRef] [PubMed]
  60. Veenendaal, N.J.; Groen, M.A.; Verhoeven, L. What oral reading fluency can reveal about reading comprehension. J. Res. Read. 2015, 38, 213–225. [Google Scholar] [CrossRef]
  61. Hiebert, E.H.; Lubliner, S. The nature, learning, and instruction of general academic vocabulary. In What Research Has to Say about Vocabulary Instruction; Farstrup, A.E., Samuels, S.J., Eds.; International Reading Association: Newark, DE, USA, 2008; pp. 106–129. [Google Scholar]
  62. Corson, D. The learning and use of academic English words. Lang. Learn. 1997, 47, 671–718. [Google Scholar] [CrossRef]
  63. Cunningham, J.W.; Moore, D.W. The contribution of understanding academic vocabulary to answering comprehension questions. J. Read. Behav. 1993, 25, 171–180. [Google Scholar] [CrossRef]
  64. Nation, I.S.P.; Kyongho, H. Where would general service vocabulary stop and special purposes vocabulary begin? System 1995, 23, 35–41. [Google Scholar] [CrossRef]
  65. Scarcella, R.C. Accelerating Academic English: A Focus on English Language Learners. Available online: https://www.librarything.com/work/3402943 (accessed on 29 September 2018).
  66. Nagy, W.; Townsend, D. Words as tools: Learning academic vocabulary as language acquisition. RRQ 2012, 47, 91–108. [Google Scholar] [CrossRef]
  67. Beck, I.L.; McKeown, M.G.; Kucan, L. Bringing Words to Life; The Guilford Press: New York, NY, USA, 2002. [Google Scholar]
  68. Biber, D. University Language: A Corpus-Based Study of Spoken and Written Registers; John Benjamins: Philadelphia, PA, USA, 2006. [Google Scholar]
  69. Lesaux, N.K.; Kieffer, M.J.; Faller, S.E.; Kelley, J.G. The effectiveness and ease of implementation of an academic vocabulary intervention for linguistically diverse students in urban middle schools. RRQ 2010, 45, 196–228. [Google Scholar] [CrossRef]
  70. Snow, C.E.; Lawrence, J.; White, C. Generating knowledge of academic language among urban middle school students. SREE 2009, 2, 325–344. [Google Scholar] [CrossRef]
  71. Townsend, D.; Collins, P. Academic vocabulary and middle school English learners: An intervention study. Read. Writ. 2009, 22, 993–1019. [Google Scholar] [CrossRef]
  72. Hancioğlu, N.; Neufeld, S.; Eldridge, J. Through the looking glass and into the land of lexico-grammar. ESP 2008, 27, 459–479. [Google Scholar] [CrossRef]
  73. Biemiller, A. Language and Reading Success; Brookline: Newton Upper Falls, MA, USA, 1999. [Google Scholar]
  74. Stahl, S.A. Beyond the instrumentalist hypothesis: Some relationships between word meanings and comprehension. In The Psychology of Word Meanings; Schwanenflugel, P.J., Ed.; Erlbaum: Hillsdale, NJ, USA, 1991; pp. 157–186. [Google Scholar]
75. Townsend, D.; Filippini, A.; Collins, P.; Biancarosa, G. Evidence for the importance of academic word knowledge for the academic achievement of diverse middle school students. Elem. Sch. J. 2012, 112, 497–518. [Google Scholar] [CrossRef]
  76. Nunnally, J.C.; Bernstein, I.H. Psychometric Theory; McGraw Hill: New York, NY, USA, 1994. [Google Scholar]
  77. Deno, S.L. Curriculum-based measurement: The emerging alternative. Except. Child. 1985, 52, 219–232. [Google Scholar] [CrossRef] [PubMed]
  78. Deno, S.L.; Mirkin, P.K.; Chiang, B. Identifying valid measures of reading. Except. Child. 1982, 49, 36–45. [Google Scholar] [PubMed]
79. Fuchs, L.S.; Fuchs, D.; Hosp, M.K.; Jenkins, J.R. Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Sci. Stud. Read. 2001, 5, 239–256. [Google Scholar] [CrossRef]
80. McGlinchey, M.T.; Hixson, M.D. Using curriculum-based measurement to predict performance on state assessments in reading. Sch. Psychol. Rev. 2004, 33, 193–203. [Google Scholar]
  81. National Governors Association Center for Best Practices & Council of Chief State School Officers. Common Core State Standards for English Language Arts and Literacy in History/Social Studies, Science, and Technical Subjects; National Governors Association Center for Best Practices, Council of Chief State School Officers: Washington, DC, USA, 2010.
82. Graesser, A.C.; McNamara, D.; Louwerse, M.M.; Cai, Z. Coh-Metrix: Analysis of text on cohesion and language. Behav. Res. Methods Instrum. Comput. 2004, 36, 193–202. [Google Scholar] [CrossRef]
83. Zutell, J.; Rasinski, T.V. Training teachers to attend to their students’ oral reading fluency. Theory Pract. 1991, 30, 211–217. [Google Scholar]
84. Loo, R. Motivational orientations toward work: An evaluation of the work preference inventory (Student form). Meas. Eval. Couns. Dev. 2001, 33, 222–233. [Google Scholar]
  85. Nunnally, J.C. Psychometric Theory, 1st ed.; McGraw-Hill: New York, NY, USA, 1967. [Google Scholar]
  86. Streiner, D.L. Starting at the beginning: An introduction to coefficient alpha and internal consistency. J. Pers. Assess. 2003, 80, 99–103. [Google Scholar] [CrossRef] [PubMed]
  87. Moser, G.P.; Sudweeks, R.R.; Morrison, T.G.; Wilcox, B. Reliability of ratings of children’s expressive reading. Read. Psychol. 2014, 35, 58–79. [Google Scholar] [CrossRef]
  88. Smith, G.S.; Paige, D.D. A study of reliability across multiple raters when using the NAEP and MDFS rubrics to measure oral reading fluency. Read. Psychol. 2018, in press. [Google Scholar]
  89. SRI: Technical Guide. Available online: https://www.hmhco.com/products/assessment-solutions/assets/pdfs/sri/SRI_TechGuide.pdf (accessed on 29 September 2018).
90. Hasbrouck, J.; Tindal, G. Oral reading fluency norms: A valuable assessment tool for reading teachers. Read. Teach. 2006, 59, 636–644. [Google Scholar] [CrossRef]
91. Baron, R.M.; Kenny, D.A. The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. J. Pers. Soc. Psychol. 1986, 51, 1173–1182. [Google Scholar] [CrossRef]
92. Preacher, K.J.; Hayes, A.F. SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behav. Res. Methods Instrum. Comput. 2004, 36, 717–731. [Google Scholar] [CrossRef]
93. Preacher, K.J.; Kelley, K. Effect size measures for mediation models: Quantitative strategies for communicating indirect effects. Psychol. Methods 2011, 16, 93–115. [Google Scholar] [CrossRef] [PubMed]
  94. Preacher, K.J.; Curran, P.J.; Bauer, D.J. Computational tools for probing interaction effects in multiple linear regression, multilevel modeling, and latent curve analysis. J. Educ. Behav. Stat. 2006, 31, 437–448. [Google Scholar] [CrossRef]
  95. Hiebert, E.H. The forgotten reading proficiency: Stamina in silent reading. In Teaching Stamina and Silent Reading in the Digital-Global Age; Hiebert, E.H., Ed.; TextProject: Santa Cruz, CA, USA, 2015; pp. 16–31. [Google Scholar]
Figure 1. Graph of the Relationship between Comprehension and Rate.
Figure 2. Graph of the Relationship between Vocabulary and Rate.
Figure 3. Graph of Relationship between Comprehension and Vocabulary.
Figure 4. Pathway of Mediation Analysis.
Figure 5. Beta Coefficients (b) for Academic Vocabulary and Reading Rate Predicting Comprehension by Group.
Table 1. Means and standard deviations of the measured variables by grade.

Variable | Sixth-Grade (n = 83) | Seventh-Grade (n = 55) | All Students (n = 138)
Comprehension | 851.52 (206.57) | 942.49 (196.14) | 887.78 (206.64)
Academic Vocabulary | 23.30 (7.99) | 26.16 (8.42) | 24.44 (8.25)
Reading Rate | 136.93 (33.48) | 131.36 (29.97) | 134.71 (32.13)
Word Identification Accuracy | 134.43 (33.99) | 126.89 (34.46) | 130.33 (38.26)
Prosody | 10.64 (2.18) | 10.05 (1.67) | 10.41 (2.01)
Table 2. Bivariate Correlations among the Measured Variables.

Variable | Comprehension | Academic Vocabulary | Reading Rate | Word Accuracy | Prosody
Comprehension | 1
Academic Vocabulary | 0.485 | 1
Reading Rate | 0.566 | 0.337 | 1
Word Accuracy | 0.544 | 0.321 | 0.958 | 1
Prosody | 0.489 | 0.380 | 0.755 | 0.738 | 1
Note: All correlations significant at p < 0.01 (two-tailed); N = 138; Word Accuracy = word identification accuracy.
Table 3. Hierarchical Regression Results for Measures Predicting Silent Reading Comprehension.

All Students (n = 138)
Variable | Std. Error | B | SE β | β | R² | ΔR² | t
Step 1: Constant | 55.72 | 591.14 | 48.42 |  |  |  | 12.21 ***
Step 1: TAWK | 2.16 | 12.14 | 1.88 | 0.485 | 0.229 |  | 6.46 ***
Step 2: Constant | 67.40 | 291.30 | 62.58 |  |  |  | 4.66 ***
Step 2: TAWK | 1.91 | 8.30 | 1.75 | 0.332 | 0.229 |  | 4.76 ***
Step 2: Rate | 0.47 | 2.92 | 0.45 | 0.454 | 0.409 | 0.18 | 6.51 ***

Low-Rate Students (n = 58; Rate < 127)
Variable | Std. Error | B | SE β | β | R² | ΔR² | t
Step 1: Constant | 79.19 | 517.08 | 68.61 |  |  |  | 7.39 ***
Step 1: TAWK | 3.37 | 11.78 | 2.90 | 0.477 | 0.214 |  | 4.07 ***
Step 2: Constant | 128.49 | 221.23 | 124.45 |  |  |  | 1.78
Step 2: TAWK | 3.18 | 10.41 | 2.78 | 0.422 | 0.214 |  | 3.75 ***
Step 2: Rate | 0.33 | 3.12 | 1.12 | 0.314 | 0.299 | 0.085 | 2.79 ***

High-Rate Students (n = 80; Rate ≥ 127)
Variable | Std. Error | B | SE β | β | R² | ΔR² | t
Step 1: Constant | 61.78 | 729.73 | 60.81 |  |  |  | 11.52 ***
Step 1: TAWK | 2.26 | 9.09 | 2.23 | 0.418 | 0.165 |  | 4.06 ***
Step 2: Constant | 150.79 | 376.26 | 145.85 |  |  |  | 2.58 *
Step 2: TAWK | 2.35 | 6.96 | 2.30 | 0.320 | 0.165 |  | 3.03 **
Step 2: Rate | 1.01 | 2.61 | 0.99 | 0.280 | 0.224 | 0.059 | 2.65 *

Note: * p < 0.05, ** p < 0.01, *** p < 0.001; TAWK, Test of Academic Word Knowledge (academic vocabulary).
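The hierarchical structure behind Table 3 (academic vocabulary entered at step 1, reading rate added at step 2, with ΔR² as the increment in explained variance) can be illustrated with a short script. The sketch below is a hypothetical reconstruction using the open-source statsmodels library and placeholder column names (tawk, rate, comprehension); it is not the analysis code used in the study.

```python
import pandas as pd
import statsmodels.api as sm

def hierarchical_regression(df):
    """Two-step hierarchical regression mirroring the layout of Table 3.

    Step 1: comprehension regressed on academic vocabulary (TAWK).
    Step 2: reading rate added as a second predictor.
    Returns both fitted models and the R-squared increment (delta R2).
    """
    y = df["comprehension"]

    # Step 1: academic vocabulary only
    X1 = sm.add_constant(df[["tawk"]])
    step1 = sm.OLS(y, X1).fit()

    # Step 2: academic vocabulary plus reading rate
    X2 = sm.add_constant(df[["tawk", "rate"]])
    step2 = sm.OLS(y, X2).fit()

    delta_r2 = step2.rsquared - step1.rsquared
    return step1, step2, delta_r2

# Hypothetical usage with a data frame holding the three measures:
# df = pd.read_csv("scores.csv")   # assumed columns: tawk, rate, comprehension
# step1, step2, delta_r2 = hierarchical_regression(df)
# print(step1.params, step2.params, delta_r2)
```

Re-running the second step separately for readers below and above the 127 words-per-minute cut-point would yield the low- and high-rate columns of Table 3.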
Table 4. Regression Results for the Mediation of Rate on Vocabulary and Comprehension.

Model/Path | Estimate | SE | 95% CI (Lower) | 95% CI (Upper)
Vocabulary → Reading Rate (a) | 1.31 *** | 0.314 | 0.691 | 1.934
R²M.X | 0.114
Reading Rate → Comprehension (b) | 2.92 *** | 0.449 | 2.034 | 3.809
R²Y.M |
Vocabulary → Comprehension (c) | 12.14 *** | 1.878 | 8.423 | 15.850
R²Y.X | 0.235
Vocabulary → Comprehension (c′) | 8.30 *** | 1.746 | 4.849 | 11.756
R²Y.M.X | 0.418
Indirect Effect of Rate in Y.M.X | 3.84 *** | 1.139 | 1.839 | 6.278
Note: Estimates for model/path effects are unstandardized regression coefficients (B); R²M.X is the proportion of variance in reading rate (M) explained by academic vocabulary (X); R²Y.M is the proportion of variance in comprehension (Y) explained by reading rate (M); R²Y.X is the proportion of variance in comprehension (Y) explained by academic vocabulary (X); R²Y.M.X is the proportion of variance in comprehension (Y) explained by academic vocabulary (X) when controlling for rate (M). Number of bootstrap samples = 1000. CI, confidence interval. *** p < 0.001.
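The quantities in Table 4 follow the standard simple-mediation decomposition: path a (vocabulary predicting rate), path b (rate predicting comprehension while controlling for vocabulary), the total effect c, the direct effect c′, and a bootstrapped confidence interval for the indirect effect a × b. The sketch below, which uses hypothetical array names and 1000 resamples to match the table note, shows one way such estimates could be computed with a percentile bootstrap; it is an illustrative reconstruction, not the authors' script.

```python
import numpy as np

def simple_mediation(x, m, y, n_boot=1000, seed=42):
    """Simple mediation (X -> M -> Y) with a percentile bootstrap CI
    for the indirect effect a*b. x, m, y are 1-D numpy arrays."""
    def slope(pred, out):
        # OLS slope of `out` on one predictor (with intercept)
        X = np.column_stack([np.ones_like(pred), pred])
        return np.linalg.lstsq(X, out, rcond=None)[0][1]

    def slopes_two(pred1, pred2, out):
        # OLS slopes of `out` on two predictors (with intercept)
        X = np.column_stack([np.ones_like(pred1), pred1, pred2])
        coefs = np.linalg.lstsq(X, out, rcond=None)[0]
        return coefs[1], coefs[2]

    a = slope(x, m)                    # path a: X -> M
    c = slope(x, y)                    # total effect c: X -> Y
    c_prime, b = slopes_two(x, m, y)   # direct effect c' and path b: M -> Y
    indirect = a * b

    # Percentile bootstrap of the indirect effect
    rng = np.random.default_rng(seed)
    boots = np.empty(n_boot)
    n = len(x)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        a_i = slope(x[idx], m[idx])
        _, b_i = slopes_two(x[idx], m[idx], y[idx])
        boots[i] = a_i * b_i
    ci = np.percentile(boots, [2.5, 97.5])

    return {"a": a, "b": b, "c": c, "c_prime": c_prime,
            "indirect": indirect, "ci_95": ci}
```

As a sanity check on the decomposition, c should approximately equal c′ plus a × b, which the values in Table 4 satisfy (8.30 + 3.84 = 12.14).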
Table 5. Effect of Reading Rate on Comprehension by Reading Group.

Rate Cut-Point | Low-Rate Group (b) | High-Rate Group (b) | ∆
114 | 0.287 ns | 0.418 *** | ns
117 | 0.293 ns | 0.412 *** | ns
118 | 0.336 * | 0.414 *** | 0.78
119 | 0.329 * | 0.390 *** | 0.61
120 | 0.374 ** | 0.417 *** | 0.43
121 | 0.398 ** | 0.428 *** | 0.30
122 | 0.398 ** | 0.428 *** | 0.30
123 | 0.392 ** | 0.420 *** | 0.28
125 | 0.437 ** | 0.445 ** | 0.08
127 | 0.429 ** | 0.427 *** | 0.02
128 | 0.416 ** | 0.409 *** | 0.07
129 | 0.417 ** | 0.395 *** | 0.22
130 | 0.363 ** | 0.369 ** | 0.06
131 | 0.388 ** | 0.392 *** | 0.04
133 | 0.387 ** | 0.377 ** | 0.10
134 | 0.364 ** | 0.356 ** | 0.08
136 | 0.395 ** | 0.368 ** | 0.27
137 | 0.398 ** | 0.363 ** | 0.35
Note: b, beta coefficient; Rate, words read per minute; * p < 0.05, ** p < 0.01, *** p < 0.001. Data not available for cut-points 115, 116, 124, 126, 132, and 135.
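Table 5 reflects a scan over candidate rate cut-points: at each cut-point the sample is split into lower- and higher-rate readers, and the rate-comprehension slope is re-estimated within each subgroup. The sketch below is a hypothetical reconstruction of that logic with assumed variable names; for simplicity it standardizes both variables and regresses comprehension on rate alone within each subgroup, so the exact specification behind the published coefficients may differ.

```python
import numpy as np

def standardized_slope(x, y):
    """Standardized OLS slope (beta) of y on a single predictor x."""
    x_z = (x - x.mean()) / x.std(ddof=1)
    y_z = (y - y.mean()) / y.std(ddof=1)
    X = np.column_stack([np.ones_like(x_z), x_z])
    return np.linalg.lstsq(X, y_z, rcond=None)[0][1]

def scan_rate_cutpoints(rate, comprehension, cutpoints, min_n=10):
    """For each cut-point, split readers into low (< cut) and high (>= cut)
    groups and record the standardized rate-comprehension slope in each."""
    rows = []
    for cut in cutpoints:
        low = rate < cut
        high = ~low
        if low.sum() < min_n or high.sum() < min_n:
            continue  # skip splits that leave a subgroup too small to estimate
        b_low = standardized_slope(rate[low], comprehension[low])
        b_high = standardized_slope(rate[high], comprehension[high])
        rows.append((cut, b_low, b_high, abs(b_high - b_low)))
    return rows

# Hypothetical usage, assuming numpy arrays `rate` and `comprehension`:
# for cut, b_low, b_high, delta in scan_rate_cutpoints(rate, comprehension, range(114, 138)):
#     print(f"{cut:>3}  low b = {b_low:.3f}  high b = {b_high:.3f}  delta = {delta:.3f}")
```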
Table 6. Means and Standard Deviations for Reading Rate, Academic Vocabulary, and Comprehension by Low- and High-Rate Readers.

Group | Comprehension | Academic Vocabulary | Reading Rate
Low-Rate (n = 58, 42%) | 780.02 (196.32) | 22.33 (7.96) | 104.62 (19.75)
High-Rate (n = 80, 58%) | 965.90 (177.59) | 25.98 (8.17) | 156.53 (19.04)
Note: Differences between low- and high-rate groups for comprehension and reading rate are statistically significant at p < 0.001. Differences between low- and high-rate groups for academic vocabulary are significant at p = 0.01.
