Article

Critical Thinking in Reading Comprehension: Fine Tuning the Simple View of Reading

1 Department of Curriculum & Instruction, School of Education, Northern Illinois University, DeKalb, IL 60115, USA
2 Teaching, Learning, Culture, College of Education, Texas A&M University, College Station, TX 77843, USA
3 Department of Education, School of Education, Chicago State University, Chicago, IL 60628, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(3), 225; https://doi.org/10.3390/educsci14030225
Submission received: 14 January 2024 / Revised: 6 February 2024 / Accepted: 8 February 2024 / Published: 22 February 2024
(This article belongs to the Special Issue Building Literacy Skills in Primary School Children and Adolescents)

Abstract

Critical thinking has been identified as an essential skill for the 21st century, yet little research has investigated its role in reading comprehension. Executive functions (EF) and critical thinking overlap: the latter often relies on the proficient operation of EF and vice versa. The active view of reading extends the simple view of reading, which attributes reading comprehension to language comprehension and decoding, by adding the role of EF. In the present study, we assessed 360 seventh-grade English language learners attending schools in three states in India. We gathered measures of reading comprehension, critical thinking, listening comprehension, reading fluency, academic vocabulary, and encoding. Using multiple regression to fit a linear model, the best-fit model explained 59.3% of the total variance in reading comprehension. Two indicators of critical thinking, induction and deduction, were significant predictors of reading comprehension, along with listening comprehension, encoding, and academic vocabulary. Also of interest, reading fluency was a non-significant predictor of reading comprehension. The results of this study add empirical support for the role of critical thinking in reading comprehension.

1. Introduction

Critical thinking, along with reading attainment, is being touted as an essential competency for the 21st century. For example, in its Learning Framework for 2030, the Organization for Economic Cooperation and Development [1] identifies critical thinking as an essential skill necessary to navigate the complexities of today’s world. The ultimate outcome of reading attainment is comprehension, which has been found to share a bi-directional relationship with the growth of cognitive abilities and academic achievement [2,3], indicating that advancements in reading comprehension align with heightened critical thinking abilities and vice versa. Principles and insights from the educational and cognitive science literature would, therefore, suggest that critical thinking and reading comprehension are reciprocal cognitive processes. Thus, critical thinking should be a robust predictor of reading comprehension. In this study, we address the role of critical thinking as a predictor of reading comprehension in 360 seventh-grade students from across three states in India.
Within the domain of cognitive science, a widely shared construct centers on the dual-process nature of thinking [4]. This dual-process framework theorizes the existence of two distinct systems guiding cognitive operations. System 1 processes are characterized by their rapidity, implicit nature, and automatic execution, demanding minimal cognitive attention. In contrast, System 2 processes require heightened cognitive focus, engaging in intentional, analytic, and reflective operations [4,5,6]. Serving as the umbrella for reasoning, System 2 processes play a pivotal role in cognitive activities that demand deliberate thought and analysis. Integral to this dual-process framework is the recognition that System 2 processes are not only essential for reasoning but are particularly crucial in the realm of critical thinking [7]. Critical thinking is a cognitive activity that is distinguished by its intentionality, reflective nature, and substantial reliance on the cognitive efforts governed by System 2. In the intricate interplay between cognitive processes and critical thinking, the dual-process framework stands out as a succinct and illuminating conceptualization within the context of cognitive science. Executive function emerges as critical in this cognitive interplay, assuming a pivotal role in coordinating deliberate and analytical cognitive operations.

2. Executive Function

Executive functions (EF) consist of three distinct components: cognitive, metacognitive, and dispositional [8,9]. The cognitive component includes inductive and deductive thinking, decision-making, cognitive flexibility (the ability to adopt different perspectives and adapt to changed circumstances), problem-solving, and creativity. Meta-cognitive components consist of control and self-awareness, while dispositional components include motivation, goal orientation, and attitude. Critical to EF is working memory, the ability to manipulate information while holding it in mind [10,11]. Zelazo and Müller [12] define executive functions (EF) “as the psychological processes involved in the control of thought and action” (p. 445). Similarly, Lizarrage et al. [13] describe EF as a set of “managerial processes” (p. 274) that operate to control advanced cognitive processes used to produce thoughts, memories, and emotions, particularly in contexts requiring novel and ambiguous approaches to achieve a solution.
In sum, executive function is linked to self-management and task-oriented processes, whereas critical thinking is centered on intellectual tasks and reasoned decision-making. Research indicates an overlap, recognizing that effective critical thinking often relies on proficient executive function skills and vice versa [14].

3. Critical Thinking

Critical thinking has long been recognized as a useful tool when confronting everyday situations and is now considered an important student outcome in general education [15,16,17,18,19,20,21]. However, there is no agreed-upon definition of critical thinking (CT). Ennis [22] and Bensley [23] focus on it being purposeful and reflective, while other authors conceptualize CT as a habit of mind, something that is conducted routinely in the pursuit of fair evaluation [24]. CT is a multidimensional construct composed of complex, higher-order cognitive skills as well as dispositional, motivational, and meta-cognitive components that are purposeful, reflective, problem-focused, and skeptical.
Critical thinking requires a purposeful effort to engage in the independent analysis of evidence apart from one’s own beliefs and biases [16,21,22,25,26,27,28]. Engaging in reasoning and analysis requires the reader to possess the stable and relative content knowledge necessary to evaluate the claims and warrants appearing in the text [29,30]. A challenge with knowledge is that it cannot be instantly constructed; content knowledge is built over time through intentional learning and study, and when it is insufficient, it results in a poor ability to generate inferences [31]. Readers may possess deep knowledge and critical thinking skills; however, this does not mean that these will be used in comprehension tasks [22,32].
Critical thinking research is often focused on the reasoning necessary to first identify and then accept or reject fallacies in arguments and short scenarios [33,34]. Inductive and deductive thinking are components of such reasoning because they aid individuals in determining the origin and validity of knowledge sources and their applicability to specific situations [35,36,37,38,39]. Inductive reasoning (IR) can be conceptualized as generalizing rules from individual observations and involves the tasks of data gathering, pattern finding, and hypothesis generation [40,41]. IR is thought to provide the basis for applying knowledge to unfamiliar contexts and is one of the strongest predictors of academic performance [40,42]. Deductive reasoning (DR), on the other hand, guarantees a true conclusion when the premises are, in fact, true, leaving only one logically valid answer [43,44]. A major difference between inductive and deductive reasoning is that the latter can be invoked without real-world knowledge, setting up the possibility of a logically valid but factually incorrect deduction. Consider two premises: “All insects can bark” and “A caterpillar is an insect.” The logically valid deduction is that a caterpillar can bark, even though this conclusion is factually false.

4. Belief Systems

A challenge in evaluating arguments is the acceptance of fallacious arguments supporting the reader’s existing belief systems that can lead to invalid conclusions [16,22,25,44,45,46,47].
Belief systems influence how a reader frames a problem or statement, which then directs subsequent reasoning regarding its interpretation [48,49,50,51,52,53,54,55]. Engagement in critical, hypothetical thinking requires the reader to decouple real or current representations of the world from those that can be imagined [56,57,58]. This requires the reader to have the basic cognitive skills to identify logical reasoning flaws in themselves and others and to possess the meta-cognitive skills needed to evaluate evidence independently from one’s own beliefs [16,59,60,61,62].

5. Reading Comprehension

Critical thinking applied to reading comprehension involves the application of executive function processes, such as comprehension monitoring and inference-making, as included in the construction–integration (CI) model of reading comprehension [29]. The CI model identifies three levels of representation: the surface code, comprising the words and syntax of the text; the textbase, containing the reader’s abstracted text propositions and the minimal inferences needed for coherence; and the situation model, in which the text is integrated with the reader’s background knowledge [29,63,64,65].
Some theorists argue that critical thinking works in interaction with the reader’s existing knowledge base [66,67]. For example, the reading systems framework hypothesizes that linguistic and orthographic systems interact with the reader’s background knowledge to produce textual understanding [68]. However, Willson and Rupley [69] found that by sixth grade, background knowledge was no longer the primary predictor of reading comprehension. Rather, strategic knowledge of how and what to read in a text becomes more important for both narrative and expository text understanding, thus suggesting a role for critical thinking. Further, Diakidoy et al. [70] found that reading to evaluate an author’s argument resulted in better comprehension as the reader was more likely to engage in critical evaluation.

6. Conceptual Framework

Duke and Cartwright [71] have proposed the active view of reading (AVR), which extends the simple view of reading [72] by including executive function processes necessary for active self-regulation when reading. These processes include working memory capacity, prior knowledge of the content, and inference-making. The AVR also includes bridging processes connecting decoding and comprehension, such as reading fluency and morphological awareness [73,74,75,76]. A conceptual strength of the AVR is that it allows for the possibility of unique, explanatory variance in reading comprehension attributable to critical thinking. Consider that critical reading of a text requires the reader to use the text-dependent information provided by the author to then generate inferences that integrate with the reader’s schema to improve comprehension [31,64,77]. The quality of the resulting representation rests on reader-dependent factors such as word-level knowledge, working memory capacity, inference-making ability, self-monitoring of comprehension, and prior knowledge of the topic [78,79,80,81,82,83]. As such, these processes are viewed as important to reading comprehension above and beyond decoding and language comprehension processes.

7. The Present Study

This study is set in India, where reading achievement, although slowly improving, is characterized by generally low attainment and large discrepancies by gender and age [84]. Critical thinking has not been an explicit part of the educational curriculum in India, thus providing an opportunity to analyze the role of CT in a setting where students have not been instructed in its use [85]. This study investigates the potential contribution that CT may make to reading comprehension in seventh-grade students across three states in northeast India. It is focused on the following three research questions:
RQ1: What is the attainment level of critical thinking skills in the study population?
RQ2: Do students at the three participating schools differ on the measured variables?
RQ3: To what extent does critical thinking predict reading comprehension?
We hypothesize that critical thinking, reading fluency, listening comprehension, and vocabulary will contribute significantly to reading comprehension. We base this hypothesis on the active view framework, which suggests that meta-cognitive processes are important to comprehension, while reading fluency, listening comprehension, and vocabulary have been shown in multiple studies to predict reading comprehension.

8. Methods

8.1. Participants

The present study was conducted in three private, all-female schools located in cities in the northeastern Indian states of Assam, Meghalaya, and West Bengal. These schools were selected because they were part of a larger initiative by the researcher to improve reading instruction across the schools. The schools are run by a religious Christian order that has operated private schools across India for over 100 years. Although school administrators do not collect socioeconomic information from students or their families, the researcher was informed that a small portion of students are financially disadvantaged and receive tuition assistance. However, most students come from middle-class families whose parents pay full tuition. Because of these factors, it should be noted that these schools are not representative of schools across India, particularly government-sponsored schools. The students attending these three schools come from a variety of faith traditions, with the large majority being Hindu. A smaller number are of the Christian tradition, and the smallest percentage are Muslim. It was also shared by administrators that parental involvement is high in each of the three schools. For example, principals meet twice a year with parents to review the progress of their child. The medium of instruction in each school is English. Additionally, all students take a class in Hindi, which, along with English, is an official language of India. Students also study the local, indigenous language, which differs by state. Administrators reported to the researcher that daily attendance at each school is nearly 100%.
It was decided that seventh-grade students would be selected for the study because nearly all students matriculate into each of the schools at kindergarten and have been studying English as a second language for nearly eight years. While English is their second language, the principals of the participating schools stated that nearly all seventh-grade students are fluent English speakers, readers, and writers. All seventh-grade students present on the days that assessments were administered were included in the study, making the sample a census of seventh-grade students attending each school. The number of students participating in the study was n = 141, 81, and 138 for the schools located in Assam, Meghalaya, and West Bengal, respectively, resulting in a total sample of N = 360. The ages of the students in the study sample ranged from 11 years and 8 months to 12 years and 6 months, with a mean age of 12 years and 1 month.

8.2. Assessments

To determine the influence of predictors of reading comprehension (bridging processes, decoding, and comprehension), assessments were administered to measure developmental spelling knowledge (encoding), academic vocabulary, reading fluency, and listening and reading comprehension. To determine critical thinking ability, a standardized test of critical thinking was administered.

8.3. Encoding

The Developmental Spelling Assessment (DSA) [86] is a 20-item spelling (encoding) test that identifies the current stage of spelling knowledge attained by the student, thus providing a measure of the reader’s orthographic knowledge [86,87,88]. Based on the spelling stages described by Henderson and Templeton [89], the 20 words increase in spelling complexity: the first five represent the letter-naming stage, the next five the within-word stage, words 11–15 the syllable juncture stage, and the last five the derivational constancy stage. The DSA is group-administered as a traditional paper-and-pencil spelling test. To administer the test, the teacher pronounces each word, uses it in a sentence, and then pronounces it again; students spell the word on their paper and then wait for the next word. The test typically takes about 10 to 15 min to administer. To score the test, 1 point is awarded for each correctly spelled word, for a total score ranging from 0 to 20 points. Pearson r correlations for the five words comprising each of the four spelling stages, as reported by the test author, range from 0.97 to 0.99, while test–retest correlations range from 0.97 to 0.98.
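As a concrete illustration of this scoring scheme, the following Python sketch (hypothetical; the DSA is scored by hand) awards 1 point per correctly spelled word and derives stage sub-scores from the four five-word blocks:

```python
# Hypothetical sketch of DSA-style scoring: 1 point per correct spelling,
# with stage sub-scores taken from the four five-word blocks.
STAGES = ["letter-name", "within-word", "syllable-juncture", "derivational-constancy"]

def score_dsa(responses, answer_key):
    """Return the total score (0-20) and a per-stage breakdown (0-5 each)."""
    assert len(answer_key) == 20, "the DSA has 20 items"
    correct = [int(r == k) for r, k in zip(responses, answer_key)]
    sub_scores = {stage: sum(correct[i * 5:(i + 1) * 5])
                  for i, stage in enumerate(STAGES)}
    return sum(correct), sub_scores
```

The stage profile, not just the total, is what locates a student developmentally: a student scoring 5, 5, 2, 0 across the blocks is working within the syllable juncture stage.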

8.4. Vocabulary

The Test of Academic Word Knowledge (TAWK) is a 50-item assessment of academic vocabulary that takes about 15 to 20 min to complete [90]. The TAWK uses 10 sub-lists of words created by Coxhead [91], resulting in 570 word families that account for about 10% of the word tokens found in academic texts but only 1.4% of narrative texts. The TAWK is group-administered using paper and pencil. To complete the TAWK, the student silently reads the target word and then chooses from among three words the one whose meaning is most closely related to the target word. The student repeats this procedure for all 50 items. The test authors report internal reliability for the TAWK with Cronbach’s alpha equal to 0.85 and test–retest coefficient = 0.89.

8.5. Reading Fluency

To assess reading fluency, all students read aloud an unrehearsed narrative passage of 170 words with a Lexile range of 810L to 1000L. Readings were digitally recorded for later scoring. Each reading was timed from start to finish. Word-reading accuracy was determined by counting the number of errors made by the student in the 170-word passage; errors included mispronounced words, words not read, and words inserted into the text, while self-corrections were not counted as errors. The total number of errors was subtracted from the 170-word total to determine the number of words read correctly, which was then adjusted to reflect the number of words read correctly per minute (WCPM), a metric we call accumaticity [92]. Additionally, all readings were evaluated for reading prosody using the Multidimensional Fluency Scale (MDFS) [93]. Ratings on the MDFS range from 4 to 16, with higher scores indicating more competent reading. The MDFS has been found to be highly reliable, with generalizability coefficients ranging from 0.91 to 0.93 [94].
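The accumaticity computation described above reduces to simple arithmetic; a minimal sketch (function and variable names are ours, not the study’s):

```python
def words_correct_per_minute(total_words, errors, seconds):
    """Accumaticity: words read correctly, scaled to a per-minute rate.
    Errors (mispronunciations, omissions, insertions) are subtracted
    from the passage total; self-corrections are not counted as errors."""
    words_correct = total_words - errors
    return words_correct * 60.0 / seconds
```

For example, a student who reads the 170-word passage in 75 s with 5 errors receives (170 − 5) × 60 / 75 = 132 WCPM.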

8.6. Critical Thinking

To assess critical thinking, the Cornell Critical Thinking Test (5th ed., Revised, Level X; CCTT) [95] was administered to students. The CCTT Level X is a 76-item assessment that uses a future-based, four-section adventure story to assess inductive and deductive thinking, the ability to evaluate the credibility of sources, and the identification of assumptions in 4th- through 14th-grade students. In each of the four sections of the story, the student reads and rates statements that gradually reveal more about the story and its characters. For example, examinees may be asked to determine whether a fact supports a hypothesis, to rate the believability of a statement, to determine what might occur next given certain circumstances, or to determine the assumptions being made by the exploration team in the story. The CCTT takes about one hour to complete, and results can be computed for four domains: inductive and deductive thinking, judgment of credibility, and determination of assumptions. To score the CCTT, the correct answers within each of the four domains are totaled, and the four domain scores are then summed into a composite score ranging from 0 to 76. Reliability measures for the internal consistency of Level X reported by its authors range from 0.67 to 0.90, with a weighted average across all reported studies of 0.78. The test authors provide criterion-related evidence for the validity of the CCTT, with correlations ranging from r = 0.41 to 0.49 with the Watson–Glaser assessment, and correlations with IQ and aptitude tests of r = 0.44 to 0.74 with the Otis–Lennon, r = 0.42 to 0.53 with the Houghton–Mifflin Cognitive Abilities Test, r = 0.27 to 0.49 with the California Test of Mental Maturity, r = 0.63 with the Iowa Test of Educational Development, and r = 0.41 to 0.52 with the Scholastic Aptitude Test.

8.7. Listening Comprehension

To assess listening comprehension, the listening comprehension subtest of the Oral and Written Language Scales (2nd ed.; OWLS) was individually administered to all students in the study sample [96]. The OWLS listening comprehension subscale consists of three practice items followed by 111 test items. Following the test authors’ recommendations for the grade level assessed in the present study, administration begins with item 60 and proceeds through the end of the assessment, resulting in scores ranging from 0 to 111. The items assess three skill areas: lexical (nouns, pronouns, verbs, and modifiers), syntactic (grammatical forms), and supralinguistic (humor and deriving meaning from context) knowledge. The examiner reads the item prompt to the student, who then selects an answer from among four pictures. The OWLS takes about 20 min to complete. Spearman–Brown reliability correlations range from 0.75 to 0.89, with a two-week test–retest reliability of r = 0.94.

8.8. Silent Reading Comprehension

Silent reading comprehension was assessed using the text comprehension subtest from the Test of Reading Comprehension-4 (TORC) [97]. The TORC was group-administered: students silently read six increasingly complex stories, ranging in length from 56 to 340 words, and after each story answered five multiple-choice questions requiring factual and inferential thinking. Students completed the assessment in about an hour. The score used for this subtest was the total number of correct answers, with a range of 0 to 30. Test–retest reliability for the text comprehension subtest, as reported by the test authors, is r = 0.81 for students in the assessed age range. The test authors also report a coefficient alpha of 0.95, a test–retest coefficient of 0.83 for all students in the normative sample, and an interscorer reliability of 0.95.

8.9. Assessment Administration

The administration of the assessments took place over two days at each of the study sites. Students were group-administered the assessments measuring encoding, reading comprehension, academic vocabulary, and critical thinking. Directions for the tests were given to students by a local, school-based teacher to avoid misunderstandings of the American dialect spoken by the research team. The first researcher monitored the local teacher as the instructions were explained to ensure correctness. The order in which assessments were administered was randomized to avoid administration bias. The reading fluency passage and the listening comprehension assessment were individually administered by the first researcher and a graduate student trained in their administration.

9. Results

9.1. Research Question 1

The first research question asks about the attainment level of critical thinking skills in the study population. The means and bivariate correlations for the four domains of the critical thinking construct and their composite are shown in Table 1. Correlations between the four domains are small but statistically significant, except for the correlation between credibility and assumptions, which is small and non-significant. To further explore the relationships among the four domains, a principal components analysis (PCA) was conducted, using direct oblimin rotation to account for the correlations among the variables. Results shown in Table 2 reveal that the four domains form a single dimension explaining 42.64% of the variance. Mean attainment on the critical thinking composite was 30.54 (6.87). While norming data for the CCTT are limited, the authors report mean attainment of 35.4 (5.3) for fourth through sixth grade and 37.0 (5.3) for seventh through ninth grade, suggesting that attainment in the study population falls below the fourth- to sixth-grade norm. Attainment across the four domains was 10.34 (3.19) for inductive thinking, 6.79 (2.61) for deductive thinking, 10.7 (2.87) for evaluation of credibility, and 3.34 (1.69) for evaluation of assumptions.
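The single-dimension finding can be approximated with an unrotated PCA on the domain correlation matrix. The sketch below uses only NumPy (the study’s direct oblimin rotation requires a dedicated factor-analysis package and is omitted here); it returns the proportion of variance captured by the first component:

```python
import numpy as np

def first_component_variance(scores):
    """Proportion of total variance explained by the first principal
    component of the correlation matrix of `scores` (rows = students,
    columns = domain scores). Unrotated sketch only."""
    corr = np.corrcoef(scores, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
    return eigenvalues[0] / eigenvalues.sum()
```

A value well above 1/k (where k is the number of domains) is the usual informal signal of a dominant first dimension, as in the 42.64% reported above for k = 4.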

9.2. Research Question 2

The second research question asks whether students attending the three participating schools differed on the measured variables. To answer this question, a multivariate analysis of variance (MANOVA) was conducted with location as the between-subject factor. Table 3 shows the means, standard deviations, and statistical significance of differences by location. To control the Type I error rate across multiple significance tests, alpha levels were adjusted using a Bonferroni correction. Results show that on the measures of reading comprehension and critical thinking, achievement at the Meghalaya school was below the other two locations at a statistically significant level (p < 0.001), while no achievement differences were found between the Assam and West Bengal schools on either measure. On the measure of accumaticity (WCPM), each of the three schools was statistically different from the others, with the Meghalaya school achieving significantly below the other two (p < 0.001) and the Assam school showing significantly greater achievement than the West Bengal school (p < 0.001). Results for prosody found the Meghalaya location to be significantly lower than both the Assam (p = 0.005) and West Bengal (p < 0.001) locations. On the measure of encoding, attainment at the Meghalaya school was significantly lower than at both the Assam and West Bengal schools (p < 0.001), which did not differ significantly from each other (p = 0.056). For the measure of academic vocabulary, the West Bengal and Assam schools showed no statistically significant difference (p = 1.00), while the Meghalaya school attained a level significantly below the other two (p < 0.001). On the last measure, listening comprehension, the Meghalaya school again scored significantly below (p < 0.001) the other two schools, which were not statistically different from each other (p = 1.00).
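The Bonferroni correction used in these comparisons simply divides the familywise alpha by the number of tests; a minimal sketch (the test counts below are illustrative, not taken from the study):

```python
def bonferroni_alpha(familywise_alpha, n_tests):
    """Per-test significance threshold under a Bonferroni correction."""
    return familywise_alpha / n_tests

def significant(p_values, familywise_alpha=0.05):
    """Flag which p-values survive the corrected per-test threshold."""
    threshold = bonferroni_alpha(familywise_alpha, len(p_values))
    return [p < threshold for p in p_values]
```

With seven outcome measures, for example, the per-test threshold would be 0.05 / 7 ≈ 0.0071, which is why several of the pairwise comparisons above are reported with adjusted p-values of 1.00.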

9.3. Research Question 3

The third research question asks to what extent critical thinking predicts reading comprehension. Before beginning the analysis, the assumptions necessary for regression analysis were checked. All variables were found to be normally distributed, and no multicollinearity was detected (tolerance and VIF statistics were all acceptable). We evaluated whether homoscedasticity held across the error terms and found no violations for any of the variables. To determine whether the sample size was adequate for linear multiple regression, we conducted an a priori power analysis using G*Power [98,99], estimating a conservative effect size of 0.15, a significance level of 0.05, and a power level of 0.80 for the nine predictors. Results showed that a minimum sample size of N = 114 would provide adequate statistical power, far less than the study sample of N = 360.
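The multicollinearity check mentioned above rests on the variance inflation factor, VIF_j = 1 / (1 − R²_j), where R²_j comes from regressing predictor j on the remaining predictors. A sketch using only NumPy (statistical packages such as statsmodels provide this directly):

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF for each column of the predictor matrix X (rows = cases).
    Values near 1 indicate little collinearity; values above ~10 are
    a common cause for concern."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        design = np.column_stack([np.ones(n), others])  # add intercept
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        residuals = y - design @ beta
        r_squared = 1.0 - (residuals @ residuals) / ((y - y.mean()) ** 2).sum()
        vifs.append(1.0 / (1.0 - r_squared))
    return vifs
```

Tolerance, the other statistic reported, is simply the reciprocal of the VIF.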
We began our analysis by regressing reading comprehension onto the four indicators of critical thinking (induction, credibility, deduction, and assumptions) and the bridging process variables: accumaticity, prosody, academic vocabulary, encoding, and listening comprehension. Results in Table 4 reveal that the model predicted 59.4% of the variance in reading comprehension, F(9, 350) = 59.407, p < 0.001. Non-significant predictors (p > 0.05) included credibility, assumptions, accumaticity, and prosody; notably, accumaticity was a negative predictor of the outcome variable. We then regressed reading comprehension onto the remaining significant measures: induction, deduction, academic vocabulary, encoding, and listening comprehension. All were statistically significant predictors of reading comprehension, with the final model predicting 59.3% of the variance, F(5, 354) = 105.500, p < 0.001. The five significant predictors are shown in Table 4, while Figure 1 displays a diagram of the results. Standardized beta coefficients reveal listening comprehension to be the strongest predictor of reading comprehension (β = 0.460), followed by encoding (β = 0.273), induction (β = 0.140), deduction (β = 0.137), and academic vocabulary (β = 0.086).
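Standardized beta coefficients like those reported above can be obtained by z-scoring the outcome and predictors before fitting ordinary least squares; a NumPy sketch with illustrative (not the study’s) data:

```python
import numpy as np

def standardized_betas(X, y):
    """OLS standardized coefficients: z-score predictors and outcome,
    then solve least squares (centering removes the intercept)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return betas
```

Because both sides are expressed in standard-deviation units, the betas are directly comparable across predictors, which is what allows listening comprehension (β = 0.460) to be ranked against encoding (β = 0.273) and the other predictors.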

10. Discussion

As decoding and word reading processes become increasingly automatic, their value in predicting reading comprehension recedes while language processes, background knowledge, and EF take on increasing importance [100,101,102]. The results of our study provide insight into this shift and the contribution of critical thinking to reading comprehension in adolescent readers who speak English as a second language.
In this study, we measured the contribution of critical thinking to reading comprehension, along with the bridging processes connecting decoding and comprehension (encoding, academic vocabulary, and reading fluency) and listening comprehension, in a group of 360 seventh-grade students attending private, English-medium schools in three states in northeast India. The final model found that inductive and deductive thinking, academic vocabulary, encoding, and listening comprehension were significant predictors of reading comprehension, together accounting for 59.3% of the variance. Standardized beta coefficients for the final model showed listening comprehension to be by far the strongest predictor of reading comprehension (β = 0.460), followed by encoding (β = 0.273). The two indicators of critical thinking revealed nearly equal standardized beta coefficients of β = 0.140 and 0.137 for inductive and deductive thinking, respectively, while academic vocabulary was the smallest statistically significant predictor (β = 0.086). The results of this study contribute to the research base by supporting and quantifying the contribution of critical thinking to reading comprehension beyond that explained by decoding and linguistic processes in a sample of adolescent Indian students.
Our study is framed by the Active View of Reading (AVR) [71], which hypothesizes that executive function skills regulate reading comprehension. Higher-order executive function skills operate by updating information in working memory, inhibiting prepotent responses so other perspectives can be considered, and toggling among mental sets to facilitate critical thinking [103,104,105]. Our results show that deductive and inductive thinking contributed unique variance to reading comprehension, partially supporting the AVR and the role of critical thinking in reading comprehension.
A surprising result of this study was that oral reading fluency, as measured by accumaticity (words-correct-per-minute) and prosody, did not predict unique variance in reading comprehension in our study population. In fact, accumaticity was a negative predictor of reading comprehension, with a standardized beta of −0.053. We suggest an explanation can be found in the Decoding Threshold Hypothesis (DTH), which posits that the relationship between decoding and reading comprehension can only be observed above a certain decoding threshold [106]. The mean accumaticity score in our study sample was 126.62 (26.10), placing students at the 50th percentile of widely used reading norms, a level that has been found sufficient to sustain reading comprehension [90,107]. However, students in our study sample scored at the 9th percentile on our standardized measure of reading comprehension. Unfortunately, our measure of accumaticity (words-correct-per-minute) is not directly comparable to the six-indicator construct used by Wang and colleagues [106] to arrive at the DTH. Nevertheless, our measure of accumaticity was not predictive of reading comprehension and resulted in a negative, non-significant standardized beta (−0.053; Table 4). This result suggests at least three possible explanations. First, reading fluency in our study sample may have reached the threshold beyond which, as hypothesized by the DTH, it no longer positively predicts reading comprehension. Second, given the very low level of comprehension attainment in our study sample (e.g., the 9th percentile), students may have focused on maintaining their reading rate rather than on constructing an understanding of what they were reading, which could also explain why accumaticity was a negative predictor. Third, it is plausible that students who applied deductive and inductive thinking to the text slowed down, or even stopped, to think about it. In such a case, faster reading would not benefit critical thinking.
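The threshold logic of the DTH can be sketched with a toy example. All numbers below are invented for illustration (the cut point of 50 and the score pairs are hypothetical, not taken from the study); the point is only that the decoding–comprehension correlation can be near zero below the threshold while strong above it.

```python
# Toy illustration of the Decoding Threshold Hypothesis (DTH): below a
# decoding threshold, comprehension stays uniformly low regardless of
# decoding skill; above it, decoding and comprehension covary.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

THRESHOLD = 50  # hypothetical decoding cut point (invented for illustration)

# (decoding, comprehension) pairs: comprehension is flat below the
# threshold and rises with decoding above it.
data = [(30, 10), (35, 12), (40, 9), (45, 11),   # below threshold
        (55, 14), (65, 18), (75, 22), (85, 26)]  # above threshold

below = [(d, c) for d, c in data if d < THRESHOLD]
above = [(d, c) for d, c in data if d >= THRESHOLD]

r_below = pearson_r(*zip(*below))
r_above = pearson_r(*zip(*above))
print(r_below, r_above)  # decoding predicts comprehension only above the threshold
```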
Reading prosody, another indicator of fluent reading that has been found to predict variance in reading comprehension across elementary, middle, and secondary grades, resulted in a mean of 5.30 (1.19), suggesting less than adequate development [94]. While the predictive value shown in Table 4 is positive, it is far from statistically significant. We tentatively suggest that the low attainment of reading prosody in the study sample may contribute in a small way to poor reading comprehension. Alternatively, many students in the study sample may not have understood what they were reading well enough to render an appropriate prosodic interpretation of the text.
Encoding, which measures the ability to apply phoneme–grapheme relationships to correctly spell words, was the second strongest predictor of reading comprehension (β = 0.273). The encoding grand mean was 15.66 (3.25), reflecting development in the derivational constancy stage of developmental spelling and suggesting appropriate control over the sound-to-letter process. Of interest is why encoding would be a significant predictor of reading comprehension. Although decoding and encoding have been called “two sides of a coin” [108] (p. 19), there are fundamental differences that make spelling a word more difficult than correctly pronouncing it in text. When attempting to decode a written word, there is a constraint on the phonemic possibilities that can be applied to the written letter features. For example, if one encounters the word Phoenix in text, of the 43 English phonemes, the number involved in pronouncing the word is just five. However, there are numerous non-standard but plausible ways to spell Phoenix that could lead to the pronunciation of the word (e.g., phenix, fenix, phenicks, etc.). These phoneme-to-grapheme possibilities can cause confusion as the student attempts to write the standard spelling of the word. Students with superior spelling skills have developed a deeper knowledge of word orthography, and likely of word meaning, than their lower-achieving peers [109,110]. That encoding was the second strongest predictor, particularly considering the non-significance of reading fluency, suggests its importance to reading comprehension in this sample of adolescent readers.
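The one-to-many asymmetry of encoding described above can be illustrated with a small sketch. The grapheme options below are an illustrative subset, not an exhaustive linguistic analysis, and the final /ks/ is treated as a single spelling unit for simplicity.

```python
# Sketch of the decoding/encoding asymmetry: each sound unit in "phoenix"
# admits several plausible grapheme spellings, so the number of plausible
# encodings multiplies quickly, while decoding the printed word constrains
# the reader to far fewer choices.
from itertools import product

grapheme_options = {
    "/f/":  ["f", "ph", "ff"],
    "/ee/": ["ee", "oe", "e", "ea"],
    "/n/":  ["n", "nn"],
    "/i/":  ["i", "y"],
    "/ks/": ["x", "cks", "ks"],  # final cluster treated as one unit
}

spellings = ["".join(parts) for parts in product(*grapheme_options.values())]
print(len(spellings))  # 3 * 4 * 2 * 2 * 3 = 144 plausible encodings
print("phenix" in spellings, "fenix" in spellings, "phenicks" in spellings)
```

Even this restricted option set yields 144 candidate spellings, only one of which is the conventional orthography, which is consistent with the claim that encoding demands deeper orthographic knowledge than decoding.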
Listening comprehension, as measured by the OWLS [96], was, by far, the strongest predictor of reading comprehension. Researchers have found listening comprehension to be a predictor of reading comprehension across a variety of age groups [111,112]. Hjetland and colleagues [101] remark that language comprehension is such a strong predictor of reading comprehension that it is “a causal influence” (p. 759). Our results support the role of listening comprehension as a fundamental factor in reading comprehension attainment.
Academic vocabulary can be thought of as a “specialized language” [113] (p. 92) that is often used in academic settings when discussing disciplinary content. Whether involved in individual thinking or communication, academic vocabulary provides the individual with the critical language to engage in the cognitive processing of disciplinary concepts. In the present study, student attainment on the measure of academic vocabulary was at about the 45th percentile. Although it was the weakest of the five statistically significant predictors in this study, we interpret this result as suggesting that an understanding of academic vocabulary aids in the understanding of increasingly complex texts and is a partner with listening comprehension in understanding what is read.

11. Conclusions

Chaffee [114] states that critical thinking is “our active, purposeful, and organized effort to make sense of our world by carefully examining our thinking and the thinking of others in order to clarify and improve our understanding” (p. 29). Research has consistently shown that students in post-secondary education exhibit improved performance when they have sufficient skills in the bridging processes connecting decoding and comprehension, which include critical thinking and meta-cognitive skills [115,116,117]. Unfortunately, research over the past 20+ years shows that students exhibit less willingness to engage in extended reading and have a declining ability to critically extract meaning from what they read [90]. Further, only 28% of students ages 8 to 18 report engagement in daily reading [118]. At the same time, our results provide evidence that critical thinking skills, along with listening comprehension, academic vocabulary, and encoding skills, are processes that contribute to improved reading comprehension in a sample of Indian students. We suggest that encouraging students to engage in consistent reading of high-interest, informational texts to build background knowledge across grades can foster knowledge and motivation to learn through reading. This may help turn the tide towards increased reading engagement, deeper and broader global schema building, and greater academic achievement.

11.1. Limitations

Several limitations of this study should be considered by the reader. While students were proficient English speakers and precautions were taken to ensure understanding of assessment instructions, the possibility exists that some may have misunderstood, thus reducing the validity of the data. Variables that were not collected could also have affected the outcomes of the study. For example, working memory, a variable known to affect executive function, was not assessed. Additionally, no measure of general intelligence was administered, nor was reading motivation measured.

11.2. Future Research

Our results suggest further research into the role of bridging processes connecting decoding and comprehension to reading comprehension is warranted. To begin, additional studies of critical thinking (CT) using measures of deductive and inductive thinking could replicate the present study to determine whether similar effects are found and to gauge their magnitude. Longitudinal studies could provide insight into how CT evolves in students across grades. For example, does the effect of CT on reading comprehension increase as students matriculate across grades? Also, are there additional executive function skills that contribute to comprehension, either independently of or in concert with deductive and inductive thinking?

Author Contributions

Conceptualization, D.P. and W.H.R.; methodology, D.P.; formal analysis, D.P. and L.Z.; writing—original draft preparation, D.P. and W.H.R.; writing—review and editing, D.P., W.H.R. and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Northern Illinois University (protocol code AS23-0013, 22 August 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Organisation for Economic Co-operation and Development. The Future of Education and Skills: Education 2030. 2023. Available online: https://www.oecd.org/education/2030/E2030%20Position%20Paper%20.pdf (accessed on 5 April 2018).
  2. Peng, P.; Kievit, R.A. The Development of Academic Achievement and Cognitive Abilities: A Bidirectional Perspective. Child Dev. Perspect. 2020, 14, 15–20. [Google Scholar] [CrossRef]
  3. Sparks, R.L.; Patton, J.; Murdoch, A. Early reading success and its relationship to reading achievement and reading volume: Replication of ‘10 years later’. Read. Writ. 2013, 27, 189–211. [Google Scholar] [CrossRef]
  4. Kahneman, D. Thinking, Fast and Slow; Farrar, Straus and Giroux: New York, NY, USA, 2011. [Google Scholar]
  5. Evans, J.S.B.T. Dual-Processing Accounts of Reasoning, Judgment, and Social Cognition. Annu. Rev. Psychol. 2008, 59, 255–278. [Google Scholar] [CrossRef]
  6. Huber, C.R.; Kuncel, N.R. Does College Teach Critical Thinking? A Meta-Analysis. Rev. Educ. Res. 2016, 86, 431–468. [Google Scholar] [CrossRef]
  7. Oaksford, M.; Chater, N.; Hahn, U. Human reasoning and argumentation: The probabilistic approach. In Reasoning: Studies of Human Inference and Its Foundations; Adler, J., Rips, L., Eds.; Cambridge University Press: Cambridge, UK, 2008; pp. 383–413. [Google Scholar]
  8. Diamond, A. Want to optimize executive functions and academic outcomes? In Minnesota Symposia on Child Psychology: Developing Cognitive Processes: Mechanisms, Implications, and Interventions; Zelazo, P.D., Sera, M.D., Eds.; John Wiley and Sons: Hoboken, NJ, USA, 2014; Volume 37, pp. 203–230. [Google Scholar] [CrossRef]
  9. Paul, R.W.; Elder, L. Critical Thinking: The Nature of Critical and Creative Thought. J. Dev. Educ. 2006, 30, 2–7. [Google Scholar]
  10. Baddeley, A.D.; Hitch, G.J. Developments in the concept of working memory. Neuropsychology 1994, 8, 485–493. [Google Scholar] [CrossRef]
  11. Smith, E.E.; Jonides, J. Storage and Executive Processes in the Frontal Lobes. Science 1999, 283, 1657–1661. [Google Scholar] [CrossRef] [PubMed]
  12. Zelazo, P.D.; Mϋller, U. Executive function in typical and atypical development. In Blackwell Handbook of Childhood Cognitive Development; Goswami, U., Ed.; Blackwell Publisher: Hoboken, NJ, USA, 2002; pp. 445–469. [Google Scholar]
  13. Lizarraga, M.L.S.d.A.; Baquedano, M.T.S.d.A.; Villanueva, O.A. Critical thinking, executive functions and their potential relationship. Think. Ski. Creativity 2012, 7, 271–279. [Google Scholar] [CrossRef]
  14. Gilbert, S.J.; Burgess, P.W. Executive function. Curr. Biol. 2008, 18, R110–R114. [Google Scholar] [CrossRef]
  15. Arum, R.; Roksa, J. Academically Adrift: Limited Learning on College Campuses; University of Chicago Press: Chicago, IL, USA, 2011. [Google Scholar]
  16. Baron, J.; Granato, L.; Spranca, M.; Teubal, E. Decision making biases in children and early adolescence: Exploratory studies. Merrill-Palmer Q. 1993, 39, 22–46. [Google Scholar]
  17. Erikson, M.G.; Erikson, D. Learning outcomes and critical thinking – good intentions in conflict. Stud. High. Educ. 2019, 44, 2293–2303. [Google Scholar] [CrossRef]
  18. Halpern, D.F. Assessing the Effectiveness of Critical Thinking Instruction. J. Gen. Educ. 2001, 50, 270–286. [Google Scholar] [CrossRef]
  19. Kahneman, D.; Tversky, A. Subjective probability: A judgment of representativeness. Cogn. Psychol. 1972, 3, 430–454. [Google Scholar] [CrossRef]
  20. Kuhn, D. Connecting scientific and informal reasoning. Merrill-Palmer Q. 1993, 39, 74–103. [Google Scholar]
  21. Perkins, D.N. Postprimary education has little impact on informal reasoning. J. Educ. Psychol. 1985, 77, 562–571. [Google Scholar] [CrossRef]
  22. Ennis, R.H. A taxonomy of critical thinking dispositions and abilities. In Teaching Thinking Skills: Theory and Practice; Baron, J., Sternberg, R., Eds.; Freeman: New York, NY, USA, 1987; pp. 9–26. [Google Scholar]
  23. Bensley, D.A. Critical thinking and the rejection of unsubstantiated claims. In Critical Thinking in Psychology, 2nd ed.; Sternberg, R.J., Halpern, D.F., Eds.; Cambridge University Press: Cambridge, UK, 2020; pp. 68–102. [Google Scholar]
  24. Facione, P. Critical Thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction. Research Findings and Recommendations; American Philosophical Association: Newark, DE, USA, 1990; Available online: https://philarchive.org/archive/faccta (accessed on 1 December 2023).
  25. Sternberg, R.J. Thinking Styles; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
  26. Sternberg, R.J. Wisdom, Intelligence, and Creativity Synthesized; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  27. Pithers, R.T.; Soden, R. Critical Thinking in Education: A Review. Educ. Res. 2000, 42, 237–249. [Google Scholar] [CrossRef]
  28. Abrami, P.C.; Bernard, R.M.; Borokhovski, E.; Wadem, A.; Surkes, M.A.; Tamim, R.; Zhang, D. Instructional interventions affecting critical thinking skills and dispositions: A stage 1 meta-analysis. Rev. Educ. Res. 2008, 78, 1102–1134. [Google Scholar] [CrossRef]
  29. Kintsch, W. Comprehension: A Paradigm for Cognition; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  30. McCarthy, K.S.; McNamara, D.S. The multidimensional knowledge in text comprehension framework. Educ. Psychol. 2021, 56, 196–214. [Google Scholar] [CrossRef]
  31. McNamara, D.S.; Magliano, J. Toward a comprehensive model of comprehension. Psychol. Learn. Motiv. 2009, 51, 297–384. [Google Scholar] [CrossRef]
  32. Black, B. Critical Thinking—A Tangible Construct? Res. Matters A Camb. Assess. Publ. 2007, 3, 2–4. Available online: https://www.cambridgeassessment.org.uk/our-research/all-published-resources/research-matters/rm-03/ (accessed on 1 December 2023).
  33. Klaczynski, P.A.; Gordon, H.D.; Fauth, J. Goal-oriented critical reasoning and individual differences in critical reasoning biases. J. Educ. Psychol. 1997, 89, 470–485. [Google Scholar] [CrossRef]
  34. Macpherson, R.; Stanovich, K.E. Cognitive ability, thinking dispositions, and instructional set as predictors of critical thinking. Learn. Individ. Differ. 2007, 17, 115–127. [Google Scholar] [CrossRef]
  35. de Bruin, W.B.; Parker, A.M.; Fischhoff, B. Individual differences in adult decision-making competence. J. Pers. Soc. Psychol. 2007, 92, 938–956. [Google Scholar] [CrossRef] [PubMed]
  36. Halpern, D.F. Thought and Knowledge: An Introduction to Critical Thinking, 5th ed.; Psychology Press: New York, NY, USA, 2014. [Google Scholar]
  37. Davies, M. Critical Thinking and the Disciplines Reconsidered. High. Educ. Res. Dev. 2013, 32, 529–544. [Google Scholar] [CrossRef]
  38. Linn, M.C. Designing the knowledge integration environment. Int. J. Sci. Educ. 2000, 22, 781–796. [Google Scholar] [CrossRef]
  39. Philley, J. Critical thinking concepts. Prof. Safety 2005, 50, 26–32. [Google Scholar]
  40. Haverty, L.A.; Koedinger, K.A.; Klahr, D.; Alibali, M.W. Solving inductive reasoning problems in mathematics. Not-so-trivial pursuit. Cogn. Sci. 2000, 24, 249–298. [Google Scholar] [CrossRef]
  41. Sternberg, R.J.; Sternberg, K. Cognitive Psychology; Wadsworth-Cengage: Boston, MA, USA, 2012. [Google Scholar]
  42. Csapó, B. The Development of Inductive Reasoning: Cross-sectional Assessments in an Educational Context. Int. J. Behav. Dev. 1997, 20, 609–626. [Google Scholar] [CrossRef]
  43. Goswami, U. Inductive and deductive reasoning. In Blackwell Handbook of Childhood Cognitive Development; Goswami, U., Ed.; Blackwell Publisher: Hoboken, NJ, USA, 2002; pp. 282–302. [Google Scholar]
  44. Johnson-Laird, P.N. Deductive reasoning. Annu. Rev. Psychol. 1999, 50, 109–135. [Google Scholar] [CrossRef]
  45. Baron, J. Thinking and Deciding, 3rd ed.; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  46. Sternberg, R.J. Why schools should teach for wisdom: The balance theory of wisdom in educational settings. Educ. Psychol. 2001, 36, 227–245. [Google Scholar] [CrossRef]
  47. West, R.F.; Toplak, M.E.; Stanovich, K.E. Heuristics and biases as measures of critical thinking: Associations with cognitive ability and thinking dispositions. J. Educ. Psychol. 2008, 100, 930–941. [Google Scholar] [CrossRef]
  48. Bassok, M.; Holyoak, K.J. Interdomain transfer between isomorphic topics in algebra and physics. J. Exp. Psychol. Learn. Mem. Cogn. 1989, 15, 153–166. [Google Scholar] [CrossRef]
  49. Bransford, J.D.; Stein, B.S. The Ideal Problem Solver, 2nd ed.; Freeman: New York, NY, USA, 1993. [Google Scholar]
  50. Chi, M.T.H.; Feltovich, P.J.; Glaser, R. Categorization and representation of physics problems by experts and novices. Cogn. Sci. 1981, 5, 121–152. [Google Scholar] [CrossRef]
  51. Gibson, J.J.; Gibson, E.J. Perceptual learning: Differentiation or enrichment? Psychol. Rev. 1955, 62, 32–41. [Google Scholar] [CrossRef] [PubMed]
  52. Greeno, J.G.; Smith, D.R.; Moore, J.L. Transfer of situated learning. In Transfer on Trial: Intelligence, Cognition, and Instruction; Detterman, D.K., Sternberg, R.J., Eds.; Ablex: Norwood, NJ, USA, 1993; pp. 99–167. [Google Scholar]
  53. Marton, F.; Booth, S. Learning and Awareness; Erlbaum: Mahwah, NJ, USA, 1997. [Google Scholar]
  54. National Research Council (NRC); Committee on Developments in the Science of Learning with additional material from the Committee on Learning, Research and Educational Practice; Commission on Behavioral and Social Sciences and Education. How People Learn: Brain, Mind, Experience, and School: Expanded Edition; Bransford, J.D., Brown, A.L., Cocking, R.R., Eds.; National Academy Press: Washington, DC, USA, 2000. [Google Scholar]
  55. Schuyler, D. Cognitive Therapy: A Practical Guide; WW Norton and Company: New York, NY, USA, 2003. [Google Scholar]
  56. Evans, J.S.B.T.; Stanovich, K.W. Dual process theories of higher cognition: Advancing the debate. Perspect. Psychol. Sci. 2013, 8, 223–241. [Google Scholar] [CrossRef]
  57. Stanovich, K.E. What Intelligence Tests Miss: The Psychology of Rational Thought; Yale University Press: New Haven, CT, USA, 2009. [Google Scholar]
  58. Stanovich, K.E. Rationality and the Reflective Mind; Oxford University Press: New York, NY, USA, 2011. [Google Scholar]
  59. Carey, S. Cognitive science and science education. Am. Psychol. 1986, 41, 247–265. [Google Scholar] [CrossRef]
  60. Kardash, C.M.; Scholes, R.J. Effects of pre-existing beliefs, epistemological beliefs, and need for cognition on interpretation of controversial issues. J. Educ. Psychol. 1996, 88, 260–271. [Google Scholar] [CrossRef]
  61. Klaczynski, P.A.; Gordon, D.H. Self-serving influences on adolescents’ evaluations of belief-relevant evidence. J. Exp. Child Psychol. 1996, 62, 317–339. [Google Scholar] [CrossRef] [PubMed]
  62. Stanovich, K.E.; West, R.F. Reasoning independently of prior belief and individual differences in actively open-minded thinking. J. Educ. Psychol. 1997, 89, 342–357. [Google Scholar] [CrossRef]
  63. van den Broek, P.; Lorch, R.F.; Linderholm, T.; Gustafson, M. The effects of readers’ goals on inference generation and memory for texts. Mem. Cogn. 2001, 29, 1081–1087. [Google Scholar] [CrossRef]
  64. Graesser, A.C.; Millis, K.K.; Zwaan, R.A. Discourse comprehension. Annu. Rev. Psychol. 1997, 48, 163–189. [Google Scholar] [CrossRef]
  65. van Dijk, T.A.; Kintsch, W. Strategies of Discourse Comprehension; Academic Press: New York, NY, USA, 1983; pp. 189–221. [Google Scholar]
  66. Baron, J.; Sternberg, R.J. Teaching Thinking Skills: Theory and Practice; Freeman: Dallas, TX, USA, 1987. [Google Scholar]
  67. Sternberg, R.J.; Roediger, H.L., III; Halpern, D.F. (Eds.) Critical Thinking in Psychology; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  68. Perfetti, C.A.; Stafura, J. Word Knowledge in a Theory of Reading Comprehension. Sci. Stud. Read. 2013, 18, 22–37. [Google Scholar] [CrossRef]
  69. Willson, V.L.; Rupley, W.H. A structural equation model for reading comprehension based on background, phonemic, and strategy knowledge. Sci. Stud. Read. 1997, 1, 45–63. [Google Scholar] [CrossRef]
  70. Diakido, I.N.; Ioanno, M.C.; Christodoulou, S.A. Reading argumentative texts: Comprehension and evaluation goals and outcomes. Read. Writ. 2017, 30, 1869–1890. [Google Scholar] [CrossRef]
  71. Duke, N.K.; Cartwright, K.B. The science of reading progresses: Communicating advances beyond the simple view of reading. Read. Res. Q. 2021, 56, S25–S44. [Google Scholar] [CrossRef]
  72. Hoover, W.A.; Gough, P.B. The simple view of reading. Read. Writ. 1990, 2, 127–160. [Google Scholar] [CrossRef]
  73. Kearns, D.M.; Al Ghanem, R. The role of semantic information in children’s word reading: Does meaning affect readers’ ability to say polysyllabic words aloud? J. Educ. Psychol. 2019, 111, 933–956. [Google Scholar] [CrossRef]
  74. Kendeou, P.; Savage, R.; Broek, P.v.D. Revisiting the simple view of reading. Br. J. Educ. Psychol. 2009, 79, 353–370. [Google Scholar] [CrossRef] [PubMed]
  75. Lonigan, C.J.; Burgess, S.R.; Schatschneider, C. Examining the Simple View of Reading with Elementary School Children: Still Simple After All These Years. Remedial Spec. Educ. 2018, 39, 260–273. [Google Scholar] [CrossRef]
  76. Mitchell, A.M.; Brady, S.A. The effect of vocabulary knowledge on novel word identification. Ann. Dyslexia 2013, 63, 201–216. [Google Scholar] [CrossRef] [PubMed]
  77. Coté, N.; Goldman, S.R.; Saul, E.U. Students making sense of informational text: Relations between processing and representation. Discourse Process. 1998, 25, 1–53. [Google Scholar] [CrossRef]
  78. Cain, K.; Oakhill, J.V.; Barnes, M.A.; Bryant, P.E. Comprehension skill, inference-making ability, and their relation to knowledge. Mem. Cogn. 2001, 29, 850–859. [Google Scholar] [CrossRef] [PubMed]
  79. Carretti, B.; Caldarola, N.; Tencatti, C.; Cornoldi, C. Improving reading comprehension in reading and listening settings: The effect of two training programmes focusing on metacognition and working memory. Br. J. Educ. Psychol. 2014, 84, 194–210. [Google Scholar] [CrossRef]
  80. Elbro, C.; Buch-Iversen, I. Activation of background knowledge for inference making: Effects on reading comprehension. Sci. Stud. Read. 2013, 17, 435–452. [Google Scholar] [CrossRef]
  81. Peng, P.; Barnes, M.; Wang, C.; Wang, W.; Li, S.; Swanson, H.L.; Dardick, W.; Tao, S. A meta-analysis on the relation between reading and working memory. Psychol. Bull. 2018, 144, 48–76. [Google Scholar] [CrossRef]
  82. Perfetti, C.A. Reading ability: Lexical quality to comprehension. Sci. Stud. Read. 2007, 11, 357–383. [Google Scholar] [CrossRef]
  83. Pérez, A.I.; Paolieri, D.; Macizo, P.; Bajo, T. The role of working memory in inferential sentence comprehension. Cogn. Process. 2014, 15, 405–413. [Google Scholar] [CrossRef] [PubMed]
  84. Dilip, C. Public–Private Partnership to Meet the Skills Challenges in India; Springer: Berlin/Heidelberg, Germany, 2012; ISBN 978-94-007-5936-7. [Google Scholar]
  85. Banerji, R.; Chavan, M. Improving literacy and math instruction at scale in India’s primary schools: The case of Pratham’s Read India program. J. Educ. Chang. 2016, 17, 453–475. [Google Scholar] [CrossRef]
  86. Ganske, K. Developmental Spelling Analysis: A Diagnostic Measure for Instruction and Research. Ph.D. Thesis, University of Virginia, Charlottesville, VA, USA, 1994, unpublished. [Google Scholar]
  87. Ehri, L.C. How English orthography influences phonological knowledge as children learn to read and spell. In Literacy and Language Analysis; Scales, R.J., Ed.; Erlbaum: Hillsdale, NJ, USA, 1993; pp. 21–43. [Google Scholar]
  88. Ganske, K. Word Journeys: Assessment-Guided Phonics, Spelling, and Vocabulary Instruction, 2nd ed.; Guilford: New York, NY, USA, 2014. [Google Scholar]
  89. Henderson, E.H.; Templeton, S. A developmental perspective of formal spelling instruction through alphabet, pattern, and meaning. Elem. Sch. J. 1986, 86, 314–316. [Google Scholar] [CrossRef]
  90. Paige, D.D.; Smith, G.S. Academic vocabulary and reading fluency: Unlikely bedfellows in the quest for textual meaning. Educ. Sci. 2018, 8, 165. [Google Scholar] [CrossRef]
  91. Coxhead, A. A new academic word list. TESOL Q. 2000, 34, 213. [Google Scholar] [CrossRef]
  92. Paige, D.D.; Rupley, W.S. Revisiting complex text instruction: A study of 11th-grade history students. Psychol. Sch. 2023, 60, 3633–3647. [Google Scholar] [CrossRef]
  93. Zutell, J.; Rasinski, T.V. Training teachers to attend to their students’ oral reading fluency. Theory Into Pract. 1991, 30, 211–217. [Google Scholar] [CrossRef]
  94. Smith, G.S.; Paige, D.D. A Study of Reliability Across Multiple Raters When Using the NAEP and MDFS Rubrics to Measure Oral Reading Fluency. Read. Psychol. 2019, 40, 34–69. [Google Scholar] [CrossRef]
  95. Ennis, R.H.; Millman, J.; Tomko, T.N. Cornell Critical Thinking Test, 5th ed.; The Critical Thinking Company: North Bend, OR, USA, 2005; Available online: www.criticalthinking.com (accessed on 4 December 2023).
  96. Carrow-Woolfolk, E. Oral and Written Language Scales-Second Edition (OWLS-II); Western Psychological Services: Torrance, CA, USA, 2011. [Google Scholar]
  97. Brown, V.L.; Widerholt, J.L.; Hammill, D.D. Test of Reading Comprehension, 4th ed.; Pro-Ed: Austin, TX, USA, 2009. [Google Scholar]
  98. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 2007, 39, 175–191. [Google Scholar] [CrossRef]
  99. Faul, F.; Erdfelder, E.; Buchner, A.; Lang, A.-G. Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behav. Res. Methods 2009, 41, 1149–1160. [Google Scholar] [CrossRef]
  100. Cirino, P.T.; Romain, M.A.; Barth, A.E.; Tolar, T.D.; Fletcher, J.M.; Vaughn, S. Reading skill components and impairments in middle school struggling readers. Read. Writ. 2012, 26, 1059–1086. [Google Scholar] [CrossRef] [PubMed]
  101. Hjetland, H.N.; Lervåg, A.; Lyster, S.-A.H.; Hagtvet, B.E.; Hulme, C.; Melby-Lervåg, M. Pathways to reading comprehension: A longitudinal study from 4 to 9 years of age. J. Educ. Psychol. 2019, 111, 751–763. [Google Scholar] [CrossRef]
  102. Tighe, E.L.; Schatschneider, C. A dominance analysis approach to determining predictor importance in third, seventh, and tenth grade reading comprehension skills. Read. Writ. 2013, 27, 101–127. [Google Scholar] [CrossRef] [PubMed]
  103. Diamond, A. Executive Functions. Annu. Rev. Psychol. 2013, 64, 135–168. [Google Scholar] [CrossRef] [PubMed]
  104. Dawson, P.; Guare, R. Executive Skills in Children and Adolescents; Guilford: New York, NY, USA, 2018. [Google Scholar]
  105. Zelazo, P.D.; Blair, C.B.; Willoughby, M.T. Executive Function: Implications for Education (NCER, 2017-2020); National Center for Education Research, Institute of Education Sciences, U.S. Department of Education: Washington, DC, USA, 2016. [Google Scholar]
  106. Wang, Z.; Sabatini, J.; O’Reilly, T.; Weeks, J. Decoding and reading comprehension: A test of the decoding threshold hypothesis. J. Educ. Psychol. 2019, 111, 387–401. [Google Scholar] [CrossRef]
  107. Hasbrouck, J.; Tindal, G. An Update to Compiled ORF Norms; Technical Report No. 1702; Behavioral Research and Teaching, University of Oregon: Eugene, OR, USA, 2017. [Google Scholar]
  108. Ehri, L. Learning To Read and Learning to Spell: Two Sides of a Coin. Top. Lang. Disord. 2000, 20, 19–36. Available online: http://ovidsp.ovid.com/ovidweb.cgi?T=JS&PAGE=reference&D=ovftd&NEWS=N&AN=00011363-200020030-00005 (accessed on 1 December 2023).
  109. McGuinness, D. Early Reading Instruction: What Science Really Tells Us About How to Teach Reading; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
  110. Perfetti, C. Reading acquisition and beyond: Decoding includes cognition. Am. J. Educ. 1984, 93, 40–60. [Google Scholar] [CrossRef]
  111. Mancilla-Martinez, J.; Kieffer, M.J.; Biancarosa, G.; Christodoulou, J.A.; Snow, C.E. Investigating English comprehension growth in adolescent language minority learners: Some insights from the simple view. Read. Writ. 2011, 24, 339–354. [Google Scholar] [CrossRef]
  112. Storch, S.A.; Whitehurst, G.J. Oral language and code-related precursors to reading: Evidence from a longitudinal structural model. Dev. Psychol. 2002, 38, 934–947. [Google Scholar] [CrossRef]
  113. Nagy, W.; Townsend, D. Words as Tools: Learning Academic Vocabulary as Language Acquisition. Read. Res. Q. 2012, 47, 91–108. [Google Scholar] [CrossRef]
  114. Chaffee, J. Thinking Critically; Houghton Mifflin: Boston, MA, USA, 1988. [Google Scholar]
  115. Cox, S.R.; Friesner, D.; Khayum, M. Do reading skills courses help underprepared readers achieve academic success in college? J. Coll. Read. Learn. 2003, 33, 170–196. [Google Scholar] [CrossRef]
  116. McCabe, R.H. No One to Waste: A Report to Public Decision-Makers and Community College Leaders; Community College Press: Washington, DC, USA, 2000. [Google Scholar]
  117. Oudenhoven, B. Remediation at the community college: Pressing issues, uncertain solutions. New Dir. Community Coll. 2002, 2002, 35–44. [Google Scholar] [CrossRef]
  118. Clark, C.; Teravainen-Goff, A. Children and Young People’s Reading in 2020: Findings from Our Annual Literacy Survey; National Literacy Trust: London, UK, 2019. [Google Scholar]
Figure 1. * p < 0.05; *** p < 0.001. Measures predicting reading comprehension show bivariate correlations, standardized beta coefficients, and total explained variance (R2).
Table 1. Means, standard deviations, and bivariate correlations of the measured variables.
| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. CCTT Induction | 1 | | | | | | | | | | |
| 2. CCTT Deduction | 0.369 ** | 1 | | | | | | | | | |
| 3. CCTT Credibility | 0.265 ** | 0.224 ** | 1 | | | | | | | | |
| 4. CCTT Assumptions | 0.125 ** | 0.284 ** | 0.101 | 1 | | | | | | | |
| 5. CCTT Composite | 0.746 ** | 0.714 ** | 0.650 ** | 0.454 ** | 1 | | | | | | |
| 6. Reading comprehension | 0.436 ** | 0.396 ** | 0.199 ** | 0.182 ** | 0.481 ** | 1 | | | | | |
| 7. Encoding | 0.307 ** | 0.295 ** | 0.163 ** | 0.065 | 0.339 ** | 0.551 ** | 1 | | | | |
| 8. Accumaticity (WCPM) | 0.204 ** | 0.238 ** | 0.154 ** | 0.097 | 0.273 ** | 0.286 ** | 0.509 ** | 1 | | | |
| 9. Prosody | 0.121 ** | 0.215 ** | 0.066 | 0.020 | 0.170 ** | 0.186 ** | 0.247 ** | 0.208 ** | 1 | | |
| 10. Academic vocabulary | 0.322 ** | 0.344 ** | 0.201 ** | 0.109 ** | 0.391 ** | 0.437 ** | 0.501 ** | 0.297 ** | 0.073 | 1 | |
| 11. Listening comprehension | 0.291 ** | 0.213 ** | 0.080 | 0.121 ** | 0.279 ** | 0.642 ** | 0.329 ** | 0.197 ** | 0.132 ** | 0.265 ** | 1 |
| Range (min–max) | 0–19 | 0–14 | 0–17 | 0–14 | 3–53 | 4–26 | 7–16 | 7–16 | 0–10 | 5–43 | 72–108 |
| Mean (sd) | 10.34 (3.19) | 6.79 (2.61) | 10.17 (2.87) | 3.34 (1.69) | 30.54 (6.87) | 15.27 (4.20) | 15.66 (3.25) | 126.62 (26.10) | 11.18 (1.86) | 24.19 (5.82) | 92.96 (6.35) |
| Percentile attainment | na | na | na | na | <4th grade | 9th | DC | 50th | na | 45th | 45th |
Note. ** p < 0.01. CCTT = Cornell Critical Thinking Test; WCPM = words-correct-per-minute. NA = no percentiles available.
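As an illustrative sketch (not the authors' analysis code), the bivariate correlations and p < 0.01 flags reported in Table 1 can be computed with `scipy.stats.pearsonr`. The scores below are hypothetical stand-ins drawn with the means and standard deviations reported in the table, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 360  # matches the study's sample of 360 seventh graders
# Hypothetical stand-in scores for three of the measured variables,
# simulated with the means/SDs reported in Table 1.
data = {
    "induction": rng.normal(10.34, 3.19, n),
    "deduction": rng.normal(6.79, 2.61, n),
    "reading_comprehension": rng.normal(15.27, 4.20, n),
}

names = list(data)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r, p = stats.pearsonr(data[a], data[b])
        flag = " **" if p < 0.01 else ""  # ** marks p < 0.01, as in the table note
        print(f"{a} x {b}: r = {r:.3f}{flag}")
```

Because the simulated variables are drawn independently, the printed correlations will hover near zero; the snippet only demonstrates the mechanics of building such a matrix.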
Table 2. Factor loadings and communalities from the principal component analysis of the critical thinking scale, which comprises induction, deduction, assumptions, and credibility.

| Item | Factor Loading | Communality |
|---|---|---|
| 1 | 0.712 | 0.507 |
| 2 | 0.767 | 0.588 |
| 3 | 0.519 | 0.269 |
| 4 | 0.585 | 0.342 |
| % of variance extracted | 42.64 | |

Note. Item 1 = Induction, Item 2 = Deduction, Item 3 = Assumptions, Item 4 = Credibility.
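A quick numerical check (ours, not from the article): with a single extracted component, each item's communality is simply the square of its factor loading, and the mean communality approximates the percentage of variance extracted. The Table 2 values are consistent to rounding.

```python
# Loadings as reported in Table 2.
loadings = {"Induction": 0.712, "Deduction": 0.767,
            "Assumptions": 0.519, "Credibility": 0.585}

# For a one-component solution, communality = loading squared.
for item, loading in loadings.items():
    print(f"{item}: loading = {loading:.3f}, communality = {loading ** 2:.3f}")

# Mean communality approximates the % of variance extracted (reported: 42.64).
pct = 100 * sum(l ** 2 for l in loadings.values()) / len(loadings)
print(f"% of variance extracted ≈ {pct:.2f}")
```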
Table 3. Multivariate analysis of variance (MANOVA) results for differences in the measured variables by school location.

| Variable | Meghalaya Mean (sd) | Assam Mean (sd) | West Bengal Mean (sd) | Total Mean (sd) |
|---|---|---|---|---|
| Reading Comprehension | 11.85 (3.40) *** | 16.02 (3.97) | 16.51 (3.77) | 15.27 (4.20) |
| Critical Thinking Composite | 25.91 (5.23) *** | 32.33 (6.96) | 31.43 (6.44) | 30.54 (6.87) |
| Accumaticity (WCPM) | 110.33 (22.54) ***¹ | 138.66 (25.32) ***² | 123.88 (26.10) ***³ | 126.62 (26.10) |
| Prosody | 4.80 (1.11) ***⁴ | 5.32 (1.27) | 5.57 (1.07) | 5.30 (1.9) |
| Encoding | 12.31 (3.18) *** | 16.26 (2.41) | 17.02 (2.62) | 15.66 (3.25) |
| Academic Vocabulary | 19.30 (4.85) *** | 25.65 (5.38) | 25.58 (5.22) | 24.19 (5.82) |
| Listening Comprehension | 89.42 (6.54) *** | 94.16 (5.96) | 93.54 (6.34) | 92.86 (6.35) |

Note. WCPM = words-correct-per-minute; *** p < 0.001. ¹ Mean lower than Assam and West Bengal at p < 0.001. ² Mean greater than Meghalaya and West Bengal at p < 0.001. ³ Mean greater than Meghalaya and lower than Assam at p < 0.001. ⁴ Mean lower than Assam at p = 0.005 and lower than West Bengal at p < 0.001.
Table 4. Hierarchical regression models predicting reading comprehension.

| Predictor | B | SE B | β | t | R² | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|---|---|---|
| Model 1 | | | | | 0.594 | | |
|     Constant | −23.691 | 2.144 | | −11.050 *** | | −27.907 | −19.474 |
|     Induction | 0.175 | 0.051 | 0.133 | 3.435 *** | | 0.075 | 0.275 |
|     Credibility | 0.054 | 0.052 | 0.037 | 1.044 | | −0.048 | 0.156 |
|     Deduction | 0.192 | 0.063 | 0.119 | 3.036 ** | | 0.068 | 0.317 |
|     Assumptions | 0.117 | 0.088 | 0.047 | 1.335 | | −0.055 | 0.289 |
|     Accumaticity | −0.009 | 0.006 | −0.053 | −1.324 | | −0.021 | 0.004 |
|     Prosody | 0.149 | 0.134 | 0.042 | 1.109 | | −0.115 | 0.412 |
|     Academic Vocabulary | 0.058 | 0.029 | 0.081 | 1.995 * | | 0.001 | 0.116 |
|     Encoding | 0.366 | 0.059 | 0.283 | 6.198 *** | | 0.250 | 0.482 |
|     Listening Comprehension | 0.302 | 0.024 | 0.457 | 12.441 *** | | 0.254 | 0.350 |
| Model 2 | | | | | 0.593 | | |
|     Constant | −23.369 | 2.079 | | −11.238 *** | | −27.459 | −19.280 |
|     Induction | 0.184 | 0.050 | 0.140 | 3.666 *** | | 0.085 | 0.282 |
|     Deduction | 0.220 | 0.061 | 0.137 | 3.621 *** | | 0.101 | 0.339 |
|     Academic Vocabulary | 0.062 | 0.029 | 0.086 | 2.108 * | | 0.004 | 0.119 |
|     Encoding | 0.353 | 0.052 | 0.273 | 6.738 *** | | 0.250 | 0.456 |
|     Listening Comprehension | 0.304 | 0.024 | 0.460 | 12.540 *** | | 0.256 | 0.352 |

Note. β = standardized beta coefficients. * p < 0.05; ** p < 0.01; *** p < 0.001.
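The pattern in Table 4, where dropping the non-significant predictors costs almost no explained variance (R² falls only from 0.594 to 0.593), can be sketched with simulated data. This is a minimal illustration, not the study's data: the outcome is constructed from five of nine simulated predictors, with weights that loosely echo the Model 2 standardized betas.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 360                       # sample size matching the study
X = rng.normal(size=(n, 9))   # 9 standardized predictors, as in Model 1
# The outcome depends only on the 5 predictors retained in Model 2;
# the weights are illustrative stand-ins for the reported betas.
y = X[:, :5] @ np.array([0.14, 0.14, 0.09, 0.27, 0.46]) + rng.normal(scale=0.8, size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept, via least squares."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

print(f"Full model (9 predictors):    R^2 = {r_squared(X, y):.3f}")
print(f"Trimmed model (5 predictors): R^2 = {r_squared(X[:, :5], y):.3f}")
```

Because the four dropped columns carry no signal, the trimmed model's R² is barely below the full model's, mirroring the negligible change between Model 1 and Model 2.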

Paige, D.; Rupley, W.H.; Ziglari, L. Critical Thinking in Reading Comprehension: Fine Tuning the Simple View of Reading. Educ. Sci. 2024, 14, 225. https://doi.org/10.3390/educsci14030225