Systematic Review

A Systematic Quantitative Literature Review of the Contribution of Phonics to Overall Reading Performance for Primary Students

1 School of Education and Professional Studies, Griffith University, Nathan, QLD 4111, Australia
2 School of Curriculum, Teaching & Inclusive Education, Monash University, Clayton, VIC 3800, Australia
* Author to whom correspondence should be addressed.
Encyclopedia 2026, 6(3), 61; https://doi.org/10.3390/encyclopedia6030061
Submission received: 14 January 2026 / Revised: 20 February 2026 / Accepted: 5 March 2026 / Published: 13 March 2026
(This article belongs to the Section Social Sciences)

Abstract

This Systematic Quantitative Literature Review (SQLR) examines instructional content (the what) and instructional strategies (the how) that contribute to overall reading performance for students in mainstream English-speaking primary classes. Drawing on 163 peer-reviewed studies published over four and a half decades, the authors examine instructional content and strategies aligned with six interrelated foundational elements of reading development: phonological awareness, phonics, vocabulary, fluency, comprehension, and oral language. In response to the proliferation of reading research and the limitations of narrative reviews, the five iterative phases of the SQLR method enable rigorous selection, coding, and synthesis of studies reporting quantitative evidence of the contribution of instructional content and strategies to students’ overall reading performance. The second part of the paper focuses on phonics instruction, an element of the teaching of reading central to ongoing public, educational, and political debate. The authors identify significant variation in terms of the scale, duration, and year levels of the reported research, and foreground the complex roles of teacher professional learning, teachers’ pedagogical decision-making, and implementation fidelity in shaping the research projects. The paper concludes by synthesizing evidence that, while phonics instruction can contribute to overall reading performance, its effects are variable and contingent on specific instructional and contextual conditions.

1. Introduction

Despite widespread interest in improving reading outcomes, there remains limited consensus on which instructional content and instructional strategies effectively support students’ overall reading performance. In response to the overwhelming volume of research on the teaching of reading in the primary school years (students 4 to 12 years of age), and a tendency for traditional narrative literature reviews to selectively cite studies that support particular perspectives [1], the authors seek a more transparent approach through a Systematic Quantitative Literature Review (SQLR). Over the last four and a half decades, the volume of research on instructional content (the “what”) and instructional strategies (the “how”) for the teaching of reading has expanded exponentially, creating both opportunities and challenges for evidence-informed practice. The influential United States (U.S.) National Reading Panel Report [2] opened with the claim that “100,000 research studies on reading have been published since 1966, with perhaps another 15,000 appearing before that time” (p. 1). In January 2024, as the authors prepared this review, a Google Scholar search using the phrase “teaching of reading” yielded approximately 5.29 million results. To refine the scope and reduce the inclusion of gray literature, the authors narrowed the search to “teaching of reading scientific study”, which still returned a staggering 2.27 million results. In light of the scale and diversity of available research, and the potential for implicit bias in how studies are selected and interpreted [1], the authors employ the five-phase SQLR method devised by Pickering et al. [3]. This method enables a structured examination of peer-reviewed research studies that quantitatively report on the contribution of instructional content paired with particular instructional strategies to students’ overall reading performance.
Konza’s [4] synthesis traces the emergence of a “highly credentialed framework” comprising the five interrelated elements of phonological awareness, phonics, vocabulary, fluency, and comprehension [5,6,7]. Konza [4] and Cross [5] offer empirical justification (or what Pearson and Gallagher [1] term “existential proof”), demonstrating how each element, whether interacting synergistically with the others or operating independently, supports students’ overall reading development. Multiple studies reinforce the essential role of phonological awareness (the ability to focus on and manipulate phonemes in spoken words) [8,9] in supporting students’ phonics knowledge (letter-sound knowledge), their ability to decode written words, and the impact of this on reading fluency [10]. Comprehension, both literal (understanding information that is literally on the page) and inferential (integrating textual cues with prior knowledge to understand information that is not directly stated on the page), is also reported as being an essential element of reading development [11].
Konza [4] advances a compelling case for the inclusion of oral language as a sixth element contributing to reading development (see also [12]). Konza [4] draws on notable studies from Nation and Snowling [13] and Snowling [14], which demonstrate that individual differences in oral language proficiency account for significant variations in reading comprehension. A major finding is that oral language proficiency exceeds the predictive power of age, non-verbal reasoning, or non-word decoding on overall reading development [13]. Similarly, Roth et al. [15] track 39 students from Kindergarten to Year Two to identify the predictive relationship between oral language skills assessed in kindergarten and reading performance in Years One and Two. Fielding-Barnsley and Hay [16] report on a study involving 47 Year One students with low levels of language development. These students participate in a structured oral language intervention comprising 16 × 30 min sessions delivered over eight weeks. The sessions, facilitated by trained research assistants working with small groups of four to five students, offer dialog-based instruction. Activities include matching experiences, describing classifications, sharing oral recounts, and reasoning about lived events, all designed to enhance expressive and receptive language skills. The differential contribution of oral language subskills to students’ reading performance across distinct phases of reading development is confirmed [16].
These six elements are embedded to varying degrees in multiple major international reports and reviews on the teaching of reading. For example, the National Reading Panel Report [2] devotes entire chapters to the contribution of alphabetics (made up of phonemic awareness and phonics instruction), comprehension, and fluency to students’ reading development. The report also recognizes the importance of “the alphabetic code that represents oral language in writing” (pp. 2–100) and the need to “guarantee that the vocabulary items are in the oral language of the reader” (pp. 4–25). The United Kingdom’s (U.K.) Independent Review of the Teaching of Early Reading: Final report [17] lists “phonological awareness and letter-sound knowledge as important prerequisites for successful reading development” (p. 38), alongside “developing spoken language, building vocabulary, grammar, (and) comprehension” (p. 16) to support students’ reading development. Fluency is acknowledged as an outcome of effective phonics instruction and language-rich teaching [17]. Recommendation Two from the Australian Government commissioned report, Teaching Reading: National inquiry into the teaching of literacy [18], advises that teachers should provide “systematic, direct and explicit phonics instruction” alongside an “integrated approach to reading that supports the development of oral language, vocabulary, grammar, reading fluency, (and) comprehension…” (p. 14). This report defines letter-sound rules as including “phonemic awareness and phonological knowledge” (p. 31). Recommendation Two from the Canadian Language and Literacy Research Network Summary Report, National Strategy for Early Literacy [19], states that instructional strategies “must include systematic, direct, and explicit instruction, supporting the acquisition of essential alphabetic, code-breaking skills, and the development of strong oral language, vocabulary, grammar, fluency, and reading comprehension skills” (p. 10).
The six elements are used here as instructionally defined categories, rather than as a developmental or cognitive model of reading acquisition, and are limited to domains that have been examined through teacher-delivered classroom interventions with measures of overall reading performance. With this understanding in place, in 2024, the authors implemented the five-phase SQLR method developed by Pickering et al. [3]. Unlike traditional narrative reviews, which often omit a literature review methodology and are therefore considered highly subjective [1], the authors make explicit their replicable process for identifying relevant studies. SQLRs are noted for systematically coding content and generating visual displays and thematic subcategories [20]. The SQLR differs from meta-analyses in both scope and treatment of evidence. While both use systematic methods to identify relevant literature, meta-analyses typically include only experimental studies with statistically analyzed data and apply statistical techniques to aggregate findings [21]. In his examination of research spanning systematic phonics and alternative methods of reading instruction, Bowers [22] cautions that meta-analyses are often built on “mischaracterization” of original research that is then “passed on and exaggerated by many others” who cite only the meta-analysis (pp. 682–683). SQLRs allow for a broader range of study designs and focus on mapping and evaluating evidence from the original publication without combining results through statistical synthesis. SQLRs also illuminate shifts in research focus over time, helping to identify when aspects of reading instruction first gain prominence as areas of inquiry.
By mapping studies across types of research publications, research location, research methods, year of publication, and elements of reading development, the SQLR supports an understanding of how research on the teaching of reading evolves in response to methodological trends, instructional content and strategies, and representations of the teacher as the pedagogical decision maker [23].

2. The SQLR Method

The Pickering et al. [3] SQLR has five overarching phases listed as Phases A–E: Ideation, Searching, Coding, Analysis, and Writing. Each phase contains three distinct yet interconnected steps, numbered sequentially from 1 to 15. These steps are not strictly linear, but rather operate iteratively, allowing movement back and forth across phases as the review evolves. In the sub-sections that follow, each phase is explained.

2.1. Phase A—Ideation

The Ideation Phase covers steps one–three:
  • Step 1—Define topic.
  • Step 2—Formulate research questions.
  • Step 3—Identify keywords, using synonyms and multiple search terms.
From the outset, the authors seek to identify and catalog the published quantitative literature over time that reports on the evidence base of instructional strategies focused on one or more of the six elements important to overall reading performance: phonological awareness, phonics, vocabulary, fluency, comprehension, and oral language [4,5,6,7,8,9,10,11,12,13,14,15,16]. While qualitative studies produce important insights into pedagogical reasoning, classroom processes, and contextual conditions, they typically do not report standardized or otherwise quantifiable measures of overall reading performance. Therefore, the exclusion of qualitative designs reflects the methodological boundaries of the present review rather than a conceptual judgment about the value of qualitative literacy research. The focus is not on the efficacy of the instructional strategies for the six elements per se, but on the contribution of the instructional strategies to students’ overall reading performance. The authors adopt the term reading performance not as a reference to any single instrument or assessment protocol, but rather as a conceptual descriptor of students’ capacity to “understand, use and reflect on written texts to achieve personal goals, develop knowledge and potential, and participate in society” [24]. This framing allows the authors to examine the literature without imposing a uniform metric or outcome measure. Rather than seeking to standardize or aggregate findings as a meta-analysis would, the authors acknowledge the methodological diversity across studies and allow each research team’s operationalization of reading performance to stand on its own terms as explicated in their publications.
In addition, the scope is narrowed to instructional strategies that could be provided by classroom teachers in English-speaking contexts with mainstream students in the primary years. The focus is on what could be feasible, replicable, and relevant to everyday teaching practice rather than what might be possible only in highly resourced contexts. The scope excludes interventions akin to intensive one-on-one support or pull-out programs taught by a specialist educator. This emphasis on ecological validity reflects a commitment to mapping instructional strategies that are not only evidence-based but also grounded in the realities of mainstream classrooms. The overarching research question is: What does the published quantitative literature reveal about the contribution of instructional strategies targeting phonological awareness, phonics, vocabulary, fluency, comprehension, and/or oral language to reading performance in mainstream English-speaking primary classrooms?
The search terms are broader than the six essential elements important to reading development: primary OR elementary AND “reading assessment” OR “reading growth” OR “reading development” OR “reading gain*” OR “reading comprehension” OR “reading pedagogy” OR “reading instruction” OR “teaching of reading” OR “teaching reading”. The asterisk functions as a wildcard character in database searches, directing the system to retrieve terms that share a common root but vary in their endings. For example, by entering “gain*”, the search includes results for both “gain” and “gains”, among other possible suffixes.
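The wildcard behavior described above can be illustrated with a short sketch; `wildcard_to_regex` is a hypothetical helper written for this illustration, not a function of ERIC, Scopus, or any database API:

```python
import re

def wildcard_to_regex(term: str) -> re.Pattern:
    """Translate a database-style wildcard term (e.g., "gain*") into a
    regular expression matching whole words that share the same root."""
    # Escape literal characters, then let "*" stand for any suffix.
    pattern = re.escape(term).replace(r"\*", r"\w*")
    return re.compile(rf"^{pattern}$", re.IGNORECASE)

matcher = wildcard_to_regex("gain*")
words = ["gain", "gains", "gained", "regain"]
print([w for w in words if matcher.match(w)])  # → ['gain', 'gains', 'gained']
```

Note that the pattern is anchored at the start of the word, so "gain*" retrieves "gain", "gains", and "gained" but not "regain", mirroring how the databases treat truncation.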

2.2. Phase B—Searching

The Searching Phase covers steps four–six:
  • Step 4—Identify and search databases, refine search terms based on database functions.
  • Step 5—Read and assess publications, note exclusion reasoning.
  • Step 6—Structure database by defining categories of data and create both broad and tight criteria based on values, an iterative stage of working backwards and forwards to establish rigor.
In designing the search strategy, the authors use the Griffith University Covidence Subscription and the university’s ERIC (Education Resources Information Center) and Scopus databases. ERIC is sponsored by the Institute of Education Sciences (IES) of the U.S. Department of Education. Scopus is Elsevier’s abstract and citation database, the largest database of peer-reviewed literature. The authors use the search terms listed in Step 3 without further refinement. When preparing the SQLR in 2024, no front-end date restrictions are applied in the search strategy, allowing for a comprehensive mapping of the literature across historical and contemporary contexts. In practice, the end-date restriction is set at 31 December 2022, reflecting the reality that, although the review is conducted in 2024, many research outputs from the latter part of 2023 are not yet published or indexed. It is not uncommon for final issues of publications to become searchable only in the subsequent calendar year.
The sourced references are uploaded to Covidence, an online platform designed to support the management of systematic literature reviews. The authors use the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines [25]. The PRISMA checklist is uploaded as Supplementary Materials. As listed in the auto-populated PRISMA flow diagram (Figure 1), 6138 research publications are identified, with 2695 records from ERIC and 3443 records from Scopus. Precisely 1845 duplicates are removed, with 4293 research publications progressing to the screening of titles and abstracts. At least two of the four authors screen each publication for conformity to the inclusion criteria. The authors work independently for the first round of title and abstract screening. No automation tools are used in this process. The authors then meet online over half a dozen hour-long sessions as a group of three or four to discuss points of difference and to continue to develop the following exclusion criteria for the whole set of publications: no full text; gray literature; full-text not in English; meta-analysis or review paper; reading undertaken in the study in a language other than English; no rigorous quantitative data (such as when studies made claims for improvement without providing numerical or statistical evidence); does not report on a primary/elementary cohort; does not focus on a mainstream class; or does not include a reading intervention. A further 3761 research publications are removed when at least two researchers agree on their removal, with 532 research publications progressing to Phase C—Coding.
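As a quick consistency check, the screening arithmetic reported above can be reproduced directly; the variable names below are illustrative only:

```python
# PRISMA flow counts as reported in this review.
eric_records, scopus_records = 2695, 3443
identified = eric_records + scopus_records       # records identified
duplicates_removed = 1845
screened = identified - duplicates_removed       # titles/abstracts screened
excluded_at_screening = 3761
to_full_text = screened - excluded_at_screening  # progress to Phase C—Coding
print(identified, screened, to_full_text)        # → 6138 4293 532
```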

2.3. Phase C—Coding

The Coding Phase covers steps seven–nine:
  • Step 7—Enter the first 10% of papers, detailing changes in inclusion and exclusion criteria if necessary.
  • Step 8—Test and revise categories, including piloting potential charts and data results tables.
  • Step 9—Enter bulk of papers, documenting category changes and criteria alterations.
Phase C—Coding involves a full-text screening of 532 research publications to assess their eligibility for inclusion in the final list. At least two of the four authors of this paper agree to either include or exclude each publication. The authors work independently for the first round of full-text screening, then meet online over a few hour-long sessions as a group of three or four to discuss points of difference. No automation tools are used in this process. A team of four authors is essential for maintaining momentum, given the large volume of publications in Phase C—Coding and the 250+ hours required to complete this work collectively.
Some studies appear, prima facie, to report on instructional strategies that contribute to overall reading performance but do not always meet the inclusion criteria. A further 369 publications meet the exclusion criteria due to no full text (n = 52), gray literature (n = 17), full-text not in English (n = 5), meta-analysis or review paper (n = 29), reading undertaken in the study in a language other than English (n = 51), no rigorous quantitative data (n = 92) such as when studies make claims for improvement without providing numerical or statistical evidence, does not report on a primary/elementary cohort (n = 17), does not focus on a mainstream class (n = 24), or does not include a reading intervention (n = 82). These exclusion criteria and the number of articles are listed on the PRISMA flow diagram (see Figure 1). On the point of a reading intervention, the authors adopt the notion of classroom “pedagogical experiments” [1], that is, research that nudges “a small bit of the educational environment of students a little and then evaluates the effect of the nudge on other features of the environment” (p. 325). Gray literature from organizations that do not primarily focus on publishing (such as think tanks and community groups), along with sources like census reports and economic datasets, is excluded.
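The full-text exclusion counts listed above can likewise be tallied to confirm that they account for the 369 removals; the dictionary below simply restates the reported figures:

```python
# Full-text exclusion reasons and counts, as reported in this review.
exclusions = {
    "no full text": 52,
    "gray literature": 17,
    "full-text not in English": 5,
    "meta-analysis or review paper": 29,
    "reading undertaken in a language other than English": 51,
    "no rigorous quantitative data": 92,
    "not a primary/elementary cohort": 17,
    "not a mainstream class": 24,
    "no reading intervention": 82,
}
total_excluded = sum(exclusions.values())  # publications excluded at full text
included = 532 - total_excluded            # publications progressing to Phase D
print(total_excluded, included)            # → 369 163
```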
Risk of bias is assessed a priori using the Mixed Methods Appraisal Tool (MMAT) [26], version 2018, a single appraisal framework designed for systematic reviews that include quantitative, qualitative, and mixed methods studies. MMAT first checks that a study has a clear research question and suitable data, then applies five methodological criteria specific to the study design to identify potential sources of bias without calculating a quality score. Two authors of this review independently appraise each of the studies, with discrepancies resolved through discussion until consensus is reached. No automation tools are used. Consistent with MMAT guidance, no numerical quality scores are calculated; instead, the appraisal helps to identify potential sources of bias related to study design, data collection, analysis, and integration of methods where applicable. Risk of bias judgments inform the interpretation of findings and the strength of claims about the contribution of instructional strategies targeting phonological awareness, phonics, vocabulary, fluency, comprehension, and/or oral language to reading performance in mainstream English-speaking primary classrooms. In total, 163 publications meet the eligibility criteria and progress to Phase D—Analysis.

2.4. Phase D—Analysis

The Analysis Phase covers steps 10–12:
  • Step 10—Produce and review summary tables, listing percentages of papers, highlighting any significance or outliers in results.
  • Step 11—Draft methods, ensuring descriptions allow for replicability.
  • Step 12—Evaluate key results and conclusions, aiming for both breadth and depth in the topic literature.
Step 10 identifies the spread of publication sources, namely journal articles (n = 97), dissertations (n = 49), and reports (n = 17). The 97 journal articles are from 58 different journals. The Journal of Educational Research (Taylor and Francis) contains the most articles (n = 8), followed by Literacy Research and Instruction (Taylor and Francis) (n = 6) and the Journal of Educational Psychology (American Psychological Association) (n = 5). In terms of location, 135 are from the U.S., 14 from the U.K., seven from Australia, four from New Zealand, and three from Canada (Figure 2). The high representation of U.S. studies reflects the funding of the ERIC online library by the Institute of Education Sciences (IES) within the U.S. Department of Education. This institutional infrastructure not only facilitates the indexing of peer-reviewed journal articles but also supports the open-access dissemination of doctoral dissertations, which account for 48 of the U.S. entries in this review. This geographic distribution has implications for the generalizability of findings, particularly in relation to curriculum frameworks, instructional contexts, and assessment regimes that vary across national settings.
Of the 163 publications, 78 use quantitative methods, while 85 employ mixed methods incorporating quantitative techniques (Figure 3). This approach may be partially attributed to the influence of evidence-based policy frameworks, especially in U.S. contexts, where federal funding often prioritizes experimental and quasi-experimental designs. The prevalence of mixed methods studies indicates recognition of the value of triangulating statistical data with qualitative insights from classroom observations, teacher interviews, and student reflections to capture the contextual nuances and instructional practices shaping reading instruction.
Analysis of publication years across the 163 manuscripts provides empirical evidence that reading researchers have investigated and disseminated research on foundational instructional content and strategies for at least 40 years. Online availability peaks in 2012, 2014, 2017, and 2019 (n = 9 each; Figure 4). Trend inspection suggests a gradual increase from 1983 onward, accelerating through the 1990s as the public internet and the hypertext transfer protocol (HTTP) enabled wider online dissemination of research, and rising again in the years preceding the COVID-19 pandemic. Although the pandemic substantially disrupted student learning and academic research from early 2020 onwards [27], the temporal distribution of publications relevant to the SQLR does not exhibit a consistent year-by-year pattern.
The 163 publications detail interventions focusing on all six of the elements of reading development (see Table 1). As some publications cover more than one of the elements that impact reading development, the number of publications in Table 1 totals more than 163: each study is coded for all elements explicitly targeted by its instructional content or strategies, so a single publication may appear in multiple categories. However, each publication is counted only once in the total dataset of 163 studies. The results highlight an emphasis on comprehension (n = 121), while oral language (n = 5), phonological awareness (n = 9), and phonics (n = 5) are markedly underrepresented. While the authors acknowledge that disaggregating the six elements important to overall reading development offers a reductive portrayal of a complex and integrative skill set, such delineation helps to map the research. The co-occurrence of elements within individual studies also reveals emerging patterns, such as the frequent pairing of fluency and comprehension, which reflects an integration of instructional content.
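The coding rule described above, where a study is tallied under every element it targets but counted only once in the dataset, can be sketched as follows (the study names and element sets below are invented purely for illustration):

```python
from collections import Counter

# Hypothetical coding: each study is tagged with every element its
# intervention explicitly targets.
coded_studies = {
    "Study A": {"fluency", "comprehension"},
    "Study B": {"comprehension"},
    "Study C": {"phonics", "phonological awareness"},
}

# Element tallies can exceed the number of unique studies...
element_counts = Counter(e for elements in coded_studies.values() for e in elements)
print(sum(element_counts.values()))  # → 5 category tallies
# ...while each study is still counted once in the dataset total.
print(len(coded_studies))            # → 3 unique studies
```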
Many of the publications report on the ongoing controversies over instructional content and strategies for teaching students to read. For example, one of the oldest articles in the dataset from Kamm in 1978 [28] opens with a statement, “In recent years controversy over approaches to teaching children to read has existed not only in the professional literature, but in the schools as well” (p. 104). Rightmyer et al. [29] comment on the unclear relationship between students’ reading performance and particular instructional practices. Ehri and Flugman [30] comment on the quality teaching debate, claiming that teaching students to read “requires specialized knowledge and training which many primary grade teachers lack” (p. 425). In contrast, Ferguson et al. [31] offer that there is already a “good deal of convergence of thought and research in the area of literacy instruction”, concluding that “the air of continuing controversy in the area is puzzling” (p. 238).
In countries such as the United Kingdom (U.K.) [32,33], the U.S. [34,35], and Australia [36,37], the element of reading instruction most contested in policy, practice, and media discourse is the teaching of phonics. In the U.S., the national newsroom organization Stateline reports the following: “As reading scores fall, states turn to phonics, but not without a fight” [38]. In the U.K., University College London published an opinion piece, “Why are ministers obsessed with teaching children to read using phonics?” [39]. In Australia, the Australian Broadcasting Corporation reports the following: “Reading wars rage again as Australian Government pushes to introduce phonics test” [40]. In Canada, the Canadian Broadcasting Corporation (CBC) Radio reports on curriculum reforms driven by low literacy outcomes, noting a national shift toward structured literacy and phonics-based instruction [41], alongside findings from the Ontario Human Rights Commission’s Right to Read inquiry, which concludes that Ontario’s public education system fails many students by not adopting evidence-based approaches to reading instruction [42]. Together, these accounts illustrate persistent definitional ambiguity, as stakeholders advance or critique phonics-related practices without consistently specifying instructional models, intensity, or integration with broader instructional content and strategies.
At its most general, phonics instruction helps students understand the relationship between letters and sounds, a skill that underpins accurate word recognition and supports reading fluency [2,17]. Both phonics-first proponents and advocates of balanced approaches that integrate explicit phonics instruction through children’s literature agree that phonics is foundational to early literacy development, supporting word decoding and the development of reading fluency [6]. The second half of this paper focuses on the five studies that report on instructional content (the what) and instructional strategies (the how) for teaching phonics, broadly defined. The authors say “broadly defined” because phonics is often used as an umbrella term encompassing diverse contents and strategies, which are described in more detail in the second half of the paper. The focus on phonics is not because the other five elements are unimportant, but because of the space available for Phase E—Writing; the authors will investigate the other elements in due course.
Although two major, highly cited meta-analyses conducted by Ehri et al. [43] and the authors of the National Reading Panel Report [2] more than two decades ago examined the effects of phonics instruction, neither reports on overall reading performance for instructional content and strategies delivered by classroom teachers in mainstream K–6 settings. This narrower criterion explains why only five studies from four and a half decades meet inclusion for the phonics component of this SQLR. Studies that refer to phonics but do not meet this narrow criterion are excluded at the full-text review; for transparency, a sample of these studies and the specific reasons for their exclusion appear in Appendix A.
Of the 163 research publications, five report findings on instructional content and strategies applicable to mainstream classrooms and examine the contribution of phonics instruction to overall reading performance. All authors read these five papers in detail and agreed to their inclusion in this dataset:
  • Rightmyer, E.C.; McIntyre, E.; Petrosko, J.M. Instruction, development, and achievement of struggling primary grade readers. Reading Research & Instruction 2006, 45(3), 209–241 [29].
  • Ferguson, N.; Currie, L.-A.; Paul, M.; Topping, K. The longitudinal impact of a comprehensive literacy intervention. Educational Research 2011, 53(3), 237–256 [31].
  • Quint, J.C.; Balu, R.; DeLaurentis, M.; Rappaport, S.; Smith, T.J.; Zhu, P. The success for all model of school reform: Early findings from the investing in innovation (i3) scale-up. Manpower Demonstration Research Corporation, 2014 [44].
  • Ehri, L.C.; Flugman, B. Mentoring teachers in systematic phonics instruction: Effectiveness of an intensive year-long program for kindergarten through 3rd grade teachers and their students. Reading and Writing 2018, 31(2), 425–456 [30].
  • Cross, C. The Relationship between the Reading in Motion Program and Early Literacy: A Study on the Effect of the Reading in Motion Program and Reading Fluency in Kindergarten Students. EdD, University of St. Francis, United States, May 2019 [5].
All five papers include instructional components beyond phonics, such as oral reading, writing, guided reading, syntactic cueing, picture/context cues, spelling, syllable/morpheme instruction, vocabulary building, listening comprehension, music, drama, partner reading, and/or story drawing. In the findings and discussion of this paper, the authors cannot attribute overall reading performance gains to phonics alone. Instead, the authors interpret the findings cautiously and treat phonics as one contributing element rather than the sole causal factor. This approach reflects the reality that an intervention isolating phonics from all other literacy experiences is neither feasible nor ecologically valid; classroom instruction and children’s reading development are inherently multi-component.

2.5. Phase E—Writing

The Writing Phase covers steps 13–15:
  • Step 13—draft results and discussion, and present key findings in the Discussion (Section 6).
  • Step 14—draft introduction, abstract, and references.
  • Step 15—revise paper and prepare for submission according to publication guidelines.
To prepare for Phase E—Writing, the authors use a sequential, hierarchical analysis of nested research questions (RQs), guided by a progressive focusing framework. This approach draws on Stake’s notion of progressive focusing, as articulated by Sinkovics and Alfoldi [45], whereby an initially broad inquiry is progressively refined to deepen interpretive precision in research design, intervention design, and evidence measures. The responses to the research questions report descriptive patterns identified across the five studies; interpretive commentary is reserved for the Discussion (Section 6). The RQs are as follows:
  • RQ1: How are the studies designed in terms of participants, instructional duration, and targeted year levels?
  • RQ2: How are the interventions designed in terms of teacher professional learning, instructional content, and strategies, and teachers’ pedagogical decision-making?
  • RQ3: What is collected as evidence of instructional effectiveness and intervention fidelity, and what are the findings?

3. RQ1—Research Design: Participants, Instructional Duration, and Year Levels

The responses to RQ1, “How are the studies designed in terms of participants, instructional duration, and targeted year levels?”, are in Table 2. Such detailing is not always straightforward, as some studies have a wider purview than the narrow area of interest. Quint et al. [44] examine a multi-year whole-school initiative (Kindergarten–Grade 5). Because the current analysis focuses on the initial Kindergarten cohort through to the end of Grade 1, the authors disregard the intervention for older students. Although Ehri and Flugman [30] focus on kindergarten to Grade 3 students, only Grades 1–3 provide overall reading performance measures; analysis focuses on these grades. Rightmyer et al. [29] examine six reading programs using a range of instructional strategies, including whole-class instruction with and without differentiation for focus students and teacher-led targeted instruction. All six programs remain in scope, as instruction in each occurs under the classroom teacher’s direction. Across the five studies, the number of students and teachers in each condition is not always clearly specified, and student and teacher attrition often goes unstated. The terminology of the researchers is used to describe the year levels, such as “Primary”, “Year”, or “Grade”. As such, Table 2 rests on what the authors could extract from the five studies, and its completeness should be interpreted accordingly.
Variation occurs in the profiles of student and teacher participants across the five studies. In relative terms, Cross’s [5] Doctor of Education research is a small-scale study of 32 students and two teachers over one year. This design allows for detailed reporting of participant characteristics and instructional conditions. Two studies are medium-scale: Rightmyer et al. [29], with 117 students and 42 teachers across six intervention programs of two years each; and Ferguson et al. [31], with 415 students and 80 teachers over two years, plus an additional two-year follow-up. Two studies are large-scale: Ehri and Flugman [30], with 1336 students and 69 teachers over one year; and Quint et al. [44], with 2147 students and an undisclosed number of teachers over one year.
The year levels also vary, including a Kindergarten focus (Cross, [5]), and Kindergarten plus Year 1 (Quint et al. [44]), where foundational decoding skills and early comprehension skills are introduced and assessed. Other studies span the early years of primary school, specifically, Years 1, 2, and 3 (Ehri and Flugman, [30]), and Years 1 and 2 and Years 2 and 3 (Rightmyer et al. [29]). This highlights ongoing phonics instruction beyond initial acquisition stages. Ferguson et al. [31] cover the widest span, starting with Years 1 and 2 intervention and a follow-up reading assessment in Years 3 and 4. Collectively, this variation in year level contributes evidence at different points along the reading development continuum, shaping both the kinds of assessments and the conclusions about phonics instruction over time.

4. RQ2—Intervention Design: Professional Learning, Instructional Content and Strategies, and Pedagogical Decision-Making

The responses to RQ2, “How are the interventions designed in terms of teacher professional learning, instructional content and strategies, and teachers’ pedagogical decision-making?”, are in Table 3. The teacher professional learning column summarizes teacher time commitments and professional learning content. In some cases, this information lacks specificity. The next column reports instructional content and strategies for teaching phonics, using original author terminology to maximize reporting fidelity. All studies address this point, with Rightmyer et al. [29] describing six separate interventions. The final column shows how teachers make teaching decisions, specifically whether they follow a script or adapt lessons to meet students’ needs.
Reporting of teacher professional learning varies across the five studies. Professional learning is either absent or described in general terms. The teachers in Rightmyer et al. [29] are recommended by the school principal as being “particularly successful at implementing the instructional model for at least a year” (p. 214). However, the study follows the students from the first year to the end of the second year, where teachers are included by convenience rather than by recommendation. Quint et al. [44], Ehri and Flugman [30], and Cross [5] mention professional learning sessions, but the content detail is unspecified. Ferguson et al. [31] is the only study that provides substantial details about the content of teacher professional learning. A further consideration is the scale of teacher professional learning, with Ehri and Flugman [30] documenting a 45 h summer intensive and 90 h of in-school coaching.
Four of the five studies explicitly characterize the phonics component as systematic, and although they do not formally define it as such, the instructional content and strategies align with the National Reading Panel Report [2], which describes systematic phonics as the planned, sequential introduction of phonics elements accompanied by instruction and practice. Quint et al. [44], the sole exception in terminology, similarly emphasize phonics through scripted lessons that specify instructional sequences, activities, and teacher language. The National Reading Panel Report [2] differentiates among three common approaches to systematic phonics instruction. Synthetic phonics teaches students to “convert letters into sounds or phonemes and then blend the sounds to form recognisable words” (pp. 2–89). Analytic phonics avoids students having to “pronounce sounds in isolation to figure out words”, rather students learn “to analyze letter–sound relations once the word is identified” (pp. 2–89). Phonics-in-context teaches students to use “sound–letter correspondences along with context cues to identify unfamiliar words they encounter in text” (pp. 2–89).
The five studies cannot be neatly categorized as synthetic or analytic phonics or phonics-in-context, reflecting a well-recognized classification challenge in phonics research [46,47]. Only Ehri and Flugman [30] explicitly adopt the nomenclature of synthetic phonics. This ambiguity is evident in Rightmyer et al. [29], where Programs 1, 3, 5, and 6 refer to “skills”, and Programs 2 and 4 refer to “word work”, without specifying if instruction is grounded in synthetic or analytic phonics. Across all six programs, phonics instruction is intentional, observable, and embedded within reading instruction more generally, suggesting alignment with a hybrid approach. In the absence of dedicated teacher professional learning, program enactment relies heavily on individual teacher interpretation, leading to variability in implementation across classrooms. Ferguson et al. [31] describe explicit and systematic phonics instruction embedded within a “wider literacy curriculum” (p. 241). Quint et al. [44] do not specify the phonics approach but do document a meaning-oriented literacy framework. Cross [5] likewise demonstrates hallmarks of phonics-in-context, integrating instruction in sound–symbol relationships into authentic, arts-based language and reading experiences; although purposeful and highly contextual, the phonics instruction is less systematically sequenced.
Across the five studies, teachers’ pedagogical decisions range from constrained enactment within highly scripted programs to substantial professional discretion over instructional content, strategies, and/or emphases. In prescriptive and scripted contexts (Ehri and Flugman [30]; Quint et al. [44]), teachers are still exercising agency through interpretive enactment. Rightmyer et al. [29] demonstrate that across six programs, including those characterized by scripted lessons and fixed instructional sequences, teachers consistently re-orientate instruction along a skills–meaning continuum, adopting skills-focused, meaning-focused, or balanced pedagogical approaches in ways that diverge from program intent. Survey data from Quint et al. [44] underscore this dynamic, revealing that many teachers modify aspects of scripted programs or perceive them as excessively rigid. In contrast, studies affording greater instructional flexibility, such as Ferguson et al. [31] and Cross [5], position teachers as active decision makers who adapt instruction to students through integration with existing reading schemes or through deliberate selection of instructional content and strategies within stable organizational routines.

5. RQ3—Evidence of Instructional Effectiveness and Intervention Fidelity and Findings

The responses to RQ3, “What is collected as evidence of instructional effectiveness and intervention fidelity, and what are the findings?”, are in Table 4. The focus is on the claims of evidence, intervention fidelity, and each study’s conclusions about the contribution of phonics to overall reading performance. Consistent with this focus, the authors do not report on assessments of literacy subskills such as phonemic awareness, sight-word recognition, or spelling proficiency. Intervention fidelity is a critical consideration when interpreting claims of effectiveness in reading intervention research, as variation in the extent to which programs are implemented as designed can substantially shape outcomes. When discussing fidelity, some studies focus on adherence to prescribed content, sequence, or routines, while others foreground consistency of instructional activities across classrooms or improvement in teachers’ pedagogical enactment over time. The five studies report a range of measures and findings for students’ overall reading performance. Performance reflects statistically significant changes on formal assessments at pre- and post-intervention points or is established through comparative analyses between intervention and control groups.
Across the five studies, instructional effectiveness shows in assessments emphasizing either reading comprehension or broad reading performance. While the specific instruments vary, all studies privilege standardized or structured comprehension outcomes, thereby aligning measurement with meaning-based constructs of reading development. However, the diversity of assessment instruments introduces construct and comparability issues, yielding non-isomorphic estimates of growth.
In highly prescriptive contexts, implementation fidelity is often inconsistent. Rightmyer et al. [29] report high fidelity only for one of six programs. Ferguson et al. [31] note that the intervention is “generally being implemented” (p. 240). Quint et al. [44] describe fidelity as “adequate” (p. 8) while noting that teachers offer critiques of pacing and grouping practices. Ehri and Flugman [30] observe that most teachers achieve the highest attitude ratings by the end of the intervention, yet concede that second-grade teachers are “less likely to teach the program on days when the mentor is not present” (p. 448). In studies by Rightmyer et al. [29] and Cross [5], fidelity is understood not merely as adherence, but as responsive expertise within principled boundaries.
Across the five studies, outcome patterns confirm that students’ reading performances are not simply a function of phonics presence, but of instructional balance, implementation conditions, and time. Rightmyer et al. [29] show that, for Group One students in the first year, no single approach to phonics yields an advantage, although some approaches may constrain later fluency and comprehension in the second year; instructional models integrating phonics with fluency and comprehension produce comparable Year 1 outcomes and superior fluency–comprehension profiles by Year 2. However, for Group Two students, who participate in an intervention for two years over Years One and Two, no significant differences are identified across programs of instruction. A time-dependent pattern also shows in Ferguson et al. [31], where phonics is explicitly embedded within a broader literacy curriculum. Quint et al. [44] report null effects on passage comprehension despite acceptable fidelity of scripted lessons and prescriptive pacing for phonics instruction, indicating that program rigidity may attenuate impact even under nominally faithful implementation. Ehri and Flugman [30] report substantial pre-to-post-intervention gains in reading comprehension among students beginning with a low baseline performance, although these gains do not close the gap to year-level expectations. Questions surrounding intervention fidelity make it difficult to disentangle the potential effects of consistently enacted instruction from teachers’ discretionary emphasis on other dimensions of literacy. Cross [5] reports moderate comprehension gains alongside sustained professional learning that appears to stabilize enactment.

6. Discussion

At the outset, this SQLR seeks to identify empirical studies from mainstream English-speaking primary classrooms that examine instructional content and strategies addressing one or more of the six elements of reading development and their contribution to students’ overall reading performance. The novelty of this SQLR lies in its systematic narrowing of a large and heterogeneous evidence base to delineate those studies in which instructional content and strategies in phonological awareness, phonics, vocabulary, fluency, comprehension, and oral language could be quantitatively linked to outcomes in reading performance. Of the 163 publications initially identified as being of interest, five quantitatively detail the contribution of phonics instruction to overall reading performance, which forms the focus of the second half of the paper. Future SQLR publications will continue to report on the remaining five elements of reading development and their contribution to students’ overall reading performance. The task of this Discussion (Section 6) is not to adjudicate winners among branded or commercial programs, but to clarify conditions for rigorous research and effective phonics instruction.

6.1. Research Design

In terms of research design, sample sizes and site counts vary considerably across the five studies that focus on the contribution of phonics instruction to overall reading performance. Small-scale investigations offer granular insight into instructional processes, while large-scale trials provide policy-relevant signals. This polarity underscores the need for coordinated research programs that stack evidence across both scales to strengthen causal claims and translational guidance. The wide span of year levels across the five studies highlights the developmental layering of reading. This breadth reflects both the persistence of decoding challenges for some learners and the necessity of revisiting phonics instruction as texts become more complex, particularly for students whose early uptake of reading was uneven. The claim is that the heterogeneity in findings is more plausibly attributable to differences in instructional balance, length of the focused teaching, and supports for the teacher than to the orientation of phonics instruction itself. This reality supports differentiated approaches rather than a one-size-fits-all model of instruction. Intervention fidelity is conceptualized in markedly different ways, ranging from adherence to prescribed content, to enactment quality, to consistency across classrooms. Rightmyer et al. [29] offer the most differentiated account, demonstrating that fidelity varies not only across programs but also across dimensions of practice, such as the amount taught, instructional emphasis, scripting, and shared routines. Other studies rely on threshold or snapshot indicators, which, while efficient for large-scale implementation, risk obscuring meaningful within-program and within-school heterogeneity. One model begins to address enactment quality while also exposing the fragility of implementation when instructional support is withdrawn.
Another approach adopts an implementation science orientation in which fidelity is treated as dynamic and improvable through feedback, coaching, and iterative support. Taken together, the evidence suggests that while program design strongly shapes the boundaries of instruction, pedagogical decision-making persists across contexts, often manifesting not in what teachers teach, but in how they emphasize and integrate skills and meaning. These approaches suggest that implementation fidelity is not a fixed property but a multidimensional construct that shapes, and potentially constrains, cross-study comparisons. The lack of specificity in reporting on the duration, content focus, and coaching intensity of teacher professional learning hampers meaningful cross-study comparison. Yet articulating the depth and nature of teacher professional learning remains crucial for interpreting outcomes and guiding implementation at scale, as emphasized by Dilgard et al. [48] and Scull and Lyons [49].

6.2. Instructional Content and Strategies

When implemented with fidelity, all systematic phonics approaches, whether synthetic, analytic, or embedded in context, can positively impact students’ overall reading achievement. Rightmyer et al. [29] provide the clearest signal, commenting that “no instructional model proved more effective for phonics learning of first-grade struggling readers than any other model” (p. 221). Another study finds that kindergarten students in a cooperative learning intervention do not outperform those in a control group on comprehension measures. These findings suggest that balanced, coherent programs and multi-year trajectories matter; that coaching and mentoring can lift enactment quality but do not, by themselves, guarantee grade-level attainment for students with a lower-than-expected starting point; and that rigid pacing or grouping can mute effects for certain subgroups even under acceptable adherence to the intervention design. Explicit instruction in phonics paired with opportunities to read increasingly complex texts and engage in meaning-making discussions supports transfer from what Paris [50] terms the constrained skill of phonics knowledge to the unconstrained skill of reading comprehension. Rightmyer et al. [29] argue that “students need strong phonics instruction whilst learning to read” and that extended reading time with engaging pedagogy provides traction for later fluency and comprehension (p. 229). Evidence of this trajectory accrues gradually; across the studies in this review, gains appear after one year, two years, and three to four years. Across the five studies, teachers’ pedagogical decision-making shapes outcomes even under prescriptive designs. One study documents variable orientations, such as skills-focused, meaning-focused, and balanced approaches across teachers delivering the same program. Another study highlights teacher responses to program rigidity.
When pacing and grouping are rigid, teachers’ capacity to tailor instruction to high-functioning and struggling students is constrained, and null effects become more likely. Rightmyer et al. [29] add candidly that “the work is not easy, and it takes educated teachers to do it well” (p. 229). These observations resonate with Vaughn et al. [51] and Exley et al. [23], who conceptualize adaptive teaching as knowledgeable, in-the-moment decision-making that incorporates students’ backgrounds, instructional needs, and cultures into the literacy curriculum.

6.3. Implications for Practice, Policy, and Future Research

The findings of this SQLR have implications for classroom practice, policy discourse, and future research. For practice, the synthesis indicates that explicit phonics instruction is effective when implemented as part of a broader, balanced literacy approach and supported by sustained teacher professional learning. Program rigidity and limited opportunities for pedagogical adaptation appear to constrain impact for subgroups of students in mainstream classrooms.
For policy, the results caution against treating explicit phonics instruction as a standalone solution to reading performance. Evidence from this review supports policy approaches that foreground instructional coherence, implementation support, and contextual responsiveness for subgroups of students rather than narrow compliance measures.
For future research, there is a need for studies that report outcomes with greater transparency, including implementation fidelity measures and teacher and student attrition data. Longer-term and multi-year designs will be particularly important for capturing the delayed or cumulative effects of explicit phonics instruction on reading comprehension. Coordinated research programs that align design, outcome measurement, and reporting would strengthen the field’s capacity for quantitative synthesis and increase the relatively small number of phonics-focused studies that meet the stringent inclusion criteria of this SQLR.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/encyclopedia6030061/s1, PRISMA checklist.

Author Contributions

Conceptualization, B.E., K.Z.B., and D.H.H.; methodology, B.E., K.Z.B., and D.H.H.; software, D.H.H.; validation, B.E., K.Z.B., D.H.H., and S.C.; formal analysis, B.E., K.Z.B., D.H.H., and S.C.; data curation, D.H.H.; writing—original draft preparation, B.E., K.Z.B., and D.H.H.; writing—review and editing, B.E., K.Z.B., and D.H.H.; visualization, S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript.
SQLR: Systematic Quantitative Literature Review
U.S.: United States
NICHD: National Institute of Child Health and Human Development
U.K.: United Kingdom
EAL/D: English as an Additional Language/Dialect
ELL: English Language Learners
EFL: English as a Foreign Language
L2: Language 2
IES: Institute of Education Sciences
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
HTTP: Hypertext Transfer Protocol
GRT: Group Reading Test
NFER: National Foundation for Educational Research
GMRT: Gates–MacGinitie Reading Test
NWEA MAP: Northwest Evaluation Association Measures of Academic Progress
OECD: Organisation for Economic Co-operation and Development

Appendix A

Table A1. Studies meeting initial inclusion criteria but excluded after full-text screening. Each entry lists the citation, why the study looked eligible, and the primary reason for exclusion (mapped to the SQLR criteria).
  • Kamm (1978) [28]. Why the study looked eligible: district-wide elementary implementation of a skills-centred reading program with standardized outcomes; large mainstream sample; teacher-delivered; reports gains on reading subtests. Primary reason for exclusion: does not isolate phonics as the instructional content/strategy; the multi-component skills management approach makes the specific contribution of phonics to overall reading performance non-attributable.
  • Watson & Johnston (1998) [52]. Why the study looked eligible: synthetic phonics in early primary; reports strong gains, including “reading age”; mainstream year-level focus; quantitative outcomes. Primary reason for exclusion: across phases, outcomes emphasize decoding/spelling/phonemic awareness; evidence for overall reading performance is limited and mixed, and some instruction occurred outside typical class routines.
  • Pernai et al. (2000) [53]. Why the study looked eligible: Grade-1 “balanced” program adding phonics to a literature-rich curriculum; pre/post gains reported; teacher-delivered, mainstream, phonics present. Primary reason for exclusion: primary outcomes are letter identification, letter–sound knowledge, and pre-primer word lists, i.e., subskills rather than overall reading performance; additional reading-specialist support confounds attribution.
  • Bowens (2013) [54]. Why the study looked eligible: small-n repeated-measures study; sight-word vocabulary gains using a word-manipulation strategy; teacher-delivered classroom work; quantitative pre/post. Primary reason for exclusion: outcomes limited to sight-word vocabulary/word-reading fluency; no measure of overall reading performance (e.g., text-level comprehension).
  • Devonshire et al. (2013) [55]. Why the study looked eligible: randomized cross-over comparing multi-level orthographic instruction (morphology + phonology + etymology) with phonics; significant gains in word reading and spelling; mainstream classes; robust design; teacher-delivered. Primary reason for exclusion: primary outcomes are word-level (Schonell word reading, spelling); no text-level “overall reading performance” measure (e.g., standardized comprehension).
  • Piquette et al. (2014) [56]. Why the study looked eligible: web-based, teacher-delivered K–1 program; significant gains largely in letter–sound knowledge; mixed/non-significant results on several standardized outcomes; classroom-embedded, randomized, quantitative. Primary reason for exclusion: not a phonics-specific instructional study but a multi-component, technology-mediated literacy suite; significant effects concentrate on foundational subskills; no measure of standardized overall reading performance.
  • Strong (2020) [57]. Why the study looked eligible: cluster RCT comparing text-structure instruction vs. comprehension strategies; effects on text-structure awareness, organizer use, and informational writing; mainstream, teacher-delivered, quantitative, with reading-related outcomes. Primary reason for exclusion: outside the phonics domain (focus is discourse-level text structure, not alphabetics/phonics).

References

  1. Pearson, P.D.; Gallagher, M.C. The Instruction of Reading Comprehension. Contemp. Educ. Psychol. 1983, 8, 317–344. [Google Scholar] [CrossRef]
  2. National Institute of Child Health and Development (NICHD). Teaching Children to Read: An Evidence-Based Assessment of the Scientific Research Literature on Reading and Its Implication for Reading Instruction; NIH Publication No. 00-4769; National Reading Panel: Washington, DC, USA, 2000. Available online: https://www.nichd.nih.gov/publications/pubs/nrp/pages/smallbook.aspx (accessed on 20 February 2026).
  3. Pickering, C.; Johnson, M.; Byrne, J. Using systematic quantitative literature reviews for urban analysis. In Methods in Urban Analysis; Baum, S., Ed.; Springer: Berlin/Heidelberg, Germany, 2021; pp. 29–50. [Google Scholar] [CrossRef]
  4. Konza, D. Teaching Reading: Why the “Fab Five” should be the “Big Six”. Aust. J. Teach. Educ. 2014, 39, 10. [Google Scholar] [CrossRef]
  5. Cross, C. The Relationship Between the Reading in Motion Program and Early Literacy: A Study on the Effect of the Reading in Motion Program and Reading Fluency in Kindergarten Students. Ed.D. Thesis, University of St. Francis, Joliet, IL, USA, May 2019. [Google Scholar]
  6. Paige, D.D.; Young, C.; Rasinski, T.V.; Rupley, W.H.; Nichols, W.D.; Valerio, M. Teaching reading is more than a science: It’s also an art. Read. Res. Q. 2021, 56, S339–S350. [Google Scholar] [CrossRef]
  7. Tindall, E.; Nisbet, D. Exploring the essential components of reading. J. Adult Educ. Inf. Serv. 2010, 39, 1–9. [Google Scholar]
  8. Carson, K.L.; Gillon, G.T.; Boustead, T.M. Classroom phonological awareness instruction and literacy outcomes in the first year of school. Lang. Speech Hear. Serv. Sch. 2013, 44, 147–160. [Google Scholar] [CrossRef]
  9. Khan, M.; Khan, R. Phonological awareness and phonics instruction: Inclusive practice that benefits all kinds of learners. Asia Pac. J. Dev. Differ. 2021, 8, 173–185. [Google Scholar] [CrossRef]
  10. Eldredge, J.L. Foundations of Fluency: An Exploration. Read. Psychol. 2005, 26, 161–181. [Google Scholar] [CrossRef]
  11. Kendeou, P.; McMaster, K.L.; Christ, T.J. Reading Comprehension: Core Components and Processes. Policy Insights Behav. Brain Sci. 2016, 3, 62–69. [Google Scholar] [CrossRef]
  12. Castles, A.; Rastle, K.; Nation, K. Ending the reading wars: Reading acquisition from novice to expert. Psychol. Sci. Public Interest 2018, 19, 5–51. [Google Scholar] [CrossRef]
  13. Nation, K.; Snowling, M.J. Beyond phonological skills: Broader language skills contribute to the development of reading. J. Res. Read. 2004, 27, 342–356. [Google Scholar] [CrossRef]
  14. Snowling, M. Literacy outcomes for children with oral language impairments: Developmental interactions between language skills and learning to read. In The Connection Between Language and Reading Disabilities; Catts, H.W., Kamhi, A.G., Eds.; Psychology Press: Hove, UK, 2005; pp. 55–75. [Google Scholar] [CrossRef]
  15. Roth, F.P.; Speece, D.L.; Cooper, D.H. A longitudinal analysis of the connection between oral language and early reading. J. Educ. Res. 2002, 95, 259–272. [Google Scholar] [CrossRef]
  16. Fielding-Barnsley, R.; Hay, I. Comparative effectiveness of phonological awareness and oral language intervention for children with low emergent literacy skills. Aust. J. Lang. Lit. 2012, 35, 271–286. [Google Scholar] [CrossRef]
  17. Rose, J. Independent Review of the Teaching of Early Reading: Final Report; Department for Education & Skills: London, UK, 2006; Available online: http://dera.ioe.ac.uk/id/eprint/5551/2/report.pdf (accessed on 20 February 2026).
  18. Rowe, K.; National Inquiry into the Teaching of Literacy. Teaching Reading: Report and Recommendations; Department of Education, Science and Training: Canberra, Australia, 2005; Available online: https://research.acer.edu.au/tll_misc/5/ (accessed on 20 February 2026).
  19. Jamieson, D.G. National Strategy for Early Literacy: Summary Report 2009; Canadian Language and Literacy Research Network: London, ON, Canada, 2009; Available online: https://www.strongstart.ca/wp-content/uploads/National-Strategy-for-Early-Literacy.pdf (accessed on 20 February 2026).
  20. Li, Y.; Zhang, S. Applied Research Methods in Urban and Regional Planning; Springer: Cham, Switzerland, 2022. [Google Scholar] [CrossRef]
  21. Silverman, R.D.; Johnson, E.; Keane, K.; Khanna, S. Beyond decoding: A meta-analysis of the effects of language comprehension interventions on K-5 students’ language and literacy outcomes. Read. Res. Q. 2020, 55, S207–S233. [Google Scholar] [CrossRef]
  22. Bowers, J.S. Reconsidering the evidence that systematic phonics is more effective than alternative methods of reading instruction. Educ. Psychol. Rev. 2020, 32, 681–705. [Google Scholar] [CrossRef]
  23. Exley, B.; Hoyte, F.; Singh, P. When the curriculum demands personalization: Adaptive professionalism, pre-packaged plans & the teaching of phonics & spelling. Aust. J. Lang. Lit. 2025, 48, 161–174. [Google Scholar] [CrossRef]
24. Organisation for Economic Co-operation and Development (OECD). Reading Performance (PISA). 2023. Available online: https://www.oecd.org/en/data/indicators/reading-performance-pisa.html (accessed on 20 February 2026).
25. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71.
  26. Hong, Q.N.; Pluye, P.; Fàbregues, S.; Bartlett, G.; Boardman, F.; Cargo, M.; Dagenais, P.; Gagnon, M.-P.; Griffiths, F.; Nicolau, B.; et al. Mixed Methods Appraisal Tool (MMAT), Version 2018; McGill University: Montreal, QC, Canada, 2018; Available online: http://mixedmethodsappraisaltoolpublic.pbworks.com (accessed on 20 February 2026).
27. Pendergast, D.; Sammel, A.; Rowan, L.; O’Brien, M.; McCann, T.; Kanasa, H.; Geelan, D.; Exley, B.; Dennett, C.; Alhadad, S. Spaces to care & places to share: Fostering a sense of belonging during the global pandemic through digitally mediated activity. In Practising Compassion in Higher Education; Lemon, N., Harju-Luukkainen, H., Garvis, S., Eds.; Routledge: London, UK, 2023; pp. 120–147.
28. Kamm, K. A five-year study of the effects of a skill-centered approach to the teaching of reading. J. Educ. Res. 1978, 72, 104–112.
29. Rightmyer, E.C.; McIntyre, E.; Petrosko, J.M. Instruction, development, and achievement of struggling primary grade readers. Read. Res. Instr. 2006, 45, 209–241.
30. Ehri, L.C.; Flugman, B. Mentoring teachers in systematic phonics instruction: Effectiveness of an intensive year-long program for Kindergarten through 3rd grade teachers and their students. Read. Writ. 2018, 31, 425–456.
31. Ferguson, N.; Currie, L.-A.; Paul, M.; Topping, K. The longitudinal impact of a comprehensive literacy intervention. Educ. Res. 2011, 53, 237–256.
32. Clark, M.M. Learning to Be Literate: Insights from Research, Policy and Practice; Routledge: London, UK, 2016.
33. Wyse, D.; Bradbury, A. Reading wars or reading reconciliation? A critical examination of robust research evidence, curriculum policy and teachers’ practices for teaching phonics and reading. Rev. Educ. 2022, 10, e3314.
34. Compton-Lilly, C.; Spence, L.K.; Thomas, P.L.; Decker, S.L. Stories grounded in decades of research: What we truly know about the teaching of reading. Read. Teach. 2023, 77, 392–400.
35. Duke, N.K.; Cartwright, K.B. The science of reading progresses: Communicating advances beyond the simple view of reading. Read. Res. Q. 2021, 56, S25–S44.
36. Baroutsis, A.; Woods, A. Academic research and public debates: A media analysis of the proposed Australian phonics check. In Literacies in Early Childhood: Foundations for Equity and Quality; Woods, A., Exley, B., Eds.; Oxford University Press: Sydney, Australia, 2020; pp. 352–364.
37. Exley, B. Reading the disregarded evidence: The Australian phonics check debate. In Teaching Initial Literacy: Policies, Evidence & Ideology; Clark, M.M., Ed.; Glendale Education: Birmingham, UK, 2018; pp. 76–82.
  38. Sequeira, R. As reading scores fall, states turn to phonics—but not without a fight. Stateline, 30 April 2025. Available online: https://stateline.org/2025/04/30/as-reading-scores-fall-states-turn-to-phonics-but-not-without-a-fight/ (accessed on 20 February 2026).
  39. Bradbury, A. Opinion: Why are ministers obsessed with teaching children to read using phonics? University College London News, 19 January 2022. Available online: https://www.ucl.ac.uk/news/2022/jan/opinion-why-are-ministers-obsessed-teaching-children-read-using-phonics (accessed on 20 February 2026).
  40. Robinson, N. Reading wars rage again as Australian Government pushes to introduce phonics test. Australian Broadcasting Commission News, 30 June 2019. Available online: https://www.abc.net.au/news/2019-06-30/australian-phonics-war-on-how-to-teach-kids-to-read-rages-on/11258944 (accessed on 20 February 2026).
  41. Canadian Broadcasting Corporation (CBC) Radio. Low literacy rates in Canada prompt reading curriculum changes. Canadian Broadcasting Corporation, 18 September 2024. Available online: https://www.cbc.ca/radio/thecurrent/updated-reading-curriculum-1.7313187 (accessed on 20 February 2026).
  42. Ontario Human Rights Commission. Right to Read: Public Inquiry into Human Rights Issues Affecting Students with Reading Disabilities; Ontario Human Rights Commission: Toronto, ON, Canada, 2022; Available online: https://www3.ohrc.on.ca/en/right-read-inquiry-report-0 (accessed on 20 February 2026).
43. Ehri, L.C.; Nunes, S.R.; Stahl, S.A.; Willows, D.M. Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel’s meta-analysis. Rev. Educ. Res. 2001, 71, 393–447.
  44. Quint, J.C.; Balu, R.; DeLaurentis, M.; Rappaport, S.; Smith, T.J.; Zhu, P. The Success for All Model of School Reform: Early Findings from the Investing in Innovation (i3) Scale-Up; Manpower Demonstration Research Corporation: New York, NY, USA, 2014; Available online: https://www.mdrc.org/sites/default/files/SFA_2015_FR.pdf (accessed on 20 February 2026).
45. Sinkovics, R.R.; Alfoldi, E.A. Progressive focusing and trustworthiness in qualitative research: The enabling role of computer-assisted qualitative data analysis software. Manag. Int. Rev. 2012, 52, 817–845.
46. Ehri, L.C. Learning to read words: Theory, findings, and issues. Sci. Stud. Read. 2005, 9, 167–188.
47. Mesmer, H.A.E.; Griffith, P.L. Everybody’s selling it—But just what is explicit, systematic phonics instruction? Read. Teach. 2005, 59, 366–376.
48. Dilgard, C.; Hodges, T.S.; Coleman, J. Phonics instruction in early literacy: Examining professional learning, instructional resources, and intervention intensity. Read. Psychol. 2022, 43, 541–575.
49. Scull, J.; Lyons, D. Teaching phonics in context: Stories of teachers’ practice and students’ outcomes. Aust. J. Lang. Lit. 2024, 47, 181–201.
50. Paris, S.G. Reinterpreting the development of reading skills. Read. Res. Q. 2005, 40, 184–202.
51. Vaughn, M.; Parsons, S.A.; Gallagher, M.A. Challenging scripted curricula with adaptive teaching. Educ. Res. 2022, 51, 186–196.
52. Watson, J.E.; Johnston, R.S. Accelerating reading attainment: The effectiveness of synthetic phonics. Scott. Educ. Rev. 1998, 30, 69–84.
53. Pernai, D.A.; Pulciani, J.; Vahle, D. Piecing together phonics and whole language: A balanced approach. Read. Teach. 2000, 54, 30–39.
54. Bowens, S.W. The Relationship Between Using the Scrambled Words Reading Strategy and the Vocabulary of Struggling Readers. Ed.D. Thesis, Liberty University, Lynchburg, VA, USA, 2013.
55. Devonshire, V.; Morris, P.; Fluck, M. Spelling and reading development: The effect of teaching orthographic structure on reading and spelling. J. Res. Read. 2013, 36, 295–316.
56. Piquette, N.A.; Savage, R.S.; Abrami, P.C. A cluster randomized control field trial of the ABRACADABRA web-based reading technology: Replication and extension of basic findings. Front. Psychol. 2014, 5, 1413.
57. Strong, J.Z. Investigating a text structure intervention for reading and writing in Grades 4 and 5. Read. Res. Q. 2020, 55, 545–551.
Figure 1. PRISMA flow diagram for this SQLR.
Figure 2. Countries where research studies are conducted.
Figure 3. Research methods.
Figure 4. Year of publication of SQLR manuscripts.
Table 1. List of the 163 research publications according to elements of reading development.
Six Elements of Reading Instruction That Contribute to Reading Development | Number of Publications
Oral Language | 5
Phonological Awareness | 9
Phonics | 5
Vocabulary | 22
Fluency | 36
Comprehension | 121
Table 2. Five research studies: participants, instructional duration, and year levels.
1. Rightmyer et al. (2006) [29]
Participants: Program 1 in 1st & 2nd grade—5 students, 3 teachers; Program 2 in 2nd & 3rd grade—10 students, 5 teachers; Program 3 in 1st & 2nd grade and 2nd & 3rd grade—11 students, 6 teachers; Program 4 in 1st & 2nd grade and 2nd & 3rd grade—21 students, 3 teachers; Program 5 in 1st & 2nd grade and 2nd & 3rd grade—56 students, 21 teachers; Program 6 in 2nd & 3rd grade—14 students, 4 teachers. U.S.
Instructional duration: Variable lessons per week across 6 programs of instruction, over 2 years.
Year levels: Group 1—1st grade through to the end of 2nd grade; Group 2—2nd grade through to the end of 3rd grade.

2. Ferguson et al. (2011) [31]
Participants: Group 1—135 students, 2-year intervention in Primary 1 & 2 and follow-up assessment in Primary 3 & 4; Group 2—143 students, 2-year intervention in Primary 1 & 2 and follow-up assessment in Primary 3; Group 3—137 students, 2-year intervention in Primary 1 & 2. 16 teachers in Year 1, and 32 teachers in Years 2 & 3, across 16 urban or rural schools in North Lanarkshire, Scotland.
Instructional duration: Embedded phonics teaching within a wider literacy curriculum; phonics daily for 20 min in a whole-class format & independent phonics activity 4 times a week; one big-book lesson to the whole class every week; daily independent reading activities. 2 years (Groups 1, 2 & 3) + 1-year follow-up (Group 2) & 2-year follow-up (Group 1).
Year levels: Primary 1 & 2 (intervention) & Primary 3 & 4 comprehension assessment (follow-up).

3. Quint et al. (2014) [44]
Participants: 37 schools (19 intervention schools, 18 control schools), 2147 kindergarten/1st-grade students (1129 program students; 1018 control group students); teacher number unknown. U.S. (within 200 miles of the border with Mexico).
Instructional duration: Daily reading blocks (excluding spelling & grammar) for an average of 99 min/day in program schools & an average of 107 min/day in control schools. In some program schools, students receive small-group tutoring for 2 years.
Year levels: Kindergarten through to the end of 1st grade.

4. Ehri & Flugman (2018) [30]
Participants: 806 students (406 1st grade, 325 2nd grade, 75 3rd grade), 69 teachers from 23 urban lower socio-economic public elementary schools in the greater New York City region, U.S.
Instructional duration: The number of lessons varies according to the teacher; 1 school year.
Year levels: 1st, 2nd & 3rd grade.

5. Cross (2019) [5]
Participants: 32 students (some with/without prior preschool experience), 2 teachers, Midwest suburban elementary school, U.S.
Instructional duration: 5 × 60 min lessons per week × 32 weeks (1 school year); whole-group instruction & some students receive an extra 15 min of small-group instruction per day.
Year levels: Kindergarten.
Table 3. Five research studies: teacher professional learning, instructional content and strategies, and teachers’ pedagogical decision-making.
1. Rightmyer et al. (2006) [29]
Teacher professional learning: In the 1st year, principals recommend teachers who are “particularly successful at implementing the instructional model for at least one year” (p. 214). The majority of the teachers in this 1st year of the study hold “advanced rank in the profession; 73% had earned at least 30 credit hours beyond the bachelor’s degree” & 84% taught “for more than five years” (p. 214). No description of teachers in the 2nd year. Program 6 teachers mention teacher professional learning but provide no details.
Instructional content and strategies:
Program 1—Using a variety of literature, a variety of grouping patterns, skill lessons on phonics, & oral reading. Option to use a computer program. No extensive time for students to read connected text.
Program 2—Classroom teacher taught only the “struggling readers” for 15 min of reading, 15 min of word work & 15 min of writing that was “mostly sentence dictation” (p. 224).
Program 3—Activities involve read-alouds, skills lessons, oral reading, independent reading, writing connected texts, ability-grouped guided reading & skill worksheets.
Program 4—Four blocks including guided reading, independent reading, word work & writing.
Program 5—A scripted program, ability-grouped guided reading using direct instruction & basal readers, worksheets for skills, & regular oral reading assessment. Some daily read-aloud & some independent reading time.
Program 6—Whole-class instruction with read-alouds, explicit teaching of skills, oral reading, independent reading, writing of connected texts & some differentiation for small-group guided reading.
Teachers’ pedagogical decision-making:
Program 1—Pedagogic model unspecified; teachers offer varied literature & practices; two teachers emphasize skills, one emphasizes meaning.
Program 2—Prescriptive pedagogy with two teachers “skills-focused”, two “meaning-focused” & one “balanced” (p. 224).
Program 3—Three teachers have a “meaning focus”, two a “balanced focus”, & one a “skills focus”.
Program 4—Teachers vary between a word-work focus with “little integration of word work into reading & writing” & a “balanced” approach that includes “guided reading lessons with explicit lessons on how to decode” (p. 225).
Program 5—A direct instruction model with specified sequence & scripted lessons that teachers still vary, with most taking a skills focus (as intended), a few “balanced with skills & meaning” & two teachers “meaning-based” (p. 225).
Program 6—Teachers observe students reading, use a variety of materials, & design instructional activities for individuals. All teachers are either “meaning-focused or balanced” (p. 226).

2. Ferguson et al. (2011) [31]
Teacher professional learning: Training days for a theoretical overview of 3 strands: phonemic awareness & phonics instruction; semantic & syntactic cueing systems; & metacognitive strategies for decoding & comprehension. Weekly literacy-teacher visits for planning & demonstration of strategies for scaffolding, modeling, reciprocal teaching, & direct instruction. Staff discusses time allocations & groupings. Professional development for reciprocal teaching, think-alouds, modelling, & scaffolding. End-of-year reviews reflect on practice, evaluating materials, & refining strategies for the following year. Head teachers have two additional development days, & early-years staff have extra training.
Instructional content and strategies: Explicit systematic phonics instruction over 2 years, embedded within “a wider literacy curriculum” (p. 241). Strand 1—phonemic awareness & phonics instruction (alphabet sounds & phonemes through multi-sensory activities; phonological tasks; rhymes in short stories, teacher big books & levelled texts; make, break, blend & write words with a focus on rime). Strand 2—semantic & syntactical cueing using a “wide range of graded texts” (p. 241), including “big book” lessons (p. 241). Strand 3—metacognitive strategies to improve decoding & comprehension using “the major cueing systems” (p. 242), plus sequencing, identification of main character & summarization.
Teachers’ pedagogical decision-making: Teachers integrate the program with existing reading schemes, using core elements for explicit instruction & responding to students’ initiatives in individual, group, & whole-class sessions through varied teaching strategies.

3. Quint et al. (2014) [44]
Teacher professional learning: Teachers & school leaders receive initial training & continuous professional development over 2 years. Coaches visit schools regularly, providing feedback.
Instructional content and strategies: Emphasizes phonics for beginning readers & comprehension for students at all levels via cooperative learning, student discussions, & across-grade ability grouping. Use “picture cues and context cues, a strategy that emphasizes the meaning of these words” (p. 19). Use think-pair-share & whole-group response. Some students receive extra assistance in small groups. Control group teachers use “commonly used basal programs” that strike “a balance between decoding and comprehension skills” (p. 3). Control group teachers are “more likely to examine words isolated from their contexts”, such as sight words & word structure (prefixes, suffixes & contractions) (p. 19).
Teachers’ pedagogical decision-making: Scripted lesson plans & prescriptive pacing, with coaches verifying & rating teachers. Survey data indicate 44.7% of program teachers & 89.6% of control group teachers “agree or strongly agree that they change parts of the reading program that they do not like” (p. 16), and that 59% of program teachers & 17.2% of control group teachers “agree or strongly agree that their reading program is too rigid” (p. 16).

4. Ehri & Flugman (2018) [30]
Teacher professional learning: Teachers complete a 45 h summer institute & 90 h of in-school training. Mentors work with teachers twice a week × 30 weeks (60 visits) for a 45-min prep period & 45 min of modeling & feedback in the classroom.
Instructional content and strategies: Synthetic phonics focuses on “precise sounds for each letter or letter combination” of 70 phonograms, how to blend to form words & how to write words (p. 430). 1st graders learn new words at a rapid pace, up to 30 words a week, syllables, morphemic patterns, 29 spelling rules & vocabulary building through analysis of the meanings of words, roots & affixes, & reading & listening comprehension. Once students learn some words & letters, they begin reading texts, first with “simple decodable texts” (p. 434). The approach involves “whole class reading” of decodable books with phonically controlled vocabularies, where “good students” model reading for “less advanced students” (p. 434). At the end of the lesson, the teacher reads aloud a higher-level text. In 2nd & 3rd grades, students read myths, biographies, history & science, focusing on comprehension & discussing content.
Teachers’ pedagogical decision-making: Highly structured lesson plans & strict scope & sequence. Students work in “unison by chorally responding” & sit “facing the blackboard so that they can focus” (p. 434). Teachers gradually assume responsibility but have little freedom to alter core content or pacing.

5. Cross (2019) [5]
Teacher professional learning: Teachers complete summer training on strategies & engagement techniques, then in-class coaching where associates model lessons, observe instruction, & provide feedback to ensure fidelity.
Instructional content and strategies: Daily 60 min of arts-based literacy instruction targeting different learning styles through whole-group instruction that focuses on the sound & words of the day, involving music or drama, partner reading & drawing images of the stories. Small-group activities for the use of vowel cards, alphabet cards & a short story. Some students receive an extra 10 min of instruction.
Teachers’ pedagogical decision-making: Teachers follow a daily routine (60 min) including whole-group, small-group & independent centres but decide on the instructional content & strategies.
Table 4. Five research studies: evidence collected, intervention fidelity, and findings.
1. Rightmyer et al. (2006) [29]
Evidence collected: Flynt–Cooter Informal Reading Inventory, a reading assessment that includes a record of errors (numerical score), as well as oral and silent reading of fiction and nonfiction passages for retelling and comprehension questions. This inventory produces a numerical “grade level” score for the purposes of comparing achievement (p. 216).
Intervention fidelity:
Program 1—low implementation fidelity, with less phonics taught than initially planned.
Program 2—“implemented with high fidelity to the model and looked similar across all classrooms” (p. 224).
Program 3—a variety of foci, spanning a meaning focus, a skills focus, and a balanced approach (p. 224).
Program 4—teachers follow the program with the whole class even though “some or all of the target children were not following the lessons” (p. 225).
Program 5—most teachers follow the script.
Program 6—all teachers use the same activities and share materials (p. 226).
Findings: No one program proves more effective in first grade after one year of instruction. After the second year, an ANOVA reveals that students who participate in Program 6 (M = 2.71) significantly exceed the means for Program 1 (M = 1.30) and Program 4 (M = 1.68), with no significant differences between Program 6, Program 5 (M = 2.02), and Program 3 (M = 2.20). A phonics-only approach in the first year may compromise fluency and comprehension in subsequent years. A balanced (phonics + fluency + comprehension) model does as well as a phonics-intensive model in the first year, but a balanced program offers “greater fluency and comprehension in the second year” (pp. 229–230). However, for the Group 2 cohort of second graders, after two years of instruction in the same reading model, there were no statistically significant differences in reading achievement.

2. Ferguson et al. (2011) [31]
Evidence collected: Group Reading Test II (GRT) for Primary 3 (Groups 1 & 2) & Primary 4 (Group 1) follow-up (NFER-Nelson, 1998). This test is “norm-referenced and assessed reading skills using sentence completion and context comprehension” (p. 245).
Intervention fidelity: Informal observations reveal “this advice was generally being implemented” (p. 240).
Findings: At Primary 3, intervention cohorts outperform the comparison cohort by approximately 6 months of reading age on the GRT (Group 1: M = 99.27 vs. 93.13 months; Group 2: M = 98.89 vs. 93.13 months). At Primary 4, the reading-age advantage on the GRT remains at approximately 5 months (Group 1: M = 101.42 vs. 96.49 months). Attainment in reading comprehension “significantly improved as a result of the intervention” (p. 253). “This was true not only at the end of the intervention, but at follow-up one and two years later” (p. 253).

3. Quint et al. (2014) [44]
Evidence collected: Woodcock–Johnson Passage Comprehension, which “asks students to read a short passage and supply a missing word that makes sense in the context of the passage” (p. 26).
Intervention fidelity: By the end of the second year of the intervention, 16 of 19 program schools met the “standard for adequate implementation fidelity” of 50% of the 97 Level 1, 2 & 3 snapshot items, although there is “considerable variation within this group” (p. 8).
Findings: Results for Woodcock–Johnson Passage Comprehension are similar across both samples: program students and control students do not differ significantly (main sample ES = 0.03; full sample ES = 0.02). This indicates no measurable impact of the program on reading comprehension at the end of Grade 1, even under conditions of full program exposure. Teachers “critique the pacing” and express concern that program grouping practices “do not respond well to the needs of high-functioning and struggling students” (pp. 12–13).

4. Ehri & Flugman (2018) [30]
Evidence collected: The Gates–MacGinitie Reading Test (GMRT), a multiple-choice test. In 2nd & 3rd grade, reading comprehension is assessed: 2nd-grade test items require students to read short paragraphs and identify which of three pictures matches the text; 3rd-grade test items require students to read a passage & write answers to comprehension questions.
Intervention fidelity: Monthly ratings by mentors show teachers improve their phonics teaching skills, with many reaching the highest ratings. 2nd-grade teachers are “less likely to teach the program” on days when the mentor is absent, perhaps due to “pressure to teach other reading skills such as building students’ fluency, vocabulary & comprehension” for the mandatory testing (p. 448).
Findings: On GMRT reading comprehension, Grade-2 students show large, statistically significant gains, from M = 33.10 in the Fall to M = 42.00 in the Spring. For the General Education subgroup, the gain is from M = 37.95 in the Fall to M = 46.15 in the Spring. Bilingual English Language Learners gain from M = 17.79 in the Fall to M = 28.25 in the Spring, which is still below the benchmark. Grade-3 students also show large, statistically significant gains, from M = 31.51 in the Fall to M = 40.12 in the Spring, but remain below the expected end-of-grade level due to low levels at the start of the intervention.

5. Cross (2019) [5]
Evidence collected: Northwest Evaluation Association Measures of Academic Progress (NWEA MAP), an adaptive test measuring a range of reading skills, including comprehension.
Intervention fidelity: Fidelity checks by coaches for both teachers, with feedback on “events that were successful or problematic as well as setting up for additional training and support if needed” (p. 32).
Findings: Using NWEA MAP Reading, a statistically significant pre–post gain for kindergarteners after one year, from M = 136.19 to M = 147.97, with a moderate effect size. Students are beginning to show “the ability to understand what they are reading” (p. 77). Subgroup analyses show a nonsignificant gain for students with preschool experience and a significant but small effect for those without.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Exley, B.; Bradfield, K.Z.; Heinrichs, D.H.; Clancy, S. A Systematic Quantitative Literature Review of the Contribution of Phonics to Overall Reading Performance for Primary Students. Encyclopedia 2026, 6, 61. https://doi.org/10.3390/encyclopedia6030061
