Article

Computer-Based Development of Reading Skills to Reduce Dropout in Uncertain Times

1 Institute of Education, Kaposvár Campus, Hungarian University of Agriculture and Life Sciences, 7400 Kaposvár, Hungary
2 MTA—SZTE Research Group on the Development of Competencies, Institute of Education, University of Szeged, 6722 Szeged, Hungary
3 MTA—SZTE Digital Learning Technologies Research Group, Institute of Education, University of Szeged, 6722 Szeged, Hungary
* Author to whom correspondence should be addressed.
J. Intell. 2022, 10(4), 89; https://doi.org/10.3390/jintelligence10040089
Submission received: 30 June 2022 / Revised: 13 October 2022 / Accepted: 14 October 2022 / Published: 21 October 2022
(This article belongs to the Special Issue Learning and Instruction)

Abstract: An adequate level of reading comprehension is a prerequisite for successful learning. Numerous studies have shown that without a solid foundation, there can be severe difficulties in later learning and that failure in the first years of schooling can determine attitudes to learning. In the present study, we present the effect size of an online game-based training program implemented on eDia. The primary goals of the program are to develop reading fluency and reading comprehension in Grades 3–4. The content of the program was developed in accordance with the national core curriculum and the textbooks based on it. It can therefore be integrated into both classroom-based lessons and extracurricular activities outside of class. The quasiexperimental research involved 276 students. Propensity score matching was used in examining the effect size of the development program to increase the validity of the results. Through the training program, the development of students in the intervention group accelerated greatly (d = .51), and the effect proved to be even higher in the lowest and average skill groups (d1 = 1.81; d2 = .92) as well as in the disadvantaged student group (d = .72). Latent-change analyses confirmed the sensitivity, relevance, and importance of developing comprehension at 9–10 years of age and the generalizability of the results (χ2 = 421.5; df = 272; p < .05; CFI = .950; TLI = .945; RMSEA = .045 (CI: .036, .153)). The study provided evidence that a well-designed online training program is suitable for developing comprehension and overcoming disadvantages, even outside the classroom without the presence of the teacher.

1. Introduction

The role of reading comprehension is indispensable in learning and in a proper understanding of instructions, and it is becoming increasingly important in modern societies. A lack of reading skills can severely limit an individual’s ability to succeed (Wyschkon et al. 2017; Luo et al. 2017; Jamshidifarsani et al. 2019). A person who reads well makes a continuous effort to search for information in the text in order to interpret it. If this process is inefficient, the reader is unable to memorize information, make connections, or integrate background knowledge with what is read (National Reading Panel 2000). The integration of experience and background knowledge into what is read is an essential condition for the development of intelligent and fluent reading (Nelson et al. 2012), and the lack of this skill puts students at risk of school failure and dropout (Rabiner et al. 2016).
Continuous improvement of students’ reading skills should continue to be a priority after the lower grades, as research identifies a decline in the reading performance of low-income students after third grade (Chall and Jacobs 1983; Hirsch 2003; Stockard 2010; Campbell et al. 2019). Teachers need continuous feedback on how their students are developing in different areas of reading to support effective reading instruction (Sztajn et al. 2012). Thanks to technology, the continuity of feedback can now be integrated into the process of modern teaching. These aspects have led researchers to take steps to design development programs that are available online, provide continuous feedback to students, and therefore offer an appropriate way to develop the most critical skills for successful reading comprehension, even without face-to-face contact.
We already know a great deal about the components of effective classroom and individual reading instruction, but more research is needed to assess the impact of reading interventions that align with and complement the technology-based curriculum. This issue has become a hot topic in uncertain times, when the lack of in-person school instruction has resulted in a significant learning gap even in the most important domains of education—reading, mathematics, and the sciences—especially among students in the lower grades (Engzell et al. 2021; Tomasik et al. 2021; Molnár and Hermann 2022). This study expands our current knowledge of the potential of implementing computer-based training for reading skills beyond normal teaching hours to bridge the learning gap that grew during remote learning due to COVID-19 among students aged 9–11 at the beginning of schooling. An online, curriculum-based training program in support of text comprehension is presented, the effectiveness of which was evaluated immediately after the intervention and three months later. The validity of the quasiexperimental study was improved by using propensity score matching to assign students to the intervention and control groups. This is the first empirically tested online training program in Hungarian that focuses on the development of the reading skills of students aged 9–11. Thus, the focus of this study is twofold: first, to show that the age of 9–11 is a sensitive period for accelerating the development of reading skills, and, second, to demonstrate that a computerized, personalized training program with immediate, continuous, and differentiated feedback can prevent broader learning gaps in reading skills even at this stage of education, without the presence of the teacher.

1.1. Cognitive and Linguistic Components of Reading

Reading is one of the most complex and significant cognitive activities that a person engages in (Kendeou et al. 2016; Elleman and Oslund 2019). In the process of reading, the reader creates meaning through interaction with the text (Kim and Goetz 1995). Comprehension of a text requires coordination of several linguistic and cognitive processes (Castles et al. 2018), including word reading skills, working memory, inference generation, comprehension monitoring, vocabulary, and prior knowledge (Perfetti et al. 2005). Importantly, it can be concluded that in the process of reading, the higher-level complex skills of decoding and text comprehension require integration of several basic reading and reading-related skills (Morris and Lonigan 2022). Reading comprehension is supported by both word reading skills (decoding) and oral language comprehension (Kendeou et al. 2009; Lonigan 2015). Decoding skills entail the integration of orthographic knowledge and phonological awareness. Comprehension skills involve integration of semantic and syntactic knowledge and inference processes. All of these basic reading skills are supported by general intelligence (Morris and Lonigan 2022). Reading has a causal effect on more general cognitive abilities; that is, it can improve overall intelligence (Gottfredson 1997). Therefore, the development of reading skills can also promote the development of intelligence (Torgesen 2005) in a classroom environment. Reading improves verbal intelligence (Cunningham and Stanovich 1991, 1998; Cotton and Crewther 2009; Cain and Oakhill 2011). Better reading skills can improve knowledge of specific facts, but they can also allow a person to acquire abstract thinking skills; thus, in addition to verbal abilities, reading is also associated with an increase in nonverbal abilities (Ritchie et al. 2014).
Understanding what one reads requires each component to work properly; if any component is impaired or stalled in its development, the other components are affected as well. In the early stages of learning to read, success is determined by a number of components, among which those related to language skills stand out (Lonigan et al. 2008). It is therefore necessary to understand reading as a language skill. Reading is the language skill for which well-functioning spoken language skills are essential at the levels of phonological (interpreting the sounds of speech), semantic (interpreting the actual meaning of sentences), syntactic (interpreting the grammatical structure of sentences), and pragmatic (interpreting the context of a text) organization (Lonigan et al. 2008; Kamhi and Catts 2012). Based on a meta-analysis, the National Reading Panel (2000) concluded that a program that includes the following four areas of reading instruction is successful: (1) teaching phonemic awareness, (2) building phonics, (3) systematic improvement and development of fluency, and (4) strengthening comprehension. The panel found that a combination of these techniques makes reading instruction more effective. Ehri et al. (2001) identified five main pillars for success in reading acquisition: (1) developing phonological awareness, (2) building a thorough knowledge of letter–sound relationships, (3) developing vocabulary, (4) developing reading fluency, and (5) mastering comprehension strategies. These pillars are closely interlinked. Strong phonological awareness is the basis for building letter–sound relationships. A thorough knowledge of phonological awareness and letter–sound relationships facilitates the development of fluent reading, which enables the reader to access the meaning of written texts using vocabulary and comprehension strategies (Ehri et al. 2001; Kamhi and Catts 2012).
The goal of acquiring the various reading skills, from phonemic awareness to vocabulary acquisition and fluency, is to be able to understand texts effectively (Jamshidifarsani et al. 2019). A variety of skills are involved in reading comprehension at an appropriate level, and the lack of one or more of these skills can impair comprehension (Nation 2005; Kendeou et al. 2014). Readers with reading difficulties may therefore struggle with a text at different levels, each resulting in a different reading profile (Cain and Oakhill 2011). The key to reading at a high level of proficiency is automatic, effortless decoding (LaBerge and Samuels 1974). To reach this level of proficiency, learners need to undergo a long learning process (Swart et al. 2017). This labor-intensive learning phase is not easy for all students and can be hindered by a number of factors. A school can only meet the expectations of public education if it can develop students’ skills and abilities with scientifically based tools. Computer-assisted development programs offer a new way to address this problem, as they can be used to develop pupils’ skills in a targeted way. Thus, a complex, multicomponent development program adapted to the curriculum, one method of which can be technology-based development, can facilitate the development of students’ reading skills in the beginning stages of elementary school.

1.2. Computer-Based Development of Reading Skills

Computer-assisted education in schools has been around since the 1980s and was identified by Barley et al. (2002) as an effective tool for improving performance among at-risk students. Numerous studies have demonstrated the benefits and success of technology-based development programs (Sivin-Kachala and Bialo 2000; Barley et al. 2002; van Scoter and Boss 2004; James 2014). First, learning in a playful digital environment can enhance motivation, which can lead to increased acceptance, concentration, and persistence in learning tasks (Malouf 1988; Papastergiou 2009). Furthermore, technology-based instruction can reduce cognitive load and contribute to greater retention of course material (Williams and Zahed 1996; Mayer and Moreno 2003; Ricci et al. 2009). Computer-based programs provide opportunities for differentiated instruction by enabling real-time data generation and immediate visualization of a student’s performance, which plays a key role in differentiation when a student is deficient, underdeveloped, or in need of additional support (Jenkins et al. 2017; Campbell et al. 2022). With no time limit, all students can progress at their own pace, which can promote their individual development (Corbett 2001). Finally, such programs can provide personalized, adaptive tutoring with little or no instructor involvement, which is particularly beneficial when there are not enough human resources available (Andreev et al. 2009; Athanaselis et al. 2014).
A growing number of computer programs have been developed to promote students’ reading performance (Carlson and Francis 2002; Guthrie et al. 2004; Jamshidifarsani et al. 2019). Most studies deal with the development of one dominant segment of reading, and, most often, developmental procedures prepared for students with some special learning problem (e.g., dyslexia and attention deficit disorder) are evaluated. Most of the procedures support the decoding skills of five- to eight-year-old children in a playful way, which increases students’ motivation. Existing research has demonstrated the benefits of computer-based development (1) in the development of phonological awareness (Mitchell and Fox 2001; Lonigan et al. 2003; Cassady and Smith 2004; Segers and Verhoeven 2005; Macaruso and Walker 2008; Wild 2009; de Graaff et al. 2009; Nelson 2010; Adlof et al. 2010; Al Otaiba et al. 2012; Savage et al. 2013); (2) in the identification of letter–sound relationships (Segers and Verhoeven 2005; Macaruso et al. 2006); (3) in the area of word recognition skills (van Daal and Reitsma 2000; Hecht and Close 2002; Shelley-Tremblay and Eyer 2009; Macaruso and Rodman 2011; Saine et al. 2011); and (4) in the area of word reading (Johnson et al. 2010; Saine et al. 2011). However, fewer studies deal with curriculum-based, complex reading intervention programs and their scientifically proven effects in the lower grades among students without special learning needs. The purpose of such development programs is not only to teach decoding skills but also to develop text comprehension.
Kloos et al. (2019) used a similar procedure involving an online reading program called MindPlay Virtual Reading Coach (MVRC) to develop reading in second- and fourth-grade students. The program covers phonological awareness, phonetic skills, vocabulary, grammar, fluency in silent reading, and comprehension. Fluency in reading was assessed before and after the procedure. MVRC showed a clear advantage in the area of silent reading fluency. Taken together, the results suggested that increasing the amount of time spent with MVRC directly leads to improved reading fluency. In addition, the program helped to improve the reading skills of children from middle-class homes, even when they were not directly threatened by reading failure.
Prescott et al. (2018) used the Core5 online program to improve reading for disadvantaged students. The program provides a systematic and personalized way to teach reading. Its content targets six branches of reading: phonological awareness, phonetics, structural analysis, automation or fluency, vocabulary, and comprehension, which are systematically aligned with the standards for reading informative texts and literature from kindergarten up to fifth grade. Their results in all grades (kindergarten–Grade 5) demonstrated the effectiveness of the program, especially in the early stages of learning to read. Their regression analyses showed that students who made greater progress in the online component scored higher on the reading test. Macaruso et al. (2019) also used the Core5 program in a longitudinal (3-year) study of disadvantaged students. Students began the program in kindergarten, and their reading scores were followed until the end of second grade. The results confirmed the effectiveness of the online development program: first and second graders using it showed significantly greater reading improvement on a standardized test than members of the control group.
Based on a meta-analysis of thirty-two technology-based or technology-assisted reading development programs, Jamshidifarsani et al. (2019) concluded that letter recognition automation can be taught initially, then word recognition automation can be practiced, and, later, phrases, paragraphs, and longer texts can be interpreted. In addition, it was suggested that developers should take advantage of the latest advances in information and communication technologies and design innovative methods that are not available under normal educational conditions.
In summary, the majority of technology-supported development programs for promoting reading skills are designed for students with specific learning disabilities and deal with the development of a single segment of reading skills. The number of complex, curriculum-based development programs designed for students who have average abilities and no learning disability but who are struggling with developmental delays for some reason (e.g., online education) is negligible. No such program has been available in Hungary at all. Our online reading skills development program was prepared to fill this gap and to speed up students’ development, with the aim of eliminating the backlogs caused by school closures.

1.3. Aims and Research Questions

This study had two objectives: first, to develop a game-based, personalized reading skills intervention program for third- and fourth-grade students to improve their reading comprehension and close the learning gap in basic reading skills that arose during the first two years of distance education; and, second, to conduct a quasiexperimental research project to test the effect size of the intervention immediately afterward and then three months later in a follow-up test on different groups of students. That is, in the present study, we used a quasiexperimental procedure with propensity score matching to determine the impact of the development program by evaluating students’ comprehension scores. The study addresses the following research questions:
  • RQ1. How effectively can a complex online reading intervention program be implemented at the ages of 9–11?
  • RQ2. Which starting level of reading skills is the most sensitive to the complex online training program? At which level can we thus expect the largest effect?
  • RQ3. Which group of students can be enhanced the most via the online reading program based on students’ socioeconomic background?
  • RQ4. How generalizable are the results? Are the effects confirmed by latent-level analyses using a no-change model in the control group and a latent change model in the intervention group?

2. Materials and Methods

2.1. Participants

The study involved third- and fourth-grade students from 33 schools and 54 classes, 278 students in total. To minimize the effect of teachers’ personalities and teaching methods, full classes were involved in the study. Based on students’ pretest performance, we formed study pairs at the class level, in which one member participated in the development program and the other did not. The primary consideration in forming the study pairs was that both members should be in the same class, as this guarantees that the students master the curriculum with the same methodological repertoire and that their skills are developed with the same methodology. If more than one student in the same class achieved the same performance, the time spent on the test was also considered as a variable. Inclusion of the time factor made it possible to distinguish learners who answered immediately from those who answered after reflection.
During data processing, students (1) who lacked a pre-, post-, or follow-up test, (2) who did not participate in at least 70% of the training (intervention group), or (3) whose time spent on the test did not exceed the minimum time needed to read and complete the tasks were excluded to validate the effectiveness of the program. After data cleaning, propensity score matching was applied: each student in the intervention group was paired with a peer in the control group based on membership in the same class and on pretest performance. In total, 276 participants remained in the research, i.e., 138 pairs of students, with boys outnumbering girls (see Table 1).

2.2. Instrument

To evaluate students’ performance, we used a pre-, post-, and follow-up test to frame the online training program, which was implemented on the eDia platform (Csapó and Molnár 2019). The test examined students’ reading comprehension. The pre-, post-, and follow-up tests included the same tasks to measure information search, interpretation, and reflection. In terms of student activity, the online test contains single- and multiple-choice tasks, 28 items in total; the maximum score available on the test was therefore 28 points. Tasks required clicking/tapping or drag and drop. The reliability of the reading comprehension test proved to be good (Npretest = 2700; Cronbach’s αpretest = .859).
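For transparency, internal consistency of the kind reported above can be computed directly from the raw item-score matrix. The following is a minimal sketch of the standard Cronbach’s alpha formula; the simulated score matrix is purely illustrative, as the real item data are not public.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students x n_items) matrix of item scores."""
    k = items.shape[1]                         # number of items (28 on this test)
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of students' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative call with random 0/1 item scores; real responses are needed for a
# meaningful value (the paper reports alpha = .859 for N = 2700).
rng = np.random.default_rng(0)
print(round(cronbach_alpha(rng.integers(0, 2, size=(2700, 28)).astype(float)), 3))
```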

2.2.1. Content and Structure of the Training Program

The online program was primarily designed for third- and fourth-grade students to bridge the learning gap that had arisen during remote learning in reading. Its content was developed in accordance with the national curriculum and the reading books, grammar-spelling textbooks, and other textbooks based on it. Therefore, it can be used for both in-class native language education and individual/group extracurricular catch-up activities.
Its primary function is (1) to develop continuous reading (fluency), (2) to help students understand the text they are reading (comprehension), and (3) to practice grammatical knowledge. The secondary function is to alleviate the socioeconomic and sociocultural disadvantages present in the learning community.
The content of the development program was compiled based on the recommendation of the National Reading Panel (2000), according to which a strong intervention program includes tasks aimed at developing phonological awareness, phonics, text comprehension, and fluency. That is, the development of decoding and comprehension skills was realized with varied, multicomponent tasks. With this multicomponent reading intervention, the weaknesses and strengths of each individual can be assessed, and more personalized instruction can be provided.
The texts to be processed are based on texts in the second- and third-grade textbooks. The tasks tied to them relate to the (1) phonological, (2) lexical, (3) syntactic, and (4) semantic linguistic levels. Due to the complexity of the tasks, development at the morphological level is integrated into the language levels (1–4) listed above. Since the students involved in the development had been studying for long periods without in-person education in the previous two academic years, we considered it necessary to integrate the contents of the second-grade curriculum into the program to eliminate possible lags. The third-grade course starts with a repetition of the second-grade material, progresses from the simpler tasks, and places stress on texts; since the development program is embedded in the course, it also fits this line of thinking. In addition, tasks adapted to lower skill levels make up only a small part of the development program. Thus, students who struggle with difficulties have the opportunity to compensate for their gaps, while those who are not falling behind experience these items as an easy warm-up task.
The training program contains 15 different texts, designating 15 different development opportunities. On average, each set of tasks contains 13–16 tasks, so the total development program contains 200 basic tasks with additional support instructions and branches. The branching structure of the program allows for tackling the task again with helpful information in the case of an incorrect solution to a task. A task can be completed at 2–3 levels of difficulty. That is, after an unsuccessful answer, the learner is provided with help, and if this is still not enough to reach the correct solution, they can receive further support information.
The program is tailored to the individual needs of the student, so it includes summaries, explanations, and highlights, which can be listened to, watched, and/or read by the learner as necessary. Immediate feedback is provided to the children after each task is completed. It takes 20–40 min to complete a series of tasks; the time spent depends to a large extent on how many of the support functions the student needs to use to complete the basic task. The instructions, explanations, and additions to the tasks are supplemented by a clearly articulated audio file with correct emphasis so that students who have difficulties with reading comprehension can more easily understand the instructions provided. Upon completion of the assignment, the student receives a summary assessment of their performance. The linear linking of the task series makes it possible to interrupt work on a task sequence, and the next time the student logs in, they can continue working from where they left off. With this method, the teacher does not have to keep track of which task the student is on, as the student can always continue from the current point, thus avoiding jumping between tasks and guaranteeing gradual progress from the beginning of the task sequence.
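As an illustration of the branching and resume logic described above, the sketch below shows one way such a task sequence could be represented. All class and field names are hypothetical; the paper does not publish the eDia implementation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    answer: str
    hints: list[str]     # 1-2 support messages, giving 2-3 difficulty levels in total
    hint_level: int = 0  # how many hints have been shown so far

@dataclass
class TaskSequence:
    tasks: list[Task]
    position: int = 0    # persisted per student, so work resumes where it stopped

    def attempt(self, response: str) -> str:
        task = self.tasks[self.position]
        if response.strip().lower() == task.answer.lower():
            self.position += 1               # linear linking: move on to the next task
            return "Correct!"
        if task.hint_level < len(task.hints):
            hint = task.hints[task.hint_level]
            task.hint_level += 1             # retry the same task with stronger support
            return f"Try again with help: {hint}"
        self.position += 1                   # out of hints: reveal the answer, move on
        return f"The answer was: {task.answer}"
```

Persisting `position` per student is what allows the learner to continue from the current task after an interruption, without the teacher tracking progress manually.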
We used a complex content structure to develop reading fluency, comprehension, and correct grammatical structures (Figure 1).
Support for developing reading fluency is grouped around six types of tasks. Considering the age characteristics, we practiced joint reading, where the students first followed a text read aloud at an increasing tempo, and the task was then to find the word that had been changed in several places in the text. To eliminate regression, students followed the text read at a normal pace. To counter skipping, the children looked for accents, punctuation marks, and words in the text. To broaden the eye-fixation span, students were asked to follow words read from the beginning and end of the line, where they were expected to span a distance of initially 2–3 and later 4–5 words. To help them comprehend words, we practiced reading words spaced out in an increasingly wide band. Finally, the task was to read texts with various visual disruptions (blanked letters, incomplete words, scribbled text, and blurred letters).
We supported reading comprehension on five levels. At the level of reproduction, students were expected to repeat a fact from the text. At the level of identification, facts and data were identified. At the third level, production and interpretation, the aim of the tasks was to identify answers that were implicit in the text. At the level of meaning identification, we expected an interpretation of words, word combinations, sentences, and paragraphs. Finally, the recognition of relationships and connections in the text (e.g., cause–explanation, means–end, and cause–effect) was practiced.
We embedded second-grade grammar practice in the above task types, designed with special attention to ensuring that all language levels were integrated into the tasks. In this way, we practiced manipulating letters and sounds; differentiating long and short sounds; syllabification; alphabetical order; grammatically correct sentence structure; how the various suffixes modify meaning and how they are spelled; types of sentences; and related and contradictory terms. The complexity of the development program is illustrated in Figure 1.

2.2.2. Procedure

The pretest was administered in September–November 2021. After the study pairs were formed, the three-month development program started in the second half of November 2021 and closed in early March 2022 with the administration of the online post-test. Then, in June 2022, we administered a follow-up test. Students completed all the tests and the training tasks online in the computer room at their own institution. The assisting educators were given detailed written and, if necessary, oral instructions on the purpose of the tasks in the development program and the manner of implementation. The teachers were not allowed to help students during the testing and training process beyond the login procedure.
We used the propensity score matching technique to arrange the students into pairs, minimizing the influence of confounding factors. In the matching algorithm, in addition to the skill level of the students, characterized by their average performance on the test, we considered the student’s school and class (to exclude effects resulting from the teacher’s classroom work) as well as gender and grade (to exclude gender and grade differences in development). In summary, the students’ average test performance served as the primary basis for the propensity score matching.
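The paper does not publish its matching code, so the following sketch shows one common way to implement one-to-one nearest-neighbour matching on a propensity score within classes. All column names (treated, pretest, gender, grade, class_id) are hypothetical, and gender/grade are assumed to be numerically coded.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_pairs(df: pd.DataFrame) -> list[tuple[int, int]]:
    """Pair each intervention student with a classmate from the control group."""
    X = df[["pretest", "gender", "grade"]]
    ps_model = LogisticRegression().fit(X, df["treated"])
    df = df.assign(pscore=ps_model.predict_proba(X)[:, 1])

    pairs = []
    for _, cls in df.groupby("class_id"):      # pairs must come from the same class
        treated = cls[cls["treated"] == 1]
        controls = cls[cls["treated"] == 0].copy()
        for i, row in treated.iterrows():
            if controls.empty:
                break                          # no unmatched classmates left
            j = (controls["pscore"] - row["pscore"]).abs().idxmin()
            pairs.append((i, j))
            controls = controls.drop(j)        # matching without replacement
    return pairs
```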
In addition to descriptive statistics, we used a two-sample t-test to analyze the differences between disadvantaged and nondisadvantaged students. A paired t-test was used to examine the differences in performance between the third and fourth graders between pre- and post-development and then three months later at the sample level. Cohen’s d (Cohen 1988) was used to describe the magnitude of effect size, that is, the changes in standard deviation units. If its value is less than .2, it is considered a small effect; if it is around .5, it is a medium effect size; and if it is greater than .8, it is interpreted as a large effect (Cohen 1988).
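Because the paper reports its effects in standard deviation units, a small helper makes the convention explicit. The formula below uses the pooled standard deviation of the two measurements; this is a common choice, though the authors do not state which denominator they used.

```python
import numpy as np

def cohens_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Standardized mean change: mean difference over the pooled SD."""
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
    return (post.mean() - pre.mean()) / pooled_sd

def effect_label(d: float) -> str:
    """Cohen's (1988) verbal benchmarks as quoted in the text (simplified cutoffs)."""
    d = abs(d)
    return "small" if d < 0.2 else "medium" if d < 0.8 else "large"
```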
Beyond the analyses using observed variables, which have several limitations (Alessandri et al. 2017), we also used latent-curve modeling and a three-step approach (Little et al. 2002) to evaluate the generalizability of the results on a latent level. By comparing the relative fit indices of the models, we gained further insights into students’ development as a result of regular school instruction (control group) and as a result of explicit training beyond regular school instruction (intervention group). First, we specified a no-change model for both groups (intervention and control), assuming that neither normal school education nor the additional intervention had produced any meaningful effect. In this model, the mean and variance of the second-order intercept factor were freely estimated across groups. Second, we used a latent change model for the intervention group and a no-change model for the control group; that is, we additionally estimated a slope growth factor in the intervention group to capture any possible change. Finally, we estimated a latent change model for both groups. We compared the model fit indices CFI (Comparative Fit Index), TLI (Tucker–Lewis Index), and RMSEA (Root Mean Square Error of Approximation) with its associated 90% confidence interval, as well as the changes in fit indices between the different models. We accepted CFI and TLI values > .90 and RMSEA values < .08 (see Kline 2016). The Akaike information criterion (AIC; Burnham and Anderson 2004) was also used, as it “rewards goodness of fit and includes a penalty that is an increasing function of the number of parameters estimated” (Alessandri et al. 2017). If a model’s AIC differs by more than 2 from that of the best-fitting model, the model has considerably less support; if the difference is larger than 10, there is no support for that model. The differences between CFI and RMSEA values were also used in identifying the best-fitting model: according to Chen (2007), if the differences between the CFI or RMSEA values of two models exceed .01, the data support the models to different degrees. Probabilistic model selection based on information criteria provides an analytical technique for scoring and choosing among candidate models. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used for model selection among a finite set of models; both are based on the likelihood function. Generally, models with lower AIC and BIC are preferred; however, these criteria offer no information about the absolute quality of a model, only its quality relative to the other candidate models. Thus, AIC and BIC provide tools for model selection beyond the fit indices (CFI, TLI, and RMSEA).
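The model-selection rules described above can be encoded in a few lines. The sketch below applies the quoted thresholds (ΔAIC of 2 and 10; CFI/TLI > .90; RMSEA < .08) to fit statistics produced by whatever SEM software estimated the models; all numeric values in the usage example are hypothetical.

```python
def acceptable(fit: dict[str, float]) -> bool:
    """Absolute acceptance thresholds used in the text (Kline 2016)."""
    return fit["CFI"] > .90 and fit["TLI"] > .90 and fit["RMSEA"] < .08

def compare_by_aic(fits: dict[str, dict[str, float]]) -> None:
    """Relative support via AIC differences (Burnham and Anderson 2004)."""
    best_aic = min(f["AIC"] for f in fits.values())
    for name, fit in fits.items():
        d_aic = fit["AIC"] - best_aic
        support = ("no support" if d_aic > 10
                   else "considerably less support" if d_aic > 2
                   else "supported")
        print(f"{name}: dAIC = {d_aic:.1f} ({support}); "
              f"absolute fit {'ok' if acceptable(fit) else 'poor'}")

# Hypothetical fit values for the three alternative models described above:
compare_by_aic({
    "no-change (both groups)":      {"AIC": 5120.4, "CFI": .88, "TLI": .87, "RMSEA": .09},
    "latent change (intervention)": {"AIC": 5101.2, "CFI": .93, "TLI": .92, "RMSEA": .07},
    "latent change (both groups)":  {"AIC": 5099.8, "CFI": .93, "TLI": .92, "RMSEA": .07},
})
```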

3. Results

3.1. RQ1. Changes in Reading Performance Compared to Students’ Original Reading Skills Level

In our research, we first examined the comprehension performance of the intervention and control groups before the intervention with the pretest. Students’ performance was treated as a continuous variable. In both groups, the condition of homogeneity of variance was met (Fpretest = 1.21, p = .27). The difference between the two groups was not significant on the pretest (t = −1.18, p = .24). To examine the effectiveness of the development program, we compared the frequency distributions of student performance on the three measurement occasions, that is, on the pretest (Mig = 56.25; SDig = 20.99; Mcg = 59.10; SDcg = 19.24), post-test (Mig = 67.58; SDig = 17.48; Mcg = 61.16; SDcg = 20.99), and follow-up test (Mig = 68.57; SDig = 20.05; Mcg = 67.44; SDcg = 18.36).
Figure 2 illustrates the frequency distribution of the reading comprehension performance of the two student groups measured at three times (pretest, post-test, and follow-up test). The performance of the two groups overlapped well before the start of the study; after the intervention, there were three positive shifts in the performance of the intervention group compared to the control group. (1) In the intervention group, the number of students performing at or below 50 percent decreased after the intervention, and (2) the number of students scoring between 60 and 100 percent grew compared to the control group. (3) Three months later, the number of students scoring between 50 and 60 percent fell further compared to members of the control group. That is, the intervention group was able to maintain its post-intervention skills advantage over the control group even three months after the intervention.
The relationship between students’ performance on the pre- and post-tests and on the pre- and follow-up tests is illustrated in Figure 3. The first panel shows the extent of change between the pre- and post-tests, and the second panel shows the extent of change between the pre- and follow-up tests. The abscissa indicates performance on the pretest, while the ordinate represents performance on the post- or follow-up test. Each dot symbolizes a student; blue stands for the intervention group, and red signifies the control group. Students whose symbols fall on the mean line or between the two dashed lines (representing one standard deviation) performed equally on both occasions. A symbol above the upper dashed line means that the student improved significantly from the pre- to the post-test or from the pre- to the follow-up test, while a symbol below the lower dashed line means that the student’s performance deteriorated significantly from the first to the second or from the first to the third data collection.
Based on the results, it can be concluded that most members of both groups performed better on the post-tests than on the pretest. However, this does not hold for all students, as some individuals performed worse on the post- or follow-up test.
The development of the students in the intervention and control groups between the pre- and post-tests and between the pre- and follow-up tests, in standard deviation units on a manifest level, is shown in Table 2. As a result of the extra development, the children in the intervention group improved by half a standard deviation (d = .51, t = −6.65, p < .01). During the same period, there was no development in this area in the control group (d = .03, t = −.43, p = .67). Between the post-test and the follow-up test, students participated exclusively in school education; here, the previously trained intervention group improved by one-tenth of a standard deviation (d = .12, t = −1.24, p = .22), while the control group developed by three-tenths of a standard deviation (d = .35, t = −5.33, p < .01). Since the intervention and control groups started from the same level at the beginning of the research, this is likely the effect of accelerated development, whereby those already at higher levels presumably develop more slowly. Overall, both groups underwent marked development as a result of school education and the extra development (dig = .59, tig = −7.72, pig < .01; dcg = .39, tcg = −4.68, pcg < .01); that is, the period involved is sensitive to the development of this skill, while the development of the intervention group proved to be more marked.

3.2. RQ2. Expanding the Impact of the Intervention According to the Initial Skill Level of the Students

To monitor the effectiveness of the training with regard to students’ starting level of reading skills, we divided students into three groups based on their performance on the pretest (Table 3). Students in the first group (N = 42), labeled low achievers, performed more than one standard deviation below the average achievers in the second group (N = 139, M = 54.20%, SD = 8.98; 40–78%), scoring 0–39%. Students in the third group (N = 95), called high achievers, performed one standard deviation above the students in the second group (79–100%).
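Read as cut points roughly one standard deviation around the average group, the grouping rule can be expressed in a few lines. The snippet below is an illustrative simplification using the overall pretest mean and SD, not the authors’ exact procedure.

```python
import numpy as np

def skill_group(pretest: np.ndarray) -> np.ndarray:
    """Label each student low/average/high by distance from the pretest mean."""
    z = (pretest - pretest.mean()) / pretest.std(ddof=1)
    return np.select([z < -1, z > 1], ["low", "high"], default="average")

# Example with pretest percentages:
print(skill_group(np.array([10.0, 50.0, 55.0, 60.0, 95.0])))
# -> ['low' 'average' 'average' 'average' 'high']
```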
The standardized differences between the pre- and post-tests proved to be much higher in the two lower-skilled groups of the intervention group than in the corresponding groups of the control group (dig1 = 1.81, tig1 = −5.98, pig1 < .01; dcg1 = .49, tcg1 = −1.60, pcg1 = .13; dig2 = .92, tig2 = −6.75, pig2 < .01; dcg2 = .41, tcg2 = −2.65, pcg2 = .01). In the third skill group, the improvement of the students in the control group was slightly higher than that of the intervention group (dig3 = .39, tig3 = −2.64, pig3 = .01; dcg3 = .52, tcg3 = 3.06, pcg3 < .01). There was a marked change in the two lower skill ranges of the intervention group; development was accelerated in these two groups. After three months, as a result of explicit school development, there was a further slight improvement in all three skill groups of the intervention group, while marked progress was seen in the two lower-skilled groups of the control group (dcg1 = .63; dcg2 = .39). However, the third skill group of the control group did not show any improvement. That is, the extra development sped up the skills not only of the low- and medium-achieving students but also of the high-performing students, whose skills showed marked improvement after three months (Figure 4).

3.3. RQ3. Expanding the Effect Size of the Intervention on Disadvantaged Students

As one of the priorities of school education is to bridge the gap experienced by socioeconomically disadvantaged students, we were interested in the extent of the developmental impact of the intervention program on them. Disadvantaged students are students who attend school under normal conditions but who receive regular family services support because of their parents’ low educational attainment and/or under- or unemployment, and/or whose living or housing conditions are inadequate. The performance of the students in the intervention and control groups according to the three skill groups on the pre-, post-, and follow-up tests with respect to disadvantage is shown in Table 4. In terms of disadvantage, the distribution of students was similar in the intervention and control groups. The pretest performance of both subgroups was lower in the intervention group, while their post-test results were higher than those of the corresponding subgroups in the control group. The magnitude of the effect of the experimental intervention is illustrated in Figure 5.
The intervention had a positive effect on the intervention group. The developmental effect was of medium size for both the disadvantaged (d = .72) and the nondisadvantaged students (d = .51). At the same time, the control group showed no improvement. Three months after the experimental intervention, there was no change in the skills of the intervention group as a result of explicit school development. At the same time, there was a slight improvement in the performance of the control group; this development was lower among the disadvantaged students (d = .27) than among the nondisadvantaged students (d = .45). That is, the intervention greatly accelerated the development of both the disadvantaged and the nondisadvantaged students, so both groups are sensitive to the development of these skills (Figure 5).
The development of the students in the intervention and control groups between the pre- and post-tests and between the post- and follow-up tests on a manifest level, in standard deviation units by skill group and disadvantaged status, is shown in Table 5. The number of students in the same skill groups was similar in the intervention and control groups. In the intervention group, the students in the disadvantaged high-skilled group did not develop as a result of the intervention; in the other subgroups, however, the intervention resulted in a marked improvement. The development of the low-skilled disadvantaged group was the most marked (d = 2.04). School development led to a small improvement among the low-skilled disadvantaged students in the control group (d = .39), and no progress was made in the two higher-skilled groups of the control group. After the end of the experiment, only the high-skilled students developed in the disadvantaged intervention group (d = .66), and school development did not contribute to change in the other two skill groups.

3.4. RQ4. Evaluating the Effect of the Intervention Program within the Latent Curve Modeling Framework

The reading comprehension test monitored the areas of information search, interpretation, and reflection in 9–11-year-old students. First, we tested a measurement model for comprehension with all three indicators combined under one general factor. The measurement model based on the pretest results showed an acceptable fit (χ2 = 496.1; df = 273; p < .05; CFI = .926; TLI = .918; RMSEA = .055 (CI: .047, .053)). Second, we created two parallel forms of the comprehension scale based on the factor loading values. This two-dimensional model showed a better fit (χ2 = 421.5; df = 272; p < .05; CFI = .950; TLI = .945; RMSEA = .045 (CI: .036, .153)), so we used the latent growth model in further analyses. Third, running a latent change model requires at least two indicators per time point. Therefore, based on the factor loading values and the procedure described by Steyer et al. (1997) and Little et al. (2002), we created two parcels for each time point at both the test and dimension levels. The composition of the parcels was identical for each of the three time points. Table 6 shows the fit indices for the three alternative models. According to the fit indices, the third model fitted the data best (CFI = .842; TLI = .828; RMSEA = .255 (CI: .212, .299)). The information criteria (AIC and BIC) also supported this result.
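Parcel construction of the kind cited above (Steyer et al. 1997; Little et al. 2002) is commonly done by balancing factor loadings across parcels. The sketch below illustrates one standard item-to-parcel assignment; it is not the authors’ exact procedure, and the loading values would come from the estimated measurement model.

```python
import numpy as np

def make_parcels(items: np.ndarray, loadings: np.ndarray) -> np.ndarray:
    """Split items into two parcels with balanced loadings and return each
    parcel's mean score per student (two indicators per time point)."""
    order = np.argsort(loadings)[::-1]             # highest-loading items first
    parcel_a, parcel_b = order[0::2], order[1::2]  # alternate assignment
    return np.column_stack([items[:, parcel_a].mean(axis=1),
                            items[:, parcel_b].mean(axis=1)])
```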
To test the developmental effect on a latent level, we analyzed the development between T1 (development between the pre- and post-tests) and T2 (development between the pre- and follow-up tests) at the level of dimensions for the intervention and control groups separately, applying Alessandri et al.’s (2017) approach to evaluating intervention programs with a pretest–post-test design. Table 7 shows the fit indices for the three alternative models in the first dimension (development between the pre- and post-tests). According to the fit indices, in the first dimension as well, the third model fitted the data best (CFI = .930; TLI = .923; RMSEA = .129 (CI: .084, .177)). The information criteria (AIC and BIC) also supported this result.
Table 8 shows the fit indices for the three alternative models in the second dimension (development between the pre- and follow-up tests). According to the fit indices, in the second dimension as well, the third model fitted the data best (CFI = .925; TLI = .924; RMSEA = .289 (CI: .249, .331)). The information criteria (AIC and BIC) also supported this result.
During the SEM analyses, we tested whether the three areas (information search, interpretation, and reflection) should be treated and interpreted as separate dimensions within reading comprehension skills or whether the use of a one-dimensional construct is recommended based on the data, i.e., whether it is sufficient to include performance on the test in the analyses. In the latter case, all the items on the test were classified as manifest variables into a common dimension, and the latent variable of reading text comprehension was constructed. In the case of the multidimensional model, we built the individual dimensions as latent variables from the items on the individual subtests as manifest variables. Based on the results, it can be concluded that, in all cases—on a construct and a dimension level—the third model fitted the data the best; that is, the effect of the development was also confirmed on a latent level.

4. Discussion

The study presents an online reading skills development program focused on developing comprehension, following a quasiexperimental design with a total of 276 third and fourth graders. We used a quasiexperimental procedure with propensity score matching to determine the impact of the development program by evaluating students’ comprehension scores. The goal was to eliminate the learning gap in reading skills accumulated during distance learning among students aged 9–11 using a curriculum-based, playful reading skills development program and to test the effect of the intervention immediately afterward and three months later for different groups of students.

4.1. RQ1. Changes in Reading Performance Compared to Students’ Original Reading Skills Level

In our first research question, we examined changes in performance relative to students’ original reading skill levels. Previous research has already examined the usability and effectiveness of technology-based education at the school level (including Sivin-Kachala and Bialo 2000; Barley et al. 2002; van Scoter and Boss 2004; James 2014) and the potential for developing comprehension with technology in addition to normal school teaching (e.g., Jenkins et al. 2017; Kloos et al. 2019; Campbell et al. 2022). These results bore out the success of technology-based reading development. Our results confirmed that the training program is suitable for improving comprehension performance in the sample. We found that the text comprehension of the students involved in the development improved by half a standard deviation (d = .51) after the completion of the development program, while there was no change in the skill level of the control group members (d = .03). In the three months between the post-test and the follow-up test, students only received school education, and we observed a positive change in the skill levels of both groups; that is, this period is sensitive to the development of comprehension. Since the students started from the same skill level before the experiment, we can conclude that the extracurricular development accelerated the development of the intervention group, whose members retained their marked advantage.

4.2. RQ2. Expanding the Impact of the Intervention According to the Initial Skill Level of the Students

As regards the second research question, which aimed to gain more knowledge about the efficacy of the intervention program, we examined its effect size according to students’ level of skill. Based on the results, we concluded that the intervention program was able to speed up the development of the students in the intervention group and that students in the two lower skill groups were the most affected by the training. The worst performing students (Skill Group 1) showed the greatest improvement, with a large intervention effect (d = 1.81), and the moderately performing students (Skill Group 2) a medium one (d = .92). The strongest performing students (Skill Group 3) showed the least improvement (d = .23). Overall, after the completion of the training program, there was a positive change in comprehension among the members of all three skill groups compared to the members of the control group; that is, their development accelerated. Based on the measurement three months later, it can be concluded that the lower- and higher-skilled intervention groups maintained their performance advantage, although this advantage decreased. Those with good skills were able to maintain their marked advantage, with their comprehension improving by more than two-tenths of a standard deviation (d = .24), and the students in the first skill group improved by an additional one-tenth of a standard deviation (d = .12). Our results are partially consistent with Campbell et al. (2022), who also found that their development was effective for students at the highest and lowest levels of the study. We consider these results particularly important, as students lacked in-person schooling for two school years, resulting in a significant learning gap (Engzell et al. 2021; Tomasik et al. 2021; Molnár and Hermann 2022). The positive changes in the performance of the intervention group suggest that the development program is also suitable for overcoming these disadvantages.

4.3. RQ3. Expanding the Effect Size of the Intervention on Disadvantaged Students

In our third research question, we examined the extent of the impact of the intervention on disadvantaged students. In a three-year longitudinal study, Macaruso et al. (2019) found that disadvantaged students experienced a slippage in reading performance each summer, which the students in the development program successfully overcame each year through supplementary Core5 lessons. Our results show that the text comprehension of the disadvantaged students involved in the development program improved by half a standard deviation (d = .53) after the completion of the program, while the control group developed by three-tenths of a standard deviation (d = .32). Based on our results three months later, we can conclude that due to accelerated development, the comprehension of the at-risk students involved in the development improved by two-thirds of a standard deviation (d = .66), while explicit school development produced no change in the lower-skilled groups. These results are consistent with other findings showing that effective interventions may be beneficial for at-risk learners (Connor et al. 2013; Lovett et al. 2017; Simmons et al. 2008; Macaruso et al. 2019), especially at the beginning of the school year, to make up for the summer slippage.

4.4. RQ4. Evaluating the Effect of the Intervention Program within the Latent Curve Modeling Framework

Our fourth question involved an evaluation of the impact of the intervention program in the latent curve modeling framework. The developmental power of the intervention program was confirmed by structural equation modeling analyses. Three different combinations of the no-change and latent change models were used in both the intervention and control groups. The best-fit trajectory (latent change model) and the significant positive latent slope factor of the intervention group confirmed the result obtained at the manifest level as regards the positive effect of training in both dimensions, while the students in the control group showed no significant change at the latent level. Importantly, our results also demonstrated that there were significant differences between students in their response to the training program, as indicated by the interaction between treatment and baseline.
In summary, the results indicate that this online training program can be considered a success. It develops third and fourth graders in a playful environment. The findings suggest that reading skills can develop significantly and effectively not only in the traditional in-person setting but also in a computer environment. The development program achieved its goal because it truly focuses on helping lower-skilled and/or disadvantaged groups catch up. Surprisingly, however, it also significantly facilitated the development of students in the higher-skilled group. This development program can therefore be used at the classroom level as a complement to school learning to accelerate the development of comprehension.

5. Limitations of the Study

The limitations of the study concern the sample and the methodology. The study used convenience sampling, as schools and classes joined on a voluntary basis, so the sample is not representative. A considerable number of students who completed the pretest dropped out during the development process, and exploring this attrition requires further research. Although the pairs of learners were matched according to certain criteria, no further background variables were considered, nor was the effect of the reading instruction method on development.

6. Conclusions

The study presents a reading skills development program for third and fourth graders using a quasiexperimental design. Based on our research results, we can conclude that our complex program designed to improve reading works effectively. The online development program accelerated development and helped students involved in the program gain a significant developmental advantage over their control group peers. The results also showed that the development of comprehension can take place in an online environment, which offers an objective form of measurement and development for teachers and students. The uniqueness of our program lies primarily in the fact that its content has been developed in line with the national curriculum and the recommended textbooks used in Hungary and can therefore be used both in class and in extracurricular activities. Second, it adapts to the needs and abilities of the students: its branching structure guides students to the right solution with helpful information, explanations, and highlights, so it is also suited to differentiated learning. Third, it is simple to use and does not require the presence of a specialist. Use of the online development program is not tied to a fixed schedule; it can be started at any time of the school year and at any time of day.

Author Contributions

Conceptualization, B.C. and G.M.; Data curation, K.S. and R.K.; Methodology, B.C. and G.M.; Project administration, R.K.; Software, K.S. and R.K.; Supervision, R.K. and G.M.; Validation, K.S.; Writing—original draft, K.S. and R.K.; Writing—review and editing, B.C. and G.M. All authors have read and agreed to the published version of the manuscript.

Funding

This study has been implemented with support provided by the Hungarian National Research, Development, and Innovation Fund, financed under the OTKA K135727 funding scheme, and supported by the Research Program for Public Education Development of the Hungarian Academy of Sciences (KOZOKT2021-16).

Institutional Review Board Statement

The study was approved by the Institutional Ethical Committee of the University of Szeged Doctoral School of Education (12/2021, 10 September 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available upon request due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The complex structure of the development program (own editing).
Figure 2. Frequency distribution of intervention and control group performance at three times: (1) pretest: before intervention; (2) post-test: after intervention; and (3) follow-up test: three months after intervention.
Figure 3. Comparison of student performance on the pre- and post-tests and on the pre- and follow-up tests.
Figure 4. Change in effect size by skill groups in the intervention group and control group. Skill groups (performance on pretest): 1: 0–34%; 2: 35–67%; 3: 68–100%.
Figure 5. The effect size of the development program in terms of disadvantage in the intervention and control groups.
Table 1. Gender distribution of the sample.

Group                Boys (%)   Girls (%)   Missing (%)
Intervention group   49.3       44.9        5.8
Control group        60.1       34.8        5.1
Total                54.7       39.9        5.4
Table 2. The development of the students in the intervention and control groups between the pre-, post-, and follow-up tests in standard deviation units on a manifest level.

Group                Test                        Development   SD      t       df    p      d
Intervention group   Pre- and post-tests         9.68          17.11   −6.65   137   <.01   .51
                     Post- and follow-up tests   1.58          14.85   −1.24   136   .22    .12
                     Pre- and follow-up tests    11.04         16.72   −7.72   136   <.01   .59
Control group        Pre- and post-tests         .67           18.09   −.43    137   .67    .03
                     Post- and follow-up tests   6.69          14.68   −5.33   136   <.01   .35
                     Pre- and follow-up tests    7.27          18.18   −4.68   136   <.01   .39
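
The gains in Table 2 come from paired-samples t-tests with effect sizes expressed in standard deviation units. A minimal sketch of how such statistics can be computed with SciPy follows; the data are simulated for illustration, and the exact standardizer the study used for d may differ.

```python
# Sketch: paired t-test and a standardized gain for a pre/post design.
# The scores are simulated, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(55, 17, size=138)           # pretest scores in percent
post = pre + rng.normal(9.7, 17, size=138)   # post-test scores in percent

t, p = stats.ttest_rel(pre, post)            # negative t indicates improvement here
gain = post - pre
d = gain.mean() / gain.std(ddof=1)           # mean gain in SD units of the gain scores
print(f"t({len(pre) - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```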
Table 3. Performance of students in the intervention and control groups according to the three skill groups on the pre-, post-, and follow-up tests.

Skill Group   Test             Group                N    M       SD      t      p
1             Pretest          Intervention group   23   22.26   10.43   −.95   .35
                               Control group        19   25.05   8.09
              Post-test        Intervention group   23   49.57   20.95   2.74   <.01
                               Control group        19   32.00   20.31
              Follow-up test   Intervention group   23   48.52   22.25   .46    .65
                               Control group        19   45.53   19.51
2             Pretest          Intervention group   69   54.84   9.85    −.90   .37
                               Control group        70   56.34   9.74
              Post-test        Intervention group   69   64.64   13.78   1.47   .14
                               Control group        70   60.63   17.99
              Follow-up test   Intervention group   69   66.43   17.17   −.54   .59
                               Control group        70   67.94   15.55
3             Pretest          Intervention group   46   79.74   6.39    −.08   .94
                               Control group        49   79.84   5.77
              Post-test        Intervention group   46   80.43   9.17    3.22   <.01
                               Control group        49   72.90   13.15
              Follow-up test   Intervention group   46   82.96   9.92    2.82   <.01
                               Control group        49   76.24   12.95
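
The t and p values in Table 3 compare the intervention and control groups within each skill group at each measurement point. A brief sketch of that breakdown, assuming a hypothetical CSV file with skill_group, group, and score columns:

```python
# Sketch: intervention vs. control comparison within each skill group.
# The file name and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("posttest_scores.csv")  # columns: skill_group, group, score

for skill, sub in df.groupby("skill_group"):
    intervention = sub.loc[sub["group"] == "intervention", "score"]
    control = sub.loc[sub["group"] == "control", "score"]
    t, p = stats.ttest_ind(intervention, control)  # independent-samples t-test
    print(f"skill group {skill}: t = {t:.2f}, p = {p:.3f}")
```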
Table 4. The performance of the students in the intervention and control groups according to the three skill groups on the pre-, post-, and follow-up tests with respect to disadvantage.

Disadvantaged Situation   Test             Group                N     Mean    SD      t       p
Nondisadvantaged          Pretest          Intervention group   104   62.38   18.47   −.22    .82
                                           Control group        109   62.94   17.74
                          Post-test        Intervention group   104   70.35   16.82   2.33    .02
                                           Control group        109   64.55   19.27
                          Follow-up test   Intervention group   104   72.27   18.93   .61     .55
                                           Control group        109   70.79   16.73
Disadvantaged             Pretest          Intervention group   34    43.41   23.53   −1.24   .22
                                           Control group        29    50.76   23.42
                          Post-test        Intervention group   34    58.35   16.69   2.11    .04
                                           Control group        29    47.86   22.74
                          Follow-up test   Intervention group   34    58.82   19.76   .46     .65
                                           Control group        29    56.59   18.57
Table 5. Development of students in the intervention and control groups between pre- and post-tests and between post- and follow-up tests on a manifest level in standard deviation units by skill group and disadvantaged situation.

Group                Disadvantaged Situation   Skill Group   Tests                       N    Development   SD      t       p     d
Intervention group   Nondisadvantaged          1             Pre- and post-tests         9    25.51         27.24   −2.81   .02   1.42
                                                             Post- and follow-up tests        2.92          23.24   .38     .72   .00
                                               2             Pre- and post-tests         55   11.25         14.31   −5.83   .00   .97
                                                             Post- and follow-up tests        1.67          14.24   −.87    .39   .10
                                               3             Pre- and post-tests         40   3.61          8.15    −2.80   .01   .45
                                                             Post- and follow-up tests        1.99          10.33   −1.22   .23   .16
                     Disadvantaged             1             Pre- and post-tests         14   31.48         21.29   −5.53   .00   2.04
                                                             Post- and follow-up tests        −2.23         22.82   .37     .72   .00
                                               2             Pre- and post-tests         14   9.52          9.38    −3.80   .00   .82
                                                             Post- and follow-up tests        .04           13.66   .01     .99   .00
                                               3             Pre- and post-tests         6    .62           4.33    .35     .74   .00
                                                             Post- and follow-up tests        3.86          6.32    −1.50   .20   .66
Control group        Nondisadvantaged          1             Pre- and post-tests         9    9.88          24.91   −1.19   .27   .58
                                                             Post- and follow-up tests        14.98         12.30   −3.65   .01   .62
                                               2             Pre- and post-tests         58   7.47          18.24   −3.12   .00   .53
                                                             Post- and follow-up tests        6.21          15.55   −3.04   .00   .37
                                               3             Pre- and post-tests         42   −5.03         12.16   2.68    .01   .00
                                                             Post- and follow-up tests        4.60          11.18   −2.67   .01   .37
                     Disadvantaged             1             Pre- and post-tests         10   5.56          17.22   −1.02   .33   .39
                                                             Post- and follow-up tests        10.05         12.03   −2.64   .03   .69
                                               2             Pre- and post-tests         12   2.78          14.91   .65     .53   .00
                                                             Post- and follow-up tests        8.22          11.27   −2.53   .03   .55
                                               3             Pre- and post-tests         7    −6.88         12.55   1.45    .20   .00
                                                             Post- and follow-up tests        −3.00         21.30   .37     .72   .00
Table 6. Goodness-of-fit indices for the tested models on test level—between T1 and T2.

Model                                                                                 χ²      df   AIC      BIC      CFI    TLI    RMSEA [90% CI]
Latent change model for both of the groups                                            103.2   10   8837.8   8901.9   .842   .810   .268 [.222, .316]
No-change model for both of the groups                                                155.4   12   8885.9   8942.9   .757   .757   .303 [.262, .347]
No-change model for the control and latent change model for the intervention group    108.6   11   9331.7   9393.2   .842   .828   .255 [.212, .299]

Note: CFI = Comparative Fit Index; TLI = Tucker–Lewis Index; RMSEA = Root Mean Square Error of Approximation; CI = confidence interval; AIC = Akaike information criterion; BIC = Bayesian information criterion.
Table 7. Goodness-of-fit indices for the models tested on the test level in the first dimension (development between the pre- and post-tests).

Model                                                                                       χ²     df   AIC      BIC      CFI    TLI    RMSEA [90% CI]
Latent change model for both groups                                                         35.8   10   9902.7   9967.8   .928   .913   .137 [.091, .187]
No-change model for both groups                                                             54.9   12   9917.8   9975.6   .880   .880   .162 [.120, .206]
No-change model for the control group and latent change model for the intervention group    36.1   11   9900.9   9962.4   .930   .923   .129 [.084, .177]

Note: CFI = Comparative Fit Index; TLI = Tucker–Lewis Index; RMSEA = Root Mean Square Error of Approximation; CI = confidence interval; AIC = Akaike information criterion; BIC = Bayesian information criterion.
Table 8. Goodness-of-fit indices for the models tested on the test level in the second dimension (development between the pre- and follow-up tests).

Model                                                                                       χ²      df   AIC      BIC      CFI    TLI    RMSEA [90% CI]
Latent change model for both groups                                                         144.0   10   8671.9   8737.0   .925   .911   .313 [.269, .359]
No-change model for both groups                                                             149.2   12   8671.8   8733.2   .924   .918   .299 [.257, .343]
No-change model for the control group and latent change model for the intervention group    145.8   11   8671.2   8731.0   .925   .924   .289 [.249, .331]

Note: CFI = Comparative Fit Index; TLI = Tucker–Lewis Index; RMSEA = Root Mean Square Error of Approximation; CI = confidence interval; AIC = Akaike information criterion; BIC = Bayesian information criterion.
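
Tables 6–8 compare a latent change model against a no-change baseline via χ², AIC/BIC, and the CFI/TLI/RMSEA indices. The tables do not name the SEM software used; as one possible open-source route, the sketch below fits a simple two-wave latent change model with the semopy package. The model syntax and the parcel names (x1, x2 at T1; y1, y2 at T2) are assumptions for illustration, not the study's actual specification.

```python
# Sketch: a two-wave latent change model in semopy (pip install semopy).
# Variable names and the input file are hypothetical; the study's model
# specification is more elaborate than this.
import pandas as pd
import semopy

data = pd.read_csv("reading_parcels.csv")  # parcels x1, x2 (T1) and y1, y2 (T2)

DESC = """
pre =~ 1*x1 + x2
post =~ 1*y1 + y2
change =~ 1*post
post ~ 1*pre
post ~~ 0*post
"""
# "change =~ 1*post" together with "post ~ 1*pre" and a residual variance of
# post fixed at zero expresses post = pre + change, the classic latent change
# score parameterization.

model = semopy.Model(DESC)
model.fit(data)
stats = semopy.calc_stats(model)
print(stats[["DoF", "chi2", "CFI", "TLI", "RMSEA", "AIC", "BIC"]])
```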
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
