The purpose of this article is to present three related studies that build on each other to demonstrate first the need for and then the efficacy of the Blended Arithmetic Curriculum (BAC) in helping students overcome both slow language processing and the environmental effects of being a student in an urban school district. Study A was a Teacher Action Research study conducted by Oldrieve. Study B was a replication study of Study A. Study C was designed to further demonstrate the efficacy of the Blended Arithmetic Curriculum in improving mathematical computational fluency and accuracy, as well as to show that these gains would also improve results on more expansive school district proficiency assessments. Both Study B and Study C were overseen by Dr. Barbara R. Schirmer, Oldrieve's dissertation co-chair, and by Dr. Donald Scipione, a former physicist who had become president of a firm that programmed online shift-scheduling applications for trucking companies and physicians and had begun branching into elementary school academic content.
The Blended Arithmetic Curriculum was developed by Oldrieve to help students whose IEPs indicate learning disabilities in K–3 math. Additionally, the participating students were enrolled in urban, multi-ethnic, low socio-economic school districts, and the study was designed to determine whether the Blended Arithmetic Curriculum could benefit students enrolled in Urban Learning Disabilities (ULD), Urban General Education (UGE), and Suburban General Education (SGE) classrooms. The goal was to help slow-language-processing individuals avoid falling behind in K–3 math in the first place.
The results of the Teacher Action Research study were first presented to the superintendent of the urban district and then used to help Oldrieve convince Donald Scipione, the president of a local internet software firm with a background in writing successful local and national grants, to team with him to develop the Blended Arithmetic Curriculum into a complete program with student workbooks, teacher plan books, computer-based conceptual activities, and professional development sessions. Scipione's ultimate goal was to take the paper-and-pencil Blended Arithmetic Curriculum and develop computer programming so that an online version could individualize instruction for every student. Clements, Sarama, Baroody, and Joswick [1] have pointed out that students do better with "Learning Trajectories" if what they are taught is only one learning trajectory above their current knowledge, which is exactly what an assessment expert and a well-designed computer program can determine. Furthermore, Clements, Vinh, Lim, and Sarama [2] argue that learning trajectories targeting STEM education are particularly effective for students who live in poverty, are members of linguistic and ethnic minority groups, or are children with disabilities.
Scipione and Oldrieve, with the oversight of Barbara R. Schirmer, a member of Oldrieve's dissertation committee for a study of Oldrieve's phonemic awareness, spelling, and writing program, then conducted a replication study of the original Teacher Action Research project. When the larger replication study found that a different set of second-grade Urban General Education students achieved results similar to those of the UGE second graders in the original study, Scipione and Oldrieve were able to convince principals and teachers to sign on to three local grants that would allow teachers and students to test-pilot the Blended Arithmetic Curriculum. In turn, the results of the test-pilot program helped Scipione's internet software firm win a four-year Small Business Innovation Research (SBIR) grant from the Special Education division of the U.S. Department of Education.
Theoretical Foundation of the Blended Arithmetic Curriculum
While taking an introductory class in Animal Behavior as an undergraduate, Oldrieve began speculating that differences in cognitive processing speed are a key factor in different special education categorizations such as specific learning disabilities (SLD) and autism. Furthermore, the author speculated that being slow at language processing would be a disadvantage when trying to memorize arithmetic facts but would be counterbalanced by the benefit of being able to solve complex problems.
Wiig, Semel, and Nystrom [8], as well as many other researchers such as Denckla and Rudel [9,10,11]; Fawcett and Nicolson [12]; Korhonen [13,14]; Spring and Perry [15]; Torgesen [16]; and Wolff, Michel, and Ovrut [17], had long ago noticed that slow times on Rapid Automatic Naming (RAN) of Objects assessments predicted failure in early language development, early reading, and many other learning differences, and it was no surprise when the National Early Literacy Panel [18] combined the findings of many researchers and confirmed this relationship.
In an extensive literature search into the brain-based structural reasons for this predictive power, the author found that MacLeod [19] speculated that differences in Rapid Automatic Naming abilities might be accounted for by the fact that some individuals devote more "parallel processors" to a given problem than others, and those who devote more parallel processors might take longer to solve said problem.
The parallel processing hypothesis suggests that there are subtle advantages to slow processing that can be missed by educators because of the obvious disadvantages. For example, in early literacy and numeracy, an individual who tends to use many parallel processors to solve a problem might be quite creative at storytelling and very good at comprehension and mathematical problem solving, but these advantages will get lost if the student fails to learn the arithmetic facts and computation necessary for solving algebra problems. Or, as in Oldrieve's dissertation study [20] on the effectiveness of a pattern-based method for helping Urban General Education kindergarteners learn phonemic awareness, spelling, and writing [21], slow language processors will not reach their potential in learning how to read and write literature. In theory, if a teacher and/or curriculum modification helps to teach these slower-processing students phonics and computation, then the students' comprehension, creativity, and problem solving might become more obvious.
Bodner and Guay [22] speculated that a slow-language-processing individual who was exposed to a "nouveau" type of problem they had never solved before would take longer to solve it than someone who was typically a fast processor. Nonetheless, Bodner and Guay reasoned that, like their fast-processing peers, slower-processing individuals would tend to be fast and efficient at solving routine tasks they had seen and solved before. In MacLeod's model, slower-processing individuals who had learned how to solve a certain type of problem would need fewer processors for problems they had already developed a routine to solve. In contrast, Oldrieve speculates that there is a tipping point where, if the problem is complex enough, the fast-processing individual may not be able to solve it no matter how much time they are given, while the slow-processing individual could solve it given unlimited time.
When first hired as an assistant professor of K–3 Literacy Assessment as well as K–3 Phonics and Word Study, Oldrieve joined the COSMOS Teaching and Learning Community for math and science professors and graduate students in order to better understand the brain science behind learning differences. A year later, a subset of the group began to look at the role of 3-D visualization ability in learning college-level math and science, as had been done by previous researchers [23,24]. The science and math professors, as well as the science and math education professors, ultimately wanted to test-pilot interventions that would increase the 3-D visualization ability of some individuals so that they could become more proficient at complex mathematical and scientific reasoning [25,26]. To serve as a pre- and post-test, the research team settled upon Bodner and Guay's [22] Purdue Spatial Visualization Test: Rotations (PSVT:R).
Oldrieve, drawing on his previous dissertation work, then suggested that the team add Wiig, Semel, and Nystrom's [8] RAN of Objects assessment to the test battery. Fortunately, others in the sub-group thought administering the combination of the PSVT:R and RAN of Objects using paper-and-pencil answer sheets would be worthwhile. Subsequently, the governing board of the Northwest Ohio Consortium of Math and Science Educators awarded a USD 3000 grant to the COSMOS research team.
Members of the COSMOS Teaching and Learning Community, along with 8 to 10 of the Reading Department's graduate students, tested 391 students enrolled at a Midwest community college and a Midwest university. The students were enrolled in 24 sections of 9 different classes. Each class was presented with the same MS PowerPoint presentation, which included instructions for the participating students, an embedded count-up timer for the RAN of Objects, and a count-down timer for the PSVT:R. Professors were not supposed to administer the assessments to their own classes; instead, they were to go into a fellow researcher's class and administer the assessments, while the Reading Department's graduate students helped collect and eventually score the assessments. (For a deeper description of the assessment procedures and protocols, see Richard M. Oldrieve and Cynthia Bertelsen's article [7].)
As was predicted and expected by all the members of the team, students enrolled in junior- and senior-level math and science classes tended to have higher scores on the measure of 3-D visualization ability, the PSVT:R, than most other participants, though the arts classes of Music Theory IV, printmaking, and 3-D art also scored above the grand mean on the PSVT:R. In fact, the class averages for all of the STEAM classes were above the minimum score of 60 that Sorby [24] and her colleagues Medina, Gerson, and Sorby [25], as well as Boersma, Hamlin, and Sorby [26], have found to be necessary to do well in engineering courses.
Furthermore, as was hypothesized by Oldrieve, the COSMOS team's study found a statistically significant multiplier: for each additional second a university or community college student took on the RAN of Objects, the odds increased that the student would earn a higher final score in the class in which they were tested.
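To make the reported relationship concrete, the following is a minimal sketch, in Python, of how such a per-second odds multiplier could be estimated with a logistic regression. The study's actual model, outcome coding, and data are not reproduced here, so every value and variable name in the snippet (ran_seconds, high_score, the simulated effect size) is hypothetical and for illustration only.

```python
# Hypothetical sketch only -- not the COSMOS team's actual analysis. It shows
# how a per-second "odds multiplier" for RAN of Objects times could be
# estimated with a logistic regression; all data below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 391  # matches the reported sample size, but the values are simulated

ran_seconds = rng.normal(55.0, 10.0, n)  # hypothetical RAN of Objects times
# Simulate a small positive effect of slower RAN times on course outcomes.
p_high = 1.0 / (1.0 + np.exp(-(-4.0 + 0.06 * ran_seconds)))
high_score = rng.binomial(1, p_high)  # 1 = earned a high final score

X = sm.add_constant(ran_seconds)
fit = sm.Logit(high_score, X).fit(disp=False)

# exp(slope) is the multiplicative change in odds per additional second.
print(f"Estimated odds multiplier per second: {np.exp(fit.params[1]):.3f}")
```

In a sketch like this, an odds multiplier above 1.0 would correspond to the reported pattern: slower RAN of Objects times associated with greater odds of a higher final score.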
Additionally, when PSVT:R vs. RAN of Objects scatterplots were created for future teachers in each of the four high school content areas (science, math, language arts, and social studies), the results separated into three different quadrants:
Future high school science teachers tended to be highly visual and fast language processors.
Future high school math teachers tended to be highly visual and slow language processors.
Future high school English Language Arts (ELA) teachers tended to be low visual and slow language processors.
Furthermore, especially considering the small sample size and an alpha of 0.1, these differences were statistically significant, ranging from moderate to very strong, along the dimensions that were relevant (see Table 1). For example, future science and math teachers were similar in visualization but significantly different in language processing speed; future math and language arts teachers were similar in language processing speed but differed significantly in visualization; and future science and language arts teachers were significantly different from each other in both language processing speed and visualization ability. Supporting the theory that slow language processors might struggle early on with phonics and memorizing math facts due to their slow processing speed is the observation that future math teachers would go on to teach the languages of algebra, geometry, trigonometry, calculus, and/or statistics, while future language arts teachers would become good at teaching the languages of reading and writing literature.
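To illustrate the kind of pairwise comparisons summarized in Table 1, here is a minimal sketch assuming simulated scores and Welch's t-tests at the stated alpha of 0.1. The study's actual statistical procedure, sample sizes, and score distributions are not specified in this section, so every number in the snippet is hypothetical; the group means merely mimic the reported quadrant pattern.

```python
# Hypothetical sketch of the pairwise group comparisons described above.
# All scores are simulated; only the quadrant pattern is taken from the text:
# science = high visual/fast, math = high visual/slow, ELA = low visual/slow.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {
    "science": {"psvt": rng.normal(68, 6, 12), "ran": rng.normal(48, 5, 12)},
    "math": {"psvt": rng.normal(67, 6, 12), "ran": rng.normal(58, 5, 12)},
    "ela": {"psvt": rng.normal(55, 6, 12), "ran": rng.normal(59, 5, 12)},
}

ALPHA = 0.1  # the alpha reported for the small-sample comparisons
for a, b in combinations(groups, 2):
    for measure in ("psvt", "ran"):
        # Welch's t-test: does not assume equal variances across groups.
        _, p = stats.ttest_ind(
            groups[a][measure], groups[b][measure], equal_var=False
        )
        verdict = "significantly different" if p < ALPHA else "similar"
        print(f"{a} vs. {b} on {measure}: p = {p:.3f} ({verdict})")
```

Run on data with the reported quadrant pattern, such comparisons would show science and math differing on RAN but not PSVT:R, math and ELA differing on PSVT:R but not RAN, and science and ELA differing on both.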