Article

Learning About Alphabetics and Fluency: Examining the Effectiveness of a Blended Professional Development Program for Kenyan Teachers

by Noah Battaglia 1, Eileen Wood 1,*, Alexandra Gottardo 1, Livison Chovu 2, Clifford Ghaa 2, Edwin Santhosh 1, Natasha Vogel 1, Anne Wade 3 and Philip C. Abrami 3
1 Department of Psychology, Wilfrid Laurier University, Waterloo, ON N2L 3C5, Canada
2 Aga Khan Academy, Mombasa P.O. Box 90066-80100, Kenya
3 Centre for the Study of Learning and Performance, Concordia University, Montreal, QC H3G 1M8, Canada
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(6), 709; https://doi.org/10.3390/educsci15060709
Submission received: 25 February 2025 / Revised: 2 June 2025 / Accepted: 4 June 2025 / Published: 6 June 2025

Abstract: This study examined the effectiveness of an 18-week blended online teacher professional development (TPD) program for Kenyan in-service teachers. Teachers also received instruction in the use of an evidence-based early literacy software program for children. The 94 teachers completed two professional development training modules (alphabetics and fluency) and four surveys (one before and one after each module). Surveys assessed teachers’ confidence and knowledge consistent with the primary elements of the TPACK model (i.e., content, pedagogy, technology). Knowledge gains were observed for fluency content, but not alphabetics content. Across the program, there were gains in pedagogical knowledge and teachers’ confidence. Given the importance of technology in the present study, additional analyses examined the intersections of the key elements with technology. Outcomes supported the importance of technological pedagogy for the overarching integrated TPACK model. Overall, the TPD program and accompanying course material provided some support for teachers who struggle with literacy instruction.

1. Introduction

Early acquisition of literacy skills predicts subsequent gains in literacy attainment and educational success and provides societal benefits related to occupational, health, and economic gains (DeWalt et al., 2004; Nie et al., 1996). However, fewer than 50% of all Kenyan school students currently meet minimum proficiency requirements in English literacy (The World Bank, 2022). Kenya has two official languages: Kiswahili and English (Kenya Law Reform Commission, 2023), with English serving as the language of instruction after Grade 3. Thus, training in foundational early literacy skills, including alphabetics (e.g., phonological awareness, phonemic awareness, and phonics), fluency, and reading comprehension in English is critical for Kenyan students’ future success. This training is best achieved through formal instruction at school by teachers knowledgeable about how to teach early literacy skills effectively (Spernes & Ruto-Korir, 2018). As such, the present study examines the efficacy of an early literacy teacher professional development program (focusing on alphabetics and fluency) offered in a blended format (online and in-person) to support Kenyan teachers in their delivery of early literacy instruction.

1.1. Kenyan Context

In recognition of the need to improve literacy countrywide, the Kenyan government introduced two innovative education initiatives: the Digital Literacy Program and the new Basic Education Curriculum Framework (BECF; Kenya Institute of Curriculum Development, 2019). The Digital Literacy Program is a multiphase program initiated in 2013 (Keya-Shikuku, 2021). Its ongoing goals involve significant investment in infrastructure to support the use of technology as a teaching and learning tool. Goals included enhanced online connections and the introduction of computer devices (i.e., laptops, tablets) within schools to make quality literacy education more widely available (Barasa, 2021). Realizing this goal, however, has been challenging. Teachers have borne the burden of acquiring the skills to integrate technology in their classrooms (Kerkhoff & Makubuya, 2021), while dealing with a limited number of devices, large class sizes (Heinrich et al., 2019), and limited technological support (Kirimi, 2014).
In addition, teachers face the demands of learning and integrating the new Basic Education Curriculum Framework (BECF; Kenya Institute of Curriculum Development, 2017). The BECF introduces a new curriculum which marks a significant shift in pedagogy from a predominantly transmission-based model to one that is student centered, promotes active learning approaches, and has a focus on inclusion. This new curriculum also prioritizes the development of skills and competencies that are relevant to students’ futures in the workforce (Sifuna & Obonyo, 2019). Again, teachers have borne the burden of acquiring the knowledge and skills to be able to integrate these significant changes to content, pedagogy, and methods as part of their teaching practice.
The adoption of the new BECF curriculum, in addition to the push toward digital learning, necessitates opportunities for teachers to receive professional development training to ensure teachers have the resources, knowledge, and support needed to promote teacher agency. Providing such training, however, is a significant task given the number of teachers distributed over such a wide geographic region. Previous training attempts using a cascade model for teacher training have proven ineffective and inefficient (Bett, 2016). The present study explores the impact of teacher professional development using online blended instruction to inform teachers how best to use technology-based tools to support young children’s literacy skill development.

1.2. Models of Best Practice: Integrating Technology

The challenges associated with acquiring new domain knowledge and technological skills and integrating these within new pedagogies are significant but not insurmountable. Previous pilot studies suggest that professional development programs, which provide direct instruction about early literacy development as well as pedagogies about student-centered learning, can promote Kenyan teachers’ knowledge gains in these domains (e.g., Uribe-Banda et al., 2023). Difficulties in integrating technology as a teaching tool have been found in many contexts, including barriers to technology use by teachers in more technologically literate education systems (Pittman & Gaines, 2015). As a result, a large body of research has emerged in the past 20 years that recognizes the growing need to define and understand the construct of effective technology integration across diverse educational contexts.
Shulman (1986) identified the important distinction between mastery of curricular content (i.e., domain knowledge) and the understanding of how material is best communicated to students (i.e., pedagogical knowledge). He argued for the need for a category of knowledge that involves both content knowledge and pedagogical knowledge together, referred to as content–pedagogical knowledge. In the following decade, new types of technology were introduced to classrooms and among these were computers. Recognizing the new set of skills that technology required of teachers, Mishra and Koehler (2006) proposed a framework for navigating the shifting landscape of educational technology, referred to as the Technological Pedagogical Content Knowledge (TPACK) model. Their framework builds on Shulman (1986) by including a new primary knowledge domain, technological knowledge, in addition to content and pedagogical knowledge. While recognizing the importance of understanding technologies and how they work, TPACK argues that successful educational instruction requires knowledge of what types of technologies are most effective in communicating specific content and how to use them in a way that is consistent with appropriate pedagogical practice (Mishra & Koehler, 2006). Specifically, TPACK argues that content, pedagogical, and technological knowledge are all interrelated and simultaneously involved in the productive use of technology in educational settings.
The three primary knowledge domains, content, pedagogy, and technology, have unique and shared content in the TPACK model. Where the three primary knowledge domains overlap, they create secondary domains. These domains include pedagogical–content, technological–content, technological–pedagogical knowledge, and a final overlapping domain of technological–pedagogical–content involving the intersection of all three primary domains (Mishra & Koehler, 2006). The total structure of TPACK therefore involves seven knowledge domains. Thus, in order to develop a capacity for technological fluency in education, educators must develop technological competence as well as content and pedagogical knowledge.
The present study uses the TPACK model to explore Kenyan teachers’ self-efficacy and performance outcomes from a blended online teacher professional development program. Given the rapid expansion of technologies, the technology component of the TPACK model requires continuous consideration (Brantley-Dias & Ertmer, 2013). Technology integration within the context of the present study includes examining blended instruction as the method of instruction for teachers as well as teaching teachers how to integrate software as an instructional tool for their students.
To satisfy the need for effective and appropriate classroom software applications, developers have created evidence-based resources that follow both content and pedagogical aspects of the TPACK framework. For example, A Balanced Reading Approach for Children Designed to Achieve Best Results for All (ABRACADABRA, ABRA for short; CSLP, 2019) is a freely available software program that offers instructional activities in four main areas of literacy: alphabetics, fluency, reading comprehension, and writing. Developers of the program strongly advocate for purposeful classroom implementation and offer a specialized teacher professional development program freely available for teachers. Positive outcomes for students and teachers following use of the tool and training have been demonstrated in Canada (Savage et al., 2013), Australia (Wolgemuth et al., 2011), and Hong Kong (Mak et al., 2017). Effective use of ABRACADABRA in the classroom requires skill in navigating the program’s interface as well as a comprehensive understanding of its literacy content. ABRA’s activities provide varying levels of difficulty, which allows instruction to be tailored to the needs of individual students. This customizability, together with the emphasis on teachers’ decision making regarding how best to integrate the technology as part of ongoing instruction, resonates strongly with expectations that promote teacher agency. Biesta et al. (2015) identify teachers’ ability to actively “(shape) their work and its conditions—for the overall quality of education” (p. 624) as key to teacher agency. For this reason, ABRACADABRA is a valuable tool both for teaching literacy and for fostering teacher agency, especially when delivered in a student-centered context such as the one defined by the new curriculum (BECF) and consistent with the pedagogies provided through the teacher training program.
Through the use of an assessment report feature, teachers can receive summaries of student activity and performance within ABRA. This enables them to monitor student time spent completing specific ABRA activities, along with error detection, and make decisions regarding the instructional needs of individual students or within their class more generally. Teachers are able to differentiate their instruction for those students who might need additional or less exposure to one or more of the ABRA activities or, conversely, benefit more from teacher-led whole-class instruction. Availability of resources is one of several key elements impacting teachers’ agency as is the culture or values in the current context (Biesta et al., 2015). Thus, the combination of the software resources and the cultural shift to student-centered instruction in the teacher professional development programming provides the tools teachers can use to “shape” the instructional environment for students. In the present study, the teacher professional development program provided the additional resource of targeted instruction regarding alphabetics and fluency as these skills are foundational to early literacy development.
Children must acquire alphabetics knowledge (letter-sound knowledge, phonological and phonemic awareness) to learn to read (Ehri et al., 2001; Martinussen et al., 2015). However, teachers find it difficult to understand and teach concepts such as phonological awareness (Binks-Cantrell et al., 2012; Joshi et al., 2009). In addition, once children learn to read words, they must develop fluency in order to understand text (Paige, 2020). In the present study, the accompanying teacher professional development program for the alphabetics and fluency concepts was adapted to meet the needs of Kenyan teachers. Specifically, aspects of the new Kenyan curriculum, as well as supports for novice users of technologies as teaching tools, were incorporated. The design of the professional development program drew upon previous pilot studies, with revisions to accommodate suggested areas for improvement (Uribe-Banda et al., 2023). Thus, the present study examines Kenyan teacher outcomes when provided with two of the four revised online teacher training modules (alphabetics and fluency). The present study uses the TPACK framework as it permits all of the key elements (content, pedagogical, and technological knowledge) of the teacher professional development training to be examined together.

1.3. Present Study

The present study examines change in teacher knowledge and perceptions regarding early literacy development and skills following participation in a blended online teacher professional development program introducing content specific to alphabetics and fluency. In the context of Kenya’s new curriculum and technology investments, assessing teachers’ pre- to post-intervention knowledge will help to identify current needs and guide future intervention development. Because student-centered pedagogy and educational technology are both recent introductions to Kenya’s school system, the guiding principles of TPACK are highly relevant. Specifically, the question of whether knowledge integration necessarily follows the development of basic competencies is of critical importance to the success of Kenya’s initiatives and for the promotion of teacher agency going forward. Thus, a key goal was to observe changes in teachers’ perceived knowledge, confidence, and comfort for teaching the two early literacy domains of alphabetics and fluency, as well as the pedagogies that support learning and use of technologies such as ABRA.
The overarching research question was: are there pre-test/post-test changes in alphabetics and fluency knowledge following participation in the professional development program? Given previous research, it was expected that explicit instruction and training regarding the development of early literacy skills would yield pre-test to post-test gains in alphabetics knowledge (H1). It was also expected that the program would yield learning gains in teachers’ fluency knowledge (H2).
An additional research question explored the integration of the three core primary domain areas of the TPACK model, that is, teachers’ technological, pedagogical, and literacy-content knowledge. Based on the intersecting structure of the TPACK model (Mishra & Koehler, 2006), strong relationships between primary and secondary domains were expected post-intervention (H3). Secondary knowledge domains were also expected to be strongly related to teachers’ integration when all domains overlap (i.e., technological–pedagogical–content knowledge), as indicated by the TPACK model (Mishra & Koehler, 2006). The present study introduced an online instructional tool for teacher learning and software that teachers could use for children’s learning with some in-person sessions. Given this focus on technology, combined with recent technology-based infrastructure changes introduced by the Kenyan government, examining the technology-related aspects of the TPACK model was an important criterion for the present study.
In summary, the present study examined three hypotheses related to the impact of a teacher professional development program for Kenyan teachers regarding early literacy instruction.

2. Method

2.1. Participants

The 94 (66 females, 28 males) in-service teachers were recruited from public schools in coastal and south-western regions of Kenya. Targeted regions were selected to ensure representation across urban, peri-urban, and rural communities. Following approvals from local county government agencies, surveys of the schools’ IT infrastructure were completed. Invitations were sent to the head teachers in those schools with the necessary IT infrastructure, who in turn solicited voluntary participation from their early primary teachers. Teachers’ ages ranged from 21–58 years (Mage = 37.55, SDage = 7.17), with females being slightly older than males (Mfemale = 38.66 vs. Mmale = 35.04; t(90) = 2.28, p = 0.025). Most participants had completed a university or college program (70.2%), with the remaining participants having completed some university (26.6%) or high school (1.1%). All participants had completed teacher’s college.
Teaching experience ranged from 1–35 years (Mexperience = 11.13 years, SDexperience = 7.05), with female teachers reporting more years of teaching experience than males (Mfemales = 12.08, Mmales = 8.89; t(89) = 2.00, p = 0.048). The number of students taught ranged from 17–126 while completing the alphabetics module (Mstudents = 57.78, SDstudents = 21.71) and 10–130 for the fluency module (Mstudents = 54.97, SDstudents = 21.97). Most participants (77.6%) spoke at least 3 languages.
Almost two thirds of the teachers (60.6%) reported previously taking an online course, while 74.5% reported previously taking a specialized reading instruction course. Fewer than a quarter of the teachers (22.3%) reported having prior experience using ABRA. This research was reviewed and approved by a university research ethics board, and all participants were treated in accordance with APA/CPA ethical principles.

2.2. Materials

Participants completed two sets of pre- and post-surveys, one set for the alphabetics module and one set for the fluency module. Pre-surveys were completed prior to training for each module and post-surveys were completed after training for each module. Only the alphabetics pre-survey assessed demographic information (i.e., age, gender, years teaching). Both the alphabetics and fluency pre- and post-surveys assessed content knowledge specific to the respective module. The pre- and post-alphabetics and post-fluency surveys assessed a combination of primary and secondary TPACK knowledge domains. Several self-report items included on the fluency surveys were adapted from previous research (Schmid et al., 2020; Bingimlas, 2018), and items on the alphabetics surveys were original.

2.2.1. Content Knowledge

The same measures were used in both the pre- and post-alphabetics surveys to test knowledge gains regarding alphabetics concepts. Three multiple-choice questions assessed knowledge of phonological awareness, phonemes, and phonics. Participants also completed a phoneme counting task. Together, these four performance measures constitute some of the core areas of foundational alphabetics knowledge and reflect the content presented in the first module. The four alphabetics items were aggregated for an alphabetics content performance score (maximum = 4).
Five multiple choice questions were used on both the pre- and post-fluency surveys to assess fluency knowledge gains. These involved defining fluency, accuracy, and partner reading, selecting effective strategies for teaching fluency, and identifying features of a book repository that would enhance promotion of early fluency instruction. These items were aggregated to create a fluency content knowledge performance measure (maximum score = 5).
In addition, four items assessed participants’ self-reported ability to understand, speak, and write in English using a 10-point scale (1 = not at all fluent to 10 = very fluent) on the pre-alphabetics survey. Reliability calculated using Cronbach’s α = 0.94 indicated a high level of internal consistency. Participants also rated three items assessing their knowledge of fluency using a 6-point scale (1 = I need a lot of additional knowledge to 6 = I have strong knowledge) on the post-fluency instruction survey. Reliability was good, Cronbach’s α = 0.84.
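The reliability coefficients reported throughout this section are Cronbach’s α. As a reference for how such coefficients are computed from an item-by-respondent score matrix, here is a minimal sketch with simulated data (illustrative only; the paper does not describe the authors’ analysis scripts):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)   # per-item sample variances
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated example: four items driven by one shared trait are internally
# consistent, so alpha should be high (close to the 0.94 and 0.84 values
# reported above only by coincidence of simulation settings).
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))                # shared latent trait
items = trait + 0.3 * rng.normal(size=(200, 4))  # four noisy indicators
print(round(cronbach_alpha(items), 2))
```

Items that measure the same construct produce correlated columns and hence a total-score variance much larger than the sum of item variances, which drives α toward 1.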

2.2.2. Pedagogical Knowledge

Two multiple choice questions on the pre- and post-alphabetics surveys tested pedagogical knowledge gains. These performance measures targeted teachers’ knowledge of student-centered pedagogy and included defining differentiated instruction and identifying four relevant components of self-regulated learning from a list of seven items. Two additional multiple-choice questions on the post-alphabetics survey measured pedagogical knowledge specific to the ABRA software (i.e., correctly categorizing an ABRA instructional activity and selecting the most effective teaching strategy for using ABRA in a classroom).
Four items adapted from Schmid et al. (2020) assessed self-reported pedagogical knowledge on the post-fluency survey. Reliability for these adapted measures was acceptable, α = 0.71. Participants indicated their agreement (1 = strongly agree to 5 = strongly disagree) with statements concerning the depth, intentionality, and flexibility of their teaching approach.

2.2.3. Technological Knowledge

Self-reported technological knowledge was assessed in the pre- and post-alphabetics and post-fluency surveys. In the pre-alphabetics survey, participants rated their level of comfort (1 = very uncomfortable to 5 = very comfortable) when using six different technological resources: the internet, computers/tablets, WhatsApp, online modules, email, and Zoom. Reliability was acceptable, α = 0.74. In the post-alphabetics instruction survey, six questions asked participants to rate their comfort regarding various navigational and problem-solving tasks when using the ABRA software (e.g., navigating the ABRA activities when I am teaching the children) on a 5-point scale (1 = not at all comfortable to 5 = extremely comfortable). Reliability was acceptable, α = 0.78. Participants also indicated the average number of times ABRA was typically used with their class per week.
Four items adapted from Schmid et al. (2020) assessed self-reported technological knowledge on the post-fluency survey in the current study. Reliability for these adapted measures was high, α = 0.93. Participants rated their knowledge of information and communication technologies (e.g., I can solve ICT related problems, I know websites about new technologies) using a 6-point scale (1 = I need a lot of additional knowledge to 6 = I have strong knowledge).

2.2.4. Technological–Pedagogical Knowledge

Self-reported technological–pedagogical knowledge was assessed through three questions on both the pre- and post-alphabetics surveys. Participants indicated their level of agreement (1 = strongly disagree to 5 = strongly agree) regarding their ability to supervise students using technology, use technology during lessons, and plan lessons using technology. These items together reflect key competencies for effectively integrating classroom technologies and demonstrate acceptable face validity. Reliability was acceptable in the pre-alphabetics survey (α = 0.78) but lower in the post-alphabetics survey (α = 0.52).
Four items adapted from Schmid et al. (2020) assessed self-reported technological–pedagogical knowledge in the post-fluency survey. Participants indicated their level of agreement (1 = strongly disagree to 5 = strongly agree) about the use of technology to support/enhance student learning. Reliability was good, α = 0.80.

2.2.5. Pedagogical–Content Knowledge

Self-reported pedagogical–content knowledge was assessed through 11 items in the pre- and post-alphabetics surveys. Reliability was high for both measures, α = 0.90/0.92, respectively. Participants rated their confidence (1 = not at all confident to 6 = very confident) in teaching 11 different literacy skills (e.g., segmenting/blending).

2.2.6. Technological–Content Knowledge

Six items adapted from Bingimlas (2018) assessed self-reported technological–content knowledge in the post-fluency survey. Participants rated their knowledge (1 = I need a lot of additional knowledge to 6 = I have strong knowledge) of how to use technology to study, explore, and illustrate fluency (3 items; e.g., I know websites with materials for studying fluency) and alphabetics (3 items; e.g., I know websites with materials for studying alphabetics) content (Cronbach’s α = 0.89/0.88 for each measure, respectively).

2.2.7. Technological–Pedagogical–Content Knowledge

Three items adapted from previous research (Schmid et al., 2020) assessed self-reported technological–pedagogical–content knowledge in the post-fluency survey. Participants indicated their agreement (1 = strongly disagree to 5 = strongly agree) with statements concerning their ability to effectively use technology to enhance literacy instruction (e.g., I can choose ABRA activities that enhance what I teach, how I teach, and what students learn; Cronbach’s α = 0.70).

2.3. Procedure

Teachers participated in two training modules totaling 18 weeks in duration. First, teachers completed the 12-week alphabetics teacher professional development module, followed by the 6-week fluency module. To increase contextual relevance, specific connections between the TPD content and the BECF English Language curricula were established with supplemental support material. Consistent with blended instruction, the blended format in the present study included three in-person sessions for each module: the initial, middle, and final sessions. All other sessions were completed online using web conferencing software, with WhatsApp being used for ongoing interactions with the facilitators and with the teachers’ colleagues. Additionally, teachers met with colleagues (and in some cases, their head teachers) within their school or within a cluster of schools to share lesson plans and implementation strategies related to the use of ABRA in their literacy teaching. Participants were grouped into three cohorts to allow for ease in delivery. Start dates were staggered across the three cohorts during the 2022 and 2023 Kenyan school years. For each module, the pre-survey was completed on the first day of instruction and the post-survey was completed on the last day or shortly thereafter. Surveys were completed independently online. TPD requirements included the completion of online quiz-styled scavenger hunts, designed to encourage interaction with the TPD instructional content and the activities within ABRA and READS, and the submission of lesson plans that integrated ABRA and/or READS. For example, scavenger hunts included a small number of questions varying in difficulty from factual to applied (e.g., “How many phonemes does the word ‘pitch’ have?” for the topic of phonemic awareness).
Teachers prepared a final teaching portfolio following an in-person showcase event at the end of each module, where participants came together to share experiences. Discounted data bundles were provided to participants to facilitate the completion of course requirements and surveys. For those teachers with unstable connectivity, TPD content could be exported in the form of PDF documents.

3. Results

3.1. Teacher Perceptions: Self-Reported Knowledge and Confidence

Teachers’ self-ratings regarding their English skills (i.e., content knowledge), confidence in teaching English (i.e., pedagogical–content knowledge), and ability to use technology in their classroom (i.e., technological–pedagogical knowledge) were converted to proportion scores to permit comparisons across measures (see Table 1). Overall, 94.9% of teachers provided ratings above the midpoint of the scale indicating fluency in English and a high degree of comfort across all types of technology and apps. Within the confidence measure for teaching English (Mpre-instruction = 0.84 to Mpost-instruction = 0.87), confidence ratings were highest for individual items assessing teaching English letter names (Mpre-instruction = 0.90, SD = 0.16) and teaching letter sounds (Mpost-instruction = 0.90, SD = 0.14) and lowest for instruction for writing stories (Mpre-instruction = 0.70, SD = 0.24) and teaching the mechanics of writing (Mpost-instruction = 0.80, SD = 0.17).
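The paper does not state the exact formula behind the proportion scores; one common rescaling consistent with converting ratings on different scales to a comparable 0–1 metric is min–max normalization, sketched here as an assumption rather than a description of the authors’ procedure:

```python
def to_proportion(rating, scale_min, scale_max):
    """Rescale a raw rating to a 0-1 proportion of its scale range.

    Hypothetical normalization: the paper reports proportion scores but
    does not specify its formula; min-max rescaling is one common choice.
    """
    return (rating - scale_min) / (scale_max - scale_min)

# A confidence rating of 5 on a 6-point (1-6) scale:
print(to_proportion(5, 1, 6))  # prints 0.8
```

Under this convention, ratings on 5-point, 6-point, and 10-point scales all map onto the same 0–1 range, permitting the cross-measure comparisons reported in Tables 1 and 2.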
After completing the fluency module, teachers reported high levels of pedagogical and technological–pedagogical knowledge, as well as confidence in their ability to use technology to effectively deliver fluency-content material to their students (i.e., technological–pedagogical–content knowledge), with means all at or above M = 0.91 (see Table 2). Teachers’ self-ratings for knowledge in the domains of technological, content (i.e., fluency), and technological–content knowledge, however, were lower (i.e., highest Mfluency content = 0.69). Ratings of technological knowledge were not normally distributed, with the largest shares of ratings falling equally at 0.33 (15.3%) and 0.83 (15.3%).
Given this distribution for self-ratings of technological knowledge, teachers were assigned to one of two groups based on their technological knowledge ratings and t-tests were conducted to investigate whether teachers with low technological knowledge and high technological knowledge provided different ratings of their technology-integrated knowledge (i.e., technological–content knowledge and technological–pedagogical knowledge). Teachers with ratings at or below the median (Mdn = 0.71) were categorized as less technologically fluent, and those with ratings above the median were categorized as more technologically fluent. The homogeneity of variance assumption was met for both analyses. Ratings of technological–content knowledge (M = 0.74, SD = 0.19) and technological–pedagogical knowledge (M = 0.95, SD = 0.11) were significantly higher for technologically fluent teachers in comparison to technologically non-fluent teachers (M = 0.44, SD = 0.19 and M = 0.89, SD = 0.13; t(61) = 6.26, p < 0.001, d = 1.58 and t(62) = 1.89, p = 0.032, d = 0.47, respectively).
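The median-split analysis described above can be sketched as follows. This is a schematic re-creation with simulated data, not the study data, and the function names are illustrative; SciPy’s `ttest_ind` returns a two-tailed p value, so a one-tailed test would halve it:

```python
import numpy as np
from scipy import stats

def median_split_ttest(tech_ratings, outcome):
    """Split respondents at the median of `tech_ratings`, then compare
    `outcome` between high and low groups with an independent-samples
    t-test (Student's form, assuming homogeneous variances)."""
    tech_ratings = np.asarray(tech_ratings, dtype=float)
    outcome = np.asarray(outcome, dtype=float)
    mdn = np.median(tech_ratings)
    low = outcome[tech_ratings <= mdn]    # less technologically fluent
    high = outcome[tech_ratings > mdn]    # more technologically fluent
    t, p = stats.ttest_ind(high, low, equal_var=True)
    # Cohen's d with pooled standard deviation
    pooled_var = (((len(high) - 1) * high.var(ddof=1)
                   + (len(low) - 1) * low.var(ddof=1))
                  / (len(high) + len(low) - 2))
    d = (high.mean() - low.mean()) / np.sqrt(pooled_var)
    return mdn, t, p, d

# Simulated example: the high-fluency group rates its integrated knowledge
# higher, so the test should detect a large group difference.
rng = np.random.default_rng(1)
ratings = np.concatenate([np.full(30, 0.3), np.full(30, 0.9)])
knowledge = np.concatenate([rng.normal(0.4, 0.1, 30),
                            rng.normal(0.8, 0.1, 30)])
mdn, t, p, d = median_split_ttest(ratings, knowledge)
print(mdn, round(t, 2), round(d, 2))
```

Median splits discard within-group variation in the split variable, which is one reason the comparison is descriptive rather than a model of the full rating distribution.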

3.2. Teacher Performance Outcomes: Pre- and Post-Instruction Differences

Pre-post performance did not yield expected increases (H1) for content performance scores in alphabetics (Mpre-test = 2.57, SDpre-test = 0.87; Mpost-test = 2.78, SDpost-test =1.00; t(57) = 1.18, p = 0.122, d = 0.15) but did yield gains (H2) for fluency knowledge (Mpre-test = 2.59, SDpre-test = 0.95; Mpost-test = 3.17 SDpost-test = 1.11; t(87) = 4.04, p < 0.001, d = 0.43). A third t-test indicated pre-post-instruction gains (Mpre-test = 0.90, SDpre-test = 0.63; Mpost-test = 1.19, SDpost-test = 0.65) in pedagogical knowledge (t(80) = 3.00, p = 0.002, d = 0.33).
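The pre-post comparisons above are paired-samples t-tests reported with Cohen’s d. A minimal sketch of that computation follows, using one common d convention (mean difference divided by the SD of the differences); the paper does not state which d variant was used, and the data below are invented:

```python
import numpy as np
from scipy import stats

def pre_post_change(pre, post):
    """Paired-samples t-test for pre- to post-instruction change, with
    Cohen's d computed as mean difference / SD of the differences
    (one common convention for repeated measures)."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    t, p = stats.ttest_rel(post, pre)   # two-tailed p value
    diff = post - pre
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d

# Invented scores for four teachers (pre, post):
t, p, d = pre_post_change([1, 2, 3, 4], [2, 4, 3, 6])
print(round(t, 2), round(p, 3), round(d, 2))
```

Pairing each teacher’s post-test with their own pre-test removes stable between-teacher differences from the error term, which is why the paired test is more sensitive than an independent-groups comparison of the same means.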
To allow for comparisons across domains over time, all performance measures were converted to proportion scores (see Table 3). Measures of alphabetics and fluency competence on the pre- and post-surveys were aggregated as a total content performance measure.
A 2 (Time: Pre-instruction, Post-instruction) × 2 (Domain: Content, Pedagogical) repeated measures ANOVA was conducted to assess differences in performance across time and domains. All assumptions were met and the ANOVA revealed a significant main effect of time (F(1, 50) = 7.99, p = 0.007, ηp2 = 0.14). As expected, teachers’ post-instruction performance scores were significantly higher than their pre-instruction performance scores. There was also a main effect of domain on performance scores (F(1, 50) = 8.25, p = 0.006, ηp2 = 0.14), such that teachers’ content knowledge performance scores were significantly higher than their pedagogical performance scores. There was no interaction of time and domain (F(1, 50) = 0.59, p = 0.447, ηp2 = 0.01).
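A 2 × 2 within-subjects design of this kind can be sketched with statsmodels’ `AnovaRM`. This is a minimal illustrative example on synthetic data (subject count chosen to match the F(1, 50) error degrees of freedom reported above; all scores and effect sizes are fabricated for illustration, not the study’s data).

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
n = 51  # yields the error df of 50 reported in the paper

# Hypothetical proportion scores for the 2 (time) x 2 (domain) within-subject design.
rows = []
for subj in range(n):
    base = rng.normal(0.5, 0.1)  # per-teacher baseline ability
    for time in ("pre", "post"):
        for domain in ("content", "pedagogical"):
            score = base + (0.05 if time == "post" else 0.0) \
                         + (0.05 if domain == "content" else 0.0)
            rows.append({"subject": subj, "time": time, "domain": domain,
                         "score": float(np.clip(score + rng.normal(0, 0.08), 0, 1))})
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA: main effects of time and domain plus interaction.
res = AnovaRM(df, depvar="score", subject="subject",
              within=["time", "domain"]).fit()
print(res.anova_table)
```

`AnovaRM` requires exactly one observation per subject per cell, so in practice listwise deletion of teachers with missing survey data (as the varying ns in the paper imply) would happen before the model is fit.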
A 2 (Time: Pre-instruction, Post-instruction) × 2 (Technological fluency: Low, High) mixed-model ANOVA was conducted to assess whether teachers with low and high technological fluency differed in their total content knowledge gains. Residuals were normally distributed, and the homogeneity of variance assumption was met. The main effects of time (F(1, 43) = 3.71, p = 0.061, ηp2 = 0.08) and technological fluency (F(1, 43) = 0.16, p = 0.688, ηp2 = 0.00), and the technological fluency by time interaction (F(1, 43) = 0.25, p = 0.619, ηp2 = 0.01), were all non-significant.

3.3. Correlations Among Domains

3.3.1. Alphabetics

Correlations for the pre- and post-alphabetics surveys are presented in Table 4 and Table 5, respectively. At pre-instruction, only two correlations were significant. Self-reported pedagogical–content knowledge was positively correlated with self-reported content knowledge and negatively correlated with content knowledge performance.
At post-instruction, self-reported technological–pedagogical knowledge was positively correlated with technological knowledge. However, the relationships among all other primary–secondary domain groupings were not significant. With respect to performance measure scores, teachers’ alphabetics performance was positively correlated with self-reported content knowledge. Technology use was positively correlated with self-reported technological knowledge but negatively correlated with pedagogical performance.

3.3.2. Fluency

Results of the post-fluency-instruction correlation analyses are presented in Table 6. Both self-reported technological–content knowledge and self-reported technological–pedagogical knowledge were positively correlated with their corresponding primary domains (i.e., technological and content knowledge; technological and pedagogical knowledge). Self-reported technological–content knowledge was also negatively correlated with technology use. Both secondary domains were positively correlated with self-reported technological–pedagogical–content knowledge. With respect to performance knowledge scores, content knowledge performance was not correlated with any domain.
The correlation between alphabetics and fluency content knowledge performance (r(78) = 0.08, p = 0.489) was not significant.

3.4. Predicting Secondary Domain Knowledge After Completing Both Modules

Technological knowledge and pedagogical knowledge were entered into a hierarchical regression analysis to determine primary domain knowledge predictors of technological–pedagogical knowledge (see Table 7). Initially, technological knowledge was entered in the first step, while pedagogical knowledge was entered in the second step. Together, the two variables explained a significant amount of the variance in technological–pedagogical knowledge, total R2 = 0.331, F(2, 53) = 13.14, p < 0.001. Technological knowledge was significant as the first step. Pedagogical knowledge explained 24.1% unique variance, β = 0.493, t(55) = 4.37, p < 0.001. For completeness, the order of entry of the variables was reversed, and technological knowledge explained 6.4% unique variance in technological–pedagogical knowledge, β = 0.254, t(55) = 2.25, p = 0.028.
Technological knowledge and content knowledge were entered into a hierarchical regression analysis to determine predictors of technological–content knowledge (see Table 8). Initially, technological knowledge was entered as the first step while content knowledge was entered as the second step. The two variables mentioned above explained a significant amount of the variance in technological–content knowledge, total R2 = 0.574, F(2, 59) = 39.74, p < 0.001. Technological knowledge was significant as the first step. Content knowledge explained 12.7% unique variance, β = 0.428, t(61) = 4.19, p < 0.001. For completeness, the order of entry of the variables was reversed. Technological knowledge was also a unique statistical predictor as the second step, β = 0.432, t(61) = 4.24, p < 0.001, predicting 13% unique variance.
Technological–pedagogical knowledge and technological–content knowledge were entered into a hierarchical regression analysis to determine predictors of technological–pedagogical–content knowledge (see Table 9). Initially, technological–pedagogical knowledge was entered as the first step, while technological–content knowledge was entered as the second step. The two variables explained a significant amount of the variance in technological–pedagogical–content knowledge, total R2 = 0.216, F(2, 58) = 7.99, p < 0.001. Technological–pedagogical knowledge was significant as the first step. Technological–content knowledge was not a significant predictor of unique variance (β = 0.128, t(60) = 1.06, p = 0.292). For completeness, the order of entry of the variables was reversed. Technological–pedagogical knowledge was a unique statistical predictor, β = 0.414, t(60) = 4.24, p = 0.001, predicting 15.9% unique variance as the second step.

4. Discussion

A key goal of the present study was to assess the effectiveness of a blended online teacher professional development program for improving teacher perceptions and knowledge related to two critical domains in early English literacy instruction, specifically alphabetics and fluency. Additionally, the study adopted the TPACK model (Mishra & Koehler, 2006) as a framework to assess relationships among the primary areas, specifically content, technology, and pedagogy, and secondary areas, which include the overlap of the primary areas that serve as the building blocks for successful integration of technology as a teaching and learning tool. Outcomes from the present study suggest that teachers’ skills, experiences, and confidence can be facilitated through exposure to the blended online teacher professional development program provided in the present study. Gains across domains, however, were not equivalent. Examination of primary and secondary domains consistent with the TPACK components suggests a more nuanced understanding may be necessary to identify areas of ongoing need.

4.1. Changes in Domain Knowledge

Previous pilot research by Uribe-Banda et al. (2023) suggested that exposure to a blended teacher professional development program can yield knowledge gains in key literacy domains such as alphabetics and fluency. Teachers in the present study did not follow this pattern of outcomes. Instead, significant gains were observed for content related to fluency but not for alphabetics. One explanation for this difference may be that the earlier research overestimated the efficacy of the teacher professional development for alphabetics instruction. Specifically, the gains noted in that pilot study were modest, moving participants from just below to just above the midpoint of the measured scales, and were based on a very small sample (Uribe-Banda et al., 2023). In the present study, which used similar measures with a much larger and more generalizable sample, mean alphabetics performance scores started just above the midpoint of the scale and ended only slightly higher, with the difference between the two scores not statistically significant. Our outcomes are consistent with a body of research suggesting that acquisition of key constructs related to phonological, phonemic, and grapheme–phoneme awareness may be especially challenging for teachers. Specifically, several studies suggest that many teachers, not just those in Kenya, lack sufficient knowledge of the underlying linguistic concepts needed to effectively teach early literacy (Binks-Cantrell et al., 2012; Joshi et al., 2009; Martinussen et al., 2015) as well as awareness of their actual phonological awareness skills (Carson & Bayetto, 2018). This explanation is consistent with Wawire (2021), who suggests that insufficient exposure to literacy concepts in pre-service education programs leaves teachers unprepared to teach literacy skills.
Consistent with previous literature, our findings suggest that teachers may need greater exposure, deeper and more thorough instruction, and more opportunities for hands-on practice with these constructs than was available through the professional development modules used in the present study. For example, a successful TPD program that targeted the use of phonics included 30 weeks of intervention and biweekly visits by mentors (Ehri & Flugman, 2018), suggesting that TPD in the area of phonological awareness might require intensive, long-term instruction. Additional ways of addressing the need for greater exposure to difficult constructs, such as the difference between phonological and phonemic awareness, would be important to explore in future research. For example, in future iterations of the present program it may be useful to incorporate additional videoconferencing sessions targeting these concepts. Alternatively, teachers may need more gradual exposure to the concepts over time through additional modules especially if the modules are leveled to permit teachers to advance according to their level of understanding of these concepts. Additional modules could provide more intensive instruction and practice, and this could be augmented if accompanied by regular mentoring.
While these findings suggest gaps in teacher knowledge, it is also important to consider that deep awareness of linguistic concepts may not be necessary for early literacy instruction. In the present study, exposure to foundational alphabetics constructs was extended both in terms of content presented and time given to cover this domain. These changes from previous teacher professional development designs, however, did not result in improved knowledge for teachers. Thus, it may be important to determine just how much and how deeply teachers need to understand constructs related to alphabetics to produce high-quality instruction. It could be that moderate awareness of these challenging linguistic underpinnings is sufficient to inform good teaching practice, especially when teachers have access to high-quality software teaching tools (e.g., ABRA). A deeper understanding of the theoretical underpinnings, although desirable theoretically, may not be necessary in terms of teaching practice.
Although teachers showed no gains in alphabetics knowledge, they did demonstrate knowledge gains in the domain of fluency. These gains reflect greater understanding of the constructs of speed, accuracy, and expression relative to those associated with alphabetics. However, it is also possible that our results reflect an order effect, where discussions involved in the coverage of alphabetics influenced subsequent acquisition of fluency content. Order of presentation in the professional development program was fixed, aligning with research showing that children must learn alphabetic skills before developing reading fluency. It may be useful in future research to teach these domains separately to assess their individual impact on teachers’ learning gains.

4.2. Teacher Perceptions and Performance

At the start of the teacher professional development, teachers reported a high level of confidence across all domains but scored poorly on all (content and pedagogical) performance measures. This disconnect between confidence and performance replicates previous research (Binks-Cantrell et al., 2012; Carson & Bayetto, 2018; Joshi et al., 2009; Uribe-Banda et al., 2023) where perceived skills are inflated relative to performance outcomes. Teachers in the present study were experienced and were engaged in teaching early literacy skills in their classrooms prior to participating in the program. Thus, teachers may have based initial confidence ratings on their perceived current success in the classroom. However, exposure to the content and feedback through assignments, quizzes, and discussions during the teacher professional development program could have highlighted gaps in their knowledge which could account for their lowered subsequent confidence ratings.
Interestingly, teachers with lower alphabetics scores reported greater confidence in providing literacy instruction to students. This finding suggests that teachers with a strong foundation of alphabetics knowledge are more likely to recognize the difficulty of communicating key literacy concepts effectively to students. In support of this interpretation, literacy-pedagogical confidence was not significantly related to literacy confidence after exposure to the course material. As emphasized by the TPACK framework, both content knowledge and pedagogical competence together involve flexible use of content expertise along with strategic teaching methods (Mishra & Koehler, 2006). Thus, the complexity of alphabetics concepts, which involves learning linguistics concepts, may have prevented teachers from developing knowledge specific to the content domain. Further evidence of this view was provided post-fluency-instruction, where perceptions of fluency and fluency-pedagogical competence were strongly correlated following significant performance gains.
At the outset of the study, teachers’ pre-instruction technological comfort was positively related to technological–pedagogical knowledge, suggesting that teachers who were comfortable using technology also felt comfortable supervising students and planning lessons using technology. However, although teachers generally reported a high degree of technological comfort pre-instruction, distinct groups characterized by lower and higher technological fluency ratings emerged following completion of both modules. A comparison of the two groups revealed significant differences in both measures of technology-integrated secondary knowledge (i.e., technological–pedagogical and technological–content). Less technologically fluent teachers reported less confidence in their teaching and fluency abilities when involving technology. This finding highlights the importance of developing a strong foundation of technological knowledge. As Kenyan policy further increases expectations of a technology-integrated classroom, it is critical that teachers receive adequate technological training to ensure the quality of instruction is maintained. An important next step would be to prepare technology training modules that range from basic (e.g., log in, familiarity with hardware, basic troubleshooting) to more sophisticated skills (e.g., flexible use of navigation tools, adapting software) to ensure all teachers have the skill sets needed to confidently use technologies.
Teachers’ use of ABRA in the classroom offered a different perspective on the relationship between technology and pedagogy. Following exposure to the alphabetics module, teachers who used ABRA more frequently scored lower on pedagogical performance. It is possible that, for lower-performing teachers, increased use of ABRA compensated for weaker teaching practices or poorer understanding of how to integrate the concepts and technologies. Reliance on using ABRA without having an understanding of the content or pedagogy may have limited teachers’ opportunities to practice instructional methods associated with student-centered pedagogy. Alternatively, ABRA may be highly valued as a teaching aid by teachers who are struggling to adapt to the new BECF curriculum. This perspective is further supported by the strong relationship between ABRA use and confidence on each self-evaluation measure, suggesting that use of supportive technology may encourage greater perceived teaching competence across domains.
Importantly, teachers’ pedagogical performance scores increased over the course of the alphabetics module. This is an important shift in knowledge especially in the Kenyan context as the student-centered pedagogical approach is a core aspect of Kenya’s new curriculum (BECF). These findings are encouraging as they indicate that the explicit instruction and practice offered through the training provided support for teachers in their transition toward translating key elements of the new curriculum into practical active-learning opportunities for children.

4.3. Predicting Self-Reported Secondary Domain Knowledge After Completing Both Modules

An important consideration in assessing the impact of the teacher professional development programming was determining how, and whether, specific aspects of learning were potentially driving observed changes in self-reported outcomes. Adopting the TPACK components as a foundation for assessing outcomes permitted a finer-grained analysis of what teachers were experiencing and how these experiences were related. In this study, both technological knowledge and content knowledge were significant predictors of, and accounted for a similar amount of variability in, technological–content knowledge.
Similarly, technological knowledge and pedagogical knowledge both contributed to technological–pedagogical knowledge. Together these outcomes support the integrative aspects of the TPACK components for the effective integration of technology as a teaching and learning tool.
Technological–pedagogical knowledge, but not technological–content knowledge, predicted unique variance in the full model encompassing technology, content, and pedagogy. Therefore, when examining the contributions of technology, pedagogy, and content knowledge for the implementation and use of a technology program, the ability to use technology to teach is crucial and appears to be more important than understanding the links between technology and content knowledge. These findings point to the contributions of an evidence-based, empirically validated software program, such as ABRA, in enhancing literacy instruction. The ability to implement a software program to teach early literacy skills might be the initial step to enhancing literacy skills and, more broadly, addressing global concerns such as the United Nations Sustainable Development Goal 4, which cites the need to provide inclusive, equitable, quality education (UNESCO, 2016). This software may be a particularly useful instructional tool for teachers who have challenges in relaying literacy content that requires a deep understanding of underlying literacy concepts (e.g., phonological processing).

4.4. Limitations and Future Directions

The present study draws upon a body of research indicating that online blended instructional approaches can serve as important instructional tools (Bicen et al., 2014; Means et al., 2013). However, the absence of a control group potentially limits our understanding of the full impact of the intervention. Future studies incorporating a no-exposure or waitlist control as a benchmark would provide additional confidence for interpreting study outcomes. It may also be useful to introduce varying levels of in-person contact as part of the blended model. This would allow for a better understanding of the optimal amount of support needed by teachers to maximize outcomes. In addition, the teachers in this study met with their colleagues within their school or across a cluster of schools to discuss ABRA-infused pedagogical experiences as the TPD unfolded. However, providing structured opportunities for teachers to engage in broader discussions about their teaching and their understanding of education in general, in which they could share these beliefs, would allow them to embed the professional development training within a broader professional context and could further facilitate their sense of agency going forward (Biesta et al., 2015).
The present study examined the current sample of teachers as a group even though there are individual differences including years of teaching experience, age, technological experience, and gender. It may be important in future research to explore the impact of these individual differences with respect to teachers’ perceptions and performance in the TPD to determine whether subgroups of teachers benefit more or less from aspects of such training. In addition to these quantitative data, it may also be useful to incorporate interviews with teachers following each module to gain a richer understanding of their experiences.

5. Conclusions

Teachers in Kenya, and beyond, constantly face changes to curriculum content, pedagogy, and ongoing advancements in technologies. As the tools for teaching and learning continue to shift there is a need to develop viable, accessible, evidence-based, and proven supports to help teachers address these changes. In Kenya, where academic content is delivered in English, effective early literacy skills instruction leading to reading success is critical (Spernes & Ruto-Korir, 2018). The present study provides partial support confirming the efficacy of a blended teacher professional development program for teachers who provide instruction in early literacy skills. However, findings also make it clear that some skills—especially those associated with alphabetics—require further support for teachers to attain mastery. Meeting the needs of teachers to integrate content, pedagogical practice, and technological tools is critical. The use of the TPACK model as a frame for designing and evaluating teacher professional development programs can provide insights regarding program strengths and weaknesses and thus may be an important tool for designing and evaluating future programs, especially those involving digital delivery or support tools. The present study demonstrated the importance of examining all components of the TPACK framework when evaluating the effectiveness of a teacher professional development program for building teachers’ knowledge and skills regarding the principles of early literacy instruction.

Author Contributions

Conceptualization: E.W., A.G., N.B., A.W. and P.C.A.; Methodology: E.W., A.G., N.B., A.W., P.C.A., L.C. and C.G.; Software: P.C.A., A.W., E.W. and A.G.; Analyses: E.W., A.G., N.B., E.S. and N.V.; Data Curation: E.W., A.G., N.B., E.S. and N.V.; Resources: E.W., A.G., N.B., E.S., N.V., A.W., P.C.A., L.C. and C.G.; Supervision: E.W., A.G., N.B., A.W., P.C.A., L.C. and C.G.; Administration: A.W., P.C.A. and C.G.; Writing—Original Draft: N.B., E.W., A.G. and N.V.; Writing—Review and Editing: N.B., E.W., A.G., E.S. and N.V.; Funding Acquisition: P.C.A., A.W., E.W. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

Funding came in part from the International Development Research Centre (IDRC) under the GPE-KIX (Knowledge Exchange) program (GPE-KIX: 109375-001) and from the Social Sciences and Humanities Research Council of Canada (SSHRC).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Review Board of Wilfrid Laurier University (protocol code 5502, 3 October 2022).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

De-identified data are available from the first author upon reasonable request.

Acknowledgments

We thank the teachers who participated in this research study for sharing their experiences with us.

Conflicts of Interest

The authors declare no conflicts of interest.

Note

1
Although individual difference measures were collected, the smaller sample size prohibited statistical analysis of the impact of these variables. See the Limitations and Future Directions section for further discussion.

References

1. Barasa, P. L. (2021). Digitalization in teaching and education in Kenya. International Labour Organization. Available online: https://www.ilo.org/wcmsp5/groups/public/---ed_dialogue/---sector/documents/publication/wcms_783665.pdf (accessed on 4 June 2025).
2. Bett, H. K. (2016). The cascade model of teachers’ continuing professional development in Kenya: A time for change? Cogent Education, 3(1), 1139439.
3. Bicen, H., Ozdamli, F., & Uzunboylu, H. (2014). Online and blended learning approach on instructional multimedia development courses in teacher education. Interactive Learning Environments, 22(4), 529–548.
4. Biesta, G., Priestley, M., & Robinson, S. (2015). The role of beliefs in teacher agency. Teachers and Teaching, 21(6), 624–640.
5. Bingimlas, K. (2018). Investigating the level of teachers’ knowledge in Technology, Pedagogy, and Content (TPACK) in Saudi Arabia. South African Journal of Education, 38(3), 1–12.
6. Binks-Cantrell, E., Washburn, E. K., Joshi, R. M., & Hougen, M. (2012). Peter effect in the preparation of reading teachers. Scientific Studies of Reading, 16(6), 526–536.
7. Brantley-Dias, L., & Ertmer, P. A. (2013). Goldilocks and TPACK: Is the construct ‘Just Right?’. Journal of Research on Technology in Education, 46(2), 103–128.
8. Carson, K., & Bayetto, A. (2018). Teachers’ phonological awareness assessment practices, self-reported knowledge and actual knowledge: The challenge of assessing what you may know less about. Australian Journal of Teacher Education (Online), 43(6), 67–85.
9. DeWalt, D. A., Berkman, N. D., Sheridan, S., Lohr, K. N., & Pignone, M. P. (2004). Literacy and health outcomes: A systematic review of the literature. Journal of General Internal Medicine, 19, 1228–1239.
10. Ehri, L. C., & Flugman, B. (2018). Mentoring teachers in systematic phonics instruction: Effectiveness of an intensive year-long program for kindergarten through 3rd grade teachers and their students. Reading and Writing, 31, 425–456.
11. Ehri, L. C., Nunes, S. R., Willows, D. M., Schuster, B. V., Yaghoub-Zadeh, Z., & Shanahan, T. (2001). Phonemic awareness instruction helps children learn to read: Evidence from the National Reading Panel’s meta-analysis. Reading Research Quarterly, 36(3), 250–287.
12. Heinrich, C. J., Darling-Aduana, J., & Martin, C. (2019). The potential and prerequisites of effective tablet integration in rural Kenya. British Journal of Educational Technology, 51(2), 498–514.
13. Joshi, R. M., Binks, E., Hougen, M., Dahlgren, M. E., Ocker-Dean, E., & Smith, D. L. (2009). Why elementary teachers might be inadequately prepared to teach reading. Journal of Learning Disabilities, 42(5), 392–402.
14. Kenya Institute of Curriculum Development. (2017). Basic education curriculum framework. Available online: https://kicd.ac.ke/wp-content/uploads/2017/10/CURRICULUMFRAMEWORK.pdf (accessed on 4 June 2025).
15. Kenya Law Reform Commission. (2023). Constitution of Kenya 7. National, official and other languages. Available online: https://www.klrc.go.ke/index.php/constitution-of-kenya/108-chapter-two-the-republic/173-7-national-official-and-other-languages (accessed on 4 June 2025).
16. Kenyan Institute of Curriculum Development. (2019). Competency based curriculum guidelines on parental empowerment and engagement. Kenyan Institute of Curriculum Development.
17. Kerkhoff, S. N., & Makubuya, T. (2021). Professional development on digital literacy and transformative teaching in a low-income country: A case study of rural Kenya. Reading Research Quarterly, 57(1), 287–305.
18. Keya-Shikuku, M. (2021). Digital literacy programme on course. Kenya News Agency. Available online: https://www.kenyanews.go.ke/digital-literacy-programme-on-course/ (accessed on 4 June 2025).
19. Kirimi, K. J. (2014). Impact of information communication technology on education—Kenya. Journal of Educational and Social Research, 4(1), 435–443.
20. Mak, B. S., Cheung, A. C., Guo, X., Abrami, P. C., & Wade, A. (2017). Examining the impact of the Abracadabra (ABRA) web-based literacy program on primary school students in Hong Kong. Education and Information Technologies, 22(6), 2671–2691.
21. Martinussen, R., Ferrari, J., Aitken, M., & Willows, D. (2015). Pre-service teachers’ knowledge of phonemic awareness: Relationship to perceived knowledge, self-efficacy beliefs, and exposure to a multimedia-enhanced lecture. Annals of Dyslexia, 65(3), 142–158.
22. Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1–47.
23. Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.
24. Nie, N. H., Junn, J., & Stehlik-Barry, K. (1996). Education and democratic citizenship in America. University of Chicago Press.
25. Paige, D. D. (2020). Reading fluency: A brief history, the importance of supporting processes, and the role of assessment. Available online: https://files.eric.ed.gov/fulltext/ED607625.pdf (accessed on 20 November 2024).
26. Pittman, T., & Gaines, T. (2015). Technology integration in third, fourth and fifth grade classrooms in a Florida school district. Educational Technology Research and Development, 63(4), 539–554.
27. Savage, R., Abrami, P. C., Piquette, N., Wood, E., Deleveaux, G., Sanghera-Sidhu, S., & Burgos, G. (2013). A (Pan-Canadian) cluster randomized control effectiveness trial of the Abracadabra web-based literacy program. Journal of Educational Psychology, 105(2), 310–328.
28. Schmid, M., Brianza, E., & Petko, D. (2020). Developing a short assessment instrument for Technological Pedagogical Content Knowledge (TPACK.xs) and comparing the factor structure of an integrative and a transformative model. Computers & Education, 157, 103967.
29. Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15, 4–14.
30. Sifuna, D. N., & Obonyo, M. M. (2019). Competency based curriculum in primary schools in Kenya: Prospects and challenges of implementation. Journal of Popular Education in Africa, 3(7), 39–50.
31. Spernes, K. I., & Ruto-Korir, R. (2018). Medium of instruction in school: The indigenous language, the national language or the official language? A case study from multilingual deep rural Kenya. Journal of African Languages and Linguistics, 39(1), 41–64.
32. The Centre for the Study of Learning and Performance. (2019). Literacy and numeracy within the learning toolkit+: Teacher guide (Kenya edition). Available online: https://literacy.concordia.ca/resources/common/assets/doc/LTK-TG-Kenya-E3-20190114.pdf (accessed on 1 January 2024).
33. The World Bank. (2022). Kenyan economic update: Aiming high-securing education to sustain the recovery. Available online: https://documents1.worldbank.org/curated/en/099430006062288934/pdf/P17496106873620ce0a9f1073727d1c7d56.pdf (accessed on 4 June 2025).
34. UNESCO. (2016). UN sustainability goals. Department of Economic and Social Affairs: Sustainable Development.
35. Uribe-Banda, C., Wood, E., Gottardo, A., Biddle, J., Ghaa, C., Iminza, R., Wade, A., & Korir, E. (2023). Assessing blended and online-only delivery formats for teacher professional development in Kenya. Cogent Education, 10(1), 1–16.
36. Wawire, B. A. (2021). Promoting effective early grade reading: The case study of primary teachers’ preparation programmes in Kenya. The Curriculum Journal, 32(2), 247–268.
37. Wolgemuth, J., Savage, R., Helmer, J., Lea, T., Harper, H., Chalkiti, K., Bottrell, C., & Abrami, P. (2011). Using computer-based instruction to improve indigenous early literacy in northern Australia: A quasi-experimental study. Australasian Journal of Educational Technology, 27(4), 727–750.
Table 1. Means and Standard Deviations for Self-reported Pre- and Post-Alphabetics Instruction Measures (maximum score of 1).

Knowledge Domain                                      Pre-Alphabetics   Post-Alphabetics
                                                      Mean (SD)         Mean (SD)
Technological (General)                               0.88 (0.10)       -
Technological (ABRA)                                  -                 0.79 (0.13)
Content (English skills)                              0.84 (0.16)       0.88 (0.15)
Pedagogical–content (Confidence teaching English)     0.84 (0.13)       0.87 (0.11)
Technological–pedagogical                             0.86 (0.20)       0.87 (0.14)
Table 2. Means and Standard Deviations for Self-reported Post-Fluency Instruction Measures.

Knowledge Domain                        Post-Fluency-Instruction
                                        Mean (SD)       n
Technological (General)                 0.64 (0.23)     72
Pedagogical                             0.92 (0.10)     71
Content (Fluency)                       0.69 (0.12)     86
Technological–content                   0.60 (0.24)     77
Technological–pedagogical               0.93 (0.12)     73
Technological–pedagogical–content       0.91 (0.12)     81
Table 3. Means and Standard Deviations for Pre- and Post-Instruction Performance Proportion Scores.

Knowledge Domain        Pre-Instruction   Post-Instruction
                        Mean (SD)         Mean (SD)
Aggregated Content      0.59 (0.14)       0.66 (0.17)
Alphabetics             0.63 (0.22)       0.70 (0.24)
Fluency                 0.52 (0.19)       0.63 (0.22)
Pedagogical             0.45 (0.31)       0.58 (0.33)
Table 4. Pre-alphabetics Survey Correlations.

Measures                          1          2         3          4        5       6
1. Technological (General)        1
2. Content                        −0.247     1
3. Pedagogical–content            0.018      0.456 **  1
4. Technological–pedagogical      0.556 **   −0.013    0.081      1
5. Alphabetics (performance)      −0.066     −0.228    −0.314 *   0.029    1
6. Pedagogical (performance)      0.091      0.086     −0.106     −0.107   0.085   1
* p < 0.05; ** p < 0.001.
Table 5. Post-alphabetics Survey Correlations.

Measures                          1          2          3          4         5        6        7
1. Technological (ABRA)           1
2. Technology Use                 0.432 **   1
3. Content                        0.374 **   0.081      1
4. Pedagogical–content            0.403 **   0.311 **   0.205      1
5. Technological–pedagogical      0.471 **   0.252 *    0.267 *    0.354 **  1
6. Alphabetics (performance)      0.184      0.103      0.420 **   −0.041    0.079    1
7. Pedagogical (performance)      0.051      −0.251 *   0.158      0.025     0.032    0.248 *  1
* p < 0.05; ** p < 0.001.
Table 6. Post-Fluency-Instruction Correlations.

Measures                                  1         2          3          4         5         6       7
1. Technological (use)                    1
2. Pedagogical                            0.037     1
3. Content                                0.041     0.205      1
4. Technological–content                  0.271 *   0.213      0.668 **   1
5. Technological–pedagogical              0.141     0.457 **   0.176      0.266 *   1
6. Technological–pedagogical–content      −0.050    0.554 **   0.207      0.270 *   0.460 **  1
7. Fluency (performance)                  0.045     0.062      0.060      −0.099    −0.030            1
* p < 0.05; ** p < 0.001.
Table 7. Predictors of technological–pedagogical knowledge (total R2 = 0.331).

Step and Variables        ΔR2     β       t-Value   p-Value
1. Technological          0.091   0.301   2.32      0.024
2. Technological                  0.254   2.25      0.028
   Pedagogical            0.241   0.493   4.37      <0.001
1. Pedagogical            0.267   0.517   4.44      <0.001
2. Pedagogical                    0.493   4.37      <0.001
   Technological          0.064   0.254   2.25      0.028
Table 8. Predictors of technological–content knowledge (total R2 = 0.574).

Step and Variables        ΔR2     β       t-Value   p-Value
1. Technological          0.447   0.669   6.963     <0.001
2. Technological                  0.432   4.239     <0.001
   Content                0.127   0.428   4.194     <0.001
1. Content                0.444   0.666   6.924     <0.001
2. Content                        0.428   4.194     <0.001
   Technological          0.130   0.432   4.239     <0.001
Table 9. Predictors of technological–pedagogical–content knowledge (total R2 = 0.216).

Step and Variables              ΔR2     β       t-Value   p-Value
1. Technological–pedagogical    0.201   0.448   3.85      <0.001
2. Technological–pedagogical            0.414   3.43      0.001
   Technological–content        0.015   0.128   1.06      0.292
1. Technological–content        0.057   0.240   1.90      0.063
2. Technological–content                0.128   1.06      0.292
   Technological–pedagogical    0.159   0.414   3.43      0.001
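Tables 7–9 report hierarchical regressions in which each predictor is entered in both orders, so that its ΔR2 reflects the variance it explains beyond the other predictor. The logic behind those ΔR2 values can be sketched as follows; this is a minimal illustration with synthetic data and hypothetical variable names, not the study's dataset or analysis code.

```python
# Sketch of the hierarchical-regression DeltaR^2 logic (illustrative only;
# data are synthetic and variable names are hypothetical, not the study's).
import numpy as np

rng = np.random.default_rng(0)
n = 73
tech = rng.normal(size=n)                          # hypothetical predictor 1
ped = 0.5 * tech + rng.normal(size=n)              # hypothetical predictor 2
tp = 0.3 * tech + 0.5 * ped + rng.normal(size=n)   # hypothetical outcome

def r_squared(y, predictors):
    """R^2 from an ordinary least-squares fit with an intercept column."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: first predictor alone; step 2: second predictor added.
r2_step1 = r_squared(tp, [tech])
r2_step2 = r_squared(tp, [tech, ped])
delta_r2 = r2_step2 - r2_step1  # unique contribution of the added predictor
```

Entering the predictors in the reverse order (as the second half of each table does) yields the other predictor's unique ΔR2, while the step-2 total R2 is the same either way.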
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Battaglia, N.; Wood, E.; Gottardo, A.; Chovu, L.; Ghaa, C.; Santhosh, E.; Vogel, N.; Wade, A.; Abrami, P.C. Learning About Alphabetics and Fluency: Examining the Effectiveness of a Blended Professional Development Program for Kenyan Teachers. Educ. Sci. 2025, 15, 709. https://doi.org/10.3390/educsci15060709
