Article

The Consolidated Framework for Implementation Research: Application to Education

1 Department of Education, St. Francis Xavier University, Antigonish, NS B2G 2W5, Canada
2 Faculty of Education, Ontario Tech University, Oshawa, ON L1G 0C5, Canada
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(5), 613; https://doi.org/10.3390/educsci15050613
Submission received: 30 January 2025 / Revised: 2 April 2025 / Accepted: 12 May 2025 / Published: 16 May 2025

Abstract

This study investigates the application of the Consolidated Framework for Implementation Research (CFIR) in an educational setting. Although education implements numerous interventions and programs at all levels, their success has been measured primarily through standardized testing, leaving a paucity of direct measures for non-academic program implementation. Borrowing from an established practice in medicine, the CFIR, which provides a framework to identify potential barriers and facilitators to program implementation, is reviewed and investigated as a possible tool for use in education. Researchers applied the CFIR to a program intervention implemented at two Canadian university faculties of education. Through the complex coding and analysis inherent to the CFIR, barriers and facilitators to program implementation were identified. Reviewing and applying the CFIR provided results about both the educational intervention and the use of the CFIR in education. While the process is complex and time-consuming, the results suggest that the CFIR is a credible tool for measuring the effectiveness of implementing myriad educational initiatives at all system levels.

1. Introduction

Teachers, as leaders and changemakers, face an ever-increasing onslaught of policy mandates that can either make or break educational change. Initiatives to improve student academic performance or teaching practices, whether evidence-based or not, come with each new school year (e.g., see Boyd, 2021 for a recent history of reforms in Ontario, Canada). Fullan (1998) assessed the ever-evolving state of educational reform, finding that reform occurred in two halves: policy initiatives originating at the governmental level and then the local implementation of these policies. Policy frameworks, he suggested, could be divided into four domains: “curriculum and instruction, assessment, teacher education, and community development including early childhood education” (p. 2). Most educators could develop a list of local initiatives resulting from these policy initiatives, including standardized testing, guided reading, holistic teaching, cooperative education, and learning styles; the list is voluminous.
This research focuses on the second part of educational reform, its implementation, and how we measure the success of educational reforms on the ground at the classroom level. For example, how does the policy, when implemented, affect student learning outcomes or the teacher’s effectiveness at the front of the classroom? Fullan aptly describes the disconnect between policy and implementation, “We know a fair amount about local implementation. The policy arena and local life are divergent worlds if not two solitudes” (p. 4). As Fullan informs us, no matter the intent or energy generated through a reform focused on some aspect of student success, rarely does the policy achieve its goals. He finds that,
Teachers fail to make the effort, or their commitment to making a difference turns to despair in the face of overload and political alienation. Agencies like school districts, teacher unions, universities either become part of the problem or fail to help.
(p. 4)
Local success is possible, but system change is unlikely (Fullan, 1998).
How do we measure success in education? Rampant moves to improve education through standardizing learning experiences have resulted in marginal gains, and as Eisner and Barone say, no one fattens a cow by weighing it (Barone & Eisner, 2011). The political climate for the last several decades has focused on standardized testing. Volante (2010) frames the testing and accountability fervor,
Rarely does a day go by when the media does not feature an article about standards, standardized testing, and levels of student achievement in public schools. Establishing and raising standards, and measuring the attainment of those standards, are intended to encourage excellence in our schools.
(p. 54)
The standardized testing fervor is fraught with concerns; Wiggins and McTighe (2005) deftly described its result as “teach, test and hope for the best” (p. 3). Nevertheless, the testing and ranking of schools is an easy-to-consume political tool for governments. However, as Moir (2018) clarifies, “for any intervention to be successfully embedded, socioeconomic and cultural environments need to be acknowledged, because their variables impact on implementation success” (p. 3). In the process, the teachers, on-the-ground changemakers, become demoralized, feeling unsupported at best and silenced at worst. Their voices, and experiential knowledge about how students learn, do not appear to matter. A test score or graduation rate cannot measure the success of every educational intervention. Schools are people-places with students and teachers working together to grow and learn (Dewey, 1922; Greene, 1995; Hooks, 1994).
As each newly elected government, or those struggling to remain in power, attempts to initiate educational reform, the evidence demonstrates that most fail, or do not come with a broad scope of measures to test the efficacy of the intervention (Damschroder et al., 2022). Additionally, even when reforms include well-developed plans for execution and measurement, most plans fail to consider local contextual factors that may influence the success of educational reform.
This research introduces a form of implementation science, the Consolidated Framework for Implementation Research (CFIR), to measure the success of educational reforms. The CFIR is a methodology used widely in health care to measure the success of a wide range of interventions; Damschroder et al. (2022) inform us that, often, “implementation science embraces the reality that contextual factors are active and dynamic forces working for and against implementation efforts in the real world” (p. 2). This premise is worth consideration in education, as classrooms, schools, and broader school communities are pressured by continual change (Fullan, 1998).
Moir (2018) contends that school districts and schools are unique and distinct communities. Considering the local context, interventions should reflect the local community and its needs. In the rotating melee of new and unique initiatives to reform some aspect of education, the system (the state, district, school) ignores two significant concerns: why did previous reform attempts fail, and was the implementation well done (Kelly & Perkins, 2012)? Suppose that equity, diversity, inclusion, and accessibility (EDIA), treaty education, financial literacy, internet safety, or inquiry-based learning are worthy of time, money, and effort as system or local initiatives. How does one know that the program’s implementation was successful? Equally, and of interest to this research, those of us in teacher education never know if the new instructional strategy or pedagogy passed on to pre-service teachers (PSTs) had any influence in the PSTs’ future classrooms. Education, at all levels, must have a proven system for measuring the success of educational reforms or new initiatives beyond standardized testing. A system to measure the success of a program or policy implementation (e.g., CFIR) is simple accountability—accountability to students, the public, and the teachers who trust that the system will work for all stakeholders.

Research Questions

This research explores the application of the CFIR as a theoretical framework for assessing the effectiveness of a purposefully crafted inclusive physical education teacher education (PETE) course across two Canadian universities. The course aimed to prompt PSTs to critically reflect on their encounters with inclusion in PE as they were exposed to and engaged in an intentionally designed curriculum focused on EDIA and play-based modern instructional strategies (PBMIS). As a direct measure of the intervention, does the CFIR provide a valuable evaluation of the intervention, informing educators not only of the success of the program implementation, but also of the strengths and weaknesses of the intervention?
It is reasonable to argue that the many forms of qualitative research used in education provide a deep and rich understanding of a phenomenon (Yin, 2017). Still, few provide measured barriers and facilitators to an intervention as the CFIR does, inspiring the researchers to apply this methodology in their PETE programs. Moreover, is the CFIR methodology applicable to education in measuring the effectiveness of the ongoing myriad of reforms at all levels of education?

2. Educational Reforms and Accountability

All Canadian provinces and territories have some form of standardized testing as a measure “for accountability purposes and act as a lever for educational change through the development of provincial, district, and school achievement targets” (Volante, 2005, p. 55). Similarly, in the United States, the National Assessment of Educational Progress has been tracking student testing success since 1969 (National Assessment of Educational Progress, 2024). Yet, definitions of “student success” have narrow, non-culturally responsive implications that may marginalize some populations. Care must be taken to include more than just one measure of student learning. Additionally, the Programme for International Student Assessment (PISA) measures the ability of young people at 15 years of age to use their reading, mathematics, and science knowledge and skills to meet real-life challenges (OECD, 2024). PISA is the international standard on which governments base their desired educational reforms:
a triennial survey of 15-year-old students around the world that assesses the extent to which they have acquired key knowledge and skills essential for full participation in social and economic life. PISA assessments do not just ascertain whether students near the end of their compulsory education can reproduce what they have learned; they also examine how well students can extrapolate from what they have learned and apply their knowledge in unfamiliar settings, both in and outside of school.
More than 600,000 students from 79 countries took the 2018 PISA test; in 2022, 81 countries participated (OECD, 2023). Such is the handwringing and fretting over the state of education that the general focus in the Canadian press was negative following the release of the 2018 PISA results, even though Canada ranked fourth in reading, sixth in science, and tenth in math. Subsequently, negative spins on the results calling for further educational reform dominated headlines (e.g., Goldstein, 2022). Results from the 2022 test found Canada to be the top-performing English-speaking country in math and science and second in reading. The Canadian Broadcasting Corporation (CBC) posted this headline on its website following the release of the PISA scores: “Canadian students’ math, reading scores have dropped since 2018—but study says it’s not all COVID’s fault” (CBC, 2023). Test scores aside, what the media did not acknowledge was that “Education systems in Canada, Denmark, Finland, Hong Kong (China), Ireland, Japan, Korea, Latvia, Macao (China) and the United Kingdom are highly equitable by PISA’s standard (combining high levels of inclusion and fairness)” (OECD, 2023, p. 27). The differential in scores between the highest-performing Canadian students and the lowest is among the smallest, indicating a well-functioning, inclusive education system.
However, measuring an educational system’s success via test scores is low-hanging fruit for governments that view all services and programs through an economic lens—value for money is the focus (Moir, 2018, p. 1). What is missing? As this research focuses on physical education, a sports analogy might provide an answer. Whom do observers blame if a team is underperforming after the owner has spent significant money trying to make a champion (test scores)? Is it the fault of the manager who brought in the talent (the interventions to improve test scores), or is the coach to blame (local governments, school boards, and school administrators who translate the new initiatives and develop professional development for teachers), or, finally, is it the players’ fault (those charged with implementing the intervention)? One does not have to have played a sport or even be a fan; we have all been part of some team, workplace, community, or family seeking to achieve a common goal. To determine what is wrong or missing, one must consider the implementation of any program or intervention as a source of failure or success (Fixsen et al., 2005; Moir, 2018; Slavin, 2002).
Often, reforms/initiatives focus on boosting test scores; however, many others, especially within faculties of education, aim to develop educators’ professional skills, pedagogies, and awareness of issues of importance to the individual student or broader school community. Professional development, in-service training, and professional learning communities all require money and time to learn about and implement these reforms, along with test-focused initiatives. Fixsen et al. (2005) defined an intervention as “a specified set of activities designed to put into practice an activity of known dimensions” (p. 5). Moreover, for the intervention to be deemed successful, it should achieve the desired results and adhere to the intended design (Carroll et al., 2007; Fixsen et al., 2005; Moir, 2018). Other than test scores—local, national, and international—how does education account for the success of these implementations? Educators readily speak of the cycle of new interventions and the lack of accountability. Where is the evidence that this new intervention will work, has it worked in the past, and why did the program fail or succeed (Slavin, 2002)? Implementation science focuses on the implementation as a critical factor in the success of an intervention or program. As Kelly and Perkins (2012) suggested, worthy interventions are often poorly implemented, and the opposite can equally exist—poor interventions may be successfully implemented. As a result, good programs may fail, while poorly supported interventions may run indefinitely.

3. Implementation Science

Many have opined on the purpose of education. Martin Luther King Jr. stated that the core function of education “is to teach one to think intensively and to think critically. But education which stops with efficiency may prove the greatest menace to society. The most dangerous criminal may be the man gifted with reason, but with no morals” (King, 1947). This warns that producing graduates who have amassed significant knowledge is not enough; students must also be able to think critically and have a moral compass. Once again, the World Economic Forum report (WEF, 2023), the Organisation for Economic Co-operation and Development (OECD), and the Conference Board of Canada (2000) clearly reiterate that new competencies and skills are required. In response, education systems develop programs aimed at students and provide some form of in-service for educators. Educators at all levels will be familiar with system-wide or local initiatives focused on school discipline, EDIA, classroom management, new instructional strategies (e.g., inquiry-based learning, play, experiential learning, guided reading), increased use of technology for students and teachers, outdoor learning, and so on. However, education systems do not have measures of these programs and initiatives. Over time, it becomes increasingly difficult to ask career educators to engage with and implement yet another initiative without knowing if the last one worked. Enter implementation science.
Implementation science includes the “application and integration of research evidence in practice and policy” (Glasgow et al., 2013, p. 26). As previously introduced, implementation science hopes to address many problems and issues related to new programs and initiatives introduced within a system (Peters et al., 2013). Peters et al. further define implementation science as the intent “to understand what, why, and how interventions work in “real-world” settings and to test approaches to improve them” (p. 1). Implementation science accounts for context, working within real-world conditions (in the school, school district, or system). Within a specific educational setting, the success of an intervention is often determined by the social, cultural, economic, political, and physical environment (Moir, 2018). Implementation science is particularly interested in the users—those who will ultimately determine the success or failure of the intervention (Carroll et al., 2007; Fixsen et al., 2005; Moir, 2018). As a scientific approach, implementation science seeks to translate evidence-based interventions and practices into real-world settings (Tucker et al., 2021). Significantly for education and the multiple interventions implemented by the practicing educator, implementation science allows research evidence to become effective practice (Tucker et al., 2021).
Despite an extensive history of practice in medicine, nursing, and community health, the use of implementation science in education is unfortunately limited (Lyon et al., 2018). A survey of the literature suggests that educational initiatives aimed at improving student outcomes require higher-quality implementation and more rigorous review of the program and its implementation (Barnett, 1995; Greenberg et al., 2005). Moir (2018) noted that for an intervention to be successful, the “key internal components of the programme have to be compatible with external influences” (p. 2). One such tool, the CFIR, seeks “to understand why an innovation is successfully implemented in one setting, but not in another” (Kirk et al., 2016, p. 2). The CFIR provides a meta-theoretical framework with a “repository of standardized implementation-related constructs that can be applied across the spectrum of implementation research” (Kirk et al., 2016, p. 2). With 39 constructs organized within five domains, the CFIR provides “a common language by which determinants of implementation can be articulated, as well as a comprehensive, standardized list of constructs to serve as a guide for researchers as they identify variables that are most salient to implementation of a particular innovation” (Kirk et al., 2016, p. 2). Kirk et al. inform us that the CFIR can be applied to a wide range of studies, types of interventions, numerous settings, and research designs.
The CFIR organizes determinants of implementation into five domains: (1) characteristics of the intervention or policy that may determine implementation (e.g., complexity); (2) outer setting characteristics (e.g., networking with other organizations); (3) inner setting characteristics (e.g., organizational climate); (4) individual-level determinants (e.g., knowledge and beliefs); and (5) characteristics of implementation processes (e.g., implementation plans) (see Figure 1). Touchette (2020) describes the CFIR as a framework that helps translate research into practice. Additionally, in support of adopting a knowledge translation/implementation research framework, Touchette noted that “Change doesn’t start at the organization level—it begins with changes by individuals (who can then influence other individuals), and is dependent on their mindset, norms, interest, and affiliations” (blog post)—suggesting that education should move from top-down implementation and appraisal of reforms to in-the-moment, on-the-ground evaluations with the new initiatives’ stakeholders (e.g., teachers and educational assistants). Olswang and Prelock (2015) contend that making the paradigm shift necessary to adopt the CFIR requires researchers to consider the context before initiating research. Much more a grassroots movement than an administration-driven initiative, the CFIR considers the context as problems are defined, causes are identified, and the necessary intervention is developed and implemented (Olswang & Prelock, 2015).
Each domain is defined as follows (an illustrative data-structure sketch follows the list):
  • Intervention characteristics—the features of an intervention (educational initiative or policy) that might influence implementation, including the quality of the intervention (PBMIS instruction), and how the intervention is perceived by the stakeholders (PSTs).
  • Outer setting—encompasses the aspects of the external context or environment that could impact implementation (e.g., the school and school system in which the PST completes their practicum, the pedagogy of the associate teacher, and the influence of external organizations such as PHE Canada and TAPHE at the practicum site).
  • Inner setting—the characteristics of the implementing organization that could influence implementation (e.g., the structural, political, and cultural contexts of the school, including the relative value of PE within the school (e.g., isolation, marginalization), and the PE department/associate teacher and their practices).
  • Individual—the attributes and characteristics of the individuals engaged in the implementation that might influence the intervention (e.g., PSTs’ self-efficacy, competence and confidence in PETE learning/program and this pedagogy).
  • Implementation process—involves strategies or tactics that could affect the intervention/implementation, such as involving relevant individuals in the implementation and utilization of the intervention, as well as reflection and evaluation by the PST.
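To make the structure above concrete, the five domains and their example determinants can be represented as a simple lookup structure. The following Python sketch is illustrative only; it is not part of the study or the official CFIR codebook, and the construct lists are abbreviated from the examples in this section.

```python
# Illustrative sketch only: the five CFIR domains and example determinants
# named above, organized as a lookup structure for coding. The construct
# lists are drawn from this section's examples, not the full CFIR codebook.
CFIR_DOMAINS = {
    "intervention characteristics": ["intervention quality", "stakeholder perception"],
    "outer setting": ["practicum school/system", "associate teacher pedagogy",
                      "external organizations (e.g., PHE Canada, TAPHE)"],
    "inner setting": ["structural/political/cultural context of the school",
                      "relative value of PE"],
    "individual": ["self-efficacy", "competence and confidence"],
    "implementation process": ["stakeholder involvement", "reflection and evaluation"],
}

def domain_of(construct_example: str) -> str:
    """Return the CFIR domain an example determinant belongs to, or 'unassigned'."""
    for domain, examples in CFIR_DOMAINS.items():
        if construct_example in examples:
            return domain
    return "unassigned"

print(domain_of("self-efficacy"))  # -> individual
```

A fuller codebook would enumerate all of the constructs within each domain; the sketch shows only the shape of the mapping.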

4. Description of the PETE Intervention

In response to numerous significant challenges hindering the effectiveness of PE instruction, such as inadequate PE assessment practices, outdated teaching methods, low elective enrollment in PE, and the marginalization of PE and physical educators, researchers made significant revisions to their PETE curriculum and instruction courses (see previously published research, Barber et al., 2023, 2024; Walters et al., 2023). These revisions, informed by insights from existing literature, conference discussions, and researcher dialog, resulted in a play-based modern instructional strategy (PBMIS) and EDIA-focused curricula. Moreover, the literature suggests that the experiences of PSTs often reflect traditional approaches to PE, emphasizing skill-based instruction, sport- and performance-based teaching, fitness testing, and pedagogies that cater primarily to athletic individuals (Baghurst, 2014; James, 2018; Sundaresan et al., 2017; Zeigler, 2005). This normative view aligns with old-school practices that privilege those with athletic abilities over those with differing levels of ability. Therefore, the overarching objective, assessed through the lens of the CFIR, was to enhance the PETE curriculum to equip PSTs with the requisite skills in inclusive PE teaching practices.
The intervention is reflected in the following six elements of a PBMIS and EDIA-focused curriculum:
  • PSTs actively reflected on their past experiences of inclusion in PE. Working together, sharing their past experiences, and engaging with the literature, PSTs arrived at themes representative of the state of inclusion in PE.
  • PSTs engaged in adaptive, para-sports, and non-traditional sport (e.g., bocce, wheelchair basketball, goalball) to disrupt previous conceptions of ability and disability. Before each alternative activity, PSTs investigated their experience, anticipated challenges, and potential biases or stereotypes they might encounter (e.g., gender-related perceptions of yoga and dance). Following engagement in the activities, PSTs collaboratively debriefed their experiences, considered the potential barriers to including these activities in their curriculum and how to overcome these, and critically read the relevant literature.
  • PSTs studied and actively engaged in models-based practice (e.g., teaching games for understanding, cooperative learning, sports education, and teaching personal and social responsibility). PSTs experienced modeling of various instructional strategies, exploration and engagement with the supporting literature, and micro-teaching opportunities sharing their understanding of specific instructional models.
  • PSTs reconceptualized their beliefs, values, and experiences, challenging and reshaping their understanding of what constitutes a fully inclusive physical education pedagogy. They critically analyzed the literature and engaged with guest speakers, individuals who live with disabilities, and practicing physical educators.
  • PSTs engaged with various literature and resources, including guest speakers, to further develop their inclusive and PBMIS pedagogy. Moreover, they actively participated in meaningful dialogs with individuals who live with disabilities and practicing physical educators, acquiring direct insights and perspectives. This multifaceted approach disrupted previous conceptions of EDIA, enriched the PSTs’ theoretical knowledge, and fostered a deeper, more empathetic comprehension of the diverse needs and experiences within inclusive physical education.
  • PSTs participated in teaching experiences beyond their practicum, engaging in collaborative micro-teaching sessions at local schools each term. They crafted and presented lessons employing contemporary, inclusive instructional methods. Each PST group partnered with another for a comprehensive feedback exchange. One such session occurred at a nearby Mi’kmaw school, offering PSTs teaching opportunities, feedback, and valuable insights into Mi’kmaw schools and culture.

5. Methodology and Methods

5.1. Methodology

Grounded within the theoretical framework of the CFIR, this study adopted a case study methodology. Creswell (2007), amongst others, defines a case study as a bounded system; in this research, the bounded system is the PETE curriculum and instruction courses at two universities. Analyzing multiple data sources results in an in-depth case description and the evolution of case-based themes. Consistent with the multi-determinant CFIR, Quintão et al. (2020) suggested that one of the main advantages of a case study is its relevance to real-life human situations and contemporary contexts. Moreover, it provides a comprehensive and holistic perspective of a complex social unit characterized by multiple variables. As a descriptive case study, this research sought a complete understanding of applying PBMIS/EDIA curricula with PSTs and the effectiveness of its implementation (Yin, 2017). Additionally, researchers sought to evaluate the effectiveness and usefulness of the CFIR methodology as a tool to measure program change/implementation in education.
A variety of data sources were incorporated to uphold reliability and rigor in assessing the application of CFIR in educational research (Yin, 2017):
(i) Multiple data sources—video interviews, focus groups, researcher observations/field notes;
(ii) Researcher triangulation providing a varied evaluation of data—researchers from three universities;
(iii) Multiple theoretical perspectives—phenomenology, CFIR, video analysis;
(iv) Methodological triangulation—multiple qualitative data sources.
As discussed in this research, the outcome is a profound and insightful contemplation of PSTs’ comprehension and viewpoints regarding EDIA in teaching PE in a real-world context (Yin, 2017).

5.2. Methods

5.2.1. Participants

Participants in this study included second-year Bachelor of Education students recruited from a mandatory PE curriculum and instruction course at two Canadian universities who agreed to take part in the study. One university, situated in a rural setting in Nova Scotia, had nine participants who took part in two separate focus groups. The second university, in an urban setting in Ontario, contributed a group of four who participated in a third focus group. Additionally, the Ontario PSTs participated in a three-hour experiential visit to the Whitby Abilities Center, following which 23 PSTs participated in video interviews. The Abilities Center, known worldwide, serves as a community hub and inclusion incubator, offering fully accessible activities, learning programs, and spaces. While the PETE programs at the two universities differed, they shared similar philosophical foundations, prioritizing play, inclusion, physical literacy, and the development of fundamental movement skills across various environments.

5.2.2. Interviews and Focus Groups

Data collection through surveys, interviews, and focus groups followed the completion of the curriculum and instruction course. PETE instructor observations collected through field notes and ongoing weekly meetings added to the depth of the data. Interviews of 23 PSTs, captured on video using prompting questions, were followed by three focus groups. The focus group interviews, with 13 PSTs, were conducted online and video-recorded.

5.2.3. Surveys

PSTs at both universities (39 in total) completed surveys following the first year of their Bachelor of Education program. PSTs received the following instruction prior to completing the survey,
With reference to your past experiences as a student in PHE, your concepts of inclusion and diversity in PE prior to the course, your observations of your practicum placements, and your learning and growth through the courses, please answer with respect to how the course’s PLAY-based inclusive pedagogical intervention has impacted your learning and growth.
The survey, designed collaboratively by researchers, a team of practicing PE teachers, and a group of six graduate student researchers, included 45 items rated on a five-point Likert scale (strongly agree, agree, neutral, disagree, strongly disagree). These data were then coded within the CFIR framework.
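To illustrate how a single survey item’s Likert distribution might be collapsed into the valence codes later defined in Section 5.2.6, consider the following Python sketch. The decision rule, function name, and thresholds are assumptions for illustration, not the authors’ documented procedure.

```python
from collections import Counter

# Hypothetical decision rule (an assumption, not the authors' documented
# procedure): collapse one survey item's five-point Likert responses into
# the valence codes defined in Section 5.2.6.
def item_valence(responses):
    counts = Counter(responses)
    positive = counts["strongly agree"] + counts["agree"]
    negative = counts["strongly disagree"] + counts["disagree"]
    if positive and negative and positive == negative:
        return "X"  # equally positive and negative responses
    if positive > negative:
        return "f"  # facilitator
    if negative > positive:
        return "b"  # barrier
    return "0"      # predominantly neutral responses

print(item_valence(["agree", "strongly agree", "neutral", "disagree"]))  # -> f
```

In practice, the researchers describe an interpretive, analyst-driven coding process; a mechanical rule like the one above could only approximate that judgment.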

5.2.4. Videography

Videography provided an in-depth understanding of the intervention beyond the transcript alone. Researchers collaboratively developed the interview and focus group questions based on the research questions. Queries encompassed various aspects to elicit PSTs’ perspectives on inclusion and a PBMIS curriculum. These questions traced three phases, spanning PSTs’ initial perceptions before engaging in the PETE course, moments of revelation during the course, and final reflections upon the course’s completion. This comprehensive approach allowed researchers to trace the reflective growth and shifts in attitudes among PSTs resulting from their course experiences. Twenty-three PSTs from the Ontario campus completed these interviews (see Appendix A).

5.2.5. Videography of Three Online Focus Groups

Three focus groups included nine PSTs from Nova Scotia and four from Ontario. Research assistants trained by the researchers conducted focus group interviews. The purpose of the focus groups was to determine if PSTs’ initial perceptions of inclusion and PBMIS had changed or evolved due to the PETE coursework. Further, PSTs were asked to reflect upon their experiences (and learning) as PE students in PETE courses and during their practicum. These focus groups yielded 10 h of video for analysis.

5.2.6. Data Analysis

Data were collected with research ethics board approval from both universities. Member checking was not performed as part of the process; using video analysis, researchers hoped to capture participants’ authentic reflections, avoiding situations where PSTs might change their answers to please the instructors/researchers. The focus of this article is to report on the effectiveness of the CFIR model in education, specifically PETE. However, the data have also been used to evaluate the effectiveness of an intentionally designed PBMIS/EDI curriculum in supporting PSTs to interrogate and reflect on their experiences in PE as they develop an inclusive pedagogy. Inductive thematic analysis guided the coding and analysis of relevant themes to determine the impact of the PBMIS/EDIA PETE curriculum on PSTs’ reflections on their PE experiences and their understanding of and ability to apply such a pedagogy in their classrooms. A brief report on the findings of this part of the study is included in Section 6. The CFIR was used as an evaluation framework and to guide data analysis. See Figure 2 for an overview of the data analysis process.
As part of the data analysis process, survey, focus group, and interview questions were coded within the various CFIR domains using an adapted version of the CFIR codebook (CFIR Research Team Center for Clinical Management Research, https://cfirguide.org/, 4 January 2023). This process began with researchers meeting to assign questions to CFIR constructs, deciding which questions best reflected the construct and its characteristics. For example, for the construct intervention characteristics, we assigned survey questions that focused on how the intervention was perceived by the PSTs, and for the construct outer setting, the questions selected dealt with their practicum, the school site, the broader PE community, and their associate teacher. The survey data were then cross-coded within the CFIR, looking to identify barriers and facilitators at different levels related to the intervention, the stakeholders (PSTs, PETE instructors, students), and the setting (classroom) that are necessary for successful implementation of a PBMIS/EDI PETE curriculum (Cardona et al., 2023; Olswang & Prelock, 2015).
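Schematically, the question-to-construct assignment described above might be represented as follows. The question IDs are hypothetical, and the construct names are drawn from Appendix A rather than the study’s actual codebook.

```python
# Hypothetical mapping (illustrative IDs, not the study's codebook): each
# survey question is assigned to the CFIR domain/construct it best reflects.
QUESTION_TO_CONSTRUCT = {
    "Q01": ("intervention characteristics", "relative advantage"),
    "Q02": ("outer setting", "external policies and incentives"),
    "Q03": ("inner setting", "implementation climate"),
    "Q04": ("individual", "knowledge and beliefs about intervention"),
}

def questions_for_construct(domain: str, construct: str):
    """List the question IDs assigned to a given domain/construct pair."""
    return [q for q, (d, c) in QUESTION_TO_CONSTRUCT.items()
            if d == domain and c == construct]

print(questions_for_construct("outer setting", "external policies and incentives"))  # -> ['Q02']
```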
Every survey question was assigned to a CFIR construct and evaluated as a facilitator (f) or barrier (b) using the characteristics of the construct. Not all construct characteristics were applicable (e.g., as an intervention characteristic, cost was not applicable as the PSTs were already enrolled in a BEd program and did not incur additional costs to engage in the PBMIS/EDI curriculum). Researchers completed the coding individually and then met following the initial coding using the CFIR codebook to establish consistency, refine code definitions, and agree on the application of the codes. Coding was based on two factors: the valence (a facilitator or barrier to the implementation) and the strength (a weak or strong influence on the implementation).
Valence was coded as X, 0, f, or b. When the data indicated an equally positive and negative response to a survey question within a construct characteristic, researchers coded the item as an X. If the response indicated an advantage or was seen as a positive by the participant, the item was coded as an f; conversely, as a b. Neutral comments were coded as 0 (note: this does not indicate the absence of the item within a construct characteristic; when that was the case, the item was left blank).
Strength was coded as 1 or 2 based on the level of agreement among research participants, the depth of their commitment, and the use of concrete examples. This was an analytical process in that researchers interpreted the data and applied the CFIR code to determine how valence and strength characteristics within the construct would be coded. The coding allowed researchers to view and analyze a large amount of data to make comparisons and identify similarities, trends, and differences in how PSTs experienced identified aspects of the PBMIS/EDI intervention.
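For readers wishing to emulate this comparison step, the tallying might look something like the sketch below. It is a minimal illustration under an assumed data shape (construct, valence, strength triples) and is not the procedure or software used in this study.

```python
from collections import defaultdict

# Illustrative sketch only: tally (valence, strength) codes per CFIR
# construct to surface candidate facilitators and barriers. Codes follow
# Section 5.2.6: valence in {"f", "b", "X", "0"}, strength in {1, 2}.
coded_items = [
    ("adaptability", "f", 2),   # hypothetical coded responses
    ("complexity", "b", 1),
    ("adaptability", "f", 1),
]

scores = defaultdict(int)
for construct, valence, strength in coded_items:
    if valence == "f":
        scores[construct] += strength
    elif valence == "b":
        scores[construct] -= strength
    # "X" and "0" leave the net score unchanged

for construct, score in scores.items():
    label = "facilitator" if score > 0 else "barrier" if score < 0 else "mixed/neutral"
    print(f"{construct}: net {score} ({label})")
```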
Additionally, this study analyzed focus group and individual interviews with NVivo to further elucidate “the interaction of the participant with the environment, capturing of nonverbal cues” (Wang & Lien, 2013, p. 2933). Video analysis supported researchers in coding beyond textual analysis (Saldana, 2013). The data were also analyzed to determine whether an intentionally designed inclusive PETE course could challenge PSTs to critically reflect on their experiences of inclusion; those findings were disseminated in a paper, “Teacher Candidates’ Critical Reflections on Inclusive Physical Education: Deconstructing and Rebuilding New Paradigms” (Barber et al., 2023). Findings suggested that notions of PE can be disrupted and that PSTs can acquire the knowledge and skills to provide an inclusive PE classroom. This previous working of the data advantaged researchers as they coded interview and focus group items and responses within the CFIR constructs and characteristics (see Figure 2). Researchers’ familiarity with the data and their depth of understanding of student responses assisted them in interpreting the coding and assigning the valence and strength.

6. Results

In this section, we discuss the application of the CFIR methodology to evaluate the effectiveness of a specific program intervention in two PETE programs. The results are presented within the context of CFIR and in a practical sense for educators wishing to evaluate program interventions. As the introduction and literature review noted, systems in education rarely evaluate the many interventions implemented at all system levels. The application of the CFIR in this research revealed that the intervention successfully achieved its intended goals. Additionally, the CFIR indicated areas of strength and weakness that require reflection and remediation. On a personal level, using the CFIR forced the researchers into a deep, data-based reflection of their curriculum and teaching strategies. Finally, analyzing program implementation through the CFIR suggested that PST experiences in the teaching practicum are inconsistent with the goals of the PETE program. As PETE professors, we have little to no control over the teaching practicum, yet we can adjust our program to support PSTs in the practicum.
Hand coding within the CFIR, a complex and time-consuming process, produced significant findings related to the barriers and facilitators to the PETE program implementation at the two universities. It might be argued that the detailed data analysis required by the CFIR, evaluating data across five domains and 26 constructs, created an opportunity for researchers to reflect deeply on the program implementation, adding to the strength of the evaluation. Themes explored by applying the CFIR methodology to the data are initially reported as barriers and facilitators to the goals of the PETE program implementation. Themes are then presented from within the domains, constructs, and characteristics of the CFIR.

6.1. Facilitators

Facilitators of the implementation included the PETE curriculum designed and delivered at the two universities, which demonstrably disrupted PSTs’ previously held notions of EDIA and PE pedagogy. PSTs reported on the development of their confidence and competence in delivering such a curriculum in their classrooms. Additionally, national and provincial non-governmental organizations (e.g., PHE Canada, the Teachers Association for Physical and Health Education Nova Scotia, and the Ontario Physical and Health Education Association) were seen as facilitating PSTs’ understanding of an EDIA and PBMIS curriculum, providing resources and professional development. PSTs benefitted from an inclusive classroom modeling a form of community that they might bring to their own classrooms. More generally, the Bachelor of Education programs facilitated the development of an inclusive pedagogy; however, one university appeared to have greater consistency across its program in supporting notions and practices of inclusion. Some PSTs described the teaching practicum as an opportunity to develop an inclusive pedagogy and put theory into practice, positioning the practicum as a facilitator of an EDIA/PBMIS curriculum. Facilitators from each domain are presented in Table 1.

6.2. Barriers

Barriers for many PSTs broadly included their practicum experience (see Table 2). PSTs provided evidence of pedagogy inconsistent with the principles of EDIA and a PBMIS curriculum at the school site and from their associate teacher. Additionally, they felt that PE was a marginalized subject within the school and the school district, resulting in a lack of attention, professional development, and support. While PSTs reported some school- or district-wide attempts to support EDIA, they did not see any PE-specific professional development or support. Video analysis of focus groups and interviews revealed a necessity to disrupt the past PE experiences of PSTs, who could generally be categorized as athletes and who, therefore, thrived in a traditional “old-school” style of PE. Additionally, as PSTs who enjoyed their PE experiences and as human kinetics graduates, they experienced an apprenticeship of observation in a form of PE that favored athletes and those who excelled at sport. The CFIR provided the opportunity to assess both the effectiveness of the implementation and the relative significance of past PE experiences and those of the practicum.
A number of constructs within the various domains were scored as neutral or were not applicable to the intervention. Examples of survey items scored as neutral occurred in the inner setting domain: PSTs’ answers to four items assessing the levels of inclusion they witnessed during their practicum covered a range of responses across the coding spectrum. Additionally, as an intervention characteristic, cost was an example of an item that was not applicable as a barrier or facilitator; all research participants were voluntarily enrolled in a Bachelor of Education program, and their tuition covered all costs of their education program.
Applying the CFIR framework to the data revealed factors that supported the implementation of the EDIA/PBMIS intervention and the success of its intended outcome—PSTs prepared to develop and deliver an inclusive physical education pedagogy. Using the CFIR to analyze survey data further delineated which aspects of the PETE program were most effective and which areas of the intervention needed adjustment or reflection to improve their effectiveness. Conversely, the CFIR provided a detailed picture of the teaching practicum as a barrier, in specific areas, to the desired development of PE PSTs.

7. Discussion

This study aimed to identify the success of a deliberately designed EDIA/PBMIS curriculum in a PETE course, as part of a Bachelor of Education program, that would disrupt previously held notions of inclusion in PE and provide PSTs with the competence and confidence to deliver an inclusive curriculum in their classrooms. Perhaps more significantly, was the CFIR useful as a tool in an educational setting to measure the effectiveness of a specific intervention? Additionally, related to the goals of the intervention, what did the CFIR reveal as barriers and facilitators to implementing the PETE program?
Central to this research was the desire to find a tool that would measure the effectiveness of specific educational interventions and provide feedback to guide the development of the implementation (Damschroder et al., 2022; Peters et al., 2013). The ongoing plethora of educational initiatives introduced at all levels of education demands that we find measures outside of standardized tests (Moir, 2018). As PETE professors, we initiated a curricular intervention to prepare our PSTs better. We have the freedom to set our curriculum within broad parameters given to us through the approval of our university senate. To advance our teaching practice and outcomes for our students, we must interrogate our teaching practice and “acknowledge that ‘unknowing’, and uncertainty are integral to our academic practice” (Pirrie & Manum, 2024, p. 2). However, one cannot simply install a new curriculum or educational initiative and never measure the program’s implementation against its intended goals. The CFIR provided this measure considering various factors (e.g., inner and outer settings, intervention characteristics) that might influence program implementation success (Damschroder et al., 2022). Pirrie and Manum (2024) remind us that we risk stagnation if we do not reflect on our practice or the implementation of a new program and do not advance our practice/teaching methodology in a living way. Conversely, if we teach or install an initiative without a transparent evaluation tool, we fall into wishing and hoping (Wiggins & McTighe, 2005).
The CFIR demonstrated a high level of agreement with a previous working of the data (Barber et al., 2023); that is, it is possible to disrupt PSTs’ previously held notions of EDIA/PBMIS in PE and to provide a working inclusive pedagogy. Additionally, the CFIR methodology revealed a significant number of constructs that strongly identify facilitators and barriers to program implementation. Fourteen of these constructs were strongly distinguishing. The implementation of the PETE course scored a strong positive valence. At the same time, the teaching practicum leaned more towards a negative valence in support of the development of the PST’s understanding and application of an inclusive model of PE.
When addressing factors external to the PETE program (e.g., early-years burnout, isolation, marginalization, cognitive dissonance), defined as the outer setting in the CFIR, Richards (2015) and Curtner-Smith (2001) suggested that PETE programs prepare PSTs for this reality. We concur that PETE must provide PSTs with a well-rounded, experiential, evidence-based program that builds resilience and grit to prepare them for the multiple realities they will encounter in their practicum and teaching practice (Barber et al., 2024). One method featured in both university PETE programs includes planned teaching experiences in local schools before each teaching practicum. In addition to the expected lesson planning and post-teaching reflections, PSTs observe PE classes and dialog with in-service PE teachers. Additionally, both PETE programs engaged with the PHE Canada Student Chapters initiative to help PSTs develop the on-the-job and in-the-classroom skills we cannot provide in PETE and that they may not be experiencing in their practicum.
The CFIR, in comparison to traditional large-scale program implementation measures (e.g., standardized test scores) and to what a classroom teacher might use to evaluate the effectiveness of a new instructional strategy (e.g., student surveys, exit slips), provided specific feedback on the implementation that will serve future iterations of the implementation. The CFIR points to a lack of consistency in the support and application of EDIA/PBMIS in schools. Anecdotally, PSTs have provided similar feedback to PETE professors. The CFIR, as qualitative data, points to specific barriers in supporting PSTs’ adoption of an EDIA/PBMIS approach to teaching PE during the practicum. In many ways, this finding supports the call in the literature for a closer alignment between PETE programs and the associate teachers guiding PSTs in the practicum (Armour et al., 2015; Chambers et al., 2015).
As with most PETE programs, the researchers in this study developed and taught curricula they deemed appropriate based on the literature and their experience. The only evaluation of their courses comes from direct student feedback and formal university student course evaluations. PETE researchers have engaged in many different research methodologies to evaluate their practice or interventions (see the self-study by Ovens & Fletcher, 2014; action research, Casey et al., 2018; and narrative inquiry, Clandinin, 2013; amongst others). For example, Scanlon et al. (2024) used self-study to understand how best to prepare PSTs to enact the pedagogical principles of meaningful physical education (Beni et al., 2021). As a cyclical process looking to inform and improve practice, action research has been used frequently in PETE and physical education research (Martos-García & García-Puchades, 2023; Robinson et al., 2023). What differs between these normalized research approaches and the CFIR is that the CFIR focuses on providing a more nuanced and specific appraisal of the facilitators and barriers to program implementation. The researchers do not suggest that the CFIR is superior to any other research methodology; its sole intent is to evaluate the implementation of a new program or intervention by evaluating structural barriers and facilitators.

Limitations

Broad measures of success in education have dominated policy-makers’ decisions (People for Education, n.d.). Research seeking to establish measurable indicators of school success suggests that current efforts are often only assessed upon graduation (People for Education, n.d.). These measures include graduation rates, 21st-century skills, college and career readiness, and higher-order thinking skills. The CFIR provides data at the level of an intervention’s implementation, unlike the aforementioned measures, which are limited in scope or applied only after program implementation. However, the results of this study and the limited application of the CFIR in education (Allen et al., 2021; Meshkovska et al., 2022; Roshan et al., 2025) suggest several limitations.
In agreement with Meshkovska et al. (2022), we feel that the CFIR provides a systematic method to identify implementation barriers and facilitators. They go on to identify that a strength of the CFIR is the ability to compare similar studies using this common framework. Similarly, Allen et al. (2021) investigated a school-based equity-orientated intervention and suggested the CFIR provided “a more nuanced cross-construct understanding” (p. 375) of factors that facilitated or undermined implementation. Moreover, the CFIR provides a reliable and valid framework to investigate why an intervention may work in one setting and not another (Kirk et al., 2016). This feature offers a significant advantage to using this framework as it identifies specific barriers and facilitators to an intervention.
However, beyond the obvious concerns of the need for training or an in-depth exploration of the CFIR methodology and the time-consuming data analysis, educational researchers emphasize the need for extensive planning, stakeholder engagement, and ongoing support if the intervention and its analysis are to be successful (Meshkovska et al., 2022; Roshan et al., 2025). This research sought to evaluate the use of the CFIR in a school setting. Our advantage is most likely a limitation for school boards and schools: we had the time to research and learn about the CFIR and conduct the research. Without complete commitment from the school board, researchers, school administrators, and classroom teachers, this complex evaluation framework will not succeed in an educational setting. With the focus of this framework, and implementation science more broadly, on the end user, this form of research could be seen as another layer of work added to the classroom teacher’s duties. Advocates of implementation science suggest that it is those who implement the program/intervention who determine its success (Carroll et al., 2007; Fixsen et al., 2005; Moir, 2018). There already exists a significant level of frustration among Canadian educators due to a lack of autonomy, trust, and support (Paradis et al., 2019). An advantage of traditional measures of success in education is that they do not require the classroom teacher to engage in the research. For us, as university professors with complete autonomy and a significant research interest, the CFIR is a good fit.
Additionally, Damschroder et al. (2022) suggest that researchers have license in using the CFIR (e.g., in how they assign and interpret constructs of the intervention). Many of these features attracted us as researchers to the CFIR. However, they could be seen as limitations for the classroom teacher and school- or board-level administrators who want to evaluate an intervention. The time commitment necessary to understand the CFIR, assign constructs, code, and interpret the data is likely prohibitive in an educational setting. Clearly, health care and education have some similar outcomes, yet politically and organizationally, they are vastly different.

8. Conclusions

As the presentation of the data reveals, the CFIR provides one lens that may allow educators to measure the success of a specific intervention, though it is a complex and detailed methodology (Damschroder et al., 2022). One might argue that the time needed and the ongoing evolution of the CFIR as a tool to evaluate program implementation outweigh its value. We suggest that, in the absence of comparable tools in education and other disciplines, the CFIR provides the detailed analysis that has been missing in government-run entities that serve the public and largely run on taxpayer dollars. Often, assessments in education are limited in their scope and force inferences that may not be reliable or valid. Additionally, regularly used school-based measures (e.g., test scores, number of suspensions, and attendance) do not point to the specific facilitators or barriers in a program initiative or implementation. Education at a local, provincial/state, or national level may design and implement a new program aimed at a specific outcome, spend significant dollars on the initiative, and never evaluate which specific factors of the program produced positive or negative results.
The researchers acknowledge that academics in the field of education in Canada produce a significant amount of research. One specific group, the Physical and Health Education Canada Research Council, composed of PETE professors, researchers, and graduate students, works to advance and support research on physical and health education. However, while this research may include methodologies such as action research or program evaluation (e.g., see Game Changers), opportunities to measure provincial or local school board initiatives are limited (Robson, 2021). As discussed in the Introduction, our interest is in educational change/reform at the school level and how that impacts student outcomes. To evaluate the effectiveness of the CFIR methodology in measuring program change, researchers worked within their classrooms. Validating the CFIR as an effective educational tool in a known environment (PETE classroom) invites researchers to apply the methodology in a public-school setting.
We argue that implementation science and tools such as the CFIR must become commonplace in education. Political rhetoric and a heavy focus on standardized testing have yet to provide the evidence to justify the ongoing implementation of new initiatives in education. The disconnect between the various levels of education, especially for those charged with the implementation (teachers), could be repaired if a detailed review of an initiative can be provided to all involved, pointing to the specific facilitators and barriers to success. In education, this could include local administrative decisions aimed at responding to a perceived crisis.
The next steps for this research include using the CFIR in various educational settings, including public schools and other areas within our Bachelor of Education programs. It would be advantageous to work with other researchers in these new research initiatives, including those in schools responsible for implementing new initiatives (e.g., school principals and vice principals). Although time-consuming for school administrators, working with them through the CFIR would provide both specific data and an opportunity to reflect deeply on the facilitators and barriers to implementing desired initiatives.

Author Contributions

Conceptualization, W.W. and W.B.; methodology, W.W. and W.B.; formal analysis, W.W. and W.B.; investigation, W.W. and W.B.; data curation, W.W. and W.B.; writing—original draft preparation, W.W.; writing—review and editing, W.W., M.J. and W.B.; visualization, W.W., W.B. and M.J.; project administration, W.W., W.B. and M.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans and approved by the research ethics review board of St. Francis Xavier and Ontario Tech universities.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are unavailable due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

  • Focus group or post-interview questions
  • Intervention characteristics—how the intervention is perceived by the PSTs
    • Intervention source
    • Evidence strength and quality
      • Did your practicum experiences support the implementation/use of a play-based modern instructional strategy (PBMIS)?
      • If so, how? If not, why not? What is your evidence?
    • Relative Advantage
      • What did you find to be the benefits or advantages of a PBMIS approach over more traditional instructional strategies or those used by your associate teacher?
    • Adaptability
      • Were there situations or contexts in which you had to rely on other instructional approaches?
    • Trialability
    • Complexity
      • What, if any, roadblocks did you come across that impeded your implementation of this model of teaching? What made it difficult to implement this model?
    • Design quality and packaging
      • How did the students respond to the teaching methods? Were there difficulties in having them respond to the change in teaching method?
    • Cost
  • Outer setting—the school and PE community context
    • Student needs and resources
      • Was the school/system or your AT set up/working to serve the needs of your students in PE? Inclusive of all students?
      • What would you identify as the main goal of PE in your school?
    • External policies and incentives
      • Were there structures or policies within the school or system that impeded the implementation of PE using this model?
      • Do you have recommendations for policy or procedures to support this form of teaching PE?
    • Peer pressure
      • Did your AT support your use of this style of teaching? Did other teachers in the school question your use of this teaching methodology? Did they assume it was just a new teacher’s enthusiasm and that the reality of the job would eventually have you teaching like them?
      • Did you have colleagues who disagreed with this form of teaching PE? If so, did you feel pressure to conform? Did this cause you to doubt your beliefs?
    • Cosmopolitanism—alignment and networking with external organizations
      • Do you see support for this instructional paradigm in outside organizations, e.g., PHE Canada, OPHEA, TAPHE, on the internet?
  • Inner setting
    • Structural characteristics
    • Culture
    • Implementation climate
    • Readiness for implementation
    • Learning climate
    • Relative priority
    • Compatibility with organizational culture
    • Organizational incentives and rewards
      • How was your teaching method received within the PE department? Within the school?
      • Was assessment embedded in your AT’s teaching, in yours?
      • Was your AT accepting of change? Is the school supportive of innovation and change?
      • How does PE rank within the school? What is its relative importance as a subject?
      • Did administration speak of innovation, change…?
  • Individuals
    • Knowledge and beliefs about intervention/self-efficacy
      • Do you feel competent and confident in planning and teaching in this style?
      • Did you find that this form of teaching supported the inclusion of all students in your class in PE?
      • Do you feel that this teaching methodology and approach to PE is supportive of your PE pedagogy?
    • Individual stage of change
      • As this teaching methodology is likely different from what you experienced as a student, how confident are you in using the model, and how confident are you that it will be successful?
    • Individual identification with the organization
      • How is PE perceived within the school/region/province?
      • How do you see yourself within the school community as a physical educator? Relative status?
    • Other personal attributes
      • What is it about you (e.g., experience, attitude, motivation) that helped you succeed with this style of teaching?
  • Implementation Process
    • Planning—have the proper steps to promote effective implementation been established and put in place? Could you describe some of the teaching strategies that you used to implement this form of teaching? How did you adapt your teaching to best support the needs of the students in your class (their specific context)? How did you measure your success as a physical educator?
    • Engaging—who is involved in the implementation? Does it have a champion?
    • Executing
    • Reflecting and evaluating—did you reflect on your teaching daily? Did you consider your teaching strategies in your reflection? Was your teaching inclusive of all students in your class? Did your teaching provide students with the physical literacy to live a healthy, active lifestyle suited to them?

References

  1. Allen, M., Wilhelm, A., Ortega, L. E., Pergament, S., Bates, N., & Cunningham, B. (2021). Applying a race(ism)-conscious adaptation of the CFIR framework to understand implementation of a school-based equity-oriented intervention. Ethnicity & Disease, 31(1), 375–388.
  2. Armour, K., Quennerstedt, M., Chambers, F., & Makopoulou, K. (2015). What is ‘effective’ CPD for contemporary physical education teachers? A Deweyan framework. Sport, Education and Society, 22(7), 799–811.
  3. Baghurst, T. (2014). Assessment of effort and participation in physical education. The Physical Educator, 71, 505–513.
  4. Barber, W., Walters, J., & Walters, W. (2023). Teacher candidates’ critical reflections on inclusive physical education: Deconstructing our past and rebuilding new paradigms. PHEnex Journal, 13(2), 1–18.
  5. Barber, W., Walters, W., Bates, D., Waterbury, K., & Poulin, C. (2024). Building capacity: New directions in physical education teacher education. In M. Garcia (Ed.), Global innovations in physical and health education (pp. 491–516). IGI Global.
  6. Barnett, W. S. (1995). Long-term effects of early childhood programs on cognitive and school outcomes. The Future of Children, 5(3), 25–50.
  7. Barone, T., & Eisner, E. W. (2011). Arts based research. Sage.
  8. Beni, S., Chróinín, D. N., & Fletcher, T. (2021). ‘It’s how PE should be!’: Classroom teachers’ experiences of implementing Meaningful Physical Education. European Physical Education Review, 27(3), 666–683.
  9. Boyd, T. (2021). Education reform in Ontario: Building capacity through collaboration. In F. M. Reimers (Ed.), Implementing deeper learning and 21st century education reforms: Building an education renaissance after a global pandemic (pp. 39–58). Springer Nature.
  10. Canadian Broadcasting Corporation (CBC). (2023, December 10). Canadian students’ math, reading scores have dropped since 2018—But study says it’s not all COVID’s fault. Available online: https://www.cbc.ca/news/canada/canadian-students-pandemic-learning-match-science-reading-study-1.7049681 (accessed on 5 January 2023).
  11. Cardona, M. I., Monsees, J., Schmachtenberg, T., Grünewald, A., & Thyrian, J. R. (2023). Implementing a physical activity project for people with dementia in Germany—Identification of barriers and facilitators using the consolidated framework for implementation research (CFIR): A qualitative study. PLoS ONE, 18(8), e0289737.
  12. Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2, 40.
  13. Casey, A., Fletcher, T., Schaefer, L., & Gleddie, D. (2018). Conducting practitioner research in physical education and youth sport. Routledge.
  14. Chambers, F. C., Luttrell, S., Armour, K., Bleakley, W. E., Brennan, D. A., & Herald, F. A. (2015). The pedagogy of mentoring: Process and practice. In F. C. Chambers (Ed.), Mentoring in physical education and sports coaching (pp. 28–38). Routledge.
  15. Clandinin, D. J. (2013). Engaging in narrative inquiry. Left Coast Press.
  16. Conference Board of Canada. (2000). Employability skills 2000+. Conference Board of Canada.
  17. Creswell, J. W. (2007). Qualitative inquiry and research design: Choosing among five approaches (2nd ed.). Sage.
  18. Curtner-Smith, M. D. (2001). The occupational socialization of a first-year physical education teacher with a teaching orientation. Sport, Education and Society, 6(1), 81–105.
  19. Damschroder, L. J., Reardon, C. M., Widerquist, M. A. O., & Lowery, J. (2022). The updated consolidated framework for implementation research based on user feedback. Implementation Science, 17(1), 75.
  20. Dewey, J. (1922). Democracy and education: An introduction to the philosophy of education (4th ed.). Macmillan.
  21. Fixsen, D. L., Naoom, S., Blase, K., Friedman, R., & Wallace, F. (2005). Implementation research: A synthesis of the literature. The National Implementation Research Network.
  22. Fullan, M. (1998). Education reform: Are we on the right track? Canadian Education Association, 38(3), 1–7.
  23. Glasgow, R. E., Eckstein, E. T., & El Zarrad, M. K. (2013). Implementation science perspectives and opportunities for HIV/AIDS research: Integrating science, practice, and policy. Journal of Acquired Immune Deficiency Syndromes, 63, 26–31.
  24. Goldstein, L. (2022, September 7). Students do well in global testing, but scores falling. The Toronto Sun. Available online: https://torontosun.com/opinion/columnists/goldstein-students-do-well-in-global-testing-but-scores-falling-says-report (accessed on 5 January 2023).
  25. Greenberg, M. T., Domitrovich, C. E., Graczyk, P. A., & Zins, J. E. (2005). The study of implementation in school-based preventive interventions: Theory, research, and practice. In Promotion of mental health and prevention of mental and behavior disorders (Vol. 3). U.S. Department of Health and Human Services.
  26. Greene, M. (1995). Releasing the imagination: Essays on education, the arts, and social change. Jossey-Bass.
  27. Hooks, B. (1994). Teaching to transgress: Education as the practice of freedom. Routledge.
  28. James, A. R. (2018). Grading in physical education. Journal of Physical Education, Recreation & Dance, 89(5), 5–7.
  29. Kelly, B., & Perkins, D. F. (Eds.). (2012). Handbook of implementation science for psychology in education. Cambridge University Press.
  30. King, M. L., Jr. (1947). The purpose of education. The Maroon Tiger. Morehouse College.
  31. Kirk, M. A., Kelley, C., Yankey, N., Birken, S. A., Abadie, B., & Damschroder, L. (2016). A systematic review of the use of the consolidated framework for implementation research. Implementation Science, 11(1), 72–86.
  32. Lyon, A. R., Cook, C. R., Brown, E. C., Locke, J., Davis, C., Ehrhart, M., & Aarons, G. A. (2018). Assessing organizational implementation context in the education sector: Confirmatory factor analysis of measures of implementation leadership, climate, and citizenship. Implementation Science, 13, 5.
  33. Martos-García, D., & García-Puchades, W. (2023). Emancipation or simulation? The pedagogy of ignorance and action research in PETE. Physical Education and Sport Pedagogy, 28(1), 43–55.
  34. Meshkovska, B., Scheller, D. A., Wendt, J., Jilani, H., Scheidmeir, M., Stratil, J. M., & Lien, N. (2022). Barriers and facilitators to implementation of direct fruit and vegetables provision interventions in kindergartens and schools: A qualitative systematic review applying the consolidated framework for implementation research (CFIR). The International Journal of Behavioral Nutrition and Physical Activity, 19(1), 11.
  35. Moir, T. (2018). Why is implementation science important for intervention design and evaluation within educational settings? Frontiers in Education (Lausanne), 3, 61.
  36. National Assessment of Educational Progress. (2024, August 28). About NAEP: A common measure of student achievement. Available online: https://nces.ed.gov/nationsreportcard/about/ (accessed on 5 September 2024).
  37. Olswang, L. B., & Prelock, P. A. (2015). Bridging the gap between research and practice: Implementation science. Journal of Speech, Language, and Hearing Research, 58(6), S1818–S1826.
  38. Organisation for Economic Co-operation and Development (OECD). (2023). PISA 2022 results (Volume I): The state of learning and equity in education. PISA, OECD Publishing.
  39. Organisation for Economic Co-operation and Development (OECD). (2024, August 28). Programme for international student assessment (PISA). Available online: https://www.oecd.org/en/about/programmes/pisa.html (accessed on 5 September 2024).
  40. Ovens, A., & Fletcher, T. (Eds.). (2014). Self-study in physical education teacher education: Exploring the interplay of practice and scholarship. Springer.
  41. Paradis, A., Lutovac, S., Jokikokko, K., & Kaasila, R. (2019). Towards a relational understanding of teacher autonomy: The role of trust for Canadian and Finnish teachers. Research in Comparative and International Education, 14(3), 394–411.
  42. People for Education. (n.d.). Broader measures of success: Measuring what matters in education. People for Education. Available online: https://peopleforeducation.ca/report/broader-measures-of-success-measuring-what-matters-in-education/#chapter8 (accessed on 5 January 2023).
  43. Peters, D. H., Adam, T., Alonge, O., Agyepong, I. A., & Tran, N. (2013). Implementation research: What it is and how to do it. The British Medical Journal, 48(8), 731–736.
  44. Pirrie, P., & Manum, K. (2024). Reimagining academic freedom: A companion piece. Journal of Philosophy of Education, 58(6), 895–909.
  45. Quintão, C., Andrade, P., & Almeida, F. (2020). How to improve the validity and reliability of a case study approach? Journal of Interdisciplinary Studies in Education, 9(2), 264–275.
  46. Richards, K. A. R. (2015). Role socialization theory: The sociopolitical realities of teaching physical education. European Physical Education Review, 21(3), 379–393.
  47. Robinson, D. B., Harenberg, S., Walters, W., Barrett, J., Cudmore, A., Fahie, K., & Zakaria, T. (2023). Game Changers: A participatory action research pilot project for/with students with disabilities in school sports settings. Frontiers in Sports and Active Living, 5, 1150130.
  48. Robson, K. (2021). An essay on the challenges of doing education research in Canada. Journal of Applied Social Science, 15(2), 183–196.
  49. Roshan, R., Hamid, S., Kumar, R., Hamdani, U., Naqvi, S., Zill-e-Huma, & Adeel, U. (2025). Utilizing the CFIR framework for mapping the facilitators and barriers of implementing teachers led school mental health programs—A scoping review. Social Psychiatry and Psychiatric Epidemiology, 60(3), 535–548.
  50. Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Sage.
  51. Scanlon, D., Beckey, A., Wintle, J., & Hordvik, M. (2024). ‘Weak’ physical education teacher education practice: Co-constructing features of meaningful physical education with pre-service teachers. Sport, Education and Society, 1–16.
  52. Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15–21.
  53. Sundaresan, N., Dashoush, N., & Shangraw, R. (2017). Now that we’re “well rounded”, let’s commit to quality physical education assessment. Journal of Physical Education, Recreation & Dance, 88(8), 35–38.
  54. Touchette, A. (2020). The consolidated framework for implementation research (CFIR): A roadmap to implementation. KnowledgeNudge. Available online: https://medium.com/knowledgenudge/the-consolidated-framework-for-implementation-research-cfir-49ef43dd308b (accessed on 5 September 2024).
  55. Tucker, S., McNett, M., Mazurek Melnyk, B., Hanrahan, K., Hunter, S. C., Kim, B., Cullen, L., & Kitson, A. (2021). Implementation science: Application of evidence-based practice models to improve healthcare quality. Worldviews on Evidence-Based Nursing, 18(2), 76–84.
  56. Volante, L. (2005). Accountability, student assessment, and the need for a comprehensive approach. International Electronic Journal for Leadership in Learning, 9(6), 1–8.
  57. Volante, L. (2010). Standards-based reform: Could we do better? Education Canada, 47(1), 54–56.
  58. Walters, W., MacLaughlin, V., & Deakin, A. (2023). Perspectives and reflections on assessment in physical education: A self-study narrative inquiry of a pre-service, in-service, and physical education teacher educator. Curriculum Studies in Health and Physical Education, 14(1), 73–91.
  59. Wang, T. L., & Lien, Y. H. B. (2013). The power of using video data. Quality and Quantity: International Journal of Methodology, 47(5), 2933–2941.
  60. Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd ed.). Association for Supervision and Curriculum Development.
  61. World Economic Forum (WEF). (2023). The future of jobs report 2023. Available online: https://www.weforum.org/reports/the-future-of-jobs-report-2023/ (accessed on 1 May 2024).
  62. Yin, R. K. (2017). Case study research and applications: Design and methods. Sage.
  63. Zeigler, E. F. (2005). History and status of American physical education and educational sport. Trafford.
Figure 1. CFIR domains and characteristics. Note: adapted from the Consolidated Framework for Implementation Research (CFIR): A Roadmap to Implementation, published in KnowledgeNudge, 21 May 2020, https://medium.com/knowledgenudge/the-consolidated-framework-for-implementation-research-cfir-49ef43dd308b (accessed on 27 July 2023).
Figure 2. CFIR data analysis process.
Table 1. Survey items coded as facilitators.

Intervention Characteristics
  • The features of this course/learning experiences challenged my perceptions of what inclusive PHE means.
  • I learned strategies and methods to develop inclusive PHE classes.
  • I designed effective lesson plans based on UDL (Universal Design for Learning), inclusion, diversity, and accessibility.
  • I learned the advantages of designing and implementing fully inclusive PHE classes.
  • I can identify the perceptions of privilege, power, and normative notions of ability and disability.
  • This overall experience allowed me to deconstruct and reconstruct ideas of ability vs. disability.
Outer Setting
  • The provincial curriculum and policies support fully inclusive PHE.
  • My practicum placement experience was in a school board/regional center that has an inclusion and diversity policy.
  • My overall Bachelor of Education experience demonstrated fully inclusive pedagogy across subjects.
  • My university course experience modeled fully inclusive PHE pedagogy.
  • External organizations (e.g., PHE Canada, OPHEA, TAPHE, CIRA) support fully inclusive PE pedagogy.
Inner Setting
  • The classroom climate for my pre-service PHE teacher education course was inclusive and welcoming.
  • Class leader(s), teachers, and guest speakers modeled climate-building and inclusion.
  • The setting of the class was inclusive for teacher candidates of all abilities.
  • The setting of the class was inclusive for teacher candidates of all races, cultures, and genders.
  • The PETE classroom was a safe setting for all participants regardless of ability level.
Characteristics of Individuals
  • I witnessed intentional pedagogical strategies to build community by the instructor(s).
  • The instructor demonstrated knowledge and belief that play-based inclusion was essential to PETE.
  • I felt fully included by my classmates and was encouraged to participate at my own level.
  • I felt a sense of belonging in class based on the instructor’s approaches to inclusive PETE.
  • Inclusive PE pedagogical strategies used in this course created a strong sense of community.
  • I was encouraged to learn from, and with, my classmates of different ability levels and cultures.
  • I developed confidence and competence for inclusive PE pedagogy as a result of this experience.
  • As a result of this PETE course, I feel committed to building inclusive PE pedagogy for my students.
  • Student voice in PETE classes was honored and respected.
  • I was encouraged to develop resilience and grit as a beginning PE teacher.
  • I was encouraged to prioritize self-care in order to be at my best as a PE teacher.
Process
  • I was provided with time to reflect on my learning about inclusion.
  • I was guided to literature regarding inclusive pedagogy.
  • I was able to self- and peer-assess and evaluate my learning in the inclusive PETE course.
  • In my practicum placement, I was provided time to reflect on how inclusion and diversity were reflected in my lesson designs.
Table 2. Survey items coded as barriers.

Intervention Characteristics
  • I observed fully inclusive PHE classes in my practicum placement experiences.
  • I discovered barriers to implementation in my practicum placement experiences.
Outer Setting
  • My associate teacher was aware of school board/regional center policies on inclusion.
  • I received support to design and implement inclusive PE during my practicum placement.
  • I observed inclusion and diversity being implemented in my practicum placement.
  • School timetables are designed to be supportive of inclusive design for PHE.
Inner Setting
  • Was my practicum placement school culture supportive of innovation and change?
  • Did I observe resistance to the development of full inclusion in my placement school?
  • I observed PE classes that were inclusive for students of all abilities.
  • I observed PE classes that were inclusive for students of all races, cultures, and genders.
Characteristics of Individuals
  • I see representation or role models from my culture in the field of PE pedagogy.
Process
  • None