Review

Metasynthesis of Preservice Professional Preparation and Teacher Education Research Studies

Orelena Hawks Puckett Institute, Asheville and Morganton, NC 28655, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2019, 9(1), 50; https://doi.org/10.3390/educsci9010050
Submission received: 6 February 2019 / Revised: 25 February 2019 / Accepted: 26 February 2019 / Published: 8 March 2019

Abstract

Results from a metasynthesis of the relationships between 14 different types of preservice teacher preparation practices and teaching quality, preschool to university student performance, and university student and beginning teacher belief appraisals are reported. Each type of preservice practice (e.g., course-based student learning) included different kinds of instructional methods (e.g., problem-based learning, inquiry-based learning, and project-based learning). The metasynthesis included 118 meta-analyses and 12 surveys of more than three million study participants. Findings clearly indicated that practices involving active university student and beginning teacher engagement in mastering the use of instructional practices and in acquiring knowledge and skills stood out by far as the most important preservice teacher preparation practices. These included extended student teaching experiences, simulated instructional practices and microteaching, faculty coaching and mentoring, clinical supervision, different types of cooperative learning practices, and course-based active student learning methods. The pattern of results helped identify high leverage and high impact teacher preparation practices. Implications for future research and for improving teacher preparation are described.

1. Introduction

1.1. Background

Debate about the core practices of effective teacher preparation and education programs has been the focus of considerable discussion and analysis for years (e.g., [1,2,3]). Throughout the history of teacher preparation, different practices have been advanced as the most important for effective preservice teacher preparation and education [4,5]. Darling-Hammond [6,7,8] and Cochran-Smith [9,10] among others (e.g., [11,12,13]) have extensively reviewed the teacher preparation literature with a focus on identifying the common features of effective university teacher training programs. Darling-Hammond [14], for example, concluded, based on her review of available evidence, that teacher preparation programs that graduate well-prepared teachers include a clear vision of good teaching, well-defined standards of professional practice, a strong core curriculum, extensive clinical experience, problem- and inquiry-based preservice student learning, strategies for dealing with student assumptions and beliefs about learning, and strong university and school relationships.
Different teacher preparation specialists and researchers have called for use of a wide range of preservice teacher education practices to ensure that students “are extraordinarily well prepared” [7]. These practices include, but are not limited to, teacher certification [15], types of coursework [16], student learning methods [17], field experiences [18], high quality clinical practice [19], clinical supervision [20], and induction and mentoring [21].
As part of the research synthesis described in this paper, 14 different sets of teacher preparation practices and variables were identified that have been the focus of meta-analyses and systematic reviews. These are shown in Table 1. The table includes representative citations for the types of practices and variables in each set. Nearly all of the sets include multiple kinds of practices that teacher education experts claim are necessary for preparing well-qualified teachers or that have been the focus of research reviews.

1.2. Methodological Approach

The quantitative metasynthesis that is the focus of this paper was essentially a meta-analysis of meta-analyses of teacher and preservice preparation research. The metasynthesis approach was similar to one conducted by Hattie [52,53], but with a number of exceptions. First, we focused entirely on preservice teacher preparation practices or practices during the transition from student to practitioner status (e.g., induction and mentoring). Our primary interest was in combining results from different meta-analyses of studies of the same practice so that the aggregated sizes of effects for different practice–outcome relationships could be used to identify the most important teacher preparation practices. Second, for each type of teacher preparation practice (e.g., course-based student learning), we content analyzed the different approaches to teacher preparation to identify subsets of practices (problem-based learning, case-based learning, inquiry-based learning, etc.). Our primary interest here was determining whether different kinds of practices proved more effective than others in explaining practice–outcome relationships. Third, our aim was to include only meta-analyses that compared a preservice practice (e.g., problem-based learning) with a contrasting condition (e.g., traditional classroom lecture) to determine if there were any value-added benefits of a practice hypothesized to be related to better student outcomes.
The metasynthesis also differed from other reviews of meta-analyses of teacher preparation and higher education studies (e.g., [52,53,54,55,56]) by examining only preservice teacher preparation practices that could be used by course instructors, clinical supervisors, faculty coaches and mentors, supervising teachers, and other teacher preparation specialists to affect university student learning. Both Hattie [53] and Schneider and Preckel [55], for example, included meta-analyses of variables that would not be the primary focus of teacher preparation programs and practices (e.g., student gender, student personality, divorce, and health status); such variables were not part of this metasynthesis.
Explicit attention was paid to meta-analyses of studies that employed either quasi-experimental or experimental research designs [57], or for which the authors were able to compare a preservice practice with a contrasting condition based on information available in a research report. Meta-analyses that employed only or primarily one-group pretest–posttest or correlational studies were excluded from the metasynthesis to lessen criticisms levied against the Hattie [52] metasynthesis of factors associated with preschool, elementary, middle, and high school student achievement [58,59].

1.3. Expected Outcome

The expected outcome of the metasynthesis was identification of the core or high leverage teacher preparation practices that ought to be the focus of preservice teacher education programs [60,61,62]. As noted by Goldhaber [63] in his review of teacher preparation programs, “there are only a few studies that focus on the association [relationship] between the features of teacher preparation and teacher…outcomes” (p. 1). The extent to which accumulated research evidence is consistent with this contention was one focus of the metasynthesis. The main aim was identification of the high leverage and high impact core teacher preparation practices that ought to be emphasized in preparing highly qualified teachers [53,60].

2. Method

2.1. Search Sources

The primary sources of teacher education and preservice professional preparation studies were research syntheses and reports of the types of variables and practices in Table 1. Controlled vocabulary, keyword, and natural language searches of eight electronic databases were performed to identify candidate studies. These sources were supplemented by searches of more than 100 journals publishing preservice preparation and teacher education research; hand searches of the reference sections of all retrieved research syntheses and reports; and searches of the publications of noted experts on teacher preparation (e.g., [6,10,52,64]).
The electronic databases searched for candidate studies were ERIC, ProQuest Central, PsycInfo, PubMed, Directory of Open Access Journals, Open Access Journal Search Engine, the Bielefeld Academic Search Engine, and Google Scholar. The ERIC, ProQuest Central, PubMed, and PsycInfo thesauri were the sources of controlled vocabulary terms. These included, but were not limited to, teacher education, preservice teacher education, teacher educator education, teacher preparation, higher education, and preservice teachers (depending on a database thesaurus).
A four-tiered search strategy was used to locate candidate meta-analyses. The first tier searches combined the terms meta-analysis, research synthesis, or systematic review with each controlled vocabulary term in separate searches (e.g., “meta-analysis” AND “preservice teachers”, “systematic review” AND “teacher education”). The second tier searches combined the terms meta-analysis, research synthesis, or systematic review with the keyword or natural language terms for the specific kinds of preservice practices in Table 1 (e.g., distance education, field experiences, induction, and mentoring). All possible combinations of keyword and natural language terms were searched until no new meta-analyses or research reports were located. Both the Tier 1 and Tier 2 search results were sorted by relevance to ensure optimal matches with the search terms.
The third tier searches included both hand and electronic searches of all located research reviews, other research reports, and teacher preparation articles for “meta-analysis” or “systematic review” to identify additional research syntheses not located in the first two tiers of searches. The fourth tier searches included hand and electronic searches of bibliographies of the types of preservice practices shown in Table 1 to identify meta-analyses of preservice practices not identified in the other tiers of searches (e.g., [65,66,67]). The bibliographies were located by searches of Google and Google Scholar (e.g., bibliography AND microteaching).
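To make the tiered query construction concrete, the following minimal sketch (ours, in Python; not part of the original search protocol) shows how Tier 1 and Tier 2 Boolean query strings of the kind described above can be generated. All term lists are illustrative examples drawn from the text, not the full search vocabulary.

```python
# Illustrative sketch of Tier 1 and Tier 2 query construction.
from itertools import product

synthesis_terms = ["meta-analysis", "research synthesis", "systematic review"]
controlled_vocabulary = ["teacher education", "preservice teacher education",
                         "teacher preparation", "preservice teachers"]
practice_terms = ["distance education", "field experiences", "induction", "mentoring"]

# Tier 1: each synthesis term crossed with each controlled vocabulary term.
tier1 = [f'"{s}" AND "{v}"' for s, v in product(synthesis_terms, controlled_vocabulary)]

# Tier 2: each synthesis term crossed with each practice-specific keyword.
tier2 = [f'"{s}" AND "{p}"' for s, p in product(synthesis_terms, practice_terms)]

print(tier1[0])  # "meta-analysis" AND "teacher education"
```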
As part of the searches for research syntheses, both national and large sample size surveys were located that included data for teacher preparation–outcome relationships. Additional searches were therefore conducted to identify other surveys that included preservice practice–outcome data relevant to the metasynthesis. These surveys were included in the metasynthesis if they investigated any of the teacher preparation practices in Table 1 and the outcomes of interest, and if no or only a few meta-analyses were located for the specific preservice practice–outcome relationships.

2.2. Inclusion Criteria

Research syntheses and reports that met the inclusion criteria included journal articles, dissertations and theses, conference presentations, web-based reports, and unpublished reports. No time limit was placed on the searches for meta-analyses or research syntheses that included teacher education or preservice preparation–outcome data. Studies were limited to those published in English.
Meta-analyses and research reports were considered candidate studies if they included effect sizes for teacher education or preservice preparation–outcome relationships or if they included data from which effect sizes could be computed and meta-analyzed. Four types of research reports were included in the metasynthesis. The first were research syntheses that met generally agreed upon criteria for a meta-analysis (average effect sizes, confidence intervals or standard errors, sample sizes, etc.) (e.g., [68,69]). The second were research reports that included effect sizes from individual studies, which needed to be meta-analyzed (e.g., [70,71,72]), or which included data for computing the effect sizes for between group comparisons that were then meta-analyzed (e.g., [73]). The third were either large sample size or nationally representative studies that included teacher education or preservice preparation–outcome measures for different groups of participants that were used as best estimates of the effects of the preservice practice (e.g., [74,75]). The fourth were meta-analyses of related practices in other fields where neither research syntheses nor surveys of teacher preparation–outcome relationships could be located (e.g., [76,77]), or where meta-analyses of similar or identical practices in other fields included relevant preservice–outcome data (e.g., [71,78]).
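For the second type of report, where only group-level data were available, between group effect sizes had to be computed before being meta-analyzed. A minimal sketch of the standard computation, assuming group means, standard deviations, and sample sizes are reported (the function and the example values are ours, for illustration only):

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Cohen's d) using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical comparison: a course-based learning method vs. traditional lecture.
d = cohens_d(78.0, 72.0, 12.0, 13.0, 40, 40)  # ~0.48
```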
Meta-analyses and surveys were included only if a preservice preparation practice was compared to a control or contrasting condition (e.g., traditional teacher certification vs. no certification and distance education coursework vs. traditional classroom instruction). The one exception was types of coursework where the number of courses completed or another course-related variable was correlated with outcomes of interest (e.g., [79,80]).

Exclusion Criteria

Meta-analyses that included only one-group pretest–posttest effect sizes, or in which the preponderance of studies included results from pretest–posttest designs, were excluded from the metasynthesis. Research syntheses of single case design studies of teacher preparation practices were also excluded since the effect sizes in these reports are not comparable to those in between group studies (e.g., [81]). Survey data that correlated ordinal or continuously scored preservice preparation measures with outcomes of interest were also excluded (with the exceptions noted above).

2.3. Data Preparation

The data in the research reports were prepared for the metasynthesis in a series of steps resulting in the findings in this report. Each study was examined to identify the particular preservice practice or practices that were the focus of investigation, the types of outcomes that were the dependent measures, and the effect sizes for the relationships between the practices and outcomes, either as reported in the primary reports or as computed for the metasynthesis. The majority of reports included mean difference effect sizes (Cohen’s d or Hedges’ g). An attempt was initially made to convert Cohen’s d to Hedges’ g in all meta-analyses, but this proved futile because of missing information in many research reports. In those cases where we were able to convert ds to gs, the differences were negligible or nonexistent, so we retained the particular mean difference effect sizes reported by the meta-analysts.
In those cases where effect sizes or results were reported using other indices or metrics, they were converted to mean difference effect sizes using generally recommended conversion procedures (e.g., [82]). In some instances, effect sizes needed to be estimated from available information in the research reviews. In a number of research reports, effect sizes for between group comparisons were reported for some but not all outcomes. In cases where results were reported as not significant, the effect sizes for those preservice practice–outcome comparisons were assumed to be zero before within study aggregated results were computed.
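For readers who want the formulas behind these conversions, the sketch below shows two standard ones consistent with generally recommended procedures (e.g., [82]): the small-sample correction converting Cohen’s d to Hedges’ g, and the conversion of a correlation coefficient to a mean difference effect size. The code is our illustration, not the authors’ tooling.

```python
import math

def d_to_g(d, n1, n2):
    """Hedges' g: Cohen's d multiplied by the small-sample correction factor J."""
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d

def r_to_d(r):
    """Convert a correlation coefficient r to a mean difference effect size d."""
    return 2 * r / math.sqrt(1 - r ** 2)

print(round(d_to_g(0.50, 20, 20), 3))  # ~0.490 -- the correction matters little at n = 40
print(round(r_to_d(0.30), 3))          # ~0.629
```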
The majority of meta-analyses included only university students or beginning teachers. A number of research syntheses included university students and preschool, elementary, middle, and/or high school students. In those cases where effect sizes were reported separately for subgroups of participants, we reported only the results for university students. In those cases where effect sizes were not reported separately for subgroups of participants, we included the aggregated results for all participants but reported the percentage of participants who were university students or the percentage of studies that included only university students. These meta-analyses were included only if the investigators indicated that the majority of participants were university students or we could independently establish this to be the case.
The above information was used to construct data tables for subsequent analysis. Each data table for each type of preservice practice in Table 1 included the citation for the study, the type of research report (meta-analysis or survey), the comparative conditions, the outcome measures, the grade level for the outcome measure, the number of studies, sample sizes, the number of effect sizes, the average effect size for each type of preservice practice–outcome relationship, the 95% confidence interval for the average effect size, the Z test for the effect size, and the p-value for the average size of effect. An attempt was made to include or compute all of this information, but because of missing data it was not possible to construct tables with complete sets of information for all practices in all studies. All of the tables, however, included the number of effect sizes and the average effect sizes for every preservice practice–outcome relationship that was the focus of the metasynthesis.
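Where a report supplied an average effect size and its standard error, the remaining table statistics (95% confidence interval, Z test, and p-value) follow mechanically. A minimal sketch of those computations (ours, for illustration):

```python
import math

def ci_z_p(mean_es, se):
    """95% CI, Z test, and two-sided p-value for an average effect size with standard error se."""
    ci = (mean_es - 1.96 * se, mean_es + 1.96 * se)
    z = mean_es / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p for a standard normal Z
    return ci, z, p

# Hypothetical average effect size of 0.50 with a standard error of 0.033.
(lower, upper), z, p = ci_z_p(0.50, 0.033)  # CI ~ (0.435, 0.565), Z ~ 15.2, p < 0.0001
```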
Twenty-four data tables were prepared for the 14 different sets of practices in Table 1. Where possible, two data tables were prepared for each preservice practice: one for teaching quality and one for child, K–12 student, and university student or beginning teacher outcomes (other than teaching quality). Two data tables included the effect sizes for the relationships between faculty instruction and student outcomes. Two other data tables included the effect sizes for different preservice practices and teacher retention, attrition, and related outcomes. The data tables were the sources of evidence for computing the aggregated sizes of effects for the teacher preparation practice–outcome relationships in the metasynthesis (see the Metasynthesis Supplemental Report at www.puckett.org/PreserviceMetasynthesisReport.pdf).
Teaching quality was measured in terms of classroom quality or instructional practices. Classroom quality was assessed in terms of classroom social and nonsocial organization and classroom management. Instructional practices were assessed in terms of teaching practices, teacher performance, teacher–student interactions, and beliefs about teaching confidence or competence (e.g., commitment to teaching). In a number of cases, proxy measures were used as instructional practices outcomes.
Two types of student outcomes were measured in the studies: student performance and student belief appraisals. Student performance was assessed in terms of academic achievement, course grades, subject matter knowledge, course or degree completion, and other related performance measures. Student beliefs were assessed in terms of attitudes (toward course delivery, small group learning, computer-based instruction, etc.), satisfaction (with faculty instructional methods, learning methods, teaching position, etc.), and related belief appraisals (e.g., judgments of course quality and judgments of course enjoyment).
The two faculty outcome tables included measures of (1) instructor practices (performance, effectiveness) and faculty–student interactions and (2) undergraduate and graduate student outcomes. The two tables including beginning teacher outcomes included measures of retention, attrition, and related outcomes (e.g., teacher movement from one school or district to another school or district).

2.4. Methods of Analysis

Guidelines for conducting a meta-analysis of meta-analyses, or secondary meta-analysis, have been proposed by a number of meta-analysts [83,84,85]. Most of the meta-analyses located for the metasynthesis were not amenable to the kinds of analyses recommended by these experts, for a number of reasons. This led us to consider alternative procedures for combining the average effect sizes in individual meta-analyses.
The most straightforward approach would be to simply calculate the mean of the mean effect sizes for particular practice–outcome relationships. Such a method, however, fails to take into consideration the fact that the number of studies and number of effect sizes in different meta-analyses are not the same. A better approach would be to adjust the effect sizes in different meta-analyses by taking into consideration the differences in number of studies, sample sizes, variability around the average effect sizes, etc. This, however, was not possible because so many meta-analyses did not include the indices and metrics necessary to aggregate meta-analysis results taking into consideration this kind of information.
The two measures that could be discerned in all research syntheses (and surveys) were the number of effect sizes for different preservice practice–outcome relationships and the average effect sizes for those relationships. We therefore performed the metasynthesis using the number of effect sizes and the average effect sizes in individual meta-analyses to obtain best estimates of the overall sizes of effects for preservice practice–outcome relationships. This was accomplished using SPSS [86] to compute a weighted mean of means, using the number of effect sizes for a particular preservice practice in each meta-analysis as the weight for computing an overall mean for each of the preservice practice–outcome effect sizes. It is important to note that this was done for individual preservice practice–outcome effect sizes and not by using the total number of effect sizes in a meta-analysis or survey.
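A minimal sketch of this weighting scheme (the authors used SPSS [86]; this Python equivalent and its numbers are ours, for illustration):

```python
def weighted_mean_of_means(avg_effect_sizes, n_effect_sizes):
    """Weighted mean of per-meta-analysis average effect sizes, with each average
    weighted by the number of effect sizes that contributed to it."""
    total_weight = sum(n_effect_sizes)
    return sum(d * k for d, k in zip(avg_effect_sizes, n_effect_sizes)) / total_weight

# Hypothetical example: three meta-analyses of the same practice-outcome relationship
# reporting average ds of 0.45, 0.60, and 0.30 based on 12, 5, and 20 effect sizes.
overall = weighted_mean_of_means([0.45, 0.60, 0.30], [12, 5, 20])  # ~0.39
```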
Although we recognize the methodological and statistical shortcomings of our decision to use the number of effect sizes and the average effect sizes as our input variables, we concluded that this was the best approach to yield means of means that had similar interpretive meaning for different preservice practice–outcome relationships. Our goal was not to make definitive conclusions about particular practices, but rather to produce findings where we were able to discern patterns of relationships between different teacher preparation practices and different study outcomes in order to identify high leverage practices [60] as evidenced by the sizes of effects for different preservice practices–outcome relationships.

2.5. Search Results

The metasynthesis included 118 meta-analyses and 12 surveys of teacher and preservice preparation practice–outcome relationships. The complete list of the 130 studies is included in Appendix A. Sixty-seven of the meta-analyses (57%) reported the number of participants in the studies in the research syntheses; those meta-analyses included 1,758,700 participants. Eight of the 12 surveys reported sample sizes, totaling 42,300 participants. The 130 studies in the metasynthesis included an estimated three million or more study participants.
The research reports were available in journal articles (81%), dissertations or theses (12%), conference presentations (6%), and unpublished reports (1%). The research reports were completed between 1979 and 2018, with the majority (73%) completed between 2000 and 2018.
The supplemental report contains information not included in the metasynthesis. The report includes (1) the list of journals searched for meta-analyses of teacher preparation studies, (2) information about each of the meta-analyses and surveys included in the metasynthesis (type of research report, preservice practice, number of studies, sample size, and number of effect sizes), (3) the research designs and types of between group or between condition comparisons in each report, (4) the definitions or descriptions of the teacher preparation practices or variables in the research studies, and (5) the 24 data tables that were the sources of evidence for the metasynthesis.

3. Metasynthesis Results

The results from the metasynthesis are presented first for each of the 14 types of teacher and preservice preparation practices (Table 1) and second for the rank ordering of the sizes of effects for the different types of teacher preparation practice–outcome relationships. Each of the tables summarizing the results for each set of teacher preparation practices includes the different comparisons or contrasts; the outcome measures that were the focus of practice–outcome relationships; the grade level of the outcome measures; the number of studies, sample sizes, and effect sizes for the practice–outcome relationships; the mean difference effect sizes for the aggregated results; and the 95% confidence intervals for the average effect sizes. In those cases where the number of studies and sample sizes were not reported in individual research reports, the numbers that are reported include a plus sign indicating that the number of studies or sample sizes or both are actually larger than we were able to determine. The rank ordering of the sizes of effects for identifying high leverage core practices is reported separately for teaching quality; child, K–12 student, university student, and beginning teacher performance; and university student and beginning teacher belief appraisal outcomes.

3.1. Teacher Preparation Practices

3.1.1. Teaching Degree

Five meta-analyses and one survey included comparisons of educators with different teaching degrees for evaluating the effects of teacher qualifications on different measures of teaching quality and child and student outcomes. The degrees that were the focus of analysis included high school (HS), associate’s degree (AA), child development associate’s degree (CDA), bachelor’s degree (BA), and master’s degree (MA). All but one report were investigations of teachers in preschool programs.
The results for the between teaching degree comparisons are shown in Table 2. Type of degree was related to all but one of the teaching quality measures but, with only one exception, none of the child or student achievement outcomes, as evidenced by the sizes of effects between teachers with different degrees and the study outcomes.
Teachers with bachelor’s degrees had stronger belief appraisals compared to teachers with only a high school degree. Teachers with bachelor’s degrees also used more effective teaching practices compared to teachers with either high school or associate’s degrees.
The relationships between teacher degrees and classroom quality also indicated that teacher qualifications matter in terms of classroom practices. Teachers with a bachelor’s or master’s degree were judged as having higher quality classrooms compared to teachers with high school or associate’s degrees. There was little difference in classroom quality between teachers with either associate’s or high school degrees.
Close inspection of the sizes of effects for the relationship between teacher qualifications and teaching quality shows that the more disparate the between degree comparisons, the larger the sizes of effect. For example, the sizes of effect for BA vs. AA and MA vs. BA were both 0.16 for classroom quality. In contrast, the sizes of effects for BA vs. HS and MA vs. HS were much larger for the same outcome measure.

3.1.2. Teacher Preparation Programs

No meta-analyses of teacher preparation programs were located, and only two surveys of four-year compared to extended teacher preparation programs were found for the primary outcomes of interest [75,87]. Both surveys included teaching quality and teacher performance outcome measures. One survey included evidence for the effects of professional development schools on first and second year teacher retention [88].
The results for the relationships between type of teacher preparation program and the study outcomes are shown in Table 3. The findings indicate no discernible differences for extended teacher preparation programs compared to traditional bachelor’s degree programs for teaching practices or teacher beliefs (i.e., commitment to teaching). There was also no advantage for extended degree programs on teacher performance (e.g., knowledge) and there was a negative effect for teacher attitudes (career satisfaction). Teachers in extended degree programs had more negative attitudes toward teaching.
Latham et al. [88] compared teachers who graduated from professional development schools (PDS) with those who graduated from traditional teacher preparation (TTP) programs on teacher retention. The participants were beginning teachers in preschool, elementary, and middle school programs. Results showed that PDS graduates persisted in their professions for longer periods of time compared to graduates from TTP programs (Cohen’s d = 0.50, 95% CI = 0.43–0.56, Z = 15.27, p < 0.0001).
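As a rough internal consistency check (ours, assuming a symmetric confidence interval): the reported interval implies a standard error of roughly (0.56 − 0.43)/(2 × 1.96) ≈ 0.033, giving Z = d/SE ≈ 0.50/0.033 ≈ 15.1, in line with the reported Z = 15.27; the small discrepancy reflects rounding in the reported interval.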

3.1.3. Teacher Certification

Four meta-analyses were located that included comparisons of teachers with different types of certification and the effects on preschool to 12th grade student performance. No meta-analyses or surveys were found that examined the relationships between teacher certification and teaching quality.
The four meta-analyses permitted comparisons of educators with Teach for America, National Board, or alternative certifications with teachers having traditional teacher certification. Teachers with traditional certification were also compared to teachers with emergency, provisional, or no teacher certification.
Table 4 shows the results for the relationships between the different types of certification and student achievement at different grade levels. The results indicate no appreciable advantage of any of the different types of teacher certification. There were, however, small positive effects for Teach for America certification compared to traditional teacher certification for 6th to 8th grade student outcomes, for National Board certification compared to traditional teacher certification for kindergarten to 12th grade student outcomes, and for alternative certification compared to traditional certification for kindergarten to 12th grade student outcomes, as well as small positive effects for teachers with traditional certification compared to teachers with provisional, emergency, or no certification for preschool to 12th grade student outcomes. The sizes of effects for all between teacher certification comparisons, however, are very small.

3.1.4. In-Field vs. Out-of-Field Certification

Two meta-analyses and one survey included data for the relationships between in-field vs. out-of-field teacher certification or degree and both teaching quality and student achievement. The results are shown in Table 5. The findings indicate that an in-field certification or degree was not related to differences in classroom quality, teaching practices, or preschool to 12th grade student achievement compared to teachers with an out-of-field certification or degree. The effect size for the relationship between in-field certification and 6th to 12th grade student achievement must be interpreted with caution since it is based on just one study and two effect sizes.

3.1.5. University Coursework

Four meta-analyses and two surveys included results for the relationships between different measures of university coursework and both teaching quality and student performance. All but one research report included findings for the correlations between continuously scored measures of coursework (number of courses and class attendance) and the study outcomes; the correlations were converted to Cohen’s d effect sizes for comparative purposes. One meta-analysis compared university students who participated in first year seminars with students who did not participate in seminars [89].
Table 6 shows the results from the metasynthesis. Only two preservice coursework measures were related to the study outcomes. The number of general education courses was related to teaching practices used with K to 12th grade students and class attendance was related to university student performance. Class attendance is most likely a proxy measure for university student commitment to and motivation toward academic success.

3.1.6. Method of Course Delivery

Fourteen meta-analyses included comparisons of four different methods of course delivery with traditional classroom instruction. The study outcomes included university student achievement and attitudes toward or satisfaction with type of course method. None of the meta-analyses included effect sizes for the relationships between types of course delivery and teaching quality.
The results for the comparisons between the different types of course delivery and traditional courses are shown in Table 7. All four types of course delivery were associated with differences in university student achievement. In contrast, method of course delivery was differentially related to student belief appraisals. Personal systems of instruction (PSI) courses were positively related to student attitudes toward course delivery, whereas blended courses were negatively related to student satisfaction with course delivery.

3.1.7. Technology and E-Learning Instruction

Twenty-eight meta-analyses were located that included results for the effects of six different types of technology-assisted, e-learning, and related practices on university student outcomes. The majority of the syntheses compared technology or e-learning instruction with traditional classroom instruction. Five meta-analyses included findings from research syntheses comparing the same type of technology or e-learning under contrasting conditions [72,90,91,92,93].
Table 8 shows the results for effects of technology and e-learning instruction on the university student outcomes. All six types of instruction were associated with discernible benefits in terms of student achievement. The sizes of effects, however, differed as a function of the type of instruction.
Virtual reality instruction, information and communication technology (ICT) learning, computer-assisted instruction, and intelligent tutoring instruction were associated with larger effect sizes than technology-assisted instruction and internet-based instruction. The sizes of effects for all six types of instruction, however, indicated a favorable advantage compared to traditional classroom or course instruction.
In those cases where student belief appraisals (attitudes or satisfaction) were the focus of analysis, technology-assisted, internet-based, and computer-assisted instruction or learning were related to positive belief appraisals, whereas ICT learning was associated with dissatisfaction with this type of learning method. Ungerleider and Burns (2003) reported a positive relationship between ICT and student achievement but a negative relationship with student belief appraisals. The latter may be the case because ICT requires considerably more student investment in using this method of learning compared to traditional classroom instruction.
The five meta-analyses of between condition comparisons for computer-assisted instruction yielded differential findings. Computer-assisted instruction that included performance feedback was associated with more positive student achievement compared to no performance feedback. Small group computer-assisted instruction, compared to individual student computer assisted instruction, was associated with a discernible positive effect on student achievement but not student belief appraisals. Computer-assisted instruction that was under student control proved no more effective than computer-assisted instruction not under student control.

3.1.8. Course-Based Learning Methods

Twenty-one meta-analyses included investigations of seven different types of course-based learning methods. The study outcomes included different measures of university student performance (achievement, knowledge acquisition, course grades, etc.) and different belief appraisals (self-efficacy beliefs and attitudes toward learning method). Only one meta-analysis included the relationship between a course-based learning method and teaching quality [94].
The findings for the relationships between the different course-based learning methods and the study outcomes are shown in Table 9. Inquiry-based learning (including discovery learning and project-based learning) was associated with differences in both teaching practices and student achievement compared to traditional classroom or instructional practices. The sizes of effects for this learning method are discernibly larger than those for the other course-based learning methods. Notwithstanding these differences, problem-based learning (including guided design), student self-directed learning, critical thinking instruction, and different kinds of note-taking practices [95,96,97] were all positively related to student performance. In contrast, visually-based learning and explanation-based learning were associated with smaller sizes of effect for the influence of these learning methods on student performance. These two course-related methods require less student engagement and investment in learning compared to the other course-based learning methods, which might account for the difference in the sizes of effects on student performance.
Only two types of course-based learning methods were evaluated in terms of student belief appraisals (e.g., attitudes toward the course-based learning methods). Problem-based learning was positively related to student belief appraisals, whereas visually-based learning was not related to these same outcomes. The latter again may be related to the differences in students’ investment in problem-based learning compared to visually-based learning.

3.1.9. Cooperative Learning Practices

Fourteen meta-analyses included effect sizes for the relationships between three different types of student cooperative learning practices (small group learning, peer tutoring, and peer instruction) and university student outcomes. Small group learning was compared to either traditional classroom instruction or individual student instruction. Peer tutoring was compared to no peer tutoring [98,99] and peer instruction was compared to faculty instruction [100]. One meta-analysis included the effect sizes for the comparison between peer instruction as part of traditional classroom instruction and traditional classroom instruction without peer instruction [101]. The outcome measures in the studies included student performance (achievement, knowledge, task completion, etc.) and attitudes toward different types of cooperative learning.
Table 10 shows the findings for the sizes of effects for the relationships between the cooperative learning practices and university student outcomes. Peer instruction that was incorporated into traditional classroom instruction was associated with by far the largest size of effect for student achievement. Small group learning was positively related to both student performance and attitudes toward the cooperative learning method. Small group instruction that was computer assisted was positively related to student achievement but not to student attitudes toward computer-assisted instruction. Peer tutoring was also positively related to student achievement. In contrast, peer instruction compared to faculty instruction was not differentially related to student knowledge and skill acquisition [100].
The findings for peer vs. faculty instruction deserve comment because the small size of effect for the between type of instructor comparisons is somewhat different from that for the other cooperative learning practices. The purpose of this particular meta-analysis was to determine if peer teaching could be used as a substitute for faculty teaching in undergraduate courses for influencing student knowledge and skill acquisition in the same manner as faculty instruction [100]. The fact that the outcome for the two agents of instruction did not differ much indicates that both types of teaching were about equally effective.

3.1.10. Faculty Instructional Practices

Twelve meta-analyses included findings for the relationships between different types of faculty-related practices and either course instructor teaching-related practices or different university student outcomes (performance, beliefs and attitudes, course completion). The outcome measures for instructor practices and faculty–student interactions are all based on students’ assessments of faculty performance.
The results for the relationships between the faculty-related practices and study outcomes are shown in Table 11. Faculty coaching of students as part of course or classroom practices was associated with the largest size of effect in terms of students’ positive judgments of faculty instructional practices. The other three types of feedback were also related to student judgments of both faculty member instructional practices and faculty–student interactions. More specifically, consultative feedback on student ratings of faculty member instruction was related to positive student judgments of faculty performance, student feedback on faculty member instruction was related to positive student judgments of the quality of faculty instructional practices, and faculty feedback on student performance was related to students’ positive judgments of faculty–student interactions.
Faculty member-related instructional practices were associated with all of the university student outcomes. Faculty coaching (including just-in-time instruction and individualized student instruction), faculty member mentoring, faculty member feedback on student performance, and student feedback on faculty member instruction were all positively related to different measures of student performance. Both faculty member mentoring and student feedback on faculty member instruction were also related to student positive attitudes toward faculty member instructional practices.
The findings from the different sets of analyses in Table 11, taken together, point to the importance of feedback, support, and guidance as factors bolstering the effects of faculty instructional practices on students’ judgments of course instructors and student achievement. This seems to especially be the case in situations where faculty–student interactions and exchanges include explicit efforts to engage and promote student learning and achievement.

3.1.11. Teaching Method Instruction

Eight meta-analyses included evaluations of the influences of methods and procedures used by faculty, course instructors, or other supervising teachers to facilitate or improve students’ teaching practices and other student outcomes. The meta-analyses included a number of proxy measures for teaching method instruction since so few syntheses were located for preservice teaching method practices. These included meta-analyses of microcounseling (Baker & Daniels, 1989) and simulation-based instruction (Cook et al., 2013; Kim et al., 2016).
Table 12 includes the results from the eight meta-analyses. All seven instructional methods were related to the student teaching quality outcomes. Simulation-based instruction that explicitly included deliberate practice to improve students’ clinical practice skills was by far the most effective practice, as evidenced by a very large size of effect. In contrast, simulation-based instruction without deliberate practice was associated with a positive but much smaller size of effect. Adding deliberate practice to simulation-based instruction was associated with a size of effect nearly five times larger than that for the same type of instruction without deliberate practice.
The other six teaching method instructional practices were all related to differences in university students’ use of teaching practices compared to the absence of any teaching method instruction. The sizes of effects for these practices ranged between 0.59 (modeling teaching practices) and 0.89 (critical thinking instruction). The common denominator of all of the teaching method instructional practices is explicit activities to promote students’ use of different kinds of teaching and instructional practices.
The sizes of effects for simulation-based instruction and critical thinking instruction and university student knowledge and skill acquisition were both positive but small. Simulation-based instruction was also related to student satisfaction with this instructional method.

3.1.12. Student Field Experiences

Four meta-analyses and one survey were located that included evaluations of different kinds of field experiences on teaching quality and university student or beginning teacher outcomes. No meta-analyses were located for the effects of student or practice teaching.
The survey of student teaching included information about practice teaching that involved a comparison of 10 or more weeks of (extended) student teaching with little or no student teaching, and a comparison of five to nine weeks of (limited) student teaching with little or no student teaching. The outcomes for student teaching included classroom quality (instructional planning and classroom management) and teaching practices (instructional methods and subject matter teaching). The meta-analyses of course-based field experiences and course-based service learning included comparisons with students having no field experience. The outcome measures for student field experiences included teaching practices and university student and beginning teacher performance and beliefs.
The results for the relationships between the different kinds of field experiences and teaching quality are shown in Table 13. The findings are clear-cut: the more time students participated in student or practice teaching, the more proficient they were in their use of teaching practices, and the better the measures of classroom quality. Course-based field experiences (which presumably were less intense than either type of student teaching) had a small but positive size of effect on teaching practices compared to students without any course-based field experiences.
The results for service learning and field experiences are also clear-cut. Service learning, which is a more intense type of field experience compared to course-based field experiences, was associated with sizes of effects 2 to 3 times larger in terms of the different student outcomes. Taken together, results showed that both service learning and course-based field experiences were associated with university student achievement, skill acquisition, and attitudes toward teaching compared to study participants who had no field experiences.
The results from the analyses of the effects of field experiences on the study outcomes deserve special comment since the findings are highly suggestive in terms of the proverb that “practice makes perfect”. The more time students engage in field experiences, the better are the benefits in terms of teaching quality and student performance. The results are consistent with findings from studies where it was found that deliberate practice is an important determinant of the development of expert performance [102,103,104].

3.1.13. Clinical Supervision

No meta-analyses or surveys of the clinical supervision of field-based student experiences for teacher preparation were located. We did locate three meta-analyses of related practices where findings indicated that there are positive benefits of clinical supervision of graduate students in counseling [105] and performance feedback in studies of undergraduate and graduate students and career professionals [106,107]. The outcome measures in these meta-analyses included study participant performance (clinical practices and skill acquisition), self-efficacy beliefs, and anxiety associated with clinical supervision.
The results for the relationships between the preservice practices and study outcomes are shown in Table 14. Clinical supervision and performance feedback were both related to differences in university student and career professional performance and self-efficacy beliefs compared to study participants not receiving any supervision or feedback. Both types of practices were associated with better skill acquisition, including clinical practice skills, and with stronger self-efficacy beliefs in terms of self-assessments of clinical abilities. Clinical supervision, however, was associated with heightened anxiety among counseling students. The latter would not be unexpected since, no matter how supportive a clinical supervisor may be, students are inclined to view supervision as an evaluative activity.

3.1.14. Induction and Mentoring

Six meta-analyses and four surveys were located for the effects of teacher induction, school-based or workplace mentoring, and workplace coaching practices on beginning teacher and other early career professional outcomes. The outcomes included different measures of teaching quality, career professional performance, and other types of teacher and career professional performance outcomes.
Table 15 shows the results for the relationships between beginning teacher or career professional practices (school-based induction, school-based and workplace mentoring, and workplace coaching) and the study outcomes. Neither school-based induction nor specific types of induction practices (seminars, collaborative planning, or teacher support networks) were related to teachers’ sense of preparedness. Different types of induction practices (group seminars, collaborative planning, and teacher support networks) did, however, have a small positive effect on first year teacher retention.
School-based mentoring was associated with higher quality teaching practices and improved K–12 student achievement, and to a lesser degree, beginning teacher judgments of their preparedness to teach compared to no school-based mentoring. Workplace mentoring had similar effects on career commitment, job satisfaction, and job performance compared to professionals not being mentored. Workplace coaching as well was associated with positive career professional outcomes (career commitment, performance, and attitudes toward one’s profession).
The results from the analyses of the effects of induction and mentoring on beginning teachers, career teachers, and other career professionals indicate that teacher specific mentoring and coaching are more likely to be associated with differences in outcomes than more formal induction programs and practices. This is most likely the case because mentoring and coaching are highly individualized practices, whereas induction practices are more group oriented.

3.2. Relative Effectiveness of the Teacher Preparation Practices

Results from the 14 sets of analyses in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15 indicated considerable variability in terms of the particular preservice and teacher preparation practices that were related to the study outcomes. The results for the different practices in these tables were therefore rank-ordered by the sizes of effects for the relationships between the different preservice preparation practices and the study outcomes (teaching quality, performance measures, and belief appraisals) to identify the relative importance of the practices in terms of explaining the study outcomes. The results were expected to shed light on which preservice practices, and for which outcomes, emerged as high leverage [60] and high impact [53] teacher preparation practices.

3.2.1. Teaching Quality

There were 38 different effect sizes for preservice practice–teaching quality outcome relationships in the metasynthesis. The outcomes were categorized as teacher or practitioner teaching/clinical practices, classroom organization practices, or teacher belief appraisals (e.g., teacher self-efficacy beliefs). The rankings for the influences of the preservice practices on the teaching quality outcome measures are shown in Table 16. The different practices can be grouped into five overlapping categories, with each category having discernibly different sizes of effects for the preservice practice–outcome relationships: Intensive student clinical experiences, student active learning opportunities, guidance and feedback, teacher qualifications, and practices that were not effective in terms of teaching quality.
Three of the four largest sizes of effects are for practices that involve intensive clinical experiences in different kinds of teaching and classroom practices (simulation-based instruction with deliberate practice and extensive student teaching). The sizes of effects for these high impact practices vary between 1.52 and 2.30.
The active student learning practices all involve student participation in experiences designed to promote and enhance their ability to use teaching and classroom organization practices. These preservice practices included teaching instruction, peer instruction, limited student teaching, microteaching, inquiry-based learning, and minicourses. The sizes of effects for these practices ranged between 0.70 and 0.86.
A cluster of seven practices includes the use of different types of guidance and feedback and their influence on teaching practices. These include faculty consultative feedback, clinical supervision, modeling teaching practices, school-based mentoring, student feedback on faculty instruction, faculty feedback or student performance, and workplace mentoring. The effect sizes for this cluster of practices vary from 0.32 to 0.69.
The fourth group of practices has to do primarily with the influence of teacher qualifications on teaching quality. Six of the eight practices in this group include the effect sizes for comparisons between different teacher degrees and both teaching practices and classroom organization. Teachers with more advanced degrees were observed or reported to use higher quality teaching practices and classroom organization practices. The sizes of effect ranged between 0.16 and 0.33.
The preservice practices that were associated with trivial sizes of effects were in-field certification or degree, number of university courses, school-based induction practices, and extended teacher preparation programs. The influences of these practices on students’ sense of preparedness, teaching practices, and classroom quality were close to zero, as evidenced by effect sizes between −0.07 and 0.05.
The average sizes of effects for the five clusters of preservice teacher preparation practices were 1.77 (intensive clinical experiences), 0.76 (active student learning), 0.51 (guidance and feedback), 0.21 (teacher qualifications), and −0.05 (ineffective practices). The pattern of results highlights the particular preservice practices that ought to be emphasized where the focus of teacher preparation is teaching quality.

3.2.2. Performance and Belief Outcomes

The preservice practices that were associated with study participant outcomes other than teaching quality were grouped into two categories for the ranking results: performance-related outcomes (achievement, grades, knowledge and skill acquisition, etc.) and belief appraisal outcomes (satisfaction or attitudes). There were 57 preservice practice–performance outcome relationships and 22 preservice practice–belief appraisal outcome relationships. Table 17 shows the rank ordering for the relationships between the preservice practices and the performance outcomes, and Table 18 shows the ranks for the relationships between the preservice practices and the belief appraisal outcomes. The rankings of the effect sizes for the performance outcomes include the study participants whose performance was the focus of investigation, and the rankings of the effect sizes for the belief outcomes include whether the measure was one of satisfaction or attitude.

Performance Outcomes

A number of patterns can be discerned from the relationships between the preservice practices and performance outcomes. The positive effects of the preservice practices on the study outcomes are almost entirely limited to university student performance outcomes, as evidenced by the fact that the preservice practice–outcome relationships ranked highest (33 of the 57) are for university student performance. The effects of the preservice practices on child and K to 12th grade student performance outcomes are trivial to small, as evidenced by sizes of effects between 0.03 (number of courses) and 0.18 (school-based mentoring).
The medium to large sizes of effects are for five preservice practices (peer instruction as part of traditional course instruction, class attendance, faculty coaching, inquiry-based learning, and small group learning). The sizes of effects for these five practices range between 0.63 and 0.96. The five practices include a mix of active student learning (peer instruction, inquiry-based learning, and small group learning), guidance and support (faculty coaching), and student commitment to academic success (class attendance) practices.
A comparison of the preservice practices ranked highest with those ranked lowest shows that the practices can be roughly divided into active student learning practices and passive or static practices or variables. All of the preservice practices associated with effect sizes between 0.25 and 0.96 are ones that necessitate active student participation in different kinds of learning experiences. In contrast, the preservice practices associated with effect sizes between −0.01 and 0.15 include a preponderance of static measures (teacher education, teacher certification, number of courses, and first-year university seminars).
Closer examination of the high impact preservice practices shows that they include different kinds of active student learning practices (e.g., peer instruction, inquiry-based learning, small group learning) and different types of student guidance and feedback (faculty coaching, performance feedback, and faculty mentoring). Among the top 20 ranked practices, 14 include different kinds of student learning practices, and five include different kinds of guidance and support practices. The pattern of results is very similar to that for the relationships between the preservice practices and teaching quality (Table 16).

Belief Outcomes

The preservice practices that were associated with student or beginning teacher satisfaction or attitude outcome measures are shown in Table 18. Approximately half of the preservice practice–belief appraisal effect sizes are small to medium (0.27 to 0.66), and about half are trivial to small (−0.16 to 0.24).
The preservice practices that were associated with student satisfaction with the preservice practices or attitudes toward the practices include primarily a mix of coaching and mentoring practices (workplace coaching, faculty mentoring, clinical supervision, and workplace mentoring) and active student learning methods (personalized system of instruction courses, problem-based learning, small group learning, technology-assisted learning, and simulation-based instruction). In addition, student feedback on faculty instruction and student engagement in service learning were also associated with positive belief appraisals.
Three of the preservice practices were associated with negative sizes of effects for student satisfaction with or attitudes toward the practices. Enrollment in blended courses was associated with poor satisfaction with this method of course delivery, and students in extended teacher preparation programs had more negative attitudes toward this type of program. Students who used information and communication technology learning practices indicated considerable dissatisfaction with this practice.
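The interpretive bands used throughout this section (trivial, small, medium, large) can be made concrete with a simple classifier. The sketch below assumes Cohen’s conventional cutoffs of 0.20, 0.50, and 0.80, which may differ slightly from the boundaries applied in the tables.

```python
def effect_size_band(d: float) -> str:
    """Classify an effect size magnitude into an interpretive band.

    Cohen's conventional cutoffs (0.20, 0.50, 0.80) are assumed here
    for illustration; the metasynthesis tables may use slightly
    different boundaries.
    """
    magnitude = abs(d)
    if magnitude < 0.20:
        return "trivial"
    if magnitude < 0.50:
        return "small"
    if magnitude < 0.80:
        return "medium"
    return "large"

# Example: band the endpoints of the belief appraisal ranges above.
for d in (-0.16, 0.24, 0.27, 0.66):
    print(f"d = {d:+.2f} -> {effect_size_band(d)}")
```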

4. Discussion

The results from the analyses of the relationships between the 14 different types of preservice teacher preparation practices (Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15) as well as the rank ordering of the practices by sizes of effects for the three types of study outcomes (Table 16, Table 17 and Table 18) helped identify which practices do and do not matter in terms of the outcomes that were the focus of investigation. The metasynthesis of preservice teacher preparation practices was framed in terms of “practice-based teacher education” [60], where a major goal was identifying high leverage [60] and high impact [53] teacher preparation practices considered essential for ensuring that students in teacher preparation programs “are extraordinarily well prepared” [7]. The findings add to the knowledge base in terms of the particular teacher preparation practices that are evidence-based as determined by the sizes of effects for the preservice teacher preparation practice–outcome relationships [63].
The findings that emerged from the metasynthesis paint a rather clear picture about which teacher preparation practices ought to constitute core practices as part of teacher education programs. The high leverage practices include
  • Extensive student teaching and clinical experiences (e.g., [19,48,108]);
  • Explicit instruction and practices for students to learn how to teach (e.g., [42,109,110]);
  • Faculty and clinical supervision, coaching and mentoring, and student performance feedback (e.g., [20,48,50]);
  • Active student participation and engagement in knowledge and skill acquisition (e.g., [36,111,112]).
The pattern of results for all 14 types of teacher preparation practices is shown in Table 19 in terms of a continuum of impact on student outcomes where degree of impact is defined in terms of the sizes of effects for high leverage practices. Among the four core sets of practices listed above, seven different teacher preparation practices emerged as having either high or very high impact. These practices, in order of their effects on student outcomes, are student field experiences; teaching method instruction; clinical supervision; faculty coaching, mentoring, and student performance feedback; course-based student learning methods and practices; cooperative learning practices; and web-based and e-learning instruction.
The particular teacher preparation practices that had high or very high impact are ones that teacher education experts have “called for” in terms of well-prepared teachers (e.g., [19,20,36,61,113,114]). Other practices that are often said to be important for preservice teacher education, however, proved to be only weakly associated with teaching quality, student performance, and beliefs, or not related to those outcomes at all. These practices included teacher degrees [22,23], type of teacher preparation programs [24,26], teacher certification [15,28], and the number of university courses [16,30]. Findings from meta-analyses and surveys of these kinds of practices were associated with small sizes of effects and, compared with the high impact practices, proved far less important.
Hattie [53], Schneider and Preckel [55], and others (e.g., [54,56,115]) who have conducted reviews of reviews were also interested in identifying high impact practices in higher education and teacher preparation programs. The metasynthesis adds to this knowledge base by sorting out, among more than 100 individual types of preservice practices and variables, those practices that were associated with the largest sizes of effects for the three different outcomes that were the focus of investigation. Comparisons of the rankings of the effect sizes in the metasynthesis (Table 16, Table 17 and Table 18) with those by Hattie [53] and Schneider and Preckel [55] find some overlap as well as metasynthesis-specific differences. The latter was not unexpected given that the purposes of the reviews of reviews were somewhat different, with each emphasizing different preservice practices. Notwithstanding the differences, the three sets of effect size rankings can inform more detailed identification of the conditions under which teacher preparation practices are likely to have optimal benefits. For example, whereas the number of teacher preparation courses was not found to be highly related to the study outcomes in the metasynthesis, Schneider and Preckel [55] found that course preparation, organization, and delivery, and especially the use of practices that included meaningful student learning experiences, were associated with the largest sizes of effects in their review (see e.g., [16]).

4.1. Caveats and Limitations

It is important to note a number of caveats to prevent any misunderstanding of the results from the metasynthesis. One caveat has to do with the fact that the findings and interpretation of the results are limited to the preservice teacher preparation practices for which we were able to locate meta-analyses and surveys of the practices. There are many practices for which we were not able to locate research syntheses. These include, but are not limited to, blended or integrated teacher preparation programs (e.g., [25,116]), foundation and methods courses (e.g., [16,117]), case-based learning (e.g., [35,118]), and different types of teacher preparation field experiences (e.g., [47,119,120]). Findings from meta-analyses of these practices might have influenced the patterns of results in the metasynthesis.
A second caveat is the need to understand the “make up” of the preservice practices that were the focus of investigation and the fact that some practices are proxies for other teacher preparation practices. For example, knowing that a teacher has a bachelor’s degree tells us very little about the quality of the teacher’s preservice preparation program. In contrast, knowing that inquiry-based learning was used to actively engage students in mastering course content tells us quite a bit about the characteristics of the learning method.
There are several limitations as well that need to be pointed out. The first is the fact that the findings in the metasynthesis are those for the main effects of the preservice practices on the study outcomes. This was the case because so few meta-analyses included information for (a) discerning the conditions under which the preservice practices were most effective and (b) identifying the variables that moderated the relationships between the practices and the study outcomes. Knowledge about these factors would improve our understanding of “what works best” [53] (p. 79), that is, which practices are most effective under which conditions.
A second limitation has to do with the fact that the meta-analyses in the metasynthesis were “quite uneven” in their approaches to synthesizing study results, and especially in terms of operationally defining the practices that were the focus of investigation. A concerted effort was made to be sure that practices in different meta-analyses were in fact the same or very similar practices before aggregating results. In some cases, however, we needed to rely on limited information about the practices and had to make our best judgments about the practices that were the focus of investigation.

4.2. Implications for Future Research

Two types of research are needed to make further advances in our understanding of core teacher preparation practices. The first is the need for meta-analyses of practices that have yet to be synthesized, and especially practices that experts claim are effective, to empirically determine whether the practices are in fact evidence-based teacher preparation practices. These include, but are not limited to, the practices listed earlier in the Discussion section. The second is the need to identify which preservice practices under which conditions are most effective in terms of which outcomes. Meta-analyses that focus on these kinds of relationships would help identify “practices of choice” in terms of which student learning methods and experiences are most likely to result in well-prepared teachers.

5. Conclusions

At the outset it was noted that the primary purpose of the metasynthesis was to identify evidence-based core teacher preparation practices based on the sizes of effects between different types of preservice practices and the study outcomes. A point was made that the pattern of results and not the findings for any one particular preservice practice ought to be the foundation for interpreting the metasynthesis results. Our findings clearly indicate that different clusters of practices stood out as being high leverage and high impact practices. These particular practices, when used in concert, ought to be emphasized in teacher preparation programs if highly qualified teachers are to be ready to enter the workforce prepared to teach in a manner that benefits preschool, elementary, middle school, and high school students.

Author Contributions

Conceptualization, C.J.D., D.W.H. and R.B.H.; data curation, D.W.H., H.W. and K.A.; formal analysis, D.W.H. and C.J.D.; methodology, C.J.D.; project administration, C.J.D.; writing, C.J.D.; review and editing, H.W., K.A. and C.J.D.

Funding

The preparation of this metasynthesis was supported, in part, by funding from the U.S. Department of Education, Office of Special Education Programs (No. H325B120004) for the Early Childhood Personnel Center, University of Connecticut Health Center (Mary Beth Bruder, Principal Investigator).

Acknowledgments

The contents and opinions expressed, however, are those of the authors and do not necessarily reflect the policy or official position of the U.S. Department of Education, Office of Special Education Programs, University of Connecticut Health Center, or the Early Childhood Personnel Center, and no endorsement should be inferred or implied.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Meta-Analyses and Surveys Included in the Metasynthesis

  • Abrami, P. C., Bernard, R. M., Borokhovski, E., Waddington, D. I., Wade, C. A., & Persson, T. (2015). Strategies for teaching students to think critically: A meta-analysis. Review of Educational Research, 85(2), 275-314. doi:10.3102/0034654314551063
  • Aiello, N., & Wolfle, L. (1980). A meta-analysis of individualized instruction in science. Paper presented at the American Educational Research Association, Boston. Retrieved from https://files.eric.ed.gov/fulltext/ED190404.pdf
  • Alegre-Ansuategui, F. J., Moliner, L., Lorenzo, G., & Maroto, A. (2018). Peer tutoring and academic achievement in mathematics: A meta-analysis. Eurasia Journal of Mathematics, Science and Technology Education, 14(1), 337-354. Retrieved from http://www.ejmste.com/Peer-Tutoring-and-Academic-Achievement-in-Mathematics-A-Meta-Analysis,79805,0,2.html doi:10.12973/ejmste/79805
  • Alfieri, L., Brooks, P. J., Aldrich, N. J., & Tenenbaum, H. R. (2011). Does discovery-based instruction enhance learning? Journal of Educational Psychology, 103(1), 1-18. doi:10.1037/a0021017
  • Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with distance education to traditional classrooms in higher education: A meta-analysis. The American Journal of Distance Education, 16(2), 83-97.
  • Allen, M., Mabry, E., Mattrey, M., Bourhis, J., Titsworth, S., & Burrell, N. (2004). Evaluating the effectiveness of distance learning: A comparison using meta-analysis. Journal of Communication, 54(3), 402-420.
  • Allen, T. D., Eby, L. T., Poteet, M. L., Lentz, E., & Lima, L. (2004). Career benefits associated with mentoring for protégés: A meta-analysis. Journal of Applied Psychology, 89(1), 127-136. doi:10.1037/0021-9010.89.1.127
  • Anderson, R. D., Kohl, S., Smith, M. L., & Glass, G. V. (1982). Science meta-analysis: Final report of NSF Project No. SED 80-12310, Volume I and II. Boulder, CO: Laboratory for Research in Science and Mathematics Education, University of Colorado.
  • Andrew, M. D., & Schwab, R. L. (1995). Has reform in teacher education influenced teacher performance? An outcome assessment of graduates of an eleven-university consortium. Action in Teacher Education, 17(3), 43-53. doi:10.1080/01626620.1995.10463255
  • Ayaz, M. F., & Söylemez, M. (2015). The effect of the project-based learning approach on the academic achievements of the students in science classes in Turkey: A meta-analysis study. Education and Science, 40(178), 255-283. Retrieved from https://www.researchgate.net/publication/275959212_The_Effect_of_the_Project-Based_Learning_Approach_on_the_Academic_Achievements_of_the_Students_in_Science_Classes_in_Turkey_A_Meta-Analysis_Study doi:10.15390/EB.2015.4000
  • Azevedo, R., & Bernard, R. M. (1995). The effects of computer-presented feedback on learning from computer-based instruction: A meta-analysis. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA.
  • Baker, S. B., & Daniels, T. G. (1989). Integrating research on the microcounseling program: A meta-analysis. Journal of Counseling Psychology, 36(2), 213-222. doi:10.1037/0022-0167.36.2.213
  • Baker, T. E., & Andrew, M. D. (1993). An eleven institution study of four-year and five-year teacher education program graduates. Paper presented at the Annual Meeting of the Association of Teacher Educators, Los Angeles. Retrieved from https://files.eric.ed.gov/fulltext/ED355224.pdf
  • Balta, N., Michinov, N., Balyimez, S., & Ayaz, M. F. (2017). A meta-analysis of the effect of peer instruction on learning gain: Identification of informational and cultural moderators. International Journal of Educational Research, 86, 66-77. doi:10.1016/j.ijer.2017.08.009
  • Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. T. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61(2), 213-238. doi:10.3102/00346543061002213
  • Bayraktar, S. (2002). A meta-analysis of the effectiveness of computer-assisted instruction in science education. Journal of Research on Technology in Education, 34(2), 173-188. doi:10.1080/15391523.2001.10782344
  • Benz, B. F. (2010). Improving the quality of e-learning by enhancing self-regulated learning: A synthesis of research on self-regulated learning and an implementation of a scaffolding concept. (Doctoral Dissertation), Technische Universität Darmstadt, Darmstadt, Germany.
  • Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., … Huang, B. (2004). How does distance education compare to classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379-439. doi:10.3102/00346543074003379
  • Bernard, R. M., Borokhovski, E., Schmid, R. F., Tamim, R. M., & Abrami, P. C. (2014). A meta-analysis of blended learning and technology use in higher education: from the general to the applied. Journal of Computing in Higher Education, 26, 87-122.
  • Boe, E., Shin, S., & Cook, L. H. (2007). Does teacher preparation matter for beginning teachers in either special or general education? Journal of Special Education, 41(3), 158-170. doi:10.1177/00224669070410030201
  • Borman, G. D., & Dowling, N. M. (2008). Teacher attrition and retention: A meta-analytic and narrative review of the research. Review of Educational Research, 78(3), 367-409. doi:10.3102/0034654308321455
  • Bowen, C. W. (2000). A quantitative literature review of cooperative learning effects on high school and college chemistry achievement. Journal of Chemical Education, 77(1), 116-119. doi:10.1021/ed077p116
  • Butcher, P. M. (1981). An experimental investigation of the effectiveness of a value claim strategy unit for use in teacher education (Master’s Thesis), Macquarie University, Sydney, Australia.
  • Camnalbur, M., & Erdoğan, Y. (2008). A meta analysis on the effectiveness of computer-assisted instruction: Turkey sample. Educational Sciences: Theory & Practice, 8(2), 497-505.
  • Capar, G., & Tarim, K. (2015). Efficacy of the cooperative learning method on mathematics achievement and attitude: A meta-analysis research. Educational Sciences: Theory & Practice, 15(2), 553-559. doi:10.12738/estp.2015.2.2098
  • Castillo-Manzano, J. I., Castro-Nuño, M., López-Valpuesta, L., Sanz-Díaz, M., & Yñiguez, R. (2016). Measuring the effect of ARS on academic performance: A global meta-analysis. Computers & Education, 96, 109-121. doi:10.1016/j.compedu.2016.02.007
  • Cohen, P. (1980). Effectiveness of student rating feedback for improving college instruction: A meta-analysis of findings. Research in Higher Education, 13, 321-341. doi:10.1007/BF00976252
  • Cohen, P., Ebeling, B., & Kulik, J. (1981). A meta-analysis of outcome studies of visual-based instruction. Educational Communication and Technology Journal, 29(1), 26-36.
  • Cook, D. A., Hamstra, S. J., Brydges, R., Zendejas, B., Szostek, J. H., Wang, A. T., … Hatala, R. (2013). Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis. Medical Teacher, 35, e867-898. Retrieved from https://www.tandfonline.com/doi/pdf/10.3109/0142159X.2012.714886
  • Cook, D. A., Levinson, A. J., Garside, S., Dupras, D. M., Erwin, P. J., & Montori, V. M. (2010). Instructional design variations in internet-based learning for health professions education: A systematic review and meta-analysis. Academic Medicine, 85(5), 909-922.
  • Credé, M., Roch, S. G., & Kieszczynka, U. M. (2010). Class attendance in college: A meta-analytic review of the relationship of class attendance with grades and student characteristics. Review of Educational Research, 80(2), 272-295. doi:10.3102/0034654310362998
  • DeAngelis, K. J., Wall, A. F., & Che, J. (2013). The impact of preservice preparation and early career support on novice teachers’ career intentions and decisions. Journal of Teacher Education, 64(4), 338-355. doi:10.1177/0022487113488945
  • Dochy, F., Segers, M., Van den Bossche, P., & Gijbels, D. (2003). Effects of problem-based learning: A meta-analysis. Learning and Instruction, 13(5), 533-568. doi:10.1016/S0959-4752(02)00025-7
  • Druva, C. A., & Anderson, R. D. (1983). Science teacher characteristics by teacher behavior and by student outcome: A meta-analysis of research. Journal of Research in Science Teaching, 20(5), 467-479. doi:10.1002/tea.3660200509
  • Duke, L., Karson, A., & Wheeler, J. (2006). Do mentoring and induction programs have greater benefits for teachers who lack preservice training? Journal of International and Public Affairs, 17, 61-82. Retrieved from https://jpia.princeton.edu/sites/jpia/files/2006-4.pdf
  • Dunst, C. J., Trivette, C. M., & Hamby, D. W. (2010). Meta-analysis of the effectiveness of four adult learning methods and strategies. International Journal of Continuing Education and Lifelong Learning, 3(1), 91-112. Retrieved from http://hdl.voced.edu.au/10707/41
  • Early, D. M., Maxwell, K. L., Burchinal, M., Alva, S., Bender, R. H., Bryant, D., … Zill, N. (2007). Teachers’ education, classroom quality, and young children’s academic skills: Results from seven studies of preschool programs. Child Development, 78, 558-580. doi:10.1111/j.1467-8624.2007.01014.x
  • Eby, L. T., Allen, T. D., Evans, S. C., Ng, T., & DuBois, D. L. (2008). Does mentoring matter? A multidisciplinary meta-analysis comparing mentored and non-mentored individuals. Journal of Vocational Behavior, 72(2), 254-267. doi:10.1016/j.jvb.2007.04.005
  • Falenchuk, O., Perlman, M., McMullen, E., Fletcher, B., & Shah, P. S. (2017). Education of staff in preschool aged classrooms in child care centers and child outcomes: A meta-analysis and systematic review. PLoS ONE, 12(8), e0183673. Retrieved from https://doi.org/10.1371/journal.pone.0183673 doi:10.1371/journal.pone.0183673
  • Fletcher-Flinn, C. M., & Gravatt, B. (1995). The efficacy of computer assisted instruction (CAI): A meta-analysis. Journal of Educational Computing Research, 12(3), 219-242. doi:10.2190/51D4-F6L3-JQHU-9M31
  • Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. PNAS, 111(23), 8410-8415. doi:10.1073/pnas.1319030111
  • Fukkink, R. G., Trienekens, N., & Kramer, L. J. C. (2011). Video feedback in education and training: Putting learning in the picture. Educational Psychology Review, 23, 45-63. doi:10.1007/s10648-010-9144-5
  • Gijbels, D., Dochy, F., Van den Bossche, P., & Segers, M. (2005). Effects of problem-based learning: A meta-analysis from the angle of assessment. Review of Education Research, 75, 27-61. doi:10.3102/00346543075001027
  • Gliessman, D. H., Pugh, R. C., Dowden, D. E., & Hutchins, T. F. (1988). Variables influencing the acquisition of a generic teaching skill. Review of Educational Research, 58(1), 25-46. doi:10.3102/00346543058001025
  • Gong, X. (2015). Does having a preschool teacher with a bachelor’s degree matter for children’s developmental outcomes? (Doctoral Dissertation), Columbia University, New York. Retrieved from https://core.ac.uk/download/pdf/158157910.pdf
  • Greenwald, R., Hedges, L. V., & Laine, R. D. (1996). The effect of school resources on student achievement. Review of Education Research, 66(3), 361-396. doi:10.3102/00346543066003361
  • Hacke, W. (2010). Meta-analysis comparing student outcomes for national board certified teachers and non-national board certified teachers (Doctoral Dissertation), University of San Francisco, San Francisco. Retrieved from https://repository.usfca.edu/cgi/viewcontent.cgi?article=1384&context=diss
  • Hatala, R., Cook, D. A., Zendejas, B., Hamstra, S. J., & Brydges, R. (2013). Feedback for simulation-based procedural skills training: A meta-analysis and clinical narrative synthesis. Advances in Health Sciences Education, 19(2), 251-272. doi:10.1007/s10459-013-9462-8
  • Henk, W. A., & Stahl, N. A. (1985, November). A meta-analysis of the effect of notetaking on learning from lecture. College reading and learning assistance. Paper presented at the Annual Meeting of the National Reading Conference, St. Petersburg Beach, FL. Retrieved from https://files.eric.ed.gov/fulltext/ED258533.pdf
  • Hsu, Y. (2003). The effectiveness of computer-assisted instruction in statistics education: A meta-analysis. (Doctoral Dissertation), University of Arizona, Tucson, AZ.
  • Huddy, W. P. (2012). A meta-analytic review of cooperative learning practices in higher education: A human communication perspective. (Doctoral Dissertation), University of Denver, Denver, CO. Retrieved from https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=1296&context=etd
  • Ingvarson, L., Beavis, A., & Kleinhenz, E. (2007). Factors affecting the impact of teacher education programmes on teacher preparedness: Implications for accreditation policy. European Journal of Teacher Education, 30, 351-381. doi:10.1080/02619760701664151
  • Jahng, N., Krug, D., & Zhang, Z. (2007). Student achievement in online distance education compared to face-to-face education. European Journal of Open, Distance and E-Learning. Retrieved from http://www.eurodl.org/materials/contrib/2007/Jahng_Krug_Zhang.pdf
  • Johnson, D. W., Maruyama, G., Johnson, R., Nelson, D., & Skon, L. (1981). Effects of cooperative, competitive, and individualistic goal structures on achievement: A meta-analysis. Psychological Bulletin, 89(1), 47-62. doi:10.1037/0033-2909.89.1.47
  • Jones, R. J., Woods, S. A., & Guillaume, Y. R. F. (2016). The effectiveness of workplace coaching: A meta-analysis of learning and performance outcomes from coaching. Journal of Occupational and Organizational Psychology, 89, 249-277. doi:10.1111/joop.12119
  • Jurewitsch, B. (2012). A meta-analytic and qualitative review of online versus face-to-face problem-based learning. International Journal of E-Learning & Distance Education, 26(2).
  • Kalaian, S. A., & Kasim, R. A. (2014). A meta-analytic review of studies of effectiveness of small-group learning on statistics achievement. Journal of Statistics Education, 22(1). doi:10.1080/10691898.2014.11889691
  • Karich, A. C., Burns, M. K., & Maki, K. E. (2014). Updated meta-analysis of learner control within educational technology. Review of Educational Research, 84(3), 392-410. doi:10.3102/0034654314526064
  • Kelley, P., & Camilli, G. (2007). The impact of teacher education on outcomes in center-based early childhood education programs: A meta-analysis. New Brunswick, NJ: National Institute for Early Education Research. Retrieved from http://nieer.org/
  • Kim, J., Park, J.-H., & Shin, S. (2016). Effectiveness of simulation-based nursing education depending on fidelity: A meta-analysis. BMC Medical Education, 16. doi:10.1186/s12909-016-0672-7
  • Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284. doi:10.1037/0033-2909.119.2.254
  • Kobayashi, K. (2005). What limits the encoding effect of note-taking? A meta-analytic examination. Contemporary Educational Psychology, 30, 242-262. doi:10.1016/j.cedpsych.2004.10.001
  • Koufogiannakis, D., & Wiebe, N. (2006). Effective methods for teaching information literacy skills to undergraduate students: A systematic review and meta-analysis. Evidence Based Library and Information Practice, 1(3). doi:10.18438/B8MS3D
  • Kraft, M. A., Blazar, D., & Hogan, D. (2018). The effect of teacher coaching on instruction and achievement: A meta-analysis of the causal evidence. Review of Educational Research, 88(4), 547-588. doi:10.3102/0034654318759268
  • Kulik, C. C., & Kulik, J. A. (1991). Effectiveness of computer-based instruction: An updated analysis. Computers in Human Behavior, 7, 75-94. doi:10.1016/0747-5632(91)90030-5
  • Kulik, J., Kulik, C., & Cohen, P. (1979a). A meta-analysis of outcome studies of Keller’s personalized system of instruction. American Psychologist, 34(4), 307-318. doi:10.1037/0003-066X.34.4.307
  • Kulik, J., Kulik, C., & Cohen, P. (1979b). Research on audio-tutorial instruction: A meta-analysis of comparative studies. Research in Higher Education, 11(4), 321-341. doi:10.1007/BF00975623
  • Kulik, J., Kulik, C., & Cohen, P. (1980). Effectiveness of computer-based college teaching: A meta-analysis of findings. Review of Educational Research, 50(4), 525-544. doi:10.3102/00346543050004525
  • Kulik, J. A., Cohen, P. A., & Ebeling, B. J. (1980). Effectiveness of programmed instruction in higher education: A meta-analysis of findings. Educational Evaluation and Policy Analysis, 2(6), 51-64. doi:10.3102/01623737002006051
  • Kulik, J. A., & Kulik, C. C. (1988). Timing of feedback and verbal learning. Review of Educational Research, 58(1), 79-97. doi:10.3102/00346543058001079
  • Larwin, K. H., Gorman, J., & Larwin, D. A. (2013). Assessing the impact of testing aids on post-secondary student performance: A meta-analytic investigation. Educational Psychology Review, 25, 429-443. doi:10.1007/s10648-013-9227-1
  • Larwin, K. H., & Larwin, D. A. (2013). The impact of guided notes on post-secondary student achievement: A meta-analysis. International Journal of Teaching and Learning in Higher Education, 25(1), 47-58.
  • Latham, N., Mertens, S. B., & Hamann, K. (2015). A comparison of teacher preparation models and implications for teacher attrition: Evidence from a 14-year longitudinal study. School-University Partnerships, 8(2), 79-89.
  • Leary, H., Walker, A., Shelton, B. E., & Fitt, M. H. (2013). Exploring the relationships between tutor background, tutor training, and student learning: A problem-based learning meta-analysis. Interdisciplinary Journal of Problem-Based Learning, 7(1), 40-66. doi:10.7771/1541-5015.1331
  • Leary, H. M. (2012). Self-directed learning in problem-based learning versus traditional lecture-based learning: A meta-analysis (Doctoral Dissertation), Utah State University, Logan, UT. Retrieved from https://www.researchgate.net/publication/267988290
  • Leung, K. C. (2015). Preliminary empirical model of crucial determinants of best practice for peer tutoring on academic achievement. Journal of Educational Psychology, 107(2), 558-579. doi:10.1037/a0037698
  • Liu, H.-Y., & Chang, C.-C. (2017). Effectiveness of 4Ps creativity teaching for college students: A systematic review and meta-analysis. Creative Education, 8, 857-869. doi:10.4236/ce.2017.86062
  • Liu, S.-N. C., & Beaujean, A. A. (2017). The effectiveness of team-based learning on academic outcomes: A meta-analysis. Scholarship of Teaching and Learning in Psychology, 3(1), 1-14. doi:10.1037/stl0000075
  • Lou, Y., Abrami, P., & d’Apollonia, S. (2001). Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71(3), 449-521.
  • Lou, Y., Abrami, P. C., Spence, J. C., Poulsen, C., Chambers, B., & D’Apollonia, S. (1996). Within-class grouping: A meta-analysis. Review of Educational Research, 66(4), 423-458. doi:10.3102/00346543066004423
  • Malone, M. R. (1984). Project MAFEX: How effective are field experiences in science education? Paper presented at the annual conference of the National Association for Research in Science Teaching, New Orleans, LA.
  • Manning, M., Garvis, S., Fleming, C., & Wong, G. T. W. (2017). The relationship between teacher qualification and the quality of the early childhood education and care environment. Campbell Systematic Reviews, 2017(1), 1-82. doi:10.4073/csr.2017.1
  • McGaghie, W. C., Issenberg, B., Cohen, E. R., Barsuk, J. H., & Wayne, D. B. (2011). Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Academic Medicine, 86(6), 706-711. doi:10.1097/ACM.0b013e318217e119
  • Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1-47.
  • Menges, R., & Brinko, K. (1986, April). Effects of student evaluation feedback: A meta-analysis of higher education research. Paper presented at the American Educational Research Association, San Francisco. Retrieved from https://files.eric.ed.gov/fulltext/ED270408.pdf
  • Merchant, Z., Goetz, E. T., Cifuentes, L., Keeney-Kennicutt, W., & Davis, T. J. (2014). Effectiveness of virtual reality-based instruction on students’ learning outcomes in K–12 and higher education: A meta-analysis. Computers & Education, 70, 29-40. doi:10.1016/j.compedu.2013.07.033
  • Metcalf, K. K. (1995, April). Laboratory experiences in teacher education: A meta-analytic review of research. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA. Retrieved from https://files.eric.ed.gov/fulltext/ED388645.pdf
  • Michko, G. M. (2008). Meta-analysis of effectiveness of technology use in undergraduate engineering education. Paper presented at the 38th ASEE/IEEE Frontiers in Education Conference, Saratoga Springs, NY. Retrieved from http://icee.usm.edu/icee/conferences/FIEC2008/papers/1378.pdf
  • Monk, D. H. (1994). Subject area preparation of secondary mathematics and science teachers and student achievement. Economics of Education Review, 13(2), 125-145. doi:10.1016/0272-7757(94)90003-5
  • Mothibi, G. (2015). A meta-analysis of the relationship between e-learning and students’ academic achievement in higher education. Journal of Education and Practice, 6(9), 6-9.
  • Murad, M. H., Coto-Yglesias, F., Varkey, P., Prokop, L. J., & Murad, A. L. (2010). The effectiveness of self-directed learning in health professions education: A systematic review. Medical Education, 44, 1057-1068.
  • Niu, L., Behar-Horenstein, L. S., & Garvan, C. W. (2013). Do instructional interventions influence college students’ critical thinking skills? A meta-analysis. Educational Research Review, 9, 114-128. doi:10.1016/j.edurev.2012.12.002
  • Novak, J. M., Markey, V., & Allen, M. (2007). Evaluating cognitive outcomes of service learning in higher education: A meta-analysis. Communication Research Reports, 24(2), 149-157. doi:10.1080/08824090701304881
  • Pai, H.-H., Sears, D. A., & Maeda, Y. (2015). Effects of small-group learning on transfer: A meta-analysis. Educational Psychology Review, 27, 79-102. doi:10.1007/s10648-014-9260-8
  • Parsons, J. A. (1991). A meta-analysis of learner control in computer-based learning environments. (Doctoral Dissertation), Nova Southeastern University, Fort Lauderdale, FL. Retrieved from http://nsuworks.nova.edu/gscis_etd/765
  • Penny, A. R., & Coe, R. (2004). Effectiveness of consultation on student ratings feedback: A meta-analysis. Review of Education Research, 74(2), 215-253. doi:10.3102/00346543074002215
  • Permzadian, V., & Credé, M. (2016). Do first-year seminars improve college grades and retention? A quantitative review of their overall effectiveness and an examination of moderators of effectiveness. Review of Educational Research, 86(1), 277-316. doi:10.3102/0034654315584955
  • Qu, Y., & Becker, B. J. (2003). Does traditional teacher certification imply quality? A meta-analysis. Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL. Retrieved from https://files.eric.ed.gov/fulltext/ED477460.pdf
  • Rees, E., Quinn, P. J., Davies, B., & Fotheringham, V. (2016). How does peer teaching compare to faculty teaching? A systematic review and meta-analysis. Medical Teacher, 38, 829-837. doi:10.3109/0142159X.2015.1112888
  • Roberts, R. M. (2011). Best instructional practices for distance education: A meta-analysis. (Doctoral Dissertation), University of Nevada, Las Vegas, Las Vegas, NV. Retrieved from https://digitalscholarship.unlv.edu/cgi/viewcontent.cgi?article=2239&context=thesesdissertations
  • Ronfeldt, M., Schwartz, N., & Jacob, B. (2014). Does pre-service preparation matter? Examining an old question in new ways. Teachers College Record, 116(10), 1-46.
  • Schenker, J. D. (2007). The effectiveness of technology use in statistics instruction in higher education: A meta-analysis using hierarchical linear modelling. (Doctoral Dissertation), Kent State University, Kent, OH.
  • Schmid, R. F., Bernard, R. M., Borokhovski, E., Tamim, R., Abrami, P. C., Wade, C. A., … Lowerison, G. (2009). Technology’s effect on achievement in higher education: A stage I meta-analysis of classroom applications. Journal of Computing in Higher Education, 21, 95-109. doi:10.1007/s12528-009-9021-8
  • Schmid, R. F., Bernard, R. M., Borokhovski, E., Tamim, R. M., Abrami, P. C., Surkes, M. A., … Woods, J. (2014). The effects of technology use in postsecondary education: A meta-analysis of classroom applications. Computers & Education, 72, 271-291. doi:10.1016/j.compedu.2013.11.002
  • Schmidt, H. G., van der Molen, H. T., te Winkel, W. W. R., & Wijnen, W. H. F. W. (2009). Constructivist, problem-based learning does work: A meta-analysis of curricular comparisons involving a single medical school. Educational Psychologist, 44(4), 227-249. doi:10.1080/00461520903213592
  • Shachar, M., & Neumann, Y. (2010). Twenty years of research on the academic performance differences between traditional and distance learning: Summative meta-analysis and trend examination. MERLOT Journal of Online Learning and Teaching, 6(2), 318-334.
  • Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of web-based and classroom instruction: A meta-analysis. Personnel Psychology, 59, 623-664. doi:10.1111/j.1744-6570.2006.00049.x
  • Smith, T. M., & Ingersoll, R. M. (2004). What are the effects of induction and mentoring on beginning teacher turnover? American Educational Research Journal, 41(3), 681-714. Retrieved from http://repository.upenn.edu/gse_pubs/135 doi:10.3102/00028312041003681
  • Sneyers, E., & De Witte, K. (2018). Interventions in higher education and their effect on student success: A meta-analysis. Educational Review, 70(2), 208-228.
  • Sosa, G. W., Berger, D. E., Saw, A. T., & Mary, J. C. (2011). Effectiveness of computer-assisted instruction in statistics: A meta-analysis. Review of Educational Research, 81(1), 97-128. doi:10.3102/0034654310378174
  • Sparks, K. (2004). The effect of teacher certification on student achievement (Doctoral Dissertation), Texas A&M University, College Station, TX. Retrieved from https://dspacepre1.library.tamu.edu/bitstream/handle/1969.1/2229/etd-tamu-2004A-EPSY-Sparks-1.pdf?sequence=1
  • Springer, L., Stanne, M., & Donovan, S. (1999). Effects of small group learning on undergraduates in science, mathematics, engineering, and technology: A meta-analysis. Review of Educational Research, 69(1), 21-51. doi:10.3102/00346543069001021
  • Steenbergen-Hu, S., & Cooper, H. (2014). A meta-analysis of the effectiveness of intelligent tutoring systems on college students’ academic learning. Journal of Educational Psychology, 106(2), 331-347. doi:10.1037/a0034752
  • Sweitzer, G. L., & Anderson, R. D. (1983). A meta-analysis of research on science teacher education practices associated with inquiry strategy. Journal of Research in Science Teaching, 20(5), 453-466. doi:10.1002/tea.3660200508
  • Theeboom, T., Beersma, B., & van Vianen, A. E. M. (2014). Does coaching work? A meta-analysis on the effects of coaching on individual level outcomes in an organizational context. Journal of Positive Psychology, 9(1), 1-18. doi:10.1080/17439760.2013.837499
  • Thomas, A., & Loadman, W. E. (2001). Evaluating teacher education programs using a national survey. Journal of Educational Research, 94(4), 195-206. doi:10.1080/00220670109598753
  • Timmerman, C. E., & Kruepke, K. A. (2006). Computer-assisted instruction, media richness, and college student performance. Communication Education, 55(1), 73-104. doi:10.1080/03634520500489666
  • Underhill, C. M. (2006). The effectiveness of mentoring programs in corporate settings: A meta-analytical review of the literature. Journal of Vocational Behavior, 68(2), 292-307.
  • Ungerleider, C., & Burns, T. (2003). A systematic review of the effectiveness and efficiency of networked ICT in education. Ottawa, Canada: Council of Ministers of Education, Canada and Industry Canada. Retrieved from http://204.225.6.243/stats/SystematicReview2003.en.pdf
  • Üstün, U. (2012). To what extent is problem-based learning effective as compared to traditional teaching in science education? A meta-analysis study. (Doctoral Dissertation), Middle East Technical University, Ankara, Turkey. Retrieved from http://etd.lib.metu.edu.tr/upload/12615106/index.pdf
  • Van der Kleij, F. M., Feskens, R. C. W., & Eggen, T. J. H. M. (2015). Effects of feedback on a computer-based learning environment on students’ learning outcomes: A meta-analysis. Review of Education Research, 85(4), 475-511. doi:10.3102/0034654314564881
  • Vernon, D. T. A., & Blake, R. L. (1993). Does problem-based learning work? A meta-analysis of evaluative research. Academic Medicine, 68(7), 550-563. doi:10.1097/00001888-199307000-00015
  • Vo, H. M., Zhu, C., & Diep, N. A. (2017). The effect of blended learning on student performance at course-level in higher education: A meta-analysis. Studies in Educational Evaluation, 53, 17-28. doi:10.1016/j.stueduc.2017.01.002
  • Whitford, D. K., Zhang, D., & Katsiyannis, A. (2018). Traditional vs. alternative teacher preparation programs: A meta-analysis. Journal of Child and Family Studies, 27(3), 671-685. doi:10.1007/s10826-017-0932-0
  • Whittaker, S. M. (2004). A multi-vocal synthesis of supervisees’ anxiety and self-efficacy during clinical supervision: Meta-analysis and interviews. (Doctoral Dissertation), Virginia Polytechnic Institute and State University, Blacksburg, VA.
  • Williams, S. L. (2004). A meta-analysis of the effectiveness of distance education in allied health science programs. (Doctoral Dissertation), University of Cincinnati, Cincinnati, OH.
  • Wittwer, J., & Renkl, A. (2010). How effective are instructional explanations in example-based learning? A meta-analytic review. Educational Psychology Review, 22, 393-409. doi:10.1007/s10648-010-9136-5
  • Yaakub, M. N. (1998). Meta-analysis of the effectiveness of computer-assisted instruction in technical education and training. (Doctoral Dissertation), Virginia Polytechnic Institute and State University, Blacksburg, VA. Retrieved from https://vtechworks.lib.vt.edu/bitstream/handle/10919/30651
  • Yang, L. (2017). Meta-analysis of the impact of service learning on students from statistical perception. Research on Modern Higher Education, 3, 87-89.
  • Yeany, R. H., & Padilla, M. J. (1986). Training science teachers to utilize better teaching strategies: A research synthesis. Journal of Research in Science Teaching, 23(2), 85-95. doi:10.1002/tea.3660230202
  • Zhao, Y. (2003). Recent developments in technology and language learning: A literature review and meta-analysis. CALICO Journal, 21(1), 7-27.
  • Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, H. S. (2005). What makes the difference? A practical analysis of research on the effectiveness of distance education. Teachers College Record, 107, 1836-1884. doi:10.1111/j.1467-9620.2005.00544.x

References

  1. Bergen, T.J., Jr. The criticisms of teacher education: A historical perspective. Teach. Educ. Q. 1992, 19, 5–18. [Google Scholar]
  2. Cochran-Smith, M. The problem of teacher education. J. Teach. Educ. 2004, 55, 295–299. [Google Scholar] [CrossRef]
  3. Moon, B. The issues and tensions around teacher education and training in the university. In Do Universities Have a Role in the Education and Training of Teachers? An International Analysis of Policy and Practice; Moon, B., Ed.; Cambridge University Press: Cambridge, UK, 2016; pp. 1–18. [Google Scholar]
  4. Labaree, D.F. An uneasy relationship: The history of teacher education in the university. In Handbook of Research on Teacher Education: Enduring Questions in Changing Contexts, 3rd ed.; Cochran-Smith, M., Feiman-Nemser, S., McIntyre, D.J., Eds.; Routledge: New York, NY, USA, 2008; pp. 290–306. [Google Scholar]
  5. Kosnik, C.; Beck, C. Priorities in Teacher Education; Routledge: London, UK, 2009. [Google Scholar]
  6. Darling-Hammond, L.; Hyler, M.E.; Gardner, M. Effective Teacher Professional Development; Learning Policy Institute: Palo Alto, CA, USA, 2017. [Google Scholar]
  7. Darling-Hammond, L. Constructing 21st-century teacher education. J. Teach. Educ. 2006, 57, 300–314. [Google Scholar] [CrossRef]
  8. Darling-Hammond, L.; Bransford, J. Preparing Teachers for a Changing World: What Teachers Should Learn and Be Able to Do; Jossey-Bass: San Francisco, CA, USA, 2005. [Google Scholar]
  9. Studying Teacher Education: The Report of the AERA Panel on Research and Teacher Education; Cochran-Smith, M.; Zeichner, K.M. (Eds.) Lawrence Erlbaum: Mahwah, NJ, USA, 2006. [Google Scholar]
  10. Cochran-Smith, M.; Villegas, A.M.; Abrams, L.W.; Chavez-Moreno, L.C.; Mills, T.; Stern, R. Research on teacher preparation: Charting the landscape of a sprawling field. In Handbook of Research on Teaching, 5th ed.; Gitomer, D.H., Bell, C.A., Eds.; American Educational Research Association: Washington, DC, USA, 2016. [Google Scholar]
  11. Goldhaber, D.; Liddle, S. The Gateway to the Profession: Assessing Teacher Preparation Programs Based on Student Achievement; CALDER: Washington, DC, USA, 2012. [Google Scholar]
  12. Saracho, O.N. Early childhood teacher preparation programmes in the USA. Early Child Dev. Care 2013, 183, 571–588. [Google Scholar] [CrossRef]
  13. Zeichner, K. The changing role of universities in US teacher education. In Do Universities Have a Role in the Education and Training of Teachers? An International Analysis of Policy and Practice; Moon, B., Ed.; Cambridge University Press: Cambridge, UK, 2016; pp. 107–126. [Google Scholar]
  14. Darling-Hammond, L. Powerful Teacher Education: Lessons from Exemplary Programs; Jossey-Bass: San Francisco, CA, USA, 2006. [Google Scholar]
  15. Darling-Hammond, L.; Berry, B.; Thoreson, A. Does teacher certification matter? Evaluating the evidence. Educ. Eval. Policy Anal. 2001, 23, 57–77. [Google Scholar] [CrossRef]
  16. Akarsu, B.; Kaya, H. Redesigning effective methods courses: Teaching pre-service teachers how to teach. Electron. J. Sci. Educ. 2012, 16, 1–16. [Google Scholar]
  17. Prince, M.; Felder, R. The many faces of inductive teaching and learning. J. Coll. Sci. Teach. 2007, 36, 14–20. [Google Scholar]
  18. Greenberg, J.; Pomerance, L.; Walsh, K. Student Teaching in the United States; National Council on Teacher Quality: Washington, DC, USA, 2011. [Google Scholar]
  19. Outcomes of High-Quality Clinical Practice in Teacher Education; Hoppey, D.; Yendol-Hoppey, D. (Eds.) Information Age Publishing: Charlotte, NC, USA, 2018. [Google Scholar]
  20. Burns, R.W.; Jacobs, J.; Yendol-Hoppey, D. The changing nature of the role of the university supervisor and function of preservice teacher supervision in an era of clinically-rich practice. Action Teach. Educ. 2016, 38, 410–425. [Google Scholar] [CrossRef]
  21. Smith, T.M.; Ingersoll, R.M. What are the effects of induction and mentoring on beginning teacher turnover? Am. Educ. Res. J. 2004, 41, 681–714. [Google Scholar] [CrossRef]
  22. Barnett, W.S. Better Teachers, Better Preschools: Student Achievement Linked to Teacher Qualifications; National Institute for Early Education Research: New Brunswick, NJ, USA, 2000; pp. 1–11. [Google Scholar]
  23. Whitebook, M. Bachelor’s Degrees Are Best: Higher Qualifications for Pre-Kindergarten Teachers Lead to Better Learning Environments for Children; The Trust for Early Education: Washington, DC, USA, 2003. [Google Scholar]
  24. Breidenstein, A. Examining the outcomes of four-year and extended teacher education programs. Teach. Educ. Pract. 2002, 15, 12–43. [Google Scholar]
  25. Piper, A.W. What we know about integrating early childhood education and early childhood special education teacher preparation programs: A review, a reminder and a request. J. Early Child. Teach. Educ. 2007, 28, 163–180. [Google Scholar] [CrossRef]
  26. Worrell, F.C.; Brabeck, M.M.; Dwyer, C.; Geisinger, K.F.; Marx, R.W.; Noell, G.H.; Pianta, R.C. Assessing and Evaluating Teacher Preparation Programs; American Psychological Association: Washington, DC, USA, 2014. [Google Scholar]
  27. Stayton, V.D.; Smith, B.J.; Dietrich, S.L.; Bruder, M.B. Comparison of state certification and professional association personnel standards in early childhood special education. Topics Early Child. Special Educ. 2012, 32, 24–37. [Google Scholar] [CrossRef]
  28. du Plessis, A.E. Out-of-Field Teaching Practices: What Educational Leaders Need to Know; Sense: Rotterdam, The Netherlands, 2017. [Google Scholar]
  29. Ingersoll, R.; Gruber, K. Out-of-Field Teaching and Educational Equality; U.S. Department of Education: Washington, DC, USA, 1996.
  30. Clift, R.T.; Brady, P. Research on methods courses and field experiences. In Studying Teacher Education: The Report of the AERA Panel on Research and Teacher Education; Cochran-Smith, M., Zeichner, K.M., Eds.; Lawrence Erlbaum: Mahwah, NJ, USA, 2005; pp. 309–424. [Google Scholar]
  31. Isikoglu, N. The effects of a teaching methods course on early childhood preservice teachers’ beliefs. J. Early Child. Teach. Educ. 2008, 29, 190–203. [Google Scholar] [CrossRef]
  32. Castle, S.R.; McGuire, C.J. An analysis of student self-assessment of online, blended, and face-to-face learning environments: Implications for sustainable education delivery. Int. Educ. Stud. 2010, 3, 36–40. [Google Scholar] [CrossRef]
  33. Mandinach, E.B.; Cline, H.F. Classroom Dynamics: Implementing a Technology-Based Learning Environment; Routledge: New York, NY, USA, 1993. [Google Scholar]
  34. Wasim, J.; Sharma, S.K.; Khan, I.A.; Siddiqui, J. Web based learning. Int. J. Comput. Sci. Inf. Technol. 2014, 5, 446–449. [Google Scholar]
  35. Levin, B.B. Using the case method in teacher education: The role of discussion and experience in teachers’ thinking about cases. Teach. Teach. Educ. 1995, 11, 63–79. [Google Scholar] [CrossRef]
  36. Prince, M.; Felder, R. Inductive teaching and learning methods: Definitions, comparisons, and research bases. J. Eng. Educ. 2006, 95, 123–138. [Google Scholar] [CrossRef]
  37. Johnson, D.W.; Johnson, R. Learning Together and Alone: Cooperative, Competitive, and Individualistic Learning, 5th ed.; Allyn & Bacon: Boston, MA, USA, 1999. [Google Scholar]
  38. Slavin, R.E. Research on cooperative learning and achievement: What we know, what we need to know. Contemp. Educ. Psychol. 1996, 21, 43–69. [Google Scholar] [CrossRef]
  39. Tosey, P.; Gregory, J. The peer learning community in higher education: Reflections on practice. Innov. Educ. Train. Int. 1998, 35, 74–81. [Google Scholar] [CrossRef]
  40. Gormally, C.; Evans, M.; Brickman, P. Feedback about teaching in higher ed: Neglected opportunities to promote change. CBE—Life Sci. Educ. 2014, 13, 187–199. [Google Scholar] [CrossRef]
  41. Greenwald, A. Validity concerns and usefulness of student ratings of instruction. Am. Psychol. 1997, 52, 1182–1186. [Google Scholar] [CrossRef] [PubMed]
  42. Donnelly, R.; Fitzmaurice, M. Towards productive reflective practice in microteaching. Innov. Educ. Teach. Int. 2011, 48, 335–346. [Google Scholar] [CrossRef]
  43. Katz, Y.J. Kindergarten teacher training through virtual reality: Three-dimensional simulation methodology. Educ. Media Int. 1999, 36, 151–156. [Google Scholar] [CrossRef]
  44. DeNeve, K.M.; Heppner, M.J. Role play simulations: The assessment of an active learning technique and comparisons with traditional lectures. Innov. High. Educ. 1997, 21, 231–246. [Google Scholar] [CrossRef]
  45. Sen, A. A study on the effectiveness of peer microteaching in a teacher education program. Educ. Sci. 2009, 34, 165–174. [Google Scholar]
  46. Baeten, M.; Simons, M. Student teachers’ team teaching: Models, effects, and conditions for implementation. Teach. Teach. Educ. 2014, 41, 92–110. [Google Scholar] [CrossRef]
  47. Guise, M.; Habib, M.; Thiessen, K.; Robbins, A. Continuum of co-teaching implementation: Moving from traditional student teaching to co-teaching. Teach. Teach. Educ. 2017, 66, 370–382. [Google Scholar] [CrossRef]
  48. Darling-Hammond, L. Strengthening clinical preparation: The holy grail of teacher education. Peabody J. Educ. 2014, 89, 547–561. [Google Scholar] [CrossRef]
  49. Howe, E.R. Exemplary teacher induction: An international review. Educ. Philos. Theory 2006, 38, 287–297. [Google Scholar] [CrossRef]
  50. Kemmis, S.; Heikkinen, H.L.T.; Fransson, G.; Aspfors, J.; Edwards-Groves, C. Mentoring of new teachers as a contested practice: Supervision, support and collaborative self-development. Teach. Teach. Educ. 2014, 43, 154–164. [Google Scholar] [CrossRef]
  51. Strong, M. Effective Teacher Induction and Mentoring: Assessing the Evidence; Teachers College Press: New York, NY, USA, 2009. [Google Scholar]
  52. Hattie, J. Visible Learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement; Routledge: New York, NY, USA, 2009. [Google Scholar]
  53. Hattie, J. The applicability of visible learning to higher education. Scholarsh. Teach. Learn. Psychol. 2015, 1, 76–91. [Google Scholar] [CrossRef]
  54. Ahn, S.; Ames, A.J.; Myers, N.D. A review of meta-analyses in education: Methodological strengths and weaknesses. Rev. Educ. Res. 2012, 82, 436–476. [Google Scholar] [CrossRef]
  55. Schneider, M.; Preckel, F. Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychol. Bull. 2017, 143, 565–600. [Google Scholar] [CrossRef]
  56. Tight, M. Systematic reviews and meta-analyses of higher education research. Eur. J. High. Educ. 2018. [Google Scholar] [CrossRef]
  57. Shadish, W.R.; Cook, T.D.; Campbell, D.T. Experimental and Quasi-Experimental Designs for Generalized Causal Inference; Houghton Mifflin: Boston, MA, USA, 2002. [Google Scholar]
  58. Bergeron, P.; Rivard, L. How to engage in pseudoscience with real data: A criticism of John Hattie’s arguments in Visible Learning from the perspective of a statistician. McGill J. Educ. 2017, 52, 237–246. [Google Scholar] [CrossRef]
  59. Myburgh, S.J. Critique of peer-reviewed articles on John Hattie’s use of meta-analysis in education. In International and Global Issues for Research (No. 2016/3); University of Bath Department of Education: Bath, UK, 2016. [Google Scholar]
  60. Forzani, F.M. Understanding “core practices” and “practice-based” teacher education: Learning from the past. J. Teach. Educ. 2014, 65, 357–368. [Google Scholar] [CrossRef]
  61. Darling-Hammond, L.; Baratz-Snowden, J. A good teacher in every classroom: Preparing the highly qualified teachers our children deserve. Educ. Horizons 2007, 85, 111–132. [Google Scholar]
  62. National Council for Accreditation of Teacher Education. What Makes a Teacher Effective? A Summary of Key Research Findings on Teacher Preparation; National Council for Accreditation of Teacher Education: Washington, DC, USA, 2006. [Google Scholar]
  63. Goldhaber, D. Evidence-based teacher preparation: Policy context and what we know. J. Teach. Educ. 2018. [Google Scholar] [CrossRef]
  64. Grossman, P.; McDonald, M. Back to the future: Directions for research in teaching and teacher education. Am. Educ. Res. J. 2008, 45, 184–205. [Google Scholar] [CrossRef]
  65. Mohanty, S.B. A supplementary bibliography of microteaching. Innov. Educ. Teach. Int. 1984, 21, 142–151. [Google Scholar] [CrossRef]
  66. Sullivan, T. Teacher professional development in crisis edited series: Annotated bibliography. Inter-Agency Network for Education in Emergencies Toolkit. Burns, M., Ed.; 2013. Available online: http://www.ineesite.org/en/bibliography-tpd (accessed on 8 November 2018).
67. Cameron, M.; Baker, R. Research on Initial Teacher Education in New Zealand: 1993–2004: Literature Review and Annotated Bibliography; New Zealand Council for Educational Research: Wellington, New Zealand, 2004.
68. Lloyd, M.; Bahr, N. What matters in higher education: A meta-analysis of a decade of learning design. J. Learn. Des. 2016, 9, 2.
69. Borman, G.D.; Dowling, N.M. Teacher attrition and retention: A meta-analytic and narrative review of the research. Rev. Educ. Res. 2008, 78, 367–409.
70. Alfieri, L.; Brooks, P.J.; Aldrich, N.J.; Tenenbaum, H.R. Does discovery-based instruction enhance learning? J. Educ. Psychol. 2011, 103, 1–18.
71. Schmidt, H.G.; van der Molen, H.T.; te Winkel, W.W.R.; Wijnen, W.H.F.W. Constructivist, problem-based learning does work: A meta-analysis of curricular comparisons involving a single medical school. Educ. Psychol. 2009, 44, 227–249.
72. Lou, Y.; Abrami, P.; d’Apollonia, S. Small group and individual learning with technology: A meta-analysis. Rev. Educ. Res. 2001, 71, 449–521.
73. Early, D.M.; Maxwell, K.L.; Burchinal, M.; Alva, S.; Bender, R.H.; Bryant, D.; Cai, K.; Clifford, R.M.; Ebanks, C.; Griffin, J.A.; et al. Teachers’ education, classroom quality, and young children’s academic skills: Results from seven studies of preschool programs. Child Dev. 2007, 78, 558–580.
74. Gong, X. Does Having a Preschool Teacher with a Bachelor’s Degree Matter for Children’s Developmental Outcomes? Ph.D. Dissertation, Columbia University, New York, NY, USA, 2015.
75. Andrew, M.D.; Schwab, R.L. Has reform in teacher education influenced teacher performance? An outcome assessment of graduates of an eleven-university consortium. Action Teach. Educ. 1995, 17, 43–53.
76. Allen, T.D.; Eby, L.T.; Poteet, M.L.; Lentz, E.; Lima, L. Career benefits associated with mentoring for protégés: A meta-analysis. J. Appl. Psychol. 2004, 89, 127–136.
77. Theeboom, T.; Beersma, B.; van Vianen, A.E.M. Does coaching work? A meta-analysis on the effects of coaching on individual level outcomes in an organizational context. J. Posit. Psychol. 2014, 9, 1–18.
78. McGaghie, W.C.; Kowlowitz, V.; Renner, B.R.; Sauter, S.V.H.; Hoole, A.J.; Schuch, C.P.; Misch, M.S. A randomized trial of physicians and physical therapists as instructors of the musculoskeletal examination. J. Rheumatol. 1993, 20, 1027–1032.
79. Credé, M.; Roch, S.G.; Kieszczynka, U.M. Class attendance in college: A meta-analytic review of the relationship of class attendance with grades and student characteristics. Rev. Educ. Res. 2010, 80, 272–295.
80. Monk, D.H. Subject area preparation of secondary mathematics and science teachers and student achievement. Econ. Educ. Rev. 1994, 13, 125–145.
81. Boyer, J.A. Meta-Analysis of Single Case Design: Linking Preservice Teacher Preparation Coursework to Outcomes for Children. Ph.D. Dissertation, University of Cincinnati, Cincinnati, OH, USA, 2003.
82. Borenstein, M.; Hedges, L.V.; Higgins, J.P.T.; Rothstein, H.R. Introduction to Meta-Analysis; Wiley: Chichester, UK, 2009.
83. Cooper, H.; Koenka, A.C. The overview of reviews: Unique challenges and opportunities when research syntheses are the principal elements of new integrative scholarship. Am. Psychol. 2012, 67, 446–462.
84. Schmidt, F.L.; Oh, I.-S. Methods for second-order meta-analysis and illustrative applications. Organ. Behav. Hum. Decis. Process. 2013, 121, 204–218.
85. Schmidt, F.L.; Hunter, J.E. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings, 3rd ed.; SAGE: Thousand Oaks, CA, USA, 2015.
86. IBM Corp. IBM SPSS Statistics for Windows (Version 24); IBM Corp: Armonk, NY, USA, 2016.
87. Thomas, A.; Loadman, W.E. Evaluating teacher education programs using a national survey. J. Educ. Res. 2001, 94, 195–206.
88. Latham, N.; Mertens, S.B.; Hamann, K. A comparison of teacher preparation models and implications for teacher attrition: Evidence from a 14-year longitudinal study. Sch. Univ. Partnersh. 2015, 8, 79–89.
89. Permzadian, V.; Credé, M. Do first-year seminars improve college grades and retention? A quantitative review of their overall effectiveness and an examination of moderators of effectiveness. Rev. Educ. Res. 2016, 86, 277–316.
90. Azevedo, R.; Bernard, R.M. The effects of computer-presented feedback on learning from computer-based instruction: A meta-analysis. In Proceedings of the Annual Meeting of the American Educational Research Association, San Francisco, CA, USA, 18–22 April 1995. Available online: https://files.eric.ed.gov/fulltext/ED385235.pdf (accessed on 7 November 2018).
91. Van der Kleij, F.M.; Feskens, R.C.W.; Eggen, T.J.H.M. Effects of feedback on a computer-based learning environment on students’ learning outcomes: A meta-analysis. Rev. Educ. Res. 2015, 85, 475–511.
92. Karich, A.C.; Burns, M.K.; Maki, K.E. Updated meta-analysis of learner control within educational technology. Rev. Educ. Res. 2014, 84, 392–410.
93. Parsons, J.A. A Meta-Analysis of Learner Control in Computer-Based Learning Environments; Nova Southeastern University: Fort Lauderdale, FL, USA, 1991.
94. Yeany, R.H.; Padilla, M.J. Training science teachers to utilize better teaching strategies: A research synthesis. J. Res. Sci. Teach. 1986, 23, 85–95.
95. Kobayashi, K. What limits the encoding effect of note-taking? A meta-analytic examination. Contemp. Educ. Psychol. 2005, 30, 242–262.
96. Larwin, K.H.; Gorman, J.; Larwin, D.A. Assessing the impact of testing aids on post-secondary student performance: A meta-analytic investigation. Educ. Psychol. Rev. 2013, 25, 429–443.
97. Henk, W.A.; Stahl, N.A. A meta-analysis of the effect of notetaking on learning from lecture. In Proceedings of the Annual Meeting of the National Reading Conference, St. Petersburg Beach, FL, USA, 20 November 1984. Available online: https://files.eric.ed.gov/fulltext/ED258533.pdf (accessed on 6 March 2018).
98. Leung, K.C. Preliminary empirical model of crucial determinants of best practice for peer tutoring on academic achievement. J. Educ. Psychol. 2015, 107, 558–579.
99. Alegre-Ansuategui, F.J.; Moliner, L.; Lorenzo, G.; Maroto, A. Peer tutoring and academic achievement in mathematics: A meta-analysis. Eurasia J. Math. Sci. Technol. Educ. 2018, 14, 337–354.
100. Rees, E.; Quinn, P.J.; Davies, B.; Fotheringham, V. How does peer teaching compare to faculty teaching? A systematic review and meta-analysis. Med. Teach. 2016, 38, 829–837.
101. Balta, N.; Michinov, N.; Balyimez, S.; Ayaz, M.F. A meta-analysis of the effect of peer instruction on learning gain: Identification of informational and cultural moderators. Int. J. Educ. Res. 2017, 86, 66–77.
102. Ward, P.; Hodges, N.J.; Williams, A.M.; Starkes, J.L. Deliberate practice and expert performance: Defining the path to excellence. In Skill Acquisition in Sport: Research, Theory and Practice; Williams, A.M., Hodges, N.J., Eds.; Routledge: New York, NY, USA, 2004; pp. 231–258.
103. Ericsson, K.A. The influence of experience and deliberate practice on the development of superior expert performance. In The Cambridge Handbook of Expertise and Expert Performance; Ericsson, K.A., Ed.; Cambridge University Press: Cambridge, UK, 2006; pp. 685–705.
104. Ericsson, K.A.; Krampe, R.T.; Tesch-Römer, C. The role of deliberate practice in the acquisition of expert performance. Psychol. Rev. 1993, 100, 363–406.
105. Whittaker, S.M. A Multi-Vocal Synthesis of Supervisees’ Anxiety and Self-Efficacy during Clinical Supervision: Meta-Analysis and Interviews. Ph.D. Dissertation, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA, 2004.
106. Kluger, A.N.; DeNisi, A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol. Bull. 1996, 119, 254–284.
107. Hatala, R.; Cook, D.A.; Zendejas, B.; Hamstra, S.J.; Brydges, R. Feedback for simulation-based procedural skills training: A meta-analysis and clinical narrative synthesis. Adv. Health Sci. Educ. 2013, 19, 251–272.
108. Burn, K.; Mutton, T. A review of ‘research-informed clinical practice’ in initial teacher education. Oxf. Rev. Educ. 2015, 41.
109. Arsal, Z. Microteaching and pre-service teachers’ sense of self-efficacy in teaching. Eur. J. Teach. Educ. 2014, 37, 453–464.
110. Simmons, K.T.; Douglas, D.Y.; Perdue, E. From medicine to education: Adapting simulation-based training to the professional development training models for educators. J. Child. Dev. Disord. 2018, 4, 1–3.
111. Richards, K.A.R.; Ressler, J. Engaging preservice teachers in context-based, action oriented curriculum development. J. Phys. Educ. Recreat. Danc. 2016, 87, 36–43.
112. Darling-Hammond, L.; Barron, B.; Pearson, P.D.; Schoenfeld, A.H.; Stage, E.K.; Zimmerman, T.D.; Cervetti, G.N.; Tilson, J.L. Powerful Learning: What We Know about Teaching for Understanding; Jossey-Bass: San Francisco, CA, USA, 2008.
113. Wilson, S.M.; Floden, R.E.; Ferrini-Mundy, J. Teacher Preparation Research: Current Knowledge, Gaps, and Recommendations; ERIC Clearinghouse on Teaching and Teacher Education: Washington, DC, USA, 2001.
114. National Commission on Teaching & America’s Future. What Matters Most: Teaching for America’s Future; National Commission on Teaching & America’s Future: New York, NY, USA, 1996.
115. Foster, P.; Hammersley, M. A review of reviews: Structure and function in reviews of educational research. Br. Educ. Res. J. 1998, 24, 609–628.
116. Cochran, K.F.; DeRuiter, J.A.; King, R.A. Pedagogical content knowing: An integrative model for teacher preparation. J. Teach. Educ. 1993, 44, 263–272.
117. Evens, M.; Elen, J.; Depaepe, F. Developing pedagogical content knowledge: Lessons learned from intervention studies. Educ. Res. Int. 2015, 2015.
118. Gravett, S.; de Beer, J.; Odendaal-Kroon, R.; Merseth, K.K. The affordances of case-based teaching for the professional learning of student-teachers. J. Curric. Stud. 2017, 49, 369–390.
119. Baeten, M.; Simons, M. Innovative field experiences in teacher education: Student-teachers and mentors as partners in teaching. Int. J. Teach. Learn. High. Educ. 2016, 28, 38–51.
120. Burns, R.W.; Jacobs, J.; Yendol-Hoppey, D. Preservice teacher supervision within field experiences in a decade of reform: A comprehensive meta-analysis of the empirical literature from 2001 to 2013. Teach. Educ. Pract. 2016, 29, 46–75.
Table 1. Types of teacher and preservice professional preparation practices.

Teacher Preparation Practices/Variables | Representative Sources
Type of Teacher Degree (High school, associate’s degree, child development associate’s degree, bachelor’s degree, or master’s degree) | [22,23]
Type of Teacher Preparation Program (Extended degree programs, four-year degree programs, bachelor’s degree programs, master’s degree programs, integrated degree programs, blended degree programs, etc.) | [12,24,25,26]
Type of Teacher Certification (Traditional teacher certification, National Board Certification, Teach for America certification, alternative teacher certification, etc.) | [15,27]
In-Field Degree/Certification (In-field certification or degree; out-of-field certification or degree) | [28,29]
Type of Coursework (General education, subject matter courses, methods courses, etc.) | [16,30,31]
Methods of Course Delivery (Distance education courses, blended courses, personalized system of instruction courses, etc.) | [30,32]
Web-Based and E-Learning Instruction (Technology-assisted instruction, computer-assisted instruction, web-based instruction, virtual reality instruction, etc.) | [33,34]
Course-Based Student Learning Methods (Problem-based learning, case-based learning, project-based learning, self-directed learning, guided design, etc.) | [35,36]
Cooperative Learning Practices (Small group learning, peer tutoring, peer instruction, etc.) | [37,38,39]
Faculty Instructional Practices (Faculty coaching, just-in-time training, faculty mentoring, etc.) | [40,41]
Teaching Method Instruction (Microteaching, simulation-based instruction, minicourses, peer-facilitated instruction, etc.) | [42,43,44,45]
Types of Field Experiences (Student teaching, practicum experiences, course-based field experiences, service learning, etc.) | [14,30,46,47]
Field Experience Supervision (Clinical supervision, field-based performance feedback, etc.) | [20,48]
Induction and Mentoring (School-based induction, school-based mentoring, beginning teacher coaching, etc.) | [49,50,51]
Table 2. Relationships between teacher degree and the study outcomes.

Preservice Practice/Variable a | Outcome Measure | Grade b | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Teaching Quality
AA/CDA vs. HS | Classroom quality | P | 8 | 1385 | 8 | 0.04 | −0.04, 0.11
BA vs. HS | Classroom quality | P | 14 | 3440 | 21 | 0.33 | 0.27, 0.40
BA vs. AA/CDA | Classroom quality | P | 12 | 2889 | 12 | 0.16 | 0.08, 0.24
MA vs. HS | Classroom quality | P | 6 | 1066 | 6 | 0.23 | 0.10, 0.36
MA vs. AA/CDA | Classroom quality | P | 8 | 1324 | 8 | 0.21 | 0.12, 0.29
MA vs. BA | Classroom quality | P | 7 | 1759 | 7 | 0.16 | 0.05, 0.26
BA vs. HS | Teaching practices | P | 11 | 2732 | 66 | 0.52 | 0.50, 0.53
BA vs. AA | Teaching practices | P | 3 | 4300 | 5 | 0.28 | 0.21, 0.34
BA vs. HS | Teacher beliefs | P | 4 | 550 | 11 | 0.77 | 0.57, 0.97
Child/Student Performance
AA vs. HS | Achievement | P | 6 | 1734 | 18 | −0.01 | −0.03, 0.01
BA vs. HS | Achievement | P | 9 | 3336 | 31 | 0.14 | 0.08, 0.19
MA vs. HS | Achievement | P | 6 | 1729 | 18 | 0.06 | 0.04, 0.07
BA vs. AA | Achievement | P | 11 | 14,750 | 29 | 0.07 | 0.05, 0.09
MA vs. AA | Achievement | P | 7 | 1983 | 20 | 0.05 | 0.03, 0.07
MA vs. BA | Achievement | P–12 | 15 | 4000+ | 35 | 0.06 | 0.05, 0.07
a HS = High school, AA = Associate’s degree, CDA = Child development associate’s degree, BA = Bachelor’s degree and MA = Master’s degree. b P = Preschool and P–12 = Preschool to 12th grade.
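Each mean effect size in Tables 2–15 is accompanied by a 95% confidence interval. As a point of reference for reading these columns, the equations below are a minimal sketch of the standard inverse-variance approach described in Borenstein et al. [82]; they illustrate how such intervals are conventionally computed and should not be read as the authors' reported procedure, since each row inherits the weighting scheme of its source meta-analysis:

\bar{d} = \frac{\sum_i w_i d_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i}, \qquad SE(\bar{d}) = \frac{1}{\sqrt{\sum_i w_i}}, \qquad \text{95\% CI} = \bar{d} \pm 1.96\,SE(\bar{d})

where d_i is the effect size contributed by the i-th study (or effect) and v_i is its sampling variance; a random-effects version adds the between-study variance \tau^2 to each v_i.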
Table 3. Relationships between type of teacher preparation program and the study outcomes.

Preservice Practice/Variable | Outcome Measure | Grade a | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Teaching Quality
Extended vs. BA Program | Teaching practices | CE | 1 | 1394 | 9 | −0.07 | −0.18, 0.03
Extended vs. BA Program | Teacher beliefs | CE | 2 | 5269 | 6 | 0.02 | −0.06, 0.10
Teacher Outcomes
Extended vs. BA Program | Teacher performance | CE | 2 | 5264 | 3 | 0.06 | −0.03, 0.14
Extended vs. BA Program | Teacher attitudes | CE | 1 | 1394 | 3 | −0.16 | −0.19, −0.13
a CE = Career educators.
Table 4. Relationships between teacher certification and student achievement.

Preservice Practice/Variable | Outcome Measure | Grade | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Teach for America Certification a | Achievement | K–5 | 7 | – | 40 | 0.03 | 0.02, 0.04
Teach for America Certification a | Achievement | 6–8 | 4 | – | 44 | 0.11 | 0.11, 0.12
Teach for America Certification a | Achievement | 9–12 | 7 | – | 31 | 0.00 | −0.01, 0.01
National Board Certification a | Achievement | K–12 | 1 | 1,047,391 | 21 | 0.08 | 0.08, 0.09
Alternative Certification a | Achievement | K–12 | 7 | – | 67 | 0.13 | 0.08, 0.18
Traditional Certification b | Achievement | P–12 | 19 | – | 34 | 0.09 | 0.03, 0.15
Traditional Certification b | Achievement | K–12 | 6 | – | 53 | 0.09 | 0.05, 0.12
a Compared to traditional teacher certification. b Compared to provisional, emergency, or no teacher certification.
Table 5. Relationships between in-field or out-of-field teacher certification and the study outcomes.

Preservice Practice/Variable | Outcome Measure | Grade | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Teaching Quality
In-Field Certification | Classroom quality | P | 11 | 1214 | 11 | −0.03 | −0.19, 0.14
In-Field Certification | Teaching practices | K–12 | 15 | – | 39 | 0.05 | −0.03, 0.13
Child/Student Performance
In-Field Certification | Achievement | P | 19 | 4653 | 19 | 0.02 | −0.11, 0.14
In-Field Certification | Achievement | P–12 | 19 | – | 34 | 0.09 | 0.03, 0.15
In-Field Certification | Achievement | 6–12 | 1 | 613 | 2 | 0.27 | −0.12, 0.87
Table 6. Relationships between university coursework and the study outcomes.

Preservice Practice/Variable | Outcome Measure | Grade b | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Teaching Quality
Number of Courses | Teacher preparedness | K–12 | 1 | 2777 | 3 | 0.03 | −0.05, 0.10
Number of Courses | Teaching practices | K–12 | – | – | 8 | 0.25 | −0.13, 0.63
Student Outcomes
Number of Courses | Achievement | 10 | 1 | 1492 | 3 | 0.03 | 0.00, 0.05
Number of Courses | Achievement | 11 | 1 | 983 | 3 | 0.03 | −0.08, 0.15
First Year Seminar a | First year GPA | U | 89 | 52,406 | 89 | 0.02 | −0.25, 0.29
Class Attendance | Achievement | U | 68 | 21,164 | 68 | 0.94 | 0.92, 0.96
a Coded yes vs. no. All other effect sizes are correlations between the preservice practice measures and study outcomes converted to Cohen’s d. b U = Undergraduate students.
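Footnote a above indicates that most of the effect sizes in Table 6 are correlations converted to Cohen’s d. Assuming the conventional conversion formula (e.g., Borenstein et al. [82]), the transformation is

d = \frac{2r}{\sqrt{1 - r^2}}

so that, for illustration, a correlation of r = 0.42 between class attendance and grades converts to d = 0.84/\sqrt{1 - 0.1764} \approx 0.93, in the neighborhood of the class attendance entry above; the exact correlations underlying each row are reported in the source meta-analyses.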
Table 7. Relationships between method of course delivery and university student outcomes.

Preservice Practice/Variable | Outcome Measure | Grade a | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
PSI Courses | Achievement | U | 76+ | – | 130 | 0.36 | 0.34, 0.38
Blended Courses | Achievement | U | 157+ | 25,139+ | 228 | 0.34 | 0.33, 0.35
Distance Education Courses | Achievement | U | 257+ | 99,820+ | 596 | 0.21 | 0.19, 0.23
Audio Tutorial Courses | Achievement | U | 42+ | – | 68 | 0.22 | 0.21, 0.23
Distance Education Courses | Satisfaction | U | 10+ | 1592 | 19 | −0.08 | −0.12, −0.04
Blended Courses | Satisfaction | U | – | 1769 | 11 | −0.15 | −0.26, −0.05
PSI Courses | Attitudes | U | 11 | – | 35 | 0.64 | 0.59, 0.68
Audio Tutorial Courses | Attitudes | U | 28 | – | 28 | 0.10 | 0.10, 0.11
Please note that all of the course delivery methods were compared to traditional classroom or course instruction. PSI = Personalized system of instruction. a U = Undergraduate students. A plus sign (+) indicates that the research reports included an unspecified number of additional studies.
Table 8. Relationships between technology-assisted and e-learning instruction and university student outcomes.

Preservice Practice/Variable | Outcome Measure | Grade b | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Group Comparisons a
Virtual-Reality Instruction | Achievement | U | 67 | 8432 | 67 | 0.43 | 0.42, 0.44
Computer-Assisted Instruction | Achievement | U | 389 | 41,105 | 504 | 0.38 | 0.36, 0.41
ICT Learning | Achievement | U | 60 | 16,008+ | 82 | 0.38 | 0.30, 0.45
Intelligent Tutoring Instruction | Achievement | U | 37 | – | 37 | 0.35 | 0.24, 0.46
Technology-Assisted Instruction | Achievement | U | 433+ | 37,923+ | 1075 | 0.29 | 0.28, 0.29
Internet-Based Instruction | Achievement | U | 24+ | 10,910+ | 134 | 0.25 | 0.22, 0.29
Technology-Assisted Instruction | Attitudes | U | – | – | 102 | 0.27 | 0.17, 0.38
Internet-Based Instruction | Satisfaction | U | 24+ | 2580+ | 45 | 0.24 | 0.17, 0.31
Computer-Assisted Instruction | Satisfaction | U | 38 | 2585+ | 109 | 0.17 | 0.16, 0.17
ICT Learning | Satisfaction | U | 3 | 397 | 4 | −0.51 | −0.68, −0.34
Contrasting Conditions
CAI with Feedback vs. No Feedback | Achievement | U | 6 | 4341+ | 70 | 0.34 | 0.32, 0.36
CAI Small Group vs. Individual | Achievement | U | 46 | – | 115 | 0.23 | 0.22, 0.25
CAI Learner Control vs. No Control | Achievement | U | 33 | 2420+ | 64 | −0.01 | −0.02, 0.00
CAI Small Group vs. Individual | Attitudes | U | 24 | – | 49 | 0.04 | 0.01, 0.07
ICT = Information and communication technology instruction and CAI = Computer-assisted instruction. a All technology-assisted and e-learning instruction conditions were compared to traditional classroom or course instruction. b U = Undergraduate students.
Table 9. Relationships between course-based learning methods and university student outcomes.

Preservice Practice/Variable | Outcome Measure | Grade b | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Teaching Quality
Inquiry-Based Learning | Teaching practices | U | – | – | 10 | 0.72 | –
Student Outcomes
Inquiry-Based Learning | Achievement | U | 26+ | 1190+ | 151 | 0.72 | 0.69, 0.74
Problem-Based Learning a | Achievement | U | 5 | 518 | 5 | 0.57 | −0.14, 1.28
Problem-Based Learning | Skill acquisition | U/G | 19 | 6442 | 33 | 0.43 | 0.34, 0.53
Problem-Based Learning | Knowledge acquisition | U/G | 145 | 52,769+ | 485 | 0.34 | 0.32, 0.36
Self-Directed Learning | Achievement | U/G | 199 | 2744+ | 223 | 0.33 | 0.30, 0.36
Critical Thinking Instruction | Critical thinking skills | U | – | – | 126 | 0.26 | 0.19, 0.33
Note-Taking Practices | Course grades/knowledge | U/G | 29+ | 1348+ | 169 | 0.25 | 0.22, 0.27
Visually-Based Learning | Achievement | U | 65 | – | 71 | 0.15 | 0.15, 0.16
Explanation-Based Learning | Knowledge acquisition | U | 21 | – | 57 | 0.17 | 0.14, 0.21
Problem-Based Learning | Attitudes | U/G | 14 | 2287 | 19 | 0.57 | 0.54, 0.60
Visually-Based Learning | Attitudes | U | 16 | – | 22 | −0.09 | −0.12, −0.07
Please note that all comparisons are for the course-based learning method vs. traditional classroom, course, or instructional practices, except the comparison noted in footnote a. a Online problem-based learning vs. classroom-based problem-based learning. b U = Undergraduate students and G = Graduate students.
Table 10. Relationships between cooperative learning practices and university student outcomes.

Preservice Practice/Variable | Outcome Measure | Grade | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Peer Instruction + TCI | Achievement | U | 16 | 2050 | 16 | 0.96 | 0.59, 1.33
Small Group Learning | Achievement | U | 81+ | 6702+ | 219 | 0.63 | 0.60, 0.66
Small Group Learning | Performance | U | 28 | 3371 | 44 | 0.31 | 0.29, 0.33
Small Group Learning | Attitudes | U | 7 | 393 | 7 | 0.56 | –
Small Group Instruction (CAI) a | Achievement | U | 41 | – | 115 | 0.23 | 0.22, 0.25
Small Group Instruction (CAI) a | Attitudes | U | 24 | – | 49 | 0.04 | 0.01, 0.07
Peer Tutoring | Achievement | U | 13 | 1397+ | 13 | 0.28 | 0.19, 0.37
Peer Instruction | Knowledge/skills | U | 10 | 1300 | 20 | 0.09 | 0.08, 0.10
TCI = Traditional classroom instruction and CAI = Computer-assisted instruction. All between-group comparisons are for the cooperative learning methods vs. traditional classroom instruction or no peer tutoring or instruction, except for CAI-facilitated small group instruction. a Small group CAI-facilitated instruction vs. small group instruction without CAI.
Table 11. Relationships between faculty instructional practices and the study outcomes.

Preservice Practice/Variable | Outcome Measure | Grade a | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Teaching Quality
Faculty Coaching | Instructor practices | F | – | – | 14 | 2.30 | –
Consultative Feedback | Instructor practices | F | 11 | 331 | 11 | 0.69 | 0.43, 0.95
Student Feedback on Instructor Practices | Instructor practices | F | 44 | – | 53 | 0.42 | 0.41, 0.42
Faculty Feedback | Interactional skills | F | 33 | 1058 | 217 | 0.40 | 0.26, 0.54
Student Performance
Faculty Coaching | Achievement | U | 8+ | 628+ | 31 | 0.77 | 0.67, 0.86
Faculty Mentoring | Academic performance | U/G | 9 | 1444 | 17 | 0.33 | 0.30, 0.36
Faculty Feedback | Achievement | U | 38 | 2460+ | 60 | 0.30 | 0.29, 0.31
Student Feedback | Achievement | U | 5 | – | 8 | 0.22 | 0.19, 0.25
Faculty Mentoring | Attitudes | U/G | 5 | 1147 | 5 | 0.61 | 0.35, 0.88
Student Feedback | Attitudes | U | 7 | – | 13 | 0.37 | 0.34, 0.41
Faculty Mentoring | Retention | U/G | 14 | 22,079 | 14 | 0.17 | 0.14, 0.19
All comparisons are for the type of instructional practice vs. traditional classroom or course instruction or the absence of the instructional practice. a F = Faculty member or instructor, U = Undergraduate students, and G = Graduate students.
Table 12. Relationships between types of teaching method instruction and teaching quality and student outcomes.

Preservice Practice/Variable | Outcome Measure | Grade a | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Teaching Quality
Simulation-Based Instruction (DP) | Clinical practices | G | 10 | – | 10 | 1.67 | 0.90, 2.58
Critical Thinking Instruction | Questioning skills | U/G | 28 | 745+ | 28 | 0.89 | 0.87, 0.91
Teaching Practices Instruction | Questioning skills | U/G | 9 | – | 9 | 0.79 | 0.58, 1.01
Peer-Facilitated Teaching | Teaching practices | U | 2 | 108 | 4 | 0.78 | 0.12, 1.44
Microteaching | Teaching practices | U/G | 114 | 1043 | 117 | 0.76 | 0.71, 0.81
Mini-Courses | Teaching practices | U | 3 | 79 | 4 | 0.70 | –
Modeling of Teaching Methods | Teaching practices | U | 4 | 111 | 6 | 0.59 | –
Simulation-Based Instruction | Teaching or clinical practices | U/G | 141 | 7122+ | 142 | 0.34 | 0.31, 0.38
Student Outcomes
Simulation-Based Instruction | Knowledge acquisition | U | 35 | 2759 | 36 | 0.18 | 0.13, 0.23
Critical Thinking Instruction | Critical thinking skills | U | 32 | – | 40 | 0.19 | 0.09, 0.30
Simulation-Based Instruction | Satisfaction | U | 56 | 3042 | 56 | 0.51 | −0.33, 1.35
Please note that all of the comparisons are for the type of teaching method instruction vs. the absence of any such instruction. DP = Deliberate practice. a U = Undergraduate students and G = Graduate students.
Table 13. Relationships between types of field experiences and teaching quality and student outcomes.

Preservice Practice/Variable | Outcome Measure | Grade a | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Teaching Quality
Extended Student Teaching | Teaching practices | U | 19 | 929 | 4 | 1.52 | 0.72, 2.31
Limited Student Teaching | Teaching practices | U | 11 | 663 | 4 | 0.77 | 0.49, 1.06
Field Experiences | Teaching practices | U | – | – | 11 | 0.25 | 0.06, 0.44
Extended Student Teaching | Classroom quality | U | 19 | 929 | 4 | 1.59 | –
Limited Student Teaching | Classroom quality | U | 11 | 663 | 4 | 0.73 | 0.25, 1.20
Student Outcomes
Service Learning | Achievement | U | 26 | 1610+ | 26 | 0.43 | 0.42, 0.44
Service Learning | Social skills | U | 28 | – | 56 | 0.29 | 0.28, 0.30
Service Learning | Attitudes | U | 36 | – | 48 | 0.28 | 0.15, 0.40
Field Experience | Composite | ST/FT | – | – | 21 | 0.14 | 0.09, 0.19
Field Experience | Attitudes | U | – | – | 35 | 0.13 | 0.09, 0.17
Field Experience | Achievement | U | – | – | 17 | 0.12 | 0.01, 0.22
Please note that extended student teaching equals 10 or more weeks and limited student teaching equals 5 to 9 weeks; comparisons for both types of student teaching are vs. no student teaching. All other comparisons are for the two types of field experiences vs. no field experience. a U = Undergraduate student, ST = Student teacher, and FT = First year teacher.
Table 14. Relationships between clinical supervision and the study outcomes.

Preservice Practice | Outcome Measure | Grade a | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Clinical Supervision | Self-efficacy beliefs | G | 4 | 95 | 4 | 0.66 | 0.23, 1.08
Clinical Supervision | Student anxiety | G | 8 | 293 | 8 | 0.45 | 0.19, 0.72
Performance Feedback | Skill acquisition | U/G | 17 | 653 | 17 | 0.74 | 0.38, 1.09
Performance Feedback | Clinical practice | U/CE | 131 | 12,652 | 607 | 0.41 | −0.18, 1.00
a U = Undergraduate students, G = Graduate students, and CE = Career employees.
Table 15. Relationships between induction/mentoring and the study outcomes.

Preservice Practice/Variable | Outcome Measure | Grade b | No. of Studies | Sample Size | No. of Effects | Mean Effect Size | 95% CI
Teaching Quality
School-Based Induction Program | Teacher preparedness | FT | 2 | 6099 | 3 | −0.06 | −0.42, 0.30
School-Based Mentoring | Teaching practices | ST+ | 43 | – | 186 | 0.49 | 0.38, 0.60
School-Based Mentoring | Teacher preparedness | FT | 2 | 6099 | 3 | 0.12 | −0.05, 0.29
Workplace Mentoring | Career commitment | CE | – | 2207+ | 10 | 0.32 | 0.19, 0.44
Workplace Coaching | Career commitment | CE | 11 | 789 | 11 | 0.74 | 0.42, 1.06
Participant Outcomes
School-Based Mentoring | K–12 student achievement | ST+ | 31 | – | 113 | 0.18 | 0.10, 0.25
School-Based Mentoring | Moved to another school district | FT/ST | 1 | 1375 | 8 | −0.01 | −0.07, 0.05
Workplace Mentoring | Performance | CE | 14 | 5449 | 88 | 0.24 | 0.17, 0.31
Workplace Mentoring | Satisfaction | CE | – | 3029+ | 17 | 0.39 | 0.38, 0.41
Workplace Coaching | Performance | CE | 20 | 4116 | 20 | 0.40 | 0.33, 0.46
Workplace Coaching | Attitudes | CE | 7 | 507 | 7 | 0.54 | 0.34, 0.73
Induction Practices a | Retention | FT | 2 | 4610 | 10 | 0.13 | 0.07, 0.20
Note that all comparisons are for the type of induction, mentoring, or coaching practice vs. the absence of the practice. a Group seminars, collaborative planning, or teacher support network. b FT = First year teachers, ST = Second year teachers, and CE = Career employees.
Table 16. Rankings of the effect sizes (ES) for the relationships between the preservice teacher preparation practices and teaching quality.

Rank | Practice | Outcomes | ES
1 | Faculty Coaching | Instructor Practices | 2.30
2 | Simulated Instruction (DP) | Clinical Practices | 1.67
3 | Extended Student Teaching | Classroom Quality | 1.59
4 | Extended Student Teaching | Teaching Practices | 1.52
5 | Teaching Instruction | Teaching Practices | 0.86
6 | Peer Instruction | Teaching Practices | 0.78
7 | Limited Student Teaching | Teaching Practices | 0.77
8 | BA vs. HS | Teacher Beliefs | 0.77
9 | Microteaching | Teaching Practices | 0.76
10 | Workplace Coaching | Career Commitment | 0.74
11 | Limited Student Teaching | Classroom Quality | 0.73
12 | Inquiry-Based Learning | Teaching Practices | 0.72
13 | Minicourses | Teaching Practices | 0.70
14 | Consultative Feedback | Instructor Practices | 0.69
15 | Clinical Supervision | Teacher Beliefs | 0.66
16 | Modeling Teaching Practices | Teaching Practices | 0.59
17 | BA vs. HS | Teaching Practices | 0.52
18 | School-Based Mentoring | Teaching Practices | 0.49
19 | Student Feedback | Instructor Practices | 0.42
20 | Faculty Feedback | F–S Interactions | 0.40
21 | Simulated Instruction | Teaching Practices | 0.34
22 | BA vs. HS | Classroom Quality | 0.33
23 | Workplace Mentoring | Career Commitment | 0.32
24 | BA vs. AA | Teaching Practices | 0.28
25 | Number of Courses | Teaching Practices | 0.25
26 | Field Experiences | Teaching Practices | 0.25
27 | MA vs. HS | Classroom Quality | 0.23
28 | MA vs. AA/CDA | Classroom Quality | 0.21
29 | BA vs. AA/CDA | Classroom Quality | 0.16
30 | MA vs. BA | Teaching Quality | 0.16
31 | School-Based Mentoring | Teacher Preparedness | 0.12
32 | In-Field Certification | Teaching Practices | 0.05
33 | AA/CDA vs. HS | Classroom Quality | 0.04
34 | Number of Courses | Teacher Preparedness | 0.03
35 | Extended Preparation Programs | Teacher Preparedness | 0.03
36 | In-Field Certification | Classroom Quality | −0.03
37 | School-Based Induction | Teacher Preparedness | −0.06
38 | Extended Preparation Programs | Teaching Practices | −0.07
DP = Deliberate practice and F–S = Faculty–student interactions.
Table 17. Rankings of the effect sizes (ES) for the relationships between the preservice teacher preparation practices and study participants’ performance.

Rank | Practice | Outcome a | ES
1 | Peer Instruction + TCI | U | 0.96
2 | Class Attendance | U | 0.94
3 | Faculty Coaching | U | 0.77
4 | Inquiry-Based Learning | U | 0.72
5 | Small Group Learning | U | 0.63
6 | Virtual-Reality Instruction | U | 0.43
7 | Service Learning | U | 0.43
8 | Performance Feedback | U/CE | 0.42
9 | Workplace Coaching | CE | 0.40
10 | Information & Communication Learning | U | 0.38
11 | Computer-Assisted Instruction | U | 0.38
12 | PSI Courses | U | 0.36
13 | Intelligent Tutoring Instruction | U | 0.35
14 | Problem-Based Learning | U | 0.35
15 | Blended Courses | U | 0.34
16 | CAI with Feedback | U | 0.34
17 | Faculty Mentoring | U | 0.33
18 | Self-Directed Learning | U | 0.33
19 | Small Group Learning | U | 0.31
20 | Faculty Feedback | U | 0.30
21 | Service Learning | U | 0.29
22 | Technology-Assisted Instruction | U | 0.29
23 | Peer Tutoring | U | 0.28
24 | Critical Thinking Instruction | U | 0.26
25 | Note-Taking Practices | U | 0.25
26 | Internet-Based Instruction | U | 0.25
27 | Workplace Mentoring | CE | 0.24
28 | Small Group Instruction (CAI) | U | 0.23
29 | Audio Tutorial Courses | U | 0.22
30 | Student Feedback of Faculty Instruction | U | 0.22
31 | Distance Education Courses | U | 0.21
32 | Critical Thinking Instruction | U | 0.19
33 | Simulation-Based Instruction | U | 0.18
34 | School-Based Mentoring | K–12 | 0.18
35 | Faculty Mentoring of Students | U | 0.17
36 | Explanation-Based Learning | U | 0.17
37 | Visually-Based Learning | U | 0.15
38 | Field Experience | ST/FT | 0.14
39 | BA vs. HS | P | 0.14
40 | Alternative Teacher Certification | K–12 | 0.13
41 | Induction Practices | FT | 0.13
42 | Field Experiences | U | 0.12
43 | Peer Instruction | U | 0.09
44 | Traditional Teacher Certification | P–12 | 0.09
45 | National Board Teacher Certification | K–12 | 0.08
46 | BA vs. AA | P | 0.07
47 | In-Field Certification or Degree | P–12 | 0.07
48 | MA vs. HS | P | 0.06
49 | MA vs. BA | P–12 | 0.06
50 | Extended Preparation Program | CE | 0.06
51 | MA vs. AA | P | 0.05
52 | Teach for America Teacher Certification | K–12 | 0.05
53 | Number of Courses | 10–12 | 0.03
54 | First Year Seminar | U | 0.02
55 | AA vs. HS | P | −0.01
56 | CAI with Learner Control | U | −0.01
57 | School-Based Mentoring | FT/ST | −0.01
a Indicates the study participants whose performance was assessed. TCI = Traditional classroom instruction and CAI = Computer-assisted instruction. P = Preschool student outcomes, U = University student outcomes, ST = Student teacher outcomes, FT = First-year teacher outcomes, and CE = Career educator or career professional outcomes.
Table 18. Rankings of the effect sizes (ES) for the relationships between the preservice teacher preparation practices and study participants’ attitudes, beliefs, and satisfaction.

Rank | Practice | Measure | ES
1 | Workplace Coaching | Beliefs | 0.66
2 | PSI Courses | Attitudes | 0.64
3 | Faculty Mentoring | Attitudes | 0.61
4 | Problem-Based Learning | Attitudes | 0.57
5 | Small Group Learning | Attitudes | 0.56
6 | Workplace Coaching | Attitudes | 0.54
7 | Simulation-Based Instruction | Satisfaction | 0.51
8 | Clinical Supervision | Anxiety | 0.45
9 | Workplace Mentoring | Satisfaction | 0.39
10 | Student Feedback | Attitudes | 0.37
11 | Service Learning | Attitudes | 0.28
12 | Technology-Assisted Instruction | Attitudes | 0.27
13 | Internet-Based Instruction (CAI) | Satisfaction | 0.24
14 | Computer-Assisted Instruction | Satisfaction | 0.17
15 | Field Experiences | Attitudes | 0.13
16 | Audio Tutorial Courses | Attitudes | 0.10
17 | Small Group Instruction (CAI) | Attitudes | 0.04
18 | Distance Education Courses | Satisfaction | −0.08
19 | Visually-Based Learning | Attitudes | −0.09
20 | Blended Courses | Satisfaction | −0.15
21 | Extended Preparation Program | Attitudes | −0.16
22 | Information & Communication Learning | Satisfaction | −0.51
Table 19. High impact core practices in preservice teacher preparation.

Teacher Preparation Practices | Degree of Impact
Clinically-Rich Field Experiences | Very High
Teaching Methods Instruction | Very High
Clinical Supervision | Very High
Faculty Coaching and Instructional Practices | Very High
Course-Based Learning Practices | High
Web-Based and E-Learning Practices | High
Cooperative Learning Practices | High
Methods of Course Delivery | Medium
School-Based Mentoring and Coaching | Medium
Teacher Degree | Low
Teacher Certification | Low
Teacher Preparation Programs | None
Course Work | None
In-Field Certification | None
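Table 19 distills the effect size rankings in Tables 16–18 into qualitative impact categories. The short sketch below shows one way such a mapping could be operationalized; the cutoff values are illustrative assumptions loosely aligned with conventional effect size benchmarks, not thresholds stated by the authors, whose category assignments reflect judgments across all three ranking tables.

```python
# Illustrative sketch only: maps a mean effect size (Cohen's d) onto the
# qualitative degree-of-impact labels used in Table 19. The cutoffs below
# are assumptions for demonstration, not thresholds reported by the authors.

def impact_category(mean_es: float) -> str:
    """Return a qualitative impact label for a mean effect size."""
    if mean_es >= 1.00:
        return "Very High"
    if mean_es >= 0.50:
        return "High"
    if mean_es >= 0.20:
        return "Medium"
    if mean_es >= 0.10:
        return "Low"
    return "None"

# Example mean effect sizes drawn from Tables 16 and 17.
examples = [
    ("Extended Student Teaching (teaching practices)", 1.52),
    ("Small Group Learning (achievement)", 0.63),
    ("Blended Courses (achievement)", 0.34),
    ("Alternative Teacher Certification (achievement)", 0.13),
    ("First Year Seminar (first-year GPA)", 0.02),
]

for practice, es in examples:
    print(f"{practice}: d = {es:.2f} -> {impact_category(es)}")
```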
