Article

How Expert and Inexpert Instructors Talk about Teaching

1 Parlay Consulting, Omaha, NE 68118, USA
2 Physics Department, University of Nebraska Omaha, Omaha, NE 68182, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2023, 13(6), 591; https://doi.org/10.3390/educsci13060591
Submission received: 9 May 2023 / Revised: 7 June 2023 / Accepted: 8 June 2023 / Published: 10 June 2023
(This article belongs to the Section Higher Education)

Abstract

Using mixed-method social network analysis, we explored the discussions happening between instructors within a teaching-related network and how instructional expertise correlated with the content of those discussions. Instructional expertise, defined by the extent to which effective teaching practices were implemented, was measured for 82 faculty teaching at a Midwestern research university in the USA using the Faculty Inventory of Methods and Practices Associated with Competent Teaching (F-IMPACT). Eight instructors from this population were interviewed after being selected from a stratified random sample with varied disciplines, positions, years of teaching experience, numbers of network alters, and quartile F-IMPACT scores. Network Canvas was used to design, capture, and export network data during the interview process, and a deductive qualitative analysis approach was used for coding and analysis. In general, expert instructors had larger networks that also consisted of expert alters, greater frequency of discussions throughout the semester (both formal and informal), and participation in discussions centered around best practices and education research. Inexpert instructors had smaller teaching networks that consisted of other inexpert instructors, lower frequency of interactions, and discussions that centered around sharing course-specific, surface-level advice.

1. Introduction

The importance of social connections to an individual instructor’s decision-making process regarding instructional practice has been well established [1,2,3,4,5]. For example, these studies have shown how social connections among faculty can influence the diffusion of instructional practices throughout a department, and some have attempted to uncover patterns in teaching-related discussions. Knowing the “who, what, when, where, why, and how” of teaching discussions within these networks can help inform efforts to facilitate broad-scale unit and/or institutional implementation of effective, evidence-based instructional practices.
Unfortunately, uncovering what is happening during faculty–faculty interactions within these networks has been complicated. Common methods include surveys and interviews. Within these surveys and interviews, the prompts typically include something generic about identifying alters with whom the respondent talked about teaching [2,3,6]. Most studies assume that teaching interactions of any type involve the sharing of “good” practices, whether by soliciting no clarification about what was discussed, by assuming the respondents had pedagogical expertise, or by making assumptions about the quality of the conversations [6,7,8].
There are also myriad issues with how expertise is defined and measured. For example, Van Waes et al. [11] used three factors to define expertise, none of which have any direct relationship to the implementation of effective teaching practices. The reliable identification of individuals with expertise in a network is important, since a teaching network with no existing source of expertise will fail to adopt best practices, and recent research suggests that even the existence of that expertise is not sufficient for the diffusion of best practices across the network [9].
In order to use existing teaching networks to facilitate diffusion of instructional expertise across an institution, a better understanding of the types of knowledge being shared, as well as the sources of that knowledge, is necessary [10]. Using mixed-method social network analysis, this study builds on the current body of knowledge to provide insight into expertise in teaching, instructors’ experiences within their teaching networks, and the context in which these teaching-related discussions are happening. As discussed in Reding et al., engaging individuals who possess both a structural position within a network and pedagogical knowledge may be needed to help strategically diffuse best practices. This requires both the identification of expertise and knowledge of how expert and inexpert instructors interact [9].

2. Background

2.1. Expertise in Teaching

An increasing number of studies have been conducted to examine the difference in teaching discussion networks based on expertise. One such study looked at the relationship between a faculty member’s stage of instructional development and the faculty network used to communicate about teaching practices [11]. The study demonstrated a relationship between network size and stage of instructional development where experienced expert faculty members had larger networks than novices and experienced non-experts. The study also demonstrated that experienced experts had more diversity in their networks and less frequent teaching interactions. While this study provided insight into possible methods for investigating differences in teaching networks based on expertise, the method through which it determined actual expertise was flawed.
The term “expert” in the study referred to experienced top performers who excel in a particular field, or professionals who achieve at least a moderate degree of success in their occupation [12]. For Van Waes et al., an “expert faculty member” performs at a high level when implementing effective, student-centric teaching practices in the classroom [11]. However, they used three different factors, none of which have any direct relationship to the implementation of effective teaching practices, to determine the instructional stages of the 30 faculty members they interviewed for the study. The three factors included years teaching, scores on student evaluation of teaching surveys (SETs), and department chair nomination. For a faculty member to be identified as an experienced expert, they had to have a minimum of 10 years teaching experience, perform in the top quartile on SETs, and be nominated by their department chair.
The combination of these three factors resembles the use of triangulation to determine expertise. While triangulation is a plausible method, there are interdependent limitations to the factors used. Years of experience is not a reliable measure of expertise, and studies have found no significant relationships between years of teaching experience and implementation of best practices [13,14,15]. Berger et al. did, however, show a significant increase in a faculty member’s sense of self-efficacy with years of teaching experience [13]. Research does show a significant positive bias in SET scores toward instructor years of experience, but it also shows a similar bias toward increasing instructor confidence [16]. The implication of Berger et al. is that years of experience is covariant with confidence and therefore with SET scores. Not only do studies demonstrate bias toward years of experience in student evaluations, but many studies have also demonstrated the presence of gender, racial, and cultural biases in SETs [17,18,19]. There is also no evidence that traditional affect-based SET scores correlate with measures of student learning or with the instructional practices used within a course. Finally, in the Van Waes et al. study, department supervisors provided no observational evidence of actual evidence-based practice implementation within their nominations, and supervisors could be similarly biased toward years of experience and/or increased instructor confidence [11].
Recently, studies have begun to use more quantitative and reliable methods that directly measure faculty members’ usage levels of effective teaching practices. Middleton et al. used the Approaches to Teaching Inventory (ATI) in combination with network metrics to measure faculty perceptions of their own teaching [20]. The ATI is a self-reported assessment consisting of items that fall into four dimensions: conceptual change intention, student-centered strategies, information transmission, and teacher-focused strategies [21]. Similarly, Reding et al. used the Teaching Practices Inventory (TPI) to examine the relationship between faculty member network elements and the implementation of effective teaching practices [9]. The self-reported TPI measures the use of multiple practices shown by research to support student learning and teaching effectiveness in STEM and social science courses [22]. Factors that support student learning include knowledge organization, reducing cognitive load, motivation, practice, feedback, metacognition, and group learning. Factors that support effective teaching include prior knowledge/beliefs, feedback on effectiveness, and gaining relevant knowledge and skills. Recently, the TPI was modified for validity in both in-person and online courses, with the modified version called the Faculty Inventory of Methods and Practices Associated with Competent Teaching (F-IMPACT) [23].
As self-reported surveys, instruments like the ATI, TPI, and F-IMPACT also have limitations; however, these types of instruments have been designed to directly measure the implementation of effective teaching practices. In this study, we have adopted the Van Waes et al. definition of a teaching expert as a high-level implementer of effective teaching practices in the classroom [11]. However, we have used the F-IMPACT instrument to measure the level of implementation more directly, with the F-IMPACT score representing an instructor’s level of expertise. By establishing a valid measure of expertise within the broad domain of teaching, we can examine how expert and inexpert instructors interact with their social connections in an effort to better support the diffusion of evidence-based teaching practices.

2.2. Social Capital and Network Analysis

The importance of social connections aiding in the diffusion of evidence-based teaching practices has been supported by research based on a social capital theoretical framework. There are numerous definitions of social capital depending on the author, but within an educational context, it has been defined as “the knowledge and resources for teaching practice that are accessible through a social network” [24]. Social capital studies in higher education have investigated informal teaching advice networks, identification of instructional leaders, the conditions related to the development of teaching-related ties, and the influence of social capital on long-term academic performance [7,8,9,25,26]. Social capital operates at many levels including ego, sub-group, and whole network. This study operates under the ego-level perspective, which includes three intersecting elements: the resources embedded within the network; individual accessibility to these resources; and individual mobilization or actualization of these resources [27]. Studies interested in examining the diffusion of evidence-based practices, such as this current study, view teaching expertise as the resource and faculty members as the individuals.
We use Social Network Analysis (SNA) to quantify these components of social capital. SNA is an empirical method rooted in graph theory and is used to investigate relational concepts, processes, and patterns within a social network [28]. SNA views social structures as multi-faceted and consisting of network entities, which could be individuals, departments, organizations, etc., that have relationships based on some sort of interaction. In SNA, the entities are known as actors and their interactions are known as ties. To connect this with the components of social capital for this study, the actors are the faculty members, and their ties are their discussions related to teaching.
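To make the actors-and-ties framing concrete, the sketch below shows one way such a teaching discussion network could be represented computationally. It is illustrative only: the instructor names, departments, and tie attributes are invented, this is not the study’s data or analysis pipeline, and the Python networkx library is simply one common choice for this kind of graph representation.

```python
import networkx as nx

# Actors are faculty members; ties are teaching-related discussions.
# All names and attributes below are made up for illustration.
G = nx.Graph()
G.add_node("Instructor A", department="Physics")
G.add_node("Instructor B", department="Physics")
G.add_node("Instructor C", department="Mathematics")

# Each tie can carry attributes describing the discussion, e.g., its type and frequency.
G.add_edge("Instructor A", "Instructor B", interaction="specific advice", frequency="weekly")
G.add_edge("Instructor A", "Instructor C", interaction="education research", frequency="monthly")

# Ego-level view: the alters and ties surrounding one focal instructor.
ego = nx.ego_graph(G, "Instructor A")
print(list(ego.edges(data=True)))
print("Number of alters:", ego.number_of_nodes() - 1)
```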
The ties between the actors are conduits for the diffusion of instructional expertise through their discussions. This diffusion of instructional expertise through social capital relies on the three intersecting elements that were previously identified, including instructional expertise being embedded within the network, faculty members having access to the instructional expertise, and finally, faculty members mobilizing, or implementing, the instructional expertise into their own courses. Assuming that instructional expertise exists within a faculty teaching discussion network, the topics of discussion must be examined in order to understand the accessibility and mobilization of practices. There are several methods through which faculty teaching discussion network data are obtained. Depending on the scope of the research, some methods use a roster approach, for instance, where the names of all faculty members within a department are listed and each individual selects the type of discussions they have. This approach is typically used when researchers want to better understand the whole network of a department or unit. Other times, researchers are focused on ego-level networks and may employ name generators, where a respondent constructs the list. Regardless of the data collection method, the instrument used must provide some sort of prompt to describe the type of teaching discussion that might occur. Due to the relational nature of networks, these prompts, and their interpretation by respondents, are instrumental to the overall analysis.
The most common types of biases in SNA self-reports are social desirability bias, reference bias, and introspective ability [29]. When survey participants operate under social desirability bias, they tend to rate themselves “higher”, hoping to appear more socially desirable. Social desirability can result in an inflated number of alters being identified, increased frequency of interactions, or the selection of more advanced types of interactions. Reference bias occurs when respondents interpret scales and prompts differently, which can result in misinterpretation of the content of teaching discussions and un-reciprocated interaction types. Introspective ability bias refers to an individual’s ability to objectively rate themselves, which can also result in either the misidentification of alters and/or the nature of their ties. There are several ways to limit these biases in SNA, which include the provision of descriptive prompts with examples which can be used for ego-level or network-level studies to minimize a sense of ambiguity [30].
Some studies determining the presence, diffusion, and subsequent implementation of evidence-based practices throughout a teaching network tend to assume teaching discussions of any type inherently involve the sharing of “good” practices. One such study used the term “teaching-related issues” to refer to discussion about teaching with no additional clarification for what teaching-related content was actually discussed [6]. Another study interviewed 22 participants through a semi-structured interview with the prompt of discussing “methods or techniques they can use to better teach their students important skills, knowledge, or abilities” [7]. The issue with this prompt in the context of the diffusion of specific effective practices is that it assumes the respondent has a solid evidence-based pedagogical foundation. It is possible that they may have networks that are not beneficial because they consist of other faculty members who similarly do not have a solid, evidence-based understanding of what their students need to succeed.
Another study that used a semi-structured interview approach used the prompt “In the past half year, who did you talk to about your teaching? More specifically, who do you talk to about the preparation of courses, teaching courses, student guidance or assessment, experiences with students and/or teaching? You do not have to include administrative or judicial aspects of teaching” [11]. While this prompt provided examples of what was meant by talking about teaching, it combined all levels of teaching discussion into one prompt, so it is impossible to parse out what respondents were actually discussing in regard to teaching. Other studies that do parse out the content of discussions into separate relational ties also make assumptions about the quality of the instructional conversations. Apkarian and Rasmussen used SNA to uncover formal and informal instructional leadership structures [8]. While they investigated four different instruction-specific relationships, including advice about teaching, seeking instructional materials, discussing instructional matters, and instructional influence, assumptions regarding the degree to which respondents understood the differences between the various types of discussions inhibit the validity of the results. The actual prompts were not provided, and there was no mention of the provision of examples to help respondents better understand what was being asked.

2.3. Study Context and Objectives

This existing research makes clear that in order for SNA to be used adequately to help diffuse evidence-based practices, both the identification of instructional expertise and the uncovering of the content of faculty members’ teaching discussions are fundamental [9,20]. Considering that previous research has shown faculty instructional networks to be influential in diffusing best practices and facilitating teaching-related discussions, understanding the mechanisms underlying these networks can be useful for improving student outcomes. What has not been shown throughout the research is a clear understanding of what instructors are actually experiencing within their interactions and how this relates to the diffusion of instructional expertise. This study seeks to provide insight into these interactions by answering the following research questions:
1. Based on expertise, what are the differences in the number, diversity, and expertise of alters?
2. How does expertise relate to instructors’ interpretations of different levels of teaching interactions?
3. What differences exist between various expertise levels and their motives for discussing teaching?
4. How are conditions surrounding teaching interactions different between levels of expertise?

3. Methods

This study used a Mixed-Method Social Network Analysis (MMSNA) approach. MMSNA is an emerging research approach within education studies and combines qualitative and/or quantitative data with social network analysis [28]. The purpose of combining these methods is to elucidate the information gathered from social network analysis by adding complementary data perspectives provided by other methods [28]. The additional data perspectives employed in this study included qualitative semi-structured interviews to uncover the nature of teaching discussions within the faculty teaching networks of study participants. Deductive Qualitative Analysis (DQA) was the method used for the qualitative portion of the study [31]. DQA differs from grounded theory in that it recognizes the importance of acknowledging and including previous research and theories when conducting certain types of qualitative research. It approaches analysis through a deductive rather than an inductive lens and allows previous research related to the new study to influence the methods and analysis from the onset of the study, including conceptualization, data collection, analysis, and interpretation [31]. The information gleaned from previous research as described in the background section of this manuscript informed the research questions, data collection, and analysis for this research.

3.1. Population

The population for this study was a subset of the 82 instructors who had completed the F-IMPACT survey in the fall of 2021. Respondents to the survey had taught an introductory STEM or social science course during the fall of 2021 at a large research university in the Midwest USA. Respondents’ F-IMPACT scores were divided into quartiles, with Quartile 1 representing the bottom quartile and Quartile 4 representing the top quartile. A stratified random sample of three individuals from each quartile was invited to be interviewed regarding their teaching discussions in the spring of 2022. Eight of the twelve instructors who were invited agreed to be interviewed, comprising the population for this study. Table 1 provides a description of the participants’ courses taught, position, years teaching, number of network alters, and F-IMPACT score quartile. Due to the limited number of participants, rather than comparing findings based on quartiles, participants were divided into halves: the top half consists of participants with scores in the third and fourth quartiles and is referred to as High-Level Implementers (HLI), and the lower half consists of participants with scores in the first and second quartiles and is referred to as Low-Level Implementers (LLI), also shown in Table 1. HLIs are expert instructors and LLIs are inexpert instructors within our expertise framework.
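As an illustration of the quartile assignment and stratified draw described above, the following Python sketch shows how such a procedure might look. It is not the authors’ actual code; the file name, column names, and random seed are assumptions made for the example.

```python
import pandas as pd

# Hypothetical input: one row per instructor with an F-IMPACT score.
scores = pd.read_csv("fimpact_fall2021.csv")  # assumed columns: instructor_id, fimpact_score

# Assign quartiles (Quartile 1 = bottom, Quartile 4 = top).
scores["quartile"] = pd.qcut(scores["fimpact_score"], q=4, labels=[1, 2, 3, 4])

# Stratified random sample: three instructors drawn from each quartile (12 invitations total).
invited = scores.groupby("quartile", observed=True).sample(n=3, random_state=42)

# Collapse quartiles into halves for analysis: Q3-Q4 -> HLI (experts), Q1-Q2 -> LLI (inexperts).
invited["group"] = invited["quartile"].astype(int).map(lambda q: "HLI" if q >= 3 else "LLI")

print(invited[["instructor_id", "quartile", "group"]])
```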

3.2. Data Collection

Interviews were conducted and recorded via Zoom, and the Network Canvas software was used to help structure the interviews. Network Canvas is a suite of tools that allows researchers to design, capture, and export network data to assist with the interview process [32]. There were four phases of the interview process: introductory demographic questions, identification of teaching network alters, identification and discussion of interaction levels, and development of the ego-level network. These various portions of the interview were informed by previous research and were intended to capture specific information to answer the research questions. For purposes of this article, only the first three phases are reported.
The results of the first-phase demographic questions are represented in Table 1. F-IMPACT score was determined prior to the interviews and was not discussed during the interviews. The number of network alters was limited to a maximum of 12 due to time constraints of the interview. Participants not only identified which alters they spoke with about teaching, but also identified pairs of alters within their networks that they were certain also talked about teaching. This was important to understand what was being discussed within their networks and to uncover interpretations of interaction levels. After participants identified alters and pairs of alters that discussed teaching, they determined which types of teaching interactions they had engaged in with their alters as well as the types of teaching interactions in which they knew their alter pairs had also engaged. Participants were given follow-up prompts to provide examples of what was discussed for the different levels of interaction. The interaction types provided were as follows:
1. This pair has shared specific advice about teaching, like how to grade specific assignments or how to set up your classroom online.
2. This pair has shared specific resources for teaching, like a website, textbook, or article that helps a certain course you teach.
3. This pair has discussed teaching in a more general way, like education research or best practice.
4. This pair has influenced one another’s teaching practices (can be one directional).
5. This pair has discussed teaching but in a different way than the options provided.
The audio interview data were sent to a third-party transcription service, and the researchers used the software MAXQDA to code and analyze the transcriptions.
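Purely as an illustration of how the resulting tie-level records might be organized for later analysis, one possible structure is sketched below. The study itself used Network Canvas exports and MAXQDA rather than this code, and the class name, fields, and example values are hypothetical; the integer levels correspond to the five interaction types listed above.

```python
from dataclasses import dataclass, field

@dataclass
class TeachingTie:
    """One reported teaching discussion tie (ego-alter or alter-alter)."""
    ego: str                          # the interviewed participant reporting the tie
    pair: tuple[str, str]             # the two people connected by the tie
    interaction_levels: set[int] = field(default_factory=set)  # levels 1-5 from the list above

# Hypothetical example records for a single interview.
ties = [
    TeachingTie("Participant 5", ("Participant 5", "Alter 1"), {1, 2, 3}),
    TeachingTie("Participant 5", ("Alter 1", "Alter 2"), {1}),
]

# Example query: ties whose reported interactions include best practice or
# education research discussions (interaction level 3).
best_practice_ties = [t for t in ties if 3 in t.interaction_levels]
print(len(best_practice_ties), "tie(s) involve best practice or education research discussion")
```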

3.3. Analysis

Within DQA, there are three considerations for analysis, including (1) individual units, which in this study are the faculty members; (2) comparisons between and across the units; and (3) comparison of the results of the first two levels of analysis with existing research [31]. In this study, because specific research questions and background information were used to guide it, codes were developed with specific data collection in mind. Initial coding was conducted at the individual level using predetermined codes and subcodes. Two researchers independently coded one participant’s data and compared these codes to ensure reliability; adjustments were made to address any misalignment, the remaining interviews were coded, and changes were made to the initial coding schema to produce the final coding schema, as seen in Table 2, with additions noted in italics. The additional codes resulted in connections made with other research after the coding process and, therefore, were not discussed in Section 2 (Background) but are presented in Section 5 (Discussion).
Once each participant’s transcripts had been coded, they were categorized into groups as either HLI or LLI, and comparisons within each group were determined based on the research questions and codes. After comparisons within each group were conducted, between-group comparisons were then determined as well. These comparisons are further discussed in Section 4 (Findings). After comparisons of the data within this study were completed, the results were then compared with the existing research used to inform the study, which are further explained in Section 5 (Discussion).

4. Findings

The findings from the qualitative analysis are organized based on comparisons between the HLI and LLI groups regarding teaching network alters, interpretations of network alters, motives to discuss teaching, and conditions surrounding teaching discussions. Table 3 shows a high-level comparison and corresponding specific examples for each category.

4.1. Teaching Network Alters

Participants with high expertise levels were more likely than their counterparts to identify alters that had not taught the same courses, were not in the same department, or were not a faculty member at the same institution. Some of these participants met their alters within professional associations, as stated by Participant 5 (HLI),
“We actually got connected through the State <professional education association redacted>.”
Others named alters they had worked with in a previous setting and had continued discussing teaching with in their new positions, even though they are not in the same college. Participant 4 (HLI) said,
“She and I actually taught for seven years next door…she’s right now in a tenure track development position in the <college redacted> and this relationship has continued…we end up intersecting with a lot of the same group of kids.”
Participants with low expertise levels tended to only name alters in their same department and/or only teaching the same courses, as demonstrated by Participant 8 (LLI) when talking about their named alters:
“All of these people are in the same department and teach similar courses.”
When considering differences in central importance within the network, most alters identified by participants with low F-IMPACT scores tended to teach the same courses currently or very recently. Participant 8 (LLI) demonstrates this by stating,
“We (central alters) teach the same advanced classes as well as teaching the general classes and so our interactions… expand over more classes than I do with some of the other people (less central alters).”
Some ties with central alters identified by participants with low F-IMPACT scores did not consist of teaching-focused interactions. For example, Participant 3 (LLI) stated the following:
“So <name redacted> is the PhD student, I’m her resident co-advisor which means that her real advisor is in Europe, and they needed somebody at UNO to be her co-advisor…and (our discussions) are more research-based and has nothing to do with <redacted> education and we interact a lot but not necessarily about teaching.”
Many participants noted a level of mutual respect and reciprocated collegial admiration for the alters they named, like Participant 7 (HLI), who stated the following:
“So, they are competent in what they teach and what they do, and I also think they are good people and I trust them that they have the same things at heart that I do…so we are in this industry to make students more knowledgeable and enjoy what they’re learning.”
On the contrary, a few participants identified a lack of mutual respect and reciprocated collegial admiration for colleagues they did not name as alters, as demonstrated by Participant 6 (LLI), who stated the following:
“Please don’t invite them (‘pure researchers’) to meetings concerning anything to do with teaching, why are they here, other than to argue with us and yell at us and make fun of us.”

4.2. Interaction Levels

All participants, irrespective of their expertise level, indicated they had engaged in all interaction types, from specific advice about teaching to education research and best practice. Participants seemed to understand that the various interaction levels were attempting to uncover different types of interactions centered around teaching and instruction; however, there were differences in the interpretation of what the various interaction levels implied. This was most evident when participants interpreted the “education research and best practice” interaction level. Participants with low F-IMPACT scores tended to focus primarily on content-related discussions and interpreted them as best practice discussions. For example, Participant 6 (LLI) stated the following:
“A lot of times <name redacted> and I will discuss what we have seen in the journals (subject matter research) and things in the last week to see if it’s anything we need to modify the curriculum for, and the problem is we have new stuff coming out every other day… so we mostly discuss keeping our content up to date and relevant.”
Some participants with low F-IMPACT scores interpreted education research and best practice to be very similar to the specific advice interaction levels. This was demonstrated by Participant 8 (LLI), who interpreted discussions about best practice as being centered around content and course structure:
“We were talking about education practices, but she was talking about <course number redacted> and I was doing something similar in <course number redacted> …different courses but same kind of general content and we had set the courses up the same.”
Conversely, participants with high F-IMPACT scores tended to identify interactions that were more pedagogy-centric or focused around specific education research topics, as stated by Participant 5 (HLI),
“I go to <name redacted> because their area is in assessment and I say ‘Hey, here is what I did (on a test)’ and we kind of go through understanding how hard of a question it was, was the test too hard.”

4.3. Nature of Discussions

All participants identified student success as a primary motivator to discuss teaching with colleagues; typically, that was as much detail as participants with low F-IMPACT scores could provide. For example, Participant 3 (LLI) stated,
“I think for me it’s basically just getting better at working with students, looking how I work with and relate to students…and it’s helpful to know what other instructors think about it.”
Similarly, Participant 1 (LLI) commented,
“When we can do a better job for the students.”
Participants with high F-IMPACT scores went beyond surface-level statements about student success and were able to identify specific areas of improvement as motivation, as demonstrated by Participant 2 (HLI):
“That realization that from year to year, what works best changes…we were reading a study out of John Hopkins University about structurally how kids brains’ changed because of COVID and how the ideas of, there is more anxiety…kids remember less, and they don’t make connections like we are used to them making.”
Participant 7 (HLI) identified the nature of some discussions to find commonalities between different subjects and how topics might be integrated, as follows:
“We talk about not only our approaches but specific topics we might use in class and how they, he’s a <redacted> and I teach in <redacted>, and we talk about what I teach and what he teaches and how they are different and how they might complement each other.”
Course coordination was a commonly cited reason for discussing teaching regardless of F-IMPACT score. Participant 3 (LLI) noted,
“We talk about setting up the classroom (same course) online…and shared resources on how to teach <course number redacted>.”

4.4. Conditions Surrounding Teaching Discussions

Some participants, regardless of expertise level, were part of teams that met with the intention of improving instruction or discussing current issues centered around teaching. There was a difference in the frequency of interactions for participants that were members of such teams based on F-IMPACT scores, where those participants with low F-IMPACT scores met much less frequently than participants with high F-IMPACT scores. For example, Participant 8 (LLI) stated the following:
“We (named alters) have a group where we meet periodically …active collaboration to either coordinate or figure out what we think is best practice our how we are doing it…it’s about once a month”.
Conversely, Participant 4 (HLI) explained,
“Formally we meet once a week as a large team, so there is always that time that is less about course specific stuff but still about teaching.”
A few participants with high F-IMPACT scores identified ongoing weekly or even daily discussions within their networks, as explained by Participant 2 (HLI),
“As part of that refinement process, a lot of us do, have chat threads on Teams …where we say ‘I tried this and it worked really well, or I tried this and you’ll want to change these things’”.
Along with continuous discussions around teaching, all participants with high expertise also reported seeking out their network alters for frequent informal pedagogical discussions, like Participant 4 (HLI), who stated the following:
“<Name redacted>, I just go to with any question and I just walk down the hall and …I talk to her about other things but definitely if I ever have a question and I’m thinking about trying this in my class I go and talk to her and bounce ideas off of her.”
Some participants with low expertise also noted they had frequent, informal discussions around teaching. Participant 1 (LLI) stated,
“<Name redacted> and I talk about two or three times a week.”
While no participants with high expertise noted COVID-19 as a reason for decreased interactions, some participants with low expertise did identify public health measures to help decrease the spread of COVID-19 as a reason for less interaction, as demonstrated by Participant 3 (LLI):
“It used to be that we had faculty department meetings every two weeks and you could just run into people and chat before the meeting starts and find out what is happening in their courses, research, and there is none of that these days on Zoom.”

5. Discussion

For RQ1, which was concerned with differences in the number, diversity, and expertise levels of alters named, there were some similarities between the findings of this study and previous studies. Previous studies that sought to determine differences between expert and inexpert instructors found that experts tended to have larger networks, meaning that they identified more alters composing their network when compared to inexpert instructors [11]. This study also found a difference in the number of alters named by HLI participants compared to LLI participants. Participants were provided instructions to name up to 12 alters within their networks. While significance cannot be established, the range of alters for LLIs was 4–6 and the range for HLIs was 5–12. The HLI participant who identified only five alters was also very cognizant of not naming people with whom he did not actually discuss best practices, although he did discuss teaching in various ways, such as student behavior, class scheduling, etc. This demonstrates that he interpreted a teaching network to consist of pedagogical discussions and not student behavior or logistical discussions.
One difference between the findings of the Van Waes study and this study is how “experts” were determined [11]. The first data category used to determine an expert in the Van Waes study was years of experience. In this study, the range of years of experience for LLIs was 9–44 years and the range for HLIs was 8–26 years. The Van Waes study required expert teachers to have a minimum of 10 years of experience, which is inconsistent with the findings of this study, where one participant in the top quartile, not merely the top half, had only eight years of teaching experience.
Another difference between the Van Waes study and this study was the operational definition of diversity [11]. For Van Waes, diversity was measured through age, teaching experience, and gender. For this study, diversity meant the inclusion of alters outside of one’s department. There were differences in the diversity of named alters between HLI and LLI groups. While all participants identified alters within their own departments and specific content areas, only participants with high expertise named alters outside of their departments. This identification of diversity implies that participants with high expertise understand that best practices transcend specific content areas, which relates to Webb’s highest level of Depth of Knowledge (DoK) [33,34]. Webb’s DoK framework categorizes learning contexts into four levels that progress through deeper cognitive stages. The stages within Webb’s DoK include recall, knowledge application, strategic thinking, and extended critical thinking. The notion that best practices transcend application within a specific course or similar courses demonstrates extended critical thinking because participants are fusing information from seemingly disparate circumstances, synthesizing that information, and then applying it in a new situation.
For RQ2, which was concerned with the relationships between expertise levels and their interpretations of teaching interactions, some differences were identified. The biggest difference was that HLI participants interpreted best practice and education research discussions to be separate from discussions centered around specific advice and resources, and their discussions focused on pedagogical practices. Conversely, LLI participants tended to repeat the topics of discussion for specific advice and resource sharing when interpreting best practice and education research, and these discussions typically focused on content specific issues. These differences demonstrate the notion that HLIs recognize that sharing specific course setup advice and resources is different than discussing best pedagogical practices and research, while LLIs do not make such distinctions. This result is consistent with the findings of Quardokus and Henderson [6].
RQ3 focused on the differences that exist between expertise levels and motives for discussing teaching. There were similarities noted between HLI and LLI groups where student success was always mentioned as a primary motivator. One difference, however, was the level of detail provided about best practice discussions and the motives for interacting. HLIs mentioned specific areas of best practice, such as assessments and cross-subject integration, as the motivators for discussion. LLIs did not provide specific details, and discussions were generally concerned either with course setup or course-specific content. This ability to identify more specific details about best practice during a semi-structured interview indicates that HLIs are more knowledgeable about best practice and education research than their LLI counterparts. This also serves as congruent validity of the F-IMPACT as a measure of instructional expertise within our framework.
When considering these results within the context of Webb’s DoK [33,34], LLI participants typically demonstrated recall and knowledge application through their responses of discussions centered on basic course set up and comparing more specific course-related practices. Conversely, HLI participants were going beyond recall and knowledge application and discussing recent best practice either within their fields of study or outside of it and how this new knowledge can be applied to their instruction, which represents strategic thinking. HLIs were also engaging in extended critical thinking when evaluating their own methods of evaluation and specific assessment items. These differences in responses about discussion topics represent different levels of knowledge that are not necessarily captured within the F-IMPACT but have major implications for the quality of discussions.
RQ4 focused on determining similarities and differences of the conditions surrounding teaching interactions based on expertise level. The conditions that surfaced throughout the interviews included the presence of instruction-focused teams, frequency of discussions, presence of informal discussions, and the role of public health policies related to COVID-19. One similarity between HLIs and LLIs was that some of them were members of instruction-focused teams within their departments, which were specifically studied by Apkarian and Rasmussen [8]. One of the LLIs and two of the HLIs were members of instructional improvement teams. One difference between these teams was how frequently they formally met. The teams of the HLIs met weekly, whereas the team of the LLI met monthly.
The other difference between these instructional teams was the F-IMPACT scores of the alters named. While we did not have access to the F-IMPACT scores of all the alters identified by the HLI participants, we did have scores for many of them, and these alters typically scored in the top two quartiles. We had access to F-IMPACT scores for most of the alters identified by the LLI participant, where those within their instruction-focused team were in the bottom two quartiles. This finding highlights the necessity of having existing instructional expertise within a teaching discussion network for successful diffusion of evidence-based practices, as discussed by Reding et al. [9].
Another similarity between HLIs and LLIs was the presence of informal discussions, as all participants noted that they discussed teaching outside of formal meetings, similar to the Benbow and Lee study; however, HLIs tended to have geographic proximity and were able to “walk down the hall” to have informal conversations as desired [7]. On the other hand, LLIs tended to be either geographically isolated from their peers or did not participate in informal discussions as frequently as HLIs. While none of the HLIs mentioned public health policies due to COVID-19 as a deterrent of discussing teaching, a few of the LLIs did. These findings do not align with the Van Waes study’s findings, which determined that inexpert teachers talk about teaching more frequently; however, we caution that our findings occurred within the exigent context of a global pandemic [11].
The objective of this study was to further explore the experiences of instructors within their teaching-related networks based on their instructional expertise. These experiences included teaching network alters, interpretations of levels of teaching interactions, motives to discuss teaching, and conditions surrounding teaching discussions. While it is evident that discussions about teaching were occurring for all participants, there were differences between expert and inexpert instructors. In general, expert instructors had larger networks that also consisted of expert alters, greater frequency of discussions throughout the semester (both formal and informal), and participation in discussions centered around best practices and education research. Inexpert instructors had smaller teaching networks that consisted of other inexpert instructors, lower frequency of interactions, and discussions that centered around sharing course-specific, surface-level advice.
As with all studies, limitations are present in this study and must be noted. We acknowledge many of the usual limitations of a qualitative study of this nature, including the limited sample size, the relative inability to verify results, and the inability to establish statistical significance. Keeping these limitations in mind, we did attempt to address these issues. While the sample size is small, efforts through stratified random sampling were made to ensure a variety of participants were invited to share their experiences in terms of expertise, courses taught, years taught, and position. We also have not made any attempts to identify causality or significant relationships. We are also limited in our ability to develop a detailed model of “what” instructors are discussing, due to the large diversity in disciplines represented by the participants. This has forced us to focus on the broader “nature” of discussions, as opposed to specific pedagogical content knowledge within a field. The small sample size and limiting interview methodology have, however, allowed us to establish congruent validity of the F-IMPACT as a measure of instructional expertise within our framework and that of Webb’s DoK, which opens the possibility for future large-scale quantitative investigations of instructor expertise and correlations with other instructor characteristics, such as gender, ethnicity, discipline, etc.
This study has implications for future research that seeks to gain a better understanding of the who, what, when, where, why, and how of instructor teaching-related networks. Researchers need to keep in mind that the instruments used to collect data regarding teaching discussions need to be carefully crafted. As demonstrated by differences in the findings of recent research, including this study, teaching discussions come in many different forms. Simply because an instructor discusses teaching does not necessarily mean that the discussion includes best practices or that the discussion translates into best practice implementation in the classroom. Levels of expertise also come in many different forms. Assuming that uncorrelated factors, such as years of experience, student evaluations, and department head nominations, are sufficient to identify expertise ignores what is actually implemented in the classroom, which we argue is the primary indicator of expertise. Finally, our findings suggest that teams attempting to increase the diffusion of instructional expertise across a unit and/or institution may need to engage individuals who possess expertise, insert them into existing teaching discussion networks or build new ones around this expertise, and facilitate frequent discussions focused on crosscutting practices.

Author Contributions

Conceptualization, T.R. and C.M.; methodology, T.R.; formal analysis, T.R. and C.M.; investigation, T.R. and C.M.; resources, C.M.; data curation, T.R. and C.M.; writing—original draft preparation, T.R.; writing—review and editing, C.M.; project administration, C.M.; funding acquisition, C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the USA National Science Foundation Directorate for Undergraduate Education #2021315.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the University of Nebraska Medical Center (IRB protocol #535-20-EX).

Informed Consent Statement

Informed consent was obtained from all subjects.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The raw data are not publicly available in accordance with the IRB protocol.

Acknowledgments

The authors would like to acknowledge the administrative support of the University of Nebraska Omaha STEM TRAIL Center.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Borrego, M.; Henderson, C. Increasing the use of evidence-based teaching in STEM higher education: A comparison of eight change strategies. J. Eng. Educ. 2014, 103, 220–252.
2. Lane, A.K.; Skvoretz, J.; Ziker, J.P.; Couch, B.A.; Earl, B.; Lewis, J.E.; McAlpin, J.D.; Prevost, L.B.; Shadle, S.E.; Stains, M. Investigating how faculty social networks and peer influence relate to knowledge and use of evidence-based teaching practices. Int. J. STEM Educ. 2019, 6, 1–14.
3. Ma, S.; Herman, G.L.; West, M.; Tomkin, J.; Mestre, J. Studying STEM faculty communities of practice through social network analysis. J. High. Educ. 2019, 90, 773–799.
4. McConnell, M.; Montplaisir, L.; Offerdahl, E.G. A model of peer effects on instructor innovation adoption. Int. J. STEM Educ. 2020, 7, 1–11.
5. Shadle, S.E.; Liu, Y.; Lewis, J.E.; Minderhout, V. Building a community of transformation and a social network analysis of the POGIL project. Innov. High. Educ. 2018, 43, 475–490.
6. Quardokus, K.; Henderson, C. Promoting instructional change: Using social network analysis to understand the informal structure of academic departments. High. Educ. 2015, 70, 315–335.
7. Benbow, R.J.; Lee, C. Teaching-focused social networks among college faculty: Exploring conditions for the development of social capital. High. Educ. 2019, 78, 67–89.
8. Apkarian, N.; Rasmussen, C. Instructional leadership structures across five university departments. High. Educ. 2021, 81, 865–887.
9. Reding, T.; Moore, C.; Pelton, J.A.; Edwards, S. Barriers to Change: Social Network Interactions Not Sufficient for Diffusion of High-Impact Practices in STEM Teaching. Educ. Sci. 2022, 12, 512.
10. Kezar, A. Higher education change and social networks: A review of research. J. High. Educ. 2014, 85, 91–125.
11. Van Waes, S.; Van den Bossche, P.; Moolenaar, N.M.; De Maeyer, S.; Van Petegem, P. Know-who? Linking faculty’s networks to stages of instructional development. High. Educ. 2015, 70, 807–826.
12. Boshuizen, H.P.; Bromme, R.; Gruber, H. Professional Learning: Gaps and Transitions on the Way from Novice to Expert; Kluwer Academic Publishers: Amsterdam, The Netherlands, 2014.
13. Berger, J.L.; Girardet, C.; Vaudroz, C.; Crahay, M. Teaching experience, teachers’ beliefs, and self-reported classroom management practices: A coherent network. SAGE Open 2018, 8, 2158244017754119.
14. Harris, D.N.; Sass, T.R. What Makes for a Good Teacher and Who Can Tell? Urban Institute: Washington, DC, USA, 2009.
15. Irvine, J. Relationship between Teaching Experience and Teacher Effectiveness: Implications for Policy Decisions. J. Instr. Pedagog. 2019, 22, EJ1216895.
16. McPherson, M.A.; Jewell, R.T.; Kim, M. What determines student evaluation scores? A random effects analysis of undergraduate economics classes. East. Econ. J. 2009, 35, 37–51.
17. Fan, Y.; Shepherd, L.J.; Slavich, E.; Waters, D.; Stone, M.; Abel, R.; Johnston, E.L. Gender and cultural bias in student evaluations: Why representation matters. PLoS ONE 2019, 14, e0209749.
18. Chávez, K.; Mitchell, K.M. Exploring bias in student evaluations: Gender, race, and ethnicity. PS Political Sci. Politics 2020, 53, 270–274.
19. Carpenter, S.K.; Witherby, A.E.; Tauber, S.K. On students’ (mis)judgments of learning and teaching effectiveness. J. Appl. Res. Mem. Cogn. 2020, 9, 137–151.
20. Middleton, J.A.; Krause, S.; Judson, E.; Ross, L.; Culbertson, R.; Hjelmstad, K.D.; Hjelmstad, K.L.; Chen, Y.C. A Social Network Analysis of Engineering Faculty Connections: Their Impact on Faculty Student-Centered Attitudes and Practices. Educ. Sci. 2022, 12, 108.
21. Trigwell, K.; Prosser, M. Development and use of the approaches to teaching inventory. Educ. Psychol. Rev. 2004, 16, 409–424.
22. Wieman, C.; Gilbert, S. The teaching practices inventory: A new tool for characterizing college and university teaching in mathematics and science. CBE Life Sci. Educ. 2014, 13, 552–569.
23. Moore, C.; Cutucache, C.; Edwards, S.; Pelton, J.; Reding, T. Modification and validation of the Teaching Practices Inventory for online courses. In Proceedings of the 2021 Physics Education Research Conference, Virtual, 4–5 August 2021.
24. Baker-Doyle, K.J.; Yoon, S.A. In search of practitioner-based social capital: A social network analysis tool for understanding and facilitating teacher collaboration in a US-based STEM professional development program. Prof. Dev. Educ. 2011, 37, 75–93.
25. Thiele, L.; Sauer, N.C.; Kauffeld, S. Why extraversion is not enough: The mediating role of initial peer network centrality linking personality to long-term academic performance. High. Educ. 2018, 76, 789–805.
26. Reding, T.E.; Dorn, B.; Grandgenett, N.; Siy, H.; Youn, J.; Zhu, Q.; Engelmann, C. Identification of the emergent leaders within a CSE professional development program. In Proceedings of the 11th Workshop in Primary and Secondary Computing Education (WiPSCE 2016), Münster, Germany, 13–15 October 2016; pp. 37–44.
27. Lin, N.; Cook, K.; Burt, R.S. Social Capital: Theory and Research; Transaction Publishers: Piscataway, NJ, USA, 2001.
28. Froehlich, D.E. Mapping mixed methods approaches to social network analysis in learning and education. In Mixed Methods Social Network Analysis; Routledge: London, UK, 2020; pp. 13–24.
29. McDonald, J.D. Measuring personality constructs: The advantages and disadvantages of self-reports, informant reports and behavioural assessments. Enquire 2008, 1, 1–19.
30. Choi, B.C.; Pak, A.W. A catalog of biases in questionnaires. Prev. Chronic Dis. 2005, 2, A13.
31. Fortune, A.E.; Reid, W.J.; Miller, R.L., Jr. Qualitative Research in Social Work; Columbia University Press: New York, NY, USA, 2013.
32. Hogan, B.; Melville, J.R.; Phillips, G.L., II; Janulis, P.; Contractor, N.; Mustanski, B.S.; Birkett, M. Evaluating the paper-to-screen translation of participant-aided sociograms with high-risk participants. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 5360–5371.
33. Webb, N.L. Determining Alignment of Expectations and Assessments in Mathematics and Science Education. NISE Brief 1997, 1, n2.
34. Webb, N.L. Alignment of Science and Mathematics Standards and Assessments in Four States; Research Monograph No. 18; Institute of Education Sciences: Washington, DC, USA, 1999.
Table 1. Participant demographics.

ID | Course Categorization | Position | Years Teaching | Number of Network Alters | F-IMPACT Score Quartile | Expert Level
1 | Science | Instructor (Full-time) | 44 | 5 | 2 | LLI
2 | Mathematics | Instructor (Full-time) | 26 | 12 | 4 | HLI
3 | Information Science and Technology | Associate Professor | 17 | 6 | 1 | LLI
4 | Mathematics | Assistant Professor | 8 | 10 | 4 | HLI
5 | Science | Instructor (Part-time) | 12 | 5 | 3 | HLI
6 | Science | Instructor (Full-time) | 15 | 4 | 1 | LLI
7 | Social Science | Associate Professor | 22 | 8 | 4 | HLI
8 | Science | Assistant Professor | 9 | 6 | 2 | LLI
Table 2. Resulting codes based on research questions.

Research Question | Code | Subcodes
Based on expertise, what are the differences in the number, diversity, and expertise of alters?
  • Courses Taught
  • Department
  • College
  • Institution
  • Same (as participant’s)
  • Different (than participant’s)
How does expertise relate to instructors’ interpretations of different levels of teaching interactions?
  • Levels represent different types of discussions
  • Levels do not represent different types of discussions
  • College
  • Institution
  • Clear distinction between level 3 and levels 1 and 2
  • Unclear distinction between level 3 and levels 1 and 2
What differences exist between various expertise levels and their motives for discussing teaching?
  • General
  • Student Success
  • Improve Teaching
  • Course Set Up
What differences exist between various expertise levels and their motives for discussing teaching?
  • Specific
  • "Broader" Pedagogical Practices
  • Current Education Research
How are conditions surrounding teaching interactions different between levels of expertise?
  • Instruction-based Teams
  • Frequency
  • Formality
How are conditions surrounding teaching interactions different between levels of expertise?
  • Barriers
  • Proximity
  • COVID-Related
Table 3. High-level Comparison of Results Based on Implementation Level.

Teaching Discussion Network Alters
  High-Level Implementers:
    • All identified alters from their same department
    • All identified alters in a different department
    • Most identified alters in a different college
    • Most identified alters in a different institution
  Low-Level Implementers:
    • All identified alters from their same department
    • None identified alters in a different department
    • None identified alters in a different college
    • None identified alters in a different institution
Teaching Interaction Level Interpretation
  High-Level Implementers:
    • Perceived difference between all interaction levels
  Low-Level Implementers:
    • Perceived difference between some interaction levels
Motives for Teaching Discussions
  High-Level Implementers:
    • Student success as motivator
    • Provided specific “best practice” reasons for teaching discussions
  Low-Level Implementers:
    • Student success as motivator
    • Provided general “best practice” reasons for teaching discussions
Conditions Surrounding Teaching Discussions
  High-Level Implementers:
    • Some are members of teams focused on instruction
    • Most have frequent discussions about pedagogical improvements
    • All stated that they seek out informal interactions throughout the semester
    • None identified lack of interaction because of public health policies due to COVID
  Low-Level Implementers:
    • Some are members of teams focused on instruction
    • All had infrequent discussions about pedagogical improvements
    • Some stated that they seek out informal interactions throughout the semester
    • Some identified lack of interaction because of public health policies due to COVID
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
