Soc. Sci. 2017, 6(1), 29; doi:10.3390/socsci6010029

Gender in Engineering Departments: Are There Gender Differences in Interruptions of Academic Job Talks?
Mary Blair-Loy 1,*, Laura E. Rogers 1, Daniela Glaser 2, Y. L. Anne Wong 3, Danielle Abraham 4 and Pamela C. Cosman 5
1 Department of Sociology, University of California, San Diego, La Jolla, CA 92093, USA
2 Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90007, USA
3 Department of Sociology, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
4 Lindamood-Bell Learning Processes, 445 Marine View Ave #290, Del Mar, CA 92014, USA
5 Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA
Academic Editors: Maria Charles and Sarah Thébaud
Received: 1 September 2016 / Accepted: 1 March 2017 / Published: 14 March 2017


Abstract

We use a case study of job talks in five engineering departments to analyze the under-studied area of gendered barriers to finalists for faculty positions. We focus on one segment of the interview day of short-listed candidates invited to campus: the “job talk”, when candidates present their original research to the academic department. We analyze video recordings of 119 job talks across five engineering departments at two Research 1 universities. Specifically, we analyze whether there are differences by gender or by years of post-Ph.D. experience in the number of interruptions, follow-up questions, and total questions that job candidates receive. We find that, compared to men, women receive more follow-up questions and more total questions. Moreover, a higher proportion of women’s talk time is taken up by the audience asking questions. Further, the number of questions is correlated with the job candidate’s statements and actions revealing that he or she is rushing to present the slides and complete the talk. We argue that women candidates face more interruptions and often have less time to bring their talk to a compelling conclusion, which is connected to the phenomenon of “stricter standards” of competence demanded by evaluators of short-listed women applying for a masculine-typed job. We conclude with policy recommendations.
Keywords: gender; STEM; interruptions; job talks; gender bias; faculty hiring; underrepresentation of women; women in science; double standards; stricter standards

1. Introduction

Women remain starkly under-represented in STEM (Science, Technology, Engineering, and Mathematics) professional occupations in the United States. Over the past two decades, researchers and policy makers have focused on “leaky pipelines” and challenges to the recruitment and retention of girls and women in STEM education, and women have made some gains there. However, women with college and advanced degrees remain underrepresented in many STEM fields [1,2]. Policy makers and academics view the paucity of women in academic STEM as placing limits on scientific creativity [3,4] and contributing to the national shortage of STEM professionals [5,6,7].
This paper focuses on tenured and tenure-track academic engineering faculty positions in research-focused universities [8,9,10]. The ways that gendered barriers may persist in academic hiring are not fully understood. Experimental studies have found that women can be held to double standards and stricter standards compared to men [11,12], especially when the highest levels of competence are demanded [13,14]. In contrast, another study that distributed a set of hypothetical short lifestyle descriptions of faculty job candidates without details of technical qualifications found that women had a higher chance than men of being chosen [15].
However, there is a dearth of research regarding how the faculty hiring process unfolds within real departmental contexts. Moreover, we are aware of no study that considers whether gender barriers are salient for women and men who have risen to the top of a large applicant pool and been added to the “short list” of finalists invited for a campus interview.
We analyze this issue with a case study of short-listed applicants, who have been invited to campus to interview for tenure-track faculty appointments within five male-dominated engineering departments across two Research 1 universities.1 Case-oriented research is not intended to be generalizable but rather sheds light on under-studied social processes. Our data come from the most important segment of the job interview: the “job talk”, a seminar in which the candidate presents his or her original research to the academic department. Our paper assesses whether in these talks, women candidates face greater scrutiny and stricter standards, manifesting as more questions, compared to men candidates.
We analyze the number of questions presenters receive from the audience, which is mostly composed of department faculty and graduate students. By asking questions, faculty try to assess whether the candidate is fully in command of his or her research project and its larger implications. Some interruptions may indicate audience engagement, while others may indicate that the speaker was unclear or that the audience questions the presenter’s competence.
To preview our results, we find that compared to men presenters, women face more questions during their job talk seminars, are confronted with more follow-up questions, and spend a higher proportion of time listening to audience speech. Moreover, we find that questions directed to women and men candidates are more prevalent in more highly male-dominated departments, compared to departments that have a somewhat higher proportion of women on the faculty. More senior candidates generally receive fewer questions than more junior ones, but women face more questioning and scrutiny compared to men with the same level of experience.
As noted in the conclusion section, our data have some limitations. Our IRB agreement allows us research access to this treasure trove of video recordings collected for other purposes but does not permit us to examine which candidates were actually offered a job. Even if job offer data were available, it would be of limited value. Many candidates withdraw from consideration after receiving preferable offers from other universities, so the absence of an offer does not reliably indicate a candidate’s lack of success with the interview. The data do show that, regardless of gender, the number of questions is correlated with candidates’ statements and actions indicating they are rushing to finish their slides and conclude their talk. To our knowledge, this is the first study of whether there are gender differences in the degree to which faculty candidates are interrupted during job talks.
The next section will present our theoretical framework, which motivates our research questions. We adopt the broad sociological perspective that gender frames expectations and interactions within academic departments. We briefly present literature on implicit biases, which automatically give men more credit than similar women for competence. We then examine how these processes can unfold in ways consistent with double or stricter standards, which could manifest as evaluators posing more questions during the job talk. We therefore turn to insights from a literature on interruptions in workplace or task-oriented interactions.
Following that, we present our data and methods. Next, we provide descriptive and multivariate results. Our discussion and conclusion section also presents study limitations and policy implications.

2. Theoretical Framework and Research Questions

Extensive research documents broadly shared implicit biases, which can automatically filter assessments of professionals in ways that penalize women, while giving men automatic credit for competence [16,17,18]. Assumptions that women are less competent are particularly prevalent in male-dominated settings [14,19,20]. This is important in our case study. Engineering has historically been seen as a “masculine” profession, because it is numerically male-dominated, and because the culture and ethos of the industry are considered masculine [21,22]. Further, in male-dominated disciplines such as engineering, academic success has been understood to depend on raw brilliance, a quality less frequently attributed to women [23].
The processes of biased evaluation are illuminated by studies of how, under the illusion of meritocracy, evaluators can apply double standards in evaluation and hiring. Studies of academic hiring, based on detailed candidate information that is real or believed to be real by faculty evaluators, discover that women candidates are seen as less competent, less qualified, and less hirable, compared to men with similar qualifications. In an analysis of real candidate applications selected for a prestigious medical research fellowship, faculty evaluators gave women applicants less credit than men for their publications [24]. In a study of psychology professor applications, faculty assessing one ostensibly real CV with a female name gave this candidate less credit for her qualifications and were less likely to recommend hiring her, compared to other participants, who viewed an identical CV with a male name [25]. Another experimental study used physics, chemistry, and biology professors as participants to examine an ostensibly real CV of either a man or a woman science student applying for a lab manager position [26]. Compared to equally qualified women candidates, the men were more likely to be rated as competent and hirable and were offered a higher salary.
A line of experimental research by Foschi and colleagues examines how subjects evaluating results of participants’ simulated tasks are implicitly aware of status characteristics such as gender. These studies show that evaluators generally subject women to closer scrutiny and harsher standards when inferring competence. In contrast, they will tend to assess similar performances by men with more lenient standards and give them the benefit of the doubt [11,12,14]. Particularly when tasks are seen as masculine, evaluators generally assume that the man candidate has more ability than a comparable woman and apply more lenient standards for him than for her [14].2
In contrast to the broad direction of the literature cited above, one study found that evaluators were more likely to rate a woman academic candidate than a man academic candidate as hirable [15]. However, this study relied on short narrative summaries of similarly strong men and women candidates. Importantly, the narrative summaries described a hypothetical search committee’s evaluations of the job candidates, and assigned the hypothetical men and women candidates identical numerical scores for their interviews and job talks.3 The provision of only narrative summaries and secondary judgments allows evaluators to rely on others’ assessments of the job candidates instead of forming their own judgment about the hypothetical candidates based on detailed objective information generally provided in academic job searches, such as education credentials and research productivity, alongside any implicit gender biases that may exist. Moreover, the assignment of identical scores obviates the double standards phenomenon that the literature shows generally favors men in masculine-typed occupations.4
In other research, Biernat and Kobrynowicz [13] found that for inferences of minimum ability, lower standards are set for the lower status group (such as women) and higher standards for the higher status group (men). However, when inferences about greater ability had to be made, the reverse pattern emerged: women were held to stricter standards and men to more lenient ones. Similar results were found in other studies (see [11] for a review, e.g., [27]). Further, women were as likely or more likely than men to be considered competent enough to be short-listed, consistent with the lower minimum standards generally set for women, but they were held to harsher standards in objective rankings as well as in promotion and hiring decisions [28,29].
Thus, scope conditions for double standards becoming stricter standards and greater scrutiny for women include masculine-typed settings when confirmatory decisions (that require higher assessments of ability) are being made. These scope conditions fit our study.
Overall, this line of research suggests that in an actual engineering faculty job search, with real stakes and zero-sum decisions involved, women who have made it to the short list may confront heavier scrutiny and stricter standards than short-listed men during the interview. This reasoning suggests that women job candidates may be implicitly assumed to be less competent, challenged more than men candidates, and asked more questions by faculty members during their job talk. Patterns of evaluators’ closer scrutiny and stricter standards for women are manifestations of “prove it again” bias [18]. More broadly, by studying audience-candidate interactions in recorded job talks, we assess whether gender barriers emerge within the social context of actual departments as work units and whether such barriers vary depending on social structural features of departments [30].
We now turn specifically to the literature on interruptions. Here, this literature will help us formulate specific research questions. Later, we return to the interruptions literature as we operationalize questions and interruptions.
The literature on conversational interruptions abounds with examples of gender effects. The classic study by Zimmerman and West [31] found that men interrupt women more than the other way around. One experimental study of task-oriented groups found that the odds of a man interrupting another man are less than half of the odds that a man will interrupt a woman. Further, men’s interruptions of men are generally more positive and affirming, while men’s interruptions of women are more negative. In contrast, women interrupt women and men equally [32]. Fewer interruptions were found in all-male groups than in mixed-gender or all-female groups [33].
Other studies have found that a host of variables are predictive of interruptions, and may be more significant than gender in particular situations. For example, Irish and Hall [34] found that patients interrupt more than their physicians do, but also patients tend to interrupt with statements whereas physicians interrupt by asking questions. In conversations between managers and employees, Johnson [35] found that “formal legitimate authority severely attenuates the effect of gender in these groups”. While authority, status, topic, setting, group size and composition, and many other factors have been shown to play significant roles in predicting conversational interruptions, considerable research has supported the basic gender effect that men interrupt more than women do [36,37,38], and that women are more frequently interrupted than men [32,39].
These studies on gender and implicit bias, double standards and interruptions motivate our research questions.
Research Question 1a: Among job candidates, do women experience more questions than men?
Research Question 1b: Relative to men, is a higher share of women candidates’ time taken up by audience speech?
We also examine variation by department. Studies of gender in other workplace settings find that women face more equal treatment when they are in gender-integrated work settings, even within male-dominated occupations and industries [40,41]. As we explain below, the proportion of women among the faculty in the departments we study ranges from 4% to 18%. Although none of our departments are gender-balanced, the departments at the higher end are among those with the largest share of women faculty among the top 50 engineering schools in the nation. We study whether a higher share of women in the departmental faculty is associated with fewer interruptions at the job talks in those departments.
Research Question 2: Net of gender, do candidates presenting in departments with a smaller proportion of women on the faculty experience more questions than candidates presenting in departments with a larger proportion of women on the faculty?
Further, we are interested in whether the job candidate’s post-Ph.D. experience matters. Previous research on faculty CVs suggests that gender bias is more pronounced when candidates are more junior, and their potential is judged more subjectively, compared to when candidates are more senior and have a clear and unambiguous track record of achievement [25]. Moreover, in the interruptions literature, authority dampens the effect of gender on conversational interruptions [35].
Research Question 3: Do junior candidates experience more questions than more senior candidates?

3. Data and Methods

Case-oriented research identifies a small, non-random sample and investigates it deeply; this is not meant to be generalizable but rather illuminates the complexity of the context under study [42,43,44].5 For our case study, we analyze interruptions in job talks in highly ranked engineering departments, in order to examine whether gendered processes unfold despite formal commitments to meritocracy and fairness.

3.1. Data

Our data of 119 recorded job talks come from five departments across two universities, whose engineering divisions each rank in the national top 50.6 The departments are Computer Science (CS), Electrical Engineering (EE), and Mechanical Engineering (ME). CS and EE are studied at both University 1 and University 2; ME is considered only at University 1. The share of women on the faculty ranges from 4% to 18%. We analyze 92 talks from University 1 and 27 from University 2.
The talks existed as archived videos that had already been recorded by departments during two years of hiring, for purposes unrelated to this study. Some departments record talks so that faculty who are out of town can evaluate the candidates; others keep the recordings available as a resource for their graduate students.
In our data, the job talks take place in a campus conference room. The candidate is evaluated on his or her performance in presenting their original research and responding to questions. All departments in the study schedule job talks for nominally one hour. Candidates are given their schedules in advance. Both candidates and audience members generally also know that there is no hard stop at the one-hour mark, since running over will merely subtract some minutes from the next event, which is typically lunch, or a break.
Talks are generally advertised by posting flyers and by sending e-mail to faculty, postdocs, and graduate students in the academic department conducting the search for faculty candidates. The e-mail may get forwarded to people with related research interests in other departments, and it is common to have a few audience members from other departments. Faculty are the most active members of the audience and the ones most likely to ask the presenter questions.
We lack consistent data on the gender of audience members who ask questions. The presenter wears a microphone for clear audio and is consistently in the field of view of the camera. However, the audience members who ask questions may not be visible in the picture, and the audio sometimes leaves their gender unclear.
We constructed a sample of the archived videos in five engineering departments hosting job searches over two recent academic years.7 For these departments, women applicants represent roughly 15% to 20% of all job applicants. Due to the small numbers of interviewees, the percentages of women in the interview pool vary from 0% to about 33% across different department job searches. Given the small proportion of women presenters in the population, we over-sampled women as follows. We used all the videos from women candidates, and attempted to match each woman with two men of the same seniority from the same department. Seniority is measured by the number of years post-Ph.D. Seniority ranges from 0 (candidates still finishing their dissertations, colloquially called ABDs (All but Dissertation) or “baby Ph.D.s”) to candidates with multiple years of post-Ph.D. experience.
There were a few instances when the matching process was not exact. For example, three women ABD candidates were matched with five (rather than six) men ABDs, because there were not six men available in that seniority category in that department. In another example, a woman candidate seven years past her Ph.D. was matched with men who were seven and eight years out, because there were not two men available who were seven years out.
We refer to the faculty candidate as the “presenter”, and the time they spend formally presenting their slides (excluding time responding to interruptions) as the “presentation”. In our context of the job talk, we are concerned with the amount of time taken away from the candidate’s nominal one hour of presentation time. Because we are interested in the presentation time and how interruptions and questions could affect the outcome of the talk or whether it is brought to conclusion, our analysis excludes the dedicated Question and Answer (Q & A) segment after the presenter has formally concluded the talk.
To code our data, we watched the videos using playback that displays minutes and seconds. When an audience member asked a question, the coder paused the video and noted the start and end times of the question. Likewise, the start and end times of answers were noted. The coding process involves a judgment call by the coder to decide what constitutes the end of an answer, when the presenter returns to the presentation.

3.2. Defining Types of Interruptions, Our Dependent Variables

In some previous studies, conversational interruptions have been defined in terms of syllabic measurements, for example as simultaneous talk which begins more than two syllables from the end of a current speaker’s sentence [49] or in terms of grammatical, turn-construction units that are “hearably complete” [50]. Interruptions have also been defined in more contextual ways, for example taking into account whether a speaker has already made a point, or whether they are repairing a previous violation of their speaking turn [51,52].
Since much past research has focused primarily on interruptions in turn-taking conversation, it has required definitions of interruptions appropriate to that context. By comparison, there have been few studies of interruptions in scenarios with an audience and a presenter. Furthermore, in these latter studies, for example a psychology experiment involving hecklers during a speech [53] or an examination of television coverage of a political speech [54], determining the existence of an interruption was straightforward and not a focus of the study.
Within the pre-Q & A period under analysis, we are concerned with any time that an audience member speaks, regardless of syntactic positioning. We define three types of speaking by audience members:
An acknowledged question is one where the audience member raises his or her hand, and is acknowledged by the presenter. This definition relies on the audience member’s hand gesture and presenter’s acknowledgement.
A follow-up question corresponds to a situation where the presenter has just finished answering a question from the audience, and a member of the audience asks another (follow-up) question. In this case, the audience member does not raise his or her hand but would not generally be expected to do so.
An unacknowledged interruption happens in one of two ways:
If the presenter is presenting (rather than answering a question), then we expect an audience member to raise a hand, and so an unacknowledged interruption is defined by the audience member speaking without first raising his or her hand, even if the presenter has completed a sentence or a section of the talk. Thus, this definition relies on the lack of audience member hand gesture and presenter acknowledgement.
If the presenter is answering a question, then an unacknowledged interruption is defined by an audience member speaking before the presenter has finished their answer. (In a few rare cases, an interruption arises from an audience member having a speaking overlap with another audience member). Our distinction between this case and the earlier definition of a follow-up question depends upon the contextual information about the presenter’s completion of an answer.
The motivation for these definitions is as follows. When the presenter is presenting, there is a presumption that an audience member should ask permission to speak, so politeness is defined by raising a hand. In that phase, an audience member can speak either by asking permission (raising their hand and getting acknowledged, which is considered polite) or by interrupting (starting to speak without raising their hand, which is considered impolite whether or not the presenter has just finished a sentence, or a section of the talk). This speaking without raising one’s hand is our first type of interrupting.
However, once the presenter has begun answering a question, the situation may be considered to have shifted into one more like conversational turn-taking, in which conversational politeness or lack thereof is defined by allowing the current speaker to complete their thought. Thus, in this phase, an audience member can speak either by waiting for the other person (presenter or other audience questioner) to finish his or her thought, in which case it is a follow-up question (which may be seen as questioning the presenter’s authority but is not conversationally impolite) or by interrupting (not letting the presenter finish their answer, which is considered impolite). This speaking with speech overlap while the presenter is giving an answer is our second type of interrupting. Once the presenter returns to presenting, the situation returns to one in which the audience member should raise their hand to get permission to speak. We combine the two types of interrupting into one category, since they both indicate lack of politeness.
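The decision rules above can be summarized as a small classifier. This is an illustrative sketch, not the study's actual coding instrument; the function and argument names are our own invention.

```python
def classify_audience_speech(phase, hand_raised, acknowledged,
                             answer_finished=True):
    """Classify an audience speech event per the definitions above.

    phase: "presenting" or "answering" (what the presenter is doing)
    hand_raised / acknowledged: audience gesture and presenter response
    answer_finished: whether the presenter completed the previous answer
                     (only meaningful when phase == "answering")
    """
    if phase == "presenting":
        # Politeness while presenting means raising a hand and being acknowledged.
        if hand_raised and acknowledged:
            return "acknowledged question"
        return "unacknowledged interruption"
    # While the presenter is answering, conversational turn-taking rules apply.
    if answer_finished:
        return "follow-up question"       # no hand-raise expected here
    return "unacknowledged interruption"  # overlap with the ongoing answer

print(classify_audience_speech("presenting", True, True))
print(classify_audience_speech("answering", False, False, answer_finished=False))
```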

3.3. Meanings of Zero Questions

Based on our own experience in similar departments, and in conversation with other engineering faculty, we are aware of three meanings of “zero questions”:
1. The talk is very clear, so no questions are needed.
2. The talk is way below the bar, so nobody bothers asking questions.
3. The departmental culture does not involve asking questions before the formal Q & A period.
We cannot adjudicate between meanings 1 and 2. It is likely that meaning 3, the departmental culture explanation, does not apply to the five departments in our study: in each department, candidates received questions during the pre-Q & A period in most of the talks (91% overall).8
Table 1 provides an example of the collected data. Presentation time begins at 1 min 22 s; the time prior to that is the introduction. This composite example illustrates the situations that the coder must recognize: presenting yielding to an acknowledged question at 11:26 (hand gesture, acknowledgement), a presenter transitioning from answering a question back into presenting at 11:47 (context), presenter getting interrupted at 15:40 (no acknowledgement), a follow-up question at 15:51 (context), and an answer getting interrupted at 16:09 (context).
Our dependent variables also include the total number of questions, which is the sum of acknowledged questions, unacknowledged interruptions, and follow-up questions during one presenter’s seminar. We also measure the audience time as the proportion of the pre-Q & A talk time taken up by audience members’ questions (audience time/total pre-Q & A time). As noted above, all of these dependent variables are indicators of interruptions in a broader sense, because all of them occur before the final segment of the seminar, officially designated as the Q & A period.
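As a rough sketch, these dependent variables can be computed from coded timestamps as follows; the event records and the pre-Q & A talk length below are hypothetical, not drawn from the study's data.

```python
# Hypothetical coded audience-speech events: (type, start_sec, end_sec).
events = [
    ("acknowledged_question", 686, 695),
    ("unacknowledged_interruption", 940, 947),
    ("follow_up_question", 951, 960),
]

# Total questions: acknowledged + unacknowledged + follow-ups.
total_questions = len(events)

# Audience time: share of the pre-Q & A talk taken up by audience speech.
pre_qa_seconds = 50 * 60  # assumed pre-Q & A talk length of 50 minutes
audience_seconds = sum(end - start for _, start, end in events)
audience_share = audience_seconds / pre_qa_seconds

print(total_questions, audience_seconds, round(audience_share, 4))  # 3 25 0.0083
```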

4. Descriptive Results

Table 2 presents descriptive statistics for the dependent variables, broken down by gender. The left column shows the average values for men (standard deviations in parentheses below), while the middle column shows the average values for women. The right column shows the differences between the two groups (with standard errors in parentheses below).
Table 2 shows that women, on average, are asked about 1.8 more follow-up questions and about three more total questions than men. Women are asked about 12% more total questions than men. A t-test on the difference in average number of questions between men and women is unlikely to yield valid inference here, because the dependent variable is either a count (number of questions) or a ratio (fraction of the talk). After presenting descriptive statistics on the explanatory variables, we address the choice of appropriate models for these analyses.

4.1. Explanatory Variables

Table 3 presents descriptive statistics for the explanatory variables, broken down by gender. Our focal predictor variable is gender. Our data set has several departmental indicators, including the proportion of the faculty who are women and the specific departments (Computer Science, Electrical Engineering, and Mechanical Engineering) across the two universities. We also record experience post-Ph.D. Almost a third of the presenters are ABDs with 0 years of experience. The highest end of the range is 21 years, with three observations over 12 years. To limit skew, we capped the high end at 12-plus years.9

4.2. Graphical Results

As a first step toward selecting an appropriate model, we show two overlapping conditional density histograms of the total number of questions, indicated in grey for men candidates and white for women candidates (Figure 1). This figure illustrates important patterns. One is the large number of questions (20 to 50) that some candidates face. Note that women experience more questions on average (see the vertical dashed line for the male average and the solid line for the female average). There are few talks with zero questions (11), and women are more likely to experience zero-question talks.
Next, Figure 2 presents a similar conditional histogram (grey for men, white for women), but the horizontal axis is the number of follow-up questions. Like Figure 1, Figure 2 shows that women have a higher average number of follow-ups than men (see the vertical dashed line for the male average and the solid line for the female average). Moreover, most of the talks with a large number (12 to 30) of follow-ups, on the right-hand side of the graph, have women presenters, indicated by the white bars.
These descriptive results provide preliminary answers to Research Question 1a: Women candidates receive more total questions and, among those, more follow-up questions than men candidates.

5. Multivariate Results

To more formally assess the results illustrated in the histograms, we need to choose which model to use. The dependent variables for this analysis are counts of the number of questions of different types received by each candidate. These counts are integer-valued and, as Figure 1 and Figure 2 show, non-negative and not normally distributed. To accurately model these data, therefore, we choose a count data method. The standard choices for modeling count data are a Poisson model, a negative binomial model, or a zero-inflated version of either of these models [55]. We prefer a zero-inflated negative binomial (ZINB) model for this analysis for empirical and theoretical reasons. Empirically, Table 2 shows that the variance of each dependent variable is high relative to its mean, indicating that the data are over-dispersed; this makes the negative binomial model more suitable than a Poisson model. Theoretically, departmental norms about whether to ask questions during a job talk mean that some talks are more likely to have zero questions, making a zero-inflated model more appropriate.10
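A minimal over-dispersion check of the kind that motivates the negative binomial choice might look like the following; the per-talk question counts here are hypothetical, purely for illustration.

```python
import statistics

# Hypothetical per-talk question counts (illustrative only, not study data).
counts = [0, 3, 5, 8, 12, 15, 20, 22, 27, 35, 41, 48]

mean = statistics.mean(counts)
variance = statistics.pvariance(counts)

# A Poisson model assumes variance ~= mean; a variance well above the mean
# indicates over-dispersion, favoring a negative binomial model.
print(f"mean={mean:.1f} variance={variance:.1f} ratio={variance/mean:.1f}")
```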
The ZINB model simultaneously fits two models. One model estimates the probability of observing zero questions. Since there are two possible states—zero questions or positive questions—this model uses a logit regression. The other model estimates the number of questions, conditional on the candidate receiving at least one question. This model operates like a typical negative binomial regression. Together, these models account for both the excess number of zero observations and for the positive-value count data. In the results shown below, the model for zero-question observations is shown in the bottom panel of the table, and the model for positive values is shown in the top panel.
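The two-part structure described above can be made concrete with a small sketch of the ZINB probability mass function. The mixing probability `pi_zero`, mean `mu`, and dispersion `alpha` below are illustrative values, not estimates from the paper.

```python
import math

def nb_pmf(k, mu, alpha):
    # Negative binomial (NB2) pmf with mean mu and dispersion alpha,
    # computed in log space to avoid overflow for large k.
    r = 1.0 / alpha
    p = r / (r + mu)
    log_coef = math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
    return math.exp(log_coef + r * math.log(p) + k * math.log(1.0 - p))

def zinb_pmf(k, pi_zero, mu, alpha):
    # Zero-inflated NB: a logit part contributes extra mass at zero,
    # and the NB part governs the counts.
    base = (1.0 - pi_zero) * nb_pmf(k, mu, alpha)
    return pi_zero + base if k == 0 else base

pi_zero, mu, alpha = 0.10, 15.0, 0.5  # illustrative parameters
probs = [zinb_pmf(k, pi_zero, mu, alpha) for k in range(400)]
print(f"P(0) = {probs[0]:.3f}")
```

The zero-inflation raises the probability of a zero-question talk above what the negative binomial part alone would imply, which is what accounts for the excess zeros.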
We now estimate ZINB models to address our first research question: do women get more questions than men during the job talk? Table 4 shows that the answer is yes, in part. We focus on the top panel of the table, the model for positive values. For each row, the table lists the coefficient from the ZINB model. Below that is the exponentiated coefficient, and below that is the standard error in parentheses.11 Consistent with Figure 1 and Figure 2, the top panel of Table 4 shows that women face more total questions and more follow-up questions than men. Specifically, the coefficient for female is statistically significant in the models predicting the number of follow-up questions (model 3) and the number of total questions (model 4), controlling for the percent of departmental faculty who are women.12 However, there is no gender difference in the number of unacknowledged interruptions.
Taking the exponential of the coefficients, shown below each ZINB coefficient in Table 4, aids interpretation. For the positive-value model, each exponentiated coefficient represents the factor by which the expected number of questions changes when that variable increases by one.13 Since the female coefficient for the number of follow-up questions (model 3) is 0.35, exp(0.35) = 1.4, indicating that women receive about 1.4 times as many follow-up questions as men.14 Similarly, since the female coefficient for total questions (model 4) is 0.22, exp(0.22) = 1.2, indicating that women receive about 1.2 times as many total questions as men, on average, conditional on receiving more than zero questions. The exponential of the intercept shows that, conditional on being asked at least one question, the average male candidate would receive about 30 total questions from a hypothetical department composed entirely of male faculty. Under this condition, women receive about 1.2 × 30 = 36 total questions, or six more than men, on average.
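The arithmetic in this paragraph can be reproduced directly; the coefficients and the baseline of roughly 30 questions come from Table 4, and the rounding follows the text.

```python
import math

# Female coefficients from the positive-value panel of Table 4.
beta_followups = 0.35  # model 3: number of follow-up questions
beta_total = 0.22      # model 4: total questions

factor_followups = math.exp(beta_followups)  # about 1.4
factor_total = math.exp(beta_total)          # about 1.2

# Exponentiated intercept: roughly 30 total questions for the average
# male candidate, conditional on at least one question being asked.
baseline_total = 30
expected_female_total = factor_total * baseline_total  # the text rounds 1.2 x 30 = 36

print(round(factor_followups, 1), round(factor_total, 1))
```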
Table 4 also answers Research Question 2. Departments with a larger proportion of the faculty who are women pose fewer interruptions, acknowledged questions, follow-up questions, and, of course, total questions than departments with a smaller share of women faculty.
We now address Research Question 1b: whether women candidates, compared to men, generally find that a higher share of total talk time is taken up by the audience. Similar to count data, ratios are best handled by a specialized nonlinear estimation strategy. The standard practice is to use a binomial family estimator with a logit or probit link [56]. Table 5 presents results of a binomial estimator with a logit link.15
The binomial model in Table 5 performs similarly to an OLS model (results not shown). Its exponentiated coefficients can be interpreted as multiplicative changes: for female candidates, about 1.3 times as much time is taken up by questions. The summary statistics in Table 2 show that the audience takes up 4.3% of total talk time, on average. The model results therefore indicate that roughly 5% of an average talk by a woman candidate is taken up by the audience, versus 3.8% of an average talk by a man.
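A hedged sketch of how the predicted shares can be recovered: apply the inverse logit to the Table 5 coefficients (constant −2.63, female 0.26, proportion female faculty −5.74), evaluated at the sample mean proportion of women faculty (0.11, Table 3). Small discrepancies with the figures in the text reflect rounding.

```python
import math

def inv_logit(z):
    # Inverse of the logit link: maps a linear predictor to a proportion.
    return 1.0 / (1.0 + math.exp(-z))

const = -2.63                 # Table 5 constant
b_female = 0.26               # Table 5 female coefficient
b_prop_female = -5.74         # Table 5 proportion-female-faculty coefficient
prop_female_faculty = 0.11    # sample mean, Table 3

share_male = inv_logit(const + b_prop_female * prop_female_faculty)
share_female = inv_logit(const + b_female + b_prop_female * prop_female_faculty)
print(f"audience share, male: {share_male:.3f}; female: {share_female:.3f}")
```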
We now turn to Research Question 3: Do junior candidates experience more questions than senior candidates? The ZINB models in Table 6 examine whether the presenter’s professional experience reduces questioning. Here, we focus on the number of follow-up questions as the dependent variable.
Model 2 shows a modest, yet statistically significant, decline in the number of follow-up questions candidates receive if they have more experience. However, women still face more follow-up questions than men after controlling for years since Ph.D. In Model 3, the interaction term of woman candidate times experience is not statistically significant. In other words, having more experience does not differentially help women candidates. Men and women with more experience receive fewer questions than men and women with less experience, respectively, and this negative effect of years since Ph.D. on the number of follow-up questions is the same for men and women.
The data presented so far do not indicate whether having more questions helps or hurts a candidate. We do not have measures of job offers. However, while coding the video recordings, we noted qualitatively when candidates’ verbal cues clearly indicated that they were rushing to get through their carefully prepared slide decks and reach the punch line of their talk. Example statements that indicate rushing include “For the sake of time, I’m going to skip this part”, “There’s not much time left; I will rush through this”, “I’m going to skip to the end”, “I’m going really quick here because I want to get to the second part of the talk”, and “We’re running out of time so I’m not going into the details”. We find that rushing, as indicated by these cues, is correlated with the number of total questions (Pearson correlation coefficient 0.22) and with the number of follow-ups (Pearson coefficient 0.19). This suggests that facing many questions may prevent candidates from delivering all their prepared content and may rush them through the key sections often placed at the end (summary of results, impact of results, future work).
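The correlations reported here use the standard Pearson product-moment formula, which can be computed as follows. The data below are hypothetical, for illustration only.

```python
import math

def pearson(xs, ys):
    # Pearson product-moment correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical coding: rushing cue present (1) or absent (0) per talk,
# against the total number of questions in that talk.
rushing = [0, 0, 1, 0, 1, 1, 0, 1]
total_questions = [5, 8, 20, 10, 30, 25, 12, 18]
print(round(pearson(rushing, total_questions), 2))
```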

6. Discussion

Our analyses shed light on a key set of interactional processes linked to the persistent under-representation of women faculty in academic engineering departments. Women academics who have made it to the short list in competitive academic job searches in top departments face more follow-up questions and more total questions during their job talks than men do, on average, even after controlling for years of experience post-Ph.D. Under the condition of at least one question being asked during the talk, women receive six more questions than men do, on average. Further, a higher proportion of women’s talk time is spent on audience members’ speech. This means that, generally, women have less time to present their prepared talk and slides.
The larger number of questions women receive on average is mostly driven by the larger number of follow-up questions. These are questions piled onto previous questions and thus may indicate a challenge to the presenter’s competence, not only in their prepared talk but also in their responses to questions. Consistent with research on greater scrutiny and stricter standards for higher prizes in masculine-typed occupations and on “prove it again” bias, we find a Catch-22 for women. Even short-listed women with impressive CVs may still be assumed to be less competent, may be challenged, sometimes excessively, and may therefore have less time to present a coherent and compelling talk.
We have revealed subtle conversational patterns of which most engineering faculty are likely unaware. It is a form of almost invisible bias, which allows a climate of challenging women’s competence to persist. These patterns may be linked to the small numbers of women faculty hired into these departments. Indeed, departments with a larger share of women faculty tend to ask fewer questions of all candidates (women and men), take up less of their time in audience speech, and thereby give candidates more time to complete their presentations.

6.1. Policy Recommendations

Our data set shows that a few candidates, both women and men, receive a very large number of questions, in the range of 30 to 50. In some cases, a presenter rushes through slides at the end, or decides to skip a large number of slides. In other cases, the talk runs over by 15–20 min, and the audience dwindles. It may be advisable for each talk to have a facilitator, perhaps a senior faculty member who introduces the presenter, who will pay attention to the number and also the tone of questions being asked. If the number of questions becomes large and especially if the tone seems hostile or the presenter seems to be rushing, the facilitator could ask the audience to hold their remaining questions for the Q & A session at the end. Sometimes presenters may make this request themselves, but it may be difficult for a young ABD candidate to make this request to an audience of senior faculty. If there is no assigned facilitator, it may be appropriate for a senior faculty member in the audience to make this request.
When the suggestion of having a facilitator stop questions was made in one department, a faculty member protested that if he did not ask his questions as the talk went along, he would not understand the subsequent material, and the remainder of the talk would be useless. While this is a legitimate argument, his preference to ask multiple questions should be balanced against the preferences of others in the audience who may be fully understanding the talk and would be better served by having the presenter complete the material.
It would also be helpful for young faculty applicants to be aware that there are large differences in university and departmental culture, so that they are prepared for this. For the five engineering departments in this study, only 9% of talks had zero questions. In contrast, in the Biomedical Engineering Department that was excluded from the study, 81% of talks had zero questions. Candidates in interdisciplinary sub-fields, especially, may be surprised if they face a mixed audience with differing cultures in this regard. Applicants should also know that some talks get derailed by questions, and it is an acceptable option for the presenter to ask the audience to hold remaining questions for the Q & A session at the end. We encourage advisors and mentors to share this knowledge with their graduate students and postdoctoral fellows.

6.2. Limitations

Case studies are, by design, not necessarily representative of other organizations. Our analysis of job talk video recordings is pioneering. However, the data have a number of limitations. We were limited to the departments which had archival video recordings. We constructed a theoretical framework from well-established literature on the unequal treatment by gender in terms of competence and hirability evaluations and the likelihood of being interrupted. Future research should adapt these insights to the study of the effects of candidate race. Moreover, the nature of our access to the archival video recordings precluded us from measuring which candidates were later voted by departmental faculty as worthy of receiving job offers. Note that even if it had been possible for us to investigate job offers in our data, defining this outcome would be problematic. Some top candidates may not receive a formal offer if they have already received—and potentially accepted—offers from other departments further ahead in their recruitment process. We encourage future researchers to investigate these issues in other research-oriented STEM departments.

7. Conclusions

This study analyzed video recordings of job talks in five engineering departments. We found that, compared to men, women with similar years of experience receive more follow-up questions and more total questions and spend less time on their prepared talk. These subtle differences in how women and men candidates are treated persist, likely outside the conscious awareness of hiring departments. More broadly, we assess how gender barriers emerge within the context of actual work units and vary depending on social structural features of the work units. For example, we found that there are more audience interruptions in departments with a smaller proportion of women. We urge future researchers to examine the connections between the number of questions posed at the job talk and actual job offers extended to candidates. Since these patterns operate under the radar, they are not seen to contradict the broader cultural belief that academic science is a meritocracy, in which the best scientific ideas are objectively assessed and rewarded [57,58].


Acknowledgments

We thank Jeff Shrader for statistical assistance; Benjamin Cosman for data management; David Gibson for initial theoretical conversations; and Maria Charles, Sarah Thébaud, and the anonymous reviewers for helpful comments.

Author Contributions

Pamela Cosman conceived the study design; Pamela Cosman and Mary Blair-Loy jointly supervised this project; Daniela Glaser and Anne Wong reviewed literature and developed the coding protocol for videos; Daniela Glaser, Anne Wong, and Danielle Abraham coded the videos; Laura Rogers analyzed the data with supervision from Mary Blair-Loy and assistance from a statistical consultant; Mary Blair-Loy was the primary writer of most sections of the paper; Pamela Cosman was the primary writer of the sections on implications for policy and on the review and application of the interruptions literature.

Conflicts of Interest

The authors declare no conflict of interest.

References


  1. Jacob Clark Blickenstaff. “Women and Science Careers: Leaky Pipeline or Gender Filter?” Gender and Education 17 (2005): 369–86. [Google Scholar] [CrossRef]
  2. National Academy of Sciences Committee. Who Will Do the Science in the Future? A Symposium on Careers of Women in Science. Edited by National Academy of Sciences Committee on Women in Science and Engineering. Washington: National Academy Press, 2000. [Google Scholar]
  3. National Academy of Science Committee. Beyond Bias and Barriers: Fulfilling the Potential of Women in Academic Science and Engineering. Edited by National Academy of Science Committee on Maximizing the Potential of Women in Academic Science and Engineering, National Academy of Engineering and Institute of Medicine. Washington: National Academies Press, 2007. [Google Scholar]
  4. Melanie C. Page, Lucy E. Bailey, and Jean Van Delinder. “The Blue Blazer Club: Masculine Hegemony in Science, Technology, Engineering, and Math Fields.” Forum on Public Policy 2009 (2009): 1–23. [Google Scholar]
  5. Jeffrey J. Kuenzi. Science, Technology, Engineering, and Mathematics (STEM) Education: Background, Federal Policy, and Legislative Action. Congressional Research Service Reports Paper 35; Washington: Congressional Research Service, 2008. [Google Scholar]
  6. Domestic Policy Council. American Competitiveness Initiative. Washington: Office of Science and Technology Policy, 2006. [Google Scholar]
  7. Presidents Council of Advisors on Science and Technology. Engage to Excel: Producing One Million Additional College Graduates with Degrees in Science, Technology, Engineering, and Mathematics. Washington: Executive Office of the President, 2012. [Google Scholar]
  8. Donna K. Ginther, and Shulamit Kahn. “Education and Academic Career Outcomes for Women of Color in Science and Engineering.” Washington, DC, USA: the Women in Science, Engineering, and Medicine, 8 October 2012. [Google Scholar]
  9. National Science Foundation. “Women, Minorities and Persons with Disabilities in Science and Engineering.” 2008. Available online: (accessed on 8 July 2016). [Google Scholar]
  10. National Science Foundation. “Women, Minorities and Persons with Disabilities in Science and Engineering.” 2012. Available online: (accessed on 8 July 2016). [Google Scholar]
  11. Martha Foschi. “Double Standards for Competence: Theory and Research.” Annual Review of Sociology 26 (2000): 21–42. [Google Scholar] [CrossRef]
  12. Martha Foschi. “Double Standards in the Evaluation of Men and Women.” Social Psychology Quarterly 59 (1996): 237–54. [Google Scholar] [CrossRef]
  13. Monica Biernat, and Diane Kobrynowicz. “Gender- and Race-Based Standards of Competence: Lower Minimum Standards but Higher Ability Standards for Devalued Groups.” Journal of Personality and Social Psychology 72 (1997): 544–57. [Google Scholar] [CrossRef] [PubMed]
  14. Martha Foschi. “Status Characteristics, Standards, and Attributions.” In Sociological Theories in Progress: New Formulations. Edited by Joseph Berger, Morris Zelditch Jr. and Bo Anderson. Newbury Park: SAGE, 1989. [Google Scholar]
  15. Wendy M. Williams, and Stephen J. Ceci. “National Hiring Experiments Reveal 2:1 Faculty Preference for Women on STEM Tenure Track.” Proceedings of the National Academy of Sciences of the United States of America 112 (2015): 5360–65. [Google Scholar] [CrossRef] [PubMed]
  16. Cecilia L. Ridgeway. Framed by Gender: How Gender Inequality Persists in the Modern World. Oxford: Oxford University Press, 2011. [Google Scholar]
  17. Amy J. C. Cuddy, Susan T. Fiske, and Peter Glick. “When Professionals Become Mothers, Warmth Doesn’t Cut the Ice.” Journal of Social Issues 60 (2004): 701–18. [Google Scholar] [CrossRef]
  18. Joan Williams, and Rachel Dempsey. What Works for Women at Work: Four Patterns Working Women Need to Know. New York: New York University Press, 2014. [Google Scholar]
  19. Cecilia L. Ridgeway, and Lynn Smith-Lovin. “The Gender System and Interaction.” Annual Review of Sociology 25 (1999): 191–216. [Google Scholar] [CrossRef]
  20. Linda L. Carli. “Gender and Social Influence.” Journal of Social Issues 57 (2001): 725–41. [Google Scholar] [CrossRef]
  21. Abigail Powell, Barbara Bagihole, and Andrew Dainty. “How Women Engineers Do and Undo Gender: Consequences for Gender Equality.” Gender, Work and Organizations 16 (2009): 411–28. [Google Scholar] [CrossRef]
  22. Julia Evetts. “Managing the Technology but Not the Organization: Women and Career in Engineering.” Women in Management Review 13 (1998): 283–90. [Google Scholar] [CrossRef]
  23. Sarah-Jane Leslie, Andrei Cimpian, Meredith Meyer, and Edward Freeland. “Expectations of Brilliance Underlie Gender Distributions across Academic Disciplines.” Science 347 (2015): 262–65. [Google Scholar] [CrossRef] [PubMed]
  24. Christine Wenneras, and Agnes Wold. “Nepotism and Sexism in Peer-Review.” Nature 387 (1997): 341–43. [Google Scholar] [CrossRef] [PubMed]
  25. Rhea E. Steinpreis, Katie A. Anders, and Dawn Ritzke. “The Impact of Gender on the Review of the Curricula Vitae of Job Applicants and Tenure Candidates: A National Empirical Study.” Sex Roles 41 (1999): 509–28. [Google Scholar] [CrossRef]
  26. Corinne A. Moss-Racusin, John F. Dovidio, Victoria L. Brescoll, Mark J. Graham, and Jo Handelsman. “Science Faculty’s Subtle Gender Biases Favor Male Students.” Proceedings of the National Academy of Sciences of the United States of America 109 (2012): 16474–79. [Google Scholar] [CrossRef] [PubMed]
  27. Martha Foschi, Larissa Lai, and Kirsten Sigerson. “Gender and Double Standards in the Assessment of Job Applicants.” Social Psychology Quarterly 57 (1994): 326–39. [Google Scholar] [CrossRef]
  28. Monica Biernat, and Kathleen Fuegen. “Shifting Standards and the Evaluation of Competence: Complexity in Gender-Based Judgment and Decision Making.” Journal of Social Issues 57 (2001): 707–24. [Google Scholar] [CrossRef]
  29. Monica Biernat, Christian S. Crandall, Lissa V. Young, Diane Kobrynowicz, and Stanley M. Halpin. “All That You Can Be: Stereotyping of Self and Others in a Military Context.” Journal of Personality and Social Psychology 75 (1998): 301–17. [Google Scholar] [CrossRef] [PubMed]
  30. Mary Blair-Loy, and Amy S. Wharton. “Employee’s Use of Work-Family Policies and the Workplace Social Context.” Social Forces 80 (2002): 813–45. [Google Scholar] [CrossRef]
  31. Don H. Zimmerman, and Candace West. “Sex Roles, Interruptions and Silences in Conversation.” Amsterdam Studies in the Theory and History of Linguistic Science Series 4 (1975): 211–36. [Google Scholar]
  32. Lynn Smith-Lovin, and Charles Brody. “Interruptions in Group Discussions: The Effects of Gender and Group Composition.” American Sociological Review 54 (1989): 424–53. [Google Scholar] [CrossRef]
  33. Dawn T. Robinson, and Lynn Smith-Lovin. “Timing of Interruptions in Group Discussions.” Advances in Group Processes 7 (1990): 45–73. [Google Scholar]
  34. Julie Irish, and Judith A. Hall. “Interruptive Patterns in Medical Visits: The Effects of Role, Status and Gender.” Social Science & Medicine 41 (1995): 873–81. [Google Scholar] [CrossRef]
  35. Cathryn Johnson. “Gender, Legitimate Authority, and Leader-Subordinate Conversations.” American Sociological Review 59 (1994): 122–35. [Google Scholar] [CrossRef]
  36. Carol W. Kennedy, and Carl T. Camden. “A New Look at Interruptions.” Western Journal of Speech Communication 47 (1983): 45–58. [Google Scholar] [CrossRef]
  37. Kristin J. Anderson, and Campbell Leaper. “Meta-Analyses of Gender Effects on Conversation Interruption: Who, What, When, Where, and How.” Sex Roles 39 (1998): 225–52. [Google Scholar] [CrossRef]
  38. Xiaoquan Zhao, and Walter Gantz. “Disruptive and Cooperative Interruptions in Prime-Time Television Fiction: The Role of Gender, Status, and Topic.” Journal of Communication 53 (2003): 347–62. [Google Scholar] [CrossRef]
  39. Deborah James, and Sandra Clarke. “Women, Men, and Interruptions: A Critical Review.” In Gender and Conversational Interaction. Edited by Deborah Tannen. New York: Oxford University Press, 1993, pp. 231–80. [Google Scholar]
  40. Matt L. Huffman, Philip N. Cohen, and Jessica Pearlman. “Engendering Change: Organizational Dynamics and Workplace Gender Desegregation, 1975–2005.” Administrative Science Quarterly 55 (2010): 255–77. [Google Scholar] [CrossRef]
  41. Robin J. Ely. “The Effects of Organizational Demographics and Social Identity on Relationships among Professional Women.” Administrative Science Quarterly 39 (1994): 203–38. [Google Scholar] [CrossRef]
  42. Charles C. Ragin. The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. Berkeley: University of California Press, 1987. [Google Scholar]
  43. Michele Lamont, and Patricia White. “Workshop on Interdisciplinary Standards for Systematic Qualitative Research.” Arlington, VA, USA: National Science Foundation Workshop 2005, 19–20 May 2005. [Google Scholar]
  44. Mary Blair-Loy, and Erin A. Cech. “Demands & Devotion: Cultural Meanings of (over)Work among Women in Science and Technology Industries.” Sociological Forum 32 (2017): 5–27. [Google Scholar]
  45. Elizabeth E. Armstrong, and Laura T. Hamilton. Paying for the Party: How College Maintains Inequality. Cambridge: Harvard University Press, 2013. [Google Scholar]
  46. Arlie Russell Hochschild. The Time Bind: When Work Becomes Home and Home Becomes Work. New York: Metropolitan Books, 1997. [Google Scholar]
  47. Emilio Castilla. “Gender, Race, and Meritocracy in Organizational Careers.” American Journal of Sociology 113 (2008): 1479–526. [Google Scholar] [CrossRef]
  48. Erin A. Cech, and Tom Waidzunas. “Navigating the Heteronormativity of Engineering: The Experience of Lesbian, Gay, and Bisexual Students.” Engineering Studies 3 (2011): 1–24. [Google Scholar] [CrossRef]
  49. Candace West, and Don H. Zimmerman. “Small Insults: A Study of Interruptions in Cross-Sex Conversations with Unacquainted Persons.” In Language, Gender and Society. Edited by Barrie Thorne, Cheris Kramarae and Nancy Henley. Rowley: Newbury House, 1983, pp. 102–17. [Google Scholar]
  50. David Gibson. “Opportunistic Interruptions: Interactional Vulnerabilities Deriving from Linearization.” Social Psychology Quarterly 68 (2005): 316–37. [Google Scholar] [CrossRef]
  51. Dina G. Okamoto, Lisa Slattery Rashotte, and Lynn Smith-Lovin. “Measuring Interruption: Syntactic and Contextual Methods of Coding Conversation.” Social Psychology Quarterly 65 (2002): 38–55. [Google Scholar] [CrossRef]
  52. Marie-Noelle Guillot. “Revisiting the Methodological Debate on Interruptions: From Measurement to Classification in the Annotation of Data for Cross-Cultural Research.” Pragmatics 15 (2005): 25–47. [Google Scholar] [CrossRef]
  53. Richard E. Petty, and Timothy C. Brock. “Effects of Responding or Not Responding to Hecklers on Audience Agreement with a Speaker.” Journal of Applied Social Psychology 6 (1976): 1–17. [Google Scholar] [CrossRef]
  54. Robert K. Tiemens, Malcolm O. Sillars, Dennis C. Alexander, and David Werling. “Television Coverage of Jesse Jackson’s Speech to the 1984 Democratic National Convention.” Journal of Broadcasting & Electronic Media 32 (1988): 1–22. [Google Scholar] [CrossRef]
  55. A. Colin Cameron, and Pravin K. Trivedi. “Essentials of Count Data Regression.” In A Companion to Theoretical Econometrics. Edited by Badi H. Baltagi. Malden: Blackwell Publishing, Ltd., 2001, p. 331. [Google Scholar]
  56. Leslie E. Papke, and Jeffrey M. Wooldridge. “Econometric Methods for Fractional Response Variables with an Application to 401(K) Plan Participation Rates.” Journal of Applied Econometrics 11 (1996): 619–32. [Google Scholar] [CrossRef]
  57. J. Scott Long, and Mary Frank Fox. “Scientific Careers: Universalism and Particularism.” Annual Review of Sociology 21 (1995): 45–71. [Google Scholar] [CrossRef]
  58. Michele Lamont. How Professors Think: Inside the Curious World of Academic Judgment. Cambridge: President and Fellows of Harvard University, 2009. [Google Scholar]
  • 1Each university is an elite research-focused institution, ranked as a “Highest Research Activity” university in the Carnegie Classification of Institutions of Higher Education and has an engineering school ranked among the top 50.
  • 2Foschi’s ([11], p. 31) review of experimental research shows “substantial support” for these predictions.
  • 3A third, weaker candidate was added as a foil.
  • 4Williams and Ceci [15] supplemented the study of narratives (they received 711 evaluations) with “control studies” on small groups of hypothetical CVs. In the male-dominated field of engineering, they sent out the hypothetical applicant CVs to only 35 faculty.
  • 5Many high impact studies of social inequality have a case study design. These include Armstrong and Hamilton [45], Hochschild [46], Castilla [47], Cech and Waidzunas [48], and Blair-Loy and Wharton [30].
  • 6Following common usage at the university level in the United States, we use the term “department” to mean an academic unit devoted to one academic discipline, typically led by a Chair. The terms “division” and “school”, led by a Dean, are used synonymously, as in “engineering school”, to mean the set of engineering departments within a university.
  • 7We did not include the total population of archived videos, due to the time and expense involved in the coding of each video.
  • 8In contrast, one department we had originally considered—Biomedical Engineering—had zero questions during the pre Q & A period in 81% of the talks, indicating a departmental culture of few to no questions. Our research questions entail understanding how gender may affect audience responses to the talk and affect the amount of time the presenter has to conclude the presentation. We therefore excluded the Biomedical Engineering Department from analysis.
  • 9Capping or not capping the experience variable at 12 years did not affect the substance or statistical significance of results.
  • 10In addition to the reasons for selecting the ZINB model given above, a statistical model selection procedure can also guide model choice. The software program Stata has a user-written routine, countfit, which provides diagnostics on which models to use. The models fit the dependent variable to the exponential of the right-hand-side variables, thus constraining the predictions to be positive. For the zero-inflated models, we also specify a separate model for zeros to try to explain why some observations are zero. The decision of whether to use a Poisson or negative binomial model is based on the mean of the dependent variable relative to its variance, after taking into account control variables. The Poisson model assumes that the variance of the dependent variable is equal to the mean. Table 2 suggests that this is not true, so we should expect to prefer a negative binomial model. We fit the model predictions to the actual data at different levels of the dependent variable (results available upon request). These diagnostics indicate that, for zero questions, both the Poisson (PRM) and negative binomial (NBRM) models are highly inaccurate, while both zero-inflated models perform well at zero, by construction. For positive numbers of questions, the Poisson and zero-inflated negative binomial (ZINB) models are the most accurate. Based on these fit diagnostics, the ZINB model is preferred.
  • 11Because of limited variation, including the control variable university in the Table 4 model leads to numerical convergence issues for the acknowledged and follow-up question models. Therefore we exclude university from the model for positive values. We exclude university for the same reason in Table 6, below.
  • 12In separate models (not shown), we substituted percent departmental faculty who are women with dummy variables for department (with CS as the excluded reference department). The results were substantively the same, with the same pattern of statistically significant coefficients for women candidates receiving more follow-up and more total questions. For numerical reasons, we have also chosen to exclude the university control variable from the model for positive values.
  • 13In cases where the variable is binary, the exponentiated coefficient has an interpretation very similar to the predicted value; it gives the relative increase or decrease in the dependent variable that results from being part of the group indicated by the dummy variable (female).
  • 14The interpretation of the coefficients for the positive values is similar to a log-linear model, so all coefficient values can also be read as approximate percent changes. This approximation is accurate for values less than about 0.1. For exact percent changes, take the coefficient, exponentiate, and subtract 1.
  • 15We found virtually identical results for the effect of female when department dummies (with CS as the excluded reference category) were substituted for percent of the faculty who are women. Results not shown.
Figure 1. Total Number of Questions by Gender.
Figure 2. Number of Follow-ups by Gender.
Table 1. Example of Raw Data.
Female, Ph.D. + 4 Years | Start | End | Duration
Acknowledged Question | 0:11:26 | 0:11:33 | 00:07
Unacknowledged Interruption | 0:15:40 | 0:15:44 | 00:04
Follow-up Question | 0:15:51 | 0:15:54 | 00:03
Unacknowledged Interruption | 0:16:09 | 0:16:11 | 00:02
Table 2. Descriptive Statistics: Dependent Variables.
Dependent Variables | Men | Women | Diff./(SE)
Unacknowledged interruptions | 3.77 | 4.95 | −1.18
Acknowledged questions | 5.49 | 5.39 | 0.097
Follow-up questions | 4.83 | 6.66 | −1.83
Total questions | 14.1 | 17 | −2.91
Audience time proportion | 0.038 | 0.050 | −0.012
N talks | 78 | 41 |
Table 3. Descriptive Statistics: Explanatory Variables.
Explanatory Variables | Men | Women
Years since Ph.D. (mean/SD) | 3.12 | 3.17
Proportion female faculty in department (mean/SD) | 0.11 | 0.11
University 1 (frequency, %) | 60 (77%) | 32 (78%)
University 2 (frequency, %) | 18 (23%) | 9 (22%)
CS (frequency, %) | 43 (55%) | 21 (51%)
EE (frequency, %) | 32 (41%) | 18 (44%)
ME (frequency, %) | 3 (4%) | 2 (5%)
N talks | 78 | 41
Table 4. ZINB Models Predicting Questions (all dependent variables).

|                           | (1) Num. Interruptions | (2) Num. Acknowledged | (3) Num. Follow-Ups | (4) Total Questions |
|---------------------------|------------------------|-----------------------|---------------------|---------------------|
| Model for positive values |                        |                       |                     |                     |
| Female                    | 0.26                   | −0.011                | 0.35 **             | 0.22 *              |
| Proportion female faculty | −8.83 ***              | −2.59 *               | −7.44 ***           | −7.19 ***           |
| Constant                  | 2.32 ***               | 2.07 ***              | 2.36 ***            | 3.41 ***            |
| Model for zeros           |                        |                       |                     |                     |
| Female                    | −0.52                  | 0.18                  | 1.84 *              | 0.75                |
| Proportion female faculty | 56.3 ***               | −34.7 ***             | 28.8                | −30.4 ***           |
| (exponentiated)           | 2.8 × 10^24            | 0.00                  | 3.2 × 10^12         | 0.00                |
| University 1              | 2.24                   | −6.89 ***             | −18.7 ***           | −6.70 ***           |
| Constant                  | −10.6 ***              | 5.52 **               | −5.94               | 4.47 **             |
| ln(alpha)                 | −0.22                  | −1.11 ***             | −0.70 ***           | −0.98 ***           |
| N talks                   | 119                    | 119                   | 119                 | 119                 |

Notes: Columns 1 through 4 show coefficients from zero-inflated negative binomial models. Significance is indicated by * p < 0.10, ** p < 0.05, *** p < 0.01. Robust standard errors are shown in parentheses. Below each coefficient (above the standard error) is the exponentiated value of that estimate, which can be interpreted as the factor by which the expected number of questions changes with a one-unit change in the independent variable (top panel) and the factor by which the odds of receiving no questions change (bottom panel).
Table 5. Binomial Model Predicting Audience Time.

| Variables Predicting Audience Time | Binomial  |
|------------------------------------|-----------|
| Female                             | 0.26 *    |
| Proportion female faculty          | −5.74 *** |
| Constant                           | −2.63 *** |
| N talks                            | 119       |

Notes: Binomial models use a logit link. Significance is indicated by * p < 0.10, ** p < 0.05, *** p < 0.01. Robust standard errors are shown in parentheses.
Table 6. ZINB Models Using Gender and Experience to Predict Number of Follow-up Questions.

|                            | (1) Num. Follow-Ups | (2) Num. Follow-Ups | (3) Num. Follow-Ups |
|----------------------------|---------------------|---------------------|---------------------|
| Model for positive values  |                     |                     |                     |
| Female                     | 0.35 **             | 0.35 **             | 0.45 **             |
| Proportion female faculty  | −7.44 ***           | −7.70 ***           | −7.38 ***           |
| Years since Ph.D.          |                     | −0.041 *            |                     |
| Years since Ph.D. × female |                     |                     | −0.036              |
| Constant                   | 2.36 ***            | 2.50 ***            | 2.35 ***            |
| Model for zeros            |                     |                     |                     |
| Female                     | 1.84 *              | 1.85 *              | 1.84 *              |
| Proportion female faculty  | 28.8                | 30.0                | 30.6                |
| (exponentiated)            | 3.2 × 10^12         | 1.1 × 10^13         | 1.95 × 10^13        |
| University 1               | −18.7 ***           | −18.4 ***           | −18.6 ***           |
| (exponentiated)            | 7.6 × 10^−9         | 1.02 × 10^−8        | 8.4 × 10^−9         |
| ln(alpha)                  | −0.70 ***           | −0.74 ***           | −0.71 ***           |
| N talks                    | 119                 | 119                 | 119                 |

Notes: Columns 1 through 3 show coefficients from zero-inflated negative binomial models. Significance is indicated by * p < 0.10, ** p < 0.05, *** p < 0.01. Below each coefficient, above each standard error, is the exponentiated value of that estimate, which can be interpreted as the factor by which the expected number of questions changes with a one-unit change in the independent variable (top panel) and the factor by which the odds of receiving no questions change (bottom panel). Robust standard errors are shown in parentheses.
Soc. Sci. EISSN 2076-0760, published by MDPI AG, Basel, Switzerland.