The Impact of Pressures to Produce on Knowledge Production and Evaluation in the Modern Academy

Abstract
Combining work from the related but distinct fields of sociology of knowledge and sociology of education, we explore the effects of the changing landscape of higher education on the academic knowledge production system. Drawing on 100 interviews with faculty members from 34 disciplines at an elite private research university, we show that faculty members perceive exponentially increasing pressures to produce, and identify the ways that those pressures can negatively impact the knowledge creation process. We then examine the ways those pressures to produce influence how faculty evaluate their colleagues' work, leading faculty to extend the benefit of the doubt, rely on reputation, and emphasize the peer review process, even as they simultaneously critique its weaknesses. Finally, we show that faculty members ultimately reconcile their perceptions of weaknesses in the current knowledge production system with their belief in that system by emphasizing their own and their colleagues' commitment to resisting structural pressures to produce. While much of the existing body of scholarship on the changing higher education landscape has focused on teaching and learning outcomes, this study contributes to our understanding of how those changes impact the research process, underscoring the relationship between institutional structures and evaluative processes.


Introduction
The sociology of education and the sociology of knowledge, related but distinct fields with surprisingly little overlap (Hermanowicz 2012), both examine how information is created and disseminated. The sociology of education focuses on how organizations, institutions, policies, and actors transmit knowledge (Durkheim 1956; Saha 2015; Trent et al. 1985). The sociology of knowledge focuses on the processes through which knowledge itself is created, evaluated, and legitimated (Berger and Luckmann 1967; Kuklick 1983; Swidler and Arditi 1994).
In the sociology of education, the sociology of higher education focuses specifically on exploring educational processes, policies, and experiences in the context of colleges and universities (Clark 1973; Deem 2004; Gumport 2007; Stevens et al. 2008). In recent years, sociologists of higher education have paid increasing attention to the changing nature of the modern academy (e.g., Ginsberg 2011; Neumann 2009; Tuchman 2009, 2011). They have identified several key changes in higher education in the last four decades. First, across higher education, there has been an increasing corporatization of colleges and universities (Cote and Allahar 2011; Ginsberg 2011; Hermanowicz 2016b; Herrmann 2017). Second, and in part as a result of that corporatization, the nature of the relationship between teaching and research expectations has shifted considerably (Barnett 2005; Gale 2011). Third, faculty members across institutions are experiencing escalating requirements for service and bureaucratic tasks (Archer 2008; Billot 2010; Slaughter and Rhoades 2004). Fourth, and especially at research-oriented colleges and universities, faculty members are under ever-escalating pressures to produce (Finkelstein et al. 2016; Hermanowicz 2016a). Although we analytically distinguish scholarly research activities from teaching and service activities (e.g., conducting a research study is distinct from teaching a research methods course), we recognize that these activities often overlap in faculty members' day-to-day lives.

Materials and Methods
The data for this study are drawn from a larger project on how faculty members across the liberal arts construct and understand the meaning of "evidence" in their disciplines. As part of that larger study, we conducted in-depth, semi-structured interviews with a purposive sample of 100 liberal arts faculty members employed at an elite private research university in the United States, which we call Private Research University (PRU). All permanent, full-time faculty members whose primary appointment was in a liberal arts discipline were eligible to participate. Interviewees were recruited through faculty development workshops, faculty governance and departmental meetings, the authors' personal networks of colleagues, and referrals from other participants. All interviews took place in a location of the faculty members' choosing, usually in the faculty member's office. The interviews lasted between 40 and 150 min, with most lasting approximately an hour and a half. The interview guide for the project was developed and piloted with feedback from faculty members across the disciplinary clusters (arts and humanities, social sciences, STEM fields). Interview questions focused on a range of topics, including academic background and training, research and teaching experience, career path, beliefs about liberal arts disciplines, and concepts of evidence-related terms.
Interviewees were purposively sampled from every department in the liberal arts at PRU. The number of faculty members in the sample from each of the disciplinary clusters closely approximates that of the liberal arts faculty at PRU. Of the 100 interviewees, 44 had administrative disciplinary cluster homes in the arts and humanities, 26 in the social sciences, and 30 in STEM fields. We use the terms "administrative disciplinary cluster" and "administratively-assigned discipline" to acknowledge that some faculty members did not identify with their "official" departmental or disciplinary home. For instance, a faculty member might have a tenure line in a language department, but identify as a linguistics scholar. Similarly, a faculty member might have a tenure line in the biology department but identify as a neuroscientist. In total, the interviewees represent 34 administratively-assigned liberal arts disciplines and every department in the college of liberal arts at PRU; see Appendix A for a full listing of disciplines.
We intentionally sought to maximize variation within and across disciplines. Based on department size, we interviewed between 1 and 9 interviewees in each discipline, with a mean number of 3 interviewees per discipline. Within each discipline, we sampled to maximize diversity in terms of area of specialization/subfield, demographic variables (e.g., gender, race/ethnicity, age), and institutional variables (e.g., institutional rank and appointment type). To protect the anonymity of our interviewees, we provide aggregate rather than individual demographic characteristics for the sample (see Table 1). The demographic makeup of the sample roughly parallels that of the larger liberal arts faculty at PRU. All interviews were de-identified and fully transcribed. Data were analyzed through a systematic, iterative qualitative coding process (Miles et al. 2014;Morgan 2018), facilitated by the software MaxQDA. Iterative thematic analysis involves identifying, organizing, and synthesizing key themes that emerge in the data (Nowell et al. 2017). Initial codes were generated from concepts relevant to the research questions and the coding scheme was revised as new codes were generated from the data throughout the analysis. Open and focused coding of the interviews centered on perceptions of institutional pressures, strategies for managing those pressures, and faculty members' beliefs about the efficacy and quality of the academic knowledge production system.

Creating Knowledge: The Impact of Pressures to Produce
With no prompting from the interviewer, more than 80% of interviewees across disciplines and career stages brought up what they described as ever-increasing pressures to produce for tenure, promotion, and evaluation purposes. Although research on the changing nature of the modern academy and the identification of increasing pressures to produce is not new, we introduce here an analysis of the specific ways pressures to produce shape how faculty members go about the knowledge production process. A neuroscientist's account of the influence of these increasing pressures on the research process was particularly insightful: "I still think that scientists are trying to do a good job, but they could do a better job, and I think part of the problem is the undue pressure to get grant money and to get publications, and whether or not you will or will not get promoted. You know? So I think that there's kind of a structural problem with how science is conducted in the U.S. that creates some of the things that happen. Now again, I want to emphasize I'm a scientist, I love science, I think science is fabulous. I think that scientists are trying to do the best they can possibly do. But the way science works, the pressure of funding, the pressure of publications and promotion puts undue pressure on scientists. You know, back in the day 50 years ago I used to-when I was in graduate school, I used to read papers that were 50 pages long, where somebody had spent 10 or 15 years doing research and published this tome. That's never done anymore. You know, you got to bust your ass and get three three-page papers published this year, you know? Because you're up for a promotion next year, you know? So it creates unfortunate pressure on people to cut corners and tell tidy stories that are not nearly as tidy as you're led to believe."
A biologist shares a similar account: "The system is set up that this is what counts, and here's the way you do it, and if you want to get promoted, want to get a job, support your family, etc., you follow this trajectory . . . You just do it. And nobody is trying to be evil here. Well, you know, 1% of the people are committing fraud or something, or less than 1%, whatever it is-say it's 5%, so most-even 95%, nobody's trying to do anything wrong, but there are parts of the system that might drive them to publish too soon. In theory, there's an infinite number of controls you can do. But you can only do so many, so you want to-You really, really do want as many controls as you can think of. But you know, it can take you 10 years to publish something, and by then you might be unfunded because you haven't published." A faculty member in Math and Computer Science shares a similar account of these pressures: "So you know, they're [the authors of a study the interviewee is critiquing] talking as if these things are true, so let's go back and look at how the data was collected, how strong the p values were. Because that's what's wrong with a lot of science, right? (laughs) I mean, I hate to say it, but I mean, our careers are all behind showing stuff and you want to be able to say what you did correctly and not oversell it, right? And so too often, you know, weak relations get sold as big relations." Perceptions of increasing pressures to produce were not limited to faculty in STEM fields. A philosopher, for instance, said: "PRU is now just constantly, ridiculously, absurdly accelerating the demand for research . . . People need time to do good thinking and deep thinking, period. One of the things I really dislike is the culture-It's the whole culture of the academy nationally and even internationally, so it's not just PRU. Those institutional aims structure people and they form people's lives, and I see what they do to us, to all of us as faculty. 
I worry about it, because you know, it's a culture and culture produces pressures. And it doesn't-the number of publications doesn't equal good publications. I hate to see what the young faculty are going through. I mean, all of us are formed by it, but particularly people who are trying to get to tenure and the requirements are ridiculous. It's just wrong." A psychologist echoed that account, putting it succinctly, "I'd rather have 5 papers that I believe than 50 that are shit, but the tenure and promotion requirements certainly don't help." The specific relationship between increasing demands to produce and the decreasing quality of knowledge produced was also articulated by an anthropologist: "At a place with tenure, you've got-well, one, you have to get papers published, so there's all of that, which will obviously bias results. You know, you're not going to publish negative findings. There's going to be p-hacking and etc., and then you also have to develop sort of a brand for yourself-you have to develop some kind of a brand and there needs to be some coherence to that so all your studies have to support the same thing."

Evaluating Knowledge: Strategies for Managing Time Constraints
In addition to frequent, detailed, and unprompted critiques of the ways that structural pressures to produce affect the kind and quality of research faculty members create, interviewees also described how those pressures and the associated time constraints limit the time they have available to review and evaluate the work of their colleagues. Faculty members specifically described the ways that increasing pressures to produce reward creating knowledge over and above the work of evaluating the knowledge produced by others. For example, an environmental scientist said: "I think sometimes there's just too many papers that come out. And then there's-we review papers and we don't get rewarded for that, per se, so I think it can be really-people that don't spend enough time reviewing, and so things don't get revealed in the review process, and then it just gets published." Faculty members described two primary ways that they respond to decreasing time available to evaluate new knowledge. First, faculty members frequently explained that because increasing pressures to produce decrease the time available for other activities, they are increasingly "forced to trust", "give the benefit of the doubt", or otherwise "assume they've gotten it right" when evaluating the work of others in their field. The specific ways increasing pressures to produce affected how faculty members read and evaluate the knowledge claims of others included not being able to read as carefully as they would like, not having time to consult experts or other sources to determine the thoroughness or correctness of methods, techniques, or controls, and not having time to follow the trail of sources. For instance, a historian said: "When I read, I tend to give people the benefit of the doubt that they're not manipulating the content of what they found. 
And so what I think that that speaks to is the fact that among professionalized historians we trust the trappings of professionalization, which is the peer review [process] mainly, and all the things that are sort of allied with that, that you don't even get to submitting an article unless you've gone through all these things." Similarly, a neuroscientist explains: "Some things I do take on faith. You can't check everything. And a lot of times I just read it and I just take for granted that they do it right, you know? Because you just can't-you don't have enough time." A biologist puts it succinctly: "You hope and trust-it's all about trust that the system-the design is right, and they do those controls, that therefore rule out this." At the same time as faculty members reported essentially being forced to trust in the peer review process, they also frequently-and without prompting-raised critiques of the peer review process. For instance, a physicist said: "The reality of the situation is that peer reviewed literature is full of contradictory things and most of it doesn't survive two years, let alone 20, and that that's where the conversation happens, but that doesn't mean that the things that are published are right, right? That's part of a process, and you know, maybe even at one point in time it was true that if it was peer reviewed versus not you could trust it more, but I'm not even sure if that's a reliable metric anymore." Similarly, a mathematician describes skepticism of the peer review process, saying, "The peer review process is designed to check all the details, but some referees are lazy and errors do creep in to published papers." 
Recognizing that the peer review process is subject to issues that arise from time constraints related to pressures to publish and get grants, faculty members also frequently described relying on the reputation of a scholar to aid in their evaluation of a study or publication, even while often reporting feeling deeply conflicted about doing so. A neuroscientist, for instance, explains: "There are some labs where I would say, okay, they've been doing neurophysiology recordings for 30 years. They know that technique. I'm not going to go through and look at the details of that, because that's a waste of my time." Similarly, a chemist says: "If that lab or that person has a reputation that they've had a track record of good research then yeah, we're probably going to be more likely to believe them that they designed their experiment well and that the conclusions that they come up with are more sound. I mean, there are a couple of chem ed researchers who-[prominent chemist] would be an example that when she published a study, I didn't really bother crawling into all of the details of the study very well, because I know she was not going to put her name on something that was not well done, and so she had built that cachet." An economist directly links what they perceive as the problematic aspect of relying on reputation with the time cost of taking an entirely skeptical approach to all studies: "Reputation does matter. As much as I would like to think-say it doesn't, you know, like with everything, there's a cost to learning new information, and if we have to confront every new paper with complete skepticism and refuse to buy into anything they're saying until we essentially replicate it our self, you know, that's-no one's going to do that . . . 
I mean, there are certain researchers that I think of, wow, I know of doing-presenting good work and for those people I might be more willing to take as given that they're not trying to fool me or messing with the data in some way. So yeah, I think for sure we use things to lower our research costs, and in this case research cost is how much detail that we're going to go into every aspect of a paper before we believe it, and reputation is one of those." Similarly, a physicist said: "I don't have to look at their [lab the physicist respects] methods section because I know what they do." Similarly, an anthropologist says, "We don't really check [their methods], especially if it comes from a trusted source," and a biologist, when asked what matters most in terms of the quality of a study, explained, "So controls-you would hope that that outweighs everything, but sometimes-sometimes who is on the paper outweighs everything else. Who is this person is what it comes down to." A scholar in interdisciplinary studies similarly described their reliance on reputation, while simultaneously reflecting critically on that reliance: "I guess I would be more easily accepting of the ideas of someone who is an established scholar whose work I know, and I would be like, okay, this person, I know they do good work. I'm going to read this and probably believe that they've done the work to back it up, whereas with somebody new I would be more careful about looking at whether or not they've done the work to back it up. And that might not actually be a good thing when I say that out loud. I think well, you know, somebody who is really well established can still do sloppy work, you know?" Similarly, an anthropologist reflected: "I mean, I would like to say that it [reputation] doesn't matter, but it definitely matters. (sighs) I would like to say I don't hold them to different standards, but I'm sure I do. 
I'm sure people who have a reputation I would be willing to let them slide a little bit more to some extent . . . I would say for someone who-a junior person I might say, how did this ever get through, and for a senior person be like, okay, clearly they're-you would assume they did it right." Another anthropologist explains: "Reputation makes a difference more than it should. I don't like that it makes a difference, but it does, so I'll just be honest about that. When I see a paper by somebody who I've read lots of their stuff and I like it, I feel like they're a sound scholar, I will tend to accept a more radical view than I would from somebody who is not known to me. Even if the view is the same, the logic is the same, and the evidence is the same, and I really don't like that that's the case, but I think it is the case." A literary and language studies scholar also emphasizes the relationship between reputation and trust saying, "There are certain names that the work has withstood decades of others reading into it and sort of analyzing it, and I turn to them with much more-well, with a higher degree of trust."

Assessing the Current Knowledge Production System
Despite what faculty members perceive as the numerous pressures and flaws in the current knowledge production system-at both the creation and evaluation points-our interviewees repeatedly affirmed (a) that the issues they identified were overwhelmingly ones of structural limitations and pressures rather than individual issues; and (b) that despite its numerous flaws, the current system of knowledge production is the best of available options. The willingness of faculty members to extend the benefit of the doubt, to assume that their colleagues are "getting it right," and to trust their colleagues is strongly related to their perception that despite numerous structural pressures and issues, at the individual level their colleagues are-like them-doing their best to resist those pressures. For example, a psychologist said: "The publish as much as you can or tenure is in jeopardy does not help. And I just frankly refuse. I frankly refuse to just put papers out to put papers out, right? If I do not believe those, they are not going. They're just not, right? And if PRU doesn't give me tenure as a consequence, well, so be it, right?" Repeatedly across our interviews, faculty members acknowledged that in rare instances, their colleagues may "publish too soon," "publish just to get something out," or otherwise publish work that falls short of the standards in their fields, but overwhelmingly emphasized that their own and their colleagues' commitment to the pursuit of knowledge and excellence in the research process leads them to resist succumbing to escalating pressures to produce.
Even as they repeatedly-and sometimes strongly-critiqued the ways that structural pressures to produce have negatively impacted the academic knowledge production system, faculty members in our study simultaneously argued that the current system is nonetheless the best available option. A biologist, for instance, said: "The system is not perfect in the natural sciences, but it's pretty darn good. It's pretty darn good, so there's a reason that most people just kind of trust the scientific results . . . So I would say that most people accept the science model, as imperfect as it is." Similarly, a neuroscientist said: "Something I just got from a colleague about two weeks ago or three weeks ago, they did a similar kind of thing in [subfield] and about 50% of the studies were not replicated. But it does call into question the validity of scientific results if a significant proportion of them cannot be replicated. Does it call the whole enterprise into question? I would say at the grandest level, no. I still believe that science is the only way to discover how things work really and have confidence that you know of how things work. It does, however, speak to the issues that I raised before about the pressures on scientists to publish or perish and to make a nice, tidy story out of a very complicated data set, you know? I really, really do-notwithstanding the publications that have come out on the question of replicability of scientific studies, I still think science is the only way to go."

Findings and Interpretation
Our findings show that across arts and humanities, social sciences, and STEM fields, there are multiple points in the academic research system at which ever-increasing pressures to produce have a significant impact on knowledge creation and evaluation. First, we examined how pressures to produce impact the process of creating knowledge, finding that faculty members widely acknowledge that escalating pressures to produce have myriad detrimental effects on the quality of research and publications. Second, we showed that pressures to produce have a significant effect on how faculty members evaluate the knowledge created by their colleagues. We identified two primary mechanisms faculty members use to navigate evaluating the work of others in the face of increasing time constraints: (a) giving the benefit of the doubt, and (b) relying on reputation. We showed that faculty members are themselves critical of these mechanisms and understand them to be problematic, yet also feel that the demands on their time leave them little choice. Finally, we showed that, despite identifying these critical flaws in the modern knowledge production system, faculty members nonetheless affirm that system as the best of available options, drawing on logics that frame themselves and their colleagues as resisting the pressures of a deeply flawed system.
While we found that faculty members repeatedly-and without prompting-discussed the ways that pressures to produce can negatively impact the process of creating knowledge, we also found that faculty members strongly emphasized the ways that they and their colleagues push back against ever-increasing institutional demands for quantity over quality. At the same time, however, faculty members across disciplines also acknowledged that pressures to produce do lead some faculty members to publish work that falls short of the scholarly ideal-work that is preliminary, that relies on insufficient methodological or technical rigor, and/or that otherwise falls short of particular disciplinary standards of excellence. Strongly believing that they and their colleagues strive to maintain the most rigorous standards of research in the face of exponentially increasing pressures to publish more and more with less and less available time, yet also aware that some colleagues are unsuccessful in that resistance, faculty members nonetheless affirm their belief in the current knowledge production system as the best of available options. We argue that our interviewees' conviction that they and their colleagues were consistent in their efforts to resist enormous structural pressures is strongly related to their continued faith in the academic knowledge production system, even as they point to numerous areas of weakness in that system. In sum, we argue that while faculty members express trust in the modern academic system of knowledge production, that trust is driven by faculty members' trust in their colleagues rather than in the system itself.
By emphasizing the ways that faculty members resist structural pressures to produce quantity over quality, even as they recognize that not all of their colleagues are successful in those efforts, faculty members in this study manage the cognitive dissonance between the flaws they identify in the knowledge production system and their continued belief in the efficacy of that system.

Theoretical Contributions
Our findings contribute to literature in the sociology of knowledge on academic evaluations that demonstrates the continued commitment of faculty members to the peer review process (Albert et al. 2012; Lamont 2009). As Lamont (2009) puts it in describing the faith of the faculty in her study of in-person grant review panels, "participants' faith in the system has a tremendous influence on how well it works" (p. 7). We extend the findings of these studies by demonstrating how faculty members maintain their faith in that system even as they recognize its flaws, showing how faculty's perceptions of the specific weaknesses of the modern knowledge production system are in large part ameliorated by their belief in the commitment of their colleagues to avoid "doing anything wrong." Additionally, we contribute to work in the sociology of knowledge on academic evaluation processes by moving beyond Lamont and colleagues' (Guetzkow et al. 2004; Lamont 2009; Lamont and Guetzkow 2016) focus on evaluations of proposed work to understand how faculty evaluate completed work. Further, while Lamont and colleagues focus on collective evaluation in group grant panel review settings, we identify strategies faculty use in making evaluations at the individual level.
Within the sociology of education, we contribute to the growing body of research on the "speeding up" of academic life (Berg and Seeber 2016; Menzies and Newson 2007; Vostal 2016) by demonstrating the specific impacts of increasing pressures to produce on how faculty both create and evaluate scholarship. In line with prior research on faculty members' perceptions of the changing culture of the American academy, we find that faculty members perceive significant changes in recent decades in expectations for research productivity. While previous research has shown that increasing time demands and pressures to produce lead to significantly increased levels of stress and dissatisfaction among faculty (e.g., Araujo 2009; Horn 2016; Menzies and Newson 2007; Vostal 2016), in this study we extend understanding of those increasing pressures by tracing their concrete consequences for the knowledge production system itself.
While much of the work on changes in higher education institutions has focused on the impact of these changes for teaching, learning, and student outcomes, we focus on how these processes shape the everyday practices of the faculty as they go about their work as researchers. In so doing, we demonstrate how the sociology of education may be fruitfully linked, analytically, with the sociology of knowledge. We do this by connecting work on the organizational structure of higher education institutions from the sociology of education with work on evaluative processes from the sociology of knowledge to investigate how changes in institutional structures shape the processes that faculty members use to evaluate their own and others' knowledge creation. While previous work has demonstrated that changes in the structure of colleges and universities create pressures and stresses for faculty, our study goes beyond existing work by showing how faculty respond to those pressures as they navigate their daily work of creating and evaluating knowledge.

Policy and Practical Implications
There are a number of potential policy implications of the findings we report here. First, additional attention to-and guidelines for evaluating-the quality of publications in formal faculty evaluation processes, such as tenure and promotion processes, may help to curb the escalating pressures to produce that faculty members widely describe as emphasizing quantity over quality of scholarship. Second, explicitly rewarding faculty members in tenure and promotion processes for their work in evaluating knowledge-in addition to their role in creating knowledge-may be one effective method of addressing multiple issues in the current academic knowledge production system (see also Armstrong 1997). Rewarding and incentivizing reviewing may help faculty members prioritize that work and devote more time to peer review, and thus prevent manuscripts from being published "too early" and catch methodological and technical errors that might otherwise go undetected prior to publication.

Limitations and Future Directions
While we suspect that our findings may hold particularly strongly for the faculty at research-intensive institutions, the strong pattern in the literature of increasing productivity demands across institutional types (Finkelstein et al. 2016; Hermanowicz 2016b) suggests that faculty at other types of institutions may experience similar pressures, and may also respond in similar ways. More research is needed, however, to investigate how faculty at teaching-intensive institutions respond to the expectations to both produce and evaluate knowledge.
Additionally, the present study focused exclusively on the experiences of faculty in liberal arts disciplines. Faculty in other disciplines, especially applied and professional disciplines, may face different institutional structures and pressures and be working within different evaluative cultures than those shared in common among faculty in liberal arts fields. Faculty in applied and professional fields may also respond to the challenges of their own academic structures with different strategies than those in liberal arts disciplines.
Future research should also explore in more depth some of the tensions and challenges we identified in this study. In particular, more work is needed to understand how faculty members navigate pressures to both create and evaluate knowledge. How do faculty members prioritize and juggle making progress on their own manuscripts and grant proposals while also reviewing those of their colleagues? What strategies do faculty create and rely on to maximize their time and effectiveness in juggling these research-related obligations? While much previous work has focused on the tensions between research, teaching, and service, the tensions identified in our work point to a fruitful avenue of exploration: examining how faculty understand the importance of knowledge creation relative to knowledge evaluation, and specifically the ways that institutional pressures and structures shape those understandings.
Finally, there remains a pressing analytical concern not explored in this study: faculty members' perception of the academic "system" as something outside and beyond faculty members themselves. While faculty members describe the pressures to produce created by the tenure and promotion system as part of the "structure" of the academy, and many strongly critique those structures, we nonetheless note the central role that faculty members themselves play in tenure and promotion evaluations and processes. Future research should examine how faculty members draw mental distinctions between academic structures and their own roles in those structures. For instance, additional work could explicitly explore how faculty members navigate what they perceive as problematic increasing pressures to produce, and the accompanying challenges to high quality work, as they make decisions in tenure and promotion evaluations.