Article

The Impact of Pressures to Produce on Knowledge Production and Evaluation in the Modern Academy

Department of Sociology, Emory University, Atlanta, GA 30322, USA
*
Author to whom correspondence should be addressed.
Soc. Sci. 2020, 9(5), 64; https://doi.org/10.3390/socsci9050064
Submission received: 30 March 2020 / Revised: 17 April 2020 / Accepted: 23 April 2020 / Published: 28 April 2020

Abstract
Combining work from the related but distinct fields of sociology of knowledge and sociology of education, we explore the effects of the changing landscape of higher education on the academic knowledge production system. Drawing on 100 interviews with faculty members from 34 disciplines at an elite private research university, we show that faculty members perceive exponentially increasing pressures to produce, and identify the ways that those pressures can negatively impact the knowledge creation process. We then examine the ways those pressures to produce influence how faculty evaluate their colleagues’ work, leading faculty to extend the benefit of the doubt, rely on reputation, and emphasize the peer review process, even as they simultaneously critique its weaknesses. Finally, we show that faculty members ultimately reconcile their perceptions of weaknesses in the current knowledge production system with their belief in that system by emphasizing their own and their colleagues’ commitment to resisting structural pressures to produce. While much of the existing body of scholarship on the changing higher education landscape has focused on teaching and learning outcomes, this study contributes to our understanding of how those changes impact the research process, underscoring the relationship between institutional structures and evaluative processes.

1. Introduction

The sociology of education and the sociology of knowledge—related but distinct fields with surprisingly little overlap (Hermanowicz 2012)—both examine how information is created and disseminated. The sociology of education focuses on how organizations, institutions, policies, and actors transmit knowledge (Durkheim 1956; Saha 2015; Trent et al. 1985). The sociology of knowledge focuses on the processes through which knowledge itself is created, evaluated, and legitimated (Berger and Luckman 1967; Kuklick 1983; Swidler and Arditi 1994).
In the sociology of education, the sociology of higher education focuses specifically on exploring educational processes, policies, and experiences in the context of colleges and universities (Clark 1973; Deem 2004; Gumport 2007; Stevens et al. 2008). In recent years, increasing attention has been paid by sociologists of higher education to the changing nature of the modern academy (e.g., Ginsberg 2011; Neumann 2009; Tuchman 2009, 2011). Sociologists of higher education have identified several key changes in higher education in the last four decades. First, across higher education, there has been an increasing corporatization of colleges and universities (Cote and Allahar 2011; Ginsberg 2011; Hermanowicz 2016b; Herrmann 2017). Second, and in part a result of the corporatization of higher education, the nature of the relationship between teaching and research expectations has shifted considerably (Barnett 2005; Gale 2011). Third, faculty members across institutions are experiencing escalating requirements for service and bureaucratic tasks (Archer 2008; Billot 2010; Slaughter and Rhoades 2004). Fourth—and especially at research-oriented colleges and universities—faculty members are under ever-escalating pressures to produce (Finkelstein et al. 2016; Hermanowicz 2016a). Finally, as a result of the changing relationship between research and teaching expectations, increased service and administrative loads, and increasing pressures to produce, a “speeding up” of academic life is underway (Berg and Seeber 2016; Menzies and Newson 2007; Schuster and Finkelstein 2006; Vostal 2016).
Understanding the broader context and causes of increasing pressures to produce is key to understanding how faculty perceive increasing demands for productivity and decreasing available time to meet those demands. Billot (2010) identifies the constant pressure felt by academics to “do more” than ever as a significant change in how faculty members experience their work. Finkelstein et al. (2016) also report a marked increase in the publication records of faculty members in the last two decades. Among STEM faculty, for instance, Hermanowicz (2016a) finds an enormous increase in publication in the last three decades, resulting in a pattern where “younger cohorts of scientists typically published at a rate wherein their productivity [at tenure] corresponded to an entire career stage occupied by older counterparts” (310).
We place this work on the changing nature of American higher education into conversation with a related but separate body of research: the sociology of knowledge. Growing out of work on the social construction of reality (Berger and Luckman 1967; Holznar 1968; Mannheim 1936; McCarthy 1996), the sociology of knowledge investigates the creation of epistemological frameworks, understandings of truth and reality, the basis of knowledge claims, the development and diffusion of ideological frameworks, and other processes involved in constructing understandings of the world around us (Berger and Luckman 1967; Chandler et al. 1991; Collins and Evans 2007; Gieryn 1999; Gurvitch 1971; Mannheim 1936; Namer 1984; Willer 1971).
Within the sociology of knowledge, the sociology of valuation and evaluation examines the processes individuals use to create categorizations and hierarchies and to make judgments about value, worth, and merit (e.g., Beljean et al. 2016; Boltanski and Thévenot [1991] 2006; Bourdieu [1979] 1984; Cefai et al. 2015; Lamont 2012; Lamont and Thévenot 2000). These studies show that valuative and evaluative processes shape the development and use of categories and classifications, as well as legitimation dynamics. Valuative processes involve giving worth or value, while evaluative processes involve determining how a certain worth is derived or attained (Lamont 2012).
Within the sociology of valuation and evaluation, a subset of studies focuses on evaluative processes in academic contexts (e.g., Hirschauer 2010; Knorr Cetina 1999; Lamont 2009; Lamont and Huutoniemi 2011; Mallard et al. 2009). This literature shows that evaluations in academic contexts are not always based exclusively on merit or quality, but are instead complex processes influenced by myriad institutional, cognitive, and social factors. Lamont and colleagues (Guetzkow et al. 2004; Lamont 2009; Lamont and Guetzkow 2016; Lamont and Huutoniemi 2011; Mallard et al. 2009; Tsay et al. 2003) show that academic evaluation is a thoroughly social process, influenced not only by the merit or quality of work, but also by factors specific to particular disciplines, by the social identity of evaluators, and by more ephemeral criteria, such as the elegance of the writing and the moral qualities of the writer. This literature further documents that both publishing productivity and the peer review process are influenced by structural factors such as institution and appointment type (e.g., Bland et al. 2006; Evans 2005; Hirschauer 2015). We extend Lamont and colleagues’ work on evaluations of proposals for future work by exploring evaluative processes related to completed work.
In drawing on the intersection of these two bodies of knowledge, we ask: how do changes in the structure of the academy influence how faculty carry out the day-to-day processes involved in producing and evaluating knowledge? By “producing and evaluating knowledge” we refer to all the components and processes involved in scholarly and research activities, including both creating one’s own research and evaluating the research and research-related materials of colleagues (e.g., manuscripts, grant proposals). Our conceptualization of “knowledge production” includes, but is not limited to: reading and reviewing manuscripts and grant proposals, carrying out data collection and creation processes, analytical work, writing, receiving research-related training, and training research assistants on one’s own research projects (differentiated, for instance, from teaching a research methods course). Although we analytically distinguish scholarly research activities from teaching and service activities, we recognize that these activities often overlap in faculty members’ day-to-day lives.

2. Materials and Methods

The data for this study are drawn from a larger project on how faculty members across the liberal arts construct and understand the meaning of “evidence” in their disciplines. As part of that larger study, we conducted in-depth, semi-structured interviews with a purposive sample of 100 liberal arts faculty members employed at an elite private research university in the United States, which we call Private Research University (PRU). All permanent, full-time faculty members whose primary appointment was in a liberal arts discipline were eligible to participate. Interviewees were recruited through faculty development workshops, faculty governance and departmental meetings, the authors’ personal networks of colleagues, and referrals from other participants. All interviews took place in a location of the faculty members’ choosing, usually in the faculty member’s office. The interviews lasted 40–150 min, with most lasting approximately an hour-and-a-half. The interview guide for the project was developed and piloted with feedback from faculty members across the disciplinary clusters (arts and humanities, social sciences, STEM fields). Interview questions focused on a range of topics, including academic background and training, research and teaching experience, career path, beliefs about liberal arts disciplines, and concepts of evidence-related terms.
Interviewees were purposively sampled from every department in the liberal arts at PRU. The number of faculty members in the sample from each of the disciplinary clusters closely approximates that of the liberal arts faculty at PRU. Of the 100 interviewees, 44 had administrative disciplinary cluster homes in the arts and humanities, 26 in the social sciences, and 30 in STEM fields. We use the terms “administrative disciplinary cluster” and “administratively-assigned discipline” to acknowledge that some faculty members did not identify with their “official” departmental or disciplinary home. For instance, a faculty member might have a tenure line in a language department, but identify as a linguistics scholar. Similarly, a faculty member might have a tenure line in the biology department but identify as a neuroscientist. In total, the interviewees represent 34 administratively-assigned liberal arts disciplines and every department in the college of liberal arts at PRU; see Appendix A for a full listing of disciplines.
We intentionally sought to maximize variation within and across disciplines. Based on department size, we interviewed between 1 and 9 interviewees in each discipline, with a mean number of 3 interviewees per discipline. Within each discipline, we sampled to maximize diversity in terms of area of specialization/subfield, demographic variables (e.g., gender, race/ethnicity, age), and institutional variables (e.g., institutional rank and appointment type). To protect the anonymity of our interviewees, we provide aggregate rather than individual demographic characteristics for the sample (see Table 1). The demographic makeup of the sample roughly parallels that of the larger liberal arts faculty at PRU.
All interviews were de-identified and fully transcribed. Data were analyzed through a systematic, iterative qualitative coding process (Miles et al. 2014; Morgan 2018), facilitated by the software MaxQDA. Iterative thematic analysis involves identifying, organizing, and synthesizing key themes that emerge in the data (Nowell et al. 2017). Initial codes were generated from concepts relevant to the research questions and the coding scheme was revised as new codes were generated from the data throughout the analysis. Open and focused coding of the interviews centered on perceptions of institutional pressures, strategies for managing those pressures, and faculty members’ beliefs about the efficacy and quality of the academic knowledge production system.

3. Results

3.1. Creating Knowledge: The Impact of Pressures to Produce

With no prompting by the interviewer, more than 80% of interviewees across disciplines and across career stages brought up what they described as ever-increasing pressures to produce for tenure and promotion and evaluation purposes. Although research on the changing nature of the modern academy and the identification of increasing pressures to produce is not new, we introduce here an analysis of the specific ways pressures to produce impact how faculty members go about the knowledge production process. A neuroscientist’s account of the influence on the research process of increasing pressures to produce was particularly insightful: “I still think that scientists are trying to do a good job, but they could do a better job, and I think part of the problem is the undue pressure to get grant money and to get publications, and whether or not you will or will not get promoted. You know? So I think that there’s kind of a structural problem with how science is conducted in the U.S. that creates some of the things that happen. Now again, I want to emphasize I’m a scientist, I love science, I think science is fabulous. I think that scientists are trying to do the best they can possibly do. But the way science works, the pressure of funding, the pressure of publications and promotion puts undue pressure on scientists. You know, back in the day 50 years ago I used to—when I was in graduate school, I used to read papers that were 50 pages long, where somebody had spent 10 or 15 years doing research and published this tome. That’s never done anymore. You know, you got to bust your ass and get three three-page papers published this year, you know? Because you’re up for a promotion next year, you know? So it creates unfortunate pressure on people to cut corners and tell tidy stories that are not nearly as tidy as you’re led to believe.” A biologist shares a similar account: “The system is set up that this is what counts, and here’s the way you do it, and if you want to get promoted, want to get a job, support your family, etc., you follow this trajectory… You just do it. And nobody is trying to be evil here. Well, you know, 1% of the people are committing fraud or something, or less than 1%, whatever it is—say it’s 5%, so most—even 95%, nobody’s trying to do anything wrong, but there are parts of the system that might drive them to publish too soon. In theory, there’s an infinite number of controls you can do. But you can only do so many, so you want to—You really, really do want as many controls as you can think of. But you know, it can take you 10 years to publish something, and by then you might be unfunded because you haven’t published.” A faculty member in Math and Computer Science shares a similar account of these pressures: “So you know, they’re [the authors of a study the interviewee is critiquing] talking as if these things are true, so let’s go back and look at how the data was collected, how strong the p values were. Because that’s what’s wrong with a lot of science, right? (laughs) I mean, I hate to say it, but I mean, our careers are all behind showing stuff and you want to be able to say what you did correctly and not oversell it, right? And so too often, you know, weak relations get sold as big relations.”
Perceptions of increasing pressures to produce were not limited to faculty in STEM fields. A philosopher, for instance, said: “PRU is now just constantly, ridiculously, absurdly accelerating the demand for research… People need time to do good thinking and deep thinking, period. One of the things I really dislike is the culture—It’s the whole culture of the academy nationally and even internationally, so it’s not just PRU. Those institutional aims structure people and they form people’s lives, and I see what they do to us, to all of us as faculty. I worry about it, because you know, it’s a culture and culture produces pressures. And it doesn’t—the number of publications doesn’t equal good publications. I hate to see what the young faculty are going through. I mean, all of us are formed by it, but particularly people who are trying to get to tenure and the requirements are ridiculous. It’s just wrong.” A psychologist echoed that account, putting it succinctly, “I’d rather have 5 papers that I believe than 50 that are shit, but the tenure and promotion requirements certainly don’t help.” The specific relationship between increasing demands to produce and the decreasing quality of knowledge produced was also articulated by an anthropologist: “At a place with tenure, you’ve got—well, one, you have to get papers published, so there’s all of that, which will obviously bias results. You know, you’re not going to publish negative findings. There’s going to be p-hacking and etc., and then you also have to develop sort of a brand for yourself—you have to develop some kind of a brand and there needs to be some coherence to that so all your studies have to support the same thing.”

3.2. Evaluating Knowledge: Strategies for Managing Time Constraints

In addition to frequent, unprompted critiques of the ways that structural pressures to produce shape the kind and quality of research they create, faculty members also described how those pressures and the associated time constraints reduce the time they have available to review and evaluate the work of their colleagues. Faculty members specifically described how increasing pressures to produce reward creating knowledge over and above the work of evaluating the knowledge produced by others. For example, an environmental scientist said: “I think sometimes there’s just too many papers that come out. And then there’s—we review papers and we don’t get rewarded for that, per se, so I think it can be really—people that don’t spend enough time reviewing, and so things don’t get revealed in the review process, and then it just gets published.”
Faculty members described two primary ways that they respond to decreasing time available to evaluate new knowledge. First, faculty members frequently explained that because increasing pressures to produce decrease the time available for other activities, they are increasingly “forced to trust”, “give the benefit of the doubt”, or otherwise “assume they’ve gotten it right” when evaluating the work of others in their field. The specific ways increasing pressures to produce impacted how faculty members read and evaluate the knowledge claims of others included not being able to read as carefully as they would like, not having time to consult experts or other sources to determine the thoroughness or correctness of methods, techniques, or controls, and not having time available to follow the trail of sources. For instance, a historian said: “When I read, I tend to give people the benefit of the doubt that they’re not manipulating the content of what they found. And so what I think that that speaks to is the fact that among professionalized historians we trust the trappings of professionalization, which is the peer review [process] mainly, and all the things that are sort of allied with that, that you don’t even get to submitting an article unless you’ve gone through all these things.” Similarly, a neuroscientist explains: “Some things I do take on faith. You can’t check everything. And a lot of times I just read it and I just take for granted that they do it right, you know? Because you just can’t—you don’t have enough time.” A biologist puts it succinctly: “You hope and trust—it’s all about trust that the system—the design is right, and they do those controls, that therefore rule out this.”
At the same time as faculty members reported essentially being forced to trust in the peer review process, they also frequently—and without prompting—raised critiques of the peer review process. For instance, a physicist said: “The reality of the situation is that peer reviewed literature is full of contradictory things and most of it doesn’t survive two years, let alone 20, and that that’s where the conversation happens, but that doesn’t mean that the things that are published are right, right? That’s part of a process, and you know, maybe even at one point in time it was true that if it was peer reviewed versus not you could trust it more, but I’m not even sure if that’s a reliable metric anymore.” Similarly, a mathematician describes skepticism of the peer review process, saying, “The peer review process is designed to check all the details, but some referees are lazy and errors do creep in to published papers.”
Recognizing that the peer review process is subject to issues that arise from time constraints related to pressures to publish and get grants, faculty members also frequently described resorting to relying on the reputation of a scholar to aid in their evaluation of a study or publication, even while often reporting feeling deeply conflicted about doing so. A neuroscientist, for instance, explains: “There are some labs where I would say, okay, they’ve been doing neurophysiology recordings for 30 years. They know that technique. I’m not going to go through and look at the details of that, because that’s a waste of my time.” Similarly, a chemist says: “If that lab or that person has a reputation that they’ve had a track record of good research then yeah, we’re probably going to be more likely to believe them that they designed their experiment well and that the conclusions that they come up with are more sound. I mean, there are a couple of chem ed researchers who—[prominent chemist] would be an example that when she published a study, I didn’t really bother crawling into all of the details of the study very well, because I know she was not going to put her name on something that was not well done, and so she had built that caché.” An economist directly links what they perceive as the problematic aspect of relying on reputation with the time cost of taking an entirely skeptical approach to all studies: “Reputation does matter. As much as I would like to think—say it doesn’t, you know, like with everything, there’s a cost to learning new information, and if we have to confront every new paper with complete skepticism and refuse to buy into anything they’re saying until we essentially replicate it our self, you know, that’s—no one’s going to do that…I mean, there are certain researchers that I think of, wow, I know of doing—presenting good work and for those people I might be more willing to take as given that they’re not trying to fool me or messing with the data in some way. So yeah, I think for sure we use things to lower our research costs, and in this case research cost is how much detail that we’re going to go into every aspect of a paper before we believe it, and reputation is one of those.”
Similarly, a physicist said: “I don’t have to look at their [lab the physicist respects] methods section because I know what they do.” An anthropologist likewise says, “We don’t really check [their methods], especially if it comes from a trusted source,” and a biologist, when asked what matters most in terms of the quality of a study, explained, “So controls—you would hope that that outweighs everything, but sometimes—sometimes who is on the paper outweighs everything else. Who is this person is what it comes down to.” A scholar in interdisciplinary studies similarly described their reliance on reputation, while simultaneously reflecting critically on that reliance: “I guess I would be more easily accepting of the ideas of someone who is an established scholar whose work I know, and I would be like, okay, this person, I know they do good work. I’m going to read this and probably believe that they’ve done the work to back it up, whereas with somebody new I would be more careful about looking at whether or not they’ve done the work to back it up. And that might not actually be a good thing when I say that out loud. I think well, you know, somebody who is really well established can still do sloppy work, you know?” Similarly, an anthropologist reflected: “I mean, I would like to say that it [reputation] doesn’t matter, but it definitely matters. (sighs) I would like to say I don’t hold them to different standards, but I’m sure I do. I’m sure people who have a reputation I would be willing to let them slide a little bit more to some extent… I would say for someone who—a junior person I might say, how did this ever get through, and for a senior person be like, okay, clearly they’re—you would assume they did it right.” Another anthropologist explains: “Reputation makes a difference more than it should. I don’t like that it makes a difference, but it does, so I’ll just be honest about that. When I see a paper by somebody who I’ve read lots of their stuff and I like it, I feel like they’re a sound scholar, I will tend to accept a more radical view than I would from somebody who is not known to me. Even if the view is the same, the logic is the same, and the evidence is the same, and I really don’t like that that’s the case, but I think it is the case.” A literary and language studies scholar also emphasizes the relationship between reputation and trust saying, “There are certain names that the work has withstood decades of others reading into it and sort of analyzing it, and I turn to them with much more—well, with a higher degree of trust.”

3.3. Assessing the Current Knowledge Production System

Despite what faculty members perceive as the numerous pressures and flaws in the current knowledge production system—at both the creation and evaluation points—our interviewees repeatedly affirmed (a) that the issues they identified were overwhelmingly ones of structural limitations and pressures rather than individual issues; and (b) that despite its numerous flaws, the current system of knowledge production is the best of available options. The willingness of faculty members to extend the benefit of the doubt, to assume that their colleagues are “getting it right,” and to trust their colleagues is strongly related to their perception that despite numerous structural pressures and issues, at the individual level their colleagues are—like them—doing their best to resist those pressures. For example, a psychologist said: “The publish as much as you can or tenure is in jeopardy does not help. And I just frankly refuse. I frankly refuse to just put papers out to put papers out, right? If I do not believe those, they are not going. They’re just not, right? And if PRU doesn’t give me tenure as a consequence, well, so be it, right?” Repeatedly across our interviews, faculty members acknowledged that in rare instances, their colleagues may “publish too soon,” “publish just to get something out,” or otherwise publish work that falls short of the standards in their fields, but overwhelmingly emphasized that their own and their colleagues’ commitment to the pursuit of knowledge and excellence in the research process leads them to resist succumbing to escalating pressures to produce.
Even as they repeatedly—and sometimes strongly—critiqued the ways that structural pressures to produce have negatively impacted the academic knowledge production system, faculty members in our study simultaneously argued that the current system is nonetheless the best available option. A biologist, for instance, said: “The system is not perfect in the natural sciences, but it’s pretty darn good. It’s pretty darn good, so there’s a reason that most people just kind of trust the scientific results… So I would say that most people accept the science model, as imperfect as it is.” Similarly, a neuroscientist said: “Something I just got from a colleague about two weeks ago or three weeks ago, they did a similar kind of thing in [subfield] and about 50% of the studies were not replicated. But it does call into question the validity of scientific results if a significant proportion of them cannot be replicated. Does it call the whole enterprise into question? I would say at the grandest level, no. I still believe that science is the only way to discover how things work really and have confidence that you know of how things work. It does, however, speak to the issues that I raised before about the pressures on scientists to publish or perish and to make a nice, tidy story out of a very complicated data set, you know? I really, really do—notwithstanding the publications that have come out on the question of replicability of scientific studies, I still think science is the only way to go.”

4. Discussion

4.1. Findings and Interpretation

Our findings show that across arts and humanities, social sciences, and STEM fields, there are multiple points in the academic research system at which ever-increasing pressures to produce have a significant impact on knowledge creation and evaluation. First, we examined how pressures to produce impact the process of creating knowledge, finding that faculty members widely acknowledge that escalating pressures to produce have myriad detrimental effects on the quality of research and publications. Second, we showed that pressures to produce have a significant effect on how faculty members evaluate the knowledge created by their colleagues. We identified two primary mechanisms faculty members use to navigate evaluating the work of others in the face of increasing time constraints: (a) giving the benefit of the doubt, and (b) relying on reputation. We showed that faculty members are themselves critical of these mechanisms and understand them to be problematic, yet also feel that the demands on their time leave them little choice. Finally, we showed that, despite identifying these critical flaws in the modern knowledge production system, faculty members nonetheless affirm that system as the best of available options, drawing on logics that frame themselves and their colleagues as resisting the pressures of a deeply flawed system.
While we found that faculty members repeatedly—and without prompting—discussed the ways that pressures to produce can negatively impact the process of creating knowledge, we also found that faculty members strongly emphasized the ways that they and their colleagues push back against ever-increasing institutional demands for quantity over quality. At the same time, however, faculty members across disciplines also acknowledged that pressures to produce do lead some faculty members to publish work that falls short of the scholarly ideal—work that is preliminary, that relies on insufficient methodological or technical rigor, and/or that otherwise falls short of particular disciplinary standards of excellence. Strongly believing that they and their colleagues strive to maintain the most rigorous standards of research in the face of exponentially increasing pressures to publish more and more with less and less available time, yet also aware that some colleagues are unsuccessful in that resistance, faculty members nonetheless affirm their belief in the current knowledge production system as the best of available options. We argue that our interviewees’ conviction that they and their colleagues were consistent in their efforts to resist enormous structural pressures is strongly related to their continued faith in the academic knowledge production system, even as they point to numerous areas of weakness in that system. In sum, we argue that while faculty members trust the modern academic system of knowledge production, it is their trust in their colleagues, rather than in the system itself, that sustains that confidence.
By emphasizing the ways that faculty members resist structural pressures to produce quantity over quality, even as they recognize that not all of their colleagues are successful in those efforts, faculty members in this study manage the cognitive dissonance between the flaws they identify in the knowledge production system and their continued belief in the efficacy of that system.

4.2. Theoretical Contributions

Our findings contribute to literature in the sociology of knowledge on academic evaluations that demonstrates the continued commitment of faculty members to the peer review process (Albert et al. 2012; Lamont 2009). As Lamont (2009) puts it in describing the faith of the faculty in her study of in-person grant review panels, “participants’ faith in the system has a tremendous influence on how well it works” (p. 7). We extend the findings of these studies by demonstrating how faculty members maintain their faith in that system even as they recognize its flaws, showing how faculty’s perceptions of the specific weaknesses of the modern knowledge production system are in large part ameliorated by their belief in the commitment of their colleagues to avoid “doing anything wrong.” Additionally, we contribute to work in the sociology of knowledge on academic evaluation processes by moving beyond Lamont and colleagues’ (Guetzkow et al. 2004; Lamont 2009; Lamont and Guetzkow 2016) focus on evaluations of proposed work to understand how faculty evaluate completed work. Further, while Lamont and colleagues focus on collective evaluation in group grant review panels, we identify the strategies faculty use when making evaluations individually.
Within the sociology of education, we contribute to the growing body of research on the “speeding up” of academic life (Berg and Seeber 2016; Menzies and Newson 2007; Vostal 2016) by demonstrating the specific impacts of increasing pressures to produce on how faculty both create and evaluate scholarship. In line with prior research on faculty members’ perceptions of the changing culture of the American academy, we find that faculty members perceive significant changes in expectations for research productivity in recent decades. While previous research has shown that increasing time demands and pressures to produce lead to significantly increased levels of stress and dissatisfaction among faculty (e.g., Araujo 2009; Horn 2016; Menzies and Newson 2007; Vostal 2016), in this study we extend those findings to the concrete consequences for the knowledge production system itself.
While much of the work on changes in higher education institutions has focused on the impact of these changes on teaching, learning, and student outcomes, we focus on how these processes shape the everyday practices of faculty as they go about their work as researchers. In so doing, we demonstrate how the sociology of education may be fruitfully linked analytically with the sociology of knowledge. We do this by connecting work on the organizational structure of higher education institutions from the sociology of education with work on evaluative processes from the sociology of knowledge to investigate how changes in institutional structures shape the processes that faculty members use to evaluate their own and others’ knowledge creation. While previous work has demonstrated that changes in the structure of colleges and universities create pressures and stresses for faculty, our study goes beyond existing work by showing how faculty respond to those pressures as they navigate their daily work of creating and evaluating knowledge.

4.3. Policy and Practical Implications

There are a number of potential policy implications of the findings we report here. First, additional attention to—and guidelines for evaluating—the quality of publications in formal faculty evaluation processes, such as tenure and promotion processes, may help to curb the escalating pressures to produce that faculty members widely describe as emphasizing quantity over quality of scholarship. Second, explicitly rewarding faculty members in tenure and promotion processes for their work in evaluating knowledge—in addition to their role in creating knowledge—may be an effective way of addressing multiple issues in the current academic knowledge production system (see also Armstrong 1997). Rewarding and incentivizing reviewing work may help faculty members prioritize it and devote more time to peer review, thus preventing manuscripts from being published “too early” and catching methodological and technical errors that might otherwise go unnoticed prior to publication.

4.4. Limitations and Future Directions

While we suspect that our findings may hold particularly strongly for the faculty at research-intensive institutions, the strong pattern in the literature of increasing productivity demands across institutional types (Finkelstein et al. 2016; Hermanowicz 2016b) suggests that faculty at other types of institutions may experience similar pressures, and may also respond in similar ways. More research is needed, however, to investigate how faculty at teaching-intensive institutions respond to the expectations to both produce and evaluate knowledge.
Additionally, the present study focused exclusively on the experiences of faculty in liberal arts disciplines. Faculty in other disciplines, especially applied and professional fields, may face different institutional structures and pressures and work within different evaluative cultures than those shared among faculty in the liberal arts. Faculty in applied and professional fields may also respond to the challenges of their own academic structures with different strategies than those in liberal arts disciplines.
Future research also should explore in more depth some of the tensions and challenges we identified in this study. In particular, more work is needed to understand how faculty members navigate pressures to both create and evaluate knowledge. How do faculty members prioritize and juggle making progress on their own manuscripts and grant proposals while also reviewing those of their colleagues? What strategies do faculty create and rely on to maximize their time and effectiveness in juggling these research-related obligations? While much previous work has focused on the tensions among research, teaching, and service, the tensions identified here point to a fruitful avenue for examining how faculty weigh the importance of knowledge creation against knowledge evaluation, and specifically the ways that institutional pressures and structures shape those understandings.
Finally, a pressing analytical concern remains unexplored in this study: faculty members’ perception of the academic “system” as something outside and beyond themselves. While faculty members describe the pressures to produce created by the tenure and promotion system as part of the “structure” of the academy, and many strongly critique those structures, we nonetheless note the central role that faculty members themselves play in tenure and promotion evaluations and processes. Future research should examine how faculty members draw mental distinctions between academic structures and their own roles in those structures. For instance, additional work could explicitly explore how faculty members navigate what they perceive as problematic increasing pressures to produce, and the accompanying challenges to high-quality work, as they make decisions in tenure and promotion evaluations.

Author Contributions

Conceptualization, B.S. and T.S.; formal analysis, B.S.; investigation, B.S.; project administration, B.S.; supervision, B.S. and T.S.; writing (original draft), B.S.; writing (review and editing), B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We acknowledge and appreciate assistance in the initial coding of the data from Benjamin Clary, Caralie Focht, and Quin Rich.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Disciplines represented among interviewees include: African-American Studies; Anthropology; Art History; Biology; Chemistry; Classics; Comparative Literature; Dance; Economics; English; Environmental Sciences; Film and Media Studies; French and Italian Studies; German Studies; History; Health; Interdisciplinary Studies; Latin American and Caribbean Studies; Linguistics; Math and Computer Science; Middle Eastern and South Asian Studies; Music; Neuroscience; Philosophy; Physics; Political Science; Psychology; Russian and East Asian Studies; Religion; Sociology; Spanish and Portuguese; Theater; Women’s, Gender, and Sexuality Studies; Writing. We include in the list of disciplines only those that “officially” exist at PRU to protect the anonymity of the three interviewees who identified with self-described disciplines not reflected in existing departments at PRU.

References

  1. Albert, Mathieu, Suzanne Laberge, and Wendy McGuire. 2012. Criteria for Assessing Quality in Academic Research: The Views of Biomedical Scientists, Clinical Scientists, and Social Scientists. Higher Education 64: 661–76. [Google Scholar] [CrossRef]
  2. Araujo, Emilia Rodrigues. 2009. ‘With a Rope Around Their Neck’: Grant Researchers Living in Suspended Time. New Technology, Work, and Employment 24: 230–42. [Google Scholar] [CrossRef]
  3. Archer, Louise. 2008. The New Neoliberal Subjects? Young/er Academics’ Constructions of Professional Identity. Journal of Education Policy 23: 265–85. [Google Scholar] [CrossRef]
  4. Armstrong, J. Scott. 1997. Peer Review for Journals: Evidence on Quality Control, Fairness, and Innovation. Science and Engineering Ethics 3: 63–84. [Google Scholar] [CrossRef] [Green Version]
  5. Barnett, Ronald. 2005. Reshaping the University: New Relationships Between Research, Scholarship, and Teaching. Maidenhead: Open University Press. [Google Scholar]
  6. Beljean, Stefan, Phillipa Chong, and Michèle Lamont. 2016. A Post-Bourdieusian Sociology of Valuation and Evaluation for the Field of Cultural Production. In Routledge International Handbook of the Sociology of Art and Culture. Edited by Laurie Hanquinet and Mike Savage. New York: Routledge, pp. 38–48. [Google Scholar]
  7. Berg, Maggie, and Barbara Seeber. 2016. The Slow Professor: Challenging the Culture of Speed in the Academy. Buffalo: University of Toronto Press. [Google Scholar]
  8. Berger, Peter, and Thomas Luckmann. 1967. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. New York: Anchor Books. [Google Scholar]
  9. Billot, Jennie. 2010. The Imagined and the Real: Identifying the Tensions for Academic Identity. Higher Education Research and Development 29: 709–21. [Google Scholar] [CrossRef] [Green Version]
  10. Bland, Carole J., Bruce A. Center, Deborah A. Finstad, Kelly R. Risbey, and Justin Staples. 2006. The Impact of Appointment Type on the Productivity and Commitment of Full-Time Faculty in Research and Doctoral Institutions. The Journal of Higher Education 77: 89–123. [Google Scholar] [CrossRef]
  11. Boltanski, Luc, and Laurent Thévenot. 2006. On Justification: Economies of Worth. Princeton: Princeton University Press. First published 1991. [Google Scholar]
  12. Bourdieu, Pierre. 1984. Distinction: A Social Critique of the Judgment of Taste [Translation]. Cambridge: Harvard University Press. First published 1979. [Google Scholar]
  13. Cefaï, Daniel, Bénédicte Zimmermann, Stefan Nicolae, and Martin Endress. 2015. Special Issue on the Sociology of Valuation and Evaluation: Introduction. Human Studies 38: 1–12. [Google Scholar]
  14. Chandler, James, Arnold Davidson, and Harry Harootunian, eds. 1991. Questions of Evidence: Proof, Practice, and Persuasion Across the Disciplines. Chicago: University of Chicago Press. [Google Scholar]
  15. Clark, Burton R. 1973. Development of the Sociology of Higher Education. Sociology of Education 46: 2–14. [Google Scholar] [CrossRef]
  16. Collins, Harry, and Robert Evans. 2007. Rethinking Expertise. Chicago: University of Chicago Press. [Google Scholar]
  17. Cote, James, and Anton L. Allahar. 2011. Lowering Higher Education: The Rise of Corporate Universities and the Fall of Liberal Education. Buffalo: University of Toronto Press. [Google Scholar]
  18. Deem, Rosemary. 2004. Sociology and the Sociology of Higher Education: A Missed Call or Disconnection? International Studies in Sociology of Education 14: 21–46. [Google Scholar] [CrossRef]
  19. Durkheim, Émile. 1956. Education and Sociology. New York: Free Press. [Google Scholar]
  20. Evans, John. 2005. Stratification in Knowledge Production: Author Prestige and the Influence of an American Academic Debate. Poetics 33: 111–33. [Google Scholar] [CrossRef]
  21. Finkelstein, Martin J., Valerie Martin Conley, and Jack H. Schuster. 2016. The Faculty Factor: Reassessing the American Academy in a Turbulent Era. Baltimore: Johns Hopkins University Press. [Google Scholar]
  22. Gale, Helen. 2011. The Reluctant Academic: Early-Career Academics in a Teaching-Orientated University. International Journal for Academic Development 16: 215–27. [Google Scholar] [CrossRef]
  23. Gieryn, Thomas. 1999. Cultural Boundaries of Science. Chicago: University of Chicago Press. [Google Scholar]
  24. Ginsberg, Benjamin. 2011. The Fall of the Faculty: The Rise of the All-Administrative University and Why It Matters. Oxford: Oxford University Press. [Google Scholar]
  25. Guetzkow, Joshua, Michele Lamont, and Gregoire Mallard. 2004. What is Originality in the Humanities and Social Sciences? American Sociological Review 69: 190–212. [Google Scholar] [CrossRef]
  26. Gumport, Patricia, ed. 2007. Sociology of Higher Education: Contributions and Their Contexts. Baltimore: Johns Hopkins University Press. [Google Scholar]
  27. Gurvitch, Georges. 1971. The Social Frameworks of Knowledge. New York: Harper and Row. [Google Scholar]
  28. Hermanowicz, Joseph. 2012. The Sociology of Academic Careers: Problems and Prospects. In Higher Education: Handbook of Theory and Research. Edited by John Smart and Michael Paulsen. Berlin and Heidelberg: Springer, pp. 207–48. [Google Scholar]
  29. Hermanowicz, Joseph. 2016a. The Proliferation of Publishing: Economic Rationality and Ritualized Productivity in a Neoliberal Era. American Sociologist 47: 174–91. [Google Scholar] [CrossRef]
  30. Hermanowicz, Joseph. 2016b. Universities, Academic Careers, and the Valorization of ‘Shiny Things’. Research in the Sociology of Organizations 46: 303–28. [Google Scholar]
  31. Herrmann, Andrew. 2017. The Beatings Will Continue Until Morale Improves. Cultural Studies ↔ Critical Methodologies 17: 347–56. [Google Scholar] [CrossRef]
  32. Hirschauer, Stefan. 2010. Editorial Judgments: A Praxeology of ‘Voting’ in Peer Review. Social Studies of Science 40: 71–103. [Google Scholar] [CrossRef]
  33. Hirschauer, Stefan. 2015. How Editors Decide: Oral Communication in Journal Peer Review. Human Studies 38: 37–55. [Google Scholar] [CrossRef]
  34. Holzner, Burkart. 1968. Reality Construction in Society. Cambridge: Schenkman Publishing Company. [Google Scholar]
  35. Horn, Sierk. 2016. The Social and Psychological Costs of Peer Review: Stress and Coping with Manuscript Rejection. Journal of Management Inquiry 25: 11–26. [Google Scholar] [CrossRef] [Green Version]
  36. Knorr Cetina, Karin. 1999. Epistemic Cultures: How the Sciences Make Knowledge. Cambridge: Harvard University Press. [Google Scholar]
  37. Kuklick, Henrika. 1983. The Sociology of Knowledge: Retrospect and Prospect. Annual Review of Sociology 9: 287–310. [Google Scholar] [CrossRef]
  38. Lamont, Michèle. 2009. How Professors Think: Inside the Curious World of Academic Judgment. Cambridge: Harvard University Press. [Google Scholar]
  39. Lamont, Michèle. 2012. Toward a Comparative Sociology of Valuation and Evaluation. Annual Review of Sociology 38: 201–21. [Google Scholar] [CrossRef]
  40. Lamont, Michèle, and Joshua Guetzkow. 2016. How Quality is Recognized by Peer Review Panels: The Case of the Humanities. In Research Assessment in the Humanities. Edited by Michael Ochsner, Sven E. Hug and Hans-Dieter Daniel. Zurich: Springer Open, pp. 31–41. [Google Scholar]
  41. Lamont, Michèle, and Katri Huutoniemi. 2011. Comparing Customary Rules of Fairness: Evidence of Evaluative Practices in Peer Review Panels. In Social Knowledge in the Making. Edited by Charles Camic, Neil Gross and Michèle Lamont. Chicago: University of Chicago, pp. 209–32. [Google Scholar]
  42. Lamont, Michèle, and Laurent Thévenot. 2000. Rethinking Comparative Cultural Sociology: Repertoires of Evaluation in France and the United States. Cambridge: Cambridge University Press. [Google Scholar]
  43. Mallard, Grégoire, Michèle Lamont, and Joshua Guetzkow. 2009. Fairness as Appropriateness: Negotiating Epistemological Differences in Peer Review. Science, Technology, & Human Values 34: 1–34. [Google Scholar]
  44. Mannheim, Karl. 1936. Ideology and Utopia: An Introduction to the Sociology of Knowledge. London: Routledge and Kegan Paul. [Google Scholar]
  45. McCarthy, E. Doyle. 1996. Knowledge as Culture: The New Sociology of Knowledge. New York: Routledge. [Google Scholar]
  46. Menzies, Heather, and Janice Newson. 2007. No Time to Think: Academics’ Life in the Globally Wired University. Time & Society 16: 83–98. [Google Scholar]
  47. Miles, Matthew, Michael Huberman, and Johnny Saldaña. 2014. Qualitative Data Analysis: A Methods Sourcebook. Thousand Oaks: Sage Publications. [Google Scholar]
  48. Morgan, David. 2018. Iterative Thematic Inquiry: A Method for Analyzing Qualitative Data. Paper presented at Qualitative Methods Conference, Banff, AB, Canada, May 1–3. [Google Scholar]
  49. Namer, Gerard. 1984. The Triple Legitimation: A Model for a Sociology of Knowledge. In Society and Knowledge. Edited by Nico Stehr and Volker Meja. New Brunswick: Transaction Books, pp. 209–222. [Google Scholar]
  50. Neumann, Anna. 2009. Professing to Learn: Creating Tenured Lives and Careers in the American Research University. Baltimore: Johns Hopkins University Press. [Google Scholar]
  51. Nowell, Lorelli, Jill Norris, and Deborah White. 2017. Thematic Analysis: Striving to Meet the Trustworthiness Criteria. International Journal of Qualitative Methods 16: 1–13. [Google Scholar] [CrossRef]
  52. Saha, Lawrence J. 2015. Educational Sociology. In International Encyclopedia of the Social and Behavioral Sciences. Edited by James D. Wright. New York: Elsevier, pp. 289–96. [Google Scholar]
  53. Schuster, Jack H., and Martin J. Finkelstein. 2006. The American Faculty: The Restructuring of Academic Work and Careers. Baltimore: Johns Hopkins University Press. [Google Scholar]
  54. Slaughter, Sheila, and Gary Rhoades. 2004. Academic Capitalism and The New Economy: Markets, State, and Higher Education. Baltimore: Johns Hopkins University Press. [Google Scholar]
  55. Stevens, Mitchell L., Elizabeth A. Armstrong, and Richard Arum. 2008. Sieve, Incubator, Temple, Hub: Empirical and Theoretical Advances in the Sociology of Higher Education. Annual Review of Sociology 34: 127–51. [Google Scholar] [CrossRef] [Green Version]
  56. Swidler, Ann, and Jorge Arditi. 1994. The New Sociology of Knowledge. Annual Review of Sociology 20: 305–29. [Google Scholar] [CrossRef]
  57. Trent, William T., Jomills Henry Braddock, and Ronald D. Henderson. 1985. Sociology of Education: A Focus on Education as an Institution. Review of Research in Education 12: 295–336. [Google Scholar] [CrossRef]
  58. Tsay, Angela, Michele Lamont, Andrew Abbott, and Joshua Guetzkow. 2003. From Character to Intellect: Changing Conceptions of Merit in the Social Sciences and Humanities: 1951–1971. Poetics 31: 23–49. [Google Scholar] [CrossRef]
  59. Tuchman, Gaye. 2009. Wannabe U: Inside the Corporate University. Chicago: University of Chicago Press. [Google Scholar]
  60. Tuchman, Gaye. 2011. The Unintended Decentering of Teaching and Learning. Sociology 48: 216–19. [Google Scholar] [CrossRef] [Green Version]
  61. Vostal, Filip. 2016. Accelerating Academia: The Changing Structure of Academic Time. New York: Palgrave. [Google Scholar]
  62. Willer, Judith. 1971. The Social Determination of Knowledge. Englewood Cliffs: Prentice-Hall. [Google Scholar]
Table 1. Interview Sample Characteristics.

Appointment Type (Number of Interviewees)
Assistant Professor: 11
Associate Professor: 19
Full Professor: 40
Teaching Position: 30

Gender
Female: 43
Male: 54
Declined: 3

Race
White: 83
Asian: 4
Multiracial or Biracial: 3
African-American: 5
Hispanic or Latinx: 2
Declined: 3

Age
20s: 1
30s: 15
40s: 27
50s: 33
60s: 19
70s: 4
Declined: 1

PhD-Granting Institution Type
Public: 62
Private: 38

Share and Cite

MDPI and ACS Style

Simula, B.; Scott, T. The Impact of Pressures to Produce on Knowledge Production and Evaluation in the Modern Academy. Soc. Sci. 2020, 9, 64. https://doi.org/10.3390/socsci9050064
