
Identifying Relevant Anti-Science Perceptions to Improve Science-Based Communication: The Negative Perceptions of Science Scale

Brian Lamb School of Communication, Purdue University, West Lafayette, IN 47907, USA
Author to whom correspondence should be addressed.
Soc. Sci. 2018, 7(4), 64
Submission received: 14 March 2018 / Revised: 6 April 2018 / Accepted: 10 April 2018 / Published: 13 April 2018


Science communicators and scholars have struggled to understand what appears to be increasingly frequent endorsement of a wide range of anti-science beliefs and a corresponding reduction of trust in science. A common explanation for this issue is a lack of science literacy/knowledge among the general public (Funk et al. 2015). However, other possible explanations have been advanced, including conflict with alternative belief systems, other contextual factors, and even cultural factors (Gauchat 2008; Kahan 2015) that are not necessarily due to knowledge deficits. One of the challenges is that there are limited tools available to measure the range of possible underlying negative perceptions of science that could provide a more nuanced framework within which to improve communication around important scientific topics. This project describes two studies detailing the development and validation of the Negative Perceptions of Science Scale (NPSS), a multi-dimensional instrument that taps into several distinct sets of negative science perceptions: Science as Corrupt, Science as Onerous, Science as Heretical, and Science as Limited. Evidence for the reliability and validity of the NPSS is described. The sub-dimensions of the NPSS are associated with a range of specific anti-science beliefs across a broad set of topic areas, above and beyond the variation explained by demographics (including education, sex, age, and income) and by political and religious ideology. Implications of these findings for improving science communication and science-related message tailoring are discussed.

1. Introduction

Policy debates on many pressing social issues, as well as personal decision-making across a range of contexts, hinge upon the public having an accurate view of the underlying science associated with those issues. However, recent surveys suggest that there are often considerable differences between scientists and publics on certain issues. A recent Pew study, for example, identifies differences between the general lay public and scientists on a range of issues and demonstrates how the two groups view the world through different lenses. One finding was that 98% of scientists agree with the statement that humans have evolved over time, while only 65% of the general public agrees with that statement (Funk et al. 2015). Sometimes the problem is characterized as a deficit in scientific understanding among the general public. Pew research does indicate gaps in scientific knowledge among the general public: for example, 86% of respondents knew the core was the hottest layer of the earth, while only 34% knew that water boils at lower temperatures at higher altitudes (Funk and Goo 2015). Certainly, members of the public who lack scientific understanding may find themselves disagreeing with scientists, but current research suggests that disagreements between publics and scientific consensus may reflect factors other than science literacy deficits.
A part of the problem is the recognition of the existence of multiple publics. Sociologist Gordon Gauchat identifies three major contributors to anti-science attitudes: low scientific knowledge/literacy, strong Evangelical religious faith, and social embeddedness/social context (Gauchat 2008, 2012). These differences reflect different undercurrents in contemporary changes in anti-science sentiments. An individual may, indeed, challenge certain scientific findings or perspectives because of key knowledge deficits. However, some publics may be perfectly aware of, and reasonably conversant with, the relevant science and still disagree due to conflicting pre-existing beliefs. In addition, certain publics may be embedded in social contexts that encourage perspectives resulting in anti-science sentiments in certain areas. The anti-vaccination public, for example, may exemplify a social context in which individuals struggling with similar difficult issues, such as raising children with autism, seek answers that they believe are not well explained by scientists. Jelen and Lockett (2014) write that, “One cannot assume, for example, that respondents who have low levels of confidence in ‘the scientific community’ will be skeptical of specific claims about global warming, or will be opponents of the teaching of evolution in public schools.” The context or culture in which a public is embedded may be an important component in predicting anti-science beliefs (Kahan 2015).
A key challenge for scientists and science communicators is to avoid falling into the trap of assuming that any evident disagreement between various publics and scientists is due entirely, or even predominantly, to science literacy deficits. Such a belief constrains the range of options available to resolve differences or improve dialogue when such differences exist. Recent research supports this concern. A survey of U.S. and U.K. scientists (Besley and Nisbet [2011] 2013) found that 89.9% of scientists believe that the public does not know very much about science. While this belief may be partially founded, acting on it as the only source of influence on anti-science beliefs may exacerbate what many are beginning to see as a broad crisis in science communication (Millstone and van Zwanenberg 2000). In other words, the scientific industry has focused on communicating “the facts” of scientific findings, which may not address publics’ fundamental concerns. Millstone and van Zwanenberg (2000) note that, “Bridging the gap in these opinions will largely be the responsibility of scientists, who must restore faith in the public that science can answer—and not simply theorize—whether the Earth is warming, GMOs are safe, or childhood vaccines are a responsible public health measure” (p. 20).
One issue is potentially a measurement problem in how well we track lay public science literacy. The National Science Foundation’s science knowledge scale is often used to describe and characterize various publics and their relationship to science. This scale is included in the General Social Survey and has been used to make broad generalizations about various publics. The NSF’s science knowledge scale, however, has come under criticism with regard to reliability and validity, with researchers arguing that conclusions drawn from it should be treated cautiously (Roos 2014). One concern Roos raises is that some of what are presumed to be simple science literacy questions, such as questions related to the age of the universe or evolutionary processes, may not actually reflect a science knowledge deficit. Rather, they may reflect disagreement with a known fact based on religious orientation, which generates a mistrust of science, or scientific findings, in a particular domain. Someone may understand that scientists believe the earth is billions of years old, and may even understand the scientific basis for these claims, yet still disagree with the idea because it contradicts competing beliefs they hold more strongly. They may even believe these perspectives to be refuted by other evidence. Such instruments, therefore, may be tapping into something quite different than just a knowledge deficit.
A second measurement-related issue is that scales associated with negative perceptions of science or scientific consensus tend to be either generalized measures of world views or narrow measures of very specific scientific issues. For instance, Kahan and Braman’s (2006) cultural cognition perspective identifies factors such as Individualism-Communitarianism (e.g., “If the government spent less time trying to fix everyone’s problems, we’d all be a lot better off.”) and Hierarchy-Egalitarianism (e.g., “We have gone too far in pushing equal rights in this country”). These dimensions have been shown to bias how individuals perceive information on polarizing science-related topics such as HPV vaccination (Kahan et al. 2010), climate change, nuclear waste disposal, and handguns (Kahan et al. 2011). Other measures associated with negative science perceptions are often designed to be context specific. For instance, Connor and Siegrist’s (2010) examination of gene technology acceptance utilized measures of biology knowledge, gene technology knowledge, and trust in gene technology related industries and government organizations. This approach, while extremely informative at the specific context level, uses measurement devices that are not easily generalizable beyond the context under study.
Finally, some study protocols use measures with potential inherent limitations. Gauchat (2008) acknowledged that his use of GSS data, though creative and well justified, necessarily limited the choice of items available to measure the dimensions of his model. It is also common for studies to introduce study-specific ad hoc measures designed to address theoretically relevant constructs associated with a specific potential predictor of a science-related issue. Brossard and Nisbet’s (2007) four-item deference to scientific authority scale (e.g., “Scientists know best what is good for the public.”) was found to be associated with support for biotechnology. Ad hoc scales may indeed be valid and reliable measures, but they often are not subjected to the development and validation processes associated with measures designed to be used beyond the immediate project.
The current literature suggests that there are at least three conceptually different factors, possibly more, that could underlie negative perceptions of prevailing scientific views, the scientific process itself, or the scientists who advocate for them. However, there are currently no integrated tools for measuring variation in these factors, or other possible factors, to help distinguish between different possible sources of specific anti-science views. Without such tools, it becomes difficult to know how to develop evidence-based strategies for engaging publics effectively.
The current project addresses this need by describing the development of a preliminary scale that reflects the multidimensional nature of anti-science perceptions and by demonstrating the extent to which different aspects of anti-science perceptions may underlie specific areas of controversy between scientists and various publics. Our contention is that if negative perceptions of science are multidimensional, then various aspects of these perceptions should be differentially related to different anti-science beliefs. This, in turn, might suggest different communicative approaches for improving dialogue with relevant publics. Hence, this project also seeks to provide evidence for the validity of a multidimensional anti-science attitude scale by examining its relationship with a range of specific issues in which anti-science sentiment has been identified. Having tools to better measure target audience variation in negative perceptions of science would help scientists and scientific organizations engage in conversation with publics more effectively.

2. Study 1

In Study 1, we detail the methodology used to develop a preliminary negative perceptions of science scale, from initial item generation through data collection to subsequent analysis and validation.

2.1. Methods

2.1.1. Sample and Procedures

Upon IRB approval, 502 participants were recruited to participate in an online survey through Amazon’s Mechanical Turk (mTurk) system. mTurk is an online tool for recruiting individuals to complete a wide range of tasks. Increasingly, mTurk is used to recruit individuals for participation in research projects and has been found to provide viable samples for a wide range of common research protocols (Casler et al. 2013; Christenson and Glick 2012; Goodman et al. 2013). Participants were compensated $1.00 each upon completion of the survey. To be eligible, participants needed to be 18 years of age or older and reside in the United States.
The online survey contained a variety of demographic questions, a pool of items potentially associated with different aspects of a multidimensional negative perceptions of science scale (NPSS), and items related to specific anti-science beliefs. The participants reflected meaningful variation across a wide range of demographics: age (M = 37.53, SD = 12.22), sex (Male = 49.4%, Female = 50.6%), race/ethnicity (White = 78.6%, African American = 7.8%, Hispanic/Latino = 4.2%, Asian = 7.2%, Other = 2.2%), education (HS or Less = 10.0%, Some College = 37.3%, College Degree = 38.5%, Graduate Degree = 14.2%), and family income (0–30K = 27.1%, 30–40K = 13.9%, 40–50K = 12.7%, 50–60K = 10.0%, 60–70K = 9.2%, 70–80K = 6.6%, 80–90K = 4.4%, 90–100K = 4.4%, 100K+ = 11.8%).

2.1.2. Measures

Negative perceptions of science scale (NPSS). To inform our scale development activity, and build on the themes in the extant literature (primarily Gauchat 2008, 2012), we initially asked 422 college students participating in a survey on a related project to list up to 5 reasons they “can think of for why people in the general public would hold negative beliefs about science, scientists, or scientific methods, either in general or in specific areas.” The goal of this question was to help ensure we captured a range of possible sources of potential anti-science attitudes to supplement those reasons already identified in the academic literature. This question generated 1454 specific statements. We conducted a thematic analysis of these items. Our analysis identified 8 potential themes that we believed reflected differentiable reasons why one might hold negative perceptions of science. Table 1 identifies these themes and provides example participant statements associated with each theme.
Using these themes, along with issues identified in the literature, a pool of 45 candidate belief statements was generated for possible inclusion in the initial NPSS scale.
Specific anti-science beliefs. Drawing inspiration from a recent National Geographic Magazine issue highlighting “The War on Science” (Achenbach 2015), statements were created to reflect four major anti-science issues: “Humans are not responsible for global warming/climate change.” (M = 2.07, SD = 1.10); “The Earth is only a few thousand years old.” (M = 1.63, SD = 1.03); “Vaccines can cause autism.” (M = 2.09, SD = 1.14); and “Genetically modified organisms (GMOs) are harmful.” (M = 3.06, SD = 1.15). All statements were represented as 5-point Likert items (1 = Strongly Disagree, 5 = Strongly Agree). These items were developed to facilitate validation of the NPSS. Since our premise is that anti-science attitudes are multidimensional and differentially related to different scientific issues, these items were examined as single-item constructs and not aggregated into a larger scale.
Ideological orientations. Two items were used to assess what we refer to as general ideological orientations. In particular, much of the debate around certain science-based topics aligns with one’s general political orientation (liberal/conservative) and religiosity (non-religious/religious). Religiosity was measured through the item “How would you rate your religious orientation?” on a 5-point scale (1 = Very Unreligious, 5 = Very Religious), M = 2.52, SD = 1.45. Political orientation was measured on a 5-point scale through the item “How would you rate your political orientation?” (1 = Very Liberal, 5 = Very Conservative), M = 2.66, SD = 1.12. Our presumption was that some dimensions underlying negative perceptions of science should be associated with variation in religiosity and political orientation, but others may not.

2.2. Results

Initial NPSS construction and validation. To facilitate exploration and validation, we randomly split the data into two samples of 251 participants: one for an initial exploratory investigation and one to serve as a validation sample. An exploratory factor analysis (EFA) using principal axis extraction and a direct oblimin rotation was conducted on the forty-five candidate NPSS items to identify potential underlying dimensions. A parallel analysis was conducted to identify the number of latent factors operating above what could be explained by chance. This analysis suggested the existence of four latent factors. Therefore, we conducted follow-up analyses seeking a simple structure focused on the clarification of these four underlying dimensions as a potentially parsimonious solution. Items were removed if they loaded below 0.5 on their primary factor or failed to exceed their largest secondary loading by at least 0.2. The resulting four-factor solution accounted for 57.14 percent of the overall variation in the 23 retained items. Retained items and their primary loadings are presented in Table 2. Based on our interpretation of the content of the items associated with each factor, we labeled these negative perceptions as: Science as Corrupt, Science as Onerous, Science as Limited, and Science as Heretical. Corrupt items reflect strong distrust of scientists or the scientific process and suggest that scientists may have ulterior motives that seriously compromise the integrity of their work. Onerous items reflect a more primitive aversion to science, casting it as complex, uninteresting, and potentially threatening. Heretical items reflect a dimension in which science conflicts with other deeply held beliefs, specifically religious beliefs, that are considered more valid than scientific beliefs. Finally, Limited items reflect a belief that science cannot explain everything and that there are potentially other ways of knowing or understanding the world.
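The retention rules described above can be illustrated in code. The sketch below is a hypothetical Python illustration, not the authors' analysis scripts: a Horn-style parallel analysis that counts factors whose observed eigenvalues exceed those from random data, plus an item filter applying the 0.5 primary-loading and 0.2 cross-loading criteria. The 95th-percentile comparison is an assumption, since the study does not report which variant of parallel analysis was used.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: count factors whose observed correlation-matrix
    eigenvalues exceed the chosen percentile of eigenvalues from random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        noise = rng.standard_normal((n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    return int(np.sum(obs > np.percentile(rand, percentile, axis=0)))

def retain_items(loadings, primary_min=0.5, gap_min=0.2):
    """Keep items whose largest absolute loading is at least primary_min and
    exceeds the next-largest (cross) loading by at least gap_min."""
    keep = []
    for i, row in enumerate(np.abs(np.asarray(loadings, dtype=float))):
        top, second = np.sort(row)[::-1][:2]
        if top >= primary_min and (top - second) >= gap_min:
            keep.append(i)
    return keep
```

On simulated data with a known two-factor structure, `parallel_analysis` recovers two factors; `retain_items` drops items with weak primary loadings or strong cross-loadings, mirroring the rule applied to the 45 candidate items.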
Confirmation of initial NPSS scale. Using sample 2, we assessed the preliminary fit of this four-factor solution using confirmatory factor analysis. Initial validation of the items demonstrated poor fit (χ2 = 643, df = 224, p &lt; 0.001; RMSEA = 0.086; CFI = 0.86; TLI = 0.85; SRMR = 0.07). Examination of factor loadings and modification indices suggested the removal of problematic items. After removal of these items, fit markedly improved (χ2 = 149, df = 71, p &lt; 0.001; RMSEA = 0.067; CFI = 0.95; TLI = 0.94; SRMR = 0.049). Table 2 identifies items retained in the CFA model along with standardized parameter estimates.
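As a sanity check, RMSEA values like those reported here can be recovered from a model's chi-square, degrees of freedom, and sample size via the standard formula. This sketch assumes N = 251 (the validation sample); small discrepancies with the published values reflect rounding of the reported chi-squares.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Initial Study 1 model: chi2 = 643, df = 224 -> RMSEA ~ 0.086, matching the report.
initial_fit = rmsea(643, 224, 251)
# Final Study 1 model: chi2 = 149, df = 71 -> RMSEA ~ 0.066, near the reported 0.067.
final_fit = rmsea(149, 71, 251)
```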
Further validation of the initial NPSS subscales. Zero-order correlations were computed among all the study variables using items retained in the CFA analysis from sample 2. Reliability coefficients and descriptive statistics for each subscale are presented in Table 2. Our conjecture was that a valid measure of negative perceptions of science, in general, should be associated with more specific anti-science beliefs and, potentially, political and religious ideology. Table 3 reports pairwise correlations for all these variables, indicating that the initial NPSS scales were, indeed, significantly and meaningfully related to anti-science beliefs, as well as political conservatism and religiosity. Several observations can be made about these initial results. First, all study variables were significantly and positively associated with each other. Relative differences in the magnitude of the correlations are potentially informative and speak to conceptually meaningful differences. For instance, science as heretical was most strongly associated with religiosity (r = 0.71) and endorsement of belief in a young earth (r = 0.63) but had lower correlations with other items. In contrast, those who saw science as corrupt were most likely to endorse the belief that vaccinations may be linked to autism (r = 0.50). Finally, political orientation (higher levels of political conservatism) was most strongly linked to the belief that humans are not responsible for global warming/climate change (r = 0.45). Among the four specific anti-science beliefs in this study, this issue is the one that has been most politicized in public debate.
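Subscale reliability coefficients of the kind referenced above are conventionally Cronbach's alpha. The paper does not state which coefficient was used, so the following is a hedged illustration of the standard alpha computation rather than the authors' code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return k / (k - 1) * (1 - item_var / total_var)
```

Perfectly parallel items yield an alpha of 1, while uncorrelated items yield an alpha near 0, which is why higher values indicate stronger internal consistency of a subscale.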

2.3. Discussion

These results provide initial evidence of the reliability and validity of the NPSS. Our measures were conceptually meaningful and produced results consistent with expectations regarding their association with other measures designed to help provide an initial assessment of their validity. However, the initial set of items and associated subscales have some notable limitations. First, though we identified four relevant themes, several items associated with these themes were eliminated due to lower than ideal primary loadings and larger than ideal cross loadings. Hence, despite acceptable model fit, there is a need to improve the overall psychometric properties of the subscales. Additionally, one of our themes, science as heretical, only contained two items. Hence, our initial pool of items associated with each factor would likely benefit from further refinement and validation against a broader range of anti-science beliefs. Therefore, we conducted a second study to address these limitations.

3. Study 2

Study 1 provided an initial attempt to identify potential dimensions underlying negative perceptions of science. We followed this initial study with a second study designed to further refine the NPSS items and address other limitations of Study 1. Specifically, Study 2 addresses this need for an expanded set of items to improve measurement limitations for the major factors identified and provide further evidence of the validity of these measures to explain variation in a broader range of specific anti-science beliefs.

3.1. Methods

3.1.1. Recruitment and Participants

Recruitment of subjects, eligibility requirements, and participation incentives were identical to Study 1. Five hundred and five completed surveys were collected. Similar to Study 1, the participants reflected meaningful variation across a range of demographics: age (M = 37.32, SD = 12.17), sex (Male = 54.6%, Female = 45.4%), race/ethnicity (White = 79.1%, African American = 7.8%, Hispanic/Latino = 5.6%, Asian = 5.4%, Other = 2.2%), education (HS or Less = 11.1%, Some College = 34.3%, 4 Year College Degree = 40.4%, Graduate Degree = 14.1%), and family income (0–30K = 22.1%, 30–40K = 13.1%, 40–50K = 12.9%, 50–60K = 10.4%, 60–70K = 10.0%, 70–80K = 8.2%, 80–90K = 5.0%, 90–100K = 5.0%, 100K+ = 13.1%).

3.1.2. Measures

Negative perceptions of science. This version of the survey included a revised set of 32 items, including the retained items from Study 1. A few items from the original survey were slightly reworded for clarity, and additional items were designed to increase the number of items associated with the sub-dimensions identified in Study 1.
Anti-science beliefs. We expanded our initial list of four specific anti-science belief items from Study 1 to eleven in Study 2. These items were drawn from a range of sources often highlighted in political debates featured in the media. The goal was to provide a broader set of items to facilitate further validation of an enhanced version of the NPSS under the same premise that underlying NPSS factors will vary in how strongly they are associated with specific anti-science beliefs. All statements were represented as 5-point Likert items (1 = Strongly Disagree, 5 = Strongly Agree). Items included: “Humans are not responsible for global warming/climate change.” (M = 2.06, SD = 1.13); “The earth is only a few thousand years old.” (M = 1.63, SD = 1.01); “Vaccines can cause autism.” (M = 2.17, SD = 1.20); “Humans do not share common ancestors with other species.” (M = 2.21, SD = 1.26); “Genetically modified organisms (GMOs) are harmful.” (M = 3.01, SD = 1.18); “The moon landing was a hoax.” (M = 1.66, SD = 0.98); “A person’s astrological (Zodiac) sign influences their behavior.” (M = 1.89, SD = 1.06); “Homeopathy can help cure many common diseases.” (M = 2.88, SD = 1.19); “Cures for cancer have been suppressed by those with a financial stake in cancer treatment.” (M = 2.77, SD = 1.32); “Water should not be fluoridated because of its harmful effects.” (M = 2.72, SD = 1.14); and “Some people have extra sensory perception.” (M = 2.79, SD = 1.25). As in Study 1, these items were treated individually in subsequent analyses.
Ideological orientations. The same two items were used to assess general ideological orientations in this study as in study one: political orientation (M = 2.68, SD = 1.16) and religiosity (M = 2.59, SD = 1.47).

3.2. Results

NPSS refinement. To facilitate continued confirmation of the expanded set of items, we conducted a CFA. We previously identified a conceptually meaningful latent factor structure in Study 1 but wanted to improve measurement of those latent constructs in Study 2. Our initial CFA was composed of items specifically associated with each dimension: Corrupt, 10 items; Onerous, 8 items; Heretical, 6 items; and Limited, 8 items. This initial analysis showed inadequate fit (χ2 = 1626.13, df = 489, p &lt; 0.001; RMSEA = 0.078; CFI = 0.86; TLI = 0.85; SRMR = 0.07). After examining modification indices to identify items with strong cross loadings, as well as parameter estimates to identify items with low loadings on their respective latent variables, problematic items were removed in an iterative process, focusing on the most problematic items first, to improve overall model fit while preserving a balance of items across the latent variables. This process yielded a model with considerably stronger fit that contained five observed variables per latent construct and retained all but one of the items retained from Study 1 (χ2 = 394.32, df = 164; RMSEA = 0.058, p = 0.033; CFI = 0.955; TLI = 0.948; SRMR = 0.049). Table 4 identifies each of the final NPSS scale items used in subsequent analyses, along with descriptive statistics, subscale reliabilities, and standardized parameter estimates from the final CFA.
Our interpretation of these factors remains similar to Study 1. The first factor, Science as Corrupt, reflects variation in the view that scientists have underlying agendas, often financial, that influence results in ways that cannot be trusted. Individuals high in this dimension may be more likely to endorse anti-science beliefs founded in conspiratorial ideation. The second factor, Science as Onerous, reflects what we think of as a “primitive” rejection of science based on a lack of understanding of science and how it works, which may also bring an associated fear, uncertainty, or even disinterest. Individuals who score high on this dimension may be generally contrarian regarding a range of scientific issues and reject scientific findings or recommendations “just because.” This factor corresponds most closely to what we refer to as the historically dominant “deficit” model of anti-science beliefs, to the extent that individuals who do not have reasonable levels of science literacy are more likely to find science complex and uninteresting. The third factor, Science as Heretical, reflects the view that religious beliefs provide people with the answers they need to understand the world, and that this is reason to reject scientific findings that conflict with these strongly held beliefs. Individuals rating high on this factor may need confirmation that scientific findings are in harmony with, or at least not antagonistic to, strongly held existing beliefs before they can support them. The final factor, Science as Limited, reflects the belief that science has fundamental characteristics that compromise its reliability and trustworthiness. Unlike corrupt or heretical views, individuals who see science as limited perceive the enterprise of science itself as somehow flawed, irrespective of specific scientific findings. Perceived inconsistencies among scientists may encourage individuals high on this dimension to reject widely held views.
We attempted to provide further concurrent validity for our instrument by examining how well the NPSS factors predict a broader range of specific anti-science beliefs than in the prior study. Table 5 shows zero order correlations between all study variables.
Table 6 reports the results of a set of hierarchical regression models examining the association of the NPSS subscales with eleven specific anti-science beliefs above and beyond demographic variables entered in block one, and religiosity and political orientation entered in block two. Each overall model was significant and, in each model, one or more NPSS subscales significantly improved the prediction of anti-science belief endorsements above and beyond the other predictors. Examination of the beta weights indicated that each subscale was associated with some, but not all, anti-science beliefs, and most anti-science beliefs reflected different patterns of relationships with the NPSS subscales.
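The block-entry logic behind such models can be sketched as follows: each block of predictors is entered in turn, and the increment in R² over the previous block is what the significance tests evaluate. This is a hypothetical illustration using ordinary least squares via numpy, not the authors' analysis code, and it omits the F-test on each increment.

```python
import numpy as np

def r_squared(X, y):
    """OLS R^2 with an intercept term included."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

def hierarchical_r2(blocks, y):
    """Cumulative and incremental R^2 as predictor blocks enter in order,
    e.g. [demographics, ideology, NPSS subscales]. Each block is (n, k)."""
    results, entered, prev = [], [], 0.0
    for block in blocks:
        entered.append(np.asarray(block, dtype=float))
        r2 = r_squared(np.hstack(entered), y)
        results.append((r2, r2 - prev))  # (cumulative R^2, delta R^2)
        prev = r2
    return results
```

A subscale block "adds prediction above and beyond" earlier blocks exactly when its delta R² entry is reliably greater than zero.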

3.3. Discussion

This study provided additional confirmation of our preliminary version of the NPSS, improved measurement of those factors, and provided additional evidence supporting the reliability and validity of a refined version of the NPSS. Consistent with the first study, this study produced a version of the NPSS measuring four latent constructs: science as corrupt, science as onerous, science as heretical, and science as limited. The latent structure of the NPSS was strong based on fit indices from our confirmatory factor analysis, and each of the constructs showed adequate to strong internal consistency. Furthermore, one or more of the sub-dimensions of the NPSS were predictive of eleven different anti-science beliefs after controlling for demographics, political leaning, and religiosity.

4. General Conclusions

In these two studies, we have developed a scale for measuring individual differences in negative perceptions of science that may underlie endorsements of anti-science beliefs. Furthermore, we have found that these individual differences (science as onerous, heretical, limited, and corrupt) account for variation across a host of scientific beliefs beyond that explained by demographics (including education level), political orientation, and religiosity. Consistent with other emerging scholarship (Gauchat 2008, 2012), these findings suggest that anti-science beliefs should not be treated as identical, given differences in the factors that may underlie them. However, research to date has yet to develop psychometrically validated self-report instruments for investigating these dimensions.

4.1. Implications for Science Communication

The NPSS provides insights into public resistance to science by proposing four distinct factors that characterize different aspects of negative perceptions of science. We know audiences sometimes understand the science that underlies a particular issue, even though they may reject the scientific theory or explanation. One of our domains (science as onerous) reflects aspects likely associated with the general “deficit” model that has been the dominant paradigm in much science communication research and practice. The other dimensions (limited, heretical, and corrupt) tap conceptually distinct components that do not clearly fit within a deficit model. In fact, viewing science as onerous was a significant predictor for only five of our 11 target beliefs, and it was the dominant predictor for only one anti-science belief (astrology). In contrast, viewing science as corrupt was a significant predictor for six of our target beliefs and the dominant predictor for four (global warming, moon landing, cancer cures, and fluoride). Science as heretical was significantly associated with six of the target beliefs and the dominant predictor for four (young earth, vaccines, evolution, GMOs). Finally, science as limited was a significant predictor for three target beliefs and the dominant predictor for two (homeopathy and ESP). This pattern of results is consistent with our view that the deficit model, while an important contributor to anti-science beliefs, is not clearly the best explanation, in and of itself, for anti-science beliefs, and in some areas plays an extremely limited role, if any. The key takeaway from this analysis is that scientists and science communicators, when working with an antagonistic public, need to target the salient underlying factors at work in a specific context.
In addition to looking at how specific NPSS factors function across a range of anti-science beliefs, it is potentially useful to look at the range of NPSS factors at work within specific anti-science beliefs. Six of our target beliefs were significantly predicted by only one dominant NPSS factor (global warming, evolution, astrology, cancer cures, fluoride, and ESP). Two NPSS factors were significant predictors of three anti-science beliefs (young earth, GMOs, and homeopathy), and three NPSS factors were significant predictors of two anti-science beliefs (vaccines and moon landing). This variation in the number of NPSS factors at work in a given anti-science issue suggests that scientists need to reflect on how best to frame messages in ways that address multiple types of anti-science concerns concurrently. Simply working to address one issue when multiple factors are at work is problematic. Moreover, when multiple factors are at work and only one of those dimensions is taken seriously by science communicators, publics may be justified in viewing scientists as "out of touch" or dismissive of their concerns. If a person has three serious objections to a policy proposition and only one is "refuted," the overall result is not particularly convincing. It is also possible that, within some target publics, multiple apparent objections to science-based positions are not all held concurrently by individual members of the public; instead, different issues matter to different sub-populations within a given public. A hidden challenge for science communicators, in such instances, is that attempting to address multiple negative perceptions in a particular public may be perceived as presumptuous by those who do not share all of these concerns.
These insights can help scientists and science communicators target specific audiences more effectively by facilitating their ability to adopt the perspective of those who voice objections to scientific findings or consensus. If we know why a particular public holds negative perceptions about a specific scientific finding, we can tailor campaigns and messages accordingly. In the following paragraphs, we briefly speculate about how knowledge of the NPSS dimensions at work in a relevant public might be used to tailor messages. Future research building on the themes identified in these studies should emphasize identifying message strategies that improve public acceptance of, and the quality of dialogue about, well-supported scientific findings, especially those that have far-reaching policy implications or impact public safety and well-being. In particular, experimental research designed to explore causal relationships between message strategies and science acceptance is recommended.
One set of implications of these results for social-behavioral communication research is the need to systematically investigate message and source factors that can improve the reception of messages designed to communicate significant scientific findings when different negative perceptions are at work.
If anti-science beliefs on a topic are related to science as onerous, individuals may disengage from important scientific discussions and debates. One potentially productive line of research could focus on message strategies for increasing the perceived personal relevance of the issue. While not tested in these studies, individuals high on the onerous dimension may be less likely to process scientific messages centrally (i.e., to engage in critical evaluation of the substantive message content) because they lack the motivation to do so (Cacioppo and Petty 1982; Cacioppo et al. 1996). This is a core premise of work in the Elaboration Likelihood Model tradition. Alternatively, researchers could investigate ways of providing low-motivation publics with alternative heuristics or peripheral cues that support acceptance of critical science issues, such as the use of respected or well-known individuals as advocates for the scientific views. Even if a scientific rationale is not critically engaged by the public, attitudes toward the scientific consensus may then improve.
In contrast, if a given audience has concerns about a scientific issue based on preexisting beliefs, as in the case of science as heretical or limited, then other strategies seem more appropriate. For instance, publics that reject the view that humans evolved from earlier life forms usually do so because they believe it conflicts with deeply held religious beliefs that appear to suggest otherwise. Communication researchers might investigate the extent to which explicitly acknowledging and respecting strongly held religious beliefs affects individuals' openness to entertaining alternative ideas. Our speculation is that dismissing a religious belief as simple superstition, or even implying as much, is likely to increase resistance. Alternatively, research has found that leveraging individuals inside the community who advocate for science and have found ways of reconciling their faith with the scientific consensus is less likely to produce resistance and may, over time, produce more productive dialogue and impact; see Katharine Hayhoe's work on climate change and evangelicals (Cloud 2016). To the extent that scientists are perceived as dismissive of alternative frameworks for interpreting how the world works, research on message sources deemed sympathetic to the world views of specific publics may identify useful strategies for reducing resistance and improving the quality of public dialogue.
In situations where anti-science beliefs are associated with views of science as corrupt, we speculate that messages that explicitly articulate desired outcomes shared by scientists and community members will be more acceptable than those that do not articulate shared values or desired outcomes. Additionally, as with the other factors, science communication campaigns that leverage members of the community as the primary advocates, or that pair community members with scientists working collaboratively, may be more impactful than messages delivered by scientists in isolation.
Though our data suggest that some anti-science beliefs may be driven primarily by a single dominant negative perception of science, others may be driven by multiple negative perceptions. While our results suggest areas where this may be the case, our recommendation is that future scholars using these instruments collect data on specific communities of interest around specific anti-science issues of concern. In such cases, tools such as the NPSS can facilitate evaluation of campaign effectiveness. Our data, drawn from a broad range of people across the U.S., are not likely to accurately estimate how negative perceptions of science operate within a specific anti-science community. By and large, our sample reported low levels of endorsement of all the anti-science target beliefs, with endorsement means below the scale midpoint, except for GMOs, which was only slightly above it. In other words, our sample does not appear to be particularly anti-science overall, and our protocol was not specifically designed to explain anti-science beliefs in a particular community of interest. Research targeting specific subpopulations may show stronger endorsement of specific beliefs, or more variation in those beliefs, and, consequently, different patterns of results; it is unlikely, however, that very many people are anti-science across a broad range of issues. For instance, scientists and policy advocates working to reduce community resistance to fluoride treatment of a water supply may leverage instruments such as the one we have proposed to validate the underlying nature of the community's concerns and inform campaign strategy development.
Beyond linking underlying negative perceptions to specific beliefs, our study suggests that being college educated provides limited protection against anti-science beliefs. This implies that the larger scientific community may need to work more actively to alleviate the deeper concerns about science that may underlie multiple anti-science views, rather than simply stressing the "facts" or "methods" of science. To this extent, our findings parallel research on motivated reasoning and cultural cognition (e.g., Nisbet and Goidel 2007; Kahan et al. 2010; Druckman and Bolsen 2011), which has consistently found that preexisting values and perceptions bias and filter the perception of available factual information. These findings also echo recent research on the growing lack of public trust in science (Achterberg et al. 2017; Cacciatore et al. 2016), which, in our estimation, could function as one such bias: if the public believes that scientists have ulterior motives that compromise the veracity of their claims, other rationales, such as ideological orientation, may be more likely to guide behavior. Finally, though we emphasize the need to look at negative science perceptions in the context of specific anti-science beliefs, it is possible that there are categories of beliefs that largely reflect common patterns of negative perceptions. For instance, some of the beliefs examined in this study represent what has been termed "conspiratorial ideation" (Oliver and Wood 2014; DiGrazia 2017). Others may represent beliefs related to "new age philosophical" orientations, which often involve beliefs in the paranormal or supernatural (Lange et al. 2000). Still others may simply reflect engrained beliefs driven by political or religious affiliation.
Future research should focus on understanding common factors underlying clusters of beliefs that may operate similarly and respond to similar types of communication strategies.

4.2. Limitations

There are several limitations to the current research that need consideration. First, while our mTurk sample demonstrated considerable variability across a range of demographics, it is possible that our results are sensitive to differences between typical mTurk participants and the general population. mTurk participants likely differ in employment background, technological literacy, or other factors that might skew their view of science and the scientific method in ways we have not measured. Second, though we ultimately identified four conceptually meaningful and empirically differentiable latent sub-dimensions of negative perceptions of science, there could well be additional, unmeasured dimensions that also explain endorsement of anti-science beliefs. Future research should actively seek to expand the scope of the NPSS. This project attempts to address deficiencies in current research and make an incremental contribution motivated by the growing dissatisfaction with a fully deficit-oriented model of anti-science attitudes, but we have not presented a comprehensive theoretical model of anti-science attitudes or beliefs. Future research should work to develop stronger theoretical and predictive models that explain a broader range of issues than those reported in these studies. In particular, we hope future theorizing will continue to explore the complicated relationship between deficit-based accounts of anti-science attitudes and the roles of world view discrepancies and trust. Our work furthers this conversation but does not yet integrate these concerns in a full theoretical framework. Third, our work represents views from a population within the United States, and some of the issues explored in our study are distinctly American "problems." For instance, debates about evolution and the age of the Earth are of historical and contemporary significance in the U.S.
and are actively considered in public policy debates around science education. Homeopathy, by contrast, is much more commonly practiced in many European countries, and some participants in our study may not have had a clear view of what "homeopathy" is. Our survey instrument assumed that individuals at least had a notion of what was meant by these issues, but this may not have been universally true in our sample. Other anti-science beliefs explored in this study, such as skepticism about the "moon landing," may also reflect a distinctly U.S. context, even though the belief is not widely held even in the U.S. Future research should explore a broader range of issues in other countries and cultures and address how well these dimensions translate in those contexts. Our study was also underpowered to address possible interaction effects among the NPSS factors; it is possible that these factors operate multiplicatively rather than additively to influence endorsement of anti-science beliefs. Finally, though our work suggests that underlying negative perceptions of science may operate causally to influence specific beliefs, cross-sectional survey research is insufficient for making such claims. Longitudinal and experimental protocols are warranted to better understand the causal dynamics. It is possible that some individuals retrospectively attribute negative perceptions to science and scientists because of apparent disagreements on specific issues; in all likelihood, these general negative perceptions and specific beliefs reinforce each other over time.

4.3. Conclusions

This project represents, to our knowledge, the first attempt at developing an instrument designed to measure the multi-dimensional nature of negative perceptions of science and at providing initial evidence of its measurement reliability and validity. The NPSS shows promise as a resource for gaining a better understanding of the various publics who may hold problematic anti-science beliefs. Our work supports the view that negative perceptions of science are more complex and nuanced than prior work assumes, and that differentiating several dimensions of negative perceptions of science provides a richer view of the influences on anti-science beliefs across a range of topic areas.
In particular, this work has implications for future research and for the design of science communication campaigns. Ultimately, conflict is reduced and dialogue is improved when disputing parties actively attempt to adopt each other's perspective. Communication interventions and information campaigns aligned with an accurate view of the beliefs, values, and interests of a target population are less likely to foster distrust and cynicism on either side of a debate. The view of science as corrupt, which to our thinking represents the most troubling possibility, especially highlights the need for scientists and science communicators to take the lead in reaching out. Instruments such as the one developed in this study provide a way to better understand the underlying negative perceptions of science in a population and thereby increase the likelihood that science communication initiatives succeed.


Acknowledgments
This project was sponsored in part by an ASPIRE Research Enhancement Grant, College of Liberal Arts, Purdue University, to the first author.

Author Contributions

M.M., W.B.C., and G.G.S. conceived of and designed the research protocol. J.R.W. facilitated data collection and coding. M.M. and W.B.C. conducted the data analysis. M.M., W.B.C., and G.G.S. wrote the paper with contributions from J.R.W.

Conflicts of Interest

The authors declare no conflicts of interest.


  1. Achenbach, Joel. 2015. Why do many reasonable people doubt science? National Geographic, February 14.
  2. Achterberg, Peter, Willem de Koster, and Jeroen van der Waal. 2017. A Science Confidence Gap: Education, Trust in Scientific Methods, and Trust in Scientific Institutions in the United States, 2014. Public Understanding of Science 26: 704–20.
  3. Besley, John C., and Matthew Nisbet. 2013. How Scientists View the Public, the Media and the Political Process. Public Understanding of Science 22: 644–59. First published 2011.
  4. Brossard, Dominique, and Matthew C. Nisbet. 2007. Deference to Scientific Authority among a Low Information Public: Understanding U.S. Opinion on Agricultural Biotechnology. International Journal of Public Opinion Research 19: 24–52.
  5. Cacciatore, Michael A., Nick Browning, Dietram A. Scheufele, Dominique Brossard, Michael A. Xenos, and Elizabeth A. Corley. 2016. Opposing Ends of the Spectrum: Exploring Trust in Scientific and Religious Authorities. Public Understanding of Science 21: 11–28.
  6. Cacioppo, John T., and Richard E. Petty. 1982. The Need for Cognition. Journal of Personality and Social Psychology 42: 116–31.
  7. Cacioppo, John T., Richard E. Petty, Jeffrey A. Feinstein, and W. Blair G. Jarvis. 1996. Dispositional Differences in Cognitive Motivation: The Life and Times of Individuals Varying in Need for Cognition. Psychological Bulletin 119: 197–253.
  8. Casler, Krista, Lydia Bickel, and Elizabeth Hackett. 2013. Separate but Equal? A Comparison of Participants and Data Gathered via Amazon’s MTurk, Social Media, and Face-to-Face Behavioral Testing. Computers in Human Behavior 29: 2156–60.
  9. Christenson, Dino P., and David M. Glick. 2012. Crowdsourcing Panel Studies and Real-Time Experiments in MTurk. The Political Methodologist 20: 27–32.
  10. Cloud, Doug. 2016. Communicating Climate Change to Religious and Conservative Audiences. Reflections 16: 57–73.
  11. Connor, Melanie, and Michael Siegrist. 2010. Factors Influencing People’s Acceptance of Gene Technology: The Role of Knowledge, Health Expectations, Naturalness, and Social Trust. Science Communication 32: 514–38.
  12. DiGrazia, Joseph. 2017. The social determinants of conspiratorial ideation. Socius, 3.
  13. Druckman, James N., and Toby Bolsen. 2011. Framing, motivated reasoning, and opinions about emergent technologies. Journal of Communication 61: 659–88.
  14. Funk, Cary, and Sara Kehaulani Goo. 2015. A Look at What the Public Knows and Does Not Know about Science. Available online: (accessed on 7 February 2018).
  15. Funk, Cary, Lee Rainie, and Aaron Smith. 2015. Public and Scientists’ Views on Science and Society. Available online: (accessed on 13 January 2018).
  16. Gauchat, Gordon William. 2008. A Test of Three Theories of Anti-Science Attitudes. Sociological Focus 41: 337–57.
  17. Gauchat, Gordon. 2012. Politicization of Science in the Public Sphere. American Sociological Review 77: 167–87.
  18. Goodman, Joseph K., Cynthia E. Cryder, and Amar Cheema. 2013. Data Collection in a Flat World: The Strengths and Weaknesses of Mechanical Turk Samples. Journal of Behavioral Decision Making 26: 213–24.
  19. Jelen, Ted G., and Linda A. Lockett. 2014. Religion, Partisanship, and Attitudes toward Science Policy. SAGE Open 4.
  20. Kahan, Dan M. 2015. Climate-Science Communication and the Measurement Problem. Political Psychology 36: 1–43.
  21. Kahan, Dan M., and Donald Braman. 2006. Cultural Cognition and Public Policy. Yale Law & Policy Review 24: 149–72.
  22. Kahan, Dan M., Donald Braman, Geoffrey L. Cohen, John Gastil, and Paul Slovic. 2010. Who fears the HPV vaccine, who doesn’t, and why? An experimental study of the mechanisms of cultural cognition. Law and Human Behavior 34: 501–16.
  23. Kahan, Dan M., Hank Jenkins-Smith, and Donald Braman. 2011. Cultural Cognition of Scientific Consensus. Journal of Risk Research 14: 147–74.
  24. Lange, Rense, Harvey J. Irwin, and James Houran. 2000. Top-down purification of Tobacyk’s revised paranormal belief scale. Personality and Individual Differences 29: 131–56.
  25. Millstone, Erik, and Patrick van Zwanenberg. 2000. A Crisis of Trust: For Science, Scientists or for Institutions? Nature Medicine 6: 1307–8.
  26. Nisbet, Matthew C., and Robert K. Goidel. 2007. Understanding Citizen Perceptions of Science Controversy: Bridging the Ethnographic—Survey Research Divide. Public Understanding of Science 16: 421–40.
  27. Oliver, J. Eric, and Thomas J. Wood. 2014. Conspiracy theories and the paranoid style(s) of mass opinion. American Journal of Political Science 58: 952–66.
  28. Roos, J. Micah. 2014. Measuring Science or Religion? A Measurement Analysis of the National Science Foundation Sponsored Science Literacy Scale 2006–2010. Public Understanding of Science 23: 797–813.
Table 1. Anti-science themes and exemplars.
Theme | Exemplar Statement
Science is in opposition to religion | Religion doesn't leave much room for science
Science has fundamental limitations | Scientists are never 100% certain of their findings
Science is not comprehensible | Science is hard for some people to understand
Scientific motives are suspect | Some science is based on economic benefits
Scientific discoveries bring about harm | Some scientific conclusions are harmful to society
Primitive rejection of science | It produces useless research that is not useful to humanity
Scientists are unlikeable people | Scientists are often too arrogant to see the flaws in their own work
Science opposes alternative ways of knowing (besides religious) | There are alternative ways to looking at life other than through a scientific perspective
Table 2. EFA (Sample 1) and CFA (Sample 2) results, with reliability and descriptive statistics.
Item | EFA Loading (CFA Loading)
Corrupt: Cronbach's α = 0.86, M (SD) = 2.51 (0.76)
Scientists are often dishonest about their research findings. | 0.82 (0.84)
Scientists are influenced by pressure from the government. | 0.78
Scientists often falsify data to manipulate results or findings. | 0.74 (0.73)
It is difficult to trust the scientific community. | 0.72
Science has been co-opted by corporate interests. | 0.71 (0.64)
Scientists are influenced by big business. | 0.66
Most scientists are politically biased. | 0.62 (0.62)
Scientists are typically arrogant people. | 0.58 (0.65)
Scientific results always end up supporting the scientist's political agenda. | 0.54
Scientists are motivated to make lots of money. | 0.54 (0.64)
Onerous: Cronbach's α = 0.85, M (SD) = 2.47 (0.90)
Science is too complicated to understand. | −0.90 (0.80)
I don't care to know the answers to scientific questions. | −0.75
I don't like or appreciate science. | −0.70
Scientific results are presented in ways that are too complex to understand. | −0.69 (0.81)
Scientific jargon is too complex to understand. | −0.69 (0.83)
Science is uninteresting. | −0.65
Scientists make things more complicated than necessary. | −0.55
Scientific discovery makes me nervous. | −0.55
Heretical: Cronbach's α = 0.91, M (SD) = 2.11 (1.27)
The Bible is the ultimate explanation for how the world works. | 0.90 (0.93)
God is the ultimate way of knowing, not science. | 0.82 (0.91)
Limited: Cronbach's α = 0.71, M (SD) = 3.31 (0.84)
Science has limitations. | 0.83 (0.75)
Science cannot explain everything. | 0.58 (0.62)
The scientific method is limited. | 0.52 (0.68)
Note: Items followed by a parenthetical loading were retained in the CFA from Sample 2, with the CFA loading shown in parentheses. Alphas, means, and SDs are from retained Sample 2 items.
Table 3. Correlations among study variables.
 | Corrupt | Onerous | Heretical | Limited | Conservative | Religiosity | Global Warm | Young Earth | Vaccination | GMOs
Corrupt | - | 0.433 *** | 0.524 *** | 0.404 *** | 0.347 *** | 0.251 *** | 0.414 *** | 0.381 *** | 0.498 *** | 0.306 ***
Onerous | - | 0.323 *** | 0.383 *** | 0.289 *** | 0.16 * | 0.148 * | 0.25 *** | 0.328 *** | 0.339 ***
Heretical | - | 0.375 *** | 0.398 *** | 0.713 *** | 0.389 *** | 0.629 *** | 0.343 *** | 0.349 ***
Limited | - | 0.22 *** | 0.282 *** | 0.202 ** | 0.26 *** | 0.297 *** | 0.288 ***
Conservative | - | 0.339 *** | 0.452 *** | 0.324 *** | 0.316 *** | 0.143 *
Religiosity | - | 0.281 *** | 0.442 *** | 0.245 *** | 0.213 ***
Global Warm | - | 0.394 *** | 0.35 *** | 0.075
Young Earth | - | 0.346 *** | 0.228 ***
Vaccination | - | 0.422 ***
GMOs | -
Note: * p < 0.05, ** p < 0.01, *** p < 0.001.
Table 4. Final NPSS Scales and Descriptive Statistics.
Scale / Item | Parameter Estimate | M (SD) | Cronbach's Alpha
Science as Corrupt | | 2.45 (0.81) | 0.84
Scientists are often dishonest about their research findings. | 0.74 | 2.28 (0.98)
Scientists often falsify data to manipulate results or findings. | 0.82 | 2.38 (1.04)
Science has been co-opted by corporate interests. | 0.64 | 2.96 (1.08)
Most scientists are politically biased. | 0.74 | 2.42 (1.05)
Scientists are typically arrogant people. | 0.64 | 2.22 (1.00)
Science as Onerous | | 2.19 (0.79) | 0.85
Science is too complicated to understand. | 0.86 | 2.2 (1.06)
Scientific results are presented in ways that are too complex to understand. | 0.77 | 2.59 (1.04)
Scientific jargon is too complex to understand. | 0.8 | 2.59 (1.10)
I don't care to know the answers to scientific questions. | 0.61 | 1.76 (0.89)
Scientific discovery makes me nervous. | 0.59 | 1.84 (0.92)
Science as Heretical | | 2.06 (1.22) | 0.96
Religious scriptures (e.g., The Bible) are the ultimate explanation for how the world works. | 0.93 | 2.03 (1.35)
God is the ultimate way of knowing, not science. | 0.82 | 2.32 (1.38)
If people trusted the Scriptures, they would know all they need to know. | 0.93 | 1.94 (1.26)
Religious doctrine tells us all we need to know about the world. | 0.92 | 2.00 (1.3)
The Scriptures are sufficient to explain everything that is important. | 0.91 | 2.02 (1.31)
Science as Limited | | 3.11 (0.82) | 0.79
Science cannot explain everything. | 0.64 | 3.59 (1.17)
The scientific method is limited. | 0.63 | 2.8 (1.11)
Scientific principles do not always make sense. | 0.7 | 2.93 (1.12)
Science produces many contradictory findings. | 0.6 | 3.26 (1.02)
Science has significant limitations. | 0.74 | 2.95 (1.11)
Note: Parameter estimates from final CFA.
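The scale reliabilities reported above (Cronbach's alpha) can be computed directly from a respondents-by-items matrix of Likert responses. A minimal sketch in Python, using illustrative toy data rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for a respondents x items matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy example: two perfectly consistent items yield alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

Values in the mid-0.80s and above, like those for the Corrupt, Onerous, and Heretical subscales, indicate high internal consistency among the retained items.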
Table 5. Correlations among All Study Variables.
2 | Age | - | − | − | −0.05 | 0.10 | 0.12 | 0.05 | 0.00 | 0.01 | 0.05 | 0.05 | 0.03 | 0.01 | 0.11 | −0.13 | −0.07 | 0.16 | −0.02 | −0.04 | −0.05
3 | Income | - | 0.29 | 0.06 | −0.02 | −0.12 | −0.15 | −0.03 | −0.07 | −0.05 | −0.06 | −0.11 | −0.03 | −0.04 | −0.15 | −0.10 | −0.06 | −0.13 | −0.13 | −0.10
4 | College Deg. | - | −0.04 | 0.04 | −0.13 | −0.07 | −0.03 | −0.01 | −0.09 | −0.03 | −0.15 | −0.06 | −0.04 | −0.07 | −0.08 | −0.09 | −0.15 | −0.21 | −0.07
5 | Conservatism | - | 0.35 | 0.32 | 0.14 | 0.40 | 0.31 | 0.45 | 0.40 | 0.23 | 0.45 | 0.08 | 0.06 | − | 0.04 | 0.05 | 0.11 | 0.02
6 | Religiosity | - | 0.39 | 0.25 | 0.19 | 0.32 | 0.52 | 0.19 | 0.32 | 0.23 | 0.19 | 0.13 | 0.18 | 0.16 | 0.14 | 0.13 | 0.09
7 | Corrupt | - | 0.55 | 0.44 | 0.62 | 0.36 | 0.31 | 0.41 | 0.40 | 0.28 | 0.41 | 0.23 | 0.25 | 0.44 | 0.34 | 0.25
8 | Onerous | - | 0.46 | 0.54 | 0.16 | 0.36 | 0.42 | 0.34 | 0.30 | 0.42 | 0.35 | 0.36 | 0.35 | 0.30 | 0.28
9 | Heretical | - | 0.48 | 0.30 | 0.51 | 0.44 | 0.68 | 0.34 | 0.35 | 0.19 | 0.34 | 0.26 | 0.22 | 0.23
10 | Limited | - | 0.22 | 0.21 | 0.24 | 0.25 | 0.31 | 0.22 | 0.26 | 0.33 | 0.31 | 0.23 | 0.33
11 | Global Warming | - | 0.21 | 0.38 | 0.38 | 0.15 | 0.25 | 0.08 | 0.12 | 0.26 | 0.29 | 0.09
12 | Young Earth | - | 0.35 | 0.54 | 0.24 | 0.36 | 0.19 | 0.33 | 0.21 | 0.17 | 0.23
13 | Vaccines | - | 0.40 | 0.47 | 0.42 | 0.36 | 0.46 | 0.47 | 0.51 | 0.31
14 | Evolution | - | 0.31 | 0.32 | 0.19 | 0.32 | 0.34 | 0.28 | 0.22
15 | GMOs | - | 0.31 | 0.28 | 0.46 | 0.43 | 0.47 | 0.26
16 | Moon Landing | - | 0.40 | 0.31 | 0.36 | 0.40 | 0.25
17 | Astrology | - | 0.40 | 0.33 | 0.31 | 0.46
18 | Homeopathy | - | 0.43 | 0.37 | 0.43
19 | Cancer Cure | - | 0.42 | 0.40
20 | Fluoride | - | 0.28
21 | ESP | -
Note: Bolded items are statistically significant. Absolute values 0.09 or above are significant at the 0.05 level; values 0.12 or above are significant at 0.01 level; values 0.14 or above are significant at 0.001 level.
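The significance cutoffs in the note above (|r| ≥ 0.09 at p < 0.05, and so on) follow from the sample size via the t test for a correlation coefficient. As a rough check, a sketch using the large-sample normal approximation; the exact N is not stated in this excerpt, but an N of roughly 470 (an assumption for illustration) reproduces the 0.09 cutoff:

```python
import math
from statistics import NormalDist

def critical_r(n: int, alpha: float = 0.05) -> float:
    """Approximate two-tailed critical |r| for a Pearson correlation,
    using the normal approximation to the t distribution (good for large n)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value (1.96 at alpha = 0.05)
    df = n - 2
    return z / math.sqrt(z * z + df)

# With n around 470, the critical |r| at alpha = 0.05 comes out near 0.09.
print(round(critical_r(470), 3))
```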
Table 6. Multiple regression models predicting anti-science beliefs from demographics, political and religious orientation and NPSS (standardized betas from full model shown below).
Predictor | Glob. Warm | Young Earth | Vaccines | Evolution | GMOs | Moon Land | Astrology | Homeopathy | Cancer Cure | Fluoride | ESP
Sex (F = 1) | −0.135 ** | 0.088 * | 0.036 | 0.053 * | 0.208 ** | 0.018 | 0.060 | 0.104 ** | 0.109 * | 0.125 ** | 0.153 *
Age | 0.038 | −0.019 | 0.053 | 0.021 | 0.025 | −0.101 ** | −0.059 | 0.136 ** | −0.043 | −0.038 | −0.044
College Degree | −0.037 | −0.041 | −0.095 * | −0.028 | −0.046 | −0.001 | −0.064 | −0.097 * | −0.125 ** | −0.182 ** | −0.046
Pol. Orientation | 0.330 ** | 0.087 * | 0.058 | 0.201 ** | −0.063 | −0.083 | −0.156 * | −0.054 | −0.104 | −0.019 | −0.063
Corrupt | 0.222 ** | 0.051 | 0.133 * | 0.068 | 0.123 * | 0.263 ** | 0.072 | −0.076 | 0.315 ** | 0.243 * | 0.078
Onerous | −0.048 | 0.151 ** | 0.169 ** | −0.031 | 0.056 | 0.182 ** | 0.217 ** | 0.199 ** | 0.062 | 0.094 | 0.067
Heretical | 0.060 | 0.398 ** | 0.301 ** | 0.518 ** | 0.194 ** | 0.256 ** | 0.033 | 0.133 * | 0.047 | 0.073 | 0.051
Limited | 0.068 | −0.075 | 0.043 | 0.051 | 0.067 | −0.051 | 0.074 | 0.224 ** | 0.119 * | −0.000 | 0.194 *
Adj R2 | 0.276 ** | 0.278 ** | 0.284 ** | 0.506 ** | 0.195 ** | 0.257 ** | 0.140 ** | 0.226 ** | 0.237 ** | 0.162 ** | 0.146 **
* p < 0.05, ** p < 0.01.
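The standardized betas in Table 6 are ordinary least squares coefficients obtained after z-scoring both the predictors and the outcome. A minimal sketch of how such coefficients can be computed; the toy data below are illustrative only and are not the study's analysis pipeline:

```python
import numpy as np

def standardized_betas(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """OLS coefficients for z-scored predictors and outcome (standardized betas)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # z-score each predictor column
    yz = (y - y.mean()) / y.std(ddof=1)                # z-score the outcome
    design = np.column_stack([np.ones(len(yz)), Xz])   # prepend an intercept column
    coefs, *_ = np.linalg.lstsq(design, yz, rcond=None)
    return coefs[1:]                                   # drop the intercept

# Toy example: the outcome equals the first predictor exactly and is
# unrelated to the second, so the betas are 1 and 0 respectively.
X = np.array([[1., 0.], [2., 1.], [3., 1.], [4., 0.]])
y = np.array([1., 2., 3., 4.])
print(standardized_betas(X, y))
```

Because all variables are on the same standardized scale, the betas within a column of Table 6 can be compared directly, which is what grounds the "dominant predictor" comparisons in the discussion.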

Share and Cite

MDPI and ACS Style

Morgan, M.; Collins, W.B.; Sparks, G.G.; Welch, J.R. Identifying Relevant Anti-Science Perceptions to Improve Science-Based Communication: The Negative Perceptions of Science Scale. Soc. Sci. 2018, 7, 64.


