Communication
Peer-Review Record

Confidence in Local, National, and International Scientists on Climate Change

Sustainability 2021, 13(1), 272; https://doi.org/10.3390/su13010272
by Aaron C. Sparks 1,*, Heather Hodges 2, Sarah Oliver 3 and Eric R. A. N. Smith 4
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 26 October 2020 / Revised: 17 December 2020 / Accepted: 22 December 2020 / Published: 30 December 2020
(This article belongs to the Special Issue The Cognitive Psychology of Environmental Sustainability)

Round 1

Reviewer 1 Report

In general, I felt the topic is very interesting and found it very timely. I believe the author(s) have done so much work for this manuscript, but there are some concerns about the quality of the work. I am trying to provide my comments and observations and hope the author(s) find them helpful.

In the introduction section, the author(s) argued that, “one source characteristic that has not been investigated, so far as we are aware, is the community affiliation or proximity of a source to the recipient of the message.” However, the way that the author(s) have written the arguments is a little bit simplistic and vague. From a theoretical perspective, for example, the author(s) did not provide limitations of previous studies’ approaches or frameworks and the rationale for the application of the new perspective in the context of climate change. I would suggest the author(s) add a paragraph regarding why the current study’s approach is more necessary in this domain than in other contexts. The lack of a robust analysis of the previous research in this field was a really critical omission.

Also, I am not sure why the author(s) mentioned the empirical findings of their study in the introduction section. Furthermore, the author(s) mentioned the elaboration likelihood model and the heuristic-systematic model but did not demonstrate their fundamental notions in the introduction section or the literature review section (actually, there is no literature review section in this paper).

The author(s) should include a literature review section to demonstrate a theory, variables, and hypotheses development (based on psychological frameworks) to point out the theoretical contributions of this study to the existing literature focusing on other theories to explain the role of affiliation or proximity of a source to the recipient of the message. Without the literature review section, potential readers will not be convinced by the author(s)’ arguments.

I felt that the data analysis part was very weak. How did the author(s) perform the manipulation check? Was there any pilot test when developing and revising the stimuli? I could not see the figures and any statistical results in the manuscript except for the significance level.

There was a missed opportunity to focus the discussion on what is new from this paper compared to the existing literature focusing on other theories and other aspects of source characteristics. The author(s) could present limitations of the existing literature and then show how this study develops the literature by comparing those limitations with the findings of this study.

Practical implications were vague and did not provide any specific implications at all. To me, the author(s)’s implications sounded too general. I was concerned about a lack of practical implication based on the interrelationship between variables in the context of this study.

Author Response

We would like to thank the reviewers for their careful reading and thoughtful comments on our manuscript. We want to keep this manuscript as a short “communication” piece, as Sustainability calls it. It is for this reason that we focus narrowly in the introduction on message source as a cause of persuasion and in the analysis on the main effects (the source of the scientific report) rather than including more comprehensive moderation models in the text. We still want to make sure we address the reviewers’ concerns on points relating to this, so we have made an effort to explain our thinking on this matter more fully in the text – for specifics, please see our responses to each reviewer comment.

 

In line with the journal’s requirements, all changes to the manuscript are highlighted by Word’s track changes function. We made edits for clarity and grammar throughout. We apologize for any areas where the track changes function makes the text difficult to read.

 

The two main themes that were raised across the reviews were to expand the literature portion of the introduction and to clarify and bolster the analysis. We believe we have addressed all the comments from the reviewers and are happy to revise again if necessary.

R1

  • In general, I felt the topic is very interesting and found it very timely. I believe the author(s) have done so much work for this manuscript, but there are some concerns about the quality of the work. I am trying to provide my comments and observations and hope the author(s) find them helpful.

Thank you for closely reading our manuscript and providing substantive comments to help make it better!

  • In the introduction section, the author(s) argued that, “one source characteristic that has not been investigated, so far as we are aware, is the community affiliation or proximity of a source to the recipient of the message.” However, the way that the author(s) have written the arguments is a little bit simplistic and vague. From a theoretical perspective, for example, the author(s) did not provide limitations of previous studies’ approaches or frameworks and the rationale for the application of the new perspective in the context of climate change. I would suggest the author(s) add a paragraph regarding why the current study’s approach is more necessary in this domain than in other contexts. The lack of a robust analysis of the previous research in this field was a really critical omission.

Thank you for this important suggestion! We added a paragraph of related research that looks at local weather forecasters’ impact on climate change public opinion on lines 56-63, “Relatedly, and perhaps the closest analog in the literature to the current study, is the examination of how local weather forecasters impact climate change beliefs. Local weather forecasters are typically familiar and trusted sources of information pertaining to weather and climate [17]. In an observational study, researchers found that exposure to local weather news increased participants’ awareness of the impacts of climate change, particularly among political conservatives [17]. Further, more deliberate reporting on climate change leads to viewers learning about climate change science [18]. Research also shows that scientists’ credibility is not impacted when they engage in climate activism [19].”

We also briefly summarize the literature to describe our theory on lines 73-80, “In short, our theory builds on the elaboration-likelihood model and the heuristic-systematic model, which emphasize the credibility of the communicator in leading to persuasion. Because of overlapping factors of shared identity (e.g., Iowans) and attachment to place, along with decreased physical distance between the source and the respondent, we expect scientific reports from the state university in the respondent’s state to be more credible and lead to greater concern about climate impacts. Understanding how persuasion functions on the issue of climate change is especially important because reducing climate skepticism among conservative Republicans may be a key step toward climate policy solutions.”

 

  • Also, I am not sure why the author(s) mentioned the empirical findings of their study in the introduction section. Furthermore, the author(s) mentioned the elaboration likelihood model and the heuristic-systematic model but did not demonstrate their fundamental notions in the introduction section or the literature review section (actually, there is no literature review section in this paper).

Thank you for this comment, we agree and have moved the paragraph discussing results and implications to the conclusion.

The journal format does not include a “literature review section,” so we conduct the literature review in the introduction section. We believe we have addressed this with our response to the above comment.

  • The author(s) should include a literature review section to demonstrate a theory, variables, and hypotheses development (based on psychological frameworks) to point out the theoretical contributions of this study to the existing literature focusing on other theories to explain the role of affiliation or proximity of a source to the recipient of the message. Without the literature review section, potential readers will not be convinced by the author(s)’ arguments.

Thank you, our response to comment #2 includes some additional literature review and a clearer summation of the theory being tested.

  • I felt that the data analysis part was very weak. How did the author(s) perform the manipulation check? Was there any pilot test when developing and revising the stimuli? I could not see the figures and any statistical results in the manuscript except for the significance level.

 

We appreciate this comment. We are unsure what is meant by “manipulation check.” We measured confidence in the report as well as three other beliefs about climate change impacts after each article. While we did not test how closely the respondents read each report, the slight variation from one article to the next suggests respondents were not simply choosing the same response for each item.

We did not pilot test the treatments because these were excerpts from real journalistic accounts of climate research to improve external validity. Further, we believe the repeated trials lend greater strength to our inference – the university source of the climate research is not an important factor people use to inform their climate change opinions.

We apologize for omitting the figures from the main text; the manuscript has been updated to include the figures in the results section. We also would like to provide some additional insight into our analysis:

We tested for heterogeneous treatment effects between Republicans and Democrats. While Republicans were more skeptical of the report, there was no moderation. We tested this by including an interaction term in a regression multiplying party affiliation and a dummy variable for the local university condition. Because this may be of some interest, though outside the main finding of the study, we now include this sentence on lines 125-128, “One may think that this null finding can be explained by Democrats and Republicans responding differently to the treatment; however, within a regression framework, confidence in the report is not moderated by party affiliation although Republicans were more skeptical of the report (results not shown).” And this sentence in the following paragraph describing the effect on belief that climate change will affect the respondent’s family on lines 138-140, “Using multiple regression, party affiliation did not moderate the effect of the source of the report on belief that climate change will impact the respondent’s family (results not shown).” This supports the statements in our discussion about climate beliefs being difficult to change because they have become closely aligned with partisan and ideological associations (lines 147-153).

To get these results, we estimated a multiple regression model for each article. Reporting all results would mean 24 separate models (8 articles, and four dependent variables with each). Because this is a short communication, we do not believe it would be worthwhile to expand our analysis to fully include all these analyses, and it may look like an attempt at p-hacking to run so many different hypothesis tests (even if they all turn up null). We believe the main contribution of the short article is the (lack of) main effect and do not want to lose focus on that by reporting out secondary analyses in this format. Thus, instead of presenting full regression results for 24 models, we opted to present the simple comparison of means between the three experimental groups for each article because it tells the same story more succinctly. We defer to the reviewers and editors on this matter.
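For readers unfamiliar with the moderation test described above, it can be sketched as follows. This is a minimal illustration with simulated data; the variable names and the statsmodels workflow are our assumptions for illustration, not the authors' actual code or data.

```python
# Sketch of a moderation test: does the effect of the "local university"
# treatment on confidence in the report differ by party affiliation?
# Simulated data for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    # Hypothetical 1-7 party scale (strong Republican ... strong Democrat)
    "party": rng.integers(1, 8, n),
    # 1 = report attributed to the respondent's in-state university
    "local": rng.integers(0, 2, n),
})
# Simulated outcome: party matters, treatment does not (a null effect)
df["confidence"] = 2 + 0.3 * df["party"] + rng.normal(0, 1, n)

# "local * party" expands to local + party + local:party;
# the local:party coefficient is the moderation test
model = smf.ols("confidence ~ local * party", data=df).fit()
print(model.params)
```

A non-significant `local:party` coefficient is what the response describes: party predicts skepticism on its own, but does not change how respondents react to the treatment.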

 

  • There was a missed opportunity to focus the discussion on what is new from this paper compared to the existing literature focusing on other theories and other aspects of source characteristics. The author(s) could present limitations of the existing literature and then show how this study develops the literature by comparing those limitations with the findings of this study.

Thank you for this suggestion, we agree. The discussion section now includes the following paragraph comparing our results to the studies of local weathercasters, lines 170-178: “The null results in the present study also differ from findings showing that local weathercasters are seen as credible and can influence public opinion on climate change [17,18]. The current study improved on those studies by using an experimental design with repeated trials, lending better evidence of any causal association between the university source of the report and how participants rated the reports’ credibility. While the reports used in this study are actual examples of reporting on scientific reports, the current study does not have the same real-world application as what likely occurs when people faithfully tune in for the local weather report on a near daily basis. People watch because they are interested and may be more likely to engage higher levels of information processing when they also receive non-partisan messages relating local weather conditions to climate change.”

 

  • Practical implications were vague and did not provide any specific implications at all. To me, the author(s)’s implications sounded too general. I was concerned about a lack of practical implication based on the interrelationship between variables in the context of this study.

Thank you for this suggestion, we have re-written that paragraph in the conclusion, lines 192-204, “Findings from the current study suggest that the university source of climate science is not important for people deciding how credible it is. Practically, this means that climate advocacy groups should focus their persuasion and mobilization strategies on other areas more likely to influence public opinion and behavior. To convince climate skeptics, messages from trusted TV weathercasters [20-22] or from opinion leaders within a strong and shared identity, such as a well-known pastor or Republican politician [40], are probably more effective than the location of the university that produces the science. In line with Ross et al., additional research on what is and is not effective climate change communication is important to achieving climate change policy [40].”

Reviewer 2 Report

I can't see the figures described in the results section. It would have been better to see the details.

Author Response

We thank the reviewer for reading our manuscript and providing positive ratings. We now include the figures in the main text of the article.

Reviewer 3 Report

This study sets out to examine whether climate communication by local vs. distal climate scientists engenders greater trust among the general public. This is an important question with great consequence for communication in the climate space. However, the paper needs to address several key methodological issues, as well as provide all materials (figures) prior to consideration for publication.

This paper is well-written and clear.

It would be helpful to review extant research on the impact of using local communicators (e.g. local weathercasters) and communicating on local climate issues (weather and extreme weather – attribution), which bears directly on the questions explored by the authors.

How was the location of the respondents inferred? Was it from the location they reported when signing up to be part of the research panel, or was it from their IP address or another way of inferring their current location?

It appears that the authors measured a few demographic variables. However, why were variables more directly related to the questions of interest not included? These could be used to (1) make sure that the sample is diverse enough on these measures to test the questions of interest, and (2) test for moderation, and examine whether trust in local vs distal communications may differ depending on people’s prior identities and beliefs. A direct measure of climate attitudes would be best. Political ideology is highly related to climate acceptance and is often used as a proxy when more direct measures of climate change acceptance are not available. Perhaps these measures are available in the panel dataset used and could be added to the analysis? Without this the study does not have enough information about why the manipulation didn't work, and therefore doesn't really inform our understanding of the dynamics underlying these findings nor offer insight into how best to communicate to audiences. In practice, most communicators target their messages to particular audiences, and it is that nuance which the analyses need to address to be of value. 

Please list all measures administered to the participants.

Figures 1 and 2 are not included in the manuscript. Given that the greatest amount of information about the findings seems to be graphically represented in those figures, it is difficult to evaluate the findings.

The discussion suggests, based on prior research, that the impact of communication may vary based on people’s pre-existing beliefs. That is precisely the point above re: measuring those beliefs and conducting a more sophisticated set of analyses that can test whether the bias the authors are referring to is in fact is happening. That kind of approach would provide a more valuable set of findings by clarifying why there was no impact of the local vs. distal manipulation, or whether it was only present for people with certain kinds of beliefs or ideologies.

The last paragraph of the discussion suggests that the authors did in fact measure political affiliation. If this is the case, then I would strongly suggest running moderation analyses by looking at the interaction between the source of communication and political affiliation (or any other climate relevant identities or attitudes). Also all measures need to be listed in the methods section.

Author Response

We would like to thank the reviewers for their careful reading and thoughtful comments on our manuscript. We want to keep this manuscript as a short “communication” piece, as Sustainability calls it. It is for this reason that we focus narrowly in the introduction on message source as a cause of persuasion and in the analysis on the main effects (the source of the scientific report) rather than including more comprehensive moderation models in the text. We still want to make sure we address the reviewers’ concerns on points relating to this, so we have made an effort to explain our thinking on this matter more fully in the text – for specifics, please see our responses to each reviewer comment.

In line with the journal’s requirements, all changes to the manuscript are highlighted by Word’s track changes function. We made edits for clarity and grammar throughout. We apologize for any areas where the track changes function makes the text difficult to read.

The two main themes that were raised across the reviews were to expand the literature portion of the introduction and to clarify and bolster the analysis. We believe we have addressed all the comments from the reviewers and are happy to revise again if necessary.

 

  • This study sets out to examine whether climate communication by local vs. distal climate scientists engenders greater trust among the general public. This is an important question with great consequence for communication in the climate space. However, the paper needs to address several key methodological issues, as well as provide all materials (figures) prior to consideration for publication.

Thank you for the close reading of our manuscript and for providing very helpful suggestions to improve it.

  • This paper is well-written and clear.

Thank you!

  • It would be helpful to review extant research on the impact of using local communicators (e.g. local weathercasters) and communicating on local climate issues (weather and extreme weather – attribution), which bears directly on the questions explored by the authors.

Thank you, this is a great suggestion. We have added a new paragraph highlighting this research on lines 56-63, “Relatedly, and perhaps the closest analog in the literature to the current study, is the examination of how local weather forecasters impact climate change beliefs. Local weather forecasters are typically familiar and trusted sources of information pertaining to weather and climate [17]. In an observational study, researchers found that exposure to local weather news increased participants’ awareness of the impacts of climate change, particularly among political conservatives [17]. Further, more deliberate reporting on climate change leads to viewers learning about climate change science [18]. Research also shows that scientists’ credibility is not impacted when they engage in climate activism [19].”

We also connect our results back to this research in the discussion section.

  • How was the location of the respondents inferred? Was it from the location they reported when signing up to be part of the research panel, or was it from their IP address or another way of inferring their current location?

The respondent’s location was determined before they participated in the survey. SSI collects that information when they register and receives periodic updates from their panelists. We updated the text with this clarification at lines 97-98, “determined by SSI when the respondent signed up to be part of the panel and is continuously updated.”

 

While verifying this with the IP address may have been a benefit, it would not be 100% accurate either as panelists may take part in the survey while away from home.

  • It appears that the authors measured a few demographic variables. However, why were variables more directly related to the questions of interest not included? These could be used to (1) make sure that the sample is diverse enough on these measures to test the questions of interest, and (2) test for moderation, and examine whether trust in local vs distal communications may differ depending on people’s prior identities and beliefs. A direct measure of climate attitudes would be best. Political ideology is highly related to climate acceptance and is often used as a proxy when more direct measures of climate change acceptance are not available. Perhaps these measures are available in the panel dataset used and could be added to the analysis? Without this the study does not have enough information about why the manipulation didn't work, and therefore doesn't really inform our understanding of the dynamics underlying these findings nor offer insight into how best to communicate to audiences. In practice, most communicators target their messages to particular audiences, and it is that nuance which the analyses need to address to be of value. 

Thank you for this comment. We agree, and while we did include a short section on the demographic make-up of the sample, we now also include information about the partisan lean of the sample: “Our sample is balanced between Republicans and Democrats; the party affiliation variable is a 1-7 point scale from strong Republican to strong Democrat and had a mean of 4.34 (between independent lean Democrat and weak Democrat)” (lines 88-91).

We had the same thought and tested for heterogeneous treatment effects between Republicans and Democrats. While Republicans were more skeptical of the report, there was no moderation. We tested this by including an interaction term in a regression multiplying party affiliation and a dummy variable for the local university condition. Because this may be of some interest, though outside the main finding of the study, we now include this sentence on lines 125-128, “One may think that this null finding can be explained by Democrats and Republicans responding differently to the treatment; however, within a regression framework, confidence in the report is not moderated by party affiliation although Republicans were more skeptical of the report (results not shown).” And this sentence in the following paragraph describing the effect on belief that climate change will affect the respondent’s family on lines 138-140, “Using multiple regression, party affiliation did not moderate the effect of the source of the report on belief that climate change will impact the respondent’s family (results not shown).” This supports the statements in our discussion about climate beliefs being difficult to change because they have become closely aligned with partisan and ideological associations (lines 147-153).

To get these results, we estimated a multiple regression model for each article. Reporting all results would mean 24 separate models (8 articles, and four dependent variables with each). Because this is a short communication, we do not believe it would be worthwhile to expand our analysis to fully include all these analyses, and because we did not pre-register our hypotheses, it may look like p-hacking to run so many different hypothesis tests (even if they all turn up null). We believe the main contribution of the short article is the (lack of) main effect and do not want to lose focus on that by reporting out secondary analyses in this format. Thus, instead of presenting full regression results for 24 models, we opted to present the simple comparison of means between the three experimental groups for each article because it tells the same story more succinctly. We defer to the reviewers and editors on this matter.

  • Please list all measures administered to the participants.

We have included all additional questions in the appendix, lines 292-423.

 

  • Figures 1 and 2 are not included in the manuscript. Given that the greatest amount of information about the findings seems to be graphically represented in those figures, it is difficult to evaluate the findings.

We regret this accidental omission and have corrected the manuscript to include the two figures in the results section.

  • The discussion suggests, based on prior research, that the impact of communication may vary based on people’s pre-existing beliefs. That is precisely the point above re: measuring those beliefs and conducting a more sophisticated set of analyses that can test whether the bias the authors are referring to is in fact is happening. That kind of approach would provide a more valuable set of findings by clarifying why there was no impact of the local vs. distal manipulation, or whether it was only present for people with certain kinds of beliefs or ideologies.

We agree, please see our response to the comment above.

  • The last paragraph of the discussion suggests that the authors did in fact measure political affiliation. If this is the case, then I would strongly suggest running moderation analyses by looking at the interaction between the source of communication and political affiliation (or any other climate relevant identities or attitudes). Also all measures need to be listed in the methods section.

We agree with the idea, please see our response to the comment above.

We decided to include the full questionnaire in the appendix instead of the methods section because this is a short communication and all the item wording would take up a large proportion of the entire article. We do appreciate the interest in transparency and hope the appendix is satisfactory for this reason.

Round 2

Reviewer 1 Report

First of all, I would like to appreciate the amount of work that has gone into this revision. Even after carefully taking a look at the revision, I am still concerned about the experiment’s rigor (e.g., no manipulation check, no control variables, etc.). As an experimental design needs rigorous processes and techniques, the method and data analysis procedures are very important. But this paper does not provide these processes and techniques in a detailed manner. Without them, we may misinterpret the empirical findings. Although this paper is a short communication piece as Sustainability calls it, I am still not convinced that the paper meets the standards of the experimental design approach.

Author Response

We thank the reviewers and editors for once again closely reading our manuscript and providing comments that have improved it. We accepted all our edits in track changes from the last round of reviews, and all changes in this version are now tracked. We added the clarifications the reviewers requested and edited for grammar and readability throughout as the third reviewer noted. We believe we have addressed these concerns as well as we can, and we are ready to revise again if necessary.

First of all, I would like to appreciate the amount of work that has gone into this revision. Even after carefully taking a look at the revision, I am still concerned about the experiment’s rigor (e.g., no manipulation check, no control variables, etc.). As an experimental design needs rigorous processes and techniques, the method and data analysis procedures are very important. But this paper does not provide these processes and techniques in a detailed manner. Without them, we may misinterpret the empirical findings. Although this paper is a short communication piece as Sustainability calls it, I am still not convinced that the paper meets the standards of the experimental design approach.

 

Thank you for reading the revision closely! We agree that a manipulation check would give greater confidence in the finding that the university’s location is not an important source characteristic. Without one, our null results could indicate two things – the respondents did not pay attention to which university the report originated from, or they noted the source and it did not figure into their confidence in the report. However, under either mechanism we believe our main conclusion is supported; the location (proximate or distant) of the university that produced the reports is not important in people’s evaluation of climate change science, likely because these attitudes have become firmly established and linked to partisan identity. We added a paragraph to the discussion section to clarify this point on lines 166-172, “Another explanation for the null findings in the experiment may be that respondents did not read the reports closely enough to pick up on the university source from where the reports originated. We did not test the respondents’ reading comprehension, so we cannot determine if this is what happened. Failing to notice the source of the report is different from noting the source and not using the source location to evaluate the report. However, both scenarios support our broad conclusion that the location of the university is not a major consideration in determining confidence in climate change science.”


We agree that control variables may provide greater confidence that the treatment conditions were the only differences between the three experimental groups for each article. Randomization should ensure group equivalence, meaning that the difference of means test we report reflects a treatment effect. However, we also ran four multiple regression models (one for each dependent variable) for all 8 articles, with controls we expect to influence confidence in the report or assessments of climate change impacts. Even then, the location of the university does not have a significant impact. We clarify this on line 123: “One may think that this null finding can be explained by Democrats and Republicans responding differently to the treatment; however, within a regression framework with controls for party affiliation, ideology, education, and income, confidence in the report is not moderated by party affiliation, although Republicans were more skeptical of the report (results not shown).” And on line 135: “Using multiple regression with control variables…”
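For readers unfamiliar with the difference of means test mentioned above, it can be sketched in a few lines. This is a hypothetical illustration using Welch’s t statistic; the ratings below are invented Likert-style confidence scores, not the study’s actual data.

```python
import math

def welch_t(group_a, group_b):
    """Difference of means with Welch's t statistic (unequal variances).

    Returns (mean difference, t statistic). Illustrative only: the
    ratings passed in below are made-up 1-5 confidence scores, not
    responses from the study.
    """
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (n - 1 denominator).
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    # Standard error of the difference in means.
    se = math.sqrt(var_a / n_a + var_b / n_b)
    diff = mean_a - mean_b
    return diff, diff / se

# Hypothetical confidence ratings (1-5) for local vs. distant treatment.
local = [4, 5, 3, 4, 5]
distant = [4, 4, 3, 5, 4]
diff, t = welch_t(local, distant)
print(f"mean difference = {diff:.2f}, t = {t:.3f}")
# prints: mean difference = 0.20, t = 0.408
```

A t statistic this small would not approach conventional significance thresholds, which is the pattern of null results the authors describe across articles.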


We also added a small clarification of the experimental design on lines 89-92: “Randomization occurred for each article, meaning a respondent could have been in the local treatment group for article 1 and the distant treatment group for article 2.”
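The per-article randomization described above can be sketched as follows. This is a hypothetical illustration of independent re-randomization for each article; the condition names and article count are assumptions for the sketch, not taken from the study’s materials.

```python
import random

# Hypothetical condition labels; the actual study's labels may differ.
CONDITIONS = ["local", "distant", "control"]

def assign(respondent_id, n_articles, rng):
    """Independently re-randomize one respondent's condition for each article.

    Because each draw is independent, a respondent may land in the
    'local' group for article 1 and the 'distant' group for article 2.
    """
    return {article: rng.choice(CONDITIONS)
            for article in range(1, n_articles + 1)}

rng = random.Random(42)  # seeded for reproducibility
assignments = assign(respondent_id=1, n_articles=8, rng=rng)
print(assignments)
```

With large samples, this independent assignment yields equivalent groups for every article, which is the basis for the treatment-effect interpretation of the difference of means tests.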

Reviewer 3 Report

Thank you to the authors for thoroughly addressing the comments.

A few thoughts on the revision:

Some spelling and language errors have crept into the revised text.

For the weathercasters research, the latest and most empirically rigorous paper is: Feygina, I., Myers, T., Placky, B., Sublette, S., Souza, T., Toohey-Morales, J., & Maibach, E. (2020). Localized climate reporting by TV weathercasters enhances public understanding of climate change as a local problem: Evidence from a randomized controlled experiment. Bulletin of the American Meteorological Society.

Thank you for including the measures (appendix indeed seems like the right place for a shorter paper) and the figures. And, importantly, including information about the lack of moderation. Now that I read your explanation and see the figures I am realizing that you had run separate analyses for all the articles (which has a variety of pitfalls, including low power and higher error susceptibility). Have you considered pooling the data? If yes, are there reasons why you have decided against that?

I was also curious about whether you had included any open-ended measures and what you might have gleaned from those.

Author Response

We thank the reviewers and editors for once again closely reading our manuscript and providing comments that have improved it. We accepted all our edits in track changes from the last round of reviews, and all changes in this version are now tracked. We added the clarifications the reviewers requested and edited for grammar and readability throughout as the third reviewer noted. We believe we have addressed these concerns as well as we can, and we are ready to revise again if necessary.

Thank you to the authors for thoroughly addressing the comments.

A few thoughts on the revision:

  1. Some spelling and language errors have crept into the revised text.

Thank you for noting this. We have edited the entire document, and paid close attention to the text added with the revisions.

  2. For the weathercasters research, the latest and most empirically rigorous paper is: Feygina, I., Myers, T., Placky, B., Sublette, S., Souza, T., Toohey-Morales, J., & Maibach, E. (2020). Localized climate reporting by TV weathercasters enhances public understanding of climate change as a local problem: Evidence from a randomized controlled experiment. Bulletin of the American Meteorological Society.

Thank you for this great citation! We added it to the introduction, lines 53-55: “Most recently, in a randomized controlled experiment, researchers found that local climate reporting is influential in getting the public to connect global climate change to local weather and personal actions [19].”

  3. Thank you for including the measures (appendix indeed seems like the right place for a shorter paper) and the figures. And, importantly, including information about the lack of moderation. Now that I read your explanation and see the figures I am realizing that you had run separate analyses for all the articles (which has a variety of pitfalls, including low power and higher error susceptibility). Have you considered pooling the data? If yes, are there reasons why you have decided against that?

Thank you for this question. Yes, we did consider different ways of aggregating the results from each article, including pooling the data and creating an additive index for each dependent variable. Both approaches have problems. A pooled regression would require assuming the observations occurred at different times, when in reality they were separated by only a minute or two. Another issue with a pooled regression or an additive index is that there were differences among the articles that may influence how people assessed them. For example, most of the reports cover negative impacts of climate change, but article 7 says, “finally some good news” in relation to growth spurts among California’s giant sequoias. The positive vs. negative valence of each article, and the variance in the subject of the climate change impacts in each article, make it impossible to treat them as similar enough to pool or combine into an index. The figures also show that aggregation is unlikely to produce a clear and statistically significant pattern: for some articles the Zurich report garners more confidence (not statistically significant), and for others the local university does (again not significant). Even if we could aggregate, the null results would remain.

We believe the straightforward difference of means tests visualized in the figures is the best way to present these findings. We apologize again that these figures did not show up in the first version of the paper you read!

  4. I was also curious about whether you had included any open-ended measures and what you might have gleaned from those.

Unfortunately, we did not include any open-ended questions. We agree that we could have learned more about how respondents interpreted the articles with open-ended questions, but we were limited by our research budget and could only include the items we did.

Round 3

Reviewer 1 Report

I am hoping that these changes will also be welcomed by potential future readers and that they increase the reach of the article, and help people who are expressly interested in this topic find it. A thank you to the authors for performing these changes swiftly & competently!
