Peer-Review Record

Public Perceptions of Climate Change in the Peruvian Andes

Sustainability 2021, 13(5), 2677; https://doi.org/10.3390/su13052677
by Adrian Brügger 1,*, Robert Tobias 2 and Fredy S. Monge-Rodríguez 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 11 January 2021 / Revised: 16 February 2021 / Accepted: 25 February 2021 / Published: 2 March 2021

Round 1

Reviewer 1 Report

The manuscript examines how individuals in the Peruvian Andes relate to various aspects of the topic of climate change. The authors conducted interviews including qualitative and quantitative questions with a fairly large sample of locals residing in that region. 

I found the manuscript to be very well-written and informative.  The data collection on a hard-to-reach population is impressive.  And many of the findings are quite interesting; for example, differences in psychological distance of climate change on this population compared to previous work (mostly conducted in a WEIRD context), and the centrality of concerns about water to considerations of climate change amongst this population.  I also appreciated the descriptive background on the region provided in the introduction.  I suggest only a few minor clarifications before the manuscript is published.

L243: A brief discussion of statistical power and/or why the researchers opted for the sample size that they did will be helpful to other research teams hoping to build off of this for future work.

Table 1: Why is the N for different questions not the same?  I would think that anyone who didn’t answer a given question would be placed into the “refused” category (thus leading to a total N equal to the sample size).  Is there a second, hidden, category for those who didn’t answer a question?  If so, perhaps making this category explicit, and thus leading to an equal total N for each question, might be helpful to readers.   

Table 1: How was the political orientation measure assessed?  Although not impossible, it seems unlikely that there would not be a single participant with right-of-center ideology that would feel comfortable sharing with the interviewer.  If this is not an error, perhaps this could be clarified somehow.

L407: Which conditions for ANOVA were not met?

The conclusion mostly focuses on localized practical implications.  I would consider also looping back to theory and implications for psychological researchers (especially those interested in cross-cultural work at a broader scale). 

Author Response

(A more nicely formatted version of this answer is available as a PDF.)

The manuscript examines how individuals in the Peruvian Andes relate to various aspects of the topic of climate change. The authors conducted interviews including qualitative and quantitative questions with a fairly large sample of locals residing in that region. 

I found the manuscript to be very well-written and informative.  The data collection on a hard-to-reach population is impressive.  And many of the findings are quite interesting; for example, differences in psychological distance of climate change on this population compared to previous work (mostly conducted in a WEIRD context), and the centrality of concerns about water to considerations of climate change amongst this population.  I also appreciated the descriptive background on the region provided in the introduction.  I suggest only a few minor clarifications before the manuscript is published.

Thank you very much for your favourable assessment of our research and for your helpful feedback.

L243: A brief discussion of statistical power and/or why the researchers opted for the sample size that they did will be helpful to other research teams hoping to build off of this for future work.

We added our sample size considerations in section 5.2 (lines 238-251).

Table 1: Why is the N for different questions not the same?  I would think that anyone who didn’t answer a given question would be placed into the “refused” category (thus leading to a total N equal to the sample size).  Is there a second, hidden, category for those who didn’t answer a question?  If so, perhaps making this category explicit, and thus leading to an equal total N for each question, might be helpful to readers.   

There are indeed two reasons for missing values. First, some people did not complete the whole survey. Second, some people explicitly refused to answer particular questions. Only the latter were listed as ‘refused’. We added a note explaining this (Table 1, lines 309–311).

Table 1: How was the political orientation measure assessed?  Although not impossible, it seems unlikely that there would not be a single participant with right-of-center ideology that would feel comfortable sharing with the interviewer.  If this is not an error, perhaps this could be clarified somehow.

We are very grateful that you spotted this mistake. It seems that an error occurred when formatting the table. Right-of-center ideology was deleted by mistake (which was also obvious in that the percentages didn’t add up to 100 in the last version). We added the missing row about those supporting a right-of-center ideology to Table 1.

L407: Which conditions for ANOVA were not met?

Many variables were not normally distributed, and variances across groups were often unequal. We added this information to the MS (lines 415–416).
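For illustration, a minimal base R sketch of the kind of checks this refers to could look as follows (the data frame `dat` and the variables `worry` and `group` are placeholders rather than names from our dataset):

```r
## Hypothetical example: checking ANOVA assumptions for one dependent variable
## and falling back to the rank-based Kruskal-Wallis test when they are violated.

# Normality within each group (Shapiro-Wilk)
by(dat$worry, dat$group, shapiro.test)

# Homogeneity of variances across groups (Bartlett's test)
bartlett.test(worry ~ group, data = dat)

# If normality or equal variances do not hold, use Kruskal-Wallis instead of ANOVA
kruskal.test(worry ~ group, data = dat)
```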

The conclusion mostly focuses on localized practical implications.  I would consider also looping back to theory and implications for psychological researchers (especially those interested in cross-cultural work at a broader scale). 

We fully agree that discussing the theoretical and cross-cultural relevance of our findings is important. When reading the discussion section again, we felt that this goal was met relatively well in section 7.2. There we zoom in on psychological distance, which in theoretical terms is probably the most ‘psychological’ concept of the survey. We contextualize the surprising finding that climate change is perceived less as a distant threat than in other studies and discuss possible contextual explanations for this difference. We would be happy to follow your suggestion even more closely, but would need more concrete indications of what you mean by ‘cross-cultural work at a broader scale’.

Author Response File: Author Response.pdf

Reviewer 2 Report

Thank you for the opportunity to review this work.

This work is in an extremely relevant area. To improve its clarity and quality, I recommend some changes.

1) The main problem of this work is the lack of an overall thrust. Its nature is mostly descriptive, and a large amount of data is presented, making it difficult to understand the main contribution of the study. I suggest defining the goals more clearly and presenting only the data relevant to those goals. For instance, lines 166-169 state that this work contributes to the literature by comparing the climate change perceptions of rural and urban residents. However, the analyses do not focus on this issue.

2) The issue of WEIRD research (Western, educated, industrialized, rich and democratic) could be introduced to justify the relevance of the study.

3) Data analyses are not clear. It is important to specify how the analyses (quantitative and qualitative) were conducted. I suggest presenting fewer analyses and being more specific about those that are presented.

Author Response

(A more nicely formatted version of this answer is available as a PDF.)

Thank you for the opportunity to review this work.

This work is in an extremely relevant area. To improve its clarity and quality, I recommend some changes.

Thank you for taking the time to read and comment on our manuscript. Your feedback is much appreciated.

1) The main problem of this work is the lack of an overall thrust. Its nature is mostly descriptive, and a large amount of data is presented, making it difficult to understand the main contribution of the study. I suggest defining the goals more clearly and presenting only the data relevant to those goals. For instance, lines 166-169 state that this work contributes to the literature by comparing the climate change perceptions of rural and urban residents. However, the analyses do not focus on this issue.

Point well taken. We now state the goals in section 4 more clearly (lines 147-158): we now also include the goal of adding insights from non-WEIRD populations and are more explicit about why understanding climate change perceptions is important. We also removed the part about urban and rural populations because, as you correctly indicated, it may elicit wrong expectations.

Moreover, we strengthened the link between goals and analyses in section 5.4 (lines 411–441) by being more explicit about the purpose of the different analytical steps.

2) The issue of WEIRD research (Western, educated, industrialized, rich and democratic) could be introduced to justify the relevance of the study.

Thanks for this suggestion. We added a reference to Rad et al. (2018) at the end of section 1 (introduction, lines 58-59) and in section 4 (research goals, lines 153-155).

3) Data analyses are not clear. It is important to specify how the analyses (quantitative and qualitative) were conducted. I suggest presenting fewer analyses and being more specific about those that are presented.

We took several steps to improve this:

  1. We restructured the section in which we describe the analyses (5.4, lines 411–441) so that open-ended and closed-ended questions are described separately.
  2. We moved the description of the qualitative analyses from the result section to the “analysis” section (5.4, lines 433-439).
  3. We now describe our analyses in more detail. For instance, we are now more explicit about the conditions that were not met for conducting ANOVAs. The description of how we analysed the open-ended questions also contains more details than before (lines 433-439). Moreover, we now mention the R packages that we used to prepare the article (lines 440–441).
  4. In section 5.4, we linked the analyses more directly to the research goals.

We also carefully examined the idea of removing some of the dependent variables. However, we feel that the different dependent variables complement each other well and provide a comprehensive overall picture of climate change perceptions, involving experiential, affective, cognitive, and evaluative dimensions. As such, they all contribute to our goal of capturing people’s perceptions of climate change. We also tried to identify analyses that might be least useful for readers. However, the interests of the readership of this paper will most likely be very diverse (e.g., researchers from different fields interested in specific theoretical constructs such as worry or knowledge; policy-makers, NGOs, and other stakeholders who might look for starting points to inform education materials or behaviour change campaigns). Thus, the exclusion of variables may not only undermine our goal of a holistic assessment of climate change perceptions in the region but also unintentionally reduce the usefulness of this article for future readers. We would therefore like to keep all analyses.

Author Response File: Author Response.pdf

Reviewer 3 Report

The paper presents interesting research on subjective beliefs about climate change, focusing on a Peruvian region. The approach is interestingly broad, covering aspects that range from the experiential to the evaluative, and from the cognitive to the affective dimensions. The paper focuses on a country of the Global South, emphasizing the importance of including in the international literature a more diverse perspective, with specific attention to sensitive populations hit by this phenomenon. The theoretical background and the method are sound. The overview of the results and the discussion are good, even if some improvements are still possible. The language is clear.

I truly appreciate the work, yet I suggest some main improvements for the analysis and the discussion, and other minor issues.

For the analysis section, it would be beneficial to report the values of the tests for statistical significance (e.g., differences between socio-demographic groups in self-assessed knowledge, in the reality and causes of climate change, etc.), in particular for paragraphs 6.5, 6.6, 6.7, 6.8, and 6.9. This would help the reader evaluate the strength of the effects. Moreover, all the analyses show the effect of socio-demographic variables on subjective dimensions, but nothing is shown about the relationships among those dimensions. Is there a reason for this choice? Some variables may co-vary, and it would be useful to provide results that take those aspects into account (e.g., if the population in Cusco is more educated and wealthier, is the effect of territorial belonging on the dependent variables still relevant once education and income are controlled for?). This might also help to clarify some aspects in the discussion section, where many socio-demographic variables are mentioned but not always within a clear framework.

In paragraph 7.4 an informed participatory approach is suggested, which is valuable in this field. Yet, it would help to clarify how the paper deals with the integration of expert and laypeople perceptions. At the beginning of the paragraph, it might seem that the paper downplays interventions by experts when they are not consistent with laypeople's evaluations. Reading the suggested interventions from line 800 on, especially regarding informative campaigns to balance superficial media coverage, the perspective of the paper becomes clearer. Hence, a more consistent introduction to the paragraph could help the reader.

Some minor issues:

244 – How many local interviewers were involved? Is there any analysis on potential bias due to the interviewers?

304 – and following – Is the answering scale used in other studies? Are there any references that inspired this tool?

335 and following – Before the open question on the most important issue, how is the interview presented? Is there any content presented while approaching participants that may have influenced the open answer? Are the answers to this initial open question recorded or literally transcribed?

481 – instead of “things” you may consider “issues” or “topics” or “words”

425 and following – What is the tool for calculating the co-occurrence of the words and representing the graphs?

794 – close the parenthesis

 

Author Response

(A more nicely formatted version of this answer is available as a PDF.)

The paper presents interesting research on subjective beliefs about climate change, focusing on a Peruvian region. The approach is interestingly broad, covering aspects that range from the experiential to the evaluative, and from the cognitive to the affective dimensions. The paper focuses on a country of the Global South, emphasizing the importance of including in the international literature a more diverse perspective, with specific attention to sensitive populations hit by this phenomenon. The theoretical background and the method are sound. The overview of the results and the discussion are good, even if some improvements are still possible. The language is clear.

I truly appreciate the work, yet I suggest some main improvements for the analysis and the discussion, and other minor issues.

Thanks a lot for the appreciative words and for your helpful feedback.

For the analysis section, it would be beneficial to report the values of the tests for statistical significance (e.g., differences between socio-demographic groups in self-assessed knowledge, in the reality and causes of climate change, etc.), in particular for paragraphs 6.5, 6.6, 6.7, 6.8, and 6.9. This would help the reader evaluate the strength of the effects.

Point well taken. We agree that the information about whether or not differences are statistically significant is not very informative on its own, especially when the sample size is large and even small differences turn out to be statistically significant. Because we conducted a large number of analyses, it would be impractical to report all test statistics in the manuscript. However, we have now included the effect sizes when we describe differences between groups. We are grateful for your suggestion; it is now easier to directly assess the relevance of the effects (before, this was only possible via the supplementary materials).
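The specific effect size measure used for these group comparisons is not stated in this response; purely as an illustration, one common option for Kruskal-Wallis comparisons is rank epsilon-squared, which can be derived directly from the H statistic (again with placeholder names `dat`, `worry`, and `group`):

```r
## Illustrative only: rank epsilon-squared as an effect size for a
## Kruskal-Wallis comparison; variable names are placeholders.
kw <- kruskal.test(worry ~ group, data = dat)
n  <- sum(complete.cases(dat[, c("worry", "group")]))

# epsilon-squared = H / (n - 1); ranges from 0 (no effect) to 1 (maximal effect)
epsilon_sq <- unname(kw$statistic) / (n - 1)
epsilon_sq
```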

Moreover, all the analyses show the effect of socio-demographic variables on subjective dimensions, but nothing is shown about the relationships among those dimensions. Is there a reason for this choice? Some variables may co-vary, and it would be useful to provide results that take those aspects into account (e.g., if the population in Cusco is more educated and wealthier, is the effect of territorial belonging on the dependent variables still relevant once education and income are controlled for?). This might also help to clarify some aspects in the discussion section, where many socio-demographic variables are mentioned but not always within a clear framework.

Thank you for your thoughtful comment. We added the requested information about the relationships between the predictor variables to the Supplementary Material (Supplementary Table 1). More specifically, we used Cramer’s V as an effect size measure for categorical variables. This measure varies between 0 (no association) and 1 (perfect association).
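For reference, Cramer’s V can be computed in base R roughly as follows (the column names `region` and `education` are placeholders; equivalent functions exist in packages such as vcd or rcompanion):

```r
## Cramer's V for two categorical variables, computed from the chi-squared statistic.
cramers_v <- function(x, y) {
  tab  <- table(x, y)
  chi2 <- suppressWarnings(chisq.test(tab, correct = FALSE)$statistic)
  n    <- sum(tab)
  k    <- min(nrow(tab), ncol(tab))
  unname(sqrt(chi2 / (n * (k - 1))))  # 0 = no association, 1 = perfect association
}

cramers_v(dat$region, dat$education)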

If we understand the second part of the above comment correctly, the underlying concern is that some of the effects may be due to spurious relationships; in other words, the reported effects in the dependent variable may not be ‘caused’ by the variable that is used as a predictor but by another variable that influences both the predictor (independent) variable and the dependent variable. Or is this comment more about the question of which predictor can explain more variance in the dependent variable when predictors are correlated? We agree that including additional variables as control variables into the models can sometimes be a good way to deal with these concerns.

However, in the present case, we think there are several reasons why this may not be the best option. First, we currently know very little about how different socio-demographic variables may affect the different subjective perceptions. A crucial goal of this research was therefore to *explore* possible effects of socio-demographic variables (see revised section ‘The present research’, lines 146–159). To this end, it seems more informative to examine the influence of these variables one by one, because this makes it easier to immediately identify the effect of the focal predictors compared to more complex models with several additional predictor variables. Indeed, some analytical approaches (e.g., regression) that are capable of including several predictor variables may even make it impossible to assess the effect of individual predictors because the shared variance is attributed to only the most powerful predictor(s).

Second, the implementation of this idea is not straightforward and would make the already comprehensive analyses even more extensive and more difficult to digest. More specifically, to the best of our knowledge, it is not possible to include controls in Kruskal-Wallis tests (which we used in this study). We would therefore need to use some sort of regression analysis. Because the measurement level of many predictors is nominal, this would require the use of dummy variables. This would increase the number of variables in the model (because each level of the predictors would be represented by a separate 0/1 predictor, minus the reference category) and make the interpretation less straightforward. Another potentially negative effect of including several predictors is that they may cause problems if they are correlated too strongly (i.e., problems due to multicollinearity).
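To illustrate the point about dummy variables (this sketch is not part of the reported analyses), a nominal predictor expands in a regression model as follows; the predictor `region` and the outcome `worry` are hypothetical names:

```r
## Each level of a nominal predictor except the reference category becomes
## a separate 0/1 column, which quickly inflates the number of model terms.
dat$region <- factor(dat$region)
head(model.matrix(~ region, data = dat))

## A model with controls would then look something like this (hypothetical):
# lm(worry ~ region + education + income, data = dat)
```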

In sum, we appreciate this suggestion and have taken similar steps in other studies. However, for the purpose of this study, we believe that adding control variables would cause more harm (more complexity, potentially concealed effects, more difficult interpretations) than good. We also believe that the current analytical approach fits the goals of the study better than a more complex approach with control variables. We therefore prefer to stick to the current analytical approach.

In paragraph 7.4 an informed participatory approach is suggested, which is valuable in this field. Yet, it would help to clarify how the paper deals with the integration of experts and laypeople perceptions. It might seem, at the beginning of the paragraph, that the paper diminishes the interventions by the experts when they are not consistent with laypeople evaluation. Reading the suggested interventions from line 800 on, especially regarding informative campaigns to balance superficial media coverage, the perspective of the paper becomes clearer. Hence, a more consistent introduction to the paragraph could help the reader.

We agree that in our last version, the introduction to the implications for adaptation was ambiguous and our position not sufficiently clear. We have revised the first paragraph in 7.4 (lines 785–795) to more directly and clearly state our position about how to consider the views of experts and lay people.

Some minor issues:

244 – How many local interviewers were involved? Is there any analysis on potential bias due to the interviewers?

In total, 22 people acted as interviewers. Most interviews (97%) were conducted by 12 interviewers. In addition to the quality checks reported in the last paragraph of section 5.2, we also carried out some quality checks with respect to the interviewers, such as the plausibility of the number of interviews per day and the length of the interviews. Apart from these quality checks, we have not further analysed whether the interviewers systematically biased the findings. In our experience, such analyses typically do reveal differences between interviewers, and these differences are often plausible, for example, because interviewers worked on different days, at different hours, or in different places. However, because it is often unclear what conclusions to draw from such differences, we decided not to examine this in detail.
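As a sketch of what such interviewer-level plausibility checks could look like (the data frame `interview_log` and its columns are assumed, not taken from the study):

```r
## Hypothetical check: interviews per interviewer per day and typical interview length.
library(dplyr)

interview_log %>%
  group_by(interviewer, date) %>%
  summarise(n_interviews    = n(),
            median_duration = median(duration_min),
            .groups = "drop") %>%
  arrange(desc(n_interviews))
```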

304 – and following – Is the answering scale used in other studies? Are there any references that inspired this tool?

We have developed this tool based on experiences from previous field research. To the best of our knowledge, it has not been reported elsewhere.

335 and following – Before the open question on the most important issue, how is the interview presented? Is there any content presented while approaching participants that may have influenced the open answer? Are the answers to this initial open question recorded or literally transcribed?

Participants were asked if they wanted to participate in an interview. If they agreed, the interviewer read out the following, neutral information: “Thanks for participating in this study. During this interview, we want to inquire about your opinions. We do not want to test your knowledge or convince you of any point of view. Because of this, you can answer freely whatever you think is the most adequate response for you. In case you don't understand a question, please ask to better explain what we want to know.”

Immediately after this they were asked: “What would you say will be the most important problem Peru will face in about 20 years?”.

The answers to this question were directly entered into a tablet (interviewers had a keyboard at their disposal).

We added this information to the MS (lines 285–288).

481 – instead of “things” you may consider “issues” or “topics” or “words”

We changed this to ‘issues’ (now line 492).

425 and following – What is the tool for calculating the co-occurrence of the words and representing the graphs?

We conducted all analyses with R. We added the names of the packages that we used at the end of section 5.4. The analyses of the open-ended questions mostly followed the approach of Silge and Robinson (2017). We added this reference to the paper.
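As a minimal sketch in the spirit of Silge and Robinson (2017), word co-occurrences in the open-ended answers can be counted and plotted as a network roughly as follows (the data frame `answers` with columns `id` and `text` is hypothetical, and in practice a Spanish stop word list would also be applied):

```r
## Count how often two words appear in the same open-ended answer and plot the network.
library(dplyr)
library(tidytext)
library(widyr)
library(igraph)
library(ggraph)

word_pairs <- answers %>%
  unnest_tokens(word, text) %>%           # one row per word per answer
  pairwise_count(word, id, sort = TRUE)   # co-occurrence counts within answers

word_pairs %>%
  filter(n >= 5) %>%                      # keep only frequent pairs
  graph_from_data_frame() %>%
  ggraph(layout = "fr") +
  geom_edge_link(aes(edge_alpha = n)) +
  geom_node_point() +
  geom_node_text(aes(label = name), repel = TRUE)
```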

794 – close the parenthesis

Done.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

All comments and suggestions have been addressed.
