Peer-Review Record

The Strategy Method Risks Conflating Confusion with a Social Preference for Conditional Cooperation in Public Goods Games

Games 2022, 13(6), 69; https://doi.org/10.3390/g13060069
by Maxwell N. Burton-Chellew 1,2,*, Victoire D’Amico 2 and Claire Guérin 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 15 September 2022 / Revised: 20 October 2022 / Accepted: 21 October 2022 / Published: 25 October 2022
(This article belongs to the Special Issue Learning and Evolution in Games, 1st Edition)

Round 1

Reviewer 1 Report

LETTER TO THE AUTHORS

 

Referee report for “The strategy method risks conflating confusion with a social preference for conditional cooperation in public goods games.”

Overview

The present paper aims to contribute to the literature exploring the existence of conditional cooperation using a widely used lab experiment method, the strategy method (introduced by Fischbacher, Gächter, and Fehr in 2001). The authors investigate in a within-subject analysis whether the preeminence of conditional cooperators is altered if subjects play against a computer. Using 845 subjects at the University of Lausanne (HEC-LABEX), the authors replicate the instructions and control questions of Fischbacher and Gächter (2). While the research question is not new, as can be seen in Burton-Chellew, M. N., El Mouden, C., & West, S. A. (2016). Conditional cooperation and confusion in public-goods experiments. Proceedings of the National Academy of Sciences, 113(5), 1291-1296, the novelty is that the participants play the strategy method twice, once with other humans and once with a computer. The authors also take care of controlling for the order effect in the experiment.

Any rational, profit-maximizing cooperator, but also anyone with social preferences, should always contribute 0 when playing against a computer, as there is no issue of fairness or other social consideration directly involved in this case. If participants still cooperate against a computer, it may compromise the strategy method's validity as a measure of social preferences.

The authors find that a large and significant fraction of participants are classified as conditional cooperators even when they play with computers, suggesting irrationality and confusion. Hence their results are in line with a previous stream of research questioning the nature of cooperation and how to measure it. Their research highlights the importance of measuring noise in any experimental design and is therefore an important topic to work on.

 

Summary evaluation
The authors are exploring fundamental questions on social preferences and how robust the strategy method is at evaluating individuals' pro-social behavior, particularly cooperation.

The contribution of the paper to the overall literature is nevertheless thin, given that a previous and similar paper was published in PNAS in 2016 (Burton-Chellew, M. N., El Mouden, C., & West, S. A. (2016). Conditional cooperation and confusion in public-goods experiments. Proceedings of the National Academy of Sciences, 113(5), 1291-1296). The main difference is that, in the present study, the authors perform a within-subjects analysis, and participants face both situations within the same experiment. As the authors stipulate in lines 55-57, "by making participants complete two strategy methods, we made the distinction between human and computerized groupmates more salient….". I see this paper as an extension and robustness check of the article by Burton-Chellew et al. (2016). I believe this should be made more evident in the introduction, for instance.

I believe that the paper can be published in Games, but the link with the PNAS paper has to be highlighted more strongly. In particular, why is it important to have this alternative design? Why do we expect the participants to be more rational? What is the hypothesis? The authors expected the participants to be more rational when the games were played twice in a row (with and without a computer); this should be made more evident. Also, for the possible explanations of the results, the authors should provide more relevant literature. Some literature on heuristics and framing is missing, for instance.

Also, avenues for future research on how to improve either the current experimental design or how to measure social preferences should be given.

 

 

Main comments to address for publication:

 

Some English to fix: line 5 “their groupmates others.”

 

Line 24: The word "average" is missing. The authors have to say that it is the average contribution of others.

 

Line 45: I do not necessarily agree here. To me, the experiment extends citation 18 (Burton-Chellew, M. N., El Mouden, C., & West, S. A. (2016). Conditional cooperation and confusion in public-goods experiments. Proceedings of the National Academy of Sciences, 113(5), 1291-1296) and belongs to the stream of research that Fischbacher et al. started in 2001 with their seminal paper. Hence, lines 76-77 are overstated. The authors must be more specific on why this additional study and design is needed and interesting.

 

The authors do not talk about the unconditional contribution that individuals make. It would be nice to refer to it in the paper and see if it differs depending on whether participants play with humans or with the computer.

 

Lines 84-85: the 95% interval seems too narrow. The authors have to explain why they chose this.

 

Maybe the within-subject design is confusing in itself. Participants may be less likely to read the second set of instructions carefully; perhaps this should have been checked with a control question.

 

Figure 2 does not appear.

 

Lines 104-105: A statistical test is needed to establish whether the differences are significant.
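Since the design is within-subject, a paired nonparametric test would be one natural choice. A minimal sketch (Python, using purely hypothetical per-participant contribution values, not the authors' data) of what such a comparison could look like:

```python
# Hedged sketch: compare each participant's contribution with human groupmates
# versus with computer groupmates, using a paired nonparametric test.
import numpy as np
from scipy import stats

# Hypothetical paired data: one value per participant in each condition.
contrib_humans = np.array([12, 8, 15, 0, 20, 10, 5, 18])
contrib_computer = np.array([10, 9, 11, 0, 14, 10, 3, 16])

# Wilcoxon signed-rank test handles paired, non-normally distributed contributions.
stat, p_value = stats.wilcoxon(contrib_humans, contrib_computer)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p_value:.3f}")
```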

 

More discussion should appear in the introduction on the role of framing in experiments, where framing effects arise, and what the hypothesis based on the change of frame is.

See, for instance: 

 

1- Cartwright, E. (2016). A comment on framing effects in linear public good games. Journal of the Economic Science Association, 2(1), 73-84.

2- Dariel, A. (2018). Conditional Cooperation and Framing Effects. Games, 9(2), 37.

3- Dufwenberg, M., Gächter, S., & Hennig-Schmidt, H. (2011). The framing of games and the psychology of play. Games and Economic Behavior, 73(2), 459-478.

4- Fosgaard, T. R., Hansen, L. G., & Wengström, E. (2017). Framing and misperception in public good experiments. The Scandinavian Journal of Economics, 119(2), 435-456.

5- Lévy-Garboua, L., Maafi, H., Masclet, D., & Terracol, A. (2012). Risk aversion and framing effects. Experimental Economics, 15(1), 128-144.

 

Table: I think the unconditional contribution could be added here with a discussion.

 

Lines 161-162: Or how can we design an experiment that reduces the noise level? Is there a better alternative? It is known from the studies of Daniel Kahneman (Thinking, Fast and Slow) that people follow heuristics. 15% of the participants seem to adjust their behavior rationally, and the rest seem to follow heuristics or other unknown motives. Could it be that in a representative sample of the population we would find that 15% of the population is rational and does not follow heuristics? Maybe the results are in line with average human behavior. The question is, how do we design an experiment that minimizes heuristics and confusion? How do we make participants think slowly?

 

Lines 175-176: Going back to the instructions: maybe working on the instructions would reduce any type of confusion and heuristic. Instructions are very important. In particular, the written instructions may lead to “irrational” behavior. That point should be mentioned in the paper. It could be an idea for future research on framing effects.

 

Your instructions:

“After the experiment, only the experimenter will be aware of your conditional and non-conditional contributions, and your decisions will remain anonymous.”

“You will now again face the same decisions you have just taken, but in a new special case. In this case, you will be in a group composed only of you and the COMPUTER. Everything else will take place in the same way as before, the only difference being that instead of 3 other people, you will play only with the computer.”

“The computer will take the decisions instead of the other 3 members of the virtual group. The decisions of the computer will be taken in a random and independent way (each virtual player will therefore make its own decision at random).”

“You are the only real human member in your group, and only you will receive money from the outcome of this round.”

There may also be an experimenter effect related to the instructions. Participants know the experimenter may see their decisions and hence do not want to be perceived as selfish in the eyes of the experimenter. An online experiment may produce different outcomes.

Lines 184-190: Add references to support the claim.

Line 199: “is rare” is too bold a claim. We do not know this. We know that people are irrational and use shortcuts while making decisions, and this experiment may be proof of it. Participants may just focus on the objective function, and the recipient, in this setting, does not matter. Maybe introducing social ties and communication would change the outcome.

Such experiments should have control questions about why participants cooperated with the computer.

The strategy method versus the direct-response method should also be mentioned.

Brandts, J., & Charness, G. (2011). The strategy versus the direct-response method: a first survey of experimental comparisons. Experimental Economics, 14(3), 375-398.

Minozzi, W., & Woon, J. (2020). Direct response and the strategy method in an experimental cheap talk game. Journal of Behavioral and Experimental Economics, 85, 101498.

 

 

 

Author Response

Please see attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

Good research work is presented in this paper, however, I have the following comments:

1- You need to make the aim or the objective very clear in the abstract by stating for example "the aim of this research is ......"

2- The sections of the paper need to be rearranged. After the introduction, you put the results in Section 2. I suggest shifting Section 4 (Materials and Methods) to Section 2.

3- Figure 2 is empty.

4- You need to have a conclusion section at the end of your paper, where you summarize what you have in your paper, in addition to any proposed future research directions.

 

 

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

Thank you for the opportunity to review this article. First of all, an interesting topic. I was also interested in the experiment and its results.

However, I have a few recommendations for improving the article:

• From the point of view of cooperation, it would be appropriate to analyze the topic of "cooperation" in more detail in the article from different perspectives and areas (psychology, biology, other...). This would give you a broader view of the topic you are researching.

• The article relies on Mr. Fischbacher's strategy and sources. It would be appropriate to also highlight and bring into the article other, similar strategies from well-known authors who did similar research. These findings should also answer the questions: What did these authors achieve? How do their results compare to yours? (differences, similarities, conditions, ...)

• At the end of the article, it would also be appropriate to highlight how your findings are beneficial and how they enrich current knowledge.

Author Response

Please see the attachment

Author Response File: Author Response.pdf
