Perspective
Peer-Review Record

Integrating Open Science Principles into Quasi-Experimental Social Science Research

Soc. Sci. 2025, 14(8), 499; https://doi.org/10.3390/socsci14080499
by Blake H. Heller 1,* and Carly D. Robinson 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 12 June 2025 / Revised: 8 August 2025 / Accepted: 14 August 2025 / Published: 19 August 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Please see the attached file.

Comments for author File: Comments.pdf

Author Response

Please see attached response and revised manuscript with changes tracked. Thank you for your thoughtful review of this article.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Review of “Integrating Open Science Principles into Quasi-Experimental Social Science Research”

Summary

The authors argue that open science principles, which are currently applied mainly to experimental social science, ought to be extended to quasi-experimental research. Quasi-experimental research involves samples in which groups of individuals are 'accidentally' assigned to treatments in an as-good-as-random fashion. The authors provide some background on open science principles and identify seven areas in which current open science practices can be applied to quasi-experimental research: 1) Preregistration, 2) Preanalysis, 3) Open Source Data and Code, 4) Registered Reports, 5) Open Access Articles and Preprints, 6) Messaging Practices, and 7) Replication. The authors then make specific recommendations for research stakeholders.

Analysis

I liked this paper very much. I think the authors’ recommendations are well-thought-out and beneficial. Although current social science research is of high quality, these recommendations can improve the quality of academic research by enhancing transparency.

I have a few comments that the authors may wish to consider, but none of these comments must be addressed for publication. I consider the paper to be publishable as is.

As I read the paper I found myself sorting the suggested principles into two types: practices and values. Mostly, the paper suggests improved practices for social science research. For example, recommending that researchers preregister their research plans is a practice. But some of the principles are not about practices; they are about values. For example, I think the authors' suggestion of giving authors the opportunity to create registered reports and have those reports (and subsequent findings) published is an excellent idea. However, it clashes not so much with editorial practices as with editorial values. Most editors and referees value clear, appropriate methodology; however, they also value results, particularly 'novel' results. I am not sure this emphasis on unexpected results is entirely consistent with open science values, and until editors' and referees' values change, neither will their practices. Personally, I think a well-designed study that replicates previous results is at least as worthy of publication as a less well-designed study that finds 'novel' results.

A second area where I saw this distinction between values and practices was in the section on publication of data sets. The authors seem to believe that, in general, data sets should be made public, but that some data sets might need to remain confidential. I don't agree. I believe that one of the costs of getting your paper published should be giving up exclusive access to your data. For me, the reason for withholding the data is irrelevant; if the data isn't public, then the paper should not be published. I believe in maximizing transparency, and holding onto your data strikes me as inconsistent with the community scholars are trying to create, and particularly with the open science movement.

A third, less important observation: The authors mention p-hacking and issues with statistical significance, but do not reference the growing literature on best practices for reporting statistical results that downplay the importance of p-values. This gets back to the idea of values vs. practices: perhaps social scientists' preference for statistical significance over other types of significance (see Ziliak and McCloskey, "The Cult of Statistical Significance" (2008), for a discussion) ought to be discussed. Ziliak and McCloskey point out that statistical significance is about precision, but the overall effect (or, in my field, economic significance) may be more important. Understanding this distinction might help address the values behind some of the objectionable practices (e.g., p-hacking).

Author Response

Please see attached response and revised manuscript with changes tracked. Thank you for your thoughtful review of this article.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

Thank you for the opportunity to review this paper. I find that it provides a good overview of some of the issues surrounding open science and quasi-experimental research. I think the authors make some good points and I appreciate the ecosystemic approach. I think this contribution will be published, but I have a few suggestions for changes that I think would improve the paper.

Importantly, this paper lacks a methods section and one is needed. In quite a few places the paper reads like an opinion piece. A clear statement about the method used (and its strengths/limitations) is essential. Information about the selection of resources is also needed. I quickly found literature which seems relevant but is not mentioned in this paper, e.g. https://eric.ed.gov/?id=ED672410;  https://sol.sbc.org.br/index.php/opensciense/article/view/17140;  https://doi.org/10.1111/1365-2664.13571; https://doi.org/10.1080/00461520.2021.1901709. A more systematic presentation of the literature is needed. 

I'd like to see a clearer position on the extent to which the use of quasi-experimental data away from its original context poses an epistemological problem. The paper frames this mainly in terms of bias, but there is also the possibility of simply taking data points out of context.

I don't consider this an essential addition, but consider adding some sort of visualisation which shows researchers how the workflow might differ when following the authors' advice. There is a little of this in Fig. 1, but I wonder whether another graphic could present this more at the level of individual practice.

Some other comments: 

p.1 34-5 This claim needs more development/scrutiny. The reference is fairly old, which implies a philosophical/conceptual issue rather than something to do with data. Given that this framing is so foundational for the paper, it should be explored further, and the argument about observational data and causality should be made explicit and detailed.

p.2 46 Open Science is introduced as a key category but not defined/explained.

p.2 57 Examples of open science registries that exclude non-experimental research should be provided. I recommend looking into the guidance provided by such registries to see the reasons given and assess whether they are of relevance. 

p.2 89-p.3 100 Consider tightening the presentation of the argument. At the moment, it reads as 'we echo this' and 'we believe that' but it's not evident why anyone else should share these beliefs.

p.5 200-1 I'm not convinced by the 'natural storyteller' line and why this desire for narrative would somehow take priority over epistemological concerns. 

p.5 214-225 Not all research conforms to the model as presented here. E.g. much qualitative/exploratory research is conducted without hypothesizing. Design based research is another example. This section should be clearer about the specific types of research under consideration. 

p.7 268-270 Please provide citations for this claim

p.7 291-3 I think citations for these different services are probably needed 

p.14 643-p.15 652 Look again at the claims made and the original sources. Grossman and Brembs find that the cost scales with the prestige of the journal, not its OA policy.  Limaye writes about APCs in general. I'm not sure where you got the numbers from, and you also need to introduce a lot more nuance around different OA approaches (gold/green/diamond) instead of treating them as a single approach.

p.18 Discussion.  I don't think this brief discussion really captures the key tensions in the paper. It's quite short (more like a conclusion) and reads like an alternative version of the abstract.  I suggest the discussion consider the outcomes of the paper in relation to wider strategies and policies intended to implement Open Science at scale.  I do think the paper needs a discussion section. 

Author Response

Please see attached response and revised manuscript with changes tracked. Thank you for your thoughtful review of this article.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

Please see the attached file.

Comments for author File: Comments.pdf

Author Response

Please see the attachment. Thank you for your careful review of the revised manuscript.

Author Response File: Author Response.pdf
