Perspective

Integrating Open Science Principles into Quasi-Experimental Social Science Research

by Blake H. Heller 1,* and Carly D. Robinson 2
1 Hobby School of Public Affairs, University of Houston, Houston, TX 77204, USA
2 Graduate School of Education, Stanford University, Stanford, CA 94305, USA
* Author to whom correspondence should be addressed.
Soc. Sci. 2025, 14(8), 499; https://doi.org/10.3390/socsci14080499
Submission received: 12 June 2025 / Revised: 8 August 2025 / Accepted: 14 August 2025 / Published: 19 August 2025

Abstract

Quasi-experimental methods are a cornerstone of applied social science, answering causal questions to inform policy and practice. Although open science principles have influenced experimental research norms across the social sciences, related practices are rarely implemented in quasi-experimental scholarship. In this perspective article, we describe open science research practices and discuss practical strategies for quasi-experimental researchers to implement or adapt these practices. We also emphasize the shared responsibility of external stakeholders, including data providers, journals, and funders, to create the circumstances and incentives for open science practices to proliferate. While individual quasi-experimental studies may be incompatible with some or most practices, we argue that all quasi-experimental work can benefit from an open science mentality and that shifting research norms toward open science principles will ultimately enhance the transparency, accessibility, replicability, unbiasedness, and credibility of quasi-experimental social science research.

1. Motivation

“True understanding of how best to structure and incentivize science will emerge slowly and will never be finished. That is how science works.” (Munafò et al. 2017)
Answering causal questions about human behavior is a central focus of applied social science research. Borrowing from and building upon empirical methods popularized in the agricultural and medical sciences, quantitative social scientists employ a wide range of tools to identify causal relationships in data (see Imbens 2024 for a recent summary). While experimental research or randomized controlled trials (RCTs) are widely regarded as the gold standard for causal inference, many questions cannot be answered in a laboratory, field, or survey experiment for practical or ethical reasons. Decades of scholarship demonstrate that strictly observational or descriptive quantitative methods are generally insufficient to accurately answer causal questions about public policy or human behavior (e.g., Leamer 1983; LaLonde 1986; Smith and Todd 2005; Angrist and Pischke 2010; Murnane and Willett 2011; Imbens and Xu 2024). Thus, quantitative scholars often turn to quasi-experimental approaches to identify specific settings or populations where exposure to a condition, treatment, or policy is unrelated to a subject’s observable and unobservable characteristics (including potential outcomes) conditional on observable variables (i.e., the unconfoundedness assumption is satisfied; see, e.g., Rubin 2005; Imbens and Wooldridge 2009; Imbens 2024).
In applied settings, quasi-experimental research designs are an increasingly important source of evidence to guide policy decisions or organizational behavior (Angrist and Pischke 2010; Council of Economic Advisors 2014; Currie et al. 2020; Goldsmith-Pinkham 2024). In quasi-experimental research, investigators may leverage idiosyncratic aspects of an intervention’s timing, location, intensity, or eligibility criteria to identify as-good-as-random variation in exposure to a “treatment” (i.e., conditions under which unconfoundedness is likely to hold for a population or subpopulation) and assess its impact on individuals or communities. Quasi-experimental studies are often referred to as “natural experiments,” and use causal research designs like regression discontinuity, difference-in-differences, instrumental variables, and enrollment lottery strategies to identify plausibly causal relationships in data.
Open science principles have influenced experimental research norms across the social sciences (Munafò et al. 2017; Christensen et al. 2020; Logg and Dorison 2021); however, many common experimental open science practices are rarely implemented in quasi-experimental research (Christensen et al. 2020; Miguel 2021; Hardwicke and Wagenmakers 2023; Dee 2025). While there is no universally accepted definition of “open science,” it can be usefully thought of as “[…] the framework that renders research [processes and] outputs visible, accessible, [and] reusable” and contrasted with “closed science”, wherein research “production, communication, or utilization, is inaccessible to potential consumers” (Chubin 1985; Gomez-Diaz and Recio 2020; UNESCO 2021).
In a 2019 essay posted on the World Bank’s Development Impact blog, Berk Özler lamented the challenge of using existing preregistration platforms to accommodate quasi-experimental research designs. While he was able to find a repository that could be adapted to accommodate his propensity-score-matching research design (the now-defunct Evidence in Governance and Politics [EGAP] registry, which has since been integrated into Open Science Framework [OSF] Registries), none of the dominant social science preregistration platforms were set up with quasi-experimental research in mind, and some continue to actively exclude non-experimental research (see Section 4.4 below for further details).
Importantly, Özler’s concerns were not that his study would struggle to be published in the absence of preregistration but that he and his research team were vulnerable to the same unintentional biases that catalyzed the open science revolution to begin with: “The trouble is, I do not trust myself (or my collaborator) to not be affected in our choices by the impact findings, their statistical significance, etc. Not that I have a stake in the success of the program: it is more that I am worried about subconscious choices that can take the analysis in one direction than the other—exactly because I can see the consequences of these choices pretty easily if I have all the data[…]” (Özler 2019).
Recent work interrogating the influence of these subconscious choices, which are typically difficult or impossible to identify in published work, suggests that these concerns are warranted. Decisions about variable definitions, sample exclusions, and model parameters can introduce substantial variation into quasi-experimental estimates of program effects—even when well-intentioned researchers use the same data to answer the same question (Huntington-Klein et al. 2021, 2025; Breznau et al. 2022; Holzmeister et al. 2024; Wuttke et al. 2024). Notably, these findings occur in the absence of publication incentives related to the direction, magnitude, or statistical significance of results. Munafò et al. (2017) similarly emphasized the need for “measures to counter the natural tendency of enthusiastic scientists who are motivated by discovery to see patterns in noise” (p. 2).
In addition to epistemological concerns related to building scientific knowledge on a faulty foundation, the practical stakes of ensuring the credibility and reliability of evidence from quasi-experimental research are high, as the applied focus of quasi-experimental research means that it often yields the best available evidence to drive policy decisions. U.S. presidents have cited quasi-experimental research to justify policy positions in every Economic Report of the President since at least 2010 (e.g., Council of Economic Advisors 2014, 2018, 2022), as well as in speeches like the State of the Union (e.g., Obama 2014; Biden 2024). The credibility and accuracy of quasi-experimental studies can have real consequences that influence billions of dollars in public spending and philanthropic investment (Gibbons 2023; Bill and Melinda Gates Foundation n.d.; The White House n.d.). Ultimately, when quasi-experimental research falls short, for whatever reason, this can have profound real-world implications: misinformed policies, ineffective programs, and the perpetuation of inequities.
In this perspective article, we explore the potential for open science principles and practices to enhance quasi-experimental social science research. The practices we highlight are tangible steps researchers, journal editors, peer reviewers, funders, and research registries can take to promote open science principles—including openness, transparency, reproducibility, and inclusiveness—in quasi-experimental work (UNESCO 2021). Not every quasi-experimental study can or should integrate every open science practice, and as we discuss the particular challenges quasi-experimental scholars face, we remind readers that “[…]open science practices exist on a spectrum rather than as a binary, and researchers need not adopt whole cloth the approaches of experimentalists to find specific ways of increasing transparency in research design that work in particular subfields and methodological approaches” (Reich 2021, p. 102). However, we argue that all quasi-experimental research can benefit from adopting an open science mentality. In the remainder of the article, we describe specific open science research practices and identify opportunities for quasi-experimental scholars to implement or adapt many of the same practices that have been adopted in experimental research to enhance the credibility, accessibility, replicability, and unbiasedness of quantitative social science research.

2. Background on Open Science and Causal Inference

The most popular and well-understood method for separating causation from correlation is the randomized controlled trial (RCT), or experiment. Widely regarded as the “gold standard” for causal inference with respect to internal validity, experiments have been used in psychology since the early 1900s (e.g., Woodworth and Thorndike 1901) and are growing in popularity in other social science disciplines (e.g., Currie et al. 2020). In the early 2010s, the so-called “replication crisis” catalyzed substantial changes in experimental psychology research norms, many of which have spilled over to other fields.
Figure 1, adapted from Gehlbach and Robinson (2021), synthesizes the contrast between “old school” experimental research practices that led to the replication crisis in psychology and “open science” research practices that align with the values of transparency, inclusivity, and honesty in pursuit of the truth. For example, while “old school” norms led to unchecked p-hacking via ex post modeling decisions that are easily justified after data is analyzed (i.e., via “Hypothesizing After Results are Known” or HARKing), today, the default expectation is that experimental social science research is preregistered with a detailed preanalysis plan. In the past, research teams might have run dozens of underpowered trials to achieve a statistically significant result, leaving untold numbers of null results to rot in file drawers. Preregistration encourages scholars to take power analyses seriously and adequately power their studies (van den Akker et al. 2024). When interventions do not, in fact, impact outcomes of interest, the resulting precise null results are, in theory, more likely to be published (Nosek et al. 2018), and some are even accepted before data is collected as more and more highly regarded journals accept registered reports.
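As a concrete illustration of taking power analyses seriously, the minimal sketch below shows the kind of ex ante power calculation a preanalysis plan might record for a simple two-group comparison; the minimum detectable effect, significance level, and target power are illustrative assumptions rather than values from any study cited here.

```python
# Minimal sketch of an ex ante power analysis for a two-group comparison.
# All design inputs below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

required_n_per_group = analysis.solve_power(
    effect_size=0.20,        # assumed minimum detectable effect (Cohen's d)
    alpha=0.05,              # two-sided significance level
    power=0.80,              # conventional target power
    ratio=1.0,               # equal group sizes
    alternative="two-sided",
)

print(f"Required sample size per group: {required_n_per_group:.0f}")
```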
Historically, replication efforts have been stifled by limited access to data and/or software, and replicators struggled to decipher researchers’ decisions about variable definitions, model parameters, and sample exclusions from the text of an article. Today, journals generally encourage (or require) scholars to facilitate replication by providing public access to the software used to clean and analyze their data (for quasi-experimental studies, this usually means well-documented source code, including any programming scripts), alongside de-identified datasets when possible. Researchers can also democratize access to the research frontier by adopting the open science practices of publishing preprints or publishing in open-access journals, rather than allowing slow publication processes to stall the release of cutting-edge research that remains inaccessible to the public behind expensive paywalls.
While recent research suggests that preregistration with preanalysis plans reduces p-hacking and publication bias in experimental research (Brodeur et al. 2024), it remains unclear whether and how open science practices directly influence trust in research or perceptions of researchers’ credibility (Field et al. 2020). Proponents argue that open science fundamentally strengthens the scientific process by enhancing transparency, accessibility, and reproducibility. As Tennant (2019) asserts, “open science is just good science,” reflecting a broader movement to normalize openness as a standard practice rather than an exception (Watson 2015; Tennant and Breznau 2022). As knowledge of the benefits and ease of implementing open science practices spreads, we expect perceptions and research norms to coevolve until these practices are close to universally adopted in causal research.

3. Practical Advice for Researchers

How can scholars integrate open science principles into quasi-experimental research? Today, the process of conceiving, designing, implementing, and analyzing data to conduct a quasi-experimental research project involves a tremendous degree of researcher choice up to the point of publication (Simmons et al. 2011; Huntington-Klein et al. 2021, 2025; Dee 2025). Hypothesis generation, sample selection, model choice, and myriad other parameters may be decided, adjusted, or finalized at any point of the research process, including during peer review, without documentation.
While some projects may deviate only slightly (or not at all) from a researcher’s initial research design, there is very little transparency about how modeling decisions are made, and if replication materials are shared (e.g., programming scripts or other source code used for data cleaning and/or analysis), only final versions see the light of day. Even when dozens of models or sample selection criteria are tested to produce the final results, all other candidate models are essentially left on the cutting room floor. A growing body of evidence suggests that this process introduces variance to quasi-experimental impact estimates that is unaccounted for by standard errors (Huntington-Klein et al. 2021, 2025; Breznau et al. 2022; Holzmeister et al. 2024; Wuttke et al. 2024).
In this section, we discuss seven open science practices in greater detail, offering a practical guide to how quasi-experimental researchers can implement each practice to increase the transparency, replicability, and integrity of quasi-experimental social science research. For each practice, we describe what it entails, explain the problem(s) it solves or ameliorates, and identify the circumstances under which it can be successfully implemented in quasi-experimental studies. While we aim to provide concrete examples illustrating how and why researchers might embed open science practices in their own quasi-experimental studies, we acknowledge that an exhaustive account of all potential open science practices falls beyond the scope of this perspective article.

3.1. Preregistration

The most basic preregistration establishes when a research project was formally initiated, as well as the fundamental questions that the project is designed to answer. A high-quality preregistration (especially when paired with a detailed preanalysis plan) accomplishes five goals:
  • Establishes the timeline of when a research project was initiated;
  • Establishes the primary hypotheses that the project is designed to test;
  • Increases the transparency of methods to facilitate replication and limit scope for HARKing;
  • Allows for a principled approach to multiple hypothesis correction; and
  • Enhances the credibility of prespecified results versus those that emerge from exploratory analyses.
We argue that these five goals are worthwhile—and often attainable—for any study that aims to identify the causal impact of a policy or practice, and researchers can benefit from preregistering quasi-experimental studies (and in some cases, descriptive research). Social scientists who conduct experiments are now expected, if not required, to preregister their studies in order to publish in top peer-reviewed journals or receive funding from a growing list of private foundations (e.g., Arnold Ventures, the Russell Sage Foundation) and government agencies (e.g., the National Institutes of Health or Institute of Education Sciences). However, the vast majority of quasi-experimental studies are not preregistered, and, at present, there are few incentives to do so.
At the most basic level, preregistrations occur before researchers’ findings can exert any influence on the analysis process, ensuring that the study design and any hypotheses are documented in advance, based on the considered judgments of the research team, without being influenced by ex post considerations related to statistical significance or other relationships in the data. In practice, researchers can preregister a quasi-experimental study with just their hypotheses and outcome variables and very little additional detail. However, even among well-intentioned researchers, without a detailed preanalysis plan, preregistration alone is likely insufficient to prevent p-hacking and HARKing (Brodeur et al. 2024). In fact, some would argue (and we agree) that a quasi-experimental preregistration without a preanalysis plan that documents the details of the actual analysis should not be considered a valid preregistration at all (e.g., Klonsky 2024).
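As one way to make the fourth goal above concrete, the minimal sketch below applies a false discovery rate correction across a prespecified family of primary outcomes; the p-values are hypothetical and serve only to show how a preregistered family of hypotheses supports a principled correction.

```python
# Minimal sketch of principled multiple-hypothesis correction across a
# prespecified family of outcomes; the p-values are hypothetical.
from statsmodels.stats.multitest import multipletests

primary_outcome_pvalues = [0.012, 0.049, 0.231, 0.003]

# Benjamini-Hochberg false discovery rate control at alpha = 0.05
reject, adjusted, _, _ = multipletests(
    primary_outcome_pvalues, alpha=0.05, method="fdr_bh"
)

for p, p_adj, r in zip(primary_outcome_pvalues, adjusted, reject):
    print(f"raw p = {p:.3f} -> adjusted p = {p_adj:.3f}, reject H0: {r}")
```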

3.2. Preanalysis Plans

Humans are natural storytellers (McAdams and McLean 2013; Gehlbach and Robinson 2018), making it easy to justify one data cleaning or modeling choice over others after seeing which option yields more favorable results, particularly when multiple approaches can be reasonably justified ex post. Thus, the general rule is to prespecify a single modeling path per hypothesis, which keeps researchers from making idiosyncratic post hoc decisions while still allowing future work to replicate or extend the analysis.
For quasi-experimental studies, this means that, insofar as is possible, researchers should commit to their primary hypotheses, primary outcomes, identification strategy, empirical model, model parameters, definition of treatment, principles of sample construction, and variable definitions before observing outcome data. For research questions that emerge after researchers begin analyzing outcomes, preregistration is not possible. However, for studies that are conceived as a result of a natural experiment, where the outcomes have not yet been observed (or, ideally, not yet been realized), preregistration is possible.
Although there are key differences between preregistering experimental and non-experimental studies, many of the general tenets are consistent. For instance, researchers conducting preregistered quasi-experimental or experimental research should be able to describe the study, list their hypotheses, explain principles of sample construction, define key variables, and distinguish between primary and exploratory outcomes or hypotheses before accessing outcome data that could influence those decisions based on ex post considerations. However, where experimental studies require information on the experimental design, randomization procedures, sample recruitment, and treatment–control contrast, researchers would instead outline their quasi-experimental study’s identification strategy, sample exclusions, empirical model, model parameters, and treatment–comparison contrast. Stating the key features of a study prior to analysis sends a signal of credibility and decreases the likelihood that the findings are spurious artifacts of ad hoc, ex post modeling decisions.
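One way to operationalize this is to record these study features in a structured, version-controlled document alongside the preregistration. The sketch below is a hypothetical example of such a record; every field name and value is a placeholder rather than a template from any registry.

```python
# Hypothetical structure for recording the key features of a
# quasi-experimental study before outcome data are accessed.
from dataclasses import dataclass


@dataclass
class QuasiExperimentalPlan:
    primary_hypotheses: list[str]
    primary_outcomes: list[str]
    identification_strategy: str
    empirical_model: str
    model_parameters: dict
    sample_exclusions: list[str]
    treatment_comparison_contrast: str


plan = QuasiExperimentalPlan(
    primary_hypotheses=["Program eligibility increases college enrollment"],
    primary_outcomes=["enrolled_within_1yr"],
    identification_strategy="regression discontinuity at the eligibility cutoff",
    empirical_model="local linear regression on both sides of the cutoff",
    model_parameters={"bandwidth_rule": "prespecified data-driven selector", "polynomial_order": 1},
    sample_exclusions=["records with missing eligibility score"],
    treatment_comparison_contrast="just-eligible vs. just-ineligible applicants",
)
print(plan)
```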
Perhaps the greatest difference for preregistering studies with quasi-experimental designs is the existence of the data. While experimental studies always collect data prospectively, quasi-experimental studies can use a variety of data sources: existing data, data currently being collected, or data that will be collected in the future. For the latter two, the preregistration process follows a similar timeline as an experimental study might—the researchers can post the preregistration before the outcome data has been realized or observed. However, many quasi-experimental studies rely on outcome data that already exists, making the timing of researchers’ access to the data a central consideration in the credibility of the preregistration.
Some cases allow researchers to approach quasi-experimental preregistration as if they are designing an experiment. For example, when a study is preregistered before a policy is implemented, it would be physically impossible to access outcome data that could influence any aspect of the preregistration. In other cases, there may be less clarity on when researchers had access to outcome data. Even if data is shared or openly accessible (and even if it has been openly accessible for years), quasi-experimental researchers can engage in practices that can increase the credibility of their work. For example, researchers like Özler, who have the awareness that they might fall victim to the very human trait of HARKing, can still preregister a study if they themselves have not observed the outcome data, even if it is in principle accessible. They can report the timeline in which they preregistered their study and when they accessed the data, and reviewers can choose to believe them (or not). Other open science practices, like sharing well-documented replication files, also play a role in increasing transparency in these cases. Another important consideration (and potential solution) is how the data is shared by data providers, which we discuss further in Section 4.1 below.
Of course, applied research is messy, and sometimes decisions made before working with the data turn out to be impractical—or even impossible—to implement. Insofar as dimensions of uncertainty and their effects on a study’s research design are well-defined and can be anticipated (e.g., whether a variable is measured discretely versus continuously, or the number of years of data that will be available), researchers may consider preregistering a decision tree describing conditional modeling decisions rather than a single research design (Dee 2025). However, it is inevitable that quasi-experimental researchers will encounter situations where reality deviates from expectation in unanticipated ways. Perhaps the data structure was different than expected, the initial identification strategy did not yield a viable comparison group, or a variable was measured less precisely than promised. It is often reasonable, and even correct, to deviate from one’s preanalysis plan. When these types of deviations are necessary, open science principles simply imply that researchers should document and justify those decisions (see discussion in Section 3.6 below), and readers or reviewers can decide whether they agree or disagree with the analytic choices (Lakens 2024).
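A minimal sketch of the decision-tree idea appears below: conditional modeling decisions are written down before the data are observed, so that whichever branch is realized was still chosen ex ante. The branching conditions and model labels are illustrative assumptions, not recommendations drawn from Dee (2025).

```python
# Minimal sketch of a prespecified decision tree for conditional modeling
# decisions; conditions and model labels are illustrative.

def prespecified_model(years_of_data: int, outcome_is_continuous: bool) -> str:
    """Return the prespecified analysis for conditions known only ex post."""
    if years_of_data >= 3:
        design = "event-study difference-in-differences with unit and year fixed effects"
    else:
        design = "two-period difference-in-differences"
    if outcome_is_continuous:
        return f"{design}, estimated by OLS"
    return f"{design}, estimated as a linear probability model"


# Once the data arrive, the realized branch is applied and reported; any
# further deviation is documented as described in Section 3.6.
print(prespecified_model(years_of_data=2, outcome_is_continuous=False))
```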
Relying on post hoc modeling decisions invites the skepticism that researchers made choices that increased the likelihood they found a statistically significant or otherwise favorable result, but sometimes ex post deviations from one’s considered ex ante judgments are unavoidable (Lakens 2024). Currently, resolving this skepticism is a central focus of peer review for many quasi-experimental studies, often requiring researchers to produce dozens of robustness checks and alternative specifications to convince referees that their results are not spurious or driven by HARKing. Publishing a preanalysis plan and documenting deviations from these prespecified decisions allows readers and peer reviewers to calibrate their confidence in results by considering the trade-offs of the different approaches with a greater understanding of the researcher’s ex ante thought process and beliefs.

3.3. Open Data and Software

Free and open source software (FOSS) is central to enhancing transparency, reproducibility, and collaboration in contemporary social science research (Fortunato and Galassi 2021). Free software (“[…]think of ‘free’ as in ‘free speech,’ not as in ‘free beer.’”) provides users with four essential freedoms governing its use, modification, and distribution (GNU Operating System 2024). Open source software meets ten criteria related to its source code availability and licensing restrictions, including rules governing the software’s use, (re)distribution, and modification (Open Source Initiative 2024). FOSS is an umbrella term encompassing these closely related but philosophically distinct concepts (Fortunato and Galassi 2021).
FOSS principles apply to a wide range of software objects that may contribute to social science research, including applications, source code (including programming scripts), executable code, compiled code, documentation, and other resources. Poorly documented or undisclosed source code complicates replication by obscuring the researcher decisions that are not explicitly described in the text of an article. Proprietary software can limit access, hinder validation, and create inefficiencies for verifying results, whereas FOSS facilitates collaboration and replication, while also promoting long-term accessibility of source code and data. Moreover, incorporating FOSS into research workflows can improve the overall quality of software and/or models because researchers anticipate they will be subject to external review and use (Pilat and Fukasaku 2007).
The open science community is increasingly encouraging researchers to publicly post replication packages that include the data and software (including programming scripts and other source code) used to conduct a study. This access allows other researchers to more easily review studies, validate the findings, and replicate the results. Sharing data and software promotes transparency and fosters collaboration across the research community.
In practice, the public sharing of data and software does not differ drastically between experimental and quasi-experimental studies, and by a wide margin, this is the open science practice that quasi-experimental researchers most commonly report implementing in their work (Christensen et al. 2020). The general tenet is that researchers share the data and software used to reach the stated conclusions in a particular study (Neal 2022). Thus, if the data is being used for other studies, the researchers do not need to share all the data—just the subset necessary to conduct the specific analyses represented in that study. In cases where publicly sharing data poses a legitimate threat to related, ongoing research, scholars may embargo or delay the posting of data until related projects are completed. These practices align with the broader goals set forth by the OECD’s Principles and Guidelines for Access to Research Data from Public Funding and the principles of FAIR (findable, accessible, interoperable, and reusable) data management and stewardship (Pilat and Fukasaku 2007; Wilkinson et al. 2016). OECD guidelines emphasize that publicly funded research data should be open, transparent, and flexible, while respecting legal, ethical, and institutional constraints.
There are several repositories where researchers can post data and software (e.g., Harvard Dataverse; the Inter-university Consortium for Political and Social Research [ICPSR]; OSF; Open Source Initiative; ResearchBox; Scientific Data). The best place to share data and materials is one that is “independent, accessible, and persistent” (Neal 2022). On OSF, researchers can give data and software a specific license that governs how they can be used and even get credit for the data they post, as the materials are citable.
Because quasi-experimental research is, by definition, applied work that studies real-world phenomena and policies, researchers rarely create or own the data they use, as experimentalists might. Therefore, data ownership and privacy concerns raise questions about the feasibility of publicly posting data for many researchers. In some cases, the data may be publicly available, in which case researchers can simply describe when and how they accessed the data and post the programming scripts and other source code they used for cleaning and analysis. In other cases, the data is not public, and researchers must preserve the confidentiality of the people or organizations represented in the data for legal or ethical reasons. Thus, researchers cannot simply post the dataset, but it still may be possible to share data in a form that preserves confidentiality through de-identification (removing any information that would allow a sophisticated third party to identify specific people or organizations with high confidence), aggregation (combining microdata into groups at higher levels of observation), or suppression (removing variables that could be used to identify specific people or organizations) (Neal 2022). Or, if a particular dataset required researchers to apply for access, the authors could detail the process for others to gain access to the data. In Section 4.1 below, we discuss the role of data providers in facilitating open data practices.
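The minimal sketch below illustrates de-identification, aggregation, and suppression on a toy dataset; the column names and the minimum cell size are hypothetical, and any real release would need to satisfy the data provider's actual disclosure rules.

```python
# Minimal sketch of de-identification, aggregation, and suppression on a
# toy dataset; column names and the minimum cell size are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "student_name": ["A. Lee", "B. Cruz", "C. Park", "D. Ngo"],
    "school_id": [101, 101, 102, 102],
    "treated": [1, 0, 1, 0],
    "test_score": [78.0, 71.0, 88.0, 83.0],
})

# De-identification: drop direct identifiers before sharing microdata.
deidentified = df.drop(columns=["student_name"])

# Aggregation: release group-level summaries instead of individual records.
aggregated = df.groupby(["school_id", "treated"], as_index=False)["test_score"].mean()

# Suppression: withhold cells too small to protect anonymity
# (an assumed minimum cell size of 10 is used here for illustration).
cell_sizes = df.groupby(["school_id", "treated"]).size().rename("n").reset_index()
aggregated = aggregated.merge(cell_sizes, on=["school_id", "treated"])
suppressed = aggregated[aggregated["n"] >= 10].drop(columns="n")

print(deidentified, aggregated, suppressed, sep="\n\n")
```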

3.4. Registered Reports

One of the potential drawbacks of preregistration and preanalysis plans is that researchers are less likely to find statistically significant results through HARKing and p-hacking (Kaplan and Irvin 2015), which may make it harder to publish a study and contribute to the file drawer problem (Rosenthal 1979). One open science practice introduced to counter the bias toward publishing studies with large, statistically significant findings is the use of registered reports.
Registered reports take preregistration a step further and integrate it into the peer review process (Chambers et al. 2015; Reich 2021). The process of preparing a registered report is quite similar whether the study is experimental, quasi-experimental, or even descriptive or qualitative. The key difference from the status quo is that the review process is split into two phases. First, the authors submit the introduction, background, context, and methods sections of an article for review prior to conducting their analysis. At this point, reviewers evaluate the research questions, methods, and contribution of the manuscript and decide to accept or reject before knowing the direction or magnitude of the results.
In the case of quasi-experimental studies, reviewers can evaluate and give feedback on the proposed research questions, primary hypotheses, primary outcomes, identification strategy, empirical model, model parameters, definition of treatment, principles of sample construction, and variable definitions, among other features of the study design and analytic plan. For quasi-experimental registered reports, sample sizes and units of intervention are usually out of the researchers’ control, so decisions related to model parameters, primary versus exploratory hypotheses, and variable definitions are likely to receive the most scrutiny during the first phase of peer review. Reviewers can also provide feedback on the proposed motivation, theory, and framing.
After soliciting this feedback, the editor chooses whether to invite the authors to implement their analysis and submit the study without revision, to invite the authors to respond to the reviewers’ critiques (similar to a revise and resubmit), or to reject the manuscript. If there are disagreements or additional justifications, the authors and reviewers might go through a couple of rounds in the first phase of the review, as they might in the standard peer review process.
After the first phase peer review process, if the reviewers decide a manuscript should be published based upon the quality and value of the front end of the manuscript, the authors receive an “in-principle” acceptance. In the second phase of the process, the reviewers will evaluate the results and discussion sections of the manuscript. If the authors reasonably follow the process they outlined in the first phase, the paper will be published. If there are deviations or major issues that arise, the authors will have to justify any changes for the reviewers. There may be additional rounds of peer review, and the papers can still be rejected. However, the expectation is that registered reports carried out as planned will be published regardless of whether the authors find statistically significant, surprising, or “large” results.
At its core, the practice of registered reports involves reviewers providing feedback and making judgments about the quality of the work while authors can still reasonably implement changes, with neither party knowing how those changes will affect the outcome. In theory, this should reduce the likelihood of publication bias, help solve the file drawer problem, and increase confidence that researcher and reviewer decisions were motivated by their good faith ex ante beliefs rather than ex post considerations.

3.5. Open Access Articles and Preprints

The main goal of social science research is to produce reliable evidence that can guide practice, shape policy, and advance theory. However, access to this evidence is often fundamentally inequitable because so much published research is stored behind paywalls that are inaccessible or unaffordable to vast swaths of the public (BOAI 2002; Piwowar et al. 2018). The open access movement aims to democratize evidence by removing paywalls and making research available to anyone with an internet connection (see Fleming et al. 2021). Posting preprints and publishing in open access journals are open science practices that are equally compatible with experimental, quasi-experimental, observational, descriptive, theoretical, or qualitative research.
Currently, individual researchers often have limited control over the accessibility of their publications. There are a few things researchers can do, however, to promote equitable access to their research. One avenue is to publish in open access journals or those that allow authors to pay for their article to be open access. There are generally (but not always) fees associated with publishing open access, but these costs can be offset by universities and funders who are committed to promoting and supporting open science. Section 4.2 and Section 4.3 discuss open-access publishing models and explore the roles of funders and journals in further democratizing access (see also the Diamond Open Access Recommendations and Guidelines by Arasteh-Roodsary et al. 2025). When journals do not offer open access options or the fees are prohibitively high, authors are typically permitted to make their work more accessible by posting preprints, which are public versions of manuscripts posted on preprint servers or a personal website before peer review.
While some fields have long-standing traditions of posting preprints (e.g., in economics via the National Bureau of Economic Research [NBER] Working Paper Series, which started in 1973), this practice is gaining popularity across the social sciences. In psychology, the PsyArXiv preprint repository was established in 2016; in sociology, the SocArXiv preprint repository was established in 2016; in education, the Annenberg Institute’s EdWorkingPapers series began in 2019; and in political science, the American Political Science Association [APSA] preprints platform was launched in 2019. Preprint repositories like arXiv (established in 1991) and SSRN (established in 1994, formerly the Social Science Research Network) have for decades helped researchers from a wide range of fields disseminate timely research before it is officially published.
Preprints enable immediate access to new research, which is especially critical in fields where timely evidence can inform pressing decisions. For quasi-experimental studies, which often inform real-world policy and practice, preprints facilitate feedback that can refine methodologies and interpretations. As Fleming et al. (2021) note, the thorny issues people raise about preprints (i.e., lack of peer-review; potential risks to blind review) are tractable, and preprints might actually broaden critique, allowing authors to strengthen their work prior to formal publication. Authors can also upload post-prints—accepted, peer-reviewed manuscripts formatted by the author—to ensure accuracy alongside accessibility (i.e., Green open access or self-archiving; e.g., BOAI 2002; Tennant et al. 2016). In this way, pre- and post-prints help make cutting-edge research readily available and less delayed by lengthy peer review processes.

3.6. Messaging Open Science Practices (Or Lack Thereof)

We recognize that not all quasi-experimental studies will engage in all (or any) open science practices, nor necessarily should they. However, in the spirit of transparency, we propose that readers should be able to easily identify the extent to which a given quasi-experimental study incorporated open science practices (or not). In Section 4, we discuss the growing role journals, funders, and other stakeholders play in incentivizing how researchers message open science practices. However, individual researchers have some control over how they report their results and indicate their use of open science practices.
For instance, authors can easily make it clear whether—and if so, where and how—the data and software can be accessed. Similarly, if a study was preregistered, authors can note that and provide the preregistration in a link or appendix, as is common in experimental studies. Conversely, if a study was not preregistered, authors can also highlight that for readers and recommend that the results should be considered exploratory until they can be replicated.
Of course, exploratory analyses can yield interesting and important findings, and we do not suggest suppressing those findings. Instead, authors can separate their results section into two parts: prespecified hypotheses and exploratory analysis (Gehlbach and Robinson 2018). This signals that readers should have comparatively more faith in the results from those hypotheses that were prespecified and treat the exploratory findings as hypothesis-generating for future studies. In addition to noting which findings are the result of exploratory analyses, scholars can also avoid emphasizing exploratory findings over primary hypothesis tests in their study’s abstract or executive summary.
Many ask, what happens when you have to deviate from your preregistration or preanalysis plan? Applied researchers regularly confront unexpected, complex, and nuanced challenges that can require them to modify their approach to evaluating a policy or intervention. Whether the discrepancies are due to an unforeseen issue with the data or a mistake, deviations will necessarily happen, and they are not all problematic (Lakens 2024). Sometimes the deviation is small and can be easily addressed in the body of the manuscript. Sometimes—and we speak from experience here—the deviations are larger and less straightforward. To facilitate the review process and make things easier for readers, authors of preregistered experimental and quasi-experimental studies can add an appendix that details any deviations from their preregistration or preanalysis plan.

3.7. Replication

In RCTs, open science principles encourage researchers to test whether an experiment’s findings generalize by examining the same hypothesis under different conditions or in different samples. In quasi-experimental studies, the details of the policy scenario determine whether generalizability studies are feasible or whether replication should simply be viewed as demonstrating that a study’s findings can be reproduced by third parties (e.g., as a class exercise or data validation process). Consider an example: during their study evaluating a tutoring program in Washington, D.C., Lee et al. (2024a) stumbled across an interesting (and unexpected) pattern in the data. Their within-student analysis revealed that students were more likely to attend school on days they had a tutoring session scheduled. Given that the finding emerged from an exploratory analysis, the hypothesis and analysis plan were not preregistered. However, because the same tutoring initiative was implemented the following year, the research team was able to preregister the hypothesis and analysis to determine if the result would replicate (Lee et al. 2024b).
Similarly, Heller (2024) used regression discontinuity methods to estimate the causal relationship between reaching a college readiness benchmark on the GED® test and post-secondary attainment. The quasi-experimental study relied upon extant administrative data and was not preregistered. The findings were imprecise but suggested that there was no relationship between earning a GED® College Ready designation and college outcomes. Following this analysis, Heller (2025) collected new administrative data from GED Testing Service, LLC, and preregistered a follow-up quasi-experimental study that would revisit the same question using similar methods in a larger sample (94,000 observations instead of 15,000) to assess whether the null findings would replicate.
Of course, not all replications are as straightforward, nor can all quasi-experimental research be replicated on demand to assess the generalizability of findings. For example, in the absence of a similar mass-migration event, Card’s (1990) difference-in-differences analysis of the impact of the Mariel Boatlift on the Miami labor market is unlikely to be replicated in another context, with or without Card providing public access to his data and software. However, there is a wide range of quasi-experiments that might be replicated using data from other contexts or time periods to assess the generalizability or sensitivity of results and could benefit from researchers adopting open science practices.
For example, many studies assess the impact of the minimum Pell Grant on educational outcomes and borrowing behavior using a regression discontinuity design (e.g., Marx and Turner 2018; Park and Scott-Clayton 2018; Chan and Heller 2025). On the one hand, the fact that the results of these studies are broadly consistent suggests that the impacts of the minimum Pell Grant on educational attainment and borrowing outcomes may generalize across contexts. However, each study has idiosyncrasies related to the population studied, sample exclusions, outcomes of interest, variable definitions, model parameters, subgroups of interest, covariates included, time periods covered, etc., that may be difficult to discern in the absence of replication packages that include open source data and software (including programming scripts and other source code).
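A replication package can make these idiosyncrasies visible by stating the design parameters explicitly in the analysis script itself. The sketch below estimates a sharp regression discontinuity on simulated data with the cutoff, bandwidth, and functional form spelled out; the variable names, parameter values, and data are illustrative and are not drawn from the Pell Grant studies cited above.

```python
# Minimal sketch of a sharp regression discontinuity specification with the
# researcher-chosen parameters stated explicitly; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
running = rng.uniform(-1, 1, n)              # eligibility index, centered at the cutoff
treated = (running >= 0).astype(int)         # sharp cutoff at zero
outcome = 0.5 * running + 0.15 * treated + rng.normal(0, 1, n)
df = pd.DataFrame({"running": running, "treated": treated, "outcome": outcome})

BANDWIDTH = 0.25  # assumed, prespecified bandwidth; real studies often use data-driven rules

# Local linear regression with separate slopes on each side of the cutoff
local = df[df["running"].abs() <= BANDWIDTH]
model = smf.ols("outcome ~ treated + running + treated:running", data=local).fit(cov_type="HC1")

print(f"RD estimate: {model.params['treated']:.3f} (SE {model.bse['treated']:.3f})")
```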
Although replication is a cornerstone of the scientific process, replication studies represent a minuscule share of published research in most fields, including psychology (1%; Makel et al. 2012), education (0.13%; Makel and Plucker 2014), and economics (0.1%; Mueller-Langer et al. 2019). This is, in part, due to the lack of incentives for individual researchers to replicate studies because journal editors and reviewers often prioritize novel findings over demonstrations of generalizability. Despite a growing movement to elevate the status of replication studies in experimental research (e.g., McShane et al. 2019), increasing rates of quasi-experimental replication will require a shift in incentives. External stakeholders can help drive this transition, as we discuss below.

3.8. Summary

Ultimately, there is no one-size-fits-all approach for quasi-experimental researchers who want to align their work with open science principles. As we describe, the details of a given study may make it more or less compatible with specific practices. In general, the open science movement is not focused on promoting any single practice or bundle of practices, but on fostering norms and values that ensure transparency, rigor, and credibility are prioritized above novelty, impact, and incredibility (Makel and Plucker 2014; Mellor 2021).
Open science practices are not meant to police research, but as disciplinary standards evolve, they have the scope to enhance our collective search for truth. The scope for open science practices to improve applied social science is predicated on the honesty and good faith of the overwhelming majority of scholars. Bad actors can and will abuse open science tools—however, practices like replication and open data increase the probability that unethical scholars will be discovered and discredited (Kennedy 2024).
In this section, we described several open science practices and outlined how quasi-experimental researchers can integrate them into their own research. That said, individual researchers can only do so much to change norms and expectations within the quantitative social sciences. In the next section, we discuss how different stakeholders can support and incentivize quasi-experimentalists to adopt open science practices more systematically.

4. Practical Advice for Other Stakeholders

Executing applied research often requires coordinated effort across multiple individuals and organizations. Encouraging a culture of openness in research must go beyond individual efforts to address cognitive biases; it also requires systemic changes in the structures within which researchers operate to shift norms and incentivize open practices (Munafò et al. 2020). Data providers, journals, funders, and research repositories play important roles in facilitating and encouraging the adoption of open science practices, as well as reducing barriers to integrating open science practices into quasi-experimental research. In this section, we identify opportunities for different types of stakeholders to champion open science through their respective roles in quasi-experimental research.

4.1. Data Providers

Unlike experimental studies, which generally must collect data prospectively, quasi-experimental studies can use a variety of data sources: existing data, data currently being collected, or data that will be collected in the future. The data that fuels policy research is often sensitive, private information that requires restrictions to protect human subjects. When data is not publicly accessible for ethical, legal, or business reasons, data providers have considerable influence on nearly all aspects of what, where, when, how, and by whom data is accessed, reported, and shared.
While complex bureaucratic processes and organizational risk aversion can create barriers to knowledge production, the data sharing process also presents opportunities for data providers to facilitate the integration of open science practices into research that relies on the retrospective or prospective analysis of administrative data. Data access is often preceded by a formal or informal proposal process wherein a researcher submits a description of the research questions they plan to answer with a provider’s data, how the analysis will be conducted, and the potential benefits of the proposed work. Researchers and their institutions or organizations typically work with data providers to negotiate the terms of a legal agreement that governs how and when the data can be used and to what ends.
During this stage, data providers can exert influence over whether parties seeking access to restricted data to evaluate a policy, product, or intervention adopt open science practices. For example, data providers might encourage or even require researchers to preregister their hypotheses and/or submit a preanalysis plan detailing their empirical design and modeling decisions. Similarly, data providers can stipulate conditions under which deidentified microdata (or aggregated data) can be publicly posted to facilitate replication or, as is often the case, explicitly disallow the public sharing of deidentified data.
While there are legal or ethical reasons why some datasets cannot be shared publicly, even in deidentified or aggregated forms, we encourage data providers to think carefully about what can be shared with minimal risks to the individuals represented in the data in the spirit of promoting FAIR data that is “as open as possible, as closed as necessary” (European Commission 2016). Similarly, researchers can support data providers in this effort by identifying the minimum viable datasets necessary to replicate their primary results. While posting of replication software has become a relatively common practice in quantitative social science, this does little to facilitate replication for the many studies that rely on expensive, proprietary, or restricted-use data. Data providers and researchers can also work together to archive the specific files (and file structures) required to use replication software to replicate published work and create transparent, streamlined processes for replicators to apply for access to those specific restricted-use files.
As the gatekeepers of the administrative data that fuels quasi-experimental research, data providers also have considerable scope to influence the exclusivity of data access in the short and long terms. Relationships between data providers and researchers are built upon trust, which can create a self-reinforcing cycle where the same small groups of researchers obtain and maintain access to restricted-use data from specific providers while others are denied access. These exclusive relationships benefit inner-circle researchers, who face less competition and whose work is more likely to be novel, as well as data providers, who limit their exposure to risks related to data security or politics by sharing data only with trusted partners. However, granting individual researchers or a single team of researchers exclusive permission to conduct a particular quasi-experimental evaluation creates circumstances that are particularly conducive to practices like HARKing and unintentional p-hacking.
That said, there are several ways data providers can balance the trade-offs between exclusivity, organizational risk, and research transparency. As mentioned above, requiring or encouraging partners to preregister their hypotheses with a detailed preanalysis plan is the simplest way to promote the goals of open science, with or without broadening data access. Many data providers already require partners to submit proposals that contain all or most of the content of a preregistration and preanalysis plan in order to obtain permission to use administrative data to answer a specific research question (e.g., University of Houston Education Research Center 2023), so this would not require substantial additional labor in those cases.
Another solution is for data providers to create protocols to facilitate streamlined access to replication files following a study’s completion. If scholars know their work is likely to be replicated, this promotes transparency and accountability, which is particularly important in cases where preregistration is not possible. However, relying upon ex post replication as an accountability mechanism is contingent upon other researchers’ willingness to conduct replication studies that may not yield new insights.
A third way data providers can promote credibility, replicability, and transparency in quasi-experimental evaluations is by broadening access to restricted-use data in real time. This is an especially effective strategy when many scholars can reasonably anticipate a natural experiment whose impact assessment is of general interest (e.g., the introduction of a new benefit, the changing of a rule or threshold, or the staggered rollout of a policy). Even by providing data access to just two teams of researchers, data providers can create a self-reinforcing accountability structure that incentivizes caution, diligence, and robustness.
A more ambitious version of this “horse-race” model of knowledge generation is known as the “many analysts” or “crowdsourced science” approach, wherein multiple teams of researchers (sometimes dozens) are given the same data and work independently to answer the same question, with or without methodological constraints, and publish the results collaboratively (Silberzahn et al. 2018; Huntington-Klein et al. 2025). Evaluations using “horse-race” or “many analysts” approaches can also be preregistered and allow researchers to explore the distributional characteristics of estimated effects—and the factors that influence effect-size estimates—alongside the average estimated effects. To facilitate these types of collaborative studies, data providers could designate a lead research team to coordinate across groups of researchers or stipulate guidelines for coordination as a condition of data access. Funders and journals, whose roles are discussed in further detail below, can also play a role in facilitating crowdsourced science by supporting and rewarding this type of collaboration as well as funding or publishing replication studies.
Another challenge of implementing open science practices in quasi-experimental research comes from uncertainty regarding the availability, measurement, or structure of key variables. This uncertainty makes it difficult for researchers to confidently prespecify the details of planned analyses based on preexisting administrative data. While researchers can and should deviate from their preanalysis plans when necessary, data providers can enable the adoption of preregistration and preanalysis plans in quasi-experimental research through strategic decisions regarding the data provision process. Data providers typically offer data dictionaries or detailed descriptions of datasets, and while these are valuable resources, they are often insufficient to identify important features and idiosyncrasies of large administrative datasets.
We suggest three practices that data providers can adopt to make quasi-experimental preregistration less difficult and more credible:
  1. Provide access to simulated datasets;
  2. Offer partitioned datasets that may omit key outcome variables or time periods; and
  3. Certify when researchers gained access to restricted-use data.
Practices (1) and (2) allow researchers to familiarize themselves with the structure and character of key variables and identify viable empirical strategies without being influenced by what those decisions imply about their quasi-experimental impact estimates in the real data (i.e., without unintentionally or intentionally HARKing). Practice (3) is a simple step data providers can take to increase the credibility of quasi-experimental preregistration.
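A minimal sketch of practices (1) and (2) appears below: a provider shares a simulated extract that mirrors the schema of the real administrative file and a partitioned extract that withholds the outcome variable until the preanalysis plan is registered. All column names, value distributions, and file names are hypothetical.

```python
# Minimal sketch of sharing simulated and partitioned extracts; all column
# names, distributions, and file names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1000

# Practice (1): a simulated extract with the same columns and plausible value
# ranges as the real administrative file, but entirely synthetic values.
simulated = pd.DataFrame({
    "person_id": np.arange(n),
    "eligibility_score": rng.normal(100, 15, n).round(1),
    "cohort_year": rng.choice([2021, 2022, 2023], n),
    "outcome_earnings": rng.lognormal(10, 0.5, n).round(0),
})
simulated.to_csv("simulated_extract.csv", index=False)

# Practice (2): a partitioned extract that withholds outcome variables (and/or
# the most recent time periods) until the preanalysis plan is registered.
# The partition is applied to the synthetic file here purely for illustration;
# a provider would apply it to the real data.
partitioned = simulated.drop(columns=["outcome_earnings"])
partitioned.to_csv("preregistration_extract.csv", index=False)
```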

4.2. Funders

Government agencies, private foundations, non-profit organizations, and corporations contribute billions of dollars to support social science and education research each year (Gibbons 2023). Funders have tremendous scope to influence the focus and methodology of research, both directly, through the projects they support, and indirectly, through the incentives they create (Hess and Henig 2015; Feuer 2016; Sands 2023; Reikosky 2024). Relatedly, funders have an opportunity to promote open science practices through their grantmaking requirements. Federal agencies represent the majority of research investment that flows into institutions of higher education (Gibbons 2023); they can use this power of the purse to shift research norms. Already, federal agencies like the U.S. Department of Education and the U.S. Department of Health and Human Services, the major funders of education research (via the Institute of Education Sciences) and medical and public health research (via the National Institutes of Health), require or encourage open science practices in experimental research (National Institutes of Health 2016; Institute of Education Sciences 2022).
While large governmental or institutional funders could develop policies to encourage the adoption of similar practices in quasi-experimental research they fund, even the most outspoken advocates for increasing the credibility and transparency of funded research have yet to establish frameworks to support open science practices like preregistration or replication in quasi-experimental research. This is likely because it is far more complicated to assess what is reasonable to require in quasi-experimental research than in experimental research. Wide variation in the character of quasi-experiments makes it difficult to settle on universal guidelines, but mounting evidence suggests that researchers’ degrees of freedom threaten the replicability and credibility of quasi-experimental scholarship (Huntington-Klein et al. 2021, 2025; Breznau et al. 2022; Holzmeister et al. 2024; Wuttke et al. 2024). Funders can help nudge quasi-experimental researchers toward adopting the open science practices that are compatible with their research design by treating these practices as the default and requiring clear explanations for deviations from open science best practices.
While some open science practices can be implemented with or without additional financial resources, others are expensive. A small subset of open access journals does not charge authors article processing fees to publish their work (i.e., “diamond” or “platinum” open access journals), but most do (Fuchs and Sandoval 2013). Since open access articles are not directly financed by journal subscriptions, these fees must cover the costs of producing and managing the journal (i.e., the typical “gold” open access model). Some subscription-based journals also operate a “hybrid” open access model where authors can publish paywalled content for free or elect to pay for an article to be open access, and many allow authors to self-archive pre-prints or accepted manuscripts on their personal website, departmental website, or a pre-print server (i.e., the “green” open access model; Tennant et al. 2016). In journals that charge article processing fees, publishing an open access article costs roughly USD 2000 on average, but article processing charges in the most expensive open access journals can exceed USD 10,000 per article (Grossmann and Brembs 2021; Limaye 2022; Borrego 2023).
Funders can help the scholars they support democratize access to their research by encouraging or requiring the publication of preprints (i.e., promoting “green” open access), providing resources to cover the high cost of publishing in open access journals, or paying the fees required to make a specific article open access. Additionally, funders can sponsor journals that want to transition to a fee-free model or directly support open access journals that do not charge article processing fees, supplementing the volunteer networks and learned societies that sustain most diamond journals.

4.3. Journal Editors and Peer Reviewers

As the primary access point for academic scholarship, journals have substantial influence on the proliferation of open science practices along multiple dimensions. Journal editors and peer reviewers can reward quasi-experimental scholars who adopt open science practices and hold authors accountable for implementing them with fidelity. Journals can identify scholarship that adheres to open science principles to help readers calibrate their confidence. Editorial boards and publishers can revise journal policies to promote open science principles and democratize access to knowledge by adopting fee schedules that acknowledge large gaps in resources by country, institution, or career stage.
Increasingly, peer-reviewed journals are embedding structures to communicate whether an article has certain open science features (Kidwell et al. 2016). Many journals that publish quantitative research ask authors to state at submission whether they are willing and able to publicly post their data and software (usually programming scripts or other source code), and this statement is published alongside the accepted article. Other journals include icons or badges that appear in search results or on an article’s landing page to indicate that the study’s data or materials have been published in a repository. In otherwise paywalled journals, a badge might indicate which articles are open access because the authors paid to lift the paywall.
Additionally, a small but growing subset of journals—like the Journal of Research on Educational Effectiveness or the American Journal of Political Science—offer open science badges to indicate, for example, whether a study was preregistered or is a replication. Badges may influence researchers’ willingness to share data: in the year after Psychological Science introduced data sharing badges in 2014, the proportion of articles with open data increased five-fold (Kidwell et al. 2016; Munafò et al. 2017). Future scholarship should investigate the causal role of badges and other messaging practices in promoting open science practices in experimental and quasi-experimental research.
In addition to acknowledging authors who adopt open science practices, journals can reduce incentives for p-hacking and HARKing by decreasing the salience of statistical significance at conventional type I error thresholds (e.g., p < 0.05; Ziliak and McCloskey 2008). Indeed, across the social sciences, prominent journals (e.g., all journals of the American Economic Association, Basic and Applied Social Psychology, Econometrica, and Nature) have adopted policies discouraging or banning the use of asterisks or stars to denote statistical significance. Some discourage discussion of p-values entirely, encouraging authors to focus on confidence intervals or simply to report estimates alongside their standard errors, allowing readers to calibrate their own confidence in the findings. Evidence on the effects of these policies is mixed, with some studies finding unintended negative externalities (Imbens 2021) and others finding modest reductions in evidence of p-hacking or publication bias (Naguib 2024). Given that efforts to reorient scholarship away from “the cult of statistical significance” are relatively young, long-run effects will take time to emerge (Ziliak and McCloskey 2008). Journals’ principled efforts to combat p-hacking by removing heuristics that increase the salience of p-values illustrate how editorial decisions can shape practice, while their mixed impacts are a reminder that such policies must be monitored over time to assess whether they actually increase the replicability and reliability of published work.
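As a minimal sketch of the reporting style described above, the example below formats hypothetical coefficient estimates with standard errors and 95% confidence intervals (estimate ± 1.96 × SE) instead of significance stars; the variable names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical regression output: point estimates and standard errors.
results = pd.DataFrame({
    "term": ["treatment", "prior_score", "female"],
    "estimate": [0.083, 0.512, 0.027],
    "std_error": [0.041, 0.030, 0.025],
})

# Report a 95% confidence interval (estimate +/- 1.96 * SE) instead of
# attaching asterisks at conventional significance thresholds.
results["ci_lower"] = results["estimate"] - 1.96 * results["std_error"]
results["ci_upper"] = results["estimate"] + 1.96 * results["std_error"]
print(results.round(3).to_string(index=False))
```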
Beyond policies that promote the adoption of open science practices in manuscript production, journals can use their peer review process as a mechanism to reward and maintain the integrity of these practices. For example, journal editors and peer reviewers should subject the methodological choices of quasi-experimental work that is not preregistered to greater scrutiny (e.g., by continuing to place great emphasis on the robustness of the results to alternative choices) while also holding authors of preregistered quasi-experimental studies accountable for following their preanalysis plans and documenting the reasons for any deviations. Editors and peer reviewers can help develop and strengthen new disciplinary norms around preregistration and preanalysis plans to fulfill the promise of open science practices as a tool to increase trust in quasi-experimental (and experimental) research and reduce the prevalence of illusory results. Researchers and reviewers stand to benefit from a simplified review process that deemphasizes exhaustive robustness checks and reduces publishing frictions.
Furthermore, journals may consider the benefits of adopting “open peer review” practices that increase transparency in the peer review process (Ross-Hellauer 2017; Wolfram et al. 2020). The most common open review practices are “open identities”, where reviewers and authors are aware of each other’s identities, and “open reports”, where peer-review reports are published alongside the final article, allowing readers to assess the review process and the evolution of the manuscript. Other emerging open review practices include “open participation”, where the review process is expanded to include input from the broader research community, and “open interaction”, where reviewers and authors (or groups of reviewers) are encouraged to engage in direct dialogue during the review process. Importantly, these practices may be optional or required and can be implemented individually or in combination, depending on the journal’s administrative processes, philosophy, and disciplinary norms (Schmidt et al. 2018). Open reports may have particular value in promoting transparency in quasi-experimental research, where there are many more degrees of researcher freedom relative to experimental research, especially when studies are not preregistered. Open reports would preserve public records of analyses and ex post modeling decisions implemented during peer review that may not be published in a final manuscript or supplemental materials (Polka et al. 2018; Golub 2024).
Authorship policies represent another opportunity for journals to increase transparency in all social science research, including quasi-experimental work. Even within the social sciences, there is wide variation across disciplines regarding when a contributor’s role in a project is sufficient to earn recognition as a co-author (Marušić et al. 2011; Youtie and Bozeman 2014). Additionally, disciplines and sub-disciplines may have different and sometimes conflicting norms regarding how contributory hierarchies are communicated, including the implicit signal conveyed by the order of authorship and whether explicit acknowledgements of co-author roles are included in a footnote or endnote (Frandsen and Nicolaisen 2010; Marušić et al. 2011; McNutt et al. 2018; Weber 2018). In interdisciplinary contexts, this can make it difficult to assess relative contributions, as relying upon disciplinary norms can lead to misinterpretations or distort signals. Journal authorship policies and administrative systems that help clarify this information are especially relevant to quasi-experimental scholarship, where large, interdisciplinary teams of researchers often contribute to a single study.
A growing number of publishers, representing thousands of peer-reviewed journals, now allow or require authors to use the Contributor Role Taxonomy (CRediT) to provide explicit recognition of the relative contributions of co-authors to a project (Scholastica 2023). CRediT defines 14 roles that capture the ways scholars typically contribute to a research project: Conceptualization; Data curation; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Resources; Software; Supervision; Validation; Visualization; Writing—original draft; and Writing—review and editing (Larivière et al. 2021). Gomez-Diaz and Recio (2024) build upon the recommended CRediT roles to suggest recognizing a broader spectrum of intellectual and technical contributions—especially in the context of open source software and data—which might be critical for facilitating and incentivizing open science practices. Ultimately, attribution systems like CRediT encourage scholars to acknowledge all individuals who contribute to a study while creating clear and explicit mechanisms to recognize those most responsible for its intellectual and technical contributions.
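To illustrate how explicit role attribution might be recorded in a structured, machine-readable form, the sketch below maps hypothetical co-authors to CRediT role names; this is an informal illustration, not an official CRediT or publisher schema.

```python
# Hypothetical contributor statement using CRediT role names.
credit_roles = {
    "Author A": ["Conceptualization", "Methodology", "Formal analysis",
                 "Writing - original draft"],
    "Author B": ["Data curation", "Software", "Validation",
                 "Writing - review & editing"],
    "Author C": ["Funding acquisition", "Supervision", "Project administration"],
}

# Render a plain-text contributions statement from the mapping.
for author, roles in credit_roles.items():
    print(f"{author}: {', '.join(roles)}")
```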
Finally, editorial boards and publishers should consider whether and how their journals’ practices align with the Transparency and Openness Promotion (TOP) Guidelines for journal policies and practices (Nosek et al. 2015). TOP guidelines provide a framework to guide journal policies and navigate challenges related to data citation; data, materials, and code transparency; study design and analysis; preregistration; and replication. Journals that are not already open access can consider adopting processes by which scholars can pay to make their specific articles open access or can publish post-prints, even if most articles remain behind paywalls. Journals that charge article processing fees to cover the costs of publishing can consider sliding-scale or fee-free publication for scholars from low-income countries, as well as graduate students and others with limited access to financial resources. Journals that do not accept registered reports can consider whether and how to integrate this option into their existing peer review process for experimental and quasi-experimental research. Fortunately, there are dozens of highly respected publications across the sciences that journal editors can look to for guidance in adopting or strengthening their open science practices.

4.4. Registries and Repositories

The rise of public research registries and data repositories is central to the adoption and proliferation of open science research practices. As more and more researchers preregister studies and post replication packages online, registries and repositories play a larger public-facing role than ever before.
Table 1 summarizes the features of several prominent social science research registries, describing the characteristics that make each registry more or less accommodating to quasi-experimental research. While some registries (e.g., REES) provide resources that are tailored to specific quasi-experimental research methods, others (e.g., OSF, RIDIE, AsPredicted) have default templates that can be adapted to accommodate quasi-experimental research designs. However, some (e.g., the American Economic Association [AEA] RCT Registry) actively exclude quasi-experimental research.
We suggest the following steps registries can take to lower barriers to quasi-experimental preregistration:
  • Allow quasi-experimental studies to be preregistered;
  • Provide adaptable or open-ended templates that are compatible with quasi-experimental research designs;
  • Provide templates for specific quasi-experimental designs;
  • Add quasi-experimental features to sortable meta-data, badges, etc.
As research norms evolve, registries can support the proliferation of open science practices in quasi-experimental research by making explicit space for quasi-experimental studies in their platforms. With minimal changes, registries can add a field to their default registration templates that allows quasi-experimental researchers to indicate the specific quasi-experimental method(s) their evaluation employs.
However, some features of quasi-experimental studies do not translate straightforwardly to the language of experimentalism that dominates most registries. While most registries do allow quasi-experimental studies to be preregistered, researchers exploring open science practices may find the generic templates misaligned with the unique demands of their work, leading to frustration and early abandonment of these efforts. Creating specific preregistration templates for common quasi-experimental strategies, like those found in REES, is a simple step registries can take to make their platforms more conducive to preregistering quasi-experimental studies and to support scholars in creating complete, high-quality preregistrations and preanalysis plans. This also sends an important signal to scholars that open science research practices like preregistration are appropriate for their work and may nudge those on the margin of adopting open science practices to do so.
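As one possible shape for such a template, the minimal sketch below lists fields a registry might include for a difference-in-differences preregistration; the field names and the validation step are illustrative assumptions rather than the actual schema of REES, RIDIE, or any other registry.

```python
# Hypothetical preregistration fields for a difference-in-differences design.
qe_preregistration_template = {
    "title": "",
    "quasi_experimental_design": "difference-in-differences",
    "data_source": "",                      # e.g., a named administrative dataset
    "date_of_data_access": "",              # supports certification by the provider
    "treatment_definition": "",
    "comparison_group_definition": "",
    "primary_outcomes": [],
    "identifying_assumptions": "",          # e.g., parallel pre-trends
    "planned_specification": "",            # model, fixed effects, clustering
    "planned_robustness_checks": [],
    "anticipated_deviations_policy": "",
}

# A registry could validate that required fields are completed before
# timestamping the registration.
required = ["quasi_experimental_design", "treatment_definition",
            "primary_outcomes", "planned_specification"]
missing = [f for f in required if not qe_preregistration_template[f]]
print("Missing required fields:", missing)
```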
Additionally, accommodating quasi-experimental research designs expands the utility of registries to new audiences and allows registries to tag quasi-experimental registrations in searches (e.g., to facilitate meta-analyses, replications, or reviews). Data repositories (e.g., Harvard Dataverse; ICPSR; OSF; Open Source Initiative; ResearchBox; Scientific Data) are generally already accessible to quasi-experimental researchers but may benefit from similar steps to optimize their user interfaces to make quasi-experimental materials easier to publish, find, review, and use.

5. Discussion

The replication crisis in experimental psychology exposed deep epistemological limitations of building social scientific knowledge through experimentation without clear norms regarding the design, analysis, and reporting of research. The same fault lines run through quasi-experimental scholarship, and adopting open science practices is an important step toward reinforcing and repairing the foundation of quasi-experimental social science. When faulty analyses rooted in unreliable research practices are added to the canon of a discipline, they can distort subsequent work that is viewed through the lens of that scholarship.
In the short run, many open science practices that have improved the quality of experimental social science scholarship can be simply and cheaply adapted to quasi-experimental work. Many quasi-experimental projects are already compatible with preregistration or can accommodate it with minor changes to the data acquisition and access process, and all projects can provide transparency about the timing of data access and the nature of hypothesis generation and testing at no cost and without any changes to the research process (except those willingly prompted by such retrospection). Quasi-experimental scholars can post pre- and post-prints and archive software (including programming scripts and other source code) and data (when possible) in publicly accessible repositories. Other practices may require coordination across multiple stakeholders or substantive changes to research processes and will likely require sustained effort to achieve widespread implementation. However, every research project, data use agreement, call for papers, request for proposals, or collaboration is an opportunity for researchers, data providers, journals, and funders to make incremental progress toward this goal.
To ultimately change practices, fields also need to reform how research is assessed in the context of open science. Recent initiatives such as the San Francisco Declaration on Research Assessment (DORA 2012), the European Commission’s Expert Group report (European Commission 2019), and the COARA Agreement on Reforming Research Assessment (COARA 2022) highlight a growing consensus on the need to move beyond traditional metrics like journal impact factors. These frameworks advocate for recognizing a wider range of research contributions—including data, software, replication studies, and public engagement—many of which are core to open science practices. Embedding such reforms within institutional and funding structures is essential to align incentives with open and transparent research behaviors, thereby ensuring sustainability and broader adoption of open science.
Open science principles promote honesty, equity, and accuracy in the social sciences. In experimental research, the replication crisis prompted a paradigm shift in how research is conducted, and fears about open science stifling scientific progress have proven unwarranted. In quasi-experimental work, open science principles may be even more important. The statistical methodology of experimental analyses is generally predetermined by the experimental design—experiments are attractive precisely because they reduce otherwise difficult causal questions to tests of differences in means between groups. In quasi-experimental work, there are many more dimensions of researcher choice; many ex post reasonable approaches to constructing samples, defining variables, and specifying models; and therefore, many more opportunities for implicit or explicit biases to influence results. Enduring shifts in disciplinary norms will require coordinated commitments to open science principles among all stakeholders in quasi-experimental research. From researchers to data providers to funders to journal editors and peer reviewers, everyone has a role to play in promoting transparency and enhancing the credibility of quantitative social science.

Author Contributions

B.H.H. and C.D.R. are equal co-authors. Both authors contributed equally to conceptualization; methodology; formal analysis; investigation; writing—original draft preparation; writing—review and editing; and project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Academy of Education through the NAEd/Spencer Post-Doctoral Fellowship. The APC was funded by the University of Houston.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank conference participants at the Association for Public Policy Analysis and Management as well as Association of Education Finance and Policy for useful comments and feedback. Financial support from the National Academy of Education via the National Academy of Education/Spencer Postdoctoral Fellowship is gratefully acknowledged. Opinions expressed do not represent the views of the National Academy of Education. All errors are our own.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AEA: American Economic Association
APSA: American Political Science Association
BOAI: Budapest Open Access Initiative
COARA: Coalition for Advancing Research Assessment
CRediT: Contributor Role Taxonomy
DORA: Declaration on Research Assessment
EGAP: Evidence in Governance and Politics
FAIR: Findable, accessible, interoperable, and reusable
FOSS: Free and open source software
GED: No formal meaning; formerly “General Educational Development”
HARKing: Hypothesizing after results are known
ICPSR: Inter-university Consortium for Political and Social Research
NBER: National Bureau of Economic Research
OECD: Organization for Economic Co-operation and Development
OSF: Open Science Framework
QE: Quasi-experimental
RCT: Randomized controlled trial
REES: Registry of Efficacy and Effectiveness Studies
RIDIE: Registry for International Development Impact Evaluations
SSRN: No formal meaning; formerly “Social Science Research Network”
TOP: Transparency and Openness Promotion
UNESCO: United Nations Educational, Scientific, and Cultural Organization
U.S.: United States

Notes

1. Harvard Dataverse: https://dataverse.harvard.edu/.
2. ICPSR: https://www.icpsr.umich.edu/web/pages/, accessed on 13 August 2025.
3.
4. Open Source Initiative: https://opensource.org/.
5.
6. Scientific Data: https://www.nature.com/sdata/, accessed on 13 August 2025.

References

  1. Angrist, Joshua D., and Jörn-Steffen Pischke. 2010. The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics. Journal of Economic Perspectives 24: 3–30. [Google Scholar] [CrossRef]
  2. Arasteh-Roodsary, Sona Lisa, Vinciane Gaillard, Federica Garbuglia, Pierre Mounier, Janne Pölönen, Vanessa Proudman, Johan Rooryck, Bregt Saenen, and Graham Stone. 2025. DIAMOND Open Access Recommendations and Guidelines for Institutions, Funders, Sponsors, Donors, and Policymakers. Version 1.0. Zenodo. [Google Scholar] [CrossRef]
  3. Biden, Joseph R., Jr. 2024. Remarks by President Biden in State of the Union Address. [Speech Transcript]. Available online: https://bidenwhitehouse.archives.gov/state-of-the-union-2024/ (accessed on 13 August 2025).
  4. Bill and Melinda Gates Foundation. n.d. Evaluation Policy. Available online: https://www.gatesfoundation.org/about/policies-and-resources/evaluation-policy (accessed on 12 November 2024).
  5. BOAI. 2002. Budapest Open Access Initiative Declaration. Available online: https://www.budapestopenaccessinitiative.org/read/ (accessed on 24 July 2025).
  6. Borrego, Ángel. 2023. Article Processing Charges for Open Access Journal Publishing: A Review. Learned Publishing 36: 359–78. [Google Scholar] [CrossRef]
  7. Breznau, Nate, Eike Mark Rinke, Alexander Wuttke, Hung H. V. Nguyen, Muna Adem, Jule Adriaans, Amalia Alvarez-Benjumea, Henrik K. Andersen, Daniel Auer, Flavio Azevedo, and et al. 2022. Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty. Proceedings of the National Academy of Sciences 119: e2203150119. [Google Scholar] [CrossRef] [PubMed]
  8. Brodeur, Abel, Nikolai M. Cook, Jonathan S. Hartley, and Anthony Heyes. 2024. Do Preregistration and Preanalysis Plans Reduce p-Hacking and Publication Bias? Evidence from 15,992 Test Statistics and Suggestions for Improvement. Journal of Political Economy Microeconomics 2: 527–561. [Google Scholar] [CrossRef]
  9. Card, David. 1990. The Impact of the Mariel Boatlift on the Miami Labor Market. ILR Review 43: 245–57. [Google Scholar] [CrossRef]
  10. Chambers, Christopher D., Zoltan Dienes, Robert D. McIntosh, Pia Rotshtein, and Klaus Willmes. 2015. Registered Reports: Realigning Incentives in Scientific Publishing. Cortex 66: A1–A2. [Google Scholar] [CrossRef]
  11. Chan, Monnica, and Blake H. Heller. 2025. When Pell Today Doesn’t Mean Pell Tomorrow: The Challenge of Evaluating Aid Programs With Dynamic Eligibility. Educational Evaluation and Policy Analysis. [Google Scholar] [CrossRef]
  12. Christensen, Garret, Zenan Wang, Elizabeth Levy Paluck, Nicholas Swanson, David Birke, Edward Miguel, and Rebecca Littman. 2020. Open Science Practices Are on the Rise: The State of Social Science (3S) Survey. Available online: https://escholarship.org/content/qt0hx0207r/qt0hx0207r.pdf (accessed on 12 June 2025).
  13. Chubin, Daryl E. 1985. Open Science and Closed Science: Tradeoffs in a Democracy. Science, Technology, & Human Values 10: 73–80. [Google Scholar] [CrossRef]
  14. Coalition for Advancing Research Assessment (COARA). 2022. Agreement on Reforming Research Assessment. Available online: https://coara.eu/app/uploads/2022/09/2022_07_19_rra_agreement_final.pdf (accessed on 27 July 2025).
  15. Council of Economic Advisors. 2014. Economic Report of the President. Washington, DC: United States Government Printing Office. [Google Scholar]
  16. Council of Economic Advisors. 2018. Economic Report of the President. Washington, DC: Government Publishing Office. [Google Scholar]
  17. Council of Economic Advisors. 2022. Economic Report of the President. Washington, DC: Government Publishing Office. [Google Scholar]
  18. Currie, Janet, Henrik Kleven, and Esmée Zwiers. 2020. Technology and Big Data Are Changing Economics: Mining Text to Track Methods. AEA Papers and Proceedings 110: 42–48. [Google Scholar] [CrossRef]
  19. Dee, Thomas S. 2025. The Case for Preregistering Quasi-Experimental Program and Policy Evaluations. Evaluation Review. [Google Scholar] [CrossRef]
  20. DORA. 2012. San Francisco Declaration on Research Assessment. Available online: https://sfdora.org/read/ (accessed on 27 July 2025).
  21. European Commission. 2016. H2020 Programme Guidelines on FAIR Data Management in Horizon 2020 Version 3.0. Available online: https://arrow.tudublin.ie/dataguide/4/ (accessed on 27 July 2025).
  22. European Commission. 2019. Future of Scholarly Publishing and Scholarly Communication: Report of the Expert Group to the European Commission. Luxembourg: Publications Office of the European Union. [Google Scholar] [CrossRef]
  23. Feuer, Michael J. 2016. The Rising Price of Objectivity: Philanthropy, Government, and the Future of Education Research. Cambridge: Harvard Education Press. [Google Scholar]
  24. Field, Sarahanne M., E.-J. Wagenmakers, Henk A. L. Kiers, Rink Hoekstra, Anja F. Ernst, and Don van Ravenzwaaij. 2020. The Effect of Preregistration on Trust in Empirical Research Findings: Results of a Registered Report. Royal Society Open Science 7: 181351. [Google Scholar] [CrossRef]
  25. Fleming, Jesse I., Sarah Emily Wilson, Sara A. Hart, William J. Therrien, and Bryan G. Cook. 2021. Open accessibility in education research: Enhancing the credibility, equity, impact, and efficiency of research. Educational Psychologist 56: 110–21. [Google Scholar] [CrossRef]
  26. Fortunato, Laura, and Mark Galassi. 2021. The Case for Free and Open Source Software in Research and Scholarship. Philosophical Transactions of the Royal Society A 379: 20200079. [Google Scholar] [CrossRef]
  27. Frandsen, Tove Faber, and Jeppe Nicolaisen. 2010. What Is in a Name? Credit Assignment Practices in Different Disciplines. Journal of Informetrics 4: 608–17. [Google Scholar] [CrossRef]
  28. Fuchs, Christian, and Marisol Sandoval. 2013. The Diamond Model of Open Access Publishing: Why Policy Makers, Scholars, Universities, Libraries, Labour Unions and the Publishing World Need to Take Non-Commercial, Non-Profit Open Access Seriously. TripleC: Communication, Capitalism & Critique 11: 428–43. [Google Scholar]
  29. Gehlbach, Hunter, and Carly D. Robinson. 2018. Mitigating illusory results through preregistration in education. Journal of Research on Educational Effectiveness 11: 296–315. [Google Scholar] [CrossRef]
  30. Gehlbach, Hunter, and Carly D. Robinson. 2021. From old school to open science: The implications of new research norms for educational psychology and beyond. Educational Psychologist 56: 79–89. [Google Scholar] [CrossRef]
  31. Gibbons, Michael T. 2023. R&D Expenditures at U.S. Universities Increased by $8 Billion in FY 2022. National Science Foundation, National Center for Science and Engineering Statistics. NSF 24-307. Available online: https://ncses.nsf.gov/pubs/nsf24307 (accessed on 30 October 2024).
  32. GNU Operating System. 2024. What Is Free Software? Version 1.169. Available online: https://web.archive.org/web/20250729105453/https://www.gnu.org/philosophy/free-sw.en.html (accessed on 5 August 2025).
  33. Goldsmith-Pinkham, Paul. 2024. Tracking the Credibility Revolution across Fields. arXiv arXiv:2405.20604. [Google Scholar] [CrossRef]
  34. Golub, Benjamin. 2024. In Economics, Editors, Referees, and Authors Often Behave as if a Published Paper Should Reflect Some Kind of Authoritative Consensus. As a Result, Valuable Debate Happens in Secret, and the Resulting Paper is an Opaque Compromise with Anonymous Co-Authors Called Referees. [BlueSky Post]. Available online: https://bsky.app/profile/bengolub.bsky.social/post/3le2omjd5mk2s (accessed on 12 June 2025).
  35. Gomez-Diaz, Tomas, and Teresa Recio. 2020. Towards an Open Science Definition as a Political and Legal Framework: Sharing and Dissemination of Research Outputs. Polis 19: 5–25. [Google Scholar] [CrossRef]
  36. Gomez-Diaz, Tomas, and Teresa Recio. 2024. Articles, Software, Data: An Open Science Ethological Study. Maple Transactions 3: 19. [Google Scholar] [CrossRef]
  37. Grossmann, Alexander, and Björn Brembs. 2021. Current Market Rates for Scholarly Publishing Services. F1000Research 10: 1–24. [Google Scholar] [CrossRef]
  38. Hardwicke, Tom E., and Eric-Jan Wagenmakers. 2023. Reducing Bias, Increasing Transparency and Calibrating Confidence with Preregistration. Nature Human Behaviour 7: 15–26. [Google Scholar] [CrossRef]
  39. Heller, Blake H. 2024. GED® College Readiness Benchmarks and Post-Secondary Success. EdWorkingPaper No. 24-914. Annenberg Institute for School Reform at Brown University. Available online: https://doi.org/10.26300/mvvp-cf18 (accessed on 13 August 2025).
  40. Heller, Blake H. 2025. High School Equivalency Credentialing and Post-Secondary Success: Pre-Registered Quasi-Experimental Evidence from the GED® Test. EdWorkingPaper No. 25-1240. Annenberg Institute for School Reform at Brown University. Available online: http://doi.org/10.26300/nw9y-a303 (accessed on 13 August 2025).
  41. Hess, Frederick M., and Jeffrey R. Henig, eds. 2015. The New Education Philanthropy: Politics, Policy, and Reform. Cambridge, MA: Harvard Education Press. [Google Scholar]
  42. Holzmeister, Felix, Magnus Johannesson, Robert Böhm, Anna Dreber, Jürgen Huber, and Michael Kirchler. 2024. Heterogeneity in Effect Size Estimates. Proceedings of the National Academy of Sciences 121: e2403490121. [Google Scholar] [CrossRef] [PubMed]
  43. Huntington-Klein, Nick, Andreu Arenas, Emily Beam, Marco Bertoni, Jeffrey R. Bloem, Pralhad Burli, Naibin Chen, Paul Grieco, Godwin Ekpe, Todd Pugatch, and et al. 2021. The Influence of Hidden Researcher Decisions in Applied Microeconomics. Economic Inquiry 59: 944–60. [Google Scholar] [CrossRef]
  44. Huntington-Klein, Nick, Claus C. Pörtner, Yubraj Acharya, Matus Adamkovic, Joop Adema, Lameck Ondieki Agasa, Imtiaz Ahmad, Mevlude Akbulut-Yuksel, Martin Eckhoff Andresen, David Angenendt, and et al. 2025. The Sources of Researcher Variation in Economics. I4R Discussion Paper Series No. 209. Institute for Replication. Available online: https://hdl.handle.net/10419/312260 (accessed on 25 March 2025).
  45. Imbens, Guido W. 2021. Statistical significance, p-values, and the reporting of uncertainty. Journal of Economic Perspectives 35: 157–74. [Google Scholar] [CrossRef]
  46. Imbens, Guido W. 2024. Causal Inference in the Social Sciences. Annual Review of Statistics and Its Application 11: 123–52. [Google Scholar] [CrossRef]
  47. Imbens, Guido W., and Jeffrey M. Wooldridge. 2009. Recent Developments in the Econometrics of Program Evaluation. Journal of Economic Literature 47: 5–86. [Google Scholar] [CrossRef]
  48. Imbens, Guido W., and Yiqing Xu. 2024. Lalonde (1986) After Nearly Four Decades: Lessons Learned. arXiv arXiv:2406.00827. [Google Scholar] [CrossRef]
  49. Institute of Education Sciences. 2022. Standards for Excellence in Education Research. Available online: https://ies.ed.gov/seer/ (accessed on 13 November 2024).
  50. Kaplan, Robert M., and Veronica L. Irvin. 2015. Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time. PLoS ONE 10: e0132382. [Google Scholar] [CrossRef]
  51. Kennedy, James E. 2024. Addressing Researcher Fraud: Retrospective, Real-Time, and Preventive Strategies—including Legal Points and Data Management That Prevents Fraud. Frontiers in Research Metrics and Analytics 9: 1397649. [Google Scholar] [CrossRef]
  52. Kidwell, Mallory C., Ljiljana B. Lazarević, Erica Baranski, Tom E. Hardwicke, Sarah Piechowski, Lina-Sophia Falkenberg, Curtis Kennett, Agnieska Slowik, Carina Sonnleitner, Chelsey Hess-Holden, and et al. 2016. Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency. PLoS Biology 14: e1002456. [Google Scholar] [CrossRef]
  53. Klonsky, E. David. 2024. Campbell’s Law Explains the Replication Crisis: Pre-Registration Badges Are History Repeating. Assessment 32: 224–34. [Google Scholar] [CrossRef]
  54. Lakens, Daniël. 2024. When and How to Deviate from a Preregistration. Collabra: Psychology 10: 117094. [Google Scholar] [CrossRef]
  55. LaLonde, Robert J. 1986. Evaluating the Econometric Evaluations of Training Programs with Experimental Data. American Economic Review 76: 604–20. [Google Scholar]
  56. Larivière, Vincent, David Pontille, and Cassidy R. Sugimoto. 2021. Investigating the Division of Scientific Labor Using the Contributor Roles Taxonomy (CRediT). Quantitative Science Studies 2: 111–28. [Google Scholar] [CrossRef]
  57. Leamer, Edward E. 1983. Let’s Take the Con out of Econometrics. American Economic Review 73: 31–43. [Google Scholar]
  58. Lee, Monica G., Susanna Loeb, and Carly D. Robinson. 2024a. Effects of High-Impact Tutoring on Student Attendance: Evidence from the OSSE HIT Initiative in the District of Columbia. EdWorkingPaper. Annenberg Institute at Brown University. Available online: https://doi.org/10.26300/wghb-4864 (accessed on 13 August 2025).
  59. Lee, Monica G., Susanna Loeb, and Carly D. Robinson. 2024b. Year 2 of Effects of High-Impact Tutoring on Student Attendance: Evidence from the OSSE HIT Initiative in the District of Columbia. OSF Registries Preregistration. Available online: https://osf.io/n45vt (accessed on 13 August 2025).
  60. Limaye, Aditya M. 2022. Article Processing Charges May Not Be Sustainable for Academic Researchers. MIT Science Policy Review 3: 17–20. [Google Scholar] [CrossRef]
  61. Logg, Jennifer M., and Charles A. Dorison. 2021. Pre-Registration: Weighing Costs and Benefits for Researchers. Organizational Behavior and Human Decision Processes 167: 18–27. [Google Scholar] [CrossRef]
  62. Makel, Matthew C., and Jonathan A. Plucker. 2014. Facts Are More Important Than Novelty: Replication in the Education Sciences. Educational Researcher 43: 304–16. [Google Scholar] [CrossRef]
  63. Makel, Matthew C., Jonathan A. Plucker, and Boyd Hegarty. 2012. Replications in Psychology Research: How Often Do They Really Occur? Perspectives on Psychological Science 7: 537–42. [Google Scholar] [CrossRef]
  64. Marušić, Ana, Lana Bošnjak, and Ana Jerončić. 2011. A Systematic Review of Research on the Meaning, Ethics and Practices of Authorship across Scholarly Disciplines. PLoS ONE 6: e23477. [Google Scholar] [CrossRef] [PubMed]
  65. Marx, Benjamin M., and Lesley J. Turner. 2018. Borrowing Trouble? Human Capital Investment with Opt-In Costs and Implications for the Effectiveness of Grant Aid. American Economic Journal: Applied Economics 10: 163–201. [Google Scholar] [CrossRef]
  66. McAdams, Dan P., and Kate C. McLean. 2013. Narrative Identity. Current Directions in Psychological Science 22: 233–38. [Google Scholar] [CrossRef]
  67. McNutt, Marcia K., Monica Bradford, Jeffrey M. Drazen, Brooks Hanson, Bob Howard, Kathleen Hall Jamieson, Véronique Kiermer, Emilie Marcus, Barbara Kline Pope, Randy Schekman, and et al. 2018. Transparency in Authors’ Contributions and Responsibilities to Promote Integrity in Scientific Publication. Proceedings of the National Academy of Sciences 115: 2557–60. [Google Scholar] [CrossRef]
  68. McShane, Blakeley B., Jennifer L. Tackett, Ulf Böckenholt, and Andrew Gelman. 2019. Large-Scale Replication Projects in Contemporary Psychological Research. The American Statistician 73 Suppl. S1: 99–105. [Google Scholar] [CrossRef]
  69. Mellor, David. 2021. Improving Norms in Research Culture to Incentivize Transparency and Rigor. Educational Psychologist 56: 122–31. [Google Scholar] [CrossRef]
  70. Miguel, Edward. 2021. Evidence on Research Transparency in Economics. Journal of Economic Perspectives 35: 193–214. [Google Scholar] [CrossRef]
  71. Mueller-Langer, Frank, Benedikt Fecher, Dietmar Harhoff, and Gert G. Wagner. 2019. Replication Studies in Economics—How Many and Which Papers Are Chosen for Replication, and Why? Research Policy 48: 62–83. [Google Scholar] [CrossRef]
  72. Munafò, Marcus R., Brian A. Nosek, Dorothy V. M. Bishop, Katherine S. Button, Christopher D. Chambers, Nathalie Percie du Sert, Uri Simonsohn, Eric-Jan Wagenmakers, Jennifer J. Ware, and John P. A. Ioannidis. 2017. A Manifesto for Reproducible Science. Nature Human Behaviour 1: 1–9. [Google Scholar] [CrossRef]
  73. Munafò, Marcus R., Christopher D. Chambers, Alexandra M. Collins, Laura Fortunato, and Malcolm R. Macleod. 2020. Research Culture and Reproducibility. Trends in Cognitive Sciences 24: 91–93. [Google Scholar] [CrossRef]
  74. Murnane, Richard J., and John B. Willett. 2011. Methods Matter: Improving Causal Inference in Educational and Social Science Research. Oxford: Oxford University Press. [Google Scholar]
  75. Naguib, Costanza. 2024. P-Hacking and Significance Stars. Discussion Papers No. 24-09. Bern: University of Bern, Department of Economics. Available online: https://www.econstor.eu/bitstream/10419/308751/1/1912207044.pdf (accessed on 20 July 2025).
  76. National Institutes of Health. 2016. NIH Policy on the Dissemination of NIH-Funded ClinicalTrial Information. Available online: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-149.html (accessed on 12 November 2024).
  77. Neal, Zachary P. 2022. A Quick Guide to Sharing Research Data & Materials. Available online: https://doi.org/10.31219/osf.io/9mu7r (accessed on 27 March 2025).
  78. Nosek, Brian A., Charles R. Ebersole, Alexander C. DeHaven, and David T. Mellor. 2018. The Preregistration Revolution. Proceedings of the National Academy of Sciences 115: 2600–6. [Google Scholar] [CrossRef] [PubMed]
  79. Nosek, Brian A., George Alter, George Christopher Banks, Denny Borsboom, Sara D. Bowman, Steven J. Breckler, Stuart Buck, Christopher D. Chambers, Gilbert Chin, Garret S. Christensen, and et al. 2015. Promoting an Open Research Culture. Science 348: 1422–25. [Google Scholar] [CrossRef] [PubMed]
  80. Obama, Barack H. 2014. Remarks of President Barack Obama—State of the Union Address as Delivered. [Speech Transcript]. Available online: https://obamawhitehouse.archives.gov/the-press-office/2016/01/12/remarks-president-barack-obama-%E2%80%93-prepared-delivery-state-union-address (accessed on 12 November 2024).
  81. Open Source Initiative. 2024. The Open Source Definition. Version 1.9. Available online: https://opensource.org/osd (accessed on 27 July 2025).
  82. Özler, Berk. 2019. Registering Studies When All You Want is a Little More Credibility. Development Impact. World Bank Blogs. Available online: https://blogs.worldbank.org/en/impactevaluations/registering-studies-when-all-you-want-little-more-credibility (accessed on 30 August 2024).
  83. Park, Rina Seung Eun, and Judith Scott-Clayton. 2018. The Impact of Pell Grant Eligibility on Community College Students’ Financial Aid Packages, Labor Supply, and Academic Outcomes. Educational Evaluation and Policy Analysis 40: 557–85. [Google Scholar] [CrossRef]
  84. Pilat, Dirk, and Yukiko Fukasaku. 2007. OECD Principles and Guidelines for Access to Research Data from Public Funding. Data Science Journal 6: OD4–OD11. [Google Scholar] [CrossRef]
  85. Piwowar, Heather, Jason Priem, Vincent Larivière, Juan Pablo Alperin, Lisa Matthias, Bree Norlander, Ashley Farley, Jevin West, and Stefanie Haustein. 2018. The State of OA: A Large-Scale Analysis of the Prevalence and Impact of Open Access Articles. PeerJ 6: e4375. [Google Scholar] [CrossRef]
  86. Polka, Jessica K., Robert Kiley, Boyana Konforti, Bodo Stern, and Ronald D. Vale. 2018. Publish Peer Reviews. Nature 560: 545–47. [Google Scholar] [CrossRef]
  87. Reich, Justin. 2021. Preregistration and Registered Reports. Educational Psychologist 56: 101–9. [Google Scholar] [CrossRef]
  88. Reikosky, Nora. 2024. For (Y) Our Future: Plutocracy and the Vocationalization of Education. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA, USA. [Google Scholar]
  89. Rosenthal, Robert. 1979. The File Drawer Problem and Tolerance for Null Results. Psychological Bulletin 86: 638. [Google Scholar] [CrossRef]
  90. Ross-Hellauer, Tony. 2017. What Is Open Peer Review? A Systematic Review. F1000Research 6: 588. [Google Scholar] [CrossRef]
  91. Rubin, Donald B. 2005. Causal Inference Using Potential Outcomes: Design, Modeling, Decisions. Journal of the American Statistical Association 100: 322–31. [Google Scholar] [CrossRef]
  92. Sands, Sara R. 2023. Institutional Change and the Rise of an Ecosystem Model in Education Philanthropy. Educational Policy 37: 1511–44. [Google Scholar] [CrossRef]
  93. Schmidt, Birgit, Tony Ross-Hellauer, Xenia van Edig, and Elizabeth C. Moylan. 2018. Ten Considerations for Open Peer Review. F1000Research 7: 969. [Google Scholar] [CrossRef]
  94. Scholastica. 2023. Announcing CRediT Taxonomy Support for all Scholastica Products. Available online: https://blog.scholasticahq.com/post/credit-taxonomy-support-scholastica-products/ (accessed on 12 June 2025).
  95. Silberzahn, Raphael, Eric Luis Uhlmann, Daniel Patrick Martin, Pasquale Anselmi, Frederick Aust, Eli Awtrey, Štěpán Bahník, Feng Bai, Colin Bannard, Evelina Bonnier, and et al. 2018. Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results. Advances in Methods and Practices in Psychological Science 1: 337–56. [Google Scholar] [CrossRef]
  96. Simmons, Joseph P., Leif D. Nelson, and Uri Simonsohn. 2011. False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science 22: 1359–66. [Google Scholar] [CrossRef] [PubMed]
  97. Smith, Jeffrey A., and Petra E. Todd. 2005. Does Matching Overcome LaLonde’s Critique of Nonexperimental Estimators? Journal of Econometrics 125: 305–53. [Google Scholar] [CrossRef]
  98. Tennant, Jon. 2019. Open Science is Just Good Science. Version 1.0.0. DARIAH Campus [Video]. Available online: https://campus.dariah.eu/resources/hosted/open-science-is-just-good-science (accessed on 13 August 2025).
  99. Tennant, Jon, and Nate Breznau. 2022. Legacy of Jon Tennant. Open Science Is Just Good Science. Available online: https://doi.org/10.31235/osf.io/hfns2 (accessed on 27 July 2025).
  100. Tennant, Jonathan P., François Waldner, Damien C. Jacques, Paola Masuzzo, Lauren B. Collister, and Chris H. J. Hartgerink. 2016. The Academic, Economic and Societal Impacts of Open Access: An Evidence-Based Review. F1000Research 5: 632. [Google Scholar] [CrossRef]
  101. The White House. n.d. Economic Report of the President. Available online: https://bidenwhitehouse.archives.gov/cea/economic-report-of-the-president/ (accessed on 12 November 2024).
  102. United Nations Educational, Scientific and Cultural Organization [UNESCO]. 2021. UNESCO Recommendation on Open Science. Available online: https://www.unesco-floods.eu/wp-content/uploads/2022/04/379949eng.pdf (accessed on 27 July 2025).
  103. University of Houston Education Research Center. 2023. Policies & Procedures: General Information. Available online: https://web.archive.org/web/20240913034134/https://uh.edu/education/research/institutes-centers/erc/proposal-preparation-and-submission/uherc-general-information_rev1_feb2023.pdf (accessed on 15 November 2024).
  104. van den Akker, Olmo R., Marcel A. J. van Assen, Marjan Bakker, Mahmoud Elsherif, Tsz Keung Wong, and Jelte M. Wicherts. 2024. Preregistration in Practice: A Comparison of Preregistered and Non-Preregistered Studies in Psychology. Behavior Research Methods 56: 5424–33. [Google Scholar] [CrossRef]
  105. Watson, Mick. 2015. When Will ‘Open Science’ Become Simply ‘Science’? Genome Biology 16: 101. [Google Scholar] [CrossRef]
  106. Weber, Matthias. 2018. The Effects of Listing Authors in Alphabetical Order: A Review of the Empirical Evidence. Research Evaluation 27: 238–45. [Google Scholar] [CrossRef]
  107. Wilkinson, Mark D., Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg, Jan-Willem Boiten, Luiz Bonino da Silva Santos, Philip E. Bourne, and et al. 2016. The FAIR Guiding Principles for Scientific Data Management and Stewardship. Scientific Data 3: 1–9. [Google Scholar] [CrossRef]
  108. Wolfram, Dietmar, Peiling Wang, Adam Hembree, and Hyoungjoo Park. 2020. Open Peer Review: Promoting Transparency in Open Science. Scientometrics 125: 1033–51. [Google Scholar] [CrossRef]
  109. Woodworth, Robert S., and Edward L. Thorndike. 1901. The Influence of Improvement in One Mental Function upon the Efficiency of Other Functions (I). Psychological Review 8: 247. [Google Scholar] [CrossRef]
  110. Wuttke, Alexander, Karolin Freitag, Laura Kiemes, Linda Biester, Paul Binder, Bastian Buitkamp, Larissa Dyk, Louisa Ehlich, Mariia Lesiv, Yannick Poliandri, and et al. 2024. Observing Many Students Using Difference-in-Differences Designs on the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty. SocArXiv Papers. Available online: https://doi.org/10.31235/osf.io/j7nc8 (accessed on 15 November 2024).
  111. Youtie, Jan, and Barry Bozeman. 2014. Social Dynamics of Research Collaboration: Norms, Practices, and Ethical Issues in Determining Co-Authorship Rights. Scientometrics 101: 953–62. [Google Scholar] [CrossRef]
  112. Ziliak, Stephen T., and Deirdre N. McCloskey. 2008. The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor: University of Michigan Press. [Google Scholar]
Figure 1. Contrasting the features and consequences of “old school” research practices (a) with “open science” research practices (b). Note: Adapted from Gehlbach and Robinson (2021).
Table 1. Features of Common Social Science Research Registries.
Feature ↓ / Registry → | AEA | AsPredicted | OSF | REES | RIDIE
Allows QE studies to be registered | No | Yes | Yes | Yes | Yes
Flexible preregistration template | No | Yes | Yes | Yes | Yes
QE methods can be selected within default template | No | Some | Some | Yes | Yes
Specific QE preregistration templates | No | No | No | Yes | No
Meta-data/tags to identify QE registrations | No | No | Some | Yes | Yes
Specific content or geographic limitations | No | No | No | Yes | Yes
Notes: QE = quasi-experimental. AEA RCT Registry: https://www.socialscienceregistry.org/; AsPredicted: https://aspredicted.org/; OSF Registries: https://osf.io/registries; Registry of Efficacy and Effectiveness Studies (REES): https://sreereg.icpsr.umich.edu/sreereg/; Registry for International Development Impact Evaluations (RIDIE): https://ridie.3ieimpact.org/. All registry websites accessed on 13 August 2025 via the indicated URLs.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
