Review

On the Dearth of Retractions in Social Work: A Cross-Sectional Study of Ten Leading Journals

by
Daniel J. Dunleavy
Independent Researcher, Tallahassee, FL 32399, USA
Metrics 2025, 2(3), 16; https://doi.org/10.3390/metrics2030016
Submission received: 14 May 2025 / Revised: 31 July 2025 / Accepted: 19 August 2025 / Published: 1 September 2025

Abstract

In recent decades, there has been an increase in the number of retractions across the biomedical and social sciences. A high rate of retractions undermines the integrity of scholarly journals and threatens the credibility of scientific disciplines. It is unknown how common retractions are within the field of social work. The aim of this study was to determine the prevalence of retractions among ten leading social work journals. This cross-sectional study employed three search strategies. First, each journal’s website was searched using the keywords “retracted” and “retraction”. The same procedure was then employed for each journal using Google Scholar’s advanced search function. Finally, the Retraction Watch Database was queried using the name of each journal. None of the 196 results produced by these search strategies was identified as a retracted article. Reasons for this absence are explored and recommendations to enhance the integrity of social work research and journals are discussed.

1. Introduction

Scholarly journals play a key role in the development and dissemination of scholarly knowledge [1]. They are part of an iterative process: Journals assess the merits of novel research and scholarship—ostensibly selecting for work deemed most rigorous and/or impactful—and critique and purportedly improve it between submission and acceptance. These published works help to inform, justify, and contextualize future scholarship. This is evident, for example, in the way literature reviews inform and support the background sections of manuscripts and grant proposals, or in the synthesis of results from a body of research via systematic reviews, meta-analyses [2,3] and, at an even higher order, umbrella reviews (i.e., the collection, assessment, and synthesis of multiple systematic reviews and meta-analyses) [4]. Newly published scholarship advances, supports, and sometimes challenges the prevailing set(s) of policies, interventions, and frameworks employed across the biomedical and social sciences.
Traditionally, editors and peer reviewers support journals by serving as a form of quality control or “gatekeeping” [5,6]. While recent developments in scholarly publishing have enabled the partial or even complete decoupling of peer review from this gatekeeping role (sometimes referred to as “journal-independent” peer review) [7,8,9], journals still largely adhere to a traditional model of single- or double-blind, pre-publication peer review. This is especially true for the discipline under study in the current paper—social work [10,11]. Although the published literature is in some ways validated by this model, the underlying processes are largely unstandardized and opaque, and its overall functioning is poorly understood [12,13]. Blatant errors (e.g., misreported or incorrect statistical results; inaccurate or misleading citations), omissions (e.g., selective reporting of results), misrepresentations (inflation or deflation of findings), and even cases of fraud (e.g., fabrication of data, falsification) are inevitably published [14,15,16,17].
To some extent, this is to be expected. Both research and peer review are imperfect, human processes [18,19], which rely on the trust of authors, reviewers, and editors [20,21]. Therefore, it should not come as a surprise that, at times, those who might be considered bad-faith actors (actively or passively) abuse and manipulate the publishing system. The following examples are illustrative.
In 2011, the award-winning social psychologist and former dean Diederik Stapel was found to have created fraudulent data (i.e., constructing fictitious datasets) or manipulated existing data in more than 50 papers [22,23]. Some of these papers were published in the discipline’s top journals (e.g., Psychological Science, Journal of Personality and Social Psychology, Personality & Social Psychology Bulletin). Though no other persons were found to be complicit in the fraud, Stapel’s actions had a wide-ranging, negative impact on colleagues, co-authors, students, and the profession of psychology as a whole [23].
More recently, Lindsay and colleagues, in what has become known as the “Grievance Studies Affair” [24], wrote and submitted 20 questionable—and sometimes fraudulent—manuscripts to journals in the humanities and social sciences. This case, which received considerable media attention [25], caused something of a stir within the field of social work [18], since one well-known social work journal, Affilia: Feminist Inquiry in Social Work, was a target of the hoax—ultimately accepting one of the manuscripts for publication.
There are numerous other (varied) examples—both historical and contemporary—which cut across the biomedical and social sciences and include gross misconduct, plagiarism, breaches of research ethics, and manipulated or faked peer review, among other things [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48]. These actions contaminate the professional literature—including literature used and published by social work researchers and scholars—by certifying and canonizing false claims [49,50,51]. This undermines the validity of evidence synthesis and the process of evidence-based practice. Xu et al. [52], for example, investigated the impact that 1330 retracted randomized controlled trials (RCTs) had on subsequent systematic reviews and meta-analyses. Of the 3902 meta-analyses that could be analyzed, removing the retracted RCTs changed the direction of the pooled effect in 8.4% of cases and changed the determination of statistical significance in 16% of cases. Kulkarni [53] discusses how image duplication and manipulation affected the synthesis of evidence for, and subsequent publication of, a review of preclinical studies of depression.
Additionally, research misconduct wastes resources, can lead to real-world harms to individuals and communities, and otherwise undermines trust in the overall scientific process [10,44,54,55,56,57]. This is particularly concerning for the discipline of social work, where there is a professional duty to develop and provide effective services to clients and communities [54]. Research misconduct and fraud risk exposing those served to potential harm and may damage the credibility of the profession [58].

1.1. The Prevalence of Research Misconduct and Fraud

Many of the above cases involve fraud. However, fraud is but one of several unsavory research practices under the broader umbrella of research misconduct. While there is no universally agreed-upon definition, research misconduct is generally understood to include practices such as data fabrication, falsification, and plagiarism (commonly referred to as FFP) [59,60]. Some definitions also include insufficient ethics approval, gift authorship, and selective reporting of results, among other things [61,62]. “Honest mistakes” are typically [60], though not always, excluded from discussions of research misconduct.
How common are research misconduct and outright fraud? The data on this is uncertain, in part because most institutional investigations are not made public [63] and suspected cases are almost certainly underreported—to say nothing of the clandestine nature of such actions. Indirect evidence gives a sense of the upper and lower bounds. Marshall [64] summarizes research on the topic, with estimates ranging from one instance of fraud per 100,000 scientists down to one in 100. Others estimate somewhere near one in 10,000 [65]. A 2021 meta-analysis of studies on research misconduct by Xie and colleagues [66] estimates that the self-reported prevalence of FFP is 2.9% (95% CI: 2.1–3.8%), while the estimated prevalence of FFP observed in others is 15.5% (95% CI: 12.4–19.2%). These figures parallel previous work on the topic [67], which estimated that about 2% of researchers have admitted to fraud, falsification, or other serious data manipulation. More recently, Heathers [68] criticizes this figure as too conservative—a point readily acknowledged in Fanelli’s 2009 paper [67]—and gives a rough estimate of around 1 in 7 papers containing a fabricated or falsified element.
Estimated rates of plagiarism are similarly crude. Pupovac and Fanelli [69] synthesized survey results on self-reported plagiarism and on plagiarism observed in others (i.e., observed or suspected plagiarism). They estimate that the self-reported prevalence of plagiarism is 1.7% (95% CI: 1.2–2.4%), while the estimated prevalence of plagiarism observed in others is 29.6% (95% CI: 17.4–45.5%). Synthesizing studies of textual similarity, as identified using plagiarism software and human verification, Pupovac [70] calculated a pooled estimate of 18% (95% CI: 12–25%) of articles containing instances of plagiarism.

1.2. Editorial Responses to Misconduct and Fraud

When things go wrong, publishers and journal editors have a series of tools at their disposal to respond. These include errata, corrigenda, and expressions of concern. Errata are corrections made to a published article, typically in response to a minor error (e.g., a production error caused by the journal or publisher) [71]—one which does not affect the overall findings or message. Corrigenda are fundamentally the same, referring to corrections of small errors caused by the article’s authors [71]. Expressions of concern are more serious in nature. Issued by the publisher or journal editor(s), expressions of concern are used to raise awareness that an article’s findings may be—but are not confirmed to be—unreliable (e.g., in an alleged, but still unproven, case of research misconduct) [71]. In extreme cases—when an article is found to be seriously flawed and is thus determined to be unreliable—a paper may be retracted and removed from the published record altogether [71]. Organizations such as the Committee on Publication Ethics (COPE; https://web.archive.org/web/20240109070233/https://publicationethics.org/retraction-guidelines; accessed on 20 August 2024), the International Committee of Medical Journal Editors (ICMJE; https://web.archive.org/web/20231108144736/http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/corrections-and-version-control.html; accessed on 20 August 2024), and the Council of Science Editors (CSE; https://web.archive.org/web/20240110174932/https://www.councilscienceeditors.org/3-5-correcting-the-literature; accessed on 20 August 2024) have developed guidelines, standards, and resources for when and how to initiate a retraction.

1.3. How Common Are Retractions?

On the whole, retractions are relatively uncommon. Estimates range somewhere between two and eight retractions per 10,000 papers [72,73], though, as noted by Cokol et al. [74], this may underestimate the number of papers that should correctly be retracted. Other research has emphasized the increase in retractions over the course of the last several decades [75]. Oransky and colleagues [73] cataloged more than 4600 retractions in 2022—though the rate at which papers are retracted may be slowing slightly [72,76]. An analysis by Nature found that there were more than 10,000 retractions in 2023—pushing the ratio of papers retracted above 0.2% [77].
Even the most esteemed academic journals are susceptible [76,78,79], dispelling the erroneous assumption that poor-quality or otherwise suboptimal scholarship is published only in low-ranking journals [11,80].
The overarching question posed here is further obscured when taking into account disciplinary differences. Although retractions have increased over time, some studies suggest that rates are still higher in the biomedical and life sciences than in the social sciences and humanities [78]. A recent study [81] of retractions made on the basis of academic misconduct (25,710 retractions) found wide variation in retraction rates across disciplines. Across the ten disciplinary categories, rates ranged from 1.7 retractions per 100,000 articles (physics) to 17.4 per 100,000 (electrical engineering, electronics, and computer science). The social sciences ranked fourth highest at 5.3 per 100,000, though this is still notably lower than the 8.9 retractions per 100,000 in the clinical and life sciences.
The presence of retractions is not necessarily problematic—in moderation, they may indicate a robust ability to self-correct. Fanelli [76] hypothesizes that these figures, rather than signifying an increase in the rate of fraud and misconduct, reflect that researchers, editors, and institutions are more adept at identifying (and increasingly likely to report) papers that cause concern. It is unclear from the available data whether this is true. Fang et al. [82] note that among a large sample of retracted articles in the biomedical and social sciences, the most common reason for retraction was “fraud or suspected fraud” (43.4%)—more than twice the rate of those retracted due to error (21.3%).
Nevertheless, these corrective actions may be a marker of a progressive journal or publication system [83], one that prioritizes knowledge generation over non-epistemic factors, such as novelty or perceived impact. If retractions are scarce or altogether absent, it may signify suboptimal mechanisms for self-correction within journals or the broader scientific community [44,84,85].

1.4. Purpose of the Study

How common are retractions within the field of social work? It is unclear from the studies and literature surveyed above. Given the notable increase in retractions over the past several decades, one would expect to find some obvious examples—even if the overall base rate is low. However, the minimal discussion about retractions within the social work literature yields little information about their disciplinary prevalence. Gibelman and Gelman [31], writing in the Journal of Social Work Education, note that “[P]ublic disclosure of cases of scientific misconduct within the social work research community have been absent” (p. 244). In 2017, in an editorial from the same journal, (then) Editor-in-Chief Joanne Yaffe [86] remarked, “…I am not aware of articles retracted from social work journals specifically…” (p. 369). In 2018, Ferguson and Clark [87] commented that, “The social work profession, at this point, holds limited knowledge about the state of misconduct in the field because social work researchers are not empirically investigating the conduct of research”.
The study is intended as a first step towards answering this question by determining the prevalence of retractions within ten leading social work journals and describing their characteristics.

2. Materials and Methods

The study described here is a preliminary and exploratory cross-sectional analysis of the frequency of retractions in social work. As such, it was not preregistered and there were no formal hypotheses—beyond the strong belief held by the author that there were likely some, but relatively few, retractions in the social work literature. Retractions are highlighted because their use is typically reserved as a response to the most egregious and pernicious scholarly practices. Accordingly, they represent the most forceful and concrete example of self-policing and correction. This study does not address the prevalence of corrections (i.e., errata and corrigenda), author and publisher withdrawals of manuscripts, or expressions of concern. Corrections are reserved for minor errors and issues that do not affect the overall findings or message. Withdrawals, as described by COPE [88], typically occur prior to publication (e.g., during the review phase or while in-press) or when there is an administrative or production error [89]. As noted above, expressions of concern are reserved for instances where findings have not been confirmed to be unreliable or where an investigation is still pending. This study aims to explore concrete instances where retractions have been made.

2.1. Sampling Frame and Analytic Strategy

A convenience sample comprising ten leading social work journals was used for this analysis. Journals were chosen as the unit of analysis because they provide data for understanding historical events and trends within the professional literature [90]. The journals, presented in alphabetical order, were as follows: (1) British Journal of Social Work, (2) Families in Society, (3) Journal of Social Work, (4) Journal of Social Work Education, (5) Journal of Sociology & Social Welfare, (6) Journal of the Society for Social Work and Research, (7) Research on Social Work Practice, (8) Social Service Review, (9) Social Work, (10) Social Work Research (formerly Social Work Research and Abstracts [SWRA]). These journals were chosen, not on the basis of impact factor, which has numerous limitations [83,91,92], but because they are representative of the field [93,94], are well-regarded by social work scholars [95], and generally have a substantial publication history.
Hodge and Lacasse [94] rank nine of the ten journals listed above among the top 20 by h-index; the exception is the then nascent Journal of the Society for Social Work and Research (JSSWR). All ten journals enjoy a high level of prestige among senior social work faculty, ranking among the top 15, with nine of ten ranking within the top 10 [95].
Three search strategies were employed. First, each journal website was reviewed. During January of 2023, two keywords (i.e., “retracted” and “retraction”) were used to search for retracted articles. Using both terms permitted the broadest possible search, allowing the identification of articles labeled simply as “retracted” or “retraction” while also capturing articles and notices presented in other ways (e.g., “retraction notice”, “retraction statement”, “statement of retraction”). In cases where an article was not clearly labeled (e.g., in the title or via watermark) as being retracted, abstracts were inspected by the author. Then, for each journal, using Google Scholar’s advanced search function (https://scholar.google.com/#d=gs_asd&t=1679609281252; accessed on 20 August 2024), the journal’s title was entered (i.e., “Return articles published in”) with at least one of the terms “retracted” or “retraction” being required (i.e., “With at least one of the words”). Finally, these searches were supplemented using the Retraction Watch Database (version 1.0.6.0; see http://retractiondatabase.org/; accessed on 20 August 2024), the world’s most comprehensive resource on the topic [72], with more than 58,000 retractions listed as of May 2025. This database was queried using the names of each journal. All results were examined and, where necessary, reviewed by following the posted DOI. The full Retraction Watch Database was queried a second time on 7 May 2025 but did not yield any additional results. The search strategies spanned the entire publication histories of the chosen journal titles.
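As a rough illustration of the third strategy, the Python sketch below filters a hypothetical local export of the Retraction Watch Database by journal name. The file name and the “Journal” and “Title” column names are assumptions made for illustration, not the database’s documented schema; the searches reported in this study were conducted through the database’s web interface.

    import pandas as pd

    # The ten sampled journals (see Section 2.1).
    JOURNALS = [
        "British Journal of Social Work",
        "Families in Society",
        "Journal of Social Work",
        "Journal of Social Work Education",
        "Journal of Sociology & Social Welfare",
        "Journal of the Society for Social Work and Research",
        "Research on Social Work Practice",
        "Social Service Review",
        "Social Work",
        "Social Work Research",
    ]

    # Hypothetical export file; column names are assumed.
    rw = pd.read_csv("retraction_watch_export.csv")

    # A case-insensitive substring match is deliberately broad (e.g.,
    # "Social Work" matches many journal titles), so every returned
    # record still requires manual review.
    pattern = "|".join(JOURNALS)
    hits = rw[rw["Journal"].str.contains(pattern, case=False, na=False)]
    print(hits[["Journal", "Title"]].to_string(index=False))

Exact matching (e.g., rw["Journal"].isin(JOURNALS)) would be stricter, but journal names in retraction notices often vary in form, so a broad match followed by manual inspection is closer in spirit to the review process described above.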
To help verify results from these three search strategies, editors-in-chief for all ten journals were emailed to inquire whether, to their knowledge, there had ever been any retractions made, historically, at their respective journals. All results obtained using the search strategies described above were reviewed by the manuscript’s sole author (DJD). There were no additional attempts to cross-validate the study’s results.

2.2. Data Availability

All data and materials can be freely accessed via the project’s Open Science Framework repository (https://osf.io/y5bdf/?view_only=6071353334724014a66d3ba1fb2fb8aa; accessed on 20 August 2024). Search results are saved as PDFs and organized by search strategy. Where possible, URLs for websites (and references) have been archived using the Internet Archive’s “Wayback Machine”. Websites/pages which prevent crawling have been saved as PDFs and are made available at the link above.

3. Results

3.1. Journal Characteristics

There is a fair amount of diversity among the publishers of the ten journals. Oxford University Press and Sage Publishing each serve as the publisher for three journals in the sample. Two journals are published by the University of Chicago Press. One journal is published by Taylor & Francis. Finally, the Journal of Sociology & Social Welfare is jointly published by Western Michigan University, its College of Health and Human Services, and its School of Social Work (henceforth WMU/CHHS/SSW). Who publishes a journal is important, as journal policies are often shaped by publisher decisions and standards.
All four major publishers (Oxford University Press, Sage Publishing, Taylor & Francis, University of Chicago Press) explicitly stated that they follow COPE guidelines on retractions [96,97,98,99]. All but four journals (i.e., Journal of Social Work Education, Journal of Sociology & Social Welfare, Journal of the Society for Social Work and Research, Social Service Review) were listed (https://publicationethics.org/members; accessed on 20 August 2024) as COPE members—though the Journal of Social Work Education pointed to its publisher’s membership in the organization on its website (https://web.archive.org/web/20230920233817/https://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=uswe20; accessed on 20 August 2024). Only two journals, the British Journal of Social Work (https://web.archive.org/web/20230824022839/https://academic.oup.com/bjsw/pages/Code_Of_Practice; accessed on 20 August 2024) and the Journal of Social Work (https://web.archive.org/web/20231215224707/https://journals.sagepub.com/author-instructions/JSW; accessed on 20 August 2024), had specific, if limited, discussion of the retraction process on their websites. These and other characteristics are described in Table 1.

3.2. Main Findings

None of the 136 results produced from journal websites using the search terms “retracted” and “retraction” was identified as a retracted article. In Social Service Review, a letter to the editor by Kendall [100] calls for the retraction of an article by James Brown IV—for comments made in their prior article on Marshall Field. However, no retraction appears to have been made (see the response by Brown IV [101] in the same issue). Nor were any of the 59 results produced from subsequent searches of Google Scholar identified as retractions. The Retraction Watch Database produced one result labeled as a retraction in Research on Social Work Practice, which was classified as a “duplicate publication through error by journal/publisher”. However, examination of the publisher’s posted notice [102] shows that the article was “withdrawn” due to an administrative error, in which the “article was published twice Online First with two different DOIs”. Therefore, this instance is not classified as a retraction in the current study. Full search results are described in Table 2.
Of the eleven editors-in-chief queried (the British Journal of Social Work currently has two editors-in-chief), only six responded. None reported retractions under their leadership, nor were any aware of retractions under previous editorships. In a few cases (i.e., Families in Society, Journal of Social Work Education, Social Service Review), support staff (e.g., production editors and publishing personnel) were consulted by their respective editors—none of whom could identify any retractions at their affiliated journals.
All relevant materials, including complete search results for both the website and Google Scholar searches for each journal, are accessible as PDFs in the Open Science Framework (OSF) repository linked above. This is done not only in keeping with the ethos of open science, but also to address a limitation identified by Haddaway and Gusenbauer [103]—that search strategies relying on Google Scholar often lack transparency and reproducibility.

4. Discussion

The analysis presented here is not meant to support the strong claim that social work journals and editors do not make retractions. That claim is immediately falsified by a quick, non-systematic search of the social work literature [104]. Rather, it lends support to the more modest (and perhaps more interesting) claim that retractions are not being made in the field at the rate one would expect relative to the rest of the biomedical and social sciences. Of course, this conclusion is tentative and will need to be confirmed by a more comprehensive study—ideally a multi-analyst, preregistered review of all major social work journals. The opinions, beliefs, and editorial practices of journal editors could also be queried to provide greater context and may help partially explain this finding.

4.1. Why Are There So Few Retractions in Social Work?

A number of factors impede the identification and investigation of research misconduct and other issues that undermine the validity of published scholarship (e.g., questionable research practices) [66,105]. First, the incomplete reporting of statistical results makes evaluating a study’s findings difficult. It has long been known that many published articles lack sufficient detail in both study methods and results—rendering many unusable and/or uninterpretable [106]. Though this has not been systematically investigated, the same is likely true of social work research [105]. Second, even when results are sufficiently described, a lack of open data precludes any meaningful assessment of computational reproducibility in quantitative studies or determination of the validity of findings in qualitative studies.
Even after accounting for the numerous challenges and general complexities of sharing data, especially from qualitative studies [107,108,109,110], these issues will increasingly need to be confronted. In the U.S., all scientific data from federally funded research (whether qualitative or quantitative) is now required to be made as open and accessible as possible [111].
Finally, there is a general disinterest among social workers in reproducibility and replicability [14], which seems to have led the discipline to a place where every scholar wants to publish, but nobody (readers, peer reviewers, editors) wants to meaningfully validate this work. This final point is supported by a 2018 systematic review by Ferguson and Clark [87]. In their sample of published articles on the topic of research ethics (n = 1409), only 16 (1%) came from within the social work literature.

4.2. Improving Pre- and Post-Publication Peer Review: Roles for Editors and Reviewers

Gibelman and Gelman [58] rightly note that allegations and investigations of research misconduct (and the retraction process more generally) are “after-the-fact” activities, emphasizing the importance of prevention (e.g., training and mentorship of burgeoning scholars). Others point toward the role of institutions and funders in preventing, deterring, and identifying misconduct and other questionable research practices [112]. Since these important efforts will take time and are subject to institutional and professional inertia, this discussion focuses on simple tools and methods that could be used now (before or after peer review) by reviewers, editors, and readers to probe the results presented in quantitative and qualitative studies and bolster the integrity of the field and its literature. First, a set of tools is described, which can be used to detect statistical inconsistencies and errors by relying on the results (e.g., sample size, mean, standard deviation) of commonly reported statistical tests. Second, the case is made that qualitative research (common within social work scholarship) could be enhanced through the use of preregistration and a commitment to open science practices. However, in making these recommendations, it is acknowledged that current publishing dynamics and broader institutional factors unduly place the burden of research integrity on journals and publishers [113] and may also involve disproportionate risk at the individual level [114]. The interested reader may consult Heathers [115] for an impassioned discussion of the necessity of this broad domain of “forensic metascience”, the essential technological, institutional, and social conditions and incentives required for its facilitation, and how individuals engaged in its conduct may be better supported [112,114].

4.2.1. Probing Quantitative Studies: Re-Analysis of Study Results

A number of tools exist to detect statistical inconsistencies or errors that may be indicative of questionable or fraudulent research practices in quantitative studies [116]. The Granularity-Related Inconsistency of Means (GRIM) test checks the consistency of reported means for integer data (i.e., assessing whether the reported results are possible given the reported sample size and number of items). This is particularly useful when sample sizes are small. Similarly, the Descriptive Binary Test (DEBIT) assesses the accuracy of descriptive statistics for binary data when the mean and standard deviation are reported; in these circumstances, a given sample size and reported mean will always yield a specific standard deviation. Sample Parameter Reconstruction via Iterative Techniques (SPRITE) comprises a series of methods for generating plausible distributions of data based on reported summary statistics (i.e., sample size, mean, standard deviation, range). This is a useful method for probing for data anomalies or irregularities, especially when the underlying data is unavailable or incomplete.
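To make the arithmetic behind these checks concrete, below is a minimal Python sketch of GRIM- and DEBIT-style consistency tests. It is an illustration under simplifying assumptions (e.g., the DEBIT check uses the sample standard deviation formula and ignores the rounding range of the reported mean), not a substitute for the published implementations.

    import math

    def grim_consistent(mean, n, n_items=1, decimals=2):
        # For integer data, a mean must equal some integer total divided
        # by n * n_items. Test the integer totals nearest the implied
        # total against the reported, rounded mean.
        grain = n * n_items
        implied = mean * grain
        return any(
            round(total / grain, decimals) == round(mean, decimals)
            for total in (math.floor(implied), math.ceil(implied))
        )

    def debit_consistent(mean, sd, n, decimals=2):
        # For binary (0/1) data, the sample standard deviation is fixed
        # by the mean and n: sqrt(mean * (1 - mean) * n / (n - 1)).
        expected = math.sqrt(mean * (1 - mean) * n / (n - 1))
        return round(expected, decimals) == round(sd, decimals)

    # A reported mean of 3.48 is impossible for 10 integer responses to
    # a single item, whereas 3.50 is possible:
    print(grim_consistent(3.48, n=10))  # False
    print(grim_consistent(3.50, n=10))  # True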
Statcheck, which is available as a web app (https://michelenuijten.shinyapps.io/statcheck-web/; accessed on 20 August 2024) and as an R package, extracts the results of significance tests from manuscripts and recomputes the reported p-values based on the associated test statistics and degrees of freedom (where reported) [117]. Statcheck then compares the recomputed p-values with the reported p-values to see whether they are consistent. Besides reporting accuracy, this tool can aid in assessing whether a result is correctly interpreted as being statistically significant or not.
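The core of this consistency check can be sketched compactly. Assuming a reported t test, the Python sketch below recomputes the implied two-tailed p-value with SciPy and compares it with the reported value; the actual package parses APA-formatted results, handles additional test types (e.g., F, r, χ2), and accounts for rounding, so this is only a simplified illustration with a tolerance chosen for demonstration.

    from scipy import stats

    def check_t_test(t_value, df, reported_p, two_tailed=True, tol=0.0005):
        # Recompute the p-value implied by the reported t statistic and
        # degrees of freedom, then compare it with the reported p-value.
        p = stats.t.sf(abs(t_value), df)
        if two_tailed:
            p *= 2
        return p, abs(p - reported_p) <= tol

    # "t(28) = 2.20, p = .036" recomputes to p of roughly .0364, which is
    # consistent within the (illustrative) tolerance:
    recomputed, consistent = check_t_test(2.20, 28, 0.036)
    print(round(recomputed, 4), consistent)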
Preliminary work, which compared the consistency of statistical reporting in two prominent psychology journals (Psychological Science, Journal of Experimental Social Psychology) that implemented statcheck during peer review with that in two control journals (Journal of Experimental Psychology: General, Social Psychology), lends support to its (judicious) use [117]. See Francis and Thunell [118] and Heathers [116,119] for an overview of these and other tools and techniques. For a discussion of how social work research may be made more reproducible/replicable, see Dunleavy and Lacasse [14].

4.2.2. Accountability and Verifiability in Qualitative Studies

While qualitative research presents its own unique challenges (e.g., particular epistemological assumptions, commitments, and goals; protection of participant anonymity and confidentiality), similar concerns remain around the trustworthiness, rigor, warrant, and validity of findings from these studies [120]. Although qualitative research is not susceptible to the same types of misconduct as its quantitative counterpart, opportunities still arise for fraud, fabrication, and other idiosyncratic or questionable practices [24,109].
At least in some circumstances, the trustworthiness of qualitative research may be increased by completing a study preregistration. In this context, preregistration would include a description of the study aims, design, hypotheses (if applicable), the approach or tradition(s) in which the study is conducted, and detailed information about how data is collected, coded (if applicable), weighed, and synthesized. This could also include detailed information about how data and analyses are validated (e.g., triangulation, use of multiple coders, member-checking, auditing by a researcher outside of the study). A justification for the use of preregistration in qualitative research, along with an example template, is provided by Haven et al. [121]. Haven and Van Grootel [122] describe some common objections to, and limitations of, the preregistration of qualitative studies.
Regardless of whether qualitative research is preregistered, its credibility may also be strengthened by making data and materials open for review (e.g., during the review stage or upon publication). Following DuBois et al. [109,110], qualitative data and materials should, as a default, be shared with readers and peer reviewers to the greatest extent possible. Understandably, this may not be possible in all circumstances. But to the extent that it is possible, even simply sharing materials (e.g., interview questions, coding information) can improve the trustworthiness and integrity of a study. The OSF (https://osf.io/) and the Qualitative Data Repository (https://qdr.syr.edu/qdr-resources) offer some means and guidance for doing so.

5. Conclusions

Social work continues to grow as an academic, research-based profession. It is necessary that we ensure that our policies and practices are both ethical and evidence-based. Not only do social workers have a duty to generate research and scholarship in a way that facilitates the provision of effective services to clients and communities [54], but the field has a responsibility to continuously evaluate how that research and knowledge is vetted and disseminated, both before and after publication. Given that no retractions could be identified in this analysis, it is suggested that this has not been, and is not being, done effectively.
Rather than merely acting as if the published literature is the final word, social workers should continuously seek to appraise and critique the literature, to employ and reward pre- and post-publication peer review practices, and where necessary cleanse and clarify the scholarly corpus [43]. This may mean, going forward, that the field of social work sees a greater number of retractions. Of course, this need not necessarily be the case. Advances in technology and scholarly publishing are beginning to foster an environment that supports the version control of manuscripts, which may make retractions, if not obsolete, then at least less common [123].
One step towards these goals of increased oversight and accountability would be for social work editors to encourage peer reviewers to scrutinize the results of quantitative studies using the methods described in Section 4.2.1, before a manuscript is accepted for publication. This could be performed as a pilot test, using senior reviewers, before being scaled up across the editorial pipeline. Editors could also consider working with meta-researchers to publish the results of such efforts in the aggregate. Similarly, social work scholars could also systematically sample articles from leading social work journals to determine the rate at which findings are inconsistent or erroneous.
Another step would be a formal systematic review of the social work literature—one that draws a more representative sample of social work journals. For example, this could entail selecting the top 25 journals based on impact factor, publication output, readership, or overall impact [124]. This would help to shore up the limitations of the convenience sample used here and give a clearer answer as to how common retractions are within the discipline. Subsequent research in this area may also benefit from consultation with a research librarian or information specialist, who can assist in developing a reproducible search strategy—including the specification of search terms, journals, and databases—and with methods for data extraction.
Finally, social work scholars may consider interviewing or surveying reviewers, editors, and publishers within the discipline, who may be better able to share their expertise as to what accounts for the purported lack of retractions within social work.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All study data can be retrieved from the Open Science Framework repository: https://osf.io/y5bdf/?view_only=6071353334724014a66d3ba1fb2fb8aa (accessed on 20 August 2024).

Acknowledgments

The author would like to thank Eileen Gambrill, Bruce Thyer, and John H. Noble for providing helpful feedback on some of the ideas discussed in this article. The author would also like to thank Metrics’ three anonymous referees for their helpful comments during the review process. All errors and omissions are the responsibility of the author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Lindsey, D.; Kirk, S.A. The role of social work journals in the development of a knowledge base for the profession. Soc. Serv. Rev. 1992, 66, 295–310. [Google Scholar] [CrossRef]
  2. Drisko, J.W. Qualitative research synthesis: An appreciative and critical introduction. Qual. Soc. Work 2020, 12, 736–753. [Google Scholar] [CrossRef]
  3. Littell, J.H. Pulling together research studies to inform social work practice: The science of research synthesis. In Social Work Practice Research for the Twenty-First Century; Fortune, A., McCallion, P., Briar-Lawson, K., Eds.; Columbia University Press: New York, NY, USA, 2010; pp. 162–180. [Google Scholar] [CrossRef]
  4. Ioannidis, J.P.A. Integration of evidence from multiple meta-analyses: A primer on umbrella reviews, treatment networks and multiple treatments meta-analyses. CMAJ 2009, 181, 488–493. [Google Scholar] [CrossRef] [PubMed]
  5. Altman, L.K. For science’s gatekeepers, a credibility gap. The New York Times. 2 May 2006. Available online: https://web.archive.org/web/20230410022103/https://www.nytimes.com/2006/05/02/health/02docs.html (accessed on 24 July 2025).
  6. Crane, D. The gatekeepers of science: Some factors affecting the selection of articles for scientific journals. Am. Sociol. 1967, 2, 195–201. [Google Scholar]
  7. Eisen, M.B.; Akhmanova, A.; Behrens, T.E.; Diedrichsen, J.; Harper, D.; Iordanova, M.D.; Weigel, D.; Zaidi, M. Peer review without gatekeeping. eLife 2022, 11, e83889. [Google Scholar] [CrossRef] [PubMed]
  8. Hamelin, M.; Bourguet, D.; Guillemaud, T. Disconnecting the evaluation of scientific results from their diffusion. Commonplace 2022. [Google Scholar] [CrossRef]
  9. Lumb, E. PeerRef and the future of preprint peer review. Against the Grain. 21 March 2023. Available online: https://web.archive.org/web/20230402163722/https://www.charleston-hub.com/2023/03/peerref-and-the-future-of-preprint-peer-review/ (accessed on 24 July 2025).
  10. Caputo, R.K. Peer review: A vital gatekeeping function and obligation of professional scholarly practice. Fam. Soc. 2019, 100, 6–16. [Google Scholar] [CrossRef]
  11. Dunleavy, D.J. The cultivation of social work knowledge: Towards a more robust system of peer review. Fam. Soc. J. Contemp. Soc. Serv. 2021, 102, 556–568. [Google Scholar] [CrossRef]
  12. Dunleavy, D.J. Research note—Making peer review evidence-based: It’s time to open the “black box”. J. Soc. Work Educ. 2025, 61, 160–170. [Google Scholar] [CrossRef]
  13. Tennant, J.P.; Ross-Hellauer, T. The limitations to our understanding of peer review. Res. Integr. Peer Rev. 2020, 5, 6. [Google Scholar] [CrossRef]
  14. Dunleavy, D.J.; Lacasse, J.R. Is social work research in crisis? Res. Soc. Work Pract. 2023, 35, 264–276. [Google Scholar] [CrossRef]
  15. Ioannidis, J.P.A. Why most published research findings are false. PLOS Med. 2005, 2, e124. [Google Scholar] [CrossRef]
  16. Ioannidis, J.P.A. Why most discovered true associations are inflated. Epidemiology 2008, 19, 640–648. [Google Scholar] [CrossRef]
  17. Nuijten, M.B.; Hartgerink, C.H.; Van Assen, M.A.; Epskamp, S.; Wicherts, J.M. The prevalence of statistical reporting errors in psychology (1985–2013). Behav. Res. Methods 2016, 48, 1205–1226. [Google Scholar] [CrossRef]
  18. Piedra, L.M. The gift of a hoax. Qual. Soc. Work 2019, 18, 152–158. [Google Scholar] [CrossRef]
  19. Smith, R. Peer review: A flawed process at the heart of science and journals. J. R. Soc. Med. 2006, 99, 178–182. [Google Scholar] [CrossRef] [PubMed]
  20. Godlee, F. The fraud behind the MMR scare. BMJ 2011, 342, d22. [Google Scholar] [CrossRef]
  21. Relman, A.S. Peer review in scientific journals–What good is it? West. J. Med. 1990, 153, 520–522. [Google Scholar]
  22. Borsboom, D.; Wagenmakers, E.-J. Derailed: The rise and fall of Diederik Stapel. APS Observer. 27 December 2012. Available online: https://www.psychologicalscience.org/observer/derailed-the-rise-and-fall-of-diederik-stapel (accessed on 24 July 2025).
  23. Levelt, P.; Noort, E.; Drenth, P. Flawed Science: The Fraudulent Research Practices of Social Psychologist Diederik Stapel. 2012. Available online: https://www.rug.nl/about-ug/latest-news/news/archief2012/nieuwsberichten/stapel-eindrapport-eng.pdf (accessed on 24 July 2025).
  24. Lindsay, J.A.; Boghossian, P.; Pluckrose, H. Academic Grievance Studies and the Corruption of Scholarship. Areo. 2 October 2018. Available online: https://areomagazine.com/2018/10/02/academic-grievance-studies-and-the-corruption-of-scholarship/ (accessed on 24 July 2025).
  25. Mounk, Y. What an audacious hoax reveals about academia. The Atlantic. 5 October 2018. Available online: https://www.theatlantic.com/ideas/archive/2018/10/new-sokal-hoax/572212/ (accessed on 24 July 2025).
  26. Altman, L.F.; Melcher, L.A. Fraud in science. BMJ 1983, 286, 2003–2006. [Google Scholar] [CrossRef] [PubMed]
  27. Broad, W. Betrayers of the Truth: Fraud and Deceit in the Halls of Science; Simon & Schuster: New York, NY, USA, 1983; ISBN 978-067-149-549-7. [Google Scholar]
  28. Chawla, D.S. Elsevier investigates hundreds of peer reviewers for manipulating citations. Nature 2019, 573, 174. [Google Scholar] [CrossRef]
  29. Deer, B. How the case against the MMR vaccine was fixed. BMJ 2011, 342, c5347. [Google Scholar] [CrossRef]
  30. Ferguson, C.; Marcus, A.; Oransky, I. Publishing: The peer-review scam. Nature 2014, 515, 480–482. [Google Scholar] [CrossRef]
  31. Gibelman, M.; Gelman, S.R. Learning from the mistakes of others: A look at scientific misconduct in research. J. Soc. Work Educ. 2001, 37, 241–254. [Google Scholar] [CrossRef]
  32. Harvey, L. Research fraud: A long-term problem exacerbated by the clamour for research grants. Qual. High. Educ. 2020, 26, 243–261. [Google Scholar] [CrossRef]
  33. Haug, C.J. Peer-review fraud–Hacking the scientific publication process. N. Engl. J. Med. 2015, 373, 2393–2395. [Google Scholar] [CrossRef] [PubMed]
  34. Jack, A. ‘Open science’ advocates warn of widespread academic fraud. Financial Times. 1 August 2023. Available online: https://www.ft.com/content/fcad4a70-5ba0-4c42-bcec-332cf3b19f5d (accessed on 24 July 2025).
  35. Lee, S.M. A famous honesty researcher is retracting a study over fake data. BuzzFeed News. 20 August 2021. Available online: https://www.buzzfeednews.com/article/stephaniemlee/dan-ariely-honesty-study-retraction (accessed on 24 July 2025).
  36. Moens, J. In a tipster’s note, a view of science publishing’s Achilles heel. Undark/Retraction Watch. 21 June 2023. Available online: https://undark.org/2023/06/21/in-a-tipsters-note-a-view-of-science-publishings-achilles-heel/ (accessed on 24 July 2025).
  37. O’Grady, C. The reckoning. Science 2024, 383, 1046–1051. [Google Scholar] [CrossRef]
  38. Oransky, I.; Marcus, A. There’s far more scientific fraud than anyone wants to admit. The Guardian. 9 August 2023. Available online: https://www.theguardian.com/commentisfree/2023/aug/09/scientific-misconduct-retraction-watch (accessed on 24 July 2025).
  39. Relman, A.S. Lessons from the Darsee affair. N. Engl. J. Med. 1983, 308, 1415–1417. [Google Scholar] [CrossRef]
  40. Scull, A. Rosenhan revisited: Successful scientific fraud. Hist. Psychiatry 2023, 34, 180–195. [Google Scholar] [CrossRef]
  41. Servick, K. Cornell nutrition scientist resigns after retractions and research misconduct finding. Science. 21 September 2018. Available online: https://www.science.org/content/article/cornell-nutrition-scientist-resigns-after-retractions-and-research-misconduct-finding (accessed on 24 July 2025).
  42. Simonsohn, U.; Simmons, J.; Nelson, L. Data falsificada (Part 1): “Clusterfake”. Data Colada. 17 June 2023. Available online: https://datacolada.org/109 (accessed on 20 August 2024).
  43. Sox, H.C.; Rennie, D. Research misconduct, retraction, and cleansing the medical literature: Lessons from the Poehlman case. Ann. Intern. Med. 2006, 144, 609–613. [Google Scholar] [CrossRef]
  44. Stroebe, W.; Postmes, T.; Spears, R. Scientific misconduct and the myth of self-correction in science. Perspect. Psychol. Sci. 2012, 7, 670–688. [Google Scholar] [CrossRef]
  45. Subbaraman, N. Harvard teaching hospital seeks retraction of six papers by top researchers. The Wall Street Journal. 22 January 2024. Available online: https://www.wsj.com/health/dana-farber-harvard-retractions-corrections-ceo-laurie-glimcher-935636f5 (accessed on 24 July 2025).
  46. Sun, M. Setting the record straight. Science 1989, 244, 911. [Google Scholar] [CrossRef] [PubMed]
  47. The Office of Research Integrity. Case Summary: Armstead, William M. U.S. Department of Health and Human Services. Available online: https://ori.hhs.gov/content/case-summary-armstead-william-m (accessed on 24 July 2025).
  48. Wise, J. Boldt: The great pretender. BMJ 2013, 346, f1738. [Google Scholar] [CrossRef]
  49. Hsiao, T.-K.; Schneider, J. Continued use of retracted papers: Temporal trends in citations and (lack of) awareness of retractions shown in citation contexts in biomedicine. Quant. Sci. Stud. 2022, 2, 1144–1169. [Google Scholar] [CrossRef]
  50. Gambrill, E. The promotion of avoidable ignorance in the British Journal of Social Work. Res. Soc. Work Pract. 2018, 29, 455–469. [Google Scholar] [CrossRef]
  51. Nissen, S.B.; Magidson, T.; Gross, K.; Bergstrom, C.T. Publication bias and the canonization of false facts. eLife 2016, 5, e21451. [Google Scholar] [CrossRef]
  52. Xu, C.; Fan, S.; Tian, Y.; Liu, F.; Furuya-Kanamori, L.; Clark, J.; Zhang, C.; Li, S.; Lin, L.; Chu, H.; et al. Investigating the impact of trial retractions on the healthcare evidence ecosystem (VITALITY Study I), Retrospective cohort study. BMJ 2025, 389, e082068. [Google Scholar] [CrossRef]
  53. Kulkarni, S. How papers with doctored images can affect scientific reviews. Nature, 28 March 2024. [Google Scholar] [CrossRef]
  54. Council on Social Work Education. National Statement on Research Integrity in Social Work. Available online: https://www.cswe.org/about-cswe/governance/governance-groups/commission-on-research/research-statistics/responsible-conduct-of-research/national-statement/ (accessed on 24 July 2025).
  55. Moore, A.; Fisher, E.; Eccleston, C. Flawed, futile, and fabricated—Features that limit confidence in clinical research in pain and anaesthesia: A narrative review. Br. J. Anaesth. 2023, 130, 287–295. [Google Scholar] [CrossRef]
  56. Piper, K. The staggering death toll of scientific lies: Scientific fraud kills people. Should it be illegal? Vox. 23 August 2024. Available online: https://www.vox.com/future-perfect/368350/scientific-research-fraud-crime-jail-time (accessed on 24 July 2025).
  57. Steen, R.G. Retractions in the medical literature: How many patients are put at risk by flawed research? J. Med. Ethics 2011, 37, 688–692. [Google Scholar] [CrossRef] [PubMed]
  58. Gibelman, M.; Gelman, S.R. Scientific misconduct in social welfare research: Preventive lessons from other fields. Soc. Work Educ. 2005, 24, 275–295. [Google Scholar] [CrossRef]
  59. Bornmann, L. Research misconduct—Definitions, manifestations and extent. Publications 2013, 1, 87–98. [Google Scholar] [CrossRef]
  60. The Office of Research Integrity. Definitions of Research Misconduct. U.S. Department of Health and Human Services. Available online: https://ori.hhs.gov/definition-research-misconduct (accessed on 24 July 2025).
  61. Fanelli, D. The black, the white and the grey areas: Towards an international and interdisciplinary definition of scientific misconduct. In Promoting Research Integrity in a Global Environment; Mayer, T., Steneck, N., Eds.; World Scientific Publishing Company: Singapore, 2012; pp. 79–90. [Google Scholar]
  62. Smith, R. What is research misconduct? In The COPE Report 2000: Annual Report of the Committee on Publication Ethics; White, C., Ed.; BMJ Books: London, UK, 2000; pp. 7–11. Available online: https://publicationethics.org/files/u7141/COPE2000pdfcomplete.pdf (accessed on 24 July 2025).
  63. Smith, R. Research misconduct: The poisoning of the well. J. R. Soc. Med. 2006, 99, 232–237. [Google Scholar] [CrossRef] [PubMed]
  64. Marshall, E. How prevalent is fraud? That’s a million-dollar question. Science 2000, 290, 1662–1663. [Google Scholar] [CrossRef] [PubMed]
  65. Steneck, N.H. ORI Introduction to the Responsible Conduct of Research, Rev. ed.; Department of Health and Human Services: Washington, DC, USA, 2007. Available online: https://ori.hhs.gov/sites/default/files/2018-04/rcrintro.pdf (accessed on 24 July 2025).
  66. Xie, Y.; Wang, K.; Kong, Y. Prevalence of research misconduct and questionable research practices. Sci. Eng. Ethics 2021, 27, 41. [Google Scholar] [CrossRef] [PubMed]
  67. Fanelli, D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE 2009, 4, e5738. [Google Scholar] [CrossRef] [PubMed]
  68. Heathers, J.A.J. Approximately 1 in 7 Scientific Papers Are Fake. 2024. Available online: https://metaror.org/kotahi/articles/18/index.html (accessed on 20 August 2024).
  69. Pupovac, V.; Fanelli, D. Scientists admitting to plagiarism: A meta-analysis of surveys. Sci. Eng. Ethics 2015, 21, 1331–1352. [Google Scholar] [CrossRef]
  70. Pupovac, V. The frequency of plagiarism identified by text-matching software in scientific articles: A systematic review and meta-analysis. Scientometrics 2021, 126, 8981–9003. [Google Scholar] [CrossRef]
  71. Wager, E.; Barbour, V.; Yentis, S.; Kleinert, S. Retractions: Guidance from the Committee on Publication Ethics (COPE). Croat. Med. J. 2009, 50, 532–535. [Google Scholar] [CrossRef]
  72. Brainard, J.; You, J. What a massive database of retracted papers reveals about science publishing’s ‘death penalty’. Science, 25 October 2018. [Google Scholar] [CrossRef]
  73. Oransky, I. Nearing 5,000 retractions: A review of 2022. Retraction Watch. 27 December 2022. Available online: https://retractionwatch.com/2022/12/27/nearing-5000-retractions-a-review-of-2022/ (accessed on 24 July 2025).
  74. Cokol, M.; Iossifov, I.; Rodriguez-Esteban, R.; Rzhetsky, A. How many scientific papers should be retracted? EMBO Rep. 2007, 8, 422–423. [Google Scholar] [CrossRef]
  75. Steen, R.G.; Casadevall, A.; Fang, F.C. Why has the number of scientific retractions increased? PLoS ONE 2013, 8, e68397. [Google Scholar] [CrossRef]
  76. Fanelli, D. Why growing retractions are (mostly) a good sign. PLoS Med. 2013, 10, e1001563. [Google Scholar] [CrossRef]
  77. Van Noorden, R. More than 10,000 research papers were retracted in 2023—A new record. Nature 2023, 624, 479–481. [Google Scholar] [CrossRef]
  78. Grieneisen, M.L.; Zhang, M. A comprehensive survey of retracted articles from the scholarly literature. PLoS ONE 2012, 7, e44118. [Google Scholar] [CrossRef]
  79. Wray, K.B.; Andersen, L.E. Retractions in science. Scientometrics 2018, 117, 2009–2019. [Google Scholar] [CrossRef]
  80. Trikalinos, N.A.; Evangelou, E.; Ioannidis, J.P.A. Falsified papers in high-impact journals were slow to retract and indistinguishable from nonfraudulent papers. J. Clin. Epidemiol. 2008, 61, 464–470. [Google Scholar] [CrossRef]
  81. Li, M.; Shen, Z. Science map of academic misconduct. Innovation 2024, 5, 100593. [Google Scholar] [CrossRef] [PubMed]
  82. Fang, F.C.; Steen, R.G.; Casadevall, A. Misconduct accounts for the majority of retracted scientific publications. Proc. Natl. Acad. Sci. USA 2012, 109, 17028–17033. [Google Scholar] [CrossRef]
  83. Dunleavy, D.J. Progressive and degenerative journals: On the growth and appraisal of knowledge in scholarly publishing. Eur. J. Philos. Sci. 2022, 12, 61. [Google Scholar] [CrossRef] [PubMed]
  84. Horbach, S.P.J.M.; Halffman, W. The ability of different peer review procedures to flag problematic publications. Scientometrics 2019, 118, 339–373. [Google Scholar] [CrossRef] [PubMed]
  85. Ioannidis, J.P.A. Why science is not necessarily self-correcting. Perspect. Psychol. Sci. 2012, 7, 645–654. [Google Scholar] [CrossRef]
  86. Yaffe, J. Fake news, information literacy, and scholarly communication in social work. J. Soc. Work Educ. 2017, 53, 369–371. [Google Scholar] [CrossRef]
  87. Ferguson, A.; Clark, J.J. The status of research ethics in social work. J. Evid.-Inf. Soc. Work 2018, 15, 351–370. [Google Scholar] [CrossRef]
  88. Committee on Publication Ethics. Withdrawal of an Article. Available online: https://publicationethics.org/guidance/case/withdrawal-article (accessed on 24 July 2025).
  89. Elsevier. Article Correction, Retraction and Removal Policy. Available online: https://web.archive.org/web/20250403020005/https://www.elsevier.com/about/policies-and-standards/article-withdrawal#4-article-withdrawal (accessed on 24 July 2025).
  90. Morgenshtern, M.; Schmid, J. The value of sourcing social work journals for critical discourse analysis. Qual. Soc. Work 2024, 23, 76–90. [Google Scholar] [CrossRef]
  91. Dunleavy, D.J. It’s time to terminate social work’s relationship with the impact factor. Soc. Work 2022, 67, 296–297. [Google Scholar] [CrossRef] [PubMed]
  92. Brembs, B.; Button, K.; Munafò, M. Deep impact: Unintended consequences of journal rank. Front. Hum. Neurosci. 2013, 7, 291. [Google Scholar] [CrossRef]
  93. Aubele, J.; Perruso, C. Toward a sustainable method for core journal lists: A test case using journals in social work. Ser. Libr. 2017, 73, 89–106. [Google Scholar] [CrossRef]
  94. Hodge, D.R.; Lacasse, J.R. Ranking disciplinary journals with the Google Scholar h-index: A new tool for constructing cases for tenure, promotion, and other professional decisions. J. Soc. Work Educ. 2011, 47, 579–596. [Google Scholar] [CrossRef]
  95. Hodge, D.R.; Yu, M.; Kim, A. Assessing the quality and prestige of disciplinary social work journals: A national study of faculty perceptions. Res. Soc. Work Pract. 2020, 30, 451–459. [Google Scholar] [CrossRef]
  96. Oxford University Press. Changes to Published Articles. Available online: https://academic.oup.com/pages/authoring/journals/production_and_publication/changing-published-articles (accessed on 24 July 2025).
  97. Sage Publishing. Sage Corrections and Retractions Policy. Available online: https://us.sagepub.com/en-us/nam/sage-corrections-and-retractions-policy (accessed on 24 July 2025).
  98. Taylor & Francis. Corrections, Retractions and Updates After Publication: Taylor & Francis Journal Article Correction and Retraction Policy. Available online: https://authorservices.taylorandfrancis.com/publishing-your-research/after-publication/corrections-to-published-articles/ (accessed on 24 July 2025).
  99. University of Chicago Press. Statement of Publication Ethics. Available online: https://www.journals.uchicago.edu/publication-ethics-statement (accessed on 24 July 2025).
  100. Kendall, K.A. Correspondence. Soc. Serv. Rev. 1957, 31, 337–338. [Google Scholar] [CrossRef]
  101. Brown IV, J. Correspondence. Soc. Serv. Rev. 1957, 31, 337. [Google Scholar] [CrossRef]
  102. Garlington, S.B.; Collins, M.E.; Bossaller, M.R.D. WITHDRAWN—Administrative duplicate publication: An ethical foundation for social good: Virtue theory and solidarity. Res. Soc. Work Pract. 2019. [Google Scholar] [CrossRef]
  103. Haddaway, N.; Gusenbauer, M. A broken system—Why literature searching needs a FAIR revolution. LSE Impact Blog. 3 February 2020. Available online: https://blogs.lse.ac.uk/impactofsocialsciences/2020/02/03/a-broken-system-why-literature-searching-needs-a-fair-revolution/ (accessed on 24 July 2025).
  104. Mogro-Wilson, C. RETRACTED ARTICLE: Using fatherhood as a mechanism of change in substance abuse treatment for Hispanic men. J. Ethn. Cult. Divers. Soc. Work 2021, 30, 299–308. [Google Scholar] [CrossRef]
  105. Dunleavy, D.J. Appraising Contemporary Social Work Research: Meta-Research on Statistical Reporting, Statistical Power, and Evidential Value. Ph.D. Dissertation, Florida State University, Tallahassee, FL, USA, 2020. [Google Scholar] [CrossRef]
  106. Glasziou, P.; Altman, D.G.; Bossuyt, P.; Boutron, I.; Clarke, M.; Julious, S.; Michie, S.; Moher, D.; Wager, E. Reducing waste from incomplete or unusable reports of biomedical research. Lancet 2014, 383, 267–276. [Google Scholar] [CrossRef]
  107. Campbell, R.; Javorka, M.; Engleton, J.; Fishwick, K.; Gregory, K.; Goodman-Williams, R. Open-science guidance for qualitative research: An empirically validated approach for de-identifying sensitive narrative data. Adv. Methods Pract. Psychol. Sci. 2023, 6, 25152459231205832. [Google Scholar] [CrossRef]
  108. Chauvette, A.; Schick-Makaroff, K.; Molzahn, A.E. Open data in qualitative research. Int. J. Qual. Methods 2019, 18, 1609406918823863. [Google Scholar] [CrossRef]
109. DuBois, J.M.; Strait, M.; Walsh, H. Is it time to share qualitative research data? Qual. Psychol. 2018, 5, 380–393. [Google Scholar] [CrossRef]
  110. DuBois, J.M.; Mozersky, J.; Parsons, M.; Walsh, H.A.; Friedrich, A.; Pienta, A. Exchanging words: Engaging the challenges of sharing qualitative research data. Proc. Natl. Acad. Sci. USA 2023, 120, e2206981120. [Google Scholar] [CrossRef] [PubMed]
  111. NIH Office of Science Policy. Final NIH Policy for Data Management and Sharing [NOT-OD-21-013]; National Institutes of Health: Bethesda, MD, USA, 2020. Available online: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-21-013.html (accessed on 24 July 2025).
112. Chawla, D.S. How can institutions and funders help to police questionable research practices? Nature. 7 September 2021. Available online: https://www.nature.com/nature-index/news/how-can-institutions-and-funders-help-police-questionable-research-practices (accessed on 24 July 2025).
  113. Cochran, A. Putting research integrity checks where they belong. The Scholarly Kitchen. 28 March 2024. Available online: https://scholarlykitchen.sspnet.org/2024/03/28/putting-research-integrity-checks-where-they-belong/ (accessed on 24 July 2025).
  114. Besançon, L.; Samuel, A.; Sana, T.; Rebeaud, M.E.; Guihur, A.; Robinson-Rechavi, M.; Le Berre, N.; Mulot, M.; Meyerowitz-Katz, M.; Maisonneuve, H.; et al. Open Letter: Scientists Stand Up to Protect Academic Whistleblowers and Post-Publication Peer Review. OSF Preprints. 2021. Available online: https://osf.io/preprints/osf/2awsv_v1 (accessed on 20 August 2024).
  115. Heathers, J.A.J. The right to be wrong isn’t the freedom from consequences. james.claims. 27 September 2023. Available online: https://jamesclaims.substack.com/p/the-right-to-be-wrong-isnt-the-freedom (accessed on 24 July 2025).
  116. Heathers, J.A.J. An Introduction to Forensic Metascience. 2025. Available online: https://zenodo.org/records/14871843 (accessed on 20 August 2024).
  117. Nuijten, M.B.; Wicherts, J.M. Implementing statcheck during peer review is related to a steep decline in statistical-reporting inconsistencies. Adv. Methods Pract. Psychol. Sci. 2024, 7, 25152459241258945. [Google Scholar] [CrossRef]
118. Francis, G.; Thunell, E. Data detective methods for revealing questionable research practices. In Avoiding Questionable Research Practices in Applied Psychology; O’Donohue, W., Masuda, A., Lilienfeld, S., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 123–145. [Google Scholar] [CrossRef]
  119. Heathers, J.A.J. Error detection tools. In Proceedings of the Computational Research Integrity Conference, Virtual, 24 March 2021; Available online: https://youtu.be/gZ4VBYTpDGs?si=Wlv_xebXfKPEF1VV&t=567 (accessed on 24 July 2025).
120. Lincoln, Y.S.; Lynham, S.A.; Guba, E.G. Paradigmatic controversies, contradictions, and emerging confluences, revisited. In The SAGE Handbook of Qualitative Research, 3rd ed.; Denzin, N.K., Lincoln, Y.S., Eds.; Sage Publishing: Thousand Oaks, CA, USA, 2005; pp. 191–215. [Google Scholar]
  121. Haven, T.L.; Errington, T.M.; Gleditsch, K.S.; van Grootel, L.; Jacobs, A.M.; Kern, F.G.; Piñeiro, R.; Rosenblatt, F.; Mokkink, L.B. Preregistering qualitative research: A Delphi study. Int. J. Qual. Methods 2020, 19. [Google Scholar] [CrossRef]
122. Haven, T.L.; van Grootel, L. Preregistering qualitative research. Account. Res. 2019, 26, 229–244. [Google Scholar] [CrossRef]
  123. Barbour, V.; Bloom, T.; Lin, J.; Moylan, E. Amending published articles: Time to rethink retractions and corrections? bioRxiv 2017. [Google Scholar] [CrossRef]
  124. Altmetric. The Changing Landscape of Journal Performance Measurement. Available online: https://www.altmetric.com/whitepapers/the-changing-landscape-of-journal-performance-measurement/ (accessed on 24 July 2025).
Table 1. Characteristics of ten leading social work journals.

| Journal Title (Year Established) | Publisher | Editor-in-Chief ¹ | Website |
|---|---|---|---|
| British Journal of Social Work (1971) | Oxford University Press | Maglajlic & Ioakimidis | https://academic.oup.com/bjsw/ (accessed on 20 August 2024) |
| Families in Society (1920) | Sage Publishing | Mogro-Wilson | https://journals.sagepub.com/home/fis |
| Journal of Social Work (2001) | Sage Publishing | Shardlow | https://journals.sagepub.com/home/jsw (accessed on 20 August 2024) |
| Journal of Social Work Education (1965) | Taylor & Francis | Parrish | https://www.tandfonline.com/journals/uswe20 (accessed on 20 August 2024) |
| Journal of Sociology & Social Welfare (1973) | WMU/CHHS/SSW | McCormick | https://scholarworks.wmich.edu/jssw/ (accessed on 20 August 2024) |
| Journal of the Society for Social Work and Research (2010) | University of Chicago Press | Herrenkohl | https://www.journals.uchicago.edu/toc/jsswr/current (accessed on 20 August 2024) |
| Research on Social Work Practice (1991) | Sage Publishing | Thyer | https://journals.sagepub.com/home/rsw (accessed on 20 August 2024) |
| Social Service Review (1927) | University of Chicago Press | Mosley | https://www.journals.uchicago.edu/toc/ssr/current (accessed on 20 August 2024) |
| Social Work (1956) | Oxford University Press | Scheyett | https://academic.oup.com/sw (accessed on 20 August 2024) |
| Social Work Research (1994); formerly Social Work Research & Abstracts (1977–1993) | Oxford University Press | Hawkins | https://academic.oup.com/swr and https://academic.oup.com/swra (accessed on 20 August 2024) |

¹ Editors-in-Chief are as of January 2023.
Table 2. Results of each search strategy, by journal title.

| Journal Title (Years Covered) | Website Results (Search Term) | Google Scholar Results | Retraction Watch Results (by Title) |
|---|---|---|---|
| British Journal of Social Work (1971–2022) | 20 results (retracted); 20 results (retraction) | 13 results | 0 results |
| Families in Society (1920–2022) | 23 results (retracted); 23 results (retraction) | 6 results | 0 results |
| Journal of Social Work (2001–2022) | 4 results (retracted); 4 results (retraction) | 2 results | 0 results |
| Journal of Social Work Education (1965–2022) | 8 results (retracted); 8 results (retraction) | 5 results | 0 results |
| Journal of Sociology & Social Welfare (1973–2022) | 10 results (retracted); 10 results (retraction) | 1 result | 0 results |
| Journal of the Society for Social Work and Research (2010–2022) | 2 results (retracted); 2 results (retraction) | 1 result | 0 results |
| Research on Social Work Practice (1991–2022) | 16 results (retracted); 16 results (retraction) | 9 results | 1 result |
| Social Service Review (1927–2022) | 21 results (retracted); 21 results (retraction) | 11 results | 0 results |
| Social Work (1956–2022) | 16 results (retracted); 16 results (retraction) | 8 results | 0 results |
| Social Work Research (1994–2022) | 3 results (retracted); 3 results (retraction) | 1 result | 0 results |
| Formerly Social Work Research & Abstracts (1977–1993) | 13 results (retracted); 13 results (retraction) | 2 results | 0 results |
| Total search results | 136 (retracted); 136 (retraction) | 59 results | 1 result |
| Total retractions identified | 0 retractions | 0 retractions | 0 retractions |
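The totals row in Table 2 is a simple column-wise sum of the per-journal counts. As an illustrative arithmetic check only (not part of the study's search protocol), the sketch below reproduces those sums in Python; the abbreviated journal names are shorthand introduced here, not labels from the study:

```python
# Per-journal counts transcribed from Table 2, as
# (website "retracted", website "retraction", Google Scholar, Retraction Watch).
counts = {
    "Br. J. Soc. Work":       (20, 20, 13, 0),
    "Fam. Soc.":              (23, 23, 6, 0),
    "J. Soc. Work":           (4, 4, 2, 0),
    "J. Soc. Work Educ.":     (8, 8, 5, 0),
    "J. Sociol. Soc. Welf.":  (10, 10, 1, 0),
    "J. Soc. Soc. Work Res.": (2, 2, 1, 0),
    "Res. Soc. Work Pract.":  (16, 16, 9, 1),
    "Soc. Serv. Rev.":        (21, 21, 11, 0),
    "Soc. Work":              (16, 16, 8, 0),
    "Soc. Work Res.":         (3, 3, 1, 0),
    "Soc. Work Res. Abstr.":  (13, 13, 2, 0),
}

# Transpose the value tuples and sum each column to rebuild the totals row.
totals = [sum(column) for column in zip(*counts.values())]
assert totals == [136, 136, 59, 1]
print(totals)  # [136, 136, 59, 1], matching the "Total search results" row
```

A failing assertion here would indicate a transcription error in the table, not a change in the underlying search results.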