Abstract: Repeated failures to replicate reported experimental results could indicate scientific misconduct or simply result from unintended error. Experiments involving tritiated thymidine, performed by one individual and published in two papers in Radiation Research, showed exponential killing of V79 Chinese hamster cells. Two other members of the same laboratory were unable to replicate the published results in 15 subsequent attempts, finding instead at least 100-fold less killing and biphasic survival curves. These replication failures (which could have been anticipated from the earlier radiobiological literature) raise questions about the reliability of the two reports. Two unusual numerical patterns appear in the questioned individual’s data but not in control data sets from the two other laboratory members, even though the two key protocols followed by all three were identical or nearly so. This report emphasizes the importance of: (1) access to the raw data that underlie reports and grant applications; (2) knowledge of the literature in the field; and (3) the application of statistical methods to detect anomalous numerical behavior in raw data. Furthermore, journals and granting agencies should require that authors report failures to reproduce their published results.
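The abstract's third recommendation, applying statistical methods to detect anomalous numerical behavior in raw data, is not spelled out. One widely used check of this kind is terminal-digit analysis; the sketch below is an illustrative assumption, not the method the paper actually used, and the function name is mine:

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of terminal (last) digits.

    Genuinely measured counts with enough digits tend to have roughly
    uniform terminal digits, while fabricated numbers often do not.
    Compare the statistic against a chi-square distribution with 9
    degrees of freedom (critical value ~16.92 at alpha = 0.05).
    """
    digits = [int(str(int(v))[-1]) for v in values]
    n = len(digits)
    expected = n / 10  # uniform expectation for each digit 0-9
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# Perfectly uniform terminal digits 0-9 give a statistic of 0:
print(terminal_digit_chi2(range(10, 20)))  # → 0.0
```

A data set whose values all end in the same digit would score far above the critical value, flagging it for closer inspection rather than proving misconduct.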
Abstract: This article draws on research traditions and insights from criminology to elaborate on the problems associated with current practices of measuring scientific misconduct. Analyses of the number of retracted articles are shown to suffer from the fact that the distinct processes of misconduct, detection, punishment, and publication of a retraction notice all contribute to the number of retractions and hence yield biased estimates. Self-report measures, as well as analyses of retractions, are further affected by the absence of a consistent definition of misconduct. This definitional problem is traced to the lack of generally valid definitions both at the level of measuring misconduct and at the level of scientific practice itself. Because science is an innovative and ever-changing endeavor, the meaning of misbehavior shifts constantly and is frequently readdressed and renegotiated within the scientific community. Quantitative (i.e., statistical) approaches alone are thus hardly able to portray this dynamic phenomenon accurately. It is argued that more research is needed on the different processes and definitions associated with misconduct, its detection, and its sanctioning. The existing quantitative approaches need to be supported by qualitative research better suited to uncovering these processes of negotiation and definition.
Abstract: Though scientific misconduct perpetrated by authors has received much press, little attention has been given to the role of journal editors. This article discusses cases and types of “editorial misconduct,” in which the action or inaction of editorial agents ended in publication of fraudulent work and/or poor or failed retractions of such works, all of which ultimately harm scientific integrity and the integrity of the journals involved. Rare but existent, editorial misconduct ranges in severity and includes deliberate omission or ignorance of peer review, insufficient guidelines for authors, weak or disingenuous retraction notices, and refusal to retract. The factors responsible for editorial misconduct, and options for addressing them, are discussed.
Abstract: We hypothesized that scientific misconduct (data fabrication or falsification) is goal-directed behavior. This hypothesis predicts that papers retracted for misconduct: are targeted to journals with a high impact factor (IF); are written by authors with additional papers withdrawn for misconduct; diffuse responsibility across many (perhaps innocent) co-authors; and are retracted more slowly than papers retracted for other infractions. These hypotheses were initially tested and confirmed in a database of 788 papers; here we reevaluate them in a larger database of 2,047 English-language papers. Journal IF was higher for papers retracted for misconduct (p < 0.0001). Roughly 57% of papers retracted for misconduct were written by a first author with other retracted papers, whereas 21% of erroneous papers were written by authors with >1 retraction (p < 0.0001). Papers flawed by misconduct diffuse responsibility across more authors (p < 0.0001) and are withdrawn more slowly (p < 0.0001) than papers retracted for other reasons. Papers retracted for unknown reasons are unlike papers retracted for misconduct: they generally appear in journals with low IF, are written by authors with no other retractions, list fewer authors, and are retracted quickly. Papers retracted for unknown reasons appear not to represent a deliberate effort to deceive.
Abstract: This article examines the current difficulties faced in penetrating the world of scholarly communication technology. While there have been large strides forward in the disintermediation of digital publishing expertise (most notably by the Public Knowledge Project), a substantial number of barriers remain. This paper examines a case study in scholarly typesetting and the Journal Article Tag Suite (JATS) standard before moving to suggest three potential solutions: (1) The formation of open, non-commercial and inclusive (but structured) organisations dedicated to the group exploration and standardisation of scholarly publishing technology; (2) The collective authoring of as much technological and process documentation on scholarly publishing as is possible; (3) The modularisation of platforms and agreement on standards of interoperability. Only through such measures is it possible for researchers to reclaim the means of (re)production, for the remaining barriers are not difficult to understand, merely hard to discover.
Abstract: Does scientific misconduct severe enough to result in retraction disclose itself with warning signs? We test a hypothesis that variables in the results section of randomized clinical trials (RCTs) are associated with retraction, even without access to raw data. We evaluated all English-language RCTs retracted from the PubMed database prior to 2011. Two controls were selected for each case, matching publication journal, volume, issue, and page as closely as possible. Numbers of authors, subjects enrolled, patients at risk, and patients treated were tallied in cases and controls. Among case RCTs, 17.5% had ≤2 authors, while 6.3% of control RCTs had ≤2 authors. Logistic regression shows that having few authors is associated with retraction (p < 0.03), although the numbers of subjects enrolled, patients at risk, and patients treated are not. However, none of the variables singly, nor all of the variables combined, can reliably predict retraction, perhaps because retraction is such a rare event. Exploratory analysis suggests that retraction rate varies by medical field (p < 0.001). Although retraction cannot be predicted on the basis of the variables evaluated, concern is warranted when there are few authors, enrolled subjects, patients at risk, or treated patients. Ironically, these features urge caution in evaluating any RCT, since they identify studies that are statistically weaker.
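From the two proportions the abstract reports (17.5% of case RCTs vs 6.3% of control RCTs with ≤2 authors) one can compute a crude, unadjusted odds ratio for the few-authors association. This sketch is only an illustration of that arithmetic; the study itself used matched logistic regression, which it does not reproduce:

```python
def odds_ratio(p_case, p_control):
    """Unadjusted odds ratio for an exposure observed at proportion
    p_case among cases and p_control among controls."""
    return (p_case / (1 - p_case)) / (p_control / (1 - p_control))

# Proportions from the abstract: 17.5% of retracted (case) RCTs vs
# 6.3% of matched control RCTs had <= 2 authors.
print(round(odds_ratio(0.175, 0.063), 2))  # → 3.15
```

An odds ratio of roughly 3 is consistent with the abstract's regression finding that few authors is associated with retraction, while saying nothing about predictive reliability.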