Misconduct in Scientific Publishing

A special issue of Publications (ISSN 2304-6775).

Deadline for manuscript submissions: closed (28 February 2014) | Viewed by 80540

Special Issue Editor

Prof. Dr. R. Grant Steen
1. Professor, Department of Orthopaedic Surgery (ret.), Louisiana State University School of Medicine, Baton Rouge, LA, USA
2. President, MediCC!, LLC, Medical Communications Consultants, Chapel Hill, NC 27517, USA
Interests: scientific retraction; research misconduct; medical misinformation; retraction as a proxy for misconduct; neuroepistemology; cognitive biases associated with misinformation

Special Issue Information

Dear Colleagues,

Scientists believe—or at least profess to believe—that science is an iterative approach to objective truth. Failed experiments are supposed to serve as fodder for successful experiments, so that clouded thinking can be clarified. Observations that are fundamentally true should find support, while flawed observations are supplanted by better ones. Why, then, would anyone think that scientific fraud can succeed?

Recently, there has been an alarming increase in the number of papers in the refereed scientific literature that have been retracted for fraud (e.g., data fabrication or data falsification).  Do fraudulent authors imagine that their fraud will not be exposed?  Do they see the benefits of fraud as so attractive that they are willing to risk exposure?  Or do some scientists doubt the process itself, believing themselves to be immune to the failure to replicate?

It may be true that most scientists who fabricate or falsify data believe that they know the “right” answer in advance of the data and that they will soon have the data necessary to support their favored answer.  It may therefore seem legitimate to fabricate; such scientists may believe that they are simply saving time by cutting corners.  They may even believe that they are serving science and the greater good by pushing a bold “truth” into print.  But humans are so prone to bias that the process of scientific discovery has been developed specifically to insulate scientists from the malign effects of wishful thinking.  Measurement validation, hypothesis testing, random allocation, blinding of outcome assessment, replication of results, referee and peer review, and open sharing of trade secrets are keys to establishing the truth of a scientific idea.  When those processes are subverted, scientific results become prone to retraction.

This Special Issue—Misconduct in Scientific Publishing—will explore the surge in scientific retractions.  Are retractions a valid proxy for research misconduct?  Does the increase in retractions mean that there has been an increase in misconduct?  How can we measure misconduct objectively?  Are surveys that characterize scientific behavior valid or do they misrepresent the prevalence of misconduct?

I look forward to your contributions and your insight on this important topic.

Prof. Dr. R. Grant Steen,
Guest Editor

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, use the online submission form. Manuscripts can be submitted until the deadline. The body of the paper (excluding the Abstract, References, Tables, and Figures) should not exceed 3,000 words. References should number between 20 and 50, with an emphasis on recent literature published within the past five years. Papers will be published continuously (as soon as they are accepted) and will be listed together on the special issue website. Research articles, review papers, brief editorials, and short communications are invited.

Submitted manuscripts should not have been published previously nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors is available on the Instructions for Authors page, together with other relevant information for manuscript submission. Publications is an international peer-reviewed Open Access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. For the first couple of issues, the Article Processing Charge (APC) will be waived for well-prepared manuscripts. A fee of 250 CHF (Swiss francs) may be charged for accepted articles that require extensive English correction and/or additional formatting.

Keywords

  • research ethics
  • scientific retraction
  • neuroepistemology
  • research misconduct
  • fabrication
  • falsification
  • data plagiarism

Published Papers (8 papers)


Research


Article
Failure to Replicate: A Sign of Scientific Misconduct?
by Helene Z. Hill and Joel H. Pitt
Publications 2014, 2(3), 71-82; https://doi.org/10.3390/publications2030071 - 01 Sep 2014
Cited by 80 | Viewed by 7552
Abstract
Repeated failures to replicate reported experimental results could indicate scientific misconduct or simply result from unintended error. Experiments performed by one individual involving tritiated thymidine, published in two papers in Radiation Research, showed exponential killing of V79 Chinese hamster cells. Two other members of the same laboratory were unable to replicate the published results in 15 subsequent attempts to do so, finding, instead, at least 100-fold less killing and biphasic survival curves. These replication failures (which could have been anticipated based on earlier radiobiological literature) raise questions regarding the reliability of the two reports. Two unusual numerical patterns appear in the questioned individual’s data, but do not appear in control data sets from the two other laboratory members, even though the two key protocols followed by all three were identical or nearly so. This report emphasizes the importance of: (1) access to raw data that form the background of reports and grant applications; (2) knowledge of the literature in the field; and (3) the application of statistical methods to detect anomalous numerical behaviors in raw data. Furthermore, journals and granting agencies should require that authors report failures to reproduce their published results.
(This article belongs to the Special Issue Misconduct in Scientific Publishing)
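
The abstract above points to the value of statistical methods for detecting anomalous numerical behavior in raw data. The paper’s own tests are not reproduced here; the sketch below is only a minimal illustration of that general idea, assuming a chi-square test for uniformity of terminal digits applied to synthetic counts (the data and function name are invented for the example, not taken from the study).

```python
# Illustrative sketch, not the authors' method: screen recorded counts by
# asking whether their terminal digits depart from the roughly uniform
# distribution expected of honestly measured values. Data are synthetic.
from collections import Counter
from scipy.stats import chisquare

def terminal_digit_test(values):
    """Chi-square test of uniformity for the last digit of each value."""
    last_digits = [abs(int(v)) % 10 for v in values]
    counts = Counter(last_digits)
    observed = [counts.get(d, 0) for d in range(10)]
    expected = [len(values) / 10.0] * 10
    return chisquare(observed, f_exp=expected)

# Synthetic cell-count data; a very small p-value would flag terminal digits
# that are unexpectedly non-uniform and worth a closer look, not proof of fraud.
synthetic_counts = [132, 147, 150, 163, 178, 181, 190, 204, 212, 225,
                    233, 241, 250, 268, 270, 283, 291, 305, 310, 322]
stat, p_value = terminal_digit_test(synthetic_counts)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
```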

Article
Measuring Scientific Misconduct—Lessons from Criminology
by Felicitas Hesselmann, Verena Wienefoet and Martin Reinhart
Publications 2014, 2(3), 61-70; https://doi.org/10.3390/publications2030061 - 03 Jul 2014
Cited by 13 | Viewed by 9765
Abstract
This article draws on research traditions and insights from Criminology to elaborate on the problems associated with current practices of measuring scientific misconduct. Analyses of the number of retracted articles are shown to suffer from the fact that the distinct processes of misconduct, detection, punishment, and publication of a retraction notice, all contribute to the number of retractions and, hence, will result in biased estimates. Self-report measures, as well as analyses of retractions, are additionally affected by the absence of a consistent definition of misconduct. This problem of definition is addressed further as stemming from a lack of generally valid definitions both on the level of measuring misconduct and on the level of scientific practice itself. Because science is an innovative and ever-changing endeavor, the meaning of misbehavior is permanently shifting and frequently readdressed and renegotiated within the scientific community. Quantitative approaches (i.e., statistics) alone, thus, are hardly able to accurately portray this dynamic phenomenon. It is argued that more research on the different processes and definitions associated with misconduct and its detection and sanctions is needed. The existing quantitative approaches need to be supported by qualitative research better suited to address and uncover processes of negotiation and definition.
(This article belongs to the Special Issue Misconduct in Scientific Publishing)
Article
The Demographics of Deception: What Motivates Authors Who Engage in Misconduct?
by R. Grant Steen
Publications 2014, 2(2), 44-50; https://doi.org/10.3390/publications2020044 - 28 Mar 2014
Cited by 143 | Viewed by 7323
Abstract
We hypothesized that scientific misconduct (data fabrication or falsification) is goal-directed behavior. This hypothesis predicts that papers retracted for misconduct: are targeted to journals with a high impact factor (IF); are written by authors with additional papers withdrawn for misconduct; diffuse responsibility across many (perhaps innocent) co-authors; and are retracted more slowly than papers retracted for other infractions. These hypotheses were initially tested and confirmed in a database of 788 papers; here we reevaluate these hypotheses in a larger database of 2,047 English-language papers. Journal IF was higher for papers retracted for misconduct (p < 0.0001). Roughly 57% of papers retracted for misconduct were written by a first author with other retracted papers; 21% of erroneous papers were written by authors with >1 retraction (p < 0.0001). Papers flawed by misconduct diffuse responsibility across more authors (p < 0.0001) and are withdrawn more slowly (p < 0.0001) than papers retracted for other reasons. Papers retracted for unknown reasons are unlike papers retracted for misconduct: they are generally published in journals with low IF; by authors with no other retractions; have fewer authors listed; and are retracted quickly. Papers retracted for unknown reasons appear not to represent a deliberate effort to deceive.
(This article belongs to the Special Issue Misconduct in Scientific Publishing)
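
The contrast reported above (roughly 57% of misconduct-retracted papers versus 21% of error-retracted papers having a first author with other retractions) is the kind of comparison normally assessed with a test on a 2x2 contingency table. The sketch below is a hedged illustration only: the counts are synthetic stand-ins built from the reported percentages, not the study’s data, and the choice of test is an assumption.

```python
# Illustrative sketch with synthetic counts, not the study's data: compare the
# proportion of first authors with other retractions between papers retracted
# for misconduct and papers retracted for error.
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = retraction reason (misconduct, error);
# columns = first author has other retractions (yes, no).
table = [[570, 430],   # ~57% of 1,000 hypothetical misconduct retractions
         [210, 790]]   # ~21% of 1,000 hypothetical error retractions
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```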

Article
A Case-Control Comparison of Retracted and Non-Retracted Clinical Trials: Can Retraction Be Predicted?
by R. Grant Steen and Robert M. Hamer
Publications 2014, 2(1), 27-37; https://doi.org/10.3390/publications2010027 - 27 Jan 2014
Cited by 127 | Viewed by 7471
Abstract
Does scientific misconduct severe enough to result in retraction disclose itself with warning signs? We test a hypothesis that variables in the results section of randomized clinical trials (RCTs) are associated with retraction, even without access to raw data. We evaluated all English-language RCTs retracted from the PubMed database prior to 2011. Two controls were selected for each case, matching publication journal, volume, issue, and page as closely as possible. Number of authors, subjects enrolled, patients at risk, and patients treated were tallied in cases and controls. Among case RCTs, 17.5% had ≤2 authors, while 6.3% of control RCTs had ≤2 authors. Logistic regression shows that having few authors is associated with retraction (p < 0.03), although the number of subjects enrolled, patients at risk, or treated patients is not. However, none of the variables singly, nor all of the variables combined, can reliably predict retraction, perhaps because retraction is such a rare event. Exploratory analysis suggests that retraction rate varies by medical field (p < 0.001). Although retraction cannot be predicted on the basis of the variables evaluated, concern is warranted when there are few authors, enrolled subjects, patients at risk, or treated patients. Ironically, these features urge caution in evaluating any RCT, since they identify studies that are statistically weaker.
(This article belongs to the Special Issue Misconduct in Scientific Publishing)
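
The abstract describes a logistic regression relating features of a trial report, such as the number of authors, to the odds of retraction. The sketch below is a minimal illustration of that kind of model on synthetic case-control data, assuming statsmodels and invented variable names; it is not the paper’s dataset or exact analysis.

```python
# Illustrative sketch on synthetic data, not the paper's dataset: model
# retraction status as a function of author count and enrolment, in the
# spirit of a case-control logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
authors = rng.integers(1, 15, size=n)        # number of listed authors
enrolled = rng.integers(20, 500, size=n)     # subjects enrolled
# Synthetic outcome: fewer authors -> slightly higher odds of retraction.
logit = -1.5 - 0.15 * (authors - authors.mean())
retracted = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([authors, enrolled]))
model = sm.Logit(retracted, X).fit(disp=False)
print(model.summary(xname=["const", "authors", "enrolled"]))
```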

Article
A Novel Rubric for Rating the Quality of Retraction Notices
by Emma Bilbrey, Natalie O'Dell and Jonathan Creamer
Publications 2014, 2(1), 14-26; https://doi.org/10.3390/publications2010014 - 24 Jan 2014
Cited by 79 | Viewed by 9591
Abstract
When a scientific article is found to be either fraudulent or erroneous, one course of action available to both the authors and the publisher is to retract said article. Unfortunately, not all retraction notices properly inform the reader of the problems with a retracted article. This study developed a novel rubric for rating and standardizing the quality of retraction notices, and used it to assess the retraction notices of 171 retracted articles from 15 journals. Results suggest the rubric to be a robust, if preliminary, tool. Analysis of the retraction notices suggests that their quality has not improved over the last 50 years, that it varies both between and within journals, and that it is dependent on the field of science, the author of the retraction notice, and the reason for retraction. These results indicate a lack of uniformity in the retraction policies of individual journals and throughout the scientific literature. The rubric presented in this study could be adopted by journals to help standardize the writing of retraction notices.
(This article belongs to the Special Issue Misconduct in Scientific Publishing)

Article
Combating Fraud in Medical Research: Research Validation Standards Utilized by the Journal of Surgical Radiology
by Bhavin Patel, Anahita Dua, Tom Koenigsberger and Sapan S. Desai
Publications 2013, 1(3), 140-145; https://doi.org/10.3390/publications1030140 - 15 Nov 2013
Cited by 12 | Viewed by 11918
Abstract
Fraud in medical publishing has risen to the national spotlight as manufactured and suspect data have led to retractions of papers in prominent journals. Moral turpitude in medical research has led to the loss of National Institutes of Health (NIH) grants, has directly affected patient care, and has led to severe legal ramifications for some authors. While there are multiple checks and balances in medical research to prevent fraud, the final enforcement lies with journal editors and publishers, who have an ethical and legal obligation to make careful and critical examinations of the medical research published in their journals. Failure to follow the highest standards in medical publishing can lead to legal liability and destroy a journal’s integrity. More significant, however, is the protection of the medical profession’s trust with colleagues and the public it serves. This article discusses various techniques and tools available to editors and publishers that can help curtail fraud in medical publishing.
(This article belongs to the Special Issue Misconduct in Scientific Publishing)

Review


Review
Editorial Misconduct—Definition, Cases, and Causes
by Matan Shelomi
Publications 2014, 2(2), 51-60; https://doi.org/10.3390/publications2020051 - 04 Apr 2014
Cited by 15 | Viewed by 10872
Abstract
Though scientific misconduct perpetrated by authors has received much press, little attention has been given to the role of journal editors. This article discusses cases and types of “editorial misconduct”, in which the action or inaction of editorial agents ended in publication of fraudulent work and/or poor or failed retractions of such works, all of which ultimately harm scientific integrity and the integrity of the journals involved. Rare but existent, editorial misconduct ranges in severity and includes deliberate omission or ignorance of peer review, insufficient guidelines for authors, weak or disingenuous retraction notices, and refusal to retract. The factors responsible for editorial misconduct and the options to address these are discussed.
(This article belongs to the Special Issue Misconduct in Scientific Publishing)
Review
Research Misconduct—Definitions, Manifestations and Extent
by Lutz Bornmann
Publications 2013, 1(3), 87-98; https://doi.org/10.3390/publications1030087 - 11 Oct 2013
Cited by 39 | Viewed by 13502
Abstract
In recent years, the international scientific community has been rocked by a number of serious cases of research misconduct. In one of these, Woo Suk Hwang, a Korean stem cell researcher, published two articles on research with ground-breaking results in Science in 2004 and 2005. Both articles were later revealed to be fakes. This paper provides an overview of what research misconduct is generally understood to be, its manifestations and the extent to which they are thought to exist.
(This article belongs to the Special Issue Misconduct in Scientific Publishing)