Article

Combating Fraud in Medical Research: Research Validation Standards Utilized by the Journal of Surgical Radiology

Bhavin Patel, Anahita Dua, Tom Koenigsberger and Sapan S. Desai
1
Medical College of Wisconsin, Milwaukee, WI 53226, USA
2
Department of Cardiothoracic and Vascular Surgery, University of Texas at Houston Medical School, Houston, TX 77030, USA
3
Journal of Surgical Radiology, Houston, TX 77004, USA
4
Department of Surgery, Duke University, Durham, NC 27710, USA
*
Author to whom correspondence should be addressed.
Publications 2013, 1(3), 140-145; https://doi.org/10.3390/publications1030140
Submission received: 24 October 2013 / Revised: 5 November 2013 / Accepted: 6 November 2013 / Published: 15 November 2013
(This article belongs to the Special Issue Misconduct in Scientific Publishing)

Abstract

Fraud in medical publishing has risen to the national spotlight as manufactured and suspect data have led to retractions of papers in prominent journals. Moral turpitude in medical research has led to the loss of National Institutes of Health (NIH) grants, directly affected patient care, and created severe legal ramifications for some authors. While there are multiple checks and balances in medical research to prevent fraud, final enforcement lies with journal editors and publishers, who have an ethical and legal obligation to examine carefully and critically the research published in their journals. Failure to follow the highest standards in medical publishing can create legal liability and destroy a journal’s integrity. More important, however, is preserving the trust between the medical profession, its colleagues, and the public it serves. This article discusses techniques and tools available to editors and publishers that can help curtail fraud in medical publishing.

1. Introduction

Reports of fraud in the medical literature have increased sharply over the last two decades, accompanied by a tenfold increase in the article retraction rate, though it is unclear whether this rise reflects increased vigilance or increased incidence [1,2]. More alarming is how little is known about the true incidence of fraud and misrepresentation in medical research. The combination of financial, legal, professional, and quality-of-care repercussions from recent negative publicity in the popular press has led many major publications to reexamine their peer-review and publication standards. This article discusses the standards utilized by the Journal of Surgical Radiology to combat fraud in the medical literature.

2. Current Standards of Peer-Review

Over the past twenty years, academic publications and their founding medical societies have established increasingly rigorous guidelines to safeguard the integrity of scientific research. While actual implementation varies considerably, journals that subscribe to the International Committee of Medical Journal Editors (ICMJE) standards require authors to sign a uniform disclosure form created by the committee that confirms the role of each contributing author and verifies that the paper has not been published elsewhere and that the work is legitimate to the best of the authors’ knowledge. Peer review is then completed by two or three independent reviewers. The system relies on trust: trust that all authors are aware of the submission and answer the disclosure form honestly, and trust that the peer reviewers can distinguish meaningful scientific research from junk science and, more importantly, identify papers that may be misleading or even falsified. While peer review may be an effective way to judge an article’s scientific relevance, its effectiveness at detecting fraud is doubtful, particularly since most peer reviewers never see the raw data or review high-resolution images that would reveal image manipulation.
The way physician-scientists are hired, the way their proposals are vetted by university research committees and grant review groups, and the collaborative nature of research all work together as a series of checks and balances to maintain the integrity of scientific research. Journal editors and peer reviewers serve as the last step in this process; they are gatekeepers who determine whether meaningful and presumably legitimate research is published. However, recent events have clearly demonstrated that current peer-review standards are insufficient for identifying fraudulent papers. As the following examples illustrate, a standardized disclosure form, external peer review, and the checks and balances employed by universities are insufficient to identify fraud that is committed intentionally.
Marc Hauser, a Harvard psychologist and director of the Harvard cognitive evolution laboratory, was found to have fabricated and falsified data in an ongoing trial. The Office of Research Integrity (ORI) found that Hauser had committed misconduct in work supported by four NIH grants, finding him guilty on six counts of research misconduct [3,4,5,6]. The events at Harvard could not have been detected by peer review alone unless a reviewer was familiar with the raw data and how they were analyzed. Hauser was reported by his own research assistants after they found conflicting data and he refused to seek a third opinion, insisting on using his version of the data in the final manuscript [3].
Another high-impact example involves papers later discovered to be based on fabricated data, published by Anil Potti, a medical oncologist at Duke University studying individualized cancer treatment for severely ill patients. His research appeared groundbreaking, and other institutions therefore attempted to replicate his results for their own patients. During the verification and replication process, researchers at MD Anderson found that the data were filled with errors. While Potti acknowledged minor errors, he refused to stop the trials, and Duke continued to enroll patients. Only after the National Cancer Institute (NCI) also voiced concern about the data did Duke suspend the trials in 2009 and commission an external review committee to check Potti’s data. That committee had limited access to the data and erroneously concluded that the research was satisfactory. The trial resumed enrolling patients and continued to do so until Joseph Nevins, Potti’s supervisor, reviewed the original data himself and found major discrepancies between the published results and the originals. He concluded the data had been manipulated and fabricated [7]. A root cause analysis found that the internal review committees at Duke were ill-prepared to deal with the complexity of the data, and that limited access to the original data hampered their ability to conduct an independent audit. Potti resigned from Duke University in 2010 and had 18 papers retracted or corrected [8].
At Duke, the genetic data collected by Potti were manipulated and the true raw data were covered up for years; because even the external review committee was denied access to the actual raw data, it concluded the data were correct and error-free. The peer-review process was blind to the misrepresentation that occurred during the initial data collection of these trials [7,8]. These cases illustrate that a system that has been in place for decades can be circumvented relatively effortlessly and evade detection for many years. The Journal of Surgical Radiology has implemented various tools to minimize its exposure to this type of fraud.

3. Publishing Standards for the Journal of Surgical Radiology

Strengthening the peer-review and publication process with a series of independent auditing tools, data verification algorithms, and direct contact with all stakeholders can help combat fraud in medical publishing. Journals that achieve a lower rate of retractions improve their trustworthiness and gain a competitive advantage in a crowded industry.

3.1. Law of Natural Numbers

The most serious form of fraud in medical publishing is manufactured data used to support high-impact conclusions. The Journal of Surgical Radiology requests primary, de-identified data from authors and completes its own statistical analysis of the information. Co-authors are asked to review these data and sign off on their authenticity, on the premise that fraud is less likely to be perpetuated among multiple stakeholders than by a single author. The data then undergo analysis using verification algorithms popular in the financial industry that help determine whether the numbers have been manufactured. One of these algorithms applies Newcomb’s Law (also known as Benford’s Law), the finding that distributions found in nature are asymmetrical: the leading digits of naturally occurring numbers do not occur with equal frequency [9,10]. Lower digits are more common than larger ones, reflecting the principle that numbers occurring in nature largely arise as ratios. In medical fraud, random number generators may be used to quickly produce large volumes of data. Because such generators produce digits with equal probability, the resulting values do not follow Newcomb’s Law even though they are meant to represent data collected from natural processes. An unexpectedly high preponderance of “non-natural” numbers may prompt additional investigation.
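As a minimal sketch, assuming the submitted de-identified data can be flattened into a list of nonzero numeric values, a first-digit screen against Benford’s expected frequencies could be run with a standard statistics package as follows; the chi-square goodness-of-fit test and the function name benford_screen are illustrative choices, not the journal’s documented procedure.

```python
# First-digit (Newcomb/Benford) screen: compare the observed leading-digit
# distribution of a dataset against Benford's expected frequencies.
from collections import Counter
from math import log10
from scipy.stats import chisquare

def benford_screen(values, alpha=0.05):
    """Flag a dataset whose leading-digit distribution departs from Benford's law."""
    # Extract the leading significant digit (1-9) of each nonzero value.
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    counts = Counter(digits)
    observed = [counts.get(d, 0) for d in range(1, 10)]
    n = sum(observed)
    # Benford's law: P(d) = log10(1 + 1/d) for d = 1..9, so low digits dominate.
    expected = [n * log10(1 + 1 / d) for d in range(1, 10)]
    stat, p_value = chisquare(observed, f_exp=expected)
    # A small p-value suggests "non-natural" digits and warrants closer review.
    return p_value, p_value < alpha
```

Values drawn from a uniform random number generator will typically fail such a screen, while many naturally occurring measurements will pass it.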
Additional techniques include the use of clustering and classification to organize the data into a series of groups; unexpected patterns or associations within or among the groups may indicate possible fraud. The data can also be compared against a known series; for example, the proportions reported in smaller studies can be correlated with national databases to ensure they fall within known standard deviations. Major anomalies may indicate suspect data, or that the findings of the study are not valid due to poor study design.
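As one hedged illustration of the known-series comparison, a two-sided z-test can measure how far a study’s reported proportion falls from a national reference rate; the function name and the registry rate used below are hypothetical values for illustration, not figures from the paper.

```python
# Compare a study's reported proportion against a national reference rate.
from math import sqrt
from scipy.stats import norm

def proportion_vs_reference(events, n, reference_rate):
    """Return the z-score and two-sided p-value of a study proportion vs. a reference rate."""
    p_hat = events / n
    se = sqrt(reference_rate * (1 - reference_rate) / n)  # SE under the reference rate
    z = (p_hat - reference_rate) / se
    return z, 2 * norm.sf(abs(z))

# e.g., 40 complications in 200 cases (20%) against a hypothetical national
# rate of 12% gives z ≈ 3.5, roughly three standard deviations out: a major
# anomaly that would prompt a closer look at the data or the study design.
z, p = proportion_vs_reference(40, 200, 0.12)
```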
These algorithms can be implemented by obtaining the raw, de-identified data as part of every submission; the specific mathematical techniques are available in any standard statistics package. As this process is somewhat specialized and time-intensive, it may be best reserved for papers that are potentially high impact or have raised a red flag. At present, all original articles that report statistics are reviewed.

3.2. Research Validation

In response to several recent retractions in which co-authors were unaware that a paper had been submitted under their names, another method of identifying fraud is to directly contact all of a manuscript’s stakeholders. Establishing direct contact with each author and maintaining it throughout peer review helps ensure that everyone has been involved in the review and validation process, and that all authors played a role in preparing the manuscript. A directed question about whether a ghostwriter was used is now part of the submission process, ensuring that all contributors receive appropriate credit. Independent verification that key research protocols were followed and that appropriate grant support has been cited can further legitimize the manuscript.
Implementing these standards does not adversely affect author satisfaction and has only a minimal impact on peer-review throughput. Operational efficiency can be achieved through advances in virtual intelligence, which automate much of this process when properly integrated into the review workflow. Our survey of authors who have submitted their research to the Journal of Surgical Radiology indicates that 100% are satisfied with the peer-review process (N = 142), 92% agree that primary data should be included for evaluation alongside all original manuscripts, and 97% agree that including all authors in the publication process improved communication and could help prevent fraud.
The Journal of Surgical Radiology has applied the techniques discussed above, including the data analysis algorithms and a policy of direct contact with all authors, since its inception in 2010. Over 150 articles, including original research, review articles, meta-analyses, case reports, and other submission types, have been evaluated using these methods over this three-year period. Analysis of the original data has not found any cases of suspicious data lying more than three standard deviations from the expected mean. The data from human pathology appear to obey the law of natural numbers (p < 0.05, N = 12 studies with original data), and no unexpected clustering or associations have been noted among independent groups. Under our protocol, if any outliers are identified, a more precise review of the primary data and an interview with the study’s authors may be requested. While the presence of an outlier does not constitute fraud, it does indicate possible variation from procedural norms, errors in data collection, or mistakes in data analysis. We have found that the actual monetary and time cost of this analysis is minimal, taking no more than 1–2 hours per manuscript accepted for publication. Given the minimal investment of resources and the potential dividends in identifying errors in data analysis or outright fraud, we recommend that all major journals implement a similar algorithm.
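For completeness, a minimal sketch of the three-standard-deviation screen described above, assuming the submitted values form a single numeric series, might look like the following; the function name is hypothetical.

```python
# Flag observations more than k standard deviations from the sample mean.
import numpy as np

def flag_outliers(values, k=3.0):
    """Return indices of observations more than k standard deviations from the mean."""
    arr = np.asarray(values, dtype=float)
    z_scores = (arr - arr.mean()) / arr.std(ddof=1)
    return np.flatnonzero(np.abs(z_scores) > k)

# A flagged value does not constitute fraud; per the protocol above, it
# triggers a closer review of the primary data and, if needed, an interview
# with the study's authors.
```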

4. Conclusion

Fraud in medical scholarship remains a pervasive problem that leads to millions of dollars in wasted health care spending and adversely affects patient care [11]. While major medical publications have taken steps to combat fraud in medicine, a number of additional tools can preserve the integrity of the peer-review process. Improving trust and reliability yields a major competitive advantage for journals without sacrificing operational efficiency. The Journal of Surgical Radiology has successfully implemented the above-mentioned tools, along with internal auditing, data validation, and independent verification policies and a mandate of author support, greatly improving the vetting process and increasing the probability of publishing ethical and accurate research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Van Noorden, R. Science publishing: The trouble with retractions. Nature. 5 October 2011. Available online: http://www.nature.com/news/2011/111005/full/478026a.html (accessed on 1 October 2013).
  2. Steen, R.G. Retractions in the scientific literature: Is the incidence of research fraud increasing? J. Med. Ethics. 24 December 2010. Available online: http://jme.bmj.com/content/early/2010/12/23/jme.2010.040923 (accessed on 1 October 2013).
  3. Bartlett, T. Document Sheds Light on Investigation at Harvard. Available online: http://chronicle.com/article/Document-Sheds-Light-on/123988/ (accessed on 1 October 2013).
  4. Reich, E.S. Misconduct ruling is silent on intent. Nature 2012, 489, 189–190. [Google Scholar] [CrossRef]
  5. Wade, N. Scientist Under Inquiry Resigns From Harvard. Available online: http://www.nytimes.com/2011/07/21/science/21hauser.html?_r=0 (accessed on 1 October 2013).
  6. Retraction notice. Cognition 2010, 117, 106. [CrossRef]
  7. Pelley, S. Deception at Duke: Fraud in cancer care? CBS News, 12 February 2012. Available online: http://www.cbsnews.com/8301-18560_162-57376073/ (accessed on 1 October 2013).
  8. Misconduct in science: An array of errors. The Economist 2011.
  9. Benford, F. The law of anomalous numbers. Proc. Am. Philos. Soc. 1938, 78, 551–572. [Google Scholar]
  10. Newcomb, S. Note on the frequency of use of the different digits in natural numbers. Am. J. Math. 1881, 4, 39–40. [Google Scholar] [CrossRef]
  11. True Cost of Research Misconduct. 2012 iThenticate Report. Available online: http://www.ithenticate.com/research-misconduct-report/ (accessed on 1 October 2013).

