Can We Mathematically Spot Possible Manipulation of Results in Research Manuscripts Using Benford's Law?

The reproducibility of academic research has long been a persistent issue, contradicting one of the fundamental principles of science. What is even more concerning is the increasing number of false claims found in academic manuscripts recently, casting doubt on the validity of reported results. In this paper, we utilize an adaptive version of Benford's law, a statistical phenomenon that describes the distribution of leading digits in naturally occurring datasets, to identify potential manipulation of results in research manuscripts, using only the aggregated data presented in those manuscripts. Our methodology applies the principles of Benford's law to analyses commonly employed in academic manuscripts, thus reducing the need for the raw data itself. To validate our approach, we employed 100 open-source datasets and correctly classified 79% of them using our rules. Additionally, we analyzed 100 manuscripts published in the last two years across ten prominent economic journals, with ten manuscripts randomly sampled from each journal. Our analysis flagged 3% of the manuscripts as containing potential result manipulation at a 96% confidence level. Our findings uncover disturbing inconsistencies in recent studies and offer a semi-automatic method for their detection.


Introduction
The scientific community places great emphasis on maintaining the integrity and dependability of published manuscripts [1,2,3]. The accuracy and validity of research findings are crucial for advancing knowledge and establishing evidence-based policies [4,5]. Unfortunately, the existence of fraudulent or deceptive research across different disciplines presents a substantial obstacle for scientists [6,7,8].
There are various motivations behind the presentation of misleading results in academic papers. These motivations range from seeking professional recognition by publishing in high-impact journals to securing funding based on impressive previous work, and even attempting to salvage a study that did not yield the desired outcomes [9,10,11]. Furthermore, the traditional peer review process often fails to identify deliberate attempts at result fabrication, particularly when raw data is not provided, although the absence of raw data itself is an undesirable practice [12,13]. This issue is particularly relevant in the field of economics, where data analysis and statistical properties play a crucial role, but restrictions on sharing raw data, driven by privacy concerns and the protection of business secrets, make it difficult to scrutinize the findings [14]. Consequently, scholars in this field may find it tempting to manipulate results with minimal risk involved, creating an undesirable environment for research integrity.
Ensuring the integrity and trustworthiness of research studies is essential, and this necessitates the identification and exposure of potential inconsistencies or intentional misrepresentations within research manuscripts [15]. Traditional methods of detecting anomalies or suspicious patterns often involve a manual examination, which is a time-consuming and resource-intensive process [16,17]. Furthermore, this approach demands a high level of expertise in each respective field, thereby limiting the number of individuals capable of performing such tasks. As a result, there is an increasing demand for objective and automated approaches to assist in the identification of possible falsehoods in academic research, particularly when the original data is unavailable for review.
This paper presents an innovative method leveraging Benford's law [18], a statistical phenomenon commonly utilized in forensic accounting and auditing. Our approach focuses on devising rules for examining standard statistical analyses such as the mean, standard deviation, and linear regression coefficients. Benford's law centers around the distribution of leading digits in real-world datasets, offering a mathematical framework to detect deviations from anticipated patterns. Building upon this framework, we introduce multiple tests associated with various types of statistical analyses typically reported in research manuscripts. These tests compare the expected Benford distribution against the observed distribution for each respective analysis.
In order to assess the efficacy of our methodology, a sample of 100 open-access datasets was obtained. For half of these datasets, we computed the actual statistical values, while for the remaining half, we intentionally introduced modifications to these values. The findings demonstrate that our proposed approach successfully predicted the outcomes with an accuracy rate of 79%. Subsequently, we collected data from 100 papers published in the top 10 economic journals within the last two years. Disturbingly, our method detected anomalies in 3% of the papers, attaining a confidence level of 96%.
This paper is organized as follows. Section 2 outlines our adaptation of Benford's distribution and the construction of the manuscript tests. Section 3 presents the methodology employed to collect and preprocess the data used for our experiments, as well as the analysis itself. Section 4 presents the results of our experiments. Section 5 discusses the implications of our results, followed by an analysis of the applications and limitations of the study and possible future work. Fig. 1 provides a schematic view of this study.

Benford's Law for Statistical Operators
Benford's law describes the expected distribution of leading digits in naturally occurring datasets [18]. It states that in many sets of numerical data, the leading digits are not uniformly distributed; rather, they follow a logarithmic distribution:

P(d) = log10(1 + 1/d),    (1)

where d ∈ {1, 2, . . ., 9} indicates the leading digit and P(d) ∈ [0, 1] is the probability that a number has d as its leading digit. To apply Benford's law in practice, one compares the observed distribution of leading digits in a dataset to that of Eq. (1). Deviations from the expected distribution can indicate potential anomalies, irregularities, or manipulation within the dataset. Now, let us consider a set of vectors V := {v_i}_{i=1}^{k} ∈ R^{n×k}. Formally, an irregularity test based on Benford's law returns p, the probability value obtained from the Kolmogorov-Smirnov test [19] between the leading-digit distribution obtained by fitting V and the distribution in Eq. (1). In order to perform this test on values obtained from V using an operator o, one first needs to find the Benford distribution associated with that operator. Hence, let us consider three common statistical operators: mean, standard deviation, and linear regression coefficients. One can numerically obtain these distributions using the convolution operator [20].
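As a concrete illustration of Eq. (1) and the Kolmogorov-Smirnov comparison, consider the following sketch, which computes Benford's expected digit probabilities and tests a sample of numbers against them. This is a minimal illustration under our own assumptions (a two-sample KS test against a synthetic Benford-distributed reference of 10,000 draws), not the exact implementation used in the experiments:

```python
import numpy as np
from scipy import stats

def benford_pmf():
    # Expected probability of each leading digit d = 1..9, per Eq. (1).
    d = np.arange(1, 10)
    p = np.log10(1.0 + 1.0 / d)
    return p / p.sum()  # renormalize to guard against float round-off

def leading_digits(x):
    # First significant digit of each nonzero value.
    x = np.abs(np.asarray(x, dtype=float))
    x = x[x > 0]
    exp = np.floor(np.log10(x))
    return np.clip(np.floor(x / 10.0 ** exp).astype(int), 1, 9)

def benford_ks_p(values):
    # p-value of a two-sample Kolmogorov-Smirnov test between the observed
    # leading digits and a synthetic sample drawn from Benford's law.
    obs = leading_digits(values)
    rng = np.random.default_rng(0)
    ref = rng.choice(np.arange(1, 10), size=10_000, p=benford_pmf())
    return stats.ks_2samp(obs, ref).pvalue
```

For example, a geometric series such as 1.002^n is known to follow Benford's law closely and yields a large p-value, whereas the integers 1 to 9999 have (approximately) uniformly distributed leading digits and are rejected.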
Formally, we define an anomaly test T_o(D), where T_o : R^n → [0, 1] is a function that accepts a vector D ∈ R^n and returns a score of the probability that D is anomalous with respect to the operator o. For our case, we associate each operator o with its Benford distribution, and T_o(D) is implemented to return 1 − p, where p is the probability value obtained from the Kolmogorov-Smirnov test [19] between the distribution associated with the operator o and the same distribution after fitting to V. Notably, for each operator, we generated 1000 random samples and calculated the results for each of them. We denote the worst result obtained by a ∈ [0, 1]. In order to ensure that the proposed test numerically produces results in the range [0, 1], for each outcome x, we compute and report (x − a)/(1 − a).
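The operator-level test T_o can be sketched as follows. The paper derives each operator's distribution via the convolution operator; here, as a simplifying assumption, we approximate it by Monte Carlo sampling of Benford-distributed vectors, and the function names and sample sizes are ours:

```python
import numpy as np
from scipy import stats

def benford_sample(rng, size):
    # 10**U with U ~ Uniform[0, 1) yields Benford-distributed leading digits.
    return 10.0 ** rng.uniform(0.0, 1.0, size)

def operator_reference(op, n=100, k=2000, rng=None):
    # Monte Carlo approximation of the distribution of operator `op`
    # (e.g. np.mean) over Benford-distributed vectors of length n.
    if rng is None:
        rng = np.random.default_rng(0)
    return np.array([op(benford_sample(rng, n)) for _ in range(k)])

def anomaly_score(D, op, ref):
    # T_o(D): returns 1 - p, where p is the KS p-value between the
    # operator's reference distribution and op applied to each
    # feature (column) of D.
    obs = np.array([op(col) for col in np.atleast_2d(D).T])
    return 1.0 - stats.ks_2samp(obs, ref).pvalue

def normalize(x, a):
    # Rescale a raw score given the worst calibration result a in [0, 1).
    return (x - a) / (1.0 - a)
```

In this sketch, data whose column means match the reference distribution receives a low score, while data drawn from an unrelated distribution scores close to 1.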

Experimental Setup
In this section, we outline the two experiments conducted in this study. The first experiment is designed to numerically validate the performance of the proposed method. After validating the method, in a complementary manner, the second experiment evaluates the number of irregularities in recent academic economic studies. We implemented the experiments using the Python programming language [21] (version 3.7.5). We set p < 0.05 as the threshold for statistical significance.
First, for the method's performance validation, we manually collected 100 numerical datasets from Data World 1 and Kaggle 2, following [22]. The datasets were randomly chosen from a broad range of fields and represent a wide range of computational tasks. Each dataset is represented by a matrix D. We define a feature f_j of a dataset D as the j-th column, f_j := (d_{1,j}, . . ., d_{n,j}). A feature is used to calculate the unitary statistical properties. Based on this data, for each dataset D and statistical operator o, we computed T_o(D), obtaining a vector of results denoted by u. The overall anomaly probability prediction is defined to be (1/|u|) Σ_{i=1}^{|u|} u_i. For half of the datasets, we introduced uniformly distributed noise of between 1 and 10 percent of the mean value. As such, these datasets should not agree with Benford's law, and therefore if the proposed method predicts that they do, it is an error. This yields 50 positive and 50 negative examples.
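The noise-injection and score-aggregation steps above can be sketched as follows; the per-feature perturbation scheme shown is our reading of the described 1-10% uniform noise, and the helper names are hypothetical:

```python
import numpy as np

def perturb(D, rng, low=0.01, high=0.10):
    # Add uniform noise of 1-10% of each feature's mean, emulating the
    # manipulated half of the validation datasets (our interpretation).
    D = np.asarray(D, dtype=float).copy()
    for j in range(D.shape[1]):
        scale = rng.uniform(low, high) * abs(D[:, j].mean())
        D[:, j] += rng.uniform(-scale, scale, size=D.shape[0])
    return D

def overall_prediction(u):
    # Overall anomaly probability: the mean (1/|u|) * sum_i u_i of the
    # per-operator scores u.
    return float(np.mean(u))
```

A perturbed dataset keeps its shape but no longer matches the original values, so its leading-digit statistics should drift away from Benford's distribution.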
Second, for the manuscript evaluation, we collected a sample of 100 papers published in 10 leading economic journals over the past two years. These papers served as the test subjects for applying our proposed method to detect anomalies or irregularities. We chose this amount and distribution to balance the time and resource burden against the statistical power of the sample. In order to determine which journals are leading in the economics field, we used the Scimago Journal and Country Rank website 3, searching for "Economics and Econometrics" and taking the top 10 journals [23,24,25]: Quarterly Journal of Economics, American Economic Review, Journal of Political Economy, Journal of Finance, Review of Economic Studies, Econometrica, Journal of Economic Literature, Review of Financial Studies, Journal of Marketing, and Journal of Financial Economics. For each journal, we counted how many manuscripts the journal published in the last two years and randomly sampled 10 indices. Once the indices were obtained, we downloaded the corresponding manuscripts from the journals' websites. Next, we manually extracted the results presented in the manuscripts, in either tables or figures. For each of them, where appropriate, we applied our adapted Benford's law tests.

Results
To assess the performance of our method, we evaluated the confusion matrix for the datasets, as presented in Table 1. The obtained results indicate an accuracy of 0.79 and an F1 score of 0.77. Notably, the model misclassified 7 manipulation-free datasets as containing manipulations and, conversely, 14 manipulated datasets as manipulation-free. However, from the perspective of a journal, it is preferable for the model to err on the side of caution by falsely predicting manuscripts as manipulation-free, as falsely accusing innocent authors of result manipulation is deemed more undesirable than missing manuscripts with actual manipulations. Furthermore, Table 2 provides an overview of the number of economic manuscripts flagged as containing result manipulations at varying confidence levels. It is evident that as the confidence level increases, the number of flagged manuscripts decreases. This observation aligns with expectations, since the null hypothesis assumes that the manuscripts are manipulation-free; hence, higher confidence levels necessitate stronger statistical evidence of manipulation for a manuscript to be flagged.
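The reported metrics can be reproduced directly from the confusion-matrix counts (50 positive and 50 negative examples, 7 false positives, 14 false negatives):

```python
# Accuracy and F1 implied by the confusion matrix in Table 1:
# 50 manipulated and 50 clean datasets, 14 manipulated datasets
# missed (false negatives), 7 clean datasets wrongly flagged
# (false positives).
tp, fn = 50 - 14, 14   # true positives, false negatives
tn, fp = 50 - 7, 7     # true negatives, false positives

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(round(accuracy, 2), round(f1, 2))  # 0.79 0.77
```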

Confidence level       90%   92%   94%   96%   98%
Flagged manuscripts    12    8     6     3     2

Table 2: The number of manuscripts flagged as having irregularities according to our method, as a function of the required confidence level.

Discussion and Conclusion
In this study, we introduced an innovative approach to identify potential falsehoods in research manuscripts by applying Benford's law to commonly reported statistical values, including the mean, standard deviation, and linear regression coefficients. By adapting this law to the context of research manuscripts, we aimed to enhance the detection of deceptive information.
To validate the efficacy of our approach, we conducted two experiments. In the initial experiment, we evaluated the performance of our method by applying it to a random sample of 100 datasets from diverse fields. The results demonstrated that our method achieved an accuracy of 0.79 and an F1 score of 0.77, indicating its capability to identify potential anomalies, albeit with some limitations. Consequently, it can serve as a supportive tool or an initial filter to alleviate the burden of manual investigation.
Building upon this premise, the second experiment involved applying our method to 100 recent manuscripts from reputable high-impact academic journals in the field of economics. Alarming findings emerged, as approximately 3% of the manuscripts exhibited anomalies, inaccuracies, or even explicit manipulations, with a confidence level of 96%. These outcomes unfortunately align with existing trends in academic fraud practices, underscoring the significance of our approach in uncovering inconsistencies and deliberate misrepresentations in academic research.
By leveraging Benford's law, our method offers an objective and automated solution to complement traditional manual scrutiny. Furthermore, it holds particular relevance in fields like economics, where researchers heavily rely on data analysis and statistical properties yet often lack access to raw data due to privacy or proprietary constraints. While our results demonstrate the promise of our approach, there are several limitations to consider. First, our method relies on the assumption that the reported aggregated data follows Benford's distribution, which may not always hold true [26,27]. Second, our approach requires the development of a test for each statistical operator, which makes it difficult to cover the wide spectrum of fields and manuscripts that may use and report a large number of different statistical analysis methods. Third, our method does not provide definitive proof of fraud or misconduct but rather serves as a signal for potential irregularities that warrant further investigation, thus only partially reducing the time and resources required for the task. Finally, the publication of this study reduces its effectiveness, as malicious scholars would be aware of the proposed method and could develop counter-strategies to overcome it, as is common in other fields like cybersecurity [28].

Figure 1 :
Figure 1: A schematic view of this study. First, we outline the mathematical framework based on Benford's law. Next, we outline the data acquisition process for the experiments. Finally, we present the experimental setup and results, including a method validation experiment and an evaluation of recent economics studies, followed by an analysis of the results and a discussion of their implications.

Table 1 :
A confusion matrix of the proposed method on the open source datasets.