Confidence Intervals and Sample Size to Compare the Predictive Values of Two Diagnostic Tests

Abstract: A binary diagnostic test is a medical test that is applied to an individual in order to determine the presence or the absence of a certain disease, and whose result can be positive or negative. A positive result indicates the presence of the disease, and a negative result indicates its absence. Positive and negative predictive values represent the accuracy of a binary diagnostic test when it is applied to a cohort of individuals, and they are measures of the clinical accuracy of the test. In this manuscript, we study the comparison of the positive (negative) predictive values of two binary diagnostic tests subject to a paired design through confidence intervals. We have studied confidence intervals for the difference and for the ratio of the two positive (negative) predictive values. Simulation experiments have been carried out to study the asymptotic behavior of the confidence intervals, giving some general rules for application. We also study a method to calculate the sample size to compare the parameters using confidence intervals. We have written a program in R to solve the problems studied in this manuscript. The results have been applied to the diagnosis of colorectal cancer.


Introduction
A diagnostic test is a medical test that is applied to an individual in order to determine the presence of a certain disease. Binary diagnostic tests are a very common type of diagnostic test in clinical practice. A binary diagnostic test (BDT) is a diagnostic test whose possible result is positive or negative. A positive result indicates the presence of the disease, and a negative result indicates its absence. Mammography for the diagnosis of breast cancer is an example of a BDT. The accuracy of a BDT is measured in terms of two fundamental parameters: sensitivity and specificity. Sensitivity (Se) is the probability of the result of the BDT being positive when the individual has the disease, and specificity (Sp) is the probability of the result of the BDT being negative when the individual does not have the disease. Therefore, Se and Sp are probabilities of getting the disease diagnosis right, and they represent the intrinsic accuracy of the BDT, since these parameters depend on the physical, chemical, or biological properties upon which the BDT is developed. Other parameters that are used to assess and compare two BDTs are the positive and negative predictive values. The positive predictive value (τ) is the probability of an individual having the disease when the result of the BDT is positive, and the negative predictive value (υ) is the probability of an individual not having the disease when the result of the BDT is negative. Predictive values represent the accuracy of the diagnostic test when it is applied to a cohort of individuals, and they are measures of the clinical accuracy of the BDT. Predictive values depend on Se, Sp, and on the disease prevalence (p), and are easily calculated by applying Bayes' theorem, i.e.,

τ = p·Se / [p·Se + (1 − p)(1 − Sp)] and υ = (1 − p)·Sp / [(1 − p)·Sp + p(1 − Se)].

The accuracy of a BDT is assessed in relation to a gold standard. A gold standard (GS) is a medical test that determines without error whether or not an individual has the disease. Biopsy for the diagnosis of breast cancer is an example of a GS.
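The two Bayes-theorem formulas above can be sketched as follows (a minimal illustration; the function name and example values are ours, and Python is used here although the authors' own software is written in R):

```python
def predictive_values(se, sp, p):
    """Positive and negative predictive values of a binary diagnostic
    test, from sensitivity (se), specificity (sp), and prevalence (p),
    via Bayes' theorem."""
    ppv = p * se / (p * se + (1 - p) * (1 - sp))
    npv = (1 - p) * sp / ((1 - p) * sp + p * (1 - se))
    return ppv, npv

# Example: even a fairly accurate test applied in a low-prevalence
# cohort can have a modest positive predictive value.
ppv, npv = predictive_values(se=0.90, sp=0.80, p=0.10)
```

This makes explicit that, unlike Se and Sp, the predictive values change with the prevalence of the cohort in which the test is applied.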
On the other hand, the comparison of parameters of two BDTs is an important topic in the study of statistical methods for diagnosis in Medicine. The most frequent sample design to compare the parameters of two BDTs is the paired design. The paired design consists of applying the two BDTs and the GS to all individuals of a random sample sized n. The comparison of the predictive values of two BDTs subject to a paired design has been the subject of different studies. Bennett [1,2], Leisenring et al. [3], Wang et al. [4], Kosinski [5], Tsou [6], and Takahashi and Yamamoto [7] have studied hypothesis tests to compare the two positive predictive values and the two negative predictive values independently. Roldán-Nofuentes et al. [8] studied a global hypothesis test to simultaneously compare the positive and negative predictive values of two BDTs. However, the comparison of predictive values using confidence intervals has been little studied. If the hypothesis test is significant at an α error, the confidence interval (CI) allows determining how much one predictive value is greater than the other. Moskowitz and Pepe [9] have proposed a Wald-type CI for the ratio of the two positive (negative) predictive values. A Wald-type CI for the difference of the two positive (negative) predictive values is easily obtained by inverting the test statistic of the hypothesis test studied by Wang et al. [4].
The objective of this manuscript is to study CIs to compare the positive (negative) predictive values of two BDTs subject to a paired design. For this, we have studied CIs for the difference and for the ratio of the two positive (negative) predictive values. If a CI for the difference (ratio) does not contain the value zero (one), then we reject the equality of the two positive (negative) predictive values, and we can estimate how much larger one predictive value is than the other. The problem of calculating the sample size to compare the two positive (negative) predictive values through a CI is also studied.
This manuscript is structured in the following way. In Section 2, the existing CIs are presented and other new CIs are proposed, both for the ratio and for the difference of the positive (negative) predictive values. In Section 3, simulation experiments are carried out to study the coverage probabilities and the average lengths of the CIs. In Section 4, a method to calculate the sample size to compare the parameters through CIs is proposed. In Section 5, we present the program called "cicpvbdt", which is a program written in R that solves the problems studied in this manuscript. In Section 6, the results are applied to an example on the diagnosis of colorectal cancer, and in Section 7, the results are discussed.

Confidence Intervals
Let us consider two BDTs that are assessed in relation to the same GS. Let T_i be the variable that models the result of the ith BDT, with i = 1, 2; T_i = 0 indicates that the test result is negative, and T_i = 1 indicates that the test result is positive. Let D be the random variable that models the result of the GS, so that D = 1 when the individual is diseased and D = 0 when the individual is non-diseased. Let Se_i and Sp_i be the sensitivity and specificity of the ith BDT. Table 1 shows the observed frequencies obtained subject to a paired design. The frequencies s_jk and r_jk are the product of a multinomial distribution whose probabilities are p_jk = P(D = 1, T_1 = j, T_2 = k) and q_jk = P(D = 0, T_1 = j, T_2 = k), with j, k = 0, 1. Applying the conditional dependence model of Vacek [10], the probabilities p_jk and q_jk are written as

p_jk = p[Se_1^j (1 − Se_1)^(1−j) Se_2^k (1 − Se_2)^(1−k) + δ_jk ε_1]

and

q_jk = q[Sp_1^(1−j) (1 − Sp_1)^j Sp_2^(1−k) (1 − Sp_2)^k + δ_jk ε_0],

where δ_jk = 1 if j = k and δ_jk = −1 if j ≠ k, with j, k = 0, 1; ε_1 is the dependence factor between the two BDTs when D = 1, and ε_0 is the dependence factor between the two BDTs when D = 0. If ε_1 = ε_0 = 0, then the two BDTs are conditionally independent given the disease status. This assumption is not realistic, so in practice, it is verified that ε_1 > 0 and/or ε_0 > 0. Let π = (p_11, p_10, p_01, p_00, q_11, q_10, q_01, q_00)^T be the vector of probabilities of the multinomial distribution, p the disease prevalence, and q = 1 − p. The maximum likelihood estimators of p_jk and q_jk are p̂_jk = s_jk/n and q̂_jk = r_jk/n.

Table 1. Observed frequencies subject to a paired design.
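Under Vacek's model, the multinomial cell probabilities can be generated as in the following sketch (variable names are ours). By construction, the dependence terms cancel across the four cells, so the diseased cells sum to the prevalence p and all eight cells sum to one:

```python
def cell_probs(se, sp, p, eps1, eps0):
    """Cell probabilities p_jk = P(D=1, T1=j, T2=k) and
    q_jk = P(D=0, T1=j, T2=k) under Vacek's conditional-dependence model."""
    se1, se2 = se
    sp1, sp2 = sp
    pjk, qjk = {}, {}
    for j in (0, 1):
        for k in (0, 1):
            d = 1 if j == k else -1                    # delta_jk
            pjk[j, k] = p * (se1**j * (1 - se1)**(1 - j)
                             * se2**k * (1 - se2)**(1 - k) + d * eps1)
            qjk[j, k] = (1 - p) * (sp1**(1 - j) * (1 - sp1)**j
                                   * sp2**(1 - k) * (1 - sp2)**k + d * eps0)
    return pjk, qjk
```

A probability vector built this way can be fed directly to a multinomial sampler, which is how the simulation scenarios in Section 3 are generated.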

CIs for the Difference
Three CIs are studied for each of the differences δ_τ = τ_1 − τ_2 and δ_υ = υ_1 − υ_2: the Wald CI, the bias-corrected bootstrap CI, and the Monte Carlo Bayesian CI.

Bias-Corrected Bootstrap CI
The bias-corrected bootstrap CI is calculated from B random samples with replacement generated from the sample of n individuals. In each of the B samples, we calculate τ̂_1b, τ̂_2b, υ̂_1b, υ̂_2b, δ̂_τb = τ̂_1b − τ̂_2b and δ̂_υb = υ̂_1b − υ̂_2b, with b = 1, . . . , B. Then, the average differences are δ̄_τB = Σ_b δ̂_τb / B and δ̄_υB = Σ_b δ̂_υb / B. Assuming that the bootstrap statistics δ̂_τB and δ̂_υB can be transformed to a normal distribution, the bias-corrected bootstrap CIs [11] are calculated in the following way. Let A_τ = #{δ̂_τb < δ̂_τ} be the number of bootstrap estimators δ̂_τb that are lower than the maximum likelihood estimator (MLE) δ̂_τ, and let A_υ = #{δ̂_υb < δ̂_υ} be the number of bootstrap estimators δ̂_υb that are lower than the MLE δ̂_υ. Let ẑ_τ = Φ^(−1)(A_τ/B) and ẑ_υ = Φ^(−1)(A_υ/B), and let α_1τ = Φ(2ẑ_τ + z_(α/2)), α_2τ = Φ(2ẑ_τ + z_(1−α/2)), α_1υ = Φ(2ẑ_υ + z_(α/2)), and α_2υ = Φ(2ẑ_υ + z_(1−α/2)). Then, the bias-corrected bootstrap CI for δ_τ is (δ_τB^(α_1τ), δ_τB^(α_2τ)), and the bias-corrected bootstrap CI for δ_υ is (δ_υB^(α_1υ), δ_υB^(α_2υ)), where δ_τB^(γ) is the γth quantile of the distribution of the B bootstrap estimations of δ_τ, and δ_υB^(γ) is the γth quantile of the distribution of the B bootstrap estimations of δ_υ.
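The procedure above can be sketched for a generic statistic as follows (a minimal Python illustration with names of our choosing; in this paper, `stat` would be the difference of the two estimated positive or negative predictive values computed from a resampled table):

```python
import numpy as np
from statistics import NormalDist

def bc_bootstrap_ci(data, stat, B=2000, alpha=0.05, seed=None):
    """Bias-corrected bootstrap CI for stat(data); data holds one
    record per individual, stat maps a sample to a scalar estimate."""
    rng = np.random.default_rng(seed)
    nd = NormalDist()
    n = len(data)
    theta_hat = stat(data)                                  # MLE on the full sample
    boot = np.array([stat(data[rng.integers(0, n, n)])      # B resamples
                     for _ in range(B)])
    z0 = nd.inv_cdf(float(np.mean(boot < theta_hat)))       # z-hat = Phi^{-1}(A/B)
    z = nd.inv_cdf(1 - alpha / 2)
    a1 = nd.cdf(2 * z0 - z)                                 # corrected lower level
    a2 = nd.cdf(2 * z0 + z)                                 # corrected upper level
    return float(np.quantile(boot, a1)), float(np.quantile(boot, a2))
```

Note that Φ(2ẑ + z_(α/2)) = Φ(2ẑ − z_(1−α/2)), which is how the lower quantile level is written in the code.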

Monte Carlo Bayesian CI
The number of diseased individuals (s) is the product of a binomial distribution, i.e., s → B(n, p). Conditioning on D = 1, it is verified that s_11 + s_10 → B(s, Se_1) and s_11 + s_01 → B(s, Se_2). The number of non-diseased individuals (r) is the product of a binomial distribution, i.e., r → B(n, q). Conditioning on D = 0, it is verified that r_01 + r_00 → B(r, Sp_1) and r_10 + r_00 → B(r, Sp_2).
On the other hand, the estimators Ŝe_i, Ŝp_i, and p̂ (Equations (9) and (10)) are estimators of binomial proportions. Therefore, for these estimators we propose conjugate beta prior distributions. Let n = (s_11, s_10, s_01, s, r_11, r_10, r_01, n − s) be the vector of observed frequencies, with s_00 = s − s_11 − s_10 − s_01, r = n − s, and r_00 = n − s − r_11 − r_10 − r_01. Then, the posterior distributions for the estimators of Se_i, Sp_i, and p are beta distributions whose parameters add the corresponding observed successes and failures to the prior parameters; for example, with a Beta(a, b) prior, Se_1 | n → Beta(s_11 + s_10 + a, s_01 + s_00 + b) and p | n → Beta(s + a, n − s + b). The posterior distribution for the positive (negative) predictive value of each BDT, and for δ_τ and δ_υ, can be approximated by applying the Monte Carlo method, a computational method that consists of generating M values from the posterior distributions (12). In the mth iteration, the values generated for Se_i, Sp_i, and p are combined through Bayes' theorem to obtain τ_i^(m) and υ_i^(m), and then δ_τ^(m) = τ_1^(m) − τ_2^(m) and δ_υ^(m) = υ_1^(m) − υ_2^(m). The 100(1 − α)% Monte Carlo Bayesian CI for δ_τ (δ_υ) is given by the α/2 and 1 − α/2 quantiles of the M generated values.
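The Monte Carlo step for the difference of the positive predictive values can be sketched as follows (function and variable names are ours; Beta(a, b) priors, with a = b = 1 by default, and beta posterior parameters following the binomial decompositions above):

```python
import numpy as np

def bayes_mc_ci_ppv_diff(s11, s10, s01, s00, r11, r10, r01, r00,
                         M=10000, alpha=0.05, a=1.0, b=1.0, seed=None):
    """Monte Carlo Bayesian CI for delta_tau = tau_1 - tau_2.
    Draws M values of Se_i, Sp_i, p from their beta posteriors, converts
    them to predictive values via Bayes' theorem, and returns quantiles."""
    rng = np.random.default_rng(seed)
    s = s11 + s10 + s01 + s00                       # diseased individuals
    r = r11 + r10 + r01 + r00                       # non-diseased individuals
    p = rng.beta(s + a, r + b, M)                   # prevalence posterior
    se1 = rng.beta(s11 + s10 + a, s01 + s00 + b, M)
    se2 = rng.beta(s11 + s01 + a, s10 + s00 + b, M)
    sp1 = rng.beta(r01 + r00 + a, r11 + r10 + b, M)
    sp2 = rng.beta(r10 + r00 + a, r11 + r01 + b, M)
    tau1 = p * se1 / (p * se1 + (1 - p) * (1 - sp1))   # PPVs via Bayes' theorem
    tau2 = p * se2 / (p * se2 + (1 - p) * (1 - sp2))
    d = tau1 - tau2
    return float(np.quantile(d, alpha / 2)), float(np.quantile(d, 1 - alpha / 2))
```

The CI for δ_υ is obtained in the same way, replacing the PPV expressions with the corresponding NPV expressions.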

Logarithmic CI
Assuming the asymptotic normality of the Napierian logarithm of ρ̂_τ and of ρ̂_υ, an asymptotic CI for ln(ρ_τ) is ln(ρ̂_τ) ± z_(1−α/2)·√Var̂[ln(ρ̂_τ)], and an asymptotic CI for ln(ρ_υ) is ln(ρ̂_υ) ± z_(1−α/2)·√Var̂[ln(ρ̂_υ)]. Taking exponentials in each of the previous expressions, the logarithmic CI for ρ_τ is exp{ln(ρ̂_τ) ± z_(1−α/2)·√Var̂[ln(ρ̂_τ)]}, and the logarithmic CI for ρ_υ is exp{ln(ρ̂_υ) ± z_(1−α/2)·√Var̂[ln(ρ̂_υ)]}, where Var̂[ln(ρ̂_τ)] and Var̂[ln(ρ̂_υ)] are obtained by applying the delta method. If we want to calculate the logarithmic CI for the ratio τ_2/τ_1, then the CI is obtained by calculating the inverse of each boundary of the CI for ρ_τ = τ_1/τ_2. In a similar way, the CI for υ_2/υ_1 is calculated.
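The logarithmic interval can be sketched as follows, given a point estimate of the ratio and an estimate of Var[ln ρ̂] (which the paper obtains by the delta method; here it is simply an input, and the function name is ours):

```python
import math
from statistics import NormalDist

def log_ratio_ci(rho_hat, var_log_rho, alpha=0.05):
    """Logarithmic CI for a ratio of predictive values: a Wald interval
    built on the log scale, then exponentiated back to the ratio scale."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * math.sqrt(var_log_rho)
    log_rho = math.log(rho_hat)
    return math.exp(log_rho - half), math.exp(log_rho + half)
```

Because Var[ln(1/ρ̂)] = Var[ln ρ̂], the interval for the inverse ratio is exactly the elementwise inverse of the interval for ρ, which is the inversion property used for τ_2/τ_1 and υ_2/υ_1.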

Monte Carlo Bayesian CI
The Monte Carlo Bayesian CI for ρ_τ (ρ_υ) is obtained in a similar way as the Monte Carlo Bayesian CI for δ_τ (δ_υ). Considering the same distributions (10) and (11), in the mth iteration we calculate the ratios ρ̂_τ^(m) = τ̂_1^(m)/τ̂_2^(m) and ρ̂_υ^(m) = υ̂_1^(m)/υ̂_2^(m). Finally, we calculate the CIs based on quantiles, i.e., the α/2 and 1 − α/2 quantiles of the M generated ratios. The Monte Carlo Bayesian CI for τ_2/τ_1 (υ_2/υ_1) is calculated by inverting the limits of the Monte Carlo Bayesian CI for ρ_τ (ρ_υ).

Simulation Experiments
The CIs studied in Section 2 are approximate, and therefore, it is necessary to study their asymptotic behavior. For this, Monte Carlo simulation experiments have been carried out to study the coverage probabilities and the average lengths of the CIs studied, considering a confidence level of 95%. These experiments have consisted of generating N = 10,000 random samples with a multinomial distribution sized n = {50, 100, 200, 500, 1000}, whose probabilities have been calculated from Equations (7) and (8) for the values of the sensitivities, specificities, and prevalence set in each scenario (Equations (5) and (6)). As values of ε_1 and ε_0, we have taken intermediate and high values, i.e., 50% of the maximum value of ε_i and 90% of the maximum value of ε_i, respectively. In each scenario, we have calculated all the CIs for each of the N random samples.
For the bias-corrected bootstrap CIs, for each one of the N random samples, B = 2000 samples with replacement have been generated, and from these B samples, the bias-corrected bootstrap CIs have been calculated.
For the Monte Carlo Bayesian CIs, we have considered a Beta(1, 1) distribution as the prior distribution for the estimators of the sensitivities, specificities, and prevalence. The Beta(1, 1) distribution is a non-informative distribution, which is flat for every possible value of the sensitivities, specificities, and prevalence; therefore, its impact on the posterior distributions is minimal. Moreover, for each one of the N random samples, M = 10,000 random values have been generated, and the Monte Carlo Bayesian CIs have been calculated from them.
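The coverage-and-length machinery of these experiments can be sketched on a deliberately simplified case, the Wald CI for a single binomial proportion (the paper's experiments use the full multinomial model and all of the intervals of Section 2; names here are ours):

```python
import numpy as np
from statistics import NormalDist

def wald_coverage(p, n, N=10000, alpha=0.05, seed=0):
    """Monte Carlo estimate of the coverage probability and average
    length of the Wald CI for a binomial proportion p, sample size n."""
    rng = np.random.default_rng(seed)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    phat = rng.binomial(n, p, N) / n                  # N simulated estimates
    half = z * np.sqrt(phat * (1 - phat) / n)
    lo, hi = phat - half, phat + half
    coverage = float(np.mean((lo <= p) & (p <= hi)))  # fraction covering p
    avg_length = float(np.mean(hi - lo))              # mean upper - lower
    return coverage, avg_length
```

An interval behaves well when the estimated coverage fluctuates around the nominal 95% without being markedly conservative (close to 1) or anticonservative (well below 95%), which is the criterion applied in Tables 2 to 5.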
In each of the N samples generated, all the CIs have been calculated. Furthermore, it has been checked whether each CI contains the value of the parameter (difference or ratio, depending on the type of CI). The coverage probability has been calculated by dividing the number of samples in which the CI contains the parameter by the total number of samples. For each CI, its length (upper limit minus lower limit) has also been calculated, and finally, the average length of each CI has been calculated.

Table 2 shows some of the results obtained for the three CIs for the difference δ_τ, for four different scenarios and for intermediate values of ε_1 and ε_0. When the sample size is small (n = 50) or moderate (n = 100), the CIs for δ_τ have a coverage probability close to 1. For the difference δ_τ, in very general terms, the Wald CI is the interval whose coverage probability fluctuates best around 95%, especially when n is moderate or large (n ≥ 200). The bias-corrected bootstrap CI behaves very similarly to the Wald CI, especially when the sample size is large. In general terms, the Monte Carlo Bayesian CI has a coverage probability greater than that of the other two intervals, even when the coverage probability of the other two intervals fluctuates around 95%.

Regarding the CIs for the ratio ρ_τ, Table 3 shows the results obtained for the same scenarios as in Table 2. When the sample size is small (n = 50), the five CIs for ρ_τ have a coverage probability close to 1. In general terms, there is no important difference between the coverage probabilities and the average lengths of the Wald, logarithmic, and Fieller CIs, especially when n ≥ 100. When the sample size is small, the logarithmic CI and the Fieller CI have an average length slightly greater than that of the Wald CI. The bias-corrected bootstrap CI behaves very similarly to the Wald, logarithmic, and Fieller CIs, especially when the sample size is large.
In general terms, the Monte Carlo Bayesian CI has a coverage probability greater than that of the other four intervals. Table 4 shows the results for the three CIs for the difference δ_υ for the same scenarios as in Tables 2 and 3. In general terms, when the sample size is small, the three CIs have a coverage probability close to 1, although in some situations, the bias-corrected bootstrap CI may have a coverage probability well below 95%. In general terms, the Monte Carlo Bayesian CI has a coverage probability that almost always fluctuates above 95%. For the difference δ_υ, the Wald CI is the interval whose coverage probability fluctuates best around 95%, especially when the sample size is moderate or large.

CIs for the Differences and Ratios of Negative Predictive Values
Regarding the CIs for the ratio ρ_υ, Table 5 shows the results obtained for the same scenarios as in Table 4. When the sample size is small (n = 50), the five CIs for ρ_υ either fail or have a coverage probability close to 1. In general terms, the conclusions for the bias-corrected bootstrap CI and for the Monte Carlo Bayesian CI are very similar to those obtained for the corresponding intervals for the difference δ_υ. With respect to the other intervals, there is no important difference between the coverage probabilities and the average lengths of the Wald, logarithmic, and Fieller CIs, especially when n ≥ 100. When the sample size is small, the logarithmic CI and the Fieller CI have an average length slightly greater than that of the Wald CI. Similar conclusions are obtained when ε_1 and ε_0 take high values. Therefore, the dependence factors ε_1 and ε_0 do not have an important effect on the behavior of the CIs for the difference (ratio) of the two negative predictive values.
As a conclusion, the following general rules of application can be given depending on the sample size, since the sample size is the only parameter controlled by the researcher: (a) apply the Wald CI for the difference of the two positive (negative) predictive values whatever the sample size; (b) for the ratio of the two positive (negative) predictive values, apply the Wald CI when the sample size is small, and apply the Wald CI, the logarithmic CI, the Fieller CI, or the bias-corrected bootstrap CI when the sample size is moderate or large.
Once some general rules of application have been established, which is better: to use a CI for the difference or a CI for the ratio? Simulation experiments have shown that the Wald CIs for the difference and the Wald CIs for the ratio have a very similar coverage probability. Furthermore, the Wald CI for the difference has a coverage probability very similar to that of the Fieller CI when the sample size is large. The Wald CIs for the difference are obtained by inverting the Wald test statistics of the tests H_0: τ_1 − τ_2 = 0 and H_0: υ_1 − υ_2 = 0, and the Wald CIs for the ratio are obtained by inverting the Wald test statistics of the tests H_0: τ_1/τ_2 = 1 and H_0: υ_1/υ_2 = 1. Wang et al. [4] have shown through simulation experiments that the hypothesis tests H_0: τ_1 − τ_2 = 0 and H_0: υ_1 − υ_2 = 0 have better asymptotic behavior than the tests H_0: τ_1/τ_2 = 1 and H_0: υ_1/υ_2 = 1. Furthermore, Wang et al. recommend using the difference-based approach as it is more straightforward and more understandable for researchers. Therefore, we recommend using a CI for the difference instead of a CI for the ratio.

Sample Size
The calculation of the sample size to compare parameters is of great interest in Statistics. Next, we propose a procedure to determine the sample size necessary to estimate the difference between the two positive (negative) predictive values with a precision φ_τ (φ_υ) and a confidence of 100(1 − α)%. This procedure is based on the Wald CI for the difference δ_τ (δ_υ), since in general terms, this CI is the interval with the best asymptotic behavior. This procedure requires having a pilot sample (or another study) in order to estimate the predictive values and their differences. If the pilot sample is not small and the Wald CI for the difference δ_τ (δ_υ) contains the value 0, then the null hypothesis of equality of the predictive values is not rejected, and it does not make sense to calculate the sample size. However, if the sample is small, it may be necessary to calculate the sample size, since the Wald CI will be very wide and may contain the value 0 even if the predictive values are different. Let us consider that τ_1 ≥ τ_2 (υ_1 ≥ υ_2), and therefore δ_τ ≥ 0 (δ_υ ≥ 0), and let φ_τ and φ_υ be the precisions set by the researcher (φ must be a small value if the researcher wants high precision). Based on the asymptotic normality of δ̂_τ = τ̂_1 − τ̂_2 and of δ̂_υ = υ̂_1 − υ̂_2, it is verified that δ̂_τ falls in the interval (δ_τ − z_(1−α/2)·√Var(δ̂_τ), δ_τ + z_(1−α/2)·√Var(δ̂_τ)) with a probability of 100(1 − α)%, and analogously for δ̂_υ.
For positive predictive values, the method is as follows. Setting a precision φ_τ, the sample size is calculated from the equation z_(1−α/2)·√Var(δ̂_τ) = φ_τ, where the variance Var(δ̂_τ) is given in Appendix A. This variance depends on the positive predictive values (τ_i), on the disease prevalence (p), on the probability of a positive result of each test (Q_i), on the dependence factors (ε_i), and on the sample size n. Substituting in Equation (13) the parameters with their estimators and solving for n, the sample size to estimate the difference δ_τ with precision φ_τ and a confidence of 100(1 − α)% is obtained (Equation (15)). Once the equation for the sample size is obtained, the method to calculate the sample size consists of the following steps: (1) Take a pilot sample, and from this sample calculate τ̂_i, υ̂_i, ε̂_i, p̂, Q̂_i and the Wald CI for the difference δ_τ. If the Wald CI has a precision φ_τ, then the precision has been reached with the pilot sample, and the process ends; in this situation, the difference δ_τ has been estimated with a precision φ_τ at a confidence of 100(1 − α)%. Otherwise, go to the next step. (2) From the estimates obtained with the pilot sample, calculate the sample size n_τ applying Equation (15). (3) Take the sample sized n_τ (the individuals needed to go from the pilot sample size to n_τ are added to the initial pilot sample), and from this sample, calculate all the estimators and the Wald CI for the difference δ_τ. If the Wald CI has a precision φ_τ, then the process ends (the precision has been reached with the new sample). If the Wald CI does not have the precision φ_τ, then this sample is considered as a pilot sample, and go to step 1.
This method to calculate the sample size n is an iterative method, which depends on the initial pilot sample and therefore does not guarantee that the difference between the positive predictive values will be estimated with the precision φ τ .
The sample size to estimate the difference δ_υ is calculated in a similar way. In this case, the variance Var(δ̂_υ) is given in Appendix A, and the sample size n_υ to estimate the difference δ_υ with precision φ_υ and a confidence of 100(1 − α)% is obtained from Equation (16). If the researcher wants to estimate δ_τ with precision φ_τ and also wants to estimate δ_υ with precision φ_υ, at the same level of confidence, then the final sample size is n = Max(n_τ, n_υ). Using the largest sample size guarantees that the CI for the difference of the two positive predictive values and the CI for the difference of the two negative predictive values both verify the precision set for each of them.
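Since Var(δ̂) is proportional to 1/n, the sample-size equation can be sketched as follows (a hypothetical helper of our own; `var_hat` is the variance of the difference estimated from a pilot sample of size `n_pilot`, e.g., via the Appendix A expression, so that `n_pilot * var_hat` estimates the variance scaled to a single observation):

```python
import math
from statistics import NormalDist

def sample_size_for_difference(var_hat, n_pilot, phi, alpha=0.05):
    """Sample size n such that the Wald CI for the difference of two
    predictive values has half-width (precision) approximately phi."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    v1 = n_pilot * var_hat              # unit-sample variance estimate
    return math.ceil(z**2 * v1 / phi**2)
```

For instance, backing the pilot variance out of the example's 95% CI half-width (0.0705 with n = 168) and asking for φ = 0.05 gives a sample size in the neighborhood of the n_υ = 338 reported in Section 6; the exact value differs because the program recomputes the variance from the full set of estimators rather than from the CI half-width alone.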
The method for calculating the sample size depends on the values of the estimators obtained from the pilot sample. As the values of the estimators vary from one sample to another, it is necessary to study how they affect the calculation of the sample size. Therefore, we have carried out simulation experiments to study the effect of the values of the estimators on the calculation of the sample size. These simulation experiments consisted of the following steps: (1) Calculate the sample size n_τ (n_υ) from Equations (14) and (16) using the values of the parameters in the scenarios considered (Tables 2 and 4); that is, these equations have been applied using the values of the parameters instead of the values of the estimators. (2) Generate N = 10,000 multinomial random samples sized n_τ (n_υ), whose probabilities have been calculated from Equations (7) and (8) using the parameters of the scenarios considered and, as values of ε_i, intermediate (50%) and high (90%) values. From each one of the N random samples, all the estimators (τ̂_i, υ̂_i, ε̂_i, p̂ and Q̂_i) have been calculated, and then the sample size n_τ (n_υ) has been calculated applying Equation (14) (Equation (16)). (3) In each scenario considered, the average sample size and the relative bias have been calculated. Table 6 shows the results obtained for different precision values (2.5% and 5%, values that can be considered high precision) and 1 − α = 0.95. The relative biases are very small, so the equations of the sample sizes provide robust values, and therefore, the pilot sample has little effect on the calculation of the sample sizes.

Table 6. Sample size for estimating the difference between the positive (negative) predictive values. [Table residue: sample sizes 1203, 301, 5048, 1262 across the scenarios, with corresponding average sample sizes 1204, 302, 5054, ...]

Program cicpvbdt
We have written a program in R [13] to solve the problems raised in this manuscript. The program is called "cicpvbdt" (confidence intervals to compare the predictive values of binary diagnostic tests), and it calculates all of the CIs and the sample sizes. The program runs with the command "cicpvbdt(s_11, s_10, s_01, s_00, r_11, r_10, r_01, r_00, φ_τ, φ_υ)". By default, the level of confidence is 95%. The program does not calculate the sample sizes when φ_τ = 0 and φ_υ = 0, and only calculates them when φ_τ > 0 and/or φ_υ > 0; in this last situation, the program checks whether the set precision is reached. The program checks that all input values are valid (e.g., that there are no negative observed frequencies), and that all the parameters and their variances and covariances can be estimated. For the bias-corrected bootstrap CIs, 2000 samples with replacement are generated, and for the Monte Carlo Bayesian CIs, 10,000 random values are generated. The results obtained are saved in a file called "results_cicpvbt.txt" in the folder from which the program is run. The program is available as Supplemental Material of this manuscript.

Example
The results obtained have been applied to a study on the diagnosis of colorectal cancer, using two diagnostic tests: Fecal Immunochemical Testing (FIT) and Fecal Occult Blood Testing (FOBT). The GS for the diagnosis of colorectal cancer is the biopsy. Table 7 shows the observed frequencies when the two BDTs and the GS have been applied to a sample of 168 adult men suspected of having colorectal cancer. Using the program "cicpvbdt" with the command "cicpvbdt(68, 18, 1, 13, 4, 1, 2, 61, 0, 0)", all the results shown in Table 7 are obtained. The estimated positive predictive values of FIT and of FOBT are 94.5% and 92.0%, and the estimated negative predictive values are 81.8% and 66.7%, respectively. Using the recommendations given in Section 3, the 95% Wald CI for the difference between the two positive predictive values contains the value zero, and therefore (with α = 5%), the equality of the two positive predictive values is not rejected.
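The estimated predictive values quoted above can be reproduced directly from the observed frequencies (a quick check of our own; variable names follow the Table 1 notation, with T1 = FIT and T2 = FOBT):

```python
# Observed frequencies for (FIT = T1, FOBT = T2) against biopsy.
s11, s10, s01, s00 = 68, 18, 1, 13      # diseased individuals
r11, r10, r01, r00 = 4, 1, 2, 61        # non-diseased individuals

ppv_fit  = (s11 + s10) / (s11 + s10 + r11 + r10)   # P(D=1 | T1=1)
ppv_fobt = (s11 + s01) / (s11 + s01 + r11 + r01)   # P(D=1 | T2=1)
npv_fit  = (r01 + r00) / (r01 + r00 + s01 + s00)   # P(D=0 | T1=0)
npv_fobt = (r10 + r00) / (r10 + r00 + s10 + s00)   # P(D=0 | T2=0)
```

These ratios evaluate to 94.5%, 92.0%, 81.8%, and 66.7%, matching the estimates quoted above.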
Regarding the negative predictive values, the 95% Wald CI does not contain the value zero, and therefore, we reject the equality of both negative predictive values. Therefore, the negative predictive value of FIT is significantly greater than the negative predictive value of FOBT. The negative predictive value of FIT is (with a confidence of 95%) between 8.1% and 22.2% greater than the negative predictive value of FOBT. The same conclusions are obtained using the other CIs.
To illustrate the method of calculating the sample size, we are going to suppose that the clinician is interested in calculating the sample size necessary to estimate the difference between the two negative predictive values with a precision φ υ = 0.05 and 1 − α = 0.95. The 95% Wald CI for δ υ = υ 1 − υ 2 is (0.081 , 0.222), and the precision is 0.0705 (= (0.222 − 0.081)/2). Since φ υ = 0.05 < 0.0705, with the sample of 168 individuals, the precision has not been reached, and therefore, the sample size must be calculated. Using the sample of 168 patients as a pilot sample and executing the command "cicpvbdt(68, 18, 1, 13, 4, 1, 2, 61, 0, 0.05)", it is obtained that n υ = 338. A sample of 338 patients is necessary to estimate the difference between the two negative predictive values with a precision φ υ = 0.05 and a confidence of 95%. To the sample of 168 patients, another 170 new patients must be added. The two BDTs and the biopsy should be applied to new patients. Finally, it is necessary to recalculate the CIs with the sample of 338 patients and check if the set precision is verified.

Discussion
Comparison of the predictive values of two medical tests is a topic of interest in biostatistics. There are several articles that have studied hypothesis tests to solve these problems; however, the comparison of predictive values through confidence intervals has been little studied. In this manuscript, we have studied confidence intervals for the difference and for the ratio of the positive (negative) predictive values of two diagnostic tests under a paired design. We have carried out simulation experiments to study the asymptotic behaviors of the CIs, and we have given general rules of application. These rules are based on the sample size, since this is the only parameter that is set by the researcher, and also on the practical interpretation of the CIs. As a general conclusion, we recommend using the Wald interval for the difference of the two positive (negative) predictive values.
We have also proposed a method, based on the Wald CI for the difference, to calculate the sample size to estimate the difference between the two positive (negative) predictive values with a determined precision and confidence. This method starts from an initial pilot sample, and then the sample size is calculated from the estimators obtained with the initial sample. This method depends on the estimators of the pilot sample, so we have carried out simulation experiments to study the effect of the pilot sample on the sample size. The results obtained in these experiments have shown that the pilot sample does not have any important effect on the calculation of the sample size, and that therefore, the method has practical validity.