
Diagnostics 2013, 3(1), 192-209; https://doi.org/10.3390/diagnostics3010192

Article
Analysis of Predictive Values Based on Individual Risk Factors in Multi-Modality Trials
Department of Medical Statistics, University of Göttingen, Humboldtallee 32, 37073 Göttingen, Germany
* Author to whom correspondence should be addressed.
Received: 23 January 2013; in revised form: 25 February 2013 / Accepted: 13 March 2013 / Published: 15 March 2013

Abstract:
The accuracy of diagnostic tests with binary end-points is most frequently measured by sensitivity and specificity. However, from the clinical perspective, the main purpose of a diagnostic agent is to assess the probability of a patient actually being diseased, and hence predictive values are more suitable here. As predictive values depend on the pre-test probability of disease, we provide a method to take risk factors influencing the patient's prior probability of disease into account when calculating predictive values. Furthermore, approaches to assess confidence intervals and a methodology to compare predictive values by statistical tests are presented. The methods can be used to analyze predictive values of factorial diagnostic trials, such as multi-modality, multi-reader trials. We further performed a simulation study assessing length and coverage probability for different types of confidence intervals, and we present the R-Package facROC that can be used to analyze predictive values in factorial diagnostic trials in particular. The methods are applied to a study evaluating CT-angiography as a noninvasive alternative to coronary angiography for diagnosing coronary artery disease. Hereby the patients' symptoms are considered as risk factors influencing the respective predictive values.
Keywords:
positive predictive value; negative predictive value; diagnostic trials; coronary artery disease

1. Introduction

The main purpose of a diagnostic agent is to assess a patient's true health status, so the probability of the test giving the correct diagnosis is an important assessment of diagnostic ability. The positive predictive value describes the probability that a patient with an abnormal (i.e., positive) test result is actually diseased; consequently, the negative predictive value represents the probability that a patient with a normal (i.e., negative) test result is actually free of disease. However, these quantities are only of limited value: the predictive values of a diagnostic agent critically depend on the prevalence of the disease and, as the prevalence might vary, e.g., between different risk groups, predictive values are not homogeneous within the population. Hence, sensitivity and specificity are mostly used to describe the accuracy of a diagnostic agent, because these measures are independent of the prevalence of disease: they are defined as the probabilities of the test correctly identifying the diseased or the non-diseased subjects, respectively. In contrast to the predictive values, sensitivity and specificity describe the result of a test within groups of patients who either have or do not have the condition. Thus, they are characteristics of the diagnostic test itself and are independent of the prevalence of the disease. Even though sensitivity and specificity are useful and powerful measures of how effective a diagnostic test is, they have one main disadvantage: they do not assess the accuracy of a diagnostic agent in a practically useful way. They concentrate on how accurately the diagnostic test discriminates between diseased and non-diseased subjects, but fail to give an assessment of a normal or abnormal test result of an individual patient. This interpretation of the test result is only provided by predictive values.
Hence, predictive values should not be disregarded in the analysis of diagnostic trials, despite all entailed problems. Instead of avoiding predictive values, they should rather be estimated while carefully taking the arising problems into consideration.
This paper now provides a new method to calculate predictive values for different risk groups. As the pre-test probability of disease in different risk groups does not need to be determined in the same study as the efficiency of the diagnostic agent, this method can even be applied after the diagnostic study (analyzed by means of sensitivity and specificity) has already been closed. Hence, the approach is also applicable to case-control studies where, per study design, no prevalence can be assessed. By this method of analysis, the heterogeneity of predictive values throughout the population is taken into account because predictive values are estimated for each risk group separately. Reported additionally to sensitivity and specificity, these predictive values provide a comprehensive review of the strengths and weaknesses of the investigated diagnostic agent.
In our approach of analysis, we assume that sensitivity and specificity are equal in all investigated risk groups and that the probability of disease in each group has been estimated in prior studies. We calculate predictive values by using Bayes' theorem and determine the asymptotic distribution of the resulting inferential statistics by using the delta method (Section 2). As many diagnostic studies are at least two-armed trials in which each subject is diagnosed by different tests, we use the multivariate delta theorem so that predictive values can also be determined in factorial designs. The idea of using the delta method to calculate the distribution of predictive values in a univariate set-up was proposed by Mercaldo et al. [1]. In their work, the prevalence has to be known by the investigator and cannot be estimated, which seems unlikely in clinical practice. In our approach, we hence allow the prevalence to be a random variable, an aspect that cannot easily be neglected, as the simulation studies (Section 3) show. The practical relevance of the presented methods is illustrated by means of a study evaluating the accuracy of multidetector CT angiography in the diagnosis of coronary artery disease (Section 4). The paper closes with a discussion of the proposed procedures (Section 6).

2. Methods of Analysis

In this approach, Bayes' theorem [2] serves as the theoretical basis for the analysis. This theorem connects sensitivity and specificity with the predictive values by displaying the positive ($p_+$) and the negative ($p_-$) predictive value as functions of sensitivity ($se$), specificity ($sp$) and prevalence ($\pi$), namely,

$$p_+ = f_+(se, sp, \pi) = \frac{se \cdot \pi}{se \cdot \pi + (1 - sp) \cdot (1 - \pi)} \qquad (1)$$

$$p_- = f_-(se, sp, \pi) = \frac{sp \cdot (1 - \pi)}{sp \cdot (1 - \pi) + (1 - se) \cdot \pi} \qquad (2)$$
The sensitivity can be estimated by the ratio of the true positive test results to all diseased subjects, and the specificity can be estimated by the ratio of the true negative test results to all non-diseased subjects. As we assume that the prevalence of disease in each risk group has been assessed in prior studies, estimators for the positive and the negative predictive value for each risk group can be calculated by plugging in the estimators of sensitivity, specificity and prevalence in Equations (1) and (2).
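The plug-in estimation described above can be sketched in a few lines. The snippet below is an illustration in Python (the paper's accompanying software, facROC, is written in R); the function name and the numerical inputs are our own choices, not part of the original study.

```python
# Plug-in estimation of predictive values via Bayes' theorem (Equations (1) and (2)).
# Illustrative sketch; function name and parameter values are hypothetical.

def predictive_values(se, sp, prev):
    """Return (PPV, NPV) for given sensitivity, specificity and prevalence."""
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    return ppv, npv

# Example: a reasonably accurate test in a moderate-risk group.
ppv, npv = predictive_values(se=0.85, sp=0.90, prev=0.25)
print(round(ppv, 4), round(npv, 4))  # 0.7391 0.9474
```

Evaluating the same test in a lower-risk group immediately lowers the PPV, which is the heterogeneity discussed in the Introduction.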
This method is applied, e.g., by Diamond and Forrester [3], who compute the probability of having a coronary artery disease. They present a table of post-test probabilities depending on the result of an electrocardiographic stress test (depression of the S-T segment) and depending on the pre-test probability of disease (categorized into different risk groups by age, sex and symptoms). Their work shows the importance of distinguishing between different risk groups: for the same depression of the S-T segment, the positive predictive value varies from 0.938 (high risk group) to 0.003 (low risk group).
Extending the ideas of Diamond and Forrester, our approach will go one step further: we will derive methods of analysis by which not only the predictive values of two or more modalities are calculated for different risk groups, but also the difference between the modalities is statistically tested. Furthermore, different methods to calculate confidence intervals for the positive and the negative predictive value are provided. As all these methods take the patient’s risk factors into consideration, this approach of analysis can be regarded as a further step towards personalized medicine.

2.1. Notation

We consider a diagnostic trial involving $N$ subjects, where $n_0$ subjects are classified as non-diseased by a reliable gold standard and $n_1$ are classified as diseased. In the set-up of this trial, we assume that each subject is examined by means of $m = 1, \ldots, M$ different diagnostic tests. For each subject the results are collected in a vector $X_{ik} = (X_{ik}^{(1)}, \ldots, X_{ik}^{(M)})$, $i = 0, 1$, $k = 1, \ldots, n_i$, where $X_{ik}^{(m)} = 1$ if the test result of the $m$-th test is positive and $X_{ik}^{(m)} = 0$ otherwise. Within group $i$ ($i = 0, 1$), the vectors $X_{ik}$, $k = 1, \ldots, n_i$, are independent identically distributed random vectors, following a multivariate Bernoulli distribution with success probabilities $sp = (sp^{(1)}, \ldots, sp^{(M)})$ for $i = 0$ and $se = (se^{(1)}, \ldots, se^{(M)})$ for $i = 1$. Hereby $sp$ and $se$ denote the vectors of specificity and sensitivity for the different modalities.

2.2. Estimation and Asymptotic Distribution

The sensitivity of the $m$-th diagnostic test is estimated by $\hat{se}^{(m)}$, the ratio of the true positive test results (of test $m$) to all diseased subjects, and the specificity of the $m$-th modality is estimated by $\hat{sp}^{(m)}$, the ratio of the true negative test results (of test $m$) to all non-diseased subjects. Similarly to sensitivity and specificity, their estimators are also collected in vectors $\hat{se} = (\hat{se}^{(1)}, \ldots, \hat{se}^{(M)})$ and $\hat{sp} = (\hat{sp}^{(1)}, \ldots, \hat{sp}^{(M)})$.
For the calculation of the predictive values for a subject, its pre-test probability of disease is required. We assume that the pre-test probability of disease is influenced by a patient's individual characteristics and that each subject can be attributed to a risk group on the basis of these attributes. Hereby, the prevalences $\pi_g$, $g = 1, \ldots, G$, in the $g$-th risk group have been estimated in prior studies by $\hat{\pi}_g = k_g / m_g$, the ratio of the number of diseased subjects $k_g$ in group $g$ to the number of all subjects $m_g$ in group $g$. With the help of Equations (1) and (2), the positive and the negative predictive value of the $m$-th modality for the $g$-th risk group can be calculated and finally be estimated by replacing sensitivity, specificity and prevalence by their respective estimates:
$$\hat{p}_{g,+}^{(m)} = f_+(\hat{se}^{(m)}, \hat{sp}^{(m)}, \hat{\pi}_g) = \frac{\hat{se}^{(m)} \cdot \hat{\pi}_g}{\hat{se}^{(m)} \cdot \hat{\pi}_g + (1 - \hat{sp}^{(m)}) \cdot (1 - \hat{\pi}_g)} \quad \text{and} \quad \hat{p}_{g,-}^{(m)} = f_-(\hat{se}^{(m)}, \hat{sp}^{(m)}, \hat{\pi}_g) = \frac{\hat{sp}^{(m)} \cdot (1 - \hat{\pi}_g)}{\hat{sp}^{(m)} \cdot (1 - \hat{\pi}_g) + (1 - \hat{se}^{(m)}) \cdot \hat{\pi}_g}, \quad m = 1, \ldots, M$$
Similarly to sensitivity and specificity, the positive and the negative predictive values of each risk group $g = 1, \ldots, G$ are collected in vectors $p_+^g = (p_{g,+}^{(1)}, \ldots, p_{g,+}^{(M)})$ and $p_-^g = (p_{g,-}^{(1)}, \ldots, p_{g,-}^{(M)})$.
To derive the asymptotic results for the predictive values, the following regularity assumptions are required:
Assumptions
(1) For all $l, r = 1, \ldots, M$, the bivariate distribution of $(X_{ik}^{(l)}, X_{ik}^{(r)})$ is the same for all subjects $k = 1, \ldots, n_i$ within group $i$, $i = 0, 1$.
(2) Let $\tilde{N} = \min(n_0, n_1, m_g, g = 1, \ldots, G)$, such that $\tilde{N}/n_i \to d_i$, $i = 0, 1$, and $\tilde{N}/m_g \to e_g$, $g = 1, \ldots, G$, as $\tilde{N} \to \infty$.
(3) $se^{(l)}$, $sp^{(l)}$, $l = 1, \ldots, M$, and $\pi_g$, $g = 1, \ldots, G$, lie in $(0, 1)$.
In clinical practice, these assumptions can be interpreted in the following way. The first assumption means that different subjects are independent replications. The second assumption ensures that the sample sizes n 1 (used for the estimation of sensitivity), n 0 (used for the estimation of specificity) and m g , g = 1 , , G (used for the estimation of prevalences for the different risk groups) increase uniformly when the overall sample size is increased. The third assumption excludes the trivial case that the sensitivity, specificity or prevalences are equal to 0 or 1.
These assumptions lead to our main result:
Theorem
For each risk group $g = 1, \ldots, G$, the statistics $\sqrt{N}(\hat{p}_+^g - p_+^g)$ and $\sqrt{N}(\hat{p}_-^g - p_-^g)$ have, asymptotically, a multivariate normal distribution with mean $0$ and covariance matrices $\hat{V}_+^g$ and $\hat{V}_-^g$, which are defined in Appendix A.
Proof
The proof is mainly based on the central limit theorem and Cramér's multivariate delta theorem with $f_+$ and $f_-$ as transformation functions. For details, as well as for the expressions of $\hat{V}_+^g$ and $\hat{V}_-^g$, we refer to Appendix A.
The idea of using the delta method to calculate the distribution of predictive values was already proposed by Mercaldo et al. [1] for the univariate case, i.e., in their approach, predictive values of different diagnostic tests cannot be compared if the tests are carried out on the same subjects. It is further assumed that the prevalence is a known parameter rather than a quantity that has been estimated. If their method is applied in a set-up where the prevalence is estimated (but incorrectly treated as fixed in order to meet the requirements of their approach), the variance of the predictive values is systematically underestimated (see Section 3).
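The effect of dropping the prevalence term from the variance can be made concrete with a univariate delta-method sketch. The following Python snippet is an illustration only: the function `ppv_variance` and the numerical inputs are hypothetical, and the paper's exact multivariate covariance matrices are those of Appendix A. Sensitivity, specificity and prevalence are treated as independent binomial proportions.

```python
# Delta-method variance of the estimated PPV, treating sensitivity, specificity
# and prevalence as independent binomial proportions (univariate sketch).

def ppv_variance(se, sp, prev, n1, n0, mg, prev_fixed=False):
    a = se * prev                           # se * pi
    b = (1 - sp) * (1 - prev)               # (1 - sp) * (1 - pi)
    denom = (a + b) ** 2
    d_se = prev * b / denom                 # partial derivative w.r.t. se
    d_sp = a * (1 - prev) / denom           # partial derivative w.r.t. sp
    d_pi = (se * b + a * (1 - sp)) / denom  # partial derivative w.r.t. pi
    var = (d_se ** 2 * se * (1 - se) / n1
           + d_sp ** 2 * sp * (1 - sp) / n0)
    if not prev_fixed:                      # the pi-fixed approach drops this term
        var += d_pi ** 2 * prev * (1 - prev) / mg
    return var

v_full = ppv_variance(0.85, 0.90, 0.25, n1=100, n0=100, mg=100)
v_fixed = ppv_variance(0.85, 0.90, 0.25, n1=100, n0=100, mg=100, prev_fixed=True)
print(v_full, v_fixed)  # the pi-fixed variance is noticeably smaller
```

For these illustrative inputs the prevalence term contributes a substantial share of the total variance, which is why treating the prevalence as fixed leads to intervals that are too narrow.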

2.3. Inferential Statistics

Based on the asymptotic distribution of $\sqrt{N}(\hat{p}_+^g - p_+^g)$ and $\sqrt{N}(\hat{p}_-^g - p_-^g)$, differences between the diagnostic tests can be statistically tested by formulating the hypotheses in the same way as in the theory of linear models:

$$H_0: p_{g,\pm}^{(m)} - \bar{p}_{g,\pm} = 0, \quad m = 1, \ldots, M, \quad \text{where } \bar{p}_{g,\pm} = \frac{1}{M} \sum_{l=1}^{M} p_{g,\pm}^{(l)}$$

which equivalently can be written as:

$$H_0: T \cdot p_\pm^g = 0, \quad \text{where } T = I_M - \frac{1}{M} 1_M 1_M^T$$
Hereby, $I_M$ denotes the $M$-dimensional unit matrix and $1_M$ the $M$-dimensional vector of 1s. In this case, an additive model is assumed, but this approach can easily be expanded to a logistic model by again applying the delta method with a logit transformation function. Hypotheses can be tested with the help of the ANOVA-type statistic ([4,5]): under $H_0$, the statistic
$$ATS = \frac{N}{tr(T \hat{V}_\pm^g)} \, (\hat{p}_\pm^g)^T \, T \, \hat{p}_\pm^g$$

can be approximated by a central $\chi^2_{\hat{f}}/\hat{f}$ distribution with

$$\hat{f} = \frac{[tr(T \hat{V}_\pm^g)]^2}{tr([T \hat{V}_\pm^g]^2)}$$
degrees of freedom. Furthermore, the $(1 - \alpha)$-confidence intervals for each modality, as well as for the difference between two modalities, can be calculated in the usual way:

$$\hat{p}_{g,\pm}^{(m)} \pm z_{1-\frac{\alpha}{2}} \sqrt{\frac{\hat{v}_\pm^g[m,m]}{N}}$$

$$\hat{p}_{g,\pm}^{(m_1)} - \hat{p}_{g,\pm}^{(m_2)} \pm z_{1-\frac{\alpha}{2}} \sqrt{\frac{\hat{v}_\pm^g[m_1,m_1] + \hat{v}_\pm^g[m_2,m_2] - 2\hat{v}_\pm^g[m_1,m_2]}{N}}$$
where $\hat{v}_\pm^g[i,j]$ denotes the $(i,j)$-element of $\hat{V}_\pm^g$ and $z_{1-\frac{\alpha}{2}}$ the $(1-\frac{\alpha}{2})$-quantile of the standard normal distribution. For the confidence intervals, as well as for the test statistic, a logistic model can be applied. Hereby, the logistic model has one main advantage: the resulting confidence intervals are range-preserving by construction.
For small sample sizes, the distribution of $\sqrt{N}(\hat{p}_{g,\pm}^{(m)} - p_{g,\pm}^{(m)}) / \sqrt{\hat{v}_\pm^g[m,m]}$ can be approximated by a central $t_\nu$-distribution (see Appendix B), which increases the coverage probability of the resulting confidence intervals.
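The two interval constructions can be sketched as follows, assuming the delta-method variance of the estimated predictive value has already been computed; here `v` plays the role of $\hat{v}_\pm^g[m,m]/N$. This Python sketch is illustrative only, and the numerical inputs are made up.

```python
import math

# Additive (normal-scale) and logistic (logit-scale) confidence intervals for
# a predictive value, given its delta-method variance v = Var(p_hat).
# Illustrative sketch; inputs are made up.

def additive_ci(p, v, z=1.96):
    half = z * math.sqrt(v)
    return p - half, p + half

def logistic_ci(p, v, z=1.96):
    # Delta method for the logit transform: Var(logit(p_hat)) = v / (p(1-p))^2.
    logit = math.log(p / (1 - p))
    half = z * math.sqrt(v) / (p * (1 - p))
    expit = lambda x: 1 / (1 + math.exp(-x))
    return expit(logit - half), expit(logit + half)  # range-preserving

p_hat, v_hat = 0.9474, 0.0025   # an NPV estimate near 1 with a sizeable variance
print(additive_ci(p_hat, v_hat))  # upper bound exceeds 1
print(logistic_ci(p_hat, v_hat))  # stays inside (0, 1)
```

The example deliberately uses an estimate close to 1: the additive interval spills past the admissible range, while the logistic interval remains inside (0, 1), which is the range-preservation advantage mentioned above.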

3. Simulation Results

In this section, we investigate the coverage and length of confidence intervals constructed with the delta method. Hereby, we compare the approach of Mercaldo et al. [1] with the approaches presented in this paper. There were 48 different combinations of prevalence $\pi \in \{0.05, 0.25, 0.5\}$ and sensitivity and specificity $se, sp \in \{0.5, 0.75, 0.85, 0.9\}$. Three different values of $n = n_0 = n_1 \in \{50, 100, 500\}$ and $m_g \in \{100, 500, 1{,}000\}$ were used with each combination to represent small, medium and large study sizes, both for the study evaluating the usefulness of the diagnostic test and for the study assessing the prevalence of disease according to risk groups. For each combination of $\pi, se, sp, n, m_g$, 10,000 binomial samples were generated using the function rbinom of the free software R [6]. Hereby, in each simulation step, the sensitivity was estimated from a contingency table generated by $n_1$ Bernoulli samples, the specificity was estimated from a contingency table generated by $n_0$ Bernoulli samples, and the prevalence was estimated by means of $m_g$ Bernoulli samples. The positive and negative predictive values, as well as their estimators, were calculated by applying Bayes' theorem to the given set of $\pi, se, sp$ and $\hat{\pi}, \hat{se}, \hat{sp}$, respectively.
Simulation results for $p_+$ and $p_-$ can be found in Table 1 and Table 2 and Table A1 and Table A2, respectively. Hereby, the results for the negative predictive value are presented in Appendix C for reasons of readability. Due to the great number of input parameters and the number of possible values, these tables were constructed in the same way as in the paper of Mercaldo et al. [1]: one parameter was held fixed while averaging over the remaining parameters. If the positive or the negative predictive value is estimated to be 0 or 1, the logistic confidence interval is not applicable. In this case, the $t_\nu$-approximation is not applicable either, because the denominator of $\nu$ is estimated to be 0. The number of times this occurred was recorded in the last columns of Table 1 and Table A1, respectively. As the point estimators for $p_+$ and $p_-$ are equal in all approaches, the failure rates of the t-approximation, the logistic normal approximation, the logistic t-approximation and the logistic method of Mercaldo et al. are the same.
Table 1. Summary of $p_+$ coverage probabilities, where the cell values denote the coverage probability for one fixed parameter, averaging over the remaining parameters. Hereby, N-Approx and t-Approx are abbreviations for normal and t-approximation. π fix denotes the method of [1], where π is assumed to be fixed.
| Fixed Parameter | Additive N-Approx | Additive t-Approx | Additive π fix | Logistic N-Approx | Logistic t-Approx | Logistic π fix | Failure (logistic & t-Approx) |
|---|---|---|---|---|---|---|---|
| π = 0.05 | 0.9298 | 0.9525 | 0.6958 | 0.9608 | 0.9657 | 0.7013 | 0.234% |
| π = 0.25 | 0.9403 | 0.9482 | 0.8250 | 0.9548 | 0.9580 | 0.8386 | 0.045% |
| π = 0.5 | 0.9392 | 0.9472 | 0.8449 | 0.9548 | 0.9576 | 0.8596 | 0.047% |
| se = 0.5 | 0.9360 | 0.9477 | 0.8067 | 0.9571 | 0.9606 | 0.8184 | 0.105% |
| se = 0.75 | 0.9365 | 0.9496 | 0.7875 | 0.9569 | 0.9605 | 0.7985 | 0.111% |
| se = 0.85 | 0.9366 | 0.9498 | 0.7810 | 0.9568 | 0.9603 | 0.7925 | 0.107% |
| se = 0.9 | 0.9366 | 0.9501 | 0.7790 | 0.9564 | 0.9602 | 0.7900 | 0.111% |
| sp = 0.5 | 0.9408 | 0.9500 | 0.6897 | 0.9544 | 0.9557 | 0.6909 | 0.060% |
| sp = 0.75 | 0.9394 | 0.9480 | 0.7856 | 0.9552 | 0.9577 | 0.7913 | 0.061% |
| sp = 0.85 | 0.9357 | 0.9473 | 0.8287 | 0.9574 | 0.9623 | 0.8427 | 0.077% |
| sp = 0.9 | 0.9299 | 0.9520 | 0.8501 | 0.9602 | 0.9658 | 0.8745 | 0.235% |
| m_g = 100 | 0.9266 | 0.9512 | 0.6578 | 0.9605 | 0.9646 | 0.6698 | 0.233% |
| m_g = 500 | 0.9404 | 0.9476 | 0.8314 | 0.9552 | 0.9587 | 0.8425 | 0.046% |
| m_g = 1,000 | 0.9422 | 0.9491 | 0.8764 | 0.9547 | 0.9579 | 0.8873 | 0.047% |
| n_0 = n_1 = 50 | 0.9307 | 0.9498 | 0.8693 | 0.9608 | 0.9656 | 0.8915 | 0.197% |
| n_0 = n_1 = 100 | 0.9373 | 0.9474 | 0.8317 | 0.9563 | 0.9608 | 0.8416 | 0.065% |
| n_0 = n_1 = 500 | 0.9413 | 0.9508 | 0.6646 | 0.9533 | 0.9548 | 0.6664 | 0.063% |
| Overall | 0.9364 | 0.9494 | 0.7885 | 0.9568 | 0.9603 | 0.7998 | 0.108% |
The approach of Mercaldo et al. assumes that the prevalence is a known, fixed parameter and that the variance of $\pi_g$ is 0. However, as the prevalence can only be assessed by estimation, we assumed the prevalence to be a binomial random variable with variance greater than 0. To investigate whether this assumption has an impact on the quality of the confidence intervals, or whether it can easily be neglected in practice, we also simulated the approach of Mercaldo et al. with $\pi_g$ being a random variable. Note that we hence investigate the methodology of Mercaldo et al. in a set-up that seems likely in clinical practice but for which it was not designed. As the assumption that $\pi_g$ is known by the investigator seems unlikely in practice, no simulation with fixed $\pi_g$ was performed.
Simulation results show that the logistic confidence intervals have a slightly higher coverage probability than the additive intervals. The t-approximation in the additive set-up seems to achieve the best coverage.
For our approach, the overall coverages for the logistic interval are 0.9568 ($p_+$) and 0.9562 ($p_-$), whereas the overall coverages for the additive interval are 0.9364 ($p_+$) and 0.9303 ($p_-$). The t-approximation increases coverage such that the overall coverages achieve 0.9494 ($p_+$) and 0.9440 ($p_-$). The t-approximated logistic confidence intervals tend to be even more conservative than the normal-approximated logistic confidence intervals. Furthermore, the simulation results show that the assumption of $\pi_g$ being a fixed parameter is necessary for a good performance of the approach of Mercaldo et al. If $\pi_g$ is a random variable, the overall coverage probability only reaches 0.7885 (additive) or 0.7998 (logistic) for the positive predictive value. (Simulations of the negative predictive value lead to comparable results.) The variance of $\hat{\pi}_g$ decreases when the sample size $m_g$ increases and, hence, the method of Mercaldo et al. achieves better results for large $m_g$.
Table 2. Summary of $p_+$ confidence interval lengths, where the cell values denote the confidence interval length for one fixed parameter, averaging over the remaining parameters. Hereby, N-Approx and t-Approx are abbreviations for normal and t-approximation. π fix denotes the method of [1], where π is assumed to be fixed.
| Fixed Parameter | Additive N-Approx | Additive t-Approx | Additive π fix | Logistic N-Approx | Logistic t-Approx | Logistic π fix |
|---|---|---|---|---|---|---|
| π = 0.05 | 0.2045 | 0.2204 | 0.1290 | 0.2041 | 0.2216 | 0.1269 |
| π = 0.25 | 0.2262 | 0.2326 | 0.1782 | 0.2224 | 0.2290 | 0.1761 |
| π = 0.5 | 0.1522 | 0.1554 | 0.1219 | 0.1533 | 0.1578 | 0.1232 |
| se = 0.5 | 0.2006 | 0.2084 | 0.1561 | 0.1994 | 0.2081 | 0.1545 |
| se = 0.75 | 0.1941 | 0.2027 | 0.1423 | 0.1932 | 0.2028 | 0.1414 |
| se = 0.85 | 0.1918 | 0.2006 | 0.1379 | 0.1908 | 0.2007 | 0.1371 |
| se = 0.9 | 0.1907 | 0.1995 | 0.1358 | 0.1896 | 0.1996 | 0.1351 |
| sp = 0.5 | 0.1323 | 0.1343 | 0.0772 | 0.1329 | 0.1368 | 0.0770 |
| sp = 0.75 | 0.1816 | 0.1856 | 0.1285 | 0.1815 | 0.1873 | 0.1277 |
| sp = 0.85 | 0.2181 | 0.2270 | 0.1680 | 0.2165 | 0.2264 | 0.1666 |
| sp = 0.9 | 0.2452 | 0.2642 | 0.1984 | 0.2421 | 0.2607 | 0.1968 |
| m_g = 100 | 0.2480 | 0.2610 | 0.1415 | 0.2476 | 0.2644 | 0.1407 |
| m_g = 500 | 0.1741 | 0.1802 | 0.1437 | 0.1727 | 0.1785 | 0.1426 |
| m_g = 1,000 | 0.1607 | 0.1671 | 0.1439 | 0.1595 | 0.1654 | 0.1428 |
| n_0 = n_1 = 50 | 0.2510 | 0.2664 | 0.2142 | 0.2484 | 0.2641 | 0.2120 |
| n_0 = n_1 = 100 | 0.1959 | 0.2017 | 0.1492 | 0.1951 | 0.2021 | 0.1484 |
| n_0 = n_1 = 500 | 0.1360 | 0.1402 | 0.0658 | 0.1363 | 0.1421 | 0.0657 |
| Overall | 0.1943 | 0.2028 | 0.1430 | 0.1933 | 0.2028 | 0.1420 |
As the approach of Mercaldo et al. assumes the variance of $\pi_g$ to be 0, the lengths of their confidence intervals are noticeably smaller than in our approach (by about 25%). For both methods, the additive and logistic approaches yield almost equal interval lengths. By construction, the lengths of the t-approximated confidence intervals are slightly greater than those of the intervals constructed by means of the normal approximation, but the difference is negligible.
We hence recommend using either the t-approximation or the logistic normal approximation when confidence intervals are computed.
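A reduced version of the simulation above can be sketched as follows. The snippet is an illustration in Python for a single, made-up parameter combination (the original study used R's rbinom over the full 48-combination grid); it checks that the additive normal interval with estimated prevalence stays close to nominal coverage while the interval that ignores the prevalence variance undercovers.

```python
import math
import random

# Reduced Monte Carlo coverage check (illustrative parameters only).
se, sp, prev = 0.85, 0.90, 0.25
n1 = n0 = m_g = 100
p_true = se * prev / (se * prev + (1 - sp) * (1 - prev))

def binom(n, p, rng):
    """Binomial draw as a sum of Bernoulli trials (stdlib only)."""
    return sum(rng.random() < p for _ in range(n))

def ppv_var(se_h, sp_h, pi_h, prev_fixed):
    """Delta-method variance of the estimated PPV; optionally drop Var(pi_hat)."""
    a, b = se_h * pi_h, (1 - sp_h) * (1 - pi_h)
    denom = (a + b) ** 2
    var = ((pi_h * b / denom) ** 2 * se_h * (1 - se_h) / n1
           + (a * (1 - pi_h) / denom) ** 2 * sp_h * (1 - sp_h) / n0)
    if not prev_fixed:  # the pi-fixed approach drops this term
        var += ((se_h * b + a * (1 - sp_h)) / denom) ** 2 * pi_h * (1 - pi_h) / m_g
    return var

rng = random.Random(2013)
reps, hits_full, hits_fixed = 2000, 0, 0
for _ in range(reps):
    se_h = binom(n1, se, rng) / n1
    sp_h = binom(n0, sp, rng) / n0
    pi_h = binom(m_g, prev, rng) / m_g
    p_h = se_h * pi_h / (se_h * pi_h + (1 - sp_h) * (1 - pi_h))
    hits_full += abs(p_h - p_true) <= 1.96 * math.sqrt(ppv_var(se_h, sp_h, pi_h, False))
    hits_fixed += abs(p_h - p_true) <= 1.96 * math.sqrt(ppv_var(se_h, sp_h, pi_h, True))
print(hits_full / reps, hits_fixed / reps)  # coverage with vs without Var(pi_hat)
```

With these settings, the π-fixed interval is expected to undercover relative to the full delta-method interval, mirroring the pattern in Table 1.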

4. Applications: Diagnostic Performance of Multidetector CT Angiography

As coronary artery disease (CAD) has been recognized as the leading cause of death in the United States [7], the diagnosis of the presence and severity of CAD is essential in clinical practice. Conventional coronary angiography reveals the extent, location and severity of obstructive lesions with high accuracy and thus invasive coronary angiography, despite the associated risks, remains the standard procedure for the diagnosis of CAD. Multidetector computed tomographic angiography (MDCTA) has been proposed as a noninvasive alternative to the conventional coronary angiography.
Recently, Miller et al. [8] performed a multi-center diagnostic trial to evaluate the accuracy of MDCTA involving 64 detectors. In 291 patients, segments of 1.5 mm or more in diameter were analyzed by means of CT and conventional angiography (gold standard) to assess whether the patient has at least one coronary stenosis of 50% or more. These data are summarized in Table 3.
Table 3. CAD data.
| | Conventional Angiography positive | Conventional Angiography negative |
|---|---|---|
| MDCTA positive | 140 | 13 |
| MDCTA negative | 24 | 114 |
| Total | 164 | 127 |
From Table 3, the sensitivity is estimated to be 0.85, while the specificity is estimated to be 0.90. Miller et al. [8] also estimated the predictive values from Table 3. Hereby, they implicitly assumed that the pre-test likelihood of disease is the same for all patients. They, furthermore, assumed that the study prevalence is representative. Using this approach, 0.83 is estimated as the positive predictive value and 0.91 as the negative predictive value. From these results, the authors draw the conclusion that CT angiography cannot be used as a simple replacement for conventional angiography. For our approach of analysis, we will regard three risk groups with different pre-test probabilities of disease. Diamond and Forrester [3] reviewed the literature to estimate the prevalence of CAD depending on sex, age and symptoms. For reasons of simplicity, we will only concentrate on the patient's symptoms as risk factor. According to the patient's symptoms, Diamond and Forrester [3] provide the pre-test probabilities of disease presented in Table 4.
Table 4. Prevalence of CAD in symptomatic patients.
| Symptom | Proportion of Patients Affected |
|---|---|
| nonanginal chest pain | 146/913 (16.0%) |
| atypical angina | 963/1,931 (49.9%) |
| typical angina | 1,874/2,108 (88.9%) |
Using these estimators for the prevalence as well as the estimators of sensitivity and specificity, the positive predictive values (PPV), negative predictive values (NPV) and the corresponding confidence intervals were calculated using the described methods. The results are summarized in Table 5.
Table 5. Predictive values of CAD according to symptoms.
| | Nonanginal Chest Pain PPV | NPV | Atypical Angina PPV | NPV | Typical Angina PPV | NPV |
|---|---|---|---|---|---|---|
| estimate | 0.614 | 0.970 | 0.892 | 0.860 | 0.985 | 0.434 |
| standard | [0.483, 0.744] | [0.958, 0.982] | [0.842, 0.943] | [0.814, 0.907] | [0.977, 0.993] | [0.336, 0.532] |
| with t-approx. | [0.480, 0.747] | [0.957, 0.982] | [0.840, 0.945] | [0.814, 0.907] | [0.977, 0.993] | [0.335, 0.533] |
| logistic | [0.478, 0.733] | [0.955, 0.980] | [0.830, 0.933] | [0.807, 0.901] | [0.975, 0.991] | [0.339, 0.533] |
| logistic with t-approx. | [0.474, 0.736] | [0.955, 0.980] | [0.828, 0.935] | [0.807, 0.901] | [0.975, 0.991] | [0.338, 0.534] |
Taking the additional information of the patient's individual risk factors into account offers a more comprehensive interpretation of the study results. For a patient with nonanginal chest pain, a negative MDCTA result eliminates the need for further examination, just as a positive result does for a patient with typical angina. In contrast, for a patient with atypical angina, neither a positive nor a negative MDCTA result leads to a clear statement concerning the patient's health status.
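The point estimates of Table 5 can be reproduced directly from Tables 3 and 4 via Equations (1) and (2). The following Python sketch does so for illustration; within R, facROC's facPV offers the same computation, including the confidence intervals.

```python
# Reproducing the point estimates of Table 5 from Tables 3 and 4.
se = 140 / 164   # true positives / all diseased      (Table 3)
sp = 114 / 127   # true negatives / all non-diseased  (Table 3)

groups = {                      # prevalence estimates from Table 4
    "nonanginal chest pain": 146 / 913,
    "atypical angina":       963 / 1931,
    "typical angina":        1874 / 2108,
}

for symptom, prev in groups.items():
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    print(f"{symptom}: PPV = {ppv:.3f}, NPV = {npv:.3f}")  # matches Table 5
```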

5. Software

In order to analyze factorial trials, we have developed the R-Package facROC. The software can be used to evaluate most assessments of diagnostic accuracy in factorial set-ups: the area under the ROC-Curve (according to [9]), sensitivity and specificity (according to [10]) as well as predictive values (according to this paper). In most diagnostic trials sensitivity and specificity are analyzed as primary assessments of diagnostic accuracy. The evaluation of sensitivity and specificity serves as a basis for the computation of predictive values and can be performed by the facROC function facBinary:
fB <- facBinary(formula, id, gold, data, logit=FALSE)
Hereby the factorial structure of the trial can be taken into consideration with the help of the formula parameter that specifies the model in the usual way (e.g., formula = testresult ~ rater*method). The parameter "id" indicates the patient's id and the parameter "gold" assigns the patient's true health status. (For more details, as well as more options and parameters, see the facROC manual.) To calculate and evaluate predictive values, the result of the analysis of sensitivity and specificity (i.e., a facBinary object) can be passed to the facPV function:
facPV(fB, prev, logit=FALSE, test=FALSE)
The prevalence parameter "prev" has to be passed to the facPV function as a two-dimensional vector: prev = c(number of diseased patients in the prevalence study, total number of patients). The options "logit" and "test" are logical flags indicating whether a logistic model should be fitted and whether hypotheses on the predictive values should be tested.
If the data of the original study determining sensitivity and specificity is not available and hence the function facBinary cannot be called, the function facPV can be used instead:
facPV(se, sp, n1, n0, prev, logit=FALSE)
Hereby "se" denotes a vector of sensitivities under different conditions and "sp" denotes the corresponding vector of specificities. Note that it is also possible to pass one-dimensional vectors to the function facPV. "n1" and "n0" characterize the sample sizes of diseased and non-diseased patients used to estimate "se" and "sp". Again, the logical flag "logit" indicates whether logistic confidence intervals should be computed. If the parameters to determine predictive values are provided without a facBinary object, the test option is not available. As the covariance matrices of sensitivity and specificity are not at hand in this case, the test statistic cannot be computed: it summarizes predictive values of different conditions, and these might be dependent if the corresponding sensitivities and specificities are dependent. As confidence intervals are computed for each condition separately, they can nevertheless be calculated.
The package facROC will shortly be available on CRAN. Currently it is uploaded at http://github.com/KatharinaLange. To install directly from github, the package devtools is needed (available on CRAN).
install.packages("devtools")
library(devtools)
install_github(repo="facROC", username="KatharinaLange")
The built Linux (tar.gz) and Windows (zip) versions of this package are available for download in the repository at http://github.com/KatharinaLange/facROC-build.

6. Discussion

In this paper, we suggested a new method to translate the results of diagnostic trials for use in clinical practice by means of predictive values. The proposed method provides an approach to calculate confidence intervals according to factors influencing the risk of disease. As in our approach the pre-test probability of disease has been assessed in prior studies, no prevalence needs to be estimated from the data of the current trial. Thus, this method of analysis can also be used for calculating predictive values in case-control studies, where estimating prevalences is not possible. Note that in our approach, we assume that the prevalence is independent of sensitivity and specificity, which means that sensitivity and specificity have to be homogeneous in the different risk groups. Thus, with this methodology, it is possible to estimate predictive values for risk groups that are not included in the original trial. Note that the assumption of homogeneity has to be considered carefully before this method is applied. (For example, the accuracy of imaging devices, as well as the risk of disease, might depend on the patient's BMI. In this case, sensitivity and specificity are no longer equal in the different risk groups, and hence a stratified estimation for each group has to be performed.)
We, furthermore, considered a set-up in which it is possible to compare the predictive values of different diagnostic tests by means of the ANOVA-type statistic. Many diagnostic trials are imaging studies, and the investigation of the images is therefore mostly carried out by several readers. As our approach uses the multivariate delta theorem, this method of analysis can easily be extended to multiple-reader diagnostic trials by using a vector of indices $(r, m)$ indicating the reader and the method. Hypotheses can be tested by choosing appropriate contrast matrices, referring to the theory of linear models. Furthermore, confidence intervals for arbitrary contrasts $c^T p_\pm^g$ can be computed.
Hence, in multiple-reader trials we can assess the difference between two diagnostic tests by averaging over the different readers [9,10]. In clinical practice, an initial suspicion is sometimes confirmed not by a single diagnostic test but by several. In this case, the pre-test probability of disease increases with each positive diagnostic result. In order to calculate and analyze predictive values in these cases, the following information is required:
  • the probability of disease before the first test was carried out, i.e., the “pre-testing” probability of disease, which might depend on several risk factors and has to be determined from prevalence studies;
  • the sensitivity and the specificity of each diagnostic test performed as well as the correlation between these tests.
With the help of the second item, a global sensitivity and a global specificity for the whole testing procedure can be calculated. (This can be a complex problem if the diagnostic tests are dependent; for more details see, e.g., [11].) In combination with the pre-testing probability of disease, predictive values can then be calculated in the way proposed here. Note that a pre-test or pre-testing probability is always required when predictive values are computed. The methodology developed here might help to answer two of the questions most important to clinicians: “How likely is it that the patient has the condition?” and “How likely is it that the patient is free of disease?” Nevertheless, it is important to point out that predictive values cannot replace sensitivity and specificity. As predictive values have a more concrete, and thus more user-friendly, interpretation than sensitivity and specificity, they might also be considered as accuracy measures when the usefulness of a new diagnostic agent is evaluated. However, because these measures depend on the prevalence, regulatory authorities advise caution when using predictive values for the evaluation of diagnostic trials. The EMEA states that “predictive values must be reported with caution and only when the study sample is considered to be representative of the prevalence in the real world” [12], and the FDA recommends that “the trials include the intended population in the appropriate clinical setting” [13]. Following these recommendations, predictive values are calculated for a patient with a mean pre-test risk of disease, but the results will not be valid for a patient with a known higher or lower probability of disease. Hence, we obtain a result for an average patient but no general result, whereas the main purpose of a diagnostic trial is to evaluate whether or not a new diagnostic agent increases the probability of a correct diagnosis in general.
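To make the combination of a pre-test probability with test characteristics concrete, the following sketch applies Bayes’ theorem directly. The function names are ours, not part of the facROC package, and the serial-combination rule assumes conditional independence of the two tests — the very assumption that, as noted above, may fail in practice:

```python
def predictive_values(se, sp, pi):
    """Positive and negative predictive value via Bayes' theorem for a
    patient whose pre-test probability of disease is `pi`."""
    ppv = se * pi / (se * pi + (1 - sp) * (1 - pi))
    npv = sp * (1 - pi) / (sp * (1 - pi) + (1 - se) * pi)
    return ppv, npv

def serial_combination(se1, sp1, se2, sp2):
    """Global sensitivity/specificity of a two-test sequence that is read
    as positive only if BOTH tests are positive -- valid only under
    conditional independence of the tests given disease status."""
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

# The same test yields very different predictive values in different
# risk groups (se = 0.90, sp = 0.85 throughout):
ppv_low, npv_low = predictive_values(0.90, 0.85, 0.05)    # low-risk group
ppv_high, npv_high = predictive_values(0.90, 0.85, 0.50)  # high-risk group
```

For the low-risk group the positive predictive value is only 0.24, while for the high-risk group it rises to about 0.86 — which is precisely why risk-group-specific pre-test probabilities matter.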
In contrast, sensitivity and specificity assess the effect of a new diagnostic agent independently of any prior probability of disease and any prevalence, and thus allow us to evaluate the quality of a new diagnostic agent in general. Therefore, predictive values should rather be avoided when the usefulness of a new diagnostic agent is evaluated; they should only be calculated for use in clinical practice.

References

  1. Mercaldo, M.D.; Lau, F.L.; Zhou, X.H. Confidence intervals for predictive values with an emphasis to case-control studies. Stat. Med. 2007, 26, 2170–2183. [Google Scholar] [CrossRef] [PubMed]
  2. Bayes, T. An essay towards solving a problem in the doctrine of chances. Philos. Trans. 1763, 53, 370–418. [Google Scholar] [CrossRef]
  3. Diamond, G.A.; Forrester, J.S. Analysis of probability as an aid in the clinical diagnosis of coronary-artery disease. N. Engl. J. Med. 1979, 300, 1350–1358. [Google Scholar] [CrossRef] [PubMed]
  4. Brunner, E.; Munzel, U.; Puri, M.L. The multivariate nonparametric Behrens-Fisher problem. J. Stat. Plan. Inference 2002, 108, 37–53. [Google Scholar] [CrossRef]
  5. Munzel, U.; Brunner, E. Nonparametric methods in multivariate factorial designs. J. Stat. Plan. Inference 2000, 88, 117–132. [Google Scholar] [CrossRef]
  6. R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing: Vienna, Austria. Available online: http://www.R-project.org (accessed on 23 January 2013).
  7. Alderman, E.L.; Corley, S.D.; Fisher, L.D. Five-year angiographic follow-up of factors associated with progression of coronary artery disease in the Coronary Artery Surgery Study (CASS). J. Am. Coll. Cardiol. 1993, 22, 1141–1154. [Google Scholar] [CrossRef]
  8. Miller, J.M.; Rochitte, C.E.; Dewey, M. Diagnostic performance of coronary angiography by 64-Row CT. N. Engl. J. Med. 2008, 359, 2324–2336. [Google Scholar] [CrossRef] [PubMed]
  9. Kaufmann, J.; Werner, C.; Brunner, E. Nonparametric methods for analyzing the accuracy of diagnostic tests with multiple readers. Stat. Methods Med. Res. 2005, 14, 129–146. [Google Scholar] [CrossRef] [PubMed]
  10. Lange, K.; Brunner, E. Sensitivity, specificity and ROC-curves in multiple reader diagnostic trials—A unified, nonparametric approach. Stat. Methodol. 2012, 9, 490–500. [Google Scholar] [CrossRef]
  11. Gardner, I.A.; Stryhn, H.; Lind, P.; Collins, M.T. Conditional dependence between tests affects the diagnosis and surveillance of animal diseases. Prev. Vet. Med. 2000, 45, 107–122. [Google Scholar] [CrossRef]
  12. EMEA. Guideline on Clinical Evaluation of Diagnostic Agents (Draft). 2008. Available online: http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500003584.pdf (accessed on 23 January 2013).
  13. FDA. Guidance for Industry: Developing Medical Imaging Drug and Biological Products, Part 2: Clinical Indications. 2004. Available online: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/ucm071603.pdf (accessed on 23 January 2013). [Google Scholar]
  14. Box, G.E.P. Some theorems on quadratic forms applied in the study of analysis of variance problems, I. Effect of inequality of variance in the one-way classification. Ann. Math. Stat. 1954, 25, 290–302. [Google Scholar] [CrossRef]

A. Proof: Asymptotic Distribution of $\mathbf{p}_+$ and $\mathbf{p}_-$

According to the multivariate central limit theorem, we obtain that
$$\sqrt{n_1}\,\big(\widehat{\mathbf{se}} - \mathbf{se}\big) \;\xrightarrow{\mathcal{L}}\; \mathbf{U} \sim N(\mathbf{0}, \mathbf{V}_{se})$$
where $\mathbf{V}_{se}$ denotes the covariance matrix of $\mathbf{X}_{11}$. Applying the central limit theorem to the estimator of the specificities similarly leads to
$$\sqrt{n_0}\,\big(\widehat{\mathbf{sp}} - \mathbf{sp}\big) \;\xrightarrow{\mathcal{L}}\; \mathbf{W} \sim N(\mathbf{0}, \mathbf{V}_{sp})$$
where $\mathbf{V}_{sp} = \operatorname{Cov}(\mathbf{X}_{01})$. For the prevalences in the different risk groups, the univariate central limit theorem leads to
$$\sqrt{m_g}\,\big(\hat{\pi}_g - \pi_g\big) \;\xrightarrow{\mathcal{L}}\; Q_g \sim N(0, \sigma_g^2), \qquad g = 1, \ldots, G$$
where $\sigma_g^2 = \pi_g(1 - \pi_g)$. As $N/n_i \to d_i$, $i = 0, 1$, and $N/m_g \to e_g$, $g = 1, \ldots, G$, by assumption, Equations (3)–(5) can be rewritten as
$$\sqrt{N}\,\big(\widehat{\mathbf{se}} - \mathbf{se}\big) \;\xrightarrow{\mathcal{L}}\; \mathbf{U} \sim N(\mathbf{0}, d_1 \mathbf{V}_{se}), \qquad \sqrt{N}\,\big(\widehat{\mathbf{sp}} - \mathbf{sp}\big) \;\xrightarrow{\mathcal{L}}\; \mathbf{W} \sim N(\mathbf{0}, d_0 \mathbf{V}_{sp}),$$
and
$$\sqrt{N}\,\big(\hat{\pi}_g - \pi_g\big) \;\xrightarrow{\mathcal{L}}\; Q_g \sim N(0, e_g \sigma_g^2), \qquad g = 1, \ldots, G$$
The estimators of sensitivity, specificity and prevalence are independent random variables and thus we obtain that
$$\sqrt{N} \left[ \begin{pmatrix} \widehat{\mathbf{se}} \\ \widehat{\mathbf{sp}} \\ \hat{\pi}_g \end{pmatrix} - \begin{pmatrix} \mathbf{se} \\ \mathbf{sp} \\ \pi_g \end{pmatrix} \right] \;\xrightarrow{\mathcal{L}}\; \mathbf{B} \sim N\big(\mathbf{0},\; d_1 \mathbf{V}_{se} \oplus d_0 \mathbf{V}_{sp} \oplus e_g \sigma_g^2\big)$$
where $\oplus$ denotes the direct sum.
The functions $f_+$ and $f_-$ given in Equations (1) and (2) map sensitivity, specificity and the prevalence onto the positive and the negative predictive value, respectively. Let
$$\mathbf{f}_+(\mathbf{se}, \mathbf{sp}, \pi_g) = \big( f_+(se^{(1)}, sp^{(1)}, \pi_g), \ldots, f_+(se^{(d)}, sp^{(d)}, \pi_g) \big)'$$
and
$$\mathbf{f}_-(\mathbf{se}, \mathbf{sp}, \pi_g) = \big( f_-(se^{(1)}, sp^{(1)}, \pi_g), \ldots, f_-(se^{(d)}, sp^{(d)}, \pi_g) \big)'$$
denote the multivariate versions of $f_+$ and $f_-$, and let $\mathbf{Df}_+ = \mathbf{Df}_+(\mathbf{se}, \mathbf{sp}, \pi_g)$ and $\mathbf{Df}_- = \mathbf{Df}_-(\mathbf{se}, \mathbf{sp}, \pi_g)$ denote the corresponding Jacobian matrices of all first-order partial derivatives at $(\mathbf{se}, \mathbf{sp}, \pi_g)$. Then, applying the multivariate delta theorem leads to:
$$\sqrt{N}\,\big(\hat{\mathbf{p}}_{+g} - \mathbf{p}_{+g}\big) = \sqrt{N} \left[ \mathbf{f}_+ \begin{pmatrix} \widehat{\mathbf{se}} \\ \widehat{\mathbf{sp}} \\ \hat{\pi}_g \end{pmatrix} - \mathbf{f}_+ \begin{pmatrix} \mathbf{se} \\ \mathbf{sp} \\ \pi_g \end{pmatrix} \right] \;\xrightarrow{\mathcal{L}}\; \mathbf{B}_{+g} \sim N\Big( \mathbf{0},\; \mathbf{Df}_+ \big( d_1 \mathbf{V}_{se} \oplus d_0 \mathbf{V}_{sp} \oplus e_g \sigma_g^2 \big) \mathbf{Df}_+' \Big), \qquad g = 1, \ldots, G$$
and
$$\sqrt{N}\,\big(\hat{\mathbf{p}}_{-g} - \mathbf{p}_{-g}\big) = \sqrt{N} \left[ \mathbf{f}_- \begin{pmatrix} \widehat{\mathbf{se}} \\ \widehat{\mathbf{sp}} \\ \hat{\pi}_g \end{pmatrix} - \mathbf{f}_- \begin{pmatrix} \mathbf{se} \\ \mathbf{sp} \\ \pi_g \end{pmatrix} \right] \;\xrightarrow{\mathcal{L}}\; \mathbf{B}_{-g} \sim N\Big( \mathbf{0},\; \mathbf{Df}_- \big( d_1 \mathbf{V}_{se} \oplus d_0 \mathbf{V}_{sp} \oplus e_g \sigma_g^2 \big) \mathbf{Df}_-' \Big), \qquad g = 1, \ldots, G$$
Now $\mathbf{Df}_+ = \mathbf{Df}_+(\mathbf{se}, \mathbf{sp}, \pi_g)$ and $\mathbf{Df}_- = \mathbf{Df}_-(\mathbf{se}, \mathbf{sp}, \pi_g)$ are estimated by $\widehat{\mathbf{Df}}_+ = \mathbf{Df}_+(\widehat{\mathbf{se}}, \widehat{\mathbf{sp}}, \hat{\pi}_g)$ and $\widehat{\mathbf{Df}}_- = \mathbf{Df}_-(\widehat{\mathbf{se}}, \widehat{\mathbf{sp}}, \hat{\pi}_g)$, respectively. The quantities $d_i$ are estimated by $N/n_i$, $i = 0, 1$, and $e_g$ is estimated by $N/m_g$ for all $g = 1, \ldots, G$. We further estimate the covariance matrices $\mathbf{V}_{se}$ and $\mathbf{V}_{sp}$ by the sample covariance matrices
$$\widehat{\mathbf{V}}_{se} = \frac{1}{n_1 - 1} \sum_{k=1}^{n_1} \big(\mathbf{X}_{1k} - \widehat{\mathbf{se}}\big)\big(\mathbf{X}_{1k} - \widehat{\mathbf{se}}\big)'$$
and
$$\widehat{\mathbf{V}}_{sp} = \frac{1}{n_0 - 1} \sum_{k=1}^{n_0} \big(\mathbf{X}_{0k} - \widehat{\mathbf{sp}}\big)\big(\mathbf{X}_{0k} - \widehat{\mathbf{sp}}\big)'$$
respectively, and for convenience use the unbiased empirical variance $\hat{\sigma}_g^2 = \frac{m_g}{m_g - 1}\,\hat{\pi}_g (1 - \hat{\pi}_g)$ as the estimator of $\sigma_g^2$.
Plugging in these empirical counterparts and applying Slutsky’s theorem leads to our main result: for each risk group $g = 1, \ldots, G$, the statistics $\sqrt{N}\,(\hat{\mathbf{p}}_{+g} - \mathbf{p}_{+g})$ and $\sqrt{N}\,(\hat{\mathbf{p}}_{-g} - \mathbf{p}_{-g})$ asymptotically follow a multivariate normal distribution with mean $\mathbf{0}$ and covariance matrices
$$\widehat{\mathbf{V}}_{+g} = \widehat{\mathbf{Df}}_+ \Big( \tfrac{N}{n_1} \widehat{\mathbf{V}}_{se} \oplus \tfrac{N}{n_0} \widehat{\mathbf{V}}_{sp} \oplus \tfrac{N}{m_g} \hat{\sigma}_g^2 \Big) \widehat{\mathbf{Df}}_+' \qquad \text{and} \qquad \widehat{\mathbf{V}}_{-g} = \widehat{\mathbf{Df}}_- \Big( \tfrac{N}{n_1} \widehat{\mathbf{V}}_{se} \oplus \tfrac{N}{n_0} \widehat{\mathbf{V}}_{sp} \oplus \tfrac{N}{m_g} \hat{\sigma}_g^2 \Big) \widehat{\mathbf{Df}}_-'$$
respectively.
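The plug-in procedure can be illustrated numerically. The following sketch, ours and not taken from the facROC package, treats the single-modality case, where the Jacobian of $f_+$ reduces to a gradient, and uses analytic partial derivatives together with the binomial variances $se(1-se)/n_1$, $sp(1-sp)/n_0$ and $\pi_g(1-\pi_g)/m_g$ (the asymptotic version, without the unbiasedness correction factor $m_g/(m_g-1)$):

```python
def f_plus(se, sp, pi):
    # positive predictive value as a function of (se, sp, pi)
    return se * pi / (se * pi + (1 - sp) * (1 - pi))

def delta_var_ppv(se, sp, pi, n1, n0, mg):
    """Delta-method variance of the estimated PPV for one modality:
    squared gradient of f+ times the (diagonal) covariance of the
    independent estimators of sensitivity, specificity, prevalence."""
    denom = se * pi + (1 - sp) * (1 - pi)
    d_se = pi * (1 - sp) * (1 - pi) / denom ** 2   # df+/dse
    d_sp = se * pi * (1 - pi) / denom ** 2         # df+/dsp
    d_pi = se * (1 - sp) / denom ** 2              # df+/dpi
    return (d_se ** 2 * se * (1 - se) / n1
            + d_sp ** 2 * sp * (1 - sp) / n0
            + d_pi ** 2 * pi * (1 - pi) / mg)
```

An approximate additive 95% confidence interval is then `f_plus(...) ± 1.96 * sqrt(delta_var_ppv(...))`; the logistic interval compared in the simulation study would be obtained analogously on the log-odds scale.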

B. t-Approximation for Small Sample Sizes

In order to improve the quality of our methods for small sample sizes, a $t_\nu$-approximation of $\sqrt{N}\,\big(\hat{p}_{g,\pm}^{(m)} - p_{g,\pm}^{(m)}\big) \big/ \sqrt{\hat{v}_{\pm g}[m, m]}$ is also provided. To assess the degrees of freedom $\nu$ of the $t$-distribution, we use an approach based on the Box approximation [14]: the distribution of $\hat{v}_{\pm g}[m, m]$ is approximated by a scaled $\chi_\nu^2$-distribution, i.e., by the distribution of a random variable $g \cdot Z_\nu$, where $Z_\nu \sim \chi_\nu^2$ and $\nu$ and $g$ are constants chosen such that the first two moments coincide. We hence determine $\nu$ by
$$\nu = \frac{2 \cdot \mathrm{E}\big( f_{se} \cdot d_1 \cdot \hat{v}_{se}[m,m] + f_{sp} \cdot d_0 \cdot \hat{v}_{sp}[m,m] + f_\pi \cdot e_g \cdot \hat{\sigma}_g^2 \big)^2}{\operatorname{Var}\big( f_{se} \cdot d_1 \cdot \hat{v}_{se}[m,m] + f_{sp} \cdot d_0 \cdot \hat{v}_{sp}[m,m] + f_\pi \cdot e_g \cdot \hat{\sigma}_g^2 \big)}$$
$$= \frac{2 \cdot \big( f_{se} \cdot d_1 \cdot \mathrm{E}(\hat{v}_{se}[m,m]) + f_{sp} \cdot d_0 \cdot \mathrm{E}(\hat{v}_{sp}[m,m]) + f_\pi \cdot e_g \cdot \mathrm{E}(\hat{\sigma}_g^2) \big)^2}{f_{se}^2 \cdot d_1^2 \cdot \operatorname{Var}(\hat{v}_{se}[m,m]) + f_{sp}^2 \cdot d_0^2 \cdot \operatorname{Var}(\hat{v}_{sp}[m,m]) + f_\pi^2 \cdot e_g^2 \cdot \operatorname{Var}(\hat{\sigma}_g^2)} \tag{6}$$
where $\hat{v}_{se}[m,m]$ and $\hat{v}_{sp}[m,m]$ denote the empirical variances of the $X_{ik}^{(m)}$, $k = 1, \ldots, n_i$, for $i = 1$ and $i = 0$, and $f_{se} = \big(\partial f / \partial se\big)^2$ denotes the squared partial derivative of $f$ with respect to $se$ at $(se^{(m)}, sp^{(m)}, \pi_g)$; $f_{sp}$ and $f_\pi$ are defined analogously. The second equality uses the linearity of the expectation and the independence of the three summands, which squares the coefficients in the denominator. As $\nu$ contains unknown parameters, it has to be estimated: we estimate the partial derivatives at $(se^{(m)}, sp^{(m)}, \pi_g)$ by the partial derivatives at $(\hat{se}^{(m)}, \hat{sp}^{(m)}, \hat{\pi}_g)$, and further estimate $d_i$ by $N/n_i$, $i = 0, 1$, and $e_g$ by $N/m_g$. As $\hat{v}_{se}[m,m]$, $\hat{v}_{sp}[m,m]$ and $\hat{\sigma}_g^2$ are unbiased by construction, the numerator can easily be determined by means of these plug-in estimates. For the denominator, the variance of each term can be estimated separately because the three summands are independent. We hence have to estimate the variances of
$$\hat{v}_{se}[m,m] = \frac{1}{n_1 - 1} \sum_{k=1}^{n_1} \big( X_{1k}^{(m)} - \bar{X}_{1\cdot}^{(m)} \big)^2, \qquad \hat{v}_{sp}[m,m] = \frac{1}{n_0 - 1} \sum_{k=1}^{n_0} \big( X_{0k}^{(m)} - \bar{X}_{0\cdot}^{(m)} \big)^2, \qquad \text{and} \qquad \hat{\sigma}_g^2 = \frac{m_g}{m_g - 1}\,\hat{\pi}_g (1 - \hat{\pi}_g)$$
Note that $\hat{\sigma}_g^2$ is also the empirical variance of a Bernoulli distributed random variable and can therefore be represented in the same manner as $\hat{v}_{se}[m,m]$ and $\hat{v}_{sp}[m,m]$. Hence, we only have to determine the variance of the empirical variance of a Bernoulli distributed random variable, which we illustrate with the example of $\hat{v}_{se}[m,m]$. By defining the $\operatorname{Bin}(n_1, se)$-distributed random variable $S_1 = \sum_{k=1}^{n_1} X_{1k}^{(m)}$, and using that $\big(X_{1k}^{(m)}\big)^2 = X_{1k}^{(m)}$ for binary data, $\hat{v}_{se}[m,m]$ can be represented as
$$\hat{v}_{se}[m,m] = \frac{1}{n_1 - 1} \sum_{k=1}^{n_1} \big( X_{1k}^{(m)} - \bar{X}_{1\cdot}^{(m)} \big)^2 = \frac{1}{n_1 - 1} \left[ \sum_{k=1}^{n_1} X_{1k}^{(m)} - \frac{1}{n_1} \Big( \sum_{k=1}^{n_1} X_{1k}^{(m)} \Big)^2 \right] = \frac{1}{n_1 - 1} \left( S_1 - \frac{1}{n_1} S_1^2 \right)$$
We therefore obtain for the variance of $\hat{v}_{se}[m,m]$:
$$\operatorname{Var}(\hat{v}_{se}[m,m]) = \frac{1}{(n_1 - 1)^2} \left\{ \mathrm{E}\left[ \Big( S_1 - \frac{1}{n_1} S_1^2 \Big)^2 \right] - \mathrm{E}\left[ S_1 - \frac{1}{n_1} S_1^2 \right]^2 \right\}$$
$$= \frac{1}{(n_1 - 1)^2} \left\{ \mathrm{E}(S_1^2) - \frac{2}{n_1} \mathrm{E}(S_1^3) + \frac{1}{n_1^2} \mathrm{E}(S_1^4) - \mathrm{E}(S_1)^2 + \frac{2}{n_1} \mathrm{E}(S_1) \cdot \mathrm{E}(S_1^2) - \frac{1}{n_1^2} \mathrm{E}(S_1^2)^2 \right\}$$
As the first four moments of a binomial distribution can easily be determined, the estimator of $\nu$ is obtained by plugging all estimates into Equation (6).
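This closing step — expressing $\operatorname{Var}(\hat{v}_{se}[m,m])$ through the first four moments of $S_1$ — can be checked numerically. In the following sketch (function names are ours) the binomial moments are obtained by direct enumeration and plugged into the representation derived above:

```python
import math

def binom_pmf(n, p, k):
    # probability mass of Bin(n, p) at k
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def raw_moment(n, p, r):
    # r-th raw moment E(S^r) of S ~ Bin(n, p), by enumeration
    return sum(k ** r * binom_pmf(n, p, k) for k in range(n + 1))

def var_empirical_variance(n, p):
    """Variance of the empirical variance of n Bernoulli(p) observations,
    using the representation v = (S - S^2/n)/(n - 1) with S ~ Bin(n, p)."""
    m1, m2, m3, m4 = (raw_moment(n, p, r) for r in (1, 2, 3, 4))
    e_v = m1 - m2 / n                         # E[S - S^2/n]
    e_v2 = m2 - (2 / n) * m3 + m4 / n ** 2    # E[(S - S^2/n)^2]
    return (e_v2 - e_v ** 2) / (n - 1) ** 2
```

Enumerating the distribution of $v = (S - S^2/n)/(n-1)$ directly reproduces the same value, and the mean of $v$ equals $p(1-p)$, confirming the unbiasedness used in the numerator of Equation (6).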

C. Simulation Results for the Negative Predictive Value

Table A1. Summary of $p_-$ coverage probabilities, where the cell values denote the coverage probability for one fixed parameter, averaging over the remaining parameters. Hereby N-Approx and t-Approx are abbreviations for the normal and the $t$-approximation; $\pi$ fix denotes the method of [1], where $\pi$ is assumed to be fixed.
| Fixed Parameter | Additive N-Approx | Additive t-Approx | Additive π fix | Logistic N-Approx | Logistic t-Approx | Logistic π fix | Failure (logistic & t-Approx) |
|---|---|---|---|---|---|---|---|
| π = 0.05 | 0.9389 | 0.9471 | 0.8441 | 0.9546 | 0.9576 | 0.8590 | 0.047% |
| π = 0.25 | 0.9357 | 0.9437 | 0.8231 | 0.9548 | 0.9580 | 0.8384 | 0.040% |
| π = 0.5 | 0.9163 | 0.9411 | 0.6851 | 0.9592 | 0.9638 | 0.6982 | 0.244% |
| se = 0.5 | 0.9393 | 0.9484 | 0.6872 | 0.9540 | 0.9554 | 0.6900 | 0.064% |
| se = 0.75 | 0.9346 | 0.9432 | 0.7816 | 0.9547 | 0.9573 | 0.7901 | 0.064% |
| se = 0.85 | 0.9278 | 0.9399 | 0.8235 | 0.9567 | 0.9617 | 0.8408 | 0.076% |
| se = 0.9 | 0.9195 | 0.9446 | 0.8441 | 0.9593 | 0.9648 | 0.8733 | 0.245% |
| sp = 0.5 | 0.9307 | 0.9434 | 0.8031 | 0.9564 | 0.9598 | 0.8173 | 0.114% |
| sp = 0.75 | 0.9301 | 0.9439 | 0.7828 | 0.9562 | 0.9598 | 0.7974 | 0.114% |
| sp = 0.85 | 0.9304 | 0.9445 | 0.7769 | 0.9562 | 0.9598 | 0.7913 | 0.110% |
| sp = 0.9 | 0.9300 | 0.9442 | 0.7736 | 0.9559 | 0.9596 | 0.7881 | 0.112% |
| m_g = 100 | 0.9178 | 0.9416 | 0.6536 | 0.9595 | 0.9636 | 0.6665 | 0.244% |
| m_g = 500 | 0.9351 | 0.9435 | 0.8267 | 0.9544 | 0.9578 | 0.8421 | 0.047% |
| m_g = 1,000 | 0.9380 | 0.9468 | 0.8720 | 0.9547 | 0.9580 | 0.8870 | 0.046% |
| n_0 = n_1 = 50 | 0.9217 | 0.9433 | 0.8613 | 0.9600 | 0.9645 | 0.8904 | 0.204% |
| n_0 = n_1 = 100 | 0.9307 | 0.9406 | 0.8263 | 0.9552 | 0.9599 | 0.8395 | 0.065% |
| n_0 = n_1 = 500 | 0.9385 | 0.9480 | 0.6648 | 0.9534 | 0.9549 | 0.6657 | 0.068% |
| Overall | 0.9303 | 0.9440 | 0.7841 | 0.9562 | 0.9600 | 0.7985 | 0.112% |
Table A2. Summary of $p_-$ confidence interval lengths, where the cell values denote the confidence interval length for one fixed parameter, averaging over the remaining parameters. Hereby N-Approx and t-Approx are abbreviations for the normal and the $t$-approximation; $\pi$ fix denotes the method of [1], where $\pi$ is assumed to be fixed.
| Fixed Parameter | Additive N-Approx | Additive t-Approx | Additive π fix | Logistic N-Approx | Logistic t-Approx | Logistic π fix |
|---|---|---|---|---|---|---|
| π = 0.05 | 0.1522 | 0.1554 | 0.1219 | 0.1533 | 0.1578 | 0.1231 |
| π = 0.25 | 0.0798 | 0.0812 | 0.0595 | 0.0822 | 0.0844 | 0.0611 |
| π = 0.5 | 0.0229 | 0.0243 | 0.0117 | 0.0254 | 0.0289 | 0.0122 |
| se = 0.5 | 0.1036 | 0.1047 | 0.0632 | 0.1042 | 0.1065 | 0.0630 |
| se = 0.75 | 0.0910 | 0.0920 | 0.0694 | 0.0920 | 0.0937 | 0.0696 |
| se = 0.85 | 0.0781 | 0.0801 | 0.0655 | 0.0803 | 0.0832 | 0.0669 |
| se = 0.9 | 0.0672 | 0.0710 | 0.0593 | 0.0712 | 0.0781 | 0.0624 |
| sp = 0.5 | 0.1112 | 0.1137 | 0.0872 | 0.1131 | 0.1173 | 0.0882 |
| sp = 0.75 | 0.0824 | 0.0844 | 0.0620 | 0.0844 | 0.0878 | 0.0632 |
| sp = 0.85 | 0.0748 | 0.0766 | 0.0555 | 0.0768 | 0.0799 | 0.0567 |
| sp = 0.9 | 0.0715 | 0.0732 | 0.0527 | 0.0734 | 0.0765 | 0.0539 |
| m_g = 100 | 0.1076 | 0.1101 | 0.0643 | 0.1108 | 0.1159 | 0.0655 |
| m_g = 500 | 0.0764 | 0.0781 | 0.0643 | 0.0778 | 0.0804 | 0.0655 |
| m_g = 1,000 | 0.0710 | 0.0727 | 0.0643 | 0.0722 | 0.0748 | 0.0655 |
| n_0 = n_1 = 50 | 0.1100 | 0.1140 | 0.0953 | 0.1136 | 0.1205 | 0.0979 |
| n_0 = n_1 = 100 | 0.0862 | 0.0876 | 0.0675 | 0.0878 | 0.0899 | 0.0683 |
| n_0 = n_1 = 500 | 0.0587 | 0.0593 | 0.0302 | 0.0594 | 0.0607 | 0.0303 |
| Overall | 0.0850 | 0.0869 | 0.0643 | 0.0869 | 0.0904 | 0.0655 |