ROC Analyses Based on Measuring Evidence Using the Relative Belief Ratio

ROC (Receiver Operating Characteristic) analyses are considered under a variety of assumptions concerning the distributions of a measurement X in two populations. These include the binormal model as well as nonparametric models where little is assumed about the form of distributions. The methodology is based on a characterization of statistical evidence which is dependent on the specification of prior distributions for the unknown population distributions as well as for the relevant prevalence w of the disease in a given population. In all cases, elicitation algorithms are provided to guide the selection of the priors. Inferences are derived for the AUC (Area Under the Curve), the cutoff c used for classification as well as the error characteristics used to assess the quality of the classification.


Introduction
An ROC (Receiver Operating Characteristic) analysis is used in medical science to determine whether or not a real-valued diagnostic variable X for a disease or condition is useful. If the diagnostic indicates that an individual has the condition, then this will typically mean that a more expensive or invasive medical procedure is undertaken. So it is important to assess the accuracy of the diagnostic variable X. These methods have a wider class of applications but our terminology will focus on the medical context.
An approach to such analyses is presented here that is based on a characterization of statistical evidence and which incorporates all available information as expressed via prior probability distributions. For example, while p-values are often used in such analyses, there are questions concerning the validity of these quantities as characterizations of statistical evidence. As will be discussed, there are many advantages to the framework adopted here.
A common approach to the assessment of the diagnostic variable X is to estimate its AUC (Area Under the Curve), namely, the probability that an individual sampled from the diseased population will have a higher value of diagnostic variable X than an individual independently sampled from the nondiseased population. A good diagnostic should give a value of the AUC near 1 while a value near 1/2 indicates a poor diagnostic test (if the AUC is near 0, then the classification is reversed). It is possible, however, that a diagnostic with AUC ≈ 1 may not be suitable (see Examples 1 and 6). In particular, a cutoff value c needs to be selected so that if X > c, then an individual is classified as requiring the more invasive procedure. Inferences about the error characteristics for the combination (X, c), such as the false positive rate, etc., are also required.
This paper is concerned with inferences about the AUC, the cutoff c and the error characteristics of the classification based on a valid measure of evidence. A key aspect of the analysis is the relevant prevalence w. The phrase "relevant prevalence" means that X will be applied to a certain population, such as those patients who exhibit certain symptoms, and w represents the proportion of this subpopulation who are diseased. The value of w may vary by geography, medical unit, time, etc. To make a valid assessment of X in an application, it is necessary that the information available concerning w be incorporated. This information is expressed here via an elicited prior probability distribution for w, which may be degenerate at a single value if w is assumed known, or be quite diffuse when little is known about w. In fact, all unknown population quantities are given elicited priors. There are many contexts where data are available relevant to the value of w and this leads to a full posterior analysis for w as well as for the other quantities of interest. Even when such data are not available, however, it is still possible to take the prior for w into account so the uncertainties concerning w always play a role in the analysis and this is a unique aspect of the approach taken here.
While there are some methods available for the choice of c, these often do not depend on the prevalence w which is a key factor in determining the true error characteristics of (X, c) in an application, see [1][2][3][4][5]. So it is preferable to take w into account when considering the value of a diagnostic in a particular context. One approach to choosing c is to minimize some error criterion that depends on w to obtain c opt . As will be demonstrated in the examples, however, sometimes c opt results in a classification that is useless. In such a situation a suboptimal choice of c is required but the error characteristics can still be based on what is known about w so that these are directly relevant to the application.
Others have pointed out deficiencies in the AUC statistic and proposed alternatives. For example, it can be argued that taking into account the costs associated with various misclassification errors is necessary and that using the AUC is implicitly making unrealistic assumptions concerning these costs, see [6]. While costs are relevant, costs are not incorporated here as these are often difficult to quantify. Our goal is to express clearly what the evidence is saying about how good (X, c) is via an assessment of its error characteristics. With the error characteristics in hand, a user can decide whether or not the costs of misclassifications are such that the diagnostic is usable. This may be a qualitative assessment although, if numerical costs are available, these could be subsequently incorporated. The principle here is that economic or social factors be considered separately from what the evidence in the data says, as it is a goal of statistics to clearly state the latter.
The framework for the analysis is Bayesian as proper priors are placed on the unknown distribution F ND (the distribution of X in the nondiseased population), on F D (the distribution of X in the diseased population) and the prevalence w. In all the problems considered, elicitation algorithms are presented for how to choose these priors. Moreover, all inferences are based on the relative belief characterization of statistical evidence where, for a given quantity, evidence in favor (against) is obtained when posterior beliefs are greater (less) than prior beliefs, see Section 2.2 for discussion and [7]. So evidence is determined by how the data change beliefs. Section 2 discusses the general framework, defines relevant quantities and provides an outline for how specific relative belief inferences are determined. Section 3 develops the inferences for the quantities of interest for three contexts (1) X is an ordered discrete variable with and without constraints on (F ND , F D ) (2) X is a continuous variable and (F ND , F D ) are normal distributions (the binormal model) (3) X is a continuous variable and no constraints are placed on (F ND , F D ).
There is previous work on using Bayesian methods in ROC analyses. For example, a Bayesian analysis for the binormal model when there are covariates present is developed in [8]. An estimate of the ROC using the Bayesian bootstrap is discussed in [9]. A Bayesian semiparametric analysis using a Dirichlet mixture process prior is developed in [10,11]. The sampling regime where the data can be used for inference about the relevant prevalence and where a gold standard classifier is not assumed to exist is presented in [12]. Considerable discussion concerning the case where the diagnostic test is binary, covering the cases where there is and is not a gold standard test, as well as the situation where the goal is to compare diagnostic tests and to make inference about the prevalence distribution can be found in [13] and also see [14]. Application of an ROC analysis to a comparison of linear and nonlinear approaches to a problem in medical physics is in [15]. Further discussion of nonlinear methodology can be found in [16,17].
The contributions of this paper, that have not been covered by previous published work in this area, are as follows: (i) The primary contribution is to base all the inferences associated with an ROC analysis on a clear and unambiguous characterization of statistical evidence via the principle of evidence and the relative belief ratio. While Bayes factors are also used to measure statistical evidence, there are serious limitations on their usage with continuous parameters as priors are restricted to be of a particular form. The approach via relative belief removes such restrictions on priors and provides a unified treatment of estimation and hypothesis assessment problems. In particular, this leads directly to estimates of all the quantities of interest, together with assessments of the accuracy of the estimates, and a characterization of the evidence, whether in favor of or against a hypothesis, together with a measure of the strength of the evidence. Moreover, no loss functions are required to develop these inferences. The merits of the relative belief approach over others are more fully discussed in Section 2.2. (ii) A prior on the relevant prevalence is always used to determine inferences even when the posterior distribution of this quantity is not available. As such the prevalence always plays a role in the inferences derived here. (iii) The error in the estimate of the cut-off is always quantified as well as the errors in the estimates of the characteristics evaluated at the chosen cut-off. It is these characteristics, such as the sensitivity and specificity, that ultimately determine the value of the diagnostic test. (iv) The hypothesis H 0 : AUC > 1/2 is first assessed and if evidence is found in favor of this, the prior is then conditioned on this event being true for inferences about the remaining quantities. 
Note that this is equivalent to conditioning the posterior on the event AUC > 1/2 when inferences are determined by the posterior but with relative belief inferences both the conditioned prior and conditioned posterior are needed to determine the inferences. (v) Precise conditions are developed for the existence of an optimal cutoff with the binormal model. (vi) In the discrete context (1), it is shown how to develop a prior and the analysis under the assumption that the probabilities describing the outcomes from the diagnostic variable X are monotone.
The relative belief ratio, as a measure of evidence, is seen to have a connection to relative entropy. For example, it is equivalent, in the sense that the inferences are the same, to use the logarithm of the relative belief ratio as the measure of evidence. The relative entropy is then the posterior expectation of this quantity and so can be considered as a measure of the overall evidence provided by the model, prior and data concerning a quantity of interest.
The methods used for all the computations in the paper are simulation based and represent fairly standard Bayesian computational methods. In each context considered, sufficient detail is provided so that these can be implemented by a user.

The Problem
Consider the formulation of the problem as presented in [18,19] but with somewhat different notation. There is a measurement X : Ω → R^1 defined on a population Ω, where Ω D comprises those with a particular disease and Ω ND those without it. So F ND (c) = #({ω ∈ Ω ND : X(ω) ≤ c})/#(Ω ND ) is the conditional cdf of X in the nondiseased population, and F D (c) = #({ω ∈ Ω D : X(ω) ≤ c})/#(Ω D ) is the conditional cdf of X in the diseased population. It is assumed that there is a gold standard classifier, typically much more difficult to use than X, such that for any ω ∈ Ω it can be determined definitively whether ω ∈ Ω D or ω ∈ Ω ND . There are two ways in which one can sample from Ω, namely, (i) take samples from each of Ω D and Ω ND separately or (ii) take a sample from Ω.
The sampling method used affects the inferences that can be drawn. For many studies (i) is the relevant sampling mode, as in case-control studies, while (ii) is relevant in cross-sectional studies.
It is supposed that the greater the value X(ω) is for individual ω, the more likely it is that ω ∈ Ω D . For the classification, a cutoff value c is required such that, if X(ω) > c, then ω is classified as being in Ω D and otherwise is classified as being in Ω ND . However, X is an imperfect classifier for any c and it is necessary to assess the performance of (X, c). It seems natural that a value of c be used that is optimal in some sense related to the error characteristics of this classification. Table 1 gives the relevant probabilities for classification into Ω D and Ω ND , together with some common terminology, in a confusion matrix.
Table 1. Error probabilities when X > c indicates a positive.
Another key ingredient is the prevalence w = #(Ω D )/#(Ω) of the disease in Ω. In practical situations, it is necessary to also take w into account in assessing the error in (X, c).
The following error characteristics depend on w:

Error(c) = w FNR(c) + (1 − w) FPR(c),
FDR(c) = (1 − w) FPR(c)/{w TPR(c) + (1 − w) FPR(c)},
FNDR(c) = w FNR(c)/{w FNR(c) + (1 − w) TNR(c)}.

Under sampling regime (ii) and cutoff c, Error(c) is the probability of making an error, FDR(c) is the conditional probability of a subject being misclassified as positive given that it has been classified as positive and FNDR(c) is the conditional probability of a subject being misclassified as negative given that it has been classified as negative. In other words, FDR(c) is the proportion of individuals, among those classified by the diagnostic test as having the disease, who in fact do not have it. It is often observed that, when w is very small and FNR(c) and FPR(c) are small, FDR(c) can still be large. This is sometimes referred to as the base rate fallacy as, even though the test appears to be a good one, there is a high probability that an individual classified as having the disease has been misclassified. For example, if w = FNR(c) = FPR(c) = 0.05, then Error(c) = 0.05, FDR(c) = 0.50, FNDR(c) = 2.76 × 10^−3 and, when w = 0.01, then Error(c) = 0.05, FDR(c) = 0.84, FNDR(c) = 5.31 × 10^−4 . In these cases the false nondiscovery rate is quite small while the false discovery rate is large. If the disease is highly contagious, then these probabilities may be considered acceptable, but they certainly need to be estimated. Similarly, FNDR(c) may be small even when FNR(c) is large, provided w is very small. It is naturally desirable to make inference about an optimal cutoff c opt and its associated error quantities. For a given value of w, the optimal cutoff will be defined here as c opt = arg inf c Error(c), the value which minimizes the probability of making an error. Other choices for determining a c opt can be made, and the analysis and computations will be similar, but our thesis is that, when possible, any such criterion should involve the prior distribution of the relevant prevalence w.
As demonstrated in Example 6, this can sometimes lead to useless values of c opt even when the AUC is large. While this situation calls into question the value of the diagnostic, a suboptimal choice of c can still be made according to some alternative methodology. For example, sometimes Youden's index, which maximizes 1 − 2Error(c) over c with w = 1/2, is recommended, or the closest-to-(0,1) criterion, which minimizes FPR(c)^2 + (1 − TPR(c))^2 , see [2] for discussion. Youden's index and the closest-to-(0,1) criterion do not depend on the prevalence and have geometrical interpretations in terms of the ROC curve but, as will be seen, the ROC curve does not exist in full generality and this is particularly relevant in the discrete case. The methodology developed here provides an estimate of the c to be used, together with an exact assessment of the error in this estimate, as well as providing estimates of the associated error characteristics of the classification.
Letting ĉ opt denote the estimate of c opt , the values of Error(ĉ opt ), TPR(ĉ opt ), FPR(ĉ opt ), FNR(ĉ opt ) and TNR(ĉ opt ) are also estimated and the recorded values used to assess the value of the diagnostic test. There are also other characteristics that may prove useful in this regard, such as the positive predictive value (PPV), namely, the conditional probability that a subject is positive given that they have tested positive, which plays a role similar to FDR(c). See [14] for discussion of the PPV and the similarly defined negative predictive value (NPV). The value of PPV(ĉ opt ) can be estimated in the same way as the other quantities, as is subsequently discussed.
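These error characteristics are simple functions of (w, FNR(c), FPR(c)); the following sketch (the function name is ours, not the paper's) reproduces the base rate fallacy numbers from above:

```python
def error_characteristics(w, fnr, fpr):
    """Population error rates for a classifier with false negative rate fnr,
    false positive rate fpr and prevalence w, under sampling regime (ii)."""
    tpr, tnr = 1.0 - fnr, 1.0 - fpr
    error = w * fnr + (1.0 - w) * fpr                    # P(misclassification)
    fdr = (1.0 - w) * fpr / (w * tpr + (1.0 - w) * fpr)  # P(not diseased | classified positive)
    fndr = w * fnr / (w * fnr + (1.0 - w) * tnr)         # P(diseased | classified negative)
    return error, fdr, fndr

# Base rate fallacy: small w with small FNR, FPR still gives a large FDR.
print(error_characteristics(0.05, 0.05, 0.05))  # Error 0.05, FDR 0.50, FNDR ~2.76e-3
print(error_characteristics(0.01, 0.05, 0.05))  # Error 0.05, FDR ~0.84, FNDR ~5.31e-4
```

The PPV is simply 1 − FDR(c), so it needs no separate function.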

The AUC and ROC
Consider two situations where F ND , F D are either both absolutely continuous or both discrete. In the discrete case, suppose that these distributions are concentrated on a set of points c 1 < c 2 < · · · < c m . When ω D , ω ND are selected using sampling scheme (i), the probability that a higher score is received on diagnostic X by a diseased individual than by a nondiseased individual is

AUC = P(X(ω D ) > X(ω ND )). (1)

Under the assumption that F D (c) is constant on {c : F ND (c) = p} for every p ∈ [0, 1], there is a function ROC (receiver operating characteristic) such that 1 − F D (c) = ROC(1 − F ND (c)) for every c, namely, ROC gives the true positive rate as a function of the false positive rate. In the absolutely continuous case, AUC = ∫_0^1 ROC(p) dp, which is the area under the curve given by the ROC function. The area under the curve interpretation is geometrically evocative but is not necessary for (1) to be meaningful.
It is commonly suggested that a good diagnostic variable X will have an AUC close to 1, while a value close to 1/2 suggests a poor diagnostic test. It is surely the case, however, that the utility of X in practice will depend on the cutoff c chosen and the various error characteristics associated with this choice. So, while the AUC can be used to screen diagnostics, it is only part of the analysis and inferences about the error characteristics are required to truly assess the performance of a diagnostic. Consider an example.
Example 1. Suppose that F D = F ND ^q for some q > 1, where F ND is continuous and strictly increasing with associated density f ND . Then, using (1), AUC = 1 − 1/(q + 1), which is approximately 1 when q is large. The optimal c minimizes Error(c) = wF ND ^q (c) + (1 − w)(1 − F ND (c)), which implies that c satisfies F ND (c) = {(1 − w)/(qw)}^{1/(q−1)} when q > (1 − w)/w and otherwise the optimal c is c = ∞. If q = 99, then AUC = 0.99 and, with w = 0.025, (1 − w)/w = 39 < q so FNR(c opt ) = 0.390, FPR(c opt ) = 0.009, Error(c opt ) = 0.019, FDR(c opt ) = 0.009 and FNDR(c opt ) = 0.010. So X seems like a good diagnostic via the AUC and the error characteristics that depend on the prevalence, although within the diseased population the probability is 0.39 of not detecting the disease. If instead w = 0.01, then the AUC is the same but q = 99 = (1 − w)/w and the optimal classification always classifies an individual as nondiseased, which is useless. So the AUC does not indicate enough about the characteristics of the diagnostic to determine if it is useful or not. It is necessary to look at the error characteristics of the classification at the cutoff value that will actually be used to determine if a diagnostic is suitable, and this implies that information about w is necessary in an application.
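The calculations in Example 1 are easily verified numerically; the sketch below (names are ours) works on the scale u = F ND (c), which suffices because the error rates depend on c only through F ND (c), and assumes q > (1 − w)/w so that the interior optimum exists:

```python
def example1_optimum(q, w):
    """Optimal cutoff for F_D = F_ND^q, on the scale u = F_ND(c)."""
    assert q > (1.0 - w) / w, "otherwise everyone is classified as nondiseased"
    u = ((1.0 - w) / (q * w)) ** (1.0 / (q - 1.0))  # F_ND(c_opt), from the first-order condition
    fnr = u ** q                                    # F_D(c_opt)
    fpr = 1.0 - u
    error = w * fnr + (1.0 - w) * fpr
    return u, fnr, fpr, error

u, fnr, fpr, error = example1_optimum(99, 0.025)
auc = 1.0 - 1.0 / (99 + 1.0)
print(auc, fnr, fpr, error)  # 0.99, ~0.390, ~0.009, ~0.019
```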

Relative Belief Inferences
Suppose there is a model { f θ : θ ∈ Θ} for data x together with a prior probability measure Π, with density π, on Θ. These ingredients lead, via the principle of conditional probability, to beliefs about the true value of θ, as initially expressed by Π, being replaced by the posterior probability measure Π(· | x) with density π(· | x). Note that, if interest is instead in a quantity ψ = Ψ(θ), where Ψ : Θ → Ψ and we use the same notation for the function and its range, then the model is replaced by the family of densities m(x | ψ) = ∫ Ψ −1 {ψ} f θ (x) π(θ | ψ) dθ, obtained by integrating out the nuisance parameters, and the prior is replaced by the marginal prior with density π Ψ (ψ) = ∫ Ψ −1 {ψ} π(θ) dθ. This leads to the marginal posterior Π Ψ (· | x) with density π Ψ (· | x).
For the moment, suppose that all the distributions are discrete. The principle of evidence then says that there is evidence in favor of the value ψ when π Ψ (ψ | x) > π Ψ (ψ), evidence against ψ when π Ψ (ψ | x) < π Ψ (ψ) and no evidence either way when π Ψ (ψ | x) = π Ψ (ψ). So, for example, there is evidence in favor of ψ if the probability of ψ increases after seeing the data. To order the possible values with respect to the evidence, we use the relative belief ratio

RB Ψ (ψ | x) = π Ψ (ψ | x)/π Ψ (ψ).

Note that RB Ψ (ψ | x) > (<)1 indicates whether there is evidence in favor of (against) the value ψ. If there is evidence in favor of both ψ 1 and ψ 2 , then there is more evidence in favor of ψ 1 than ψ 2 whenever RB Ψ (ψ 1 | x) > RB Ψ (ψ 2 | x) and, if there is evidence against both ψ 1 and ψ 2 , then there is more evidence against ψ 1 than ψ 2 whenever RB Ψ (ψ 1 | x) < RB Ψ (ψ 2 | x). For the continuous case, consider a sequence of neighborhoods N ε (ψ) ↓ {ψ} as ε → 0 and then

RB Ψ (ψ | x) = lim ε→0 Π Ψ (N ε (ψ) | x)/Π Ψ (N ε (ψ)) = π Ψ (ψ | x)/π Ψ (ψ) (2)

under very weak conditions, such as π Ψ (ψ) > 0 and π Ψ being continuous at ψ.
All the inferences about quantities considered in the paper are derived based upon the principle of evidence as expressed via the relative belief ratio. For example, it is immediate that the value RB Ψ (ψ 0 | x) indicates whether or not there is evidence in favor of or against the hypothesis H 0 : Ψ(θ) = ψ 0 . The strength of this evidence can be measured by the posterior probability Π Ψ ({ψ : RB Ψ (ψ | x) ≤ RB Ψ (ψ 0 | x)} | x). If RB Ψ (ψ 0 | x) > 1 and this probability is large, then there is strong evidence in favor of H 0 , as there is a small belief that the true value has a larger relative belief ratio and, if RB Ψ (ψ 0 | x) < 1 and this probability is small, then there is strong evidence against H 0 , as there is high belief that the true value has a larger relative belief ratio. For estimation, it is natural to estimate ψ by the relative belief estimate ψ(x) = arg sup ψ∈Ψ RB Ψ (ψ | x), as this value has the maximum evidence in its favor. Furthermore, the accuracy of this estimate can be assessed by looking at the plausible region Pl Ψ (x) = {ψ : RB Ψ (ψ | x) > 1}, consisting of all those values for which there is evidence in favor, together with its size and posterior content, which measures how strongly it is believed that the true value lies in this set. Rather than using the plausible region to assess the accuracy of ψ(x), one could quote a γ-relative belief credible region C Ψ,γ (x) = {ψ : RB Ψ (ψ | x) ≥ c γ (x)}, where c γ (x) = sup{k : Π Ψ ({ψ : RB Ψ (ψ | x) ≥ k} | x) ≥ γ}, but for γ greater than the posterior content of Pl Ψ (x) such a region will contain values for which there is evidence against, and this is only known after the data have been seen.
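In the discrete case these definitions are a few lines of code; the following sketch (the helper name and toy numbers are ours) computes the relative belief ratios, the relative belief estimate, the plausible region with its posterior content and the strength of the evidence for each value:

```python
def relative_belief_summary(values, prior, posterior):
    """Relative belief inferences for a discrete quantity psi."""
    rb = [post / pri for pri, post in zip(prior, posterior)]
    # RB estimate: the value with maximum evidence in its favor
    estimate = values[max(range(len(values)), key=lambda i: rb[i])]
    # plausible region: values with evidence in favor, plus its posterior content
    plausible = [v for v, r in zip(values, rb) if r > 1.0]
    content = sum(p for p, r in zip(posterior, rb) if r > 1.0)
    # strength for value i: posterior probability of {RB <= RB(value i)}
    strength = [sum(p for p, r in zip(posterior, rb) if r <= rb[i])
                for i in range(len(values))]
    return rb, estimate, plausible, content, strength

vals = [0, 1, 2, 3]
prior = [0.25, 0.25, 0.25, 0.25]
post = [0.05, 0.10, 0.60, 0.25]
rb, est, pl, cont, st = relative_belief_summary(vals, prior, post)
print(est, pl, cont)  # RB estimate 2, plausible region [2]
```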
It is established in [7], and in papers referenced there, that these inferences possess a number of good properties: they are consistent, they satisfy various optimality criteria and they are clearly based on a direct measure of the evidence. Perhaps most significant is the fact that all the inferences are invariant under reparameterizations. For if λ = Λ(ψ), where Λ is a smooth bijection, then RB Λ (Λ(ψ) | x) = RB Ψ (ψ | x) and so, for example, λ(x) = Λ(ψ(x)). This invariance property is not possessed by the most common inference methods employed, such as MAP estimation or using posterior means, and this invariance holds no matter what the dimension of ψ is. Moreover, it is proved in [20] that relative belief inferences are optimally robust, among all Bayesian inferences for ψ, to linear contaminations of the prior on ψ. An analysis, using relative belief, of the data obtained in several physics experiments that were all concerned with examining whether there was evidence in favor of or against the quantum model versus hidden variables is available in [21]. Furthermore, an approach to checking models used for quantum mechanics via relative belief is discussed in [22]. Other applications of relative belief inferences to common problems of statistical practice can be found in [7].
The Bayes factor is an alternative measure of evidence and is commonly used for hypothesis assessment in Bayesian inference. To see why the relative belief ratio has advantages over the Bayes factor for evidence-based inferences, consider first assessing the hypothesis H 0 : Ψ(θ) = ψ 0 . When the prior probability of ψ 0 satisfies 0 < Π Ψ ({ψ 0 }) < 1, then the Bayes factor is defined as the ratio of the posterior odds in favor of H 0 to the prior odds in favor of H 0 , namely,

BF Ψ (ψ 0 | x) = [Π Ψ ({ψ 0 } | x)/(1 − Π Ψ ({ψ 0 } | x))] / [Π Ψ ({ψ 0 })/(1 − Π Ψ ({ψ 0 }))].

It is easily shown that the Bayes factor satisfies the principle of evidence and BF Ψ (ψ 0 | x) > (<)1 is evidence in favor of (against) H 0 , so in this context it is a valid measure of evidence. One might wonder why it is necessary to consider a ratio of odds, as opposed to the simpler ratio of probabilities specified by the relative belief ratio, for the purpose of measuring evidence, but in fact there is a more serious issue with the Bayes factor. For suppose, as commonly arises in applications, that Π Ψ is a continuous probability measure so that Π Ψ ({ψ 0 }) = 0, as then the Bayes factor for H 0 is not defined. The common recommendation in this context is to require the specification of the following ingredients: a prior probability p > 0, a prior distribution Π H 0 concentrated on Ψ −1 {ψ 0 } which provides the prior predictive density m H 0 (x), a prior distribution Π H c 0 concentrated on Ψ −1 {ψ 0 } c which provides the prior predictive density m H c 0 (x), and then the full prior is taken to be the mixture pΠ H 0 + (1 − p)Π H c 0 . With this prior the Bayes factor for H 0 is defined, as now the prior probability of ψ 0 equals p, and an easy calculation shows that BF Ψ (ψ 0 | x) = m H 0 (x)/m H c 0 (x). Typically the prior Π H c 0 is taken to be the prior that would be placed on θ when interest is in estimating ψ.
Now consider the problem of estimating ψ when the prior is such that Π Ψ ({ψ}) = 0 for every value of ψ, as with a continuous prior. The Bayes factor is then not defined for any value of ψ and, if one wished to use the Bayes factor for estimation purposes, it would be necessary to modify the prior to be a different mixture for each value of ψ, so that there would in effect be multiple different priors. This does not correspond to the logic underlying Bayesian inference. When using the relative belief ratio for inference, only one prior is required and the same measure of evidence is used for both hypothesis assessment and estimation purposes.
Another approach to dealing with the problem that arises with the Bayes factor and continuous priors is to take a limit as in (2) and, when this is done, we obtain BF Ψ (N ε (ψ) | x) → RB Ψ (ψ | x) as ε → 0 whenever the prior density π Ψ is continuous and positive at ψ. In other words, the relative belief ratio can also be considered as a natural definition of the Bayes factor in continuous contexts.
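The relationship can be checked numerically in a simple case. For x successes in n binomial trials, a uniform prior on θ and H 0 : θ = 1/2, taking Π H c 0 uniform gives m H c 0 (x) = 1/(n + 1), and the mixture Bayes factor then agrees exactly with the relative belief ratio; this exact agreement is special to this choice of Π H c 0 , matching the continuous prior used for the relative belief ratio. A sketch (the particular numbers are ours):

```python
import math

def bayes_factor_mixture(n, x, theta0):
    """Mixture-prior BF for H0: theta = theta0, uniform prior on the alternative."""
    m_h0 = math.comb(n, x) * theta0**x * (1 - theta0)**(n - x)  # prior predictive under H0
    m_hc = 1.0 / (n + 1)  # prior predictive under a uniform prior on theta
    return m_h0 / m_hc

def relative_belief_ratio(n, x, theta0):
    """Posterior (Beta(x+1, n-x+1)) density over prior (uniform) density at theta0."""
    log_post = (x * math.log(theta0) + (n - x) * math.log(1 - theta0)
                - (math.lgamma(x + 1) + math.lgamma(n - x + 1) - math.lgamma(n + 2)))
    return math.exp(log_post)  # uniform prior density equals 1

n, x, theta0 = 50, 32, 0.5
print(bayes_factor_mixture(n, x, theta0), relative_belief_ratio(n, x, theta0))
```

Both calls print the same number, illustrating why the relative belief ratio is a natural continuous-prior analogue of the Bayes factor.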

Inferences for an ROC Analysis
Suppose there is a sample of size n D from Ω D , namely, x D = (x D1 , . . . , x Dn D ), and a sample of size n ND from Ω ND , namely, x ND = (x ND1 , . . . , x NDn ND ), and the goal is to make inference about the AUC, the cutoff c and the error characteristics FNR(c), FPR(c), Error(c), FDR(c) and FNDR(c). For the AUC, it makes sense to first assess the hypothesis H 0 : AUC > 1/2 by stating whether there is evidence for or against H 0 , together with an assessment of the strength of this evidence. Estimates are required for all of these quantities, together with an assessment of the accuracy of each estimate.

The Prevalence
Consider first inferences for the relevant prevalence w. If w is known, or at least assumed known, then nothing further needs to be done; otherwise, the uncertainty about w needs to be taken into account when assessing the value of the diagnostic.
If the full data set is based on sampling scheme (ii), then n D ∼ binomial(n, w). A natural prior π W to place on w is a beta(α 1w , α 2w ) distribution. The hyperparameters are chosen based on the elicitation algorithm discussed in [23], where an interval [l, u] is specified such that it is believed that w ∈ [l, u] with prior probability γ. Here [l, u] is chosen so that we are virtually certain that w ∈ [l, u], and γ = 0.99 then seems like a reasonable choice. Note that choosing l = u corresponds to w being known, and so γ = 1 in that case. Next, pick a point ξ w ∈ [l, u] for the mode of the prior; a reasonable choice might be ξ w = (l + u)/2. Then, putting τ w = α 1w + α 2w − 2 leads to the parameterization beta(α 1w , α 2w ) = beta(1 + τ w ξ w , 1 + τ w (1 − ξ w )), where ξ w locates the mode and τ w controls the spread of the distribution about ξ w . Here τ w = 0 gives the uniform distribution and τ w = ∞ gives the distribution degenerate at ξ w . With ξ w specified, τ w is then taken to be the smallest value such that the prior probability content of [l, u] is at least γ, and this is found iteratively. For example, if [l, u] = [0.60, 0.70] and γ = 0.99, so w is known reasonably well, then ξ w = (l + u)/2 = 0.65 and τ w = 601.1, so the prior is beta(391.72, 211.39) and the posterior is beta(391.72 + n D , 211.39 + n ND ).
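The iterative search for τ w can be sketched as follows, using only the standard library (the regularized incomplete beta is coded directly via the usual continued fraction so nothing beyond math is required; all function names are ours):

```python
import math

def betacf(a, b, x, max_iter=300, eps=3e-12):
    """Continued fraction for the incomplete beta function (modified Lentz)."""
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    d = 1e-30 if abs(d) < 1e-30 else d
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        d = 1e-30 if abs(d) < 1e-30 else d
        c = 1.0 + aa / c
        c = 1e-30 if abs(c) < 1e-30 else c
        d = 1.0 / d
        h *= d * c
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        d = 1e-30 if abs(d) < 1e-30 else d
        c = 1.0 + aa / c
        c = 1e-30 if abs(c) < 1e-30 else c
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def beta_cdf(x, a, b):
    """Regularized incomplete beta function I_x(a, b), i.e. the beta(a, b) cdf."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    ln_front = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                + a * math.log(x) + b * math.log(1.0 - x))
    front = math.exp(ln_front)
    if x < (a + 1.0) / (a + b + 2.0):
        return front * betacf(a, b, x) / a
    return 1.0 - front * betacf(b, a, 1.0 - x) / b

def elicit_tau(l, u, xi, gamma, tol=1e-2):
    """Smallest tau with beta(1 + tau*xi, 1 + tau*(1 - xi)) content of [l, u] >= gamma."""
    def content(tau):
        a, b = 1.0 + tau * xi, 1.0 + tau * (1.0 - xi)
        return beta_cdf(u, a, b) - beta_cdf(l, a, b)
    lo, hi = 0.0, 1.0
    while content(hi) < gamma:   # bracket the solution
        hi *= 2.0
    while hi - lo > tol:         # bisect; content increases in tau for xi in [l, u]
        mid = 0.5 * (lo + hi)
        if content(mid) >= gamma:
            hi = mid
        else:
            lo = mid
    return hi

tau_w = elicit_tau(0.60, 0.70, 0.65, 0.99)
print(tau_w)  # close to the value 601.1 reported in the text
```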
The estimate of w is then the relative belief estimate w(n D , n ND ) = arg sup w RB(w | n D , n ND ). Since RB(w | n D , n ND ) is the ratio of the posterior to the prior density of w, it is proportional to the likelihood w^{n D} (1 − w)^{n ND} , so in this case the estimate is the MLE, namely, w(n D , n ND ) = n D /(n D + n ND ). The accuracy of this estimate is measured by the size of the plausible region Pl(n D , n ND ) = {w : RB(w | n D , n ND ) > 1}, together with its posterior content. The prior and posterior distributions of w play a role in inferences about all the quantities that depend on the prevalence. In the case where the cutoff is determined by minimizing the probability of a misclassification, then c opt , FNR(c opt ), FPR(c opt ), Error(c opt ), FDR(c opt ) and FNDR(c opt ) all depend on the prevalence. Under sampling scheme (i), however, only the prior on w has any influence when considering the effectiveness of X. Inference for these quantities is now discussed in both cases.
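A sketch of these prevalence inferences on a grid (the counts n D = 65, n ND = 35 are hypothetical and the function names are ours); the relative belief ratio is the posterior over the prior beta density, evaluated in logs:

```python
import math

def beta_logpdf(w, a, b):
    return ((a - 1.0) * math.log(w) + (b - 1.0) * math.log(1.0 - w)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def rb_prevalence(n_d, n_nd, a, b, grid=20000):
    """RB estimate, plausible region and its posterior content for w with a
    beta(a, b) prior and n_d ~ binomial(n_d + n_nd, w), sampling regime (ii)."""
    ws = [(i + 0.5) / grid for i in range(grid)]
    # log RB = log posterior density - log prior density
    log_rb = [beta_logpdf(w, a + n_d, b + n_nd) - beta_logpdf(w, a, b) for w in ws]
    i_max = max(range(grid), key=lambda i: log_rb[i])
    in_pl = [lr > 0.0 for lr in log_rb]  # RB > 1
    # posterior content of the plausible region, by Riemann sum
    post = [math.exp(beta_logpdf(w, a + n_d, b + n_nd)) / grid for w in ws]
    content = sum(p for p, keep in zip(post, in_pl) if keep)
    pl = [w for w, keep in zip(ws, in_pl) if keep]
    return ws[i_max], (min(pl), max(pl)), content

est, (lo, hi), content = rb_prevalence(65, 35, 1.0, 1.0)
print(est, lo, hi, content)  # estimate ~0.65, i.e. the MLE, as in the text
```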

Ordered Discrete Diagnostic
Suppose X takes values on the finite ordered scale c 1 < c 2 < · · · < c m and let p NDi denote the probability that X = c i for an individual from the nondiseased population, with p ND = (p ND1 , . . . , p NDm ) and the remaining quantities defined similarly. Ref. [23] can be used to obtain independent elicited Dirichlet priors on these probabilities by placing either upper or lower bounds on each cell probability that hold with virtual certainty γ, as discussed for the beta prior on the prevalence. If little information is available, it is reasonable to use uniform (Dirichlet(1, . . . , 1)) priors on p ND and p D . This, together with the independent prior on w, leads to prior distributions for the AUC, c opt and all the quantities associated with error assessment, such as FNR(c opt ), etc.
The data give rise to the observed counts f ND = ( f ND1 , . . . , f NDm ) and f D = ( f D1 , . . . , f Dm ), which in turn lead to the independent posteriors p ND | f ND ∼ Dirichlet(1 + f ND1 , . . . , 1 + f NDm ) and p D | f D ∼ Dirichlet(1 + f D1 , . . . , 1 + f Dm ) when the priors are uniform. Under sampling regime (ii) this, together with the independent posterior on w, leads to posterior distributions for all the quantities of interest. Under sampling regime (i), however, the logical thing to do, so that the inferences reflect the uncertainty about w, is to use only the prior on w when deriving inferences about any quantities that depend on it, such as c opt and the various error assessments. Consider inferences for the AUC. The first inference should be to assess the hypothesis H 0 : AUC > 1/2 for, if H 0 is false, then X would seem to have no value as a diagnostic (the possibility that the directionality is wrong is ignored here). The relative belief ratio of H 0 is computed and compared to 1. If it is concluded that H 0 is true, then perhaps the next inference of interest is to estimate the AUC via the relative belief estimate. The prior and posterior densities of the AUC are not available in closed form, so estimates are required and density histograms are employed here for this. The set (0, 1] is discretized into L subintervals via (0, 1] = ∪ L i=1 ((i − 1)/L, i/L] and, putting a i = (i − 1/2)/L, the value of the prior density p AUC (a i ) is estimated by L × (proportion of prior simulated values of the AUC in ((i − 1)/L, i/L]), and similarly for the posterior density p AUC (a i | f ND , f D ). Then RB AUC (a | f ND , f D ) is maximized over the a i to obtain the relative belief estimate AUC( f ND , f D ), together with the plausible region and its posterior content.
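The simulation just described can be sketched as follows for the assessment of H 0 : AUC > 1/2 (the counts are hypothetical, uniform Dirichlet priors are assumed and the discrete AUC is computed here as P(X D > X ND ) with no weight given to ties, one common convention); the same simulated AUC values would also feed the density histograms used for estimation:

```python
import random

random.seed(1)

def rdirichlet(alpha):
    """Sample a Dirichlet(alpha) vector via normalized gamma draws."""
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def discrete_auc(p_nd, p_d):
    """P(X_D > X_ND) for probability vectors on c_1 < ... < c_m; ties get no weight."""
    total, below = 0.0, 0.0
    for i in range(len(p_d)):
        total += p_d[i] * below   # below = P(X_ND < c_i)
        below += p_nd[i]
    return total

def rb_auc_h0(f_nd, f_d, n_sim=10000):
    """Monte Carlo relative belief ratio for H0: AUC > 1/2 with uniform
    Dirichlet priors: posterior probability of H0 over prior probability."""
    m = len(f_nd)
    prior_p = sum(discrete_auc(rdirichlet([1.0] * m),
                               rdirichlet([1.0] * m)) > 0.5
                  for _ in range(n_sim)) / n_sim
    post_p = sum(discrete_auc(rdirichlet([1.0 + c for c in f_nd]),
                              rdirichlet([1.0 + c for c in f_d])) > 0.5
                 for _ in range(n_sim)) / n_sim
    return post_p / max(prior_p, 1.0 / n_sim)

print(rb_auc_h0([30, 10, 5], [5, 10, 30]))  # > 1, evidence in favor of H0
```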
These quantities are also obtained for c opt in a similar fashion, although c opt has prior and posterior distributions concentrated on {c 1 , c 2 , . . . , c m }, so there is no need to discretize. Estimates of the quantities FNR(c opt ( f ND , f D )), FPR(c opt ( f ND , f D )), Error(c opt ( f ND , f D )), FDR(c opt ( f ND , f D )) and FNDR(c opt ( f ND , f D )) are also obtained, as these indicate the performance of the diagnostic in practice. The relative belief estimates of these quantities are easily obtained in a second simulation where c opt ( f ND , f D ) is held fixed.
Consider now an example. Supposing that the relevant prevalence is known to be w = 0.65, Figure 2 contains plots of the prior and posterior densities and the relative belief ratio of c opt . The relative belief estimate is c opt ( f ND , f D ) = 2 and Pl c opt ( f ND , f D ) = {2} has posterior probability content 0.53, so the correct optimal cut-off has been identified but there is a degree of uncertainty concerning this. The error characteristics that tell us about the utility of X as a diagnostic are given by the relative belief estimates (column (a)) in Table 2. It is interesting to note that the estimate of Error(c opt ) is determined by the prior and posterior distributions of a convex combination of FPR(c opt ) and FNR(c opt ), and the estimate is not the same convex combination of the estimates of FPR(c opt ) and FNR(c opt ). So, in this case, Error(c opt ) seems like a much better assessment of the performance of the diagnostic. Suppose now that the prevalence is not known but there is a beta(1 + τ w ξ w , 1 + τ w (1 − ξ w )) prior specified for w, and consider the choice discussed in Section 3.1 where ξ w = 0.65 and τ w = 601.1. When the data are produced according to sampling regime (i), there is no posterior for w but this prior can still be used in determining the prior and posterior distributions of c opt and the associated error characteristics. When this simulation was carried out, c opt ( f ND , f D ) = 2 and Pl c opt ( f ND , f D ) = {2} has posterior probability content 0.53, and column (b) of Table 2 gives the estimates of the error characteristics. So, other than the estimate of the FPR, the results are similar. Finally, assuming that the data arose under sampling scheme (ii), then w has a posterior distribution and using this gives c opt ( f ND , f D ) = 2 and Pl c opt ( f ND , f D ) = {2} with posterior probability content 0.52, with error characteristics as in column (c) of Table 2.
These results are the same as when the prevalence is known, which is sensible as the posterior concentrates about the true value more than the prior.

Table 2. The estimates of the error characteristics of X at c_opt = 2 in Example 2, where (a) w is assumed known, (b) only the prior for w is available, (c) the posterior for w is also available.

Another somewhat anomalous feature of this example is that uniform priors on p_D and p_ND do not lead to a prior on the AUC that is even close to uniform. In fact, this prior has a built-in bias against a diagnostic with AUC > 1/2, and indeed most choices of p_D and p_ND will not satisfy this. Another possibility is to require p_ND1 ≥ ... ≥ p_NDm and p_D1 ≤ ... ≤ p_Dm, namely, to require monotonicity of the probabilities. A result in [22] implies that p_ND satisfies this iff p_ND = A_k ω_ND, where ω_ND ∈ S_k, the standard (k − 1)-dimensional simplex, and A_k ∈ R^{k×k} has i-th row equal to (0, ..., 0, 1/i, 1/(i + 1), ..., 1/k), and that p_D satisfies this iff p_D = B_k ω_D, where ω_D ∈ S_k and B_k = I*_k A_k with I*_k ∈ R^{k×k} containing all 0's except for 1's on the cross-diagonal. If ω_ND and ω_D are independent and uniform on S_k, then p_D and p_ND are independent and uniform on the sets of probabilities satisfying the corresponding monotonicities, and Figure 3 plots the prior of the AUC in this case; it is seen that this prior is biased in favor of AUC > 1/2. Figure 3 also plots the prior of the AUC when p_D is uniform on the set of all nondecreasing probabilities and p_ND is uniform on S_k. This reflects a much more modest belief that X will satisfy AUC > 1/2, and indeed may be a more appropriate prior than using uniform distributions on S_k. Ref. [22] also provides elicitation algorithms for choosing alternative Dirichlet distributions for ω_ND and ω_D. When H_0 : AUC > 0.5 is accepted, it makes sense to use the conditional prior, given that this event is true, in the inferences. As such, it is necessary to condition the prior on the event ∑_{i=1}^m ∑_{j=1}^i p_Dj p_NDi ≤ 1/2.
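The map just described is easy to check numerically: the sketch below sends a point ω of the simplex to the nonincreasing probability vector with components p_i = ∑_{j≥i} ω_j/j (the i-th row of A_k applied to ω), and reversing the output gives the nondecreasing case (helper names here are illustrative, not from the paper):

```python
import random

def rdirichlet_uniform(k):
    # uniform draw from the standard (k-1)-simplex via the gamma trick
    g = [random.gammavariate(1.0, 1.0) for _ in range(k)]
    s = sum(g)
    return [x / s for x in g]

def monotone_probs(omega):
    """p_i = sum_{j >= i} omega_j / j, i.e. the i-th row of A_k applied to
    omega; the output is a nonincreasing probability vector, and reversing
    it gives the nondecreasing case."""
    k = len(omega)
    return [sum(omega[j - 1] / j for j in range(i, k + 1))
            for i in range(1, k + 1)]

random.seed(2)
p_ND = monotone_probs(rdirichlet_uniform(5))                 # nonincreasing
p_D = list(reversed(monotone_probs(rdirichlet_uniform(5))))  # nondecreasing
print([round(v, 3) for v in p_ND])
```

Note that ∑_i p_i = ∑_j (ω_j/j)·j = 1 and p_i − p_{i+1} = ω_i/i ≥ 0, so the output is always a valid monotone probability vector.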
In general, it is not clear how to generate from this conditional prior but, depending on the size of m and the prior, a brute-force approach is to simply generate from the unconditional prior and retain those samples for which the condition is satisfied; the same approach works with the posterior.
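The brute-force scheme just described can be sketched for the discrete model with m cells and independent uniform Dirichlet priors, as in the example (helper names are illustrative, not from the paper):

```python
import random

def rdirichlet(alpha):
    # standard gamma trick for a Dirichlet draw
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def le_stat(p_D, p_ND):
    # sum_i sum_{j<=i} p_Dj * p_NDi = P(X_D <= X_ND); the conditioning
    # event from the text is that this quantity is at most 1/2
    return sum(p_ND[i] * sum(p_D[: i + 1]) for i in range(len(p_ND)))

def sample_conditional_prior(m, n_keep):
    """Brute force: draw (p_ND, p_D) from independent uniform Dirichlet
    priors on the simplex and keep the draws satisfying the condition."""
    kept, tries = [], 0
    while len(kept) < n_keep:
        tries += 1
        p_ND, p_D = rdirichlet([1.0] * m), rdirichlet([1.0] * m)
        if le_stat(p_D, p_ND) <= 0.5:
            kept.append((p_ND, p_D))
    return kept, len(kept) / tries

random.seed(7)
kept, rate = sample_conditional_prior(5, 2000)
print(round(rate, 2))
```

The acceptance rate estimates the prior probability of the conditioning event, which is why sampling the conditioned posterior (where this probability is near 1) is far more efficient than sampling the conditioned prior.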
Here m = 5, and using uniform priors for p ND and p D , the prior probability of AUC > 0.5 is 0.281 while the posterior probability is 0.998 so the posterior sampling is much more efficient. Choosing priors that are more favorable to AUC > 0.5 will improve the efficiency of the prior sampling. Using the conditional priors led to AUC( f ND , f D ) = 0.66 with Pl AUC ( f ND , f D ) = [0.60, 0.76] with posterior content 0.85. This is similar to the results obtained using the unconditional prior but the conditional prior puts more mass on larger values of the AUC hence the wider plausible region with lower posterior content. Moreover, c opt ( f ND , f D ) = 2 with Pl c opt ( f ND , f D ) = {1, 2} with posterior probability content approximately 1.00 (actually 0.99999) which reflects virtual certainty that the true optimal value is in {1, 2}.

Binormal Diagnostic
Suppose now that X is a continuous diagnostic variable and it is assumed that the distributions F D and F ND are normal distributions. The assumption of normality should be checked by an appropriate test and it will be assumed here that this has been carried out and normality was not rejected. While the normality assumption may seem somewhat unrealistic, many aspects of the analysis can be expressed in closed form and this allows for a deeper understanding of ROC analyses more generally.
For given (µ_D, σ_D, µ_ND, σ_ND) and c, all of these quantities can be computed in closed form using Φ, except the AUC, for which quadrature or simulation via generating z ∼ N(0, 1) is required.
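A sketch of these computations, using the usual definitions of the error characteristics (an assumption here, since the paper's displayed formulas are not reproduced above) and a Monte Carlo evaluation of the AUC:

```python
import math, random

def Phi(x):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def error_chars(c, mu_D, s_D, mu_ND, s_ND, w):
    """Error characteristics of the classifier 1{X > c} under the binormal
    model, with the standard definitions (assumed, not quoted from the text)."""
    fnr = Phi((c - mu_D) / s_D)              # P(X <= c | D)
    fpr = 1.0 - Phi((c - mu_ND) / s_ND)      # P(X > c | ND)
    err = w * fnr + (1.0 - w) * fpr
    pos = w * (1.0 - fnr) + (1.0 - w) * fpr  # P(X > c), prob of a positive call
    fdr = (1.0 - w) * fpr / pos if pos > 0.0 else 0.0
    fndr = w * fnr / (1.0 - pos) if pos < 1.0 else 0.0
    return fnr, fpr, err, fdr, fndr

def auc_mc(mu_D, s_D, mu_ND, s_ND, n=100000):
    # Monte Carlo AUC: proportion of independent pairs with X_D > X_ND
    return sum(random.gauss(mu_D, s_D) > random.gauss(mu_ND, s_ND)
               for _ in range(n)) / n

random.seed(3)
chars = error_chars(1.5, 2.0, 1.0, 1.0, 1.0, 0.5)
auc = auc_mc(2.0, 1.0, 1.0, 1.0)
print([round(v, 3) for v in chars], round(auc, 3))
```

With µ_D − µ_ND = 1 and unit variances, the Monte Carlo AUC should be near Φ(1/√2) ≈ 0.76.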
The following results hold for the AUC with the proofs in the Appendix A.
From Lemma 1 it is clear that it makes sense to restrict the parameterization so that µ_D > µ_ND, but the hypothesis H_0 : µ_D > µ_ND needs to be assessed first. Clearly Error(c) = wFNR(c) + (1 − w)FPR(c) → 1 − w as c → −∞ and Error(c) → w as c → ∞, so, if Error(c) does not achieve a minimum at a finite value of c, then the optimal cutoff is infinite and the optimal error is min{w, 1 − w}. It is possible to give conditions under which a finite cutoff exists and to express c_opt in closed form when the parameters and the relevant prevalence w are all known.

Lemma 2.
(i) When σ²_D = σ²_ND = σ², a finite optimal cutoff minimizing Error(c) exists iff µ_D > µ_ND, and in that case

c_opt = (µ_D + µ_ND)/2 + (σ²/(µ_D − µ_ND)) log((1 − w)/w).    (5)

(ii) When σ²_D ≠ σ²_ND, a finite optimal cutoff exists iff

(µ_D − µ_ND)² + 2(σ²_D − σ²_ND) log(σ_D(1 − w)/(σ_ND w)) ≥ 0,    (6)

and in that case

c_opt = (σ²_D µ_ND − σ²_ND µ_D + σ_D σ_ND [(µ_D − µ_ND)² + 2(σ²_D − σ²_ND) log(σ_D(1 − w)/(σ_ND w))]^{1/2})/(σ²_D − σ²_ND).    (7)

Note that when w = 1/2, then in (i) c_opt = (µ_D + µ_ND)/2, as one might expect. In the case of unequal variances there is an additional restriction beyond µ_D ≥ µ_ND that must hold if the diagnostic is to serve as a reasonable classifier. The following shows that these can be combined in a natural way.

Corollary 1. The restrictions µ_D ≥ µ_ND and (6) hold iff

µ_ND ≤ µ_D − (max{0, −2(σ²_D − σ²_ND) log(σ_D(1 − w)/(σ_ND w))})^{1/2}.    (8)
So, if one is unwilling to assume constant variance, then the hypothesis H_0 : (8) holds needs to be assessed. These results have some importance as they demonstrate that a finite optimal cutoff may in fact not exist, at least when both types of error are taken into account. For example, when µ_ND = 1, µ_D = 2, σ_D = 1, σ_ND = 1.5, then for any w ≤ 0.30885 the optimal cutoff is c_opt = ∞ with Error(∞) = w. When c_opt is infinite, one may instead need to consider various cutoffs c and find one that is acceptable with respect to at least some of the error characteristics FNR(c), FPR(c), Error(c), FDR(c) and FNDR(c).
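The nonexistence phenomenon in this example can be checked numerically. Setting the derivative of Error(c) to zero yields a quadratic in c, and a finite stationary minimum requires a nonnegative discriminant; the sketch below derives the resulting threshold prevalence for the example's parameter values (the condition is derived here from the stationarity equation, not quoted from the text):

```python
import math

def finite_cutoff_exists(mu_D, s_D, mu_ND, s_ND, w):
    """Error'(c) = 0 is a quadratic in c under the binormal model; a finite
    stationary minimum exists iff its discriminant is nonnegative, which
    reduces to (mu_D - mu_ND)^2 + a >= 0 with a as below."""
    a = 2.0 * (s_D ** 2 - s_ND ** 2) * math.log((1.0 - w) * s_D / (w * s_ND))
    return (mu_D - mu_ND) ** 2 + a >= 0.0

# bisect for the threshold prevalence in the example:
# mu_ND = 1, mu_D = 2, s_D = 1, s_ND = 1.5
lo, hi = 0.01, 0.5   # condition fails at lo, holds at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if finite_cutoff_exists(2.0, 1.0, 1.0, 1.5, mid):
        hi = mid
    else:
        lo = mid
print(round(hi, 5))
```

The computed threshold agrees with the value w ≈ 0.30885 quoted in the text for these parameters.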
Consider now examples with equal and unequal variances.
The first inference step is to assess the hypothesis H_0 : AUC > 1/2, which is equivalent to H_0 : µ_ND < µ_D, by computing the prior and posterior probabilities of this event to obtain the relative belief ratio. The prior probability of H_0 given σ² is available in closed form, and averaging this quantity over the prior for σ² gives 1/2. The posterior probability of this event is easily obtained by simulating from the joint posterior. When this is done in the specific numerical example, the relative belief ratio of this event is 2.011 with posterior content 0.999, so there is strong evidence that H_0 : AUC > 1/2 is true.
If evidence is found against H_0, then this indicates a poor diagnostic. If evidence is found in favor, then we can proceed conditionally given that H_0 holds, and so condition the joint prior and joint posterior on this event being true when making inferences about the AUC, c_opt, etc. So for the prior it is necessary to generate 1/σ² ∼ gamma(α_0, β_0) and then generate (µ_D, µ_ND) from the joint conditional prior given σ² and the event µ_D > µ_ND. Denoting the conditional priors given σ² by π_D(µ_D | σ²) and π_ND(µ_ND | σ²), this joint conditional prior is proportional to π_D(µ_D | σ²)π_ND(µ_ND | σ²) restricted to µ_ND < µ_D. While generally it is not possible to generate efficiently from this distribution, importance sampling can be used to calculate any expectations by generating µ_D from π_D(· | σ²) and then µ_ND from the density proportional to φ((µ_ND − µ_0)/τ_0σ) for µ_ND ≤ µ_D and 0 otherwise. Generating from this latter distribution via inversion is easy since its cdf is Φ((µ_ND − µ_0)/τ_0σ)/Φ((µ_D − µ_0)/τ_0σ).

Note that, if we take the posterior obtained from the unconditioned prior and condition it, we get the same conditioned posterior as when the conditioned prior is used to obtain the posterior. This implies that in the joint posterior for (µ_ND, µ_D, σ²) it is only necessary to adjust the posterior for µ_ND, as was done with the prior, and this is also easy to generate from. Note that Lemma 2 (i) implies that it is necessary to use the conditional prior and posterior to guarantee that c_opt exists finitely. Since H_0 was accepted, the conditional sampling was implemented, and the estimate of the AUC is 0.795 with plausible region [0.670, 0.880], which has posterior content 0.856. So the estimate is close to the true value, but there is substantial uncertainty. Figure 4 is a plot of the conditioned prior, the conditioned posterior and the relative belief ratio for these data.
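The inversion step for the truncated prior of µ_ND can be sketched as follows (a minimal stdlib implementation; `Phi_inv` uses bisection rather than a library quantile function):

```python
import math, random

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):
    # bisection inverse of the standard normal cdf (adequate for a sketch)
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_mu_ND(mu_D, mu0, tau0, sigma):
    """Inversion sampler for the N(mu0, (tau0*sigma)^2) prior truncated to
    (-inf, mu_D]: the conditional cdf is Phi((x - mu0)/(tau0*sigma)) divided
    by Phi((mu_D - mu0)/(tau0*sigma)), so invert at u times the normalizer."""
    s = tau0 * sigma
    u = random.random()
    return mu0 + s * Phi_inv(u * Phi((mu_D - mu0) / s))

random.seed(4)
draws = [sample_mu_ND(1.0, 0.0, 1.0, 1.0) for _ in range(5000)]
print(round(max(draws), 3))
```

Every draw respects the truncation µ_ND ≤ µ_D, and the proportion of draws below µ_0 matches the truncated cdf Φ(0)/Φ(1) ≈ 0.594 for the parameters above.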
With the specified prior for w, the posterior is beta(35.3589, 47.53835), which leads to the estimate 0.444 for w with plausible interval (0.374, 0.516) having posterior probability content 0.782. Using this prior and posterior for w and the conditioned prior and posterior for (µ_D, µ_ND, σ²), we proceed to inferences about c_opt and the error characteristics associated with this classification. A computational problem arises when obtaining the prior and posterior distributions of c_opt, as it is clear from (5) that these distributions can be extremely long-tailed. As such, we transform to c_mod = 0.5 + arctan(c_opt)/π ∈ [0, 1] (the Cauchy cdf), obtain the estimate c_mod(d), where d = (n_ND, x̄_ND, s²_ND, n_D, x̄_D, s²_D), and its plausible region, and then, applying the inverse transform, obtain c_opt(d) = tan(π(c_mod(d) − 0.5)) and its plausible region. Relative belief inferences are invariant under smooth 1-1 transformations, so it does not matter which parameterization is used, but it is much easier computationally to work with a bounded quantity. Furthermore, if a shorter-tailed cdf is used instead of the Cauchy, e.g., the N(0, 1) cdf, then errors can arise because extreme negative values are always transformed to 0 and extreme positive values to 1. Figure 5 contains the corresponding plots.

Consider now the case where the variances are not assumed equal. In this case the prior is given by independently specifying 1/σ²_D ∼ gamma(α_0, β_0) and µ_D | σ²_D ∼ N(µ_0, τ²_0 σ²_D), and similarly for (µ_ND, σ²_ND). Although this specifies the same prior for the two populations, it is easily modified to use different priors and, in any case, the posteriors are different. Again it is necessary to check that AUC > 1/2, but also to check that c_opt exists using the full posterior based on this prior, and for this the relevant hypothesis H_0 is given by Corollary 1. If evidence in favor of H_0 is found, the prior is replaced by the conditional prior given this event for inferences about c_opt. This can be implemented via importance sampling as in Example 3, and similarly for the posterior.
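The stabilizing transform and its inverse are one-liners; the sketch below checks the round trip over several orders of magnitude:

```python
import math

def to_mod(c):
    # map a long-tailed cutoff to [0, 1] via the Cauchy cdf
    return 0.5 + math.atan(c) / math.pi

def from_mod(m):
    # inverse transform, back to the original cutoff scale
    return math.tan(math.pi * (m - 0.5))

# round-trip check across several orders of magnitude
for c in (-1000.0, -1.0, 0.0, 0.739, 1000.0):
    assert abs(from_mod(to_mod(c)) - c) <= 1e-6 * max(1.0, abs(c))
c_mod = to_mod(0.739)
print(round(c_mod, 4))
```

Because the Cauchy cdf has heavy tails, even cutoffs of size 10³ map to interior points of [0, 1] rather than being rounded to the endpoints, which is exactly the failure mode described for a N(0, 1) transform.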
Using the same data and hyperparameters as in Example 3, the relative belief ratio of H_0 is 3.748 with posterior content 0.828, so there is reasonably strong evidence in favor of H_0. Estimating the value of the AUC is then based on conditioning on H_0 being true. Using the conditional prior given that H_0 is true, the relative belief estimate of the AUC is 0.793 with plausible interval (0.683, 0.857) having posterior content 0.839. The optimal cutoff is estimated as c_opt(d) = 0.739 with plausible interval (0.316, 1.228) having posterior content 0.875. Figure 6 contains the corresponding plots. It is notable that these inferences are very similar to those in Example 3. It is also noted that the sample sizes are not big, and the only situation where the inferences might be expected to differ substantially between the two analyses is when the variances are substantially different.

Nonparametric Bayes Model
Suppose that X is a continuous variable, still measured to some finite accuracy of course, and that the available information is such that no particular finite-dimensional family of distributions is considered feasible. The situation considered is one where a normal distribution N(µ, σ²), perhaps after transforming the data, serves as a possible base distribution for X, while allowing for deviations from this form; alternative choices can also be made for the base distribution. The statistical model is then to assume that x_ND and x_D are generated as samples from F_ND and F_D, where these are independent values from DP(a, H) (Dirichlet) processes with base H = N(µ, σ²) for some (µ, σ²) and concentration parameter a. Actually, since it is difficult to argue for some particular choice of (µ, σ²), it is supposed that (µ, σ²) also has a prior π(µ, σ²), so the prior on (F_ND, F_D) is specified hierarchically as a mixture Dirichlet process: (µ, σ²) ∼ π and then F | (µ, σ²) ∼ DP(a, N(µ, σ²)), independently for the two populations. To complete the prior it is necessary to specify π and the concentration parameters a_ND and a_D. For π the prior is taken to be of the same form as in Example 3. The concentration parameter a can be chosen via the upper bound (11), expressed in terms of the beta(β_1, β_2) measure B(·, β_1, β_2), on the probability that the random F differs from H by at least ε on an event; this upper bound can be made as small as desired by choosing a large enough. For example, if ε = 0.25 and it is required that this upper bound be less than 0.1, then this is satisfied when a ≥ 9.8, and if instead ε = 0.1, then a ≥ 66.8 is necessary. Note that, since this bound holds for every continuous probability measure H, it also holds when H is random, as considered here. So a controls how close the true distribution is believed to be to H. Alternative methods for eliciting a can be found in [24,25]. Generating (F_ND, F_D) from the prior for given (a, H) can only be done approximately, and the approach of [26] is adopted.
For this, an integer n* is specified and the measure P_n* = ∑_{i=1}^{n*} p_{i,n*} I_{{c_i}} is generated, where (p_{1,n*}, ..., p_{n*,n*}) ∼ Dirichlet(a/n*, ..., a/n*) independent of c_1, ..., c_n* iid ∼ H, since P_n* converges weakly to DP(a, H) as n* → ∞. So, to carry out a priori calculations, proceed as follows. Generate (p_{ND1,n*}, ..., p_{NDn*,n*}) ∼ Dirichlet((a/n*)1_{n*}), (µ_ND, σ²_ND) ∼ π and c_{ND1}, ..., c_{NDn*} iid ∼ N(µ_ND, σ²_ND), and similarly for (p_{D1,n*}, ..., p_{Dn*,n*}), (µ_D, σ²_D) and (c_{D1}, ..., c_{Dn*}). Then F_{ND,n*}(c) = ∑_{{i : c_NDi ≤ c}} p_{NDi,n*} is the random cdf at c ∈ R¹ and similarly for F_{D,n*}, so AUC = ∑_{i=1}^{n*} (1 − F_{D,n*}(c_NDi))p_{NDi,n*} is a value from the prior distribution of the AUC. This is done repeatedly to obtain the prior distribution of the AUC, as in the previous discussion, and similarly for the other quantities of interest.
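This approximate prior sampling scheme can be sketched as follows (for simplicity a fixed N(0, 1) base is used for both populations instead of drawing (µ, σ²) from π, so the prior mean of the AUC should be near 1/2 by symmetry):

```python
import random

def rdirichlet(alpha):
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def prior_auc_draw(a, n_star, mu_ND, s_ND, mu_D, s_D):
    """One draw of the AUC from the approximate DP prior: symmetric
    Dirichlet(a/n*, ..., a/n*) weights and atoms iid from the normal base."""
    p_ND = rdirichlet([a / n_star] * n_star)
    c_ND = [random.gauss(mu_ND, s_ND) for _ in range(n_star)]
    p_D = rdirichlet([a / n_star] * n_star)
    c_D = [random.gauss(mu_D, s_D) for _ in range(n_star)]
    # AUC = sum_i (1 - F_D(c_NDi)) * p_NDi, with F_D the random D-cdf
    return sum(w * sum(q for q, y in zip(p_D, c_D) if y > x)
               for w, x in zip(p_ND, c_ND))

random.seed(11)
draws = [prior_auc_draw(20.0, 50, 0.0, 1.0, 0.0, 1.0) for _ in range(400)]
print(round(sum(draws) / len(draws), 2))
```

Repeating this for many draws yields the prior distribution of the AUC, exactly as described above; shifting mu_D upward tilts the draws toward AUC > 1/2.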
Given the data, the posterior of F_ND given (µ_ND, σ²_ND) is again a Dirichlet process, with concentration a + n_ND and base measure H_ND = p_{a,n_ND} N(µ_ND, σ²_ND) + q_{a,n_ND} F̂_ND, where p_{a,n_ND} = a/(a + n_ND), q_{a,n_ND} = 1 − p_{a,n_ND} and F̂_ND is the empirical cdf based on x_ND, and similarly for H_D. The posteriors of (µ_ND, σ²_ND) and (µ_D, σ²_D) are obtained via results in [27,28]. The posterior density of (µ_ND, σ²_ND) given x_ND is proportional to an expression that depends on the data only through ñ_ND, the number of unique values in x_ND, and the set of unique values, with their mean and sum of squared deviations, and it is straightforward to generate from this posterior. A similar result holds for the posterior of (µ_D, σ²_D). To generate approximately from the full posterior, specify some n** and generate (p_{ND1,n**}, ..., p_{NDn**,n**}) | x_ND ∼ Dirichlet(((a + n_ND)/n**)1_{n**}), (µ_ND, σ²_ND) from its posterior, c_{ND1}, ..., c_{NDn**} | x_ND iid from p_{a,n_ND} N(µ_ND, σ²_ND) + q_{a,n_ND} F̂_ND and w | x_ND ∼ beta(α_1w + n_D, α_2w + n_ND), and similarly for (p_{D1,n**}, ..., p_{Dn**,n**}), (µ_D, σ²_D) and (c_{D1}, ..., c_{Dn**}). If the data do not comprise a sample from the full population, then the posterior for w is replaced by its prior.
There is an issue that arises when making inferences about c_opt, namely, the distributions of c_opt arising from this approach can be very irregular, particularly the posterior distribution. In part this is due to the discreteness of the posterior distributions of F_ND and F_D. This does not affect the prior distribution because the points on which the generated distributions concentrate vary quite continuously among the realizations, which leads to a relatively smooth prior density for c_opt. For the posterior, however, the sampling from the ecdf leads to a very irregular, multimodal density for c_opt, so some smoothing is necessary in this case.
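The smoothing used below (averaging 3 consecutive values of the density over a grid) can be sketched as:

```python
def smooth3(vals):
    """Average each grid value with its neighbors (window of 3), a simple
    way to regularize an irregular, multimodal density estimate."""
    n = len(vals)
    return [sum(vals[max(0, i - 1): min(n, i + 2)]) /
            len(vals[max(0, i - 1): min(n, i + 2)]) for i in range(n)]

rough = [0.0, 5.0, 0.0, 5.0, 0.0]
print(smooth3(rough))
```

A constant sequence is left unchanged, while alternating spikes are pulled toward their local average; note the endpoint windows contain only two values.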
Consider now applying such an analysis to the dataset of Example 3, where we know the true values of the quantities of interest and then to a dataset concerned with the COVID-19 epidemic.

Example 5. Binormal data (Examples 3 and 4)
The data used in Example 3 are now analyzed using the methods of this section. The prior on (µ_ND, σ²_ND), (µ_D, σ²_D) and w is taken to be the same as in Example 4, so the variances are not assumed to be the same. The value ε = 0.25 is used, and requiring (11) to be less than 0.018 leads to a = 20; so the true distributions are allowed to differ quite substantially from a normal distribution. Testing the hypothesis H_0 : AUC > 1/2 led to the relative belief ratio 1.992 (the maximum possible value is 2) with strength 0.997, so there is strong evidence that H_0 is true. The AUC, based on the prior conditioned on H_0 being true, is estimated to be 0.839 with plausible interval (0.691, 0.929) having posterior content 0.814. For these data c_opt(d) = 0.850 with plausible interval (0.45, 1.75) having posterior content 0.835. The true value of the AUC is 0.760 and the true value of c_opt is 0.905, so these inferences are certainly reasonable although, when the lengths of the plausible intervals are taken into account, they are not as accurate as those obtained when binormality is assumed, as that assumption is correct for these data. So the DP approach worked here, although the posterior density of c_opt was quite multimodal and required some smoothing (averaging 3 consecutive values).

Example 6. COVID-19 data.
A dataset was downloaded from https://github.com/YasinKhc/Covid-19 containing data on 3397 individuals diagnosed with COVID-19, including whether or not the patient survived the disease, their gender and their age. There are 1136 complete cases on these variables, of which 646 are male, with 52 having died, and 490 are female, with 25 having died. Our interest is in the use of a patient's age X to predict whether or not they will survive. More detail on this dataset can be found in [29]. The goal is to determine a cutoff age beyond which extra medical attention is paid to patients. Furthermore, it is desirable to see whether or not gender leads to differences, so separate analyses are carried out by gender. So, for example, in the male group ND refers to those males with COVID-19 who will not die and D refers to those who will. Looking at histograms of the data, it is quite clear that binormality is not a suitable assumption, and no transformation of the age variable seems to be available to make a normality assumption more suitable. Table 3 gives summary statistics for the subgroups. Of some note is that condition (8), when using standard estimates for population quantities such as w = 52/646 = 0.08 for males and w = 25/490 = 0.05 for females, is not satisfied, which suggests that in a binormal analysis no finite optimal cutoff exists.

For the prior, it is assumed that (µ_ND, σ²_ND) and (µ_D, σ²_D) are independent values from the same prior distribution as in (10). For the prior elicitation, as discussed in Example 3, suppose it is known with virtual certainty that both means lie in (20, 70) and (l_0, u_0) = (20, 50), so we take µ_0 = 45, τ_0 = (m_2 − m_1)/2u_0 = 0.75, and the iterative process leads to (λ_1, λ_2) = (8.545, 1080.596), which implies a prior on the σ's with mode at 10.932 and the interval (7.764, 19.411) containing 0.99 of the prior probability.
Here the relevant prevalence refers to the proportion of COVID-19 patients that will die and it is supposed that w ∈ [0.00, 0.15] with virtual certainty which implies w ∼ beta(9.81, 109.66). So the prior probability that someone with COVID-19 will die is assumed to be less than 15% with virtual certainty. Since normality is not an appropriate assumption for the distribution of X, the choice ε = 0.25 with the upper bound (11) equal to 0.1 seems reasonable and so a = 9.8. This specifies the prior that is used for the analysis with both genders and it is to be noted that it is not highly informative.
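As a quick sanity check on this elicitation, one can verify by simulation that the stated beta(9.81, 109.66) prior does place nearly all of its mass on [0, 0.15]:

```python
import random

random.seed(5)
# Monte Carlo check that the elicited beta(9.81, 109.66) prior for the
# prevalence w puts almost all of its mass on [0, 0.15]
draws = [random.betavariate(9.81, 109.66) for _ in range(100000)]
frac = sum(d <= 0.15 for d in draws) / len(draws)
print(round(frac, 3))
```

The estimated probability is near 0.99, consistent with "virtual certainty" that w ≤ 0.15; the prior mean is about 0.08, close to the observed male death rate.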
For males the hypothesis AUC > 1/2 is assessed and RB = 1.991 (maximum value 2) with strength effectively equal to 1.00 was obtained, so there is extremely strong evidence that this is true. The unconditional estimate of the AUC is 0.808 with plausible region [0.698, 0.888] having posterior content 0.959, so there is a fair bit of uncertainty concerning the true value. For the conditional analysis, given that AUC > 1/2, the estimate of the AUC is 0.806 with plausible region [0.731, 0.861] having posterior content 0.932. So the conditional analysis gives a similar estimate for the AUC with a small increase in accuracy. In either case it seems that the AUC is indicating that age should be a reasonable diagnostic. Note that the standard nonparametric estimate of the AUC is 0.810 so the two approaches agree here. For females the hypothesis AUC > 1/2 is assessed and RB = 1.994 with strength effectively equal to 1 was obtained, so there is extremely strong evidence that this is true. The unconditional estimate of the AUC is 0.873 with plausible region (0.742, 0.948) having posterior content 0.968. For the conditional analysis, given that AUC > 1/2, the estimate of the AUC is 0.874 with plausible region (0.791, 0.936) having posterior content 0.956. The traditional estimate of the AUC is 0.902 so the two approaches are again in close agreement.
Inferences for c_opt are more problematic for both genders. Consider the male data. The data are very discrete, as there are many repeats, and the approach samples from the ecdf about 84% of the time for the males who died and 98% of the time for the males who did not. The result is a plausible region that is not contiguous even with smoothing. Without smoothing, the estimate is c_opt(d) = 85.5 for males, which is a very dominant peak of the relative belief ratio. The plausible region contains 0.928 of the posterior probability and, although it is not a contiguous interval, the subinterval [85.2, 85.8] is a 0.58-credible interval for c_opt that is in agreement with the evidence. If the data are made continuous by adding a uniform(0,1) random error to each age, then c_opt(d) = 86.1 with plausible interval [75.9, 86.7] having posterior content 0.968. These cutoffs are both greater than the maximum value in the ND data, so there is ample protection against false positives, but it is undoubtedly false negatives that are of most concern in this context. If instead the FNDR is used as the error criterion to minimize, then c_opt(d) = 35.7 with plausible interval [26.1, 35.7] having posterior content 0.826, and in this case there will be too many false positives. So a useful optimal cutoff incorporating the relevant prevalence does not seem to exist for these data.
If the relevant prevalence is ignored and w_0 FNR + (1 − w_0)FPR is minimized for some fixed weight w_0 to determine c_opt(d), then more reasonable values are obtained. Table 4 gives the estimates for various values of w_0. With w_0 = 0.5 (corresponding to using Youden's index), c_opt(d) = 65.7, while if w_0 = 0.7, then c_opt(d) = 56.7. When w_0 is too small or too large, the value of c_opt(d) is not useful. While these estimates do not depend on the relevant prevalence, the error characteristics that do depend on this prevalence (as expressed via its prior and posterior distributions) can still be quoted and a decision made as to whether or not to use the diagnostic. Table 5 contains the estimates of the error characteristics at c_opt(d) for various values of w_0, where these are determined using the prior and posterior on the relevant prevalence w. Note that these estimates are the values that maximize the corresponding relative belief ratios and take into account the posterior of w; so, for example, the estimate of the Error is not the convex combination of the estimates of FNR and FPR based on the weight w_0. Another approach is to simply set the cutoff age at a value c_0 and then investigate the error characteristics at that value. For example, with c_0 = 60, the estimated values are FNR(c_0) = 0.238, FPR(c_0) = 0.308, Error(c_0) = 0.328, FDR(c_0) = 0.818 and FNDR(c_0) = 0.028.
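The weighted-error cutoff search can be sketched directly from two samples (the data below are synthetic ages for illustration, not the paper's dataset, and empirical rates stand in for the full relative belief analysis):

```python
def weighted_error_cutoff(x_nd, x_d, w0):
    """Pick the cutoff minimizing w0*FNR + (1 - w0)*FPR, with FNR and FPR the
    empirical rates from the two samples (w0 = 0.5 corresponds to Youden's
    index); candidate cutoffs are the observed values."""
    best_c, best_e = None, float("inf")
    for c in sorted(set(x_nd) | set(x_d)):
        fnr = sum(x <= c for x in x_d) / len(x_d)
        fpr = sum(x > c for x in x_nd) / len(x_nd)
        e = w0 * fnr + (1.0 - w0) * fpr
        if e < best_e:
            best_c, best_e = c, e
    return best_c, best_e

# tiny synthetic illustration
x_nd = [30, 35, 40, 45, 50, 55]
x_d = [55, 60, 65, 70, 75, 80]
best_c, best_e = weighted_error_cutoff(x_nd, x_d, 0.5)
print(best_c, round(best_e, 4))
```

Raising w_0 penalizes false negatives more heavily and pushes the chosen cutoff downward, which is the behavior seen across the rows of Table 4.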
Similar results are obtained for the cutoff with the female data, although with different values. Overall, age by itself does not seem to be a useful classifier, although that is a decision for medical practitioners. Perhaps it is more important to treat those who stand a significant chance of dying more extensively and not worry too much that some treatments are unnecessary. The clear message from these data, however, is that a relatively high AUC does not immediately imply that a diagnostic is useful, and the relevant prevalence is a key aspect of this determination.

Conclusions
ROC analyses represent a significant practical application of statistical methodology. While previous work has considered such analyses within a Bayesian framework, this has typically required the specification of loss functions. Losses are not required in the approach taken here, which is based entirely on a natural characterization of statistical evidence via the principle of evidence and the relative belief ratio. As discussed in Section 2.2, this results in a number of good properties for the inferences that are not possessed by inferences derived from other approaches. While the Bayes factor is also a valid measure of evidence, its usage is far more restricted than that of the relative belief ratio, which can be applied with any prior, without the need for any modifications, for both hypothesis assessment and estimation problems. This paper has demonstrated the application of relative belief to ROC analyses under a number of model assumptions. In addition, as documented in points (ii)-(vi) of the Introduction, a number of new results have been developed for ROC analyses more generally.
Proof of Corollary 1. Suppose µ_D ≥ µ_ND and (6) hold. Putting a = 2(σ²_D − σ²_ND) log(σ_D(1 − w)/(σ_ND w)), we have that, for fixed µ_D, σ²_D, σ²_ND and w, (µ_D − µ_ND)² + a is a quadratic in µ_ND. This quadratic has discriminant −4a and so has no real roots whenever a > 0 and, noting that a does not depend on µ_D, the only restriction on µ_ND is then µ_ND ≤ µ_D. When a ≤ 0 the roots of the quadratic are µ_D ± √(−a) and so, since the quadratic is negative between the roots and µ_D − √(−a) ≤ µ_D ≤ µ_D + √(−a), the two restrictions imply µ_ND ≤ µ_D − √(−a). Combining the two cases gives (8).