
A Bayesian Decision-Theoretic Approach to Logically-Consistent Hypothesis Testing

1 Institute of Mathematics and Statistics, University of São Paulo, São Paulo 05508-090, Brazil
2 Department of Statistics, Federal University of São Carlos, São Carlos 13565-905, Brazil
* Author to whom correspondence should be addressed.
Entropy 2015, 17(10), 6534-6559; https://doi.org/10.3390/e17106534
Received: 27 May 2015 / Revised: 1 September 2015 / Accepted: 9 September 2015 / Published: 24 September 2015
(This article belongs to the Special Issue Inductive Statistical Methods)

Abstract:
This work addresses an important issue regarding the performance of simultaneous test procedures: the construction of multiple tests that at the same time are optimal from a statistical perspective and that also yield logically-consistent results that are easy to communicate to practitioners of statistical methods. For instance, if hypothesis A implies hypothesis B, is it possible to create optimal testing procedures that reject A whenever they reject B? Unfortunately, several standard testing procedures fail to achieve such logical consistency. Although this has been deeply investigated under a frequentist perspective, the literature lacks analyses under a Bayesian paradigm. In this work, we contribute to the discussion by investigating three rational relationships under a Bayesian decision-theoretic standpoint: coherence, invertibility and union consonance. We characterize and illustrate through simple examples optimal Bayes tests that fulfill each of these requisites separately. We also explore how far one can go by putting these requirements together. We show that although fairly intuitive tests satisfy both coherence and invertibility, no Bayesian testing scheme meets the desiderata as a whole, strengthening the understanding that logical consistency cannot be combined with statistical optimality in general. Finally, we associate Bayesian hypothesis testing with Bayes point estimation procedures. We prove that performing logically-consistent hypothesis testing by means of a Bayes point estimator is optimal only under very restrictive conditions.

1. Introduction

One could (...) argue that ‘power is not everything’. In particular for multiple test procedures one can formulate additional requirements, such as, for example, that the decision patterns should be logical, conceivable to other persons, and, as far as possible, simple to communicate to non-statisticians.
—G. Hommel and F. Bretz [1]
Multiple hypothesis testing, a formal quantitative method that consists of testing several hypotheses simultaneously [2], has gained considerable ground in the last few decades with the aim of drawing conclusions from data in scientific experiments regarding unknown quantities of interest. Most of the development of multiple hypothesis testing has been focused on the construction of test procedures satisfying statistical optimality criteria, such as the minimization of posterior expected loss functions or the control of various error rates. These advances are detailed, for instance, in [2], [3] (p. 7), [4] and the references therein. However, another important issue concerning multiple hypothesis testing, namely the construction of simultaneous tests that yield coherent results easier to communicate to practitioners of statistical methods, has not been so deeply investigated yet, especially under the Bayesian paradigm. As a matter of fact, most traditional multiple hypothesis testing schemes do not combine statistical optimality with logical consistency. For example, [5] (p. 250) presents a situation regarding the parameter, θ, of a single exponential random variable, X, in which uniformly most powerful (UMP) tests of level 0.05 for the one-sided hypothesis $H_0^{(1)}: \theta \leq 1$ and the two-sided hypothesis $H_0^{(2)}: \theta \leq 1 \text{ or } \theta \geq 2$, say $\varphi_1$ and $\varphi_2$, respectively, lead to puzzling decisions. In fact, for the sample outcome $X = 0.7$, the test $\varphi_2$ rejects $H_0^{(2)}$, and because $H_0^{(1)}$ implies $H_0^{(2)}$, one may decide to reject $H_0^{(1)}$ as well. On the other hand, the test $\varphi_1$ does not reject $H_0^{(1)}$, leaving a practitioner confused by these conflicting results. In this example, an inconsistency related to nested hypotheses named coherence [6] takes place. Frequently, other logical relationships one may expect from the conclusions drawn from multiple hypothesis testing, such as consonance [6] and compatibility [7], are not met either.
Although several of these properties have been deeply investigated under a frequentist hypothesis-testing framework, the Bayesian literature lacks such analyses. In this work, we contribute to this discussion by examining three rational requirements in simultaneous tests under a Bayesian decision-theoretic perspective. In short, we characterize the families of loss functions that induce multiple Bayesian tests that partially satisfy such desiderata. In Section 2, we review and illustrate the concept of a testing scheme (TS), a mathematical object that assigns to each statistical hypothesis of interest a test function. In Section 3, we formalize three consistency relations one may find important to hold in simultaneous tests: coherence, union consonance and invertibility. In Section 4, we provide necessary and sufficient conditions on loss functions to ensure that Bayesian tests meet each desideratum separately, whatever the prior distribution for the relevant parameters is. In Section 5, we prove, under quite general conditions, the impossibility of creating multiple tests under a Bayesian decision-theoretic framework that fulfill the triplet of requisites simultaneously with respect to all prior distributions. We also explore the connection between logically-consistent Bayes tests and Bayes point estimation procedures. Final remarks and suggestions for future inquiries are presented in Section 6. All theorems are proven in the Appendix.

2. Testing Schemes

We start by formulating the mathematical setup for multiple Bayesian tests. For the remainder of the manuscript, the parameter space is denoted by $\Theta$ and the sample space by $\mathcal{X}$. Furthermore, $\sigma(\Theta)$ and $\sigma(\mathcal{X})$ represent σ-fields of subsets of $\Theta$ and $\mathcal{X}$, respectively. We consider the Bayesian statistical model $(\mathcal{X} \times \Theta, \sigma(\mathcal{X} \times \Theta), \mathbb{P})$. The $\mathbb{P}$-marginal distribution of θ, namely the prior distribution for θ, is denoted by π, while $\pi_x(\cdot)$ represents the posterior distribution for θ given $X = x$, $x \in \mathcal{X}$. Moreover, $P(\cdot \mid \theta)$ stands for the conditional distribution of the observable X given θ, and $L_x(\theta)$ represents the likelihood function at the point $\theta \in \Theta$ generated by the sample observation $x \in \mathcal{X}$. Finally, let Ψ be the set of all test functions, that is, the set of all $\{0,1\}$-valued measurable functions defined on $\mathcal{X}$. As usual, “1” denotes the decision of rejecting the null hypothesis and “0” the decision of not rejecting or accepting it.
Next, we review the definition of a TS, a mathematical device that formally describes the idea that to each hypothesis of interest a test function is assigned. Although the specification of the hypotheses of interest most of the time depends on the scientific problem under consideration, here, we assume that a decision-maker has to assign a test to each element of $\sigma(\Theta)$. This assumption not only enables us to precisely define the relevant consistency properties, but it also allows multiple Bayesian testing based on posterior probabilities of the hypotheses (a deeper discussion on this issue may be found in [3] (p. 5) and [8]).
Definition 1. (Testing scheme (TS)) Let the σ-field of subsets of the parameter space, $\sigma(\Theta)$, be the set of hypotheses to be tested. Moreover, let Ψ be the set of all test functions defined on $\mathcal{X}$. A TS is a function $\varphi: \sigma(\Theta) \to \Psi$ that assigns to each hypothesis $A \in \sigma(\Theta)$ the test $\varphi_A \in \Psi$ for testing A.
Thus, for A σ ( Θ ) and x X , φ A ( x ) = 1 represents the decision of rejecting the hypothesis A when the datum x is observed. Similarly, φ A ( x ) = 0 represents the decision of not rejecting A. We now present examples of testing schemes.
Example 1. (Tests based on posterior probabilities) Assume $\Theta = \mathbb{R}^d$ and $\sigma(\Theta) = \mathcal{B}(\mathbb{R}^d)$, the Borelians of $\mathbb{R}^d$. Let π be the prior probability distribution for θ. For each $A \in \sigma(\Theta)$, let $\varphi_A: \mathcal{X} \to \{0,1\}$ be defined by:
$$\varphi_A(x) = \mathbb{I}\left(\pi_x(A) < \tfrac{1}{2}\right),$$
where $\pi_x(\cdot)$ is the posterior distribution of θ, given x. This is the TS that assigns to each hypothesis $A \in \mathcal{B}(\mathbb{R}^d)$ the test that rejects it when its posterior probability is smaller than 1/2.
Recall that, under a Bayesian decision-theoretic perspective, a hypothesis test for the hypothesis $\Theta_0 \subseteq \Theta$ [5] (p. 214) is a decision problem in which the action space is $\{0,1\}$ and the loss function $L: \{0,1\} \times \Theta \to \mathbb{R}$ satisfies:
$$L(1,\theta) \geq L(0,\theta) \ \text{for} \ \theta \in \Theta_0 \quad \text{and} \quad L(1,\theta) \leq L(0,\theta) \ \text{for} \ \theta \in \Theta_0^c, \tag{1}$$
that is, L is such that a wrong decision ought to be assigned a loss at least as large as that assigned to a correct decision (many authors consider strict inequalities in Equation (1)). We call such a loss function a (strict) hypothesis testing loss function.
A solution of this decision problem, named a Bayes test, is a test function $\varphi^* \in \Psi$ derived, for each sample point $x \in \mathcal{X}$, by minimizing the expectation of the loss function L over $\{0,1\}$ with respect to the posterior distribution. That is, for each $x \in \mathcal{X}$,
$$\varphi^*(x) = \mathbb{I}\left(\mathbb{E}[L(1,\theta) \mid X = x] < \mathbb{E}[L(0,\theta) \mid X = x]\right),$$
where $\mathbb{E}[L(d,\theta) \mid X = x] = \int_\Theta L(d,\theta)\, d\pi_x(\theta)$, $d \in \{0,1\}$. In the case of equality of the posterior expectations, both zero and one are optimal decisions, and either of them can be chosen as $\varphi^*(x)$.
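This minimization is straightforward to mimic numerically. The sketch below is our illustration, not part of the paper: it assumes, for concreteness, a Beta(2,1) posterior (which arises from one Bernoulli observation x = 1 under a uniform prior), approximates both posterior expected losses by Monte Carlo over posterior draws, and applies a 0–1 loss to the hypothetical null hypothesis A = {θ ≤ 0.5}:

```python
import numpy as np
from scipy import stats

# Assumed posterior for illustration: theta | x ~ Beta(2, 1)
rng = np.random.default_rng(0)
theta = stats.beta(2, 1).rvs(size=200_000, random_state=rng)  # posterior draws

def bayes_test(loss):
    """phi*(x) = I( E[L(1,theta)|x] < E[L(0,theta)|x] ), via Monte Carlo."""
    return int(loss(1, theta).mean() < loss(0, theta).mean())

# 0-1 loss for the hypothetical null A = {theta <= 0.5}:
# L(0, theta) = I(theta not in A), L(1, theta) = I(theta in A)
def loss_01(d, t):
    in_A = t <= 0.5
    return (in_A if d == 1 else ~in_A).astype(float)

phi = bayes_test(loss_01)
# P(A | x) = 0.5^2 = 0.25 < 1/2, so the 0-1 loss Bayes test rejects A
print(phi)  # 1
```

With the 0–1 loss, the Bayes test reduces to rejecting A exactly when its posterior probability falls below 1/2, matching the rule of Example 1.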
When dealing with multiple tests, one can use the above procedure for each hypothesis of interest. Hence, one can derive a Bayes test for each null hypothesis A σ ( Θ ) considering a specified loss function L A : { 0 , 1 } × Θ R satisfying Equation (1). This is formally described in the following definition.
Definition 2. (TS generated by a family of loss functions) Let $(\mathcal{X} \times \Theta, \sigma(\mathcal{X} \times \Theta), \mathbb{P})$ be a Bayesian statistical model. Let $(L_A)_{A \in \sigma(\Theta)}$ be a family of hypothesis testing loss functions, where $L_A: \{0,1\} \times \Theta \to \mathbb{R}$ is the loss function for testing $A \in \sigma(\Theta)$. A TS generated by the family of loss functions $(L_A)_{A \in \sigma(\Theta)}$ is any TS $\varphi$ defined over $\sigma(\Theta)$ such that, $\forall A \in \sigma(\Theta)$, $\varphi_A$ is a Bayes test for hypothesis A with respect to π considering the loss $L_A$.
The following example illustrates this concept.
Example 2. (Tests based on posterior probabilities) Assume the same scenario as in Example 1 and that $(L_A)_{A \in \sigma(\Theta)}$ is a family of loss functions such that, $\forall A \in \sigma(\Theta)$ and $\forall \theta \in \Theta$,
$$L_A(0,\theta) = \mathbb{I}(\theta \notin A) \quad \text{and} \quad L_A(1,\theta) = \mathbb{I}(\theta \in A),$$
that is, $L_A$ is the 0–1 loss for A ([5] (p. 215)). The testing scheme introduced in Example 1 is a TS generated by the family of 0–1 loss functions.
The next example shows a TS of Bayesian tests motivated by different epistemological considerations (see [9,10] for details), the full Bayesian significance tests (FBST).
Example 3. (FBST testing scheme) Let $\Theta = \mathbb{R}^d$, $\sigma(\Theta) = \mathcal{B}(\mathbb{R}^d)$ and $f(\cdot)$ be the prior probability density function (pdf) for θ. Suppose that, for each $x \in \mathcal{X}$, there exists $f(\cdot \mid x)$, the pdf of the posterior distribution of θ, given x. For each hypothesis $A \in \sigma(\Theta)$, let:
$$T_x^A = \left\{\theta \in \Theta : f(\theta \mid x) > \sup_{\theta_0 \in A} f(\theta_0 \mid x)\right\}$$
be the set tangent to the null hypothesis, and let $ev_x(A) = 1 - \pi_x(T_x^A)$ be the Pereira–Stern evidence value for A (see [11] for a geometric motivation). One can define a TS $\varphi$ by:
$$\varphi_A(x) = \mathbb{I}\left(ev_x(A) \leq c\right), \quad \forall A \in \sigma(\Theta) \ \text{and} \ \forall x \in \mathcal{X},$$
in which $c \in [0,1]$ is fixed. In other words, one does not reject the null hypothesis when its evidence is larger than c.
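For a concrete feel of the FBST evidence value, the following sketch (our illustration, not the paper's) assumes a Beta(2,1) posterior, whose density is f(θ|x) = 2θ on [0,1], and computes the Pereira–Stern evidence of the sharp null A = {0.5} by Monte Carlo:

```python
import numpy as np
from scipy import stats

posterior = stats.beta(2, 1)   # assumed posterior; density f(theta|x) = 2*theta
theta = posterior.rvs(size=400_000, random_state=np.random.default_rng(1))

def ev(sup_density_over_A):
    """ev(A) = 1 - pi_x(T_x^A), where T_x^A collects the points whose
    posterior density exceeds the supremum of the density over A."""
    return 1.0 - np.mean(posterior.pdf(theta) > sup_density_over_A)

# Sharp null A = {0.5}: the supremum of f(.|x) over A is f(0.5|x) = 1.0
e = ev(posterior.pdf(0.5))     # exact value: 1 - P(2*theta > 1 | x) = 0.25
phi_A = int(e <= 0.05)         # with c = 0.05, A is not rejected
```

The coherence of this TS is visible in the code: enlarging the null hypothesis can only raise the supremum of the density over it, shrink the tangent set, and hence increase the evidence value.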
We end this section by defining a TS generated by a point estimation procedure, an intuitive concept that plays an important role in characterizing logically-consistent simultaneous tests.
Definition 3. (TS generated by a point estimation procedure) Let $\delta: \mathcal{X} \to \Theta$ be a point estimator for θ ([5] (p. 296)). The TS generated by δ is defined by:
$$\varphi_A(x) = \mathbb{I}(\delta(x) \notin A).$$
Hence, the TS generated by the point estimator δ rejects hypothesis A after observing x if, and only if, the point estimate for θ, $\delta(x)$, is not in A.
Example 4. (TS generated by a point estimation procedure) Let $\Theta = \mathbb{R}$, $\sigma(\Theta) = \mathcal{P}(\Theta)$ and $X_1, \ldots, X_n \mid \theta \overset{\text{i.i.d.}}{\sim} N(\theta, 1)$. The TS generated by the sample mean, $\bar{X}$, rejects $A \in \sigma(\Theta)$ when x is observed if $\bar{x} \notin A$.

3. The Desiderata

In this section, we review three properties one may expect from simultaneous test procedures: coherence, invertibility and union consonance.

3.1. Coherence

When a hypothesis is tested by a significance test and is not rejected, it is generally agreed that all hypotheses implied by that hypothesis (its “components”) must also be considered as non-rejected.
—K. R. Gabriel [6]
The first property concerns nested hypotheses and was originally defined by [6]. It states that if hypothesis $H_0^{(1)}$ implies hypothesis $H_0^{(2)}$, that is, $H_0^{(1)} \subseteq H_0^{(2)}$, then the rejection of $H_0^{(2)}$ implies the rejection of $H_0^{(1)}$. In the context of TSs, we have the following definition.
Definition 4. (Coherence) A testing scheme φ is coherent if:
$$\forall A, B \in \sigma(\Theta), \quad A \subseteq B \Rightarrow \varphi_A \geq \varphi_B, \quad \text{i.e.,} \ \forall x \in \mathcal{X}, \ \varphi_A(x) \geq \varphi_B(x).$$
In other words, if after observing x, a hypothesis is rejected, any hypothesis that implies it has to be rejected, as well.
The testing schemes introduced in Examples 1, 3 and 4 are coherent. Indeed, in Example 1, coherence is a consequence of the monotonicity of probability measures, while in Example 3, it follows from the fact that if $A \subseteq B$, then $T_x^B \subseteq T_x^A$ and, therefore, $ev_x(A) \leq ev_x(B)$. In Example 4, coherence is immediate. On the other hand, testing schemes based on UMP tests or generalized likelihood ratio tests with a common fixed level of significance are not coherent in general. Neither are TSs generated by some families of loss functions (see Section 4). Next, we illustrate that even test procedures based on p-values or Bayes factors may be incoherent.
Example 5. Suppose that in a case-control study, one measures the genotype in a certain locus for each individual of a sample. Results are shown in Table 1. These numbers were taken from a study presented by [12] that had the aim of verifying the hypothesis that subunits of the gene $GABA_A$ contribute to a condition known as methamphetamine use disorder. Here, the set of all possible genotypes is $\mathcal{G} = \{AA, AB, BB\}$. Let $\gamma = (\gamma_{AA}, \gamma_{AB}, \gamma_{BB})$, where $\gamma_i$ is the probability that an individual from the case group has genotype i. Similarly, let $\pi = (\pi_{AA}, \pi_{AB}, \pi_{BB})$, where $\pi_i$ is the probability that an individual from the control group has genotype i.
In this context, two hypotheses are of interest: the hypothesis that the genotypic proportions are the same in both groups, $H_0^G: \gamma = \pi$, and the hypothesis that the allelic proportions are the same in both groups, $H_0^A: \gamma_{AA} + \frac{1}{2}\gamma_{AB} = \pi_{AA} + \frac{1}{2}\pi_{AB}$. The p-values obtained using chi-square tests for these hypotheses are, respectively, 0.152 and 0.069. Hence, at the level of significance α = 10%, the TS given by chi-square tests rejects $H_0^A$, but does not reject $H_0^G$. That is, the TS leads a practitioner to believe that the allelic proportions are different in the two groups, but it does not suggest any difference between the genotypic proportions. This is absurd! If the allelic proportions are not the same in both groups, the genotypic proportions cannot be the same either. Indeed, if the latter were the same, then $\gamma_i = \pi_i$, $\forall i \in \mathcal{G}$, and hence, $\theta \in H_0^A$. This example is further discussed in [8,13].
Table 1. Genotypic sample frequencies.

             AA    AB    BB    Total
   Case      55    83    50    188
   Control   24    42    39    105
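The two chi-square tests are easy to reproduce from Table 1. The snippet below is our reconstruction: scipy's default 2×2 test applies a continuity correction, so the allelic p-value comes out slightly different from the 0.069 reported (which presumably stems from the exact test variant used in [12]), but the incoherent pair of decisions at the 10% level is unchanged:

```python
from scipy.stats import chi2_contingency

# Genotypic counts from Table 1 (rows: case, control; columns: AA, AB, BB)
genotype = [[55, 83, 50],
            [24, 42, 39]]
# Allele counts derived from the genotypes: #A = 2*AA + AB, #B = 2*BB + AB
allele = [[2 * 55 + 83, 2 * 50 + 83],
          [2 * 24 + 42, 2 * 39 + 42]]

_, p_genotype, _, _ = chi2_contingency(genotype)
_, p_allele, _, _ = chi2_contingency(allele)

print(round(p_genotype, 3))  # 0.152
# At level 10%: H0A (equal allelic proportions) is rejected (p_allele < 0.10),
# while H0G (equal genotypic proportions) is not -- an incoherent pair of decisions.
```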
Several other (in)coherent testing schemes are explored by [8,14].
Coherence is by far the most emphasized logical requisite for simultaneous test procedures in the literature. It is often regarded as a sensible property by both theorists and practitioners of statistical methods who perceive a hypothesis test as a two-fold (accept/reject) decision problem. On the other hand, adherents to evidence-based approaches to hypothesis testing [15] do not see the need for coherence. Under the frequentist approach to hypothesis testing, the construction of coherent procedures is closely associated with the so-called closure methods [16,17]. Many results on coherent classical tests are shown in [6,17], among others. On the other hand, coherence has not been deeply investigated from a Bayesian standpoint yet, except for [18], who relate coherence with admissibility and Bayesian optimality in certain situations of finitely many hypotheses of interest. In Section 4, we provide a characterization of coherent testing schemes under a decision-theoretic framework.

3.2. Invertibility

There is a duality between hypotheses and alternatives which is not respected in most of the classical hypothesis-testing literature. (...) suppose that we decide to switch the names of alternative and hypothesis, so that $\Omega_H$ becomes $\Omega_A$, and vice versa. Then we can switch tests from $\phi$ to $\psi = 1 - \phi$ and the “actions” accept and reject become switched.
—M. J. Schervish [5] (p. 216)
The duality mentioned in the quotation above is formally described in the next definition.
Definition 5. (Invertibility) A testing scheme φ satisfies invertibility if:
$$\forall A \in \sigma(\Theta), \quad \varphi_{A^c} = 1 - \varphi_A.$$
In other words, it is irrelevant to decision-making which hypothesis is labeled as null and which is labeled as alternative.
Unlike coherence, there is no consensus among statisticians on how reasonable invertibility is. While it is supported by many decision theorists, invertibility is usually discredited by advocates of the frequentist theory owing to the difference between the interpretations of “not reject a hypothesis” and “accept a hypothesis” under various epistemological viewpoints (the reader is referred to [7] for a discussion on this distinction). As a matter of fact, invertibility can also be seen, from a logical perspective, as a version of the law of the excluded middle, which itself represents a gap between schools of logic ([19] (p. 32)). In spite of the controversies surrounding invertibility, it seems beyond dispute that its absence in multiple tests may lead a decision-maker to be puzzled by senseless conclusions, such as the simultaneous rejection of both a hypothesis and its alternative. The following example illustrates this point.
Example 6. Suppose that $X \mid \theta \sim \mathrm{Normal}(\theta, 1)$, and consider that the parameter space is $\Theta = \{-3, 3\}$. Assume one wants to test the following null hypotheses:
$$H_0^A: \theta = 3 \quad \text{and} \quad H_0^B: \theta = -3.$$
The Neyman–Pearson tests for these hypotheses have the following critical regions, at the level 5%, respectively:
$$\{x \in \mathbb{R} : x < 1.35\} \quad \text{and} \quad \{x \in \mathbb{R} : x > -1.35\}.$$
Hence, if we observe $x = 0.5$, we reject both $H_0^A$ and $H_0^B$, even though $H_0^A \cup H_0^B = \Theta$!
The testing schemes of Examples 2 and 4 satisfy invertibility. In Example 4, this is straightforward to verify. In Example 2, it follows essentially from the equivalence $\pi_x(A) < 1/2 \Leftrightarrow \pi_x(A^c) > 1/2$. If $\pi_x(A) \neq 1/2$ for each sample x and for all $A \in \sigma(\Theta)$, the unique TS generated by the 0–1 loss functions satisfies invertibility. Otherwise, there is a testing scheme generated by such losses that is still in line with this property. Indeed, for any $A \in \sigma(\Theta)$ and $x_0 \in \mathcal{X}$ such that $\pi_{x_0}(A) = 1/2$, the decision of rejecting A (not rejecting $A^c$) after observing $x_0$ has the same expected loss as the decision of not rejecting (rejecting) it. Thus, among all testing schemes generated by the 0–1 loss functions, which are all equivalent from a decision-theoretic point of view ([20] (p. 123)), a decision-maker can always choose a TS $\varphi$ such that $\varphi_{A^c}(x_0) = 1 - \varphi_A(x_0)$ for all $A \in \sigma(\Theta)$ and all $x_0 \in \mathcal{X}$ such that $\pi_{x_0}(A) = 1/2$. Such a TS $\varphi$ meets invertibility.

3.3. Consonance

... a test for $\left(\bigcup_{i \in I} H_i\right)^c$ versus $\bigcup_{i \in I} H_i$ may result in rejection which then indicates that at least one of the hypotheses $H_i$, $i \in I$, may be true.
—H. Finner and K. Strassburger [21]
The third property concerns two hypotheses, say A and B, and their union, A B . It is motivated by the fact that in many cases, it seems reasonable that a testing scheme that retains the union of these hypotheses should also retain at least one of them. This idea is generalized in Definition 6.
Definition 6. (Union Consonance) A TS $\varphi$ satisfies finite (countable) union consonance if, for every finite (countable) set of indices I,
$$\forall \{A_i\}_{i \in I} \subseteq \sigma(\Theta), \quad \varphi_{\bigcup_{i \in I} A_i} \geq \min\{\varphi_{A_i}\}_{i \in I}.$$
In other words, if we retain the union of the hypotheses, $\bigcup_{i \in I} A_i$, we should not reject at least one of the $A_i$'s.
There are several testing schemes that meet union consonance. For instance, TSs generated by point estimation procedures, TSs of Aitchison’s confidence-region tests [22] and FBST TSs (under quite general conditions; see [8]) satisfy both finite and countable union consonance.
Although union consonance may not be considered as appealing as coherence for simultaneous test procedures, it was hinted at in a few relevant works. For instance, the interpretation given by [21] on the final joint decisions derived from partial decisions implicitly suggests that union consonance is reasonable: they suggest one should consider $B := \bigcup_{\{A \,:\, \varphi_A(x) = 1\}} A$ to be the set of all parameter values rejected by the simultaneous procedure at hand when x is observed. Under this reading, it seems natural to expect that $\varphi_B(x) = 1$, which is exactly what the union consonance principle states. As a matter of fact, the general partitioning principle proposed by these authors satisfies union consonance. It should also be mentioned that union consonance, together with coherence, plays a key role in the possibilistic abstract belief calculus [23]. In addition, an evidence-based approach detailed in [24] satisfies both consonance and invertibility.
We end this section by stating a result derived from putting these logical requirements together.
Theorem 1. Let Θ be a countable parameter space and $\sigma(\Theta) = \mathcal{P}(\Theta)$. Let $\varphi$ be a testing scheme defined on $\sigma(\Theta)$. The TS $\varphi$ satisfies coherence, invertibility and countable union consonance if, and only if, there is a point estimator $\delta: \mathcal{X} \to \Theta$ such that $\varphi$ is generated by δ.
Theorem 1 is also valid for finite union consonance with the obvious adaptation.
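The "if" direction of Theorem 1 can be checked by brute force on a small parameter space. The sketch below (our illustration, with a hypothetical estimate δ(x) = 2 on Θ = {1, 2, 3}) enumerates all hypotheses in the power set and verifies that the TS generated by a point estimator satisfies the three properties:

```python
from itertools import combinations

Theta = frozenset({1, 2, 3})   # hypothetical finite parameter space
delta_x = 2                    # hypothetical point estimate after observing x

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# TS generated by delta: reject A iff delta(x) is not in A
phi = {A: int(delta_x not in A) for A in powerset(Theta)}
hyps = list(phi)

# Coherence: A subset of B implies phi_A >= phi_B
assert all(phi[A] >= phi[B] for A in hyps for B in hyps if A <= B)
# Invertibility: phi_{A^c} = 1 - phi_A
assert all(phi[Theta - A] == 1 - phi[A] for A in hyps)
# Union consonance: phi_{A union B} >= min(phi_A, phi_B)
assert all(phi[A | B] >= min(phi[A], phi[B]) for A in hyps for B in hyps)
print("coherence, invertibility and union consonance all hold")
```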

4. A Bayesian Look at Each Desideratum

In the previous section, we provided several examples of testing schemes satisfying some of the logical properties reviewed therein. In particular, a testing scheme generated by the family of 0–1 loss functions (Example 2) was shown to fulfill both coherence and invertibility. However, not all families of loss functions generate a TS meeting any of these requisites, as is shown in the examples below.
Example 7. Suppose that $X \mid \theta \sim \mathrm{Bernoulli}(\theta)$ and that one is interested in testing the null hypotheses:
$$H_0^A: \theta \leq 0.4 \quad \text{and} \quad H_0^B: \theta \leq 0.5.$$
Furthermore, assume $\theta \sim \mathrm{Uniform}(0,1)$ a priori and that the decision-maker uses the loss functions from Table 2 to perform the tests.
Thus, the Bayes tests for $H_0^A$ and $H_0^B$ are, respectively,
$$\varphi_A(x) = \mathbb{I}\left(\mathbb{P}(\theta \leq 0.4 \mid x) < \tfrac{1}{7}\right) \quad \text{and} \quad \varphi_B(x) = \mathbb{I}\left(\mathbb{P}(\theta \leq 0.5 \mid x) < \tfrac{1}{3}\right).$$
As $\theta \mid x \sim \mathrm{Beta}(2,1)$ when $x = 1$ is observed, $\mathbb{P}(\theta \leq 0.4 \mid x) = 0.16$ and $\mathbb{P}(\theta \leq 0.5 \mid x) = 0.25$, so that one does not reject $H_0^A$, but rejects $H_0^B$. Since $H_0^A \subseteq H_0^B$, we conclude that coherence does not hold.
Table 2. Loss functions for tests of Example 7.

                  State of Nature
   Decision   θ ∈ H_0^A   θ ∉ H_0^A
      0           0           1
      1           6           0

                  State of Nature
   Decision   θ ∈ H_0^B   θ ∉ H_0^B
      0           0           1
      1           2           0
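The incoherence in Example 7 can be verified in a few lines, using the fact that the cdf of the Beta(2,1) posterior is t²:

```python
from scipy.stats import beta

post = beta(2, 1)          # posterior after observing x = 1 under the uniform prior
p_A = post.cdf(0.4)        # P(theta <= 0.4 | x) = 0.4**2 = 0.16
p_B = post.cdf(0.5)        # P(theta <= 0.5 | x) = 0.5**2 = 0.25

phi_A = int(p_A < 1 / 7)   # Bayes test from the first loss in Table 2
phi_B = int(p_B < 1 / 3)   # Bayes test from the second loss in Table 2

print(phi_A, phi_B)        # 0 1: the larger H0B is rejected, the smaller H0A is not
```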
Intuitively, incoherence takes place because the loss of falsely rejecting $H_0^A$ is three times as large as the loss of falsely rejecting $H_0^B$, while the corresponding errors of Type II are of the same magnitude. Hence, these loss functions reveal that the decision-maker is more reluctant to reject $H_0^A$ than to reject $H_0^B$, in such a way that he needs only little evidence to accept $H_0^A$ (posterior probability greater than 1/7) compared to the amount of evidence needed to accept $H_0^B$ (posterior probability greater than 1/3). Thus, it is not surprising that, in this case, the tests do not cohere for some priors.
Example 8. In the setup of Example 7, suppose one also needs to test the null hypothesis $H_0^{B^c}: \theta > 0.5$ by taking into account the loss function in Table 3.
The Bayes test for $H_0^{B^c}$ is then to reject it if $\mathbb{P}(\theta > 0.5 \mid x) < 4/5$. For $x = 1$, $\mathbb{P}(\theta > 0.5 \mid x) = 0.75$, and consequently, $H_0^{B^c}$ is rejected. As both $H_0^B$ and $H_0^{B^c}$ are rejected when $x = 1$ is observed, these tests do not satisfy invertibility.
Table 3. Loss function for Example 8.

                   State of Nature
   Decision   θ ∈ H_0^{B^c}   θ ∉ H_0^{B^c}
      0             0               4
      1             1               0
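Likewise, the failure of invertibility in Example 8 can be checked directly, again with the Beta(2,1) posterior arising from x = 1:

```python
from scipy.stats import beta

post = beta(2, 1)                          # posterior after observing x = 1
phi_B = int(post.cdf(0.5) < 1 / 3)         # test of H0B (Table 2): 0.25 < 1/3
phi_Bc = int((1 - post.cdf(0.5)) < 4 / 5)  # test of H0B^c (Table 3): 0.75 < 4/5

print(phi_B, phi_Bc)  # 1 1: both the hypothesis and its complement are rejected
```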
The absence of invertibility is somewhat expected here, because the degree to which the decision-maker believes an incorrect choice of $H_0^B$ to be more serious than an incorrect choice of $H_0^{B^c}$ is not the same whether $H_0^B$ is regarded as the “null” or the “alternative” hypothesis. More precisely, while the decision-maker assigns a loss to the error of Type I that is twice the one assigned to the error of Type II when testing the null hypothesis $H_0^B$, he evaluates the loss of falsely accepting $H_0^{B^c}$ to be four times (not twice!) as large as that of falsely rejecting it when $H_0^{B^c}$ is the null hypothesis.
The examples we have examined so far give rise to the question: from a decision-theoretic perspective, what conditions must be imposed on a family of loss functions so that the resultant Bayesian testing scheme meets coherence (invertibility)? Next, we offer a solution to this question. We first give a definition in order to simplify the statement of the main results of this section.
Definition 7. (Relative loss) Let $L_A$ be a loss function for testing the hypothesis $A \in \sigma(\Theta)$. The function $\Delta_A: \Theta \to \mathbb{R}$ defined by:
$$\Delta_A(\theta) = \begin{cases} L_A(1,\theta) - L_A(0,\theta), & \text{if } \theta \in A \\ L_A(0,\theta) - L_A(1,\theta), & \text{if } \theta \notin A \end{cases}$$
is named the relative loss of $L_A$ for testing A.
In short, the relative loss measures the difference between losses of taking the wrong and the correct decisions. Thus, the relative loss of any hypothesis testing loss function is always non-negative.
A careful examination of Example 7 hints that, in order to obtain coherent tests, the “larger” (the “smaller”) the null hypothesis of interest is, the more cautious about falsely rejecting (accepting) it the decision-maker ought to be. This can be quantified as follows: for hypotheses A and B such that $A \subseteq B$, with corresponding hypothesis testing loss functions $L_A$ and $L_B$, if $\theta_1 \in A$, then $\Delta_B(\theta_1)$ should be at least as large as $\Delta_A(\theta_1)$. Similarly, if $\theta_2 \in B^c$, then $\Delta_B(\theta_2)$ should be at most $\Delta_A(\theta_2)$. Such conditions are also appealing, since it seems reasonable that greater relative losses should be assigned to greater “distances” between the parameter and the wrong decision. For instance, if $\theta \in A$ (and consequently, $\theta \in B$), the rougher error of rejecting B should be penalized more heavily than the error of rejecting A; Figure 1 illustrates this idea.
Figure 1. Interpretation of sensible relative losses: rougher errors of decisions should be assigned larger relative losses.
These conditions, namely:
$$\Delta_A(\theta_1) \leq \Delta_B(\theta_1), \ \forall \theta_1 \in A \quad \text{and} \quad \Delta_A(\theta_2) \geq \Delta_B(\theta_2), \ \forall \theta_2 \in B^c,$$
are sufficient for coherence. As a matter of fact, Theorem 2 states that the weaker condition:
$$\Delta_A(\theta_1)\,\Delta_B(\theta_2) \leq \Delta_A(\theta_2)\,\Delta_B(\theta_1), \quad \forall \theta_1 \in A, \ \forall \theta_2 \in B^c,$$
is necessary and sufficient for a family of hypothesis testing loss functions to induce a coherent testing scheme with respect to each prior distribution for θ. Henceforward, we assume that $\mathbb{E}(L_A(d,\theta) \mid x) < \infty$, for all $A \in \sigma(\Theta)$, $d \in \{0,1\}$ and $x \in \mathcal{X}$.
Theorem 2. Let $(L_A)_{A \in \sigma(\Theta)}$ be a family of hypothesis testing loss functions. Suppose that, for all $\theta_1, \theta_2 \in \Theta$, there is $x \in \mathcal{X}$ such that $L_x(\theta_1), L_x(\theta_2) > 0$. Then, for all prior distributions π for θ, there exists a testing scheme generated by $(L_A)_{A \in \sigma(\Theta)}$ with respect to π that is coherent if, and only if, $(L_A)_{A \in \sigma(\Theta)}$ is such that, for all $A, B \in \sigma(\Theta)$ with $A \subseteq B$:
$$\Delta_A(\theta_1)\,\Delta_B(\theta_2) \leq \Delta_A(\theta_2)\,\Delta_B(\theta_1), \quad \forall \theta_1 \in A, \ \forall \theta_2 \in B^c. \tag{2}$$
Notice that the “if” part of Theorem 2 still holds for families of hypothesis testing loss functions that depend also on the sample. Theorem 2 characterizes, under certain conditions, all families of loss functions that induce coherent tests, no matter what the decision-maker’s opinion (prior) on the unknown parameter is. Although the result of Theorem 2 is not properly normative, any Bayesian decision-maker can make use of it to prevent himself from drawing incoherent conclusions from multiple hypothesis testing by checking whether his personal losses satisfy the condition in Equation (2).
Many simple families of loss functions generate coherent tests, as we illustrate in Examples 9 and 10.
Example 9. Consider, for each $A \in \sigma(\Theta)$, the loss function $L_A$ in Table 4 to test the null hypothesis A, in which $\lambda: \sigma(\Theta) \to \mathbb{R}_+$ is any finite measure such that $\lambda(\Theta) > 0$. This family of loss functions satisfies the condition in Equation (2) for coherence since, for all $A, B \in \sigma(\Theta)$ such that $A \subseteq B$, and for all $\theta_1 \in A$ and $\theta_2 \in B^c$, $\Delta_A(\theta_1) = \lambda(A)$, $\Delta_B(\theta_2) = \lambda(B^c)$, $\Delta_A(\theta_2) = \lambda(A^c)$ and $\Delta_B(\theta_1) = \lambda(B)$.
Table 4. Loss function $L_A$ for testing A.

                 State of Nature
   Decision    θ ∈ A     θ ∉ A
      0          0       λ(A^c)
      1        λ(A)        0
As a matter of fact, if for each $A \in \sigma(\Theta)$, $L_A$ is a 0–1–$c_A$ loss function ([5] (p. 215)), with $0 < c_A \leq c_B$ if $A \subseteq B$, then the family $(L_A)_{A \in \sigma(\Theta)}$ will induce a coherent TS for each prior for θ.
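A quick numerical check of the coherence guaranteed by Theorem 2 for the losses of Table 4: the sketch below (our illustration) takes Θ = [0,1], λ equal to Lebesgue measure, an assumed Beta(2,1) posterior, and scans the nested hypotheses A = [0, b]. Under the Table 4 loss, the Bayes test rejects A iff λ(A)·π_x(A) < λ(A^c)·π_x(A^c), so the rejection pattern must be monotone in b:

```python
from scipy.stats import beta

post = beta(2, 1)               # assumed posterior on Theta = [0, 1]

def phi(b):
    """Bayes test of A = [0, b] under the Table 4 loss with lambda = Lebesgue:
    reject iff E[L(1,.)|x] = lambda(A)*pi_x(A) < lambda(A^c)*pi_x(A^c)."""
    pA = post.cdf(b)            # posterior probability of A = [0, b]
    return int(b * pA < (1 - b) * (1 - pA))

decisions = [phi(b) for b in (0.1, 0.3, 0.5, 0.7, 0.9)]
print(decisions)                # [1, 1, 1, 0, 0] -- a coherent (monotone) pattern
```

Once some hypothesis [0, b] is retained, every larger hypothesis [0, b'] with b' > b is retained too, exactly as coherence demands.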
Example 10. Assume Θ is equipped with a distance, say d. Define, for each $A \in \sigma(\Theta)$, the loss function $L_A$ for testing A by:
$$L_A(0,\theta) = d^*(\theta, A) \quad \text{and} \quad L_A(1,\theta) = d^*(\theta, A^c),$$
where $d^*(\theta, A) = \inf_{a \in A} d(\theta, a)$ is the distance between $\theta \in \Theta$ and A. For $A, B \in \sigma(\Theta)$ such that $A \subseteq B$, and for $\theta_1 \in A$ and $\theta_2 \in B^c$, $\Delta_A(\theta_1) = d^*(\theta_1, A^c)$, $\Delta_B(\theta_2) = d^*(\theta_2, B)$, $\Delta_A(\theta_2) = d^*(\theta_2, A)$ and $\Delta_B(\theta_1) = d^*(\theta_1, B^c)$. These values satisfy Equation (2) from Theorem 2. Hence, families of loss functions based on distances such as the above generate coherent Bayesian tests.
Next, we characterize Bayesian tests with respect to invertibility. In order to obtain TSs that meet invertibility, it seems reasonable that when the null and alternative hypotheses are switched, the relative losses ought to remain the same. That is to say, when testing the null hypothesis A, the relative loss at each point $\theta \in \Theta$, $\Delta_A(\theta)$, should equal the relative loss $\Delta_{A^c}(\theta)$ obtained when $A^c$ is the null hypothesis instead. This condition is sufficient, but not necessary, for a family of loss functions to induce tests fulfilling this logical requisite with respect to all prior distributions. In Theorem 3, however, we provide necessary and sufficient conditions for invertibility.
Theorem 3. Let ( L A ) A ∈ σ ( Θ ) be a family of hypothesis testing loss functions. Suppose that for all θ 1 , θ 2 ∈ Θ , there is x ∈ X , such that L x ( θ 1 ) , L x ( θ 2 ) > 0 . Then, for all prior distributions π for θ, there exists a testing scheme generated by ( L A ) A ∈ σ ( Θ ) with respect to π that satisfies invertibility if, and only if, ( L A ) A ∈ σ ( Θ ) is such that for all A ∈ σ ( Θ ) :
Δ A ( θ 1 ) Δ A c ( θ 2 ) = Δ A c ( θ 1 ) Δ A ( θ 2 ) , ∀ θ 1 ∈ A , ∀ θ 2 ∈ A c .
Condition Equation (3) is equivalent (for strict hypothesis testing loss functions) to imposing, for each A ∈ σ ( Θ ) , that the ratio Δ A ( · ) / Δ A c ( · ) be constant over Θ. We should mention that the “if” part of Theorem 3 still holds for hypothesis testing loss functions satisfying Equation (3) that also depend on the sample x.
The families of loss functions introduced in Examples 9 and 10 satisfy Equation (3). Thus, such families of losses ensure the construction of simultaneous Bayes tests that conform to both coherence and invertibility for all prior distributions on σ ( Θ ) . Hence, if one believes these two logical requirements to be of primary importance in multiple hypothesis testing, one can make use of any of these families of loss functions to perform tests satisfactorily. Other simple loss functions also lead to TSs that meet invertibility: for instance, any family of 0–1–c loss functions for which c A c = 1 / c A for all A ∈ σ ( Θ ) leads to invertible TSs.
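The 0–1–c construction with c A c = 1 / c A can be sketched as follows. The particular choice c A = λ ( A ) / λ ( A c ) for a counting measure λ is one illustrative way (an assumption of this sketch) to satisfy the constraint automatically:

```python
# Hypothetical finite setting illustrating 0-1-c losses with c_{A^c} = 1/c_A.
# Accepting A costs 1 when theta lies outside A; rejecting A costs c_A inside A.
Theta = [0, 1, 2]
post = {0: 0.5, 1: 0.3, 2: 0.2}

def c(A):
    Ac = [t for t in Theta if t not in A]
    return len(A) / len(Ac)  # one choice satisfying c(A^c) = 1/c(A)

def bayes_test(A):
    pA = sum(post[t] for t in A)
    # Accept A iff 1 * P(A^c | x) <= c_A * P(A | x)
    return 0 if (1 - pA) <= c(A) * pA else 1

# Invertibility: switching null and alternative flips the decision.
for A in ([0], [1], [2], [0, 1], [0, 2], [1, 2]):
    Ac = [t for t in Theta if t not in A]
    assert bayes_test(Ac) == 1 - bayes_test(A)
```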
We end this section by examining union consonance under a decision-theoretic point of view. From Definition 6, it appears that a necessary condition for the derivation of consonant tests is that “smaller” (“larger”) null hypotheses ought to be assigned greater losses for false rejection (acceptance). More precisely, for A , B ∈ σ ( Θ ) , if θ 1 ∈ A ∪ B , then it seems that either Δ A ∪ B ( θ 1 ) ≤ Δ A ( θ 1 ) or Δ A ∪ B ( θ 1 ) ≤ Δ B ( θ 1 ) should hold. If θ 2 ∈ ( A ∪ B ) c , then it is reasonable that either Δ A ∪ B ( θ 2 ) ≥ Δ A ( θ 2 ) or Δ A ∪ B ( θ 2 ) ≥ Δ B ( θ 2 ) . The next theorem shows that this is nearly the case. However, it remains unknown whether sufficient conditions for union consonance can be determined.
Theorem 4. Let ( L A ) A ∈ σ ( Θ ) be a family of hypothesis testing loss functions. Suppose that for all θ 1 , θ 2 ∈ Θ , there is x ∈ X , such that L x ( θ 1 ) , L x ( θ 2 ) > 0 . If for all prior distributions π for θ, there exists a testing scheme generated by ( L A ) A ∈ σ ( Θ ) with respect to π that satisfies finite union consonance, then ( L A ) A ∈ σ ( Θ ) is such that for all A , B ∈ σ ( Θ ) and for all θ 1 ∈ A ∪ B , θ 2 ∈ ( A ∪ B ) c ,
either Δ A ∪ B ( θ 1 ) Δ A ( θ 2 ) ≤ Δ A ∪ B ( θ 2 ) Δ A ( θ 1 ) or Δ A ∪ B ( θ 1 ) Δ B ( θ 2 ) ≤ Δ A ∪ B ( θ 2 ) Δ B ( θ 1 ) .

5. Putting the Desiderata Together

In Section 4, we showed that there are infinitely many families of loss functions that induce, for each prior distribution for θ, a TS that satisfies both coherence and invertibility (Examples 9 and 10). However, requiring the three logical consistency properties we presented to hold simultaneously with respect to all priors is too restrictive: under mild conditions, no TS constructed under a Bayesian decision-theoretic approach to hypothesis testing fulfills this, as stated in the next theorem.
Theorem 5. Assume that Θ and σ ( Θ ) are such that | Θ | ≥ 3 and that there is a partition of Θ composed of three nonempty measurable sets. Assume also that for every triplet θ 1 , θ 2 , θ 3 ∈ Θ , there is x ∈ X , such that L x ( θ i ) > 0 for i = 1 , 2 , 3 . Then, there is no family of strict hypothesis testing loss functions that induces, for each prior distribution for θ, a testing scheme satisfying coherence, invertibility and finite union consonance.
Theorem 5 states that Bayesian optimality (based on standard loss functions that do not depend on the sample) cannot be combined with complete logical consistency. This fact can lead one to wonder whether such properties are indeed sensible in multiple hypothesis testing. The following result shows us that the desiderata are in fact reasonable in the sense that a TS meeting these requirements does correspond to the optimal tests of some Bayesian decision-makers. We return to this point in the concluding remarks.
Theorem 6. Let Θ be a countable (finite) parameter space, σ ( Θ ) = P ( Θ ) , and X be a countable sample space. Let φ be a testing scheme that satisfies coherence, invertibility and countable (finite) union consonance. Then, there exist a probability measure μ over P ( Θ × X ) and a family of strict hypothesis testing loss functions ( L A ) A σ ( Θ ) , such that φ is generated by ( L A ) A σ ( Θ ) with respect to the μ-marginal distribution of θ.
We end this section by associating logically-consistent Bayesian hypothesis testing with Bayes point estimation procedures in case both Θ and X are finite. This relationship is characterized in Theorem 7.
Theorem 7. Let Θ and X be finite sets and σ ( Θ ) = P ( Θ ) . Let φ be the testing scheme generated by the point estimator δ : X → Θ . Suppose that for all x ∈ X , L x ( δ ( x ) ) > 0 .
(a) 
If there exist a probability measure π : σ ( Θ ) → [ 0 , 1 ] for θ, with π ( δ ( x ) ) > 0 for all x ∈ X , and a loss function L : Θ × Θ → R + , satisfying L ( θ , θ ) = 0 and L ( d , θ ) > 0 for d ≠ θ , such that δ is a Bayes estimator for θ generated by L with respect to π, then there is a family of hypothesis testing loss functions ( L A ) A ∈ σ ( Θ ) , L A : { 0 , 1 } × ( Θ × X ) → R + for each A ∈ σ ( Θ ) , such that φ is generated by ( L A ) A ∈ σ ( Θ ) with respect to π.
(b) 
If there exist a probability measure π : σ ( Θ ) → [ 0 , 1 ] for θ, with π ( δ ( x ) ) > 0 for all x ∈ X , and a family of strict hypothesis testing loss functions ( L A ) A ∈ σ ( Θ ) , L A : { 0 , 1 } × Θ → R + for each A ∈ σ ( Θ ) , such that φ is generated by ( L A ) A ∈ σ ( Θ ) with respect to π, then there is a loss function L : Θ × Θ → R + , with L ( θ , θ ) = 0 and L ( d , θ ) > 0 for d ≠ θ , such that δ is a Bayes estimator for θ generated by L with respect to π.
Theorem 7 ensures that multiple Bayesian tests that fulfill the desiderata cannot be separated from Bayes point estimation procedures. One may find in Theorem 7, Part (a), a decision-theoretic justification for performing simultaneous tests by means of a Bayes point estimator. However, the optimality of such tests is derived under very restrictive conditions, as the underlying loss functions depend both on the sample and on a point estimator. This fact reinforces that one can reconcile statistical optimality and logical consistency in multiple tests only in very particular cases. We should also emphasize that, under the conditions of Part (a), if, in addition, π ( θ ) > 0 for all θ Θ , then, for all A σ ( Θ ) , φ A is an admissible test for A with regard to L A (the standard proof of this result developed for losses that do not depend on the sample also works here). The second part of Theorem 7 states that if a Bayesian testing scheme meets coherence, invertibility and finite union consonance, then the point estimator that generates it cannot be devoid of optimality: it must be a Bayes estimator for specific loss functions. Example 11 illustrates the first part of this theorem.
Example 11. Assume that Θ = { θ 1 , θ 2 , … , θ k } and X is finite. Assume also that there is a maximum likelihood estimator (MLE) for θ, δ M L : X → Θ , such that L x ( δ M L ( x ) ) > 0 , for all x ∈ X . Then, the testing scheme generated by δ M L is a TS of Bayes tests. Indeed, when Θ is finite, an MLE for θ is a Bayes estimator generated by the loss function L ( d , θ ) = I ( d ≠ θ ) , d , θ ∈ Θ , with respect to the uniform prior over Θ (that is, δ M L ( x ) corresponds to a mode of the posterior distribution π x , for each x ∈ X ). Consequently (recall that | Θ | = k ), π x ( δ M L ( x ) ) ≥ 1 / k and E [ L ( δ M L ( x ) , θ ) | x ] = 1 − π x ( δ M L ( x ) ) , for each x ∈ X . Thus,
max x ∈ X { E [ L ( δ M L ( x ) , θ ) | x ] / π x ( δ M L ( x ) ) } = max x ∈ X { [ 1 − π x ( δ M L ( x ) ) ] / π x ( δ M L ( x ) ) } ≤ ( 1 − 1 / k ) / ( 1 / k ) = k − 1 ,
as g : ( 0 , 1 ] → R + given by g ( t ) = ( 1 − t ) / t is strictly decreasing.
By Theorem 7, it follows that the TS generated by the MLE δ M L is a Bayesian TS generated by (for instance) the family of loss functions ( L A ) A ∈ σ ( Θ ) given, for each A ∈ σ ( Θ ) , by L A ( 1 , ( θ , x ) ) = 0 and L A ( 0 , ( θ , x ) ) = 1 , for θ ∈ A c , and L A ( 0 , ( θ , x ) ) = 0 and L A ( 1 , ( θ , x ) ) = k I A ( δ M L ( x ) ) + ( 1 / k ) I A c ( δ M L ( x ) ) , for θ ∈ A .
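The MLE-generated testing scheme of Example 11 can be checked directly: it rejects A exactly when the MLE falls outside A, and then coherence, invertibility and finite union consonance all hold. The likelihood table below is a made-up toy example:

```python
# Toy illustration of Example 11: Theta = {t1,t2,t3}, X = {0,1}.
Theta = ["t1", "t2", "t3"]
X = [0, 1]
lik = {("t1", 0): 0.8, ("t1", 1): 0.2,
       ("t2", 0): 0.4, ("t2", 1): 0.6,
       ("t3", 0): 0.1, ("t3", 1): 0.9}

def mle(x):
    return max(Theta, key=lambda t: lik[(t, x)])

def phi(A, x):
    return 0 if mle(x) in A else 1  # 0 = accept A, 1 = reject A

subsets = [["t1"], ["t2"], ["t3"], ["t1", "t2"], ["t1", "t3"], ["t2", "t3"], Theta]
for x in X:
    for A in subsets:
        Ac = [t for t in Theta if t not in A]
        if Ac:  # invertibility: the complement gets the opposite decision
            assert phi(Ac, x) == 1 - phi(A, x)
        for B in subsets:
            if set(A) <= set(B) and phi(A, x) == 0:  # coherence
                assert phi(B, x) == 0
            AuB = sorted(set(A) | set(B))
            if phi(AuB, x) == 0:  # finite union consonance
                assert phi(A, x) == 0 or phi(B, x) == 0
```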
It is worth mentioning that the development of Theorem 7(a) and Example 11 is in a sense related to the optimality of least relative surprise estimators under prior-based loss functions [24] (Section 2).

6. Conclusions

While several studies on frequentist multiple tests deal with the question of seeking a balance between statistical optimality and logical consistency, this issue has not yet been addressed under a decision-theoretic standpoint. For this reason, in this work, we examine simultaneous Bayesian hypothesis testing with respect to three rational properties: coherence, invertibility and union consonance. Briefly, we characterize the families of loss functions that yield Bayes tests meeting each of these requisites separately, whatever the prior distribution for the relevant parameter. These results not only shed some light on when each of these relationships may be considered sensible for a given scientific problem, but they also serve as a guide for a Bayesian decision-maker aiming to perform tests in line with the requirement he finds most important. In particular, this can be done through the use of the loss functions described in the paper.
We also explore how far one can go by putting these properties together. We provide examples of fairly intuitive loss functions that induce testing schemes satisfying both coherence and invertibility, no matter what one’s prior opinion on the parameter is. On the other hand, we prove that no family of reasonable loss functions generates Bayes tests that respect the logical properties as a whole with respect to all priors, although any testing scheme meeting the desiderata corresponds to the optimal tests of several Bayesian decision-makers.
Finally, we discuss the relationship between logically-consistent Bayesian hypothesis testing and Bayes point estimation procedures when both the parameter space and the sample space are finite. We conclude that the point estimator generating a testing scheme fulfilling the rational properties is necessarily a Bayes estimator for certain loss functions. Furthermore, performing logically-consistent procedures by means of a Bayes estimator is one’s best approach towards multiple hypothesis testing only under very restrictive conditions in which the underlying loss functions depend not only on the decision to be made and the parameter, as usual, but also on the observed sample. See [24,25,26] for some examples of such loss functions. That is, a more complex framework is needed to combine Bayesian optimality with logical consistency. This fact and the impossibility result of Theorem 5 corroborate the thesis that full rationality and statistical optimality can rarely be combined in simultaneous tests. In practice, this suggests that when testing several hypotheses at once, a practitioner may abandon part of the desiderata so as to preserve statistical optimality. This is further discussed in [8].
Several issues remain open, among which we mention three. First, the extent to which the results derived in this work can be generalized to infinite (continuous) parameter spaces is an important problem from both theoretical and practical aspects. Furthermore, the consideration of different decision-theoretic approaches to hypothesis testing, such as the “agnostic” tests with three-fold action spaces proposed by [27], may bring new insight into which logical properties may be expected, not only in the current, but also in alternative frameworks. In epistemological terms, one may be concerned with the question of whether multiple hypothesis testing is the most adequate way to draw inferences about a parameter of interest from data given the incompatibility between full logical consistency and the achievement of statistical optimality. As a matter of fact, many Bayesians regard the whole posterior distribution as the most complete inference one can make about the unknown parameter. These analyses may contribute to better decision-making.

Acknowledgments

The authors are thankful for Carlos Alberto de Bragança Pereira, José Carlos Simon de Miranda, José Galvão Leite, Julio Michael Stern, Marcelo Esteban Coniglio, Márcio Alves Diniz and Paulo Cilas Marques Filho for fruitful discussions and important comments and suggestions, which improved the manuscript. We are also grateful to the referees for all of the detailed comments that helped improve the paper. This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (2009/03385-5,2014/25302-2) Brazil and Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico (131982/2009-5) Brazil.

Author Contributions

The manuscript has come to fruition by the substantial contributions of all authors from conceiving the idea of examining Bayes tests with respect to logical consistency to obtaining the main theorems and providing several examples. All authors have also been involved in either writing the article or carefully revising it. All authors have read and approved the submitted version of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

A. Proof of Theorem 1

That a testing scheme generated by a point estimation procedure δ satisfies the desiderata follows from Theorem 4.3 from [8] and the fact that for all x ∈ X and every countable partition ( A n ) n ≥ 1 of Θ, there is a unique i * ∈ N * , such that δ ( x ) ∈ A i * and, consequently, ∑ i ≥ 1 [ 1 − I ( δ ( x ) ∉ A i ) ] = 1 . For the converse, Theorem 4.3 from [8] implies that ∀ x ∈ X , ∃ ! θ 0 = θ 0 ( x ) ∈ Θ , such that φ { θ 0 } ( x ) = 0 . Thus, for A ∈ σ ( Θ ) , if θ 0 ∈ A , then { θ 0 ( x ) } ⊆ A and, as coherence holds, φ A ( x ) = 0 . On the other hand, if θ 0 ∉ A , then { θ 0 ( x ) } ⊆ A c . Coherence and invertibility yield φ A ( x ) = 1 . Hence, for each A ∈ σ ( Θ ) , φ A ( x ) = 1 ⟺ θ 0 ( x ) ∉ A . We conclude the proof by defining δ : X → Θ by δ ( x ) = θ 0 ( x ) .

B. Proof of Theorem 2

First, we prove the necessary condition by the contrapositive. Thus, let us suppose there are A , B ∈ σ ( Θ ) with A ⊆ B and θ 1 ∈ A and θ 2 ∈ B c , such that:
Δ A ( θ 1 ) Δ B ( θ 2 ) > Δ A ( θ 2 ) Δ B ( θ 1 ) ,
which implies that Δ A ( θ 1 ) > 0 and Δ B ( θ 2 ) > 0 .
Adding Δ A ( θ 2 ) Δ B ( θ 2 ) to both sides of the inequality above, straightforward manipulations yield:
0 ≤ Δ A ( θ 2 ) / [ Δ A ( θ 2 ) + Δ A ( θ 1 ) ] < Δ B ( θ 2 ) / [ Δ B ( θ 2 ) + Δ B ( θ 1 ) ] ≤ 1 .
Thus, there is α 0 ∈ ( 0 , 1 ) , such that:
Δ A ( θ 2 ) / [ Δ A ( θ 2 ) + Δ A ( θ 1 ) ] < α 0 < Δ B ( θ 2 ) / [ Δ B ( θ 2 ) + Δ B ( θ 1 ) ] .
Furthermore, there is x ∈ X , such that L x ( θ 1 ) , L x ( θ 2 ) > 0 . Considering the prior distribution π * for θ given by:
π * ( θ 1 ) = α 0 L x ( θ 2 ) / [ α 0 L x ( θ 2 ) + ( 1 − α 0 ) L x ( θ 1 ) ]   and   π * ( θ 2 ) = 1 − π * ( θ 1 ) ,
the corresponding posterior distribution given x is π x * ( θ 1 ) = α 0 and π x * ( θ 2 ) = 1 − α 0 . Let φ * be any TS generated by ( L A ) A ∈ σ ( Θ ) with respect to π * . Thus,
φ A * ( x ) = 0 , if α 0 > Δ A ( θ 2 ) / [ Δ A ( θ 2 ) + Δ A ( θ 1 ) ] ,   and   φ B * ( x ) = 0 , if α 0 > Δ B ( θ 2 ) / [ Δ B ( θ 2 ) + Δ B ( θ 1 ) ] .
From Equation (4), we have φ A * ( x ) = 0 and φ B * ( x ) = 1 . Therefore, there is a prior distribution π * for θ with respect to which any TS generated by ( L A ) A ∈ σ ( Θ ) is not coherent.
We now prove the “if” part. We suppose that the family ( L A ) A ∈ σ ( Θ ) satisfies the condition that for all A , B ∈ σ ( Θ ) with A ⊆ B , Δ A ( θ 1 ) Δ B ( θ 2 ) ≤ Δ A ( θ 2 ) Δ B ( θ 1 ) , ∀ θ 1 ∈ A , ∀ θ 2 ∈ B c . Integrating (with respect to θ 1 ) over A with respect to any probability measure P, we obtain:
Δ B ( θ 2 ) ∫ A Δ A ( θ 1 ) d P ( θ 1 ) ≤ Δ A ( θ 2 ) ∫ A Δ B ( θ 1 ) d P ( θ 1 ) , ∀ θ 2 ∈ B c .
Similarly, integration (with respect to θ 2 ) over B c with respect to the same measure P yields:
∫ B c Δ B ( θ 2 ) d P ( θ 2 ) ∫ A Δ A ( θ 1 ) d P ( θ 1 ) ≤ ∫ B c Δ A ( θ 2 ) d P ( θ 2 ) ∫ A Δ B ( θ 1 ) d P ( θ 1 ) .
Now, let φ be a testing scheme generated by the family ( L A ) A ∈ σ ( Θ ) . For A , B ∈ σ ( Θ ) with A ⊆ B and x ∈ X ,
φ A ( x ) = 0 ⟹ ∫ Θ [ L A ( 0 , θ ) − L A ( 1 , θ ) ] d π x ( θ ) ≤ 0 ,
where π x ( . ) denotes the posterior distribution of θ given X = x . Thus,
− ∫ A Δ A ( θ ) d π x ( θ ) + ∫ A c ∩ B Δ A ( θ ) d π x ( θ ) + ∫ B c Δ A ( θ ) d π x ( θ ) ≤ 0 .
Multiplying the last inequality by ∫ B c Δ B ( θ ) d π x ( θ ) ≥ 0 , we get:
− ∫ B c Δ B ( θ ) d π x ( θ ) ∫ A Δ A ( θ ) d π x ( θ ) + ∫ B c Δ B ( θ ) d π x ( θ ) ∫ A c ∩ B Δ A ( θ ) d π x ( θ ) + ∫ B c Δ B ( θ ) d π x ( θ ) ∫ B c Δ A ( θ ) d π x ( θ ) ≤ 0 .
From inequality Equation (5), it follows that:
− ∫ A Δ B ( θ ) d π x ( θ ) ∫ B c Δ A ( θ ) d π x ( θ ) + ∫ B c Δ B ( θ ) d π x ( θ ) ∫ A c ∩ B Δ A ( θ ) d π x ( θ ) + ∫ B c Δ B ( θ ) d π x ( θ ) ∫ B c Δ A ( θ ) d π x ( θ ) ≤ 0 .
As ∫ B c Δ B ( θ ) d π x ( θ ) ∫ A c ∩ B Δ A ( θ ) d π x ( θ ) ≥ 0 and ∫ A c ∩ B Δ B ( θ ) d π x ( θ ) ∫ B c Δ A ( θ ) d π x ( θ ) ≥ 0 , we have that:
− ∫ A Δ B ( θ ) d π x ( θ ) ∫ B c Δ A ( θ ) d π x ( θ ) + ∫ A c ∩ B Δ B ( θ ) d π x ( θ ) ∫ B c Δ A ( θ ) d π x ( θ ) + ∫ B c Δ B ( θ ) d π x ( θ ) ∫ B c Δ A ( θ ) d π x ( θ ) ≤ 0 ,
and, consequently,
∫ B c Δ A ( θ ) d π x ( θ ) [ − ∫ A Δ B ( θ ) d π x ( θ ) + ∫ A c ∩ B Δ B ( θ ) d π x ( θ ) + ∫ B c Δ B ( θ ) d π x ( θ ) ] ≤ 0 .
Finally,
∫ Θ [ L B ( 0 , θ ) − L B ( 1 , θ ) ] d π x ( θ ) ≤ 0 .
If ∫ Θ [ L B ( 0 , θ ) − L B ( 1 , θ ) ] d π x ( θ ) < 0 , then φ B ( x ) = 0 . If this integral is equal to zero, then both zero and one are optimal solutions, and we can choose the decision zero as φ B ( x ) in order to ensure that φ B ( x ) ≤ φ A ( x ) . Hence, with respect to each prior π, there is a TS generated by ( L A ) A ∈ σ ( Θ ) that is coherent.

C. Proof of Theorem 3

The proof is analogous to that of Theorem 2. First, we prove the necessary condition by the contrapositive. Suppose that there are A ∈ σ ( Θ ) and θ 1 ∈ A and θ 2 ∈ A c , such that:
Δ A ( θ 1 ) Δ A c ( θ 2 ) ≠ Δ A c ( θ 1 ) Δ A ( θ 2 ) .
Assume Δ A ( θ 1 ) Δ A c ( θ 2 ) < Δ A c ( θ 1 ) Δ A ( θ 2 ) (the other case is developed in the same way), which implies that Δ A c ( θ 1 ) > 0 and Δ A ( θ 2 ) > 0 . Adding Δ A c ( θ 2 ) Δ A ( θ 2 ) to both sides of the inequality, we easily obtain that:
0 ≤ Δ A c ( θ 2 ) / [ Δ A c ( θ 1 ) + Δ A c ( θ 2 ) ] < Δ A ( θ 2 ) / [ Δ A ( θ 1 ) + Δ A ( θ 2 ) ] ≤ 1 .
Thus, there is α 0 ∈ ( 0 , 1 ) , such that:
0 ≤ Δ A c ( θ 2 ) / [ Δ A c ( θ 1 ) + Δ A c ( θ 2 ) ] < α 0 < Δ A ( θ 2 ) / [ Δ A ( θ 1 ) + Δ A ( θ 2 ) ] ≤ 1 .
In addition, there is x ∈ X , such that L x ( θ 1 ) , L x ( θ 2 ) > 0 . For the prior distribution π * for θ given by:
π * ( θ 1 ) = α 0 L x ( θ 2 ) / [ α 0 L x ( θ 2 ) + ( 1 − α 0 ) L x ( θ 1 ) ]   and   π * ( θ 2 ) = 1 − π * ( θ 1 ) ,
the posterior distribution given x is π x * ( θ 1 ) = α 0 and π x * ( θ 2 ) = 1 − α 0 . Let φ * be any TS generated by ( L A ) A ∈ σ ( Θ ) with respect to π * . Thus,
φ A * ( x ) = 0 , if α 0 > Δ A ( θ 2 ) / [ Δ A ( θ 1 ) + Δ A ( θ 2 ) ] ,   and   φ A c * ( x ) = 0 , if α 0 < Δ A c ( θ 2 ) / [ Δ A c ( θ 1 ) + Δ A c ( θ 2 ) ] .
From Equation (6), we have φ A * ( x ) = 1 and φ A c * ( x ) = 1 . Therefore, there is a prior distribution π * for θ with respect to which any TS generated by ( L A ) A ∈ σ ( Θ ) does not meet invertibility.
Now, we prove the sufficiency. Suppose that for all A ∈ σ ( Θ ) :
Δ A ( θ 1 ) Δ A c ( θ 2 ) = Δ A c ( θ 1 ) Δ A ( θ 2 ) , ∀ θ 1 ∈ A , ∀ θ 2 ∈ A c .
Integrating (with respect to θ 2 ) over the set A c with respect to any probability measure P defined on σ ( Θ ) , we have:
Δ A c ( θ 1 ) ∫ A c Δ A ( θ 2 ) d P ( θ 2 ) = Δ A ( θ 1 ) ∫ A c Δ A c ( θ 2 ) d P ( θ 2 ) ,   for all   θ 1 ∈ A .
Similarly, integrating (with respect to θ 1 ) over A, we get:
∫ A Δ A c ( θ 1 ) d P ( θ 1 ) ∫ A c Δ A ( θ 2 ) d P ( θ 2 ) = ∫ A Δ A ( θ 1 ) d P ( θ 1 ) ∫ A c Δ A c ( θ 2 ) d P ( θ 2 ) .
Let φ be a TS generated by ( L A ) A ∈ σ ( Θ ) . If φ A ( x ) = 0 , then:
∫ Θ [ L A ( 0 , θ ) − L A ( 1 , θ ) ] d π x ( θ ) = − ∫ A Δ A ( θ ) d π x ( θ ) + ∫ A c Δ A ( θ ) d π x ( θ ) ≤ 0 .
Multiplying both sides by ∫ A Δ A c ( θ ) d π x ( θ ) ≥ 0 , we get:
− ∫ A Δ A c ( θ ) d π x ( θ ) ∫ A Δ A ( θ ) d π x ( θ ) + ∫ A Δ A c ( θ ) d π x ( θ ) ∫ A c Δ A ( θ ) d π x ( θ ) ≤ 0 .
From Equation (7), it follows that:
∫ A Δ A ( θ ) d π x ( θ ) [ − ∫ A Δ A c ( θ ) d π x ( θ ) + ∫ A c Δ A c ( θ ) d π x ( θ ) ] ≤ 0 .
Thus,
− ∫ A Δ A c ( θ ) d π x ( θ ) + ∫ A c Δ A c ( θ ) d π x ( θ ) ≤ 0 ,
since ∫ A Δ A ( θ ) d π x ( θ ) > 0 . In this way,
∫ Θ [ L A c ( 0 , θ ) − L A c ( 1 , θ ) ] d π x ( θ ) ≥ 0 .
If ∫ Θ [ L A c ( 0 , θ ) − L A c ( 1 , θ ) ] d π x ( θ ) > 0 , then φ A c ( x ) = 1 . If the integral is zero, then we can choose φ A c ( x ) = 1 , so as to obtain φ A c ( x ) = 1 − φ A ( x ) . Similarly, we prove that if φ A ( x ) = 1 , then there is a Bayes test for A c , φ A c , generated by L A c , such that φ A c ( x ) = 0 . Consequently, there is a TS generated by ( L A ) A ∈ σ ( Θ ) that satisfies invertibility.

D. Proof of Theorem 4

Suppose that there are A , B ∈ σ ( Θ ) , θ 1 ∈ A ∪ B and θ 2 ∈ ( A ∪ B ) c such that both:
Δ A ∪ B ( θ 1 ) Δ A ( θ 2 ) > Δ A ∪ B ( θ 2 ) Δ A ( θ 1 )   and   Δ A ∪ B ( θ 1 ) Δ B ( θ 2 ) > Δ A ∪ B ( θ 2 ) Δ B ( θ 1 )
hold, from which it follows that Δ A ∪ B ( θ 1 ) > 0 , Δ A ( θ 2 ) > 0 and Δ B ( θ 2 ) > 0 . Proceeding as in the previous proofs, we obtain that:
0 ≤ Δ A ∪ B ( θ 2 ) / [ Δ A ∪ B ( θ 1 ) + Δ A ∪ B ( θ 2 ) ] < min { Δ A ( θ 2 ) / [ Δ A ( θ 1 ) + Δ A ( θ 2 ) ] , Δ B ( θ 2 ) / [ Δ B ( θ 1 ) + Δ B ( θ 2 ) ] } ≤ 1 .
Thus, there is α 0 ∈ ( 0 , 1 ) such that:
0 ≤ Δ A ∪ B ( θ 2 ) / [ Δ A ∪ B ( θ 1 ) + Δ A ∪ B ( θ 2 ) ] < α 0 < min { Δ A ( θ 2 ) / [ Δ A ( θ 1 ) + Δ A ( θ 2 ) ] , Δ B ( θ 2 ) / [ Δ B ( θ 1 ) + Δ B ( θ 2 ) ] } ≤ 1 .
In addition, there is x ∈ X such that L x ( θ 1 ) , L x ( θ 2 ) > 0 . For the prior distribution π * for θ given by:
π * ( θ 1 ) = α 0 L x ( θ 2 ) / [ α 0 L x ( θ 2 ) + ( 1 − α 0 ) L x ( θ 1 ) ]   and   π * ( θ 2 ) = 1 − π * ( θ 1 ) ,
the posterior distribution is π x * ( θ 1 ) = α 0 and π x * ( θ 2 ) = 1 − α 0 . Let φ * be any TS generated by ( L A ) A ∈ σ ( Θ ) with respect to π * . Next, we consider three cases:
(i)
if θ 1 ∈ A ∩ B , then:
φ C * ( x ) = 0 , if α 0 > Δ C ( θ 2 ) / [ Δ C ( θ 1 ) + Δ C ( θ 2 ) ] ,
for any C ∈ { A , B , A ∪ B } . Thus, we have φ A * ( x ) = 1 , φ B * ( x ) = 1 and φ A ∪ B * ( x ) = 0 ;
(ii)
if θ 1 ∉ A , then:
φ B * ( x ) = 0 , if α 0 > Δ B ( θ 2 ) / [ Δ B ( θ 1 ) + Δ B ( θ 2 ) ] ,   and   φ A ∪ B * ( x ) = 0 , if α 0 > Δ A ∪ B ( θ 2 ) / [ Δ A ∪ B ( θ 1 ) + Δ A ∪ B ( θ 2 ) ] ,
and:
∫ Θ [ L A ( 0 , θ ) − L A ( 1 , θ ) ] d π x ( θ ) = Δ A ( θ 1 ) α 0 + Δ A ( θ 2 ) ( 1 − α 0 ) > 0 .
Thus, φ A * ( x ) = 1 , φ B * ( x ) = 1 and φ A ∪ B * ( x ) = 0 ;
(iii)
if θ 1 ∉ B , a development similar to that of Case (ii) yields the same results: φ A * ( x ) = 1 , φ B * ( x ) = 1 and φ A ∪ B * ( x ) = 0 .
Therefore, in any case, there is a prior distribution π * for θ with respect to which no TS generated by ( L A ) A ∈ σ ( Θ ) meets finite union consonance, concluding the proof.

E. Proof of Theorem 5

The proof of Theorem 5 consists of verifying the nonexistence of a family of loss functions that generates Bayes tests satisfying the desiderata with respect to all priors concentrated on three points of Θ (of course, there will then be no such family satisfying these requisites with respect to all priors over σ ( Θ ) ).
Let { A 1 , A 2 , A 3 } be a measurable partition of Θ and θ 1 , θ 2 , θ 3 ∈ Θ , such that θ i ∈ A i , i = 1 , 2 , 3 . First, notice that for all x ∈ X , such that L x ( θ i ) > 0 for i = 1 , 2 , 3 , there is a one-to-one correspondence between prior and posterior distributions concentrated on { θ 1 , θ 2 , θ 3 } . Indeed, for all ( α 1 , α 2 , α 3 ) ∈ A = { ( a , b , c ) ∈ R + 3 : a + b + c = 1 } and x ∈ X , such that L x ( θ i ) > 0 for i = 1 , 2 , 3 , there is a unique prior distribution for θ, π, such that the corresponding posterior distribution given x, π x , satisfies π x ( θ i ) = α i , i = 1 , 2 , 3 , namely:
π ( θ i ) = [ α i / L x ( θ i ) ] / [ α 1 / L x ( θ 1 ) + α 2 / L x ( θ 2 ) + α 3 / L x ( θ 3 ) ] , i = 1 , 2 , 3 .
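This prior-to-posterior correspondence can be sketched numerically (the α and likelihood values below are made up; the point is that weighting each target posterior probability by the reciprocal of the likelihood recovers the unique prior):

```python
# Reconstruct the unique prior on {theta_1, theta_2, theta_3} whose posterior at x
# equals a prescribed (alpha_1, alpha_2, alpha_3): pi(theta_i) is proportional
# to alpha_i / L_x(theta_i), assuming L_x(theta_i) > 0 for all i.
def prior_from_posterior(alphas, liks):
    w = [a / l for a, l in zip(alphas, liks)]
    s = sum(w)
    return [wi / s for wi in w]

def posterior(prior, liks):
    w = [p * l for p, l in zip(prior, liks)]
    s = sum(w)
    return [wi / s for wi in w]

alphas = [0.2, 0.3, 0.5]   # target posterior
liks = [0.7, 0.1, 0.4]     # made-up likelihood values L_x(theta_i)
prior = prior_from_posterior(alphas, liks)
recovered = posterior(prior, liks)
assert all(abs(a - b) < 1e-12 for a, b in zip(alphas, recovered))
```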
Henceforth, we will refer to the above posterior by ( α 1 , α 2 , α 3 ) for short. Let ( L A ) A ∈ σ ( Θ ) be any family of strict hypothesis testing loss functions. For each ( α 1 , α 2 , α 3 ) ∈ A , the difference between the posterior risk of accepting H 0 ( i ) : θ ∈ A i and that of rejecting it is given by:
∫ Θ [ L A i ( 0 , θ ) − L A i ( 1 , θ ) ] d π x ( θ ) = Δ A i ( θ 1 ) α 1 + Δ A i ( θ 2 ) α 2 + Δ A i ( θ 3 ) α 3 ,
where Δ A i ( θ j ) = L A i ( 0 , θ j ) − L A i ( 1 , θ j ) (note that Δ A i ( θ j ) > 0 if i ≠ j , while Δ A i ( θ i ) < 0 ). In order to evaluate the tests for the hypotheses H 0 ( 1 ) , H 0 ( 2 ) and H 0 ( 3 ) with respect to all posterior distributions concentrated on { θ 1 , θ 2 , θ 3 } , we consider the transformation T : A → R 3 defined by:
T ( α 1 , α 2 , α 3 ) = ( ∫ Θ Δ A 1 ( θ ) d π x ( θ ) , ∫ Θ Δ A 2 ( θ ) d π x ( θ ) , ∫ Θ Δ A 3 ( θ ) d π x ( θ ) ) ,
where ∫ Θ Δ A i ( θ ) d π x ( θ ) = Δ A i ( θ 1 ) α 1 + Δ A i ( θ 2 ) α 2 + Δ A i ( θ 3 ) α 3 . Thus, T assigns to each posterior ( α 1 , α 2 , α 3 ) ∈ A the differences between the risks of accepting H 0 ( i ) and of rejecting it, i = 1 , 2 , 3 . It is easy to verify that B = T ( A ) = { T ( α 1 , α 2 , α 3 ) : ( α 1 , α 2 , α 3 ) ∈ A } is a convex set. Indeed, B is a triangle (see Figure 2) with vertices P 1 = T ( 1 , 0 , 0 ) = ( Δ A 1 ( θ 1 ) , Δ A 2 ( θ 1 ) , Δ A 3 ( θ 1 ) ) , P 2 = T ( 0 , 1 , 0 ) = ( Δ A 1 ( θ 2 ) , Δ A 2 ( θ 2 ) , Δ A 3 ( θ 2 ) ) and P 3 = T ( 0 , 0 , 1 ) = ( Δ A 1 ( θ 3 ) , Δ A 2 ( θ 3 ) , Δ A 3 ( θ 3 ) ) (these points are not aligned owing to the sign restrictions on the quantities Δ A i ( θ j ) [14]).
Figure 2. Set B .
Now, we turn to the main argument of the proof. By Theorem 4.3 from [8], it is necessary for a Bayesian testing scheme to satisfy the logical requirements with respect to all priors over σ ( Θ ) that exactly one of the A i s be accepted for each vector of probabilities ( α 1 , α 2 , α 3 ) . Geometrically, such a necessary condition is equivalent to the triangle B being contained in the union of the octants that comprise the triplets with exactly one negative coordinate, namely R − × R + × R + , R + × R − × R + and R + × R + × R − . However, this is impossible. To verify this fact, we consider three cases (Figure 3 illustrates the projection of B over the plane w = { ( u , v , 0 ) : u , v ∈ R } in each of these cases):
(i)
if Δ A 1 ( θ 1 ) Δ A 2 ( θ 2 ) > Δ A 1 ( θ 2 ) Δ A 2 ( θ 1 ) , then the projection of the line segment joining P 1 and P 2 over the plane w intersects the (third) quadrant R − × R − × { 0 } (see the first graphic in Figure 3). Thus, there is γ ∈ ( 0 , 1 ) , such that γ Δ A i ( θ 1 ) + ( 1 − γ ) Δ A i ( θ 2 ) < 0 , i = 1 , 2 . As γ P 1 + ( 1 − γ ) P 2 ∈ B , there is a posterior ( α 1 , α 2 , α 3 ) concentrated on { θ 1 , θ 2 , θ 3 } with respect to which any TS generated by ( L A ) A ∈ σ ( Θ ) rejects neither A 1 nor A 2 and, therefore, does not respect coherence, invertibility and finite union consonance;
(ii)
if Δ A 1 ( θ 1 ) Δ A 2 ( θ 2 ) = Δ A 1 ( θ 2 ) Δ A 2 ( θ 1 ) , then the projection of the line segment joining P 1 and P 2 over w intersects the origin ( 0 , 0 , 0 ) (see the second graphic in Figure 3). Thus, there is t 0 > 0 , such that the point P 0 = ( 0 , 0 , t 0 ) ∈ B . Considering now the line segment joining P 0 and P 3 , it is easily seen that for any γ ∈ ( − Δ A 3 ( θ 3 ) / [ t 0 − Δ A 3 ( θ 3 ) ] , 1 ) , γ · 0 + ( 1 − γ ) Δ A 1 ( θ 3 ) > 0 , γ · 0 + ( 1 − γ ) Δ A 2 ( θ 3 ) > 0 and γ t 0 + ( 1 − γ ) Δ A 3 ( θ 3 ) > 0 . As γ P 0 + ( 1 − γ ) P 3 ∈ B , there is a posterior distribution with respect to which any TS generated by ( L A ) A ∈ σ ( Θ ) rejects A 1 , A 2 and A 3 and, therefore, does not satisfy the logical consistency properties all together;
(iii)
if Δ A 1 ( θ 1 ) Δ A 2 ( θ 2 ) < Δ A 1 ( θ 2 ) Δ A 2 ( θ 1 ) , then the projection of the above-mentioned segment over w intersects the (first) quadrant R + × R + × { 0 } (third graphic in Figure 3). Thus, there is γ ∈ ( 0 , 1 ) such that γ Δ A i ( θ 1 ) + ( 1 − γ ) Δ A i ( θ 2 ) > 0 , i = 1 , 2 . As γ P 1 + ( 1 − γ ) P 2 ∈ B , there is a posterior ( α 1 , α 2 , α 3 ) concentrated on { θ 1 , θ 2 , θ 3 } with respect to which any TS generated by ( L A ) A ∈ σ ( Θ ) rejects A 1 , A 2 and A 3 and, consequently, does not meet the desiderata.
From (i)–(iii), the result follows.
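A tiny numerical instance of the obstruction behind Theorem 5, using plain 0–1 losses (a standard choice, not the paper's general family): the Bayes test then accepts a hypothesis exactly when its posterior probability is at least 1/2, and a uniform posterior on three cells rejects every cell while accepting the union of any two, breaking union consonance.

```python
# Under 0-1 losses, the Bayes test accepts a hypothesis iff its posterior
# probability is >= 1/2 (accepting on ties).
def phi(prob):
    return 0 if prob >= 0.5 else 1  # 0 = accept, 1 = reject

post = {"A1": 1 / 3, "A2": 1 / 3, "A3": 1 / 3}  # uniform posterior on the partition

all_cells_rejected = all(phi(p) == 1 for p in post.values())
union_accepted = phi(post["A1"] + post["A2"]) == 0
# Union consonance fails: A1 ∪ A2 is accepted although A1 and A2 are both rejected.
assert all_cells_rejected and union_accepted
```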
Figure 3. Projection of B in u × v .

F. Proof of Theorem 6

Let φ be a TS satisfying coherence, invertibility and countable union consonance. From Theorem 1, there is a unique point estimator δ : X → Θ , such that for all A ∈ σ ( Θ ) and x ∈ X , φ A ( x ) = I ( δ ( x ) ∉ A ) . For each x ∈ X , define μ x : σ ( Θ ) → R + by:
μ x ( A ) = 1 − φ A ( x ) = I ( δ ( x ) ∈ A ) ,
that is, μ x is the probability measure degenerate at the point δ ( x ) [14]. Furthermore, let μ 0 be any probability measure defined on P ( X ) . Defining μ : P ( Θ × X ) → R + by:
μ ( B ) = ∑ ( θ , x ) ∈ B μ 0 ( { x } ) μ x ( { θ } ) , B ∈ P ( Θ × X ) ,
it is immediate that μ is a probability measure and that μ x is the conditional distribution of θ given X = x , for each x ∈ X . Next, let ( L A ) A ∈ σ ( Θ ) be any family of strict hypothesis testing loss functions. Let φ * be a testing scheme generated by ( L A ) A ∈ σ ( Θ ) with respect to the μ-marginal distribution of θ. Let us verify that φ * coincides with φ. Indeed, for x ∈ X and A ∈ σ ( Θ ) , we have:
φ A ( x ) = 0 ⟹ μ x ( A ) = 1 ⟹
∑ θ ∈ Θ [ L A ( 0 , θ ) − L A ( 1 , θ ) ] μ x ( θ ) = ∑ θ ∈ A [ L A ( 0 , θ ) − L A ( 1 , θ ) ] μ x ( θ ) < 0 ⟹ φ A * ( x ) = 0 .
Similarly,
φ A ( x ) = 1 ⟹ μ x ( A ) = 0 ⟹
∑ θ ∈ Θ [ L A ( 0 , θ ) − L A ( 1 , θ ) ] μ x ( θ ) = ∑ θ ∈ A c [ L A ( 0 , θ ) − L A ( 1 , θ ) ] μ x ( θ ) > 0 ⟹ φ A * ( x ) = 1 ,
concluding the proof. It should be emphasized that there are many other probability measures over P ( Θ × X ) and families of strict hypothesis testing loss functions that yield the result of Theorem 6. For instance, considering for each x ∈ X , a conditional probability measure μ x , such that μ x ( δ ( x ) ) > 1 / 2 and μ x ( θ ) > 0 , for all θ ∈ Θ , together with the family of 0–1 loss functions, one will obtain a Bayesian TS that coincides with φ, as well (see [14] for the details).

G. Proof of Theorem 7

To prove Part (a), we define a family of loss functions that generates Bayesian testing schemes satisfying both coherence and invertibility with respect to all prior distributions for θ, which implies, by Theorem 3.1 from [28], that, for each sample point, at most one hypothesis of each partition of Θ is not rejected. Next, we prove that, for each x X , there is a singleton that is not rejected with respect to the prior π. Combining these facts, we prove that, for each sample point, exactly one hypothesis of each partition of Θ is accepted, which is equivalent (Theorem 4.3 from [8]) to asserting that the TS generated by that family of losses with respect to π meets the desiderata.
Thus, for A ∈ σ ( Θ ) , let L A : { 0 , 1 } × ( Θ × X ) → R + be given, for θ ∈ A c and x ∈ X , by L A ( 1 , ( θ , x ) ) = 0 and:
L A ( 0 , ( θ , x ) ) = min { min { L ( d , θ ) ; 1 / L ( d , θ ) } I A ( δ ( x ) ) + max { L ( d , θ ) ; 1 / L ( d , θ ) } I A c ( δ ( x ) ) : d ∈ A } ,
and, for θ ∈ A and x ∈ X , by L A ( 0 , ( θ , x ) ) = 0 and:
L A ( 1 , ( θ , x ) ) = min { ( 1 / C ) min { L ( d , θ ) ; 1 / L ( d , θ ) } I A c ( δ ( x ) ) + C max { L ( d , θ ) ; 1 / L ( d , θ ) } I A ( δ ( x ) ) : d ∈ A c } ,
where C > 1 is any constant greater than max { E [ L ( δ ( x ) , θ ) | x ] / π x ( δ ( x ) ) : x ∈ X } .
These hypothesis testing loss functions do not penalize correct decisions. They also reflect the decision-maker’s tendency not to reject hypotheses that contain the best estimate for θ, δ ( x ) . For instance, if θ ∈ A and one decides to reject A on the basis of the sample x, the loss of falsely rejecting A is:
C min { max { L ( d , θ ) ; 1 / L ( d , θ ) } : d ∈ A c } , if δ ( x ) ∈ A ; ( 1 / C ) min { min { L ( d , θ ) ; 1 / L ( d , θ ) } : d ∈ A c } , otherwise.
If C is large enough, the decision-maker will naturally be more reluctant to reject A when δ ( x ) ∈ A than when δ ( x ) ∉ A .
This family of loss functions satisfies the condition in Equation (2) of Theorem 2. In fact, for all A , B ∈ σ ( Θ ) , with A ⊆ B , for all θ 1 ∈ A , θ 2 ∈ B c and x ∈ X , we have:
(i)
if δ(x) ∈ A, then:
$$ \frac{\Delta_A(\theta_1)}{\Delta_B(\theta_1)} = \frac{C \min\left\{ \max\left\{L(d,\theta_1);\ \frac{1}{L(d,\theta_1)}\right\} : d \in A^c \right\}}{C \min\left\{ \max\left\{L(d,\theta_1);\ \frac{1}{L(d,\theta_1)}\right\} : d \in B^c \right\}} \leq 1 \leq \frac{\min\left\{ \min\left\{L(d,\theta_2);\ \frac{1}{L(d,\theta_2)}\right\} : d \in A \right\}}{\min\left\{ \min\left\{L(d,\theta_2);\ \frac{1}{L(d,\theta_2)}\right\} : d \in B \right\}} = \frac{\Delta_A(\theta_2)}{\Delta_B(\theta_2)}; $$
(ii)
if δ(x) ∈ B ∩ Aᶜ (recall C > 1), it follows that:
$$ \frac{\Delta_A(\theta_1)}{\Delta_B(\theta_1)} = \frac{\frac{1}{C} \min\left\{ \min\left\{L(d,\theta_1);\ \frac{1}{L(d,\theta_1)}\right\} : d \in A^c \right\}}{C \min\left\{ \max\left\{L(d,\theta_1);\ \frac{1}{L(d,\theta_1)}\right\} : d \in B^c \right\}} \leq 1 \leq \frac{\min\left\{ \max\left\{L(d,\theta_2);\ \frac{1}{L(d,\theta_2)}\right\} : d \in A \right\}}{\min\left\{ \min\left\{L(d,\theta_2);\ \frac{1}{L(d,\theta_2)}\right\} : d \in B \right\}} = \frac{\Delta_A(\theta_2)}{\Delta_B(\theta_2)}; $$
(iii)
if δ(x) ∈ Bᶜ,
$$ \frac{\Delta_A(\theta_1)}{\Delta_B(\theta_1)} = \frac{\frac{1}{C} \min\left\{ \min\left\{L(d,\theta_1);\ \frac{1}{L(d,\theta_1)}\right\} : d \in A^c \right\}}{\frac{1}{C} \min\left\{ \min\left\{L(d,\theta_1);\ \frac{1}{L(d,\theta_1)}\right\} : d \in B^c \right\}} \leq 1 \leq \frac{\min\left\{ \max\left\{L(d,\theta_2);\ \frac{1}{L(d,\theta_2)}\right\} : d \in A \right\}}{\min\left\{ \max\left\{L(d,\theta_2);\ \frac{1}{L(d,\theta_2)}\right\} : d \in B \right\}} = \frac{\Delta_A(\theta_2)}{\Delta_B(\theta_2)}. $$
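The three cases above can be checked numerically on a toy finite parameter space. The sketch below is illustrative only: the base loss, C, and the sets A ⊆ B are arbitrary assumptions, and `wrong_reject`/`wrong_accept` encode the two relative losses Δ_A from the definitions above.

```python
Theta = [0, 1, 2, 3]
L = lambda d, t: abs(d - t) + 0.5            # arbitrary positive base loss
C = 5.0
m = lambda d, t: min(L(d, t), 1 / L(d, t))   # min{L; 1/L} <= 1
M = lambda d, t: max(L(d, t), 1 / L(d, t))   # max{L; 1/L} >= 1

def wrong_reject(A, theta, delta_x):         # Delta_A(theta) for theta in A
    return min((1 / C) * m(d, theta) * (delta_x not in A)
               + C * M(d, theta) * (delta_x in A)
               for d in Theta if d not in A)

def wrong_accept(A, theta, delta_x):         # Delta_A(theta) for theta in A^c
    return min(m(d, theta) * (delta_x in A)
               + M(d, theta) * (delta_x not in A)
               for d in A)

A, B = [0], [0, 1]                           # nested pair A ⊆ B
for delta_x in Theta:                        # covers delta(x) in A, B\A, B^c
    for th2 in (2, 3):                       # th2 in B^c; th1 = 0 is A's only point
        lhs = wrong_reject(A, 0, delta_x) / wrong_reject(B, 0, delta_x)
        rhs = wrong_accept(A, th2, delta_x) / wrong_accept(B, th2, delta_x)
        assert lhs <= rhs + 1e-12            # cases (i)-(iii) all hold
```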
Therefore, (L_A)_{A ∈ σ(Θ)} generates coherent testing schemes with respect to all prior distributions for θ, provided C > 1. Furthermore, for all A ∈ σ(Θ), θ₁ ∈ A and θ₂ ∈ Aᶜ, we have, if δ(x) ∈ A, that:
$$ \frac{\Delta_A(\theta_2)}{\Delta_{A^c}(\theta_2)} = \frac{\min\left\{ \min\left\{L(d,\theta_2);\ \frac{1}{L(d,\theta_2)}\right\} : d \in A \right\}}{\frac{1}{C} \min\left\{ \min\left\{L(d,\theta_2);\ \frac{1}{L(d,\theta_2)}\right\} : d \in A \right\}} = \frac{1}{1/C} = C = \frac{C \min\left\{ \max\left\{L(d,\theta_1);\ \frac{1}{L(d,\theta_1)}\right\} : d \in A^c \right\}}{\min\left\{ \max\left\{L(d,\theta_1);\ \frac{1}{L(d,\theta_1)}\right\} : d \in A^c \right\}} = \frac{\Delta_A(\theta_1)}{\Delta_{A^c}(\theta_1)}. $$
Analogously, we prove that the condition in Equation (3) of Theorem 3 is fulfilled if δ(x) ∉ A. Thus, there are testing schemes generated by (L_A)_{A ∈ σ(Θ)} that respect invertibility with respect to all priors. Finally, let us prove that a TS φ generated by (L_A)_{A ∈ σ(Θ)} is such that φ_{{δ(x)}}(x) = 0, for all x ∈ 𝒳. Indeed,
$$ \sum_{\theta \in \Theta} L_{\{\delta(x)\}}(0,(\theta,x))\, \pi_x(\theta) = \sum_{\theta \neq \delta(x)} L_{\{\delta(x)\}}(0,(\theta,x))\, \pi_x(\theta) = \sum_{\theta \neq \delta(x)} \min\left\{L(\delta(x),\theta);\ \frac{1}{L(\delta(x),\theta)}\right\} \pi_x(\theta) \leq \sum_{\theta \neq \delta(x)} L(\delta(x),\theta)\, \pi_x(\theta) \qquad (8) $$
and:
$$ \sum_{\theta \in \Theta} L_{\{\delta(x)\}}(1,(\theta,x))\, \pi_x(\theta) = L_{\{\delta(x)\}}(1,(\delta(x),x))\, \pi_x(\delta(x)) = C \min\left\{ \max\left\{L(d,\delta(x));\ \frac{1}{L(d,\delta(x))}\right\} : d \neq \delta(x) \right\} \pi_x(\delta(x)) > \min\left\{ \max\left\{L(d,\delta(x));\ \frac{1}{L(d,\delta(x))}\right\} : d \neq \delta(x) \right\} \sum_{\theta \neq \delta(x)} L(\delta(x),\theta)\, \pi_x(\theta), \qquad (9) $$
since $C > \max\left\{ \frac{E[L(\delta(x),\theta) \mid x]}{\pi_x(\delta(x))} : x \in \mathcal{X} \right\} \geq \frac{\sum_{\theta \neq \delta(x_0)} L(\delta(x_0),\theta)\, \pi_{x_0}(\theta)}{\pi_{x_0}(\delta(x_0))}$, for any x₀ ∈ 𝒳.
From Equations (8) and (9), it follows that:
$$ \sum_{\theta \in \Theta} L_{\{\delta(x)\}}(1,(\theta,x))\, \pi_x(\theta) > \min\left\{ \max\left\{L(d,\delta(x));\ \frac{1}{L(d,\delta(x))}\right\} : d \neq \delta(x) \right\} \sum_{\theta \in \Theta} L_{\{\delta(x)\}}(0,(\theta,x))\, \pi_x(\theta). $$
Therefore, since min{max{L(d, δ(x)); 1/L(d, δ(x))} : d ≠ δ(x)} ≥ 1, we obtain Σ_{θ ∈ Θ} L_{{δ(x)}}(1, (θ, x)) π_x(θ) > Σ_{θ ∈ Θ} L_{{δ(x)}}(0, (θ, x)) π_x(θ) and, consequently, φ_{{δ(x)}}(x) = 0, concluding the proof of Part (a).
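The comparison between Equations (8) and (9) can be illustrated numerically for a finite Θ. Everything in the sketch below (the posterior, the base loss, the choice of C just above its bound) is an assumption for illustration, not data from the paper:

```python
Theta = [0, 1, 2]
post = {0: 0.5, 1: 0.3, 2: 0.2}      # assumed posterior pi_x for some observed x
L = lambda d, t: abs(d - t) + 0.5    # an arbitrary positive base loss

# delta(x): Bayes estimate, minimizing posterior expected base loss
delta = min(Theta, key=lambda d: sum(L(d, t) * post[t] for t in Theta))

# any C strictly above E[L(delta, theta) | x] / pi_x(delta) works
C = 1 + sum(L(delta, t) * post[t] for t in Theta) / post[delta]

# posterior expected losses of accepting / rejecting the singleton {delta}
accept_loss = sum(min(L(delta, t), 1 / L(delta, t)) * post[t]
                  for t in Theta if t != delta)
reject_loss = C * min(max(L(d, delta), 1 / L(d, delta))
                      for d in Theta if d != delta) * post[delta]

assert reject_loss > accept_loss     # hence the singleton {delta(x)} is accepted
```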
For Part (b), suppose φ is generated by (L_A)_{A ∈ σ(Θ)} with respect to π. From Theorem 4.3 from [8] and Theorem 1, it follows that, for all x ∈ 𝒳, φ_{{δ(x)}}(x) = 0 and φ_{{d}}(x) = 1, for all d ≠ δ(x). Thus,
$$ \sum_{\theta \in \Theta} \left[ L_{\{\delta(x)\}}(0,\theta) - L_{\{\delta(x)\}}(1,\theta) \right] \pi_x(\theta) \leq 0 \leq \sum_{\theta \in \Theta} \left[ L_{\{d\}}(0,\theta) - L_{\{d\}}(1,\theta) \right] \pi_x(\theta), $$
for d ≠ δ(x), where π_x is the posterior distribution of θ given x. Defining L : Θ × Θ → ℝ₊ by:
$$ L(d,\theta) = \left[ L_{\{d\}}(0,\theta) - L_{\{d\}}(1,\theta) \right] - \min\left\{ L_{\{d'\}}(0,\theta) - L_{\{d'\}}(1,\theta) : d' \in \Theta \right\} $$
$$ = \left[ L_{\{d\}}(0,\theta) - L_{\{d\}}(1,\theta) \right] - \left[ L_{\{\theta\}}(0,\theta) - L_{\{\theta\}}(1,\theta) \right], $$
it follows that:
$$ \sum_{\theta \in \Theta} L(\delta(x),\theta)\, \pi_x(\theta) \leq \sum_{\theta \in \Theta} L(d,\theta)\, \pi_x(\theta), $$
for d ≠ δ(x), for each x ∈ 𝒳. Therefore, δ is a Bayes estimator for θ generated by L with respect to π. Notice that L (essentially) assigns to the estimate d of the parameter the difference between the loss of not rejecting the hypothesis {d} and that of rejecting it when the state of nature is θ. It seems reasonable that the greater the "distance" between d and θ, the greater this difference (and, consequently, L(d, θ)) should be.
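The induced loss of Part (b) is easy to sketch for a finite Θ. In the illustration below, the singleton testing losses are hypothetical randomly generated values, positive for wrong decisions and zero for correct ones, as the loss family in the proof requires; no value comes from the paper.

```python
import random

Theta = [0, 1, 2]
random.seed(0)
# singleton_loss[d][(accept, theta)]: hypothetical penalty for deciding
# accept in {0, 1} about the singleton hypothesis {d} when the state is theta;
# correct decisions cost 0, wrong ones an arbitrary positive amount.
singleton_loss = {d: {(a, t): (0.0 if (a == 0) == (t == d) else random.uniform(1, 2))
                      for a in (0, 1) for t in Theta}
                  for d in Theta}

def induced_loss(d, theta):
    # diff(e) = L_{e}(0, theta) - L_{e}(1, theta); its minimum over e is at e = theta,
    # the only e for which the difference is negative
    diff = lambda e: singleton_loss[e][(0, theta)] - singleton_loss[e][(1, theta)]
    return diff(d) - min(diff(e) for e in Theta)

# the induced loss is nonnegative and vanishes exactly at d = theta
assert all(induced_loss(d, t) >= 0 for d in Theta for t in Theta)
assert all(induced_loss(t, t) == 0.0 for t in Theta)
```

By construction, L(d, θ) ≥ 0 with equality exactly at d = θ, which is how a point-estimation loss should behave.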

References

  1. Hommel, G.; Bretz, F. Aesthetics and power considerations in multiple testing—A contradiction? Biom. J. 2008, 50, 657–666.
  2. Shaffer, J.P. Multiple hypothesis testing. Ann. Rev. Psychol. 1995, 46, 561–584.
  3. Hochberg, Y.; Tamhane, A.C. Multiple Comparison Procedures; Wiley: New York, NY, USA, 1987.
  4. Farcomeni, A. A review of modern multiple hypothesis testing, with particular attention to the false discovery proportion. Stat. Methods Med. Res. 2008, 17, 347–388.
  5. Schervish, M.J. Theory of Statistics; Springer: New York, NY, USA, 1997.
  6. Gabriel, K.R. Simultaneous test procedures—Some theory of multiple comparisons. Ann. Math. Stat. 1969, 40, 224–250.
  7. Lehmann, E.L. A theory of some multiple decision problems, II. Ann. Math. Stat. 1957, 28, 547–572.
  8. Izbicki, R.; Esteves, L.G. Logical consistency in simultaneous statistical test procedures. Log. J. IGPL 2015.
  9. Pereira, C.A.B.; Stern, J.M.; Wechsler, S. Can a significance test be genuinely Bayesian? Bayesian Anal. 2008, 3, 79–100.
  10. Stern, J.M. Constructive verification, empirical induction and falibilist deduction: A threefold contrast. Information 2011, 2, 635–650.
  11. Pereira, C.A.B.; Stern, J.M. Evidence and credibility: Full Bayesian significance test for precise hypotheses. Entropy 1999, 1, 99–110.
  12. Lin, S.K.; Chen, C.K.; Ball, D.; Liu, H.C.; Loh, E.W. Gender-specific contribution of the GABAA subunit genes on 5q33 in methamphetamine use disorder. Pharmacogenomics J. 2003, 3, 349–355.
  13. Izbicki, R.; Fossaluza, V.; Hounie, A.G.; Nakano, E.Y.; Pereira, C.A. Testing allele homogeneity: The problem of nested hypotheses. BMC Genet. 2012, 13.
  14. Silva, G.M. Propriedades Lógicas de Classes de Testes de Hipóteses [Logical Properties of Classes of Hypothesis Tests]. Ph.D. Thesis, University of São Paulo, São Paulo, Brazil, 2014.
  15. Evans, M. Measuring Statistical Evidence Using Relative Belief; Chapman & Hall/CRC: London, UK, 2015.
  16. Marcus, R.; Peritz, E.; Gabriel, K.R. On closed testing procedures with special reference to ordered analysis of variance. Biometrika 1976, 63, 655–660.
  17. Sonnemann, E. General solutions to multiple testing problems. Biom. J. 2008, 50, 641–656.
  18. Lavine, M.; Schervish, M.J. Bayes factors: What they are and what they are not. Am. Stat. 1999, 53, 119–122.
  19. Kneale, W.; Kneale, M. The Development of Logic; Oxford University Press: Oxford, UK, 1962.
  20. DeGroot, M.H. Optimal Statistical Decisions; McGraw-Hill: New York, NY, USA, 1970.
  21. Finner, H.; Strassburger, K. The partitioning principle: A powerful tool in multiple decision theory. Ann. Stat. 2002, 30, 1194–1213.
  22. Aitchison, J. Confidence-region tests. J. R. Stat. Soc. Ser. B 1964, 26, 462–476.
  23. Darwiche, A.Y.; Ginsberg, M.L. A symbolic generalization of probability theory. In Proceedings of the Tenth National Conference on Artificial Intelligence, AAAI-92, San Jose, CA, USA, 12–16 July 1992.
  24. Evans, M.; Jang, G.H. Inferences from prior-based loss functions. 2011; arXiv:1104.3258.
  25. Berger, J.O. In defense of the likelihood principle: Axiomatics and coherency. Bayesian Stat. 1985, 2, 33–66.
  26. Madruga, M.R.; Esteves, L.G.; Wechsler, S. On the Bayesianity of Pereira–Stern tests. Test 2001, 10, 291–299.
  27. Ripley, B.D. Pattern Recognition and Neural Networks; Cambridge University Press: Cambridge, UK, 1996.
  28. Izbicki, R. Classes de Testes de Hipóteses [Classes of Hypothesis Tests]. Ph.D. Thesis, University of São Paulo, São Paulo, Brazil, 2010.

Share and Cite

Da Silva, G.M.; Esteves, L.G.; Fossaluza, V.; Izbicki, R.; Wechsler, S. A Bayesian Decision-Theoretic Approach to Logically-Consistent Hypothesis Testing. Entropy 2015, 17, 6534–6559. https://doi.org/10.3390/e17106534