# A Bayesian Decision-Theoretic Approach to Logically-Consistent Hypothesis Testing


## Abstract


## 1. Introduction

One could (...) argue that ‘power is not everything’. In particular for multiple test procedures one can formulate additional requirements, such as, for example, that the decision patterns should be logical, conceivable to other persons, and, as far as possible, simple to communicate to non-statisticians.—G. Hommel and F. Bretz [1]

## 2. Testing Schemes

**Definition 1.** (Testing scheme (TS)) Let the σ-field of subsets of the parameter space, $\sigma (\Theta )$, be the set of hypotheses to be tested. Moreover, let Ψ be the set of all test functions defined on $\mathcal{X}$. A TS is a function $\phi :\sigma (\Theta )\to \Psi $ that assigns to each hypothesis $A\in \sigma (\Theta )$ the test ${\phi}_{A}\in \Psi $ for testing A.

**Example 1.** (Tests based on posterior probabilities) Assume $\Theta ={\mathbb{R}}^{d}$ and $\sigma (\Theta )=\mathcal{B}\left({\mathbb{R}}^{d}\right)$, the Borel σ-field of ${\mathbb{R}}^{d}$. Let π be the prior probability distribution for θ. For each $A\in \sigma (\Theta )$, let ${\phi}_{A}:\mathcal{X}\to \{0,1\}$ be defined by:

**Definition 2.** (TS generated by a family of loss functions) Let $(\mathcal{X}\times \Theta ,\sigma (\mathcal{X}\times \Theta ),\mathrm{I}\phantom{\rule{-2.5pt}{0ex}}\mathrm{P})$ be a Bayesian statistical model. Let ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ be a family of hypothesis testing loss functions, where ${L}_{A}:\{0,1\}\times \Theta \to \mathbb{R}$ is the loss function for testing $A\in \sigma (\Theta )$. A TS generated by the family of loss functions ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ is any TS φ defined over $\sigma (\Theta )$, such that, $\forall A\in \sigma (\Theta )$, ${\phi}_{A}$ is a Bayes test for hypothesis A with respect to π considering the loss ${L}_{A}$.

**Example 2.** (Tests based on posterior probabilities) Assume the same scenario as Example 1 and that ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ is a family of loss functions, such that $\forall A\in \sigma (\Theta )$ and $\forall \theta \in \Theta $,

**Example 3.** (FBST testing scheme) Let $\Theta ={\mathbb{R}}^{d}$, $\sigma (\Theta )=\mathcal{B}\left({\mathbb{R}}^{d}\right)$ and $f(\cdot )$ be the prior probability density function (pdf) for θ. Suppose that, for each $x\in \mathcal{X}$, there exists $f(\cdot |x)$, the pdf of the posterior distribution of θ given x. For each hypothesis $A\in \sigma (\Theta )$, let:

**Definition 3.** (TS generated by a point estimation procedure) Let $\delta :\mathcal{X}\to \Theta $ be a point estimator for θ ([5] (p. 296)). The TS generated by δ is defined, for each $A\in \sigma (\Theta )$ and each $x\in \mathcal{X}$, by ${\phi}_{A}(x)=\mathbb{I}\left(\delta (x)\notin A\right)$; that is, A is rejected exactly when the point estimate of θ falls outside A.

**Example 4.** (TS generated by a point estimation procedure) Let $\Theta =\mathbb{R}$, $\sigma (\Theta )=\mathcal{P}(\Theta )$ and ${X}_{1},\dots ,{X}_{n}|\theta $ i.i.d. $N(\theta ,1)$. The TS generated by the sample mean, $\overline{X}$, rejects $A\in \sigma (\Theta )$ when $x$ is observed if $\overline{x}\notin A$.
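Example 4's estimator-generated scheme is easy to sketch in code. Below is a minimal illustration; the sample, the interval hypotheses and the name `ts_from_estimator` are purely illustrative choices, not taken from the paper:

```python
def ts_from_estimator(estimate, hypothesis):
    """Testing scheme generated by a point estimator:
    reject hypothesis A (return 1) iff the estimate lies outside A."""
    return 0 if hypothesis(estimate) else 1

# Hypothetical observations from N(theta, 1); the point estimate is the sample mean.
sample = [0.8, 1.3, 0.4, 1.1]
xbar = sum(sample) / len(sample)  # 0.9

# Nested hypotheses A subset of B: coherence demands accepting A forces accepting B.
A = lambda t: 0.5 <= t <= 1.5
B = lambda t: 0.0 <= t <= 2.0

d_A = ts_from_estimator(xbar, A)  # accept: 0.9 lies in A
d_B = ts_from_estimator(xbar, B)  # accept: 0.9 lies in B as well
assert not (d_A == 0 and d_B == 1)  # coherence holds by construction
```

Because rejection depends only on whether the single estimate falls in the set, logical consistency across all hypotheses holds automatically for such schemes.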

## 3. The Desiderata

#### 3.1. Coherence

When a hypothesis is tested by a significance test and is not rejected, it is generally agreed that all hypotheses implied by that hypothesis (its “components”) must also be considered as non-rejected.—K. R. Gabriel [6]

**Definition 4.** (Coherence) A testing scheme φ is coherent if:

**Example 5.** Suppose that in a case-control study, one measures the genotype at a certain locus for each individual of a sample. Results are shown in Table 1. These numbers were taken from a study presented in [12], whose aim was to verify the hypothesis that subunits of the gene $GABA_{A}$ contribute to a condition known as methamphetamine use disorder. Here, the set of all possible genotypes is $\mathbb{G}=\{AA,AB,BB\}$. Let $\gamma =({\gamma}_{AA},{\gamma}_{AB},{\gamma}_{BB})$, where ${\gamma}_{i}$ is the probability that an individual from the case group has genotype i. Similarly, let $\pi =({\pi}_{AA},{\pi}_{AB},{\pi}_{BB})$, where ${\pi}_{i}$ is the probability that an individual from the control group has genotype i.

| | AA | AB | BB | Total |
|---|---|---|---|---|
| Case | 55 | 83 | 50 | 188 |
| Control | 24 | 42 | 39 | 105 |
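A Bayesian treatment of Table 1 typically models each group's counts as multinomial with a conjugate Dirichlet prior. The sketch below assumes a uniform Dirichlet(1, 1, 1) prior (our choice, not stated in the excerpt) and computes the posterior means of γ and π:

```python
# Conjugate Dirichlet update for the genotype probabilities of Table 1.
case_counts = {"AA": 55, "AB": 83, "BB": 50}
control_counts = {"AA": 24, "AB": 42, "BB": 39}

def posterior_mean(counts, prior=(1, 1, 1)):
    """Posterior mean of a Dirichlet(prior) updated with multinomial counts."""
    alphas = [a + n for a, n in zip(prior, counts.values())]
    total = sum(alphas)
    return {g: a / total for g, a in zip(counts, alphas)}

gamma_hat = posterior_mean(case_counts)    # case group probabilities
pi_hat = posterior_mean(control_counts)    # control group probabilities
print(gamma_hat, pi_hat)
```

Posterior probabilities of composite hypotheses about (γ, π), such as those tested in [12,13], can then be approximated by Monte Carlo sampling from these Dirichlet posteriors.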

#### 3.2. Invertibility

There is a duality between hypotheses and alternatives which is not respected in most of the classical hypothesis-testing literature. (...) suppose that we decide to switch the names of alternative and hypothesis, so that ${\Omega}_{H}$ becomes ${\Omega}_{A}$, and vice versa. Then we can switch tests from ϕ to $\psi =1-\varphi $ and the “actions” accept and reject become switched.—M. J. Schervish [5] (p. 216)

**Definition 5.** (Invertibility) A testing scheme φ satisfies invertibility if:

**Example 6.** Suppose that $X|\theta \sim Normal(\theta ,1)$, and consider that the parameter space is $\Theta =\{-3,3\}$. Assume one wants to test the following null hypotheses:
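With a two-point parameter space, posterior probabilities reduce to a likelihood-ratio computation, and tests that reject a hypothesis exactly when its posterior probability falls below 1/2 satisfy invertibility (up to ties at exactly 1/2, which would need a convention). A sketch, assuming a uniform prior on $\Theta =\{-3,3\}$ and the hypothesis $A:\theta =-3$ versus its complement:

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    """Density of Normal(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_prob_theta3(x, prior3=0.5):
    """P(theta = 3 | x) for Theta = {-3, 3} and X | theta ~ Normal(theta, 1)."""
    num = prior3 * normal_pdf(x, 3)
    return num / (num + (1 - prior3) * normal_pdf(x, -3))

x = 1.0                                # hypothetical observation
p3 = posterior_prob_theta3(x)
phi_A = 1 if (1 - p3) < 0.5 else 0     # test of A: theta = -3
phi_Ac = 1 if p3 < 0.5 else 0          # test of A^c: theta = 3
assert phi_Ac == 1 - phi_A             # invertibility at this observation
```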

#### 3.3. Consonance

... a test for ${\left({\cup}_{i\in I}{H}_{i}\right)}^{c}$ versus $\left({\cup}_{i\in I}{H}_{i}\right)$ may result in rejection which then indicates that at least one of the hypotheses ${H}_{i}$, $i\in I$, may be true.—H. Finner and K. Strassburger [21]

**Definition 6.** (Union Consonance) A TS φ satisfies finite (countable) union consonance if, for every finite (countable) set of indices I,

**Theorem 1.** Let Θ be a countable parameter space and $\sigma (\Theta )=\mathcal{P}(\Theta )$. Let φ be a testing scheme defined on $\sigma (\Theta )$. The TS φ satisfies coherence, invertibility and countable union consonance if, and only if, there is a point estimator $\delta :\mathcal{X}\to \Theta $, such that φ is generated by δ.

## 4. A Bayesian Look at Each Desideratum

**Example 7.** Suppose that $X|\theta \sim Bernoulli(\theta )$ and that one is interested in testing the null hypotheses:

**State of Nature**

| Decision | $\theta \in {H}_{0}^{A}$ | $\theta \notin {H}_{0}^{A}$ |
|---|---|---|
| 0 | 0 | 1 |
| 1 | 6 | 0 |

**State of Nature**

| Decision | $\theta \in {H}_{0}^{B}$ | $\theta \notin {H}_{0}^{B}$ |
|---|---|---|
| 0 | 0 | 1 |
| 1 | 2 | 0 |

**Example 8.** In the setup of Example 7, suppose one also needs to test the null hypothesis ${H}_{0}^{{B}^{c}}:\theta >0.5$ by taking into account the loss function in Table 3.

**State of Nature**

| Decision | $\theta \in {H}_{0}^{{B}^{c}}$ | $\theta \notin {H}_{0}^{{B}^{c}}$ |
|---|---|---|
| 0 | 0 | 4 |
| 1 | 1 | 0 |
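A Bayes test for a 2 × 2 loss table as in Examples 7 and 8 simply compares the posterior expected losses of the two decisions. The sketch below assumes a posterior probability of 0.3 for both null hypotheses (an illustrative number; the examples' actual posteriors depend on the prior and data). Note how the asymmetric penalties for a wrongful rejection (6 versus 2) can make the two tests decide differently at the same posterior probability:

```python
def bayes_test(p_h0, loss_accept_wrong, loss_reject_wrong):
    """Bayes test for a 2x2 loss table with zero loss for correct decisions:
    expected loss of accepting (d=0) = loss_accept_wrong * (1 - p_h0),
    expected loss of rejecting (d=1) = loss_reject_wrong * p_h0.
    Return 0 (accept) when accepting has the smaller expected loss."""
    risk_accept = loss_accept_wrong * (1 - p_h0)
    risk_reject = loss_reject_wrong * p_h0
    return 0 if risk_accept <= risk_reject else 1

p = 0.3  # assumed posterior probability of each null hypothesis, for illustration

d_A = bayes_test(p, loss_accept_wrong=1, loss_reject_wrong=6)  # H0^A table
d_B = bayes_test(p, loss_accept_wrong=1, loss_reject_wrong=2)  # H0^B table
print(d_A, d_B)  # H0^A is accepted while H0^B is rejected
```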

**Definition 7.** (Relative loss) Let ${L}_{A}$ be a loss function for testing the hypothesis $A\in \sigma (\Theta )$. The function ${\Delta}_{A}:\Theta \to \mathbb{R}$ defined by:

**Figure 1.** Interpretation of sensible relative losses: more serious decision errors should be assigned larger relative losses.

**Theorem 2.** Let ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ be a family of hypothesis testing loss functions. Suppose that for all ${\theta}_{1},{\theta}_{2}\in \Theta $, there is $x\in \mathcal{X}$, such that ${L}_{x}\left({\theta}_{1}\right),{L}_{x}\left({\theta}_{2}\right)>0$. Then, for all prior distributions π for θ, there exists a testing scheme generated by ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ with respect to π that is coherent if, and only if, ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ is such that for all $A,B\in \sigma (\Theta )$ with $A\subseteq B$:

**Example 9.** Consider, for each $A\in \sigma (\Theta )$, the loss function ${L}_{A}$ in Table 4 to test the null hypothesis A, in which $\lambda :\sigma (\Theta )\to {\mathbb{R}}_{+}$ is any finite measure, such that $\lambda (\Theta )>0$. This family of loss functions satisfies the condition in Equation (2) for coherence since, for all $A,B\in \sigma (\Theta )$, such that $A\subseteq B$, and for all ${\theta}_{1}\in A$ and ${\theta}_{2}\in {B}^{c}$, ${\Delta}_{A}\left({\theta}_{1}\right)=\lambda \left(A\right)$, ${\Delta}_{B}\left({\theta}_{2}\right)=\lambda \left({B}^{c}\right)$, ${\Delta}_{A}\left({\theta}_{2}\right)=\lambda \left({A}^{c}\right)$ and ${\Delta}_{B}\left({\theta}_{1}\right)=\lambda \left(B\right)$.

**State of Nature**

| Decision | $\theta \in A$ | $\theta \notin A$ |
|---|---|---|
| 0 | 0 | $\lambda \left({A}^{c}\right)$ |
| 1 | $\lambda \left(A\right)$ | 0 |
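Under Table 4's losses, the Bayes test rejects A precisely when the expected loss of accepting exceeds that of rejecting, i.e., when $\lambda ({A}^{c})\,P(\theta \notin A\,|\,x)>\lambda (A)\,P(\theta \in A\,|\,x)$. A small numerical check of Example 9's coherence claim, taking λ to be the counting measure on an illustrative four-point parameter space with an assumed posterior:

```python
# Example 9's loss family with lambda = counting measure on a toy parameter space.
# The space, the posterior and the sets A, B below are illustrative choices.
theta_space = [1, 2, 3, 4]
posterior = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}

def lam(S):
    """Finite measure lambda; here, the counting measure."""
    return len(S)

def bayes_test_ex9(A):
    """Reject A (return 1) iff lambda(A^c) * P(A^c|x) > lambda(A) * P(A|x)."""
    Ac = [t for t in theta_space if t not in A]
    pA = sum(posterior[t] for t in A)
    return 1 if lam(Ac) * (1 - pA) > lam(A) * pA else 0

A = [1, 2]        # A is a subset of B
B = [1, 2, 3]
dA, dB = bayes_test_ex9(A), bayes_test_ex9(B)
assert not (dA == 0 and dB == 1)  # coherence: accepting A forces accepting B
```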

**Example 10.** Assume Θ is equipped with a distance, say d. Define, for each $A\in \sigma (\Theta )$, the loss function ${L}_{A}$ for testing A by:

**Theorem 3.** Let ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ be a family of hypothesis testing loss functions. Suppose that for all ${\theta}_{1},{\theta}_{2}\in \Theta $, there is $x\in \mathcal{X}$, such that ${L}_{x}\left({\theta}_{1}\right),{L}_{x}\left({\theta}_{2}\right)>0$. Then, for all prior distributions π for θ, there exists a testing scheme generated by ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ with respect to π that satisfies invertibility if, and only if, ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ is such that for all $A\in \sigma (\Theta )$:

**Theorem 4.** Let ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ be a family of hypothesis testing loss functions. Suppose that for all ${\theta}_{1},{\theta}_{2}\in \Theta $, there is $x\in \mathcal{X}$, such that ${L}_{x}\left({\theta}_{1}\right),{L}_{x}\left({\theta}_{2}\right)>0$. If for all prior distributions π for θ, there exists a testing scheme generated by ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ with respect to π that satisfies finite union consonance, then ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ is such that for all $A,B\in \sigma (\Theta )$ and for all ${\theta}_{1}\in A\cup B,{\theta}_{2}\in {(A\cup B)}^{c}$,

## 5. Putting the Desiderata Together

**Theorem 5.** Assume that Θ and $\sigma (\Theta )$ are such that $|\Theta |\ge 3$ and that there is a partition of Θ composed of three nonempty measurable sets. Assume also that for every triplet ${\theta}_{1},{\theta}_{2},{\theta}_{3}\in \Theta $, there is $x\in \mathcal{X}$, such that ${L}_{x}\left({\theta}_{i}\right)>0$ for $i=1,2,3$. Then, there is no family of strict hypothesis testing loss functions that induces, for each prior distribution for θ, a testing scheme satisfying coherence, invertibility and finite union consonance.

**Theorem 6.** Let Θ be a countable (finite) parameter space, $\sigma (\Theta )=\mathcal{P}(\Theta )$, and $\mathcal{X}$ be a countable sample space. Let φ be a testing scheme that satisfies coherence, invertibility and countable (finite) union consonance. Then, there exist a probability measure μ over $\mathcal{P}(\Theta \times \mathcal{X})$ and a family of strict hypothesis testing loss functions ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$, such that φ is generated by ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ with respect to the μ-marginal distribution of θ.

**Theorem 7.** Let Θ and $\mathcal{X}$ be finite sets and $\sigma (\Theta )=\mathcal{P}(\Theta )$. Let φ be the testing scheme generated by the point estimator $\delta :\mathcal{X}\to \Theta $. Suppose that for all $x\in \mathcal{X}$, ${L}_{x}(\delta \left(x\right))>0$.

- (a)
- If there exist a probability measure $\pi :\sigma (\Theta )\to [0,1]$ for θ, with $\pi (\delta (x\left)\right)>0$ for all $x\in \mathcal{X}$, and a loss function $L:\Theta \times \Theta \to {\mathbb{R}}_{+}$, satisfying $L(\theta ,\theta )=0$ and $L(d,\theta )>0$ for $d\ne \theta $, such that δ is a Bayes estimator for θ generated by L with respect to π, then there is a family of hypothesis testing loss functions ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$, ${L}_{A}:\{0,1\}\times (\Theta \times \mathcal{X})\to {\mathbb{R}}_{+}$ for each $A\in \sigma (\Theta )$, such that φ is generated by ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ with respect to π.
- (b)
- If there exist a probability measure $\pi :\sigma (\Theta )\to [0,1]$ for θ, with $\pi (\delta (x\left)\right)>0$ for all $x\in \mathcal{X}$, and a family of strict hypothesis testing loss functions ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$, ${L}_{A}:\{0,1\}\times \Theta \to {\mathbb{R}}_{+}$ for each $A\in \sigma (\Theta )$, such that φ is generated by ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ with respect to π, then there is a loss function $L:\Theta \times \Theta \to {\mathbb{R}}_{+}$, with $L(\theta ,\theta )=0$ and $L(d,\theta )>0$ for $d\ne \theta $, such that δ is a Bayes estimator for θ generated by L with respect to π.

**Example 11.** Assume that $\Theta =\{{\theta}_{1},{\theta}_{2},\dots ,{\theta}_{k}\}$ and $\mathcal{X}$ is finite. Assume also that there is a maximum likelihood estimator (MLE) for θ, ${\delta}_{ML}:\mathcal{X}\to \Theta $, such that ${L}_{x}\left({\delta}_{ML}\left(x\right)\right)>0$, for all $x\in \mathcal{X}$. Then, the testing scheme generated by ${\delta}_{ML}$ is a TS of Bayes tests. Indeed, when Θ is finite, an MLE for θ is a Bayes estimator generated by the loss function $L(d,\theta )=\mathbb{I}(d\ne \theta )$, $d,\theta \in \Theta $, with respect to the uniform prior over Θ (that is, ${\delta}_{ML}\left(x\right)$ corresponds to a mode of the posterior distribution ${\pi}_{x}$, for each $x\in \mathcal{X}$). Consequently (recall that $|\Theta |=k$), ${\pi}_{x}\left({\delta}_{ML}\left(x\right)\right)\ge 1/k$ and $\mathbb{E}\left[L({\delta}_{ML}\left(x\right),\theta )\,|\,x\right]=1-{\pi}_{x}\left({\delta}_{ML}\left(x\right)\right)$, for each $x\in \mathcal{X}$. Thus,
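Example 11's identity between the MLE and the posterior mode under a uniform prior is easy to check numerically. A sketch on an illustrative three-point parameter grid with made-up Bernoulli data:

```python
# On a finite Theta with a uniform prior, the MLE coincides with the posterior
# mode, i.e., the Bayes estimator under 0-1 loss. Grid and data are illustrative.
theta_grid = [0.2, 0.5, 0.8]
data = [1, 1, 0, 1]  # hypothetical Bernoulli observations

def likelihood(theta):
    """Bernoulli likelihood of the observed data at parameter value theta."""
    p = 1.0
    for x in data:
        p *= theta if x == 1 else (1 - theta)
    return p

mle = max(theta_grid, key=likelihood)

# Uniform prior: posterior is proportional to the likelihood, so same argmax.
post = {t: likelihood(t) for t in theta_grid}  # unnormalized posterior
posterior_mode = max(post, key=post.get)
assert mle == posterior_mode
```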

## 6. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## Appendix

## A. Proof of Theorem 1

## B. Proof of Theorem 2

## C. Proof of Theorem 3

## D. Proof of Theorem 4

- (i) if ${\theta}_{1}\in A\cap B$, then:
$${\phi}_{C}^{*}\left({x}^{\prime}\right)=0,\quad \text{if}\quad {\alpha}_{0}>\frac{{\Delta}_{C}\left({\theta}_{2}\right)}{{\Delta}_{C}\left({\theta}_{1}\right)+{\Delta}_{C}\left({\theta}_{2}\right)};$$
- (ii) if ${\theta}_{1}\notin A$, then:
$${\phi}_{B}^{*}\left({x}^{\prime}\right)=0,\quad \text{if}\quad {\alpha}_{0}>\frac{{\Delta}_{B}\left({\theta}_{2}\right)}{{\Delta}_{B}\left({\theta}_{1}\right)+{\Delta}_{B}\left({\theta}_{2}\right)}\quad \text{and}\quad {\phi}_{A\cup B}^{*}\left({x}^{\prime}\right)=0,\quad \text{if}\quad {\alpha}_{0}>\frac{{\Delta}_{A\cup B}\left({\theta}_{2}\right)}{{\Delta}_{A\cup B}\left({\theta}_{1}\right)+{\Delta}_{A\cup B}\left({\theta}_{2}\right)},$$
$${\int}_{\Theta}\left[{L}_{A}(0,\theta )-{L}_{A}(1,\theta )\right]\mathrm{d}{\pi}_{{x}^{\prime}}(\theta )={\Delta}_{A}\left({\theta}_{1}\right){\alpha}_{0}+{\Delta}_{A}\left({\theta}_{2}\right)(1-{\alpha}_{0})>0;$$
- (iii) if ${\theta}_{1}\notin B$, a development similar to that of Case (ii) yields the same results: ${\phi}_{A}^{*}\left({x}^{\prime}\right)=1$, ${\phi}_{B}^{*}\left({x}^{\prime}\right)=1$ and ${\phi}_{A\cup B}^{*}\left({x}^{\prime}\right)=0$.

## E. Proof of Theorem 5

- (i) if ${\Delta}_{{A}_{1}}\left({\theta}_{1}\right){\Delta}_{{A}_{2}}\left({\theta}_{2}\right)>{\Delta}_{{A}_{1}}\left({\theta}_{2}\right){\Delta}_{{A}_{2}}\left({\theta}_{1}\right)$, then the projection of the line segment joining ${P}_{1}$ and ${P}_{2}$ over the plane $\mathbf{w}$ intersects the (third) quadrant ${\mathbb{R}}_{-}\times {\mathbb{R}}_{-}\times \left\{0\right\}$ (see the first graphic in Figure 3). Thus, there is $\gamma \in (0,1)$, such that $\gamma {\Delta}_{{A}_{i}}\left({\theta}_{1}\right)+(1-\gamma ){\Delta}_{{A}_{i}}\left({\theta}_{2}\right)<0$, $i=1,2$. As $\gamma {P}_{1}+(1-\gamma ){P}_{2}\in \mathcal{B}$, there is a posterior $({\alpha}_{1},{\alpha}_{2},{\alpha}_{3})$ concentrated on $\{{\theta}_{1},{\theta}_{2},{\theta}_{3}\}$ with respect to which any TS generated by ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ does not reject both ${A}_{1}$ and ${A}_{2}$ and, therefore, does not respect coherence, invertibility and finite union consonance;
- (ii) if ${\Delta}_{{A}_{1}}\left({\theta}_{1}\right){\Delta}_{{A}_{2}}\left({\theta}_{2}\right)={\Delta}_{{A}_{1}}\left({\theta}_{2}\right){\Delta}_{{A}_{2}}\left({\theta}_{1}\right)$, then the projection of the line segment joining ${P}_{1}$ and ${P}_{2}$ over $\mathbf{w}$ intersects the origin $(0,0,0)$ (see the second graphic in Figure 3). Thus, there is ${t}_{0}>0$, such that the point ${P}_{0}=(0,0,{t}_{0})\in \mathcal{B}$. Considering now the line segment joining ${P}_{0}$ and ${P}_{3}$, it is easily seen that for any $\gamma \in (\frac{-{\Delta}_{{A}_{3}}\left({\theta}_{3}\right)}{{t}_{0}-{\Delta}_{{A}_{3}}\left({\theta}_{3}\right)},1)$, $\gamma \cdot 0+(1-\gamma ){\Delta}_{{A}_{1}}\left({\theta}_{3}\right)>0$, $\gamma \cdot 0+(1-\gamma ){\Delta}_{{A}_{2}}\left({\theta}_{3}\right)>0$ and $\gamma {t}_{0}+(1-\gamma ){\Delta}_{{A}_{3}}\left({\theta}_{3}\right)>0$. As $\gamma {P}_{0}+(1-\gamma ){P}_{3}\in \mathcal{B}$, there is a posterior distribution with respect to which any TS generated by ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ rejects ${A}_{1}$, ${A}_{2}$ and ${A}_{3}$ and, therefore, does not satisfy the logical consistency properties all together;
- (iii) if ${\Delta}_{{A}_{1}}\left({\theta}_{1}\right){\Delta}_{{A}_{2}}\left({\theta}_{2}\right)<{\Delta}_{{A}_{1}}\left({\theta}_{2}\right){\Delta}_{{A}_{2}}\left({\theta}_{1}\right)$, then the projection of the above-mentioned segment over $\mathbf{w}$ intersects the (first) quadrant ${\mathbb{R}}_{+}\times {\mathbb{R}}_{+}\times \left\{0\right\}$ (third graphic in Figure 3). Thus, there is $\gamma \in (0,1)$ such that $\gamma {\Delta}_{{A}_{i}}\left({\theta}_{1}\right)+(1-\gamma ){\Delta}_{{A}_{i}}\left({\theta}_{2}\right)>0$, $i=1,2$. As $\gamma {P}_{1}+(1-\gamma ){P}_{2}\in \mathcal{B}$, there is a posterior $({\alpha}_{1},{\alpha}_{2},{\alpha}_{3})$ concentrated on $\{{\theta}_{1},{\theta}_{2},{\theta}_{3}\}$ with respect to which any TS generated by ${\left({L}_{A}\right)}_{A\in \sigma (\Theta )}$ rejects ${A}_{1}$, ${A}_{2}$ and ${A}_{3}$ and, consequently, does not meet the desiderata.

## F. Proof of Theorem 6

## G. Proof of Theorem 7

- (i) if $\delta \left(x\right)\in A$, then:
$$\frac{{\Delta}_{A}\left({\theta}_{1}\right)}{{\Delta}_{B}\left({\theta}_{1}\right)}=\frac{C\min \left\{\max \left\{L(d,{\theta}_{1});\frac{1}{L(d,{\theta}_{1})}\right\}:d\in {A}^{c}\right\}}{C\min \left\{\max \left\{L(d,{\theta}_{1});\frac{1}{L(d,{\theta}_{1})}\right\}:d\in {B}^{c}\right\}}\le 1\le \frac{\min \left\{\min \left\{L(d,{\theta}_{2});\frac{1}{L(d,{\theta}_{2})}\right\}:d\in A\right\}}{\min \left\{\min \left\{L(d,{\theta}_{2});\frac{1}{L(d,{\theta}_{2})}\right\}:d\in B\right\}}=\frac{{\Delta}_{A}\left({\theta}_{2}\right)}{{\Delta}_{B}\left({\theta}_{2}\right)};$$
- (ii) if $\delta \left(x\right)\in B\cap {A}^{c}$ (recall $C\ge 1$), it follows that:
$$\frac{{\Delta}_{A}\left({\theta}_{1}\right)}{{\Delta}_{B}\left({\theta}_{1}\right)}=\frac{\frac{1}{C}\min \left\{\min \left\{L(d,{\theta}_{1});\frac{1}{L(d,{\theta}_{1})}\right\}:d\in {A}^{c}\right\}}{C\min \left\{\max \left\{L(d,{\theta}_{1});\frac{1}{L(d,{\theta}_{1})}\right\}:d\in {B}^{c}\right\}}\le 1\le \frac{\min \left\{\max \left\{L(d,{\theta}_{2});\frac{1}{L(d,{\theta}_{2})}\right\}:d\in A\right\}}{\min \left\{\min \left\{L(d,{\theta}_{2});\frac{1}{L(d,{\theta}_{2})}\right\}:d\in B\right\}}=\frac{{\Delta}_{A}\left({\theta}_{2}\right)}{{\Delta}_{B}\left({\theta}_{2}\right)};$$
- (iii) if $\delta \left(x\right)\in {B}^{c}$,
$$\frac{{\Delta}_{A}\left({\theta}_{1}\right)}{{\Delta}_{B}\left({\theta}_{1}\right)}=\frac{\frac{1}{C}\min \left\{\min \left\{L(d,{\theta}_{1});\frac{1}{L(d,{\theta}_{1})}\right\}:d\in {A}^{c}\right\}}{\frac{1}{C}\min \left\{\min \left\{L(d,{\theta}_{1});\frac{1}{L(d,{\theta}_{1})}\right\}:d\in {B}^{c}\right\}}\le 1\le \frac{\min \left\{\max \left\{L(d,{\theta}_{2});\frac{1}{L(d,{\theta}_{2})}\right\}:d\in A\right\}}{\min \left\{\max \left\{L(d,{\theta}_{2});\frac{1}{L(d,{\theta}_{2})}\right\}:d\in B\right\}}=\frac{{\Delta}_{A}\left({\theta}_{2}\right)}{{\Delta}_{B}\left({\theta}_{2}\right)}.$$

## References

- Hommel, G.; Bretz, F. Aesthetics and power considerations in multiple testing—A contradiction? Biom. J. **2008**, 20, 657–666.
- Shaffer, J.P. Multiple hypothesis testing. Ann. Rev. Psychol. **1995**, 46, 561–584.
- Hochberg, Y.; Tamhane, A.C. Multiple Comparison Procedures; Wiley: New York, NY, USA, 1987.
- Farcomeni, A. A review of modern multiple hypothesis testing, with particular attention to the false discovery proportion. Stat. Methods Med. Res. **2008**, 17, 347–388.
- Schervish, M.J. Theory of Statistics; Springer: New York, NY, USA, 1997.
- Gabriel, K.R. Simultaneous test procedures—Some theory of multiple comparisons. Ann. Math. Stat. **1969**, 41, 224–250.
- Lehmann, E.L. A theory of some multiple decision problems, II. Ann. Math. Stat. **1957**, 28, 547–572.
- Izbicki, R.; Esteves, L.G. Logical Consistency in Simultaneous Statistical Test Procedures. Log. J. IGPL **2015**.
- Pereira, C.A.B.; Stern, J.M.; Wechsler, S. Can a significance test be genuinely Bayesian? Bayesian Anal. **2008**, 3, 79–100.
- Stern, J.M. Constructive verification, empirical induction and falibilist deduction: A threefold contrast. Information **2001**, 2, 635–650.
- Pereira, C.A.B.; Stern, J.M. Evidence and credibility: Full Bayesian significance test for precise hypotheses. Entropy **1999**, 1, 99–110.
- Lin, S.K.; Chen, C.K.; Ball, D.; Liu, H.C.; Loh, E.W. Gender-specific contribution of the GABAA subunit genes on 5q33 in methamphetamine use disorder. Pharm. J. **2003**, 3, 349–355.
- Izbicki, R.; Fossaluza, V.; Hounie, A.G.; Nakano, E.Y.; Pereira, C.A. Testing allele homogeneity: The problem of nested hypotheses. BMC Genet. **2012**, 13.
- Silva, G.M. Propriedades Lógicas de Classes de Testes de Hipóteses. Ph.D. Thesis, University of São Paulo, São Paulo, Brazil, 2014.
- Evans, M. Measuring Statistical Evidence Using Relative Belief; Chapman & Hall/CRC: London, UK, 2015.
- Marcus, R.; Peritz, E.; Gabriel, K.R. On closed testing procedures with special reference to ordered analysis of variance. Biometrika **1976**, 63, 655–660.
- Sonnemann, E. General solutions to multiple testing problems. Biom. J. **2008**, 50, 641–656.
- Lavine, M.; Schervish, M.J. Bayes Factors: What they are and what they are not. Am. Stat. **1999**, 53, 119–122.
- Kneale, W.; Kneale, M. The Development of Logic; Oxford University Press: Oxford, UK, 1962.
- DeGroot, M.H. Optimal Statistical Decisions; McGraw-Hill: New York, NY, USA, 1970.
- Finner, H.; Strassburger, K. The partitioning principle: A powerful tool in multiple decision theory. Ann. Stat. **2002**, 30, 1194–1213.
- Aitchison, J. Confidence-region tests. J. R. Stat. Soc. Ser. B **1964**, 26, 462–476.
- Darwiche, A.Y.; Ginsberg, M.L. A Symbolic Generalization of Probability Theory. In Proceedings of the Tenth National Conference on Artificial Intelligence, AAAI-92, San Jose, CA, USA, 12–16 July 1992.
- Evans, M.; Jang, G.H. Inferences from Prior-Based Loss Functions. 2011; arXiv:1104.3258.
- Berger, J.O. In defense of the likelihood principle: Axiomatics and coherency. Bayesian Stat. **1985**, 2, 33–66.
- Madruga, M.R.; Esteves, L.G.; Wechsler, S. On the bayesianity of Pereira–Stern tests. Test **2001**, 10, 291–299.
- Ripley, B.D. Pattern Recognition and Neural Networks; Cambridge University Press: Cambridge, UK, 1996.
- Izbicki, R. Classes de Testes de Hipóteses. Ph.D. Thesis, University of São Paulo, São Paulo, Brazil, 2010.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Da Silva, G.M.; Esteves, L.G.; Fossaluza, V.; Izbicki, R.; Wechsler, S. A Bayesian Decision-Theoretic Approach to Logically-Consistent Hypothesis Testing. *Entropy* **2015**, *17*, 6534–6559. https://doi.org/10.3390/e17106534