# On Falsifiable Statistical Hypotheses

## Abstract


## 1. Introduction

Statistical falsification, Gelman and Shalizi suggest, is all but deductive.1 But how extreme, exactly, does a p-Value have to be for a test to count as a falsification? Popper was loath to draw the line at any particular value: “It is fairly clear that this ‘practical falsification’ can be obtained only through a methodological decision to regard highly improbable events as ruled out …But with what right can they be so regarded? Where are we to draw the line? Where does this ‘high improbability’ begin?” ([3], p. 182). The problem of when to consider a statistical hypothesis to be falsified has engendered a significant literature [4,5,6], but no universally accepted answer. As Gelman and Shalizi put it: “Extreme p-Values indicate that the data violate regularities implied by the model, or approach doing so. If these were strict violations of deterministic implications, we could just apply modus tollens …as it is, we nonetheless have evidence and probabilities. Our view of model checking, then, is firmly in the long hypothetico-deductive tradition, running from Popper (1934/1959) back through Bernard (1865/1927) and beyond (Laudan, 1981)” [1].

## 2. In Search of Statistical Falsifiability

- Error Avoidance: Output conclusions are true.

- Monotonicity: Logically stronger inputs yield logically stronger conclusions.

- Limiting Convergence: The method converges to ¬H iff H is false.

- α-Error Avoidance: For every sample size, the objective chance that the output conclusion is false is not higher than $\alpha $.

- **F1.** Hypothesis H is falsifiable iff there is a monotonic, error avoiding method M that falsifies H in the limit;
- **F1.5.** Hypothesis H is falsifiable iff there is a method M that falsifies H in the limit, and for every $\alpha >0$, M is $\alpha $-error avoiding;
- **F2.** Hypothesis H is falsifiable iff for every $\alpha >0$ there is an $\alpha $-error avoiding method that falsifies H in the limit.
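The deductive benchmark behind F1 can be made concrete with a toy sketch (hypothetical Python, not from the paper): a method that falsifies “every observed raven is black” by concluding not-H exactly when a counterexample has appeared in the sample so far. All names are illustrative.

```python
# Hypothetical sketch of a deductive falsifier in the sense of F1.
# H = "every raven in the data stream is black". The method outputs
# "not-H" as soon as a counterexample appears, and the trivial
# conclusion W (suspension of judgement) otherwise.

def falsifier(sample):
    """Monotonic, error avoiding method: reject H iff a counterexample
    has been observed in the sample seen so far."""
    return "not-H" if any(raven != "black" for raven in sample) else "W"

# Error avoidance: on data consistent with H, the method never errs.
assert falsifier(["black"] * 10) == "W"

# Monotonicity and limiting convergence: once a non-black raven
# appears, the method concludes not-H at every later sample size.
stream = ["black", "black", "white", "black"]
verdicts = [falsifier(stream[:n]) for n in range(1, len(stream) + 1)]
assert verdicts == ["W", "W", "not-H", "not-H"]
```

Because the method rejects only on a strict counterexample, it never errs while H is true, and its conclusions can only strengthen as the sample grows.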

- Monotonicity in chance: If H is false, then the objective chance of rejecting H is strictly increasing with sample size.

- α-Monotonicity in chance: If H is false, then for any sample sizes ${n}_{1}<{n}_{2}$, the objective chance of rejecting H decreases by no more than $\alpha $.

- **F3.** Hypothesis H is falsifiable iff for every $\alpha >0$ there is an $\alpha $-error avoiding and $\alpha $-monotonic method that falsifies H in the limit.
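The statistical analogue of these demands can be illustrated with an exact one-sided binomial test (a sketch, not the paper's construction; the hypothesis H: p ≤ 1/2 and the level α = 0.05 are illustrative choices):

```python
from math import comb

def reject(k, n, alpha=0.05):
    # Exact one-sided binomial test of H: "the coin is not head-biased"
    # (p <= 1/2): reject when the null tail probability P_{1/2}(X >= k)
    # falls below alpha.
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n < alpha

def power(p, n, alpha=0.05):
    # Objective chance that the method rejects H at sample size n
    # when the true chance of heads is p.
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n + 1) if reject(k, n, alpha))

# alpha-error avoidance: at every sample size, the chance of falsely
# rejecting H in the worst case (p = 1/2) is bounded by alpha.
assert all(power(0.5, n) <= 0.05 for n in (10, 30, 100))

# Limiting convergence: when H is false (say p = 0.7), the chance of
# rejection approaches 1 as the sample size grows.
assert power(0.7, 200) > 0.99 > power(0.7, 20)
```

Note that nothing in this construction guarantees α-monotonicity; as Section 6 and Figure 2 discuss, the power of such exact tests is not monotonic in sample size.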

## 3. The Topology of Inquiry

- **C1.** If ${H}_{1},{H}_{2}$ are falsifiable, then so is their disjunction, ${H}_{1}\cup {H}_{2}$;
- **C2.** If $\mathcal{H}$ is a (potentially infinite) collection of falsifiable propositions, then their conjunction, $\cap \mathcal{H}$, is also falsifiable.

- **O1.** If ${H}_{1}$ and ${H}_{2}$ are verifiable, then so is their conjunction, ${H}_{1}\cap {H}_{2}$;
- **O2.** If $\mathcal{H}$ is a (potentially infinite) collection of verifiable propositions, then their disjunction, $\cup \mathcal{H}$, is also verifiable.

## 4. The Statistical Setting

#### 4.1. Models, Measures and Samples

**Example 1.**

**Example 2.**

#### 4.2. Statistical Tests

#### 4.3. The Weak Topology

## 5. Statistical Falsifiability

- BndErr. ${\mu}^{n}\left[{\lambda}_{n}^{-1}\left(W\right)\right]\ge 1-\alpha $, if $\mu \in H$;
- LimCon. ${\mu}^{n}\left[{\lambda}_{n}^{-1}\left({H}^{\mathsf{c}}\right)\right]\stackrel{n}{\longrightarrow}1$, if $\mu \in {H}^{\mathsf{c}}$.

- VanErr. ${\mu}^{n}\left[{\lambda}_{n}^{-1}\left(W\right)\right]\stackrel{n}{\longrightarrow}1$, if $\mu \in H$.
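The contrast between BndErr and VanErr can be checked numerically in the binomial setting (a hypothetical sketch; the hypothesis H: p ≤ 1/2 and the shrinking levels α_n = 1/n are illustrative choices, not the paper's): by letting the significance level vanish with sample size, the chance of false rejection vanishes while the chance of correctly rejecting a false H still tends to 1.

```python
from math import comb

def null_tail(k, n):
    # P_{1/2}(X >= k): upper-tail probability under a fair coin,
    # the worst case within H: p <= 1/2.
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

def rejection_chance(p, n, alpha):
    # Chance that the level-alpha test lambda_n outputs not-H when the
    # true bias is p; the rejection region is the smallest upper tail
    # whose null probability falls below alpha.
    c = next(k for k in range(n + 2) if null_tail(k, n) < alpha)
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(c, n + 1))

# VanErr: with alpha_n = 1/n, the chance of error when mu is in H
# (p = 1/2) is below 1/n, so the chance of the correct output W
# tends to 1 as n grows.
assert rejection_chance(0.5, 100, alpha=1 / 100) <= 1 / 100

# LimCon: when mu is outside H (p = 0.7), the chance of rejection
# still tends to 1, despite the shrinking significance level.
assert rejection_chance(0.7, 200, alpha=1 / 200) > \
       rejection_chance(0.7, 50, alpha=1 / 50)
```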

**Theorem 1** (**Fundamental Characterization Theorem**). Suppose that the statistical setup $(W,\mathsf{\Omega},\mathcal{F})$ is feasibly based. Then, for $H\subseteq W,$ the following are equivalent:

- H is α-falsifiable in chance for some $\alpha >0;$
- H is falsifiable in chance;
- H is closed in the weak topology on W.

## 6. Monotonic Falsifiability

- Mon. ${\mu}^{{n}_{2}}\left[{\lambda}_{{n}_{2}}^{-1}\left({H}^{\mathsf{c}}\right)\right]>{\mu}^{{n}_{1}}\left[{\lambda}_{{n}_{1}}^{-1}\left({H}^{\mathsf{c}}\right)\right].$

**Lemma 1.**

**Proof of Lemma 1.**

- **α-Mon.** ${\mu}^{{n}_{2}}\left[{\lambda}_{{n}_{2}}^{-1}\left({H}^{\mathsf{c}}\right)\right]+\alpha >{\mu}^{{n}_{1}}\left[{\lambda}_{{n}_{1}}^{-1}\left({H}^{\mathsf{c}}\right)\right]$.

- MVanErr. For all $\mu \in H$, there exists a sequence $\left({\alpha}_{n}\right)$ such that each ${\alpha}_{n}\le \alpha $, ${\alpha}_{n}\downarrow 0,$ and ${\mu}^{n}\left[{\lambda}_{n}^{-1}\left({H}^{\mathsf{c}}\right)\right]\le {\alpha}_{n}$;
- LimCon. ${\mu}^{n}\left[{\lambda}_{n}^{-1}\left({H}^{\mathsf{c}}\right)\right]\stackrel{n}{\longrightarrow}1$, if $\mu \in {H}^{\mathsf{c}}$;
- α-Mon. ${\mu}^{{n}_{2}}\left[{\lambda}_{{n}_{2}}^{-1}\left({H}^{\mathsf{c}}\right)\right]+\alpha >{\mu}^{{n}_{1}}\left[{\lambda}_{{n}_{1}}^{-1}\left({H}^{\mathsf{c}}\right)\right]$, for all $\mu \in W$, if ${n}_{1}<{n}_{2}$.

**Theorem 2.** The following are equivalent:

- H is α-falsifiable in chance for some $\alpha >0$;
- H is statistically falsifiable;
- H is monotonically falsifiable;
- H is closed in the weak topology on W.

## 7. Conclusions: Falsifiability and Induction

## Funding

## Acknowledgments

## Conflicts of Interest

## Appendix A. Proof of Theorem 2

- σ-BndErr. ${\sum}_{n=1}^{\infty}{\mu}^{n}\left[{\lambda}_{n}^{-1}\left({H}^{\mathsf{c}}\right)\right]\le \alpha $, if $\mu \in H$.

- SVanErr. ${\mu}^{\infty}\left[\liminf {\lambda}_{n}^{-1}\left(W\right)\right]=1,$ if $\mu \in H$.

- σ-BndErr. ${\sum}_{n=1}^{\infty}{\mu}^{n}\left[{\lambda}_{n}^{-1}\left({H}^{\mathsf{c}}\right)\right]\le \alpha $, if $\mu \in H,$ and
- sLimCon. ${\mu}^{\infty}\left[\liminf {\lambda}_{n}^{-1}\left({H}^{\mathsf{c}}\right)\right]=1$, if $\mu \in {H}^{\mathsf{c}}$.

**Theorem A1.** The following are equivalent:

- H is α-verifiable in chance for some $\alpha >0$;
- H is monotonically verifiable;
- H is almost surely verifiable;
- H is open in the weak topology on W.

**Lemma A1.**

**Proof of Lemma A1.**

| $\left({\psi}_{n}^{1}\right)$ | $\left({\psi}_{n}^{2}\right)$ | $\left({\psi}_{n}^{3}\right)$ | $\left({\psi}_{n}^{4}\right)$ | ⋯ |
| --- | --- | --- | --- | --- |
| ${\omega}_{1}$ | | | | |
| ${\omega}_{2}$ | ${\omega}_{3}$ | | | |
| ${\omega}_{4}$ | ${\omega}_{5}$ | ${\omega}_{6}$ | | |
| ${\omega}_{7}$ | ${\omega}_{8}$ | ${\omega}_{9}$ | ${\omega}_{10}$ | |
| ${\omega}_{11}$ | ${\omega}_{12}$ | ${\omega}_{13}$ | ${\omega}_{14}$ | ⋯ |
| ⋮ | ⋮ | ⋮ | ⋮ | ⋯ |

**Lemma A2.**

**Proof of Lemma A2.**

**Lemma A3.**

**Proof of Lemma A3.**

**Lemma A4.**

**Proof of Lemma A4.**

## Notes

1. Some frequentists go even further. In their response to the American Statistical Association’s controversial statement on p-Values, Ionides et al. [2] identify frequentist method with deduction, and Bayesian method with induction.

2. Albert [10] answers Redhead’s challenge by suggesting that we drop the fiction that observed variables are continuous. Since we would like our results to apply directly to problems as formulated in the sciences, where continuous variables are commonplace, we do not adopt Albert’s solution. However, we assimilate this insight in Section 4.1 and Section 4.2 by insisting on “feasible” test methods, i.e., methods whose verdicts depend only on a discretization of the data, even if the underlying variables are continuous.

3.

4. Kvanvig [15] makes a parallel point for epistemology: “…it is far from obvious that …the best way of structuring a fruitful cognitive life is to concentrate on individual time-slices, and whether knowledge or justification is possessed at each time-slice, and let the totality …get generated by cementing together these time-slices”. Laudan [16] makes a similar point in philosophy of science: “Progress necessarily involves a …process through time. Rationality …has tended to be viewed as an atemporal concept …most writers see progress as nothing more than the temporal projection of individual rational choices …we may be able to learn something by inverting the presumed dependence of progress on rationality”.

5. We adopt here the idiom of “negative” falsificationism, according to which one should suspend judgement on a hypothesis unless it is falsified. “Positive” falsificationism, on the other hand, endorses belief in hypotheses which have passed an appropriate test (see Musgrave [17], Section 6). If we adopted the positive formulation, we could no longer speak of error avoidance tout court, but would have to rephrase the norm in terms of avoidance of errors of Type I (false rejection). Not much hinges on the choice, since the set of falsifiable hypotheses remains the same.

6. Here an astute reader may object: how do we know that observation must turn up an elusive non-black raven if one exists? We might rephrase the hypothesis as ‘all ravens that will ever be observed are black.’ We might simply appeal to the background assumptions of inquiry: the method must converge to not-H not in all those possibilities in which H is false, but in all possibilities in which H is false and the background assumptions of inquiry are true. A more careful answer might require a falsification method to converge to not-H in a “maximal set” of possibilities in which H is false, e.g., in all those possibilities in which convergence is compatible with error avoidance. See Lin [18] for a rigorous development of this idea.

7. The definitions in this section are intentionally rather schematic. Hopefully this will aid, rather than hinder, comprehension. All definitions are formalized in the following.

8. The reader may object: surely no method can be expected to converge to not-H in all the possibilities in which H is false. For example, what if H is false because the assumption of i.i.d. sampling is violated? A more careful formulation requires that the method converge to not-H in all those worlds in which H is false but the background assumptions of inquiry are true—if it is statistical falsification which is at issue, then some kind of statistical regularity must be taken for granted.

9. The following are only proto-definitions, leaving many things unspecified. The notion of verification in the limit is here a free parameter: compatible notions include convergence in propositional information, convergence in probability and almost sure convergence. The notion of $\alpha $-error avoidance is also parametric: it can mean that the chance of error at any sample size is bounded by $\alpha $, or that the sum of the chances of error over all sample sizes is bounded by $\alpha $. Each of these concepts will be developed in detail in the following. These parametric details are omitted here to expose the essential differences between the three concepts.

10. For a contrived example, suppose it is known that random samples are distributed uniformly on the interval $(\mu -1/2,\mu +1/2)$, for some unknown parameter $\mu $. Although samples may land outside the interval, they only do so with probability zero. Let H be the hypothesis that the true parameter is $\mu $. Let M be the method that concludes not-H if some sample lands outside of the interval $(\mu -1/2,\mu +1/2)$, and draws no non-trivial conclusion otherwise. Then, M is a deductive falsifier of the second type, although not of the first. Clearly, every falsifier of the first type is a falsifier of the second type.
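The contrived method M of this footnote can be sketched in a few lines of Python (hypothetical code; the particular parameter values `MU` and `H_MU` are illustrative, not from the paper):

```python
import random

MU = 3.0     # true parameter, unknown to the method (illustrative value)
H_MU = 0.0   # H says the parameter equals H_MU (illustrative value)

def m(samples):
    # Sketch of the method M: conclude not-H if some sample lands
    # outside the interval (H_MU - 1/2, H_MU + 1/2); otherwise draw
    # only the trivial conclusion W.
    if any(not (H_MU - 0.5 < x < H_MU + 0.5) for x in samples):
        return "not-H"
    return "W"

# When H is false (true parameter 3.0), every sample lies in (2.5, 3.5),
# outside H's interval, so M rejects at once; when H is true, samples
# escape (-0.5, 0.5) only with probability zero, so M errs at most on
# a null event.
samples = [random.uniform(MU - 0.5, MU + 0.5) for _ in range(5)]
assert m(samples) == "not-H"
assert m([0.1, -0.2, 0.4]) == "W"
```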

11. A topological space $\mathcal{T}$ is a structure $\langle W,\mathcal{V}\rangle $ where W is a set, and $\mathcal{V}$ is a collection of subsets of W closed under conjunction and finite disjunction. The elements of $\mathcal{V}$ are called the closed sets of $\mathcal{T}$. In our case W is a set of epistemic possibilities, or possible worlds, and $\mathcal{V}$ is the set of falsifiable propositions over W. Although we define a topology here in terms of closed sets, we could just as easily have done it in terms of open sets, since the complement of every closed set is open.

12.

13. We use the notation $\mathsf{bdry}A$ to denote the set of boundary points of A.

14. The acceptance region is ${\psi}^{-1}\left(W\right)$, rather than ${\psi}^{-1}\left(H\right)$, because failing to reject H licenses only suspension of belief, i.e., the trivial inference W.

15. Every topology gives rise to a kind of convergence by setting ${\mu}_{n}\Rightarrow \mu $ iff for every open E containing $\mu $ there is N such that ${\mu}_{n}\in E$ for $n\ge N$. Every kind of convergence gives rise to a topology by letting $E\subseteq W$ be open iff for each sequence ${\mu}_{n}\Rightarrow \mu \in E$ there is N such that ${\mu}_{n}\in E$ for $n\ge N$. The notion of convergence arising from a topology defined in this way will agree with the original notion of convergence.

16. For example, if we are interpolating points with polynomials, linear, quadratic, cubic, ... is such a collection, when the hypotheses are understood in the strict sense. Although the individual hypotheses are not falsifiable—if the truth is linear, quadratic will never be refuted—unions of initial segments are—if the generating polynomial is of degree greater than 2, we will get a sign. Many statistical model selection problems also fit this description.

17.

18. Niiniluoto [43] admits: “the problem of estimating verisimilitude is neither more nor less difficult than the traditional problem of induction”.

19. This entailment holds only for countably additive measures, to which we restrict attention.

## References

- Gelman, A.; Shalizi, C.R. Philosophy and the practice of Bayesian statistics. Br. J. Math. Stat. Psychol. **2013**, 66, 8–38.
- Ionides, E.L.; Giessing, A.; Ritov, Y.; Page, S.E. Response to the ASA’s Statement on p-Values: Context, Process, and Purpose. Am. Stat. **2017**, 71, 88–89.
- Popper, K.R. The Logic of Scientific Discovery, 1st ed.; Hutchinson: London, UK, 1959.
- Gillies, D.A. A Falsifying Rule for Probability Statements. Br. J. Philos. Sci. **1971**, 22, 231–261.
- Albert, M. Die Falsifikation Statistischer Hypothesen. J. Gen. Philos. Sci. **1992**, 23, 1–32.
- Mayo, D.G.; Spanos, A. Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. Br. J. Philos. Sci. **2006**, 57, 323–357.
- Fisher, R.A. Statistical Methods and Scientific Inference; Oliver and Boyd: Edinburgh, UK, 1959.
- Neyman, J. Lectures and Conferences on Mathematical Statistics and Probability; Graduate School, US Department of Agriculture: Washington, DC, USA, 1952.
- Redhead, M. On Neyman’s paradox and the theory of statistical tests. Br. J. Philos. Sci. **1974**, 25, 265–271.
- Albert, M. Resolving Neyman’s Paradox. Br. J. Philos. Sci. **2020**, 53, 69–76.
- Neyman, J.; Pearson, E.S. IX. On the problem of the most efficient tests of statistical hypotheses. Philos. Trans. R. Soc. Lond. Ser. A Contain. Pap. Math. Phys. Character **1933**, 231, 289–337.
- Casella, G.; Berger, R.L. Statistical Inference, 2nd ed.; Duxbury: Pacific Grove, CA, USA, 2002.
- Shah, R.D.; Peters, J. The hardness of conditional independence testing and the generalised covariance measure. Ann. Stat. **2020**, 48, 1514–1538.
- Neykov, M.; Balakrishnan, S.; Wasserman, L. Minimax optimal conditional independence testing. Ann. Stat. **2021**, 49, 2151–2177.
- Kvanvig, J.L. The Intellectual Virtues and the Life of the Mind: On the Place of Virtues in Contemporary Epistemology; Rowman and Littlefield: Savage, MD, USA, 1992.
- Laudan, L. Progress and its Problems: Towards a Theory of Scientific Growth; University of California Press: Berkeley, CA, USA, 1978.
- Musgrave, A. Critical Rationalism. In The Power of Argumentation; Poznań Studies in the Philosophy of the Sciences and the Humanities; Suárez-Iñiguez, E., Ed.; Brill: Leiden, The Netherlands, 2007; pp. 171–211.
- Lin, H. Modes of convergence to the truth: Steps toward a better epistemology of induction. Rev. Symb. Log. **2022**, forthcoming.
- Bar-Hillel, Y.; Carnap, R. Semantic Information. Br. J. Philos. Sci. **1953**, 4, 147–157.
- Floridi, L. Is semantic information meaningful data? Philos. Phenomenol. Res. **2005**, 70, 351–370.
- Floridi, L. The Philosophy of Information; Oxford University Press: Oxford, UK, 2011.
- Niranjan, S.; Frenzel, J.F. A comparison of fault-tolerant state machine architectures for space-borne electronics. IEEE Trans. Reliab. **1996**, 45, 109–113.
- Chernick, M.R.; Liu, C.Y. The saw-toothed behavior of power versus sample size and software solutions: Single binomial proportion using exact methods. Am. Stat. **2002**, 56, 149–155.
- Schuette, P.; Rochester, C.G.; Jackson, M. Power and sample size for safety registries: New methods using confidence intervals and saw-tooth power curves. In Proceedings of the 8th International R User Conference, Nashville, TN, USA, 12–15 June 2012.
- Musonda, P. The Self-Controlled Case Series Method: Performance and Design in Studies of Vaccine Safety. Ph.D. Thesis, Open University, Milton Keynes, UK, 2006.
- Schaarschmidt, F. Experimental design for one-sided confidence intervals or hypothesis tests in binomial group testing. Commun. Biometry Crop. Sci. **2007**, 2, 32–40.
- Genin, K.; Mayo-Wilson, C. Statistical Decidability in Linear, Non-Gaussian Causal Models. In Proceedings of the Causal Discovery & Causality-Inspired Machine Learning Workshop, NeurIPS, Virtual, 11–12 December 2020.
- Abramsky, S. Domain Theory and the Logic of Observable Properties. Ph.D. Thesis, University of London, London, UK, 1987.
- Vickers, S. Topology Via Logic; Cambridge University Press: Cambridge, UK, 1996.
- Kelly, K.T. The Logic of Reliable Inquiry; Oxford University Press: Oxford, UK, 1996.
- de Brecht, M.; Yamamoto, A. Interpreting learners as realizers for (${\Sigma}_{2}^{0}$)-measurable functions. Spec. Interest Group Fundam. Probl. Artif. Intell. (SIG-FPAI) **2009**, 74, 39–44.
- Genin, K.; Kelly, K.T. Theory Choice, Theory Change, and Inductive Truth-Conduciveness. In Proceedings of the Fifteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Pittsburgh, PA, USA, 4–6 June 2015; pp. 111–121.
- Baltag, A.; Gierasimczuk, N.; Smets, S. On the Solvability of Inductive Problems: A Study in Epistemic Topology. Electron. Proc. Theor. Comput. Sci. **2016**, 215, 81–98.
- Genin, K.; Kelly, K.T. The Topology of Statistical Verifiability. In Proceedings of the Sixteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Liverpool, UK, 24–26 July 2017; pp. 236–250.
- Lin, H. The Hard Problem of Theory Choice: A Case Study on Causal Inference and Its Faithfulness Assumption. Philos. Sci. **2019**, 86, 967–980.
- Lin, H.; Zhang, J. On Learning Causal Structures from Non-Experimental Data without Any Faithfulness Assumption. In Proceedings of Algorithmic Learning Theory, PMLR, San Diego, CA, USA, 8–11 February 2020; pp. 554–582.
- Saeki, S. A proof of the existence of infinite product probability measures. Am. Math. Mon. **1996**, 103, 682–683.
- Billingsley, P. Convergence of Probability Measures, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 1999.
- Genin, K. Statistical Undecidability in Linear, Non-Gaussian Models in the Presence of Latent Confounders. In Proceedings of Advances in Neural Information Processing Systems 34, Virtual, 6–12 December 2020.
- Miller, D. Popper’s qualitative theory of verisimilitude. Br. J. Philos. Sci. **1974**, 25, 166–177.
- Tichý, P. On Popper’s definitions of verisimilitude. Br. J. Philos. Sci. **1974**, 25, 155–160.
- Oddie, G. Likeness to Truth; The University of Western Ontario Series in Philosophy of Science; Reidel: Dordrecht, The Netherlands, 1986; Volume 30.
- Niiniluoto, I. Truthlikeness; The Synthese Library; Reidel: Dordrecht, The Netherlands, 1987; Volume 185.
- Niiniluoto, I. Critical Scientific Realism; Oxford University Press: Oxford, UK, 1999.
- Genin, K. The Topology of Statistical Inquiry. Ph.D. Thesis, Carnegie Mellon University, Pittsburgh, PA, USA, 2018.
- Gilat, D. Monotonicity of a power function: An elementary probabilistic proof. Am. Stat. **1977**, 31, 91–93.

**Figure 1.** Pictured is a hierarchy of topological complexity and corresponding notions of methodological success. The set of all open sets is referred to as ${\mathsf{\Sigma}}_{1}^{0}$; the set of all closed sets as ${\mathsf{\Pi}}_{1}^{0}$; and the set of all clopen sets as ${\mathsf{\Delta}}_{1}^{0}={\mathsf{\Sigma}}_{1}^{0}\mathsf{\cap}{\mathsf{\Pi}}_{1}^{0}$. Depending on whether the ${\mathsf{\Sigma}}_{1}^{0}$ sets are propositions of type V1 or V2, we get the logical (**left**) and statistical (**right**) hierarchies. Sets of greater complexity are built out of ${\mathsf{\Sigma}}_{1}^{0}$ sets by logical operations, e.g., ${\mathsf{\Sigma}}_{2}^{0}$ sets are countable unions of locally closed sets. Inclusion relations between notions of complexity are also indicated. For more on the notions of methodological success characterized by higher levels of Borel complexity, see Genin and Kelly [34].

**Figure 2.** Diachronic plot of the power of a typical test of the null hypothesis that a coin is not head-biased, when p(H) = 0.775. The plot exhibits the characteristic “saw-tooth” shape identified by Chernick and Liu [23]. Note that the drops in power are significant, e.g., >0.07 between sample sizes 31 and 33.
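The saw-tooth can be reproduced with a short stdlib Python sketch (hypothetical; the level α = 0.05 is an assumption, since the caption does not state it):

```python
from math import comb

def exact_power(n, p=0.775, alpha=0.05):
    # Power at sample size n of the exact one-sided binomial test of the
    # null "the coin is not head-biased" (p <= 1/2) at level alpha: the
    # rejection region is the smallest upper tail with null probability
    # below alpha.
    c = next(k for k in range(n + 2)
             if sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n < alpha)
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(c, n + 1))

betas = [exact_power(n) for n in range(10, 40)]
# Power is not monotonic in sample size: the curve drops whenever the
# discrete critical value jumps, producing the saw-tooth shape.
assert any(b2 < b1 for b1, b2 in zip(betas, betas[1:]))
```

The drops arise because the critical value can only move in whole-number steps, so increasing n occasionally makes the test strictly more conservative.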

**Figure A1.**The basic idea of the proof is encapsulated in the figure. For any sample size n, we construct a step function ${\varphi}_{n}$ that “almost dominates” the power function ${\beta}_{n}.$ The step function dominates ${\beta}_{n}$ for all $\theta $ except those for which ${\beta}_{n}\left(\theta \right)$ is less than $\alpha $, or close to 1. Then, since the set of steps is finite, and the original test satisfies LimCon, there must be a sample size $\sigma \left(n\right)$ such that the power function ${\beta}_{\sigma \left(n\right)}$ strictly dominates the step function. Since ${\beta}_{\sigma \left(n\right)}$ dominates the step function, ${\beta}_{\sigma \left(n\right)}\left(\theta \right)$ can only be less than ${\beta}_{n}\left(\theta \right)$ if ${\beta}_{n}\left(\theta \right)$ is less than $\alpha ,$ or they are both close to 1. The loss of power from n to $\sigma \left(n\right)$ is thereby bounded by $\alpha .$ Iterating this process we get a sequence of “good” sample sizes $n,\sigma \left(n\right),{\sigma}^{2}\left(n\right),\dots $ such that the power is “almost increasing”. It remains only to interpolate the intermediate sample sizes with tests that throw out samples until they arrive at the nearest “good” sample size.


© 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Genin, K.
On Falsifiable Statistical Hypotheses. *Philosophies* **2022**, *7*, 40.
https://doi.org/10.3390/philosophies7020040
