Article

Some Dissimilarity Measures of Branching Processes and Optimal Decision Making in the Presence of Potential Pandemics

by Niels B. Kammerer 1 and Wolfgang Stummer 2,*
1 Königinstrasse 75, 80539 Munich, Germany
2 Department of Mathematics, University of Erlangen–Nürnberg, Cauerstrasse 11, 91058 Erlangen, Germany
* Author to whom correspondence should be addressed.
Entropy 2020, 22(8), 874; https://doi.org/10.3390/e22080874
Submission received: 26 June 2020 / Revised: 27 July 2020 / Accepted: 28 July 2020 / Published: 8 August 2020

Abstract:
We compute exact values, respectively bounds, of dissimilarity/distinguishability measures – in the sense of the Kullback-Leibler information distance (relative entropy) and some transforms of more general power divergences and Renyi divergences – between two competing discrete-time Galton-Watson branching processes with immigration (GWI) for which the offspring as well as the immigration (importation) is arbitrarily Poisson-distributed; in particular, we allow for an arbitrary type of extinction-concerning criticality and thus for non-stationarity. We apply this to optimal decision making in the context of the spread of potentially pandemic infectious diseases (such as the current COVID-19 pandemic), e.g., covering different levels of dangerousness and different kinds of intervention/mitigation strategies. Asymptotic distinguishability behaviour and diffusion limits are investigated, too.

Contents
1 Introduction
2 The Framework and Application Setups
  2.1 Process Setup
  2.2 Connections to Time Series of Counts
  2.3 Applicability to Epidemiology
  2.4 Information Measures
  2.5 Decision Making under Uncertainty
  2.6 Asymptotical Distinguishability
3 Detailed Recursive Analyses of Hellinger Integrals
  3.1 A First Basic Result
  3.2 Some Useful Facts for Deeper Analyses
  3.3 Detailed Analyses of the Exact Recursive Values, i.e., for the Cases (β_A, β_H, α_A, α_H) ∈ P_NI ∪ P_SP,1
  3.4 Some Preparatory Basic Facts for the Remaining Cases (β_A, β_H, α_A, α_H) ∈ P_SP \ P_SP,1
  3.5 Lower Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × ]0,1[
  3.6 Goals for Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × ]0,1[
  3.7 Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,2 × ]0,1[
  3.8 Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,3a × ]0,1[
  3.9 Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,3b × ]0,1[
  3.10 Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,3c × ]0,1[
  3.11 Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,4a × ]0,1[
  3.12 Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,4b × ]0,1[
  3.13 Concluding Remarks on Alternative Upper Bounds for all Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × ]0,1[
  3.14 Intermezzo 1: Application to Asymptotical Distinguishability
  3.15 Intermezzo 2: Application to Decision Making under Uncertainty
   3.15.1 Bayesian Decision Making
   3.15.2 Neyman-Pearson Testing
  3.16 Goals for Lower Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × (ℝ \ [0,1])
  3.17 Lower Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,2 × (ℝ \ [0,1])
  3.18 Lower Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,3a × (ℝ \ [0,1])
  3.19 Lower Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,3b × (ℝ \ [0,1])
  3.20 Lower Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,3c × (ℝ \ [0,1])
  3.21 Lower Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,4a × (ℝ \ [0,1])
  3.22 Lower Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,4b × (ℝ \ [0,1])
  3.23 Concluding Remarks on Alternative Lower Bounds for all Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × (ℝ \ [0,1])
  3.24 Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × (ℝ \ [0,1])
4 Power Divergences of Non-Kullback-Leibler-Information-Divergence Type
  4.1 A First Basic Result
  4.2 Detailed Analyses of the Exact Recursive Values of I_λ(·∥·), i.e., for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_NI ∪ P_SP,1) × (ℝ \ {0,1})
  4.3 Lower Bounds of I_λ(·∥·) for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × ]0,1[
  4.4 Upper Bounds of I_λ(·∥·) for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × ]0,1[
  4.5 Lower Bounds of I_λ(·∥·) for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × (ℝ \ [0,1])
  4.6 Upper Bounds of I_λ(·∥·) for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × (ℝ \ [0,1])
  4.7 Applications to Bayesian Decision Making
5 Kullback-Leibler Information Divergence (Relative Entropy)
  5.1 Exact Values Respectively Upper Bounds of I(·∥·)
  5.2 Lower Bounds of I(·∥·) for the Cases (β_A, β_H, α_A, α_H) ∈ P_SP \ P_SP,1
  5.3 Applications to Bayesian Decision Making
6 Explicit Closed-Form Bounds of Hellinger Integrals
  6.1 Principal Approach
  6.2 Explicit Closed-Form Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_NI ∪ P_SP,1) × (ℝ \ {0,1})
  6.3 Explicit Closed-Form Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × ]0,1[
  6.4 Explicit Closed-Form Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP \ P_SP,1) × (ℝ \ [0,1])
  6.5 Totally Explicit Closed-Form Bounds
  6.6 Closed-Form Bounds for Power Divergences of Non-Kullback-Leibler-Information-Divergence Type
  6.7 Applications to Decision Making
7 Hellinger Integrals and Power Divergences of Galton-Watson Type Diffusion Approximations
  7.1 Branching-Type Diffusion Approximations
  7.2 Bounds of Hellinger Integrals for Diffusion Approximations
  7.3 Bounds of Power Divergences for Diffusion Approximations
  7.4 Applications to Decision Making
A Proofs and Auxiliary Lemmas
  A.1 Proofs and Auxiliary Lemmas for Section 3
  A.2 Proofs and Auxiliary Lemmas for Section 5
  A.3 Proofs and Auxiliary Lemmas for Section 6
  A.4 Proofs and Auxiliary Lemmas for Section 7
References

1. Introduction

(This paper is a thoroughly revised, extended and retitled version of the preprint arXiv:1005.3758v1 of both authors.) Over the past twenty years, density-based divergences D(P, Q) – also known as (dis)similarity measures, directed distances, disparities, distinguishability measures, proximity measures – between probability distributions P and Q have turned out to be of substantial importance for decisive statistical tasks such as parameter estimation, testing for goodness-of-fit, Bayesian decision procedures, change-point detection, clustering, as well as for other research fields such as information theory, artificial intelligence, machine learning, signal processing (including image and speech processing), pattern recognition, econometrics, and statistical physics. For some comprehensive overviews on the divergence approach to statistics and probability, the reader is referred to the insightful books of e.g., Liese & Vajda [1], Read & Cressie [2], Vajda [3], Csiszár & Shields [4], Stummer [5], Pardo [6], Liese & Miescke [7], Basu et al. [8], Voinov et al. [9], the survey articles of e.g., Liese & Vajda [10], Vajda & van der Meulen [11], the structure-building papers of Stummer & Vajda [12], Kißlinger & Stummer [13] and Broniatowski & Stummer [14], and the references therein. Divergence-based bounds of minimal mean decision risks (e.g., Bayes risks in finance) can be found e.g., in Stummer & Vajda [15] and Stummer & Lao [16].
Amongst the above-mentioned dissimilarity measures, an important omnipresent subclass are the so-called f-divergences of Csiszár [17], Ali & Silvey [18] and Morimoto [19]; important special cases thereof are the total variation distance and the very frequently used λ-order power divergences I_λ(P, Q) (also known as alpha-entropies, Cressie-Read measures, Tsallis cross-entropies) with λ ∈ ℝ. The latter cover e.g., the very prominent Kullback-Leibler information divergence I_1(P, Q) (also called relative entropy), the (squared) Hellinger distance I_{1/2}(P, Q), as well as the Pearson chi-square divergence I_2(P, Q). It is well known that the power divergences can be built with the help of the λ-order Hellinger integrals H_λ(P, Q) (where e.g., the case λ = 1/2 corresponds to the well-known Bhattacharyya coefficient), which are information measures of interest in their own right and which are also the crucial ingredients of λ-order Renyi divergences R_λ(P, Q) (see e.g., Liese & Vajda [1], van Erven & Harremoes [20]); the case R_{1/2}(P, Q) corresponds to the well-known Bhattacharyya distance.
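Since all offspring and immigration laws in this paper are Poisson, the one-step building blocks of these measures reduce to Hellinger integrals between two Poisson distributions, for which an elementary computation gives the closed form H_λ = exp(a^λ b^{1−λ} − λa − (1−λ)b) for means a and b. The following sketch (our own illustration; all function names are ours, not the paper's) checks this numerically and derives the induced power and Renyi divergences under one common normalization:

```python
import math

def poisson_logpmf(k, m):
    # log of the Poisson(m) probability mass at k (log scale avoids overflow)
    return -m + k * math.log(m) - math.lgamma(k + 1)

def hellinger_integral(a, b, lam, kmax=400):
    # H_lam(P, Q) = sum_k p_k^lam * q_k^(1 - lam), truncated at kmax terms
    return sum(math.exp(lam * poisson_logpmf(k, a) + (1.0 - lam) * poisson_logpmf(k, b))
               for k in range(kmax))

def hellinger_closed_form(a, b, lam):
    # exact value for two Poisson laws with means a and b
    return math.exp(a**lam * b**(1.0 - lam) - lam * a - (1.0 - lam) * b)

a, b, lam = 2.0, 3.5, 0.5
H = hellinger_integral(a, b, lam)       # lam = 1/2: the Bhattacharyya coefficient
I = (1.0 - H) / (lam * (1.0 - lam))     # power divergence built from H (for lam not in {0, 1})
R = math.log(H) / (lam * (lam - 1.0))   # Renyi divergence (in one common normalization)
assert abs(H - hellinger_closed_form(a, b, lam)) < 1e-10
```

Different sources normalize I_λ and R_λ slightly differently; only the Hellinger integral itself is convention-free here.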
The above-mentioned information/dissimilarity measures have also been investigated in non-static, time-dynamic frameworks, such as for various different contexts of stochastic processes like processes with independent increments (see e.g., Newman [21], Liese [22], Memin & Shiryaev [23], Jacod & Shiryaev [24], Liese & Vajda [1], Linkov & Shevlyakov [25]), Poisson point processes (see e.g., Liese [26], Jacod & Shiryaev [24], Liese & Vajda [1]), diffusion processes and solutions of stochastic differential equations with continuous paths (see e.g., Kabanov et al. [27], Liese [28], Jacod & Shiryaev [24], Liese & Vajda [1], Vajda [29], Stummer [30,31,32], Stummer & Vajda [15]), and generalized binomial processes (see e.g., Stummer & Lao [16]); further related literature can be found e.g., in the references of the aforementioned papers and books.
Another important class of time-dynamic models is given by discrete-time integer-valued branching processes, in particular (Bienaymé-)Galton-Watson processes without immigration (GW) respectively with immigration (resp. importation, invasion) (GWI), which have numerous applications in biotechnology, population genetics, internet traffic research, clinical trials, asset price modelling, derivative pricing, and many others. As far as important terminology is concerned, we subsume both models under the abbreviation GW(I), and write simply GWI in case GW appears as a parameter special case of GWI; recall that a GW(I) is called subcritical, critical, respectively supercritical if its offspring mean is less than 1, equal to 1, respectively larger than 1.
For applications of GW(I) in epidemiology, see e.g., the works of Bartoszynski [33], Ludwig [34], Becker [35,36], Metz [37], Heyde [38], von Bahr & Martin-Löf [39], Ball [40], Jacob [41], Barbour & Reinert [42], and Section 1.2 of Britton & Pardoux [43]; for more details see Section 2.3 below.
For connections of GW(I) to time series of counts including GLM models, see e.g., Dion, Gauthier & Latour [44], Grunwald et al. [45], Kedem & Fokianos [46], Held, Höhle & Hofmann [47], and Weiß [48]; a more comprehensive discussion can be found in Section 2.2 below.
As far as the combined study of information measures and GW processes is concerned, let us first mention that (transforms of) power divergences have been used for supercritical Galton-Watson processes without immigration for instance as follows: Feigin & Passy [49] study the problem of finding an offspring distribution which is closest (in terms of a relative-entropy-type distance) to the original offspring distribution and under which ultimate extinction is certain. Furthermore, Mordecki [50] gives an equivalent characterization of the stable convergence of the corresponding log-likelihood process to a mixed Gaussian limit, in terms of conditions on Hellinger integrals of the involved offspring laws. Moreover, Sriram & Vidyashankar [51] study the properties of offspring-distribution parameters which minimize the squared Hellinger distance between the model offspring distribution and the corresponding non-parametric maximum likelihood estimator of Guttorp [52]. For the setup of GWI with Poisson offspring and nonstochastic immigration of constant value 1, Linkov & Lunyova [53] investigate the asymptotics of Hellinger integrals in order to deduce large deviation assertions in hypothesis testing problems.
In contrast to the above-mentioned contexts, this paper pursues the following main goals:
(MG1)
for any time horizon and any criticality scenario (allowing for non-stationarities), to compute lower and upper bounds – and sometimes even exact values – of the Hellinger integrals H_λ(P_A ∥ P_H), power divergences I_λ(P_A ∥ P_H) and Renyi divergences R_λ(P_A ∥ P_H) of two alternative Galton-Watson branching processes P_A and P_H (on path/scenario space), where (i) P_A has Poisson(β_A) distributed offspring as well as Poisson(α_A) distributed immigration, and (ii) P_H has Poisson(β_H) distributed offspring as well as Poisson(α_H) distributed immigration; the non-immigration cases are covered as α_A = α_H = 0; as a side effect, we also aim for corresponding asymptotic distinguishability results;
(MG2)
to compute the corresponding limit quantities for the context in which (a proper rescaling of) the two alternative Galton-Watson processes with immigration converge to Feller-type branching diffusion processes, as the time-lags between the generation-size observations tend to zero;
(MG3)
as an exemplary field of application, to indicate how to use the results of (MG1) for Bayesian decision making in the epidemiological context of an infectious-disease pandemic (e.g., the current COVID-19), where e.g., potential state-budgetary losses can be controlled by alternative public policies (such as e.g., different degrees of lockdown) for mitigations of the time-evolution of the number of infectious persons (being quantified by a GW(I)). Corresponding Neyman-Pearson testing will be treated, too.
Because of the involved Poisson distributions, these goals can be tackled with a high degree of tractability, which is worked out in detail with the following structure (see also the full table of contents after this paragraph): in Section 2, we first introduce (i) the basic ingredients of Galton-Watson processes together with their interpretations in the above-mentioned pandemic setup, where it is essential to study all types of criticality (being connected with levels of reproduction numbers), (ii) the employed fundamental information measures such as Hellinger integrals, power divergences and Renyi divergences, (iii) the underlying decision-making framework, as well as (iv) connections to time series of counts and asymptotical distinguishability. Thereafter, we start our detailed technical analyses by giving recursive exact values respectively recursive bounds – as well as their applications – of Hellinger integrals H_λ(P_A ∥ P_H) (see Section 3), power divergences I_λ(P_A ∥ P_H) and Renyi divergences R_λ(P_A ∥ P_H) (see Section 4 and Section 5). Explicit closed-form bounds of Hellinger integrals H_λ(P_A ∥ P_H) will be worked out in Section 6, whereas Section 7 deals with Hellinger integrals and power divergences of the above-mentioned Galton-Watson type diffusion approximations.

2. The Framework and Application Setups

2.1. Process Setup

We investigate dissimilarity measures and apply them to decisions in the following context. Let the integer-valued random variable X_n (n ∈ ℕ_0) denote the size of the nth generation of a population (of persons, organisms, spreading news, other kinds of objects, etc.) with specified characteristics, and suppose that for the modelling of the time-evolution n ↦ X_n we have the choice between the following two (e.g., alternative, competing) models (H) and (A):
(H) a discrete-time homogeneous Galton-Watson process with immigration (GWI), given by the recursive description
X_0 ∈ ℕ_0;   X_n = ∑_{k=1}^{X_{n−1}} Y_{n−1,k} + Ỹ_n,   n ∈ ℕ,   (1)
where Y_{n−1,k} is the number of offspring of the kth object (e.g., organism, person) within the (n−1)th generation, and Ỹ_n denotes the number of immigrating objects in the nth generation. Notice that we employ an arbitrary deterministic (i.e., degenerate random) initial generation size X_0. We always assume that under the corresponding dynamics-governing law P_H
(GWI1)
the collection Y := {Y_{n−1,k}, n ∈ ℕ, k ∈ ℕ} consists of independent and identically distributed (i.i.d.) random variables which are Poisson distributed with parameter β_H > 0,
(GWI2)
the collection Ỹ := {Ỹ_n, n ∈ ℕ} consists of i.i.d. random variables which are Poisson distributed with parameter α_H ≥ 0 (where α_H = 0 stands for the degenerate case of having no immigration),
(GWI3)
Y and Ỹ are independent.
(A) a discrete-time homogeneous Galton-Watson process with immigration (GWI), given by the same recursive description (1) but with different dynamics-governing law P_A, under which (GWI1) holds with parameter β_A > 0 (instead of β_H > 0), (GWI2) holds with α_A ≥ 0 (instead of α_H ≥ 0), and (GWI3) holds. As a side remark, in some contexts the two models (H) and (A) may function as a “sandwich” of a more complicated, not fully known model.
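The recursion (1) is straightforward to simulate. The following pure-Python sketch (our own illustration; function names and the chosen parameter values are ours) draws one path under each of the two competing parameter sets:

```python
import math
import random

def sample_poisson(rng, mean):
    # Knuth's multiplicative Poisson sampler; adequate for the small means used here
    if mean == 0.0:
        return 0
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_gwi(beta, alpha, x0, n_gen, rng):
    # recursion (1): X_n = (sum of X_{n-1} i.i.d. Poisson(beta) offspring counts) + Poisson(alpha) immigration
    path = [x0]
    for _ in range(n_gen):
        offspring = sum(sample_poisson(rng, beta) for _ in range(path[-1]))
        path.append(offspring + sample_poisson(rng, alpha))
    return path

rng = random.Random(0)
path_H = simulate_gwi(beta=0.8, alpha=1.0, x0=5, n_gen=20, rng=rng)  # subcritical hypothesis (H)
path_A = simulate_gwi(beta=1.2, alpha=1.0, x0=5, n_gen=20, rng=rng)  # supercritical alternative (A)
```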
Basic and advanced facts on general GWI (introduced by Heathcote [54]) can be found e.g., in the monographs of Athreya & Ney [55], Jagers [56], Asmussen & Hering [57], Haccou [58]; see also e.g., Heyde & Seneta [59], Basawa & Rao [60], Basawa & Scott [61], Sankaranarayanan [62], Wei & Winnicki [63], Winnicki [64], Guttorp [52] as well as Yanev [65] (and also the references therein) for adjacent fundamental statistical issues including the involved technical and conceptual challenges.
For the sake of brevity, wherever we introduce or discuss corresponding quantities simultaneously for both models (H) and (A), we will use the subscript • as a synonym for either the symbol H or A. For illustration, recall the well-known fact that the corresponding conditional probabilities P_•(X_n = · | X_{n−1} = k) are again Poisson-distributed, with parameter β_• · k + α_•.
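This conditional Poisson(β_•·k + α_•) law can be verified numerically: the sum of k i.i.d. Poisson(β) variables plus an independent Poisson(α) variable is Poisson(βk + α). A small illustrative check via pmf convolution (our own sketch; the names and the truncation length are ours):

```python
import math

def poisson_pmf(j, m):
    # Poisson(m) mass at j, computed via logs for numerical stability
    return math.exp(-m + j * math.log(m) - math.lgamma(j + 1))

def convolve(p, q):
    # pmf of the sum of two independent nonnegative-integer random variables
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

beta, alpha, k, N = 0.9, 1.5, 4, 60  # N = truncation length of the pmf arrays
offspring = [poisson_pmf(j, beta) for j in range(N)]
law = [1.0] + [0.0] * (N - 1)        # pmf of the empty sum
for _ in range(k):                   # add k i.i.d. Poisson(beta) offspring counts
    law = convolve(law, offspring)[:N]
law = convolve(law, [poisson_pmf(j, alpha) for j in range(N)])[:N]  # add the immigration term
target = [poisson_pmf(j, beta * k + alpha) for j in range(N)]       # Poisson(beta*k + alpha)
assert max(abs(u - v) for u, v in zip(law, target)) < 1e-9
```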
In order to achieve a transparently representable structure of our results, we subsume the involved parameters as follows:
(PS1)
P_SP is the set of all constellations (β_A, β_H, α_A, α_H) of real-valued parameters β_A > 0, β_H > 0, α_A > 0, α_H > 0 such that β_A ≠ β_H or α_A ≠ α_H (or both); in other words, both models are non-identical and have non-vanishing immigration;
(PS2)
P_NI is the set of all constellations (β_A, β_H, α_A, α_H) of real-valued parameters β_A > 0, β_H > 0, α_A = α_H = 0 such that β_A ≠ β_H; this corresponds to the important special case that both models have no immigration and are non-identical;
(PS3)
the resulting disjoint union will be denoted by P := P_SP ∪ P_NI.
Notice that for (unbridgeable) technical reasons, we do not allow for “crossovers” between “immigration and no-immigration” (i.e., α_A = 0 and α_H > 0, respectively α_A > 0 and α_H = 0). For practice, this is not a strong restriction, since one may take e.g., α_A = 10^{−12} and α_H = 1.
For the non-immigration case α_• = 0, one has the following extinction properties (see e.g., Harris [66], Athreya & Ney [55]). As usual, let us define the extinction time τ := min{n ∈ ℕ : X_i = 0 for all integers i ≥ n} if this minimum exists, and τ := ∞ else. Correspondingly, let B := {τ < ∞} be the extinction set. If the offspring mean β_• satisfies β_• < 1 – which is called the subcritical case – or β_• = 1 – which is known as the critical case – then extinction is certain, i.e., there holds P_•(B | X_0 = 1) = 1. However, if the offspring mean satisfies β_• > 1 – which is called the supercritical case – then there is a probability greater than zero that the population never dies out, i.e., P_•(B | X_0 = 1) ∈ ]0,1[. In the latter case, on the complement of B the population size X_n explodes (a.s.) to infinity as n → ∞.
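For Poisson(β) offspring, the extinction probability q (started from X_0 = 1, without immigration) is, by classical branching theory, the smallest fixed point of the offspring probability generating function, i.e., of q = e^{β(q−1)}. A quick fixed-point iteration (our own sketch) illustrates the sub-/supercritical dichotomy:

```python
import math

def extinction_prob(beta, iters=500):
    # smallest fixed point of the Poisson(beta) offspring pgf equation q = exp(beta*(q - 1)),
    # reached by fixed-point iteration started at q = 0
    q = 0.0
    for _ in range(iters):
        q = math.exp(beta * (q - 1.0))
    return q

q_sub = extinction_prob(0.8)  # subcritical: extinction is certain, q = 1
q_sup = extinction_prob(2.0)  # supercritical: positive chance of survival, q < 1
assert abs(q_sub - 1.0) < 1e-6
assert 0.0 < q_sup < 1.0
```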
In contrast, for the (nondegenerate, nonvanishing) immigration case α_• > 0 there is no extinction, viz. P_•(B | X_0 = 1) = 0, although there may be a zero population X_{n_0} = 0 at some intermediate time n_0 ∈ ℕ; but due to the immigration, with probability one there is always a later time n_1 > n_0 such that X_{n_1} > 0. Nevertheless, also for the setup α_• > 0 it is important to know whether β_• < 1, β_• = 1 or β_• > 1 – which is still called (sub-, super-)criticality – since e.g., in the case β_• < 1 the population size X_n converges (as n → ∞) to a stationary distribution on ℕ_0, whereas for β_• > 1 the behaviour is non-stationary (non-ergodic); see e.g., Athreya & Ney [55].
At this point, let us emphasize that in our investigations (both for α_• = 0 and for α_• > 0) we do allow for “crossovers” between “different criticalities”, i.e., we deal with all cases β_A < 1, β_A = 1, β_A > 1 versus all cases β_H < 1, β_H = 1, β_H > 1; as will be explained in the following, this unifying flexibility is especially important for corresponding epidemiological-model comparisons (e.g., for the sake of decision making).
One of our main goals is to quantitatively compare (the time-evolution of) two competing GWI models (H) and (A) with respective parameter sets (β_H, α_H) and (β_A, α_A), in terms of the information measures H_λ(P_A ∥ P_H) (Hellinger integrals), I_λ(P_A ∥ P_H) (power divergences), and R_λ(P_A ∥ P_H) (Renyi divergences). The latter two express a distance (degree of dissimilarity) between (H) and (A). From this, we shall particularly derive applications for decision making under uncertainty (including tests).

2.2. Connections to Time Series of Counts

It is well known that a Galton-Watson process with Poisson offspring (with parameter β ) and Poisson immigration (with parameter α ) is “distributionally” equal to each of the following models (listed in “tree-type” chronological order):
(M1)
a Poissonian Generalized Integer-valued Autoregressive process GINAR(1) in the sense of Gauthier & Latour [67] (see also Dion, Gauthier & Latour [44], Latour [68], as well as Grunwald et al. [45]), that is, a first-order autoregressive time series with Poissonian thinning (with parameter β) and Poissonian innovations (with parameter α);
(M2)
a Poissonian first-order Conditional Linear Autoregressive model (Poissonian CLAR(1)) in the sense of Grunwald et al. [45] (and earlier preprints thereof) (since the conditional expectation is E_P[X_n | F_{n−1}] = α + β · X_{n−1}); this can be equally seen as a Poissonian autoregressive Generalized Linear Model (GLM) with identity link function (cf. [45] as well as Chapter 4 of Kedem & Fokianos [46]), that is, an autoregressive GLM with the Poisson distribution as random component and the identity link as systematic component;
the same model was used (and generalized)
(M2i)
under the name BIN(1) by Rydberg & Shephard [69] for the description of the number X n of stock transactions/trades recorded up to time n;
(M2ii)
under the name Poisson autoregressive model PAR(1) by Brandt & Williams [70] for the description of event counts in political and other social science applications;
(M2iii)
under the name Autoregressive Conditional Poisson model ACP(1,0) by Heinen [71];
(M2iv)
by Held, Höhle & Hofmann [47] as well as Held et al. [72], as a description of the time-evolution of counts from infectious disease surveillance databases, where β (respectively, α ) is interpreted as driving parameter of epidemic (respectively, endemic) component; in principle, this type of modelling can be also implicitly recovered as a special case of the epidemics-treating work of Finkenstädt, Bjornstad & Grenfell [73], by assuming trend- and season-neglecting (e.g., intra-year) measles data in urban areas of about 10 million people (provided that their population size approximation extends linearly);
(M2v)
under the name integer-valued Generalized Autoregressive Conditional Heteroscedastic model INGARCH(1,0) by Ferland, Latour & Oraichi [74] (since the conditional variance is Var_P[X_n | F_{n−1}] = α + β · X_{n−1}), see also Weiß [75]; this was later more precisely named the INARCH(1) model by Weiß [76,77], and frequently applied thereafter; for an “overlapping-generation type” interpretation of the INARCH(1) model, which is an adequate description for the time-evolution of overdispersed counts with an autoregressive serial dependence structure, see Weiß & Testik [78]; for a corresponding comprehensive recent survey (also on more general count time series), the reader is referred to the book of Weiß [48];
Moreover, according to the general considerations of Grunwald et al. [45], the Poissonian Galton-Watson model with immigration may possibly be “distributionally equal” to an integer-valued autoregressive model with random coefficient (thinning).
Nowadays, besides the name homogeneous Galton-Watson model with immigration (GWI), the name INARCH(1) seems to be the most used one, and we follow this terminology (with emphasis on GWI). Typical features of the above-mentioned models (M1) to (M2v) are the use of ℤ as the set of times, and the assumptions α > 0 as well as β ∈ ]0,1[, which guarantee stationarity and ergodicity (see above). In contrast, we employ ℕ_0 as the set of times, a degenerate (and thus, non-equilibrium) starting distribution, and arbitrary α ≥ 0 as well as β > 0. For such a situation, as explained above, we quantitatively compare two competing GWI models (H) and (A) with respective parameter sets (β_H, α_H) and (β_A, α_A). Since – as can be seen e.g., in (29) below – we basically employ only (conditionally) distributional ingredients, such as the corresponding likelihood ratio (see e.g., (13) to (15), (27) to (29) below), all the results of Section 3, Section 4, Section 5 and Section 6 can be immediately carried over to the above-mentioned time-series contexts (where we even allow for non-stationarities; in fact, we start with a one-point/Dirac distribution); for the sake of brevity, in the rest of the paper this will not be mentioned explicitly anymore.
Notice that a Poissonian GWI as well as all the models (M1) and (M2) are – despite their conditional Poisson law – typically overdispersed, since
E_P[X_n] = α + β · E_P[X_{n−1}] ≤ α + β · E_P[X_{n−1}] + β² · Var_P[X_{n−1}] = Var_P[X_n],   n ∈ ℕ \ {1},
with equality iff (i.e., if and only if) α = 0 (NI) and X_{n−2} = 0 (extinction at time n−2, with n ≥ 3).
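The mean and variance recursions behind this overdispersion statement, namely E[X_n] = α + β·E[X_{n−1}] and Var[X_n] = α + β·E[X_{n−1}] + β²·Var[X_{n−1}] (the latter via the conditional variance decomposition), can be iterated directly; a small sketch of our own, with parameter values chosen only for illustration:

```python
def moments(beta, alpha, x0, n_gen):
    # E[X_n] = alpha + beta*E[X_{n-1}];  Var[X_n] = alpha + beta*E[X_{n-1}] + beta^2*Var[X_{n-1}]
    means, variances = [float(x0)], [0.0]  # deterministic start, so Var[X_0] = 0
    for _ in range(n_gen):
        m, v = means[-1], variances[-1]
        means.append(alpha + beta * m)
        variances.append(alpha + beta * m + beta * beta * v)
    return means, variances

means, variances = moments(beta=0.8, alpha=2.0, x0=5, n_gen=10)
assert abs(means[1] - variances[1]) < 1e-12                  # X_1 is exactly Poisson: equality at n = 1
assert all(v > m for m, v in zip(means[2:], variances[2:]))  # strict overdispersion for n >= 2
```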

2.3. Applicability to Epidemiology

The above-mentioned framework can be used for any of the numerous fields of applications of discrete-time branching processes, and of the closely related INARCH(1) models. For the sake of brevity, we explain this—as a kind of running-example—in detail for the currently highly important context of the epidemiology of infectious diseases. For insightful non-mathematical introductions to the latter, see e.g., Kaslow & Evans [79], Osterholm & Hedberg [80]; for a first entry as well as overviews on modelling, the reader is referred to e.g., Grassly & Fraser [81], Keeling & Rohani [82], Yan [83,84], Britton [85], Diekmann, Heesterbeek & Britton [86], Cummings & Lessler [87], Just et al. [88], Britton & Giardina [89], Britton & Pardoux [43]. A survey on the particular role of branching processes in epidemiology can be found e.g., in Jacob [41].
Undoubtedly, by nature, the spreading of an infectious disease through a (human, animal, plant) population is a branching process with possible immigration. Indeed, typically one has the following mechanism:
(D1)
at some time t_k^E – called the time of exposure (moment of infection) – an individual k of a specified population is infected in a wide sense, i.e., entered/invaded/colonized by a number of transmissible disease-causative pathogens (etiologic agents such as viruses, bacteria, protozoans and other parasites, subviruses (e.g., prions and plant viroids), etc.); the individual is then a host (of pathogens);
(D2)
depending on the level of immunity and some other factors, these pathogens may multiply/replicate within the host to an extent (over a threshold number) such that at time t_k^I some of the pathogens start to leave their host (shedding of pathogens); in other words, the individual k becomes infectious at the time t_k^I of onset of infectiousness. Ex post, one can then say that the individual became infected in the narrow sense at the earlier time t_k^E and call it a primary case. The time interval [t_k^E, t_k^I[ is called the latent/latency/pre-infectious period of k, and t_k^I − t_k^E its duration (in some literature, there is no verbal distinction between them); notice that t_k^I may differ from the time t_k^OS of onset (first appearance) of symptoms, which leads to the so-called incubation period [t_k^E, t_k^OS[; if t_k^I < t_k^OS then [t_k^I, t_k^OS[ is called the pre-symptomatic period;
(D3)
as long as the individual k stays infectious, by shedding of pathogens it may infect in a narrow sense a random number Y_k ∈ ℕ_0 of other individuals which are susceptible (i.e., neither immune nor already infected in a narrow sense), where the distribution of Y_k depends on the individual’s (natural, voluntary, forced) behaviour, its environment, as well as some other factors e.g., connected with the type of pathogen transmission; the newly infected individuals are called offspring of k, and secondary cases if they are from the same specified population or exportations if they are from a different population; from the view of the latter, these infections are imported cases and thus can be viewed as immigrants;
(D4)
at the time t_k^R of cessation of infectiousness, the individual stops being infectious (e.g., because of recovery, death, or total isolation); the time interval [t_k^I, t_k^R[ is called the period of infectiousness (also period of communicability, infectious/infective/shedding/contagious period) of k, and t_k^R − t_k^I its duration (in some literature, there is no verbal distinction between them); notice that t_k^R may differ from the time t_k^CS of cessation (last appearance) of symptoms, which leads to the so-called sickness period [t_k^OS, t_k^CS[;
(D5)
this branching mechanism continues within the specified population until there are no infectious individuals and also no importations anymore (eradication, full extinction, total elimination)– up to a specified final time (which may be large or even infinite);
All the above-mentioned times t_k^· and time intervals are random, by nature. Two further connected quantities are also important for modelling (see e.g., Yan & Chowell [84] (p. 241ff), including a history of the corresponding terminology). Firstly, the generation interval (generation time, transmission interval) is the time interval from the onset of infectiousness in a primary case (called the infector) to the onset of infectiousness in a secondary case (called the infectee) infected by the primary case; clearly, the generation interval is random, and so is its duration (often, the (population-)mean of the latter is also called generation interval). Typically, generation intervals are important ingredients of branching process models of infectious diseases. Secondly, the serial interval is the time interval from the onset of symptoms in a primary case to the onset of symptoms in a secondary case infected by the primary case. By nature, the serial interval is random, and so is its duration (often, the (population-)mean of the latter is also called serial interval). Typically, the serial interval is easier to observe than the generation interval, and thus the latter is often approximately estimated from data of the former. For further investigations on generation and serial intervals, the reader is referred to e.g., Fine [90], Svensson [91,92], Wallinga & Lipsitch [93], Forsberg White & Pagano [94], Nishiura [95], Scalia Tomba et al. [96], Trichereau et al. [97], Vink, Bootsma & Wallinga [98], Champredon & Dushoff [99], Just et al. [88], and – especially for the novel COVID-19 pandemic – An der Heiden & Hamouda [100], Ferretti et al. [101], Ganyani et al. [102], Li et al. [103], Nishiura, Linton & Akhmetzhanov [104], Park et al. [105].
With the help of the above-mentioned individual ingredients, one can build numerous different aggregated population-wide models of infectious diseases, in discrete as well as in continuous time; the latter are typically observed only at discrete time steps (discrete-time sampling), and hence in the following we concentrate on discrete-time modelling (of the real or the observational process). In fact, we confine ourselves to the important task of modelling the evolution $n \mapsto X_n$ of the number of incidences at “stage” $n$, where incidence refers to the number of newly infected/infectious individuals. Here, $n$ may be a generation number where, inductively, $n = 0$ refers to the generation of the first appearing primary cases in the population (also called initial importations), and $n$ refers to the generation of offspring of all individuals of generation $n-1$. Alternatively, $n$ may be the index of a physical (“calendar”) point of time $t_n$, which may be deterministic or random; e.g., $(t_n)_{n \in \mathbb{N}}$ may be a strictly increasing sequence of (i) equidistant deterministic time points (and thus, one can identify $t_n = n$ in appropriate time units such as days, weeks, bi-weeks, months), or (ii) non-equidistant deterministic time points, or (iii) random time points (as a side remark, let us mention that in some situations, $X_n$ may alternatively denote the number of prevalences at “stage” $n$, where prevalence refers to the total number of infected/infectious individuals (e.g., through some methodical tricks like “self-infection”)).
In the light of this, one can loosely define an epidemic as the rapid spread of an infectious disease within a specified population, where the numbers $X_n$ of incidences are high (or much higher than expected) for that kind of population. A pandemic is a geographically large-scale (e.g., multicontinental or worldwide) epidemic. An outbreak/onset of an epidemic in the narrow sense is the (time of) change at which an infectious disease turns into an epidemic, which is typically quantified by the exceedance of a threshold; analogously, an outbreak/onset of a pandemic is the (time of) change at which the epidemic turns into a pandemic. Of course, one goal of infectious-disease modelling is to quantify “early enough” the potential danger of an emerging outbreak of an epidemic or a pandemic.
Returning to possible models of the incidence evolution $n \mapsto X_n$, its description may be theoretically derived from more detailed, time-finer, highly sophisticated, individual-based “mechanistic” infectious-disease models such as e.g., continuous-time susceptible-exposed-infectious-recovered (SEIR) models (see the above-mentioned introductory texts); however, as e.g., pointed out in Held et al. [72], the estimation of the correspondingly involved numerous parameters may be too ambitious for routinely collected, non-detailed disease data, such as e.g., daily/weekly counts $X_n$ of incidences, especially in decisive emerging/early phases of a novel disease (such as the current COVID-19 pandemic). Accordingly, in the following we assume that $X_n$ can be approximately described by a Poissonian Galton-Watson process with immigration respectively a (“distributionally equal”) Poissonian autoregressive Generalized Linear Model in the sense of (M2). Depending on the situation, this can be quite reasonable, for the following reasons (apart from the usual “if the data say so”). Firstly, it is well known (see e.g., Bartoszynski [33], Ludwig [34], Becker [35,36], Metz [37], Heyde [38], von Bahr & Martin-Löf [39], Ball [40], Jacob [41], Barbour & Reinert [42], Section 1.2 of Britton & Pardoux [43]) that in populations with a relatively high number of susceptible individuals and a relatively low number of infectious individuals (e.g., in a large population and in decisive emerging/early phases of the disease spreading), the incidence evolution $n \mapsto X_n$ can be well approximated by a (e.g., Poissonian) Galton-Watson process with possible immigration, where $n$ plays the role of a generation number. If the above-mentioned generation interval is “nearly” deterministic (leading to nearly synchronous, non-overlapping generations)—which is the case e.g., for (phases of) Influenza A(H1N1)pdm09, Influenza A(H3N2), Rubella (cf. Vink, Bootsma & Wallinga [98]), and COVID-19 (cf. Ferretti et al.
[101])—and the length of the generation interval is approximated by its mean length, and the latter is tuned to be equal to the unit time between consecutive observations, then $n$ plays the role of an observation (surveillance) time. This approximation is even more realistic if the period of infectiousness is nearly deterministic and relatively short. Secondly, as already mentioned above, the spreading of an infectious disease is intrinsically a (not necessarily Poissonian Galton-Watson) branching mechanism, which may be blurred by other effects in such a way that a Poissonian autoregressive Generalized Linear Model is still a reasonably fitting model for the observational process in disease surveillance. Such models have been used e.g., by Finkenstädt, Bjornstad & Grenfell [73], Held, Höhle & Hofmann [47], and Held et al. [72]; they all use non-constant parameters (e.g., to describe seasonal effects, which are however unknown in early phases of a novel infectious disease such as COVID-19). In contrast, we employ different new, namely divergence-based, statistical techniques, for which we assume constant parameters but also indicate procedures for the detection of changes; the extension to non-constant parameters is straightforward.
Returning to Galton-Watson processes, let us mention as a side remark that they can also be used to model the above-mentioned within-host replication dynamics (D2) (e.g., in the time interval $[t_k^E, t_k^I[$ and beyond) on a sub-cellular level, see e.g., Spouge [106], as well as Taneyhill, Dunn & Hatcher [107] for parasitic pathogens; on the other hand, one can also employ Galton-Watson processes for quantifying snowball-effect (avalanche-effect, cascade-effect) type, economic-crisis-triggered consequences of large epidemics and pandemics, such as e.g., the potential spread of transmissible (i) foreclosures of homes (cf. Parnes [108]), or clearly also (ii) company insolvencies, downsizings and credit-risk downgradings; moreover, the time-evolution of integer-valued indicators concerning the spread of (rational or unwarranted) fears resp. perceived threats may be modelled, too.
Summing up, we model the evolution $n \mapsto X_n$ of the number of incidences at stage $n$ by a Poissonian Galton-Watson process with immigration GWI
$$X_0 \in \mathbb{N}_0; \qquad X_n = \sum_{k=1}^{X_{n-1}} Y_{n-1,k} + \tilde{Y}_n, \quad n \in \mathbb{N}, \quad \text{cf. (1), (GWI1)--(GWI3), with law } P,$$
(where $Y_{n-1,k}$ corresponds to the $Y_k$ of (D3), equipped with an additional stage index $n-1$), respectively by a corresponding “distributionally equal”, possibly non-stationary, Poissonian autoregressive Generalized Linear Model in the sense of (M2); depending on the situation, we may also fix a (deterministic or random) upper time horizon other than infinity. Recall that both models are overdispersed, which is consistent with the current debate on overdispersion in connection with the current COVID-19 pandemic. In infectious-disease language, the sum $\sum_{k=1}^{X_{n-1}} Y_{n-1,k}$ can also be loosely interpreted as the epidemic component (in a narrow sense), driven by the parameter $\beta$, and $\tilde{Y}_n$ as the endemic component, driven by the parameter $\alpha$. In fact, the offspring mean (here, $\beta$) is called the reproduction number and plays a major role—also e.g., in the current public debate about the COVID-19 pandemic—because it crucially determines the rapidity of the spread of the disease and, as already indicated above in the second and third paragraph after (PS3), also the probability that the epidemic/pandemic becomes (maybe temporarily) extinct or at least stationary at a low level (that is, endemic). For this to happen, $\beta$ should be subcritical, i.e., $\beta < 1$, and even better, close to zero. Of course, the size of the importation mean $\alpha \geq 0$ matters, too, albeit in a secondary way.
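As an illustration, the recursion above can be simulated directly, using the fact that, conditionally on $X_{n-1} = x$, the next generation size $X_n$ is Poisson$(\beta x + \alpha)$-distributed (the sum of $x$ i.i.d. Poisson$(\beta)$ offspring counts plus an independent Poisson$(\alpha)$ immigration term). The following is a minimal sketch; the function names and parameter values are illustrative and not taken from the paper:

```python
import math, random

def poisson(lam, rng):
    """Draw from Poisson(lam) via Knuth's product-of-uniforms method
    (adequate for the moderate means arising below)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_gwi(beta, alpha, x0, n_steps, seed=None):
    """Path of X_n = sum_{k=1}^{X_{n-1}} Y_{n-1,k} + Y~_n with i.i.d.
    Poisson(beta) offspring and Poisson(alpha) immigration; conditionally
    on X_{n-1} = x, X_n is Poisson(beta*x + alpha)-distributed."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        path.append(poisson(beta * path[-1] + alpha, rng))
    return path

sub = simulate_gwi(beta=0.5, alpha=0.2, x0=10, n_steps=20, seed=1)  # subcritical
sup = simulate_gwi(beta=2.0, alpha=10.0, x0=1, n_steps=5, seed=1)   # supercritical
```

For $\beta < 1$ such paths typically fall to a low endemic level sustained only by immigration, whereas for $\beta > 1$ they tend to grow rapidly, in line with the criticality dichotomy discussed above.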
Keeping this in mind, let us discuss the factors on which the reproduction number $\beta$ and the importation mean $\alpha$ depend, and how they can be influenced/controlled. To begin with, by recalling the above-mentioned points (D1) to (D5) and by adapting the considerations of e.g., Grassly & Fraser [81] to our model, one encounters the fact that the distribution of the offspring $Y_{n-1,k}$—here driven by the reproduction number (offspring mean) $\beta$—depends on the following factors:
(B1)
the degree of infectiousness of the individual k, with three major components:
(B1a)
degree of biological infectiousness; this reflects the within-host dynamics (D2) of the “representative” individual $k$, in particular the duration and amount of the corresponding replication and shedding/excretion of the infectious pathogens; this degree thus depends on (i) the number of host-invading pathogens (called the initial infectious dose), (ii) the type of the pathogen with respect to e.g., its principal capabilities of replication speed, range of spread and drug-sensitivity, (iii) features of the immune system of the host $k$ including the level of innate or acquired immunity, and (iv) the interaction between the genetic determinants of disease progression in both the pathogen and the host;
(B1b)
degree of behavioural infectiousness; this depends on the contact patterns of an infected/infectious individual (and, if relevant, the contact patterns of intermediate hosts or vectors), in relation to the disease-specific type of route(s) of transmission of the infectious pathogens (for an overview of the latter, see e.g., Table 3 of Kaslow & Evans [79]); a long-distance-travel behaviour may also lead to the disease exportation to another, outside population (and thus, for the latter to a disease importation);
(B1c)
degree of environmental infectiousness; this depends on the location and environment of the host k, which influences the duration of outside-host survival of the pathogens (and, if relevant, of the intermediate hosts or vectors) as well as the speed and range of their outside-host spread; for instance, high temperature may kill the pathogens, high airflow or rainfall dynamics may ease their spread, etc.
(B2)
the degree of susceptibility of uninfected individuals who have contact with k, with the following three major components (with similar background as their infectiousness counterparts):
(B2a)
degree of biological susceptibility;
(B2b)
degree of behavioural susceptibility;
(B2c)
degree of environmental susceptibility.
All these factors (B1a) to (B2c) can in principle be influenced/controlled to a certain (respective) extent. Let us briefly discuss this for human infectious diseases, where one major goal of epidemic risk management is to operate countermeasures/interventions in order to slow down the disease transmission (e.g., by reducing the reproduction number $\beta$ to less than 1) and eventually even break the chain of transmission, for the sake of containment or mitigation; preparedness and preparation are motives, too, for instance as part of governmental pandemic risk management.
For instance, (B1a) can be reduced or even erased through pharmaceutical interventions such as medication (if available), and preventive strengthening of the immune system through non-extreme sports activities and healthy food.
Moreover, the following exemplary control measures for (B2) can be either put into action by common-sense self-behaviour, or by large-scale public recommendations (e.g., through mass media), or by rules/requirements from authorities:
(i)
personal preventive measures such as frequent washing and disinfecting of hands; keeping hands away from face; covering coughs; avoidance of handshakes and hugs with non-family-members; maintaining physical distance (e.g., of two meters) from non-family-members; wearing a face-mask of respective security degree (such as homemade cloth face mask, particulate-filtering face-piece respirator, medical (non-surgical) mask, surgical mask); self-quarantine;
(ii)
environmental measures, such as e.g., cleaning of surfaces;
(iii)
community measures aimed at mild or stringent social distancing, such as e.g., prohibiting/cancelling/banning gatherings of more than z non-family members (e.g., z = 2 , 5 , 10 , 100 , 1000 in various different phases and countries during the current COVID-19 pandemic); mask-wearing (see above); closing of schools, universities, some or even all nonessential (“system-irrelevant”) businesses and venues; home-officing/work ban; home isolation of disease cases; isolation of homes for the elderly/aged (nursing homes); stay-at-home orders with exemptions, household or even general quarantine; testing & tracing; lockdown of entire cities and beyond; restricting the degrees of travel freedom/allowed mobility (e.g., local, union-state, national, international including border and airport closure). The latter also affects the mean importation rate α , which can be controlled by vaccination programs in “outside populations”, too.
As far as the degree of biological susceptibility (B2a) is concerned, one obvious therapeutic countermeasure is a mass vaccination program/campaign (if available).
In case of highly virulent infectious diseases causing epidemics and pandemics with substantial fatality rates, some of the above-mentioned control strategies and countermeasures may (have to) be “drastic” (e.g., lockdown), and thus imply considerable social and economic costs, with a huge impact and potential danger of triggering severe social, economic and political disruptions.
In order to prepare corresponding suggestions for decisions about appropriate control measures (e.g., public policies), it is therefore important—especially for a novel infectious disease such as the current COVID-19 pandemic—to have a model for the time-evolution of the incidences in (i) a natural (basically uncontrolled) set-up, as well as in (ii) the control set-ups under consideration. As already mentioned above, we assume that all these situations can be distilled into an incidence evolution $n \mapsto X_n$ which follows a Poissonian Galton-Watson process with respectively different parameter pairs $(\beta, \alpha)$. Correspondingly, we always compare two alternative models $(H)$ and $(A)$ with parameter pairs $(\beta_H, \alpha_H)$ and $(\beta_A, \alpha_A)$ which reflect either a “pure” statistical uncertainty (under the same uncontrolled or controlled set-up), or the uncertainty between two different potential control set-ups (for the sake of assessing the potential impact/efficiency of some planned interventions, compared with alternative ones); the economic impact can also be taken into account, within a Bayesian decision framework discussed in Section 2.5 below. As will be explained in the next subsections, we achieve such comparisons by means of density-based dissimilarity distances/divergences and related quantities thereof.
From the above-mentioned detailed explanations, it is immediately clear that for the described epidemiological context one should investigate all types of criticality and importation means for the two involved Poissonian Galton-Watson processes with/without immigration (respectively the distributionally equal INARCH(1) models); in particular, this motivates (or even “justifies”) the necessity of the very lengthy detailed studies in Section 3, Section 4, Section 5, Section 6 and Section 7 below.

2.4. Information Measures

Having two competing models $(H)$ and $(A)$ at hand, it makes sense to study questions such as “how far apart are they?” and thus “how dissimilar are they?”. This can be quantified in terms of divergences in the sense of directed (i.e., not necessarily symmetric) distances, for which the triangle inequality usually fails. Let us first discuss our employed divergence subclasses in a general set-up of two equivalent probability measures $P_H$, $P_A$ on a measurable space $(\Omega, \mathcal{F})$. In terms of the parameter $\lambda \in \mathbb{R}$, the power divergences—also known as Cressie-Read divergences, relative Tsallis entropies, or the generalized cross-entropy family—are defined as (see e.g., Liese & Vajda [1,10])
$$0 \leq I_\lambda(P_A \,\|\, P_H) := \begin{cases} I(P_A \,\|\, P_H), & \text{if } \lambda = 1, \\[2pt] \dfrac{1}{\lambda(\lambda-1)} \left[ H_\lambda(P_A \,\|\, P_H) - 1 \right], & \text{if } \lambda \in \mathbb{R} \setminus \{0,1\}, \\[2pt] I(P_H \,\|\, P_A), & \text{if } \lambda = 0, \end{cases} \qquad (2)$$
where
$$I(P_A \,\|\, P_H) := \int_\Omega p_A \log \frac{p_A}{p_H} \, \mathrm{d}\mu \;\geq\; 0 \qquad (3)$$
is the Kullback-Leibler information divergence (also known as relative entropy) and
$$H_\lambda(P_A \,\|\, P_H) := \int_\Omega p_A^\lambda \, p_H^{1-\lambda} \, \mathrm{d}\mu \;\geq\; 0 \qquad (4)$$
is the Hellinger integral of order $\lambda \in \mathbb{R} \setminus \{0,1\}$; for this, we assume as usual, without loss of generality, that the probability measures $P_H$, $P_A$ are dominated by some $\sigma$-finite measure $\mu$, with densities
$$p_A = \frac{\mathrm{d}P_A}{\mathrm{d}\mu} \quad \text{and} \quad p_H = \frac{\mathrm{d}P_H}{\mathrm{d}\mu} \qquad (5)$$
defined on $\Omega$ (the zeros of $p_H$, $p_A$ are handled in (3) and (4) with the usual conventions). Clearly, for $\lambda \in \{0,1\}$ one trivially gets
$$H_0(P_A \,\|\, P_H) = H_1(P_A \,\|\, P_H) = 1.$$
The Kullback-Leibler information divergences (relative entropies) in (2) and (3) can alternatively be expressed as (see, e.g., Liese & Vajda [1])
$$I(P_A \,\|\, P_H) = \lim_{\lambda \uparrow 1} \frac{1 - H_\lambda(P_A \,\|\, P_H)}{\lambda(1-\lambda)}, \qquad I(P_H \,\|\, P_A) = \lim_{\lambda \downarrow 0} \frac{1 - H_\lambda(P_A \,\|\, P_H)}{\lambda(1-\lambda)}.$$
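These limit relations are easy to check numerically for concrete laws. For two Poisson distributions (the building blocks of the models considered here), the Hellinger integral and the Kullback-Leibler divergence have the well-known closed forms $H_\lambda(\mathrm{Poi}(a) \,\|\, \mathrm{Poi}(b)) = \exp(a^\lambda b^{1-\lambda} - \lambda a - (1-\lambda) b)$ and $I(\mathrm{Poi}(a) \,\|\, \mathrm{Poi}(b)) = b - a + a \log(a/b)$; the following sketch (with illustrative parameters, not from the paper) verifies both limits:

```python
import math

def hellinger_poisson(lam, a, b):
    """Closed form: H_lambda(Poi(a) || Poi(b)) = exp(a^lam * b^(1-lam) - lam*a - (1-lam)*b)."""
    return math.exp(a**lam * b**(1 - lam) - lam * a - (1 - lam) * b)

def kl_poisson(a, b):
    """Closed form: I(Poi(a) || Poi(b)) = b - a + a*log(a/b)."""
    return b - a + a * math.log(a / b)

a, b = 2.0, 0.7
# lambda -> 1 recovers I(P_A || P_H), lambda -> 0 recovers I(P_H || P_A)
lim_up = (1 - hellinger_poisson(0.9999, a, b)) / (0.9999 * (1 - 0.9999))
lim_down = (1 - hellinger_poisson(0.0001, a, b)) / (0.0001 * (1 - 0.0001))
```

Here `lim_up` is already close to $I(\mathrm{Poi}(2) \,\|\, \mathrm{Poi}(0.7)) \approx 0.80$ and `lim_down` to the reversed divergence, as the limit formulas predict.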
Apart from the Kullback-Leibler information divergence (relative entropy), other prominent examples of power divergences are the squared Hellinger distance $\frac{1}{2} I_{1/2}(P_A \,\|\, P_H)$ and Pearson’s $\chi^2$ divergence $2 I_2(P_A \,\|\, P_H)$; the Hellinger integral $H_{1/2}(P_A \,\|\, P_H)$ is also known as (a multiple of) the Bhattacharyya coefficient. Extensive studies of basic and advanced general facts on power divergences, Hellinger integrals and the related Renyi divergences of order $\lambda \in \mathbb{R} \setminus \{0,1\}$
$$0 \leq R_\lambda(P_A \,\|\, P_H) := \frac{1}{\lambda(\lambda-1)} \log H_\lambda(P_A \,\|\, P_H), \quad \text{with } \log 0 := -\infty, \qquad (7)$$
can be found e.g., in Liese & Vajda [1,10], Jacod & Shiryaev [24], van Erven & Harremoes [20] (as a side remark, $R_{1/2}(P_A \,\|\, P_H)$ is also known as (a multiple of) the Bhattacharyya distance). For instance, the integrals in (3) and (4) do not depend on the choice of $\mu$. Furthermore, one has the skew symmetries
$$H_\lambda(P_A \,\|\, P_H) = H_{1-\lambda}(P_H \,\|\, P_A), \quad \text{as well as} \quad I_\lambda(P_A \,\|\, P_H) = I_{1-\lambda}(P_H \,\|\, P_A),$$
for all $\lambda \in \mathbb{R}$ (see e.g., Liese & Vajda [1]). As far as finiteness is concerned, for $\lambda \in \, ]0,1[$ one gets the rudimentary bounds
$$0 < H_\lambda(P_A \,\|\, P_H) \leq 1, \quad \text{and equivalently,} \qquad (9)$$
$$0 \leq I_\lambda(P_A \,\|\, P_H) = \frac{1 - H_\lambda(P_A \,\|\, P_H)}{\lambda(1-\lambda)} < \frac{1}{\lambda(1-\lambda)}, \qquad (10)$$
where the lower bound in (10) (upper bound in (9)) is achieved iff $P_A = P_H$. For $\lambda \in \mathbb{R} \,\backslash\, ]0,1[$, one gets the bounds
$$0 \leq I_\lambda(P_A \,\|\, P_H) \leq \infty, \quad \text{and equivalently,} \quad 1 \leq H_\lambda(P_A \,\|\, P_H) \leq \infty,$$
where, in contrast to the above, both the lower bound of $H_\lambda(P_A \,\|\, P_H)$ and the lower bound of $I_\lambda(P_A \,\|\, P_H)$ are achieved iff $P_A = P_H$; however, the power divergence $I_\lambda(P_A \,\|\, P_H)$ and the Hellinger integral $H_\lambda(P_A \,\|\, P_H)$ might be infinite, depending on the particular setup.
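For Poisson laws, the skew symmetries and the two bound regimes above can be verified at a glance via the closed form $H_\lambda(\mathrm{Poi}(a) \,\|\, \mathrm{Poi}(b)) = \exp(a^\lambda b^{1-\lambda} - \lambda a - (1-\lambda) b)$; the helper below is our own illustrative sketch, not taken from the paper:

```python
import math

def hellinger_poisson(lam, a, b):
    # closed form for H_lambda(Poi(a) || Poi(b))
    return math.exp(a**lam * b**(1 - lam) - lam * a - (1 - lam) * b)

a, b = 2.0, 0.7
# skew symmetry: H_lambda(P_A || P_H) = H_{1-lambda}(P_H || P_A)
sym_ok = all(math.isclose(hellinger_poisson(lam, a, b),
                          hellinger_poisson(1 - lam, b, a))
             for lam in (-0.5, 0.25, 0.5, 0.75, 1.5))
# inside ]0,1[: 0 < H_lambda <= 1; outside: H_lambda >= 1
inside_ok = all(0 < hellinger_poisson(lam, a, b) <= 1 for lam in (0.25, 0.5, 0.75))
outside_ok = all(hellinger_poisson(lam, a, b) >= 1 for lam in (-0.5, 1.5))
```

Here the bound for $\lambda \in \,]0,1[$ is just the weighted arithmetic-geometric mean inequality $a^\lambda b^{1-\lambda} \leq \lambda a + (1-\lambda) b$ applied inside the exponent.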
The Hellinger integrals can also be used for bounds of the well-known total variation
$$0 \leq V(P_A \,\|\, P_H) := 2 \sup_{A \in \mathcal{F}} \left| P_A(A) - P_H(A) \right| = \int_\Omega \left| p_A - p_H \right| \mathrm{d}\mu,$$
with $p_A$ and $p_H$ defined in (5). Certainly, the total variation is one of the best-known statistical distances, see e.g., Le Cam [109]. For arbitrary $\lambda \in \, ]0,1[$ one has (cf. Liese & Vajda [1])
$$1 - \frac{V(P_A \,\|\, P_H)}{2} \;\leq\; H_\lambda(P_A \,\|\, P_H) \;\leq\; \left( 1 + \frac{V(P_A \,\|\, P_H)}{2} \right)^{\max\{\lambda, 1-\lambda\}} \left( 1 - \frac{V(P_A \,\|\, P_H)}{2} \right)^{\min\{\lambda, 1-\lambda\}}.$$
From this, together with the particular choice $\lambda = \frac{1}{2}$, we can derive the fundamental universal bounds
$$2 \left( 1 - H_{\frac{1}{2}}(P_A \,\|\, P_H) \right) \;\leq\; V(P_A \,\|\, P_H) \;\leq\; 2 \sqrt{1 - H_{\frac{1}{2}}(P_A \,\|\, P_H)^2}.$$
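These universal bounds can be illustrated numerically for concrete laws, e.g., two Poisson distributions; the parameters are illustrative, and `total_variation` truncates the defining sum, which is harmless for the small means used:

```python
import math

def pois_pmf(k, lam):
    # Poisson(lam) probability mass at k, computed via logs for stability
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def total_variation(a, b, kmax=200):
    # truncated version of V = sum_k |p_a(k) - p_b(k)|; ample for small means
    return sum(abs(pois_pmf(k, a) - pois_pmf(k, b)) for k in range(kmax))

def hellinger_half(a, b):
    # closed form: H_{1/2}(Poi(a) || Poi(b)) = exp(sqrt(a*b) - (a+b)/2)
    return math.exp(math.sqrt(a * b) - (a + b) / 2)

a, b = 3.0, 1.2
V, H = total_variation(a, b), hellinger_half(a, b)
lower, upper = 2 * (1 - H), 2 * math.sqrt(1 - H * H)
```

For this pair one finds $V$ sandwiched strictly between the two Hellinger-based bounds, as the inequality guarantees.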
We apply these concepts to our setup of Section 2.1 with two competing models $(H)$ and $(A)$ of Galton-Watson processes with immigration, where one can take $\Omega \subseteq \mathbb{N}_0^{\mathbb{N}_0}$ to be the space of all paths of $(X_n)_{n \in \mathbb{N}_0}$. In more detail, in terms of the extinction set $B := \{\tau < \infty\}$ and the parameter-set notation (PS1) to (PS3), it is known that for parameter constellations in $\mathcal{P}_{SP}$ the two laws $P_H$ and $P_A$ are equivalent, whereas for those in $\mathcal{P}_{NI}$ the two restrictions $P_H|_B$ and $P_A|_B$ are equivalent (see e.g., Lemma 1.1.3 of Guttorp [52]); with a slight abuse of notation we shall henceforth omit the restriction $|_B$. Consistently, for fixed time $n \in \mathbb{N}_0$ we introduce $P_{A,n} := P_A|_{\mathcal{F}_n}$ and $P_{H,n} := P_H|_{\mathcal{F}_n}$ as well as the corresponding Radon-Nikodym derivative (likelihood ratio)
$$Z_n := \frac{\mathrm{d}P_{A,n}}{\mathrm{d}P_{H,n}}, \qquad (13)$$
where $(\mathcal{F}_n)_{n \in \mathbb{N}_0}$ denotes the corresponding canonical filtration generated by $X := (X_n)_{n \in \mathbb{N}_0}$; in other words, $\mathcal{F}_n$ reflects the “process-intrinsic” information known at stage $n$. Clearly, $Z_0 = 1$. By choosing the reference measure $\mu = P_{H,n}$, one obtains from (4) the Hellinger integral $H_\lambda(P_{A,0} \,\|\, P_{H,0}) = 1$, as well as, for all $n \in \mathbb{N}$,
$$H_\lambda(P_{A,n} \,\|\, P_{H,n}) = E_{P_{H,n}}\!\left[ (Z_n)^\lambda \right], \qquad (14)$$
$$I(P_{A,n} \,\|\, P_{H,n}) = E_{P_{A,n}}\!\left[ \log Z_n \right],$$
from which one can immediately build $I_\lambda(P_{A,n} \,\|\, P_{H,n})$ ($\lambda \in \mathbb{R}$), respectively $R_\lambda(P_{A,n} \,\|\, P_{H,n})$ ($\lambda \in \mathbb{R} \setminus \{0,1\}$), respectively bounds of $V(P_{A,n} \,\|\, P_{H,n})$, via (2), (7) and (12), respectively.
The resulting values (respectively bounds) of $H_\lambda(P_{A,n} \,\|\, P_{H,n})$ are quite diverse and depend on the choice of the involved parameter pairs $(\beta_H, \alpha_H)$, $(\beta_A, \alpha_A)$ as well as on $\lambda$; the exact details will be given in Section 3 and Section 6 below.
Before doing so, we explain in the following how the resulting dissimilarity results can be applied to Bayesian testing and more general Bayesian decision making, as well as to Neyman-Pearson testing.

2.5. Decision Making under Uncertainty

Within the above-mentioned context of two competing models $(H)$ and $(A)$ of Galton-Watson processes with immigration, let us briefly discuss how knowledge about the time-evolution of the Hellinger integrals $H_\lambda(P_{A,n} \,\|\, P_{H,n})$—or equivalently, of the power divergences $I_\lambda(P_{A,n} \,\|\, P_{H,n})$, cf. (2)—can be used in order to make decisions under uncertainty, within a framework of Bayesian decision making BDM, or alternatively, of Neyman-Pearson testing NPT.
In our context of BDM, we decide between an action $d_H$ “associated with” the (say) hypothesis law $P_H$ and an action $d_A$ “associated with” the (say) alternative law $P_A$, based on the sample-path observation $X_n := \{X_l : l \in \{0, 1, \ldots, n\}\}$ of the GWI generation sizes (e.g., infectious-disease incidences, cf. Section 2.3) up to observation horizon $n \in \mathbb{N}$. Following the lines of Stummer & Vajda [15] (adapted to our branching-process context), for our BDM let us consider as admissible decision rules $\delta_n : \Omega_n \to \{d_H, d_A\}$ the ones generated by all path sets $G_n \subseteq \Omega_n$ (where $\Omega_n$ denotes the space of all possible paths of $(X_k)_{k \in \{1, \ldots, n\}}$) through
$$\delta_n(X_n) := \delta_{G_n}(X_n) := \begin{cases} d_A, & \text{if } X_n \in G_n, \\ d_H, & \text{if } X_n \notin G_n, \end{cases}$$
as well as loss functions of the form
$$\begin{pmatrix} L(d_H, H) & L(d_H, A) \\ L(d_A, H) & L(d_A, A) \end{pmatrix} := \begin{pmatrix} 0 & L_A \\ L_H & 0 \end{pmatrix} \qquad (16)$$
with pregiven constants $L_A > 0$, $L_H > 0$ (e.g., arising as bounds from quantities in worst-case scenarios); notice that in (16), $d_H$ is assumed to be a zero-loss action under $H$ and $d_A$ a zero-loss action under $A$. By definition, the Bayes decision rule $\delta_{G_{n,\min}}$ minimizes, over $G_n$, the mean decision loss
$$\mathcal{L}(\delta_{G_n}) := p_H^{prior} \cdot L_H \cdot \Pr\left( \delta_{G_n}(X_n) = d_A \mid H \right) + p_A^{prior} \cdot L_A \cdot \Pr\left( \delta_{G_n}(X_n) = d_H \mid A \right) = p_H^{prior} \cdot L_H \cdot P_{H,n}(G_n) + p_A^{prior} \cdot L_A \cdot P_{A,n}(\Omega_n \setminus G_n)$$
for given prior probabilities $p_H^{prior} = \Pr(H) \in \, ]0,1[$ for $H$ and $p_A^{prior} := \Pr(A) = 1 - p_H^{prior}$ for $A$. As a side remark let us mention that, in a certain sense, the involved model (parameter) uncertainty expressed by the “superordinate” Bernoulli-type law $\Pr = Bin(1, p_H^{prior})$ can also be reinterpreted as a rudimentary static random environment, caused e.g., by a random Bernoulli-type external static force. By straightforward calculations, one gets with (13) the minimizing path set $G_{n,\min} = \left\{ Z_n \geq \frac{p_H^{prior} L_H}{p_A^{prior} L_A} \right\}$, leading to the minimal mean decision loss, i.e., the Bayes risk,
$$R_n := \min_{G_n} \mathcal{L}(\delta_{G_n}) = \mathcal{L}(\delta_{G_{n,\min}}) = \int_{\Omega_n} \min\left\{ p_H^{prior} L_H, \; p_A^{prior} L_A \, Z_n \right\} \mathrm{d}P_{H,n}.$$
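For a one-step horizon ($n = 1$, given a fixed initial value $X_0 = x_0$), $Z_1$ is just a ratio of two Poisson densities with means $\beta_A x_0 + \alpha_A$ and $\beta_H x_0 + \alpha_H$, so $R_1$ reduces to a sum over $k$ of pointwise minima. A minimal sketch with illustrative parameters and truncation (our own, not from the paper):

```python
import math

def pois_pmf(k, lam):
    # Poisson(lam) probability mass at k, computed via logs for stability
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def bayes_risk_one_step(x0, bH, aH, bA, aA, pH_prior, LH, LA, kmax=500):
    """R_1 = sum_k min(pH_prior*LH*pH(k), pA_prior*LA*pA(k)), where, given
    X_0 = x0, X_1 ~ Poisson(beta*x0 + alpha) under each model."""
    lamH, lamA = bH * x0 + aH, bA * x0 + aA
    pA_prior = 1 - pH_prior
    return sum(min(pH_prior * LH * pois_pmf(k, lamH),
                   pA_prior * LA * pois_pmf(k, lamA)) for k in range(kmax))

risk = bayes_risk_one_step(x0=10, bH=0.5, aH=0.2, bA=2.0, aA=10.0,
                           pH_prior=0.5, LH=1.0, LA=1.0)
```

For these strongly separated models (one-step means 5.2 versus 30), the Bayes risk is nearly zero, i.e., even one observed generation discriminates almost perfectly.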
Notice that—by straightforward standard arguments—the alternative decision procedure
$$\text{take action } d_A \ (\text{resp. } d_H) \quad \text{if } L_H \cdot p_H^{post}(X_n) \leq (\text{resp.} >) \ L_A \cdot p_A^{post}(X_n)$$
with posterior probabilities $p_H^{post}(X_n) := \frac{p_H^{prior}}{(1 - p_H^{prior}) \cdot Z_n(X_n) + p_H^{prior}} =: 1 - p_A^{post}(X_n)$, leads exactly to the same actions as $\delta_{G_{n,\min}}$. By adapting Lemma 6.5 of Stummer & Vajda [15]—which on general probability spaces gives fundamental universal inequalities relating Hellinger integrals (or equivalently, power divergences) and Bayes risks—one gets for all $L_H > 0$, $L_A > 0$, $p_H^{prior} \in \, ]0,1[$, $\lambda \in \, ]0,1[$ and $n \in \mathbb{N}$ the upper bound
$$R_n \leq \Lambda_A^\lambda \, \Lambda_H^{1-\lambda} \, H_\lambda(P_{A,n} \,\|\, P_{H,n}), \quad \text{with } \Lambda_H := p_H^{prior} L_H, \; \Lambda_A := (1 - p_H^{prior}) L_A, \qquad (19)$$
as well as the lower bound
$$R_n^{\min\{\lambda, 1-\lambda\}} \cdot \left[ \Lambda_H + \Lambda_A - R_n \right]^{\max\{\lambda, 1-\lambda\}} \;\geq\; \Lambda_A^\lambda \, \Lambda_H^{1-\lambda} \, H_\lambda(P_{A,n} \,\|\, P_{H,n}) \qquad (20)$$
which implies in particular the “direct” lower bound
$$R_n \;\geq\; \frac{\Lambda_A^{\max\{1, \frac{\lambda}{1-\lambda}\}} \; \Lambda_H^{\max\{1, \frac{1-\lambda}{\lambda}\}}}{\left( \Lambda_A + \Lambda_H \right)^{\max\{\frac{\lambda}{1-\lambda}, \frac{1-\lambda}{\lambda}\}}} \cdot H_\lambda(P_{A,n} \,\|\, P_{H,n})^{\max\{\frac{1}{\lambda}, \frac{1}{1-\lambda}\}}.$$
By using (19) (respectively (20)) together with the exact values and the upper (respectively lower) bounds of the Hellinger integrals $H_\lambda(P_{A,n} \,\|\, P_{H,n})$ derived in the following sections, we end up with upper (respectively lower) bounds of the Bayes risk $R_n$. Of course, with the help of (2), the bounds (19) and (20) can be (i) immediately rewritten in terms of the power divergences $I_\lambda(P_{A,n} \,\|\, P_{H,n})$ and (ii) thus be directly interpreted in terms of dissimilarity-size arguments. As a side remark, in such a Bayesian context the $\lambda$-order Hellinger integral $H_\lambda(P_{A,n} \,\|\, P_{H,n}) = E_{P_{H,n}}[(Z_n)^\lambda]$ (cf. (14)) can also be interpreted as a $\lambda$-order Bayes-factor moment (with respect to $P_{H,n}$), since $Z_n = Z_n(X_n) = \frac{p_A^{post}(X_n)}{p_H^{post}(X_n)} \Big/ \frac{p_A^{prior}}{p_H^{prior}}$ is the Bayes factor (i.e., the posterior odds ratio of $(A)$ to $(H)$, divided by the prior odds ratio of $(A)$ to $(H)$).
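As a numerical illustration of the upper bound (19), consider again the one-step horizon, where $X_1 \sim$ Poisson$(\beta x_0 + \alpha)$ under each model and the Poisson Hellinger integral is available in closed form; all parameter values below are illustrative assumptions:

```python
import math

def pois_pmf(k, lam):
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def hellinger_poisson(lam_ord, a, b):
    # closed form for H_lambda(Poi(a) || Poi(b))
    return math.exp(a**lam_ord * b**(1 - lam_ord) - lam_ord * a - (1 - lam_ord) * b)

# one-step horizon given X_0 = x0: X_1 ~ Poisson(beta*x0 + alpha) under each model
x0 = 10
bH, aH, bA, aA = 0.5, 0.2, 0.8, 0.5
lamH, lamA = bH * x0 + aH, bA * x0 + aA
pH_prior, LH, LA = 0.5, 1.0, 1.0
LamH, LamA = pH_prior * LH, (1 - pH_prior) * LA

# exact Bayes risk R_1 (truncated sum) versus the bound (19) for several orders
exact_risk = sum(min(LamH * pois_pmf(k, lamH), LamA * pois_pmf(k, lamA))
                 for k in range(400))
upper_bounds = [LamA**u * LamH**(1 - u) * hellinger_poisson(u, lamA, lamH)
                for u in (0.25, 0.5, 0.75)]
```

For moderately overlapping models such as these (one-step means 5.2 versus 8.5), the exact risk is strictly positive yet dominated by every listed Hellinger-based upper bound.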
At this point, the potential applicant should be warned about the usual way of asynchronous decision making, where one first tests $(A)$ versus $(H)$ (i.e., $L_A = L_H = 1$, which leads to 0–1 losses in (16)) and afterwards, based on the resulting outcome (e.g., in favour of $(A)$), takes the attached economic decision (e.g., $d_A$); this can lead to distortions compared with synchronous decision making with “full” monetary losses $L_A$ and $L_H$, as is shown in Stummer & Lao [16] within an economic context in connection with discrete approximations of financial diffusion processes (they call this distortion effect a non-commutativity between Bayesian statistical and investment decisions).
For different types of–mainly parameter estimation (squared-error type loss function) concerning—Bayesian analyses based on GW(I) generation size observations, see e.g., Jagers [56], Heyde [38], Heyde & Johnstone [110], Johnson et al. [111], Basawa & Rao [60], Basawa & Scott [61], Scott [112], Guttorp [52], Yanev & Tsokos [113], Mendoza & Gutierrez-Pena [114], and the references therein.
Within our running-example epidemiological context of Section 2.3, let us briefly discuss the role of the above-mentioned losses $L_A$ and $L_H$. To begin with, as mentioned above, the unit-free choice $L_A = L_H = 1$ corresponds to Bayesian testing. Recall that this concerns two alternative infectious-disease models $(H)$ and $(A)$ with parameter pairs $(\beta_H, \alpha_H)$ and $(\beta_A, \alpha_A)$ (recall the interpretation of $\beta$ as reproduction number and $\alpha$ as importation mean), which reflect either a “pure” statistical uncertainty (under the same uncontrolled or controlled set-up), or the uncertainty between two different potential control set-ups (for the sake of assessing the potential impact/efficiency of some planned interventions, compared with alternative ones). As far as non-unit-free, e.g., macroeconomic or monetary, losses are concerned, recall that some of the above-mentioned control strategies (countermeasures, public policies, governmental pandemic risk management plans) may imply considerable social and economic costs, with a huge impact and potential danger of triggering severe social, economic and political disruptions; a corresponding tradeoff between health and economic issues can be incorporated by choosing $L_A$ and $L_H$ to be (e.g., monetary) values which reflect estimates or upper bounds of losses due to wrong decisions: e.g., if at stage $n$, due to the observed data, one erroneously thinks (reinforced by fear) that a novel infectious disease (e.g., COVID-19) will lead (or re-emerge) to a severe pandemic and consequently decides for a lockdown with drastic future economic consequences, versus, if one erroneously thinks (reinforced by carelessness) that the infectious disease is (or stays) non-severe and consequently eases some/all control measures, which will lead to extremely devastating future economic consequences.
For the estimates/bounds of L A and L H , one can e.g., employ (i) the comprehensive stochastic studies of Feicht & Stummer [115] on the quantitative degree of elasticity and speed of recovery of economies after a sudden macroeconomic disaster, or (ii) the more short-term, German-specific, scenario-type (basically non-stochastic) studies of Dorn et al. [116,117] in connection with the current COVID-19 pandemic.
Of course, the above-mentioned Bayesian decision procedure can also be operated in a sequential way. For instance, suppose that we are confronted with a novel infectious disease (e.g., COVID-19) of non-negligible fatality rate, and let $(A)$ reflect a “potentially dangerous” infectious-disease-transmission situation (e.g., a substantially supercritical reproduction number $\beta_A = 2$, and an importation mean of $\alpha_A = 10$, for weekly appearing new incidence generations), whereas $(H)$ describes a “relatively harmless/mild” situation (e.g., a substantially subcritical $\beta_H = 0.5$, $\alpha_H = 0.2$). Moreover, let $d_A$ respectively $d_H$ denote (non-quantitatively) the decision/action to accept $(A)$ respectively $(H)$. It can then be reasonable to stop the observation process $n \mapsto X_n$ (also called surveillance or online monitoring) of incidence numbers at the first time at which $n \mapsto Z_n = Z_n(X_n)$ exceeds the threshold $p_H^{prior}/p_A^{prior}$; if this happens, one takes $d_A$ as decision (and e.g., declares the situation to be the occurrence of an epidemic outbreak and starts with control/intervention measures (however, as explained above, one should synchronously involve also the potential economic losses)), whereas as long as this does not happen, one continues the observation (and implicitly takes $d_H$ as decision). This can be modelled in terms of the pair $(\tilde{\tau}, d_A)$ with (random) stopping time $\tilde{\tau} := \inf \left\{ n \in \mathbb{N} : Z_n \geq \frac{p_H^{prior}}{p_A^{prior}} \right\}$ (with the usual convention that the infimum of the empty set is infinity), and the corresponding decision $d_A$. After the time $\tilde{\tau} < \infty$ and e.g., immediate subsequent employment of some control/counter measures, one can e.g., take the old model $(A)$ as new $(H)$, declare a new target $(A)$ for the desired quantification of the effectiveness of the employed control measures (e.g., a mitigation to a slightly subcritical case of $\beta_A = 0.95$, $\alpha_A = 0.8$), and start to observe the new incidence numbers until the new target $(A)$ has been reached.
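The described stopping rule can be sketched as follows: since, given $X_{n-1}$, the one-step transition law under each model is Poisson$(\beta X_{n-1} + \alpha)$, the log-likelihood ratio $\log Z_n$ is updated additively and compared against the (log-)threshold. Function names and the data-generating choice below are our own illustrative assumptions:

```python
import math, random

def log_pois_pmf(k, lam):
    # log of the Poisson(lam) probability mass at k
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

def poisson(lam, rng):
    # Knuth's method, adequate for the moderate means used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def monitor(bH, aH, bA, aA, x0, threshold, true_params, seed=0, max_n=200):
    """Online monitoring: update log Z_n after each new incidence count and
    stop as soon as Z_n >= threshold (= pH_prior/pA_prior); data are
    generated under 'true_params'. Returns (stopping time or None, last X)."""
    rng = random.Random(seed)
    bT, aT = true_params
    x, logZ = x0, 0.0
    for n in range(1, max_n + 1):
        x_new = poisson(bT * x + aT, rng)
        logZ += log_pois_pmf(x_new, bA * x + aA) - log_pois_pmf(x_new, bH * x + aH)
        x = x_new
        if logZ >= math.log(threshold):
            return n, x
    return None, x

# data generated under the "dangerous" model (A); parameters as in the text
stop, _ = monitor(bH=0.5, aH=0.2, bA=2.0, aA=10.0, x0=5,
                  threshold=1.0, true_params=(2.0, 10.0), seed=3)
```

Because the two models are strongly separated, the per-step evidence in favour of $(A)$ is large and the rule stops after very few observed generations.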
This can be interpreted as online detection of a distributional change; a related comprehensive new framework for the use of divergences (even much beyond power divergences) for distributional change detection can be found e.g., in the recent work of Kißlinger & Stummer [118]. A completely different, SIR-model-based approach for the detection of change points in the spread of COVID-19 is given in Dehning et al. [119]. Moreover, other surveillance methods can also be found e.g., in the corresponding overview of Frisen [120] and the Swedish epidemics outbreak investigations of Frisen, Andersson & Schiöler [121].
One can refine the above-mentioned sequential procedure via two (instead of one) appropriate thresholds $c_1 < c_2$ and the pair $(\breve{\tau}, \delta_{\breve{\tau}})$, with the stopping time $\breve{\tau} := \inf \{ n \in \mathbb{N} : Z_n \notin [c_1, c_2] \}$ as well as the corresponding decision rule
$$\delta_{\breve{\tau}} := \begin{cases} d_A, & \text{if } Z_{\breve{\tau}} > c_2, \\ d_H, & \text{if } Z_{\breve{\tau}} < c_1. \end{cases}$$
An exact optimized treatment of the two above-mentioned sequential procedures, and of their connection to Hellinger integrals (and power divergences) of Galton-Watson processes with immigration, is beyond the scope of this paper.
As a side remark, let us mention that our above-suggested method of Bayesian decision making with Hellinger integrals of GWIs differs completely from the very recent work of Brauner et al. [122], who use a Bayesian hierarchical model for a concrete, very comprehensive study of the effectiveness and burden of non-pharmaceutical interventions against COVID-19 transmission.
The power divergences I_λ(P_A,n ‖ P_H,n) (λ ∈ ℝ) can also be employed in other ways within Bayesian decision making of statistical nature. Namely, by adapting the general lines of Österreicher & Vajda [123] (see also Liese & Vajda [10], as well as the diffusion-process applications in Stummer [5,31,32]) to our context of Galton-Watson processes with immigration, we can proceed as follows. For notational convenience, we first attach the value θ := 1 to the GWI model ( A ) (which has prior probability p_A^prior ∈ ]0,1[) and θ := 0 to ( H ) (which has prior probability 1 − p_A^prior). Suppose we want to decide, in an optimal Bayesian way, which degree of evidence deg ∈ [0,1] we should attribute (according to a pregiven loss function LO) to the model ( A ). In order to achieve this goal, we choose a nonnegatively-valued loss function LO(θ, deg) defined on {0,1} × [0,1], of two types which will be specified below. The risk at stage 0 (i.e., prior to the GWI-path observations X_n) from the optimal decision about the degree of evidence deg concerning the decision parameter θ is defined as
BR_LO(p_A^prior) := min_{deg ∈ [0,1]} { (1 − p_A^prior) · LO(0, deg) + p_A^prior · LO(1, deg) },
which can thus be interpreted as a minimal prior expected loss (the minimum always exists). The corresponding risk posterior to the GWI-path observations X_n, from the optimal decision about the degree of evidence deg concerning the parameter θ, is given by
BR_LO^post(p_A^prior) := ∫_{Ω_n} BR_LO( p_A^post(X_n) ) [ p_A^prior dP_A,n + (1 − p_A^prior) dP_H,n ],
which is achieved by the optimal decision rule (about the degree of evidence)
D*(X_n) := arg min_{deg ∈ [0,1]} { (1 − p_A^post(X_n)) · LO(0, deg) + p_A^post(X_n) · LO(1, deg) }.
The corresponding statistical information measure (in the sense of De Groot [124])
ΔBR_LO(p_A^prior) := BR_LO(p_A^prior) − BR_LO^post(p_A^prior) ≥ 0
represents the reduction of the decision risk about the degree of evidence deg concerning the parameter θ that can be attained by observing the GWI-path X_n until stage n. For the first-type loss function LO˜(θ, deg) := deg − (2·deg − 1)·1_{{1}}(θ), defined on {0,1} × [0,1] with the help of the indicator function 1_A(·) of the set A, one can show that
D*(X_n) := 0, if p_A^post(X_n) ∈ [0, 1/2[;  D*(X_n) := 1, if p_A^post(X_n) ∈ ]1/2, 1[;  D*(X_n) := any number in [0,1], if p_A^post(X_n) = 1/2,
as well as the representation formula
I_λ(P_A,n ‖ P_H,n) = ∫_0^1 ΔBR_{LO˜}(p_A^prior) · (1 − p_A^prior)^{λ−2} · (p_A^prior)^{−1−λ} dp_A^prior,  λ ∈ ℝ,
(cf. Österreicher & Vajda [123], Liese & Vajda [10], adapted to our GWI context); in other words, the power divergence I_λ(P_A,n ‖ P_H,n) can be regarded as a weighted-average statistical information measure (weighted-average decision-risk reduction). One can also use other weights in p_A^prior in order to obtain bounds of I_λ(P_A,n ‖ P_H,n) (analogously to Stummer [5]).
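The weighted-average representation above can be checked numerically on a toy two-point experiment. The following sketch is our own illustration (not from the paper): the statistical information ΔBR is computed in its equivalent form min(p, 1−p) − Σ_x min(p·P_A(x), (1−p)·P_H(x)), and a midpoint-rule evaluation of the integral is compared with the directly computed power divergence.

```python
import math

def delta_BR(p, PA, PH):
    # statistical information for the testing loss LO~ : prior Bayes risk
    # min(p, 1-p) minus the posterior Bayes risk sum_x min(p*PA(x), (1-p)*PH(x))
    post = sum(min(p * a, (1.0 - p) * h) for a, h in zip(PA, PH))
    return min(p, 1.0 - p) - post

def power_divergence(lam, PA, PH):
    hellinger = sum(a ** lam * h ** (1.0 - lam) for a, h in zip(PA, PH))
    return (1.0 - hellinger) / (lam * (1.0 - lam))

def weighted_average_info(lam, PA, PH, grid=200000):
    # midpoint rule for  int_0^1 dBR(p) * (1-p)^(lam-2) * p^(-1-lam) dp
    total = 0.0
    for i in range(grid):
        p = (i + 0.5) / grid
        total += delta_BR(p, PA, PH) * (1.0 - p) ** (lam - 2.0) * p ** (-1.0 - lam)
    return total / grid
```

Note that ΔBR vanishes identically near p = 0 and p = 1 for equivalent finite experiments, so the seemingly singular weight causes no trouble in the numerical integration.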
For the second-type loss function LO_{λ,χ}(θ, deg) := λ^{θ−1} · deg^{λ−θ} / [ χ^λ · (1−χ)^{1−λ} · (1−λ)^θ · (1−deg)^{λ−θ} ], defined on {0,1} × [0,1] with parameters λ ∈ ]0,1[ and χ ∈ ]0,1[, one can derive the optimal decision rule
D*(X_n) = p_A^post(X_n)
as well as the representation formula as a limit statistical information measure (limit decision risk reduction)
I_λ(P_A,n ‖ P_H,n) = lim_{χ → p_A^prior} ΔBR_{LO_{λ,χ}}(p_A^prior) =: ΔBR_{LO_{λ, p_A^prior}}(p_A^prior)
(cf. Österreicher & Vajda [123], Stummer [5], adapted to our GWI context).
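As a plausibility check of this second-type construction, one can verify ΔBR_{LO_{λ,χ}} = I_λ at χ = p_A^prior on a finite toy experiment. The sketch below is our own: the loss is the formula reconstructed above, and the inner minimization over deg is done by brute-force grid search (so the computed optimal deg also approximates the posterior, as claimed by the decision rule).

```python
import math

def LO(lam, chi, theta, deg):
    # second-type loss function (as reconstructed in the text)
    num = lam ** (theta - 1) * deg ** (lam - theta)
    den = (chi ** lam * (1.0 - chi) ** (1.0 - lam)
           * (1.0 - lam) ** theta * (1.0 - deg) ** (lam - theta))
    return num / den

def bayes_risk(lam, chi, p, grid=20001):
    # brute-force minimization of the expected loss over deg in ]0,1[
    return min((1.0 - p) * LO(lam, chi, 0, d) + p * LO(lam, chi, 1, d)
               for d in ((i + 0.5) / grid for i in range(grid)))

def delta_BR(lam, chi, p, PA, PH):
    post = 0.0
    for a, h in zip(PA, PH):
        mass = p * a + (1.0 - p) * h        # mixture mass of the outcome
        post += mass * bayes_risk(lam, chi, p * a / mass)
    return bayes_risk(lam, chi, p) - post
```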
As an alternative to the above-mentioned Bayesian-decision-making applications of Hellinger integrals H_λ(P_A,n ‖ P_H,n), let us now briefly discuss their use within the corresponding Neyman-Pearson framework with randomized tests T_n : Ω_n → [0,1] of the hypothesis P_H against the alternative P_A, based on the GWI-generation-size sample-path observations X_n := { X_l : l ∈ {0,1,…,n} }. In contrast to (17) and (18), a Neyman-Pearson test minimizes, over T_n, the type II error probability ∫_{Ω_n} (1 − T_n) dP_A,n in the class of the tests for which the type I error probability ∫_{Ω_n} T_n dP_H,n is at most ς ∈ ]0,1[. The corresponding minimal type II error probability
E_ς(P_A,i ‖ P_H,i) := inf_{T_i : ∫_{Ω_i} T_i dP_H,i ≤ ς} ∫_{Ω_i} (1 − T_i) dP_A,i
can, for all ς ∈ ]0,1[, λ ∈ ]0,1[ and i ∈ I, be bounded from above by
E_ς(P_A,i ‖ P_H,i) ≤ E_ς^U(P_A,i ‖ P_H,i) := min{ (1−λ) · (λ/ς)^{λ/(1−λ)} · [H_λ(P_A,i ‖ P_H,i)]^{1/(1−λ)}, 1 },
and for all λ > 1, i ∈ I it can be bounded from below by
E_ς(P_A,i ‖ P_H,i) ≥ E_ς^L(P_A,i ‖ P_H,i) := (1−ς)^{λ/(λ−1)} · [H_λ(P_A,i ‖ P_H,i)]^{1/(1−λ)},
which is an adaptation of a general result of Krafft & Plachky [125]; see also Liese & Vajda [1] as well as Stummer & Vajda [15]. Hence, by combining (23) and (24) with the exact values respectively upper bounds of the Hellinger integrals H_{1−λ}(P_A,n ‖ P_H,n) from the following sections, we obtain, for our context of Galton-Watson processes with Poisson offspring and Poisson immigration (including the no-immigration case), upper bounds of E_ς(P_A,n ‖ P_H,n), which can also be immediately rewritten as lower bounds for the power 1 − E_ς(P_A,n ‖ P_H,n) of a most powerful test at level ς. In contrast to such finite-time-horizon results, for the (to our context) incompatible setup of Galton-Watson processes with Poisson offspring but nonstochastic immigration of constant value 1, the asymptotic rates of decrease, as n → ∞, of the unconstrained type II error probabilities as well as of the type I error probabilities were studied in Linkov & Lunyova [53] by a different approach which also employs Hellinger integrals. Some other types of Neyman-Pearson testing investigations for Galton-Watson processes, different from ours, can be found, e.g., in Basawa & Scott [126], Feigin [127], Sweeting [128], Basawa & Scott [61], and the references therein.
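For a finite toy experiment, the Krafft-Plachky-type upper bound (23) can be compared with the exactly computable Neyman-Pearson minimal type II error probability of a randomized likelihood-ratio test. The following sketch is our own illustration; the distributions and the level are arbitrary, and strictly positive probabilities under P_H are assumed.

```python
import math

def min_type2_error(level, PA, PH):
    # Exact Neyman-Pearson minimal type II error probability for a finite
    # experiment: spend the type I budget along decreasing likelihood ratio,
    # randomizing on the boundary outcome.
    order = sorted(range(len(PA)), key=lambda x: PA[x] / PH[x], reverse=True)
    budget, beta = level, 1.0
    for x in order:
        frac = min(1.0, budget / PH[x])   # acceptance fraction for outcome x
        beta -= frac * PA[x]
        budget -= frac * PH[x]
        if budget <= 1e-15:
            break
    return max(beta, 0.0)

def kp_upper(level, lam, PA, PH):
    # Krafft-Plachky-type upper bound, valid for lam in ]0,1[
    H = sum(a ** lam * h ** (1.0 - lam) for a, h in zip(PA, PH))
    return min((1.0 - lam) * (lam / level) ** (lam / (1.0 - lam))
               * H ** (1.0 / (1.0 - lam)), 1.0)
```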

2.6. Asymptotical Distinguishability

The next two concepts deal with two general families (P_A,i)_{i∈I} and (P_H,i)_{i∈I} of probability measures on the measurable spaces (Ω_i, F_i)_{i∈I}, where the index set I is either ℕ_0 or ℝ_+. For them, the following two general types of asymptotical distinguishability are well known (see, e.g., LeCam [109], Liese & Vajda [1], Jacod & Shiryaev [24], Linkov [129], and the references therein).
Definition 1.
The family (P_A,i)_{i∈I} is contiguous to the family (P_H,i)_{i∈I}, in symbols (P_A,i) ◁ (P_H,i), if for all sets A_i ∈ F_i with lim_{i→∞} P_H,i(A_i) = 0 there holds lim_{i→∞} P_A,i(A_i) = 0.
Definition 2.
The families of measures (P_A,i)_{i∈I} and (P_H,i)_{i∈I} are called entirely separated (completely asymptotically distinguishable), in symbols (P_A,i) △ (P_H,i), if there exist a sequence i_m → ∞ as m → ∞ and, for each m ∈ ℕ_0, a set A_{i_m} ∈ F_{i_m} such that lim_{m→∞} P_A,i_m(A_{i_m}) = 1 and lim_{m→∞} P_H,i_m(A_{i_m}) = 0.
It is clear that the notion of contiguity carries the concept of absolute continuity over to families of measures. Loosely speaking, (P_A,i) is contiguous to (P_H,i) if the limit lim_{i→∞} P_A,i (provided that it exists) is absolutely continuous with respect to the limit lim_{i→∞} P_H,i. However, for the definition of contiguity we do not need to require the probability measures to converge to limiting probability measures. On the other hand, entire separation is the generalization of singularity to families of measures.
The corresponding negations will be denoted by ◁̄ and △̄. One can easily check that a family (P_A,i) cannot be both contiguous and entirely separated with respect to a family (P_H,i). In fact, as shown in Linkov [129], the relation between the families (P_A,i) and (P_H,i) can be uniquely classified into the following distinguishability types:
(a)
(P_A,i) ◁ (P_H,i) and (P_H,i) ◁ (P_A,i) (mutual contiguity);
(b)
(P_A,i) ◁ (P_H,i), (P_H,i) ◁̄ (P_A,i);
(c)
(P_A,i) ◁̄ (P_H,i), (P_H,i) ◁ (P_A,i);
(d)
(P_A,i) ◁̄ (P_H,i), (P_H,i) ◁̄ (P_A,i), and (P_A,i) △̄ (P_H,i);
(e)
(P_A,i) △ (P_H,i).
As demonstrated in the above-mentioned references for a general context, one can deduce the type of distinguishability from the time-evolution of the Hellinger integrals. Indeed, the following assertions can be found, e.g., in Linkov [129]; part (c) was established in Liese & Vajda [1], and parts (f) and (g) in Vajda [3].
Proposition 1.
The following assertions are equivalent:
(a) (P_A,i) △ (P_H,i);
(b) lim inf_{i→∞} H_λ(P_A,i ‖ P_H,i) = 0 for all λ ∈ ]0,1[;
(c) there exists a λ ∈ ]0,1[ with lim inf_{i→∞} H_λ(P_A,i ‖ P_H,i) = 0;
(d) there exists a π ∈ ]0,1[ with lim inf_{i→∞} e_π(P_A,i ‖ P_H,i) = 0;
(e) lim sup_{i→∞} V(P_A,i ‖ P_H,i) = 2;
(f) there exists a λ ∈ ]0,1[ with lim sup_{i→∞} I_λ(P_A,i ‖ P_H,i) = 1/(λ·(1−λ));
(g) lim sup_{i→∞} I_λ(P_A,i ‖ P_H,i) = 1/(λ·(1−λ)) for all λ ∈ ]0,1[.
In combination with the discussion after Definition 2, one can thus interpret the λ-order Hellinger integral H_λ(P_A,i ‖ P_H,i) as a "measure" of the distinctness of the two families P_A,i and P_H,i up to a fixed finite time horizon i ∈ I.
Furthermore, for the contiguity we obtain the equivalence (see e.g., Liese & Vajda [1], Linkov [129])
(P_A,i) ◁ (P_H,i)  ⇔  lim inf_{λ↑1} lim inf_{i→∞} H_λ(P_A,i ‖ P_H,i) = 1  ⇔  lim sup_{λ↑1} lim sup_{i→∞} λ·(1−λ)·I_λ(P_A,i ‖ P_H,i) = 0.
All the above-mentioned general results can be applied to our context of the two competing Poissonian Galton-Watson processes with immigration (GWI), ( H ) and ( A ) (reflected by the two different laws P_H resp. P_A with parameter pairs (β_H, α_H) resp. (β_A, α_A)), by taking P_A,i := P_A | F_i and P_H,i := P_H | F_i. Recall from the preceding subsections (by identifying i with n) that the latter two describe the stochastic dynamics of the respective GWI within the restricted time-/stage-frame {0, 1, …, i}.
In the following, we study in detail the evolution of the Hellinger integrals between two competing models of Galton-Watson processes with immigration; this study turns out to be quite extensive.

3. Detailed Recursive Analyses of Hellinger Integrals

3.1. A First Basic Result

In terms of our notations (PS1) to (PS3), a typical situation for the applications we have in mind is that one particular constellation (β_A, β_H, α_A, α_H) ∈ P (e.g., obtained from theoretical or previous statistical investigations) is fixed, whereas, in contrast, the parameter λ ∈ ℝ∖{0,1} for the Hellinger integral or the power divergence might be chosen freely, e.g., depending on which (transform of a) dissimilarity measure one decides to choose for further analysis. At this point, let us emphasize that in general we will not make assumptions of the form β < 1, β = 1 or β > 1, i.e., assumptions upon the type of extinction-concerning criticality.
To start with our investigations, in order to justify, for all n ∈ ℕ_0, the quantity
Z_n := dP_A,n / dP_H,n (cf. (13)),
as well as (14) and (15) (and also I_λ(P_A,n ‖ P_H,n) for λ ∈ ℝ respectively R_λ(P_A,n ‖ P_H,n) for λ ∈ ℝ∖{0,1}), we first mention the following straightforward facts: (i) if (β_A, β_H, α_A, α_H) ∈ P_NI, then P_A,n and P_H,n are equivalent (i.e., P_A,n ∼ P_H,n); (ii) if (β_A, β_H, α_A, α_H) ∈ P_SP, then P_A,n and P_H,n are equivalent as well. Moreover, by recalling Z_0 = 1 and using the "rate functions" f(x) = β·x + α (x ∈ [0,∞[), a version of (13) can be easily determined by calculating for each x := (x_0, x_1, x_2, …) ∈ Ω := ℕ × ℕ_0 × ℕ_0 × ⋯
Z_n(x) = ∏_{k=1}^n Z_{n,k}(x)  with  Z_{n,k}(x) := exp{ f_H(x_{k−1}) − f_A(x_{k−1}) } · ( f_A(x_{k−1}) / f_H(x_{k−1}) )^{x_k},
where for the last term we use the convention (0/0)^x := 1 for all x ∈ ℕ_0. Furthermore, we define for each x ∈ Ω
Z_{n,k}^{(λ)}(x) := exp{ −λ f_A(x_{k−1}) − (1−λ) f_H(x_{k−1}) } · [ f_A(x_{k−1})^λ · f_H(x_{k−1})^{1−λ} ]^{x_k} / x_k!
with the convention 0^0 / 0! := 1 for the last term. Accordingly, one obtains from (14) the Hellinger integral H_λ(P_A,0 ‖ P_H,0) = 1, as well as, for all (β_A, β_H, α_A, α_H, λ) ∈ P × (ℝ∖{0,1}),
H_λ(P_A,1 ‖ P_H,1) = exp{ f_A(x_0)^λ · f_H(x_0)^{1−λ} − ( λ f_A(x_0) + (1−λ) f_H(x_0) ) }
for x_0 = X_0 ∈ ℕ, and for all n ∈ ℕ∖{1}
H_λ(P_A,n ‖ P_H,n) = E_{P_H,n}[ (Z_n)^λ ] = Σ_{x_1=0}^∞ ⋯ Σ_{x_n=0}^∞ ∏_{k=1}^n Z_{n,k}^{(λ)}(x)
= Σ_{x_1=0}^∞ ⋯ Σ_{x_{n−1}=0}^∞ ∏_{k=1}^{n−1} Z_{n,k}^{(λ)}(x) · e^{ −( λ f_A(x_{n−1}) + (1−λ) f_H(x_{n−1}) ) } · Σ_{x_n=0}^∞ [ f_A(x_{n−1})^λ f_H(x_{n−1})^{1−λ} ]^{x_n} / x_n!
= Σ_{x_1=0}^∞ ⋯ Σ_{x_{n−1}=0}^∞ ∏_{k=1}^{n−1} Z_{n,k}^{(λ)}(x) · exp{ f_A(x_{n−1})^λ f_H(x_{n−1})^{1−λ} − ( λ f_A(x_{n−1}) + (1−λ) f_H(x_{n−1}) ) }.
From (29), one can see that a crucial role for the exact calculation (respectively for the derivation of bounds) of the Hellinger integral is played by the following functions, defined for x ∈ [0,∞[:
ϕ_λ(x) := ϕ(x, β_A, β_H, α_A, α_H, λ) := φ_λ(x) − f_λ(x), with
φ_λ(x) := φ(x, β_A, β_H, α_A, α_H, λ) := f_A(x)^λ · f_H(x)^{1−λ} and
f_λ(x) := f(x, β_A, β_H, α_A, α_H, λ) := λ f_A(x) + (1−λ) f_H(x) = α_λ + β_λ x,
where we have used the λ-weighted averages
α_λ := α(α_A, α_H, λ) := λ·α_A + (1−λ)·α_H  and  β_λ := β(β_A, β_H, λ) := λ·β_A + (1−λ)·β_H.
Since λ plays a special role, henceforth we typically use it as an index and often omit β_A, β_H, α_A, α_H. According to Lemma A1 in Appendix A.1, for λ ∈ ]0,1[ (respectively λ ∈ ℝ∖[0,1]) one gets ϕ_λ(x) ≤ 0 (respectively ϕ_λ(x) ≥ 0) for all x ∈ [0,∞[. Furthermore, in both cases there holds ϕ_λ(x) = 0 iff f_A(x) = f_H(x), i.e., for x = x* := (α_A − α_H)/(β_H − β_A) ≥ 0. This is consistent with the corresponding generally valid upper and lower bounds (cf. (9) and (11)): 0 < H_λ(P_A,n ‖ P_H,n) ≤ 1 for λ ∈ ]0,1[, and H_λ(P_A,n ‖ P_H,n) ≥ 1 for λ ∈ ℝ∖[0,1].
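The sign properties of ϕ_λ just stated are easy to check numerically. The sketch below is our own; the parameter values are borrowed from the epidemiological example, and λ is chosen arbitrarily.

```python
def phi_lam(x, bA, bH, aA, aH, lam):
    # phi_lam = weighted geometric mean minus weighted arithmetic mean of
    # the rate functions f_A(x) = bA*x + aA and f_H(x) = bH*x + aH
    fA, fH = bA * x + aA, bH * x + aH
    return fA ** lam * fH ** (1.0 - lam) - (lam * fA + (1.0 - lam) * fH)
```

For λ ∈ ]0,1[ the function is nonpositive (geometric mean below arithmetic mean), for λ outside [0,1] nonnegative, and it vanishes exactly at x* where f_A(x*) = f_H(x*).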
As a first indication of our proposed method, let us start by illuminating the simplest case λ ∈ ℝ∖{0,1} and γ := α_H β_A − α_A β_H = 0. This means that (β_A, β_H, α_A, α_H) ∈ P_NI ∪ P_SP,1, where P_SP,1 is the set of all (componentwise) strictly positive (β_A, β_H, α_A, α_H) with β_A ≠ β_H, α_A ≠ α_H and β_A/β_H = α_A/α_H ≠ 1 ("the equal-fraction case"). In this situation, all three functions (30) to (32) are linear. Indeed,
φ_λ(x) = p_λ^E + q_λ^E · x
with p_λ^E := α_A^λ · α_H^{1−λ} and q_λ^E := β_A^λ · β_H^{1−λ} (where the index E stands for exact linearity). Clearly, q_λ^E > 0 on P_NI ∪ P_SP,1, as well as p_λ^E > 0 on P_SP,1 and p_λ^E = 0 on P_NI. Furthermore,
ϕ_λ(x) = r_λ^E + s_λ^E · x
with r_λ^E := p_λ^E − α_λ = α_A^λ α_H^{1−λ} − (λ α_A + (1−λ) α_H) and s_λ^E := q_λ^E − β_λ = β_A^λ β_H^{1−λ} − (λ β_A + (1−λ) β_H). Due to Lemma A1, on P_NI ∪ P_SP,1 one gets s_λ^E < 0 for λ ∈ ]0,1[ and s_λ^E > 0 for λ ∈ ℝ∖[0,1]. Furthermore, on P_SP,1 one gets r_λ^E < 0 (resp. r_λ^E > 0) for λ ∈ ]0,1[ (resp. λ ∈ ℝ∖[0,1]), whereas on P_NI, the no-immigration setup, one gets r_λ^E = 0 for all λ ∈ ℝ∖{0,1}.
As will be seen later on, such linearity properties are useful for the recursive handling of the Hellinger integrals. However, the functions φ_λ and ϕ_λ are linear only on the parameter set P_NI ∪ P_SP,1. Hence, in the general case (β_A, β_H, α_A, α_H, λ) ∈ P × (ℝ∖{0,1}) we aim for linear lower and upper bounds
φ_λ^L(x) := p_λ^L + q_λ^L · x ≤ φ_λ(x) ≤ φ_λ^U(x) := p_λ^U + q_λ^U · x,
for all x ∈ [0,∞[ (ultimately, x ∈ ℕ_0), which by (30) and (31) leads to
ϕ_λ^L(x) := r_λ^L + s_λ^L · x := (p_λ^L − α_λ) + (q_λ^L − β_λ) · x ≤ ϕ_λ(x) ≤ ϕ_λ^U(x) := r_λ^U + s_λ^U · x := (p_λ^U − α_λ) + (q_λ^U − β_λ) · x,
for all x ∈ [0,∞[ (ultimately, x ∈ ℕ_0). Of course, the involved slopes and intercepts should satisfy reasonable restrictions. Later on, we shall impose further restrictions on the involved slopes and intercepts in order to guarantee nice properties of the general Hellinger integral bounds given in Theorem 1 below (for instance, in consistency with the nonnegativity of φ_λ we could require p_λ^U ≥ p_λ^L ≥ 0 and q_λ^U ≥ q_λ^L ≥ 0, which nontrivially implies that these bounds possess certain monotonicity properties). For the formulation of our first assertions on Hellinger integrals, we make use of the following notation:
Definition 3.
For all (β_A, β_H, α_A, α_H, λ) ∈ P × (ℝ∖{0,1}) and all p, q ∈ ℝ let us define the sequences (a_n(q))_{n∈ℕ_0} and (b_n(p,q))_{n∈ℕ_0} recursively by
a_0(q) := 0;  a_n(q) := ξ_λ^{(q)}( a_{n−1}(q) ) := q · e^{ a_{n−1}(q) } − β_λ,  n ∈ ℕ,
b_0(p,q) := 0;  b_n(p,q) := p · e^{ a_{n−1}(q) } − α_λ,  n ∈ ℕ.
Notice the interrelations a_1(q_λ^A) = s_λ^A and b_1(p_λ^A, q_λ^A) = r_λ^A for A ∈ {E, L, U}. Clearly, for all q ∈ ℝ∖{0} and p ∈ ℝ one has the linear interrelation
b_n(p,q) = (p/q) · a_n(q) + (p/q) · β_λ − α_λ,  n ∈ ℕ.
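Definition 3 translates directly into code. The sketch below is our own (parameter values are arbitrary choices from P_SP,1); it computes both sequences and checks the linear interrelation (38).

```python
import math

def a_seq(q, beta_l, n):
    # a_0 := 0,  a_k := q * exp(a_{k-1}) - beta_lam
    a = [0.0]
    for _ in range(n):
        a.append(q * math.exp(a[-1]) - beta_l)
    return a

def b_seq(p, q, beta_l, alpha_l, n):
    # b_0 := 0,  b_k := p * exp(a_{k-1}) - alpha_lam
    a = a_seq(q, beta_l, n)
    return [0.0] + [p * math.exp(a[k - 1]) - alpha_l for k in range(1, n + 1)]
```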
Accordingly, we obtain the following fundamental Hellinger integral evaluations:
Theorem 1.
(a) 
For all (β_A, β_H, α_A, α_H, λ) ∈ (P_NI ∪ P_SP,1) × (ℝ∖{0,1}), all initial population sizes X_0 ∈ ℕ and all observation horizons n ∈ ℕ one can recursively compute the exact value
H_λ(P_A,n ‖ P_H,n) = exp{ a_n(q_λ^E) · X_0 + (α_A/β_A) · Σ_{k=1}^n a_k(q_λ^E) } =: V_{λ,X_0,n},
where α_A/β_A can be equivalently replaced by α_H/β_H. Recall that q_λ^E := β_A^λ · β_H^{1−λ}. Notice that on P_NI × (ℝ∖{0,1}) the formula (39) simplifies significantly, since α_A = α_H = 0.
(b) 
For all (β_A, β_H, α_A, α_H, λ) ∈ (P_SP ∖ P_SP,1) × (ℝ∖{0,1}), all coefficients p_λ^L, p_λ^U, q_λ^L, q_λ^U ∈ ℝ which satisfy (35) for all x ∈ ℕ_0 (and thus in particular p_λ^L ≤ p_λ^U, q_λ^L ≤ q_λ^U), all initial population sizes X_0 ∈ ℕ and all observation horizons n ∈ ℕ one gets the following recursive (i.e., recursively computable) bounds for the Hellinger integrals:
for λ ∈ ]0,1[:  B_{λ,X_0,n}^L := B̃_{λ,X_0,n}(p_λ^L, q_λ^L) < H_λ(P_A,n ‖ P_H,n) ≤ min{ B̃_{λ,X_0,n}(p_λ^U, q_λ^U), 1 } =: B_{λ,X_0,n}^U,
for λ ∈ ℝ∖[0,1]:  B_{λ,X_0,n}^L := max{ B̃_{λ,X_0,n}(p_λ^L, q_λ^L), 1 } ≤ H_λ(P_A,n ‖ P_H,n) < B̃_{λ,X_0,n}(p_λ^U, q_λ^U) =: B_{λ,X_0,n}^U,
where for general λ ∈ ℝ∖{0,1}, p ∈ ℝ, q ∈ ℝ∖{0} we use the definitions
B̃_{λ,X_0,n}(p,q) := exp{ a_n(q) · X_0 + Σ_{k=1}^n b_k(p,q) } = exp{ a_n(q) · X_0 + (p/q) · Σ_{k=1}^n a_k(q) + n · ( (p/q) · β_λ − α_λ ) },
as well as
B̃_{λ,X_0,n}(p,0) := exp{ −β_λ · X_0 + ( p · e^{−β_λ} − α_λ ) · n }.
Remark 1.
(a) 
Notice that the expression B̃_{λ,X_0,n}(p,q) can analogously be defined on the parameter set P_NI ∪ P_SP,1. For the choices q_λ^E := β_A^λ β_H^{1−λ} > 0 and p_λ^E := α_A^λ α_H^{1−λ} = q_λ^E · (α_A/β_A) = q_λ^E · (α_H/β_H) ≥ 0 one gets (p_λ^E/q_λ^E) · β_λ − α_λ = 0, and thus the characterization B̃_{λ,X_0,n}(p_λ^E, q_λ^E) = V_{λ,X_0,n} as the exact value (rather than as a lower/upper bound (component)).
(b) 
In the case q = β_λ one gets the explicit representation B̃_{λ,X_0,n}(p,q) = exp{ (p − α_λ) · n }.
(c) 
Using the skew symmetry (8), one can derive alternative bounds of the Hellinger integral by switching to the transformed parameter setup (β_A^∗, β_H^∗, α_A^∗, α_H^∗, λ^∗) := (β_H, β_A, α_H, α_A, 1−λ). However, this does not lead to different bounds: define ϕ_{λ^∗}^∗, φ_{λ^∗}^∗ and f_{λ^∗}^∗ analogously to (30), (31) and (32) by replacing the parameters (β_A, β_H, α_A, α_H, λ) with (β_A^∗, β_H^∗, α_A^∗, α_H^∗, λ^∗). Then there holds f_{λ^∗}^∗(x) = f_λ(x), φ_{λ^∗}^∗(x) = φ_λ(x) and ϕ_{λ^∗}^∗(x) = ϕ_λ(x), and the set of (lower- and upper-bound) parameters p_λ^L, q_λ^L, p_λ^U, q_λ^U satisfying (35) does not change under this transformation.
(d) 
If there are no restrictions on p_λ^L, p_λ^U, q_λ^L, q_λ^U other than (35), the bounds in (40) and (41) can have some inconvenient features, e.g., being equal to 1 for all (large enough) n ∈ ℕ, having an oscillating behaviour in n, or being suboptimal in certain (other) senses. For a detailed discussion, the reader is referred to Section 3.16 ff. below.
(e) 
For the (to our context) incompatible setup of GWI with Poisson offspring but nonstochastic immigration of constant value 1, the exact values of the corresponding Hellinger integrals (i.e., an "analogue" of part (a)) were established in Linkov & Lunyova [53].
Proof of Theorem 1.
Let us fix (β_A, β_H, α_A, α_H) ∈ P as well as x_0 := X_0 ∈ ℕ, and start with arbitrary λ ∈ ]0,1[. We first prove the upper bound B_{λ,X_0,n}^U of part (b). Correspondingly, we suppose that the coefficients p_λ^U, q_λ^U satisfy (35) for all x ∈ ℕ_0. From (28), (30), (31), (32) and (35) one immediately gets B_{λ,X_0,1}^U in terms of the first sequence element a_1(q_λ^U) (cf. (36)). With the help of (29), for all observation horizons n ∈ ℕ∖{1} we get (with the obvious shortcut for n = 2)
H_λ(P_A,n ‖ P_H,n) = Σ_{x_1=0}^∞ ⋯ Σ_{x_{n−1}=0}^∞ ∏_{k=1}^{n−1} Z_{n,k}^{(λ)}(x) · exp{ φ_λ(x_{n−1}) − f_λ(x_{n−1}) }
< Σ_{x_1=0}^∞ ⋯ Σ_{x_{n−1}=0}^∞ ∏_{k=1}^{n−1} Z_{n,k}^{(λ)}(x) · exp{ (p_λ^U − α_λ) + (q_λ^U − β_λ) x_{n−1} }
= Σ_{x_1=0}^∞ ⋯ Σ_{x_{n−1}=0}^∞ ∏_{k=1}^{n−1} Z_{n,k}^{(λ)}(x) · exp{ b_1(p_λ^U, q_λ^U) + a_1(q_λ^U) x_{n−1} }
= exp{ b_1(p_λ^U, q_λ^U) } · Σ_{x_1=0}^∞ ⋯ Σ_{x_{n−2}=0}^∞ ∏_{k=1}^{n−2} Z_{n,k}^{(λ)}(x) · exp{ e^{a_1(q_λ^U)} φ_λ(x_{n−2}) − f_λ(x_{n−2}) }
< exp{ b_1(p_λ^U, q_λ^U) } · Σ_{x_1=0}^∞ ⋯ Σ_{x_{n−2}=0}^∞ ∏_{k=1}^{n−2} Z_{n,k}^{(λ)}(x) · exp{ ( e^{a_1(q_λ^U)} p_λ^U − α_λ ) + ( e^{a_1(q_λ^U)} q_λ^U − β_λ ) · x_{n−2} }
= exp{ b_1(p_λ^U, q_λ^U) } · Σ_{x_1=0}^∞ ⋯ Σ_{x_{n−2}=0}^∞ ∏_{k=1}^{n−2} Z_{n,k}^{(λ)}(x) · exp{ b_2(p_λ^U, q_λ^U) + a_2(q_λ^U) x_{n−2} }
< ⋯ < exp{ a_n(q_λ^U) x_0 + Σ_{k=1}^n b_k(p_λ^U, q_λ^U) }.
Notice that for the strictness of the above inequalities we have used the fact that ϕ_λ(x) < ϕ_λ^U(x) for some (in fact, all but at most two) x ∈ ℕ_0 (cf. Properties 3 (P19) below). Since for some admissible choices of p_λ^U, q_λ^U and some n ∈ ℕ the last term in (43) can become larger than 1, one needs to take into account the cutoff point 1 arising from (9). The lower bound B_{λ,X_0,n}^L of part (b), as well as the exact value of part (a), follow from (29) in an analogous manner by employing p_λ^L, q_λ^L and p_λ^E, q_λ^E, respectively. Furthermore, we use the fact that for (β_A, β_H, α_A, α_H, λ) ∈ (P_NI ∪ P_SP,1) × ]0,1[ one gets from (38) the relation b_n(p_λ^E, q_λ^E) = (α_A/β_A) · a_n(q_λ^E). For the sake of brevity, the corresponding straightforward details are omitted here. Although we take the minimum of the upper bound derived in (43) and 1, the inequality B_{λ,X_0,n}^L < B_{λ,X_0,n}^U is nevertheless valid: the reason is that, for constituting a lower bound, the parameters p_λ^L, q_λ^L must fulfill either the conditions [p_λ^L − α_λ < 0 and q_λ^L − β_λ ≤ 0] or [p_λ^L − α_λ ≤ 0 and q_λ^L − β_λ < 0] (or both), which guarantees that B_{λ,X_0,n}^L < 1. The proof for all λ ∈ ℝ∖[0,1] works out completely analogously, by taking into account the generally valid lower bound H_λ(P_A,n ‖ P_H,n) ≥ 1 (cf. (11)). □
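In the no-immigration case, Theorem 1(a) can be cross-checked against a brute-force evaluation of the path sum (29). The following sketch (our own; the truncation cutoff is an arbitrary choice) does this for n = 2, where the direct sum over the first generation size can still be evaluated easily.

```python
import math

def hellinger_NI_exact(bA, bH, lam, X0, n):
    # Theorem 1(a) in the no-immigration case: H_lam = exp(a_n(q_E) * X0)
    beta_l = lam * bA + (1.0 - lam) * bH
    qE = bA ** lam * bH ** (1.0 - lam)
    a = 0.0
    for _ in range(n):
        a = qE * math.exp(a) - beta_l
    return math.exp(a * X0)

def hellinger_NI_direct(bA, bH, lam, X0, cutoff=400):
    # Brute force for n = 2: sum over x1 of the lambda-tilted Poisson weight
    # times exp(phi_lam(x1)); without immigration phi_lam(x) = (q_E - beta_lam)*x.
    beta_l = lam * bA + (1.0 - lam) * bH
    qE = bA ** lam * bH ** (1.0 - lam)
    total = 0.0
    for x1 in range(cutoff):
        log_w = -beta_l * X0 + x1 * math.log(qE * X0) - math.lgamma(x1 + 1)
        total += math.exp(log_w + (qE - beta_l) * x1)
    return total
```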

3.2. Some Useful Facts for Deeper Analyses

Theorem 1(b) and Remark 1(a) indicate the crucial role of the expression B̃_{λ,X_0,n}(p,q) and the fact that the choice of the quantities p, q depends on the underlying (e.g., fixed) offspring-immigration parameter constellation (β_A, β_H, α_A, α_H) as well as on the (e.g., selectable) value of λ, i.e., p_λ^A = p^A(β_A, β_H, α_A, α_H, λ) and q_λ^A = q^A(β_A, β_H, α_A, α_H, λ) with A ∈ {E, L, U}. In order to study the desired time-behaviour n ↦ B̃_{λ,X_0,n}(·,·) of the Hellinger integral bounds resp. exact values, one therefore faces a six-dimensional (and thus highly non-obvious) detailed analysis, including the search for criteria (in addition to (35)) on good/optimal choices of p_λ^L, q_λ^L, p_λ^U, q_λ^U. Since these criteria will (almost) always imply the nonnegativity of p_λ^A, q_λ^A (A ∈ {L, U}) as well as p_λ^E ≥ 0, q_λ^E > 0 (cf. Remark 1(a)), let us first present some fundamental properties of the underlying crucial sequences (a_n(q))_{n∈ℕ} and (b_n(p,q))_{n∈ℕ} for general p ≥ 0, q ≥ 0.
Properties 1.
For all λ ∈ ℝ the following holds:
(P1) 
If 0 < q < β_λ, then the sequence (a_n(q))_{n∈ℕ} is strictly negative, strictly decreasing and converges to the unique negative solution x_0(q) ∈ ]−β_λ, q − β_λ[ of the equation
ξ_λ^{(q)}(x) = q · e^x − β_λ = x.
(P2) 
If 0 < q = β_λ, then a_n(q) ≡ 0.
(P3) 
If q > max{0, β_λ}, then the sequence (a_n(q))_{n∈ℕ} is strictly positive and strictly increasing. Notice that in this setup, q = 1 implies min{1, e^{β_λ−1}} = e^{β_λ−1} < q.
(P3a) 
If additionally q ≤ min{1, e^{β_λ−1}}, then the sequence (a_n(q))_{n∈ℕ} converges to the smallest positive solution x_0(q) ∈ ]0, −log q] of Equation (44).
(P3b) 
If additionally q > min{1, e^{β_λ−1}}, then the sequence (a_n(q))_{n∈ℕ} diverges to ∞ faster than exponentially (i.e., there do not exist constants c_1, c_2 ∈ ℝ such that a_n(q) ≤ e^{c_1 + c_2 n} for all n ∈ ℕ).
(P4) 
If q = 0, then one gets a_n(0) ≡ −β_λ.
Due to the linear interrelation (38), these results directly carry over to the behaviour of the sequence (b_n(p,q))_{n∈ℕ}:
(P5) 
If p > 0 and 0 < q < β_λ, then the sequence (b_n(p,q))_{n∈ℕ} is strictly decreasing and converges to p · e^{x_0(q)} − α_λ. Trivially, b_1(p,q) = p − α_λ.
(P5a) 
If additionally p < α_λ, then (b_n(p,q))_{n∈ℕ} is strictly negative for all n ∈ ℕ.
(P5b) 
If additionally p = α_λ, then (b_n(p,q))_{n∈ℕ} is strictly negative for all n ∈ ℕ∖{1}.
(P5c) 
If additionally p > α_λ, then b_n(p,q) is strictly positive for some (and possibly for all) n ∈ ℕ.
(P6) 
If 0 < q = β_λ, then b_n(p,q) ≡ p − α_λ.
(P7) 
If p > 0 and q > max{0, β_λ}, then the sequence (b_n(p,q))_{n∈ℕ} is strictly increasing.
(P7a) 
If additionally q ≤ min{1, e^{β_λ−1}}, then the sequence (b_n(p,q))_{n∈ℕ} converges to p · e^{x_0(q)} − α_λ ∈ ]p − α_λ, p/q − α_λ]; this limit can take any sign, depending on the parameter constellation.
(P7b) 
If additionally q > min{1, e^{β_λ−1}}, then the sequence (b_n(p,q))_{n∈ℕ} diverges to ∞ faster than exponentially.
(P8) 
For the remaining cases we get b_n(0,q) ≡ −α_λ and b_n(p,0) ≡ p · e^{−β_λ} − α_λ (p ∈ ℝ, q ∈ ℝ). Moreover, in our investigations we will repeatedly make use of the function ξ_λ^{(q)}(·) from the definition (36) of a_n(q) (see also (44)), which has the following properties:
(P9) 
For q ∈ ]0,∞[ and all λ ∈ ℝ∖{0,1}, the function ξ_λ^{(q)}(·) is strictly increasing, strictly convex and smooth, and there holds:
(P9a) 
ξ_λ^{(q)}(0) < 0 if q < β_λ;  ξ_λ^{(q)}(0) = 0 if q = β_λ;  ξ_λ^{(q)}(0) > 0 if q > β_λ.
(P9b) 
lim_{x→−∞} ξ_λ^{(q)}(x) = −β_λ, and lim_{x→∞} ξ_λ^{(q)}(x) = ∞.
The proof of these properties is provided in Appendix A.1. From Properties 1 (P1) to (P4) we can see that the behaviour of the sequence (a_n(q))_{n∈ℕ} can be classified into basically four different types; besides the case (P2) where a_n(q) is constant, the sequence can be either (i) strictly decreasing and convergent (e.g., for the NI case (β_A, β_H, α_A, α_H, λ) = (0.5, 2, 0, 0, 0.5), leading to β_λ = λ β_A + (1−λ) β_H = 1.25 and to q := q_λ^E = β_A^λ β_H^{1−λ} = 1, cf. (33) resp. Theorem 1(a)), or (ii) strictly increasing and convergent (e.g., for (β_A, β_H, α_A, α_H, λ) = (0.5, 2, 0, 0, 1.5), leading to β_λ = −0.25 and q := q_λ^E = 0.25), or (iii) strictly increasing and divergent (e.g., for (β_A, β_H, α_A, α_H, λ) = (0.5, 2, 0, 0, 2.7), leading to β_λ = −2.05 and q := q_λ^E ≈ 0.047366). Within our running-example epidemiological context of Section 2.3, this corresponds to a "potentially dangerous" infectious-disease-transmission situation ( H ) (with supercritical reproduction number β_H = 2), whereas ( A ) describes a "mild" situation (with "low" subcritical β_A = 0.5).
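The non-constant behaviour types can be reproduced numerically with the example values above. The sketch below is our own; we iterate only cases (i) and (ii), since in example (iii) the divergence sets in extremely slowly (there, q exceeds e^{β_λ−1} only marginally).

```python
import math

def a_iterates(q, beta_l, n):
    a, out = 0.0, []
    for _ in range(n):
        a = q * math.exp(a) - beta_l     # a_n = xi_lam^{(q)}(a_{n-1})
        out.append(a)
    return out

dec = a_iterates(1.0, 1.25, 60)      # case (i):  lam = 0.5, beta_lam = 1.25, q = 1
inc = a_iterates(0.25, -0.25, 200)   # case (ii): lam = 1.5, beta_lam = -0.25, q = 0.25
```

Case (i) decreases towards the negative fixed point of (44), case (ii) increases towards the smallest positive fixed point, which lies in ]0, −log q].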
As already mentioned before, the sequences (a_n(q))_{n∈ℕ} and (b_n(p,q))_{n∈ℕ}, whose behaviours for general p ≥ 0 and q ≥ 0 were described in Properties 1, have to be evaluated at setup-dependent choices p = p_λ = p(β_A, β_H, α_A, α_H, λ) and q = q_λ = q(β_A, β_H, α_A, α_H, λ). Hence, for fixed (β_A, β_H, α_A, α_H), one of the questions arising in the course of the desired investigation of the time-behaviour of the Hellinger integral bounds (resp. exact values) is for which λ ∈ ℝ the sequence (a_n(q_λ))_{n∈ℕ} converges. In the following, we illuminate this for the important special case q_λ = β_A^λ β_H^{1−λ}. Suppose first that β_A ≠ β_H. Properties 1 (P1) implies that for λ ∈ ]0,1[ one has lim_{n→∞} a_n(q_λ) = x_0(q_λ) ∈ ]−β_λ, q_λ − β_λ[, and Lemma A1 states that q_λ − β_λ < 0. For λ ∈ ℝ∖[0,1] there holds q_λ > max{0, β_λ}, and from (P3) one can see that (a_n(q_λ))_{n∈ℕ} does not converge to x_0(q_λ) in general, but only for q_λ ≤ min{1, e^{β_λ−1}}, which constitutes an implicit condition on λ. This can be made explicit with the help of the auxiliary variables
λ^− := λ^−(β_A, β_H) := inf{ λ ≤ 0 : β_A^λ β_H^{1−λ} ≤ min{ 1, exp( λ β_A + (1−λ) β_H − 1 ) } } in case that this set is nonempty, and λ^− := 0 else;
λ^+ := λ^+(β_A, β_H) := sup{ λ ≥ 1 : β_A^λ β_H^{1−λ} ≤ min{ 1, exp( λ β_A + (1−λ) β_H − 1 ) } } in case that this set is nonempty, and λ^+ := 1 else.
For the constellation β_A = β_H > 0 we clearly obtain q_λ = β_A^λ β_H^{1−λ} = β_A = β_H = β_λ. Hence, (P2) implies that the sequence (a_n(q_λ))_{n∈ℕ} converges for all λ ∈ ℝ∖{0,1}, and we can set λ^− := −∞ as well as λ^+ := ∞. Incorporating this, and adapting a result of Linkov & Lunyova [53] on λ^−(β_A, β_H), λ^+(β_A, β_H) for the case β_A ≠ β_H, we end up with
Lemma 1.
(a) For all β_A > 0, β_H > 0 with β_A ≠ β_H there holds
λ^− = λ^−(β_A, β_H) = 0, if β_H ≥ 1;  = λ̆, if β_H < 1 and β_A ∉ [β_H, β_H · z(β_H)];  = −∞, if β_H < 1 and β_A ∈ ]β_H, β_H · z(β_H)],
λ^+ = λ^+(β_A, β_H) = 1, if β_A ≥ 1;  = λ̆, if β_A < 1 and β_H ∉ [β_A, β_A · z(β_A)];  = ∞, if β_A < 1 and β_H ∈ ]β_A, β_A · z(β_A)],
where
λ̆ := λ̆(β_A, β_H) := ( β_H − 1 − log β_H ) / ( β_H − β_A + log(β_A/β_H) ), which satisfies λ̆ < 0 if β_H < 1 and β_A ∉ [β_H, β_H · z(β_H)], and λ̆ > 1 if β_A < 1 and β_H ∉ [β_A, β_A · z(β_A)].
Here, for fixed β ∈ ]0,∞[∖{1} we denote by z(β) the unique solution of log(x) − β·(x−1) = 0, x ∈ ]0,∞[∖{1}; for β = 1, z(β) = 1 is the unique solution of log(x) − (x−1) = 0, x ∈ ]0,∞[. (b) For all β_A = β_H > 0 one gets λ^− = λ^−(β_A, β_H) = −∞ as well as λ^+ = λ^+(β_A, β_H) = ∞. Notice that the relationship λ̆(β_A, β_H) = 1 − λ̆(β_H, β_A) is consistent with the skew symmetry (8).
A corresponding proof is given in Appendix A.1.
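Both λ̆ and z(β) are easy to evaluate numerically. The following sketch is our own; the bisection bracket for z(β) assumes β < 1, so that the second root lies in ]1,∞[. It also checks the skew-symmetry relation λ̆(β_A, β_H) = 1 − λ̆(β_H, β_A).

```python
import math

def z_of_beta(beta, lo=1.0 + 1e-9, hi=1.0e6):
    # second root (in ]1, oo[, assuming beta < 1) of log(x) - beta*(x - 1) = 0,
    # located by bisection
    g = lambda x: math.log(x) - beta * (x - 1.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lam_breve(bA, bH):
    return (bH - 1.0 - math.log(bH)) / (bH - bA + math.log(bA / bH))
```

For instance, with (β_A, β_H) = (2, 0.5) one has β_H < 1 and β_A outside [β_H, β_H·z(β_H)], so λ̆ is negative, in accordance with Lemma 1.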
With these auxiliary basic facts in hand, let us now work out our detailed investigations of the time-behaviour n ↦ H_λ(P_A,n ‖ P_H,n), starting with the exactly treatable case (a) of Theorem 1.

3.3. Detailed Analyses of the Exact Recursive Values, i.e., for the Cases (β_A, β_H, α_A, α_H) ∈ P_NI ∪ P_SP,1

In the no-immigration case (β_A, β_H, α_A, α_H) ∈ P_NI and in the equal-fraction case (β_A, β_H, α_A, α_H) ∈ P_SP,1, the Hellinger integral can be calculated exactly in terms of H_λ(P_A,n ‖ P_H,n) = V_{λ,X_0,n} (cf. (39)), as proposed in part (a) of Theorem 1. This quantity depends on the behaviour of the sequence (a_n(q_λ^E))_{n∈ℕ}, with q_λ^E := β_A^λ β_H^{1−λ} > 0, and of the sum ((α_A/β_A) · Σ_{k=1}^n a_k(q_λ^E))_{n∈ℕ}. The latter expression is equal to zero on P_NI, whereas on P_SP,1 it is nonzero. Using Lemma A1 we conclude that q_λ^E < β_λ (resp. q_λ^E > β_λ) iff λ ∈ ]0,1[ (resp. λ ∈ ℝ∖[0,1]), since on P_NI ∪ P_SP,1 there holds β_A ≠ β_H. Thus, from Properties 1 (P1) we can see that for λ ∈ ]0,1[ the sequence (a_n(q_λ^E))_{n∈ℕ} is strictly negative, strictly decreasing and converges to the unique solution x_0(q_λ^E) ∈ ]−β_λ, q_λ^E − β_λ[ of Equation (44). For λ ∈ ℝ∖[0,1], (P3) implies that the sequence (a_n(q_λ^E))_{n∈ℕ} is strictly positive and strictly increasing; it converges to the smallest positive solution x_0(q_λ^E) ∈ ]0, −log(q_λ^E)] of Equation (44) in case that (P3a) is satisfied, and otherwise it diverges to ∞. Thus, we have shown the following detailed behaviour of Hellinger integrals:
Proposition 2.
For all (β_A, β_H, α_A, α_H, λ) ∈ P_NI × ]0,1[ and all initial population sizes X_0 ∈ ℕ there holds
(a) H_λ(P_{A,1}‖P_{H,1}) = exp{(β_A^λ·β_H^{1−λ} − λ·β_A − (1−λ)·β_H)·X_0} < 1,
(b) the sequence (H_λ(P_{A,n}‖P_{H,n}))_{n∈ℕ} given by H_λ(P_{A,n}‖P_{H,n}) = exp{a_n(q_λ^E)·X_0} =: V_{λ,X_0,n} is strictly decreasing,
(c) lim_{n→∞} H_λ(P_{A,n}‖P_{H,n}) = exp{x_0(q_λ^E)·X_0} ∈ ]0,1[,
(d) lim_{n→∞} (1/n)·log H_λ(P_{A,n}‖P_{H,n}) = 0,
(e) the map X_0 ↦ V_{λ,X_0,n} is strictly decreasing.
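The recursion behind Proposition 2 is straightforward to evaluate numerically. The following Python sketch is our own illustration (not part of the paper); it assumes that the recursion (36) reads a_1(q) = q − β_λ, a_{k+1}(q) = q·e^{a_k(q)} − β_λ, and that q_λ^E = β_A^λ·β_H^{1−λ}; the concrete parameter values are chosen only for illustration:

```python
import math

def hellinger_exact_NI(beta_A, beta_H, lam, X0, n):
    """Exact Hellinger integral H_lam = exp(a_n(q_E) * X0) in the
    no-immigration case P_NI, cf. Proposition 2(b); assumes the recursion
    a_1(q) = q - beta_lam, a_{k+1}(q) = q*exp(a_k(q)) - beta_lam."""
    beta_lam = lam * beta_A + (1.0 - lam) * beta_H
    q_E = beta_A**lam * beta_H**(1.0 - lam)
    a = q_E - beta_lam                 # a_1(q_E)
    for _ in range(n - 1):             # iterate up to a_n(q_E)
        a = q_E * math.exp(a) - beta_lam
    return math.exp(a * X0)

# illustration: "mild" (A) versus "nearly critical" (H), lambda in ]0,1[
H_seq = [hellinger_exact_NI(0.4, 0.8, 0.5, 5, n) for n in range(1, 31)]
```

In line with parts (a) to (c), the computed sequence starts below 1, is strictly decreasing, and stabilizes at exp{x_0(q_λ^E)·X_0} > 0.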
Proposition 3.
For all (β_A, β_H, α_A, α_H, λ) ∈ P_NI × (ℝ∖[0,1]) and all initial population sizes X_0 ∈ ℕ there holds with q_λ^E := β_A^λ·β_H^{1−λ}
(a) H_λ(P_{A,1}‖P_{H,1}) = exp{(β_A^λ·β_H^{1−λ} − β_λ)·X_0} > 1,
(b) the sequence (H_λ(P_{A,n}‖P_{H,n}))_{n∈ℕ} given by H_λ(P_{A,n}‖P_{H,n}) = exp{a_n(q_λ^E)·X_0} =: V_{λ,X_0,n} is strictly increasing,
(c) lim_{n→∞} H_λ(P_{A,n}‖P_{H,n}) = exp{x_0(q_λ^E)·X_0} > 1, if λ ∈ [λ_−, λ_+]∖[0,1], and = ∞, if λ ∈ ]−∞, λ_−[ ∪ ]λ_+, ∞[,
(d) lim_{n→∞} (1/n)·log H_λ(P_{A,n}‖P_{H,n}) = 0, if λ ∈ [λ_−, λ_+]∖[0,1], and = ∞, if λ ∈ ]−∞, λ_−[ ∪ ]λ_+, ∞[,
(e) the map X_0 ↦ V_{λ,X_0,n} is strictly increasing.
In the case (β_A, β_H, α_A, α_H) ∈ P_SP,1, the sequence (a_n(q_λ^E))_{n∈ℕ} under consideration is formally the same, with the parameter q_λ^E := β_A^λ·β_H^{1−λ} > 0. However, in contrast to the case P_NI, on P_SP,1 both the sequence (a_n(q_λ^E))_{n∈ℕ} and the sum (α_A/β_A · Σ_{k=1}^n a_k(q_λ^E))_{n∈ℕ} are strictly decreasing in case that λ ∈ ]0,1[, and strictly increasing in case that λ ∈ ℝ∖[0,1]. The respective convergence behaviours are given in Properties 1 (P1) and (P3). We thus obtain
Proposition 4.
For all (β_A, β_H, α_A, α_H, λ) ∈ P_SP,1 × ]0,1[ and all initial population sizes X_0 ∈ ℕ there holds with q_λ^E := β_A^λ·β_H^{1−λ}
(a) H_λ(P_{A,1}‖P_{H,1}) = exp{(β_A^λ·β_H^{1−λ} − β_λ)·(X_0 + α_A/β_A)} < 1,
(b) the sequence (H_λ(P_{A,n}‖P_{H,n}))_{n∈ℕ} given by H_λ(P_{A,n}‖P_{H,n}) = exp{a_n(q_λ^E)·X_0 + (α_A/β_A)·Σ_{k=1}^n a_k(q_λ^E)} =: V_{λ,X_0,n} is strictly decreasing,
(c) lim_{n→∞} H_λ(P_{A,n}‖P_{H,n}) = 0,
(d) lim_{n→∞} (1/n)·log H_λ(P_{A,n}‖P_{H,n}) = (α_A/β_A)·x_0(q_λ^E) < 0,
(e) the map X_0 ↦ V_{λ,X_0,n} is strictly decreasing.
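The decay rate in part (d) can be approximated by iterating the fixed-point equation x = q·e^x − β_λ behind (44). The following Python sketch is our own illustration, under the assumption that (36) reads a_{k+1}(q) = q·e^{a_k(q)} − β_λ with a_1(q) = q − β_λ; the P_SP,1 constellation used (equal fractions α_A/β_A = 5 = α_H/β_H) is hypothetical:

```python
import math

# hypothetical P_SP,1 constellation: alpha_A/beta_A = 5 = alpha_H/beta_H
beta_A, beta_H, alpha_A, alpha_H, lam, X0 = 0.4, 0.8, 2.0, 4.0, 0.5, 1
beta_lam = lam * beta_A + (1 - lam) * beta_H
q_E = beta_A**lam * beta_H**(1 - lam)

# x_0(q_E): limit of the recursion a_{k+1} = q_E*exp(a_k) - beta_lam
x = q_E - beta_lam
for _ in range(500):
    x = q_E * math.exp(x) - beta_lam
rate_limit = (alpha_A / beta_A) * x          # Proposition 4(d)

# finite-time rate (1/n)*log V_{lam,X0,n}, with V from Proposition 4(b)
n = 5000
a = q_E - beta_lam                           # a_1(q_E)
S = a                                        # running sum of a_k
for _ in range(n - 1):
    a = q_E * math.exp(a) - beta_lam
    S += a
rate_n = (a * X0 + (alpha_A / beta_A) * S) / n
```

The finite-time rate approaches the limit (α_A/β_A)·x_0(q_λ^E) < 0 at speed O(1/n), consistent with part (d).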
Proposition 5.
For all (β_A, β_H, α_A, α_H, λ) ∈ P_SP,1 × (ℝ∖[0,1]) and all initial population sizes X_0 ∈ ℕ there holds with q_λ^E := β_A^λ·β_H^{1−λ}
(a) H_λ(P_{A,1}‖P_{H,1}) = exp{(β_A^λ·β_H^{1−λ} − β_λ)·(X_0 + α_A/β_A)} > 1,
(b) the sequence (H_λ(P_{A,n}‖P_{H,n}))_{n∈ℕ} given by H_λ(P_{A,n}‖P_{H,n}) = exp{a_n(q_λ^E)·X_0 + (α_A/β_A)·Σ_{k=1}^n a_k(q_λ^E)} =: V_{λ,X_0,n} is strictly increasing,
(c) lim_{n→∞} H_λ(P_{A,n}‖P_{H,n}) = ∞,
(d) lim_{n→∞} (1/n)·log H_λ(P_{A,n}‖P_{H,n}) = (α_A/β_A)·x_0(q_λ^E) > 0, if λ ∈ [λ_−, λ_+]∖[0,1], and = ∞, if λ ∈ ]−∞, λ_−[ ∪ ]λ_+, ∞[,
(e) the map X_0 ↦ V_{λ,X_0,n} is strictly increasing.
Due to the nature of the equal-fraction case P_SP,1, in the assertions (a), (b), (d) of Propositions 4 and 5 the fraction α_A/β_A can be equivalently replaced by α_H/β_H.
Remark 2.
For the setup of GWI with Poisson offspring but nonstochastic immigration of constant value 1, which is incompatible with our context, an “analogue” of part (d) of Propositions 4 resp. 5 was established in Linkov & Lunyova [53].

3.4. Some Preparatory Basic Facts for the Remaining Cases (β_A, β_H, α_A, α_H) ∈ P_SP ∖ P_SP,1

The bounds B^L_{λ,X_0,n}, B^U_{λ,X_0,n} for the Hellinger integral introduced in formula (40) in Theorem 1 can be chosen arbitrarily from a (p_λ^L, q_λ^L, p_λ^U, q_λ^U)-indexed set of context-specific parameters satisfying (34), or equivalently (35).
In order to derive bounds which are optimal with respect to goals that will be discussed later, the following monotonicity properties of the sequences (a_n(q))_{n∈ℕ} and (b_n(p,q))_{n∈ℕ} (cf. (36), (37)), for general, context-independent parameters q and p, will turn out to be very useful:
Properties 2.
(P10) 
For 0 ≤ q_1 < q_2 < ∞ there holds a_n(q_1) < a_n(q_2) for all n ∈ ℕ.
(P11) 
For each fixed q ≥ 0 and 0 ≤ p_1 < p_2 < ∞ there holds b_n(p_1, q) < b_n(p_2, q) for all n ∈ ℕ.
(P12) 
For fixed p > 0 and 0 ≤ q_1 < q_2 it follows that b_n(p, q_1) < b_n(p, q_2) for all n ∈ ℕ.
(P13) 
Suppose that 0 ≤ p_1 < p_2 and 0 ≤ q_2 < q_1. For fixed n ∈ ℕ, no dominance assertion can be conjectured for b_n(p_1, q_1), b_n(p_2, q_2). As an example, consider the setup (β_A, β_H, α_A, α_H, λ) = (0.4, 0.8, 5, 3, 0.5); within our running-example epidemiological context of Section 2.3, this corresponds to a “nearly dangerous” infectious-disease-transmission situation (H) (with nearly critical reproduction number β_H = 0.8 and importation mean α_H = 3), whereas (A) describes a “mild” situation (with “low” subcritical β_A = 0.4 and α_A = 5). On the nonnegative real line, the function ϕ_λ(x) can be bounded from above by the linear functions ϕ_λ^{U,1}(x) := p_1 + q_1·x := 4.040 + 0.593·x as well as by ϕ_λ^{U,2}(x) := p_2 + q_2·x := 4.110 + 0.584·x. Clearly, p_1 < p_2 and q_1 > q_2. Let us show the first eight elements and the respective limits of the corresponding sequences b_n(p_1, q_1), b_n(p_2, q_2):
n               | 1     | 2     | 3      | 4      | 5      | 6      | 7      | 8      | limit
b_n(p_1, q_1)   | 0.040 | 0.011 | −0.005 | −0.015 | −0.021 | −0.024 | −0.026 | −0.028 | −0.029
b_n(p_2, q_2)   | 0.110 | 0.045 | 0.007  | −0.014 | −0.026 | −0.033 | −0.036 | −0.039 | −0.041
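The two rows of the table can be reproduced with a few lines of Python. The sketch below is our own reading of the recursions (36), (37) in the explicit form a_0 = 0, a_k(q) = q·e^{a_{k−1}(q)} − β_λ, b_k(p,q) = p·e^{a_{k−1}(q)} − α_λ; note that the displayed parameters p_1, q_1, p_2, q_2 are rounded to three decimals, so the recomputed first row may deviate from the table by one unit in the last digit:

```python
import math

beta_A, beta_H, alpha_A, alpha_H, lam = 0.4, 0.8, 5.0, 3.0, 0.5
beta_lam = lam * beta_A + (1 - lam) * beta_H      # = 0.6
alpha_lam = lam * alpha_A + (1 - lam) * alpha_H   # = 4.0

def b_seq(p, q, n_max):
    """b_k(p,q) = p*exp(a_{k-1}(q)) - alpha_lam, with a_0 = 0 and
    a_k(q) = q*exp(a_{k-1}(q)) - beta_lam (our reading of (36), (37))."""
    a, out = 0.0, []
    for _ in range(n_max):
        out.append(p * math.exp(a) - alpha_lam)
        a = q * math.exp(a) - beta_lam
    return out

row1 = b_seq(4.040, 0.593, 60)   # (p1, q1)
row2 = b_seq(4.110, 0.584, 60)   # (p2, q2)
```

The crossing visible in the table, where the first sequence starts below the second but eventually lies above it, is exactly the announced absence of a dominance ordering.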
(P14) 
For arbitrary 0 < p_1, p_2 and 0 ≤ q_1, q_2 ≤ min{1, e^{β_λ−1}} suppose that log(p_1) + x_0(q_1) < log(p_2) + x_0(q_2). Then there holds
p_1·e^{x_0(q_1)} − α_λ = lim_{n→∞} (1/n)·Σ_{k=1}^n b_k(p_1, q_1) < lim_{n→∞} (1/n)·Σ_{k=1}^n b_k(p_2, q_2) = p_2·e^{x_0(q_2)} − α_λ.
From (P10) to (P12) one deduces that both sequences (a_n(q))_{n∈ℕ} and (b_n(p,q))_{n∈ℕ} are monotone in the general parameters p, q ≥ 0. Thus, for the upper bound B^U_{λ,X_0,n} of the Hellinger integral we should use nonnegative context-specific parameters p_λ^U = p^U(β_A, β_H, α_A, α_H, λ) and q_λ^U = q^U(β_A, β_H, α_A, α_H, λ) which are as small as possible, and for the lower bound B^L_{λ,X_0,n} we should use nonnegative context-specific parameters p_λ^L = p^L(β_A, β_H, α_A, α_H, λ) and q_λ^L = q^L(β_A, β_H, α_A, α_H, λ) which are as large as possible, of course subject to the (equivalent) restrictions (34) and (35).
To find “optimal” parameter pairs, we have to study the following properties of the function ϕ_λ(·) = ϕ(·, β_A, β_H, α_A, α_H, λ) defined on [0,∞[ in (30) (which are also valid for the previous parameter context (β_A, β_H, α_A, α_H) ∈ (P_NI ∪ P_SP,1)):
Properties 3.
(P15) 
One has
ϕ_λ(x) = (α_A + β_A·x)^λ·(α_H + β_H·x)^{1−λ} − [λ·(α_A + β_A·x) + (1−λ)·(α_H + β_H·x)] { ≤ 0, if λ ∈ ]0,1[; ≥ 0, if λ ∈ ℝ∖[0,1] },
where equality holds iff f_A(x) = f_H(x) for some x ∈ [0,∞[, i.e., iff x = x* := (α_A − α_H)/(β_H − β_A) ∈ [0,∞[.
(P16) 
There holds
ϕ_λ(0) = α_A^λ·α_H^{1−λ} − α_λ { ≤ 0, if λ ∈ ]0,1[; ≥ 0, if λ ∈ ℝ∖[0,1] },
with equality iff α_A = α_H together with β_A ≠ β_H (cf. Lemma A1).
(P17) 
For all λ ∈ ℝ∖{0,1} one gets
ϕ_λ′(x) = λ·β_A·(f_A(x))^{λ−1}·(f_H(x))^{1−λ} + (1−λ)·β_H·(f_A(x))^λ·(f_H(x))^{−λ} − β_λ.
(P18) 
There holds
lim_{x→∞} ϕ_λ′(x) = β_A^λ·β_H^{1−λ} − β_λ { ≤ 0, if λ ∈ ]0,1[; ≥ 0, if λ ∈ ℝ∖[0,1] },
with equality iff β_A = β_H together with α_A ≠ α_H (cf. Lemma A1).
(P19) 
There holds
ϕ_λ″(x) = −λ·(1−λ)·(f_A(x))^{λ−2}·(f_H(x))^{−λ−1}·(α_A·β_H − α_H·β_A)² { ≤ 0, if λ ∈ ]0,1[; ≥ 0, if λ ∈ ℝ∖[0,1] },
with equality iff (β_A, β_H, α_A, α_H) ∈ (P_NI ∪ P_SP,1). Hence, for (β_A, β_H, α_A, α_H) ∈ P_SP ∖ P_SP,1, the function ϕ_λ is strictly concave (convex) for λ ∈ ]0,1[ (λ ∈ ℝ∖[0,1]). Notice that ϕ_λ′(0) = λ·β_A·(α_A/α_H)^{λ−1} + (1−λ)·β_H·(α_A/α_H)^λ − β_λ can be either negative (e.g., for the setups (β_A, β_H, α_A, α_H, λ) ∈ {(4,2,3,1,0.5), (4,2,5,1,2)}), or zero (e.g., for (β_A, β_H, α_A, α_H, λ) ∈ {(4,2,4,1,0.5), (4,2,3,1,2)}), or positive (e.g., for (β_A, β_H, α_A, α_H, λ) ∈ {(4,2,5,1,0.5), (4,2,2,1,2)}), where the exemplary parameter constellations have concrete interpretations in our running-example epidemiological context of Section 2.3. Accordingly, for λ ∈ ]0,1[, due to concavity and (P17), the function ϕ_λ(·) can be either strictly decreasing, or can attain its global maximum in ]0,∞[, or (only in the case β_A = β_H) can be strictly increasing. Analogously, for λ ∈ ℝ∖[0,1], the function ϕ_λ(·) can be either strictly increasing, or can attain its global minimum in ]0,∞[, or (only in the case β_A = β_H) can be strictly decreasing.
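The sign classification of ϕ_λ′(0) can be checked directly. The following Python snippet is our own illustration; it evaluates the formula from (P17) at x = 0 for the exemplary constellations listed above:

```python
def phi_prime_zero(beta_A, beta_H, alpha_A, alpha_H, lam):
    """phi_lam'(0) = lam*beta_A*(alpha_A/alpha_H)**(lam-1)
       + (1-lam)*beta_H*(alpha_A/alpha_H)**lam - beta_lam   (cf. (P17))."""
    beta_lam = lam * beta_A + (1 - lam) * beta_H
    r = alpha_A / alpha_H
    return lam * beta_A * r**(lam - 1) + (1 - lam) * beta_H * r**lam - beta_lam

signs_half = [phi_prime_zero(4, 2, a, 1, 0.5) for a in (3, 4, 5)]  # lambda = 0.5
signs_two = [phi_prime_zero(4, 2, a, 1, 2.0) for a in (5, 3, 2)]   # lambda = 2
```

For each λ, the three evaluations come out negative, zero, and positive, in the order of the example constellations in the text.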
(P20)
For all λ ∈ ℝ∖{0,1} one has
lim_{x→∞} [ϕ_λ(x) − (r̃_λ + s̃_λ·x)] = 0, for r̃_λ := p̃_λ − α_λ := λ·α_A·(β_A/β_H)^{λ−1} + (1−λ)·α_H·(β_A/β_H)^λ − α_λ and s̃_λ := q̃_λ − β_λ := β_A^λ·β_H^{1−λ} − β_λ.
The linear function ϕ̃_λ(x) := r̃_λ + s̃_λ·x constitutes the asymptote of ϕ_λ(·). Notice that if β_A = β_H one has s̃_λ = 0 = r̃_λ; if β_A ≠ β_H we have s̃_λ < 0 in the case λ ∈ ]0,1[ and s̃_λ > 0 if λ ∈ ℝ∖[0,1]. Furthermore, ϕ_λ(0) < r̃_λ if λ ∈ ]0,1[ and ϕ_λ(0) > r̃_λ if λ ∈ ℝ∖[0,1] (cf. Lemma A1 (c1) and (c2)). If α_A = α_H (and thus β_A ≠ β_H), then the intercept r̃_λ is strictly positive if λ ∈ ]0,1[ resp. strictly negative if λ ∈ ℝ∖[0,1]. In contrast, for the case α_A ≠ α_H, the intercept r̃_λ can assume any sign; take e.g., (β_A, β_H, α_A, α_H, λ) ∈ {(3.7, 0.9, 2.0, 1.0, 0.5), (4, 2, 1.6, 1, 2)} for r̃_λ > 0, (β_A, β_H, α_A, α_H, λ) ∈ {(3.6, 0.9, 2.0, 1.0, 0.5), (4, 2, 1.5, 1, 2)} for r̃_λ = 0, and (β_A, β_H, α_A, α_H, λ) ∈ {(3.5, 0.9, 2.0, 1.0, 0.5), (4, 2, 1.4, 1, 2)} for r̃_λ < 0; again, the exemplary parameter constellations have concrete interpretations in our running-example epidemiological context of Section 2.3.
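The asymptote parameters of (P20), and the vanishing gap ϕ_λ(x) − ϕ̃_λ(x), can be verified numerically. Here is a small Python check of our own, using the example constellations from above:

```python
def phi(x, bA, bH, aA, aH, lam):
    # phi_lam(x) as in (P15)
    fA, fH = aA + bA * x, aH + bH * x
    return fA**lam * fH**(1 - lam) - lam * fA - (1 - lam) * fH

def asymptote(bA, bH, aA, aH, lam):
    """(r_tilde, s_tilde) of (P20): phi_lam(x) - (r + s*x) -> 0 as x -> infinity."""
    beta_lam = lam * bA + (1 - lam) * bH
    alpha_lam = lam * aA + (1 - lam) * aH
    rho = bA / bH
    r = lam * aA * rho**(lam - 1) + (1 - lam) * aH * rho**lam - alpha_lam
    s = bA**lam * bH**(1 - lam) - beta_lam
    return r, s

r_pos, _ = asymptote(3.7, 0.9, 2.0, 1.0, 0.5)
r_zero, _ = asymptote(3.6, 0.9, 2.0, 1.0, 0.5)
r_neg, s = asymptote(3.5, 0.9, 2.0, 1.0, 0.5)
gap = phi(1e5, 3.5, 0.9, 2.0, 1.0, 0.5) - (r_neg + s * 1e5)
```

The three intercepts reproduce the sign pattern claimed above, and the gap at x = 10^5 is already negligible.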
The properties (P15) to (P20) above describe in detail the characteristics of the function ϕ_λ(·) = ϕ(·, β_A, β_H, α_A, α_H, λ). In the previous parameter setup P_NI ∪ P_SP,1, this function is linear, which can be seen from (P19). In the current parameter setup P_SP ∖ P_SP,1, this function can basically be classified into four different types. From (P16) to (P20) it is easy to see that for all current parameter constellations the particular choices
p_λ^A := α_A^λ·α_H^{1−λ} > 0, q_λ^A := β_A^λ·β_H^{1−λ} > 0,    (45)
which correspond to the following choices in (35),
r_λ^A := α_A^λ·α_H^{1−λ} − α_λ ≤ 0 (resp. ≥ 0), s_λ^A := β_A^λ·β_H^{1−λ} − β_λ ≤ 0 (resp. ≥ 0),
where A = L (resp. A = U), lead to the tightest lower bound B^L_{λ,X_0,n} (resp. upper bound B^U_{λ,X_0,n}) for H_λ(P_{A,n}‖P_{H,n}) in (40) in the case λ ∈ ]0,1[ (resp. λ ∈ ℝ∖[0,1]). Notice that for the previous parameter setup (β_A, β_H, α_A, α_H) ∈ (P_NI ∪ P_SP,1) these choices led to the exact values of the Hellinger integral and to the simplification p_λ^E/q_λ^E·β_λ − α_λ = 0, which implies b_n(p_λ^E, q_λ^E) = (α_A/β_A)·a_n(q_λ^E). In contrast, in the current parameter setup (β_A, β_H, α_A, α_H) ∈ P_SP ∖ P_SP,1 we only derive the optimal lower (resp. upper) bound for λ ∈ ]0,1[ (resp. λ ∈ ℝ∖[0,1]) by using the parameters p_λ^A, q_λ^A for A = L (resp. A = U), and p_λ^A/q_λ^A·β_λ − α_λ ≠ 0. For a better distinguishability and easier reference we thus stick to the L notation (resp. U notation) here.

3.5. Lower Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP ∖ P_SP,1) × ]0,1[

The discussion above implies that the lower bound B^L_{λ,X_0,n} for the Hellinger integral H_λ(P_{A,n}‖P_{H,n}) in (40) is optimal for the choices p_λ^L, q_λ^L > 0 defined in (45). If β_A ≠ β_H, due to Properties 1 (P1) and Lemma A1, the sequence (a_n(q_λ^L))_{n∈ℕ} is strictly negative, strictly decreasing, and converges to the unique negative solution of the Equation (44). Furthermore, due to (P5), the sequence (b_n(p_λ^L, q_λ^L))_{n∈ℕ}, as defined in (37), is strictly decreasing. Since b_1(p_λ^L, q_λ^L) = p_λ^L − α_λ ≤ 0 by Lemma A1, with equality iff α_A = α_H, the sequence (b_n(p_λ^L, q_λ^L))_{n∈ℕ} is also strictly negative (with the exception b_1(p_λ^L, q_λ^L) = 0 for α_A = α_H) and strictly decreasing. If β_A = β_H, and thus α_A ≠ α_H, due to (P2), (P6) and Lemma A1 there holds a_n(q_λ^L) ≡ 0 and b_n(p_λ^L, q_λ^L) ≡ p_λ^L − α_λ < 0. Thus, analogously to the cases P_NI ∪ P_SP,1 we obtain
Proposition 6.
For all (β_A, β_H, α_A, α_H, λ) ∈ (P_SP ∖ P_SP,1) × ]0,1[ and all initial population sizes X_0 ∈ ℕ there holds with p_λ^L := α_A^λ·α_H^{1−λ}, q_λ^L := β_A^λ·β_H^{1−λ}
(a) B^L_{λ,X_0,1} = exp{(β_A^λ·β_H^{1−λ} − β_λ)·X_0 + α_A^λ·α_H^{1−λ} − α_λ} < 1,
(b) the sequence (B^L_{λ,X_0,n})_{n∈ℕ} of lower bounds for H_λ(P_{A,n}‖P_{H,n}) given by B^L_{λ,X_0,n} = exp{a_n(q_λ^L)·X_0 + (p_λ^L/q_λ^L)·Σ_{k=1}^n a_k(q_λ^L) + n·(p_λ^L/q_λ^L·β_λ − α_λ)} is strictly decreasing,
(c) lim_{n→∞} B^L_{λ,X_0,n} = 0,
(d) lim_{n→∞} (1/n)·log B^L_{λ,X_0,n} = (p_λ^L/q_λ^L)·(x_0(q_λ^L) + β_λ) − α_λ = p_λ^L·e^{x_0(q_λ^L)} − α_λ < 0,
(e) the map X_0 ↦ B^L_{λ,X_0,n} is strictly decreasing.
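Part (d) of Proposition 6 is easy to evaluate: iterate the recursion behind (44) to approximate x_0(q_λ^L) and plug it into p_λ^L·e^{x_0(q_λ^L)} − α_λ. The Python sketch below is our own illustration; it assumes the recursion a_{k+1}(q) = q·e^{a_k(q)} − β_λ with a_1(q) = q − β_λ, and reuses the example constellation of (P13):

```python
import math

def lower_bound_rate(bA, bH, aA, aH, lam, iters=500):
    """Approximates lim (1/n) log B^L_{lam,X0,n} = p^L*exp(x_0(q^L)) - alpha_lam
    (Proposition 6(d)), with p^L = aA**lam * aH**(1-lam) and
    q^L = bA**lam * bH**(1-lam)."""
    beta_lam = lam * bA + (1 - lam) * bH
    alpha_lam = lam * aA + (1 - lam) * aH
    p = aA**lam * aH**(1 - lam)
    q = bA**lam * bH**(1 - lam)
    x = q - beta_lam                       # a_1(q^L)
    for _ in range(iters):                 # converges to x_0(q^L) for lam in ]0,1[
        x = q * math.exp(x) - beta_lam
    return p * math.exp(x) - alpha_lam

rate = lower_bound_rate(0.4, 0.8, 5.0, 3.0, 0.5)   # setup of (P13)
```

The negative rate quantifies the exponential decay of the lower bound, in line with part (c).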

3.6. Goals for Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ (P_SP ∖ P_SP,1) × ]0,1[

For parameter constellations (β_A, β_H, α_A, α_H, λ) ∈ (P_SP ∖ P_SP,1) × ]0,1[, in contrast to the treatment of the lower bounds (cf. the previous Section 3.5), the fine-tuning of the upper bounds of the Hellinger integrals H_λ(P_{A,n}‖P_{H,n}) is much more involved. To begin with, let us mention that the monotonicity-concerning Properties 2 (P10) to (P12) imply that for a tight upper bound B^U_{λ,X_0,n} (cf. (40)) one should choose parameters p_λ^U ≥ p_λ^L > 0, q_λ^U ≥ q_λ^L > 0 as small as possible. Due to the concavity (cf. Properties 3 (P19)) of the function ϕ_λ(·), the linear upper bound ϕ_λ^U(·) (on the ultimately relevant subdomain ℕ_0) thus must hit the function ϕ_λ(·) in at least one point x ∈ ℕ_0, which corresponds to some “discrete tangent line” of ϕ_λ(·) at x, or in at most two points x, x+1 ∈ ℕ_0, which corresponds to the secant line of ϕ_λ(·) across its arguments x and x+1. Accordingly, there is in general no overall best upper bound; of course, one way to obtain “good” upper bounds for H_λ(P_{A,n}‖P_{H,n}) is to solve the optimization problem
(p̄_λ^U, q̄_λ^U) := argmin_{(p_λ^U, q_λ^U)} exp{a_n(q_λ^U)·X_0 + Σ_{k=1}^n b_k(p_λ^U, q_λ^U)},
subject to the constraint (35). However, the corresponding result generally depends on the particular choice of the initial population X_0 ∈ ℕ and on the observation time horizon n ∈ ℕ. Hence, there is in general no overall optimal choice of p_λ^U, q_λ^U without the incorporation of further goal-dependent constraints, such as lim_{n→∞} B^U_{λ,X_0,n} = 0 in case of lim_{n→∞} H_λ(P_{A,n}‖P_{H,n}) = 0. By the way, mainly because of the non-explicitness of the sequence (a_n(q_λ^U))_{n∈ℕ} (due to the generally not explicitly solvable recursion (36)) and the discreteness of the constraint (35), this optimization problem seems not to be straightforward to solve anyway. The choice of parameters p_λ^U, q_λ^U for the upper bound B^U_{λ,X_0,n} ≥ H_λ(P_{A,n}‖P_{H,n}) can be made according to different, partially incompatible (“optimality-” resp. “goodness-”) criteria and goals, such as:
(G1)
the validity of B^U_{λ,X_0,n} < 1 simultaneously for all initial configurations X_0 ∈ ℕ, all observation horizons n ∈ ℕ and all λ ∈ ]0,1[, which leads to a strict improvement of the general upper bound H_λ(P_{A,n}‖P_{H,n}) < 1 (cf. (9));
(G2)
the determination of the long-term limits lim_{n→∞} H_λ(P_{A,n}‖P_{H,n}) respectively lim_{n→∞} B^U_{λ,X_0,n} for all X_0 ∈ ℕ and all λ ∈ ]0,1[; in particular, one would like to check whether lim_{n→∞} H_λ(P_{A,n}‖P_{H,n}) = 0, which implies that the families of probability distributions (P_{A,n})_{n∈ℕ} and (P_{H,n})_{n∈ℕ} are asymptotically distinguishable (entirely separated), cf. (25);
(G3)
the determination of the time-asymptotical growth rates lim_{n→∞} (1/n)·log H_λ(P_{A,n}‖P_{H,n}) resp. lim_{n→∞} (1/n)·log B^U_{λ,X_0,n} for all X_0 ∈ ℕ and all λ ∈ ]0,1[.
Further goals, with which we do not deal here for the sake of brevity, are for instance (i) a very good tightness of the upper bound B^U_{λ,X_0,n} for n ≤ N for some fixed large N ∈ ℕ, or (ii) the criterion (G1) with fixed (rather than arbitrary) initial population size X_0 ∈ ℕ.
Let us briefly discuss the three Goals (G1) to (G3) and their challenges: due to Theorem 1, Goal (G1) can only be achieved if the sequence (a_n(q_λ^U))_{n∈ℕ} is non-increasing, since otherwise, for each fixed observation horizon n ∈ ℕ there is a large enough initial population size X_0 such that the upper bound component B̃_{λ,X_0,n}(p_λ^U, q_λ^U) becomes larger than 1, and thus B^U_{λ,X_0,n} = 1 (cf. (40)). Hence, Properties 1 (P1) and (P2) imply that one should have q_λ^U ≤ β_λ. Then, the sequence (b_n(p_λ^U, q_λ^U))_{n∈ℕ} is also non-increasing. However, since b_n(p_λ^U, q_λ^U) might be positive for some (even all) n ∈ ℕ, the sum (Σ_{k=1}^n b_k(p_λ^U, q_λ^U))_{n∈ℕ} is not necessarily decreasing. Nevertheless, the restriction
q_λ^U − β_λ ≤ 0 and p_λ^U − α_λ ≤ 0, where at least one of the inequalities is strict,    (47)
ensures that both sequences (a_n(q_λ^U))_{n∈ℕ} and (b_n(p_λ^U, q_λ^U))_{n∈ℕ} are nonpositive and non-increasing, where at least one sequence is strictly negative, implying that the sum (Σ_{k=1}^n b_k(p_λ^U, q_λ^U))_{n∈ℕ} is strictly negative for n ≥ 2 and strictly decreasing. To see this, suppose that (47) is satisfied with two strict inequalities. Then, (a_n(q_λ^U))_{n∈ℕ} as well as (b_n(p_λ^U, q_λ^U))_{n∈ℕ} are strictly negative and strictly decreasing. If q_λ^U = β_λ and p_λ^U < α_λ, we see from (P2) and (P6) that a_n(q_λ^U) ≡ 0 and that b_n(p_λ^U, q_λ^U) ≡ p_λ^U − α_λ < 0 (notice that α_λ = 0 is not possible in the current setup P_SP ∖ P_SP,1 and for λ ∈ ]0,1[). In the last case q_λ^U < β_λ and p_λ^U = α_λ, from (P1) and (P5) it follows that (a_n(q_λ^U))_{n∈ℕ} is strictly negative and strictly decreasing, as well as that b_1(p_λ^U, q_λ^U) = 0 and (b_n(p_λ^U, q_λ^U))_{n∈ℕ} is strictly decreasing and strictly negative for n ≥ 2. Thus, whenever (47) is satisfied, the sum (Σ_{k=1}^n b_k(p_λ^U, q_λ^U))_{n∈ℕ} is strictly negative for n ≥ 2 and strictly decreasing.
To achieve Goal (G2), we have to require that the sequence (a_n(q_λ^U))_{n∈ℕ} converges, which is the case if either q_λ^U ≤ β_λ or β_λ < q_λ^U ≤ min{1, e^{β_λ−1}} (cf. Properties 1 (P1) to (P3)). From the upper bound component B̃_{λ,X_0,n}(p_λ^U, q_λ^U) (42) we conclude that Goal (G2) is met if the sequence (b_n(p_λ^U, q_λ^U))_{n∈ℕ} converges to a negative limit, i.e., lim_{n→∞} b_n(p_λ^U, q_λ^U) = p_λ^U·e^{x_0(q_λ^U)} − α_λ < 0. Notice that this condition holds true if (47) is satisfied: suppose that q_λ^U < β_λ; then x_0(q_λ^U) < 0 and p_λ^U·e^{x_0(q_λ^U)} − α_λ < p_λ^U − α_λ ≤ 0. On the other hand, if p_λ^U − α_λ < 0, one obtains x_0(q_λ^U) ≤ 0, leading to p_λ^U·e^{x_0(q_λ^U)} − α_λ ≤ p_λ^U − α_λ < 0.
The examination of Goal (G2) above enters into the discussion of Goal (G3): if the sequence (a_n(q_λ^U))_{n∈ℕ} converges and lim_{n→∞} B^U_{λ,X_0,n} = 0, then there holds
lim_{n→∞} (1/n)·log B^U_{λ,X_0,n} = lim_{n→∞} (1/n)·log B̃_{λ,X_0,n}(p_λ^U, q_λ^U) = p_λ^U·e^{x_0(q_λ^U)} − α_λ.    (48)
For the case (β_A, β_H, α_A, α_H, λ) ∈ (P_SP ∖ P_SP,1) × ]0,1[, let us now start with our comprehensive investigations of the upper bounds, where we focus on fulfilling the condition (47), which tackles Goals (G1) and (G2) simultaneously; then, Goal (G3) can be achieved by (48). As indicated above, various different parameter subcases can lead to different Hellinger-integral-upper-bound details, which we work out in the following. For better transparency, we employ the following notations (where the first four are just reminders of sets which were already introduced above):
P_NI := {(β_A, β_H, α_A, α_H) ∈ [0,∞[⁴ : α_A = α_H = 0; β_A > 0; β_H > 0; β_A ≠ β_H},
P_SP := {(β_A, β_H, α_A, α_H) ∈ ]0,∞[⁴ : (α_A ≠ α_H) or (β_A ≠ β_H) or both},
P := P_NI ∪ P_SP,
P_SP,1 := {(β_A, β_H, α_A, α_H) ∈ P_SP : α_A ≠ α_H, β_A ≠ β_H, α_A/β_A = α_H/β_H},
P_SP,2 := {(β_A, β_H, α_A, α_H) ∈ P_SP : α_A = α_H, β_A ≠ β_H},
P_SP,3 := {(β_A, β_H, α_A, α_H) ∈ P_SP : α_A ≠ α_H, β_A ≠ β_H, α_A/β_A ≠ α_H/β_H} = P_SP,3a ∪ P_SP,3b ∪ P_SP,3c,
P_SP,3a := {(β_A, β_H, α_A, α_H) ∈ P_SP : α_A ≠ α_H, β_A ≠ β_H, α_A/β_A ≠ α_H/β_H, (α_A − α_H)/(β_H − β_A) ∈ ]−∞,0[},
P_SP,3b := {(β_A, β_H, α_A, α_H) ∈ P_SP : α_A ≠ α_H, β_A ≠ β_H, α_A/β_A ≠ α_H/β_H, (α_A − α_H)/(β_H − β_A) ∈ ]0,∞[∖ℕ},
P_SP,3c := {(β_A, β_H, α_A, α_H) ∈ P_SP : α_A ≠ α_H, β_A ≠ β_H, α_A/β_A ≠ α_H/β_H, (α_A − α_H)/(β_H − β_A) ∈ ℕ},
P_SP,4 := {(β_A, β_H, α_A, α_H) ∈ P_SP : α_A ≠ α_H > 0, β_A = β_H} = P_SP,4a ∪ P_SP,4b,
P_SP,4a := {(β_A, β_H, α_A, α_H) ∈ P_SP : α_A ≠ α_H > 0, β_A = β_H ∈ ]0,1[},
P_SP,4b := {(β_A, β_H, α_A, α_H) ∈ P_SP : α_A ≠ α_H > 0, β_A = β_H ∈ [1,∞[};
notice that because of Lemma A1 and of the Properties 3 (P15), one gets on the domain ]0,∞[ the relation ϕ_λ(x) = 0 iff f_A(x) = f_H(x) iff x = x* := (α_H − α_A)/(β_A − β_H) ∈ ]0,∞[.
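The case distinction above is mechanical and can be encoded directly. The following Python helper is our own illustration (the subclass labels and the numerical tolerance for the equality checks are our conventions); it classifies a quadruple from P_SP into the subclasses used in the Sections 3.7 to 3.12:

```python
def classify(beta_A, beta_H, alpha_A, alpha_H, tol=1e-9):
    """Classify a quadruple of P_SP (all entries > 0) into the
    subclasses P_SP,1 ... P_SP,4b; labels are shortened to 'SP1', etc."""
    if abs(beta_A - beta_H) < tol:                       # P_SP,4: equal offspring means
        return "SP4a" if beta_A < 1 else "SP4b"
    if abs(alpha_A - alpha_H) < tol:                     # P_SP,2: equal immigration means
        return "SP2"
    if abs(alpha_A / beta_A - alpha_H / beta_H) < tol:   # P_SP,1: equal fractions
        return "SP1"
    x_star = (alpha_A - alpha_H) / (beta_H - beta_A)     # zero of phi_lam, cf. (P15)
    if x_star < 0:
        return "SP3a"
    return "SP3c" if abs(x_star - round(x_star)) < tol else "SP3b"
```

Applied to the running examples of the Sections 3.8 to 3.10, the constellation (1.8, 0.9, 2.7, 0.7) lands in P_SP,3a, (1.8, 0.9, 1.1, 3.0) in P_SP,3b, and (1.8, 0.9, 1.2, 3.0) in P_SP,3c.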

3.7. Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,2 × ]0,1[

For this parameter constellation, one has ϕ_λ(0) = 0 and ϕ_λ′(0) = 0 (cf. Properties 3 (P16), (P17)). Thus, the only admissible intercept choice satisfying (47) is r_λ^U = 0 = p_λ^U − α_λ (i.e., p_λ^U = p^U(β_A, β_H, α_A, α_H, λ) = α_λ = α > 0), and the minimal admissible slope which implies (35) for all x ∈ ℕ_0 is given by s_λ^U = (ϕ_λ(1) − ϕ_λ(0))/(1 − 0) = q_λ^U − β_λ = a_1(q_λ^U) < 0 (i.e., q_λ^U = q^U(β_A, β_H, α_A, α_H, λ) = (α+β_A)^λ·(α+β_H)^{1−λ} − α > 0). Analogously to the investigation for P_SP,1 in the above-mentioned Section 3.3, one can derive that (a_n(q_λ^U))_{n∈ℕ} is strictly negative, strictly decreasing, and converges to x_0(q_λ^U) ∈ ]−β_λ, q_λ^U − β_λ[ as indicated in Properties 1 (P1). Moreover, in the same manner as for the case P_SP,1 this leads to
Proposition 7.
For all (β_A, β_H, α_A, α_H, λ) ∈ P_SP,2 × ]0,1[ and all initial population sizes X_0 ∈ ℕ there holds with p_λ^U = α, q_λ^U = (α+β_A)^λ·(α+β_H)^{1−λ} − α
(a) B^U_{λ,X_0,1} = exp{(q_λ^U − β_λ)·X_0} < 1,
(b) the sequence (B^U_{λ,X_0,n})_{n∈ℕ} of upper bounds for H_λ(P_{A,n}‖P_{H,n}) given by B^U_{λ,X_0,n} = exp{a_n(q_λ^U)·X_0 + Σ_{k=1}^n b_k(p_λ^U, q_λ^U)} is strictly decreasing,
(c) lim_{n→∞} B^U_{λ,X_0,n} = 0 = lim_{n→∞} H_λ(P_{A,n}‖P_{H,n}),
(d) lim_{n→∞} (1/n)·log B^U_{λ,X_0,n} = p_λ^U·e^{x_0(q_λ^U)} − α_λ = α·(e^{x_0(q_λ^U)} − 1) < 0,
(e) the map X_0 ↦ B^U_{λ,X_0,n} is strictly decreasing.
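The bound of Proposition 7 is fully explicit and can be evaluated recursively. Here is a Python sketch of our own, with hypothetical P_SP,2 parameters; it assumes that the recursions (36), (37) read a_k(q) = q·e^{a_{k−1}(q)} − β_λ and b_k(p,q) = p·e^{a_{k−1}(q)} − α_λ with a_0 = 0:

```python
import math

def upper_bounds_SP2(beta_A, beta_H, alpha, lam, X0, n_max):
    """B^U_{lam,X0,n} of Proposition 7 for (beta_A, beta_H, alpha, alpha) in P_SP,2,
    with p^U = alpha and q^U = (alpha+beta_A)**lam * (alpha+beta_H)**(1-lam) - alpha."""
    beta_lam = lam * beta_A + (1 - lam) * beta_H
    p = alpha                                    # = alpha_lam, since alpha_A = alpha_H
    q = (alpha + beta_A)**lam * (alpha + beta_H)**(1 - lam) - alpha
    a, S, out = 0.0, 0.0, []
    for _ in range(n_max):
        S += p * math.exp(a) - alpha             # add b_k(p, q), using a_{k-1}
        a = q * math.exp(a) - beta_lam           # update to a_k(q)
        out.append(math.exp(a * X0 + S))
    return out

B = upper_bounds_SP2(0.4, 0.8, 2.0, 0.5, 3, 200)
```

In line with parts (a) to (c), the sequence starts strictly below 1, decreases strictly, and tends to zero at the exponential rate α·(e^{x_0(q_λ^U)} − 1).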

3.8. Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,3a × ]0,1[

From Properties 3 (P16) one gets ϕ_λ(0) < 0, whereas ϕ_λ′(0) can assume any sign; take e.g., the parameters (β_A, β_H, α_A, α_H, λ) = (1.8, 0.9, 2.7, 0.7, 0.5) for ϕ_λ′(0) < 0, (β_A, β_H, α_A, α_H, λ) = (1.8, 0.9, 2.8, 0.7, 0.5) for ϕ_λ′(0) = 0 and (β_A, β_H, α_A, α_H, λ) = (1.8, 0.9, 2.9, 0.7, 0.5) for ϕ_λ′(0) > 0; within our running-example epidemiological context of Section 2.3, this corresponds to a “nearly dangerous” infectious-disease-transmission situation (H) (with nearly critical reproduction number β_H = 0.9 and importation mean α_H = 0.7), whereas (A) describes a “dangerous” situation (with supercritical β_A = 1.8 and α_A = 2.7, 2.8, 2.9). However, in all three subcases there holds max_{x∈ℕ_0} ϕ_λ(x) ≤ max_{x∈[0,∞[} ϕ_λ(x) < 0. Thus, there clearly exist parameters p_λ^U = p^U(β_A, β_H, α_A, α_H, λ), q_λ^U = q^U(β_A, β_H, α_A, α_H, λ) with p_λ^U ∈ [α_A^λ·α_H^{1−λ}, α_λ[ and q_λ^U ∈ [β_A^λ·β_H^{1−λ}, β_λ[ (implying (47)) such that (35) is satisfied. As explained above, we get the following
Proposition 8.
For all (β_A, β_H, α_A, α_H, λ) ∈ P_SP,3a × ]0,1[ there exist parameters p_λ^U, q_λ^U which satisfy p_λ^U ∈ [α_A^λ·α_H^{1−λ}, α_λ[ and q_λ^U ∈ [β_A^λ·β_H^{1−λ}, β_λ[ as well as (35) for all x ∈ ℕ_0, and for all such pairs (p_λ^U, q_λ^U) and all initial population sizes X_0 ∈ ℕ there holds
(a) B^U_{λ,X_0,1} = exp{(q_λ^U − β_λ)·X_0 + p_λ^U − α_λ} < 1,
(b) the sequence (B^U_{λ,X_0,n})_{n∈ℕ} of upper bounds for H_λ(P_{A,n}‖P_{H,n}) given by B^U_{λ,X_0,n} = exp{a_n(q_λ^U)·X_0 + Σ_{k=1}^n b_k(p_λ^U, q_λ^U)} is strictly decreasing,
(c) lim_{n→∞} B^U_{λ,X_0,n} = 0 = lim_{n→∞} H_λ(P_{A,n}‖P_{H,n}),
(d) lim_{n→∞} (1/n)·log B^U_{λ,X_0,n} = p_λ^U·e^{x_0(q_λ^U)} − α_λ < 0,
(e) the map X_0 ↦ B^U_{λ,X_0,n} is strictly decreasing.
Notice that all parts of this proposition also hold true for parameter pairs (p_λ^U, q_λ^U) satisfying (35) and additionally either p_λ^U = α_λ, q_λ^U < β_λ or p_λ^U < α_λ, q_λ^U = β_λ.
Let us briefly illuminate the above-mentioned possible parameter choices, where we begin with the case ϕ_λ′(0) ≤ 0, which corresponds to λ·β_A·(α_A/α_H)^{λ−1} + (1−λ)·β_H·(α_A/α_H)^λ − β_λ ≤ 0 (cf. (P17)); then, the function ϕ_λ(·) is strictly negative, strictly decreasing, and, due to (P19), strictly concave (and thus, the assumption (α_H − α_A)/(β_A − β_H) < 0 is superfluous here). One pragmatic but yet reasonable parameter choice is the following: take any intercept p_λ^U ∈ [α_A^λ·α_H^{1−λ}, α_λ] such that (p_λ^U − α_λ) + 2·(ϕ_λ(1) − (p_λ^U − α_λ)) ≤ ϕ_λ(2) (i.e., 2·(α_A+β_A)^λ·(α_H+β_H)^{1−λ} ≤ p_λ^U + (α_A+2β_A)^λ·(α_H+2β_H)^{1−λ}) and q_λ^U := ϕ_λ(1) − (p_λ^U − α_λ) + β_λ = (α_A+β_A)^λ·(α_H+β_H)^{1−λ} − p_λ^U, which corresponds to a linear function ϕ_λ^U which is (i) nonpositive on ℕ_0 and strictly negative on ℕ, and (ii) larger than or equal to ϕ_λ on ℕ_0, strictly larger than ϕ_λ on ℕ∖{1,2}, and equal to ϕ_λ at the point x = 1 (“discrete tangent or secant line through x = 1”). One can easily see that (due to the restriction (34)) not all p_λ^U ∈ [α_A^λ·α_H^{1−λ}, α_λ] might qualify for the current purpose. For the particular choice p_λ^U = α_A^λ·α_H^{1−λ} and q_λ^U = (α_A+β_A)^λ·(α_H+β_H)^{1−λ} − α_A^λ·α_H^{1−λ} one obtains r_λ^U = p_λ^U − α_λ = b_1(p_λ^U, q_λ^U) < 0 (cf. Lemma A1) and s_λ^U = q_λ^U − β_λ = ϕ_λ(1) − ϕ_λ(0) = a_1(q_λ^U) < 0 (secant line through ϕ_λ(0) and ϕ_λ(1)).
For the remaining case ϕ_λ′(0) > 0, which corresponds to λ·β_A·(α_A/α_H)^{λ−1} + (1−λ)·β_H·(α_A/α_H)^λ − β_λ > 0, the function ϕ_λ(·) is strictly negative, strictly concave and hump-shaped (cf. (P18)). For the derivation of the parameter choices, we employ x_max := argmax_{x∈]0,∞[} ϕ_λ(x), which is the unique solution of
λ·β_A·[(f_A(x)/f_H(x))^{λ−1} − 1] + (1−λ)·β_H·[(f_A(x)/f_H(x))^λ − 1] = 0, x ∈ ]0,∞[,    (50)
(cf. (P17), (P19)); notice that x = x* := (α_H − α_A)/(β_A − β_H) formally satisfies the Equation (50) but does not qualify because of the current restriction x* < 0.
Let us first inspect the case ϕ_λ(⌊x_max⌋) > ϕ_λ(⌊x_max⌋+1), where ⌊x⌋ denotes the integer part of x. Consider the subcase ϕ_λ(⌊x_max⌋) + ⌊x_max⌋·(ϕ_λ(⌊x_max⌋) − ϕ_λ(⌊x_max⌋+1)) ≤ 0, which means that the secant line through ϕ_λ(⌊x_max⌋) and ϕ_λ(⌊x_max⌋+1) possesses a non-positive intercept. In this situation it is reasonable to choose as intercept any p_λ^U − α_λ = b_1(p_λ^U, q_λ^U) = r_λ^U ∈ [ϕ_λ(⌊x_max⌋), ϕ_λ(⌊x_max⌋) + ⌊x_max⌋·(ϕ_λ(⌊x_max⌋) − ϕ_λ(⌊x_max⌋+1))], and as corresponding slope q_λ^U − β_λ = a_1(q_λ^U) = s_λ^U = (ϕ_λ(⌊x_max⌋) − r_λ^U)/(⌊x_max⌋ − 0) ≤ 0. A larger intercept would lead to a linear function ϕ_λ^U for which (35) is not valid at ⌊x_max⌋+1. In the other subcase ϕ_λ(⌊x_max⌋) + ⌊x_max⌋·(ϕ_λ(⌊x_max⌋) − ϕ_λ(⌊x_max⌋+1)) > 0, one can choose any intercept p_λ^U − α_λ = b_1(p_λ^U, q_λ^U) = r_λ^U ∈ [ϕ_λ(⌊x_max⌋), 0] and as corresponding slope q_λ^U − β_λ = a_1(q_λ^U) = s_λ^U = (ϕ_λ(⌊x_max⌋) − r_λ^U)/(⌊x_max⌋ − 0) ≤ 0 (notice that the corresponding line ϕ_λ^U is on ]⌊x_max⌋, ∞[ strictly larger than the secant line through ϕ_λ(⌊x_max⌋) and ϕ_λ(⌊x_max⌋+1)).
If ϕ_λ(⌊x_max⌋) ≤ ϕ_λ(⌊x_max⌋+1), one can proceed as above by substituting the crucial pair of points (⌊x_max⌋, ⌊x_max⌋+1) with (⌊x_max⌋+1, ⌊x_max⌋+2) and examining the analogous two subcases.
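For the hump-shaped subcase, the construction above can be automated: locate x_max by bisection on ϕ_λ′, then take the secant of ϕ_λ through ⌊x_max⌋ and ⌊x_max⌋+1. The following Python sketch is our own illustration for the constellation (1.8, 0.9, 2.9, 0.7, 0.5) with ϕ_λ′(0) > 0 from Section 3.8 (here ⌊x_max⌋ = 0, so the secant runs through x = 0 and x = 1); it checks that the resulting line satisfies (35) on ℕ_0 together with the sign condition (47):

```python
import math

bA, bH, aA, aH, lam = 1.8, 0.9, 2.9, 0.7, 0.5     # P_SP,3a with phi'(0) > 0
beta_lam = lam * bA + (1 - lam) * bH

def phi(x):
    # phi_lam(x) as in (P15)
    fA, fH = aA + bA * x, aH + bH * x
    return fA**lam * fH**(1 - lam) - lam * fA - (1 - lam) * fH

def dphi(x):
    # phi_lam'(x) as in (P17)
    ratio = (aA + bA * x) / (aH + bH * x)
    return lam * bA * ratio**(lam - 1) + (1 - lam) * bH * ratio**lam - beta_lam

lo, hi = 0.0, 1e3            # dphi is strictly decreasing (phi strictly concave)
for _ in range(200):         # bisection for x_max, where dphi(x_max) = 0
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if dphi(mid) > 0 else (lo, mid)
x_max = lo
k = math.floor(x_max)

s = phi(k + 1) - phi(k)      # slope of the secant through k and k+1
r = phi(k) - k * s           # its intercept
```

The recovered intercept and slope are both strictly negative, so (47) holds, and the secant dominates ϕ_λ on the whole relevant subdomain ℕ_0 by strict concavity.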

3.9. Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,3b × ]0,1[

The only difference to the preceding Section 3.8 is that, due to Properties 3 (P15), the maximum value of ϕ_λ(·) now achieves 0, at the positive non-integer point x_max = x* = (α_H − α_A)/(β_A − β_H) ∈ ]0,∞[∖ℕ (take e.g., (β_A, β_H, α_A, α_H, λ) = (1.8, 0.9, 1.1, 3.0, 0.5) as an example, which within our running-example epidemiological context of Section 2.3 corresponds to a “nearly dangerous” infectious-disease-transmission situation (H) (with nearly critical reproduction number β_H = 0.9 and importation mean α_H = 3), whereas (A) describes a “dangerous” situation (with supercritical β_A = 1.8 and α_A = 1.1)); this implies that ϕ_λ(x) < 0 for all x on the relevant subdomain ℕ_0. Due to (P16), (P17) and (P19), one gets automatically λ·β_A·(α_A/α_H)^{λ−1} + (1−λ)·β_H·(α_A/α_H)^λ − β_λ > 0 for all λ ∈ ]0,1[. Analogously to Section 3.8, there exist parameters p_λ^U ∈ [α_A^λ·α_H^{1−λ}, α_λ] and q_λ^U ∈ [β_A^λ·β_H^{1−λ}, β_λ] such that (47) and (35) are satisfied. Thus, all the assertions (a) to (e) of Proposition 8 also hold true for the current parameter constellations.

3.10. Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,3c × ]0,1[

The only difference to the preceding Section 3.9 is that the maximum value of ϕ_λ(·) now achieves 0 at the integer point x_max = x* = (α_H − α_A)/(β_A − β_H) ∈ ℕ (take e.g., (β_A, β_H, α_A, α_H, λ) = (1.8, 0.9, 1.2, 3.0, 0.5) as an example). Accordingly, there do not exist parameters p_λ^U, q_λ^U such that (35) and (47) are satisfied simultaneously. The only parameter pair that ensures exp{a_n(q_λ^U)·X_0 + Σ_{k=1}^n b_k(p_λ^U, q_λ^U)} ≤ 1 for all n ∈ ℕ and all X_0 ∈ ℕ without further investigations leads to the choices p_λ^U = α_λ as well as q_λ^U = β_λ. Consequently, B^U_{λ,X_0,n} ≡ 1, which coincides with the general upper bound (9), but violates the above-mentioned desired Goal (G1). However, there might exist parameters p_λ^U < α_λ, q_λ^U > β_λ or p_λ^U > α_λ, q_λ^U < β_λ, such that at least the parts (c) and (d) of Proposition 8 are satisfied. Nevertheless, by using a conceptually different method we can prove
H_λ(P_{A,n}‖P_{H,n}) < 1 for all n ∈ ℕ∖{1}, as well as the convergence lim_{n→∞} H_λ(P_{A,n}‖P_{H,n}) = 0,    (51)
which will be used for the study of complete asymptotical distinguishability (entire separation) below. This proof is provided in Appendix A.1.

3.11. Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,4a × ]0,1[

This setup and the remaining setup (β_A, β_H, α_A, α_H, λ) ∈ P_SP,4b × ]0,1[ (see the next Section 3.12) are the only constellations where ϕ_λ(·) is strictly negative and strictly increasing, with lim_{x→∞} ϕ_λ(x) = lim_{x→∞} ϕ_λ′(x) = 0, leading to the choices p_λ^U = α_λ as well as q_λ^U = β_λ = β under the restriction that exp{a_n(q_λ^U)·X_0 + Σ_{k=1}^n b_k(p_λ^U, q_λ^U)} ≤ 1 for all n ∈ ℕ and all X_0 ∈ ℕ. Consequently, one has B^U_{λ,X_0,n} ≡ 1, which is consistent with the general upper bound (9) but violates the above-mentioned desired Goal (G1). Unfortunately, the proof method of (51) (cf. Appendix A.1) cannot be carried over to the current setup. The following proposition states two of the above-mentioned desired assertions, which can be verified by a completely different proof method, which is also given in Appendix A.1.
Proposition 9.
For all (β_A, β_H, α_A, α_H, λ) ∈ P_SP,4a × ]0,1[ there exist parameters p_λ^U < α_λ, 1 > q_λ^U > β_λ = β such that (35) is satisfied for all x ∈ [0,∞[ and such that for all initial population sizes X_0 ∈ ℕ the parts (c) and (d) of Proposition 8 hold true.

3.12. Upper Bounds for the Cases (β_A, β_H, α_A, α_H, λ) ∈ P_SP,4b × ]0,1[

The assertions preceding Proposition 9 remain valid. However, any linear upper bound of the function ϕ_λ(·) on the domain ℕ_0 possesses a slope q_λ^U − β_λ ≥ 0. If q_λ^U = β_λ, then the intercept is p_λ^U − α_λ = 0, leading to B^U_{λ,X_0,n} ≡ 1, and thus Goal (G1) is violated. If we use a slope q_λ^U − β_λ > 0, then both the sequences (a_n(q_λ^U))_{n∈ℕ} and (b_n(p_λ^U, q_λ^U))_{n∈ℕ} are strictly increasing and diverge to ∞. This comes from Properties 1 (P3b) and (P7b), since q_λ^U > β_λ = β ≥ 1. Altogether, this implies that the corresponding upper bound component B̃_{λ,X_0,n}(p_λ^U, q_λ^U) (cf. (42)) diverges to ∞ as well. This leads to
Proposition 10.
For all $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) \in \mathcal{P}_{SP,4b} \times\, ]0,1[$ and all initial population sizes $X_0 \in \mathbb{N}$ there do not exist parameters $p_\lambda^U \ge 0$, $q_\lambda^U \ge 0$ such that (35) is satisfied and such that the parts (c) and (d) of Proposition 8 hold true.

3.13. Concluding Remarks on Alternative Upper Bounds for all Cases β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ] 0 , 1 [

As mentioned earlier, starting from Section 3.6 we have principally focused on constructing upper bounds $B_{\lambda,X_0,n}^U$ of the Hellinger integrals, starting from $p_\lambda^U$, $q_\lambda^U$ which fulfill (35) as well as further constraints depending on the Goals (G1) and (G2). For the setups in the Section 3.7, Section 3.8 and Section 3.9, we have proved the existence of special parameter choices $p_\lambda^U$, $q_\lambda^U$ which were consistent with (G1) and (G2). Furthermore, for the constellation in the Section 3.11 we have found parameters such that at least (G2) is satisfied. In contrast, for the setup of Section 3.12 we have not found any choices which are consistent with (G1) and (G2), leading to the “cut-off bound” $B_{\lambda,X_0,n}^U \equiv 1$ which gives no improvement over the generally valid upper bound (9).
In the following, we present some alternative choices of $p_\lambda^U$, $q_\lambda^U$ which–depending on the parameter constellation $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) \in (\mathcal{P}_{SP} \setminus \mathcal{P}_{SP,1}) \times\, ]0,1[$–may or may not lead to upper bounds $B_{\lambda,X_0,n}^U$ which are consistent with Goal (G1) or with (G2); such bounds may be weaker than, better than, or incomparable with the previous upper bounds when one deals with some relaxations of (G1), such as e.g., $H_\lambda(P_{A,n}\,\|\,P_{H,n}) < 1$ for all but finitely many $n \in \mathbb{N}$.
As a first alternative choice for a linear upper bound of $\phi_\lambda(\cdot)$ (cf. (35)) one could use the asymptote $\widetilde{\phi_\lambda}(\cdot)$ (cf. Properties 3 (P20)) with the parameters $p_\lambda^U := \widetilde{p_\lambda} = \lambda\,\alpha_A\,(\beta_A/\beta_H)^{\lambda-1} + (1-\lambda)\,\alpha_H\,(\beta_A/\beta_H)^{\lambda}$ and $q_\lambda^U := \widetilde{q_\lambda} = \beta_A^{\lambda}\,\beta_H^{1-\lambda}$. Another important linear upper bound of $\phi_\lambda(\cdot)$ is the tangent line $\phi_{\lambda,y}^{tan}(\cdot)$ on $\phi_\lambda(\cdot)$ at an arbitrarily fixed point $y \in [0,\infty[$, which amounts to
$$\phi_{\lambda,y}^{tan}(x) := r_{\lambda,y}^{tan} + s_{\lambda,y}^{tan}\cdot x := \big(p_{\lambda,y}^{tan} - \alpha_\lambda\big) + \big(q_{\lambda,y}^{tan} - \beta_\lambda\big)\cdot x := \big(\phi_\lambda(y) - y\cdot\phi'_\lambda(y)\big) + \phi'_\lambda(y)\cdot x\,,$$
where $\phi'_\lambda(\cdot)$ is given by (P17). Notice that for $y \in\, ]0,\infty[\, \setminus \mathbb{N}$ this upper bound is “not tight” in the sense that $\phi_{\lambda,y}^{tan}(\cdot)$ does not hit the function $\phi_\lambda(\cdot)$ on $\mathbb{N}_0$ (where the generation sizes “live”); moreover, $\phi_{\lambda,y}^{tan}(x)$ might take on strictly positive values for large enough points $x$, which is counter-productive for Goal (G1). Another alternative choice of a linear upper bound for $\phi_\lambda(\cdot)$, which in contrast to the tangent line is “tight” (but not necessarily avoiding the strict positivity), is the secant line $\phi_{\lambda,k}^{sec}(\cdot)$ through its arguments $k$ and $k+1$, given by
$$\phi_{\lambda,k}^{sec}(x) := r_{\lambda,k}^{sec} + s_{\lambda,k}^{sec}\cdot x := \big(p_{\lambda,k}^{sec} - \alpha_\lambda\big) + \big(q_{\lambda,k}^{sec} - \beta_\lambda\big)\cdot x := \Big(\phi_\lambda(k) - k\cdot\big(\phi_\lambda(k+1) - \phi_\lambda(k)\big)\Big) + \big(\phi_\lambda(k+1) - \phi_\lambda(k)\big)\cdot x\,.$$
Another alternative choice is the horizontal line
$$\phi_\lambda^{hor}(x) :\equiv \max\big\{\phi_\lambda(y)\,:\; y \in \mathbb{N}_0\big\}\,.$$
For $p_\lambda^U \in \{\widetilde{p_\lambda},\, p_{\lambda,y}^{tan},\, p_{\lambda,k}^{sec}\}$ and $q_\lambda^U \in \{\widetilde{q_\lambda},\, q_{\lambda,y}^{tan},\, q_{\lambda,k}^{sec}\}$ it is possible that in some parameter cases $(\beta_A,\beta_H,\alpha_A,\alpha_H)$ either the intercept $r_\lambda^U = p_\lambda^U - \alpha_\lambda$ or the slope $s_\lambda^U = q_\lambda^U - \beta_\lambda$ is strictly larger than zero. Thus, it can happen that $\widetilde{B}_{\lambda,X_0,n}(p_\lambda^U,q_\lambda^U) > 1$ for some (and even for all) $n \in \mathbb{N}$, such that the corresponding upper bound $B_{\lambda,X_0,n}^U$ for the Hellinger integral $H_\lambda(P_{A,n}\,\|\,P_{H,n})$ amounts to the cut-off at 1. However, due to Properties 1 (P5) and (P7a), the sequence $(\widetilde{B}_{\lambda,X_0,n}(p_\lambda^U,q_\lambda^U))_{n\in\mathbb{N}}$ may become smaller than 1 and may finally converge to zero. Due to Properties 2 (P14), this upper bound can even be tighter (smaller) than those bounds derived from parameters $p_\lambda^U$, $q_\lambda^U$ fulfilling (47).
As far as our desired Hellinger integral bounds are concerned, in the setup of Section 3.11–where $\lim_{y\to\infty} \phi_{\lambda,y}^{tan}(\cdot) \equiv 0$–we shall employ the mappings $y \mapsto \phi_{\lambda,y}^{tan}$ resp. $y \mapsto p_{\lambda,y}^{tan}$ resp. $y \mapsto q_{\lambda,y}^{tan}$ for the proof of Proposition 9 in Appendix A.1. These will also be used for the proof of the below-mentioned Theorem 4.
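The three alternative linear upper bounds just described are easy to compute numerically. Caveat: the closed form of $\phi_\lambda(\cdot)$ below is our own reconstruction (chosen to be consistent with the asymptote parameters $\widetilde{p_\lambda}$, $\widetilde{q_\lambda}$ quoted above, with the assumed weights $\alpha_\lambda := \lambda\alpha_A + (1-\lambda)\alpha_H$ and $\beta_\lambda := \lambda\beta_A + (1-\lambda)\beta_H$), and the derivative in the tangent construction is approximated numerically instead of using the closed form (P17). A minimal sketch:

```python
import math

def phi(x, bA, bH, aA, aH, lam):
    """Reconstructed phi_lambda(x) -- an assumption, not the paper's verbatim formula."""
    return ((aA + bA * x) ** lam * (aH + bH * x) ** (1.0 - lam)
            - lam * (aA + bA * x) - (1.0 - lam) * (aH + bH * x))

def dphi(x, *par, h=1e-6):
    # central-difference stand-in for the closed-form derivative (P17)
    return (phi(x + h, *par) - phi(x - h, *par)) / (2.0 * h)

def tangent_line(y, *par):
    """Intercept r and slope s of the tangent of phi at the point y."""
    s = dphi(y, *par)
    return phi(y, *par) - y * s, s

def secant_line(k, *par):
    """Intercept r and slope s of the secant of phi through k and k+1."""
    s = phi(k + 1, *par) - phi(k, *par)
    return phi(k, *par) - k * s, s

def horizontal_line(*par, nmax=200):
    """Constant bound max over y in {0, ..., nmax} of phi(y); nmax is a pragmatic cut-off."""
    return max(phi(y, *par) for y in range(nmax + 1))
```

For $\lambda \in\, ]0,1[$ (where $\phi_\lambda$ is strictly concave) the tangent lies above $\phi_\lambda$ everywhere, and the secant lies above it at every point of $\mathbb{N}_0$ outside $]k, k+1[$–exactly the “tightness” discussed above.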

3.14. Intermezzo 1: Application to Asymptotical Distinguishability

The above-mentioned investigations can be applied to the context of Section 2.6 on asymptotical distinguishability. Indeed, with the help of the Definitions 1 and 2 as well as the equivalence relations (25) and (26) we obtain the following
Corollary 1.
(a) 
For all β A , β H , α A , α H P SP \ P SP , 4 b and all initial population sizes X 0 N , the corresponding sequences ( P A , n ) n N 0 and ( P H , n ) n N 0 are entirely separated (completely asymptotically distinguishable).
(b) 
For all β A , β H , α A , α H P NI with β A 1 and all initial population sizes X 0 N , the sequence ( P A , n ) n N 0 is contiguous to ( P H , n ) n N 0 .
(c) 
For all $(\beta_A,\beta_H,\alpha_A,\alpha_H) \in \mathcal{P}_{NI}$ with $\beta_A > 1$ and all initial population sizes $X_0 \in \mathbb{N}$, the sequence $(P_{A,n})_{n\in\mathbb{N}_0}$ is neither contiguous to nor entirely separated from $(P_{H,n})_{n\in\mathbb{N}_0}$.
The proof of Corollary 1 will be given in Appendix A.1.
Remark 3.
(a) 
Assertion (c) of Corollary 1 contrasts with the case of Gaussian processes with independent increments, where one gets either entire separation or mutual contiguity (see e.g., Liese & Vajda [1]).
(b) 
By putting Corollary 1(b) and (c) together, we obtain for different “criticality pairs” in the non-immigration case $\mathcal{P}_{NI}$ the following asymptotical distinguishability types: mutual contiguity of $(P_{A,n})_{n\in\mathbb{N}_0}$ and $(P_{H,n})_{n\in\mathbb{N}_0}$ if $\beta_A \le 1$, $\beta_H \le 1$; contiguity of $(P_{A,n})$ to $(P_{H,n})$ but not conversely if $\beta_A \le 1$, $\beta_H > 1$; contiguity of $(P_{H,n})$ to $(P_{A,n})$ but not conversely if $\beta_A > 1$, $\beta_H \le 1$; neither contiguity (in either direction) nor entire separation if $\beta_A > 1$, $\beta_H > 1$; in particular, for $\mathcal{P}_{NI}$ the sequences $(P_{A,n})_{n\in\mathbb{N}_0}$ and $(P_{H,n})_{n\in\mathbb{N}_0}$ are not completely asymptotically inseparable (indistinguishable).
(c) 
In the light of the above-mentioned characterizations of contiguity resp. entire separation by means of Hellinger integral limits, the finite-time-horizon results on Hellinger integrals given in the “ λ ] 0 , 1 [ parts” of Theorem 1, the Section 3.3, Section 3.4, Section 3.5, Section 3.6, Section 3.7, Section 3.8, Section 3.9, Section 3.10, Section 3.11, Section 3.12, Section 3.13 and also in the below-mentioned Section 6 can loosely be interpreted as “finite-sample (rather than asymptotical) distinguishability” assertions.

3.15. Intermezzo 2: Application to Decision Making under Uncertainty

3.15.1. Bayesian Decision Making

The above-mentioned investigations can be applied to the context of Section 2.5 on dichotomous Bayesian decision making on the space of all possible path scenarios (path space) of Poissonian Galton-Watson processes without/with immigration GW(I) (e.g., in combination with our running-example epidemiological context of Section 2.3). In more detail, for the minimal mean decision loss (Bayes risk) $R_n$ defined by (18) we can derive upper (respectively lower) bounds by using (19) respectively (20), together with the exact values or the upper (respectively lower) bounds of the Hellinger integrals $H_\lambda(P_{A,n}\,\|\,P_{H,n})$ derived in the “$\lambda \in\, ]0,1[$ parts” of Theorem 1, the Section 3.3, Section 3.4, Section 3.5, Section 3.6, Section 3.7, Section 3.8, Section 3.9, Section 3.10, Section 3.11, Section 3.12, Section 3.13 (and also in the below-mentioned Section 6); instead of providing the corresponding resulting formulas–which is merely repetitive–we give the illustrative
Example 1.
Based on a sample path observation $X^n := \{X_\ell : \ell = 1, \ldots, n\}$ of a GWI, which is either governed by a hypothesis law $P_H$ or an alternative law $P_A$, we want to make a dichotomous optimal Bayesian decision as described in Section 2.5, namely, decide between an action $d_H$ “associated with” $P_H$ and an action $d_A$ “associated with” $P_A$, with pregiven loss function (16) involving constants $L_A > 0$, $L_H > 0$ which e.g., arise as bounds from quantities in worst-case scenarios.
For this, let us exemplarily deal with initial population $X_0 = 5$ as well as parameter setup $(\beta_A,\beta_H,\alpha_A,\alpha_H) = (1.2, 0.9, 4, 3) \in \mathcal{P}_{SP,1}$; within our running-example epidemiological context of Section 2.3, this corresponds e.g., to a setup where one is confronted with a novel infectious disease (such as COVID-19) of non-negligible fatality rate, and $(A)$ reflects a “potentially dangerous” infectious-disease-transmission situation (with supercritical reproduction number $\beta_A = 1.2$ and importation mean $\alpha_A = 4$, for weekly appearing new incidence-generations), whereas $(H)$ describes a “milder” situation (with subcritical $\beta_H = 0.9$ and $\alpha_H = 3$). Moreover, let $d_H$ and $d_A$ reflect two possible sets of interventions (control measures) in the course of pandemic risk management, with respective “worst-case type” decision losses $L_A = 600$ and $L_H = 300$ (e.g., in units of billion Euros or U.S. Dollars). Additionally, we assume the prior probabilities $\pi = Pr(H) = 1 - Pr(A) = 0.5$, which results in the prior-loss constants $0.5 \cdot L_A = 300$ and $0.5 \cdot L_H = 150$. In order to obtain bounds for the corresponding minimal mean decision loss (Bayes risk) $R_n$ defined in (18), we can employ the general Stummer-Vajda bounds (cf. [15]) (19) and (20) in terms of the Hellinger integral $H_\lambda(P_{A,n}\,\|\,P_{H,n})$ (with arbitrary $\lambda \in\, ]0,1[$), and combine this with the appropriate detailed results on the latter from the preceding subsections. To demonstrate this, let us choose $\lambda = 0.5$ (for which $H_{1/2}(P_{A,n}\,\|\,P_{H,n})$ can be interpreted as a multiple of the Bhattacharyya coefficient between the two competing GWI) respectively $\lambda = 0.9$, leading to the parameters $p_{0.5}^E = 3.464$, $q_{0.5}^E = 1.039$ respectively $p_{0.9}^E = 3.887$, $q_{0.9}^E = 1.166$ (cf. (33)). Combining (19) and (20) with Theorem 1 (a)–which provides us with the exact recursive values of $H_\lambda(P_{A,n}\,\|\,P_{H,n})$ in terms of the sequence $a_n(q_\lambda^E)$ (cf. (36))–we obtain for $\lambda = 0.5$ the bounds
$$R_n \;\le\; R_n^U := 2.121\cdot 10^{2}\cdot \exp\Big\{5\, a_n(1.039) + \tfrac{10}{3} \sum_{k=1}^{n} a_k(1.039)\Big\}\,, \qquad R_n \;\ge\; R_n^L := 100\cdot \exp\Big\{10\, a_n(1.039) + \tfrac{20}{3} \sum_{k=1}^{n} a_k(1.039)\Big\}\,,$$
whereas for λ = 0.9 we get
$$R_n \;\le\; R_n^U := 2.799\cdot 10^{2}\cdot \exp\Big\{5\, a_n(1.166) + \tfrac{10}{3} \sum_{k=1}^{n} a_k(1.166)\Big\}\,, \qquad R_n \;\ge\; R_n^L := 3.902\cdot \exp\Big\{50\, a_n(1.166) + \tfrac{100}{3} \sum_{k=1}^{n} a_k(1.166)\Big\}\,.$$
Figure 1 illustrates the lower (orange resp. cyan) and upper (red resp. blue) bounds $R_n^L$ resp. $R_n^U$ of the Bayes risk $R_n$ employing $\lambda = 0.5$ resp. $\lambda = 0.9$, on both a unit scale (left graph) and a logarithmic scale (right graph). The light grey/grey/black curves correspond to the (18)-based empirical evaluation of the Bayes risk sequence $(R_n^{sample})_{n=1,\ldots,50}$ from three independent Monte Carlo simulations of 10,000 GWI sample paths (each) up to time horizon 50.
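For illustration, the quoted parameters and the displayed bounds for $\lambda = 0.5$ can be reproduced numerically. The matching formulas $p_\lambda^E = \alpha_A^{\lambda}\alpha_H^{1-\lambda}$, $q_\lambda^E = \beta_A^{\lambda}\beta_H^{1-\lambda}$ and the recursion $a_n(q) = q\,e^{a_{n-1}(q)} - \beta_\lambda$ (with $a_0(q) = 0$ and $\beta_\lambda := \lambda\beta_A + (1-\lambda)\beta_H$) are assumptions on the exact forms of (33) and (36), which are not restated in this section; they do, however, reproduce the quoted values $3.464$, $1.039$, $3.887$ and $1.166$. A minimal sketch:

```python
import math

bA, bH, aA, aH, X0 = 1.2, 0.9, 4.0, 3.0, 5   # Example 1 setup

def matching_params(lam):
    # assumed form of (p_lambda^E, q_lambda^E), cf. (33)
    return aA ** lam * aH ** (1.0 - lam), bA ** lam * bH ** (1.0 - lam)

def a_seq(q, lam, n):
    """a_1(q), ..., a_n(q) via the assumed recursion a_k = q*exp(a_{k-1}) - beta_lambda."""
    beta_lam = lam * bA + (1.0 - lam) * bH
    a, out = 0.0, []
    for _ in range(n):
        a = q * math.exp(a) - beta_lam
        out.append(a)
    return out

def bayes_risk_bounds(n):
    """(R_n^L, R_n^U) for lambda = 0.5, as in the display above."""
    p, q = matching_params(0.5)
    a = a_seq(q, 0.5, n)
    # exponent of the exact Hellinger integral: 5*a_n + (10/3)*sum_k a_k,
    # where 10/3 = p/q in this equal-fraction (P_SP,1) setup, cf. the display
    log_H = a[-1] * X0 + (p / q) * sum(a)
    return 100.0 * math.exp(2.0 * log_H), 2.121e2 * math.exp(log_H)
```

Since $a_1(1.039) = 1.039 - 1.05 < 0$ here, the exponent is negative and both bounds decay towards zero as $n$ grows.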

3.15.2. Neyman-Pearson Testing

By combining (23) with the exact values resp. upper bounds of the Hellinger integrals $H_\lambda(P_{A,n}\,\|\,P_{H,n})$ from the preceding subsections, we obtain–for our context of GW(I) with Poisson offspring and Poisson immigration (including the non-immigration case)–some upper bounds of the minimal type II error probability $\mathcal{E}_\varsigma(P_{A,n}\,\|\,P_{H,n})$ in the class of the tests for which the type I error probability is at most $\varsigma \in\, ]0,1[$; these can also be immediately rewritten as lower bounds for the power $1 - \mathcal{E}_\varsigma(P_{A,n}\,\|\,P_{H,n})$ of a most powerful test at level $\varsigma$. As for the Bayesian context of Section 3.15.1, instead of providing the–merely repetitive–resulting formulas for the bounds of $\mathcal{E}_\varsigma(P_{A,n}\,\|\,P_{H,n})$, we give the illustrative
Example 2.
Consider Figures 2 and 3, which deal with initial population $X_0 = 5$ and the parameter setup $(\beta_A,\beta_H,\alpha_A,\alpha_H) = (0.3, 1.2, 1, 4) \in \mathcal{P}_{SP,1}$; within our running-example epidemiological context of Section 2.3, this corresponds to a “potentially dangerous” infectious-disease-transmission situation $(H)$ (with supercritical reproduction number $\beta_H = 1.2$ and importation mean $\alpha_H = 4$), whereas $(A)$ describes a “very mild” situation (with “low” subcritical $\beta_A = 0.3$ and $\alpha_A = 1$). Figure 2 shows the lower and upper bounds of $\mathcal{E}_\varsigma(P_{A,n}\,\|\,P_{H,n})$ with $\varsigma = 0.05$, evaluated from the Formulas (23) and (24) together with the exact values of the Hellinger integral $H_\lambda(P_{A,n}\,\|\,P_{H,n})$ (cf. Theorem 1; recall that we are in the setup $\mathcal{P}_{SP,1}$), on both a unit scale (left graph) and a logarithmic scale (right graph). The orange resp. red resp. purple curves correspond to the resulting upper bounds $E_n^U := E_n^U(P_{A,n}\,\|\,P_{H,n})$ (cf. (23)) with parameters $\lambda = 0.3$ resp. $\lambda = 0.5$ resp. $\lambda = 0.7$. The green resp. cyan resp. blue curves correspond to the lower bounds $E_n^L := E_n^L(P_{A,n}\,\|\,P_{H,n})$ (cf. (24)) with parameters $\lambda = 2$ resp. $\lambda = 1.5$ resp. $\lambda = 1.1$. Notice the different $\lambda$-ranges in (23) and (24). In contrast, Figure 3 compares the lower bound $E_n^L$ (for fixed $\lambda = 1.1$) with the upper bound $E_n^U$ (for fixed $\lambda = 0.5$) of the minimal type II error probability $\mathcal{E}_\varsigma(P_{A,n}\,\|\,P_{H,n})$ for different levels $\varsigma = 0.1$ (orange for the lower and cyan for the upper bound), $\varsigma = 0.05$ (green and magenta) and $\varsigma = 0.01$ (blue and purple), on both a unit scale (left graph) and a logarithmic scale (right graph).
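The competing GWI laws of Examples 1 and 2 (and the Monte Carlo evaluations behind Figure 1) can be sampled directly: for Poisson($\beta$)-distributed offspring and Poisson($\alpha$)-distributed immigration, the conditional law is simply $X_n \mid X_{n-1} \sim \text{Poisson}(\beta\, X_{n-1} + \alpha)$, so a sample-path simulator needs nothing but a Poisson sampler. The sketch below uses only the standard library; the normal approximation for large intensities is our own shortcut, not part of the paper's method:

```python
import math, random

def _poisson(lam, rng):
    """Poisson sampler: Knuth's product method for small lam, a (hedged)
    normal approximation for large lam to avoid floating-point underflow."""
    if lam <= 0:
        return 0
    if lam > 30:
        return max(0, round(rng.gauss(lam, math.sqrt(lam))))
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def simulate_gwi(beta, alpha, x0, horizon, seed=0):
    """One Poisson-GWI sample path: X_n | X_{n-1} ~ Poisson(beta*X_{n-1} + alpha)."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(horizon):
        path.append(_poisson(beta * path[-1] + alpha, rng))
    return path
```

Averaging functionals of such paths over many independent runs (e.g., 10,000, as in Figure 1) gives the empirical counterparts of the theoretical bounds discussed above.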

3.16. Goals for Lower Bounds for the Cases β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( R \ [ 0 , 1 ] )

Recall from (49) the set $\mathcal{P}_{SP} := \{(\beta_A,\beta_H,\alpha_A,\alpha_H) \in\, ]0,\infty[^4 : (\alpha_A \ne \alpha_H) \text{ or } (\beta_A \ne \beta_H) \text{ or both}\}$ and the “equal-fraction-case” set $\mathcal{P}_{SP,1} := \{(\beta_A,\beta_H,\alpha_A,\alpha_H) \in \mathcal{P}_{SP} : \alpha_A \ne \alpha_H,\ \beta_A \ne \beta_H,\ \frac{\alpha_A}{\beta_A} = \frac{\alpha_H}{\beta_H}\}$, where for the latter we have derived in Theorem 1(a) and in Proposition 5 the exact recursive values for the time-behaviour of the Hellinger integrals $H_\lambda(P_{A,n}\,\|\,P_{H,n})$ of order $\lambda \in \mathbb{R}\setminus[0,1]$. Moreover, recall that for the case $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) \in (\mathcal{P}_{SP}\setminus\mathcal{P}_{SP,1}) \times\, ]0,1[$ we have obtained in the Section 3.4 and Section 3.5 some “optimal” linear lower bounds $\phi_\lambda^L(\cdot)$ for the strictly concave function $\phi_\lambda(x) := \phi(x, \beta_A, \beta_H, \alpha_A, \alpha_H, \lambda)$ on the domain $x \in [0,\infty[$; due to the monotonicity Properties 2 (P10) to (P12) of the sequences $(a_n(q_\lambda^L))_{n\in\mathbb{N}}$ and $(b_n(p_\lambda^L,q_\lambda^L))_{n\in\mathbb{N}}$, these bounds have led to the “optimal” recursive lower bound $B_{\lambda,X_0,n}^L$ of the Hellinger integral $H_\lambda(P_{A,n}\,\|\,P_{H,n})$ in (40) of Theorem 1(b).
In contrast, the strict convexity of the function ϕ λ ( · ) in the case β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( R \ [ 0 , 1 ] ) implies that we cannot maximize both parameters p λ L , q λ L R simultaneously subject to the constraint (35). This effect carries over to the lower bounds B λ , X 0 , n L of the Hellinger integrals H λ ( P A , n P H , n ) (cf. (41)); in general, these bounds cannot be maximized simultaneously for all initial population sizes X 0 N and all observation horizons n N .
Analogously to (46), one way to obtain “good” recursive lower bounds for H λ ( P A , n P H , n ) from (41) in Theorem 1 (b) is to solve the optimization problem,
$$\big(\overline{p_\lambda^L}, \overline{q_\lambda^L}\big) := \underset{(p_\lambda^L, q_\lambda^L) \in \mathbb{R}^2}{\arg\max}\ \exp\Big\{a_n(q_\lambda^L)\cdot X_0 + \sum_{k=1}^{n} b_k(p_\lambda^L, q_\lambda^L)\Big\} \quad \text{such that (35) is satisfied},$$
for each fixed initial population size $X_0 \in \mathbb{N}$ and observation horizon $n \in \mathbb{N}$. However, for the same reasons as explained right after (46), the optimization problem (55) does not seem to be straightforward to solve explicitly. In a similar way as in the discussion of the upper bounds for the case $\lambda \in\, ]0,1[$ above, we now have to look for suitable parameters $p_\lambda^L$, $q_\lambda^L$ for the lower bound $B_{\lambda,X_0,n}^L \le H_\lambda(P_{A,n}\,\|\,P_{H,n})$ that fulfill (35) and that guarantee certain reasonable criteria and goals; these are similar to the Goals (G1) to (G3) from Section 3.6, and are therefore labelled with an additional prime “ ′ ”:
(G1′)
the validity of $B_{\lambda,X_0,n}^L > 1$ simultaneously for all initial configurations $X_0 \in \mathbb{N}$, all observation horizons $n \in \mathbb{N}$ and all $\lambda \in \mathbb{R}\setminus[0,1]$, which leads to a strict improvement of the generally valid lower bound $H_\lambda(P_{A,n}\,\|\,P_{H,n}) \ge 1$ (cf. (11));
(G2′)
the determination of the long-term-limits $\lim_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n})$ respectively $\lim_{n\to\infty} B_{\lambda,X_0,n}^L$ for all $X_0 \in \mathbb{N}$ and all $\lambda \in \mathbb{R}\setminus[0,1]$; in particular, one would like to check whether $\lim_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n}) = \infty$;
(G3′)
the determination of the time-asymptotical growth rates $\lim_{n\to\infty} \frac{1}{n}\log H_\lambda(P_{A,n}\,\|\,P_{H,n})$ resp. $\lim_{n\to\infty} \frac{1}{n}\log B_{\lambda,X_0,n}^L$ for all $X_0 \in \mathbb{N}$ and all $\lambda \in \mathbb{R}\setminus[0,1]$.
In the following, let us briefly discuss how these three goals can be achieved in principle, where we confine ourselves to parameters p λ L , q λ L which–in addition to (35)–fulfill the requirement
$$\Big( q_\lambda^L \ge \max\{0, \beta_\lambda\} \;\wedge\; p_\lambda^L > \max\{0, \alpha_\lambda\} \Big) \;\vee\; \Big( q_\lambda^L > \max\{0, \beta_\lambda\} \;\wedge\; p_\lambda^L \ge \max\{0, \alpha_\lambda\} \Big),$$
where $\wedge$ is the logical “AND” and $\vee$ the logical “OR” operator. This is sufficient to tackle all three Goals (G1′) to (G3′). To see this, assume that $p_\lambda^L$, $q_\lambda^L$ satisfy (35). Let us begin with the two “extremal” cases in (56), i.e., with (i) $q_\lambda^L = \max\{0,\beta_\lambda\}$, $p_\lambda^L > \max\{0,\alpha_\lambda\}$, respectively (ii) $q_\lambda^L > \max\{0,\beta_\lambda\}$, $p_\lambda^L = \max\{0,\alpha_\lambda\}$.
Suppose in the first extremal case (i) that $\beta_\lambda \le 0$. Then $q_\lambda^L = 0$, and Properties 1 (P4) implies that $a_n(q_\lambda^L) = -\beta_\lambda \ge 0$ and hence $b_n(p_\lambda^L, q_\lambda^L) = p_\lambda^L\, e^{-\beta_\lambda} - \alpha_\lambda \ge p_\lambda^L - \alpha_\lambda > 0$ for all $n \in \mathbb{N}$. This enters into (41) as follows: the Hellinger integral lower bound becomes $B_{\lambda,X_0,n}^L = \widetilde{B}_{\lambda,X_0,n}(p_\lambda^L, q_\lambda^L) = \exp\{-\beta_\lambda \cdot X_0 + (p_\lambda^L\, e^{-\beta_\lambda} - \alpha_\lambda)\cdot n\} > 1$. Furthermore, one clearly has $\lim_{n\to\infty} B_{\lambda,X_0,n}^L = \infty$ as well as $\lim_{n\to\infty} \frac{1}{n}\log B_{\lambda,X_0,n}^L = p_\lambda^L\, e^{-\beta_\lambda} - \alpha_\lambda > 0$. Assume now that $\beta_\lambda > 0$. Then $q_\lambda^L = \beta_\lambda > 0$, $a_n(q_\lambda^L) = 0$ (cf. (P2)), $b_n(p_\lambda^L, q_\lambda^L) = p_\lambda^L - \alpha_\lambda > 0$ and thus $B_{\lambda,X_0,n}^L = \exp\{(p_\lambda^L - \alpha_\lambda)\cdot n\} > 1$ for all $n \in \mathbb{N}$. Furthermore, one gets $\lim_{n\to\infty} B_{\lambda,X_0,n}^L = \infty$ as well as $\lim_{n\to\infty} \frac{1}{n}\log B_{\lambda,X_0,n}^L = p_\lambda^L - \alpha_\lambda > 0$.
Let us consider the other above-mentioned extremal case (ii). Suppose that $q_\lambda^L > \max\{0, \beta_\lambda\}$ together with $q_\lambda^L > \min\{1, e^{\beta_\lambda - 1}\}$, which implies that the sequence $(a_n(q_\lambda^L))_{n\in\mathbb{N}}$ is strictly positive, strictly increasing and grows to infinity faster than exponentially, cf. (P3b). Hence, $B_{\lambda,X_0,n}^L \ge \exp\{a_n(q_\lambda^L)\cdot X_0\} > 1$, $\lim_{n\to\infty} B_{\lambda,X_0,n}^L = \infty$ as well as $\lim_{n\to\infty} \frac{1}{n}\log B_{\lambda,X_0,n}^L = \infty$. If $\max\{0, \beta_\lambda\} < q_\lambda^L \le \min\{1, e^{\beta_\lambda - 1}\}$, then $(a_n(q_\lambda^L))_{n\in\mathbb{N}}$ is strictly positive, strictly increasing and converges to $x_0(q_\lambda^L) \in\, ]0, -\log(q_\lambda^L)]$ (cf. (P3a)). This carries over to the sequence $(b_n(p_\lambda^L, q_\lambda^L))_{n\in\mathbb{N}}$: one gets $b_1(p_\lambda^L, q_\lambda^L) = p_\lambda^L - \alpha_\lambda \ge 0$ and $b_n(p_\lambda^L, q_\lambda^L) > 0$ for all $n \ge 2$. Furthermore, $(b_n(p_\lambda^L, q_\lambda^L))_{n\in\mathbb{N}}$ is strictly increasing and converges to $p_\lambda^L \cdot e^{x_0(q_\lambda^L)} - \alpha_\lambda > 0$, leading to $B_{\lambda,X_0,n}^L > 1$ for all $n \in \mathbb{N}$, to $\lim_{n\to\infty} B_{\lambda,X_0,n}^L = \infty$ as well as to $\lim_{n\to\infty} \frac{1}{n}\log B_{\lambda,X_0,n}^L = p_\lambda^L \cdot e^{x_0(q_\lambda^L)} - \alpha_\lambda > 0$.
It remains to look at the cases where p λ L , q λ L satisfy (35), and (56) with two strict inequalities. For this situation, one gets
  • $(a_n(q_\lambda^L))_{n\in\mathbb{N}}$ is strictly positive, strictly increasing and–iff $q_\lambda^L \le \min\{1, e^{\beta_\lambda - 1}\}$–convergent (namely to the smallest positive solution $x_0(q_\lambda^L) \in\, ]0, -\log(q_\lambda^L)]$ of (44)), cf. (P3);
  • $(b_n(p_\lambda^L, q_\lambda^L))_{n\in\mathbb{N}}$ is strictly increasing, strictly positive (since $b_1(p_\lambda^L, q_\lambda^L) = p_\lambda^L - \alpha_\lambda > 0$) and–iff $q_\lambda^L \le \min\{1, e^{\beta_\lambda - 1}\}$–convergent (namely to $p_\lambda^L\, e^{x_0(q_\lambda^L)} - \alpha_\lambda \in [\,p_\lambda^L - \alpha_\lambda,\; p_\lambda^L/q_\lambda^L - \alpha_\lambda]$), cf. (P7).
Hence, under the assumptions (35) and $p_\lambda^L > \max\{0, \alpha_\lambda\}$ $\wedge$ $q_\lambda^L > \max\{0, \beta_\lambda\}$, the corresponding lower bounds $B_{\lambda,X_0,n}^L$ of the Hellinger integral $H_\lambda(P_{A,n}\,\|\,P_{H,n})$ fulfill for all $X_0 \in \mathbb{N}$
  • $B_{\lambda,X_0,n}^L > 1$ for all $n \in \mathbb{N}$,
  • $\lim_{n\to\infty} B_{\lambda,X_0,n}^L = \infty$,
  • $\lim_{n\to\infty} \frac{1}{n}\log B_{\lambda,X_0,n}^L = p_\lambda^L\, e^{x_0(q_\lambda^L)} - \alpha_\lambda > 0$ for the case $q_\lambda^L \in\, ]\max\{0,\beta_\lambda\}, \min\{1, e^{\beta_\lambda - 1}\}]$, respectively $\lim_{n\to\infty} \frac{1}{n}\log B_{\lambda,X_0,n}^L = \infty$ for the remaining case $q_\lambda^L > \min\{1, e^{\beta_\lambda - 1}\}$.
Putting these considerations together, we conclude that the constraints (35) and (56) are sufficient to achieve the Goals (G1′) to (G3′). Hence, for fixed parameter constellation $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda)$, we aim to find $p_\lambda^L = p^L(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda)$ and $q_\lambda^L = q^L(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda)$ which satisfy (35) and (56). This can be achieved mostly, but not always, as we shall show below. As an auxiliary step for further investigations, it is useful to examine the set of all $\lambda \in \mathbb{R}\setminus[0,1]$ for which $\alpha_\lambda \ge 0$ or $\beta_\lambda \ge 0$ (or both). By straightforward calculations, we see that
$$\alpha_\lambda \ge 0 \;\Longleftrightarrow\; \lambda \begin{cases} \ge \dfrac{-\alpha_H}{\alpha_A - \alpha_H}, & \text{if } \alpha_A > \alpha_H, \\[1.5ex] \le \dfrac{\alpha_H}{\alpha_H - \alpha_A}, & \text{if } \alpha_A < \alpha_H, \end{cases} \qquad \text{and} \qquad \beta_\lambda \ge 0 \;\Longleftrightarrow\; \lambda \begin{cases} \ge \dfrac{-\beta_H}{\beta_A - \beta_H}, & \text{if } \beta_A > \beta_H, \\[1.5ex] \le \dfrac{\beta_H}{\beta_H - \beta_A}, & \text{if } \beta_A < \beta_H. \end{cases}$$
Furthermore, recall that (35) implies the general bounds $p_\lambda^L \le \alpha_A^{\lambda}\,\alpha_H^{1-\lambda} = \varphi_\lambda(0)$ (being equivalent to the requirement $\phi_\lambda^L(0) \le \phi_\lambda(0)$) and $q_\lambda^L \le \beta_A^{\lambda}\,\beta_H^{1-\lambda} = \widetilde{q_\lambda}$ (the latter being the maximal slope due to Properties 3 (P19), (P20)).
Let us now undertake the desired detailed investigations on lower and upper bounds of the Hellinger integrals H λ ( P A , n P H , n ) of order λ R \ [ 0 , 1 ] , for the various different subclasses of P SP \ P SP , 1 .
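The convergence dichotomy in (P3)–and the role of the smallest positive solution $x_0(q_\lambda^L)$ of (44)–can be explored numerically. The recursion used below, $a_n(q) = q\,e^{a_{n-1}(q)} - \beta_\lambda$ with $a_0(q) = 0$, is an assumption (it is consistent with $a_1(q_\lambda^L) = q_\lambda^L - \beta_\lambda$ appearing in Section 3.17 below, but the exact forms of (36) and (44) are not restated in this section):

```python
import math

def a_limit(q, beta_lam, nmax=100_000, tol=1e-12):
    """Iterate the (assumed) recursion a_n = q*exp(a_{n-1}) - beta_lam, a_0 = 0.
    Returns the limit x_0(q) if the sequence settles, and None if it explodes."""
    a = 0.0
    for _ in range(nmax):
        a_next = q * math.exp(a) - beta_lam
        if a_next > 50.0:          # past this point the sequence diverges rapidly
            return None
        if abs(a_next - a) < tol:
            return a_next
        a = a_next
    return a
</```

For instance, with $q \approx 0.207$ and $\beta_\lambda = 0$ (values occurring in the example of Section 3.17 below) the iteration settles at $x_0 \approx 0.27 \in\, ]0, -\log q]$, whereas for $q = 3.5$, $\beta_\lambda = 2.5$ it diverges–matching the case distinction in (P3a)/(P3b).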

3.17. Lower Bounds for the Cases β A , β H , α A , α H , λ P SP , 2 × ( R \ [ 0 , 1 ] )

In such a constellation, where $\mathcal{P}_{SP,2} := \{(\beta_A,\beta_H,\alpha_A,\alpha_H) \in \mathcal{P}_{SP} : \alpha_A = \alpha_H,\ \beta_A \ne \beta_H\}$ (cf. (49)), one gets $\phi_\lambda(0) = 0$ (cf. Properties 3 (P16)) and $\phi'_\lambda(0) = 0$ (cf. (P17)). Thus, the only choice for the intercept and the slope of the linear lower bound $\phi_\lambda^L(\cdot)$ for $\phi_\lambda(\cdot)$ which satisfies (35) for all $x \in \mathbb{N}$ and (potentially) (56) is $r_\lambda^L = 0 = p_\lambda^L - \alpha_\lambda$ (i.e., $p_\lambda^L = \alpha_\lambda = \alpha > 0$) and $s_\lambda^L = \frac{\phi_\lambda(1) - \phi_\lambda(0)}{1 - 0} = q_\lambda^L - \beta_\lambda = a_1(q_\lambda^L) > 0$ (i.e., $q_\lambda^L = (\alpha + \beta_A)^{\lambda}(\alpha + \beta_H)^{1-\lambda} - \alpha$). However, since $p_\lambda^L = \alpha_\lambda = \alpha > 0$, the restriction (56) is fulfilled iff $q_\lambda^L > 0$, which is equivalent to
$$\lambda \in I_{SP,2} := \begin{cases} \bigg]\, \dfrac{\log\frac{\alpha}{\alpha+\beta_H}}{\log\frac{\alpha+\beta_A}{\alpha+\beta_H}},\; 0\, \bigg[ \;\cup\; ]1, \infty[\,, & \text{if } \beta_A > \beta_H, \\[2.5ex] ]-\infty, 0[ \;\cup\; \bigg]\, 1,\; \dfrac{\log\frac{\alpha}{\alpha+\beta_H}}{\log\frac{\alpha+\beta_A}{\alpha+\beta_H}}\, \bigg[\,, & \text{if } \beta_A < \beta_H. \end{cases}$$
Suppose that $\lambda \in I_{SP,2}$. As we have seen above, from Properties 1 (P3a) and (P3b) one can derive that $(a_n(q_\lambda^L))_{n\in\mathbb{N}}$ is strictly positive, strictly increasing, and converges to $x_0(q_\lambda^L) \in\, ]0, -\log(q_\lambda^L)]$ iff $q_\lambda^L \le \min\{1, e^{\beta_\lambda - 1}\}$, and otherwise it diverges to $\infty$. Notice that both cases can occur: consider the parameter setup $(\beta_A,\beta_H,\alpha_A,\alpha_H) = (1.5, 0.5, 0.5, 0.5) \in \mathcal{P}_{SP,2}$, which leads to $I_{SP,2} = ]-1, 0[\, \cup\, ]1, \infty[$; within our running-example epidemiological context of Section 2.3, this corresponds to a “mild” infectious-disease-transmission situation $(H)$ (with “low” reproduction number $\beta_H = 0.5$ and importation mean $\alpha_H = 0.5$), whereas $(A)$ describes a “dangerous” situation (with supercritical $\beta_A = 1.5$ and $\alpha_A = 0.5$). For $\lambda = -0.5 \in I_{SP,2}$ one obtains $q_\lambda^L \approx 0.207 \le \min\{1, e^{\beta_\lambda - 1}\} \approx 0.368$, whereas for $\lambda = 2 \in I_{SP,2}$ one gets $q_\lambda^L = 3.5 > \min\{1, e^{\beta_\lambda - 1}\} = 1$. Altogether, this leads to
Proposition 11.
For all $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) \in \mathcal{P}_{SP,2} \times I_{SP,2}$ and all initial population sizes $X_0 \in \mathbb{N}$, the following assertions hold with $p_\lambda^L = \alpha_A = \alpha_H = \alpha$ and $q_\lambda^L = (\alpha + \beta_A)^{\lambda}(\alpha + \beta_H)^{1-\lambda} - \alpha$:
$$\begin{aligned} &(a)\;\; B_{\lambda,X_0,1}^{L} = \widetilde{B}_{\lambda,X_0,1}(p_\lambda^L, q_\lambda^L) = \exp\big\{(q_\lambda^L - \beta_\lambda)\cdot X_0\big\} > 1\,;\\ &(b)\;\; \text{the sequence } \big(B_{\lambda,X_0,n}^{L}\big)_{n\in\mathbb{N}} \text{ of lower bounds for } H_\lambda(P_{A,n}\,\|\,P_{H,n}) \text{ given by } B_{\lambda,X_0,n}^{L} = \widetilde{B}_{\lambda,X_0,n}(p_\lambda^L, q_\lambda^L) = \exp\Big\{a_n(q_\lambda^L)\cdot X_0 + \sum_{k=1}^{n} b_k(p_\lambda^L, q_\lambda^L)\Big\} \text{ is strictly increasing}\,;\\ &(c)\;\; \lim_{n\to\infty} B_{\lambda,X_0,n}^{L} = \infty = \lim_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n})\,;\\ &(d)\;\; \lim_{n\to\infty} \tfrac{1}{n}\log B_{\lambda,X_0,n}^{L} = \begin{cases} p_\lambda^L \cdot \exp\{x_0(q_\lambda^L)\} - \alpha > 0, & \text{if } q_\lambda^L \le \min\{1, e^{\beta_\lambda - 1}\}, \\ \infty, & \text{if } q_\lambda^L > \min\{1, e^{\beta_\lambda - 1}\}\,; \end{cases}\\ &(e)\;\; \text{the map } X_0 \mapsto B_{\lambda,X_0,n}^{L} = \widetilde{B}_{\lambda,X_0,n}(p_\lambda^L, q_\lambda^L) \text{ is strictly increasing}\,. \end{aligned}$$
Nevertheless, for the remaining constellations $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) \in \mathcal{P}_{SP,2} \times \big((\mathbb{R}\setminus[0,1]) \setminus I_{SP,2}\big)$, all observation time horizons $n \in \mathbb{N}$ and all initial population sizes $X_0 \in \mathbb{N}$ one can still prove
$$1 < H_\lambda(P_{A,n}\,\|\,P_{H,n}) \qquad \text{and} \qquad \lim_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n}) = \infty,$$
(i.e., the achievement of the Goals (G1′), (G2′)), which is done by a conceptually different method (without involving $p_\lambda^L$, $q_\lambda^L$) in Appendix A.1.
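The numbers quoted for the setup $(1.5, 0.5, 0.5, 0.5)$ are easy to reproduce. The formula for $q_\lambda^L$ is taken from the construction above; the weight $\beta_\lambda = \lambda\beta_A + (1-\lambda)\beta_H$ entering the threshold $\min\{1, e^{\beta_\lambda - 1}\}$ is an assumption, but it reproduces the quoted value $0.368$ at $\lambda = -0.5$:

```python
import math

bA, bH, alpha = 1.5, 0.5, 0.5   # the P_SP,2 setup (beta_A, beta_H, alpha_A = alpha_H)

def q_L(lam):
    # slope parameter q_lambda^L of the linear lower bound of Section 3.17
    return (alpha + bA) ** lam * (alpha + bH) ** (1.0 - lam) - alpha

def threshold(lam):
    # min{1, e^{beta_lambda - 1}} with the (assumed) weight beta_lambda
    beta_lam = lam * bA + (1.0 - lam) * bH
    return min(1.0, math.exp(beta_lam - 1.0))

# left endpoint of the negative part of I_SP,2; for this setup it equals -1
lam_left = math.log(alpha / (alpha + bH)) / math.log((alpha + bA) / (alpha + bH))
```

This confirms $I_{SP,2} = ]-1, 0[\, \cup\, ]1, \infty[$ as well as the convergence case at $\lambda = -0.5$ and the divergence case at $\lambda = 2$.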

3.18. Lower Bounds for the Cases β A , β H , α A , α H , λ P SP , 3 a × ( R \ [ 0 , 1 ] )

In the current setup, where $\mathcal{P}_{SP,3a} := \{(\beta_A,\beta_H,\alpha_A,\alpha_H) \in \mathcal{P}_{SP} : \alpha_A \ne \alpha_H,\ \beta_A \ne \beta_H,\ \frac{\alpha_A}{\beta_A} \ne \frac{\alpha_H}{\beta_H},\ \frac{\alpha_A - \alpha_H}{\beta_H - \beta_A} \in\, ]-\infty, 0[\,\}$ (cf. (49)), we always have either $(\alpha_A > \alpha_H) \wedge (\beta_A > \beta_H)$ or $(\alpha_A < \alpha_H) \wedge (\beta_A < \beta_H)$. Furthermore, from Properties 3 (P16) we obtain $\phi_\lambda(0) > 0$. As in the case $\lambda \in\, ]0,1[$, the derivative $\phi'_\lambda(0)$ can assume any sign on $\mathcal{P}_{SP,3a}$; take e.g., $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) = (2.2, 4.5, 1, 3, 2)$ for $\phi'_\lambda(0) < 0$, $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) = (2.25, 4.5, 1, 3, 2)$ for $\phi'_\lambda(0) = 0$ and $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) = (2.3, 4.5, 1, 3, 2)$ for $\phi'_\lambda(0) > 0$ (these parameter constellations reflect “dangerous” $(A)$ versus “highly dangerous” $(H)$ situations within our running-example epidemiological context of Section 2.3). Nevertheless, in all three subcases one gets $\min_{x \in \mathbb{N}_0} \phi_\lambda(x) \ge \min_{x \ge 0} \phi_\lambda(x) > 0$. Thus, there exist parameters $p_\lambda^L \in\, ]\alpha_\lambda, \alpha_A^{\lambda}\alpha_H^{1-\lambda}]$ and $q_\lambda^L \in\, ]\beta_\lambda, \beta_A^{\lambda}\beta_H^{1-\lambda}]$ which satisfy (35) (in particular, $p_\lambda^L - \alpha_\lambda > 0$, $q_\lambda^L - \beta_\lambda > 0$). We now have to look for a condition which guarantees that these parameters additionally fulfill (56); such a condition is clearly that both $\alpha_\lambda \ge 0$ and $\beta_\lambda \ge 0$ hold, which is equivalent (cf. (57)) with
$$\lambda \in I_{SP,3a}^{(\ge)} := \begin{cases} \bigg[\, \max\Big\{\dfrac{-\alpha_H}{\alpha_A - \alpha_H},\, \dfrac{-\beta_H}{\beta_A - \beta_H}\Big\},\; 0\, \bigg[ \;\cup\; ]1, \infty[\,, & \text{if } (\alpha_A > \alpha_H) \wedge (\beta_A > \beta_H), \\[2.5ex] ]-\infty, 0[ \;\cup\; \bigg]\, 1,\; \min\Big\{\dfrac{\alpha_H}{\alpha_H - \alpha_A},\, \dfrac{\beta_H}{\beta_H - \beta_A}\Big\}\, \bigg]\,, & \text{if } (\alpha_A < \alpha_H) \wedge (\beta_A < \beta_H); \end{cases}$$
recall that $\alpha_\lambda = 0$ and $\beta_\lambda = 0$ cannot occur simultaneously in the current setup. If $\alpha_\lambda \le 0$ and $\beta_\lambda \le 0$, i.e., if
$$\lambda \in I_{SP,3a}^{(<)} := \begin{cases} \bigg]-\infty,\; \min\Big\{\dfrac{-\alpha_H}{\alpha_A - \alpha_H};\, \dfrac{-\beta_H}{\beta_A - \beta_H}\Big\}\, \bigg]\,, & \text{if } (\alpha_A > \alpha_H) \wedge (\beta_A > \beta_H), \\[2.5ex] \bigg[\, \max\Big\{\dfrac{\alpha_H}{\alpha_H - \alpha_A};\, \dfrac{\beta_H}{\beta_H - \beta_A}\Big\},\; \infty\, \bigg[\,, & \text{if } (\alpha_A < \alpha_H) \wedge (\beta_A < \beta_H), \end{cases}$$
then–due to the strict positivity of the function $\varphi_\lambda(\cdot)$ (cf. (31))–there exist parameters $p_\lambda^L > 0 = \max\{0, \alpha_\lambda\}$ and $q_\lambda^L > 0 = \max\{0, \beta_\lambda\}$ which satisfy (56) and (34) (where the latter implies (35) and thus $p_\lambda^L \le \alpha_A^{\lambda}\alpha_H^{1-\lambda}$, $q_\lambda^L \le \beta_A^{\lambda}\beta_H^{1-\lambda}$). With
$$I_{SP,3a} := I_{SP,3a}^{(\ge)} \cup I_{SP,3a}^{(<)}$$
and with the discussion below (56), we thus derive the following
Proposition 12.
For all $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) \in \mathcal{P}_{SP,3a} \times I_{SP,3a}$ there exist parameters $p_\lambda^L$, $q_\lambda^L$ which satisfy $\max\{0, \alpha_\lambda\} < p_\lambda^L \le \alpha_A^{\lambda}\alpha_H^{1-\lambda}$, $\max\{0, \beta_\lambda\} < q_\lambda^L \le \beta_A^{\lambda}\beta_H^{1-\lambda}$ as well as (35) for all $x \in \mathbb{N}_0$, and for all such pairs $(p_\lambda^L, q_\lambda^L)$ and all initial population sizes $X_0 \in \mathbb{N}$ one gets
$$\begin{aligned} &(a)\;\; B_{\lambda,X_0,1}^{L} = \widetilde{B}_{\lambda,X_0,1}(p_\lambda^L, q_\lambda^L) = \exp\big\{(q_\lambda^L - \beta_\lambda)\cdot X_0 + (p_\lambda^L - \alpha_\lambda)\big\} > 1\,;\\ &(b)\;\; \text{the sequence } \big(B_{\lambda,X_0,n}^{L}\big)_{n\in\mathbb{N}} \text{ of lower bounds for } H_\lambda(P_{A,n}\,\|\,P_{H,n}) \text{ given by } B_{\lambda,X_0,n}^{L} = \widetilde{B}_{\lambda,X_0,n}(p_\lambda^L, q_\lambda^L) = \exp\Big\{a_n(q_\lambda^L)\cdot X_0 + \sum_{k=1}^{n} b_k(p_\lambda^L, q_\lambda^L)\Big\} \text{ is strictly increasing}\,;\\ &(c)\;\; \lim_{n\to\infty} B_{\lambda,X_0,n}^{L} = \infty = \lim_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n})\,;\\ &(d)\;\; \lim_{n\to\infty} \tfrac{1}{n}\log B_{\lambda,X_0,n}^{L} = \begin{cases} p_\lambda^L \cdot \exp\{x_0(q_\lambda^L)\} - \alpha_\lambda > 0, & \text{if } q_\lambda^L \le \min\{1, e^{\beta_\lambda - 1}\}, \\ \infty, & \text{if } q_\lambda^L > \min\{1, e^{\beta_\lambda - 1}\}\,; \end{cases}\\ &(e)\;\; \text{the map } X_0 \mapsto B_{\lambda,X_0,n}^{L} = \widetilde{B}_{\lambda,X_0,n}(p_\lambda^L, q_\lambda^L) \text{ is strictly increasing}\,. \end{aligned}$$
Notice that the assertions (a) to (e) of Proposition 12 hold true for parameter pairs $(p_\lambda^L, q_\lambda^L)$ whenever they satisfy (35) and (56); in particular, we may allow either $p_\lambda^L = \max\{0, \alpha_\lambda\}$ or $q_\lambda^L = \max\{0, \beta_\lambda\}$. Let us furthermore mention that in part (d) both asymptotical behaviours can occur: consider e.g., the parameter setup $(\beta_A,\beta_H,\alpha_A,\alpha_H) = (0.3, 0.2, 4, 3) \in \mathcal{P}_{SP,3a}$, leading to $]1, \infty[\, \subseteq I_{SP,3a}^{(\ge)} \subseteq I_{SP,3a}$. For $\lambda = 2 \in I_{SP,3a}$, the parameters $p_\lambda^L := \widetilde{p}_\lambda = 5.25$, $q_\lambda^L := \widetilde{q}_\lambda = 0.45$ (corresponding to the asymptote $\widetilde{\phi}_\lambda(\cdot)$, cf. (P20)) fulfill (35), (56) and additionally $q_\lambda^L = 0.45 < \min\{1, e^{\beta_\lambda - 1}\} \approx 0.549$. Analogously, in the setup $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) = (3, 2, 4, 3, 2) \in \mathcal{P}_{SP,3a} \times I_{SP,3a}$, the choices $p_\lambda^L := \widetilde{p}_\lambda = 5.25$, $q_\lambda^L := \widetilde{q}_\lambda = 4.5$ satisfy (35), (56), and there holds $q_\lambda^L = 4.5 > \min\{1, e^{\beta_\lambda - 1}\} = 1$.
For the remaining two cases $(\alpha_\lambda \le 0) \wedge (\beta_\lambda > 0)$ (e.g., $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) = (6, 5, 3, 2, -3)$) and $(\alpha_\lambda > 0) \wedge (\beta_\lambda \le 0)$ (e.g., $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) = (3, 2, 6, 5, -3)$), one has to proceed differently. Indeed, for all parameter constellations $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) \in \mathcal{P}_{SP,3a} \times \big(\mathbb{R} \setminus (I_{SP,3a} \cup [0,1])\big)$, all observation time horizons $n \in \mathbb{N}$ and all initial population sizes $X_0 \in \mathbb{N}$ one can still prove
$$1 < H_\lambda(P_{A,n}\,\|\,P_{H,n}) \qquad \text{and} \qquad \lim_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n}) = \infty,$$
which is done in Appendix A.1, using a similar method as in the proof of assertion (59).
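The asymptote parameters quoted in the two numeric setups above can be recomputed from the formulas of Section 3.13, $\widetilde{p}_\lambda = \lambda\alpha_A(\beta_A/\beta_H)^{\lambda-1} + (1-\lambda)\alpha_H(\beta_A/\beta_H)^{\lambda}$ and $\widetilde{q}_\lambda = \beta_A^{\lambda}\beta_H^{1-\lambda}$; as before, the weight $\beta_\lambda = \lambda\beta_A + (1-\lambda)\beta_H$ in the threshold is an assumption, but it matches the quoted value $0.549 \approx e^{\beta_\lambda - 1}$:

```python
import math

def asymptote_params(bA, bH, aA, aH, lam):
    """p~_lambda and q~_lambda of the asymptote phi~_lambda (cf. (P20))."""
    r = bA / bH
    p_tilde = lam * aA * r ** (lam - 1.0) + (1.0 - lam) * aH * r ** lam
    q_tilde = bA ** lam * bH ** (1.0 - lam)
    return p_tilde, q_tilde

def threshold(bA, bH, lam):
    # min{1, e^{beta_lambda - 1}}; beta_lambda = lam*bA + (1-lam)*bH is assumed
    return min(1.0, math.exp(lam * bA + (1.0 - lam) * bH - 1.0))
```

For $(0.3, 0.2, 4, 3)$ at $\lambda = 2$ this yields $(\widetilde{p}_\lambda, \widetilde{q}_\lambda) = (5.25, 0.45)$ with $0.45$ below the threshold (the convergent regime of part (d)), while for $(3, 2, 4, 3)$ at $\lambda = 2$ it yields $(5.25, 4.5)$ with $4.5$ above the threshold $1$ (the divergent regime).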

3.19. Lower Bounds for the Cases β A , β H , α A , α H , λ P SP , 3 b × ( R \ [ 0 , 1 ] )

Within such a constellation, where $\mathcal{P}_{SP,3b} := \{(\beta_A,\beta_H,\alpha_A,\alpha_H) \in \mathcal{P}_{SP} : \alpha_A \ne \alpha_H,\ \beta_A \ne \beta_H,\ \frac{\alpha_A}{\beta_A} \ne \frac{\alpha_H}{\beta_H},\ \frac{\alpha_A - \alpha_H}{\beta_H - \beta_A} \in\, ]0, \infty[\, \setminus \mathbb{N}\}$ (cf. (49)), one always has either $(\alpha_A < \alpha_H) \wedge (\beta_A > \beta_H)$ or $(\alpha_A > \alpha_H) \wedge (\beta_A < \beta_H)$. Moreover, from Properties 3 (P15) one can see that $\phi_\lambda(x) = 0$ for $x = x^* = \frac{\alpha_H - \alpha_A}{\beta_A - \beta_H} > 0$. However, $x^* \notin \mathbb{N}_0$, which implies $\phi_\lambda(x) > 0$ for all $x$ on the relevant subdomain $\mathbb{N}_0$. Again, we incorporate (57) and consider the set of all $\lambda \in \mathbb{R}\setminus[0,1]$ such that $\alpha_\lambda \ge 0$ and $\beta_\lambda \ge 0$ (where $\alpha_\lambda = 0 = \beta_\lambda$ cannot appear), i.e.,
$$\lambda \in I_{SP,3b}^{(\ge)} := \begin{cases} \bigg[\, \dfrac{-\beta_H}{\beta_A - \beta_H},\; 0\, \bigg[ \;\cup\; \bigg]\, 1,\; \dfrac{\alpha_H}{\alpha_H - \alpha_A}\, \bigg]\,, & \text{if } (\alpha_A < \alpha_H) \wedge (\beta_A > \beta_H), \\[2.5ex] \bigg[\, \dfrac{-\alpha_H}{\alpha_A - \alpha_H},\; 0\, \bigg[ \;\cup\; \bigg]\, 1,\; \dfrac{\beta_H}{\beta_H - \beta_A}\, \bigg]\,, & \text{if } (\alpha_A > \alpha_H) \wedge (\beta_A < \beta_H). \end{cases}$$
As above in Section 3.18, if $\lambda \in I_{SP,3b}^{(\ge)}$ then there exist parameters $p_\lambda^L \in\, ]\alpha_\lambda, \alpha_A^{\lambda}\alpha_H^{1-\lambda}]$, $q_\lambda^L \in\, ]\beta_\lambda, \beta_A^{\lambda}\beta_H^{1-\lambda}]$ (which thus fulfill (56)) such that (35) is satisfied for all $x \in \mathbb{N}_0$. Hence, for all $\lambda \in I_{SP,3b} := I_{SP,3b}^{(\ge)}$, all assertions (a) to (e) of Proposition 12 hold true. Notice that for the current setup $\mathcal{P}_{SP,3b}$ one cannot have $\alpha_\lambda \le 0$ and $\beta_\lambda \le 0$ simultaneously. Furthermore, in each of the two remaining cases $(\alpha_\lambda < 0) \wedge (\beta_\lambda > 0)$ respectively $(\alpha_\lambda > 0) \wedge (\beta_\lambda < 0)$ it can happen that there do not exist parameters $p_\lambda^L, q_\lambda^L > 0$ which satisfy both (35) and (56). However, as in the case $\mathcal{P}_{SP,3a}$ above, for all $\lambda \in (\mathbb{R}\setminus[0,1]) \setminus I_{SP,3b}$ we prove in Appendix A.1 (by a method without $p_\lambda^L$, $q_\lambda^L$) that for all observation times $n \in \mathbb{N}$ and all initial population sizes $X_0 \in \mathbb{N}$ there holds
$$1 < H_\lambda(P_{A,n}\,\|\,P_{H,n}) \qquad \text{and} \qquad \lim_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n}) = \infty.$$

3.20. Lower Bounds for the Cases β A , β H , α A , α H , λ P SP , 3 c × ( R \ [ 0 , 1 ] )

Since in this subcase one has $\mathcal{P}_{SP,3c} := \{(\beta_A,\beta_H,\alpha_A,\alpha_H) \in \mathcal{P}_{SP} : \alpha_A \ne \alpha_H,\ \beta_A \ne \beta_H,\ \frac{\alpha_A}{\beta_A} \ne \frac{\alpha_H}{\beta_H},\ \frac{\alpha_A - \alpha_H}{\beta_H - \beta_A} \in \mathbb{N}\}$ (cf. (49)), and thus $\phi_\lambda(x^*) = 0$ for $x^* \in \mathbb{N}$, there do not exist parameters $p_\lambda^L$, $q_\lambda^L$ such that (35) and (56) are satisfied. The only parameter pair that ensures $\exp\{a_n(q_\lambda^L)\cdot X_0 + \sum_{k=1}^{n} b_k(p_\lambda^L, q_\lambda^L)\} \ge 1$ for all $n \in \mathbb{N}$ and all $X_0 \in \mathbb{N}$ within our proposed method is the choice $p_\lambda^L = \alpha_\lambda$, $q_\lambda^L = \beta_\lambda$. Consequently, $B_{\lambda,X_0,n}^L \equiv 1$, which coincides with the general lower bound (11) but violates the above-mentioned desired Goal (G1′). However, in some constellations there exist nonnegative parameters $p_\lambda^L < \alpha_\lambda$, $q_\lambda^L > \beta_\lambda$ or $p_\lambda^L > \alpha_\lambda$, $q_\lambda^L < \beta_\lambda$ such that at least the parts (c) and (d) of Proposition 12 are satisfied. As in Section 3.19 above, by using a conceptually different method (without $p_\lambda^L$, $q_\lambda^L$) we prove in Appendix A.1 that for all $\lambda \in \mathbb{R}\setminus[0,1]$, all observation times $n \in \mathbb{N}$ and all initial population sizes $X_0 \in \mathbb{N}$ there holds
1 < H λ P A , n P H , n and lim n H λ P A , n P H , n = .

3.21. Lower Bounds for the Cases β A , β H , α A , α H , λ P SP , 4 a × ( R \ [ 0 , 1 ] )

In the current setup, where $\mathcal{P}_{SP,4a} := \{(\beta_A,\beta_H,\alpha_A,\alpha_H) \in \mathcal{P}_{SP} : \alpha_A \ne \alpha_H > 0,\ \beta_A = \beta_H \in\, ]0,1[\,\}$ (cf. (49)), the function $\phi_\lambda(\cdot)$ is strictly positive and strictly decreasing, with $\lim_{x\to\infty} \phi_\lambda(x) = \lim_{x\to\infty} \phi'_\lambda(x) = 0$. The only choice of parameters $p_\lambda^L$, $q_\lambda^L$ which fulfill (35) and $\exp\{a_n(q_\lambda^L)\cdot X_0 + \sum_{k=1}^{n} b_k(p_\lambda^L, q_\lambda^L)\} \ge 1$ for all $n \in \mathbb{N}$ and all $X_0 \in \mathbb{N}$ is the choice $p_\lambda^L = \alpha_\lambda$ as well as $q_\lambda^L = \beta_\lambda = \beta$, where $\beta$ stands for both (equal) $\beta_H$ and $\beta_A$. Of course, this leads to $B_{\lambda,X_0,n}^L \equiv 1$, which is consistent with the general lower bound (11), but violates the above-mentioned desired Goal (G1′). Nevertheless, in Appendix A.1 we prove the following
Proposition 13.
For all β A , β H , α A , α H , λ P SP , 4 a × R \ [ 0 , 1 ] there exist parameters p λ L > α λ (not necessarily satisfying p λ L 0 ) and 0 < q λ L < β λ = β < min { 1 , e β 1 } = e β 1 such that (35) holds for all x [ 0 , [ and such that for all initial population sizes X 0 N the parts (c) and (d) of Proposition 12 hold true.

3.22. Lower Bounds for the Cases β A , β H , α A , α H , λ P SP , 4 b × ( R \ [ 0 , 1 ] )

By recalling P SP , 4 b : = β A , β H , α A , α H P SP : α A α H > 0 , β A = β H [ 1 , [ (cf. (49)), the assertions preceding Proposition 13 remain valid. However, the proof of Proposition 13 in Appendix A.1 contains details which explain why it cannot be carried over to the current case P SP , 4 b . Thus, the generally valid lower bound B λ , X 0 , n L 1 cannot be improved with our methods.

3.23. Concluding Remarks on Alternative Lower Bounds for all Cases β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( R \ [ 0 , 1 ] )

To achieve the Goals (G1 ) to (G3 ), in the above-mentioned investigations about lower bounds of the Hellinger integral H λ ( P A , n P H , n ) , λ R \ [ 0 , 1 ] , we have mainly focused on parameters p λ L , q λ L which satisfy (35) and additionally (56). Nevertheless, Theorem 1 (b) gives lower bounds B λ , X 0 , n L whenever (35) is fulfilled. However, this lower bound can be the trivial one, B λ , X 0 , n L 1 . Let us remark here that for the parameter constellations β A , β H , α A , α H , λ P SP , 2 × R \ [ 0 , 1 ] I SP , 2 P SP , 3 a × R \ [ 0 , 1 ] I SP , 3 a P SP , 3 b × R \ [ 0 , 1 ] I SP , 3 b one can prove that there exist p λ L , q λ L which satisfy (35) for all x N 0 as well as the condition (generalizing (56))
p λ L α λ , q λ L β λ , ( where at least one of the inequalities is strict ) ,
and that for such p λ L , q λ L one gets the validity of H λ ( P A , n P H , n ) B λ , X 0 , n L = B ˜ λ , X 0 , n ( p λ L , q λ L ) > 1 for all X 0 N and all n N ; consequently, Goal (G1 ) is achieved. However, in these parameter constellations it can unpleasantly happen that n B λ , X 0 , n L is oscillating (in contrast to the monotone behaviour in the Propositions 11 (b), 12 (b)).
As a final general remark, let us mention that the functions ϕ λ , y tan ( · ) , ϕ λ , k sec ( · ) , ϕ λ hor ( · ) , ϕ λ ˜ ( · ) –defined in (52)–(54) and Properties 3 (P20)–constitute linear lower bounds for ϕ λ ( · ) on the domain N 0 in the case λ R \ [ 0 , 1 ] . Their parameters p λ L p λ , y tan , p λ , y sec , p λ , y hor , p λ ˜ and q λ L q λ , y tan , q λ , y sec , q λ , y hor , q λ ˜ lead to lower bounds B λ , X 0 , n L of the Hellinger integrals that may or may not be consistent with Goals (G1 ) to (G3 ), and which may be better, weaker, or incomparable with the previous lower bounds when adding some relaxation of (G1 ), such as the validity of H λ ( P A , n P H , n ) > 1 for all but finitely many n N .

3.24. Upper Bounds for the Cases β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( R \ [ 0 , 1 ] )

For the cases λ R \ [ 0 , 1 ] , the investigation of upper bounds for the Hellinger integral H λ ( P A , n P H , n ) is much easier than the above-mentioned derivations of lower bounds. In fact, we face a situation which is similar to the lower-bound studies for the cases λ ] 0 , 1 [ : due to Properties 3 (P19), the function ϕ λ ( · ) is strictly convex on the nonnegative real line. Furthermore, it is asymptotically linear, as stated in (P20). The monotonicity Properties 2 (P10) to (P12) imply that for the tightest upper bound (within our framework) one should use the parameters p λ U : = α A λ α H 1 λ > 0 and q λ U : = β A λ β H 1 λ > 0 . Lemma A1 states that p λ U α λ resp. q λ U β λ , with equality iff α A = α H resp. iff β A = β H . From Properties 1 (P3a) we see that for β A β H the corresponding sequence a n ( q λ U ) n N is convergent to x 0 ( q λ U ) ] 0 , log ( q λ U ) ] if q λ U min { 1 , e β λ 1 } (i.e., if λ [ λ , λ + ] , cf. Lemma 1 (a)), and otherwise it diverges to faster than exponentially (cf. (P3b)). If β A = β H (i.e., if β A , β H , α A , α H P SP , 4 = P SP , 4 a P SP , 4 b ), then one gets q λ U = β λ and a n ( q λ U ) = 0 = x 0 ( q λ U ) for all n N (cf. (P2)). Altogether, this leads to
Proposition 14.
For all β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( R \ [ 0 , 1 ] ) and all initial population sizes X 0 N there holds with p λ U : = α A λ α H 1 λ , q λ U : = β A λ β H 1 λ
( a ) B λ , X 0 , 1 U = B ˜ λ , X 0 , 1 ( p λ U , q λ U ) = exp β A λ β H 1 λ β λ · X 0 + α A λ α H 1 λ α λ > 1 , ( b ) the sequence B λ , X 0 , n U n N of upper bounds for H λ ( P A , n P H , n ) given by B λ , X 0 , n U = B ˜ λ , X 0 , n ( p λ U , q λ U ) = exp a n ( q λ U ) · X 0 + k = 1 n b k ( p λ U , q λ U ) is strictly increasing , ( c ) lim n B λ , X 0 , n U = , ( d ) lim n 1 n log B λ , X 0 , n U = p λ U · exp x 0 ( q λ U ) α λ > 0 , if λ [ λ , λ + ] \ [ 0 , 1 ] , , if λ ] , λ [ ] λ + , [ , ( e ) the map X 0 B λ , X 0 , n U = B ˜ λ , X 0 , n ( p λ U , q λ U ) is strictly increasing .
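The recursive upper bound of Proposition 14 is straightforward to evaluate numerically. The following purely illustrative Python sketch does so; it assumes (as our reading of the condensed displays (36), (38) and (42)) the recursion a_0(q) = 0, a_k(q) = q·exp(a_{k-1}(q)) − β_λ and b_k(p,q) = p·exp(a_{k-1}(q)) − α_λ, with the λ-mixtures α_λ = λα_A + (1−λ)α_H and β_λ = λβ_A + (1−λ)β_H; these identifications should be checked against the original formulas.

```python
import math

def upper_bound_hellinger(bA, bH, aA, aH, lam, X0, n):
    # Recursive upper bound B^U_{lam,X0,n} of Proposition 14 for the
    # Hellinger integral H_lam(P_{A,n} || P_{H,n}), lam outside [0,1].
    # Assumed identifications (our reading of (36), (38), (42)):
    #   alpha_lam = lam*aA + (1-lam)*aH,  beta_lam = lam*bA + (1-lam)*bH,
    #   a_0(q) = 0,  a_k(q) = q*exp(a_{k-1}(q)) - beta_lam,
    #   b_k(p,q) = p*exp(a_{k-1}(q)) - alpha_lam.
    p = aA**lam * aH**(1.0 - lam)           # p^U_lam
    q = bA**lam * bH**(1.0 - lam)           # q^U_lam
    alpha_lam = lam * aA + (1.0 - lam) * aH
    beta_lam = lam * bA + (1.0 - lam) * bH
    a, sum_b = 0.0, 0.0
    for _ in range(n):
        sum_b += p * math.exp(a) - alpha_lam   # accumulate b_k(p, q)
        a = q * math.exp(a) - beta_lam         # advance a_k(q)
    return math.exp(a * X0 + sum_b)
```

For instance, with the hypothetical parameters (β_A, β_H, α_A, α_H) = (0.8, 0.6, 2, 1), λ = 2 and X_0 = 1, the resulting bounds exceed 1 and increase strictly in n and in X_0, in accordance with parts (a), (b) and (e).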

4. Power Divergences of Non-Kullback-Leibler-Information-Divergence Type

4.1. A First Basic Result

For orders λ R \ { 0 , 1 } , all the results of the previous Section 3 carry over correspondingly from the Hellinger integrals H λ ( · · ) to the total variation distance V ( · | | · ) , by virtue of the relation (cf. (12))
2 1 H 1 2 ( P A , n P H , n ) V ( P A , n P H , n ) 2 1 H 1 2 ( P A , n P H , n ) 2 ,
to the Renyi divergences R λ ( · · ) , by virtue of the relation (cf. (7))
0 R λ P A , n P H , n = 1 λ ( λ 1 ) log H λ P A , n P H , n , with log 0 : = ,
as well as to the power divergences I λ · · , by virtue of the relation (cf. (2))
I λ P A , n P H , n = 1 H λ ( P A , n P H , n ) λ · ( 1 λ ) , n N ;
in the following, we concentrate on the latter. In particular, the above-mentioned carrying-over procedure leads to bounds on I λ P A P H which are tighter than the general rudimentary bounds (cf. (10) and (11))
0 I λ P A , n P H , n < 1 λ ( 1 λ ) , for λ ] 0 , 1 [ , 0 I λ P A , n P H , n , for λ R \ [ 0 , 1 ] .
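The three transformation formulas above translate any computed Hellinger-integral value (or bound) into the corresponding total variation bounds, Renyi divergence and power divergence; a minimal sketch:

```python
import math

def power_divergence(H, lam):
    # I_lam = (1 - H_lam) / (lam * (1 - lam)), cf. (2); valid for lam != 0, 1,
    # with the sign of the denominator flipping for lam outside [0, 1]
    return (1.0 - H) / (lam * (1.0 - lam))

def renyi_divergence(H, lam):
    # R_lam = log(H_lam) / (lam * (lam - 1)), cf. (7), with log 0 := -infinity,
    # so that H = 0 yields R_lam = +infinity for lam in ]0, 1[
    return math.log(H) / (lam * (lam - 1.0)) if H > 0 else float("inf")

def total_variation_bounds(H_half):
    # sandwich for V(P_{A,n} || P_{H,n}) in terms of H_{1/2}, cf. (12):
    # 2*(1 - H_{1/2}) <= V <= 2*sqrt(1 - H_{1/2}**2)
    return 2.0 * (1.0 - H_half), 2.0 * math.sqrt(1.0 - H_half**2)
```

For H = 1 (indistinguishable laws) all three outputs vanish, while H < 1 for λ ∈ ]0, 1[ produces strictly positive divergences, consistent with (10).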
Because power divergences have a very insightful interpretation as “directed distances” between two probability distributions (e.g., within our running-example epidemiological context), and function as important tools in statistics, information theory, machine learning, and artificial intelligence, we present explicitly the resulting exact values respectively bounds of I λ P A P H ( λ R \ { 0 , 1 } , n N ) in the current and the following subsections. For this, recall the case-dependent parameters p A = p λ A = p A β A , β H , α A , α H , λ and q A = q λ A = q A β A , β H , α A , α H , λ ( A { E , L , U } ). To begin with, we can deduce from Theorem 1
Theorem 2.
(a) 
For all β A , β H , α A , α H ( P NI P SP , 1 ) , all initial population sizes X 0 N 0 , all observation horizons n N and all λ R \ { 0 , 1 } one can recursively compute the exact value
I λ ( P A , n P H , n ) = 1 λ ( λ 1 ) · exp a n ( q λ E ) · X 0 + α A β A k = 1 n a k ( q λ E ) 1 = : V λ , X 0 , n I ,
where α A β A can be equivalently replaced by α H β H and q λ E : = β A λ β H 1 λ . Notice that on P NI the formula (65) simplifies significantly, since α A = α H = 0 .
(b) 
For general parameters p R , q 0 recall the general expression (cf. (42))
B ˜ λ , X 0 , n ( p , q ) : = exp a n ( q ) · X 0 + p q k = 1 n a k ( q ) + n · p q β λ α λ
as well as
B ˜ λ , X 0 , n ( p , 0 ) : = exp β λ · X 0 + p · e β λ α λ · n .
Then, for all β A , β H , α A , α H P SP \ P SP , 1 , all λ R \ { 0 , 1 } , all coefficients p λ L , p λ U , q λ L , q λ U R which satisfy (35) for all x N 0 , all initial population sizes X 0 N and all observation horizons n N one gets the following recursive bounds for the power divergences: for λ ] 0 , 1 [ there holds
I λ ( P A , n P H , n ) < 1 λ ( 1 λ ) · 1 B λ , X 0 , n L = 1 λ ( 1 λ ) · 1 B ˜ λ , X 0 , n ( p λ L , q λ L ) = : B λ , X 0 , n I , U , 1 λ ( 1 λ ) · 1 B λ , X 0 , n U = 1 λ ( 1 λ ) · 1 min B ˜ λ , X 0 , n ( p λ U , q λ U ) , 1 = : B λ , X 0 , n I , L ,
whereas for λ R \ [ 0 , 1 ] there holds
I λ ( P A , n P H , n ) < 1 λ ( λ 1 ) · B λ , X 0 , n U 1 = 1 λ ( λ 1 ) · B ˜ λ , X 0 , n ( p λ U , q λ U ) 1 = : B λ , X 0 , n I , U , 1 λ ( λ 1 ) · B λ , X 0 , n L 1 = 1 λ ( λ 1 ) · max B ˜ λ , X 0 , n ( p λ L , q λ L ) , 1 1 = : B λ , X 0 , n I , L .
In order to deduce the subsequent detailed recursive analyses of power divergences, we also employ the obvious relations
lim n 1 n log 1 λ ( 1 λ ) I λ ( P A , n P H , n ) = lim n 1 n log λ ( 1 λ ) + log H λ ( P A , n P H , n ) = lim n 1 n log H λ ( P A , n P H , n ) , for λ ] 0 , 1 [ ,
as well as
lim n 1 n log I λ ( P A , n P H , n ) = lim n 1 n log λ ( λ 1 ) + log H λ ( P A , n P H , n ) 1 = lim n 1 n log 1 1 H λ ( P A , n | | P H , n ) + log H λ ( P A , n P H , n ) = lim n 1 n log H λ ( P A , n P H , n ) ,
for λ R \ [ 0 , 1 ] (provided that lim inf n H λ ( P A , n P H , n ) > 1 ).

4.2. Detailed Analyses of the Exact Recursive Values of I λ ( · · ) , i.e., for the Cases β A , β H , α A , α H , λ ( P NI P SP , 1 ) × ( R \ { 0 , 1 } )

Corollary 2.
For all β A , β H , α A , α H , λ P NI × ] 0 , 1 [ and all initial population sizes X 0 N there holds with q λ E : = β A λ β H 1 λ
( a ) I λ ( P A , 1 P H , 1 ) = 1 λ ( 1 λ ) · 1 exp β A λ β H 1 λ β λ · X 0 > 0 , ( b ) the sequence I λ ( P A , n P H , n ) n N given by I λ ( P A , n P H , n ) = 1 λ ( 1 λ ) · 1 exp a n ( q λ E ) · X 0 = : V λ , X 0 , n I is strictly increasing , ( c ) lim n I λ ( P A , n P H , n ) = 1 λ ( 1 λ ) · 1 exp x 0 ( q λ E ) · X 0 ] 0 , 1 λ ( 1 λ ) [ , ( d ) lim n 1 n log 1 λ ( 1 λ ) I λ ( P A , n P H , n ) = lim n 1 n log H λ ( P A , n P H , n ) = 0 , ( e ) the map X 0 V λ , X 0 , n I is strictly increasing .
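A purely illustrative evaluation of the exact value V^I_{λ,X0,n} of Corollary 2 follows. It assumes the recursion a_0(q) = 0, a_k(q) = q·exp(a_{k-1}(q)) − β_λ with β_λ = λβ_A + (1−λ)β_H, which is our reading of the condensed display (36); the same expression also covers Corollary 3, since for λ outside [0, 1] the factor 1/(λ(1−λ)) and the term 1 − exp(a_n(q^E)·X_0) change sign together.

```python
import math

def exact_power_divergence(bA, bH, lam, X0, n):
    # Exact value V^I_{lam,X0,n} of Corollary 2(b) on P_NI (no immigration):
    # I_lam(P_{A,n} || P_{H,n}) = (1 - exp(a_n(q^E) * X0)) / (lam * (1 - lam)),
    # with q^E = bA^lam * bH^(1-lam).  The recursion a_0(q) = 0,
    # a_k(q) = q*exp(a_{k-1}(q)) - beta_lam, beta_lam = lam*bA + (1-lam)*bH,
    # is our reading of the condensed display (36).
    q = bA**lam * bH**(1.0 - lam)
    beta_lam = lam * bA + (1.0 - lam) * bH
    a = 0.0
    for _ in range(n):
        a = q * math.exp(a) - beta_lam
    return (1.0 - math.exp(a * X0)) / (lam * (1.0 - lam))
```

With the hypothetical parameters β_A = 0.8, β_H = 0.6, λ = 1/2 and X_0 = 1, the values increase strictly in n and stay inside ]0, 1/(λ(1−λ))[ = ]0, 4[, in accordance with parts (b) and (c).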
Corollary 3.
For all β A , β H , α A , α H , λ P NI × ( R \ [ 0 , 1 ] ) and all initial population sizes X 0 N there holds with q λ E : = β A λ β H 1 λ
( a ) I λ ( P A , 1 P H , 1 ) = 1 λ ( λ 1 ) · exp β A λ β H 1 λ β λ · X 0 1 > 0 , ( b ) the sequence I λ ( P A , n P H , n ) n N given by I λ ( P A , n P H , n ) = 1 λ ( λ 1 ) · exp a n ( q λ E ) · X 0 1 = : V λ , X 0 , n I is strictly increasing , ( c ) lim n I λ ( P A , n P H , n ) = 1 λ ( λ 1 ) · exp x 0 ( q λ E ) · X 0 1 > 0 , if λ [ λ , λ + ] \ [ 0 , 1 ] , , if λ ] , λ [ ] λ + , [ , ( d ) lim n 1 n log I λ ( P A , n P H , n ) = 0 , if λ [ λ , λ + ] \ [ 0 , 1 ] , , if λ ] , λ [ ] λ + , [ , ( e ) the map X 0 V λ , X 0 , n I is strictly increasing .
Corollary 4.
For all β A , β H , α A , α H , λ P SP , 1 × ] 0 , 1 [ and all initial population sizes X 0 N there holds with q λ E : = β A λ β H 1 λ
( a ) I λ ( P A , 1 P H , 1 ) = 1 λ ( 1 λ ) · 1 exp β A λ β H 1 λ β λ · X 0 + α A β A > 0 , ( b ) the sequence I λ ( P A , n P H , n ) n N given by I λ ( P A , n P H , n ) = 1 λ ( 1 λ ) · 1 exp a n ( q λ E ) · X 0 + α A β A k = 1 n a k ( q λ E ) = : V λ , X 0 , n I is strictly increasing , ( c ) lim n I λ ( P A , n P H , n ) = 1 λ ( 1 λ ) , ( d ) lim n 1 n log 1 λ ( 1 λ ) I λ ( P A , n P H , n ) = α A β A · x 0 ( q λ E ) < 0 , ( e ) the map X 0 V λ , X 0 , n I is strictly increasing .
Corollary 5.
For all β A , β H , α A , α H , λ P SP , 1 × ( R \ [ 0 , 1 ] ) and all initial population sizes X 0 N there holds with q λ E : = β A λ β H 1 λ
( a ) I λ ( P A , 1 P H , 1 ) = 1 λ ( λ 1 ) · exp β A λ β H 1 λ β λ · X 0 + α A β A 1 > 0 , ( b ) the sequence I λ ( P A , n P H , n ) n N given by I λ ( P A , n P H , n ) = 1 λ ( λ 1 ) · exp a n ( q λ E ) · X 0 + α A β A k = 1 n a k ( q λ E ) 1 = : V λ , X 0 , n I is strictly increasing , ( c ) lim n I λ ( P A , n P H , n ) = , ( d ) lim n 1 n log I λ ( P A , n P H , n ) = α A β A · x 0 ( q λ E ) > 0 , if λ [ λ , λ + ] \ [ 0 , 1 ] , , if λ ] , λ [ ] λ + , [ , ( e ) the map X 0 V λ , X 0 , n I is strictly increasing .
In the assertions (a), (b), (d) of the Corollaries 4 and 5 the fraction α A / β A can be equivalently replaced by α H / β H .
Let us now derive the corresponding detailed results for the bounds of the power divergences for the parameter cases P SP \ P SP , 1 , where the Hellinger integral, and thus I λ ( P A , n P H , n ) , cannot be determined exactly. The extensive discussion on the Hellinger-integral bounds in Sections 3.4 to 3.13, as well as in Sections 3.16 to 3.24, can be carried over directly to obtain power-divergence bounds. In the following, we summarize the resulting key results, deferring a detailed discussion on the possible choices of p λ A = p A β A , β H , α A , α H , λ and q λ A = q A β A , β H , α A , α H , λ ( A { L , U } ) to the corresponding above-mentioned subsections.

4.3. Lower Bounds of I λ ( · · ) for the Cases β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ] 0 , 1 [

Corollary 6.
For all β A , β H , α A , α H , λ ( P SP , 2 P SP , 3 a P SP , 3 b ) × ] 0 , 1 [ there exist parameters p λ U , q λ U which satisfy p λ U α A λ α H 1 λ , α λ and q λ U [ β A λ β H 1 λ , β λ [ as well as (35) for all x N 0 , and for all such pairs ( p λ U , q λ U ) and all initial population sizes X 0 N there holds
( a ) B λ , X 0 , 1 I , L = 1 λ ( 1 λ ) · 1 exp q λ U β λ · X 0 + p λ U α λ > 0 , ( b ) the sequence B λ , X 0 , n I , L n N of lower bounds for I λ ( P A , n P H , n ) given by B λ , X 0 , n I , L = 1 λ ( 1 λ ) · 1 exp a n ( q λ U ) · X 0 + k = 1 n b k ( p λ U , q λ U ) is strictly increasing , ( c ) lim n B λ , X 0 , n I , L = lim n I λ ( P A , n P H , n ) = 1 λ ( 1 λ ) , ( d ) lim n 1 n log 1 λ ( 1 λ ) B λ , X 0 , n I , L = p λ U · e x 0 ( q λ U ) α λ < 0 , ( e ) the map X 0 B λ , X 0 , n I , L is strictly increasing .
Remark 4.
(a) 
Notice that in the case β A , β H , α A , α H , λ P SP , 2 × ] 0 , 1 [ –where α A λ α H 1 λ = α λ = α A = α H = α –we get the special choice p λ U = α and q λ U = ( α + β A ) λ ( α + β H ) 1 λ α (cf. Section 3.7). For the constellations β A , β H , α A , α H , λ ( P SP , 3 a P SP , 3 b ) × ] 0 , 1 [ there exist parameters p λ U [ α A λ α H 1 λ , α λ [ , q λ U [ β A λ β H 1 λ , β λ [ which satisfy (35) for all x N 0 .
(b) 
For the parameter setups β A , β H , α A , α H , λ ( P SP , 2 P SP , 3 a P SP , 3 b ) × ] 0 , 1 [ there might exist parameter pairs ( p λ U , q λ U ) satisfying (35) and either p λ U = α λ or q λ U = β λ , for which all assertions of Corollary 6 still hold true.
(c) 
Following the discussion in Section 3.10, for all β A , β H , α A , α H , λ P SP , 3 c × ] 0 , 1 [ at least part (c) still holds true.
Corollary 7.
For all β A , β H , α A , α H , λ P SP , 4 a × ] 0 , 1 [ there exist parameters p λ U < α λ , 1 > q λ U > β λ = β such that (35) is satisfied for all x [ 0 , [ and such that for all initial population sizes X 0 N at least the parts (c) and (d) of Corollary 6 hold true.
As in Section 3.12, for the parameter setup β A , β H , α A , α H , λ P SP , 4 b × ] 0 , 1 [ we cannot derive a lower bound for the power divergences which improves the generally valid lower bound I λ ( P A , n P H , n ) 0 (cf. (10)) by employing our proposed ( p λ U , q λ U )-method.

4.4. Upper Bounds of I λ ( · · ) for the Cases β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ] 0 , 1 [

Since in this setup the upper bounds of the power divergences can be derived from the lower bounds of the Hellinger integrals, we here appropriately adapt the results of Proposition 6.
Corollary 8.
For all β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ] 0 , 1 [ and all initial population sizes X 0 N there holds with p λ L : = α A λ α H 1 λ and q λ L : = β A λ β H 1 λ
( a ) B λ , X 0 , 1 I , U = 1 λ ( 1 λ ) · 1 exp β A λ β H 1 λ β λ · X 0 + α A λ α H 1 λ α λ > 0 , ( b ) the sequence of upper bounds B λ , X 0 , n I , U n N for I λ ( P A , n P H , n ) given by B λ , X 0 , n I , U = 1 λ ( 1 λ ) · 1 exp a n ( q λ L ) · X 0 + p λ L q λ L k = 1 n a k ( q λ L ) + n · p λ L q λ L · β λ α λ is strictly increasing , ( c ) lim n B λ , X 0 , n I , U = 1 λ ( 1 λ ) , ( d ) lim n 1 n log 1 λ ( 1 λ ) B λ , X 0 , n I , U = p λ L q λ L · x 0 ( q λ L ) + β λ α λ = p λ L · e x 0 ( q λ L ) α λ < 0 , ( e ) the map X 0 B λ , X 0 , n I , U is strictly increasing .

4.5. Lower Bounds of I λ ( · · ) for the Cases ( β A , β H , α A , α H , λ ) ( P SP \ P SP , 1 ) × ( R \ [ 0 , 1 ] )

In order to derive detailed results on lower bounds of the power divergences in the case λ R \ [ 0 , 1 ] , we have to subsume and adapt the lower-bound investigations for the Hellinger integrals from Sections 3.16 to 3.23. Recall the λ -sets I SP , 2 , I SP , 3 a , I SP , 3 b (cf. (58), (60), (62)). For the constellations P SP , 2 × I SP , 2 we employ the special choice p λ L = α A λ α H 1 λ = α λ = α A = α H = α together with q λ L = ( α + β A ) λ ( α + β H ) 1 λ α > max { 0 , β λ } (cf. (58)) which satisfy (35) for all x N 0 and (56), whereas for the constellations ( P SP , 3 a × I SP , 3 a ) ( P SP , 3 b × I SP , 3 b ) we have proved the existence of parameters p λ L , q λ L satisfying both (35) for all x N 0 and (56) with two strict inequalities. Subsuming this, we obtain
Corollary 9.
For all β A , β H , α A , α H , λ ( P SP , 2 × I SP , 2 ) ( P SP , 3 a × I SP , 3 a ) ( P SP , 3 b × I SP , 3 b ) there exist parameters p λ L , q λ L which satisfy max { 0 , α λ } p λ L α A λ α H 1 λ , max { 0 , β λ } < q λ L β A λ β H 1 λ as well as (35) for all x N 0 , and for all such pairs ( p λ L , q λ L ) and all initial population sizes X 0 N one gets
( a ) B λ , X 0 , 1 I , L = 1 λ ( λ 1 ) · exp q λ L β λ · X 0 + p λ L α λ 1 > 0 , ( b ) the sequence B λ , X 0 , n I , L n N of lower bounds for I λ ( P A , n P H , n ) given by B λ , X 0 , n I , L = 1 λ ( λ 1 ) · exp a n ( q λ L ) · X 0 + k = 1 n b k ( p λ L , q λ L ) 1 is strictly increasing , ( c ) lim n B λ , X 0 , n I , L = lim n I λ ( P A , n P H , n ) = , ( d ) lim n 1 n log B λ , X 0 , n I , L = p λ L · exp x 0 ( q λ L ) α λ > 0 , if q λ L min 1 ; e β λ 1 , , if q λ L > min 1 ; e β λ 1 , ( e ) the map X 0 B λ , X 0 , n I , L is strictly increasing .
Analogously to the discussions in Sections 3.17 to 3.20, for the parameter setups P SP , 2 × R \ I SP , 2 [ 0 , 1 ] P SP , 3 a × R \ I SP , 3 a [ 0 , 1 ] P SP , 3 b × R \ I SP , 3 b [ 0 , 1 ] P SP , 3 c × R \ [ 0 , 1 ] and for all initial population sizes X 0 N one can still show
0 < I λ ( P A , n P H , n ) , and lim n I λ ( P A , n P H , n ) = .
For the penultimate case we obtain
Corollary 10.
For all β A , β H , α A , α H , λ P SP , 4 a × ( R \ [ 0 , 1 ] ) there exist parameters p λ L > α λ (where not necessarily p λ L 0 ) and 0 < q λ L < β λ = β < min { 1 , e β 1 } = e β 1 such that (35) is satisfied for all x [ 0 , [ and such that for all initial population sizes X 0 N at least the parts (c) and (d) of Corollary 9 hold true.
Notice that for the last case β A , β H , α A , α H , λ P SP , 4 b × R \ [ 0 , 1 ] (where β A = β H 1 ) we cannot derive lower bounds of the power divergences which improve the generally valid lower bound I λ ( P A , n P H , n ) 0 (cf. (11)) by employing our proposed ( p λ L , q λ L )-method.

4.6. Upper Bounds of I λ ( · · ) for the Cases β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( R \ [ 0 , 1 ] )

For these constellations we adapt Proposition 14, which after modification becomes
Corollary 11.
For all β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( R \ [ 0 , 1 ] ) and all initial population sizes X 0 N there holds with p λ U : = α A λ α H 1 λ and q λ U : = β A λ β H 1 λ
( a ) B λ , X 0 , 1 I , U = 1 λ ( λ 1 ) · exp β A λ β H 1 λ β λ · X 0 + α A λ α H 1 λ α λ 1 > 0 , ( b ) the sequence B λ , X 0 , n I , U n N of upper bounds for I λ ( P A , n P H , n ) given by B λ , X 0 , n I , U = 1 λ ( λ 1 ) · exp a n ( q λ U ) · X 0 + k = 1 n b k ( p λ U , q λ U ) 1 is strictly increasing , ( c ) lim n B λ , X 0 , n I , U = , ( d ) lim n 1 n log B λ , X 0 , n I , U = p λ U · exp x 0 ( q λ U ) α λ > 0 , if λ [ λ , λ + ] \ [ 0 , 1 ] , , if λ ] , λ [ ] λ + , [ , ( e ) the map X 0 B λ , X 0 , n I , U is strictly increasing .

4.7. Applications to Bayesian Decision Making

As explained in Section 2.5, the power divergences fulfill
I λ P A , n P H , n = 0 1 Δ BR LO ˜ p A prior · 1 p A prior λ 2 · p A prior 1 λ d p A prior , λ R , ( cf . ( 21 ) ) ,
and
I λ P A , n P H , n = lim χ p A prior Δ BR LO λ , χ p A prior , λ ] 0 , 1 [ , ( cf . ( 22 ) ) ,
and thus can be interpreted as (i) weighted-average decision risk reduction (weighted-average statistical information measure) about the degree of evidence d e g concerning the parameter θ that can be attained by observing the GWI-path X n until stage n, and as (ii) limit decision risk reduction (limit statistical information measure). Hence, by combining (21) and (22) with the investigations in the previous Sections 4.1 to 4.6, we obtain exact recursive values respectively recursive bounds of the above-mentioned decision risk reductions. For the sake of brevity, we omit the details here.

5. Kullback-Leibler Information Divergence (Relative Entropy)

5.1. Exact Values Respectively Upper Bounds of I ( · | | · )

From (2), (3) and (6) in Section 2.4, one can immediately see that the Kullback-Leibler information divergence (relative entropy) between two competing Galton-Watson processes without/with immigration can be obtained by the limit
I ( P A , n P H , n ) = lim λ 1 I λ P A , n P H , n ,
and the reverse Kullback-Leibler information divergence (reverse relative entropy) by I P H , n P A , n = lim λ 0 I λ P A , n P H , n . Hence, in the following we concentrate only on (68); the reverse case works analogously. Accordingly, we can use (68) in appropriate combination with the λ ] 0 , 1 [ -parts of the previous Section 4 (respectively, the corresponding parts of Section 3) in order to obtain detailed analyses for I P A , n P H , n . Let us start with the following assertions on exact values respectively upper bounds, which will be proved in Appendix A.2:
Theorem 3.
(a) 
For all β A , β H , α A , α H ( P NI P SP , 1 ) , all initial population sizes X 0 N and all observation horizons n N the Kullback-Leibler information divergence (relative entropy) is given by
I ( P A , n P H , n ) = I X 0 , n : = β A · log β A β H 1 + β H 1 β A · X 0 α A 1 β A · 1 β A n + α A · β A · log β A β H 1 + β H β A ( 1 β A ) · n , if β A 1 , β H log β H 1 · α A 2 · n 2 + X 0 + α A 2 · n , if β A = 1 .
(b) 
For all β A , β H , α A , α H P SP \ P SP , 1 , all initial population sizes X 0 N and all observation horizons n N there holds I ( P A , n P H , n ) E X 0 , n U , where
E X 0 , n U : = β A · log β A β H 1 + β H 1 β A · X 0 α A 1 β A · 1 β A n + α A · β A · log β A β H 1 + β H β A ( 1 β A ) + α A log α A β H α H β A β H β A + α H · n , if β A 1 , β H log β H 1 · α A 2 · n 2 + X 0 + α A 2 · n + α A log α A β H α H β H + α H · n , if β A = 1 .
Remark 5.
(i) Notice that the exact values respectively upper bounds are in closed form (rather than in recursive form).
(ii) The n behaviour of (the bounds of) the Kullback-Leibler information divergence/relative entropy I ( P A , n P H , n ) in Theorem 3 is influenced by the following facts:
(a) 
β A · log β A β H 1 + β H 0 with equality iff β A = β H .
(b) 
In the case β A 1 of (70), there holds α A · β A · log β A β H 1 + β H β A ( 1 β A ) + α A log α A β H α H β A β H β A + α H 0 , with equality iff α A = α H and β A = β H .
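As an illustrative consistency check of Theorem 3(a), the closed-form value can be compared with the chain-rule decomposition of the relative entropy into conditional Poisson divergences: on P SP,1 (where, as noted after Corollary 5, α A / β A = α H / β H ) the conditional divergence is linear in the population size and can hence be summed exactly along the mean path. The branch formulas below are our reading of the condensed display (69), and the mean-path recursion m_{k+1} = β_A·m_k + α_A is the standard one for Poisson GWI; both are assumptions to be checked against the original displays.

```python
import math

def kl_closed_form(bA, bH, aA, X0, n):
    # Closed-form relative entropy I(P_{A,n} || P_{H,n}) of Theorem 3(a)
    # (our reading of display (69)), valid on P_NI and on P_SP,1; on P_SP,1
    # the parameters satisfy aA/bA = aH/bH, so aH is implied and not needed.
    # D is the per-individual Poisson offspring divergence.
    D = bA * math.log(bA / bH) - bA + bH
    if bA != 1.0:
        return (D * (1.0 - bA**n) / (1.0 - bA) * (X0 - aA / (1.0 - bA))
                + aA * D / (bA * (1.0 - bA)) * n)
    return D * (aA / 2.0 * n**2 + (X0 + aA / 2.0) * n)

def kl_chain_rule(bA, bH, aA, aH, X0, n):
    # Cross-check via the chain rule: sum the conditional divergences
    # KL(Poisson(bA*x + aA) || Poisson(bH*x + aH)) along the mean path
    # m_{k+1} = bA*m_k + aA; on P_SP,1 the conditional divergence is linear
    # in x, so plugging in the mean is exact.
    def pois_kl(la, lh):
        return la * math.log(la / lh) - la + lh
    total, m = 0.0, float(X0)
    for _ in range(n):
        total += pois_kl(bA * m + aA, bH * m + aH)
        m = bA * m + aA
    return total
```

With the hypothetical P SP,1 parameters (β_A, β_H, α_A, α_H) = (0.8, 0.5, 1.6, 1) (so that α_A β_H = α_H β_A) both routes agree to machine precision, for the subcritical as well as the critical case β_A = 1.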

5.2. Lower Bounds of I ( · | | · ) for the Cases β A , β H , α A , α H ( P SP \ P SP , 1 )

Again by using (68) in appropriate combination with the “ λ ] 0 , 1 [ -parts” of the previous Section 4 (respectively, the corresponding parts of Section 3), we obtain the following (semi-)closed-form lower bounds of I P A , n P H , n :
Theorem 4.
For all β A , β H , α A , α H P SP \ P SP , 1 , all initial population sizes X 0 N and all observation horizons n N there holds
I ( P A , n P H , n ) E X 0 , n L : = sup k N 0 , y [ 0 , [ E y , X 0 , n L , t a n , E k , X 0 , n L , s e c , E X 0 , n L , h o r [ 0 , [ ,
where for all y [ 0 , [ we define the – possibly negatively valued– finite bound component
E y , X 0 , n L , tan : = β A log α A + β A y α H + β H y + β H 1 α A + β A y α H + β H y · 1 β A n 1 β A · X 0 α A 1 β A + [ α A β A ( 1 β A ) β A log α A + β A y α H + β H y + β H 1 α A + β A y α H + β H y + α H α A β H β A 1 α A + β A y α H + β H y ] · n , if β A 1 , log α A + y α H + β H y + β H 1 α A + y α H + β H y · α A 2 · n 2 + X 0 + α A 2 · n + α H α A β H 1 α A + y α H + β H y · n , if β A = 1 ,
and for all k N 0 the – possibly negatively valued– finite bound component
E k , X 0 , n L , sec : = f A ( k + 1 ) log f A ( k + 1 ) f H ( k + 1 ) f A ( k ) log f A ( k ) f H ( k ) + β H β A · 1 β A n 1 β A · X 0 α A 1 β A + [ α A β A ( 1 β A ) f A ( k + 1 ) log f A ( k + 1 ) f H ( k + 1 ) f A ( k ) log f A ( k ) f H ( k ) + β H β A f A ( k + 1 ) log f A ( k + 1 ) f H ( k + 1 ) f A ( k ) log f A ( k ) f H ( k ) · k + α A β A + f A ( k ) log f A ( k ) f H ( k ) α A β H β A + α H ] · n , if β A 1 , f A ( k + 1 ) log f A ( k + 1 ) f H ( k + 1 ) f A ( k ) log f A ( k ) f H ( k ) + β H 1 · α A 2 · n 2 + X 0 + α A 2 · n [ f A ( k + 1 ) log f A ( k + 1 ) f H ( k + 1 ) f A ( k ) log f A ( k ) f H ( k ) k + α A f A ( k ) log f A ( k ) f H ( k ) + α A β H α H ] · n , if β A = 1 .
Furthermore, on P SP , 4 we set E X 0 , n L , h o r : = 0 for all n N , whereas on P SP \ ( P SP , 1 P SP , 4 ) we define
E X 0 , n L , h o r : = α A + β A z * · log α A + β A z * α H + β H z * 1 + α H + β H z * · n , n N ,
with z * : = arg max x N 0 ( α A + β A x ) log α A + β A x α H + β H x + 1 ( α H + β H x ) .
On P SP \ ( P SP , 1 P SP , 3 c ) one even gets E X 0 , n L > 0 for all X 0 N and all n N .
For the subcase P SP , 3 c , one obtains for each fixed n N and each fixed X 0 N the strict positivity E X 0 , n L > 0 if y E y , X 0 , n L , t a n ( y * ) 0 , where y * : = α A α H β H β A N and hence
y E y , X 0 , n L , t a n ( y * ) = ( β A β H ) 3 α A β H α H β A · 1 β A n 1 β A · X 0 α A 1 β A ( β A β H ) 2 β A 1 + α A ( β A β H ) ( 1 β A ) ( α A β H α H β A ) · n , if β A 1 , ( 1 β H ) 3 α A β H α H · α A 2 · n 2 + X 0 + α A 2 · n ( 1 β H ) 2 · n , if β A = 1 .
A proof of this theorem is given in Appendix A.2.
Remark 6.
Consider the exemplary parameter setup β A , β H , α A , α H = ( 1 3 , 2 3 , 2 , 1 ) P SP , 3 c ; within our running-example epidemiological context of Section 2.3, this corresponds to a “semi-mild” infectious-disease-transmission situation ( H ) (with subcritical reproduction number β H = 2 3 and importation mean of α H = 1 ), whereas ( A ) describes a “mild” situation (with “low” subcritical β A = 1 3 and α A = 2 ). In the case of X 0 = 3 there holds y E y , X 0 , n L , t a n ( y * ) = 0 for all n N , whereas for X 0 3 one obtains y E y , X 0 , n L , t a n ( y * ) 0 for all n N .
It seems that the optimization problem in (71) admits in general only an implicitly representable solution, and thus we have used the prefix “(semi-)” above. Of course, as a less tight but less involved explicit lower bound of the Kullback-Leibler information divergence (relative entropy) I ( P A , n | | P H , n ) one can use any term of the form max E y , X 0 , n L , t a n , E k , X 0 , n L , s e c , E X 0 , n L , h o r ( y [ 0 , [ , k N 0 ), as well as the following
Corollary 12.
(a) For all β A , β H , α A , α H P SP \ P SP , 1 , all initial population sizes X 0 N and all observation horizons n N
I ( P A , n P H , n ) E X 0 , n L E ˜ X 0 , n L : = max E , X 0 , n L , t a n , E 0 , X 0 , n L , s e c , E X 0 , n L , h o r [ 0 , [ ,
with E X 0 , n L , h o r defined by (74), with – possibly negatively valued– finite bound component E , X 0 , n L , t a n : = lim y E y , X 0 , n L , t a n , where
E , X 0 , n L , t a n : = β A · log β A β H 1 + β H 1 β A · X 0 α A 1 β A · 1 β A n + α A · β A · log β A β H 1 + β H β A ( 1 β A ) + α A 1 β H β A + α H 1 β A β H · n , if β A 1 , β H log β H 1 · α A 2 · n 2 + X 0 + α A 2 · n + α A 1 β H + α H 1 1 β H · n , if β A = 1 ,
and –possibly negatively valued–finite bound component
E 0 , X 0 , n L , s e c = α A + β A · log α A + β A α H + β H α A · log α A α H + β H β A · 1 β A n 1 β A · X 0 α A 1 β A + { α A β A ( 1 β A ) α A + β A · log α A + β A α H + β H α A · log α A α H α A 1 β A 1 β H α A 1 + α A β A · log α H ( α A + β A ) α A ( α H + β H ) + α H } · n , if β A 1 , α A + 1 · log α A + 1 α H + β H α A · log α A α H + β H 1 · n · X 0 + α A 2 · n 2 + { α A 2 α A + 1 · log α A + 1 α H + β H α A · log α A α H β H 1 α A 1 + α A · log α H ( α A + 1 ) α A ( α H + β H ) + α H } · n , if β A = 1 .
For the cases P SP , 2 P SP , 3 a P SP , 3 b one even gets E ˜ X 0 , n L > 0 for all X 0 N and all n N .

5.3. Applications to Bayesian Decision Making

As explained in Section 2.5, the Kullback-Leibler information divergence fulfills
I P A , n P H , n = 0 1 Δ BR LO ˜ p A prior · 1 p A prior 1 · p A prior 2 d p A prior , ( cf . ( 21 ) with λ = 1 ) ,
and thus can be interpreted as weighted-average decision risk reduction (weighted-average statistical information measure) about the degree of evidence d e g concerning the parameter θ that can be attained by observing the GWI-path X n until stage n. Hence, by combining (21) with the investigations in the previous Section 5.1 and Section 5.2, we obtain exact values respectively bounds of the above-mentioned decision risk reductions. For the sake of brevity, we omit the details here.

6. Explicit Closed-Form Bounds of Hellinger Integrals

6.1. Principal Approach

Depending on the parameter constellation β A , β H , α A , α H , λ P × ( R \ { 0 , 1 } ) , for the Hellinger integrals H λ P A , n P H , n we have derived in Section 3 corresponding lower/upper bounds respectively exact values–of recursive nature– which can be obtained by choosing appropriate p = p λ A = p A β A , β H , α A , α H , λ , q = q λ A = q A β A , β H , α A , α H , λ ( A { E , L , U } ) and by using those together with the recursion a n ( q ) n N defined by (36) as well as the sequence b n ( p , q ) n N obtained from a n ( q ) n N by the linear transformation (38). Both sequences are “stepwise fully evaluable” but generally seem not to admit a closed-form representation in the observation horizons n; consequently, the time-evolution n H λ P A , n P H , n –respectively the time-evolution of the corresponding recursive bounds– can generally not be seen explicitly. In order to avoid this intransparency (at the expense of losing some precision), one can approximate (36) by a recursion that allows for a closed-form representation; incidentally, this will also turn out to be useful for investigations concerning diffusion limits (cf. the next Section 7).
To explain the basic underlying principle, let us first assume some general q ] 0 , β λ [ and λ ] 0 , 1 [ . With Properties 1 (P1) we see that the sequence a n ( q ) n N is strictly negative, strictly decreasing and converges to x 0 ( q ) ] β λ , q β λ [ . Recall that this sequence is obtained by the recursive application of the function ξ λ ( q ) ( x ) : = q · e x β λ , through a 1 ( q ) = ξ λ ( q ) ( 0 ) = q β λ < 0 , a n ( q ) = ξ λ ( q ) a n 1 ( q ) = q e a n 1 ( q ) β λ (cf. (36)). As a first step, we want to approximate ξ λ ( q ) ( · ) by a linear function on the interval x 0 ( q ) , 0 . Due to convexity (P9), this is done by using the tangent line of ξ λ ( q ) ( · ) at x 0 ( q )
ξ λ ( q ) , T ( x ) : = c ( q ) , T + d ( q ) , T · x : = x 0 ( q ) 1 q · e x 0 ( q ) + q · e x 0 ( q ) · x ,
as a linear lower bound, and the secant line of ξ λ ( q ) ( · ) across its arguments 0 and x 0 ( q )
ξ λ ( q ) , S ( x ) : = c ( q ) , S + d ( q ) , S · x : = q β λ + x 0 ( q ) ( q β λ ) x 0 ( q ) · x ,
as a linear upper bound. With the help of these functions, we can define the linear recursions
a 0 ( q ) , T : = 0 , a n ( q ) , T : = ξ λ ( q ) , T a n 1 ( q ) , T , n N ,
as well as a 0 ( q ) , S : = 0 , a n ( q ) , S : = ξ λ ( q ) , S a n 1 ( q ) , S , n N .
In the following, we will refer to these sequences as the rudimentary closed-form sequence-bounds.
Clearly, both sequences are strictly negative (on N ), strictly decreasing, and one gets the sandwiching
a n ( q ) , T < a n ( q ) ≤ a n ( q ) , S
for all n N , with equality on the right side iff n = 1 (where a 1 ( q ) = q β λ < 0 ); moreover,
lim n a n ( q ) , T = lim n a n ( q ) , S = lim n a n ( q ) = x 0 ( q ) .
Furthermore, such linear recursions allow for a closed-form representation, namely
a n ( q ) , * = c ( q ) , * 1 d ( q ) , * · 1 d ( q ) , * n = x 0 ( q ) · 1 d ( q ) , * n ,
where the “ * ” stands for either S or T. Notice that this representation is valid due to d ( q ) , T , d ( q ) , S ∈ ] 0 , 1 [ . So far, we have considered the case q ∈ ] 0 , β λ [ . If q = β λ , then one can see from Properties 1 (P2) that a n ( q ) ≡ 0 , which is also an explicitly given (though trivial) sequence. For the remaining case, where q > β λ and thus ξ λ ( q ) ( 0 ) = a 1 ( q ) = q β λ > 0 , we want to exclude q ≥ min 1 , e β λ 1 for the following reasons. Firstly, if q > min 1 , e β λ 1 , then from (P3) we see that the sequence a n ( q ) n N is strictly increasing and divergent to ∞ , at a faster-than-exponential rate (P3b); a linear recursion is too weak to approximate such a growth pattern. Secondly, if q = min 1 , e β λ 1 , then one necessarily gets q = e β λ 1 < 1 (since we have required q > β λ ; otherwise one obtains the contradiction β λ < q = 1 ≤ e β λ 1 ). This means that the function ξ λ ( q ) ( · ) now touches the straight line i d ( · ) at the point log ( q ) , i.e., ξ λ ( q ) log ( q ) = log ( q ) . Our above-proposed method, namely to use the tangent line of ξ λ ( q ) ( · ) at x = x 0 ( q ) = log ( q ) as a linear lower bound for ξ λ ( q ) ( · ) , then leads to the recursion a n ( q ) , T ≡ 0 (cf. (78)). This is due to the fact that the tangent line ξ λ ( q ) , T ( · ) coincides in the current case with the straight line i d ( · ) . Consequently, (81) would not be satisfied.
Notice that in the case β λ < q < min 1 , e β λ 1 , the above-introduced functions ξ λ ( q ) , T ( · ) , ξ λ ( q ) , S ( · ) constitute again linear lower and upper bounds for ξ λ ( q ) ( · ) , however, this time on the interval 0 , x 0 ( q ) . The sequences defined in (78) and (79) still fulfill the assertions (80) and (81), and additionally allow for the closed-form representation (82). Furthermore, let us mention that these rudimentary closed-form sequence-bounds can be defined analogously for λ R \ [ 0 , 1 ] and either 0 < q < β λ , or q = β λ , or max { 0 , β λ } < q < min { 1 , e β λ 1 } .
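For illustration, the recursion (36) and its rudimentary closed-form sequence-bounds (78), (79) and (82) can be checked numerically. The following minimal Python sketch uses purely illustrative (assumed) values β λ = 1 and q = 0.5 ∈ ] 0 , β λ [ , with the fixed point x 0 ( q ) obtained by fixed-point iteration:

```python
import math

# Illustrative (assumed) parameter values with 0 < q < beta_lambda:
beta_lam, q = 1.0, 0.5
N = 12  # number of observation horizons to check

# Implicit fixed point x0(q) of xi(x) = q*exp(x) - beta_lam (cf. (36)),
# found by fixed-point iteration (the map is a contraction here).
x0 = 0.0
for _ in range(200):
    x0 = q * math.exp(x0) - beta_lam

# Tangent line at x0 (linear lower bound) and secant line through the
# arguments 0 and x0 (linear upper bound), cf. (76) and (77).
d_T = q * math.exp(x0)            # tangent slope
c_T = x0 * (1.0 - d_T)            # tangent intercept
d_S = (x0 - (q - beta_lam)) / x0  # secant slope
c_S = q - beta_lam                # secant intercept

# Exact recursion (36) and the two linear recursions (78)/(79).
a, a_T, a_S = [0.0], [0.0], [0.0]
for n in range(1, N + 1):
    a.append(q * math.exp(a[-1]) - beta_lam)
    a_T.append(c_T + d_T * a_T[-1])
    a_S.append(c_S + d_S * a_S[-1])

for n in range(1, N + 1):
    # sandwich (80), with equality on the right-hand side iff n = 1
    assert a_T[n] < a[n] <= a_S[n]
    # closed-form representation (82): a_n^* = x0 * (1 - d_*^n)
    assert abs(a_T[n] - x0 * (1.0 - d_T ** n)) < 1e-12
    assert abs(a_S[n] - x0 * (1.0 - d_S ** n)) < 1e-12
assert a[1] == a_S[1]  # equality holds exactly at n = 1
```

All three sequences are strictly decreasing toward x 0 ( q ) , and the two linear recursions reproduce their closed forms to machine precision.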
In a second step, we want to improve the above-mentioned linear (lower and upper) approximations of the sequence a n ( q ) by reducing the error incurred in each iteration. To do so, in both the lower and the upper approximation we shall employ context-adapted linear inhomogeneous difference equations of the form
a ˜ 0 : = 0 ; a ˜ n : = ξ ˜ a ˜ n 1 + ρ n 1 , n N ,
with
ξ ˜ ( x ) : = c + d · x , x R ,
ρ n 1 : = K 1 · ϰ n 1 + K 2 · ν n 1 , n N ,
for some constants c ∈ R , d ∈ ] 0 , 1 [ , K 1 , K 2 , ϰ , ν ∈ R with 0 ≤ ν < ϰ ≤ d . This will later be applied with c : = c ( q ) , T , d : = d ( q ) , T respectively c : = c ( q ) , S , d : = d ( q ) , S . Before doing so, let us first present some facts and expressions which are insightful for the further formulations and analyses.
Lemma 2.
Consider the sequence a ˜ n n N 0 defined in (83) to (85). If 0 ≤ ν < ϰ < d , then one gets the closed-form representation
a ˜ n = a ˜ n h o m + c ˜ n with a ˜ n h o m = c · 1 d n 1 d and c ˜ n = K 1 · d n ϰ n d ϰ + K 2 · d n ν n d ν ,
which leads for all n N to
k = 1 n a ˜ k = K 1 d ϰ + K 2 d ν c 1 d · d · 1 d n 1 d K 1 · ϰ · 1 ϰ n ( d ϰ ) ( 1 ϰ ) K 2 · ν · 1 ν n ( d ν ) ( 1 ν ) + c 1 d · n .
If 0 ≤ ν < ϰ = d , then one gets the closed-form representation
a ˜ n = a ˜ n h o m + c ˜ n with a ˜ n h o m = c · 1 d n 1 d and c ˜ n = K 1 · n · d n 1 + K 2 · d n ν n d ν ,
which leads for all n N to
k = 1 n a ˜ k = K 1 d ( 1 d ) + K 2 d ν c 1 d · d · 1 d n 1 d K 2 · ν · 1 ν n ( d ν ) ( 1 ν ) + c 1 d K 1 · d n 1 d · n .
Lemma 2 will be proved in Appendix A.3. Notice that (88) is consistent with taking the limit ϰ → d in (86). Furthermore, for the special case K 2 = − K 1 > 0 one has from (85) for all integers n ≥ 2 the relation ρ n 1 < 0 and thus a ˜ n a ˜ n h o m < 0 , leading to
c ˜ n < 0 and k = 1 n c ˜ k < 0 .
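The closed-form representations of Lemma 2 can be verified against a direct evaluation of the recursion (83) to (85). The following Python sketch does this for the case 0 ≤ ν < ϰ < d , with purely illustrative (assumed) constants:

```python
# Illustrative (assumed) constants with 0 <= nu < kappa < d < 1:
c, d = -0.6, 0.3
K1, K2, kap, nu = 0.07, -0.05, 0.2, 0.1
N = 15

# Direct evaluation of the inhomogeneous linear recursion (83)-(85).
a = [0.0]
for n in range(1, N + 1):
    rho = K1 * kap ** (n - 1) + K2 * nu ** (n - 1)
    a.append(c + d * a[-1] + rho)

# Closed-form representation of Lemma 2 (case kappa < d), and the
# corresponding partial-sum formula, checked horizon by horizon.
S = 0.0
for n in range(1, N + 1):
    a_hom = c * (1.0 - d ** n) / (1.0 - d)
    corr = (K1 * (d ** n - kap ** n) / (d - kap)
            + K2 * (d ** n - nu ** n) / (d - nu))
    assert abs(a[n] - (a_hom + corr)) < 1e-12
    S += a[n]
    closed_sum = ((K1 / (d - kap) + K2 / (d - nu) - c / (1.0 - d))
                  * d * (1.0 - d ** n) / (1.0 - d)
                  - K1 * kap * (1.0 - kap ** n) / ((d - kap) * (1.0 - kap))
                  - K2 * nu * (1.0 - nu ** n) / ((d - nu) * (1.0 - nu))
                  + c / (1.0 - d) * n)
    assert abs(S - closed_sum) < 1e-12
```

Both the closed form of a ˜ n and the partial-sum expression agree with the direct recursion to machine precision.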
Lemma 2 gives explicit expressions for a linear inhomogeneous recursion of the form (83) possessing the extra term given by (85). From this we derive lower and upper bounds for the sequence a n ( q ) n N by employing a n ( q ) , T resp. a n ( q ) , S as the homogeneous solution of (83), i.e., by setting a ˜ n h o m : = a n ( q ) , T resp. a ˜ n h o m : = a n ( q ) , S . Moreover, our concrete approximation-error-reducing “correction terms” ρ n will take different forms, depending on whether 0 < q < β λ or q > max { 0 , β λ } . In both cases, we express ρ n by means of the slopes d ( q ) , T = q e x 0 ( q ) resp. d ( q ) , S = x 0 ( q ) ( q β λ ) x 0 ( q ) of the tangent line ξ λ ( q ) , T ( · ) (cf. (76)) resp. the secant line ξ λ ( q ) , S ( · ) (cf. (77)), as well as in terms of the parameters
Γ < ( q ) : = 1 2 · x 0 ( q ) 2 · q · e x 0 ( q ) , for 0 < q < β λ , and Γ > ( q ) : = q 2 · x 0 ( q ) 2 , for q > max { 0 , β λ } .
In detail, let us first define the lower approximate by
a ̲ 0 ( q ) : = 0 , a ̲ n ( q ) : = ξ λ ( q ) , T a ̲ n 1 ( q ) + ρ ̲ n 1 ( q ) , n N ,
where
ρ ̲ n 1 ( q ) : = Γ < ( q ) · d ( q ) , T 2 ( n 1 ) , if 0 < q < β λ , Γ > ( q ) · d ( q ) , S 2 ( n 1 ) , if max { 0 , β λ } < q < min { 1 , e β λ 1 } .
The upper approximate is defined by
a ¯ 0 ( q ) : = 0 , a ¯ n ( q ) : = ξ λ ( q ) , S a ¯ n 1 ( q ) + ρ ¯ n 1 ( q ) , n N ,
where
ρ ¯ n 1 ( q ) : = Γ < ( q ) · d ( q ) , T n 1 · 1 d ( q ) , S n 1 , if 0 < q < β λ , Γ > ( q ) · d ( q ) , S n 1 · 1 d ( q ) , T n 1 , if max { 0 , β λ } < q < min { 1 , e β λ 1 } .
In terms of (85), we use for ρ ̲ n ( q ) the constants K 2 = ν = 0 as well as K 1 = Γ < ( q ) , ϰ = d ( q ) , T 2 for 0 < q < β λ respectively K 1 = Γ > ( q ) , ϰ = d ( q ) , S 2 for max { 0 , β λ } < q < min { 1 , e β λ 1 } . For ρ ¯ n ( q ) we shall employ the constants K 1 = K 2 = Γ < ( q ) , ϰ = d ( q ) , T , ν = d ( q ) , S d ( q ) , T for 0 < q < β λ , and K 1 = K 2 = Γ > ( q ) , ϰ = d ( q ) , S , ν = d ( q ) , S d ( q ) , T for max { 0 , β λ } < q < min { 1 , e β λ 1 } . Recall from (76) the constants c ( q ) , T : = x 0 ( q ) ( 1 q e x 0 ( q ) ) , d ( q ) , T : = q e x 0 ( q ) and from (77) c ( q ) , S : = q β λ , d ( q ) , S : = x 0 ( q ) ( q β λ ) x 0 ( q ) . In the following, we will refer to the sequences a ̲ n ( q ) resp. a ¯ n ( q ) as the improved closed-form sequence-bounds. Putting all ingredients together, we arrive at the
Lemma 3.
For all β A , β H , α A , α H P there holds with d ( q ) , T = q e x 0 ( q ) and d ( q ) , S = x 0 ( q ) ( q β λ ) x 0 ( q )
(a) 
in the case 0 < q < β λ :
(i) 
a ̲ n ( q ) < a n ( q ) ≤ a ¯ n ( q ) for all n N ,
with equality on the right-hand side iff n = 1 , where
a ̲ n ( q ) = x 0 ( q ) · 1 d ( q ) , T n + Γ < ( q ) · d ( q ) , T n 1 1 d ( q ) , T · 1 d ( q ) , T n > a n ( q ) , T , and a ¯ n ( q ) = x 0 ( q ) · 1 d ( q ) , S n Γ < ( q ) · d ( q ) , S n d ( q ) , T n d ( q ) , S d ( q ) , T d ( q ) , S n 1 1 d ( q ) , T n 1 d ( q ) , T a n ( q ) , S ,
with a n ( q ) , T and a n ( q ) , S defined by (78) and (79).
(ii) 
Both sequences a ̲ n ( q ) n N and a ¯ n ( q ) n N are strictly decreasing.
(iii) 
lim n a ̲ n ( q ) = lim n a ¯ n ( q ) = lim n a n ( q ) = x 0 ( q ) ] β λ , q β λ [ .
(b) 
in the case max { 0 , β λ } < q < min 1 , e β λ 1 :
(i) 
a ̲ n ( q ) < a n ( q ) ≤ a ¯ n ( q ) , for all n N ,
with equality on the right-hand side iff n = 1 , where
a ̲ n ( q ) = x 0 ( q ) · 1 d ( q ) , T n + Γ > ( q ) · d ( q ) , T n d ( q ) , S 2 n d ( q ) , T d ( q ) , S 2 > a n ( q ) , T and a ¯ n ( q ) = x 0 ( q ) · 1 d ( q ) , S n Γ > ( q ) · d ( q ) , S n 1 n 1 d ( q ) , T n 1 d ( q ) , T a n ( q ) , S ,
with a n ( q ) , T and a n ( q ) , S defined by (78) and (79).
(ii) 
Both sequences a ̲ n ( q ) n N and a ¯ n ( q ) n N are strictly increasing.
(iii) 
lim n a ̲ n ( q ) = lim n a ¯ n ( q ) = lim n a n ( q ) = x 0 ( q ) ] q β λ , log ( q ) [ .
A detailed proof of Lemma 3 is provided in Appendix A.3. In the following, we employ the above-mentioned investigations in order to derive the desired closed-form bounds of the Hellinger integrals H λ ( P A , n P H , n ) .
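As a numerical illustration of Lemma 3(a), the improved closed-form sequence-bounds can be checked in the case 0 < q < β λ . The sketch below uses the assumed illustrative values β λ = 1 , q = 0.5 , with the correction-term signs read such that the lower correction is nonnegative and the upper one nonpositive:

```python
import math

# Illustrative (assumed) values with 0 < q < beta_lambda:
beta_lam, q = 1.0, 0.5
N = 12

x0 = 0.0
for _ in range(200):                       # fixed point of (36)
    x0 = q * math.exp(x0) - beta_lam

d_T = q * math.exp(x0); c_T = x0 * (1.0 - d_T)        # tangent, cf. (76)
d_S = (x0 - (q - beta_lam)) / x0; c_S = q - beta_lam  # secant,  cf. (77)
Gam = 0.5 * x0 ** 2 * q * math.exp(x0)                # Gamma_<(q), cf. (91)

a, a_T, a_S, a_lo, a_up = [0.0], [0.0], [0.0], [0.0], [0.0]
for n in range(1, N + 1):
    a.append(q * math.exp(a[-1]) - beta_lam)          # exact (36)
    a_T.append(c_T + d_T * a_T[-1])                   # rudimentary (78)
    a_S.append(c_S + d_S * a_S[-1])                   # rudimentary (79)
    # improved approximates (92) and (94); correction signs chosen
    # so that the lower correction is >= 0 and the upper one <= 0
    a_lo.append(c_T + d_T * a_lo[-1] + Gam * d_T ** (2 * (n - 1)))
    a_up.append(c_S + d_S * a_up[-1]
                - Gam * d_T ** (n - 1) * (1.0 - d_S ** (n - 1)))

assert a[1] == a_up[1]  # equality at n = 1, as in Lemma 3(a)(i)
for n in range(2, N + 1):
    # improved bounds are strictly tighter than the rudimentary ones
    assert a_T[n] < a_lo[n] < a[n] <= a_up[n] < a_S[n]
```

All five sequences converge to x 0 ( q ) , and the improved bounds indeed lie strictly between the rudimentary closed-form sequence-bounds and the exact sequence.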

6.2. Explicit Closed-Form Bounds for the Cases β A , β H , α A , α H , λ ( P NI P SP , 1 ) × ( R \ { 0 , 1 } )

Recall that in this setup, we have obtained the recursive, non-explicit exact values V λ , X 0 , n = H λ ( P A , n P H , n ) given in (39) of Theorem 1, where we used q = q λ E = q E ( β A , β H , λ ) = β A λ β H 1 λ ] 0 , β λ [ in the case λ ] 0 , 1 [ respectively q = q λ E = β A λ β H 1 λ > max { 0 , β λ } in the case λ R \ [ 0 , 1 ] . For the latter, Lemma 1 implies that q λ E < min { 1 , e β λ 1 } iff λ ] λ , λ + [ \ [ 0 , 1 ] . This—together with (39) from Theorem 1, Lemma 2 and with the quantities d ( q ) , T , d ( q ) , S , Γ < ( q ) and Γ > ( q ) as defined in (76) and (77) resp. (91) –leads to
Theorem 5.
Let p λ E : = α A λ α H 1 λ and q λ E : = β A λ β H 1 λ . For all β A , β H , α A , α H , λ ( P NI P SP , 1 ) × ] λ , λ + [ \ { 0 , 1 } , all initial population sizes X 0 N and for all observation horizons n N the following assertions hold true:
(a) 
the Hellinger integral can be bounded by the closed-form lower and upper bounds
C λ , X 0 , n ( p λ E , q λ E ) , T ≤ C λ , X 0 , n ( p λ E , q λ E ) , L ≤ V λ , X 0 , n = H λ ( P A , n P H , n ) ≤ C λ , X 0 , n ( p λ E , q λ E ) , U ≤ C λ , X 0 , n ( p λ E , q λ E ) , S ,
(b) 
lim n 1 n log V λ , X 0 , n = lim n 1 n log C λ , X 0 , n ( p λ E , q λ E ) , L = lim n 1 n log C λ , X 0 , n ( p λ E , q λ E ) , U = lim n 1 n log C λ , X 0 , n ( p λ E , q λ E ) , T = lim n 1 n log C λ , X 0 , n ( p λ E , q λ E ) , S = α A β A · x 0 ( q λ E ) ,
where the involved closed-form lower bounds are defined by
C λ , X 0 , n ( p λ E , q λ E ) , L : = C λ , X 0 , n ( p λ E , q λ E ) , T · exp ζ ̲ n ( q λ E ) · X 0 + α A β A · ϑ ̲ n ( q λ E ) , with C λ , X 0 , n ( p λ E , q λ E ) , T : = exp x 0 ( q λ E ) · X 0 α A β A · d ( q λ E ) , T 1 d ( q λ E ) , T · 1 d ( q λ E ) , T n + α A β A x 0 ( q λ E ) · n ,
and the closed-form upper bounds are defined by
C λ , X 0 , n ( p λ E , q λ E ) , U : = C λ , X 0 , n ( p λ E , q λ E ) , S · exp ζ ¯ n ( q λ E ) · X 0 α A β A · ϑ ¯ n ( q λ E ) , with C λ , X 0 , n ( p λ E , q λ E ) , S : = exp x 0 ( q λ E ) · X 0 α A β A · d ( q λ E ) , S 1 d ( q λ E ) , S · 1 d ( q λ E ) , S n + α A β A x 0 ( q λ E ) · n ,
where in the case λ ] 0 , 1 [
ζ ̲ n ( q λ E ) : = Γ < ( q λ E ) · d ( q λ E ) , T n 1 1 d ( q λ E ) , T · 1 d ( q λ E ) , T n > 0 ,
ϑ ̲ n ( q λ E ) : = Γ < ( q λ E ) · 1 d ( q λ E ) , T n 1 d ( q λ E ) , T 2 · 1 d ( q λ E ) , T 1 + d ( q λ E ) , T n 1 + d ( q λ E ) , T > 0 ,
ζ ¯ n ( q λ E ) : = Γ < ( q λ E ) · d ( q λ E ) , S n d ( q λ E ) , T n d ( q λ E ) , S d ( q λ E ) , T d ( q λ E ) , S n 1 · 1 d ( q λ E ) , T n 1 d ( q λ E ) , T > 0 ,
ϑ ¯ n ( q λ E ) : = Γ < ( q λ E ) · d ( q λ E ) , T 1 d ( q λ E ) , T · 1 d ( q λ E ) , S d ( q λ E ) , T n 1 d ( q λ E ) , S d ( q λ E ) , T d ( q λ E ) , S n d ( q λ E ) , T n d ( q λ E ) , S d ( q λ E ) , T > 0 ,
and where in the case λ ] λ , λ + [ \ [ 0 , 1 ]
ζ ̲ n ( q λ E ) : = Γ > ( q λ E ) · d ( q λ E ) , T n d ( q λ E ) , S 2 n d ( q λ E ) , T d ( q λ E ) , S 2 > 0 ,
ϑ ̲ n ( q λ E ) : = Γ > ( q λ E ) d ( q λ E ) , T d ( q λ E ) , S 2 d ( q λ E ) , T 1 d ( q λ E ) , T n 1 d ( q λ E ) , T d ( q λ E ) , S 2 1 d ( q λ E ) , S 2 n 1 d ( q λ E ) , S 2 > 0 ,
ζ ¯ n ( q λ E ) : = Γ > ( q λ E ) · d ( q λ E ) , S n 1 · n 1 d ( q λ E ) , T n 1 d ( q λ E ) , T > 0 ,
ϑ ¯ n ( q λ E ) : = Γ > ( q λ E ) · [ d ( q λ E ) , S d ( q λ E ) , T 1 d ( q λ E ) , S 2 1 d ( q λ E ) , T · 1 d ( q λ E ) , S n + d ( q λ E ) , T 1 d ( q λ E ) , S d ( q λ E ) , T n 1 d ( q λ E ) , T 1 d ( q λ E ) , S d ( q λ E ) , T d ( q λ E ) , S n 1 d ( q λ E ) , S · n ] > 0 .
Notice that α A β A can equivalently be replaced by α H β H in (96) and in (97).
A proof of Theorem 5 is given in Appendix A.3.
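To make the rate assertion (b) of Theorem 5 plausible numerically, note that in the present equal-fraction setup one has p λ E / q λ E = α A / β A , and that the Cesàro means of the convergent sequence a n ( q λ E ) tend to x 0 ( q λ E ) . The following Python sketch (with assumed illustrative parameters satisfying α A / β A = α H / β H and λ = 1 / 2 ) checks both facts:

```python
import math

# Illustrative (assumed) parameters with equal fractions
# alpha_A/beta_A = alpha_H/beta_H, lambda = 1/2.
bA, bH, aA, aH, lam = 0.8, 1.2, 0.4, 0.6, 0.5

q_E = bA ** lam * bH ** (1 - lam)       # q_lambda^E
p_E = aA ** lam * aH ** (1 - lam)       # p_lambda^E
beta_lam = lam * bA + (1 - lam) * bH    # beta_lambda
alpha_lam = lam * aA + (1 - lam) * aH   # alpha_lambda

# In the equal-fraction setup, p_E/q_E coincides with alpha_A/beta_A,
# and (p_E/q_E)*beta_lambda - alpha_lambda vanishes.
assert abs(p_E / q_E - aA / bA) < 1e-12
assert abs(p_E / q_E * beta_lam - alpha_lam) < 1e-12

# Fixed point x0(q_E) of x -> q_E * e^x - beta_lam, cf. (36).
x0 = 0.0
for _ in range(20000):
    x0 = q_E * math.exp(x0) - beta_lam

# Cesaro means of a_n(q_E) converge to x0(q_E); this drives the
# common exponential rate (alpha_A/beta_A) * x0(q_E) in Theorem 5(b).
n, a, s = 4000, 0.0, 0.0
for _ in range(n):
    a = q_E * math.exp(a) - beta_lam
    s += a
assert abs(s / n - x0) < 0.01
```

Here x 0 ( q λ E ) < 0 (since β A ≠ β H implies q λ E < β λ ), so the common limit in (b) is strictly negative, reflecting exponential decay of the Hellinger integral.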

6.3. Explicit Closed-Form Bounds for the Cases β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ] 0 , 1 [

To derive (explicit) closed-form lower bounds of the (non-explicit) recursive lower bounds B λ , X 0 , n L for the Hellinger integral H λ ( P A , n P H , n ) respectively closed-form upper bounds of the recursive upper bounds B λ , X 0 , n U for all parameter cases β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( R \ { 0 , 1 } ) , we combine part (b) of Theorem 1, Lemma 2 and Lemma 3 together with appropriate parameters p λ L = p L β A , β H , α A , α H , λ , p λ U = p U β A , β H , α A , α H , λ 0 and q λ L = q L β A , β H , α A , α H , λ , q λ U = q U β A , β H , α A , α H , λ > 0 satisfying (35). Notice that the representations of the lower and upper closed-form sequence-bounds depend on whether 0 < q λ A < β λ , 0 < q λ A = β λ or max { 0 , β λ } < q λ A < min { 1 , e β λ 1 } ( A ∈ { L , U } ).
Let us start with closed-form lower bounds for the case λ ] 0 , 1 [ ; recall that the choice p λ L = α A λ α H 1 λ , q λ L = β A λ β H 1 λ led to the optimal recursive lower bounds B λ , X 0 , n L of the Hellinger integral (cf. Theorem 1(b) and Section 3.5). Correspondingly, we can derive
Theorem 6.
Let p λ L = α A λ α H 1 λ and q λ L = β A λ β H 1 λ . Then, the following assertions hold true:
(a) 
For all β A , β H , α A , α H , λ P SP , 2 P SP , 3 a P SP , 3 b P SP , 3 c × ] 0 , 1 [ (for which particularly 0 < q λ L < β λ , β A β H ), all initial population sizes X 0 N and all observation horizons n N there holds
C λ , X 0 , n ( p λ L , q λ L ) , T ≤ C λ , X 0 , n ( p λ L , q λ L ) , L ≤ B λ , X 0 , n L < 1 ,
where C λ , X 0 , n ( p λ L , q λ L ) , L : = C λ , X 0 , n ( p λ L , q λ L ) , T · exp ζ ̲ n ( q λ L ) · X 0 + p λ L q λ L · ϑ ̲ n ( q λ L )
with C λ , X 0 , n ( p λ L , q λ L ) , T : = exp { x 0 ( q λ L ) · X 0 p λ L q λ L · d ( q λ L ) , T 1 d ( q λ L ) , T · 1 d ( q λ L ) , T n + p λ L q λ L · β λ + x 0 ( q λ L ) α λ · n } , and with ζ ̲ n ( q λ L ) : = Γ < ( q λ L ) · d ( q λ L ) , T n 1 1 d ( q λ L ) , T · 1 d ( q λ L ) , T n > 0 ,
ϑ ̲ n ( q λ L ) : = Γ < ( q λ L ) · 1 d ( q λ L ) , T n 1 d ( q λ L ) , T 2 · 1 d ( q λ L ) , T 1 + d ( q λ L ) , T n 1 + d ( q λ L ) , T > 0 .
(b) 
For all β A , β H , α A , α H , λ ( P SP , 4 a P SP , 4 b ) × ] 0 , 1 [ (for which particularly 0 < q λ L = β λ , β A = β H ), all initial population sizes X 0 N and all observation horizons n N there holds
C λ , X 0 , n ( p λ L , q λ L ) , L : = C λ , X 0 , n ( p λ L , q λ L ) , T : = B λ , X 0 , n L = exp p λ L α λ · n < 1 .
(c) 
For all β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ] 0 , 1 [ and all initial population sizes X 0 N one gets
lim n 1 n log C λ , X 0 , n ( p λ L , q λ L ) , T = lim n 1 n log C λ , X 0 , n ( p λ L , q λ L ) , L = lim n 1 n log B λ , X 0 , n L = p λ L q λ L · β λ + x 0 ( q λ L ) α λ < 0 ,
where in the case β A = β H there holds q λ L = β λ and x 0 ( q λ L ) = 0 .
The proof will be provided in Appendix A.3.
In order to deduce closed-form upper bounds for the case λ ∈ ] 0 , 1 [ , we first recall from Sections 3.6 to 3.13 that we have to employ suitable parameters p λ U = p U β A , β H , α A , α H , λ , q λ U = q U β A , β H , α A , α H , λ satisfying (35). Notice that we automatically obtain p λ U ≥ p λ L = α A λ α H 1 λ > 0 . Correspondingly, we obtain
Theorem 7.
For all β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ] 0 , 1 [ , all coefficients p λ U , q λ U which satisfy (35) for all x N 0 and additionally either 0 < q λ U β λ or β λ < q λ U < min { 1 , e β λ 1 } , all initial population sizes X 0 N and all observation horizons n N the following assertions hold true:
C λ , X 0 , n ( p λ U , q λ U ) , S ≥ C λ , X 0 , n ( p λ U , q λ U ) , U ≥ B ˜ λ , X 0 , n ( p λ U , q λ U ) ≥ B λ , X 0 , n U , where
(a) 
in the case 0 < q λ U < β λ one has
C λ , X 0 , n ( p λ U , q λ U ) , U : = C λ , X 0 , n ( p λ U , q λ U ) , S · exp ζ ¯ n ( q λ U ) · X 0 p λ U q λ U · ϑ ¯ n ( q λ U )
with C λ , X 0 , n ( p λ U , q λ U ) , S : = exp { x 0 ( q λ U ) · X 0 p λ U q λ U · d ( q λ U ) , S 1 d ( q λ U ) , S · 1 d ( q λ U ) , S n + p λ U q λ U · β λ + x 0 ( q λ U ) α λ · n } , ζ ¯ n ( q λ U ) : = Γ < ( q λ U ) · d ( q λ U ) , S n d ( q λ U ) , T n d ( q λ U ) , S d ( q λ U ) , T d ( q λ U ) , S n 1 · 1 d ( q λ U ) , T n 1 d ( q λ U ) , T > 0 ,
ϑ ¯ n ( q λ U ) : = Γ < ( q λ U ) · d ( q λ U ) , T 1 d ( q λ U ) , T · 1 d ( q λ U ) , S d ( q λ U ) , T n 1 d ( q λ U ) , S d ( q λ U ) , T d ( q λ U ) , S n d ( q λ U ) , T n d ( q λ U ) , S d ( q λ U ) , T > 0 ;
furthermore, whenever p λ U , q λ U satisfy additionally (47) (such parameters exist particularly in the setups P SP , 2 P SP , 3 a P SP , 3 b , cf. Section 3.7, Section 3.8 and Section 3.9), then
1 > C λ , X 0 , n ( p λ U , q λ U ) , S and B ˜ λ , X 0 , n ( p λ U , q λ U ) = B λ , X 0 , n U n N ;
(b) 
in the case 0 < q λ U = β λ one has
C λ , X 0 , n ( p λ U , q λ U ) , U : = C λ , X 0 , n ( p λ U , q λ U ) , S : = B ˜ λ , X 0 , n ( p λ U , q λ U ) = exp p λ U α λ · n ;
(c) 
in the case β λ < q λ U < min 1 , e β λ 1 the formulas (109) and (110) remain valid, but with
ζ ¯ n ( q λ U ) : = Γ > ( q λ U ) · d ( q λ U ) , S n 1 · n 1 d ( q λ U ) , T n 1 d ( q λ U ) , T > 0 ,
ϑ ¯ n ( q λ U ) : = Γ > ( q λ U ) · [ d ( q λ U ) , S d ( q λ U ) , T 1 d ( q λ U ) , S 2 1 d ( q λ U ) , T · 1 d ( q λ U ) , S n + d ( q λ U ) , T 1 d ( q λ U ) , S d ( q λ U ) , T n 1 d ( q λ U ) , T 1 d ( q λ U ) , S d ( q λ U ) , T d ( q λ U ) , S n 1 d ( q λ U ) , S · n ] > 0 ;
(d) 
for all cases (a) to (c) one gets
lim n 1 n log C λ , X 0 , n ( p λ U , q λ U ) , S = lim n 1 n log C λ , X 0 , n ( p λ U , q λ U ) , U = lim n 1 n log B ˜ λ , X 0 , n ( p λ U , q λ U ) = p λ U q λ U · β λ + x 0 ( q λ U ) α λ ,
where in the case q λ U = β λ there holds x 0 ( q λ U ) = 0 .
This Theorem 7 will be proved in Appendix A.3. Notice that for an inadequate choice of p λ U , q λ U it may hold that p λ U q λ U ( β λ + x 0 ( q λ U ) ) α λ > 0 in part (d) of Theorem 7.

6.4. Explicit Closed-Form Bounds for the Cases β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( R \ [ 0 , 1 ] )

For λ R \ [ 0 , 1 ] , let us now construct closed-form lower bounds of the recursive lower bound components B ˜ λ , X 0 , n ( p λ L , q λ L ) , for suitable parameters p λ L 0 and either 0 < q λ L β λ or max { 0 , β λ } < q λ L < min { 1 , e β λ 1 } satisfying (35).
Theorem 8.
For all β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( R \ [ 0 , 1 ] ) , all coefficients p λ L 0 , q λ L > 0 which satisfy (35) for all x N 0 and either 0 < q λ L β λ or max { 0 , β λ } < q λ L < min { 1 , e β λ 1 } , all initial population sizes X 0 N and all observation horizons n N the following assertions hold true:
C λ , X 0 , n ( p λ L , q λ L ) , T ≤ C λ , X 0 , n ( p λ L , q λ L ) , L ≤ B ˜ λ , X 0 , n ( p λ L , q λ L ) ≤ B λ , X 0 , n L , where
(a) 
in the case 0 < q λ L < β λ one has
C λ , X 0 , n ( p λ L , q λ L ) , L : = C λ , X 0 , n ( p λ L , q λ L ) , T · exp ζ ̲ n ( q λ L ) · X 0 + p λ L q λ L · ϑ ̲ n ( q λ L ) ,
with C λ , X 0 , n ( p λ L , q λ L ) , T : = exp { x 0 ( q λ L ) · X 0 p λ L q λ L · d ( q λ L ) , T 1 d ( q λ L ) , T · 1 d ( q λ L ) , T n + p λ L q λ L · β λ + x 0 ( q λ L ) α λ · n } ζ ̲ n ( q λ L ) : = Γ < ( q λ L ) · d ( q λ L ) , T n 1 1 d ( q λ L ) , T · 1 d ( q λ L ) , T n > 0 ,
ϑ ̲ n ( q λ L ) : = Γ < ( q λ L ) · 1 d ( q λ L ) , T n 1 d ( q λ L ) , T 2 · 1 d ( q λ L ) , T 1 + d ( q λ L ) , T n 1 + d ( q λ L ) , T > 0 ;
furthermore, whenever p λ L , q λ L satisfy additionally (56) (such parameters exist particularly in the setups P SP , 2 P SP , 3 a P SP , 3 b , cf. Section 3.17, Section 3.18 and Section 3.19), then
1 < C λ , X 0 , n ( p λ L , q λ L ) , T and B ˜ λ , X 0 , n ( p λ L , q λ L ) = B λ , X 0 , n L n N ;
(b) 
in the case 0 < q λ L = β λ one has
C λ , X 0 , n ( p λ L , q λ L ) , L : = C λ , X 0 , n ( p λ L , q λ L ) , T = B ˜ λ , X 0 , n ( p λ L , q λ L ) = exp p λ L α λ · n ;
(c) 
in the case max { 0 , β λ } < q λ L < min 1 , e β λ 1 the formulas (115) and (116) remain valid, but with
ζ ̲ n ( q λ L ) : = Γ > ( q λ L ) · d ( q λ L ) , T n d ( q λ L ) , S 2 n d ( q λ L ) , T d ( q λ L ) , S 2 > 0 ,
ϑ ̲ n ( q λ L ) : = Γ > ( q λ L ) d ( q λ L ) , T d ( q λ L ) , S 2 · d ( q λ L ) , T · 1 d ( q λ L ) , T n 1 d ( q λ L ) , T d ( q λ L ) , S 2 · 1 d ( q λ L ) , S 2 n 1 d ( q λ L ) , S 2 > 0 ;
(d) 
for all cases (a) to (c) one gets
lim n 1 n log C λ , X 0 , n ( p λ L , q λ L ) , T = lim n 1 n log C λ , X 0 , n ( p λ L , q λ L ) , L = lim n 1 n log B ˜ λ , X 0 , n ( p λ L , q λ L ) = p λ L q λ L · β λ + x 0 ( q λ L ) α λ ,
where in the case q λ L = β λ there holds x 0 ( q λ L ) = 0 .
For the proof of Theorem 8, see Appendix A.3. Notice that for an inadequate choice of p λ L , q λ L it may hold that p λ L q λ L ( β λ + x 0 ( q λ L ) ) α λ < 0 in the last assertion of Theorem 8.
To derive closed-form upper bounds of the recursive upper bounds B λ , X 0 , n U of the Hellinger integral in the case λ ∈ R \ [ 0 , 1 ] , let us first recall from Section 3.24 that we have to use the parameters p λ U = α A λ α H 1 λ > 0 and q λ U = β A λ β H 1 λ > 0 . Furthermore, in the case β A ≠ β H we obtain from Lemma 1 (setting q λ = q λ U ) the assertion that max { 0 , β λ } < q λ U < min { 1 , e β λ 1 } iff λ ∈ ] λ , λ + [ \ [ 0 , 1 ] (implying that the sequence a n ( q λ U ) n N converges). In the case β A = β H one gets q λ U = β A λ β H 1 λ = β A = β H = β λ and therefore (cf. (P2)) a n ( q λ U ) = 0 for all n ∈ N and for all λ ∈ R \ [ 0 , 1 ] . Correspondingly, we deduce
Theorem 9.
Let p λ U = α A λ α H 1 λ and q λ U = β A λ β H 1 λ . Then, the following assertions hold true:
(a) 
For all β A , β H , α A , α H , λ ( P SP , 2 P SP , 3 a P SP , 3 b P SP , 3 c ) × ( ] λ , λ + [ \ [ 0 , 1 ] ) (in particular for β A β H ), all initial population sizes X 0 N and all observation horizons n N there holds
∞ > C λ , X 0 , n ( p λ U , q λ U ) , S ≥ C λ , X 0 , n ( p λ U , q λ U ) , U ≥ B λ , X 0 , n U > 1 ,
where C λ , X 0 , n ( p λ U , q λ U ) , U : = C λ , X 0 , n ( p λ U , q λ U ) , S · exp ζ ¯ n ( q λ U ) · X 0 p λ U q λ U · ϑ ¯ n ( q λ U )
with C λ , X 0 , n ( p λ U , q λ U ) , S : = exp { x 0 ( q λ U ) · X 0 p λ U q λ U · d ( q λ U ) , T 1 d ( q λ U ) , T · 1 d ( q λ U ) , T n + p λ U q λ U · β λ + x 0 ( q λ U ) α λ · n } , ζ ¯ n ( q λ U ) : = Γ > ( q λ U ) · d ( q λ U ) , S n 1 · n 1 d ( q λ U ) , T n 1 d ( q λ U ) , T > 0 ,
ϑ ¯ n ( q λ U ) : = Γ > ( q λ U ) · [ d ( q λ U ) , S d ( q λ U ) , T 1 d ( q λ U ) , S 2 1 d ( q λ U ) , T · 1 d ( q λ U ) , S n + d ( q λ U ) , T 1 d ( q λ U ) , S d ( q λ U ) , T n 1 d ( q λ U ) , T 1 d ( q λ U ) , S d ( q λ U ) , T d ( q λ U ) , S n 1 d ( q λ U ) , S · n ] > 0 .
(b) 
For all β A , β H , α A , α H , λ ( P SP , 4 a P SP , 4 b ) × ( R \ [ 0 , 1 ] ) (for which particularly 0 < q λ U = β λ , β A = β H ), all initial population sizes X 0 N and all observation horizons n N there holds
C λ , X 0 , n ( p λ U , q λ U ) , U : = C λ , X 0 , n ( p λ U , q λ U ) , S : = B λ , X 0 , n U = exp p λ U α λ · n > 1 .
(c) 
For all β A , β H , α A , α H , λ ( P SP \ P SP , 1 ) × ( ] λ , λ + [ \ [ 0 , 1 ] ) and all initial population sizes X 0 N one gets
lim n 1 n log C λ , X 0 , n ( p λ U , q λ U ) , S = lim n 1 n log C λ , X 0 , n ( p λ U , q λ U ) , U = lim n 1 n log B λ , X 0 , n U = p λ U q λ U · β λ + x 0 ( q λ U ) α λ > 0 ,
where in the case β A = β H there holds q λ U = β λ and x 0 ( q λ U ) = 0 .
A proof of Theorem 9 is provided in Appendix A.3.
Remark 7.
Substituting a n ( q ) by a n ( q ) , T resp. a n ( q ) , S (cf. (78) resp. (79)) in B ˜ λ , X 0 , n ( p , q ) from (42) leads to the “rudimentary” closed-form bounds C λ , X 0 , n ( p , q ) , T resp. C λ , X 0 , n ( p , q ) , S , whereas substituting a n ( q ) by a ̲ n ( q ) resp. a ¯ n ( q ) (cf. (92) resp. (94)) in B ˜ λ , X 0 , n ( p , q ) from (42) leads to the “improved” closed-form bounds C λ , X 0 , n ( p , q ) , L resp. C λ , X 0 , n ( p , q ) , U in all the Theorems 5–9.

6.5. Totally Explicit Closed-Form Bounds

The above-mentioned results give closed-form lower bounds C λ , X 0 , n ( p , q ) , L , C λ , X 0 , n ( p , q ) , T resp. closed-form upper bounds C λ , X 0 , n ( p , q ) , U , C λ , X 0 , n ( p , q ) , S of the Hellinger integrals H λ ( P A , n P H , n ) for case-dependent choices of p , q . However, these bounds still involve the fixed point x 0 ( q ) , which in general is only given implicitly (as the solution of a transcendental equation). In order to get “totally” explicit but “slightly” less tight closed-form bounds of H λ ( P A , n P H , n ) , one can proceed as follows:
  • in all the closed-form lower bound formulas of the Theorems 5, 6 and 8–including the definitions (76), (77) and (91)–replace the implicit x 0 ( q ) by a close explicitly known point x ̲ 0 ( q ) < x 0 ( q ) ;
  • in all closed-form upper bound formulas of the Theorems 5, 7 and 9–including (76), (77) and (91)–replace x 0 ( q ) by a close explicitly known point x ¯ 0 ( q ) > x 0 ( q ) .
For instance, one can use the following choices which will be also employed as an auxiliary tool for the diffusion-limit-concerning proof of Lemma A6 in Appendix A.4:
x ̲ 0 ( q ) : = q 1 · e x ̲ ̲ 0 ( q ) · 1 q 1 q 2 2 · q · e x ̲ ̲ 0 ( q ) · q β λ , if q ] 0 , β λ [ , q 1 · 1 q 1 q 2 2 · q · q β λ , if max { 0 , β λ } < q < min { 1 , e β λ 1 } ,
where x ̲ ̲ 0 ( q ) : = max β λ , q β λ 1 q , if q ] 0 , 1 [ , β λ , if q 1 ,
x ¯ 0 ( q ) : = q 1 · 1 q 1 q 2 2 · q · q β λ , if q ] 0 , β λ [ , 1 q 1 q 2 2 · q β λ , if max { 0 , β λ } < q < min { 1 , e β λ 1 } and 1 q 2 2 · q · q β λ 0 , x ¯ ¯ 0 ( q ) : = log ( q ) if max { 0 , β λ } < q < min { 1 , e β λ 1 } and 1 q 2 2 · q · q β λ < 0 .
Behind this choice lies the idea that–in contrast to the solution x 0 ( q ) of ξ λ ( q ) ( x ) : = q e x β λ = x –the point x ̲ 0 ( q ) is a solution of the (obviously explicitly solvable) equation Q ̲ λ ( q ) ( x ) : = a ̲ λ ( q ) x 2 + b ̲ λ ( q ) x + c ̲ λ ( q ) = x in both cases 0 < q < β λ and max { 0 , β λ } < q < min { 1 , e β λ 1 } , whereas the point x ¯ 0 ( q ) is a solution of Q ¯ λ ( q ) ( x ) : = a ¯ λ ( q ) x 2 + b ¯ λ ( q ) x + c ¯ λ ( q ) = x in the case 0 < q < β λ as well as in the case max { 0 , β λ } < q < min { 1 , e β λ 1 } together with 1 q 2 2 · q · q β λ 0 . Thereby, Q ̲ λ ( q ) ( · ) and Q ¯ λ ( q ) ( · ) are the lower resp. upper quadratic approximates of ξ λ ( q ) ( · ) satisfying the following constraints:
  • for q ] 0 , β λ [ (mostly but not only for λ ] 0 , 1 [ ) (lower bound):
    Q ̲ λ ( q ) ( 0 ) = ξ λ ( q ) ( 0 ) = q β λ , Q ̲ λ ( q ) ( 0 ) = ξ λ ( q ) ( 0 ) = q , Q ̲ λ ( q ) ( x ) = ξ λ ( q ) ( y ) = q e y , x R ,
    for some explicitly known approximate y < x 0 ( q ) (leading to the (tighter) explicit lower approximate x ̲ 0 ( q ) ] y , x 0 ( q ) [ ); here, we choose
    y : = x ̲ ̲ 0 ( q ) : = max β λ , q β λ 1 q , if q < 1 , β λ , if q 1 ;
  • for q ] 0 , β λ [ (mostly but not only for λ ] 0 , 1 [ ) (upper bound):
    Q ¯ λ ( q ) ( 0 ) = ξ λ ( q ) ( 0 ) = q β λ , Q ¯ λ ( q ) ( 0 ) = ξ λ ( q ) ( 0 ) = q , Q ¯ λ ( q ) ( x ) = ξ λ ( q ) ( 0 ) = q , x R ;
  • for max { 0 , β λ } < q < min { 1 , e β λ 1 } (mostly but not only for λ R \ [ 0 , 1 ] ) (lower bound):
    Q ̲ λ ( q ) ( 0 ) = ξ λ ( q ) ( 0 ) = q β λ , Q ̲ λ ( q ) ( 0 ) = ξ λ ( q ) ( 0 ) = q , Q ̲ λ ( q ) ( x ) = ξ λ ( q ) ( 0 ) = q , x R ;
  • for max { 0 , β λ } < q < min { 1 , e β λ 1 } in combination with 1 q 2 2 · q · q β λ 0 (mostly but not only for λ R \ [ 0 , 1 ] ) (upper bound):
    Q ¯ λ ( q ) ( 0 ) = ξ λ ( q ) ( 0 ) = q β λ , Q ¯ λ ( q ) ( 0 ) = ξ λ ( q ) ( 0 ) = q , Q ¯ λ ( q ) ( x ) = ξ λ ( q ) ( log ( q ) ) = 1 , x R .
If max { 0 , β λ } < q < min { 1 , e β λ 1 } and 1 q 2 2 · q · q β λ < 0 , then a real-valued solution Q ¯ λ ( q ) ( x ) = x does not exist and we set x ¯ 0 ( q ) : = x ¯ ¯ 0 ( q ) : = log ( q ) , with ξ λ ( q ) x ¯ ¯ 0 ( q ) = 1 . The above considerations lead to corresponding unique choices of constants a ̲ λ ( q ) , b ̲ λ ( q ) , c ̲ λ ( q ) , a ¯ λ ( q ) , b ¯ λ ( q ) , c ¯ λ ( q ) culminating in
Q ̲ λ ( q ) ( x ) : = q 2 · e x ̲ ̲ 0 ( q ) · x 2 + q · x + q β λ , if 0 < q < β λ , q 2 · x 2 + q · x + q β λ , if max { 0 , β λ } < q < min { 1 , e β λ 1 } ,
Q ¯ λ ( q ) ( x ) : = q 2 · x 2 + q · x + q β λ , if 0 < q < β λ , 1 2 · x 2 + q · x + q β λ , if max { 0 , β λ } < q < min { 1 , e β λ 1 } .
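As a numerical sanity check, the fixed points of the two explicitly solvable quadratic approximates displayed above can be computed from the quadratic formula and compared with the implicit fixed point x 0 ( q ) . The Python sketch below uses the assumed illustrative values β λ = 1 and q = 0.5 ∈ ] 0 , β λ [ (so that q < 1 and the first branches apply):

```python
import math

# Illustrative (assumed) values with 0 < q < beta_lambda and q < 1:
beta_lam, q = 1.0, 0.5

# Implicit fixed point x0(q) of xi(x) = q*exp(x) - beta_lam,
# obtained by fixed-point iteration.
x0 = 0.0
for _ in range(200):
    x0 = q * math.exp(x0) - beta_lam

# Explicit curvature point for the lower quadratic approximate
# (branch q < 1 of the definition of x__0 above).
x__0 = max(-beta_lam, (q - beta_lam) / (1.0 - q))

def smaller_fixed_point(a2, b1, c0):
    """Smaller real solution of a2*x^2 + b1*x + c0 = x."""
    A, B, C = a2, b1 - 1.0, c0
    return (-B - math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)

# Fixed points of the quadratic approximates: lower curvature
# (q/2)*e^{x__0}, upper curvature q/2, case 0 < q < beta_lambda.
x_lo = smaller_fixed_point(q / 2.0 * math.exp(x__0), q, q - beta_lam)
x_hi = smaller_fixed_point(q / 2.0, q, q - beta_lam)

# The explicit points sandwich the implicit fixed point x0(q).
assert x_lo < x0 < x_hi
```

Replacing x 0 ( q ) by x ̲ 0 ( q ) in the lower bounds and by x ¯ 0 ( q ) in the upper bounds therefore yields totally explicit, slightly looser, closed-form bounds.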

6.6. Closed-Form Bounds for Power Divergences of Non-Kullback-Leibler-Information-Divergence Type

Analogously to Section 4 (see especially Section 4.1), for orders λ ∈ R \ { 0 , 1 } all the results of the previous Sections 6.1 to 6.5 carry correspondingly over from closed-form bounds of the Hellinger integrals H λ ( · · ) to closed-form bounds of the total variation distance V ( · | | · ) , by virtue of the relation (cf. (12))
2 1 H 1 2 ( P A , n P H , n ) V ( P A , n P H , n ) 2 1 H 1 2 ( P A , n P H , n ) 2 ,
to closed-form bounds of the Renyi divergences R λ ( · · ) , by virtue of the relation (cf. (7))
0 R λ P A , n P H , n = 1 λ ( λ 1 ) log H λ P A , n P H , n , with log 0 : = ,
as well as to closed-form bounds of the power divergences I λ · · , by virtue of the relation (cf. (2))
I λ P A , n P H , n = 1 H λ ( P A , n P H , n ) λ · ( 1 λ ) , n N .
For the sake of brevity, the–merely repetitive–exact details are omitted.
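These conversion formulas are elementary to apply. The following Python sketch (with an assumed toy value for the Hellinger integral of order λ = 1 / 2 ) illustrates the relations (12), (7) and (2):

```python
import math

# Toy (assumed) value of the Hellinger integral H_lambda in ]0,1],
# here for lambda = 1/2.
lam, H = 0.5, 0.8

# Total variation bounds, cf. (12):
tv_lower = 2.0 * (1.0 - H)
tv_upper = 2.0 * math.sqrt(1.0 - H ** 2)
assert 0.0 <= tv_lower <= tv_upper <= 2.0

# Renyi divergence of order lambda, cf. (7):
R = math.log(H) / (lam * (lam - 1.0))
# Power divergence of order lambda, cf. (2):
I = (1.0 - H) / (lam * (1.0 - lam))

# Both divergences are nonnegative for lambda in ]0,1[ and H in ]0,1].
assert R >= 0.0 and I >= 0.0
```

In particular, any closed-form lower/upper bound on H λ translates monotonically into closed-form bounds on V , R λ and I λ .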

6.7. Applications to Decision Making

The above-mentioned investigations of Sections 6.1 to 6.6 can be applied to the context of Section 2.5 on dichotomous decision making on the space of all possible path scenarios (path space) of Poissonian Galton-Watson processes without (with) immigration GW(I) (e.g., in combination with our running-example epidemiological context of Section 2.3). In more detail, for the minimal mean decision loss (Bayes risk) R n defined by (18) we can derive explicit closed-form upper (respectively lower) bounds by using (19) respectively (20) together with the results of Sections 6.1 to 6.5 concerning Hellinger integrals of order λ ∈ ] 0 , 1 [ ; we can proceed analogously in the Neyman-Pearson context in order to deduce closed-form bounds of type II error probabilities, by means of (23) and (24). Moreover, in an analogous way we can employ the investigations of Section 6.6 on power divergences in order to obtain closed-form bounds of (i) the corresponding (cf. (21)) weighted-average decision risk reduction (weighted-average statistical information measure) about the degree of evidence d e g concerning the parameter θ that can be attained by observing the GW(I)-path X n until stage n, as well as (ii) the corresponding (cf. (22)) limit decision risk reduction (limit statistical information measure). For the sake of brevity, the–merely repetitive–exact details are omitted.

7. Hellinger Integrals and Power Divergences of Galton-Watson Type Diffusion Approximations

7.1. Branching-Type Diffusion Approximations

One can show that a properly rescaled Galton-Watson process without (respectively with) immigration GW(I) converges weakly to a diffusion process X ˜ : = X ˜ s , s [ 0 , [ which is the unique, strong, nonnegative–and in the case η / σ 2 ≥ 1 / 2 strictly positive–solution of the stochastic differential equation (SDE) of the form
d X ˜ s = η κ X ˜ s d s + σ X ˜ s d W s , s [ 0 , [ , X ˜ 0 ] 0 , [ given ,
where η [ 0 , [ , κ [ 0 , [ , σ ] 0 , [ are constants and W s , s [ 0 , [ denotes a standard Brownian motion with respect to the underlying probability measure P; see e.g., Feller [130], Jirina [131], Lamperti [132,133], Lindvall [134,135], Grimvall [136], Jagers [56], Borovkov [137], Ethier & Kurtz [138], Durrett [139] for the non-immigration case corresponding to η = 0 , κ 0 , Kawazu & Watanabe [140], Wei & Winnicki [141], Winnicki [64] for the immigration case corresponding to η 0 , κ = 0 , as well as Sriram [142] for the general case η [ 0 , [ , κ R . Feller-type branching processes of the form (129), which are special cases of continuous state branching processes with immigration (see e.g., Kawazu & Watanabe [140], Li [143], as well as Dawson & Li [144] for imbeddings to affine processes) play for instance an important role in the modelling of the term structure of interest rates, cf. the seminal Cox-Ingersoll-Ross CIR model [145] and the vast follow-up literature thereof. Furthermore, (129) is also prominently used as (a special case of) Cox & Ross’s [146] constant elasticity of variance CEV asset price process, as (part of) Heston’s [147] stochastic asset-volatility framework, as a model of neuron activity (see e.g., Lansky & Lanska [148], Giorno et al. [149], Lanska et al. [150], Lansky et al [151], Ditlevsen & Lansky [152], Höpfner [153], Lansky & Ditlevsen [154]), as a time-dynamic description of the nitrous oxide emission rate from the soil surface (see e.g., Pedersen [155]), as well as a model for the individual hazard rate in a survival analysis context (see e.g., Aalen & Gjessing [156]).
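To build intuition for the limit object (129), one can simulate it directly. The following Euler-Maruyama sketch (all parameter values are illustrative assumptions only) checks the simulated mean against the explicit first moment, which solves d m / d s = η − κ m and hence equals η / κ + ( X ˜ 0 − η / κ ) e − κ s :

```python
import math
import random

# Euler-Maruyama sketch of the branching-type diffusion (129),
#   dX = (eta - kappa*X) ds + sigma*sqrt(X) dW,
# with truncation at zero so that sqrt stays well-defined. All
# parameter values below are illustrative assumptions only.
eta, kappa, sigma = 0.5, 1.0, 0.4     # note: 2*eta >= sigma**2 here
x_init, T, steps, n_paths = 1.0, 5.0, 2000, 400
dt = T / steps

random.seed(1)
terminal = []
for _ in range(n_paths):
    x = x_init
    for _ in range(steps):
        dw = random.gauss(0.0, math.sqrt(dt))
        x = x + (eta - kappa * x) * dt + sigma * math.sqrt(x) * dw
        x = max(x, 0.0)  # truncation keeps the scheme nonnegative
    terminal.append(x)

# First moment of (129): E[X_T] = eta/kappa + (x_init - eta/kappa)*exp(-kappa*T).
mean_exact = eta / kappa + (x_init - eta / kappa) * math.exp(-kappa * T)
mean_mc = sum(terminal) / n_paths
assert abs(mean_mc - mean_exact) < 0.05
```

The Monte-Carlo mean of the terminal values matches the analytic mean within the statistical tolerance, illustrating the mean-reverting drift of (129).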
Along these lines of branching-type diffusion limits, it makes sense to consider the solutions of two SDEs (129) with different fixed parameter sets ( η , κ A , σ ) and ( η , κ H , σ ) , determine for each of them a corresponding approximating GW(I), investigate the Hellinger integral between the laws of these two GW(I), and finally calculate the limit of the Hellinger integral (bounds) as the GW(I) approach their SDE solutions. Notice that for technical reasons (which will be explained below), the constants η and σ ought to be independent of A , H in our current context.
In order to make the above-mentioned limit procedure rigorous, it is reasonable to work with appropriate approximations such that in each convergence step m one faces the setup P NI P SP , 1 (i.e., the non-immigration or the equal-fraction case), where the corresponding Hellinger integral can be calculated exactly in a recursive way, as stated in Theorem 1. Let us explain the details in the following.
Consider a sequence of GW(I) X ( m ) , m ∈ N , with probability laws P • ( m ) on a measurable space ( Ω , F ) , where as above the subscript • stands for either the hypothesis H or the alternative A . Analogously to (1), we use for each fixed step m ∈ N the representation X ( m ) : = X ℓ ( m ) , ℓ ∈ N 0 with
X ℓ ( m ) : = ∑ j = 1 X ℓ − 1 ( m ) Y ℓ − 1 , j ( m ) + Y ˜ ℓ ( m ) , ℓ ∈ N , X 0 ( m ) ∈ N given ,
where under the law P • ( m )
  • the collection Y ( m ) : = Y i , j ( m ) , i ∈ N 0 , j ∈ N consists of i.i.d. random variables which are Poisson distributed with parameter β • ( m ) > 0 ,
  • the collection Y ˜ ( m ) : = Y ˜ i ( m ) , i ∈ N consists of i.i.d. random variables which are Poisson distributed with parameter α • ( m ) ≥ 0 ,
  • Y ( m ) and Y ˜ ( m ) are independent.
From arbitrary drift-parameters η ∈ [ 0 , ∞ [ , κ • ∈ [ 0 , ∞ [ , and diffusion-term-parameter σ > 0 , we construct the offspring-distribution-parameter and the immigration-distribution-parameter of the sequence X ( m ) , m ∈ N , by
β • ( m ) : = 1 − κ • / ( σ 2 m ) and α • ( m ) : = β • ( m ) · η / σ 2 .
Here and henceforth, we always assume that the approximation step m is large enough to ensure that β • ( m ) ∈ ] 0 , 1 ] and at least one of β A ( m ) , β H ( m ) is strictly less than 1; this will be abbreviated by m ∈ N ¯ . Let us point out that – as mentioned above – our choice entails the best-to-handle setup P NI ∪ P SP , 1 (which does not happen if instead of a common η one uses hypothesis-dependent η • with η A ≠ η H ). Based on the GW(I) X ( m ) , let us construct the continuous-time branching process X ˜ ( m ) : = X ˜ s ( m ) , s ∈ [ 0 , ∞ [ by
X ˜ s ( m ) : = 1 m X ⌊ σ 2 m s ⌋ ( m ) ,
living on the state space E ( m ) : = 1 m N 0 . Notice that X ˜ ( m ) is constant on each time-interval [ k / ( σ 2 m ) , ( k + 1 ) / ( σ 2 m ) [ and takes at s = k / ( σ 2 m ) the value 1 m X k ( m ) of the k-th GW(I) generation size, divided by m, i.e., it “jumps” with the jump-size 1 m ( X k ( m ) − X k − 1 ( m ) ) which is equal to the 1 m -fold difference to the previous generation size. From (132) one can immediately see the necessity of having σ independent of A , H because for the required law-equivalence in (the corresponding version of) (13) both models at stake have to “live” on the same time-scale τ s ( m ) : = ⌊ σ 2 m s ⌋ . For this setup, one obtains the following convergence result:
Theorem 10.
Let η ∈ [ 0 , ∞ [ , κ ∈ [ 0 , ∞ [ , σ ∈ ] 0 , ∞ [ and X ˜ ( m ) be as defined in (130) to (132). Furthermore, let us suppose that lim m → ∞ 1 m X 0 ( m ) = X ˜ 0 > 0 and denote by d ( [ 0 , ∞ [ , [ 0 , ∞ [ ) the space of right-continuous functions f : [ 0 , ∞ [ → [ 0 , ∞ [ with left limits. Then the sequence of processes X ˜ ( m ) m ∈ N ¯ converges in distribution in d ( [ 0 , ∞ [ , [ 0 , ∞ [ ) to a diffusion process X ˜ which is the unique strong, nonnegative – and in case of η / σ 2 ≥ 1 / 2 strictly positive – solution of the SDE
d X ˜ s = η κ X ˜ s d s + σ X ˜ s d W s , s [ 0 , [ , X ˜ 0 ] 0 , [ given ,
where W s , s [ 0 , [ denotes a standard Brownian motion with respect to the limit probability measure P ˜ .
Remark 8.
Notice that the condition η / σ 2 ≥ 1 / 2 can be interpreted in our approximation setup (131) as α • ( m ) ≥ β • ( m ) / 2 , which quantifies the intuitively reasonable assertion that if the probability P [ Y ˜ 1 ( m ) = 0 ] = e − α • ( m ) of having no immigration is small enough relative to the probability P [ Y 1 , k ( m ) = 0 ] = e − β • ( m ) of having no offspring ( m ∈ N ¯ ), then the limiting diffusion X ˜ almost surely never hits zero.
The corresponding proof of Theorem 10 – which is outlined in Appendix A.4 – is an adaptation of the proof of Theorem 9.1.3 in Ethier & Kurtz [138], which deals with drift-parameters η = 0 , κ = 0 in the SDE (133) whose solution is approached on a σ-independent time scale by a sequence of (critical) Galton-Watson processes without immigration but with general offspring distribution with mean 1 and variance σ 2 . Notice that due to (131) the latter is inconsistent with our Poissonian setup, but this is compensated by our chosen σ-dependent time scale. Other limit investigations for (133), involving offspring/immigration distributions and parametrizations which are also incompatible with ours, are e.g., treated in Sriram [142].
As illustration of our proposed approach, let us give the following
Example 3.
Consider the parameter setup ( η , κ , σ ) = ( 5 , 2 , 0.4 ) and initial generation size X ˜ 0 = 3 . Figure 4 shows the diffusion-approximation X ˜ s ( m ) (blue) of the corresponding solution X ˜ s of the SDE (133) up to the time horizon T = 10 , for the approximation steps m ∈ { 13 , 50 , 200 , 1000 } . Notice that in this setup there holds N ¯ = { k ∈ N : k ≥ 13 } (recall that N ¯ is the subset of the positive integers such that β ( m ) = 1 − κ / ( σ 2 · m ) > 0 ). The “long-term mean” of the limit process X ˜ s is η / κ = 2.5 and is indicated by the red line. The “long-term mean” of the approximations X ˜ s ( m ) is equal to α ( m ) / ( m · ( 1 − β ( m ) ) ) = η / κ − η / ( σ 2 · m ) = 2.5 − 31.25 / m and is displayed by the green line.
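As a complementary illustration, the construction (130) to (132) for this example can be sketched in a few lines of code; the simulation below is a hypothetical re-implementation (not the authors' original one), exploiting that a sum of X i.i.d. Poisson(β) offspring variables is again Poisson(β·X):

```python
import numpy as np

# Hedged sketch of Example 3: simulate the rescaled Poisson GW(I)
# X~_s^(m) = (1/m) * X_{floor(sigma^2 m s)}^(m) of (130)-(132) and verify
# the deterministic parameter formulas quoted above.
eta, kappa, sigma2, m = 5.0, 2.0, 0.16, 1000   # sigma = 0.4

beta_m = 1.0 - kappa / (sigma2 * m)        # offspring parameter, cf. (131)
alpha_m = beta_m * eta / sigma2            # immigration parameter, cf. (131)
# "long-term mean" of X~^(m): alpha(m) / (m * (1 - beta(m))) = 2.5 - 31.25/m
long_term_mean = alpha_m / (m * (1.0 - beta_m))

# one simulated path up to horizon T = 10 (illustration only)
rng = np.random.default_rng(0)
T = 10.0
X = 3 * m                                  # X_0^(m) = m * X~_0
path = []
for _ in range(int(sigma2 * m * T)):       # floor(sigma^2 m T) generations
    # sum of X i.i.d. Poisson(beta) offspring variables is Poisson(beta * X)
    X = rng.poisson(beta_m * X) + rng.poisson(alpha_m)
    path.append(X / m)

print(beta_m, alpha_m, long_term_mean)
```

For m = 1000 this prints β(m) = 0.9875, α(m) = 30.859375 and the long-term mean 2.46875 = 2.5 − 31.25/1000, matching the green line in Figure 4.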

7.2. Bounds of Hellinger Integrals for Diffusion Approximations

For each approximation step m and each observation horizon t ∈ [ 0 , ∞ [ , let us now investigate the behaviour of the Hellinger integrals H λ P A , t ( m ) , C d A P H , t ( m ) , C d A , where P • , t ( m ) , C d A denotes the canonical law (under H resp. A ) of the continuous-time diffusion approximation X ˜ ( m ) (cf. (132)), restricted to [ 0 , t ] . It is easy to see that H λ P A , t ( m ) , C d A P H , t ( m ) , C d A coincides with H λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) of the law restrictions of the GW(I) generation sizes X ℓ ( m ) , ℓ ∈ { 0 , … , ⌊ σ 2 m t ⌋ } , where ⌊ σ 2 m t ⌋ / ( σ 2 m ) can be interpreted as the last “jump-time” of X ˜ ( m ) before t. These Hellinger integrals obey the results of
  • the Propositions 2 and 3 (for η = 0 ) respectively the Propositions 4 and 5 (for η ] 0 , [ ), as far as recursively computable exact values are concerned,
  • Theorem 5 as far as closed-form bounds are concerned; recall that the current setup is of type P NI ∪ P SP , 1 , and thus we can use the simplifications proposed in Remark 7(a).
In order to obtain the desired Hellinger integral limits lim m → ∞ H λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) , one faces several technical problems which will be described in the following. To begin with, for fixed m ∈ N ¯ we apply the Propositions 2(b), 3(b), 4(b), 5(b) to the current setup ( β A ( m ) , β H ( m ) , α A ( m ) , α H ( m ) ) ∈ P NI ∪ P SP , 1 with
β • ( m ) : = β ( m , κ • , σ 2 ) : = 1 − κ • / ( σ 2 m ) and α • ( m ) : = α ( m , κ • , σ 2 , η ) : = β • ( m ) · η / σ 2 ( cf . ( 131 ) )
Notice that η = 0 corresponds to the no-immigration (NI) case and that α • ( m ) / β • ( m ) = η / σ 2 . Accordingly, we set α λ ( m ) : = λ · α A ( m ) + ( 1 − λ ) · α H ( m ) , β λ ( m ) : = λ · β A ( m ) + ( 1 − λ ) · β H ( m ) . By using
q λ ( m ) : = q ( m , κ , σ 2 , λ ) : = β A ( m ) λ β H ( m ) 1 λ , λ R \ { 0 , 1 } ,
as well as the connected sequence a n ( m ) n N : = a n ( q λ ( m ) ) n N we arrive at the
Corollary 13.
For all ( β A ( m ) , β H ( m ) , α A ( m ) , α H ( m ) , λ ) ∈ ( P NI ∪ P SP , 1 ) × ( R ∖ { 0 , 1 } ) and all population sizes X 0 ( m ) ∈ N there holds
h λ ( m ) : = H λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) = exp { a ⌊ σ 2 m t ⌋ ( q λ ( m ) ) · X 0 ( m ) + ( η / σ 2 ) ∑ k = 1 ⌊ σ 2 m t ⌋ a k ( q λ ( m ) ) }
with η = 0 in the NI case.
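To make Corollary 13 concrete, the recursion behind (135) can be evaluated numerically. The sketch below is a hypothetical implementation with illustrative parameter values (not taken from the paper), using the map ξ λ ( q ) ( x ) = q · e x − β λ with a 0 = 0 as in the appendix proofs:

```python
import math

# Hedged numerical sketch of Corollary 13 (illustrative parameters):
# Hellinger integral h_lambda^(m) of (135) via the recursion
# a_n(q) = q * exp(a_{n-1}) - beta_lambda, a_0 = 0 (so a_1 = q - beta_lambda).
eta, kappa_A, kappa_H, sigma2 = 5.0, 2.0, 1.0, 0.16
m, t, lam = 500, 10.0, 0.5
X0 = 3 * m                                   # X_0^(m) = m * X~_0

beta_A = 1.0 - kappa_A / (sigma2 * m)
beta_H = 1.0 - kappa_H / (sigma2 * m)
beta_lam = lam * beta_A + (1.0 - lam) * beta_H
q_lam = beta_A ** lam * beta_H ** (1.0 - lam)   # q_lambda^(m) = beta_A^lam * beta_H^(1-lam)

N = int(sigma2 * m * t)                      # floor(sigma^2 m t) observed generations
a, a_seq = 0.0, []
for _ in range(N):
    a = q_lam * math.exp(a) - beta_lam
    a_seq.append(a)

h = math.exp(a_seq[-1] * X0 + (eta / sigma2) * sum(a_seq))   # cf. (135)
print(h)
```

For λ ∈ ] 0 , 1 [ one has q λ ( m ) < β λ ( m ) (geometric versus arithmetic mean), so the a n are negative and decreasing towards their fixed point, and 0 < h λ ( m ) < 1, in line with Properties 1 (P1).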
In the following, we employ the SDE-parameter constellations (which are consistent with (131) in combination with our requirement to work here only on ( P NI ∪ P SP , 1 ) )
P ˜ N I : = { ( κ A , κ H , η ) : η = 0 , κ A ∈ [ 0 , ∞ [ , κ H ∈ [ 0 , ∞ [ , κ A ≠ κ H } ,
P ˜ S P , 1 : = { ( κ A , κ H , η ) : η > 0 , κ A ∈ [ 0 , ∞ [ , κ H ∈ [ 0 , ∞ [ , κ A ≠ κ H } .
Due to the – not in closed form representable – recursive nature of the sequences a n ( q ) n ∈ N defined by (36), the calculation of lim m → ∞ h λ ( m ) in (135) does not seem to be (straightforwardly) tractable; after all, one “has to move along” a sequence of recursions (roughly speaking), since ⌊ σ 2 m t ⌋ → ∞ as m tends to infinity. One way to “circumvent” such technical problems is to compute, instead of the limit lim m → ∞ h λ ( m ) of the (exact values of the) Hellinger integrals h λ ( m ) , the limits of the corresponding (explicit) closed-form lower resp. upper bounds adapted from Theorem 5. In order to achieve this, one first needs a preparatory step, due to the fact that the sequence a ⌊ σ 2 m t ⌋ ( q λ ( m ) ) m ∈ N ¯ (and hence its bounds leading to closed-form expressions) does not necessarily converge for all λ ∈ R ∖ [ 0 , 1 ] ; roughly, this can be conjectured from the Propositions 3(c) and 5(c) in combination with ⌊ σ 2 m t ⌋ → ∞ . Correspondingly, for our “sequence-of-recursions” context equipped with the diffusion-limit’s drift-parameter constellations ( κ A , κ H , η ) we have to derive a “convergence interval” [ λ ˜ − , λ ˜ + ] ∖ [ 0 , 1 ] which replaces the single-recursion-concerning [ λ − , λ + ] ∖ [ 0 , 1 ] (cf. Lemma 1). This amounts to
Proposition 15.
For all ( κ A , κ H , η ) ∈ P ˜ NI ∪ P ˜ SP , 1 define
0 > λ ˜ − : = − ∞ , if κ A < κ H , − κ H 2 / ( κ A 2 − κ H 2 ) , if κ A > κ H , and 1 < λ ˜ + : = κ H 2 / ( κ H 2 − κ A 2 ) , if κ A < κ H , ∞ , if κ A > κ H .
Then, for all ( κ A , κ H , η , λ ) ∈ ( P ˜ NI ∪ P ˜ SP , 1 ) × ] λ ˜ − , λ ˜ + [ ∖ [ 0 , 1 ] there holds for all sufficiently large m ∈ N ¯
q λ ( m ) : = 1 κ A σ 2 m λ 1 κ H σ 2 m 1 λ < min 1 , e β λ ( m ) 1 ,
and thus the sequence a n ( q λ ( m ) ) n ∈ N converges to the fixed point x 0 ( m ) ∈ ] 0 , − log q λ ( m ) [ .
This will be proved in Appendix A.4.
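A quick numerical plausibility check of Proposition 15 (a hedged sketch with illustrative, hypothetical parameters):

```python
import math

# Hedged check of Proposition 15: with kappa_A > kappa_H one has
# lambda~_- = -kappa_H^2 / (kappa_A^2 - kappa_H^2); for lambda inside
# ]lambda~_-, lambda~_+[ \ [0,1] the condition
# q_lambda^(m) < min{1, e^(beta_lambda^(m) - 1)} should hold for large m,
# and fail for lambda outside the interval.
kappa_A, kappa_H, sigma2 = 2.0, 1.0, 0.16
lam_minus = -kappa_H ** 2 / (kappa_A ** 2 - kappa_H ** 2)   # = -1/3 here

def condition(lam, m):
    beta_A = 1.0 - kappa_A / (sigma2 * m)
    beta_H = 1.0 - kappa_H / (sigma2 * m)
    beta_lam = lam * beta_A + (1.0 - lam) * beta_H
    q_lam = beta_A ** lam * beta_H ** (1.0 - lam)
    return q_lam < min(1.0, math.exp(beta_lam - 1.0))

m = 10 ** 6
inside = condition(-0.2, m)     # -0.2 lies in ]-1/3, 0[
outside = condition(-0.5, m)    # -0.5 < lambda~_-
print(lam_minus, inside, outside)
```

A second-order expansion gives log q λ ( m ) − ( β λ ( m ) − 1 ) = − Λ λ / ( 2 σ 4 m 2 ) + O ( m − 3 ) with Λ λ = λ κ A 2 + ( 1 − λ ) κ H 2 , so for large m the condition is essentially equivalent to Λ λ > 0, i.e., to λ ∈ ] λ ˜ − , λ ˜ + [ .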
We are now in the position to determine bounds of the Hellinger integral limits lim m H λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) in form of m-limits of appropriate versions of closed-form bounds from Section 6. For the sake of brevity, let us henceforth use the abbreviations x 0 ( m ) : = x 0 ( q λ ( m ) ) , Γ < ( m ) : = Γ < ( q λ ( m ) ) = q λ ( m ) 2 · e x 0 ( m ) · x 0 ( m ) 2 , Γ > ( m ) : = Γ > ( q λ ( m ) ) = q λ ( m ) 2 · x 0 ( m ) 2 , d ( m ) , S : = d ( q λ ( m ) ) , S = x 0 ( m ) ( q λ ( m ) β λ ( m ) ) x 0 ( m ) and d ( m ) , T : = d ( q λ ( m ) ) , T = q λ ( m ) · e x 0 ( m ) . By the above considerations, the Theorem 5 (together with Remark 7(a)) adapts to the current setup as follows:
Corollary 14.
(a) For all ( κ A , κ H , η , λ ) ∈ ( P ˜ N I ∪ P ˜ S P , 1 ) × ] 0 , 1 [ , all t ∈ [ 0 , ∞ [ , all approximation steps m ∈ N ¯ and all initial population sizes X 0 ( m ) ∈ N the Hellinger integral can be bounded by
C λ , X 0 ( m ) , t ( m ) , L : = exp { x 0 ( m ) · X 0 ( m ) η σ 2 d ( m ) , T 1 d ( m ) , T 1 d ( m ) , T σ 2 m t + x 0 ( m ) η σ 2 · σ 2 m t + ζ ̲ σ 2 m t ( m ) · X 0 ( m ) + η σ 2 · ϑ ̲ σ 2 m t ( m ) }
H λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) exp { x 0 ( m ) · X 0 ( m ) η σ 2 d ( m ) , S 1 d ( m ) , S 1 d ( m ) , S σ 2 m t + x 0 ( m ) η σ 2 · σ 2 m t ζ ¯ σ 2 m t ( m ) · X 0 ( m ) η σ 2 · ϑ ¯ σ 2 m t ( m ) } = : C λ , X 0 ( m ) , t ( m ) , U ,
where we define analogously to (98) to (101)
ζ ̲ n ( m ) : = Γ < ( m ) · d ( m ) , T n 1 1 d ( m ) , T · 1 d ( m ) , T n > 0 ,
ϑ ̲ n ( m ) : = Γ < ( m ) · 1 d ( m ) , T n 1 d ( m ) , T 2 · 1 d ( m ) , T 1 + d ( m ) , T n 1 + d ( m ) , T > 0 ,
ζ ¯ n ( m ) : = Γ < ( m ) · d ( m ) , S n d ( m ) , T n d ( m ) , S d ( m ) , T d ( m ) , S n 1 · 1 d ( m ) , T n 1 d ( m ) , T > 0 ,
ϑ ¯ n ( m ) : = Γ < ( m ) · d ( m ) , T 1 d ( m ) , T · 1 d ( m ) , S d ( m ) , T n 1 d ( m ) , S d ( m ) , T d ( m ) , S n d ( m ) , T n d ( m ) , S d ( m ) , T > 0 .
Notice that (140) and (141) simplify significantly for ( κ A , κ H , η , λ ) P ˜ N I × ] 0 , 1 [ for which η = 0 holds.
(b) For all ( κ A , κ H , η , λ ) ∈ ( P ˜ NI ∪ P ˜ SP , 1 ) × ] λ ˜ − , λ ˜ + [ ∖ [ 0 , 1 ] and all initial population sizes X 0 ( m ) ∈ N the Hellinger integral bounds (140) and (141) are valid for all sufficiently large m ∈ N ¯ , where the expressions (142) to (145) have to be replaced by
ζ ̲ n ( m ) : = Γ > ( m ) · d ( m ) , T n d ( m ) , S 2 n d ( m ) , T d ( m ) , S 2 > 0 ,
ϑ ̲ n ( m ) : = Γ > ( m ) d ( m ) , T d ( m ) , S 2 · d ( m ) , T · 1 d ( m ) , T n 1 d ( m ) , T d ( m ) , S 2 · 1 d ( m ) , S 2 n 1 d ( m ) , S 2 > 0 , ζ ¯ n ( m ) : = Γ > ( m ) · d ( m ) , S n 1 · n 1 d ( m ) , T n 1 d ( m ) , T > 0 ,
ϑ ¯ n ( m ) : = Γ > ( m ) · [ d ( m ) , S d ( m ) , T 1 d ( m ) , S 2 1 d ( m ) , T · 1 d ( m ) , S n
+ d ( m ) , T 1 d ( m ) , S d ( m ) , T n 1 d ( m ) , T 1 d ( m ) , S d ( m ) , T d ( m ) , S n 1 d ( m ) , S · n ] .
Let us finally present the desired assertions on the limits of the bounds given in Corollary 14 as the approximation step m tends to infinity, by employing for λ ∈ ] λ ˜ − , λ ˜ + [ ∖ { 0 , 1 } the quantities
κ λ : = λ κ A + ( 1 λ ) κ H as well as Λ λ : = λ κ A 2 + ( 1 λ ) κ H 2 ,
for which the following relations hold:
√ Λ λ > κ λ > 0 , for λ ∈ ] 0 , 1 [ ,
0 < √ Λ λ < κ λ , for λ ∈ ] λ ˜ − , λ ˜ + [ ∖ [ 0 , 1 ] .
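Read with the square root of Λ λ (which comparing like quantities with κ λ forces), the relations (151) and (152) are strict Jensen inequalities for the square function; a small numerical self-check with hypothetical sample values:

```python
import math

# Hedged check of relations (151)-(152): sqrt(Lambda_lam) versus kappa_lam
# for sample drift parameters kappa_A != kappa_H (here kappa_A > kappa_H,
# so lambda~_- = -1/3 and lambda~_+ = infinity).
kA, kH = 2.0, 1.0

def sqrt_Lambda(lam):
    return math.sqrt(lam * kA ** 2 + (1.0 - lam) * kH ** 2)

def kappa(lam):
    return lam * kA + (1.0 - lam) * kH

ok_inside = all(sqrt_Lambda(l) > kappa(l) > 0 for l in (0.1, 0.5, 0.9))
ok_outside = all(0 < sqrt_Lambda(l) < kappa(l) for l in (-0.3, -0.1, 1.5, 4.0))
print(ok_inside, ok_outside)
```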
Theorem 11.
Let the initial SDE-value X ˜ 0 ∈ ] 0 , ∞ [ be arbitrary but fixed, and suppose that lim m → ∞ 1 m X 0 ( m ) = X ˜ 0 . Then, for all ( κ A , κ H , η , λ ) ∈ ( P ˜ N I ∪ P ˜ S P , 1 ) × ] λ ˜ − , λ ˜ + [ ∖ { 0 , 1 } and all t ∈ [ 0 , ∞ [ the Hellinger integral limit can be bounded by
d λ , X ˜ 0 , t L : = exp { Λ λ − κ λ σ 2 X ˜ 0 − η Λ λ 1 − e − Λ λ · t − η σ 2 Λ λ − κ λ · t + L λ ( 1 ) ( t ) · X ˜ 0 + η σ 2 · L λ ( 2 ) ( t ) }
lim m H λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) exp { Λ λ κ λ σ 2 X ˜ 0 η 1 2 ( Λ λ + κ λ ) 1 e 1 2 ( Λ λ + κ λ ) · t η σ 2 Λ λ κ λ · t U λ ( 1 ) ( t ) · X ˜ 0 η σ 2 · U λ ( 2 ) ( t ) } = : d λ , X ˜ 0 , t U ,
where for the (sub)case of all λ ] 0 , 1 [ and all t 0
L λ ( 1 ) ( t ) : = Λ λ κ λ 2 2 σ 2 · Λ λ · e Λ λ · t · 1 e Λ λ · t ,
L λ ( 2 ) ( t ) : = 1 4 · Λ λ κ λ Λ λ 2 · 1 e Λ λ · t 2 ,
U λ ( 1 ) ( t ) : = Λ λ κ λ 2 σ 2 · e 1 2 ( Λ λ + κ λ ) · t e Λ λ · t Λ λ κ λ e 1 2 ( Λ λ + κ λ ) · t 1 e Λ λ · t 2 · Λ λ ,
U λ ( 2 ) ( t ) : = Λ λ κ λ 2 Λ λ · 1 e 1 2 3 Λ λ + κ λ · t 3 Λ λ + κ λ + e Λ λ · t e 1 2 ( Λ λ + κ λ ) · t Λ λ κ λ ,
and for the remaining (sub)case of all λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] and all t 0
L λ ( 1 ) ( t ) : = Λ λ κ λ 2 2 σ 2 · κ λ · e Λ λ · t · 1 e κ λ · t ,
L λ ( 2 ) ( t ) : = Λ λ κ λ 2 2 · κ λ · 1 e Λ λ · t Λ λ 1 e ( Λ λ + κ λ ) · t Λ λ + κ λ ,
U λ ( 1 ) ( t ) : = Λ λ κ λ 2 2 · σ 2 · e 1 2 ( Λ λ + κ λ ) · t · t 1 e Λ λ · t Λ λ ,
U λ ( 2 ) ( t ) : = Λ λ κ λ 2 · Λ λ κ λ 1 e 1 2 ( Λ λ + κ λ ) · t Λ λ · Λ λ + κ λ 2 + 1 e 1 2 ( 3 Λ λ + κ λ ) · t Λ λ · 3 Λ λ + κ λ e 1 2 ( Λ λ + κ λ ) · t Λ λ + κ λ · t .
Notice that the components L λ ( i ) ( t ) and U λ ( i ) ( t ) (for i = 1 , 2 and in both cases λ ] 0 , 1 [ and λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] ) are strictly positive for t > 0 and do not depend on the parameter η. Furthermore, the bounds d λ , X ˜ 0 , t L and d λ , X ˜ 0 , t U simplify significantly in the case ( κ A , κ H , η ) P ˜ N I , for which η = 0 holds.
This will be proved in Appendix A.4. For the time-asymptotics, we obtain the
Corollary 15.
Let the initial SDE-value X ˜ 0 ∈ ] 0 , ∞ [ be arbitrary but fixed, and suppose that lim m → ∞ 1 m X 0 ( m ) = X ˜ 0 . Then:
(a) For all ( κ A , κ H , η , λ ) ∈ P ˜ N I × ] λ ˜ − , λ ˜ + [ ∖ { 0 , 1 } the Hellinger integral limit converges to
lim t lim m log H λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) = X ˜ 0 σ 2 · Λ λ κ λ < 0 , for λ ] 0 , 1 [ , > 0 , for λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] .
(b) For all ( κ A , κ H , η , λ ) ∈ P ˜ S P , 1 × ] λ ˜ − , λ ˜ + [ ∖ { 0 , 1 } the Hellinger integral limit possesses the asymptotic behaviour
lim t 1 t log lim m H λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) = η σ 2 · Λ λ κ λ < 0 , for λ ] 0 , 1 [ , > 0 , for λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] .
The assertions of Corollary 15 follow immediately by inspecting the expressions in the exponential of (153) and (154) in combination with (155) to (162).

7.3. Bounds of Power Divergences for Diffusion Approximations

Analogously to Section 4 (see especially Section 4.1), for orders λ ∈ R ∖ { 0 , 1 } all the results of the previous Section 7.2 carry correspondingly over from (limits of) bounds of the Hellinger integrals H λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) to (limits of) bounds of the total variation distance V P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) (by virtue of (12)), to (limits of) bounds of the Renyi divergences R λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) (by virtue of (7)) as well as to (limits of) bounds of the power divergences I λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) (by virtue of (2)). For the sake of brevity, the – merely repetitive – exact details are omitted. Moreover, by combining the resulting assertions on the above-mentioned power divergences with parts of the Bayesian-decision-making context of Section 2.5, we obtain corresponding assertions on (i) the (cf. (21)) weighted-average decision risk reduction (weighted-average statistical information measure) about the degree of evidence d e g concerning the parameter θ that can be attained by observing the GWI-path X n until stage n, as well as (ii) the (cf. (22)) limit decision risk reduction (limit statistical information measure).
In the following, let us concentrate on the derivation of the Kullback-Leibler information divergence KL (relative entropy) within the current diffusion-limit framework. Notice that altogether we face two limit procedures simultaneously: by the first limit lim λ 1 I λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) we obtain the KL I P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) for every fixed approximation step m N ¯ ; on the other hand, for each fixed λ ] 0 , 1 [ , the second limit lim m I λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) describes the limit of the power divergence – as the sequence of rescaled and continuously interpolated GW(I)’s X ˜ s ( m ) s [ 0 , [ m N ¯ (equipped with probability law P A , σ 2 m t ( m ) resp. P H , σ 2 m t ( m ) up to time σ 2 m t ) converges weakly to the continuous-time CIR-type diffusion process X ˜ s s [ 0 , [ (with probability law P ˜ A , t resp. P ˜ H , t up to time t). In Appendix A.4 we shall prove that these two limits can be interchanged:
Theorem 12.
Let the initial SDE-value X ˜ 0 ∈ ] 0 , ∞ [ be arbitrary but fixed, and suppose that lim m → ∞ 1 m X 0 ( m ) = X ˜ 0 . Then, for all ( κ A , κ H , η ) ∈ P ˜ N I ∪ P ˜ S P , 1 and all t ∈ [ 0 , ∞ [ , one gets the Kullback-Leibler information divergence (relative entropy) convergences
lim m I P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) = lim m lim λ 1 I λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) = κ A κ H 2 2 σ 2 · κ A · X ˜ 0 η κ A · 1 e κ A · t + η · t , if κ A > 0 , κ H 2 2 σ 2 · η 2 · t 2 + X ˜ 0 · t , if κ A = 0 , = lim λ 1 lim m I λ P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) .
This immediately leads to the following
Corollary 16.
Let the initial SDE-value X ˜ 0 ∈ ] 0 , ∞ [ be arbitrary but fixed, and suppose that lim m → ∞ 1 m X 0 ( m ) = X ˜ 0 . Then, the KL limit (163) possesses the following time-asymptotic behaviour:
(a) For all ( κ A , κ H , η ) ∈ P ˜ N I (i.e., η = 0 ) one gets
( i ) in the case κ A > 0 lim t lim m I P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) = X ˜ 0 · ( κ A κ H ) 2 2 σ 2 · κ A , ( ii ) in the case κ A = 0 lim t lim m 1 t · I P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) = X ˜ 0 · κ H 2 4 σ 2 .
(b) For all ( κ A , κ H , η ) ∈ P ˜ S P , 1 (i.e., η > 0 ) one gets
( i ) in the case κ A > 0 lim t lim m 1 t · I P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) = η · ( κ A κ H ) 2 2 σ 2 · κ A , ( ii ) in the case κ A = 0 lim t lim m 1 t 2 · I P A , σ 2 m t ( m ) P H , σ 2 m t ( m ) = η · κ H 2 4 σ 2 .
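The closed form (163) and the time-asymptotics of Corollary 16 can be sanity-checked numerically. The sketch below is a hypothetical implementation with illustrative parameters; the exact bracketing of the κ A > 0 branch is an assumption, cross-checked via continuity at κ A = 0 (which reproduces the κ A = 0 branch) and via Corollary 16(b)(i):

```python
import math

# Hedged sketch of the Kullback-Leibler limit (163); the grouping of the
# kappa_A > 0 branch, (kA-kH)^2/(2*sigma^2*kA) * [ (X0 - eta/kA)(1-e^(-kA t)) + eta t ],
# is an assumption verified below by two consistency checks.
def kl_limit(kA, kH, eta, sigma2, x0, t):
    if kA > 0:
        return (kA - kH) ** 2 / (2 * sigma2 * kA) * (
            (x0 - eta / kA) * (1 - math.exp(-kA * t)) + eta * t)
    return kH ** 2 / (2 * sigma2) * (eta * t ** 2 / 2 + x0 * t)

eta, sigma2, x0, kH, t = 5.0, 0.16, 3.0, 1.0, 10.0

near_zero = kl_limit(1e-6, kH, eta, sigma2, x0, t)   # kappa_A -> 0
at_zero = kl_limit(0.0, kH, eta, sigma2, x0, t)      # = 875 for these values

kA, T = 2.0, 1e6                                     # Corollary 16(b)(i)
rate = kl_limit(kA, kH, eta, sigma2, x0, T) / T
target = eta * (kA - kH) ** 2 / (2 * sigma2 * kA)    # = 7.8125 for these values
print(near_zero, at_zero, rate, target)
```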
Remark 9.
In Appendix A.4 we shall see that the proof of the last (limit-interchange concerning) equality in (163) relies heavily on the use of the extra terms L λ ( 1 ) ( t ) , L λ ( 2 ) ( t ) , U λ ( 1 ) ( t ) , U λ ( 2 ) ( t ) in (153) and (154). Recall that these terms ultimately stem from (manipulations of) the corresponding parts of the “improved closed-form bounds” in Theorem 5, which were derived by using the linear inhomogeneous difference equations a ̲ n ( q ) resp. a ¯ n ( q ) (cf. (92) resp. (94)) instead of the linear homogeneous difference equations a n ( q ) , T resp. a n ( q ) , S (cf. (78) resp. (79)) as explicit approximations of the sequence a n ( q ) . This fact alone already demonstrates the importance of the more tedious approach.
Interesting comparisons of the above-mentioned results in Section 7.2 and Section 7.3 with corresponding information measures of the solutions of the SDE (129) themselves (rather than of their branching approximations) can be found in Kammerer [157].

7.4. Applications to Decision Making

Analogously to Section 6.7, the above-mentioned investigations of the Section 7.1, Section 7.2 and Section 7.3 can be applied to the context of Section 2.5 on dichotomous decision making about GW(I)-type diffusion approximations of solutions of the stochastic differential Equation (129). For the sake of brevity, the–merely repetitive–exact details are omitted.

Author Contributions

Conceptualization, N.B.K. and W.S.; Formal analysis, N.B.K. and W.S.; Methodology, N.B.K. and W.S.; Visualization, N.B.K.; Writing, N.B.K. and W.S. All authors have read and agreed to the published version of the manuscript.

Funding

Niels B. Kammerer received a scholarship of the “Studienstiftung des Deutschen Volkes” for his PhD Thesis.

Acknowledgments

We are very grateful to the referees for their patience to review this long manuscript, and for their helpful suggestions. Moreover, we would like to thank Andreas Greven for some useful remarks.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs and Auxiliary Lemmas

Appendix A.1. Proofs and Auxiliary Lemmas for Section 3

Lemma A1.
For all real numbers x , y , z > 0 and all λ R one has
x λ y 1 λ λ x z λ 1 + ( 1 λ ) y z λ 0 , for λ ] 0 , 1 [ , = 0 , for λ { 0 , 1 } , 0 , for λ R \ [ 0 , 1 ] ,
with equality in the cases λ ∈ R ∖ { 0 , 1 } iff x / y = z .
Proof of Lemma A1.
For fixed x ˜ : = x z λ − 1 > 0 , y ˜ : = y z λ > 0 with x ˜ ≠ y ˜ we inspect the function g on R defined by g ( λ ) : = x ˜ λ y ˜ 1 − λ − ( λ x ˜ + ( 1 − λ ) y ˜ ) which satisfies g ( 0 ) = g ( 1 ) = 0 , g ′ ( 0 ) = y ˜ log ( x ˜ / y ˜ ) − ( x ˜ − y ˜ ) < y ˜ ( ( x ˜ / y ˜ ) − 1 ) − ( x ˜ − y ˜ ) = 0 and which is strictly convex. Thus, the assertion follows immediately by taking into account the obvious case x ˜ = y ˜ . □
Proof of Properties 1.
Property (P9) is trivially valid. To show (P1) we assume 0 < q < β λ , which implies a 1 ( q ) = ξ λ ( q ) ( 0 ) = q − β λ < 0 . By induction, a n ( q ) n ∈ N is strictly negative and strictly decreasing. As stated in (P9), the function ξ λ ( q ) is strictly increasing, strictly convex and converges to − β λ for x → − ∞ . Thus, it hits the straight line i d ( x ) = x once and only once on the negative real line at x 0 ( q ) ∈ ] − β λ , 0 [ (cf. (44)). This implies that the sequence a n ( q ) n ∈ N converges to x 0 ( q ) ∈ ] − β λ , q − β λ [ . Property (P2) follows immediately. In order to prove (P3), let us fix q > max { 0 , β λ } , implying a 1 ( q ) = ξ λ ( q ) ( 0 ) = q − β λ > 0 ; notice that in this setup, the special choice q = 1 implies min { 1 , e β λ − 1 } = e β λ − 1 < q . By induction, a n ( q ) n ∈ N is strictly positive and strictly increasing. Since lim x → ∞ ξ λ ( q ) ( x ) = ∞ , the function ξ λ ( q ) does not necessarily hit the straight line i d ( x ) = x on the positive real line. In fact, due to strict convexity (cf. (P9)), this is excluded if ξ λ ( q ) ′ ( 0 ) = q ≥ 1 . Suppose that q < 1 . To prove that there exists a positive solution of the equation ξ λ ( q ) ( x ) = x it is sufficient to show that the unique global minimum of the strictly convex function h λ ( q ) ( x ) : = ξ λ ( q ) ( x ) − x is taken at some point x 0 ∈ ] 0 , ∞ [ and that h λ ( q ) ( x 0 ) ≤ 0 . It holds h λ ( q ) ′ ( x ) = q · e x − 1 , and therefore h λ ( q ) ′ ( x ) = 0 iff x = x 0 = − log q . We have h λ ( q ) ( − log q ) = 1 − β λ + log q , which is less than or equal to zero iff q ≤ e β λ − 1 . It remains to show that for q > β λ and q > min { 1 , e β λ − 1 } the sequence a n ( q ) n ∈ N grows faster than exponentially, i.e., there do not exist constants c 1 , c 2 ∈ R such that a n ( q ) ≤ e c 1 + c 2 n for all n ∈ N . We already know that (in the current case) a n ( q ) → ∞ as n → ∞ . Notice that it is sufficient to verify lim sup n → ∞ log ( a n + 1 ( q ) ) / log ( a n ( q ) ) = ∞ . For the case β λ ≥ 0 the latter is obtained by
log a n + 1 ( q ) log a n ( q ) = log ( q β λ ) e a n ( q ) + β λ ( e a n ( q ) 1 ) log q e a n 1 ( q ) β λ log ( q β λ ) log ( q ) + q e a n 1 ( q ) β λ a n 1 ( q ) a n 1 ( q ) .
An analogous consideration works out for the case β λ < 0 . Property (P4) is trivial, and (P5) to (P8) are direct implications of the already proven properties (P1) to (P4). □
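The dichotomy behind (P3)/(P3a) – convergence of a n ( q ) n ∈ N for q ≤ min { 1 , e β λ − 1 } , explosion beyond – is easy to visualise numerically; a hedged sketch with an illustrative β λ :

```python
import math

# Hedged illustration of Properties 1 (P3)/(P3a): for q > beta_lam the
# iteration a_n = q * exp(a_{n-1}) - beta_lam, a_0 = 0, converges iff
# q <= min{1, e^(beta_lam - 1)}; for beta_lam in ]0,1[ the binding
# threshold is e^(beta_lam - 1). Slightly above it, a_n explodes.
def iterate(q, beta_lam, n):
    a = 0.0
    for _ in range(n):
        a = q * math.exp(a) - beta_lam
        if a > 50.0:              # clearly diverging
            return math.inf
    return a

beta_lam = 0.8
threshold = math.exp(beta_lam - 1.0)           # ~0.8187
below = iterate(threshold - 1e-3, beta_lam, 5000)
above = iterate(threshold + 1e-3, beta_lam, 5000)
print(below, above)
```

At q = e β λ − 1 the map ξ λ ( q ) is tangent to the identity at x = 1 − β λ ; just below the threshold, the fixed point sits slightly to the left of this tangency point, while just above it the iteration eventually explodes faster than exponentially.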
Proof of Lemma 1.
(a) Let β A > 0 , β H > 0 with β A ≠ β H , λ ∈ R ∖ ] 0 , 1 [ , β λ : = λ β A + ( 1 − λ ) β H and q λ : = β A λ β H 1 − λ > max { 0 , β λ } (cf. Lemma A1). Below, we follow the lines of Linkov & Lunyova [53], appropriately adapted to our context. We have to find those λ ∈ R ∖ ] 0 , 1 [ for which the following two conditions hold:
(i)
q λ ≤ 1 , i.e., ξ λ ( q λ ) ′ ( 0 ) ≤ 1 ,
(ii)
q λ ≤ e β λ − 1 (cf. (P3a)), which is equivalent to the existence of a – positive, if (i) is satisfied – solution of the equation ξ λ ( q λ ) ( x ) = x .
Notice that the case q λ = 1 , λ ∈ R ∖ [ 0 , 1 ] , cannot appear in (i), provided that (ii) holds (since due to Lemma A1 e β λ − 1 < e q λ − 1 = 1 ).
λ < log ( β H ) log ( β H / β A ) , if β A > β H , > log ( β H ) log ( β H / β A ) , if β A < β H .
To proceed, straightforward analysis leads to − log ( q λ ) = arg min x ∈ R [ ξ λ ( q λ ) ( x ) − x ] . To check (ii), we first notice that q λ ≤ e β λ − 1 iff ξ λ ( q λ ) ( x ) − x ≤ 0 for some x ∈ R . Hence, we calculate
ξ λ ( q λ ) log ( q λ ) + log ( q λ ) 0 1 λ ( β A β H ) β H + λ log β A β H + log ( β H ) 0 λ · β H 1 β A β H + log β A β H β H 1 log β H .
In order to isolate λ in (A2), one has to find out for which ( β A , β H ) the term in the square bracket is positive resp. zero resp. negative. To achieve this, we aim for the substitutions x : = β A / β H , β : = β H and thus study first the auxiliary function h β ( x ) : = log ( x ) − β ( x − 1 ) , x > 0 , with fixed parameter β > 0 . Straightforwardly, we obtain h β ′ ( x ) = 1 / x − β and h β ″ ( x ) = − 1 / x 2 . Thus, the function h β ( · ) is strictly concave and attains its maximum at x = 1 / β . Since additionally h β ( 1 ) = 0 and h β ′ ( 1 ) = 1 − β , there exists a second solution z ( β ) ≠ 1 of the equation h β ( x ) = 0 iff β ≠ 1 . Thus, one gets
  • for β = 1 : for all x > 0 there holds h β ( x ) ≤ 0 , with equality iff x = 1 / β ,
  • for β < 1 : h β ( x ) ≥ 0 iff x ∈ [ 1 , z ( β ) ] , with equality iff x ∈ { 1 , z ( β ) } (notice that z ( β ) > 1 ),
  • for β > 1 : h β ( x ) ≥ 0 iff x ∈ [ z ( β ) , 1 ] , with equality iff x ∈ { z ( β ) , 1 } (notice that z ( β ) < 1 ).
Suppose that λ < 0 .
Case 1: If β H = 1 , then condition (ii) is not satisfied whenever β A ≠ β H , since the right side of (A2) is equal to zero and the left side is strictly greater than zero. Hence, λ − = 0 .
Case 2: Let β H > 1 . If β A < β H , then condition (i) is not satisfied and hence λ − = 0 . If β A > β H , then condition (i) is satisfied iff λ < λ ˘ ˘ : = λ ˘ ˘ ( β A , β H ) : = log ( β H ) / log ( β H / β A ) < 0 . On the other hand, incorporating the discussion of the function h β ( · ) , we see that h β H ( β A / β H ) < 0 . Thus, (A2) implies that condition (ii) is satisfied when λ ≥ λ ˘ : = λ ˘ ( β A , β H ) : = ( β H − 1 − log β H ) / ( β H − β A + log ( β A / β H ) ) . We claim that λ ˘ ˘ < λ ˘ and conclude that the conditions (i) and (ii) are not fulfilled jointly, which leads to λ − = 0 . To see this, we notice that due to 1 < β H < β A we get log ( β A ) / ( β A − 1 ) < log ( β H ) / ( β H − 1 ) and thus
log ( β A ) ( β H 1 ) < log ( β H ) ( β A 1 ) β H log ( β H ) β A log ( β H ) < β H log ( β H ) β H log ( β A ) log ( β H ) + log ( β A ) log ( β H ) ( β H β A ) + log ( β H ) log β A β H < log β H β A ( β H 1 ) + log ( β H ) log β A β H log ( β H ) log β H β A < β H 1 log ( β H ) β H β A + log β A β H λ ˘ ˘ < λ ˘ .
Case 3: Let β H < 1 . For this, one gets h β H ( β A / β H ) ≥ 0 for β A ∈ ] β H , β H z ( β H ) ] . Hence, condition (ii) is satisfied if either β A ∈ ] β H , β H z ( β H ) ] , or β A ∉ ] β H , β H z ( β H ) ] and λ ≥ λ ˘ . If β A > β H z ( β H ) , then condition (i) is trivially satisfied for all λ < 0 . In the case β A < β H , condition (i) is satisfied whenever λ > λ ˘ ˘ . Notice that since 0 < β A < β H < 1 , an analogous consideration as in (A3) leads to λ ˘ ˘ < λ ˘ . This implies that λ − = λ ˘ . The last case β A ∈ ] β H , β H z ( β H ) ] is easy to handle: since log ( β H ) / log ( β H / β A ) > 0 as well as h β H ( β A / β H ) ≥ 0 , both conditions (i) and (ii) hold trivially.
The representation of λ + follows straightforwardly from the λ − -result and the skew symmetry (8), by employing 1 − λ ˘ ( β H , β A ) = λ ˘ ( β A , β H ) . Alternatively, one can proceed analogously to the λ − -case.
Part (b) is much easier to prove: if β : = β A = β H > 0 , then for all λ ∈ R ∖ [ 0 , 1 ] one gets q λ = β A λ β H 1 − λ = β as well as β λ = β . Hence, Properties 1 (P2) implies that a n ( q λ ) ≡ 0 and thus it is convergent, independently of the choice λ ∈ R ∖ [ 0 , 1 ] . □
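The claim λ ˘ ˘ < λ ˘ behind the chain (A3) can be spot-checked numerically (a hedged sketch; the parameter pairs are hypothetical):

```python
import math

# Hedged spot-check of the claim lambda_breve_breve < lambda_breve (< 0)
# for 1 < beta_H < beta_A, i.e., Case 2 of the proof of Lemma 1(a).
def lam_bb(bA, bH):                       # threshold of condition (i)
    return math.log(bH) / math.log(bH / bA)

def lam_b(bA, bH):                        # threshold of condition (ii)
    return (bH - 1.0 - math.log(bH)) / (bH - bA + math.log(bA / bH))

checks = [(3.0, 2.0), (5.0, 1.5), (2.2, 2.1)]
ok = all(lam_bb(bA, bH) < lam_b(bA, bH) < 0.0 for bA, bH in checks)
print(ok)
```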
Proof of Formula (51).
For the parameter constellation in Section 3.10, we employ as upper bound for ϕ λ ( x ) ( x N 0 ) the function
ϕ λ ¯ ( x ) : = ϕ λ ( 0 ) , if x = 0 , 0 , if x > 0 .
Notice that this method is rather crude, and gives in the other cases treated in Section 3.7, Section 3.8 and Section 3.9 worse bounds than those derived there. Since λ ∈ ] 0 , 1 [ and α A ≠ α H , one has ϕ λ ( 0 ) < 0 . In order to derive an upper bound of the Hellinger integral, we first set ϵ ¯ : = 1 − e ϕ λ ( 0 ) ∈ ] 0 , 1 [ . Hence, for all n ∈ N ∖ { 1 } we obtain the auxiliary expression
x n 1 = 0 φ λ ( x n 2 ) x n 1 x n 1 ! · exp ϕ λ ( x n 1 ) x n 1 = 0 φ λ ( x n 2 ) x n 1 x n 1 ! · exp ϕ λ ¯ ( x n 1 ) = exp φ λ ( x n 2 ) ϵ ¯ = exp φ λ ( x n 2 ) · 1 ϵ ¯ · exp φ λ ( x n 2 ) .
Moreover, since β A ≠ β H , one gets lim x → ∞ ϕ λ ( x ) = − ∞ (cf. Properties 3 (P20) and Lemma A1). This – together with the nonnegativity of φ λ ( · ) – implies
sup x ∈ N 0 { exp ( ϕ λ ( x ) ) · ( 1 − ϵ ¯ · exp ( − φ λ ( x ) ) ) } = : δ ¯ ∈ ] 0 , 1 [ .
Incorporating these considerations as well as the formulas (27) to (32), we get for n = 1 the relation H λ P A , n P H , n = exp { ϕ λ ( x 0 ) } ≤ 1 (with equality iff x 0 = x * = ( α A − α H ) / ( β H − β A ) ), and – as a continuation of formula (29) – for all n ∈ N ∖ { 1 } (recall that x : = ( x 0 , x 1 , … ) ∈ Ω )
H λ P A , n P H , n = x 1 = 0 x n = 0 k = 1 n Z n , k ( λ ) ( x ) = x 1 = 0 x n 1 = 0 k = 1 n 1 Z n , k ( λ ) ( x ) · exp f A ( x n 1 ) λ f H ( x n 1 ) ( 1 λ ) ( λ f A ( x n 1 ) + ( 1 λ ) f H ( x n 1 ) ) = x 1 = 0 x n 2 = 0 k = 1 n 2 Z n , k ( λ ) ( x ) · exp f λ ( x n 2 ) x n 1 = 0 φ λ ( x n 2 ) x n 1 x n 1 ! · exp { ϕ λ ( x n 1 ) } x 1 = 0 x n 2 = 0 k = 1 n 2 Z n , k ( λ ) ( x ) · exp ϕ λ ( x n 2 ) · 1 ϵ ¯ · exp φ λ ( x n 2 ) δ ¯ · x 1 = 0 x n 2 = 0 k = 1 n 2 Z n , k ( λ ) ( x ) δ ¯ n / 2 .
Hence, H λ P A , n P H , n < 1 for (at least) all n N \ { 1 } , and lim n H λ P A , n P H , n = 0 . □
Notice that the above proof method of formula (51) does not work for the parameter setup in Section 3.11, because there one gets δ ¯ = sup x N 0 exp ϕ λ ( x ) · 1 ϵ ¯ · exp φ λ ( x ) = 1 .
Proof of Proposition 9.
In the setup ( β A , β H , α A , α H , λ ) ∈ P SP , 4 a × ] 0 , 1 [ we require β : = β A = β H < 1 . As a linear upper bound for ϕ λ ( · ) , we employ the tangent line at y ≥ 0 (cf. (52))
ϕ λ , y tan ( x ) : = ( p y − α λ ) + ( q y − β ) · x : = ( p λ , y tan − α λ ) + ( q λ , y tan − β λ ) · x : = ϕ λ ( y ) − y · ϕ λ ′ ( y ) + ϕ λ ′ ( y ) · x .
Since in the current setup $\mathcal{P}_{SP,4a}$ the function $\phi_\lambda(\cdot)$ is strictly increasing, the slope $\phi_\lambda'(y)$ of the tangent line at $y$ is positive. Thus we have $q_y > \beta_\lambda$, and Properties 1 (P3) implies that the sequence $(a_n(q_y))_{n\in\mathbb{N}}$ is strictly increasing and converges to $x_0(q_y) \in \,]0, -\log(q_y)]$ iff $q_y \leq \min\{1, e^{\beta-1}\} = e^{\beta-1} < 1$ (cf. (P3a)), where $x_0(q_y)$ is the smallest solution of the equation $\xi_\lambda^{(q_y)}(x) = q_y\cdot e^{x} - \beta = x$. Since $q_y \to \beta$ for $y \to \infty$ (cf. Properties 3 (P18)) and additionally $e^{\beta-1} > \beta$, there exists a large enough $y \geq 0$ such that the sequence $(a_n(q_y))_{n\in\mathbb{N}}$ converges. If this $y$ is also large enough to additionally guarantee $h(y) < 0$ for
$$h(y) \;:=\; \lim_{n\to\infty}\frac{1}{n}\log \widetilde{B}_{\lambda,X_0,n}(p_y, q_y) \;=\; p_y\cdot e^{x_0(q_y)} - \alpha_\lambda\,,$$
then one can conclude that $\lim_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n}) = 0$. As a first step, for verifying $h(y) < 0$ we look for an upper bound $\overline{x}_0(q_y)$ of the fixed point $x_0(q_y)$, where the latter exists for $y \geq y_1$ (say). Notice that
$$\overline{Q}_\lambda^{(q_y)}(x) \;:=\; \frac{1}{2}\,x^2 + q_y\, x + q_y - \beta \;\geq\; q_y\cdot e^{x} - \beta \;=\; \xi_\lambda^{(q_y)}(x)\,,$$
since $\overline{Q}_\lambda^{(q_y)}(0) = \xi_\lambda^{(q_y)}(0)$, $(\overline{Q}_\lambda^{(q_y)})'(0) = (\xi_\lambda^{(q_y)})'(0)$ and $(\overline{Q}_\lambda^{(q_y)})''(x) \geq (\xi_\lambda^{(q_y)})''(x)$ for $x \in [0, -\log(q_y)]$. For sufficiently large $y \geq y_2 \geq y_1$ (say), we easily obtain the smaller solution of $\overline{Q}_\lambda^{(q_y)}(x) = x$ as
$$\overline{x}_0(q_y) \;=\; (1-q_y) - \sqrt{(1-q_y)^2 - 2\,(q_y - \beta)} \;=\; \big(1 - \phi_\lambda'(y) - \beta\big) - \sqrt{\big(1 - \phi_\lambda'(y) - \beta\big)^2 - 2\,\phi_\lambda'(y)} \;\geq\; x_0(q_y)\,,$$
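The ordering of the fixed point and the quadratic majorant's smaller root can be illustrated numerically: iterating $a_{n+1} = q\,e^{a_n} - \beta$ from $a_1 = q - \beta$ approximates $x_0(q)$, which indeed stays below $\overline{x}_0(q)$. The values of $\beta$ and $q$ below are arbitrary illustrative choices with $\beta < q < e^{\beta-1}$:

```python
import math

# Illustrative check (sample beta, q; not from the paper) that the smaller root
# of (1/2)x^2 + q*x + q - beta = x upper-bounds the fixed point x_0(q) of
# q*e^x - beta = x, approximated here by monotone fixed-point iteration.
beta, q = 0.7, 0.72
assert beta < q < math.exp(beta - 1.0)

a = q - beta                       # a_1(q)
for _ in range(10000):             # strictly increasing, convergent iteration
    a = q * math.exp(a) - beta
x0 = a                             # approximates x_0(q) in ]0, -log(q)]

x0_bar = (1 - q) - math.sqrt((1 - q)**2 - 2 * (q - beta))
print(0 < x0 <= x0_bar)  # True
```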
where the expression under the root is positive since $q_y \to \beta$ for $y \to \infty$. We now have
$$h(y) \;=\; p_y\cdot e^{x_0(q_y)} - \alpha_\lambda \;\leq\; p_y\cdot e^{\overline{x}_0(q_y)} - \alpha_\lambda \;=:\; \overline{h}(y)\,, \qquad y \geq y_2\,.$$
Hence, it suffices to show that $\overline{h}(y) < 0$ for some $y \geq y_2$. We recall from Properties 3 (P15), (P17) and (P19) that
$$\begin{aligned}
\phi_\lambda(y) &= \big(\alpha_A + \beta y\big)^{\lambda}\big(\alpha_H + \beta y\big)^{1-\lambda} - \lambda\big(\alpha_A + \beta y\big) - (1-\lambda)\big(\alpha_H + \beta y\big) \;<\; 0\,,\\
\phi_\lambda'(y) &= \lambda\,\beta\,\Big(\frac{\alpha_A + \beta y}{\alpha_H + \beta y}\Big)^{\lambda-1} + (1-\lambda)\,\beta\,\Big(\frac{\alpha_A + \beta y}{\alpha_H + \beta y}\Big)^{\lambda} - \beta \;>\; 0 \qquad\text{and}\\
\phi_\lambda''(y) &= -\,\Big(\frac{\alpha_A + \beta y}{\alpha_H + \beta y}\Big)^{\lambda}\cdot\frac{\lambda(1-\lambda)\,\beta^2\,(\alpha_A-\alpha_H)^2}{(\alpha_A+\beta y)^2\,(\alpha_H+\beta y)} \;<\; 0\,,
\end{aligned}$$
which immediately implies $\lim_{y\to\infty}\phi_\lambda(y) = \lim_{y\to\infty}\phi_\lambda'(y) = \lim_{y\to\infty}\phi_\lambda''(y) = 0$, and with l'Hospital's rule
$$\lim_{y\to\infty} y\,\phi_\lambda(y) \;=\; -\lim_{y\to\infty} y^2\,\phi_\lambda'(y) \;=\; \lim_{y\to\infty}\frac{y^3}{2}\,\phi_\lambda''(y) \;=\; -\,\frac{1}{2}\lim_{y\to\infty}\Big(\frac{\alpha_A+\beta y}{\alpha_H+\beta y}\Big)^{\lambda}\cdot\frac{\lambda(1-\lambda)\,\beta^2\,(\alpha_A-\alpha_H)^2}{(\alpha_A/y+\beta)^2\,(\alpha_H/y+\beta)} \;=\; -\,\frac{\lambda(1-\lambda)\,(\alpha_A-\alpha_H)^2}{2\,\beta}\,.$$
The formulas (A5), (A7) and (A9) imply the limits $\lim_{y\to\infty} p_y = \alpha_\lambda$, $\lim_{y\to\infty} q_y = \beta$, $\lim_{y\to\infty}\overline{x}_0(q_y) = 0$. Notice that $p_y < \alpha_\lambda$ holds trivially for all $y \geq 0$, since the intercept $(p_y - \alpha_\lambda)$ of the tangent line $\phi_{\lambda,y}^{tan}(\cdot)$ is negative. Incorporating (A8), we therefore obtain $\lim_{y\to\infty} h(y) \leq \lim_{y\to\infty}\overline{h}(y) = 0$. As mentioned before, for the proof it is sufficient to show that $\overline{h}(y) < 0$ for some $y \geq y_2$. This holds true if $\lim_{y\to\infty} y\cdot\overline{h}(y) < 0$. To verify this, notice first that from (A5), (A7) and (A8) we get
h ¯ ( y ) = p y · e x ¯ 0 ( q y ) · ϕ λ ( y ) · 1 2 ϕ λ ( y ) β ( 1 q y ) 2 2 ( q y β ) y · ϕ λ ( y ) · e x ¯ 0 ( q y ) y 0 .
Finally we obtain with (A10)
lim y y · h ¯ ( y ) = lim y y 2 · h ¯ ( y ) = lim y p y · e x ¯ 0 ( q y ) · y 2 · ϕ λ ( y ) · 1 2 ϕ λ ( y ) β ( 1 q y ) 2 2 ( q y β ) + y 3 · ϕ λ ( y ) · e x ¯ 0 ( q y ) = 0 λ ( 1 λ ) · ( α A α H ) 2 β < 0 .
Proof of Corollary 1.
Part (a) follows directly from Proposition 1 (a),(b) and the limit $\lim_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n}) = 0$ in the respective part (c) of Propositions 7, 8 and 9, as well as from (51). To prove part (b), according to (26) we have to verify $\liminf_{\lambda\nearrow 1}\liminf_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n}) = 1$. From part (c) of Proposition 2 we see that this is satisfied iff $\lim_{\lambda\nearrow 1} x_0(q_\lambda^E) = 0$. Recall that for fixed $\lambda \in \,]0,1[$ we have $\beta_\lambda = \lambda\beta_A + (1-\lambda)\beta_H > 0$, $q_\lambda^E = \beta_A^{\lambda}\beta_H^{1-\lambda} < \beta_\lambda$ (cf. Lemma A1) and, from Properties 1 (P1), the unique negative solution $x_0(q_\lambda^E) \in \,]-\beta_\lambda, q_\lambda^E - \beta_\lambda[$ of $\xi_\lambda^{(q_\lambda^E)}(x) = q_\lambda^E\, e^{x} - \beta_\lambda = x$ (cf. (44)). Due to the continuity and boundedness of the map $\lambda \mapsto x_0(q_\lambda^E)$ (for $\lambda \in [0,1]$), one gets that $\lim_{\lambda\nearrow 1} x_0(q_\lambda^E)$ exists and is the smallest nonpositive solution of $\beta_A\, e^{x} - \beta_A = x$. From this, part (b) as well as the non-contiguity in part (c) follow immediately. The other part of (c) is a direct consequence of Proposition 1 (a),(b) and Proposition 2 (c). □
Proof of Formula (59).
One can proceed similarly to the proof of formula (51) above. Recall that $H_\lambda(P_{A,1}\,\|\,P_{H,1}) = \exp\{\phi_\lambda(X_0)\} > 1$ for $X_0 \in \mathbb{N}$ (cf. (28), Lemma A1 and $f_A(X_0) \ne f_H(X_0)$ for all $X_0 \in \mathbb{N}$). For $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) \in \mathcal{P}_{SP,2}\times(\mathbb{R}\setminus[0,1])$ one gets $\phi_\lambda(0) = 0$, $\phi_\lambda(1) > 0$, and we define for $x \geq 0$
$$\underline{\phi}_\lambda(x) \;:=\; \begin{cases} \phi_\lambda(1), & \text{if } x = 1,\\ 0, & \text{if } x \ne 1. \end{cases}$$
By means of the choice $\underline{\epsilon} := \varphi_\lambda(1)\cdot\big(e^{\phi_\lambda(1)} - 1\big) > 0$, we obtain for all $n \in \mathbb{N}\setminus\{1\}$
x n 1 = 0 φ λ ( x n 2 ) x n 1 x n 1 ! · exp ϕ λ ( x n 1 ) x n 1 = 0 φ λ ( x n 2 ) x n 1 x n 1 ! · exp ϕ λ ̲ ( x n 1 ) = exp φ λ ( x n 2 ) + ϵ ̲ = exp φ λ ( x n 2 ) · 1 + ϵ ̲ · exp φ λ ( x n 2 ) .
Incorporating
$$\inf_{x\in\mathbb{N}_0} \exp\big\{\phi_\lambda(x)\big\}\cdot\Big(1 + \underline{\epsilon}\cdot\exp\big\{-\varphi_\lambda(x)\big\}\Big) \;=:\; \underline{\delta} \;>\; 1\,,$$
one can show analogously to (A4) that
$$H_\lambda(P_{A,n}\,\|\,P_{H,n}) \;\geq\; \underline{\delta}^{\,\lfloor n/2\rfloor} \;\longrightarrow\; \infty \qquad (n \to \infty)\,.$$
Proof of the Formulas (61), (63) and (64).
In the following, we slightly adapt the above-mentioned proof of formula (59). Let us define
$$\underline{\phi}_\lambda(x) \;:=\; \begin{cases} \phi_\lambda(0), & \text{if } x = 0,\\ 0, & \text{if } x > 0. \end{cases}$$
In all respective subcases one clearly has $\underline{\phi}_\lambda(0) = \phi_\lambda(0) > 0$. With $\underline{\epsilon} := e^{\phi_\lambda(0)} - 1 > 0$ we obtain for all $n \in \mathbb{N}\setminus\{1\}$
$$\sum_{x_{n-1}=0}^{\infty} \frac{\big(\varphi_\lambda(x_{n-2})\big)^{x_{n-1}}}{x_{n-1}!}\cdot\exp\big\{\phi_\lambda(x_{n-1})\big\} \;\geq\; \sum_{x_{n-1}=0}^{\infty} \frac{\big(\varphi_\lambda(x_{n-2})\big)^{x_{n-1}}}{x_{n-1}!}\cdot\exp\big\{\underline{\phi}_\lambda(x_{n-1})\big\} \;=\; \exp\big\{\varphi_\lambda(x_{n-2})\big\} + \underline{\epsilon} \;=\; \exp\big\{\varphi_\lambda(x_{n-2})\big\}\cdot\Big(1 + \underline{\epsilon}\cdot\exp\big\{-\varphi_\lambda(x_{n-2})\big\}\Big)\,.$$
By employing
$$\inf_{x\in\mathbb{N}_0} \exp\big\{\phi_\lambda(x)\big\}\cdot\Big(1 + \underline{\epsilon}\cdot\exp\big\{-\varphi_\lambda(x)\big\}\Big) \;=:\; \underline{\delta} \;>\; 1\,,$$
one can show analogously to (A4) that
$$H_\lambda(P_{A,n}\,\|\,P_{H,n}) \;\geq\; \underline{\delta}^{\,\lfloor n/2\rfloor} \;\longrightarrow\; \infty \qquad (n \to \infty)\,.$$
Notice that this method does not work for the parameter cases $\mathcal{P}_{SP,4a} \cup \mathcal{P}_{SP,4b}$, since there the infimum in (A12) is equal to one. □
Proof of Proposition 13.
In the setup $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) \in \mathcal{P}_{SP,4a}\times(\mathbb{R}\setminus[0,1])$ we require $\beta := \beta_A = \beta_H < 1$. As in the proof of Proposition 9, we stick to the tangent line $\phi_{\lambda,y}^{tan}(\cdot)$ at $y \geq 0$ (cf. (52)), now as a linear lower bound for $\phi_\lambda(\cdot)$, i.e., we use the function
$$\phi_{\lambda,y}^{tan}(x) \;:=\; (p_y - \alpha_\lambda) + (q_y - \beta)\cdot x \;:=\; \big(p_{\lambda,y}^{tan} - \alpha_\lambda\big) + \big(q_{\lambda,y}^{tan} - \beta_\lambda\big)\cdot x \;:=\; \phi_\lambda(y) - y\cdot\phi_\lambda'(y) + \phi_\lambda'(y)\cdot x\,.$$
As already mentioned in Section 3.21, on $\mathcal{P}_{SP,4a}$ the function $\phi_\lambda(\cdot)$ is strictly decreasing and converges to $0$. Thus, for all $y \geq 0$ the slope $\phi_\lambda'(y)$ of the tangent line at $y$ is negative, which implies that $q_y < \beta_\lambda = \beta$. For $\lambda \in \mathbb{R}\setminus[0,1]$ there may indeed hold $q_y < 0$ for some $y \geq 0$. However, there exists a sufficiently large $y_1 > 0$ such that $q_y > 0$ for all $y > y_1$, since $\lim_{y\to\infty}\phi_\lambda'(y) = 0$ and hence $q_y \to \beta > 0$ for $y \to \infty$. Thus, let us suppose that $y > y_1$. Then the sequence $(a_n(q_y))_{n\in\mathbb{N}}$ is strictly negative, strictly decreasing and converges to $x_0(q_y) \in \,]-\beta, q_y - \beta[$ (cf. Properties 1 (P1)). If there is some $y \geq y_1$ such that $h(y) > 0$ with
$$h(y) \;:=\; \lim_{n\to\infty}\frac{1}{n}\log \widetilde{B}_{\lambda,X_0,n}(p_y, q_y) \;=\; p_y\cdot e^{x_0(q_y)} - \alpha_\lambda\,,$$
then one can conclude that $\lim_{n\to\infty} H_\lambda(P_{A,n}\,\|\,P_{H,n}) = \infty$. Let us at first consider the case $\alpha_\lambda \geq 0$. By employing $p_y \to \alpha_\lambda$ for $y \to \infty$, one gets $p_y > 0$ for all $y \geq 0$. Analogously to the proof of Proposition 9, we now look for a lower bound $\underline{x}_0(q_y)$ of the fixed point $x_0(q_y)$. Notice that $x_0(q_y) > -\beta$ implies
$$\underline{Q}_\lambda^{(q_y)}(x) \;:=\; \frac{e^{-\beta}}{2}\, q_y\, x^2 + q_y\, x + q_y - \beta \;\leq\; q_y\cdot e^{x} - \beta \;=\; \xi_\lambda^{(q_y)}(x)\,,$$
since $\underline{Q}_\lambda^{(q_y)}(0) = \xi_\lambda^{(q_y)}(0) < 0$, $(\underline{Q}_\lambda^{(q_y)})'(0) = (\xi_\lambda^{(q_y)})'(0) > 0$ and $0 < (\underline{Q}_\lambda^{(q_y)})''(x) < (\xi_\lambda^{(q_y)})''(x)$ for $x \in \,]-\beta, 0]$. Thus, the negative solution $\underline{x}_0(q_y)$ of the equation $\underline{Q}_\lambda^{(q_y)}(x) = x$ (which definitely exists) satisfies $\underline{x}_0(q_y) \leq x_0(q_y)$. We easily obtain
$$\underline{x}_0(q_y) \;=\; \frac{e^{\beta}}{q_y}\,\Big[(1-q_y) - \sqrt{(1-q_y)^2 - 2\,e^{-\beta} q_y\,(q_y-\beta)}\,\Big] \;=\; \frac{e^{\beta}}{\phi_\lambda'(y)+\beta}\,\Big[\big(1-\phi_\lambda'(y)-\beta\big) - \sqrt{\big(1-\phi_\lambda'(y)-\beta\big)^2 - 2\,e^{-\beta} q_y\,\phi_\lambda'(y)}\,\Big] \;<\; 0\,.$$
Since
$$h(y) \;=\; p_y\cdot e^{x_0(q_y)} - \alpha_\lambda \;\geq\; p_y\cdot e^{\underline{x}_0(q_y)} - \alpha_\lambda \;=:\; \underline{h}(y)\,,$$
it is sufficient to show $\underline{h}(y) > 0$ for some $y > y_1$. We recall from Properties 3 (P15), (P17) and (P19) that
$$\begin{aligned}
\phi_\lambda(y) &= \big(\alpha_A + \beta y\big)^{\lambda}\big(\alpha_H + \beta y\big)^{1-\lambda} - \lambda\big(\alpha_A + \beta y\big) - (1-\lambda)\big(\alpha_H + \beta y\big) \;>\; 0\,,\\
\phi_\lambda'(y) &= \lambda\,\beta\,\Big(\frac{\alpha_A + \beta y}{\alpha_H + \beta y}\Big)^{\lambda-1} + (1-\lambda)\,\beta\,\Big(\frac{\alpha_A + \beta y}{\alpha_H + \beta y}\Big)^{\lambda} - \beta \;<\; 0 \qquad\text{and}\\
\phi_\lambda''(y) &= -\,\Big(\frac{\alpha_A + \beta y}{\alpha_H + \beta y}\Big)^{\lambda}\cdot\frac{\lambda(1-\lambda)\,\beta^2\,(\alpha_A-\alpha_H)^2}{(\alpha_A+\beta y)^2\,(\alpha_H+\beta y)} \;>\; 0\,,
\end{aligned}$$
which immediately implies $\lim_{y\to\infty}\phi_\lambda(y) = \lim_{y\to\infty}\phi_\lambda'(y) = \lim_{y\to\infty}\phi_\lambda''(y) = 0$, and by means of l'Hospital's rule
$$\lim_{y\to\infty} y\,\phi_\lambda(y) \;=\; -\lim_{y\to\infty} y^2\,\phi_\lambda'(y) \;=\; \lim_{y\to\infty}\frac{y^3}{2}\,\phi_\lambda''(y) \;=\; -\,\frac{1}{2}\lim_{y\to\infty}\Big(\frac{\alpha_A+\beta y}{\alpha_H+\beta y}\Big)^{\lambda}\cdot\frac{\lambda(1-\lambda)\,\beta^2\,(\alpha_A-\alpha_H)^2}{(\alpha_A/y+\beta)^2\,(\alpha_H/y+\beta)} \;=\; -\,\frac{\lambda(1-\lambda)\,(\alpha_A-\alpha_H)^2}{2\,\beta}\,.$$
The Formulas (A13), (A15) and (A17) imply the limits $\lim_{y\to\infty} p_y = \alpha_\lambda$, $\lim_{y\to\infty} q_y = \beta$, and $\lim_{y\to\infty}\underline{x}_0(q_y) = 0$ iff $\beta \leq 1$. The latter is due to the fact that for $\beta > 1$ one gets with (A15)
$$\lim_{y\to\infty}\underline{x}_0(q_y) \;=\; \frac{e^{\beta}}{\beta}\,\Big[(1-\beta) - \sqrt{(1-\beta)^2}\,\Big] \;=\; \frac{e^{\beta}}{\beta}\,\big(2 - 2\beta\big) \;\ne\; 0\,.$$
In the following, let us assume $\beta < 1$ (the reason why we exclude the case $\beta = 1$ is explained below). One gets $\lim_{y\to\infty} h(y) \geq \lim_{y\to\infty}\underline{h}(y) = 0$. Since we have to prove that $\underline{h}(y) > 0$ for some $y > y_1$, it is sufficient to show that $\lim_{y\to\infty} y\cdot\underline{h}(y) > 0$. To verify the latter, we first derive with l'Hospital's rule and with (A17), (A18)
lim y y · 1 e x ̲ 0 ( q y ) = lim y y 2 · e x ̲ 0 ( q y ) · y x ̲ 0 ( q y ) = lim y { y 2 · e β · ϕ λ ( y ) ϕ λ ( y ) + β 2 · ( 1 q y ) ( 1 q y ) 2 2 e β q y ( q y β ) + e β q y · y 2 · ϕ λ ( y ) 2 y 2 ϕ λ ( y ) ( 1 q y ) 2 y 2 ϕ λ ( y ) e β q y 2 y 2 ϕ λ ( y ) e β ϕ λ ( y ) 2 · ( 1 q y ) 2 2 e β q y ( q y β ) } = 0 .
Notice that without further examination this limit would not necessarily hold for β = 1 , since then the denominator in (A19) converges to zero. With (A13), (A16), (A18) and (A19) we finally obtain
lim y y · h ̲ ( y ) = lim y y · ϕ λ ( y ) y 2 · ϕ λ ( y ) · e x ̲ 0 ( q y ) y · 1 e x ̲ 0 ( q y ) α λ = λ ( 1 λ ) ( α A α H ) 2 β > 0 .
Let us now consider the case $\alpha_\lambda < 0$. The proof works out almost completely analogously to the case $\alpha_\lambda \geq 0$; we indicate the main differences. Since $p_y \to \alpha_\lambda < 0$ and $q_y \to \beta \in \,]0,1[$ for $y \to \infty$, there is a sufficiently large $y_2 > y_1$ such that $p_y < 0$ and $q_y > 0$. Thus,
$$\overline{Q}_\lambda^{(q_y)}(x) \;:=\; \frac{q_y}{2}\, x^2 + q_y\, x + q_y - \beta \;\geq\; \xi_\lambda^{(q_y)}(x) \;=\; q_y\, e^{x} - \beta \qquad\text{for } x \in \,]-\infty, 0]\,.$$
The corresponding (existing) smaller solution of $\overline{Q}_\lambda^{(q_y)}(x) = x$ is
$$\overline{x}_0(q_y) \;=\; \frac{1}{q_y}\,\Big[(1-q_y) - \sqrt{(1-q_y)^2 - 2\, q_y\,(q_y-\beta)}\,\Big]\,,$$
having the same form as the solution (A15) with $e^{-\beta}$ substituted by $1$. Notice that there clearly holds $x_0(q_y) < \overline{x}_0(q_y) < 0$. However, since $p_y < 0$, we now get $h(y) = p_y\cdot e^{x_0(q_y)} - \alpha_\lambda \geq p_y\cdot e^{\overline{x}_0(q_y)} - \alpha_\lambda =: \underline{h}(y)$, as in (A16). Since all calculations (A17) to (A20) remain valid (with $e^{-\beta}$ substituted by $1$), this proof is finished. □

Appendix A.2. Proofs and Auxiliary Lemmas for Section 5

We start with two lemmas which will be useful for the proof of Theorem 3. They deal with the sequence $(a_n(q_\lambda))_{n\in\mathbb{N}}$ from (36).
Lemma A2.
For an arbitrarily fixed parameter constellation $(\beta_A,\beta_H,\alpha_A,\alpha_H,\lambda) \in \mathcal{P}\times\,]0,1[$, suppose that $q_\lambda > 0$ and $\lim_{\lambda\nearrow 1} q_\lambda = \beta_A$ holds. Then one gets the limit
$$\forall\, n \in \mathbb{N}: \qquad \lim_{\lambda\nearrow 1} a_n(q_\lambda) \;=\; 0\,.$$
Proof. 
This can be easily seen by induction: for $n = 1$ there clearly holds
$$\lim_{\lambda\nearrow 1} a_1(q_\lambda) \;=\; \lim_{\lambda\nearrow 1}\big(q_\lambda - \beta_\lambda\big) \;=\; \beta_A - \beta_A \;=\; 0\,.$$
Assume now that $\lim_{\lambda\nearrow 1} a_k(q_\lambda) = 0$ holds for all $k \in \mathbb{N}$, $k \leq n-1$; then
$$\lim_{\lambda\nearrow 1} a_n(q_\lambda) \;=\; \lim_{\lambda\nearrow 1}\big(q_\lambda\cdot e^{a_{n-1}(q_\lambda)} - \beta_\lambda\big) \;=\; \beta_A\cdot 1 - \beta_A \;=\; 0\,. \qquad\square$$
Lemma A3.
In addition to the assumptions of Lemma A2, suppose that $\lambda \mapsto q_\lambda$ is continuously differentiable on $]0,1[$ and that the limit $l := \lim_{\lambda\nearrow 1} \frac{\partial q_\lambda}{\partial\lambda}$ is finite. Then, for all $n \in \mathbb{N}$ one obtains
$$\lim_{\lambda\nearrow 1} \frac{\partial a_n(q_\lambda)}{\partial\lambda} \;=\; u_n \;:=\; \begin{cases} \dfrac{l + \beta_H - \beta_A}{1-\beta_A}\cdot\big(1 - \beta_A^{\,n}\big), & \text{if } \beta_A \ne 1,\\[1ex] n\cdot\big(l + \beta_H - 1\big), & \text{if } \beta_A = 1, \end{cases}$$
which is the unique solution of the linear recursion
$$u_n \;=\; l + \beta_H - \beta_A + \beta_A\cdot u_{n-1}\,, \qquad u_0 = 0\,.$$
Furthermore, for all $n \in \mathbb{N}$ there holds
$$\sum_{k=1}^{n}\lim_{\lambda\nearrow 1} \frac{\partial a_k(q_\lambda)}{\partial\lambda} \;=\; \sum_{k=1}^{n} u_k \;=\; \begin{cases} \dfrac{l + \beta_H - \beta_A}{1-\beta_A}\cdot\Big(n - \dfrac{\beta_A\,\big(1-\beta_A^{\,n}\big)}{1-\beta_A}\Big), & \text{if } \beta_A \ne 1,\\[1ex] \dfrac{n\,(n+1)}{2}\cdot\big(l + \beta_H - 1\big), & \text{if } \beta_A = 1. \end{cases}$$
Proof. 
Clearly, $u_n$ defined by (A22) is the unique solution of (A23). We prove by induction that $\lim_{\lambda\nearrow 1} \frac{\partial a_n(q_\lambda)}{\partial\lambda} = u_n$ holds. For $n = 1$ one gets
$$\lim_{\lambda\nearrow 1} \frac{\partial a_1(q_\lambda)}{\partial\lambda} \;=\; \lim_{\lambda\nearrow 1} \frac{\partial\,(q_\lambda - \beta_\lambda)}{\partial\lambda} \;=\; l - (\beta_A - \beta_H) \;=\; u_1\,.$$
Suppose now that (A22) holds for all $k \in \mathbb{N}$, $k \leq n-1$. Then, by incorporating (A21) we obtain
$$\lim_{\lambda\nearrow 1} \frac{\partial a_n(q_\lambda)}{\partial\lambda} \;=\; \lim_{\lambda\nearrow 1} \frac{\partial}{\partial\lambda}\Big(q_\lambda\cdot e^{a_{n-1}(q_\lambda)} - \beta_\lambda\Big) \;=\; \lim_{\lambda\nearrow 1}\Big[e^{a_{n-1}(q_\lambda)}\cdot\Big(\frac{\partial q_\lambda}{\partial\lambda} + q_\lambda\,\frac{\partial a_{n-1}(q_\lambda)}{\partial\lambda}\Big) - (\beta_A - \beta_H)\Big] \;=\; l - (\beta_A - \beta_H) + \beta_A\cdot u_{n-1} \;=\; u_n\,.$$
The remaining assertions follow immediately. □
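The closed form (A22) and the partial-sum formula (A24) can also be verified directly against the recursion (A23); a small sketch with arbitrary illustrative values of $l$, $\beta_A$, $\beta_H$ (not taken from the paper):

```python
# Hedged check of the closed forms (A22) and (A24) for the linear recursion
# u_n = (l + beta_H - beta_A) + beta_A * u_{n-1}, u_0 = 0 (illustrative values).
def u_closed(n, l, bA, bH):
    if bA != 1:
        return (l + bH - bA) * (1 - bA**n) / (1 - bA)
    return n * (l + bH - 1)

def check(l, bA, bH, N=12):
    u, partial, ok = 0.0, 0.0, True
    for n in range(1, N + 1):
        u = (l + bH - bA) + bA * u          # the recursion itself
        partial += u
        ok &= abs(u - u_closed(n, l, bA, bH)) < 1e-10
    # partial-sum formula, cf. (A24)
    if bA != 1:
        s = (l + bH - bA) / (1 - bA) * (N - bA * (1 - bA**N) / (1 - bA))
    else:
        s = N * (N + 1) / 2 * (l + bH - 1)
    return ok and abs(partial - s) < 1e-9

print(check(0.3, 0.8, 0.9) and check(0.3, 1.0, 0.9))  # True
```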
We are now ready to give the
Proof of Theorem 3.
(a) Recall that for the setup $(\beta_A,\beta_H,\alpha_A,\alpha_H) \in (\mathcal{P}_{NI}\cup\mathcal{P}_{SP,1})$ we chose the intercept as $p_\lambda := p_\lambda^E := \alpha_A^{\lambda}\alpha_H^{1-\lambda}$ and the slope as $q_\lambda := q_\lambda^E := \beta_A^{\lambda}\beta_H^{1-\lambda}$, which in (39) lead to the exact value $V_{\lambda,X_0,n}$ of the Hellinger integral. Because of $\frac{p_\lambda}{q_\lambda}\,\beta_\lambda - \alpha_\lambda = 0$ as well as $\lim_{\lambda\nearrow 1} q_\lambda = \beta_A$, we obtain by using (38) and Lemma A2 for all $X_0 \in \mathbb{N}$ and for all $n \in \mathbb{N}$
lim λ 1 V λ , X 0 , n : = lim λ 1 exp a n ( q λ ) · X 0 + k = 1 n b k ( p λ , q λ ) = lim λ 1 exp a n ( q λ ) · X 0 + α A β A k = 1 n a k ( q λ ) = 1 ,
which leads by (68) to
I ( P A , n P H , n ) = lim λ 1 1 H λ ( P A , n P H , n ) λ · ( 1 λ ) = lim λ 1 1 V λ , X 0 , n λ · ( 1 λ ) = lim λ 1 V λ , X 0 , n 1 2 λ · λ a n ( q λ ) · X 0 + p λ q λ k = 1 n a k ( q λ ) = lim λ 1 a n ( q λ ) λ · X 0 + λ p λ q λ · k = 1 n a k ( q λ ) + p λ q λ · k = 1 n a k ( q λ ) λ .
For further analysis, we use the obvious derivatives
$$\frac{\partial p_\lambda}{\partial\lambda} \;=\; p_\lambda\,\log\frac{\alpha_A}{\alpha_H}\,, \qquad \frac{\partial}{\partial\lambda}\Big(\frac{p_\lambda}{q_\lambda}\Big) \;=\; \frac{p_\lambda}{q_\lambda}\,\log\frac{\alpha_A\,\beta_H}{\alpha_H\,\beta_A}\,, \qquad \frac{\partial q_\lambda}{\partial\lambda} \;=\; q_\lambda\,\log\frac{\beta_A}{\beta_H}\,,$$
where the subcase $(\beta_A,\beta_H,\alpha_A,\alpha_H) \in \mathcal{P}_{NI}$ (with $p_\lambda \equiv 0$) is consistently covered. From (A25) and Lemma A3 we deduce
lim λ 1 a n ( q λ ) λ · X 0 = β A log β A β H ( β A β H ) · 1 β A n 1 β A · X 0 , if β A 1 , n · β A log β A β H ( β A β H ) · X 0 , if β A = 1 ,
and by means of (A21)
$$\forall\, n \in \mathbb{N}: \qquad \lim_{\lambda\nearrow 1} \frac{\partial}{\partial\lambda}\Big(\frac{p_\lambda}{q_\lambda}\Big)\cdot\sum_{k=1}^{n} a_k(q_\lambda) \;=\; 0\,.$$
For the last expression in (A24) we again apply Lemma A3 to end up with
lim λ 1 p λ q λ · k = 1 n λ a k ( q λ ) = α A · β A log β A β H ( β A β H ) β A ( 1 β A ) · n β A 1 β A 1 β A n , if β A 1 , n · ( n + 1 ) α A 2 β A · β A log β A β H ( β A β H ) , if β A = 1 ,
which finishes the proof of part (a). To show part (b), for the corresponding setup $(\beta_A,\beta_H,\alpha_A,\alpha_H) \in \mathcal{P}_{SP}\setminus\mathcal{P}_{SP,1}$ let us first choose, according to (45) in Section 3.4, the intercept as $p_\lambda := p_\lambda^L := \alpha_A^{\lambda}\alpha_H^{1-\lambda}$ and the slope as $q_\lambda := q_\lambda^L := \beta_A^{\lambda}\beta_H^{1-\lambda}$, which in part (b) of Proposition 6 lead to the lower bounds $B_{\lambda,X_0,n}^{L}$ of the Hellinger integral. This is formally the same choice as in part (a), satisfying $\lim_{\lambda\nearrow 1} p_\lambda = \alpha_A$ and $\lim_{\lambda\nearrow 1} q_\lambda = \beta_A$, but in contrast to (a) we now have $\frac{p_\lambda}{q_\lambda}\,\beta_\lambda - \alpha_\lambda \ne 0$ but nevertheless
$$\lim_{\lambda\nearrow 1}\Big(\frac{p_\lambda}{q_\lambda}\,\beta_\lambda - \alpha_\lambda\Big) \;=\; 0\,.$$
From this, (38), part (b) of Proposition 6 and Lemma A2 we obtain
$$\lim_{\lambda\nearrow 1} B_{\lambda,X_0,n}^{L} \;=\; \lim_{\lambda\nearrow 1} \exp\Big\{a_n(q_\lambda)\cdot X_0 + \frac{p_\lambda}{q_\lambda}\sum_{k=1}^{n} a_k(q_\lambda) + n\cdot\Big(\frac{p_\lambda}{q_\lambda}\,\beta_\lambda - \alpha_\lambda\Big)\Big\} \;=\; 1$$
and hence
I ( P A , n P H , n ) lim λ 1 1 B λ , X 0 , n L λ · ( 1 λ ) = lim λ 1 B λ , X 0 , n L 1 2 λ · λ a n ( q λ ) X 0 + p λ q λ k = 1 n a k ( q λ ) + n p λ q λ β λ α λ = lim λ 1 a n ( q λ ) λ X 0 + λ p λ q λ k = 1 n a k ( q λ ) + p λ q λ k = 1 n a k ( q λ ) λ + n λ p λ q λ β λ α λ .
In the current setup, the first three expressions in (A28) can be evaluated in exactly the same way as in (A25) to (A26), and for the last expression one has the limit
λ p λ q λ β λ α λ = p λ q λ log α A β H α H β A · β λ + p λ q λ · β A β H α A α H λ 1 α A log α A β H α H β A β H β A + α H ,
which finishes the proof of part (b). □
Proof of Theorem 4.
Let us fix $(\beta_A,\beta_H,\alpha_A,\alpha_H) \in \mathcal{P}_{SP}\setminus\mathcal{P}_{SP,1}$, $X_0 \in \mathbb{N}$, $n \in \mathbb{N}$ and $y \in [0,\infty[$. The lower bound $E_{y,X_0,n}^{L,tan}$ of the Kullback-Leibler information divergence (relative entropy) is derived by using $\phi_\lambda^{U} := \phi_{\lambda,y}^{tan}$ (cf. (52)), the tangent line of $\phi_\lambda$ at $y$, as a linear upper bound for $\phi_\lambda$ ($\lambda \in \,]0,1[$). More precisely, one gets $\phi_\lambda^{U}(x) := (p_\lambda^{U} - \alpha_\lambda) + (q_\lambda^{U} - \beta_\lambda)\,x$ ($x \in [0,\infty[$) with $p_\lambda := p_\lambda(y) := \phi_\lambda(y) - y\,\phi_\lambda'(y) + \alpha_\lambda$ and $q_\lambda := q_\lambda(y) := \phi_\lambda'(y) + \beta_\lambda$, implying $q_\lambda > 0$ because of Properties 3 (P17). Analogously to (A27) and (A28), we obtain from (38) and (40) the convergence $\lim_{\lambda\nearrow 1} B_{\lambda,X_0,n}^{U} = 1$ and thus
I ( P A , n P H , n ) lim λ 1 a n ( q λ ) λ X 0 + λ p λ q λ k = 1 n a k ( q λ ) + p λ q λ k = 1 n a k ( q λ ) λ + n λ p λ q λ β λ α λ .
As before, we compute the involved derivatives. From (30) to (32) as well as (P17) we get
p λ λ = f A ( y ) f H ( y ) λ f H ( y ) log f A ( y ) f H ( y ) β A y f A ( y ) f H ( y ) λ 1 λ β A y f A ( y ) f H ( y ) λ 1 log f A ( y ) f H ( y ) + β H y f A ( y ) f H ( y ) λ ( 1 λ ) β H y f A ( y ) f H ( y ) λ log f A ( y ) f H ( y ) λ 1 α A log f A ( y ) f H ( y ) + y · ( α A β H α H β A ) f H ( y ) ,
and
q λ λ = β A f A ( y ) f H ( y ) λ 1 + λ β A f A ( y ) f H ( y ) λ 1 log f A ( y ) f H ( y ) β H f A ( y ) f H ( y ) λ + ( 1 λ ) β H f A ( y ) f H ( y ) λ log f A ( y ) f H ( y ) λ 1 β A 1 + log f A ( y ) f H ( y ) β H f A ( y ) f H ( y ) = : l .
Combining these two limits we get
λ p λ q λ β λ α λ = q λ p λ λ p λ q λ λ ( q λ ) 2 · β λ + p λ q λ · β A β H α A α H λ 1 y · ( α A β H α H β A ) f H ( y ) α A 1 β H f A ( y ) β A f H ( y ) + α H α A β H β A . = α H α A β H β A 1 f A ( y ) f H ( y ) .
The above calculation also implies that $\lim_{\lambda\nearrow 1}\frac{\partial}{\partial\lambda}\big(\frac{p_\lambda}{q_\lambda}\big)$ is finite, and thus $\lim_{\lambda\nearrow 1}\frac{\partial}{\partial\lambda}\big(\frac{p_\lambda}{q_\lambda}\big)\cdot\sum_{k=1}^{n} a_k(q_\lambda) = 0$ by means of Lemma A2. The proof of $I(P_{A,n}\,\|\,P_{H,n}) \geq E_{y,X_0,n}^{L,tan}$ is finished by using Lemma A3 with $l$ defined in (A31) and by plugging the limits (A30) to (A32) into (A29).
To derive the lower bound $E_{k,X_0,n}^{L,sec}$ (cf. (73)) for fixed $k \in \mathbb{N}_0$, we use as a linear upper bound $\phi_\lambda^{U}$ for $\phi_\lambda(\cdot)$ ($\lambda \in \,]0,1[$) the secant line $\phi_{\lambda,k}^{sec}$ (cf. (53)) of $\phi_\lambda$ across its arguments $k$ and $k+1$, corresponding to the choices $p_\lambda := p_{\lambda,k}^{sec} := (k+1)\cdot\phi_\lambda(k) - k\cdot\phi_\lambda(k+1) + \alpha_\lambda$ and $q_\lambda := q_{\lambda,k}^{sec} := \phi_\lambda(k+1) - \phi_\lambda(k) + \beta_\lambda$, implying $q_\lambda > 0$ because of Properties 3 (P18). As a side remark, notice that this $\phi_\lambda^{U}(x)$ may become positive for some $x \in [0,\infty[$ (which is not always consistent with Goal (G1) for fixed $\lambda$, but leads to a tractable limit bound as $\lambda$ tends to $1$). Analogously to (A27) and (A28) we again get $\lim_{\lambda\nearrow 1} B_{\lambda,X_0,n}^{U} = 1$, which leads to the lower bound given in (A29) with appropriately plugged-in quantities. As in the above proof of the lower bound $E_{y,X_0,n}^{L,tan}$, the inequality $I(P_{A,n}\,\|\,P_{H,n}) \geq E_{k,X_0,n}^{L,sec}$ follows straightforwardly from Lemma A2, Lemma A3 and the three limits
p λ λ = f A ( k ) f H ( k ) λ f H ( k ) · ( k + 1 ) log f A ( k ) f H ( k ) f A ( k + 1 ) f H ( k + 1 ) λ f H ( k + 1 ) · k log f A ( k + 1 ) f H ( k + 1 ) λ 1 f A ( k ) ( k + 1 ) log f A ( k ) f H ( k ) f A ( k + 1 ) k log f A ( k + 1 ) f H ( k + 1 ) , q λ λ = f A ( k + 1 ) f H ( k + 1 ) λ f H ( k + 1 ) log f A ( k + 1 ) f H ( k + 1 ) f A ( k ) f H ( k ) λ f H ( k ) log f A ( k ) f H ( k ) λ 1 f A ( k + 1 ) log f A ( k + 1 ) f H ( k + 1 ) f A ( k ) log f A ( k ) f H ( k ) = : l , and λ p λ q λ β λ α λ = q λ p λ λ p λ q λ λ ( q λ ) 2 · β λ + p λ q λ · β A β H α A α H λ 1 f A ( k ) log f A ( k ) f H ( k ) k + 1 + α A β A f A ( k + 1 ) log f A ( k + 1 ) f H ( k + 1 ) k + α A β A α A β H β A + α H .
To construct the third lower bound $E_{X_0,n}^{L,hor}$ (cf. (74)), we start by using the horizontal line $\phi_\lambda^{hor}(\cdot)$ (cf. (54)) as an upper bound of $\phi_\lambda$. For each fixed $\lambda \in \,]0,1[$, it is defined by the intercept $\sup_{x\in\mathbb{N}_0}\phi_\lambda(x)$. On $\mathcal{P}_{SP,3a}\cup\mathcal{P}_{SP,3b}$, this supremum is achieved at the finite integer point $z_\lambda^* := \arg\max_{x\in\mathbb{N}_0}\phi_\lambda(x)$ (since $\lim_{x\to\infty}\phi_\lambda(x) = -\infty$), and there holds $\phi_\lambda(z_\lambda^*) < 0$, which leads with the parameters $q_\lambda = \beta_\lambda$, $p_\lambda = \phi_\lambda(z_\lambda^*) + \alpha_\lambda$ to the Hellinger integral upper bound $B_{\lambda,X_0,n}^{U} = \exp\{\phi_\lambda(z_\lambda^*)\cdot n\} < 1$ (cf. Remark 1 (b)). We strive to compute the limit $\lim_{\lambda\nearrow 1}\frac{1 - B_{\lambda,X_0,n}^{U}}{\lambda(1-\lambda)}$, which is not straightforward, since in general it seems to be intractable to express $z_\lambda^*$ explicitly in terms of $\lambda$. To circumvent this problem, we notice that it is sufficient to determine $z_\lambda^*$ in a small $\epsilon$-environment $]1-\epsilon, 1[$ of $\lambda = 1$. To accomplish this, we incorporate $\lim_{\lambda\nearrow 1}\phi_\lambda(x) = 0$ for all $x \in [0,\infty[$ and calculate by using l'Hospital's rule
$$\lim_{\lambda\nearrow 1}\frac{\phi_\lambda(x)}{1-\lambda} \;=\; \big(\alpha_A + \beta_A x\big)\cdot\Big(1 - \log\frac{\alpha_A + \beta_A x}{\alpha_H + \beta_H x}\Big) - \big(\alpha_H + \beta_H x\big)\,.$$
Accordingly, let us define $z^* := \arg\max_{x\in\mathbb{N}_0}\big[(\alpha_A + \beta_A x)\big(1 - \log\frac{\alpha_A + \beta_A x}{\alpha_H + \beta_H x}\big) - (\alpha_H + \beta_H x)\big]$ (note that the maximum exists, since this expression tends to $-\infty$ as $x \to \infty$). Due to the continuity of the function $(\lambda, x) \mapsto \frac{\phi_\lambda(x)}{1-\lambda}$, there exists an $\epsilon > 0$ such that $z_\lambda^* = z^*$ for all $\lambda \in \,]1-\epsilon, 1[$. Applying these considerations, we get with l'Hospital's rule
$$I(P_{A,n}\,\|\,P_{H,n}) \;\geq\; \lim_{\lambda\nearrow 1}\frac{1 - \exp\{\phi_\lambda(z^*)\cdot n\}}{\lambda(1-\lambda)} \;=\; \Big[f_A(z^*)\cdot\Big(\log\frac{f_A(z^*)}{f_H(z^*)} - 1\Big) + f_H(z^*)\Big]\cdot n \;\geq\; 0\,.$$
In fact, for the current parameter constellation $\mathcal{P}_{SP,3a}\cup\mathcal{P}_{SP,3b}$ we have $\phi_\lambda(x) < 0$ for all $\lambda \in \,]0,1[$ and all $x \in \mathbb{N}_0$, which implies $f_A(z^*) \ne f_H(z^*)$ by Lemma A1; thus, we even get $E_{X_0,n}^{L,hor} > 0$ for all $n \in \mathbb{N}$, by virtue of the strict inequality $\log\frac{f_H(z^*)}{f_A(z^*)} < \frac{f_H(z^*)}{f_A(z^*)} - 1$.
For the case $\mathcal{P}_{SP,2}$, the above-mentioned procedure leads to $z_\lambda^* = 0 = z^*$ ($\lambda \in \,]0,1[$), which implies $\phi_\lambda(z_\lambda^*) = 0$, $B_{\lambda,X_0,n}^{U} \equiv 1$, and thus the trivial lower bound $E_{X_0,n}^{L,hor} = \lim_{\lambda\nearrow 1}\frac{1 - B_{\lambda,X_0,n}^{U}}{\lambda(1-\lambda)} = 0$ follows for all $n \in \mathbb{N}$. In contrast, for the case $\mathcal{P}_{SP,3c}$ one gets $z_\lambda^* = \frac{\alpha_A - \alpha_H}{\beta_H - \beta_A} = z^*$ ($\lambda \in \,]0,1[$), which nevertheless also implies $\phi_\lambda(z_\lambda^*) = 0$ and hence $E_{X_0,n}^{L,hor} \equiv 0$. On $\mathcal{P}_{SP,4}$, we have $\sup_{x\in\mathbb{N}_0}\phi_\lambda(x) = \lim_{x\to\infty}\phi_\lambda(x) = 0$ and hence we set $E_{X_0,n}^{L,hor} \equiv 0$.
To show the strict positivity $E_{X_0,n}^{L} > 0$ in the parameter case $\mathcal{P}_{SP,2}$, we inspect the bound $E_{0,X_0,n}^{L,sec}$. With $\alpha := \alpha_{\bullet} := \alpha_A = \alpha_H$ (the bullet will be omitted in this proof) and the auxiliary variable $x := \frac{\beta_H}{\beta_A} > 0$, the definition (73), respectively its special case (76), rewrites for all $n \in \mathbb{N}$ as
E 0 , X 0 , n L , s e c : = E 0 , X 0 , n L , s e c ( x ) : = ( α + β A ) · log α + β A x α + β A + β A ( x 1 ) · 1 ( β A ) n 1 β A · X 0 α 1 β A + [ α β A ( 1 β A ) ( α + β A ) · log α + β A x α + β A + β A ( x 1 ) + α β A α + β A · log α + β A x α + β A α ( x 1 ) ] · n , if β A 1 , ( α + 1 ) · log α + x α + 1 + x 1 · α 2 · n 2 + X 0 + α 2 · n + ( α + 1 ) · log α + x α + 1 x + 1 · α · n , if β A = 1 .
To prove that $E_{0,X_0,n}^{L,sec} > 0$ for all $X_0 \in \mathbb{N}$ and all $n \in \mathbb{N}$, it suffices to show that $E_{0,X_0,n}^{L,sec}(1) = \frac{\partial}{\partial x}E_{0,X_0,n}^{L,sec}(1) = 0$ and $\frac{\partial^2}{\partial x^2}E_{0,X_0,n}^{L,sec}(x) > 0$ for all $x \in \,]0,\infty[\,\setminus\{1\}$. The assertion $E_{0,X_0,n}^{L,sec}(1) = 0$ is trivial from (A34). Moreover, we obtain
x E 0 , X 0 , n L , s e c ( x ) = β A · 1 α + β A α + β A x · 1 ( β A ) n 1 β A · X 0 α 1 β A + α · 1 α + β A α + β A x · β A 1 β A · n , if β A 1 , 1 α + 1 α + x · α 2 · n 2 + X 0 α 2 · n , if β A = 1 ,
which immediately yields x E 0 , X 0 , n L , s e c ( 1 ) = 0 . For the second derivative we get
2 x 2 E 0 , X 0 , n L , s e c ( x ) = ( α + β A ) · β A 2 ( α + β A x ) 2 · 1 ( β A ) n 1 β A · X 0 α 1 β A + α α + β A ( α + β A x ) 2 · β A 2 1 β A · n > 0 , if β A 1 , α + 1 ( α + x ) 2 · α 2 · n 2 + X 0 α 2 · n > 0 , if β A = 1 ,
where the strict positivity of $E_{0,X_0,n}^{L,sec}$ in the case $\beta_A \ne 1$ follows immediately by replacing $X_0$ with $0$ and by using the obvious relation $\frac{1}{1-\beta_A}\cdot\big(n - \frac{1-\beta_A^{\,n}}{1-\beta_A}\big) = \frac{1}{1-\beta_A}\sum_{k=0}^{n-1}\big(1-\beta_A^{\,k}\big) > 0$. The strict positivity in the case $\beta_A = 1$ is trivial by inspection.
For the constellation $\mathcal{P}_{SP,4}$ with parameters $\beta := \beta_{\bullet} := \beta_A = \beta_H$ and $\alpha_A \ne \alpha_H$, the strict positivity $E_{X_0,n}^{L} > 0$ follows by showing that $E_{y,X_0,n}^{L,tan}$ converges from above to zero as $y$ tends to infinity. This is done by proving $\lim_{y\to\infty} y\cdot E_{y,X_0,n}^{L,tan} \in \,]0,\infty[$. To see this, let us first observe that by l'Hospital's rule we get
$$\lim_{y\to\infty} y\cdot\log\frac{\alpha_A + \beta y}{\alpha_H + \beta y} \;=\; \frac{\alpha_A - \alpha_H}{\beta} \qquad\text{as well as}\qquad \lim_{y\to\infty} y\cdot\Big(1 - \frac{\alpha_A + \beta y}{\alpha_H + \beta y}\Big) \;=\; -\,\frac{\alpha_A - \alpha_H}{\beta}\,.$$
From this and (72), we obtain $\lim_{y\to\infty} y\cdot E_{y,X_0,n}^{L,tan} = \frac{(\alpha_A - \alpha_H)^2}{\beta}\cdot n > 0$ in both cases $\beta \ne 1$ and $\beta = 1$.
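The two l'Hospital limits above share the magnitude $(\alpha_A - \alpha_H)/\beta$ up to sign; a quick numerical check, where the parameter values are arbitrary illustrative choices (not from the paper):

```python
import math

# Illustrative check (sample alpha_A, alpha_H, beta):
#   y * log((aA + b*y)/(aH + b*y))   ->   (aA - aH)/b
#   y * (1 - (aA + b*y)/(aH + b*y))  ->  -(aA - aH)/b
aA, aH, b = 2.0, 3.5, 0.8
target = (aA - aH) / b

y = 1e8
r = (aA + b * y) / (aH + b * y)
l1 = y * math.log(r)
l2 = y * (1 - r)
print(abs(l1 - target) < 1e-4 and abs(l2 + target) < 1e-4)  # True
```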
Finally, for the parameter case $\mathcal{P}_{SP,3c}$ we consider the bound $E_{y^*,X_0,n}^{L,tan}$ with $y^* = \frac{\alpha_A - \alpha_H}{\beta_H - \beta_A}$. Since $\alpha_A + \beta_A y^* = \alpha_H + \beta_H y^*$, it is easy to see that $E_{y^*,X_0,n}^{L,tan} = 0$ for all $n \in \mathbb{N}$. However, the condition $\frac{\partial}{\partial y}E_{y,X_0,n}^{L,tan}\big|_{y=y^*} \ne 0$ implies that $\sup_{y\geq 0} E_{y,X_0,n}^{L,tan} > 0$. The explicit form (75) of this condition follows from
y E y , X 0 , n L , t a n ( y ) = ( α A β H α H β A ) 2 f A ( y ) f H ( y ) 2 · 1 β A n 1 β A · X 0 α A 1 β A + α A β H α H β A f H ( y ) 2 · α A β A ( 1 β A ) f A ( y ) α A β H α H β A β A · n , if β A 1 , ( α A β H α H ) 2 f A ( y ) f H ( y ) 2 · α A 2 · n 2 + X 0 + α A 2 · n ( α A β H α H ) 2 f H ( y ) 2 · n , if β A = 1 ,
for $y \geq 0$, by using the particular choice $y = y^*$ together with $f_A(y^*) = f_H(y^*) = \frac{\alpha_A\beta_H - \alpha_H\beta_A}{\beta_H - \beta_A}$. □

Appendix A.3. Proofs and Auxiliary Lemmas for Section 6

Proof of Lemma 2.
A closed-form representation of the sequence $(\widetilde{a}_n)_{n\in\mathbb{N}_0}$ defined in (83) to (85) is given by the formula
$$\widetilde{a}_n \;=\; \sum_{k=0}^{n-1}\big(c + \rho_k\big)\, d^{\,n-1-k}\,.$$
This can be seen by induction: from (83) we obtain with $\widetilde{a}_0 = 0$ for the first element $\widetilde{a}_1 = c + \rho_0 = \sum_{k=0}^{0}(c + \rho_k)\,d^{-k}$. Supposing that (A36) holds for the $n$-th element, the induction step is
$$\widetilde{a}_{n+1} \;=\; c + d\cdot\widetilde{a}_n + \rho_n \;=\; c + d\cdot\sum_{k=0}^{n-1}\big(c + \rho_k\big)\,d^{\,n-1-k} + \rho_n \;=\; \sum_{k=0}^{n}\big(c + \rho_k\big)\,d^{\,n-k}\,.$$
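The closed-form representation (A36) can also be confirmed numerically against the recursion $\widetilde{a}_{n+1} = c + d\,\widetilde{a}_n + \rho_n$, $\widetilde{a}_0 = 0$; all constants below are arbitrary illustrative choices (not taken from the paper):

```python
# Hedged check of the closed-form representation (A36): for the recursion
# a_{n+1} = c + d*a_n + rho_n with a_0 = 0 one has
#   a_n = sum_{k=0}^{n-1} (c + rho_k) * d^{n-1-k}.   (illustrative constants)
c, d = 0.4, 0.7
K1, K2, kappa, nu = 0.3, 0.2, 0.5, 0.1   # rho_n = K1*kappa^n + K2*nu^n
rho = lambda n: K1 * kappa**n + K2 * nu**n

a, ok = 0.0, True
for n in range(1, 15):
    a = c + d * a + rho(n - 1)           # recursion step: a now equals a_n
    closed = sum((c + rho(k)) * d**(n - 1 - k) for k in range(n))
    ok &= abs(a - closed) < 1e-12
print(ok)  # True
```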
In order to obtain the explicit representation of $\widetilde{a}_n$, we first consider the case $0 \leq \nu < \varkappa < d$ and $\rho_n = K_1\cdot\varkappa^{\,n} + K_2\cdot\nu^{\,n}$, which leads to
$$\widetilde{a}_n \;=\; d^{\,n-1}\sum_{k=0}^{n-1}\Big[c\cdot d^{-k} + K_1\cdot\Big(\frac{\varkappa}{d}\Big)^{k} + K_2\cdot\Big(\frac{\nu}{d}\Big)^{k}\Big] \;=\; d^{\,n-1}\cdot\Big[c\cdot\frac{1 - d^{-n}}{1 - d^{-1}} + K_1\cdot\frac{1 - (\varkappa/d)^{n}}{1 - \varkappa/d} + K_2\cdot\frac{1 - (\nu/d)^{n}}{1 - \nu/d}\Big] \;=\; \frac{c}{1-d}\,\big(1 - d^{\,n}\big) + K_1\cdot\frac{d^{\,n} - \varkappa^{\,n}}{d - \varkappa} + K_2\cdot\frac{d^{\,n} - \nu^{\,n}}{d - \nu}\,.$$
Hence, for the corresponding sum we get
$$\begin{aligned}\sum_{k=1}^{n}\widetilde{a}_k &= \sum_{k=1}^{n}\Big[\frac{c}{1-d} + \Big(\frac{K_1}{d-\varkappa} + \frac{K_2}{d-\nu} - \frac{c}{1-d}\Big)\cdot d^{\,k} - \frac{K_1}{d-\varkappa}\cdot\varkappa^{\,k} - \frac{K_2}{d-\nu}\cdot\nu^{\,k}\Big]\\ &= \frac{c}{1-d}\cdot n + \Big(\frac{K_1}{d-\varkappa} + \frac{K_2}{d-\nu} - \frac{c}{1-d}\Big)\cdot\frac{d\,(1-d^{\,n})}{1-d} - \frac{K_1\,\varkappa\,(1-\varkappa^{\,n})}{(d-\varkappa)(1-\varkappa)} - \frac{K_2\,\nu\,(1-\nu^{\,n})}{(d-\nu)(1-\nu)}\,.\end{aligned}$$
Consider now the case $0 \leq \nu < \varkappa = d$. Then some expressions in (A37) and (A38) have a zero denominator. In this case, the evaluation of (A36) becomes
$$\widetilde{a}_n \;=\; d^{\,n-1}\sum_{k=0}^{n-1}\Big[c\cdot d^{-k} + K_1 + K_2\cdot\Big(\frac{\nu}{d}\Big)^{k}\Big] \;=\; d^{\,n-1}\cdot\Big[c\cdot\frac{1 - d^{-n}}{1 - d^{-1}} + K_1\cdot n + K_2\cdot\frac{1 - (\nu/d)^{n}}{1 - \nu/d}\Big] \;=\; \frac{c}{1-d}\,\big(1 - d^{\,n}\big) + K_1\cdot n\cdot d^{\,n-1} + K_2\cdot\frac{d^{\,n} - \nu^{\,n}}{d - \nu}\,.$$
Before we calculate the corresponding sum $\sum_{k=1}^{n}\widetilde{a}_k$, we notice that
$$\sum_{k=1}^{n} k\cdot d^{\,k-1} \;=\; \sum_{k=1}^{n}\frac{\partial}{\partial d}\,d^{\,k} \;=\; \frac{\partial}{\partial d}\sum_{k=1}^{n} d^{\,k} \;=\; \frac{\partial}{\partial d}\,\frac{d\,(1-d^{\,n})}{1-d} \;=\; \frac{1 - n\,d^{\,n}\,(1-d) - d^{\,n}}{(1-d)^2}\,.$$
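This derivative-of-geometric-series identity is easy to confirm numerically (the value of $d$ is an arbitrary illustrative choice):

```python
# Hedged check of the arithmetic identity used above:
#   sum_{k=1}^{n} k * d^{k-1} = (1 - n*d^n*(1 - d) - d^n) / (1 - d)^2,  d != 1.
d = 0.7
ok = all(
    abs(sum(k * d**(k - 1) for k in range(1, n + 1))
        - (1 - n * d**n * (1 - d) - d**n) / (1 - d)**2) < 1e-12
    for n in range(1, 20)
)
print(ok)  # True
```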
Using this fact, we obtain
$$\begin{aligned}\sum_{k=1}^{n}\widetilde{a}_k &= \sum_{k=1}^{n}\Big[\frac{c}{1-d}\,\big(1-d^{\,k}\big) + K_1\cdot k\cdot d^{\,k-1} + K_2\cdot\frac{d^{\,k} - \nu^{\,k}}{d - \nu}\Big]\\ &= \frac{c}{1-d}\cdot n + \sum_{k=1}^{n}\Big(\frac{K_2}{d-\nu} - \frac{c}{1-d}\Big)\,d^{\,k} + K_1\sum_{k=1}^{n} k\, d^{\,k-1} - \frac{K_2}{d-\nu}\sum_{k=1}^{n}\nu^{\,k}\\ &= \Big(\frac{K_2}{d-\nu} - \frac{c}{1-d}\Big)\cdot\frac{d\,(1-d^{\,n})}{1-d} + K_1\cdot\frac{1 - n\,d^{\,n}(1-d) - d^{\,n}}{(1-d)^2} - \frac{K_2\,\nu\,(1-\nu^{\,n})}{(d-\nu)(1-\nu)} + \frac{c}{1-d}\cdot n\\ &= \Big(\frac{K_1}{d\,(1-d)} + \frac{K_2}{d-\nu} - \frac{c}{1-d}\Big)\cdot\frac{d\,(1-d^{\,n})}{1-d} - \frac{K_2\,\nu\,(1-\nu^{\,n})}{(d-\nu)(1-\nu)} + \frac{c - K_1\cdot d^{\,n}}{1-d}\cdot n\,. \qquad\square\end{aligned}$$
Proof of Lemma 3.
(a) In this case we have $0 < q < \beta_\lambda$. To prove part (i), we consider the function $\xi_\lambda^{(q)}(\cdot)$ on $[x_0(q), 0]$, the range of the sequence $(a_n(q))_{n\in\mathbb{N}}$ (recall Properties 1 (P1)). For tackling the left-hand inequality in (i), we compare $\xi_\lambda^{(q)}(x) = q\cdot e^{x} - \beta_\lambda$ with the quadratic function
$$\underline{\Upsilon}_\lambda^{(q)}(x) \;:=\; \frac{q}{2}\,e^{x_0(q)}\cdot x^2 + q\,e^{x_0(q)}\,\big(1 - x_0(q)\big)\cdot x + x_0(q)\cdot\Big(1 - q\,e^{x_0(q)} + \frac{q}{2}\,e^{x_0(q)}\,x_0(q)\Big)\,.$$
Clearly, one has the relations $\underline{\Upsilon}_\lambda^{(q)}(x_0(q)) = x_0(q) = \xi_\lambda^{(q)}(x_0(q))$, $(\underline{\Upsilon}_\lambda^{(q)})'(x_0(q)) = q\cdot e^{x_0(q)} = (\xi_\lambda^{(q)})'(x_0(q))$, and $\underline{\Upsilon}_\lambda^{(q)}(x) < \xi_\lambda^{(q)}(x)$ for all $x \in \,]x_0(q), 0]$. Hence, $\underline{\Upsilon}_\lambda^{(q)}(\cdot)$ is on $]x_0(q), 0]$ a strict lower functional bound of $\xi_\lambda^{(q)}(\cdot)$. We are now ready to prove the left-hand inequality in (i) by induction. For $n = 1$, we easily see that $\underline{a}_1(q) < a_1(q)$ iff $x_0(q)\,\big(1 - q\,e^{x_0(q)} + \frac{q}{2}\,e^{x_0(q)}\,x_0(q)\big) < q - \beta_\lambda$ iff $\underline{\Upsilon}_\lambda^{(q)}(0) < \xi_\lambda^{(q)}(0)$, and the latter is obviously true. Let us assume that $\underline{a}_n(q) \leq a_n(q)$ holds. From this, (93), (78) and (80) we obtain
0 < ρ ̲ n ( q ) = q 2 e x 0 ( q ) x 0 ( q ) · q · e x 0 ( q ) n 2 = q 2 e x 0 ( q ) a n ( q ) , T x 0 ( q ) 2 < q 2 e x 0 ( q ) a n ( q ) x 0 ( q ) 2 = Υ ̲ λ ( q ) a n ( q ) d ( q ) , T · a n ( q ) x 0 ( q ) · 1 d ( q ) , T < ξ λ ( q ) a n ( q ) d ( q ) , T · a n ( q ) x 0 ( q ) · 1 d ( q ) , T < a n + 1 ( q ) d ( q ) , T · a ̲ n ( q ) x 0 ( q ) · 1 d ( q ) , T = a n + 1 ( q ) ξ λ ( q ) , T ( a ̲ n ( q ) ) .
Thus, there holds $\underline{a}_{n+1}(q) < a_{n+1}(q)$. For the right-hand inequality in (i), we proceed analogously:
$$\overline{\Upsilon}_\lambda^{(q)}(x) \;:=\; \frac{q}{2}\,e^{x_0(q)}\cdot x^2 + \Big(1 - \frac{q}{2}\,e^{x_0(q)}\,x_0(q) - \frac{q-\beta_\lambda}{x_0(q)}\Big)\cdot x + q - \beta_\lambda$$
satisfies $\overline{\Upsilon}_\lambda^{(q)}(x_0(q)) = x_0(q) = \xi_\lambda^{(q)}(x_0(q))$, $\overline{\Upsilon}_\lambda^{(q)}(0) = q - \beta_\lambda = \xi_\lambda^{(q)}(0)$, as well as $\overline{\Upsilon}_\lambda^{(q)}(x) > \xi_\lambda^{(q)}(x)$ for all $x \in \,]x_0(q), 0[$. Hence, $\overline{\Upsilon}_\lambda^{(q)}(\cdot)$ is on $]x_0(q), 0[$ a strict upper functional bound of $\xi_\lambda^{(q)}(\cdot)$. Let us first observe the obvious relation $\overline{a}_1(q) = q - \beta_\lambda = a_1(q) < 0$, and assume that $\overline{a}_n(q) \geq a_n(q)$ ($n \in \mathbb{N}$) holds. From this, (95), (79) and (80) we obtain the desired inequality $\overline{a}_{n+1}(q) > a_{n+1}(q)$ by
0 > ρ ¯ n ( q ) = Γ < ( q ) d ( q ) , T n · a n ( q ) , S x 0 ( q ) = q 2 e x 0 ( q ) a n ( q ) , T x 0 ( q ) · a n ( q ) , S q 2 e x 0 ( q ) a n ( q ) x 0 ( q ) · a n ( q ) = Υ ¯ λ ( q ) a n ( q ) d ( q ) , S · a n ( q ) ( q β λ ) > ξ λ ( q ) a n ( q ) d ( q ) , S · a n ( q ) ( q β λ ) a n + 1 ( q ) d ( q ) , S · a ¯ n ( q ) ( q β λ ) = a n + 1 ( q ) ξ λ ( q ) , S ( a ¯ n ( q ) ) .
The explicit representations of the sequences $(a_n(q))_{n\in\mathbb{N}}$, $(\underline{a}_n(q))_{n\in\mathbb{N}}$ and $(\overline{a}_n(q))_{n\in\mathbb{N}}$ follow from (86) by incorporating the appropriate constants mentioned in the prelude of Lemma 3. With (83) to (85) and (86) we immediately achieve $\underline{a}_n(q) > a_n^{(q),T}$ for all $n \in \mathbb{N}$. Analogously, for all $n \geq 2$ we get $\overline{\rho}_{n-1} < 0$, which implies that $\overline{a}_n(q) < a_n^{(q),S}$ for all $n \geq 2$. For $n = 1$ one obtains $\overline{\rho}_0 = 0$ as well as $\overline{a}_1(q) = a_1^{(q),S} = a_1(q) = q - \beta_\lambda$.
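For part (a), the role of $\underline{\Upsilon}_\lambda^{(q)}$ as the second-order Taylor polynomial of $\xi_\lambda^{(q)}$ at $x_0(q)$, and hence as a lower bound on $[x_0(q), 0]$, can be illustrated numerically; $q$ and $\beta_\lambda$ below are arbitrary sample values with $0 < q < \beta_\lambda$ (not taken from the paper):

```python
import math

# Illustrative sketch for part (a): with 0 < q < beta_lam, the fixed point
# x_0(q) of q*e^x - beta_lam = x is negative, and the quadratic
#   Y(x) = (q/2)e^{x0} x^2 + q e^{x0}(1 - x0) x + x0 (1 - q e^{x0} + (q/2) e^{x0} x0)
# (second-order Taylor polynomial of xi at x0) lies below xi on [x0, 0].
q, beta_lam = 0.4, 0.9
xi = lambda x: q * math.exp(x) - beta_lam

a = q - beta_lam
for _ in range(5000):        # strictly decreasing iteration a_{n+1} = xi(a_n)
    a = xi(a)
x0 = a                       # approximates x_0(q) in ]-beta_lam, q - beta_lam[

e0 = q * math.exp(x0)
Y = lambda x: 0.5 * e0 * x**2 + e0 * (1 - x0) * x + x0 * (1 - e0 + 0.5 * e0 * x0)
grid = [x0 + (0 - x0) * j / 200 for j in range(201)]
print(all(Y(x) <= xi(x) + 1e-12 for x in grid))  # True
```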
For the second part (ii), we employ the representation (A36), which leads to
$$\underline{a}_n(q) \;=\; \sum_{k=0}^{n-1}\big(d^{(q),T}\big)^{n-1-k}\cdot\Big[\underline{\rho}_k(q) + x_0(q)\cdot\big(1 - d^{(q),T}\big)\Big] \qquad\text{as well as}\qquad \overline{a}_n(q) \;=\; \sum_{k=0}^{n-1}\big(d^{(q),S}\big)^{n-1-k}\cdot\Big[\overline{\rho}_k(q) + \big(q - \beta_\lambda\big)\Big]\,.$$
The strict decreasingness of both sequences follows from
$$\underline{\rho}_k(q) + x_0(q)\,\big(1 - d^{(q),T}\big) \;=\; \frac{q\,e^{x_0(q)}}{2}\,\big(x_0(q)\big)^2\,\big(d^{(q),T}\big)^{2k} + x_0(q)\,\big(1 - d^{(q),T}\big) \;\leq\; \underline{\Upsilon}_\lambda^{(q)}(0) \;<\; \xi_\lambda^{(q)}(0) \;=\; q - \beta_\lambda \;<\; 0$$
and from the fact that $\overline{\rho}_k(q) \leq 0$ for all $k \in \mathbb{N}_0$ and $q < \beta_\lambda$. Part (iii) follows directly from (i), since $d^{(q),T}, d^{(q),S} \in \,]0,1[\,$.
Let us now prove part (b), where $\max\{0, \beta_\lambda\} < q < \min\{1, e^{\beta_\lambda - 1}\}$ is assumed. To tackle part (i), we compare $\xi_\lambda^{(q)}(x) = q\cdot e^{x} - \beta_\lambda$ with the quadratic function
$$\underline{\underline{\upsilon}}_\lambda^{(q)}(x) \;:=\; \frac{q}{2}\cdot x^2 + q\,\big(e^{x_0(q)} - x_0(q)\big)\cdot x + x_0(q)\cdot\Big(1 - q\,e^{x_0(q)} + \frac{q}{2}\,x_0(q)\Big) \;>\; 0$$
on the interval $[0, x_0(q)]$. Clearly, we have $\underline{\underline{\upsilon}}_\lambda^{(q)}(x_0(q)) = \xi_\lambda^{(q)}(x_0(q)) = x_0(q)$, $(\underline{\underline{\upsilon}}_\lambda^{(q)})'(x_0(q)) = (\xi_\lambda^{(q)})'(x_0(q)) = q\,e^{x_0(q)}$ and $0 < \underline{\underline{\upsilon}}_\lambda^{(q)}(x) < \xi_\lambda^{(q)}(x)$ for all $x \in [0, x_0(q)[\,$. Thus, $\underline{\underline{\upsilon}}_\lambda^{(q)}(\cdot)$ constitutes a positive functional lower bound for $\xi_\lambda^{(q)}(\cdot)$ on $[0, x_0(q)]$. Let us now prove the left-hand inequality of (i) by induction: for $n = 1$ we get $\underline{a}_1(q) = \underline{\underline{\upsilon}}_\lambda^{(q)}(0) < \xi_\lambda^{(q)}(0) = a_1(q)$. Moreover, by assuming $\underline{a}_n(q) \leq a_n(q)$ for $n \in \mathbb{N}$, we obtain with the above-mentioned considerations and (93), (80) and (82)
0 < ρ ̲ n ( q ) = Γ > ( q ) d ( q ) , S 2 n = q 2 · a n ( q ) , S x 0 ( q ) 2 < q 2 · a n ( q ) x 0 ( q ) 2 = q 2 a n ( q ) 2 + q · e x 0 ( q ) x 0 ( q ) · a n ( q ) + x 0 ( q ) · 1 q e x 0 ( q ) + q 2 x 0 ( q ) d ( q ) , T a n ( q ) c ( q ) , T = υ ̲ ̲ λ ( q ) ( a n ( q ) ) d ( q ) , T a n ( q ) c ( q ) , T < ξ λ ( q ) ( a n ( q ) ) d ( q ) , T a n ( q ) c ( q ) , T < a n + 1 ( q ) d ( q ) , T a ̲ n ( q ) c ( q ) , T = a n + 1 ( q ) ξ λ ( q ) , T ( a ̲ n ( q ) ) .
Hence, a ̲ n + 1 ( q ) < a n + 1 ( q ) . For the right-hand inequality in part (i), we define the quadratic function
υ ¯ ¯ λ ( q ) ( x ) : = q 2 · x 2 + 1 q 2 x 0 ( q ) q β λ x 0 ( q ) · x + q β λ ,
which is a functional upper bound for ξ λ ( q ) ( · ) on the interval [ 0 , x 0 ( q ) ] since there holds υ ¯ ¯ λ ( q ) ( 0 ) = ξ λ ( q ) ( 0 ) = q β λ , υ ¯ ¯ λ ( q ) ( x 0 ( q ) ) = ξ λ ( q ) ( x 0 ( q ) ) = x 0 ( q ) and additionally υ ¯ ¯ λ ( q ) ( x ) = q < q e x = ξ λ ( q ) ( x ) on ] 0 , x 0 ( q ) [ . Obviously, a ¯ 1 ( q ) = q β λ = a 1 ( q ) . By assuming a ¯ n ( q ) a n ( q ) for n N , we obtain with (80), (82) and (95)
0 > ρ ¯ n ( q ) = Γ > ( q ) · d ( q ) , S n · 1 d ( q ) , T n = q 2 · x 0 a n ( q ) , S · a n ( q ) , T > q 2 · x 0 a n ( q ) · a n ( q ) = υ ¯ ¯ λ ( q ) ( a n ( q ) ) x 0 ( q ) ( q β λ ) x 0 ( q ) · a n ( q ) ( q β λ ) > ξ λ ( q ) ( a n ( q ) ) d ( q ) , S a n ( q ) c ( q ) , S > ξ λ ( q ) ( a n ( q ) ) d ( q ) , S a ¯ n ( q ) , S c ( q ) , S = a n + 1 ( q ) ξ λ ( q ) , S ( a ¯ n ( q ) ) ,
which implies a ¯ n + 1 ( q ) > a n + 1 ( q ) . The explicit representations of the sequences a ̲ n ( q ) n N and a ¯ n ( q ) n N follow from (86) by employing the appropriate constants mentioned in the prelude of Lemma 3. By means of (83) to (85) and (86), we directly get a ̲ n ( q ) > a n ( q ) , T for all n N , whereas a ¯ n ( q ) < a n ( q ) , S holds only for all n 2 , since ρ ¯ 0 = 0 implies that a ¯ 1 ( q ) = a 1 ( q ) , S = a 1 ( q ) = q β λ .
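The sandwich 0 < υ ̲ ̲ λ ( q ) ( x ) ξ λ ( q ) ( x ) υ ¯ ¯ λ ( q ) ( x ) on ] 0 , x 0 ( q ) ] , which drives the above induction, can also be confirmed numerically. A minimal Python sketch (the values q = 0.81 and β λ = 0.8 are our illustrative choices inside the admissible range of part (b); the fixed point x 0 ( q ) is computed by bisection):

```python
import math

q, beta = 0.81, 0.8   # illustrative: max{0, beta} < q < min{1, e^(beta-1)}

def xi(x):
    # xi_lambda^(q)(x) = q * e^x - beta_lambda
    return q * math.exp(x) - beta

# smallest positive fixed point x0 of xi, located by bisection on [0, ln(1/q)],
# where xi(x) - x is strictly decreasing
lo, hi = 0.0, math.log(1.0 / q)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if xi(mid) - mid > 0:
        lo = mid
    else:
        hi = mid
x0 = 0.5 * (lo + hi)

def lower(x):
    # tangent-type quadratic minorant, touching xi at x0 to first order
    return (q / 2) * x**2 + q * (math.exp(x0) - x0) * x \
           + x0 * (1 - q * math.exp(x0) + (q / 2) * x0)

def upper(x):
    # secant-type quadratic majorant, agreeing with xi at 0 and at x0
    return (q / 2) * x**2 + (1 - (q / 2) * x0 - (q - beta) / x0) * x + (q - beta)

for i in range(1, 1001):
    x = x0 * i / 1000.0
    assert lower(x) > 0.0
    assert lower(x) <= xi(x) + 1e-12
    assert xi(x) <= upper(x) + 1e-12
```

The check mirrors the curvature comparison used in the text: both quadratics have constant second derivative q, while ξ λ ( q ) has second derivative q · e x q on the interval.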
The second part (ii) can be proved in the same way as part (ii) of (a), by employing the representation (A36). For the lower bound one has
a ̲ n ( q ) = k = 0 n 1 d ( q ) , T n 1 k · c ( q ) , T + ρ ̲ k ( q ) , with c ( q ) , T > 0 and ρ ̲ k ( q ) > 0 .
For the upper bound we get
a ¯ n ( q ) = k = 0 n 1 d ( q ) , S n 1 k · c ( q ) , S + ρ ¯ k ( q ) ,
hence it is enough to show c ( q ) , S + ρ ¯ n ( q ) > 0 for all n N 0 . Considering the first two lines of calculation (A44) and incorporating c ( q ) , S = q β λ , this can be seen from
c ( q ) , S + ρ ¯ n ( q ) > υ ¯ ¯ λ ( q ) ( a n ( q ) ) x 0 ( q ) ( q β λ ) x 0 ( q ) · a n ( q ) = υ ¯ ¯ λ ( q ) ( a n ( q ) ) d ( q ) , S · a n ( q ) > 0 ,
because on [ 0 , x 0 ( q ) ] there holds d ( q ) , S · x < x < υ ¯ ¯ λ ( q ) ( x ) . The last part (iii) can be easily deduced from (i) together with lim n n · d ( q ) , S n 1 = 0 . □
The proofs of all Theorems 5–9 are mainly based on the following
Lemma A4.
Recall the quantity B ˜ λ , X 0 , n ( p , q ) from (42) for general p 0 , q > 0 (notice that we do not consider parameters p < 0 , q 0 in Section 6) as well as the constants d ( q ) , T , d ( q ) , S and Γ < ( q ) , Γ > ( q ) defined in (76), (77) and (91). For all β A , β H , α A , α H , λ P × R \ { 0 , 1 } , all initial population sizes X 0 N and all observation horizons n N there holds
(a) 
in the case p 0 and 0 < q < β λ
B ˜ λ , X 0 , n ( p , q ) exp { x 0 ( q ) · X 0 p q · d ( q ) , T 1 d ( q ) , T · 1 d ( q ) , T n + p q · β λ + x 0 ( q ) α λ · n + ζ ̲ n ( q ) · X 0 + p q · ϑ ̲ n ( q ) } = : C λ , X 0 , n ( p , q ) , L ,
B ˜ λ , X 0 , n ( p , q ) exp { x 0 ( q ) · X 0 p q · d ( q ) , S 1 d ( q ) , S · 1 d ( q ) , S n + p q · β λ + x 0 ( q ) α λ · n ζ ¯ n ( q ) · X 0 p q · ϑ ¯ n ( q ) } = : C λ , X 0 , n ( p , q ) , U ,
where ζ ̲ n ( q ) : = Γ < ( q ) · d ( q ) , T n 1 1 d ( q ) , T · 1 d ( q ) , T n > 0 ,
ϑ ̲ n ( q ) : = Γ < ( q ) · 1 d ( q ) , T n 1 d ( q ) , T 2 · 1 d ( q ) , T 1 + d ( q ) , T n 1 + d ( q ) , T > 0 ,
ζ ¯ n ( q ) : = Γ < ( q ) · d ( q ) , S n d ( q ) , T n d ( q ) , S d ( q ) , T d ( q ) , S n 1 · 1 d ( q ) , T n 1 d ( q ) , T > 0 ,
ϑ ¯ n ( q ) : = Γ < ( q ) · d ( q ) , T 1 d ( q ) , T · 1 d ( q ) , S d ( q ) , T n 1 d ( q ) , S d ( q ) , T d ( q ) , S n d ( q ) , T n d ( q ) , S d ( q ) , T > 0 .
(b) 
in the case p 0 and 0 < q = β λ
B ˜ λ , X 0 , n ( p , q ) = exp p q · β λ + x 0 ( q ) α λ · n = exp p α λ · n .
(c) 
in the case p 0 and max { 0 , β λ } < q < min 1 , e β λ 1 the bounds C λ , X 0 , n ( p , q ) , L and C λ , X 0 , n ( p , q ) , U from (96) and (97) remain valid, but with
ζ ̲ n ( q ) : = Γ > ( q ) · d ( q ) , T n d ( q ) , S 2 n d ( q ) , T d ( q ) , S 2 > 0 ,
ϑ ̲ n ( q ) : = Γ > ( q ) d ( q ) , T d ( q ) , S 2 · d ( q ) , T · 1 d ( q ) , T n 1 d ( q ) , T d ( q ) , S 2 · 1 d ( q ) , S 2 n 1 d ( q ) , S 2 > 0 ,
ζ ¯ n ( q ) : = Γ > ( q ) · d ( q ) , S n 1 · n 1 d ( q ) , T n 1 d ( q ) , T > 0 ,
ϑ ¯ n ( q ) : = Γ > ( q ) · [ d ( q ) , S d ( q ) , T 1 d ( q ) , S 2 1 d ( q ) , T · 1 d ( q ) , S n + d ( q ) , T 1 d ( q ) , S d ( q ) , T n 1 d ( q ) , T 1 d ( q ) , S d ( q ) , T d ( q ) , S n 1 d ( q ) , S · n ] .
(d) 
for the special choices p : = p λ E : = α A λ α H 1 λ > 0 , q : = q λ E : = β A λ β H 1 λ > 0 in the parameter setup β A , β H , α A , α H , λ ( P NI P SP , 1 ) × ] λ , λ + [ \ { 0 , 1 } we obtain
lim n 1 n log V λ , X 0 , n = lim n 1 n log C λ , X 0 , n ( p λ E , q λ E ) , L = lim n 1 n log C λ , X 0 , n ( p λ E , q λ E ) , U = α A β A · x 0 ( q λ E ) .
(e) 
for all general p 0 with either 0 < q < β λ or max { 0 , β λ } < q < min 1 , e β λ 1 we get
lim n 1 n log B ˜ λ , X 0 , n ( p , q ) = lim n 1 n log C λ , X 0 , n ( p , q ) , L = lim n 1 n log C λ , X 0 , n ( p , q ) , U = p q · β λ + x 0 ( q ) α λ .
Proof of Lemma A4.
The closed-form bounds C λ , X 0 , n ( p , q ) , L and C λ , X 0 , n ( p , q ) , U are obtained by replacing, in the representation (42) (for B ˜ λ , X 0 , n ( p , q ) , cf. Theorem 1), the recursive sequence member a n ( q ) with the explicit sequence member a ̲ n ( q ) respectively a ¯ n ( q ) . From the definitions (92) to (95) of these sequences and from (83) to (85) one can see that we basically have to evaluate the term
exp a ˜ n h o m + c ˜ n · X 0 + p q · k = 1 n a ˜ k h o m + c ˜ k + p q · β λ α λ · n ,
where a ˜ n h o m + c ˜ n = a ˜ n is either interpreted as the lower approximate a ̲ n ( q ) or as the upper approximate a ¯ n ( q ) . After rearranging and incorporating that c ( q ) , S 1 d ( q ) , S = c ( q ) , T 1 d ( q ) , T = x 0 ( q ) in both approximate cases, we obtain with the help of (86), (87) for the expression (A55) in the case 0 ν < ϰ < d
exp { x 0 ( q ) · 1 d n · X 0 p q · d 1 d + p q · β λ + x 0 ( q ) α λ · n + K 1 · d n ϰ n d ϰ + K 2 · d n ν n d ν · X 0 + p q · K 1 d ϰ + K 2 d ν · d · 1 d n 1 d K 1 · ϰ · 1 ϰ n ( d ϰ ) ( 1 ϰ ) K 2 · ν · 1 ν n ( d ν ) ( 1 ν ) } .
In the other case 0 ν < ϰ = d , the application of (88), (89) turns (A55) into
exp { x 0 ( q ) · 1 d n · X 0 p q · d 1 d + p q · β λ + x 0 ( q ) α λ · n + K 1 · n · d n 1 + K 2 · d n ν n d ν · X 0 + p q · K 1 d ( 1 d ) + K 2 d ν · d · 1 d n 1 d K 2 · ν · 1 ν n ( d ν ) ( 1 ν ) K 1 · d n 1 d · n } .
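The closed-form expressions (A56) and (A57) ultimately rest on the geometric-sum solution of an inhomogeneous linear recursion with geometric forcing terms. As a numerical sanity check, the following Python sketch (all constants c , d , K 1 , K 2 , ϰ , ν are arbitrary illustrative choices, not the paper's specific quantities) compares the recursive representation with the closed form, including the confluent case ϰ = d :

```python
def a_recursive(n, c, d, K1, K2, kap, nu):
    # a_n = sum_{k=0}^{n-1} d^(n-1-k) * (c + K1*kap^k + K2*nu^k),
    # i.e. a_{k+1} = d*a_k + c + K1*kap^k + K2*nu^k with a_0 = 0
    a = 0.0
    for k in range(n):
        a = d * a + c + K1 * kap**k + K2 * nu**k
    return a

def a_closed(n, c, d, K1, K2, kap, nu):
    # closed form for 0 <= nu < kap <= d < 1; the middle term degenerates
    # to K1*n*d^(n-1) in the confluent case kap = d (cf. (A57))
    term_c = c * (1 - d**n) / (1 - d)
    term_1 = (K1 * n * d**(n - 1) if kap == d
              else K1 * (d**n - kap**n) / (d - kap))
    term_2 = K2 * (d**n - nu**n) / (d - nu)
    return term_c + term_1 + term_2

c, d, K1, K2, kap, nu = 0.3, 0.8, 0.1, 0.05, 0.5, 0.2
for n in range(1, 31):
    assert abs(a_recursive(n, c, d, K1, K2, kap, nu)
               - a_closed(n, c, d, K1, K2, kap, nu)) < 1e-12
    assert abs(a_recursive(n, c, d, K1, K2, d, nu)
               - a_closed(n, c, d, K1, K2, d, nu)) < 1e-12
```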
After these preparatory considerations, let us now elaborate the details.
(a) Let 0 < q < β λ . We obtain a closed-form lower bound for B ˜ λ , X 0 , n ( p , q ) by employing the parameters c = ^ c ( q ) , T , d = ^ d ( q ) , T , K 2 = ν = 0 , K 1 = Γ < ( q ) , and ϰ = d ( q ) , T 2 , cf. (93) in combination with (85). Since ϰ < d ( q ) , T , we have to plug these parameters into (A56). The representations of ζ ̲ n ( q ) and ϑ ̲ n ( q ) in (A47) and (A48) follow immediately. For a closed-form upper bound, we employ the parameters c = ^ c ( q ) , S , d = ^ d ( q ) , S , K 1 = K 2 = Γ < ( q ) , ϰ = d ( q ) , T and ν = d ( q ) , S d ( q ) , T (in particular, ϰ < d ( q ) , S implying that we have to use (A56)). From this, (A49) can be deduced directly; the representation (A50) comes from the expressions in the square brackets in the last line of (A56) and from
Γ < ( q ) d ( q ) , S d ( q ) , T Γ < ( q ) d ( q ) , S d ( q ) , S d ( q ) , T · d ( q ) , S · 1 d ( q ) , S n 1 d ( q ) , S + Γ < ( q ) · d ( q ) , T · 1 d ( q ) , T n d ( q ) , S d ( q ) , T 1 d ( q ) , T Γ < ( q ) · d ( q ) , S d ( q ) , T · 1 d ( q ) , S d ( q ) , T n d ( q ) , S d ( q ) , S d ( q ) , T 1 d ( q ) , S d ( q ) , T = Γ < ( q ) · d ( q ) , T 1 d ( q ) , S d ( q ) , S d ( q ) , S d ( q ) , T 1 d ( q ) , T · d ( q ) , S · 1 d ( q ) , S n 1 d ( q ) , S + Γ < ( q ) · d ( q ) , T · 1 d ( q ) , T n d ( q ) , S d ( q ) , T 1 d ( q ) , T Γ < ( q ) · d ( q ) , T · 1 d ( q ) , S d ( q ) , T n 1 d ( q ) , T 1 d ( q ) , S d ( q ) , T = Γ < ( q ) · d ( q ) , T 1 d ( q ) , T · 1 d ( q ) , S d ( q ) , T n 1 d ( q ) , S d ( q ) , T + 1 d ( q ) , S n d ( q ) , S d ( q ) , T 1 d ( q ) , T n d ( q ) , S d ( q ) , T = Γ < ( q ) · d ( q ) , T 1 d ( q ) , T · 1 d ( q ) , S d ( q ) , T n 1 d ( q ) , S d ( q ) , T d ( q ) , S n d ( q ) , T n d ( q ) , S d ( q ) , T = ϑ ¯ n ( q ) .
Part (b) has already been mentioned in Remark 1 (b) and is due to the fact that for 0 < q = β λ , the sequence a n ( q ) n N is itself explicitly representable by a n ( q ) = 0 for all n N (cf. Properties 1 (P2)). Plugging this into (42) gives the desired result.
(c) Let us now consider max { 0 , β λ } < q < min { 1 , e β λ 1 } . For a closed-form lower bound for B ˜ λ , X 0 , n ( p , q ) we have to employ the parameters c = ^ c ( q ) , T , d = ^ d ( q ) , T , K 2 = ν = 0 , K 1 = Γ > ( q ) and ϰ = d ( q ) , S 2 , cf. (93) in combination with (85). The representations of ζ ̲ n ( q ) and ϑ ̲ n ( q ) in (A51) and (A52) follow immediately from (A56). For a closed-form upper bound, we use the parameters c = ^ c ( q ) , S , d = ^ d ( q ) , S , K 1 = K 2 = Γ > ( q ) , ϰ = d ( q ) , S and ν = d ( q ) , S d ( q ) , T . Notice that in this case we stick to the representation (A57). The formula (104) is obviously valid, and (105) is implied by
Γ > ( q ) d ( q ) , S 1 d ( q ) , S + Γ > ( q ) d ( q ) , S d ( q ) , S d ( q ) , T · d ( q ) , S · 1 d ( q ) , S n 1 d ( q ) , S = Γ > ( q ) · d ( q ) , S d ( q ) , T 1 d ( q ) , S 2 1 d ( q ) , T · 1 d ( q ) , S n .
Parts (d) and (e) follow immediately by incorporating that in all respective cases one has d ( q ) , S ] 0 , 1 [ , d ( q ) , T ] 0 , 1 [ and lim n n · d ( q ) , S n = 0 . □
Proof of Theorem 5.
(a) For λ ] 0 , 1 [ , we get 0 < q λ E < β λ and the assertion follows by applying part (a) of Lemma A4. Notice that in the current subcase P NI P SP , 1 there holds p λ E q λ E β λ α λ = 0 as well as p λ E q λ E = α A β A = α H β H . For the case λ R \ [ 0 , 1 ] , one gets from Lemma A1 that max { 0 , β λ } < q λ E , and there holds q λ E < min { 1 , e β λ 1 } iff λ ] λ , λ + [ \ [ 0 , 1 ] , cf. Lemma 1. Thus, an application of part (c) of Lemma A4 proves the desired result. The assertion (b) is equivalent to part (d) of Lemma A4. □
Proof of Theorem 6.
The assertions follow immediately from (A45), Lemma A4(b),(e), Proposition 6(d) as well as the incorporation of the fact that for λ ] 0 , 1 [ there holds q λ L = β A λ β H 1 λ < β λ in the case β A , β H , α A , α H ( P SP \ ( P SP , 1 P SP , 4 ) ) (i.e., β A β H ) respectively q λ L = β λ in the case β A , β H , α A , α H P SP , 4 (i.e., β A = β H ). □
Proof of Theorem 7.
This can be deduced from (A46), from the parts (b), (c) and (e) of Lemma A4 as well as the incorporation of p λ U α A λ α H 1 λ > 0 for λ ] 0 , 1 [ . Notice that an inadequate choice of p λ U , q λ U may lead to p λ U q λ U ( β λ + x 0 ( q λ U ) ) α λ > 0 . □
Proof of Theorem 8.
The assertions follow immediately from (A45) and from the parts (b), (c) and (e) of Lemma A4. Notice that an inadequate choice of p λ L , q λ L may lead to p λ L q λ L ( β λ + x 0 ( q λ L ) ) α λ < 0 . □
Proof of Theorem 9.
Let p λ U = α A λ α H 1 λ > max { 0 , α λ } and q λ U = β A λ β H 1 λ > max { 0 , β λ } . Since q λ U < min { 1 , e β λ 1 } iff λ ] λ , λ + [ \ [ 0 , 1 ] (cf. Lemma 1 for q λ : = q λ U ), this theorem follows from (A46) of Lemma A4, from parts (b) and (e) of Lemma A4 and from part (d) of Proposition 14. □

Appendix A.4. Proofs and Auxiliary Lemmas for Section 7

Proof of Theorem 10.
As already mentioned above, one can adapt the proof of Theorem 9.1.3 in Ethier & Kurtz [138], who deal with the drift parameters η = 0 , κ = 0 and with the different setup of a σ-independent time scale and a sequence of critical Galton-Watson processes without immigration having general offspring distribution. For the sake of brevity, we outline here only the main differences to their proof; for similar limit investigations involving offspring/immigration distributions and parametrizations which are incompatible with ours, see e.g., Sriram [142].
As a first step, let us define the generator
A f ( x ) : = η κ · x · f ( x ) + σ 2 2 · x · f ( x ) , f C c [ 0 , ) ,
which corresponds to the diffusion process X ˜ governed by (133). In connection with (130), we study
T ( m ) f ( x ) : = E P f 1 m k = 1 m x Y 0 , k ( m ) + Y ˜ 0 ( m ) , x E ( m ) : = 1 m N 0 , f C c [ 0 , )
where the Y 0 , k ( m ) , Y ˜ 0 ( m ) are independent and (Poisson- β ( m ) respectively Poisson- α ( m ) ) distributed as the members of the collection Y ( m ) respectively Y ˜ ( m ) . By Theorems 8.2.1 and 1.6.5 as well as Corollary 4.8.9 of [138], it is sufficient to show
lim m sup x E ( m ) σ 2 m T ( m ) f ( x ) f ( x ) A f ( x ) = 0 , f C c [ 0 , ) .
But (A58) follows mainly from the next
Lemma A5.
Let
S n ( m ) : = 1 n k = 1 n Y 0 , k ( m ) β ( m ) + Y ˜ 0 ( m ) α ( m ) , n N , m N ¯ ,
with the usual convention S 0 ( m ) : = 0 . Then for all m N ¯ , x E ( m ) and all f C c [ 0 , )
ϵ ( m ) ( x ) : = E P 0 1 S m x ( m ) 2 x ( 1 v ) f β ( m ) x + α ( m ) m + v x m S m x ( m ) f ( x ) d v = 1 σ 2 · σ 2 m · T ( m ) f ( x ) f ( x ) A f ( x ) + R ( m ) , where lim m R ( m ) = 0 .
Proof of Lemma A5.
Let us fix f C c [ 0 , ) . From the involved Poissonian expectations it is easy to see that
lim m σ 2 m T ( m ) f ( 0 ) f ( 0 ) A f ( 0 ) = 0 ,
and thus (A59) holds for x = 0 . Accordingly, we next consider the case x E ( m ) \ { 0 } , with fixed m N ¯ . From E P S m x ( m ) 2 = β ( m ) + α ( m ) m x we obtain
E P S m x ( m ) 2 x f ( x ) 0 1 ( 1 v ) d v = 1 2 β ( m ) · x + α ( m ) m f ( x ) = : a m x f ( x ) 2 = : a f ( x ) 2 .
Furthermore, with b m x : = b : = a + x / m · S m x ( m ) = 1 m k = 1 m x Y 0 , k ( m ) + Y ˜ 0 ( m ) we get on { S m x ( m ) 0 }
0 1 f β ( m ) x + α ( m ) m + v x m S m x ( m ) d v = m x · 1 S m x ( m ) a b f ( y ) d y = m x · f ( b ) f ( a ) S m x ( m )
as well as
0 1 v f β ( m ) x + α ( m ) m + v x m S m x ( m ) d v = m x S m x ( m ) 2 a b y f ( y ) d y a a b f ( y ) d y = m x · f ( b ) S m x ( m ) + m x · f ( a ) f ( b ) S m x ( m ) 2 .
With our choice β ( m ) = 1 κ σ 2 m and α ( m ) = β ( m ) · η σ 2 , a Taylor expansion of f at x gives
f ( a ) = f ( x ) + 1 σ 2 m · f ( x ) β ( m ) · η κ · x + o 1 m ,
where for the case η = κ = 0 we use the convention o 1 m 0 . Combining (A60) to (A63) and the centering E P S m x ( m ) = 0 , the left hand side of Equation (A59) becomes
E P 0 1 S m x ( m ) 2 x ( 1 v ) f β ( m ) x + α ( m ) m + v x m S m x ( m ) f ( x ) d v = E P m x · S m x ( m ) · f ( b ) f ( a ) E P m x · S m x ( m ) · f ( b ) + m · ( f ( a ) f ( b ) ) 1 2 β ( m ) · x + α ( m ) m · f ( x ) = m · E P f ( b ) f ( a ) 1 2 β ( m ) · x + α ( m ) m · f ( x ) = m · E P f 1 m k = 1 m x Y 0 , k ( m ) + Y ˜ 0 f ( x ) 1 σ 2 A f ( x ) + 1 σ 2 η κ · x β ( m ) · η + κ · x · f ( x ) + x 2 1 β ( m ) α ( m ) m · f ( x ) m · o 1 m
which immediately leads to the right hand side of (A59). □
To proceed with the proof of Theorem 10, we obtain for m 2 κ / σ 2 the inequality β ( m ) 1 / 2 and accordingly for all v ] 0 , 1 [ , x E ( m )
β ( m ) x + α ( m ) m + v x m S m x ( m ) = ( 1 v ) · x · β ( m ) + ( 1 v ) α ( m ) m + v k = 1 m x Y 0 , k ( m ) + Y ˜ 0 x · 1 v 2 .
Suppose that the support of f is contained in the interval [ 0 , c ] . Correspondingly, for v 1 2 c / x the integrand in ϵ ( m ) ( x ) is zero and hence with (A64) we obtain the bounds
0 1 S m x ( m ) 2 x ( 1 v ) f β ( m ) x + α ( m ) m + v x m S m x ( m ) f ( x ) d v 0 ( 1 2 c / x ) 1 S m x ( m ) 2 x ( 1 v ) · 2 f d v x · S m x ( m ) 2 1 2 c x 2 f .
From this, one can deduce lim m sup x E ( m ) ϵ ( m ) ( x ) = 0 , and thus (A58), in the same manner as at the end of the proof of Theorem 9.1.3 in [138] (by means of the dominated convergence theorem). □
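In this connection, note that the aggregated quantity k = 1 m x Y 0 , k ( m ) + Y ˜ 0 ( m ) is again Poisson distributed, with parameter m x · β ( m ) + α ( m ) ; hence T ( m ) f ( x ) can be evaluated exactly and the generator convergence (A58) can be inspected numerically. A minimal Python sketch (the smooth test function f(x) = exp(-x) and the parameter values are our illustrative choices; strictly, (A58) requires f C c [ 0 , ) ):

```python
import math

def scaled_generator_gap(m, x, eta, kappa, sig2, f, df, ddf):
    """sig2*m*(T^(m) f(x) - f(x)) - A f(x), with T^(m) f(x) evaluated
    exactly via the Poisson distribution of the aggregated population."""
    beta = 1.0 - kappa / (sig2 * m)          # offspring mean beta(m)
    alpha = beta * eta / sig2                # immigration mean alpha(m)
    lam = m * x * beta + alpha               # Poisson parameter of the sum
    # exact expectation T^(m) f(x) = E[f(Z/m)], Z ~ Poisson(lam),
    # truncated 12 standard deviations around the mean
    lo = max(0, int(lam - 12 * math.sqrt(lam)) - 12)
    hi = int(lam + 12 * math.sqrt(lam)) + 12
    Tf = sum(math.exp(k * math.log(lam) - lam - math.lgamma(k + 1)) * f(k / m)
             for k in range(lo, hi + 1))
    Af = (eta - kappa * x) * df(x) + 0.5 * sig2 * x * ddf(x)
    return sig2 * m * (Tf - f(x)) - Af

f = lambda y: math.exp(-y)
df = lambda y: -math.exp(-y)
ddf = lambda y: math.exp(-y)

eta, kappa, sig2, x = 0.5, 0.3, 1.0, 1.0
gaps = [abs(scaled_generator_gap(m, x, eta, kappa, sig2, f, df, ddf))
        for m in (50, 200, 800)]
# the rescaled generator gap shrinks as m grows
assert gaps[0] > gaps[1] > gaps[2] and gaps[2] < 1e-2
```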
Proof of Proposition 15.
Let ( κ A , κ H , η ) P ˜ N I P ˜ S P , 1 be fixed. We have to find those orders λ R \ [ 0 , 1 ] which satisfy for all sufficiently large m N ¯
q λ ( m ) = 1 κ A σ 2 m λ 1 κ H σ 2 m 1 λ < min 1 , e β λ ( m ) 1 .
In order to achieve this, we interpret q λ ( m ) = q λ 1 m in terms of the function
q λ ( x ) : = 1 κ A σ 2 · x λ 1 κ H σ 2 · x 1 λ , x ] ϵ , ϵ [ ,
for some small enough ϵ > 0 such that (A65) is well-defined. Since β λ ( m ) 1 = κ λ σ 2 · m = κ λ σ 2 · x = λ κ A + ( 1 λ ) κ H σ 2 · x , for the verification of (A64) it suffices to show
lim x 0 1 q λ ( x ) x > 0 ,
and lim x 0 e κ λ σ 2 · x q λ ( x ) x 2 > 0 .
By l’Hospital’s rule, one gets lim x 0 1 q λ ( x ) x = λ κ A + ( 1 λ ) κ H σ 2 = κ λ σ 2 and hence
( A66 ) λ < κ H κ H κ A , if κ A < κ H , λ > κ H κ A κ H , if κ A > κ H .
To find a condition that guarantees (A67), we use l’Hospital’s rule twice to deduce
lim x 0 e κ λ σ 2 · x q λ ( x ) x 2 = 1 2 σ 4 κ λ 2 λ ( λ 1 ) ( κ A κ H ) 2 = 1 2 σ 4 λ κ A 2 + ( 1 λ ) κ H 2
and hence we obtain
( A67 ) λ < κ H 2 κ H 2 κ A 2 , if κ A < κ H , λ > κ H 2 κ A 2 κ H 2 , if κ A > κ H .
To compare both the lower and upper bounds in (A68) and (A69), let us calculate
κ H 2 κ H 2 κ A 2 κ H κ H κ A = κ A κ H ( κ H κ A ) ( κ H + κ A ) < 0 , if κ A < κ H , > 0 , if κ A > κ H .
Incorporating this, we observe that both conditions (A66) and (A67) are satisfied simultaneously iff
λ < min κ H κ H κ A , κ H 2 κ H 2 κ A 2 = κ H 2 κ H 2 κ A 2 if κ A < κ H , λ > max κ H κ A κ H , κ H 2 κ A 2 κ H 2 = κ H 2 κ A 2 κ H 2 if κ A > κ H ,
which finishes the proof. □
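Both l'Hospital limits above are easy to confirm numerically. In the following Python sketch, the parameter values κ A = 2 , κ H = 1 , σ 2 = 1 and the order λ = 2 (an admissible choice for κ A > κ H ) are ours, picked purely for illustration:

```python
import math

kA, kH, sig2, lam = 2.0, 1.0, 1.0, 2.0   # illustrative; lam > kH^2/(kA^2 - kH^2)

def q(x):
    # q_lambda(x) from (A65)
    return (1 - kA / sig2 * x) ** lam * (1 - kH / sig2 * x) ** (1 - lam)

k_lam = lam * kA + (1 - lam) * kH            # kappa_lambda = 3
Lam2 = lam * kA ** 2 + (1 - lam) * kH ** 2   # lambda*kA^2 + (1-lambda)*kH^2 = 7

x = 1e-5
lim1 = (1 - q(x)) / x                               # (A66): kappa_lambda / sigma^2
lim2 = (math.exp(-k_lam / sig2 * x) - q(x)) / x**2  # (A67): Lam2 / (2 sigma^4)
assert abs(lim1 - k_lam / sig2) < 1e-3
assert abs(lim2 - Lam2 / (2 * sig2 ** 2)) < 1e-2
```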
The following lemma is the main tool for the proof of Theorem 11.
Lemma A6.
Let ( κ A , κ H , η , λ ) ( P ˜ N I P ˜ S P , 1 ) × λ ˜ , λ ˜ + \ { 0 , 1 } . By using the quantities κ λ : = λ κ A + ( 1 λ ) κ H and Λ λ : = λ κ A 2 + ( 1 λ ) κ H 2 from (150) (which is well-defined, cf. (138)), one gets for all t > 0
( a ) lim m m · 1 q λ ( m ) = lim m m · 1 β λ ( m ) = κ λ σ 2 . ( b ) lim m m 2 · a 1 ( m ) = lim m m 2 · q λ ( m ) β λ ( m ) = λ ( 1 λ ) κ A κ H 2 2 σ 4 = Λ λ 2 κ λ 2 2 σ 4 . ( c ) lim m m · x 0 ( m ) = Λ λ κ λ σ 2 < 0 , if λ ] 0 , 1 [ , > 0 , if λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] . ( d ) lim m m 2 · Γ < ( m ) = lim m m 2 · Γ > ( m ) = ( Λ λ κ λ ) 2 2 σ 4 > 0 . ( e ) lim m m · ( 1 d ( m ) , S ) = Λ λ + κ λ 2 σ 2 > 0 . ( f ) lim m m · ( 1 d ( m ) , T ) = Λ λ σ 2 > 0 . ( g ) lim m m · ( 1 d ( m ) , S d ( m ) , T ) = 3 Λ λ + κ λ 2 σ 2 > 0 . ( h ) lim m d ( m ) , S σ 2 m t = exp Λ λ + κ λ 2 · t < 1 . ( i ) lim m d ( m ) , T σ 2 m t = exp Λ λ · t < 1 . ( j ) lim m d ( m ) , S d ( m ) , T σ 2 m t = exp 3 Λ λ + κ λ 2 · t < 1 . ( k ) for λ ] 0 , 1 [ , there holds for the respective quantities defined in ( 142 ) to ( 145 ) lim m m · ζ ̲ σ 2 m t ( m ) = Λ λ κ λ 2 2 σ 2 · Λ λ · e Λ λ · t · 1 e Λ λ · t > 0 , lim m ϑ ̲ σ 2 m t ( m ) = 1 4 · Λ λ κ λ Λ λ 2 · 1 e Λ λ · t 2 > 0 , lim m m · ζ ¯ σ 2 m t ( m ) = Λ λ κ λ 2 σ 2 · e 1 2 ( Λ λ + κ λ ) · t e Λ λ · t Λ λ κ λ e 1 2 ( Λ λ + κ λ ) · t 1 e Λ λ · t 2 · Λ λ > 0 , lim m ϑ ¯ σ 2 m t ( m ) = Λ λ κ λ 2 Λ λ · 1 e 1 2 3 Λ λ + κ λ · t 3 Λ λ + κ λ + e Λ λ · t e 1 2 ( Λ λ + κ λ ) · t Λ λ κ λ > 0 . ( l ) for λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] , there holds for the respective quantities defined in ( 146 ) to ( 149 ) lim m m · ζ ̲ σ 2 m t ( m ) = Λ λ κ λ 2 2 σ 2 · κ λ · e Λ λ · t · 1 e κ λ · t > 0 , lim m ϑ ̲ σ 2 m t ( m ) = Λ λ κ λ 2 2 · κ λ · 1 e Λ λ · t Λ λ 1 e ( Λ λ + κ λ ) · t Λ λ + κ λ > 0 , lim m m · ζ ¯ σ 2 m t ( m ) = Λ λ κ λ 2 2 · σ 2 · e 1 2 ( Λ λ + κ λ ) · t · t 1 e Λ λ · t Λ λ > 0 , lim m ϑ ¯ σ 2 m t ( m ) = Λ λ κ λ 2 · [ Λ λ κ λ 1 e 1 2 ( Λ λ + κ λ ) · t Λ λ · Λ λ + κ λ 2 + 1 e 1 2 ( 3 Λ λ + κ λ ) · t Λ λ · 3 Λ λ + κ λ e 1 2 ( Λ λ + κ λ ) · t Λ λ + κ λ · t ] > 0 .
Proof of Lemma A6.
For each of the assertions (a) to (l), we will make use of l’Hospital’s rule. To begin with, we obtain for arbitrary μ , ν R
lim m m 1 ( β A ( m ) ) μ ( β H ( m ) ) ν = lim m m 2 μ · ( β A ( m ) ) μ 1 ( β H ( m ) ) ν κ A σ 2 m 2 + ν · ( β A ( m ) ) μ ( β H ( m ) ) ν 1 κ H σ 2 m 2 = μ κ A σ 2 + ν κ H σ 2 .
From this, the first part of (a) follows immediately and the second part is a direct consequence of the definition of β λ ( m ) . Part (b) can be deduced from (A71):
lim m m 2 · a 1 ( m ) = lim m m 2 σ 2 · [ λ · κ A 1 ( β A ( m ) ) λ 1 ( β H ( m ) ) 1 λ + ( 1 λ ) · κ H 1 ( β A ( m ) ) λ ( β H ( m ) ) λ ] = λ ( 1 λ ) ( κ A κ H ) 2 2 σ 4 = Λ λ 2 κ λ 2 2 σ 4 .
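The limit (A71), and with it part (a), can be confirmed numerically; in the following Python sketch, the values of κ A , κ H , σ 2 and of the exponent pairs ( μ , ν ) are our illustrative choices:

```python
kA, kH, sig2 = 2.0, 1.0, 1.0   # illustrative values

def betaA(m):
    # beta_A(m) = 1 - kappa_A / (sigma^2 m)
    return 1 - kA / (sig2 * m)

def betaH(m):
    # beta_H(m) = 1 - kappa_H / (sigma^2 m)
    return 1 - kH / (sig2 * m)

m = 1e6
for mu, nu in [(0.4, 0.6), (2.0, -1.0), (1.0, 0.0)]:
    approx = m * (1 - betaA(m) ** mu * betaH(m) ** nu)
    exact = (mu * kA + nu * kH) / sig2   # the right-hand side of (A71)
    assert abs(approx - exact) < 1e-3
```

With ( μ , ν ) = ( λ , 1 λ ) this is exactly the first limit of part (a).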
For the proof of (c), we rely on the inequalities x ̲ 0 ( m ) x 0 ( m ) x ¯ 0 ( m ) ( m N ), where x ̲ 0 ( m ) and x ¯ 0 ( m ) are the obvious notational adaptations of (124) and (126), respectively. Notice that x ̲ 0 ( m ) and x ¯ 0 ( m ) are solutions of the (again adapted) quadratic equations Q ̲ λ ( m ) ( x ) = x resp. Q ¯ λ ( m ) ( x ) = x (cf. (127) and (128)). These solutions clearly exist in the case λ ] 0 , 1 [ . For sufficiently large approximation steps m N , these solutions also exist in the case λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] since (138) together with parts (a) and (b) imply
lim m m · ( 1 q λ ( m ) ) 2 2 · q λ ( m ) · m 2 · a 1 ( m ) = σ 2 · λ κ A 2 + ( 1 λ ) κ H 2 > 0 , for λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] .
To prove part (c), we show that the limits of x ̲ 0 ( m ) and x ¯ 0 ( m ) coincide. Assume first that λ ] 0 , 1 [ . Using (a) and (b), we obtain together with the obvious limit lim m q λ ( m ) = 1
lim m m · x ¯ 0 ( m ) = lim m q λ ( m ) 1 · m · ( 1 q λ ( m ) ) m · ( 1 q λ ( m ) ) 2 2 · q λ ( m ) · m 2 · a 1 ( m ) = κ λ σ 2 κ λ σ 2 2 + Λ λ 2 κ λ 2 σ 4 = Λ λ κ λ σ 2 .
Let x ̲ ̲ 0 ( m ) be the adapted version of the auxiliary fixed-point lower bound defined in (125). By incorporating lim m β λ ( m ) = 1 we obtain with (a) and (b)
lim m x ̲ ̲ 0 ( m ) = lim m max β λ ( m ) , q λ ( m ) β λ ( m ) 1 q λ ( m ) = lim m 1 m · m 2 · a 1 ( m ) m · 1 q λ ( m ) = 0 ,
which implies
lim m m · x ̲ 0 ( m ) = lim m e x ̲ ̲ 0 ( m ) q λ ( m ) · m · ( 1 q λ ( m ) ) m · ( 1 q λ ( m ) ) 2 2 · e x ̲ ̲ 0 ( m ) q λ ( m ) · m 2 · a 1 ( m ) = κ λ σ 2 κ λ σ 2 2 + Λ λ 2 κ λ 2 σ 4 = Λ λ κ λ σ 2 .
Combining (A72) and (A73), the desired result (c) follows for λ ] 0 , 1 [ . Assume now that λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] . In this case the approximants x ̲ 0 ( m ) and x ¯ 0 ( m ) take a different form, given in (124) and (126). However, the calculations work out in the same way: with parts (a) and (b) we get
lim m m · x ̲ 0 ( m ) = lim m 1 q λ ( m ) · m · 1 q λ ( m ) m · ( 1 q λ ( m ) ) 2 2 · q λ ( m ) · m 2 · a 1 ( m ) = κ λ σ 2 κ λ σ 2 2 + Λ λ 2 κ λ 2 σ 4 = Λ λ κ λ σ 2 ,
as well as
lim m m · x ¯ 0 ( m ) = lim m m · 1 q λ ( m ) m · ( 1 q λ ( m ) ) 2 2 · m 2 · a 1 ( m ) = κ λ σ 2 κ λ σ 2 2 + Λ λ 2 κ λ 2 σ 4 = Λ λ κ λ σ 2 ,
which finally finishes the proof of part (c). Assertion (d) is a direct consequence of (c). Since the representations of the parameters c ( m ) , S , d ( m ) , S , c ( m ) , T , d ( m ) , T are the same in both cases λ ] 0 , 1 [ and λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] , the following considerations hold generally. Part (e) follows from (b) and (c) by
lim m m · ( 1 d ( m ) , S ) = lim m m 2 · a 1 ( m ) m · x 0 ( m ) = Λ λ + κ λ 2 σ 2 > 0 .
Notice that this term is positive since on ] λ ˜ , λ ˜ + [ \ { 0 , 1 } there holds κ λ > 0 as well as Λ λ > 0 , cf. (A70). To prove (f), we apply the general limit lim x 0 e x 1 x = 1 and get with (a), (c)
lim m m · ( 1 d ( m ) , T ) = lim m m · 1 q λ ( m ) q λ ( m ) · m · x 0 ( m ) · e x 0 ( m ) 1 x 0 ( m ) = Λ λ σ 2 .
The limit (g) can be obtained from (e) and (f):
lim m m · ( 1 d ( m ) , S d ( m ) , T ) = lim m m · ( 1 d ( m ) , S ) + d ( m ) , S · m · ( 1 d ( m ) , T ) = 3 Λ λ + κ λ 2 σ 2 .
The assertions (h) resp. (i) resp. (j) follow from (e) resp. (f) resp. (g) by using the general relation lim m 1 + x m m m = exp lim m x m . To get the last two parts (k) and (l), we repeatedly make use of the results (a) to (j) and combine them with the formulas (142) to (149) of Corollary 14. In more detail, for λ ] 0 , 1 [ (and thus q λ ( m ) < β λ ( m ) ) we obtain
m · ζ ̲ σ 2 m t ( m ) = m 2 · Γ < ( m ) · d ( m ) , T σ 2 m t 1 m · 1 d ( m ) , T · 1 d ( m ) , T σ 2 m t m Λ λ κ λ 2 2 σ 2 · Λ λ · e Λ λ · t · 1 e Λ λ · t > 0 , ϑ ̲ σ 2 m t ( m ) = m 2 · Γ < ( m ) · 1 d ( m ) , T σ 2 m t m · 1 d ( m ) , T 2 · 1 d ( m ) , T 1 + d ( m ) , T σ 2 m t 1 + d ( m ) , T m 1 4 · Λ λ κ λ Λ λ 2 · 1 e Λ λ · t 2 > 0 ,
m · ζ ¯ σ 2 m t ( m ) = m 2 · Γ < ( m ) · [ d ( m ) , S σ 2 m t d ( m ) , T σ 2 m t m · 1 d ( m ) , T m · 1 d ( m ) , S d ( m ) , S σ 2 m t 1 · 1 d ( m ) , T σ 2 m t m · 1 d ( m ) , T ] m Λ λ κ λ 2 σ 2 · e 1 2 ( Λ λ + κ λ ) · t e Λ λ · t Λ λ κ λ e 1 2 ( Λ λ + κ λ ) · t 1 e Λ λ · t 2 · Λ λ > 0 ,
ϑ ¯ σ 2 m t ( m ) = m 2 · Γ < ( m ) · d ( m ) , T m · 1 d ( m ) , T · 1 d ( m ) , S d ( m ) , T σ 2 m t m · 1 d ( m ) , S d ( m ) , T d ( m ) , S σ 2 m t d ( m ) , T σ 2 m t m · 1 d ( m ) , T m · 1 d ( m ) , S m Λ λ κ λ 2 Λ λ · 1 e 1 2 3 Λ λ + κ λ · t 3 Λ λ + κ λ + e Λ λ · t e 1 2 ( Λ λ + κ λ ) · t Λ λ κ λ > 0 .
For λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] (and thus q λ ( m ) > β λ ( m ) ) we get
m · ζ ̲ σ 2 m t ( m ) = m 2 · Γ > ( m ) · d ( m ) , T σ 2 m t d ( m ) , S 2 · σ 2 m t m · 1 d ( m ) , S 1 + d ( m ) , S m · 1 d ( m ) , T m Λ λ κ λ 2 2 σ 2 · κ λ · e Λ λ · t · 1 e κ λ · t > 0 , ϑ ̲ σ 2 m t ( m ) = m 2 · Γ > ( m ) m · 1 d ( m ) , S 1 + d ( m ) , S m · 1 d ( m ) , T · d ( m ) , T · 1 d ( m ) , T σ 2 m t m · 1 d ( m ) , T d ( m ) , S 2 · 1 d ( m ) , S 2 · σ 2 m t m · 1 d ( m ) , S 1 + d ( m ) , S m Λ λ κ λ 2 2 · κ λ · 1 e Λ λ · t Λ λ 1 e ( Λ λ + κ λ ) · t Λ λ + κ λ > 0 , m · ζ ¯ σ 2 m t ( m ) = m 2 · Γ > ( m ) · d ( m ) , S σ 2 m t 1 · σ 2 m t m 1 d ( m ) , T σ 2 m t m · 1 d ( m ) , T m Λ λ κ λ 2 2 · σ 2 · e 1 2 ( Λ λ + κ λ ) · t · t 1 e Λ λ · t Λ λ > 0 ,
ϑ ¯ σ 2 m t ( m ) = m 2 · Γ > ( m ) · [ m · 1 d ( m ) , T m · 1 d ( m ) , S m 2 · 1 d ( m ) , S 2 · m · 1 d ( m ) , T · 1 d ( m ) , S σ 2 m t + d ( m ) , T 1 d ( m ) , S d ( m ) , T σ 2 m t m · 1 d ( m ) , T · m · 1 d ( m ) , S d ( m ) , T d ( m ) , S σ 2 m t m · 1 d ( m ) , S · σ 2 m t m ] m Λ λ κ λ 2 · Λ λ κ λ 1 e 1 2 ( Λ λ + κ λ ) · t Λ λ · Λ λ + κ λ 2 + 1 e 1 2 ( 3 Λ λ + κ λ ) · t Λ λ · 3 Λ λ + κ λ e 1 2 ( Λ λ + κ λ ) · t Λ λ + κ λ · t > 0 .
□
Proof of Theorem 11.
It suffices to compute the limits of the bounds given in Corollary 14 as m tends to infinity. This is done by applying Lemma A6 which provides corresponding limits of all quantities of interest. Accordingly, for all t > 0 the lower bound (153) in the case λ ] 0 , 1 [ can be obtained from (140), (142) and (143) by
lim m exp { x 0 ( m ) · X 0 ( m ) η σ 2 · d ( m ) , T 1 d ( m ) , T 1 d ( m ) , T σ 2 m t + x 0 ( m ) η σ 2 · σ 2 m t + ζ ̲ σ 2 m t ( m ) · X 0 ( m ) + ϑ ̲ σ 2 m t ( m ) } = lim m exp { m · x 0 ( m ) · X 0 ( m ) m η σ 2 · d ( m ) , T m · 1 d ( m ) , T 1 d ( m ) , T σ 2 m t + m · x 0 ( m ) η σ 2 · σ 2 m t m + m · ζ ̲ σ 2 m t ( m ) · X 0 ( m ) m + ϑ ̲ σ 2 m t ( m ) } = exp { Λ λ κ λ σ 2 · X ˜ 0 η σ 2 · σ 2 Λ λ 1 e Λ λ t Λ λ κ λ σ 2 · η σ 2 · σ 2 t + Λ λ κ λ 2 2 σ 2 · Λ λ · e Λ λ · t · 1 e Λ λ · t · X ˜ 0 + η 4 σ 2 · Λ λ κ λ Λ λ 2 · 1 e Λ λ · t 2 } = exp Λ λ κ λ σ 2 X ˜ 0 η Λ λ 1 e Λ λ · t η σ 2 Λ λ κ λ · t + L λ ( 1 ) ( t ) · X ˜ 0 + η σ 2 · L λ ( 2 ) ( t ) .
For all t > 0 , the upper bound (154) in the case λ ] 0 , 1 [ follows analogously from (141), (144), (145) by
lim m exp { x 0 ( m ) · X 0 ( m ) η σ 2 · d ( m ) , S 1 d ( m ) , S 1 d ( m ) , S σ 2 m t + x 0 ( m ) η σ 2 · σ 2 m t ζ ¯ σ 2 m t ( m ) · X 0 ( m ) ϑ ¯ σ 2 m t ( m ) } = lim m exp { m · x 0 ( m ) · X 0 ( m ) m η σ 2 · d ( m ) , S m · 1 d ( m ) , S 1 d ( m ) , S σ 2 m t + m · x 0 ( m ) η σ 2 · σ 2 m t m m · ζ ¯ σ 2 m t ( m ) · X 0 ( m ) m ϑ ¯ σ 2 m t ( m ) } = exp { Λ λ κ λ σ 2 X ˜ 0 η σ 2 · 2 σ 2 Λ λ + κ λ 1 e 1 2 ( Λ λ + κ λ ) t Λ λ κ λ σ 2 · η σ 2 · σ 2 t Λ λ κ λ 2 σ 2 · e 1 2 ( Λ λ + κ λ ) · t e Λ λ · t Λ λ κ λ e 1 2 ( Λ λ + κ λ ) · t 1 e Λ λ · t 2 · Λ λ · X ˜ 0 η σ 2 Λ λ κ λ 2 Λ λ · 1 e 1 2 3 Λ λ + κ λ · t 3 Λ λ + κ λ + e Λ λ · t e 1 2 ( Λ λ + κ λ ) · t Λ λ κ λ } = exp { Λ λ κ λ σ 2 X ˜ 0 η 1 2 ( Λ λ + κ λ ) 1 e 1 2 ( Λ λ + κ λ ) · t η σ 2 Λ λ κ λ · t U λ ( 1 ) ( t ) · X ˜ 0 η σ 2 · U λ ( 2 ) ( t ) } .
In the case λ ] λ ˜ , λ ˜ + [ \ [ 0 , 1 ] , the lower bound as well as the upper bound of the Hellinger integral limit is obtained analogously, by taking into account that the quantities ζ ̲ n ( m ) , ϑ ̲ n ( m ) , ζ ¯ n ( m ) , ϑ ¯ n ( m ) now have the form (146) to (149) instead of (142) to (145). Thus, the functions L λ ( 1 ) ( t ) , U λ ( 1 ) ( t ) , L λ ( 2 ) ( t ) , U λ ( 2 ) ( t ) are obtained by employing the limits of part (l) of Lemma A6 instead of part (k). □
The next Lemma (and parts of its proof) will be useful for the verification of Theorem 12:
Lemma A7.
Recall the bounds on the m → ∞ limit of the Hellinger integrals given in (153) and (154) of Theorem 11, in terms of L λ ( i ) ( t ) and U λ ( i ) ( t ) ( i = 1 , 2 ) defined by (155) to (158). Correspondingly, one gets the following λ → 1 limits for all t [ 0 , [ :
(a) 
for all κ A ] 0 , [ and all κ H [ 0 , [ with κ A κ H
lim λ 1 L λ ( 1 ) ( t ) λ = lim λ 1 L λ ( 2 ) ( t ) λ = lim λ 1 U λ ( 1 ) ( t ) λ = lim λ 1 U λ ( 2 ) ( t ) λ = 0 .
(b) 
for κ A = 0 and all κ H ] 0 , [
lim λ 1 L λ ( 1 ) ( t ) λ = κ H 2 · t 2 σ 2 ,
lim λ 1 L λ ( 2 ) ( t ) λ = κ H 2 · t 2 4 ,
lim λ 1 U λ ( 1 ) ( t ) λ = lim λ 1 U λ ( 2 ) ( t ) λ = 0 .
Proof of Lemma A7.
For all κ A , κ H [ 0 , [ with κ A κ H one can deduce from (150) as well as (155) to (158) the following derivatives:
L λ ( 1 ) ( t ) λ = 1 2 σ 2 { t 2 Λ λ κ λ Λ λ 2 κ A 2 κ H 2 2 e 2 Λ λ t e Λ λ t + e Λ λ t 1 e Λ λ t Λ λ Λ λ κ λ Λ λ κ A 2 κ H 2 2 Λ λ ( κ A κ H ) Λ λ κ λ Λ λ 2 κ A 2 κ H 2 2 } ,
L λ ( 2 ) ( t ) λ = 1 4 { Λ λ κ λ Λ λ · 1 e Λ λ t Λ λ 2 · κ A 2 κ H 2 2 Λ λ ( κ A κ H ) Λ λ κ λ Λ λ κ A 2 κ H 2 + t · e Λ λ t · Λ λ κ λ Λ λ 2 · 1 e Λ λ t Λ λ · κ A 2 κ H 2 } ,
U λ ( 1 ) ( t ) λ = 1 σ 2 { Λ λ κ λ 2 Λ λ t e Λ λ t κ A 2 κ H 2 t 2 e 1 2 ( Λ λ + κ λ ) t κ A 2 κ H 2 + 2 Λ λ ( κ A κ H ) e 1 2 ( Λ λ + κ λ ) t e Λ λ t 2 Λ λ · κ A 2 κ H 2 2 Λ λ ( κ A κ H ) + Λ λ κ λ 2 Λ λ 2 [ t 2 e 1 2 ( Λ λ + κ λ ) t κ A 2 κ H 2 + 2 Λ λ ( κ A κ H ) t 2 e 1 2 ( 3 Λ λ + κ λ ) t 3 κ A 2 κ H 2 + 2 Λ λ ( κ A κ H ) + e 1 2 ( Λ λ + κ λ ) t · 1 e Λ λ t Λ λ · κ A 2 κ H 2 ] + Λ λ κ λ Λ λ κ A 2 κ H 2 2 Λ λ ( κ A κ H ) e 1 2 ( Λ λ + κ λ ) t e Λ λ t Λ λ κ λ e 1 2 ( Λ λ + κ λ ) t 1 e Λ λ t 2 Λ λ } ,
U λ ( 2 ) ( t ) λ = Λ λ κ λ 2 Λ λ ( 3 Λ λ + κ λ ) [ t 2 e 1 2 ( 3 Λ λ + κ λ ) t 3 κ A 2 κ H 2 2 Λ λ + κ A κ H 1 e 1 2 ( 3 Λ λ + κ λ ) t 3 Λ λ + κ λ · 3 κ A 2 κ H 2 2 Λ λ + κ A κ H ] + Λ λ κ λ Λ λ t 2 e 1 2 ( Λ λ + κ λ ) t κ A 2 κ H 2 2 Λ λ + κ A κ H t e Λ λ t κ A 2 κ H 2 2 Λ λ + e 1 2 ( Λ λ + κ λ ) t e Λ λ t Λ λ κ A 2 κ H 2 2 Λ λ κ A + κ H + 2 κ A 2 κ H 2 2 Λ λ κ A + κ H Λ λ κ λ Λ λ 2 · κ A 2 κ H 2 2 · 1 Λ λ Λ λ κ λ 3 Λ λ + κ λ 1 e 1 2 ( 3 Λ λ + κ λ ) t e 1 2 ( Λ λ + κ λ ) t + e Λ λ t .
If κ A ] 0 , [ and κ H [ 0 , [ with κ A κ H , then one gets lim λ 1 Λ λ = lim λ 1 κ λ = κ A > 0 which implies (A74) from (A78) to (A81). For the proof of part (b), let us correspondingly assume κ A = 0 and κ H ] 0 , [ , which by (150) leads to κ λ = κ H · ( 1 λ ) , Λ λ = κ H · 1 λ and the convergences lim λ 1 Λ λ = lim λ 1 κ λ = 0 . From this, the assertions (A75), (A76), (A77) follow in a straightforward manner from (A78), (A79), (A80) – respectively – by using (parts of) the obvious relations
lim λ 1 κ λ Λ λ = 0 , lim λ 1 Λ λ ± κ λ Λ λ = lim λ 1 Λ λ κ λ Λ λ + κ λ = 1 ,
lim λ 1 1 e c λ · t c λ = t for all c λ Λ λ , Λ λ + κ λ 2 , 3 Λ λ + κ λ 2 .
In order to get the last assertion in (A77), we make use of the following limits
lim λ 1 1 Λ λ κ λ 3 3 Λ λ + κ λ = lim λ 1 4 κ H ( κ H κ H · 1 λ ) · ( 3 κ H + κ H · 1 λ ) = 4 3 κ H
and
$\lim_{\lambda\to1}\frac{1}{\Lambda_{\lambda}}\Big[\frac{1-e^{-\frac12(3\Lambda_{\lambda}+\kappa_{\lambda})t}}{3\Lambda_{\lambda}+\kappa_{\lambda}}-\frac{1-e^{-\Lambda_{\lambda}t}}{\Lambda_{\lambda}-\kappa_{\lambda}}+\frac{1-e^{-\frac12(\Lambda_{\lambda}+\kappa_{\lambda})t}}{\Lambda_{\lambda}-\kappa_{\lambda}}\Big]=0.$ (A85)
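The two limits above can be probed numerically. The following sketch is not part of the original proof; it assumes the case-(b) parametrization $\kappa_{\lambda}=\kappa_{H}\cdot(1-\lambda)$, $\Lambda_{\lambda}=\kappa_{H}\cdot\sqrt{1-\lambda}$ from (150) and illustrative values $\kappa_{H}=2$, $t=1.5$:

```python
import math

# Numerical sanity check (not part of the original proof) of the limits (A84), (A85),
# under the case-(b) parametrization from (150):
#   kappa_lambda = kappa_H * (1 - lam),  Lambda_lambda = kappa_H * sqrt(1 - lam).
kH, t = 2.0, 1.5

def quantities(x):                      # x = sqrt(1 - lam); x -> 0 as lam -> 1
    Lam, kap = kH * x, kH * x * x
    a84 = 1.0 / (Lam - kap) - 3.0 / (3.0 * Lam + kap)
    a85 = (1.0 / Lam) * ((1.0 - math.exp(-0.5 * (3.0 * Lam + kap) * t)) / (3.0 * Lam + kap)
                         - (1.0 - math.exp(-Lam * t)) / (Lam - kap)
                         + (1.0 - math.exp(-0.5 * (Lam + kap) * t)) / (Lam - kap))
    return a84, a85

a84, a85 = quantities(1e-3)
# a84 should be close to 4/(3*kH); a85 should be small (it vanishes linearly in x)
print(a84, 4.0 / (3.0 * kH), a85)
```

As $x=\sqrt{1-\lambda}\to0$, the first quantity approaches $4/(3\kappa_{H})$ and the second one shrinks proportionally to $x$, in accordance with (A84) and (A85).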
To see (A85), let us first observe that the involved limit can be rewritten as
$\lim_{\lambda\to1}\bigg\{\frac{1}{\Lambda_{\lambda}\,(\Lambda_{\lambda}-\kappa_{\lambda})}\Big[\frac{1}{3}-\frac{1}{3}e^{-\frac12(3\Lambda_{\lambda}+\kappa_{\lambda})t}+e^{-\Lambda_{\lambda}t}-e^{-\frac12(\Lambda_{\lambda}+\kappa_{\lambda})t}\Big] + \frac{1-e^{-\frac12(3\Lambda_{\lambda}+\kappa_{\lambda})t}}{\Lambda_{\lambda}}\Big[\frac{1}{3\Lambda_{\lambda}+\kappa_{\lambda}}-\frac{1}{3\,(\Lambda_{\lambda}-\kappa_{\lambda})}\Big]\bigg\}.$
Substituting $x:=\sqrt{1-\lambda}$ and applying l'Hospital's rule twice, we get for the first limit (A86)
$\lim_{x\to0}\frac{\frac{1}{3}-\frac{1}{3}e^{-\frac{\kappa_{H}t}{2}(3x+x^{2})}+e^{-\kappa_{H}tx}-e^{-\frac{\kappa_{H}t}{2}(x+x^{2})}}{\kappa_{H}^{2}\cdot(x^{2}-x^{3})} = \lim_{x\to0}\frac{\frac{\kappa_{H}t}{6}(3+2x)\,e^{-\frac{\kappa_{H}t}{2}(3x+x^{2})}-\kappa_{H}t\,e^{-\kappa_{H}tx}+\frac{\kappa_{H}t}{2}(1+2x)\,e^{-\frac{\kappa_{H}t}{2}(x+x^{2})}}{\kappa_{H}^{2}\cdot(2x-3x^{2})}$ $= \lim_{x\to0}\frac{\big(-\frac{\kappa_{H}^{2}t^{2}}{12}(3+2x)^{2}+\frac{\kappa_{H}t}{3}\big)e^{-\frac{\kappa_{H}t}{2}(3x+x^{2})}+\kappa_{H}^{2}t^{2}\,e^{-\kappa_{H}tx}-\big(\frac{\kappa_{H}^{2}t^{2}}{4}(1+2x)^{2}-\kappa_{H}t\big)e^{-\frac{\kappa_{H}t}{2}(x+x^{2})}}{\kappa_{H}^{2}\cdot(2-6x)} = \frac{1}{2\kappa_{H}^{2}}\Big(-\frac{3\kappa_{H}^{2}t^{2}}{4}+\frac{\kappa_{H}t}{3}+\kappa_{H}^{2}t^{2}-\frac{\kappa_{H}^{2}t^{2}}{4}+\kappa_{H}t\Big) = \frac{2t}{3\kappa_{H}}.$
The second limit (A87) becomes
$\lim_{\lambda\to1}\frac{1-e^{-\frac12(3\Lambda_{\lambda}+\kappa_{\lambda})t}}{3\Lambda_{\lambda}+\kappa_{\lambda}}\cdot\frac{3\Lambda_{\lambda}+\kappa_{\lambda}}{\Lambda_{\lambda}}\cdot\frac{-4\,\kappa_{H}}{\big(3\kappa_{H}+\sqrt{1-\lambda}\,\kappa_{H}\big)\big(3\kappa_{H}-3\sqrt{1-\lambda}\,\kappa_{H}\big)} = -\frac{2t}{3\kappa_{H}},$
and consequently (A85) follows. To proceed with the proof of (A77), we rearrange
lim λ 1 U λ ( 2 ) ( t ) λ = lim λ 1 { Λ λ κ λ Λ λ 2 [ Λ λ 3 Λ λ + κ λ t 2 e 1 2 ( 3 Λ λ + κ λ ) t 3 κ H 2 2 Λ λ κ H Λ λ 3 Λ λ + κ λ · 1 e 1 2 ( 3 Λ λ + κ λ ) t 3 Λ λ + κ λ 3 κ H 2 2 Λ λ κ H + Λ λ Λ λ κ λ e 1 2 ( Λ λ + κ λ ) t e Λ λ t Λ λ κ λ κ H 2 2 Λ λ + κ H Λ λ Λ λ κ λ t 2 e 1 2 ( Λ λ + κ λ ) t κ H 2 2 Λ λ κ H t e Λ λ t κ H 2 2 Λ λ ] + Λ λ κ λ Λ λ κ H 2 + 2 Λ λ κ H + Λ λ κ λ Λ λ 2 κ H 2 2 · 1 e 1 2 ( 3 Λ λ + κ λ ) t Λ λ ( 3 Λ λ + κ λ ) e 1 2 ( Λ λ + κ λ ) t e Λ λ t Λ λ ( Λ λ κ λ ) } = lim λ 1 { Λ λ κ λ Λ λ 2 [ κ H 2 t 4 3 e 1 2 ( 3 Λ λ + κ λ ) t 3 Λ λ + κ λ e 1 2 ( Λ λ + κ λ ) t Λ λ κ λ + 2 e Λ λ t Λ λ κ λ
+ κ H 2 2 3 1 e 1 2 ( 3 Λ λ + κ λ ) t 3 Λ λ + κ λ 2 1 e Λ λ t Λ λ κ λ 2 + 1 e 1 2 ( Λ λ + κ λ ) t Λ λ κ λ 2
+ κ H ( Λ λ 3 Λ λ + κ λ · t e 1 2 ( 3 Λ λ + κ λ ) t 2 + Λ λ 3 Λ λ + κ λ · 1 e 1 2 ( 3 Λ λ + κ λ ) t 3 Λ λ + κ λ Λ λ Λ λ κ λ · t e 1 2 ( Λ λ + κ λ ) t 2 + Λ λ Λ λ κ λ · 1 e Λ λ t Λ λ κ λ Λ λ Λ λ κ λ · 1 e 1 2 ( Λ λ + κ λ ) t Λ λ κ λ ) ] + Λ λ κ λ Λ λ κ H 2 + 2 Λ λ κ H + Λ λ κ λ Λ λ 2 κ H 2 2 · 1 e 1 2 ( 3 Λ λ + κ λ ) t Λ λ ( 3 Λ λ + κ λ ) e 1 2 ( Λ λ + κ λ ) t e Λ λ t Λ λ ( Λ λ κ λ ) } .
By means of (A82) to (A84), the limit of the expression after the square brackets in (A89) becomes
$\lim_{\lambda\to1}\bigg\{\frac{\kappa_{H}^{2}t}{4}\Big[\frac{1-e^{-\frac12(\Lambda_{\lambda}+\kappa_{\lambda})t}}{\Lambda_{\lambda}-\kappa_{\lambda}}-2\,\frac{1-e^{-\Lambda_{\lambda}t}}{\Lambda_{\lambda}-\kappa_{\lambda}}+3\,\frac{1-e^{-\frac12(3\Lambda_{\lambda}+\kappa_{\lambda})t}}{3\Lambda_{\lambda}+\kappa_{\lambda}}+\frac{1}{\Lambda_{\lambda}-\kappa_{\lambda}}-\frac{3}{3\Lambda_{\lambda}+\kappa_{\lambda}}\Big]\bigg\} = \frac{\kappa_{H}t}{3},$
and, with (A85), the limit of the expression in (A90) becomes
$\lim_{\lambda\to1}\bigg\{\frac{\Lambda_{\lambda}}{\Lambda_{\lambda}-\kappa_{\lambda}}\cdot\frac{\kappa_{H}^{2}}{2\Lambda_{\lambda}}\Big[\frac{1-e^{-\frac12(3\Lambda_{\lambda}+\kappa_{\lambda})t}}{3\Lambda_{\lambda}+\kappa_{\lambda}}-\frac{1-e^{-\Lambda_{\lambda}t}}{\Lambda_{\lambda}-\kappa_{\lambda}}+\frac{1-e^{-\frac12(\Lambda_{\lambda}+\kappa_{\lambda})t}}{\Lambda_{\lambda}-\kappa_{\lambda}}\Big] - \frac{\kappa_{H}^{2}}{2}\cdot\frac{1-e^{-\frac12(3\Lambda_{\lambda}+\kappa_{\lambda})t}}{3\Lambda_{\lambda}+\kappa_{\lambda}}\cdot\Big[\frac{1}{\Lambda_{\lambda}-\kappa_{\lambda}}-\frac{3}{3\Lambda_{\lambda}+\kappa_{\lambda}}\Big]\bigg\} = -\frac{\kappa_{H}t}{3}.$
By putting (A91)–(A93) together with (A85) we finally end up with
lim λ 1 U λ ( 2 ) ( t ) λ = κ H t 3 κ H t 3 + κ H t 6 + t 6 t 2 + t t 2 + κ H 2 + κ H 2 2 · 0 = 0 ,
which finishes the proof of Lemma A7. □
Proof of Theorem 12.
Recall from (131) the approximative Poisson offspring-distribution parameter $\beta^{(m)}:=1-\frac{\kappa}{\sigma^{2}m}$ and the Poisson immigration-distribution parameter $\alpha^{(m)}:=\beta^{(m)}\cdot\frac{\eta}{\sigma^{2}}$, which constitute a special case of $\big(\beta_{A}^{(m)},\beta_{H}^{(m)},\alpha_{A}^{(m)},\alpha_{H}^{(m)}\big)\in\mathcal{P}_{NI}\cap\mathcal{P}_{SP,1}$. Let us first calculate $\lim_{m\to\infty}I\big(P_{A,\lfloor\sigma^{2}mt\rfloor}^{(m)}\,\|\,P_{H,\lfloor\sigma^{2}mt\rfloor}^{(m)}\big)$ by starting from Theorem 3(a). Correspondingly, for all $\kappa_{A}\geq0$, $\kappa_{H}\geq0$ with $\kappa_{A}\ne\kappa_{H}$ we evaluate, by a twofold application of l'Hospital's rule,
$\lim_{m\to\infty}m^{2}\cdot\Big[\beta_{A}^{(m)}\cdot\log\frac{\beta_{A}^{(m)}}{\beta_{H}^{(m)}}-\beta_{A}^{(m)}+\beta_{H}^{(m)}\Big] = \lim_{m\to\infty}\Big(-\frac{m}{2\sigma^{2}}\Big)\cdot\Big[\kappa_{A}\log\frac{\beta_{A}^{(m)}}{\beta_{H}^{(m)}}+\kappa_{H}\Big(1-\frac{\beta_{A}^{(m)}}{\beta_{H}^{(m)}}\Big)\Big] = \frac{1}{2\sigma^{4}}\cdot\lim_{m\to\infty}\frac{\beta_{H}^{(m)}\cdot\kappa_{A}-\beta_{A}^{(m)}\cdot\kappa_{H}}{\big(\beta_{H}^{(m)}\big)^{2}}\cdot\Big(\kappa_{A}\cdot\frac{\beta_{H}^{(m)}}{\beta_{A}^{(m)}}-\kappa_{H}\Big) = \frac{(\kappa_{A}-\kappa_{H})^{2}}{2\sigma^{4}}.$
Additionally there holds
$\lim_{m\to\infty}m\cdot\big(1-\beta_{A}^{(m)}\big)=\frac{\kappa_{A}}{\sigma^{2}} \quad\text{and}\quad \lim_{m\to\infty}\big(\beta_{A}^{(m)}\big)^{\lfloor\sigma^{2}mt\rfloor} = \lim_{m\to\infty}\Big[\Big(1-\frac{\kappa_{A}}{\sigma^{2}m}\Big)^{m}\Big]^{\lfloor\sigma^{2}mt\rfloor/m} = e^{-\kappa_{A}\cdot t}.$
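The limits (A94) and (A95) lend themselves to a quick numerical sanity check. The sketch below is not part of the original proof; it uses $\beta_{A}^{(m)}=1-\kappa_{A}/(\sigma^{2}m)$, $\beta_{H}^{(m)}=1-\kappa_{H}/(\sigma^{2}m)$ as in (131), with illustrative values $\kappa_{A}=2$, $\kappa_{H}=1$, $\sigma=1$, $t=0.7$:

```python
import math

# Numerical sanity check (not part of the original proof) of the limits (A94), (A95),
# with beta_A(m) = 1 - kappa_A/(sigma^2 m) and beta_H(m) = 1 - kappa_H/(sigma^2 m).
kA, kH, sigma, t = 2.0, 1.0, 1.0, 0.7
m = 10**5
bA = 1.0 - kA / (sigma**2 * m)
bH = 1.0 - kH / (sigma**2 * m)

a94 = m**2 * (bA * math.log(bA / bH) - bA + bH)   # should approach (kA-kH)^2/(2 sigma^4)
a95 = bA ** math.floor(sigma**2 * m * t)          # should approach exp(-kA * t)
print(a94, (kA - kH)**2 / (2.0 * sigma**4), a95, math.exp(-kA * t))
```

For large $m$ the two computed quantities agree with $(\kappa_{A}-\kappa_{H})^{2}/(2\sigma^{4})$ and $e^{-\kappa_{A}t}$ up to terms of order $1/m$.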
For κ A > 0 , we apply the upper part of formula (69) as well as (A94) and (A95) to derive
$\lim_{m\to\infty}I\big(P_{A,\lfloor\sigma^{2}mt\rfloor}^{(m)}\,\|\,P_{H,\lfloor\sigma^{2}mt\rfloor}^{(m)}\big) = \lim_{m\to\infty}\bigg\{\frac{m^{2}\cdot\big[\beta_{A}^{(m)}\cdot\log\frac{\beta_{A}^{(m)}}{\beta_{H}^{(m)}}-\beta_{A}^{(m)}+\beta_{H}^{(m)}\big]}{m\cdot\big(1-\beta_{A}^{(m)}\big)}\cdot\Big(\frac{X_{0}^{(m)}}{m}-\frac{\alpha_{A}^{(m)}}{m\cdot\big(1-\beta_{A}^{(m)}\big)}\Big)\cdot\Big(1-\big(\beta_{A}^{(m)}\big)^{\lfloor\sigma^{2}mt\rfloor}\Big) + \frac{\alpha_{A}^{(m)}}{\beta_{A}^{(m)}\cdot m\cdot\big(1-\beta_{A}^{(m)}\big)}\cdot m^{2}\cdot\Big[\beta_{A}^{(m)}\cdot\log\frac{\beta_{A}^{(m)}}{\beta_{H}^{(m)}}-\beta_{A}^{(m)}+\beta_{H}^{(m)}\Big]\cdot\frac{\lfloor\sigma^{2}mt\rfloor}{m}\bigg\} = \frac{(\kappa_{A}-\kappa_{H})^{2}}{2\sigma^{2}\cdot\kappa_{A}}\cdot\Big[\Big(\tilde{X}_{0}-\frac{\eta}{\kappa_{A}}\Big)\cdot\big(1-e^{-\kappa_{A}\cdot t}\big)+\eta\cdot t\Big].$
For $\kappa_{A}=0$ (and thus $\kappa_{H}>0$, $\beta_{A}^{(m)}\equiv1$, $\alpha_{A}^{(m)}\equiv\eta/\sigma^{2}$), we apply the lower part of formula (69) as well as (A94) and (A95) to obtain
$\lim_{m\to\infty}I\big(P_{A,\lfloor\sigma^{2}mt\rfloor}^{(m)}\,\|\,P_{H,\lfloor\sigma^{2}mt\rfloor}^{(m)}\big) = \lim_{m\to\infty}\bigg\{m^{2}\cdot\big[\beta_{H}^{(m)}-\log\beta_{H}^{(m)}-1\big]\cdot\Big[\frac{\eta}{2\sigma^{2}}\cdot\frac{\lfloor\sigma^{2}mt\rfloor^{2}}{m^{2}}+\Big(\frac{X_{0}^{(m)}}{m}+\frac{\eta}{2\sigma^{2}\cdot m}\Big)\cdot\frac{\lfloor\sigma^{2}mt\rfloor}{m}\Big]\bigg\} = \frac{\kappa_{H}^{2}}{2\sigma^{2}}\cdot\Big[\frac{\eta}{2}\cdot t^{2}+\tilde{X}_{0}\cdot t\Big].$
Let us now calculate the “converse” double limit
$\lim_{\lambda\to1}\lim_{m\to\infty}I_{\lambda}\big(P_{A,\lfloor\sigma^{2}mt\rfloor}^{(m)}\,\|\,P_{H,\lfloor\sigma^{2}mt\rfloor}^{(m)}\big) = \lim_{\lambda\to1}\lim_{m\to\infty}\frac{1-H_{\lambda}\big(P_{A,\lfloor\sigma^{2}mt\rfloor}^{(m)}\,\|\,P_{H,\lfloor\sigma^{2}mt\rfloor}^{(m)}\big)}{\lambda\cdot(1-\lambda)}.$
This will be achieved by evaluating for each t > 0 the two limits
$\lim_{\lambda\to1}\frac{1-d_{\lambda,\tilde{X}_{0},t}^{L}}{\lambda\cdot(1-\lambda)} \quad\text{and}\quad \lim_{\lambda\to1}\frac{1-d_{\lambda,\tilde{X}_{0},t}^{U}}{\lambda\cdot(1-\lambda)}$
which will turn out to coincide; the involved lower and upper bounds $d_{\lambda,\tilde{X}_{0},t}^{L}$, $d_{\lambda,\tilde{X}_{0},t}^{U}$ defined by (153) and (154) satisfy $\lim_{\lambda\to1}d_{\lambda,\tilde{X}_{0},t}^{L}=\lim_{\lambda\to1}d_{\lambda,\tilde{X}_{0},t}^{U}=1$ as an easy consequence of the limits (cf. (150))
$\lim_{\lambda\to1}\Lambda_{\lambda}=\kappa_{A}\geq0 \quad\text{and}\quad \lim_{\lambda\to1}\kappa_{\lambda}=\kappa_{A}\geq0,$
as well as the formulas (A82) and (A83) for the case κ A = 0 . Accordingly, we compute
$\lim_{\lambda\to1}\frac{1-d_{\lambda,\tilde{X}_{0},t}^{L}}{\lambda\cdot(1-\lambda)} = \lim_{\lambda\to1}\frac{d_{\lambda,\tilde{X}_{0},t}^{L}}{2\lambda-1}\cdot\frac{\partial}{\partial\lambda}\Big[-\frac{\Lambda_{\lambda}-\kappa_{\lambda}}{\sigma^{2}}\cdot\Big(\tilde{X}_{0}-\frac{\eta}{\Lambda_{\lambda}}\Big)\cdot\big(1-e^{-\Lambda_{\lambda}\cdot t}\big)-\frac{\eta}{\sigma^{2}}\cdot(\Lambda_{\lambda}-\kappa_{\lambda})\cdot t+L_{\lambda}^{(1)}(t)\cdot\tilde{X}_{0}+\frac{\eta}{\sigma^{2}}\cdot L_{\lambda}^{(2)}(t)\Big]$ $= \lim_{\lambda\to1}\bigg\{-\frac{\Lambda_{\lambda}-\kappa_{\lambda}}{\sigma^{2}}\cdot\Big[\Big(\tilde{X}_{0}-\frac{\eta}{\Lambda_{\lambda}}\Big)\cdot t\,e^{-\Lambda_{\lambda}\cdot t}\cdot\frac{\partial\Lambda_{\lambda}}{\partial\lambda}+\big(1-e^{-\Lambda_{\lambda}\cdot t}\big)\cdot\frac{\eta}{\Lambda_{\lambda}^{2}}\cdot\frac{\partial\Lambda_{\lambda}}{\partial\lambda}\Big] - \frac{1}{\sigma^{2}}\cdot\frac{\partial(\Lambda_{\lambda}-\kappa_{\lambda})}{\partial\lambda}\cdot\Big(\tilde{X}_{0}-\frac{\eta}{\Lambda_{\lambda}}\Big)\cdot\big(1-e^{-\Lambda_{\lambda}\cdot t}\big) - \frac{\eta\,t}{\sigma^{2}}\cdot\frac{\partial(\Lambda_{\lambda}-\kappa_{\lambda})}{\partial\lambda} + \tilde{X}_{0}\cdot\frac{\partial L_{\lambda}^{(1)}(t)}{\partial\lambda}+\frac{\eta}{\sigma^{2}}\cdot\frac{\partial L_{\lambda}^{(2)}(t)}{\partial\lambda}\bigg\}$, with
$\frac{\partial\Lambda_{\lambda}}{\partial\lambda}=\frac{\kappa_{A}^{2}-\kappa_{H}^{2}}{2\Lambda_{\lambda}} \quad\text{and}\quad \frac{\partial\kappa_{\lambda}}{\partial\lambda}=\kappa_{A}-\kappa_{H}.$
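These derivative formulas can be cross-checked by finite differences. The sketch below is not part of the original proof; it assumes the parametrization $\Lambda_{\lambda}=\sqrt{\lambda\kappa_{A}^{2}+(1-\lambda)\kappa_{H}^{2}}$, $\kappa_{\lambda}=\lambda\kappa_{A}+(1-\lambda)\kappa_{H}$ suggested by (150), with illustrative values $\kappa_{A}=2$, $\kappa_{H}=0.5$:

```python
import math

# Finite-difference sanity check (not part of the original proof) of the derivatives
# dLambda/dlambda and dkappa/dlambda, assuming the parametrization suggested by (150):
#   Lambda(lam) = sqrt(lam*kA^2 + (1-lam)*kH^2),  kappa(lam) = lam*kA + (1-lam)*kH.
kA, kH = 2.0, 0.5
Lam = lambda lam: math.sqrt(lam * kA**2 + (1.0 - lam) * kH**2)
kap = lambda lam: lam * kA + (1.0 - lam) * kH

lam, h = 0.6, 1e-6
dLam = (Lam(lam + h) - Lam(lam - h)) / (2.0 * h)    # central difference
dkap = (kap(lam + h) - kap(lam - h)) / (2.0 * h)
print(dLam, (kA**2 - kH**2) / (2.0 * Lam(lam)), dkap, kA - kH)
```

The central differences match the closed forms $(\kappa_{A}^{2}-\kappa_{H}^{2})/(2\Lambda_{\lambda})$ and $\kappa_{A}-\kappa_{H}$ to high accuracy.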
For the case κ A > 0 , one can combine this with (A97) and (A74) to end up with
$\lim_{\lambda\to1}\frac{1-d_{\lambda,\tilde{X}_{0},t}^{L}}{\lambda\cdot(1-\lambda)} = \frac{(\kappa_{A}-\kappa_{H})^{2}}{2\sigma^{2}\cdot\kappa_{A}}\cdot\Big[\Big(\tilde{X}_{0}-\frac{\eta}{\kappa_{A}}\Big)\cdot\big(1-e^{-\kappa_{A}\cdot t}\big)+\eta\cdot t\Big].$
For the case $\kappa_{A}=0$, we continue the calculation (A98) by rearranging terms and by employing the formulas (A75), (A76), (A82) and (A83) as well as the obvious relation $\lim_{\lambda\to1}\big(\frac{1}{\Lambda_{\lambda}}-\frac{\Lambda_{\lambda}-\kappa_{\lambda}}{\Lambda_{\lambda}^{2}}\big)=\frac{1}{\kappa_{H}}$ and obtain
$\lim_{\lambda\to1}\frac{1-d_{\lambda,\tilde{X}_{0},t}^{L}}{\lambda\cdot(1-\lambda)} = \lim_{\lambda\to1}\bigg\{\frac{\kappa_{H}^{2}\cdot\tilde{X}_{0}}{2\sigma^{2}}\Big[\frac{\Lambda_{\lambda}-\kappa_{\lambda}}{\Lambda_{\lambda}}\cdot t\,e^{-\Lambda_{\lambda}t}+\frac{1-e^{-\Lambda_{\lambda}t}}{\Lambda_{\lambda}}\Big] + \frac{\eta\cdot\kappa_{H}^{2}\cdot t}{2\sigma^{2}}\Big[\frac{1}{\Lambda_{\lambda}}-\frac{\Lambda_{\lambda}-\kappa_{\lambda}}{\Lambda_{\lambda}^{2}}+\frac{\Lambda_{\lambda}-\kappa_{\lambda}}{\Lambda_{\lambda}}\cdot\frac{1-e^{-\Lambda_{\lambda}t}}{\Lambda_{\lambda}}\Big] - \frac{\eta\cdot\kappa_{H}^{2}}{2\sigma^{2}}\cdot\frac{1-e^{-\Lambda_{\lambda}t}}{\Lambda_{\lambda}}\cdot\Big[\frac{1}{\Lambda_{\lambda}}-\frac{\Lambda_{\lambda}-\kappa_{\lambda}}{\Lambda_{\lambda}^{2}}\Big] - \frac{\kappa_{H}\cdot\tilde{X}_{0}}{\sigma^{2}}\big(1-e^{-\Lambda_{\lambda}t}\big) + \frac{\eta\cdot\kappa_{H}}{\sigma^{2}}\Big[\frac{1-e^{-\Lambda_{\lambda}t}}{\Lambda_{\lambda}}-t\Big] + \frac{\partial L_{\lambda}^{(1)}(t)}{\partial\lambda}\cdot\tilde{X}_{0}+\frac{\eta}{\sigma^{2}}\cdot\frac{\partial L_{\lambda}^{(2)}(t)}{\partial\lambda}\bigg\}$ $= \frac{\kappa_{H}^{2}\,\tilde{X}_{0}\,t}{\sigma^{2}}+\frac{\eta\,\kappa_{H}^{2}\,t}{2\sigma^{2}}\Big(\frac{1}{\kappa_{H}}+t\Big)-\frac{\eta\,\kappa_{H}\,t}{2\sigma^{2}}-\frac{\kappa_{H}^{2}\,\tilde{X}_{0}\,t}{2\sigma^{2}}-\frac{\eta\,\kappa_{H}^{2}\,t^{2}}{4\sigma^{2}} = \frac{\kappa_{H}^{2}}{2\sigma^{2}}\cdot\Big[\frac{\eta}{2}\cdot t^{2}+\tilde{X}_{0}\cdot t\Big].$
Let us now turn to the second limit in (A96), for which we compute analogously to (A98)
$\lim_{\lambda\to1}\frac{1-d_{\lambda,\tilde{X}_{0},t}^{U}}{\lambda\cdot(1-\lambda)} = \lim_{\lambda\to1}\frac{d_{\lambda,\tilde{X}_{0},t}^{U}}{2\lambda-1}\cdot\frac{\partial}{\partial\lambda}\Big[-\frac{\Lambda_{\lambda}-\kappa_{\lambda}}{\sigma^{2}}\cdot\Big(\tilde{X}_{0}-\frac{\eta}{\frac12(\Lambda_{\lambda}+\kappa_{\lambda})}\Big)\cdot\big(1-e^{-\frac12(\Lambda_{\lambda}+\kappa_{\lambda})\cdot t}\big)-\frac{\eta}{\sigma^{2}}\cdot(\Lambda_{\lambda}-\kappa_{\lambda})\cdot t-U_{\lambda}^{(1)}(t)\cdot\tilde{X}_{0}-\frac{\eta}{\sigma^{2}}\cdot U_{\lambda}^{(2)}(t)\Big]$ $= \lim_{\lambda\to1}\bigg\{-\frac{\Lambda_{\lambda}-\kappa_{\lambda}}{\sigma^{2}}\cdot\Big[\Big(\tilde{X}_{0}-\frac{\eta}{\frac12(\Lambda_{\lambda}+\kappa_{\lambda})}\Big)\cdot\frac{t}{2}\,e^{-\frac12(\Lambda_{\lambda}+\kappa_{\lambda})\cdot t}\cdot\frac{\partial(\Lambda_{\lambda}+\kappa_{\lambda})}{\partial\lambda}+\big(1-e^{-\frac12(\Lambda_{\lambda}+\kappa_{\lambda})\cdot t}\big)\cdot\frac{2\eta}{(\Lambda_{\lambda}+\kappa_{\lambda})^{2}}\cdot\frac{\partial(\Lambda_{\lambda}+\kappa_{\lambda})}{\partial\lambda}\Big] - \frac{1}{\sigma^{2}}\cdot\frac{\partial(\Lambda_{\lambda}-\kappa_{\lambda})}{\partial\lambda}\cdot\Big(\tilde{X}_{0}-\frac{\eta}{\frac12(\Lambda_{\lambda}+\kappa_{\lambda})}\Big)\cdot\big(1-e^{-\frac12(\Lambda_{\lambda}+\kappa_{\lambda})\cdot t}\big) - \frac{\eta\,t}{\sigma^{2}}\cdot\frac{\partial(\Lambda_{\lambda}-\kappa_{\lambda})}{\partial\lambda} - \tilde{X}_{0}\cdot\frac{\partial U_{\lambda}^{(1)}(t)}{\partial\lambda}-\frac{\eta}{\sigma^{2}}\cdot\frac{\partial U_{\lambda}^{(2)}(t)}{\partial\lambda}\bigg\}.$
For the case κ A > 0 , one can combine this with (A97), (A99) and (A74) to end up with
$\lim_{\lambda\to1}\frac{1-d_{\lambda,\tilde{X}_{0},t}^{U}}{\lambda\cdot(1-\lambda)} = \frac{(\kappa_{A}-\kappa_{H})^{2}}{2\sigma^{2}\cdot\kappa_{A}}\cdot\Big[\Big(\tilde{X}_{0}-\frac{\eta}{\kappa_{A}}\Big)\cdot\big(1-e^{-\kappa_{A}\cdot t}\big)+\eta\cdot t\Big].$
For the case $\kappa_{A}=0$, we continue the calculation of (A102) by rearranging terms and by employing the formulas (A77), (A82) and (A83) as well as the obvious relation $\lim_{\lambda\to1}\big(\frac{1}{\Lambda_{\lambda}}-\frac{\Lambda_{\lambda}-\kappa_{\lambda}}{\Lambda_{\lambda}(\Lambda_{\lambda}+\kappa_{\lambda})}\big)=\frac{2}{\kappa_{H}}$ to obtain
lim λ 1 1 d λ , X ˜ 0 , t U λ · ( 1 λ ) = lim λ 1 { t · X ˜ 0 4 σ 2 · Λ λ κ λ Λ λ · e 1 2 Λ λ + κ λ · t κ H 2 + 2 Λ λ κ H + X ˜ 0 2 σ 2 · 1 e 1 2 ( Λ λ + κ λ ) · t Λ λ κ H 2 2 Λ λ κ H η · t σ 2 [ κ H 1 + e 1 2 Λ λ + κ λ · t Λ λ κ λ Λ λ + κ λ κ H 2 2 · 1 Λ λ Λ λ κ λ Λ λ ( Λ λ + κ λ ) + Λ λ κ λ Λ λ + κ λ · 1 e 1 2 Λ λ + κ λ · t Λ λ ] + 2 η σ 2 · 1 e 1 2 Λ λ + κ λ · t Λ λ + κ λ κ H 1 + Λ λ κ λ Λ λ + κ λ κ H 2 2 1 Λ λ Λ λ κ λ Λ λ ( Λ λ + κ λ ) U λ ( 1 ) ( t ) λ · X ˜ 0 η σ 2 U λ ( 2 ) ( t ) λ } = κ H 2 t X ˜ 0 4 σ 2 + κ H 2 t X ˜ 0 4 σ 2 η t σ 2 2 κ H κ H κ H 2 t 4 + η t σ 2 2 κ H κ H = κ H 2 2 σ 2 η 2 · t 2 + X ˜ 0 · t .
Since (A100) coincides with (A103) and (A101) coincides with (A104), we have finished the proof. □

References

  1. Liese, F.; Vajda, I. Convex Statistical Distances; Teubner: Leipzig, Germany, 1987. [Google Scholar]
  2. Read, T.R.C.; Cressie, N.A.C. Goodness-of-Fit Statistics for Discrete Multivariate Data; Springer: New York, NY, USA, 1988. [Google Scholar]
  3. Vajda, I. Theory of Statistical Inference and Information; Kluwer: Dordrecht, The Netherlands, 1989. [Google Scholar]
  4. Csiszár, I.; Shields, P.C. Information Theory and Statistics: A Tutorial; Now Publishers: Hanover, MA, USA, 2004. [Google Scholar]
  5. Stummer, W. Exponentials, Diffusions, Finance, Entropy and Information; Shaker: Aachen, Germany, 2004. [Google Scholar]
  6. Pardo, L. Statistical Inference Based on Divergence Measures; Chapman & Hall/CRC: Boca Raton, FL, USA, 2006. [Google Scholar]
  7. Liese, F.; Miescke, K.J. Statistical Decision Theory: Estimation, Testing, and Selection; Springer: New York, NY, USA, 2008. [Google Scholar]
  8. Basu, A.; Shioya, H.; Park, C. Statistical Inference: The Minimum Distance Approach; CRC Press: Boca Raton, FL, USA, 2011. [Google Scholar]
  9. Voinov, V.; Nikulin, M.; Balakrishnan, N. Chi-Squared Goodness of Fit Tests with Applications; Academic Press: Waltham, MA, USA, 2013. [Google Scholar]
  10. Liese, F.; Vajda, I. On divergences and informations in statistics and information theory. IEEE Trans. Inform. Theory 2006, 52, 4394–4412. [Google Scholar]
  11. Vajda, I.; van der Meulen, E.C. Goodness-of-fit criteria based on observations quantized by hypothetical and empirical percentiles. In Handbook of Fitting Statistical Distributions with R; Karian, Z.A., Dudewicz, E.J., Eds.; CRC: Heidelberg, Germany, 2010; pp. 917–994. [Google Scholar]
  12. Stummer, W.; Vajda, I. On Bregman distances and divergences of probability measures. IEEE Trans. Inform. Theory 2012, 58, 1277–1288. [Google Scholar]
  13. Kißlinger, A.-L.; Stummer, W. Robust statistical engineering by means of scaled Bregman distances. In Recent Advances in Robust Statistics–Theory and Applications; Agostinelli, C., Basu, A., Filzmoser, P., Mukherjee, D., Eds.; Springer: New Delhi, India, 2016; pp. 81–113. [Google Scholar]
  14. Broniatowski, M.; Stummer, W. Some universal insights on divergences for statistics, machine learning and artificial intelligence. In Geometric Structures of Information; Nielsen, F., Ed.; Springer: Cham, Switzerland, 2019; pp. 149–211. [Google Scholar]
  15. Stummer, W.; Vajda, I. Optimal statistical decisions about some alternative financial models. J. Econom. 2007, 137, 441–471. [Google Scholar]
  16. Stummer, W.; Lao, W. Limits of Bayesian decision related quantities of binomial asset price models. Kybernetika 2012, 48, 750–767. [Google Scholar]
  17. Csiszár, I. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Publ. Math. Inst. Hungar. Acad. Sci. 1963, A-8, 85–108. [Google Scholar]
  18. Ali, S.M.; Silvey, S.D. A general class of coefficients of divergence of one distribution from another. J. Roy. Statist. Soc. B 1966, 28, 131–140. [Google Scholar]
  19. Morimoto, T. Markov processes and the H-theorem. J. Phys. Soc. Jpn 1963, 18, 328–331. [Google Scholar]
  20. van Erven, T.; Harremoes, P. Renyi divergence and Kullback-Leibler divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820. [Google Scholar]
  21. Newman, C.M. On the orthogonality of independent increment processes. In Topics in Probability Theory; Courant Institute of Mathematical Sciences New York University: New York, NY, USA, 1973; pp. 93–111. [Google Scholar]
  22. Liese, F. Hellinger integrals of Gaussian processes with independent increments. Stochastics 1982, 6, 81–96. [Google Scholar]
  23. Memin, J.; Shiryayev, A.N. Distance de Hellinger-Kakutani des lois correspondant a deux processus a accroissements indépendants. Probab. Theory Relat. Fields 1985, 70, 67–89. [Google Scholar]
  24. Jacod, J.; Shiryaev, A.N. Limit Theorems for Stochastic Processes; Springer: Berlin, Germany, 1987. [Google Scholar]
  25. Linkov, Y.N.; Shevlyakov, Y.A. Large deviation theorems in the hypotheses testing problems for processes with independent increments. Theory Stoch. Process 1998, 4, 198–210. [Google Scholar]
  26. Liese, F. Hellinger integrals, error probabilities and contiguity of Gaussian processes with independent increments and Poisson processes. J. Inf. Process. Cybern. 1985, 21, 297–313. [Google Scholar]
  27. Kabanov, Y.M.; Liptser, R.S.; Shiryaev, A.N. On the variation distance for probability measures defined on a filtered space. Probab. Theory Relat. Fields 1986, 71, 19–35. [Google Scholar]
  28. Liese, F. Hellinger integrals of diffusion processes. Statistics 1986, 17, 63–78. [Google Scholar]
  29. Vajda, I. Distances and discrimination rates for stochastic processes. Stoch. Process. Appl. 1990, 35, 47–57. [Google Scholar]
  30. Stummer, W. The Novikov and entropy conditions of multidimensional diffusion processes with singular drift. Probab. Theory Relat. Fields 1993, 97, 515–542. [Google Scholar]
  31. Stummer, W. On a statistical information measure of diffusion processes. Stat. Decis. 1999, 17, 359–376. [Google Scholar]
  32. Stummer, W. On a statistical information measure for a generalized Samuelson-Black-Scholes model. Stat. Decis. 2001, 19, 289–314. [Google Scholar]
  33. Bartoszynski, R. Branching processes and the theory of epidemics. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Vol. IV; Le Cam, L.M., Neyman, J., Eds.; University of California Press: Berkeley, CA, USA, 1967; pp. 259–269. [Google Scholar]
  34. Ludwig, D. Qualitative behaviour of stochastic epidemics. Math. Biosci. 1975, 23, 47–73. [Google Scholar]
  35. Becker, N.G. Estimation for an epidemic model. Biometrics 1976, 32, 769–777. [Google Scholar]
  36. Becker, N.G. Estimation for discrete time branching processes with applications to epidemics. Biometrics 1977, 33, 515–522. [Google Scholar]
  37. Metz, J.A.J. The epidemic in a closed population with all susceptibles equally vulnerable; some results for large susceptible populations and small initial infections. Acta Biotheor. 1978, 27, 75–123. [Google Scholar]
  38. Heyde, C.C. On assessing the potential severity of an outbreak of a rare infectious disease. Austral. J. Stat. 1979, 21, 282–292. [Google Scholar]
  39. Von Bahr, B.; Martin-Löf, A. Threshold limit theorems for some epidemic processes. Adv. Appl. Prob. 1980, 12, 319–349. [Google Scholar]
  40. Ball, F. The threshold behaviour of epidemic models. J. Appl. Prob. 1983, 20, 227–241. [Google Scholar]
  41. Jacob, C. Branching processes: Their role in epidemics. Int. J. Environ. Res. Public Health 2010, 7, 1186–1204. [Google Scholar]
  42. Barbour, A.D.; Reinert, G. Approximating the epidemic curve. Electron. J. Probab. 2013, 18, 1–30. [Google Scholar]
  43. Britton, T.; Pardoux, E. Stochastic epidemics in a homogeneous community. In Stochastic Epidemic Models; Britton, T., Pardoux, E., Eds.; Springer: Cham, Switzerland, 2019; pp. 1–120. [Google Scholar]
  44. Dion, J.P.; Gauthier, G.; Latour, A. Branching processes with immigration and integer-valued time series. Serdica Math. J. 1995, 21, 123–136. [Google Scholar]
  45. Grunwald, G.K.; Hyndman, R.J.; Tedesco, L.; Tweedie, R.L. Non-Gaussian conditional linear AR(1) models. Aust. N. Z. J. Stat. 2000, 42, 479–495. [Google Scholar]
  46. Kedem, B.; Fokianos, K. Regression Models for Time Series Analysis; Wiley: Hoboken, NJ, USA, 2002. [Google Scholar]
  47. Held, L.; Höhle, M.; Hofmann, M. A statistical framework for the analysis of multivariate infectious disease surveillance counts. Stat. Model. 2005, 5, 187–199. [Google Scholar]
  48. Weiss, C.H. An Introduction to Discrete-Valued Time Series; Wiley: Hoboken, NJ, USA, 2018. [Google Scholar]
  49. Feigin, P.D.; Passy, U. The geometric programming dual to the extinction probability problem in simple branching processes. Ann. Probab. 1981, 9, 498–503. [Google Scholar]
  50. Mordecki, E. Asymptotic mixed normality and Hellinger processes. Stoch. Stoch. Rep. 1994, 48, 129–143. [Google Scholar]
  51. Sriram, T.N.; Vidyashankar, A.N. Minimum Hellinger distance estimation for supercritical Galton-Watson processes. Stat. Probab. Lett. 2000, 50, 331–342. [Google Scholar]
  52. Guttorp, P. Statistical Inference for Branching Processes; Wiley: New York, NY, USA, 1991. [Google Scholar]
  53. Linkov, Y.N.; Lunyova, L.A. Large deviation theorems in the hypothesis testing problems for the Galton-Watson processes with immigration. Theory Stoch. Process 1996, 2, 120–132, Erratum in Theory Stoch. Process 1997, 3, 270–285. [Google Scholar]
  54. Heathcote, C.R. A branching process allowing immigration. J. R. Stat. Soc. B 1965, 27, 138–143, Erratum in: Heathcote, C.R. Corrections and comments on the paper “A branching process allowing immigration”. J. R. Stat. Soc. B 1966, 28, 213–217. [Google Scholar]
  55. Athreya, K.B.; Ney, P.E. Branching Processes; Springer: New York, NY, USA, 1972. [Google Scholar]
  56. Jagers, P. Branching Processes with Biological Applications; Wiley: London, UK, 1975. [Google Scholar]
  57. Asmussen, S.; Hering, H. Branching Processes; Birkhäuser: Boston, MA, USA, 1983. [Google Scholar]
  58. Haccou, P.; Jagers, P.; Vatutin, V.A. Branching Processes: Variation, Growth, and Extinction of Populations; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  59. Heyde, C.C.; Seneta, E. Estimation theory for growth and immigration rates in a multiplicative process. J. Appl. Probab. 1972, 9, 235–256. [Google Scholar]
  60. Basawa, I.V.; Rao, B.L.S. Statistical Inference of Stochastic Processes; Academic Press: London, UK, 1980. [Google Scholar]
  61. Basawa, I.V.; Scott, D.J. Asymptotic Optimal Inference for Non-Ergodic Models; Springer: New York, NY, USA, 1983. [Google Scholar]
  62. Sankaranarayanan, G. Branching Processes and Its Estimation Theory; Wiley: New Delhi, India, 1989. [Google Scholar]
  63. Wei, C.Z.; Winnicki, J. Estimation of the means in the branching process with immigration. Ann. Stat. 1990, 18, 1757–1773. [Google Scholar]
  64. Winnicki, J. Estimation of the variances in the branching process with immigration. Probab. Theory Relat. Fields 1991, 88, 77–106. [Google Scholar]
  65. Yanev, N.M. Statistical inference for branching processes. In Records and Branching Processes; Ahsanullah, M., Yanev, G.P., Eds.; Nova Science Publishers: New York, NY, USA, 2008; pp. 147–172. [Google Scholar]
  66. Harris, T.E. The Theory of Branching Processes; Springer: Berlin, Germany, 1963. [Google Scholar]
  67. Gauthier, G.; Latour, A. Convergence forte des estimateurs des parametres d’un processus GENAR(p). Ann. Sci. Math. Que. 1994, 18, 49–71. [Google Scholar]
  68. Latour, A. Existence and stochastic structure of a non-negative integer-valued autoregressive process. J. Time Ser. Anal. 1998, 19, 439–455. [Google Scholar]
  69. Rydberg, T.H.; Shephard, N. BIN models for trade-by-trade data. Modelling the number of trades in a fixed interval of time. In Econometric Society World Congress; Contributed Papers No. 0740; Econometric Society: Cambridge, UK, 2000. [Google Scholar]
  70. Brandt, P.T.; Williams, J.T. A linear Poisson autoregressive model: The Poisson AR(p) model. Polit. Anal. 2001, 9, 164–184. [Google Scholar]
  71. Heinen, A. Modelling time series count data: An autoregressive conditional Poisson model. In Core Discussion Paper; MPRA Paper No. 8113; University of Louvain: Louvain, Belgium, 2003; Volume 62, Available online: https://mpra.ub.uni-muenchen.de/8113 (accessed on 18 May 2020).
  72. Held, L.; Hofmann, M.; Höhle, M.; Schmid, V. A two-component model for counts of infectious diseases. Biostatistics 2006, 7, 422–437. [Google Scholar]
  73. Finkenstädt, B.F.; Bjornstad, O.N.; Grenfell, B.T. A stochastic model for extinction and recurrence of epidemics: Estimation and inference for measles outbreak. Biostatistics 2002, 3, 493–510. [Google Scholar]
  74. Ferland, R.; Latour, A.; Oraichi, D. Integer-valued GARCH process. J. Time Ser. Anal. 2006, 27, 923–942. [Google Scholar]
  75. Weiß, C.H. Modelling time series of counts with overdispersion. Stat. Methods Appl. 2009, 18, 507–519. [Google Scholar]
  76. Weiß, C.H. The INARCH(1) model for overdispersed time series of counts. Comm. Stat. Sim. Comp. 2010, 39, 1269–1291. [Google Scholar]
  77. Weiß, C.H. INARCH(1) processes: Higher-order moments and jumps. Stat. Probab. Lett. 2010, 80, 1771–1780. [Google Scholar]
  78. Weiß, C.H.; Testik, M.C. Detection of abrupt changes in count data time series: Cumulative sum derivations for INARCH(1) models. J. Qual. Technol. 2012, 44, 249–264. [Google Scholar]
  79. Kaslow, R.A.; Evans, A.S. Epidemiologic concepts and methods. In Viral Infections of Humans; Evans, A.S., Kaslow, R.A., Eds.; Springer: New York, NY, USA, 1997; pp. 3–58. [Google Scholar]
  80. Osterholm, M.T.; Hedberg, C.W. Epidemiologic principles. In Mandell, Douglas, and Bennett’s Principles and Practice of Infectious Diseases, 8th ed.; Bennett, J.E., Dolin, R., Blaser, M.J., Eds.; Elsevier: Philadelphia, PA, USA, 2015; pp. 146–157. [Google Scholar]
  81. Grassly, N.C.; Fraser, C. Mathematical models of infectious disease transmission. Nat. Rev. 2008, 6, 477–487. [Google Scholar]
  82. Keeling, M.J.; Rohani, P. Modeling Infectious Diseases in Humans and Animals; Princeton UP: Princeton, NJ, USA, 2008. [Google Scholar]
  83. Yan, P. Distribution theory stochastic processes and infectious disease modelling. In Mathematical Epidemiology; Brauer, F., van den Driessche, P., Wu, J., Eds.; Springer: Berlin, Germany, 2008; pp. 229–293. [Google Scholar]
  84. Yan, P.; Chowell, G. Quantitative Methods for Investigating Infectious Disease Outbreaks; Springer: Cham, Switzerland, 2019. [Google Scholar]
  85. Britton, T. Stochastic epidemic models: A survey. Math. Biosc. 2010, 225, 24–35. [Google Scholar]
  86. Diekmann, O.; Heesterbeek, H.; Britton, T. Mathematical Tools for Understanding Infectious Disease Dynamics; Princeton University Press: Princeton, NJ, USA, 2013. [Google Scholar]
  87. Cummings, D.A.T.; Lessler, J. Infectious disease dynamics. In Infectious Disease Epidemiology: Theory and Practice; Nelson, K.E., Masters Williams, C., Eds.; Jones & Bartlett Learning: Burlington, MA, USA, 2014; pp. 131–166. [Google Scholar]
  88. Just, W.; Callender, H.; Drew LaMar, M.; Toporikova, N. Transmission of infectious diseases: Data, models and simulations. In Algebraic and Discrete Mathematical Methods of Modern Biology; Robeva, R.S., Ed.; Elsevier: London, UK, 2015; pp. 193–215. [Google Scholar]
  89. Britton, T.; Giardina, F. Introduction to statistical inference for infectious diseases. J. Soc. Franc. Stat. 2016, 157, 53–70. [Google Scholar]
  90. Fine, P.E.M. The interval between successive cases of an infectious disease. Am. J. Epidemiol. 2003, 158, 1039–1047. [Google Scholar]
  91. Svensson, A. A note on generation times in epidemic models. Math. Biosci. 2007, 208, 300–311. [Google Scholar]
  92. Svensson, A. The influence of assumptions on generation time distributions in epidemic models. Math. Biosci. 2015, 270, 81–89. [Google Scholar]
  93. Wallinga, J.; Lipsitch, M. How generation intervals shape the relationship between growth rates and reproductive numbers. Proc. R. Soc. B 2007, 274, 599–604. [Google Scholar]
  94. Forsberg White, L.; Pagano, M. A likelihood-based method for real-time estimation of the serial interval and reproductive number of an epidemic. Stat. Med. 2008, 27, 2999–3016. [Google Scholar]
  95. Nishiura, H. Time variations in the generation time of an infectious disease: Implications for sampling to appropriately quantify transmission potential. Math. Biosci. Eng. 2010, 7, 851–869. [Google Scholar]
  96. Scalia Tomba, G.; Svensson, A.; Asikainen, T.; Giesecke, J. Some model based considerations on observing generation times for communicable diseases. Math. Biosci. 2010, 223, 24–31. [Google Scholar]
  97. Trichereau, J.; Verret, C.; Mayet, A.; Manet, G. Estimation of the reproductive number for A(H1N1) pdm09 influenza among the French armed forces, September 2009–March 2010. J. Infect. 2012, 64, 628–630. [Google Scholar]
  98. Vink, M.A.; Bootsma, M.C.J.; Wallinga, J. Serial intervals of respiratory infectious diseases: A systematic review and analysis. Am. J. Epidemiol. 2014, 180, 865–875. [Google Scholar]
  99. Champredon, D.; Dushoff, J. Intrinsic and realized generation intervals in infectious-disease transmission. Proc. R. Soc. B 2015, 282, 20152026. [Google Scholar]
  100. An der Heiden, M.; Hamouda, O. Schätzung der aktuellen Entwicklung der SARS-CoV-2-Epidemie in Deutschland—Nowcasting. Epid. Bull. 2020, 17, 10–16. (In German) [Google Scholar]
  101. Ferretti, L.; Wymant, C.; Kendall, M.; Zhao, L.; Nurtay, A.; Abeler-Dörner, L.; Parker, M.; Bonsall, D.; Fraser, C. Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science 2020, 368, eabb6936. [Google Scholar]
  102. Ganyani, T.; Kremer, C.; Chen, D.; Torneri, A.; Faes, C.; Wallinga, J.; Hens, N. Estimating the generation interval for COVID-19 based on symptom onset data. medRxiv Prepr. 2020. [Google Scholar] [CrossRef] [Green Version]
  103. Li, M.; Liu, K.; Song, Y.; Wang, M.; Wu, J. Serial interval and generation interval for respectively the imported and local infectors estimated using reported contact-tracing data of COVID-19 in China. medRxiv Prepr. 2020. [Google Scholar] [CrossRef]
  104. Nishiura, H.; Linton, N.M.; Akhmetzhanov, A.R. Serial interval of novel coronavirus (COVID-19) infections. medRxiv Prepr. 2020. [Google Scholar] [CrossRef] [Green Version]
  105. Park, M.; Cook, A.R.; Lim, J.J.; Sun, X.; Dickens, B.L. A systematic review of COVID-19 epidemiology based on current evidence. J. Clin. Med. 2020, 9, 967. [Google Scholar] [CrossRef] [Green Version]
  106. Spouge, J.L. An accurate approximation for the expected site frequency spectrum in a Galton-Watson process under an infinite sites mutation model. Theor. Popul. Biol. 2019, 127, 7–15. [Google Scholar]
  107. Taneyhill, D.E.; Dunn, A.M.; Hatcher, M.J. The Galton-Watson branching process as a quantitative tool in parasitology. Parasitol. Today 1999, 15, 159–165. [Google Scholar]
  108. Parnes, D. Analyzing the contagion effect of foreclosures as a branching process: A close look at the years that follow the Great Recession. J. Account. Financ. 2017, 17, 9–34. [Google Scholar]
  109. Le Cam, L. Asymptotic Methods in Statistical Decision Theory; Springer: New York, NY, USA, 1986. [Google Scholar]
  110. Heyde, C.C.; Johnstone, I.M. On asymptotic posterior normality for stochastic processes. J. R. Stat. Soc. B 1979, 41, 184–189. [Google Scholar]
  111. Johnson, R.A.; Susarla, V.; van Ryzin, J. Bayesian non-parametric estimation for age-dependent branching processes. Stoch. Proc. Appl. 1979, 9, 307–318. [Google Scholar]
  112. Scott, D. On posterior asymptotic normality and asymptotic normality of estimators for the Galton-Watson process. J. R. Stat. Soc. B 1987, 49, 209–214. [Google Scholar]
  113. Yanev, N.M.; Tsokos, C.P. Decision-theoretic estimation of the offspring mean in mortal branching processes. Comm. Stat. Stoch. Models 1999, 15, 889–902. [Google Scholar]
  114. Mendoza, M.; Gutierrez-Pena, E. Bayesian conjugate analysis of the Galton-Watson process. Test 2000, 9, 149–171. [Google Scholar]
  115. Feicht, R.; Stummer, W. An explicit nonstationary stochastic growth model. In Economic Growth and Development (Frontiers of Economics and Globalization, Vol. 11); De La Grandville, O., Ed.; Emerald Group Publishing Limited: Bingley, UK, 2011; pp. 141–202. [Google Scholar]
  116. Dorn, F.; Fuest, C.; Göttert, M.; Krolage, C.; Lautenbacher, S.; Link, S.; Peichl, A.; Reif, M.; Sauer, S.; Stöckli, M.; et al. Die volkswirtschaftlichen Kosten des Corona-Shutdown für Deutschland: Eine Szenarienrechnung. ifo Schnelldienst 2020, 73, 29–35. (In German) [Google Scholar]
  117. Dorn, F.; Khailaie, S.; Stöckli, M.; Binder, S.; Lange, B.; Peichl, A.; Vanella, P.; Wollmershäuser, T.; Fuest, C.; Meyer-Hermann, M. Das gemeinsame Interesse von Gesundheit und Wirtschaft: Eine Szenarienrechnung zur Eindämmung der Corona-Pandemie. ifo Schnelld. Dig. 2020, 6, 1–9. [Google Scholar]
  118. Kißlinger, A.-L.; Stummer, W. A new toolkit for robust distributional change detection. Appl. Stoch. Models Bus. Ind. 2018, 34, 682–699. [Google Scholar]
  119. Dehning, J.; Zierenberg, J.; Spitzner, F.P.; Wibral, M.; Neto, J.P.; Wilczek, M.; Priesemann, V. Inferring change points in the spread of COVID-19 reveals the effectiveness of interventions. Science 2020, 369, eabb9789. [Google Scholar] [CrossRef]
  120. Frisén, M. Statistical surveillance. Optimality and methods. Int. Stat. Rev. 2003, 71, 403–434. [Google Scholar]
  121. Frisén, M.; Andersson, E.; Schiöler, L. Robust outbreak surveillance of epidemics in Sweden. Stat. Med. 2009, 28, 476–493. [Google Scholar]
  122. Brauner, J.M.; Mindermann, S.; Sharma, M.; Stephenson, A.B.; Gavenciak, T.; Johnston, D.; Salvatier, J.; Leech, G.; Besiroglu, T.; Altman, G.; et al. The effectiveness and perceived burden of nonpharmaceutical interventions against COVID-19 transmission: A modelling study with 41 countries. medRxiv Prepr. 2020. [Google Scholar] [CrossRef]
  123. Österreicher, F.; Vajda, I. Statistical information and discrimination. IEEE Trans. Inform. Theory 1993, 39, 1036–1039. [Google Scholar]
  124. De Groot, M.H. Uncertainty, information and sequential experiments. Ann. Math. Statist. 1962, 33, 404–419. [Google Scholar]
Figure 1. Bayes risk bounds (using λ = 0.5 (red/orange) resp. λ = 0.9 (blue/cyan)) and Bayes risk simulations (light grey/grey/black) on a unit scale (left graph) and a logarithmic scale (right graph) in the parameter setup (β_A, β_H, α_A, α_H) = (1.2, 0.9, 4, 3) ∈ P_{SP,1}, with initial population X_0 = 5 and prior-loss constants L_A = 300 and L_H = 150.
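The competing Galton–Watson processes with immigration behind Figure 1 can be simulated directly. The sketch below assumes (our reading of the caption's parameters, not stated in this excerpt) that β is the mean of the Poisson offspring law and α the mean of the Poisson immigration law; the function name `simulate_gwi` is ours. It exploits the fact that, given X_{k−1}, a sum of X_{k−1} i.i.d. Poisson(β) offspring counts plus an independent Poisson(α) immigration count is again Poisson, with mean β·X_{k−1} + α.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gwi(beta, alpha, x0, n_steps, rng):
    """Simulate a Galton-Watson process with immigration (GWI) whose
    offspring law is Poisson(beta) and whose immigration law is
    Poisson(alpha). Conditionally on X_{k-1}, the next generation is
    X_k ~ Poisson(beta * X_{k-1} + alpha), since independent Poisson
    variables add up to a Poisson variable."""
    path = [x0]
    for _ in range(n_steps):
        path.append(rng.poisson(beta * path[-1] + alpha))
    return np.array(path)

# Hypothetical alternative (A) and hypothesis (H) setups mirroring the
# caption's (beta_A, beta_H, alpha_A, alpha_H) = (1.2, 0.9, 4, 3), X_0 = 5.
path_A = simulate_gwi(1.2, 4.0, x0=5, n_steps=30, rng=rng)
path_H = simulate_gwi(0.9, 3.0, x0=5, n_steps=30, rng=rng)
```

Repeating such paired simulations many times is one way to obtain the Monte Carlo Bayes risk curves shown in grey/black in the figure.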
Figure 2. Different lower bounds E_n^L (using λ ∈ {1.1, 1.5, 2}) and upper bounds E_n^U (using λ ∈ {0.3, 0.5, 0.7}) of the minimal type II error probability E_ς(P_{A,n} ‖ P_{H,n}) for fixed level ς = 0.05 in the parameter setup (β_A, β_H, α_A, α_H) = (0.3, 1.2, 1, 4) ∈ P_{SP,1}, together with initial population X_0 = 5, on both a unit scale (left graph) and a logarithmic scale (right graph).
Figure 3. The lower bound E_n^L (using λ = 1.1) and the upper bound E_n^U (using λ = 0.5) of the minimal type II error probability E_ς(P_{A,n} ‖ P_{H,n}) for different levels ς ∈ {0.01, 0.05, 0.1} in the parameter setup (β_A, β_H, α_A, α_H) = (0.3, 1.2, 1, 4) ∈ P_{SP,1}, together with initial population X_0 = 5, on both a unit scale (left graph) and a logarithmic scale (right graph).
Figure 4. Simulation of the process X̃_s^(m) for the approximation steps m ∈ {13, 50, 200, 1000} in the parameter setup (η, κ, σ) = (5, 2, 0.4) and with initial value X̃_0 = 3.
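The diffusion limit underlying Figure 4 can be illustrated with a simple discretization. As a hedged sketch, we assume the limit process follows the Feller-type (square-root) dynamics dX̃_s = (η − κ X̃_s) ds + σ √X̃_s dW_s, which is the standard form of the Feller branching diffusion with immigration; the exact SDE used by the authors is not reproduced in this excerpt, and the function name `euler_feller` is ours. The Euler–Maruyama step clips the state at zero so the square-root coefficient stays real.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_feller(eta, kappa, sigma, x0, T, n):
    """Euler-Maruyama discretization of the Feller-type diffusion
    dX = (eta - kappa * X) ds + sigma * sqrt(X) dW on [0, T] with n steps,
    clipping at zero to keep the square root well-defined."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = (eta - kappa * x[i]) * dt
        noise = sigma * np.sqrt(max(x[i], 0.0)) * rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = max(x[i] + drift + noise, 0.0)
    return x

# Caption's setup (eta, kappa, sigma) = (5, 2, 0.4), initial value 3.
path = euler_feller(eta=5.0, kappa=2.0, sigma=0.4, x0=3.0, T=10.0, n=1000)
```

For these parameters the drift pulls the path toward its long-run mean η/κ = 2.5, which is the qualitative behaviour visible in the figure's large-m panels.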

Share and Cite
Kammerer, N.B.; Stummer, W. Some Dissimilarity Measures of Branching Processes and Optimal Decision Making in the Presence of Potential Pandemics. Entropy 2020, 22, 874. https://doi.org/10.3390/e22080874
