On Non-Occurrence of the Inspection Paradox

Abstract: The well-known inspection paradox or waiting time paradox states that, in a renewal process, the inspection interval is stochastically larger than a common interarrival time having a distribution function F, where the inspection interval is given by the particular interarrival time containing the specified time point of process inspection. The inspection paradox may also be expressed in terms of expectations, where the order is strict, in general. A renewal process can be utilized to describe the arrivals of vehicles, customers, or claims, for example. As the inspection time may also be considered a random variable T with a left-continuous distribution function G independent of the renewal process, the question arises as to whether the inspection paradox inevitably occurs in this general situation, apart from some marginal cases with respect to F and G. For a random inspection time T, it is seen that non-trivial choices lead to non-occurrence of the paradox. In this paper, a complete characterization of the non-occurrence of the inspection paradox is given with respect to G. Several examples and related assertions are shown, including the deterministic time situation.


Introduction
The inspection paradox, also known as the waiting time paradox or renewal paradox, describes a paradoxical effect where observing a running renewal process with events occurring at specific times leads to atypical findings, in the sense that the observed time interval between events may be longer than the other intervals. For example, this happens when the events in question are incoming claims of an insurance company and we arbitrarily select a time to observe the process (without knowledge of any claim arrival times). The time we select specifies an interval between two successive claims and we record the length of this time interval. It is stochastically larger than a regular (unobserved) interval between two successive incoming claims.
A renewal process can be used to model the times of incoming claims, where the waiting times between successive claims, called interarrival times, are modeled as realizations of independent and identically distributed non-negative random variables with a common cumulative distribution function F. Thus, in a realized renewal process based on a non-degenerate distribution, we observe interarrival times of different lengths and, when inspecting the process at a certain time t, it is very likely that we observe a comparably larger time interval (cf. Feller [1], p. 13).
This paradoxical effect arises in various scenarios, such as waiting for a bus or a train (cf. Feller [1], pp. 12-14, Masuda and Porter [2]), observing the lifetimes of identical batteries (cf. Ross [3], p. 460), in connection with sampling bias (cf. Stein and Dattero [4]), and in stochastic resetting (cf. Pal et al. [5]). In a medical context, Jenkins et al. [6] discussed how the perception of regularly occurring phase singularities, which are indicators for cardiac fibrillation, is influenced by the inspection paradox. They found that visual observation may systematically oversample phase singularities that last longer (potentially leading to errors) and that longer windows of observation can minimize the effect.
Much attention has been paid to the study of the inspection paradox, its properties, and implications (see, e.g., Gakis and Sivazlian [7], Angus [8], Ross [9]). In particular, considering a random variable for the time of inspection instead of a deterministic time leads to insights regarding the quantification of the effect (see, e.g., Kamps [10]). Herff et al. [11] derived an inequality for the length of the inspection interval with a random time, and Rauwolf and Kamps [12] gave a general representation for the expected inspection interval length, which serves as the basis for the main results in this work. Several explicit examples with random time and applications to earthquake and geyser data can be found in the literature (see, e.g., Liu and Peña [13], Rauwolf and Kamps [12]).
In the case of a deterministic inspection time t, the inspection paradox does not occur for a trivial choice of interarrival times having a degenerate distribution, i.e., for deterministic interval lengths. Moreover, it does not occur if the smallest possible interarrival time is larger than t, i.e., if the inspection is performed prior to the first event. However, for a random inspection time T with a left-continuous distribution function G, it is seen that there are examples with non-trivial choices of the distribution functions F and G where the paradox does not appear, meaning that the length of the inspection interval is also distributed according to F.
In this general situation, we give a complete characterization of the non-occurrence of the inspection paradox with respect to the choice of G, as well as results for F in the classical case of degenerate G. The use of an additional random inspection time also leads to a conclusion regarding the classical case with deterministic time, where non-occurrence of the paradox only happens for degenerate distributions.
In Section 2, we briefly recap the classical inspection paradox. Renewal processes with a random inspection time T are discussed in Section 3, along with examples. Section 4 contains a complete characterization of settings with respect to the distribution function G of T. The case of a degenerate time t is studied in Section 5 regarding situations where non-occurrence of the inspection paradox leads to degenerate interarrival times.
Let X_1, X_2, ... be a sequence of non-negative, independent, and identically distributed (iid) random variables on some probability space with a common distribution function F, F(0) < 1. These random variables will be called interarrival times in the following. Then, the sequence of occurrence times (S_n)_{n∈N_0}, given by S_0 := 0 and S_n := X_1 + ... + X_n, n ∈ N, defines a renewal process (see Figure 1). The corresponding renewal counting process is denoted by (N(t))_{t≥0}, where N(t) := max{n ∈ N_0 : S_n ≤ t} counts the number of occurrences up to time t. In particular, N(t) ≥ n ⇐⇒ S_n ≤ t holds for all n ∈ N_0 and for all t ≥ 0.
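For illustration (not part of the paper), the occurrence times and the counting process can be sketched in a few lines of Python; the snippet builds a sample path S_1, S_2, ... as cumulative sums of exponential draws (an arbitrary choice of F) and verifies the equivalence N(t) ≥ n ⇔ S_n ≤ t along that path.

```python
import bisect
import random

def renewal_path(sample_interarrival, horizon, rng):
    """Occurrence times S_1 < S_2 < ... of a renewal process; the last entry exceeds `horizon`."""
    s, path = 0.0, []
    while s <= horizon:
        s += sample_interarrival(rng)
        path.append(s)
    return path

def count_N(path, t):
    """N(t) = number of occurrence times S_n with S_n <= t."""
    return bisect.bisect_right(path, t)

rng = random.Random(1)
path = renewal_path(lambda r: r.expovariate(1.0), 10.0, rng)
t = 5.0
n_t = count_N(path, t)
# N(t) >= n  <=>  S_n <= t   (here S_n = path[n-1])
assert all((n_t >= n) == (path[n - 1] <= t) for n in range(1, len(path) + 1))
```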
When we inspect a renewal process at some fixed time t > 0, exactly N(t) renewals have already taken place. The last renewal prior to t was at time S_{N(t)} and the subsequent renewal will occur at time S_{N(t)+1}. The renewal interval covering t is referred to as the "inspection interval" and its length is given by X_{N(t)+1} = S_{N(t)+1} − S_{N(t)} (see Figure 2). Representations for the survival function and the expected value of the inspection interval length X_{N(t)+1} can be found in the literature (see, e.g., Gakis and Sivazlian [7], pp. 44-45).
The inspection paradox of renewal theory then states that

P(X_{N(t)+1} > x) ≥ P(X_1 > x) for all x ≥ 0 and t ≥ 0 (1)

(cf. Angus [8], Ross [3]), which means that the inspection interval is stochastically larger than a common renewal interval, i.e., X_{N(t)+1} ≥_st X_1. Consequently, in terms of expected values, the mean inspection interval length exceeds the mean length of any regular renewal interval, in the sense that

EX_{N(t)+1} ≥ EX_1 for all t ≥ 0. (2)

No paradoxical effect occurs in the trivial case where the interarrival times have a degenerate distribution, i.e., X_i ∼ δ_a, i ∈ N, for some a > 0, as equality holds in the inspection paradox, i.e., P(X_{N(t)+1} > x) = 1_{[0,a)}(x) = P(X_1 > x) for all x ≥ 0, and all events in the corresponding renewal process take place perfectly on time. In other words, all time intervals (including the inspected interval) have precisely the same length. Up to this point, it has remained open whether this is the only example in which equality holds for fixed t. The answer will be provided in the following sections by means of a generalization to a random inspection time T, for which the equality in (1) and (2) is characterized.
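As a numerical sanity check (an illustration with a hypothetical rate, not from the paper): for a Poisson process with rate λ, the inspection interval at a fixed time t has expected length (2 − e^{−λt})/λ (residual life plus age), strictly larger than EX_1 = 1/λ. A short Monte Carlo sketch:

```python
import math
import random

def inspection_interval(t, rate, rng):
    """Length X_{N(t)+1} of the renewal interval covering time t (exponential interarrivals)."""
    s = 0.0
    while True:
        x = rng.expovariate(rate)
        if s + x > t:          # S_{N(t)} = s <= t < s + x = S_{N(t)+1}
            return x
        s += x

rng = random.Random(0)
t, rate, n = 3.0, 1.0, 200_000
mean_obs = sum(inspection_interval(t, rate, rng) for _ in range(n)) / n
mean_theory = (2 - math.exp(-rate * t)) / rate   # about 1.95, versus E[X_1] = 1
assert abs(mean_obs - mean_theory) < 0.03
```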

The Inspection Paradox with a Random Inspection Time
Instead of a fixed point in time t ≥ 0, we can also consider a random variable to model the time of inspection. Let T be such a random inspection time, i.e., a non-negative random variable that is independent of the renewal process and has a left-continuous distribution function G given by G(t) = P(T < t), t ∈ R.
Then, (1) implies that the paradoxical effect occurs as in the classical inspection paradox with

P(X_{N(T)+1} > x) ≥ P(X_1 > x) for all x ≥ 0, (3)

i.e., X_{N(T)+1} ≥_st X_1, and from (2) we conclude

EX_{N(T)+1} ≥ EX_1 (4)

for the inspection paradox in terms of expectations.
In Section 2, we saw that equality in (1) in the fixed time case, i.e., for the choice T ∼ δ_t, happens when the interarrival times have a degenerate distribution. The following example shows that, in a trivial case but for non-degenerate distributions of X_1 and T, equality in (3) and (4) holds true. Thus, introducing a random inspection time can lead to other cases with equality in (3) and (4).
Example 1. Consider a binomial renewal process with interarrival times having a geometric distribution on N, i.e., P(X_1 = k) = p(1 − p)^{k−1}, k ∈ N, for some p ∈ (0, 1), and a random inspection time T with a two-point distribution given by P(T = 0) = P(T = 1/2) = 1/2. Since N(t) = 0 for t < 1, we have X_{N(t)+1} = X_1, and thus X_{N(T)+1} = X_1 holds. The situation is trivial in the sense that the inspection is made prior to X_1. In particular, the distribution function G is constant on the support of the interarrival times with G(k) = P(T < k) = 1 for all k ∈ N.
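The triviality of Example 1 can be checked by simulation. The parameter p = 1/2 below is an arbitrary choice, since the text does not fix the geometric parameter; as T < 1 ≤ X_1 almost surely, the inspection interval is X_1 itself and its empirical mean matches EX_1 = 1/p.

```python
import random

def geometric_N(p, rng):
    """Geometric distribution on {1, 2, ...}: number of Bernoulli(p) trials until the first success."""
    k = 1
    while rng.random() >= p:
        k += 1
    return k

def inspection_interval(T, sample, rng):
    """Length of the renewal interval covering the inspection time T."""
    s = 0.0
    while True:
        x = sample(rng)
        if s + x > T:
            return x
        s += x

rng = random.Random(7)
p, n = 0.5, 100_000
total = 0.0
for _ in range(n):
    T = rng.choice([0.0, 0.5])   # two-point inspection time with T < 1
    total += inspection_interval(T, lambda r: geometric_N(p, r), rng)
mean_obs = total / n
assert abs(mean_obs - 1 / p) < 0.03   # E[X_{N(T)+1}] = E[X_1] = 2
```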
On the other hand, in many well-known examples, Inequality (1) is strict for t > 0 and so is (3); e.g., this is the case for a Poisson process with exponentially distributed interarrival times. The same holds true in connection with random inspection times; we refer to Liu and Peña [13], who discussed the choice of an exponentially distributed random inspection time.
In fact, the gap between the expected inspection interval length and a common expected interval length can be quantified for any choice of distribution. Rauwolf and Kamps [12] derived the representation

Eϕ(X_{N(T)+1}) = Eϕ(X_1) + Σ_{n=1}^∞ Cov(ϕ(X_n), G(S_n)), (5)

where ϕ : [0, ∞) → [0, ∞) is a measurable function such that all expected values and integrals are well-defined and exist finitely. If ϕ is monotone non-decreasing, then all covariance terms are non-negative. In particular, this leads to a representation for the expected inspection interval length by choosing ϕ(x) = x, x ≥ 0, and to a representation for P(X_{N(T)+1} > z) by choosing ϕ(x) = 1_{(z,∞)}(x), x ≥ 0. Thus, Inequalities (3) and (4) can be derived from (5), and choosing T ∼ δ_t leads to the formulae in the classical case from Section 2.
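For a concrete illustration of the covariance representation (distributions chosen here for convenience, not taken from the paper), let the interarrival times and T be Exp(1), so that G(s) = 1 − e^{−s}. Using the standard Poisson-process formula EX_{N(t)+1} = 2 − e^{−t}, we get EX_{N(T)+1} = 2 − Ee^{−T} = 1.5, so the covariance terms should sum to EX_{N(T)+1} − EX_1 = 0.5. The sketch below estimates both quantities by Monte Carlo.

```python
import math
import random

rng = random.Random(3)
N_SIM, N_TERMS = 100_000, 30
G = lambda s: 1.0 - math.exp(-s)   # G(s) = P(T < s) for T ~ Exp(1)

# One pass: accumulate moments of X_n and G(S_n) for n = 1..N_TERMS over many paths.
sum_x = [0.0] * N_TERMS; sum_g = [0.0] * N_TERMS; sum_xg = [0.0] * N_TERMS
for _ in range(N_SIM):
    s = 0.0
    for i in range(N_TERMS):
        x = rng.expovariate(1.0)
        s += x
        g = G(s)
        sum_x[i] += x; sum_g[i] += g; sum_xg[i] += x * g
cov_sum = sum(sum_xg[i] / N_SIM - (sum_x[i] / N_SIM) * (sum_g[i] / N_SIM)
              for i in range(N_TERMS))

# Direct estimate of E[X_{N(T)+1}] - E[X_1] with a random inspection time T ~ Exp(1):
gap = 0.0
for _ in range(N_SIM):
    T, s = rng.expovariate(1.0), 0.0
    while True:
        x = rng.expovariate(1.0)
        if s + x > T:
            gap += x - 1.0   # subtract E[X_1] = 1
            break
        s += x
gap /= N_SIM
assert abs(cov_sum - 0.5) < 0.03 and abs(gap - 0.5) < 0.03
```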
As in Example 1, the introduction of a random inspection time T leads to various other possible and non-degenerate cases with non-occurrence of the inspection paradox. Concerning the identification of these respective situations, Formula (5) offers the option to examine cases where all covariance terms are equal to zero. On the other hand, Formula (5) also facilitates the study of situations with possibly large gaps between EX_{N(T)+1} and EX_1, say. For examples, we refer to [12].
In the following discussion, we will need the left and right endpoints of the support of the interarrival times, the formal introduction of which is given in Notation 1.
Notation 1. Let X be a random variable with right-continuous distribution function F. Let F^{−1} be the quantile function defined by F^{−1}(y) := inf{x : F(x) ≥ y}, y ∈ (0, 1). Then, the left and right endpoints α and ω of the support of X are denoted by α := lim_{y↓0} F^{−1}(y) and ω := lim_{y↑1} F^{−1}(y). The support of X then lies in the interval [α, ω] if ω < ∞ or in [α, ∞), otherwise, and will be denoted by supp(X).
The following result by Behboodian [18], given in Lemma 1, can be utilized to determine whether the covariance of two functions of a random variable is zero. In particular, this will be applied to determine whether the covariance terms in (5) are positive or zero. An alternative proof can be found in Rauwolf [19].
Lemma 1 (cf. [18], Theorem 2). Let X be a non-negative and non-degenerate random variable with support S := supp(X) and distribution P_X. Let h_1 : [0, ∞) → R and h_2 : [0, ∞) → R be two monotone non-decreasing, measurable functions such that Cov(h_1(X), h_2(X)) exists finitely. Then, Cov(h_1(X), h_2(X)) = 0 if and only if h_1 or h_2 is constant on S P_X-almost surely. If neither of the functions h_1 and h_2 in Lemma 1 is constant on S, then the covariance of h_1(X) and h_2(X) is strictly positive.
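An exact discrete illustration of Lemma 1 (a hypothetical three-point distribution, not from the paper): a non-decreasing function that is constant on the support yields covariance zero even if it increases off the support, whereas a function that jumps inside the support yields a strictly positive covariance.

```python
# X takes the values 1, 2, 3 with probabilities 0.2, 0.5, 0.3 (hypothetical example)
support = [1.0, 2.0, 3.0]
probs = [0.2, 0.5, 0.3]

def cov(h1, h2):
    """Exact covariance Cov(h1(X), h2(X)) for the discrete X above."""
    e1 = sum(p * h1(x) for x, p in zip(support, probs))
    e2 = sum(p * h2(x) for x, p in zip(support, probs))
    e12 = sum(p * h1(x) * h2(x) for x, p in zip(support, probs))
    return e12 - e1 * e2

h_const_on_S = lambda x: 0.0 if x <= 3.0 else 1.0   # non-decreasing, constant on supp(X)
h_step = lambda x: 0.0 if x <= 2.0 else 1.0         # non-constant on supp(X)

assert abs(cov(lambda x: x, h_const_on_S)) < 1e-9   # covariance vanishes
assert cov(lambda x: x, h_step) > 0                 # strictly positive (here 0.27)
```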
Based on the representation with the covariance terms, we present an example of a renewal process with absolutely continuous interarrival times and a particular choice for the distribution function G of the random inspection time such that EX_{N(T)+1} = EX_1 holds.
Example 2. Let the interarrival times X_i, i ∈ N, have a uniform distribution on the interval [1, 1.3] with a density function f given by f(x) = (10/3) 1_{[1,1.3]}(x), x ∈ R. Furthermore, let the random inspection time T have the left-continuous distribution function G given in Figure 3. The function G is constant on the interval [1, 1.3], i.e., on the support of X_1, and therefore, an application of Lemma 1 yields Cov(X_1, G(X_1)) = 0. Similarly, G is constant on the supports of the occurrence times S_2, S_3, ..., from which equality in Equation (5) follows for any choice of ϕ. In between the supports of S_n and S_{n+1}, n ∈ N, the form of the (left-continuous) distribution function G is arbitrary. The equality in Example 1 can also be derived by applying Lemma 1. This approach facilitates finding new explicit examples with equality, especially in the case of random inspection times, which allow for other possibilities than the classical inspection paradox with a degenerate time T ∼ δ_t.
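Since Figure 3 only sketches G, a concrete choice consistent with the description (an assumption for illustration) is T uniformly distributed on (1.3, 2): its distribution function equals 0 on [1, 1.3] and 1 from 2 onward, and is therefore constant on the support [n, 1.3n] of every S_n. Then N(T) = 1 almost surely, the inspection interval is X_2, and EX_{N(T)+1} = 1.15 = EX_1:

```python
import random

rng = random.Random(11)
n = 100_000
total = 0.0
for _ in range(n):
    T = rng.uniform(1.3, 2.0)       # one concrete G that is constant on every [n, 1.3n]
    s = 0.0
    while True:
        x = rng.uniform(1.0, 1.3)   # interarrival times ~ U[1, 1.3]
        if s + x > T:
            total += x              # inspection interval X_{N(T)+1}
            break
        s += x
mean_obs = total / n
assert abs(mean_obs - 1.15) < 0.005   # equals E[X_1]: no paradox
```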

Non-Occurrence of the Inspection Paradox
A general result regarding non-occurrence of the inspection paradox can be derived on the basis of Representation (5) with a random inspection time. This is realized in Theorem 1, and the result is applied to the special cases of equality in (3) and (4); see Subsection 4.2 and Remark 3, respectively. Either result can be used to decide whether a strict inequality holds for specific choices of distribution functions F and G. In particular, the condition for equality is easy to check and can therefore be utilized without calculating, e.g., the expected value of X_{N(T)+1} explicitly.

General Results
This subsection is concerned with determining distributions for X_1 and T with distribution functions F and G, respectively, such that Eϕ(X_{N(T)+1}) = Eϕ(X_1) holds, given that ϕ is not a constant function. Lemma 2 serves as a key component in the discussion and states that, given the support endpoints of the interarrival times, we can explicitly calculate the smallest index from which on successive occurrence times S_n have overlapping supports. Throughout, α is called an atom of (the distribution of) X_1 if P(X_1 = α) > 0.
Lemma 2. Let the interarrival times have a left support endpoint α > 0 and right support endpoint ω > α. Then, there exists a κ ∈ N_0 such that the supports of two consecutive occurrence times S_n and S_{n+1}, n ≥ κ + 1, overlap in the sense that the intersection [nα, nω] ∩ [(n + 1)α, (n + 1)ω] carries positive probability mass. In particular, κ is given by

κ = ⌊α/(ω − α)⌋, if ω ≠ ((k+1)/k) α for all k ∈ N,
κ = k, if ω = ((k+1)/k) α for some k ∈ N and neither α nor ω is an atom of X_1,
κ = k − 1, if ω = ((k+1)/k) α for some k ∈ N and α or ω is an atom of X_1.
Proof. Assuming that there is no point of overlap for any n ∈ N, i.e., that nω < (n + 1)α for all n ∈ N, leads to ω ≤ lim_{n→∞} ((n+1)/n) α = α, which is a contradiction to the assumption that α < ω. Therefore, there exists a natural number from which on the supports of successive occurrence times overlap. If ω = ((k+1)/k) α for some k ∈ N, then nω ≥ (n + 1)α holds exactly for n ≥ k, with kω = (k + 1)α. If the right support endpoint ω is of the form ω = ((k+1)/k) α for some k ∈ N and α or ω is an atom of X_1, then the first point of overlap is at (k + 1)α = kω, i.e., the supports of S_k and S_{k+1} touch (κ = k − 1). If neither α nor ω is an atom of X_1, then the first overlap happens for the supports of S_{k+1} and S_{k+2} (i.e., κ = k).
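The case distinction for κ can be coded with exact rational arithmetic. The helper below is an illustration (the names are ours): it uses κ = ⌊α/(ω − α)⌋ in the non-boundary case, consistent with the overlap condition nω ≥ (n + 1)α from the proof, and handles the boundary case ω = ((k+1)/k)α via an atom flag supplied by the caller.

```python
from fractions import Fraction

def kappa(alpha, omega, endpoint_is_atom=False):
    """Smallest kappa such that supp(S_n) and supp(S_{n+1}) overlap for all n >= kappa + 1.

    alpha, omega: support endpoints of X_1 with 0 < alpha < omega (pass rationals for exactness).
    endpoint_is_atom: whether alpha or omega carries positive probability.
    """
    ratio = Fraction(alpha) / (Fraction(omega) - Fraction(alpha))
    if ratio.denominator == 1:            # boundary case omega = (k+1)/k * alpha with k = ratio
        k = ratio.numerator
        return k - 1 if endpoint_is_atom else k
    return ratio.numerator // ratio.denominator   # floor(alpha / (omega - alpha))

# Uniform interarrivals on [1, 1.3] (Example 2): first overlap for S_4 and S_5.
assert kappa(1, Fraction(13, 10)) == 3
# Boundary case omega = 2 * alpha (k = 1): touching supports count only with atoms.
assert kappa(1, 2, endpoint_is_atom=True) == 0
assert kappa(1, 2, endpoint_is_atom=False) == 1
# 2 * alpha < omega: kappa = 0, the supports of S_1 and S_2 already overlap.
assert kappa(1, 3) == 0
```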
The trivial case α = 0 is excluded in Lemma 2 as the support of X_1, covered by the interval [0, ω] (or [0, ∞)), is a subset of the support of S_n, covered by [0, nω] (or [0, ∞)), for all n ∈ N. In this case, the supports of all occurrence times overlap.
We will now determine cases with equality of the expected values in (5), i.e., non-occurrence of the inspection paradox, under the general assumption that G is a left-continuous distribution function.
Theorem 1. Let the interarrival times (X_i)_{i∈N} have left support endpoint α > 0, finite right support endpoint ω > α, and let κ be defined as in Lemma 2. Let T be a non-negative random variable with a left-continuous distribution function G that is independent of (X_i)_{i∈N}. Let ϕ : [0, ∞) → [0, ∞) be a measurable, monotone non-decreasing function such that all expected values and integrals are well-defined and exist finitely. Furthermore, assume that ϕ is not constant on S := supp(X_1) P_{X_1}-almost surely. Then, Eϕ(X_{N(T)+1}) = Eϕ(X_1) holds if and only if G is constant on the support [nα, nω] of every occurrence time S_n, n ∈ N; since these supports overlap for n ≥ κ + 1, G is then of the form

G(x) = g_n for x ∈ [nα, nω], n = 1, ..., κ, and G(x) = 1 for x > (κ + 1)α, (7)

with constants 0 ≤ g_1 ≤ ... ≤ g_κ ≤ 1, while G is arbitrary (left-continuous and non-decreasing) between consecutive supports.
For the opposite implication, noticing that G as in (7) is constant on the support of all occurrence times and applying Lemma 1, we see that the covariances Cov(ϕ(X_n), G(S_n)) are all equal to zero. This leads to equality of the expected values.
We note that the constant κ as introduced in Lemma 2 indicates the kind of support the interarrival times have. For example, κ = 0 can correspond to the case 0 < 2α < ω, where ω may be infinite. In this case, the distribution of the random inspection time T can be chosen as T ∼ δ_0, T ∼ δ_α, or as a two-point distribution on {0, α}, among others. On the other hand, ω must be finite for κ ≥ 1.
Remark 1. For degenerate interarrival times, the equality remains when introducing a random inspection time T. This can be derived with the following argument: if X_1 ∼ δ_α, then S_n ∼ δ_{nα} for all n ∈ N implies Cov(ϕ(X_n), G(S_n)) = 0 for all n ∈ N and for any choice of the distribution function G, leading to equality for any ϕ.
Theorem 1 can be applied to any interarrival distribution. In particular, only the left support endpoint α and the right support endpoint ω of the interarrival times are of interest. Given a specific interarrival distribution and a distribution function G, Theorem 1 establishes whether or not the inspection paradox occurs.
Here, for α = 1, T may have a two-point distribution on {1, 2} and X_1 may be uniformly distributed on the interval (1, 2).
In the case of α being an atom of X_1 and κ = 0, G is given by (7) and, additionally, G(α) = 1 has to be fulfilled. Furthermore, the following example derived from the results of Theorem 1 shows that, in the degenerate time case with T ∼ δ_t, equality in (1) or (2) can also take place for non-degenerate interarrival times. Nevertheless, this situation is irrelevant, since the inspection time coincides with the lower bound of the support of X_1.
Example 4. For absolutely continuous interarrival times with a left endpoint α := t > 0 (i.e., α is not an atom of X_1) and right endpoint ω > 2t, which corresponds to the case κ = 0, we have equality Eϕ(X_{N(t)+1}) = Eϕ(X_1) for any ϕ satisfying the assumptions of Theorem 1, since P(X_1 > t) = 1 implies N(t) = 0 and thus X_{N(t)+1} = X_1 almost surely. Therefore, choosing ϕ(x) = 1_{(z,∞)}(x) and ϕ(x) = x for x ≥ 0 in Theorem 1, we obtain an example for which P(X_{N(t)+1} > z) = P(X_1 > z), z ≥ 0, and EX_{N(t)+1} = EX_1, respectively, holds even though the interarrival times have a distribution other than the degenerate distribution. Consequently, requiring equality for a single z and t is not sufficient in general to derive a characterization for the distribution of the interarrival times. This case will be considered in detail in Section 5.
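An illustrative check of Example 4 with concrete numbers (our choice): t = 1 and interarrival times uniform on (1, 2.5), so α = t = 1 is not an atom and ω = 2.5 > 2t. Then S_1 = X_1 > t almost surely, hence N(t) = 0 and the inspection interval is X_1 itself:

```python
import random

rng = random.Random(5)
t, n = 1.0, 100_000
total = 0.0
for _ in range(n):
    s = 0.0
    while True:
        x = rng.uniform(1.0, 2.5)   # X_i > t = 1 a.s., so the first draw already exceeds t
        if s + x > t:
            total += x
            break
        s += x
mean_obs = total / n
assert abs(mean_obs - 1.75) < 0.01   # E[X_{N(t)+1}] = E[X_1] = 1.75
```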
The case α = 0 that was not included in Theorem 1 is studied separately in the following theorem.
Theorem 2. Let the interarrival times (X_i)_{i∈N} have left support endpoint α = 0 with P(X_1 = 0) = 0, and let ϕ be as in Theorem 1. Let T be a non-negative random variable with a left-continuous distribution function G that is independent of the interarrival times (X_i)_{i∈N}. Then, Eϕ(X_{N(T)+1}) = Eϕ(X_1) holds if and only if T = 0 almost surely, i.e., G(x) = 1_{(0,∞)}(x), x ∈ R.
Proof. Analogously to the proof of Theorem 1, G(x) = c_n must hold for all 0 < x ≤ nω and for all n ∈ N; since these intervals are nested, we obtain c_1 = c_2 = ... = 1. Thus, G is a distribution function of the degenerate distribution in 0. The opposite direction follows directly from an application of Lemma 1.
Remark 2. If p := P(X_1 = 0) > 0 in Theorem 2 and G(x) = 1_{(0,∞)}(x) is the left-continuous version of the distribution function of the degenerate distribution in 0, then the first covariance Cov(ϕ(X_1), G(X_1)) is positive and an inspection paradox occurs. This is the case as G is not constant P_{X_1}-almost surely on the support of the interarrival times.
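To illustrate the dichotomy for α = 0 (with an arbitrary concrete choice, not from the paper): for Exp(1) interarrival times, T = 0 inspects prior to the first event and gives equality, while any fixed t > 0 gives a strict gap, since EX_{N(t)+1} = 2 − e^{−t} > 1 = EX_1 for the Poisson process.

```python
import math
import random

def mean_inspection(t, n, rng):
    """Monte Carlo estimate of E[X_{N(t)+1}] for Exp(1) interarrival times."""
    total = 0.0
    for _ in range(n):
        s = 0.0
        while True:
            x = rng.expovariate(1.0)
            if s + x > t:
                total += x
                break
            s += x
    return total / n

rng = random.Random(2)
n = 100_000
m0 = mean_inspection(0.0, n, rng)   # T = 0: inspection prior to the first event
m1 = mean_inspection(1.0, n, rng)   # fixed t = 1: strict inequality
assert abs(m0 - 1.0) < 0.02                      # equality: E[X_1] = 1
assert abs(m1 - (2 - math.exp(-1))) < 0.02       # about 1.63 > 1
```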
In general, a random inspection time T having a distribution function of the particular form (7) is sufficient for equality in the inspection paradox. More precisely, the inspection paradox does not appear in this case and both random variables X_{N(T)+1} and X_1 are identically distributed, as stated in the following corollary.
Corollary 1. Let the interarrival times have a left support endpoint α > 0 and right support endpoint ω > α. If T has a distribution function of the form (7), then P(X_{N(T)+1} > z) = P(X_1 > z) for all z ≥ 0, i.e., X_{N(T)+1} =_st X_1.
Proof. Since G given in (7) is constant on the supports of X_n and of S_n for all n ∈ N, applying Lemma 1 yields Cov(1_{(z,∞)}(X_n), G(S_n)) = 0 for all n ∈ N and for all z ≥ 0, as in the proof of Theorem 1. Therefore, with (5), P(X_{N(T)+1} > z) = P(X_1 > z) for all z ≥ 0, and X_{N(T)+1} and X_1 are identically distributed.
In the characterization of Theorem 1, the function ϕ is assumed to be non-constant on S P_{X_1}-almost surely. Therefore, for the function 1_{(z,∞)}(·) to take both values 0 and 1 on the support of X_1 with positive probability, we require z ∈ supp(X_1).
Corollary 2. Let the interarrival times have a left support endpoint α > 0, right support endpoint ω > α, and let S := supp(X_1). Let T be a non-negative random variable with left-continuous distribution function G that is independent of the interarrival times (X_i)_{i∈N}. If there is a z ∈ S such that 1_{(z,∞)}(·) takes both values 0 and 1 on S with positive probability and P(X_{N(T)+1} > z) = P(X_1 > z), then G is of the form (7).
Proof. This follows from Theorem 1 for the choice ϕ(x) = 1_{(z,∞)}(x), x ≥ 0. Corollary 2 shows that equality for an appropriate value of z is enough to determine the general form of the distribution function G (on finite intervals). Thus, if we have equality of the survival functions of X_{N(T)+1} and X_1 for this z ∈ S, then G is of the form (7) and Corollary 1 yields X_{N(T)+1} =_st X_1.
Remark 3. The inspection paradox is also discussed in terms of expected values, as we have seen in Sections 2 and 3. With the choice ϕ(x) = x, x ≥ 0, it follows under the assumptions of Theorem 1 that equality of the expected values EX_{N(T)+1} = EX_1 holds if and only if G is of the form (7).
Similarly, equality of the moments, i.e., E(X_{N(T)+1}^m) = E(X_1^m), also determines the distribution of T. This can be obtained from Theorem 1 via the choice ϕ(x) = x^m, x ≥ 0, m > 0.

Equality in the Degenerate Time Case
As discussed in Section 2 and in Remark 1, no paradoxical effect appears for the inspection interval length given degenerate interarrival times, regardless of whether the inspection time is random or deterministic. On the other hand, degenerate interarrival times are not the only example with this property (cf. Example 4). In this section, we further study equality in the inspection paradox by means of a fixed sequence of inspection times.
The following Theorem 3 states that having such a sequence of fixed times for which equality holds is sufficient for the interarrival times to have a degenerate distribution.

Theorem 3. Let the interarrival times (X_i)_{i∈N} have a distribution function F with F(0) < 1. Let (t_i)_{i∈N} ⊆ (0, ∞) be a sequence of monotone increasing times (t_i < t_{i+1}, i ∈ N) with lim_{i→∞} t_i = ∞, such that EX_{N(t_i)+1} = EX_1 for all i ∈ N. Then, the interarrival times have a degenerate distribution, i.e., X_1 = a almost surely for some a > 0.
Proof. We assume that the interarrival times have a left support endpoint α ≥ 0 and right support endpoint ω > α and define S := [α, ω] if ω < ∞ and S := [α, ∞) if ω = ∞. Thus, it is assumed that the support contains at least two values, and this is shown to lead to a contradiction in the following. Due to Representation (6) with T ∼ δ_{t_i}, equality holds if and only if Cov(X_n, 1_{(t_i,∞)}(S_n)) = 0 for all n ∈ N and for all i ∈ N.
In the case n = 1, an application of Lemma 1 yields

Cov(X_1, 1_{(t_i,∞)}(X_1)) = 0 for all i ∈ N ⇐⇒ 1_{(t_i,∞)}(x) = const for P_{X_1}-almost all x ∈ S and for all i ∈ N. (8)

Since t_i → ∞ and, by Lemma 2, the supports of the occurrence times S_n, n ≥ κ + 1, overlap, some t_i lies in the interior of the support of an occurrence time S_n in such a way that 1_{(t_i,∞)}(S_n) is not almost surely constant; the corresponding covariance is then strictly positive, which contradicts the equality assumption. Hence, the support of X_1 consists of a single point.

In order to conclude that the interarrival times have a degenerate distribution based on only one time point t, it is necessary to assume that t lies in the support of an occurrence time whose support overlaps with both the support of the preceding and the succeeding occurrence time. This is formally stated in Corollary 4.

Corollary 4. Let the interarrival times (X_i)_{i∈N} have a distribution function F with F(0) < 1. Assume that there exists a k ∈ N with k ≥ κ + 2 and a t > 0 such that t ∈ supp(S_k). Then, EX_{N(t)+1} = EX_1 implies X_1 = t/k almost surely, i.e., F(x) = 1_{[t/k,∞)}(x), x ≥ 0.
Proof. Due to the equality of the expected values and to Representation (6), the covariances Cov(X_n, 1_{(t,∞)}(S_n)) must be equal to zero for all n ∈ N. Assume that the support of S_k contains at least two different points; then, the intersections supp(S_{k−1}) ∩ supp(S_k) and supp(S_k) ∩ supp(S_{k+1}) carry positive probability mass due to Lemma 2. As in the proof of Theorem 3, we can infer that t lies in the interior of either the support of S_k, of S_{k−1}, or of S_{k+1}, which leads to a contradiction. Thus, supp(S_k) = {t} implies P(S_k ≤ x) = 1_{[t,∞)}(x); i.e., S_k has a degenerate distribution in t. Since S_k is the sum of k iid interarrival times, we then have X_1 = t/k almost surely.

Conclusions
By considering a random inspection time in a renewal counting process, interesting effects come into play and new insights are gained regarding the distribution of the random time. With respect to the well-known inspection paradox, non-trivial choices of this distribution and the distribution of the interarrival times lead to non-occurrence of the paradox, in contrast to the situation for a deterministic time. In the general case, a complete characterization of the (non-)occurrence of the inspection paradox with respect to G is given.

Figure 3. A distribution function G with non-occurrence of the inspection paradox.