Abstract
We show how the Brooks–Chacon Biting Lemma can be combined with the Castaing–Saadoune procedure to provide the complete rate of convergence along subsequences when the uniform boundedness condition is violated.
MSC:
60F15; 60E15
1. Introduction
A practical and efficient way of obtaining the complete rate of convergence along subsequences, when the original sequence of random variables is uniformly (norm) bounded, is provided in [1]; that result works without any supplementary probabilistic hypothesis on their (in)dependence or on their distributions:
Theorem 1.
Let . On a complete probability space , we consider a sequence of random variables that is uniformly bounded in , i.e., for some constant , we have the following:
Then, for all and any , we have
along a subsequence of .
Remark 1.
The complete convergence of the series in Formula (1) implies that the subsequence satisfies the strong law of large numbers, i.e.,
The parameter keeps Theorem 1 within the realm of laws of large numbers; indeed, if , then by the central limit theorem for subsequences, the series in Formula (1) diverges for all , even if is an i.i.d. sequence with mean zero and finite variance. Also note that Formula (1) trivially holds if .
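For orientation, we recall the classical Hsu–Robbins notion of complete convergence and the i.i.d. Baum–Katz theorem, in generic notation not tied to the symbols of Theorem 1. A sequence of random variables $(Y_n)_{n\ge 1}$ converges completely to $0$ if
\[
\sum_{n\ge 1}\mathbb{P}\bigl(|Y_n|>\varepsilon\bigr)<\infty \qquad\text{for every }\varepsilon>0,
\]
and, for i.i.d. random variables $X,X_1,X_2,\ldots$ with partial sums $S_n:=X_1+\cdots+X_n$, the classical Baum–Katz theorem states that, for $\alpha>1/2$ and $\alpha p\ge 1$, the moment condition $\mathbb{E}|X|^{p}<\infty$ (together with $\mathbb{E}X=0$ when $\alpha\le 1$) is equivalent to
\[
\sum_{n\ge 1} n^{\alpha p-2}\,\mathbb{P}\bigl(|S_n|>\varepsilon n^{\alpha}\bigr)<\infty \qquad\text{for every }\varepsilon>0.
\]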
There are situations (see, e.g., the recent papers [2,3] and the references therein) when the sequence of random variables in question is not uniformly bounded in , not even bounded, and yet satisfies a law of large numbers. In this case, Theorem 1 is no longer useful in quantifying the rate of convergence in the associated law of large numbers. Moreover, the examples in [4,5,6] show that Theorem 1 may fail if one drops the -uniform boundedness hypothesis, for any .
Using novel techniques, Karatzas and Schachermayer (see [2,3]) recently extended the law of large numbers; inspired by their results, in Section 2 we shall prove a version of the Baum–Katz theorem under a special Komlós–Saks-type boundedness hypothesis, different from the -boundedness condition required in Theorem 1. This will be accomplished by combining the methodology given by the celebrated Biting Lemma of Brooks and Chacon (cf. [7]) with the Castaing–Saadoune procedure of constructing, as in [8], a family of uniformly integrable subsequences of for which condition (1) holds. This methodology is new and marks a departure from the standard protocols of [4,6], whose set-up and working hypotheses cannot produce such a family of subsequences for lack of uniform integrability. A modification of this methodology is presented in Section 3; it will produce a second version of the Baum–Katz theorem under a Mazur–Orlicz-type hypothesis.
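For the reader's convenience, we also recall here the Biting Lemma in the form closest to the way it is used below (cf. [7]); the notation is generic. If $(f_n)_{n\ge 1}$ is a bounded sequence in $L^1(\Omega,\mathcal{F},\mathbb{P})$, then there exist a subsequence $(f_{n_k})_{k\ge 1}$ and a non-decreasing sequence of measurable sets $(A_m)_{m\ge 1}$ with $\mathbb{P}(A_m)\to 1$ such that, for every fixed $m\ge 1$,
\[
\lim_{C\to\infty}\ \sup_{k\ge 1}\ \mathbb{E}\bigl[\,|f_{n_k}|\,\mathbf{1}_{\{|f_{n_k}|>C\}\cap A_m}\bigr]=0,
\]
that is, $(f_{n_k})_{k\ge 1}$ is uniformly integrable on each $A_m$; the exceptional sets $\Omega\setminus A_m$, whose probabilities tend to $0$, are the ones being "bitten off".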
2. Main Result
Theorem 2.
Let . On a complete probability space we consider a sequence of random variables such that, for all , satisfies
Then, for all , Equation (1) holds along a subsequence of . In particular, the subsequence satisfies the strong law of large numbers, i.e.,
Example 1.
(i) A textbook-type argument (cf. [9]) shows that the working hypothesis in Theorem 1 reduces to
In particular, one can see that Theorems 1 and 2 do not overlap and do not imply each other. Indeed, uniformly -bounded sequences of functions can still have an infinite limsup, and vice versa: there are sequences of functions with finite limsup that are not (uniformly) bounded in (see, e.g., [9], and the illustrations recorded at the end of this example).
(ii) The hypotheses in Theorem 2 are satisfied, e.g., by the working condition in the motivational papers [2,3], namely
This condition implies uniform boundedness in (tightness), i.e.,
and is implied by uniform integrability, i.e.,
where denotes the expectation with respect to . Also note that the -boundedness condition in Theorem 1 is stronger than the last three conditions, provided (see, e.g., Example 4.2 in [2]).
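For the reader's convenience, we record explicit forms of the two conditions just mentioned in (ii), together with illustrations, of our own choosing, of the non-implications claimed in (i). For a generic sequence $(X_n)_{n\ge 1}$, uniform boundedness in probability (tightness) and uniform integrability read, respectively,
\[
\lim_{C\to\infty}\ \sup_{n\ge 1}\ \mathbb{P}\bigl(|X_n|>C\bigr)=0
\qquad\text{and}\qquad
\lim_{C\to\infty}\ \sup_{n\ge 1}\ \mathbb{E}\bigl[\,|X_n|\,\mathbf{1}_{\{|X_n|>C\}}\bigr]=0 .
\]
As for (i), on $\bigl((0,1),\text{Lebesgue measure}\bigr)$ the "typewriter-type" sequence that equals $j$ on the dyadic interval $[k2^{-j},(k+1)2^{-j})$, $0\le k<2^{j}$, $j\ge 1$ (the pairs $(j,k)$ being enumerated in any order), is bounded in $L^1$ (its $L^1$ norms are $j2^{-j}\le 1/2$), yet its limsup equals $+\infty$ at every point; whereas $g_n:=n^{2}\,\mathbf{1}_{(0,1/n)}$ has limsup equal to $0$ almost everywhere although $\|g_n\|_{L^1}=n\to\infty$.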
Proof of Theorem 2.
We shall work with the following -measurable sets
defined for any natural number . We have, by hypothesis, that
Hence, if is fixed, we can choose an index , such that
for any . Fatou’s lemma then gives the following:
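We recall, in generic notation, the two forms of Fatou's lemma that typically serve in such estimates: for non-negative random variables $(g_n)_{n\ge 1}$ and for events $(E_n)_{n\ge 1}$,
\[
\mathbb{E}\Bigl[\liminf_{n\to\infty} g_n\Bigr]\le\liminf_{n\to\infty}\mathbb{E}[g_n]
\qquad\text{and}\qquad
\limsup_{n\to\infty}\mathbb{P}(E_n)\le\mathbb{P}\Bigl(\limsup_{n\to\infty}E_n\Bigr),
\]
the second inequality being available because $\mathbb{P}$ is a finite measure.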
This condition shows that the working hypotheses in the Biting Lemma (cf. [7]) are satisfied by the sequence and the subset . We thus obtain a non-decreasing sequence of subsets in with
and a subsequence of that is uniformly integrable on each of the subsets , . Equation (2) shows that Theorem 1 applies to the sequence and gives:
for any and .
Next, we choose a natural number , such that
and another application of the Biting Lemma, but this time to , produces a non-decreasing sequence of subsets in , such that
and a subsequence of , and therefore a subsequence of as well, with the property that is uniformly integrable on each of the subsets for , and
for any and .
The procedure continues by induction so, at each step , one obtains an -measurable set satisfying
a non-decreasing sequence of subsets in such that
and a subsequence of , such that is uniformly integrable on each of the subsets , ; they all satisfy the following:
for any and . (The convention is that is precisely ).
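For clarity, the diagonal extraction underlying the next step can be summarized as follows; the superscript notation is ours and only serves to track the successive subsequences produced above. Writing $(X^{(1)}_n)_{n\ge 1}\supseteq(X^{(2)}_n)_{n\ge 1}\supseteq\cdots$ for the nested subsequences obtained at the steps $m=1,2,\ldots$, one sets
\[
Y_k:=X^{(k)}_k,\qquad k\ge 1 .
\]
For every fixed $m$, the tail $(Y_k)_{k\ge m}$ is a subsequence of $(X^{(m)}_n)_{n\ge 1}$, so $(Y_k)_{k\ge 1}$ inherits, from some index onward, every property established at the $m$-th step (in particular, uniform integrability on the sets constructed there).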
Now define for each ; using the previous formula, it follows that is a subsequence of that satisfies:
for any and . As
an application of the dominated convergence theorem eliminates the sets in Formula (3); indeed, we have the following:
for any .
To ensure that our series (1) converges along this particular subsequence , it suffices to prove that
for any . (This is Formula (4) written on the complement of ).
Indeed, as
the series
and this argument finishes the proof in the case .
If , then we modify the above methodology as follows: as above, by induction, we can choose -measurable sets with
for each ; as such, the Biting Lemma and the diagonal argument produce the subsequence and the following replacement of Equation (4):
for any . To ensure that our series (1) converges along this particular subsequence , it suffices to prove that
for any , with the new choice of the set . Indeed, using Formula (6), we obtain
and this argument finishes the proof in the case . □
3. A Variant of the Main Result
Proposition 1.
Let . On a complete probability space , we consider a sequence of random variables satisfying the following condition: each subsequence of and produces a convex combination of with the property that
Then, for all , Equation (1) holds along a subsequence of and, in particular,
Example 2.
The sequence
satisfies the hypothesis of Theorem 2 because a.s. (with respect to the Lebesgue measure on ); however, it does not satisfy the hypothesis of Theorem 1 (with ) because it is not bounded in . Note that both Theorems 1 and 2 may fail for unbounded sequences, e.g., , .
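An illustrative sequence with the features described at the beginning of this example, chosen by us for concreteness and not necessarily the one displayed above, lives on $\bigl((0,1),\text{Lebesgue measure}\bigr)$:
\[
f_n:=n\,\mathbf{1}_{(0,1/n)},\qquad n\ge 1 ,
\]
which converges to $0$ almost surely while $\sup_{n}\|f_n\|_{L^1}=1$, $\|f_n\|_{L^p}=n^{1-1/p}\to\infty$ for every $p>1$, and $\|f_n\|_{L^\infty}=n\to\infty$. For the failure of both theorems in the absence of any boundedness, one may keep in mind the deterministic sequence $h_n:=n$, whose averages along any subsequence $(n_k)$ satisfy $(h_{n_1}+\cdots+h_{n_k})/k\ge (k+1)/2\to\infty$.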
Proof of Proposition 1.
The named convex combinations have the following form:
for some with , where are finite subsets of . Moreover, it is straightforward to see that
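In generic notation (ours), a convex combination drawn from a subsequence $(X_{n_j})_{j\ge 1}$ has the form
\[
Z_k=\sum_{i\in I_k}\lambda^{(k)}_i\,X_i,\qquad \lambda^{(k)}_i\ge 0,\quad \sum_{i\in I_k}\lambda^{(k)}_i=1,
\]
where each $I_k$ is a finite set of indices taken from the given subsequence; in Mazur–Orlicz-type arguments one usually also arranges that $\min I_k\to\infty$ as $k\to\infty$, so that the combinations are eventually supported on arbitrarily remote tails.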
Let us define the -measurable sets
for any natural number . We then have
hence, for fixed , there is an index , such that
for some fixed, or , according to or , respectively. In both cases, Fatou’s lemma eliminates the sequence :
We then obtain a subsequence of , such that
We conclude exactly as in the proof of Theorem 2, with Equation (2) replaced by the equation above and applied to the subsequence . □
4. Conclusions
The reader will have noticed that Theorem 2 and Proposition 1 are obtained under Komlós–Saks-type and Mazur–Orlicz-type boundedness hypotheses, respectively, both different from the typical -boundedness condition for sequences of random variables. The techniques used in their proofs rely on a blending of the Biting Lemma of Brooks and Chacon with the Castaing–Saadoune procedure of constructing a family of uniformly integrable subsequences of the original sequence of random variables. We have also seen, by way of examples, that a weaker hypothesis, such as boundedness in probability, is not enough to sustain results like Theorem 2 and Proposition 1. On the other hand, boundedness of moments is a popular condition in the realm of laws of large numbers; unfortunately, it does not connect well with uniform integrability, so the techniques used in this paper are not suited to producing complete convergence along subsequences under such a condition. It is also worth mentioning that we did not require our sequences to obey any (in)dependence condition, however weak. In future work, we aim to blend various such hypotheses and to find new techniques for obtaining the complete convergence of the underlying random variables along subsequences.
Author Contributions
Conceptualization, G.S., D.L. and L.L.; Methodology, G.S., D.L. and L.L.; Writing—original draft, G.S., D.L. and L.L.; Writing—review & editing, G.S., D.L. and L.L. All authors have read and agreed to the published version of the manuscript.
Funding
The research of Deli Li was partially supported by a grant from the Natural Sciences and Engineering Research Council of Canada (RGPIN-2019-06065).
Data Availability Statement
No data were used for the research described in this article.
Conflicts of Interest
The authors declare that they have no known competing financial interests that could have appeared to influence the work reported in this paper.
References
- Stoica, G. The Baum–Katz theorem for bounded subsequences. Stat. Probab. Lett. 2008, 78, 924–926. [Google Scholar] [CrossRef]
- Karatzas, I.; Schachermayer, W. A weak law of large numbers for dependent random variables. Theory Probab. Its Appl. 2023, 68, 501–509. [Google Scholar] [CrossRef]
- Karatzas, I.; Schachermayer, W. A strong law of large numbers for positive random variables. Ill. J. Math. 2023, 67, 517–528. [Google Scholar] [CrossRef]
- von Weizsäcker, H. Can one drop the L1-boundedness in Komlós subsequence theorem? Am. Math. Mon. 2004, 111, 900–903. [Google Scholar] [CrossRef]
- Lesigne, E.; Volný, D. Large deviations for martingales. Stoch. Process. Their Appl. 2001, 96, 143–159. [Google Scholar] [CrossRef]
- Dilworth, S.J. Convergence of series of scalar- and vector-valued random variables and a subsequence principle in L2. Trans. Am. Math. Soc. 1987, 301, 375–384. [Google Scholar]
- Brooks, J.K.; Chacon, R.V. Continuity and compactness of measures. Adv. Math. 1980, 37, 16–26. [Google Scholar] [CrossRef]
- Castaing, C.; Saadoune, M. Komlós type convergence for random variables and random sets with applications to minimization problems. Adv. Math. Econ. 2007, 10, 1–29. [Google Scholar]
- Available online: https://mathoverflow.net/questions/168221/uniform-boundedness-in-l10-1-implies-finite-limsup-almost-everywhere-for (accessed on 8 September 2025).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).