The Marcinkiewicz–Zygmund-Type Strong Law of Large Numbers with General Normalizing Sequences under Sublinear Expectation

Abstract: In this paper we study the Marcinkiewicz–Zygmund-type strong law of large numbers with general normalizing sequences under sublinear expectation. Specifically, we establish complete convergence in the Marcinkiewicz–Zygmund-type strong law of large numbers for sequences of negatively dependent and identically distributed random variables under certain moment conditions. We also give results for sequences of independent and identically distributed random variables. The moment conditions in this paper are based on a class of slowly varying functions that satisfy certain convergence properties. Moreover, some special examples and comparisons with existing results are also given.


Introduction
In recent years, with the development of science and society, more and more uncertain phenomena no longer satisfy the assumption that probability and expectation are linearly additive; therefore, linear expectation cannot be used to construct the corresponding models. Inspired by uncertainty problems in financial mathematics, statistics, and other fields, many scholars have begun to study nonlinear probability and nonlinear expectation, for example Choquet expectation and g-expectation; see Chen and Epstein [1], Choquet [2], Schmeidler [3], Wakker [4], and Wasserman and Kadane [5]. Nonlinear expectation theory provides mathematical tools for the analysis of big data with high uncertainty, and it has a wide range of applications in risk measurement, financial mathematics, and financial technology; see Barrieu and Karoui [6], El Karoui et al. [7], Gianin [8], Peng [9], and Peng et al. [10].
Recently, Peng [11,12] presented a general theory of sublinear expectation, which differs from classical linear expectation in that the sublinear expectation is defined directly as a functional satisfying certain axioms rather than via a single probability measure. Based on the framework of sublinear expectation theory, many scholars have generalized the classical law of large numbers (LLN). For example, Chen, Liu, and Zong [13] weakened the independence assumption on the random variables in Peng [14] and obtained moment conditions under which the weak LLN holds; Chen, Wu, and Li [15] proved that the strong LLN holds for independent and identically distributed random variables under the condition that the (1 + α)-th moment is finite; Zhang [16] studied the strong LLN for a sequence of independent and negatively dependent random variables under the condition that the first moment is finite under Choquet expectation; Hu [17] proved that the strong LLN remains true under a general moment condition weaker than finiteness of the (1 + α)-th moment; Zhan and Wu [18] studied the strong LLN for weighted sums of extended negatively dependent random variables; and Feng and Lan [19] studied the Marcinkiewicz–Zygmund-type strong LLN for arrays of row-wise independent random variables.
The Marcinkiewicz-Zygmund (M-Z)-type strong LLN is a very important class of strong LLNs. Let {X_n, n ≥ 1} be a sequence of independent and identically distributed random variables; then {X_n, n ≥ 1} is said to satisfy the M-Z-type strong LLN if

$$\lim_{n\to\infty} \frac{1}{n^{1/p}} \sum_{i=1}^{n} (X_i - EX_1) = 0 \quad \text{a.s.}, \qquad (1)$$

which holds if and only if $E|X_1|^p < \infty$, where 1 < p < 2. Anh et al. [20] replaced the sequence $\{n^{1/p}, n \ge 1\}$ in (1) with the normalizing sequence $\{n^{1/\alpha}\widetilde{L}(n^{1/\alpha}), n \ge A^{\alpha}\}$ and proved the M-Z-type strong LLN for sequences of negatively associated and identically distributed random variables; that is,

$$\lim_{n\to\infty} \frac{1}{n^{1/\alpha}\widetilde{L}(n^{1/\alpha})} \sum_{i=1}^{n} (X_i - EX_1) = 0 \quad \text{a.s.}$$

holds if and only if $E\big(|X_1|^{\alpha} L^{\alpha}(|X_1|)\big) < \infty$, where 1 ≤ α < 2, L(x) is a slowly varying function defined on [A, ∞) with some A > 0, and $\widetilde{L}(x)$ is the de Bruijn conjugate of L(x). For more conclusions on the M-Z-type strong LLN in the classical framework, see Bai and Cheng [21], Chen and Gan [22], Miao, Mu and Xu [23], and Sung [24]. Inspired by Anh et al. [20], who worked in the classical framework, in this paper we generalize their results to the framework of sublinear expectation theory. It is worth noting that, by considering slowly varying functions satisfying certain convergence properties, we can prove the complete convergence of weighted sums and the M-Z-type strong LLN with general normalizing sequences. Note that in the existing literature, only some special normalizing sequences are considered. For example, Deng and Wang [25] studied complete convergence for extended independent random variables under sublinear expectation in the case L(x) = 1, and Feng and Huang [26] studied strong convergence for weighted sums of extended negatively dependent random variables under sublinear expectation in the case $L(x) = \log^{-1/\gamma}(x)$, 0 < γ < 2.
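As a quick numerical illustration of the classical normalization in (1) (a sketch of ours, not part of the cited results; the function name and the choice of Rademacher variables are our own), the averaged ratio $|S_n|/n^{1/p}$ shrinks as n grows for 1 < p < 2:

```python
import random

random.seed(0)

def avg_mz_ratio(n, p, trials=50):
    """Average of |S_n| / n**(1/p) over independent trials, where S_n is
    a sum of n i.i.d. Rademacher (+/-1) variables (mean zero, all moments finite)."""
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n))
        total += abs(s) / n ** (1.0 / p)
    return total / trials

# For 1 < p < 2, (1) says the normalized sum tends to 0 almost surely;
# numerically, the averaged ratio shrinks as n grows.
p = 1.5
avg_small = avg_mz_ratio(100, p)
avg_large = avg_mz_ratio(10000, p)
print(avg_small, avg_large)
```

Since $|S_n|$ grows like $\sqrt{n}$ here while the normalization is $n^{1/p}$ with $1/p > 1/2$, the ratio tends to zero, consistent with (1).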
By comparing the conditions with those in existing results, it is shown that the results in this paper generalize existing results to some extent. Furthermore, we also study complete convergence, a notion introduced by Hsu and Robbins [27], under sublinear expectation. Note that there have already been some results on complete convergence under sublinear expectation, such as Deng and Wang [25], Feng and Huang [28], Lin and Feng [29], and Zhong and Wu [30].
The paper is organized as follows. In Section 2, we recall the basic concepts of sublinear expectation and slowly varying functions, as well as some lemmas that will be used in the proofs. In Section 3, we give the main results: complete convergence for weighted sums and the M-Z-type strong LLN with general normalizing sequences under sublinear expectation. In Section 4, the results for three specific slowly varying functions are given and compared with existing results.

Sublinear Expectation
In this paper we use the framework of sublinear expectation introduced by Peng [14]. Given a measurable space (Ω, F), let H be a linear space of real functions defined on Ω satisfying the following: if $X_1, X_2, \ldots, X_n \in \mathcal{H}$, then $\varphi(X_1, X_2, \ldots, X_n) \in \mathcal{H}$ for each $\varphi \in C_{l,Lip}(\mathbb{R}^n)$, the space of local Lipschitz functions satisfying

$$|\varphi(x) - \varphi(y)| \le C\big(1 + |x|^m + |y|^m\big)|x - y|, \quad \forall x, y \in \mathbb{R}^n,$$

where the constant C > 0 and the integer m ∈ ℕ depend on ϕ. The space H can be used as the space of random variables.
Let P be a nonempty set of probability measures on the measurable space (Ω, F). Define the upper probability V(·) and the lower probability v(·) by

$$\mathbb{V}(A) = \sup_{P \in \mathcal{P}} P(A), \qquad v(A) = \inf_{P \in \mathcal{P}} P(A), \quad A \in \mathcal{F}.$$

Define the upper expectation Ê(·) and the lower expectation with respect to P by

$$\hat{\mathbb{E}}[X] = \sup_{P \in \mathcal{P}} E_P[X], \qquad \hat{\mathcal{E}}[X] = \inf_{P \in \mathcal{P}} E_P[X],$$

where X is an F-measurable real-valued random variable such that $E_P[X] < \infty$ for any P ∈ P. (Ω, F, P, Ê) is called the upper expectation space. Obviously, $\hat{\mathcal{E}}[X] \le \hat{\mathbb{E}}[X]$ and $\hat{\mathcal{E}}[X] = -\hat{\mathbb{E}}[-X]$ hold for every X.
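The upper and lower probabilities and expectations can be illustrated on a toy finite example (a hypothetical two-point sample space and two measures of our own choosing, not from the paper):

```python
# A hypothetical two-point sample space Omega = {0, 1} with a set P of two
# probability measures (all names and numbers here are our own illustration).
measures = [(0.3, 0.7), (0.6, 0.4)]   # each tuple: (P(omega=0), P(omega=1))
X = (0.0, 1.0)                        # random variable X(omega) = omega

def upper_exp(Y):
    """Upper expectation: sup over P in the set of E_P[Y]."""
    return max(p0 * Y[0] + p1 * Y[1] for (p0, p1) in measures)

def lower_exp(Y):
    """Lower expectation: inf over P in the set of E_P[Y]."""
    return min(p0 * Y[0] + p1 * Y[1] for (p0, p1) in measures)

# Upper and lower probability of the event A = {omega = 1}.
V_A = max(p1 for (_, p1) in measures)
v_A = min(p1 for (_, p1) in measures)

# Conjugacy relation: the lower expectation equals -(upper expectation of -X).
neg_X = tuple(-x for x in X)
print(V_A, v_A, upper_exp(X), lower_exp(X), -upper_exp(neg_X))
```

The last two printed values coincide, which is exactly the conjugacy relation between the lower and upper expectations stated above.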
The triplet (Ω, H, Ê) is called a sublinear expectation space.
Remark 2. By the definition of sublinear expectation, it can be verified that the upper expectation Ê is a sublinear expectation. Note that all the results obtained in this paper are stated in the context of the upper expectation space.
Definition 2. For any capacity V, the Choquet expectation is defined by

$$C_V[X] = \int_0^{\infty} V(X \ge t)\, dt + \int_{-\infty}^{0} \big(V(X \ge t) - 1\big)\, dt.$$

In particular, if the capacity V satisfies

$$V(A \cup B) + V(A \cap B) \le V(A) + V(B),$$

then the Choquet expectation induced by this capacity V is a sublinear expectation. Replacing the capacity V in the definition by the upper probability V and the lower probability v, respectively, we obtain a pair of Choquet expectations $(C_{\mathbb{V}}, C_{v})$.
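On a finite sample space the defining integral becomes a finite sum, since $V(X \ge t)$ is a step function of t. A minimal sketch for a nonnegative random variable, with a made-up pair of measures (all names and numbers are our own choosing, not the paper's):

```python
# A minimal discrete sketch (our own toy example, not the paper's notation):
# Omega = {0, 1, 2}, the capacity V is the upper probability over two
# measures, and X >= 0, so C_V[X] = integral_0^inf V(X >= t) dt is a finite sum.
measures = [(0.25, 0.5, 0.25), (0.5, 0.125, 0.375)]
X = (0.0, 1.0, 2.0)

def V(event):
    """Upper probability of a subset of Omega (given as a list of points)."""
    return max(sum(p[w] for w in event) for p in measures)

def choquet(Y):
    """Choquet expectation of a nonnegative discrete random variable:
    sum over the jump levels of (level increment) * V(Y >= level)."""
    levels = sorted(set(Y))
    prev, total = 0.0, 0.0
    for x in levels:
        if x <= 0.0:
            continue
        event = [w for w in range(len(Y)) if Y[w] >= x]
        total += (x - prev) * V(event)
        prev = x
    return total

c = choquet(X)
print(c)   # (1 - 0) * V(X >= 1) + (2 - 1) * V(X >= 2)
```

Because the upper probability is a supremum of probability measures, the resulting Choquet expectation here dominates each classical expectation $E_P[X]$, P in the set.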
A sequence of random variables $\{X_n, n \ge 1\}$ is said to be negatively dependent if, for each n ≥ 1, $X_{n+1}$ is negatively dependent to $(X_1, X_2, \ldots, X_n)$.
Definition 5 ([32], Definition 2.5). Random variables X and Y are said to be identically distributed, denoted by $X \overset{d}{=} Y$, if $\hat{\mathbb{E}}[\varphi(X)] = \hat{\mathbb{E}}[\varphi(Y)]$ for each $\varphi \in C_{l,Lip}(\mathbb{R})$.

Slowly Varying Functions
First, we present the relevant definitions and properties of slowly varying functions.

Definition 6 ([33], Definitions 1.1 and 1.2). A function L(·) is said to be regularly varying at infinity if it is real-valued, positive, and measurable on [A, ∞) with some A > 0, and if for each λ > 0,

$$\lim_{x\to\infty} \frac{L(\lambda x)}{L(x)} = \lambda^{\rho},$$

where ρ ∈ ℝ (ρ is called the index of regular variation). A regularly varying function with index of regular variation ρ = 0 is called slowly varying.
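For instance, L(x) = ln x is slowly varying (index ρ = 0); a small numerical check of the defining limit (our own illustration, not from the paper):

```python
import math

# Numerical check (our own illustration) that L(x) = ln x is slowly varying:
# for each fixed lambda > 0, L(lambda * x) / L(x) -> 1 as x -> infinity.
def ratio(lam, x):
    return math.log(lam * x) / math.log(x)

lam = 10.0
r1 = ratio(lam, 1e3)    # moderately large x
r2 = ratio(lam, 1e12)   # much larger x: the ratio is closer to 1
print(r1, r2)
```

Since $\ln(\lambda x)/\ln x = 1 + \ln\lambda/\ln x$, the ratio approaches 1 at a logarithmic rate, which the two printed values reflect.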

Definition 7 ([34]). Let L(·) be a slowly varying function; then there exists a slowly varying function $\widetilde{L}(\cdot)$, asymptotically uniquely determined, such that

$$\lim_{x\to\infty} L(x)\,\widetilde{L}\big(xL(x)\big) = 1, \qquad \lim_{x\to\infty} \widetilde{L}(x)\,L\big(x\widetilde{L}(x)\big) = 1.$$

The function $\widetilde{L}$ is called the de Bruijn conjugate of L, and $(L, \widetilde{L})$ is called a (slowly varying) conjugate pair.
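As an illustration of Definition 7 (a computation of ours, not taken from the paper), one can check that the de Bruijn conjugate of $L(x) = \ln x$ is asymptotically $1/\ln x$:

```latex
% Candidate conjugate: \widetilde{L}(x) = 1/\ln x. Check the defining relation
% L(x)\,\widetilde{L}\bigl(x L(x)\bigr) \to 1 as x \to \infty:
\ln x \cdot \widetilde{L}(x \ln x)
  = \frac{\ln x}{\ln(x \ln x)}
  = \frac{\ln x}{\ln x + \ln\ln x}
  \longrightarrow 1 \qquad (x \to \infty).
% Hence (\ln x,\, 1/\ln x) is a (slowly varying) conjugate pair; the symmetric
% relation \widetilde{L}(x)\,L\bigl(x\widetilde{L}(x)\bigr) \to 1 is checked similarly.
```

This pair reappears in Section 4, where the normalizing sequence for L(x) = ln x takes the form $b_n = n^{1/\alpha}/\ln(n^{1/\alpha})$.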
Next we present some important properties of slowly varying functions.
Lemma 3 shows that for any slowly varying function L(·), we can always find a differentiable slowly varying function L_1(·) that is equivalent to it. Therefore, without loss of generality, in the following we may assume that the slowly varying function L(x) is differentiable and satisfies Equation (4). Moreover, Anh et al. [20] proved that if L(·) is a slowly varying function defined on [A, ∞) with some A > 0, then there exists a slowly varying function, equal to L(x) for all large x, that is bounded on every finite closed subinterval of [A, ∞). Thus, in addition to the assumption of differentiability, we also assume that L(x) (x ≥ A, A > 0) is bounded on finite closed intervals.

Lemma 4 ([20], Lemma 2.3). Let p > 0 and let L(·) be a slowly varying function defined on [A, ∞) with some A > 0 satisfying (4); then the following statements hold.
The following lemma describes the convergence of a special class of series involving slowly varying functions, and the conclusion will be used in the proofs.

Lemma 5 ([20], Lemma 2.5). Let p > 1, q ∈ ℝ, and let L(·) be a differentiable slowly varying function defined on [A, ∞) with some A > 0; then

$$\sum_{n \ge A} \frac{L^{q}(n)}{n^{p}} < \infty.$$
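A numerical sketch of this kind of series convergence (our own illustration, with the concrete choices L(x) = ln x, p = 2, q = 2): once p > 1, the partial sums stabilize quickly.

```python
import math

# Partial sums of sum_{n >= 2} (ln n)^q / n^p for the slowly varying L(x) = ln x.
# For p > 1 the series converges for every fixed q (our illustrative choice).
def partial_sum(N, p=2.0, q=2.0):
    return sum(math.log(n) ** q / n ** p for n in range(2, N + 1))

s1 = partial_sum(10**4)
s2 = partial_sum(10**5)
print(s1, s2, s2 - s1)   # the tail contribution is already tiny
```

The point is that the polynomial factor $n^{-p}$ with p > 1 dominates any power of the slowly varying factor, so the tail beyond $n = 10^4$ contributes very little.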

The Strong LLN for Negatively Dependent Random Variables
In this section, we study the M-Z-type strong LLN for negatively dependent and identically distributed sequences of random variables in the upper expectation space (Ω, F, P, Ê). In order to establish the connection between complete convergence and the M-Z-type strong LLN, we first prove the following lemma under sublinear expectation. In all the proofs, C denotes a positive constant that may vary from line to line.
Proof of Lemma 6. For 0 < b_n ↑, by the Borel-Cantelli lemma and $b_{2n}/b_n = O(1)$, the conclusion follows.

The next proposition relates the existence of the Choquet expectation to the convergence of a certain series, providing an equivalent characterization of the moment condition while also generalizing the classical result of Proposition 2.6 in Anh et al. [20].

Proposition 1. Let X be a random variable and α ≥ 1. Let L(x) be a slowly varying function defined on [A, ∞) with some A > 0. Assume that $A^{\alpha}$ is an integer; otherwise, take $[A^{\alpha}] + 1$ in its place.

If the random variables X, Y and the constant a are nonnegative, then by the definition of the Choquet expectation, the subadditivity of $\mathbb{V}$, and the inclusion $\{X + Y \ge t\} \subset \{X \ge t/2\} \cup \{Y \ge t/2\}$, we have $C_{\mathbb{V}}[X + Y] \le 2\big(C_{\mathbb{V}}[X] + C_{\mathbb{V}}[Y]\big)$. Moreover, for any random variables X, Y with X ≤ Y, the monotonicity of $\mathbb{V}$ gives $C_{\mathbb{V}}[X] \le C_{\mathbb{V}}[Y]$. From these properties of the Choquet expectation and the $C_r$ inequality, the quantity of interest splits into two terms $I_1$ and $I_2$. We already have $I_1 < \infty$, so we focus on $I_2$: by the definition of the Choquet expectation and (2), the required bound follows. Next, we give the main result of this section.
Theorem 1. Let 1 ≤ α < 2 and ε > 0. For a slowly varying function L(x) defined on [A, ∞) with some A > 0, assume that L(x) is increasing when α = 1 and satisfies

$$\sum_{n \ge A^{\alpha}} \frac{\widetilde{L}^{2\varepsilon}(n^{1/\alpha})}{n} < \infty, \qquad (6)$$

where $\widetilde{L}(x)$ is the de Bruijn conjugate of L(x). Let {X, X_n, n ≥ 1} be a sequence of negatively dependent and identically distributed random variables in the upper expectation space (Ω, F, P, Ê) satisfying $\hat{\mathbb{E}}[X] = 0$ and $C_{\mathbb{V}}\big[|X|^{\alpha} L^{\alpha+\varepsilon}(|X| + A)\big] < \infty$. Define $b_n = n^{1/\alpha}\widetilde{L}(n^{1/\alpha})$, $n \ge A^{\alpha}$; then we have the following: (i) For any array of nonnegative constants $\{a_{ni}\}$ satisfying (8), the complete convergence (9) holds; in particular, (10) holds. (ii) The M-Z-type strong LLN holds, i.e., $\lim_{n\to\infty} \frac{1}{b_n}\sum_{i=1}^{n} X_i = 0$ a.s. $\mathbb{V}$.

Proof of Theorem 1. For simplicity, we assume that $A^{\alpha}$ is an integer; otherwise we can take $[A^{\alpha}] + 1$ in its place. First, since $C_{\mathbb{V}}\big[|X|^{\alpha} L^{\alpha+\varepsilon}(|X| + A)\big] < \infty$, by the subadditivity of $\mathbb{V}$ and Proposition 1, the first term on the right-hand side of (12) is summable. Next, we focus on the second term on the right-hand side of (12). By the Cauchy-Schwarz inequality and (8), and recalling $\hat{\mathbb{E}}[X] = 0$, this term can be controlled. By Lemmas 3 and 4, we can find B ≥ A such that $x^{1/\alpha}\widetilde{L}(x^{1/\alpha})$ and $x^{\alpha-1}L^{\alpha}(x)$ are increasing on [B, ∞); without loss of generality, we may assume that they are increasing on [A, ∞). By Definition 7 and (7), we obtain (13) and (14). From (12)-(14), to obtain (9) it remains to show (15).

By the Chebyshev inequality (see Proposition 2.1 in [15] and Theorem 2.1 in [16]), the left-hand side of (15) is bounded by a sum of two terms $M_1$ and $M_2$. For $M_2$, by $\hat{\mathbb{E}}\big[a_{ni}X_{ni} - \hat{\mathbb{E}}[a_{ni}X_{ni}]\big] = 0$ and the Cauchy-Schwarz inequality, together with (6), the corresponding series converges. For $M_1$, by Proposition 1 and (7), the corresponding series converges as well. From (16)-(18), to obtain (15) it remains to show (19).

Note that the remaining series splits into two parts $N_1$ and $N_2$. Let $\hat{L}(x) = L(x^{1/\alpha})$, $x \ge A^{\alpha}$. Since L(x) is a slowly varying function defined on [A, ∞) with some A > 0, by Definition 6, for any λ > 0 we have $\hat{L}(\lambda x)/\hat{L}(x) = L\big((\lambda x)^{1/\alpha}\big)/L(x^{1/\alpha}) \to 1$; therefore, $\hat{L}(\cdot)$ is a slowly varying function defined on $[A^{\alpha}, \infty)$. By Lemma 5, $N_1$ converges; for $N_2$, again by Lemma 5, the series converges, so (20) and (21) hold. Combining (20) and (21), we obtain (19). Letting $a_{ni} \equiv 1$ in (9), we obtain (10).
Finally, by Lemma 6, we obtain (ii).

The above theorem requires that the series $\sum_{n \ge A^{\alpha}} \widetilde{L}^{2\varepsilon}(n^{1/\alpha})/n$ be finite, where $\widetilde{L}(x)$ is the de Bruijn conjugate of the slowly varying function L(x). By Definition 7, $\widetilde{L}(x)$ is not unique; in fact, it is sufficient that at least one version of $\widetilde{L}(x)$ satisfies the condition. Indeed, by Remark 3, when L(x) satisfies (2), $\widetilde{L}(x) = 1/L(x)$ is the de Bruijn conjugate of L(x) and is asymptotically unique. Condition (6) can then be rewritten as $\sum_{n \ge A^{\alpha}} 1/\big(n L^{2\varepsilon}(n^{1/\alpha})\big) < \infty$, from which we obtain the following theorem.

Theorem 2. Let 1 ≤ α < 2 and ε > 0. For a slowly varying function L(x) defined on [A, ∞) with some A > 0, assume that L(x) is increasing when α = 1 and satisfies

$$\sum_{n \ge A^{\alpha}} \frac{1}{n L^{2\varepsilon}(n^{1/\alpha})} < \infty,$$

where $\widetilde{L}(x) = 1/L(x)$ is the de Bruijn conjugate of L(x). Let {X, X_n, n ≥ 1} be a sequence of negatively dependent and identically distributed random variables in the upper expectation space (Ω, F, P, Ê) satisfying $\hat{\mathbb{E}}[X] = 0$ and $C_{\mathbb{V}}\big[|X|^{\alpha} L^{\alpha+\varepsilon}(|X| + A)\big] < \infty$. Define $b_n = n^{1/\alpha}\widetilde{L}(n^{1/\alpha})$, $n \ge A^{\alpha}$; then we have the following: (i) For any array of nonnegative constants $\{a_{ni}\}$ satisfying (8), the complete convergence conclusions of Theorem 1 hold. (ii) The M-Z-type strong LLN holds, i.e., $\lim_{n\to\infty} \frac{1}{b_n}\sum_{i=1}^{n} X_i = 0$, a.s. $\mathbb{V}$.

The Strong LLN for Independent Random Variables
In Theorem 1, we considered the strong LLN for a sequence of negatively dependent and identically distributed random variables and, in order to ensure that the centered variables $a_{ni}X_{ni} - \hat{\mathbb{E}}[a_{ni}X_{ni}]$ are also negatively dependent, we assumed that the $a_{ni}$ are nonnegative constants. For a sequence of independent (Definition 3) and identically distributed random variables, we can extend the condition in Theorem 1 to a general array $\{a_{ni}\}$. The theorem we obtain is as follows.
Theorem 3. Let 1 ≤ α < 2 and ε > 0. For a slowly varying function L(x) defined on [A, ∞) with some A > 0, assume that L(x) is increasing when α = 1 and satisfies (6), where $\widetilde{L}(x)$ is the de Bruijn conjugate of L(x). Let {X, X_n, n ≥ 1} be a sequence of independent and identically distributed random variables in the upper expectation space (Ω, F, P, Ê) that satisfies $\hat{\mathbb{E}}[X] = 0$ and $C_{\mathbb{V}}\big[|X|^{\alpha} L^{\alpha+\varepsilon}(|X| + A)\big] < \infty$. Define $b_n = n^{1/\alpha}\widetilde{L}(n^{1/\alpha})$, $n \ge A^{\alpha}$; then we have the following: (i) For any array of constants $\{a_{ni}\}$ satisfying (8), the complete convergence (9) holds; in particular, (10) holds. (ii) The M-Z-type strong LLN holds, i.e., $\lim_{n\to\infty} \frac{1}{b_n}\sum_{i=1}^{n} X_i = 0$ a.s. $\mathbb{V}$.

Proof of Theorem 3. The proof is similar to that of Theorem 1 with the necessary modifications due to the condition on $a_{ni}$; therefore, we only give the part of the proof concerning $a_{ni}$. The rest of the proof is similar to that of Theorem 1 and is omitted here.
For every array of constants $\{a_{ni}\}$, we decompose each $a_{ni}$ into its positive and negative parts and apply the argument of Theorem 1 to each part separately. Next, for $M_2$, the corresponding bound follows in the same way.

Remark 4. All the results obtained above hold in the upper expectation space (Ω, F, P, Ê). If we consider a general sublinear expectation space (Ω, H, Ê), the corresponding results still hold as long as both the sublinear expectation Ê and the capacity V are countably subadditive.

Further Discussions on the Moment Condition
In this section we consider several special slowly varying functions and compare the corresponding results with existing conclusions. Let ln x denote the natural logarithm (base e) and log x the base-2 logarithm.

L(x) = ln x
Let L(x) = ln x; then we have the following result.

Theorem 4. Let 1 ≤ α < 2, ε > 0, and let {X, X_n, n ≥ 1} be a sequence of negatively dependent and identically distributed random variables in the upper expectation space (Ω, F, P, Ê). Let $b_n = n^{1/\alpha}/\ln(n^{1/\alpha})$, $n \ge [e^{\alpha}] + 1$, and suppose the random variable X satisfies $\hat{\mathbb{E}}[X] = 0$ and $C_{\mathbb{V}}\big[|X|^{\alpha} \ln^{\alpha+1/2+\varepsilon}(|X| + e)\big] < \infty$; then we have the following: (i) the complete convergence (22) holds, and in particular (23) holds; (ii) the M-Z-type strong LLN (24) holds.

Proof of Theorem 4. For any ε > 0 and $n \ge [e^{\alpha}] + 1$, note that the de Bruijn conjugate of L(x) = ln x is $\widetilde{L}(x) = 1/\ln x$. Similar to the proof of Theorem 1, we can establish the analogues of the estimates there. To obtain (22), it remains to verify the corresponding series condition, which can be checked by a direct calculation. Then, similar to the proof of Theorem 1, we can obtain (23) and (24).
Remark 5. We note that the order of the moment condition $C_{\mathbb{V}}\big[|X|^{\alpha} \ln^{\alpha+1/2+\varepsilon}(|X| + e)\big] < \infty$ in Theorem 4 is increased by 1/2 + ε compared with the corresponding result in the linear expectation setting. The ε term is common in generalizations of the M-Z-type strong LLN to the sublinear expectation space, similar to the α term in the moment condition of [32]; the 1/2 term is needed in this paper due to the specific method of proof. We conjecture that the 1/2 term can be removed, and we will study this in future work.
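One way to see where the 1/2 term arises (a sketch of ours, assuming the series condition discussed after Theorem 1): with L(x) = ln x the de Bruijn conjugate is asymptotically 1/ln x, and the resulting series converges exactly when the exponent exceeds 1/2.

```latex
% With \widetilde{L}(x) = 1/\ln x and \ln(n^{1/\alpha}) = \alpha^{-1}\ln n,
\sum_{n} \frac{1}{n \ln^{2\varepsilon'}(n^{1/\alpha})}
  = \alpha^{2\varepsilon'} \sum_{n} \frac{1}{n (\ln n)^{2\varepsilon'}},
% and by the integral test
\int_{2}^{\infty} \frac{dx}{x (\ln x)^{2\varepsilon'}} < \infty
  \iff 2\varepsilon' > 1
  \iff \varepsilon' > \tfrac{1}{2}.
% Writing \varepsilon' = 1/2 + \varepsilon with \varepsilon > 0 then leads to the
% exponent \alpha + 1/2 + \varepsilon in the moment condition of Theorem 4.
```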

L(x) ≡ 1
Let L(x) ≡ 1; then we have the following result.
Theorem 5. Let 1 ≤ α < 2, ε > 0, and let {X, X_n, n ≥ 1} be a sequence of negatively dependent and identically distributed random variables in the upper expectation space (Ω, F, P, Ê). Suppose the random variable X satisfies the moment condition of Theorem 1 specialized to L(x) ≡ 1; then we have the following: (i) For any array of nonnegative constants $\{a_{ni}\}$ satisfying (8), the complete convergence (25) holds; in particular, the unweighted version holds. (ii) The M-Z-type strong LLN holds.

Deng and Wang [25] studied extended independent (EI, for short) sequences of random variables and also proved complete convergence. First, we recall the definition of an extended independent sequence of random variables.

Definition 8. Given a sublinear expectation space (Ω, H, Ê), a sequence of random variables {X_n, n ≥ 1} is said to be extended independent if, for each n ≥ 2,

$$\hat{\mathbb{E}}\Big[\prod_{i=1}^{n}\varphi_i(X_i)\Big] = \prod_{i=1}^{n}\hat{\mathbb{E}}[\varphi_i(X_i)]$$

for all nonnegative functions $\varphi_i \in C_{b,Lip}(\mathbb{R})$.

For sequences of extended independent random variables, Deng and Wang [25] proved the following proposition.

Proposition 2. Let αp = 1, α > 1/2, and 0 < p < 1. Assume that {X_n, n ≥ 1} is a sequence of identically distributed EI random variables satisfying a suitable moment condition; then the complete convergence (26) holds.

Remark 6. In Theorem 5, we consider a sequence of negatively dependent and identically distributed random variables. Note that (25) describes the complete convergence of "weighted" sums of random variables, which is more general than the complete convergence of "standard" sums of random variables described by (26).
Remark 7. In Theorem 6, we exclude the case α = 1. This is because the method we use requires that $x^{\alpha-1}\log^{-\alpha/\gamma}(x)$ be increasing on [A, ∞), which fails when α = 1.
First, we recall the definition of an extended negatively dependent sequence of random variables from [26].

Definition 9. Given a sublinear expectation space (Ω, H, Ê), a sequence of random variables $\{X_n, n \ge 1\}$ is said to be upper (respectively, lower) extended negatively dependent if there is some dominating constant K ≥ 1 such that

$$\hat{\mathbb{E}}\Big[\prod_{i=1}^{n}\varphi_i(X_i)\Big] \le K \prod_{i=1}^{n}\hat{\mathbb{E}}[\varphi_i(X_i)], \quad n \ge 2,$$

where the nonnegative functions $\varphi_i(x) \in C_{b,Lip}(\mathbb{R})$ (i = 1, 2, ...) are all nondecreasing (respectively, all nonincreasing). The random variables are called extended negatively dependent (END) if they are both upper extended negatively dependent and lower extended negatively dependent.
Feng and Huang [26] studied extended negatively dependent sequences of random variables and proved complete convergence as follows.