Article

Equivalent Conditions of Complete p-th Moment Convergence for Weighted Sum of ND Random Variables under Sublinear Expectation Space

1 School of Mathematics, Jilin University, Changchun 130012, China
2 School of Mathematics and Statistics, Beihua University, Jilin 132013, China
3 School of Mathematics and Statistics, Liaoning University, Shenyang 110031, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(16), 3494; https://doi.org/10.3390/math11163494
Submission received: 24 July 2023 / Revised: 9 August 2023 / Accepted: 11 August 2023 / Published: 13 August 2023

Abstract

We investigate the complete convergence and the complete $p$-th moment convergence for weighted sums of sequences of negatively dependent (ND) random variables under sublinear expectation space. Using moment inequalities and truncation methods, we prove equivalent conditions of complete convergence and of $p$-th moment convergence for weighted sums of sequences of ND random variables under sublinear expectation space.

1. Introduction

Nonadditive probability theory and nonadditive expectation theory are useful tools for studying risk measures, uncertainty in statistics, nonlinear stochastic calculus, and superhedging in finance; cf. Peng [1,2], Denis and Martini [3], Gilboa [4], and Marinacci [5]. This paper considers the general sublinear expectations introduced by Peng [6,7,8] on a general space, obtained by relaxing the linearity of the classical expectation to subadditivity and positive homogeneity (cf. Definition 1 below). The notion of sublinear expectation provides a very flexible framework for modeling problems that are not additive. Inspired by the work of Peng, researchers have established many limit theorems under sublinear expectation space that extend the corresponding results in probability and statistics. Zhang [9,10,11] studied exponential inequalities, Rosenthal's inequalities, Hölder's inequalities, and Donsker's invariance principle under sublinear expectation space. Chen and coauthors [12,13,14] studied strong laws of large numbers, weak laws of large numbers, and large deviations for ND random variables under sublinear expectations, respectively. Wu [15] obtained precise asymptotics for complete integral convergence under sublinear expectation space. For more research on limit theorems under sublinear expectation space, the reader may refer to Li and Shi [16], Liu and Zhang [17], Ding [18], Wu and Wang [19], Guo and Zhang [20,21], and Dong et al. [22].
Recently, Guo and Shan [23] studied equivalent conditions of complete $q$-th moment convergence for sums of sequences of negatively orthant dependent (NOD) random variables in the classical setting. Xu and Cheng [24,25] obtained equivalent conditions of complete convergence and of $p$-th moment convergence for sums of sequences of independent and identically distributed (i.i.d.) random variables under sublinear expectation space. ND sequences have wide applications in percolation theory, multivariate statistics, etc., so it is natural to generalize properties of independent sequences to ND sequences. Hence, it is meaningful to extend the results of Xu and Cheng [24,25] to ND random variables under sublinear expectation space. In this paper, we prove equivalent conditions of complete convergence and of $p$-th moment convergence for weighted sums of sequences of ND random variables under sublinear expectation space.

2. Preliminaries

We use the framework of Peng [8]. Suppose that $(\Omega, \mathcal{F})$ is a given measurable space and that $\mathcal{H}$ is a linear space of real functions defined on $\Omega$ such that $I_A \in \mathcal{H}$ for every $A \in \mathcal{F}$, where $I_A$ denotes the indicator function of $A$, and such that if $X_1, X_2, \ldots, X_n \in \mathcal{H}$, then $\varphi(X_1, X_2, \ldots, X_n) \in \mathcal{H}$ for each $\varphi \in C_{l,Lip}(\mathbb{R}^n)$, where $C_{l,Lip}(\mathbb{R}^n)$ is the linear space of locally Lipschitz continuous functions $\varphi$ satisfying
$$|\varphi(x) - \varphi(y)| \le C\big(1 + |x|^m + |y|^m\big)|x - y|, \quad \forall x, y \in \mathbb{R}^n,$$
for some $C > 0$ and $m \in \mathbb{N}$ depending on $\varphi$. We also denote by $C_{b,Lip}(\mathbb{R}^n)$ the linear space of bounded Lipschitz continuous functions $\varphi$ satisfying, for some $C > 0$,
$$|\varphi(x) - \varphi(y)| \le C|x - y|, \quad \forall x, y \in \mathbb{R}^n.$$
Definition 1. 
A sublinear expectation $\mathbb{E}$ on $\mathcal{H}$ is a function $\mathbb{E}: \mathcal{H} \to \bar{\mathbb{R}}$ satisfying the following properties: for all $X, Y \in \mathcal{H}$, we have
(1) Monotonicity: if $X \ge Y$, then $\mathbb{E}[X] \ge \mathbb{E}[Y]$;
(2) Constant preserving: $\mathbb{E}[c] = c$;
(3) Sub-additivity: $\mathbb{E}[X + Y] \le \mathbb{E}[X] + \mathbb{E}[Y]$ whenever $\mathbb{E}[X] + \mathbb{E}[Y]$ is not of the form $+\infty - \infty$ or $-\infty + \infty$;
(4) Positive homogeneity: $\mathbb{E}[\lambda X] = \lambda\,\mathbb{E}[X]$ for all $\lambda \ge 0$.
Here, $\bar{\mathbb{R}} = [-\infty, \infty]$. The triple $(\Omega, \mathcal{H}, \mathbb{E})$ is called a sublinear expectation space. Given a sublinear expectation $\mathbb{E}$, we denote the conjugate expectation $\mathcal{E}$ of $\mathbb{E}$ by
$$\mathcal{E}[X] := -\mathbb{E}[-X], \quad \forall X \in \mathcal{H}.$$
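As a concrete illustration (our addition, not part of the original development): on a finite sample space, a sublinear expectation can be realized as an upper expectation $\mathbb{E}[X] = \sup_{P \in \mathcal{P}} E_P[X]$ over a family $\mathcal{P}$ of probability measures; all four axioms follow from properties of the supremum. The sketch below checks the axioms and the conjugate relation numerically for one such hypothetical family of priors.

```python
import numpy as np

# A hypothetical family of priors on a 3-point sample space;
# each row is a probability vector (this example is ours, for illustration).
priors = np.array([
    [0.2, 0.3, 0.5],
    [0.4, 0.4, 0.2],
    [0.1, 0.6, 0.3],
])

def E(x):
    """Upper expectation: E[X] = sup over the family of linear expectations."""
    return max(float(p @ x) for p in priors)

def conj(x):
    """Conjugate (lower) expectation: eps[X] = -E[-X]."""
    return -E(-x)

X = np.array([1.0, -2.0, 3.0])
Y = np.array([0.5, 1.0, -1.0])

assert E(X + Y) <= E(X) + E(Y) + 1e-12          # (3) sub-additivity
assert abs(E(2.5 * X) - 2.5 * E(X)) < 1e-12     # (4) positive homogeneity
assert abs(E(np.full(3, 7.0)) - 7.0) < 1e-12    # (2) constant preserving
assert conj(X) <= E(X) + 1e-12                  # conjugate lies below E
print(E(X), conj(X))
```

Monotonicity also holds, since each linear expectation $E_P$ preserves the pointwise order and the supremum does as well.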
A set function $V: \mathcal{F} \to [0, 1]$ is called a capacity if
(1) $V(\emptyset) = 0$, $V(\Omega) = 1$;
(2) $V(A) \le V(B)$ whenever $A \subseteq B$, $A, B \in \mathcal{F}$.
In this paper, given a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$, we set the capacity $V(A) := \mathbb{E}[I_A]$ for $A \in \mathcal{F}$. We define the Choquet expectation $C_V$ by
$$C_V(X) := \int_{-\infty}^{0}\big(V(X \ge x) - 1\big)\,dx + \int_{0}^{\infty} V(X \ge x)\,dx.$$
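For orientation, here is a worked example (ours): take $X = c\,I_A$ with $c > 0$ and $A \in \mathcal{F}$. Then $V(X \ge x) = 1$ for $x \le 0$, $V(X \ge x) = V(A)$ for $0 < x \le c$, and $V(X \ge x) = 0$ for $x > c$, so
$$C_V(c\,I_A) = \int_{-\infty}^{0}(1 - 1)\,dx + \int_{0}^{c} V(A)\,dx = c\,V(A).$$
In particular, $C_V$ reduces to the ordinary expectation when $V$ is a probability measure.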
Definition 2. 
Let $X_1$ be an $n$-dimensional random vector defined on a sublinear expectation space $(\Omega_1, \mathcal{H}_1, \mathbb{E}_1)$ and $X_2$ an $n$-dimensional random vector defined on a sublinear expectation space $(\Omega_2, \mathcal{H}_2, \mathbb{E}_2)$. They are called identically distributed, denoted by $X_1 \overset{d}{=} X_2$, if
$$\mathbb{E}_1\big[\varphi(X_1)\big] = \mathbb{E}_2\big[\varphi(X_2)\big], \quad \forall \varphi \in C_{b,Lip}(\mathbb{R}^n).$$
Definition 3. 
In a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$, a random vector $Y = (Y_1, \ldots, Y_n)$, $Y_i \in \mathcal{H}$, is said to be independent of another random vector $X = (X_1, \ldots, X_m)$, $X_i \in \mathcal{H}$, under $\mathbb{E}$ if
$$\mathbb{E}\big[\varphi(X, Y)\big] = \mathbb{E}\big[\mathbb{E}[\varphi(x, Y)]\big|_{x = X}\big], \quad \forall \varphi \in C_{b,Lip}(\mathbb{R}^m \times \mathbb{R}^n).$$
Random variables $\{X_n, n \ge 1\}$ are said to be independent if $X_{i+1}$ is independent of $(X_1, \ldots, X_i)$ for each $i \ge 1$.
From the definition of independence, it is easily seen that if $Y$ is independent of $X$, where $X, Y \in \mathcal{L} := \{Z \in \mathcal{H} : \mathbb{E}[|Z|] < \infty\}$, $X \ge 0$, and $\mathbb{E}[Y] \ge 0$, then
$$\mathbb{E}[XY] = \mathbb{E}[X]\,\mathbb{E}[Y].$$
Further, if $Y$ is independent of $X$ with $X, Y \in \mathcal{L}$, $X \ge 0$, and $Y \ge 0$, then
$$\mathcal{E}[XY] = \mathcal{E}[X]\,\mathcal{E}[Y].$$
Definition 4. 
A sequence of random variables $\{X_n, n \ge 1\}$ is said to be i.i.d. if $X_i \overset{d}{=} X_1$ and $X_{i+1}$ is independent of $(X_1, \ldots, X_i)$ for each $i \ge 1$.
Definition 5. 
(i) In a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$, a random vector $Y = (Y_1, \ldots, Y_n)$, $Y_i \in \mathcal{H}$, is said to be ND of another random vector $X = (X_1, \ldots, X_m)$, $X_i \in \mathcal{H}$, under $\mathbb{E}$ if for each pair of test functions $\varphi_1 \in C_{l,Lip}(\mathbb{R}^m)$ and $\varphi_2 \in C_{l,Lip}(\mathbb{R}^n)$, we have
$$\mathbb{E}\big[\varphi_1(X)\varphi_2(Y)\big] \le \mathbb{E}\big[\varphi_1(X)\big]\,\mathbb{E}\big[\varphi_2(Y)\big]$$
whenever $\varphi_1(X) \ge 0$, $\mathbb{E}[\varphi_2(Y)] \ge 0$, $\mathbb{E}[\varphi_1(X)\varphi_2(Y)] < \infty$, $\mathbb{E}[|\varphi_1(X)|] < \infty$, $\mathbb{E}[|\varphi_2(Y)|] < \infty$, and either $\varphi_1$ and $\varphi_2$ are both coordinate-wise non-decreasing or $\varphi_1$ and $\varphi_2$ are both coordinate-wise non-increasing.
(ii) Let $\{X_n, n \ge 1\}$ be a sequence of random variables in a sublinear expectation space. $X_1, X_2, \ldots$ are said to be ND if $X_{i+1}$ is ND of $(X_1, \ldots, X_i)$ for each $i \ge 1$.
From the definitions of independence and ND, if $Y$ is independent of $X$, then $Y$ is ND of $X$. Furthermore, if $\{X_n, n \ge 1\}$ is a sequence of independent random variables and $f_1(x), f_2(x), \ldots \in C_{l,Lip}(\mathbb{R})$, then $\{f_n(X_n), n \ge 1\}$ is also a sequence of independent random variables; if $\{X_n, n \ge 1\}$ is a sequence of ND random variables and $f_1(x), f_2(x), \ldots \in C_{l,Lip}(\mathbb{R})$ are all non-decreasing (or all non-increasing) functions, then $\{f_n(X_n), n \ge 1\}$ is also a sequence of ND random variables.
In the sequel, we suppose that $\mathbb{E}$ is countably sub-additive. Let $C$ denote a positive constant whose value may differ from place to place. $a_n \ll b_n$ denotes that there exists a constant $C > 0$ such that $a_n \le C b_n$ for $n$ large enough; $a_n \approx b_n$ means that both $a_n \ll b_n$ and $b_n \ll a_n$; $\log x$ means $\ln(\max\{e, x\})$; and $I(A)$ or $I_A$ denotes the indicator function of $A$.
We present several necessary lemmas to prove our main results.
Lemma 1 
([9]). Let $p, q > 1$ be two real numbers satisfying $\frac{1}{p} + \frac{1}{q} = 1$. Then, for two random variables $X, Y$ in $(\Omega, \mathcal{H}, \mathbb{E})$, we have
$$\mathbb{E}\big[|XY|\big] \le \big(\mathbb{E}[|X|^p]\big)^{1/p}\big(\mathbb{E}[|Y|^q]\big)^{1/q}.$$
Lemma 2 
([9]). If $\mathbb{E}$ is countably sub-additive and $C_V(|X|) < \infty$, then
$$\mathbb{E}\big[|X|\big] \le C_V(|X|).$$
Lemma 3 
([9]). Suppose that $X_k$ is ND of $(X_{k+1}, \ldots, X_n)$ for each $k = 1, \ldots, n-1$, or that $X_{k+1}$ is ND of $(X_1, \ldots, X_k)$ for each $k = 1, \ldots, n-1$. Then, for $p \ge 2$,
$$\mathbb{E}\Big[\max_{k \le n} |S_k|^p\Big] \le C_p\Big\{\sum_{k=1}^{n} \mathbb{E}|X_k|^p + \Big(\sum_{k=1}^{n} \mathbb{E}|X_k|^2\Big)^{p/2} + \Big(\sum_{k=1}^{n}\big[(\mathbb{E}[-X_k])^{+} + (\mathbb{E}[X_k])^{+}\big]\Big)^{p}\Big\},$$
where $S_k = \sum_{i=1}^{k} X_i$ and $C_p$ is a positive constant depending only on $p$.
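For later use, we record a simplification (our remark): when each $X_k$ is centered in the two-sided sense $\mathbb{E}[X_k] = \mathcal{E}[X_k] = 0$, we have $(\mathbb{E}[X_k])^{+} = (\mathbb{E}[-X_k])^{+} = 0$, so the last term vanishes and Lemma 3 reduces to the familiar Rosenthal form
$$\mathbb{E}\Big[\max_{k \le n} |S_k|^p\Big] \le C_p\Big\{\sum_{k=1}^{n} \mathbb{E}|X_k|^p + \Big(\sum_{k=1}^{n} \mathbb{E}|X_k|^2\Big)^{p/2}\Big\}.$$
In the proofs below, the truncated summands are not exactly centered, and the third term is instead controlled by bounds of the type $\mathbb{E}|X_{ni}^{(1)}|$.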
Lemma 4 
([24]). Let $Y$ be a random variable under sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$. Then, for any $\alpha > 0$, $\gamma > 0$, and $\beta > -1$,
$$(i)\ \int_{1}^{\infty} u^{\beta}\, C_V\big(|Y|^{\alpha}\, I(|Y| > u^{\gamma})\big)\,du \le C\, C_V\big(|Y|^{\alpha + (\beta+1)/\gamma}\big); \qquad (ii)\ \int_{1}^{\infty} u^{\beta} \log(u)\, C_V\big(|Y|^{\alpha}\, I(|Y| > u^{\gamma})\big)\,du \le C\, C_V\big(|Y|^{\alpha + (\beta+1)/\gamma}\, \log(1 + |Y|)\big).$$
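To see where the exponent $\alpha + (\beta+1)/\gamma$ comes from, here is a heuristic (ours) in the classical linear case, where $C_V$ is the ordinary expectation and Fubini's theorem applies:
$$\int_{1}^{\infty} u^{\beta}\, E\big[|Y|^{\alpha}\, I(|Y| > u^{\gamma})\big]\,du = E\Big[|Y|^{\alpha} \int_{1}^{\max\{1,\, |Y|^{1/\gamma}\}} u^{\beta}\,du\Big] \le \frac{1}{\beta+1}\, E\big[|Y|^{\alpha + (\beta+1)/\gamma}\big],$$
using $\beta > -1$. The sublinear version replaces Fubini's theorem by properties of the Choquet integral.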
Lemma 5. 
Let $\{X_n, n \ge 1\}$ be a sequence of ND random variables under sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$. Then, the condition that for all $x > 0$,
$$\lim_{n \to \infty} V\Big(\max_{1 \le j \le n} |X_j| > x\Big) = 0, \qquad (1)$$
implies that there exists a constant $C$ such that for all $x > 0$ and $n$ large enough,
$$\Big(1 - V\Big(\max_{1 \le j \le n} |X_j| > x\Big)\Big)^{2} \sum_{j=1}^{n} V\big(|X_j| > x\big) \le C\, V\Big(\max_{1 \le j \le n} |X_j| > x\Big). \qquad (2)$$
Proof. 
Write $\alpha_n = V\big(\max_{1 \le j \le n} |X_j| > x\big)$. Without loss of generality, we may assume that $\alpha_n > 0$. Since $\{I(X_k > x) - \mathbb{E}[I(X_k > x)], k \ge 1\}$ and $\{I(X_k < -x) - \mathbb{E}[I(X_k < -x)], k \ge 1\}$ are sequences of ND random variables under sublinear expectation space, denote $A_k = (X_k > x)$, $B_k = (X_k < -x)$, $D_k = (|X_k| > x)$; combining the $C_r$ inequality and Lemma 3 results in
$$
\begin{aligned}
\mathbb{E}\Big[\Big(\sum_{k=1}^{n}\big(I(A_k) - \mathbb{E}[I(A_k)]\big)\Big)^{2}\Big]
&\le C \sum_{k=1}^{n} \mathbb{E}\big[\big(I(A_k) - \mathbb{E}[I(A_k)]\big)^{2}\big] + C\Big(\sum_{k=1}^{n}\big[(\mathbb{E}[I(A_k) - \mathbb{E}[I(A_k)]])^{+} + (\mathbb{E}[\mathbb{E}[I(A_k)] - I(A_k)])^{+}\big]\Big)^{2}\\
&\le C \sum_{k=1}^{n} \mathbb{E}\big[\big(I(A_k) - V(A_k)\big)^{2}\big] + C\Big(\sum_{k=1}^{n} \mathbb{E}\big[\big|I(A_k) - \mathbb{E}[I(A_k)]\big|\big]\Big)^{2}\\
&\le C \sum_{k=1}^{n} V(A_k) + C\Big(\sum_{k=1}^{n} V(A_k)\Big)^{2}.
\end{aligned}
$$
In the same way, we obtain
$$\mathbb{E}\Big[\Big(\sum_{k=1}^{n}\big(I(B_k) - \mathbb{E}[I(B_k)]\big)\Big)^{2}\Big] \le C \sum_{k=1}^{n} V(B_k) + C\Big(\sum_{k=1}^{n} V(B_k)\Big)^{2}.$$
It follows that
$$
\begin{aligned}
\mathbb{E}\Big[\Big(\sum_{k=1}^{n}\big(I(D_k) - \mathbb{E}[I(D_k)]\big)\Big)^{2}\Big]
&\le \mathbb{E}\Big[\Big(\sum_{k=1}^{n}\big[(I(A_k) - \mathbb{E}[I(A_k)]) + (I(B_k) - \mathbb{E}[I(B_k)])\big]\Big)^{2}\Big]\\
&\le 2\,\mathbb{E}\Big[\Big(\sum_{k=1}^{n}\big(I(A_k) - \mathbb{E}[I(A_k)]\big)\Big)^{2}\Big] + 2\,\mathbb{E}\Big[\Big(\sum_{k=1}^{n}\big(I(B_k) - \mathbb{E}[I(B_k)]\big)\Big)^{2}\Big]\\
&\le C \sum_{k=1}^{n} V(A_k) + C\Big(\sum_{k=1}^{n} V(A_k)\Big)^{2} + C \sum_{k=1}^{n} V(B_k) + C\Big(\sum_{k=1}^{n} V(B_k)\Big)^{2}\\
&\le C \sum_{k=1}^{n} V(D_k) + C\Big(\sum_{k=1}^{n} V(D_k)\Big)^{2}.
\end{aligned}
$$
Similar to the proof of Lemma 2.5 in Xu and Cheng [24], by the positive homogeneity of the sublinear expectation, Lemma 1, and the subadditivity of the expectation, we conclude that
$$
\begin{aligned}
\sum_{k=1}^{n} V(D_k) &= \sum_{k=1}^{n} \mathbb{E}[I(D_k)] = \sum_{k=1}^{n-2} \mathbb{E}[I(D_k)] + \mathbb{E}[I(D_{n-1})] + \mathbb{E}[I(D_n)] = \sum_{k=1}^{n-2} \mathbb{E}[I(D_k)] + \mathbb{E}\big[I(D_{n-1}) + I(D_n)\big]\\
&= \cdots = \mathbb{E}[I(D_1)] + \mathbb{E}\Big[\sum_{k=2}^{n} I(D_k)\Big] = \mathbb{E}\Big[\sum_{k=1}^{n} I(D_k)\Big] = \mathbb{E}\Big[\sum_{k=1}^{n} I\Big(D_k \cap \bigcup_{j=1}^{n} D_j\Big)\Big] = \mathbb{E}\Big[\sum_{k=1}^{n} I(D_k)\, I\Big(\bigcup_{j=1}^{n} D_j\Big)\Big]\\
&\le \mathbb{E}\Big[\sum_{k=1}^{n}\big(I(D_k) - \mathbb{E}[I(D_k)]\big)\, I\Big(\bigcup_{j=1}^{n} D_j\Big)\Big] + \sum_{k=1}^{n} V(D_k)\, V\Big(\bigcup_{j=1}^{n} D_j\Big)\\
&\le \Big\{\mathbb{E}\Big[\Big(\sum_{k=1}^{n}\big(I(D_k) - \mathbb{E}[I(D_k)]\big)\Big)^{2}\Big]\Big\}^{1/2}\Big\{\mathbb{E}\Big[I\Big(\bigcup_{j=1}^{n} D_j\Big)\Big]\Big\}^{1/2} + \alpha_n \sum_{k=1}^{n} V(D_k)\\
&\le C\Big\{\alpha_n^{1/2}\Big(\sum_{k=1}^{n} V(D_k)\Big)^{1/2} + \alpha_n^{1/2} \sum_{k=1}^{n} V(D_k)\Big\} + \alpha_n \sum_{k=1}^{n} V(D_k)\\
&\le \frac{C}{2}\,\frac{\alpha_n}{1 - \alpha_n} + \frac{1 - \alpha_n}{2} \sum_{k=1}^{n} V(D_k) + C\,\alpha_n^{1/2} \sum_{k=1}^{n} V(D_k) + \alpha_n \sum_{k=1}^{n} V(D_k),
\end{aligned}
$$
which combined with (1) results in (2) immediately. Therefore the proof is finished. □
Lemma 6 
([25]). Assume that $Y$ is a random variable under sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$.
Then, for $p > 0$, $q > 0$, $r > 0$, the following statements are equivalent:
(i)
$$
\begin{cases}
C_V\big(|Y|^{p}\big) < \infty, & \text{for } p > r/q,\\
C_V\big(|Y|^{r/q} \log |Y|\big) < \infty, & \text{for } p = r/q,\\
C_V\big(|Y|^{r/q}\big) < \infty, & \text{for } p < r/q;
\end{cases}
$$
(ii)
$$\int_{1}^{\infty}\int_{1}^{\infty} y^{r-1}\, V\big(|Y| > x^{1/p} y^{q}\big)\,dx\,dy < \infty.$$
Lemma 7 
([25]). Assume that $Y$ is a random variable under sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$.
Then, for $p > 0$, $q > 0$, $r > 0$, the following statements are equivalent:
(i)
$$
\begin{cases}
C_V\big(|Y|^{p}\big) < \infty, & \text{for } p > r/q,\\
C_V\big(|Y|^{r/q} \log^{2} |Y|\big) < \infty, & \text{for } p = r/q,\\
C_V\big(|Y|^{r/q} \log |Y|\big) < \infty, & \text{for } p < r/q;
\end{cases}
$$
(ii)
$$\int_{1}^{\infty}\int_{1}^{\infty} y^{r-1} \log(y)\, V\big(|Y| > x^{1/p} y^{q}\big)\,dx\,dy < \infty.$$
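A consistency check (ours, again in the linear case) explains the case split in Lemmas 6 and 7: substituting $t = x^{1/p} y^{q}$ in the inner integral and exchanging the order of integration gives, up to constants,
$$\int_{1}^{\infty}\int_{1}^{\infty} y^{r-1}\, V\big(|Y| > x^{1/p} y^{q}\big)\,dx\,dy \approx \int_{1}^{\infty} t^{p-1}\, V(|Y| > t) \int_{1}^{t^{1/q}} y^{r-1-qp}\,dy\,dt,$$
where the inner integral is bounded for $p > r/q$, of order $\log t$ for $p = r/q$, and of order $t^{(r-qp)/q}$ for $p < r/q$; this produces exactly the three moment conditions in (i). The extra $\log(y)$ in Lemma 7 raises each case by one logarithmic factor.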

3. Main Results

Our main results are as follows.
Theorem 1. 
Assume that $\{X_n, n \ge 1\}$ is a sequence of ND random variables under sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$, identically distributed as $X$. Suppose that $r > 1$, $q > 1/2$, $\beta + q > 0$, and moreover, for $1/2 < q \le 1$,
$$\mathbb{E}(X) = \mathcal{E}(X) = 0.$$
Furthermore, let $\{a_{ni} \approx (i/n)^{\beta}(1/n)^{q}, 1 \le i \le n, n \ge 1\}$ be a triangular array of real numbers. Then, the following statements are equivalent:
(i)
$$
\begin{cases}
C_V\big(|X|^{r/q}\big) < \infty, & \text{for } \beta > -q/r,\\
C_V\big(|X|^{(r-1)/(q+\beta)}\big) < \infty, & \text{for } -q < \beta < -q/r,\\
C_V\big(|X|^{r/q}\log(1+|X|)\big) < \infty, & \text{for } \beta = -q/r;
\end{cases} \qquad (3)
$$
(ii)
$$\sum_{n=1}^{\infty} n^{r-2}\, V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni} X_i\Big| > \epsilon\Big) < \infty, \quad \forall \epsilon > 0. \qquad (4)$$
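For orientation (our remark), the three regimes of $\beta$ in (3) correspond exactly to the three growth regimes of the weight sums that recur in the proofs below (cf. (18) and (20)): since $a_{ni} \approx (i/n)^{\beta}(1/n)^{q}$,
$$\sum_{i=1}^{n} |a_{ni}|^{r/q} \approx n^{-r(q+\beta)/q} \sum_{i=1}^{n} i^{\beta r/q} \approx
\begin{cases}
n^{1-r}, & \beta > -q/r \quad (\beta r/q > -1),\\
n^{1-r}\log n, & \beta = -q/r \quad (\beta r/q = -1),\\
n^{-r(q+\beta)/q}, & -q < \beta < -q/r \quad (\beta r/q < -1),
\end{cases}$$
which is why the critical case $\beta = -q/r$ carries an extra logarithmic factor in the moment condition.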
Theorem 2. 
Assume that $\{X_n, n \ge 1\}$ is a sequence of ND random variables under sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$, identically distributed as $X$. Suppose that $r > 1$, $q > 1/2$, $\beta > -q/r$, and moreover, for $1/2 < q \le 1$,
$$\mathbb{E}(X) = \mathcal{E}(X) = 0.$$
Furthermore, let $\{a_{ni} \approx (i/n)^{\beta}(1/n)^{q}, 1 \le i \le n, n \ge 1\}$ be a triangular array of real numbers. Then, the following statements are equivalent:
(i)
$$
\begin{cases}
C_V\big(|X|^{p}\big) < \infty, & \text{for } p > r/q,\\
C_V\big(|X|^{r/q}\big) < \infty, & \text{for } p < r/q,\\
C_V\big(|X|^{r/q}\log|X|\big) < \infty, & \text{for } p = r/q;
\end{cases} \qquad (5)
$$
(ii)
$$\sum_{n=1}^{\infty} n^{r-2}\, C_V\Big[\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni} X_i\Big|^{p} - \epsilon\Big)^{+}\Big] < \infty, \quad \forall \epsilon > 0. \qquad (6)$$
Theorem 3. 
Assume that $\{X_n, n \ge 1\}$ is a sequence of ND random variables under sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$, identically distributed as $X$. Suppose that $r > 1$, $q > 1/2$, $\beta = -q/r < 0$, and moreover, for $1/2 < q \le 1$,
$$\mathbb{E}(X) = \mathcal{E}(X) = 0.$$
Furthermore, let $\{a_{ni} \approx (i/n)^{\beta}(1/n)^{q}, 1 \le i \le n, n \ge 1\}$ be a triangular array of real numbers. Then, (6) is equivalent to
$$
\begin{cases}
C_V\big(|X|^{p}\big) < \infty, & \text{for } p > r/q,\\
C_V\big(|X|^{r/q}\log|X|\big) < \infty, & \text{for } p < r/q,\\
C_V\big(|X|^{r/q}\log^{2}|X|\big) < \infty, & \text{for } p = r/q.
\end{cases} \qquad (7)
$$
Theorem 4. 
Assume that $\{X_n, n \ge 1\}$ is a sequence of ND random variables under sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$, identically distributed as $X$. Suppose that $r > 1$, $q > 1/2$, $-q < \beta < -q/r < 0$, and moreover, for $1/2 < q \le 1$,
$$\mathbb{E}(X) = \mathcal{E}(X) = 0.$$
Furthermore, let $\{a_{ni} \approx (i/n)^{\beta}(1/n)^{q}, 1 \le i \le n, n \ge 1\}$ be a triangular array of real numbers. Then, (6) is equivalent to
$$
\begin{cases}
C_V\big(|X|^{p}\big) < \infty, & \text{for } p > (r-1)/(q+\beta),\\
C_V\big(|X|^{(r-1)/(q+\beta)}\big) < \infty, & \text{for } p < (r-1)/(q+\beta),\\
C_V\big(|X|^{(r-1)/(q+\beta)}\log|X|\big) < \infty, & \text{for } p = (r-1)/(q+\beta).
\end{cases} \qquad (8)
$$

4. Proof of the Main Results

4.1. Proof of Theorem 1

We first prove (3) ⇒ (4). Choose $\tau > 0$ small enough and a sufficiently large integer $K$. For all $1 \le i \le n$, $n \ge 1$, we write
$$
\begin{aligned}
X_{ni}^{(1)} &= -n^{-\tau}\, I\big(a_{ni}X_i < -n^{-\tau}\big) + a_{ni}X_i\, I\big(|a_{ni}X_i| \le n^{-\tau}\big) + n^{-\tau}\, I\big(a_{ni}X_i > n^{-\tau}\big),\\
X_{ni}^{(2)} &= \big(a_{ni}X_i - n^{-\tau}\big)\, I\big(n^{-\tau} < a_{ni}X_i < \epsilon/K\big),\\
X_{ni}^{(3)} &= \big(a_{ni}X_i + n^{-\tau}\big)\, I\big(-\epsilon/K < a_{ni}X_i < -n^{-\tau}\big),\\
X_{ni}^{(4)} &= \big(a_{ni}X_i + n^{-\tau}\big)\, I\big(a_{ni}X_i \le -\epsilon/K\big) + \big(a_{ni}X_i - n^{-\tau}\big)\, I\big(a_{ni}X_i \ge \epsilon/K\big).
\end{aligned}
$$
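As a quick check (our verification), the four pieces recombine to $a_{ni}X_i$ region by region: on $\{|a_{ni}X_i| \le n^{-\tau}\}$ only $X_{ni}^{(1)} = a_{ni}X_i$ is nonzero; on $\{n^{-\tau} < a_{ni}X_i < \epsilon/K\}$ we get $X_{ni}^{(1)} + X_{ni}^{(2)} = n^{-\tau} + (a_{ni}X_i - n^{-\tau}) = a_{ni}X_i$; on $\{a_{ni}X_i \ge \epsilon/K\}$ we get $X_{ni}^{(1)} + X_{ni}^{(4)} = a_{ni}X_i$; and symmetrically on the negative side with $X_{ni}^{(3)}$ and $X_{ni}^{(4)}$.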
Obviously, $\sum_{i=1}^{k} a_{ni}X_i = \sum_{i=1}^{k} X_{ni}^{(1)} + \sum_{i=1}^{k} X_{ni}^{(2)} + \sum_{i=1}^{k} X_{ni}^{(3)} + \sum_{i=1}^{k} X_{ni}^{(4)}$. Notice that
$$\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big| \ge 4\epsilon\Big) \subseteq \bigcup_{j=1}^{4}\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} X_{ni}^{(j)}\Big| \ge \epsilon\Big).$$
Thus, in order to establish (4), it suffices to prove that
$$I_j := \sum_{n=1}^{\infty} n^{r-2}\, V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} X_{ni}^{(j)}\Big| \ge \epsilon\Big) < \infty, \quad j = 1, 2, 3, 4.$$
In order to estimate $I_1$, we first verify that
$$\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} \mathbb{E}\big[X_{ni}^{(1)}\big]\Big| \to 0 \quad \text{as } n \to \infty.$$
By Lemma 2 and (3), we obtain $\mathbb{E}|X|^{1/q} < \infty$ and $\mathbb{E}|X|^{r/q} < \infty$. When $q > 1$, noticing that $|X_{ni}^{(1)}| \le n^{-\tau}$ and $|X_{ni}^{(1)}| \le |a_{ni}X_i|$, it follows that
$$\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} \mathbb{E}\big[X_{ni}^{(1)}\big]\Big| \le \sum_{i=1}^{n} \mathbb{E}\big|X_{ni}^{(1)}\big| \le n^{-\tau(1-1/q)} \sum_{i=1}^{n} \mathbb{E}|a_{ni}X_i|^{1/q} \ll n^{-\tau(1-1/q)} \sum_{i=1}^{n} n^{-(\beta+q)/q}\, i^{\beta/q} \ll n^{-\tau(1-1/q)} \to 0 \quad \text{as } n \to \infty.$$
When $1/2 < q \le 1$, note that $\mathbb{E}(X) = \mathcal{E}(X) = 0$; by choosing $\tau$ small enough such that $-\tau(1-r/q) + 1 - r < 0$, we obtain
$$
\begin{aligned}
\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} \mathbb{E}\big[X_{ni}^{(1)}\big]\Big| &\le 2 \sum_{i=1}^{n} \mathbb{E}\big[|a_{ni}X_i|\, I\big(|a_{ni}X_i| > n^{-\tau}\big)\big] \le 2\, n^{-\tau(1-r/q)} \sum_{i=1}^{n} \mathbb{E}|a_{ni}X_i|^{r/q} \ll n^{-\tau(1-r/q)} \sum_{i=1}^{n} n^{-r(\beta+q)/q}\, i^{r\beta/q}\\
&\ll
\begin{cases}
n^{-\tau - r(\beta+q-\tau)/q}, & -q < \beta < -q/r,\\
n^{-\tau(1-r/q)+1-r} \log n, & \beta = -q/r,\\
n^{-\tau(1-r/q)+1-r}, & \beta > -q/r,
\end{cases}
\ \to 0 \quad \text{as } n \to \infty.
\end{aligned}
$$
Hence, to prove $I_1 < \infty$, it suffices to prove that
$$I_1^{*} := \sum_{n=1}^{\infty} n^{r-2}\, V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k}\big(X_{ni}^{(1)} - \mathbb{E}\big[X_{ni}^{(1)}\big]\big)\Big| \ge \epsilon\Big) < \infty.$$
From the property of ND random variables under sublinear expectation space, $\{X_{ni}^{(1)}, 1 \le i \le n\}$ is also a sequence of ND random variables under sublinear expectation space. By Markov's inequality and the $C_r$ inequality under sublinear expectation, together with Lemma 3, it can be shown that for a suitably large $M$,
$$V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k}\big(X_{ni}^{(1)} - \mathbb{E}\big[X_{ni}^{(1)}\big]\big)\Big| \ge \epsilon\Big) \ll \sum_{i=1}^{n} \mathbb{E}\big|X_{ni}^{(1)}\big|^{M} + \Big(\sum_{i=1}^{n} \mathbb{E}\big|X_{ni}^{(1)}\big|^{2}\Big)^{M/2} + \Big(\sum_{i=1}^{n}\big[(\mathbb{E}[X_{ni}^{(1)}])^{+} + (\mathbb{E}[-X_{ni}^{(1)}])^{+}\big]\Big)^{M}.$$
Taking $M$ sufficiently large such that $-2 - \tau M + (\tau - \beta)r/q < -1$ and $-1 - \tau M + \tau r/q < -1$, we have
$$\sum_{n=1}^{\infty} n^{r-2} \sum_{i=1}^{n} \mathbb{E}\big|X_{ni}^{(1)}\big|^{M} \ll \sum_{n=1}^{\infty} n^{r-2} \sum_{i=1}^{n} n^{-\tau(M - r/q)}\, \mathbb{E}|a_{ni}X_i|^{r/q} \ll
\begin{cases}
\sum_{n=1}^{\infty} n^{-2 - \tau M + (\tau - \beta)r/q}, & -q < \beta < -q/r,\\
\sum_{n=1}^{\infty} n^{-1 - \tau M + \tau r/q} \log n, & \beta = -q/r,\\
\sum_{n=1}^{\infty} n^{-1 - \tau M + \tau r/q}, & \beta > -q/r,
\end{cases}
< \infty.$$
When $r/q \ge 2$, (3) implies $\mathbb{E}[X^2] < \infty$. Noting that $\beta + q > 0$ and $q > 1/2$, we can choose a sufficiently large $M$ such that $r - 2 - M(q+\beta) < -1$ and $r - 2 - qM + M/2 < -1$; then
$$\sum_{n=1}^{\infty} n^{r-2}\Big(\sum_{i=1}^{n} \mathbb{E}\big|X_{ni}^{(1)}\big|^{2}\Big)^{M/2} \ll \sum_{n=1}^{\infty} n^{r-2}\Big(\sum_{i=1}^{n} a_{ni}^{2}\Big)^{M/2} \ll
\begin{cases}
\sum_{n=1}^{\infty} n^{r-2-M(q+\beta)}, & -q < \beta < -1/2,\\
\sum_{n=1}^{\infty} n^{r-2-qM+M/2}(\log n)^{M/2}, & \beta = -1/2,\\
\sum_{n=1}^{\infty} n^{r-2-qM+M/2}, & \beta > -1/2,
\end{cases}
< \infty.$$
When $r/q < 2$, we can choose a sufficiently large $M$ such that $r - 2 - (r + r\beta/q + (2 - r/q)\tau)M/2 < -1$ and $r - 2 - (r - 1 + (2 - r/q)\tau)M/2 < -1$; then
$$\sum_{n=1}^{\infty} n^{r-2}\Big(\sum_{i=1}^{n} \mathbb{E}\big|X_{ni}^{(1)}\big|^{2}\Big)^{M/2} \ll \sum_{n=1}^{\infty} n^{r-2}\Big(n^{-\tau(2-r/q)} \sum_{i=1}^{n} |a_{ni}|^{r/q}\Big)^{M/2} \ll
\begin{cases}
\sum_{n=1}^{\infty} n^{r-2-(r + r\beta/q + (2-r/q)\tau)M/2}, & -q < \beta < -q/r,\\
\sum_{n=1}^{\infty} n^{r-2-(r-1+(2-r/q)\tau)M/2}(\log n)^{M/2}, & \beta = -q/r,\\
\sum_{n=1}^{\infty} n^{r-2-(r-1+(2-r/q)\tau)M/2}, & \beta > -q/r,
\end{cases}
< \infty.$$
From $\beta + q > 0$, $q > 1/2$, $|X_{ni}^{(1)}| \le n^{-\tau}$, and $|X_{ni}^{(1)}| \le |a_{ni}X_i|$, choosing a sufficiently large $M$ such that $r - 2 - (\tau + r - (\tau - \beta)r/q)M < -1$ and $r - 2 - (r - 1 + \tau - \tau r/q)M < -1$, we obtain
$$
\begin{aligned}
\sum_{n=1}^{\infty} n^{r-2}\Big(\sum_{i=1}^{n}\big[(\mathbb{E}[X_{ni}^{(1)}])^{+} + (\mathbb{E}[-X_{ni}^{(1)}])^{+}\big]\Big)^{M} &\le \sum_{n=1}^{\infty} n^{r-2}\Big(\sum_{i=1}^{n}\big[\mathbb{E}\big|X_{ni}^{(1)}\big| + \mathbb{E}\big|{-X_{ni}^{(1)}}\big|\big]\Big)^{M} \le C \sum_{n=1}^{\infty} n^{r-2}\Big(\sum_{i=1}^{n} n^{\tau(r/q-1)}\, \mathbb{E}|a_{ni}X_i|^{r/q}\Big)^{M}\\
&\ll
\begin{cases}
\sum_{n=1}^{\infty} n^{r-2-(\tau + r - (\tau - \beta)r/q)M}, & -q < \beta < -q/r,\\
\sum_{n=1}^{\infty} n^{r-2-(r-1+\tau - \tau r/q)M}(\log n)^{M}, & \beta = -q/r,\\
\sum_{n=1}^{\infty} n^{r-2-(r-1+\tau - \tau r/q)M}, & \beta > -q/r,
\end{cases}
< \infty.
\end{aligned}
$$
By the definition of $X_{ni}^{(2)}$, we have $0 \le X_{ni}^{(2)} < \epsilon/K$, and $X_{ni}^{(2)} \ne 0$ only if $a_{ni}X_i > n^{-\tau}$; hence the sum $\sum_{i=1}^{n} X_{ni}^{(2)}$ can reach $\epsilon$ only if at least $K$ summands are nonzero. It follows that
$$
\begin{aligned}
V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} X_{ni}^{(2)}\Big| \ge \epsilon\Big) &= V\Big(\sum_{i=1}^{n} X_{ni}^{(2)} \ge \epsilon\Big) \le V\big(\text{there are at least } K \text{ indices } i \in [1, n] \text{ such that } a_{ni}X_i > n^{-\tau}\big)\\
&\le \sum_{1 \le i_1 < i_2 < \cdots < i_K \le n} V\big(|a_{n i_1} X_{i_1}| > n^{-\tau}, \ldots, |a_{n i_K} X_{i_K}| > n^{-\tau}\big)\\
&\le \sum_{1 \le i_1 < \cdots < i_K \le n} \mathbb{E}\big[I(|a_{n i_1} X| > n^{-\tau})\big] \cdots \mathbb{E}\big[I(|a_{n i_K} X| > n^{-\tau})\big] \le \Big(\sum_{i=1}^{n} V\big(|a_{ni} X| > n^{-\tau}\big)\Big)^{K}. \qquad (9)
\end{aligned}
$$
Hence, by Markov’s inequality under sublinear expectation, it follows that
I 2 n = 1 n r 2 i = 1 n V ( | a n i X | > n τ ) K C n = 1 n r 2 i = 1 n n r τ / p | a n i | r / p E | X | r / p K n = 1 n r 2 K r ( q + β τ / q ) , q < β < q / r , n = 1 n r 2 K ( r 1 r τ / q ) log K n , β = q / r , n = 1 n r 2 K ( r 1 r τ / q ) , β > q / r .
Notice that r > 1 , q + β > 0 , we could choose τ > 0 , small enough, and a sufficiently large integer K such that r 2 K r ( q + β τ / q ) < 1 and r 2 K ( r 1 r τ / q ) < 1 . Hence, by Lemma 2, we obtain I 2 < . Similarly, we could obtain I 3 < .
By the definition of $X_{ni}^{(4)}$, we have
$$\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} X_{ni}^{(4)}\Big| \ge \epsilon\Big) \subseteq \Big(\max_{1 \le i \le n} |a_{ni}X_i| \ge \epsilon/K\Big).$$
Since $a_{ni} \approx (i/n)^{\beta}(1/n)^{q}$, by Lemma 4, we see that
$$
\begin{aligned}
I_4 &\ll \sum_{n=1}^{\infty} n^{r-2} \sum_{i=1}^{n} V\big(|a_{ni}X_i| \ge \epsilon/K\big) \ll \sum_{n=1}^{\infty} n^{r-2} \sum_{i=1}^{n} V\Big(|X| \ge \frac{\epsilon}{CK}\, n^{q+\beta}\, i^{-\beta}\Big)\\
&\ll \int_{1}^{\infty} x^{r-2} \int_{1}^{x} V\Big(|X| \ge \frac{\epsilon}{CK}\, x^{q+\beta}\, y^{-\beta}\Big)\,dy\,dx \qquad \big(\text{letting } u = x^{q+\beta} y^{-\beta},\ v = y\big)\\
&= \frac{1}{q+\beta} \int_{1}^{\infty} \int_{1}^{u^{1/q}} u^{\frac{r-1}{q+\beta}-1}\, v^{\frac{\beta(r-1)}{q+\beta}}\, V\Big(|X| \ge \frac{\epsilon}{CK}\, u\Big)\,dv\,du\\
&\ll
\begin{cases}
C\int_{1}^{\infty} u^{\frac{r-1}{q+\beta}-1}\, V\big(|X| \ge \tfrac{\epsilon}{CK} u\big)\,du \le C\, C_V\big(|X|^{(r-1)/(q+\beta)}\big), & -q < \beta < -q/r,\\[2pt]
C\int_{1}^{\infty} u^{r/q-1} \log(u)\, V\big(|X| \ge \tfrac{\epsilon}{CK} u\big)\,du \le C\, C_V\big(|X|^{r/q}\log(1+|X|)\big), & \beta = -q/r,\\[2pt]
C\int_{1}^{\infty} u^{r/q-1}\, V\big(|X| \ge \tfrac{\epsilon}{CK} u\big)\,du \le C\, C_V\big(|X|^{r/q}\big), & \beta > -q/r.
\end{cases}
\end{aligned}
$$
Then, by (3), we conclude $I_4 < \infty$. Now we prove (4) ⇒ (3). Since
$$\max_{1 \le k \le n} |a_{nk}X_k| \le 2 \max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big|,$$
applying (4), we have
$$V\Big(\max_{1 \le k \le n} |a_{nk}X_k| \ge \epsilon\Big) \to 0, \quad n \to \infty.$$
By Lemma 5, it follows that, for all $\epsilon > 0$,
$$\sum_{i=1}^{n} V\big(|a_{ni}X_i| \ge \epsilon\big) \ll V\Big(\max_{1 \le k \le n} |a_{nk}X_k| \ge \epsilon\Big). \qquad (12)$$
Now, combining (12) with (4) gives
$$\sum_{n=1}^{\infty} n^{r-2} \sum_{i=1}^{n} V\big(|a_{ni}X_i| \ge \epsilon\big) < \infty. \qquad (13)$$
By the process of the proof of $I_4 < \infty$, we see that (13) is equivalent to (3). The proof of Theorem 1 is finished.

4.2. Proof of Theorem 2

We first prove that (5) ⇒ (6). Without loss of generality, assume $0 < \epsilon < 1$. Notice that
$$
\begin{aligned}
\sum_{n=1}^{\infty} n^{r-2}\, C_V\Big[\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big|^{p} - \epsilon\Big)^{+}\Big] &= \sum_{n=1}^{\infty} n^{r-2} \int_{0}^{\infty} V\Big(\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big|^{p} - \epsilon\Big)^{+} > x\Big)\,dx = \sum_{n=1}^{\infty} n^{r-2} \int_{\epsilon}^{\infty} V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big|^{p} > x\Big)\,dx\\
&= \sum_{n=1}^{\infty} n^{r-2} \int_{\epsilon}^{1} V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big| > x^{1/p}\Big)\,dx + \sum_{n=1}^{\infty} n^{r-2} \int_{1}^{\infty} V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big| > x^{1/p}\Big)\,dx := I + II.
\end{aligned}
$$
From Theorem 1, we see that $I < \infty$. We next establish $II < \infty$. Choose $0 < \alpha < 1/p$ and $\tau > 0$ sufficiently small, and a large enough integer $K$; note that for $n$ sufficiently large, $x^{\alpha} n^{-\tau} < x^{1/p}/(4K)$ for all $x \ge 1$. For every $1 \le i \le n$, $n \ge 1$, we write
$$
\begin{aligned}
Y_{ni}^{(1)} &= -x^{\alpha} n^{-\tau}\, I\big(a_{ni}X_i < -x^{\alpha} n^{-\tau}\big) + a_{ni}X_i\, I\big(|a_{ni}X_i| \le x^{\alpha} n^{-\tau}\big) + x^{\alpha} n^{-\tau}\, I\big(a_{ni}X_i > x^{\alpha} n^{-\tau}\big);\\
Y_{ni}^{(2)} &= \big(a_{ni}X_i - x^{\alpha} n^{-\tau}\big)\, I\Big(x^{\alpha} n^{-\tau} < a_{ni}X_i < \frac{x^{1/p}}{4K}\Big);\\
Y_{ni}^{(3)} &= \big(a_{ni}X_i + x^{\alpha} n^{-\tau}\big)\, I\Big(-\frac{x^{1/p}}{4K} < a_{ni}X_i < -x^{\alpha} n^{-\tau}\Big);\\
Y_{ni}^{(4)} &= \big(a_{ni}X_i + x^{\alpha} n^{-\tau}\big)\, I\Big(a_{ni}X_i \le -\frac{x^{1/p}}{4K}\Big) + \big(a_{ni}X_i - x^{\alpha} n^{-\tau}\big)\, I\Big(a_{ni}X_i \ge \frac{x^{1/p}}{4K}\Big).
\end{aligned}
$$
It is obvious that $\sum_{i=1}^{l} a_{ni}X_i = \sum_{i=1}^{l} Y_{ni}^{(1)} + \sum_{i=1}^{l} Y_{ni}^{(2)} + \sum_{i=1}^{l} Y_{ni}^{(3)} + \sum_{i=1}^{l} Y_{ni}^{(4)}$. Notice that
$$\Big(\max_{1 \le l \le n}\Big|\sum_{i=1}^{l} a_{ni}X_i\Big| \ge x^{1/p}\Big) \subseteq \bigcup_{j=1}^{4}\Big(\max_{1 \le l \le n}\Big|\sum_{i=1}^{l} Y_{ni}^{(j)}\Big| \ge x^{1/p}/4\Big).$$
Thus, in order to establish (6), we only need to prove that
$$J_j := \sum_{n=1}^{\infty} n^{r-2} \int_{1}^{\infty} V\Big(\max_{1 \le l \le n}\Big|\sum_{i=1}^{l} Y_{ni}^{(j)}\Big| \ge x^{1/p}/4\Big)\,dx < \infty, \quad j = 1, 2, 3, 4.$$
In order to estimate $J_1$, we verify that
$$\sup_{x \ge 1} \frac{1}{x^{1/p}} \max_{1 \le l \le n}\Big|\sum_{i=1}^{l} \mathbb{E}\big[Y_{ni}^{(1)}\big]\Big| \to 0 \quad \text{as } n \to \infty.$$
Lemmas 1 and 2 and (5) imply that
$$\mathbb{E}|X|^{1/q} < \infty, \qquad \mathbb{E}|X|^{r/q} < \infty.$$
When $q > 1$, since $|Y_{ni}^{(1)}| \le x^{\alpha} n^{-\tau}$ and $|Y_{ni}^{(1)}| \le |a_{ni}X_i|$, by Lemma 2, it follows that
$$
\begin{aligned}
\max_{1 \le l \le n}\Big|\sum_{i=1}^{l} \mathbb{E}\big[Y_{ni}^{(1)}\big]\Big| &\le \sum_{i=1}^{n} \mathbb{E}\big|Y_{ni}^{(1)}\big| \le \big(x^{\alpha} n^{-\tau}\big)^{1-1/q} \sum_{i=1}^{n} \mathbb{E}|a_{ni}X_i|^{1/q} \ll x^{\alpha(1-1/q)}\, n^{-\tau(1-1/q)} \sum_{i=1}^{n} |a_{ni}|^{1/q}\\
&\le x^{\alpha(1-1/q)}\, n^{-\tau(1-1/q)}\, n^{(r-1)/r}\Big(\sum_{i=1}^{n} |a_{ni}|^{r/q}\Big)^{1/r} \ll x^{\alpha(1-1/q)}\, n^{-\tau(1-1/q)}. \qquad (14)
\end{aligned}
$$
Since $q > 1$ and $0 < \alpha < 1/p$, we know that $(1 - 1/q)\alpha < \alpha < 1/p$. Then, by (14), for $x \ge 1$, we obtain
$$\sup_{x \ge 1} \frac{1}{x^{1/p}} \max_{1 \le l \le n}\Big|\sum_{i=1}^{l} \mathbb{E}\big[Y_{ni}^{(1)}\big]\Big| \ll n^{-\tau(1-1/q)} \to 0 \quad \text{as } n \to \infty.$$
When $1/2 < q \le 1$, noticing that $\mathbb{E}(X) = \mathcal{E}(X) = 0$ and taking a sufficiently small $\tau$ such that $-\tau(1-r/q) + 1 - r < 0$, we obtain
$$
\begin{aligned}
\max_{1 \le l \le n}\Big|\sum_{i=1}^{l} \mathbb{E}\big[Y_{ni}^{(1)}\big]\Big| &\le 2 \sum_{i=1}^{n} \mathbb{E}\big[|a_{ni}X_i|\, I\big(|a_{ni}X_i| > x^{\alpha} n^{-\tau}\big)\big] \le 2\big(x^{\alpha} n^{-\tau}\big)^{1-r/q} \sum_{i=1}^{n} \mathbb{E}|a_{ni}X_i|^{r/q}\\
&\ll x^{\alpha(1-r/q)}\, n^{-\tau(1-r/q)} \sum_{i=1}^{n} |a_{ni}|^{r/q} \ll x^{\alpha(1-r/q)}\, n^{-\tau(1-r/q)+1-r}.
\end{aligned}
$$
Observing that $1 - r/q < 0$, we have
$$\sup_{x \ge 1} \frac{1}{x^{1/p}} \max_{1 \le l \le n}\Big|\sum_{i=1}^{l} \mathbb{E}\big[Y_{ni}^{(1)}\big]\Big| \ll n^{-\tau(1-r/q)+1-r} \to 0 \quad \text{as } n \to \infty.$$
Then, to prove $J_1 < \infty$, we only need to show that
$$J_1^{*} := \sum_{n=1}^{\infty} n^{r-2} \int_{1}^{\infty} V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k}\big(Y_{ni}^{(1)} - \mathbb{E}\big[Y_{ni}^{(1)}\big]\big)\Big| \ge \frac{x^{1/p}}{8}\Big)\,dx < \infty.$$
It is obvious that $\{Y_{ni}^{(1)}, 1 \le i \le n\}$ is a sequence of ND random variables under sublinear expectation space. It follows from Markov's inequality and the $C_r$ inequality under sublinear expectation, together with Lemma 3, that for a sufficiently large $M$,
$$V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k}\big(Y_{ni}^{(1)} - \mathbb{E}\big[Y_{ni}^{(1)}\big]\big)\Big| \ge \frac{x^{1/p}}{8}\Big) \ll x^{-M/p}\, n^{M/2-1}(\log n)^{M} \sum_{i=1}^{n} \mathbb{E}\big|Y_{ni}^{(1)}\big|^{M}.$$
Taking a suitably large $M$ such that $-(1/p - \alpha)M - r\alpha/q < -1$ and $-2 - \tau(M - r/q) + M/2 < -1$, we have
$$
\begin{aligned}
J_1^{*} &\ll \sum_{n=1}^{\infty} n^{r-2+M/2-1}(\log n)^{M} \sum_{i=1}^{n} \int_{1}^{\infty} x^{-M/p}\, \mathbb{E}\big|Y_{ni}^{(1)}\big|^{M}\,dx\\
&\ll \sum_{n=1}^{\infty} n^{r-2+M/2-1}\, n^{-\tau(M-r/q)}(\log n)^{M} \sum_{i=1}^{n} |a_{ni}|^{r/q} \int_{1}^{\infty} x^{-M(1/p-\alpha)-r\alpha/q}\,dx \ll \sum_{n=1}^{\infty} n^{-2-\tau(M-r/q)+M/2}(\log n)^{M} < \infty.
\end{aligned}
$$
Consequently, we obtain $J_1^{*} < \infty$. Similar to the proof of (9), we obtain
$$V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} Y_{ni}^{(2)}\Big| \ge x^{1/p}/4\Big) \le \Big(\sum_{i=1}^{n} V\big(|a_{ni}X| > x^{\alpha} n^{-\tau}\big)\Big)^{K}.$$
From $\beta > -q/r$ and $a_{ni} \approx (i/n)^{\beta}(1/n)^{q}$, we obtain
$$\sum_{i=1}^{n} |a_{ni}|^{r/q} \ll \sum_{i=1}^{n} n^{-r(q+\beta)/q}\, i^{\beta r/q} \ll n^{1-r}.$$
By Markov's inequality under sublinear expectations, we conclude that
$$J_2 \ll \sum_{n=1}^{\infty} n^{r-2} \int_{1}^{\infty}\Big(\sum_{i=1}^{n} V\big(|a_{ni}X| > x^{\alpha} n^{-\tau}\big)\Big)^{K}\,dx \le C \sum_{n=1}^{\infty} n^{r-2} \int_{1}^{\infty}\Big(\sum_{i=1}^{n} x^{-r\alpha/q}\, n^{r\tau/q}\, |a_{ni}|^{r/q}\, \mathbb{E}|X|^{r/q}\Big)^{K}\,dx \ll \sum_{n=1}^{\infty} n^{r-2-K(r-1-r\tau/q)} \int_{1}^{\infty} x^{-rK\alpha/q}\,dx.$$
Since $\alpha > 0$ and $r > 1$, we can take $\tau$ sufficiently small and $K$ sufficiently large such that $-rK\alpha/q < -1$ and $r - 2 - K(r-1-r\tau/q) < -1$. It follows that $J_2 < \infty$. Similarly, we can obtain $J_3 < \infty$. It is obvious that $\beta > -q/r$ implies $\beta(r-1)/(q+\beta) > -1$. Then,
$$\int_{1}^{s^{1/q}} t^{\beta(r-1)/(q+\beta)}\,dt \ll s^{\frac{\beta r + q}{q(q+\beta)}}. \qquad (15)$$
It follows that
$$
\begin{aligned}
J_4 &\ll \sum_{n=1}^{\infty} n^{r-2} \sum_{i=1}^{n} \int_{1}^{\infty} V\Big(|a_{ni}X_i| \ge \frac{x^{1/p}}{4K}\Big)\,dx \ll \sum_{n=1}^{\infty} n^{r-2} \sum_{i=1}^{n} \int_{1}^{\infty} V\Big(|X| \ge \frac{x^{1/p}}{4CK}\, n^{q+\beta}\, i^{-\beta}\Big)\,dx\\
&\ll \int_{1}^{\infty}\int_{1}^{\infty} v^{r-2} \int_{1}^{v} V\Big(|X| \ge \frac{x^{1/p}}{4CK}\, v^{q+\beta}\, u^{-\beta}\Big)\,du\,dv\,dx\\
&\ll \int_{1}^{\infty}\int_{1}^{\infty} s^{\frac{r-1}{q+\beta}-1} \int_{1}^{s^{1/q}} y^{\frac{\beta(r-1)}{q+\beta}}\, V\Big(|X| \ge \frac{x^{1/p}}{4CK}\, s\Big)\,dy\,ds\,dx \ll \int_{1}^{\infty}\int_{1}^{\infty} s^{r/q-1}\, V\Big(|X| \ge \frac{x^{1/p}}{4CK}\, s\Big)\,ds\,dx,
\end{aligned}
$$
using (15) in the last step.
Hence, from Lemma 6 and (5), we obtain $J_4 < \infty$ (the substitution $s = y^{q}$ turns the last integral into the form of Lemma 6(ii)). Now we prove (6) ⇒ (5). By Markov's inequality under sublinear expectations, (6), and Lemma 2, we have
$$
\begin{aligned}
\sum_{n=1}^{\infty} n^{r-2}\, V\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big| \ge \epsilon\Big) &= \sum_{n=1}^{\infty} n^{r-2}\, \mathbb{E}\Big[I\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big| \ge \epsilon\Big)\Big]\\
&\le \sum_{n=1}^{\infty} n^{r-2}\, \mathbb{E}\Big[\frac{\big(\max_{1 \le k \le n}\big|\sum_{i=1}^{k} a_{ni}X_i\big|^{p} - (\epsilon/2)^{p}\big)^{+}}{(\epsilon/2)^{p}}\, I\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big| \ge \epsilon\Big)\Big]\\
&\le \frac{1}{(\epsilon/2)^{p}} \sum_{n=1}^{\infty} n^{r-2}\, \mathbb{E}\Big[\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big|^{p} - (\epsilon/2)^{p}\Big)^{+}\Big]\\
&\le \frac{1}{(\epsilon/2)^{p}} \sum_{n=1}^{\infty} n^{r-2}\, C_V\Big[\Big(\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} a_{ni}X_i\Big|^{p} - (\epsilon/2)^{p}\Big)^{+}\Big] < \infty.
\end{aligned}
$$
Similar to the proof of (3.17) in Guo and Shan [23], we have
$$V\Big(\max_{1 \le k \le n} |a_{nk}X_k| > \epsilon\Big) \to 0, \quad n \to \infty.$$
By Lemma 5, it follows that, for all $\epsilon > 0$,
$$\sum_{i=1}^{n} V\big(|a_{ni}X_i| > \epsilon\big) \ll V\Big(\max_{1 \le k \le n} |a_{nk}X_k| > \epsilon\Big). \qquad (16)$$
Now, combining (16) with (4) gives
$$\sum_{n=1}^{\infty} n^{r-2} \int_{\epsilon}^{\infty} \sum_{i=1}^{n} V\big(|a_{ni}X_i| > x^{1/p}\big)\,dx < \infty. \qquad (17)$$
By the process of the proof of $J_4 < \infty$, we see that (17) is equivalent to (5). The proof of Theorem 2 is finished.

4.3. Proof of Theorem 3

From the assumptions of Theorem 3, for $\beta = -q/r$, one can obtain
$$\sum_{i=1}^{n} |a_{ni}|^{r/q} \ll \sum_{i=1}^{n} n^{-r(q+\beta)/q}\, i^{-1} \ll n^{-r(q+\beta)/q} \log n = n^{1-r}\log n, \qquad (18)$$
and
$$\int_{1}^{s^{1/q}} t^{\beta(r-1)/(q+\beta)}\,dt = \int_{1}^{s^{1/q}} t^{-1}\,dt \ll \log s. \qquad (19)$$
By the same argument as in the proof of Theorem 2, with Lemma 7 in place of Lemma 6, together with (18) and (19), we can prove Theorem 3. Therefore, the proof is omitted.

4.4. Proof of Theorem 4

From the assumptions of Theorem 4, for $\beta < -q/r$, one can obtain
$$\sum_{i=1}^{n} |a_{ni}|^{r/q} \ll \sum_{i=1}^{n} n^{-r(q+\beta)/q}\, i^{\beta r/q} \ll n^{-r(q+\beta)/q}, \qquad (20)$$
and
$$\int_{1}^{s^{1/q}} t^{\beta(r-1)/(q+\beta)}\,dt \le C. \qquad (21)$$
By the same argument as in the proof of Theorem 2, with (21) in place of (15), we can prove Theorem 4. Therefore, the proof is omitted.

5. Conclusions

In this paper, using a moment inequality for sequences of ND random variables under sublinear expectation space and the truncation method, we establish equivalent conditions of complete convergence and of complete $p$-th moment convergence for weighted sums of sequences of ND random variables. The results extend the corresponding results from the classical probability space to the sublinear expectation space, as well as from i.i.d. random variables to ND random variables. In the future, we will try to establish the corresponding results for other dependent sequences under sublinear expectation space.

Author Contributions

P.S., D.W. and X.T. contributed equally to the development of this paper. All authors have read and agreed to the published version of the manuscript.

Funding

Department of Science and Technology of Jilin Province (Grant No. YDZJ202101ZYTS156), Natural Science Foundation of Jilin Province (Grant No. YDZJ202301ZYTS373).

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Peng, S.G. Backward SDE and related G-expectation. Pitman Res. Notes Math. Ser. 1997, 364, 141–159.
  2. Peng, S.G. Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob–Meyer type. Probab. Theory Relat. Fields 1999, 113, 473–499.
  3. Denis, L.; Martini, C. A theoretical framework for the pricing of contingent claims in the presence of model uncertainty. Ann. Appl. Probab. 2006, 16, 827–852.
  4. Gilboa, I. Expected utility with purely subjective non-additive probabilities. J. Math. Econ. 1987, 16, 65–88.
  5. Marinacci, M. Limit laws for non-additive probabilities and their frequentist interpretation. J. Econ. Theory 1999, 84, 145–195.
  6. Peng, S.G. G-expectation, G-Brownian motion and related stochastic calculus of Itô's type. Stoch. Anal. Appl. 2006, 2, 541–567.
  7. Peng, S.G. Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stoch. Proc. Appl. 2008, 118, 2223–2253.
  8. Peng, S.G. Nonlinear Expectations and Stochastic Calculus under Uncertainty, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2010.
  9. Zhang, L.X. Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm. Sci. China Math. 2016, 59, 2503–2526.
  10. Zhang, L.X. Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications. Sci. China Math. 2016, 59, 751–768.
  11. Zhang, L.X. Donsker's invariance principle under the sub-linear expectation with an application to Chung's law of the iterated logarithm. Commun. Math. Stat. 2015, 3, 187–214.
  12. Zhang, M.; Chen, Z.J. Strong laws of large numbers for sub-linear expectations. Sci. China Math. 2016, 59, 945–954.
  13. Chen, Z.J.; Liu, Q.; Zong, G. Weak laws of large numbers for sublinear expectation. Math. Control Relat. Fields 2018, 8, 637–651.
  14. Chen, Z.J.; Feng, X.F. Large deviation for negatively dependent random variables under sublinear expectation. Comm. Stat. Theory Methods 2016, 45, 400–412.
  15. Wu, Q.Y. Precise asymptotics for complete integral convergence under sublinear expectations. Math. Probl. Eng. 2020, 13, 3145935.
  16. Li, M.; Shi, Y.F. A general central limit theorem under sublinear expectations. Sci. China Math. 2010, 53, 1989–1994.
  17. Liu, W.; Zhang, Y. Large deviation principle for linear processes generated by real stationary sequences under the sub-linear expectation. Comm. Stat. Theory Methods 2023, 52, 5727–5741.
  18. Ding, X. A general form for precise asymptotics for complete convergence under sublinear expectation. AIMS Math. 2022, 7, 1664–1677.
  19. Wu, Y.; Wang, X.J. General results on precise asymptotics under sub-linear expectations. J. Math. Anal. Appl. 2022, 511, 126090.
  20. Guo, S.; Zhang, Y. Central limit theorem for linear processes generated by m-dependent random variables under the sub-linear expectation. Comm. Stat. Theory Methods 2023, 52, 6407–6419.
  21. Guo, S.; Zhang, Y. Moderate deviation principle for m-dependent random variables under the sublinear expectation. AIMS Math. 2022, 7, 5943–5956.
  22. Dong, H.; Tan, X.L.; Yong, Z. Complete convergence and complete integration convergence for weighted sums of arrays of rowwise m-END under sub-linear expectations space. AIMS Math. 2023, 8, 6705–6724.
  23. Guo, M.L.; Shan, S. Equivalent conditions of complete qth moment convergence for weighted sums of sequences of negatively orthant dependent random variables. Chin. J. Appl. Probab. Stat. 2020, 36, 381–392.
  24. Xu, M.Z.; Cheng, K. Convergence of sums of i.i.d. random variables under sublinear expectations. J. Inequal. Appl. 2021, 2021, 157.
  25. Xu, M.Z.; Cheng, K. Equivalent conditions of complete p-th moment convergence for weighted sums of i.i.d. random variables under sublinear expectations. arXiv 2021, arXiv:2109.08464.