Article

Conditional Strong Law of Large Numbers under G-Expectations

1
Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China
2
Department of Mathematics, Southern University of Science and Technology, Shenzhen 518055, China
3
SUSTech International Center for Mathematics, Southern University of Science and Technology, Shenzhen 518055, China
*
Author to whom correspondence should be addressed.
Symmetry 2024, 16(3), 272; https://doi.org/10.3390/sym16030272
Submission received: 26 January 2024 / Revised: 20 February 2024 / Accepted: 21 February 2024 / Published: 25 February 2024

Abstract:
In this paper, we investigate two types of conditional strong laws of large numbers, based on a new notion of conditionally independent random variables under G-expectation, which are related to the symmetric function G. Our limit theorems demonstrate that the cluster points of empirical averages fall within the bounds of the lower and upper conditional expectations with lower probability one. Moreover, for conditionally independent random variables with identical conditional distributions, we show the existence of two cluster points of empirical averages that correspond to the essential minimum and essential maximum expectations, respectively, with G-capacity one.

1. Introduction

The strong law of large numbers (SLLN for short) is a fundamental limit theorem in modern probability theory that has extensive applications in various fields such as statistics, econometrics, finance, and filtering theory, to name a few. As practical applications continue to evolve, the mathematical models described by the SLLN and their probability basis have also undergone continuous refinement and adaptation. For example, in the financial market, the prices of different stocks may not be independent despite being filtered to remove spurious correlations. This gives rise to the necessity of the study of conditional SLLN. The interested reader can find more details about conditional SLLN in the papers by Patterson et al. [1], Majerek et al. [2] and Prakasa Rao [3].
Furthermore, the additivity of probability may not always hold in certain fields due to complex practical problems that cannot be modeled well by classical probability theory. In such cases, alternative probabilistic frameworks have been proposed to address these issues, such as non-additive probabilities which are also called capacities (see Choquet [4], Walley and Fine [5], Schmeidler [6], Wasserman and Kadane [7]) and non-additive expectations like Choquet expectations (see Choquet [4], Chen et al. [8]), g-expectations (see Peng [9], Coquet et al. [10], Gianin [11]), and G-expectations (see Peng [12,13,14] for more details). The question arises naturally whether the SLLN still holds for non-additive probabilities. Based on the non-additive probabilistic framework, numerous authors studied SLLN under various conditions. The SLLN under non-additive probabilities differs from the classic SLLN in that it states that the empirical average of random variables lies within an interval asymptotically, as opposed to converging to a single point. Marinacci [15], Maccheroni and Marinacci [16] established an SLLN for bounded and continuous random variables within a totally monotone capacity. After developing a theoretical framework for the nonlinear expectation space, Peng [12,13,14] initiated the notion of independent and identically distributed (IID for short) random variables in the context of sublinear expectation space. Under this probability basis, Peng [17] established a weak law of large numbers (LLN) for IID random variables to converge to a maximal distribution, as well as a central limit theorem (CLT) whose limit is a G-normal distribution. We refer the reader to Peng [14] for more details.
The recent results on the SLLN for non-additive probabilities show that any cluster point of empirical averages lies between the lower expectation $-\widetilde{\mathbb{E}}[-X_1]$ and the upper expectation $\widetilde{\mathbb{E}}[X_1]$ with capacity one under the lower capacity $v$. Chen et al. [18] obtained an SLLN for independent random variables under the condition of finite $(1+\alpha)$th moments for the upper expectation:
$$v\left(\omega\in\Omega:\ -\widetilde{\mathbb{E}}[-X_1]\le\liminf_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}X_i\le\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}X_i\le\widetilde{\mathbb{E}}[X_1]\right)=1.$$
After that, Chen [19], Chen et al. [20], Hu et al. [21], extended the results under different conditions.
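The interval phenomenon described above admits a simple numerical illustration. The following sketch is our own toy construction (the Gaussian family, the endpoint values $\pm 1$ for the lower/upper means, and the doubling-block scheme are all assumptions made for illustration, not part of the cited results): when nature may choose each increment's mean from $[-1,1]$, constant choices drive the empirical average to an endpoint of the interval, while an alternating choice makes the running average oscillate, so several cluster points appear strictly inside the interval.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mu_low, mu_up = -1.0, 1.0  # lower/upper mean of X_1 in this toy model

# If nature always picks the lowest (resp. highest) admissible mean, the
# classical SLLN applies under that single measure and the empirical
# average converges to the corresponding endpoint.
avg_low = rng.normal(mu_low, 1.0, n).mean()
avg_up = rng.normal(mu_up, 1.0, n).mean()

# If nature alternates the mean over blocks of doubling length, the running
# average keeps oscillating: it has more than one cluster point, but all
# cluster points remain inside [mu_low, mu_up].
means, block_ends = [], []
for k in range(17):                      # block k has length 2**k
    means += [mu_up if k % 2 == 0 else mu_low] * 2**k
    block_ends.append(2 ** (k + 1) - 1)  # sample count after block k
samples = rng.normal(np.array(means), 1.0)
running_avg = np.cumsum(samples) / np.arange(1, len(samples) + 1)
ends = running_avg[np.array(block_ends[8:]) - 1]  # late-block snapshots

print(avg_low, avg_up)         # near -1 and +1
print(ends.min(), ends.max())  # oscillates, staying strictly inside (-1, 1)
```

The two constant strategies recover the endpoints of the limiting interval, while the alternating strategy exhibits distinct cluster points inside it, which is exactly the behavior the interval-form SLLN permits.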
In this paper, we investigate two types of conditional SLLN for G-expectations within Peng's framework, aiming to establish a foundation for future applications such as machine learning, reinforcement learning, and stochastic filtering simulations within the G-expectation framework. These extensions are based on the conditional version of Kolmogorov's SLLN but adopt different approaches from the original ones. Motivated by the definition of conditional sublinear expectation stated in Hu and Peng [22], we introduce the definition of conditional G-capacity as well as its related properties. The first result of this paper proves that the equation above remains true for conditionally independent random variables. The second proves the SLLN when the sequence of random variables is conditionally IID with respect to $\mathcal{F}_t$, a sub-$\sigma$-field of $\mathcal{F}$, for a fixed $t\ge 0$, in a G-expectation space $(\Omega, L_G^1(\Omega), \widehat{\mathbb{E}}[\cdot])$, under the condition of finite conditional $(1+\alpha)$th moments for the upper expectation:
$$V\left(\omega\in\Omega:\ \limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}X_i=\overline{\mu}(t)\right)=1,$$
and
$$V\left(\omega\in\Omega:\ \liminf_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}X_i=\underline{\mu}(t)\right)=1,$$
where $V$ is the upper capacity generated by the G-expectation $\widehat{\mathbb{E}}$, $\overline{\mu}(t)=\widehat{\mathbb{E}}[X_1|\mathcal{F}_t]$, and $\underline{\mu}(t)=-\widehat{\mathbb{E}}[-X_1|\mathcal{F}_t]$.
This paper is organized as follows. Section 2 introduces the preliminaries about G-expectation space and G-capacities in Peng’s framework. In Section 3, we introduce the definitions of conditional G-capacity, conditionally IID random variables on a G-expectation space, and related lemmas that would be useful in this paper. Then, we state and prove the main results of this paper in Section 4, which are two types of conditional SLLN under G-expectation space.

2. Preliminaries

In this section, we review fundamental concepts and results within the framework of G-expectation. The reader may refer to Peng [12,13,14] and Hu et al. [21] for more details.
Let $C_b(\mathbb{R}^d)$ be the space of all continuous bounded functions defined on $\mathbb{R}^d$, $C_{b,Lip}(\mathbb{R}^d)$ be the space of all functions in $C_b(\mathbb{R}^d)$ that are Lipschitz continuous, $C_b^+(\mathbb{R}^d)$ be the space of all positive functions in $C_b(\mathbb{R}^d)$, $C_b^{i,+}(\mathbb{R}^d)$ be the space of all $i$th-continuously differentiable functions in $C_b^+(\mathbb{R}^d)$ for $i\in\mathbb{N}_+$, $\mathbb{S}(d)$ be the space of all $d\times d$ symmetric matrices, and $\mathbb{S}_+(d)$ be the space of all non-negative definite matrices in $\mathbb{S}(d)$. Consider a nonempty set $\Omega$, and let $\mathcal{H}$ be a linear space comprising real-valued functions defined on $\Omega$, structured such that if $X_1,\dots,X_d\in\mathcal{H}$, then $\phi(X_1,\dots,X_d)\in\mathcal{H}$ for any $\phi\in C_{b,Lip}(\mathbb{R}^d)$. The space $\mathcal{H}$ is interpreted as the collection of random variables.
Definition 1.
A sublinear expectation E ˜ : H R is a functional adhering to the following properties: for each X , Y H , we have
(1) 
Monotonicity: $\widetilde{\mathbb{E}}[X]\le\widetilde{\mathbb{E}}[Y]$ if $X\le Y$;
(2) 
Constant preserving: E ˜ [ k ] = k for k R ;
(3) 
Sub-additivity: $\widetilde{\mathbb{E}}[X+Y]\le\widetilde{\mathbb{E}}[X]+\widetilde{\mathbb{E}}[Y]$;
(4) 
Positive homogeneity: $\widetilde{\mathbb{E}}[\alpha X]=\alpha\widetilde{\mathbb{E}}[X]$ for any $\alpha\ge 0$.
A space denoted as ( Ω , H , E ˜ ) is termed a sublinear expectation space.
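Definition 1 can be made concrete on a finite sample space: by the representation result recalled later (Theorem 1), a sublinear expectation is a supremum of linear expectations over a set of probability measures. The following sketch is a toy construction of our own (the three probability vectors are an arbitrary choice) that verifies properties (1)-(4) numerically.

```python
import numpy as np

# A sublinear expectation on a three-point sample space, realized as the
# supremum of linear expectations over a set of probability vectors.
# The three probability vectors below are an arbitrary toy choice.
probs = np.array([[0.5, 0.5, 0.0],
                  [0.2, 0.3, 0.5],
                  [0.1, 0.1, 0.8]])

def E(x):
    """Sublinear expectation of a random variable x, given by its 3 values."""
    return float(np.max(probs @ np.asarray(x, dtype=float)))

rng = np.random.default_rng(1)
for _ in range(100):
    x, z = rng.normal(size=3), rng.normal(size=3)
    y = x + np.abs(rng.normal(size=3))            # y >= x pointwise
    assert E(x) <= E(y) + 1e-12                   # (1) monotonicity
    assert abs(E([3.0, 3.0, 3.0]) - 3.0) < 1e-12  # (2) constant preserving
    assert E(x + z) <= E(x) + E(z) + 1e-12        # (3) sub-additivity
    assert abs(E(2.5 * x) - 2.5 * E(x)) < 1e-9    # (4) positive homogeneity
print("properties (1)-(4) hold on 100 random samples")
```

Note that linearity genuinely fails here: in general $E(x) + E(-x) > 0$, which is the hallmark of sublinearity.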
Definition 2.
Let $X$ and $Y$ be two $d$-dimensional random variables defined on sublinear expectation spaces $(\Omega_1,\mathcal{H}_1,\widetilde{\mathbb{E}}_1)$ and $(\Omega_2,\mathcal{H}_2,\widetilde{\mathbb{E}}_2)$, respectively. They are said to be identically distributed, denoted by $X\overset{d}{=}Y$, provided
$$\widetilde{\mathbb{E}}_1[\psi(X)]=\widetilde{\mathbb{E}}_2[\psi(Y)],\quad \text{for all }\psi\in C_{b,Lip}(\mathbb{R}^d).$$
Definition 3.
On a sublinear expectation space $(\Omega,\mathcal{H},\widetilde{\mathbb{E}})$, a random variable $Y=(Y_1,\dots,Y_d)$, $Y_i\in\mathcal{H}$, is said to be independent of another random variable $X=(X_1,\dots,X_n)$, $X_j\in\mathcal{H}$, denoted by $Y\perp X$, provided for each function $\psi\in C_{b,Lip}(\mathbb{R}^{d+n})$, we have
$$\widetilde{\mathbb{E}}[\psi(X,Y)]=\widetilde{\mathbb{E}}\big[\widetilde{\mathbb{E}}[\psi(x,Y)]\big|_{x=X}\big].$$
An $n$-dimensional random variable $\bar{X}$ is an independent copy of $X$ if $\bar{X}\overset{d}{=}X$ and $\bar{X}\perp X$.
Definition 4.
Let $X$ be a $d$-dimensional random variable on $(\Omega,\mathcal{H},\widetilde{\mathbb{E}})$ with an independent copy $\bar{X}$. Then, $X$ is said to be G-normally distributed provided for any $a,b\ge 0$,
$$aX+b\bar{X}\overset{d}{=}\sqrt{a^2+b^2}\,X.$$
Here, $G:\mathbb{S}(d)\to\mathbb{R}$ is the functional defined by
$$G(A):=\frac{1}{2}\widetilde{\mathbb{E}}[\langle AX,X\rangle].$$
Peng [14] showed that $X=(X_1,\dots,X_d)$ is G-normally distributed if and only if for each $\psi\in C_{b,Lip}(\mathbb{R}^d)$, the function $u(t,x):=\widetilde{\mathbb{E}}[\psi(x+\sqrt{t}X)]$, $(t,x)\in[0,\infty)\times\mathbb{R}^d$, is the solution of the following G-heat equation:
$$\partial_t u-G(D_x^2u)=0,\qquad u(0,x)=\psi(x).$$
Let $G(\cdot):\mathbb{S}(d)\to\mathbb{R}$ be a given monotone and sublinear function. Then there exists a bounded, convex, and closed subset $\Sigma\subset\mathbb{S}_+(d)$ such that for $A\in\mathbb{S}(d)$,
$$G(A)=\frac{1}{2}\sup_{B\in\Sigma}(A,B).$$
Hence, the G-normal distribution $N(\{0\}\times\Sigma)$ exists.
In what follows, let $\Omega:=C([0,\infty);\mathbb{R}^d)$ be the space of continuous paths $(\omega_t)_{t\ge 0}$ in $\mathbb{R}^d$ with $\omega_0=0$, equipped with the distance
$$\rho(\omega^1,\omega^2):=\sum_{i=1}^{\infty}2^{-i}\Big[\max_{t\in[0,i]}|\omega^1_t-\omega^2_t|\wedge 1\Big],\qquad \omega^1,\omega^2\in\Omega.$$
Define the canonical process $B_t(\omega):=\omega_t$, $t\in[0,\infty)$, for $\omega\in\Omega$. For $0\le t_1\le t_2\le\dots\le t_n\le T<\infty$, define the spaces of random variables
$$L_{ip}(\Omega_T):=\big\{\psi(B_{t_1},B_{t_2}-B_{t_1},\dots,B_{t_n}-B_{t_{n-1}}):\ n\in\mathbb{N}_+,\ \psi\in C_{b,Lip}(\mathbb{R}^{d\times n})\big\}$$
and
$$L_{ip}(\Omega):=\bigcup_{i=1}^{\infty}L_{ip}(\Omega_i),$$
where $\Omega_T:=\{(\omega_{t\wedge T})_{t\ge 0}:\omega\in\Omega\}$. We note that $L_{ip}(\Omega_t)\subset L_{ip}(\Omega_T)$ for $t\le T$. The spaces $L_G^p(\Omega)$ and $L_G^p(\Omega_T)$ are defined as the completions of $L_{ip}(\Omega)$ and $L_{ip}(\Omega_T)$, respectively, under the norm $\|\xi\|_p=(\widehat{\mathbb{E}}[|\xi|^p])^{1/p}$ for $p\ge 1$. Similarly, for $0\le t\le T<\infty$, we have $L_G^p(\Omega_t)\subset L_G^p(\Omega_T)\subset L_G^p(\Omega)$.
Definition 5
(Peng [13]). Let $\{\xi_i\}_{i=1}^{\infty}$ be a sequence of $d$-dimensional random variables on a sublinear expectation space $(\widetilde{\Omega},\widetilde{\mathcal{H}},\widetilde{\mathbb{E}})$ such that each $\xi_i$ is G-normally distributed and $\xi_{i+1}$ is independent of $(\xi_1,\dots,\xi_i)$ for each $i\ge 1$. For all $X=\psi(B_{t_1},B_{t_2}-B_{t_1},\dots,B_{t_n}-B_{t_{n-1}})\in L_{ip}(\Omega)$, where $\psi\in C_{b,Lip}(\mathbb{R}^{d\times n})$ and $0\le t_1<\dots<t_n<\infty$, the sublinear expectation $\widehat{\mathbb{E}}:L_{ip}(\Omega)\to\mathbb{R}$ defined by
$$\widehat{\mathbb{E}}[X]=\widetilde{\mathbb{E}}\big[\psi\big(\sqrt{t_1}\,\xi_1,\sqrt{t_2-t_1}\,\xi_2,\dots,\sqrt{t_n-t_{n-1}}\,\xi_n\big)\big],$$
is a G-expectation. The corresponding canonical process { B t } t 0 is a d-dimensional G-Brownian motion satisfying:
1. 
B 0 = 0 ;
2. 
For each t , s 0 , B t + s B t and B s are identically distributed and B t + s B t is independent of ( B t 1 , B t 2 , , B t j ) for all j N + and 0 t 1 t 2 t j t ;
3. 
$B_t\overset{d}{=}\sqrt{t}\,\xi$, where $\xi$ is G-normally distributed.
Throughout this paper, we consider the G-expectation space $(\Omega,L_G^1(\Omega),\widehat{\mathbb{E}}[\cdot])$, on which $(B_t(\omega))_{t\ge 0}$ is the G-Brownian motion. For each $t\in[0,\infty)$, we set
$$\mathcal{F}_t:=\mathcal{B}_t(\Omega)=\mathcal{B}(\Omega_t),\qquad \mathcal{F}_{t+}:=\mathcal{B}_{t+}(\Omega)=\bigcap_{s>t}\mathcal{B}_s(\Omega),\qquad \mathcal{F}:=\bigvee_{s>0}\mathcal{F}_s.$$
The space ( Ω , F ) is the canonical space equipped with the natural filtration. Denote by M the set of all probability measures on ( Ω , F ) .
Theorem 1
(Denis et al. [23], Hu and Peng [24]). For any X L G 1 ( Ω ) , there exists a weakly compact set P M such that
$$\widehat{\mathbb{E}}[X]=\sup_{P\in\mathcal{P}}E_P[X],\qquad -\widehat{\mathbb{E}}[-X]=\inf_{P\in\mathcal{P}}E_P[X].$$
Here, E P denotes the expectation under the probability P. We call P the set representing E ^ .
Definition 6.
The set function c denoted by
c ( A ) = c M ( A ) : = sup P M P ( A ) , A F ,
is called an upper probability associated with M . This upper probability c ( · ) is a capacity satisfying the following properties:
(1) 
c ( ) = 0 , c ( Ω ) = 1 ;
(2) 
c ( A ) c ( B ) , whenever A B and A , B F ;
(3) 
c ( i = 1 n A i ) i = 1 n c ( A i ) , A i F for all i = 1 , 2 , , n .
Definition 7.
Let P be a weakly compact set representing E ^ . A pair of capacities ( V , v ) is called G-capacities generated by a G-expectation E ^ if
V ( A ) : = sup P P P ( A ) , v ( A ) : = inf P P P ( A ) , A F .
It is easy to prove that V and v are conjugates, namely, V ( A ) + v ( A c ) = 1 , for any A F .
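A minimal numerical sketch of Definitions 6 and 7 (the five-point space and the three measures are our own toy choices, standing in for the weakly compact set $\mathcal{P}$): the upper and lower capacities are computed as a maximum and minimum over the family, and the conjugacy relation and sub-additivity are checked on every subset.

```python
from itertools import combinations

# Upper and lower capacities generated by a finite family of probability
# measures on a five-point space (a toy stand-in for the set P).
OMEGA = range(5)
family = [
    {0: 0.2, 1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2},
    {0: 0.5, 1: 0.1, 2: 0.1, 3: 0.1, 4: 0.2},
    {0: 0.0, 1: 0.4, 2: 0.3, 3: 0.3, 4: 0.0},
]

def V(A):  # upper capacity: V(A) = sup_P P(A)
    return max(sum(P[w] for w in A) for P in family)

def v(A):  # lower capacity: v(A) = inf_P P(A)
    return min(sum(P[w] for w in A) for P in family)

# Conjugacy V(A) + v(A^c) = 1 and sub-additivity of V, checked on all subsets.
subsets = [set(c) for r in range(6) for c in combinations(OMEGA, r)]
for A in subsets:
    assert abs(V(A) + v(set(OMEGA) - A) - 1.0) < 1e-12
    for B in subsets:
        assert V(A | B) <= V(A) + V(B) + 1e-12
```

Note that $V$ is not additive in general: here $V(\{0\})+V(\{1\})=0.9$ while $V(\{0,1\})=0.6$, which is exactly why only sub-additivity appears in Definition 6.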
Definition 8.
V ( · ) is called lower or upper continuous if it satisfies (a) or (b):
(a) 
$V(A_n)\uparrow V(A)$, if $A_n\uparrow A$, where $A_n,A\in\mathcal{F}$;
(b) 
$V(A_n)\downarrow V(A)$, if $A_n\downarrow A$, where $A_n,A\in\mathcal{F}$.
Lemma 1
(Chen et al. [18] (Lemma 2.1)). The capacities $V(\cdot)$ and $v(\cdot)$ are lower and upper continuous, respectively. Moreover, the upper (lower) continuity of $V(\cdot)$ is equivalent to the lower (upper) continuity of $v(\cdot)$.
Next, we provide a case where the capacity V ( · ) is upper continuous.
Lemma 2
(Peng [14] (Lemma 6.1.12)). Let $\mathcal{P}$ be a weakly compact set representing $\widehat{\mathbb{E}}$. Suppose that $(A_n)_{n\ge 1}$ are closed sets; then we have $V(A_n)\downarrow V(A)$ if $A_n\downarrow A$.
Definition 9
(Quasi-surely). A set $A\subset\Omega$ is polar if $V(A)=0$, and a property holds "quasi-surely" (q.s. for short) if it holds outside a polar set.
Definition 10
(Peng [14]). A functional X : Ω R is called quasi-continuous if for any ε > 0 , there exists an open set O with V ( O ) < ε such that X | O c is continuous.
Definition 11.
A mapping X : Ω R is said to have a quasi-continuous version if there exists a quasi-continuous function Y : Ω R such that X = Y q.s.
Next, we give an important lemma for the equivalent definition of L G p ( Ω ) .
Lemma 3
(Denis et al. [23], Hu and Peng [24]).
$$L_G^p(\Omega)=\Big\{X\ \mathcal{F}\text{-measurable}:\ \lim_{N\to\infty}\widehat{\mathbb{E}}\big[|X|^p\,I_{\{|X|>N\}}\big]=0\ \text{and}\ X\ \text{has a quasi-continuous version}\Big\}.$$
Theorem 2
(Hu et al. [25] (Theorem 3.16)). Let $X$ be an $n$-dimensional random variable in $L_G^1(\Omega)$. If $A$ is a Borel set of $\mathbb{R}^n$ with $V(\{X\in\partial A\})=0$, where $\partial A$ denotes the boundary of $A$, then $I_{\{X\in A\}}\in L_G^1(\Omega)$.
Lemma 4.
Let $\widehat{\mathbb{E}}$ be a G-expectation and $(V,v)$ be the G-capacities generated by $\widehat{\mathbb{E}}$. For any $X,Y\in L_G^1(\Omega)$, $X$ is independent of $Y$ under $V$ and $v$ if $X$ is independent of $Y$ under $\widehat{\mathbb{E}}$. Namely, for all Borel sets $A$ and $B$ with $V(X\in\partial A)=V(Y\in\partial B)=0$,
$$\mu(X\in A,\,Y\in B)=\mu(X\in A)\,\mu(Y\in B)$$
holds for μ = V or v .
This is a direct consequence of Lemma 2.6 in Chen [19] and Theorem 2, so we omit the proof.
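The factorization in Lemma 4 can be observed concretely when the representing set has product structure. In the toy sketch below (the two-point marginal spaces and the marginal families are our own assumptions), the joint family consists of all product measures, and the supremum defining the upper capacity then splits across the two factors.

```python
from itertools import product

# Marginal representing families for X and Y on two-point spaces (toy choice).
PX = [(0.3, 0.7), (0.6, 0.4)]
PY = [(0.5, 0.5), (0.1, 0.9)]
# Joint representing set: all product measures P x Q.  This product structure
# plays the role of independence of X and Y under the induced capacities.
joint = [tuple(p[i] * q[j] for i in range(2) for j in range(2))
         for p, q in product(PX, PY)]

def V_joint(i, j):  # V(X = i, Y = j)
    return max(m[2 * i + j] for m in joint)

def V_X(i):  # V(X = i)
    return max(p[i] for p in PX)

def V_Y(j):  # V(Y = j)
    return max(q[j] for q in PY)

# The supremum over product measures splits into a product of suprema.
for i in range(2):
    for j in range(2):
        assert abs(V_joint(i, j) - V_X(i) * V_Y(j)) < 1e-12
```

The same factorization holds with `min` in place of `max`, mirroring the statement "for $\mu=V$ or $v$".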

3. Conditional G-Expectation and Related Lemmas

In this section, we first give the definitions of conditional G-capacity, conditional independence, and conditionally identical distribution, and then establish some important inequalities and lemmas in their conditional versions on the G-expectation space $(\Omega,L_G^1(\Omega),\widehat{\mathbb{E}})$.
Definition 12
(Peng [12]). For all X = ψ ( B t 1 , B t 2 B t 1 , , B t n B t n 1 ) L i p ( Ω ) , 0 t 1 < < t n < , the conditional G-expectation E ^ [ · | F t j ] given F t j is defined by
$$\widehat{\mathbb{E}}[X|\mathcal{F}_{t_j}]=\widetilde{\psi}\big(B_{t_1},B_{t_2}-B_{t_1},\dots,B_{t_j}-B_{t_{j-1}}\big),$$
where $\psi\in C_{b,Lip}(\mathbb{R}^{d\times n})$, $\{B_t\}_{t\ge 0}$ is a $d$-dimensional G-Brownian motion, and
$$\widetilde{\psi}(x_1,\dots,x_j)=\widehat{\mathbb{E}}\big[\psi\big(x_1,\dots,x_j,B_{t_{j+1}}-B_{t_j},\dots,B_{t_n}-B_{t_{n-1}}\big)\big].$$
For each fixed t 0 , the conditional G-expectation E ^ [ · | F t ] : L i p ( Ω ) L i p ( Ω t ) is a continuous mapping, which can be uniquely extended to the Banach space L G 1 ( Ω ) , and satisfies the following properties:
(1)
$\widehat{\mathbb{E}}[X|\mathcal{F}_t]\le\widehat{\mathbb{E}}[Y|\mathcal{F}_t]$, if $X,Y\in L_G^1(\Omega)$, $X\le Y$;
(2)
$\widehat{\mathbb{E}}[X+Y|\mathcal{F}_t]\le\widehat{\mathbb{E}}[X|\mathcal{F}_t]+\widehat{\mathbb{E}}[Y|\mathcal{F}_t]$, if $X,Y\in L_G^1(\Omega)$;
(3)
E ^ [ X + Y | F t ] = X + E ^ [ Y | F t ] , if X L G 1 ( Ω t ) , Y L G 1 ( Ω ) ;
(4)
$\widehat{\mathbb{E}}[XY|\mathcal{F}_t]=X^{+}\widehat{\mathbb{E}}[Y|\mathcal{F}_t]+X^{-}\widehat{\mathbb{E}}[-Y|\mathcal{F}_t]$, if $X\in L_G^1(\Omega_t)$ is bounded, $X^{+}=\max\{X,0\}$, $X^{-}=\max\{-X,0\}$ with $X=X^{+}-X^{-}$, and $Y\in L_G^1(\Omega)$;
(5)
$\widehat{\mathbb{E}}\big[\widehat{\mathbb{E}}[X|\mathcal{F}_t]\,\big|\,\mathcal{F}_s\big]=\widehat{\mathbb{E}}[X|\mathcal{F}_{t\wedge s}]$, if $X\in L_G^1(\Omega)$. In particular, $\widehat{\mathbb{E}}\big[\widehat{\mathbb{E}}[X|\mathcal{F}_t]\big]=\widehat{\mathbb{E}}[X]$.
We refer the reader to Peng [12] for more details.
Let $\mathcal{P}$ be a weakly compact set representing $\widehat{\mathbb{E}}$. For each fixed $t\ge 0$ and $P\in\mathcal{P}$, we define
$$\mathcal{P}(t,P)=\{Q\in\mathcal{P}:\ E_Q[X]=E_P[X],\ \forall X\in L_{ip}(\Omega_t)\}.$$
Then, the conditional G-expectation can be characterized as follows.
Theorem 3
(Soner et al. [26] (Proposition 3.4)). For each X L G 1 ( Ω ) and each P P ,
E ^ [ X | F t ] = ess sup Q P ( t , P ) E Q [ X | F t ] P a . s .
Similar to the definition of capacity, we now characterize the conditional G-capacity.
Definition 13.
Let P be a weakly compact set representing E ^ . For each fixed t 0 and each P P , ( V ( · | F t ) , v ( · | F t ) ) are called the conditional G-capacities generated by the G-expectation E ^ given F t , if for each A F ,
V ( A | F t ) : = ess sup Q P ( t , P ) Q ( A | F t ) , P - a . s . , v ( A | F t ) : = ess inf Q P ( t , P ) Q ( A | F t ) , P - a . s .
We also call V ( · | F t ) a conditional upper probability and v ( · | F t ) a conditional lower probability for given t 0 .
Similar to capacity, one can easily verify that conditional G-capacity has the following properties.
Proposition 1.
Fix $t\ge 0$. For any $A,A_n,B\in\mathcal{F}$, the sub-$\sigma$-field $\mathcal{F}_t\subset\mathcal{F}$, and $n\ge 1$,
(1) 
0 V ( A | F t ) 1 , V ( | F t ) = 0 , V ( Ω | F t ) = 1 , q.s.;
(2) 
V ( A | F t ) V ( B | F t ) , q.s., whenever A B ;
(3) 
V ( A B | F t ) V ( A | F t ) + V ( B | F t ) , q.s.;
(4) 
V ( A | F t ) = V ( A ) , q.s., if A is independent of F t ;
(5) 
V ( A | F t ) + v ( A c | F t ) = 1 , q.s.;
(6) 
$V(\cdot|\mathcal{F}_t)$ is lower continuous, namely, $V(A_n|\mathcal{F}_t)\uparrow V(A|\mathcal{F}_t)$, q.s., if $A_n\uparrow A$;
(7) 
Suppose $(A_n)_{n\ge 1}$ are closed sets; then $V(\cdot|\mathcal{F}_t)$ is upper continuous, namely, $V(A_n|\mathcal{F}_t)\downarrow V(A|\mathcal{F}_t)$, q.s., if $A_n\downarrow A$;
(8) 
The upper (lower) continuity of V ( · | F t ) is equivalent to the lower (upper) continuity v ( · | F t ) .
Remark 1.
The proof of Proposition 1 is similar to the proof of properties of capacity, so we omit them. Due to Definition 13, for fixed P P , all the aforementioned properties hold P-a.s. Moreover, since they hold for each given P P , the aforementioned properties hold quasi-surely.
Definition 14.
Let X 1 and X 2 be two n-dimensional random variables defined on a G-expectation space ( Ω , L G 1 ( Ω ) , E ^ ) . They are said to have a conditionally identical distribution given F t under E ^ [ · | F t ] for fixed t 0 , if for each function ψ C b , L i p ( R n ) ,
E ^ [ ψ ( X 1 ) | F t ] = E ^ [ ψ ( X 2 ) | F t ] , q . s .
Definition 15
(Conditional independence). Let X and Y be two n-dimensional random variables defined on a G-expectation space ( Ω , L G 1 ( Ω ) , E ^ ) . Then, X is called conditionally independent of Y given F t under E ^ [ · | F t ] for fixed t 0 , if for each function ψ C b , L i p ( R 2 n ) ,
E ^ [ ψ ( X , Y ) | F t ] = E ^ E ^ [ ψ ( x , Y ) | F t ] x = X | F t , q . s .
Remark 2.
When X and Y both belong to L G 1 ( Ω t ) , they naturally also belong to L G 1 ( Ω ) and are F t -measurable. By Definition 15, it can be deduced that X is conditionally independent of Y given F t under E ^ [ · | F t ] .
Definition 16
( F t -conditionally IID random variables). A sequence of random variables { X i } i = 1 is considered F t -conditionally IID under E ^ [ · | F t ] for fixed t 0 , if X i and X 1 are conditionally identically distributed given F t under E ^ [ · | F t ] , and X i + 1 is conditionally independent of ( X 1 , , X i ) given F t under E ^ [ · | F t ] for each i 1 .
Lemma 5.
Let $(V(\cdot|\mathcal{F}_t),v(\cdot|\mathcal{F}_t))$ be the conditional G-capacities generated by a G-expectation $\widehat{\mathbb{E}}[\cdot]$ given $\mathcal{F}_t$. For any $X,Y\in L_G^1(\Omega)$, $X$ is conditionally independent of $Y$ given $\mathcal{F}_t$ under $V$ and $v$ if $X$ is conditionally independent of $Y$ given $\mathcal{F}_t$ under $\widehat{\mathbb{E}}$. Namely, for all Borel sets $A,B$ with $V(X\in\partial A)=V(Y\in\partial B)=0$,
$$\mu(X\in A,\,Y\in B\,|\,\mathcal{F}_t)=\mu(X\in A\,|\,\mathcal{F}_t)\,\mu(Y\in B\,|\,\mathcal{F}_t),\quad q.s.$$
holds for μ = V or v.
Proof. 
By Lemma 3, Theorem 2, Theorem 3 and Definition 15, if we choose ψ ( x , y ) = x y , then for fixed P P ,
V ( X A , Y B | F t ) = ess sup Q P ( t , P ) Q ( X A , Y B | F t ) = E ^ 1 { X A , Y B } | F t = E ^ ψ ( 1 { X A } , 1 { Y B } ) | F t = E ^ 1 { X A } | F t E ^ 1 { Y B } | F t = V ( X A | F t ) V ( Y B | F t ) , P - a . s .
Since by Definition 13, the above equation holds for each P P , it holds quasi-surely. Likewise, by selecting ψ ( x , y ) = x y , we can demonstrate that X is conditionally independent of Y given F t under v. Then the proof is completed. □
Chen et al. [18] proved that Chebyshev’s inequality, Hölder’s inequality, and Jensen’s inequality hold in sublinear expectation space. Now we give their forms under conditional G-expectation E ^ [ · | F t ] for fixed t 0 .
Proposition 2.
Let X , Y L G 1 ( Ω ) be two random variables on a G-expectation space ( Ω , L G 1 ( Ω ) , E ^ ) .
(1) 
(Chebyshev’s inequality.) For each $\alpha>0$, $p\ge 1$, and given $\mathcal{F}_t$,
$$V(|X|\ge\alpha\,|\,\mathcal{F}_t)\le\frac{\widehat{\mathbb{E}}[|X|^p\,|\,\mathcal{F}_t]}{\alpha^{p}},\quad q.s.$$
(2) 
(Hölder’s inequality.) For $p,q>1$ with $\frac{1}{p}+\frac{1}{q}=1$ and given $\mathcal{F}_t$, we have
$$\widehat{\mathbb{E}}\big[|XY|\,\big|\,\mathcal{F}_t\big]\le\big(\widehat{\mathbb{E}}[|X|^p\,|\,\mathcal{F}_t]\big)^{\frac{1}{p}}\big(\widehat{\mathbb{E}}[|Y|^q\,|\,\mathcal{F}_t]\big)^{\frac{1}{q}},\quad q.s.$$
(3) 
(Jensen’s inequality.) Let $g(\cdot)$ be a convex function. For the given $\mathcal{F}_t$, suppose that $\widehat{\mathbb{E}}[X|\mathcal{F}_t]$ and $\widehat{\mathbb{E}}[g(X)|\mathcal{F}_t]$ exist. Then
$$g\big(\widehat{\mathbb{E}}[X|\mathcal{F}_t]\big)\le\widehat{\mathbb{E}}[g(X)|\mathcal{F}_t],\quad q.s.$$
Proof. 
For (1), for fixed P P , due to Theorem 3,
$$\widehat{\mathbb{E}}[|X|^p|\mathcal{F}_t]=\widehat{\mathbb{E}}\big[|X|^pI_{\{|X|\ge\alpha\}}+|X|^pI_{\{|X|<\alpha\}}\,\big|\,\mathcal{F}_t\big]\ge\widehat{\mathbb{E}}\big[|X|^pI_{\{|X|\ge\alpha\}}\,\big|\,\mathcal{F}_t\big]\ge\alpha^p\,\widehat{\mathbb{E}}\big[I_{\{|X|\ge\alpha\}}\,\big|\,\mathcal{F}_t\big]=\alpha^p\operatorname*{ess\,sup}_{Q\in\mathcal{P}(t,P)}Q(|X|\ge\alpha\,|\,\mathcal{F}_t)=\alpha^p\,V(|X|\ge\alpha\,|\,\mathcal{F}_t),\quad P\text{-a.s.}$$
Since by Definition 13, the above inequality holds for each P P , it holds quasi-surely.
For (2), we first consider the case when E ^ [ | X | p | F t ] · E ^ [ | Y | q | F t ] > 0 , q . s . Let
$$\xi=\frac{X}{\big(\widehat{\mathbb{E}}[|X|^p|\mathcal{F}_t]\big)^{1/p}},\qquad \eta=\frac{Y}{\big(\widehat{\mathbb{E}}[|Y|^q|\mathcal{F}_t]\big)^{1/q}}.$$
By Young’s inequality, we can derive that
$$\widehat{\mathbb{E}}\big[|\xi\eta|\,\big|\,\mathcal{F}_t\big]\le\widehat{\mathbb{E}}\Big[\frac{|\xi|^p}{p}+\frac{|\eta|^q}{q}\,\Big|\,\mathcal{F}_t\Big]\le\widehat{\mathbb{E}}\Big[\frac{|\xi|^p}{p}\,\Big|\,\mathcal{F}_t\Big]+\widehat{\mathbb{E}}\Big[\frac{|\eta|^q}{q}\,\Big|\,\mathcal{F}_t\Big]=\frac{1}{p}+\frac{1}{q}=1,\quad q.s.$$
For the case E ^ [ | X | p | F t ] · E ^ [ | Y | q | F t ] = 0 , q . s , we set
$$\xi=\frac{X}{\big(\widehat{\mathbb{E}}[|X|^p|\mathcal{F}_t]+\epsilon\big)^{1/p}},\qquad \eta=\frac{Y}{\big(\widehat{\mathbb{E}}[|Y|^q|\mathcal{F}_t]+\epsilon\big)^{1/q}},$$
for any $\epsilon>0$. Using a similar approach as above and letting $\epsilon\downarrow 0$, we can finish the proof.
For (3), it follows from the definition of E ^ [ · | F t ] that for all k R ,
$$\widehat{\mathbb{E}}[kX|\mathcal{F}_t]=k^{+}\widehat{\mathbb{E}}[X|\mathcal{F}_t]+k^{-}\widehat{\mathbb{E}}[-X|\mathcal{F}_t]\ge k^{+}\widehat{\mathbb{E}}[X|\mathcal{F}_t]-k^{-}\widehat{\mathbb{E}}[X|\mathcal{F}_t]=k\,\widehat{\mathbb{E}}[X|\mathcal{F}_t],\quad q.s.$$
Let g : R R be a convex function. Then, there exists a countable set D R 2 such that
g ( x ) = sup ( k , μ ) D ( k x + μ ) .
Hence, we can derive that
$$\widehat{\mathbb{E}}[g(X)|\mathcal{F}_t]=\widehat{\mathbb{E}}\Big[\sup_{(k,\mu)\in D}(kX+\mu)\,\Big|\,\mathcal{F}_t\Big]\ge\sup_{(k,\mu)\in D}\widehat{\mathbb{E}}[kX+\mu\,|\,\mathcal{F}_t]\ge\sup_{(k,\mu)\in D}\big(k\,\widehat{\mathbb{E}}[X|\mathcal{F}_t]+\mu\big)=g\big(\widehat{\mathbb{E}}[X|\mathcal{F}_t]\big),\quad q.s.$$
□
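As an unconditional sanity check of Proposition 2 (a toy sketch of our own; the four-point space with three measures is an arbitrary choice, and the conditional statements reduce to this form once the conditioning information is realized), the three inequalities can be verified numerically for a sublinear expectation given as a maximum of linear expectations.

```python
import numpy as np

# A sublinear expectation as a maximum of three linear expectations on a
# four-point space (toy choice), and its associated upper capacity.
probs = np.array([[0.25, 0.25, 0.25, 0.25],
                  [0.10, 0.20, 0.30, 0.40],
                  [0.40, 0.30, 0.20, 0.10]])

def E(x):
    return float(np.max(probs @ np.asarray(x, dtype=float)))

def V(event):  # upper capacity of a Boolean event vector
    return E(event.astype(float))

rng = np.random.default_rng(2)
p = q = 2.0  # Hoelder exponents with 1/p + 1/q = 1
for _ in range(200):
    X, Y = rng.normal(size=4), rng.normal(size=4)
    alpha = 0.5 + rng.random()
    # Chebyshev: V(|X| >= alpha) <= E[|X|^p] / alpha^p
    assert V(np.abs(X) >= alpha) <= E(np.abs(X) ** p) / alpha ** p + 1e-12
    # Hoelder: E[|XY|] <= E[|X|^p]^(1/p) * E[|Y|^q]^(1/q)
    assert E(np.abs(X * Y)) <= (E(np.abs(X) ** p) ** 0.5
                                * E(np.abs(Y) ** q) ** 0.5) + 1e-12
    # Jensen with the convex g(x) = x^2: g(E[X]) <= E[g(X)]
    assert E(X) ** 2 <= E(X ** 2) + 1e-12
```

Each inequality holds measure-by-measure classically, and taking the supremum over the family preserves it, which is the mechanism behind the proofs above.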
The Borel–Cantelli lemma is one of the most fundamental tools in classical probability theory for proving the SLLN. Chen et al. [18] proved a Borel–Cantelli lemma for capacities. Next, we show that the Borel–Cantelli lemma remains true for the conditional G-capacities under certain assumptions on the G-expectation space.
Lemma 6
(Conditional Borel–Cantelli Lemma). Let { A n , n 1 } be a sequence of random events in F and V ( · | F t ) be the conditional G-capacity generated by the G-expectation E ^ given F t for fixed t 0 on a G-expectation space ( Ω , L G 1 ( Ω ) , E ^ ) .
(1) 
If $A=\{\omega:\sum_{n=1}^{\infty}V(A_n|\mathcal{F}_t)<\infty\}$ with $V(A)=1$, then $V\big(\bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}(A_k\cap A)\,\big|\,\mathcal{F}_t\big)=0$, q.s.
(2) 
Suppose that $v(\cdot|\mathcal{F}_t)$ is lower continuous and $\{A_n^c\}_{n=1}^{\infty}$ is a sequence of conditionally independent events for the fixed $t\ge 0$ with respect to $v$, namely,
$$v\Big(\bigcap_{i=n}^{\infty}A_i^c\,\Big|\,\mathcal{F}_t\Big)=\prod_{i=n}^{\infty}v\big(A_i^c\,\big|\,\mathcal{F}_t\big),\quad q.s.$$
Let $A=\{\omega:\sum_{n=1}^{\infty}V(A_n|\mathcal{F}_t)=\infty\}$; then $V(\limsup_{n\to\infty}A_n)=V(A)$.
Proof. 
For (1), let $B=\bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}(A_k\cap A)\in\mathcal{F}$; then
$$V(B|\mathcal{F}_t)=V\Big(\bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}(A_k\cap A)\,\Big|\,\mathcal{F}_t\Big)\le V\Big(\bigcup_{k=n}^{\infty}(A_k\cap A)\,\Big|\,\mathcal{F}_t\Big)\le\sum_{k=n}^{\infty}V(A_k\cap A\,|\,\mathcal{F}_t)\to 0,\quad q.s.,\ \text{as}\ n\to\infty.$$
For (2), let $C=\bigcup_{n=1}^{\infty}\bigcap_{k=n}^{\infty}A_k^c\in\mathcal{F}$. By Proposition 1 (8) and the assumed lower continuity of $v(\cdot|\mathcal{F}_t)$, we know that $V(\cdot|\mathcal{F}_t)$ is upper continuous for the given $\mathcal{F}_t$. Then
$$0\le v(C|\mathcal{F}_t)=1-V(C^c|\mathcal{F}_t)=1-\lim_{n\to\infty}V\Big(\bigcup_{k=n}^{\infty}A_k\,\Big|\,\mathcal{F}_t\Big)=\lim_{n\to\infty}v\Big(\bigcap_{k=n}^{\infty}A_k^c\,\Big|\,\mathcal{F}_t\Big)=\lim_{n\to\infty}\prod_{k=n}^{\infty}\big(1-V(A_k|\mathcal{F}_t)\big)\le\lim_{n\to\infty}\prod_{k=n}^{\infty}\exp\big(-V(A_k|\mathcal{F}_t)\big)=\lim_{n\to\infty}\exp\Big(-\sum_{k=n}^{\infty}V(A_k|\mathcal{F}_t)\Big)=0,\quad q.s.\ \text{on}\ A.$$
Thus, by virtue of Hu and Peng [22] (Lemma 23), we can derive that
$$v(C)=\inf_{P\in\mathcal{P}}P(C)=\inf_{P\in\mathcal{P},\,Q\in\mathcal{P}(t,P)}E_Q[I_C]=\inf_{P\in\mathcal{P}}E_P\Big[\operatorname*{ess\,inf}_{Q\in\mathcal{P}(t,P)}E_Q[I_C|\mathcal{F}_t]\Big]=\inf_{P\in\mathcal{P}}E_P\big[v(C|\mathcal{F}_t)I_A+v(C|\mathcal{F}_t)I_{A^c}\big]=\inf_{P\in\mathcal{P}}E_P\big[v(C|\mathcal{F}_t)I_{A^c}\big]\le\inf_{P\in\mathcal{P}}E_P[I_{A^c}]=v(A^c),$$
and hence, we have
$$V(C^c)\ge V(A).$$
On the other hand, it follows from (1) that only finitely many events of the sequence $\{A_n,n\ge 1\}$ occur on the set $A^c$; hence, $V(C^c)\le V(A)$. Therefore, $V(C^c)=V(A)$. □
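The mechanism behind part (1) can be seen in an unconditional toy example of our own construction: on $[0,1]$ with two measures (Lebesgue, and the tilted density $2x$), the nested events $A_k=(0,1/k^2)$ satisfy $\sum_k V(A_k)<\infty$, and the tail bound $V(\bigcup_{k\ge n}A_k)\le\sum_{k\ge n}V(A_k)\to 0$ forces the limsup event to be $V$-null.

```python
# Two measures on [0, 1]: Lebesgue, and the tilted density 2x.  For the
# nested events A_k = (0, 1/k^2) their measures are known in closed form,
# so V(A_k) = max(P1(A_k), P2(A_k)) = 1/k^2 and sum_k V(A_k) < infinity.
def V_event(k):
    p1 = 1.0 / k**2          # Lebesgue measure of (0, 1/k^2)
    p2 = (1.0 / k**2) ** 2   # integral of the density 2x over (0, 1/k^2)
    return max(p1, p2)

def tail_sum(n, upto=100_000):
    return sum(V_event(k) for k in range(n, upto))

# The A_k are nested, so union_{k>=n} A_k = A_n, and
# V(union_{k>=n} A_k) = V(A_n) <= sum_{k>=n} V(A_k) -> 0:
# the limsup event is V-null, matching part (1) of the lemma.
for n in (1, 10, 100):
    assert V_event(n) <= tail_sum(n) + 1e-15
print(tail_sum(100))  # small tail, roughly 1/99
```

The same sub-additivity argument, applied conditionally, is exactly the one-line estimate used in the proof of part (1) above.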

4. Conditional SLLN under G-Expectations

In this section, we introduce the conditional SLLN under G-expectations. Before moving forward, consider the following lemmas.
Lemma 7.
For fixed $t\ge 0$, consider a sequence of $\mathcal{F}_t$-conditionally independent random variables $\{X_i\}_{i=1}^{\infty}\subset L_G^1(\Omega)$ such that $\sup_{i\ge 1}\widehat{\mathbb{E}}\big[|X_i|^{1+\alpha}\big|\mathcal{F}_t\big]<\infty$ q.s. for some $\alpha>0$. If there is a constant $c>0$ such that for $i\in\mathbb{N}_+$,
$$\big|X_i-\widehat{\mathbb{E}}[X_i|\mathcal{F}_t]\big|\le\frac{ci}{\ln(1+i)},\quad q.s.,$$
Then for all m > 1 , we can derive that
$$\sup_{n\ge 1}\widehat{\mathbb{E}}\Big[\exp\Big(\frac{m\ln(1+n)}{n}\sum_{i=1}^{n}\big(X_i-\widehat{\mathbb{E}}[X_i|\mathcal{F}_t]\big)\Big)\,\Big|\,\mathcal{F}_t\Big]<\infty,\quad q.s.$$
Proof. 
We notice that the fact $\sup_{i\ge 1}\widehat{\mathbb{E}}[|X_i|^{1+\alpha}|\mathcal{F}_t]<\infty$ q.s. implies that $\sup_{i\ge 1}\widehat{\mathbb{E}}[|X_i|\,|\mathcal{F}_t]<\infty$ and $\sup_{i\ge 1}\widehat{\mathbb{E}}\big[|X_i-\widehat{\mathbb{E}}[X_i|\mathcal{F}_t]|^{1+\alpha}\big|\mathcal{F}_t\big]<\infty$ q.s. for the fixed $t\ge 0$. It follows from
$$\lim_{n\to\infty}\frac{n^{\alpha}}{(\ln(1+n))^{1+\alpha}}=\infty$$
that for all $m>1$, there exists a constant $n_m\in\mathbb{N}_+$ such that
$$\frac{n^{\alpha}}{(\ln(1+n))^{1+\alpha}}\ge m^{1+\alpha},\quad n>n_m,$$
which means
$$\Big(\frac{m\ln(1+n)}{n}\Big)^{1+\alpha}\le\frac{1}{n},\quad n>n_m.$$
Besides, for any $n>n_m$ and $i\le n$, set $x=\frac{m\ln(1+n)}{n}\big(X_i-\widehat{\mathbb{E}}[X_i|\mathcal{F}_t]\big)$. Due to the fact that
$$e^{x}\le 1+x+|x|^{1+\alpha}e^{2|x|},\quad 0<\alpha\le 1,$$
plugging x back into (2), we can obtain that
exp m ln ( 1 + n ) n ( X i E ^ [ X i | F t ] ) 1 + m ln ( 1 + n ) n ( X i E ^ [ X i | F t ] ) + m ln ( 1 + n ) n 1 + α | X i E ^ [ X i | F t ] | 1 + α exp 2 m ln ( 1 + n ) n ( X i E ^ [ X i | F t ] ) , q . s .
By assumptions,
m ln ( 1 + n ) n ( X i E ^ [ X i | F t ] ) c m q . s . ,
for fixed t 0 . Combined with (1), one has
exp m ln ( 1 + n ) n ( X i E ^ [ X i | F t ] ) 1 + m ln ( 1 + n ) n ( X i E ^ [ X i | F t ] ) + e 2 c m n | X i E ^ [ X i | F t ] | 1 + α q . s .
Let ζ = E ^ [ | X i E ^ [ X i | F t ] | 1 + α | F t ] . Taking E ^ [ · | F t ] on both sides of the equation above, we can derive that ζ < q.s. and
E ^ exp m ln ( 1 + n ) n ( X i E ^ [ X i | F t ] ) | F t 1 + ζ n e 2 c m q . s .
By conditional independence of { X i } i = 1 given F t ,
$$\widehat{\mathbb{E}}\Big[\exp\Big(\frac{m\ln(1+n)}{n}\sum_{i=1}^{n}\big(X_i-\widehat{\mathbb{E}}[X_i|\mathcal{F}_t]\big)\Big)\,\Big|\,\mathcal{F}_t\Big]=\prod_{i=1}^{n}\widehat{\mathbb{E}}\Big[\exp\Big(\frac{m\ln(1+n)}{n}\big(X_i-\widehat{\mathbb{E}}[X_i|\mathcal{F}_t]\big)\Big)\,\Big|\,\mathcal{F}_t\Big]\le\Big(1+\frac{\zeta e^{2cm}}{n}\Big)^{n}\le e^{\zeta e^{2cm}}<\infty,\quad q.s.,\ \text{for all}\ n>n_m.$$
The proof is completed. □
Lemma 8.
For the fixed t 0 , let { X i } i = 1 L G 1 ( Ω ) be a sequence of conditionally independent random variables given F t satisfying the following conditions:
1. 
sup i 1 E ^ | X i | 1 + α | F t < q.s. for some α > 0 .
2. 
E ^ [ X i | F t ] = μ i ¯ ( t ) q.s.
3. 
There exists an F t -measurable random variable μ ¯ ( t ) such that
lim n 1 n i = 1 n μ i ¯ ( t ) = μ ¯ ( t ) , q . s .
4. 
There exists a positive constant $c$ such that $|X_i-\overline{\mu}_i(t)|\le\frac{ci}{\ln(1+i)}$, q.s., for all $i=1,2,\dots$.
Set S n = i = 1 n X i . Then
V lim sup n S n n > μ ¯ ( t ) = 0 .
Proof. 
We notice that the sequence { X i } i = 1 satisfies assumptions of Lemma 7. To end this proof, we first show that for all ε > 0 and fixed t 0 ,
$$V\Big(\bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}\Big\{S_k-\sum_{i=1}^{k}\overline{\mu}_i(t)\ge k\varepsilon\Big\}\Big)=0.$$
For m > 1 / ε , by Lemma 7, we can derive that
sup n 1 E ^ exp m ln ( 1 + n ) n i = 1 n ( X i μ i ¯ ( t ) ) | F t < q . s .
Chebyshev’s inequality implies,
V S n i = 1 n μ i ¯ ( t ) n ε = V m ln ( 1 + n ) n i = 1 n ( X i μ i ¯ ( t ) ) ε m ln ( 1 + n ) exp ( ε m ln ( 1 + n ) ) E ^ exp ( m ln ( 1 + n ) n i = 1 n ( X i μ i ¯ ( t ) ) ) 1 ( 1 + n ) ε m E ^ sup n 1 E ^ exp m ln ( 1 + n ) n i = 1 n ( X i μ i ¯ ( t ) ) | F t .
Note that $\varepsilon m>1$; it follows from the convergence of $\sum_{n=1}^{\infty}\frac{1}{(1+n)^{\varepsilon m}}$ and (4) that
$$\sum_{n=1}^{\infty}V\Big(\sum_{i=1}^{n}\big(X_i-\overline{\mu}_i(t)\big)\ge n\varepsilon\Big)<\infty.$$
By Chen et al. [18] (Lemma 2.2), Equation (3) holds, and hence
$$V\Big(\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\big(X_i-\overline{\mu}_i(t)\big)\ge\varepsilon\Big)=0.$$
Therefore, by the lower continuity of $V(\cdot)$ and the arbitrariness of $\varepsilon$, we have
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n}>\overline{\mu}(t)\Big)=0.$$
This finishes the proof. □
The following theorem is one of the main results in this paper which gives one type of conditional SLLN under G-expectation for conditionally independent random variables.
Theorem 4.
For the fixed t 0 , let { X i } i = 1 L G 1 ( Ω ) be a sequence of conditionally independent random variables given F t satisfying the following conditions:
1. 
sup i 1 E ^ | X i | 1 + α | F t < q.s. for some α > 0 .
2. 
$\widehat{\mathbb{E}}[X_i|\mathcal{F}_t]=\overline{\mu}_i(t)$, $-\widehat{\mathbb{E}}[-X_i|\mathcal{F}_t]=\underline{\mu}_i(t)$, where $-\infty<\underline{\mu}_i(t)\le\overline{\mu}_i(t)<\infty$, q.s.
3. 
There exist two F t -measurable random variables μ ̲ ( t ) and μ ¯ ( t ) such that
lim n 1 n i = 1 n μ i ̲ ( t ) = μ ̲ ( t ) , lim n 1 n i = 1 n μ i ¯ ( t ) = μ ¯ ( t ) , q . s .
Set S n = i = 1 n X i . Then
$$V\Big(\Big\{\liminf_{n\to\infty}\frac{S_n}{n}<\underline{\mu}(t)\Big\}\cup\Big\{\limsup_{n\to\infty}\frac{S_n}{n}>\overline{\mu}(t)\Big\}\Big)=0,$$
and
$$v\Big(\underline{\mu}(t)\le\liminf_{n\to\infty}\frac{S_n}{n}\le\limsup_{n\to\infty}\frac{S_n}{n}\le\overline{\mu}(t)\Big)=1.$$
Proof. 
Note that $V$ and $v$ are conjugates, which implies the equivalence of (5) and (6). By the sub-additivity and monotonicity of $V$ and $v$, to finish this proof it suffices to show that
V lim sup n S n n > μ ¯ ( t ) = 0 ,
and
V lim inf n S n n < μ ̲ ( t ) = 0 .
Now we prove (7). For any fixed c > 0 , set
X i ¯ = ( X i μ i ¯ ( t ) ) 1 { | X i μ i ¯ ( t ) | c i ln ( 1 + i ) } E ^ ( X i μ i ¯ ( t ) ) 1 { | X i μ i ¯ ( t ) | c i ln ( 1 + i ) } | F t + μ i ¯ ( t ) .
It is obvious that
X i = X i ¯ + ( X i μ i ¯ ( t ) ) 1 { | X i μ i ¯ ( t ) | > c i ln ( 1 + i ) } + E ^ ( X i μ i ¯ ( t ) ) 1 { | X i μ i ¯ ( t ) | c i ln ( 1 + i ) } | F t .
For each i, E ^ [ X i | F t ] = E ^ [ X i ¯ | F t ] = μ i ¯ ( t ) , and
$$|\bar{X}_i-\overline{\mu}_i(t)|\le\frac{2ci}{\ln(1+i)},\quad q.s.$$
Meanwhile, for all $i\ge 1$,
$$|\bar{X}_i-\overline{\mu}_i(t)|\le|X_i-\overline{\mu}_i(t)|+\widehat{\mathbb{E}}\big[|X_i-\overline{\mu}_i(t)|\,\big|\,\mathcal{F}_t\big],\quad q.s.$$
Then it follows from Jensen’s inequality under the conditional version that
E ^ | X i ¯ μ i ¯ ( t ) | 1 + α | F t 2 1 + α E ^ | X i μ i ¯ ( t ) | 1 + α | F t + E ^ | X i μ i ¯ ( t ) | | F t 1 + α 2 2 + α E ^ | X i μ i ¯ ( t ) | 1 + α | F t < , q . s .
Hence, the sequence { X i ¯ } i = 1 fulfills assumptions of Lemma 8.
Set S ¯ n = i = 1 n X i ¯ , then by sublinearity of E ^ [ · | F t ] , we can derive that
S n n = S ¯ n n + 1 n i = 1 n ( X i μ i ¯ ( t ) ) 1 { | X i μ i ¯ ( t ) | > c i ln ( 1 + i ) } + 1 n i = 1 n E ^ ( X i μ i ¯ ( t ) ) 1 { | X i μ i ¯ ( t ) | c i ln ( 1 + i ) } | F t = S ¯ n n + 1 n i = 1 n ( X i μ i ¯ ( t ) ) 1 { | X i μ i ¯ ( t ) | > c i ln ( 1 + i ) } + 1 n i = 1 n E ^ ( X i μ i ¯ ( t ) ) + ( μ i ¯ ( t ) X i ) 1 { | X i μ i ¯ ( t ) | > c i ln ( 1 + i ) } | F t S ¯ n n + 1 n i = 1 n | X i μ i ¯ ( t ) | 1 { | X i μ i ¯ ( t ) | > c i ln ( 1 + i ) } + 1 n i = 1 n E ^ | X i μ i ¯ ( t ) | 1 { | X i μ i ¯ ( t ) | > c i ln ( 1 + i ) } | F t , q . s .
Applying Hölder’s inequality and Chebyshev’s inequality under the conditional version, we have
i = 1 1 i E ^ | X i μ i ¯ ( t ) | 1 { | X i μ i ¯ ( t ) | > c i ln ( 1 + i ) } | F t i = 1 1 i E ^ [ | X i μ i ¯ ( t ) | 1 + α | F t ] 1 1 + α E ^ [ 1 { | X i μ i ¯ ( t ) | > c i ln ( 1 + i ) } | F t ] α 1 + α i = 1 1 i E ^ [ | X i μ i ¯ ( t ) | 1 + α | F t ] 1 1 + α E ^ [ | X i μ i ¯ ( t ) | 1 + α | F t ] ( c i ln ( 1 + i ) ) 1 + α α 1 + α = i = 1 ( ln ( 1 + i ) ) α c α i 1 + α E ^ [ | X i μ i ¯ ( t ) | 1 + α | F t ] sup i 1 E ^ [ | X i μ i ¯ ( t ) | 1 + α | F t ] 1 c α i = 1 ( ln ( 1 + i ) ) α i 1 + α , q . s .
By Kronecker’s Lemma,
$$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\widehat{\mathbb{E}}\Big[|X_i-\overline{\mu}_i(t)|\,1_{\{|X_i-\overline{\mu}_i(t)|>\frac{ci}{\ln(1+i)}\}}\,\Big|\,\mathcal{F}_t\Big]=0,\quad q.s.$$
Our next goal is to show that
$$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}|X_i-\overline{\mu}_i(t)|\,1_{\{|X_i-\overline{\mu}_i(t)|>\frac{ci}{\ln(1+i)}\}}=0,\quad q.s.$$
By a similar application of Kronecker's Lemma, it suffices to prove that
$$\sum_{i=1}^\infty \frac{1}{i}\,|X_i-\bar{\mu}_i(t)|\mathbf{1}_{\{|X_i-\bar{\mu}_i(t)|> ci/\ln(1+i)\}}<\infty,\quad q.s.$$
Utilizing Chebyshev’s inequality,
$$V\Big(|X_i-\bar{\mu}_i(t)|> \frac{ci}{\ln(1+i)}\Big)\le \frac{\hat{\mathbb{E}}\big[\hat{\mathbb{E}}\big[|X_i-\bar{\mu}_i(t)|^{1+\alpha}\,\big|\,\mathcal{F}_t\big]\big]}{(ci/\ln(1+i))^{1+\alpha}},$$
and then,
$$\sum_{i=1}^\infty V\Big(|X_i-\bar{\mu}_i(t)|> \frac{ci}{\ln(1+i)}\Big)\le \hat{\mathbb{E}}\Big[\sup_{i\ge 1}\hat{\mathbb{E}}\big[|X_i-\bar{\mu}_i(t)|^{1+\alpha}\,\big|\,\mathcal{F}_t\big]\Big]\sum_{i=1}^\infty\Big(\frac{\ln(1+i)}{ci}\Big)^{1+\alpha}<\infty.$$
According to Chen et al. [18] (Lemma 2.2),
$$V\Big(\bigcap_{n=1}^\infty\bigcup_{i=n}^\infty\Big\{|X_i-\bar{\mu}_i(t)|> \frac{ci}{\ln(1+i)}\Big\}\Big)=0,$$
which means
$$V\Big(\bigcup_{n=1}^\infty\bigcap_{i=n}^\infty\Big\{|X_i-\bar{\mu}_i(t)|\le \frac{ci}{\ln(1+i)}\Big\}\Big)=1.$$
Namely, for every $\omega\in\bigcup_{n=1}^\infty\bigcap_{i=n}^\infty\{|X_i-\bar{\mu}_i(t)|\le ci/\ln(1+i)\}$, there exists $n(\omega)\in\mathbb{N}_+$ such that for all $i>n(\omega)$, $\omega\in\{|X_i-\bar{\mu}_i(t)|\le ci/\ln(1+i)\}$. It follows that
$$\sum_{i=1}^\infty \frac{1}{i}\,|X_i-\bar{\mu}_i(t)|\mathbf{1}_{\{|X_i-\bar{\mu}_i(t)|> ci/\ln(1+i)\}}=\Big(\sum_{i=1}^{n(\omega)}+\sum_{i=n(\omega)+1}^{\infty}\Big)\frac{1}{i}\,|X_i-\bar{\mu}_i(t)|\mathbf{1}_{\{|X_i-\bar{\mu}_i(t)|> ci/\ln(1+i)\}}<\infty,\quad q.s.$$
Consequently, (11) holds.
Taking the limit superior on both sides of inequality (9), due to (10) and (11), we have
$$\limsup_{n\to\infty}\frac{\bar{S}_n}{n}\ge \limsup_{n\to\infty}\frac{S_n}{n},\quad q.s.$$
Since the sequence $\{\bar{X}_i\}_{i=1}^\infty$ satisfies the assumptions of Lemma 8, we can derive that
$$V\Big(\limsup_{n\to\infty}\frac{\bar{S}_n}{n}>\bar{\mu}(t)\Big)=0,$$
and hence
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n}>\bar{\mu}(t)\Big)=0.$$
Similarly, considering the sequence $\{-X_i\}_{i=1}^\infty$, it follows from $\hat{\mathbb{E}}[-X_i\mid\mathcal{F}_t]=-\underline{\mu}_i(t)$ that
$$V\Big(\limsup_{n\to\infty}\Big(-\frac{S_n}{n}\Big)>-\underline{\mu}(t)\Big)=0,$$
which implies that
$$V\Big(\liminf_{n\to\infty}\frac{S_n}{n}<\underline{\mu}(t)\Big)=0.$$
Therefore, this proof is finished. □
Based on Theorem 4, the following conclusions can be immediately deduced.
Corollary 1.
Under the assumptions of Theorem 4, if $\bar{\mu}(t)=\underline{\mu}(t)$ for any $t\ge 0$, we have
$$v\Big(\lim_{n\to\infty}\frac{S_n}{n}=\bar{\mu}(t)\Big)=1.$$
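In the classical multi-prior reading of these results, each $X_i$ may be generated by any distribution whose mean lies in $[\underline{\mu}(t),\bar{\mu}(t)]$, and the empirical averages then cluster inside that interval. A toy simulation sketch; all distributions and parameters below are hypothetical choices for illustration, not part of the proof:

```python
import random

random.seed(0)
mu_lower, mu_upper = -1.0, 1.0   # hypothetical lower/upper conditional means
n = 100000

# "Nature" picks, at each step, some mean inside [mu_lower, mu_upper];
# alternating blocks mimic one admissible scenario under the G-expectation.
total = 0.0
for i in range(1, n + 1):
    mean_i = mu_upper if (i // 1000) % 2 == 0 else mu_lower
    total += random.gauss(mean_i, 1.0)

avg = total / n
# The empirical average stays (up to noise) inside [mu_lower, mu_upper].
print(avg)
```

Different adversarial choices of the per-step means move the cluster points of $S_n/n$ anywhere between the two bounds, which is the interval-type behavior the theorem describes.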
So far, we have presented the conditional SLLN in a G-expectation space when the sequence is conditionally independent. In what follows, we focus on the case where the sequence is $\mathcal{F}_t$-conditionally IID.
For an $\mathcal{F}_t$-conditionally IID sequence $\{X_i\}_{i=1}^\infty\subset L_G^1(\Omega)$ with $\bar{\mu}(t):=\hat{\mathbb{E}}[X_1\mid\mathcal{F}_t]$ and $\underline{\mu}(t):=-\hat{\mathbb{E}}[-X_1\mid\mathcal{F}_t]$ given $\mathcal{F}_t$ for fixed $t\ge 0$, define $Y$ as the class of $\mathcal{F}_t$-measurable random variables
$$Y\triangleq\{\xi\in L_G^1(\Omega_t):\ \underline{\mu}(t)\le\xi\le\bar{\mu}(t),\ q.s.,\ \text{for fixed } t\ge 0\}.$$
For each $\phi\in C_b^+(\mathbb{R})$ and all $\xi\in Y$, there exists $\xi^*\in Y$ such that
$$\phi(\xi^*)\ge\phi(\xi),\quad q.s.$$
Then we define $x^*\in\mathbb{R}$ to be the maximal point of $\phi$ over $Y$, which satisfies
$$\phi(x^*)=\max_{\omega\in\Omega}\phi(\xi^*(\omega)).$$
That is, for fixed $t\ge 0$, there exists $\omega^*\in\Omega$ such that $\phi(x^*)=\phi(\xi^*(\omega^*))$, and $\underline{\mu}(t)\le x^*\le\bar{\mu}(t)$, q.s.
Lemma 9.
Let $\{X_i\}_{i=1}^\infty\subset L_G^1(\Omega)$ be a sequence of $\mathcal{F}_t$-conditionally IID random variables with finite conditional means $\bar{\mu}(t)\triangleq\hat{\mathbb{E}}[X_1\mid\mathcal{F}_t]$ and $\underline{\mu}(t)\triangleq-\hat{\mathbb{E}}[-X_1\mid\mathcal{F}_t]$ for fixed $t\ge 0$. Set $S_n=\sum_{i=1}^n X_i$. Suppose that $\hat{\mathbb{E}}[|X_1|^{1+\alpha}\mid\mathcal{F}_t]<\infty$, q.s., for some $\alpha>0$. Let $x^*\in\mathbb{R}$ be the maximal point of $\phi$ over $Y$. Then, for any $\phi\in C_b^+(\mathbb{R})$ and $m\in\mathbb{N}$ with $m\le n-1$, we have
$$\hat{\mathbb{E}}\Big[\phi\Big(\frac{S_n}{n}\Big)\Big|\mathcal{F}_t\Big]-\phi(x^*)\ge n\operatorname*{ess\,inf}_{\xi_0\in L_G^1(\Omega_t)}\Big(\hat{\mathbb{E}}\Big[\phi\Big(\xi_0+\frac{X_{n-m}-x^*}{n}\Big)\Big|\mathcal{F}_t\Big]-\phi(\xi_0)\Big),\quad q.s.$$
Proof. 
Let $T_k\triangleq\frac{1}{n}\sum_{i=1}^k X_i$ with $T_0=0$, and $y\triangleq n^{-1}x^*$. Note that
$$\begin{aligned}\hat{\mathbb{E}}[\phi(T_n)\mid\mathcal{F}_t]-\phi(x^*)&=\hat{\mathbb{E}}[\phi(T_n)\mid\mathcal{F}_t]-\hat{\mathbb{E}}[\phi(T_{n-1}+y)\mid\mathcal{F}_t]+\hat{\mathbb{E}}[\phi(T_{n-1}+y)\mid\mathcal{F}_t]-\hat{\mathbb{E}}[\phi(T_{n-2}+2y)\mid\mathcal{F}_t]\\&\quad+\cdots+\hat{\mathbb{E}}[\phi(T_1+(n-1)y)\mid\mathcal{F}_t]-\hat{\mathbb{E}}[\phi(ny)\mid\mathcal{F}_t]\\&=\sum_{m=0}^{n-1}\big(\hat{\mathbb{E}}[\phi(T_{n-m}+my)\mid\mathcal{F}_t]-\hat{\mathbb{E}}[\phi(T_{n-m-1}+(m+1)y)\mid\mathcal{F}_t]\big),\quad q.s.\end{aligned}$$
Define $h(x,\omega)\triangleq\hat{\mathbb{E}}\big[\phi\big(x+\frac{X_{n-m}}{n}\big)\big|\mathcal{F}_t\big]$. Due to the conditional independence of the sequence $\{X_i\}_{i=1}^\infty$,
$$\hat{\mathbb{E}}[\phi(T_{n-m}+my)\mid\mathcal{F}_t]=\hat{\mathbb{E}}\Big[\hat{\mathbb{E}}\Big[\phi\Big(x+\frac{X_{n-m}}{n}\Big)\Big|\mathcal{F}_t\Big]_{x=T_{n-m-1}+my}\Big|\mathcal{F}_t\Big]=\hat{\mathbb{E}}[h(T_{n-m-1}+my)\mid\mathcal{F}_t],\quad q.s.$$
Then, it follows from the sublinearity of E ^ [ · | F t ] that
$$\begin{aligned}&\hat{\mathbb{E}}[\phi(T_{n-m}+my)\mid\mathcal{F}_t]-\hat{\mathbb{E}}[\phi(T_{n-m-1}+(m+1)y)\mid\mathcal{F}_t]\\&=\hat{\mathbb{E}}[h(T_{n-m-1}+my)\mid\mathcal{F}_t]-\hat{\mathbb{E}}[\phi(T_{n-m-1}+(m+1)y)\mid\mathcal{F}_t]\\&\ge-\hat{\mathbb{E}}\big[\phi(T_{n-m-1}+(m+1)y)-h(T_{n-m-1}+my)\,\big|\,\mathcal{F}_t\big]\\&\ge \operatorname*{ess\,inf}_{\xi_0\in L_G^1(\Omega_t)}\big(h(\xi_0)-\phi(\xi_0+y)\big)=\operatorname*{ess\,inf}_{\xi_0\in L_G^1(\Omega_t)}\Big(\hat{\mathbb{E}}\Big[\phi\Big(\xi_0+\frac{X_{n-m}}{n}\Big)\Big|\mathcal{F}_t\Big]-\phi\Big(\xi_0+\frac{x^*}{n}\Big)\Big)\\&=\operatorname*{ess\,inf}_{\xi_0\in L_G^1(\Omega_t)}\Big(\hat{\mathbb{E}}\Big[\phi\Big(\xi_0+\frac{X_{n-m}-x^*}{n}\Big)\Big|\mathcal{F}_t\Big]-\phi(\xi_0)\Big),\quad q.s.\end{aligned}$$
The proof is then completed by using the identical conditional distribution of $\{X_i\}_{i=1}^\infty$. □
Lemma 10.
Let $\{X_i\}_{i=1}^\infty\subset L_G^1(\Omega)$ be a sequence of $\mathcal{F}_t$-conditionally IID random variables with finite conditional means $\bar{\mu}(t)\triangleq\hat{\mathbb{E}}[X_1\mid\mathcal{F}_t]$ and $\underline{\mu}(t)\triangleq-\hat{\mathbb{E}}[-X_1\mid\mathcal{F}_t]$ for fixed $t\ge 0$. Set $S_n=\sum_{i=1}^n X_i$. Suppose that $\hat{\mathbb{E}}[|X_1|^{1+\alpha}\mid\mathcal{F}_t]<\infty$, q.s., for some $\alpha>0$. Let $x^*\in\mathbb{R}$ be the maximal point of $\phi$ over $Y$. Then, for any $\phi\in C_b^{2,+}(\mathbb{R})$, we have
$$\liminf_{n\to\infty} n\operatorname*{ess\,inf}_{\xi_0\in L_G^1(\Omega_t)}\Big(\hat{\mathbb{E}}\Big[\phi\Big(\xi_0+\frac{X_{n-m}-x^*}{n}\Big)\Big|\mathcal{F}_t\Big]-\phi(\xi_0)\Big)\ge 0,\quad q.s.$$
Proof. 
It follows from the Taylor expansion of $\phi$ that there exists a sequence of random variables $\{\theta_i\}_{i=1}^n$ valued in $[0,1]$ such that
$$\phi\Big(\xi_0+\frac{X_i-x^*}{n}\Big)-\phi(\xi_0)=\phi'(\xi_0)\frac{X_i-x^*}{n}+J_n(\xi_0,X_i,x^*),\quad q.s.,$$
where
$$J_n(\xi_0,X_i,x^*)=\Big(\phi'\Big(\xi_0+\theta_i\frac{X_i-x^*}{n}\Big)-\phi'(\xi_0)\Big)\frac{X_i-x^*}{n},\qquad 1\le i\le n.$$
Taking the conditional G-expectation $\hat{\mathbb{E}}[\cdot\mid\mathcal{F}_t]$ on both sides of (12), we can derive that
$$\hat{\mathbb{E}}\Big[\phi\Big(\xi_0+\frac{X_i-x^*}{n}\Big)\Big|\mathcal{F}_t\Big]-\phi(\xi_0)\ge\hat{\mathbb{E}}\Big[\phi'(\xi_0)\frac{X_i-x^*}{n}\Big|\mathcal{F}_t\Big]-\hat{\mathbb{E}}\big[|J_n(\xi_0,X_i,x^*)|\,\big|\,\mathcal{F}_t\big],\quad q.s.$$
By means of the fact that $x^*\in Y$ and $\xi_0\in L_G^1(\Omega_t)$,
$$\hat{\mathbb{E}}\Big[\phi'(\xi_0)\frac{X_i-x^*}{n}\Big|\mathcal{F}_t\Big]=(\phi'(\xi_0))^{+}\frac{\bar{\mu}(t)-x^*}{n}+(\phi'(\xi_0))^{-}\frac{x^*-\underline{\mu}(t)}{n}\ge 0,\quad q.s.$$
To end this proof, we only need to show that
$$\sum_{i=1}^n\operatorname*{ess\,sup}_{\xi_0\in L_G^1(\Omega_t)}\hat{\mathbb{E}}\big[|J_n(\xi_0,X_i,x^*)|\,\big|\,\mathcal{F}_t\big]\longrightarrow 0,\quad q.s.,\ \text{as } n\to\infty.$$
For any $\varepsilon>0$, it follows from Hölder's and Chebyshev's inequalities under the conditional version, together with the identical conditional distribution of the sequence $\{X_i\}_{i=1}^\infty$, that
$$\begin{aligned}&\sum_{i=1}^n\operatorname*{ess\,sup}_{\xi_0\in L_G^1(\Omega_t)}\hat{\mathbb{E}}\big[|J_n(\xi_0,X_i,x^*)|\,\big|\,\mathcal{F}_t\big]\\&\le\sum_{i=1}^n\Big(\operatorname*{ess\,sup}_{\xi_0\in L_G^1(\Omega_t)}\hat{\mathbb{E}}\big[|J_n(\xi_0,X_i,x^*)|\mathbf{1}_{\{|X_i-x^*|>n\varepsilon\}}\,\big|\,\mathcal{F}_t\big]+\operatorname*{ess\,sup}_{\xi_0\in L_G^1(\Omega_t)}\hat{\mathbb{E}}\big[|J_n(\xi_0,X_i,x^*)|\mathbf{1}_{\{|X_i-x^*|\le n\varepsilon\}}\,\big|\,\mathcal{F}_t\big]\Big)\\&\le\sum_{i=1}^n\Big(\hat{\mathbb{E}}\Big[\operatorname*{ess\,sup}_{\xi_0}\Big(\Big|\phi'\Big(\xi_0+\theta_i\frac{X_i-x^*}{n}\Big)\Big|+|\phi'(\xi_0)|\Big)\frac{|X_i-x^*|}{n}\mathbf{1}_{\{|X_i-x^*|>n\varepsilon\}}\Big|\mathcal{F}_t\Big]\\&\qquad\quad+\hat{\mathbb{E}}\Big[\operatorname*{ess\,sup}_{\xi_0}\Big|\phi''\Big(\xi_0+\hat{\theta}_i\theta_i\frac{X_i-x^*}{n}\Big)\Big|\Big(\frac{X_i-x^*}{n}\Big)^2\mathbf{1}_{\{|X_i-x^*|\le n\varepsilon\}}\Big|\mathcal{F}_t\Big]\Big)\\&\le n\Big(\frac{2\|\phi'\|}{n}\hat{\mathbb{E}}\big[|X_1-x^*|\mathbf{1}_{\{|X_1-x^*|>n\varepsilon\}}\,\big|\,\mathcal{F}_t\big]+\frac{\varepsilon\|\phi''\|}{n}\hat{\mathbb{E}}\big[|X_1-x^*|\,\big|\,\mathcal{F}_t\big]\Big)\\&\le 2\|\phi'\|\,\hat{\mathbb{E}}\big[|X_1-x^*|^{1+\alpha}\,\big|\,\mathcal{F}_t\big]^{\frac{1}{1+\alpha}}\,\hat{\mathbb{E}}\big[\mathbf{1}_{\{|X_1-x^*|>n\varepsilon\}}\,\big|\,\mathcal{F}_t\big]^{\frac{\alpha}{1+\alpha}}+\varepsilon\|\phi''\|\,\hat{\mathbb{E}}\big[|X_1-x^*|\,\big|\,\mathcal{F}_t\big]\\&\le 2\|\phi'\|\,\hat{\mathbb{E}}\big[|X_1-x^*|^{1+\alpha}\,\big|\,\mathcal{F}_t\big]^{\frac{1}{1+\alpha}}\left(\frac{\hat{\mathbb{E}}\big[|X_1-x^*|^{1+\alpha}\,\big|\,\mathcal{F}_t\big]}{(n\varepsilon)^{1+\alpha}}\right)^{\frac{\alpha}{1+\alpha}}+\varepsilon\|\phi''\|\,\hat{\mathbb{E}}\big[|X_1-x^*|\,\big|\,\mathcal{F}_t\big]\\&=\frac{2\|\phi'\|}{(n\varepsilon)^{\alpha}}\,\hat{\mathbb{E}}\big[|X_1-x^*|^{1+\alpha}\,\big|\,\mathcal{F}_t\big]+\varepsilon\|\phi''\|\,\hat{\mathbb{E}}\big[|X_1-x^*|\,\big|\,\mathcal{F}_t\big]\longrightarrow \varepsilon\|\phi''\|\,\hat{\mathbb{E}}\big[|X_1-x^*|\,\big|\,\mathcal{F}_t\big],\quad q.s.,\ \text{as } n\to\infty.\end{aligned}$$
Here, $\{\hat{\theta}_i\}_{i=1}^n$ take values in $[0,1]$. Then the proof is completed due to the arbitrariness of $\varepsilon$. □
Proposition 3.
Let $\{X_i\}_{i=1}^\infty\subset L_G^1(\Omega)$ be a sequence of $\mathcal{F}_t$-conditionally IID random variables with finite conditional means $\bar{\mu}(t)\triangleq\hat{\mathbb{E}}[X_1\mid\mathcal{F}_t]$ and $\underline{\mu}(t)\triangleq-\hat{\mathbb{E}}[-X_1\mid\mathcal{F}_t]$ for fixed $t\ge 0$. Set $S_n=\sum_{i=1}^n X_i$. Suppose that $\hat{\mathbb{E}}[|X_1|^{1+\alpha}\mid\mathcal{F}_t]<\infty$, q.s., for some $\alpha>0$. Then, for any $\phi\in C_b^+(\mathbb{R})$, we have
$$\liminf_{n\to\infty}\hat{\mathbb{E}}\Big[\phi\Big(\frac{S_n}{n}\Big)\Big|\mathcal{F}_t\Big]\ge\operatorname*{ess\,sup}_{\xi\in Y}\phi(\xi),\quad q.s.$$
Proof. 
Notice that for $\phi\in C_b^+(\mathbb{R})$, there exists $\hat{\phi}\in C_b^{2,+}(\mathbb{R})$ such that
$$\sup_{x\in\mathbb{R}}|\phi(x)-\hat{\phi}(x)|\le\varepsilon.$$
By Lemmas 9 and 10, we know that for any ϕ ^ C b 2 , + ( R ) ,
$$\liminf_{n\to\infty}\hat{\mathbb{E}}\Big[\hat{\phi}\Big(\frac{S_n}{n}\Big)\Big|\mathcal{F}_t\Big]-\operatorname*{ess\,sup}_{\xi\in Y}\hat{\phi}(\xi)\ge 0,\quad q.s.$$
Then we can derive that
$$\begin{aligned}&\liminf_{n\to\infty}\hat{\mathbb{E}}\Big[\phi\Big(\frac{S_n}{n}\Big)\Big|\mathcal{F}_t\Big]-\operatorname*{ess\,sup}_{\xi\in Y}\phi(\xi)\\&=\liminf_{n\to\infty}\hat{\mathbb{E}}\Big[\phi\Big(\frac{S_n}{n}\Big)+\hat{\phi}\Big(\frac{S_n}{n}\Big)-\hat{\phi}\Big(\frac{S_n}{n}\Big)\Big|\mathcal{F}_t\Big]-\operatorname*{ess\,sup}_{\xi\in Y}\big(\phi(\xi)+\hat{\phi}(\xi)-\hat{\phi}(\xi)\big)\\&\ge\liminf_{n\to\infty}\hat{\mathbb{E}}\Big[\hat{\phi}\Big(\frac{S_n}{n}\Big)\Big|\mathcal{F}_t\Big]-\operatorname*{ess\,sup}_{\xi\in Y}\hat{\phi}(\xi)-2\varepsilon\ge-2\varepsilon,\quad q.s.\end{aligned}$$
Then the result follows from the arbitrariness of $\varepsilon$, which finishes the proof. □
Now, we are ready to discuss the other main result which gives the conditional SLLN for F t -conditionally IID sequences.
Theorem 5.
Let $\{X_i\}_{i=1}^\infty\subset L_G^1(\Omega)$ be a sequence of $\mathcal{F}_t$-conditionally IID random variables with finite conditional means $\bar{\mu}(t)\triangleq\hat{\mathbb{E}}[X_1\mid\mathcal{F}_t]$ and $\underline{\mu}(t)\triangleq-\hat{\mathbb{E}}[-X_1\mid\mathcal{F}_t]$ for fixed $t\ge 0$. Set $S_n=\sum_{i=1}^n X_i$. Suppose that $\hat{\mathbb{E}}[|X_1|^{1+\alpha}\mid\mathcal{F}_t]<\infty$, q.s., for some $\alpha>0$. If the G-capacity $V$ is upper continuous, then
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n}=\bar{\mu}(t)\Big)=1,$$
$$V\Big(\liminf_{n\to\infty}\frac{S_n}{n}=\underline{\mu}(t)\Big)=1.$$
Remark 3.
It is possible for a sequence $\{X_i\}_{i=1}^\infty\subset L_G^1(\Omega)$ to be both $\mathcal{F}_t$-conditionally IID and $\mathcal{F}_{t'}$-conditionally IID with $t\ne t'$. However, this does not contradict Theorem 5, since $\bar{\mu}(t)=\bar{\mu}(t')$ q.s. in this case.
Proof of Theorem 5.
Let $\Omega'=\Omega\cap\{\bar{\mu}(t)=\underline{\mu}(t)\}$ and $\Omega''=\Omega\cap\{\bar{\mu}(t)>\underline{\mu}(t)\}$. Then $\Omega'\cap\Omega''=\varnothing$ and $\Omega'\cup\Omega''=\Omega$.
For all $\omega\in\Omega'$, $\{X_i\}_{i=1}^\infty$ is a classical $\mathcal{F}_t$-conditionally IID sequence, and it is straightforward to prove that
$$V\Big(\omega\in\Omega':\limsup_{n\to\infty}\frac{S_n}{n}=\bar{\mu}(t)\Big)=V(\Omega'),\qquad V\Big(\omega\in\Omega':\liminf_{n\to\infty}\frac{S_n}{n}=\underline{\mu}(t)\Big)=V(\Omega'),$$
which implies that
$$V\Big(\omega\in\Omega':\limsup_{n\to\infty}\frac{S_n}{n}\ne\bar{\mu}(t)\Big)=0,\qquad V\Big(\omega\in\Omega':\liminf_{n\to\infty}\frac{S_n}{n}\ne\underline{\mu}(t)\Big)=0.$$
For all $\omega\in\Omega''$, we first prove that
$$V\Big(\omega\in\Omega'':\limsup_{k\to\infty}\frac{S_{n_k}}{n_k}\ge\bar{\mu}(t)\Big)=V(\Omega'').$$
Due to the upper continuity of $V$, it suffices to show the existence of an increasing sequence $\{n_k\}\subset\mathbb{N}_+$ such that for all $0<\varepsilon<(\bar{\mu}(t)-\underline{\mu}(t))\mathbf{1}_{\Omega''}$,
$$V\Big(\omega\in\Omega'':\bigcap_{m=1}^\infty\bigcup_{k=m}^\infty\Big\{\frac{S_{n_k}}{n_k}\ge\bar{\mu}(t)-\varepsilon\Big\}\Big)=V(\Omega'').$$
Choose $n_k=k^k$ for $k\ge 1$. Let $\bar{S}_n\triangleq\sum_{i=1}^n(X_i-\bar{\mu}(t))$ and
$$\phi(x)=\begin{cases}1-e^{-(x+\varepsilon)}, & x\ge-\varepsilon,\\ 0, & x<-\varepsilon.\end{cases}$$
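This auxiliary function is continuous, nonnegative, bounded by $1$, and dominated by the indicator $\mathbf{1}_{\{x\ge-\varepsilon\}}$, which is exactly what the capacity estimate below requires. A quick numerical sanity check, with $\varepsilon$ chosen arbitrarily:

```python
import math

eps = 0.5  # arbitrary choice of epsilon for the check

def phi(x):
    # phi(x) = 1 - exp(-(x + eps)) for x >= -eps, and 0 otherwise
    return 1.0 - math.exp(-(x + eps)) if x >= -eps else 0.0

for k in range(-300, 301):
    x = k / 100.0
    assert 0.0 <= phi(x) <= 1.0                    # nonnegative and bounded by 1
    assert phi(x) <= (1.0 if x >= -eps else 0.0)   # dominated by the indicator

print(phi(0.0))  # equals 1 - e^{-eps} > 0, the positive lower bound used below
```

The value $\phi(0)=1-e^{-\varepsilon}>0$ is what makes the summed conditional capacities diverge in the Borel–Cantelli step.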
Notice that by Definition 15, $X_i-\bar{\mu}(t)$ is $\mathcal{F}_t$-conditionally independent of $\bar{\mu}(t)$ and $\underline{\mu}(t)$ for all $i\in\mathbb{N}_+$. Then we can derive from Lemma 5 that
$$\begin{aligned}&V\Big(\omega\in\Omega'':\frac{S_{n_k}-S_{n_{k-1}}}{n_k-n_{k-1}}\ge\bar{\mu}(t)-\varepsilon\,\Big|\,\mathcal{F}_t\Big)\\&=V\big(\omega\in\Omega'':S_{n_k-n_{k-1}}-(n_k-n_{k-1})\bar{\mu}(t)\ge-(n_k-n_{k-1})\varepsilon\,\big|\,\mathcal{F}_t\big)\\&=V\Big(\bar{\mu}(t)>\underline{\mu}(t),\ \frac{\bar{S}_{n_k-n_{k-1}}}{n_k-n_{k-1}}\ge-\varepsilon\,\Big|\,\mathcal{F}_t\Big)=V\Big(\frac{\bar{S}_{n_k-n_{k-1}}}{n_k-n_{k-1}}\ge-\varepsilon\,\Big|\,\mathcal{F}_t\Big)\,V(\Omega''\mid\mathcal{F}_t)\\&\ge\hat{\mathbb{E}}\Big[\Big(1-\exp\Big(-\Big(\frac{\bar{S}_{n_k-n_{k-1}}}{n_k-n_{k-1}}+\varepsilon\Big)\Big)\Big)\mathbf{1}_{\big\{\frac{\bar{S}_{n_k-n_{k-1}}}{n_k-n_{k-1}}\ge-\varepsilon\big\}}\,\Big|\,\mathcal{F}_t\Big]\,V(\Omega''\mid\mathcal{F}_t)\\&=\hat{\mathbb{E}}\Big[\phi\Big(\frac{\bar{S}_{n_k-n_{k-1}}}{n_k-n_{k-1}}\Big)\Big|\mathcal{F}_t\Big]\,V(\Omega''\mid\mathcal{F}_t),\quad q.s.\end{aligned}$$
Next, we consider the $\mathcal{F}_t$-conditionally IID sequence $\{X_i-\bar{\mu}(t)\}_{i=1}^\infty$ given $\mathcal{F}_t$. It is obvious that
$$\hat{\mathbb{E}}[X_1-\bar{\mu}(t)\mid\mathcal{F}_t]=0,\qquad -\hat{\mathbb{E}}[-(X_1-\bar{\mu}(t))\mid\mathcal{F}_t]=\underline{\mu}(t)-\bar{\mu}(t).$$
As $k\to\infty$, $n_k-n_{k-1}\to\infty$, and then it follows from Proposition 3 that
$$\liminf_{k\to\infty}\hat{\mathbb{E}}\Big[\phi\Big(\frac{\bar{S}_{n_k-n_{k-1}}}{n_k-n_{k-1}}\Big)\Big|\mathcal{F}_t\Big]\,V(\Omega''\mid\mathcal{F}_t)\ge\operatorname*{ess\,sup}_{\underline{\mu}(t)-\bar{\mu}(t)\le\xi\le 0}\phi(\xi)\,V(\Omega''\mid\mathcal{F}_t)=\phi(0)\,V(\Omega''\mid\mathcal{F}_t)=(1-e^{-\varepsilon})\,V(\Omega''\mid\mathcal{F}_t)>0,\quad q.s.,$$
which implies that
$$\sum_{k=1}^\infty V\Big(\omega\in\Omega'':\frac{S_{n_k}-S_{n_{k-1}}}{n_k-n_{k-1}}\ge\bar{\mu}(t)-\varepsilon\,\Big|\,\mathcal{F}_t\Big)\ge\sum_{k=1}^\infty\hat{\mathbb{E}}\Big[\phi\Big(\frac{\bar{S}_{n_k-n_{k-1}}}{n_k-n_{k-1}}\Big)\Big|\mathcal{F}_t\Big]\,V(\Omega''\mid\mathcal{F}_t)=\infty,\quad q.s.$$
Moreover, it follows from Chebyshev’s inequality that
$$V\big(|S_{n_k-n_{k-1}}-(n_k-n_{k-1})\bar{\mu}(t)|\ge a\big)=V\Big(\Big|\sum_{i=1}^{n_k-n_{k-1}}(X_i-\bar{\mu}(t))\Big|\ge a\Big)\le\frac{1}{a^{1+\alpha}}\,\hat{\mathbb{E}}\Big[\Big|\sum_{i=1}^{n_k-n_{k-1}}(X_i-\bar{\mu}(t))\Big|^{1+\alpha}\Big].$$
Taking $a=2(n_k-n_{k-1})$, by the fact that $\sup_{i\ge 1}\hat{\mathbb{E}}[|X_i|^{1+\alpha}\mid\mathcal{F}_t]<\infty$, q.s., we can derive that
$$V\big(|S_{n_k-n_{k-1}}-(n_k-n_{k-1})\bar{\mu}(t)|\ge a\big)\longrightarrow 0,\quad\text{as } k\to\infty.$$
Then, due to Lemma 5, the sequence $\{S_{n_k}-S_{n_{k-1}}\}=\{S_{n_k-n_{k-1}}\}$ is $\mathcal{F}_t$-conditionally IID under $V[\cdot\mid\mathcal{F}_t]$, and hence, by Lemma 6, we have
$$V\Big(\omega\in\Omega'':\limsup_{k\to\infty}\frac{S_{n_k}-S_{n_{k-1}}}{n_k-n_{k-1}}\ge\bar{\mu}(t)-\varepsilon\Big)=V(\Omega'').$$
On the other hand,
$$\frac{S_{n_k}}{n_k}\ge\frac{S_{n_k}-S_{n_{k-1}}}{n_k-n_{k-1}}\cdot\frac{n_k-n_{k-1}}{n_k}-\frac{|S_{n_{k-1}}|}{n_{k-1}}\cdot\frac{n_{k-1}}{n_k}.$$
Notice that
$$\frac{n_k-n_{k-1}}{n_k}\longrightarrow 1,\qquad \frac{n_{k-1}}{n_k}\longrightarrow 0,\quad\text{as } k\to\infty.$$
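The choice $n_k=k^k$ makes these limits immediate, since $n_{k-1}/n_k=\frac{1}{k}\big(\frac{k-1}{k}\big)^{k-1}\le\frac{1}{k}$; they are also easy to confirm numerically:

```python
# Verify numerically that for n_k = k^k the gap ratio tends to 1
# and the ratio of consecutive terms tends to 0.
gap_ratios = []
tail_ratios = []
for k in range(2, 16):
    n_k = k ** k
    n_prev = (k - 1) ** (k - 1)
    gap_ratios.append((n_k - n_prev) / n_k)  # (n_k - n_{k-1}) / n_k -> 1
    tail_ratios.append(n_prev / n_k)         # n_{k-1} / n_k <= 1/k -> 0

print(gap_ratios[-1], tail_ratios[-1])
```
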
Combining with the fact from Theorem 4 that
$$\limsup_{n\to\infty}\frac{S_n}{n}\le\bar{\mu}(t),\quad q.s.,\qquad\text{and}\qquad \liminf_{n\to\infty}\frac{S_n}{n}\ge\underline{\mu}(t),\quad q.s.,$$
we can derive that
$$\limsup_{n\to\infty}\frac{|S_n|}{n}\le\max\{|\bar{\mu}(t)|,|\underline{\mu}(t)|\},\quad q.s.$$
Hence,
$$\limsup_{k\to\infty}\frac{S_{n_k}}{n_k}\ge\limsup_{k\to\infty}\frac{S_{n_k}-S_{n_{k-1}}}{n_k-n_{k-1}}\cdot\frac{n_k-n_{k-1}}{n_k}-\limsup_{k\to\infty}\frac{|S_{n_{k-1}}|}{n_{k-1}}\cdot\frac{n_{k-1}}{n_k},\quad q.s.,$$
which implies that
$$V\Big(\omega\in\Omega'':\limsup_{k\to\infty}\frac{S_{n_k}}{n_k}\ge\bar{\mu}(t)-\varepsilon\Big)=V(\Omega'').$$
By the arbitrariness of $\varepsilon$ and the upper continuity of $V$, we have
$$V\Big(\omega\in\Omega'':\limsup_{k\to\infty}\frac{S_{n_k}}{n_k}\ge\bar{\mu}(t)\Big)=V(\Omega'').$$
According to Theorem 4, we know that
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n}>\bar{\mu}(t)\Big)=0,$$
and hence
$$\begin{aligned}V(\Omega'')&\ge V\Big(\omega\in\Omega'':\limsup_{n\to\infty}\frac{S_n}{n}=\bar{\mu}(t)\Big)\\&\ge V\Big(\omega\in\Omega'':\limsup_{n\to\infty}\frac{S_n}{n}\ge\bar{\mu}(t)\Big)-V\Big(\omega\in\Omega'':\limsup_{n\to\infty}\frac{S_n}{n}>\bar{\mu}(t)\Big)\\&=V\Big(\omega\in\Omega'':\limsup_{n\to\infty}\frac{S_n}{n}\ge\bar{\mu}(t)\Big)\ge V\Big(\omega\in\Omega'':\limsup_{k\to\infty}\frac{S_{n_k}}{n_k}\ge\bar{\mu}(t)\Big)=V(\Omega'').\end{aligned}$$
This implies that
$$V\Big(\omega\in\Omega'':\limsup_{n\to\infty}\frac{S_n}{n}\ne\bar{\mu}(t)\Big)=0.$$
Combining with (15) and (17), we have
$$0\le V\Big(\limsup_{n\to\infty}\frac{S_n}{n}\ne\bar{\mu}(t)\Big)\le V\Big(\omega\in\Omega':\limsup_{n\to\infty}\frac{S_n}{n}\ne\bar{\mu}(t)\Big)+V\Big(\omega\in\Omega'':\limsup_{n\to\infty}\frac{S_n}{n}\ne\bar{\mu}(t)\Big)=0,$$
and hence,
$$1\ge V\Big(\limsup_{n\to\infty}\frac{S_n}{n}=\bar{\mu}(t)\Big)=V\Big(\limsup_{n\to\infty}\frac{S_n}{n}=\bar{\mu}(t)\Big)+V\Big(\limsup_{n\to\infty}\frac{S_n}{n}\ne\bar{\mu}(t)\Big)\ge V(\Omega)=1.$$
Finally, we prove (14). Consider the sequence $\{-X_i\}_{i=1}^\infty$. By (13), we have
$$V\Big(\limsup_{n\to\infty}\Big(-\frac{S_n}{n}\Big)=\hat{\mathbb{E}}[-X_1\mid\mathcal{F}_t]\Big)=1,$$
which implies that
$$V\Big(\liminf_{n\to\infty}\frac{S_n}{n}=-\hat{\mathbb{E}}[-X_1\mid\mathcal{F}_t]=\underline{\mu}(t)\Big)=1.$$
Thus, this proof is completed. □
Remark 4.
Upon transitioning from a G-expectation framework to a classical probability space, the upper and lower conditional means become equal, namely, μ ¯ ( t ) = μ ̲ ( t ) . Consequently, the conditional SLLN under G-expectation that we present in Theorems 4 and 5 degenerates to its counterpart in the classical probabilistic sense which is derived by Majerek et al. [2].
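When $\bar{\mu}(t)=\underline{\mu}(t)=\mu$, the two cluster points coincide and the statement reduces to the classical SLLN. A minimal classical illustration; the distribution and parameters are chosen arbitrarily:

```python
import random

random.seed(1)
mu = 0.3       # single mean: upper and lower conditional means coincide
n = 200000
total = sum(random.gauss(mu, 1.0) for _ in range(n))

avg = total / n
print(avg)  # close to mu, the unique cluster point of the empirical averages
```
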

5. Conclusions

In this article, we introduce new concepts of conditional G-capacity and F t -conditionally IID random variables. Additionally, we establish the Borel–Cantelli Lemma under conditional G-capacity. Building upon these concepts, we derive two kinds of conditional SLLN applicable to sequences of conditionally independent random variables and sequences of conditionally IID random variables within the G-expectation space ( Ω , L G 1 ( Ω ) , E ^ ) , respectively. We aim to establish a foundation for future applications such as machine learning, reinforcement learning and stochastic filtering simulations within the G-expectation framework. The findings suggest that, within an uncertain evaluation framework and under the influence of partial information, the cluster points of sample means can still fall within the upper and lower conditional expectations of the samples. This bears substantial practical significance for investors in understanding and predicting market behavior such as inflation patterns, return patterns, and general market changes in financial markets where only partial information is available. However, the samples in the current study still need to meet specific technical requirements. For X in the broader context of ( Ω , H , E ˜ ) , the same conclusions cannot be derived.

Author Contributions

Conceptualization, J.Z., Y.T. and J.X.; methodology, J.Z., Y.T. and J.X.; validation, J.Z., Y.T. and J.X.; writing—original draft preparation, J.Z. and Y.T.; writing—review and editing, J.Z., Y.T. and J.X.; supervision, J.X.; project administration, J.X.; funding acquisition, J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2022YFA1006102, and the NSFC, grant number 11831010.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful to three referees whose suggestions have improved this article substantially.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Patterson, R.F.; Bozorgnia, A.; Taylor, R.L. Strong laws of large numbers for arrays of rowwise conditionally independent random elements. J. Appl. Math. Stoch. Anal. 1993, 6, 637938. [Google Scholar] [CrossRef]
  2. Majerek, D.; Nowak, W.; Zieba, W. Conditional strong law of large number. Int. J. Pure Appl. Math. 2005, 20, 143–156. [Google Scholar]
  3. Prakasa Rao, B.L. Conditional independence, conditional mixing and conditional association. Ann. Inst. Stat. Math. 2009, 61, 441–460. [Google Scholar] [CrossRef]
  4. Choquet, G. Theory of capacities. Annales de l’institut Fourier 1954, 5, 131–295. [Google Scholar] [CrossRef]
  5. Walley, P.; Fine, T.L. Towards a frequentist theory of upper and lower probability. Ann. Stat. 1982, 10, 741–761. [Google Scholar] [CrossRef]
  6. Schmeidler, D. Subjective probability and expected utility without additivity. Econom. J. Econom. Soc. 1989, 57, 571–587. [Google Scholar] [CrossRef]
  7. Wasserman, L.A.; Kadane, J.B. Bayes’ theorem for Choquet capacities. Ann. Stat. 1990, 18, 1328–1339. [Google Scholar] [CrossRef]
  8. Chen, Z.; Chen, T.; Davison, M. Choquet expectation and Peng’s g-expectation. Ann. Probab. 2005, 33, 1179–1199. [Google Scholar] [CrossRef]
  9. Peng, S. Backward SDE and related G-expectation. Pitman Res. Notes Math. 1997, 364, 141–160. [Google Scholar]
  10. Coquet, F.; Hu, Y.; Mémin, J.; Peng, S. Filtration-consistent nonlinear expectations and related g-expectations. Probab. Theory Relat. Fields 2002, 123, 1–27. [Google Scholar] [CrossRef]
  11. Gianin, E.R. Risk measures via g-expectations. Insur. Math. Econ. 2006, 39, 19–34. [Google Scholar] [CrossRef]
  12. Peng, S. G-expectation, G-Brownian motion and related stochastic calculus of Itô type. In Stochastic Analysis and Applications: The Abel Symposium 2005; Springer: Berlin/Heidelberg, Germany, 2007; pp. 541–567. [Google Scholar]
  13. Peng, S. Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stoch. Process. Their Appl. 2008, 118, 2223–2253. [Google Scholar] [CrossRef]
  14. Peng, S. Nonlinear Expectations and Stochastic Calculus under Uncertainty: With Robust CLT and G-Brownian Motion; Springer: Berlin, Germany, 2019; Volume 95. [Google Scholar]
  15. Marinacci, M. Limit laws for non-additive probabilities and their frequentist interpretation. J. Econ. Theory 1999, 84, 145–195. [Google Scholar] [CrossRef]
  16. Maccheroni, F.; Marinacci, M. A strong law of large numbers for capacities. Ann. Probab. 2005, 33, 1171–1178. [Google Scholar] [CrossRef]
  17. Peng, S. Law of large numbers and central limit theorem under nonlinear expectations. Probab. Uncertain. Quant. Risk 2019, 4, 1–8. [Google Scholar] [CrossRef]
  18. Chen, Z.; Wu, P.; Li, B. A strong law of large numbers for non-additive probabilities. Int. J. Approx. Reason. 2013, 54, 365–377. [Google Scholar] [CrossRef]
  19. Chen, Z. Strong laws of large numbers for sub-linear expectations. Sci. China Math. 2016, 59, 945–954. [Google Scholar] [CrossRef]
  20. Chen, Z.; Hu, C.; Zong, G. Strong laws of large numbers for sub-linear expectation without independence. Commun. Stat. Theory Methods 2017, 46, 7529–7545. [Google Scholar] [CrossRef]
  21. Hu, F.; Chen, Z.; Wu, P. A general strong law of large numbers for non-additive probabilities and its applications. Stat. A J. Theor. Appl. Stat. 2016, 50, 733–749. [Google Scholar] [CrossRef]
  22. Hu, M.; Peng, S. Extended conditional G-expectations and related stopping times. Probab. Uncertain. Quant. Risk 2021, 6, 369–390. [Google Scholar] [CrossRef]
  23. Denis, L.; Hu, M.; Peng, S. Function spaces and capacity related to a sublinear expectation: Application to G-Brownian motion paths. Potential Anal. 2011, 34, 139–161. [Google Scholar] [CrossRef]
  24. Hu, M.; Peng, S. On representation theorem of G-expectations and paths of G-Brownian motion. Acta Math. Appl. Sin. Engl. Ser. 2009, 25, 539–546. [Google Scholar] [CrossRef]
  25. Hu, M.; Wang, F.; Zheng, G. Quasi-continuous random variables and processes under the G-expectation framework. Stoch. Process. Their Appl. 2016, 126, 2367–2387. [Google Scholar] [CrossRef]
  26. Soner, H.M.; Touzi, N.; Zhang, J. Martingale representation theorem for the G-expectation. Stoch. Process. Their Appl. 2011, 121, 265–287. [Google Scholar] [CrossRef]