
Laws of Large Numbers for Uncertain Random Variables in the Framework of U-S Chance Theory

1. School of Mathematics and Statistics, Weifang University, Weifang 261061, China
2. School of Statistics and Data Science, Qufu Normal University, Qufu 273165, China
3. School of Management, Shandong University, Jinan 250100, China
4. School of Economics, Ocean University of China, Qingdao 266100, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(1), 62; https://doi.org/10.3390/sym17010062
Submission received: 26 November 2024 / Revised: 23 December 2024 / Accepted: 31 December 2024 / Published: 2 January 2025
(This article belongs to the Section Mathematics)

Abstract:
The paper introduces U-S chance spaces, a new framework based on uncertainty theory and sub-linear expectation theory, to depict human uncertainty and sub-linear features simultaneously. These spaces can be used to analyze the characteristics of uncertain random variables and to study investment and other related issues in incomplete financial markets. Within this framework, sub-linear expectation theory describes the randomness in financial behavior, while uncertainty theory describes the uncertainty associated with government macro-control or experts' opinions. The main achievement of this paper is the derivation of the Kolmogorov law of large numbers for uncertain random variables under U-S chance spaces. Examples are provided, and the theorems can be applied to uncertain random variables that are functions of random variables with symmetric or asymmetric distributions and uncertain variables with symmetric or asymmetric distributions. In some cases, when both the random and uncertain variables are symmetric, the limit in the law is characterized by a symmetric uncertain variable.

1. Introduction

In the 16th century, Cardano introduced a kind of limit theorem within the framework of classical probability theory, which subsequently gained recognition as the "Law of Large Numbers" (LLN; plural LLNs). Since then, the LLN has been investigated by numerous mathematicians, such as Bernoulli and Kolmogorov. After a long period of study and development, the LLN has matured into a nearly complete theoretical framework and is now frequently applied in practice. The additivity of probabilities and the linearity of expectations play significant roles in the proofs of such LLNs. However, because many events in both subjective and objective environments cannot be represented with additive probability or classical linear expectation, the additivity premise is not practical in numerous fields of application. Since Choquet proposed the concept of non-additive probability (capacity) in [1], interest in non-additive probability theory has grown steadily. Non-additive probabilities are used in objective settings, particularly in quantum mechanics. According to [2], despite the frequentist interpretation commonly applied to quantum phenomena, the probabilities describing them are typically non-additive due to the well-known wave-particle duality. Non-additive probabilities are also used in subjective settings, because additive ones make it hard to analyze decision-makers' confidence.
One very significant aspect of the real world is the fuzzy phenomenon. To model fuzzy phenomena, [3] introduced the idea of a fuzzy set, and [4] established possibility theory, which is connected to the theory of fuzzy sets. Subsequently, further research was carried out on the fuzzy measure, the Sugeno integral, and the Choquet integral (see, e.g., [5,6,7,8]). Furthermore, [6,7] divided fuzzy measures into eight categories that are closed under the operations of distortion functions: submeasure, supermeasure, submodular, supermodular, belief, plausibility, possibility, and necessity. Numerous surveys have indicated that human uncertainty does not behave in the same way as fuzziness. Before [9,10], the argument centered on the idea that the maximum of the measures of individual events is not always the same as the measure of the union of those events, a perspective that was eventually summarized in [11].
To circumvent this disadvantage, [9] created uncertainty theory, which [10] developed further. An uncertain measure fulfills the normality, duality, subadditivity, and product axioms. According to uncertainty theory, the degree of belief that an event will occur is indicated by the uncertain measure of the event. To represent quantities under uncertain situations, this theory offers the concept of an uncertain variable. [12] studied the strong and weak laws of large numbers for Bernoulli uncertain sequences. In complex systems, human uncertainty and random factors may coexist. To investigate such systems, [13,14] developed the notion of an uncertain random variable to describe quantities under uncertain and random situations, merged the concepts of probability and uncertain measures, and introduced the idea of a chance measure. These concepts, which are founded on probability theory and the updated uncertainty theory in [15], are mathematical descriptions of uncertain random events. In this chance space, LLNs have produced some excellent research findings. The first demonstration of an LLN under a chance measure was given by [16]. It shows that the mean of uncertain random variables, which are functions of random and uncertain variables, converges in distribution to an uncertain variable if the random variables are independent and identically distributed (IID for short) and the uncertain variables are IID and regular. LLNs under chance measures were subsequently developed by [17,18,19,20,21].
In the real world, apart from randomness, fuzziness, and human uncertainty, financial uncertainty is another form of indeterminacy. Inspired by risk measures, super-hedge pricing, and modeling uncertainty in finance, [22] first introduced the concepts of IID random variables, maximum distribution, and G-normal distribution in the framework of sub-linear expectations. Many financial uncertainties are difficult to adequately model using additive probability and traditional linear expectation. Nevertheless, a very adaptable framework for analyzing and explaining them is provided by sub-linear expectation theory. [22] has investigated weak convergences, including weak LLNs and central limit theorems under sub-linear expectation theory. Subsequently, [23,24] discovered three different types of strong LLNs for non-additive probabilities within the same context, resulting in the development of novel strategies for handling uncertain financial issues. A number of researchers have subsequently investigated strong LLNs in the framework of sub-linear expectations theory. For example, [25] made the strong LLNs in [23,24] hold in this case by providing the weaker requirement that random variables be independent, but not necessarily identically distributed. An expansion of the independence of random variables under sub-linear expectation known as negative dependence was first presented by [26]. Strong LLNs in [23,24] were also extended under the weakest moment conditions by [26]. Almost at the same time, the sub-linear expectations theory was also improved in [27,28]. The latest findings on LLNs based on this theory are presented in [29,30].
However, in the intricate and multifaceted real world, there exists a class of highly challenging systems. These systems are characterized not only by randomness with nonlinear features but also by the imprecision introduced by human subjective consciousness and linguistic expression. The stock market stands as a quintessential example of such complex systems, teeming with myriad random factors and human-induced uncertainties. Regrettably, previous theoretical frameworks have struggled to adequately model such systems. Inspired by the method presented in [31], we propose a novel theoretical framework of U-S chance space, aiming to provide a more precise portrayal of the complete picture of these complex systems. The core of the U-S chance space theoretical framework lies in the ingenious integration of sub-linear expectation theory and uncertainty theory, constructing a two-dimensional product space. Within this spatial framework, tools such as chance measures and chance distributions can comprehensively characterize the probabilistic properties and statistical regularities of uncertain random variables. These novel mathematical tools enable a more precise analysis of complex scenarios in financial environments, encompassing both random phenomena with sub-linear expectation characteristics and uncertainties stemming from human subjective judgments. For instance, when studying stock investments in incomplete financial markets, one can utilize sub-linear expectation theory to depict the incomplete nature of the market while leveraging uncertainty theory to address uncertainties arising from government macro-controls or expert opinions, which significantly impact stock returns and prices. Additionally, [32] gave four distinct kinds of expectations and four models of uncertain random programming under U-S chance spaces, and they were used for optimal investment in incomplete financial markets and system reliability design. 
[33] proposed risk aversion in incomplete markets affected by government regulation in the framework of U-S chance theory, presented expectations of risk and the risk premium, and proved Pratt's theorem under U-S chance spaces. Given the significant practical implications of the U-S chance space, the construction of the LLN under this space undoubtedly emerges as a highly valuable research direction. It is noteworthy that the LLNs for uncertain random variables presented in [20] are based on the assumption of linear probability and expectation. To transcend this linear assumption, under a series of reasonable hypotheses, we employ the results of the LLNs for random variables from [24] and derive the Kolmogorov type LLN for uncertain random variables that are functions of IID random variables and IID uncertain variables under U-S chance spaces in Section 3. Here, the assumption that the uncertain variables are regular is necessary. A number of examples are given and explained in order to better illustrate the LLN for uncertain random variables under U-S chance spaces. In certain specific scenarios, if the random and uncertain variables are both symmetric, then the limit in the Kolmogorov type LLN for uncertain random variables under U-S chance spaces also exhibits a symmetric form. The innovations of this paper are twofold. The first is theoretical: we propose the framework of the U-S chance space for the first time. The second is methodological: given the non-additivity of nonlinear expectations, traditional methods and techniques for studying linear expectations and probabilities are no longer applicable to the study of LLNs within the U-S chance space. We therefore construct appropriate inequalities and improve the proof methods and techniques of the LLNs presented in [20].
This is the structure for the remainder of the paper. Some novel ideas regarding U-S chance spaces are presented in Section 2. In Section 3, we obtain the Kolmogorov type LLN for uncertain random variables that are functions of IID random variables and IID uncertain variables under U-S chance spaces. After that, five examples are given, which can be seen as two applications of the LLN in Section 4, and a short conclusion is presented in Section 5. Appendix A contains some fundamental definitions pertaining to uncertainty theory. Finally, some results of sub-linear expectations that are used in this paper are given in Appendix B.

2. U-S Chance System and Related Concepts

To describe complex systems in which human uncertainty and randomness coexist, [13,14] proposed chance theory. In that theory, randomness is portrayed through classical probability theory. However, with the development of society, many random phenomena cannot be analyzed through additive probabilities and need to be modeled by non-additive probabilities. For complex systems in which human uncertainty coexists with randomness having sub-linear characteristics, chance theory cannot analyze the relevant problems well. Therefore, a new theoretical framework needs to be constructed for such systems. We combine uncertainty theory and sub-linear expectation theory to propose U-S chance theory. Chance measures under U-S chance spaces are the most basic concepts; they represent the degree to which an uncertain random event may occur under U-S chance spaces. An uncertain random variable is a measurable function from a U-S chance space to the set of real numbers; it indicates quantities subject to both uncertainty and randomness with sub-linear characteristics. Chance distributions can be used to characterize uncertain random variables under U-S chance spaces, and they are carriers of (incomplete) information about uncertain random variables. In some instances, it is more important to understand chance distributions than the uncertain random variables themselves. In this section, we develop a novel class of chance spaces called U-S chance spaces. Under these chance spaces, definitions of chance measures, uncertain random variables, and chance distributions are proposed.
Definition 1.
Suppose that $(\Gamma, \mathcal{L}, M)$ is an uncertainty space, $(\Omega, \mathcal{F})$ is a measurable space, and $(\Omega, \mathcal{H}, \mathbb{E})$ is a sub-linear expectation space. Let $V, v$ be two non-additive probabilities generated by the sub-linear expectation $\mathbb{E}$. A pair of chance spaces generated by the uncertainty space and the sub-linear expectation space (abbreviated U-S chance spaces) takes the following forms:
$$(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{F}, V) = (\Gamma \times \Omega, \mathcal{L} \times \mathcal{F}, M \times V),$$
and
$$(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{F}, v) = (\Gamma \times \Omega, \mathcal{L} \times \mathcal{F}, M \times v),$$
where $\Gamma \times \Omega$ is the universal set, $\mathcal{L} \times \mathcal{F}$ is the product $\sigma$-algebra, and $M \times V$ and $M \times v$ are two product measures. $\Theta$ is called an uncertain random event in U-S chance spaces if $\Theta \in \mathcal{L} \times \mathcal{F}$. The chance measures $\overline{ch}$ and $CH$ of $\Theta$ are defined by
$$\overline{ch}\{\Theta\} := \int_0^1 v\{\omega \in \Omega \mid M\{\gamma \in \Gamma \mid (\gamma, \omega) \in \Theta\} \geq r\}\, dr,$$
and
$$CH\{\Theta\} := \int_0^1 V\{\omega \in \Omega \mid M\{\gamma \in \Gamma \mid (\gamma, \omega) \in \Theta\} \geq r\}\, dr,$$
respectively.
The triple $(\Omega, \mathcal{H}, \mathbb{E})$ and the non-additive probabilities $v$ and $V$ are in the framework of sub-linear expectation theory presented by Peng [27,28] and Chen [23,24]. In Appendix B, we provide some fundamental definitions and properties regarding sub-linear expectation theory, including the definitions of the sub-linear expectation $\mathbb{E}$, the non-additive probabilities $v$ and $V$, IID random variables, the maximal distribution and the G-normal distribution, and some of their properties.
Remark 1.
It is evident that $\Gamma \times \Omega$ is the universal set containing all ordered pairs $(\gamma, \omega)$ with $\gamma \in \Gamma$ and $\omega \in \Omega$, i.e.,
$$\Gamma \times \Omega = \{(\gamma, \omega) \mid \gamma \in \Gamma, \omega \in \Omega\}.$$
The product $\sigma$-algebra $\mathcal{L} \times \mathcal{F}$ is the smallest $\sigma$-algebra containing the measurable rectangles $\Lambda \times B$, where $\Lambda \in \mathcal{L}$ and $B \in \mathcal{F}$. Under U-S chance spaces, an element is considered an event if it belongs to $\mathcal{L} \times \mathcal{F}$.
Next, following an approach similar to Liu [15] (pp. 409–410), we discuss the product measures $M \times V$ and $M \times v$. Assume that $\Theta$ is an event in $\mathcal{L} \times \mathcal{F}$. For every $\omega \in \Omega$, the set
$$\Theta_\omega = \{\gamma \in \Gamma \mid (\gamma, \omega) \in \Theta\}$$
is an event in $\mathcal{L}$. Hence, for any $\omega \in \Omega$, the uncertain measure $M\{\Theta_\omega\}$ exists. Unfortunately, however, there is no guarantee that $M\{\Theta_\omega\}$ is a measurable function of $\omega$. As a result, for any real number $x$, the set
$$\Theta_x^* = \{\omega \in \Omega \mid M\{\Theta_\omega\} \geq x\}$$
is a subset of $\Omega$ but may fail to be an event in $\mathcal{F}$. Therefore, the existence of the upper probability measure $V\{\Theta_x^*\}$ and the lower probability measure $v\{\Theta_x^*\}$ is not ensured. In this case, let
$$V\{\Theta_x^*\} = \begin{cases} \inf_{B \in \mathcal{F},\, B \supseteq \Theta_x^*} V\{B\}, & \text{if } \inf_{B \in \mathcal{F},\, B \supseteq \Theta_x^*} V\{B\} < 0.5, \\ \sup_{B \in \mathcal{F},\, B \subseteq \Theta_x^*} V\{B\}, & \text{if } \sup_{B \in \mathcal{F},\, B \subseteq \Theta_x^*} V\{B\} > 0.5, \\ 0.5, & \text{otherwise}, \end{cases}$$
and
$$v\{\Theta_x^*\} = \begin{cases} \inf_{B \in \mathcal{F},\, B \supseteq \Theta_x^*} v\{B\}, & \text{if } \inf_{B \in \mathcal{F},\, B \supseteq \Theta_x^*} v\{B\} < 0.5, \\ \sup_{B \in \mathcal{F},\, B \subseteq \Theta_x^*} v\{B\}, & \text{if } \sup_{B \in \mathcal{F},\, B \subseteq \Theta_x^*} v\{B\} > 0.5, \\ 0.5, & \text{otherwise}, \end{cases}$$
based on the principle of maximum uncertainty. For each real number $x$, this guarantees the existence of the upper probability measure $V\{\Theta_x^*\}$ and the lower probability measure $v\{\Theta_x^*\}$. It is now possible to define $M \times V$ and $M \times v$ of $\Theta$ as the expected values of $M\{\Theta_\omega\}$ with respect to $\omega \in \Omega$, i.e., $\int_0^1 V\{\Theta_r^*\}\,dr$ and $\int_0^1 v\{\Theta_r^*\}\,dr$. Thus, the chance measures $CH$ and $\overline{ch}$ are well defined.
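The selection rule above can be sketched as a small helper function. This is an illustration only, not part of the formal development: `inf_outer` and `sup_inner` stand for the inf over measurable events covering $\Theta_x^*$ and the sup over measurable events contained in it, respectively.

```python
def measure_with_fallback(inf_outer, sup_inner):
    """Toy sketch of the maximum-uncertainty fallback in Remark 1.

    inf_outer: inf of the measure over measurable sets containing Theta_x^*
    sup_inner: sup of the measure over measurable sets contained in Theta_x^*
    The rule keeps an approximation only when it pins the value to one side
    of 0.5; otherwise it returns the maximally uncertain value 0.5.
    """
    if inf_outer < 0.5:
        return inf_outer   # even the outer bound is below 0.5
    if sup_inner > 0.5:
        return sup_inner   # even the inner bound is above 0.5
    return 0.5             # bounds straddle 0.5: principle of maximum uncertainty

print(measure_with_fallback(0.3, 0.1))   # 0.3
print(measure_with_fallback(0.9, 0.7))   # 0.7
print(measure_with_fallback(0.8, 0.2))   # 0.5
```

Note that since the inner approximation never exceeds the outer one, the two non-trivial branches cannot both apply.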
Proposition 1.
Suppose that ( Γ , L , M ) × ( Ω , F , V ) and ( Γ , L , M ) × ( Ω , F , v ) are U-S chance spaces. Four properties of the chance measures c h ¯ and C H can be listed as following:
(i)
c h ¯ { Λ × A } = M Λ × v A ; C H { Λ × A } = M Λ × V A for Λ L and A F . In particular, c h ¯ { } = 0 , c h ¯ { Γ × Ω } = 1 ; C H { } = 0 , C H { Γ × Ω } = 1 .
(ii)
C H { Θ } + c h ¯ { Θ c } = 1 , Θ L × F .
(iii)
For events Θ 1 , Θ 2 L × F , if Θ 1 Θ 2 , then c h ¯ { Θ 1 } c h ¯ { Θ 2 } and C H { Θ 1 } C H { Θ 2 } .
(iv)
c h ¯ { Θ } C H { Θ } , Θ L × F .
Proof. 
(i) For every ω A , it is clear that γ Γ | ( γ , ω ) Λ × A = Λ , and for every ω A c , it is easy to find that γ Γ | ( γ , ω ) Λ × A = . Then M γ Γ | ( γ , ω ) Λ × A = M { Λ } I A ( ω ) , for all ω Ω . For ∀r, suppose M { Λ } r , then we have
v ω Ω M { γ Γ ( γ , ω ) Λ × A } r = v { A } .
Suppose M { Λ } < r , then we have
v ω Ω M { γ Γ ( γ , ω ) Λ × A } r = v { } = 0 .
Thus, it is concluded that
c h ¯ { Λ × A } = 0 1 v { ω Ω M { γ Γ ( γ , ω ) Λ × A } r } d r = 0 M { Λ } v { A } d r + M { Λ } 1 0 d r = M Λ × v A .
Moreover, it is easy to obtain that c h ¯ { } = M × v = 0 , c h ¯ { Γ × Ω } = M Γ × v Ω = 1 . Using a similar method as above, we have C H { Λ × A } = M Λ × V A , C H { } = 0 , C H { Γ × Ω } = 1 .
(ii) Taking Θ L × F , since V { A } + v { A c } = 1 , A F , it follows from the duality of M that
C H { Θ } = 0 1 V { ω Ω M { γ Γ ( γ , ω ) Θ } r } d r = 0 1 V { ω Ω M { γ Γ ( γ , ω ) Θ c } 1 r } d r = 0 1 1 v { ω Ω M { γ Γ ( γ , ω ) Θ c } > 1 r } d r = 1 0 1 v { ω Ω M { γ Γ ( γ , ω ) Θ c } > r } d r = 1 0 1 v { ω Ω M { γ Γ ( γ , ω ) Θ c } > r } d r = 1 c h ¯ { Θ c } .
So, we obtain C H { Θ } + c h ¯ { Θ c } = 1 .
(iii) Suppose that Θ 1 Θ 2 , for every ω Ω , we can conclude that { γ Γ ( γ , ω ) Θ 1 } { γ Γ ( γ , ω ) Θ 2 } . It is obvious from the monotonicity of M that
M { γ Γ ( γ , ω ) Θ 1 } M { γ Γ ( γ , ω ) Θ 2 } .
From the fact that
{ ω Ω M { γ Γ ( γ , ω ) Θ 1 } r }
{ ω Ω M { γ Γ ( γ , ω ) Θ 2 } r }
and the monotonicity of v, it follows that
0 1 v { ω Ω M { γ Γ ( γ , ω ) Θ 1 } r } d r
0 1 v { ω Ω M { γ Γ ( γ , ω ) Θ 2 } r } d r .
Thus, c h ¯ { Θ 1 } c h ¯ { Θ 2 } .
Similar to the method above, it is easy to obtain that C H { Θ 1 } C H { Θ 2 } from the monotonicity of V .
(iv) By the fact that v { A } V { A } , A F , for ∀ Θ L × F , 0 r 1 , it can be obtained that
v { ω Ω M { γ Γ ( γ , ω ) Θ } r }
V { ω Ω M { γ Γ ( γ , ω ) Θ } r } .
Thus, from (1) and (2), the conclusion c h ¯ { Θ } C H { Θ } can be obtained. □
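Property (i) can be checked numerically for a rectangle event $\Lambda \times A$: the integrand in the defining integral equals $v\{A\}$ for $r \leq M\{\Lambda\}$ and $0$ afterwards, so the integral collapses to the product. The toy values below are hypothetical, not derived from any particular space.

```python
# Numerical sketch of Proposition 1 (i): for a rectangle event Theta = Lambda x A,
# the chance measure reduces to the product M{Lambda} * v{A}.

M_Lambda = 0.6   # uncertain measure of Lambda (assumed toy value)
v_A = 0.3        # lower probability of A (assumed toy value)

def integrand(r):
    # v{omega : M{gamma : (gamma, omega) in Lambda x A} >= r}
    # equals v{A} when r <= M{Lambda}, and v{emptyset} = 0 otherwise.
    return v_A if r <= M_Lambda else 0.0

# midpoint-rule approximation of the integral over [0, 1]
N = 100_000
ch_bar = sum(integrand((k + 0.5) / N) for k in range(N)) / N

print(round(ch_bar, 6))   # 0.18 = M{Lambda} * v{A}
```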
Remark 2.
In general, neither $\overline{ch}$ nor $CH$ is subadditive, because the subadditivity of the upper probability $V$ does not imply the subadditivity of $\overline{ch}$ and $CH$.
Definition 2.
If a function $\xi: \Gamma \times \Omega \to \mathbb{R}$ is measurable, then it is called an uncertain random variable in U-S chance spaces; i.e., for each $B \in \mathcal{B}(\mathbb{R})$,
$$\{\xi \in B\} = \{(\gamma, \omega) \mid \xi(\gamma, \omega) \in B\} \in \mathcal{L} \times \mathcal{F}.$$
The pair of chance distributions $\Xi_1$ and $\Xi_2$ of $\xi$ given by
$$\Xi_1(x) = \overline{ch}\{\xi \leq x\}, \quad \Xi_2(x) = CH\{\xi \leq x\}, \quad x \in \mathbb{R},$$
are called the lower distribution and the upper distribution, respectively.
Here and in the following sections, uncertain random variables are understood to be defined on the U-S chance spaces $(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{F}, V)$ and $(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{F}, v)$.

3. LLN for Uncertain Random Variables Under U-S Chance Spaces

In this section, the Kolmogorov type LLN for uncertain random variables under U-S chance spaces is given. We first require a few fundamental notations and assumptions before we provide the LLN.
Let $\{\eta_i\}_{i=1}^\infty$ be a sequence of random variables and $\{\tau_i\}_{i=1}^\infty$ a sequence of uncertain variables. Denote by $C_I$ the class of functions $f: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ such that $f(x, y)$ is strictly increasing in $y$ for every $x \in \mathbb{R}$, $f(x, \cdot) \in C_{l,Lip}(\mathbb{R})$ for every $x \in \mathbb{R}$, and $f(\cdot, y)$ is continuous for every $y \in \mathbb{R}$. Similarly, denote by $C_D$ the class of functions $f: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ such that $f(x, y)$ is strictly decreasing in $y$ for every $x \in \mathbb{R}$, $f(x, \cdot) \in C_{l,Lip}(\mathbb{R})$ for every $x \in \mathbb{R}$, and $f(\cdot, y)$ is continuous for every $y \in \mathbb{R}$. Furthermore, write $\overline{C}_I$ for the class of strictly increasing continuous functions $g: \mathbb{R} \to \mathbb{R}$, and $\overline{C}_D$ for the class of strictly decreasing continuous functions $g: \mathbb{R} \to \mathbb{R}$.
Here, $C_{l,Lip}(\mathbb{R})$ denotes the linear space of (locally Lipschitz) functions $f$ satisfying
$$|f(y_1) - f(y_2)| \leq C(1 + |y_1|^m + |y_2|^m)|y_1 - y_2|, \quad y_1, y_2 \in \mathbb{R},$$
for some $C > 0$ and $m \in \mathbb{N}$ depending on $f$.
Let
$$S_n = \sum_{i=1}^n f(\eta_i, \tau_i), \quad n \in \mathbb{N}, \ f \in C_I \cup C_D,$$
$$S_n(y) = \sum_{i=1}^n f(\eta_i, y), \quad y \in \mathbb{R}, \ n \in \mathbb{N}, \ f \in C_I \cup C_D,$$
$$\overline{S}_n = \sum_{i=1}^n g(\tau_i), \quad n \in \mathbb{N}, \ g \in \overline{C}_I \cup \overline{C}_D,$$
$$F(y) := \mathbb{E}[f(\eta_1, y)], \quad f \in C_I \cup C_D, \ y \in \mathbb{R},$$
and
$$\overline{F}(y) := -\mathbb{E}[-f(\eta_1, y)], \quad f \in C_I \cup C_D, \ y \in \mathbb{R}.$$
Remark 3.
Let ( Ω , H , E ) be a sub-linear expectation space. Suppose that η 1 is a random variable within ( Ω , H , E ) . For y R , if f C I and E [ | f ( η 1 , y ) | ] < , then it is obtained that each of F ( y ) and F ¯ ( y ) is continuous and increasing concerning y. In a similar vein, for y R , if f C D and E [ | f ( η 1 , y ) | ] < , then it can be obtained that each of F ( y ) and F ¯ ( y ) is continuous and decreasing concerning y.
Proof. 
We only need to demonstrate that $F(y)$ is continuous and monotonic (though not strictly monotonic). Since $\overline{F}(y) = -\mathbb{E}[-f(\eta_1, y)]$, the continuity and monotonicity of $\overline{F}(y)$ then follow in the same way.
We first prove the continuity of $F(y)$. For the random variable $\eta_1$ and each $y \in \mathbb{R}$, we have $f(\eta_1, \cdot) \in C_{l,Lip}(\mathbb{R})$. So there exist constants $C > 0$ and $m \in \mathbb{N}$ such that, for any given $y_1 \in \mathbb{R}$,
$$|f(\eta_1, y) - f(\eta_1, y_1)| \leq C(1 + |y|^m + |y_1|^m)|y - y_1|.$$
Combining Proposition A1 (iii) in Appendix B, and letting y y 1 , we have
$$|F(y) - F(y_1)| = |\mathbb{E}[f(\eta_1, y)] - \mathbb{E}[f(\eta_1, y_1)]| \leq \mathbb{E}[|f(\eta_1, y) - f(\eta_1, y_1)|] \leq \mathbb{E}[C(1 + |y|^m + |y_1|^m)|y - y_1|] = C(1 + |y|^m + |y_1|^m)|y - y_1| \to 0.$$
Thus, for any given y 1 R , lim y y 1 F ( y ) = F ( y 1 ) .
Next, we prove the monotonicity of $F(y)$. Without loss of generality, we only consider the case $f \in C_I$.
Suppose $f(x, y) \in C_I$; then $f(x, y)$ is strictly increasing in $y$ for every $x \in \mathbb{R}$. For any $y_1, y_2 \in \mathbb{R}$ with $y_1 < y_2$, we have $f(\eta_1, y_1) < f(\eta_1, y_2)$. By the monotonicity of $\mathbb{E}$, it follows that $\mathbb{E}[f(\eta_1, y_1)] \leq \mathbb{E}[f(\eta_1, y_2)]$, and hence $F(y_1) \leq F(y_2)$. Note that $F(y)$ need not be strictly increasing, since $\mathbb{E}$ is not strictly monotonic. □
Our primary result is given in the following theorem. We establish the Kolmogorov type LLN for uncertain random variables that are functions of IID random variables under $\mathbb{E}$ and IID uncertain variables. The assumption that the uncertain variables are regular is necessary.
Theorem 1
(The Kolmogorov type LLN for uncertain random variables under U-S chance spaces). Consider a pair of U-S chance spaces $(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{F}, V)$ and $(\Gamma, \mathcal{L}, M) \times (\Omega, \mathcal{F}, v)$, and let $\overline{ch}$ and $CH$ be the corresponding chance measures. Let $\{\eta_i\}_{i=1}^\infty$ be a sequence of IID random variables under $\mathbb{E}$ (see Definition A9 in Appendix B), let $\{\tau_i\}_{i=1}^\infty$ be a sequence of IID uncertain variables, and set $F(\tau_1) = \mathbb{E}[f(\eta_1, y)]|_{y=\tau_1}$ and $\overline{F}(\tau_1) = -\mathbb{E}[-f(\eta_1, y)]|_{y=\tau_1}$. Suppose that $F(\tau_1)$ and $\overline{F}(\tau_1)$ are regular uncertain variables, $f \in C_I \cup C_D$, and $\mathbb{E}|f(\eta_1, y)|^{1+\alpha} < \infty$ for $y \in \mathbb{R}$ and some $\alpha \in (0, 1]$. Then, for $z \in (\inf\{F(y) \mid y \in \mathbb{R}\}, \sup\{\overline{F}(y) \mid y \in \mathbb{R}\})$,
$$M\{F(\tau_1) \leq z\} \leq \liminf_{n \to \infty} \overline{ch}\left\{\frac{S_n}{n} \leq z\right\} \leq \limsup_{n \to \infty} CH\left\{\frac{S_n}{n} \leq z\right\} \leq M\{\overline{F}(\tau_1) \leq z\}. \tag{3}$$
In particular, if $F(\tau_1) = \overline{F}(\tau_1)$, then
$$\lim_{n \to \infty} \overline{ch}\left\{\frac{S_n}{n} \leq z\right\} = \lim_{n \to \infty} CH\left\{\frac{S_n}{n} \leq z\right\} = M\{F(\tau_1) \leq z\}. \tag{4}$$
Proof. 
For simplification, the uncertainty distributions of F ( τ 1 ) and F ¯ ( τ 1 ) are denoted by Ψ and Ψ ¯ , respectively. Firstly, we can prove (3) by taking into account the situations f C I and f C D , respectively.
Case 1: Let $f \in C_I$. Fix $z \in (\inf\{F(y) \mid y \in \mathbb{R}\}, \sup\{F(y) \mid y \in \mathbb{R}\})$ and choose $\varepsilon > 0$ such that $z - \varepsilon \in (\inf\{F(y) \mid y \in \mathbb{R}\}, \sup\{F(y) \mid y \in \mathbb{R}\})$. For all $u \in (\inf\{F(y) \mid y \in \mathbb{R}\}, \sup\{F(y) \mid y \in \mathbb{R}\})$, denote $y_0(u) = \max\{y \mid F(y) = u\}$. It follows from Theorem A4 (a) in Appendix B that there exists $N_1 \in \mathbb{N}$ such that for any $n \geq N_1$,
$$v\left\{\sup_{k \geq n} \frac{S_k(y_0(z-\varepsilon))}{k} \leq z\right\} \geq 1 - \varepsilon. \tag{5}$$
If n N 1 ,
c h ¯ S n n z = 0 1 v M S n n z r d r 0 1 v { sup k n S k ( y 0 ( z ε ) ) k z M S n n z r } d r 0 1 v { sup k n S k ( y 0 ( z ε ) ) k z M S n n sup k n S k ( y 0 ( z ε ) ) k r } d r 0 1 v { sup k n S k ( y 0 ( z ε ) ) k z M S n n S n ( y 0 ( z ε ) ) n r } d r = 0 1 v { sup k n S k ( y 0 ( z ε ) ) k z M i = 1 n f ( η i , τ i ) i = 1 n f ( η i , y 0 ( z ε ) ) r } d r 0 1 v { sup k n S k ( y 0 ( z ε ) ) k z M i = 1 n f ( η i , τ i ) f ( η i , y 0 ( z ε ) ) r } d r .
Notice that { τ i } i = 1 is an IID uncertain sequence and f ( x , y ) is a strictly increasing function about y for every x. By (5) and y 0 ( z ε ) = max { y | F ( y ) = z ε } , we obtain
c h ¯ S n n z 0 1 v { sup k n S k ( y 0 ( z ε ) ) k z M i = 1 n τ i y 0 ( z ε ) r } d r = 0 1 v { sup k n S k ( y 0 ( z ε ) ) k z M τ 1 y 0 ( z ε ) r } d r = 0 1 v { sup k n S k ( y 0 ( z ε ) ) k z M F ( τ 1 ) z ε r } d r ( 1 ε ) Ψ ( z ε ) .
Because F ( τ 1 ) is regular, its uncertainty distribution Ψ is continuous. Based on the arguments mentioned above and taking ε 0 , it follows
$$\liminf_{n \to \infty} \overline{ch}\left\{\frac{S_n}{n} \leq z\right\} \geq \Psi(z) = M\{F(\tau_1) \leq z\}. \tag{6}$$
On the other hand, given $z \in (\inf\{\overline{F}(y) \mid y \in \mathbb{R}\}, \sup\{\overline{F}(y) \mid y \in \mathbb{R}\})$, take $\varepsilon > 0$ such that $z + \varepsilon \in (\inf\{\overline{F}(y) \mid y \in \mathbb{R}\}, \sup\{\overline{F}(y) \mid y \in \mathbb{R}\})$. For all $u \in (\inf\{\overline{F}(y) \mid y \in \mathbb{R}\}, \sup\{\overline{F}(y) \mid y \in \mathbb{R}\})$, denote $y_1(u) = \max\{y \mid \overline{F}(y) = u\}$. From Theorem A4 (b) in Appendix B, there exists $N_2 \in \mathbb{N}$ such that for any $n \geq N_2$,
$$v\left\{\inf_{k \geq n} \frac{S_k(y_1(z+\varepsilon))}{k} > z\right\} \geq 1 - \varepsilon. \tag{7}$$
If n N 2 ,
c h ¯ S n n > z = 0 1 v M S n n > z r d r 0 1 v { inf k n S k ( y 1 ( z + ε ) ) k > z M S n n > z r } d r 0 1 v { inf k n S k ( y 1 ( z + ε ) ) k > z M S n n > inf k n S k ( y 1 ( z + ε ) ) k r } d r 0 1 v { inf k n S k ( y 1 ( z + ε ) ) k > z M i = 1 n f ( η i , τ i ) > i = 1 n f ( η i , y 1 ( z + ε ) ) r } d r 0 1 v { inf k n S k ( y 1 ( z + ε ) ) k > z M i = 1 n f ( η i , τ i ) > f ( η i , y 1 ( z + ε ) ) r } d r .
Notice that { τ i } i = 1 is an IID uncertain sequence and f ( x , y ) is a strictly increasing function about y for every x. By (7) and y 1 ( z + ε ) = max { y | F ¯ ( y ) = z + ε } , we have
c h ¯ S n n > z 0 1 v { inf k n S k ( y 1 ( z + ε ) ) k > z M i = 1 n τ i > y 1 ( z + ε ) r } d r = 0 1 v { inf k n S k ( y 1 ( z + ε ) ) k > z M τ 1 > y 1 ( z + ε ) r } d r = 0 1 v { inf k n S k ( y 1 ( z + ε ) ) k > z M F ¯ ( τ 1 ) > z + ε r } d r ( 1 ε ) ( 1 Ψ ¯ ( z + ε ) ) .
Applying Proposition 1 (ii), it yields
$$CH\left\{\frac{S_n}{n} \leq z\right\} \leq 1 - (1 - \varepsilon)(1 - \overline{\Psi}(z + \varepsilon)).$$
Because F ¯ ( τ 1 ) is regular, its uncertainty distribution Ψ ¯ is continuous. Letting ε 0 , we obtain
$$\limsup_{n \to \infty} CH\left\{\frac{S_n}{n} \leq z\right\} \leq \overline{\Psi}(z) = M\{\overline{F}(\tau_1) \leq z\}. \tag{8}$$
Since $\mathbb{E}[\cdot] \geq -\mathbb{E}[-\cdot]$, it follows that $\inf\{\overline{F}(y) \mid y \in \mathbb{R}\} \leq \inf\{F(y) \mid y \in \mathbb{R}\}$ and $\sup\{\overline{F}(y) \mid y \in \mathbb{R}\} \leq \sup\{F(y) \mid y \in \mathbb{R}\}$. Then, for all $z \in (\inf\{F(y) \mid y \in \mathbb{R}\}, \sup\{\overline{F}(y) \mid y \in \mathbb{R}\})$, (3) follows by combining (6) and (8).
Case 2: Let $f \in C_D$; then $-f \in C_I$. Applying (3) with $-f$ and $-z$ in place of $f$ and $z$, we obtain
M { E [ f ( η 1 , τ 1 ) ] < z } lim inf n c h ¯ S n n < z lim sup n C H S n n < z M { E [ f ( η 1 , τ 1 ) ] < z } .
Due to the fact E [ f ( η 1 , τ 1 ) ] = E [ f ( η 1 , τ 1 ) ] ,
M { E [ f ( η 1 , τ 1 ) ] < z } lim inf n c h ¯ S n n < z lim sup n C H S n n < z M { E [ f ( η 1 , τ 1 ) ] < z } .
From Proposition 1 (ii) and the duality axiom of M , we obtain
1 M { E [ f ( η 1 , τ 1 ) ] z } 1 lim inf n C H S n n z 1 lim sup n c h ¯ S n n z 1 M { E [ f ( η 1 , τ 1 ) ] z } .
Moreover,
M { E [ f ( η 1 , τ 1 ) ] z } lim inf n c h ¯ S n n z lim sup n C H S n n z M { E [ f ( η 1 , τ 1 ) ] z } .
Then, for ∀ z ( inf { F ( y ) | y R } , sup { F ¯ ( y ) | y R } ) , (3) follows.
Finally, we consider (4). If $F(\tau_1) = \overline{F}(\tau_1)$, we have
$$M\{F(\tau_1) \leq z\} = M\{\overline{F}(\tau_1) \leq z\}.$$
By (3), we obtain
$$\liminf_{n \to \infty} \overline{ch}\left\{\frac{S_n}{n} \leq z\right\} = \limsup_{n \to \infty} CH\left\{\frac{S_n}{n} \leq z\right\} = M\{F(\tau_1) \leq z\}. \tag{10}$$
Applying Proposition 1 (iv), we have
$$\liminf_{n \to \infty} \overline{ch}\left\{\frac{S_n}{n} \leq z\right\} \leq \limsup_{n \to \infty} \overline{ch}\left\{\frac{S_n}{n} \leq z\right\} \leq \limsup_{n \to \infty} CH\left\{\frac{S_n}{n} \leq z\right\}, \tag{11}$$
and
$$\liminf_{n \to \infty} \overline{ch}\left\{\frac{S_n}{n} \leq z\right\} \leq \liminf_{n \to \infty} CH\left\{\frac{S_n}{n} \leq z\right\} \leq \limsup_{n \to \infty} CH\left\{\frac{S_n}{n} \leq z\right\}. \tag{12}$$
Combining (10), (11), and (12), we obtain
$$\lim_{n \to \infty} \overline{ch}\left\{\frac{S_n}{n} \leq z\right\} = \lim_{n \to \infty} CH\left\{\frac{S_n}{n} \leq z\right\} = M\{F(\tau_1) \leq z\}.$$
Hence, (4) is proved, and the proof of Theorem 1 is complete. □
In the degenerate situation for uncertain variables with f ( x , y ) = g ( y ) , x, y R , Theorem 1 directly leads to the following corollary.
Corollary 1.
Suppose that $\{\tau_i\}_{i=1}^\infty$ is a sequence of regular IID uncertain variables and $g \in \overline{C}_I \cup \overline{C}_D$. Then, for $z \in (\inf\{g(y) \mid y \in \mathbb{R}\}, \sup\{g(y) \mid y \in \mathbb{R}\})$,
$$\lim_{n \to \infty} \overline{ch}\left\{\frac{\overline{S}_n}{n} \leq z\right\} = \lim_{n \to \infty} CH\left\{\frac{\overline{S}_n}{n} \leq z\right\} = \lim_{n \to \infty} M\left\{\frac{\overline{S}_n}{n} \leq z\right\} = M\{g(\tau_1) \leq z\}.$$
Theorem 2 below includes the degenerate LLNs for random variables with f ( x , y ) = h ( x ) , x, y R . It can be obtained from Theorems A2 and A3 in Appendix B immediately.
Theorem 2.
Assume that $\{\eta_i\}_{i=1}^\infty$ is a sequence of IID random variables under $\mathbb{E}$, that $h(x)$, $x \in \mathbb{R}$, is continuous, and that $\mathbb{E}|h(\eta_1)|^{1+\alpha} < \infty$ for some $\alpha \in (0, 1]$. Then, for any $\varepsilon > 0$,
$$\lim_{n \to \infty} CH\left\{-\mathbb{E}[-h(\eta_1)] - \varepsilon \leq \frac{\sum_{i=1}^n h(\eta_i)}{n} \leq \mathbb{E}[h(\eta_1)] + \varepsilon\right\} = \lim_{n \to \infty} V\left\{-\mathbb{E}[-h(\eta_1)] - \varepsilon \leq \frac{\sum_{i=1}^n h(\eta_i)}{n} \leq \mathbb{E}[h(\eta_1)] + \varepsilon\right\} = 1,$$
and
$$\lim_{n \to \infty} \overline{ch}\left\{-\mathbb{E}[-h(\eta_1)] - \varepsilon \leq \frac{\sum_{i=1}^n h(\eta_i)}{n} \leq \mathbb{E}[h(\eta_1)] + \varepsilon\right\} = \lim_{n \to \infty} v\left\{-\mathbb{E}[-h(\eta_1)] - \varepsilon \leq \frac{\sum_{i=1}^n h(\eta_i)}{n} \leq \mathbb{E}[h(\eta_1)] + \varepsilon\right\} = 1.$$
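Theorem 2 says that sample means of $h(\eta_i)$ eventually cluster in the interval between the lower and upper expectations. A common classical surrogate for such a sub-linear IID sequence (an assumption for illustration, not the sub-linear model itself) lets each observation's mean drift anywhere in $[\underline{\mu}, \overline{\mu}]$; the sample mean then lands in $[\underline{\mu} - \varepsilon, \overline{\mu} + \varepsilon]$ for large $n$.

```python
import random

# Classical Monte Carlo surrogate for the clustering statement of Theorem 2.
# Each h(eta_i) is drawn with a per-step mean chosen arbitrarily inside
# [mu_low, mu_up], plus noise; the running mean should end up inside
# [mu_low - eps, mu_up + eps].  All numbers here are illustrative.

random.seed(7)
mu_low, mu_up, eps = -1.0, 2.0, 0.1
n = 200_000

total = 0.0
for _ in range(n):
    mu_i = random.uniform(mu_low, mu_up)    # drifting mean in [mu_low, mu_up]
    total += mu_i + random.gauss(0.0, 1.0)  # finite-variance noise
mean = total / n

print(mu_low - eps <= mean <= mu_up + eps)  # True for large n
```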

4. Examples

This section employs the notations in Section 3. Five examples are given, which can be seen as applications of Theorem 1.
Example 1.
Suppose that $\{\eta_i\}_{i=1}^\infty$ is a sequence of IID random variables with maximal distribution under $\mathbb{E}$ (see Definition A10 in Appendix B), i.e., $\mathbb{E}[\varphi(\eta_1)] = \sup_{\underline{\mu} \leq y \leq \overline{\mu}} \varphi(y)$ for $\varphi \in C_{l,Lip}(\mathbb{R})$, where $\overline{\mu} = \mathbb{E}[\eta_1]$ and $\underline{\mu} = -\mathbb{E}[-\eta_1]$, and $\{\tau_i\}_{i=1}^\infty$ is a sequence of independent, normally distributed uncertain variables with uncertainty distribution
$$\Psi(x) = \left(1 + \exp\left(-\frac{\pi x}{\sqrt{3}}\right)\right)^{-1}, \quad x \in \mathbb{R}.$$
Then, it follows that
(I) For every $z \in (-\infty, \infty)$,
$$\Psi(z - \overline{\mu}) \leq \liminf_{n \to \infty} \overline{ch}\left\{\frac{\sum_{i=1}^n (\eta_i + \tau_i)}{n} \leq z\right\} \leq \limsup_{n \to \infty} CH\left\{\frac{\sum_{i=1}^n (\eta_i + \tau_i)}{n} \leq z\right\} \leq \Psi(z - \underline{\mu}),$$
and
$$1 - \Psi(\overline{\mu} - z) \leq \liminf_{n \to \infty} \overline{ch}\left\{\frac{\sum_{i=1}^n (\eta_i - \tau_i)}{n} \leq z\right\} \leq \limsup_{n \to \infty} CH\left\{\frac{\sum_{i=1}^n (\eta_i - \tau_i)}{n} \leq z\right\} \leq 1 - \Psi(\underline{\mu} - z).$$
(II) For any $z \in (-\infty, \infty)$,
$$\lim_{n \to \infty} \overline{ch}\left\{\frac{\sum_{i=1}^n \tau_i^3}{n} \leq z\right\} = \lim_{n \to \infty} CH\left\{\frac{\sum_{i=1}^n \tau_i^3}{n} \leq z\right\} = \Psi(\sqrt[3]{z}),$$
and
$$\lim_{n \to \infty} \overline{ch}\left\{\frac{\sum_{i=1}^n (-\tau_i^3)}{n} \leq z\right\} = \lim_{n \to \infty} CH\left\{\frac{\sum_{i=1}^n (-\tau_i^3)}{n} \leq z\right\} = 1 - \Psi(-\sqrt[3]{z}).$$
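A quick numeric sketch of the distribution used in this example, assuming the normal uncertainty distribution $\Psi(x) = (1 + \exp(-\pi x/\sqrt{3}))^{-1}$: it is symmetric about $0$ in the sense $\Psi(-x) = 1 - \Psi(x)$, and the two bounds in part (I) are correctly ordered whenever $\underline{\mu} \leq \overline{\mu}$. The values of `z`, `mu_low`, `mu_up` below are arbitrary illustrations.

```python
import math

def Psi(x):
    # normal uncertainty distribution: Psi(x) = (1 + exp(-pi*x/sqrt(3)))^{-1}
    return 1.0 / (1.0 + math.exp(-math.pi * x / math.sqrt(3)))

z, mu_low, mu_up = 0.4, -1.0, 2.0

print(Psi(0.0))                               # 0.5
print(abs(Psi(-z) - (1.0 - Psi(z))) < 1e-12)  # True: symmetry about 0
print(Psi(z - mu_up) <= Psi(z - mu_low))      # True: lower bound <= upper bound
```

The symmetry check reflects the remark in the abstract that, when both the random and uncertain variables are symmetric, the limit in the LLN is itself characterized by a symmetric uncertain variable.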
Example 2.
Suppose that { η i } i = 1 is a sequence of IID random variables with maximal distribution under E , i.e., E [ φ ( η 1 ) ] = sup μ ̲ y μ ¯ φ ( y ) , for φ C l . l i p ( R ) , where μ ¯ = E [ η 1 ] , μ ̲ = E [ η 1 ] , and { τ i } i = 1 is a sequence of uncertain variables that are independent, linear distributed, and the uncertainty distribution is
Ψ ( x ) = 0 , if x < a ; ( x − a ) / ( b − a ) , if a ≤ x < b ; 1 , otherwise
where a and b are real numbers with a < b . Then, it follows that
(I) 
For every z < a + μ ̲ ,
lim n c h ¯ i = 1 n ( η i + τ i ) n z = lim n C H i = 1 n ( η i + τ i ) n z = 0 ,
for every z b + μ ¯ ,
lim n c h ¯ i = 1 n ( η i + τ i ) n z = lim n C H i = 1 n ( η i + τ i ) n z = 1 ,
for every z [ a + μ ̲ , min { a + μ ¯ , b + μ ̲ } ) ,
0 lim inf n c h ¯ i = 1 n ( η i + τ i ) n z lim n C H i = 1 n ( η i + τ i ) n z z μ ̲ a b a ,
for every z [ max { a + μ ¯ , b + μ ̲ } , b + μ ¯ ) ,
z μ ¯ a b a lim inf n c h ¯ i = 1 n ( η i + τ i ) n z lim n C H i = 1 n ( η i + τ i ) n z 1 .
for every z [ a + μ ¯ , b + μ ̲ ] with a + μ ¯ < b + μ ̲ ,
z μ ¯ a b a lim inf n c h ¯ i = 1 n ( η i + τ i ) n z lim n C H i = 1 n ( η i + τ i ) n z z μ ̲ a b a ,
and for every z [ b + μ ̲ , a + μ ¯ ] with a + μ ¯ b + μ ̲ ,
0 lim inf n c h ¯ i = 1 n ( η i + τ i ) n z lim n C H i = 1 n ( η i + τ i ) n z 1 .
(II) 
For every z > μ ¯ a ,
lim n c h ¯ i = 1 n ( η i τ i ) n z = lim n C H i = 1 n ( η i τ i ) n z = 1 ,
for every z μ ̲ b ,
lim n c h ¯ i = 1 n ( η i τ i ) n z = lim n C H i = 1 n ( η i τ i ) n z = 0 ,
for every z ( μ ̲ b , min { μ ̲ a , μ ¯ b } ] ,
0 lim inf n c h ¯ i = 1 n ( η i τ i ) n z lim sup n C H i = 1 n ( η i τ i ) n z 1 μ ̲ z a b a ,
for every z ( max { μ ̲ a , μ ¯ b } , μ ¯ a ] ,
1 μ ¯ z a b a lim inf n c h ¯ i = 1 n ( η i τ i ) n z lim sup n C H i = 1 n ( η i τ i ) n z 1 ,
for every z ( μ ¯ b , μ ̲ a ] with μ ¯ b < μ ̲ a ,
1 μ ¯ z a b a lim inf n c h ¯ i = 1 n ( η i τ i ) n z lim sup n C H i = 1 n ( η i τ i ) n z 1 μ ̲ z a b a ,
and for every z ( μ ̲ a , μ ¯ b ] with μ ¯ b μ ̲ a ,
0 lim inf n c h ¯ i = 1 n ( η i τ i ) n z lim n C H i = 1 n ( η i τ i ) n z 1 .
(III) 
For any z ( , ) ,
lim n c h ¯ i = 1 n τ i 3 n z = lim n C H i = 1 n τ i 3 n z = Ψ ( z 3 ) ,
and
lim n c h ¯ i = 1 n τ i 3 n z = lim n C H i = 1 n τ i 3 n z = 1 Ψ ( z 3 ) .
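As a numeric illustration of Example 2 (not from the paper; parameter values chosen arbitrarily): the bounds in part (I) both take the form Ψ(z − μ), where Ψ is the linear uncertainty distribution on [a, b] and the mean shift μ lies between μ̲ and μ¯. A minimal sketch:

```python
def psi_linear(x, a, b):
    # Linear uncertainty distribution on [a, b]:
    # 0 below a, (x - a)/(b - a) on [a, b), 1 from b onward.
    if x < a:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return 1.0

def limit_bounds(z, mu_low, mu_up, a, b):
    # Bounds of lim ch{ sum(eta_i + tau_i)/n <= z }: the lower bound
    # shifts z by the upper mean mu_up, the upper bound by the lower
    # mean mu_low (compare Example 1(I): Psi(z - mu_up) <= ... <= Psi(z - mu_low)).
    return psi_linear(z - mu_up, a, b), psi_linear(z - mu_low, a, b)

# Hypothetical values: mu_low = 0, mu_up = 1, linear distribution on [0, 2]
lo, hi = limit_bounds(z=1.5, mu_low=0.0, mu_up=1.0, a=0.0, b=2.0)
print(lo, hi)  # 0.25 0.75
```

Consistent with the example, for z < a + μ̲ both bounds collapse to 0, and for z ≥ b + μ¯ both equal 1.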
Example 3.
Suppose that { η i } i = 1 is a sequence of IID random variables with G-normal distribution under E (see Definition A11 in Appendix B), i.e., η 1 N { 0 , [ σ ̲ 2 , σ ¯ 2 ] } , where E [ η 1 ] = E [ η 1 ] = 0 , σ ¯ 2 = E η 1 2 and σ ̲ 2 = E η 1 2 , and { τ i } i = 1 is a sequence of independent, normally distributed uncertain variables whose uncertainty distribution is
Ψ ( x ) = ( 1 + exp ( − π x / √3 ) ) ⁻¹ , x R .
Then, we have
(I) 
For any z ( , ) ,
lim n c h ¯ i = 1 n ( η i + τ i ) n z = lim n C H i = 1 n ( η i + τ i ) n z = Ψ ( z ) ,
and
lim n c h ¯ i = 1 n ( η i τ i ) n z = lim n C H i = 1 n ( η i τ i ) n z = 1 Ψ ( z ) .
(II) 
For any z ( , ) ,
lim n c h ¯ i = 1 n τ i 3 n z = lim n C H i = 1 n τ i 3 n z = Ψ ( z 3 ) ,
and
lim n c h ¯ i = 1 n τ i 3 n z = lim n C H i = 1 n τ i 3 n z = 1 Ψ ( z 3 ) .
Example 4.
Suppose that { η i } i = 1 is a sequence of IID random variables with G-normal distribution under E , i.e., η 1 N { 0 , [ σ ̲ 2 , σ ¯ 2 ] } , where E [ η 1 ] = E [ η 1 ] = 0 , σ ¯ 2 = E η 1 2 and σ ̲ 2 = E η 1 2 , and { τ i } i = 1 is a sequence of independent, linearly distributed uncertain variables whose uncertainty distribution is
Ψ ( x ) = 0 , if x < a ; ( x − a ) / ( b − a ) , if a ≤ x < b ; 1 , otherwise
where a and b are real numbers with a < b . Then, it follows that
(I) 
For any z ( , ) ,
lim n c h ¯ i = 1 n ( η i + τ i ) n z = lim n C H i = 1 n ( η i + τ i ) n z = Ψ ( z ) ,
and
lim n c h ¯ i = 1 n ( η i τ i ) n z = lim n C H i = 1 n ( η i τ i ) n z = 1 Ψ ( z ) .
(II) 
For any z ( , ) ,
lim n c h ¯ i = 1 n τ i 3 n z = lim n C H i = 1 n τ i 3 n z = Ψ ( z 3 ) ,
and
lim n c h ¯ i = 1 n τ i 3 n z = lim n C H i = 1 n τ i 3 n z = 1 Ψ ( z 3 ) .
Example 5.
The triplets ( Ω , F , V ) and ( Ω , F , v ) are an upper probability space and a lower probability space, respectively. Assume that { η i } i = 1 is a sequence of IID random variables. The upper and lower probabilities are defined by V ( A ) = sup { P 1 ( A ) , P 2 ( A ) } , v ( A ) = inf { P 1 ( A ) , P 2 ( A ) } , for any A F , and
P 1 ( η 1 ≤ x ) = 1 − exp ( − λ 1 x ) , if x > 0 ; 0 , otherwise ,
P 2 ( η 1 ≤ x ) = 1 − exp ( − λ 2 x ) , if x > 0 ; 0 , otherwise ,
where 0 < λ 1 < λ 2 , E [ η 1 ] = sup { E P 1 [ η 1 ] , E P 2 [ η 1 ] } , and E [ η 1 ] = inf { E P 1 [ η 1 ] , E P 2 [ η 1 ] } . Assume further that { τ i } i = 1 is a sequence of independent, linearly distributed uncertain variables whose uncertainty distribution is
Ψ ( x ) = 0 , if x < a ; ( x − a ) / ( b − a ) , if a ≤ x < b ; 1 , otherwise
where a and b are real numbers with a < b . Then, it follows that
(I) 
For every z < a + 1 λ 2 ,
lim n c h ¯ i = 1 n ( η i + τ i ) n z = lim n C H i = 1 n ( η i + τ i ) n z = 0 ,
for every z b + 1 λ 1 ,
lim n c h ¯ i = 1 n ( η i + τ i ) n z = lim n C H i = 1 n ( η i + τ i ) n z = 1 ,
for every z [ a + 1 λ 2 , min { a + 1 λ 1 , b + 1 λ 2 } ) ,
0 lim inf n c h ¯ i = 1 n ( η i + τ i ) n z lim n C H i = 1 n ( η i + τ i ) n z z 1 λ 2 a b a ,
for every z [ max { a + 1 λ 1 , b + 1 λ 2 } , b + 1 λ 1 ) ,
z 1 λ 1 a b a lim inf n c h ¯ i = 1 n ( η i + τ i ) n z lim n C H i = 1 n ( η i + τ i ) n z 1 ,
for every z [ a + 1 λ 1 , b + 1 λ 2 ] with a + 1 λ 1 < b + 1 λ 2 ,
z 1 λ 1 a b a lim inf n c h ¯ i = 1 n ( η i + τ i ) n z lim n C H i = 1 n ( η i + τ i ) n z z 1 λ 2 a b a ,
and for every z [ b + 1 λ 2 , a + 1 λ 1 ] with a + 1 λ 1 b + 1 λ 2 ,
0 lim inf n c h ¯ i = 1 n ( η i + τ i ) n z lim n C H i = 1 n ( η i + τ i ) n z 1 .
(II) 
For every z > 1 λ 1 a ,
lim n c h ¯ i = 1 n ( η i τ i ) n z = lim n C H i = 1 n ( η i τ i ) n z = 1 ,
for every z 1 λ 2 b ,
lim n c h ¯ i = 1 n ( η i τ i ) n z = lim n C H i = 1 n ( η i τ i ) n z = 0 ,
for every z ( 1 λ 2 b , min { 1 λ 2 a , 1 λ 1 b } ] ,
0 lim inf n c h ¯ i = 1 n ( η i τ i ) n z lim sup n C H i = 1 n ( η i τ i ) n z 1 1 λ 2 z a b a ,
for every z ( max { 1 λ 2 a , 1 λ 1 b } , 1 λ 1 a ] ,
1 1 λ 1 z a b a lim inf n c h ¯ i = 1 n ( η i τ i ) n z lim sup n C H i = 1 n ( η i τ i ) n z 1 ,
for every z ( 1 λ 1 b , 1 λ 2 a ] with 1 λ 1 b < 1 λ 2 a ,
1 1 λ 1 z a b a lim inf n c h ¯ i = 1 n ( η i τ i ) n z lim sup n C H i = 1 n ( η i τ i ) n z 1 1 λ 2 z a b a ,
and for every z ( 1 λ 2 a , 1 λ 1 b ] with 1 λ 1 b 1 λ 2 a ,
0 lim inf n c h ¯ i = 1 n ( η i τ i ) n z lim n C H i = 1 n ( η i τ i ) n z 1 .
(III) 
For any z ( , ) ,
lim n c h ¯ i = 1 n τ i 3 n z = lim n C H i = 1 n τ i 3 n z = Ψ ( z 3 ) ,
and
lim n c h ¯ i = 1 n τ i 3 n z = lim n C H i = 1 n τ i 3 n z = 1 Ψ ( z 3 ) .
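A quick numeric check of the construction in Example 5 (rates chosen arbitrarily for illustration): since λ1 < λ2, the exponential CDF with rate λ2 dominates the one with rate λ1, so the upper probability of { η1 ≤ x } comes from P2 and the lower from P1, while the upper and lower expectations are the larger and smaller of the two means 1/λ.

```python
import math

lam1, lam2 = 0.5, 2.0   # assumed rates with 0 < lam1 < lam2

def F1(x):
    # CDF of the exponential distribution with rate lam1
    return 1 - math.exp(-lam1 * x) if x > 0 else 0.0

def F2(x):
    # CDF of the exponential distribution with rate lam2
    return 1 - math.exp(-lam2 * x) if x > 0 else 0.0

x = 1.0
V = max(F1(x), F2(x))   # upper probability of {eta_1 <= x}
v = min(F1(x), F2(x))   # lower probability of {eta_1 <= x}

upper_mean = max(1 / lam1, 1 / lam2)  # upper expectation = 1/lam1
lower_mean = min(1 / lam1, 1 / lam2)  # lower expectation = 1/lam2
print(V == F2(x), upper_mean, lower_mean)  # True 2.0 0.5
```

This is why the case thresholds in Example 5 are exactly those of Example 2 with μ¯ = 1/λ1 and μ̲ = 1/λ2.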
Remark 4.
In Theorem 1, if the distributions of η i , i = 1 , 2 , are symmetrical, i.e., the distributions of η i and − η i are the same, τ i , i = 1 , 2 , exhibit symmetry, i.e., Ψ ( x ) + Ψ ( − x ) = 1 , for all x R (see [34]), and f ( x , y ) = x + y or f ( x , y ) = x − y , then lim n c h ¯ { S n / n ≤ z } exhibits the form that is characterized by the symmetrical uncertain variable τ 1 or − τ 1 , respectively. Example 3 illustrates this remark as a special case.
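The symmetry condition on the uncertain variables in Remark 4 can be checked numerically for the normal uncertainty distribution used in Examples 1 and 3 (a sketch; zero expected value and unit parameter assumed):

```python
import math

def psi(x):
    # Normal uncertainty distribution with expected value 0:
    # Psi(x) = (1 + exp(-pi * x / sqrt(3)))^(-1)
    return 1.0 / (1.0 + math.exp(-math.pi * x / math.sqrt(3)))

# Symmetry: Psi(x) + Psi(-x) = 1 for all x
symmetric = all(abs(psi(x) + psi(-x) - 1.0) < 1e-12
                for x in [-2.0, -0.5, 0.0, 0.7, 1.3])
print(symmetric)  # True
```

The identity holds exactly, since 1/(1 + a) + 1/(1 + 1/a) = 1 for any a > 0.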

5. Conclusions

To better handle complex systems where human uncertainty and randomness with sub-linear characteristics coexist, a new type of chance space named U-S chance spaces is introduced in this paper. The U-S chance spaces are a pair of two-dimensional spaces made up of an uncertainty space and a sub-linear expectation space. Additionally, under U-S chance spaces, we define uncertain random variables, chance measures and chance distributions. A novel framework for uncertain random variables is also suggested by merging sub-linear expectation theory with uncertainty theory. This paper's primary contribution is the derivation of the Kolmogorov-type LLN for uncertain random variables that are functions of IID random variables and IID uncertain variables under U-S chance spaces. Five examples were given and explained to better illustrate the LLN for uncertain random variables under U-S chance spaces. These results are obtained under the condition that the uncertain variables are regular, and future research may further weaken the regularity assumptions.
The LLNs in the framework of U-S chance theory provide a solid theoretical foundation for statistical analysis within the same theory, enabling the inference of population characteristics through sample analysis. Based on the LLNs under U-S chance spaces, topics such as statistical inference and parameter estimation within the theory are all worthy of future research endeavors. Moreover, U-S chance theory provides a robust framework for modeling issues such as stock investment, optimal investment, risk aversion, and the design problems of system reliability in incomplete financial markets; these issues have already been studied to some extent in related papers.
Based on this theory, the authors of [33] studied the risk aversion problem under incomplete markets subject to government regulation, defined the risk premium and the relative risk premium, and proved Pratt's theorem within the framework of U-S chance theory. The authors of [32] proved operational laws for uncertain random variables, gave four expectations and models of uncertain random programming, and used them to tackle optimal investment in incomplete markets and the design problems of system reliability. Other outcomes from classical probability theory, like multi-objective uncertain random programming, uncertain random networks, and so forth, could be generalized under U-S chance spaces in future research.

Author Contributions

Writing—review and editing, Writing—original draft, X.F.; Supervision, Methodology, Conceptualization, F.H.; Writing—review and editing, Writing—original draft, X.M.; Writing—review and editing, Writing—original draft, Y.T.; Writing—review and editing, Writing—original draft, D.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by National Natural Science Foundation of China Grant No. 11801307, Natural Science Foundation of Shandong Province of China Grants No. ZR2017MA012, No. ZR2021MA009 and Postgraduate Dissertation Research Innovation Foundation of Qufu Normal University Grant No. LWCXS202223.

Data Availability Statement

Data sharing is not applicable to this article. No dataset was generated or analyzed during this research.

Acknowledgments

The authors would like to thank the anonymous referees for their constructive suggestions and valuable comments that greatly improved this article. This paper was the subject of the dissertation of Xiaoting Fu while she worked on her master's degree, and the first version was substantially completed in December 2022. Xue Meng, Yu Tian and Deguo Yang also contributed to the completion of this paper while working on their master's degrees.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LLN: Law of large numbers
U-S chance spaces: A pair of chance spaces generated by an uncertainty space and a sub-linear expectation space
IID: Independent and identically distributed
a.s.: Almost surely

Appendix A. Uncertainty Space and Uncertain Variable

We provide a number of fundamental concepts about uncertainty theory. Suppose that R is the set of real numbers, and B ( R ) is the σ - algebra of subsets of R .
Definition A1
([9]). Assume that Γ is a nonempty set and L is a σ- algebra of subsets of Γ. Let M : L [ 0 , 1 ] be a set function. If the conditions listed below:
Axiom 1 (normality axiom): M ( Γ ) = 1 ;
Axiom 2 (duality axiom): M ( Λ ) + M ( Λ c ) = 1 , Λ L ;
Axiom 3 (subadditivity axiom): for any given sequence of events { Λ i } i = 1 L ,
M i = 1 Λ i i = 1 M { Λ i } ;
hold, then M is called an uncertain measure, and ( Γ , L , M ) is called an uncertainty space.
Definition A2
([10]). Assume that ( Γ k , L k , M k ) , k = 1 , 2 , , is an uncertainty space sequence. Let M be an uncertain measure on the product σ- algebra. If for any Λ k L k , k = 1 , 2 , ,
M { ∏ k = 1 ∞ Λ k } = ⋀ k = 1 ∞ M k { Λ k }
holds, then M is called a product uncertain measure.
Definition A3
([9]). If a function τ : Γ R is measurable, then it is called an uncertain variable, i.e., for every B B ( R ) ,
{ τ B } = { γ Γ τ ( γ ) B } L .
An uncertain variable’s uncertainty distribution can be described as a function
Ψ ( x ) = M { τ x } , x R .
For every α ( 0 , 1 ) , if the inverse function Ψ 1 ( α ) is existent and unique, then Ψ and τ are named regular uncertainty distribution and regular uncertain variable, respectively.
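For instance, the normal uncertainty distribution used in the examples of this paper, Ψ(x) = (1 + exp(−πx/√3))⁻¹, is regular: its inverse exists and is unique on (0, 1). A minimal sketch (the general location/scale form with parameters e and σ is an assumption of this illustration):

```python
import math

def psi(x, e=0.0, sigma=1.0):
    # Normal uncertainty distribution with expected value e and parameter sigma
    return 1.0 / (1.0 + math.exp(-math.pi * (x - e) / (math.sqrt(3) * sigma)))

def psi_inv(alpha, e=0.0, sigma=1.0):
    # Regularity: Psi has a unique inverse on (0, 1), obtained by solving
    # Psi(x) = alpha for x
    return e + (math.sqrt(3) * sigma / math.pi) * math.log(alpha / (1 - alpha))

x = 0.7
round_trip = psi_inv(psi(x))
print(abs(round_trip - x) < 1e-9)  # True: the round trip recovers x
```

The existence of this inverse is what the regularity assumption in the main theorems requires.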
Definition A4
([9]). Uncertain variables are said to be identically distributed if their uncertainty distributions are the same.
Definition A5
([10]). Suppose that τ 1 , τ 2 , , τ n are uncertain variables on an uncertainty space ( Γ , L , M ) . They are independent if
M j = 1 n { τ j B j } = j = 1 n M { τ j B j }
for arbitrary B j B ( R ) , j = 1 , 2 , , n .

Appendix B. Some Results of Sub-Linear Expectation

Frameworks and notations of [23,24,28] are used in this paper.
Given measurable space ( Ω , F ) , let H be a linear space of real functions on ( Ω , F ) , and the conditions listed below hold:
(i) If X 1 , X 2 , , X n H , then φ ( X 1 , X 2 , , X n ) H for every φ C l , L i p ( R n ) , where C l , L i p ( R n ) represents the linear space of (local Lipschitz) functions φ that satisfy | φ ( x ) φ ( y ) | C ( 1 + | x | m + | y | m ) | x y | , x , y R n , for some C > 0 , m N depending on φ .
(ii) For any A F , I A H , where I A is the indicator function of event A.
Definition A6
([28]). A function E ^ : H [ , + ] is called a sub-linear expectation E ^ on H if the properties below hold: for all X , Y H , we have
(a) 
monotonicity: If X Y , then E ^ [ X ] E ^ [ Y ] ;
(b) 
constant preserving: E ^ [ c ] = c , c R ;
(c) 
sub-additivity: if E ^ [ X ] + E ^ [ Y ] is not of the form + or + , then E ^ [ X + Y ] E ^ [ X ] + E ^ [ Y ] ;
(d) 
positive homogeneity: E ^ [ λ X ] = λ E ^ [ X ] , λ 0 .
The triple ( Ω , H , E ^ ) is named a sub-linear expectation space. A random variable η under sub-linear expectation is a function η : Ω R that satisfies { η B } = { ω Ω η ( ω ) B } F for each B B ( R ) . H can be thought of as a space of random variables. Given a sub-linear expectation E ^ , its conjugate expectation can be defined as − E ^ [ − X ] , for X H .
When the inequality in (c) of Definition A6 becomes an equality, E ^ is a linear expectation; this is a special case of sub-linear expectations. Setting this case aside, a sub-linear expectation need not satisfy the strict inequality E ^ [ X + Y ] < E ^ [ X ] + E ^ [ Y ] for every X , Y H , but it may satisfy the strict inequality for some X , Y H .
Next, we give an example of sub-linear expectations.
Example A1.
During a game, a participant selects a ball at random from a box that includes balls of three colors: white (W), blue (B) and red (R). The participant is not informed of the exact amounts of W, B, and R by the urn's owner, who serves as the game's banker. He/she only makes sure that W = B [ 30 , 40 ] and W + B + R = 100 . Consider a random variable ξ defined by
ξ = 1 , if the selected ball is W ; 0 , if the selected ball is R ; − 1 , if the selected ball is B .
Let H : = { φ ( ξ ) | φ C l , L i p ( R ) } . We can evaluate the loss X = φ ( ξ ) , φ C l , L i p ( R ) , conservatively. The distribution of ξ is
taking values 1 , 0 , − 1 with respective probabilities θ / 2 , 1 − θ , θ / 2 , with uncertainty : θ [ 0.6 , 0.8 ] .
For each fixed φ C l , L i p ( R ) , the robust expectation of ξ can be expressed as
E ^ [ φ ( ξ ) ] : = sup θ Θ E P θ [ φ ( ξ ) ] = sup θ [ 0.6 , 0.8 ] { ( θ / 2 ) φ ( 1 ) + ( 1 − θ ) φ ( 0 ) + ( θ / 2 ) φ ( − 1 ) } .
Next, we show that E ^ [ φ ( ξ ) ] is a sub-linear expectation. For every φ 1 ( ξ ) , φ 2 ( ξ ) H , we have:
(a) 
monotonicity: If φ 1 ( ξ ) φ 2 ( ξ ) , then sup θ Θ E P θ [ φ 1 ( ξ ) ] sup θ Θ E P θ [ φ 2 ( ξ ) ] by the monotonicity of E P θ .
(b) 
constant preserving: E ^ [ c ] = sup θ Θ E P θ [ c ] = c , c R .
(c) 
sub-additivity:
E ^ [ φ 1 ( ξ ) + φ 2 ( ξ ) ] = sup θ Θ E P θ [ φ 1 ( ξ ) + φ 2 ( ξ ) ] = sup θ Θ E P θ [ φ 1 ( ξ ) ] + E P θ [ φ 2 ( ξ ) ] sup θ Θ E P θ [ φ 1 ( ξ ) ] + sup θ Θ E P θ [ φ 2 ( ξ ) ] = E ^ [ φ 1 ( ξ ) ] + E ^ [ φ 2 ( ξ ) ] .
(d) 
positive homogeneity:
E ^ [ λ φ 1 ( ξ ) ] = sup θ Θ E P θ [ λ φ 1 ( ξ ) ] = λ sup θ Θ E P θ [ φ 1 ( ξ ) ] = λ E ^ [ φ 1 ( ξ ) ] , λ 0 .
Hence, E ^ [ φ ( ξ ) ] is a sub-linear expectation. Actually, ∃ φ ^ , φ ¯ C l , L i p ( R ) , such that E ^ [ φ ^ ( ξ ) + φ ¯ ( ξ ) ] < E ^ [ φ ^ ( ξ ) ] + E ^ [ φ ¯ ( ξ ) ] . Let
φ ^ ( x ) = 2 x , if x < 0 ; x , if x ≥ 0 ,
and φ ¯ ( x ) = x 2 , x R . Obviously, φ ^ ( ξ ) , φ ¯ ( ξ ) H , and E ^ [ φ ^ ( ξ ) + φ ¯ ( ξ ) ] < E ^ [ φ ^ ( ξ ) ] + E ^ [ φ ¯ ( ξ ) ] .
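The strict sub-additivity at the end of Example A1 can be verified numerically. In the sketch below (not from the paper), the robust expectation is affine in θ, so its supremum over [0.6, 0.8] is attained at an endpoint:

```python
def robust_expectation(phi, theta_range=(0.6, 0.8)):
    # E[phi(xi)] = sup_theta  theta/2*phi(1) + (1-theta)*phi(0) + theta/2*phi(-1).
    # The map theta -> E_P_theta[phi(xi)] is affine, so checking the two
    # endpoints of the interval suffices.
    def linear_expectation(theta):
        return theta / 2 * phi(1) + (1 - theta) * phi(0) + theta / 2 * phi(-1)
    return max(linear_expectation(t) for t in theta_range)

phi_hat = lambda x: 2 * x if x < 0 else x  # the piecewise function from the text
phi_bar = lambda x: x ** 2                 # phi_bar(x) = x^2

lhs = robust_expectation(lambda x: phi_hat(x) + phi_bar(x))
rhs = robust_expectation(phi_hat) + robust_expectation(phi_bar)
print(lhs, rhs)  # 0.4 < 0.5: strict sub-additivity for this pair
```

Here E^[φ̂(ξ)] = −0.3 (attained at θ = 0.6) and E^[φ̄(ξ)] = 0.8 (attained at θ = 0.8), while the sum φ̂ + φ̄ is maximized at θ = 0.8 with value 0.4, strictly below 0.5.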
In Example A1, a participant selects a ball at random from a box that contains W, B, and R balls. When the participant picks a ball from the urn repeatedly, at time i, the banker knows the true distribution of ξ, but the participant does not. At time i + 1 , the banker may change the numbers of B balls B i + 1 and W balls W i + 1 without telling the participant; however, the ranges of B i + 1 and W i + 1 are both fixed within [ 30 , 40 ] . For example, B 1 = 33 , W 1 = 33 the first time, while B 2 = 36 , W 2 = 36 the second time. Simply stated, the type of urn changes each time a ball is picked; this is the difference from the classic game, in which it does not change. The range of B and W here is either provided by the banker or determined from the data; more generally, it is the range of Θ . The principle for determining the range of Θ from the data is given later in Proposition A3.
Proposition A1
([28]). Given a sub-linear expectation space ( Ω , H , E ^ ) , let E ^ be the conjugate expectation of E ^ . For any X , Y H , then
(i) E ^ [ X ] E ^ [ X ] .
(ii) E ^ [ X + c ] = E ^ [ X ] + c , for c R .
(iii) | E ^ [ X ] E ^ [ Y ] | E ^ [ | X Y | ] .
(iv) E ^ [ X ] and E ^ [ X ] are both finite if E ^ [ | X | ] is finite.
The following representation theorem for sub-linear expectations is very helpful.
Theorem A1
([28]) (Robust Daniell-Stone theorem). Suppose that ( Ω , H , E ^ ) is a sub-linear expectation space, and it satisfies the following condition:
E ^ [ X n ] ↓ 0 , as n → ∞ ,
for every sequence { X n } n = 1 of random variables in H which fulfills X n ( ω ) ↓ 0 for each ω Ω . Then, on the measurable space ( Ω , σ ( H ) ) , ∃ a family of probability measures { P θ } θ Θ , such that
E ^ [ X ] = max θ Θ Ω X ( ω ) d P θ , for every X H .
Here σ ( H ) denotes the smallest σ-algebra generated by H .
Remark A1.
Theorem A1 shows that, under a suitable condition, a sub-linear expectation E ^ [ · ] can be expressed as a supremum of linear expectations. Based on this theorem, Chen [23] gave the definition of the maximum expectation E [ · ] . Definition A7 below provides the concepts of the maximum expectation E and the minimum expectation E introduced by Chen [23]. In most cases, the sub-linear expectation E ^ [ · ] is the maximum expectation E [ · ] .
Definition A7
([23]). Suppose that ( Ω , F ) is a measurable space, M is the set of all probability measures on Ω, and E P θ is the linear expectation under probability measure P θ . For a non-empty subset { P θ } θ Θ M , A F and X H , the upper probability V , the lower probability v, the maximum expectation E and the minimum expectation E are defined by
V ( A ) : = sup θ Θ P θ ( A ) , v ( A ) : = inf θ Θ P θ ( A ) ,
E [ X ] : = sup θ Θ E P θ [ X ] and E [ X ] : = inf θ Θ E P θ [ X ] ,
respectively.
Definition A8
([1]). Let V ( · ) be a set function from F to [ 0 , 1 ] . V ( · ) is named a non-additive probability (also capacity) if properties (i) and (ii) below hold, and is named a lower or an upper continuous non-additive probability if property (iii) or (iv) below also holds:
(i)
V ( ) = 0 , V ( Ω ) = 1 .
(ii)
If A B and A, B F , then V ( A ) V ( B ) .
(iii)
If A n , A F , and A n A , then V ( A n ) V ( A ) .
(iv)
If A n , A F , and A n A , then V ( A n ) V ( A ) .
Remark A2.
(i) The upper probability V and the lower probability v are two kinds of specific non-additive probabilities. Furthermore, V is subadditive, and V ( A ) + v ( A c ) = 1 for every given A F . But v is not superadditive. E [ · ] is a sub-linear expectation, and E [ · ] is the conjugate expectation of E [ · ] . Indeed, provided a maximum expectation E , V and v can be produced by V ( A ) = E [ I A ] and v ( A ) = E [ I A ] = E [ I A ] for any A F .
(ii) We also call the subset { P θ } θ Θ a family of probability measures related to the sub-linear expectation E . For a given random variable X H , let { F θ ( x ) = P θ ( X x ) , x R } θ Θ and call it a family of ambiguous probability distributions of X. Therefore, there exist two distributions for X: F 1 ( x ) : = inf θ Θ P θ { X x } = v { X x } , and F 2 ( x ) : = sup θ Θ P θ { X x } = V { X x } , x R , named the lower distribution and the upper distribution, respectively. In fact, the lower distribution and the upper distribution of X can characterize the ambiguity of distributions of X.
Throughout this paper, we only explore the case in which the sub-linear expectation E ^ [ · ] is the maximum expectation E [ · ] and the upper probability V is upper continuous. Next, we introduce the concept of IID random variables under the sub-linear expectation E , as proposed by [24,28].
Definition A9.
(i) Independence: Let Y 1 , Y 2 , , Y n be a random variable sequence satisfying Y i H . If for every Borel-measurable function φ on R n with φ ( X , Y n ) H and φ ( x , Y n ) H for every x R n 1 , E [ φ ( X , Y n ) ] = E [ φ ¯ ( X ) ] , holds, where φ ¯ ( x ) : = E [ φ ( x , Y n ) ] and φ ¯ ( X ) H , then random variable Y n is independent to X : = ( Y 1 , Y 2 , , Y n 1 ) under E .
(ii) Identical distribution: Given random variables X and Y, if for every Borel-measurable function φ satisfying φ ( X ) , φ ( Y ) H , E [ φ ( X ) ] = E [ φ ( Y ) ] , then X and Y are identically distributed, denoted X = d Y .
(iii) IID random variables: Given a sequence of random variables, if X i = d X 1 and X i + 1 is independent to Y : = ( X 1 , , X i ) for every i 1 , then { X i } i = 1 is IID.
Remark A3.
(i) The case “Y is independent to X” occurs when Y occurs after X. Therefore, the information of X should be considered in a robust expectation. In a sub-linear expectation space ( Ω , H , E ) , the case that Y is independent to X implies that the family of distributions { G π ( y ) , y R } π Π of Y is unchanged after every realization of X = x happens. That is, the “conditional sub-linear expectation” of Y with respect to X is E [ φ ( x , Y ) ] | x = X . This notion of independence reduces to the classical one in the case of linear expectation. As illustrated by Peng [27], it should be noticed that “Y is independent to X” does not automatically imply that “X is independent to Y” under sub-linear expectations. The following Proposition A2 (ii) illustrates this point.
(ii) Let us consider Example A1 again, now assuming the balls are mixed completely at random. After selecting a ball from the box, we will receive 1 dollar if the selected ball is W, and −1 dollar if it is B. The gain η 1 of this game is
η 1 ( ω ) = 1 { W b a l l } 1 { B b a l l } .
That is, η 1 = 0 if an R ball is selected. We repeat this game; after the random variable η i is produced at time i and a new game starts, the banker can change the number of B balls B i + 1 within the fixed range Θ without telling us. Now if we sell a contract φ ( η i ) based on the i-th output η i , then, taking into account the worst case, the robust expectation is
E [ φ ( η i ) ] = E [ φ ( η 1 ) ] = sup θ Θ { ( θ / 2 ) φ ( 1 ) + ( 1 − θ ) φ ( 0 ) + ( θ / 2 ) φ ( − 1 ) } , i = 1 , 2 , .
So the sequence { η i } i = 1 is identically distributed. It can also be proved that η i + 1 is independent to ( η 1 , , η i ) . Generally speaking, if X ( ω ) = φ ( η 1 , , η i ) denotes a path-dependent loss function, then the robust expected loss is:
E [ X ] = E [ φ ( η 1 , , η i , η i + 1 ) ] = E [ E [ φ ( x 1 , , x i , η i + 1 ) ] | η j = x j , 1 j i ] .
(iii) If the sample { x i } i = 1 n is actual data related to humanity, management, economics and finance, it is often not guaranteed that it meets the IID condition in classical probability theory. But Example A1 shows that most actual data can satisfy, at least approximately, the IID condition under E given by Definition A9. Also, the test function φ has different meanings in different situations. For example, φ ( X ) could be a financial contract based on X, a put option φ ( X ) = max { 0 , X − k } , a consumption function, a profit and loss function, a cost function in an optimal control system, etc. When dealing with theoretical problems about sub-linear expectations, we only need to calculate the sub-linear expectation E [ φ ( X ) ] corresponding to a certain class of functions φ that we care about (see Peng [27], Subsections 3.2 and 3.3, for more details).
Proposition A2.
Given the maximum expectation E , let X and Y be two random variables under E , { F θ ( x ) , x R } θ Θ be a family of probability distributions of X that corresponds to the family of probability measures { P θ } θ Θ , and { G π ( y ) , y R } π Π be a family of probability distributions of Y that corresponds to the family of probability measures { P π } π Π . For every given x , y , z R , then ∃ a family of probability measures { P δ } δ Δ satisfying:
(i)
V { X x , Y y } = sup δ Δ P δ { X x , Y y } ,
and
v { X x , Y y } = inf δ Δ P δ { X x , Y y } .
Specifically, if Y is independent to X under E , or X is independent to Y under E , we have
sup δ Δ P δ { X x , Y y } = sup θ Θ F θ ( x ) sup π Π G π ( y ) ,
and
inf δ Δ P δ { X x , Y y } = inf θ Θ F θ ( x ) inf π Π G π ( y ) .
(ii) Suppose that Y is independent to X under E , then the lower distribution and the upper distribution of X + Y are
inf δ Δ P δ { X + Y z } = inf θ Θ inf π Π G π ( z x ) d F θ ( x )
and
sup δ Δ P δ { X + Y z } = sup θ Θ sup π Π G π ( z x ) d F θ ( x ) ,
respectively. Suppose that X is independent to Y under E , the lower distribution and the upper distribution of X + Y are
inf δ Δ P δ { X + Y z } = inf π Π inf θ Θ F θ ( z y ) d G π ( y )
and
sup δ Δ P δ { X + Y z } = sup π Π sup θ Θ F θ ( z y ) d G π ( y ) ,
respectively.
Proof. 
(i) For (A1), for every given x , y R , we obtain
V { X x , Y y } = E I { X x , Y y } = sup δ Δ E P δ I { X x , Y y } = sup δ Δ P δ { X x , Y y } ,
by Definition A7 and Remark A2. Similarly, (A2) follows by the fact v { · } = E · = i n f δ Δ E P δ · = i n f δ Δ P δ { · } .
Next, we show (A3). Since Y is independent to X under E , we have
sup δ Δ P δ { X x , Y y } = V { X x , Y y } = E I { X x , Y y } = E I { X x } I { Y y } = E E I { a x } I { Y y } | a = X = E I { a x } E I { Y y } | a = X = E I { X x } E I { Y y } = V { X x } V { Y y } = sup θ Θ F θ ( x ) sup π Π G π ( y ) ,
by the positive homogeneity of E .
Since X is independent to Y under E , it follows that
sup δ Δ P δ { X x , Y y } = V { X x , Y y } = E I { X x , Y y } = E I { X x } I { Y y } = E E I { X x } I { b y } | b = Y = E I { b y } E I { X x } | b = Y = E I { X x } E I { Y y } = V { X x } V { Y y } = sup θ Θ F θ ( x ) sup π Π G π ( y ) .
Hence, (A3) is proved.
Finally, we consider (A4). Since Y is independent to X under E , we have
inf δ Δ P δ { X x , Y y } = v { X x , Y y } = E I { X x , Y y } = E I { X x , Y y } = E I { X x } I { Y y } = E E I { a x } I { Y y } | a = X = E I { a x } E I { Y y } | a = X = E I { X x } E I { Y y } = E I { X x } E I { Y y } = v { X x } v { Y y } = inf θ Θ F θ ( x ) inf π Π G π ( y ) .
If X is independent to Y under E , we can prove (A4) in a similar manner. Thus, (i) is proved.
(ii) Without loss of generality, we only show (A5) and (A6). Since Y is independent to X under E , we obtain
sup δ Δ P δ { X + Y z } = V X + Y z = E I { X + Y z } = E E I { Y z x } | x = X = E V Y z x | x = X = E sup π Π G π ( z X ) = sup θ Θ E P θ sup π Π G π ( z X ) = sup θ Θ sup π Π G π ( z x ) d F θ ( x ) .
Hence, (A5) is proved.
Since X is independent to Y under E , we obtain
inf δ Δ P δ { X + Y z } = v X + Y z = E I { X + Y z } = E I { X + Y z } = E E I { X z y } | y = Y = E E I { X z y } | y = Y = E v X z y | y = Y = E inf θ Θ F θ ( z Y ) = E inf θ Θ F θ ( z Y ) = inf π Π E P Π inf θ Θ F θ ( z Y ) = inf π Π inf θ Θ F θ ( z y ) d G π ( y ) .
Hence, (A6) is proved. □
Definition A10
([28]) (Maximal distribution). If E [ φ ( η ) ] = sup μ ̲ y μ ¯ φ ( y ) , for φ C l . l i p ( R ) , where μ ¯ = E [ η ] , μ ̲ = E [ η ] , then the random variable η on a sub-linear expectation space ( Ω , H , E ) is called maximally distributed.
Definition A11
([28]) (G-normal distribution). If for every given φ C l . l i p ( R ) , writing u ( t , x ) : = E [ φ ( x + t η ) ] , ( t , x ) [ 0 , ) × R , u is the viscosity solution of the partial differential equation (PDE): t u G ( x x 2 u ) = 0 , u ( 0 , x ) = φ ( x ) , where G ( x ) : = 1 2 ( σ ¯ 2 x + σ ̲ 2 x ) and x + : = max x , 0 , x : = ( x ) + , then the random variable η on a sub-linear expectation space ( Ω , H , E ) with σ ¯ 2 = E [ η 2 ] , σ ̲ 2 = E [ η 2 ] is called G-normal distribution, denoted η N { 0 , [ σ ̲ 2 , σ ¯ 2 ] } .
Theorem A2
([24]) (Kolmogorov strong LLN under sub-linear expectation). Assume that { X i } i = 1 is a sequence of IID random variables under E , and E | X 1 | 1 + α < ∞ for some α ( 0 , 1 ] . Let μ ¯ : = E [ X 1 ] , μ ̲ = E [ X 1 ] , and v and V be the lower and upper probabilities, respectively. Then
v μ ̲ lim inf n i = 1 n X i n lim sup n i = 1 n X i n μ ¯ = 1 .
If V is upper continuous, then
V lim inf n i = 1 n X i n = μ ̲ = 1 , V lim sup n i = 1 n X i n = μ ¯ = 1 .
Definition A12
(Empirical distribution function). Assume that { X i } i = 1 n is a sequence of IID random samples from a family of ambiguous probability distributions F θ θ Θ under sub-linear expectation E .
F n ( x ) = 1 n i = 1 n I X i x , x R
is called the empirical distribution function.
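A hypothetical numerical sketch of Definition A12 (rates and sample size chosen arbitrarily): when each draw's distribution may vary inside a family of exponentials, the empirical distribution function stays inside the ambiguity band [ inf θ F θ ( x ) , sup θ F θ ( x ) ] described by Proposition A3:

```python
import math
import random

def empirical_cdf(sample, x):
    # F_n(x) = (1/n) * #{ i : X_i <= x }
    return sum(1 for s in sample if s <= x) / len(sample)

random.seed(0)
lam1, lam2 = 1.0, 2.0  # assumed rates of two exponential scenarios
# Each draw's rate is picked from the family: a stand-in for distributional ambiguity
sample = [random.expovariate(random.choice([lam1, lam2])) for _ in range(20000)]

x = 1.0
lower = 1 - math.exp(-lam1 * x)  # inf over the family of F_theta(x)
upper = 1 - math.exp(-lam2 * x)  # sup over the family of F_theta(x)
fn = empirical_cdf(sample, x)
print(lower < fn < upper)  # F_n(1.0) lies inside the ambiguity band
```

In this simple mixing scheme F_n(x) settles between the two bounds; the extreme limits in Proposition A3 are approached when the banker favors one scenario over increasingly long stretches.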
Proposition A3.
Assume that { X i } i = 1 is a sequence of IID random samples from a family of ambiguous probability distributions F θ θ Θ under sub-linear expectation E . Set F n ( x ) = 1 n i = 1 n I X i x , x R . Then for every given x R ,
v { inf θ Θ F θ ( x ) lim inf n F n ( x ) lim sup n F n ( x ) sup θ Θ F θ ( x ) } = 1 ,
and
V lim inf n F n ( x ) = inf θ Θ F θ ( x ) = 1 , V lim sup n F n ( x ) = sup θ Θ F θ ( x ) = 1 .
Proof. 
For any given x R , set η i x = I X i x , i N . Since { X i } i = 1 is a sequence of IID random samples under E , { η i x } i = 1 is a sequence of IID random samples under E . It can be shown that η 1 x has upper mean μ ¯ and lower mean μ ̲ , where
μ ¯ = E [ η 1 x ] = E I X 1 x = V X 1 x = sup θ Θ F θ ( x ) ,
and μ ̲ = inf θ Θ F θ ( x ) , for any given x R . Take S ^ n = i = 1 n η i x and S ^ n / n = F n ( x ) in Theorem A2; then for any x R , (A7) and (A8) hold. □
Remark A4.
In fact, the empirical distribution function under E of a set of observed data may fail to converge. When it converges, we are back in the classical probability setting; when it does not, we are in the hypothetical scenario considered here. Let us still consider Example A1: the non-convergent case is exactly the situation we are concerned with. In that case, the empirical distribution function still has an upper limit and a lower limit, where the upper limit is the upper probability and the lower limit is the lower probability. This is exactly the result of Proposition A3, by which we can determine the boundary of Θ in Example A1 and thus identify its range of values.
Definition A13.
Suppose that ( Ω , F ) is a given measurable space, and V is a non-additive probability. If V lim n η n = η = 1 , then the random variable sequence { η n } n = 1 converges almost surely (a.s.) to η, denoted η n η , a.s.
Definition A14.
Suppose that ( Ω , F ) is a given measurable space, and V is a non-additive probability. If ∃ { E k } k = 1 , such that { η n } n = 1 converges uniformly to η in E k c : = Ω E k for every given k N and V { E k c } 1 , as k , then the random variable sequence { η n } n = 1 converges uniformly to η, a.s.
Theorem A3
(Egoroff’s Theorem under sub-linear expectation). Suppose that $(\Omega,\mathcal{F})$ is a given measurable space, and $v$ and $V$ are the lower and upper probabilities, respectively. If the random variable sequence $\{\eta_n\}_{n=1}^{\infty}$ converges to $\eta$, a.s., with respect to $v$, then $\{\eta_n\}_{n=1}^{\infty}$ converges uniformly to $\eta$, a.s., with respect to $v$; that is, for every given $k\in\mathbb{N}$ there exists $E_k\in\mathcal{F}$ such that $V\{E_k\}\to 0$ and $v\{E_k^c\}\to 1$ as $k\to\infty$, and for each such $k$,
$$\sup_{\omega\in E_k^c}\left|\eta_n-\eta\right|\to 0, \quad \text{as } n\to\infty.$$
Proof. 
Let $D$ denote the set of points $\omega\in\Omega$ at which $\{\eta_n\}_{n=1}^{\infty}$ does not converge to $\eta$. Then
$$D=\bigcup_{m=1}^{\infty}\bigcap_{j=1}^{\infty}\bigcup_{n=j}^{\infty}\left\{|\eta_n-\eta|\ge\frac{1}{m}\right\}.$$
Since $\{\eta_n\}_{n=1}^{\infty}$ converges to $\eta$, a.s., with respect to $v$, from $V\{D\}+v\{D^c\}=1$ we have $V\{D\}=0$. Then, for every given $m\in\mathbb{N}$, we have
$$V\left\{\bigcap_{j=1}^{\infty}\bigcup_{n=j}^{\infty}\left\{|\eta_n-\eta|\ge\frac{1}{m}\right\}\right\}=0.$$
It is clear that
$$\bigcup_{n=j}^{\infty}\left\{|\eta_n-\eta|\ge\frac{1}{m}\right\}\downarrow\bigcap_{j=1}^{\infty}\bigcup_{n=j}^{\infty}\left\{|\eta_n-\eta|\ge\frac{1}{m}\right\},\quad\text{as } j\to\infty.$$
So, from the condition that $V$ is upper continuous, it can be obtained that
$$\lim_{j\to\infty}V\left\{\bigcup_{n=j}^{\infty}\left\{|\eta_n-\eta|\ge\frac{1}{m}\right\}\right\}=V\left\{\bigcap_{j=1}^{\infty}\bigcup_{n=j}^{\infty}\left\{|\eta_n-\eta|\ge\frac{1}{m}\right\}\right\}=0.$$
So, for every given $k,m\in\mathbb{N}$, there exists $m_k\in\mathbb{N}$ (depending on $m$ and $k$) such that
$$V\left\{\bigcup_{n=m_k}^{\infty}\left\{|\eta_n-\eta|\ge\frac{1}{m}\right\}\right\}\le\frac{1}{2^m}\cdot\frac{1}{k}.$$
Take $E_k=\bigcup_{m=1}^{\infty}\bigcup_{n=m_k}^{\infty}\left\{|\eta_n-\eta|\ge\frac{1}{m}\right\}$; then
$$V\{E_k\}\le\sum_{m=1}^{\infty}V\left\{\bigcup_{n=m_k}^{\infty}\left\{|\eta_n-\eta|\ge\frac{1}{m}\right\}\right\}\le\sum_{m=1}^{\infty}\frac{1}{2^m}\cdot\frac{1}{k}=\frac{1}{k}\to 0.$$
Since $V\{E_k\}+v\{E_k^c\}=1$, we have $v\{E_k^c\}\to 1$ as $k\to\infty$. Moreover, for every given $m\in\mathbb{N}$ and every $n\ge m_k$,
$$\sup_{\omega\in E_k^c}|\eta_n-\eta|\le\frac{1}{m}.$$
Thus, $\{\eta_n\}_{n=1}^{\infty}$ converges uniformly to $\eta$, a.s., with respect to $v$. □
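As a sanity check, the construction in Theorem A3 can be illustrated in its classical (additive) special case with the textbook example $\eta_n(\omega)=\omega^n$ on $\Omega=[0,1)$ under Lebesgue measure: pointwise convergence to $0$ holds, uniform convergence on all of $\Omega$ fails, and deleting the small exceptional set $E_k=(1-1/k,\,1)$ restores uniform convergence on $E_k^c=[0,\,1-1/k]$. This is a hypothetical illustration, not part of the paper's sub-linear setting.

```python
# Classical illustration of the Egoroff construction (Theorem A3):
# eta_n(w) = w**n on Omega = [0, 1) converges pointwise to 0, but the
# sup over Omega stays near 1, so the convergence is not uniform.
# Removing E_k = (1 - 1/k, 1) restores uniform convergence on E_k^c.

def sup_on_Ek_complement(n, k):
    # eta_n is increasing, so the sup over E_k^c = [0, 1 - 1/k]
    # is attained at the right endpoint
    return (1.0 - 1.0 / k) ** n

k = 10
sups = [sup_on_Ek_complement(n, k) for n in (1, 10, 100, 1000)]

# evaluating near 1 shows the failure of uniform convergence on Omega
full_sup = max(w ** 1000 for w in (0.9, 0.99, 0.999999))
print(sups, full_sup)
```

The sequence `sups` decreases rapidly to $0$ (uniform convergence off $E_k$), while `full_sup` remains close to $1$ however large $n$ is taken.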
Theorem A4.
Suppose that $\{\eta_i\}_{i=1}^{\infty}$ is a sequence of IID random variables under $\mathbb{E}$, $f\in C_I\cup C_D$, and $\mathbb{E}\left[|f(\eta_1,y)|^{1+\alpha}\right]<\infty$, for any $y\in\mathbb{R}$ and some $\alpha\in(0,1]$. For any $y\in\mathbb{R}$, denote $S_n(y)=\sum_{i=1}^{n}f(\eta_i,y)$, $n\in\mathbb{N}$, and $F(y)=\mathbb{E}[f(\eta_1,y)]$, $\overline{F}(y)=\mathcal{E}[f(\eta_1,y)]$, where $\mathcal{E}[\cdot]:=-\mathbb{E}[-\cdot]$ denotes the conjugate (lower) expectation. Define
$$y_0(u)=\max\{y\mid F(y)=u\},\quad u\in\left(\inf\{F(y)\mid y\in\mathbb{R}\},\ \sup\{F(y)\mid y\in\mathbb{R}\}\right),$$
and
$$y_1(u)=\max\{y\mid\overline{F}(y)=u\},\quad u\in\left(\inf\{\overline{F}(y)\mid y\in\mathbb{R}\},\ \sup\{\overline{F}(y)\mid y\in\mathbb{R}\}\right).$$
(a) For every given $w\in\left(\inf\{F(y)\mid y\in\mathbb{R}\},\ \sup\{F(y)\mid y\in\mathbb{R}\}\right)$ and $\varepsilon>0$ satisfying $w-\varepsilon\in\left(\inf\{F(y)\mid y\in\mathbb{R}\},\ \sup\{F(y)\mid y\in\mathbb{R}\}\right)$, there exists $N_1\in\mathbb{N}$ such that, for every $n\ge N_1$,
$$v\left\{\sup_{k\ge n}\frac{S_k(y_0(w-\varepsilon))}{k}\le w\right\}\ge 1-\varepsilon.$$
(b) For every given $w\in\left(\inf\{\overline{F}(y)\mid y\in\mathbb{R}\},\ \sup\{\overline{F}(y)\mid y\in\mathbb{R}\}\right)$ and $\varepsilon>0$ satisfying $w+\varepsilon\in\left(\inf\{\overline{F}(y)\mid y\in\mathbb{R}\},\ \sup\{\overline{F}(y)\mid y\in\mathbb{R}\}\right)$, there exists $N_2\in\mathbb{N}$ such that, for every $n\ge N_2$,
$$v\left\{\inf_{k\ge n}\frac{S_k(y_1(w+\varepsilon))}{k}>w\right\}\ge 1-\varepsilon.$$
Proof. 
Since the proofs of (a) and (b) are similar, we only show (a) here.
(a) Since $\{\eta_i\}_{i=1}^{\infty}$ is a sequence of IID random variables under $\mathbb{E}$, for any $y\in\mathbb{R}$, $\{f(\eta_i,y)\}_{i=1}^{\infty}$ is also a sequence of IID random variables under $\mathbb{E}$. Hence, by Theorem A2, we have
$$v\left\{\limsup_{n\to\infty}\frac{S_n(y)}{n}\le F(y)\right\}=1.\tag{A9}$$
For every given $w\in\left(\inf\{F(y)\mid y\in\mathbb{R}\},\ \sup\{F(y)\mid y\in\mathbb{R}\}\right)$ and $\varepsilon>0$ such that $w-\varepsilon\in\left(\inf\{F(y)\mid y\in\mathbb{R}\},\ \sup\{F(y)\mid y\in\mathbb{R}\}\right)$, denote
$$\xi_n^{w-\varepsilon}=\sup_{k\ge n}\frac{S_k(y_0(w-\varepsilon))}{k},\quad n=1,2,\ldots,$$
and
$$\xi^{w-\varepsilon}=\limsup_{n\to\infty}\frac{S_n(y_0(w-\varepsilon))}{n}.$$
Obviously, $\xi_n^{w-\varepsilon}\downarrow\xi^{w-\varepsilon}$ as $n\to\infty$, and by (A9), together with $F(y_0(w-\varepsilon))=w-\varepsilon$, it follows that $v\left\{\lim_{n\to\infty}\xi_n^{w-\varepsilon}=\xi^{w-\varepsilon}\le w-\varepsilon\right\}=1$. Thus, from Theorem A3, there exist $k_0\in\mathbb{N}$, $N_1\in\mathbb{N}$, and $E_{k_0}\in\mathcal{F}$ such that $v\{E_{k_0}^c\}\ge 1-\varepsilon$ and, for any $n\ge N_1$,
$$\sup_{\omega\in E_{k_0}^c}\left|\xi_n^{w-\varepsilon}-\xi^{w-\varepsilon}\right|\le\varepsilon.\tag{A10}$$
By (A10), for all $\omega\in E_{k_0}^c$ and $n\ge N_1$, we have
$$\sup_{k\ge n}\frac{S_k(y_0(w-\varepsilon))}{k}=\xi_n^{w-\varepsilon}\le\xi^{w-\varepsilon}+\varepsilon\le w-\varepsilon+\varepsilon=w.$$
This implies that
$$v\left\{\sup_{k\ge n}\frac{S_k(y_0(w-\varepsilon))}{k}\le w\right\}\ge v\{E_{k_0}^c\}\ge 1-\varepsilon.$$
□
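Part (a) can be checked by Monte Carlo in its degenerate classical specialization (a hypothetical sketch, not the paper's sub-linear setting): take $\eta_i$ i.i.d. standard normal, $f(x,y)=I_{\{x\le y\}}$, so that $F(y)=\Phi(y)$ and $y_0(u)=\Phi^{-1}(u)$. The fraction of simulated paths with $\sup_{k\ge n} S_k(y_0(w-\varepsilon))/k \le w$ should then be at least $1-\varepsilon$ for large $n$. All parameter values below (`w`, `eps`, `n0`, `N`, `reps`) are illustrative choices.

```python
import random
from statistics import NormalDist

random.seed(1)

# Hypothetical classical specialization: eta_i ~ N(0,1) i.i.d.,
# f(x, y) = 1{x <= y}, so F(y) = Phi(y) and y0(u) = Phi^{-1}(u).
w, eps = 0.5, 0.1
y0 = NormalDist().inv_cdf(w - eps)   # y0(w - eps)
n0, N, reps = 500, 2000, 200

hits = 0
for _ in range(reps):
    count = 0          # running S_k(y0) = number of eta_i <= y0
    sup_ratio = 0.0    # sup_{k >= n0} S_k(y0) / k along this path
    for k in range(1, N + 1):
        count += (random.gauss(0.0, 1.0) <= y0)
        if k >= n0:
            sup_ratio = max(sup_ratio, count / k)
    hits += (sup_ratio <= w)

freq = hits / reps     # empirical analogue of v{sup_{k>=n} S_k/k <= w}
print(freq)
```

Since the running mean of the indicators concentrates near $w-\varepsilon=0.4$, the event $\sup_{k\ge n_0} S_k/k \le w$ occurs on almost every path, so the observed frequency comfortably exceeds the bound $1-\varepsilon$ in the theorem.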

Share and Cite

Fu, X.; Hu, F.; Meng, X.; Tian, Y.; Yang, D. Laws of Large Numbers for Uncertain Random Variables in the Framework of U-S Chance Theory. Symmetry 2025, 17, 62. https://doi.org/10.3390/sym17010062