The Strong Laws of Large Numbers for Set-Valued Random Variables in Fuzzy Metric Space

Abstract: In this paper, we first introduce the definition of the fuzzy metric of sets and discuss the properties of the fuzzy metric induced by the Hausdorff metric. We then prove limit theorems for set-valued random variables in fuzzy metric space, where the convergence is with respect to the fuzzy metric induced by the Hausdorff metric. This work extends the classical results for set-valued random variables to fuzzy metric space.


Introduction
Set-valued theory has been an active research topic in recent years. In the real world, we sometimes cannot obtain accurate single-valued data. For example, when describing the price of a stock on a given day, a single-point value is too limited to characterize the changes in the stock's price over the day, so it is more appropriate to describe the price by a set. Many scholars have done a great deal of elegant work on set-valued theory. Arrow and Debreu [1] introduced the concept of set-valued random variables in 1954, and Aumann [2] introduced their integral in 1965. Hiai and Umegaki [3] gave the definition of the conditional expectation of set-valued random variables in 1977. Beer [4] discussed the topologies of spaces of closed and closed convex sets in 1993.
It is well known that limit theorems are important in probability and statistics. Since the 1970s, many scholars have studied the strong law of large numbers (SLLN, for short) for set-valued random variables. Artstein and Vitale [5] proved an SLLN for compact set-valued random variables in R^p. Puri and Ralescu [6] obtained SLLNs for independent and identically distributed compact convex set-valued random variables in Banach spaces. Taylor and Inoue discussed convergence theorems for independent and for weighted sums of set-valued random variables in [7,8], respectively. Fu et al. [9], Castaing et al. [10] and Li and Guan [11,12] also studied limit theorems for set-valued random variables under various conditions. All the above studies were carried out in the sense of a crisp distance between sets. In real life, however, the distance between two objects is sometimes uncertain and may not be easy to describe with an explicit distance; only fuzzy language such as "very close" or "very far" can describe it. It is therefore necessary to study the fuzzy metric.
George and Veeramani first gave the definition of a fuzzy metric in [13] and discussed the conditions for completeness and separability in fuzzy metric spaces in [14]. Later, Gregori et al. carried out extensive research on fuzzy metric spaces in [15–18]. Miñana et al. [19] and Wu et al. [20] discussed properties of fuzzy metric spaces. Morillas et al. [21,22] discussed applications of fuzzy metrics in image filtering and other practical fields. In addition, Saadati and Vaezpour [23] defined fuzzy normed spaces, studied their properties and discussed the relationship between fuzzy norms and fuzzy metrics, thereby defining fuzzy Banach spaces. Some scholars have also developed the fuzzy metric in other ways [24,25]. However, the elements in the above papers are still single-point values. In [26], Ghasemi et al. extended the fuzzy metric space to the case of set-valued and fuzzy set-valued random variables and discussed laws of large numbers for fuzzy set-valued random variables, but they did not give a complete statement and definition of the fuzzy metric and fuzzy norm for sets. In this paper, we consider the definition of the fuzzy metric for sets, discuss its properties and study SLLNs for set-valued random variables in fuzzy metric space.
This article is organized as follows. In Section 2, we introduce concepts and notation for set-valued random variables. In Section 3, we introduce the concepts of fuzzy metric and fuzzy norm on K(X) and discuss their properties. In Section 4, we prove an SLLN for independent and identically distributed compact set-valued random variables and an SLLN for independent, tight set-valued random variables; the convergence is with respect to the fuzzy metric M_{d_H} induced by d_H.

Preliminaries on Set-Valued Random Variables
Throughout this paper, we assume that (Ω, A, µ) is a complete probability space (i.e., every µ-null set belongs to the σ-field A); for details about complete probability spaces, readers may refer to [27] (p. 55, Theorem B). (X, ‖·‖) is a Banach space, and K(X) (K_k(X), K_c(X)) is the family of all nonempty closed (compact, convex, respectively) subsets of X. For a set A ∈ K(X), coA denotes the convex hull of A.
Let A, B ∈ K(X) and λ ∈ R. Define the Minkowski addition and scalar multiplication as

A + B = {a + b : a ∈ A, b ∈ B},  λA = {λa : a ∈ A}.

The Hausdorff metric on K(X) is defined by

d_H(A, B) = max{ sup_{a∈A} inf_{b∈B} ‖a − b‖, sup_{b∈B} inf_{a∈A} ‖a − b‖ }.

The metric space (K_k(X), d_H) is complete and separable, and K_kc(X) is a closed subset of (K_k(X), d_H) (cf. [28], Theorems 1.1.2 and 1.1.3).
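The Minkowski operations and d_H above can be checked numerically. The following minimal sketch implements both definitions directly for finite subsets of R (an illustrative simplification; the paper works with general compact subsets of a Banach space X):

```python
def minkowski_sum(A, B):
    """A + B = {a + b : a in A, b in B}."""
    return {a + b for a in A for b in B}

def scalar_mult(lam, A):
    """lam * A = {lam * a : a in A}."""
    return {lam * a for a in A}

def hausdorff(A, B):
    """d_H(A, B) = max( sup_a inf_b |a-b|, sup_b inf_a |a-b| )."""
    d_ab = max(min(abs(a - b) for b in B) for a in A)
    d_ba = max(min(abs(a - b) for a in A) for b in B)
    return max(d_ab, d_ba)

A = {0.0, 1.0}
B = {0.5, 2.0}
print(sorted(minkowski_sum(A, B)))   # [0.5, 1.5, 2.0, 3.0]
print(hausdorff(A, B))               # 1.0
```

Note that d_H(A, B) = 1.0 here is driven by the point 2.0 of B, whose nearest point in A is at distance 1.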
For each A ∈ K(X), the support function is defined by

s(x*, A) = sup_{a∈A} ⟨x*, a⟩,  x* ∈ X*,

where X* is the dual space of X. We now recall the definition of the total generalized Hukuhara (gH) difference from [29]. We say that C ∈ D(A, B) is minimal with respect to set magnitude (norm-minimal, for short) if no C′ ∈ D(A, B) exists with ‖C′‖ < ‖C‖. The set of all elements of D(A, B) with the norm-minimality property is denoted by D_norm(A, B).
Let A, B ∈ K_kc(R^n) be given. The following convex set always exists and is unique, where co denotes the closed convex hull; A ⊖_t B is called the total gH-difference of A and B. For each set-valued random variable F, the expectation of F is defined by

E[F] = { ∫_Ω f dµ : f ∈ S_F },

where ∫_Ω f dµ is the usual Bochner integral in L^1[Ω, X] (the family of integrable X-valued random variables) and S_F is the family of integrable selections of F. Let L^p[Ω; K_k(X)] (L^p[Ω; K_kc(X)], respectively) denote the space of all integrably bounded compact (compact convex) set-valued random variables.
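As a concrete sanity check on the expectation above: when Ω is finite and F takes compact convex values (intervals of R in this sketch, an assumption for illustration), E[F] reduces to the probability-weighted Minkowski combination of the values:

```python
# For a set-valued random variable F on a finite probability space with
# convex compact values (intervals here), the Aumann expectation reduces
# to the weighted Minkowski sum E[F] = sum_i p_i F(w_i).

def expectation(values, probs):
    """values: list of intervals (a, b); probs: matching probabilities."""
    lo = sum(p * a for (a, _), p in zip(values, probs))
    hi = sum(p * b for (_, b), p in zip(values, probs))
    return (lo, hi)

F = [(0.0, 1.0), (2.0, 4.0)]   # F(w1) = [0, 1], F(w2) = [2, 4]
p = [0.5, 0.5]
print(expectation(F, p))        # (1.0, 2.5)
```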
For any F ∈ L^1[Ω; K_k(X)], write ‖F‖_K = d_H(F, {0}) = sup_{a∈F} ‖a‖. For more concepts and results on set-valued random variables, readers may refer to the books [28–31].

Fuzzy Metric Space
In this section, we shall introduce the definition of fuzzy metric and fuzzy norm on K(X), and discuss their properties.

Definition 1. (cf. [32]) A t-norm is a binary operator * : [0, 1] × [0, 1] → [0, 1] such that for all a, b, c, d ∈ [0, 1] the following conditions are satisfied:
(1) a * b = b * a;
(2) a * (b * c) = (a * b) * c;
(3) a * b ≤ c * d whenever a ≤ c and b ≤ d;
(4) a * 1 = a.
When * is a continuous function on [0, 1] × [0, 1], it is said to be a continuous t-norm.

Definition 2.
Let X be an arbitrary non-empty set and * a continuous t-norm. The 3-tuple (K(X), M, *) is said to be a fuzzy metric space for sets if M is a fuzzy set on K(X) × K(X) × (0, ∞) satisfying the following conditions for all A, B, C ∈ K(X) and t, s > 0:
(1) M(A, B, t) > 0;
(2) M(A, B, t) = 1 if and only if A = B;
(3) M(A, B, t) = M(B, A, t);
(4) M(A, C, t + s) ≥ M(A, B, t) * M(B, C, s);
(5) M(A, B, ·) : (0, ∞) → [0, 1] is continuous.
M is called a fuzzy metric on K(X).

Definition 3.
Let X be a vector space and * a continuous t-norm. The 3-tuple (K(X), N, *) is said to be a fuzzy normed space for sets if N is a fuzzy set on K(X) × (0, ∞) satisfying the following conditions for all A, B ∈ K(X) and t, s > 0:
(1) N(A, t) > 0;
(2) N(A, t) = 1 if and only if A = {0};
(3) N(αA, t) = N(A, t/|α|) for every scalar α ≠ 0;
(4) N(A + B, t + s) ≥ N(A, t) * N(B, s);
(5) N(A, ·) : (0, ∞) → [0, 1] is continuous and lim_{t→∞} N(A, t) = 1.
In this case, it is easy to show that (K(X), M, *) is a fuzzy metric space and (K(X), N, *) is a fuzzy normed space. M is called the fuzzy metric induced by d_H, and N is called the fuzzy norm induced by d_H. There are also other kinds of fuzzy metrics and fuzzy norms induced by d_H; we denote a fuzzy metric induced by d_H by M_{d_H} and a fuzzy norm induced by d_H by N_{d_H}.
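A concrete instance of such an induced fuzzy metric and fuzzy norm (an assumption on our part, following George and Veeramani's standard construction for ordinary metrics; it is consistent with the numerical values in Example 1 below) is:

```latex
% Standard fuzzy metric induced by d_H, with * the product t-norm:
M_{d_H}(A, B, t) = \frac{t}{t + d_H(A, B)}, \qquad A, B \in K(X),\ t > 0,
% and, analogously, a fuzzy norm induced by d_H:
N_{d_H}(A, t) = \frac{t}{t + d_H(A, \{0\})}, \qquad A \in K(X),\ t > 0.
```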
From Definition 2, we can easily obtain the following property: M(A, B, t) is non-decreasing with respect to t.
Next is the definition of convergence for sets in fuzzy metric space. Proof. Assume {x_n} is a monotone sequence; then there exists a constant n_0 such that, when n > n_0, x_n increases (or decreases) monotonically to x_0. By Remark I (3), there exists k_n > 0 such that x_n = k_n x_0, and N(x_n, t) is monotone non-increasing (or non-decreasing) for n > n_0, so lim_{n→∞} N(x_n, t) exists. Take x_n = k_n x_0 for n > n_0, where k_n increases (or decreases) monotonically to 1. Thus, for n > n_0, where t > 0 and ⊖_t is the total gH-difference [33], M is a fuzzy metric on K_kc(X); we call it the fuzzy metric induced by N. (2) For all A, B ∈ K_kc(X) and every scalar α ≠ 0, we have The result has been proved.
Theorem 3. Let X be a separable normed space. Then there exist a fuzzy normed space X′ and a function j : K_kc(X) → X′ with the following properties: Thus, K_kc(X) is embedded into the fuzzy normed space X′ by j(·).
Proof. By the embedding theorem in [28], there exists an embedding function j : K_kc(X) → X′, and j is an isometric and isomorphic function. The result has been proved.
The following theorem gives the separability of the fuzzy metric space for sets.

Laws of Large Numbers in Fuzzy Metric
In this section, we give the convergence theorems for set-valued random variables in the sense of M_{d_H}, the fuzzy metric induced by the Hausdorff metric d_H. First, we introduce the Shapley–Folkman inequality for set-valued random variables in the sense of the fuzzy metric M_{d_H}, which will be used later.
where p is the dimension of X.
Then we have the Shapley-Folkman inequality for set-valued random variables in fuzzy metric space.
for any t ≥ 0, where p is the dimension of X.
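For reference, the classical crisp Shapley–Folkman inequality for compact sets A_1, …, A_n in a p-dimensional space, together with the fuzzy bound it yields under the standard induced metric M_{d_H}(A, B, t) = t/(t + d_H(A, B)) (stated here as one standard form under that assumed metric; the authors' exact formulation may differ), reads:

```latex
d_H\!\Big(\sum_{i=1}^{n} A_i,\ \mathrm{co}\sum_{i=1}^{n} A_i\Big)
   \le \sqrt{p}\,\max_{1\le i\le n}\|A_i\|_K ,
\qquad
M_{d_H}\!\Big(\sum_{i=1}^{n} A_i,\ \mathrm{co}\sum_{i=1}^{n} A_i,\ t\Big)
   \ge \frac{t}{t+\sqrt{p}\,\max_{1\le i\le n}\|A_i\|_K},
```

where ‖A‖_K = sup_{a∈A} ‖a‖.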
Proof. According to Lemma 3, for fixed n there exists k_0(n), 1 ≤ k_0(n) ≤ n, such that the stated estimate holds; that is, the bound holds for fixed n. Furthermore, by Remark II, for t > 0, the fuzzy-metric bound follows. The result has been proved.
Let {F_k : k ∈ N} ⊂ L^1[Ω; K_k(X)] be a sequence of independent and identically distributed (i.i.d.) set-valued random variables. Then, in the metric M_{d_H}, we have the following convergence; that is, for t > 0, Proof.
Step 1. Let F_k : Ω → K_kc(X) be independent and identically distributed set-valued random variables and let j : K_kc(X) → X′ be the isometry provided by Theorem 3. Then {j(F_k) : k ∈ N} are i.i.d. X′-valued random elements and are integrable. By a standard SLLN in Banach space (see [35]), the corresponding convergence follows, and it then follows from the embedding theorem (j is an isometric isomorphism) that the convergence carries back to {F_k}.

Step 2. Consider the general case. Let F_k : Ω → K_k(X); then {coF_k : k ∈ N} are i.i.d. compact convex set-valued random variables. It follows from Step 1, Theorem 7 and Theorem 5 that, for t > 0, the corresponding convergences hold. Finally, from the triangle inequality (Definition 2 (4)), and since by (2) and (3) the terms on the right tend to 1, we obtain the conclusion. The proof is complete.
The sequence {F_n : n ∈ N} ⊂ L^1[Ω, A, µ; K_k(X)] is said to be tight if for every ε > 0 there is a compact subset K_ε of K_k(X) such that µ{F_n ∉ K_ε} < ε for all n ∈ N. From the definition of tightness, we have the following lemma.

Lemma 4. Let {F_n : n ∈ N} ⊂ L^1[Ω, A, µ; K_kc(X)] be tight and let j be the embedding function in Theorem 3; then {j(F_n) : n ∈ N} is also tight.
Proof. Since {F_n : n ∈ N} ⊂ L^1[Ω, A, µ; K_kc(X)] is tight, by the definition of tightness for set-valued sequences, for every ε > 0 there exists a compact subset K_ε of K_k(X) with respect to the metric d_H such that µ{F_n ∉ K_ε} < ε for all n ∈ N. Since j is an isometric isomorphism, j(K_ε) is also a compact subset of X′. We have

µ{ j(F_n) ∉ j(K_ε) } = µ{ F_n ∉ K_ε } < ε,  for all n.
That means {j(F n ) : n ∈ N} is tight.

Proof.
Step 1. Let F_k : Ω → K_kc(X) be independent set-valued random variables with E[‖F_k‖_K^p] < ∞ for all k, and let j : K_kc(X) → X′ be the isometry provided by Theorem 3. Then {j(F_k) : k ∈ N} are independent X′-valued random elements. By Lemma 4, {j(F_k) : k ≥ 1} is tight in X′. By a standard SLLN in Banach space (cf. [36], Theorem 2), the corresponding convergence holds; then, by (4) and Theorem 2, for t > 0, it follows from the embedding theorem (j is an isometric isomorphism) that the convergence carries back to {F_k}.

Step 2. Consider the general case. Let F_k ∈ L^1[Ω, K_k(X)], so that {coF_k : k ∈ N} is independent and satisfies the same conditions. It follows from Step 1 and Theorem 7 that the corresponding convergences hold. Finally, from the triangle inequality (Definition 2 (4)), and since by (5) and (6) the terms on the right tend to 1 as n → ∞, we obtain the conclusion. The proof is complete.
Next, we shall give two examples.

Example 1.
In order to provide a more intuitive understanding of the fuzzy metric, we give a practical example. We compare the degree of closeness of the return rates of stocks A_1 and A_2 on a certain day and measure it by the fuzzy metric. As the stock price changes during the day, we select three time points at which to record their return rates. For t = 0.08, M_{d_H}(A_1, A_2, t) = 0.57; for t = 2, M_{d_H}(A_1, A_2, t) = 0.97. We can say that at the scale t = 2, A_1 and A_2 are extremely close, but at the scale t = 0.08 the degree of closeness is only 0.57.
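Since the fuzzy metric here is a function of the Hausdorff distance, the reported values can be reproduced numerically. The following sketch uses hypothetical three-point return-rate sets (the concrete data are our assumption, chosen so that d_H(A_1, A_2) = 0.06) and assumes the standard induced metric M_{d_H}(A, B, t) = t/(t + d_H(A, B)), which matches the values 0.57 and 0.97 quoted above:

```python
def hausdorff(A, B):
    """Hausdorff distance between finite subsets of R."""
    d_ab = max(min(abs(a - b) for b in B) for a in A)
    d_ba = max(min(abs(a - b) for a in A) for b in B)
    return max(d_ab, d_ba)

def fuzzy_metric(A, B, t):
    # Standard fuzzy metric induced by d_H (an assumption for this sketch).
    return t / (t + hausdorff(A, B))

A1 = {0.01, 0.04, 0.06}   # hypothetical return rates of stock A1
A2 = {0.02, 0.05, 0.12}   # hypothetical return rates of stock A2

print(round(hausdorff(A1, A2), 2))            # 0.06
print(round(fuzzy_metric(A1, A2, 0.08), 2))   # 0.57
print(round(fuzzy_metric(A1, A2, 2.0), 2))    # 0.97
```

Note how the same pair of sets looks "very close" at the coarse scale t = 2 but only moderately close at the fine scale t = 0.08.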

Example 2.
We can use an interval value [a, b] to describe the price of a stock in a day, where a and b are the minimal and maximal prices, respectively. Assume we obtain interval-valued data F_1, F_2, · · · satisfying the conditions of Theorem 8; then, at different levels t (t can be thought of as a different evaluation scale), we consider the distance between the average (1/n) ∑_{i=1}^n F_i and the population mean. By Theorem 8, we obtain the convergence.
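A small simulation illustrates this convergence for interval values. The price data and the induced metric M_{d_H}(A, B, t) = t/(t + d_H(A, B)) are our assumptions, and for closed intervals d_H([a, b], [c, d]) = max(|a − c|, |b − d|):

```python
import random

def interval_hausdorff(I, J):
    """d_H([a, b], [c, d]) = max(|a - c|, |b - d|) for closed intervals."""
    return max(abs(I[0] - J[0]), abs(I[1] - J[1]))

def fuzzy_metric(I, J, t):
    # Standard fuzzy metric induced by d_H (assumption for this sketch).
    return t / (t + interval_hausdorff(I, J))

random.seed(0)
n = 20000
# Hypothetical daily price intervals: low a_i ~ U(9, 10), high b_i = a_i + U(0, 2),
# so the population mean interval is E[F] = [9.5, 10.5].
mean_interval = (9.5, 10.5)
sa = sb = 0.0
for _ in range(n):
    a = random.uniform(9, 10)
    sa += a
    sb += a + random.uniform(0, 2)
avg = (sa / n, sb / n)   # Minkowski average (1/n) * sum F_i
print(fuzzy_metric(avg, mean_interval, 1.0))   # approaches 1 as n grows
```

As n increases, d_H between the average interval and the mean interval shrinks, so M_{d_H} at any fixed scale t tends to 1, as Theorem 8 asserts.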
Author Contributions: L.G. is mainly responsible for funding acquisition and methodology; H.M. for the scientific research; J.W. for writing the original draft; and J.Z. for review and editing. All authors have read and agreed to the published version of the manuscript.

Funding:
The work is supported by the National Social Science Fund of China (No. 19BTJ017).

Conflicts of Interest:
The authors declare no conflict of interest.