Article

Sine Entropy of Uncertain Random Variables

1 College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China
2 College of Mathematical and System Sciences, Xinjiang University, Urumqi 830046, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(11), 2023; https://doi.org/10.3390/sym13112023
Submission received: 1 September 2021 / Revised: 16 October 2021 / Accepted: 19 October 2021 / Published: 26 October 2021
(This article belongs to the Special Issue Fuzzy Set Theory and Uncertainty Theory)

Abstract

Entropy is commonly used to measure the uncertainty of uncertain random variables, and within chance theory it has been defined as a logarithmic entropy. However, logarithmic entropy sometimes fails to measure the uncertainty of certain uncertain random variables. To address this problem, this paper proposes two types of entropy for uncertain random variables, sine entropy and partial sine entropy, and studies some of their properties. Important properties of both, such as translation invariance and positive linearity, are obtained. In addition, calculation formulas for the sine entropy and partial sine entropy of uncertain random variables are given.

1. Introduction

Entropy is a parameter describing the disorder of objective things. Shannon [1] held that information is the elimination or reduction of uncertainty in people's understanding of things, and called the degree of this uncertainty information entropy. Since then, many scholars have studied Shannon entropy. Using fuzzy set theory, Zadeh [2] introduced fuzzy entropy to quantify the degree of fuzziness. Following that, De Luca and Termini [3] proposed a definition of fuzzy entropy, that is, the uncertainty associated with a fuzzy set. After that, many studies addressed the definition and application of fuzzy entropy, such as Bhandary and Pal [4], Pal and Pal [5], and Pal and Bezdek [6]. Furthermore, Li and Liu [7] put forward a definition of the entropy of a fuzzy variable.
In 2007, in order to study the uncertainty related to belief degrees, Liu [8] established uncertainty theory, and he refined it into a branch of axiomatic mathematics in 2009 [9]. The concept of an uncertain variable was defined in [10]. After that, Liu [9] gave a definition of the expected value of an uncertain variable, and Liu and Ha [11] gave a formula for calculating the expected value of a function of uncertain variables. Liu [8] proposed formulas based on the uncertainty distribution for calculating variance and moments, while Yao [12] and Sheng and Kar [13] proposed formulas using the inverse uncertainty distribution. Liu [8] also proposed the concept of the logarithmic entropy of uncertain variables. Later, Dai and Chen [14] established a formula to calculate this entropy through the inverse uncertainty distribution. In addition, Chen and Dai [15] studied the maximum entropy principle. After that, Dai [16] proposed quadratic entropy, and Yao et al. [17] proposed the sine entropy of uncertain variables.
To deal with quantities involving indeterminacy, we have two mathematical tools: probability theory and uncertainty theory. Probability theory is a powerful tool for modeling frequencies through samples, and uncertainty theory is a tool for modeling belief degrees. However, as a system becomes more and more complex, it exhibits both uncertainty and randomness. In 2013, Liu [18] established chance theory for modeling such systems. Liu [19] also proposed and studied the basic concept of the chance measure, a monotonically increasing set function satisfying self-duality. Hou [20] proved that the chance measure satisfies subadditivity. Liu [19] further put forward some basic concepts, including the uncertain random variable, its chance distribution, its digital features, and so on. Furthermore, Sheng and Yao [21] provided formulas for calculating the variance. Sheng et al. [22] proposed the concept of logarithmic entropy for uncertain random variables in 2017. After that, Ahmadzade et al. [23] proposed the concept of quadratic entropy, and Ahmadzade et al. [24] studied partial logarithmic entropy.
Since logarithmic entropy may fail to measure the uncertainty of uncertain random variables in some cases, this paper proposes two new entropies for uncertain random variables, namely sine entropy and partial sine entropy, and discusses their properties. Furthermore, calculation formulas for sine entropy and partial sine entropy are obtained using chance theory. Section 2 reviews some basic concepts of chance theory. Section 3 introduces the concept and basic properties of the sine entropy of uncertain random variables. Section 4 proposes the concept of partial sine entropy and discusses its properties. Finally, Section 5 gives a brief summary.

2. Preliminaries

In this section, we review some basic concepts of chance theory.
Definition 1
(Liu [18]). Let $(\Gamma, \mathcal{L}, \mathcal{M})$ be an uncertainty space and $(\Omega, \mathcal{A}, \Pr)$ be a probability space. Then, the product $(\Gamma, \mathcal{L}, \mathcal{M}) \times (\Omega, \mathcal{A}, \Pr)$ is called a chance space. Let $\Theta \in \mathcal{L} \times \mathcal{A}$ be an uncertain random event. Then, the chance measure of $\Theta$ is defined as
$$\mathrm{Ch}\{\Theta\} = \int_0^1 \Pr\big\{\omega \in \Omega \mid \mathcal{M}\{\gamma \in \Gamma \mid (\gamma, \omega) \in \Theta\} \ge r\big\}\, dr.$$
The chance measure satisfies: (i) normality [18]: $\mathrm{Ch}\{\Gamma \times \Omega\} = 1$; (ii) duality [18]: $\mathrm{Ch}\{\Theta\} + \mathrm{Ch}\{\Theta^c\} = 1$ for any event $\Theta$; (iii) monotonicity [18]: $\mathrm{Ch}\{\Theta_1\} \le \mathrm{Ch}\{\Theta_2\}$ for any events $\Theta_1 \subset \Theta_2$; and (iv) subadditivity [20]: $\mathrm{Ch}\left\{\bigcup_{i=1}^{\infty} \Theta_i\right\} \le \sum_{i=1}^{\infty} \mathrm{Ch}\{\Theta_i\}$ for any sequence of events $\Theta_1, \Theta_2, \ldots$
Definition 2
(Liu [18]). A function $\xi$ from a chance space $(\Gamma, \mathcal{L}, \mathcal{M}) \times (\Omega, \mathcal{A}, \Pr)$ to the set of real numbers is called an uncertain random variable if $\{\xi \in B\}$ is an event in $\mathcal{L} \times \mathcal{A}$ for any Borel set $B$ of real numbers.
Definition 3
(Liu [18]). Let $\xi$ be an uncertain random variable. Then, the function
$$\Phi(x) = \mathrm{Ch}\{\xi \le x\}, \quad x \in \mathbb{R},$$
is called the chance distribution of $\xi$.
Theorem 1
(Liu [19]). Let $\Psi_1, \Psi_2, \ldots, \Psi_m$ be the probability distributions of independent random variables $\eta_1, \eta_2, \ldots, \eta_m$, and let $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$ be the uncertainty distributions of independent uncertain variables $\tau_1, \tau_2, \ldots, \tau_n$, respectively. Then, the chance distribution of $\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ is
$$\Phi(x) = \int_{\mathbb{R}^m} F(x, y_1, \ldots, y_m)\, d\Psi_1(y_1) \cdots d\Psi_m(y_m),$$
where $F(x, y_1, \ldots, y_m)$ is the uncertainty distribution of $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$ for any $(y_1, \ldots, y_m) \in \mathbb{R}^m$ and is determined by $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$.
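To make Theorem 1 concrete, the following Python sketch numerically evaluates the chance distribution of a sum $\xi = \eta + \tau$. The specific setup, with $\eta$ a standard normal random variable and $\tau$ a linear uncertain variable $\mathcal{L}(1, 3)$, is an illustrative assumption of ours, not an example from the paper.

```python
# Minimal numerical sketch of Theorem 1 (illustrative assumptions: eta is a
# standard normal random variable, tau is a linear uncertain variable L(1, 3)).
# For xi = eta + tau, F(x, y) = Upsilon(x - y), so
# Phi(x) = integral over R of Upsilon(x - y) dPsi(y).
import numpy as np
from scipy import integrate, stats

def upsilon(z, a=1.0, b=3.0):
    """Uncertainty distribution of a linear uncertain variable L(a, b)."""
    return np.clip((z - a) / (b - a), 0.0, 1.0)

def chance_distribution(x):
    """Phi(x) = E_eta[Upsilon(x - eta)], computed by numerical integration."""
    integrand = lambda y: upsilon(x - y) * stats.norm.pdf(y)
    return integrate.quad(integrand, -np.inf, np.inf)[0]

for x in (0.0, 2.0, 4.0, 6.0):
    print(f"Phi({x}) = {chance_distribution(x):.4f}")
```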
Definition 4
(Sheng et al. [22]). Let $\Phi(x)$ be the chance distribution of an uncertain random variable $\xi$. Then, the entropy of $\xi$ is defined by
$$H[\xi] = \int_{-\infty}^{+\infty} S\big(\Phi(x)\big)\, dx,$$
where $S(t) = -t\ln t - (1-t)\ln(1-t)$.
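Under the same illustrative assumptions as in the sketch above, Definition 4 can be evaluated numerically; the integration range below is a truncation we choose to cover the effective support of $\xi$.

```python
# Sketch of Definition 4: H[xi] = int S(Phi(x)) dx with
# S(t) = -t ln t - (1 - t) ln(1 - t), reusing the illustrative xi = eta + tau
# (eta standard normal, tau linear L(1, 3)) from the previous sketch.
import numpy as np
from scipy import integrate, stats

def upsilon(z, a=1.0, b=3.0):
    return np.clip((z - a) / (b - a), 0.0, 1.0)

def phi(x):
    return integrate.quad(lambda y: upsilon(x - y) * stats.norm.pdf(y),
                          -np.inf, np.inf)[0]

def S(t):
    """S(t) = -t ln t - (1-t) ln(1-t), extended by S(0) = S(1) = 0."""
    t = min(max(t, 1e-15), 1 - 1e-15)
    return -t * np.log(t) - (1 - t) * np.log(1 - t)

H = integrate.quad(lambda x: S(phi(x)), -10, 15, limit=200)[0]
print(f"H[xi] = {H:.4f}")
```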
Definition 5
(Ahmadzade et al. [23]). Let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables, and let $\tau_1, \tau_2, \ldots, \tau_n$ be uncertain variables. Then, the partial entropy of $\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ is defined by
$$PH[\xi] = \int_{\mathbb{R}^m} \int_{-\infty}^{+\infty} S\big(F(x, y_1, \ldots, y_m)\big)\, dx\, d\Psi_1(y_1) \cdots d\Psi_m(y_m),$$
where $S(t) = -t\ln t - (1-t)\ln(1-t)$ and $F(x, y_1, \ldots, y_m)$ is the uncertainty distribution of $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$ for any real numbers $y_1, \ldots, y_m$.

3. Sine Entropy of Uncertain Random Variables

Logarithmic entropy may fail to measure the uncertainty of uncertain random variables in some cases. Therefore, we propose the sine entropy of uncertain random variables as a supplementary measure for the cases where logarithmic entropy fails, as shown below.
Definition 6.
Let $\Phi(x)$ be the chance distribution of an uncertain random variable $\xi$. Then, the sine entropy of $\xi$ is defined by
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\big(\pi\Phi(x)\big)\, dx.$$
Obviously, $\sin(\pi x)$ is symmetric about $x = 0.5$, reaches its unique maximum 1 at $x = 0.5$, and is strictly increasing on $[0, 0.5]$ and strictly decreasing on $[0.5, 1]$. By Definition 6, we have $SH[\xi] \ge 0$. If $\xi = c$ is a constant, which is a special uncertain random variable, then $SH[\xi] = 0$ and $SH[\xi + c] = SH[\xi]$. Moreover, if $\xi$ takes values in $[a, b]$ and its chance distribution satisfies $\Phi(x) = 0.5$ for $x \in (a, b)$, then $SH[\xi] = b - a$.
Remark 1.
We can see that the sine entropy of an uncertain random variable is invariant under arbitrary translations.
Example 1.
Let $\Psi$ be the probability distribution of a random variable $\eta$, and let $\Upsilon$ be the uncertainty distribution of an uncertain variable $\tau$. Then, the sine entropy of the sum $\xi = \eta + \tau$ is
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\left(\pi \int_{-\infty}^{+\infty} \Upsilon(x - y)\, d\Psi(y)\right) dx.$$
Example 2.
Let $\Psi$ be the probability distribution of a random variable $\eta > 0$, and let $\Upsilon$ be the uncertainty distribution of an uncertain variable $\tau > 0$. Then, the sine entropy of the product $\xi = \eta\tau$ is
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\left(\pi \int_0^{+\infty} \Upsilon(x/y)\, d\Psi(y)\right) dx.$$
Example 3.
Let $\Psi$ be the probability distribution of a random variable $\eta$, and let $\Upsilon$ be the uncertainty distribution of an uncertain variable $\tau$. Then, the sine entropy of the minimum $\xi = \eta \wedge \tau$ is
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\Big(\pi\big[\Psi(x) + \Upsilon(x) - \Psi(x)\Upsilon(x)\big]\Big)\, dx.$$
Example 4.
Let $\Psi$ be the probability distribution of a random variable $\eta$, and let $\Upsilon$ be the uncertainty distribution of an uncertain variable $\tau$. Then, the sine entropy of the maximum $\xi = \eta \vee \tau$ is
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\big(\pi\, \Psi(x)\Upsilon(x)\big)\, dx.$$
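The formulas of Examples 1–4 reduce the computation to one-dimensional numerical integration. The sketch below evaluates Example 1 under the illustrative assumptions used earlier ($\eta$ standard normal, $\tau \sim \mathcal{L}(1, 3)$) and also checks the translation invariance stated in Remark 1.

```python
# Sketch of Example 1 and Remark 1: SH[eta + tau] via Definition 6, and the
# same value after shifting by a constant c (the shifted variable has chance
# distribution Phi(x - c), so the integral is unchanged). Illustrative
# assumptions: eta standard normal, tau linear L(1, 3).
import numpy as np
from scipy import integrate, stats

def upsilon(z, a=1.0, b=3.0):
    return np.clip((z - a) / (b - a), 0.0, 1.0)

def phi(x):
    return integrate.quad(lambda y: upsilon(x - y) * stats.norm.pdf(y),
                          -np.inf, np.inf)[0]

def sine_entropy(shift=0.0):
    integrand = lambda x: np.sin(np.pi * phi(x - shift))
    return integrate.quad(integrand, -10, 20, limit=200)[0]

print(sine_entropy(0.0))  # SH[eta + tau]
print(sine_entropy(5.0))  # SH[eta + tau + 5]; should match the value above
```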
Theorem 2.
Let $\Phi^{-1}$ be the inverse chance distribution of an uncertain random variable $\xi$. Then, the sine entropy $SH[\xi]$ is
$$SH[\xi] = \pi \int_0^1 \Phi^{-1}(1 - \alpha)\cos(\pi\alpha)\, d\alpha.$$
Proof. 
Since $\xi$ has an inverse chance distribution $\Phi^{-1}$, it has a regular chance distribution $\Phi$. We can obtain
$$\sin\big(\pi\Phi(x)\big) = \int_0^{\Phi(x)} \pi\cos(\pi\alpha)\, d\alpha = -\int_{\Phi(x)}^1 \pi\cos(\pi\alpha)\, d\alpha,$$
and then the sine entropy of $\xi$ can be written as
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\big(\pi\Phi(x)\big)\, dx = \int_{-\infty}^{0} \sin\big(\pi\Phi(x)\big)\, dx + \int_{0}^{+\infty} \sin\big(\pi\Phi(x)\big)\, dx = \int_{-\infty}^{0} \int_0^{\Phi(x)} \pi\cos(\pi\alpha)\, d\alpha\, dx - \int_{0}^{+\infty} \int_{\Phi(x)}^{1} \pi\cos(\pi\alpha)\, d\alpha\, dx.$$
Applying the Fubini theorem, we obtain
$$\begin{aligned} SH[\xi] &= \int_0^{\Phi(0)} \int_{\Phi^{-1}(\alpha)}^{0} \pi\cos(\pi\alpha)\, dx\, d\alpha - \int_{\Phi(0)}^{1} \int_{0}^{\Phi^{-1}(\alpha)} \pi\cos(\pi\alpha)\, dx\, d\alpha \\ &= -\pi \int_0^1 \Phi^{-1}(\alpha)\cos(\pi\alpha)\, d\alpha \\ &= -\pi \int_0^1 \Phi^{-1}(1-\alpha)\cos\big(\pi(1-\alpha)\big)\, d\alpha \\ &= \pi \int_0^1 \Phi^{-1}(1-\alpha)\cos(\pi\alpha)\, d\alpha, \end{aligned}$$
where the third equality uses the substitution $\alpha \to 1 - \alpha$ and the last one uses $\cos(\pi(1-\alpha)) = -\cos(\pi\alpha)$.
This completes the proof. □
Remark 2.
Theorem 2 provides a new method to calculate the sine entropy of an uncertain random variable when its inverse chance distribution exists.
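The sketch below checks Theorem 2 against Definition 6 for the illustrative $\xi = \eta + \tau$ used above; its chance distribution is strictly increasing, so $\Phi^{-1}$ can be obtained by bisection.

```python
# Sketch comparing Definition 6 with Theorem 2 (illustrative assumptions:
# eta standard normal, tau linear L(1, 3), xi = eta + tau).
import numpy as np
from scipy import integrate, optimize, stats

def upsilon(z, a=1.0, b=3.0):
    return np.clip((z - a) / (b - a), 0.0, 1.0)

def phi(x):
    return integrate.quad(lambda y: upsilon(x - y) * stats.norm.pdf(y),
                          -np.inf, np.inf)[0]

def phi_inv(alpha):
    """Inverse chance distribution by bisection (Phi is strictly increasing)."""
    return optimize.brentq(lambda x: phi(x) - alpha, -20.0, 25.0)

# Definition 6: direct integral of sin(pi * Phi(x)).
sh_direct = integrate.quad(lambda x: np.sin(np.pi * phi(x)), -10, 20, limit=200)[0]
# Theorem 2: pi * int_0^1 Phi^{-1}(1 - alpha) cos(pi alpha) d(alpha), with the
# endpoints trimmed slightly for numerical stability.
sh_inverse = np.pi * integrate.quad(
    lambda a: phi_inv(1 - a) * np.cos(np.pi * a), 1e-6, 1 - 1e-6)[0]
print(sh_direct, sh_inverse)  # the two values should agree closely
```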
Theorem 3.
Let $\Psi_1, \Psi_2, \ldots, \Psi_m$ be the probability distributions of independent random variables $\eta_1, \eta_2, \ldots, \eta_m$, respectively, and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables. Then, the sine entropy of $\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ is
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\left(\pi \int_{\mathbb{R}^m} F(x, y_1, \ldots, y_m)\, d\Psi_1(y_1) \cdots d\Psi_m(y_m)\right) dx,$$
where, for any real numbers $y_1, \ldots, y_m$, $F(x, y_1, \ldots, y_m)$ is the uncertainty distribution of $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$.
Proof. 
For any real numbers $y_1, \ldots, y_m$, we know by Theorem 1 that $\xi$ has the chance distribution
$$\Phi(x) = \int_{\mathbb{R}^m} F(x, y_1, \ldots, y_m)\, d\Psi_1(y_1) \cdots d\Psi_m(y_m),$$
where $F(x, y_1, \ldots, y_m)$ is the uncertainty distribution of $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$. By the definition of sine entropy, we have
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\left(\pi \int_{\mathbb{R}^m} F(x, y_1, \ldots, y_m)\, d\Psi_1(y_1) \cdots d\Psi_m(y_m)\right) dx.$$
Thus, we proved this theorem. □
Corollary 1.
Let $\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ be strictly increasing with respect to $\tau_1, \tau_2, \ldots, \tau_k$ and strictly decreasing with respect to $\tau_{k+1}, \tau_{k+2}, \ldots, \tau_n$. If $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$ are continuous, then the sine entropy of $\xi$ is
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\left(\pi \int_{\mathbb{R}^m} \sup_{f(y_1, \ldots, y_m, z_1, \ldots, z_n) = x} \Big( \min_{1 \le i \le k} \Upsilon_i(z_i) \wedge \min_{k+1 \le i \le n} \big(1 - \Upsilon_i(z_i)\big) \Big)\, d\Psi_1(y_1) \cdots d\Psi_m(y_m)\right) dx.$$
Proof. 
By Theorem 1, we know that the chance distribution of $\xi$ is
$$\Phi(x) = \int_{\mathbb{R}^m} F(x, y_1, \ldots, y_m)\, d\Psi_1(y_1) \cdots d\Psi_m(y_m).$$
Then, we have
$$F(x, y_1, \ldots, y_m) = \sup_{f(y_1, \ldots, y_m, z_1, \ldots, z_n) = x} \Big( \min_{1 \le i \le k} \Upsilon_i(z_i) \wedge \min_{k+1 \le i \le n} \big(1 - \Upsilon_i(z_i)\big) \Big).$$
Thus, by Theorem 3, we can obtain
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\left(\pi \int_{\mathbb{R}^m} \sup_{f(y_1, \ldots, y_m, z_1, \ldots, z_n) = x} \Big( \min_{1 \le i \le k} \Upsilon_i(z_i) \wedge \min_{k+1 \le i \le n} \big(1 - \Upsilon_i(z_i)\big) \Big)\, d\Psi_1(y_1) \cdots d\Psi_m(y_m)\right) dx.$$
This completes the proof of the corollary. □
Corollary 2.
Let $\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ be strictly increasing with respect to $\tau_1, \tau_2, \ldots, \tau_k$ and strictly decreasing with respect to $\tau_{k+1}, \tau_{k+2}, \ldots, \tau_n$. If $\Upsilon_1, \Upsilon_2, \ldots, \Upsilon_n$ are regular, then the sine entropy of $\xi$ is
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\left(\pi \int_{\mathbb{R}^m} F(x, y_1, \ldots, y_m)\, d\Psi_1(y_1) \cdots d\Psi_m(y_m)\right) dx,$$
where $F(x, y_1, \ldots, y_m)$ may be determined by its inverse uncertainty distribution
$$F^{-1}(\alpha, y_1, \ldots, y_m) = f\big(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1-\alpha), \ldots, \Upsilon_n^{-1}(1-\alpha)\big).$$
Proof. 
By Theorem 1, for any real numbers $y_1, \ldots, y_m$, we know that the chance distribution of $\xi$ is
$$\Phi(x) = \int_{\mathbb{R}^m} F(x, y_1, \ldots, y_m)\, d\Psi_1(y_1) \cdots d\Psi_m(y_m),$$
where $F(x, y_1, \ldots, y_m)$ is the uncertainty distribution of $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$. Since $f$ is strictly increasing with respect to $\tau_1, \ldots, \tau_k$ and strictly decreasing with respect to $\tau_{k+1}, \ldots, \tau_n$, and $\Upsilon_1, \ldots, \Upsilon_n$ are regular, $F(x, y_1, \ldots, y_m)$ may be determined by its inverse uncertainty distribution
$$F^{-1}(\alpha, y_1, \ldots, y_m) = f\big(y_1, \ldots, y_m, \Upsilon_1^{-1}(\alpha), \ldots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1-\alpha), \ldots, \Upsilon_n^{-1}(1-\alpha)\big).$$
From Theorem 3, we can obtain
$$SH[\xi] = \int_{-\infty}^{+\infty} \sin\left(\pi \int_{\mathbb{R}^m} F(x, y_1, \ldots, y_m)\, d\Psi_1(y_1) \cdots d\Psi_m(y_m)\right) dx,$$
with $F(x, y_1, \ldots, y_m)$ determined by the inverse uncertainty distribution above. This completes the proof of the corollary. □

4. Partial Sine Entropy of Uncertain Random Variables

The concept of the sine entropy of uncertain random variables has been proposed above using chance theory. However, sometimes we need to know how much of the sine entropy of an uncertain random variable is attributable to its uncertain components. To answer this question, we define the partial sine entropy of uncertain random variables as follows.
Definition 7.
Let $\tau_1, \tau_2, \ldots, \tau_n$ be uncertain variables, and let $\eta_1, \eta_2, \ldots, \eta_m$ be independent random variables. Then, the partial sine entropy of $\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ is defined by
$$PSH[\xi] = \int_{\mathbb{R}^m} \int_{-\infty}^{+\infty} \sin\big(\pi F(x, y_1, \ldots, y_m)\big)\, dx\, d\Psi_1(y_1) \cdots d\Psi_m(y_m),$$
where, for any real numbers $y_1, \ldots, y_m$, $F(x, y_1, \ldots, y_m)$ is the uncertainty distribution of $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$.
Theorem 4.
Let $\Psi_1, \Psi_2, \ldots, \Psi_m$ be the probability distributions of independent random variables $\eta_1, \eta_2, \ldots, \eta_m$, respectively, and let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables. If $f$ is a measurable function, then the partial sine entropy of $\xi = f(\eta_1, \ldots, \eta_m, \tau_1, \ldots, \tau_n)$ is
$$PSH[\xi] = \int_{\mathbb{R}^m} \int_0^1 \pi F^{-1}(1-\alpha, y_1, \ldots, y_m)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_m(y_m),$$
where, for any real numbers $y_1, \ldots, y_m$, $F^{-1}(\alpha, y_1, \ldots, y_m)$ is the inverse uncertainty distribution of $f(y_1, \ldots, y_m, \tau_1, \ldots, \tau_n)$.
Proof. 
Note that $\sin(\pi\alpha)$ is differentiable with derivative $\pi\cos(\pi\alpha)$. Thus, we have
$$\sin\big(\pi F(x, y_1, \ldots, y_m)\big) = \int_0^{F(x, y_1, \ldots, y_m)} \pi\cos(\pi\alpha)\, d\alpha = -\int_{F(x, y_1, \ldots, y_m)}^1 \pi\cos(\pi\alpha)\, d\alpha,$$
and then the partial sine entropy is
$$\begin{aligned} PSH[\xi] &= \int_{\mathbb{R}^m} \int_{-\infty}^{+\infty} \sin\big(\pi F(x, y_1, \ldots, y_m)\big)\, dx\, d\Psi_1(y_1) \cdots d\Psi_m(y_m) \\ &= \int_{\mathbb{R}^m} \int_{-\infty}^{0} \sin\big(\pi F(x, y_1, \ldots, y_m)\big)\, dx\, d\Psi_1(y_1) \cdots d\Psi_m(y_m) + \int_{\mathbb{R}^m} \int_{0}^{+\infty} \sin\big(\pi F(x, y_1, \ldots, y_m)\big)\, dx\, d\Psi_1(y_1) \cdots d\Psi_m(y_m) \\ &= \int_{\mathbb{R}^m} \int_{-\infty}^{0} \int_0^{F(x, y_1, \ldots, y_m)} \pi\cos(\pi\alpha)\, d\alpha\, dx\, d\Psi_1(y_1) \cdots d\Psi_m(y_m) - \int_{\mathbb{R}^m} \int_{0}^{+\infty} \int_{F(x, y_1, \ldots, y_m)}^{1} \pi\cos(\pi\alpha)\, d\alpha\, dx\, d\Psi_1(y_1) \cdots d\Psi_m(y_m). \end{aligned}$$
By the Fubini theorem, we have
$$\begin{aligned} PSH[\xi] &= \int_{\mathbb{R}^m} \int_0^{F(0, y_1, \ldots, y_m)} \int_{F^{-1}(\alpha, y_1, \ldots, y_m)}^{0} \pi\cos(\pi\alpha)\, dx\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_m(y_m) - \int_{\mathbb{R}^m} \int_{F(0, y_1, \ldots, y_m)}^{1} \int_{0}^{F^{-1}(\alpha, y_1, \ldots, y_m)} \pi\cos(\pi\alpha)\, dx\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_m(y_m) \\ &= -\int_{\mathbb{R}^m} \int_0^{F(0, y_1, \ldots, y_m)} \pi F^{-1}(\alpha, y_1, \ldots, y_m)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_m(y_m) - \int_{\mathbb{R}^m} \int_{F(0, y_1, \ldots, y_m)}^{1} \pi F^{-1}(\alpha, y_1, \ldots, y_m)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_m(y_m) \\ &= -\int_{\mathbb{R}^m} \int_0^{1} \pi F^{-1}(\alpha, y_1, \ldots, y_m)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_m(y_m) \\ &= \int_{\mathbb{R}^m} \int_0^{1} \pi F^{-1}(1-\alpha, y_1, \ldots, y_m)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_m(y_m). \end{aligned}$$
This completes the proof. □
Example 5.
Let $\Psi$ be the probability distribution of a random variable $\eta$, and let $\Upsilon$ be the uncertainty distribution of an uncertain variable $\tau$. Then, the partial sine entropy of the sum $\xi = \eta + \tau$ is
$$PSH[\xi] = S[\tau],$$
where $S[\tau]$ denotes the sine entropy of the uncertain variable $\tau$ (Yao et al. [17]).
Proof. 
It is obvious that the inverse uncertainty distribution of the uncertain variable $y + \tau$ is $F^{-1}(\alpha, y) = \Upsilon^{-1}(\alpha) + y$. By Theorem 4, we have
$$\begin{aligned} PSH[\xi] &= \int_{\mathbb{R}} \int_0^1 \pi F^{-1}(1-\alpha, y)\cos(\pi\alpha)\, d\alpha\, d\Psi(y) = \int_{\mathbb{R}} \int_0^1 \pi\big(\Upsilon^{-1}(1-\alpha) + y\big)\cos(\pi\alpha)\, d\alpha\, d\Psi(y) \\ &= \int_{\mathbb{R}} \int_0^1 \pi \Upsilon^{-1}(1-\alpha)\cos(\pi\alpha)\, d\alpha\, d\Psi(y) + \int_{\mathbb{R}} \int_0^1 \pi y \cos(\pi\alpha)\, d\alpha\, d\Psi(y) \\ &= \int_0^1 \pi \Upsilon^{-1}(1-\alpha)\cos(\pi\alpha)\, d\alpha = S[\tau], \end{aligned}$$
where the second integral vanishes because $\int_0^1 \cos(\pi\alpha)\, d\alpha = 0$.
Thus, the proof is finished. □
Example 6.
Let $\Psi$ be the probability distribution of a random variable $\eta > 0$, and let $\Upsilon$ be the uncertainty distribution of an uncertain variable $\tau$. Then, the partial sine entropy of the product $\xi = \eta\tau$ is
$$PSH[\xi] = E[\eta]\, S[\tau].$$
Proof. 
It is obvious that the inverse uncertainty distribution of the uncertain variable $y\tau$ with $y > 0$ is $F^{-1}(\alpha, y) = y\,\Upsilon^{-1}(\alpha)$. By Theorem 4, we have
$$\begin{aligned} PSH[\xi] &= \int_{\mathbb{R}} \int_0^1 \pi F^{-1}(1-\alpha, y)\cos(\pi\alpha)\, d\alpha\, d\Psi(y) = \int_{\mathbb{R}} \int_0^1 \pi\, y\, \Upsilon^{-1}(1-\alpha)\cos(\pi\alpha)\, d\alpha\, d\Psi(y) \\ &= \int_0^1 \pi \Upsilon^{-1}(1-\alpha)\cos(\pi\alpha)\, d\alpha \int_{\mathbb{R}} y\, d\Psi(y) = E[\eta]\, S[\tau]. \end{aligned}$$
Thus, the proof is finished. □
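Examples 5 and 6 can be verified numerically via Theorem 4. The sketch below assumes, purely for illustration, that $\eta \sim \mathrm{Uniform}(1, 2)$ (so $E[\eta] = 1.5$) and $\tau \sim \mathcal{L}(1, 3)$, whose sine entropy has the closed form $S[\tau] = 2(b - a)/\pi = 4/\pi$ for a linear uncertain variable (Yao et al. [17]).

```python
# Sketch of Examples 5 and 6 through Theorem 4. Illustrative assumptions:
# eta ~ Uniform(1, 2) with E[eta] = 1.5, tau linear L(1, 3) with
# S[tau] = 2(3 - 1)/pi = 4/pi.
import numpy as np
from scipy import integrate

A, B = 1.0, 3.0
ups_inv = lambda alpha: A + (B - A) * alpha  # inverse distribution of L(1, 3)

def psh(f_inv):
    """PSH via Theorem 4: int over y of int_0^1 pi F^{-1}(1-a, y) cos(pi a) da,
    with y integrated against the Uniform(1, 2) density (which equals 1)."""
    inner = lambda y: integrate.quad(
        lambda a: np.pi * f_inv(1 - a, y) * np.cos(np.pi * a), 0, 1)[0]
    return integrate.quad(inner, 1.0, 2.0)[0]

print(psh(lambda a, y: y + ups_inv(a)))  # sum: expect S[tau] = 4/pi ~ 1.2732
print(psh(lambda a, y: y * ups_inv(a)))  # product: expect 6/pi ~ 1.9099
```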
Example 7.
Let $\Upsilon_1$ and $\Upsilon_2$ be the uncertainty distributions of uncertain variables $\tau_1$ and $\tau_2$, respectively, and let $\Psi_1$ and $\Psi_2$ be the probability distributions of random variables $\eta_1$ and $\eta_2$, respectively. Set $\xi_1 = \tau_1 + \eta_1$ and $\xi_2 = \tau_2 + \eta_2$. Then,
$$PSH[\xi_1 \xi_2] = S[\tau_1\tau_2] + E[\eta_1]\, S[\tau_2] + E[\eta_2]\, S[\tau_1].$$
Proof. 
It is obvious that the inverse uncertainty distributions of the uncertain variables $y_1 + \tau_1$ and $y_2 + \tau_2$ are $F_1^{-1}(\alpha, y_1) = y_1 + \Upsilon_1^{-1}(\alpha)$ and $F_2^{-1}(\alpha, y_2) = y_2 + \Upsilon_2^{-1}(\alpha)$, so that $F^{-1}(\alpha, y_1, y_2) = \big(y_1 + \Upsilon_1^{-1}(\alpha)\big)\big(y_2 + \Upsilon_2^{-1}(\alpha)\big)$. By Theorem 4, we can obtain
$$\begin{aligned} PSH[\xi_1 \xi_2] &= \int_{\mathbb{R}^2} \int_0^1 \pi F^{-1}(1-\alpha, y_1, y_2)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1)\, d\Psi_2(y_2) \\ &= \int_{\mathbb{R}^2} \int_0^1 \pi \big(y_1 + \Upsilon_1^{-1}(1-\alpha)\big)\big(y_2 + \Upsilon_2^{-1}(1-\alpha)\big)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1)\, d\Psi_2(y_2) \\ &= \int_0^1 \pi \Upsilon_1^{-1}(1-\alpha)\,\Upsilon_2^{-1}(1-\alpha)\cos(\pi\alpha)\, d\alpha + \int_{\mathbb{R}} \int_0^1 \pi y_1 \Upsilon_2^{-1}(1-\alpha)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) + \int_{\mathbb{R}} \int_0^1 \pi y_2 \Upsilon_1^{-1}(1-\alpha)\cos(\pi\alpha)\, d\alpha\, d\Psi_2(y_2) \\ &= S[\tau_1\tau_2] + E[\eta_1]\, S[\tau_2] + E[\eta_2]\, S[\tau_1], \end{aligned}$$
where the $y_1 y_2$ term vanishes because $\int_0^1 \cos(\pi\alpha)\, d\alpha = 0$.
Thus, the proof is finished. □
Example 8.
Let $\Upsilon_1$ and $\Upsilon_2$ be the uncertainty distributions of positive uncertain variables $\tau_1$ and $\tau_2$, respectively, and let $\Psi_1$ and $\Psi_2$ be the probability distributions of positive random variables $\eta_1$ and $\eta_2$, respectively. Set $\xi_1 = \tau_1\eta_1$ and $\xi_2 = \tau_2\eta_2$. Then,
$$PSH\!\left[\frac{\xi_1}{\xi_2}\right] = S\!\left[\frac{\tau_1}{\tau_2}\right] E[\eta_1]\, E\!\left[\frac{1}{\eta_2}\right].$$
Proof. 
It is obvious that the inverse uncertainty distributions of the uncertain variables $y_1\tau_1$ and $y_2\tau_2$ are $F_1^{-1}(\alpha, y_1) = y_1 \Upsilon_1^{-1}(\alpha)$ and $F_2^{-1}(\alpha, y_2) = y_2 \Upsilon_2^{-1}(\alpha)$, so that the quotient has the inverse uncertainty distribution $F^{-1}(\alpha, y_1, y_2) = \dfrac{y_1 \Upsilon_1^{-1}(\alpha)}{y_2 \Upsilon_2^{-1}(1-\alpha)}$. By Theorem 4, we can obtain
$$\begin{aligned} PSH\!\left[\frac{\xi_1}{\xi_2}\right] &= \int_{\mathbb{R}^2} \int_0^1 \pi F^{-1}(1-\alpha, y_1, y_2)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1)\, d\Psi_2(y_2) \\ &= \int_{\mathbb{R}^2} \int_0^1 \pi\, \frac{y_1 \Upsilon_1^{-1}(1-\alpha)}{y_2 \Upsilon_2^{-1}(\alpha)}\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1)\, d\Psi_2(y_2) \\ &= \int_0^1 \pi\, \frac{\Upsilon_1^{-1}(1-\alpha)}{\Upsilon_2^{-1}(\alpha)}\cos(\pi\alpha)\, d\alpha \int_{\mathbb{R}} y_1\, d\Psi_1(y_1) \int_{\mathbb{R}} \frac{1}{y_2}\, d\Psi_2(y_2) \\ &= S\!\left[\frac{\tau_1}{\tau_2}\right] E[\eta_1]\, E\!\left[\frac{1}{\eta_2}\right]. \end{aligned}$$
Thus, the proof is finished. □
Theorem 5.
Let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables, and let $\eta_1, \eta_2, \ldots, \eta_n$ be independent random variables. Set $\xi_1 = f_1(\eta_1, \tau_1)$, $\xi_2 = f_2(\eta_2, \tau_2)$, ⋯, $\xi_n = f_n(\eta_n, \tau_n)$. If $f(x_1, x_2, \ldots, x_n)$ is strictly increasing with respect to $x_1, x_2, \ldots, x_m$ and strictly decreasing with respect to $x_{m+1}, x_{m+2}, \ldots, x_n$, then the partial sine entropy of $\xi = f(\xi_1, \xi_2, \ldots, \xi_n)$ is
$$PSH[\xi] = \int_{\mathbb{R}^n} \int_0^1 \pi f\big(F_1^{-1}(1-\alpha, y_1), \ldots, F_m^{-1}(1-\alpha, y_m), F_{m+1}^{-1}(\alpha, y_{m+1}), \ldots, F_n^{-1}(\alpha, y_n)\big)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_n(y_n),$$
where $F_i^{-1}(\alpha, y_i)$ is the inverse uncertainty distribution of $f_i(y_i, \tau_i)$ for any real number $y_i$, $i = 1, 2, \ldots, n$.
Proof. 
For any real numbers $y_1, \ldots, y_n$, the inverse uncertainty distribution of the uncertain variable $f\big(f_1(y_1, \tau_1), \ldots, f_n(y_n, \tau_n)\big)$ is
$$F^{-1}(\alpha, y_1, \ldots, y_n) = f\big(F_1^{-1}(\alpha, y_1), \ldots, F_m^{-1}(\alpha, y_m), F_{m+1}^{-1}(1-\alpha, y_{m+1}), \ldots, F_n^{-1}(1-\alpha, y_n)\big).$$
By Theorem 4, we can obtain
$$PSH[\xi] = \int_{\mathbb{R}^n} \int_0^1 \pi f\big(F_1^{-1}(1-\alpha, y_1), \ldots, F_m^{-1}(1-\alpha, y_m), F_{m+1}^{-1}(\alpha, y_{m+1}), \ldots, F_n^{-1}(\alpha, y_n)\big)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_n(y_n).$$
This completes the proof. □
Theorem 6.
Let $\tau_1, \tau_2, \ldots, \tau_n$ be independent uncertain variables, and let $\eta_1, \eta_2, \ldots, \eta_n$ be independent random variables. Set $\xi_1 = f_1(\eta_1, \tau_1)$, $\xi_2 = f_2(\eta_2, \tau_2)$, ⋯, $\xi_n = f_n(\eta_n, \tau_n)$. Then, for any real numbers $c_1, c_2, \ldots, c_n$, we have
$$PSH\left[\sum_{i=1}^n c_i \xi_i\right] = \sum_{i=1}^n |c_i|\, PSH[\xi_i].$$
Proof. 
The theorem will be proved in three steps.
Step 1: We prove $PSH[c_1\xi_1] = |c_1|\,PSH[\xi_1]$. If $c_1 > 0$, then $c_1 f_1(y_1, \tau_1)$ has the inverse uncertainty distribution $\hat{F}_1^{-1}(\alpha, y_1) = c_1 F_1^{-1}(\alpha, y_1)$, where $F_1^{-1}(\alpha, y_1)$ is the inverse uncertainty distribution of $f_1(y_1, \tau_1)$. We have
$$\begin{aligned} PSH[c_1\xi_1] &= \int_{\mathbb{R}} \int_0^1 \pi \hat{F}_1^{-1}(1-\alpha, y_1)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) = \int_{\mathbb{R}} \int_0^1 \pi\, c_1 F_1^{-1}(1-\alpha, y_1)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \\ &= c_1 \int_{\mathbb{R}} \int_0^1 \pi F_1^{-1}(1-\alpha, y_1)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) = |c_1|\, PSH[\xi_1]. \end{aligned}$$
If $c_1 < 0$, then $c_1 f_1(y_1, \tau_1)$ has the inverse uncertainty distribution $\hat{F}_1^{-1}(\alpha, y_1) = c_1 F_1^{-1}(1-\alpha, y_1)$. We have
$$\begin{aligned} PSH[c_1\xi_1] &= \int_{\mathbb{R}} \int_0^1 \pi \hat{F}_1^{-1}(1-\alpha, y_1)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) = \int_{\mathbb{R}} \int_0^1 \pi\, c_1 F_1^{-1}(\alpha, y_1)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \\ &= -c_1 \int_{\mathbb{R}} \int_0^1 \pi F_1^{-1}(1-\alpha, y_1)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) = |c_1|\, PSH[\xi_1], \end{aligned}$$
where the third equality uses the substitution $\alpha \to 1-\alpha$ together with $\cos(\pi(1-\alpha)) = -\cos(\pi\alpha)$. If $c_1 = 0$, then we immediately have $PSH[c_1\xi_1] = 0 = |c_1|\,PSH[\xi_1]$. Thus, we always have
$$PSH[c_1\xi_1] = |c_1|\, PSH[\xi_1].$$
Step 2: We prove
$$PSH[\xi_1 + \xi_2 + \cdots + \xi_n] = PSH[\xi_1] + PSH[\xi_2] + \cdots + PSH[\xi_n].$$
The inverse uncertainty distribution of $f_1(y_1, \tau_1) + f_2(y_2, \tau_2) + \cdots + f_n(y_n, \tau_n)$ is
$$F^{-1}(\alpha, y_1, y_2, \ldots, y_n) = F_1^{-1}(\alpha, y_1) + F_2^{-1}(\alpha, y_2) + \cdots + F_n^{-1}(\alpha, y_n).$$
By Theorem 5, we can obtain
$$\begin{aligned} PSH[\xi_1 + \xi_2 + \cdots + \xi_n] &= \int_{\mathbb{R}^n} \int_0^1 \pi F^{-1}(1-\alpha, y_1, \ldots, y_n)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_n(y_n) \\ &= \int_{\mathbb{R}^n} \int_0^1 \pi \big(F_1^{-1}(1-\alpha, y_1) + F_2^{-1}(1-\alpha, y_2) + \cdots + F_n^{-1}(1-\alpha, y_n)\big)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_n(y_n) \\ &= \int_{\mathbb{R}^n} \int_0^1 \pi F_1^{-1}(1-\alpha, y_1)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_n(y_n) + \cdots + \int_{\mathbb{R}^n} \int_0^1 \pi F_n^{-1}(1-\alpha, y_n)\cos(\pi\alpha)\, d\alpha\, d\Psi_1(y_1) \cdots d\Psi_n(y_n) \\ &= PSH[\xi_1] + PSH[\xi_2] + \cdots + PSH[\xi_n]. \end{aligned}$$
Step 3: Combining Step 1 and Step 2, for any real numbers $c_i$, $i = 1, 2, \ldots, n$, we can obtain
$$PSH\left[\sum_{i=1}^n c_i \xi_i\right] = \sum_{i=1}^n |c_i|\, PSH[\xi_i].$$
Thus, the proof is finished. □
Remark 3.
From Theorem 6, we see that the partial sine entropy satisfies positive linearity for any real numbers $c_1, c_2, \ldots, c_n$.
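As a numerical check of Theorem 6, the following sketch uses the illustrative setup $\xi_i = \eta_i + \tau_i$ with $\eta_1, \eta_2 \sim \mathrm{Uniform}(0, 1)$, $\tau_1 \sim \mathcal{L}(0, 1)$, and $\tau_2 \sim \mathcal{L}(1, 3)$; by Example 5, $PSH[\xi_i] = S[\tau_i] = 2(b_i - a_i)/\pi$, so for $c_1 = 2$ and $c_2 = -3$ the claimed value is $2 \cdot 2/\pi + 3 \cdot 4/\pi = 16/\pi$.

```python
# Sketch of Theorem 6 (positive linearity of PSH). Illustrative assumptions:
# tau1 ~ L(0, 1), tau2 ~ L(1, 3), eta1, eta2 ~ Uniform(0, 1), c1 = 2, c2 = -3.
import numpy as np
from scipy import integrate

inv1 = lambda a: a               # inverse uncertainty distribution of L(0, 1)
inv2 = lambda a: 1.0 + 2.0 * a   # inverse uncertainty distribution of L(1, 3)
c1, c2 = 2.0, -3.0

def f_inv(a, y1, y2):
    # Inverse uncertainty distribution of c1*(y1 + tau1) + c2*(y2 + tau2):
    # the increasing term (c1 > 0) enters at alpha, the decreasing term
    # (c2 < 0) at 1 - alpha (Theorem 5).
    return c1 * (y1 + inv1(a)) + c2 * (y2 + inv2(1 - a))

# Triple integral over alpha in (0, 1) and y1, y2 against Uniform(0, 1);
# tplquad integrates the first argument of the integrand innermost.
psh = integrate.tplquad(
    lambda a, y1, y2: np.pi * f_inv(1 - a, y1, y2) * np.cos(np.pi * a),
    0, 1, 0, 1, 0, 1)[0]
print(psh, 16 / np.pi)  # the two numbers should agree closely
```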

5. Conclusions

Chance theory is a mathematical framework for studying phenomena that involve both uncertainty and randomness. The entropy of uncertain random variables is important and necessary for measuring uncertainty. In this paper, two new entropies, sine entropy and partial sine entropy, were defined, and some of their properties were studied. Using the chance distribution or the inverse chance distribution, calculation formulas for the sine entropy and partial sine entropy of uncertain random variables were derived. In particular, the partial sine entropy was shown to be translation invariant and positively linear.

Author Contributions

Conceptualization, G.S. and R.Z.; methodology, Y.S.; formal analysis, Y.S.; investigation, G.S.; writing—original draft preparation, G.S.; writing—review and editing, Y.S.; funding acquisition, G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Natural Science Foundation of China (Grants Nos. 12061072 and 62162059).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors especially thank the editors and anonymous referees for their kind review and helpful comments. In addition, the authors would like to acknowledge the gracious support of this work by the National Natural Science Foundation of China—Joint Key Program of Xinjiang (Grant No. U1703262).

Conflicts of Interest

We declare that we have no relevant or material financial interests that relate to the research described in this paper. The manuscript has neither been published before, nor has it been submitted for consideration of publication in another journal.

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Zadeh, L.A. Probability measures of fuzzy events. J. Math. Anal. Appl. 1968, 23, 421–427.
3. De Luca, A.; Termini, S. A definition of nonprobabilistic entropy in the setting of fuzzy sets theory. Inf. Control 1972, 20, 301–312.
4. Bhandary, D.; Pal, N. Some new information measures for fuzzy sets. Inf. Sci. 1993, 67, 209–228.
5. Pal, N.; Pal, K. Object background segmentation using a new definition of entropy. IEE Proc. Comput. Digit. Tech. 1989, 136, 284–295.
6. Pal, N.R.; Bezdek, J. Measuring fuzzy uncertainty. IEEE Trans. Fuzzy Syst. 1994, 2, 107–118.
7. Li, P.K.; Liu, B. Entropy of credibility distributions for fuzzy variables. IEEE Trans. Fuzzy Syst. 2008, 16, 123–129.
8. Liu, B. Uncertainty Theory, 2nd ed.; Springer: Berlin, Germany, 2007.
9. Liu, B. Some research problems in uncertainty theory. J. Uncertain Syst. 2009, 3, 3–10.
10. Liu, B. Uncertainty Theory: A Branch of Mathematics for Modeling Human Uncertainty; Springer: Berlin, Germany, 2010.
11. Liu, Y.H.; Ha, M.H. Expected value of function of uncertain variables. J. Uncertain Syst. 2010, 4, 181–186.
12. Yao, K. A formula to calculate the variance of uncertain variable. Soft Comput. 2015, 19, 2947–2953.
13. Sheng, Y.H.; Kar, S. Some results of moments of uncertain variable through inverse uncertainty distribution. Fuzzy Optim. Decis. Mak. 2015, 14, 57–76.
14. Dai, W.; Chen, X.W. Entropy of function of uncertain variables. Math. Comput. Model. 2012, 55, 754–760.
15. Chen, X.W.; Dai, W. Maximum entropy principle for uncertain variables. Int. J. Fuzzy Syst. 2011, 13, 232–236.
16. Dai, W. Quadratic entropy of uncertain variables. Soft Comput. 2018, 22, 5699–5706.
17. Yao, K.; Gao, J.W.; Dai, W. Sine entropy for uncertain variables. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2013, 21, 743–753.
18. Liu, Y.H. Uncertain random variables: A mixture of uncertainty and randomness. Soft Comput. 2013, 17, 625–634.
19. Liu, Y.H. Uncertain random programming with applications. Fuzzy Optim. Decis. Mak. 2013, 12, 153–169.
20. Hou, Y.C. Subadditivity of chance measure. J. Uncertain. Anal. Appl. 2014, 2, 14.
21. Sheng, Y.H.; Yao, K. Some formulas of variance of uncertain random variable. J. Uncertain. Anal. Appl. 2014, 2, 12.
22. Sheng, Y.H.; Shi, G.; Ralescu, D. Entropy of uncertain random variables with application to minimum spanning tree problem. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2017, 25, 497–514.
23. Ahmadzade, H.; Gao, R.; Zarei, H. Partial quadratic entropy of uncertain random variables. J. Uncertain Syst. 2016, 10, 292–301.
24. Ahmadzade, H.; Gao, R.; Dehghan, M.; Sheng, Y.H. Partial entropy of uncertain random variables. J. Intell. Fuzzy Syst. 2017, 33, 105–112.