Further Properties of Tsallis Entropy and Its Application

Tsallis entropy is an alternative measure of uncertainty to the Shannon entropy. The present work studies some additional properties of this measure and establishes its connection with the usual stochastic order. Some properties of the dynamic version of the measure are also investigated. It is well known that systems with longer lifetimes and smaller uncertainty are preferred, and that the reliability of a system usually decreases as its uncertainty increases. Since the Tsallis entropy measures uncertainty, these observations lead us to study the Tsallis entropy of the lifetimes of coherent systems and of mixed systems whose component lifetimes are independent and identically distributed (the iid case). Finally, we give some bounds on the Tsallis entropy of such systems and clarify their applicability.


Introduction
The Shannon entropy of a probability distribution has found critical applications in the numerous areas described in Shannon's seminal work [1]. Information theory provides an uncertainty measure associated with unpredictable phenomena. If X is a non-negative random variable (rv) with absolutely continuous cumulative distribution function (cdf) F(x) and probability density function (pdf) f(x), the Shannon entropy is defined, provided the integral is finite, as
$$H(f) = -\int_0^\infty f(x)\log f(x)\,dx = -E[\log f(X)]. \qquad (1)$$
The Tsallis entropy of order α generalizes the Shannon entropy and is defined by (see [2])
$$H_\alpha(f) = \frac{1}{\bar{\alpha}}\left[\int_0^\infty f^{\alpha}(x)\,dx - 1\right] = \frac{1}{\bar{\alpha}}\left[E\left(f^{\alpha-1}\big(F^{-1}(U)\big)\right) - 1\right] \qquad (2)$$
for all α ∈ D_α = (0, 1) ∪ (1, ∞), where ᾱ = 1 − α, E(·) denotes the expectation, U is uniformly distributed on [0, 1], and F^{-1}(u), u ∈ [0, 1], stands for the quantile function. Generally, the Tsallis entropy can be negative; however, it is nonnegative when a suitable value of α is chosen. It is evident that H(f) = lim_{α→1} H_α(f), and hence the Tsallis entropy reduces to the Shannon entropy in the limit. It is well known that the Shannon entropy is additive. The Tsallis entropy is, however, non-additive since, for independent X and Y, H_α(X, Y) = H_α(X) + H_α(Y) + ᾱ H_α(X) H_α(Y).
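The definition in (2) is easy to probe numerically. The following Python sketch (an illustration added here, not part of the original analysis; the helper names and the choice of a standard exponential density are ours) evaluates H_α(f) by quadrature and checks that it approaches the Shannon entropy as α → 1:

```python
import numpy as np
from scipy.integrate import quad

def tsallis_entropy(pdf, alpha, lo=0.0, hi=np.inf):
    """H_alpha(f) = (int f(x)**alpha dx - 1) / (1 - alpha), alpha != 1."""
    integral, _ = quad(lambda x: pdf(x) ** alpha, lo, hi)
    return (integral - 1.0) / (1.0 - alpha)

def shannon_entropy(pdf, lo=0.0, hi=np.inf):
    """H(f) = -int f(x) * log f(x) dx (integrand set to 0 where f vanishes)."""
    value, _ = quad(lambda x: -pdf(x) * np.log(pdf(x)) if pdf(x) > 0 else 0.0, lo, hi)
    return value

pdf = lambda x: np.exp(-x)          # Exp(1), for which H_alpha = 1/alpha and H = 1
print(tsallis_entropy(pdf, 0.999))  # ~1.001, approaching H(f) as alpha -> 1
print(shannon_entropy(pdf))         # ~1.0
```

For the standard exponential, H_α(f) = 1/α, so the limiting behaviour is visible directly.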
The dispersive order is recognized as an order of distributional variability; recall that X_1 is smaller than X_2 in the dispersive order, written X_1 ≤_d X_2, if and only if f_1(F_1^{-1}(u)) ≥ f_2(F_2^{-1}(u)) for all u ∈ (0, 1).

Properties of Tsallis Entropy
Below are further useful properties of the measure. First, another convenient expression for the Tsallis entropy can be given in terms of the proportional hazards (PH) model. For this purpose, if X ∈ R^+ has survival function (sf) S(x) and hazard rate function λ(x) = f(x)/S(x), the Tsallis entropy can be expressed as
$$H_\alpha(X) = \frac{1}{\bar{\alpha}}\left[\frac{1}{\alpha}\,E\left(\lambda^{\alpha-1}(X_\alpha)\right) - 1\right] \qquad (3)$$
for all α ∈ D_α, where X_α denotes the rv with pdf
$$f_\alpha(x) = \alpha\, f(x)\, S^{\alpha-1}(x), \qquad x > 0, \qquad (4)$$
that is, the pdf of the PH model with baseline sf S(x) and proportionality parameter α. In Table 1, we provide the Tsallis entropy of some well-known distributions; there, B(·, ·) denotes the beta function and Γ(·) the gamma function.
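As a sketch of how such closed-form entries can be verified, the following check (our own computation from (2), not a reproduction of Table 1; for Exp(λ) one has ∫ f^α dx = λ^{α−1}/α) compares the closed form with direct numerical integration:

```python
import numpy as np
from scipy.integrate import quad

lam, alpha = 2.0, 1.5
# For Exp(lam), int f**alpha dx = lam**(alpha - 1) / alpha, hence by (2):
closed = (lam ** (alpha - 1.0) / alpha - 1.0) / (1.0 - alpha)
numeric = (quad(lambda x: (lam * np.exp(-lam * x)) ** alpha, 0, np.inf)[0] - 1.0) / (1.0 - alpha)
print(closed, numeric)  # both ~ 0.1144
```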
It is worth noting that, for α = 2, the pdf given in (4) is precisely the pdf of the minimum of two rvs in the iid case (see [13]). We note that the Tsallis entropy is invariant under one-to-one transformations of the rv in the discrete case, while it is not invariant in the continuous case. Indeed, if φ(·) : R → R is a one-to-one function and X_2 = φ(X_1), then (see, e.g., Ebrahimi et al. [14])
$$H_\alpha(X_2) = \frac{1}{\bar{\alpha}}\left[E\left(f_1^{\alpha-1}(X_1)\,\big|\varphi'(X_1)\big|^{1-\alpha}\right) - 1\right]$$
for all α ∈ D_α. The Shannon entropy is scale-dependent but free of location; that is, X has the same differential entropy as X + b for any b ∈ R. The same holds for the Tsallis entropy: taking φ(x) = ax + b with a > 0 and b ∈ R in the above relation, we obtain
$$H_\alpha(aX + b) = a^{\bar{\alpha}}\, H_\alpha(X) + \frac{a^{\bar{\alpha}} - 1}{\bar{\alpha}}.$$
Now, we recall the definition of the Tsallis entropy order, which can be seen in [4] for greater detail.
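The scale relation above can be confirmed numerically; the sketch below (our illustration; a, b, and the exponential baseline are arbitrary choices) compares both sides for X ~ Exp(1):

```python
import numpy as np
from scipy.integrate import quad

def tsallis(pdf, alpha, lo, hi):
    return (quad(lambda x: pdf(x) ** alpha, lo, hi)[0] - 1.0) / (1.0 - alpha)

alpha, a, b = 2.0, 3.0, 1.5
hx = tsallis(lambda x: np.exp(-x), alpha, 0, np.inf)                # H_alpha(X)
hy = tsallis(lambda y: np.exp(-(y - b) / a) / a, alpha, b, np.inf)  # H_alpha(aX + b)
print(hy, a ** (1 - alpha) * hx + (a ** (1 - alpha) - 1) / (1 - alpha))  # both 5/6
```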
Definition 1. The rv X_1 is said to be smaller than the rv X_2 in the Tsallis entropy of order α, written X_1 ≤_TE X_2, if H_α(X_1) ≤ H_α(X_2) for all α ∈ D_α.

It is worth pointing out that X_1 ≤_TE X_2 indicates that the outcome of X_1 is more predictable than that of X_2 in terms of the Tsallis entropy. As an immediate consequence of (2), we have the following theorem.

Theorem 1. Let X_1, X_2 ∈ R^+ with cdfs F_1 and F_2, respectively. If X_1 ≤_d X_2, then X_1 ≤_TE X_2.
Proof. If X_1 ≤_d X_2, then f_1(F_1^{-1}(u)) ≥ f_2(F_2^{-1}(u)) for all u ∈ (0, 1), and hence, for all α ≥ (≤) 1,
$$\int_0^1 f_1^{\alpha-1}\big(F_1^{-1}(u)\big)\,du \;\geq\; (\leq)\; \int_0^1 f_2^{\alpha-1}\big(F_2^{-1}(u)\big)\,du.$$
Recalling (2) and the sign of ᾱ, both cases yield H_α(X_1) ≤ H_α(X_2), which proves the claim.

The following theorem develops the effect of a transformation on the Tsallis entropy of an rv. It is analogous to Theorem 1 in [15], and hence we omit its proof.
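Theorem 1 can be illustrated with two exponential laws: Exp(λ_1) ≤_d Exp(λ_2) whenever λ_1 ≥ λ_2, and the corresponding Tsallis entropies must then be ordered for every α ∈ D_α. The following check (our illustration, using the standard exponential closed form) confirms this:

```python
def tsallis_exp(lam, alpha):
    # Closed-form Tsallis entropy of Exp(lam), obtained from (2).
    return (lam ** (alpha - 1.0) / alpha - 1.0) / (1.0 - alpha)

lam1, lam2 = 2.0, 1.0  # lam1 >= lam2, hence X1 <=_d X2
for alpha in (0.2, 0.5, 1.5, 2.0, 3.0):
    assert tsallis_exp(lam1, alpha) <= tsallis_exp(lam2, alpha)
print("H_alpha(X1) <= H_alpha(X2) for all tested alpha")
```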
The next theorem presents implications of the stochastic order under some aging constraints on the associated rv's and on the range of the order α.
Proof. The first inequality in (5) is obtained as follows: since X_1 is DFR, its pdf f_1 is decreasing, and hence f_1^{α-1}(x) is increasing in x for all 0 < α < 1. The second inequality follows from Hölder's inequality. Combining relations (2) and (5) then proves the claim. To prove (ii), note that X_2 is DFR, and hence f_2^{α-1}(x) is decreasing in x for all α ≥ 1, which gives the first inequality; the second inequality again follows from Hölder's inequality. The results now follow.
It is worth noting that Theorem 3 can be applied to several statistical models, such as the Weibull, Rayleigh, Pareto, gamma, and half-logistic distributions, among others; these models possess the DFR aging property for suitable choices of their parameters.

Properties of Residual Tsallis Entropy
Here we give an overview of the dynamic version of the Tsallis entropy of order α. Throughout this section, the term "decreasing" means "non-increasing" and "increasing" means "non-decreasing". Suppose that X is the lifetime of a new system. In this case, the Tsallis entropy H_α(X) is appropriate for measuring the uncertainty of the system. However, if the system is still alive at age t, then H_α(X) is no longer appropriate for measuring the uncertainty of the remaining, or residual, lifetime. Thus, let X_t := [X − t | X ≥ t] denote the residual lifetime of an item, with pdf
$$f_t(x) = \frac{f(x + t)}{S(t)}, \qquad x \geq 0.$$
The residual Tsallis entropy is then given by
$$H_\alpha(X; t) = \frac{1}{\bar{\alpha}}\left[\int_t^\infty \left(\frac{f(x)}{S(t)}\right)^{\alpha} dx - 1\right] \qquad (6)$$
for all t ≥ 0 and α ∈ D_α. Another useful representation is
$$H_\alpha(X; t) = \frac{1}{\bar{\alpha}}\left[\frac{1}{\alpha}\,E\left(\lambda^{\alpha-1}(X_{\alpha,t})\right) - 1\right], \qquad (7)$$
where X_{α,t} denotes the rv with pdf
$$f_{\alpha,t}(x) = \frac{\alpha\, f(x)\, S^{\alpha-1}(x)}{S^{\alpha}(t)}, \qquad x \geq t.$$
Based on the measure H_α(X; t), a few classes of life distributions are proposed below.
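Expression (6) can be evaluated numerically; the sketch below (our illustration) does so for an exponential lifetime, for which memorylessness makes H_α(X; t) constant in t, a fact that anticipates Remark 1 below:

```python
import numpy as np
from scipy.integrate import quad

def residual_tsallis(pdf, sf, alpha, t):
    """H_alpha(X; t) from (6), by direct quadrature."""
    integral, _ = quad(lambda x: (pdf(x) / sf(t)) ** alpha, t, np.inf)
    return (integral - 1.0) / (1.0 - alpha)

lam, alpha = 2.0, 1.5
pdf = lambda x: lam * np.exp(-lam * x)
sf  = lambda x: np.exp(-lam * x)
print([round(residual_tsallis(pdf, sf, alpha, t), 6) for t in (0.0, 1.0, 3.0)])
# identical values: the residual lifetime of Exp(lam) is again Exp(lam)
```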

Definition 2. The rv X is said to have increasing (decreasing) residual Tsallis entropy of order α, written X ∈ IRTE_α (X ∈ DRTE_α), if H_α(X; t) is increasing (decreasing) in t ≥ 0.

Roughly speaking, if a unit has a cdf belonging to the class IRTE_α (DRTE_α), then its conditional pdf becomes less (more) informative as the unit becomes older. In the following lemma, we give the derivative of the residual Tsallis entropy.

Lemma 1. For all t ≥ 0 and α ∈ D_α,
$$\frac{d}{dt}\,H_\alpha(X; t) = \alpha\,\lambda(t)\left[H_\alpha(X; t) - \frac{1}{\bar{\alpha}}\left(\frac{\lambda^{\alpha-1}(t)}{\alpha} - 1\right)\right]. \qquad (8)$$

Remark 1. Let us assume that X is both IRTE_α and DRTE_α. Then (d/dt) H_α(X; t) = 0 for all t ≥ 0, and therefore, by (8),
$$H_\alpha(X; t) = \frac{1}{\bar{\alpha}}\left(\frac{\lambda^{\alpha-1}(t)}{\alpha} - 1\right), \qquad t \geq 0.$$
Since H_α(X; t) is then constant in t, the hazard rate λ(t) must be constant as well. This reveals that the exponential distribution is the only distribution fulfilling both IRTE_α and DRTE_α.
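The derivative formula (8) can be checked against a central difference quotient; the sketch below (our illustration, with a Weibull baseline of shape 2) does exactly this:

```python
import numpy as np
from scipy.integrate import quad

alpha, t, eps = 2.0, 1.0, 1e-5
pdf = lambda x: 2 * x * np.exp(-x ** 2)   # Weibull, shape 2, scale 1
sf  = lambda x: np.exp(-x ** 2)
hr  = lambda x: pdf(x) / sf(x)            # hazard rate lambda(x) = 2x

def H(t):
    return (quad(lambda x: (pdf(x) / sf(t)) ** alpha, t, np.inf)[0] - 1) / (1 - alpha)

numeric = (H(t + eps) - H(t - eps)) / (2 * eps)  # central difference
closed = alpha * hr(t) * (H(t) - (hr(t) ** (alpha - 1) / alpha - 1) / (1 - alpha))
print(numeric, closed)  # agree to numerical precision
```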
We now establish a relationship between the new classes and the well-known classes of life distributions with increasing (decreasing) failure rate (IFR (DFR)).

Theorem 4.
For any X ∈ R^+ with pdf f, if X has the IFR (DFR) property, then X is DRTE_α (IRTE_α).
Proof. We shall prove the IFR case; the DFR case can be derived similarly. Suppose that X is IFR and 0 < α < 1. Then λ^{α−1}(x) is decreasing, and since X_{α,t} ≥ t almost surely, we have
$$E\left(\lambda^{\alpha-1}(X_{\alpha,t})\right) \leq \lambda^{\alpha-1}(t)$$
for t > 0. Since 1 − α > 0, representation (7) gives H_α(X; t) ≤ (1/ᾱ)(λ^{α−1}(t)/α − 1), and from (8) the result follows. When α > 1, the above inequality is reversed; dividing by ᾱ < 0 yields the same bound, and using (8) again we conclude that X is DRTE_α.
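Theorem 4 is easy to visualize numerically. In the sketch below (our illustration), a Weibull lifetime with shape 2, whose hazard 2x is increasing (IFR), exhibits a residual Tsallis entropy that decreases in t for α on both sides of 1:

```python
import numpy as np
from scipy.integrate import quad

def residual_tsallis(pdf, sf, alpha, t):
    integral, _ = quad(lambda x: (pdf(x) / sf(t)) ** alpha, t, np.inf)
    return (integral - 1.0) / (1.0 - alpha)

pdf = lambda x: 2 * x * np.exp(-x ** 2)  # Weibull, shape 2: IFR
sf  = lambda x: np.exp(-x ** 2)
for alpha in (0.5, 2.0):
    vals = [residual_tsallis(pdf, sf, alpha, t) for t in (0.0, 0.5, 1.0, 2.0)]
    assert all(a >= b for a, b in zip(vals, vals[1:]))  # decreasing, i.e., DRTE
    print(alpha, [round(v, 4) for v in vals])
```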
There is a large class of distributions with monotone failure rate to which the above theorem can be applied. Another important class of life distributions consists of those with an increasing failure rate in average (IFRA); in particular, X is IFRA if −log S(x)/x is increasing in x. The following example shows that there is no relationship between the proposed classes and the IFRA class of life distributions.

Example 1. Consider a random lifetime X whose sf is IFRA. The plot of the residual Tsallis entropy in Figure 1 shows that X is not DRTE_α.

We now see how the residual Tsallis entropy and the hazard rate orders are related.

Tsallis Entropy of Coherent Structures and Mixed Structures
This section develops some aspects of the Tsallis entropy of coherent (and mixed) structures. A coherent system is one in which every component is relevant and the structure function is monotone. The k-out-of-n system is a particular case of a coherent structure. Furthermore, a mixture of coherent systems is considered a mixed system (see [16]). The reliability function of the mixed system lifetime T, in the iid case, is represented by
$$S_T(t) = \sum_{i=1}^{n} p_i\, S_{i:n}(t),$$
where $S_{i:n}(t) = \sum_{j=0}^{i-1}\binom{n}{j}[F(t)]^{j}[S(t)]^{n-j}$, i = 1, ..., n, are the sf's of the order statistics X_{1:n}, ..., X_{n:n}. The density function of T can be written as
$$f_T(t) = \sum_{i=1}^{n} p_i\, f_{i:n}(t), \qquad (11)$$
where
$$f_{i:n}(t) = \frac{n!}{(i-1)!\,(n-i)!}\,[F(t)]^{i-1}[S(t)]^{n-i}\, f(t).$$
The vector p = (p_1, ..., p_n) in S_T(t) is called the system signature, where p_i = P(T = X_{i:n}); the p_i are non-negative probabilities that sum to one. The order statistic U_{i:n} = F(X_{i:n}) of a uniform sample has pdf
$$g_i(u) = \frac{n!}{(i-1)!\,(n-i)!}\,u^{i-1}(1-u)^{n-i}, \qquad 0 \leq u \leq 1,$$
and the pdf of V = F(T) is then given by
$$g_V(v) = \sum_{i=1}^{n} p_i\, g_i(v), \qquad 0 \leq v \leq 1. \qquad (14)$$
Applying the above transformations, we obtain in the following proposition a formula for the Tsallis entropy of T.
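As a quick sanity check of (14) (our illustration; the three-component signature below is hypothetical), the mixture of beta densities can be built directly and verified to integrate to one:

```python
from math import comb
from scipy.integrate import quad

def g_V(v, p):
    """Pdf of V = F(T) from (14): a signature-weighted mixture of Beta(i, n-i+1) pdfs."""
    n = len(p)
    return sum(pi * i * comb(n, i) * v ** (i - 1) * (1 - v) ** (n - i)
               for i, pi in enumerate(p, start=1))

p = (0.25, 0.5, 0.25)                      # hypothetical signature, n = 3
print(quad(lambda v: g_V(v, p), 0, 1)[0])  # ~1.0
```

Here i * comb(n, i) equals n!/((i − 1)!(n − i)!), the normalizing constant of the Beta(i, n − i + 1) density.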

Proposition 1. The Tsallis entropy of T is
$$H_\alpha(T) = \frac{1}{\bar{\alpha}}\left[\int_0^1 g_V^{\alpha}(v)\, f^{\alpha-1}\big(F^{-1}(v)\big)\,dv - 1\right], \qquad (15)$$
where g_V(v) is as given in (14).
Proof. Using the change of variable v = F(x) and the fact that, by (11), f_T(x) = g_V(F(x)) f(x), relation (2) gives
$$\int_0^\infty f_T^{\alpha}(x)\,dx = \int_0^\infty g_V^{\alpha}(F(x))\, f^{\alpha}(x)\,dx = \int_0^1 g_V^{\alpha}(v)\, f^{\alpha-1}\big(F^{-1}(v)\big)\,dv$$
for all α ∈ D_α. The result now follows.
To apply Proposition 1, consider the next example.

Example 2. The vector s = (0, 0.2, 0.6, 0.2, 0) is the signature of the bridge system with n = 5 components in the iid case with baseline reliability function S(t) = e^{−λt}, t > 0. This system remains functional provided that there is a path of operational components running from left to right. Since f(F^{-1}(v)) = λ(1 − v) here, it follows from (15) that
$$H_\alpha(T) = \frac{1}{\bar{\alpha}}\left[\lambda^{\alpha-1}\int_0^1 g_V^{\alpha}(v)\,(1-v)^{\alpha-1}\,dv - 1\right]$$
for all α ∈ D_α. The Tsallis entropy is decreasing with respect to λ, as the uncertainty of the system's lifetime decreases when the parameter λ increases. Moreover, we have H_{0.2}(T) = 2.6952, H_{1.5}(T) = 0.4670, H_2(T) = 0.3682, and H_{2.5}(T) = 0.3062; the entropy clearly decreases as α increases.
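The values above can be reproduced with Proposition 1; the following sketch (our illustration, with λ = 1) assembles g_V from the bridge signature and integrates (15) numerically:

```python
from math import comb
from scipy.integrate import quad

s = (0.0, 0.2, 0.6, 0.2, 0.0)  # bridge-system signature, n = 5
n = len(s)
g_V = lambda v: sum(si * i * comb(n, i) * v ** (i - 1) * (1 - v) ** (n - i)
                    for i, si in enumerate(s, start=1))

def H_T(alpha, lam=1.0):
    # Proposition 1 with exponential components: f(F^{-1}(v)) = lam * (1 - v).
    integrand = lambda v: g_V(v) ** alpha * (lam * (1 - v)) ** (alpha - 1)
    return (quad(integrand, 0, 1)[0] - 1.0) / (1.0 - alpha)

for alpha in (0.2, 1.5, 2.0, 2.5):
    print(alpha, H_T(alpha))  # ~2.6952, 0.4670, 0.3682, 0.3062, up to rounding
```

For α < 1 the integrand has an integrable singularity at v = 1, which quad handles with at most a warning.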
The system signatures of orders 1-5 have been calculated in [17], and therefore we can compute the values of H_α(T) numerically for all α ∈ D_α. For several values of α, the Tsallis entropy of systems with 1-4 iid exponential components is given in Table 2; in general, H_α(T) behaves consistently with the standard deviation of T for the values of α considered there. The following theorem compares the Tsallis entropy of two mixed systems with identical signatures and iid component lifetimes by means of Equation (15). Table 2. Comparisons of the Tsallis entropy and its lower bound for some values of α.
Proof. (i) By the assumption X ≤_d Y, and since the two systems have the same signature, expression (15) gives, arguing exactly as in the proof of Theorem 1, H_α(T_X) ≤ H_α(T_Y) for all 0 < α < 1 and for all α > 1. This completes the proof of part (i).
(ii) If B_1 = ∅ or B_2 = ∅, the result is immediate. Hence, we assume that B_1 ≠ ∅ and B_2 ≠ ∅. The assumption X ≤_TE Y and Equation (2) imply the corresponding integral inequalities for α > 1 and for 0 < α < 1. Under the assumption in (17), it holds for all α > 1 that the chain of inequalities in (18) is valid; there, the second inequality follows from the assumption inf_{B_1} g_V(v) ≥ sup_{B_2} g_V(v), while the last one follows from (17). The result for 0 < α < 1 can be obtained in the same manner.
Let us take the following example to demonstrate the above theorem.

Example 3. Consider the lifetime T_X = min{X_1, max{X_2, X_3}} in the iid case with baseline cdf F, and let T_Z = min{Z_1, max{Z_2, Z_3}} be another lifetime with baseline cdf G. The common signature is s = (1/3, 2/3, 0). It can readily be seen that H_α(X) ≤ H_α(Z) for all α ∈ D_α, so that X ≤_TE Z. Moreover, it is readily apparent that B_1 = [0, 1) and B_2 = {1}, and hence inf_{B_1} g_V(v) = sup_{B_2} g_V(v) = 0. Therefore, part (ii) of Theorem 6 yields T_X ≤_TE T_Z.
We conclude by showing that, in the iid case, the minimum of the lifetimes (the series system) has Tsallis entropy lower than or equal to that of any mixed system, provided the component lifetimes are DFR.

Theorem 7.
If T represents the lifetime of a mixed system in the iid case and the component lifetimes are DFR, then X_{1:n} ≤_TE T.

Proof.
Recalling [18], we have X_{1:n} ≤_hr T. Moreover, the series system has a DFR lifetime whenever the parent component lifetime is DFR, and the hazard rate order combined with the DFR property of the smaller lifetime implies the dispersive order; thus X_{1:n} ≤_d T. The result now follows from Theorem 1.
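Theorem 7 can be illustrated numerically using Proposition 1; in the sketch below (our illustration), the baseline is a Pareto law with S(x) = (1 + x)^{−θ}, whose hazard θ/(1 + x) is decreasing (DFR), and the series system X_{1:3} (signature (1, 0, 0)) is compared with the mixed system of Example 3 (signature (1/3, 2/3, 0)):

```python
from math import comb
from scipy.integrate import quad

th, n = 2.0, 3
f_quantile = lambda v: th * (1 - v) ** ((th + 1) / th)  # f(F^{-1}(v)) for this Pareto

def H_T(sig, alpha):
    # Proposition 1: Tsallis entropy of a mixed system with the given signature.
    g = lambda v: sum(si * i * comb(n, i) * v ** (i - 1) * (1 - v) ** (n - i)
                      for i, si in enumerate(sig, start=1))
    integrand = lambda v: g(v) ** alpha * f_quantile(v) ** (alpha - 1)
    return (quad(integrand, 0, 1)[0] - 1.0) / (1.0 - alpha)

for alpha in (0.5, 2.0, 3.0):
    assert H_T((1, 0, 0), alpha) <= H_T((1 / 3, 2 / 3, 0), alpha)
print("X_{1:3} has the smaller Tsallis entropy, as Theorem 7 predicts")
```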

Conclusions
Several extensions of the Shannon entropy have been proposed in the literature. The Tsallis entropy, which can be considered an alternative measure of uncertainty, is one of these extended entropy measures. We have described here some further properties of this measure. We first determined the Tsallis entropy of some well-known distributions and then showed that it is not invariant under one-to-one transformations of the lifetime in the continuous case. Its connection with the usual stochastic order has also been revealed. It is well known that systems with longer lifetimes and lower uncertainty are preferable, and that the reliability of a system usually decreases as its uncertainty increases. These observations led us to study the Tsallis entropy of coherent structures and mixed systems in the iid case. Finally, we established some bounds on the Tsallis entropy of such systems and illustrated the usefulness of the given bounds.

Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.