Tsallis Entropy of a Used Reliability System at the System Level

Measuring the uncertainty of the lifetime of technical systems has become increasingly important in recent years. This criterion is useful for measuring how predictable a system is over its lifetime. In this paper, we consider a coherent system of n components with the property that, at time t, all components of the system are alive. We then apply the system signature to derive the Tsallis entropy of the remaining lifetime of such a coherent system, a useful criterion for measuring the predictability of the system lifetime. Various results, such as bounds and ordering properties for this entropy, are investigated. The results of this work can be used to compare the predictability of the remaining lifetime of two coherent systems with known signatures.


Introduction
For engineers, quantifying the uncertainty in the performance and lifetime of a system is critical. The reliability of a system decreases as uncertainty increases, and systems with longer lifetimes and lower uncertainty are better systems (see, e.g., Ebrahimi and Pellerey [1]). Information theory, which originates in Shannon's seminal work [2], provides measures of the uncertainty associated with a random phenomenon and has found applications in numerous areas. If X is a nonnegative random variable with an absolutely continuous cumulative distribution function (CDF) F(x) and probability density function f(x), the Tsallis entropy of order α is defined (see [3]) as

H_α(f) = \frac{1}{α-1} \left( 1 - \int_0^∞ f^α(x)\, dx \right) = \frac{1}{α-1} \left( 1 - E[f^{α-1}(X)] \right),    (1)

for all α > 0, α ≠ 1, where E(·) denotes the expected value. In general, the Tsallis entropy can be negative, but it is non-negative for appropriate choices of α. It is easy to see that H(f) = lim_{α→1} H_α(f), so the Tsallis entropy reduces to the Shannon differential entropy in the limit. The Shannon differential entropy is additive in the sense that, for two independent random variables X and Y, H(X, Y) = H(X) + H(Y), where (X, Y) denotes the joint random vector. The Tsallis entropy, by contrast, is non-additive in the sense that

H_α(X, Y) = H_α(X) + H_α(Y) + (1 - α) H_α(X) H_α(Y).

Because of this flexibility compared with Shannon entropy, non-additive entropy measures find applications in many areas of information theory, physics, chemistry, and engineering. If X denotes the lifetime of a new system, then H_α(X) measures the uncertainty of the new system. In some cases, however, an agent knows something about the current age of the system. For example, one may know that the system is still operating at time t and be interested in measuring the uncertainty of its remaining lifetime, that is, of X_t = [X - t | X > t]. In such situations, H_α(X) is no longer useful.
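The non-additivity identity above can be checked numerically. The following sketch is illustrative and not part of the paper: it assumes independent exponential lifetimes with arbitrary rates, evaluates the marginal Tsallis entropies by direct integration with scipy, and uses the closed form of the joint integral for the independent pair.

```python
import numpy as np
from scipy.integrate import quad

def tsallis_entropy(pdf, alpha):
    """H_alpha(f) = (1 - integral of f^alpha) / (alpha - 1), Eq. (1)."""
    integral, _ = quad(lambda x: pdf(x) ** alpha, 0, np.inf)
    return (1.0 - integral) / (alpha - 1.0)

lam, mu, alpha = 2.0, 0.5, 1.7          # illustrative rates and entropy order
f = lambda x: lam * np.exp(-lam * x)    # Exp(lam) density
g = lambda y: mu * np.exp(-mu * y)      # Exp(mu) density

Hx = tsallis_entropy(f, alpha)
Hy = tsallis_entropy(g, alpha)

# joint Tsallis entropy of the independent pair (X, Y), in closed form:
# integral of (f*g)^alpha = (lam^(alpha-1)/alpha) * (mu^(alpha-1)/alpha)
Hxy = (1.0 - (lam**(alpha - 1) / alpha) * (mu**(alpha - 1) / alpha)) / (alpha - 1.0)

# non-additivity: H(X,Y) = H(X) + H(Y) + (1 - alpha) H(X) H(Y)
assert abs(Hxy - (Hx + Hy + (1 - alpha) * Hx * Hy)) < 1e-8
```

The identity holds exactly for any pair of independent absolutely continuous random variables; the exponential choice only makes the joint integral available in closed form.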
Accordingly, the residual Tsallis entropy is defined as

H_α(X_t) = \frac{1}{α-1} \left( 1 - \int_t^∞ \left( \frac{f(x)}{S(t)} \right)^α dx \right),    (2)

where f_t(x) = f(x+t)/S(t), x, t > 0, is the probability density function (PDF) of X_t, S(t) = P(X > t) is the survival function of X, and S^{-1}(u) = inf{x : S(x) ≤ u} denotes the inverse of S. By the change of variable u = S_t(x), with S_t(x) = S(x+t)/S(t), Equation (2) can equivalently be written as

H_α(X_t) = \frac{1}{α-1} \left( 1 - \int_0^1 \left( f_t(S_t^{-1}(u)) \right)^{α-1} du \right).    (3)

Various properties, generalizations and applications of H_α(X_t) are investigated by Asadi et al. [4], Nanda and Paul [5], Zhang [6], Irshad et al. [7], Rajesh and Sunoj [8], Toomaj and Agh Atabay [9], and Mohamed et al. [10], among others.
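A minimal numerical sketch of the residual measure follows (an illustration, not from the paper): it assumes exponential lifetimes, chosen only because memorylessness forces H_α(X_t) to be constant in t, which gives a convenient sanity check of the definition.

```python
import numpy as np
from scipy.integrate import quad

def residual_tsallis(pdf, sf, alpha, t):
    """H_alpha(X_t) = (1 - integral_t^inf (f(x)/S(t))^alpha dx)/(alpha - 1), Eq. (2)."""
    integral, _ = quad(lambda x: (pdf(x) / sf(t)) ** alpha, t, np.inf)
    return (1.0 - integral) / (alpha - 1.0)

lam, alpha = 1.5, 0.6                 # illustrative rate and entropy order
f = lambda x: lam * np.exp(-lam * x)  # Exp(lam) density
S = lambda x: np.exp(-lam * x)        # Exp(lam) survival function

# memoryless exponential: the residual Tsallis entropy does not change with t
vals = [residual_tsallis(f, S, alpha, t) for t in (0.0, 1.0, 3.0)]
assert max(vals) - min(vals) < 1e-6
```

For any other lifetime model the same function applies unchanged; only the exponential case is flat in t.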
Several properties and statistical applications of Tsallis entropy have been studied in the literature; see Maasoumi [11], Abe [12], Asadi et al. [13] and the references therein. Recently, Alomani and Kayid [14] investigated further properties of Tsallis entropy, including its connection with the usual stochastic order, as well as properties and bounds for the dynamic version of this measure. Moreover, they studied properties of the Tsallis entropy of the lifetime of coherent and mixed systems, which is suitable for describing the uncertainty of a new system. For other applications and research concerned with measuring the uncertainty of reliability systems, we refer readers to [15-18] and the references therein. In contrast to the work of Alomani and Kayid [14], the aim of this work is to study uncertainty properties of a coherent system consisting of n components under the condition that, at time t, all components of the system are alive; in this sense, we generalize results already published in the literature. To this end, we use the concept of the system signature to determine the Tsallis entropy of the remaining lifetime of a coherent system.
The rest of this paper is organized as follows. In Section 2, we provide an expression for the Tsallis entropy of a coherent system under the assumption that all components have survived to time t. For this purpose, we use the concept of the system signature when the lifetimes of the components of the coherent system are independent and identically distributed. In Section 3, ordering properties of the residual Tsallis entropy of two coherent systems are derived from ordering properties of their signatures, without the need for direct calculation. Section 4 presents some useful bounds. Finally, Section 5 gives conclusions and further remarks.
Throughout the paper, "≤ st ", "≤ hr ", "≤ lr " and "≤ d " stand for stochastic, hazard rate, likelihood ratio and dispersive orders, respectively; for more details on these orderings, we refer the reader to Shaked and Shanthikumar [19].

Tsallis Entropy of the System in Terms of Signature Vectors of the System
In this section, the concept of the system signature is used to obtain the Tsallis entropy of the remaining lifetime of a coherent system with an arbitrary structure, assuming that all components of the system are functioning at time t. The signature of such a system is the n-dimensional vector p = (p_1, . . . , p_n) whose i-th element is p_i = P(T = X_{i:n}), i = 1, 2, . . . , n, where X_{i:n} is the i-th order statistic of the n independent and identically distributed (i.i.d.) component lifetimes X_1, . . . , X_n, that is, the time of the i-th component failure, and T is the failure time of the system (see Samaniego [20]). Consider a coherent system with i.i.d. component lifetimes X_1, . . . , X_n and a known signature vector p = (p_1, . . . , p_n). If T_t^{1,n} = [T - t | X_{1:n} > t] represents the remaining lifetime of the system under the condition that, at time t, all components of the system are functioning, then from the results of Khaledi and Shaked [21] the survival function of T_t^{1,n} can be expressed as

P(T_t^{1,n} > x) = \sum_{i=1}^n p_i P(T_t^{1,i,n} > x),    (4)

where T_t^{1,i,n} = [X_{i:n} - t | X_{1:n} > t], i = 1, 2, · · · , n, denotes the remaining lifetime of an i-out-of-n system under the condition that all components are alive at time t. The survival function and PDF of T_t^{1,i,n} are given by

P(T_t^{1,i,n} > x) = \sum_{k=0}^{i-1} \binom{n}{k} [1 - S_t(x)]^k [S_t(x)]^{n-k},    (5)

and

f_{T_t^{1,i,n}}(x) = \frac{Γ(n+1)}{Γ(i)\,Γ(n-i+1)} [1 - S_t(x)]^{i-1} [S_t(x)]^{n-i} f_t(x),    (6)

respectively, where S_t(x) = S(x+t)/S(t) and Γ(·) is the complete gamma function. It follows that

f_{T_t^{1,n}}(x) = \sum_{i=1}^n p_i f_{T_t^{1,i,n}}(x).    (7)

In what follows, we focus on the Tsallis entropy of the random variable T_t^{1,n}, which measures the uncertainty contained in the density of [T - t | X_{1:n} > t] and hence the predictability of the remaining lifetime of the system. The probability integral transformation V = S_t(T_t^{1,n}) plays a crucial role in achieving this goal.
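The mixture representation (4)-(5) can be checked by simulation. The sketch below is illustrative, not from the paper: it assumes exponential components (so that, given all components are alive at t, the residual component lifetimes are again exponential) and the structure T = min(X1, max(X2, X3)), whose signature (1/3, 2/3, 0) is well known.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
n, t, x, lam = 3, 0.5, 0.4, 1.0
p = [1/3, 2/3, 0.0]                 # signature of T = min(X1, max(X2, X3))

# residual survival S_t(x) = S(x+t)/S(t); exponential components are memoryless
St = np.exp(-lam * x)

def surv_os(i, n, s):
    """Survival function of the i-th order statistic of n iid lifetimes,
    evaluated where each component has survival probability s (Eq. (5))."""
    return sum(comb(n, k) * (1 - s)**k * s**(n - k) for k in range(i))

# Eq. (4): signature mixture of the conditional i-out-of-n survival functions
formula = sum(p[i - 1] * surv_os(i, n, St) for i in range(1, n + 1))

# Monte Carlo check: given all components alive at t, the residual component
# lifetimes are again Exp(lam), so simulating a fresh system suffices
N = 200_000
X = rng.exponential(1 / lam, size=(N, 3))
T = np.minimum(X[:, 0], np.maximum(X[:, 1], X[:, 2]))
mc = float(np.mean(T > x))
assert abs(formula - mc) < 0.01
```

For non-exponential components the Monte Carlo step would instead sample component lifetimes conditioned on exceeding t; the formula side is unchanged.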
It is clear that U_{i:n} = S_t(T_t^{1,i,n}) follows a beta distribution with parameters n - i + 1 and i, with PDF

g_i(u) = \frac{Γ(n+1)}{Γ(i)\,Γ(n-i+1)} u^{n-i} (1-u)^{i-1}, \quad 0 < u < 1.    (8)

In the forthcoming theorem, we provide an expression for the Tsallis entropy of T_t^{1,n} by using this transformation.

Theorem 1. The Tsallis entropy of T_t^{1,n} can be expressed as

H_α(T_t^{1,n}) = \frac{1}{α-1} \left( 1 - \int_0^1 g_V^α(u) \left( f_t(S_t^{-1}(u)) \right)^{α-1} du \right),    (9)

for all α > 0, where g_V(u) = \sum_{i=1}^n p_i g_i(u).
Proof. By using the change of variable u = S_t(x), from (2), (6) and (7) we obtain

H_α(T_t^{1,n}) = \frac{1}{α-1} \left( 1 - \int_0^∞ f_{T_t^{1,n}}^α(x)\, dx \right) = \frac{1}{α-1} \left( 1 - \int_0^1 g_V^α(u) \left( f_t(S_t^{-1}(u)) \right)^{α-1} du \right).

In the last equality, g_V(u) = \sum_{i=1}^n p_i g_i(u) is the PDF of V = S_t(T_t^{1,n}); it is also the density of the lifetime of the same system when the component lifetimes are i.i.d. uniform on (0, 1).
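Formula (9) can be validated numerically against direct integration of the mixture density (7). The following sketch (illustrative values, not from the paper) assumes exponential components, for which f_t(S_t^{-1}(u)) = λu by memorylessness, and reuses the signature (1/3, 2/3, 0) of the structure min(X1, max(X2, X3)).

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

n, lam, alpha = 3, 1.0, 2.5
p = [1/3, 2/3, 0.0]                      # signature of min(X1, max(X2, X3))

def g(i, u):                             # Beta(n - i + 1, i) density, Eq. (8)
    c = factorial(n) / (factorial(i - 1) * factorial(n - i))
    return c * u**(n - i) * (1 - u)**(i - 1)

gV = lambda u: sum(p[i - 1] * g(i, u) for i in range(1, n + 1))

# Theorem 1, Eq. (9); for exponential components f_t(S_t^{-1}(u)) = lam * u
I, _ = quad(lambda u: gV(u)**alpha * (lam * u)**(alpha - 1), 0, 1)
H_formula = (1 - I) / (alpha - 1)

# direct computation from the mixture density (7): f_T(x) = f(x) * gV(S(x))
fT = lambda x: lam * np.exp(-lam * x) * gV(np.exp(-lam * x))
J, _ = quad(lambda x: fT(x)**alpha, 0, np.inf)
H_direct = (1 - J) / (alpha - 1)
assert abs(H_formula - H_direct) < 1e-6
```

The agreement is exact up to quadrature error, since the two integrals are related by the substitution u = S_t(x) used in the proof.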
In the special case of an i-out-of-n system, with signature p = (0, . . . , 0, 1_i, 0, . . . , 0), i = 1, 2, · · · , n, Equation (9) reduces to

H_α(T_t^{1,i,n}) = \frac{1}{α-1} \left( 1 - \int_0^1 g_i^α(u) \left( f_t(S_t^{-1}(u)) \right)^{α-1} du \right),    (10)

for all t > 0. The next theorem follows immediately from Theorem 1 and the aging properties of the components. We recall that X has increasing (decreasing) failure rate, IFR (DFR), if its hazard rate λ(t) = f(t)/S(t) is increasing (decreasing) in t.

Theorem 2. If X is IFR (DFR), then H_α(T_t^{1,n}) is decreasing (increasing) in t for all α > 0.

Proof. We prove the claim when X is IFR; the proof for the DFR case is similar. It is easy to see that f_t(S_t^{-1}(u)) = u λ_t(S_t^{-1}(u)), 0 < u < 1, where λ_t denotes the hazard rate of X_t. This implies that Equation (9) can be rewritten as

H_α(T_t^{1,n}) = \frac{1}{α-1} \left( 1 - \int_0^1 g_V^α(u)\, u^{α-1} λ_t^{α-1}(S_t^{-1}(u))\, du \right),    (11)

for all α > 0. On the other hand, one can conclude that S_t^{-1}(u) = S^{-1}(uS(t)) - t for all 0 < u < 1, and hence we have

λ_t(S_t^{-1}(u)) = λ(S^{-1}(uS(t))).

If t_1 ≤ t_2, then S^{-1}(uS(t_1)) ≤ S^{-1}(uS(t_2)). Thus, when F is IFR, for all α > 1 (0 < α ≤ 1), we have

\int_0^1 g_V^α(u)\, u^{α-1} λ^{α-1}(S^{-1}(uS(t_1)))\, du ≤ (≥) \int_0^1 g_V^α(u)\, u^{α-1} λ^{α-1}(S^{-1}(uS(t_2)))\, du,

for all t_1 ≤ t_2. Using (11), we obtain H_α(T_{t_1}^{1,n}) ≥ H_α(T_{t_2}^{1,n}) for all α > 0, and this completes the proof.
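For exponential components, the series case i = 1 of Equation (10) admits an exact sanity check: [X_{1:n} - t | X_{1:n} > t] is again exponential with rate nλ, so (10) must reproduce the closed-form Tsallis entropy of Exp(nλ). A sketch with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad

n, lam, alpha = 4, 2.0, 1.8              # illustrative values

# series system (i = 1): Eq. (10) with g_1(u) = n u^(n-1);
# exponential components give f_t(S_t^{-1}(u)) = lam * u
I, _ = quad(lambda u: (n * u**(n - 1))**alpha * (lam * u)**(alpha - 1), 0, 1)
H_series = (1 - I) / (alpha - 1)

# for Exp(lam) components, [X_{1:n} - t | X_{1:n} > t] ~ Exp(n*lam),
# whose Tsallis entropy is (1 - (n*lam)^(alpha-1)/alpha)/(alpha-1)
H_closed = (1 - (n * lam)**(alpha - 1) / alpha) / (alpha - 1)
assert abs(H_series - H_closed) < 1e-8
```

The check is independent of t, again because of the memoryless property of the exponential distribution.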
The next example illustrates the results of Theorems 1 and 2.

Example 1. Consider a coherent system with system signature p = (0, 1/2, 1/4, 1/4). The exact value of H_α(T_t^{1,4}) can be calculated from relation (9) once the lifetime distribution of the components is specified. For this purpose, let us consider the following lifetime distributions.
(i) Consider a Pareto type II distribution. Applying (9), it can be seen that H_α(T_t^{1,4}) is an increasing function of time t; thus, the uncertainty of the conditional lifetime T_t^{1,4} increases as t increases. We recall that this distribution has the DFR property, so this behavior agrees with Theorem 2.

(ii) Let us suppose that X has a Weibull distribution with shape parameter k and survival function

S(x) = e^{-x^k}, \quad x > 0, \ k > 0.    (14)

In this case it is difficult to find an explicit expression for H_α(T_t^{1,4}), and we are therefore forced to compute it numerically. In Figure 1 we have plotted the entropy of T_t^{1,4} as a function of time t for α = 0.2 and α = 2 and several values of k > 0. It is known that X is DFR for 0 < k ≤ 1 and IFR for k ≥ 1. As expected from Theorem 2, H_α(T_t^{1,4}) is increasing in t in the DFR case and decreasing in t in the IFR case. The results are shown in Figure 1.

The following theorem bounds the residual Tsallis entropy of the system by the Tsallis entropy of the new system with lifetime T^{1,n} = T.

Theorem 3. If X is IFR (DFR), then H_α(T_t^{1,n}) ≤ (≥) H_α(T^{1,n}) for all t > 0 and all α > 0.

Proof. We prove the claim when X is IFR; the proof for the DFR case is similar. Since X is IFR, Theorem 3.B.25 of Shaked and Shanthikumar [19] implies that X ≥_d X_t, that is,

f_t(S_t^{-1}(u)) ≥ f(S^{-1}(u)), \quad 0 < u < 1,    (15)

for all t > 0. If α > 1 (0 < α < 1), we therefore have

\left( f_t(S_t^{-1}(u)) \right)^{α-1} ≥ (≤) \left( f(S^{-1}(u)) \right)^{α-1}, \quad 0 < u < 1.

Thus, from (9) and (15), we obtain H_α(T_t^{1,n}) ≤ H_α(T^{1,n}), and the proof is completed.

Theorem 4. If X is DFR, then a lower bound for H_α(T_t^{1,n}) is given as follows:

H_α(T_t^{1,n}) ≥ \frac{1}{α-1} \left( 1 - (S(t))^{1-α} \int_0^1 g_V^α(u) \left( f(S^{-1}(u)) \right)^{α-1} du \right),    (16)

for all α > 0.
Proof. Since X is DFR, it is NWU (i.e., S_t(x) ≥ S(x), x, t ≥ 0). This implies that

S^{-1}(uS(t)) ≥ S^{-1}(u) + t ≥ S^{-1}(u),

for all 0 < u < 1. On the other hand, it is known that when X is DFR, the PDF f is decreasing, which implies that

\left( f_t(S_t^{-1}(u)) \right)^{α-1} = \left( \frac{f(S^{-1}(uS(t)))}{S(t)} \right)^{α-1} ≤ (≥) \left( \frac{f(S^{-1}(u))}{S(t)} \right)^{α-1},

for all α > 1 (0 < α < 1). From (9), one can conclude the bound (16) for all α > 0, and this completes the proof.
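The DFR part of Theorem 2 can be checked numerically for the Weibull model (14) of Example 1, using (9) together with f_t(S_t^{-1}(u)) = u λ(S^{-1}(uS(t))), where λ is the hazard rate. The sketch below uses the Example 1 signature and illustrative values k = 0.5 (DFR) and α = 2; these particular numbers are choices made here, not taken from the paper.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

n, alpha, k = 4, 2.0, 0.5                # k < 1 => Weibull is DFR
p = [0.0, 1/2, 1/4, 1/4]                 # signature from Example 1

def g(i, u):                             # Beta(n - i + 1, i) density, Eq. (8)
    c = factorial(n) / (factorial(i - 1) * factorial(n - i))
    return c * u**(n - i) * (1 - u)**(i - 1)

gV = lambda u: sum(p[i - 1] * g(i, u) for i in range(1, n + 1))

S    = lambda x: np.exp(-x**k)           # Weibull survival, Eq. (14)
Sinv = lambda v: (-np.log(v))**(1.0 / k)
haz  = lambda x: k * x**(k - 1)          # Weibull hazard rate

def H(t):
    # Eq. (9) with f_t(S_t^{-1}(u)) = u * haz(S^{-1}(u * S(t)))
    integrand = lambda u: gV(u)**alpha * (u * haz(Sinv(u * S(t))))**(alpha - 1)
    I, _ = quad(integrand, 0, 1)
    return (1 - I) / (alpha - 1)

vals = [H(t) for t in (0.5, 1.0, 2.0, 4.0)]
assert all(a < b for a, b in zip(vals, vals[1:]))   # increasing in t, as Theorem 2 predicts
```

Setting k > 1 instead makes the hazard increasing, and the same script then shows H_α(T_t^{1,4}) decreasing in t.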

Entropy Ordering of Two Coherent Systems
Given the uncertainty in the lifetimes of two coherent systems, this section discusses partial orderings of their conditional lifetimes. Based on various orderings between the component lifetimes and between the signature vectors, we obtain results on the entropy ordering of two coherent systems. The next theorem compares the entropies of the residual lifetimes of two coherent systems.

Theorem 5. Let T_t^{X,1,n} = [T_X - t | X_{1:n} > t] and T_t^{Y,1,n} = [T_Y - t | Y_{1:n} > t] denote the residual lifetimes of two coherent systems with a common signature vector p, whose i.i.d. component lifetimes X_1, . . . , X_n and Y_1, . . . , Y_n are drawn from CDFs F and G, respectively. If X ≤_d Y and X or Y is IFR, then H_α(T_t^{X,1,n}) ≤ H_α(T_t^{Y,1,n}) for all α > 0.
Proof. In view of relation (9), it is sufficient to show that X_t ≤_d Y_t. Under the assumptions that X ≤_d Y and X or Y is IFR, the proof of Theorem 5 of Ebrahimi and Kirmani [22] yields X_t ≤_d Y_t, and this concludes the proof.

Example 2. Suppose that X ∼ W(3, 1) and Y ∼ W(2, 1), where W(k, 1) stands for the Weibull distribution with the survival function given in (14). It is easy to see that X ≤_d Y. Moreover, X and Y are both IFR. Thus, Theorem 5 yields H_α(T_t^{X,1,4}) ≤ H_α(T_t^{Y,1,4}) for all α > 0. The Tsallis entropies of these systems are plotted in Figure 2.

Next, we compare the residual Tsallis entropies of two coherent systems with the same component lifetimes and different structures.

Theorem 6. Let T_{1,t}^{1,n} = [T_1 - t | X_{1:n} > t] and T_{2,t}^{1,n} = [T_2 - t | X_{1:n} > t] represent the residual lifetimes of two coherent systems with signature vectors p_1 and p_2, respectively, whose components are independent and identically distributed according to the common CDF F, and let p_1 ≤_lr p_2. If f_t(S_t^{-1}(u)) is decreasing in u for all t > 0, then H_α(T_{1,t}^{1,n}) ≤ H_α(T_{2,t}^{1,n}) for all α > 0.

Some Useful Bounds
When the structure is complex and the number of components is large, it is difficult to compute H_α(T_t^{1,n}) for a coherent system. This situation is frequently encountered in practice. Under such circumstances, a bound on the Tsallis entropy can be useful for estimating the uncertainty of the lifetime of a coherent system. For some recent research on bounds on the uncertainty of the lifetime of coherent systems, we refer the reader, for example, to Refs. [15,16,23] and the references therein. In the following theorem, we provide bounds on the residual Tsallis entropy of the lifetime of the coherent system in terms of the residual Tsallis entropy of the parent distribution, H_α(X_t).

Theorem 7. Let T_t^{1,n} = [T - t | X_{1:n} > t] represent the residual lifetime of a coherent system consisting of n independent and identically distributed component lifetimes having the common CDF F and the signature p = (p_1, · · · , p_n). Suppose that H_α(T_t^{1,n}) < ∞ for all α > 0. It holds that

H_α(T_t^{1,n}) ≥ \frac{1}{α-1} \left( 1 - B_n^α(p) \left[ 1 - (α-1) H_α(X_t) \right] \right),    (18)

for all α > 1, and

H_α(T_t^{1,n}) ≤ \frac{1}{α-1} \left( 1 - B_n^α(p) \left[ 1 - (α-1) H_α(X_t) \right] \right),    (19)

for 0 < α < 1, where B_n(p) = \sum_{i=1}^n p_i g_i(m_i) and m_i = (n-i)/(n-1) denotes the mode of the beta distribution with parameters n - i + 1 and i.
Proof. It can be clearly verified that the mode of the beta distribution with parameters n - i + 1 and i is m_i = (n-i)/(n-1). Therefore, we obtain

g_V(u) = \sum_{i=1}^n p_i g_i(u) ≤ \sum_{i=1}^n p_i g_i(m_i) = B_n(p), \quad 0 < u < 1.

Thus, for α > 1 (0 < α < 1), we have

\int_0^1 g_V^α(u) \left( f_t(S_t^{-1}(u)) \right)^{α-1} du ≤ B_n^α(p) \int_0^1 \left( f_t(S_t^{-1}(u)) \right)^{α-1} du = B_n^α(p) \left[ 1 - (α-1) H_α(X_t) \right].

The last equality is obtained from (3), from which the desired result follows.
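The quantities in Theorem 7 are straightforward to compute. The sketch below (illustrative, not from the paper) evaluates B_n(p) for the signature (1/3, 2/3, 0), confirms that g_V is dominated by B_n(p), and checks the lower bound (18) against the exact value from (9) for exponential components on the α > 1 branch.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

n, lam, alpha = 3, 1.0, 2.5              # alpha > 1 branch of the bound
p = [1/3, 2/3, 0.0]                      # illustrative signature

def g(i, u):                             # Beta(n - i + 1, i) density, Eq. (8)
    c = factorial(n) / (factorial(i - 1) * factorial(n - i))
    return c * u**(n - i) * (1 - u)**(i - 1)

gV = lambda u: sum(p[i - 1] * g(i, u) for i in range(1, n + 1))

# B_n(p): each Beta(n-i+1, i) density is maximized at its mode m_i = (n-i)/(n-1)
B = sum(p[i - 1] * g(i, (n - i) / (n - 1)) for i in range(1, n + 1))
assert all(gV(u) <= B + 1e-12 for u in np.linspace(0.001, 0.999, 500))

# exact residual Tsallis entropy of the system via Eq. (9);
# exponential components give f_t(S_t^{-1}(u)) = lam * u for every t
I, _ = quad(lambda u: gV(u)**alpha * (lam * u)**(alpha - 1), 0, 1)
H_exact = (1 - I) / (alpha - 1)

# residual Tsallis entropy of one component: 1 - (alpha-1)*Ht = lam^(alpha-1)/alpha
Ht = (1 - lam**(alpha - 1) / alpha) / (alpha - 1)
lower = (1 - B**alpha * (1 - (alpha - 1) * Ht)) / (alpha - 1)
assert lower <= H_exact + 1e-9           # the bound (18) holds
```

Repeating the computation with 0 < α < 1 turns the same expression into the upper bound (19).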
The bounds given in (18) and (19) are particularly valuable when the number of components is large or the structure of the system is complicated. Next, we obtain a general lower bound using properties of the Tsallis information measure and some standard mathematical tools.
For a coherent system of five components, one can verify that B_5(p) = 2.22. Thus, by Theorem 7, the Tsallis entropy of T_t^{1,5} is bounded from below for α > 1 and from above for 0 < α < 1 by (18) and (19), respectively. Moreover, the lower bound given in (20) can be obtained for all α > 0. Assuming a uniform distribution for the component lifetimes, we computed the bounds given by (18) (dashed line), the exact value of H_α(T_t^{1,3}) obtained directly from (9), and the lower bound given by (22) (dotted line). The results are displayed in Figure 4. As can be seen, for α > 1 the lower bound in (22) (dotted line) is better than the one given by (18).

Conclusions
Intuitively, it is better to have systems that work longer and whose remaining lifetime is less uncertain. We can make more accurate predictions when a system has low uncertainty. The Tsallis entropy of a system is therefore an important measure for designing systems. If we have some information about the lifetime of the system at time t, for example, that the system still functions at age t, then we may be interested in quantifying the predictability of its remaining lifetime. In this work, we presented an explicit expression for the Tsallis entropy of the system lifetime for the case where all components of the system are in operation at time t. Several properties of the proposed measure were discussed. In addition, partial stochastic orderings of the remaining lifetimes of two coherent systems were established in terms of their Tsallis entropy using the concept of the system signature. Several examples were given to illustrate the results.

Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.