Article

Renyi Entropy of the Residual Lifetime of a Reliability System at the System Level

1
Département de Mathématiques et d’Informatique, Université du Québec à Trois-Rivières, 3351 Boulevard des Forges, Trois-Rivières, QC G9A 5H7, Canada
2
Department of Statistics, United Arab Emirates University, Al Ain 15551, United Arab Emirates
3
Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
*
Author to whom correspondence should be addressed.
Axioms 2023, 12(4), 320; https://doi.org/10.3390/axioms12040320
Submission received: 5 February 2023 / Revised: 18 March 2023 / Accepted: 21 March 2023 / Published: 23 March 2023
(This article belongs to the Special Issue Information Theory in Economics, Finance, and Management)

Abstract:
The measurement of uncertainty across the lifetimes of engineering systems has drawn increasing attention in recent years. It is a helpful metric for assessing how predictable a system’s lifetime is. In these circumstances, Renyi entropy, an extension of Shannon entropy, is particularly appealing. In this paper, we use the system signature to give an explicit formula for the Renyi entropy of the residual lifetime of a coherent system given that all components of the system have survived to time t. In addition, several properties of this entropy, including bounds and ordering results, are studied. The findings of this study make it possible to compare the predictability of the residual lifetimes of two coherent systems with known signatures.
MSC:
60E05; 62B10; 62N05; 94A17

1. Introduction

In physics, the idea of entropy has been crucial as a very useful measure. Clausius [1] introduced the metric in 1850 to indicate the thermodynamic change of heat transferred during the course of a reversible process at a specific temperature $T$. However, entropy’s significance in statistical mechanics lies at a slightly deeper level, where it is typically viewed as the level of uncertainty in the state that a physical system can achieve, or as a link between microscopic and macroscopic descriptions, since it counts the number of states that an atom or molecule must take in order to satisfy a macroscopic configuration. Entropy’s application is not restricted to statistical analysis, because it is directly related to the second law of thermodynamics and therefore applies to all other areas of physics. Entropy in statistical mechanics is measured using the Boltzmann–Gibbs measure, which is given by
$$ H = -\sum_{i=1}^{N} p_i \log p_i, $$
in which $p = (p_1, \ldots, p_N)$ stands for a discrete probability distribution over $N$ microstates of a random quantum state. For the uniform distribution $p_i = 1/N$, the measure $H$ attains its greatest value, $H = \log N$, which holds when the system is in equilibrium. The Boltzmann–Gibbs entropy has the additivity property: for two non-interacting systems $A$ and $B$, adequately separated from each other, with accessible microstates $N_A$ and $N_B$, respectively, one has $H(N_A N_B) = H(N_A) + H(N_B)$. It is worth mentioning that $H$ also has the extensivity property, i.e., the composite system $A + B$ has an entropy that fulfills $H(N_{A+B}) = H(N_A) + H(N_B)$. It should be noted that the entropy in Equation (1) does not depend on the dimension, so temperature in this discussion has the dimension of energy.
The application of entropy in mechanics, thermodynamics, and fatigue life modeling has been presented recently. For example, in the Basaran [2] theory, Newton’s universal laws of motion included the creation of an entropy. Lee et al. [3] used the thermodynamic state index (TSI) determined via cumulative entropy generation, which is used to predict the lifetime. On the basis of the theory used in unified mechanics, Lee et al. [4] introduced a fatigue life model to anticipate the fatigue life of metals at very high cycling using an entropy generation mechanism. Lee and Basaran [5] analyzed models based on an irreversible entropy as a metric with an empirical evolution function for empirical models developed in the context of Newtonian mechanics. Another fatigue model using entropy generation was proposed by Temfack and Basaran [6].
Several generalizations of the Boltzmann–Gibbs entropy have been brought forward to extend the reach of statistical mechanics. Some of these approaches have been prompted by the desire to maintain the thermodynamic limit while deforming the structured entropy of Equation (1) without free parameters (see, for example, the work of Obregón [7], Fuentes et al. [8], and Fuentes and Obregón [9]). Others have studied relaxations of the measure in Equation (1) obtained by introducing free parameters (see Kaniadakis [10] and Sharma and Mittal [11]). In the remainder of this paper, we concentrate on the Renyi entropy (RE), which was first discussed in relation to coding and information theory by Renyi [12] as one of the first endeavors to extend the Shannon entropy (Shannon [13]). The definition of the RE is
$$ H_\alpha = \frac{1}{1-\alpha} \log \sum_{i=1}^{N} p_i^{\alpha}, $$
where $\alpha$ is a free parameter that deforms the entropy structure. If $\alpha \to 1$, then the RE reduces to the Boltzmann–Gibbs case. The logarithmic structure allows the RE to retain the additivity property no matter what value the free parameter takes, although the extensivity property is no longer retained for any $\alpha \neq 1$.
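These two properties are easy to verify numerically. The following minimal sketch (Python, standard library only; the distributions are arbitrary choices for illustration) checks that the discrete RE approaches the Boltzmann–Gibbs value as $\alpha \to 1$ and that it is additive over independent systems for every $\alpha$:

```python
import math

def renyi_discrete(p, alpha):
    """Renyi entropy H_alpha = log(sum p_i^alpha) / (1 - alpha), for alpha != 1."""
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

def boltzmann_gibbs(p):
    """Boltzmann-Gibbs (Shannon) entropy, the alpha -> 1 limit of the RE."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]
q = [0.7, 0.3]

# alpha close to 1 recovers the Boltzmann-Gibbs value
print(renyi_discrete(p, 1.0001), boltzmann_gibbs(p))

# additivity: for the product distribution of two independent systems,
# H_alpha of the joint equals the sum of the marginal entropies for every alpha
joint = [pi * qj for pi in p for qj in q]
print(renyi_discrete(joint, 2.0), renyi_discrete(p, 2.0) + renyi_discrete(q, 2.0))
```

For the uniform distribution, the RE equals $\log N$ for every $\alpha$, in agreement with the equilibrium case discussed above.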
Because of its properties, the RE has gained considerable attention in information theory from both the quantum and classical perspectives (see, for example, the work of Campbell [14], Csiszar [15], and Goold et al. [16]). The RE is a powerful measure for quantifying quantum entanglement and the corresponding strong correlations found in quantum fields (Cui et al. [17]). For instance, in multipartite systems, the special case $\alpha = 2$ was discovered to be a measure of information describing the Gaussian states of harmonic quantum oscillators, resulting in strong subadditivity inequalities that require generalized measures of mutual information (Adesso et al. [18]), the computation of which can be traced through path regularization schemes (Srdinšek [19]). Generalizations of conditional quantum information and the topological entanglement entropy are two more uses of the RE in quantum information (Berta et al. [20]). The RE has also been proposed as a method to describe phase changes between self-organized states in dynamical problems, both in complex systems (Beck [21] and Bashkirov [22]) and in fuzzy systems (Eslami-Giski et al. [23]), connected to the occurrence of quantum fuzzy trajectories (Fuentes [24]). The viability of this entropy measure has also been acknowledged in other disciplines, such as molecular imaging for therapeutic applications (Hughes [25]), mathematical physics (Franchini et al. [26]), and biostatistics (Chavanis [27]). The concept of the RE can also be defined for a continuous distribution function (CDF) with some obvious modifications.
In the context of statistics and probability, quantifying uncertainties in a system’s lifetime is critical for engineers performing survival analysis. They concur that systems with higher reliability and longer lifetimes are better systems and that system reliability declines as uncertainties rise (see, for example, the work of Ebrahimi and Pellerey [28]). Let $X$ be a non-negative random variable (RV) with a probability density function (PDF) $f$. The RE of order $\alpha$ is given by
$$ H_\alpha(X) = \frac{1}{1-\alpha} \log \int_0^{\infty} f^{\alpha}(x)\, dx, \quad \alpha > 0, $$
in which “log” represents the natural logarithm. Specifically, the Shannon differential entropy [13] is recovered as $H(X) = \lim_{\alpha \to 1} H_\alpha(X) = -E[\log f(X)]$. It is worth pointing out that the Shannon differential entropy measures the uniformity of a PDF; in this sense, the maximum-entropy distribution is the uniform one. Hence, greater values of $H(X)$ indicate more uncertainty generated by the PDF $f$ and, as a result, a reduced ability to predict the next outcome of the RV $X$.
If $X$ denotes the lifetime of a new system, then $H_\alpha(X)$ measures the uncertainty of the new system. In some cases, agents know something regarding the age of the system. For example, one may know that the system is alive at time $t$ and be interested in the uncertainty of its residual lifetime $X_t = [X - t \mid X > t]$. In such situations, $H_\alpha(X)$ is no longer useful. Accordingly, the residual RE is
$$ H_\alpha(X_t) = \frac{1}{1-\alpha} \log \int_0^{\infty} f_t^{\alpha}(x)\, dx = \frac{1}{1-\alpha} \log \int_0^{\infty} \left( \frac{f(x+t)}{S(t)} \right)^{\alpha} dx $$
$$ = \frac{1}{1-\alpha} \log \int_0^1 f_t^{\alpha-1}\big(S_t^{-1}(u)\big)\, du, \quad \alpha > 0, $$
where
$$ f_t(x) = \frac{f(x+t)}{S(t)}, \quad x, t > 0, $$
is the PDF of $X_t$, $S(t) = P(X > t)$ is the survival function (SF) of $X$, and $S_t^{-1}(u) = \inf\{x;\ S_t(x) \leq u\}$ is the quantile function of $S_t(x) = S(x+t)/S(t)$, $x, t > 0$. Various properties, generalizations, and applications of $H_\alpha(X_t)$ were investigated by Asadi et al. [29], Gupta and Nanda [30], Nanda and Paul [31], and the authors of the references therein.
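The residual RE defined above can be evaluated numerically for any specified PDF and SF. The following sketch (Python, standard library only; the exponential rate $\lambda = 1.5$ is an arbitrary choice) approximates $H_\alpha(X_t)$ with a simple midpoint rule and illustrates that, by memorylessness, the residual RE of an exponential lifetime does not depend on $t$:

```python
import math

def residual_renyi(f, S, t, alpha, upper=60.0, n=200000):
    """Residual Renyi entropy H_alpha(X_t): 1/(1-alpha) times the log of the
    integral over x > 0 of (f(x+t)/S(t))^alpha, computed by a midpoint rule
    on the truncated range [0, upper]."""
    h = upper / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += (f(x + t) / S(t)) ** alpha * h
    return math.log(total) / (1.0 - alpha)

lam = 1.5
f = lambda x: lam * math.exp(-lam * x)   # exponential PDF
S = lambda x: math.exp(-lam * x)         # exponential SF

# memorylessness: the residual RE of the exponential does not change with t;
# for alpha = 2 its exact value is -log(lam / 2)
print(residual_renyi(f, S, 0.0, 2.0))
print(residual_renyi(f, S, 3.0, 2.0))
```

For non-exponential lifetimes, the same routine exhibits the aging behavior discussed later in the paper.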
We recall that a number of criteria are available for assessing the aging process of a lifetime unit. To this end, the residual lifetime (RL) is an illustrative measure of aging behavior. In fact, when the distribution of the RL is not affected by the age of a system, the system is said to exhibit the no-aging property. Thus, the behavior of the RL is dominant when one studies the aging phenomenon in a component or, more generally, in a system. In this paper, the RE of the RL of a system is evaluated. Namely, a formula for the RE of the RL is presented for the case in which all components of the system are working at time $t$. The expressions are based on system signatures, lifetime distribution functions, and the beta distribution. The concept of a system signature applies when the lifetimes of the components of a coherent system are independent and identically distributed (IID). Ebrahimi [32] initiated the concept of the dynamic Shannon entropy and derived several of its properties. Toomaj and Doostparast [33] studied the classical Shannon entropy and its properties for mixed systems, and further findings on this measure have also been obtained (see also [34]). Recently, Toomaj et al. [35] investigated the information properties of working used systems by applying a dynamic signature. In this paper, we continue this line of research and investigate the RE properties of working used systems using system signatures. In fact, we generalize the results of the aforementioned papers.
The results of this work are organized as follows. In Section 2, we provide an expression for the RE of a coherent system under the assumption that all components have survived to a certain point in time. In Section 3, the residual RE is ordered based on some ordering properties of system signatures, without being computed directly. Section 4 presents some useful limits and bounds for the new measure. Section 5 concludes with some remarks that may be useful in future studies.
In the remaining parts of the paper, the notations $\leq_{st}$, $\leq_{hr}$, $\leq_{lr}$, and $\leq_{d}$ are used to signify the usual stochastic order, the hazard rate order, the likelihood ratio order, and the dispersive order, respectively. For further aspects and properties of these orders, the reader is referred to Shaked and Shanthikumar [36].

2. Renyi Entropy of the Residual Lifetime

In this section, we use the concept of the system signature to find an expression for the RE of the RL of a coherent system with an arbitrary structure, given that all components of the system are in operation at time $t$. The signature of a coherent system with $n$ components is an $n$-dimensional vector $\mathbf{p} = (p_1, \ldots, p_n)$ whose $j$th element is $p_j = P(T = X_{j:n})$, $j = 1, \ldots, n$, in which $T$ represents the lifetime of the underlying coherent system and $X_{1:n}, \ldots, X_{n:n}$ represent the order statistics of the $n$ IID component lifetimes $X_1, \ldots, X_n$ with a common distribution. For more details, see, for example, the work of Samaniego [37]. Let us consider a coherent structure with IID component lifetimes $X_1, \ldots, X_n$ and a known signature vector $\mathbf{p} = (p_1, \ldots, p_n)$. If $T_t^{1,n} = [T - t \mid X_{1:n} > t]$ represents the RL of the system provided that all components of the system are functioning at time $t$, then from the results of Khaledi and Shaked [38], the SF of $T_t^{1,n}$ is derivable as follows:
$$ P(T_t^{1,n} > x) = \sum_{i=1}^{n} p_i\, P(X_{i:n} - t > x \mid X_{1:n} > t) = \sum_{i=1}^{n} p_i\, P(T_t^{1,i,n} > x), $$
where $T_t^{1,i,n} = [X_{i:n} - t \mid X_{1:n} > t]$, $i = 1, 2, \ldots, n$, denotes the RL of an $i$-out-of-$n$ system, provided that all components are operating at time $t$. The SF and PDF of $T_t^{1,i,n}$ are given by
$$ P(T_t^{1,i,n} > x) = \sum_{k=0}^{i-1} \binom{n}{k} \big[1 - S_t(x)\big]^{k} \big[S_t(x)\big]^{n-k}, \quad x, t > 0, $$
and
$$ f_{T_t^{1,i,n}}(x) = \frac{\Gamma(n+1)}{\Gamma(i)\,\Gamma(n-i+1)} \big[1 - S_t(x)\big]^{i-1} \big[S_t(x)\big]^{n-i} f_t(x), \quad x, t > 0, $$
respectively, where Γ ( · ) is the complete gamma function. It follows that
$$ f_{T_t^{1,n}}(x) = \sum_{i=1}^{n} p_i\, f_{T_t^{1,i,n}}(x), \quad x, t > 0. $$
In what follows, we concentrate on the RE of the RV $T_t^{1,n}$, which measures the degree of uncertainty induced by the PDF of $[T - t \mid X_{1:n} > t]$ concerning the predictability of the RL of the system. The transformation $V = S_t(T_t^{1,n})$ plays a crucial role in our goal. It is clear that $U_{i:n} = S_t(T_t^{1,i,n})$ follows a beta distribution with parameters $n-i+1$ and $i$ and PDF
$$ g_i(u) = \frac{\Gamma(n+1)}{\Gamma(i)\,\Gamma(n-i+1)} (1-u)^{i-1} u^{n-i}, \quad 0 < u < 1,\ i = 1, \ldots, n. $$
Next, we give a statement on the RE of T t 1 , n by applying the earlier transformations:
Theorem 1.
The RE of T t 1 , n can be derived as
$$ H_\alpha(T_t^{1,n}) = \frac{1}{1-\alpha} \log \int_0^1 g_V^{\alpha}(u)\, f_t^{\alpha-1}\big(S_t^{-1}(u)\big)\, du, \quad t > 0, $$
where $S_t^{-1}(u) = \inf\{x;\ S_t(x) \leq u\}$, $0 < u < 1$, for all $\alpha > 0$.
Proof. 
By using the change of variable $u = S_t(x)$, from Equations (4) and (8), we obtain
$$ H_\alpha(T_t^{1,n}) = \frac{1}{1-\alpha} \log \int_0^{\infty} \big[f_{T_t^{1,n}}(x)\big]^{\alpha}\, dx = \frac{1}{1-\alpha} \log \int_0^{\infty} \left[ \sum_{i=1}^{n} p_i\, f_{T_t^{1,i,n}}(x) \right]^{\alpha} dx $$
$$ = \frac{1}{1-\alpha} \log \int_0^1 \left[ \sum_{i=1}^{n} p_i\, g_i(u) \right]^{\alpha} \big[f_t(S_t^{-1}(u))\big]^{\alpha-1}\, du = \frac{1}{1-\alpha} \log \int_0^1 g_V^{\alpha}(u) \big[f_t(S_t^{-1}(u))\big]^{\alpha-1}\, du. $$
In the last equality, $g_V(u) = \sum_{i=1}^{n} p_i\, g_i(u)$ is the PDF of $V$, which represents the lifetime of the same system when the component lifetimes are IID uniform on $[0,1]$. □
According to Equation (11), if $\mathbf{p} = (0, \ldots, 0, 1_i, 0, \ldots, 0)$ (standing for the signature of an $i$-out-of-$n$ system), then
$$ H_\alpha(T_t^{1,i,n}) = H_\alpha(U_{i:n}) - \frac{1}{\alpha-1} \log E\Big[f_t^{\alpha-1}\big(S_t^{-1}(Z_i)\big)\Big], $$
in which $Z_i$ follows the beta distribution with parameters $\alpha(i-1)+1$ and $\alpha(n-i)+1$, and
$$ H_\alpha(U_{i:n}) = \frac{\alpha}{\alpha-1} \log B(i, n-i+1) - \frac{1}{\alpha-1} \log B\big(\alpha(i-1)+1, \alpha(n-i)+1\big), $$
where $B(a,b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$ stands for the beta function. In the special case $t = 0$, Equation (12) coincides with the results of Abbasnejad and Arghami [39].
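The closed form for $H_\alpha(U_{i:n})$ above is easy to check numerically. The following sketch (Python, standard library only) compares it with a direct midpoint-rule evaluation of $\frac{1}{1-\alpha}\log\int_0^1 g^{\alpha}(u)\,du$ for the beta density of $U_{i:n}$; note that the RE is invariant under $u \mapsto 1-u$, so the order of the two beta parameters is immaterial:

```python
import math

def beta_fn(a, b):
    """Beta function B(a, b) via gamma functions."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def renyi_beta_closed(i, n, alpha):
    """Closed form for H_alpha(U_{i:n}) stated in the text."""
    return (alpha / (alpha - 1.0)) * math.log(beta_fn(i, n - i + 1)) \
        - (1.0 / (alpha - 1.0)) * math.log(beta_fn(alpha * (i - 1) + 1,
                                                   alpha * (n - i) + 1))

def renyi_beta_numeric(i, n, alpha, m=200000):
    """Midpoint-rule evaluation of 1/(1-alpha) * log int_0^1 g^alpha(u) du
    for the beta density of the i-th order statistic of n uniforms."""
    c = 1.0 / beta_fn(i, n - i + 1)
    h = 1.0 / m
    total = 0.0
    for k in range(m):
        u = (k + 0.5) * h
        total += (c * u ** (i - 1) * (1.0 - u) ** (n - i)) ** alpha * h
    return math.log(total) / (1.0 - alpha)

print(renyi_beta_closed(2, 5, 2.0))
print(renyi_beta_numeric(2, 5, 2.0))  # the two values should agree closely
```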
The next theorem, concerning the aging properties of the components, follows directly from Theorem 1. We recall that $X$ has an increasing (decreasing) failure rate (IFR (DFR)) if $S_t(x)$ is decreasing (increasing) in $t$ for all $x > 0$:
Theorem 2.
If $X$ is IFR (DFR), then $H_\alpha(T_t^{1,n})$ is decreasing (increasing) in $t$ for all $\alpha > 0$.
Proof. 
We prove the result for the case when $X$ is IFR; the proof for the DFR case is similar. It is plain to observe that $f_t(S_t^{-1}(u)) = u\, \lambda_t(S_t^{-1}(u))$, $0 < u < 1$, where $\lambda_t$ denotes the hazard rate of $X_t$. This implies that Equation (11) can be rewritten as
$$ e^{(1-\alpha) H_\alpha(T_t^{1,n})} = \int_0^1 g_V^{\alpha}(u)\, u^{\alpha-1} \big[\lambda_t(S_t^{-1}(u))\big]^{\alpha-1}\, du, $$
for all $\alpha > 0$. Moreover, one can find that $S_t^{-1}(u) = S^{-1}(u S(t)) - t$ for all $0 < u < 1$, and hence one has
$$ \lambda_t\big(S_t^{-1}(u)\big) = \lambda\big(S_t^{-1}(u) + t\big) = \lambda\big(S^{-1}(u S(t))\big), \quad 0 < u < 1. $$
If $t_1 \leq t_2$, then $S^{-1}(u S(t_1)) \leq S^{-1}(u S(t_2))$. Therefore, when $F$ is IFR, for all $\alpha > 1$ ($0 < \alpha \leq 1$) we have
$$ \int_0^1 g_V^{\alpha}(u)\, u^{\alpha-1} \big[\lambda_{t_1}(S_{t_1}^{-1}(u))\big]^{\alpha-1}\, du = \int_0^1 g_V^{\alpha}(u)\, u^{\alpha-1} \big[\lambda(S^{-1}(u S(t_1)))\big]^{\alpha-1}\, du $$
$$ \leq (\geq) \int_0^1 g_V^{\alpha}(u)\, u^{\alpha-1} \big[\lambda(S^{-1}(u S(t_2)))\big]^{\alpha-1}\, du = \int_0^1 g_V^{\alpha}(u)\, u^{\alpha-1} \big[\lambda_{t_2}(S_{t_2}^{-1}(u))\big]^{\alpha-1}\, du, $$
for all $t_1 \leq t_2$. Using (14), we get
$$ e^{(1-\alpha) H_\alpha(T_{t_1}^{1,n})} \leq (\geq)\; e^{(1-\alpha) H_\alpha(T_{t_2}^{1,n})}, $$
for all $\alpha > 1$ ($0 < \alpha \leq 1$). This implies that $H_\alpha(T_{t_1}^{1,n}) \geq H_\alpha(T_{t_2}^{1,n})$ for all $\alpha > 0$, and this completes the proof. □
The next example illustrates the results of Theorems 1 and 2:
Example 1.
Consider a bridge system with the system signature $\mathbf{p} = (0, 1/5, 3/5, 1/5, 0)$. The exact value of $H_\alpha(T_t^{1,5})$ can be calculated using the relation in Equation (11), given the component lifetime distributions. For this purpose, let us assume the following lifetime distributions:
(i)
Let $X$ follow the uniform distribution on $[0,1]$. From Equation (11), we immediately obtain
$$ H_\alpha(T_t^{1,5}) = \log(1-t) + \frac{1}{1-\alpha} \log \int_0^1 g_V^{\alpha}(u)\, du, \quad 0 < t < 1. $$
We see that $H_\alpha(T_t^{1,5})$ is decreasing in $t$. Since the uniform distribution has the IFR property, $H_\alpha(T_t^{1,5})$ decreases as $t$ increases, as expected from Theorem 2.
(ii)
Consider a Pareto type II distribution with the SF
$$ S(t) = (1+t)^{-2}, \quad t > 0. $$
It is not hard to see that
$$ H_\alpha(T_t^{1,5}) = \log\left(\frac{1+t}{2}\right) + \frac{1}{1-\alpha} \log \int_0^1 u^{\frac{3(\alpha-1)}{2}} g_V^{\alpha}(u)\, du, \quad t > 0. $$
It is obvious that $H_\alpha(T_t^{1,5})$ is increasing in $t$. Thus, the uncertainty of the conditional lifetime $T_t^{1,5}$ increases as $t$ increases. We recall that this distribution has the DFR property.
(iii)
Let us suppose that $X$ has a Weibull distribution with shape parameter $k$ and SF
$$ S(t) = e^{-t^k}, \quad k, t > 0. $$
Through some manipulation, we obtain
$$ H_\alpha(T_t^{1,5}) = -\log k + \frac{1}{1-\alpha} \log \int_0^1 \big(t^k - \log u\big)^{\left(1-\frac{1}{k}\right)(\alpha-1)}\, u^{\alpha-1}\, g_V^{\alpha}(u)\, du, \quad t > 0. $$
It is not easy to obtain a closed-form expression for the above relation, and therefore we computed it numerically. In Figure 1, we plot the entropy of $T_t^{1,5}$ as a function of time $t$ for $\alpha = 0.2$ and $\alpha = 2$ with $0 < k < 1$, in which case the distribution has the DFR property. As expected from Theorem 2, it is evident that $H_\alpha(T_t^{1,5})$ increases in $t$. In Figure 2, we plot the entropy of $T_t^{1,5}$ with respect to $t$ for $\alpha = 0.2$ and $\alpha = 2$ with $k \geq 1$, which has the IFR property. As expected from Theorem 2, it is evident that $H_\alpha(T_t^{1,5})$ decreases in $t$.
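For case (i), the computation can be carried out directly. The sketch below (Python, standard library only) builds $g_V$ from the bridge signature and evaluates $H_\alpha(T_t^{1,5})$ for uniform components, where the dependence on $t$ enters only through $\log(1-t)$, confirming the monotone decrease predicted by Theorem 2 in the IFR case:

```python
import math

def beta_fn(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def g_V(u, p):
    """Signature mixture g_V(u) = sum_i p_i g_i(u), with g_i the
    Beta(n-i+1, i) density given in the text."""
    n = len(p)
    return sum(p[i - 1] * (1.0 - u) ** (i - 1) * u ** (n - i) / beta_fn(n - i + 1, i)
               for i in range(1, n + 1))

def renyi_bridge_uniform(t, alpha, p=(0.0, 0.2, 0.6, 0.2, 0.0), m=100000):
    """H_alpha(T_t^{1,5}) for the bridge system with uniform [0,1] components:
    log(1-t) + (1/(1-alpha)) * log int_0^1 g_V^alpha(u) du (midpoint rule)."""
    h = 1.0 / m
    integral = sum(g_V((k + 0.5) * h, p) ** alpha * h for k in range(m))
    return math.log(1.0 - t) + math.log(integral) / (1.0 - alpha)

for t in (0.0, 0.3, 0.6):
    print(t, renyi_bridge_uniform(t, 2.0))  # decreasing in t
```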
Below, we compare the Renyi entropies of two coherent system lifetimes and their residual lifetimes.
Theorem 3.
Consider a coherent system with IID IFR (DFR) component lifetimes. Then, $H_\alpha(T_t^{1,n}) \leq (\geq)\ H_\alpha(T)$ for all $\alpha > 0$.
Proof. 
We prove the theorem for the case when $X$ is IFR; the proof for the DFR case is similar. Since $X$ is IFR, Theorem 3.B.25 of Shaked and Shanthikumar [36] implies that $X_t \leq_{d} X$; that is, we have
$$ f_t\big(S_t^{-1}(u)\big) \geq f\big(S^{-1}(u)\big), \quad 0 < u < 1, $$
for all $t > 0$. If $\alpha > 1$ ($0 < \alpha < 1$), then we have
$$ \int_0^1 g_V^{\alpha}(u)\, f_t^{\alpha-1}\big(S_t^{-1}(u)\big)\, du \geq (\leq) \int_0^1 g_V^{\alpha}(u)\, f^{\alpha-1}\big(S^{-1}(u)\big)\, du, \quad t > 0. $$
Thus, from Equations (11) and (18), we obtain
$$ H_\alpha(T_t^{1,n}) = \frac{1}{1-\alpha} \log \int_0^1 g_V^{\alpha}(u)\, f_t^{\alpha-1}\big(S_t^{-1}(u)\big)\, du \leq \frac{1}{1-\alpha} \log \int_0^1 g_V^{\alpha}(u)\, f^{\alpha-1}\big(S^{-1}(u)\big)\, du = H_\alpha(T). $$
Therefore, the proof is completed. □
We remark that Theorem 3 reveals that when the component lifetimes are IFR (DFR), the RE of a working coherent system whose components are all alive at time $t$ is less (greater) than the RE of the new system. The next theorem provides a lower bound for the residual RE in terms of the RE of the new system:
Theorem 4.
If $X$ is DFR, then a lower bound for $H_\alpha(T_t^{1,n})$ is given as follows:
$$ H_\alpha(T_t^{1,n}) \geq \log S(t) + H_\alpha(T), $$
for all α > 0 .
Proof. 
Since $X$ is DFR, it is NWU (new worse than used), i.e., $S_t(x) \geq S(x)$ for $x, t \geq 0$. This implies that
$$ S_t^{-1}(u) + t \geq S^{-1}(u), \quad t \geq 0, $$
for all $0 < u < 1$. On the other hand, if $X$ is DFR, then the PDF $f$ is decreasing, which implies that
$$ f^{\alpha-1}\big(S_t^{-1}(u) + t\big) \leq (\geq)\ f^{\alpha-1}\big(S^{-1}(u)\big), \quad 0 < u < 1, $$
for all $\alpha > 1$ ($0 < \alpha < 1$). From Equation (11), one can conclude that
$$ H_\alpha(T_t^{1,n}) = \log S(t) + \frac{1}{1-\alpha} \log \int_0^1 g_V^{\alpha}(u)\, f^{\alpha-1}\big(S_t^{-1}(u) + t\big)\, du $$
$$ \geq \log S(t) + \frac{1}{1-\alpha} \log \int_0^1 g_V^{\alpha}(u)\, f^{\alpha-1}\big(S^{-1}(u)\big)\, du = \log S(t) + H_\alpha(T), $$
for all $\alpha > 0$, and this completes the proof. □

3. Renyi Entropy Comparison

In this section, we are concerned with the partial ordering (i.e., an ordering that is reflexive, transitive, and antisymmetric) of the conditional residual lifetimes of two coherent systems on the basis of their uncertainties. We report a number of results for ordering two coherent systems based on existing orderings between their component lifetimes and their signature vectors. In the next result, we compare the entropies of the residual lifetimes of two coherent systems with the same structure:
Theorem 5.
Let $T_t^{X,1,n} = [T - t \mid X_{1:n} > t]$ and $T_t^{Y,1,n} = [T - t \mid Y_{1:n} > t]$ denote the RLs of two coherent systems with matching signatures and $n$ IID component lifetimes $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_n$ from CDFs $F$ and $G$, respectively. If $X \leq_{d} Y$, and $X$ or $Y$ is IFR, then $H_\alpha(T_t^{X,1,n}) \leq H_\alpha(T_t^{Y,1,n})$ for all $\alpha > 0$.
Proof. 
As a result of the relation in Equation (11), it is sufficient to demonstrate that $X_t \leq_{d} Y_t$. Due to the ordering relation $X \leq_{d} Y$ and the assumption that $X$ or $Y$ is IFR, the proof of Theorem 5 of Ebrahimi and Kirmani [40] shows that $X_t \leq_{d} Y_t$, and this concludes the proof. □
Example 2.
Let us consider two coherent systems with residual lifetimes $T_t^{X,1,4}$ and $T_t^{Y,1,4}$ and the common signature $\mathbf{p} = (1/2, 1/4, 1/4, 0)$. Suppose that $X \sim W(3,1)$ and $Y \sim W(2,1)$, where $W(k,1)$ stands for the Weibull distribution with the SF given in Equation (17). It is easy to see that $X \leq_{d} Y$. Moreover, $X$ and $Y$ are both IFR. Therefore, Theorem 5 yields $H_\alpha(T_t^{X,1,4}) \leq H_\alpha(T_t^{Y,1,4})$ for all $\alpha > 0$. The Renyi entropies of these systems are plotted in Figure 3.
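The comparison in this example can be reproduced numerically. The sketch below (Python, standard library only; $t = 0.5$ and $\alpha = 2$ are arbitrary choices for illustration) evaluates the Theorem 1 integral, using the representation $f_t(S_t^{-1}(u)) = u\,\lambda(S^{-1}(u S(t)))$ from the proof of Theorem 2, which for the Weibull SF gives $u\,k\,(t^k - \log u)^{1-1/k}$:

```python
import math

def beta_fn(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def renyi_residual_weibull(t, alpha, k, p, m=100000):
    """H_alpha(T_t^{1,n}) for Weibull(k) components via the Theorem 1 integral,
    with f_t(S_t^{-1}(u)) = u * k * (t^k - log u)^(1 - 1/k) (midpoint rule)."""
    n = len(p)
    h = 1.0 / m
    total = 0.0
    for j in range(m):
        u = (j + 0.5) * h
        gV = sum(p[i - 1] * (1.0 - u) ** (i - 1) * u ** (n - i) / beta_fn(n - i + 1, i)
                 for i in range(1, n + 1))
        ft = u * k * (t ** k - math.log(u)) ** (1.0 - 1.0 / k)
        total += gV ** alpha * ft ** (alpha - 1.0) * h
    return math.log(total) / (1.0 - alpha)

p = (0.5, 0.25, 0.25, 0.0)
hx = renyi_residual_weibull(0.5, 2.0, 3.0, p)  # X ~ W(3, 1)
hy = renyi_residual_weibull(0.5, 2.0, 2.0, p)  # Y ~ W(2, 1)
print(hx, hy)  # Theorem 5 predicts hx <= hy
```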
In the next result, we analyze the residual REs for two coherent systems having matching component lifetimes and distinct structures:
Theorem 6.
Let $T_{1,t}^{1,n} = [T_1 - t \mid X_{1:n} > t]$ and $T_{2,t}^{1,n} = [T_2 - t \mid X_{1:n} > t]$ signify the RLs of two coherent systems with signature vectors $\mathbf{p}_1$ and $\mathbf{p}_2$, respectively. Suppose that the components of both systems are IID according to the CDF $F$, and let $\mathbf{p}_1 \leq_{lr} \mathbf{p}_2$. Then, we have the following:
(i)
If $f_t(S_t^{-1}(u))$ is increasing in $u$ for all $t > 0$, then $H_\alpha(T_{1,t}^{1,n}) \leq H_\alpha(T_{2,t}^{1,n})$ for all $\alpha > 0$.
(ii)
If $f_t(S_t^{-1}(u))$ is decreasing in $u$ for all $t > 0$, then $H_\alpha(T_{1,t}^{1,n}) \geq H_\alpha(T_{2,t}^{1,n})$ for all $\alpha > 0$.
Proof. 
(i) First, we note that Equation (11) can be reformulated as follows:
$$ e^{(1-\alpha) H_\alpha(T_{i,t}^{1,n})} = \int_0^1 g_{V_i}^{\alpha}(u)\, du \int_0^1 g_{V_i^{\star}}(u) \big[f_t(S_t^{-1}(u))\big]^{\alpha-1}\, du, \quad i = 1, 2, $$
where $V_i^{\star}$ has the PDF
$$ g_{V_i^{\star}}(u) = \frac{g_{V_i}^{\alpha}(u)}{\int_0^1 g_{V_i}^{\alpha}(u)\, du}, \quad 0 < u < 1. $$
The assumption $\mathbf{p}_1 \leq_{lr} \mathbf{p}_2$ implies $V_2 \leq_{lr} V_1$ (note that larger order statistics correspond to smaller values of $u = S_t(\cdot)$), and this gives $V_2^{\star} \leq_{lr} V_1^{\star}$, which means that
$$ \frac{g_{V_1^{\star}}(u)}{g_{V_2^{\star}}(u)} \propto \left( \frac{g_{V_1}(u)}{g_{V_2}(u)} \right)^{\alpha} $$
increases in $u$ for all $\alpha > 0$, and hence $V_2^{\star} \leq_{st} V_1^{\star}$. When $\alpha > 1$ ($0 < \alpha < 1$), we obtain
$$ \int_0^1 g_{V_1^{\star}}(u) \big[f_t(S_t^{-1}(u))\big]^{\alpha-1}\, du \geq (\leq) \int_0^1 g_{V_2^{\star}}(u) \big[f_t(S_t^{-1}(u))\big]^{\alpha-1}\, du, $$
where the inequality in Equation (20) is obtained by noting that $V_2^{\star} \leq_{st} V_1^{\star}$ implies $E[\pi(V_2^{\star})] \leq E[\pi(V_1^{\star})]$ for all increasing functions $\pi$ (with the reversed inequality for decreasing $\pi$). Therefore, the relation in Equation (19) gives
$$ e^{(1-\alpha) H_\alpha(T_{1,t}^{1,n})} \geq (\leq)\; e^{(1-\alpha) H_\alpha(T_{2,t}^{1,n})}, $$
or equivalently, $H_\alpha(T_{1,t}^{1,n}) \leq H_\alpha(T_{2,t}^{1,n})$ for all $\alpha > 0$. Part (ii) can be obtained in a similar way. □
The next example gives an application of Theorem 6:
Example 3.
Let $\mathbf{p}_1 = (1/2, 1/2, 0, 0)$ and $\mathbf{p}_2 = (0, 0, 3/4, 1/4)$ be the signatures of two coherent systems of order $n = 4$, with residual lifetimes $T_{1,t}^{1,4} = [T_1 - t \mid X_{1:4} > t]$ and $T_{2,t}^{1,4} = [T_2 - t \mid X_{1:4} > t]$. Let us consider the Pareto type II distribution with the SF given in Equation (16). After some calculation, one can find
$$ f_t\big(S_t^{-1}(u)\big) = \frac{2 u \sqrt{u}}{1+t}, \quad 0 < u < 1,\ t > 0. $$
Clearly, the above function is increasing in $u$ for all $t > 0$. Hence, by Theorem 6, it holds that $H_\alpha(T_{1,t}^{1,4}) \leq H_\alpha(T_{2,t}^{1,4})$ for all $\alpha > 0$.
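The ordering in this example can be verified numerically. The sketch below (Python, standard library only; $t = 1$ and $\alpha = 2$ are arbitrary choices for illustration) plugs $f_t(S_t^{-1}(u)) = 2u\sqrt{u}/(1+t)$ into the Theorem 1 integral for both signatures:

```python
import math

def beta_fn(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def renyi_residual_pareto(t, alpha, p, m=100000):
    """H_alpha(T_t^{1,n}) for Pareto type II components, S(t) = (1+t)^(-2),
    where f_t(S_t^{-1}(u)) = 2*u*sqrt(u)/(1+t) (midpoint rule)."""
    n = len(p)
    h = 1.0 / m
    total = 0.0
    for k in range(m):
        u = (k + 0.5) * h
        gV = sum(p[i - 1] * (1.0 - u) ** (i - 1) * u ** (n - i) / beta_fn(n - i + 1, i)
                 for i in range(1, n + 1))
        ft = 2.0 * u * math.sqrt(u) / (1.0 + t)
        total += gV ** alpha * ft ** (alpha - 1.0) * h
    return math.log(total) / (1.0 - alpha)

p1 = (0.5, 0.5, 0.0, 0.0)
p2 = (0.0, 0.0, 0.75, 0.25)
h1 = renyi_residual_pareto(1.0, 2.0, p1)
h2 = renyi_residual_pareto(1.0, 2.0, p2)
print(h1, h2)  # Theorem 6(i) predicts h1 <= h2
```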

4. Bounds for the Renyi Entropy of the Residual Lifetime

When the component lifetime distributions are not precisely known, or when the number of components and the complexity of the system are large, it is not easy to compute $H_\alpha(T_t^{1,n})$ for a coherent system. This situation is frequently encountered in practice. Bounds on the residual RE can then be useful to approximate the lifetime behavior of a coherent system under such circumstances. Toomaj and Doostparast [33,34] obtained bounds on the entropy induced by the lifetime of a new coherent system. Recently, Toomaj et al. [35] obtained bounds for the entropy of a coherent system under the assumption that all components of the system are alive. In the following theorem, we provide bounds on the residual RE of the lifetime of a coherent system in terms of the residual RE of the parent distribution, $H_\alpha(X_t)$:
Theorem 7.
Let $T_t^{1,n} = [T - t \mid X_{1:n} > t]$ represent the RL of a coherent system consisting of $n$ IID component lifetimes with a common CDF $F$ and signature $\mathbf{p} = (p_1, \ldots, p_n)$. Let $H_\alpha(T_t^{1,n}) < \infty$ for all $\alpha > 0$. It holds that
$$ H_\alpha(T_t^{1,n}) \geq (\leq)\; \frac{\alpha}{1-\alpha} \log B_n(\mathbf{p}) + H_\alpha(X_t), $$
for $\alpha > 1$ ($0 < \alpha < 1$), where $B_n(\mathbf{p}) = \sum_{i=1}^{n} p_i\, g_i(p_i^{\ast})$ and $p_i^{\ast} = \frac{n-i}{n-1}$.
Proof. 
It is easy to see that the mode of the beta distribution with parameters $n-i+1$ and $i$ is $p_i^{\ast} = \frac{n-i}{n-1}$. Therefore, we obtain
$$ g_V(v) \leq \sum_{i=1}^{n} p_i\, g_i(p_i^{\ast}) = B_n(\mathbf{p}), \quad 0 < v < 1. $$
Thus, for $\alpha > 1$ ($0 < \alpha < 1$), we have
$$ H_\alpha(T_t^{1,n}) = \frac{1}{1-\alpha} \log \int_0^1 g_V^{\alpha}(v) \big[f_t(S_t^{-1}(v))\big]^{\alpha-1}\, dv \geq (\leq)\; \frac{1}{1-\alpha} \log \int_0^1 B_n^{\alpha}(\mathbf{p}) \big[f_t(S_t^{-1}(v))\big]^{\alpha-1}\, dv = \frac{\alpha}{1-\alpha} \log B_n(\mathbf{p}) + H_\alpha(X_t). $$
The last equality is obtained from Equation (5), from which the desired result follows. □
The bounds in Equation (21) are quite helpful when the number of components is large or the structure of the system is complicated. Next, a lower bound is obtained from the properties of the RE measure:
Theorem 8.
Under the requirements of Theorem 7, we have
$$ H_\alpha(T_t^{1,n}) \geq H_\alpha^L(T_t^{1,n}), $$
where $H_\alpha^L(T_t^{1,n}) = \frac{1}{1-\alpha} \log \sum_{i=1}^{n} p_i \int_0^{\infty} f_{T_t^{1,i,n}}^{\alpha}(x)\, dx$ for all $\alpha > 0$.
Proof. 
Jensen’s inequality applied to the function $x^{\alpha}$ (which is concave (convex) for $0 < \alpha < 1$ ($\alpha > 1$)) yields
$$ \left( \sum_{i=1}^{n} p_i\, f_{T_t^{1,i,n}}(x) \right)^{\alpha} \geq (\leq) \sum_{i=1}^{n} p_i\, f_{T_t^{1,i,n}}^{\alpha}(x), \quad x > 0. $$
Thus, we obtain
$$ \int_0^{\infty} f_{T_t^{1,n}}^{\alpha}(x)\, dx \geq (\leq) \sum_{i=1}^{n} p_i \int_0^{\infty} f_{T_t^{1,i,n}}^{\alpha}(x)\, dx. $$
The inequality above follows from the linearity of the integration operator. Since $1-\alpha > 0$ ($1-\alpha < 0$), taking logarithms and multiplying both sides of Equation (24) by $1/(1-\alpha)$ yields the intended result. □
We note that the inequality in Equation (23) becomes an equality for $i$-out-of-$n$ systems, in the sense that if $p_j = 0$ for $j \neq i$ and $p_j = 1$ for $j = i$, then $H_\alpha(T_t^{1,n}) = H_\alpha(T_t^{1,i,n})$. As the lower bounds for $\alpha > 1$ in both Theorems 7 and 8 can be evaluated, we can use the maximum of the two lower bounds:
Example 4.
Let $T_t^{1,3} = [T - t \mid X_{1:3} > t]$ signify the RL of a coherent system with the signature $\mathbf{p} = (1/3, 2/3, 0)$ and $n = 3$ IID component lifetimes uniformly distributed on $[0,1]$. It is easy to verify that $B_3(\mathbf{p}) = 2$. Thus, by Theorem 7, the RE of $T_t^{1,3}$ is bounded for $\alpha > 1$ ($0 < \alpha < 1$) as follows:
$$ H_\alpha(T_t^{1,3}) \geq (\leq)\; \frac{\alpha}{1-\alpha} \log 2 + \log(1-t), \quad 0 < t < 1. $$
Moreover, the lower bound given in Equation (23) can be obtained as follows:
$$ H_\alpha(T_t^{1,3}) \geq \frac{1}{1-\alpha} \log \left( \sum_{i=1}^{3} p_i \int_0^1 g_i^{\alpha}(u)\, du \right) + \log(1-t), \quad 0 < t < 1, $$
for all $\alpha > 0$. For the uniformly distributed component lifetimes, we computed the exact value of $H_\alpha(T_t^{1,3})$ from Equation (11), the bound given in Equation (25) (dashed line), and the bound derived in Equation (26) (dotted line). The results are displayed in Figure 4. As we can see, for $\alpha > 1$ the lower bound in Equation (26) (dotted line) is sharper than the lower bound given by Equation (25).
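The comparison of the exact value with the two lower bounds can be reproduced numerically. The sketch below (Python, standard library only; $t = 0.4$ and $\alpha = 2$ are arbitrary choices for illustration) evaluates all three quantities for the system of this example:

```python
import math

def beta_fn(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

n, alpha, t = 3, 2.0, 0.4
p = (1.0 / 3.0, 2.0 / 3.0, 0.0)

def g(i, u):
    """Beta(n-i+1, i) density of the transformed i-out-of-n residual lifetime."""
    return (1.0 - u) ** (i - 1) * u ** (n - i) / beta_fn(n - i + 1, i)

def integrate01(fn, m=100000):
    """Midpoint rule on (0, 1)."""
    h = 1.0 / m
    return sum(fn((k + 0.5) * h) * h for k in range(m))

# exact value for uniform [0,1] components
exact = math.log(1.0 - t) + math.log(
    integrate01(lambda u: sum(p[i - 1] * g(i, u) for i in range(1, n + 1)) ** alpha)
) / (1.0 - alpha)

# Theorem 7 lower bound (alpha > 1), with B_3(p) = 2
bound7 = (alpha / (1.0 - alpha)) * math.log(2.0) + math.log(1.0 - t)

# Theorem 8 lower bound
s = sum(p[i - 1] * integrate01(lambda u, i=i: g(i, u) ** alpha) for i in range(1, n + 1))
bound8 = math.log(s) / (1.0 - alpha) + math.log(1.0 - t)

print(exact, bound8, bound7)  # Theorem 8's bound should be the sharper one
```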

5. Conclusions

Clearly, a system that operates longer and has less uncertainty about its remaining lifetime is a better system, because its lifetime can be predicted more accurately. This fact makes it obvious that system uncertainty is an important aspect of systems in engineering. To this end, a useful criterion is to measure the predictability of the lifetime of a system by using the RE as an extension of the Shannon entropy. In this work, we derived a simple expression, in terms of the RE, for the uncertainty induced by the lifetime of a system under the assumption that all system components are functioning at time t (which can be the age of the system). Several stochastic properties of the proposed measure were also discussed. Some bounds were presented, and some partial ordering behaviors of the RLs of two coherent systems were studied on the basis of their REs, appealing to the concept of the system signature. The results were examined in several examples to illustrate the concepts presented in this paper. In reliability engineering, there are more complex systems, including coherent systems whose lifetimes are given in terms of successive failure times of the components assembled to form the system. To clarify the contribution of this work, it is worth mentioning that the properties of the Shannon differential entropy of working used systems have been studied in the literature (see, for example, the work of Toomaj et al. [35]). Since the Renyi entropy is a generalization of the Shannon differential entropy, our study extends the earlier results of Toomaj et al. [35], which can be considered the novelty of this work. We believe that the present results convey more information about the informational properties of functioning systems than the application of the Shannon differential entropy alone.
Moreover, the novel bound given in Theorem 7 in Section 4 is straightforward and easy to compute since it depends on the known beta distribution.
In future work, we can focus on the RE properties of such systems to provide a new framework and open a new window on the predictability of the lifetime of a coherent system from a novel perspective.

Author Contributions

Conceptualization, M.M.; methodology, M.M.; software, M.K.; validation, M.M.; formal analysis, M.K.; investigation, M.M.; resources, M.S.; writing—original draft preparation, M.M.; writing—review and editing, M.K.; visualization, M.S.; supervision, M.S.; project administration, M.S.; funding acquisition, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

The first author was funded by the Natural Sciences and Engineering Research Council of Canada (No. 06536-2018). The research of the second and third authors was funded by Researchers Supporting Project number (RSP2023R464) of King Saud University in Riyadh, Saudi Arabia.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors are thankful to three anonymous reviewers for their constructive comments. The first author acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada (No. 06536-2018). The second and third authors acknowledge financial support from the Researchers Supporting Project number (RSP2023R464), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Clausius, R. The Mechanical Theory of Heat; MacMillan and Co.: London, UK, 1879.
2. Basaran, C. Introduction to Unified Mechanics Theory with Applications; Springer Nature: Berlin/Heidelberg, Germany, 2023.
3. Lee, H.W.; Fakhri, H.; Ranade, R.; Basaran, C.; Egner, H.; Lipski, A.; Piotrowski, M.; Mroziński, S. Modeling fatigue of pre-corroded body-centered cubic metals with unified mechanics theory. Mater. Des. 2022, 224, 111383.
4. Lee, H.W.; Basaran, C.; Egner, H.; Lipski, A.; Piotrowski, M.; Mroziński, S.; Rao, C.L. Modeling ultrasonic vibration fatigue with unified mechanics theory. Int. J. Solids Struct. 2022, 236, 111313.
5. Lee, H.W.; Basaran, C. A review of damage, void evolution, and fatigue life prediction models. Metals 2021, 11, 609.
6. Temfack, T.; Basaran, C. Experimental verification of thermodynamic fatigue life prediction model using entropy as damage metric. Mater. Sci. Technol. 2015, 31, 1627–1632.
7. Obregón, O. Superstatistics and gravitation. Entropy 2010, 12, 2067.
8. Fuentes, J.; López, J.L.; Obregón, O. Generalized Fokker-Planck equations derived from nonextensive entropies asymptotically equivalent to Boltzmann-Gibbs. Phys. Rev. E 2020, 102, 012118.
9. Fuentes, J.; Obregón, O. Optimisation of information processes using non-extensive entropies without parameters. Int. J. Inf. Coding Theory 2022, 6, 35–51.
10. Kaniadakis, G.; Lissia, M.; Rapisarda, A. Non Extensive Thermodynamics and Its Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2002; Volume 305.
11. Sharma, B.; Mittal, D. New nonadditive measures of entropy for discrete probability distributions. J. Math. Sci. 1975, 10, 28–40.
12. Rényi, A. On Measures of Information and Entropy. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; pp. 547–561.
13. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
14. Campbell, L. A coding theorem and Rényi’s entropy. Inf. Control 1965, 8, 423–425.
15. Csiszar, I. Generalized cutoff rates and Rényi’s information measures. IEEE Trans. Inf. Theory 1995, 41, 26–34.
16. Goold, J.; Huber, M.; Riera, A.; Rio, L.D.; Skrzypczyk, P. The role of quantum information in thermodynamics—A topical review. J. Phys. A Math. Theor. 2016, 49, 1–34.
17. Cui, J.; Gu, M.; Kwek, L.C.; Santos, M.F.; Fan, H.; Vedral, V. Quantum phases with differing computational power. Nat. Commun. 2012, 3, 812.
18. Adesso, G.; Girolami, D.; Serafini, A. Measuring Gaussian Quantum Information and Correlations Using the Rényi Entropy of Order 2. Phys. Rev. Lett. 2012, 109, 190502.
19. Srdinšek, M.; Casula, M.; Vuilleumier, R. Quantum Rényi entropy by optimal thermodynamic integration paths. Phys. Rev. Res. 2022, 4, L032002.
20. Berta, M.; Seshadreesan, K.P.; Wilde, M.M. Rényi generalizations of quantum information measures. Phys. Rev. A 2015, 91, 022333.
21. Beck, C. Upper and lower bounds on the Renyi dimensions and the uniformity of multifractals. Phys. D Nonlinear Phenom. 1990, 41, 67–78.
22. Bashkirov, A.G. Rényi entropy as a statistical entropy for complex systems. Theor. Math. Phys. 2006, 149, 1559–1573.
23. Eslami-Giski, Z.; Ebrahimzadeh, A.; Markechová, D. Rényi entropy of fuzzy dynamical systems. Chaos Solitons Fractals 2019, 123, 244–253.
24. Fuentes, J. Quantum control operations with fuzzy evolution trajectories based on polyharmonic magnetic fields. Sci. Rep. 2020, 10, 22256.
25. Hughes, M.S.; Marsh, J.N.; Arbeit, J.M.; Neumann, R.G.; Fuhrhop, R.W.; Wallace, K.D.; Thomas, L.; Smith, J.; Agyem, K.; Lanza, G.M.; et al. Application of Rényi entropy for ultrasonic molecular imaging. J. Acoust. Soc. Am. 2009, 125, 3141–3145.
26. Franchini, F.; Its, A.R.; Korepin, V.E. Renyi entropy of the XY spin chain. J. Phys. A Math. Theor. 2007, 41, 025302.
27. Chavanis, P.H. Nonlinear mean field Fokker-Planck equations. Application to the chemotaxis of biological populations. Eur. Phys. J. B 2008, 62, 179–208.
28. Ebrahimi, N.; Pellerey, F. New partial ordering of survival functions based on the notion of uncertainty. J. Appl. Probab. 1995, 32, 202–211.
29. Asadi, M.; Ebrahimi, N.; Soofi, E.S. Dynamic generalized information measures. Stat. Probab. Lett. 2005, 71, 85–98.
30. Gupta, R.D.; Nanda, A.K. K- and L-entropies and relative entropies of distributions. J. Stat. Theory Appl. 2002, 1, 177–190.
31. Nanda, A.K.; Paul, P. Some results on generalized residual entropy. Inf. Sci. 2006, 176, 27–47.
32. Ebrahimi, N. How to measure uncertainty in the residual life time distribution. Sankhyā Indian J. Stat. Ser. A 1996, 58, 48–56.
33. Toomaj, A.; Doostparast, M. A note on signature-based expressions for the entropy of mixed r-out-of-n systems. Nav. Res. Logist. 2014, 61, 202–206.
34. Toomaj, A. Renyi entropy properties of mixed systems. Commun. Stat.-Theory Methods 2017, 46, 906–916.
35. Toomaj, A.; Chahkandi, M.; Balakrishnan, N. On the information properties of working used systems using dynamic signature. Appl. Stoch. Model. Bus. Ind. 2021, 37, 318–341.
36. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer Science and Business Media: New York, NY, USA, 2007.
37. Samaniego, F.J. System Signatures and Their Applications in Engineering Reliability; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2007; Volume 110.
38. Khaledi, B.-E.; Shaked, M. Ordering conditional lifetimes of coherent systems. J. Stat. Plan. Inference 2007, 137, 1173–1184.
39. Abbasnejad, M.; Arghami, N.R. Renyi entropy properties of order statistics. Commun. Stat.-Theory Methods 2010, 40, 40–52.
40. Ebrahimi, N.; Kirmani, S.N.U.A. Some results on ordering of survival functions through uncertainty. Stat. Probab. Lett. 1996, 29, 167–176.
Figure 1. The exact values of $H_{\alpha}(T_{t}^{1,5})$ with respect to t for the Weibull distribution, for α = 0.2 and α = 2, when 0 < k < 1.
Figure 2. The exact values of $H_{\alpha}(T_{t}^{1,5})$ with respect to t for the Weibull distribution, for α = 0.2 and α = 2, when k ≥ 1.
Figure 3. The exact values of $H_{\alpha}(T_{t}^{X,1,4})$ (blue) and $H_{\alpha}(T_{t}^{Y,1,4})$ (red) with respect to t, for α = 0.2 and α = 2.
Figure 4. Exact value of $H_{\alpha}(T_{t}^{1,3})$ (solid line) and the corresponding lower bounds (Equation (21), dashed line; Equation (25), dotted line) for the standard uniform distribution, with respect to time t.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
