Residual Tsallis Entropy and Record Values: Some New Insights

Recently, the uncertainty aspects of record values have been increasingly studied in the literature. In this paper, we study the residual Tsallis entropy of upper record values arising from random samples. In the continuous case, we express the residual Tsallis entropy of upper record values from a general distribution in terms of the residual Tsallis entropy of upper record values coming from a uniform distribution. We also obtain a lower bound on the residual Tsallis entropy of upper record values originating from an arbitrary continuous probability distribution, and we discuss the monotonic properties of the residual Tsallis entropy of upper record values.


Introduction
The entropy measure proposed by Shannon [1] has many applications in various fields such as information science, physics, probability, statistics, communication theory, and economics. Let X represent the lifetime of a unit with cumulative distribution function (CDF) K(x) and probability density function (PDF) k(x). The Shannon differential entropy of X is defined by H(X) = −E[log k(X)], provided the expectation exists; it quantifies the uncertainty of a random phenomenon. The Tsallis entropy, initiated by Tsallis [2] (see also [3]), is a generalization of the Boltzmann–Gibbs statistics. Very recently, Tsallis and Borges [4] showed that, depending on the initial condition and the size of the time series, time reversal can enable the recovery, within a small error bar, of past information when the Lyapunov exponent is non-positive, notably at the Feigenbaum point (edge of chaos), where weak chaos is known to exist. The practical usefulness of time reversal has very recently been exhibited by decreasing error bars in the predictions of strong earthquakes [5,6]. For a non-negative continuous random variable (RV) X with PDF k(x), the Tsallis entropy of order α is given by

H_α(X) = ω(α) ( ∫_0^∞ k^α(x) dx − 1 ) = ω(α) ( ∫_0^1 k^{α−1}(K^{−1}(u)) du − 1 ),

where ω(α) = 1/(1 − α) for all α > 0, α ≠ 1, and where K^{−1}(u) = inf{x : K(x) ≥ u}, for u ∈ [0, 1], denotes the quantile function. The Tsallis entropy can be negative for some values of α, but it is non-negative for appropriately chosen α. It converges to the Shannon entropy as α approaches 1, i.e., H(X) = lim_{α→1} H_α(X). The Shannon differential entropy is additive: for independent RVs X and Y, the entropy of their joint distribution equals the sum of their entropies, i.e., H(X, Y) = H(X) + H(Y). The Tsallis entropy does not share this property and instead follows the non-additive rule H_α(X, Y) = H_α(X) + H_α(Y) + (1 − α) H_α(X) H_α(Y).
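To make the definition concrete, the Tsallis entropy of an exponential lifetime can be checked against direct numerical integration; the following is a minimal sketch (the function names and the choice λ = 1 are illustrative, not from the paper):

```python
import math

def tsallis_exponential(alpha, lam):
    """Closed-form Tsallis entropy of an Exp(lam) lifetime: for
    k(x) = lam*exp(-lam*x), int k^alpha dx = lam**(alpha-1)/alpha, so
    H_alpha(X) = omega(alpha) * (lam**(alpha-1)/alpha - 1)."""
    return (lam ** (alpha - 1) / alpha - 1.0) / (1.0 - alpha)

def tsallis_numeric(alpha, lam, upper=60.0, steps=100_000):
    """Midpoint-rule evaluation of omega(alpha)*(int_0^inf k^alpha dx - 1)."""
    h = upper / steps
    total = sum((lam * math.exp(-lam * ((i + 0.5) * h))) ** alpha * h
                for i in range(steps))
    return (total - 1.0) / (1.0 - alpha)

print(tsallis_exponential(2.0, 1.0))    # order-2 Tsallis entropy of Exp(1)
print(tsallis_exponential(1.001, 1.0))  # near the Shannon value 1 - log(1) = 1
```

The second print illustrates the limit H(X) = lim_{α→1} H_α(X) mentioned above.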
The uncertainty in the lifetime of a new system can be measured by H_α(X), where X is the RV representing the lifetime. However, in some situations, operators already have some information about the age of the system. For example, they may know that the system is still functioning at time t and want to quantify the uncertainty in the remaining lifetime X_t = [X − t | X > t]. In such cases, H_α(X) is not appropriate. The PDF of X_t is k_t(x) = k(x + t)/K̄(t), where both x and t are positive and K̄(t) = P(X > t) is the survival function of X. Thus, the residual Tsallis entropy (RTE) is defined as

H_α(X; t) = ω(α) ( ∫_t^∞ ( k(x)/K̄(t) )^α dx − 1 ),  α > 0, α ≠ 1.

The characteristics, extensions, and uses of H_α(X; t) have been deeply researched by a number of scholars, including Asadi et al. [7], Nanda and Paul [8], and Zhang [9], along with further studies referenced in their work.
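The residual version can be explored numerically as well; the sketch below (illustrative choices α = 2, λ = 1) evaluates H_α(X; t) for an exponential lifetime at several ages t, where memorylessness makes the RTE constant in t:

```python
import math

def rte_exponential_numeric(alpha, t, lam, upper=60.0, steps=100_000):
    """Quadrature of H_alpha(X; t) = omega(alpha)*(int_t^inf (k(x)/S(t))^alpha dx - 1)
    for X ~ Exp(lam), whose survival function is S(t) = exp(-lam*t)."""
    surv_t = math.exp(-lam * t)
    h = (upper - t) / steps
    total = 0.0
    for i in range(steps):
        x = t + (i + 0.5) * h
        total += (lam * math.exp(-lam * x) / surv_t) ** alpha * h
    return (total - 1.0) / (1.0 - alpha)

# Memorylessness of the exponential law: the RTE does not depend on t.
print([round(rte_exponential_numeric(2.0, t, 1.0), 4) for t in (0.0, 1.0, 3.0)])
```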
Records have applications in various fields, such as reliability engineering, insurance science, and others. An illustrative example can be found in reliability theory. Consider a system that requires k of its n components to function successfully. The lifetime of such a system corresponds to the (n − k + 1)-th order statistic from a sample of size n. As a result, the n-th upper record value can be interpreted as the operating lifetime of a system that requires (n − k + 1) components to function successfully. In insurance science, the second or third highest values are relevant for some types of non-life insurance claims; see, e.g., Kamps [10] for more details. The information properties of record values have been studied by many researchers. Baratpour et al. [11] explore the information properties of record values using the Shannon differential entropy. Kumar [12] investigates the Tsallis entropy of k-record statistics in various continuous probability models and provides a characterization result for the Tsallis entropy of k-record values; that paper also examines the residual Tsallis entropy of k-record statistics in a summary context. Drawing on the information measures of record data obtained from independent and identically distributed continuous RVs, Ahmadi [13] offers new characterizations for continuous symmetric distributions based on the cumulative residual (past) entropy, Shannon entropy, Rényi entropy, Tsallis entropy, and common Kerridge inaccuracy measures. Xiong et al.
[14] present the symmetric property of the extropy of record values and provide characterizations of exponential distributions. They also propose a new test for the symmetry of continuous distributions based on a characterization result, and their Monte Carlo simulation results investigate a wide range of alternative asymmetric distributions. In a recent study, Jose and Sathar [15] examine the residual extropy of k-records derived from any continuous distribution, relating it to the residual extropy of k-records derived from a uniform distribution. They establish lower bounds for the residual extropy of upper and lower k-records arising from any continuous probability distribution, and they discuss the monotone property of the residual extropy for both upper and lower k-records. Further contributions in this area can be found in the papers of Paul et al. [16], Cali et al. [17], Zamani et al. [18], Gupta et al. [19], Paula et al. [20], Zarezadeh [21], Baratpour et al. [22], and Qiu [23]. These papers, along with their references, provide additional insights and developments related to the topic.
This article is concerned with the residual Tsallis entropy of upper record values obtained from continuous distributions. The uniform distribution is chosen as a benchmark because of its simplicity and the convenient form of its density function. It also serves as a versatile tool for modeling other distributions by applying appropriate transformations. Thus, by examining the residual Tsallis entropy of upper record values from the uniform distribution, we can gain insight into the entropy behavior of upper record values from any continuous distribution. Our main insight is that the RTE of upper record values from any continuous distribution can be represented by the RTE of upper record values from the uniform distribution over the interval (0, 1). To denote the uniform distribution over this interval, we use the notation U(0, 1).
In the present study, we comprehensively investigate the residual Tsallis entropy of upper record values obtained from continuous probability distributions. The following overview describes the structure of the paper and its main contributions. In Section 2, we introduce the residual Tsallis entropy of upper record values derived from continuous probability distributions and present a rigorous derivation of an expression for it. Moreover, we establish a lower bound for this entropy measure, which provides valuable insight into the minimum achievable entropy of upper record values. Furthermore, we investigate how the RTE of record values changes in view of the aging behavior of their components, and we show the monotonic behavior of the residual Tsallis entropy of the n-th upper record value as a function of n. Section 3 provides an expression for the Tsallis entropy of residual records, conditioned on the knowledge that all observed stresses exceed t > 0. Understanding the monotonic property enhances our understanding of entropy dynamics and provides deeper insights into system reliability and performance characteristics. In Section 4, we present a comprehensive summary of the overall results of our study, highlighting the main contributions, implications, and applications of the derived expressions, the lower bound, and the monotone property of the residual Tsallis entropy.
We will use some notation throughout the paper. The order relations "≤_st", "≤_hr", "≤_lr", and "≤_d" represent, respectively, the usual stochastic order, the hazard rate order, the likelihood ratio order, and the dispersive order; for a detailed discussion of these stochastic orders, the reader is referred to Shaked and Shanthikumar [24].

Residual Tsallis Entropy of Record Values
Let us consider a technical system that is subjected to shocks, such as voltage spikes. The shocks can be modeled as a sequence of independent and identically distributed (i.i.d.) RVs {X_i, i ≥ 1} with common continuous CDF K, PDF k, and survival function K̄(t) = 1 − K(t). The shocks represent the stresses on the system at different times, and we are interested in the record statistics (the values of the highest stresses observed so far) of this sequence. Let X_{i:n} denote the i-th order statistic from the first n observations. Then, we define the sequences of upper record times T_n (n ≥ 1) and upper record values U_n, respectively, as

T_1 = 1,  T_{n+1} = min{ j > T_n : X_j > X_{T_n} },  and  U_n = X_{T_n},  n ≥ 1.

It is well known that the PDF and the survival function of U_n, denoted by k_n(x) and K̄_n(x), respectively, are given by

k_n(x) = [−log K̄(x)]^{n−1} k(x) / Γ(n),  x > 0,

and

K̄_n(x) = Γ(n, −log K̄(x)) / Γ(n),  x > 0,

where Γ(a, x) = ∫_x^∞ u^{a−1} e^{−u} du is known as the (upper) incomplete gamma function (see, e.g., [25]). We use the notation V ∼ Γ_t(a, b) to indicate that the RV V has a truncated gamma distribution with the following PDF

f_V(x) = b^a x^{a−1} e^{−bx} / Γ(a, bt),  x > t,

where a > 0 and b > 0. In the remainder of this section, we concentrate on examining the residual Tsallis entropy of U_n, considered as a measure quantifying the uncertainty induced by the density of [U_n − t | U_n > t] about the system's residual lifetime and its predictability. To facilitate the computations, we introduce a lemma that expresses the RTE of record values from a uniform distribution through the incomplete gamma function. This relationship is crucial from a practical point of view and allows for a more convenient computation of the RTE. The proof of this lemma is omitted here since it involves only simple calculations.
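For integer n, the survival function of U_n reduces to P(U_n > x) = e^{−z} Σ_{i<n} z^i/i! with z = −log K̄(x), which can be checked by simulation; a small sketch under illustrative assumptions (standard exponential shocks, seeded RNG; `record_values` and `record_survival` are hypothetical helper names):

```python
import math
import random

def record_values(seq):
    """Extract the upper record values from a sequence of observations."""
    records, current = [], -math.inf
    for x in seq:
        if x > current:
            current = x
            records.append(x)
    return records

def record_survival(n, x, surv):
    """P(U_n > x) = Gamma(n, z)/Gamma(n) with z = -log surv(x); for
    integer n this equals exp(-z) * sum_{i<n} z**i / i!."""
    z = -math.log(surv(x))
    return math.exp(-z) * sum(z ** i / math.factorial(i) for i in range(n))

random.seed(1)
trials, hits = 2000, 0
for _ in range(trials):
    seq = [random.expovariate(1.0) for _ in range(1000)]
    recs = record_values(seq)
    if len(recs) >= 2 and recs[1] > 2.0:   # second upper record exceeds 2
        hits += 1
print(hits / trials, record_survival(2, 2.0, lambda x: math.exp(-x)))
```

The empirical frequency should be close to the theoretical value 3e^{−2} ≈ 0.406.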

Lemma 1.
Let {U_i, i ≥ 1} be a sequence of i.i.d. RVs from the uniform distribution U(0, 1), and let U_n denote the n-th upper record value of the sequence {U_i}. Then, for all α > 0, α ≠ 1,

H_α(U_n; t) = ω(α) [ Γ(α(n − 1) + 1, −log(1 − t)) / Γ^α(n, −log(1 − t)) − 1 ],  0 < t < 1.
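The closed form H_α(U_n; t) = ω(α)[Γ(α(n − 1) + 1, −log(1 − t))/Γ^α(n, −log(1 − t)) − 1] (our reconstruction of the lemma's display) can be verified against direct integration of the uniform record density; a sketch with the illustrative choices α = 2, n = 2, for which α(n − 1) + 1 is an integer and the incomplete gamma reduces to a finite sum:

```python
import math

def gamma_inc_int(a, z):
    """Upper incomplete gamma Gamma(a, z) for integer a >= 1:
    Gamma(a, z) = (a-1)! * exp(-z) * sum_{i<a} z**i / i!."""
    return math.factorial(a - 1) * math.exp(-z) * sum(
        z ** i / math.factorial(i) for i in range(a))

def rte_uniform_record(alpha, n, t):
    """Lemma 1 closed form; alpha must make alpha*(n-1)+1 an integer here."""
    z = -math.log(1.0 - t)
    a = int(alpha * (n - 1) + 1)
    ratio = gamma_inc_int(a, z) / gamma_inc_int(n, z) ** alpha
    return (ratio - 1.0) / (1.0 - alpha)

def rte_uniform_record_numeric(alpha, n, t, steps=200_000):
    """Quadrature of omega(alpha)*(int_t^1 (k_n(x)/S_n(t))**alpha dx - 1) with
    k_n(x) = (-log(1-x))**(n-1)/(n-1)! for records from U(0,1)."""
    z = -math.log(1.0 - t)
    surv = math.exp(-z) * sum(z ** i / math.factorial(i) for i in range(n))
    h = (1.0 - t) / steps
    total = 0.0
    for i in range(steps):
        x = t + (i + 0.5) * h
        kn = (-math.log(1.0 - x)) ** (n - 1) / math.factorial(n - 1)
        total += (kn / surv) ** alpha * h
    return (total - 1.0) / (1.0 - alpha)

print(rte_uniform_record(2, 2, 0.3), rte_uniform_record_numeric(2, 2, 0.3))
```

Note that the value is negative here, which is possible for Tsallis entropies of order α > 1.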
By leveraging this lemma, researchers and practitioners can readily compute the RTE of record values from a uniform distribution using the well-known incomplete gamma function. This computational simplification enhances the applicability and usability of the RTE in various contexts. In Figure 1, we present the plot of H_α(U_n; t) for values of α = 0.2 and α = 2 and values of n = 2, . . ., 6. The upcoming theorem establishes a relationship between the RTE of record values U_n and the RTE of record values from a uniform distribution.

Theorem 1. Let {X_i, i ≥ 1} be a sequence of i.i.d. RVs with CDF K and PDF k, and let U_n denote the n-th upper record value of the sequence {X_i}. Then, the residual Tsallis entropy of U_n, for all α > 0, α ≠ 1, is formulated as

H_α(U_n; t) = ω(α) [ ( Γ(α(n − 1) + 1, −log K̄(t)) / Γ^α(n, −log K̄(t)) ) E[ k^{α−1}( K^{−1}(1 − e^{−W_n}) ) ] − 1 ],

where W_n ∼ Γ_{−log K̄(t)}(α(n − 1) + 1, 1).

Proof. By employing the transformation u = −log K̄(x), we can utilize Equations (2), (4), and (5) to derive

∫_t^∞ k_n^α(x) dx = (1/Γ^α(n)) ∫_{−log K̄(t)}^∞ u^{α(n−1)} e^{−u} k^{α−1}( K^{−1}(1 − e^{−u}) ) du = ( Γ(α(n − 1) + 1, −log K̄(t)) / Γ^α(n) ) E[ k^{α−1}( K^{−1}(1 − e^{−W_n}) ) ].

Dividing by K̄_n^α(t) = Γ^α(n, −log K̄(t))/Γ^α(n) gives the stated expression; moreover, by Lemma 1 the factor Γ(α(n − 1) + 1, −log K̄(t))/Γ^α(n, −log K̄(t)) equals 1 + (1 − α) H_α(U_n^*; K(t)), where U_n^* denotes the n-th upper record from U(0, 1). Hence, the proof is completed.
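Theorem 1 can be sanity-checked in the exponential case, where K^{−1}(1 − e^{−u}) = u/λ and the expression reduces to the closed form quoted later in the Conclusions, ω(α)[λ^{α−1} Γ(α(n − 1) + 1, αλt)/(α^{α(n−1)+1} Γ^α(n, λt)) − 1]; a sketch with illustrative parameters (α = 2, n = 2, λ = 1, t = 0.5):

```python
import math

def gamma_inc(a, z, steps=200_000, upper=80.0):
    """Upper incomplete gamma Gamma(a, z) by midpoint quadrature (a > 0)."""
    h = (upper - z) / steps
    total = 0.0
    for i in range(steps):
        u = z + (i + 0.5) * h
        total += u ** (a - 1) * math.exp(-u) * h
    return total

def rte_exp_record_closed(alpha, n, lam, t):
    """Closed form for records from Exp(lam):
    omega(a)[lam^(a-1)*Gamma(a(n-1)+1, a*lam*t)/(a^(a(n-1)+1)*Gamma(n, lam*t)^a) - 1]."""
    a = alpha * (n - 1) + 1
    ratio = lam ** (alpha - 1) * gamma_inc(a, alpha * lam * t) \
        / (alpha ** a * gamma_inc(n, lam * t) ** alpha)
    return (ratio - 1.0) / (1.0 - alpha)

def rte_exp_record_numeric(alpha, n, lam, t, steps=200_000, upper=40.0):
    """Quadrature of the record density k_n(x) = (lam*x)^(n-1)*lam*e^(-lam*x)/(n-1)!."""
    surv = gamma_inc(n, lam * t) / math.gamma(n)
    h = (upper - t) / steps
    total = 0.0
    for i in range(steps):
        x = t + (i + 0.5) * h
        kn = (lam * x) ** (n - 1) * lam * math.exp(-lam * x) / math.gamma(n)
        total += (kn / surv) ** alpha * h
    return (total - 1.0) / (1.0 - alpha)

print(rte_exp_record_closed(2.0, 2, 1.0, 0.5), rte_exp_record_numeric(2.0, 2, 1.0, 0.5))
```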
Our analysis shows a significant decomposition of the residual Tsallis entropy of the upper record values. In particular, we have shown that this entropy measure can be expressed through two key components: the residual Tsallis entropy of the upper records from the uniform distribution and the expectation of a function of a truncated gamma RV. From Equation (8), letting α → 1, we can also see that the residual Shannon entropy of the n-th upper record value from an arbitrary continuous distribution K can be expressed through the residual Shannon entropy of the n-th upper record value from U(0, 1) as

H(U_n; t) = H(U_n^*; K(t)) − E[ log k( K^{−1}(1 − e^{−V_n}) ) ],

where U_n^* denotes the n-th upper record from U(0, 1) and V_n ∼ Γ_{−log K̄(t)}(n, 1). The specialized version of this result for t = 0 was already obtained by Baratpour et al. [11].
Next, we examine how the residual Tsallis entropy of record values changes with the aging aspects of the underlying distribution. The aging property of X affects the behavior of its residual Tsallis entropy of order α > 0, and the forthcoming theorem is necessary to our aim. We recall that X has the increasing (decreasing) failure rate (IFR (DFR)) property if its hazard rate function λ(t) = k(t)/K̄(t) is an increasing (decreasing) function of t > 0.
Theorem 2. If X is IFR (DFR), then H_α(X; t) is decreasing (increasing) in t for all α > 0.

Proof. We focus on the case where X is IFR; the case where X is DFR is similar. We can observe that

k_t( K̄_t^{−1}(u) ) = u λ_t( K̄_t^{−1}(u) ),  0 < u < 1,

where λ_t(·) denotes the hazard rate of the residual lifetime X_t and K̄_t^{−1} is the inverse of the survival function of X_t. This means that we can express Equation (3) as

H_α(X; t) = ω(α) [ ∫_0^1 u^{α−1} λ^{α−1}( K̄^{−1}(u K̄(t)) ) du − 1 ],   (10)

for all α > 0. We can easily verify that K̄^{−1}(u K̄(t)) is increasing in t for every fixed u ∈ (0, 1). Therefore, when X is IFR, we obtain the following inequality for all α > 1 (0 < α ≤ 1):

λ^{α−1}( K̄^{−1}(u K̄(t_1)) ) ≤ (≥) λ^{α−1}( K̄^{−1}(u K̄(t_2)) ),

for all t_1 ≤ t_2. By utilizing the expression (10), and recalling that ω(α) < 0 for α > 1 while ω(α) > 0 for 0 < α < 1, we can derive H_α(X; t_1) ≥ H_α(X; t_2). This inequality holds true for all values of α satisfying α > 1 or 0 < α ≤ 1. Consequently, we can conclude that H_α(X; t_1) ≥ H_α(X; t_2) for all α > 0. Now, we prove that the IFR property of X affects the behavior of the residual Tsallis entropy of record values.

Theorem 3. If X is IFR, then H_α(U_n; t) is decreasing in t for all α > 0.
Proof. The IFR property of X implies that U_n is also IFR, according to Corollary 1 of Gupta and Kirmani [26]. Therefore, the proof follows from Theorem 2.
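Theorem 3 can be probed numerically; the sketch below uses an illustrative IFR law (a Weibull distribution with shape 2, whose hazard rate 2x is increasing) and evaluates the RTE of U_2 on a grid of ages by direct quadrature:

```python
import math

def rte_weibull2_record(alpha, n, t, steps=200_000, upper=8.0):
    """RTE H_alpha(U_n; t) for records from a Weibull law with shape 2
    (k(x) = 2x*exp(-x^2)), via quadrature of the record density
    k_n(x) = (x^2)^(n-1) * 2x * exp(-x^2) / (n-1)!."""
    z = t * t                       # -log survival of this Weibull at t
    surv = math.exp(-z) * sum(z ** i / math.factorial(i) for i in range(n))
    h = (upper - t) / steps
    total = 0.0
    for i in range(steps):
        x = t + (i + 0.5) * h
        kn = (x * x) ** (n - 1) * 2 * x * math.exp(-x * x) / math.factorial(n - 1)
        total += (kn / surv) ** alpha * h
    return (total - 1.0) / (1.0 - alpha)

vals = [rte_weibull2_record(2.0, 2, t) for t in (0.25, 0.75, 1.25, 1.75)]
print(vals)  # decreasing in t, as Theorem 3 predicts for an IFR law
```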
We demonstrate how to use Theorem 3 with an example.

Example 1. We consider a sequence of i.i.d. RVs {X_i, i ≥ 1} that follow a common Weibull distribution. The CDF of this distribution is given by

K(x) = 1 − e^{−x^β},  x > 0,  β > 0.

We can find the inverse CDF of X as K^{−1}(u) = (−log(1 − u))^{1/β}, 0 < u < 1. We can also obtain k( K^{−1}(1 − e^{−u}) ) = β u^{(β−1)/β} e^{−u}, u > 0. Therefore, using (8), we obtain

H_α(U_n; t) = ω(α) [ β^{α−1} Γ(s, α t^β) / ( α^s Γ^α(n, t^β) ) − 1 ],  where s = α(n − 1) + (α − 1)(β − 1)/β + 1.

We show the plots of H_α(U_n; t) for different values of α = 0.2, α = 2, and n = 2, . . ., 6 in Figure 2. The plots confirm the result of Theorem 3, which states that the residual Tsallis entropy decreases with t when X is IFR (here, for β ≥ 1). Now, we present a theorem that establishes a lower bound for the residual Tsallis entropy of upper records from any continuous distribution. The lower bound is influenced by two key factors: the residual Tsallis entropy of upper records from the uniform distribution on the interval (0, 1) and the mode of the original distribution.

Theorem 4. Under the conditions of Theorem 1, let us assume that M = k(m) < ∞, where m represents the mode of the PDF k. Under this assumption, we can derive the following result for any α > 0, α ≠ 1:

H_α(U_n; t) ≥ ω(α) [ M^{α−1} Γ(α(n − 1) + 1, −log K̄(t)) / Γ^α(n, −log K̄(t)) − 1 ].   (15)

Proof. Since k(x) ≤ M for all x in the support of X, for α > 1 (0 < α < 1) it holds that

k^{α−1}( K^{−1}(1 − e^{−u}) ) ≤ (≥) M^{α−1},  u > 0.

Taking into account the sign of ω(α) in the two cases, the result is now easily obtained from relation (8), and this completes the proof.
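Theorem 4's bound, in the form we reconstruct as ω(α)[M^{α−1} Γ(α(n − 1) + 1, −log K̄(t))/Γ^α(n, −log K̄(t)) − 1], can be illustrated for the exponential law, whose mode is at the origin with M = k(0) = λ; the parameters below are illustrative and chosen so that α(n − 1) + 1 is an integer:

```python
import math

def gamma_inc_int(a, z):
    """Gamma(a, z) for integer a >= 1 via the finite-sum identity."""
    return math.factorial(a - 1) * math.exp(-z) * sum(
        z ** i / math.factorial(i) for i in range(a))

def rte_exp_record(alpha, n, lam, t):
    """Exact RTE of the n-th upper record from Exp(lam) (integer alpha*(n-1)+1)."""
    a = int(alpha * (n - 1) + 1)
    ratio = lam ** (alpha - 1) * gamma_inc_int(a, alpha * lam * t) \
        / (alpha ** a * gamma_inc_int(n, lam * t) ** alpha)
    return (ratio - 1.0) / (1.0 - alpha)

def rte_record_lower_bound(alpha, n, mode_density, surv_t):
    """Reconstructed Theorem 4 bound with z = -log of the survival at t:
    omega(a)[M^(a-1) * Gamma(a(n-1)+1, z) / Gamma(n, z)^a - 1]."""
    z = -math.log(surv_t)
    a = int(alpha * (n - 1) + 1)
    ratio = mode_density ** (alpha - 1) * gamma_inc_int(a, z) \
        / gamma_inc_int(n, z) ** alpha
    return (ratio - 1.0) / (1.0 - alpha)

alpha, n, lam, t = 2.0, 2, 1.0, 0.5
exact = rte_exp_record(alpha, n, lam, t)
bound = rte_record_lower_bound(alpha, n, lam, math.exp(-lam * t))
print(exact, bound)   # the exact RTE dominates the lower bound
```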

Remark 1.
It is essential to emphasize that the equality stated in Equation (15) may not hold universally, as there is in general no distribution for which k(x) = M for all x within the support of X. Nevertheless, the bound established in Theorem 4 proves to be immensely valuable, as it offers significant utility in cases where the computation of the mode of a distribution is relatively simple.
We have presented a theorem that establishes a lower bound on the RTE of U_n, denoted H_α(U_n; t). This lower bound depends on the RTE of record values from a uniform distribution and on M, the value of the PDF of the original distribution at its mode. This result provides interesting insights into the information properties of U_n and yields a computable lower bound for the RTE in terms of the mode of the distribution. In Table 1, we show the lower bounds on the RTE of the record values for some common distributions, based on Theorem 4.

[Table 1: Probability density function and corresponding lower bound for several common distributions; the entries were not recovered.]

In the following theorem, we show the monotonic behavior of the residual Tsallis entropy of the n-th upper record value with respect to n. First, we need the following lemma.

Lemma 2. Let {U_i, i ≥ 1} be a sequence of i.i.d. RVs from U(0, 1), and let U_n denote the n-th upper record value of this sequence. Then, H_α(U_n; K(t)) ≥ H_α(U_{n+1}; K(t)) for all α > 0.
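Lemma 2 can be checked numerically with the uniform-record closed form H_α(U_n; t) = ω(α)[Γ(α(n − 1) + 1, −log(1 − t))/Γ^α(n, −log(1 − t)) − 1]; the choices α = 2 and evaluation point 1 − e^{−1} below are illustrative:

```python
import math

def rte_uniform_record(alpha, n, t):
    """RTE of the n-th upper record from U(0,1); alpha*(n-1)+1 must be an integer."""
    z = -math.log(1.0 - t)
    a = int(alpha * (n - 1) + 1)

    def gi(a_, z_):
        # Gamma(a_, z_) for integer a_ via the finite-sum identity
        return math.factorial(a_ - 1) * math.exp(-z_) * sum(
            z_ ** i / math.factorial(i) for i in range(a_))

    return (gi(a, z) / gi(n, z) ** alpha - 1.0) / (1.0 - alpha)

point = 1.0 - math.exp(-1.0)        # evaluation point playing the role of K(t)
vals = [rte_uniform_record(2.0, n, point) for n in range(1, 6)]
print(vals)  # non-increasing in n, as Lemma 2 asserts
```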

Conditional Tsallis Entropy of Record Values
Hereafter, we are interested in evaluating the residual records [X_{U(n)} − t | X_{U(1)} > t], t ≥ 0, based on the knowledge that all units are subject to voltages exceeding t > 0; we denote this RV by X⁰_{U_n,t}. It follows that the survival function of X⁰_{U_n,t} can be written as (see [27])

P(X⁰_{U_n,t} > x) = Γ(n, −log K̄_t(x)) / Γ(n),  x > 0,

where K̄_t(x) = K̄(x + t)/K̄(t) is the survival function of the residual lifetime X_t. Hereafter, we will focus on studying the Tsallis entropy of the RV X⁰_{U_n,t}, which measures the amount of uncertainty contained in the density of [X_{U(n)} − t | X_{U(1)} > t] about the predictability of the system's residual lifetime in terms of the Tsallis entropy. The probability integral transformation V = K_t(X⁰_{U_n,t}), where K_t = 1 − K̄_t, plays a crucial role in our aim. It is clear that V has the PDF

g_n(u) = [−log(1 − u)]^{n−1} / Γ(n),  0 < u < 1.   (21)

In the forthcoming theorem, we provide an expression for the Tsallis entropy of X⁰_{U_n,t} by using the aforementioned transformation.

Theorem 6. Let {X_i, i ≥ 1} be a sequence of i.i.d. RVs with CDF K and PDF k. The Tsallis entropy of X⁰_{U_n,t} can be expressed as

H_α(X⁰_{U_n,t}) = ω(α) [ ∫_0^1 g_n^α(u) k_t^{α−1}( K_t^{−1}(u) ) du − 1 ],

for all α > 0, α ≠ 1, where k_t and K_t^{−1} denote the PDF and the quantile function of the residual lifetime X_t.
Proof. By using the change of variable u = K_t(x), from (2) and (20) we obtain the stated expression, where g_n(u) is the PDF of V given in (21). This completes the proof.
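For the exponential law, memorylessness makes k_t free of t, and the Theorem 6 integral can be evaluated directly; with the illustrative choices α = 2, n = 2, λ = 1, the integral ∫_0^1 (log(1 − u))² (1 − u) du equals 1/4, so H = 3/4 independently of t:

```python
import math

def cond_tsallis_exp(alpha, n, lam, steps=300_000):
    """Theorem 6 integral for Exp(lam): the residual law is again Exp(lam),
    so k_t(K_t^{-1}(u)) = lam*(1 - u), and g_n(u) = (-log(1-u))^(n-1)/(n-1)!."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        gn = (-math.log(1.0 - u)) ** (n - 1) / math.factorial(n - 1)
        total += gn ** alpha * (lam * (1.0 - u)) ** (alpha - 1) * h
    return (total - 1.0) / (1.0 - alpha)

print(cond_tsallis_exp(2.0, 2, 1.0))  # 0.75 up to quadrature error
```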
In the next theorem, we investigate how the residual Tsallis entropy of record values changes with respect to the aging properties of their components.
Theorem 7. If X is IFR (DFR), then H_α(X⁰_{U_n,t}) is decreasing (increasing) in t for all α > 0.
Proof. By using arguments similar to those of Theorem 2, when X is IFR, then for all α > 1 (0 < α ≤ 1) we have

∫_0^1 g_n^α(u) k_{t_1}^{α−1}( K_{t_1}^{−1}(u) ) du ≤ (≥) ∫_0^1 g_n^α(u) k_{t_2}^{α−1}( K_{t_2}^{−1}(u) ) du,

for all t_1 ≤ t_2. By utilizing Equation (22), we can establish H_α(X⁰_{U_n,t_1}) ≥ H_α(X⁰_{U_n,t_2}). This inequality holds true for all values of α satisfying α > 1 or 0 < α ≤ 1. As a result, we can conclude that H_α(X⁰_{U_n,t_1}) ≥ H_α(X⁰_{U_n,t_2}) for all α > 0.
In the following example, we provide an illustration of the results presented in Theorems 6 and 7.
Example 2. Suppose we have a sequence of i.i.d. RVs {X_i, i ≥ 1} with a common Pareto type II distribution. The survival function of this distribution is

K̄(x) = (1 + x)^{−β},  x > 0,  β > 0.

Using this, we can show that the residual lifetime X_t has survival function K̄_t(x) = ((1 + t + x)/(1 + t))^{−β}, which is again of Pareto type II form with a rescaled parameter, and that H_α(X⁰_{U_n,t}) is increasing in t. The implication of this result is that the Tsallis entropy of X⁰_{U_n,t} exhibits a positive association with time t: the uncertainty associated with the conditional lifetime X⁰_{U_n,t} increases with the passage of time. It is important to highlight that the distribution under consideration in this context possesses the DFR property.
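Assuming the Pareto type II survival (1 + x)^{−β} (our reading of the elided display), the residual quantile identity k_t(K_t^{−1}(u)) = β(1 + t)^{−1}(1 − u)^{1+1/β} lets the Theorem 6 integral be computed on a grid of ages; illustrative parameters α = 2, n = 2, β = 2:

```python
import math

def cond_tsallis_pareto2(alpha, n, beta, t, steps=300_000):
    """Theorem 6 integral for the Pareto II survival (1+x)^(-beta):
    the residual law satisfies k_t(K_t^{-1}(u)) = beta/(1+t) * (1-u)^(1 + 1/beta)."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        gn = (-math.log(1.0 - u)) ** (n - 1) / math.factorial(n - 1)
        kt = beta / (1.0 + t) * (1.0 - u) ** (1.0 + 1.0 / beta)
        total += gn ** alpha * kt ** (alpha - 1) * h
    return (total - 1.0) / (1.0 - alpha)

vals = [cond_tsallis_pareto2(2.0, 2, 2.0, t) for t in (0.0, 1.0, 3.0)]
print(vals)  # increasing in t for this DFR law, in line with Theorem 7
```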
Theorem 8. Let X and Y be two RVs with continuous CDFs F and G, respectively, such that X ≤_d Y, and suppose that either X or Y is IFR. Then H_α(X⁰_{U_n,t}) ≤ H_α(Y⁰_{U_n,t}) for all α > 0. The equality holds when the component lifetimes are exponentially distributed.
Proof. Using the relation (22), we only need to show that X_t ≤_d Y_t. Since we assume that X ≤_d Y and either X or Y is IFR, we can apply the proof of Theorem 5 of [28] to conclude that X_t ≤_d Y_t. This completes the proof.

Example 3. Consider two residual records, denoted X⁰_{U_n,t} and Y⁰_{U_n,t}, built from n i.i.d. component lifetimes X_1, . . ., X_n and Y_1, . . ., Y_n with CDFs F and G, respectively. Let us assume that X follows a Weibull distribution with shape parameter 3 and scale parameter 1, denoted X ∼ W(3, 1), and that Y follows a Weibull distribution with shape parameter 2 and scale parameter 1, denoted Y ∼ W(2, 1). It can be observed that X ≤_d Y, i.e., Y is larger than X in the dispersive order. Furthermore, both X and Y exhibit the IFR property. So, the above theorem yields H_α(X⁰_{U_n,t}) ≤ H_α(Y⁰_{U_n,t}) for all α > 0.

Conclusions
In this paper, we explored the concept of the RTE for record values. We presented a new approach that expresses the RTE of record values from a continuous distribution in terms of the RTE of record values from a uniform distribution. This connection provides valuable insight into the properties and behavior of the RTE for different distributions. Since it is difficult to obtain closed-form expressions for the RTE of record values, we derived bounds that provide practical approximations and allow for a better understanding of its properties. These bounds serve as useful tools for analyzing and comparing RTE values in different scenarios. To validate our results and demonstrate the applicability of our approach, we provided illustrative examples showing the practical implications of the RTE for record values and the versatility of our methodology across distributions. In summary, this study contributes to the understanding of the RTE for record values by establishing relationships, deriving bounds, and demonstrating stochastic orderings. The results of this work provide valuable insights for researchers and practitioners working in the field of statistical inference and entropy-based analysis. For example, the results of this paper may have practical implications in the field of goodness-of-fit tests. In particular, they can be applied to test the null hypothesis H_0 : F(x) = F_0(x) for all x ≥ 0, against the alternative hypothesis H_1 : F(x) ≠ F_0(x) for some x ≥ 0. Here, F_0(x) = 1 − e^{−λx} represents an exponential distribution with λ > 0 and x ≥ 0. According to (8), the above-mentioned goodness-of-fit problem is equivalent to testing the null hypothesis

H_0 : H_α(U_n; t) = ω(α) [ λ^{α−1} Γ(α(n − 1) + 1, αλt) / ( α^{α(n−1)+1} Γ^α(n, λt) ) − 1 ],  for all t ≥ 0,

against the alternative hypothesis that this equality fails for some t ≥ 0.
Here, H_α(U_n; t) represents the null-hypothesis value of the RTE. To perform this test, the maximum likelihood estimate of the parameter λ is λ̂ = 1/x̄ = k/∑_{i=1}^k x_i, where k is the sample size. If a reliable estimate of H_α(U_n; t), denoted Ĥ_α(U_n; t), is available, significant deviations of

Ĥ_α(U_n; t) − ω(α) [ Γ(α(n − 1) + 1, αt/x̄) / ( x̄^{α−1} α^{α(n−1)+1} Γ^α(n, t/x̄) ) − 1 ]

from zero indicate non-exponential behavior; consequently, the null hypothesis can be rejected. We note that Equation (8) can be rewritten as

H_α(U_n; t) = ω(α) [ ( 1/Γ^α(n, −log(1 − F(t))) ) ∫_0^1 [−log(1 − u)]^{α(n−1)} f^{α−1}(F^{−1}(u)) I(u ≥ F(t)) du − 1 ],

where I(u ≥ F(t)) denotes the indicator function of the event {u ≥ F(t)}. Following Park [29], a sample estimate Ĥ_α(U_n; t) of H_α(U_n; t) can be constructed from a sample of size k by replacing F with the empirical distribution function F̂ and approximating f(F^{−1}(u)) by difference quotients of order statistics, where m ≤ k/2 is a positive integer known as the window size, and where X_{i:k} = X_{1:k} if i < 1 and X_{i:k} = X_{k:k} if i > k.
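The null value in the display above can be computed directly; the following is a minimal sketch (not Park's estimator itself; the sample and the choices α = 2, n = 2, t = 0.5 are hypothetical):

```python
import math

def gamma_inc_int(a, z):
    """Gamma(a, z) for integer a >= 1 via the finite-sum identity."""
    return math.factorial(a - 1) * math.exp(-z) * sum(
        z ** i / math.factorial(i) for i in range(a))

def null_rte_exponential(sample, alpha, n, t):
    """Null-hypothesis value of H_alpha(U_n; t) under exponentiality,
    with lambda replaced by its MLE 1/mean(sample); alpha is chosen so
    that alpha*(n-1)+1 is an integer."""
    lam = len(sample) / sum(sample)      # MLE: 1 / sample mean
    a = int(alpha * (n - 1) + 1)
    ratio = lam ** (alpha - 1) * gamma_inc_int(a, alpha * lam * t) \
        / (alpha ** a * gamma_inc_int(n, lam * t) ** alpha)
    return (ratio - 1.0) / (1.0 - alpha)

sample = [0.3, 1.2, 0.8, 1.7, 0.5, 1.5]   # hypothetical data with mean 1
print(null_rte_exponential(sample, 2.0, 2, 0.5))
```

A test statistic would compare this null value with a nonparametric estimate Ĥ_α(U_n; t) computed from the same sample.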