Article

Some New Results Involving Past Tsallis Entropy of Order Statistics

Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
*
Author to whom correspondence should be addressed.
Entropy 2023, 25(12), 1581; https://doi.org/10.3390/e25121581
Submission received: 11 October 2023 / Revised: 22 November 2023 / Accepted: 23 November 2023 / Published: 24 November 2023

Abstract

This work explores the properties of the past Tsallis entropy of order statistics. We work out the relationship between the past Tsallis entropy of an ordered variable from an arbitrary continuous distribution and the past Tsallis entropy of the corresponding ordered variable from the uniform distribution. This representation offers important insights into the characteristics and behavior of the dynamic Tsallis entropy associated with past events for order statistics. In addition, we investigate bounds for the new dynamic information measure of a lifetime unit under various conditions, and we ask whether it is monotonic with respect to the time at which the device is found inactive. By exploring these properties and investigating the monotonic behavior of the new dynamic information measure, we contribute to a broader understanding of order statistics and related entropy quantities.

1. Introduction

The mathematical study of the storage, transmission, and quantification of information is known as information theory. The field of applied mathematics lies at the intersection of statistical mechanics, computer science, electrical engineering, probability theory, and statistics. A foundational method for determining the level of uncertainty in random events is provided by information theory. Its applications are many and are outlined in Shannon’s influential work [1]. Entropy is an important parameter in information theory. The degree of uncertainty regarding the value of a random variable or the outcome of a random process is measured by entropy. For example, determining the outcome of a fair coin toss provides less information (lower entropy and lower uncertainty) than determining the outcome of a dice roll where six equally likely outcomes are obtained. Relative entropy, the error exponent, mutual information, and channel capacity are some other important metrics in information theory. Source coding, algorithmic complexity theory, algorithmic information theory, and information-theoretic security are important subfields of information theory.
Applications of the basic concepts of information theory include channel coding/error detection and correction and source coding/data compression. The development of the Internet, the compact disk, the viability of cell phones, and the Voyager space missions have all benefited greatly from its influence. Statistical inference, cryptography, neurobiology, perception, linguistics, thermophysics, molecular dynamics, quantum computing, black holes, information retrieval, intelligence, plagiarism detection, pattern recognition, anomaly detection, and even the creation of art are other areas where the theory has found application.
Probability theory and statistics form the basis of information theory, in which quantifiable data are usually expressed in bits. Information measures of the distributions associated with random variables are a frequent topic of discussion in information theory. Entropy is a crucial metric that serves as the basis for numerous other measurements, as it quantifies the information content of a single random variable. Mutual information, defined as a measure of the joint information of two random variables, is another helpful concept that can be used to characterize their correlation. The first quantity sets a limit on the rate at which data generated from independent samples with a given distribution can be compressed; it is a property of the probability distribution of a single random variable. The second quantity, which governs the maximum rate of reliable communication over a noisy channel in the limit of long block lengths, is a property of the joint distribution of two random variables when that joint distribution determines the channel statistics.
When analyzing a random variable (rv) X that is non-negative and has a continuous cumulative distribution function (cdf) $F(x)$ and a probability density function (pdf) $f(x)$, the Tsallis entropy of order α is an important measure, elucidated in [2] as follows:

$$H_\alpha(X) = k_\alpha\left(\int_0^{\infty} (f(x))^{\alpha}\,dx - 1\right), \tag{1}$$

where $k_\alpha = 1/(1-\alpha)$ with $\alpha > 0$, $\alpha \neq 1$. Note that $H_\alpha(X) = k_\alpha\,[E(f^{\alpha-1}(F^{-1}(U))) - 1]$, in which $F^{-1}(u)$ represents the right-continuous inverse of F and U is a random number drawn uniformly from the unit interval. The Tsallis entropy can take nonpositive values in general, but appropriate choices of α can ensure non-negativity. It is worth noting that as α approaches one, $H_\alpha(X)$ converges to the Shannon differential entropy $-E(\ln f(X))$, thereby signifying an important relationship.
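To make the definition concrete, the following is a minimal numerical sketch (our illustration, not part of the paper; the helper name tsallis_entropy is ours). It evaluates $H_\alpha(X)$ by quadrature for a unit exponential lifetime, for which $H_\alpha(X) = 1/\alpha$ in closed form, and shows the values near $\alpha = 1$ approaching the Shannon entropy $-E(\ln f(X)) = 1$.

```python
import numpy as np
from scipy.integrate import quad

def tsallis_entropy(pdf, alpha, lower=0.0, upper=np.inf):
    """H_alpha(X) = k_alpha * (int f(x)^alpha dx - 1), k_alpha = 1/(1-alpha)."""
    integral, _ = quad(lambda x: pdf(x) ** alpha, lower, upper)
    return (integral - 1.0) / (1.0 - alpha)

f = lambda x: np.exp(-x)  # unit exponential pdf; closed form gives H_alpha = 1/alpha

for alpha in (0.5, 0.999, 1.001, 2.0):
    print(alpha, round(tsallis_entropy(f, alpha), 4))
# alpha = 0.999 and 1.001 give values near 1.0, the Shannon entropy -E[ln f(X)].
```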
In situations involving the analysis of the random lifetime X of a newly introduced system, $H_\alpha(X)$ is commonly used to quantify the uncertainty inherent in a fresh unit. There are, however, cases where operators know the age of the system. To be more specific, assume that they know the system has been in use for a time interval of length t. They can then calculate the amount of uncertainty in the residual lifetime after t, i.e., $X_t = [X - t \mid X > t]$, where X stands for the original lifetime of the system. In such cases, the conventional Tsallis entropy $H_\alpha(X)$ does not provide the desired insight. Therefore, a novel quantity, the Tsallis entropy of the residual lifetime of the unit under consideration, is introduced to address this limitation as follows:

$$H_\alpha(X; t) = k_\alpha\left(\int_0^{\infty} f_t^{\alpha}(x)\,dx - 1\right) = k_\alpha\left(\int_t^{\infty} \left(\frac{f(x)}{S(t)}\right)^{\alpha} dx - 1\right), \tag{2}$$

in which $f_t(x) = f(x+t)/S(t)$ represents the pdf of $X_t$, and $S(t)$ corresponds to the reliability function (rf) of X. This dynamic information quantity takes the system's age into account and provides a more accurate measure of uncertainty in scenarios where this temporal information is available. Several recent studies have contributed to the generalization of this measure, as discussed in Nanda and Paul [3], Rajesh and Sunoj [4], Toomaj and Agh Atabay [5], and the references therein.
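As a quick illustration of this definition (our sketch, with the assumed helper name residual_tsallis), the residual Tsallis entropy of a unit exponential lifetime is computed below; by memorylessness the residual lifetime is again unit exponential, so the value should not depend on t.

```python
import numpy as np
from scipy.integrate import quad

def residual_tsallis(pdf, sf, alpha, t):
    """H_alpha(X; t) = k_alpha * (int_t^inf (f(x)/S(t))^alpha dx - 1)."""
    integral, _ = quad(lambda x: (pdf(x) / sf(t)) ** alpha, t, np.inf)
    return (integral - 1.0) / (1.0 - alpha)

pdf = sf = lambda x: np.exp(-x)   # unit exponential: pdf and rf coincide

print([round(residual_tsallis(pdf, sf, 2.0, t), 6) for t in (0.0, 1.0, 3.0)])
# [0.5, 0.5, 0.5] -- constant in t, as memorylessness dictates.
```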
Uncertainty is a pervasive feature of systems in nature and is shaped by future events and even by past events. This has led to a complementary notion of entropy that captures the amount of uncertainty induced by incidents in the past. Past entropy differs from residual entropy, for which the quantification of uncertainty is driven by events in the future. The study of entropy for past events, and the applications that have arisen from it, has been pursued by many researchers. The works of Di Crescenzo and Longobardi [6] and Nair and Sunoj [7] have shed light on this topic. The research carried out by Gupta et al. [8] on the aspects and use of past entropy for order statistics was helpful in this area. In particular, they performed stochastic comparisons between the entropy of the residual lifetime and the entropy of the past lifetime, where the lifetime was quantified via an ordered random variable.
Consider an rv X and assume it signifies the system's lifetime. The pdf of $X_t = [t - X \mid X < t]$ is $f_t(x) = f(t-x)/F(t)$ for $x \in (0, t)$. Now, the past Tsallis entropy (PTE), as a function of t, the time at which a past failure of the system is observed, is given by (see, e.g., Kayid and Alshehri [9])

$$\bar{H}_\alpha(X; t) = k_\alpha\left(\int_0^{t} f_t^{\alpha}(x)\,dx - 1\right), \tag{3}$$

for every $t \in (0, +\infty)$. We emphasize that $\bar{H}_\alpha(X; t)$ can take any value from negative infinity to positive infinity. In the context of system failures, $\bar{H}_\alpha(X; t)$ serves as a metric for the uncertainty related to the inactivity time of a system, especially when it is known to have failed at time t.
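A direct numerical evaluation of the PTE is straightforward; the sketch below (ours, with the assumed helper name past_tsallis) applies the definition to a uniform (0,1) lifetime, whose past lifetime at inspection time t is uniform on (0,t), so the closed form $(t^{1-\alpha}-1)/(1-\alpha)$ serves as a check.

```python
from scipy.integrate import quad

def past_tsallis(pdf, cdf, alpha, t):
    """PTE: k_alpha * (int_0^t (f(t-x)/F(t))^alpha dx - 1)."""
    integral, _ = quad(lambda x: (pdf(t - x) / cdf(t)) ** alpha, 0.0, t)
    return (integral - 1.0) / (1.0 - alpha)

pdf = lambda x: 1.0   # uniform(0, 1) lifetime
cdf = lambda x: x

alpha, t = 2.0, 0.4
print(past_tsallis(pdf, cdf, alpha, t))       # numerical: -1.5
print((t ** (1 - alpha) - 1) / (1 - alpha))   # closed form: -1.5
```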
Extensive research has been conducted in the literature to explore Tsallis entropy’s numerous characteristics and statistical uses. For detailed insights, we recommend the work of Asadi et al. [10], Nanda and Paul [3], Zhang [11], Maasoumi [12], Abe [13], Asadi et al. [14], and the sources provided in these works. These sources provide comprehensive discussions on the topic and offer a deeper understanding of Tsallis entropy in various contexts.
In this paper, our main goal is to scrutinize the traits of the PTE for ordered variables. We focus on $X_1, \dots, X_n$, a collection of n independent and identically distributed random variables with common cdf F. Arranging these sample values in ascending order yields the order statistics, with $X_{i:n}$ denoting the ith ordered variable. These statistics play important roles in various areas of probability and statistics, as they allow for the description of probability distributions, the evaluation of the fit of data to certain models, the quality control of products or processes, the analysis of the reliability of systems or components, and numerous other applications. For a thorough understanding of the theory and applications of order statistics, we recommend the comprehensive review by David and Nagaraja [15]. The degree of predictability of an ordered random variable is usually related to its distribution, and the entropy of this random variable can capture this property. It is therefore worth quantifying the information in ordered random variables, with order statistics forming a general class of statistics relevant to survival analysis and systems engineering. Information properties of order statistics have garnered significant attention in the literature. For instance, Wong and Chen [16] demonstrated that the difference between the mean entropy of the ordered variables and the empirical entropy remains unchanged; they further established that, for symmetric distributions, the entropy of ordered variables is symmetric about the median. Park [17] established several relations for obtaining the entropy of ordered variables. Ebrahimi et al. [18] studied the information features of ordered random variables using Shannon entropy and the Kullback–Leibler distance. Similarly, Abbasnejad and Arghami [19] and Baratpour and Khammar [20] obtained analogous results for the Renyi and Tsallis entropy of ordered random variables, respectively. Despite these efforts, the Tsallis entropy of the past lifetime of ordered variables has not been considered in the literature thus far. It is commonly known that the past Tsallis entropy can be used to measure the amount of information that can be gleaned from historical observations in order to improve forecasts of future events. This motivates us to investigate the Tsallis entropy of the past lifetime distribution of order statistics. By building on existing research and by highlighting the gap in the literature regarding the past Tsallis entropy of order statistics, we establish the significance and novelty of our research.
The outcomes of the current work are organized as follows: In Section 2, we derive a representation of the PTE of the order statistic $X_{i:n}$ arising from a sample taken from an arbitrary distribution with cdf F. We express this PTE in terms of the PTE of the corresponding order statistic from a uniformly distributed sample. Because exact expressions for the PTE of order statistics are frequently unavailable for many statistical models, we derive upper and lower bounds to approximate it, and we provide several illustrative examples to demonstrate the practicality and usefulness of these bounds. In addition, we scrutinize the monotonicity of the PTE of the sample extremes under suitable conditions, finding that the PTEs of the extremes of a random sample behave monotonically as the sample size increases. We then temper this observation with a counterexample demonstrating nonmonotonic behavior of the PTE of $X_{i:n}$ in n. To further analyze monotonicity, we examine the PTE of the order statistic $X_{i:n}$ with respect to the index i; our results show that the PTE of $X_{i:n}$ does not change monotonically with i.
In the remainder of the paper, the notations $\leq_{st}$ and $\leq_{lr}$ will be used to indicate the usual stochastic order and the likelihood ratio order, respectively. For a more detailed discussion of the definitions and properties of these stochastic orders, the reader can refer to Shaked and Shanthikumar [21].

2. Past Tsallis Entropy of Order Statistics

Here, we obtain an expression that relates the PTE of the order statistic $X_{i:n}$ to the PTE of an ordered random variable based on a sample generated according to the uniform distribution. Let the pdf and the cdf of $X_{i:n}$ be denoted by $f_{i:n}(x)$ and $F_{i:n}(x)$, respectively, for $i = 1, \dots, n$. We have the following relationships:

$$f_{i:n}(x) = \frac{1}{B(i,\, n-i+1)}\, F(x)^{i-1} S(x)^{n-i} f(x), \quad x > 0, \tag{4}$$

$$F_{i:n}(x) = \sum_{k=i}^{n} \binom{n}{k} F(x)^{k} S(x)^{n-k}, \quad x > 0, \tag{5}$$

in which $B(a, b)$ represents the complete beta function (see [15] for more details). Equivalently, the cdf of $X_{i:n}$ can be written as

$$F_{i:n}(x) = \frac{B_{F(x)}(i,\, n-i+1)}{B(i,\, n-i+1)}, \tag{6}$$

where $B_x(a, b)$ represents the lower incomplete beta function. Hereafter, we shall write $Y \sim B_t(a, b)$ to specify that the rv Y follows a beta distribution truncated to $[0, t]$, which has density

$$f_Y(y) = \frac{1}{B_t(a, b)}\, y^{a-1} (1-y)^{b-1}, \quad 0 \leq y \leq t.$$
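Computationally, $B_t(a, b)$ and the truncated beta density are easy to obtain; the sketch below (helper names are ours) rescales SciPy's betainc, which is the regularized incomplete beta function, by the complete beta function.

```python
from scipy.integrate import quad
from scipy.special import beta, betainc

def lower_incomplete_beta(a, b, t):
    """B_t(a, b) = int_0^t y^(a-1) (1-y)^(b-1) dy (betainc is regularized)."""
    return betainc(a, b, t) * beta(a, b)

def truncated_beta_pdf(y, a, b, t):
    """Density of Y ~ B_t(a, b): a beta(a, b) law truncated to [0, t]."""
    return y ** (a - 1) * (1 - y) ** (b - 1) / lower_incomplete_beta(a, b, t)

# Sanity check: the truncated density integrates to 1 over [0, t].
print(quad(lambda y: truncated_beta_pdf(y, 2.0, 3.0, 0.6), 0.0, 0.6)[0])  # ~1.0
```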
In our context, we are concerned with the Tsallis entropy determined by the cdf or pdf of the rv $X_{i:n}$. In this way, one quantifies the uncertainty induced by $[t - X_{i:n} \mid X_{i:n} \leq t]$, i.e., how predictable the time elapsed since the failure of a system is. In the reliability literature, $(n-i+1)$-out-of-$n$ structures have proven to be very useful for modeling the life lengths of typical systems. Such a system functions only if at least $(n-i+1)$ of its n units or constituents are operational, and the component lifetimes are assumed to be independent and identically distributed. Denoting the component lifetimes by $X_1, X_2, \dots, X_n$, the lifetime of the system is determined by the ordered rv $X_{i:n}$, where i is the position of the order statistic. The case $i = 1$ corresponds to a series system, while $i = n$ represents a parallel system. In the context of $(n-i+1)$-out-of-$n$ structures that have experienced failures before time t, the PTE of $X_{i:n}$ serves as a measure of the entropy associated with the past lifetime of the system. This dynamic entropy measure provides system designers with valuable insights into the entropy of the lifetime of systems with $(n-i+1)$-out-of-$n$ structures observed at a given time t.
To increase computational efficiency, we introduce a lemma that relates the PTE of ordered uniformly distributed rvs to the incomplete beta function. From a practical perspective, this link is essential, since it makes the computation of the PTE easier. Because it requires only a few simple computations, the proof of this lemma, which follows immediately from the definition of the PTE, is omitted here.
Lemma 1.
Suppose we have drawn a random sample of size n from (0, 1) according to the uniform distribution, and let us arrange the sample values in ascending order, with $U_{i:n}$ the ith order statistic. Then,

$$\bar{H}_\alpha(U_{i:n}; t) = \frac{1}{\bar{\alpha}}\left(\frac{B_t(\alpha i + \bar{\alpha},\, 1 + \alpha(n-i))}{B_t^{\alpha}(i,\, 1+n-i)} - 1\right), \quad 0 < t < 1, \tag{7}$$

for all $\alpha > 0$, $\alpha \neq 1$, with $\bar{\alpha} = 1 - \alpha$.
This lemma provides researchers and practitioners with a useful tool to work out the PTE of the order statistics of a sample drawn from the uniform distribution, since the computation can be conveniently performed via the incomplete beta function. In Figure 1, $\bar{H}_\alpha(U_{i:n}; t)$ is plotted for various values of α, where i takes the values $1, 2, \dots, 5$ and the sample size is $n = 5$. The figure illustrates that there is no inherent monotonic relationship among the order statistics. The next theorem shows how the PTE of the order statistic $X_{i:n}$ is related to the PTE of the corresponding order statistic from a uniform distribution.
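The following sketch (our code, reusing the incomplete beta helper above) implements the closed form of Lemma 1 and cross-checks it against direct quadrature of the PTE definition for $U_{i:n}$.

```python
from scipy.integrate import quad
from scipy.special import beta, betainc

def lower_incomplete_beta(a, b, t):
    return betainc(a, b, t) * beta(a, b)

def pte_uniform_os(i, n, alpha, t):
    """Closed form of Lemma 1 for the PTE of U_{i:n} at time 0 < t < 1."""
    abar = 1.0 - alpha
    num = lower_incomplete_beta(alpha * i + abar, 1 + alpha * (n - i), t)
    den = lower_incomplete_beta(i, 1 + n - i, t) ** alpha
    return (num / den - 1.0) / abar

def pte_uniform_os_numeric(i, n, alpha, t):
    """Direct quadrature: past density of U_{i:n} is x^(i-1)(1-x)^(n-i)/B_t(i, n-i+1)."""
    Bt = lower_incomplete_beta(i, 1 + n - i, t)
    g = lambda x: (x ** (i - 1) * (1 - x) ** (n - i) / Bt) ** alpha
    return (quad(g, 0.0, t)[0] - 1.0) / (1.0 - alpha)

i, n, alpha, t = 2, 5, 2.0, 0.3
print(pte_uniform_os(i, n, alpha, t), pte_uniform_os_numeric(i, n, alpha, t))  # agree
```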
Theorem 1.
The past Tsallis entropy of $X_{i:n}$, for all $\alpha \in (0, +\infty)$, $\alpha \neq 1$, can be expressed as follows:

$$\bar{H}_\alpha(X_{i:n}; t) = \frac{1}{\bar{\alpha}}\Big(\big(\bar{\alpha}\,\bar{H}_\alpha(U_{i:n}; F(t)) + 1\big)\, E\big[f^{\alpha-1}(F^{-1}(Y_i))\big] - 1\Big), \quad t \in (0, +\infty), \tag{8}$$

where $Y_i \sim B_{F(t)}(\alpha i + \bar{\alpha},\, 1 + \alpha(n-i))$.
Proof. 
Recall that $k_\alpha = 1/(1-\alpha)$. Making the change of variable $u = F(x)$ and using the formulas given in (3), (4), and (6), we obtain

$$\begin{aligned}
\bar{H}_\alpha(X_{i:n}; t) &= k_\alpha\left(\int_0^t \left(\frac{f_{i:n}(x)}{F_{i:n}(t)}\right)^{\alpha} dx - 1\right)\\
&= k_\alpha\left(\int_0^t \left(\frac{F^{i-1}(x)\, S^{n-i}(x)\, f(x)}{B_{F(t)}(i,\, 1+n-i)}\right)^{\alpha} dx - 1\right)\\
&= k_\alpha\left(\frac{B_{F(t)}(\alpha i + \bar{\alpha},\, 1+\alpha(n-i))}{B_{F(t)}^{\alpha}(i,\, n-i+1)} \int_0^t \frac{F^{\alpha(i-1)}(x)\, S^{\alpha(n-i)}(x)\, f^{\alpha}(x)}{B_{F(t)}(\alpha i + \bar{\alpha},\, 1+\alpha(n-i))}\, dx - 1\right)\\
&= k_\alpha\left(\frac{B_{F(t)}(\alpha i + \bar{\alpha},\, 1+\alpha(n-i))}{B_{F(t)}^{\alpha}(i,\, 1+n-i)} \int_0^{F(t)} \frac{u^{\alpha(i-1)} (1-u)^{\alpha(n-i)} f^{\alpha-1}(F^{-1}(u))}{B_{F(t)}(\alpha i + \bar{\alpha},\, 1+\alpha(n-i))}\, du - 1\right)\\
&= k_\alpha\Big(\big(\bar{\alpha}\,\bar{H}_\alpha(U_{i:n}; F(t)) + 1\big)\, E\big[f^{\alpha-1}(F^{-1}(Y_i))\big] - 1\Big), \quad t > 0.
\end{aligned}$$

The last equality is due to Lemma 1. This finalizes the proof. □
Upon further calculation, it can be deduced that, as the order α approaches unity in Equation (8), the Shannon entropy of the ith ordered variable from a sample drawn from F can be expressed as follows:

$$\bar{H}(X_{i:n}; t) = \bar{H}(U_{i:n}; F(t)) - E\big[\ln f(F^{-1}(Y_i))\big], \tag{9}$$

in which $Y_i \sim B_{F(t)}(i,\, n-i+1)$. The special case $t = \infty$ of this result was previously derived by Ebrahimi et al. [18]. Next, we establish a fundamental result concerning the monotonicity of the PTE of an rv X, provided that X fulfills the decreasing reversed hazard rate (DRHR) property. More precisely, we say that X is DRHR if its reversed hazard rate (rhr) function, $\tau(x) = \frac{d}{dx} \ln F(x) = f(x)/F(x)$, decreases monotonically for all $x > 0$.
Lemma 2.
If $X_{i:n}$ denotes the ith order statistic obtained from a sample following a DRHR distribution, then $X_{i:n}$ is also DRHR.
Proof. 
We can express the rhr function of $X_{i:n}$ as follows:

$$\tau_{i:n}(t) = \frac{f_{i:n}(t)}{F_{i:n}(t)} = h\!\left(\frac{F(t)}{S(t)}\right)\tau(t), \quad t > 0, \tag{10}$$

where

$$h(x) = \frac{x^{i}}{B(i,\, 1+n-i)\sum_{k=i}^{n}\binom{n}{k} x^{k}}, \quad x > 0. \tag{11}$$

Under the assumption that X is DRHR, and according to Equation (10), the distribution of $X_{i:n}$ is DRHR if $h(x)$ decreases in $x > 0$, since $F(t)/S(t)$ is increasing in t. Writing $h(x) = 1/\big(B(i,\, 1+n-i)\sum_{k=i}^{n}\binom{n}{k} x^{k-i}\big)$, all exponents $k - i$ are nonnegative, so $h(x)$ indeed decreases in x, thus completing the proof. □
We now demonstrate how the behavior of the new information measure is influenced by the DRHR feature of X.
Theorem 2.
If X has the DRHR property, then the Tsallis entropy $\bar{H}_\alpha(X_{i:n}; t)$ is increasing in t for every $\alpha \in (0, +\infty)$.
Proof. 
The DRHR property of the distribution of X implies, by Lemma 2, that the distribution of $X_{i:n}$ also has the DRHR property. The result is then obtained directly from Theorem 2 of Kayid and Alshehri [9]. □
Using an example, we illustrate the application of Theorems 1 and 2.
Example 1.
We consider a distribution with cdf $F(x) = x^2$ for $x \in (0, 1)$ as the distribution of the components' lifetimes. It is evident that $f(F^{-1}(u)) = 2\sqrt{u}$ for $0 < u < 1$. Using this information, we can derive the expression

$$E\big[f^{\alpha-1}(F^{-1}(Y_i))\big] = 2^{\alpha-1}\, \frac{B_{t^2}\big(\alpha(i-\tfrac{1}{2}) + \tfrac{1}{2},\, 1+\alpha(n-i)\big)}{B_{t^2}\big(\alpha i + \bar{\alpha},\, 1+\alpha(n-i)\big)}.$$

Furthermore, we can obtain

$$\bar{H}_\alpha(U_{i:n}; F(t)) = \frac{1}{\bar{\alpha}}\left(\frac{B_{t^2}(\alpha i + \bar{\alpha},\, 1+\alpha(n-i))}{B_{t^2}^{\alpha}(i,\, 1+n-i)} - 1\right).$$

Using Equation (8), we deduce that

$$\bar{H}_\alpha(X_{i:n}; t) = \frac{1}{\bar{\alpha}}\left(2^{\alpha-1}\, \frac{B_{t^2}\big(\alpha(i-\tfrac{1}{2}) + \tfrac{1}{2},\, 1+\alpha(n-i)\big)}{B_{t^2}^{\alpha}(i,\, 1+n-i)} - 1\right), \quad i = 1, 2, \dots, n.$$

In Figure 2, we have plotted $\bar{H}_\alpha(X_{i:n}; t)$ for various values of α with $i = 1, \dots, 5$ and $n = 5$. It can be observed that the PTE increases with t, which aligns with the expectation from Theorem 2.
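The sketch below (ours, again using our lower_incomplete_beta helper) evaluates this closed form and confirms numerically that the PTE increases in t for this DRHR parent, in agreement with Theorem 2.

```python
import numpy as np
from scipy.special import beta, betainc

def lower_incomplete_beta(a, b, t):
    return betainc(a, b, t) * beta(a, b)

def pte_example1(i, n, alpha, t):
    """Closed form of Example 1, with F(t) = t^2."""
    abar = 1.0 - alpha
    s = t ** 2
    num = 2 ** (alpha - 1) * lower_incomplete_beta(
        alpha * (i - 0.5) + 0.5, 1 + alpha * (n - i), s)
    den = lower_incomplete_beta(i, 1 + n - i, s) ** alpha
    return (num / den - 1.0) / abar

ts = np.linspace(0.1, 0.9, 9)
for alpha in (0.2, 2.0):
    vals = [pte_example1(2, 5, alpha, t) for t in ts]
    print(alpha, bool(np.all(np.diff(vals) > 0)))   # True: increasing in t
```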
Unfortunately, closed-form expressions for the PTE of ordered rvs are unavailable for many distributions. Given this limitation, we are motivated to explore alternative ways of characterizing the PTE of order statistics, namely by establishing bounds for it. To this end, we present the following theorem, which provides valuable insight into the nature of these bounds and their applicability in practical scenarios.
Theorem 3.
Consider a nonnegative rv X, which is continuous with pdf f and cdf F. Suppose that $M = f(m) < +\infty$, in which m denotes the mode of the underlying distribution with density f, so that $f(x) \leq M$. Then, for every $\alpha \in (0, +\infty)$, we obtain

$$\bar{H}_\alpha(X_{i:n}; t) \geq \frac{1}{\bar{\alpha}}\Big(\big(\bar{\alpha}\,\bar{H}_\alpha(U_{i:n}; F(t)) + 1\big)\, M^{\alpha-1} - 1\Big).$$
Proof. 
Because for every $\alpha \in (1, +\infty)$ ($\alpha \in (0, 1)$), one has

$$f^{\alpha-1}(F^{-1}(u)) \leq (\geq)\; M^{\alpha-1},$$

one can write

$$E\big[f^{\alpha-1}(F^{-1}(Y_i))\big] \leq (\geq)\; M^{\alpha-1}.$$

Since $k_\alpha$ is negative (positive) for $\alpha > 1$ ($\alpha \in (0, 1)$) and $\bar{\alpha}\,\bar{H}_\alpha(U_{i:n}; F(t)) + 1$ is nonnegative, the desired conclusion follows from (8). This concludes the proof of the theorem. □
The preceding result provides a lower bound on the PTE of $X_{i:n}$, i.e., on the function $\bar{H}_\alpha(X_{i:n}; t)$. This bound is expressed via the PTE of the corresponding order statistic from the uniform distribution together with the mode m of the distribution under consideration. The result thus yields a quantitative lower bound on the PTE in terms of the distribution mode and offers intriguing insights into the uncertainty features of $X_{i:n}$. Based on Theorem 3, Table 1 lists the resulting bound on the PTE of the ordered rvs for a few standard, well-known distributions.
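As a numerical illustration (our code, with the same helper conventions as the earlier sketches), the bound of Theorem 3 can be checked for the Example 1 law $F(x) = x^2$ on (0, 1), whose density $f(x) = 2x$ is bounded by $M = 2$.

```python
from scipy.special import beta, betainc

def lower_incomplete_beta(a, b, t):
    return betainc(a, b, t) * beta(a, b)

def pte_uniform_os(i, n, alpha, t):
    abar = 1.0 - alpha
    num = lower_incomplete_beta(alpha * i + abar, 1 + alpha * (n - i), t)
    den = lower_incomplete_beta(i, 1 + n - i, t) ** alpha
    return (num / den - 1.0) / abar

def pte_example1(i, n, alpha, t):
    abar = 1.0 - alpha
    s = t ** 2
    num = 2 ** (alpha - 1) * lower_incomplete_beta(
        alpha * (i - 0.5) + 0.5, 1 + alpha * (n - i), s)
    den = lower_incomplete_beta(i, 1 + n - i, s) ** alpha
    return (num / den - 1.0) / abar

def mode_bound(i, n, alpha, Ft, M):
    """Theorem 3: (1/abar) * ((abar*PTE_uniform + 1) * M^(alpha-1) - 1)."""
    abar = 1.0 - alpha
    return ((abar * pte_uniform_os(i, n, alpha, Ft) + 1.0)
            * M ** (alpha - 1) - 1.0) / abar

i, n, t, M = 2, 5, 0.6, 2.0
for alpha in (0.2, 2.0):
    print(alpha, pte_example1(i, n, alpha, t) >= mode_bound(i, n, alpha, t ** 2, M))
# True for both alpha values: the exact PTE dominates the mode-based bound.
```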
The following result establishes an upper bound for the new information measure of a system with parallel structure in terms of the rhr of the distribution under consideration.
Theorem 4.
Let the distribution of X fulfill the DRHR property. For $\alpha > 1$, we have the inequality

$$\bar{H}_\alpha(X_{n:n}; t) \leq \frac{\alpha - \tau^{\alpha-1}(t)}{\alpha(\alpha-1)},$$

in which $\tau(t)$ is the rhr of X, which is a decreasing function by assumption.
Proof. 
Since the distribution of X has a decreasing rhr function, Theorem 2 yields that $\bar{H}_\alpha(X_{n:n}; t)$ is increasing in t. Therefore, based on Theorem 3 of Kayid and Alshehri [9], we have

$$\bar{H}_\alpha(X_{n:n}; t) \leq k_\alpha\left(\frac{\tau_{n:n}^{\alpha-1}(t)}{\alpha} - 1\right) \leq k_\alpha\left(\frac{\tau^{\alpha-1}(t)}{\alpha} - 1\right) = \frac{\alpha - \tau^{\alpha-1}(t)}{\alpha(\alpha-1)}, \quad t > 0,$$

in which $k_\alpha = 1/(1-\alpha)$. Since $\tau_{n:n}(t) = n\,\tau(t) \geq \tau(t)$, the second inequality is easily obtained for $\alpha > 1$, and the proof is now complete. □
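A short check of this bound (ours): for a parallel system of uniform (0, 1) components with $\alpha = 2$, the PTE of $X_{n:n}$ has the closed form $1 - n^2/((2n-1)t)$, while $\tau(t) = 1/t$ gives the bound $1 - 1/(2t)$.

```python
# alpha = 2, uniform(0, 1) components; X_{n:n} has past density n*x^(n-1)/t^n on (0, t).
alpha, t = 2.0, 0.5
bound = (alpha - (1.0 / t) ** (alpha - 1)) / (alpha * (alpha - 1))
for n in (1, 2, 5, 10):
    pte = 1.0 - n ** 2 / ((2 * n - 1) * t)   # exact PTE of X_{n:n} for alpha = 2
    print(n, round(pte, 4), pte <= bound)    # True for every n
```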
Next, we delve into the monotone behavior of the PTE of extreme order statistics with components whose lifetimes are uniformly distributed.
Lemma 3.
In a parallel (series) system whose components have random lifetimes following the uniform distribution, the PTE of the system's lifetime is decreasing in the number of components.
Proof. 
We give the proof for a parallel system; analogous reasoning applies to a series system. Define two rvs $Z_1$ and $Z_\alpha$ with densities $f_1(z)$ and $f_\alpha(z)$, respectively, given by

$$f_1(z) = \frac{z^{n-1}}{\int_0^t x^{n-1}\,dx} \quad \text{and} \quad f_\alpha(z) = \frac{z^{\alpha(n-1)}}{\int_0^t x^{\alpha(n-1)}\,dx}, \quad z \in (0, t).$$

Next, one obtains

$$\xi_n = \bar{H}_\alpha(U_{n:n}; t) = \frac{1}{\bar{\alpha}}\left(\frac{\int_0^t x^{\alpha(n-1)}\,dx}{\left(\int_0^t x^{n-1}\,dx\right)^{\alpha}} - 1\right), \quad 0 < t < 1. \tag{12}$$

Regarding n as a continuous parameter in $[1, +\infty)$, the derivative of $\xi_n$ with respect to n is well defined, and

$$\frac{\partial \xi_n}{\partial n} = \frac{1}{\bar{\alpha}} \frac{\partial \varsigma_n}{\partial n},$$

where

$$\varsigma_n = \frac{\int_0^t x^{\alpha(n-1)}\,dx}{\left(\int_0^t x^{n-1}\,dx\right)^{\alpha}}.$$

It is evident that for $\alpha \in (1, +\infty)$ ($\alpha \in (0, 1)$),

$$\frac{\partial \varsigma_n}{\partial n} = \alpha\, \frac{A(t)}{B^{\alpha}(t)}\, \big(E[\ln(Z_\alpha)] - E[\ln(Z_1)]\big) \geq (\leq)\; 0, \tag{13}$$

where

$$A(t) = \int_0^t x^{\alpha(n-1)}\,dx \quad \text{and} \quad B(t) = \int_0^t x^{n-1}\,dx.$$

It is readily seen that for $\alpha \in (1, +\infty)$ ($\alpha \in (0, 1)$), $Z_\alpha$ is greater (less) than $Z_1$ in the usual stochastic order. Since $\ln(z)$ increases as z grows, an application of Theorem 1.A.3 of [21] gives $E[\ln(Z_\alpha)] \geq (\leq)\; E[\ln(Z_1)]$. Hence, (13) is nonnegative (nonpositive), and because $\bar{\alpha} = 1 - \alpha$ is negative (positive), $\xi_n$ decreases as n grows. Consequently, it is deduced that the PTE of the life length of a system with parallel units decreases as the number of components increases. □
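For uniform components the integrals above are elementary, giving $\xi_n = \frac{1}{\bar{\alpha}}\big(n^{\alpha} t^{1-\alpha}/(\alpha(n-1)+1) - 1\big)$; the sketch below (ours) verifies the claimed decrease in n for α on both sides of one.

```python
import numpy as np

def xi(n, alpha, t):
    """Closed form of xi_n = PTE of U_{n:n} at time t for uniform components."""
    return (n ** alpha * t ** (1 - alpha) / (alpha * (n - 1) + 1) - 1) / (1 - alpha)

for alpha in (0.5, 2.0):
    vals = [xi(n, alpha, 0.4) for n in range(1, 11)]
    print(alpha, bool(np.all(np.diff(vals) < 0)))   # True: decreasing in n
```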
A large class of distributions consists of those whose density functions decrease as the argument increases; examples include the exponential, the Pareto, and various mixtures of distributions. There are also distributions whose density functions increase, such as the power distribution. We use the result of the previous lemma to establish the next theorem, which covers distributions whose density functions are either increasing or decreasing.
Theorem 5.
Suppose that f is the pdf of the components' lifetime in a parallel (series) system, and let f be an increasing (a decreasing) function. Then, the PTE of the system's lifetime decreases as n grows.
Proof. 
Let $Y_n \sim B_{F(t)}(\alpha(n-1)+1,\, 1)$, with $f_{Y_n}(y)$ denoting the density of $Y_n$. It is evident that

$$\frac{f_{Y_{n+1}}(y)}{f_{Y_n}(y)} = \frac{B_{F(t)}(\alpha(n-1)+1,\, 1)}{B_{F(t)}(\alpha n + 1,\, 1)}\, y^{\alpha}, \quad 0 < y < F(t),$$

increases as y grows. This implies that $Y_n \leq_{lr} Y_{n+1}$ and, therefore, $Y_n \leq_{st} Y_{n+1}$ as well. In addition, for $\alpha \in (1, +\infty)$ ($\alpha \in (0, 1)$), $f^{\alpha-1}(F^{-1}(x))$ increases (decreases) as x grows. Therefore,

$$E\big[f^{\alpha-1}(F^{-1}(Y_n))\big] \leq (\geq)\; E\big[f^{\alpha-1}(F^{-1}(Y_{n+1}))\big].$$

From Theorem 1, for $\alpha \in (1, +\infty)$ ($\alpha \in (0, 1)$), one obtains

$$\begin{aligned}
1 + \bar{\alpha}\,\bar{H}_\alpha(X_{n:n}; t) &= \big[1 + \bar{\alpha}\,\bar{H}_\alpha(U_{n:n}; F(t))\big]\, E\big[f^{\alpha-1}(F^{-1}(Y_n))\big]\\
&\leq (\geq)\; \big[1 + \bar{\alpha}\,\bar{H}_\alpha(U_{n:n}; F(t))\big]\, E\big[f^{\alpha-1}(F^{-1}(Y_{n+1}))\big]\\
&\leq (\geq)\; \big[1 + \bar{\alpha}\,\bar{H}_\alpha(U_{n+1:n+1}; F(t))\big]\, E\big[f^{\alpha-1}(F^{-1}(Y_{n+1}))\big]\\
&= 1 + \bar{\alpha}\,\bar{H}_\alpha(X_{n+1:n+1}; t).
\end{aligned}$$

The first inequality is obtained by noting that $1 + \bar{\alpha}\,\bar{H}_\alpha(U_{n:n}; F(t))$ is nonnegative, whereas the second is due to Lemma 3. Dividing by $\bar{\alpha}$, we deduce that $\bar{H}_\alpha(X_{n:n}; t) \geq \bar{H}_\alpha(X_{n+1:n+1}; t)$ for all $t \in (0, +\infty)$. □
The following example shows that this theorem does not extend to all systems with an $(n-i+1)$-out-of-$n$ structure.
Example 2.
We presume a system is operational when at least $n-1$ of its n components are in operation. It is then not difficult to see that the system's random lifetime is $X_{2:n}$. The components are assumed to be identically distributed, uniform on (0, 1). Figure 3 shows how the PTE of $X_{2:n}$ changes with n when $\alpha = 2$ and $t = 0.2$. The graph reveals that the PTE of the system does not always decrease as n increases: for example, $\bar{H}_\alpha(X_{2:2}; 0.2)$ is less than $\bar{H}_\alpha(X_{2:n}; 0.2)$ for $n = 3, 4, \dots, 23$.
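The sketch below (our code) reproduces this behavior directly from Lemma 1, since for uniform components the system lifetime is simply $U_{2:n}$.

```python
import numpy as np
from scipy.special import beta, betainc

def lower_incomplete_beta(a, b, t):
    return betainc(a, b, t) * beta(a, b)

def pte_uniform_os(i, n, alpha, t):
    abar = 1.0 - alpha
    num = lower_incomplete_beta(alpha * i + abar, 1 + alpha * (n - i), t)
    den = lower_incomplete_beta(i, 1 + n - i, t) ** alpha
    return (num / den - 1.0) / abar

alpha, t = 2.0, 0.2
vals = [pte_uniform_os(2, n, alpha, t) for n in range(2, 26)]   # n = 2, ..., 25
print(all(vals[0] < v for v in vals[1:22]))   # True: n = 3, ..., 23 exceed n = 2
print(bool(np.all(np.diff(vals) < 0)))        # False: not monotone in n
```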

3. Conclusions

We investigated the notion of PTE for order statistics in this paper. We proposed a representation that links the PTE of ordered random variables from an arbitrary continuous distribution to the PTE of the corresponding ordered random variables from the uniform distribution. This relationship aids our comprehension of the characteristics and behavior of the PTE for various distributions. Additionally, because it is challenging to derive exact formulas for the PTE of order statistics, we obtained bounds that offer helpful approximations and enable a deeper understanding of its properties. The derived bounds can be applied to evaluate the PTE and to compare its values in different situations from different perspectives. In addition, we investigated how the index of the ordered random variable, denoted by i, and the sample size, denoted by n, affect the PTE. To corroborate our findings and show how our method applies, we included examples that illustrate the usefulness of the PTE for ordered random variables and the adaptability of our approach to various distributions. In short, the current work improves the understanding of the PTE of ordered random variables by establishing the connections this quantity has with other measures, by obtaining bounds, and by exploring the effects of the position of the ordered variable and of the size of the sample under consideration. The findings reported in this paper provide useful intuition for professionals engaged in the analysis of information measures and statistical inferential procedures.

Author Contributions

Methodology, M.S.; Software, M.S.; Validation, M.S.; Formal analysis, M.S.; Investigation, M.K.; Resources, M.S.; Writing—original draft, M.K.; Writing—review and editing, M.K. and M.S.; Visualization, M.K.; Supervision, M.K.; Project administration, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge financial support from the Researchers Supporting Project number (RSP2023R464) through King Saud University in Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  3. Nanda, A.K.; Paul, P. Some results on generalized residual entropy. Inf. Sci. 2006, 176, 27–47. [Google Scholar] [CrossRef]
  4. Rajesh, G.; Sunoj, S. Some properties of cumulative Tsallis entropy of order α. Stat. Pap. 2019, 60, 583–593. [Google Scholar] [CrossRef]
  5. Toomaj, A.; Atabay, H.A. Some new findings on the cumulative residual Tsallis entropy. J. Comput. Appl. Math. 2022, 400, 113669. [Google Scholar] [CrossRef]
  6. Di Crescenzo, A.; Longobardi, M. Entropy-based measure of uncertainty in past lifetime distributions. J. Appl. Probab. 2002, 39, 434–440. [Google Scholar] [CrossRef]
  7. Nair, N.U.; Sunoj, S. Some aspects of reversed hazard rate and past entropy. Commun. Stat. Theory Methods 2021, 32, 2106–2116. [Google Scholar] [CrossRef]
  8. Gupta, R.C.; Taneja, H.; Thapliyal, R. Stochastic comparisons of residual entropy of order statistics and some characterization results. J. Stat. Theory Appl. 2014, 13, 27–37. [Google Scholar] [CrossRef]
  9. Kayid, M.; Alshehri, M.A. Tsallis entropy for the past lifetime distribution with application. Axioms 2023, 12, 731. [Google Scholar] [CrossRef]
  10. Asadi, M.; Ebrahimi, N.; Soofi, E.S. Dynamic generalized information measures. Stat. Probab. Lett. 2005, 71, 85–98. [Google Scholar] [CrossRef]
  11. Zhang, Z. Uniform estimates on the Tsallis entropies. Lett. Math. Phys. 2007, 80, 171–181. [Google Scholar] [CrossRef]
  12. Maasoumi, E. The measurement and decomposition of multi-dimensional inequality. Econ. J. Econ. Soc. 1986, 54, 991–997. [Google Scholar] [CrossRef]
  13. Abe, S. Axioms and uniqueness theorem for Tsallis entropy. Phys. Lett. A 2000, 271, 74–79. [Google Scholar] [CrossRef]
  14. Asadi, M.; Ebrahimi, N.; Soofi, E.S. Connections of Gini, Fisher, and Shannon by Bayes risk under proportional hazards. J. Appl. Probab. 2017, 54, 1027–1050. [Google Scholar] [CrossRef]
  15. David, H.A.; Nagaraja, H.N. Order Statistics; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  16. Wong, K.M.; Chen, S. The entropy of ordered sequences and order statistics. IEEE Trans. Inf. Theory 1990, 36, 276–284. [Google Scholar] [CrossRef]
  17. Park, S. The entropy of consecutive order statistics. IEEE Trans. Inf. Theory 1995, 41, 2003–2007. [Google Scholar] [CrossRef]
  18. Ebrahimi, N.; Soofi, E.S.; Soyer, R. Information measures in perspective. Int. Stat. Rev. 2010, 78, 383–412. [Google Scholar] [CrossRef]
  19. Abbasnejad, M.; Arghami, N.R. Renyi entropy properties of order statistics. Commun. Stat. Methods 2010, 40, 40–52. [Google Scholar] [CrossRef]
  20. Baratpour, S.; Khammar, A. Tsallis entropy properties of order statistics and some stochastic comparisons. J. Stat. Res. Iran JSRI 2016, 13, 25–41. [Google Scholar] [CrossRef]
  21. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
Figure 1. Values of $\bar{H}_\alpha(U_{i:n}; t)$ for $\alpha = 0.2$ (left panel) and $\alpha = 2$ (right panel) for various choices of $0 < t < 1$.
Figure 2. Values of $\bar{H}_\alpha(X_{i:n}; t)$ for $\alpha = 0.2$ (left panel) and $\alpha = 2$ (right panel) as functions of t.
Figure 3. Values of the PTE for several choices of n in a system with an $(n-1)$-out-of-$n$ structure, with an underlying uniform distribution, $\alpha = 2$, and $t = 0.2$.
Table 1. Lower bound on $\bar{H}_\alpha(X_{i:n}; t)$ derived from Theorem 3.

| pdf | Lower bound |
| --- | --- |
| $f(x) = \frac{2}{\pi(1+x^{2})}, \; x > 0$ | $\frac{1}{\bar{\alpha}}\Big(\big(1 + \bar{\alpha}\,\bar{H}_\alpha(U_{i:n}; F(t))\big)\big(\tfrac{2}{\pi}\big)^{\alpha-1} - 1\Big)$ |
| $f(x) = \frac{2}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^{2}/2\sigma^{2}}, \; x \in (\mu, +\infty), \; \mu > 0$ | $\frac{1}{\bar{\alpha}}\Big(\big(1 + \bar{\alpha}\,\bar{H}_\alpha(U_{i:n}; F(t))\big)\big(\tfrac{2}{\sigma\sqrt{2\pi}}\big)^{\alpha-1} - 1\Big)$ |
| $f(x) = \lambda\beta\, e^{-(x-\mu)\beta}\big(1 - e^{-(x-\mu)\beta}\big)^{\lambda-1}, \; x \in (\mu, +\infty), \; \mu > 0$ | $\frac{1}{\bar{\alpha}}\Big(\big(1 + \bar{\alpha}\,\bar{H}_\alpha(U_{i:n}; F(t))\big)\big(\beta(1 - \tfrac{1}{\lambda})^{\lambda-1}\big)^{\alpha-1} - 1\Big)$ |
| $f(x) = \frac{b^{c}}{\Gamma(c)}\, x^{c-1} e^{-bx}, \; x > 0$ | $\frac{1}{\bar{\alpha}}\Big(\big(1 + \bar{\alpha}\,\bar{H}_\alpha(U_{i:n}; F(t))\big)\big(\tfrac{b(c-1)^{c-1} e^{1-c}}{\Gamma(c)}\big)^{\alpha-1} - 1\Big)$ |