Abstract
This study explores the application of Tsallis entropy in evaluating uncertainty within the framework of consecutive k-out-of-n good systems, which are widely utilized in various reliability and engineering contexts. We derive new analytical expressions and meaningful bounds for the Tsallis entropy under various lifetime distributions, offering fresh insight into the structural behavior of system-level uncertainty. The approach establishes theoretical connections with classical entropy measures, such as Shannon and Rényi entropies, and provides a foundation for comparing systems under different stochastic orders. A nonparametric estimator of the Tsallis entropy is proposed for this setting, and its performance is evaluated through Monte Carlo simulations. In addition, we develop a new entropy-based test for exponentiality, building on the distinctive properties of system lifetimes. Taken together, these results show that Tsallis entropy serves as a flexible tool in both reliability characterization and statistical inference.
1. Introduction
In recent decades, the study of consecutive k-out-of-n systems has gained prominence due to their relevance across numerous engineering and industrial settings. These models effectively capture configurations found in applications such as communication relay networks, segmented pipeline systems, and complex assemblies used in high-energy physics. The operational behavior of such systems depends on the arrangement of their components and the logic that governs their collective functioning. A key configuration, known as the linear consecutive k-out-of-n good system, consists of n sequential components, each assumed to function independently and identically. The system remains operational provided that at least k consecutive components are functioning simultaneously. Classical series and parallel structures appear as limiting cases of this setup: the n-out-of-n case corresponds to a series system, while the 1-out-of-n configuration resembles a parallel structure. The foundational contributions of researchers such as Jung and Kim [1], Shen and Zuo [2], Kuo and Zuo [3], Chung et al. [4], Boland and Samaniego [5], and Eryılmaz [6,7] have laid the groundwork for this domain. Among the various system configurations, linear arrangements that satisfy the condition 2k ≥ n are of particular significance. These systems offer a meaningful compromise between analytical tractability and practical relevance in reliability engineering. In such models, the component lifetimes are represented by $T_1, T_2, \ldots, T_n$, where each component follows a common continuous lifetime distribution characterized by its probability density function (pdf) $f(t)$, cumulative distribution function (cdf) $F(t)$, and survival function $S(t) = P(T > t)$. The overall system lifetime is denoted by $T_{k|n:G}$, representing the operational lifetime of a linear consecutive k-out-of-n good system. Eryılmaz [8] established that when the condition $2k \ge n$ holds, the reliability function of such a system takes the form:
$$S_{k|n:G}(t) = P(T_{k|n:G} > t) = (n-k+1)\,S^{k}(t) - (n-k)\,S^{k+1}(t), \quad t > 0. \quad (1)$$
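To make the structure concrete, the reliability function above is straightforward to evaluate programmatically. The following is a minimal sketch in Python, assuming the reconstructed form of Equation (1); the function name is ours, not from the paper.

```python
def consecutive_knG_reliability(s, n, k):
    """Reliability of a linear consecutive k-out-of-n:G system with i.i.d.
    components when 2k >= n (Eryilmaz [8]):
        R = (n - k + 1) * s**k - (n - k) * s**(k + 1),
    where s = S(t) is the common component survival probability at time t."""
    assert 2 * k >= n, "the closed form requires 2k >= n"
    return (n - k + 1) * s**k - (n - k) * s**(k + 1)

s = 0.9
print(consecutive_knG_reliability(s, n=4, k=4))  # series limit: 0.9**4
print(consecutive_knG_reliability(s, n=2, k=1))  # parallel of two: 1 - 0.1**2
print(consecutive_knG_reliability(s, n=4, k=2))  # 3*0.9**2 - 2*0.9**3
```

The first two calls recover the series and parallel limiting cases mentioned above.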
In the context of information theory, a key aim is to measure the uncertainty associated with probability distributions. This study examines the use of Tsallis entropy as a tool for evaluating such uncertainty in consecutive k-out-of-n good systems, where the components are assumed to have continuous lifetime distributions. It is important to highlight that Tsallis [9] stimulated a resurgence of interest in generalized entropy measures, often referred to as Tsallis entropy, building on earlier work by Havrda and Charvát [10] and the independent development of similar concepts in ecology by Patil and Taillie [11]. It provides a flexible alternative to classical measures and is widely regarded for its capacity to capture non-extensive behavior. The Tsallis entropy of order β is mathematically defined as:
$$H_\beta(T) = \frac{1}{\beta-1}\left(1 - \int_0^\infty f^{\beta}(t)\,dt\right) = \frac{1}{\beta-1}\left(1 - \int_0^1 f^{\beta-1}\big(F^{-1}(u)\big)\,du\right), \quad \beta > 0,\ \beta \neq 1, \quad (2)$$
where $F^{-1}(u) = \inf\{t : F(t) \ge u\}$, for $u \in (0, 1)$, represents the quantile function of $F$, and the second expression follows from the substitution $u = F(t)$. Of particular note, the Shannon differential entropy, introduced by Shannon [12], emerges as a special case of Tsallis entropy in the limit as β approaches 1. This foundational concept in information theory provides a baseline for measuring uncertainty under additive conditions. Mathematically, it is defined as:
$$H(T) = -\int_0^\infty f(t)\,\log f(t)\,dt. \quad (3)$$
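Numerically, the convergence of Tsallis entropy to Shannon entropy as β → 1 is easy to observe; a minimal sketch using quadrature, with the standard exponential (whose Shannon entropy is 1) as a test case:

```python
import numpy as np
from scipy.integrate import quad

def tsallis_entropy(f, beta):
    """H_beta(T) = (1 - integral of f(t)**beta over [0, inf)) / (beta - 1)."""
    integral, _ = quad(lambda t: f(t)**beta, 0, np.inf)
    return (1.0 - integral) / (beta - 1.0)

f = lambda t: np.exp(-t)  # standard exponential pdf; Shannon entropy H(T) = 1
for beta in (0.5, 0.9, 0.99, 1.01, 1.1, 2.0):
    print(beta, tsallis_entropy(f, beta))  # closed form here is 1/beta -> 1
```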
An alternative and insightful expression of Tsallis entropy can be obtained by reformulating Equation (2) in terms of the hazard rate function. This representation provides a useful perspective in reliability analysis, particularly when examining lifetime distributions. The resulting form is given by:
$$H_\beta(T) = \frac{1}{\beta-1}\left(1 - \frac{1}{\beta}\,E\!\left[\lambda^{\beta-1}(T_\beta)\right]\right), \quad (4)$$
where $\lambda(t) = f(t)/S(t)$ represents the hazard rate function, $E[\cdot]$ denotes the expectation, and $T_\beta$ follows a pdf given by:
$$f_\beta(t) = \beta\, f(t)\, S^{\beta-1}(t), \quad t > 0,\ \beta > 0. \quad (5)$$
Note that Equation (5) is known as the proportional hazards model in the literature and defines a general class of distributions, sometimes known as exponentiated survival distributions. The term proportional hazards model has been widely used to describe the relative risk or Cox model in survival analysis, because the parameter $\beta$ controls the tail behavior of the distributions; for more details, see, e.g., Kalbfleisch and Prentice [13] and Lawless [14]. It has long been understood that entropic measures for continuous random variables often yield negative values across many distribution types. This observation holds for various entropy functionals, including those proposed by Shannon and Tsallis. While discrete random variables typically yield non-negative values under these measures, continuous ones can yield negative values. Moreover, Tsallis and Shannon entropies share invariance under location shifts and sensitivity to scale transformations. Unlike Shannon entropy, which is additive for independent random variables, Tsallis entropy is non-additive, making it a more general and adaptable tool in complex systems. Specifically, for two independent continuous random variables $T_1$ and $T_2$, the Tsallis entropy satisfies the relation $H_\beta(T_1, T_2) = H_\beta(T_1) + H_\beta(T_2) + (1-\beta)\,H_\beta(T_1)\,H_\beta(T_2)$; for Shannon entropy the rightmost term vanishes because $1-\beta \to 0$ as $\beta \to 1$, resulting in the classic additivity property when $T_1$ and $T_2$ are independent, i.e., $H(T_1, T_2) = H(T_1) + H(T_2)$. This non-additivity property, which arises even for independent random variables, underscores the versatility and theoretical strength of the Tsallis functional. When $T_1$ and $T_2$ are dependent, we have $H(T_1, T_2) = H(T_1) + H(T_2) - I(T_1; T_2)$, where $I(T_1; T_2)$ is the mutual information (see, e.g., Cover and Thomas [15]). A similar result holds for dependent random variables in the context of Tsallis entropy, as demonstrated by Vila et al. [16].

In the field of telecommunications, entropic measures such as those by Shannon and Tsallis help define the fundamental performance boundaries of communication systems. These include limits on data compression efficiency and transmission reliability. Shannon entropy, for instance, quantifies the average uncertainty or information content of messages from a given source. According to Shannon's Source Coding Theorem, entropy defines the minimum average number of bits per symbol required for lossless encoding. Moreover, the Shannon–Hartley Theorem links entropy to the maximum achievable rate for reliable communication over a noisy channel. In essence, Shannon entropy is a cornerstone of information theory, setting the theoretical limits for the two most essential tasks in digital communication: efficient compression and robust transmission. For clarity, Table 1 presents our notation, common alternatives found in the literature, symbols, conceptual definitions, and the location of mathematical definitions.
Table 1.
Notational Conventions and Their Definitions in the Context of Entropy Measures.
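The pseudo-additivity relation above can be verified directly. For independent exponential lifetimes, $\int f^\beta\,dt = \lambda^{\beta-1}/\beta$ in closed form, and the joint integral factorizes; a small sketch:

```python
beta = 2.0
lam1, lam2 = 1.0, 3.0

def c(lam, beta):
    """Integral of f**beta for an Exp(lam) density: lam**(beta-1) / beta."""
    return lam**(beta - 1) / beta

H1 = (1 - c(lam1, beta)) / (beta - 1)
H2 = (1 - c(lam2, beta)) / (beta - 1)
# Joint entropy of the independent pair: the double integral of (f1*f2)**beta
# factorizes into c(lam1, beta) * c(lam2, beta).
H12 = (1 - c(lam1, beta) * c(lam2, beta)) / (beta - 1)

print(H12)                             # 0.25
print(H1 + H2 + (1 - beta) * H1 * H2)  # 0.25 as well: pseudo-additivity holds
```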
The study of information-theoretic measures within the context of reliability systems and order statistics has attracted growing interest in recent decades. Several foundational works have shaped this domain, including contributions by Wong and Chen [17], Park [18], Ebrahimi et al. [19], Zarezadeh and Asadi [20], Toomaj and Doostparast [21], Toomaj [22], and Mesfioui et al. [23], among others. Building on this foundation, Alomani and Kayid [24] extended the analysis of Tsallis entropy to coherent and mixed systems, assuming independent and identically distributed (i.i.d.) component lifetimes. Further developments include Baratpour and Khammar [25], who investigated the entropy’s behavior with respect to order statistics and record values, and Kumar [26], who analyzed its relevance in the study of r-records. Kayid and Alshehri [27] provided notable advancements by deriving a closed-form expression for the lifetime entropy of consecutive k-out-of-n good systems. Their work also established a characterization result, proposed practical bounds, and introduced a nonparametric estimation method. Complementing these efforts, Kayid and Shrahili [28] focused on the fractional generalized cumulative residual entropy in similar systems, presenting a computational framework, establishing several preservation properties, and offering two nonparametric estimators supported by simulation-based evidence.
Building on previous studies, this work seeks to provide a more comprehensive understanding of how Tsallis entropy can be applied to analyze consecutive k-out-of-n good systems. We expand upon earlier findings by offering new insights into the structural properties of these systems, proposing improved bounding strategies, and developing estimation techniques tailored to their unique reliability characteristics. Although this study centers on consecutive systems with i.i.d. components, it is worth recognizing that more general binary systems, involving non-i.i.d. structures, have received significant attention in the literature. Notable contributions in this direction include the works of Tsallis et al. [29] and Hanel et al. [30,31], which examine how dependencies among components influence entropic formulations and their theoretical implications.
The structure of this paper is outlined as follows. In Section 2, we introduce a novel expression for the Tsallis entropy of consecutive k-out-of-n good systems, denoted by $H_\beta(T_{k|n:G})$, where component lifetimes are drawn from a general continuous distribution function F. This formulation is developed using the uniform distribution as a starting point. Due to the challenges involved in deriving closed-form results for Tsallis entropy in complex reliability settings, we also establish several analytical bounds, supported by illustrative numerical examples. Section 3 focuses on characterization results, highlighting key theoretical properties of Tsallis entropy in the setting of consecutive systems. In Section 4, we present computational validation of our findings and propose a nonparametric estimator specifically designed for evaluating system-level Tsallis entropy. The estimator's performance is assessed using both simulated and empirical data. Finally, Section 5 summarizes the main conclusions and discusses their broader implications.
2. Tsallis Entropy of Consecutive k-out-of-n Good System
This section is structured into three parts. We begin with a brief overview of essential properties of Tsallis entropy and its connections to other well-known measures, such as Rényi and Shannon differential entropies. In the second part, we derive a closed-form expression for the Tsallis entropy in the context of consecutive k-out-of-n good systems and analyze its behavior with respect to different stochastic orderings. The final part introduces a series of analytical bounds that further clarify the entropy characteristics of these systems.
2.1. Results on Tsallis Entropy
In this paper, we consider a random variable, denoted by T, which is assumed to be absolutely continuous and nonnegative. A random variable is a mathematical construct used to represent the outcome of a random process, assigning numerical values to each possible outcome. In this context, T specifically represents the lifetime of a component, system, or living organism, meaning it quantifies the duration until a specific event occurs, such as the failure of a mechanical component, the breakdown of a system, or the death of an organism. The term absolutely continuous implies that the random variable T has a probability density function, allowing for a continuous range of possible values (e.g., any positive real number) rather than being restricted to discrete values. The nonnegative property ensures that $P(T \ge 0) = 1$, which is appropriate for modeling lifetimes, as time cannot be negative. This setup provides a flexible framework for analyzing the probabilistic behavior of lifetimes in various applications. Here, we present the relationship between Rényi and Tsallis entropy. For a non-negative random variable T with density function f(t), Rényi entropy introduces a tunable parameter β, allowing different aspects of the distribution's uncertainty to be emphasized. This parameterized form enables more flexibility in analyzing the behavior of uncertainty across various probability models. It is formally defined as:
$$H^{R}_\beta(T) = \frac{1}{1-\beta}\,\log\!\left(\int_0^\infty f^{\beta}(t)\,dt\right), \quad \beta > 0,\ \beta \neq 1. \quad (6)$$
Both Tsallis and Rényi entropies serve as measures of deviation from uniformity, as they quantify the concentration of the probability distribution f. These entropy measures can take values across the extended real line, i.e., within the interval $[-\infty, \infty]$. For an absolutely continuous, non-negative random variable T, it is established that $H_\beta(T) \ge H(T)$ for all $0 < \beta < 1$, and $H_\beta(T) \le H(T)$ for all $\beta > 1$. Furthermore, the relationship between Tsallis and Rényi entropies follows a similar pattern: $H_\beta(T) \ge H^{R}_\beta(T)$ when $0 < \beta < 1$, and $H_\beta(T) \le H^{R}_\beta(T)$ when $\beta > 1$. In the theorem that follows, we explore the connection between the Shannon differential entropy under the proportional hazards rate model, as defined in Equation (5), and the corresponding Tsallis entropy.
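These orderings are easy to confirm for a concrete case; a sketch for the standard exponential, for which $\int_0^\infty f^\beta(t)\,dt = 1/\beta$, so all three measures have closed forms:

```python
import numpy as np

shannon = 1.0                      # H(T) for the standard exponential
for beta in (0.5, 2.0):
    c = 1.0 / beta                 # integral of f**beta
    tsallis = (1 - c) / (beta - 1)
    renyi = np.log(c) / (1 - beta)
    print(beta, tsallis, renyi, shannon)
# beta = 0.5: tsallis (2.0) >= shannon (1.0) and tsallis >= renyi (~1.386)
# beta = 2.0: tsallis (0.5) <= shannon (1.0) and tsallis <= renyi (~0.693)
```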
Theorem 1.
Let T be an absolutely continuous, non-negative random variable. Then, for all and for all .
Proof.
By the log-sum inequality (see Cover and Thomas [15]), we have
which implies
where denotes the hazard rate function of . By noting that
we get the results for all , and hence the theorem. □
2.2. Expression and Stochastic Orders
To derive the Tsallis entropy for a consecutive k-out-of-n good system, we begin by applying the probability integral transformation $U = F(T)$, where F is the continuous cumulative distribution function of the component lifetimes. Under standard assumptions, this transformation maps each component lifetime into a variable that follows a uniform distribution on the interval [0, 1]. Leveraging this property, we obtain an explicit form for the Tsallis entropy of the system lifetime $T_{k|n:G}$, assuming that the component lifetimes are independently and identically distributed. Based on Equation (1), the probability density function of $T_{k|n:G}$ is expressed as:
$$f_{k|n:G}(t) = \left[k(n-k+1)\,S^{k-1}(t) - (k+1)(n-k)\,S^{k}(t)\right] f(t), \quad t > 0.$$
Furthermore, when $V = F(T_{k|n:G})$, the pdf of $V$ can be represented as follows:
$$g_{k|n}(u) = k(n-k+1)(1-u)^{k-1} - (k+1)(n-k)(1-u)^{k}, \quad 0 < u < 1. \quad (10)$$
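As a sanity check on the reconstructed density in Equation (10), it should integrate to one over (0, 1) for any admissible pair (n, k); a quick numerical sketch:

```python
from scipy.integrate import quad

def g_kn(u, n, k):
    """Reconstructed pdf of F(T_{k|n:G}) on (0, 1), valid for 2k >= n."""
    return k*(n - k + 1)*(1 - u)**(k - 1) - (k + 1)*(n - k)*(1 - u)**k

for n, k in [(4, 2), (3, 2), (18, 10)]:
    total, _ = quad(g_kn, 0, 1, args=(n, k))
    print(n, k, round(total, 10))   # each prints 1.0, as a pdf must
```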
We next state a key result that follows directly from the preceding analysis. As the proof closely parallels the argument used in Theorem 1 of Mesfioui et al. [23], it is omitted here for brevity.
Proposition 1.
Let $T_{k|n:G}$ denote the system lifetime of a consecutive k-out-of-n good system, where $2k \ge n$. Then, for all $\beta > 0$, $\beta \neq 1$, the Tsallis entropy of $T_{k|n:G}$ is given by:
$$H_\beta(T_{k|n:G}) = \frac{1}{\beta-1}\left(1 - \int_0^1 g^{\beta}_{k|n}(u)\, f^{\beta-1}\big(F^{-1}(u)\big)\,du\right). \quad (11)$$
In the next theorem, an alternative formulation of $H_\beta(T_{k|n:G})$ is derived using Proposition 1 in conjunction with Newton's generalized binomial theorem.
Theorem 2.
Under the conditions of Proposition 1, we get
where and for all .
Proof.
where the third equality follows directly from Newton's generalized binomial series $(1+x)^{\beta} = \sum_{j=0}^{\infty}\binom{\beta}{j}x^{j}$, valid for $|x| < 1$. This result, in conjunction with Equation (11), completes the proof. □
By defining and , and referring to (10) and (11), we find that
To demonstrate the usefulness of the representation given in Equation (11), we consider the following illustrative example.
Example 1.
Consider a linear consecutive 2-out-of-4 good system whose lifetime is given by:
$$T_{2|4:G} = \max\big\{\min(T_1, T_2),\ \min(T_2, T_3),\ \min(T_3, T_4)\big\}.$$
Let us assume that the component lifetimes are i.i.d. and follow the log-logistic distribution (known as the Fisk distribution in economics). The pdf of this distribution with the shape parameter $\gamma > 0$ and the scale parameter one is represented as follows:
$$f(t) = \frac{\gamma\, t^{\gamma-1}}{(1+t^{\gamma})^{2}}, \quad t > 0.$$
After appropriate algebraic manipulation, the following identity is obtained:
As a result of specific algebraic manipulations, we obtain the following expression for the Tsallis entropy:
Due to the complexity of deriving a closed-form expression, numerical techniques are used to explore how the Tsallis entropy varies with the parameters β and γ. The analysis focuses on the consecutive 2-out-of-4 good system and is conducted for values and , since the integral diverges when or .
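The numerical exploration just described can be reproduced with straightforward quadrature of Equation (11); a sketch, assuming the reconstructed forms of $g_{2|4}$ and the unit-scale log-logistic density above:

```python
from scipy.integrate import quad

def f_quantile_loglogistic(u, gamma):
    """f(F^{-1}(u)) for the log-logistic with shape gamma and scale one."""
    t = (u / (1 - u))**(1.0 / gamma)          # quantile function F^{-1}(u)
    return gamma * t**(gamma - 1) / (1 + t**gamma)**2

def g_kn(u, n, k):
    return k*(n - k + 1)*(1 - u)**(k - 1) - (k + 1)*(n - k)*(1 - u)**k

def tsallis_system(beta, gamma, n=4, k=2):
    """H_beta(T_{2|4:G}) via Equation (11) for log-logistic components."""
    integrand = lambda u: g_kn(u, n, k)**beta * f_quantile_loglogistic(u, gamma)**(beta - 1)
    val, _ = quad(integrand, 0, 1)
    return (1 - val) / (beta - 1)

for beta, gamma in [(1.5, 1.5), (2.0, 2.0), (3.0, 3.0)]:
    print(beta, gamma, tsallis_system(beta, gamma))
# The values decrease as beta and gamma grow, in line with Figure 1.
```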
Figure 1 demonstrates that Tsallis entropy decreases as both β and γ increase. This behavior highlights the entropy’s sensitivity to changes in these parameters and emphasizes their influence on the system’s underlying uncertainty and information-theoretic profile.
Figure 1.
The plot of $H_\beta(T_{2|4:G})$ with respect to $\beta$ and $\gamma$, as demonstrated in Example 1.
Definition 1.
Assume two absolutely continuous nonnegative random variables $T_1$ and $T_2$ with pdfs $f_1$ and $f_2$, cdfs $F_1$ and $F_2$, and survival functions $S_1$ and $S_2$, respectively. Then, (i) $T_1 \le_{\mathrm{disp}} T_2$, i.e., $T_1$ is smaller than or equal to $T_2$ in the dispersive order, if and only if $F_1^{-1}(v) - F_1^{-1}(u) \le F_2^{-1}(v) - F_2^{-1}(u)$ for all $0 < u \le v < 1$; (ii) $T_1 \le_{\mathrm{hr}} T_2$, i.e., $T_1$ is smaller than $T_2$ in the hazard rate order, if $S_2(t)/S_1(t)$ is increasing for all $t \ge 0$; (iii) $T_1$ has a decreasing failure rate (DFR) property if $\lambda_1(t) = f_1(t)/S_1(t)$ is decreasing in $t$.
For a thorough discussion of stochastic ordering concepts, readers are referred to the seminal work of Shaked and Shanthikumar [32]. The next theorem follows directly from the representation established in Equation (11).
Theorem 3.
Let $T^1_{k|n:G}$ and $T^2_{k|n:G}$ be the lifetimes of two consecutive k-out-of-n good systems having i.i.d. component lifetimes with cdfs $F_1$ and $F_2$, respectively. If $T_1 \le_{\mathrm{disp}} T_2$, then $H_\beta(T^1_{k|n:G}) \le H_\beta(T^2_{k|n:G})$ for all $\beta > 0$, $\beta \neq 1$.
Proof.
If $T_1 \le_{\mathrm{disp}} T_2$, then for all $0 < u < 1$, we have
$$f_2\big(F_2^{-1}(u)\big) \le f_1\big(F_1^{-1}(u)\big).$$
This yields that $H_\beta(T^1_{k|n:G}) \le H_\beta(T^2_{k|n:G})$ for all $\beta > 0$, $\beta \neq 1$, by Equation (11), and this completes the proof. □
The following result formally establishes that, among consecutive k-out-of-n good systems whose components possess the decreasing failure rate (DFR) property, the series system attains the minimum Tsallis entropy.
Proposition 2.
Let $T_{k|n:G}$ denote the lifetime of a consecutive k-out-of-n good system, comprising i.i.d. components that exhibit the DFR property. Then, for $2k \ge n$, and for all $\beta > 0$, $\beta \neq 1$,
- (i) it holds that $H_\beta(T_{1:n}) \le H_\beta(T_{k|n:G})$;
- (ii) it holds that .
Proof.
(i) It is easy to see that $T_{1:n} \le_{\mathrm{hr}} T_{k|n:G}$. Furthermore, if $T$ exhibits the DFR property, then it follows that $T_{1:n}$ also possesses the DFR property. Due to Bagai and Kochar [33], it can be concluded that $T_{1:n} \le_{\mathrm{disp}} T_{k|n:G}$, and the claim follows immediately by recalling Theorem 3. (ii) Based on the findings presented in Proposition 3.2 of Navarro and Eryılmaz [34], it can be inferred that . Consequently, employing reasoning analogous to that in Part (i) leads to the desired result. □
An important application of Equation (11) is in comparing the Tsallis entropy of consecutive k-out-of-n good systems with independent components drawn from different lifetime distributions. This comparison is formally addressed in the following result.
Proposition 3.
Under the conditions of Theorem 3, if for all , and , for , then for all and .
Proof.
where . Assuming , based on Equation (11), we have
The first inequality holds because in and in when . The last inequality follows directly from Equation (14). Consequently, we have for , which completes the proof for the case . The proof for the case , follows a similar argument. □
Given that for all , Equation (2) implies
The following example serves to illustrate the practical application of the preceding proposition.
Example 2.
Assume coherent systems with lifetimes and , where are i.i.d. component lifetimes with a common cdf , and are i.i.d. component lifetimes with the common cdf . We can easily confirm that and , so . Additionally, since and , we have . Thus Theorem 3 implies that
2.3. Some Bounds
In situations where closed-form expressions for Tsallis entropy are unavailable, particularly for systems with diverse lifetime distributions or a large number of components, bounding techniques offer a practical approach for approximating the entropy's behavior over the system's lifetime. This subsection explores the use of analytical bounds to characterize the Tsallis entropy of consecutive k-out-of-n good systems. In particular, we present the following lemma and the results built on it, which establish a lower bound on the system's Tsallis entropy. This bound provides valuable insights into the entropy structure under realistic conditions and supports a deeper understanding of system-level uncertainty.
Lemma 1.
Consider a nonnegative continuous random variable $T$ with pdf $f$ and cdf $F$ such that $f(t) \le f(m) =: M < \infty$, where $m$ denotes the mode of the pdf $f$. Then, for $\beta > 0$, $\beta \neq 1$, we have
$$H_\beta(T) \ge \frac{1}{\beta-1}\left(1 - M^{\beta-1}\right).$$
Proof.
By noting that $f(t) \le M$, then for $\beta > 1$, we have $f^{\beta}(t) \le M^{\beta-1} f(t)$. Now, the identity $\int_0^\infty f(t)\,dt = 1$ implies that
$$\int_0^\infty f^{\beta}(t)\,dt \le M^{\beta-1},$$
and hence the result. When $0 < \beta < 1$, then we have $f^{\beta}(t) \ge M^{\beta-1} f(t)$. Now, since $\beta - 1 < 0$, by using similar arguments, we have the result. □
The following bound, obtained by applying the argument of Lemma 1 at the system level, is a lower bound for the Tsallis entropy of $T_{k|n:G}$. It is expressed in terms of the Tsallis entropy of a consecutive k-out-of-n good system assuming uniformly distributed component lifetimes, and it incorporates the mode of the original lifetime distribution: for all $\beta > 0$, $\beta \neq 1$,
$$H_\beta(T_{k|n:G}) \ge \frac{1}{\beta-1}\left(1 - M^{\beta-1}\left[1 - (\beta-1)\,H_\beta\big(F(T_{k|n:G})\big)\right]\right),$$
where $M = f(m)$ and $m$ denotes the mode of $f$.
Example 3.
Assume a linear consecutive k-out-of-n good system with lifetime $T_{k|n:G}$, where $2k \ge n$. Suppose further that the lifetimes of the components are i.i.d., having a common mixture of two Pareto distributions with pdf as follows:
Given that the mode of this distribution is , we can determine the mode value as . Consequently, from the preceding bound, we get
The next theorem establishes bounds for the Tsallis entropy of consecutive k-out-of-n good systems by relating it to the Tsallis entropy of the individual component lifetimes.
Theorem 4.
When $\beta > 1$, we have
$$H_\beta(T_{k|n:G}) \ge \frac{1}{\beta-1}\left[1 - M^{\beta}\left(1 - (\beta-1)\,H_\beta(T)\right)\right],$$
where $M = g_{k|n}(u^{*})$ with $u^{*} = 1 - \frac{(k-1)(n-k+1)}{(k+1)(n-k)}$; the inequality is reversed for $0 < \beta < 1$.
Proof.
The mode of $g_{k|n}$ is clearly observed at $u^{*} = 1 - \frac{(k-1)(n-k+1)}{(k+1)(n-k)}$. As a result, we can establish that $g_{k|n}(u) \le g_{k|n}(u^{*}) = M$ for $0 \le u \le 1$. Therefore, for $\beta > 1$, we can conclude that:
$$\int_0^1 g^{\beta}_{k|n}(u)\, f^{\beta-1}\big(F^{-1}(u)\big)\,du \le M^{\beta} \int_0^1 f^{\beta-1}\big(F^{-1}(u)\big)\,du = M^{\beta}\left(1 - (\beta-1)\,H_\beta(T)\right),$$
and hence the theorem. □
To demonstrate the lower bound established in Theorem 4, we now consider its application to a consecutive k-out-of-n good system.
Example 4.
Let us consider a linear consecutive 10-out-of-18 good system with lifetime $T_{10|18:G}$. In order to conduct the analysis, we assume that the lifetimes of the individual components in the system are i.i.d. according to a common standard exponential distribution. By performing a simple verification, we find that the optimal value $u^{*}$ is equal to 0.08, resulting in a corresponding value of $M$ as 4.268. Utilizing Theorem 4, we can write
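Under the reconstruction of Theorem 4 given above, Example 4's constants can be recomputed directly; a sketch (the maximizer $u^{*}$ follows from setting $g'_{k|n}(u) = 0$):

```python
n, k, beta = 18, 10, 2.0

g = lambda u: k*(n - k + 1)*(1 - u)**(k - 1) - (k + 1)*(n - k)*(1 - u)**k

# Interior maximizer of g_{k|n}: 1 - u* = (k-1)(n-k+1) / ((k+1)(n-k)).
u_star = 1 - (k - 1)*(n - k + 1) / ((k + 1)*(n - k))
M = g(u_star)
print(round(u_star, 2), round(M, 3))   # 0.08 and 4.268, as in Example 4

# Standard exponential components: H_beta(T) = 1/beta, so the beta > 1 bound is
H_comp = 1.0 / beta
bound = (1 - M**beta * (1 - (beta - 1)*H_comp)) / (beta - 1)
print(bound)                           # lower bound on H_beta(T_{10|18:G})
```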
The next result establishes bounds for consecutive k-out-of-n good systems based on the hazard rate function of the component lifetimes.
Proposition 4.
Let $T_1, \dots, T_n$ be the lifetimes of the components of a consecutive k-out-of-n good system with $2k \ge n$, having the common failure rate function $\lambda(t)$. If and , then
where has the pdf , for .
Proof.
The hazard rate function of can be easily represented by , where
Since for and , it follows that is a monotonically decreasing function of . Since and , we have for , which implies that , for . Combining this result with the relationship between Tsallis entropy and the hazard rate (as defined in Equation (4)) for , completes the proof. □
We now present an illustrative example to demonstrate the application of the preceding proposition.
Example 5.
Consider a linear consecutive 2-out-of-3 good system with lifetime $T_{2|3:G}$, where the component lifetimes are i.i.d. with an exponential distribution with the cdf $F(t) = 1 - e^{-\lambda t}$ for $t \ge 0$. The exponential distribution has a constant hazard rate, $\lambda(t) = \lambda$, so it follows that . Applying Proposition 4 yields the bounds on the Tsallis entropy of the system as . Based on (11), one can compute the exact value as , which falls within the bounds.
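For this example, the exact value from Equation (11) reduces to a one-line quadrature; a sketch, assuming a unit rate λ = 1 (the rate used in the original example is not recoverable here):

```python
from scipy.integrate import quad

lam, beta = 1.0, 2.0                       # assumed unit-rate components
g = lambda u: 4*(1 - u) - 3*(1 - u)**2     # g_{2|3}(u) from Equation (10)
fq = lambda u: lam * (1 - u)               # f(F^{-1}(u)) for Exp(lam)

val, _ = quad(lambda u: g(u)**beta * fq(u)**(beta - 1), 0, 1)
print((1 - val) / (beta - 1))              # exact H_2(T_{2|3:G}) = 0.3 when lam = 1
```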
The next theorem holds under the condition that the expected value of the squared hazard rate function of T exists.
Theorem 5.
Under the conditions of Proposition 4 such that , for and , it holds that
Proof.
It is not hard to see that , while its failure rate function is given by
Thus, by (4) and the Cauchy-Schwarz inequality, we have
In the last equality, we use the substitution , and this completes the proof. □
3. Characterization Results
This section presents characterization results for consecutive k-out-of-n good systems based on Tsallis entropy. The analysis focuses on linear consecutive $(n-i)$-out-of-$n$ good systems with $n \ge 2i$, so that the standing condition $2k \ge n$ holds for $k = n - i$. We begin by recalling a lemma that relies on the Müntz–Szász theorem, as presented by Kamps [35].
Lemma 3.
For an integrable function $\phi$ on the finite interval $(a, b)$: if $\int_a^b t^{n_j}\,\phi(t)\,dt = 0$ for all $j \ge 1$, then $\phi(t) = 0$ for almost all $t \in (a, b)$, where $\{n_j,\ j \ge 1\}$ is a strictly increasing sequence of positive integers satisfying $\sum_{j=1}^{\infty} n_j^{-1} = \infty$.
It is worth pointing out that Lemma 3 is a well-established result in functional analysis, stating that the set $\{t^{n_j},\ j \ge 1\}$ constitutes a complete sequence. Notably, Hwang and Lin [36] expanded the scope of the Müntz–Szász theorem to the functions $\{g^{n_j}(t),\ j \ge 1\}$, where $g$ is both absolutely continuous and monotonic over the interval $(a, b)$.
Theorem 6.
Let us assume two consecutive $(n-i)$-out-of-$n$ good systems with lifetimes $T^1_{n-i|n:G}$ and $T^2_{n-i|n:G}$, consisting of i.i.d. components with cdfs $F_1$ and $F_2$, and pdfs $f_1$ and $f_2$, respectively. Then $F_1$ and $F_2$ belong to the same family of distributions, but for a change in location, if and only if for a fixed $i$,
$$H_\beta\big(T^1_{n-i|n:G}\big) = H_\beta\big(T^2_{n-i|n:G}\big) \quad \text{for all } n \ge 2i.$$
Proof.
Since $F_1$ and $F_2$ are from the same location family, we have $F_2(t) = F_1(t - c)$ for all $t$ and some constant $c$, so that $f_2(F_2^{-1}(u)) = f_1(F_1^{-1}(u))$ for all $u \in (0, 1)$. Thus, Equation (11)
implies the necessity part. For the sufficiency part, for a consecutive ()-out-of-n good system, Equation (10) gives
where and By assumption , we can write
or equivalently
where
Applying Lemma 3 to the complete sequence with
yields the result that or . Consequently, $F_1$ and $F_2$ are part of the same distribution family, differing only in a location shift. □
Noting that a consecutive n-out-of-n good system corresponds to a classical series system, the following corollary provides a characterization of its Tsallis entropy.
Corollary 1.
Let $T^1_{n|n:G}$ and $T^2_{n|n:G}$ be two series systems whose components have the common pdfs $f_1$ and $f_2$ and cdfs $F_1$ and $F_2$, respectively. Then $F_1$ and $F_2$ belong to the same family of distributions, but for a change in location, if and only if $H_\beta(T^1_{n|n:G}) = H_\beta(T^2_{n|n:G})$ for all $n \ge 1$.
An additional useful characterization is presented in the following theorem.
Theorem 7.
Under the conditions of Theorem 6, $F_1$ and $F_2$ belong to the same family of distributions, but for a change in location and scale, if and only if for a fixed $i$,
Proof.
The necessity is trivial. To establish sufficiency, we leverage Equations (6) and (18) to derive
An analogous argument can be made for . If relation (22) holds for two cdfs and , then we can infer from Equation (23) that
Let us set . By similar arguments as in Theorem 6, we have
The proof follows similarly to Theorem 6. □
Applying Theorem 7 yields the following corollary.
Corollary 2.
Suppose the assumptions of Corollary 1 hold. Then $F_1$ and $F_2$ belong to the same family of distributions, but for a change in location and scale, if and only if for a fixed ,
The following theorem characterizes the exponential distribution through Tsallis entropy within the framework of consecutive k-out-of-n good systems. This result serves as the theoretical basis for a newly proposed goodness-of-fit test for exponentiality, intended to be applicable across a wide variety of datasets. To establish this characterization, we begin by introducing the lower incomplete beta function, defined as:
$$B_x(a, b) = \int_0^x u^{a-1}(1 - u)^{b-1}\,du, \quad 0 < x \le 1,$$
where $a$ and $b$ are positive real numbers. When $x = 1$, this expression reduces to the complete beta function $B(a, b)$.
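Numerically, the lower incomplete beta function is available through SciPy, whose `betainc` is the regularized version and must therefore be rescaled; a small sketch:

```python
from scipy.special import betainc, beta as beta_fn

def lower_incomplete_beta(x, a, b):
    """B_x(a, b) = integral of u**(a-1) * (1-u)**(b-1) over [0, x]."""
    return betainc(a, b, x) * beta_fn(a, b)

print(lower_incomplete_beta(1.0, 2.0, 3.0))   # complete beta B(2, 3) = 1/12
print(lower_incomplete_beta(0.5, 2.0, 3.0))   # a strictly smaller value
```

We now present the main result of this section.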
Theorem 8.
Let us assume that $T_{n-i|n:G}$ is the lifetime of the consecutive $(n-i)$-out-of-$n$ good system having i.i.d. component lifetimes with the common pdf $f$ and cdf $F$. Then $T$ has an exponential distribution with the parameter $\lambda > 0$ if and only if, for a fixed $i$, Equation (25) holds for all $n \ge 2i$ and $\beta > 0$, $\beta \neq 1$.
Proof.
Given an exponentially distributed random variable $T$ with parameter $\lambda$, its Tsallis entropy, directly calculated using Equation (2), is $H_\beta(T) = \frac{1}{\beta-1}\left(1 - \frac{\lambda^{\beta-1}}{\beta}\right)$. Furthermore, since $f(F^{-1}(u)) = \lambda(1 - u)$ for $0 < u < 1$, application of Equation (11) yields:
for . To derive the second term, let us set and , upon recalling (10), it holds that
where the necessity is derived. To establish sufficiency, we assume that Equation (25) holds for a fixed value of and assume that . Following the proof of Theorem 6 and utilizing the result in Equation (26), we obtain the following relation
which is equivalent to
where is defined in (18). Thus, it holds that
where is defined in (21). Applying Lemma 3 to the function
and utilizing the complete sequence , we can deduce that
This implies that
Solving this equation yields , where is an arbitrary constant. Utilizing the boundary condition , we determine that . Consequently, for . This implies the cdf , confirming that $T$ follows an exponential distribution with scale parameter $\lambda$. This establishes the theorem. □
4. Tsallis Entropy-Based Exponentiality Testing
In this section, we propose a nonparametric method for estimating the Tsallis entropy of consecutive k-out-of-n good systems. Given the wide applicability of the exponential distribution in reliability and lifetime modeling, numerous test statistics have been developed to assess exponentiality, many of which are grounded in core principles of statistical theory. The primary objective here is to test whether the distribution of a random variable T follows an exponential law. Let $F_0(t) = 1 - e^{-\lambda t}$, for $t \ge 0$ and some $\lambda > 0$, denote the cumulative distribution function under the null hypothesis. The hypothesis to be tested is formally stated as follows:
$$H_0: F = F_0 \quad \text{versus} \quad H_1: F \neq F_0.$$
A specific case of Tsallis entropy of order 2, referred to as extropy, has recently attracted considerable attention as a useful metric for goodness-of-fit testing. Qiu and Jia [37] pioneered the development of two consistent estimators for extropy based on the concept of spacings and subsequently introduced a goodness-of-fit test for the uniform distribution using the more efficient of the two estimators. In a related contribution, Xiong et al. [38] utilized properties of classical record values to derive a characterization result for the exponential distribution, leading to the development of a novel exponentiality test. Their study outlined the test statistic in detail and demonstrated its effectiveness, particularly in small-sample scenarios. Building on this foundation, Jose and Sathar [39] proposed a new test for exponentiality based on a characterization involving the extropy of lower r-record values. Extending these developments, the present section explores the Tsallis entropy of consecutive k-out-of-n good systems. As established in Theorem 8, the exponential distribution can be uniquely characterized through the Tsallis entropy associated with such systems. Leveraging Equation (25), and following appropriate simplification, we now propose a new test statistic for exponentiality, denoted by , defined for as follows:
where
then Theorem 8 directly implies that the quantity equals zero if and only if $T$ is exponentially distributed. This fundamental property establishes it as a viable measure of exponentiality and a suitable candidate for a test statistic. Given a random sample, an estimator of this quantity can be used as a test statistic. Significant deviations of the estimate from its expected value under the null hypothesis (i.e., the assumption of an exponential distribution) would indicate non-exponentiality, prompting the rejection of the null hypothesis. Consider a random sample of size $N$, denoted by $T_1, T_2, \ldots, T_N$, drawn from an absolutely continuous distribution $F$, and let $T_{(1)} \le T_{(2)} \le \cdots \le T_{(N)}$ represent the corresponding order statistics. To estimate the test statistic, we adopt an estimator proposed by Vasicek [40], as follows:
and $m$ is a positive integer smaller than $N/2$, which is known as the window size, with the conventions $T_{(i)} = T_{(1)}$ for $i < 1$ and $T_{(i)} = T_{(N)}$ for $i > N$. So, a reasonable estimator for the test statistic can be derived using Equation (30) as follows:
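In code, the spacing construction behind Equations (30) and (31) can be sketched as follows; the helper name and the final plug-in quantity are illustrative, not the paper's exact statistic:

```python
import numpy as np

def spacing_density_quantile(x, m):
    """Vasicek-type estimate of f(F^{-1}(i/N)) from sample spacings:
    2m / (N * (T_(i+m) - T_(i-m))), with indices clamped at 1 and N."""
    x = np.sort(x)
    N = len(x)
    lo = np.clip(np.arange(N) - m, 0, N - 1)
    hi = np.clip(np.arange(N) + m, 0, N - 1)
    return 2 * m / (N * (x[hi] - x[lo]))

rng = np.random.default_rng(1)
sample = rng.exponential(size=200)
est = spacing_density_quantile(sample, m=8)

# Example plug-in use: estimate the integral of f^{beta-1}(F^{-1}(u)) over
# (0, 1), which equals the integral of f^beta dt (= 1/beta for Exp(1)).
beta = 2.0
print(np.mean(est**(beta - 1)))   # should be close to 0.5
```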
Establishing estimator consistency is a fundamental step when evaluating estimators for parametric functions. The following theorem confirms the consistency of the estimator defined in Equation (31). The proof follows an approach similar to that of Theorem 1 in Vasicek [40], who introduced a widely adopted technique for proving consistency in entropy-based statistics. This method has also been employed by Park [41] and Xiong et al. [38] to validate the reliability of their respective test statistics.
Theorem 9.
Assume that $T_1, T_2, \ldots, T_N$ is a random sample of size $N$ taken from a population with pdf $f$ and cdf $F$. Also, let the variance of the random variable be finite. Then the estimator converges in probability to its population counterpart as $N \to \infty$, $m \to \infty$, and $m/N \to 0$, for all $\beta > 0$, $\beta \neq 1$.
Proof.
To establish the consistency of the estimator, we employ the approach given in Noughabi and Arghami [42]. As both $N$ and $m$ tend to infinity, with the ratio $m/N$ approaching 0, we can approximate the density as follows:
where $F_N$ represents the empirical distribution function. Furthermore, given that , we can express
where the second approximation relies on the almost sure convergence of the empirical distribution function, i.e., $F_N(t) \to F(t)$ almost surely as $N \to \infty$. Now, applying the Strong Law of Large Numbers, we have
This convergence demonstrates that is a consistent estimator of and hence completes the proof of consistency for all . □
The root mean square error (RMSE) of the estimator is invariant under location shifts in the random variable T, but not under scale transformations. This property is formally established in the following theorem by adapting the arguments of Ebrahimi et al. [43].
Theorem 10.
Assume that $T_1, T_2, \ldots, T_N$ is a random sample of size $N$ taken from a population with pdf $f$ and cdf $F$, and let $Y_i = aT_i + b$ with $a > 0$ and $b \ge 0$. Denote the estimators computed on the basis of $T_1, \ldots, T_N$ and of $Y_1, \ldots, Y_N$, respectively. Then, the following properties apply:
- (i) ;
- (ii) ;
- (iii) , for all .
Proof.
It is not hard to see from (25) that
Hence, we complete the proof by applying the properties of the mean, variance, and RMSE of . □
A variety of test statistics can be constructed by selecting different combinations of n and i. For computational implementation, we consider the case n = 3 and i = 1, which simplifies the evaluation of the expression in Equation (30).
The test statistic converges to zero asymptotically as the sample size $N$ approaches infinity under the null hypothesis $H_0$. Conversely, under an alternative distribution with an absolutely continuous cdf $F \neq F_0$, it converges to a positive value as $N \to \infty$. Consequently, for a finite sample size $N$, we reject the null hypothesis at a significance level α if the observed value of the statistic exceeds the critical value. However, the asymptotic distribution of the statistic is analytically intractable due to its complex dependence on $N$ and the window parameter $m$.
To address this challenge, we employed a Monte Carlo simulation approach. Specifically, 10,000 random samples of sizes were generated from the standard exponential distribution under the null hypothesis. For each sample size, we computed the $(1-\alpha)$-th quantile of the simulated values of the statistic to determine the critical values corresponding to significance levels $\alpha = 0.05$ and $\alpha = 0.01$, while varying the window size m from 2 to 30. Figure 2 and Figure 3 display the resulting critical values for each sample size and significance level. The critical values of the proposed test statistic at the significance level α = 0.01 are presented in Figure 3, which serve as the basis for evaluating the power performance through Monte Carlo simulations.
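The critical-value machinery is a routine Monte Carlo loop. The sketch below reuses `spacing_density_quantile` from the previous snippet; `test_statistic` is a stand-in placeholder (the paper's statistic comes from Equation (31)), chosen only so the code runs end to end:

```python
import numpy as np

rng = np.random.default_rng(2025)

def test_statistic(sample, m):
    """Placeholder statistic: distance between a spacing-based estimate of
    E[f(F^{-1}(U))] and its value lam/2 under the exponential null."""
    est = spacing_density_quantile(sample, m)
    lam_hat = 1.0 / np.mean(sample)          # MLE of the exponential rate
    return abs(np.mean(est) - lam_hat / 2.0)

def critical_value(N, m, alpha=0.05, reps=10_000):
    """(1 - alpha)-quantile of the statistic under the exponential null."""
    stats = [test_statistic(rng.exponential(size=N), m) for _ in range(reps)]
    return np.quantile(stats, 1 - alpha)

print(critical_value(N=50, m=7))
```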
Figure 2.
Critical values of the statistic at significance level $\alpha = 0.05$.
Figure 3.
Critical values of the statistic at significance level $\alpha = 0.01$.
Power Comparisons
The power of the test statistic was evaluated through a Monte Carlo simulation involving nine alternative probability distributions. For each specified sample size, 10,000 replicates of size N were drawn from each alternative distribution, and the value of the test statistic was computed for each replicate. The empirical power at a given significance level α was estimated as the proportion of test statistics exceeding the corresponding critical value. To assess the efficiency of the newly proposed test, its performance was benchmarked against several well-established tests for exponentiality reported in the literature. The specifications of the alternative distributions considered are summarized in Table 2.
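Continuing the sketch above, empirical power is simply the rejection rate against samples drawn from each alternative; the Weibull shape below is a hypothetical choice for illustration, not one of the paper's tabulated parameters:

```python
def empirical_power(alt_sampler, N, m, alpha=0.05, reps=10_000):
    """Fraction of alternative-distribution replicates whose statistic
    exceeds the null critical value for the same (N, m, alpha)."""
    cv = critical_value(N, m, alpha)
    hits = sum(test_statistic(alt_sampler(N), m) > cv for _ in range(reps))
    return hits / reps

# Hypothetical alternative: Weibull with shape 1.4.
print(empirical_power(lambda N: rng.weibull(1.4, size=N), N=50, m=7))
```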
Table 2.
Alternative Probability Distributions for Evaluating the Power of the Test Statistic.
The simulation setup, including the selection of alternative distributions and their associated parameters, closely follows the framework proposed by Jose and Sathar [39]. To evaluate the effectiveness of the newly proposed test based on the statistic , its performance is compared with several well-established tests for exponentiality documented in the literature. A summary of these comparative tests is provided in Table 3.
Table 3.
Competing Tests for Exponentiality.
The performance of the test statistic is influenced by the choice of window size m, making it necessary to determine an appropriate value in advance to ensure adequate statistical power. Simulation results across various sample sizes led to the empirical recommendation , where denotes the floor function. This heuristic formula offers a practical guideline for selecting m and aims to ensure robust power performance across a range of alternative distributions. To comprehensively evaluate the performance of the proposed test, we selected ten established tests for exponentiality and assessed their power against a diverse set of alternative distributions. Notably, Xiong et al. [38] proposed a test based on the extropy of classical record values, while Jose and Sathar [39] introduced a test statistic using extropy derived from lower records as a characterization of the exponential distribution.
The two tests, referred to as and in Table 3, are included in our comparative analysis due to their basis in information-theoretic principles. The original authors provided extensive justification for their use in testing exponentiality, highlighting their theoretical soundness and practical applicability. To estimate the power of each test, we simulated 10,000 independent samples for each sample size from each alternative distribution specified in Table 2. The power of the proposed test was then computed at the 5% significance level. Subsequently, power values for both the proposed statistic and the competing tests were obtained using the same simulation framework. A summary of the comparative results is presented in Table 4.
Table 4.
Power comparisons of the tests at the significance level $\alpha = 0.05$.
Overall, the test statistic exhibits strong discriminatory power in detecting departures from exponentiality in the direction of the gamma distribution. In contrast, its performance against other alternatives, such as the Weibull, uniform, half-normal, and log-normal distributions, is more moderate, reflecting a balanced sensitivity without displaying either pronounced strength or notable limitations.
5. Conclusions
This study has investigated the utility of Tsallis entropy as a flexible and informative measure of uncertainty within the reliability framework of consecutive k-out-of-n good systems. A central contribution lies in establishing a meaningful relationship between the Tsallis entropy of such systems, under general continuous lifetime distributions, and their counterparts governed by the uniform distribution. Given the analytical complexity involved in deriving closed-form entropy expressions, especially for systems with large n or heterogeneous component behaviors, we derived a suite of informative bounds. These approximations not only facilitate practical computation but also deepen the theoretical understanding of entropy dynamics in complex system structures. Additionally, we proposed a nonparametric estimator specifically tailored to the structure of consecutive k-out-of-n systems. Its consistency and performance were validated through simulation studies and empirical applications. This estimator provides a valuable tool for quantifying system-level uncertainty and supports broader applications such as statistical inference and pattern recognition, including image processing and reliability-centered decision-making. In summary, this work contributes to the growing literature on information-theoretic measures in reliability by (i) establishing theoretical foundations that link Tsallis entropy to system reliability behavior; (ii) introducing practical bounding techniques to overcome analytical intractability; and (iii) developing a robust entropy-based estimator suitable for practical use.
Despite these advancements, several promising avenues remain open for further exploration: (i) The current analysis assumes independent and identically distributed components. Extending the framework to systems with dependent or heterogeneous lifetimes, such as those governed by copula-based models or frailty structures, would significantly broaden applicability. (ii) Investigating Tsallis entropy in more general system configurations (e.g., coherent systems, phased-mission systems, or dynamic networks) could yield new insights into uncertainty and resilience. (iii) Developing online or censored-data versions of the Tsallis entropy estimator would enhance its relevance in real-world reliability monitoring and predictive maintenance applications. (iv) Leveraging entropy measures to guide optimal system design (e.g., maximizing reliability for a fixed entropy budget) represents a novel and practically important direction. (v) A systematic comparison between Tsallis, Rényi, and cumulative residual entropy within the same system contexts may reveal cases where one measure is superior in inference, diagnostics, or optimization. These directions highlight the richness of Tsallis entropy as both a theoretical construct and a practical tool in reliability analysis and statistical modeling.
Author Contributions
A.A.A.: Visualization, validation, software, resources, investigation, and editing, writing—original draft, and conceptualization; G.A.: visualization, investigation, validation, resources, and conceptualization; M.K.: writing—review and editing, writing—original draft, visualization, validation, software, resources, funding acquisition, data curation, and conceptualization. All authors have read and agreed to the published version of the manuscript.
Funding
Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Institutional Review Board Statement
This study did not involve human participants or animals.
Data Availability Statement
The datasets are available within the manuscript.
Acknowledgments
The authors gratefully thank the Academic Editor and the three anonymous reviewers for their insightful and constructive feedback, which has greatly enhanced the quality and clarity of this manuscript.
Conflicts of Interest
The authors declare that they have no conflicts of interest to report regarding the present study.
References
- Jung, K.H.; Kim, H. Linear consecutive-k-out-of-n:F system reliability with common-mode forced outages. Reliab. Eng. Syst. Saf. 1993, 41, 49–55. [Google Scholar] [CrossRef]
- Shen, J.; Zuo, M.J. Optimal design of series consecutive-k-out-of-n:G systems. Reliab. Eng. Syst. Saf. 1994, 45, 277–283. [Google Scholar] [CrossRef]
- Kuo, W.; Zuo, M.J. Optimal Reliability Modeling: Principles and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
- Chung, C.I.H.; Cui, L.; Hwang, F.K. Reliabilities of Consecutive-k Systems; Springer: Berlin/Heidelberg, Germany, 2013; Volume 4. [Google Scholar]
- Boland, P.J.; Samaniego, F.J. Stochastic ordering results for consecutive k-out-of-n:F systems. IEEE Trans. Reliab. 2004, 53, 7–10. [Google Scholar] [CrossRef]
- Eryılmaz, S. Mixture representations for the reliability of consecutive-k systems. Math. Comput. Model. 2010, 51, 405–412. [Google Scholar] [CrossRef]
- Eryılmaz, S. Conditional lifetimes of consecutive k-out-of-n systems. IEEE Trans. Reliab. 2010, 59, 178–182. [Google Scholar] [CrossRef]
- Eryılmaz, S. Reliability properties of consecutive k-out-of-n systems of arbitrarily dependent components. Reliab. Eng. Syst. Saf. 2009, 94, 350–356. [Google Scholar] [CrossRef]
- Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
- Havrda, J.; Charvát, F. Quantification method of classification processes: Concept of structural-entropy. Kybernetika 1967, 3, 30–35. [Google Scholar]
- Patil, G.P.; Taillie, C. Diversity as a concept and its measurement. J. Am. Stat. Assoc. 1982, 77, 548–567. [Google Scholar] [CrossRef]
- Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
- Kalbfleisch, J.D.; Prentice, R.L. The Statistical Analysis of Failure Time Data; Wiley: New York, NY, USA, 2011; Volume 360. [Google Scholar]
- Lawless, J.F. Statistical Models and Methods for Lifetime Data, Wiley Series in Probability and Statistics; Wiley-Interscience: New York, NY, USA, 2003. [Google Scholar]
- Cover, T.M.; Thomas, J. Elements of Information Theory; John Wiley and Sons Inc.: Hoboken, NJ, USA, 1991. [Google Scholar]
- Duran, M.V.; Reig, A.B.I.; Feixas, M.F.; Sbert, M. Tsallis mutual information for document classification. Entropy 2011, 13, 1694–1707. [Google Scholar]
- Wong, K.M.; Chen, S. The entropy of ordered sequences and order statistics. IEEE Trans. Inf. Theory 1990, 36, 276–284. [Google Scholar] [CrossRef]
- Park, S. The entropy of consecutive order statistics. IEEE Trans. Inf. Theory 1995, 41, 2003–2007. [Google Scholar] [CrossRef]
- Ebrahimi, N.; Soofi, E.S.; Soyer, R. Information measures in perspective. Int. Stat. Rev. 2010, 78, 383–412. [Google Scholar] [CrossRef]
- Zarezadeh, S.; Asadi, M. Results on residual Rényi entropy of order statistics and record values. Inf. Sci. 2010, 180, 4195–4206. [Google Scholar] [CrossRef]
- Toomaj, A.; Doostparast, M. A note on signature-based expressions for the entropy of mixed r-out-of-n systems. Naval Res. Logist. 2014, 61, 202–206. [Google Scholar] [CrossRef]
- Toomaj, A. Rényi entropy properties of mixed systems. Commun. Stat. Theory Methods 2017, 46, 906–916. [Google Scholar] [CrossRef]
- Mesfioui, M.; Kayid, M.; Shrahili, M. Rényi entropy of the residual lifetime of a reliability system at the system level. Axioms 2023, 12, 320. [Google Scholar] [CrossRef]
- Alomani, G.; Kayid, M. Further properties of Tsallis entropy and its application. Entropy 2023, 25, 199. [Google Scholar] [CrossRef] [PubMed]
- Baratpour, S.; Khammar, A.H. Results on Tsallis entropy of order statistics and record values. İstatistik J. Turk. Stat. Assoc. 2016, 8, 60–73. [Google Scholar]
- Kumar, V. Some results on Tsallis entropy measure and k-record values. Physica A 2016, 462, 667–673. [Google Scholar] [CrossRef]
- Kayid, M.; Alshehri, M.A. Shannon differential entropy properties of consecutive k-out-of-n:G systems. Oper. Res. Lett. 2024, 57, 107190. [Google Scholar] [CrossRef]
- Kayid, M.; Shrahili, M. Information properties of consecutive systems using fractional generalized cumulative residual entropy. Fractal Fract. 2024, 8, 568. [Google Scholar] [CrossRef]
- Tsallis, C.; Gell-Mann, M.; Sato, Y. Asymptotically scale-invariant occupancy of phase space makes the entropy Sq extensive. Proc. Natl. Acad. Sci. USA 2005, 102, 15377–15382. [Google Scholar] [CrossRef]
- Hanel, R.; Thurner, S.; Gell-Mann, M. Generalized entropies and the transformation group of superstatistics. Proc. Natl. Acad. Sci. USA 2011, 108, 6390–6394. [Google Scholar] [CrossRef]
- Hanel, R.; Thurner, S.; Gell-Mann, M. Generalized entropies and logarithms and their duality relations. Proc. Natl. Acad. Sci. USA 2012, 109, 19151–19154. [Google Scholar] [CrossRef]
- Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
- Bagai, I.; Kochar, S.C. On tail-ordering and comparison of failure rates. Commun. Stat. Theory Methods 1986, 15, 1377–1388. [Google Scholar] [CrossRef]
- Navarro, J.; Eryılmaz, S. Mean residual lifetimes of consecutive-k-out-of-n systems. J. Appl. Probab. 2007, 44, 82–98. [Google Scholar] [CrossRef]
- Kamps, U. Characterizations of distributions by recurrence relations and identities for moments of order statistics. In Handbook of Statistics; Elsevier: Amsterdam, The Netherlands, 1998; Volume 16, pp. 291–311. [Google Scholar]
- Hwang, J.S.; Lin, G.D. On a generalized moment problem. II. Proc. Amer. Math. Soc. 1984, 91, 577–580. [Google Scholar] [CrossRef]
- Qiu, G.; Jia, K. Extropy estimators with applications in testing uniformity. J. Nonparametr. Stat. 2018, 30, 182–196. [Google Scholar] [CrossRef]
- Xiong, P.; Zhuang, W.; Qiu, G. Testing exponentiality based on the extropy of record values. J. Appl. Stat. 2022, 49, 782–802. [Google Scholar] [CrossRef] [PubMed]
- Jose, J.; Sathar, E.I.A. Characterization of exponential distribution using extropy based on lower k-records and its application in testing exponentiality. J. Comput. Appl. Math. 2022, 402, 113816. [Google Scholar] [CrossRef]
- Vasicek, O. A test for normality based on sample entropy. J. R. Stat. Soc. Ser. B Stat. Methodol. 1976, 38, 54–59. [Google Scholar] [CrossRef]
- Park, S. A goodness-of-fit test for normality based on the sample entropy of order statistics. Stat. Probab. Lett. 1999, 44, 359–363. [Google Scholar] [CrossRef]
- Noughabi, H.A.; Arghami, N.R. Testing exponentiality based on characterizations of the exponential distribution. J. Stat. Comput. Simul. 2011, 81, 1641–1651. [Google Scholar] [CrossRef]
- Ebrahimi, N.; Pflughoeft, K.; Soofi, E.S. Two measures of sample entropy. Stat. Probab. Lett. 1994, 20, 225–234. [Google Scholar] [CrossRef]
- Fortiana, J.; Grané, A. A scale-free goodness-of-fit statistic for the exponential distribution based on maximum correlations. J. Stat. Plann. Inference 2002, 108, 85–97. [Google Scholar] [CrossRef]
- Choi, B.; Kim, K.; Song, S.H. Goodness-of-fit test for exponentiality based on Kullback-Leibler information. Commun. Stat. Simul. Comput. 2004, 33, 525–536. [Google Scholar] [CrossRef]
- Mimoto, N.; Zitikis, R. The Atkinson index, the Moran statistic, and testing exponentiality. J. Jpn. Stat. Soc. 2008, 38, 187–205. [Google Scholar] [CrossRef]
- Volkova, K.Y. On asymptotic efficiency of exponentiality tests based on Rossberg’s characterization. J. Math. Sci. 2010, 167, 489–493. [Google Scholar] [CrossRef]
- Zamanzade, E.; Arghami, N.R. Goodness-of-fit test based on correcting moments of modified entropy estimator. J. Stat. Comput. Simul. 2011, 81, 2077–2093. [Google Scholar] [CrossRef]
- Baratpour, S.; Rad, A.H. Testing goodness-of-fit for exponential distribution based on cumulative residual entropy. Commun. Stat. Theory Methods 2012, 41, 1387–1396. [Google Scholar] [CrossRef]
- Noughabi, H.A.; Arghami, N.R. Goodness-of-fit tests based on correcting moments of entropy estimators. Commun. Stat. Simul. Comput. 2013, 42, 499–513. [Google Scholar] [CrossRef]
- Volkova, K.Y.; Nikitin, Y.Y. Exponentiality tests based on Ahsanullah’s characterization and their efficiency. J. Math. Sci. 2015, 204, 42–54. [Google Scholar] [CrossRef]
- Torabi, H.; Montazeri, N.H.; Grané, A. A wide review on exponentiality tests and two competitive proposals with application on reliability. J. Stat. Comput. Simul. 2018, 88, 108–139. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).