1. Introduction
In recent decades, the study of consecutive $k$-out-of-$n$ systems has gained prominence due to their relevance across numerous engineering and industrial settings. These models effectively capture configurations found in applications such as communication relay networks, segmented pipeline systems, and complex assemblies used in high-energy physics. The operational behavior of such systems depends on the arrangement of their components and the logic that governs their collective functioning. A key configuration, known as the linear consecutive $k$-out-of-$n$ good system, consists of $n$ sequential components, each assumed to function independently and identically. The system remains operational provided that at least $k$ consecutive components are functioning simultaneously. Classical series and parallel structures appear as limiting cases of this setup: the $n$-out-of-$n$ case corresponds to a series system, while the $1$-out-of-$n$ configuration resembles a parallel structure. The foundational contributions of researchers such as Jung and Kim [1], Shen and Zuo [2], Kuo and Zuo [3], Chung et al. [4], Boland and Samaniego [5], and Eryılmaz [6,7] have laid the groundwork for this domain. Among the various system configurations, linear arrangements that satisfy the condition $2k \ge n$ are of particular significance.
These systems offer a meaningful compromise between analytical tractability and practical relevance in reliability engineering. In such models, the lifetime of the $i$-th component is represented by $T_i$, so that the component lifetimes are $T_1, T_2, \dots, T_n$, where each component follows a common continuous lifetime distribution characterized by its probability density function (pdf) $f(t)$, cumulative distribution function (cdf) $F(t)$, and survival function $S(t) = P(T > t)$. The overall system lifetime is denoted by $T_{k|n:G}$, representing the operational lifetime of a linear consecutive $k$-out-of-$n$ good system. Eryılmaz [8] established that when the condition $2k \ge n$ holds, the reliability function of such a system takes the form:
$$S_{T_{k|n:G}}(t) = (n-k+1)\,S^{k}(t) - (n-k)\,S^{k+1}(t), \quad t \ge 0. \tag{1}$$
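To fix ideas, the reliability in Equation (1) can be evaluated directly. The following minimal Python sketch is our own illustration (the function name and the exponential example are assumptions, not part of the original paper):

```python
import numpy as np

def consecutive_kn_good_reliability(S_t, n, k):
    """Reliability of a linear consecutive k-out-of-n:G system with i.i.d.
    components when 2k >= n, via Eq. (1):
        S_T(t) = (n - k + 1) * S(t)**k - (n - k) * S(t)**(k + 1).
    S_t: component survival probability(ies) at time t (scalar or array)."""
    if 2 * k < n:
        raise ValueError("the closed form of Eq. (1) requires 2k >= n")
    S_t = np.asarray(S_t, dtype=float)
    return (n - k + 1) * S_t**k - (n - k) * S_t**(k + 1)

# Example: a consecutive 2-out-of-3:G system with exponential(1) components.
t = np.linspace(0.0, 3.0, 7)
print(consecutive_kn_good_reliability(np.exp(-t), n=3, k=2))
```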
In the context of information theory, a key aim is to measure the uncertainty associated with probability distributions. This study examines the use of Tsallis entropy as a tool for evaluating such uncertainty in consecutive $k$-out-of-$n$ good systems, where the components are assumed to have continuous lifetime distributions. It is important to highlight that Tsallis [9] stimulated a resurgence of interest in generalized entropy measures, often referred to as Tsallis entropy, building on earlier work by Havrda and Charvát [10] and on the independent development of similar concepts in ecology by Patil and Taillie [11]. Tsallis entropy provides a flexible alternative to classical measures and is widely regarded for its capacity to capture non-extensive behavior. The Tsallis entropy of order $\beta$ is mathematically defined as:
$$H_\beta(T) = \frac{1}{\beta-1}\left(1 - \int_0^\infty f^{\beta}(t)\,dt\right) = \frac{1}{\beta-1}\left(1 - \int_0^1 f^{\beta-1}\!\left(F^{-1}(u)\right)du\right), \quad \beta > 0,\ \beta \neq 1, \tag{2}$$
where $F^{-1}(u) = \inf\{t : F(t) \ge u\}$, for $u \in (0, 1)$, represents the quantile function of $F$.
Of particular note, the Shannon differential entropy, introduced by Shannon [12], emerges as a special case of Tsallis entropy in the limit as $\beta$ approaches 1. This foundational concept in information theory provides a baseline for measuring uncertainty under additive conditions. Mathematically, it is defined as:
$$H(T) = -\int_0^\infty f(t)\,\log f(t)\,dt. \tag{3}$$
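As a numerical illustration of Equations (2) and (3) and of the limit $\beta \to 1$, the following sketch (our own, assuming an exponential lifetime for concreteness) evaluates both measures by quadrature:

```python
import numpy as np
from scipy.integrate import quad

def tsallis_entropy(pdf, beta):
    """H_beta(T) = (1 - int_0^inf f(t)^beta dt) / (beta - 1), Eq. (2)."""
    I, _ = quad(lambda t: pdf(t)**beta, 0.0, np.inf)
    return (1.0 - I) / (beta - 1.0)

def shannon_entropy(pdf):
    """H(T) = -int_0^inf f(t) log f(t) dt, Eq. (3)."""
    I, _ = quad(lambda t: -pdf(t) * np.log(pdf(t)), 0.0, np.inf)
    return I

lam = 2.0
f = lambda t: lam * np.exp(-lam * t)      # exponential(rate = 2) pdf
print(tsallis_entropy(f, beta=1.001))     # approaches the Shannon value
print(shannon_entropy(f))                 # exact: 1 - log(2) ~ 0.3069
```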
An alternative and insightful expression of Tsallis entropy can be obtained by reformulating Equation (2) in terms of the hazard rate function. This representation provides a useful perspective in reliability analysis, particularly when examining lifetime distributions. The resulting form is given by:
$$H_\beta(T) = \frac{1}{\beta-1}\left(1 - \frac{1}{\beta}\,E\!\left[\lambda^{\beta-1}(T_\beta)\right]\right), \tag{4}$$
where $\lambda(t) = f(t)/S(t)$ represents the hazard rate function, $E$ denotes the expectation, and $T_\beta$ follows a pdf given by:
$$f_\beta(t) = \beta\,S^{\beta-1}(t)\,f(t), \quad t \ge 0. \tag{5}$$
Note that Equation (5) is known as the proportional hazards model in the literature and defines a general class of distributions, sometimes called exponentiated survival distributions. The term proportional hazards model is widely used in survival analysis to describe the relative risk (Cox) model; in the present setting, the parameter $\beta$ controls the tail behavior of the distributions. For more details, see, e.g., Kalbfleisch and Prentice [13] and Lawless [14]. It has long been understood that entropic measures for continuous random variables often yield negative values across many distribution types. This observation holds for various entropy functionals, including those proposed by Shannon and Tsallis: while discrete random variables yield non-negative values under these measures, continuous ones can produce negative values. Moreover, Tsallis and Shannon entropies share invariance under location shifts and sensitivity to scale transformations. Unlike Shannon entropy, which is additive for independent random variables, Tsallis entropy is non-additive, making it a more general and adaptable tool in complex systems. Specifically, for two independent continuous random variables $T_1$ and $T_2$, the Tsallis entropy satisfies the relation:
$$H_\beta(T_1, T_2) = H_\beta(T_1) + H_\beta(T_2) + (1-\beta)\,H_\beta(T_1)\,H_\beta(T_2);$$
for Shannon entropy the rightmost term vanishes, because $(1-\beta)H_\beta(T_1)H_\beta(T_2) \to 0$ as $\beta \to 1$, recovering the classical additivity property for independent $T_1$ and $T_2$:
$$H(T_1, T_2) = H(T_1) + H(T_2).$$
This non-additivity property, which holds even for independent random variables, underscores the versatility and theoretical strength of the Tsallis functional. When $T_1$ and $T_2$ are dependent, we have
$$H(T_1, T_2) = H(T_1) + H(T_2) - M(T_1; T_2),$$
where $M(T_1; T_2)$ is the mutual information (see, e.g., Cover and Thomas [15]). A similar result holds for dependent random variables in the context of Tsallis entropy, as demonstrated by Vila et al. [16]. In the field of telecommunications, entropic measures such as those of Shannon and Tsallis help define the fundamental performance boundaries of communication systems, including limits on data compression efficiency and transmission reliability. Shannon entropy, for instance, quantifies the average uncertainty or information content of messages from a given source. According to Shannon's Source Coding Theorem, entropy defines the minimum average number of bits per symbol required for lossless encoding. Moreover, the Shannon–Hartley Theorem links entropy to the maximum achievable rate for reliable communication over a noisy channel. In essence, Shannon entropy is a cornerstone of information theory, setting the theoretical limits for the two most essential tasks in digital communication: efficient compression and robust transmission. For clarity, Table 1 presents our notation, common alternatives found in the literature, symbols, conceptual definitions, and the locations of the mathematical definitions.
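The pseudo-additivity relation above can be verified numerically. In the sketch below (our own check, using two exponential densities as an assumed example), the joint integral factorizes for independent variables:

```python
import numpy as np
from scipy.integrate import quad

def tsallis(pdf, beta):
    """Tsallis entropy of a nonnegative random variable from its pdf."""
    I, _ = quad(lambda t: pdf(t)**beta, 0.0, np.inf)
    return (1.0 - I) / (beta - 1.0)

beta = 1.5
f1 = lambda t: 1.0 * np.exp(-1.0 * t)   # exponential(1)
f2 = lambda t: 2.0 * np.exp(-2.0 * t)   # exponential(2)

# For independent T1, T2 the joint integral of the squared-density type
# factorizes: int int (f1 f2)^beta = (int f1^beta) * (int f2^beta).
I1, _ = quad(lambda t: f1(t)**beta, 0.0, np.inf)
I2, _ = quad(lambda t: f2(t)**beta, 0.0, np.inf)
joint = (1.0 - I1 * I2) / (beta - 1.0)

h1, h2 = tsallis(f1, beta), tsallis(f2, beta)
pseudo = h1 + h2 + (1.0 - beta) * h1 * h2   # pseudo-additivity rule
print(np.isclose(joint, pseudo))            # True
```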
The study of information-theoretic measures within the context of reliability systems and order statistics has attracted growing interest in recent decades. Several foundational works have shaped this domain, including contributions by Wong and Chen [17], Park [18], Ebrahimi et al. [19], Zarezadeh and Asadi [20], Toomaj and Doostparast [21], Toomaj [22], and Mesfioui et al. [23], among others. Building on this foundation, Alomani and Kayid [24] extended the analysis of Tsallis entropy to coherent and mixed systems, assuming independent and identically distributed (i.i.d.) component lifetimes. Further developments include Baratpour and Khammar [25], who investigated the entropy's behavior with respect to order statistics and record values, and Kumar [26], who analyzed its relevance in the study of $r$-records. Kayid and Alshehri [27] provided notable advancements by deriving a closed-form expression for the lifetime entropy of consecutive $k$-out-of-$n$ good systems. Their work also established a characterization result, proposed practical bounds, and introduced a nonparametric estimation method. Complementing these efforts, Kayid and Shrahili [28] focused on the fractional generalized cumulative residual entropy in similar systems, presenting a computational framework, establishing several preservation properties, and offering two nonparametric estimators supported by simulation-based evidence.
Building on previous studies, this work seeks to provide a more comprehensive understanding of how Tsallis entropy can be applied to analyze consecutive $k$-out-of-$n$ good systems. We expand upon earlier findings by offering new insights into the structural properties of these systems, proposing improved bounding strategies, and developing estimation techniques tailored to their unique reliability characteristics. Although this study centers on consecutive systems with i.i.d. components, it is worth recognizing that more general binary systems, involving non-i.i.d. structures, have received significant attention in the literature. Notable contributions in this direction include the works of Tsallis et al. [29] and Hanel et al. [30,31], which examine how dependencies among components influence entropic formulations and their theoretical implications.
The structure of this paper is outlined as follows. In Section 2, we introduce a novel expression for the Tsallis entropy of consecutive $k$-out-of-$n$ good systems, denoted by $H_\beta(T_{k|n:G})$, where component lifetimes are drawn from a general continuous distribution function $F$. This formulation is developed using the uniform distribution as a starting point. Due to the challenges involved in deriving closed-form results for Tsallis entropy in complex reliability settings, we also establish several analytical bounds, supported by illustrative numerical examples. Section 3 focuses on characterization results, highlighting key theoretical properties of Tsallis entropy in the setting of consecutive systems. In Section 4, we present computational validation of our findings and propose a nonparametric estimator specifically designed for evaluating system-level Tsallis entropy. The estimator's performance is assessed using both simulated and empirical data. Finally, Section 5 summarizes the main conclusions and discusses their broader implications.
2. Tsallis Entropy of Consecutive k-out-of-n Good System
This section is structured into three parts. We begin with a brief overview of essential properties of Tsallis entropy and its connections to other well-known measures, such as Rényi and Shannon differential entropies. In the second part, we derive a closed-form expression for the Tsallis entropy in the context of consecutive k-out-of-n good systems and analyze its behavior with respect to different stochastic orderings. The final part introduces a series of analytical bounds that further clarify the entropy characteristics of these systems.
2.1. Results on Tsallis Entropy
In this paper, we consider a random variable, denoted by $T$, which is assumed to be absolutely continuous and nonnegative. A random variable is a mathematical construct used to represent the outcome of a random process, assigning numerical values to each possible outcome. In this context, $T$ specifically represents the lifetime of a component, system, or living organism, meaning it quantifies the duration until a specific event occurs, such as the failure of a mechanical component, the breakdown of a system, or the death of an organism. The term absolutely continuous implies that the random variable $T$ has a probability density function, allowing for a continuous range of possible values (e.g., any positive real number) rather than being restricted to discrete values. The nonnegative property ensures that $P(T \ge 0) = 1$, which is appropriate for modeling lifetimes, as time cannot be negative. This setup provides a flexible framework for analyzing the probabilistic behavior of lifetimes in various applications. Here, we present the relationship between Rényi and Tsallis entropy. For a nonnegative random variable $T$ with density function $f(t)$, Rényi entropy introduces a tunable parameter $\beta$, allowing different aspects of the distribution's uncertainty to be emphasized. This parameterized form enables more flexibility in analyzing the behavior of uncertainty across various probability models. It is formally defined as:
$$H_\beta^{R}(T) = \frac{1}{1-\beta}\,\log \int_0^\infty f^{\beta}(t)\,dt, \quad \beta > 0,\ \beta \neq 1. \tag{6}$$
Both Tsallis and Rényi entropies serve as measures of deviation from uniformity, as they quantify the concentration of the probability distribution $f$. These entropy measures can take values across the extended real line, i.e., within the interval $[-\infty, \infty]$. For an absolutely continuous, nonnegative random variable $T$, it is established that $H_\beta(T) \ge H(T)$ for all $0 < \beta < 1$, and $H_\beta(T) \le H(T)$ for all $\beta > 1$. Furthermore, the relationship between Tsallis and Rényi entropies follows a similar pattern: $H_\beta(T) \ge H_\beta^{R}(T)$ when $0 < \beta < 1$, and $H_\beta(T) \le H_\beta^{R}(T)$ when $\beta > 1$. In the theorem that follows, we explore the connection between the Shannon differential entropy under the proportional hazards rate model, as defined in Equation (5), and the corresponding Tsallis entropy.
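Before turning to that theorem, the orderings just stated can be confirmed numerically; the following sketch (our own, with an exponential density as an assumed test case) evaluates all three measures on either side of $\beta = 1$:

```python
import numpy as np
from scipy.integrate import quad

def measures(pdf, beta):
    """Return (Tsallis, Renyi, Shannon) entropies of a pdf on [0, inf)."""
    I, _ = quad(lambda t: pdf(t)**beta, 0.0, np.inf)
    tsallis = (1.0 - I) / (beta - 1.0)
    renyi = np.log(I) / (1.0 - beta)
    shannon, _ = quad(lambda t: -pdf(t) * np.log(pdf(t)), 0.0, np.inf)
    return tsallis, renyi, shannon

f = lambda t: 0.5 * np.exp(-0.5 * t)          # exponential(rate = 1/2)
for beta in (0.5, 2.0):
    ts, re, sh = measures(f, beta)
    print(f"beta={beta}: Tsallis={ts:.3f}, Renyi={re:.3f}, Shannon={sh:.3f}")
# beta = 0.5: Tsallis >= Shannon and Tsallis >= Renyi;
# beta = 2.0: both inequalities reverse.
```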
Theorem 1.
Let $T$ be an absolutely continuous, nonnegative random variable. Then, for all … and for all …,
… .
Proof. By the log-sum inequality (see Cover and Thomas [15]), we have
… ,
which implies
… ,
where $\lambda_\beta(t)$ denotes the hazard rate function of $T_\beta$. By noting that
… ,
we get the result for all admissible $\beta$, and hence the theorem. □
2.2. Expression and Stochastic Orders
To derive the Tsallis entropy for a consecutive $k$-out-of-$n$ good system, we begin by applying the probability integral transformation $U = F(T)$, where $F$ is the continuous cumulative distribution function of the component lifetimes. Under standard assumptions, this transformation maps the system lifetime into a variable that follows a uniform distribution on the interval $[0, 1]$. Leveraging this property, we obtain an explicit form for the Tsallis entropy of the system lifetime $T_{k|n:G}$, assuming that the component lifetimes are independently and identically distributed. Based on Equation (1), the probability density function of $T_{k|n:G}$ is expressed as:
$$f_{k|n:G}(t) = \left[k(n-k+1)\,S^{k-1}(t) - (k+1)(n-k)\,S^{k}(t)\right] f(t), \quad t \ge 0. \tag{9}$$
Furthermore, when $2k \ge n$, the pdf of $T_{k|n:G}$ can be represented as follows:
$$f_{k|n:G}(t) = g_V(F(t))\,f(t), \quad \text{where } g_V(u) = k(n-k+1)(1-u)^{k-1} - (k+1)(n-k)(1-u)^{k}, \quad 0 \le u \le 1, \tag{10}$$
and $g_V$ is the pdf of the transformed lifetime $V = F(T_{k|n:G})$.
We next state a key result that follows directly from the preceding analysis. As the proof closely parallels the argument used in Theorem 1 of Mesfioui et al. [23], it is omitted here for brevity.
Proposition 1.
Let $T_{k|n:G}$ denote the system lifetime of a consecutive $k$-out-of-$n$ good system, where $2k \ge n$. Then, for all $\beta > 0$, the Tsallis entropy of $T_{k|n:G}$ is given by:
$$H_\beta\!\left(T_{k|n:G}\right) = \frac{1}{\beta-1}\left(1 - \int_0^1 g_V^{\beta}(u)\,f^{\beta-1}\!\left(F^{-1}(u)\right)du\right). \tag{11}$$
In the next theorem, an alternative formulation of $H_\beta(T_{k|n:G})$ is derived using Theorem 1 in conjunction with Newton's generalized binomial theorem.
Theorem 2.
Under the conditions of Proposition 1, we get
… ,
where … and …, for all ….
Proof. By defining … and …, and referring to (10) and (11), we find that
… ,
where the third equality follows directly from Newton's generalized binomial series $(1+x)^{\alpha} = \sum_{j=0}^{\infty}\binom{\alpha}{j}x^{j}$, $|x| < 1$. This result, in conjunction with Equation (11), completes the proof. □
To demonstrate the usefulness of the representation given in Equation (11), we consider the following illustrative example.
Example 1.
Consider a linear consecutive 2-out-of-4 good system whose lifetime is given by:
$$T_{2|4:G} = \max\left\{\min(T_1, T_2),\ \min(T_2, T_3),\ \min(T_3, T_4)\right\}.$$
Let us assume that the component lifetimes are i.i.d. and follow the log-logistic distribution (known as the Fisk distribution in economics). The pdf of this distribution with shape parameter $\gamma$ and scale parameter one is:
$$f(t) = \frac{\gamma\,t^{\gamma-1}}{\left(1 + t^{\gamma}\right)^{2}}, \quad t > 0.$$
After appropriate algebraic manipulation, an identity for the integrand in Equation (11) is obtained, and further algebraic manipulation yields an expression for the Tsallis entropy of the system. Due to the complexity of deriving a closed-form expression, numerical techniques are used to explore how the Tsallis entropy varies with the parameters $\beta$ and $\gamma$. The analysis focuses on the consecutive 2-out-of-4 good system, restricted to the parameter region in which the defining integral converges.
Figure 1 demonstrates that Tsallis entropy decreases as both β and γ increase. This behavior highlights the entropy’s sensitivity to changes in these parameters and emphasizes their influence on the system’s underlying uncertainty and information-theoretic profile.
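These numerical values can be reproduced by direct quadrature of the system density in Equation (9); the sketch below is our own illustration, with the helper function names and the parameter grid chosen for demonstration:

```python
import numpy as np
from scipy.integrate import quad

def loglogistic_pdf(t, gamma):
    """Log-logistic (Fisk) pdf, shape gamma, scale 1."""
    return gamma * t**(gamma - 1.0) / (1.0 + t**gamma)**2

def loglogistic_sf(t, gamma):
    """Survival function S(t) = 1 / (1 + t^gamma)."""
    return 1.0 / (1.0 + t**gamma)

def system_tsallis(beta, gamma, n=4, k=2):
    """H_beta of the consecutive k-out-of-n:G lifetime via Eq. (9)."""
    def f_T(t):
        S = loglogistic_sf(t, gamma)
        f = loglogistic_pdf(t, gamma)
        return (k*(n - k + 1)*S**(k - 1) - (k + 1)*(n - k)*S**k) * f
    I, _ = quad(lambda t: f_T(t)**beta, 0.0, np.inf)
    return (1.0 - I) / (beta - 1.0)

for beta in (1.5, 2.0, 3.0):
    print(beta, [round(system_tsallis(beta, g), 4) for g in (1.5, 2.0, 3.0)])
```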
Definition 1.
Assume two absolutely continuous nonnegative random variables $T_1$ and $T_2$ with pdfs $f_1$ and $f_2$, cdfs $F_1$ and $F_2$, and survival functions $S_1$ and $S_2$, respectively. Then:
(i) $T_1 \le_{disp} T_2$, i.e., $T_1$ is smaller than or equal to $T_2$ in the dispersive order, if and only if $f_2\!\left(F_2^{-1}(u)\right) \le f_1\!\left(F_1^{-1}(u)\right)$ for all $u \in (0, 1)$;
(ii) $T_1 \le_{hr} T_2$, i.e., $T_1$ is smaller than $T_2$ in the hazard rate order, if $S_2(t)/S_1(t)$ is increasing for all $t \ge 0$;
(iii) $T_1$ has a decreasing failure rate (DFR) property if $f_1(t)/S_1(t)$ is decreasing in $t$.
For a thorough discussion of stochastic ordering concepts, readers are referred to the seminal work of Shaked and Shanthikumar [32]. The next theorem follows directly from the representation established in Equation (11).
Theorem 3.
Let $T^1_{k|n:G}$ and $T^2_{k|n:G}$ be the lifetimes of two consecutive $k$-out-of-$n$ good systems having i.i.d. component lifetimes with cdfs $F_1$ and $F_2$, respectively. If $T_1 \le_{disp} T_2$, then $H_\beta(T^1_{k|n:G}) \le H_\beta(T^2_{k|n:G})$ for all $\beta > 0$.
Proof. If $T_1 \le_{disp} T_2$, then for all $u \in (0, 1)$, we have $f_2\!\left(F_2^{-1}(u)\right) \le f_1\!\left(F_1^{-1}(u)\right)$. This yields that $H_\beta(T^1_{k|n:G}) \le H_\beta(T^2_{k|n:G})$ for all $\beta > 0$, and this completes the proof. □
The following result formally establishes that, among consecutive k-out-of-n good systems whose components possess the decreasing failure rate (DFR) property, the series system attains the minimum Tsallis entropy.
Proposition 2.
Let $T_{k|n:G}$ denote the lifetime of a consecutive $k$-out-of-$n$ good system, comprising i.i.d. components that exhibit the DFR property. Then, for $2k \ge n$, and for all $\beta > 0$,
- (i)
it holds that $H_\beta(T_{1:n}) \le H_\beta(T_{k|n:G})$, where $T_{1:n}$ denotes the lifetime of the series system;
- (ii)
it holds that $H_\beta(T_{k|n:G}) \le H_\beta(T_{k|n+1:G})$.
Proof. (i) It is easy to see that $T_{1:n} \le_{hr} T_{k|n:G}$. Furthermore, if the component lifetime exhibits the DFR property, then it follows that $T_{1:n}$ also possesses the DFR property. Due to Bagai and Kochar [33], it can be concluded that $T_{1:n} \le_{disp} T_{k|n:G}$, which immediately yields $H_\beta(T_{1:n}) \le H_\beta(T_{k|n:G})$ by recalling Theorem 3. (ii) Based on the findings presented in Proposition 3.2 of Navarro and Eryılmaz [34], it can be inferred that $T_{k|n:G} \le_{hr} T_{k|n+1:G}$. Consequently, employing analogous reasoning to that used in Part (i) leads to the corresponding result. □
An important application of Equation (11) is in comparing the Tsallis entropy of consecutive k-out-of-n good systems with independent components drawn from different lifetime distributions. This comparison is formally addressed in the following result.
Proposition 3.
Under the conditions of Theorem 3, if … for all …, and … for …, then … for all … and ….
Proof. Given that … for all …, Equation (2) implies
… ,
where …. Assuming …, based on Equation (11), we have
… .
The first inequality holds because … is monotone in … when …. The last inequality follows directly from Equation (14). Consequently, we have … for $\beta > 1$, which completes the proof for this case. The proof for the case $0 < \beta < 1$ follows a similar argument. □
The following example serves to illustrate the practical application of the preceding proposition.
Example 2.
Assume coherent systems with lifetimes … and …, where … are i.i.d. component lifetimes with a common cdf …, and … are i.i.d. component lifetimes with the common cdf …. We can easily confirm that … and …, so …. Additionally, since … and …, we have …. Thus, Theorem 3 implies that ….
2.3. Some Bounds
In situations where closed-form expressions for Tsallis entropy are unavailable, particularly for systems with diverse lifetime distributions or a large number of components, bounding techniques offer a practical approach for approximating the entropy’s behavior over the system’s lifetime. This subsection explores the use of analytical bounds to characterize the Tsallis entropy of consecutive k-out-of-n good systems. In particular, we present the following theorem, which establishes a lower bound on the system’s Tsallis entropy. This bound provides valuable insights into the entropy structure under realistic conditions and supports a deeper understanding of system-level uncertainty.
Lemma 1.
Consider a nonnegative continuous random variable $T$ with pdf $f$ and cdf $F$ such that $f(t) \le f(m) < \infty$, where $m$ denotes the mode of the pdf $f$. Then, for $\beta > 0$, we have
$$H_\beta(T) \ge \frac{1}{\beta-1}\left(1 - f^{\beta-1}(m)\right).$$
Proof. By noting that $f(t) \le f(m)$, then for $\beta > 1$, we have $f^{\beta}(t) \le f^{\beta-1}(m)\,f(t)$. Now, the identity $\int_0^\infty f(t)\,dt = 1$ implies that $\int_0^\infty f^{\beta}(t)\,dt \le f^{\beta-1}(m)$, and hence the result. When $0 < \beta < 1$, we have $f^{\beta-1}(t) \ge f^{\beta-1}(m)$. Now, since $\int_0^\infty f^{\beta}(t)\,dt \ge f^{\beta-1}(m)$, by using similar arguments, we obtain the result. □
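As a quick sanity check of Lemma 1 (our own illustration), note that an exponential($\lambda$) density has mode $m = 0$ with $f(m) = \lambda$, so the bound can be compared against the exact value obtained from Equation (2):

```python
import numpy as np
from scipy.integrate import quad

lam = 2.0
f = lambda t: lam * np.exp(-lam * t)   # mode at m = 0, f(m) = lam

for beta in (0.5, 2.0, 3.0):
    I, _ = quad(lambda t: f(t)**beta, 0.0, np.inf)
    exact = (1.0 - I) / (beta - 1.0)
    bound = (1.0 - lam**(beta - 1.0)) / (beta - 1.0)   # Lemma 1 lower bound
    assert exact >= bound
    print(beta, round(exact, 4), ">=", round(bound, 4))
```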
The following lemma presents a lower bound for the Tsallis entropy of $T_{k|n:G}$. This bound is expressed in terms of the Tsallis entropy of a consecutive $k$-out-of-$n$ good system with uniformly distributed component lifetimes, and it incorporates the mode of the original lifetime distribution.
Lemma 2.
Let $T_{k|n:G}$ be the lifetime of a consecutive $k$-out-of-$n$ good system with $2k \ge n$, whose i.i.d. component lifetimes have pdf $f$ with finite mode $m$, and let $V$ denote the corresponding system lifetime when the components are uniformly distributed on $[0, 1]$. Then, for all $\beta > 0$,
$$H_\beta\!\left(T_{k|n:G}\right) \ge \frac{1}{\beta-1}\left[1 - f^{\beta-1}(m)\left(1 - (\beta-1)H_\beta(V)\right)\right].$$
Example 3.
Assume a linear consecutive $k$-out-of-$n$ good system with lifetime $T_{k|n:G}$, where $2k \ge n$. Let further the lifetimes of the components be i.i.d., having as common distribution a mixture of two Pareto distributions with parameters … and …, with pdf as follows: …. Given that the mode of this distribution is …, we can determine the mode value as …. Consequently, from Lemma 2, we obtain the corresponding lower bound.

The next theorem establishes bounds for the Tsallis entropy of consecutive $k$-out-of-$n$ good systems by relating it to the Tsallis entropy of the individual component lifetimes.
Theorem 4.
When $2k \ge n$, we have, for all $\beta > 0$,
$$H_\beta\!\left(T_{k|n:G}\right) \ge \frac{1}{\beta-1}\left[1 - M^{\beta}\left(1 - (\beta-1)H_\beta(T)\right)\right],$$
where $M = g_V(u^{*})$ and $u^{*} = 1 - \frac{(k-1)(n-k+1)}{(k+1)(n-k)}$, provided that $u^{*} \in [0, 1]$.
Proof. The mode of $g_V$ is clearly observed at $u^{*}$. As a result, we can establish that $g_V(u) \le g_V(u^{*}) = M$ for $0 \le u \le 1$. Therefore, for $\beta > 1$, we can conclude that:
$$\int_0^1 g_V^{\beta}(u)\,f^{\beta-1}\!\left(F^{-1}(u)\right)du \le M^{\beta}\int_0^\infty f^{\beta}(t)\,dt = M^{\beta}\left(1 - (\beta-1)H_\beta(T)\right),$$
and the case $0 < \beta < 1$ follows by similar arguments; hence the theorem. □
To demonstrate the lower bound established in Theorem 4, we now consider its application to a consecutive $k$-out-of-$n$ good system.
Example 4.
Let us consider a linear consecutive 10-out-of-18 good system with lifetime $T_{10|18:G}$. In order to conduct the analysis, we assume that the lifetimes of the individual components in the system are i.i.d. according to a common standard exponential distribution. By a simple verification, we find that the optimal value is $u^{*} = 0.08$, resulting in a corresponding value of $M = g_V(u^{*}) = 4.268$. Utilizing Theorem 4, and noting that $1 - (\beta-1)H_\beta(T) = 1/\beta$ for the standard exponential distribution, we can write
$$H_\beta\!\left(T_{10|18:G}\right) \ge \frac{1}{\beta-1}\left[1 - \frac{(4.268)^{\beta}}{\beta}\right], \quad \beta > 0.$$
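The quantities in this example can be reproduced numerically; the following sketch (ours) checks the maximizer $u^{*}$, the constant $M$, and the gap between the exact entropy and the Theorem 4 bound, under the forms reconstructed in Equations (9) and (10):

```python
import numpy as np
from scipy.integrate import quad

n, k, beta = 18, 10, 2.0

# Maximizer and maximum of g_V on [0, 1] (Theorem 4).
u_star = 1.0 - (k - 1)*(n - k + 1) / ((k + 1)*(n - k))
g_V = lambda u: k*(n - k + 1)*(1 - u)**(k - 1) - (k + 1)*(n - k)*(1 - u)**k
M = g_V(u_star)
print(round(u_star, 2), round(M, 3))           # 0.08, 4.268

# Exact entropy for standard exponential components, via Eq. (9).
f_T = lambda t: k*(n - k + 1)*np.exp(-k*t) - (k + 1)*(n - k)*np.exp(-(k + 1)*t)
I, _ = quad(lambda t: f_T(t)**beta, 0.0, np.inf)
exact = (1.0 - I) / (beta - 1.0)
bound = (1.0 - M**beta / beta) / (beta - 1.0)  # Theorem 4 with H_beta(T) = 1/beta
print(round(exact, 3), ">=", round(bound, 3))  # approx -1.714 >= -8.109
```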
The next result establishes bounds for consecutive $k$-out-of-$n$ good systems based on the hazard rate function of the component lifetimes.
Proposition 4.
Let $T_1, \dots, T_n$ be the lifetimes of the components of a consecutive $k$-out-of-$n$ good system having the common failure rate function $\lambda(t)$. If $2k \ge n$ and $\beta > 1$, then
$$\frac{1}{\beta-1}\left(1 - \frac{k^{\beta-1}}{\beta}\,E\!\left[\lambda^{\beta-1}(W_\beta)\right]\right) \le H_\beta\!\left(T_{k|n:G}\right) \le \frac{1}{\beta-1}\left(1 - \frac{(2k-n)^{\beta-1}}{\beta}\,E\!\left[\lambda^{\beta-1}(W_\beta)\right]\right),$$
where $W_\beta$ has the pdf $f_{W_\beta}(t) = \beta\,S^{\beta-1}_{T_{k|n:G}}(t)\,f_{T_{k|n:G}}(t)$, for $t \ge 0$.
Proof. The hazard rate function of $T_{k|n:G}$ can easily be represented by $\lambda_{k|n:G}(t) = \phi(S(t))\,\lambda(t)$, where
$$\phi(s) = \frac{k(n-k+1) - (k+1)(n-k)\,s}{(n-k+1) - (n-k)\,s}, \quad 0 \le s \le 1.$$
Since $\phi'(s) = -\,\frac{(n-k)(n-k+1)}{\left[(n-k+1) - (n-k)s\right]^{2}} \le 0$ for $0 \le s \le 1$ and $2k \ge n$, it follows that $\phi$ is a monotonically decreasing function of $s$. Since $\phi(0) = k$ and $\phi(1) = 2k - n$, we have $2k - n \le \phi(S(t)) \le k$ for $t \ge 0$, which implies that $(2k-n)\,\lambda(t) \le \lambda_{k|n:G}(t) \le k\,\lambda(t)$, for $t \ge 0$. Combining this result with the relationship between Tsallis entropy and the hazard rate (as defined in Equation (4)), applied to the system lifetime for $\beta > 1$, completes the proof. □
We now present an illustrative example to demonstrate the application of the preceding proposition.
Example 5.
Consider a linear consecutive 2-out-of-3 good system with lifetime $T_{2|3:G}$, where the component lifetimes are i.i.d. with an exponential distribution with cdf $F(t) = 1 - e^{-\lambda t}$ for $t \ge 0$. The exponential distribution has a constant hazard rate, $\lambda(t) = \lambda$, so it follows that $E\!\left[\lambda^{\beta-1}(W_\beta)\right] = \lambda^{\beta-1}$. Applying Proposition 4 with $k = 2$ and $n = 3$ yields the bounds
$$\frac{1}{\beta-1}\left(1 - \frac{(2\lambda)^{\beta-1}}{\beta}\right) \le H_\beta\!\left(T_{2|3:G}\right) \le \frac{1}{\beta-1}\left(1 - \frac{\lambda^{\beta-1}}{\beta}\right).$$
Based on (11), one can compute the exact value, which falls within the bounds.
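The sketch below (our own check, with $\lambda = 1$ and $\beta = 2$ as assumed values) computes the exact system entropy by quadrature and confirms that it lies between the two bounds:

```python
import numpy as np
from scipy.integrate import quad

lam, beta, n, k = 1.0, 2.0, 3, 2

def f_sys(t):
    """System pdf from Eq. (9): [k(n-k+1)S^(k-1) - (k+1)(n-k)S^k] f(t)."""
    S = np.exp(-lam * t)
    f = lam * np.exp(-lam * t)
    return (k * (n - k + 1) * S**(k - 1) - (k + 1) * (n - k) * S**k) * f

I, _ = quad(lambda t: f_sys(t)**beta, 0.0, np.inf)
exact = (1.0 - I) / (beta - 1.0)

lower = (1.0 - (k * lam)**(beta - 1) / beta) / (beta - 1.0)
upper = (1.0 - ((2 * k - n) * lam)**(beta - 1) / beta) / (beta - 1.0)
print(f"{lower} <= {exact:.3f} <= {upper}")   # 0.0 <= 0.300 <= 0.5
```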
The next theorem holds under the condition that the expected value of the squared hazard rate function of T exists.
Theorem 5.
Under the conditions of Proposition 4, and provided that $E\!\left[\lambda^{2}(T)\right] < \infty$, it holds for … and … that
… .
Proof. It is not hard to see that …, while its failure rate function is given by
… .
Thus, by (4) and the Cauchy–Schwarz inequality, we have
… .
In the last equality, we use the substitution …, and this completes the proof. □
3. Characterization Results
This section presents characterization results for consecutive $k$-out-of-$n$ good systems based on Tsallis entropy. The analysis focuses on linear consecutive $(n-i)$-out-of-$n$ good systems, for … and …. We begin by recalling a lemma that relies on the Müntz–Szász theorem, as presented by Kamps [35].
Lemma 3.
For an integrable function $h$ on the finite interval $(a, b)$: if $\int_a^b x^{n_j}\,h(x)\,dx = 0$ for all $j \ge 1$, then $h(x) = 0$ for almost all $x \in (a, b)$, where $\{n_j,\ j \ge 1\}$ is a strictly increasing sequence of positive integers satisfying $\sum_{j=1}^{\infty} n_j^{-1} = \infty$.
It is worth pointing out that Lemma 3 is a well-established result in functional analysis, stating that the set $\{x^{n_j},\ j \ge 1\}$ constitutes a complete sequence. Notably, Hwang and Lin [36] expanded the scope of the Müntz–Szász theorem to the functions $\{g^{n_j}(x),\ j \ge 1\}$, where $g(x)$ is both absolutely continuous and monotonic over the interval $(a, b)$.
Theorem 6.
Let us assume two consecutive $(n-i)$-out-of-$n$ good systems with lifetimes $T^1_{n-i|n:G}$ and $T^2_{n-i|n:G}$, consisting of i.i.d. components with cdfs $F_1$ and $F_2$, and pdfs $f_1$ and $f_2$, respectively. Then $F_1$ and $F_2$ belong to the same family of distributions, but for a change in location, if and only if, for a fixed $\beta$,
$$H_\beta\!\left(T^1_{n-i|n:G}\right) = H_\beta\!\left(T^2_{n-i|n:G}\right) \quad \text{for all } n \ge 2i.$$
and
are from the same location family,
, for all
and
. Thus
implies the necessity part. For the sufficiency part, for a consecutive (
)-out-of-n good system, Equation (10) gives
where
and
By assumption
, we can write
or equivalently
where
Applying Lemma 3 to the complete sequence
with
yields the result that
or
. Consequently,
and
are part of the same distribution family, differing only in a location shift. □
Noting that a consecutive n-out-of-n good system corresponds to a classical series system, the following corollary provides a characterization of its Tsallis entropy.
Corollary 1.
Let $T^1_{n|n:G}$ and $T^2_{n|n:G}$ be two series systems having the pdfs $f_1$ and $f_2$ and cdfs $F_1$ and $F_2$, respectively. Then $F_1$ and $F_2$ belong to the same family of distributions, but for a change in location, if and only if
… .

An additional useful characterization is presented in the following theorem.
Theorem 7.
Under the conditions of Theorem 6, $F_1$ and $F_2$ belong to the same family of distributions, but for a change in location and scale, if and only if, for a fixed $\beta$,
… .
Proof. The necessity is trivial. To establish sufficiency, we leverage Equations (6) and (18) to derive
… .
An analogous argument can be made for …. If relation (22) holds for two cdfs $F_1$ and $F_2$, then we can infer from Equation (23) that
… .
Let us set …. By similar arguments as in Theorem 6, we have
… ,
and the proof follows similarly to Theorem 6. □
Applying Theorem 7 yields the following corollary.
Corollary 2.
Suppose the assumptions of Corollary 1 hold. Then $F_1$ and $F_2$ belong to the same family of distributions, but for a change in location and scale, if and only if, for a fixed $\beta$,
… .
The following theorem characterizes the exponential distribution through Tsallis entropy within the framework of consecutive $k$-out-of-$n$ good systems. This result serves as the theoretical basis for a newly proposed goodness-of-fit test for exponentiality, intended to be applicable across a wide variety of datasets. To establish this characterization, we begin by introducing the lower incomplete beta function, defined as:
$$B_x(a, b) = \int_0^{x} u^{a-1}(1-u)^{b-1}\,du, \quad 0 \le x \le 1,$$
where $a$ and $b$ are positive real numbers. When $x = 1$, this expression reduces to the complete beta function. We now present the main result of this section.
Theorem 8.
Let us assume that $T_{n-i|n:G}$ is the lifetime of a consecutive $(n-i)$-out-of-$n$ good system having i.i.d. component lifetimes with the pdf $f$ and cdf $F$. Then $T$ has an exponential distribution with the parameter $\lambda$ if and only if, for a fixed $\beta$,
… , (25)
for all $n \ge 2i$, and ….
Proof. Given an exponentially distributed random variable $T$ with parameter $\lambda$, its Tsallis entropy, directly calculated using Equation (2), is
$$H_\beta(T) = \frac{1}{\beta-1}\left(1 - \frac{\lambda^{\beta-1}}{\beta}\right).$$
Furthermore, since …, application of Equation (11) yields
…
for …. To derive the second term, let us set … and …; upon recalling (10), it holds that
… ,
from which the necessity is derived. To establish sufficiency, we assume that Equation (25) holds for a fixed value of $\beta$ and assume that …. Following the proof of Theorem 6 and utilizing the result in Equation (26), we obtain the relation
… ,
which is equivalent to
… ,
where … is defined in (18). Thus, it holds that
… ,
where … is defined in (21). Applying Lemma 3 to the function … and utilizing the complete sequence …, we can deduce that …. This implies that …. Solving this equation yields …, where $c$ is an arbitrary constant. Utilizing the boundary condition $F^{-1}(0) = 0$, we determine that …. Consequently, $F^{-1}(u) = -\lambda^{-1}\log(1-u)$ for $0 < u < 1$. This implies the cdf $F(t) = 1 - e^{-\lambda t}$, $t \ge 0$, confirming that $T$ follows an exponential distribution with the parameter $\lambda$. This establishes the theorem. □
4. Tsallis Entropy-Based Exponentiality Testing
In this section, we propose a nonparametric method for estimating the Tsallis entropy of consecutive $k$-out-of-$n$ good systems. Given the wide applicability of the exponential distribution in reliability and lifetime modeling, numerous test statistics have been developed to assess exponentiality, many of which are grounded in core principles of statistical theory. The primary objective here is to test whether the distribution of a random variable $T$ follows an exponential law. Let $F_0(t) = 1 - e^{-\lambda t}$, for $t \ge 0$ and some $\lambda > 0$, denote the cumulative distribution function under the null hypothesis. The hypothesis to be tested is formally stated as follows:
$$H_0: F = F_0 \quad \text{versus} \quad H_1: F \neq F_0.$$
A specific case of Tsallis entropy of order 2, referred to as extropy, has recently attracted considerable attention as a useful metric for goodness-of-fit testing. Qiu and Jia [37] pioneered the development of two consistent estimators for extropy based on the concept of spacings and subsequently introduced a goodness-of-fit test for the uniform distribution using the more efficient of the two estimators. In a related contribution, Xiong et al. [38] utilized properties of classical record values to derive a characterization result for the exponential distribution, leading to the development of a novel exponentiality test. Their study outlined the test statistic in detail and demonstrated its effectiveness, particularly in small-sample scenarios. Building on this foundation, Jose and Sathar [39] proposed a new test for exponentiality based on a characterization involving the extropy of lower $r$-record values. Extending these developments, the present section explores the Tsallis entropy of consecutive $k$-out-of-$n$ good systems. As established in Theorem 8, the exponential distribution can be uniquely characterized through the Tsallis entropy associated with such systems. Leveraging Equation (25), and following appropriate simplification, we now propose a new test statistic for exponentiality, denoted by …, defined for … as follows:
… ,
where …; Theorem 8 then directly implies that … if and only if $T$ is exponentially distributed. This fundamental property establishes the statistic as a viable measure of exponentiality and a suitable candidate for a test statistic. Given a random sample $T_1, T_2, \dots, T_N$, an estimator of the statistic can be used as a test statistic. Significant deviations of the estimator from its expected value under the null hypothesis (i.e., the assumption of an exponential distribution) would indicate non-exponentiality, prompting the rejection of the null hypothesis. Consider a random sample of size $N$, denoted by $T_1, T_2, \dots, T_N$, drawn from an absolutely continuous distribution $F$, and let $T_{(1)} \le T_{(2)} \le \dots \le T_{(N)}$
represent the corresponding order statistics. To estimate the test statistic, we adopt an estimator proposed by Vasicek [40] for the entropy, as follows:
$$HV_{m,N} = \frac{1}{N}\sum_{i=1}^{N}\log\!\left(\frac{N}{2m}\left(T_{(i+m)} - T_{(i-m)}\right)\right), \tag{30}$$
where $m$ is a positive integer smaller than $N/2$, which is known as the window size, and $T_{(i)} = T_{(1)}$ for $i < 1$ and $T_{(i)} = T_{(N)}$ for $i > N$. So, a reasonable estimator for the test statistic can be derived using Equation (30) as follows:
… . (31)
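Since the display of the proposed statistic is not reproduced above, the following sketch is restricted to what is standard: Vasicek's estimator of Equation (30) and a spacing-based plug-in estimate of Tsallis entropy of the type such statistics build on. The function names and the plug-in form are our own assumptions:

```python
import numpy as np

def vasicek_entropy(x, m):
    """Vasicek spacing estimator of Shannon entropy, Eq. (30)."""
    x = np.sort(np.asarray(x, dtype=float))
    N = len(x)
    lo = np.clip(np.arange(N) - m, 0, N - 1)   # T_(i-m), floored at T_(1)
    hi = np.clip(np.arange(N) + m, 0, N - 1)   # T_(i+m), capped at T_(N)
    return float(np.mean(np.log(N * (x[hi] - x[lo]) / (2.0 * m))))

def tsallis_plugin(x, m, beta):
    """Spacing-based plug-in Tsallis estimate: f(T_(i)) is estimated by
    (2m/N) / (T_(i+m) - T_(i-m)); then int f^beta = E[f^(beta-1)(T)]."""
    x = np.sort(np.asarray(x, dtype=float))
    N = len(x)
    lo = np.clip(np.arange(N) - m, 0, N - 1)
    hi = np.clip(np.arange(N) + m, 0, N - 1)
    f_hat = (2.0 * m / N) / (x[hi] - x[lo])
    return float((1.0 - np.mean(f_hat**(beta - 1.0))) / (beta - 1.0))

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=200)
print(vasicek_entropy(sample, m=7))          # ~ 1 for the standard exponential
print(tsallis_plugin(sample, m=7, beta=2.0)) # ~ 0.5 = H_2 of exponential(1)
```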
Establishing estimator consistency is a fundamental step when evaluating estimators for parametric functions. The following theorem confirms the consistency of the estimator defined in Equation (31). The proof follows an approach similar to that of Theorem 1 in Vasicek [40], who introduced a widely adopted technique for proving consistency in entropy-based statistics. This method has also been employed by Park [41] and Xiong et al. [38] to validate the reliability of their respective test statistics.
Theorem 9.
Assume that $T_1, \dots, T_N$ is a random sample of size $N$ taken from a population with pdf $f$ and cdf $F$. Also, let the variance of the random variable $\log f(T)$ be finite. Then the estimator in (31) converges in probability to the corresponding test statistic as $N \to \infty$ and $m \to \infty$ with $m/N \to 0$, for all $\beta > 0$.
Proof. To establish the consistency of the estimator, we employ the approach given in Noughabi and Arghami [42]. As both $N$ and $m$ tend to infinity, with the ratio $m/N$ approaching 0, we can approximate the density as follows:
$$\hat{f}\!\left(T_{(i)}\right) = \frac{F_N\!\left(T_{(i+m)}\right) - F_N\!\left(T_{(i-m)}\right)}{T_{(i+m)} - T_{(i-m)}} = \frac{2m/N}{T_{(i+m)} - T_{(i-m)}},$$
where $F_N$ represents the empirical distribution function. Furthermore, given that …, we can express
… ,
where the second approximation relies on the almost sure convergence of the empirical distribution function, i.e., $F_N \to F$ almost surely as $N \to \infty$. Now, applying the Strong Law of Large Numbers, we have
… .
This convergence demonstrates that the proposed estimator is consistent, and hence the proof is complete for all $\beta > 0$. □
The root mean square error (RMSE) of the estimator is invariant under location shifts in the random variable $T$, but not under scale transformations. This property is formally established in the following theorem by adapting the arguments of Ebrahimi et al. [43].
Theorem 10.
Assume that $T_1, \dots, T_N$ is a random sample of size $N$ taken from a population with pdf $f$, and let $Y_j = aT_j + b$ with $a > 0$ and $b \ge 0$. Denote the estimators computed on the basis of $T_1, \dots, T_N$ and $Y_1, \dots, Y_N$ by … and …, respectively. Then, the following properties apply:
- (i)
…,
- (ii)
…,
- (iii)
…, for all ….
Proof. It is not hard to see from (25) that
… .
Hence, we complete the proof by applying the properties of the mean, variance, and RMSE. □
A variety of test statistics can be constructed by selecting different combinations of n and i. For computational implementation, we consider the case n = 3 and i = 1, which simplifies the evaluation of the expression in Equation (30).
The test statistic converges to zero asymptotically as the sample size $N$ approaches infinity under the null hypothesis $H_0$. Conversely, under an alternative distribution with an absolutely continuous cdf $F \neq F_0$, it converges to a positive value as $N \to \infty$. Consequently, for a finite sample size $N$, we reject the null hypothesis at a significance level $\alpha$ if the observed value of the statistic exceeds the critical value. However, the asymptotic distribution of the statistic is analytically intractable due to its complex dependence on $N$ and the window parameter $m$.
To address this challenge, we employed a Monte Carlo simulation approach. Specifically, 10,000 random samples of each considered size $N$ were generated from the standard exponential distribution under the null hypothesis. For each sample size, we computed the $(1-\alpha)$-th quantile of the simulated values of the statistic to determine the critical values corresponding to significance levels $\alpha = 0.05$ and $\alpha = 0.01$, while varying the window size $m$ from 2 to 30.
Figure 1 and Figure 2 display the resulting critical values for each sample size and significance level. The critical values of the proposed test statistic at the significance level α = 0.01 are presented in Figure 3; these serve as the basis for evaluating the power performance through Monte Carlo simulations.
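The simulation design can be sketched as follows. The statistic below is a stand-in (the paper's statistic of Equation (31) would be used in its place); the Monte Carlo quantile step is the part being illustrated:

```python
import numpy as np

rng = np.random.default_rng(42)

def stand_in_statistic(x, m, beta=2.0):
    """Stand-in for the statistic of Eq. (31): absolute deviation of a
    spacing-based Tsallis estimate (beta = 2) from its value 0.5 under a
    standard exponential law. Purely illustrative, not the paper's form."""
    x = np.sort(np.asarray(x, dtype=float))
    N = len(x)
    lo = np.clip(np.arange(N) - m, 0, N - 1)
    hi = np.clip(np.arange(N) + m, 0, N - 1)
    f_hat = (2.0 * m / N) / (x[hi] - x[lo])
    return abs((1.0 - np.mean(f_hat**(beta - 1.0))) / (beta - 1.0) - 0.5)

def mc_critical_value(N, m, alpha=0.05, reps=10_000):
    """(1 - alpha)-quantile of the statistic over H0 samples (exponential)."""
    vals = [stand_in_statistic(rng.exponential(size=N), m) for _ in range(reps)]
    return float(np.quantile(vals, 1.0 - alpha))

for N in (10, 20, 50):
    print(N, round(mc_critical_value(N, m=max(2, int(np.sqrt(N)))), 4))
```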
Power Comparisons
The power of the test statistic was evaluated through a Monte Carlo simulation involving nine alternative probability distributions. For each specified sample size $N$, 10,000 replicates of size $N$ were drawn from each alternative distribution, and the value of the statistic was computed for each replicate. The empirical power at a given significance level $\alpha$ was estimated as the proportion of test statistics exceeding the corresponding critical value. To assess the efficiency of the newly proposed test, its performance was benchmarked against several well-established tests for exponentiality reported in the literature. The specifications of the alternative distributions considered are summarized in Table 2.
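Continuing the previous sketch (which defined stand_in_statistic and mc_critical_value), empirical power against a chosen alternative can be estimated in the same way; the Weibull alternative used here is an assumed example, not necessarily one of the nine distributions in Table 2:

```python
import numpy as np

rng = np.random.default_rng(7)

def empirical_power(sampler, N, m, crit, reps=10_000):
    """Share of replicates whose statistic exceeds the H0 critical value.
    Reuses stand_in_statistic() from the previous sketch."""
    hits = sum(stand_in_statistic(sampler(N), m) > crit for _ in range(reps))
    return hits / reps

N, m = 20, 4
crit = mc_critical_value(N, m, alpha=0.05)
weibull = lambda size: rng.weibull(1.5, size=size)   # assumed alternative
print(empirical_power(weibull, N, m, crit))
```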
The simulation setup, including the selection of alternative distributions and their associated parameters, closely follows the framework proposed by Jose and Sathar [39]. To evaluate the effectiveness of the newly proposed test, its performance is compared with several well-established tests for exponentiality documented in the literature. A summary of these comparative tests is provided in Table 3.
The performance of the test statistic is influenced by the choice of window size $m$, making it necessary to determine an appropriate value in advance to ensure sufficient adjusted statistical power. Simulation results across various sample sizes led to an empirical recommendation for $m$ expressed in terms of the floor function $\lfloor \cdot \rfloor$ of the sample size. This heuristic formula offers a practical guideline for selecting $m$ and aims to ensure robust power performance across a range of alternative distributions. To comprehensively evaluate the performance of the proposed test, we selected ten established tests for exponentiality and assessed their power against a diverse set of alternative distributions. Notably, Xiong et al. [38] proposed a test based on the Tsallis entropy of classical record values, while Jose and Sathar [39] introduced a test statistic using Tsallis entropy derived from lower $r$-records as a characterization of the exponential distribution.
The two tests referred to as … and … in Table 4 are included in our comparative analysis due to their basis in information-theoretic principles. The original authors provided extensive justification for their use in testing exponentiality, highlighting their theoretical soundness and practical applicability. To estimate the power of each test, we simulated 10,000 independent samples for each sample size $N$ from each alternative distribution specified in Table 4. The power of the proposed test was then computed at the 5% significance level. Subsequently, power values for the proposed test and the competing tests were obtained using the same simulation framework. A summary of the comparative results is presented in Table 4.
Overall, the test statistic exhibits strong discriminatory power in detecting departures from exponentiality in the direction of the gamma distribution. In contrast, its performance against other alternatives, such as the Weibull, uniform, half-normal, and log-normal distributions, is more moderate, reflecting a balanced sensitivity without displaying either pronounced strength or notable limitations.
5. Conclusions
This study has investigated the utility of Tsallis entropy as a flexible and informative measure of uncertainty within the reliability framework of consecutive k-out-of-n good systems. A central contribution lies in establishing a meaningful relationship between the Tsallis entropy of such systems, under general continuous lifetime distributions, and their counterparts governed by the uniform distribution. Given the analytical complexity involved in deriving closed-form entropy expressions, especially for systems with large n or heterogeneous component behaviors, we derived a suite of informative bounds. These approximations not only facilitate practical computation but also deepen the theoretical understanding of entropy dynamics in complex system structures. Additionally, we proposed a nonparametric estimator specifically tailored to the structure of consecutive k-out-of-n systems. Its consistency and performance were validated through simulation studies and empirical applications. This estimator provides a valuable tool for quantifying system-level uncertainty and supports broader applications such as statistical inference and pattern recognition, including image processing and reliability-centered decision-making. In summary, this work contributes to the growing literature on information-theoretic measures in reliability by (i) establishing theoretical foundations that link Tsallis entropy to system reliability behavior; (ii) introducing practical bounding techniques to overcome analytical intractability; and (iii) developing a robust entropy-based estimator suitable for practical use.
Despite these advancements, several promising avenues remain open for further exploration: (i) The current analysis assumes independent and identically distributed components. Extending the framework to systems with dependent or heterogeneous lifetimes, such as those governed by copula-based models or frailty structures, would significantly broaden applicability. (ii) Investigating Tsallis entropy in more general system configurations (e.g., coherent systems, phased-mission systems, or dynamic networks) could yield new insights into uncertainty and resilience. (iii) Developing online or censored-data versions of the Tsallis entropy estimator would enhance its relevance in real-world reliability monitoring and predictive maintenance applications. (iv) Leveraging entropy measures to guide optimal system design (e.g., maximizing reliability for a fixed entropy budget) represents a novel and practically important direction. (v) A systematic comparison between Tsallis, Rényi, and cumulative residual entropy within the same system contexts may reveal cases where one measure is superior in inference, diagnostics, or optimization. These directions highlight the richness of Tsallis entropy as both a theoretical construct and a practical tool in reliability analysis and statistical modeling.