Article

Tsallis Entropy in Consecutive k-out-of-n Good Systems: Bounds, Characterization, and Testing for Exponentiality

1 Department of Statistics and Operations Research, College of Science, Qassim University, Buraydah 51482, Saudi Arabia
2 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
3 Department of Statistics and Operations Research, College of Science, King Saud University, Riyadh 11451, Saudi Arabia
* Author to whom correspondence should be addressed.
Entropy 2025, 27(9), 982; https://doi.org/10.3390/e27090982
Submission received: 4 July 2025 / Revised: 16 September 2025 / Accepted: 19 September 2025 / Published: 20 September 2025

Abstract

This study explores the application of Tsallis entropy in evaluating uncertainty within the framework of consecutive k-out-of-n good systems, which are widely utilized in various reliability and engineering contexts. We derive new analytical expressions and meaningful bounds for the Tsallis entropy under various lifetime distributions, offering fresh insight into the structural behavior of system-level uncertainty. The approach establishes theoretical connections with classical entropy measures, such as Shannon and Rényi entropies, and provides a foundation for comparing systems under different stochastic orders. A nonparametric estimator is proposed to estimate the Tsallis entropy in this setting, and its performance is evaluated through Monte Carlo simulations. In addition, we develop a new entropy-based test for exponentiality, building on the distinctive properties of system lifetimes. These results show that Tsallis entropy serves as a flexible tool in both reliability characterization and statistical inference.

1. Introduction

In recent decades, the study of consecutive k-out-of-n systems has gained prominence due to their relevance across numerous engineering and industrial settings. These models effectively capture configurations found in applications such as communication relay networks, segmented pipeline systems, and complex assemblies used in high-energy physics. The operational behavior of such systems depends on the arrangement of their components and the logic that governs their collective functioning. A key configuration, known as the linear consecutive k-out-of-n good system, consists of n sequential components, each assumed to function independently and identically. The system remains operational provided that at least k consecutive components are functioning simultaneously. Classical series and parallel structures appear as limiting cases of this setup: the n-out-of-n case corresponds to a series system, while the 1-out-of-n configuration resembles a parallel structure. The foundational contributions of researchers such as Jung and Kim [1], Shen and Zuo [2], Kuo and Zuo [3], Chung et al. [4], Boland and Samaniego [5], and Eryılmaz [6,7] have laid the groundwork for this domain. Among the various system configurations, linear arrangements that satisfy the condition 2k ≥ n are of particular significance. These systems offer a meaningful compromise between analytical tractability and practical relevance in reliability engineering. In such models, the component lifetimes are denoted by T₁, T₂, …, Tₙ, where each component follows a common continuous lifetime distribution characterized by its probability density function (pdf) f(t), cumulative distribution function (cdf) F(t), and survival function S(t) = P(T > t). The overall system lifetime is denoted by $T_{k|n}$, representing the operational lifetime of a linear consecutive k-out-of-n good system. Eryılmaz [8] established that when the condition 2k ≥ n holds, the reliability function of such a system takes the form:
$$S_{k|n}(t) = (n-k+1)S^{k}(t) - (n-k)S^{k+1}(t),\quad t>0.$$
In the context of information theory, a key aim is to measure the uncertainty associated with probability distributions. This study examines the use of Tsallis entropy as a tool for evaluating such uncertainty in consecutive k-out-of-n good systems, where the components are assumed to have continuous lifetime distributions. It is important to highlight that Tsallis [9] stimulated a resurgence of interest in generalized entropy measures, often referred to as Tsallis entropy, building on earlier work by Havrda and Charvát [10] and the independent development of similar concepts in ecology by Patil and Taillie [11]. It provides a flexible alternative to classical measures and is widely regarded for its capacity to capture non-extensive behavior. The Tsallis entropy of order β is mathematically defined as:
$$H_\beta(T) = \frac{1}{1-\beta}\left(\int_0^\infty f^\beta(t)\,dt - 1\right) = \frac{1}{1-\beta}\left(\int_0^1 f^{\beta-1}\left(F^{-1}(u)\right)du - 1\right),$$
for all $\beta\in\Theta = (0,1)\cup(1,\infty)$, where $F^{-1}(u) = \inf\{t:\ F(t)\geq u\}$, for $u\in[0,1]$, represents the quantile function of F(t). Of particular note, the Shannon differential entropy, introduced by Shannon [12], emerges as a special case of Tsallis entropy in the limit as β approaches 1. This foundational concept in information theory provides a baseline for measuring uncertainty under additive conditions. Mathematically, it is defined as:
$$H(T) = \lim_{\beta\to1}H_\beta(T) = -\int_0^\infty f(t)\,\log f(t)\,dt.$$
An alternative and insightful expression of Tsallis entropy can be obtained by reformulating Equation (2) in terms of the hazard rate function. This representation provides a useful perspective in reliability analysis, particularly when examining lifetime distributions. The resulting form is given by:
$$H_\beta(T) = \frac{1}{1-\beta}\left(\frac{1}{\beta}E\left[\lambda^{\beta-1}(T_\beta)\right]-1\right),$$
where λ(t) = f(t)/S(t) represents the hazard rate function, E(·) denotes the expectation, and $T_\beta$ follows a pdf given by:
$$f_\beta(t) = \beta f(t)\,S^{\beta-1}(t),\quad t>0,\ \text{for all }\beta>0.$$
Note that Equation (5) is known as the proportional hazards model in the literature and defines a general class of distributions, sometimes called exponentiated survival distributions. The term proportional hazards model has been widely used to describe the relative risk (Cox) model in survival analysis, because the parameter β controls the tail behavior of the distributions; for more details, see, e.g., Kalbfleisch and Prentice [13] and Lawless [14]. It has long been understood that entropic measures for continuous random variables often yield negative values across many distribution types. This observation holds for various entropy functionals, including those proposed by Shannon and Tsallis. While discrete random variables always yield non-negative values under these measures, continuous ones can be negative. Moreover, both Tsallis and Shannon entropies share invariance under location shifts and sensitivity to scale transformations. Unlike Shannon entropy, which is additive for independent random variables, Tsallis entropy is non-additive, making it a more general and adaptable tool in complex systems. Specifically, for two independent continuous random variables (T₁, T₂), the entropy satisfies the relation $H_\beta(T_1,T_2) = H_\beta(T_1)+H_\beta(T_2)+(1-\beta)H_\beta(T_1)H_\beta(T_2)$; for Shannon entropy, the rightmost term vanishes because $1-\beta\to0$ as $\beta\to1$, recovering the classical additivity property $H(T_1,T_2)=H(T_1)+H(T_2)$ when T₁ and T₂ are independent. This non-additivity property underscores the versatility and theoretical strength of the Tsallis functional. When T₁ and T₂ are dependent, we have $H(T_1,T_2) = H(T_1)+H(T_2)-I(T_1,T_2)$, where $I(T_1,T_2)$ is the mutual information (see, e.g., Cover and Thomas [15]). A similar result holds for dependent random variables in the context of Tsallis entropy, as demonstrated by Vila et al. [16]. In the field of telecommunications, entropic measures such as those of Shannon and Tsallis help define the fundamental performance boundaries of communication systems, including limits on data compression efficiency and transmission reliability. Shannon entropy, for instance, quantifies the average uncertainty or information content of messages from a given source. According to Shannon’s Source Coding Theorem, entropy defines the minimum average number of bits per symbol required for lossless encoding. Moreover, the Shannon–Hartley Theorem links entropy to the maximum achievable rate for reliable communication over a noisy channel. In essence, Shannon entropy is a cornerstone of information theory, setting the theoretical limits for the two most essential tasks in digital communication: efficient compression and robust transmission. For clarity, Table 1 presents our notation, common alternatives found in the literature, symbols, conceptual definitions, and the location of mathematical definitions.
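To make the non-additivity relation concrete, the following minimal numerical sketch (an illustration only, assuming two independent exponential lifetimes with hypothetical rates 1.5 and 0.7 and β = 2) verifies the identity above; for independent lifetimes the joint integral of the density raised to the power β factorizes into a product of marginal integrals.

```python
import numpy as np
from scipy.integrate import quad

def beta_integral(lam, beta):
    # I = int f^beta dt for an Exp(lam) density; then H_beta = (I - 1)/(1 - beta)
    val, _ = quad(lambda t: (lam * np.exp(-lam * t))**beta, 0, np.inf)
    return val

beta, lam1, lam2 = 2.0, 1.5, 0.7            # hypothetical illustrative values
I1, I2 = beta_integral(lam1, beta), beta_integral(lam2, beta)
H1, H2 = (I1 - 1)/(1 - beta), (I2 - 1)/(1 - beta)

# independence: the joint integral factorizes, I12 = I1 * I2
H12 = (I1 * I2 - 1)/(1 - beta)
print(H12, H1 + H2 + (1 - beta) * H1 * H2)  # both sides agree (0.7375)
```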
The study of information-theoretic measures within the context of reliability systems and order statistics has attracted growing interest in recent decades. Several foundational works have shaped this domain, including contributions by Wong and Chen [17], Park [18], Ebrahimi et al. [19], Zarezadeh and Asadi [20], Toomaj and Doostparast [21], Toomaj [22], and Mesfioui et al. [23], among others. Building on this foundation, Alomani and Kayid [24] extended the analysis of Tsallis entropy to coherent and mixed systems, assuming independent and identically distributed (i.i.d.) component lifetimes. Further developments include Baratpour and Khammar [25], who investigated the entropy’s behavior with respect to order statistics and record values, and Kumar [26], who analyzed its relevance in the study of r-records. Kayid and Alshehri [27] provided notable advancements by deriving a closed-form expression for the lifetime entropy of consecutive k-out-of-n good systems. Their work also established a characterization result, proposed practical bounds, and introduced a nonparametric estimation method. Complementing these efforts, Kayid and Shrahili [28] focused on the fractional generalized cumulative residual entropy in similar systems, presenting a computational framework, establishing several preservation properties, and offering two nonparametric estimators supported by simulation-based evidence.
Building on previous studies, this work seeks to provide a more comprehensive understanding of how Tsallis entropy can be applied to analyze consecutive k-out-of-n good systems. We expand upon earlier findings by offering new insights into the structural properties of these systems, proposing improved bounding strategies, and developing estimation techniques tailored to their unique reliability characteristics. Although this study centers on consecutive systems with i.i.d. components, it is worth recognizing that more general binary systems, involving non-i.i.d. structures, have received significant attention in the literature. Notable contributions in this direction include the works of Tsallis et al. [29] and Hanel et al. [30,31], which examine how dependencies among components influence entropic formulations and their theoretical implications.
The structure of this paper is outlined as follows. In Section 2, we introduce a novel expression for the Tsallis entropy of consecutive k-out-of-n good systems, denoted by T k n , where component lifetimes are drawn from a general continuous distribution function F. This formulation is developed using the uniform distribution as a starting point. Due to the challenges involved in deriving closed-form results for Tsallis entropy in complex reliability settings, we also establish several analytical bounds, supported by illustrative numerical examples. Section 3 focuses on characterization results, highlighting key theoretical properties of Tsallis entropy in the setting of consecutive systems. In Section 4, we present computational validation of our findings and propose a nonparametric estimator specifically designed for evaluating system-level Tsallis entropy. The estimator’s performance is assessed using both simulated and empirical data. Finally, Section 5 summarizes the main conclusions and discusses their broader implications.

2. Tsallis Entropy of Consecutive k-out-of-n Good System

This section is structured into three parts. We begin with a brief overview of essential properties of Tsallis entropy and its connections to other well-known measures, such as Rényi and Shannon differential entropies. In the second part, we derive a closed-form expression for the Tsallis entropy in the context of consecutive k-out-of-n good systems and analyze its behavior with respect to different stochastic orderings. The final part introduces a series of analytical bounds that further clarify the entropy characteristics of these systems.

2.1. Results on Tsallis Entropy

In this paper, we consider a random variable, denoted by T, which is assumed to be absolutely continuous and nonnegative. A random variable is a mathematical construct used to represent the outcome of a random process, assigning numerical values to each possible outcome. In this context, T specifically represents the lifetime of a component, system, or living organism, meaning it quantifies the duration until a specific event occurs, such as the failure of a mechanical component, the breakdown of a system, or the death of an organism. The term absolutely continuous implies that the random variable T has a probability density function, allowing for a continuous range of possible values (e.g., any positive real number) rather than being restricted to discrete values. The nonnegative property ensures that T 0 , which is appropriate for modeling lifetimes, as time cannot be negative. This setup provides a flexible framework for analyzing the probabilistic behavior of lifetimes in various applications. Here, we present the relationship between Rényi and Tsallis entropy. For a non-negative random variable T with density function f(t), Rényi entropy introduces a tunable parameter β, allowing different aspects of the distribution’s uncertainty to be emphasized. This parameterized form enables more flexibility in analyzing the behavior of uncertainty across various probability models. It is formally defined as:
$$R_\beta(T) = \frac{1}{1-\beta}\log\int_0^\infty f^\beta(t)\,dt,\quad\text{for all }\beta>0,\ \beta\neq1.$$
Both Tsallis and Rényi entropies serve as measures of deviation from uniformity, as they quantify the concentration of the probability distribution f. These entropy measures can take values across the extended real line, i.e., within the interval [−∞, ∞]. For an absolutely continuous, non-negative random variable T, it is established that $H_\beta(T) \geq H(T)$ for all 0 < β < 1, and $H_\beta(T) \leq H(T)$ for all β > 1. Furthermore, the relationship between Tsallis and Rényi entropies follows a similar pattern: $H_\beta(T) \geq R_\beta(T)$ when 0 < β < 1, and $H_\beta(T) \leq R_\beta(T)$ when β > 1. In the theorem that follows, we explore the connection between the Shannon differential entropy under the proportional hazards rate model, as defined in Equation (5), and the corresponding Tsallis entropy.
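These orderings are easy to check numerically. The short sketch below (illustrative, assuming an Exp(2) component density as an arbitrary choice) evaluates the Tsallis, Rényi, and Shannon entropies by quadrature and confirms the stated relations on both sides of β = 1.

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: 2.0 * np.exp(-2.0 * t)       # Exp(2) density, an illustrative choice

def tsallis_renyi_shannon(beta):
    I, _ = quad(lambda t: f(t)**beta, 0, np.inf)
    H = (I - 1.0)/(1.0 - beta)             # Tsallis entropy
    R = np.log(I)/(1.0 - beta)             # Renyi entropy
    # -f log f written explicitly for Exp(2) to avoid log(0) underflow at large t
    S, _ = quad(lambda t: f(t)*(2.0*t - np.log(2.0)), 0, np.inf)
    return H, R, S

for beta in (0.5, 2.0):
    H, R, S = tsallis_renyi_shannon(beta)
    print(beta, H, R, S)  # H >= R and H >= S for beta < 1; reversed for beta > 1
```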
Theorem 1. 
Let T be an absolutely continuous, non-negative random variable. Then $H_\beta(T) \geq 1 - H(T_\beta)$ for all $0<\beta<1$, and $H_\beta(T) \leq 1 - H(T_\beta)$ for all $\beta>1$.
Proof. 
By the log-sum inequality (see Cover and Thomas [15]), we have
$$\int_0^\infty f_\beta(t)\,\log\frac{\beta f(t)S^{\beta-1}(t)}{f^\beta(t)}\,dt \;\geq\; \int_0^\infty f_\beta(t)\,dt\;\log\frac{\int_0^\infty f_\beta(t)\,dt}{\int_0^\infty f^\beta(t)\,dt}\;\geq\; 1-\int_0^\infty f^\beta(t)\,dt,$$
which implies
$$\int_0^\infty f_\beta(t)\,\log\lambda_\beta(t)\,dt \;\geq\; \int_0^\infty f^\beta(t)\,dt - 1,$$
where $\lambda_\beta(t) = \beta\lambda(t)$ denotes the hazard rate function of $T_\beta$. By noting that
$$\int_0^\infty f_\beta(t)\,\log\lambda_\beta(t)\,dt = 1 - H(T_\beta),$$
we get the results for all $1-\beta>0$ ($1-\beta<0$), and hence the theorem. □

2.2. Expression and Stochastic Orders

To derive the Tsallis entropy for a consecutive k-out-of-n good system, we begin by applying the probability integral transformation $W_{k|n} = F(T_{k|n})$, where F is the continuous cumulative distribution function of the component lifetimes. Under standard assumptions, this transformation maps the system lifetime onto the unit interval [0, 1], where its distribution no longer depends on F. Leveraging this property, we obtain an explicit form for the Tsallis entropy of the system lifetime $T_{k|n}$, assuming that the component lifetimes are independently and identically distributed. Based on Equation (1), the probability density function of $T_{k|n}$ is expressed as:
$$f_{k|n}(t) = f(t)\left[k(n-k+1)S^{k-1}(t) - (k+1)(n-k)S^{k}(t)\right],\quad t>0.$$
Furthermore, when 2k ≥ n, the pdf of $W_{k|n} = F(T_{k|n})$ can be represented as follows:
$$\rho_{k|n}(w) = k(n-k+1)(1-w)^{k-1} - (k+1)(n-k)(1-w)^{k},\quad\text{for all }0<w<1.$$
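As a quick sanity check of this density, the following sketch (illustrative, assuming standard exponential components as an arbitrary choice) simulates the lifetime of a linear consecutive 2-out-of-4 good system as the maximum of the sliding-window minima, applies the probability integral transform, and compares the simulated mean of $W_{2|4}$ with the mean computed from $\rho_{2|4}$.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
k, n, N = 2, 4, 200_000                     # 2k >= n holds

def rho(w):
    # density of W = F(T_{k|n}) from Eq. (10)
    return k*(n-k+1)*(1-w)**(k-1) - (k+1)*(n-k)*(1-w)**k

T = rng.exponential(size=(N, n))            # i.i.d. standard exponential components
mins = np.stack([T[:, j:j+k].min(axis=1) for j in range(n-k+1)], axis=1)
W = 1.0 - np.exp(-mins.max(axis=1))         # W = F(T_{k|n}) with F(t) = 1 - exp(-t)

print(quad(rho, 0, 1)[0])                   # rho integrates to one
print(W.mean(), quad(lambda w: w*rho(w), 0, 1)[0])  # simulated vs exact mean (0.5)
```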
We next state a key result that follows directly from the preceding analysis. As the proof closely parallels the argument used in Theorem 1 of Mesfioui et al. [23], it is omitted here for brevity.
Proposition 1.
Let $T_{k|n}$ denote the system lifetime of a consecutive k-out-of-n good system, where 2k ≥ n. Then, for all β∈Θ, the Tsallis entropy of $T_{k|n}$ is given by:
$$H_\beta\left(T_{k|n}\right) = \frac{1}{1-\beta}\left(\int_0^1\rho_{k|n}^\beta(w)\,f^{\beta-1}\left(F^{-1}(w)\right)dw - 1\right).$$
In the next theorem, an alternative formulation of $H_\beta(T_{k|n})$ is derived using Proposition 1 in conjunction with Newton’s generalized binomial theorem.
Theorem 2. 
Under the conditions of Proposition 1, we get
$$H_\beta\left(T_{k|n}\right) = \frac{1}{1-\beta}\left(\left[k(n-k+1)\right]^\beta\sum_{i=0}^\infty\binom{\beta}{i}\left[\frac{(k+1)(n-k)}{k(k-n-1)}\right]^i\frac{E\left[f^{\beta-1}\left(F^{-1}(Z_{i,k,\beta})\right)\right]}{i+\beta(k-1)+1}-1\right),$$
where $Z_{i,k,\beta}\sim\mathrm{Beta}\left(1,\,i+\beta(k-1)+1\right)$ and $\binom{\beta}{i} = \frac{\beta(\beta-1)\cdots(\beta-i+1)}{i!}$, for all β∈Θ.
Proof. 
By defining $A = k(n-k+1)$ and $B = (k+1)(n-k)$, and referring to (10) and (11), we find that
$$\begin{aligned}\int_0^1\rho_{k|n}^\beta(w)\,f^{\beta-1}\left(F^{-1}(w)\right)dw &= \int_0^1(1-w)^{\beta(k-1)}\left(A-B(1-w)\right)^\beta f^{\beta-1}\left(F^{-1}(w)\right)dw\\ &= A^\beta\int_0^1 z^{\beta(k-1)}\left(1-\frac{B}{A}z\right)^\beta f^{\beta-1}\left(F^{-1}(1-z)\right)dz\quad(\text{taking }z=1-w)\\ &= A^\beta\sum_{i=0}^\infty\binom{\beta}{i}\left(\frac{B}{A}\right)^i(-1)^i\int_0^1(1-u)^{i+\beta(k-1)}f^{\beta-1}\left(F^{-1}(u)\right)du\quad(\text{taking }u=1-z)\\ &= A^\beta\sum_{i=0}^\infty\binom{\beta}{i}\left[\frac{(k+1)(n-k)}{k(k-n-1)}\right]^i\frac{E\left[f^{\beta-1}\left(F^{-1}(Z_{i,k,\beta})\right)\right]}{i+\beta(k-1)+1},\end{aligned}$$
where the third equality follows directly from Newton’s generalized binomial series $(1-x)^\beta = \sum_{i=0}^\infty\binom{\beta}{i}(-1)^i x^i$. This result, in conjunction with Equation (11), completes the proof. □
To demonstrate the usefulness of the representation given in Equation (11), we consider the following illustrative example.
Example 1. 
Consider a linear consecutive 2-out-of-4 good system whose lifetime is given by:
$$T_{2|4} = \max\left\{\min\{T_1,T_2\},\ \min\{T_2,T_3\},\ \min\{T_3,T_4\}\right\}.$$
Let us assume that the component lifetimes are i.i.d. and follow the log-logistic distribution (known as the Fisk distribution in economics). The pdf of this distribution with shape parameter γ and scale parameter one is:
$$f(t) = \frac{\gamma t^{\gamma-1}}{\left(1+t^\gamma\right)^2},\quad t,\gamma>0.$$
After appropriate algebraic manipulation, the following identity is obtained:
$$f\left(F^{-1}(w)\right) = \gamma\,w^{\frac{\gamma-1}{\gamma}}(1-w)^{\frac{\gamma+1}{\gamma}},\quad\text{for }0<w<1.$$
As a result of specific algebraic manipulations, we obtain the following expression for the Tsallis entropy:
$$H_\beta\left(T_{2|4}\right) = \frac{1}{1-\beta}\left[\gamma^{\beta-1}\int_0^1\rho_{2|4}^\beta(w)\,w^{\frac{(\gamma-1)(\beta-1)}{\gamma}}(1-w)^{\frac{(\gamma+1)(\beta-1)}{\gamma}}\,dw - 1\right],\quad\text{for all }\beta\in\Theta.$$
Due to the complexity of deriving a closed-form expression, numerical techniques are used to explore how the Tsallis entropy $H_\beta(T_{2|4})$ varies with the parameters β and γ. The analysis focuses on the consecutive 2-out-of-4 good system and is conducted for values β > 1 and γ > 1, since the integral diverges when 0 < β < 1 or 0 < γ < 1.
Figure 1 demonstrates that Tsallis entropy decreases as both β and γ increase. This behavior highlights the entropy’s sensitivity to changes in these parameters and emphasizes their influence on the system’s underlying uncertainty and information-theoretic profile.
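The numerical exploration behind Figure 1 can be reproduced in a few lines. The sketch below (a minimal illustration; the grid of β and γ values is an arbitrary choice) evaluates the integral expression of Example 1 by adaptive quadrature.

```python
import numpy as np
from scipy.integrate import quad

def rho_24(w):
    # Eq. (10) with k = 2, n = 4
    return 2*3*(1-w) - 3*2*(1-w)**2

def H_beta_T24(beta, gamma):
    # Tsallis entropy of T_{2|4} for log-logistic components (Example 1)
    g = lambda w: (rho_24(w)**beta
                   * w**((gamma-1)*(beta-1)/gamma)
                   * (1-w)**((gamma+1)*(beta-1)/gamma))
    val, _ = quad(g, 0.0, 1.0)
    return (gamma**(beta-1)*val - 1.0)/(1.0 - beta)

for beta in (1.5, 2.0, 3.0):
    print([round(H_beta_T24(beta, gamma), 4) for gamma in (1.5, 2.0, 3.0)])
```

Consistent with Figure 1, the printed values decrease as β and γ grow.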
Definition 1. 
Assume two absolutely continuous nonnegative random variables T₁ and T₂ with pdfs f₁ and f₂, cdfs F₁ and F₂, and survival functions S₁ and S₂, respectively. Then: (i) $T_1\leq_{\mathrm{disp}}T_2$, i.e., T₁ is smaller than or equal to T₂ in the dispersive order, if and only if $f_1\left(F_1^{-1}(w)\right) \geq f_2\left(F_2^{-1}(w)\right)$ for all 0 < w < 1; (ii) $T_1\leq_{\mathrm{hr}}T_2$, i.e., T₁ is smaller than T₂ in the hazard rate order, if $S_2(t)/S_1(t)$ is increasing for all t > 0; (iii) T₁ has the decreasing failure rate (DFR) property if $f_1(t)/S_1(t)$ is decreasing in t > 0.
For a thorough discussion of stochastic ordering concepts, readers are referred to the seminal work of Shaked and Shanthikumar [32]. The next theorem follows directly from the representation established in Equation (11).
Theorem 3. 
Let $T^1_{k|n}$ and $T^2_{k|n}$ be the lifetimes of two consecutive k-out-of-n good systems having n i.i.d. component lifetimes with cdfs F₁ and F₂, respectively. If $T_1\leq_{\mathrm{disp}}T_2$, then
$$H_\beta\left(T^1_{k|n}\right) \leq H_\beta\left(T^2_{k|n}\right),\quad\text{for all }\beta\in\Theta.$$
Proof. 
If $T_1\leq_{\mathrm{disp}}T_2$, then for all β > 1 (0 < β < 1), we have
$$(1-\beta)H_\beta\left(T^1_{k|n}\right) = \int_0^1\rho_{k|n}^\beta(w)\,f_1^{\beta-1}\left(F_1^{-1}(w)\right)dw - 1 \;\geq\;(\leq)\;\int_0^1\rho_{k|n}^\beta(w)\,f_2^{\beta-1}\left(F_2^{-1}(w)\right)dw - 1 = (1-\beta)H_\beta\left(T^2_{k|n}\right).$$
This yields $H_\beta(T^1_{k|n})\leq H_\beta(T^2_{k|n})$ for all β∈Θ, and this completes the proof. □
The following result formally establishes that, among consecutive k-out-of-n good systems whose components possess the decreasing failure rate (DFR) property, the series system attains the minimum Tsallis entropy.
Proposition 2. 
Let $T_{k|n}$ denote the lifetime of a consecutive k-out-of-n good system, comprising n i.i.d. components that exhibit the DFR property. Then, for 2k ≥ n, and for all β∈Θ,
(i)
it holds that $H_\beta(T_{1:n}) \leq H_\beta(T_{k|n})$.
(ii)
it holds that $H_\beta(T_{1:r}) \leq H_\beta(T_{k|n})$.
Proof. 
(i) It is easy to see that $T_{1:n}\leq_{\mathrm{hr}}T_{k|n}$. Furthermore, if T exhibits the DFR property, then $T_{1:n}$ also possesses the DFR property. Due to Bagai and Kochar [33], it can be concluded that $T_{1:n}\leq_{\mathrm{disp}}T_{k|n}$, which immediately yields $H_\beta(T_{1:n})\leq H_\beta(T_{k|n})$ by recalling Theorem 3. (ii) Based on Proposition 3.2 of Navarro and Eryılmaz [34], it can be inferred that $T_{1:r}\leq_{\mathrm{hr}}T_{k|n}$. The same reasoning as in Part (i) then leads to the stated result. □
An important application of Equation (11) is in comparing the Tsallis entropy of consecutive k-out-of-n good systems with independent components drawn from different lifetime distributions. This comparison is formally addressed in the following result.
Proposition 3. 
Under the conditions of Theorem 3, if $H_\beta(T_1)\leq H_\beta(T_2)$ for all β > 0, and $\inf_{w\in A_1}\rho_{k|n}(w) \geq \sup_{w\in A_2}\rho_{k|n}(w)$, where $A_1 = \left\{w\in[0,1]:\ f_1\left(F_1^{-1}(w)\right)\leq f_2\left(F_2^{-1}(w)\right)\right\}$ and $A_2 = \left\{w\in[0,1]:\ f_1\left(F_1^{-1}(w)\right)>f_2\left(F_2^{-1}(w)\right)\right\}$, then $H_\beta(T^1_{k|n})\leq H_\beta(T^2_{k|n})$ for all 2k ≥ n and β∈Θ.
Proof. 
Given that $H_\beta(T_1)\leq H_\beta(T_2)$ for all β > 0, Equation (2) implies
$$H_\beta(T_2)-H_\beta(T_1) = \frac{1}{1-\beta}\int_0^1\zeta_\beta(w)\,dw \geq 0,$$
where $\zeta_\beta(w) = f_2^{\beta-1}\left(F_2^{-1}(w)\right)-f_1^{\beta-1}\left(F_1^{-1}(w)\right)$, 0 < w < 1. Assuming β > 1, based on Equation (11), we have
$$\begin{aligned}H_\beta\left(T^2_{k|n}\right)-H_\beta\left(T^1_{k|n}\right) &= \frac{1}{1-\beta}\int_0^1\rho_{k|n}^\beta(w)\,\zeta_\beta(w)\,dw\\ &= \frac{1}{1-\beta}\left[\int_{A_1}\rho_{k|n}^\beta(w)\,\zeta_\beta(w)\,dw + \int_{A_2}\rho_{k|n}^\beta(w)\,\zeta_\beta(w)\,dw\right]\quad\left(\text{since }A_1\cup A_2=[0,1]\right)\\ &\geq \left[\inf_{w\in A_1}\rho_{k|n}(w)\right]^\beta\frac{1}{1-\beta}\int_{A_1}\zeta_\beta(w)\,dw + \left[\sup_{w\in A_2}\rho_{k|n}(w)\right]^\beta\frac{1}{1-\beta}\int_{A_2}\zeta_\beta(w)\,dw\\ &\geq \left[\sup_{w\in A_2}\rho_{k|n}(w)\right]^\beta\frac{1}{1-\beta}\int_0^1\zeta_\beta(w)\,dw \geq 0.\end{aligned}$$
The first inequality holds because $\zeta_\beta(w)\geq0$ on A₁ and $\zeta_\beta(w)<0$ on A₂ when β > 1. The last inequality follows directly from Equation (14). Consequently, we have $H_\beta(T^1_{k|n})\leq H_\beta(T^2_{k|n})$ for 2k ≥ n, which completes the proof for the case β > 1. The proof for the case 0 < β < 1 follows a similar argument. □
The following example serves to illustrate the practical application of the preceding proposition.
Example 2. 
Assume two consecutive 2-out-of-3 good systems with lifetimes $T^1_{2|3} = \max\left\{\min\{T_1,T_2\},\ \min\{T_2,T_3\}\right\}$ and $T^2_{2|3} = \max\left\{\min\{Z_1,Z_2\},\ \min\{Z_2,Z_3\}\right\}$, where T₁, T₂, T₃ are i.i.d. component lifetimes with common cdf $F_1(t) = 1-e^{-2t}$, t > 0, and Z₁, Z₂, Z₃ are i.i.d. component lifetimes with common cdf $F_2(t) = 1-e^{-6t}$, t > 0. We can easily confirm that $H_\beta(T) = 0.125$ and $H_\beta(Z) = 0.0416$, so $H_\beta(T)\geq H_\beta(Z)$. Additionally, since $A_1 = [0,1)$ and $A_2 = \{1\}$, we have $\inf_{A_1}\rho_{2|3}(w) = \sup_{A_2}\rho_{2|3}(w) = 0$. Thus Proposition 3 implies that $H_\beta(T^1_{2|3})\geq H_\beta(T^2_{2|3})$.

2.3. Some Bounds

In situations where closed-form expressions for Tsallis entropy are unavailable, particularly for systems with diverse lifetime distributions or a large number of components, bounding techniques offer a practical approach for approximating the entropy’s behavior over the system’s lifetime. This subsection explores the use of analytical bounds to characterize the Tsallis entropy of consecutive k-out-of-n good systems. In particular, we present the following lemma, which establishes a lower bound on the system’s Tsallis entropy. This bound provides valuable insights into the entropy structure under realistic conditions and supports a deeper understanding of system-level uncertainty.
Lemma 1. 
Consider a nonnegative continuous random variable T with pdf f and cdf F such that $M = f(m)<\infty$, where m denotes the mode of the pdf f. Then, for 2k ≥ n, we have
$$H_\beta\left(T_{k|n}\right) \geq M^{\beta-1}H_\beta\left(W_{k|n}\right) + \frac{1}{1-\beta}\left(M^{\beta-1}-1\right),\quad\text{for all }\beta\in\Theta.$$
Proof. 
By noting that $f\left(F^{-1}(w)\right)\leq M$, 0 < w < 1, for 0 < β < 1 we have $f^{\beta-1}\left(F^{-1}(w)\right)\geq M^{\beta-1}$, 0 < w < 1. Now, the identity 1 − β > 0 implies that
$$\begin{aligned}H_\beta\left(T_{k|n}\right) &= \frac{1}{1-\beta}\left(\int_0^1\rho_{k|n}^\beta(w)\,f^{\beta-1}\left(F^{-1}(w)\right)dw-1\right)\\ &\geq \frac{1}{1-\beta}\left(M^{\beta-1}\int_0^1\rho_{k|n}^\beta(w)\,dw-1\right)\\ &= \frac{M^{\beta-1}}{1-\beta}\left(\int_0^1\rho_{k|n}^\beta(w)\,dw-1\right)+\frac{M^{\beta-1}}{1-\beta}\left(1-\frac{1}{M^{\beta-1}}\right)\\ &= M^{\beta-1}H_\beta\left(W_{k|n}\right)+\frac{1}{1-\beta}\left(M^{\beta-1}-1\right),\end{aligned}$$
and hence the result. When β > 1, we have $f^{\beta-1}\left(F^{-1}(w)\right)\leq M^{\beta-1}$, 0 < w < 1. Since 1 − β < 0, using similar arguments, we obtain the result. □
The preceding lemma provides a lower bound for the Tsallis entropy of $T_{k|n}$. This bound is expressed in terms of the Tsallis entropy of a consecutive k-out-of-n good system assuming uniformly distributed component lifetimes, and it incorporates the mode value M of the original lifetime distribution.
Example 3. 
Assume a linear consecutive k-out-of-n good system with lifetime
$$T_{k|n} = \max\left\{T_{[1:k]},\,T_{[2:k+1]},\ldots,T_{[n-k+1:n]}\right\},$$
where $T_{[j:m]} = \min\{T_j,\ldots,T_m\}$ for 1 ≤ j < m ≤ n. Let further the lifetimes of the components be i.i.d. with a common mixture of two Pareto distributions with parameters β₁ and β₂ and pdf as follows:
$$f(t) = \theta\beta_1 t^{-\beta_1-1} + (1-\theta)\beta_2 t^{-\beta_2-1},\quad t\geq1,\ 0\leq\theta\leq1,\ \beta_1>\beta_2>0.$$
Given that the mode of this distribution is m = 1, we can determine the mode value M as $M = f(1) = \theta\beta_1+(1-\theta)\beta_2$. Consequently, from Lemma 1, we get
$$H_\beta\left(T_{k|n}\right) \geq \left[\theta\beta_1+(1-\theta)\beta_2\right]^{\beta-1}H_\beta\left(W_{k|n}\right) + \frac{1}{1-\beta}\left(\left[\theta\beta_1+(1-\theta)\beta_2\right]^{\beta-1}-1\right),\quad\text{for all }\beta\in\Theta.$$
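The bound in Example 3 is straightforward to evaluate numerically. The sketch below (illustrative, with hypothetical parameter choices k = 2, n = 4, β₁ = 3, β₂ = 1.5, θ = 0.4, and β = 2) computes both sides of the inequality by quadrature, using the system density of Equation (9) and the transformed density of Equation (10).

```python
import numpy as np
from scipy.integrate import quad

k, n, beta = 2, 4, 2.0
b1, b2, theta = 3.0, 1.5, 0.4               # hypothetical mixture parameters, b1 > b2

f = lambda t: theta*b1*t**(-b1-1) + (1-theta)*b2*t**(-b2-1)   # mixture pdf, t >= 1
S = lambda t: theta*t**(-b1) + (1-theta)*t**(-b2)             # mixture survival
f_sys = lambda t: f(t)*(k*(n-k+1)*S(t)**(k-1) - (k+1)*(n-k)*S(t)**k)  # Eq. (9)
rho = lambda w: k*(n-k+1)*(1-w)**(k-1) - (k+1)*(n-k)*(1-w)**k         # Eq. (10)

H_sys = (quad(lambda t: f_sys(t)**beta, 1, np.inf)[0] - 1)/(1 - beta)
H_W = (quad(lambda w: rho(w)**beta, 0, 1)[0] - 1)/(1 - beta)
M = theta*b1 + (1-theta)*b2                 # mode value M = f(1)
bound = M**(beta-1)*H_W + (M**(beta-1) - 1)/(1 - beta)
print(H_sys, bound, H_sys >= bound)         # the lower bound holds
```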
The next theorem establishes bounds for the Tsallis entropy of consecutive k-out-of-n good systems by relating it to the Tsallis entropy of the individual component lifetimes.
Theorem 4. 
When β > 1 (0 < β < 1), we have
$$H_\beta\left(T_{k|n}\right) \;\geq\;(\leq)\;\rho_{k|n}^\beta(w^\ast)\,H_\beta(T) + \frac{1}{1-\beta}\left(\rho_{k|n}^\beta(w^\ast)-1\right),$$
where $w^\ast = \frac{2n-3k+1}{(k+1)(n-k)}$.
Proof. 
The mode of $\rho_{k|n}(w)$ is clearly $w^\ast = \frac{2n-3k+1}{(k+1)(n-k)}$. As a result, we can establish that $\rho_{k|n}(w)\leq\rho_{k|n}(w^\ast)$ for 0 < w < 1. Therefore, for β > 1 (0 < β < 1), we can conclude that:
$$\begin{aligned}H_\beta\left(T_{k|n}\right) &= \frac{1}{1-\beta}\left(\int_0^1\rho_{k|n}^\beta(w)\left[f\left(F^{-1}(w)\right)\right]^{\beta-1}dw-1\right)\\ &\geq(\leq)\ \frac{1}{1-\beta}\left(\rho_{k|n}^\beta(w^\ast)\int_0^1\left[f\left(F^{-1}(w)\right)\right]^{\beta-1}dw-1\right)\\ &= \frac{\rho_{k|n}^\beta(w^\ast)}{1-\beta}\left(\int_0^1\left[f\left(F^{-1}(w)\right)\right]^{\beta-1}dw-1\right)+\frac{1}{1-\beta}\left(\rho_{k|n}^\beta(w^\ast)-1\right)\\ &= \rho_{k|n}^\beta(w^\ast)\,H_\beta(T)+\frac{1}{1-\beta}\left(\rho_{k|n}^\beta(w^\ast)-1\right),\end{aligned}$$
and hence the theorem. □
To demonstrate the bound established in Theorem 4, we now consider its application to a consecutive k-out-of-n good system.
Example 4. 
Let us consider a linear consecutive 10-out-of-18 good system with lifetime $T_{10|18} = \max\left\{T_{[1:10]},T_{[2:11]},\ldots,T_{[9:18]}\right\}$, where $T_{[j:m]} = \min\{T_j,\ldots,T_m\}$ for 1 ≤ j < m ≤ 18. To conduct the analysis, we assume that the lifetimes of the individual components in the system are i.i.d. according to a common standard exponential distribution. A simple verification shows that the optimal value $w^\ast$ equals 0.08, with corresponding value $\rho_{10|18}(w^\ast) = 4.268$. Utilizing Theorem 4, we can write
$$H_\beta\left(T_{10|18}\right) \;\geq\;(\leq)\;4.268\,H_\beta(T) + \frac{3.268}{1-\beta},\quad\text{for all }\beta>1\ (0<\beta<1).$$
The next result establishes bounds for consecutive k-out-of-n good systems based on the hazard rate function of the component lifetimes.
Proposition 4. 
Let $T_i$, i = 1, 2, …, n, be the lifetimes of the components of a consecutive k-out-of-n good system with lifetime $T_{k|n}$, having common failure rate function λ(t). If 2k ≥ n and β∈Θ, then
$$\frac{1}{1-\beta}\left(\frac{(2k-n)^{\beta-1}}{\beta}E\left[\lambda^{\beta-1}\left(T_{k|n,\beta}\right)\right]-1\right) \;\geq\; H_\beta\left(T_{k|n}\right) \;\geq\; \frac{1}{1-\beta}\left(\frac{k^{\beta-1}}{\beta}E\left[\lambda^{\beta-1}\left(T_{k|n,\beta}\right)\right]-1\right),$$
where $T_{k|n,\beta}$ has the pdf $f_{k|n,\beta}(t) = \beta f_{k|n}(t)\,S_{k|n}^{\beta-1}(t)$, for t > 0.
Proof. 
The hazard rate function of $T_{k|n}$ can be easily represented by $\lambda_{k|n}(t) = \psi_{k,n}(S(t))\,\lambda(t)$, where
$$\psi_{k,n}(z) = \frac{k(n-k+1)-(k+1)(n-k)z}{n-k+1-(n-k)z},\quad 0<z<1.$$
Since $\psi'_{k,n}(z)<0$ for 2k ≥ n and 0 < z < 1, it follows that $\psi_{k,n}(z)$ is a monotonically decreasing function of z. Since $\psi_{k,n}(0) = k$ and $\psi_{k,n}(1) = 2k-n$, we have $2k-n\leq\psi_{k,n}(S(t))\leq k$ for 0 < S(t) < 1, which implies that $(2k-n)\lambda(t)\leq\lambda_{k|n}(t)\leq k\lambda(t)$ for t > 0. Combining this result with the relationship between Tsallis entropy and the hazard rate (as defined in Equation (4)) for β > 1 (0 < β ≤ 1) completes the proof. □
We now present an illustrative example to demonstrate the application of the preceding proposition.
Example 5. 
Consider a linear consecutive 2-out-of-3 good system with lifetime $T_{2|3} = \max\left\{\min\{T_1,T_2\},\ \min\{T_2,T_3\}\right\}$, where the component lifetimes $T_i$ are i.i.d. with an exponential distribution with cdf $F(t) = 1-e^{-\lambda t}$ for t > 0. The exponential distribution has a constant hazard rate, λ(t) = λ, so it follows that $E\left[\lambda\left(T_{2|3,\beta}\right)\right] = \lambda$. Applying Proposition 4 yields the bounds on the Tsallis entropy of the system as $0.5\lambda \geq H_\beta(T_{2|3}) \geq 0.25\lambda$. Based on (11), one can compute the exact value as $H_\beta(T_{2|3}) = 0.35\lambda$, which falls within the bounds.
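A numerical check of Proposition 4 is immediate, since for exponential components $f(F^{-1}(w)) = \lambda(1-w)$ makes the exact entropy available from Equation (11). The sketch below (illustrative, using a consecutive 3-out-of-5 good system with standard exponential components and β = 2, arbitrary choices) compares the exact value with the two hazard-rate bounds.

```python
import numpy as np
from scipy.integrate import quad

k, n, lam, beta = 3, 5, 1.0, 2.0            # 2k >= n holds
rho = lambda w: k*(n-k+1)*(1-w)**(k-1) - (k+1)*(n-k)*(1-w)**k   # Eq. (10)

# exact Tsallis entropy via Eq. (11): f(F^{-1}(w)) = lam*(1 - w) for Exp(lam)
I = quad(lambda w: rho(w)**beta * (lam*(1-w))**(beta-1), 0, 1)[0]
H_exact = (I - 1)/(1 - beta)

E = lam**(beta - 1)                          # constant hazard: E[lambda^(beta-1)] = lam^(beta-1)
upper = ((2*k - n)**(beta-1)/beta*E - 1)/(1 - beta)
lower = (k**(beta-1)/beta*E - 1)/(1 - beta)
print(lower, H_exact, upper)                 # lower <= exact <= upper
```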
The next theorem holds under the condition that the expected value of the squared hazard rate function of T exists.
Theorem 5. 
Under the conditions of Proposition 4 such that $E[\lambda^2(T)]<\infty$, for 2k ≥ n and β > 1 (0 < β ≤ 1), it holds that
$$H_\beta\left(T_{k|n}\right) \;\geq\;(\leq)\;\frac{1}{1-\beta}\left(\sqrt{\Omega_{k,n,\beta}\,E\left[\lambda^{2(\beta-1)}(T)\right]}-1\right),\quad \Omega_{k,n,\beta} = \int_0^1 w^2\rho_{k|n}^4(w)\,dw.$$
Proof. 
It is not hard to see that $f_{k|n}(t) = f(t)\,\rho_{k|n}(F(t))$, while its failure rate function is given by
$$\lambda_{k|n}(t) = \frac{\lambda(t)\,S(t)\,\rho_{k|n}(F(t))}{S_{k|n}(t)},\quad\text{for }t>0.$$
Thus, by (4) and the Cauchy–Schwarz inequality, we have
$$\begin{aligned}\int_0^\infty\lambda_{k|n}^{\beta-1}(t)\,f_{k|n}(t)\,S_{k|n}^{\beta-1}(t)\,dt &= \int_0^\infty\lambda^{\beta-1}(t)\sqrt{f(t)}\cdot\sqrt{f(t)}\,F(t)\,\rho_{k|n}^2(F(t))\,dt\\ &\leq\left(\int_0^\infty\lambda^{2(\beta-1)}(t)\,f(t)\,dt\right)^{1/2}\left(\int_0^\infty\left[F(t)\,\rho_{k|n}^2(F(t))\right]^2 f(t)\,dt\right)^{1/2}\\ &= \left(E\left[\lambda^{2(\beta-1)}(T)\right]\right)^{1/2}\left(\int_0^1 w^2\rho_{k|n}^4(w)\,dw\right)^{1/2}.\end{aligned}$$
In the last equality, we use the substitution w = F ( t ) , and this completes the proof. □

3. Characterization Results

This section presents characterization results for consecutive k-out-of-n good systems based on Tsallis entropy. The analysis focuses on linear consecutive (n−i)-out-of-n good systems, with n ≥ 2i and i = 0, 1, …, ⌊n/2⌋. We begin by recalling a lemma that relies on the Müntz–Szász theorem, as presented by Kamps [35].
Lemma 3. 
For an integrable function ψ(x) on the finite interval (a, b), if $\int_a^b x^{n_j}\psi(x)\,dx = 0$ for all j ≥ 1, then ψ(x) = 0 for almost all x∈(a, b), where $\{n_j,\ j\geq1\}$ is a strictly increasing sequence of positive integers satisfying $\sum_{j=1}^\infty\frac{1}{n_j} = \infty$.
It is worth pointing out that Lemma 3 rests on a well-established result in functional analysis, stating that the set $\{x^{n_1},x^{n_2},\ldots\}$, with $1\leq n_1<n_2<\cdots$, constitutes a complete sequence. Notably, Hwang and Lin [36] expanded the scope of the Müntz–Szász theorem to the functions $\phi^{n_j}(x)$, $n_j\geq1$, where φ(x) is both absolutely continuous and monotonic over the interval (a, b).
Theorem 6. 
Let us assume two consecutive (n−i)-out-of-n good systems with lifetimes $T^1_{n-i|n}$ and $T^2_{n-i|n}$, consisting of n i.i.d. components with cdfs F₁ and F₂, and pdfs f₁ and f₂, respectively. Then F₁ and F₂ belong to the same family of distributions, but for a change in location, if and only if, for a fixed i ≥ 0,
$$H_\beta\left(T^1_{n-i|n}\right) = H_\beta\left(T^2_{n-i|n}\right),\quad\text{for all }n\geq2i,\ \text{and}\ \beta\in\Theta.$$
Proof. 
Since F₁ and F₂ are from the same location family, $F_2(y) = F_1(y-a)$ for all y ≥ a and some a∈ℝ. Thus
$$H_\beta\left(T^2_{n-i|n}\right) = \frac{1}{1-\beta}\left(\int_a^\infty f^\beta_{2,n-i|n}(y)\,dy-1\right) = \frac{1}{1-\beta}\left(\int_a^\infty f^\beta_{1,n-i|n}(y-a)\,dy-1\right) = \frac{1}{1-\beta}\left(\int_0^\infty f^\beta_{1,n-i|n}(x)\,dx-1\right) = H_\beta\left(T^1_{n-i|n}\right)\quad(\text{taking }x=y-a),$$
which implies the necessity part. For the sufficiency part, for a consecutive (n−i)-out-of-n good system, Equation (10) gives
$$\rho_{n-i|n}(w) = (n-i)(i+1)(1-w)^{n-i-1} - i(n-i+1)(1-w)^{n-i},$$
where 0 < w < 1, n ≥ 2i and 0 ≤ i ≤ ⌊n/2⌋. By the assumption $H_\beta(T^1_{n-i|n}) = H_\beta(T^2_{n-i|n})$, we can write
$$\frac{1}{1-\beta}\left(\int_0^1\rho_{n-i|n}^\beta(w)\,f_1^{\beta-1}\left(F_1^{-1}(w)\right)dw-1\right) = \frac{1}{1-\beta}\left(\int_0^1\rho_{n-i|n}^\beta(w)\,f_2^{\beta-1}\left(F_2^{-1}(w)\right)dw-1\right),$$
or equivalently
$$\int_0^1(1-w)^{(n-2i)\beta}\,\phi_{i,n,\beta}(w)\left[f_1^{\beta-1}\left(F_1^{-1}(w)\right)-f_2^{\beta-1}\left(F_2^{-1}(w)\right)\right]dw = 0,$$
where
$$\phi_{i,n,\beta}(w) = \left[(n-i)(i+1)(1-w)^{i-1} - i(n-i+1)(1-w)^{i}\right]^\beta,\quad\text{for }0<w<1.$$
Applying Lemma 3 to the complete sequence $\left\{(1-w)^{(n-2i)\beta},\ n\geq2i\right\}$ with
$$\psi(w) = \phi_{i,n,\beta}(w)\left[f_1^{\beta-1}\left(F_1^{-1}(w)\right)-f_2^{\beta-1}\left(F_2^{-1}(w)\right)\right],$$
yields $f_1^{\beta-1}\left(F_1^{-1}(w)\right) = f_2^{\beta-1}\left(F_2^{-1}(w)\right)$, a.e. w∈(0,1), or $f_1\left(F_1^{-1}(w)\right) = f_2\left(F_2^{-1}(w)\right)$, 0 < w < 1. Consequently, F₁ and F₂ are part of the same distribution family, differing only in a location shift. □
Noting that a consecutive n-out-of-n good system corresponds to a classical series system, the following corollary provides a characterization of its Tsallis entropy.
Corollary 1. 
Let $T^1_{n|n}$ and $T^2_{n|n}$ be two series systems with pdfs f₁(t) and f₂(t) and cdfs F₁(t) and F₂(t), respectively. Then F₁ and F₂ belong to the same family of distributions, but for a change in location, if and only if
$$H_\beta\left(T^1_{n|n}\right) = H_\beta\left(T^2_{n|n}\right),\quad\text{for all }n\geq1\ \text{and}\ \beta\in\Theta.$$
An additional useful characterization is presented in the following theorem.
Theorem 7. 
Under the conditions of Theorem 6, F₁ and F₂ belong to the same family of distributions, but for a change in location and scale, if and only if, for a fixed i,
$$\frac{H_\beta\left(T^1_{n-i|n}\right)}{H_\beta(T_1)} = \frac{H_\beta\left(T^2_{n-i|n}\right)}{H_\beta(T_2)},\quad\text{for all }n\geq2i,\ \text{and}\ \beta\in\Theta.$$
Proof. 
The necessity is trivial. To establish sufficiency, we leverage Equations (6) and (18) to derive
$$\frac{H_\beta\left(T^1_{n-i|n}\right)}{H_\beta(T_1)} = \frac{1}{1-\beta}\cdot\frac{1}{H_\beta(T_1)}\left(\int_0^1\rho_{n-i|n}^\beta(w)\,f_1^{\beta-1}\left(F_1^{-1}(w)\right)dw-1\right).$$
An analogous expression holds for $H_\beta(T^2_{n-i|n})/H_\beta(T_2)$. If relation (22) holds for two cdfs F₁ and F₂, then we can infer from Equation (23) that
$$\frac{1}{H_\beta(T_1)}\int_0^1\rho_{n-i|n}^\beta(w)\,f_1^{\beta-1}\left(F_1^{-1}(w)\right)dw = \frac{1}{H_\beta(T_2)}\int_0^1\rho_{n-i|n}^\beta(w)\,f_2^{\beta-1}\left(F_2^{-1}(w)\right)dw.$$
Let us set $c = H_\beta(T_2)/H_\beta(T_1)$. By similar arguments as in Theorem 6, we have
$$\int_0^1(1-w)^{(n-2i)\beta}\,\phi_{i,n,\beta}(w)\left[c\,f_1^{\beta-1}\left(F_1^{-1}(w)\right)-f_2^{\beta-1}\left(F_2^{-1}(w)\right)\right]dw = 0.$$
The proof then follows similarly to Theorem 6. □
Applying Theorem 7 yields the following corollary.
Corollary 2. 
Under the assumptions of Corollary 1, F₁ and F₂ belong to the same family of distributions, but for a change in location and scale, if and only if
$$\frac{H_\beta\left(T^1_{n|n}\right)}{H_\beta(T_1)} = \frac{H_\beta\left(T^2_{n|n}\right)}{H_\beta(T_2)},\quad\text{for all }n\geq1,\ \text{and}\ \beta\in\Theta.$$
The following theorem characterizes the exponential distribution through Tsallis entropy within the framework of consecutive k-out-of-n good systems. This result serves as the theoretical basis for a newly proposed goodness-of-fit test for exponentiality, intended to be applicable across a wide variety of datasets. To establish this characterization, we begin by introducing the lower incomplete beta function, defined as:
$$B(t;a,b) = \int_0^t x^{a-1}(1-x)^{b-1}\,dx,\quad 0<t<1,$$
where a and b are positive real numbers. When t = 1, this expression reduces to the complete beta function. We now present the main result of this section.
Theorem 8. 
Let us assume that $T_{n-i|n}$ is the lifetime of the consecutive (n−i)-out-of-n good system having n i.i.d. component lifetimes with pdf f and cdf F. Then T has an exponential distribution with parameter λ if and only if, for a fixed i ≥ 0,
$$H_\beta\left(T_{n-i|n}\right) = \frac{\left[(n-i)(i+1)\right]^{\beta(n-i+1)}}{\left[i(n-i+1)\right]^{\beta(n-i)}}\,B\!\left(\frac{i(n-i+1)}{(n-i)(i+1)};\ \beta(n-i),\ \beta+1\right)\left[\beta H_\beta(T)+\frac{\beta}{1-\beta}\right]-\frac{1}{1-\beta},$$
for all n ≥ 2i, and β∈Θ.
Proof. 
Given an exponentially distributed random variable T, its Tsallis entropy, directly calculated using (6), is $H_\beta(T) = \frac{\lambda^{\beta-1}-\beta}{\beta(1-\beta)}$. Furthermore, since $f\left(F^{-1}(w)\right) = \lambda(1-w)$, an application of Equation (11) yields:
$$\begin{aligned}H_\beta\left(T_{n-i|n}\right) &= \frac{1}{1-\beta}\left(\int_0^1\rho_{n-i|n}^\beta(w)\,f^{\beta-1}\left(F^{-1}(w)\right)dw-1\right)\\ &= \frac{\lambda^{\beta-1}}{1-\beta}\int_0^1\rho_{n-i|n}^\beta(w)(1-w)^{\beta-1}\,dw-\frac{1}{1-\beta}\\ &= \left[\beta H_\beta(T)+\frac{\beta}{1-\beta}\right]\int_0^1\rho_{n-i|n}^\beta(w)(1-w)^{\beta-1}\,dw-\frac{1}{1-\beta},\end{aligned}$$
for β > 0. To derive the remaining term, let us set A = (n−i)(i+1) and B = i(n−i+1); upon recalling (10), it holds that
$$\begin{aligned}\int_0^1\rho_{n-i|n}^\beta(w)(1-w)^{\beta-1}\,dw &= \int_0^1(1-w)^{\beta(n-i)-1}\left(A-B(1-w)\right)^\beta dw\\ &= A^\beta\int_0^1 z^{\beta(n-i)-1}\left(1-\frac{B}{A}z\right)^\beta dz\quad(\text{taking }z=1-w)\\ &= \frac{A^{\beta(n-i+1)}}{B^{\beta(n-i)}}\int_0^{B/A}u^{\beta(n-i)-1}(1-u)^\beta\,du\quad\left(\text{taking }u=\frac{B}{A}z\right)\\ &= \frac{A^{\beta(n-i+1)}}{B^{\beta(n-i)}}\,B\!\left(\frac{B}{A};\ \beta(n-i),\ \beta+1\right),\end{aligned}$$
from which the necessity is derived. To establish sufficiency, we assume that Equation (25) holds for a fixed value of i, and set $D = \beta H_\beta(T)+\frac{\beta}{1-\beta}$. Following the proof of Theorem 6 and utilizing the result in Equation (26), we obtain the relation
$$\frac{1}{1-\beta}\int_0^1\rho_{n-i|n}^\beta(w)\,f^{\beta-1}\left(F^{-1}(w)\right)dw = D\int_0^1\rho_{n-i|n}^\beta(w)(1-w)^{\beta-1}\,dw,$$
which is equivalent to
$$\int_0^1\rho_{n-i|n}^\beta(w)\left[f^{\beta-1}\left(F^{-1}(w)\right)-(1-\beta)D(1-w)^{\beta-1}\right]dw = 0,$$
where $\rho_{n-i|n}(w)$ is defined in (18). Thus, it holds that
$$\int_0^1(1-w)^{(n-2i)\beta}\,\phi_{i,n,\beta}(w)\left[f^{\beta-1}\left(F^{-1}(w)\right)-(1-\beta)D(1-w)^{\beta-1}\right]dw = 0,$$
where $\phi_{i,n,\beta}(w)$ is defined in (21). Applying Lemma 3 to the function
$$\psi(w) = \phi_{i,n,\beta}(w)\left[f^{\beta-1}\left(F^{-1}(w)\right)-(1-\beta)D(1-w)^{\beta-1}\right],$$
and utilizing the complete sequence $\left\{(1-w)^{(n-2i)\beta},\ n\geq2i\right\}$, we can deduce that
$$f\left(F^{-1}(w)\right) = \left[(1-\beta)D\right]^{\frac{1}{\beta-1}}(1-w),\quad\text{a.e. }w\in(0,1).$$
This implies that
$$\frac{dF^{-1}(w)}{dw} = \frac{1}{f\left(F^{-1}(w)\right)} = \frac{1}{\left[(1-\beta)D\right]^{\frac{1}{\beta-1}}(1-w)}.$$
Solving this equation yields $F^{-1}(w) = -\frac{\log(1-w)}{\left[(1-\beta)D\right]^{\frac{1}{\beta-1}}}+d$, where d is an arbitrary constant. Utilizing the boundary condition $\lim_{w\to0}F^{-1}(w) = 0$, it follows that d = 0. Consequently, $F^{-1}(w) = -\frac{\log(1-w)}{\left[(1-\beta)D\right]^{\frac{1}{\beta-1}}}$ for 0 < w < 1. This implies the cdf $F(t) = 1-e^{-\left[\beta(1-\beta)H_\beta(T)+\beta\right]^{\frac{1}{\beta-1}}t}$, t > 0, confirming that T follows an exponential distribution with parameter $\lambda = \left[\beta(1-\beta)H_\beta(T)+\beta\right]^{\frac{1}{\beta-1}}$. This establishes the theorem. □

4. Tsallis Entropy-Based Exponentiality Testing

In this section, we propose a nonparametric method for estimating the Tsallis entropy of consecutive k-out-of-n good systems. Given the wide applicability of the exponential distribution in reliability and lifetime modeling, numerous test statistics have been developed to assess exponentiality—many of which are grounded in core principles of statistical theory. The primary objective here is to test whether the distribution of a random variable T follows an exponential law. Let F 0 ( t ) = 1 e λ t , for t > 0 , denote the cumulative distribution function under the null hypothesis. The hypothesis to be tested is formally stated as follows:
$$H_0:\ F(t) = F_0(t),\quad\text{vs.}\quad H_1:\ F(t)\neq F_0(t).$$
Tsallis entropy of order 2, which is closely related to the notion of extropy, has recently attracted considerable attention as a useful metric for goodness-of-fit testing. Qiu and Jia [37] pioneered the development of two consistent estimators for extropy based on the concept of spacings and subsequently introduced a goodness-of-fit test for the uniform distribution using the more efficient of the two estimators. In a related contribution, Xiong et al. [38] utilized properties of classical record values to derive a characterization result for the exponential distribution, leading to the development of a novel exponentiality test. Their study outlined the test statistic in detail and demonstrated its effectiveness, particularly in small-sample scenarios. Building on this foundation, Jose and Sathar [39] proposed a new test for exponentiality based on a characterization involving the extropy of lower r-record values. Extending these developments, the present section explores the Tsallis entropy of consecutive k-out-of-n good systems. As established in Theorem 8, the exponential distribution can be uniquely characterized through the Tsallis entropy associated with such systems. Leveraging Equation (25), and following appropriate simplification, we now propose a new test statistic for exponentiality, denoted by $TS_{i,n,\beta}$ and defined for n ≥ 2i as follows:
$$TS_{i,n,\beta} = H_\beta\left(T_{n-i|n}\right)-\eta_{i,n,\beta}\left[\beta H_\beta(T)+\frac{\beta}{1-\beta}\right]+\frac{1}{1-\beta},$$
where
$$\eta_{i,n,\beta} = \frac{\left[(n-i)(i+1)\right]^{\beta(n-i+1)}}{\left[i(n-i+1)\right]^{\beta(n-i)}}\,B\!\left(\frac{i(n-i+1)}{(n-i)(i+1)};\ \beta(n-i),\ \beta+1\right),\quad\text{for all }\beta\in\Theta.$$
If n ≥ 2i, then Theorem 8 directly implies that $TS_{i,n,\beta} = 0$ if and only if T is exponentially distributed. This fundamental property establishes $TS_{i,n,\beta}$ as a viable measure of exponentiality and a suitable candidate for a test statistic. Given a random sample T₁, T₂, …, T_N, an estimator $\widehat{TS}_{i,n,\beta}$ of $TS_{i,n,\beta}$ can be used as a test statistic. Significant deviations of $\widehat{TS}_{i,n,\beta}$ from its expected value under the null hypothesis (i.e., the assumption of an exponential distribution) would indicate non-exponentiality, prompting the rejection of the null hypothesis. Consider a random sample of size N, denoted by T₁, T₂, …, T_N, drawn from an absolutely continuous distribution F, and let $T_{(1)}\leq T_{(2)}\leq\cdots\leq T_{(N)}$ denote the corresponding order statistics. To estimate the test statistic, we adopt an estimator proposed by Vasicek [40] for $\frac{dF^{-1}(w)}{dw} = 1/f\left(F^{-1}(w)\right)$, as follows:
$$\frac{dF^{-1}(w)}{dw} \approx \frac{N\left(T_{(l+m)}-T_{(l-m)}\right)}{2m},\quad\text{for all }l = m+1, m+2,\ldots,N-m,$$
where m is a positive integer smaller than N/2, known as the window size, with the conventions $T_{(l-m)} = T_{(1)}$ for l ≤ m and $T_{(l+m)} = T_{(N)}$ for l ≥ N−m. So, a reasonable estimator $\widehat{TS}_{i,n,\beta}$ can be derived using Equation (30) as follows:
$$\widehat{TS}_{i,n,\beta} = \frac{1}{N(1-\beta)}\sum_{l=1}^N\left[\frac{2m}{N\left(T_{(l+m)}-T_{(l-m)}\right)}\right]^{\beta-1}\left[\rho_{n-i|n}^\beta\left(\frac{l}{N+1}\right)-\beta\,\eta_{i,n,\beta}\right].$$
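For concreteness, a minimal implementation of this estimator is sketched below (an illustration, not the authors' code; the default window follows the ⌊0.4N⌋ recommendation discussed later, and scipy's regularized incomplete beta is rescaled to obtain the lower incomplete beta function appearing in $\eta_{i,n,\beta}$).

```python
import numpy as np
from scipy.special import betainc, beta as beta_fn

def rho(w, k, n):
    # Eq. (10): density of F(T_{k|n}) on (0, 1)
    return k*(n-k+1)*(1-w)**(k-1) - (k+1)*(n-k)*(1-w)**k

def eta(i, n, b):
    # eta_{i,n,beta} via the lower incomplete beta function
    A, B = (n-i)*(i+1), i*(n-i+1)
    a_par, b_par = b*(n-i), b + 1.0
    inc = betainc(a_par, b_par, B/A) * beta_fn(a_par, b_par)  # un-regularize
    return A**(b*(n-i+1)) / B**(b*(n-i)) * inc

def TS_hat(sample, i=1, n=3, b=2.0, m=None):
    # nonparametric test statistic; defaults give TS_hat_{1,3,2}
    T = np.sort(np.asarray(sample, dtype=float))
    N = len(T)
    if m is None:
        m = max(1, int(0.4*N))              # heuristic window size
    idx = np.arange(N)
    lo = np.clip(idx - m, 0, N-1)           # T_{(l-m)} -> T_{(1)} when l <= m
    hi = np.clip(idx + m, 0, N-1)           # T_{(l+m)} -> T_{(N)} when l >= N-m
    fhat = 2*m/(N*(T[hi] - T[lo]))          # Vasicek-type density estimate
    w = (idx + 1)/(N + 1)
    terms = fhat**(b-1)*(rho(w, n-i, n)**b - b*eta(i, n, b))
    return terms.sum()/(N*(1-b))

rng = np.random.default_rng(1)
print(TS_hat(rng.exponential(size=50)))     # close to 0 under exponentiality
print(TS_hat(rng.uniform(size=50)))         # typically farther from 0
```

Under H₀ the statistic should be near zero (sampling noise aside); Theorem 9 below guarantees convergence as N grows with m/N → 0.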
Establishing estimator consistency is a fundamental step when evaluating estimators for parametric functions. The following theorem confirms the consistency of the estimator defined in Equation (31). The proof follows an approach similar to that of Theorem 1 in Vasicek [40], who introduced a widely adopted technique for proving consistency in entropy-based statistics. This method has also been employed by Park [41] and Xiong et al. [38] to validate the reliability of their respective test statistics.
Theorem 9. 
Assume that T₁, T₂, …, T_N is a random sample of size N taken from a population with pdf f and cdf F. Also, let the variance of the random variable be finite. Then $\widehat{TS}_{i,n,\beta}\ \xrightarrow{p}\ TS_{i,n,\beta}$ as N → ∞, m → ∞, and m/N → 0, where $\xrightarrow{p}$ stands for convergence in probability, for all β∈Θ.
Proof. 
To establish the consistency of the estimator $\widehat{TS}_{i,n,\beta}$, we employ the approach given in Noughabi and Arghami [42]. As both m and N tend to infinity, with the ratio m/N approaching 0, we can approximate the density as follows:
$$\frac{2m}{N\left(T_{(l+m)}-T_{(l-m)}\right)} = \frac{F_N\left(T_{(l+m)}\right)-F_N\left(T_{(l-m)}\right)}{T_{(l+m)}-T_{(l-m)}} \approx \frac{F\left(T_{(l+m)}\right)-F\left(T_{(l-m)}\right)}{T_{(l+m)}-T_{(l-m)}} \approx f\left(T_{(l)}\right),$$
where $F_N$ represents the empirical distribution function. Furthermore, given that $\frac{l}{N+1}\approx F_N\left(T_{(l)}\right)$, we can express
$$\begin{aligned}\widehat{TS}_{i,n,\beta} &= \frac{1}{N(1-\beta)}\sum_{l=1}^N\left[\frac{2m}{N\left(T_{(l+m)}-T_{(l-m)}\right)}\right]^{\beta-1}\left[\rho_{n-i|n}^\beta\left(\frac{l}{N+1}\right)-\beta\eta_{i,n,\beta}\right]\\ &\approx \frac{1}{N(1-\beta)}\sum_{l=1}^N f^{\beta-1}\left(T_{(l)}\right)\left[\rho_{n-i|n}^\beta\left(F_N\left(T_{(l)}\right)\right)-\beta\eta_{i,n,\beta}\right]\\ &\approx \frac{1}{N(1-\beta)}\sum_{l=1}^N f^{\beta-1}\left(T_{(l)}\right)\left[\rho_{n-i|n}^\beta\left(F\left(T_{(l)}\right)\right)-\beta\eta_{i,n,\beta}\right]\\ &= \frac{1}{N(1-\beta)}\sum_{l=1}^N f^{\beta-1}(T_l)\left[\rho_{n-i|n}^\beta\left(F(T_l)\right)-\beta\eta_{i,n,\beta}\right],\end{aligned}$$
where the second approximation relies on the almost sure convergence of the empirical distribution function, i.e., $F_N\left(T_{(l)}\right)\xrightarrow{a.s.}F\left(T_{(l)}\right)$ as N → ∞. Now, applying the Strong Law of Large Numbers, we have
$$\frac{1}{N(1-\beta)}\sum_{l=1}^N f^{\beta-1}(T_l)\left[\rho_{n-i|n}^\beta\left(F(T_l)\right)-\beta\eta_{i,n,\beta}\right]\ \xrightarrow{a.s.}\ E\left[\frac{f^{\beta-1}(T)\left(\rho_{n-i|n}^\beta(F(T))-\beta\eta_{i,n,\beta}\right)}{1-\beta}\right] = \frac{1}{1-\beta}\int_0^\infty f^\beta(t)\left[\rho_{n-i|n}^\beta(F(t))-\beta\eta_{i,n,\beta}\right]dt = H_\beta\left(T_{n-i|n}\right)-\eta_{i,n,\beta}\left[\beta H_\beta(T)+\frac{\beta}{1-\beta}\right]+\frac{1}{1-\beta} = TS_{i,n,\beta}.$$
This convergence demonstrates that $\widehat{TS}_{i,n,\beta}$ is a consistent estimator of $TS_{i,n,\beta}$, and hence completes the proof of consistency for all β∈Θ. □
The root mean square error (RMSE) of the estimator $\widehat{TS}_{i,n,\beta}$ is invariant under location shifts in the random variable T, but not under scale transformations. This property is formally established in the following theorem by adapting the arguments of Ebrahimi et al. [43].
Theorem 10. 
Assume that T₁, T₂, …, T_N is a random sample of size N taken from a population with pdf f and cdf F, and let $Y_j = aT_j + b$, a > 0, b∈ℝ. Denote the estimators of $TS_{i,n,\beta}$ on the basis of the $T_j$ and the $Y_j$ by $\widehat{TS}^T_{i,n,\beta}$ and $\widehat{TS}^Y_{i,n,\beta}$, respectively. Then, the following properties apply:
(i)
$E\left[\widehat{TS}^Y_{i,n,\beta}\right] = E\left[\widehat{TS}^T_{i,n,\beta}\right]/a$,
(ii)
$\mathrm{Var}\left(\widehat{TS}^Y_{i,n,\beta}\right) = \mathrm{Var}\left(\widehat{TS}^T_{i,n,\beta}\right)/a^2$,
(iii)
$\mathrm{RMSE}\left(\widehat{TS}^Y_{i,n,\beta}\right) = \mathrm{RMSE}\left(\widehat{TS}^T_{i,n,\beta}\right)/a$, for all β∈Θ.
Proof. 
It is not hard to see from (31) that
$$\widehat{TS}^Y_{i,n,\beta} = \frac{1}{N(1-\beta)}\sum_{l=1}^N\left[\frac{2m}{N\left(Y_{(l+m)}-Y_{(l-m)}\right)}\right]^{\beta-1}\left[\rho_{n-i|n}^\beta\left(\frac{l}{N+1}\right)-\beta\eta_{i,n,\beta}\right] = \frac{1}{N(1-\beta)}\sum_{l=1}^N\left[\frac{2m}{Na\left(T_{(l+m)}-T_{(l-m)}\right)}\right]^{\beta-1}\left[\rho_{n-i|n}^\beta\left(\frac{l}{N+1}\right)-\beta\eta_{i,n,\beta}\right] = \widehat{TS}^T_{i,n,\beta}/a.$$
Hence, we complete the proof by applying the properties of the mean, variance, and RMSE to $\widehat{TS}^Y_{i,n,\beta} = \widehat{TS}^T_{i,n,\beta}/a$. □
A variety of test statistics can be constructed by selecting different combinations of n and i. For computational implementation, we consider the case n = 3 and i = 1, which simplifies the evaluation of the expression in Equation (30).
The test statistic $\widehat{TS}_{1,3,2}$ converges to zero asymptotically as the sample size N approaches infinity under the null hypothesis H₀. Conversely, under an alternative distribution with an absolutely continuous cdf F, it converges to a positive value as N → ∞. Consequently, for a finite sample size N, we reject the null hypothesis at significance level α if the observed value of $\widehat{TS}_{1,3,2}$ exceeds the critical value $\widehat{TS}_{1,3,2}(1-\alpha)$. However, the asymptotic distribution of $\widehat{TS}_{1,3,2}$ is analytically intractable due to its complex dependence on N and the window parameter m.
To address this challenge, we employed a Monte Carlo simulation approach. Specifically, 10,000 random samples of sizes N = 5, 10, 20, 30, 40, 50, 100 were generated from the standard exponential distribution under the null hypothesis. For each sample size, we computed the $(1-\alpha)$-th quantile of the simulated values of $\widehat{TS}_{1,3,2}$ to determine the critical values corresponding to significance levels α = 0.05 and 0.01, while varying the window size m from 2 to 30. Figure 2 and Figure 3 display the resulting critical values for each sample size and significance level; the critical values of the proposed test statistic $\widehat{TS}_{1,3,2}$ at the significance level α = 0.01 (Figure 3) serve as the basis for evaluating the power performance through Monte Carlo simulations.
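A sketch of this critical-value simulation (reusing the TS_hat function from the earlier sketch, and a subset of the sample sizes for brevity) is:

```python
import numpy as np

rng = np.random.default_rng(2)
for N in (10, 20, 50):
    # null distribution of the statistic under standard exponential samples
    stats = np.array([TS_hat(rng.exponential(size=N)) for _ in range(10_000)])
    for alpha in (0.05, 0.01):
        # the (1 - alpha)-quantile is the critical value
        print(N, alpha, round(np.quantile(stats, 1 - alpha), 4))
```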

Power Comparisons

The power of the test statistic $\widehat{TS}_{1,3,2}$ was evaluated through a Monte Carlo simulation involving nine alternative probability distributions. For each specified sample size N, 10,000 replicates of size N were drawn from each alternative distribution, and the value of $\widehat{TS}_{1,3,2}$ was computed for each replicate. The empirical power at a given significance level α was estimated as the proportion of test statistics exceeding the corresponding critical value. To assess the efficiency of the newly proposed test based on $\widehat{TS}_{1,3,2}$, its performance was benchmarked against several well-established tests for exponentiality reported in the literature. The specifications of the alternative distributions considered are summarized in Table 2.
The simulation setup, including the selection of alternative distributions and their associated parameters, closely follows the framework proposed by Jose and Sathar [39]. To evaluate the effectiveness of the newly proposed test based on the statistic T S ^ 1 , 3 , 2 , its performance is compared with several well-established tests for exponentiality documented in the literature. A summary of these comparative tests is provided in Table 3.
The performance of the test statistic $\widehat{TS}_{1,3,2}$ is influenced by the choice of window size m, making it necessary to determine an appropriate value in advance to ensure adequate statistical power. Simulation results across various sample sizes led to the empirical recommendation $m = \lfloor 0.4N\rfloor$, where $\lfloor x\rfloor$ denotes the floor function. This heuristic formula offers a practical guideline for selecting m and aims to ensure robust power performance across a range of alternative distributions. To comprehensively evaluate the performance of the proposed $\widehat{TS}_{1,3,2}$ test, we selected ten established tests for exponentiality and assessed their power against a diverse set of alternative distributions. Notably, Xiong et al. [38] proposed a test based on the Tsallis entropy of classical record values, while Jose and Sathar [39] introduced a test statistic using Tsallis entropy derived from lower r-records as a characterization of the exponential distribution.
The two tests referred to as D₉ and D₁₀ in Table 4 are included in our comparative analysis due to their basis in information-theoretic principles. The original authors provided extensive justification for their use in testing exponentiality, highlighting their theoretical soundness and practical applicability. To estimate the power of each test, we simulated 10,000 independent samples for each sample size N ∈ {10, 20, 50} from each alternative distribution specified in Table 4. The power of the proposed $\widehat{TS}_{1,3,2}$ test was then computed at the 5% significance level. Subsequently, power values for both $\widehat{TS}_{1,3,2}$ and the competing tests were obtained using the same simulation framework. A summary of the comparative results is presented in Table 4.
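The power computation follows the same pattern as the critical-value simulation. The following sketch (illustrative, reusing TS_hat from the earlier sketch and taking a Weibull shape of 1.4 as a hypothetical stand-in for one row of Table 4) estimates the empirical power of $\widehat{TS}_{1,3,2}$ at the 5% level.

```python
import numpy as np

rng = np.random.default_rng(3)
N, reps = 20, 10_000
null_stats = [TS_hat(rng.exponential(size=N)) for _ in range(reps)]
crit = np.quantile(null_stats, 0.95)                 # 5% critical value
alt_stats = [TS_hat(rng.weibull(1.4, size=N)) for _ in range(reps)]
print(np.mean(np.array(alt_stats) > crit))           # empirical power estimate
```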
Overall, the test statistic T S ^ 1 , 3 , 2 exhibits strong discriminatory power in detecting departures from exponentiality in the direction of the gamma distribution. In contrast, its performance against other alternatives, such as the Weibull, uniform, half-normal, and log-normal distributions, is more moderate, reflecting a balanced sensitivity without displaying either pronounced strength or notable limitations.

5. Conclusions

This study has investigated the utility of Tsallis entropy as a flexible and informative measure of uncertainty within the reliability framework of consecutive k-out-of-n good systems. A central contribution lies in establishing a meaningful relationship between the Tsallis entropy of such systems, under general continuous lifetime distributions, and their counterparts governed by the uniform distribution. Given the analytical complexity involved in deriving closed-form entropy expressions, especially for systems with large n or heterogeneous component behaviors, we derived a suite of informative bounds. These approximations not only facilitate practical computation but also deepen the theoretical understanding of entropy dynamics in complex system structures. Additionally, we proposed a nonparametric estimator specifically tailored to the structure of consecutive k-out-of-n systems. Its consistency and performance were validated through simulation studies and empirical applications. This estimator provides a valuable tool for quantifying system-level uncertainty and supports broader applications such as statistical inference and pattern recognition, including image processing and reliability-centered decision-making. In summary, this work contributes to the growing literature on information-theoretic measures in reliability by (i) establishing theoretical foundations that link Tsallis entropy to system reliability behavior; (ii) introducing practical bounding techniques to overcome analytical intractability; and (iii) developing a robust entropy-based estimator suitable for practical use.
Despite these advancements, several promising avenues remain open for further exploration: (i) The current analysis assumes independent and identically distributed components. Extending the framework to systems with dependent or heterogeneous lifetimes, such as those governed by copula-based models or frailty structures, would significantly broaden applicability. (ii) Investigating Tsallis entropy in more general system configurations (e.g., coherent systems, phased-mission systems, or dynamic networks) could yield new insights into uncertainty and resilience. (iii) Developing online or censored-data versions of the Tsallis entropy estimator would enhance its relevance in real-world reliability monitoring and predictive maintenance applications. (iv) Leveraging entropy measures to guide optimal system design (e.g., maximizing reliability for a fixed entropy budget) represents a novel and practically important direction. (v) A systematic comparison between Tsallis, Rényi, and cumulative residual entropy within the same system contexts may reveal cases where one measure is superior in inference, diagnostics, or optimization. These directions highlight the richness of Tsallis entropy as both a theoretical construct and a practical tool in reliability analysis and statistical modeling.

Author Contributions

A.A.A.: Visualization, validation, software, resources, investigation, and editing, writing—original draft, and conceptualization; G.A.: visualization, investigation, validation, resources, and conceptualization; M.K.: writing—review and editing, writing—original draft, visualization, validation, software, resources, funding acquisition, data curation, and conceptualization. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

This study did not involve human participants or animals.

Data Availability Statement

The datasets are available within the manuscript.

Acknowledgments

The authors gratefully thank the Academic Editor and the three anonymous reviewers for their insightful and constructive feedback, which has greatly enhanced the quality and clarity of this manuscript.

Conflicts of Interest

The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. Jung, K.H.; Kim, H. Linear consecutive-k-out-of-n:F system reliability with common-mode forced outages. Reliab. Eng. Syst. Saf. 1993, 41, 49–55. [Google Scholar] [CrossRef]
  2. Shen, J.; Zuo, M.J. Optimal design of series consecutive-k-out-of-n:G systems. Reliab. Eng. Syst. Saf. 1994, 45, 277–283. [Google Scholar] [CrossRef]
  3. Kuo, W.; Zuo, M.J. Optimal Reliability Modeling: Principles and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  4. Chung, C.I.H.; Cui, L.; Hwang, F.K. Reliabilities of Consecutive-k Systems; Springer: Berlin/Heidelberg, Germany, 2013; Volume 4. [Google Scholar]
  5. Boland, P.J.; Samaniego, F.J. Stochastic ordering results for consecutive k-out-of-n:F systems. IEEE Trans. Reliab. 2004, 53, 7–10. [Google Scholar] [CrossRef]
  6. Eryılmaz, S. Mixture representations for the reliability of consecutive-k systems. Math. Comput. Model. 2010, 51, 405–412. [Google Scholar] [CrossRef]
  7. Eryılmaz, S. Conditional lifetimes of consecutive k-out-of-n systems. IEEE Trans. Reliab. 2010, 59, 178–182. [Google Scholar] [CrossRef]
  8. Eryılmaz, S. Reliability properties of consecutive k-out-of-n systems of arbitrarily dependent components. Reliab. Eng. Syst. Saf. 2009, 94, 350–356. [Google Scholar] [CrossRef]
  9. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  10. Havrda, J.; Charvát, F. Quantification method of classification processes: Concept of structural α-entropy. Kybernetika 1967, 3, 30–35. [Google Scholar]
  11. Patil, G.P.; Taillie, C. Diversity as a concept and its measurement. J. Am. Stat. Assoc. 1982, 77, 548–567. [Google Scholar] [CrossRef]
  12. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  13. Kalbfleisch, J.D.; Prentice, R.L. The Statistical Analysis of Failure Time Data; Wiley: New York, NY, USA, 2011; Volume 360. [Google Scholar]
  14. Lawless, J.F. Statistical Models and Methods for Lifetime Data, Wiley Series in Probability and Statistics; Wiley-Interscience: New York, NY, USA, 2003. [Google Scholar]
  15. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 1991. [Google Scholar]
  16. Vila, M.; Bardera, A.; Feixas, M.; Sbert, M. Tsallis mutual information for document classification. Entropy 2011, 13, 1694–1707. [Google Scholar]
  17. Wong, K.M.; Chen, S. The entropy of ordered sequences and order statistics. IEEE Trans. Inf. Theory 1990, 36, 276–284. [Google Scholar] [CrossRef]
  18. Park, S. The entropy of consecutive order statistics. IEEE Trans. Inf. Theory 1995, 41, 2003–2007. [Google Scholar] [CrossRef]
  19. Ebrahimi, N.; Soofi, E.S.; Soyer, R. Information measures in perspective. Int. Stat. Rev. 2010, 78, 383–412. [Google Scholar] [CrossRef]
  20. Zarezadeh, S.; Asadi, M. Results on residual Rényi entropy of order statistics and record values. Inf. Sci. 2010, 180, 4195–4206. [Google Scholar] [CrossRef]
  21. Toomaj, A.; Doostparast, M. A note on signature-based expressions for the entropy of mixed r-out-of-n systems. Naval Res. Logist. 2014, 61, 202–206. [Google Scholar] [CrossRef]
  22. Toomaj, A. Rényi entropy properties of mixed systems. Commun. Stat. Theory Methods 2017, 46, 906–916. [Google Scholar] [CrossRef]
  23. Mesfioui, M.; Kayid, M.; Shrahili, M. Rényi entropy of the residual lifetime of a reliability system at the system level. Axioms 2023, 12, 320. [Google Scholar] [CrossRef]
  24. Alomani, G.; Kayid, M. Further properties of Tsallis entropy and its application. Entropy 2023, 25, 199. [Google Scholar] [CrossRef] [PubMed]
  25. Baratpour, S.; Khammar, A.H. Results on Tsallis entropy of order statistics and record values. İstatistik J. Turk. Stat. Assoc. 2016, 8, 60–73. [Google Scholar]
  26. Kumar, V. Some results on Tsallis entropy measure and k-record values. Physica A 2016, 462, 667–673. [Google Scholar] [CrossRef]
  27. Kayid, M.; Alshehri, M.A. Shannon differential entropy properties of consecutive k-out-of-n:G systems. Oper. Res. Lett. 2024, 57, 107190. [Google Scholar] [CrossRef]
  28. Kayid, M.; Shrahili, M. Information properties of consecutive systems using fractional generalized cumulative residual entropy. Fractal Fract. 2024, 8, 568. [Google Scholar] [CrossRef]
  29. Tsallis, C.; Gell-Mann, M.; Sato, Y. Asymptotically scale-invariant occupancy of phase space makes the entropy Sq extensive. Proc. Natl. Acad. Sci. USA 2005, 102, 15377–15382. [Google Scholar] [CrossRef]
  30. Hanel, R.; Thurner, S.; Gell-Mann, M. Generalized entropies and the transformation group of superstatistics. Proc. Natl. Acad. Sci. USA 2011, 108, 6390–6394. [Google Scholar] [CrossRef]
  31. Hanel, R.; Thurner, S.; Gell-Mann, M. Generalized entropies and logarithms and their duality relations. Proc. Natl. Acad. Sci. USA 2012, 109, 19151–19154. [Google Scholar] [CrossRef]
  32. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  33. Bagai, I.; Kochar, S.C. On tail-ordering and comparison of failure rates. Commun. Stat. Theory Methods 1986, 15, 1377–1388. [Google Scholar] [CrossRef]
  34. Navarro, J.; Eryılmaz, S. Mean residual lifetimes of consecutive-k-out-of-n systems. J. Appl. Probab. 2007, 44, 82–98. [Google Scholar] [CrossRef]
  35. Kamps, U. Characterizations of distributions by recurrence relations and identities for moments of order statistics. In Handbook of Statistics; Elsevier: Amsterdam, The Netherlands, 1998; Volume 16, pp. 291–311. [Google Scholar]
  36. Hwang, J.S.; Lin, G.D. On a generalized moment problem. II. Proc. Amer. Math. Soc. 1984, 91, 577–580. [Google Scholar] [CrossRef]
  37. Qiu, G.; Jia, K. Extropy estimators with applications in testing uniformity. J. Nonparametr. Stat. 2018, 30, 182–196. [Google Scholar] [CrossRef]
  38. Xiong, P.; Zhuang, W.; Qiu, G. Testing exponentiality based on the extropy of record values. J. Appl. Stat. 2022, 49, 782–802. [Google Scholar] [CrossRef] [PubMed]
  39. Jose, J.; Sathar, E.I.A. Characterization of exponential distribution using extropy based on lower k-records and its application in testing exponentiality. J. Comput. Appl. Math. 2022, 402, 113816. [Google Scholar] [CrossRef]
  40. Vasicek, O. A test for normality based on sample entropy. J. R. Stat. Soc. Ser. B Stat. Methodol. 1976, 38, 54–59. [Google Scholar] [CrossRef]
  41. Park, S. A goodness-of-fit test for normality based on the sample entropy of order statistics. Stat. Probab. Lett. 1999, 44, 359–363. [Google Scholar] [CrossRef]
  42. Noughabi, H.A.; Arghami, N.R. Testing exponentiality based on characterizations of the exponential distribution. J. Stat. Comput. Simul. 2011, 81, 1641–1651. [Google Scholar] [CrossRef]
  43. Ebrahimi, N.; Pflughoeft, K.; Soofi, E.S. Two measures of sample entropy. Stat. Probab. Lett. 1994, 20, 225–234. [Google Scholar] [CrossRef]
  44. Fortiana, J.; Grané, A. A scale-free goodness-of-fit statistic for the exponential distribution based on maximum correlations. J. Stat. Plann. Inference 2002, 108, 85–97. [Google Scholar] [CrossRef]
  45. Choi, B.; Kim, K.; Song, S.H. Goodness-of-fit test for exponentiality based on Kullback-Leibler information. Commun. Stat. Simul. Comput. 2004, 33, 525–536. [Google Scholar] [CrossRef]
  46. Mimoto, N.; Zitikis, R. The Atkinson index, the Moran statistic, and testing exponentiality. J. Jpn. Stat. Soc. 2008, 38, 187–205. [Google Scholar] [CrossRef]
  47. Volkova, K.Y. On asymptotic efficiency of exponentiality tests based on Rossberg’s characterization. J. Math. Sci. 2010, 167, 489–493. [Google Scholar] [CrossRef]
  48. Zamanzade, E.; Arghami, N.R. Goodness-of-fit test based on correcting moments of modified entropy estimator. J. Stat. Comput. Simul. 2011, 81, 2077–2093. [Google Scholar] [CrossRef]
  49. Baratpour, S.; Rad, A.H. Testing goodness-of-fit for exponential distribution based on cumulative residual entropy. Commun. Stat. Theory Methods 2012, 41, 1387–1396. [Google Scholar] [CrossRef]
  50. Noughabi, H.A.; Arghami, N.R. Goodness-of-fit tests based on correcting moments of entropy estimators. Commun. Stat. Simul. Comput. 2013, 42, 499–513. [Google Scholar] [CrossRef]
  51. Volkova, K.Y.; Nikitin, Y.Y. Exponentiality tests based on Ahsanullah’s characterization and their efficiency. J. Math. Sci. 2015, 204, 42–54. [Google Scholar] [CrossRef]
  52. Torabi, H.; Montazeri, N.H.; Grané, A. A wide review on exponentiality tests and two competitive proposals with application on reliability. J. Stat. Comput. Simul. 2018, 88, 108–139. [Google Scholar] [CrossRef]
Figure 1. The plot of $H_\beta(T_{2|4})$ with respect to $\beta$ and $\gamma$, as demonstrated in Example 1.
Figure 2. Critical values of the $\widehat{TS}_{1,3,2}$ statistic at significance level $\alpha = 0.05$.
Figure 3. Critical values of the $\widehat{TS}_{1,3,2}$ statistic at significance level $\alpha = 0.01$.
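Figures 2 and 3 report simulated critical values. As a rough illustration of how such thresholds are typically obtained, the sketch below estimates an upper-tail critical value by Monte Carlo under the Exp(1) null; here `ts_statistic` is a hypothetical stand-in for the $\widehat{TS}_{1,3,2}$ statistic defined earlier in the paper, and the upper-tail rejection rule is an assumption.

```python
import numpy as np

def critical_value(ts_statistic, n, alpha=0.05, reps=10_000, seed=0):
    """Monte Carlo critical value under the Exp(1) null.

    ts_statistic is a hypothetical callable mapping a sample to a scalar;
    scale invariance of the test is assumed, so sampling Exp(1) suffices.
    """
    rng = np.random.default_rng(seed)
    stats = [ts_statistic(rng.exponential(1.0, n)) for _ in range(reps)]
    return float(np.quantile(stats, 1.0 - alpha))  # upper-tail rejection assumed
```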
Table 1. Notational Conventions and Their Definitions in the Context of Entropy Measures.

Symbol | Conceptual Definition | Location of Mathematical Definition | Alternative Notations in the Literature
$S$ | Reliability function | N/A | $\bar{F}$, $\bar{H}$
$S_{k|n}$ | Reliability function of the consecutive k-out-of-n good system | Equation (1), before Equation (2) | $S_{k|n:G}$, $\bar{F}_{k|n:G}$, $\bar{H}_{k|n:G}$
$H$ | Shannon entropy | Equation (3), before Equation (4), after Equation (2) | $S$, $h$, $H$
$H_\beta$ | Tsallis entropy | Equation (2), before Equation (3), after Equation (1) | $S_q$, $S_\beta$, $S_\alpha$, $H_\beta$, $H_\alpha$
$R_\beta$ | Rényi entropy | Equation (6), after Equation (5) | $S_\beta$, $S_\alpha$, $H_\beta$, $H_\alpha$
Table 2. Alternative Probability Distributions for Evaluating the Power of the Test Statistic.

Distribution | Probability Density Function | Support | Notation
Weibull | $f(t) = \frac{\alpha}{\beta}\left(\frac{t}{\beta}\right)^{\alpha - 1} e^{-(t/\beta)^{\alpha}}$ | $t > 0$, $\alpha, \beta > 0$ | $W(\alpha, \beta)$
Gamma | $f(t) = \frac{1}{\beta^{\alpha}\Gamma(\alpha)}\, t^{\alpha - 1} e^{-t/\beta}$ | $t > 0$, $\alpha, \beta > 0$ | $G(\alpha, \beta)$
Uniform | $f(t) = \frac{1}{\beta - \alpha}$ | $\alpha \leq t \leq \beta$ | $U(\alpha, \beta)$
Half-Normal | $f(t) = \frac{\sqrt{2}}{\lambda\sqrt{\pi}}\, e^{-t^{2}/(2\lambda^{2})}$ | $t > 0$, $\lambda > 0$ | $HN(\lambda)$
Log-Normal | $f(t) = \frac{1}{t\lambda\sqrt{2\pi}}\, e^{-(\ln t - \mu)^{2}/(2\lambda^{2})}$ | $t > 0$, $\lambda > 0$, $\mu \in \mathbb{R}$ | $LN(\mu, \lambda)$
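When reproducing the power study, the alternatives in Table 2 can be simulated directly with standard NumPy generators; a minimal sketch follows (the parameter values shown match entries appearing in Table 4):

```python
import numpy as np

rng = np.random.default_rng(2025)
n = 20  # one of the sample sizes used in Table 4

alternatives = {
    "W(1.4, 1)":  1.0 * rng.weibull(1.4, n),        # Weibull: shape 1.4, scale 1
    "G(0.4, 1)":  rng.gamma(0.4, 1.0, n),           # Gamma: shape 0.4, scale 1
    "U(0, 1)":    rng.uniform(0.0, 1.0, n),         # Uniform on (0, 1)
    "HN(1)":      np.abs(rng.normal(0.0, 1.0, n)),  # Half-normal via |N(0, 1)|
    "LN(0, 0.8)": rng.lognormal(0.0, 0.8, n),       # Log-normal: mu = 0, sigma = 0.8
}
```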
Table 3. Competing Tests for Exponentiality.

Test | Reference | Notation
1 | Fortiana and Grané [44] | $D_1$
2 | Choi et al. [45] | $D_2$
3 | Mimoto and Zitikis [46] | $D_3$
4 | Volkova [47] | $D_4$
5 | Zamanzade and Arghami [48] | $D_5$
6 | Baratpour and Rad [49] | $D_6$
7 | Noughabi and Arghami [50] | $D_7$
8 | Volkova and Nikitin [51] | $D_8$
9 | Torabi et al. [52] | $D_9$
10 | Xiong et al. [38] | $D_{10}$
11 | Jose and Sathar [39] | $D_{11}$
Table 4. Power comparisons of the tests at the significance level $\alpha = 0.05$.

n | $H_1$ | $D_1$ | $D_2$ | $D_3$ | $D_4$ | $D_5$ | $D_6$ | $D_7$ | $D_8$ | $D_9$ | $D_{10}$ | $D_{11}$ | $\widehat{TS}_{1,3,2}$
10 G ( 1 , 1 ) 555555555555
G ( 0.4 ,   1 ) 3411504629705065608375
W ( 1.4 ,   1 ) 1517161316232916115173
H N ( 1 ) 1110108108201011885
U ( 0 ,   1 ) 422831153360512429310010
L N ( 0 ,   0.8 ) 12241621231726192552
L N ( 0 ,   1.4 ) 361940629151947323
20 G ( 1 ,   1 ) 555555556555
G ( 0.4 ,   1 ) 56317780662508289959996
W ( 1.4 ,   1 ) 322734271733472962982
H N ( 1 ) 23141912730231423754
U ( 0 ,   1 ) 86526328549280391810010012
L N ( 0 ,   0.8 ) 18522648421849458440
L N ( 0 ,   1.4 ) 61456711644301671032
50 G ( 1 ,   1 ) 555555555555
G ( 0.4 ,   1 ) 897899999568099100100100100
W ( 1.4 ,   1 ) 735579655628067375670
H N ( 1 ) 592352251544828137322
U ( 0 ,   1 ) 1009198626210099727810010024
L N ( 0 ,   0.8 ) 269344935524849247330
L N ( 0 ,   1.4 ) 93859525938502995020
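The power entries in Table 4 are rejection percentages estimated by repeated sampling under each alternative; a minimal sketch of that loop, reusing the hypothetical `ts_statistic` and the `critical_value` helper sketched after Figure 3:

```python
import numpy as np

def estimated_power(ts_statistic, sampler, n, crit, reps=10_000, seed=1):
    """Percentage of simulated samples rejected by the upper-tail rule."""
    rng = np.random.default_rng(seed)
    hits = sum(ts_statistic(sampler(rng, n)) > crit for _ in range(reps))
    return 100.0 * hits / reps

# Hypothetical usage: power against W(1.4, 1) with n = 20 at the 5% level.
# crit = critical_value(ts_statistic, n=20, alpha=0.05)
# estimated_power(ts_statistic, lambda rng, n: rng.weibull(1.4, n), 20, crit)
```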